\begin{document}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \theoremstyle{plain} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{lemma}[theorem]{\bf Lemma} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{definition}[theorem]{\bf Definition} \newtheorem{remark}[theorem]{\it Remark}
\def\cE{{\mathcal E}} \def\cH{{\mathcal H}} \def\cN{{\mathcal N}}
\def\R{{\mathbb R}} \def\C{{\mathbb C}}
\let\ge\geqslant \let\le\leqslant \let\geq\geqslant \let\leq\leqslant
\title[{Global estimates of resonances for 1D Dirac operators}] {Global estimates of resonances for 1D Dirac operators}
\date{\today}
\author[Evgeny Korotyaev]{Evgeny L. Korotyaev}
\address{Mathematical Physics Department, Faculty of Physics, St. Petersburg State University, Ulianovskaya 2, St. Petersburg, 198904, Russia, and Pushkin Leningrad State University, Russia}
\email{[email protected]}
\subjclass{} \keywords{Resonances, 1D Dirac}
\begin{abstract} \noindent We discuss resonances for 1D massless Dirac operators with compactly supported potentials on the line. We estimate the sum of negative powers of all resonances in terms of the norm of the potential and the diameter of its support.
\noindent {\bf Keywords:} Resonances, 1D Dirac operator, estimates \end{abstract}
\maketitle
\vskip 0.25cm \section {Introduction and main results} \setcounter{equation}{0}
In this paper we obtain global estimates of resonances in terms of the potential for massless Dirac operators $H$ acting in $L^2(\R)\oplus L^2(\R)$ and given by
$$
H=-iJ{d\over dx}+V, \qquad J=\begin{pmatrix} 1&0\\ 0&-1\end{pmatrix}, \qquad V=\begin{pmatrix} 0&q\\ \overline q&0\end{pmatrix}.
$$
Here $q$ is a complex-valued, integrable function with compact support $\mathop{\mathrm{supp}}\nolimits q\subset [0,\gamma]$ for some $\gamma>0$. It is well known that the operator $H$ is self-adjoint (see Theorem 3.2 in \cite{LM03}) and that the spectrum of $H$ is purely absolutely continuous and covers the real line (see \cite{DEGM82}).
{\bf Below we consider all functions and the resolvent in the upper half-plane $\C_+$ and obtain their analytic extensions into the whole complex plane $\C$.} Note that we could equally consider all functions and the resolvent in the lower half-plane $\C_-$ and obtain their analytic extensions into $\C$. The Riemann surface of the resolvent for the Dirac operator consists of two disconnected sheets, each a copy of $\C$. In the case of the Schr\"odinger operator the corresponding Riemann surface is that of the function $\sqrt\lambda$.
We consider the fundamental solutions $\psi^{\pm}$ of the Dirac equation
\begin{equation} \label{1.2}
-iJf'+V(x)f=\lambda f
\end{equation}
under the following conditions:
\begin{equation}
\psi^{\pm}(x,\lambda)=e^{\pm i\lambda x}e_{\pm},\quad x>\gamma; \qquad\qquad \varphi^{\pm}(x,\lambda)=e^{\pm i\lambda x}e_{\pm},\quad x<0,
\end{equation}
where $e_+=(1,0)$ and $e_-=(0,1)$. The scattering matrix $S(\lambda)$ for the pair $H$ and $H_0=-iJ{d\over dx}$ has the form
\begin{equation} \label{Sm}
S(\lambda)={1\over a}\begin{pmatrix} 1&-\overline b\\ b&1\end{pmatrix}(\lambda), \qquad \lambda\in\R,
\end{equation}
see e.g. \cite{DEGM82}. Here $1/a$ is the transmission coefficient and $-\overline b/a$ (resp. $b/a$) is the right (resp. left) reflection coefficient.
We have
\begin{equation} \label{5.11}
a(\lambda)=\mathop{\mathrm{det}}\nolimits(\psi^+,\varphi^-)=\psi_1^{+}(0,\lambda), \qquad\qquad b(\lambda)=-\psi_1^{-}(0,\lambda).
\end{equation}
The function $a(\lambda)$ is analytic in the upper half-plane $\C_+$ and has an analytic extension to the whole complex plane $\C$. All zeros of $a$ lie in $\C_-$ (on the so-called non-physical sheet). We denote by $(\lambda_n)_1^{\infty}$ the sequence of zeros of $a$ (multiplicities counted by repetition), arranged so that
$0<|\lambda_1|\le |\lambda_2|\le |\lambda_3|\le \dots$.
By definition, a zero $\lambda_n\in \C_-$ of $a$ is a resonance. The multiplicity of a resonance is the multiplicity of the corresponding zero of $a$.
We recall some facts from \cite{IK12}. Let $R_0(\lambda)=(H_0-\lambda)^{-1}$ and $\mathfrak{F}(\lambda)=V_1R_0(\lambda)|q|^{1/2}$, where $V=V_1|q|^{1/2}$. For $\mathop{\mathrm{Im}}\nolimits \lambda\ne 0$ the operator $\mathfrak{F}(\lambda)$ is Hilbert-Schmidt, but not of trace class. In this case we define the modified Fredholm determinant $D(\lambda)$ by
\begin{equation}
D(\lambda)=\mathop{\mathrm{det}}\nolimits\left[(I+\mathfrak{F}(\lambda))e^{-\mathfrak{F}(\lambda)}\right], \qquad \lambda\in \C_+.
\end{equation}
We now formulate some results about resonances from \cite{IK12}: {\it The determinant $D(\lambda)$ is analytic in $\C_+$ and has an analytic extension into the whole complex plane $\C$, and $D=a$. Thus all zeros of $D$ are zeros of $a$, lie in $\C_-$, and satisfy
\begin{equation} \label{ik}
\cN(r,D)={2r\over \pi}(\gamma+o(1)) \qquad \text{as} \qquad r\to\infty,
\end{equation}
where $\gamma>0$ is the diameter of the support of the potential $q$. Here we denote by $\cN (r,f)$ the number of zeros of a function $f$ of modulus $\le r$, each zero counted according to its multiplicity.} We can now state our main result.
\begin{theorem} \label{T1} Let the potential $q\in L^1(\R)$ satisfy $\mathop{\mathrm{supp}}\nolimits q\subset [0,\gamma]$, but lie in no smaller interval. Then for each $p>1$ the following estimate holds true:
\begin{equation} \label{r}
\sum_{\mathop{\mathrm{Im}}\nolimits \lambda_n<0} {1\over|\lambda_n-i|^{p}}\le {C Y_p\over\log 2}\biggl({4\gamma\over\pi}+\int_\R |q(x)|dx\biggr),
\end{equation}
where $C\le 2^5$ is an absolute constant and $Y_p=\int_\R (1+x^2)^{-p/2}dx$, $p>1$.
\end{theorem}
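In particular, for $p=2$ we have $Y_2=\pi$ (see \eqref{Yp} below), and the estimate \eqref{r} takes the explicit form
$$
\sum_{\mathop{\mathrm{Im}}\nolimits \lambda_n<0} {1\over|\lambda_n-i|^{2}}\le {\pi C\over\log 2}\biggl({4\gamma\over\pi}+\int_\R |q(x)|dx\biggr).
$$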
{\bf Remark.} 1) If $\gamma+\int_\R |q(x)|dx\to 0$, then all resonances go to infinity.

2) The proof is based on an analysis of the function $a$ and on Carleson measure arguments \cite{C58}, \cite{C62}. We use harmonic analysis and Carleson's theorem on Carleson measures (see Theorems 1.56 and 2.3.9 in \cite{G81} and references therein). In fact, we use the approach from \cite{K12}, where estimates of resonances in terms of the norm of the potential for 1D Schr\"odinger operators were obtained. Note that in the case of the Dirac operator we obtain the sharper estimate \eqref{r}.

3) $C$ is the absolute constant from Carleson's theorem (\cite{C58}, \cite{C62}; see Theorems 1.56 and 2.3.9 in \cite{G81}), see also \eqref{Ce}.

4) In fact, the estimate \eqref{r} gives a new global property of resonance stability.
5) The function $Y_p$ is strictly monotone on $(1,\infty)$ and satisfies
\begin{equation} \label{Yp}
Y_2=\pi, \qquad Y_p=\begin{cases} {\sqrt{2\pi}\over\sqrt p}(1+o(1)) & \text{as}\ p\to \infty\\
{2+o(1)\over p-1} & \text{as}\ p\to 1\end{cases}.
\end{equation}
These properties of the function $Y_p$ are discussed in \cite{K12}; in particular, the asymptotics \eqref{Yp} are proved there. Thus we can control the right-hand side of \eqref{r} both as $p\to 1$ and as $p\to \infty$. Note that we take $p>1$, since the asymptotics \eqref{ik} imply the simple fact
$\sum_{n\ge 1} {1\over|\lambda_n|}=\infty$, see p.~17 in \cite{L93}.
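The behaviour of $Y_p$ as $p\to 1$ in \eqref{Yp} is governed by the tails of the integrand: for $|x|\ge 1$ one has $(1+x^2)^{-p/2}\le |x|^{-p}$, and
$$
2\int_1^\infty {dx\over x^{p}}={2\over p-1},
$$
which is precisely the leading term of \eqref{Yp} as $p\to 1$.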
Resonances in the multidimensional case were studied by Melrose, Sj\"ostrand, Zworski and others, see \cite{M83}, \cite{Z89}, \cite{SZ91} and references therein. We discuss the one-dimensional case. Many papers are devoted to resonances for the 1D Schr\"odinger operator, see Froese \cite{F97}, Simon \cite{S00}, Zworski \cite{Z87}, \cite{K11} and references therein. We recall that Zworski \cite{Z87} obtained the first results about the asymptotic distribution of resonances for the Schr\"odinger operator with compactly supported potentials on the real line. Different properties of resonances were determined in \cite{H99}, \cite{S00}, \cite{Z87} and \cite{K04}, \cite{K05}, \cite{K11}. Inverse problems (characterization, recovering, uniqueness) in terms of resonances were solved by Korotyaev for the Schr\"odinger operator with a compactly supported potential on the real line \cite{K05} and the half-line \cite{K04}; see also \cite{Z02} on uniqueness.
The ``local resonance'' stability problem was considered in \cite{K04s}. It was proved there that if $\varkappa=(\varkappa_n)_1^\infty$ is the sequence of eigenvalues and resonances of the Schr\"odinger operator with some compactly supported potential $q$ on the half-line and $\sum_{n\ge 1}n^{2\varepsilon}|\widetilde\varkappa_n-\varkappa_n|^2<\infty$ for some sequence $\widetilde\varkappa=(\widetilde\varkappa_n)_1^\infty$ and $\varepsilon>1$, then $\widetilde\varkappa$ is the sequence of eigenvalues and resonances of the Schr\"odinger operator for some unique real compactly supported potential $\widetilde q$. Another type of local resonance stability problem was studied in \cite{MSW10}.
Consider the Schr\"odinger operator $\cH=-\Delta-V$ acting in $L^2(\R^d)$, $d\ge 1$, where the potential $V\ge 0$ decays sufficiently fast at infinity. The negative part of the spectrum of $\cH$ is discrete; let $E_n<0$, $n\ge 1$, be the corresponding increasing sequence of eigenvalues, each counted according to its multiplicity. This sequence is either finite or tends to zero. Lieb and Thirring \cite{LT} proved inequalities of the type
$$
\sum_{n\ge 1}|E_n|^\tau\le C_{\tau,d}\int_{\R^d}V^{{d\over 2}+\tau}dx
$$
for some positive $\tau$. There are many papers about such inequalities, see \cite{LS10}, \cite{LW00} and references therein. In fact, \eqref{r} is a Lieb-Thirring type inequality for resonances of the Dirac operator.
\
\section{Proof} \setcounter{equation}{0}
\
{\bf 2.1. Estimates for entire functions.} An entire function $f(z)$ is said to be of exponential type if there is a constant $\alpha$ such that $|f(z)|\le \mathop{\mathrm{const}}\nolimits\, e^{\alpha|z|}$ everywhere. The infimum of the set of $\alpha$ for which such an inequality holds is called the type of $f$.
\noindent {\bf Definition.} {\it Let $\cE_\gamma$, $\gamma>0$, denote the space of entire functions $f$ of exponential type which satisfy
\begin{equation} \label{CE}
|f(\lambda)|\le e^{A+\gamma(|\mathop{\mathrm{Im}}\nolimits \lambda|-\mathop{\mathrm{Im}}\nolimits \lambda)} \qquad \forall \ \lambda\in \C,
\end{equation}
\begin{equation} \label{CE1}
|f(\lambda)|\ge 1 \qquad \forall \ \lambda\in \R,
\end{equation}
for some constant $A=A(f)\ge 0$.}
In the proof of Theorem \ref{T1} we need some properties of the zeros of $f\in \cE_\gamma$ in terms of Carleson measures. Recall that {\it a positive Borel measure $M$ defined in $\C_-$ is called a Carleson measure if there is a constant $C_M$ such that for all $(r,t)\in \R_+\times\R$
\begin{equation} \label{1.31}
M(D_-(t,r))\le C_M r, \qquad {\rm where} \qquad D_-(t,r)\equiv\{z\in \C_-: |z-t|<r\};
\end{equation}
here $C_M$ is the Carleson constant, independent of $(t,r)$.}
For an entire function $f$ with zeros $\lambda_n$, $n\ge 1$, we define an associated measure by
\begin{equation} \label{MO}
d\Omega (\lambda,f)=\sum_{\mathop{\mathrm{Im}}\nolimits \lambda_n\le 0} \delta (\lambda-\lambda_n+i)\,d\mu\, d\eta, \qquad \lambda=\mu+i\eta\in \C_-.
\end{equation}
In order to prove Theorem \ref{T1} we need the following results.
\begin{theorem} \label{TC}
Let $f\in \cE_\gamma$, $\gamma>0$. Then

i) for each $r>0$ the following estimate holds true:
\begin{equation} \label{L1}
\cN (r,f)\le {1\over\log 2}\biggl({4r\gamma\over\pi}+A\biggr).
\end{equation}

ii) $d\Omega (\lambda,f)$ is a Carleson measure and satisfies
\begin{equation} \label{L2}
\Omega (D_-(t,r),f)\le \cN (r,f(t+\cdot))\le {r\over\log 2}\biggl({4\gamma\over\pi}+A\biggr) \qquad \forall \ (r,t)\in \R_+\times\R.
\end{equation}

iii) for each $p>1$ the following estimate holds true:
\begin{equation} \label{L3}
\sum_{n\ge 1} {1\over|\lambda_n-i|^{p}}\le {C Y_p\over\log 2}\biggl({4\gamma\over\pi}+A\biggr),
\end{equation}
where $C\le 2^5$ is an absolute constant and $Y_p=\int_\R {dx\over(1+x^2)^{p/2}}$, $p>1$.
\end{theorem}
{\bf Proof.} i) Let $F=e^{-i\gamma\lambda}f$, $\lambda=re^{i\phi}$. Then the Jensen formula (see p.~2 in \cite{Koo88}) implies
\begin{equation} \label{X1}
\log |f(0)|+\int _0^r{\cN (t,f)\over t}dt={1\over 2\pi}\int _0^{2\pi}\log |F(re^{i\phi})|d\phi,
\end{equation}
since $\cN (t,f)=\cN (t,F)$. Using the estimate \eqref{CE} we obtain
$$
\log |F(re^{i\phi})|\le \gamma r|\sin \phi|+A, \qquad \lambda=re^{i\phi}, \quad \phi\in [0,2\pi],
$$
which yields
\begin{equation} \label{X2}
{1\over 2\pi}\int _0^{2\pi}\log |F(re^{i\phi})|d\phi \le {2\gamma\over\pi}r+A.
\end{equation}
Substituting the estimate \eqref{X2} into the identity \eqref{X1}, together with the simple estimate
$$
\int _0^r{\cN (t,f)dt\over t}\ge \cN \Bigl({r\over 2}\Bigr)\int _{r/2}^r{dt\over t}=\cN \Bigl({r\over 2}\Bigr)\log 2,
$$
we obtain \eqref{L1}, since $|f(0)|\ge 1$.
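The constant ${2\gamma\over\pi}$ in \eqref{X2} comes from the elementary mean value
$$
{1\over 2\pi}\int _0^{2\pi}|\sin \phi|\,d\phi={2\over\pi},
$$
so that averaging the bound $\gamma r|\sin\phi|+A$ over $\phi\in[0,2\pi]$ gives ${2\gamma\over\pi}r+A$.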
ii) Let $r\le 1$, $t\in \R$. Then by the construction of $\Omega (\cdot,f)$ we obtain $\Omega (D_-(t,r),f)=0$.
Let $r>1$, $t\in \R$. Then due to \eqref{L1} the measure $\Omega (\cdot,f)$ satisfies
$$
\Omega (D_-(t,r),f)\le \cN (r,f(t+\cdot))\le {1\over\log 2}\biggl({4r\gamma\over\pi}+A\biggr)\le {r\over\log 2}\biggl({4\gamma\over\pi}+A\biggr),
$$
where in the last step we used $A\le rA$ for $r>1$. Thus $\Omega (\cdot,f)$ is a Carleson measure with the Carleson constant $C_\Omega={1\over\log 2}\bigl({4\gamma\over\pi}+A\bigr)$.
iii) In order to show \eqref{L3} we recall Carleson's result (see Theorem 3.9, p.~63, in \cite{G81}).
Let $f$ be analytic on $\C_-$. For $0<p<\infty$ we say that $f\in \mathscr{H}_p=\mathscr{H}_p(\C_-)$ if
$$
\sup_{y<0}\int_\R|f(x+iy)|^pdx=\|f\|_{\mathscr{H}_p}^p<\infty.
$$
Note that the definition of the Hardy space $\mathscr{H}_p$ involves all $y<0$, not only small values of $y$, say $y\in (-1,0)$. We define the Hardy space $\mathscr{H}_p$ on $\C_-$, since below we work with functions on $\C_-$.

{\it If $M$ is a Carleson measure, then the following estimate holds:
\begin{equation} \label{Ce}
\int_{\C_-} |f|^pdM\le C C_M \|f\|_{\mathscr{H}_p}^p \qquad
\forall \quad f\in \mathscr{H}_p, \ p\in (0,\infty),
\end{equation}
where $C_M$ is the Carleson constant from \eqref{1.31} and $C\le 2^5$ is an absolute constant.}
In order to prove \eqref{L3} we take the function $f(\lambda)={1\over\lambda-i}$. The estimate \eqref{Ce} yields
\begin{equation} \label{3.19}
\int_{\C_-} |f(\lambda)|^pdM=\sum_{n\ge 1} {1\over|\lambda_n-i|^p}\le C C_M \|f\|_{\mathscr{H}_p}^p, \qquad p\in (1,\infty),
\end{equation}
where we have the simple identity
\begin{equation} \label{3.19a}
\|f\|_{\mathscr{H}_p}^p=\int_\R {dt\over|t-i|^p}=\int_\R {dt\over(t^2+1)^{p/2}}=Y_p.
\end{equation}
Combining \eqref{3.19} and \eqref{3.19a} we obtain \eqref{L3}.
\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
{\bf 2.1. Estimates of resonances.} We consider the function $a$ given by \er{5.11} under the condition that the potential $q$ satisfies $\mathop{\mathrm{supp}}\nolimits q\subset [0,\gamma]$. The solution $\psi^+=(\psi_1^+,\psi_2^+)$ satisfies the integral equation \[ \psi^{+}(x,\lambda )=e^{i\lambda x}e_{1}+\int _x^\gamma iJe^{i\lambda (x-t)J} V(t) \psi^{+}(t,\lambda )dt, \end{equation}
where \begin{equation} \label{5.6} iJe^{i\lambda (x-t)J} V(t) = i\begin{pmatrix} 0& q(t) e^{i\lambda (x-t)}\\ -\overline{q}(t)e^{-i\lambda (x-t)}&0\end{pmatrix} . \end{equation} For $\chi=e^{-i\lambda x}\psi_{1}^+(x,\lambda )$, using (\ref{5.6}) we obtain $$
\chi(x,\lambda)=1+i\int _x^\gamma e^{-i\lambda s}q(s) \psi_{2}^+(s,\lambda )ds, $$ $$
\psi_{2}^+(s,\lambda )=-i\int _s^\gamma e^{i\lambda (2t-s)}\overline{q}(t) \chi(t,\lambda )dt. $$ Then $\chi(x,\lambda )$ satisfies the following integral equation \[ \label{x1}
\chi(x,\lambda )=1+ \int _x^\gamma q(t_1)dt_1 \int _{t_1}^\gamma e^{i2\lambda (t_2-t_1)} \overline{q}(t_2)\chi(t_2,\lambda)dt_2. \end{equation}
We have the standard formal iterations given by \[ \label{ien} \chi (x,\lambda )=1+\sum _{n\geq 1}\chi_n(x,\lambda ),\qquad \chi_n(x,\lambda )=\int _x^\gamma q(t_1)dt_1 \int _{t_1}^\gamma e^{i2\lambda (t_2-t_1)} \overline{q}(t_2)\chi_{n-1}(t_2,\lambda)dt_2, \end{equation}
where $\chi_0(\cdot ,\lambda )=1$. Due to \er{5.11} we get $a(\lambda )=\chi (0,\lambda )$, which yields \[ \label{iea} a(\lambda )=1+\sum _{n\geq 1}a_n(\lambda ),\qquad
a_n(\lambda)=\chi _n(0,\lambda). \end{equation} We now estimate these iterations and the function $a$.
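For example, the first iterate in \er{iea} is simply
$$
a_1(\lambda )=\int _0^\gamma q(t_1)dt_1\int _{t_1}^\gamma e^{i2\lambda (t_2-t_1)}\overline{q}(t_2)dt_2 ,
$$
since $\chi_0=1$; each $a_n$ is a $2n$-fold integral of the same type.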
\begin{lemma} \label{Ta} Let $q\in L^1(\R)$ and $\mathop{\mathrm{supp}}\nolimits q\subset [0,\gamma]$. Then the function $a$ belongs to $\cE_\gamma$ and satisfies \[ \label{an}
|a_n(\lambda )|\leq e^{\gamma(|\eta|-\eta)}{\|q\|_1^{2n}\over (2n)!},\qquad
\forall \ n\geq 1, \end{equation} \[ \label{a}
|a(\lambda)|\leq e^{\gamma(|\eta|-\eta)}\mathop{\mathrm{ch}}\nolimits \|q\|_1, \end{equation} \[ \label{a1}
|a(\lambda)-1|\leq e^{\gamma(|\eta|-\eta)}(\mathop{\mathrm{ch}}\nolimits \|q\|_1-1), \end{equation}
where $\eta=\mathop{\mathrm{Im}}\nolimits\lambda$ and $\|q\|_1=\int_\R |q(x)|dx$.
\end{lemma}
\noindent {\bf Proof.} Let $\eta_-={(|\eta|-\eta)/2}$. Then using \er{ien} we obtain $$ a_n(\lambda)=\int\limits_{0=t_0<t_1< t_2<...< t_{2n}} \biggl(\prod\limits_{1\le j\le n} q(t_{2j-1})\overline{q}(t_{2j}) e^{i2\lambda (t_{2j}-t_{2j-1})} \biggr)dt_1dt_2...dt_{2n}, $$ which yields \[ \label{y4} \begin{aligned}
|a_n(\lambda)|\le \int\limits_{0=t_0<t_1< t_2<...< t_{2n}} \biggl(\prod\limits_{1\le j\le n}e^{2\eta_- (t_{2j}-t_{2j-1})}
|q(t_{2j-1})q(t_{2j})|\biggr)dt_1dt_2...dt_{2n}\\ \le \int\limits_{0<t_1< t_2<...< t_{2n}} \biggl(\prod\limits_{1\le j\le 2n}
|q(t_j)|\biggr) e^{2\eta_- t_{2n}}dt_1dt_2...dt_{2n}\\
\le e^{\gamma (|\eta|-\eta)}\int\limits_{0<t_1< t_2<...< t_{2n}}
|q(t_1)q(t_2)...q(t_{2n})| dt_1dt_2...dt_{2n}\le e^{\gamma (|\eta|-\eta)}
{\|q\|_1^{2n}\over(2n)!}, \end{aligned} \end{equation} which yields \er{an}.
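Here the last step in \er{y4} uses the symmetry of the integrand $\prod_{1\le j\le 2n}|q(t_j)|$ in $t_1,\dots,t_{2n}$, which gives the standard simplex identity
$$
\int\limits_{0<t_1< t_2<...< t_{2n}} \prod_{1\le j\le 2n}|q(t_j)|\,dt_1dt_2...dt_{2n} ={1\over (2n)!}\biggl(\int_0^\gamma |q(t)|dt\biggr)^{2n}={\|q\|_1^{2n}\over (2n)!},
$$
since $\mathop{\mathrm{supp}}\nolimits q\subset [0,\gamma]$.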
Estimate \er{an} shows that the series \er{iea} converges uniformly on bounded subsets of $\C$. Each term of the series is an entire function, and hence the sum $a$ is entire. Summing the majorants in \er{an} we obtain estimates \er{a} and \er{a1}.
Since the S-matrix is unitary, see \cite{DEGM82}, from \er{Sm} we have the well-known identity $|a(\lambda)|^2=|b(\lambda)|^2+1\ge 1$ for all $\lambda\in \R$. Then due to \er{a1} we deduce that $a$ belongs to $\cE_\gamma$.
\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
{\bf Proof of Theorem \ref{T1}.} Recall that by Lemma \ref{Ta} the function $a$ belongs to $\cE_\gamma$ with $A=\|q\|_1$, since, due to $\mathop{\mathrm{ch}}\nolimits x\le e^{x}$, the function $a$ satisfies
$|a(\lambda)|\le e^{\gamma(|\eta|-\eta)+\|q\|_1}$ for all $\lambda\in\C$, where $\eta=\mathop{\mathrm{Im}}\nolimits \lambda$. Thus estimate \er{L3} gives the main estimate \er{r} in Theorem \ref{T1}.
$\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}$
\
\noindent {\bf Acknowledgments.}\small
Various parts of this paper were written during Korotyaev's stay in
the Mathematical Institute of University of Aarhus.
He is grateful to the institute for the hospitality.
He is also grateful to Iliya Vidensky and especially to Alexei Alexandrov (St. Petersburg) for stimulating discussions about the Carleson Theorem.
This work was supported by the Ministry of Education and Science of the Russian Federation, state contract 14.740.11.0581, and the RFFI grant ``Spectral and asymptotic methods for studying differential operators'' No. 11-01-00458.
\end{document}
Crystal interpretation of a formula on the branching rule of types $B_{n}$, $C_{n}$, and $D_{n}$.
@article{Hiroshima2016CrystalIO,
  title={Crystal interpretation of a formula on the branching rule of types $B_{n}$, $C_{n}$, and $D_{n}$},
  author={Toya Hiroshima},
  journal={arXiv: Quantum Algebra},
}
The branching coefficients of the tensor product of finite-dimensional irreducible $U_{q}(\mathfrak{g})$-modules, where $\mathfrak{g}$ is $\mathfrak{so}(2n+1,\mathbb{C})$ ($B_{n}$-type), $\mathfrak{sp}(2n,\mathbb{C})$ ($C_{n}$-type), and $\mathfrak{so}(2n,\mathbb{C})$ ($D_{n}$-type), are expressed in terms of Littlewood-Richardson (LR) coefficients in the stable region. We give an interpretation of this relation by Kashiwara's crystal theory by providing an explicit surjection from the LR…
Chinese Physics C > In Press > Article
Neutrino masses, cosmological inflation and dark matter in a variant U(1)B-L model with type II seesaw mechanism
J. G. Rodrigues 1,2,
A. C. O. Santos 1 ,
J. G. Ferreira Jr 1 ,
C. A. de S. Pires 1
Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, 58051-970, João Pessoa, PB, Brasil
Departamento de Física, Universidade Federal do Rio Grande do Norte, 59078-970, Natal, RN, Brasil
Abstract: In this study, we implemented the type II seesaw mechanism into the framework of the $U(1)_{\rm B-L}$ gauge model. To achieve this, we added a scalar triplet, $ \Delta $ , to the canonical particle content of the $U(1)_{\rm B-L}$ gauge model. By imposing that the $U(1)_{\rm B-L}$ gauge symmetry be spontaneously broken at TeV scale, we show that the type II seesaw mechanism is realized at an intermediate energy scale, more precisely, at approximately $ 10^9 $ GeV. To prevent heavy right-handed neutrinos from disturbing the mechanism, we evoke a $ Z_2 $ discrete symmetry. Interestingly, as a result, we have standard neutrinos with mass around eV scale and right-handed neutrinos with mass in TeV scale, with the lightest one fulfilling the condition of dark matter. We developed all of these in this study. In addition, we show that the neutral component of $ \Delta $ may perform unproblematic non-minimal inflation without loss of unitarity.
Keywords: models beyond the standard model, particle-theory and field-theory models of the early universe, neutrino mass and mixing
[1] C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016)
[2] P. Minkowski, Phys. Lett. B 67, 421 (1977)
[3] T. Yanagida, Conf. Proc. C 7902131, 95 (1979)
[4] M. Gell-Mann, P. Ramond, and R. Slansky, Conf. Proc. C 790927, 315 (1979), arXiv:1306.4669
[5] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980), [231 (1979)]
[6] J. Schechter and J. Valle, Phys. Rev. D 22, 2227 (1980) doi: 10.1103/PhysRevD.22.2227
[7] S. M. Faber and J. S. Gallagher, Ann. Rev. Astron. Astrophys. 17, 135 (1979) doi: 10.1146/annurev.aa.17.090179.001031
[8] D. Clowe, M. Bradac, A. H. Gonzalez et al., Astrophys. J. 648, L109 (2006), arXiv:astro-ph/0608407 doi: 10.1086/508162
[9] N. Aghanim et al. (Planck) (2018), arXiv: 1807.06209
[10] G. Jungman, M. Kamionkowski, and K. Griest, Phys. Rept. 267, 195 (1996), arXiv:hepph/9506380 doi: 10.1016/0370-1573(95)00058-5
[11] G. Bertone, D. Hooper, and J. Silk, Phys. Rept. 405, 279 (2005), arXiv:hep-ph/0404175 doi: 10.1016/j.physrep.2004.08.031
[12] A. H. Guth, Phys. Rev. D 23, 347 (1981)
[13] A. D. Linde, Phys. Lett. B 108, 389 (1982)
[14] A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48, 1220 (1982) doi: 10.1103/PhysRevLett.48.1220
[15] C. L. Bennett et al. (WMAP), Astrophys. J. Suppl. 208, 20 (2013), arXiv:1212.5225 doi: 10.1088/0067-0049/208/2/20
[16] N. Makino and M. Sasaki, Prog. Theor. Phys. 86, 103 (1991) doi: 10.1143/ptp/86.1.103
[17] F. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B 659, 703 (2008), arXiv:0710.3755
[18] R. N. Mohapatra and R. E. Marshak, Phys. Rev. Lett. 44, 1316 (1980), [Erratum: Phys. Rev. Lett. 44, 1643 (1980)]
[19] T. Appelquist, B. A. Dobrescu, and A. R. Hopper, Phys. Rev. D 68, 035012 (2003), arXiv:hepph/0212073
[20] L. Basso, A. Belyaev, S. Moretti et al., Phys. Rev. D 80, 055030 (2009), arXiv:0812.4313
[21] S. Khalil, Phys. Rev. D 82, 077702 (2010), arXiv:1004.0013
[22] N. Okada, M. U. Rehman, and Q. Shafi, Phys. Lett. B 701, 520 (2011), arXiv:1102.4747
[23] X. G. He, G. C. Joshi, H. Lew et al., Phys. Rev. D 43, 22 (1991)
[24] X.-G. He, G. C. Joshi, H. Lew et al., Phys. Rev. D 44, 2118 (1991)
[25] E. Ma, Phys. Lett. B 433, 74 (1998)
[26] H. S. Goh, R. N. Mohapatra, and S. Nasri, Phys. Rev. D 70, 075022 (2004), arXiv:hep-ph/0408139
[27] M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986)
[28] S. K. Majee and N. Sahu, Phys. Rev. D 82, 053007 (2010), arXiv:1004.0841 doi: 10.1103/PhysRevD.82.053007
[29] T. Nomura and H. Okada, Phys. Lett. B 774, 575 (2017), arXiv:1704.08581 doi: 10.1016/j.physletb.2017.10.033
[30] R. N. Mohapatra and G. Senjanović, Phys. Rev. D 23, 165 (1981) doi: 10.1103/PhysRevD.23.165
[31] A. Arhrib, R. Benbrik, M. Chabab et al., Phys. Rev. D 84, 095005 (2011), arXiv:1105.1925
[32] N. Haba, H. Ishida, N. Okada et al., Eur. Phys. J. C 76, 333 (2016), arXiv:1601.05217
[33] C. Bonilla, R. M. Fonseca, and J. W. F. Valle, Phys. Rev. D 92, 075028 (2015), arXiv:1508.02323
[34] J. P. Pinheiro and C. A. d. S. Pires (2020), arXiv: 2003.02350
[35] M. Carena, A. Daleo, B. A. Dobrescu et al., Phys. Rev. D 70, 093009 (2004), arXiv:hep-ph/0408098
[36] D. S. Salopek, J. R. Bond, and J. M. Bardeen, Phys. Rev. D 40, 1753 (1989)
[37] R. Fakir and W. G. Unruh, Phys. Rev. D 41, 1783 (1990)
[38] A. O. Barvinsky, A. Yu. Kamenshchik, and A. A. Starobinsky, JCAP 0811, 021 (2008), arXiv:0809.2104
[39] J. Garcia-Bellido, D. G. Figueroa, and J. Rubio, Phys. Rev. D 79, 063531 (2009), arXiv:0812.4624
[40] F. Bezrukov and M. Shaposhnikov, Phys. Lett. B 734, 249 (2014), arXiv:1403.6078
[41] H. M. Lee, Phys. Rev. D 98, 015020 (2018), arXiv:1802.06174
[42] R. N. Lerner and J. McDonald, Phys. Rev. D 80, 123507 (2009), arXiv:0909.0520
[43] N. Okada, M. U. Rehman, and Q. Shafi, Phys. Rev. D 82, 043502 (2010), arXiv:1005.5161
[44] M. Fairbairn, R. Hogan, and D. J. E. Marsh, Phys. Rev. D 91, 023509 (2015), arXiv:1410.1752
[45] C. P. Burgess, H. M. Lee, and M. Trott, JHEP 09, 103 (2009), arXiv:0902.4465
[46] J. L. F. Barbon and J. R. Espinosa, Phys. Rev. D 79, 081302 (2009), arXiv:0903.0355
[48] M. P. Hertzberg, JHEP 11, 023 (2010), arXiv:1002.2995
[49] R. N. Lerner and J. McDonald, JCAP 1004, 015 (2010), arXiv:0912.5463
[50] F. Bezrukov, Class. Quant. Grav. 30, 214001 (2013), arXiv:1307.0708 doi: 10.1088/0264-9381/30/21/214001
[51] J. Rubio, Front. Astron. Space Sci. 5, 50 (2019), arXiv:1807.02376 doi: 10.3389/fspas.2018.00050
[52] J. G. Ferreira, C. A. de S. Pires, J. G. Rodrigues et al., Phys. Rev. D 96, 103504 (2017), arXiv:1707.01049
[53] C.-S. Chen and C.-M. Lin, Phys. Lett. B 695, 9 (2011), arXiv:1009.5727
[54] C. Arina, J.-O. Gong, and N. Sahu, Nucl. Phys. B 865, 430 (2012), arXiv:1206.0009
[55] O. Lebedev and H. M. Lee, Eur. Phys. J. C 71, 1821 (2011), arXiv:1105.2284
[56] G. Ballesteros, J. Redondo, A. Ringwald et al., Phys. Rev. Lett. 118, 071802 (2017), arXiv:1608.05414 doi: 10.1103/PhysRevLett.118.071802
[57] S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7, 1888 (1973) doi: 10.1103/PhysRevD.7.1888
[58] M. Sher, Phys. Rept. 179, 273 (1989) doi: 10.1016/0370-1573(89)90061-6
[59] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space, Cambridge Monographs on Mathematical Physics, (Cambridge Univ. Press, Cambridge, UK, 1984), ISBN 0521278589, 9780521278584, 9780521278584
[60] A. J. Accioly, U. F. Wichoski, S. F. Kwok et al., Class. Quant. Grav. 10, L215 (1993) doi: 10.1088/0264-9381/10/12/001
[61] V. Faraoni, E. Gunzig, and P. Nardone, Fund. Cosmic Phys. 20, 121 (1999), arXiv:gr-qc/9811047
[62] A. Linde, M. Noorbala, and A. Westphal, JCAP 1103, 013 (2011), arXiv:1101.2652
[63] A. R. Liddle and D. H. Lyth, Cosmological inflation and large scale structure (2000), ISBN 0521575982, 9780521575980, 9780521828499
[64] G. Ballesteros, J. Redondo, A. Ringwald et al., JCAP 1708, 001 (2017), arXiv:1610.01639
[65] J. P. B. Almeida, N. Bernal, J. Rubio et al., JCAP 1903, 012 (2019), arXiv:1811.09640
[66] A. R. Liddle and S. M. Leach, Phys. Rev. D 68, 103503 (2003), arXiv:astro-ph/0305263
[67] J.-O. Gong, S. Pi, and G. Leung, JCAP 05, 027 (2015), arXiv:1501.03604
[68] L. F. Abbott, E. Farhi, and M. B. Wise, Phys. Lett. B 117, 29 (1982)
[69] A. D. Linde, Contemp. Concepts Phys. 5, 1 (1990), arXiv:hep-th/0503203
[70] R. Allahverdi, R. Brandenberger, F.-Y. Cyr-Racine et al., Ann. Rev. Nucl. Part. Sci. 60, 27 (2010), arXiv:1001.2600 doi: 10.1146/annurev.nucl.012809.104511
[71] Y. Ema, R. Jinno, K. Mukaida et al., JCAP 1702, 045 (2017), arXiv:1609.05209
[72] G. Belanger, F. Boudjema, A. Pukhov et al., Nuovo Cim. C033N2, 111 (2010), arXiv:1005.4133
[73] F. Staub (2008), arXiv: 0806.0538
[74] F. Staub, Comput. Phys. Commun. 182, 808 (2011), arXiv:1002.0840 doi: 10.1016/j.cpc.2010.11.030
[75] F. Staub, Comput. Phys. Commun. 184, 1792 (2013), arXiv:1207.0906 doi: 10.1016/j.cpc.2013.02.019
[77] W. Porod and F. Staub, Comput. Phys. Commun. 183, 2458 (2012), arXiv:1104.1573 doi: 10.1016/j.cpc.2012.05.021
[78] W. Porod, Comput. Phys. Commun. 153, 275 (2003), arXiv:hep-ph/0301101 doi: 10.1016/S0010-4655(03)00222-4
[79] G. Aad et al. (ATLAS), Phys. Rev. D 90, 052005 (2014), arXiv:1405.4123
[80] D. G. Cerdeno and A. M. Green, pp. 347-369 (2010), arXiv: 1002.1912
[81] G. Arcadi, M. Dutra, P. Ghosh et al., Eur. Phys. J. C 78, 203 (2018), arXiv:1703.07364
[82] F. S. Queiroz, PoS EPS-HEP2017, 080 (2017), arXiv:1711.02463
[83] T. Marrodán Undagoitia and L. Rauch, J. Phys. G 43, 013001 (2016), arXiv:1509.08767
[84] A. Berlin, D. Hooper, and S. D. McDermott, Phys. Rev. D 89, 115022 (2014), arXiv:1404.0022
[85] M. Dutra, C. A. de S. Pires, and P. S. Rodrigues da Silva, JHEP 09, 147 (2015), arXiv:1504.07222
[86] E. Aprile et al. (2018), arXiv: 1805.12562
[87] D. S. Akerib et al. (LUX), Phys. Rev. Lett. 118, 021303 (2017), arXiv:1608.07648 doi: 10.1103/PhysRevLett.118.021303
[88] E. Aprile et al. (XENON), JCAP 1604, 027 (2016), arXiv:1512.07501
[89] C. E. Aalseth et al., Eur. Phys. J. Plus 133, 131 (2018), arXiv:1707.08145 doi: 10.1140/epjp/i2018-11973-4
[90] D. S. Akerib et al. (LZ) (2015), arXiv: 1509.02910
[91] J. Billard, L. Strigari, and E. Figueroa-Feliciano, Phys. Rev. D 89, 023524 (2014), arXiv:1307.5458
[92] P. Creminelli, D. L. López Nacir, M. Simonović et al., JCAP 11, 031 (2015), arXiv:1502.01983
J. G. Rodrigues, A. C. O. Santos, J. G. Ferreira Jr and C. A. de S. Pires. Neutrino masses, cosmological inflation and dark matter in a variant U(1)B-L model with type II seesaw mechanism.[J]. Chinese Physics C. doi: 10.1088/1674-1137/abd01a
Revised: 2020-11-29
Accepted Date: 2020-11-29
The oscillation pattern observed from the solar and atmospheric neutrinos has surprisingly revealed a small (but nonzero) mass for these particles [1]. From the theoretical point of view, the seesaw mechanism is the most popular approach for generating small masses for neutrinos [2-6].
The observation of galaxy rotational curves [7] and cluster collisions [8], as well as the precise measurements of the thermal anisotropy of the cosmic microwave background [9], suggest the existence of dark matter (DM) permeating our universe. Recent results from the Planck satellite indicate that 26.7% of the energy content of the universe is in the form of non-luminous matter [9]. The most popular DM candidate is a weakly interacting massive particle (WIMP) [10, 11]. WIMPs can be any type of particle, as long as they fulfill a series of conditions, such as neutrality and stability (or being sufficiently long lived), and have mass in the range of a few GeV up to a few TeV.
Cosmological inflation is considered the best theory for explaining the homogeneity, flatness, and isotropy of the universe, as required by the hot big bang [12-14]. Experiments in cosmology, such as WMAP7 and Planck2018 [9, 15], have entered an era of precision that allows us to probe scenarios that attempt to explain the primordial universe. Single-field slow-roll models of inflation coupled non-minimally to gravity seem to be an interesting scenario for inflation [16, 17], given that they connect inflation to particle physics at low energy scales [17].
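As a point of reference, for single-field models with a large non-minimal coupling, such as [17], the leading slow-roll predictions reduce to $n_s \simeq 1 - 2/N$ and $r \simeq 12/N^2$ at $N$ e-folds. The short sketch below is our own illustration of these standard formulas, not code from this paper:

```python
# Illustrative sketch (ours, not code from this paper): single-field
# inflation with a large non-minimal coupling predicts, at leading order
# in slow roll, a spectral index n_s ~ 1 - 2/N and a tensor-to-scalar
# ratio r ~ 12/N^2 after N e-folds.
def slow_roll_predictions(N):
    """Return (n_s, r) for N e-folds in the large-coupling limit."""
    n_s = 1.0 - 2.0 / N
    r = 12.0 / N**2
    return n_s, r

for N in (50, 60):
    n_s, r = slow_roll_predictions(N)
    print(f"N = {N}: n_s = {n_s:.4f}, r = {r:.4f}")
```

For $N = 60$ this gives $n_s \approx 0.967$ and $r \approx 0.0033$, squarely in the region favored by the Planck data cited above.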
Although the Standard Model (SM) of particle physics is a very successful theory, its framework does not accommodate any of the three issues discussed above. In other words, nonzero neutrino mass, dark matter, and inflation require extensions of the SM.
In this study, we show that the $U(1)_{\rm B-L}$ (B-L) gauge model is capable of addressing all three of these issues in a very attractive way by simply adding a scalar triplet to its canonical scalar sector [18-22] and resorting to an adequate $ Z_2 $ symmetry. Interestingly, small neutrino masses are achieved through the type II seesaw mechanism, which is triggered by the spontaneous breaking of the B-L symmetry, while the dark matter content of the universe is composed of the lightest right-handed neutrino of the model. Furthermore, by allowing a non-minimal coupling between gravity and the triplet $ \Delta $, we show that the model may perform inflation at high energy without loss of unitarity.
This paper is organized as follows. In Section II, we describe the main properties of the B-L model. Section III is devoted to cosmological inflation. In Section IV, we describe our calculation of the dark matter candidate, and Section V contains our conclusions.
II. B-L MODEL WITH SCALAR TRIPLET
A. Seesaw mechanism
Baryon number (B) and lepton number (L) are accidental anomalous symmetries of the SM. However, it is well known that only some specific linear combinations of these symmetries can be free from anomalies [18, 23-25]. Among them, the most developed one is the B-L symmetry [18-21], which is involved in several physical scenarios, such as GUT [26], seesaw mechanism [2-5], and baryogenesis [27]. This symmetry gives rise to the simplest gauge extension of the SM, namely the B-L model, which is based on the gauge group $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_{\rm B-L}$. In this study, we considered an extension of the B-L model in which its canonical scalar sector is augmented by a scalar triplet. Thus, the particle content of the model involves the standard particles augmented by three right-handed neutrinos (RHNs), $ N_i \sim ({\bf 1}, {\bf 1},0,-1) \,\,, \,\, i = 1,2,3 $, one scalar singlet, $ S\sim ({\bf 1}, {\bf 1}, 0,2) $, and one scalar triplet,
$ \Delta\equiv \left(\begin{array}{cc} \dfrac{\Delta^{+}}{\sqrt{2}} & \Delta^{++} \\ \Delta^{0} & \dfrac{-\Delta^{+}}{\sqrt{2}} \end{array} \right)\sim ({\bf 1}, {\bf 3}, 2, 2). $
The values in parentheses refer to the transformation of the fields by the $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_{\rm B-L}$ symmetry. To the best of our knowledge, there are few studies in which the triplet $ \Delta $ composes the scalar sector of the B-L model. For previous models, please refer to [28, 29]. Moreover, we imposed the model to be invariant by a $ Z_2 $ discrete symmetry with the RHNs transforming as $ N_i \rightarrow -N_i $, while the rest of the particle content of the model transforms trivially by $ Z_2 $.
With these features, the Yukawa interactions of interest are composed by the terms
$ {\cal L}_{\rm B-L} \supset Y_\nu\overline{f^C}i\sigma^2 \Delta f + \frac{1}{2}Y_N\overline{N^c} N S + {\rm h.c.}, $
where $ f = (\nu \,\,\,\,e)_L^T \sim ({\bf 1}, {\bf 2},-1,-1) $. Note that both neutrinos gain masses when $ \Delta^0 $ and S develop nonzero vacuum expectation values ($ v_\Delta $ and $ v_S $, respectively). This yields the following expressions for the masses of these neutrinos
$\begin{aligned}[b] m_\nu = \frac{Y_\nu v_\Delta}{\sqrt{2}}, \quad m_{\nu_R} = \frac{Y_N v_S}{\sqrt{2}}. \end{aligned}$
Small masses for $ \nu $'s require small $ v_\Delta $. We will show that, on fixing $ v_h $ and $ v_S $, we may obtain $ v_\Delta $ around the eV scale through the type II seesaw mechanism [6, 30, 31]. To this end, we must develop the potential of the model, which is invariant under the B-L symmetry and involves the following terms
$ \begin{aligned}[b] V(H,\Delta,S) = & \mu^2_h H^\dagger H + \lambda_h (H^\dagger H)^2 + \mu^2_s S^\dagger S + \lambda_s (S^\dagger S)^2 \\ & + \mu^2_\Delta Tr(\Delta^\dagger \Delta) + \lambda_\Delta Tr[(\Delta^\dagger \Delta)^2] +\lambda^\prime_\Delta Tr[(\Delta^\dagger \Delta)]^2 \\ & + \lambda_1 S^\dagger SH^\dagger H + \lambda_2 H^\dagger\Delta \Delta^\dagger H + \lambda_3 Tr(\Delta^\dagger \Delta)H^\dagger H \\ & + \lambda_4 S^\dagger STr(\Delta^\dagger \Delta) + (k H^Ti\sigma^2\Delta^\dagger HS + {\rm h.c.}). \end{aligned} $
where $ H = (h^+ \,\,\,h^0)^T \sim ({\bf 1}, {\bf 2}, 1,0) $ is the standard Higgs doublet.
At this point, note that the presence of $ v_\Delta $ modifies the masses of the standard gauge bosons $ W^{\pm} $ and $ Z^0 $. Consequently, it slightly modifies the $ \rho $-parameter as follows: $\rho = \left({1+\dfrac{2v^2_\Delta}{v^2_h}}\right)\bigg/\left({1+\dfrac{4v^2_\Delta}{v^2_h}}\right)$. The current electroweak precision data provide $ \rho = 1.00037 \pm$ 0.00023 [1]. This implies the upper bound $ v_\Delta < 2.5 $ GeV.
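Inverting the expression above gives $x \equiv v_\Delta^2/v_h^2 = (1-\rho)/(4\rho-2)$, which is positive only for $\rho < 1$, so only the lower end of the measurement constrains $v_\Delta$. The confidence level behind the quoted 2.5 GeV is not stated in the text; the short Python sketch below brackets it:

```python
import math

vh = 247.0                      # SM vev in GeV (value used in the text)
rho_c, sig = 1.00037, 0.00023   # electroweak precision fit [1]

def v_delta_max(n_sigma):
    """Upper bound on v_Delta from the n-sigma lower end of rho."""
    rho = rho_c - n_sigma * sig
    x = (1.0 - rho) / (4.0 * rho - 2.0)   # x = (v_Delta / v_h)^2
    return vh * math.sqrt(x)

for n in (2, 3):
    print(f"{n} sigma: v_Delta < {v_delta_max(n):.2f} GeV")
```

The 2$\sigma$ and 3$\sigma$ values bracket the quoted $v_\Delta < 2.5$ GeV, which presumably corresponds to an intermediate confidence level.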
Let us obtain the set of minimum conditions that guarantees that such a potential develops a minimum. To this end, we assume that all neutral scalars develop nonzero vevs and shift the neutral scalar fields in the conventional way
$ S,h^0,\Delta^0\rightarrow \frac{1}{\sqrt{2}}(v_{S,h,\Delta}+R_{S,h,\Delta}+{\rm i}I_{S,h,\Delta}), $
which is substituted in the potential above. As a result, we obtain the following set of minimum condition equations:
$ \begin{aligned}[b] & v_S\left(\mu^2_S + \frac{\lambda_1}{2}v^2_h + \frac{\lambda_4}{2}v^2_\Delta + \lambda_s v^2_S\right) -\frac{k}{2}v^2_h v_\Delta = 0, \\ & v_h \left(\mu^2_h + \frac{\lambda_1}{2}v^2_S + \frac{\lambda_2}{2}v^2_\Delta + \frac{\lambda_3}{2}v^2_\Delta + \lambda_h v^2_h - k v_\Delta v_S\right) = 0, \\ & v_\Delta \left(\mu^2_\Delta + \frac{\lambda_2}{2}v^2_h + \frac{\lambda_3}{2}v^2_h + \frac{\lambda_4}{2}v^2_S + (\lambda_\Delta + \lambda^{\prime}_\Delta) v^2_\Delta\right) -\frac{k}{2}v^2_h v_S = 0. \end{aligned} $
For the study of vacuum stability and bound from these conditions that guarantee that the potential is stable and has a global minimum, please refer to Refs. [32-34].
Note that, on considering $ \mu_\Delta \gg (v_h, \,\,v_S,\,\,v_\Delta) $, the third relation in Eq. (6) provides
$ v_\Delta \approx \frac{k}{2}\frac{v^2_h v_S}{\mu^2_\Delta}. $
Note that the role of the type II seesaw mechanism is to provide small vevs. In the canonical type II seesaw case, where $ v_\Delta = \dfrac{v^2_h}{M} $, with M being the scale of energy that characterizes the explicit violation of the lepton number, the standard vev ($ v_h = 247 $ GeV) requires $ M = 10^{14} $ GeV to have $ v_\Delta $ around eV scale [6, 30, 31].
The scenario proposed and developed here is completely different from the canonical case, because we assume that the lepton number is violated spontaneously at the TeV scale. Consequently, the relation between $ v_\Delta $ and the other energy scales of the model is given in Eq. (7). Assuming that $ v_S $ lies at the TeV scale, $ v_\Delta $ around the eV scale then requires $ \mu_\Delta \sim 10^9 $ GeV. Although this energy scale is much smaller than $ 10^{14} $ GeV, it is still too high to be probed at current colliders. In other words, the scalars that compose the triplet $ \Delta $ are too heavy to be produced at the LHC. This is discussed below.
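For concreteness, Eq. (7) can be evaluated directly. The sketch below assumes a trilinear coupling $k$ of order one (an assumption for illustration):

```python
vh, vS = 247.0, 1.0e3        # GeV: standard vev and TeV-scale singlet vev
mu_Delta = 1.0e9             # GeV: triplet mass parameter
k = 1.0                      # dimensionless trilinear coupling (assumed O(1))

v_Delta = 0.5 * k * vh**2 * vS / mu_Delta**2   # Eq. (7)
print(f"v_Delta = {v_Delta:.2e} GeV = {v_Delta * 1e9:.3f} eV")
```

With these inputs, $v_\Delta$ indeed comes out at a few times $10^{-2}$ eV, i.e., around the eV scale, consistent with the discussion above.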
As we will see, both $ v_\Delta $ and $ v_S $ contribute to the mass of the $ Z^{\prime} $ associated with the B-L symmetry. As $ v_\Delta \ll v_S $, $ v_S $ contributes predominantly to the mass of $ Z^{\prime} $, which is subject to a stringent constraint from the LEP experiment [35]
$ \frac{m_{Z^{\prime}}}{g_{\rm B-L}} \gtrsim 6.9\; {\rm{TeV}}. $
In summary, the set of vevs that compose our scenario takes values as follows: $ v_h = 247 $ GeV, $ v_S \sim $ TeV, and $ v_\Delta \sim $ eV. Then, obtaining standard neutrino masses that accommodate solar and atmospheric neutrino oscillations is a matter of adequately choosing the values of the Yukawa couplings $ Y_\nu $, as expressed in Eq. (3). The right-handed neutrinos, in turn, develop masses at the TeV scale. We advocate here that the lightest right-handed neutrino may play the role of the dark matter of the universe. This is plausible because it is a neutral particle that is protected by the $ Z_2 $ discrete symmetry. Moreover, as the scalars that compose the triplet are heavy particles, we also checked under which circumstances the neutral component of the triplet $ \Delta $ may drive inflation. Before addressing these points, we discuss the spectrum of scalars of such a scenario.
B. Spectrum of scalars
Before advancing in this paper, it is necessary to discuss the scalar sector of the model briefly. Let us first focus on the CP-even sector. In the basis $ (R_S,R_h,R_\Delta) $, we have the following mass matrix:
$ M^2_R = \left(\begin{array}{ccc} \dfrac{k}{2}\dfrac{v_\Delta v^2_h}{v_s}+2\lambda_S v^2_s & -kv_h v_\Delta + \lambda_1 v_s v_h & -\dfrac{k}{2} v^2_h + \lambda_4 v_s v_\Delta\\ -kv_h v_\Delta + \lambda_1 v_s v_h & 2\lambda_h v^2_h & -k v_s v_h + (\lambda_2 + \lambda_3) v_h v_\Delta \\ -\dfrac{k}{2} v^2_h + \lambda_4 v_s v_\Delta & -k v_s v_h + (\lambda_2 + \lambda_3) v_h v_\Delta & \dfrac{k}{2}\dfrac{v_s v^2_h}{v_\Delta} + 2(\lambda_\Delta + \lambda^\prime_\Delta)v^2_\Delta \end{array} \right). $
Note that, for values of the vevs indicated above, the scalar $ R_\Delta $ becomes very heavy, with $m^2_\Delta \sim \dfrac{k}{2}\dfrac{v_S v^2_h}{v_\Delta}$, which implies that it decouples from the other ones. The other two quadratic masses are
$ \begin{aligned}[b] m^2_h \simeq & 2\lambda_h v^2_h -\frac{1}{2}\frac{\lambda_1^2}{\lambda_S}v^2_h, \\ m^2_H \simeq & 2\lambda_S v^2_S +\frac{1}{2}\frac{\lambda_1^2}{\lambda_S}v^2_h, \end{aligned} $
where $ m_h $ denotes the standard Higgs boson, with the allowed parameter space shown in Fig. 1.
Figure 1. (color online) Possible values of the quartic couplings that yield 125 GeV Higgs mass.
The respective eigenvectors are
$ \begin{aligned}[b] h \simeq R_h-\frac{\lambda_1}{2\lambda_S}\frac{v_h}{v_s}R_S, \quad H \simeq R_S+\frac{\lambda_1}{2\lambda_S}\frac{v_h}{v_s}R_h. \end{aligned} $
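The decoupling pattern described above can be verified numerically. The sketch below builds the CP-even mass matrix for an illustrative (assumed) set of couplings and compares the exact eigenvalues with the approximate masses quoted above; $v_\Delta$ is taken well above the eV scale only to keep the numerics well-conditioned, which does not affect the decoupling pattern:

```python
import numpy as np

# Illustrative parameter choices (assumptions, not fitted values):
k = 1.0
lh, lS, l1, l4 = 0.13, 0.10, 0.05, 0.10
l2, l3, lD, lDp = 0.10, 0.10, 0.10, 0.10
vh, vs, vD = 247.0, 7400.0, 1.0e-5   # GeV; vD enlarged for numerical conditioning

# CP-even mass matrix in the basis (R_S, R_h, R_Delta)
M2 = np.array([
    [0.5*k*vD*vh**2/vs + 2*lS*vs**2, -k*vh*vD + l1*vs*vh,        -0.5*k*vh**2 + l4*vs*vD],
    [-k*vh*vD + l1*vs*vh,             2*lh*vh**2,                -k*vs*vh + (l2+l3)*vh*vD],
    [-0.5*k*vh**2 + l4*vs*vD,        -k*vs*vh + (l2+l3)*vh*vD,    0.5*k*vs*vh**2/vD + 2*(lD+lDp)*vD**2],
])

m2 = np.sort(np.linalg.eigvalsh(M2))          # exact squared masses, ascending

# Approximate light squared masses quoted in the text
m2_h_approx = 2*lh*vh**2 - 0.5*l1**2/lS*vh**2
m2_H_approx = 2*lS*vs**2 + 0.5*l1**2/lS*vh**2

print("exact masses  (GeV):", np.sqrt(m2))
print("approx masses (GeV):", np.sqrt([m2_h_approx, m2_H_approx]))
```

For these couplings the light eigenvalue lands near 123 GeV, close to the Higgs mass, while $R_\Delta$ decouples with a squared mass dominated by $\frac{k}{2}\frac{v_s v_h^2}{v_\Delta}$, as stated above.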
For the CP-odd scalars, we have the mass matrix in the basis $ (I_S,I_h,I_\Delta) $,
$ M^2_I = \left(\begin{array}{ccc} \dfrac{k}{2}\dfrac{v_\Delta v^2_h}{v_s} & kv_h v_\Delta & -\dfrac{k}{2} v^2_h \\ kv_h v_\Delta & 2k v_s v_\Delta & -k v_s v_h \\ -\dfrac{k}{2} v^2_h & -k v_s v_h & \dfrac{k}{2}\dfrac{v_s v^2_h}{v_\Delta} \end{array} \right). $
The mass matrix in Eq. (12) can be diagonalized, providing one massive state $ A^0 $ with mass
$ m^2_A = \dfrac{k}{2}\left(\dfrac{v_\Delta v^2_h}{v_s} + \frac{v_s v^2_h}{v_\Delta} + 4v_s v_\Delta \right) $
and two Goldstone bosons $ G^1 $ and $ G^2 $, absorbed as the longitudinal components of the Z and $ Z^{\prime} $ gauge bosons. The eigenvectors for the CP-odd scalars are
$ \begin{aligned}[b] G^1 \simeq & I_S + \frac{v_\Delta}{v_s} I_\Delta, \\ G^2 \simeq & I_h + \frac{v_h}{2v_s} I_S, \\ A^0 \simeq & I_\Delta - \frac{2v_\Delta}{v_h} I_h. \end{aligned} $
The charged scalars, given in the basis $ (h^+,\Delta^+) $, have the following mass matrix:
$ M^2_+ = \left(\begin{array}{cc} k v_s v_\Delta - \dfrac{\lambda_2}{2}v^2_\Delta & \dfrac{\lambda _2}{2\sqrt{2}}v_h v_\Delta -\dfrac{k}{\sqrt{2}}v_s v_h \\ \dfrac{\lambda _2}{2\sqrt{2}}v_h v_\Delta -\dfrac{k}{\sqrt{2}}v_s v_h & \dfrac{k}{2} \dfrac{v_s v^2_h}{v_\Delta} - \dfrac{\lambda_2}{2}v^2_h \end{array}\right) . $
Again, diagonalizing this matrix leads to two Goldstone bosons $ G^\pm $, responsible for the longitudinal parts of the $ W^\pm $ standard gauge bosons. The other two degrees of freedom give us the massive states $ H^\pm $, with mass
$ m^2_{H^\pm} = \left( \frac{v_\Delta}{2} + \frac{v^2_h}{4v_\Delta} \right)\left( 2k v_s - \lambda_2 v_\Delta \right). $
The corresponding eigenvectors are
$ \begin{aligned}[b] G^\pm \simeq & h^{\pm} + \frac{\sqrt{2}v_\Delta}{v_h} \Delta^{\pm}, \\ H^{\pm} \simeq & \Delta^{\pm} - \frac{\sqrt{2}v_\Delta}{v_h}h^{\pm}. \end{aligned} $
Finally, the mass of the doubly charged scalar $ \Delta^{\pm \pm} $ is expressed as
$ m^2_{\Delta^{\pm \pm}} = \frac{k v_s v^2_h v_\Delta - \lambda_2 v^2_h v^2_\Delta -2\lambda_\Delta v^4_\Delta}{2v^2_\Delta}. $
Once symmetries are broken and the gauge bosons absorb the Goldstone bosons as longitudinal components, we have that the standard charged bosons present a contribution from the triplet vev, $m^2_w = \dfrac{g^2}{4}\left(v^2_h + 2v^2_\Delta \right)$, while the neutral gauge bosons become mixed with $ Z^{\prime} $ as follows:
$ M^2_g = \left(\begin{array}{cc} \dfrac{g^2+g^{\prime 2}}{4}(v^2_h+4v_\Delta^2) & -g_{\rm B-L}\sqrt{g^2+g^{\prime 2}}v_\Delta^2 \\ -g_{\rm B-L}\sqrt{g^2+g^{\prime 2}}v_\Delta^2 & g^2_{\rm B-L}(2v_S^2+v_\Delta^2) \end{array} \right). $
Keeping in mind the vev hierarchy discussed here ( $ v_S> v_h\gg v_\Delta $), the mixing between gauge bosons is very small; therefore, they decouple, resulting in the following masses:
$ M^2_Z\approx \frac{(g^2+g^{\prime 2})( v^2_h + 4 v_\Delta^2)}{4}, \quad M^2_{Z^{\prime}}\approx 2g^2_{\rm B-L}\left(v^2_S+ \frac{v_\Delta^2}{2}\right). $
Observe that we have a B-L model with new ingredients: scalars in the triplet and singlet forms and neutrinos with right-handed chiralities. Let us summarize the roles played by these new ingredients. The singlet S is responsible for the spontaneous breaking of the B-L symmetry and defines the mass of $ Z^{\prime} $. The triplet $ \Delta $ is responsible for the type II seesaw mechanism that generates small masses for the standard neutrinos. The right-handed neutrinos are responsible for the cancellation of anomalies. It would be interesting to find new roles for these components.
We argue here, and check below, that the right-handed neutrinos may be the dark matter component of the universe, given that the $ Z_2 $ symmetry protects them from decaying in lighter particles. In the last section, we assume that the lightest right-handed neutrino is the dark matter of the universe, calculate its abundance, and postulate possible ways of detecting it.
We also argue here that, as $ \Delta^0 $ has a mass around $ 10^9 $ GeV, it may play the role of the inflaton and drive inflation. We show in the next section that this is possible when we assume a non-minimal coupling of $ \Delta $ with gravity.
III. COSMOLOGICAL INFLATION
The introduction of a non-minimal coupling between a scalar field and gravity to achieve successful inflation has become popular in recent years, although the original idea dates back to the eighties [16, 36, 37]. In particular, one may cite an extensive list of studies in which the standard Higgs field [17, 38-41] or a scalar singlet extension [42-44] assumed the role of inflaton. Although theoretically well motivated, such models may lead to a troublesome behavior in low-scale phenomenology. Concerning the case of Higgs inflation, the measured Higgs mass pushes the non-minimal coupling to high values ($ \xi \sim 10^4 $), causing unitarity issues at the inflationary scale [45-48] (please refer to [49-51] for a different point of view). Similarly, the singlet scenario is also problematic. Although one could manage to build a unitarily safe singlet inflation model, this would produce a very light inflaton, putting at risk the reheating period of the universe [52].
By contrast, the case in which a scalar triplet plays the role of inflaton is significantly different. Following the discussion in Section (IIB), note that the dominant terms in the scalar masses are independent of any parameter associated with inflation ($ \lambda_\Delta, \lambda^\prime_\Delta,\xi $). In turn, this prevents the emergence of an excessively light inflaton, even for the smallest values of $ \lambda_\Delta $ and $ \lambda^\prime_\Delta $. Such a configuration yields a unitarily safe inflationary model that does not put at risk the transition to the standard evolution of the universe. For previous studies on inflation based on $ \Delta $, please refer to [52-54].
For the sake of simplicity, we assume that $ \Delta^0 $ provides the dominant coupling. This is equivalent to imposing that the effective masses of the Higgs doublet and the scalar singlet are greater than the Hubble scale as initial conditions of inflation. The analysis of inflationary scenarios with multiple fields coupled to gravity can be found in references [55, 56].
The non-minimal coupling between the inflaton and gravity is defined in the Jordan frame, leading to the Lagrangian density
$ {\cal L} \supset \frac{1}{2} (\partial_\mu \Delta^0)^{\dagger}(\partial^\mu \Delta^0)-\frac{M_P^2R}{2}-\frac{1}{2}\xi {\Delta^0}^2 R -V(\Delta^0), $
where R is the Ricci scalar, and $ M_P = 2.435\times 10^{18} $ GeV is the reduced Planck mass. During inflation, the fourth order terms of the inflaton field dominate the scalar potential. Quantum effects are also supposed to play a key role in the inflationary dynamics. Here, we consider one-loop radiative corrections to the inflaton potential evaluated in the Jordan frame; these corrections are known as the Prescription II procedure [38, 39, 42, 43]. Such corrections encompass the standard and B-L gauge couplings g, $ g^{\prime} $, and $g_{\rm B-L}$, as well as the Yukawa couplings of the neutrinos. The result is a Coleman-Weinberg potential of the form [57, 58]
$ V = \left( \frac{\lambda_{\Delta} + \lambda^{\prime}_{\Delta}}{4} + \frac{a}{32\pi^2}\ln{\frac{\Delta^0}{M_P}}\right){\Delta^0}^4 $
$ a = 3\left(\frac{3}{2}g^4+{g^{\prime}}^4+{g_{B-L}}^4\right) +6g^2{g^{\prime}}^2 - \sum\limits_{i}Y^4_{\nu_i} + \sum\limits_{j} \lambda^2_j, $
where $ M_P $ is chosen as the renormalization scale, the sum over i takes into account the three generations of neutrinos, and j runs over the scalar contributions ($ \lambda_{2} $, $ \lambda_{3} $, $ \lambda_{4} $, $ \lambda_{\Delta} $, and $ \lambda^{\prime}_{\Delta} $).
To calculate the parameters related to inflation, we must recover the canonical Einstein-Hilbert gravity. This process is called conformal transformation and can be understood through two steps. First, we re-scale the metric $ \tilde{g}_{\alpha \beta} = \Omega^2 g_{\alpha \beta} $. In doing so, the non-minimal coupling vanishes, but the inflaton acquires a non-canonical kinetic term. The process is finished by transforming the field into a form with canonical kinetic energy. Such transformation involves the relations [59, 60]
$ \begin{aligned}[b] \tilde g_{\mu \nu}& = \Omega^2g_{\mu \nu}\,\,\,\,\,\,{\rm{where}} \,\,\,\,\, \Omega^2 = 1+\frac{\xi {\Delta^0}^2}{M^2_P}, \\ & \frac{{\rm d}\chi}{{\rm d}\Delta^0} = \sqrt{\frac{\Omega^2 +6\xi^2 {\Delta^0}^2/M_P^2}{\Omega^4}}. \end{aligned} $
The Lagrangian in Einstein frame is given by
$ {\cal L} \supset -\frac{M^2_{P} \tilde R}{2}+\frac{1}{2} (\partial_\mu \chi)^{\dagger}(\partial^\mu \chi)-U(\chi)\,, $
where $U(\chi) = \dfrac{1}{\Omega^4}V\left(\Delta^0(\chi)\right)$. There is some discussion about which frame is the physical one [61]; however, both frames agree in the regime of low energy.
Inflation occurs whenever the field $ \chi $, or equivalently $ \Delta^0 $, rolls slowly in a direction toward the minimum of the potential. The slow-roll parameters can be written as [62]
$\epsilon = \frac{M^2_{P}}{2}\left(\frac{ U^{\prime}}{ U \chi^{\prime}}\right)^2, \quad \quad \eta = M^2_{P}\left( \frac{U^{\prime \prime}}{U \chi^{\prime}} - \frac{U^{\prime} \chi^{\prime \prime}} {U {\chi^{\prime}}^3}\right), $
where $ ^\prime $ indicates a derivative with respect to $ \Delta^0 $. Inflation takes place while $ \epsilon,\eta \ll 1 $ and ends when either of them reaches unity. In the slow-roll regime, we can write the spectral index and the tensor-to-scalar ratio as [63]
$ n_S = 1-6\epsilon+2\eta, \quad \quad r = 16\epsilon. $
Planck2018 measured $ n_S = 0.9659 \pm 0.0041 $ and gave the bound $ r<0.10 $ for a pivot scale equivalent to $ k = 0.002 $ Mpc$ ^{-1} $[9]. Any inflationary model that intends to be realistic must recover these values.
Another important observable is the amplitude of scalar perturbations,
$ A_S = \frac{U}{24M^4_P\pi^2\epsilon} . $
The value of $ A_S $ is set by COBE normalization to approximately $ 2.1 \times 10^{-9} $ for the pivot scale $ k_{*} = 0.05 $ Mpc$ ^{-1} $ [9]. By inverting Eq. (28), one can determine the inflaton's self-coupling, $ \lambda^\prime \equiv \lambda_\Delta + \lambda^\prime_\Delta $. Note the strict dependence of $ \lambda^\prime $ on both the $ \xi $ and a parameters, depicted in Fig. 2. Only small values of $ \xi $ were considered ($ \xi = 1 $, $ 100 $) to avoid unitarity problems in the inflationary regime of energy. Consequently, the observed magnitude of $ A_S $ constrains the self-coupling of the inflaton to small values. Still, the mass structure of the CP-even scalars presented in Eq. (9) prevents the inflaton from being too light, allowing the perturbative decay into the standard particles after inflation (reheating).
Figure 2. (color online) $ \log_{10}(\lambda^\prime) $ vs. a for $ \xi = 1 $ (left) and $ \xi = 100 $ (right).
The amount of expansion from the horizon crossing moment up to the end of inflation is quantified by the number of e-folds:
$ N = -\frac{1}{M^2_P}\int_{\Delta^0_{*}}^{\Delta^0_f}\frac{V(\Delta^0)}{V^\prime(\Delta^0)}\left(\frac{{\rm d}\chi}{{\rm d}\Delta^0}\right)^2{\rm d}\Delta^0. $
In the non-minimal inflationary scenario with a relatively small coupling to gravity, $ \xi \lesssim 100 $, the reheating process occurs predominantly as in a radiation dominated universe [64-67]. As pointed out in [66], the e-folds number can be estimated to be approximately 60 in these cases. Using $ N = 60 $, we can solve Eq. (29) for the field strength at horizon crossing, $ \Delta^0_{*} $.
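As a cross-check of the numbers quoted below for $\xi = 100$ and $a = 0$, the chain of Eqs. (24)-(29) can be evaluated numerically. The sketch below (Python, reduced Planck units, finite-difference derivatives, e-folds written in terms of the Einstein-frame potential U) is an illustration, not the code used for the figures; it also extracts $\lambda^\prime$ from the COBE normalization in Eq. (28):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

xi = 100.0                     # non-minimal coupling; a = 0 (tree-level quartic)

def U(p):                      # Einstein-frame potential, up to a factor lambda'/4
    return p**4 / (1.0 + xi*p**2)**2

def dchi(p):                   # d(chi)/d(Delta0), Eq. (24)
    om2 = 1.0 + xi*p**2
    return np.sqrt((om2 + 6.0*xi**2*p**2) / om2**2)

h = 1e-5                       # finite-difference step
def d1(f, p): return (f(p + h) - f(p - h)) / (2.0*h)
def d2(f, p): return (f(p + h) - 2.0*f(p) + f(p - h)) / h**2

def eps(p):                    # slow-roll parameters, Eq. (26)
    return 0.5 * (d1(U, p) / (U(p) * dchi(p)))**2

def eta(p):
    return d2(U, p)/(U(p)*dchi(p)**2) - d1(U, p)*d1(dchi, p)/(U(p)*dchi(p)**3)

p_end = brentq(lambda p: eps(p) - 1.0, 1e-3, 1.0)     # end of inflation

def efolds(p_star):            # e-fold count from p_star down to p_end
    val, _ = quad(lambda p: U(p)/d1(U, p) * dchi(p)**2, p_end, p_star)
    return val

p_star = brentq(lambda p: efolds(p) - 60.0, 1.01*p_end, 5.0)

ns = 1.0 - 6.0*eps(p_star) + 2.0*eta(p_star)          # Eq. (27)
r  = 16.0*eps(p_star)
lam = 96.0*np.pi**2 * 2.1e-9 * eps(p_star) / U(p_star)  # lambda' from Eq. (28)
print(f"ns = {ns:.4f}, r = {r:.1e}, lambda' = {lam:.1e}")
```

The output reproduces the regime quoted in the text: $n_S$ near 0.967, $r$ of order $10^{-3}$, and a tiny self-coupling consistent with Fig. 2.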
Finally, we can use the expressions in Eq. (27) to calculate the predictions of our model for the $ n_S $ and r parameters. Fig. 3 shows our results in the $ n_S \times r $ plane. Note that the predictions are in good agreement with the observations, even though $ \xi \leqslant 100 $. In particular, for $ \xi = 100 $ and $ a = 0 $, we obtain $ n_S\simeq 0.968 $ and $ r\simeq 3 \times 10^{-3} $, well inside the 68% CL contour of the Planck2018 data set. Moreover, we have a unitarily safe model, as inflation takes place at an energy scale $ H_{*} = \sqrt{V_{*}/3M^2_P} \simeq 1.38 \times 10^{13} $ GeV, well below the unitarity scale $ \Lambda_U = \dfrac{M_P}{\xi}\sim 2.435 \times 10^{16} $ GeV.
Figure 3. (color online) $ n_S $ vs. r for $ \xi = 0.1 $, $ 1 $, $ 10 $, and $ 100 $. The grey areas show the regions favoured by Planck2018, with 68% and 95% confidence levels (Planck $TT,TE,EE+{\rm low} E+{\rm lensing}+BK14+BAO$ data set [9]). The intervals considered for the radiative corrections are also shown, with the lower (upper) limit on a responsible for the lower (upper) end of each curve.
However, the most interesting result comes from the possibility of constraining the parameters of the Lagrangian through the inflationary observables. For $ \xi = 100 $, one may require $ -1.11\times 10^{-8} \lesssim a \lesssim 1.22 \times 10^{-8} $ to bring the predictions of the model into accordance with the Planck observations (68% CL). Following Eq. (23), one could use these bounds to constrain the couplings contributing to the radiative corrections, including the Yukawa and gauge couplings ($ Y_{\nu_i} $ and $g_{\rm B-L}$) associated with the physics beyond the standard model. Certainly, a vast computational effort would be necessary to estimate the impact of these bounds on neutrino physics or on the search for a new gauge boson. Given that Eq. (22) evaluates the radiative corrections at the renormalization scale $ M_P $, the application of Renormalization Group equations would be mandatory to obtain the corresponding bounds at energy scales accessible to low energy experiments (oscillation and collider experiments). We will consider this analysis in a forthcoming communication.
After the inflationary period, the inflaton oscillates around its vev, giving rise to the reheating phase [68-70]. Owing to its mass structure, the inflaton is massive enough to decay into pairs of gauge bosons, neutrinos, or even the Higgs field. Even before the inflaton settles at its vev, non-perturbative effects could take place, producing gauge bosons [39, 71]. In this case, the scenario is significantly more complicated, and a numerical lattice study is mandatory.
IV. DARK MATTER
We remark that, in the B-L model proposed here, the masses of the standard neutrinos are achieved through an adapted type II seesaw mechanism. In view of this, the following question arises: what are the reasons for the existence of the RHNs in the model? The immediate answer is that they are required to cancel gauge anomalies. In addition, they may compose the dark matter content of the universe in the form of WIMPs.
Although the three RHNs are potentially DM candidates, for simplicity, we consider only the lightest one, which we call N, to be sufficient to provide the correct relic abundance of DM of the universe in the form of a WIMP. This means that N was in thermal equilibrium with the SM particles in the early universe. Then, as the universe expands and cools, the thermal equilibrium is lost, causing the freeze-out of the abundance of N. This takes place when the N annihilation rate, whose main contributions are displayed in Fig. 4, becomes roughly smaller than the expansion rate of the universe. In this case, the relic abundance of N is obtained by evaluating the Boltzmann equation for the number density $ n_N $,
Figure 4. Main contributions to the DM relic abundance. The SM contributions stand for fermions, Higgs, and vector bosons.
$ \frac{{\rm d} n_N}{{\rm d}t}+3H n_N = -\langle\sigma v\rangle(n_N^2 - n_{\rm{EQ}}^2), $
$ \begin{align} H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi }{3M_P^2} \rho , \end{align} $
with $ n_{\rm{EQ}} $ and $ a(t) $ being the equilibrium number density and the scale factor, respectively, in a situation where radiation dominates the universe with energy density $ \rho = \rho_{\rm{rad}} $, i.e., the thermal equilibrium epoch. Note that $ \langle\sigma v\rangle $ is the thermally averaged product of the annihilation cross section and the relative velocity [11]. As usually adopted, we present our results in the form of $ \Omega_N $, which is the ratio between the energy density of N and the critical density of the universe.
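Before quoting the full numerical results, the freeze-out logic of the Boltzmann equation above can be seen in a minimal, generic sketch. The toy computation below assumes a constant thermally averaged cross section near the canonical value and a fixed number of relativistic degrees of freedom; it does not include the resonant structure of the actual model, which is handled by the dedicated tools cited below:

```python
import numpy as np
from scipy.integrate import solve_ivp

Mpl   = 1.22e19      # Planck mass in GeV
m     = 3000.0       # DM mass in GeV (illustrative, near the viable region)
gstar = 100.0        # relativistic dof near freeze-out (assumed constant)
sv    = 2.6e-9       # <sigma v> in GeV^-2, roughly 3e-26 cm^3/s (assumed)

# Dimensionless Boltzmann equation: dY/dx = -(lam/x^2)(Y^2 - Yeq^2), x = m/T
lam = (2*np.pi**2/45.0) / 1.66 * (gstar/np.sqrt(gstar)) * m * sv * Mpl

def Yeq(x):          # equilibrium yield for a particle with 2 internal dof
    return 0.145 * (2.0/gstar) * x**1.5 * np.exp(-x)

def rhs(x, Y):
    return -lam / x**2 * (Y**2 - Yeq(x)**2)

sol = solve_ivp(rhs, (10.0, 1000.0), [Yeq(10.0)], method="Radau",
                rtol=1e-8, atol=1e-30)
Yinf = sol.y[0, -1]

Omega_h2 = 2.744e8 * m * Yinf     # standard conversion from yield to Omega h^2
print(f"Omega h^2 ~ {Omega_h2:.3f}")
```

For these assumed inputs, the result lands near the observed $\Omega h^2 \approx 0.12$, illustrating why a weak-scale cross section is the relevant benchmark; the model-specific computation described next refines this with the actual annihilation channels.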
We proceeded as follows. We numerically analyzed the Boltzmann equation using the micrOMEGAs software package (v.2.4.5) [72]. To this end, we implemented the model in Sarah (v.4.10.2) [73-76] in combination with the SPheno (v.3.3.8) [77, 78] package, which solves all mass matrices numerically. Our results for the relic abundance of N are displayed in Fig. 5. The thick lines in those plots correspond to the correct abundance. Note that, although the Yukawa coupling $ Y_{N} $ is an arbitrary free parameter, which translates into N developing any mass value, the key point in determining whether the model provides the correct abundance of N is the resonance of $ Z^{\prime} $ and H. This is shown in the top left plot in Fig. 5. In this plot, the first resonance corresponds to a $ Z^{\prime} $ with mass around $ 3 $ TeV, and the second resonance corresponds to H with mass in the range from $ 6 $ TeV up to $ 7 $ TeV. In the top right plot, we show the density of DM varying with its mass, but now including the region excluded by the LEP constraint expressed in Eq. (8). In the bottom left plot of Fig. 5, we zoom in on the resonance of H. Note that we included the LEP constraint. For completeness, in the bottom right plot, we show the dependence of $ M_{Z^{\prime}} $ on $g_{\rm B-L}$, including the LEP exclusion region and the LHC constraint [79] as well. In the last three plots, we show two benchmark points located exactly on the line that gives the correct abundance. They are represented by red square and orange star points, and their values are displayed in the companion table. Observe that the LEP constraint in Eq. (8) imposes a reasonably massive N, with mass in the range of a few TeV. To complement these plots, we show another in Fig. 7 that relates the resonance of H (all points in color) and the mass of the DM. The mixing parameter $ \sin \theta $ is given in Eq. (11). Observe that the LEP exclusion derived from Eq.
(8) is a very stringent constraint, imposing H with mass above $ 5800 $ GeV and requiring DM with mass above $ 2800 $ GeV. All the points in color provide the correct abundance, but only those in black recover the standard Higgs with mass of $ 125 $ GeV. The benchmark points (red square and orange star) are given in Table 1. In summary, for the range of values chosen for the parameters, N with mass around 3 TeV constitutes a viable DM candidate, as it provides the correct relic abundance required by the experiments [9]. Moreover, a viable DM candidate must obey the current direct detection constraints.
\begin{array}{ccccccccc} \hline M_{\rm DM}/{\rm GeV} & Y_{N_1} & M_{Z^\prime}/{\rm GeV} & g_{\rm B-L} & M_H/{\rm GeV} & \Omega h^2 & v_S/{\rm GeV} & \sigma_{{\rm DM}q} & \\ \hline 3050 & 0.291 & 3840 & 0.518 & 6279 & 0.116 & 7400 & 5.4\times 10^{-11} & \bigstar \\ 3190 & 0.294 & 3904 & 0.509 & 6470 & 0.122 & 7658 & 5.43\times 10^{-11} & \blacksquare \\ \hline \end{array}
Table 1. (color online) Benchmark points for the parameter values indicated in the plots.
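As a quick consistency check, the benchmark points of Table 1 can be tested against the LEP bound of Eq. (8):

```python
# (M_Z' [GeV], g_B-L) for the two benchmark points of Table 1
benchmarks = [
    (3840.0, 0.518),   # red square
    (3904.0, 0.509),   # orange star
]

ratios = [mzp / g for mzp, g in benchmarks]
for (mzp, g), rat in zip(benchmarks, ratios):
    print(f"M_Z'/g_B-L = {rat:.0f} GeV (M_Z' = {mzp:.0f} GeV, g = {g})")
```

Both ratios exceed the 6.9 TeV LEP limit, so the benchmarks are compatible with Eq. (8).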
Figure 5. (color online) Plots relating DM relic abundance, Yukawa coupling of the RHN, and the dark matter candidate mass. The thick horizontal line corresponds to the correct relic abundance [9].
Figure 7. (color online) Physical parameter space of $M_{H_2}\times M_{\rm DM}$. In colors we show the points that provide the correct relic abundance in the resonant production of the singlet scalar, in accordance with the diagrams in Fig. 4. The black points recover a Higgs with mass of $ 125 $ GeV.
The relic density of the DM candidate involves only gravitational effects; to reveal the nature of the candidate, we need to detect it directly. Here, we restricted our study to direct detection. Aiming to detect the signal of DM directly in the form of WIMPs, many underground experiments using different types of targets have been conducted. Unfortunately, no signal has been detected yet. The results of such experiments translate into upper bounds on the WIMP-nucleon scattering cross section. In view of this, any DM candidate in the form of a WIMP must be subjected to the current direct detection constraints. Direct detection theory and experimentation constitute a very well developed topic in particle physics. For a review of the theoretical predictions for direct detection of WIMPs in particle physics models, please refer to [80-82]. For a review of experiments, refer to [83]. In our case, direct detection requires interactions between N and quarks. These are achieved by the exchange of h and H via the t-channel, as displayed in Fig. 6. Note that the $ Z^{\prime} $ t-channel constitutes a very suppressed contribution because N is a Majorana particle [84]. Consequently, we neglected such contributions here. In practical terms, we need to obtain the WIMP-quark scattering cross section reported in [85]. Note that the scattering cross section is parameterized by four free parameters, namely $ M_N $, $ M_H $, $ v_S $, and the mixing angle $ \theta $ given in Eq. (11). However, this cross section depends indirectly on other parameters. For example, for $g_{\rm B-L}$ in the range $ 0.1 - 0.55 $, the LEP constraint in Eq. (8) implies $ v_S > 7 $ TeV and $ M_{Z^{\prime}} $ around $ 3 $ TeV. Considering this, and using the micrOMEGAs software package [72] in our calculations, we present our results for the WIMP-nucleon cross section in Fig. 8. All colored points lead to the right abundance and are in accordance with the Xenon (2018) exclusion bound [86].
The points in pink are excluded by the LEP constraint provided in Eq. (8). However, only the points in black recover a Higgs with mass of 125 GeV. Observe that the black points may be probed by the future XenonNnT and DarkSide direct detection experiments. This makes our model a phenomenologically viable DM model.
Figure 6. The WIMP-quark scattering diagram for direct detection.
Figure 8. (color online) Spin-independent (SI) WIMP-nucleon cross sections constraints. The lines correspond to experimental upper limit bounds on direct detection for LUX [87] (black line), current Xenon1T [86] (blue line with blue fill area), Xenon1T [88] (green dashed line, prospect, 2t$ \cdot $y exposure), XenoNnT [88] (prospect, 20t$ \cdot $y exposure, purple line), Dark Side Prospect [89] (red dashed and dark red dashed lines for different exposure times), LUX-Zeppelin Prospect [90] (orange dashed line), and the neutrino coherent scattering, atmospheric neutrinos and diffuse supernova neutrinos [91] (orange dashed line with filled area).
V. CONCLUSIONS
In this study, the type II seesaw mechanism for the generation of small neutrino masses was implemented within the framework of the B-L gauge model. We showed that neutrino masses at the eV scale require that $ \Delta $ belongs to a mass scale around $ 10^9 $ GeV. This characterizes a seesaw mechanism at an intermediate energy scale, which can be probed through rare lepton decays.
One interesting advantage of this model is that we can evoke a $ Z_2 $ discrete symmetry and leave the right-handed neutrinos completely dark with respect to the standard model interactions. In this case, the lightest of these neutrinos turns out to be a natural candidate for the dark matter of the universe in the form of a WIMP. We also showed that the correct abundance of dark matter is obtained owing to the resonant production of $ Z^{\prime} $ and the heavy Higgs H. Although our scenario is in accordance with the Xenon1T exclusion bound, the prospect is that direct detection experiments will be able to probe it.
Concerning inflation, by allowing a non-minimal coupling of the neutral component of the scalar triplet with gravity, we showed that the model realizes inflation in a very successful way, given that it accommodates the Planck results for inflationary parameters in a scenario where the loss of unitarity occurs orders of magnitude above the energy density during inflation. An interesting possibility arises from the estimation of the Lagrangian parameters through the inflationary observables. Future observations of the B-mode polarization, such as LiteBIRD [92], may improve the constraints on the inflationary parameters and, consequently, on the Lagrangian. We shall consider this analysis in a forthcoming communication.
The authors would like to thank Clarissa Siqueira and P. S. Rodrigues da Silva for helpful suggestions.
This is a basic course of the Berlin Mathematical School and will be held in English. Depending on the audience one of the tutorials may be held in German.
Introduction to projective, spherical, hyperbolic, Möbius and Lie geometry.
Update: Further oral exams will be offered on 26 June. Please see Prof. Sullivan's webpage.
The signatures for the first two pencils of circles were reversed, and the normals should be normalized to satisfy $\langle n_1, n_1 \rangle = \langle n_2, n_2 \rangle = 1$.
The angle $\alpha_n$ in (ii) is the interior angle of a regular Euclidean $n$-gon as determined in (i).
First lecture on Wednesday, October 18.
First tutorials on Wednesday, October 25.
The exercises are to be handed in in groups of two people. Depending on the number of students it might be possible to form groups of three.
The homework is due weekly during the tutorials on Wednesday.
Prasolov & Tikhomirov, Geometry, TMM 200, Amer. Math. Soc.
Coxeter, Non-Euclidean Geometry, Math. Assoc. Amer.
Formulas for non-bonded interaction energies
If one were to calculate the non-bonded interaction energy between two atoms, this would equate to the sum of the vdW + electrostatic potential energies:
$$ E_{\text{non-bond}} = E_{\text{vdW}} + E_{\text{electrostatic}} $$
Could anyone please explain how I could calculate these potentials providing a formula?
Perhaps given an example with zinc ($\ce{Zn^2+}$) and oxygen ($\ce{O}$), where $\sigma$ and $\varepsilon$ values (defined by Charmm27) are given as:
\begin{array}{ccc} \hline \text{Species} & \sigma & \varepsilon \\ \hline \ce{Zn} & 1.942 & 0.25 \\ \ce{O} & 3.029 & 0.12 \\ \hline \end{array}
Ultimately I just wish to understand how they are calculated.
Simply put, I am trying to calculate non-bonded interaction energy between a ligand and a protein. However, after trying to understand each of these terms I am a little lost.
physical-chemistry computational-chemistry energy intermolecular-forces
pentavalentcarbon
$\begingroup$ What are sigma and epsilon? $\endgroup$ – Ivan Neretin Sep 11 '15 at 13:28
You've got the Lennard-Jones potential parameters for zinc and oxygen from a certain version of the CHARMM force field. The commonly used notation is $\varepsilon$ for the potential well depth, $\sigma$ for the distance at which the pair potential is zero, and $r_{min}$ for the distance that minimizes the L-J potential energy function. The linked Wikipedia article gives the potential as:
$$E_{\rm L-J}(r) = 4\varepsilon\left[\left({\sigma\over r}\right)^{12} - \left({\sigma\over r}\right)^{6}\right] = \varepsilon\left[\left({r_{min}\over r}\right)^{12} - 2\left({r_{min}\over r}\right)^{6}\right] $$
At $r = r_{min}$, $E_{\rm L-J} = -\varepsilon$ and we can map $r_{min} = 2^{1/6}\sigma$ to work with either form of the L-J potential function.
Accurate Calculation of Hydration Free Energies using Pair-Specific Lennard-Jones Parameters in the CHARMM Drude Polarizable Force Field. J. Chem. Theory Comput. 6(4) 1181–1198 (2010) describes (among many other things) how the L-J potential term is calculated for 2 interacting species (as in your case). See equations (1), (2), and (3) in that paper, reproduced below (where I have used $r_{min}$ instead of $R_{min}$ for consistency).
In the paper, the Lennard-Jones potential is given as
$$E_{\rm L-J}(r) = \varepsilon\left[\left({r_{min}\over r}\right)^{12} - 2\left({r_{min}\over r}\right)^{6}\right]$$
For a pair of interacting species $i,j$:
$$r_{min} = {r_{min,i} + r_{min,j}\over 2}$$
$$\varepsilon = (\varepsilon_{i}\cdot\varepsilon_{j})^{1/2}$$
For your system, we compute $r_{min}$ from the given values of $\sigma$ (using the mean of the two values of $\sigma$, as specified in the manuscript):
$$r_{min} = 2^{1/6}\sigma = 2^{1/6}\cdot {1.942 + 3.029\over 2} = 2.789\;\unicode{xc5}$$
$$\varepsilon = (0.25\cdot 0.12)^{1/2} = 0.1732\;\mathrm{kcal\;mol^{-1}}$$
Now, at some distance $r$, you can compute $E_{\rm L-J}(r)$.
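To make the arithmetic concrete, here is a short Python sketch reproducing the numbers above. The parameter values are the ones from the question, and `lj_energy` is an illustrative helper (not part of CHARMM itself):

```python
import math

def lj_energy(r, r_min, eps):
    """Lennard-Jones energy in the r_min form: eps * [(r_min/r)^12 - 2 (r_min/r)^6]."""
    q = (r_min / r) ** 6
    return eps * (q * q - 2.0 * q)

# Parameters from the question (CHARMM27-style sigma in Angstrom, epsilon in kcal/mol)
sigma_zn, eps_zn = 1.942, 0.25
sigma_o, eps_o = 3.029, 0.12

# Combination rules quoted above: arithmetic mean of distances, geometric mean of depths
r_min = 2 ** (1 / 6) * (sigma_zn + sigma_o) / 2   # ~ 2.79 Angstrom
eps = math.sqrt(eps_zn * eps_o)                   # ~ 0.1732 kcal/mol

print(lj_energy(r_min, r_min, eps))  # equals -eps: the well depth at the minimum
```

Evaluating `lj_energy` at other separations then gives the repulsive wall for $r < r_{min}$ and the attractive tail for $r > r_{min}$.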
In the linked publication, the authors also go through the process of how the L-J parameters are derived/fit, which may be of interest to you.
Finally, see this vintage web page authored by a familiar guy which goes through a procedure for taking parameters for computed hydration free energies of cations and deriving L-J parameters for use in the AMBER force field. The procedure is very similar to that used for CHARMM.
Todd Minehardt
$\begingroup$ Thank you for your detailed reply. however, in the case where we have two differing atoms with separate $\sigma$ and $\epsilon$ values am I correct in saying the following: $\sigma_{ij}$ = $\sqrt{(\sigma_i\sigma_j)}$ where in the cited paper we have $\epsilon_{ij}$ = $\sqrt{(\epsilon_i\epsilon_j)}$ ? $\endgroup$ – user2952367 Sep 11 '15 at 14:24
$\begingroup$ You want $\sigma = R_{min}$ and $\epsilon = \varepsilon$ in their notation - I edited my answer to include the math with your numbers, so it should hopefully be a little clearer. $\endgroup$ – Todd Minehardt Sep 11 '15 at 14:39
$\begingroup$ Thank you so much, You have really helped me understand this. with regard to the -2 in the LJ 6 component, could you kindly explain what this means? It does not appear in other literature which I have read. $\endgroup$ – user2952367 Sep 11 '15 at 15:57
$\begingroup$ Also, after further reading and for those who are interested in this question: the $\sigma_{ij}$ value is dependent on the force field used. For instance, CHARMM27 uses the Lorentz-Berthelot method for determining $\sigma$, i.e. $\sigma_{ij}=\frac{1}{2}(\sigma_{i}+\sigma_{j})$, while OPLS uses a geometric average such that $\sigma_{ij} = (\sigma_{i}\sigma_{j})^{1/2}$. $\endgroup$ – user2952367 Sep 11 '15 at 16:04
$\begingroup$ The -2 in the L-J expression is a remnant of whether or not $\sigma$ or $R_{min}$ is used in the analytical expression (see the linked Wikipedia page in my answer, above). That said, it's not clear to me that I mapped your $\sigma$ values to $R_{min}$ properly above - $R_{min} = 2^{1/6}\sigma$ - so let me edit that in above. $\endgroup$ – Todd Minehardt Sep 11 '15 at 16:18
Orthostochastic matrix
In mathematics, an orthostochastic matrix is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some orthogonal matrix.
The detailed definition is as follows. A square matrix B of size n is doubly stochastic (or bistochastic) if all its rows and columns sum to 1 and all its entries are nonnegative real numbers. It is orthostochastic if there exists an orthogonal matrix O such that
$B_{ij}=O_{ij}^{2}{\text{ for }}i,j=1,\dots ,n.\,$
All 2-by-2 doubly stochastic matrices are orthostochastic (and also unistochastic) since for any
$B={\begin{bmatrix}a&1-a\\1-a&a\end{bmatrix}}$
we find the corresponding orthogonal matrix
$O={\begin{bmatrix}\cos \phi &\sin \phi \\-\sin \phi &\cos \phi \end{bmatrix}},$
with $\cos ^{2}\phi =a,$ such that $B_{ij}=O_{ij}^{2}.$
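As a numerical illustration (not part of the standard definition), the correspondence can be checked for an arbitrary value of $a$, say $a = 0.3$:

```python
import math

a = 0.3                                  # any value in [0, 1] works
phi = math.acos(math.sqrt(a))            # choose phi with cos^2(phi) = a

O = [[math.cos(phi), math.sin(phi)],
     [-math.sin(phi), math.cos(phi)]]    # orthogonal rotation matrix

B = [[O[i][j] ** 2 for j in range(2)] for i in range(2)]
# B is [[a, 1-a], [1-a, a]] up to floating-point error, hence doubly stochastic
```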
For larger n the sets of bistochastic matrices includes the set of unistochastic matrices, which includes the set of orthostochastic matrices and these inclusion relations are proper.
References
• Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications. Vol. 108. Cambridge: Cambridge University Press. ISBN 0-521-86565-4. Zbl 1106.05001.
Event structure
In mathematics and computer science, an event structure represents a set of events, some of which can only be performed after another (there is a dependency between the events) and some of which might not be performed together (there is a conflict between the events).
Formal definition
An event structure $(E,\leq ,\#)$ consists of
• a set $E$ of events
• a partial order relation $\leq$ on $E$ called causal dependency,
• an irreflexive symmetric relation $\#$ called incompatibility (or conflict)
such that
• finite causes: for every event $e\in E$, the set $[e]=\{f\in E\mid f\leq e\}$ of predecessors of $e$ in $E$ is finite
• hereditary conflict: for all events $d,e,f\in E$, if $d\leq e$ and $d\#f$, then $e\#f$.
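As an illustration, the two axioms can be checked mechanically for a finite example. The `EventStructure` class below is a sketch for this purpose, not standard notation:

```python
class EventStructure:
    """Sketch of a finite event structure (E, <=, #) with mechanical axiom checks."""

    def __init__(self, events, leq, conflict):
        self.E = set(events)
        self.leq = set(leq)            # pairs (f, e) meaning f <= e (reflexive pairs included)
        self.conflict = set(conflict)  # symmetric irreflexive relation, stored as ordered pairs

    def predecessors(self, e):
        """[e] = {f in E | f <= e}; 'finite causes' holds trivially when E is finite."""
        return {f for f in self.E if (f, e) in self.leq}

    def hereditary_conflict(self):
        """Check: d <= e and d # f imply e # f."""
        return all((e, f) in self.conflict
                   for (d, e) in self.leq
                   for f in self.E
                   if (d, f) in self.conflict)

E = {"a", "b", "c"}
leq = {(x, x) for x in E} | {("a", "b")}
conflict = {("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")}
es = EventStructure(E, leq, conflict)
# predecessors("b") is {"a", "b"}, and a <= b together with a # c forces b # c
```

Dropping the pair $b\#c$ from `conflict` would violate hereditary conflict, since $a\leq b$ and $a\#c$ would no longer imply $b\#c$.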
See also
• Binary relation
• Mathematical structure
References
• Winskel, Glynn (1987). "Event Structures" (PDF). Advances in Petri Nets. Lecture Notes in Computer Science. Springer.
• event structure in nLab
In convex quadrilateral $KLMN$ side $\overline{MN}$ is perpendicular to diagonal $\overline{KM}$, side $\overline{KL}$ is perpendicular to diagonal $\overline{LN}$, $MN = 65$, and $KL = 28$. The line through $L$ perpendicular to side $\overline{KN}$ intersects diagonal $\overline{KM}$ at $O$ with $KO = 8$. Find $MO$.
Let $\angle MKN=\alpha$ and $\angle LNK=\beta$, and let $P$ be the point where the line through $L$ perpendicular to $\overline{KN}$ meets $\overline{KN}$. Note $\angle KLP=\beta$.
Then, $KP=28\sin\beta=8\cos\alpha$. Furthermore, $KN=\frac{65}{\sin\alpha}=\frac{28}{\sin\beta} \Rightarrow 65\sin\beta=28\sin\alpha$.
Dividing the equations gives\[\frac{65}{28}=\frac{28\sin\alpha}{8\cos\alpha}=\frac{7}{2}\tan\alpha\Rightarrow \tan\alpha=\frac{65}{98}\]
Thus, $MK=\frac{MN}{\tan\alpha}=98$, so $MO=MK-KO=\boxed{90}$.
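As a sanity check (not part of the original solution), the configuration can be rebuilt in coordinates and the lengths verified numerically:

```python
import math

MN, KL = 65.0, 28.0
KN = math.hypot(65.0, 98.0)              # tan(alpha) = 65/98 gives KN = 65/sin(alpha)
cos_a, sin_a = 98.0 / KN, 65.0 / KN
tan_a = sin_a / cos_a

K, N = (0.0, 0.0), (KN, 0.0)
M = (98.0 * cos_a, 98.0 * sin_a)         # MK = MN / tan(alpha) = 98, angle MKN = alpha
sin_b = KL / KN                          # right angle at L in triangle KLN
L = (KL * sin_b, -KL * math.sqrt(1.0 - sin_b ** 2))  # placed on the other side of KN

# The line through L perpendicular to KN is vertical (KN lies on the x-axis);
# it meets the diagonal KM (the line y = tan(alpha) * x) at O.
O = (L[0], tan_a * L[0])
KO = math.hypot(O[0], O[1])
MO = math.hypot(M[0] - O[0], M[1] - O[1])
```

The computation returns $KO = 8$ and $MO = 90$, matching the given data and the answer.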
\begin{document}
\title{Data-driven Distributionally Robust Optimization over Time}
Stochastic Optimization (SO) is a classical approach for optimization under uncertainty that typically requires knowledge about the probability distribution of uncertain parameters. As the latter is often unknown, Distributionally Robust Optimization (DRO) provides a strong alternative that determines the best guaranteed solution over a set of distributions (ambiguity set). In this work, we present an approach for DRO over time that uses online learning and scenario observations arriving as a data stream to learn more about the uncertainty. Our robust solutions adapt over time and reduce the cost of protection with shrinking ambiguity. For various kinds of ambiguity sets, the robust solutions converge to the SO solution. Our algorithm achieves the optimization and learning goals without solving the DRO problem exactly at any step. We also provide a regret bound for the quality of the online strategy which converges at a rate of $ \mathcal{O}(\log T / \sqrt{T})$, where $T$ is the number of iterations. Furthermore, we illustrate the effectiveness of our procedure by numerical experiments on mixed-integer optimization instances from popular benchmark libraries and give practical examples stemming from telecommunications and routing. Our algorithm is able to solve the DRO over time problem significantly faster than standard reformulations.
\paragraph{Keywords:} distributionally robust optimization; learning over time; online gradient descent, data-driven optimization, dynamic regret
\section{Introduction}
Many practical optimization problems deal with uncertainties in the input parameters, and it is important to compute optima that are protected against them. Two prime methodologies to deal with uncertainty in optimization problems are \emph{Stochastic Optimization (SO)} and \emph{Robust Optimization (RO)}. SO considers all uncertain parameters to be random variables, and its solution approaches usually rely on the knowledge of the probability distribution. Classically, SO aims to find solutions that are optimal in expectation (or more generally with respect to chance constraints or different risk-measures, see~\cite{BirgeLouveaux1997}). RO is typically used when knowledge about the probability distribution is not at hand or a better guarantee of feasibility is desired \citep{Ben-TalElGhaouiNemirovski2009}. It strives to find solutions which perform best against adversarial realizations of the uncertain parameters from a predefined uncertainty set.
Even if the underlying probability distributions are not at hand, they can often be estimated from historical data. These estimates are naturally also affected by uncertainty. As such, recent research has focused on compromising between SO and RO in order to obtain better protection against uncertainty while controlling the conservatism of robust solutions. In particular, \emph{Distributionally Robust Optimization (DRO)} aims to solve a \qm{robust} version of a stochastic optimization problem under incomplete knowledge about the probability distribution. The benefit of DRO is that the solutions are fully protected against the uncertainty and thus outperform non-robust solutions with respect to worst-case performance.
Current research in DRO is primarily aimed at developing efficient solution techniques for static DRO problems, where the optimization problem is solved for a given and fixed ambiguity set. However, in many practical applications, additional information about the uncertainty becomes available over time. For example, in situations where planning processes need to be repeated over time, each plan should incorporate the outcomes of the previous decisions. Applications abound for such processes, for example in taxi or in ambulance planning. Applications also occur in iteratively assigning landing time windows to aircraft such that safety distances are kept at an airport at all times, even in the case of disturbances, which may lead to frequent reassignments. For more details on the air traffic application, see \citep{ATM1, Kapolke2016}.
In this article, we present a DRO approach that iteratively incorporates such information over time. Specifically, we provide an online learning algorithm that solves DRO problems with limited initial knowledge about the uncertainty, but which can leverage additional incoming data. This allows the optimal solutions to adapt to the uncertainty and gradually reduce the cost of protection. To this end, we use scenario observations arriving as a data stream to construct and update the ambiguity sets. These sets contain the true data generating distribution with high confidence and converge to it over time. We also show that the solution to the DRO problem converges to the true SO solution, as the ambiguity sets shrink to the true distribution, and hence the online algorithm also converges to the SO solution. However, the primary goal is to use the online algorithm to solve the DRO problem.
The main feature of our work is an integrated procedure that can iteratively solve the DRO problem while learning reliable and time-dependent ambiguity sets. We show that our online algorithm outperforms prior methods. We also compare different approaches to construct data-driven ambiguity sets. In computational experiments, we evaluate our algorithm on state-of-the-art benchmark libraries and realistic case studies. We demonstrate that our online method leads to significantly reduced computation times with only marginal sacrifices in solution quality.
\subsubsection*{Problem Statement.}
We consider the problem of minimizing a function $ f\colon \mathcal{X}\times \mathcal{S} \rightarrow \mathds{R} $ over some (possibly non-linear and/or mixed-integer) feasible set~$ \mathcal{X} $. We focus on the case of finitely many scenarios, which are contained in the set $ \mathcal{S} \coloneqq \set{s_1, \ldots, s_{\abs{\mathcal{S}}}} $. Note that this is a natural modeling assumption that appears in several applications as many real-world random events are best represented via discrete scenarios. However, our approach is also able to treat the case of continuous random variables by sampling a sufficiently large discrete scenario set from the probability distribution. Finite approximations such as the sample average approximation~\citep{sample-average-approximation} and other similar scenario reduction techniques are standard in stochastic optimization. They lead to algorithmic tractability in more general settings, which is necessary for any realistically sized problem. Furthermore, the use of a finite number of scenarios allows us to treat the probability distribution as a vector and hence leverage methods from first-order optimization to solve the problem of finding the optimal distribution. The setting of infinite-dimensional ambiguity sets for continuous distributions goes beyond the focus of this paper.
With this in mind, we define $ \mathcal{P}_0 \coloneqq
\set{p \in [0, 1]^{\abs{\mathcal{S}}} \mid
\sum_{k = 1}^{\abs{\mathcal{S}}} p_k = 1} $ as the $ \abs{\mathcal{S}} $-dimensional probability simplex. We start with the probability simplex as our initial ambiguity set as it imposes no restrictions on the distributions. However, we are not restricted to this choice and one can initialize with sets constructed using already available historical data. This would lead to less conservative solutions in practice without changing our theoretical results fundamentally. Each point $ p \in \mathcal{P}_0 $ represents a probability distribution over the scenarios $ s \in \mathcal{S} $. Given a probability distribution of scenarios $ p^* \in \iset{P}_0 $, one can solve the following SO problem: \begin{equation}
\label{Eq:Stoch_nominal}
\tag{SO}
J^* \coloneqq \min_{x \in \iset{X}} \mathbb{E}_{s \sim p^*}[f(x, s)]
= \min_{x \in \iset{X}} \textstyle\sum_{k = 1}^{\abs{\mathcal{S}}} f(x, s_k) p_k^*. \end{equation} However, if there is limited information about the probability vector~$ p^* $, we can limit the impact of uncertainty by solving the distributionally robust counterpart of the SO problem, namely \begin{align}
\min_{x \in \mathcal{X}} \; \max_{p \in \mathcal{P}_0}
\; \mathbb{E}_{s \thicksim p}\left[f(x, s)\right]
= \min_{x \in \mathcal{X}} \; \max_{s \in \mathcal{S}} \; f(x, s). \tag{DRO}\label{Eq:DRO_full} \end{align} This equality holds because the worst-case probability is realized by some unit-vector \linebreak $ (0, \ldots, 0, 1, 0, \ldots, 0) \in \mathcal{P}_0 $. For this maximally large ambiguity set, the solutions to the DRO problem may be overly conservative. Furthermore, there is typically data available in the form of observed realizations of the uncertain parameters. The primary goal of this work is to develop an online algorithm to solve the above DRO problem over time while progressively integrating additional information.
This is achieved by refining the ambiguity sets over time as we learn more about the uncertainty, e.g., with new realizations. The use of DRO along with online optimization limits the impact of adverse realizations. Simultaneously, learning ensures that we can adapt and increase our confidence as we gather more information.
We assume that information about the probability distribution $p^*$ is revealed in the form of i.i.d.\ realizations over time. As such, we solve a sequence of DRO problems with progressively shrinking ambiguity sets $\iset{P}_t$. At each time step $ t = 1, \ldots, T $ over a given horizon, we construct the ambiguity set $\iset{P}_t$ according to the confidence regions estimated from scenario observations up to time~$t$. Using these sets, we solve \begin{align}
\label{Eq:DRON}
\tag{DRO$_t$}
\widehat{J}_t := \min_{x \in \mathcal{X}} \;\; \max_{p \in \mathcal{P}_t}
\;\; \mathbb{E}_{s \thicksim p}\left[f(x, s)\right]. \end{align} In order to solve problem~\ref{Eq:DRON} efficiently, our online optimization approach alternates between a gradient step (for $p$) and solving the minimization problem (for~$x$). The task is to find a solution~$ x_t $ for each round $t$. This approach can also be interpreted as a game over $T$ rounds, where a player tries to make optimal decisions against an adversary who chooses the probability distribution from which to draw the uncertain realization.
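For intuition, one such alternating round can be written as a runnable Python sketch: the adversary takes a projected-gradient ascent step on $p$ (the gradient of $\mathbb{E}_{s\sim p}[f(x,s)] = \sum_k p_k f(x,s_k)$ with respect to $p$ is simply the vector of scenario costs $(f(x,s_k))_k$), after which $x$ is re-optimized by a problem-specific solver. For simplicity, the sketch projects onto the full simplex $\mathcal{P}_0$; projecting onto a restricted set $\mathcal{P}_t$ would additionally enforce the interval or norm constraints. The function names are illustrative, not part of the paper's notation:

```python
def project_simplex(v):
    """Euclidean projection of a list of floats v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u):
        css += uj
        if uj + (1.0 - css) / (j + 1) > 0:   # valid prefix of the sorted vector
            theta = (css - 1.0) / (j + 1)
    return [max(vi - theta, 0.0) for vi in v]

def online_round(p, scenario_costs, eta):
    """One projected-gradient ascent step on p for E_p[f(x, s)] = sum_k p_k f(x, s_k).

    scenario_costs[k] = f(x_t, s_k) for the current decision x_t; the subsequent
    minimization over x is assumed to be handled by a problem-specific solver."""
    return project_simplex([pi + eta * c for pi, c in zip(p, scenario_costs)])
```

A single step shifts probability mass toward the currently worst scenarios, mimicking the adversary in the game interpretation above.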
In our analysis, we calculate the gap between the worst-case performance of~$ x_t $ (generated by the online algorithm) over the set $ \iset{P}_t $ and the performance of the optimal DRO solution at time~$t$. We prove that the average gap (reminiscent of the notion of dynamic regret) is bounded as \begin{alignat*}2
&\frac{1}{T} \sum_{t=1}^{T} \Big( \underbrace{\max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \thicksim p}\left[f(x_{t},s)\right]}_{\text{ our online cost at time $t$}} - \underbrace{\min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \thicksim p}\left[f(x,s)\right]}_{\text{optimal cost in hindsight at $t$}} \Big)
\leq \mathcal{O}\Big(\sqrt{\frac{ h(T)}{T}} \Big), \end{alignat*} with high probability. Here, $ h(T) $ is a bound on the path length of the distributions, i.e., $ \sum_{t = 1}^{T} \frac12 \norm{p_t - q_t}_2^2 \leq h(T) $ for $ p_t \in \mathcal{P}_{t - 1} $ and $ q_t \in \mathcal{P}_t $. We show that $ h(T) \in \mathcal{O}(\log^2 T) $ for the different categories of ambiguity sets that we consider. This bound controls the difference between the performance of our online method and an exact DRO solver. It certifies that our approach successfully approximates the DRO solution with an average gap that decreases over time. We also show that the DRO solution converges to the true SO solution as the ambiguity sets converge to the true distribution in the limit.
\subsubsection*{Related Work.}
Current research in distributional robustness is mostly concerned with the appropriate choice of ambiguity sets to obtain guarantees on solution quality \citep{DelageYe2010,ParysEtAl2017}. It also focuses on the derivation of algorithmically tractable reformulations for the resulting robust counterparts, see e.g.\ \cite{WiesemannKuhnSim2014}, as well as \cite{Calafiore2006}. Ambiguity sets can be constructed by imposing constraints on expectation~\citep{ChenEtAL2017}, covariance~\citep{DelageYe2010}, mode~\citep{hanasusanto2015distributionally}, etc. Another option is to use distance metrics such as the Wasserstein metric~\citep{esfahani2018data}, $\phi$-divergence~\citep{bayraksan2015data}, $f$-divergence \citep{duchi2021statistics}, kernel-based distances~\citep{KirschnerBogunovicJegelkaKrause2020}, hypothesis tests~\citep{BertsimasEtAl2018a}, etc.
Ambiguity sets can also be defined by confidence bounds~\citep{RahimianMehrotra19}, and many popular ambiguity sets~\citep{BertsimasEtAl2018a, KirschnerBogunovicJegelkaKrause2020} have associated probabilities of containing the true distribution. In this work, we construct the sets as the combination of a simplex and a bounded set defined by either confidence intervals, $\ell_2$-norm or kernel based metrics. These bounded sets function as confidence regions.
Online learning is an established field which provides algorithms for solving problems over time. For a broad introduction see~\citet{hazan_ioco}. Recently, this approach has been leveraged to solve robust optimization problems~\citep{ben-tal:oracle_ro,ho2018online,pokutta2021adversaries,oracle_robust_optimization_nonconvex_loss}. Online learning has also been applied to DRO problems where the ambiguity set is constructed from data. \citet{namkoong2016stochastic} leverage online optimization for a DRO problem with ambiguity sets defined by $f$-divergences. They use an alternating mirror descent algorithm and provide regret bounds for the same. The authors of~\cite{online_method_dro_with_kl_regularization} propose a duality-free online stochastic method for a class of DRO problems with KL-divergence regularization on the dual variables. Different algorithms are proposed and analyzed in~\cite{online_learning_for_convex_dro} for DRO problems with convex objectives with conditional value at risk and $\chi^2$-divergence ambiguity sets. These existing works combine DRO and online learning in order to solve a single DRO problem. This paper looks at a different problem. The key difference between these works and ours is that we consider a planning problem that has to be solved repeatedly in time with growing knowledge about the uncertain parameters. Therefore, we establish an online framework to solve a series of DRO problem with changing ambiguity sets as new information arrives. This allows us to obtain robust online solutions while learning about the uncertainty.
In \citet{KirschnerBogunovicJegelkaKrause2020}, the authors also focus on a DRO problem in an online learning context. They allow for uncertainty in the parameters and only have noisy blackbox access to the objective function. To solve the DRO problem while learning the objective and the ambiguity set, the authors solve a large program with convex constraints at each stage. This was extended to more general ambiguity sets constructed with $\phi$-divergences in~\cite{drbo_2022}. In contrast, our work primarily focuses on obtaining distributionally robust optimal solutions in an online fashion while learning the true distribution. As a key advantage we point out that our algorithm does not require the solution of the entire DRO problem at any step but only computes gradient steps in the ambiguity space. Furthermore, we consider multiple ambiguity sets and leverage the online gradient descent algorithm to allow for faster computation and better applicability in real-world settings.
Another similar work is by \citet{SessaBogunovicKamgarpourKrause2020}. Therein, the authors present an online learning approach with a multiplicative weight algorithm in order to compute strict robust mixed-strategies over a decision set. In contrast to our work however, they do not consider a DRO setting.
Finally, \citet{BaermannMartinPokuttaSchneider2020} and \citet{BaermannPokuttaSchneider2017} consider the problem of learning unknown objectives in an online fashion but without any uncertainty in the models.
\subsubsection*{Contribution.}
The two key differences in our work which distinguish it from regular online optimization are (i) use of DRO while learning from data and (ii) solving the DRO problem approximately. The first ensures that our solutions are robust to uncertainty in the knowledge of the true probability distribution.
The second allows us to obtain robust solutions without solving the DRO problem exactly at each step. Specifically, the key contributions of our work are: \\ \emph{Online Learning Algorithm for DRO}. We provide an online algorithm to solve the DRO problem. It also learns the uncertainty from scenario observations over time, shrinking the ambiguity sets. This allows for rapid computation of the DRO solutions along with their adaptation, thus reducing the cost of protection. \\
\emph{Stochastic Consistency}. We also prove that the solution of the DRO problem converges to the solution of the SO problem. Since our online algorithm solves the DRO problem, it too converges to the solution of the SO problem. \\
\emph{High Probability Regret Bounds}. We prove that the cumulative regret between the solutions generated by our online method and the exact DRO solution at each time step shrinks at a rate of $\mathcal{O}(\log T / \sqrt{T})$ with high probability.\\
\emph{Flexibility of Uncertainty Models}. We consider 3 different ambiguity models: (i) confidence intervals, (ii) $\ell_2$-norm sets and (iii) kernel based ambiguity sets. These allow our approach to adapt to the application.\\
\emph{Computational Results}. We provide a computational study on
mixed-integer benchmark instances and on real-world problem examples. Specifically, we evaluate our approach on instances from the MIPLIB and QPLIB libraries and further illustrate our results with two realistic applications from telecommunications and routing. In both cases, our approach is considerably faster than solving the full distributionally robust counterparts.
\subsubsection*{Outline} Section~\ref{sec:dddro} presents the foundations of \emph{Data-Driven DRO}. We present our algorithms and theoretical results on \emph{DRO over Time} in Section~\ref{sec:drotime}. Finally, in Section~\ref{sec:num_res}, we evaluate our methods on benchmark instances.
\section{Data-Driven DRO} \label{sec:dddro}
In this section, we introduce ambiguity sets which form a key part of DRO problems. We also introduce the dual reformulation which is a standard way for solving robust or DRO problems.
\subsection{Ambiguity Sets} DRO can provide robust protection against scenarios generated by any distribution inside an ambiguity set $\mathcal{P}$. However, depending on the size of the set e.g., if $\mathcal{P} = \mathcal{P}_0$, the protection may be too conservative. We can reduce this conservativeness by integrating the information gained from the data generated by the true distribution and thus shrinking the ambiguity set. We construct the ambiguity sets as the intersection of the probability simplex and a data-driven set which contains the true distribution (with high confidence).
These data-driven sets can be prescribed by multi-dimensional confidence intervals or metrics such as the $\ell_2$-norm, kernel based deviation etc. These metrics are selected as they can provide guarantees about containing the true distribution. Their dependence on data also ensures that with more data, the sets converge to the true distribution.
Our data-driven ambiguity sets~$\mathcal{P}_{t}$, where $t\in \mathds{N}$ equals the number of data points, can be of the following two forms \begin{equation*}
\mathcal{P}_{t} \coloneqq \left\{p \in \mathcal{P} \mid l_t\leq p \leq u_t \right\}\text{ or } \mathcal{P}_{t}\coloneqq \left\{p \in \mathcal{P} \mid d(p, \hat{p}_t) \leq \epsilon_t \right\}. \end{equation*}
Here $l_t,u_t\in[0,1]^{|\mathcal{S}|}$ are lower and upper bounds of confidence intervals, $\hat{p}_t$ is the empirical distribution estimator for $p^*$ and $d(\cdot, \cdot)$ and $\epsilon_t$ denote a distance metric and its respective bound. In both cases, the values of the parameters $l_t, u_t$ and $\epsilon_t$ are selected such that the true distribution lies inside the sets $\mathcal{P}_t$ with high probability i.e., \begin{equation*} \mathbb{P}\left(p^* \in \mathcal{P}_t\right) \geq 1-\delta_t, \end{equation*} where $\delta_t\in (0,1).$ One key requirement for the ambiguity sets~$\mathcal{P}_{t}$ over time~$t=0,...,T$ is that they contain the true distribution inside \emph{all of them} with high probability, that means \begin{align*} p^* \in \bigcap_{t=0,...,T}\mathcal{P}_t, \end{align*} with a probability of at least $1-\delta$. This is achieved by ensuring that each set $\mathcal{P}_t$ in round $t$ contains the true distribution $p^*$ with probability at least $1 -\delta_t$ such that $\sum_{t=1}^{\infty}\delta_t < \infty$. In this paper, we choose $\delta_t = \frac{6 \delta}{\pi^2 t^2}$ for some predefined $\delta \in (0,1)$. With this in mind, we have the following lemma.
\begin{lemma}\label{Lemma:Union_bound}
For ambiguity sets constructed with confidence levels $1-\delta_t$, where $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$, it follows that the true data generating distribution $p^* \in \bigcap_{t=0,...,T}\mathcal{P}_t$ with a probability of at least $1-\delta$. \end{lemma} \proof{Proof:} Given a sequence of events $A_t$, we can estimate the probability of their intersection with Boole's inequality as follows, \begin{align*} \mathbb{P}\left(\bigcap_{t=1,...,T} A_t\right) = 1 - \mathbb{P}\left(\bigcup_{t=1,...,T} A_t^c\right) \geq 1 - \sum_{t=1,...,T} \mathbb{P}( A_t^c). \end{align*} Let $A_t$ be the event that the true distribution $p^*$ lies inside the uncertainty set $\mathcal{P}_t$. By the definition of an ambiguity set with confidence level $1-\delta_t$, we know that $\mathbb{P}(A_t) \geq 1 - \delta_t$, which means that $\mathbb{P}(A_t^c) \leq \delta_t$. This inequality and the convergent $2$-series $\sum_{t=1}^\infty \frac{1}{t^2}=\frac{\pi^2}{6}$ allow us to show that the probability of the event $p^* \in \bigcap_{t=1,...,T} \mathcal{P}_t$ (here without $t=0$) is at least \begin{align*} 1- \sum_{t=1}^T \delta_t \geq 1 - \sum_{t=1}^\infty \delta_t = 1 - \frac{6 \delta}{\pi^2} \sum_{t=1}^\infty \frac{1}{t^2} =1- \delta. \end{align*} Since $$\bigcap_{t=1,...,T}\mathcal{P}_t \subset \mathcal{P}=\mathcal{P}_0 \Rightarrow \bigcap_{t=1,...,T}\mathcal{P}_t = \bigcap_{t=0,...,T}\mathcal{P}_t,$$ we conclude the proof (here with $t=0$).
$\square$
Note that the above result continues to hold as $T \rightarrow \infty$. The lemma shows that if the confidence levels $1-\delta_t$ approach one fast enough, the inclusion of the true distribution $p^*$ in the individual ambiguity sets $\mathcal{P}_t$ is sufficient to guarantee uniform inclusion over all sets.
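For concreteness, the confidence schedule can be checked numerically. The following Python sketch (the helper name `confidence_schedule` is ours) verifies that the per-round confidences $\delta_t = \frac{6\delta}{\pi^2 t^2}$ sum to at most $\delta$, as required by the union bound in the lemma above.

```python
import math

def confidence_schedule(delta, T):
    """Per-round confidence levels delta_t = 6*delta / (pi^2 * t^2)."""
    return [6.0 * delta / (math.pi ** 2 * t ** 2) for t in range(1, T + 1)]

delta = 0.1
deltas = confidence_schedule(delta, T=100_000)
# Partial sums stay below delta, so the union bound of the lemma applies.
assert sum(deltas) <= delta
```

Since $\sum_{t=1}^\infty 1/t^2 = \pi^2/6$, the infinite sum of the $\delta_t$ equals $\delta$ exactly; any finite horizon therefore stays strictly below $\delta$.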
\subsection{Choice of Ambiguity Sets}
The choice of ambiguity sets depends on the application, the historical information available, and the level of protection desired. We discuss different ambiguity sets that are easily applicable and provide probabilistic performance guarantees. These different ways of constructing data-driven ambiguity sets demonstrate the flexibility of our approach.
\paragraph{Confidence Intervals.}
Confidence intervals for multinomial distributions can be calculated using various methods (see \citet{wang2008exact} for a survey). We use the analytic formula from \citet{fitzpatrick1987quick} for the construction of multi-dimensional intervals $\mathcal{I}_t\subseteq [0,1]^{|\mathcal{S}|}$ via \begin{equation}
\mathcal{I}_{kt} = [l_{kt}, u_{kt}]\coloneqq \left[\hat{p}_{kt} - \frac{z_{\frac{\delta_t}{2}}}{2 \sqrt t} \text{ , } \hat{p}_{kt} + \frac{z_{\frac{\delta_t}{2}}}{2 \sqrt t}\right], \label{Eq:Fitzpatrick} \end{equation} in each round $t=1,...,T$. Here, $\hat{p}_t$ is the maximum likelihood estimator for $p^*$ and $z_{\frac{\delta_t}{2}}$ denotes the upper $(1 - \frac{\delta_t}{2})$-percentile of the standard normal distribution. The corresponding ambiguity sets can then be defined as the intersection of the confidence intervals and the probability simplex, i.e., \begin{align*} \mathcal{P}_t \coloneqq \mathcal{P}_0 \cap \mathcal{I}_t = \lrset{p \in \mathcal{P}_0 \mid l_t \leq p \leq u_t}, \quad t = 1, \ldots, T. \label{label:pn} \end{align*} The parameters $l_t$ and $u_t$ are updated as specified in the definition of $\mathcal{I}_{kt}$, and $\hat{p}_t$ is the empirical probability distribution, updated by counting the observations of each scenario up to round $t$. Confidence intervals work well in practice and are algorithmically preferable, as they only impose linear constraints on the ambiguity sets; this makes problems built on them highly scalable. One disadvantage, however, is that their probability guarantees only hold asymptotically. We therefore extend our method to the following two sets, which provide finite sample guarantees while remaining easy to reformulate.
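As an illustration, the interval~\eqref{Eq:Fitzpatrick} is straightforward to compute from the empirical frequency. The following Python sketch (the helper name `fitzpatrick_interval` is ours) obtains the standard-normal quantile from the Python standard library and clips to $[0,1]$, which is harmless since the ambiguity set intersects with the simplex anyway.

```python
import math
from statistics import NormalDist

def fitzpatrick_interval(p_hat_k, t, delta_t):
    """Interval [l_kt, u_kt] = p_hat_k -/+ z_{delta_t/2} / (2 sqrt(t))."""
    # upper (1 - delta_t/2)-percentile of the standard normal distribution
    z = NormalDist().inv_cdf(1.0 - delta_t / 2.0)
    half_width = z / (2.0 * math.sqrt(t))
    return max(0.0, p_hat_k - half_width), min(1.0, p_hat_k + half_width)
```

The half-width shrinks at rate $1/\sqrt{t}$, so the intervals tighten as more scenario observations arrive.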
\paragraph{$\ell_2$-norm sets.} For the finite sample setting, we have the following guarantee as proven in~\citet{weissman2003inequalities}, \begin{align*}
\mathbb{P} \Big( \Vert \hat{p}_t - p^* \Vert_1 \leq \sqrt{\frac{2|\mathcal{S}| \log (2/\delta_t)}{t}} \Big) \geq 1-\delta_t. \end{align*}
Given this inequality, along with the observation that $\|p\|_2 \leq \|p\|_1$, we can construct the following set $$
\mathcal{P}_t = \left\{p \in \mathcal{P} \mid \|p - \hat{p}_t\|_2 \leq \epsilon_t \right\}, $$
with $\epsilon_t\coloneqq\sqrt{\frac{2|\mathcal{S}| \log (2/\delta_t)}{t}} $, which guarantees $p^* \in \mathcal{P}_t$ with probability at least $1-\delta_t$. The set is defined by the two parameters $\hat{p}_t$ and $\epsilon_t$; the former is updated by counting the scenario observations, the latter according to its definition.
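In code, maintaining this set amounts to keeping track of the empirical distribution and recomputing the radius. A minimal Python sketch, with helper names of our choosing:

```python
import math
from collections import Counter

def l2_radius(num_scenarios, t, delta_t):
    """epsilon_t = sqrt(2 |S| log(2/delta_t) / t), from the L1 concentration bound."""
    return math.sqrt(2.0 * num_scenarios * math.log(2.0 / delta_t) / t)

def empirical_distribution(observations, scenarios):
    """Empirical estimate p_hat_t from i.i.d. scenario observations."""
    counts = Counter(observations)
    t = len(observations)
    return [counts[s] / t for s in scenarios]
```

Both updates are $\mathcal{O}(|\mathcal{S}|)$ per round, so the bookkeeping cost is negligible compared to solving the optimization problems.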
\paragraph{Kernel based ambiguity sets.} Kernel based ambiguity sets are another alternative that provide similar finite sample probability guarantees while allowing for flexibility in how the different scenarios are weighted. Given a kernel function $k_M(s_i,s_j) : \mathcal{S}\times \mathcal{S} \rightarrow \mathds{R}$ defined over the scenarios with kernel matrix $M$, we have the following probability guarantee~\citep{KirschnerBogunovicJegelkaKrause2020}, $$
\mathbb{P}\Big(\|\hat{p}_t - p^*\|_M \leq \frac{\sqrt{C}}{\sqrt{t}}(2 + \sqrt{2 \log(1/\delta_t)})\Big) \geq 1- \delta_t, $$
if $k_M(s_i,s_j)\leq C$ for all scenarios. Here, $\|q\|_M := \sqrt{q^{\top}Mq}$. With this inequality, we can construct an ambiguity set similar to the $\ell_2$-norm case with $\epsilon_t := \frac{\sqrt{C}}{\sqrt{t}}(2 + \sqrt{2 \log(1/\delta_t)})$. Like $\ell_2$-norm ambiguity sets, kernel based sets are defined by the two parameters $\hat{p}_t$ and $\epsilon_t$, which are updated by counting the scenario observations and according to the definition of $\epsilon_t$, respectively.
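The kernel radius is equally simple to maintain; for a Gaussian kernel one may take $C=1$ since $k_M(s_i,s_j)\leq 1$. A sketch (the helper name is ours):

```python
import math

def kernel_radius(C, t, delta_t):
    """epsilon_t = sqrt(C)/sqrt(t) * (2 + sqrt(2 log(1/delta_t)))."""
    return math.sqrt(C) / math.sqrt(t) * (2.0 + math.sqrt(2.0 * math.log(1.0 / delta_t)))
```

Even though $\delta_t$ decreases with $t$ (so $\log(1/\delta_t)$ grows), the $1/\sqrt{t}$ factor dominates and the radius still shrinks over time.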
Using these ambiguity sets, the resulting~\eqref{Eq:DRON} forms a min-max problem, where the inner maximization problem optimizes a linear objective over a finite-dimensional convex feasible set. Therefore, \eqref{Eq:DRON} can be equivalently reformulated using duality theory as is commonly applied in convex robust optimization. These reformulations are discussed in the following section.
\subsection{Solving DRO via Reformulation}
When the ambiguity set is constructed with confidence intervals, the inner maximization problem forms a linear program.
Then, by strong duality, for confidence intervals, \eqref{Eq:DRON} is equivalent to \begin{equation}
\begin{aligned}
\min_{x, z, \alpha, \beta} \text{ } & z
- \langle l_t, \alpha\rangle + \langle u_t, \beta\rangle
\\
\mathrm{s.t.} \text{ } & z - \alpha_k + \beta_k \geq f(x, s_k)
\hspace{0.1cm}\hspace{0.2cm} \forall k = 1, \ldots, \abs{\mathcal{S}},
\\
& \alpha, \beta \geq 0, \\
& x \in \mathcal{X}, \text{ } z \in \mathds{R},
\text{ } \alpha, \beta \in \mathds{R}^{\abs{\mathcal{S}}}.
\end{aligned}
\label{Eq:DRO_ref} \end{equation} Here, the dual variables $ \alpha_k $ and $ \beta_k $ price the uncertainty for scenario $ s_k \in \mathcal S $. This problem is
of the same problem class as~\eqref{Eq:Stoch_nominal}, but larger in size. The reformulated DRO problem grows linearly with the number of scenarios, which may become prohibitive if the cardinality of $\mathcal{S}$ is large. Thus, the difficulty of solving~\eqref{Eq:DRO_ref} depends on the complexity of $f$ and the cardinality of~$\mathcal{S}$.
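For intuition, when the ambiguity set is a box intersected with the simplex, the inner maximization that~\eqref{Eq:DRO_ref} dualizes admits a simple greedy solution: starting from the lower bounds, the adversary spends the remaining probability mass on the scenarios with the largest cost. A Python sketch under the feasibility assumption $\sum_k l_k \leq 1 \leq \sum_k u_k$ (the helper name is ours):

```python
def worst_case_distribution(f_vals, lower, upper):
    """Maximize sum_k p_k * f_vals[k] over {l <= p <= u, sum(p) = 1}.

    Greedy: start from the lower bounds, then spend the remaining mass
    on scenarios in order of decreasing cost f_vals[k].
    Assumes sum(lower) <= 1 <= sum(upper).
    """
    p = list(lower)
    budget = 1.0 - sum(lower)
    for k in sorted(range(len(f_vals)), key=lambda k: -f_vals[k]):
        add = min(upper[k] - lower[k], budget)
        p[k] += add
        budget -= add
    return p
```

For fixed $x$, the value of this greedy solution coincides with the optimal dual value in~\eqref{Eq:DRO_ref} by strong LP duality.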
For the $\ell_2$-norm and kernel based ambiguity sets, the dual reformulation is given by \begin{equation} \label{eq:DRON_l2} \begin{aligned}
\hspace{-2mm}\min_{x,z, r} &\sum_{k=1}^{|\mathcal{S}|} \hat{p}_{kt} \left(f(x,s_k) + r_k \right)+ \epsilon_t \| \mathbf{f}(x) + z \mathbf{e} + \mathbf{r} \|_{A^{-1}}\\
\text{s.t.} &\; x \in \mathcal{X}, r_k \geq 0, \end{aligned} \end{equation}
where the matrix $A = I$ for the $\ell_2$-norm case and $A = M$ for the kernel based sets. The vectors $\mathbf{f}(x), \mathbf{e},$ and $\mathbf{r}$ denote the vector with entries $f(x,s_k)$ for $k=1,\ldots,|\mathcal{S}|$, the all-ones vector, and the vector of the $r_k$, respectively. Although norm constraints typically remain algorithmically tractable, solving this reformulation is in practice more difficult than solving the SO problem.
\subsection{DRO over Time} We now discuss our baseline framework of DRO over Time. The iterative procedure outlined in~Algorithm~\ref{Alg:DRO_over_time_draft} solves a sequence of reformulated DRO problems while learning from data. Prior to the first round, we assume that we have no data about the scenarios and therefore initialize the ambiguity set with the full probability simplex. As more information comes in, the ambiguity set is updated.
\begin{algorithm}
\caption{DRO over Time}
\begin{algorithmic}[1]
\STATE \textbf{Input}: functions $ f(\cdot, s) $
for $ s \in \iS $, feasible set $ \iset{X} $, initial ambiguity set $\mathcal{P}_0$
\STATE \textbf{Output}: sequence of DRO solutions $ x_1, \ldots, x_T $
\FOR{$ t = 1 $ {\bfseries to} $T$}
\STATE $ x_t \leftarrow $ solve Problem~\eqref{Eq:DRO_ref} or~\eqref{eq:DRON_l2} for $ \mathcal{P}_{t-1} $
\STATE $ \mathcal{P}_{t} \leftarrow $ observe data
and update set parameters such as $\hat{p}_t, l_t, u_t$ and $\epsilon_t$ as per the type of ambiguity set.
\ENDFOR
\end{algorithmic} \label{Alg:DRO_over_time_draft} \end{algorithm}
The following theorems show the convergence of the DRO solution $\widehat{J}_t$ to the true solution of the SO problem $J^*$. The proofs are provided in the electronic companion as they are adaptations from \cite{esfahani2018data} to our setting. In the latter, analogous results have been proven for ambiguity sets constructed with the Wasserstein metric.
\begin{theorem}
\label{thm:asymp_consis}
If the feasible set $\mathcal{X}$ is compact, then the optimal value of~\eqref{Eq:DRON} converges over time to the optimal value of~\eqref{Eq:Stoch_nominal} with probability 1, i.e.,
\[\lim_{t \rightarrow \infty} \widehat{J}_t = J^* \text{ with probability 1.}\] \end{theorem}
\begin{theorem}
\label{thm:soln_conv}
Let $\{x_t\}_{t=1}^{\infty}$ be a sequence of optimal solutions to problem~\eqref{Eq:DRON}.
If the feasible set $\mathcal{X}$ is compact and the function $f(x,s)$ is continuous in $x$, then any accumulation point of $\{x_t\}_{t=1}^{\infty}$ is almost surely an optimal solution to problem~\eqref{Eq:Stoch_nominal}. \end{theorem} These two results show the validity of the DRO over Time paradigm and guarantee that, with a sufficient amount of data, Algorithm~\ref{Alg:DRO_over_time_draft} converges to the solution of the SO problem under the true distribution. In particular, they underline the value of gathering more data to obtain better solutions.
In Algorithm~\ref{Alg:DRO_over_time_draft}, the DRO problem has to be solved repeatedly in each round. This is not viable for large problems as it requires long computation times.
We remedy this by introducing a new algorithm which approximates the DRO problem over time and updates the ambiguity sets via online learning. For this online algorithm, we can show a dynamic regret bound that bounds the approximation error by an expression sublinear in the number of rounds.
\section{Online Robust Optimization} \label{Sec:DRO_over_time} \label{sec:drotime}
In this section, we introduce the online learning and optimization algorithm which is the main contribution of our work. As in~\cite{ben-tal:oracle_ro} and~\cite{pokutta2021adversaries}, we consider robust optimization as a game between two players. The online algorithm can then be roughly described as alternating between solving the optimization problem for each player given the solution of the other. Thus, for each round $t=1,\dots,T$, we decompose the min-max problem into two subproblems (one for each player) and perform the following steps: \begin{enumerate}
\item First, the \emph{$p$-player} determines $p_{t}$ via an appropriate algorithm applied to problem
$$
\max_{p\in \mathcal{P}_{t-1}} \mathbb{E}_{s \thicksim p}\left[f(x_{t-1},s)\right]\label{Eq:Adversarial},
$$
based on the solution $x_{t-1}$ from the previous round.
\item Then, the \emph{$x$-player} computes $x_{t}$ as a solution using the previously calculated $p_t$
$$
\min_{x\in \mathcal{X}} \text{ } \mathbb{E}_{s \thicksim p_{t}}\left[f(x,s)\right]. \label{Eq:Stoch}
$$ \end{enumerate} Due to the fact that we consider probability distributions over a finite scenario set, the optimization problem of the $p$-player is finite-dimensional and therefore the Online Gradient Descent~\citep{zinkevich2003online} is a canonical choice here as a learning algorithm. Given $p_{t-1}$ and $x_{t-1}$, the update rule consists of a descent step \begin{align*} \tilde{p}_{t} = p_{t-1} + \eta \nabla_p \mathbb{E}_{s \sim p_{t-1}}\left[f(x_{t-1},s)\right], \end{align*} with step size $\eta > 0$ and a subsequent projection step to ensure feasibility
\begin{align*} p_{t} = \arg\min_{p\in \mathcal{P}_{t-1}} \textstyle\frac{1}{2}\Vert p - \tilde{p}_{t} \Vert^2. \end{align*} The probability distribution in the next iteration is therefore given as the unique solution \begin{align*} p_{t} = \arg\min_{p\in \mathcal{P}_{t-1}}& \text{ }\left\langle -\eta \nabla_p \mathbb{E}_{s \thicksim p_{t-1}}\left[f(x_{t-1},s)\right], p \right\rangle
+ \frac{1}{2}\Vert p - p_{t-1} \Vert ^2. \end{align*} Note that the $x$-player uses $p_t$ in round $t$ to compute $x_t$ by solving a standard SO problem, which is easier to solve than the reformulated DRO program. This $p_t$ is then used to compute $p_{t+1}$ in round $t+1$. As such, the value of $x_t$ depends on $p_t$ in round $t$ (thus $x_t$ and $p_t$ are not conditionally independent in round $t$). Therefore, as shown in~\cite{pokutta2021adversaries}, the learner for $p_t$ has to be a strong learner in order to ensure sublinear regret; that work also shows that online gradient descent satisfies the necessary conditions for being a strong learner.
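When $\mathcal{P}_{t-1}$ is the full simplex $\mathcal{P}_0$, the projection step has a well-known $\mathcal{O}(|\mathcal{S}|\log|\mathcal{S}|)$ sorting-based algorithm; general ambiguity sets require a convex QP solver, as in our implementation. A Python sketch of one $p$-player update in this special case (helper names are ours):

```python
def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (sorting-based)."""
    u = sorted(v, reverse=True)
    cumulative, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        cumulative += ui
        candidate = (cumulative - 1.0) / (i + 1)
        if ui - candidate > 0:
            theta = candidate  # threshold from the last active coordinate
    return [max(vi - theta, 0.0) for vi in v]

def p_player_step(p, f_values, eta):
    """Gradient ascent on the linear objective <p, f(x_{t-1}, .)>, then projection."""
    return project_to_simplex([pk + eta * fk for pk, fk in zip(p, f_values)])
```

Note that the gradient of $\mathbb{E}_{s\sim p}[f(x_{t-1},s)]$ with respect to $p$ is simply the vector of scenario costs $(f(x_{t-1},s_k))_k$, so the ascent step needs no differentiation machinery.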
\subsection{Algorithm}
We provide pseudocode of our method for DRO over Time via online robust optimization in Algorithm~\ref{Alg:DRO_via_adversarial}. It combines alternating solutions of the $\min$ and $\max$ problems with the update of the ambiguity sets. For the sake of simplicity, we assume that our algorithm starts without any knowledge of the probability distribution over the scenarios and therefore we initialize the ambiguity set as the full probability simplex in step~1. The algorithm can be easily modified to incorporate any historical information. The initialization of $p_0\in\mathcal{P}_{0}$ and $x_0\in\mathcal{X}$ in step~4 and step~5 can be chosen arbitrarily and does not affect our theoretical results. Each round $t=1,..., T$ starts with the update of $p_t$ via projected gradient descent and of $x_t$ as the solution of an SO problem. At the end of the round, we observe new data in the form of (i.i.d.) scenario observations and update the ambiguity set as explained in Section~\ref{sec:dddro}.
\begin{algorithm}[htb]
\caption{DRO over Time with Online Projected Gradient Descent}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} functions $ f(\cdot, s) $ for $ s \in \iset{S} $, feasible set $\iset{X}$, initial ambiguity set $\mathcal{P}_0$
\STATE {\bfseries Output:} $x_{1},\ldots,x_{T}$
\STATE Set $\mathcal{P}_0 = \set{p \in [0, 1]^{\abs{\mathcal{S}}} \mid
\sum_{k = 1}^{\abs{\mathcal{S}}} p_k = 1} $
\STATE Set $p_0=\left(\frac{1}{|\mathcal{S}|}, ...,\frac{1}{|\mathcal{S}|} \right)\in \left[0,1\right]^{|\mathcal{S}|}$
\STATE Set $x_0=\arg\min_{x\in \mathcal{X}} f(x,s_1) $
\FOR{$t=1$ {\bfseries to} $T$}
\STATE $\tilde{p}_{t} \leftarrow p_{t-1} + \eta \nabla_p \mathbb{E}_{s \sim p_{t-1}}\left[f(x_{t-1},s)\right]$
\STATE $p_{t} \leftarrow \arg\min_{p\in \mathcal{P}_{t-1}} \textstyle\frac{1}{2}\Vert p - \tilde{p}_{t} \Vert^2$
\STATE $x_{t} \leftarrow \arg\min_{x\in \mathcal{X}} \mathbb{E}_{s \sim p_{t}}\left[f(x,s)\right] $
\STATE $\mathcal{P}_{t} \leftarrow$ observe data
and update set parameters such as $\hat{p}_t, l_t, u_t$ and $\epsilon_t$ as per the type of ambiguity set.
\ENDFOR
\end{algorithmic}\label{Alg:DRO_via_adversarial} \end{algorithm}
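The alternation in steps 6--8 can be illustrated on a toy instance with two scenarios, $f(x,s)=(x-s)^2$ for $s\in\{0,1\}$ and $\mathcal{X}=[0,1]$, where the $x$-player's subproblem has the closed-form solution $x=\mathbb{E}_p[s]$. For simplicity, the sketch below keeps the ambiguity set fixed at the full simplex (a static-set simplification of Algorithm~\ref{Alg:DRO_via_adversarial}); for two scenarios the simplex projection has a closed form.

```python
def project2(a, b):
    """Euclidean projection of (a, b) onto the 2-dimensional probability simplex."""
    p0 = min(1.0, max(0.0, (a - b + 1.0) / 2.0))
    return p0, 1.0 - p0

def run_toy(T=200, eta=0.05):
    """Alternating updates for f(x, s) = (x - s)^2 with scenarios s in {0, 1}."""
    p, x = (0.5, 0.5), 0.0
    for _ in range(T):
        # p-player: ascent step on <p, (f(x, 0), f(x, 1))>, then projection
        g0, g1 = x ** 2, (x - 1.0) ** 2
        p = project2(p[0] + eta * g0, p[1] + eta * g1)
        # x-player: E_p[(x - s)^2] is minimized at x = E_p[s] = p_1
        x = p[1]
    return x, p
```

On this instance the iterates approach the saddle point $x=\tfrac{1}{2}$, $p=(\tfrac{1}{2},\tfrac{1}{2})$ of the min-max problem.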
Algorithm~\ref{Alg:DRO_via_adversarial} provides a sequence of solutions $x_t$ for each time step $t=0,...,T$. In order to prove that the quality of the solutions $x_t$ improves over time, we bound the average gap over time between the worst case performance of $x_t$ and the optimal worst case (exact DRO) solution. Since the feasible set of the $p$-player changes over time, it is not possible to apply existing techniques for regret bounds on min-max problems to our setting, as these techniques have primarily focused on stationary ambiguity sets. In this paper, we extend these existing techniques to the case of shrinking ambiguity sets by leveraging the fact that all the ambiguity sets contain a common distribution (the true distribution) with high probability.
The two main ingredients for the theoretical analysis are a bounded gradient ($\nabla_p \mathbb{E}_{s \sim p}\left[f(x,s)\right]$) and a bounded path length ($\sum_{t=1}^{T}\frac{1}{2}\Vert p_t - q_t \Vert^2\leq h(T)$ for all $p_t\in\mathcal{P}_{t-1}, q_t\in\mathcal{P}_{t}$), which allow online gradient descent to operate on non-stationary feasible sets. The former is a classical assumption for steepest descent algorithms, while the latter is commonly used in dynamic regret bounds for online algorithms, see e.g.~\citep{path_length_dynamic_regret, dynamic_regret}.
For constant ambiguity sets, it is known that for the static regret (which compares the performance difference of online solutions to a single best action in hindsight) bounds of the form $\mathcal{O}({1}/{\sqrt{T}})$ can be derived~\citep{pokutta2021adversaries, besbes2015non}. For our case, a careful analysis of the algorithm leads to a dynamic regret bound of $\mathcal{O}({\sqrt{h(T)}}/{\sqrt{T}})$ that is presented in Theorem~\ref{Th:Regret_bound}, with the corresponding bounding terms $h(T)$ for the path lengths being proven afterwards. This is consistent with other findings about dynamic regret bounds in the literature on online learning, see~\citep{dynamic_regret}. We are able to achieve a bound for this new setting with shrinking ambiguity sets.
\begin{theorem}[Dynamic regret bound] \label{Th:Regret_bound}
Let $f:\mathcal{X}\times\mathcal{S}\rightarrow\mathbb{R}$ be uniformly bounded, i.e., for all $(x,s) \in \mathcal{X}\times\mathcal{S}$, there exists a constant $G>0$ such that $|f(x,s)|\leq G$. Let $\eta := \sqrt{\frac{2 h(T)}{G^2 T|\mathcal{S}|}}$ where $\sum_{t=1}^{T}\frac{1}{2}\Vert p_t - q_t \Vert^2 \leq h(T)$ for $p_t \in \mathcal{P}_{t-1}$ and $q_t \in \mathcal{P}_t$.
The output $(x_1,...,x_{T})$ from Algorithm~\ref{Alg:DRO_via_adversarial} with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$ fulfills
\begin{align*}
\frac{1}{T} &\sum_{t=1}^{T} \left(\max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \thicksim p}\left[f(x_{t},s)\right] - \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \thicksim p}\left[f(x,s)\right] \right)
\leq G \sqrt{\frac{2|\mathcal{S}|h(T)}{T}} + \frac{2G}{T},
\end{align*}
with probability at least $1-\delta$. \end{theorem}
\proof{Proof of Theorem \ref{Th:Regret_bound}} Define $g_t(p)\coloneqq -\mathbb{E}_{s \thicksim p}\left[f(x_t,s)\right]$. An online gradient descent iteration is given by \begin{align*}
p_{t+1} = \arg\min_{p\in \mathcal{P}_t} \text{ }\left\langle \eta \nabla g_t(p_t), p \right\rangle + \frac{1}{2}\Vert p - p_{t} \Vert^2, \end{align*} with the variational inequality $$ \left\langle \eta \nabla g_t(p_t), u_t - p_{t+1} \right\rangle + \left\langle p_{t+1} - p_{t}, u_t - p_{t+1} \right\rangle \geq 0, \text{ for all }u_t \in \mathcal{P}_{t} $$ as the optimality criterion. Classical theory
for gradient descent (by rearranging the previous inequality and using Cauchy-Schwarz) yields \begin{align*}
\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq &\frac{1}{2}\Vert p_{t} - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1}-u_t \Vert^2
+ \frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2. \end{align*} Summation over rounds $t=1,...,T$ results in the following inequality for all $u_t \in \mathcal{P}_{t}$: \begin{align*}
&\sum_{t=1}^T\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq \sum_{t=1}^T\frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2
+ \sum_{t=1}^T \left(\frac{1}{2}\Vert p_{t} - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1} - u_t \Vert^2 \right). \end{align*} Next, we bound the terms on the right side starting with \begin{align*}
& \sum_{t=1}^T \left(\frac{1}{2}\Vert p_t - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1} - u_t \Vert^2 \right)
\leq \sum_{t=1}^T \frac{1}{2}\Vert p_t - u_t \Vert^2. \end{align*} Note that this is, in contrast to classical steepest descent theory, not a telescoping sum.
We make use of the fact that $\sum_{t=1}^{T}\frac{1}{2}\Vert p_t - u_t \Vert^2 \leq h(T)$ for all $u_t \in \mathcal{P}_{t}$ and all rounds $t=1,...,T$ with probability at least $1-\delta$, as per Lemma~\ref{lemma:all_path_lengths}. The proofs of these path length bounds are given in the subsequent subsection.
Furthermore, since $g_t(p)=-\mathbb{E}_{s\thicksim p}\left[f(x_{t},s)\right]$ is linear in $p$, we use the gradient bound \begin{align*}
\Vert \nabla g_t(p_t) \Vert^2 = \sum_{k = 1}^{|\mathcal{S}|} |f(x_t,s_k)|^2 \leq |\mathcal{S}|G^2, \end{align*} for all $t = 1, ..., T$, because $f$ is bounded. Thus we get \begin{align*}
\sum_{t=1}^T&\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq \frac{ h(T)}{\eta} + \frac{\eta}{2} T |\mathcal{S}| G^2, \end{align*} for all $u_t \in \mathcal{P}_{t},$ for all $t = 1, ..., T$ with probability at least $1-\delta$. Choosing the bound-minimizing step size (minimizing the right-hand side with respect to $\eta$)
$\eta:=\sqrt{\frac{2 h(T)}{G^2 |\mathcal{S}| T}}$ yields \begin{align*}
\sum_{t=1}^T\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq G \sqrt{2|\mathcal{S}|h(T) T}. \end{align*} Since $g_t(p)=-\mathbb{E}_{s\thicksim p}\left[f(x_{t},s)\right]$ is linear in $p$ for all $t=1,...,T$, it follows \begin{align*}
&\sum_{t=1}^T \left( \mathbb{E}_{s\thicksim u_t}\left[f(x_{t},s)\right] - \mathbb{E}_{s\thicksim p_t} \left[f(x_t, s)\right] \right)
= \sum_{t=1}^T \left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq G \sqrt{2|\mathcal{S}| h(T) T}. \end{align*} Now we choose in each round $t=1,...,T$ the worst-case $ u_t \coloneqq \arg \max_{p \in \mathcal{P}_{t}} \mathbb{E}_{s\thicksim p}\left[f(x_{t},s)\right] \in \mathcal{P}_t $ and recall the definition of $x_{t} = \arg\min_{x\in \mathcal{X}} \mathbb{E}_{s \sim p_{t}}\left[f(x,s)\right]$ to obtain \begin{align*}
\sum_{t=1}^T &\left( \mathbb{E}_{s\thicksim u_t}\left[f(x_t,s)\right] - \mathbb{E}_{s\thicksim p_t} \left[f(x_t, s)\right] \right)
= \sum_{t=1}^T \left( \max_{p \in \mathcal{P}_t}\mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \mathbb{E}_{s\thicksim p_t} \left[f(x, s)\right] \right). \end{align*} Since $p_t \in \mathcal{P}_{t-1}$, we know that $\min_{x \in \mathcal{X}} \mathbb{E}_{s\thicksim p_t} \left[f(x, s)\right] \leq \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}}\mathbb{E}_{s\thicksim p} \left[f(x, s)\right]$ for all $t=1,...,T$ and thus we can conclude \begin{align*}
\sum_{t=1}^T& \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)
\leq G \sqrt{{2|\mathcal{S}| h(T) T}}, \end{align*} with a probability of at least $1-\delta$. We add and subtract $\min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right]$ on the LHS. Rearranging the terms like this allows us to write the LHS as \begin{align*}
&\sum_{t=1}^T \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)\\ & + \sum_{t=1}^T \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right]. \end{align*}
The last two terms telescope. Bringing them to the RHS and using the upper bound $G$ on $|f(x,s)|$ we can conclude \begin{align*}
\sum_{t=1}^T& \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)
\leq G \sqrt{2|\mathcal{S}| h(T) T} + 2G \;\;\;\;\;\text{ w.p. } 1 - \delta. \end{align*} Dividing by $T$ on both sides completes the proof.
$\square$
A second regret bound, proven in the electronic companion, allows for a better dependence on the number of scenarios $|\mathcal{S}|$ at the cost of a worse dependence on the total iteration count $T$. This is achieved by replacing the path length bound $\sum_{t=1}^T \frac{1}{2}\Vert p_{t} - u_t \Vert^2\leq h(T)$ by $ \sum_{t=2}^{T}\|p_t - q_t\| \leq h'(T) $, which leads to different bounds. Numerically, the resulting bound is tighter in the beginning, i.e., for small $t$, but has worse asymptotic behavior.
As stated in the following corollary, we also observe that for all considered ambiguity sets the dynamic regret bound converges to zero.
\begin{corollary}[Convergence of Regret]
\label{thm:regret_conv}
If $\lim_{T \rightarrow \infty}h(T)/T = 0$, the dynamic regret converges to 0 with probability at least $1-\delta$, i.e.,
\begin{align*}
\lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T}& \Big( \max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \sim p}\left[f(x_{t},s)\right]
- \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x,s)\right] \Big) = 0.
\end{align*} \end{corollary}
\proof{Proof of Corollary~\ref{thm:regret_conv}} We know that \begin{align*} \limsup_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} &\left( \max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \sim p}\left[f(x_{t},s)\right] \right. - \left. \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x,s)\right] \right) \\
\leq \limsup_{T \rightarrow \infty} & \left(G \sqrt{\frac{2h(T)|\mathcal{S}|}{T}} + \frac{2G}{T}\right)= 0, \end{align*} with probability at least $1-\delta$. Thus, we can write, \begin{align} \label{eq:dyrub} \limsup_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T}& \left( \max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \sim p}\left[f(x_{t},s)\right] \right. - \left. \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x,s)\right] \right) \leq 0. \end{align} To prove the lower bound, let $\bar{x}_{t}$ be the optimal solution to the problem $\min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x,s)\right]$. Then we can write the inner term in the left hand side (LHS) in equation~\eqref{eq:dyrub} as \begin{align*} \max_{p \in \mathcal{P}_{t}}\mathbb{E}_{s \sim p}\left[f(x_{t},s)\right] - \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(\bar{x}_{t},s)\right]. \end{align*}
We know that $\bar{x}_{t}$ is the optimal solution to the problem $\min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_t} \mathbb{E}_{s \sim p}\left[f(x,s)\right]$, this means that \begin{align*} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x_t,s)\right] &\geq \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x,s)\right] \\&= \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(\bar{x}_{t},s)\right]. \end{align*} Thus we get \begin{align} \label{eq:zero_lower_bound} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(x_t,s)\right] - \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \sim p}\left[f(\bar{x}_{t},s)\right] \geq 0. \end{align}
The above lower bound and the $ \limsup $-bound~\eqref{eq:dyrub} together prove the result.
$\square$
From the above result and Theorem~\ref{Th:Regret_bound}, we can observe that the dynamic regret, i.e., the average gap between the best solution in hindsight of each round and the solution evaluated in the algorithm, decreases at a rate of $\mathcal{O}(\sqrt{h(T)}/\sqrt{T})$ and tends to zero.
Therefore, we have a performance guarantee (with sublinear regret) when solving the DRO problems approximately over time. At the same time, Algorithm~\ref{Alg:DRO_via_adversarial} is applicable to large-sized problems in contrast to the reformulated DRO problems~\eqref{Eq:DRO_ref} or~\eqref{eq:DRON_l2}.
\subsection{Bounded Path Lengths}
The following lemma provides the high-probability path length bounds for the confidence-interval, kernel-based and $\ell_2$-norm ambiguity sets.
\begin{lemma}\label{lemma:all_path_lengths}
Given ambiguity sets of the form specified in Section~\ref{sec:dddro}, we have
\begin{align*}
\frac{1}{2}\sum_{t=1}^{T}\Vert p_t - q_t \Vert^2 \leq h(T),
\end{align*}
for all $p_t\in\mathcal{P}_{t-1}, q_t \in \mathcal{P}_t$ with probability at least $1-\delta$.
The functions $h(T)$ for different categories of ambiguity sets are as given: \begin{enumerate}
\item \textbf{Confidence Intervals}:
$$h(T) = 8 |\mathcal{S}| \log(\pi T) ({2} + \log T).$$
\item \textbf{Kernel based ambiguity sets}:
$$h(T) = {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log\frac{\pi T}{\sqrt{6\delta}}(1 + \log T),$$
where $\lambda$ denotes the smallest eigenvalue of the kernel matrix $M$.
\item \textbf{$\ell_2$-norm ambiguity sets:}
$$
h(T) = 8 |\mathcal{S}|\log \frac{\pi T}{\sqrt{3 \delta}} ({2} + \log T).
$$
\end{enumerate} \end{lemma}
\proof{Proof of Lemma~\ref{lemma:all_path_lengths}.}
\textbf{Confidence Intervals}. We show in the electronic companion (Lemma~\ref{lemma:shrinking_difference}) that the ambiguity sets $\mathcal{P}_t$ derived from \eqref{Eq:Fitzpatrick} with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$ for all rounds $t=1,...,T$ fulfill
\begin{align*}
\sup_{x\in \mathcal{P}_{0}, y\in \mathcal{P}_{1}}\Vert x - y \Vert \leq \sqrt{16|\mathcal{S}| \log\pi}
\text{ and }
\sup_{x\in \mathcal{P}_{t-1}, y\in \mathcal{P}_{t}}\Vert x - y \Vert \leq \frac{ \sqrt{16|\mathcal{S}| \log(\pi {(t-1)})}}{\sqrt{{t-1}}}, \end{align*}
with a probability of at least $1-\delta$. This allows for calculating the function $h(T)$: { \begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|^2 &\leq \frac{1}{2} 16 |\mathcal{S}| \log\pi + \frac{1}{2} \sum_{t=2}^{T}16 |\mathcal{S}| \frac{\log(\pi (t-1))}{t-1}\\
&\leq 8 |\mathcal{S}| \log\pi + 8 |\mathcal{S}| \log(\pi (T-1)) \sum_{t=1}^{T-1}\frac{1}{t}\\
&\leq 8 |\mathcal{S}| \log\pi + 8 |\mathcal{S}| \log(\pi (T-1)) (1 + \log (T-1))\\
&\leq 8 |\mathcal{S}| \log(\pi T) (2 + \log T). \end{align*} } The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=1}^{T-1}(1/t) \leq 1 + \log(T-1)$.
\noindent \textbf{Kernel based ambiguity sets}. We show in the electronic companion (Lemma~\ref{lemma:kernel_shrinking_difference}) that given an ambiguity set of the form $\mathcal{P}_t = \left\{p \in \mathcal{P} \mid \|p - \hat{p}\|_M \leq \epsilon_t \right\}$ with $\epsilon_t\coloneqq \frac{\sqrt{C}}{\sqrt{t}}(2 + \sqrt{2 \log(1/\delta_t)}) $ with $\delta_t = \frac{6 \delta}{\pi^2 t^2}$ we have for $t \geq 2$,
\begin{align*}
&\sup_{x \in \mathcal{P}_{t-1}, y \in \mathcal{P}_t} \|x - y\|_2 \leq \frac{8\sqrt{C}}{\lambda \sqrt{t-1}} \sqrt{ \log(\pi t/\sqrt{6\delta})},\\
&\text{ and } \sup_{x \in \mathcal{P}_0, y \in \mathcal{P}_1} \|x - y\|_2 \leq 2+\frac{4\sqrt{C}}{\lambda}\;\; \text{ for } t = 1, \end{align*} with probability at least $1-\delta$. Calculating the function $h(T)$, we have
\begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|_2^2 &\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \sum_{t=2}^{T}\frac{32 C}{\lambda^2 ({t-1})}\log(\pi {t}/\sqrt{6\delta})\\
&\hspace{-10mm}\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log(\pi T/\sqrt{6\delta})\sum_{t=2}^{T}\frac{1}{t-1}\\
&\hspace{-10mm}\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log(\pi T/\sqrt{6\delta})(1 + \log T). \end{align*} Here, the first inequality arises from Lemma~\ref{lemma:kernel_shrinking_difference}. The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=1}^{T-1}(1/t) \leq 1 + \log(T-1)$.
\noindent \textbf{$\ell_2$-norm ambiguity sets}. We show in the electronic companion (Lemma~\ref{lemma:l2_shrinking_difference}) that given an ambiguity set of the form $\mathcal{P}_t = \left\{p \in \mathcal{P} \mid \|p - \hat{p}\|_2 \leq \epsilon_t \right\}$ with $\epsilon_t\coloneqq\sqrt{\frac{2|\mathcal{S}| \log (2/\delta_t)}{t}} $ and $\delta_t = \frac{6 \delta}{\pi^2 t^2}$, we have
$$\sup_{x \in \mathcal{P}_{0}, y \in \mathcal{P}_1} \|x - y\|_2 \leq 4\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})}
\text{ and }
\sup_{x \in \mathcal{P}_{t-1}, y \in \mathcal{P}_t} \|x - y\|_2 \leq 4\sqrt{\frac{|\mathcal{S}| \log (\pi (t-1) / \sqrt{3 \delta})}{{t-1}}},$$ with probability at least $ 1-\delta$.
For bound $h(T)$, it follows \begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|^2 &\leq \frac{1}{2}(4\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})})^2 + \frac{1}{2} \sum_{t=2}^{T}16 |\mathcal{S}| \frac{\log(\pi ({t-1})/\sqrt{3 \delta})}{t-1}\\
&\leq 8 |\mathcal{S}| \log (\pi / \sqrt{3 \delta}) + 8 |\mathcal{S}| \log(\pi T/\sqrt{3\delta}) \sum_{t=2}^{T}\frac{1}{t-1}\\
&\leq 8 |\mathcal{S}| \log(\pi T/\sqrt{3\delta}) (2 + \log T). \end{align*}
$\square$
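To see the shrinkage concretely, the radius $\epsilon_t$ of the $\ell_2$-ball above can be tabulated directly; the following sketch uses illustrative parameter values ($|\mathcal{S}|=10$, $\delta=0.1$) and is not part of the paper's implementation:

```python
import math

def l2_radius(t, n_scenarios, delta):
    """Radius eps_t = sqrt(2|S| log(2/delta_t) / t) with delta_t = 6*delta / (pi^2 t^2)."""
    delta_t = 6.0 * delta / (math.pi ** 2 * t ** 2)
    return math.sqrt(2.0 * n_scenarios * math.log(2.0 / delta_t) / t)

radii = [l2_radius(t, n_scenarios=10, delta=0.1) for t in range(1, 1001)]
# The sets shrink at rate O(sqrt(log t / t)) even though the confidence 1 - delta_t increases.
assert all(r2 < r1 for r1, r2 in zip(radii, radii[1:]))
```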
From Lemma~\ref{lemma:all_path_lengths}, we know that the shrinking ambiguity sets yield path-length bounds $h(T)$ that grow at a rate of $\mathcal{O}(\log^2 T)$. Therefore, the dynamic regret bound converges to zero at a rate of $\mathcal{O}(\log T / \sqrt{T})$, which is only slightly worse than the typical $\mathcal{O}({1}/{\sqrt{T}})$ bounds. In summary, this novel algorithm yields the remarkable benefit that
it integrates changing ambiguity sets (which represent growing knowledge of the uncertainty) into regret bounds for online robust optimization.
\section{Numerical Results}
\label{sec:num_res}
We illustrate the performance of Algorithm~\ref{Alg:DRO_via_adversarial} through numerical experiments on mixed-integer linear and quadratic programs (MIPs \& MIQPs) as well as on two real-world applications, namely distributionally robust network design \citep{network_design} and optimal route choice. These problems allow us to demonstrate the wide applicability of our novel approach, which does not assume anything about the structure of the original problem apart from the fact that it should be algorithmically tractable, with an available solution approach. We are thus able to apply DRO over time to both discrete and continuous optimization problems to account for various application types.
All computations are carried out using a Python 3.8.5 implementation on machines with Intel Core i7 CPU 2.80 GHz processor and 16 GB RAM. Each of these has four cores of 3.5 GHz each and 32 GB RAM. We utilized SCIP 7.0 as the MIP and MIQP solver \citep{scip} and IPOPT 3.13.3 \citep{ipopt} to compute the projections in Algorithm \ref{Alg:DRO_via_adversarial} as the solution of a convex optimization problem. For the ambiguity sets, we choose $1-\delta=0.9$. Different values for this parameter impact the size of the resulting sets, but have no significant impact on the ability of Algorithm~\ref{Alg:DRO_via_adversarial} to learn DRO solutions. For the kernel-based ambiguity sets, we use the Gaussian kernel function $k_M(s_i,s_j) = \exp\left({-\frac {\Vert s_i - s_j \Vert_2^2} 2}\right)$.
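The Gaussian kernel above is cheap to evaluate; the following sketch builds the Gram matrix $K$ with $K_{ij}=k_M(s_i,s_j)$ for a small illustrative scenario set (pure Python, no ML library assumed; the data are ours):

```python
import math

def gauss_kernel(si, sj):
    """k_M(s_i, s_j) = exp(-||s_i - s_j||_2^2 / 2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(si, sj))
    return math.exp(-sq_dist / 2.0)

scenarios = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
gram = [[gauss_kernel(si, sj) for sj in scenarios] for si in scenarios]

# The Gaussian kernel Gram matrix is symmetric with unit diagonal.
assert all(abs(gram[i][i] - 1.0) < 1e-12 for i in range(3))
assert all(abs(gram[i][j] - gram[j][i]) < 1e-12 for i in range(3) for j in range(3))
```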
\subsection{Benchmark Instances}\label{Sec:benchmark_instances}
To validate the performance of the method, we use publicly available instances from well known benchmark libraries MIPLIB \citep{miplib2017} and QPLIB \citep{qplib}, which contain a collection of MIPs and MIQPs, respectively. In all test cases, the uncertain objective function $f$ has the form $f(x,s) = x^\top Qx + (c+s)^\top x + d$, with $Q\in \mathds{R}^{n\times n}$, $c\in \mathds{R}^n$, $d\in \mathds{R}$, $x\in \mathcal{X}$ and $s\in \mathcal{S}$. For linear problems, we have $Q=0$. We generate different cost scenarios $s\in \mathcal{S}\subset \mathds{R}^n, n>0$ by perturbing the coefficients of variables in the objective function randomly by up to 50\%.
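Such cost scenarios can be generated as follows; a minimal sketch of the uniform-perturbation model (function name, seed, and data are illustrative, not the paper's exact generator):

```python
import random

def perturb_costs(c, n_scenarios, max_rel=0.5, seed=0):
    """Generate cost scenarios by perturbing each coefficient by up to +/- max_rel (50%)."""
    rng = random.Random(seed)
    return [
        [ci * (1.0 + rng.uniform(-max_rel, max_rel)) for ci in c]
        for _ in range(n_scenarios)
    ]

c = [4.0, -2.0, 10.0]          # nominal objective coefficients
S = perturb_costs(c, n_scenarios=10)
# Every perturbed coefficient stays within 50% of its nominal value.
assert all(abs(s_i - c_i) <= 0.5 * abs(c_i) + 1e-12 for s in S for s_i, c_i in zip(s, c))
```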
We sort the problems in the libraries by increasing number of variables and choose the first 15 MIPs and the first 10 convex MIQPs with bounded variables that can be solved within an hour of CPU time. All instances are listed in the electronic companion. The number of scenarios in the linear test cases is set to $|\mathcal{S}| \in \{10, 50\}$. As the MIQP instances are more difficult to solve, we set
$|\mathcal{S}|=2$.
We numerically evaluate the dynamic regret from Theorem \ref{Th:Regret_bound} and the worst-case expected value of solutions from Algorithm~\ref{Alg:DRO_via_adversarial} in comparison with the DRO solution obtained via reformulation \eqref{Eq:DRO_ref}. In each round, one realization of the uncertain parameter is revealed. These realizations are drawn randomly according to the initially generated probability distribution. Thus, the number of data points equals the number of rounds.
\begin{figure}
\caption{Results for \emph{blend2} with $|\mathcal{S}|=10$ and $T=10000$.}
\label{fig:mip_regret1}
\label{fig:dynamic_regret}
\label{fig:neosregret}
\end{figure}
Figure~\ref{fig:mip_regret1} shows the worst-case (or guaranteed)
expected objective value in each round of the online algorithm for $T=10000$ rounds on instance \emph{blend2} from MIPLIB with $|\mathcal{S}|=10$. Results are shown for all three types of ambiguity sets (confidence intervals, $\ell_2$-norm, kernel-based), evaluated under the assumed ambiguity set in each period. The lines represent the corresponding average values over time. One can observe that the confidence intervals (red) yield the fastest convergence and lead to the solutions with the lowest costs. In Figure~\ref{fig:dynamic_regret} and Figure~\ref{fig:neosregret}, we plot the error between the worst-case protection of the online solutions and that of the DRO solutions and the SO solutions, respectively.
It can be observed that the distance between the lines decreases over the rounds. This means that the average error of solving the DRO problem with Algorithm~\ref{Alg:DRO_via_adversarial} shrinks, i.e., we learn the robust solution rapidly. Both solutions also converge to the true stochastic solution. Since the confidence-interval ambiguity sets yield better performance and shrink faster, we focus on interval sets for the remaining experiments.
Next, we evaluate the running-time performance of Algorithm~\ref{Alg:DRO_via_adversarial}. Table~\ref{table:avg_times_mip} shows the average running times per iteration over all instances for different scenario counts and ambiguity sets. It is obvious from Table~\ref{table:avg_times_mip} that for large and difficult problems (such as nonlinear mixed-integer optimization problems), using Algorithm~\ref{Alg:DRO_via_adversarial} allows for significant time savings. This becomes more and more pronounced for an increasing number of iterations, as the time savings multiply by the number of rounds.
\addtolength{\tabcolsep}{-1pt} \begin{table}[htb]
\centering
\begin{small}
\begin{sc}
\begin{tabular}{lrcc}
\toprule
& $|\mathcal{S}|$ &\begin{tabular}{c}Online\\ Robust\end{tabular} & \begin{tabular}{c}Exact\\ DRO\end{tabular}\\
\midrule
MIP (I) & $10$ & 52.4{s} & 115.8{s}\phantom{$^*$}\\
MIP ($\ell_2$) & $10$ & 49.4{s} & 127.5{s}\phantom{$^*$}\\
MIP (K) & $10$ & 56.3{s} & 129.5{s}\phantom{$^*$}\\
MIP (I) & $50$ & 57.7{s} & 176.7{s}$^*$\\
MIP ($\ell_2$) & $50$ & 60.4{s} & 206.1{s}$^*$\\
MIP (K) & $50$ & 67.0{s} & 244.4{s}$^*$\\
MIQP (I) & $2$ & 170.2{s} & 271.4{s}$^*$\\
MIQP ($\ell_2$) & $2$ & 186.3{s} & 329.5{s}$^*$\\
MIQP (K) & $2$ & 188.6{s} & 359.6{s}$^*$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\caption{\label{table:avg_times_mip}Average running times per iteration. I:Interval, K:Kernel. $(^*)$ The DRO problems \emph{k16x240b} and \emph{10004} could not be solved within a runtime limit of one hour.} \end{table}
\addtolength{\tabcolsep}{1pt}
In more detail, in 14 out of 15 MIP instances and in 9 out of 10 MIQP ones, Algorithm~\ref{Alg:DRO_via_adversarial} was able to run an iteration on average significantly faster than solving reformulation \eqref{Eq:DRO_ref}. The impact of different ambiguity sets on the solution times is negligible for our approach. However, increasing the number of scenarios amplifies the size of the reformulated DRO problem and thus results in challenging problems for which the online algorithm is considerably more efficient.
The main advantage of Algorithm~\ref{Alg:DRO_via_adversarial} is that we avoid solving the full problem~\eqref{Eq:DRON} in each round, which can be algorithmically challenging. Instead, only an SO problem and a convex projection problem are solved in the learning decomposition approach. The computational results show that our approach is able to generate high-quality solutions within a short running time.
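In the experiments, the projection step is computed with IPOPT as a general convex program; for the special case of projecting onto the probability simplex alone, a classical sorting-based routine suffices. The following sketch is purely illustrative and is not the paper's implementation:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto {p : p >= 0, sum(p) = 1} (sorting-based)."""
    n = len(v)
    u = sorted(v, reverse=True)
    css = 0.0
    rho, rho_css = 0, u[0]
    for i in range(n):
        css += u[i]
        # Largest index i with u[i] - (sum_{j<=i} u[j] - 1)/(i+1) > 0.
        if u[i] - (css - 1.0) / (i + 1) > 0:
            rho, rho_css = i, css
    theta = (rho_css - 1.0) / (rho + 1)
    return [max(x - theta, 0.0) for x in v]

p = project_to_simplex([0.5, 0.8, -0.1])
assert abs(sum(p) - 1.0) < 1e-12 and all(x >= 0.0 for x in p)
```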
\subsubsection{Comparison with Other Methods}\label{Sec:comparison_with_diferent_methods}
In this section, we compare the performance of our novel algorithm against other approaches that can solve DRO problems. Specifically, we focus on the ambiguity sets outlined by~\citet{esfahani2018data} (Wasserstein) and~\citet{KirschnerBogunovicJegelkaKrause2020} (Distributionally Robust Bayesian Optimization, DRBO), which leverage Wasserstein and kernel-based ambiguity sets, respectively. We implement the Wasserstein and DRBO approaches as outlined in the respective papers, with an exact computation of the corresponding distributionally robust optimization problem. In the numerical experiments, we observe that the interval ambiguity sets yield solutions with a worst-case expected objective comparable to the solutions generated by the Wasserstein and DRBO methods. We also compare our results with the worst-case performance of the stochastic optimum using the Maximum Likelihood Estimator (MLE) in each round (\emph{running SO}).
This solution converges to the true stochastic optimum in the limit ($T\rightarrow \infty$) but is not protected against ambiguity and, in contrast to distributionally robust solutions, has no worst-case guarantees.
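The \emph{running SO} baseline only requires the empirical scenario frequencies; a minimal sketch (the function name is ours):

```python
from collections import Counter

def mle_distribution(observations, n_scenarios):
    """Maximum likelihood estimate of a multinomial: empirical frequencies."""
    counts = Counter(observations)
    t = len(observations)
    return [counts[s] / t for s in range(n_scenarios)]

# Five observed scenario indices out of |S| = 3 scenarios.
p_hat = mle_distribution([0, 2, 2, 1, 2], n_scenarios=3)
assert p_hat == [0.2, 0.2, 0.6]
```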
\begin{figure}
\caption{Results for \emph{supportcase16} ($|\mathcal{S}|=20$, $T=100$).}
\label{Fig:Comparison_with_other_ambiguity_sets}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{lll}
\toprule
& $|\mathcal S| = 10$ & $|\mathcal S| = 50$ \\
\midrule
DRO & 2.2s & 3.5s \\
Wasserstein & 0.4s & 1.1s \\
DRBO & 7.9s & 30.9s \\ \midrule
Online robust & 0.3s & 0.4s \\
Running SO & 0.2s & 0.2s \\
\bottomrule
\end{tabular}
\caption{Avg.\ running times for \emph{supportcase16}, $T=100$.}
\label{Table:running_times_different_methods} \end{table}
As a representative example, Figure~\ref{Fig:Comparison_with_other_ambiguity_sets} illustrates these results for the \emph{supportcase16} instance from MIPLIB with $|\mathcal{S}|=10$ and $T=100$. For all methods, the worst-case expected objective value shrinks over time with more data.
The online and exact solutions yield similar worst-case protection as the other methods. However, the online robust solution can be calculated more efficiently than the other DRO methods, cf.\ Table~\ref{Table:running_times_different_methods}. Only the stochastic solutions with the MLE can be computed faster. However, they do not have solution-quality guarantees under ambiguity
and may lead to a bad worst-case objective. Thus, in total, the online robust method is preferable. The average running times over all instances are given in Table~\ref{Table:running_times_different_methods_extended} and support these observations. Indeed, using the online robust approach, all instances can be solved quickly within the time limit, whereas the other approaches take considerably longer and may already reach the time limit for some instances. \begin{table}[h]
\centering
\begin{tabular}{lccc}
\toprule
&\begin{tabular}{c}MIP\\ $ |\mathcal S| = 10$\end{tabular}
&\begin{tabular}{c}MIP\\ $ |\mathcal S| = 50$\end{tabular}
&\begin{tabular}{c}MIQP\\ $ |\mathcal S| = 2$\end{tabular} \\
\midrule
DRO & 45.6s\phantom{$^{**}$} & 55.9s$^*$\phantom{$^{*}$} & 271.4s$^*$\phantom{$^{*}$} \\
Wasserstein & 52.3s\phantom{$^{**}$} & 59.1s\phantom{$^{**}$} & 299.9s\phantom{$^{**}$} \\
DRBO & 42.7s$^{**}$ & 66.1s$^{**}$ & 738.3s$^{*}$\phantom{$^{*}$} \\ \midrule
Online robust & 26.8s\phantom{$^{**}$} & 27.1s\phantom{$^{**}$} & 170.2s\phantom{$^{**}$} \\
Running SO & 26.6s\phantom{$^{**}$} & 26.9s\phantom{$^{**}$} & 172.6s\phantom{$^{**}$} \\
\bottomrule
\end{tabular}
\caption{Avg.\ running times for benchmark instances. ($^*$) One testcase could not be solved within one hour. ($^{**}$) Four testcases could not be solved within one hour.}
\label{Table:running_times_different_methods_extended} \end{table}
\subsection{Network Design under Uncertainty} In addition to solving classical benchmark instances as in the previous section, we next study the solution of a practically very relevant and highly challenging combinatorial optimization problem under uncertainty.
Given an undirected graph $\mathcal{G}=(V,E)$, the goal of robust network design \citep{CacchianiJuengerLiersetal.2016} is to compute a minimal-cost network topology together with the corresponding edge capacities $f\in \mathds{Z}_+^{|E|}$ in order to fulfill a given demand $b\in \mathds{Z}^{|V|}$. The demand is assumed to be uncertain and an element of the scenario set $\mathcal{S}\coloneqq \{ b_1,...,b_{|\mathcal{S}|} \} \subset \mathds{Z}^{|V|}$ with (unknown) probability vector $p^*\in [0,1]^{|\mathcal{S}|}$. For every edge $\{i,j\}\in E$ and every scenario $s\in \mathcal{S}$, we are given costs $c_{ijs}>0$ and flow capacities $d_{ijs},\bar{d}_{ij}>0$. In this practical application, it can be assumed that additional information on the demand distributions becomes available over time, so that a DRO over time approach is a very natural modelling choice.
The optimization problem for distributionally robust network design is then given by \begin{align*}
\min_{f} &\text{ } \max_{p\in \mathcal{P}} \text{ } \sum_{s\in \mathcal{S}} \sum_{\{i,j\}\in E} c_{ijs} \left( f_{ijs} + f_{jis} \right)p_s \\
\text{ s.t. } &\sum_{j:\{j,i\}\in E} f_{jis} - \sum_{j:\{i,j\}\in E} f_{ijs} = b_{is} \quad \forall i \in V, s\in \mathcal{S}, \\
&\sum_{s\in \mathcal{S}} (f_{ijs} + f_{jis}) \leq \bar{d}_{ij} \quad \forall \{i,j\}\in E, \\
&f_{ijs} + f_{jis} \leq d_{ijs} \quad \forall \{i,j\}\in E , s\in \mathcal{S},\\
&f_{ijs} \in \mathds{Z}_+ \quad\quad \forall \{i,j\}\in E, s\in \mathcal{S}. \end{align*} Here, the objective minimizes the worst-case expected flow costs. The first constraint ensures that demands are satisfied. The second and third constraints limit the flows to be within arc capacities, and the final constraint ensures integral flows.
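For confidence-interval (box) ambiguity sets, the inner maximization over $p$ for a fixed flow is a linear program over the intersection of a box with the probability simplex, which admits a simple greedy solution. The following sketch is illustrative (function name and data are ours):

```python
def worst_case_distribution(values, lower, upper):
    """Maximize sum_s values[s]*p[s] over {p : lower <= p <= upper, sum(p) = 1}.

    Greedy: start at the lower bounds and assign the remaining probability
    mass to the scenarios with the largest values first.
    """
    p = list(lower)
    remaining = 1.0 - sum(lower)
    for s in sorted(range(len(values)), key=lambda s: -values[s]):
        shift = min(upper[s] - lower[s], remaining)
        p[s] += shift
        remaining -= shift
    return p

# Scenario costs 3, 1, 2 with interval bounds [0.1, 0.6] per scenario.
p = worst_case_distribution([3.0, 1.0, 2.0], [0.1, 0.1, 0.1], [0.6, 0.6, 0.6])
assert abs(sum(p) - 1.0) < 1e-12
```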
We evaluate the novel approach on the instances \emph{res8} ($|V|=50,
|E|=77$), \emph{w1$\_$100} ($|V|=100, |E|=207$) and \emph{w1$\_$200} ($|V|=200, |E|=775$) from \cite{network_design_instances} and construct the scenarios as follows: On half the nodes, we place balanced random demands from $\{-10,...,10\}$. We also restrict all flow capacities to $d_{ijs}=10$, where edge costs and coupling capacity bounds are uniformly chosen from $c_{ijs} \in
\{1,...,10\}$ and $\bar{d}_{ij}\in\{|\mathcal{S}|,...,10|\mathcal{S}|\}$.
\begin{figure}
\caption{Results for \emph{res8} with $|\mathcal{S}|=10$ and $T = 10000$.}
\label{fig:mip_regret1_network_design_full_regret}
\end{figure}
In Figure~\ref{fig:mip_regret1_network_design_full_regret}, the worst-case expectation of solutions for instance \emph{res8} with $|\mathcal{S}|=10$ using confidence intervals is illustrated over $10000$ rounds on logarithmic axes. The online robust solution rapidly converges to the DRO solution. Though starting with conservative outcomes due to limited data, the online solutions improve very quickly.
The running time benefit of the online robust approach is clearly visible in Table~\ref{table:avg_times_network_design}. For larger instances, it is about 272 times faster than solving reformulation \eqref{Eq:DRO_ref}. In summary, our method is able to generate robust solutions with shorter running time. \begin{table}[htb]
\centering
\begin{sc}
\begin{tabular}{lrcc}
\toprule
& $|\mathcal{S}|$&Online Robust & Exact DRO \\
\midrule
res8& $10$ & 0.2 {s}& 0.5 {s} \\
res8& $50$& 0.6 {s} & 11.6 {s}\\
w1\_100& $10$& 0.3 {s} & 32.0 {s} \\
w1\_100& $50$& 1.5 {s} & 95.6 {s} \\
w1\_200& $10$& 1.2 {s} & 38.7 {s} \\
w1\_200& $50$& 4.7 {s} & 1282.2 {s} \\ \bottomrule
\end{tabular}
\end{sc}
\caption{\label{table:avg_times_network_design}Average running times per iteration using confidence intervals.} \end{table}
\subsection{Learning Optimal Route Choice}
In this section, we consider the problem of choosing the shortest paths in a street network where the travel times on the arcs are affected by random deviations. It is natural to assume that the driver gradually adapts the route according to the observed travel times in order to reach the destination in the shortest possible (expected) time, making a DRO over time approach a good modeling choice.
This means that the driver solves the shortest-path problem on a directed graph $ G = (V, A) $ with uncertain travel times $ c\colon A \to \mathds{R}_+ $. Let $ v_1, v_2 \in V $ be the origin and the destination, respectively. We assume that there is a finite set of traffic scenarios~$ {\mathcal{S}} = \{c_1, c_2, \ldots, c_{\abs{\mathcal{S}}}\} \subset \mathds{R}_+^{\abs{A}} $ which correspond to different realizations of the travel times on the arcs, each materializing with an unknown probability $ p^*_k \in [0, 1] $, $ k = 1,...,\abs{\mathcal{S}} $.
In each round~$ t=1, \ldots, T $ (e.g.\ every morning when driving to work), the driver chooses a $ v_1 $-$ v_2 $-route given by the vector $ x_t \in \{0, 1\}^{\abs{A}} $, which encodes the arcs traveled along the chosen path in that round. The expected travel time in round~$ t $ is then given by $ \sum_{s \in \mathcal{S}} p^*_s \langle c_s, x_t \rangle $. As the true scenario distribution is unknown, the driver is assumed to solve the distributionally robust shortest-path problem in an online fashion, i.e.\ using Algorithm~\ref{Alg:DRO_via_adversarial}.
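Given an estimate $\hat{p}$ of the scenario distribution, the stochastic route choice reduces to a deterministic shortest-path problem with mixed arc costs $\sum_{s} \hat{p}_s c_{s,a}$; a minimal Dijkstra sketch on a toy network (all names and data are illustrative):

```python
import heapq

def dijkstra(n_nodes, arcs, cost, source, target):
    """Shortest-path length w.r.t. cost[a] on a directed graph given as an arc list."""
    adj = [[] for _ in range(n_nodes)]
    for a, (u, v) in enumerate(arcs):
        adj[u].append((v, cost[a]))
    dist = [float("inf")] * n_nodes
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[target]

# Two travel-time scenarios on a toy network, mixed with estimated probabilities.
arcs = [(0, 1), (1, 3), (0, 2), (2, 3)]
scenarios = [[1.0, 1.0, 2.0, 2.0], [4.0, 4.0, 1.0, 1.0]]
p_hat = [0.5, 0.5]
expected_cost = [sum(p * c[a] for p, c in zip(p_hat, scenarios)) for a in range(len(arcs))]
assert dijkstra(4, arcs, expected_cost, 0, 3) == 3.0  # route 0 -> 2 -> 3
```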
\begin{figure}
\caption{The outcome of Algorithm~\ref{Alg:DRO_via_adversarial} for learning an optimal route choice in terms of solution quality over time for $T=5000$.}
\label{Fig:ShortestPathProblem-22}
\label{Fig:ShortestPathProblem-21}
\label{Fig:ShortestPathProblem}
\end{figure} \begin{figure}
\caption{The outcome of Algorithm~\ref{Alg:DRO_via_adversarial} for single runs that correspond to the routes given in Figure~\ref{Fig:ShortestPathProblem-Paths}.\\}
\label{Fig:ShortestPathProblem-SC}
\label{Fig:ShortestPathProblem-Original}
\label{Fig:ShortestPathProblemb}
\end{figure}
In the following, we analyse the outcome of this experiment on an aggregated version of the real-world city network of Chicago. It is available as instance \emph{ChicagoSketch} in Ben Stabler's library of transportation networks \citep{bstabler} and has 933~nodes and 2950~arcs (of which we ignore the 387~nodes representing \qm{zones} as well as their incident arcs). In this data set, each arc~$a$ has a certain free-flow time $ c_{\text{free}, a} $, which we assume to be the uncongested travel time. In addition, we generate nine congestion scenarios by perturbing $ c_{\text{free}, a} $. We first choose~$ v_1 $ and~$ v_2 $ such that the driven path spans the entire extract of the city map. Then, for all arcs~$a$, we draw $ c_{s, a} $ uniformly from $ [0, 2c_{\text{free}, a}]$. Finally, we draw a random \qm{true} probability distribution~$ p^* \in \mathcal{P}$ uniformly.
For the above setup, we use Algorithm~\ref{Alg:DRO_via_adversarial} in order to let the driver iteratively adapt to the dynamically changing travel times. In Figure~\ref{Fig:ShortestPathProblem}, we illustrate the solution quality over time. In order to show the stability of our algorithm, we additionally repeat the experiments ten times and plot their mean solution quality as well as their standard deviation. We observe that the online average and the DRO average jointly converge towards the expected value of the minimum expected travel time. The regret tends to zero in the long run.
As an illustrative example, in Figures~\ref{Fig:ShortestPathProblem-SC}~and~\ref{Fig:ShortestPathProblem-Original}, one specific run is plotted to evaluate it in more detail. In these plots, one can see that in each of the rounds~1,~349,~860 and~3661, long stretches of the chosen path are abruptly improved. There are also visible jumps in the online robust solution quality.
\begin{figure}\label{Fig:ShortestPathProblem-Paths-Round-True0}
\label{Fig:ShortestPathProblem-Paths-Round0}
\label{Fig:ShortestPathProblem-Paths-Round1}
\end{figure} \begin{figure}
\caption{Map (a) shows the path of the true stochastic solution in green. Maps (b)--(f) show the path taken by the online solution in those rounds in which the solution changes. If parts of the path in these pictures coincide with the path of the true stochastic solution, it is shown in green, otherwise the difference is shown in pink. The pictures show the convergence of the online solution to the true stochastic solution as time moves on.}
\label{Fig:ShortestPathProblem-Paths-Round349}
\label{Fig:ShortestPathProblem-Paths-Round1879}
\label{Fig:ShortestPathProblem-Paths-Round3661}
\label{Fig:ShortestPathProblem-Paths}
\end{figure}
In Figure~\ref{Fig:ShortestPathProblem-Paths}, we depict how the route choice evolves over the rounds $t=1,\ldots,T$, introducing a new picture whenever a structurally new solution is found. At the beginning, the driver takes the nominally optimal path, i.e.\ the optimal path w.r.t.\ the unperturbed cost vector. We see two subtours in which the path differs from the true stochastic solution. In the following iterations, there remain two deviating subtours, but they change slightly in round~1 and round~349. In round~1879, one of these subtours disappears, and after round~3661 the solution coincides with the true stochastic optimum.
Altogether, this example shows how DRO can be used to improve performance in the face of uncertainty by leveraging information arriving over time. At the beginning, with no or little information available, hedging against uncertainty necessarily means to implement conservative solutions. However, as more information on the uncertainty is gathered over time, the solution quality improves as protection against uncertainty is less costly.
\section{Conclusion}\label{sec:conclusion}
We introduce a novel method for decision-making under uncertainty over time, employing a combination of distributionally robust optimization and online learning. In each iteration, our algorithm solves a stochastic optimization problem in combination with the online gradient descent algorithm. We show that our online algorithm converges to the exact solution of the DRO problem as the number of iterations increases. We also show that the DRO solution converges to the true SO optimum in the limit.
The theoretical and numerical results
demonstrate the effectiveness of this method. Indeed, it obtains high-quality robust solutions with short computational times.
Though our work is tailored to discrete scenarios of finite dimension and requires solving the outer minimization problem exactly, it can be applied to a wide range of practical problems with varying uncertainty models and objective function structures.
Furthermore, our flexible framework can be extended, either by incorporating more stochastically expressive ambiguity sets (e.g. for continuous distributions) or using online methods also for the decision problem of the $x$-player.
\section{Acknowledgments}
We are grateful to Daniela Bernhard for proofreading parts of the manuscript. We also thank the DFG for their support within Projects B06 and B10 in CRC TRR 154, as well as within Project-ID 416229255 - SFB 1411. This work has been supported by grant 03EI1036A from the Federal Ministry for Economic Affairs and Energy, Germany.
\section*{Appendix} Here we present the proofs omitted from the main body of the paper as well as additional numerical experiments.
\subsection{Proofs of Theorem~\ref{thm:asymp_consis} and Theorem~\ref{thm:soln_conv}}\label{Appendix:proof_theorem_2_1_and_2_2} In the following, we prove that the sequence of solutions to problem~\eqref{Eq:DRON} converges to a solution of the true stochastic optimization problem~\eqref{Eq:Stoch_nominal}. These results are an adaptation of the proofs presented in~\citet{MohajerinEsfahani2018} to our setting.
\begin{lemma}
\label{thm:dist_incl}
Given the true distribution $p^*$ and the ambiguity set $\mathcal{P}_t$ at any time $t$, we have
\[\mathbb{P}\left[p^* \in \mathcal{P}_t\right] \geq 1 - \delta_t \quad \text{ for all } t=1,\dots,T.\] \end{lemma} \proof{Proof:}
This is true by the construction of the set $\mathcal{P}_t$, which is such that it contains the true distribution with probability at least $1-\delta_t$.
$\square$
\begin{lemma}[Finite sample guarantee]
Given a solution $x_t$ to problem~\eqref{Eq:DRON}, it holds that
\label{thm:finite_guaran}
\[\mathbb{P}\left[\mathbb{E}_{s \sim p^*}[f(x_t, s)] \leq \widehat{J}_t\right] \geq 1 - \delta_t~\text{for all}~t=1,\dots,T.\] \end{lemma} \proof{Proof:}
From Lemma~\ref{thm:dist_incl}, we know that $p^* \in \mathcal{P}_t$ with probability at least $1-\delta_t$.
Thus, we have
\[
\mathbb{E}_{s \sim p^*}[f(x_t, s)] \leq \max_{p \in \mathcal{P}_t} \mathbb{E}_{s \sim p} [f(x_t,s)], \]
with probability at least $1-\delta_t$.
The right-hand side of the above inequality is precisely the definition of $\widehat{J}_t$.
Thus,
\[\mathbb{E}_{s \sim p^*}[f(x_t, s)] \leq \widehat{J}_t,\]
with probability at least $1 - \delta_t$.
$\square$
\begin{lemma}[Borel-Cantelli Lemma]
\label{lem:bclemma}
Let $ E_1, E_2, \ldots $ be a sequence of events.
If $ \sum_{i = 1}^{\infty} P(E_i) < \infty $ then
\[P[\text{an infinite number of } E_i \text{ occur}] = 0.\] \end{lemma}
\begin{lemma}[Convergence of Distributions]
\label{lem:conv_of_distr}
Given the ambiguity set $\mathcal{P}_t$, it holds that
\[\lim_{t \rightarrow \infty} \sup_{p \in \mathcal{P}_t} \|p - p^*\|_2 = 0 \text{ with probability } 1.\] \end{lemma} \proof{Proof:}
From Lemma~\ref{Lemma:shrinking_interval},~\ref{lemma:l2_shrinking_difference} and~\ref{lemma:kernel_shrinking_difference} we know that for any of the three given types of ambiguity sets there exists a function $r(t)$ which satisfies \[\mathbb{P} \left[\sup_{p \in \mathcal{P}_t}\|p - p^*\|_2 \leq r(t)\right] \geq 1- \delta_t,\]
and $\lim_{t \rightarrow \infty}r(t) = 0$.
\\
This means that
$$\mathbb{P} \left[\sup_{p \in \mathcal{P}_t}\|p - p^*\|_2 - r(t) > 0\right] \leq \delta_t.$$
By construction, it follows that $\sum_{t=1}^{\infty}\delta_t < \infty$ (as $\delta_t = \frac{6 \delta}{\pi^2 t^2}$).
Then the Borel-Cantelli Lemma~\ref{lem:bclemma} implies that
$$\mathbb{P} \left[\lim_{t \rightarrow \infty} \sup_{p \in \mathcal{P}_t}\|p - p^*\|_2 - r(t) \leq 0\right] = 1.$$
Since $\lim_{t \rightarrow \infty} r(t) = 0$ and $\|p - p^*\|_2 \geq 0$, this means that
\[\lim_{t \rightarrow \infty} \sup_{p \in \mathcal{P}_t} \|p - p^*\|_2 = 0 \text{ with probability } 1.\]
$\square$
\proof{Proof of Theorem~\ref{thm:asymp_consis}}
We know that $x_t \in \mathcal{X}$ and $J^* \leq \mathbb{E}_{s \sim p^*}[f(x_t, s)]$, as $x_t$ is a feasible but possibly suboptimal solution.
Applying Lemma~\ref{thm:finite_guaran}, we obtain
\[
\mathbb{P} \left[J^* \leq \mathbb{E}_{s \sim p^*}[f(x_t, s)] \leq \widehat{J}_t\right] \geq \mathbb{P} \left[p^* \in \mathcal{P}_t\right] \geq 1 - \delta_t.
\]
Since $\sum_{t=1}^{\infty} \delta_t < \infty$, by the Borel-Cantelli lemma,
\[
\mathbb{P} \left[J^* \leq \lim_{t \rightarrow \infty}\mathbb{E}_{s \sim p^*}[f(x_t, s)] \leq \lim_{t \rightarrow \infty}\widehat{J}_t\right] = 1.
\]
Let $\gamma \geq 0$.
Since $\mathcal{X}$ is compact, there exists a $\gamma$-optimal solution $x^\gamma$ to the stochastic problem, i.e.,
\[\mathbb{E}_{s \sim p^*} [f(x^\gamma, s)] \leq J^* + \gamma.\]
Let $p_t^\gamma \in \mathcal{P}_t$ be a $\gamma$-optimal distribution for $x^\gamma$, i.e.
\[\sup_{p \in \mathcal{P}_t} \mathbb{E}_{s \sim p} [f(x^\gamma, s)] \leq \mathbb{E}_{s \sim p_t^\gamma} [f(x^\gamma, s)] + \gamma.\]
Then we can write
\begin{align*}
&\limsup_{t \rightarrow \infty} \widehat{J}_t \\
\leq &\limsup_{t \rightarrow \infty} \sup_{p \in \mathcal{P}_t} \mathbb{E}_{s \sim p}[f(x^\gamma, s)] \\
\leq &\limsup_{t \rightarrow \infty} \mathbb{E}_{s \sim p_t^\gamma}[f(x^\gamma, s)] + \gamma \\
= &\limsup_{t \rightarrow \infty} \mathbb{E}_{s \sim p^*}[f(x^\gamma, s)] + \sum_{s \in \mathcal{S}} f(x^\gamma, s)(p_{st}^\gamma - p_s^*) + \gamma \\
\leq & \limsup_{t \rightarrow \infty} \mathbb{E}_{s \sim p^*}[f(x^\gamma, s)] + G \|p_{t}^\gamma - p^*\|_2 + \gamma \\
= & \mathbb{E}_{s \sim p^*}[f(x^\gamma, s)] + \gamma \quad \text{ w.p. } 1\\
= & J^* + 2\gamma \quad \text{ w.p. } 1,
\end{align*}
where the first inequality holds by the definition of $\widehat{J}_t$, the second inequality by the definition of $p_t^\gamma$, the first equality by adding and subtracting $p^*$, and the third inequality since $|f(x,s)| \leq G \text{ for all } (x,s) \in \mathcal{X} \times \mathcal{S}$ by assumption.
The final two equalities hold because of Lemma~\ref{lem:conv_of_distr} and the definition of $x^\gamma$, respectively.
With this we conclude that $\limsup_{t \rightarrow \infty} \widehat{J}_t \leq J^*$.
Along with the earlier assertion of $J^* \leq \lim_{t \rightarrow \infty} \widehat{J}_t$, we can now complete the proof that $\widehat{J}_t \rightarrow J^*$ via the sandwich argument.
$\square$
\proof{Proof of Theorem~\ref{thm:soln_conv}}
Let $\{s_t\}_{t=1}^{\infty}$ be any sequence of scenario realizations such that
$\lim_{t \rightarrow \infty}\widehat{J}_t = J^*$.
By Theorem~\ref{thm:asymp_consis}, we have $J^* \leq \mathbb{E}_{s \sim p^*} [f(x_t, s)] \leq \widehat{J}_t$ with probability $1$.
By the same theorem, we also know that $\lim_{t \rightarrow \infty} \widehat{J}_t = J^* \text{ w.p. } 1$.
Then, we can write
\begin{equation}
\label{eq:acc_obj_bound}
\liminf_{t \rightarrow \infty} \mathbb{E}_{s \sim p^*} [f(x_t, s)] \leq \liminf_{t \rightarrow \infty} \widehat{J}_t = J^*.
\end{equation}
Consider any limit point of the sequence $\{x_t\}_{t=1}^{\infty}$.
Since the set $\mathcal{X}$ is compact, there exists a limit point of $\{x_t\}_{t=1}^{\infty}$ that lies in $\mathcal{X}$.
Without loss of generality, let $x^*$ be that point and write $\liminf_{t \rightarrow \infty} x_t = x^*$.
Then we have
\begin{align*}
J^* &\leq \mathbb{E}_{s \sim p^*} [f(x^*, s)] \\
&= \mathbb{E}_{s \sim p^*} [\liminf_{t \rightarrow \infty}f(
x_t, s)] \\
&= \sum_{s \in \mathcal{S}} \liminf_{t \rightarrow \infty}f(
x_t, s)p_s^* \\
&= \liminf_{t \rightarrow \infty} \sum_{s \in \mathcal{S}} f(
x_t, s)p_s^* \leq J^*,
\end{align*}
where the first inequality holds because $x^* \in \mathcal{X}$, and the first equality holds as $\liminf_{t \rightarrow \infty} x_t = x^*$ and $f(x,s)$ is continuous in $x$.
The second equality rewrites the expectation as a sum, the third equality exchanges limit and sum since $\mathcal{S}$ is finite, and the final inequality holds because of~\eqref{eq:acc_obj_bound}.
Thus, we have $ \mathbb{E}_{s \sim p^*} [f(x^*, s)] = J^*$ which completes the proof.
$\square$
\subsection{Proofs of Dynamic Regret Bounds}\label{App:Dynamic_regret} In order to prove the dynamic regret bound of Theorem~\ref{Th:Regret_bound}, we first show that the ambiguity sets shrink at a rate of $\mathcal{O}(\sqrt{\log T / T})$ while containing the true data-generating distribution with high confidence.
A crucial point for a shrinking dynamic regret bound is that the ambiguity sets shrink over time. Our ambiguity sets are constructed with increasing confidence probabilities for the multinomial distribution.
We show that the ambiguity sets shrink even though the confidence $1-\delta_t$ increases ($\delta_t = \frac{6 \delta}{\pi^2 t^2}$).
\begin{lemma}\label{lemma:bounded_tail_gaussian}
For the upper $(1-\frac{\delta_t}{2})$-percentile $z_{\frac{\delta_t}{2}}$ of the standard normal distribution with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$, it follows that
\begin{align*}
z_{\frac{\delta_t}{2}}^2 \leq 4 \log(\pi t),
\end{align*}
for all rounds $t=1,...,T$. \end{lemma} \proof{Proof:}
By the definition of the standard normal distribution, for the upper percentile $1-\frac{\delta_t}{2}$, we have the Gaussian tail bound
\begin{align*}
&1- \frac{\delta_t}{2} \leq e^{ -\frac{1}{2}z_{\frac{\delta_t}{2}}^2 }
\\
\implies \quad &\log(1-\frac{\delta_t}{2}) \leq - \frac{z_{\frac{\delta_t}{2}}^2}{2}
\\
\implies \quad &z_{\frac{\delta_t}{2}}^2 \leq - 2\log(1-\frac{\delta_t}{2})
\\=&-2 \log\left( \frac{2\pi^2 t^2 - 6\delta}{2\pi^2 t^2} \right)\\
= &2 \log\left( \frac{\pi^2 t^2}{\pi^2 t^2 - 3\delta} \right),
\end{align*}
for all rounds $t=1,...,T.$
Since $\pi^2t^2-3\delta \geq \pi^2 - 3 > 1$ for all rounds $t=1,...,T$ and $\delta\in(0,1)$, we can write
\begin{align*}
z_{\frac{\delta_t}{2}}^2 \leq 2\log(\pi^2 t^2)\leq 4 \log(\pi t),
\end{align*}
for all $t=1,...,T$.
$\square$
We now prove that the ambiguity sets $\mathcal{P}_t$ shrink at a rate of $\mathcal{O}(\sqrt{\log t /t})$ even as the confidence requirement $1-\delta_t$ increases.
Here, we provide a proof for ambiguity sets as defined by confidence intervals. The proofs for the other sets are provided in the electronic companion.
\begin{lemma}[Confidence Interval Sets]\label{lemma:shrinking_difference}
The ambiguity sets $\mathcal{P}_t$ derived from \eqref{Eq:Fitzpatrick} with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$ for all rounds $t=1,...,T$ fulfill
\begin{align*}
\sup_{x\in \mathcal{P}_{0}, y\in \mathcal{P}_{1}}\Vert x - y \Vert \leq \sqrt{16|\mathcal{S}| \log\pi}
\text{ and }
\sup_{x\in \mathcal{P}_{t-1}, y\in \mathcal{P}_{t}}\Vert x - y \Vert \leq \frac{ \sqrt{16|\mathcal{S}| \log(\pi {(t-1)})}}{\sqrt{{t-1}}},
\end{align*}
for all $t=2,...,T$ with a probability of at least $1-\delta$. \end{lemma}
\proof{Proof:} As we know from Lemma \ref{Lemma:Union_bound} that $p^* \in \bigcap_{t=0,...,T}\mathcal{P}_t$ with a probability of at least $1-\delta$, we can compute for $t=1$: \begin{align*} \sup_{x\in \mathcal{P}_{0}, y\in \mathcal{P}_{1}} \Vert x-y \Vert &= \sup_{x\in \mathcal{P}_{0}, y\in \mathcal{P}_{1}} \Vert x-p^*+p^*- y \Vert \\ &\leq \sup_{x\in \mathcal{P}_{0}} \Vert x-p^*\Vert + \sup_{y\in \mathcal{P}_{1}} \Vert p^*- y \Vert \\ &\leq \sup_{x, p\in \mathcal{P}_{0}} \Vert x-p\Vert + \sup_{p,y\in \mathcal{P}_{1}} \Vert p- y \Vert
\\ &\leq \sqrt{2} + \sqrt{4|\mathcal{S}|\log \pi} \leq \sqrt{16 |\mathcal{S}|\log \pi}, \end{align*} with a probability of at least $1-\delta$ because of $\sup_{x,y \in \mathcal{P}_{0}} \Vert x-y \Vert = \sqrt{(1-0)^2 + (0-1)^2} = \sqrt{2}$ and Lemma~\ref{Lemma:shrinking_interval}.
Similarly for $t=2,...,T$: \begin{align*} \sup_{x\in \mathcal{P}_{t-1}, y\in \mathcal{P}_{t}} \Vert x-y \Vert &= \sup_{x\in \mathcal{P}_{t-1}, y\in \mathcal{P}_{t}} \Vert x-p^*+p^*- y \Vert\\ &\leq \sup_{x\in \mathcal{P}_{t-1}} \Vert x-p^*\Vert + \sup_{y\in \mathcal{P}_{t}} \Vert p^*- y \Vert \\ &\leq \sup_{x, p\in \mathcal{P}_{t-1}} \Vert x-p\Vert + \sup_{p,y\in \mathcal{P}_{t}} \Vert p- y \Vert
\\ &\leq 2\frac{\sqrt{4 |\mathcal{S}| \log(\pi {(t-1)})}}{\sqrt{{t-1}}}, \end{align*}
with a probability of at least $1-\delta$.
$\square$
\begin{lemma}\label{Lemma:shrinking_interval}
The ambiguity sets $\mathcal{P}_t$ derived from \eqref{Eq:Fitzpatrick} with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$ for all rounds $t=1,...,T$ fulfill
\begin{align*}
\sup_{x, y\in \mathcal{P}_t} \Vert x-y \Vert \leq \frac{ \sqrt{4|\mathcal{S}|\log(\pi t)}}{\sqrt{t}}.
\end{align*} \end{lemma} \proof{Proof:}
We can compute using Lemma \ref{lemma:bounded_tail_gaussian}:
\begin{align*}
\sup_{x, y\in \mathcal{P}_{t}} \Vert x-y \Vert^2 \leq \sum_{k = 1}^{|\mathcal{S}|} (u_{kt} - l_{kt})^2 &= \sum_{k = 1}^{|\mathcal{S}|} \frac{z_{\frac{\delta_t}{2}}^2}{t}
\leq \frac{4|\mathcal{S}|\log(\pi t)}{t}.
\end{align*}
$\square$
Finally, in the following results we prove that the distance between elements of consecutive $\ell_2$-norm and kernel-based ambiguity sets $\mathcal{P}_{t-1}$ and $\mathcal{P}_{t}$, measured in the $\ell_2$-norm, also shrinks, while these sets provide tighter guarantees of inclusion for the true distribution.
\begin{lemma}[$\ell_2$-norm Sets]{\label{lemma:l2_shrinking_difference}}
Given an ambiguity set of the form $\mathcal{P}_t = \left\{p \in \mathcal{P} \mid \|p - \hat{p}\|_2 \leq \epsilon_t \right\}$ with $\epsilon_t\coloneqq\sqrt{\frac{2|\mathcal{S}| \log(2/\delta_t)}{t}} $ and $\delta_t = \frac{6 \delta}{\pi^2 t^2}$, {for $|\mathcal{S}| \geq 2$} we have
$$\sup_{x \in \mathcal{P}_{0}, y \in \mathcal{P}_1} \|x - y\|_2 \leq 4\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})}
\text{ and }
\sup_{x \in \mathcal{P}_{t-1}, y \in \mathcal{P}_t} \|x - y\|_2 \leq 4\sqrt{\frac{|\mathcal{S}| \log (\pi (t-1) / \sqrt{3 \delta})}{{t-1}}},$$
with probability at least $ 1-\delta$. \end{lemma} \proof{Proof:} {
For the case $t=1$ we have
\begin{align*}
\sup_{x \in \mathcal{P}_{0}, y \in \mathcal{P}_1} \|x - y\|_2 &\leq \sup_{x \in \mathcal{P}_{0}} \|x - p^*\|_2 + \sup_{y \in \mathcal{P}_1} \|p^* - y\|_2\\
&\leq 2 + 2\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})}\\
&\leq 4\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})}.
\end{align*}
The last inequality holds because $\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})} > 1$ for $|\mathcal{S}| \geq 2$. }
Now for the case $t > 1$, given the true distribution $p^*$, we can write
\begin{align*}
\|x - y\|_2 &\leq \|x - p^*\|_2 + \|p^* - y\|_2\\
&\leq 2 \epsilon_{{t-1}}.
\end{align*}
{
Thus,
$$
\sup_{x \in \mathcal{P}_{t-1}, y \in \mathcal{P}_t} \|x - y\|_2 \leq 4\sqrt{\frac{|\mathcal{S}| \log (\pi {(t-1)} / \sqrt{3 \delta})}{{t-1}}}.
$$
}
Here, the first inequality arises from the triangle inequality.
The second inequality is due to the fact that the true distribution is contained inside all sets $\mathcal{P}_t$ for $t=1,\dots,T$ with probability at least $1-\delta$.
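For completeness, the final constant follows by expanding $\epsilon_{t-1}$ from its definition in the lemma statement:

```latex
\begin{align*}
2\epsilon_{t-1}
&= 2\sqrt{\frac{2|\mathcal{S}|\log(2/\delta_{t-1})}{t-1}}
= 2\sqrt{\frac{2|\mathcal{S}|\log\big(\pi^2 (t-1)^2/(3\delta)\big)}{t-1}}
= 4\sqrt{\frac{|\mathcal{S}|\log\big(\pi (t-1)/\sqrt{3\delta}\big)}{t-1}}.
\end{align*}
```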
$\square$
\begin{lemma}[Kernel Based Sets]{\label{lemma:kernel_shrinking_difference}}
Given an ambiguity set of the form $\mathcal{P}_t = \left\{p \in \mathcal{P} \mid \|p - \hat{p}\|_M \leq \epsilon_t \right\}$ with $\epsilon_t\coloneqq \frac{{\sqrt{C}}}{\sqrt{t}}(2 + \sqrt{2 \log(1/\delta_t)})$ and $\delta_t = \frac{6 \delta}{\pi^2 t^2}$,
we have
\begin{align*}
&\sup_{x \in \mathcal{P}_0, y \in \mathcal{P}_1} \|x - y\|_2 \leq 2+\frac{4\sqrt{C}}{\lambda} \quad \text{for } t = 1, \text{ and}\\
&\sup_{x \in \mathcal{P}_{t-1}, y \in \mathcal{P}_t} \|x - y\|_2 \leq \frac{8\sqrt{C}}{\lambda \sqrt{{t-1}}}\sqrt{ \log(\pi {t}/\sqrt{6\delta})} \quad \text{for } t \geq 2,
\end{align*}
with probability at least $1-\delta$. \end{lemma} \proof{Proof:}
For $t=1$, we have
\begin{align*}
\|x - y\|_2 &\leq \|x - p^*\|_2 + \|p^* - y\|_2\\
&\leq 2 + \frac{1}{\lambda} \|p^* - y\|_M\\
&\leq 2 + \frac{4\sqrt{C}}{\lambda}.
\end{align*}
The second inequality comes from the definition of the set $\mathcal{P}_0$ and the definition of the norm $\|\cdot\|_M$.
For the case $t \geq 2$, we get
\begin{align*}
\|x - y\|_2 &\leq \|x - p^*\|_2 + \|p^* - y\|_2\\
&\leq \frac{1}{\lambda} (\|x - p^*\|_M + \|p^* - y\|_M)\\
&\leq \frac{4}{\lambda}\frac{\sqrt{C}}{\sqrt{{t-1}}}(1 + \sqrt{ \log(\pi ({t-1})/\sqrt{6\delta})})\\
&\leq \frac{4}{\lambda}\frac{\sqrt{C}}{\sqrt{{t-1}}}(1 + \sqrt{ \log(\pi {t}/\sqrt{6\delta})}).
\end{align*}
Here, the first inequality comes from the triangle inequality, and the second from the fact that $\sqrt{x^{\top}Mx} \geq \lambda\|x\|_2$ for any positive definite matrix $M$ with smallest eigenvalue $\lambda$.
The final inequality arises from the construction of the ambiguity set which contains the true distribution with probability at least $1-\delta$.
Now note that $\pi^2 t^2 /(6 \delta) > 3 $ for $t \geq 2$.
Thus, we can write
\begin{align*}
\|x - y\|_2 \leq \frac{8}{\lambda}\frac{\sqrt{C}}{\sqrt{{t-1}}}\sqrt{ \log(\pi {t}/\sqrt{6\delta})}.
\end{align*}
$\square$
The properties of the confidence interval ambiguity sets mentioned previously, and those of the $\ell_2$-norm and kernel-based sets proven in Lemmas~\ref{lemma:l2_shrinking_difference} and~\ref{lemma:kernel_shrinking_difference}, along with the cumulative path-length bounds from Theorem~\ref{thm:path_length_bounds}, enable us to prove the following shrinking dynamic regret bound.
\begin{theorem}[Dynamic regret bound]
\label{Th:Regret_bound3}
Let $f:\mathcal{X}\times\mathcal{S}\rightarrow\mathbb{R}$ be uniformly bounded, i.e., a constant $G>0$ exists such that $|f(x,s)|\leq G$ for all $(x,s) \in \mathcal{X}\times\mathcal{S}$. Let $\eta := \sqrt{\frac{3 + 2 h'(T)}{TG^2|\mathcal{S}|}}$ with $ \sum_{t=2}^{T}\|p - q\| \leq h'(T) $ for $p \in \mathcal{P}_{t-1}$ and $q \in \mathcal{P}_t$.
The output $(x_1,...,x_{T})$ of the algorithm with confidence update $\delta_t \coloneqq \frac{6 \delta}{\pi^2 t^2}$ and $\delta\in (0,1)$ fulfills
\begin{alignat*}{2}
\frac{1}{T} &\sum_{t=1}^{T} \left( \max_{p \in \mathcal{P}_t}\mathbb{E}_{s \thicksim p}\left[f(x_t,s)\right] - \min_{x\in \mathcal{X}} \max_{p\in \mathcal{P}_{t}} \mathbb{E}_{s \thicksim p}\left[f(x,s)\right] \right)
\leq G \sqrt{\frac{3|\mathcal{S}| + 2 |\mathcal{S}| h'(T)}{T}} + \frac{2G}{T},
\end{alignat*}
with probability at least $1-\delta$. \end{theorem}
\proof{Proof of Theorem~\ref{Th:Regret_bound3}}
Define $g_t(p)\coloneqq -\mathbb{E}_{s \thicksim p}\left[f(x_t,s)\right]$. A projected gradient descent iteration is given by
\begin{align*}
p_{t+1} = \arg\min_{p\in \mathcal{P}_t} \text{ }\left\langle \eta \nabla g_t(p_t), p \right\rangle + \frac{1}{2}\Vert p - p_t \Vert^2,
\end{align*}
with optimality criteria $
\left\langle \eta \nabla g_t(p_t), u_t - p_{t+1} \right\rangle + \left\langle p_{t+1} - p_t, u_t - p_{t+1} \right\rangle \geq 0 $
for all $u_t \in \mathcal{P}_t$. Classical theory for gradient descent yields
\begin{align*}
\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq &\frac{1}{2}\Vert p_t - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1}-u_t \Vert^2 + \frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2.
\end{align*}
Summation over rounds $t=1,...,T$ results in the following inequality for all $u_t \in \mathcal{P}_t$:
\begin{align}
&\sum_{t=1}^T\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq \sum_{t=1}^T\frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2
+ \sum_{t=1}^T \left(\frac{1}{2}\Vert p_t - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1} - u_t \Vert^2 \right). \label{Eq:sum_over_rounds}
\end{align}
Next, we rearrange the terms on the RHS as
\begin{align*}
&\sum_{t=1}^T \left(\frac{1}{2}\Vert p_t - u_t \Vert^2 - \frac{1}{2}\Vert p_{t+1} - u_t \Vert^2 \right)\\
&= \frac{1}{2}\sum_{t=1}^{T}\left(\|p_t\|^2 - \|p_{t+1}\|^2\right) + \frac{1}{2}\sum_{t=1}^{T}2\left(p_{t+1} - p_t\right)\cdot u_t \\
&= \frac{1}{2}\sum_{t=1}^{T}\left(\|p_t\|^2 - \|p_{t+1}\|^2\right) + \sum_{t=1}^{T}\left(p_{t+1} - p_t\right)\cdot u_t.
\end{align*}
Now consider the expression $\sum_{t=1}^{T}\left(p_{t+1} - p_t\right)\cdot u_t $. We can write it as
\begin{align*}
&\sum_{t=1}^{T}\left(p_{t+1} - p_t\right)\cdot u_t \\
&= (p_2 - p_1)\cdot u_1 + (p_3 - p_2)\cdot u_2 + \dots + (p_{T+1} - p_T)\cdot u_T\\
&= p_{T+1} \cdot u_T- p_1 \cdot u_1 + \sum_{t=2}^{T}(u_{t-1} - u_t)\cdot p_t.
\end{align*}
For the other expression we have
\begin{align*}
\frac{1}{2}\sum_{t=1}^{T}\left(\|p_t\|^2 - \|p_{t+1}\|^2\right) = \frac{1}{2}\|p_1\|^2 - \frac{1}{2}\|p_{T+1}\|^2.
\end{align*}
This then allows us to write
\begin{align*}
&\sum_{t=1}^T\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq \sum_{t=1}^T\frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2 + \frac{1}{2}\|p_1\|^2 \\
& - \frac{1}{2}\|p_{T+1}\|^2+ p_{T+1} \cdot u_T- p_1 \cdot u_1 + \sum_{t=2}^{T}(u_{t-1} - u_t)\cdot p_t .
\end{align*}
We know that $\|p\| \leq 1$, and for any two probability vectors $p$ and $q$ we have $0 \leq p\cdot q \leq 1$. Thus we can write
\begin{align*}
&\sum_{t=1}^T\left\langle \eta \nabla g_t(p_t), p_t - u_t \right\rangle \leq
\sum_{t=1}^T\frac{\eta^2}{2} \Vert \nabla g_t(p_t) \Vert ^2 + \frac{1}{2}+ 1 + \sum_{t=2}^{T}(u_{t-1} - u_t)\cdot p_t.
\end{align*}
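Since $g_t(p) = -\sum_{k=1}^{|\mathcal{S}|} p_k f(x_t,s_k)$ is linear in $p$, its gradient is the constant vector of negated objective values; this is immediate from the definition of $g_t$ above:

```latex
\begin{align*}
\nabla g_t(p) = -\big(f(x_t,s_1), \dots, f(x_t,s_{|\mathcal{S}|})\big)^{\top}.
\end{align*}
```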
Note that
\begin{align*}
\Vert \nabla g_t(p_t) \Vert^2 = \sum_{k = 1}^{|\mathcal{S}|} |f(x_t,s_k)|^2 \leq |\mathcal{S}|G^2.
\end{align*}
This, along with the fact that $\sum_{t=2}^{T}(u_{t-1} - u_t)\cdot p_t \leq \sum_{t=2}^{T}\|u_{t-1} - u_t\| \| p_t\| \leq \sum_{t=2}^{T}\|u_{t-1} - u_t\|$, allows us to write
\begin{align*}
\sum_{t=1}^T\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq &\frac{\eta T}{2} |\mathcal{S}|G^2 + \frac{3}{2\eta}
+ \frac{1}{\eta}\sum_{t=2}^{T}\|u_{t-1} - u_t\|.
\end{align*}
By assumption, $ \sum_{t=2}^{T}\|u_{t-1} - u_t\| \leq h'(T)$ with probability at least $1-\delta$ for some function $h'(\cdot)$. Then we have
\begin{align*}
\sum_{t=1}^T\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq \frac{\eta T}{2} |\mathcal{S}|G^2 + \frac{3}{2\eta} + \frac{h'(T)}{\eta},
\end{align*}
with probability at least $1-\delta$.
Choosing the optimal $\eta = \sqrt{\frac{3 + 2 h'(T)}{TG^2|\mathcal{S}|}}$, we can write our result as
\begin{align*}
\sum_{t=1}^T\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle \leq 2 \sqrt{\frac{T}{2} |\mathcal{S}|G^2 \cdot (\frac{3}{2} + h'(T))}.
\end{align*}
Then we can write
\begin{align*}
\sum_{t=1}^T\left\langle \nabla g_t(p_t), p_t - u_t \right\rangle &\leq G |\mathcal{S}|^{\frac{1}{2}} \sqrt{3T + 2T h'(T)}.
\end{align*}
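The last two displays can be verified by substituting the stated $\eta = \sqrt{(3 + 2h'(T))/(TG^2|\mathcal{S}|)}$ into the preceding bound; both terms on the right-hand side then balance:

```latex
\begin{align*}
\frac{\eta T}{2}|\mathcal{S}|G^2 + \frac{1}{\eta}\left(\frac{3}{2} + h'(T)\right)
= \sqrt{T|\mathcal{S}|G^2\left(3 + 2h'(T)\right)}
= G|\mathcal{S}|^{\frac{1}{2}}\sqrt{3T + 2Th'(T)}.
\end{align*}
```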
Since $g_t(p)=-\mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right]$ is linear in $p$ for all $t=1,...,T$, it follows
\begin{align*}
&\sum_{t=1}^T \left( \mathbb{E}_{s\thicksim u_t}\left[f(x_t,s)\right] - \mathbb{E}_{s\thicksim p_t} \left[f(x_t, s)\right] \right) \\
&= \sum_{t=1}^T \left(g_t(p_t) - g_t(u_t) \right)= \sum_{t=1}^T \left\langle \nabla g_t(p_t), p_t - u_t \right\rangle
\\
&\leq G |\mathcal{S}|^{\frac{1}{2}} \sqrt{3T + 2T h'(T)}.
\end{align*}
Now we choose in each round $t=1,...,T$ the worst-case $
u_t \coloneqq \arg \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] \in \mathcal{P}_t $
and recall $x_t = \arg\min_{x\in \mathcal{X}} \mathbb{E}_{s \sim p_t}\left[f(x,s)\right]$ to obtain
\begin{align*}
\sum_{t=1}^T &\left( \mathbb{E}_{s\thicksim u_t}\left[f(x_t,s)\right] - \mathbb{E}_{s\thicksim p_t} \left[f(x_t, s)\right] \right)
= \sum_{t=1}^T \left( \max_{p \in \mathcal{P}_t}\mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \mathbb{E}_{s\thicksim p_t} \left[f(x, s)\right] \right).
\end{align*}
Since $p_t \in \mathcal{P}_{t-1}$, we know that $\min_{x \in \mathcal{X}} \mathbb{E}_{s\thicksim p_t} \left[f(x, s)\right] \leq \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}}\mathbb{E}_{s\thicksim p} \left[f(x, s)\right]$ for all $t=1,...,T$ and thus we can conclude
\begin{align*}
\sum_{t=1}^T& \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)
\leq G |\mathcal{S}|^{\frac{1}{2}} \sqrt{3T + 2 Th'(T)},
\end{align*}
with a probability of at least $1-\delta$.
We add and subtract $\min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right]$ on the LHS. Rearranging terms allows us to write the LHS as
\begin{align*}
&\sum_{t=1}^T \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)\\ & + \sum_{t=1}^T \left( \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_{t-1}} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right).
\end{align*}
Observing that the terms in the second sum telescope, bringing them to the RHS, and using the upper bound $G$ on $|f(x,s)|$, we can conclude
\begin{align*}
\sum_{t=1}^T& \left( \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p}\left[f(x_t,s)\right] - \min_{x \in \mathcal{X}} \max_{p \in \mathcal{P}_t} \mathbb{E}_{s\thicksim p} \left[f(x, s)\right] \right)
\leq G |\mathcal{S}|^{\frac{1}{2}} \sqrt{3T + 2 Th'(T)} + 2G \;\;\;\;\;\text{ w.p. } 1 - \delta.
\end{align*}
Dividing by $T$ on both sides completes the proof.
$\square$
\begin{theorem}
\label{thm:path_length_bounds}
Given ambiguity sets of the form specified in Section~\ref{sec:dddro}, we have
\begin{align*}
\frac{1}{2}\sum_{t=1}^{T}\Vert p_t - q_t \Vert^2 \leq h(T) &\text{ and } \sum_{t=2}^{T}\|p_t - q_t\| \leq h'(T)
\text{ for all } p_t \in \mathcal{P}_{t-1}, q_t \in \mathcal{P}_t,
\end{align*}
with probability at least $1-\delta$.
The functions $h(T)$ and $h'(T)$ for different categories of ambiguity sets are as given below:
\begin{enumerate}
\item \textit{Confidence Intervals}:
$$h(T) = 8 |\mathcal{S}| \log(\pi T) ({2} + \log T)$$
$$h'(T) = 8 \sqrt{|\mathcal{S}|T\log(\pi T)}$$
\item \textit{Kernel based ambiguity sets, where $\lambda$ denotes the smallest eigenvalue of the kernel matrix $M$:}
$$h(T) = {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log(\pi T/\sqrt{6\delta})(1 + \log T)$$
\begin{align*}
h'(T) = &\frac{16\sqrt{C}}{\lambda} \sqrt{T \log(\pi T/\sqrt{6\delta})}
\end{align*}
\item \textit{$\ell_2$-norm ambiguity sets:}
$$
h(T) = 8 |\mathcal{S}|\log \frac{\pi T}{\sqrt{3 \delta}} ({2} + \log T)
$$
$$
h'(T) = 8\sqrt{|\mathcal{S}|T\log\frac{\pi T}{\sqrt{3\delta}}}
$$
\end{enumerate} \end{theorem} \proof{Proof:}
\textbf{Confidence Intervals}.
Calculating the function $h(T)$, we have
\begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|^2 &\leq \frac{1}{2} 16 |\mathcal{S}| \log\pi + \frac{1}{2} \sum_{t=2}^{T}16 |\mathcal{S}| \frac{\log(\pi (t-1))}{t-1}\\
&\leq 8 |\mathcal{S}| \log\pi + 8 |\mathcal{S}| \log(\pi (T-1)) \sum_{t=1}^{T-1}\frac{1}{t}\\
&\leq 8 |\mathcal{S}| \log\pi + 8 |\mathcal{S}| \log(\pi (T-1)) (1 + \log (T-1))\\
&\leq 8 |\mathcal{S}| \log(\pi T) (2 + \log T).
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:shrinking_difference}. The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=1}^{T-1}(1/t) \leq 1 + \log(T-1)$.
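The harmonic-sum estimate used here is the standard integral comparison:

```latex
\begin{align*}
\sum_{t=1}^{T-1}\frac{1}{t} \;\leq\; 1 + \int_{1}^{T-1}\frac{\mathrm{d}s}{s} \;=\; 1 + \log(T-1).
\end{align*}
```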
\noindent Now for the function $h'(T)$, we can calculate
\begin{align*}
\sum_{t=2}^{T}\|p_t - q_t\| &\leq \sum_{t=2}^{T}4\frac{\sqrt{|\mathcal{S}|\log(\pi ({t-1}))}}{\sqrt{{t-1}}}\\
&\leq 4 |\mathcal{S}|^{\frac{1}{2}} \sqrt{\log(\pi T)} \sum_{t=2}^{T}\frac{1}{\sqrt{{t-1}}}\\
&\leq 8 |\mathcal{S}|^{\frac{1}{2}} \sqrt{\log(\pi T)} \sqrt{T}.
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:shrinking_difference}. The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=2}^{T}(1/\sqrt{{t-1}}) \leq 2\sqrt{T-1} \leq 2 \sqrt{T}$.
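Similarly, the square-root-sum estimate follows by comparison with an integral:

```latex
\begin{align*}
\sum_{t=2}^{T}\frac{1}{\sqrt{t-1}} = \sum_{s=1}^{T-1}\frac{1}{\sqrt{s}}
\;\leq\; 1 + \int_{1}^{T-1}\frac{\mathrm{d}s}{\sqrt{s}}
\;=\; 2\sqrt{T-1} - 1 \;\leq\; 2\sqrt{T}.
\end{align*}
```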
\noindent \textbf{Kernel based ambiguity sets}.
Calculating the function $h(T)$, we have
\begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|_2^2 &\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \sum_{t=2}^{T}\frac{32 C}{\lambda^2 ({t-1})}\log(\pi ({t-1})/\sqrt{6\delta})\\
&\hspace{-10mm}\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log(\pi T/\sqrt{6\delta})\sum_{t=2}^{T}\frac{1}{t-1}\\
&\hspace{-10mm}\leq {\frac{1}{2}\left(2+ \frac{4\sqrt{C}}{\lambda}\right)^2}+ \frac{32C}{\lambda^2 } \log(\pi T/\sqrt{6\delta})(1 + \log T).
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:kernel_shrinking_difference}. The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=1}^{T-1}(1/t) \leq 1 + \log(T-1) \leq 1 + \log T$.
Now, for the function $h'(T)$, we have
\begin{align*}
\sum_{t=2}^{T}\|p_t - q_t\|_2 &\leq \sum_{t=2}^{T}\frac{8}{\lambda}\frac{\sqrt{C}}{\sqrt{t-1}}\sqrt{ \log(\pi ({t-1})/\sqrt{6\delta})}\\
&\leq \frac{8\sqrt{C}}{\lambda} \sqrt{ \log(\pi T/\sqrt{6\delta})} \sum_{t=2}^{T}\frac{1}{\sqrt{t-1}}\\
&\leq \frac{16\sqrt{C}}{\lambda} \sqrt{T \log(\pi T/\sqrt{6\delta})}.
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:kernel_shrinking_difference}. The second and third inequalities are from bounding $t$ and from observing that $\sum_{t=2}^{T}(1/\sqrt{t-1}) \leq 2\sqrt{T-1} \leq 2 \sqrt{T}$.
\noindent \textbf{$\ell_2$-norm ambiguity sets}.
Calculating the function $h(T)$, we have
\begin{align*}
\frac{1}{2} \sum_{t=1}^{T}\|p_t - q_t\|^2 &\leq \frac{1}{2}\left(4\sqrt{|\mathcal{S}| \log (\pi / \sqrt{3 \delta})}\right)^2 + \frac{1}{2} \sum_{t=2}^{T}16 |\mathcal{S}| \frac{\log(\pi ({t-1})/\sqrt{3 \delta})}{t-1}\\
&\leq 8 |\mathcal{S}| \log (\pi / \sqrt{3 \delta}) + 8 |\mathcal{S}| \log(\pi T/\sqrt{3\delta}) \sum_{t=2}^{T}\frac{1}{t-1}\\
&\leq 8 |\mathcal{S}| \log(\pi T/\sqrt{3\delta}) (2 + \log T).
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:l2_shrinking_difference}. The second and third inequalities are proven similarly to the case of $h(T)$ for the interval sets.
\noindent Now for the function $h'(T)$, we can calculate
\begin{align*}
\sum_{t=2}^{T}\|p_t - q_t\| &\leq \sum_{t=2}^{T}4\sqrt{\frac{|\mathcal{S}| \log (\pi {(t-1)} / \sqrt{3 \delta})}{t-1}}\\
&\leq 4\sqrt{|\mathcal{S}| \log (\pi T / \sqrt{3 \delta})} \sum_{t=2}^{T}\sqrt{\frac{1}{t-1}}\\
&\leq 8\sqrt{|\mathcal{S}| T\log (\pi T / \sqrt{3 \delta})}.
\end{align*}
Here, the first inequality arises from Lemma~\ref{lemma:l2_shrinking_difference}. The second and third inequalities are proven similarly to the case of $h'(T)$ for the interval sets.
$\square$
\subsection{Numerical Experiments}
In this section, we provide the details for the numerical experiments conducted in Section~\ref{sec:num_res}. We also provide an additional set of experiments on an optimal routing problem to further illustrate our algorithms.
\subsubsection{Benchmark Instances}
All mixed-integer linear optimization problems used in our numerical experiments can be found in Table \ref{Table:overview_mip}. The entries show the number of variables and constraints for each problem. The same holds for the quadratic problems listed in Table \ref{Table:overview_miqp}.
\begin{table}[hbt]
\centering
\begin{tabular}{lrrrrr}
\toprule
Name & \multicolumn{4}{c}{Variables} & Constraints \\
\midrule
& All & Bin. & Int. & Cont. & \\ \midrule
blend2 & 353 & 239 & 25 & 89 & 274 \\
flugpl & 18 & 0 & 11 & 7 & 19 \\
gr4x6 & 48 & 24 & 0 & 24 & 34 \\
neos-1430701 &312& 156& 0 & 156 & 668 \\
noswot& 128 & 75 &25 & 28 & 182\\
prod1& 250 & 149 & 0 &101& 208 \\
prod2& 301 & 200 & 0 & 101& 211 \\
ran13x13 & 338 & 169 & 0 &169 & 195 \\
supportcase14& 304 & 304 & 0& 0 & 234 \\
supportcase16& 319 & 319 & 0& 0 & 130 \\
beavma& 390 & 195 & 0 & 195 & 372\\
k16x240b& 480& 240& 0 &240 & 256 \\
neos-3610040 &430 & 85 & 0 & 345 & 335 \\
neos-3611689 &421 &88 & 0 &333& 323 \\
timtab1CUTS & 397& 77 &94 & 226 & 371 \\
\bottomrule
\end{tabular}
\caption{\label{Table:overview_mip}Overview of MIP instances} \end{table}
\begin{table}[htb]
\centering
\begin{tabular}{lrrrrr}
\toprule
Name & \multicolumn{3}{c}{Variables} & \multicolumn{2}{c}{Constraints} \\
\midrule
& All & Bin. & Cont. & All & Quadr. \\ \midrule
7579 & 300 & 100 & 200 & 203 & 1 \\
10001 & 485 & 426 & 59 & 296 & 1 \\
10002 & 485 & 426 & 59 & 296 & 1 \\
10004 & 1058 & 999 & 59 & 867 & 1 \\
10010 & 269 & 262 & 7 & 147 & 1 \\
10003 & 1058 & 999 & 59 & 867 & 1\\
10008 & 845 & 713 & 132 & 416 & 1\\
10009 & 605 & 473 & 132 & 246 & 1\\
10011 & 1390 & 1258 & 132 & 873 & 1 \\
10012 & 967 & 835 & 132 & 538 & 1 \\
\bottomrule
\end{tabular}
\caption{Overview of MIQP instances}\label{Table:overview_miqp} \end{table}
\end{document}
\begin{document}
\title[ Finely Plurisubharmonic Functions and Pluripolarity]{Continuity Properties of Finely Plurisubharmonic Functions and Pluripolarity} \author{Said El Marzguioui} \address{KdV Institute for Mathematics, Universiteit van Amsterdam, Postbus 94248 1090 GE, Amsterdam, The Netherlands} \email{[email protected]}
\author{Jan Wiegerinck} \address{KdV Institute for Mathematics, Universiteit van Amsterdam, Postbus 94248 1090 GE, Amsterdam, The Netherlands} \email{[email protected]} \subjclass[2000]{
32U15, 32U05, 30G12, 31C40} \begin{abstract} We prove that every bounded finely pluri\-sub\-harmonic function can be locally (in the pluri-fine topology) written as the difference of two usual plurisub\-harmonic functions. As a consequence finely pluri\-sub\-harmonic functions are continuous with respect to the pluri-fine topology. Moreover we show that $-\infty$ sets of finely plurisubharmonic functions are pluripolar, hence graphs of finely holomorphic functions are pluripolar. \end{abstract} \maketitle
\section{Introduction} The fine topology on an open set $\Omega\subset{\mathbb R}^n$ is the coarsest topology that makes all subharmonic functions on $\Omega$ continuous. A finely subharmonic function is defined on a fine domain, it is upper semi-continuous with respect to the fine topology, and satisfies an appropriate modification of the mean value inequality. Fuglede \cite{Fu72} proved the following three properties that firmly connect fine potential theory to classical potential theory: finely subharmonic functions are finely continuous (so there is no super-fine topology), all finely polar sets are in fact ordinary polar sets, and finely subharmonic functions can be uniformly approximated by subharmonic functions on suitable compact fine neighborhoods of any point in their domain of definition. Another continuity result is what Fuglede calls the {\em Brelot Property}, i.e. a finely subharmonic function is continuous on a suitable fine neighborhood of any given point in its domain, \cite[page 284]{Fu88}, see also \cite[Lemma 1]{Fu76}.
Similarly, the pluri-fine topology on $\Omega\subset{\mathbb C}^n$ is the coarsest topology that makes all plurisubharmonic (PSH) functions on $\Omega$ continuous. In \cite{E-W2} we introduced finely plurisubharmonic functions as plurifinely upper semicontinuous functions, of which the restriction to complex lines is finely subharmonic. We will prove the analogs of two of the results mentioned above. Bounded finely plurisubharmonic functions can locally be written as differences of ordinary PSH functions (cf.~Section 3), hence finely plurisubharmonic functions are pluri-finely continuous. We also prove a weak form of the Brelot Property. Next, finely pluripolar sets are shown to be pluripolar. This answers natural questions posed e.g. by \cite{Mo03}. As a corollary we obtain that zero sets of finely holomorphic functions of several complex variables are pluripolar sets. Partial results in this direction were obtained in \cite{EMW,EdJo06,E-W2}. A final consequence is Theorem \ref{F-PlfineH} concerning the pluripolar hull of certain pluripolar sets.
The pluri-fine topology was introduced in \cite{Fu86-1}, and studied in e.g., \cite{Bed88, BT87, E-W1, E-W2}. In the rest of the paper we will qualify notions referring to the pluri-fine topology by the prefix ``$\mathcal{F}$'', to distinguish them from those pertaining to the Euclidean topology. Thus a compact $\mathcal{F}$-neighborhood $U$ of $z$ will be a Euclidean compact set $U$ that is a neighborhood of $z$ in the pluri-fine topology.
\section{Finely plurisubharmonic and holomorphic functions} There are several ways to generalize the concepts of plurisubharmonic and of holomorphic functions to the setting of the plurifine topology. See e.g., \cite{Mo03, E-W2}, and in particular \cite{Fu09} where the different concepts are studied and compared. \begin{definition}\label{F-PlfineE} Let $\Omega $ be an $\mathcal{F}$-open subset of ${\mathbb C}^n$. A function $f$ on $\Omega$ is called {\em $\mathcal{F}$-plurisubharmonic} if $f$ is $\mathcal{F}$-upper semicontinuous on $\Omega$ and if the restriction of $f$ to any complex line $L$ is finely subharmonic or $\equiv -\infty$ on any $\mathcal{F}$-connected component of $\Omega\cap L$.
A subset $E$ of ${\mathbb C}^n$ is called \emph{$\mathcal{F}$-pluripolar} if for every point $z\in E$ there is an $\mathcal{F}$-open subset $U\subset {\mathbb C}^n$ and an $\mathcal{F}$-plurisubharmonic function ($\not \equiv -\infty$) $f$ on $U$ such that $E\cap U \subset\{f=-\infty\}$. \end{definition}
Denote by $H(K)$ the uniform closure on $K$ of the algebra of holomorphic functions in neighborhoods of $K$. \begin{definition}\label{F-PlfineF} Let $U\subseteq {\mathbb C}^n$ be $\mathcal{F}$-open. A function $f$ :\ $U$ $\longrightarrow$ ${\mathbb C}$ is said to be $\mathcal{F}$-holomorphic if every point of $U$ has a compact
$\mathcal{F}$-neighborhood $K\subseteq U$ such that the restriction $f|_K$ belongs to $H(K)$. \end{definition} \begin{remark} The functions defined in Definition \ref{F-PlfineE} are called weakly ${\mathcal F}$-PSH functions in \cite{Fu09}, whereas the functions in Definition \ref{F-PlfineF} are called strongly ${\mathcal F}$-holomorphic functions. In \cite{Fu09} strongly ${\mathcal F}$-PSH functions (via approximation) and weakly ${\mathcal F}$-holomorphic functions (via holomorphy on complex lines) are defined and it is shown that the strong properties imply the weak ones. \end{remark}
The original definition of finely subharmonic functions involves sweeping-out of measures. If one wants to avoid this concept, one can use the next theorem as an alternative definition. \begin{theorem}[Fuglede \cite{Fu74,Fu82}] \label{Diff-th5} A function $\varphi$ defined in an $\mathcal{F}$-open set
$U \subseteq \mathbb{C}$ is finely subharmonic if and only if every point of $U$ has a compact $\mathcal{F}$-neighborhood $K \subset U$ such that $\varphi|_{K}$ is the uniform limit of usual subharmonic functions $\varphi_n$ defined in Euclidean neighborhoods $W_n$ of $K$. \end{theorem}
Recall also the following property, cf.~\cite{BT87}, which will be used in the proof of Theorem \ref{PlfineC} and its corollary. \begin{theorem}[Quasi-Lindel\"{o}f property]\label{F-PlfineG} An arbitrary union of $\mathcal{F}$-open subsets of ${\mathbb C}^{n}$ differs from a suitable countable subunion by at most a pluripolar set. \end{theorem}
\section{Continuity of Finely PSH Functions} \begin{theorem}\label{PlfineA} Let $f$ be a bounded $\mathcal{F}$-pluri\-sub\-harmonic function in a bounded $\mathcal{F}$-open subset $U$ of $\mathbb{C}^n $. Every point $z \in U$ then has an $\mathcal{F}$-neighborhood $\mathcal{O} \subset U$ such that $f$ is representable in $\mathcal{O}$ as the difference between two locally bounded plurisubharmonic functions defined on some usual neighborhood of $z$. In particular $f$ is ${\mathcal F}$-continuous. \end{theorem} \begin{proof}
We may assume that $-1<f < 0$ and that $U$ is relatively compact in the unit ball $B(0, 1)$. Let $V \subset U$ be a compact $\mathcal{F}$-neighborhood of $z_0$. Since the complement $\complement V$ of $V$ is pluri-thin at $z_0$, there exist $ 0<r<1$ and a plurisubharmonic function $\varphi$ on $B(z_0,r)$ such that \begin{equation}\label{Plfine1} \limsup_{z \to z_{0},z\in \complement V }\varphi(z)< \varphi(z_{0}). \end{equation} Without loss of generality we may suppose that $\varphi $ is negative in $B(z_{0}, r)$ and \begin{equation}\label{Plfine2} \varphi(z)=-1 \ \text{if} \ z\in B(z_0,r) \backslash V \ \text{and} \ \varphi(z_0)=-\frac{1}{2}. \end{equation} Hence \begin{equation}\label{Plfine4} f(z)+\lambda \varphi(z) \leq -\lambda \ \text{for}\ \text{any} \ z \in U\cap B(z_0, r)\backslash V \ \text{and} \ \lambda >0. \end{equation} Now define a function $u_{\lambda}$ on $B(z_0, r)$ as follows
\begin{equation}\label{SaidFuG} u_{\lambda}(z)=
\begin{cases}
\max\{-\lambda , \ f(z)+\lambda \varphi(z)\} & \text{if $z\in U \cap B(z_0, r)$,}\cr
-\lambda &\text{if $z\in B(z_0, r)\backslash V$.}
\end{cases} \end{equation} This definition makes sense because $[U \cap B(z_0, r)]\cup [B(z_0, r)\backslash V ]=B(z_0, r)$, and the two definitions of $u_\lambda$ agree on $U\cap B(z_{0}, r)\backslash V$ in view of (\ref{Plfine4}).
Clearly, $u_{\lambda}$ is $\mathcal{F}$-plurisubharmonic in $U \cap B(z_0, r)$ and in $B(z_0, r)\backslash V$, hence in all $B(z_0, r)$ in view of the sheaf property, cf.~\cite{E-W2}. Since $u_{\lambda}$ is bounded in $B(z_0, r)$, it follows from \cite[Theorem 9.8]{Fu72} that $u_{\lambda}$ is subharmonic on each complex line where it is defined. It is a well known result that a bounded function which is subharmonic on each complex line where it is defined, is plurisubharmonic, cf.~\cite{Lelong45}. In other words $u_{\lambda}$ is plurisubharmonic in $B(z_0, r)$.
Since $ \varphi(z_0)=-\frac{1}{2}$, the set $\mathcal{O}=\{\varphi>-3/4\}$ is an $\mathcal{F}$-neighborhood of $z_0$, and because $\varphi= -1$ on $B(z_0, r)\backslash V$, it is clear that $\mathcal{O}\subset V \subset U$.
Observe now that $-4 \leq \ f(z)+ 4\varphi(z)$, for every $z\in \mathcal{O}$. Hence \begin{equation} f(z)=u_{4}(z)-4\varphi(z), \ \text{for} \ \text{every} \ z\in \mathcal{O}. \end{equation} We have shown that $f$ is ${\mathcal F}$-continuous on a neighborhood of each point in its domain, hence $f$ is ${\mathcal F}$-continuous. \end{proof} The proof is inspired by \cite[pages 88--90]{Fu72}.
\begin{corollary}\label{PlfineB} Every $\mathcal{F}$-plurisubharmonic function is $\mathcal{F}$-continuous. \end{corollary} \begin{proof}
Let $f$ be $\mathcal{F}$-plurisubharmonic in an $\mathcal{F}$-open subset $\Omega$ of $\mathbb{C}^n$. Let $d<c\in {\mathbb R}$. The set $\Omega_c=\{f<c\}$ is ${\mathcal F}$-open. The function $\max\{f, d\}$ is bounded ${\mathcal F}$-PSH on $\Omega_c$, hence ${\mathcal F}$-continuous. Therefore the set $\{d<f<c\}$ is ${\mathcal F}$-open, and we conclude that $f$ is ${\mathcal F}$-continuous. \end{proof} The following result gives a partial analog to the Brelot property. We recall the definition of the {\em relative extremal function} or {\em pluriharmonic measure} of a subset $E$ of an open set $\Omega$, cf.~\cite{Bed82,K91}
\begin{equation}\label{Plfine3} U=U_{E,\Omega}=\sup\{\psi \in \text{PSH}^{-}(\Omega): \ \psi \leq -1 \ \text{on } E\}. \end{equation} It is well known that the upper semi-continuous regularization of $U$, i.e.\ $U^{*}(z)=\limsup_{\Omega\ni v \rightarrow z }U(v)$, is plurisubharmonic in $\Omega$. \begin{theorem}\label{Quasi-Brelot}(Quasi-Brelot property) Let $f$ be a plurisubharmonic function in the unit ball $B\subset {\mathbb C}^n$. Then there exists a pluripolar set $E\subset B$ such that for every $z \in B \setminus E$ we can find an $\mathcal{F}$-neighborhood $\mathcal{O}_z \subset B$ of $z$ such that $f$ is continuous in the usual sense in $\mathcal{O}_z$. \end{theorem} \begin{proof} Without loss of generality we may assume that $f$ is continuous near the boundary of $B$. By the quasi-continuity theorem (cf.~\cite[Theorem 3.5.5]{K91} and the remark that follows it, see also \cite{Bed82}) we can select a sequence of relatively compact open subsets $\omega_n$ of $B$ such that the Monge-Amp\`{e}re capacity $C(\omega_n, B)<\frac{1}{n}$, and $f$ is continuous on $B \setminus \omega_n$. Denote by $\tilde \omega_n$ the $\mathcal{F}$-closure of $\omega_n$.
The pluriharmonic measure $U^{*}_{\omega_n, B}$ is equal to the pluriharmonic measure $U^{*}_{\tilde \omega_n, B}$, because for a PSH function $\varphi$ the set $\{\varphi \leq
-1\}$ is $\mathcal{F}$-closed, thus $\varphi|_{\omega_n} \leq -1
\Rightarrow \varphi|_{\tilde \omega_n} \leq -1$. Now, using \cite[Proposition 4.7.2]{K91}, \begin{equation}\label{e1} C(\omega_n, B)=C^*(\omega_n, B)=\int_B (dd^cU^{*}_{\omega_n, B})^n=\int_B (dd^cU^{*}_{\tilde \omega_n, B})^n=C^*(\tilde \omega_n, B). \end{equation} Let $E=\bigcap_{n}\tilde \omega_n$. By \eqref{e1}, $C^*(E, B)\leq C^*(\tilde \omega_n, B)\leq \frac{1}{n}$, for every $n$. Hence $E$ is a pluripolar subset of $B$.
Let $z \not \in E$. Then there exists $N$ such that $z \not \in \tilde \omega_N$. Clearly, the set $B\setminus \tilde \omega_N$ is an $\mathcal{F}$-neighborhood of $z$. Since $f$ is continuous on $B\setminus \omega_N$, it is also continuous on the smaller set $B\setminus \tilde \omega_N$ ($\subset B\setminus \omega_N$). \end{proof} \begin{remark} The above Quasi-Brelot property holds also for $\mathcal{F}$-pluri\-sub\-harmonic functions, in view of Theorem \ref{PlfineA}. \end{remark}
\section{$\mathcal{F}$-Pluripolar Sets and Pluripolar Hulls} In this section we prove that $\mathcal{F}$-pluripolar sets are pluripolar and apply this to pluripolar hulls. \begin{theorem}\label{PlfineC} Let $f$ :\ $U$ $\longrightarrow$ $[-\infty,+\infty[$ be an $\mathcal{F}$-plurisubharmonic function $(\not \equiv -\infty)$ on an $\mathcal{F}$-open and $\mathcal{F}$-connected subset $U$ of ${\mathbb C}^{n}$. Then the set $\{z\in U: \ f(z)=-\infty \}$ is a pluripolar subset of ${\mathbb C}^{n}$. \end{theorem}
\begin{proof}[Proof of Theorem \ref{PlfineC}.] We may assume that $f<0$. Let $z_{0} \in U$; we may assume that $U$ is relatively compact in $B(0,1)$.
We begin by showing that $z_0$ admits an $\mathcal{F}$-neighborhood $W_{z_0}$ such that $\{f=-\infty\}\cap W_{z_0}$ is pluripolar. If $z_0$ is a Euclidean interior point of $U$, then $f$ is PSH on a neighborhood of $z_0$ and there is nothing to prove.
If not we proceed as in the proof of Theorem \ref{PlfineA}. Thus, let $V\subset U$ be a compact $\mathcal{F}$-neighborhood of $z_0$, and $\varphi$ a negative PSH function on $B(z_0, r)$ such that \begin{equation}\label{Plfine2a} \varphi(z)=-1 \ \text{if} \ z\in B(z_0,r) \backslash V \ \text{and} \ \varphi(z_0)=-\frac{1}{2}. \end{equation}
Let $\Phi=U_{B(z_0,r)\setminus V, B(z_0,r)}$ be the pluriharmonic measure defined in \eqref{Plfine3}. By \eqref{Plfine2a}, we get $\varphi \leq \Phi \leq \Phi^{*}$. In particular $ -\frac{1}{2}\leq \Phi^{*}(z_0)$.
Let $f_{n}=\frac{1}{n}\max(f, -n)$. Then $-1\leq f_{n}<0$. We define functions $v_{n}(z)$ on $B(z_0, r)$ as follows. \begin{equation}\label{Plfine6} v_{ n}(z)=
\begin{cases}
\max\{-1, \ \frac{1}{4}f_{n}(z)+ \Phi^{*}(z)\} & \text{if $z\in U \cap B(z_0, r)$,}\cr
-1 &\text{if $z\in B(z_0, r)\backslash V$.}
\end{cases} \end{equation} Since $v_n$ is analogous to the function $u_{\lambda}$ in (\ref{SaidFuG}), the argument in the proof of Theorem \ref{PlfineA} shows that $v_n \in \psh(B(z_0, r))$. Now for $z\in U$ such that $f(z)\neq-\infty$ the sequence $f_{n}(z)$ increases to $0$. Thus $\{v_{n}\}$ is an increasing sequence of PSH-functions. Let $\lim v_n=\psi$. The upper semi-continuous regularization $\psi^{*}$ of $\psi$ is plurisubharmonic in $B(z_0, r)$. It is a result of \cite{Bed82}, see also Theorem 4.6.3 in \cite{K91}, that the set $E=\{\psi \neq \psi^{*}\}$ is a pluripolar subset of $B(z_0, r)$.
We claim that $\psi^{*}=\Phi^{*}$ on $B(z_0, r)$. Indeed, $\psi\le\psi^*\le\Phi^*$ because the $v_n$ belong to the defining family \eqref{Plfine3} for $\Phi$. Now observe that $\psi = \Phi^{*}$ on $B(z_0, r) \setminus \{f=-\infty\}$, because $v_n=\Phi^{*}= -1$ on $B(z_0, r)\backslash V$. Hence \begin{equation}\label{fInTe} \{ \psi^{*}\ne \Phi^{*} \} \subset \ B(z_0, r) \cap \{f=-\infty\}. \end{equation} Clearly, the set $\{ \psi^{*} \neq \Phi^{*}\}$ is $\mathcal{F}$-open. In view of Theorem 5.2 in \cite{E-W2} it must be empty because it is contained in the $-\infty$-set of a finely plurisubharmonic function.
Let $z\in \{\Phi^{*}>-\frac{2}{3} \}\cap\{f=-\infty\}$. Then it follows from the definition of $v_{n}$ and the claim that $$\psi (z) = -\frac{1}{4} + \Phi^{*}(z) = -\frac{1}{4} + \psi^{*}(z).$$ Thus $z\in E$.
Now $\{\Phi^{*}>-\frac{2}{3} \}$ is an $\mathcal{F}$-neighborhood of $z_{0}$. The conclusion is that every point $z\in U$ has an $\mathcal{F}$-neighborhood $W_{z} \subset U$ such that $W_{z}\cap \{f=-\infty\}$ is a pluripolar set. (If $f(z)\neq -\infty$ we could have chosen $W_{z}$ such that $W_{z}\cap \{f=-\infty\}=\emptyset$.)
By the Quasi-Lindel\"{o}f property, cf.~Theorem \ref{F-PlfineG}, there is a sequence $\{z_n\}_{n\geq1} \subset U$ and a pluripolar subset $P$ of $U$ such that \begin{equation}\label{FpLiNd1} U= \cup_{n}W_{z_n}\cup P. \end{equation} Hence \begin{equation}\label{FpLindP} \{f=-\infty\} \subset ( \cup_n W_{z_{n}} \cap \{f=-\infty\})\cup P. \end{equation} This completes the proof since a countable union of pluripolar sets is pluripolar. \end{proof} \begin{remark} Corollary \ref{PlfineB} and Theorem \ref{PlfineC} give affirmative answers to two questions in \cite{Mo03}. \end{remark} A weaker formulation of Theorem \ref{PlfineC}, but perhaps more useful, is as follows. \begin{corollary} Let $f$ :\ $U$ $\longrightarrow$ $[-\infty,+\infty[$ be a function defined in an $\mathcal{F}$-domain $U \subset {\mathbb C}^n$. Suppose that every point
$z\in U$ has a compact $\mathcal{F}$-neighborhood $K_{z} \subset U$ such that $f|_{K_{z}}$ is the decreasing limit of usual plurisubharmonic functions in Euclidean neighborhoods of $K_z$. Then either $f\equiv -\infty $ or the set $\{f= -\infty\}$ is a pluripolar subset of $U$. \end{corollary} As a byproduct we get the following corollary, which recovers and generalizes the main result in \cite{EMW} to functions of several variables. \begin{corollary}\label{PlfineD} Let $h$ :\ $U$ $\longrightarrow$ ${\mathbb C}$ be an $\mathcal{F}$-holomorphic function on an $\mathcal{F}$-open subset $U$ of ${\mathbb C}^n$. Then the zero set of $h$ is pluripolar. In particular, the graph of $h$ is also pluripolar. \end{corollary} \begin{proof}[Proof of Corollary \ref{PlfineD}.] Let $a \in U$. Definition \ref{F-PlfineF} gives us a compact $\mathcal{F}$-neighborhood $K$ of $a$ in $U$, and a sequence $(h_{n})_{n \geq 0}$ of holomorphic functions defined in Euclidean neighborhoods of $K$ such that $$
h_{n}|_{K} \longrightarrow h|_{K}, \ \text{uniformly}. $$
For $k\in \mathbb{N}$ we define $v_{n, k}=\max(\log|h_{n}|, -k)$
and $v_{k}=\max(\log|h|, -k)$. Clearly, $v_{n, k}$ converges uniformly on $K$ to $v_{k}$ as $n\to\infty$. Accordingly, $v_{k}$ is $\mathcal{F}$-plurisubharmonic on the
$\mathcal{F}$-interior $K'$ of $K$. Since the sequence $(v_{k})_{k}$ is decreasing, the limit function $\log|h|$ is $\mathcal{F}$-plurisubharmonic in $K'$. Theorem \ref{PlfineC} shows that the set $K'\cap \{h=0\}$ is pluripolar. The corollary now follows by application of the Quasi-Lindel\"{o}f property. \end{proof}
The pluripolar hull $E_{\Omega}^*$ of a pluripolar set $E$ relative to an open set $\Omega $ is defined as follows. $$ E_{\Omega}^{*}=\bigcap \{z\in \Omega \ : u(z)= - \infty\}, $$ where the intersection is taken over all plurisubharmonic functions defined in $\Omega$ which are equal to $-\infty$ on $E$.
The next theorem improves on Theorem 6.4 in \cite{E-W2}. \begin{theorem}\label{F-PlfineH} Let $U \subset {\mathbb C}^{n}$ be an $\mathcal{F}$-domain, and let $h$ be $\mathcal{F}$-holomorphic in $U$. Denote by $\Gamma_{h}(U)$ the graph of $h$ over $U$, and let $E$ be a non-pluripolar subset of $U$. Then $ \Gamma_{h}(U)\subset (\Gamma_{h}(E))^{\ast}_{{\mathbb C}^{n+1}}$. \end{theorem}
\begin{proof} By Corollary \ref{PlfineD} the set $\Gamma_{h}(E)$ is a pluripolar subset of ${\mathbb C}^{n+1}$. Let $\varphi$ be a plurisubharmonic function in ${\mathbb C}^{n+1}$ with $ \varphi \not \equiv -\infty $ and $\varphi(z, h(z))=-\infty $, for every $z \in E$. The same arguments as in the proof of Lemma 3.1 in \cite{EMW} show that the function $z\mapsto \varphi(z, h(z))$ is $\mathcal{F}$-plurisubharmonic in $U$. Since $E$ is not pluripolar, it follows from Theorem \ref{PlfineC} that $\varphi(z, h(z))=-\infty$ everywhere in $U$. Hence $ \Gamma_{h}(U)\subset (\Gamma_{h}(E))^{\ast}_{{\mathbb C}^{n+1}}$. \end{proof}
\section{Some further questions}
\textbf{Question 1} Let $f$ be an $\mathcal{F}$-plurisubharmonic function defined in an $\mathcal{F}$-open set $U \subseteq {\mathbb C}^2$. Suppose that for each point $z\in U$ there is a compact
$\mathcal{F}$-neighborhood $K_z$ such that $f$ is continuous (in the usual sense) on $K_z$. Is it true that $f|_{K_z}$ is the uniform limit of usual plurisubharmonic functions $\varphi_n$ defined in Euclidean neighborhoods $W_n$ of $K_z$?
\textbf{Question 2} It is also interesting to figure out whether the assumption in the above question is automatically fulfilled. This would be the Brelot property for $\mathcal{F}$-plurisubharmonic functions.
Many other questions remain open. For example, we do not know the answer to the following.
\textbf{Question 3} Is this concept of an $\mathcal{F}$-plurisubharmonic function biholomorphically invariant?
\end{document}
Continuous distraction osteogenesis device with MAAC controller for mandibular reconstruction applications
Shahrokh Hatefi, Milad Etemadi Sh, Yimesker Yihun, Roozbeh Mansouri & Alireza Akhlaghi
BioMedical Engineering OnLine, volume 18, Article number: 43 (2019)
Distraction osteogenesis (DO) is a novel technique widely used in human body reconstruction. DO plays a significant role in maxillofacial reconstruction applications (MRA); through this method, bone defects and skeletal deformities in various cranio-maxillofacial areas can be reconstructed with superior results in comparison to conventional methods. Recent studies revealed that in a DO solution, using an automatic continuous distractor could significantly improve the results while reducing the existing issues. This study is aimed at designing and developing a novel automatic continuous distraction osteogenesis (ACDO) device to be used in MRA.
The design comprises a lead screw translation mechanism and a stepper motor, placed outside of the mouth, to generate the desired continuous linear force. This externally generated and controlled distraction force (DF) is transferred to the moving bone segment via a flexible miniature transition system. The system is also equipped with an extra-oral ACDO controller to generate an accurate, reliable, and stable continuous DF.
Simulation and experimental results have justified the controller outputs and the desired accuracy of the device. Experiments were conducted on a sheep jaw bone, and the results showed that the developed device can apply a continuous DF of 38 N with a distraction accuracy of 7.6 nm on the bone segment, while reducing the distraction time span.
Continuous DF with high-resolution positioning control, along with the smaller size of the distractor placed in the oral cavity, will help improve the result of the reconstruction operation and lead to a successful DO procedure in a shorter time period. The developed ACDO device has less than 1% positioning error while generating sufficient DF. These features make this device a suitable distractor for an enhanced DO treatment in MRA.
In maxillofacial reconstruction applications (MRA) different techniques have been used: autologous bone graft, allograft implantation, osteoconduction, osteoinduction, osteoprogenitor cells, and distraction osteogenesis (DO) [1,2,3]. In 1989, Ilizarov developed the DO technique and introduced a novel limb lengthening method. Subsequently, in 1992, McCarthy reported the first clinical case of a DO procedure on the mandible [4,5,6,7]. Since then, DO has been widely used as a treatment method to generate bone, to fill skeletal defects, or to correct congenital growth retardation of the bone tissue [5, 8, 9]. In MRA, the DO method is a new solution for tissue lengthening and is receiving increasing clinical attention as a technique that does not require a bone graft. The main advantage of this technique is that bone generation occurs along with the adaptation of the surrounding soft tissues; moreover, a more predictable treatment outcome can be obtained [8,9,10,11,12,13]. The method starts with the bone osteotomy and the installation of the device; after the latency period, the activation phase begins and the callus is gradually subjected to the distraction force (DF). The gap created by the distracted callus transforms into mature tissue during the consolidation phase, and then the device is removed [14, 15]. The external fixation distractor was developed by Ilizarov in 1987 [4, 16]. The major problems of the extra-oral type are scar formation, infection, and nerve injuries; such issues have led research groups to focus on developing intra-oral devices. Research has been done and different intra-oral distractors have been developed and used [10, 17,18,19,20,21,22,23]. In both internal and external devices, however, the actuation relies on manual length adjustment, with a potential for error in the procedure and low accuracy and reliability; the distractor is activated once or twice daily with a distraction rate (DR) between 0.25 and 1 mm per day [15, 24,25,26].
In addition, the long treatment period induces physical and psychological discomfort in the patient [5, 27]. Ilizarov used a quasi-continuous method and showed that by increasing the rhythm of distraction at a higher DR, superior results and a more rapid course of osteogenesis could be obtained [14, 16, 28, 29].
Recent studies have shown that using continuous DO could significantly increase the DR and expedite the bone healing process with a higher osteogenesis quality [7, 25, 28,29,30,31,32,33,34,35]. The key elements of the automatic continuous distraction osteogenesis (ACDO) treatment are the rate and the rhythm of the distraction, the distraction vector (DV), and the output DF generated by the distractor [24, 26, 36]. Research has been done on increasing the rate and the rhythm of the process [32], reducing the activation phase duration [37], advancing the distractor's safety [24, 28], and improving the distraction accuracy and the DV on unilateral models [29, 38, 39]. Various movement mechanisms and actuators have been used in the design and development of ACDO devices, including motor-based electromechanical systems [5, 12, 25, 35, 40,41,42,43], hydraulic valves [29, 44, 45], spring-mediated systems [46,47,48,49], shape memory alloys [48], load cells [50], and piezoelectric motors [24]. Existing ACDO devices can successfully distract the bone at a DR of up to 3 mm per day [7, 32]. Recently, research groups have been focusing on improving the distraction accuracy to enable a higher DR in a DO procedure [5, 32]. In a recent animal study on the minipig mandible, by increasing the distraction accuracy, a DR of up to 4.5 mm per day was successfully achieved [7, 32, 34]; as a result, by decreasing the total time of a DO protocol, the risks of complications during the treatment could be reduced [44]. The tendency is also to miniaturize the distractor for submucosal or subcutaneous application, especially in unfavorable anatomical regions [51]. Furthermore, reducing the size of the intra-oral part of the distractor may reduce the chance of tissue injuries, infections, and bone fracture [24, 27, 44]. Although developed ACDO devices have shown promising results compared to conventional manual methods, they remain of limited use in human clinical applications.
In general, further study and improvements are required, especially to maximize distraction accuracy, DR, reliability, and safety, and to minimize control complexity and size [5, 24, 35]. The hypothesis of this research is that by increasing the distraction accuracy and providing a smoother DF at a higher DR, superior results could be achieved in a shorter distraction period. In this study, a new ACDO device is designed and developed based on a lead screw and stepper motor combination to improve the distraction accuracy, the DR, and the activation phase. A novel automatic control method, the MAAC controller [52, 53], is implemented to generate an accurate, reliable, and stable continuous DF. In addition, for the intra-oral part of the device, a miniature distraction mechanism is designed and developed. A set of bench tests and simulation results are presented to validate the feasibility of the design, to assess the performance of the ex vivo model, and to identify the key engineering challenges to be addressed in further product development for animal studies and clinical applications.
To transfer the externally generated DF to the moving bone segment (BS) on the callus, the ACDO device consists of a miniature lead screw translation mechanism (TM), a micro controller, and a flexible shielded spring-wire transition system (TS). The details of these components are discussed in the following sub-sections.
The lead screw translation mechanism
The mechatronic part of the developed ACDO device receives the movement commands from the controller and generates the linear DF. Based on the design of the mechanism, a 3D model was sketched to show the system's functionality (Fig. 1). This unit consists of a Kiatronics 28BYJ-48 mini stepper motor and gearbox (code: 70289) with the specifications shown in Table 1. The gearbox is connected to a 4-mm solid shaft coupling to transmit the generated power from the stepper motor's shaft to the screw thread. To generate the translation motion along a linear axis, a lead screw of 4 mm diameter, with right-hand internal and external screw thread, 1-mm lead, 1-mm pitch, and a length of 50 mm, together with a carriage, is used, as shown in Fig. 2. This configuration converts the rotary motion into translation based on the specified DO parameters and generates a linear force. The controller can drive the stepper motor in three working states with varied linear and angular step movement, as shown in Table 2.
Fig. 1 The 3D model of the designed TM
Table 1 28BYJ-48 stepper motor specifications
Table 2 The positioning accuracy of the system
The positioning accuracy of the system can be calculated from the stepper motor's stride angle, the mechanical gearbox ratio (1/64), and the TM movement accuracy (1 mm/revolution); the movement accuracy of the developed ACDO device is 244.14 nm/step in full-step drive mode, 122.07 nm/step in half-step drive mode, and 7.63 nm/step in micro-step drive mode. Micro-step mode, the most accurate driving method, is selected for running the system, which means that for 1 mm of the distraction length (DL), the motor is driven by the controller for 131072 steps to complete the travel. The converted linear DF is transferred to the mechanical distractor installed on the callus in order to move the BS along the desired DV with predetermined factors. Figure 3 shows the schematic model of the mechatronic part of the device.
Fig. 3 The schematic model of the mechatronic part [54]
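The resolution figures above follow directly from the gearing chain; a quick arithmetic check (a sketch assuming 64 full steps per motor revolution for the 28BYJ-48 and 32 microsteps per full step, both consistent with the 131072 steps/mm quoted above):

```python
# Step resolution of the lead-screw drive, derived from the gearing chain.
# Assumptions: 64 full steps per motor revolution, a 1/64 gearbox, 1 mm of
# carriage travel per output-shaft revolution, and 32 microsteps per full step.
FULL_STEPS_PER_MOTOR_REV = 64
GEAR_RATIO = 64            # motor revolutions per output-shaft revolution
LEAD_MM = 1.0              # carriage travel per output-shaft revolution (mm)

def step_resolution_nm(substeps_per_full_step):
    """Linear carriage travel per (sub)step, in nanometres."""
    steps_per_mm = FULL_STEPS_PER_MOTOR_REV * GEAR_RATIO * substeps_per_full_step / LEAD_MM
    return 1e6 / steps_per_mm  # 1 mm = 1e6 nm

print(step_resolution_nm(1))   # full-step drive  -> 244.140625 nm
print(step_resolution_nm(2))   # half-step drive  -> 122.0703125 nm
print(step_resolution_nm(32))  # micro-step drive -> 7.62939453125 nm
```

With 32 microsteps per full step, one millimetre corresponds to 64 x 64 x 32 = 131072 steps, which matches the figure stated in the text.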
Figure 4 shows the block diagram of the ACDO controller; the controller has the capability to control and drive stepper motors with an L298 dual full-bridge driver. The outputs of the controller are connected to the mini stepper motor and gearbox, which is connected to the TM. Every step of the DO process is programmed and controlled in the developed ACDO device by an AVR micro controller with an open-loop control system. As the DL, DR, and distraction time (DT) are parameters that vary with the patient's condition, the surgeon sets these parameters with a removable keypad and a 2×16-character liquid crystal display panel in a programmed human–machine interface. A programmed ATmega32A 8-bit AVR micro controller is used to get the input data (DL, DR, and DT) from the user, to calculate the distraction parameters (including the step rate and rhythm), and to save the distraction data (DD) in an AT24C02A serial EEPROM with a real-time backup process.
Fig. 4 The block diagram of the ACDO controller
In addition, in another subsequent connection, the DL, the DR, and the DT are displayed on the display panel; this feature helps to monitor and edit the distraction parameters whenever required. A 32.768 kHz real-time clock oscillator is also used with the controller to provide an accurate 8-bit internal timer. Figure 5 shows the designed and implemented controller circuit of the ACDO device.
Fig. 5 The controller circuit of the ACDO device
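The conversion of the clinician-entered DL and DR into a continuous step schedule can be sketched as follows. This is a hypothetical illustration in Python, not the actual AVR firmware; `distraction_schedule` is an assumed helper built only on the 131072 steps/mm micro-step resolution stated above.

```python
# Hypothetical sketch of how the controller could turn user inputs (DL, DR)
# into a continuous stepping schedule. Not the actual AVR firmware.
STEPS_PER_MM = 131072  # micro-step drive: 1 mm = 131072 steps (7.63 nm/step)

def distraction_schedule(dl_mm, dr_mm_per_day):
    """Return (total steps, seconds between steps, total time in hours)
    for a continuous distraction of dl_mm at dr_mm_per_day."""
    total_steps = round(dl_mm * STEPS_PER_MM)
    step_interval_s = 86400.0 / (dr_mm_per_day * STEPS_PER_MM)
    total_time_h = dl_mm / dr_mm_per_day * 24.0
    return total_steps, step_interval_s, total_time_h

# Example: a 15 mm DL at a DR of 3 mm/day gives 1,966,080 micro-steps,
# one step roughly every 0.22 s, over a 120 h activation phase.
steps, interval, hours = distraction_schedule(15, 3)
```

The sub-second step interval illustrates why the distraction is effectively continuous compared with the once- or twice-daily activation of manual distractors.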
Modeling and simulation of the motor
The ACDO controller can drive the stepper motor in three different working states with varied linear and angular movement, as shown in Table 2. The micro-step driving method provides improved motion stability and resolution, while increasing the step accuracy and the system's performance compared to full- and half-step driving techniques. It is implemented by partially exciting different phase windings at the same time. Using micro-stepping also improves the movement by eliminating low-speed ripple and resonance effects. The mathematical equations of the hybrid stepper motor are given below; these are the differential equations of the dynamic model of the motor: (1) and (2) are the electrical equations, and (3) and (4) are the mechanical equations [59].
$$ \frac{di_a}{dt} = \frac{v_a + K_m \cdot \omega \cdot \sin\left( N \cdot \theta \right) - R \cdot i_a}{L} $$
$$ \frac{di_b}{dt} = \frac{v_b - K_m \cdot \omega \cdot \cos\left( N \cdot \theta \right) - R \cdot i_b}{L} $$
$$ \frac{d\omega}{dt} = \frac{K_m \cdot i_b \cdot \cos\left( N \cdot \theta \right) - T - K_m \cdot i_a \cdot \sin\left( N \cdot \theta \right) - K_v \cdot \omega}{J} $$
$$ \frac{d\theta }{dt} = \omega $$
In the given equations, ia (the current) and va (the voltage) are the parameters of phase A; ib (the current) and vb (the voltage) are the parameters of phase B; ω is the rotor rotational speed (rad/s), T is the load torque (N m), θ is the rotor angular position (rad), and N is the number of rotor teeth. Some factors are neglected in the modeling of the motor, including the detent torque, the change in inductance, and the magnetic coupling between phases. To evaluate the design and the selected movement technique, the model and simulation of the stepper motor were implemented in MATLAB-SIMULINK. Figure 6 shows the subsystem of the current based on Eqs. (1) and (2). Figure 7 shows the subsystems of speed and position based on Eqs. (3) and (4). The simulated model of the stepper motor and the diagrams are shown in Fig. 8.
Fig. 6 The current subsystem
Fig. 7 Speed and position subsystems
Fig. 8 The overall simulation model of the stepper motor
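The dynamic model of Eqs. (1)-(4) can also be integrated outside SIMULINK; the sketch below uses a simple fixed-step Euler scheme in Python with a two-phase sine/cosine (micro-step-like) excitation. All parameter values (R, L, Km, Kv, J, N, drive amplitude and frequency) are illustrative assumptions, not measured 28BYJ-48 constants.

```python
# Pure-Python fixed-step (Euler) integration of the dynamic model in
# Eqs. (1)-(4). Parameter values are illustrative assumptions only.
import math

R, L, Km, Kv, J, N = 50.0, 0.03, 0.05, 1e-4, 1e-6, 50
T_load = 0.0                 # load torque (N m)
Vm, fe = 5.0, 10.0           # drive amplitude (V) and electrical frequency (Hz)

def simulate(t_end=0.05, dt=1e-5):
    ia = ib = w = th = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        va = Vm * math.cos(2 * math.pi * fe * t)  # phase-A excitation
        vb = Vm * math.sin(2 * math.pi * fe * t)  # phase-B, 90 deg displaced
        dia = (va + Km * w * math.sin(N * th) - R * ia) / L       # Eq. (1)
        dib = (vb - Km * w * math.cos(N * th) - R * ib) / L       # Eq. (2)
        dw = (Km * ib * math.cos(N * th) - T_load
              - Km * ia * math.sin(N * th) - Kv * w) / J          # Eq. (3)
        ia += dia * dt
        ib += dib * dt
        w += dw * dt
        th += w * dt                                              # Eq. (4)
    return ia, ib, w, th

ia, ib, w, th = simulate()
```

As in the SIMULINK model, the two phase currents settle into sine- and cosine-like waveforms displaced by 90 degrees, and the rotor speed tracks the rotating field.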
Flexible shielded transition system
The TS consists of a flexible miniature single-lumen catheter and a flexible stainless-steel spring-wire guide to transfer the DF to the callus. Figure 9 shows the schematic model of the designed TS in the ACDO device. A mechanical fixture placed on the carriage transfers the linear DF to the spring-wire connector. The generated linear DF pushes the shielded spring-wire connector, and the DF is fully transferred to the moving BS through the flexible single-lumen catheter. One side of the spring-wire guide is connected to the mechanical fixture on the TS and the other side is connected to another fixture on the mechanical part of the distractor placed on the moving BS. The mechanical part placed on the bone side consists of one 3×3×10 mm stainless-steel solid fixture to fix the end of the flexible shield to the fixed bone part, one 3×3×3 mm stainless-steel solid fixture to fix the end of the spring-wire connector to the moving BS, and two custom-designed 3×3×25 mm stainless-steel miniature guide rails to provide a stable distraction in the desired DV with a maximum travel of 22 mm. Four 1.5-mm holes were drilled into the fixed bone part and three similar holes were drilled into the moving BS. Subsequently, seven biocompatible self-tapping titanium bone screws, 2 mm in diameter and 6 mm long (TREC, Germany), are used to fix these mechanical components to the BS and to provide a linear DV. For each movement command generated by the controller, the motor is driven in micro-step drive mode and the carriage moves forward 7.63 nm; consequently, the spring-wire connector pushes the BS forward by 7.63 nm.
Fig. 9 The schematic model of the TS
Following the design and development of the ACDO device, experiments were performed on a sheep jaw bone distraction model. In this experiment, the jaw bone of a two-year-old female sheep was used. The sheep jaw bone is similar to the human jaw in its anatomic, macroscopic, and physiologic properties [5, 60]. Based on the literature and the specifications of existing devices, it can be deduced that a typical DO treatment for different cranio-maxillofacial areas, including the mandible, alveolar bone, mid-face, and cranio-orbit, involves a DL of 10 to 20 mm, a DR of 1 to 3 mm/day, and a DT of 7 to 10 days [15, 25, 33]. To cover all clinical conditions of the treatment, six different tests with various repeat cycles, DT, DL, and DR were carried out with the predetermined factors shown in Table 3. Figure 10 shows the developed device connected to the jaw bone. The DR, the DT, and the DL were measured in all experimental tests with an 8-bit digital timer–counter and a Mitutoyo 0–300 mm digital caliper with a precision of 0.01 mm and a resolution of 0.01 mm. These parameters were used to calculate the results of the DO procedure and the error percentages of the factors with different input data. Statistical analysis was performed with descriptive tests, and graphical results were generated using MATLAB software.
Table 3 Predetermined factors of the tests
Fig. 10 The device connected to the sheep jaw bone
For the DF measurement, a standardized testing environment with an approximate temperature of 30 °C and an atmospheric pressure of 1×10^5 Pa was used. The maximum generated DF was then measured with a horizontally fixed WeiHeng DP-G004 digital spring scale with an accuracy of 0.1 N. Figure 11 shows the carriage connected to the fixed digital scale for the DF measurement.
Fig. 11 The DF measurement experiment
The controller drives the motor in micro-step mode with an open-loop control method. After all parameters of the selected motor were defined in the designed model, the simulation was run. The detailed waveforms shown in Fig. 12 are the outputs of the simulated model. The simulation execution time was set to one second. The Ia waveform shows the electric current in phase a, and the Ib waveform shows the electric current in phase b. In the same way, the Va waveform shows the voltage in phase a, and the Vb waveform shows the voltage in phase b. The rotational speed of the stepper motor and the shaft's position are the other simulation outputs, as shown in Fig. 13.
Fig. 12 Simulation results of the stepper motor
In all test conditions, the movement of the moving BS was easily achieved without any failure in the mechanical and electrical parts of the device. The recorded movement was accurate and stable. The mean measured distraction length (MMDL) and the mean calculated distraction rate (MCDR) of the tests are summarized in Table 4. The corresponding mean measured distraction length error, the mean calculated distraction rate error, the mean calculated step error, the DR error rate, the DL error rate, and the mean calculated step error rate of the tests are summarized in Table 5. The results showed that all test groups had the expected results, with a step error rate of less than 6%, a DL error of less than 1%, and a maximum DR error rate of 4%.
Table 4 The mean measured factors of the tests
Table 5 The mean measured errors of the tests
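The DL and DR error rates reported in Table 5 follow the usual relative-error definition; a minimal sketch (the numbers in the example are hypothetical, not values taken from Table 4):

```python
# Relative error between a programmed (target) value and the measured value,
# as used for the DL and DR error rates reported in Table 5.
def error_rate_percent(target, measured):
    return abs(measured - target) / target * 100.0

# Hypothetical illustration: a programmed DL of 15.00 mm measured as 14.92 mm
# gives an error rate of about 0.53%, within the sub-1% DL error observed.
err = error_rate_percent(15.00, 14.92)
```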
Figure 14 shows the MMDL and the mean measured DT of the test groups. Another experiment was carried out to measure the continuous DF generated by the device; the result showed that in all test conditions the device generated a DF of 38 N during the distraction.
Fig. 14 The mean measured DT and the MMDL of the tests
DO is a recent technique regularly used in MRA; the success of this treatment depends on the rate and the rhythm of distraction, the generated DF, and the DV [13, 24, 26, 36]. Different methods have been used for developing ACDO devices and improving these influencing factors. In spring-mediated continuous distractors, the reduced spring force and the nonlinear DV are major limitations [5, 18, 33]. In motor-based automatic distraction devices, the size is increased due to the attached gearbox, which may cause bone fracture and post-operative infections [12, 26, 43, 60, 61]. The main limitation of hydraulic devices is that the distractor is not able to generate a constant amount of DF, and there is a load peak when the device executes the distraction. In addition, the intra-oral valve and the tube connection have a bigger size and increase the risk of infection and bone fracture [40, 44, 51]. Another general problem is software-related issues, which cause instability, measurement errors, and restarts of the whole process [7, 32, 44]. In general, motor-based systems offer more suitable controllability, distraction accuracy, reliability, actuation power, and biocompatibility compared to other mechanisms [35]. Table 6 shows the specifications of the existing motor-based and hydraulic ACDO devices.
Table 6 The existing ACDO devices and their specifications
The minimum DF needed for moving the BS is about 35 N [24, 35, 44, 51, 64,65,66]; in addition, the distractor should allow continuous extension of the BS with a constant DF, avoiding high loading peaks and tissue damage [51]. According to Table 6, hydraulic devices are capable of generating an average DF of 25 N with a load peak of 40 N [7, 32, 44, 51], while motor-driven systems are capable of generating a constant amount of DF. Two of the motor-based distractors can generate sufficient DF for a DO procedure; however, they are limited in distraction accuracy, DR, and DL [35, 40]. The most accurate of the existing distractors is a motor-based system; the distraction accuracy of this device is 0.75 µm, the step error is 30 µm, and the DR is 3 mm/day [63]. The objective of this study was to design and develop a high-precision ACDO device for bone distraction that provides a constant amount of DF for a soft and continuous distraction, while decreasing the size of the intra-oral distractor. The proposed device is equipped with an extra-oral MAAC controller capable of controlling the system in different conditions while driving along a linear axis with a maximum positioning accuracy of 7.63 nm. In addition to enabling high levels of distraction accuracy, the stepper motor in micro-step drive mode provides a much smoother movement, less vibration, and noiseless operation; it also lowers system complexity and cost. This is because the stator flux is moved in a more continuous way compared to other drive modes, allowing precise and smooth control of the rotor stop position [54,55,56,57] and, consequently, a soft continuous distraction of the BS. From the simulation results it can be deduced that the voltage waveforms of the two phases of the stepper motor are displaced by 90°; in addition, the current waveforms of the phases are similar to sine and cosine waveforms with a 90° displacement.
Simulation results show that the designed control system and the driving method used in this device work well under different conditions and agree with the theoretical equations. Furthermore, experimental tests were carried out by varying the DR from 1 to 5 mm/day, the DL from 10 to 20 mm, and the DT from 48 to 480 h. The results show that the device moves accurately, with a DL error rate of less than 1% and a DR error rate of less than 4% in all experimental test phases, with great repeatability. The measured output force, including a preload in the axial direction, showed that in all test conditions listed in Table 3 the pushing DF during distraction is 38 N. Therefore, the device is capable of providing a sufficient, constant DF under different DO conditions. In addition to the improved distraction accuracy and smoother DF, the size of the mechanical part placed in the oral cavity is decreased to 25 mm. The device is equipped with a simple and user-friendly human–machine interface with a liquid crystal display and keypad for programming and debugging. This feature allows the user to set various DO working factors and to check or modify the working parameters during the DO procedure. The serial EEPROM connected to the controller provides a real-time backup system, and the controller can read the DD at any moment. In the case of an unwanted error or system failure, the device is capable of reading and recovering the DD and continuing the distraction procedure without any movement errors.
This study also had some limitations. The ex vivo model test is limited, and no clinical prospect can be directly deduced from it. The software simulation was limited to the motor only; some of the influencing factors, including detent torque, the change in inductance, and magnetic coupling between phases, were neglected. The experimental tests were limited to a single jaw bone model. The device was fabricated entirely in house, which limited the choice of materials and the fabrication of complicated parts. However, the prototype served well in demonstrating the design concept and functionality of the automatic continuous DO procedure.
Conclusion and future works
A newly designed ACDO device using a mini motor and gearbox, a miniature TM, and a TS has been developed for MRA and meets all the necessary mechanical and medical functions. The experimental test results have validated its stability, reliability, and movement accuracy. The device has a positioning error of less than 1% with sufficient DF while generating a continuous force. The DR can be adjusted to reduce the activation phase and the DT in the DO process. A simple, continuously available control and monitoring interface makes the device easy to use. The on-line DD backup plan makes the system stable and reliable against unwanted failures, so no surgery is needed in the case of software or controller failure. The miniature flexible TS and the small size of the mechanical part placed on the callus increase the potential of the device for different cranio-maxillofacial areas, including the mandible, alveolar bone, mid-face, and cranio-orbit. This device is a suitable distractor for animal studies; in the future, it will be tested in human MRA as an enhanced continuous DO solution. Additional improvements can be made in several areas to maximize its future potential and success, such as the DV, reducing the size of the device, and adding a wireless communication system for the packed display and keypad panel to enable ongoing monitoring of the working DD. Developing a rechargeable high-power battery system with an electronic gauge and a low-battery alarm could make this device more suitable for MRA in humans.
MRA:
maxillofacial reconstruction applications
ACDO:
automatic continuous distraction osteogenesis
DF:
distraction force
DR:
distraction rate
DV:
distraction vector
BS:
bone segment
TM:
translation mechanism
TS:
transition system
DL:
distraction length
DD:
distraction data
DT:
distraction time
MMDL:
mean measured distraction length
MCDR:
mean calculated distraction rate
Dimitriou R, et al. Bone regeneration: current concepts and future directions. BMC Med. 2011;9(1):66.
El-Ghannam A. Bone reconstruction: from bioceramics to tissue engineering. Expert Rev Med Devices. 2005;2(1):87–101.
Perry CR. Bone repair techniques, bone graft, and bone graft substitutes. Clin Orthopaed Relat Res. 1999;360:71–86.
Ilizarov GA. The principles of the Ilizarov method. Bull Hosp Joint Dis Orthop Instit. 1987;48(1):1–11.
Aykan A, et al. Mandibular distraction osteogenesis with newly designed electromechanical distractor. J Craniofacial Surg. 2014;25(4):1519–23.
Codivilla A. The classic: on the means of lengthening, in the lower limbs, the muscles and tissues which are shortened through deformity. Clin Orthop Relat Res. 2008;466(12):2903–9.
Peacock ZS, et al. Automated continuous distraction osteogenesis may allow faster distraction rates: a preliminary study. J Oral Maxillofac Surg. 2013;71(6):1073–84.
Mofid MM, et al. Craniofacial distraction osteogenesis: a review of 3278 cases. Plast Reconstr Surg. 2001;108(5):1103–14 (discussion 1115–7).
Molina F. Mandibular distraction osteogenesis: a clinical experience of the last 17 years. J Craniofacial Surg. 2009;20(8):1794–800.
Karp NS, et al. Bone lengthening in the craniofacial skeleton. Ann Plast Surg. 1990;24(3):231–7.
Zhang Y-B, et al. Local injection of substance P increases bony formation during mandibular distraction osteogenesis in rats. Br J Oral Maxillofac Surg. 2014;52(8):697–702.
Dundar S, et al. Comparison of the effects of local and systemic zoledronic acid application on mandibular distraction osteogenesis. J Craniofacial Surg. 2017;28(7):e621–5.
Amir LR, Everts V, Bronckers AL. Bone regeneration during distraction osteogenesis. Odontology. 2009;97(2):63–75.
Ilizarov GA. The tension-stress effect on the genesis and growth of tissues: part II. The influence of the rate and frequency of distraction. Clin Orthop Relat Res. 1989;239:263–85.
Cano J, et al. Osteogenic alveolar distraction: a review of the literature. Oral Surg Oral Med Oral Pathol Oral Radiol Endodontol. 2006;101(1):11–28.
Ilizarov GA. The tension-stress effect on the genesis and growth of tissues: part I. The influence of stability of fixation and soft-tissue preservation. Clin Orthop Relat Res. 1989;238:249–81.
McCarthy JG, et al. Lengthening the human mandible by gradual distraction. Plast Reconstr Surg. 1992;89(1):1–8.
Zhou H-Z, et al. Rapid lengthening of rabbit mandibular ramus by using nitinol spring: a preliminary study. J Craniofacial Surg. 2004;15(5):725–9.
Kojimoto H, et al. Bone lengthening in rabbits by callus distraction. The role of periosteum and endosteum. Bone Jt J. 1988;70(4):543–9.
Dzhorov A, Dzhorova I. Maxillofacial surgery and distraction osteogenesis—history, present, perspective. Khirurgiia. 2002;59(6):30–5.
Karp NS, et al. Membranous bone lengthening: a serial histological study. Ann Plast Surg. 1992;29(1):2–7.
Tong H, et al. Midface distraction osteogenesis using a modified external device with elastic distraction for crouzon syndrome. J Craniofacial Surg. 2017;28(6):1573–7.
Swennen G, Dempf R, Schliephake H. Cranio-facial distraction osteogenesis: a review of the literature. Part II: experimental studies. Int J Oral Maxillofacial Surg. 2002;31(2):123–35.
Park J-T, et al. A piezoelectric motor-based microactuator-generated distractor for continuous jaw bone distraction. J Craniofacial Surg. 2011;22(4):1486–8.
Zheng L, et al. High-rhythm automatic driver for bone traction: an experimental study in rabbits. Int J Oral Maxillofac Surg. 2008;37(8):736–40.
Djasim UM, et al. Continuous versus discontinuous distraction: evaluation of bone regenerate following various rhythms of distraction. J Oral Maxillofac Surg. 2009;67(4):818–26.
Van Strijen P, et al. Complications in bilateral mandibular distraction osteogenesis using internal devices. Oral Surg Oral Med Oral Pathol Oral Radiol Endodontol. 2003;96(4):392–7.
Kessler P, Neukam F, Wiltfang J. Effects of distraction forces and frequency of distraction on bony regeneration. Br J Oral Maxillofac Surg. 2005;43(5):392–8.
Wiltfang J, et al. Continuous and intermittent bone distraction using a microhydraulic cylinder: an experimental study in minipigs. Br J Oral Maxillofac Surg. 2001;39(1):2–7.
Rowe NM, et al. Rat mandibular distraction osteogenesis: part I. Histologic and radiographic analysis. Plast Reconstr Surg. 1998;102(6):2022–32.
Mehrara BJ, et al. Rat mandibular distraction osteogenesis: II. Molecular analysis of transforming growth factor beta-1 and osteocalcin gene expression. Plast Reconstr Surg. 1999;103(2):536–47.
Peacock ZS, et al. Bilateral continuous automated distraction osteogenesis: proof of principle. J Craniofacial Surg. 2015;26(8):2320–4.
Goldwaser BR, et al. Automated continuous mandibular distraction osteogenesis: review of the literature. J Oral Maxillofac Surg. 2012;70(2):407–16.
Peacock ZS, et al. Skeletal and soft tissue response to automated, continuous, curvilinear distraction osteogenesis. J Oral Maxillofac Surg. 2014;72(9):1773–87.
Chung M, et al. An implantable battery system for a continuous automatic distraction device for mandibular distraction osteogenesis. J Med Devices. 2010;4(4):045005.
Zheng LW, Ma L, Cheung LK. Angiogenesis is enhanced by continuous traction in rabbit mandibular distraction osteogenesis. J Cranio-Maxillofacial Surg. 2009;37(7):405–11.
Troulis MJ, et al. Effects of latency and rate on bone formation in a porcine mandibular distraction model. J Oral Maxillofac Surg. 2000;58(5):507–13.
Yeshwant K, et al. Analysis of skeletal movements in mandibular distraction osteogenesis. J Oral Maxillofac Surg. 2005;63(3):335–40.
Ritter L, et al. Range of curvilinear distraction devices required for treatment of mandibular deformities. J Oral Maxillofac Surg. 2006;64(2):259–64.
Crane NB, et al. Design and feasibility testing of a novel device for automatic distraction osteogenesis of the mandible. In: ASME 2004 international design engineering technical conferences and computers and information in engineering conference. American Society of Mechanical Engineers. 2004.
Dias JMRDS. Towards the development of an automatic maxillary expansion appliance. 2016.
Savoldi F, et al. The biomechanical properties of human craniofacial sutures and relevant variables in sutural distraction osteogenesis: a critical review. Tissue Eng. 2017;24:225–36.
Meyers N, et al. Novel systems for the application of isolated tensile, compressive, and shearing stimulation of distraction callus tissue. PLoS ONE. 2017;12(12):e0189432.
Magill JC, et al. Automating skeletal expansion: an implant for distraction osteogenesis of the mandible. J Med Devices. 2009;3(1):014502.
Ayoub A, Richardson W. A new device for microincremental automatic distraction osteogenesis. Br J Oral Maxillofac Surg. 2001;39(5):353–5.
Mofid MM, et al. Spring-mediated mandibular distraction osteogenesis. J Craniofacial Surg. 2003;14(5):756–62.
Zhou H-Z, et al. Transport distraction osteogenesis using nitinol spring: an exploration in canine mandible. J Craniofacial Surg. 2006;17(5):943–9.
Idelsohn S, et al. Continuous mandibular distraction osteogenesis using superelastic shape memory alloy (SMA). J Mater Sci Mater Med. 2004;15(4):541–6.
Yamauchi K, et al. Timed-release system for periosteal expansion osteogenesis using NiTi mesh and absorbable material in the rabbit calvaria. J Cranio-Maxillofacial Surg. 2016;44(9):1366–72.
Wee J, et al. Development of a force-driven distractor for distraction osteogenesis. J Med Devices. 2011;5(4):041004.
Keßler P, Wiltfang J, Neukam FW. A new distraction device to compare continuous and discontinuous bone distraction in mini-pigs: a preliminary report. J Cranio-Maxillofacial Surg. 2000;28(1):5–11.
Hatefi S, Ghahraei O, Bahraminejad B. Design and development of a novel multi-axis automatic controller for improving accuracy in CNC applications. Majlesi J Elect Eng. 2017;11(1):19.
Hatefi S, Ghahraei O, Bahraminejad B. Design and development of a novel CNC controller for improving machining speed. Majlesi J Elect Eng. 2016;10(1):7.
Hatefi K, Hatefi S, Etemadi M. Distraction osteogenesis in oral and maxillofacial reconstruction applications: feasibility study of design and development of an automatic continuous distractor. Majlesi J Elect Eng. 2018;12(3):69.
Baluta G, Coteata M. Precision microstepping system for bipolar stepper motor control. In: International Aegean conference on electrical machines and power electronics, 2007. ACEMP'07. 2007.
McGuinness J. Advantages of five phase motors in microstepping drive. In: IEEE colloquium on stepper motors and their control. 1994. IET.
Anish N, et al. FPGA based microstepping scheme for stepper motor in space-based solar power systems. In: 7th IEEE international conference on industrial and information systems (ICIIS). 2012.
Zhang X, He J, Sheng C. An approach of micro-stepping control for the step motors based on FPGA. In: ICIT 2005. IEEE international conference on industrial technology. 2005.
Bendjedia M, et al. Position control of a sensorless stepper motor. IEEE Trans Power Electron. 2012;27(2):578–87.
Ploder O, et al. Mandibular lengthening with an implanted motor-driven device: preliminary study in sheep. Br J Oral Maxillofac Surg. 1999;37(4):273–6.
Schmelzeisen R, Neumann G, Von der Fecht R. Distraction osteogenesis in the mandible with a motor-driven plate: a preliminary animal study. Br J Oral Maxillofac Surg. 1996;34(5):375–8.
Ayoub A, Richardson W, Barbenel J. Mandibular elongation by automatic distraction osteogenesis: the first application in humans. Br J Oral Maxillofac Surg. 2005;43(4):324–8.
Zheng LW, Wong MC, Cheung LK. Quasi-continuous auto driven system with multiple rates for distraction osteogenesis. Surg Innovat. 2011;18(2):156–9.
Robinson RC, O'Neal PJ, Robinson GH. Mandibular distraction force: laboratory data and clinical correlation. J Oral Maxillofac Surg. 2001;59(5):539–44.
Suzuki EY, Suzuki B. A simple mechanism for measuring and adjusting distraction forces during maxillary advancement. J Oral Maxillofac Surg. 2009;67(10):2245–53.
Burstein FD, Lukas S, Forsthoffer D. Measurement of torque during mandibular distraction. J Craniofacial Surg. 2008;19(3):644–7.
SH and ME researched literature, conceived the study, and performed the product design, prototype development and testing, and wrote the original draft of the manuscript. YY contributed to the design and verification method and to the manuscript preparation. RM and AA contributed to the computer simulation and to the manuscript preparation. All authors read and approved the final manuscript.
The research data related to the design and simulation results are included within the article. For more information on the data, contact the corresponding author.
All the authors have provided consent for publication.
Department of Mechatronics Engineering, Nelson Mandela University, Port Elizabeth, South Africa
Shahrokh Hatefi
Department of Oral and Maxillofacial Surgery, Isfahan University of Medical Sciences, Isfahan, Iran
Milad Etemadi Sh
Department of Mechanical Engineering, Wichita State University, Wichita, USA
Yimesker Yihun
Center for Advanced Engineering Research, Najaf Abad Branch, Islamic Azad University, Isfahan, Iran
Roozbeh Mansouri
Isfahan University of Medical Sciences, Isfahan, Iran
Alireza Akhlaghi
Correspondence to Milad Etemadi Sh.
Hatefi, S., Etemadi Sh, M., Yihun, Y. et al. Continuous distraction osteogenesis device with MAAC controller for mandibular reconstruction applications. BioMed Eng OnLine 18, 43 (2019). https://doi.org/10.1186/s12938-019-0655-0
Automatic continuous distractor
Journal of the European Optical Society-Rapid Publications
Macro to nano specimen measurements using photons and electrons with digital holographic interferometry: a review
María del Socorro Hernández-Montes1,
Fernando Mendoza-Santoyo1,
Mauricio Flores Moreno1,
Manuel de la Torre-Ibarra2,
Luis Silva Acosta1 &
Natalith Palacios-Ortega1
Journal of the European Optical Society-Rapid Publications volume 16, Article number: 16 (2020)
Today digital holographic interferometry (DHI) is considered a modern full-field non-destructive technique that allows the generation of quantitative 3D data from a wide variety of specimens. Diverse optical setups for DHI enable the study of specimens in static and dynamic conditions: it is a viable alternative for characterizing a wide diversity of parameters in the micro and macro world by conducting repeatable, reliable and accurate measurements that render specimen data, e.g., displacements, shape, spatial dimensions, physiological conditions, refractive indices, and vibration responses. This paper presents a review of, and progress on, the most significant topics, contributions and applications involving DHI for the study of different specimens such as cells, bio tissues, grains, insects, and nano-structures. For most of the research work involving macro and micro specimens the wave-like source used in the measurements was photons from a laser, while the studies carried out in the nano regime used the wave-like nature of the electron.
Dennis Gabor reported the invention of Holography in 1949, his main concern being aberration correction in electron microscopes. At the time, the lack of coherent electron sources meant that the hologram reconstruction was done using quasi-coherent light sources. As a result, Holography did not produce enough results to be considered an indispensable tool, even though a device called a wire biprism was invented to combine the object and reference beams. The invention of the laser at the end of the 1950's gave a great leap to Holography, since this light source is highly coherent, and hence led to the invention of Holographic Interferometry (HI) during the first lustrum of the 1960's. A seminal manuscript introducing the concept of HI was written by Powell and Stetson, who described the technique as one furthering the studies of the Leith-Upatnieks hologram for the time average of coherent wavefronts scattered by a vibrating object [1]. The hologram was recorded on a photographic plate that had to undergo the chemical process needed to develop and fix the film. A step forward in the development of HI was the electronic detection of holograms reported by Goodman and Lawrence [2], for computer-synthesized holograms [3], and in the earlier work reported by Brown et al. [4]. Perhaps the first manuscript to coin the term holographic interferometry was published in 1966 by Heflinger et al. [5], who reported the well-known fact that the holographic technique of photography is founded on interference phenomena, and hence the double-exposure technique to obtain interference fringes in the holograms. Digital Holography (DH) takes over from conventional Holography when digital recording of the holograms is achieved with electronic devices such as CCD sensors instead of the so-called holographic film emulsion.
One of the main advantages of DH recording is that the data can be stored, allowing the numerical reconstruction of both the intensity and phase of the object wavefront. As reported by Schnars et al. [6, 7], digital recording and numerical reconstruction of holograms started with the first publications in the field, including not only the digital recording of holograms by an electronic camera but also the implementation of basic diffraction algorithms to numerically reconstruct the intensity and phase information embedded in a hologram. After these first communications, many researchers linked the term Digital Holographic Interferometry (DHI) to the process of digitally recording a hologram and the use of numerical algorithms based on classical diffraction theory to extract the intensity and phase information embedded in the holograms [8]. DHI directly derives from DH, just as conventional holographic interferometry naturally derived from holography during the middle of the 1960's; DHI allows the measurement of the object amplitude and phase from objects that undergo static or dynamic changes in time. Thus, it is used to record two different states of the object and permits various measurements from the phase variations, for example, surface deformation, surface shape, refractive index, and vibration conditions. Furthermore, DHI also refers to the numerical processing of the holograms to get intensity and phase information by spatial filtering in the Fourier domain [9,10,11].
In DHI an interference image, formed by the overlapping of the diffracted object beam and the reference beam, is captured and recorded at the location of the digital sensor (today usually a CCD or similar camera). The amplitude and phase of the object are encoded into the intensity of the hologram. Currently, DHI used with photons is widely applied in a quantitative and qualitative manner to perform macro or micro measurements, for instance, in the physical, mechanical and physiological characterization of parameters from ample and diverse specimen types. This discipline in the Optics field has successfully evolved to become a trusted tool in a wide variety of areas.
Electrons also behave as waves and thus can create interferograms. Reliable, highly coherent electron sources became available by the late 1970's, a fact that gave an outstanding impulse to electron holography, not only because of the coherent field electron guns (feg) used in the hologram reconstruction process but also because of the appearance of electron holographic interferometry (EHI) as a "quasi-non-invasive" measurement tool in transmission electron microscopes (TEM) set in the so-called holography mode. Today, nanomaterials and structures belonging to a wide variety of areas can be characterized with regard to their physical and mechanical parameters using HI with light sources and EHI. Indeed, many reconstruction processes learnt with DHI using photons can be directly applied to EHI. For most of the specimen characterization involving micro to nano work, the source used in the measurement systems was either photons from a laser or electrons from a feg.
In this paper, we review and discuss the state of the art in DHI and EHI applications for studying the displacements, shape, spatial dimensions, physiological conditions, refractive indices and vibration responses of different samples such as bio tissues, grains, insects, nanoparticles, cells, and bacteria, among others. These examples show that both DHI and EHI have become reliable and robust metrology tools, gradually adopted in the domain of biological/medical applications.
Digital holographic interferometry (DHI)
Digital Holographic Interferometry (DHI) provides an interferometric comparison of objects or events separated in time and space. In DHI, the holographic images are formed from the superposition of the reference and object waves: one image corresponds to the object in a known base state, while a second image is acquired after the object has undergone a displacement or deformation. Both images are recorded on the CCD sensor and stored in the PC for further processing, with the aim of recovering the amplitude and phase of the object in its base and deformed states. DHI is a full-field-of-view technique that has been applied to investigate a wide variety of objects in numerous unique applications. Depending on the application, DHI uses cw or pulsed lasers and can be configured in either transmission mode, for instance to look at transparent objects (such as cells) and obtain refractive index changes or sample thickness [12], or scattering mode (diffuse or non-specular objects), to obtain displacement, deformation or rotation information from the object [13].
Optical schematics of DHI
A typical arrangement for the transmission mode used to study phase objects is shown in Fig. 1a. The optical layout is that of a Mach-Zehnder interferometer, where a laser beam is divided into reference and object waves by means of a beam splitter (BS1). The reference wave is guided through an optical fiber (OF), while the object wave is expanded with a microscope objective; a mirror (M1) directs the beam through the sample, and with the help of another beam splitter (BS2) the reference and object waves are combined to form an interference pattern (also called an image hologram) on a CCD sensor. The arrangement for diffuse objects is depicted in Fig. 1b; its image acquisition procedure is the same as for phase objects. The optical fiber in both arrangements can be replaced with a mirror to guide the reference beam; however, the use of optical fibers has recently allowed simpler arrangements for holographic interferometry and also avoids the use of lenses to manipulate the size of the reference beam. Nowadays, systems that integrate digital camera sensors, optical fibers, and even piezoelectric fiber-optic modulators are known as Optoelectronic Holography Systems [14]. Usually, for both arrangements an aperture is located right before the second beam splitter (BS2). Its use helps to increase the imaging system's depth of focus, controls the amount of light that reaches the sensor, and serves as a band-pass filter for the object image. The latter is further used during the holographic image processing, which involves data manipulation in the Fourier space, where the image data are spatially filtered in order to finally obtain the object's amplitude and phase. Numerically, the extraction of the phase information is usually made according to the Fourier-Takeda algorithm [9,10,11]. This procedure is detailed in the next section.
a Holography Interferometry for phase objects, and b Holography Interferometry for opaque objects
Briefly, in mathematical terms, the DHI method to retrieve the optical phase is based on the interaction of a smooth reference wave R and an object wave O. The intensity recorded on the digital camera's sensor for each image hologram is given by
$$ I\left({x}_H,{y}_H\right)={\left|R\left({x}_H,{y}_H\right)\right|}^2+{\left|O\left({x}_H,{y}_H\right)\right|}^2+R\left({x}_H,{y}_H\right){O}^{\ast}\left({x}_H,{y}_H\right)+{R}^{\ast}\left({x}_H,{y}_H\right)O\left({x}_H,{y}_H\right) $$
where xH and yH are the x and y coordinates at the image hologram plane H (sensor) and the * denotes the complex-conjugate amplitude. The last two terms of Eq. (1) contain information that corresponds to the amplitude and the phase of the object wave, data which is retrieved using the Fast Fourier transform method [9,10,11]. Expressing the object and the reference waves in terms of amplitude and phase results in:
$$ O\left({x}_H,{y}_H\right)=o\left({x}_H,{y}_H\right){e}^{i\varphi \left({x}_H,{y}_H\right)} $$
$$ R\left({x}_H,{y}_H\right)=r\left({x}_H,{y}_H\right){e}^{-2\pi i\left({f}_x{x}_H+{f}_y{y}_H\right)}. $$
where o and r are the object and reference amplitudes, respectively, and fx and fy are the spatial carrier frequencies along the xH and yH coordinates. Substituting these terms into Eq. (1) gives,
$$ I\left({x}_H,{y}_H\right)=a\left({x}_H,{y}_H\right)+c\left({x}_H,{y}_H\right){e}^{2\pi i\left({f}_x{x}_H+{f}_y{y}_H\right)}+{c}^{\ast}\left({x}_H,{y}_H\right){e}^{-2\pi i\left({f}_x{x}_H+{f}_y{y}_H\right)} $$
With,
$$ a\left({x}_H,{y}_H\right)={o}^2\left({x}_H,{y}_H\right)+{r}^2\left({x}_H,{y}_H\right) $$
$$ c\left({x}_H,{y}_H\right)=o\left({x}_H,{y}_H\right)r\left({x}_H,{y}_H\right){e}^{i\varphi \left({x}_H,{y}_H\right)} $$
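The three-term structure of Eq. (2) can be illustrated with a small numerical sketch that builds a hologram intensity from assumed amplitudes, an assumed smooth phase, and a spatial carrier; the grid size, carrier frequency, and phase profile are illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative sensor grid and carrier frequencies (cycles per pixel):
ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]
fx, fy = 0.125, 0.0

o = np.ones((ny, nx))        # object amplitude o(x_H, y_H), assumed uniform
r = np.ones((ny, nx))        # reference amplitude r(x_H, y_H), assumed uniform
phi = 1.5 * np.exp(-((x - 64.0)**2 + (y - 64.0)**2) / (2.0 * 20.0**2))

a = o**2 + r**2                           # background term a(x_H, y_H)
c = o * r * np.exp(1j * phi)              # complex term c(x_H, y_H)
carrier = np.exp(2j * np.pi * (fx * x + fy * y))

# Eq. (2): the three-term decomposition of the recorded intensity.
# The two conjugate terms sum to twice the real part, so the result is real:
intensity = (a + c * carrier + np.conj(c) * np.conj(carrier)).real
```

The last line is equivalent to `a + 2*o*r*np.cos(2*np.pi*(fx*x + fy*y) + phi)`, i.e., a carrier fringe pattern phase modulated by the object phase.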
The last two terms of Eq. (2) show that the spatial carrier is phase modulated by the term φ(xH, yH) introduced by the interference signal. This spatial carrier is introduced in the reference beam through a small inclination with respect to the observation axis of the optical system. In order to retrieve the optical phase, Eq. (2) is Fourier transformed in two dimensions,
$$ FT\left\{I\left({x}_H,{y}_H\right)\right\}=A\left({f}_{x_H},{f}_{yH}\right)+C\left({f}_{x_H}-{f}_x,{f}_{yH}-{f}_y\right)+{C}^{\ast}\left({f}_{x_H}+{f}_x,{f}_{yH}+{f}_y\right). $$
The C and C* terms are the Fourier transforms of c and c*, respectively (each contains the same phase information φ(xH, yH)), while \( A\left({f}_{x_H},{f}_{yH}\right) \) is the low-frequency background illumination and \( \left({f}_{x_H},{f}_{yH}\right) \) are the frequency coordinates. As they are spatially separated in the Fourier spectrum, a bandpass filter is used to eliminate the A and C* terms. The C term is then centered at zero frequency and an inverse Fourier transform is applied to each filtered image hologram. The phase can then be obtained with
$$ \varphi \left({x}_H,{y}_H\right)=\mathit{\arctan}\frac{\mathit{\operatorname{Im}}\ c\left({x}_H,{y}_H\right)}{\mathit{\operatorname{Re}}\ c\left({x}_H,{y}_H\right)}, $$
where Im c(xH, yH) and Re c(xH, yH) are the imaginary and real parts of the inverse Fourier transform of the filtered C term. Upon subtracting the two phases for the base (φ) and deformed (φ') object states, a wrapped phase map is obtained showing the relative phase difference (Δφ), calculated with
$$ \varDelta \varphi \left({x}_H,{y}_H\right)=\varphi \left({x}_H,{y}_H\right)-{\varphi}^{\prime}\left({x}_H,{y}_H\right). $$
Equation (6) can be expressed in terms of c and c', before and after the object's deformation, as follows,
$$ \varDelta \varphi \left({x}_H,{y}_H\right)=\arctan \frac{\mathit{\operatorname{Re}}\left\{c\right\}\mathit{\operatorname{Im}}\left\{{c}^{\prime}\right\}-\mathit{\operatorname{Re}}\left\{{c}^{\prime}\right\}\mathit{\operatorname{Im}}\left\{c\right\}}{\mathit{\operatorname{Re}}\left\{c\right\}\mathit{\operatorname{Re}}\left\{{c}^{\prime}\right\}+\mathit{\operatorname{Im}}\left\{c\right\}\mathit{\operatorname{Im}}\left\{{c}^{\prime}\right\}} $$
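The full numerical chain behind Eqs. (2) to (7), Fourier transform, band-pass filtering of the C term, inverse transform, and the Re/Im combination above evaluated with a four-quadrant arctangent, can be sketched for two synthetic holograms of a base and a deformed state. The carrier value, window size, and test phases below are illustrative choices, not taken from the text.

```python
import numpy as np

def filtered_field(hologram, carrier, halfwidth):
    """Recover the complex field c from one carrier-frequency hologram:
    FFT -> square band-pass window around the +1 order (C term) ->
    re-center the C term on zero frequency -> inverse FFT (Takeda method).
    carrier is (fx, fy) in cycles per pixel; halfwidth is in pixels.
    """
    ny, nx = hologram.shape
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    cy = ny // 2 + int(round(carrier[1] * ny))   # C-term row
    cx = nx // 2 + int(round(carrier[0] * nx))   # C-term column
    filt = np.zeros_like(spectrum)
    filt[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth] = \
        spectrum[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth]
    # Shift the C term back to zero frequency, removing the carrier tilt:
    filt = np.roll(filt, (ny // 2 - cy, nx // 2 - cx), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(filt))

def wrapped_phase_difference(c_base, c_def):
    """Re/Im combination of the filtered fields; np.arctan2 returns the
    four-quadrant result, i.e. the phase difference wrapped to (-pi, pi]."""
    num = c_base.real * c_def.imag - c_def.real * c_base.imag
    den = c_base.real * c_def.real + c_base.imag * c_def.imag
    return np.arctan2(num, den)

# Two synthetic holograms: a base state and a deformed state.
ny = nx = 256
y, x = np.mgrid[0:ny, 0:nx]
fx = 40.0 / nx                                   # carrier, cycles per pixel
bump = np.exp(-((x - 128.0)**2 + (y - 128.0)**2) / (2.0 * 40.0**2))
phi_base = 0.5 * bump
phi_def = phi_base + 1.2 * bump                  # deformation adds 1.2*bump
holo_base = 1.0 + np.cos(2.0 * np.pi * fx * x + phi_base)
holo_def = 1.0 + np.cos(2.0 * np.pi * fx * x + phi_def)

c = filtered_field(holo_base, (fx, 0.0), halfwidth=30)
c_def = filtered_field(holo_def, (fx, 0.0), halfwidth=30)
dphi = wrapped_phase_difference(c, c_def)        # recovers ~1.2*bump
```

In practice the window position can be found from the spectral peak instead of being computed from a known carrier, but the filtering and re-centering steps are the same.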
The relative optical phase difference can be related to a displacement vector \( \overrightarrow{U} \) through \( \varDelta \varphi =\overrightarrow{k}\bullet \overrightarrow{U} \), with \( \overrightarrow{k}=\hat{r_H}-\hat{r_s} \) [13], where \( \overrightarrow{k} \) is the sensitivity vector, which changes according to the experimental setup, and \( \hat{r_H} \) and \( \hat{r_s} \) are the unit vectors of observation and illumination, respectively. For an out-of-plane configuration the interferometer's sensitivity is along the z axis (i.e., the observation direction), hence
$$ {u}_z=\frac{\Delta \varphi \lambda}{2\pi \left(\cos \beta +1\right)} $$
where uz represents the out-of-plane displacement in the z direction and β is the angle between the object illumination and observation directions. Note that Eq. (8) is a reduced expression valid for smooth surfaces and small illumination angles.
For more accurate out-of-plane displacement measurements, the angle ω given by the geometry of the object, i.e., the angle between the sensitivity vector and the normal to the surface (see Fig. 2), is introduced into the previous equation [15], giving
$$ {u}_z=\frac{\Delta \varphi \lambda}{2\pi \left(\cos \beta +1\right)\cos \omega } $$
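Equation (9) maps the measured phase difference to an out-of-plane displacement; with ω = 0 it reduces to Eq. (8). A short sketch follows, where the wavelength and angles are illustrative values, not taken from the text.

```python
import math

def out_of_plane_displacement(dphi, wavelength, beta_deg, omega_deg=0.0):
    """Out-of-plane displacement u_z from the relative phase difference,
    following Eq. (9).

    dphi       : relative phase difference, radians
    wavelength : laser wavelength (sets the unit of the result)
    beta_deg   : angle between illumination and observation directions
    omega_deg  : angle between sensitivity vector and surface normal
    """
    beta = math.radians(beta_deg)
    omega = math.radians(omega_deg)
    return dphi * wavelength / (2.0 * math.pi * (math.cos(beta) + 1.0) * math.cos(omega))

# A full 2*pi fringe at an assumed 532 nm wavelength, normal geometry:
uz_fringe = out_of_plane_displacement(2.0 * math.pi, 532e-9, 0.0)  # lambda/2
uz_half = out_of_plane_displacement(math.pi, 532e-9, 0.0)          # lambda/4
```

With β = ω = 0 each 2π fringe corresponds to λ/2 of out-of-plane motion, which is the usual sensitivity quoted for out-of-plane interferometric setups.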
Layout of the out-of-plane displacement setup. An object point at (x, y, z) is illuminated by light source S at illumination point (xs, ys, zs), along the direction represented by the \( \overrightarrow{{\boldsymbol{R}}_{\boldsymbol{s}}} \) vector. The scattered light reaches the hologram (observation) plane at (xH, yH), along the direction represented by the \( \overrightarrow{{\boldsymbol{R}}_{\boldsymbol{H}}} \) vector
Biological application using DHI
The application of DHI to biomedical objects has increased significantly during the last two decades, mainly due to the development of new semiconductor devices that have had an impact on new and improved electronic components such as illumination sources and imaging devices. Normally, a biological sample requires extensive hardware dedicated to containing, preserving, and manipulating it during the optical inspection. For these reasons, the experimental setups reported for interferometric studies of biological samples are notoriously complex. One well-known and extensively used tool for these inspections is stroboscopic illumination at fast repetition rates [16], which helps to reduce the error (de-correlated optical phase) of the measurements due to the sample's motion. This strobed illumination is possible with the use of a pulsed laser [17, 18] synchronized with the camera's acquisition software. The characterization includes qualitative imaging and quantitative measurements of form, geometry, displacement, vibration, stress, strain, and viscoelastic properties of objects ranging from handmade manufactured materials to biological and tissue samples.
The analysis of shape and several other parameters in transparent media [19, 20] is one of the first approaches to biological inspection, since some samples are nearly invisible to conventional inspection. The transparent-inspection principle is tested in liquid media, where the optical technique retrieves a smooth and useful optical phase that is used to measure parameters such as temperature [21] and liquid interactions [22, 23]. In reference [24] a study using a gas-liquid interface is presented, where the authors observe the changes in the liquid phase due to a mass-transfer process coupled with chemical reactions. Figure 3 shows the experimental cell used during this test, while Fig. 4 shows the fringe patterns before and after the transfer. From Fig. 4 it is possible to detect the chemical reaction due to CO2 absorption, which induces refractive-index variations in the liquid below the interface.
Block-diagram of the experimental cell. Reprinted from [24]. Copyright [2011], ELSEVIER
Interferograms (a) before the beginning of the transfer and (b) 15 min after. Reprinted from [24]. Copyright [2011], ELSEVIER
The inspection of different media helps to better understand complex interactions that are common in biological tissues when they change without control. This complex behavior is explored in reference [25] using a high-speed DHI configuration. The samples under study are bird feathers in vivo (the living bird is actually inspected, without suffering harm). Each feather is an intricate collection of hundreds of barbules and hooks interconnected and supported by barbs to the rachis (see Fig. 5). This makes a flexible ensemble that is always in motion, continuously modified by the bird's movement, its blood flow, and the barbules' orientation. This is a challenging inspection if a large area of the bird needs to be illuminated in order to retrieve a smooth optical phase map (like those shown in Fig. 6). The use of high-speed hardware helps to overcome the slow motion of the bird; however, the feather orientation is a different matter, and specific algorithms were designed to match the phase maps with the bird's surface texture without de-correlation, see Fig. 7. This kind of application demonstrates the adaptability of DHI when dealing with non-repeatable deformations in harsh conditions.
Schematic view for: (a) primary feather and (b) feather central section. Reprinted from [25]. Copyright [2013], SPIE
Examples of retrieved phase maps from different bird's body sections. Reprinted from [25]. Copyright [2013], SPIE
Displacement map examples with bird's texture matching. Reprinted from [25]. Copyright [2013], SPIE
There are many more biological inspections of living organisms using DHI, but we call particular attention to those reported for insects (butterflies) in wing-flapping studies [26] and wing-flapping comparison [27].
Another interesting example is described in reference [28], where the skin is studied, a biomedical subject that demands alternative inspection tools. Skin elasticity is one of the most studied mechanical properties because it plays an important role in healthy and unhealthy skin [28,29,30,31]. Several physiological processes and diseases can be directly correlated with changes in elasticity [32, 33]; e.g., skin cancer presents severe changes in elasticity, meaning that it is a useful parameter to study in order to better understand the mechanisms of this illness [34]. The work in reference [28] presents the variations in porcine skin rigidity caused by exposure to UV radiation, studied using a digital holographic interferometer in scattering mode. Displacement fields normal to the surface of the skin under observation were obtained using DHI. To induce z-axis displacements, a loudspeaker driven by a function generator conveyed sound to the sample. The frequency used was 1.3 kHz, with a sound pressure of 93 dB SPL (0.893 Pa), enough to induce out-of-plane displacements.
From that research a new mathematical scheme to measure the skin rigidity was proposed, where the anisotropic and heterogeneous nature of the skin is considered.
For the mathematical scheme, the medium was considered as a thin vibrating membrane whose displacements normal to the surface are small enough; a dynamical equation was then obtained in terms of partial derivatives as
$$ \uprho \frac{\partial^2{u}_z}{\partial {t}^2}={c}_{zxxz}\frac{\partial^2{u}_z}{\partial {x}^2}+{c}_{zyyz}\frac{\partial^2{u}_z}{\partial {y}^2}+2{c}_{zxyz}\frac{\partial^2{u}_z}{\partial y\partial x}+{c}_x\frac{\partial {u}_z}{\partial x}+{c}_y\frac{\partial {u}_z}{\partial y} $$
where czxxz, czyyz and czxyz are the rigidity moduli and the coefficients \( {c}_x=\frac{\partial {c}_{zxxz}}{\partial x}+\frac{\partial {c}_{zxyz}}{\partial y} \) and \( {c}_y=\frac{\partial {c}_{zxyz}}{\partial x}+\frac{\partial {c}_{zyyz}}{\partial y} \) are variations of czxxz, czyyz and czxyz with respect to the position (x, y). ρ is the density of the medium, t is the time and uz(x, y) is the z axis component of the displacement field that defines a surface whose amplitude at each point is time-dependent. By measuring uz(x, y) at five different times, a system of equations was constructed as
$$ \left(\begin{array}{c}\uprho {\partial}_{tt}^2{u}_z^{(1)}\\ {}\vdots \\ {}\uprho {\partial}_{tt}^2{u}_z^{(5)}\end{array}\right)=\left(\begin{array}{cccc}{\partial}_{xx}^2{u}_z^{(1)}& {\partial}_{yy}^2{u}_z^{(1)}& \dots & {\partial}_y{u}_z^{(1)}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\partial}_{xx}^2{u}_z^{(5)}& {\partial}_{yy}^2{u}_z^{(5)}& \dots & {\partial}_y{u}_z^{(5)}\end{array}\right)\left(\begin{array}{c}{c}_{zxxz}\\ {}{c}_{zyyz}\\ {}\vdots \\ {}{c}_y\end{array}\right) $$
Then, the rigidity moduli czxxz, czyyz and czxyz can be found by solving the last system of equations.
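The inversion step can be sketched numerically. The snippet below is a hedged illustration, not the authors' code: the derivative matrix and the "true" coefficients are synthetic values standing in for the measured uz(x, y) data of [28].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel spatial derivatives of u_z at five instants t1..t5;
# columns follow the right-hand side of the dynamical equation:
# [d2u/dx2, d2u/dy2, 2*d2u/dxdy, du/dx, du/dy]
A = rng.standard_normal((5, 5))

# Illustrative "true" coefficients [c_zxxz, c_zyyz, c_zxyz, c_x, c_y]
c_true = np.array([2.0, 1.5, 0.3, 0.1, -0.2])

# Left-hand side rho * d2u/dt2 at each instant; here synthesized from
# c_true, whereas in the experiment it comes from the DHI measurements
b = A @ c_true

# Solve the 5x5 system; least squares also handles an overdetermined
# stack of more than five measurement instants
c_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Solving in the least-squares sense is convenient because noisy phase maps make more than five instants desirable, turning the square system into an overdetermined one.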
The displacement fields normal to the surface under observation were obtained using DHI, where the sensitivity vector was chosen as \( \overrightarrow{k}\approx \left(0,0,{k}_z\right) \) in order to obtain uz(x, y). The evolution of the displacements normal to the surface obtained for different periods of UV exposure is shown in Fig. 8. The bottom-right graph shows the mean rigidity coefficients czxxz, czyyz and czxyz vs. time of UV exposure. The increase in the rigidity coefficients means that the tissue elasticity was reduced after each period of UV radiation.
Displacement field obtained at a 0, b 3, c 6, d 9 and e 12 min of UV exposure. The bottom-right graph presents the mean rigidity coefficients czxxz, czyyz and czxyz contrasted with UV exposure time. Reprinted from [28]. Copyright [2020], ELSEVIER
Other reports on vibration analysis of objects and biological membranes are worthy of consideration [35,36,37,38,39,40,41,42,43,44,45,46]. The tympanic membrane's (TM) motion is very important in the hearing process, serving as a mechanical impedance transformer between the low impedance of the air in the ear canal and the high impedance at the center of the eardrum and the umbo. As an example, acoustically induced motions on the TM of a postmortem chinchilla, measured with an ad-hoc DHI otoscope, are shown in Fig. 9. Here qualitative and quantitative deformation information of a TM subjected to controlled acoustic signals was reported, giving hearing specialists important information about the mechanical behavior of the TM at different frequencies [47]. Other applications of DHI for the 3D reconstruction of mammals' TMs, or the development of robust algorithms to recover intensity images from holograms, have been reported in Refs. [41, 48].
I) Sketch of an in-line HI system coupled with a commercial otoscope; IIA) wrapped phase map corresponding to the deformations of a cadaveric chinchilla TM (tympanic membrane) acoustically excited at a frequency of 3.489 kHz; IIB) unwrapped phase map corresponding to IIA); IIC) 3D deformations corresponding to IIB), showing a peak-to-peak out-of-plane deformation dz = 800 nm. Reprinted from [47]. Copyright [2011], WILEY
The sound-induced motions of the TM, i.e., the deformation uz(x, y, t) normal to the TM's surface, are provoked by a continuous sinusoidal (acoustic) wave that can be represented by the following equation:
$$ {u}_z\left(x,y,t\right)={z}_{m0}\left(x,y,t\right)\sin \left[{\varnothing}_m\left(x,y,t\right)-{\omega}_mt\right]\kern0.5em $$
where x and y are the spatial coordinates, t is the time, zm0 represents the amplitude of the acoustic signal, ∅m is the mechanical phase, and ωm is the angular frequency [49, 50].
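As a purely illustrative evaluation of this model (the amplitude and mechanical phase below are assumed values; the 3.489 kHz frequency echoes the chinchilla TM example), the displacement at one pixel can be sampled at two instants of the excitation cycle:

```python
import numpy as np

# Illustrative values only: 3.489 kHz tone, 400 nm amplitude,
# zero mechanical phase at this pixel
f_m = 3489.0
omega_m = 2 * np.pi * f_m
z_m0, phi_m = 400e-9, 0.0

def u_z(t):
    """Sound-induced normal displacement u_z(t) at a fixed (x, y)."""
    return z_m0 * np.sin(phi_m - omega_m * t)

# Sampling at 0 and 90 degrees of the excitation cycle, as done when two
# holograms are captured at the reference and maximum-amplitude states
T = 1.0 / f_m
u_ref, u_max = u_z(0.0), u_z(0.25 * T)   # omega_m * t = 0 and pi/2
```

The difference between the two sampled states recovers the full oscillation amplitude, which is why the 0°/90° capture scheme described below for TM measurements is effective.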
Consequently, DHI was applied for measuring sound-induced displacements, determining the shape, and measuring mechanical properties such as the strain on the TM. The TM surface height-change h(x) may be found from the difference between the reconstructed phases recorded before (φ1) and after (φ2) a small tilt (∆θ) of the object illuminating beam, by the following equation [51, 52]:
$$ \Delta \varphi =2k\sin \frac{\Delta \theta }{2}\left[x\cos \left(\theta +\frac{\Delta \theta }{2}\right)-h(x)\sin \left(\theta +\frac{\Delta \theta }{2}\right)\right] $$
where k = 2π / λ, and θ is the object beam angle, measured from the fixed position of the object illumination to the axis perpendicular to the geometrical center of the CCD sensor. Therefore, the object height-change h(x) can be determined from the reconstructed phase difference ∆φ using equation (13), where ∆φ was calculated with equation (7). Figure 10 shows the experimentally found shape of the tympanic membrane and a line profile of its surface.
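A small round-trip sketch of this height recovery follows; it assumes the tilt term sin(θ + Δθ/2) multiplies h(x) in equation (13), and all numerical values are illustrative rather than taken from [51, 52]:

```python
import numpy as np

def height_from_phase(dphi, x, theta, dtheta, wavelength):
    """Invert the illumination-tilt contouring relation for h(x),
    assuming the grouping h(x)*sin(theta + dtheta/2); angles in rad."""
    k = 2 * np.pi / wavelength
    half = theta + dtheta / 2.0
    return (x * np.cos(half) - dphi / (2 * k * np.sin(dtheta / 2.0))) / np.sin(half)

# Round-trip check: assumed height 1.2 mm at x = 5 mm, theta = 30 deg,
# a 0.05 deg illumination tilt, and 632.8 nm light
theta, dtheta, lam = np.deg2rad(30.0), np.deg2rad(0.05), 632.8e-9
x, h_true = 5e-3, 1.2e-3
k = 2 * np.pi / lam
dphi = 2 * k * np.sin(dtheta / 2) * (x * np.cos(theta + dtheta / 2)
                                     - h_true * np.sin(theta + dtheta / 2))
h_rec = height_from_phase(dphi, x, theta, dtheta, lam)
```

Generating Δφ from an assumed h(x) and inverting it back is a simple way to validate such a contouring pipeline before applying it to measured phase maps.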
3D reconstruction and surface profile of the TM. Reprinted from [51]. Copyright [2012], OSA
In order to measure sound-induced displacements of the vibrating TM, the phase differences are calculated by capturing two holograms at two different points of the sound-signal cycle, namely at 0° (reference state) and 90° (maximum amplitude of the sound wave), i.e., two different instants of the sound-induced cycle. The synchronization of the signals (sound stimuli and external trigger to the CCD) is fundamental to make the experiments repeatable, with an exposure time short enough, 5 μs, to "freeze" the movement of the sample. Equation (9) is employed to find the displacements in the z-direction.
Figure 11 shows the results for a cat's TM, describing its motion as displacements over the entire surface. Here the sound frequencies ranged from 800 Hz to 8000 Hz. Figure 11 (a) shows the vibration patterns as wrapped optical phase, and (b) illustrates the patterns as unwrapped phase maps.
Surface deformations in the z direction (a) Wrapped and (b) Unwrapped phase maps of the TM
Complementary studies of the tympanic membrane have been carried out using DHI in conjunction with Confocal Laser Scanning Microscopy (CLSM): both were employed to measure the displacements and thickness of four cats' healthy TMs, and the study concluded how the thickness (by region) of the eardrum affects the TM displacements when subjected to acoustic stimuli, in particular at 1.8, 5.2 and 12 kHz [53]. The three specific regions studied were R1, located in the posterosuperior (PS) quadrant of the eardrum; R2, in the anterosuperior (AS) quadrant between the umbo and the annulus; and R3, just below the end of the umbo near the annulus [54]. The displacements can be used to investigate TM thickness changes without the need to invade the tissue. Figure 12 shows three "raw" fringe patterns (top), the resulting three wrapped phase maps (second row), and their corresponding unwrapped phase maps (third row) for one eardrum vibrating at 1.8, 5.2 and 12 kHz, respectively. The height displacements were calculated with equation (8). To determine the TM's thickness with CLSM, the ZEN Black (Zeiss Efficient Navigation) analysis software was used. The 3D structure of the eardrum was reconstructed by stacking the confocal images in the Z direction. Figure 12 (bottom row) shows three CLSM orthogonal fluorescence images, where the white or bright "thin strip" of the image corresponds to the thickness of the membrane, showing that the TM was well delimited. In each image, the top of the fluorescent section corresponds to the outer surface of the TM and the bottom corresponds to its inner surface. The distance between these two surfaces was measured to quantify the thickness of the TM in each region [53].
The top images show the "raw" fringe patterns (interferograms); the second row images show the wrapped phase maps and the third row show the corresponding unwrapped phase maps where the left displacement line-profile is drawn along the R1 and R2 and the right one goes through the R3 at: a 1.8 kHz (simple), b 5.2 kHz (complex) and c 12 kHz (ordered) patterns. The closed line in the middle figures is hereby used only to indicate the location of the umbo (U), and the manubrium (M). On the bottom row, the CLSM reconstructed images corresponding to the three regions of the TM stained with ophthalmic fluorescein strip (FITC) are shown. Reprinted from [53]. Copyright [2019], SPIE
DHI has been used to quantify mammals' vocal folds' dynamic displacements, see Fig. 13 [55, 56]. This figure shows displacement maps in a pseudo-3D representation, the vibration pattern, and the wrapped phase map. There is a full 2D deformation of the vocal folds (VF) when an airflow passes through them.
The top image is the VF unwrapped vibrating map. The bottom ones refer to the fringe pattern (left) and its corresponding wrapped phase (right). Reprinted from [56]. Copyright [2016], IOP
These results exemplify the displacement variations along the VFs, showing the characteristic triangular form present during the opening stage of the VFs' vibration, and the patterns indicate that the movement propagates from the anterior to the posterior edges of the VFs. Complex behavior is observed for this tissue, which does not deform only along one axis as expected.
Performing biomechanical tests is becoming a common task for DHI, mainly due to its ability to retrieve the optical phase from a single hologram. In reference [57], a DHI system using an endoscope retrieves the optical phase map from the oral cavity, observing the tongue as it moves (see Fig. 14).
In vivo investigation inside the oral cavity, (a) image of the investigated part (tongue), (b) phase map corresponding to the deformation produced by mechanical excitation of the tongue. Reprinted from [56]. Copyright [2005], SPIE
Here it is possible to see the robustness of the retrieved signal in this complicated inspection, showing almost no de-correlation in the interferometric signal. Some noise is present because this is a wet tissue with random specular reflections. However, even when the optical phase is affected, the use of high-dynamic-range cameras (more bits per pixel) helps to reduce this error [58, 59].
The flexibility of DHI setups allows the inspection of micrometric-size structures such as cells. An example in reference [60] uses photosensitized HeLa cells studied while they interact with controlled laser irradiation. Figure 15 shows several images of the optical phase before and after these cells were irradiated. The tracking is presented for two particular cells (A and B) whose expression is highly reduced after 45 min.
Distribution of phase retardation in the sample before the irradiation (a), right after 60-s irradiation (b), in 45 min (c) and in 40 h (d) after the irradiation. Reprinted from [60]. Copyright [2016], OSA
Another tissue whose biomechanics has been widely inspected is bone from different species, such as human [61], porcine [62] and bovine [63]. This is possible because material-engineering models can be applied to a wide range of experimental conditions regardless of the species. However, several of these investigations require a previous validation or a comparison with a numerical model. Sometimes phantoms are used instead of real tissue; to exemplify this point, reference [64] presents a comparison (experimental vs. theoretical measurements) on an artificial tooth. The aim of this work is to study the deformation of a tooth due to mastication forces. Figure 16 shows two fringe patterns during the deformation test.
Interferograms of an artificial tooth under (a) 100 N and (b) 250 N loads applied by a brass tool. The red arrows indicate the load directions. Due to the high dynamic range of the resulting image, the intensity scale is logarithmic. Reprinted from [64]. Copyright [2014], SPIE
This experimental deformation data is compared with that coming from a finite element model (FEM) simulation of the same experiment. The FEM results are shown in Fig. 17 where it is possible to observe a good correlation between the experimental and the simulated results. This correlation is one of the advantages that makes DHI so widely applied in different conditions and with different samples under study.
Simulated deformation of an artificial tooth, calculated using FEM: a front side, b backside. Reprinted from [64]. Copyright [2014], SPIE
Bone analyses using DHI focus on its mechanical strength and how external factors can modify it. Reference [65] presents a study where the mechanical response of a porcine femoral bone is analyzed under compression. In this study the bones are modified by means of cortical hole drillings, which simulate the condition when a prosthetic plaque needs to be fixed in order to heal a fractured bone. A continuous compression load is applied to several bones with different conditions: some have 2, 4 or 6 cortical holes (6.5 mm in diameter), and their mechanical deformation is compared with a fourth set of non-drilled bones (control group). The study presents the displacement maps at selected compression values for comparison purposes. Figure 18 shows the results for three different non-drilled bones (control response), while Fig. 19 shows the average response of the drilled bones when the compression reaches 400 lbs. Comparing these two sets of images, it is possible to observe how the anisotropic response of a healthy bone is affected by the presence of a cortical hole. Furthermore, this mechanical modification increases as the number of holes increases. Cortical holes remove bone material, weakening it and making a fracture more likely to happen.
Displacement map comparison for three different non-drilled bones. Each column represents a different bone, and each row reflects the compression value of interest at (a)–(c) 30 lbs., (d)–(f) 50 lbs., (g)–(i) 100 lbs., (j)–(l) 200 lbs., and (m)–(o) 400 lbs. Reprinted from [65]. Copyright [2017], OSA
Retrieved displacement maps for three bones with 2, 4, and 6 cortical holes at (a)–(c) 50 lbs., (d)–(f) 100 lbs., (g)–(i) 200 lbs., and (j)–(l) 400 lbs. with a 6.5 mm diameter drill bit. Reprinted from [65]. Copyright [2017], OSA
A further study on affected bones was proposed in reference [66], where specimen samples from bovine cortical bones are tested. Here, demineralization (dm) and dehydration (−H2O) are the treatments applied to groups of specimens, resulting in four groups as Fig. 20 indicates. The controlled compression required the design of an ad-hoc micro compression testing machine (MCTM) able to apply the controlled deformation during the test. The experimental setup is shown in Fig. 21.
Bone sample groups to be tested. Reprinted from [66]. Copyright [2018], OSA
DHI set up for the cortical bone compression tests. Reprinted from [66]. Copyright [2018], OSA
After processing all the bones' responses under compression, an average displacement map is retrieved for each case. Figure 22 shows the comparison between the control group (m + H2O) and the demineralized but hydrated group (dm@4 h + H2O) at three different compression values. According to this information, there is not a big mechanical difference between these two groups, as the demineralized bones still behave like healthy ones in this compression range.
Average surface displacement map comparison between m + H2O and dm@4 h + H2O left and right column respectively, for (a, b) 100 (c, d) 200 and (e, f) 300 lbs. Reprinted from [66]. Copyright [2018], OSA
Figure 23 shows the comparison between the dehydrated but mineralized bones (m-H2O@48 h) and the demineralized and dehydrated ones (dm@4 h-H2O@48 h). Here, it is possible to see a remarkable difference between the control group and the dehydrated but mineralized bones, which are highly affected even when their mineral content is still present. This difference implies a larger deformation and a decreased fracture value, making them more prone to fracture. The last case, where both components are affected, shows a plastic response indicating a severe reduction of the sample's strength. Analyzing this information, it is possible to see that dehydration affects bone strength more than demineralization.
Average surface displacement map comparison between m-H2O@48 h and dm@4 h-H2O@48 h left and right column respectively, for (a, b) 100 (c, d) 200 and (e, f) 300 lbs. Reprinted from [66]. Copyright [2018], OSA
All of the above has shown that the use of DHI has extended beyond the traditional study of the mechanical properties of materials to the biomechanical properties of biological objects, involving a diversity of disciplines such as mechanics, biology, chemistry, and tissue engineering, among many others.
Digital holographic microscopy (DHM)
For micrometric-size inspection, new configurations have been developed to simplify the alignment and measurement of such samples. Reference [67] presents a novel transmission DHI setup able to identify micrometric-size particles (< 50 μm). The optical phase is able to reconstruct pollen particles as Fig. 24a shows, while Fig. 24b and c show the optical phase for a calibration step used to validate the biological measurements. Figure 25 compares the image retrieved with confocal microscopy with the one from transmission DHI for the same pollen particles. Notable is the advantage of DHI in transmission (using the simplified microscopy setup of reference [67]), which renders the microscopic image of the particles and retrieves their profile.
a Example of three different FOV using the variable magnification and keeping the same camera's resolution, retrieved wrapped phase map for a standard calibration pattern, using either the (b) geometrical or the (c) controlled magnification. Reprinted from [67]. Copyright [2019], OSA
(a) Confocal and (b) phase magnitude example of pollen particle visualization. Reprinted from [67]. Copyright [2019], OSA
When a phase object is illuminated, its transparent or translucent nature does not provoke a large amplitude variation of the transmitted wavefront, although its phase does change. In these applications, the optical path difference is a consequence of a change in the refractive-index distribution along the optical path through the sample; physical thickness measurements are also possible. These measurements represent a paramount aspect of DHM and are covered by quantitative phase microscopy (QPM). These techniques allow the quantification of optical-thickness and refractive-index changes with nanometric accuracy, based on the Fourier phase microscopy (FPM) method. One of the first reports in this field was carried out by Popescu et al. in 2004 [68]. A typical DHI configuration to measure the refractive-index change of a phase object is based on an off-axis setup. The general expression [68] for the spatially resolved quantitative-phase images in, e.g., a cell sample, is given by:
$$ \varphi \left(x,y\right)=\left(\frac{2\pi }{\lambda}\right)\ \underset{0}{\overset{h\left(x,y\right)}{\int }}\left[{n}_C^z\ \left(x,y,z\right)-{n}_0\right] dz $$
where λ is the illumination wavelength, h is the local thickness of the cell, and n0 is the refractive index of the surrounding liquid; the symbol \( {n}_C^z \) represents the refractive index of the cellular material, mostly an inhomogeneous function of x, y, z. Then, quantitative measurements of refractive-index distributions in phase objects can be calculated. Measurements of changes in the refractive index of microscopic transparent or semi-transparent samples by DHI have been addressed using interference microscopy techniques such as the Mirau or Linnik interference microscopes. These are based on spatial-carrier fringe-pattern analysis, and their developments are a version of the classical Zernike and Nomarski optical phase-contrast microscopy [69].
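For the common simplification of a homogeneous cell, the integral reduces to φ = (2π/λ)(nC − n0)h, so the local thickness follows directly from the measured phase. A minimal sketch with assumed, illustrative index values (not from the cited works):

```python
import numpy as np

def cell_thickness(phase, wavelength, n_cell, n_medium):
    """Local thickness h from the quantitative phase, assuming a homogeneous
    cell index so that phase = (2*pi/lambda) * (n_cell - n_medium) * h."""
    return phase * wavelength / (2 * np.pi * (n_cell - n_medium))

# Illustrative values: RBC-like indices at 633 nm and a 2-rad phase shift
h = cell_thickness(2.0, 633e-9, 1.395, 1.335)   # a few micrometers
```

With inhomogeneous samples the same phase map instead yields the integrated optical path, and thickness and index must be decoupled, e.g., by dual-medium or multi-wavelength measurements.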
A very important manuscript showing advances in biological applications based on DHM was reported by Marquet et al. in 2005 [69]. This paper and others published by the Depeursinge group at the Ecole Polytechnique Federale de Lausanne are taken by many authors as introducing the use of DHM in the biological field [68, 70], with the original demonstration of nanometric surface profiling, and have led to quantitative cellular imaging and characterization studies. Popescu refers to those papers as QPI-based techniques by DHM used for imaging live cells [68]. As an example of DH-QPM imaging, Marquet et al. [69] show in Fig. 26 the image of living mouse cortical neurons and their reaction to a hypotonic shock.
Images of a living mouse cortical neuron in culture: a raw image and b perspective image in false colors of the phase distribution obtained with DHM. Reprinted from [69]. Copyright [1999], OSA
Several biological samples have been studied using DHI microscopy following the latter publications' ideas and results; consider, for instance, some recent applications of DHM to bio-samples. In Fig. 27, Bianco et al. [71] report a hybrid system of an optical tweezer coupled to DHM to quantify the bio-volume of red blood cells (RBCs).
Tomography images for rolling Red Blood Cell with a Knizocite shape. Reprinted from [71]. Copyright [2019], SPIE
Rastogi et al. [72] calculated the thickness and reconstructed the 2D phase profile of E. coli bacteria, shown in Fig. 28. Barroso et al. [73] present a multispectral quantitative phase imaging system with DHM to characterize a dissected murine retina by label-free refractive-index measurement, with results seen in Fig. 29. The wavelength is tuned from 800 to 850 nm in steps of 10 nm.
E. coli bacteria DHM imaging. a 2D phase profile, b thickness distribution of a single E. coli bacteria, c and e another 2D reconstruction of the same culture, d and f corresponding thickness distribution. Reprinted from [72]. Copyright [2019], SPIE
Multi-wavelength DHM images of dissected mouse retina: a digital off-axis holograms of a mouse retina section, b enlarged spatial carrier fringe pattern of a; c its Fourier spectrum; d quantitative phase contrast image; e unwrapped phase; f stack of unwrapped phase images at different wavelengths in the same region of the retina; g averaged phase imaging of the stack of the unwrapped images, where the dashed line is a cross section for refractive index determination. Reprinted from [73]. Copyright [2019], SPIE
Ref. [74] presents a very clever combination of technologies involving a smartphone, a DVD optical head and DHM algorithms to perform imaging of cells: the results are in Fig. 30, where the reconstructed images of red blood cells, HeLa cells and a micro-channel on an aluminum-coated silicon substrate are shown.
Screen shots of: a optical thickness variation of a Red Blood Cells (RBC) rendered on the smart phone; b optical thickness variation of HeLa cell rendered on the same device; and c surface profile of micro channel on aluminum coated silicon substrate rendered on smart phone. Reprinted [74]. Copyright [2019], Elsevier
Finally, by using partially spatially coherent light with DHM, Dubey et al. [75] showed how sperm-cell motility is affected when exposed to different levels of oxidative stress, see Fig. 31. Many of these techniques, including off-axis, phase-shifting and common-path methods, are compiled in several textbooks such as those published by Popescu and Kim [68, 70], and indeed by many other authors.
DHM pseudo-color plot imaging of reconstructed phase maps of a) normal sperm cells and at different concentrations of H2O2: b) 10 μM, c) 40 μM, d) 70 μM and e) 100 μM. Reprinted from [75]. Copyright [2019], Science
Biomedical contributions from electron holography
Brief background on electron holographic interferometry
At the time that Gabor published [76, 77] his work on Electron Holography, the proposed setup consisted of an in-line, or on-axis, wavefront combination; namely, the so-called object and reference beams originated from the same "optical" axis, which is the axis along which the object was observed. This in itself imposed a rather dramatic drawback on the technique, since the reference and object electron beams appeared in-line on the sensor used to collect the electrons; that is, the high intensity (proportional to the electron-beam irradiance) of the reference beam overshadowed the object, making it almost impossible to observe and thus recover the object information. The object-wavefront reconstruction used an all-optical setup that consisted of illuminating the hologram recorded on film with a coherent monochromatic beam in the visible part of the electromagnetic spectrum. This monochromatic beam was later replaced by the laser, which meant that a brighter and more coherent light beam could be used to reconstruct the object wavefront embedded in the random structure of the interferogram recorded in the hologram. So, on one hand the lack of a reliable and coherent source of electrons, and on the other the fact that holograms did not always yield upon reconstruction a well-defined object image (and indeed sometimes no object image was recorded at all), meant that Electron Holography was not practiced consistently in the microscopy world. In the electron microscopes of that time, the reference and object beams were sometimes combined using the Möllenstedt-type electron biprism [78], a device that works as an optical Fresnel biprism, i.e., it serves to divide a beam equally into two wavefronts, as in wavefront-division interferometers.
Electron Holography in microscopy benefited both from the appearance of the CCD, which replaced the wet chemical process, and from the introduction of the field-emission electron gun in the microscope, since it provides a high-power, coherent electron beam, a must for the development of Electron Holographic Interferometry (EHI) [79,80,81]. The technique invented by Gabor was devised as a reliable way to correct the aberrations inherent in electron microscopes, mainly the spherical aberration, which meant that the observed angstrom-size images were defocused. However, the developments in Digital Holographic Interferometry can all be applied to EHI, so the latter has taken on major relevance in microscopy in the last few decades, and has recently shown the potential to be a tool able to extract "real" experimental information that may serve as the basis for the correction of theoretical descriptions, both products of unproved or empirically formulated theories in Nano Science and Technology.
Off-Axis electron holographic interferometry
In Electron Holographic Interferometry (EHI) the intensity distribution at one point r = (x,y) may be expressed as,
$$ \kern0.5em \mathrm{I}\left(\boldsymbol{r}\right)={\mathrm{I}}_{\mathrm{R}}\left(\boldsymbol{r}\right)+{\mathrm{I}}_{\mathrm{O}}\left(\boldsymbol{r}\right)+{\mathrm{I}}_{inel}\left(\boldsymbol{r}\right)+2\mu a\left(\boldsymbol{r}\right)b\left(\boldsymbol{r}\right)\cos \left[2\uppi \Delta \boldsymbol{kr}+\Delta \varphi \left(\boldsymbol{r}\right)+{\varphi}_0\left(\boldsymbol{r}\right)\right] $$
where I(r) is the intensity distribution on the image plane as recorded by the camera's sensor. IR (r) and IO (r) are the intensities of the reference and the object waves, respectively. ∆φ(r) represents the phase difference between the reference and the object waves (see eq. 8), and φ0 (r) is a phase term describing the position of the fringe pattern with respect to the camera (also known as the lateral phase or fringe carrier), where the dependence on r can describe, e.g., distortions from the camera (such as the fiber-optic bundles of electron-microscopy cameras). Iinel represents incoherent contributions from inelastically scattered electrons or stray emission reaching the hologram from a different source, and μ is the contrast of the interference fringes. This contrast is preferably measured without an object for transmission-type holography and with a perfect mirror for reflective holography, and it is given by,
$$ \mu = \frac{I_{max} - I_{min}}{I_{max} + I_{min}} $$
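As a quick numerical check, the visibility formula above can be evaluated directly on a recorded (or, as here, synthetic) fringe pattern. The function name and the cosine test pattern are illustrative only:

```python
import numpy as np

def fringe_contrast(intensity):
    """Fringe contrast (visibility): mu = (I_max - I_min) / (I_max + I_min).

    `intensity` is any array of recorded counts from the fringe pattern.
    """
    i_max = float(np.max(intensity))
    i_min = float(np.min(intensity))
    return (i_max - i_min) / (i_max + i_min)

# Ideal two-beam interference with equal beam intensities gives mu close to 1.
x = np.linspace(0.0, 10.0, 1000)
pattern = 1.0 + np.cos(2.0 * np.pi * x)   # intensity in [0, 2]
print(fringe_contrast(pattern))
```

Unequal beam intensities or incoherent background (the Iinel term) lower μ below unity, which is why the contrast is calibrated without an object.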
Gabor's main idea was to retrieve the phase term ∆φ(r), which in electron microscopy may take different forms depending on the specimen parameters under study. Today, EH provides a unique phase-imaging approach for characterizing nanoscale electrostatic and magnetic fields. Quantitative whole-field measurements can be directly related to specific specimen features via the relative phase shift between the electron wave that passed through the specimen and the reference electron beam. In particular, the phase recovered by EH contains several contributions [82]:
$$ \Delta \varphi \left(\boldsymbol{r}\right) = \varphi_C + \varphi_M + \varphi_E + \varphi_G $$
where the subscripts stand for the crystalline, magnetostatic, electrostatic and geometric contributions, respectively. The crystalline phase shift is expressed by [82, 83],
$$ \varphi_C\left(x,y\right) = C_E \int V_E\left(x,y,z\right)\, dz $$
$$ C_E = \left(\frac{\pi}{\lambda \varepsilon}\right) \equiv \frac{2\pi}{\lambda}\left(\frac{\varepsilon_k + \varepsilon_0}{\varepsilon_k\left(\varepsilon_k + 2\varepsilon_0\right)}\right) $$
where VE is the electric potential and CE is the electron interaction constant; ε is the total energy of the electron, which depends on the rest energy ε0 and the kinetic energy εk, and λ is the electron wavelength.
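To give a feel for the magnitudes involved, the sketch below evaluates the relativistic electron wavelength and the interaction constant CE for an assumed 200 kV accelerating voltage; λ comes out near 2.5 pm and CE near 7.3×10^6 rad V^-1 m^-1. Function names are illustrative; the constants are CODATA values:

```python
import math

# Physical constants (CODATA, SI units)
h  = 6.62607015e-34      # Planck constant, J s
m0 = 9.1093837015e-31    # electron rest mass, kg
e  = 1.602176634e-19     # elementary charge, C
c  = 2.99792458e8        # speed of light, m/s

def electron_wavelength(v_acc):
    """Relativistic de Broglie wavelength (m) for accelerating voltage v_acc (V)."""
    ek = e * v_acc                                        # kinetic energy, J
    p = math.sqrt(2.0 * m0 * ek * (1.0 + ek / (2.0 * m0 * c**2)))
    return h / p

def interaction_constant(v_acc):
    """C_E in rad V^-1 m^-1, using the relativistic form quoted in the text."""
    lam = electron_wavelength(v_acc)
    ek = v_acc                   # kinetic energy in eV (numerically equal to volts)
    e0 = m0 * c**2 / e           # rest energy in eV (~511 keV)
    return (2.0 * math.pi / lam) * (ek + e0) / (ek * (ek + 2.0 * e0))

print(electron_wavelength(200e3), interaction_constant(200e3))
```

The same routine gives CE ≈ 6.5×10^6 rad V^-1 m^-1 at 300 kV, the figure commonly used in mean-inner-potential calibrations.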
The geometric phase shift is [83],
$$ \varphi_G\left(x,y\right) = -2\pi\, \boldsymbol{g}\cdot\boldsymbol{r} $$
where g represents the reciprocal lattice vectors of the perfect, or "reference", crystal, and r is a point on the (x,y) plane.
In general terms, the main component of the phase shift from a standard material (such as ferroelectric or magnetic) may be described as the sum of the electrostatic (φE) and the magnetostatic (φM) phases as follows [83]:
$$ \varphi_E\left(x,y\right) = C_E \iint E\left(x,y,z\right)\, dx\, dz $$
$$ \varphi_M\left(x,y\right) = -\frac{e}{\hbar} \iint B_{\perp}\left(x,y\right)\, dx\, dz $$
where e is the electron charge, E is the electric field associated with the electric potential VE (x,y,z), and B⊥ is the component of the magnetic field perpendicular to the beam. Here φE represents the electrostatic component, which may include the mean inner potential and induced polarization. The main aim of EHI is thus the recovery of the phase term ∆φ(r), since the amplitude of the interferometric process may be read directly by counting the resulting fringes, whose separation is set by the electron wavelength λ. In EHI the phase retrieval is based on a Fourier procedure developed independently by several authors [9,10,11] and discussed previously.
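A minimal NumPy sketch of this Fourier (sideband-filtering) phase-retrieval procedure, applied to a synthetic off-axis hologram with a known phase bump; the carrier frequency, mask radius and image size are illustrative assumptions, not values from the papers cited:

```python
import numpy as np

def fourier_phase(hologram, sideband_center, radius):
    """Wrapped phase via the Fourier method: mask one sideband of the
    hologram spectrum, re-centre it (removing the linear carrier), and
    take the argument of the inverse transform."""
    ny, nx = hologram.shape
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = sideband_center
    Y, X = np.ogrid[:ny, :nx]
    mask = (Y - cy) ** 2 + (X - cx) ** 2 <= radius ** 2
    # Shift the masked sideband to the spectrum centre to strip the carrier.
    shifted = np.roll(np.roll(spec * mask, ny // 2 - cy, axis=0),
                      nx // 2 - cx, axis=1)
    return np.angle(np.fft.ifft2(np.fft.ifftshift(shifted)))

# Synthetic off-axis hologram: 40 carrier fringes plus a Gaussian phase bump.
ny = nx = 256
y, x = np.mgrid[:ny, :nx]
phi = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 30.0 ** 2))
carrier = 2.0 * np.pi * (40 * x / nx)
holo = 2.0 + 2.0 * np.cos(carrier + phi)

# The +1 sideband sits 40 pixels right of the spectrum centre.
wrapped = fourier_phase(holo, (128, 128 + 40), 20)
```

Since the test phase stays below π, the recovered wrapped phase already matches the true phase; larger phase excursions would additionally require unwrapping, as discussed below.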
State-of-the-art results in EHI
An electron hologram is a fringe-modulated image containing the amplitude and the phase of an electron-transparent object. These holograms are formed by the interference between the reference and the object waves, brought together by a biased electron biprism (shown schematically in Fig. 32). In this configuration, the sample is positioned so that one part of the wavefront passes through the sample (object wave) and the other through vacuum. In addition, a reference hologram (without the sample) is recorded separately for a precise phase reconstruction [84]. After acquisition, the holograms are numerically reconstructed into complex images, from which both the amplitude and the phase images can be extracted.
Off-axis EHI schematic view. Reprinted from [84]. Copyright 2014, Elsevier
The latter offers an extraordinary opportunity to study a wide variety of physical parameters in, on and around the specimen. Next, two examples are given, involving the shape quantification of gold (Au) nanoparticles [85] and the 3D visualization of a bacterium [86].
For a gold decahedral nanoparticle, Fig. 33(a) and (b) show the wrapped and unwrapped phase images after reconstruction. The oriented particle shows a smooth contour moving from the center to the edge of the nanoparticle. In Fig. 33(b) the unwrapped phase image is calibrated using the mean inner potential of the FCC (face-centered cubic) Au nanoparticle. Taking the background signal as the reference plane, the thickness profile can be computed. Fig. 33(d) shows a thickness profile along the line indicated in (b). The phase image in Fig. 33(c) has been amplified 2.5 times [85] with respect to that in Fig. 33(a), for illustration purposes only.
Au decahedral nanoparticle: (a) wrapped phase image, (b) unwrapped phase image, (c) 2.5× magnification of the original phase lines in (a), and (d) computed thickness obtained using the Au crystalline potential. Reprinted from [85]. Copyright 2013, Elsevier
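The phase-to-thickness calibration just described follows from the crystalline phase equation, φ = CE·VMIP·t, so t = φ/(CE·VMIP). As a hedged numerical illustration (the values below are assumptions, not those of [85]): take a mean inner potential of about 30 V for Au and the commonly quoted CE ≈ 6.53×10^-3 rad V^-1 nm^-1 at 300 kV:

```python
# Assumed illustrative values, not taken from ref. [85]:
CE_300KV = 6.53e-3     # rad / (V nm), interaction constant at 300 kV
V_MIP_AU = 30.0        # V, assumed mean inner potential of gold

def thickness_nm(phase_rad, ce=CE_300KV, v_mip=V_MIP_AU):
    """Local thickness from the unwrapped phase: t = phi / (C_E * V_MIP)."""
    return phase_rad / (ce * v_mip)

print(thickness_nm(2.0))   # a 2 rad phase shift maps to roughly 10 nm of Au
```

Applied pixel-wise to an unwrapped phase map (with the vacuum background as zero reference), this yields the thickness profile of Fig. 33(d).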
Biological samples studied by conventional electron microscopy (scanning and transmission modes) need to be negatively stained or coated with metals to enhance their contrast, a procedure that may affect cellular components and introduce structural artifacts into the interpretation and analysis. Another limitation of electron microscopy for biological samples is radiation damage, which in most cases poses an additional challenge. In comparison, electron holography has tremendous potential to recover sample information while avoiding both problems, since structural information can be extracted even at low voltages, under cryo-conditions and at low doses. In particular, electron holography is a highly sensitive imaging technique even for light atoms, able to discriminate chemical elements with similar atomic numbers and to detect structures with different electrical potentials.
As an example, we present Staphylococcus aureus (S. aureus), a gram-positive bacterium of serious concern for public health. S. aureus infections, including methicillin-resistant S. aureus (MRSA) strains, cause several diseases, including healthcare-associated and wound infections that can lead to pneumonia, bloodstream infections and sepsis. S. aureus is usually found in the respiratory tract and on the skin. The bacteria were treated with avidin-streptavidin anti-S. aureus conjugated gold nanoparticles in order to confirm the specificity of the antibody-functionalized (Ab-functionalized) Au nanoparticles. To better understand the location and effects of Ab-functionalized Au nanoparticles on S. aureus, off-axis electron holographic interferometry (EHI) was used to determine the 3D sample structure and the location of particles attached to the cell walls. The three-dimensional reconstruction is achieved by acquiring several holograms from different observation or electron-illumination positions [86]. A set of holograms was acquired by tilting the sample from −30° to +30° in steps of 15°, as shown in Fig. 34(a). The resulting phase maps for each inclination are displayed in Fig. 34(b) and their respective unwrapped phase maps are shown in Fig. 34(c). The unwrapped phase maps in (c) vary from one another, clearly due to the different orientations of the bacterium.
a set of electron holograms at different tilt angles, b reconstructed phases and c unwrapped phases, taken with a biprism voltage of 20 V. The red circles mark the location of a single Au nanoparticle visible to the naked eye. The original images can be seen in the results presented in reference [86]; in that paper the authors did not measure the exact position of this nanoparticle. Reprinted from [86]. Copyright 2017, Wiley
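The wrapped phase maps of Fig. 34(b) fold the physical phase into (−π, π]; unwrapping restores the missing multiples of 2π. A minimal one-dimensional sketch with NumPy's `unwrap` illustrates the idea (real holograms need a 2-D unwrapping algorithm, and this ramp is purely synthetic):

```python
import numpy as np

# A smooth phase ramp whose wrapped version jumps whenever it crosses +/- pi.
true_phase = np.linspace(0.0, 6.0 * np.pi, 500)    # 0 .. ~18.85 rad
wrapped = np.angle(np.exp(1j * true_phase))        # folded into (-pi, pi]

# np.unwrap adds back 2*pi offsets wherever successive samples jump by > pi.
unwrapped = np.unwrap(wrapped)
print(np.max(np.abs(unwrapped - true_phase)))      # essentially zero
```

The approach succeeds only while the true phase changes by less than π between neighbouring samples, which is why noisy or undersampled phase maps require more robust 2-D unwrapping schemes.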
From the unwrapped phase maps a surface plot may be extracted and represented as a real 3D image of the complete bacterial surface contour. Different views of the surface plots are shown in Fig. 35, from which it is possible to distinguish the surface morphology of the bacterial cell wall, with its prominent wave-like surface patterns. One advantage of EHI is that surface information is acquired from a single specimen exposure, without tilting the sample to reconstruct its surface.
Surface plot of the reconstructed phase showing a 3D view of the bacterium obtained by EHI from three different views: a zero tilt, b vertical tilt and c horizontal tilt. Reprinted from [86]. Copyright 2017, Wiley
DHI is a non-invasive optical metrology technique that has been successfully used for over 50 years in many different applications. With new advances in optomechatronic technologies and, in particular, high-speed processors, DHI is well placed to replace conventional metrology tools. The technique is able to measure hidden object areas in 2D and 3D and can be taken outside laboratory-controlled environments. It is ideally suited to studying non-repeatable events in samples ranging from a few meters to nanometers in size.
The technique has been tested on samples from the nano to the macro world, and the data obtained can be related to tissue properties such as density, rigidity, thickness, healthy or unhealthy condition, morphology, and refractive index. By quantifying displacements and imaging characteristic unwrapped phase maps, one may associate them with the biomechanical condition or internal changes of the sample. DHI is potentially an alternative and complementary tool capable of providing new qualitative and quantitative data that will improve our understanding of sample behavior. This manuscript reports a continuous increase in biological inspections using DHI with photons and electrons. This tendency is far from slowing down, and more applications are being reported as optical systems become more complex and sophisticated. Higher speeds and resolutions, together with on-board processing on new-generation hardware, will enable future real-time inspections.
Powell, R.L., Stetson, K.A.: Interferometric vibration analysis by Wavefront reconstruction. J. Opt. Soc. Am. 55, 1593–1598 (1965)
Goodman, J.W., Lawrence, R.W.: Digital image formation from electronically detected holograms. Appl. Phys. Lett. 11, 77–79 (1967)
Lesem, L.B., Hirsch, P.M., Jordan Jr., J.A.: Scientific applications: computer synthesis of holograms for 3-D display. Commun. ACM. 11, 661–674 (1968)
Brown, B.R., Lohmann, A.W.: Complex spatial filtering with binary masks. Appl. Optics. 5, 967–969 (1966)
Heflinger, L.O., Wuerker, R.F., Brooks, R.E.: Holographic interferometry. J. Appl. Phys. 37, 642–649 (1966)
Schnars, U.: Direct phase determination in hologram interferometry with use of digitally recorded holograms. JOSA A. 11, 2011–2015 (1994)
Schnars, U., Kreis, T.M., Jueptner, W.P.: Digital recording and numerical reconstruction of holograms: reduction of the spatial frequency spectrum. Opt. Eng. 35, 977–982 (1996)
Schnars, U., Juptner, W.: Direct recording of holograms by a CCD target and numerical reconstruction. Appl. Optics. 33(2), 179–181 (1994)
Takeda, M., Ina, H.I., Kobayashi, S.: Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. JOSA. 72, 156–160 (1982)
Mendoza Santoyo, F., Kerr, D., Tyrer, J.R.: Manipulation of the Fourier components of speckle fringe patterns as part of an interferometric analysis process. J. Mod. Opt. 36, 195–204 (1989)
Yamaguchi, I.: Fringe formation in speckle photography. JOSA. A1, 81–86 (1984)
Boone, P.M.: NDT techniques: laser-based. In: Jürgen Buschow, K.H., Cahn Robert, W., Flemings Merton, C., Bernhard, I., Kramer Edward, J., Mahajan, S., Veyssière, P. (eds.) Encyclopedia of Materials: Science and Technology, pp. 6018–6021. Elsevier, Amsterdam (2001)
Vest, C.M.: Holographic Interferometry. Wiley, New York (1979)
Sharpe, W.N.: Springer Handbook of Experimental Solid Mechanics. Springer-Verlag, New York (2008)
Uribe, L.U., Hernández-Montes, M.S., Mendoza, S.F.: Fully automated digital holographic interferometer for 360 deg contour and displacement measurements. Opt. Eng. 55(12), 121719 (2016)
Hariharan, P., Oreb, B.: Stroboscopic holographic interferometry: application of digital techniques. Opt. Commun. 59(2), 83–86 (1986)
Pedrini, G., Gusev, M., Schedin, S., Tiziani, H.: Pulsed digital holographic interferometry by using a flexible fiber endoscope. Opt. Lasers Eng. 40(5–6), 487–499 (2003)
Pedrini, G., Schedin, S., Alexeenko, I., Tiziani, H.: Use of endoscopes in pulsed digital holographic interferometry. In: Hoefling, P., Jueptner, W.P.O., Kujawinska, M. (eds.) Proceeding of SPIE 4399, pp. 1–8 (2001)
Ferraro, P., De Nicola, S., Finizio, A., Grilli, S., Pierattini, G.: Digital holographic interferometry for characterization of transparent materials. In: Hoefling, R., Jueptner, W.P., Kujawinska, M. (eds.) Proceedings of SPIE 4399, pp. 9–16 (2001)
Owen, R., Zozulya, A.: Comparative study with double-exposure digital holographic interferometry and a shack–Hartmann sensor to characterize transparent materials. Appl. Optics. 41(28), 5891–5895 (2002)
De la Torre, I., Manuel, H., Mendoza Santoyo, F., Hernandez, M.M.S.: Transmission out-of-plane interferometer to study thermal distributions in liquids. Opt. Lett. 43, 871–874 (2018)
Agarwal, S., Kumar, M., Kumar, V., Shakher, C.: Analysis of alcohol-water diffusion process using digital holographic interferometry. In: Mendoza Santoyo, F.R., Mendez, E. (eds.) Proceedings of SPIE 9660S (2015)
Wang, J., Zhao, J., Di, J., Rauf, A., Hao, J.: Dynamically measuring unstable reaction-diffusion process by using digital holographic interferometry. Opt. Lasers Eng. 57, 1–5 (2014)
Wylock, C., Dehaeck, S., Cartage, T., Colinet, P., Haut, B.: Experimental study of gas-liquid mass transfer coupled with chemical reactions by digital holographic interferometry. Chem. Eng. Sci. 66(14), 3400–3412 (2011)
De la Torre-Ibarra, M.H., Mendoza Santoyo, F.: Interferometric study on birds' feathers. J. Biomed. Opt. 18(5), 1–9 (2013)
Aguayo, D., Mendoza Santoyo, F., De la Torre-I, M.H., Salas-Araiza, M.D., Caloca-Mendez, C., Gutierrez Hernandez, D.A.: Insect wing deformation measurements using high speed digital holographic interferometry. Opt. Express. 18, 5661–5667 (2010)
Aguayo, D., Santoyo, F.M., De la Torre Ibarra, M., Mendez, C.C., Salas-Araiza, M.D.: Comparison on different insects' wing displacements using high speed digital holographic interferometry. J. Biomed. Opt. 16, 1–9 (2011)
Silva, A.L., Hernández, M., del Socorro Mendoza Santoyo, M., De la Torre, I.M.H., Flores Moreno, J.M., Frausto, R., et al.: Study of skin rigidity variations due to UV radiation using digital holographic interferometry. Opt. Lasers Eng. 126, 105909 (2020)
Zak, M., Kuropka, P., Kobielarz, M., Dudek, A., Kaleta-Kuratewicz, K., Szotek, S.: Determination of the mechanical properties of the skin of pig foetuses with respect to its structure. Acta Bioeng. Biomech. 13, 37–43 (2011)
Agache, P.G., Monneur, C., Leveque, J.L., De Rigal, J.: Mechanical properties and young's modulus of human skin in vivo. Arch. Dermatol. Res. 269, 221–232 (1980)
Li, C., Guan, G., Reif, R., Huang, Z., Wang, R.K.: Determining elastic properties of skin by measuring surface waves from an impulse mechanical stimulus using phase-sensitive optical coherence tomography. J. R. Soc. Interface. 9, 831–841 (2012)
Imokawa, G., Ishida, K.: Biological mechanisms underlying the ultraviolet radiation-induced formation of skin wrinkling and sagging I: reduced skin elasticity, highly associated with enhanced dermal elastase activity, triggers wrinkling and sagging. Int. J. Mol. Sci. 16, 7753–7775 (2015)
Takema, Y., Yorimoto, Y., Kawai, M., Imokawa, G.: Age-related changes in the elastic properties and thickness of human facial skin. Br. J. Dermatol. 131, 641–648 (1994)
Tilleman, T., Tilleman, M., Neumann, M.: The elastic properties of cancerous skin: Poisson's ratio and Young's modulus. IMAJ. 6, 753–755 (2004)
Tonndorf, J., Khanna, S.M.: Tympanic-membrane vibrations in human cadaver ears studied by time-averaged holography. J. Acoust. Soc. Am. 52, 1221–1233 (1972)
Decraemer, W.F., Khanna, S.M., Funnell, W.R.J.: Interferometric measurement of the amplitude and phase of tympanic membrane vibrations in cat. Hear. Res. 38, 1–17 (1989)
Cheng, J.T., Aarnisalo, A.A., Harrington, E., del Hernandez-Montes, M.S., Furlong, C., Merchant, S.N., et al.: Motion of the surface of the human tympanic membrane measured with stroboscopic holography. Hear. Res. 263, 66–77 (2010)
Hernández-Montes, M. del S., Furlong, C., Rosowski, J.J., Hulli, N., et al.: Optoelectronic holographic otoscope for measurement of nano-displacements in tympanic membranes. J. Biomed. Opt. 14(3), 034023 (2009)
Flores-Moreno, J.M., Furlong, C., Cheng, J.T., Rosowski, J.J., Merchant, S.N.: Characterization of acoustically induced deformations of human tympanic membranes by digital holography and shearography. In: Rodríguez-Vera, R., Díaz-Uribe, R. (eds.) Proceedings of SPIE 80118C-80118C – 10 (2011)
Cheng, J.T., Hamade, M., Merchant, S.N., Rosowski, J.J., Harrington, E., Furlong, C.: Wave motion on the surface of the human tympanic membrane: holographic measurement and modeling analysis. J. Acoust. Soc. Am. 133, 918–937 (2013)
Flores-Moreno, J.M., Mendoza Santoyo, F., Estrada Rico, J.C.: Holographic otoscope using dual-shot-acquisition for the study of eardrum biomechanical displacements. Appl. Optics. 52, 1731–1742 (2013)
Rosowski, J.J., Dobrev, I., Khaleghi, M., Lu, W., Cheng, J.T., Harrington, E., et al.: Measurements of three-dimensional shape and sound-induced motion of the chinchilla tympanic membrane. Hear. Res. 301, 44–52 (2013)
Khaleghi, M., Furlong, C., Ravicz, M., Cheng, J.T., Rosowski, J.J.: Three-dimensional vibrometry of the human eardrum with stroboscopic lensless digital holography. J. Biomed. Opt. 20, 051028 (2015)
Razavi, P., Dobrev, I., Ravicz, M.E., Cheng, J.T., Furlong, C., Rosowski, J.J.: Transient response of the eardrum excited by localized mechanical forces. Mech. Biol. Syst. Mater. 6, 31–37 (2016)
Pedrini, G., Osten, W., Gusev, M.E.: High-speed digital holographic interferometry for vibration measurement. Appl. Optics. 45, 3456 (2006)
Solís, S.M., del Hernández-Montes, M.S., Santoyo, F.M.: Measurement of Young's modulus in an elastic material using 3D digital holographic interferometry. Appl. Optics. 50, 3383–3388 (2011)
Flores-Moreno, J.M., Furlong, C., Rosowski, J.J., Harrington, E., Cheng, J.T., Scarpino, C., et al.: Holographic otoscope for nanodisplacement measurements of surfaces under dynamic excitation. Scanning. 33, 342–352 (2011)
Hernandez-Montes, M., Mendoza Santoyo, F., Muñoz, S., Perez, C., De La Torre, M., Flores, M., Alvarez, L.: Surface strain-field determination of tympanic membrane using 3D-digital holographic interferometry. Opt. Lasers Eng. 71, 42–50 (2015)
Hernández-Montes, M., Mendoza Santoyo, F., Pérez López, C., Muñoz Solís, S., Esquivel, J.: Digital holographic interferometry applied to the study of tympanic membrane displacements. Opt. Lasers Eng. 49, 698–702 (2011)
Trillo, C., Doval, A.F., Hernández-Montes, S., Deán-Ben, X.L., López-Vázquez, J.C., Fernández, J.L.: Pulsed TV holography measurement and digital reconstruction of compression acoustic wave fields: application to nondestructive testing of thick metallic samples. Meas. Sci. Technol. 22, 025109 (2011)
Solís, S.M., del Hernández-Montes, M.S., Mendoza Santoyo, F.: Tympanic membrane contour measurement with two source positions in digital holographic interferometry. Biomed. Opt. Express. 3, 3203–3210 (2012)
Solís, S., Santoyo, F., del Hernández-Montes, M.S.: 3D displacement measurements of the tympanic membrane with digital holographic interferometry. Opt. Express. 20(5), 5613–5621 (2012)
Santiago-Lona, C., del Hernández-Montes, M.S., F. Moreno, M., Piazza, V., De La Torre, M., Pérez-López, C., Mendoza-Santoyo, F., Sierra, A., Esquivel, J.: Tympanic membrane displacement and thickness data correlation using digital holographic interferometry and confocal laser scanning microscopy. Opt. Eng. 58, 084106 (2019)
Kumar, A.: Small animal ear diseases. In: Gotthelf, L.N. (ed.) Anatomy of the Canine and Feline Ear. Elsevier Saunders, St. Louis (2005)
Hernández-Montes, M. del S., Muñoz, S., De La Torre, M., Flores, M., Pérez, C., Mendoza-Santoyo, F.: Quantification of the vocal folds' dynamic displacements. J. Phys. D: Appl. Phys. 49, 175401 (1–7) (2016)
Hernández-Montes, M. del S., Muñoz, S., Mendoza, F.: Measurement of vocal folds displacements using high-speed digital holographic interferometry. LAOP Technical Digest LTu4A. 28, (2014)
Pedrini, G., Alexeenko, I., Zaslansky, P., Tiziani, H.J., Osten, W.: Digital holographic interferometry for investigations in biomechanics. Proc. SPIE. 5776, 325–332 (2005)
Dirksen, D., Droste, H., Kemper, B., Deleré, H., Deiwick, M., Scheld, H., Von Bally, G.: Lensless Fourier holography for digital holographic interferometry on biological samples. Opt. Lasers Eng. 36, 241–249 (2001)
Akhmetshin, A.M.: High sensitive multiresolution analysis of low-contrast radiologic images based on the digital pseudo coherent holographic interferometry method. J. Digit. Imaging. 12(2 SUPPL. 1), 197–198 (1999)
Belashov, A.V., Belyaeva, T.N., Kornilova, E.S., Petrov, N.V., Salova, A.V., Semenova, I.V., Vasyutinskii, O.S., Zhikhoreva, A.A.: Detection of photoinduced transformations in live HeLa cells by means of digital holographic micro-interferometry. In: Imaging and Appl. Opt. DTh1l.5 (2016)
Kumar, M., Singh Birhman, A., Kannan, S., Shakher, C.: Measurement of initial displacement of canine and molar in human maxilla under different canine retraction methods using digital holographic interferometry. Opt. Eng. 57, 094106 (2018)
Tavera, R.C.G., De La Torre, I.M.H., Flores, M.J.M., Luna, H.J.M., Briones, R.M.J., Mendoza, S.F.: Optical phase analysis in drilled cortical porcine bones using digital holographic interferometry. In: Popescu, G., Park, Y. (eds.) Proceedings of SPIE, p. 9718 (2016)
Alvarez, A., De La Torre Ibarra, M., Santoyo, F., Anaya, T.-S.: Strain determination in bone sections with simultaneous 3D digital holographic interferometry. Opt. Lasers Eng. 57, 101–108 (2014)
Pantelić, D., Grujić, D., Vasiljević, D.: Single-beam, dual-view digital holographic interferometry for biomechanical strain measurements of biological objects. J. Biomed. Opt. 19, 127005 (2014)
Tavera, R.C.G., De la Torre, I.M.M.H.H., Flores, M.J.M., Hernandez, M., Ma Del, S., Mendoza-Santoyo, F., Briones, R.M.J., Sanchez, P.J.: Surface structural damage study in cortical bone due to medical drilling. Appl. Optics. 56, F179–F188 (2017)
Tavera Ruiz, C.G., De La Torre-Ibarra, M.H., Flores-Moreno, J.M., Frausto-Reyes, C., Mendoza Santoyo, F.: Cortical bone quality affectations and their strength impact analysis using holographic interferometry. Biomed. Opt. Express. 9, 4818–4833 (2018)
Frausto-Rea, G., De la Torre, M.H., Flores, J.M., Silva, L., Briones-R, M., Mendoza Santoyo, F.: Micrometric size measurement of biological samples using a simple and non-invasive transmission interferometric set up. Opt. Express. 27, 26251–26263 (2019)
Popescu, G.: Quantitative Phase Imaging of Cells and Tissues. McGraw-Hill Biophotonics. McGraw-Hill, New York (2011)
Marquet, P., Rappaz, B., Magistretti, P.J., Cuche, E., Emery, Y., Colomb, T., Depeursinge, C.: Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy. Opt. Lett. 30, 468–470 (2005)
Kim, M.K.: Digital Holographic Microscopy, Springer Series in Optical Sciences. Springer Science+Business Media, New York (2011)
Bianco, V., Miccio, L., Memmolo, P., Merola, F., Mandracchia, B., Cacace, T., et al.: 3D imaging in microfluidics: new holographic methods and devices. Microfluidics BioMEMS Med. Microsyst. XVII. Proc SPIE. 10875, 108750U (2019)
Rastogi, V., Gadkari, R., Agarwal, S., Dubey, S.K., Shakher, C.: Digital holographic interferometric in vitro imaging of Escherichia coli (E. coli) bacteria. Holography. 11030, 1103011 (2019)
Barroso Peña, Á., Ketelhut, S., Heiduschka, P., Nettels-Hackert, G., Schnekenburger, J., Kemper, B.: Refractive index properties of the retina accessed by multi-wavelength digital holographic microscopy. Proc. SPIE. 10883, 108830X (2019)
Goud, B.K., Shinde, D.D., Udupa, D.V., Krishna, C.M., Rao, K.D., Sahoo, N.K.: Low cost digital holographic microscope for 3-D cell imaging by integrating smartphone and DVD optical head. Opt. Lasers Eng. 114, 1–6 (2019)
Dubey, V., Popova, D., Ahmad, A., Acharya, G., Basnet, P., Mehta, D.S., et al.: Partially spatially coherent digital holographic microscopy and machine learning for quantitative analysis of human spermatozoa under oxidative stress condition. Sci. Rep. 9, 3564 (2019)
Gabor, D.: Microscopy by reconstructed wavefronts. Proc. Royal Soc. London Ser. A. 197, 454 (1949)
Gabor, D.: Microscopy by reconstructed wavefronts: II. Proc. Phys. Soc. B. 64, 449 (1951)
Möllenstedt, G., Düker, H.: Beobachtungen und Messungen an Biprisma-Interferenzen mit Elektronenwellen. Z. Physik. 145, 377–397 (1956)
Tonomura, A., Endo, J., Matsuda, T.: An application of electron holography to interference microscopy. Optik. 53, 143 (1979)
Tonomura, A.: Application of electron holography using a field-emission electron microscope. J. Electron Microsc. (Tokyo). 33, 101 (1984)
Yatagai, T., Ohmura, K., Iwasaki, S., Hasegawa, S., Endo, J., Tonomura, A.: Quantitative phase analysis in electron holographic interferometry. Appl. Optics. 26, 377–382 (1987)
Völkl, E., Allard, L.F., Joy, D.C.: Introduction to Electron Holography. Springer Science + Business Media, New York (1999)
Hÿtch, M.J., et al.: Dark-field electron holography for the measurement of geometric phase. Ultramicroscopy. 111, 1328–1337 (2011)
Cantu-Valle, J., Ruiz-Zepeda, F., Mendoza-Santoyo, F., Jose-Yacaman, M., Ponce, A.: Calibration for medium resolution off-axis electron holography using a flexible dual-lens imaging system in a JEOL ARM 200F microscope. Ultramicroscopy. 147, 44–50 (2014)
Cantu-Valle, J., Ruiz-Zepeda, F., Voelkl, E., Kawasaki, M., Jose-Yacaman, M., Ponce, A.: Determination of the surface morphology of gold-decahedra nanoparticles using an off-axis electron holography dual-lens imaging system. Micron. 54-55, 82–86 (2013)
Ortega, E., Cantu-Valle, J., Plascencia-Villa, G., Vergara, S., Mendoza-Santoyo, F., Londono-Calderon, A., Santiago, U., Ponce Pedraza, A.: Morphology visualization of irregular shape bacteria by electron holography and tomography. Microsc. Res. Tech. 80(12), 1249–1255 (2017)
Centro de Investigaciones en Óptica A. C, (CIO), Loma del Bosque 115, León, 37150, Guanajuato, Mexico
María del Socorro Hernández-Montes, Fernando Mendoza-Santoyo, Mauricio Flores Moreno, Luis Silva Acosta & Natalith Palacios-Ortega
Centro de Investigaciones en Óptica A. C, (CIO), Unidad Aguascalientes, Prol. Constitución 607, Fracc. Reserva Loma Bonita, 20200, Aguascalientes, Ags, Mexico
Manuel de la Torre-Ibarra
María del Socorro Hernández-Montes
Fernando Mendoza-Santoyo
Mauricio Flores Moreno
Luis Silva Acosta
Natalith Palacios-Ortega
All authors contributed equally in searching the data for this review. SHM and FMS, were the major contributors in writing the manuscript. The authors read and approved the final manuscript.
Correspondence to María del Socorro Hernández-Montes.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hernández-Montes, M., Mendoza-Santoyo, F., Flores Moreno, M. et al. Macro to nano specimen measurements using photons and electrons with digital holographic interferometry: a review. J. Eur. Opt. Soc.-Rapid Publ. 16, 16 (2020). https://doi.org/10.1186/s41476-020-00133-8
Accepted: 23 April 2020
Digital holographic interferometry; macro to nano measurements; bio-applications; electrons; photons
Mathematicians' intuitions - a survey
I'm passing this on from Mark Zelcer (CUNY):
A group of researchers in philosophy, psychology and mathematics are requesting the assistance of the mathematical community by participating in a survey about mathematicians' philosophical intuitions. The survey is here: http://goo.gl/Gu5S4E. It would really help them if many mathematicians participated. Thanks!
Published by Richard Pettigrew at 10:47 am
Abstract Structure
Draft of a paper, "Abstract Structure", cleverly called that because it aims to explicate the notion of "abstract structure", bringing together some things I mentioned a few times previously.
Interview at 3am magazine
Here is the shameless self-promotion moment of the day: the interview with me at 3am magazine is online. I mostly talk about the contents of my book Formal Languages in Logic, and so cover a number of topics that may be of interest to M-Phi readers: the history of mathematical and logical notation, 'math infatuation', history of logic in general, and some more. Comments are welcome!
Published by Catarina at 12:54 pm
Methodology in the Philosophy of Logic and Language
This M-Phi post is an idea Catarina and I hatched, after a post Catarina did a couple of weeks back at NewAPPS, "Searle on formal methods in philosophy of language", commenting on a recent interview of John Searle, where Searle comments that
"what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight".
I commented a bit underneath Catarina's post, as this is one thing that interests me. I'm writing a more worked-out discussion. But because I tend to reject the terminology of "formal modelling" (note, British English spelling!), I have to formulate Searle's objection a bit differently. Going ahead a bit, his view is that:
the abstract study of languages as free-standing entities has replaced study of the psychology of actual speakers and hearers.
This is an interesting claim, impinging on the methodology of the philosophy of logic and language. I think the clue to seeing what the central issues are can be found in David Lewis's 1975 article, "Languages and Language" and in his earlier "General Semantics", 1970.
1. Searle
To begin, I explain problems (maybe idiosyncratic ones) I have with both of these words "formal" and "modelling".
1.a "formal"
By "formal", I normally mean simply "uninterpreted". So, for example, the uninterpreted first-order language $L_A$ of arithmetic is a formal language, and indeed a mathematical object. Mathematically speaking, it is a set $\mathcal{E}$ of expressions (finite strings from a vocabulary), with several distinguished operations (concatenation and substitution) and subsets (the set of terms, formulas, etc). But it has no interpretation at all. It is therefore formal. On the other hand, the interpreted language $(L_A, \mathbb{N})$ of arithmetic is not a "formal" language. It is an interpreted language, some of whose strings have referents and truth values! Suppose that $v$ is a valuation (a function from the variables of $L_A$ to the domain of $\mathbb{N}$), that $t$ is a term of this language and $\phi$ is a formula of this language. Then $t$ has a denotation $t^{\mathbb{N},v}$ and $\phi$ has a truth value $\mid \mid \phi \mid \mid_{\mathbb{N},v}$.
This distinction corresponds to what Catarina calls "de-semantification" in her article "The Different Ways in which Logic is (said to be) Formal" (History and Philosophy of Logic, 2011). My use of "formal" is always "uninterpreted". So, $L_A$ is a formal language, while $(L_A, \mathbb{N})$ is not a "formal" language, but is rather an interpreted language, whose intended interpretation is $\mathbb{N}$. (The intended interpretation of an interpreted language is built into the language by definition. There is no philosophical problem of what it means to talk about the intended interpretation of an interpreted language. It is no more conceptually complicated than talking about the distinguished order $<$ in a structure $(X,<)$.)
1.b "modelling"
But my main problem is with this Americanism, "modelling", which I seem to notice all over the place. It seems to me that there is no "modelling" involved here, unless it is being used to involve a translation relation. For modelling itself, in physics, one might, for example, model The Earth as an oblate spheroid $\mathcal{S}$ embedded in $\mathbb{R}^3$. That is modelling. Or one might model a Starbucks coffee cup as a truncated cone embedded in $\mathbb{R}^3$. Etc. But, in the philosophy of logic and language, I don't think we are "modelling": languages are languages, are languages, are languages ... That is, languages are not "models" in the sense used by physicists and others -- for if they are "models", what are they models of?
A model $\mathcal{A} = (A, \dots)$ is a mathematical structure, with a domain $A$ and a bunch of defined functions and relations on the domain. One can probably make this precise for the case of an oblate spheroid or a truncated cone; this is part of modelling in science. But in the philosophy of logic and language, when describing or defining a language, we are not modelling.
But: I need to add that Catarina has rightly reminded me that some authors do often talk about logic and language in terms of "modelling" (now I should say "modeling" I suppose), and think of logic as being some sort of "model" of the "practice" of, e.g., the "working mathematician". A view like this has been expressed by John Burgess, Stewart Shapiro and Roy Cook. I am sceptical. What is a "practice"? It seems to be some kind of supra-human "normative pattern", concerning how "suitably qualified experts would reason", in certain "idealized circumstances". Personally, I find these notions obscure and unhelpful; and it all seems motivated by a crypto-naturalistic desire to remain in contact with "practice"; whereas, when I look, the "practice" is all over the place. When I work on a mathematics problem, the room ends up full of paper, and most of the squiggles are, in fact, wrong.
So, I don't think a putative logic is somehow to be thought of as "modelling" (or perhaps to be tested by comparing it with) some kind of "practice". For example, consider the inference,
$\forall x \phi \vdash \phi^x_t$
Is this meant to "model" a "practice"? If so, it must be something like this:
The practice wherein certain humans $h_1, \dots$ tend to "consider" a string $\forall x \phi$ and then "emit" a string $\phi^x_t$
And I don't believe there is such a "practice". This may all be a reflection of my instinctive rationalism and methodological individualism. If there are such "practices", then these are surely produced by our inner cognition. Otherwise, I have no idea what the scientifically plausible mechanism behind a "practice" is.
Noam Chomsky of course long ago distinguished performance and competence (and before him, Ferdinand de Saussure distinguished parole and langue), and has always insisted that generative grammars somehow correspond to competence. If what is meant by "practice" is competence, in something like the Chomskyan sense, then perhaps that is the way to proceed in this direction. But in the end, I suspect that brings one back to the question of what it means to "speak/cognize a language", which is discussed below.
1.c Über-language
On the other hand, when Searle mentions modelling, it is likely that he has the following notion in mind:
A defined language $L$ models (part of) English.
In other words, the idea is that English is basic and $L$ is a "tool" used to "model" English. But is English basic? I am sceptical of this, because there is a good argument whose conclusion denies the existence of English. Rather, there is an uncountable infinity of languages; many tens of millions of them, $L_1, L_2, \dots, L_{1000,000}, \dots$, are mutually similar, albeit heterogeneous, idiolects, spoken by speakers, who succeed to a high degree in mutual communication. None of these $L_1, L_2, \dots, L_{1000,000}, \dots$ spoken by individual speakers is English. If one of these is English, then which one? The idiolect spoken by The Queen? Maybe the idiolect spoken by President Barack Obama? Michelle Obama? Maybe the idiolect spoken by the deceased Christopher Hitchens? Etc. The conclusion is that, strictly speaking, there is no such thing as English.
It seems the opposite is true: there is a heterogeneous speech community $C$ of speakers, whose members speak overlapping and similar idiolects, and these are to a high degree mutually interpretable. But there is no single "über-language" they all speak. By the same reasoning, one may deny altogether the existence of so-called "natural" languages. (Cf., methodological individualism in social sciences; also Chomsky's distinction between I-languages and E-languages.) There are no "natural" languages. There are languages; and there are speakers; and speakers speak a vast heterogeneous array of varying and overlapping languages, called idiolects.
1.d Methodology
Next Searle moves on to his central methodological point:
Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn't happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight. …
The point of disagreement here is again with the phrase "formal model", as the languages we study aren't formal models! The entities involved when we work in these areas are sometimes pairs of languages $L_1$ and $L_2$ and the connection is not that $L_1$ is a "model" of $L_2$ but rather that "$L_1$ has certain translational relations with $L_2$". And translation is not "modelling". A translation is a function from the strings of $L_1$ to the strings of $L_2$ preserving certain properties. Searle illustrates his line of thinking by saying:
And this goes back to Russell's Theory of Descriptions. … I think this was a fatal move to think that you've got to get these intuitive ideas mapped on to a calculus like, in this case, the predicate calculus, which has its own requirements. It is a disastrously inadequate conception of language.
But this seems to me an inadequate description of Russell's 1905 essay. Russell was studying the semantic properties of the string "the" in a certain language, English. (The talk of a "calculus" loads the deck in Searle's favour.) Russell does indeed translate between languages. For example, the string
(1) The king of France is bald
is translated to the string
(2) $\exists x(\text{king-of-Fr.}(x) \wedge \text{Bald}(x) \wedge \forall y(\text{king-of-Fr.}(y) \to y = x)).$
But this latter string (2) is not a "model", either of the first string (1), or of some underlying "psychological mechanism".
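The claim above -- that a translation is a function from strings of $L_1$ to strings of $L_2$ preserving certain properties, rather than a "model" -- can be sketched in a few lines. The helper below is hypothetical and handles only toy sentences of the form "The F is G":

```python
# Illustrative toy: a translation is just a string-to-string function.
# Here, a (hypothetical) Russellian expansion of "The F is G".
def russell(sentence):
    """Map 'The F is G' to its Russellian expansion (a string of L2)."""
    words = sentence.rstrip(".").split()
    assert words[0].lower() == "the" and "is" in words
    i = words.index("is")
    f = "-".join(words[1:i])        # the description F
    g = "-".join(words[i + 1:])     # the predicate G
    return f"∃x({f}(x) ∧ {g}(x) ∧ ∀y({f}(y) → y = x))"

print(russell("The king of France is bald"))
# → ∃x(king-of-France(x) ∧ bald(x) ∧ ∀y(king-of-France(y) → y = x))
```

Nothing in this function is a "model" of anything; it is a mapping between expressions of two languages, which is the point at issue.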
… That's my main objection to contemporary philosophy: they've lost sight of the questions. It sounds ridiculous to say this because this was the objection that all the old fogeys made to us when I was a kid in Oxford and we were investigating language. But that is why I'm really out of sympathy. And I'm going to write a book on the philosophy of language in which I will say how I think it ought to be done, and how we really should try to stay very close to the psychological reality of what it is to actually talk about things.
Having got this far, we reach a quite serious problem. There is, currently, no scientific understanding of "the psychological reality of what it is to actually talk about things". A cognitive system $C$ may speak a language $L$. How this happens, though, is anyone's guess. No one knows how it can be that
Prof. Gowers uses the string "number" to refer to the abstract object $\mathbb{N}$.
Prof. Dutilh Novaes uses the string "Aristotle" to refer to Aristotle.
SK uses the string "casa" to refer to his home.
Mr. Salmond uses the string "the referendum" to refer to the future referendum on Scottish independence.
The problem here is that there is no causal connection between Prof. Gowers and $\mathbb{N}$! Similarly, a (currently) future referendum (18 Sept 2014) cannot causally influence Mr. Salmond's present (10 July 2014) mental states. So, it is quite a serious puzzle.
2. Lewis
Methodologically, on such issues -- that is, in the philosophy of logic and language -- the outlook I adhere to is the same as Lewis's, whose view echoes that of Russell, Carnap, Tarski, Montague and Kripke. Lewis draws a crucial distinction:
(A) Languages (a language is an "abstract semantic system whereby symbols are associated with aspects of the world").
(B) Language as a social-psychological phenomenon.
With Lewis, I think it's important not to confuse these. In an M-Phi post last year (March 2013), I quoted Lewis's summary from his "General Semantics" (1970):
My proposals will also not conform to the expectations of those who, in analyzing meaning, turn immediately to the psychology and sociology of language users: to intentions, sense-experience, and mental ideas, or to social rules, conventions, and regularities. I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics.
I will just call them (A) and (B). See also Lewis's "Languages and Language" (1975) for this distinction. Most work in what is called "formal semantics" is (A)-work. One defines a language $L$ and proves some results about it; or one defines two languages $L_1, L_2$ and proves results about how they're related. But this is (A)-work, not (B)-work.
3. (Syntactic-)Semantic Theory and Conservativeness
For example, suppose I decided I am interested in the following language $\mathcal{L}$: this language $\mathcal{L}$ has strings $s_1, s_2$, and a meaning function $\mu_{\mathcal{L}}$ such that,
$\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
$\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
Then this is in a deep sense logically independent of (B)-things. And one can, in fact, prove this!
First, let $L_O$ be an "empirical language", containing no terms for syntactical entities or semantic properties and relations. $L_O$ may contain terms and predicates for rocks, atoms, people, mental states, verbal behaviour, etc. But no terms for syntactical entities or semantic relations.
Second, we extend this observation language $L_O$ by adding:
the unary predicate "$x$ is a string in $\mathcal{L}$" (here "$\mathcal{L}$" is not treated as a variable),
the constants "$s_1$", "$s_2$",
the unary function symbol "$\mu_{\mathcal{L}}(-)$",
the constants "the proposition that Oxford is north of Cambridge" and "the proposition that Oxford is north of Birmingham".
Third, consider the following six axioms of semantic theory $ST$ for $\mathcal{L}$:
(i) $s_1$ is a string in $\mathcal{L}$.
(ii) $s_2$ is a string in $\mathcal{L}$.
(iii) $s_1 \neq s_2$.
(iv) the only strings in $\mathcal{L}$ are $s_1$ and $s_2$.
(v) $\mu_{\mathcal{L}}(s_2) = \text{the proposition that Oxford is north of Birmingham}$
(vi) $\mu_{\mathcal{L}}(s_1) = \text{the proposition that Oxford is north of Cambridge}$
Then, assuming $O$ is not too weak ($O$ must prove that there are at least two objects), for almost any choice of $O$ whatsoever,
$O+ST$ is a conservative extension of $O$.
To prove this, I consider any interpretation $\mathcal{I}$ for $L_O$, and I expand it to a model $\mathcal{I}^+ \models ST$. There are some minor technicalities, which I skirt over.
Consequently, the semantic theory $ST$ is neutral with respect to any observation claim: the semantic description of a language $\mathcal{L}$ is consistent with (almost) any observation claim. That is, the semantic description of a language $\mathcal{L}$ cannot be empirically tested, because it has no observable consequences.
(There are some further caveats. If the strings actually are physical objects, already referred to in $L_O$, then this result may not quite hold in the form stated. Cf., the guitar language.)
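The expansion step in the conservativeness argument can be sketched concretely. This is an illustrative toy (finite domains only, with the two propositions represented by two fresh objects), not the general model-theoretic proof:

```python
# Illustrative toy: any interpretation I for L_O with at least two objects
# can be expanded to a model I+ of ST by picking two distinct objects as
# the denotations of s1, s2 and assigning mu_L accordingly.
def expand(domain):
    """Expand a domain (>= 2 objects) to a model of the six ST axioms."""
    objs = sorted(domain)
    s1, s2 = objs[0], objs[1]           # interpret the constants s1, s2
    strings = {s1, s2}                  # extension of 'x is a string in L'
    # two fresh objects play the role of the two propositions
    p_cam, p_birm = "prop-Ox-N-of-Cam", "prop-Ox-N-of-Birm"
    mu = {s1: p_cam, s2: p_birm}        # interpretation of mu_L
    return s1, s2, strings, mu, p_cam, p_birm

def satisfies_ST(model):
    s1, s2, strings, mu, p_cam, p_birm = model
    return (s1 in strings and s2 in strings   # (i), (ii)
            and s1 != s2                      # (iii)
            and strings == {s1, s2}           # (iv)
            and mu[s2] == p_birm              # (v)
            and mu[s1] == p_cam)              # (vi)

print(satisfies_ST(expand({"rock", "atom", "person"})))  # True
```

Since the expansion never constrains the $L_O$-facts about rocks, atoms, or people, no observation claim is affected -- which is the conservativeness point.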
4. The Wittgensteinian View
Lewis's view can be contrasted with a Wittgensteinian view, which aims to identify $(A)$ and $(B)$ very closely. But, since this is a form of reductionism, there must be "bridge laws" connecting the (A)-things and the (B)-things. But what are they? They play a crucial methodological role. I come back to this below.
Catarina formulates the view like this:
I am largely in agreement with Searle both on what the ultimate goals of philosophy of language should be, and on the failure of much (though not all!) of the work currently done with formal methods to achieve this goal. Firstly, I agree that "any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers". Language should not be seen as a freestanding entity, as a collection of structures to be investigated with no connection to the most basic fact about human languages, namely that they are used by humans, and an absolutely crucial component of human life. (I take this to be a general Wittgensteinian point, but one which can be endorsed even if one does not feel inclined to buy the whole Wittgenstein package.)
In short, I think this is a deep (but very constructive!) disagreement about ontology: what a language is.
On the Lewisian view, a language is, roughly, "a bunch of syntax and meaning functions"; and, in that sense, it is indeed a "free-standing entity".
(Analogously, the Lie group $SU(3)$ is a free-standing entity and can be studied independently of its connection to quantum particles called gluons (gluons are the "colour gauge field" of an $SU(3)$-gauge theory, which explains how quarks interact together). So, e.g., one can study Latin despite there being no speakers of the language; one can study infinitary languages, despite their having no speakers. One can study strings (e.g., proofs) of length $>2^{1000}$ despite their having no physical tokens. The contingent existence of one, or fewer, or more, speakers of a language $L$ has no bearing at all on the properties of $L$. Similarly, the contingent existence or non-existence of a set of physical objects of cardinality $2^{1000}$ has no bearing on the properties of $2^{1000}$. It makes no difference to the ontological status of numbers.)
Catarina continues by noting the usual way that workers in the (A)-field generally keep (A)-issues separate from (B)-issues:
I also agree that much of what is done under the banner of 'formal semantics' does not satisfy the requirement of sticking as closely as possible to the psychology of actual human speakers and hearers. In my four years working at the Institute for Logic, Language and Computation (ILLC) in Amsterdam, I've attended (and even chaired!) countless talks where speakers presented a sophisticated formal machinery to account for a particular feature of a given language, but the machinery was not intended in any way to be a description of the psychological phenomena underlying the relevant linguistic phenomena.
I agree - this is because when such a language $L$ is described, it is being considered as a free-standing entity, and so is not intended to be a "description". Catarina continues then:
It became one of my standard questions at such talks: "Do you intend your formal model to correspond to actual cognitive processes in language users?" More often than not, the answer was simply "No", often accompanied by a puzzled look that basically meant "Why would I even want that?". My general response to this kind of research is very much along the lines of what Searle says.
I think that the person working in the (A)-field sees that (A)-work and (B)-work are separate, and may not have any good idea about how they might even be related. Finally, Catarina turns to a positive note:
However, there is much work currently being done, broadly within the formal semantics tradition, that does not display this lack of connection with the 'psychological reality' of language users. Some of the people I could mention here are (full disclosure: these are all colleagues or former colleagues!) Petra Hendriks, Jakub Szymanik, Katrin Schulz, and surely many others. (Further pointers in comments are welcome.) In particular, many of these researchers combine formal methods with empirical methods, for example conducting experiments of different kinds to test the predictions of their theories.
In this body of research, formalisms are used to formulate theories in a precise way, leading to the design of new experiments and the interpretation of results. Formal models are thus producing new insights into the nature of language use (pace Searle), which are then put to test empirically.
The methodological issue comes alive precisely at this point.
How are (A)-issues related to (B)-issues?
The logical point I argued for above was that a semantic theory $ST$ for a fixed well-defined language $L$ makes no empirical predictions, since the theory $ST$ is consistent with any empirical statement $\phi$. I.e., if $\phi$ is consistent, then $ST + \phi$ is consistent.
5. Cognizing a Language
On the other hand, there is a different empirical claim:
(C) a speaker $S$ speaks/cognizes $L$.
This is not a claim about $L$ per se. It is a claim about how the speaker $S$ and the language $L$ are related. This is something I gave some talks about before, and also wrote about a few times before here (e.g., "Cognizing a Language"), and also wrote about in a paper, "There's Glory for You!" (actually a dialogue, based on a different Lewis - Lewis Carroll) that appeared earlier this year. A cognizing claim like (C) might yield a prediction. Such a claim uses the predicate "$x$ speaks/cognizes $y$", which links together the agent and the language. But without this, there are no predictions.
The methodological point is then this: any such prediction from (C) can only be obtained by bridge laws, invoking this predicate linking the agent and language. But these bridge laws have not been stated at all. Such a bridge law might take the generic form:
Psycho-Semantic Bridge Law
If $S$ speaks $L$ and $L$ has property $P$, then $S$ will display (verbal) behaviour $B$.
Typically, such psycho-semantic laws are left implicit. But, in the end, to understand how the (A)-issues are connected to the (B)-issues, such putative laws need to be made explicit. Methodologically, then, I say that all of the interest lies in the bridge laws.
So, that's it. I summarize the three main points:
1. Against Searle and with Lewis: languages are free-standing entities, with their own properties, and these properties aren't dependent on whether there are, or aren't, speakers of the language.
2. The semantic description of a language $L$ is empirically neutral (indeed, the properties of a language are in some sense modally intrinsic).
3. To connect together the properties of a language $L$ and the psychological states or verbal behaviour of an agent $S$ who "speaks/cognizes" $L$, one must introduce bridge laws. Usually they are assumed implicitly, but from the point of view of methodology, they need to be stated clearly.
6. Update: Addendum
I hadn't totally forgotten -- I sort of semi-forgot. But Catarina wrote about these topics before in several M-Phi posts, so I should include them too:
Logic and the External Target Phenomena (2 May 2011)
van Benthem and System Imprisonment (5 Sept 2011)
Book draft: Formal Languages in Logic (19 Sept 2011)
(Probably some more, that I actually did forget...) And these raise many questions related to the methodological one here.
Published by Jeffrey Ketland at 3:36 am
\begin{document}
\title{\textsc{HiPaR}{}: Hierarchical Pattern-Aided Regression}
\author{Luis Galárraga} \affiliation{
\institution{Inria}
\country{France} } \email{[email protected]}
\author{Olivier Pelgrin} \affiliation{
\institution{Aalborg University}
\country{Denmark}} \email{[email protected]}
\author{Alexandre Termier} \affiliation{
\institution{University of Rennes 1}
\country{France}} \email{[email protected]}
\begin{abstract} We introduce \textsc{HiPaR}{}, a novel pattern-aided regression method for tabular data containing both categorical and numerical attributes. \textsc{HiPaR}{} mines hybrid rules of the form $p \Rightarrow y = f(X)$ where $p$ is the characterization of a data region and $f(X)$ is a linear regression model on a variable of interest $y$. \textsc{HiPaR}{} relies on pattern mining techniques to identify regions of the data where the target variable can be accurately explained via local linear models. The novelty of the method lies in the combination of an enumerative approach to explore the space of regions and efficient heuristics that guide the search.
Such a strategy provides more flexibility when selecting a small set of jointly accurate and human-readable hybrid rules that explain the entire dataset. As our experiments show, \textsc{HiPaR}{} mines fewer rules than existing pattern-based regression methods while still attaining state-of-the-art prediction performance. \end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction} In the golden age of data, accurate numerical prediction models are of great utility in virtually every discipline. The task of predicting a numerical variable of interest from the values of other variables --called features-- is known as regression analysis and the literature is rich in this respect~\cite{piecewise-regression, cpxr, model-trees-with-splitting-nodes, regression-trees, boosting-trees, model-trees}. As data grows steadily more complex, and the need for interpretable methods becomes compelling~\cite{survey-interpretability}, a line of research in regression analysis focuses on learning interpretable prediction models on heterogeneous data. By \emph{interpretable}, we mean models that provide a compact and comprehensible explanation of the interaction between the features and the target variable, e.g., a linear function. By \emph{heterogeneous} data, we mean data that can hardly be modeled by a single global regression function, but rather by a set of local models applicable to subsets of the data. The most prominent methods in this line are piecewise regression (PR, also called segmented regression)~\cite{piecewise-regression}, regression trees (RT)~\cite{regression-trees}, model trees (MT)~\cite{model-trees, model-trees-with-splitting-nodes} and contrast pattern-aided regression (CPXR)~\cite{cpxr}. All these approaches mine \emph{hybrid rules} on tabular data such as the example in Table~\ref{table:example}. A hybrid rule is a statement of the form $p \Rightarrow y = f(X)$ where $p$ is a logical expression on categorical features such as $p : \mathit{property\text{-}type}=``\mathit{cottage}"$, and $y = f(X)$ is a regression model for a numerical variable of interest, e.g., $\mathit{price} = \alpha + \beta \times \mathit{rooms} + \gamma \times \mathit{surface}$, that applies only to the region characterized by $p$, for instance, $\{x^1, x^2, x^3\}$ in Table~\ref{table:example}.
The advantage of methods based on hybrid rules is that they deliver statements that can be easily interpreted by humans. In contrast, they usually lag behind black-box methods such as gradient boosting trees~\cite{boosting-trees} or random forests~\cite{random-forests} in terms of prediction power.
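To make the notion concrete, the right-hand side $f(X)$ of a hybrid rule can be obtained by ordinary least squares on the region selected by its pattern $p$. The sketch below is illustrative only (it is not \textsc{HiPaR}{}'s actual fitting code) and uses the cottage region of Table~\ref{table:example}:

```python
import numpy as np

# Fit price = a + b*rooms + c*surface on the region selected by
# property-type = "cottage", i.e., rows x1..x3 of the toy table.
region = np.array([   # rooms, surface, price (in k)
    [5, 120, 510],
    [3,  55, 410],
    [3,  50, 350],
], dtype=float)

X = np.column_stack([np.ones(len(region)), region[:, :2]])  # [1, rooms, surface]
y = region[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # local linear model of the rule
pred = X @ coef
print(pred)   # the local model reproduces the three prices exactly
```

With three points and three parameters the fit is exact here; on real data the local model would of course only approximate the region's prices.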
Indeed, our experiments show that existing pattern-aided regression methods have difficulties in providing satisfactory performance and interpretability simultaneously. On the one hand, methods such as RT or MT offer good prediction performance, but output many (long) rules: this makes them hard to read by a human user and thus less interpretable. On the other hand, CPXR outputs few simple rules (better interpretability), but its regression performance does not improve significantly over a simple global regression. The goal of this work is to reach a sweet spot where the produced set of hybrid rules is accurate and still simple enough to be grasped easily by a human user.
Finding such a good set of hybrid rules is hard, because the search space of possible conditions (the left-hand side of the rules) is huge. Methods such as regression trees (RT) tackle this complexity with a greedy approach that refines rules with the best condition at a given stage. A simple regression tree for our example dataset is shown in Figure \ref{fig:regression-tree}, and the division of the data it entails is illustrated in Figure \ref{fig:regression-tree-exploration}. Regions with a high goodness of fit lying between two partitions, e.g., the dashed region on the left of Figure \ref{fig:regression-tree-exploration}, cannot be found even if they have short descriptions (here $state = ``\mathit{excellent}"$). They may only be described imperfectly by two longer patterns ($ptype=``\mathit{cottage}" \land state \neq ``\mathit{v.good}"$ and $ptype \neq ``\mathit{cottage}" \land state \neq ``\mathit{good}"$), which is less interpretable. To avoid the shortcomings of a greedy exploration, CPXR~\cite{cpxr} proposes to enumerate the conditions of the rules using pattern mining techniques. More precisely, CPXR applies {\em discriminative pattern mining}~\cite{dong-discrim} to discover conditions describing subspaces of the data where a reference linear model yields the highest error, that is, data regions that are most likely to benefit from local regression models. Thanks to its exhaustive enumeration, such an approach can examine many alternative rules and, unlike RT, allows overlapping regions.
A limitation of CPXR lies in its disregard of the data points where the error is not maximal but still high in absolute terms. Moreover, the rules found by the enumeration phase are then filtered by a greedy post-processing step.
Our main contribution lies in a novel strategy to explore the search space of hybrid rules. Such a strategy is hierarchical, as depicted in Figure~\ref{fig:hipar-exploration}, and is designed to find few short rules that fit the data. This gives rise to our method called \textsc{HiPaR}{}, which comprises two contributions:
\begin{itemize}[leftmargin=*]
\item First, we design a hybrid rule enumeration algorithm that outputs short and high-quality candidate rules. This algorithm is based on the enumeration structure of the state-of-the-art closed itemset miner LCM~\cite{lcm}, which we augment with several heuristics focused on the accuracy and compactness of the produced rules;
\item Second, we frame the problem of selecting the best set of rules from any set of candidate rules as an Integer Linear Programming problem.
This allows for a modular and robust post-processing step to output a small set of high quality rules.
\end{itemize}
Our experiments show that \textsc{HiPaR}{} reaches an interesting performance/interpretability compromise, providing as much error reduction as the best interpretable approaches but with one order of magnitude fewer atomic elements (conditions) for the analyst to examine. Before detailing our approach and these experiments, we introduce relevant concepts and related work in the next two sections.
\begin{table}[t]
\centering
\caption{Toy example for the prediction of real estate prices based on the attributes of the property. The symbols
*, +, - denote high, medium, and low prices respectively.}
\begin{tabular}{>{\centering\arraybackslash}m{0.3cm}>{\centering\arraybackslash}m{2cm}>{\centering\arraybackslash}m{1.5cm}>{\centering\arraybackslash}m{0.8cm}>{\centering\arraybackslash}m{0.8cm}>{\centering\arraybackslash}m{1.2cm}}
\emph{id} & \emph{property-type} & \emph{state} & \emph{rooms} & \emph{surface} & \emph{price} \\
\toprule
{$x^1$} & {cottage} & very good &5 & 120 & 510k (*) \\
{$x^2$} & cottage & very good &3 & 55 & 410k (*) \\
{$x^3$} & cottage & excellent &3 & 50 & 350k (+) \\
{$x^4$} & apartment & excellent &5 & 85 & 320k (+) \\
{$x^5$} & apartment & good &4 & 52 & 140k (-) \\
{$x^6$} & apartment & good &3 & 45 & 125k (-) \\ \bottomrule
\end{tabular}
\label{table:example} \end{table}
\begin{figure*}\label{fig:regression-tree-exploration}
\label{fig:hipar-exploration}
\end{figure*}
\section{Preliminaries and Notation}
Pattern-aided regression methods assume tabular data with categorical and numerical attributes as in Table~\ref{table:example}.
We define the notions of datasets, attributes, and patterns more formally in the following. \subsection{Datasets}
A dataset $D = \{ x^1, \dots, x^n \} \subseteq V^{|A|}$ is a set of $|A|$-dimensional points or observations, where $A$ is a finite set of attributes and each component of $x^i$ is associated to an attribute $a \in A$ with domain $V_a$. We denote the value of attribute $a$ for point $x^i$ by $x^i_a$. For instance, $x^{1}_{\mathit{state}} = ``\mathit{very\text{ }good}"$ in Table~\ref{table:example}. From a statistical perspective, attributes are random variables, thus in this work the terms ``attribute'', ``feature'', and ``variable'' are used interchangeably. A \emph{categorical} (also symbolic) attribute holds elements on which partial and total orders are meaningless. Examples are zip codes or property types as in Table~\ref{table:example}.
A \emph{numerical} attribute, conversely, takes integer or real values and represents a measurable quantity such as a price or a temperature measure. Numerical attributes are the target of regression analysis.
\subsection{Patterns} \label{subsec:patterns} A pattern is a characterization of a dataset region (subset of points). An example is $p: \textit{property\text{-}type}=``\textit{cottage}" \land \textit{surface} \in (-\infty, 60]$ that describes the subset $\{x^2, x^3\}$ in Table~\ref{table:example}. In this work we focus on conjunctive patterns on non-negated conditions. These conditions take the form $a_i=v$ for categorical attributes or $a_j \in I$ for discretized numerical attributes, where $I$ is an interval such as $(-\infty, \alpha)$, $[\alpha, \beta]$, or $(\beta, \infty)$, and $\alpha, \beta \in \mathbb{R}$.
If $p$ is a pattern, we denote by $D_p$ its corresponding region on
dataset $D$, and by $s_D(p) = |D_p|$ its support. We also define its relative support as $\bar{s}_D(p) = \frac{s_D(p)}{|D|}$. For instance, if $D$ is our example dataset from Table~\ref{table:example}, $s_D(p) = 2$ and $\bar{s}_D(p) = \frac{2}{6}$. When the target dataset $D$ is implicit, we write $s(p)$ and $\bar{s}(p)$ for the sake of brevity. A pattern $p$ is \emph{frequent} if $s(p) \ge \theta$, that is, if its associated region consists of at least $\theta$ data points for a given threshold $\theta$. A pattern is \emph{closed} if it is the maximal characterization of a region, i.e., no longer pattern can describe the same region. As each region can be described by a single closed pattern, we define the closure operator $\mathbf{cl}(p)$ of a pattern $p$ so that $\mathbf{cl}$ returns $D_p$'s associated closed pattern. For instance, given the pattern $p : \mathit{state} = ``\mathit{good}"$ characterizing the region $\{ x^5, x^6 \}$ in Table~\ref{table:example}, $\mathbf{cl}(p)$ is $\mathit{state} = ``\mathit{good}" \land \mathit{property\text{-}type}=``\mathit{apartment}"$, because this is the maximal pattern that still describes $\{ x^5, x^6 \}$. Given two subsets $D_1$ and $D_2$ of $D$ and a threshold $\gamma$, $p$ is a \emph{contrast} or \emph{emerging} pattern if (i) $\bar{s}_{D_1}(p) > 0$ and (ii) $\frac{\bar{s}_{D_1}(p)}{\bar{s}_{D_2}(p)} \ge \gamma$ or $\bar{s}_{D_2}(p)=0$. Put differently, $p$ is a contrast pattern if it is at least $\gamma$ times (relatively) more frequent in $D_1$ than in $D_2$.
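To make these notions concrete, the following minimal Python sketch (our own illustrative helpers, not part of any implementation discussed here) computes support, relative support, and the contrast-pattern test for patterns represented as dictionaries of per-attribute predicates:

```python
# Illustrative helpers (hypothetical): a dataset is a list of dicts, and a
# pattern maps each constrained attribute to a predicate on its value.
def region(D, pattern):
    """Return the subset of D whose points satisfy every condition of the pattern."""
    return [x for x in D if all(pred(x[a]) for a, pred in pattern.items())]

def support(D, pattern):
    """s_D(p) = |D_p|."""
    return len(region(D, pattern))

def rel_support(D, pattern):
    """Relative support s_D(p) / |D|."""
    return support(D, pattern) / len(D)

def is_contrast(D1, D2, pattern, gamma):
    """Contrast/emerging pattern test: at least gamma times more frequent in D1."""
    s1, s2 = rel_support(D1, pattern), rel_support(D2, pattern)
    return s1 > 0 and (s2 == 0 or s1 / s2 >= gamma)
```

For example, the pattern $\textit{property-type}=``\textit{cottage}" \land \textit{surface} \in (-\infty, 60]$ becomes `{"ptype": lambda v: v == "cottage", "surface": lambda v: v <= 60}`.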
Last, we define the interclass variance~\cite{traversing-lattices} of a pattern $p$ in $D$ w.r.t. a target numerical variable $y\in A$
as: \[ \mathit{iv}_D(p)= |D_p|(\mu_D(y) - \mu_{D_p}(y))^2 + |D_{\neg p}|(\mu_D(y) - \mu_{D_{\neg p}}(y))^2 \] \noindent In the formula $\mu_{\circ}(y)$ denotes the average of variable $y$ in a given dataset, whereas $\neg p$ is the negation of pattern $p$ and $D_{\neg p} = D \setminus D_p$ is the complement of $D_p$. The interclass variance is a measure of exceptionality.
A large $\mathit{iv}$ suggests that the values of $y$ in $D_p$ constitute a region of low variance that lies far from the variable's global mean, and is therefore a good candidate to learn local models.
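The interclass variance can be computed directly from the formula above; the sketch below is a hypothetical helper that takes the target values and a Boolean mask indicating membership in $D_p$:

```python
import statistics

def interclass_variance(y_all, mask):
    """iv_D(p) = |D_p| (mu_D - mu_{D_p})^2 + |D_{~p}| (mu_D - mu_{D_{~p}})^2,
    where mask[i] is True iff point i belongs to the region D_p."""
    mu = statistics.fmean(y_all)                 # global mean mu_D(y)
    inside = [y for y, m in zip(y_all, mask) if m]
    outside = [y for y, m in zip(y_all, mask) if not m]
    iv = 0.0
    if inside:
        iv += len(inside) * (mu - statistics.fmean(inside)) ** 2
    if outside:
        iv += len(outside) * (mu - statistics.fmean(outside)) ** 2
    return iv
```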
\section{Related Work} Having introduced a common notation, we now revisit the state-of-the-art in pattern-aided regression.
Furthermore, we discuss two related paradigms for data analysis, namely subgroup discovery (SD) and exceptional model mining (EMM).
\subsection{Piecewise Regression (PR)} PR~\cite{piecewise-regression} is among the first approaches for pattern-aided regression. PR splits the domain of one of the numerical variables, called the \emph{splitting variable}, into segments such that the dataset regions defined by those segments exhibit a good linear fit for a \emph{target variable}. The splitting variable must be either ordinal\footnote{A special type of categorical attribute on which a total order on the values of its domain can be defined.} or numerical. The regions are constructed via bottom-up hierarchical agglomerative clustering: Starting with clusters of size $\theta$, this bottom-up approach greedily picks the segment with the smallest average of squared residuals, and fixes it for the next iteration while \emph{declustering} the remaining points. Fixed clusters can be extended by incorporating adjacent points and other adjacent fixed clusters. The process stops when the number of isolated points drops below a threshold or the goodness of fit does not improve with subsequent merging steps.
Other variants of PR, such as~\cite{flirti}, focus on detecting regions of the space where the target variable correlates with polynomials of degree $n\neq1$ on the input features. That includes regions where we can predict a constant value for the target ($n=0$), or regions where polynomial regression is required ($n>1$).
PR usually outperforms single linear models on data with a multimodal distribution; however, its limitations are manifold. Firstly, it can only split the dataset based on one attribute at a time. Secondly, it cannot characterize data regions in terms of arbitrary categorical attributes. Thirdly, its greedy strategy does not guarantee finding the best possible segmentation of the data~\cite{piecewise-regression}. PR models can be seen as sets of hybrid rules $z \in [v_1, v_2] \Rightarrow y = f(X)$, where the antecedent is an interval constraint on the splitting variable $z$.
\subsection{Tree-based Methods} A regression tree~\cite{regression-trees} (RT) is a decision tree such that its leaves predict a numerical variable. Like decision trees, RTs are constructed in a top-down fashion. At each step, the data is partitioned into two regions according to the condition that maximizes the intra-homogeneity of the resulting subsets w.r.t. the target variable (e.g., Figure~\ref{fig:regression-tree-exploration}). The conditions are defined on categorical and discretized numerical attributes. This process is repeated while the subsets are large enough and their goodness of fit is still improvable; otherwise the learner creates a leaf that predicts the average of the target variable in the associated data region.
Model trees (MT)~\cite{model-trees, model-trees-with-splitting-nodes} associate linear functions to the leaves of the tree.
We can mine hybrid rules from RT and MT if we enumerate every path from the root to a leaf (or regression node in~\cite{model-trees}) as depicted in Figure~\ref{fig:regression-tree}. Unlike piecewise regression, RT and MT do exploit categorical features. Yet, their construction obeys a greedy principle: Data is split according to the criterion that maximizes the goodness of fit at a particular stage, and steps cannot be undone. This makes RT and MT prone to overfitting when not properly parameterized. More accurate methods such as random forests (RF) reduce this risk by learning tree ensembles that model the \emph{whole picture}. Alas, RF models are not interpretable. Some approaches~\cite{interpretable-rf, extracting-rules-from-rf} can extract representative rules from RF at the expense of accuracy.
Our experiments in Section~\ref{sec:evaluation} confirm that RT and MT can perform too many splitting steps, yielding large and complex sets of rules, even though performance comparable to RF can be attained with fewer rules.
\begin{figure}
\caption{Regression tree learned to predict the price in Table~\ref{table:example}. Paths from the root to the leaves are hybrid rules.}
\label{fig:regression-tree}
\end{figure}
\subsection{Contrast Pattern-Aided Regression} \cite{cpxr} proposes CPXR, a method that mines hybrid rules on the regions of the input dataset where a global linear regression model performs poorly.
First, CPXR splits the dataset into two classes consisting of the data points where the global model yielded a large error (LE) and a small error (SE), respectively. Based on this partitioning, CPXR discretizes the numerical variables and mines contrast patterns~\cite{contrast-patterns} that characterize the difficult class LE. The algorithm then induces hybrid rules on the regions defined by those patterns. After an iterative selection process, the approach reports a small set of hybrid rules with low overlap and good error reduction w.r.t. the global regression model. This set also includes a \emph{default model} induced on those points not covered by any rule. Prediction for new cases in CPXR is performed by weighting the answers of all the rules that apply. The weights depend on the error reduction of the rules w.r.t. the global regression model.
Despite its error-reduction-driven selection of rules, CPXR's iterative search strategy is still greedy in nature.
Moreover, regions spanning across the classes LE and SE are disregarded, discretization is done once, and the search is restricted to the class of contrast patterns of the LE class (ignoring any error reduction in SE). While the latter decision keeps the search space under control, our experiments show that exploring the (larger) class of closed patterns allows for a significant gain in prediction accuracy with a reasonable penalty in runtime.
\subsection{Related Paradigms} \label{subsec:sd-emm} The problem of finding data regions with a high goodness of fit for a target variable is similar to the problem of subgroup discovery~\cite{sd} (SD). In its general formulation, SD reports subgroups --data regions in our jargon-- where the behavior of the target variable deviates notably from the norm. There exist plenty of SD approaches~\cite{sd-survey} tailored for different subgroup description languages, different types of variables and different notions of ``exceptionality''. For example,~\cite{sd} studies discretization techniques to deal with numerical attributes in subgroup descriptions, and shows the application of SD in diabetes diagnosis and fraud detection, i.e., to find the characterizations of subgroups with high incidence of diabetes and high fraud rate in mail order data.
A more general framework, called Exceptional Model Mining (EMM)~\cite{emm,emm-cook-distance}, extends the notion of exceptionality to arbitrary sets of target variables. In this spirit, EMM can find exceptional groups where the joint distribution of the target variables in a subgroup differs greatly from the global joint distribution. Finding exceptionally well-correlated subgroups in data can be framed as an SD or EMM task. Nonetheless, these paradigms are concerned with reporting subgroups that are \emph{individually exceptional}. For this reason, EMM and SD methods are usually greedy and resort to strategies such as beam search to find a small set of exceptional subgroups. Conversely, we search for hybrid rules that are \emph{jointly exceptional}, that is, they (i) explain the whole dataset, and (ii) they jointly achieve good performance. While methods such as~\cite{sd} propose an average exceptionality score for sets of subgroups, such a simple score does not capture the requested synergy between sets of hybrid rules. Indeed, our experimental section shows that an SD-like selection strategy for hybrid rules yields lower performance gains than the strategy proposed in this paper.
\section{\textsc{HiPaR}{}} \label{sec:alg} In this section we describe our pattern-aided regression method called \textsc{HiPaR}{}, which is summarized in Algorithm~\ref{alg:hipar}. \textsc{HiPaR}{} mines hybrid rules of the form $p \Rightarrow y = f_p(A'_{\mathit{num}})$ for a pattern $p$ characterizing a region $D_p \subseteq D$, and a target variable $y$ on a dataset $D$ with attributes $A = A_{\mathit{num}} \cup A_{\mathit{cat}}$. The sets $A_{\mathit{num}}$ and $A_{\mathit{cat}}$ define numerical and categorical attributes respectively, and $A'_{\mathit{num}} = A_{\mathit{num}} \setminus \{y\}$.
Patterns and regions define a containment hierarchy that guides \textsc{HiPaR}{}'s search. This hierarchy is rooted at the empty pattern $\top$ that represents the entire dataset.
After learning a global linear model of the form $\top \Rightarrow y = f_{\top}(A'_{\textit{num}})$ (also called the \emph{default model}), \textsc{HiPaR}{} operates in three stages: (i) initialization, (ii) candidates enumeration, and (iii) rule selection. We elaborate on these phases in the following.
\begin{algorithm}
\caption{\textsc{HiPaR}{}}
\label{alg:hipar}
\KwIn{a dataset: $D$ with attributes $A_{\textit{cat}}$, $A_{\textit{num}}$ \\
\hspace{25px} target variable: $y \in A_{\textit{num}}$ with $A'_{\textit{num}} = A_{\textit{num}} \setminus \{y\}$ \\
\hspace{25px} minimum support threshold: $\theta$
}
\KwOut{a set $R$ of hybrid rules $p \Rightarrow y = f_p(A'_{\textit{num}})$}
Learn default hybrid rule $r_{\top} : \top \Rightarrow y = f_{\top}(A'_{\textit{num}})$ from $D$ \\
$C := \textit{hipar-init}(D, y, \theta)$ \\
$\mathcal{R} := \textit{hipar-candidates-enum}(D, r_{\top}, C, \theta)$ \\
\Return $\textit{hipar-rules-selection}(\mathcal{R} \cup \{ r_{\top} \} )$ \end{algorithm}
\begin{algorithm}
\caption{hipar-init}
\label{alg:hipar-init}
\KwIn{a dataset: $D$~with attributes $A_{\textit{cat}}$, $A_{\textit{num}}$ \\
\hspace{25px} target variable: $y \in A_{\textit{num}}$ \\
\hspace{25px} minimum support threshold: $\theta$
}
\KwOut{a set of frequent patterns of size 1}
$C_{\textit{cat}} := \bigcup_{a \in A_{\textit{cat}}}{\{ c : a=v \;|\; s_D(c) \ge \theta \}} $ \\
$C_{\textit{num}} := \emptyset$ \\
\For{$a \in A_{\textit{num}}$}{
$C_{\textit{num}} := C_{\textit{num}} \cup \{c : (a \in I) \in \textit{discr}(a, D, y)\;|\; s_D(c) \ge \theta \}$ \\
}
\Return $C_{\textit{cat}} \cup C_{\textit{num}}$ \end{algorithm}
\begin{algorithm}
\caption{hipar-candidates-enum}
\label{alg:hipar-candidates-enum}
\KwIn{a dataset: $D$~with attributes $A_{\textit{cat}}$, $A_{\textit{num}}$ \\
\hspace{25px} parent hybrid rule: $r_p: p \Rightarrow y = f_p(A'_{\textit{num}})$ \\
\hspace{25px} patterns of size 1: $C$ \\
\hspace{25px} minimum support threshold: $\theta$}
\KwOut{a set $\mathcal{R}$ of candidate rules $p \Rightarrow y = f_p(A'_{\textit{num}})$}
$\mathcal{R} := \emptyset$ \\
$C' := C$ \\
$C_{n} := \{c \in C \;|\; c : a \in I\; \land a \in A_{\textit{num}} \}$ \\
$\nu := \text{$k$-th percentile of } \mathit{iv}_{D} \text{ in } C_{n}$ \\
\For{$c' \in C'$}{
$\hat{p} := p \land c'$ \\
$C' := C' \setminus \{c'\}$ \\
\If{$s_D(\hat{p}) \ge \theta \land \mathit{iv}_{D}(\hat{p}) > \nu $}{
$p' = \mathbf{cl}(\hat{p})$ \\
$C' := C' \setminus p'$ \\
\If{$p \text{ is the left-most parent of } p'$}{
Learn $r_{p'} : p' \Rightarrow y = f_{p'}(A'_{\textit{num}})$ on $D_{p'}$ \\
\If{$m(r_{p'}) < m(r_{p^*})\; \forall p^* : p^* \;\text{is parent of}\; p'$}{
$\mathcal{R} := \mathcal{R} \cup \{ r_{p'} \}$ \\
$C'_{n} := \emptyset$ \\
\For{$a \in A'_{\textit{num}} \setminus \textit{attrs}(p')$}{
$C'_{n} := \{c \in \textit{discr}(a, D_{p'}, y)\;|\; s_D(c) \ge \theta \} \cup C'_{n}$ \\
}
$\mathcal{R} := \mathcal{R} \cup \textit{hipar-candidates-enum}(D, r_{p'}, (C' \setminus C_n) \cup C'_n, \theta)$ \\
}
}
}
}
\Return $\mathcal{R}$ \end{algorithm}
\subsection{Initialization.} \label{subsec:initialization}
The initialization phase (line 2 in Algorithm~\ref{alg:hipar}) computes a set of frequent patterns that bootstrap \textsc{HiPaR}{}'s hierarchical search. We describe the initialization routine \emph{hipar-init} in Algorithm~\ref{alg:hipar-init}. The procedure computes patterns of the form $a=v$ for categorical attributes (line 1), and $a \in I$ for numerical attributes (lines 2-4), where $I$ is an interval of the form $(-\infty, \alpha)$, $[\alpha, \beta]$, or $(\beta, \infty)$ (Section~\ref{subsec:patterns}). The intervals are calculated by discretizing the numerical attributes.
The discretization is inspired by CPXR~\cite{cpxr}: we first split the target variable into two classes, namely large value (LV) and small value (SV), and then run a routine~\cite{mdlp} that segments the domain of each numerical attribute so that the points in each segment are as pure as possible w.r.t. LV and SV. This way, \textsc{HiPaR}{} minimizes the variance of $y$ within the points that match a condition. The major difference with the discretization proposed in~\cite{cpxr} is that the classes LV and SV are defined w.r.t. the actual value of the target variable and not based on the residuals of a global regression model.
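The following toy sketch illustrates the idea behind this discretization under simplifying assumptions: the target is binarized at its (upper) median into SV/LV, and a single purity-optimal cut point is searched for one numerical attribute. \textsc{HiPaR}{} itself uses the multi-interval MDLP discretizer of~\cite{mdlp}; this single-cut version is only a stand-in:

```python
# Toy stand-in for the initialization discretization (not the MDLP routine):
# binarize the target at its median into SV (False) / LV (True), then pick the
# cut point on one numeric attribute whose two sides are purest w.r.t. SV/LV.
def best_cut(values, target):
    thr = sorted(target)[len(target) // 2]       # upper median of the target
    labels = [y >= thr for y in target]          # True = LV, False = SV

    def impurity(lbls):                          # Gini impurity of a segment
        if not lbls:
            return 0.0
        p = sum(lbls) / len(lbls)
        return 2 * p * (1 - p)

    best, best_score = None, float("inf")
    for cut in sorted(set(values))[:-1]:         # candidate cut points
        left = [l for v, l in zip(values, labels) if v <= cut]
        right = [l for v, l in zip(values, labels) if v > cut]
        score = (len(left) * impurity(left) + len(right) * impurity(right)) / len(labels)
        if score < best_score:
            best, best_score = cut, score
    return best
```

The chosen cut yields conditions $a \in (-\infty, \textit{cut}]$ and $a \in (\textit{cut}, \infty)$ whose points are homogeneous with respect to the LV/SV split of $y$.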
We remark that \emph{hipar-init} enforces the bootstrapping patterns to be frequent, that is, their support must be higher than a user-defined threshold $\theta$ in the dataset (line 4 in Algorithm~\ref{alg:hipar-init})\footnote{When the intervals of an attribute $a$ are very imbalanced, that is, only one of the segments $c$ is large enough (e.g., $s(c) \ge \theta$ for a small $\theta$), it may be convenient to disregard the attribute for discretization.}.
\subsection{Candidates Enumeration} \label{subsec:candidate-enumeration} This stage uses the patterns computed in the initialization step to explore the different regions of the dataset and learn accurate local candidate hybrid rules. These regions are characterized by closed patterns on categorical and discretized numerical variables. Our preference for closed patterns rests on two reasons. First, contrary to frequent and free patterns, closed patterns are not redundant. A region is characterized by a unique closed pattern, whereas there may be a myriad of frequent or free patterns describing the exact same region. This property prevents us from visiting the same region multiple times when traversing the search space. Second, closed patterns are expressive as they compile the maximal set of attributes that portray a region. This expressivity can be particularly useful in specialized domains when experts need to inspect the local regression models and identify \emph{all} the attribute values that correlate with the target variable.
Inspired by methods for closed itemset mining~\cite{lcm}, the routine \emph{hipar-candidates-enum} in Algorithm~\ref{alg:hipar-candidates-enum} takes as input a hybrid rule $p \Rightarrow y = f_p(A'_{\textit{num}})$~learned on a region characterized by $p$, and returns a set of hybrid rules defined on closed descendants of $p$. Those descendants are visited in a depth-first hierarchical manner as depicted in Figure~\ref{fig:searchtree}. Lines 1 and 2 in Algorithm~\ref{alg:hipar-candidates-enum} initialize the procedure by generating a working copy $C'$ of the set of conditions used to refine $p$ (loop in lines 5-18). At each iteration in line 5, Algorithm~\ref{alg:hipar-candidates-enum} extends the parent pattern $p$ with a condition $c' \in C'$ and removes $c'$ from the working copy (line 7). After this refinement step, the routine proceeds in multiple phases detailed below.
\subsubsection{Pruning} In line 8, Algorithm~\ref{alg:hipar-candidates-enum} enforces thresholds on support and interclass variance for the newly refined pattern $\hat{p} = p \land c'$. The support threshold $\theta$ serves two purposes. First, it prevents us from learning rules on extremely small subsets and incurring overfitting -- a problem frequently observed in unbounded regression trees. Second, since \textsc{HiPaR}{}'s search space is exponential in the number of conditions, a threshold on support enables pruning and hence lowers runtime. While a reasonable support threshold can mitigate the pattern explosion in low-dimensional datasets, the on-the-fly discretization of the numerical attributes carried out in lines 16-17 may contribute a large number of new frequent conditions. On these grounds, \textsc{HiPaR}{} applies a second level of pruning by means of a threshold $\nu$ on the interclass variance $\mathit{iv}$ as proposed in~\cite{traversing-lattices}. We highlight the heuristic nature of using $\mathit{iv}$ for pruning, since this metric lacks the anti-monotonicity of support. That means that a region with an $\mathit{iv}$ below the threshold can contain sub-regions with high $\mathit{iv}$ that will not be explored. Having said that, thresholding on interclass variance proves effective at keeping the size of the search space under control with no impact on prediction performance. We set $\nu$ empirically to the 85-th percentile of the interclass variance of the patterns derived from the discretization of the numerical features (lines 3 and 4). Lower percentiles did not result in better performance in our experimental datasets.
\subsubsection{Closure Computation} If a refinement $\hat{p} = p \land c'$ passes the test in line 8, \textsc{HiPaR}{} computes its corresponding closed pattern $p'$ in line 9. Since the closure operator may add further conditions besides $c'$, line 10 ensures that those conditions are not considered for future refinements\footnote{Our abuse of notation treats $p$ as a set of conditions.}. Next, the check in line 11 guarantees that no path in the search space is explored more than once. This is achieved by verifying whether pattern $p$ is the leftmost parent of $p'$. In Figure~\ref{fig:hipar-exploration}, this check ensures that the sub-tree rooted at the node $\textit{ptype}=``\textit{cottage}" \land \textit{state}=``\textit{excellent}"$ is explored only once, in this case from its leftmost parent $\textit{ptype}=``\textit{cottage}"$.
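The closure operator itself is straightforward to sketch for categorical conditions: the closed pattern of a region is the set of attribute-value conditions shared by all of its points. The helper below is our own hypothetical illustration and ignores interval conditions:

```python
# Illustrative closure operator cl(p) for categorical conditions only:
# given pattern p as a dict of attribute -> value, compute its region D_p
# and return the maximal set of conditions shared by every point in D_p.
def closure(D, pattern):
    Dp = [x for x in D if all(x[a] == v for a, v in pattern.items())]
    closed = dict(Dp[0])                      # start from the first point
    for x in Dp[1:]:
        closed = {a: v for a, v in closed.items() if x.get(a) == v}
    return closed
```

On the running example, the closure of $\mathit{state}=``\mathit{good}"$ also pins down $\mathit{property\text{-}type}=``\mathit{apartment}"$, since all matching points share that value.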
\begin{figure}
\caption{\textsc{HiPaR}{} hierarchical region exploration tree.}
\label{fig:searchtree}
\end{figure}
\subsubsection{Learning a Regression Model}
In line 12, Algorithm~\ref{alg:hipar-candidates-enum} learns a hybrid rule $p' \Rightarrow y = f_{p'}(A'_{\mathit{num}})$ from the data points that match $p'$. Before being accepted as a candidate (line 14), this new rule must pass a test in the spirit of Occam's razor (line 13): among multiple hypotheses of the same prediction power, the simplest should be preferred. This means that if a hybrid rule defined on
region $D_{p'}$ does not predict better than the hybrid rules defined in the super-regions of $D_{p'}$,
then the rule is redundant, because we can obtain good predictions with more general and simpler models.
In this line of thought, \textsc{HiPaR}{} adds the newly created hybrid rule as a candidate if
it performs better than the hybrid rules induced on the immediate ancestors of $p'$ in the hierarchy.
Performance is defined in terms of an error metric $m$ (e.g., RMSE).
This requirement makes our search diverge from a pure DFS as
shown in Figure~\ref{fig:searchtree}.
For instance, the performance
test for the node \emph{ptype}=``\emph{cottage}'' $\land$ \emph{state}=``\emph{excellent}'' requires us
to visit its parent \emph{state}=``\emph{excellent}'' earlier than in a standard DFS.
\subsubsection{DFS Exploration} The final stage of the routine \emph{hipar-candidates-enum} discretizes the numerical variables not yet discretized in $p'$ (lines 16-17)\footnote{$\textit{attrs}(p)$ returns the set of attributes present in a pattern $p$.} and uses those conditions to explore the descendants of $p'$ recursively (line 18). We remark that this recursive step could be carried out regardless of whether $p'$ passed the test in line 13: error metrics are generally not anti-monotonic, thus the region of a rejected candidate may still contain sub-regions that yield more accurate hybrid rules. Those regions, however, are more numerous, have a lower support, and are characterized by longer patterns. Given \textsc{HiPaR}{}'s double objective of accuracy and interpretability, recursive steps become less appealing as we descend in the hierarchy. This early-stopping heuristic had no impact on prediction accuracy in our experiments.
\subsection{Rule Selection} \label{subsec:ruleSelection} The discovered set $\mathcal{R}$ of candidate rules $r_p : p \Rightarrow y = f_p(A'_{\mathit{num}})$, being generated from a combinatorial enumeration process, is likely to be too large for presentation to a human user. Thus, \textsc{HiPaR}{} carries out a selection process (line 4 in Algorithm~\ref{alg:hipar}) that picks a
subset of rules of minimal size and minimal joint error that covers as many observations of $D$~as possible.
We formulate these multi-objective desiderata as an integer linear program (ILP):
\begin{equation}\label{eq:formulation}
\begin{aligned} & \text{min} \; \sum_{r_p \in \mathcal{R}}{-\alpha_p \cdot z_p} + \; \sum_{r_p, r_q \in \mathcal{R}, p \neq q}{(\omega \cdot \mathcal{J}(p, q) \cdot (\alpha_p + \alpha_q))\cdot z_{pq}} \;\;\; \\ & \text{s.t.} \;\;\; \;\;\; \sum_{r_p \in \mathcal{R}} z_p \ge 1 \;\;\;\\
& \;\;\; \;\;\; \;\;\; \; \forall r_p, r_q \in \mathcal{R}, p \neq q : z_p + z_q - 2z_{pq} \le 1 \;\;\; \\ & \;\;\; \;\;\; \;\;\; \; \forall r_p, r_q \in \mathcal{R}, p \neq q : z_p, z_{pq} \in \{0, 1\} \;\;\; \end{aligned}
\end{equation} \noindent Each single rule $r_p \in \mathcal{R}$ is associated with a variable $z_p$ that takes either 1 or 0 depending on whether the rule is selected or not. The first constraint guarantees a non-empty set of rules. The term $\alpha_p =\bar{s}(p)^\sigma \times \bar{e}(r_p)^{-1}$ is the support-to-error trade-off of the rule. Its terms $\bar{e}(r_p) \in [0,1]$ and $\bar{s}(p) \in [0, 1]$ correspond to the normalized error and normalized support of rule $r_p$, calculated as follows:
\begin{minipage}{0.45\linewidth} \begin{equation} \label{eq:normalized-error} \bar{e}(r_p) = \frac{m(r_{p})}{\sum_{r_{p'} \in \mathcal{R}}{m(r_{p'})}} \end{equation} \end{minipage}
\begin{minipage}{0.45\linewidth} \begin{equation} \bar{s}(p) = \frac{s(p)}{\sum_{r_{p'} \in \mathcal{R}}{s(p')}} \end{equation} \end{minipage}
\noindent In plain English, $\bar{e}(r_p)$ is the error of rule $r_p$ according to an error metric $m$ divided by the sum of the errors of all rules in the set of candidates. The normalized support $\bar{s}(p)$ is calculated in the same spirit.
It follows that the objective function rewards rules with small error defined on regions of large support. This latter property accounts for our maximal coverage desideratum and protects us from overfitting. The support bias $\sigma \in \mathbb{R}_{\ge 0}$ is a meta-parameter that controls the importance of support in rule selection. When $\sigma=0$ the solver disregards support, whereas larger values define different trade-offs between accuracy and support. Due to our hierarchical exploration, the second term
in the objective function penalizes sets with rules defined on overlapping regions. The overlap is measured via the Jaccard coefficient on the regions of each pair of rules, i.e., $$\mathcal{J}(p, q) = \frac{|D_p \cap D_q|}{|D_{p} \cup D_{q}|}.$$
\noindent If two rules $r_p$, $r_q$ are selected, i.e., $z_p$ and $z_q$ are set to 1, the second family of constraints forces the variable $z_{pq}$ to be 1, incurring in the objective function a penalty proportional to $\omega \times (\alpha_p + \alpha_q)$ times the degree of overlap between $D_p$ and $D_q$. The overlap bias $\omega \in \mathbb{R}_{\ge 0}$ controls the magnitude of the penalty. Values closer to 0 will make the solver tolerate overlaps and choose more rules.
The solution to Equation~\ref{eq:formulation} is a set of accurate hybrid rules $R \subseteq \mathcal{R}$ that can be used as a prediction model for the target variable $y$.
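Since ILP solvers are interchangeable here, the brute-force sketch below (a hypothetical stand-in that enumerates subsets, feasible only for a handful of candidates) makes the objective of Equation~\ref{eq:formulation} concrete:

```python
from itertools import combinations

# Brute-force stand-in for the ILP: given the trade-offs alpha_p and the
# pairwise Jaccard overlaps, enumerate all non-empty rule subsets and
# minimize  -sum(alpha_p) + sum_{pairs} omega * J(p,q) * (alpha_p + alpha_q).
# A real implementation would hand the same objective to an ILP solver.
def select_rules(alphas, jaccard, omega=1.0):
    n = len(alphas)
    best_set, best_obj = None, float("inf")
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            obj = -sum(alphas[i] for i in subset)
            for i, j in combinations(subset, 2):
                obj += omega * jaccard[i][j] * (alphas[i] + alphas[j])
            if obj < best_obj:
                best_set, best_obj = set(subset), obj
    return best_set
```

The sketch exhibits the intended behavior: two fully overlapping, equally good rules are never both selected, while a disjoint rule is always added.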
\subsection{Prediction with \textsc{HiPaR}{}} \label{subsec:prediction} To use the rules reported by \textsc{HiPaR}{} as a prediction model, we have to define a procedure to deal with overlapping rules as well as with orphan data points. A rule $r : p \Rightarrow y = f_p(A'_{\mathit{num}})$ \emph{is relevant to} or \emph{covers} a seen or unseen data point $\hat{x}$ if the condition defined by the pattern $p$ evaluates to true on $\hat{x}$. If $\hat{x}$ is not covered by any hybrid rule, \textsc{HiPaR}{} uses the default regression model $r_{\top}$ to produce a prediction. Otherwise, \textsc{HiPaR}{} returns a weighted sum of the predictions of all relevant rules of $\hat{x}$ (as done in~\cite{cpxr}). The weight $\alpha_{p, \hat{x}}$ associated to a rule $r_p$ when predicting $\hat{x}$ is calculated as: \[ \alpha_{p, \hat{x}} = \frac{\bar{e}(r_p)^{-1}}{\sum_{r_{p'} \in \Phi(\hat{x})}{\bar{e}(r_{p'})^{-1}}} \] $\Phi(\hat{x})$ denotes the set of rules that cover $\hat{x}$, and $\bar{e}(r_p)$ is the rule's normalized error in the training set (Equation~\ref{eq:normalized-error}).
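Assuming rules are given as (coverage test, local model, normalized error) triples, the prediction procedure can be sketched as follows (a hypothetical interface, not \textsc{HiPaR}{}'s actual API):

```python
# Sketch of the prediction step: weight each covering rule's prediction by
# the inverse of its normalized training error; fall back to the default
# model when no rule covers the point.
def predict(x, rules, default_model):
    """rules: list of (matches_fn, predict_fn, normalized_error) triples."""
    covering = [(pred, err) for match, pred, err in rules if match(x)]
    if not covering:
        return default_model(x)
    z = sum(1.0 / err for _, err in covering)          # normalization constant
    return sum((1.0 / err) / z * pred(x) for pred, err in covering)
```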
\section{Evaluation} \label{sec:evaluation} We evaluate \textsc{HiPaR}{} on the dimensions of prediction accuracy, interpretability, and runtime through three rounds of experiments. In the first round (Section~\ref{subsec:impact-parameters}), we measure the impact of \textsc{HiPaR}{}'s parameters on our evaluation aspects. The second round compares \textsc{HiPaR}{} with state-of-the-art regression methods (Section~\ref{subsec:comparison-state-of-the-art}). In a third round, we carry out an anecdotal evaluation by showing and analyzing some of the rules mined by \textsc{HiPaR}{} on well-studied use cases (Section~\ref{subsec:anecdotal_evaluation}). Section~\ref{subsec:experimental-setup} provides a preamble by describing our experimental setup.
\subsection{Experimental Setup} \label{subsec:experimental-setup}
\subsubsection{\textsc{HiPaR}{}'s implementation.} \label{subsec:implementation} We implemented \textsc{HiPaR}{} in Python 3 with scikit-learn\footnote{\url{http://scikit-learn.org}}. In addition to the parameters described in Algorithm~\ref{alg:hipar}, our implementation accepts as input a support bias $\sigma$, an overlap bias $\omega$ (with default values $\sigma=1$ and $\omega=1$), an error metric $m$, and a type of regression model. We shed light on how to tune $\sigma$ and $\omega$ in Section~\ref{subsec:impact-parameters}. We evaluate \textsc{HiPaR}{} with two error metrics, namely the root mean square error (RMSE) and the median absolute error (MeAE). Moreover, we test \textsc{HiPaR}{} with two methods for sparse linear regression, namely OMP~\cite{omp} and LASSO~\cite{lasso}. Sparse linear models optimize a regularized objective function that instructs the regressor to use as few non-zero coefficients as possible. This choice makes linear functions more legible and conforms to our interpretability requirement. Since there is no clear winner between OMP and LASSO, we configured \textsc{HiPaR}{} to learn, for each pattern $p$, hybrid rules with both methods and keep the rule with the lowest error in a test set of 20\% the size of $D_p$. In regards to the discretization of the numerical attributes (Section~\ref{subsec:initialization}), \textsc{HiPaR}{} uses the MDLP algorithm~\cite{mdlp}. This method resorts to the principle of minimum description length (MDL) to obtain simple multi-interval discretizations of the numerical variables. The source code of \textsc{HiPaR}{} is available at \url{http://gitlab.inria.fr/lgalarra/hipar}.
\subsubsection{Opponents.} \label{subsec:opponents} We compare \textsc{HiPaR}{} to multiple regression methods comprising: \begin{itemize}[leftmargin=*]
\item Three pattern-aided regression methods, namely CPXR~\cite{cpxr}, regression trees (RT)~\cite{regression-trees},
and model trees (MT)~\cite{model-trees}.
\item Three accurate black-box methods: random forests (RF)~\cite{random-forests}, gradient boosting trees (GBT)~\cite{boosting-trees},
and rule fit (Rfit)~\cite{rulefit}.
\item \textsc{HiPaR}{} when all rules output by the enumeration stage are selected ($\omega=0.0$, called $\textsc{HiPaR}_f$), and
\textsc{HiPaR}{} with a rule selection in the spirit of subgroup discovery: the top $q$ rules with the best
support-to-error trade-off are reported.
The parameter $q$ is set to the average number of rules output by \textsc{HiPaR}{} in cross-validation. We denote
this opponent by $\textsc{HiPaR}_{sd}$.
\item Two hybrid methods resulting from the combination of \textsc{HiPaR}{}'s enumeration phase
with Rfit, and Rfit's rule generation with \textsc{HiPaR}{}'s rule selection. We denote these methods by
\textsc{HiPaR}{}+Rfit, and Rfit+\textsc{HiPaR}{} respectively. These competitors are designed to evaluate the two phases of \textsc{HiPaR}{} in isolation. \end{itemize}
\noindent The opponents that can work with any linear regression method, namely, MT, CPXR, and $\textsc{HiPaR}_{sd}$, are reported with the best performing method (either LASSO or OMP). Since there is no available implementation of CPXR, we implemented the algorithm in Python 3 with scikit-learn (the implementation is provided with \textsc{HiPaR}{}'s source code). We use the scikit-learn implementation of RT, whereas for MT we use the implementation available at \url{https://is.gd/Gk9Y20} (based on CART).
By default, RT and MT do not impose constraints on the size of trees. Hence, they can yield large numbers of complex hybrid rules that are hardly interpretable. For this reason we test compact variants of RT and MT obtained by allowing at most $q+1$ leaves in the trees. We set $q$ equal to the number of rules found by \textsc{HiPaR}{} in cross-validation. This way, we can compare the accuracy of \textsc{HiPaR}{} and tree-based methods at a similar level of interpretability. We denote these two settings by RT$_H$ and MT$_H$ respectively. For GBT and RF we use the implementations available in scikit-learn, whereas for Rfit we use the source code provided by the authors\footnote{\url{https://github.com/christophM/rulefit}}. All black-box methods are ensemble methods on regression trees, that is, they rely on the answers of multiple trees (called \emph{estimators}) to compute a final prediction.
The main hyper-parameters -- that are not fixed by the requirements of the experimental setup -- are tuned using \emph{hyperopt}\footnote{\url{https://github.com/hyperopt/hyperopt}} for all competitors. That includes, for instance, the support threshold for CPXR and RT, or the maximal tree depth for RT, RF, and Rfit. All the experiments were run on a computer with a CPU Intel Core i7-6600U (@2.60GHz), 16GB of RAM, and Fedora 26 as operating system.
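The tuning loop can be sketched as follows. For brevity we use plain random search over a toy objective; the actual experiments use hyperopt's TPE sampler and the cross-validated error, and the search space below is purely hypothetical:

```python
import random

def random_search(objective, space, n_trials=500, seed=0):
    """Minimise `objective` over a dict of (low, high) ranges by uniform sampling."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Toy stand-in for the cross-validated error, minimal at a support threshold of 0.15.
def toy_cv_error(p):
    return (p["min_support"] - 0.15) ** 2

space = {"min_support": (0.01, 1.0)}
params, loss = random_search(toy_cv_error, space)
```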
\subsubsection{Datasets.} We test \textsc{HiPaR}{} and the competitors in 7 out of the 50 datasets used to evaluate CPXR~\cite{cpxr}. Neither the authors of CPXR nor the original collectors of the datasets~\cite{iie} could provide us with the data, thus we ourselves collected the 7 datasets that are publicly available at the UCI repository\footnote{\url{http://archive.ics.uci.edu/ml/index.php}}: \emph{abalone}, \emph{cpu}, \emph{houses}, \emph{mpg2001}, \emph{servo}, \emph{strikes}, and \emph{yatch}. We also downloaded 8 additional datasets from Kaggle\footnote{\url{http://kaggle.com}}, namely \emph{cb\_nanotubes}, \emph{fuel\_consumption}, \emph{healthcare} (we used a sample due to its large size), \emph{optical}, \emph{wine}, \emph{concrete}, \emph{beer\_consumption} and \emph{admission}. These datasets match the keyword ``regression'' in Kaggle's search engine (as of 2019) and define meaningful regression tasks.
In addition, our anecdotal evaluation in Section~\ref{subsec:anecdotal_evaluation} relies on the results presented in~\cite{emm-cook-distance} on the datasets \emph{giffen} and \emph{wine2}. Table~\ref{table:datasets} provides details about the different datasets.
The datasets and their descriptions are available for download with \textsc{HiPaR}{}'s source code.
\begin{table}
\centering
\caption{Experimental datasets.}
\begin{tabular}{>{\centering\arraybackslash}m{2.5cm}>{\centering\arraybackslash}m{1.2cm}>{\centering\arraybackslash}p{1.6cm}>{\centering\arraybackslash}m{1.8cm}}
\emph{Dataset} & \emph{\# obs.} & \emph{\# cat. attrs} & \emph{\# num. attrs} \\
\toprule
abalone & 4177 & 1 & 8 \\
admission & 500 & 2 & 7 \\
beer\_consumption & 365 & 2 & 5 \\
cb\_nanotubes & 10722 & 0 & 8 \\
concrete & 1030 & 0 & 9 \\
cpu & 209 & 2 & 5 \\
fuel\_consumption & 389 & 5 & 5 \\
giffen & 6668 & 6 & 47 \\
houses & 6880 & 0 & 9 \\
mpg2001 & 852 & 7 & 10 \\
servo & 167 & 2 & 3 \\
strikes & 625 & 1 & 6 \\
healthcare & 518 & 6 & 192 \\
optical & 641 & 3 & 10 \\
wine & 6498 & 1 & 12 \\
wine2 & 9600 & 6 & 5 \\
yatch & 308 & 0 & 7 \\ \bottomrule
\end{tabular}
\label{table:datasets}
\end{table}
\subsubsection{Metrics.} We evaluate the prediction accuracy of the different approaches in terms of the root mean square error (RMSE) and the median absolute error (MeAE). We report the error reduction w.r.t. a baseline non-regularized linear model $\mathcal{B}$ on the entire dataset. The reduction $\rho$ of a regression model $\mathcal{M}$ for an error metric $m$ is calculated as follows: \[
\rho = \frac{m(\mathcal{B}) - m(\mathcal{M})}{m(\mathcal{B})} \times 100
\]
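For concreteness, the two metrics and the reduction $\rho$ can be computed as below; the predictions are toy values, not taken from the experiments:

```python
import math
import statistics

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def meae(y_true, y_pred):
    """Median absolute error."""
    return statistics.median(abs(t - p) for t, p in zip(y_true, y_pred))

def reduction(m_baseline, m_model):
    """Error reduction rho (in %) of a model w.r.t. the baseline B."""
    return (m_baseline - m_model) / m_baseline * 100.0

y      = [3.0, 5.0, 7.0, 9.0]   # ground truth
y_base = [4.0, 4.0, 8.0, 8.0]   # baseline linear model
y_mod  = [3.5, 5.0, 7.0, 8.5]   # pattern-aided model

rho_rmse = reduction(rmse(y, y_base), rmse(y, y_mod))  # positive = improvement
```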
Since our goal is to mine sets of human-readable rules, we also evaluate \textsc{HiPaR}{}'s rules in terms of interpretability. We remark, however, that this notion is subjective and may depend on factors such as the user's background.
Nevertheless, it is widely accepted that analyzing 100 rules with 20 conditions each is more challenging than grasping the information in 5 rules with 3 conditions each. In this line of thought, and due to the diversity of domains of our datasets, we use complexity as a proxy for interpretability. Hence, we conduct a quantitative analysis based on the number of ``elements'' in a model. An element is either a condition on a categorical attribute or a numerical variable with a non-zero coefficient in a regression function. For tree-based models, we count each non-leaf node as an element, whereas a leaf contributes multiple elements: one per variable present in the associated regression function. For example, the regression tree in Figure~\ref{fig:regression-tree} consists of 7 elements.
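A possible computation of this complexity measure for rule-based models is sketched below; the encoding of a rule as a (conditions, coefficient map) pair and the example rules are hypothetical:

```python
def count_elements(rules):
    """One element per categorical condition, plus one per numerical
    variable with a non-zero coefficient in the rule's linear part."""
    total = 0
    for conditions, coefficients in rules:
        total += len(conditions)
        total += sum(1 for c in coefficients.values() if c != 0.0)
    return total

rules = [
    ((), {"score": 2.39, "age": 5.08, "cases": -0.0002}),                    # default rule: 3 elements
    (("variety=non-varietal",), {"score": 4.2, "age": 7.97, "cases": 0.0}),  # 1 + 2 elements
]
```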
\subsection{Impact of Parameters} \label{subsec:impact-parameters}
\subsubsection{Support Threshold.} \label{subsec:impact-of-support} The minimum support threshold $\theta$ controls the exhaustivity of \textsc{HiPaR}{}'s candidate enumeration (Alg.~\ref{alg:hipar-candidates-enum}). Lower values make \textsc{HiPaR}{} report more rules defined on very specific regions. Thus, $\theta$ has a direct impact on \textsc{HiPaR}{}'s runtime and complexity, as depicted in Figures~\ref{fig:support_impact} and~\ref{fig:support_impact2}, where we plot relative support vs. \textsc{HiPaR}{}'s average RMSE reduction, average training runtime, and average number of elements of a round of (10-fold) cross-validation across the experimental datasets. We observe that values between 0.1 and 0.3 offer a good trade-off between prediction accuracy, runtime, and interpretability. Support thresholds below 0.1 increase prediction accuracy marginally at the price of doubling runtime and model complexity. Conversely, as $\theta$ approaches 1, \textsc{HiPaR}{} tends to select only the default rule $r_{\top}$, becoming tantamount to a regularized linear regression.
\begin{figure}
\caption{Support threshold vs. error reduction and training time in seconds.}
\label{fig:support_impact}
\end{figure}
\begin{figure}
\caption{Support threshold vs. average \# of elements.}
\label{fig:support_impact2}
\end{figure}
\subsubsection{Support and Overlap Biases.} Figures~\ref{fig:evolution_sigma} and~\ref{fig:evolution_omega} show the impact of the parameters that govern the rule selection, namely the support and overlap biases $\sigma$ and $\omega$ (Section~\ref{subsec:ruleSelection}), on \textsc{HiPaR}{}'s RMSE reduction and number of elements. We plot the averages across our experimental datasets when fixing one parameter and varying the other one. We set $\theta=0.02$ in order to guarantee a large number of candidates as input to the selection phase. We observe that the RMSE reduction is mostly insensitive to changes in $\sigma$ and -- to a lesser extent -- $\omega$.
Contrary to the error reduction, the number of elements consistently tends to decrease as the parameters take higher values. This corroborates our intuition that many of the rules induced in the exploration phase are not essential for accurate prediction.
Small values of $\sigma$ downplay the role of support in the importance of rules since $\bar{s}(p)^{\sigma} \rightarrow 1$ as $\sigma \rightarrow 0$ (Section \ref{subsec:ruleSelection}). This rewards rules with low error regardless of their coverage. For low support thresholds, a small $\sigma$ translates into large sets of highly specific rules that are unlikely to overlap. As $\sigma$ increases, specific rules are penalized and their selection becomes less likely (recall that $0 < \bar{s}(p) \le 1$). On the other hand, a large $\omega$ means that sets of overlapping rules are highly penalized, and points are covered by fewer rules as depicted in Figure~\ref{fig:evolution_omega2}. This makes \textsc{HiPaR}{} closer to compact RTs and MTs as it forces rules to cover disjoint regions. Based on our observations, we recommend setting $\omega=1$ and $\sigma \ge 1$ depending on the need for more general or more specific rules.
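The role of the support bias is visible directly in the factor $\bar{s}(p)^{\sigma}$. The snippet below illustrates only this factor; the full selection objective, which also involves the error and the overlap penalty $\omega$, is not reproduced here:

```python
def support_factor(support, sigma):
    """s^sigma with 0 < support <= 1: tends to 1 as sigma -> 0,
    so low-support (very specific) rules are no longer penalised."""
    return support ** sigma

specific_unbiased = support_factor(0.05, sigma=0.0)  # 1.0: specificity ignored
general_biased    = support_factor(0.50, sigma=2.0)  # 0.25
specific_biased   = support_factor(0.05, sigma=2.0)  # 0.0025: strongly penalised
```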
\begin{figure}
\caption{Support bias vs. error reduction and \# of elements.}
\label{fig:evolution_sigma}
\end{figure} \begin{figure}
\caption{Overlap bias vs. error reduction and \# of elements.}
\label{fig:evolution_omega}
\end{figure} \begin{figure}
\caption{Overlap bias vs. average \# of rules per point.}
\label{fig:evolution_omega2}
\end{figure}
\subsection{Comparison with the State of the Art.} \label{subsec:comparison-state-of-the-art} \subsubsection{Accuracy and Complexity Evaluation} \label{subsubsec:comparison-accuracy-and-interpretability} Figures~\ref{fig:whiskers_rmse} and~\ref{fig:whiskers_mae} depict the mean RMSE and median MeAE reductions in 10-fold cross-validation for the different methods on our experimental datasets. The methods are sorted by the median reduction of all executions.
We first note that the black-box methods, i.e., GBT, Rfit, and RF (in blue), rank higher than the interpretable methods \textsc{HiPaR}{}, MT, and RT in terms of error reduction. The unbounded tree-based approaches usually achieve good performance; however, this comes at the expense of complex sets of rules as depicted in Figure~\ref{fig:tradeoff} for the RMSE (the MeAE exhibits the same behavior). If we tune the maximum number of leaves in the trees using \textsc{HiPaR}{} -- denoted by RT$_H$ and MT$_H$ --, we observe a positive impact on the RMSE for MT, whereas RT sees a slight drop in performance (Figure~\ref{fig:whiskers_rmse}). This suggests that the greedy exploration of RT and MT may lead to unnecessary splitting steps where good performance is actually attainable with fewer rules. Conversely, limiting the number of leaves has a deleterious effect on the MeAE.
We observe that \textsc{HiPaR}{}'s median RMSE reduction is comparable to unbounded MT and $\textsc{HiPaR}_f$. Yet, \textsc{HiPaR}{} outputs one order of magnitude fewer elements as shown in Figure~\ref{fig:tradeoff}, thanks to our rule selection step. Besides, \textsc{HiPaR}{}'s behavior is more stable than the tree-based methods, i.e., it yields fewer extreme values, and is comparable to RF (Figure~\ref{fig:whiskers_rmse}).
The situation is slightly different for the MeAE (Figure~\ref{fig:whiskers_mae}), where \textsc{HiPaR}{} has a lower median reduction than MT and competes with Rfit+\textsc{HiPaR}{}, although the latter two methods exhibit a larger variance. This shows that \textsc{HiPaR}{}'s rule selection can work with other rule generation methods -- tree ensembles as implemented by Rfit. The greedy rule selection implemented by $\textsc{HiPaR}_{\textit{sd}}$ yields poorer results than standard \textsc{HiPaR}{} and $\textsc{HiPaR}_f$.
We also observe that CPXR and \textsc{HiPaR}{}+Rfit lie at the bottom of the ranking in Figures~\ref{fig:whiskers_rmse} and~\ref{fig:whiskers_mae}. Despite the high quality of the rules output by CPXR, the method is too selective and reports only 1.42 rules on average in contrast to \textsc{HiPaR}{} and MT that find on average 8.81 and 23.92 rules respectively. This is also reflected by the low variance of the reductions compared to other methods. We highlight the large variance of \textsc{HiPaR}{}+Rfit.
While it can achieve high positive error reductions, its rule extraction is not designed for further filtering, because Rfit reports weak estimators (trees) that become accurate only in combination, for instance, by aggregating their answers or as features for a linear regressor.
All in all, Figure~\ref{fig:tradeoff} suggests that \textsc{HiPaR}{} offers an interesting trade-off between model complexity and prediction accuracy. This makes it appealing for situations where users need to inspect the correlations that explain the data, or for tuning other methods.
\begin{figure}
\caption{Mean RMSE reduction in cross-validation. Black-box methods are in blue}
\label{fig:whiskers_rmse}
\end{figure}
\begin{figure}
\caption{Median MeAE reduction in cross-validation.}
\label{fig:whiskers_mae}
\end{figure}
\begin{figure}
\caption{Trade-off between number of elements and RMSE reduction of the different methods.}
\label{fig:tradeoff}
\end{figure}
\subsubsection{Runtime Evaluation.}
Figure~\ref{fig:runtime} depicts the average runtime of a fold of cross-validation for the different regression methods. We advise the reader to take these results with a grain of salt because of the heterogeneity of the implementations and the fact that the selection of the parameters (e.g., the minimum support $\theta$) was optimized for error reduction and not runtime.
RT, GBT, and RF are by far the fastest algorithms, partly because they rely on highly optimized native scikit-learn implementations. They are followed by Rfit and the hybrid methods \textsc{HiPaR}{}+Rfit and Rfit+\textsc{HiPaR}{}, which combine Rfit with \textsc{HiPaR}{}'s candidate enumeration and rule selection respectively. We observe that \textsc{HiPaR}{} is slower than its variants $\textsc{HiPaR}_f$ and $\textsc{HiPaR}_{\textit{sd}}$ because it adds a more sophisticated rule selection that can take on average 46\% of the total runtime (97\% for \emph{optical}, 0.26\% for \emph{carbon\_nanotubes}).
Finally, we highlight that MT is one order of magnitude slower than \textsc{HiPaR}{} despite its best-first-search implementation.
\begin{figure}
\caption{Average runtime of the different methods on the experimental datasets.}
\label{fig:runtime}
\end{figure}
\subsection{Anecdotal Evaluation.} \label{subsec:anecdotal_evaluation} We illustrate the utility of \textsc{HiPaR}{} in finding interpretable rules on two use cases from the evaluation of the EMM approach presented in~\cite{emm-cook-distance}. In this work, the authors introduce the Cook's distance between the coefficients of the default model and the coefficients of the local models as a measure of exceptionality for regions -- referred to as subgroups in \cite{emm-cook-distance}. A subgroup with a large Cook's distance is cohesive and its slope vector deviates considerably from the slope vector of the bulk of the data (w.r.t. a target variable). We emphasize that \textsc{HiPaR}{}'s goal is different from EMM's: The former looks for compact sets of accurate rules, whereas the latter searches for individually exceptional regions. In this spirit, nothing prevents \textsc{HiPaR}{} from pruning an exceptional region according to EMM if one of its super-regions or sub-regions contributes better to reduce the error. That said, we can neutralize the pruning effect of the selection phase by setting $\omega=0.0$ ($\textsc{HiPaR}_f$) to make \textsc{HiPaR}{} output more hybrid rules. This way \textsc{HiPaR}{} can reproduce the insights of~\cite{emm-cook-distance} for the \emph{wine2} dataset. This dataset consists of 9600 observations derived from 10 years (1991-2000) of tasting ratings reported in the online version of the Wine Spectator Magazine for California and Washington red wines. The task is to predict the retail price $y$ of a wine based on features such as its age, production region, grape variety, wine type, etc. We report the best performing set of rules in 5-fold cross-validation. In concordance with~\cite{emm-cook-distance}, this set contains the default rule: $$ \top \Rightarrow y = -189.69 - 0.0002\times\textit{cases} + 2.39\times \textit{score} + 5.08\times\textit{age}, $$
where \emph{score} is the score from the magazine, \emph{age} is the years of aging before commercialization, and \emph{cases} is the number of cases produced (in thousands). As pointed out in~\cite{emm-cook-distance}, non-varietal wines, i.e., those produced from several grape varieties, tend to have a higher price, and this price is more sensitive to score and age. $\textsc{HiPaR}_f$ ($\theta=0.05$) found 69 rules including the rule supporting this finding (support 7\%): \begin{multline} \textit{variety}=``\textit{non-varietal}" \Rightarrow y = -349.78 - 0.003\times\textit{cases} \\ + 4.20\times \textit{score} + 7.97\times\textit{age} \nonumber \end{multline}
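A minimal way to apply such hybrid rules at prediction time is sketched below, reusing the two wine rules above. The first-match strategy is an assumption for illustration; \textsc{HiPaR}{}'s actual handling of overlapping rules may differ:

```python
def predict(point, default_model, hybrid_rules):
    """Return the prediction of the first matching hybrid rule,
    falling back to the default rule otherwise."""
    for condition, model in hybrid_rules:
        if all(point.get(attr) == val for attr, val in condition.items()):
            return model(point)
    return default_model(point)

# Coefficients taken from the two rules reported in the text.
default_model = lambda w: -189.69 - 0.0002 * w["cases"] + 2.39 * w["score"] + 5.08 * w["age"]
non_varietal = ({"variety": "non-varietal"},
                lambda w: -349.78 - 0.003 * w["cases"] + 4.20 * w["score"] + 7.97 * w["age"])

wine = {"variety": "non-varietal", "cases": 1.2, "score": 92, "age": 3}
price = predict(wine, default_model, [non_varietal])
```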
\textsc{HiPaR}{} could also detect the so-called \emph{Giffen effect}, observed when, contrary to common sense, the price-demand curve exhibits an upward slope. We observe this phenomenon by running $\textsc{HiPaR}_f$ on the \emph{giffen} dataset that contains records of the consumption habits of households in the Chinese province of Hunan at different stages of the implementation of a subsidy on staple foodstuffs. The target variable $y$ is the percent change in household consumption of rice, which is predicted via other attributes such as the change in price (\emph{cp}), the household size (\emph{hs}), the income per capita (\emph{ipc}), the calorie consumption per capita (\emph{ccpc}), the share of calories coming from (a) fats (\emph{shf}), and (b) staple foodstuffs (\emph{shs}, \emph{shs2} according to two different definitions), among other indicators. \textsc{HiPaR}{} finds the default rule:
\begin{multline} \top \Rightarrow y = 37.27 \textbf{- 0.06} \times \textit{cp} + 1.52\times\textit{hs} + 0.0004\times\textit{ipc} + 0.003 \times \textit{ccpc} \\ -146.28\times\textit{shf} + 54.13 \times \textit{shs} - 156.78 \times \textit{shs2} \nonumber \end{multline}
\noindent The negative sign of the coefficient for \emph{cp} suggests no Giffen effect at the global level. As stated in~\cite{emm-cook-distance}, when the subsidy was removed (characterized by the condition \emph{round}=3), the Giffen effect was also not observed in affluent and very poor households. It rather concerned those moderately poor who, despite the surge in the price of rice, increased their consumption at the expense of other sources of calories. Such households can be characterized by intervals in the income and calories per capita (\emph{ipc}, \emph{ccpc}), or by their share of calories from staple foodstuffs (\textit{shs}, \textit{shs2}). This is confirmed by the hybrid rule (support 4\%): \begin{multline} \textit{round}=3 \land \textit{ccpc} \in [1898, 2480) \land \textit{shs2} \in [0.7093, \infty) \Rightarrow y = 42.88 \\ \mathbf{+ 1.17}\times \textit{cp} + 1.08\times\textit{hs} - 0.005 \times \textit{ipc} + 0.018 \times \textit{ccpc} \\ -7.42\times\textit{shf} -3.08 \times \textit{shs} - 114.21 \times \textit{shs2} . \nonumber \end{multline} \noindent The positive coefficient associated with \emph{cp} shows the Giffen effect for households of moderate calories per capita, whose share of calories from staple food is higher than 0.7093. The latter condition aligns with the results of~\cite{emm-cook-distance} that suggested that households with higher values for this variable were more prone to this phenomenon.
\section{Conclusions and Outlook} We have presented \textsc{HiPaR}{}, a pattern-aided regression method designed for heterogeneous and multimodally distributed data. \textsc{HiPaR}{} mines compact sets of accurate hybrid rules thanks to (1) a novel hierarchical exploration of the search space of data regions, and (2) a selection strategy that optimizes for small sets of rules with joint low prediction error and good coverage. \textsc{HiPaR}{} mines fewer rules than state-of-the-art methods at comparable performance.
As future work, we envision extending the rule language bias to allow for negated conditions, as in RT and MT, thereby increasing the exhaustivity of the quest for accurate hybrid rules. We also plan to parallelize the candidate enumeration phase, and to apply other quality criteria and metrics in the search, e.g., the p-values of the linear coefficients. As a natural follow-up, we envision porting the notion of hybrid rules to the problem of classification.
\end{document}
\begin{document}
\begin{center}{ {\LARGE {\bf {Algorithmic Work with}}}
\\ {\LARGE {\bf {Orthogonal Polynomials}}}
\\ {\LARGE {\bf {and Special Functions}}}
\\ {\large {\sc {Wolfram Koepf}}}
\\ {\sl Konrad-Zuse-Zentrum f\"ur Informationstechnik Berlin, Heilbronner Str. 10, D-10711 Berlin, Federal Republic of Germany} \\[3mm] Konrad-Zuse-Zentrum Berlin (ZIB), Preprint SC 94-5, 1994 } \end{center}
\begin{center} {\bf {Abstract:}} \end{center}
\begin{enumerate} \item[] {{\small In this article we present a method to implement orthogonal polynomials and many other special functions in Computer Algebra systems enabling the user to work with those functions appropriately, and in particular to verify different types of identities for those functions. Some of these identities like differential equations, power series representations, and hypergeometric representations can even be dealt with algorithmically, i.\ e.\ they can be computed by the Computer Algebra system, rather than only verified.
The types of functions that can be treated by the given technique cover the generalized hypergeometric functions, and therefore most of the special functions that can be found in mathematical dictionaries.
The types of identities for which we present verification algorithms cover differential equations, power series representations, identities of the Rodrigues type, hypergeometric representations, and identities containing symbolic sums.
The current implementations of special functions in existing Computer Algebra systems do not meet these high standards as we shall show in examples. They should be modified, and we show results of our implementations. }} \end{enumerate}
\section{Introduction} \label{sec:Introduction}
Many special functions can be looked at from the following point of view: They represent functions $f(n,x)$ of one ``discrete'' variable $n\in D$ defined on a set $D$ that has the property that $n\in D\Rightarrow n+1\in D$ (or $n\in D\Rightarrow n-1\in D$), e.\ g.\ $D=\N_0, \Z, \R$, or ${\rm {\mbox{C{\llap{{\vrule height1.52ex}\kern.4em}}}}}$, and one ``continuous'' variable $x\in I$ where $I$ represents a real interval, either finite $I=[a,b]$, infinite ($I=[a,\infty)$, $I=(-\infty,a]$, or $I=\R$), or a subset of the complex plane ${\rm {\mbox{C{\llap{{\vrule height1.52ex}\kern.4em}}}}}$.
In the given situation we may speak of the family $(f_n)_{n\in D}$ of functions $f_n(x):=f(n,x)$.
In this paper we will deal with special functions and orthogonal polynomials of a real/complex variable $x$. Many of our results can be generalized to special and orthogonal functions of a discrete variable $x$ which we will consider in a forthcoming paper.
Many of those families, especially all families of orthogonal polynomials, have the following properties: \begin{enumerate} \item {\bf (Derivative rule)} \\ The functions $f_n$ are differentiable with respect to the variable $x$, and satisfy a derivative rule of the form \begin{equation} f_n'(x)=\ded x f_n(x)=\sum_{k=0}^{m-1} r_k(n,x)\,f_{n-k}(x) \quad\quad \mbox{or} \quad\quad f_n'(x)=\sum_{k=0}^{m-1} r_k(n,x)\,f_{n+k}(x) \;, \label{eq:Derivative rule} \end{equation} where the derivative with respect to $x$ is represented by a finite number of lower or higher indexed functions of the family, and where $r_k$ are rational functions in $x$. If $r_{m-1}(n,x)\not\equiv 0$ then the number $m$ is called the order of the given derivative rule. We call the two different types of derivative rules {\sl backward} and {\sl forward} derivative rule, respectively. \item {\bf (Differential equation)} \\ The functions $f_n$ are $m$ times differentiable ($m\in\N)$ with respect to the variable $x$, and satisfy a homogeneous linear differential equation \begin{equation} \sum_{k=0}^m p_k(n,x)\,f_n^{(k)}(x)=0 \;, \label{eq:Differential equation} \end{equation}
where $p_k$ are polynomials in $x$. If $p_m(n,x)\not\equiv 0$ then the number $m$ is called the order of the given differential equation. \item {\bf (Recurrence equation)} \\ The functions $f_n$ satisfy a homogeneous linear recurrence equation with respect to $n$ \begin{equation} \sum_{k=0}^m q_k(n,x)\,f_{n-k}(x)=0 \;, \label{eq:Recurrence equation} \end{equation}
where $q_k$ are polynomials in $x$, and $m\in\N$. If $q_0(n,x),q_m(n,x)\not\equiv 0$ then the number $m$ is called the order of the given recurrence equation. \end{enumerate} Some of those families, especially all ``classical'' families of orthogonal polynomials, have the following further property: \begin{enumerate} \item[4.] {\bf (Rodrigues representation)} \\ The functions $f_n$ have a representation of the Rodrigues type \begin{equation} f_n(x)=\frac{1}{K_n\,g(x)}\dedn {x}{n} h_n(x) \label{eq:Rodrigues} \end{equation} for some functions $g$ depending on $x$, and $h_n$ depending on $n$ and $x$, and a constant $K_n$ depending on $n$.
\end{enumerate} From an algebraic point of view these properties read as follows: Let $K[x]$ denote the field of rational functions over $K$ where $K$ is one of $\Q$, $\R$, or ${\rm {\mbox{C{\llap{{\vrule height1.52ex}\kern.4em}}}}}$. Then if the coefficients of the occurring polynomials and rational functions are elements of $K$, \begin{enumerate} \item the derivative rule states that $f_n'$ is an element of the linear space over $K[x]$ which is generated by $\{ f_n, f_{n-1}, \ldots, f_{n-(m-1)}\}$ or $\{ f_n, f_{n+1}, \ldots, f_{n+{m-1}}\}$, respectively; \item the differential equation states that the $m+1$ functions $f_n^{(k)}\; (k=0,\ldots,m)$ are linearly dependent over $K[x]$; moreover, by an induction argument, any $m+1$ functions $f_n^{(k)}\;(k\in\N_0)$ are linearly dependent over $K[x]$; \item the recurrence equation states that the $m+1$ functions $f_{n-k}\; (k=0,\ldots,m)$ are linearly dependent over $K[x]$; moreover, by an induction argument, any $m+1$ functions $f_{n}\;(n\in D)$, are linearly dependent over $K[x]$. \end{enumerate} One important question when dealing with special functions is the following: Which properties of those functions does one have to know to be able to establish various types of identities that those functions satisfy? With respect to the implementation of special functions in Computer Algebra systems this question reads: Which properties should be implemented for those functions, and in which form should this be done such that the user is enabled to verify various types of identities, or at least to implement algorithms for this purpose?
Nikiforov and Uvarov \cite{NU} gave a unified introduction to special functions of mathematical physics based primarily on the Rodrigues formula and the differential equation. They dealt, however, only with second order differential equations, which makes their treatment quite restricted, and moreover their development does not have algorithmic applications.
Truesdell \cite{Truesdell} gave a unified approach to special functions based entirely on a special form of the derivative rule. His development has some algorithmic content, which, however, is difficult or impossible to implement in Computer Algebra. Truesdell's approach---although nice---has the further disadvantage that one can obtain only results of a very special form, see \cite{Koe94Truesdell}.
From the algorithmic point of view another approach is better: We will base our treatment of special functions on the derivative rule (\ref{eq:Derivative rule}) in combination with the recurrence equation (\ref{eq:Recurrence equation}). We will show that an implementation of special functions in Computer Algebra systems based on these two properties provides a simplification mechanism which, in particular, enables the user to verify many kinds of identities for those functions. Some of these identities, like differential equations and power series representations, can even be dealt with algorithmically, i.\ e.\ they can be computed by the Computer Algebra system.
Our treatment is connected with the holonomic system approach due to Zeilberger \cite{Zei1}--\cite{Zei3} which is based on the validity of partial differential equations, mixed recurrence equations, and difference-differential equations. This connection will be made more precise later.
The class of functions that can be treated this way contains the Airy functions $\mathop{\rm Ai}\nolimits\:(x)$, $\mathop{\rm Bi}\nolimits\:(x)$ (see e.\ g.\ \cite{AS}, \S~10.4), the Bessel functions $J_n(x), Y_n(x), I_n(x),$ and $K_n(x)$ (see e.\ g.\ \cite{AS}, Ch.~9--11),
the Hankel functions $H_n^{(1)}(x)$ and $H_n^{(2)}(x)$ (see e.\ g.\ \cite{AS}, Ch.~9),
the Kummer functions $M(a,b,x)=\,_1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{a}\\[-1mm] \multicolumn{1}{c}{b}
\end{array}\right| x \right)$ and $U(a,b,x)$ (see e.\ g.\ \cite{AS}, Ch.~13), the Whittaker functions $M_{n,m}(x)$ and $W_{n,m}(x)$ (see e.\ g.\ \cite{AS}, \S~13.4),
the associated Legendre functions $P_a^b(x)$ and $Q_a^b(x)$ (see e.\ g.\ \cite{AS}, \S~8), all kinds of orthogonal polynomials: the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$, the Gegenbauer polynomials $C_n^{(\alpha)}(x)$, the Chebyshev polynomials of the first kind $T_n(x)$ and of the second kind $U_n(x)$, the Legendre polynomials $P_n(x)$, the Laguerre polynomials $L_n^{(\alpha)}(x)$, and the Hermite polynomials $H_n(x)$ (see \cite{Sze}, \cite{Tri}, and \cite{AS}, \S~22), many more special functions, and furthermore sums, products, derivatives, antiderivatives, and the composition with rational functions and rational powers of those functions (see \cite{Sta}, \cite{Zei1}, \cite{SZ} and \cite{KS}).
In the case of the classical orthogonal polynomials the properties above can be made much more precise (see e.\ g.\ \cite{Tri}, Kapitel~IV). Therefore let $f_n:[a,b]\rightarrow\R\;(n\in\N_0)$ denote the family of orthogonal polynomials \[ f_n(x)=k_n\,x^n+k'_n\,x^{n-1}+\ldots \] with respect to the weight function $w(x)\geq 0$, i.\ e.\ with the property that \[ \int\limits_a^b w(x)\,f_n(x)\,f_m(x)\,dx=0\quad\quad(n\neq m) \] and \[ \int\limits_a^b w(x)\,f_n^2(x)\,dx=h_n\neq 0 \;. \] Then we have the properties: \begin{enumerate} \item {\bf (Derivative rule)} \\ The functions $f_n$ satisfy a derivative rule of the form \[ X\,f_n'=\beta_n\,f_{n-1}+\left( \frac{n}{2}X''x+\alpha_n\right)\,f_n \] (see e.\ g.\ \cite{Tri}, p.\ 135, formula (4.8)) where \[ \alpha_n=n\,X'(0)-\ed 2\,X''\,\frac{k_n'}{k_n} \;,\quad\quad \beta_n=-\frac{h_n\,k_{n-1}}{h_{n-1}\,k_n}\left( K_1\,k_1-\frac{2n-1}{2}\,X''\right) \;, \] and \begin{equation} X(x)=\funkdefff{(b-x)(x-a)}{a,b \;\mbox{are finite}} {x-a}{b=\infty}{1}{-a,b=\infty} \label{eq:X(x)} \;. \end{equation} In particular, the order of the derivative rule is $2$. \item {\bf (Differential equation)} \\ The functions $f_n$ satisfy the homogeneous linear differential equation with polynomial coefficients \[ X\,f_{n}''(x)+K_1\,f_1\,f_n'(x)+\lambda_n\,f_{n}(x)=0 \] (see e.\ g.\ \cite{Tri}, p.\ 133, formula (4.1)) where \[ \lambda_n=-n\left( K_1\,k_1-\frac{n-1}{2}\,X''\right) \;, \] and $X(x)$ is given by (\ref{eq:X(x)}). In particular, the order of the differential equation is $2$. \item {\bf (Recurrence equation)} \\ The functions $f_n$ satisfy the recurrence equation \begin{equation} f_{n+1}(x)=-C_n\,f_{n-1}(x)+(A_n\,x+B_n)\,f_n(x) \label{eq:Recurrence equation:orth} \end{equation} (see e.\ g.\ \cite{Tri}, p.\ 126, formula (2.1)) with \[ A_n=\frac{k_{n+1}}{k_n}\;, \quad\quad B_n=\frac{k_{n+1}}{k_n}\left( \frac{k'_{n+1}}{k_{n+1}}-\frac{k'_{n}}{k_{n}}\right) \;,\quad\quad\mbox{and}\quad\quad C_n=\frac{k_{n+1}\,k_{n-1}\,h_n}{k_n^2\,h_{n-1}} \;. 
\] In particular, the order of the recurrence equation is $2$. \item {\bf (Rodrigues representation)} \\ The functions $f_n$ have a representation of the Rodrigues type \begin{equation} f_n(x)=\frac{1}{K_n\,w(x)}\dedn {x}{n} \Big( w(x)\,X(x)^n \Big) \label{eq:Rodriguestype} \end{equation} (see e.\ g.\ \cite{Tri}, p.\ 129, formula (3.2)), where $X(x)$ is given by (\ref{eq:X(x)}), i.\ e.\ (\ref{eq:Rodrigues}) is valid with $g(x)=w(x)$, and $h_n(x)=w(x)\,X(x)^n$. In particular, the order of the polynomial $X(x)$ is $\leq 2$. \end{enumerate} Further it turns out that in the case of classical orthogonal polynomials all coefficient functions of $f_{n-k}$ are rational with respect to the variable $n$ as well, a fact that depends, however, on the special normalizations that are used in these cases.
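These properties are easy to check numerically for a concrete family. The sketch below does so for the Legendre polynomials $P_n$ on $[-1,1]$, where $X(x)=(1-x)(x+1)=1-x^2$, using the classical three-term recursion and the standard backward derivative rule $(1-x^2)\,P_n'(x)=n\,\big(P_{n-1}(x)-x\,P_n(x)\big)$ (a well-known identity, stated here without derivation):

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, P_0 = 1, P_1 = x."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def check_derivative_rule(n, x, h=1e-6):
    """Residual of (1 - x^2) P_n'(x) = n (P_{n-1}(x) - x P_n(x)),
    with P_n' approximated by a central difference."""
    dp = (legendre(n, x + h) - legendre(n, x - h)) / (2 * h)
    lhs = (1 - x * x) * dp
    rhs = n * (legendre(n - 1, x) - x * legendre(n, x))
    return abs(lhs - rhs)
```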
We mention that no system of orthogonal polynomials besides the classical ones satisfies a Rodrigues representation of type (\ref{eq:Rodriguestype}) with a polynomial $X$ (see e.\ g.\ \cite{Tri}, Kapitel IV, \S 3).
We note that using the recurrence equation (\ref{eq:Recurrence equation:orth}), which is valid also for non-classical orthogonal polynomials, or any recurrence equation of type (\ref{eq:Recurrence equation}) of order two (also called three-term recursion), recursively, each (backward or forward) derivative rule (\ref{eq:Derivative rule}) is equivalent to a derivative rule \begin{equation} f_n'(x)=k(n,x)\,f_n(x)+l(n,x)\,f_{n+1}(x) \label{eq:Derivative rule2} \end{equation} ($k,l$ rational functions with respect to $x$) of order two. In general, the order of the derivative rule can always be assumed to be less than or equal to the order of the recurrence equation. In some nice work \cite{Truesdell} Truesdell presented a treatment of special functions entirely based on the functional equation (\ref{eq:Derivative rule2}). He showed that this difference-differential equation is independent of the differential equation (\ref{eq:Differential equation}) and the recurrence equation (\ref{eq:Recurrence equation}), i.\ e.\ it does not imply the existence of one of these.
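For a family of order two this equivalence is a one-line substitution: solving the three-term recursion (\ref{eq:Recurrence equation:orth}) for $f_{n-1}$ and inserting the result into a backward rule $f_n'=r_0\,f_n+r_1\,f_{n-1}$ (with $r_0,r_1$ rational in $x$) yields

```latex
% f_{n+1} = -C_n f_{n-1} + (A_n x + B_n) f_n
%   =>  f_{n-1} = \frac{(A_n x + B_n)\,f_n - f_{n+1}}{C_n}
f_n'(x)
  = \underbrace{\left( r_0 + \frac{r_1\,(A_n x + B_n)}{C_n} \right)}_{k(n,x)} f_n(x)
  \; \underbrace{-\,\frac{r_1}{C_n}}_{l(n,x)} \, f_{n+1}(x) \;,
```

which is exactly the form (\ref{eq:Derivative rule2}); since $C_n\neq 0$ for orthogonal polynomials, $k$ and $l$ are again rational in $x$.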
In contrast to this work, our main notion is the \\[1mm] {\bf Definition (Admissible family of special functions)} We call a family $f_n $ of special functions {\sl admissible} if the functions $f_n$ satisfy a recurrence equation of type (\ref{eq:Recurrence equation}) and a derivative rule of type (\ref{eq:Derivative rule}). We call the order of the recurrence equation the {\sl order} of the admissible family $f_n$.
$\Box$ \\[2mm] Note that the recurrence equation (\ref{eq:Recurrence equation}) together with $m$ initial functions $f_{n_0},f_{n_0+1},\ldots,f_{n_0+m-1}$ determine the functions $f_n\;(n\in D)$ uniquely.
So an admissible family of special functions (with given initial functions) is overdetermined by its two defining properties, i.\ e.\ the recurrence equation and the derivative rule must be compatible. This fact, however, gives our notion a considerable strength: \begin{theorem} \label{th:mdimensionsl} {\rm For any admissible family $f_n$ of order $m$ the linear space $V_{f_n}$ over $K[x]$ of functions generated by the set of shifted derivatives
$\{ f_{n\pm k}^{(j)}\;|\;j, k\in\N_0\}$ is at most $m$-dimensional. On the other hand, if the family
$\{ f_{n\pm k}^{(j)}\;|\;j, k\in\N_0\}$ spans an $m$-dimensional linear space, then $f_n$ forms an admissible family of order $m$. } \end{theorem}
\par
\noindent{{\sl Proof:}}\hspace{5mm} By the recurrence equation and an induction argument it follows that
the linear space $V$ spanned by $\{ f_{n\pm k}\;|\;k\in\N_0\}$ is at most $m$-dimensional. Using the derivative rule, by a further induction it follows that the derivative of any order $f_n^{(k)}\;(k\in\N_0)$ is an element of $V$. Therefore $V_{f_n}=V$.
If, on the other hand, for a family $f_n$ the set of derivatives
$\{ f_{n\pm k}^{(j)}\;|\;j, k\in\N_0\}$ spans an $m$-dimensional linear space, then the existence of a recurrence equation and a derivative rule of order $m$ is obvious.
$\Box$\par
\noindent From the algebraic point of view this is the main reason for the importance of admissible families: Any $m+1$ distinct elements of $V_{f_n}$ are linearly dependent, i.\ e.\ any element of $V_{f_n}$ can be represented as a linear combination (with respect to $K[x]$) of any $m$ others that span $V_{f_n}$. This is the algebraic background for the fact that so many identities between the members of an admissible family and their derivatives exist.
In particular we have \begin{corollary} \label{cor:AF->DE} {\rm Any admissible family $f_n$ of order $m$ satisfies a simple differential equation of order $m$.
$\Box$ } \end{corollary} In \S~\ref{sec:Algorithmic generation of differential equations} we give an algorithm which, in particular, generates this differential equation of $f_n$.
With regard to Zeilberger's approach Corollary~\ref{cor:AF->DE} can be interpreted as follows: Any admissible family $f_n(x)$ forms a holonomic system with respect to the two variables $n$, and $x$, whose defining recurrence equation, and the differential equation corresponding to Corollary~\ref{cor:AF->DE} together with the initial conditions \begin{equation} f_0^{(k)}(0)\;,\quad\quad\mbox{and}\quad\quad f_k(0)\quad\quad(k=0,\ldots,m-1) \label{eq:holonomicIV} \end{equation} yield the canonical holonomic representation of $f_n(x)$ (see \cite{Zei1}, Lemma 4.1).
On the other hand, not all holonomic systems $f_n(x)$ form admissible families so that our notion is stronger: Let $f_n(x):=\mathop{\rm Ai}\nolimits\:(x)$ for all $n\in\Z$, then obviously $f_n(x)$ is the holonomic system generated by the equations \[ f_n''(x)=x\,f_n(x)\;, \quad\quad f_{n+1}(x)=f_n(x)\;, \] and some initial values, that does {\sl not} form an admissible family as the derivative
$f_n'$ is linearly independent of $\{f_n\;|\;n\in\Z\}$ over $K[x]$, see \S~\ref{sec:Embedding of one-variable functions into admissible families}, and thus no derivative rule of the form (\ref{eq:Derivative rule}) exists.
A further advantage of our approach is the separation of the variables,
i.\ e.\ the work with ordinary differential equations, and one-variable recurrence equations rather than partial differential equations, mixed recurrence equations, and difference-differential equations. So our approach---if applicable---seems to be more natural.
To present an example of an admissible family that cannot be found in mathematical dictionaries, we consider the functions \[
k_n(x):=\frac{2}{\pi}\int\limits_0^{\pi/2} \cos\:(x\,\tan\theta-n\,\theta)\,d\theta \;,
\] that Bateman introduced in \cite{Bat}, see also \cite{KS1}. He verified that (\cite{Bat}, formula (2.7)) \begin{equation} F_n(x):=(-1)^n\,k_{2n}(x)=e^{-x}\Big( L_n(2x)-L_{n-1}(2x)\Big) \label{eq:k2m} \;. \end{equation} We call $F_n$ the family of Bateman functions, which turns out to form an admissible family of order two.
Bateman obtained the property (\cite{Bat}, formula (4.1)) \[ (n-1)\,\Big(F_n(x)-F_{n-1}(x)\Big)+(n+1)\,\Big(F_n(x)-F_{n+1}(x)\Big)= 2\,x\,F_n(x) \] leading to \begin{equation} n\,F_{n}(x)-2\,(n-1-x)\,F_{n-1}(x)+(n-2)\,F_{n-2}(x)=0 \label{eq:differenceeq} \end{equation} which is a recurrence equation of type (\ref{eq:Recurrence equation}) and order two that determines the Bateman functions uniquely using the two initial functions \[ F_0(x)=e^{-x} \quad\quad\quad\mbox{and}\quad\quad\quad F_1(x)=-2\,x\,e^{-x} \] which follow from (\ref{eq:k2m}).
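As an illustrative cross-check (ours, not taken from \cite{Bat}), the recurrence (\ref{eq:differenceeq}) with these two initial functions can be compared numerically with the Laguerre representation of $F_n$; with the modern normalization $L_0=1$, $L_1(y)=1-y$ of the Laguerre polynomials one has $F_n(x)=e^{-x}\big(L_n(2x)-L_{n-1}(2x)\big)$:

```python
import math

def laguerre(n, y):
    # classical Laguerre polynomials L_n(y) via their three-term recursion
    if n < 0:
        return 0.0
    L0, L1 = 1.0, 1.0 - y
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((2*k + 1 - y)*L1 - k*L0)/(k + 1)
    return L1

def bateman_recurrence(N, x):
    # F_0 = e^{-x}, F_1 = -2x e^{-x};  n F_n = 2(n-1-x) F_{n-1} - (n-2) F_{n-2}
    F = [math.exp(-x), -2*x*math.exp(-x)]
    for n in range(2, N + 1):
        F.append((2*(n - 1 - x)*F[n-1] - (n - 2)*F[n-2]) / n)
    return F

def bateman_closed(n, x):
    # F_n(x) = e^{-x} (L_n(2x) - L_{n-1}(2x))
    return math.exp(-x)*(laguerre(n, 2*x) - laguerre(n - 1, 2*x))

x = 0.7
F = bateman_recurrence(8, x)
for n in range(9):
    assert abs(F[n] - bateman_closed(n, x)) < 1e-12
```

Both recursions are exact up to rounding, so agreement to machine precision is expected.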
Bateman obtained further a difference differential equation (\cite{Bat}, formula (4.2)) \begin{equation} (n+1)\,F_{n+1}(x)-(n-1)\,F_{n-1}(x)=2\,x\,F_n'(x) \;, \label{eq:difference differential equation} \end{equation} which can be brought into the form \begin{equation} F_n'(x)=\ed{x}\Big( (n-x)\,F_n(x)-(n-1)\,F_{n-1}(x)\Big) \label{eq:Batemanderivativerule} \end{equation} using (\ref{eq:differenceeq}). This is a derivative rule of the form (\ref{eq:Derivative rule}) and order two. Therefore $F_n(x)$ form an admissible family of order two.
We note that the functions $F_n$ satisfy the differential equation \begin{equation} x\,F_n''(x)+(2n-x)\,F_n(x)=0 \;, \label{eq:DE Bateman} \end{equation} (see \cite{Bat}, formula (5.1)), and the Rodrigues type representation \begin{equation} F_n(x)=\frac{x\,e^x}{n!}\frac{d^n}{dx^n}\left( e^{-2x}\,x^{n-1}\right) \;, \label{eq:Rodrigues Bateman} \end{equation} (see \cite{Bat}, formula (31)).
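Both facts can also be checked exactly: writing $F_n(x)=P_n(x)\,e^{-x}$ with polynomials $P_n$, the derivative rule (\ref{eq:Batemanderivativerule}) translates into $x\,P_n'=n\,P_n-(n-1)\,P_{n-1}$, and the differential equation (\ref{eq:DE Bateman}) into $x\,P_n''-2x\,P_n'+2n\,P_n=0$. The following sketch (ours) verifies these polynomial identities in exact rational arithmetic, representing polynomials as coefficient lists:

```python
from fractions import Fraction

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]
def pscale(c, a): return [c*v for v in a]
def pmulx(a):     return [Fraction(0)] + a
def pdiff(a):     return [k*c for k, c in enumerate(a)][1:] or [Fraction(0)]
def iszero(a):    return all(c == 0 for c in a)

# P_0 = 1, P_1 = -2x;  n P_n = 2(n-1-x) P_{n-1} - (n-2) P_{n-2}
P = [[Fraction(1)], [Fraction(0), Fraction(-2)]]
for n in range(2, 11):
    t = padd(pscale(2*(n-1), P[n-1]), pscale(-2, pmulx(P[n-1])))
    t = padd(t, pscale(-(n-2), P[n-2]))
    P.append(pscale(Fraction(1, n), t))

for n in range(1, 11):
    # derivative rule:  x P_n' - n P_n + (n-1) P_{n-1} = 0
    assert iszero(padd(pmulx(pdiff(P[n])),
                       padd(pscale(-n, P[n]), pscale(n-1, P[n-1]))))
    # differential equation:  x P_n'' - 2 x P_n' + 2n P_n = 0
    assert iszero(padd(pmulx(pdiff(pdiff(P[n]))),
                       padd(pscale(-2, pmulx(pdiff(P[n]))), pscale(2*n, P[n]))))
```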
\section{Properties of admissible families} \label{sec:Properties of admissible families}
\begin{theorem} \label{th:Properties of admissible families} {\rm Let $f_n$ form an admissible family of order $m$. Then \begin{enumerate} \item[(a)] {\bf (Shift)} $f_{n\pm k}\;(k\in\N)$ forms an admissible family of order $m$; \item[(b)] {\bf (Derivative)} $f_n'$ forms an admissible family of order $\leq m$;
\item[(c)] {\bf (Composition)} $f_n\circ r$ forms an admissible family of order $\leq m$, if $r$ is a rational function, and of order $\leq m\,q$, if $r(x)=x^{p/q}\;(p,q\in\N)$. \end{enumerate} If furthermore $g_n$ forms an admissible family of order $\leq l$, then moreover \begin{enumerate} \item[(d)] {\bf (Sum)} $f_n+g_n$ forms an admissible family of order $\leq m+l$; \item[(e)] {\bf (Product)} $f_n\,g_n$ forms an admissible family of order $\leq m\,l$. \end{enumerate} } \end{theorem}
\par
\noindent{{\sl Proof:}}\hspace{5mm} (a): This is an obvious consequence of Theorem~\ref{th:mdimensionsl}. \\(b): Let $g_n:=f_n'$. We start with the recurrence equation for $f_n$ and differentiate to obtain \begin{equation} \sum_{k=0}^m q_k'(n,x)\,f_{n-k}(x)+\sum_{k=0}^m q_k(n,x)\,f_{n-k}'(x)=0 \;. \label{eq:intermediatederivative} \end{equation} From Theorem~\ref{th:mdimensionsl}, we know that each of the functions $f_{n-j}\;(j=0,\ldots,m)$ can be represented as a linear combination of the functions $f_{n-k}'\;(k=0,\ldots,m-1)$ over $K[x]$, which generates a recurrence equation for $g_n$. Similarly a derivative rule for $g_n$ is obtained. \\(c): For the composition $h_n:=f_n\circ r$ with a rational function $r$, the recurrence equation is obtained by substitution, and the derivative rule is a result of the chain rule. If $r(x)=x^{1/q}$, then, by
\cite{KS}, Lemma~1, the family $\{h_n^{(j)}\;|\;j\in\N_0\}$ is spanned by the $mq$ functions $x^{r/q} f_n^{(j)}(x^{1/q})\;(j=0,\ldots,m-1,$ $r=0,\ldots,q-1)$, and since
$\{f_{n\pm k}^{(j)}\;|\;j,k\in\N_0\}$ has dimension $m$, the linear space spanned by
$\{h_{n\pm k}^{(j)}\;|\;j,k\in\N_0\}$ has dimension $\leq m\,q$, implying the result. If finally $r(x)=x^{p/q}$, then a combination gives the result. \\(d): By a simple algebraic argument, we see that $f_{n-k}+g_{n-k}\:(k\in\Z)$ span the linear space $V:=V_{f_n+g_n}=V_{f_n}+V_{g_n}$ of dimension $\leq m+l$ over $K[x]$. Therefore $f_n+g_n$ satisfies a recurrence equation of order $\leq m+l$. If we add the derivative rules for $f_n$ and $g_n$, we see that $f_n'+g_n'\in V$, and thus can be represented in the desired way. \\(e): By a similar algebraic argument (see e.\ g.\ \cite{Sta}, Theorem 2.3) we see that $f_{n-k}\cdot g_{n-k}\:(k\in\Z)$ span a linear space $V$ of dimension $\leq m\,l$ over $K[x]$, hence $f_n\,g_n$ satisfies a recurrence equation of order $\leq m\,l$. By the product rule, and the derivative rules for $f_n$ and $g_n$ we see that the derivative of $f_n\,g_n$ is represented by products of the form $f_{n-k}\,g_{n-j}\;(k,j\in\Z)$, and as those span the linear space $V$ (see e.\ g.\ \cite{KS}, Theorem 3 (d)), we are done.
$\Box$\par
\noindent
As an application we again may state that the Bateman functions form an admissible family: Using the theorem, this follows immediately from representation (\ref{eq:k2m}).
Next we study algorithmic versions of the theorem. The following algorithm generates a representation of the members $f_{n\pm k}\;(k=0,\ldots,m-1)$ of an admissible family in terms of the derivatives $f_{n\pm j}'\;(j=0,\ldots,m-1)$. By Theorem~\ref{th:mdimensionsl} we know that such a representation exists. Without loss of generality, we assume that the admissible family is given by a backward derivative rule; in the case of a forward derivative rule, a similar algorithm is valid.
\begin{algorithm} \label{algo:Integral rule} {\rm Let $f_n$ be an admissible family of order $m$, given by a backward derivative rule \[ f_n'(x)=\sum_{k=0}^{m-1} r_k(n,x)\,f_{n-k}(x) \;. \] Then the following algorithm generates a list of backward rules $(k=0,\ldots,m-1)$ \begin{equation} f_{n-k}(x)=\sum_{j=0}^{m-1} R_j^k(n,x)\,f'_{n-j}(x) \label{eq:Integral rule} \end{equation} ($R_j^k$ rational with respect to $x$) for $f_{n-k}\;(k=0,\ldots,m-1)$ in terms of the derivatives $f'_{n-j}\;(j=0,\ldots,m-1)$: \begin{enumerate} \item[(a)] Shift the derivative rule $m-1$ times to obtain the set of $m$ equations \[ f_{n-j}'(x)=\sum_{k=0}^{m-1} r_k(n-j,x)\,f_{n-j-k}(x) \quad\quad(j=0,\ldots,m-1) \;. \] \item[(b)] Utilize the recurrence equation to express all expressions on the right hand sides of these equations in terms of $f_{n-k}\;(k=0,\ldots,m-1)$ leading to \[ f_{n-j}'(x)=\sum_{k=0}^{m-1} r_k^{j}(n,x)\,f_{n-k}(x) \quad\quad(j=0,\ldots,m-1\;,\;r_k^{j}\;\mbox{rational with respect to}\;x) \;. \] \item[(c)] Solve this system of linear equations for the unknowns $f_{n-k}\;(k=0,\ldots,m-1)$ to obtain the desired representations (\ref{eq:Integral rule}).
$\Box$ \end{enumerate} } \end{algorithm} The proof of the algorithm is obvious. It is also clear how the method can be adapted to obtain forward rules in terms of the derivatives.
As an example, the algorithm generates the following representations for the Bateman functions \[ F_n(x)=
{{1 - n + x}\over {2\,n -1- x}}\,F_n'(x) +
{{n - 1}\over {2\,n -1- x}}\,F_{n-1}'(x) \;, \] and \[ F_n(x)=
{{1 + n - x}\over {1 + 2\,n - x}}\,F_n'(x) -
{{1 + n}\over {1 + 2\,n - x}}\,F_{n+1}'(x) \] in terms of their derivatives.
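These two representations can be tested numerically against the recurrence equation (\ref{eq:differenceeq}) and the derivative rule (\ref{eq:Batemanderivativerule}); the evaluation point in the following sketch (ours) is arbitrary:

```python
import math

def bateman(N, x):
    # F_0 = e^{-x}, F_1 = -2x e^{-x};  n F_n = 2(n-1-x) F_{n-1} - (n-2) F_{n-2}
    F = [math.exp(-x), -2*x*math.exp(-x)]
    for n in range(2, N + 1):
        F.append((2*(n - 1 - x)*F[n-1] - (n - 2)*F[n-2]) / n)
    return F

def dF(n, F, x):
    # backward derivative rule: F_n' = ((n-x) F_n - (n-1) F_{n-1}) / x
    if n == 0:
        return -math.exp(-x)          # F_0'(x) = -e^{-x}
    return ((n - x)*F[n] - (n - 1)*F[n-1]) / x

x = 1.3
F = bateman(10, x)
for n in range(1, 9):
    back = ((1 - n + x)*dF(n, F, x) + (n - 1)*dF(n - 1, F, x)) / (2*n - 1 - x)
    fwd  = ((1 + n - x)*dF(n, F, x) - (1 + n)*dF(n + 1, F, x)) / (1 + 2*n - x)
    assert abs(back - F[n]) < 1e-8 and abs(fwd - F[n]) < 1e-8
```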
We note that by means of Algorithm~\ref{algo:Integral rule} and the results of \cite{KS} (see also \cite{Zei1}, p.\ 342, and \cite{SZ}), we are able to state algorithmic versions of the statements of Theorem~\ref{th:Properties of admissible families}.
\begin{algorithm} \label{algo:Properties of admissible families} {\rm The following algorithms lead to the derivative rules and recurrence equations of the admissible families presented in Theorem~\ref{th:Properties of admissible families}: \begin{enumerate} \item[(a)] {\bf (Shift)} Direct use of the derivative rule and the recurrence equation leads to the derivative rule and the recurrence equation for $f_{n\pm 1}$; a recursive application gives the results for $f_{n\pm k}\;(k\in\N)$. \item[(b)] {\bf (Derivative)} By Algorithm~\ref{algo:Integral rule} we may replace all occurrences of $f_{n-k}\;(k=0,\ldots,m)$ in (\ref{eq:intermediatederivative}), resulting in the recurrence equation for $f_n'$; similarly the derivative rule is obtained. \item[(c)] {\bf (Composition)} If $r$ is a rational function, then an application of the chain rule leads to the derivative rule and the recurrence equation of $f_n\circ r$; an approach similar to the algorithmic version of Theorem 2 in \cite{KS} yields the derivative rule and the recurrence equation of $f_n\circ x^{1/q}$ by an elimination of the expressions $x^{r/q}\,f_n^{(j)}(x^{1/q})\; (r=1,\ldots,q-1,\;j=1,\ldots,m-1)$. \item[(d)] {\bf (Sum)} Applying a discrete version of Theorem 3 (c) in \cite{KS} to $f_n+g_n$ (see also \cite{Zei1}, p.\ 342, and \cite{SZ}, {\sc Maple} function {\tt rec+rec}) results in the recurrence equation, and a similar approach gives the derivative rule. \item[(e)] {\bf (Product)} Applying a discrete version of Theorem 3 (d) in \cite{KS} to $f_n\,g_n$ (see also \cite{Zei1}, p.\ 342, and \cite{SZ}, {\sc Maple} function {\tt rec*rec}) yields the recurrence equation, and a similar approach gives the derivative rule.
$\Box$ \end{enumerate} } \end{algorithm} A {\sc Mathematica} implementation of the given algorithms generates e.\ g.\ for the derivative $F_n'(x)$ of the Bateman function $F_n(x)$ the derivative rule \[ F_n''(x)= \frac{2\,n - x}{x - 2\,n\,x + x^2}\Big( \left( n-1 \right) F_{n-1}'(x)+ \left( 1-n+x \right) F_{n}'(x) \Big) \;, \] and the recurrence equation \[ F_{n+1}'(x)= \ed{(1\! + \!n) (1\! -\! 2 n\! + \!x)} \left( (n\!-\!1)(x\!-\!2n\!-\!1) F_{n-1}'(x)+ 2\,(1\!-\!2n^2\!+\! 3nx\!-\!x^2) F_{n}'(x) \right) \;, \] and for the product $A_n(x):=F_n^2(x)$ the derivative rule \begin{eqnarray*} A_n'(x)&=& {{\left( 1 - n \right) \,{{\left( n-2 \right) }^2}}\over {2\,n\,x\,\left( 1 - n + x \right) }}\,A_{n-2}(x) \\&&+\; {{2\,\left( n-1 \right) \,\left( 1 - n + x \right)}\over {n\,x}} \,A_{n-1}(x) \\&&+ \; {{\left( 3\,n - 3\,{n^2} - 4\,x + 8\,n\,x - 4\,{x^2} \right)}\over {2\,x\,\left( 1 - n + x \right) }}\,A_n(x) \;, \end{eqnarray*} and the recurrence equation \begin{eqnarray*} A_{n+1}(x) &= & \ed{(1 + n)^2}\,\left( \frac{(n-2)^2\,(n-1)\,(x-n)}{n\,(1 - n + x)}\,A_{n-2} \right. \\&&+\; \frac{(n-1)\,(3\,n - 3\,n^2 - 4\,x + 8\,n\,x - 4\,x^2)}{n}\,A_{n-1} \\&&+\; \left. \frac{(x-n)\,(-3\,n + 3\,n^2 + 4\,x - 8\,n\,x + 4\,x^2)}{1 - n + x}\,A_n \right) \end{eqnarray*} are derived.
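The derived recurrence equation for $A_n=F_n^2$ can be confirmed numerically from the recurrence (\ref{eq:differenceeq}) alone; the following spot-check (ours) uses an arbitrary evaluation point:

```python
import math

def bateman(N, x):
    # F_0 = e^{-x}, F_1 = -2x e^{-x};  n F_n = 2(n-1-x) F_{n-1} - (n-2) F_{n-2}
    F = [math.exp(-x), -2*x*math.exp(-x)]
    for n in range(2, N + 1):
        F.append((2*(n - 1 - x)*F[n-1] - (n - 2)*F[n-2]) / n)
    return F

x = 0.9
F = bateman(10, x)
A = [f*f for f in F]                     # A_n = F_n^2
for n in range(2, 9):
    # recurrence equation for A_n as derived above
    rhs = ((n-2)**2 * (n-1) * (x-n) / (n*(1 - n + x)) * A[n-2]
           + (n-1) * (3*n - 3*n**2 - 4*x + 8*n*x - 4*x**2) / n * A[n-1]
           + (x-n) * (-3*n + 3*n**2 + 4*x - 8*n*x + 4*x**2) / (1 - n + x) * A[n]) / (n+1)**2
    assert abs(rhs - A[n+1]) < 1e-8
```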
\section{Derivative rules of special functions} \label{sec:Derivative rules of special functions}
Many Computer Algebra systems like {\sc Axiom} \cite{Axi}, {\sc Macsyma} \cite{Mac}, {\sc Maple} \cite{Map}, {\sc Mathematica} \cite{Wol}, or {\sc Reduce} \cite{Red} support the work with special functions. On the other hand, there are so many identities for special functions that it is a nontrivial task to decide which properties should be used by the system (and in which way) for the work with those functions.
Since all Computer Algebra systems support derivatives, as a first question it is natural to ask how the current implementations of Computer Algebra systems handle the derivatives of special functions. Here are some examples: {\sc Mathematica} (Version 2.2) gives
{\small \begin{verbatim} In[1]:= D[BesselI[n,x],x]
BesselI[-1 + n, x] + BesselI[1 + n, x] Out[1]= --------------------------------------
2 In[2]:= D[LaguerreL[n,a,x],x]
Out[2]= -LaguerreL[-1 + n, 1 + a, x] \end{verbatim} }\noindent We note that in {\sc Mathematica} the derivatives of all special functions are implemented symbolically. On the other hand, we notice that, given the function $I_n\:(x)$, {\sc Mathematica}'s derivative introduces two new functions: $I_{n-1}\:(x)$, and $I_{n+1}\:(x)$. Given the Laguerre polynomial $L_n^{(\alpha)}(x)$, the derivative produced introduces a new function where both $n$, and $\alpha$ are altered. The representation used is optimal for numerical purposes, but is not a representation according to our classification.
With {\sc Maple} (Version V.2) we get
{\small \begin{verbatim} > diff(BesselI(n,x),x);
n BesselI(n, x)
BesselI(n + 1, x) + ---------------
x
> diff(L(n,a,x),x);
d
---- L(n, a, x)
dx \end{verbatim} }\noindent Thus {\sc Maple}'s derivative for the Bessel function $I_n\:(x)$ introduces only one new function $I_{n+1}(x)$, and is of type (\ref{eq:Derivative rule}), whereas (even if {\tt orthopoly} is loaded) no symbolic derivative of the Laguerre polynomial $L_n^{(\alpha)}(x)$ is implemented.
Obviously there is no unique way to declare the derivative of a special function. However, we note that if we declare the derivative of a special function by a derivative rule of type (\ref{eq:Derivative rule}) of order $m$ then we can be sure that the derivative of the special function $f_n(x)$ introduces at most $m$ new functions, namely $f_{n-k}(x)\;(k=1,\ldots,m)$. Moreover, if the family of special functions depends on several parameters, then the given representation of the derivative does not use any functions with other parameters changed.
Here we give a list of the backward derivative rules of the form (\ref{eq:Derivative rule}) for the families of special functions that we introduced in \S~\ref{sec:Introduction} which all turn out to be of order two (see e.\ g.\ \cite{AS}, (9.1.27) (Bessel and Hankel functions), (9.2.26) (Bessel functions), (13.4.11), (13.4.26) (Kummer functions), (13.4.29)--(13.4.33) (Whittaker functions), (8.5.4) (associated Legendre functions), and \S~22.8 (orthogonal polynomials)):
\begin{eqnarray} J_n'\:(x) &=&
J_{n-1}\:(x)-\frac{n}{x}\,J_n\:(x) \;, \label{eq:Jnstrich} \nonumber \\[3mm] Y_n'\:(x) &=&
Y_{n-1}\:(x)-\frac{n}{x}\,Y_n\:(x) \;, \label{eq:Ynstrich} \nonumber \\[3mm] I_n'\:(x) &=&
I_{n-1}\:(x)-\frac{n}{x}\,I_n\:(x) \;, \label{eq:Instrich} \nonumber \\[3mm] K_n'\:(x) &=&
-K_{n-1}\:(x)-\frac{n}{x}\,K_n\:(x) \;, \label{eq:Knstrich} \nonumber \\[3mm] \ded x H_n^{(1)}(x) &=& H_{n-1}^{(1)}\:(x)-\frac{n}{x}\,H_n^{(1)}\:(x) \label{eq:Hn1strich} \;, \nonumber \\[3mm] \ded x H_n^{(2)}(x) &=& H_{n-1}^{(2)}\:(x)-\frac{n}{x}\,H_n^{(2)}\:(x) \label{eq:Hn2strich} \;, \nonumber \\[3mm] \ded x M(a,b,x) \!\!&=&
\ed x\Big( (b-a)\,M(a-1,b,x)-(b-a-x)\,M(a,b,x) \Big) \label{eq:Mstrich} \;, \nonumber \\[3mm] \ded x U(a,b,x) \!\!&=& \ed x\Big( -U(a-1,b,x)+(a-b+x)\,U(a,b,x)\Big) \label{eq:KummerUstrich} \;, \nonumber \\[3mm]
M_{n,m}'\:(x) &=& \frac{1}{2x}\Big( (1+2m-2n)\,M_{n-1,m}\:(x)+(2n-x)\,M_{n,m}\:(x)\Big) \;, \label{eq:Mnmstrich} \nonumber \\[3mm]
W_{n,m}'\:(x) &=& \ed{4x}\left( (1-4m^2-4n+4n^2)\,W_{n-1,m}\:(x)+(4n-2x)\,W_{n,m}\:(x)\right)
\;, \label{eq:Wnmstrich} \nonumber \\[3mm] \ded x P_a^b(x) &=& \ed{1-x^2}\left( (a+b)\,P_{a-1}^b(x)-a\,x\,P_a^b(x)\right) \;, \label{eq:Pabstrich} \nonumber \\[3mm] \ded x Q_a^b(x) &=& \ed{1-x^2}\left( (a+b)\,Q_{a-1}^b(x)-a\,x\,Q_a^b(x)\right) \;, \label{eq:Qabstrich} \nonumber \\[3mm]
\ded x P_n^{(\alpha,\beta)}(x) \!\! &=& \!\! \ed{(2n\!+\!\alpha\!+\!\beta)(1\!-\!x^2)} \left( 2(n\!+\!\alpha)(n\!+\!\beta)P_{n-1}^{(\alpha,\beta)}(x)+ n(\alpha\!-\!\beta\!-\!(2n\!+\!\alpha\!+\!\beta)x)P_{n}^{(\alpha,\beta)}(x) \right) \!, \label{eq:Jacobistrich} \nonumber \\[3mm] \ded x C_n^{(\alpha)}\:(x) &=& \ed{1-x^2} \left( (n+2\alpha-1)\,C_{n-1}^{(\alpha)}\:(x)-n\,x\,C_n^{(\alpha)}\:(x) \right) \;, \label{eq:Gegenbauerstrich} \nonumber \\[3mm] T_n'\:(x) &=& \ed{1-x^2} \Big( n\,T_{n-1}\:(x)-n\,x\,T_n\:(x) \Big) \;, \label{eq:ChebyshevTstrich} \nonumber \\[3mm] U_n'\:(x) &=& \ed{1-x^2} \Big( (n+1)\,U_{n-1}\:(x)-n\,x\,U_n\:(x) \Big) \;, \label{eq:ChebyshevUstrich} \nonumber \\[3mm] P_n'\:(x) &=& \ed{1-x^2} \Big( n\,P_{n-1}\:(x)-n\,x\,P_n\:(x) \Big) \;, \label{eq:Legendrestrich} \nonumber
\\[3mm] \ded x L_{n}^{(\alpha)}(x) &=& \ed{x} \left( -(n+\alpha)\,L_{n-1}^{(\alpha)}(x)+n\,L_{n}^{(\alpha)}(x) \right) \;, \label{eq:Laguerrestrich}
\\[3mm] H_{n}'(x) &=& 2n\,H_{n-1}(x) \;. \label{eq:Hermitestrich} \nonumber \end{eqnarray}
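Since the orthogonal polynomials in this list are generated by integer three-term recursions, rules like those for $T_n$ and $H_n$ can be verified in exact integer arithmetic; a small sketch (ours), with polynomials as coefficient lists:

```python
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]
def pscale(c, a): return [c*v for v in a]
def pmulx(a):     return [0] + a
def pdiff(a):     return [k*c for k, c in enumerate(a)][1:] or [0]
def iszero(a):    return all(c == 0 for c in a)

# Chebyshev T_{n+1} = 2x T_n - T_{n-1};  Hermite H_{n+1} = 2x H_n - 2n H_{n-1}
T, H = [[1], [0, 1]], [[1], [0, 2]]
for n in range(1, 10):
    T.append(padd(pscale(2, pmulx(T[n])), pscale(-1, T[n-1])))
    H.append(padd(pscale(2, pmulx(H[n])), pscale(-2*n, H[n-1])))

for n in range(1, 10):
    # (1 - x^2) T_n' - n T_{n-1} + n x T_n = 0
    d = pdiff(T[n])
    assert iszero(padd(padd(d, pscale(-1, pmulx(pmulx(d)))),
                       padd(pscale(-n, T[n-1]), pscale(n, pmulx(T[n])))))
    # H_n' - 2n H_{n-1} = 0
    assert iszero(padd(pdiff(H[n]), pscale(-2*n, H[n-1])))
```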
\section{Recurrence equations of special functions} \label{sec:Recurrence equations of special functions}
Whenever subexpressions of the form $r_k\,f_{n-k}\;(r_k\;\mbox{rational},\;k\in\Z)$ occur in an expression involving an admissible family of order $m$, we may use the recurrence equation recursively to replace occurrences of such expressions until finally only $m$ successive terms of the same type remain.
This allows, for example, to reduce the number of terms in any linear combination (over $K[x]$) of derivatives of $f_n$ to $m$, a fact with which we will deal in more detail in \S~\ref{sec:Algorithmic generation of differential equations}.
We show how {\sc Mathematica} and {\sc Maple} work with regard to this question. Whereas {\sc Mathematica} does not have any built-in capabilities to simplify the following linear combinations of Bessel and Laguerre functions, {\small \begin{verbatim} In[3]:= BesselI[n+1,x]+2*n/x*BesselI[n,x]-BesselI[n-1,x]
2 n BesselI[n, x] Out[3]= -BesselI[-1 + n, x] + ----------------- + BesselI[1 + n, x]
x
In[4]:= Simplify[%]
2 n BesselI[n, x] Out[4]= -BesselI[-1 + n, x] + ----------------- + BesselI[1 + n, x]
x
In[5]:= LaguerreL[n+1,a,x]-(2*n+a+1-x)*LaguerreL[n,a,x]+(n+a)*LaguerreL[n-1,a,x]
Out[5]= (a + n) LaguerreL[-1 + n, a, x] -
> (1 + a + 2 n - x) LaguerreL[n, a, x] + LaguerreL[1 + n, a, x]
In[6]:= Simplify[%]
Out[6]= (a + n) LaguerreL[-1 + n, a, x] -
> (1 + a + 2 n - x) LaguerreL[n, a, x] + LaguerreL[1 + n, a, x] \end{verbatim} }\noindent with {\sc Maple} we get
{\small \begin{verbatim} > BesselI(n+1,x)+2*n/x*BesselI(n,x)-BesselI(n-1,x);
n BesselI(n, x)
BesselI(n + 1, x) + 2 --------------- - BesselI(- 1 + n, x)
x
> simplify(");
0
> L(n+1,a,x)-(2*n+a+1-x)*L(n,a,x)+(n+a)*L(n-1,a,x);
L(n + 1, a, x) - (2 n + a + 1 - x) L(n, a, x) + (n + a) L(n - 1, a, x)
> simplify(");
L(n + 1, a, x) - 2 L(n, a, x) n - L(n, a, x) a - L(n, a, x) + L(n, a, x) x
+ L(n - 1, a, x) n + L(n - 1, a, x) a \end{verbatim} }\noindent
i.\ e.\ {\sc Maple}'s {\tt simplify} command supports simplification with the aid of the recurrence equations for the Bessel functions. On the other hand, for the orthogonal polynomials (even if {\tt orthopoly} is loaded) no simplifications occur.
In the rest of this section we give a list of the recurrence equations of the given type for the families of special functions that we consider which all turn out to be of order two (see e.\ g.\ \cite{AS}, (9.1.27), (9.2.26), (13.4.1), (13.4.15), (13.4.29), (13.4.31), (8.5.3), and \S~22.7). We list them in the form explicitly solved for $F_{n+1}$ as this is the usual form found in mathematical dictionaries. \begin{eqnarray} J_{n+1}\:(x) &=& -J_{n-1}\:(x)+\frac{2n}{x}\,J_n\:(x) \;, \label{eq:Jn+1} \nonumber \\[3mm] Y_{n+1}\:(x) &=& -Y_{n-1}\:(x)+\frac{2n}{x}\,Y_n\:(x) \;, \label{eq:Yn+1} \nonumber \\[3mm] I_{n+1}\:(x) &=& I_{n-1}\:(x)-\frac{2n}{x}\,I_n\:(x) \;, \label{eq:In+1} \nonumber \\[3mm] K_{n+1}\:(x) &=& K_{n-1}\:(x)+\frac{2n}{x}\,K_n\:(x) \;, \label{eq:Kn+1} \nonumber \\[3mm] H_{n+1}^{(1)}\:(x) &=& -H_{n-1}^{(1)}\:(x)+\frac{2n}{x}\,H_n^{(1)}\:(x) \;, \label{eq:Hn+11} \nonumber \\[3mm] H_{n+1}^{(2)}\:(x) &=& -H_{n-1}^{(2)}\:(x)+\frac{2n}{x}\,H_n^{(2)}\:(x) \;, \label{eq:Hn+12} \nonumber \\[3mm] M(a+1,b,x) &=& \ed a\Big( (b-a)\,M(a-1,b,x)+(2a-b+x)\,M(a,b,x)
\Big) \;, \label{eq:Mn+1} \nonumber \\[3mm] U(a+1,b,x) &=& -\frac{1}{a\,(1+a-b)}\Big( U(a-1,b,x)+(b-2a-x)\,U(a,b,x) \Big) \;, \label{eq:Un+1} \nonumber \\[3mm]
M_{n+1,m}\:(x) &=& \frac{1}{1 + 2 m + 2 n} \Big( (1+2m-2n)\,M_{n-1,m}\:(x)+(4n-2x)\,M_{n,m}\:(x) \Big) \;, \label{eq:Mn+1m} \nonumber \\[3mm]
W_{n+1,m}\:(x) &=& \ed{4} \left( (-1+4m^2+4n-4n^2)\,W_{n-1,m}\:(x)-(8n-4x)\,W_{n,m}\:(x) \right) \;, \label{eq:Wn+1m} \nonumber \\[3mm] P_{a+1}^b\:(x) &=& \ed{a-b+1} \Big( -(a+b)\,P_{a-1}^b\:(x)+(2a+1)\,x\,P_{a}^b\:(x) \Big) \;, \label{eq:Pa+1b} \nonumber \\[3mm] Q_{a+1}^b\:(x) &=& \ed{a-b+1} \Big( -(a+b)\,Q_{a-1}^b\:(x)+(2a+1)\,x\,Q_{a}^b\:(x) \Big) \;, \label{eq:Qn+1b} \nonumber \\[3mm]
P_{n+1}^{(\alpha,\beta)}(x) &=& \ed{2\,(n\!+\!1)\,(n\!+\!\alpha\!+\!\beta\!+\!1)\,(2n\!+\!\alpha\!+\!\beta)} \left( -2(n\!+\!\alpha)(n\!+\!\beta)(2n\!+\!\alpha\!+\!\beta\!+2)\,P_{n-1}^{(\alpha,\beta)}(x) \right. \nonumber \\[3mm] && + \left. \left( (2n\!+\!\alpha\!+\!\beta\!+1)(\alpha^2-\beta^2)+ (2n\!+\!\alpha\!+\!\beta)_3\,x\right) P_{n}^{(\alpha,\beta)}(x) \right) \;, \label{eq:Jacobin+1} \nonumber \\[3mm] C_{n+1}^{(\alpha)}\:(x) &=& \ed{n+1} \left( -(n+2\alpha-1)\,C_{n-1}^{(\alpha)}\:(x)+2(n+\alpha)\,x\,C_n^{(\alpha)}\:(x) \right) \;, \label{eq:Gegenbauern+1} \nonumber \\[3mm] T_{n+1}\:(x) &=& -T_{n-1}\:(x)+2\,x\,T_n\:(x) \;, \label{eq:ChebyshevTn+1} \nonumber \\[3mm] U_{n+1}\:(x) &=& -U_{n-1}\:(x)+2\,x\,U_n\:(x) \;, \label{eq:ChebyshevUn+1} \nonumber \\[3mm] P_{n+1}\:(x) &=& \ed{n+1} \Big( -n\,P_{n-1}\:(x)+(2n+1)\,x\,P_n\:(x) \Big) \;, \label{eq:Legendren+1} \nonumber \\[3mm] L_{n+1}^{(\alpha)}(x) &=& \ed{n+1} \left( -(n+\alpha)\,L_{n-1}^{(\alpha)}(x)+(2n+\alpha+1-x)\,L_{n}^{(\alpha)}(x) \right) \;, \label{eq:Laguerren+1} \nonumber \\[3mm] H_{n+1}(x) &=& -2n\,H_{n-1}(x)+2x\,H_{n}(x) \;. \label{eq:Hermiten+1} \nonumber \end{eqnarray} Note that $(a)_k$ (which is used in the recurrence equation for the Jacobi polynomials $P_{n}^{(\alpha,\beta)}(x)$) denotes the {\sl Pochhammer symbol\/} (or {\sl shifted factorial\/}) defined by $(a)_{k}:=\prod\limits_{j=1}^k (a\!+\!j\!-\!1)$.
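As an exact spot-check (ours), the Legendre polynomials can be generated from their Rodrigues representation $P_n(x)=\frac{1}{2^n\,n!}\frac{d^n}{dx^n}(x^2-1)^n$ and tested against both the recurrence above and the derivative rule of \S~\ref{sec:Derivative rules of special functions}:

```python
from fractions import Fraction
from math import factorial

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]
def pscale(c, a): return [c*v for v in a]
def pmulx(a):     return [Fraction(0)] + a
def pmul(a, b):
    r = [Fraction(0)]*(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i+j] += ai*bj
    return r
def pdiff(a): return [k*c for k, c in enumerate(a)][1:] or [Fraction(0)]
def iszero(a): return all(c == 0 for c in a)

def legendre(n):
    # Rodrigues representation: P_n = 1/(2^n n!) (d/dx)^n (x^2 - 1)^n
    p = [Fraction(1)]
    for _ in range(n):
        p = pmul(p, [Fraction(-1), Fraction(0), Fraction(1)])
    for _ in range(n):
        p = pdiff(p)
    return pscale(Fraction(1, 2**n * factorial(n)), p)

P = [legendre(n) for n in range(8)]
for n in range(1, 7):
    # three-term recurrence: (n+1) P_{n+1} + n P_{n-1} - (2n+1) x P_n = 0
    assert iszero(padd(pscale(n+1, P[n+1]),
                       padd(pscale(n, P[n-1]), pscale(-(2*n+1), pmulx(P[n])))))
    # derivative rule: (1 - x^2) P_n' - n P_{n-1} + n x P_n = 0
    d = pdiff(P[n])
    assert iszero(padd(padd(d, pscale(-1, pmulx(pmulx(d)))),
                       padd(pscale(-n, P[n-1]), pscale(n, pmulx(P[n])))))
```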
We note further that for functions with several ``discrete'' variables it may happen that for each of them there exists a recurrence equation. As an example we consider the Laguerre polynomials for which we have (\cite{AS} (22.7.29), in combination with (22.7.30))
\begin{equation} L_{n}^{(\alpha+1)}(x) = \ed{x} \left( -(n+\alpha)\,L_{n}^{(\alpha-1)}(x)+(\alpha+x)\,L_{n}^{(\alpha)}(x) \right) \;. \label{eq:alternateRELaguerre} \end{equation} In \S~\ref{sec:Functions of the hypergeometric type as admissible families} we will demonstrate that generalized hypergeometric functions satisfy recurrence equations with respect to all their parameters.
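For integer values of $\alpha$, equation (\ref{eq:alternateRELaguerre}) can be verified in exact arithmetic from the standard sum representation $L_n^{(\alpha)}(x)=\sum\limits_{k=0}^n(-1)^k{n+\alpha\choose n-k}\frac{x^k}{k!}$; a sketch of ours:

```python
from fractions import Fraction
from math import comb, factorial

def laguerre(n, a):
    # coefficient list of L_n^{(a)}(x) = sum_{k=0}^n (-1)^k C(n+a, n-k) x^k / k!   (a >= 0 integer)
    return [Fraction((-1)**k * comb(n + a, n - k), factorial(k)) for k in range(n + 1)]

def padd(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(m)]
def pscale(c, p): return [c*v for v in p]
def pmulx(p):     return [Fraction(0)] + p

for n in range(0, 7):
    for a in (1, 2, 3):
        # x L_n^{(a+1)} + (n+a) L_n^{(a-1)} - (a+x) L_n^{(a)} = 0
        res = padd(pmulx(laguerre(n, a + 1)),
                   padd(pscale(n + a, laguerre(n, a - 1)),
                        pscale(-1, padd(pscale(a, laguerre(n, a)),
                                        pmulx(laguerre(n, a))))))
        assert all(c == 0 for c in res)
```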
To ensure that the algorithms of \S~\ref{sec:Algorithmic verification of identities}-- \S~\ref{sec:Algorithmic verification of formulas involving symbolic sums} can be applied safely, all of these recurrence equations should be implemented and applied recursively for simplification purposes.
\section{Embedding of one-variable functions into admissible families} \label{sec:Embedding of one-variable functions into admissible families}
In this section we first consider how the elementary transcendental functions are covered by the given approach.
Consider the exponential function $f(x)=e^x$. This function can be embedded into the admissible family $f_n$, defined by the properties \[ f_n'(x)=f_n(x)\;, \quad\quad f_{n+1}(x)=f_n(x) \quad\quad\mbox{and}\quad\quad f_0(x)=e^x \;, \] i.\ e.\ the family of iterated derivatives of $e^x$.
Obviously this is a representation of an admissible family of order one.
Moreover in the given case it turns out that $f_n(x)=e^x=f_0(x)$ for all $n\in\Z$, so there is no actual need to give the functions numbers, and therefore we (obviously) keep the usual notation.
Similarly the functions $\sin x$ and $\cos x$ are embedded into the admissible family $f_n$ of order two given by the properties \[ f_n'(x)=f_{n-1}(x)\;, \quad\quad f_{n+1}(x)=-f_{n-1}(x) \;, \quad\quad\mbox{and}\quad\quad f_0(x)=\cos x\;, \quad\quad f_1(x)=\sin x \;. \] Again, the family of functions $f_n$ is finite, and our numbering is unnecessary: \[ f_n(x)=\funkdeffff {\cos x}{n=4m\;(m\in\Z)} {\sin x}{n=4m+1\;(m\in\Z)} {-\cos x}{n=4m+2\;(m\in\Z)} {-\sin x}{n=4m+3\;(m\in\Z)} \;. \] Essentially there are only the two functions $\cos x$, and $\sin x$ involved. Note, however, that both functions are needed as no simple first order differential equation for $\sin x$ or $\cos x$ exists.
Other nontrivial examples of essentially finite admissible families of special functions are formed by the Airy functions. Let $\mathop{\rm Ai}\nolimits_n\:(x)=\mathop{\rm Ai}\nolimits^{(n)}\:(x)$, i.\ e.\ \[ \mathop{\rm Ai}\nolimits_n'\:(x)=\mathop{\rm Ai}\nolimits_{n+1}\:(x) \;. \] By the differential equation for the Airy functions (see e.\ g.\ \cite{AS}, (10.4)) we have $\mathop{\rm Ai}\nolimits''\:(x)-x\,\mathop{\rm Ai}\nolimits\:(x)=0$, so that from Leibniz's rule it follows that \begin{eqnarray*} \mathop{\rm Ai}\nolimits_{n+1}\:(x) &=& \mathop{\rm Ai}\nolimits^{(n+1)}\:(x)= \Big( \mathop{\rm Ai}\nolimits''\:(x)\Big)^{(n-1)} \\&=& \Big( x\,\mathop{\rm Ai}\nolimits\:(x)\Big)^{(n-1)}= \sum_{k=0}^{n-1} \ueber{n-1}{k}\,x^{(k)}\,\Big(\mathop{\rm Ai}\nolimits\:(x)\Big)^{(n-1-k)} \\&=& x\,\mathop{\rm Ai}\nolimits^{(n-1)}\:(x)+(n-1)\,\mathop{\rm Ai}\nolimits^{(n-2)}\:(x)= x\,\mathop{\rm Ai}\nolimits_{n-1}\:(x)+(n-1)\,\mathop{\rm Ai}\nolimits_{n-2}\:(x) \;, \end{eqnarray*}
and therefore $\mathop{\rm Ai}\nolimits\:(x)$ is embedded into the admissible family $\mathop{\rm Ai}\nolimits_n$ of order three given by \begin{equation}
\mathop{\rm Ai}\nolimits_n'\:(x)=\mathop{\rm Ai}\nolimits_{n+1}\:(x)\;, \quad\quad \mathop{\rm Ai}\nolimits_{n+1}\:(x)=x\,\mathop{\rm Ai}\nolimits_{n-1}\:(x)+(n-1)\,\mathop{\rm Ai}\nolimits_{n-2}\:(x) \;, \label{eq:DR,RE AiryAi} \end{equation} and we have the initial functions \[ \mathop{\rm Ai}\nolimits_0\:(x)=\mathop{\rm Ai}\nolimits\:(x)\;, \quad\quad \mathop{\rm Ai}\nolimits_1\:(x)=\mathop{\rm Ai}\nolimits'\:(x) \quad\quad\mbox{and}\quad\quad \mathop{\rm Ai}\nolimits_2\:(x)=x\,\mathop{\rm Ai}\nolimits\:(x) \;. \] Similarly $\mathop{\rm Bi}\nolimits\:(x)$ is embedded into the admissible family of order three given by \begin{equation}
\mathop{\rm Bi}\nolimits_n'\:(x)=\mathop{\rm Bi}\nolimits_{n+1}\:(x)\;, \quad\quad \mathop{\rm Bi}\nolimits_{n+1}\:(x)=x\,\mathop{\rm Bi}\nolimits_{n-1}\:(x)+(n-1)\,\mathop{\rm Bi}\nolimits_{n-2}\:(x) \;, \label{eq:DR,RE AiryBi} \end{equation} and the initial functions \[ \mathop{\rm Bi}\nolimits_0\:(x)=\mathop{\rm Bi}\nolimits\:(x)\;, \quad\quad \mathop{\rm Bi}\nolimits_1\:(x)=\mathop{\rm Bi}\nolimits'\:(x) \quad\quad\mbox{and}\quad\quad \mathop{\rm Bi}\nolimits_2\:(x)=x\,\mathop{\rm Bi}\nolimits\:(x) \;. \] Our indexed families turn out to be representable by \[ \mathop{\rm Ai}\nolimits_n\:(x)=p_n(x)\,\mathop{\rm Ai}\nolimits\:(x)+q_n(x)\,\mathop{\rm Ai}\nolimits'\:(x) \quad\quad\mbox{and}\quad\quad \mathop{\rm Bi}\nolimits_n\:(x)=p_n(x)\,\mathop{\rm Bi}\nolimits\:(x)+q_n(x)\,\mathop{\rm Bi}\nolimits'\:(x) \;, \] with polynomials $p_n$ and $q_n$ in $x$. This shows, however, that to deal with the Airy functions algorithmically as is suggested in this paper, besides the functions $\mathop{\rm Ai}\nolimits\:(x)$ and $\mathop{\rm Bi}\nolimits\:(x)$ the two {\sl independent} functions $\mathop{\rm Ai}\nolimits'\:(x)$ and $\mathop{\rm Bi}\nolimits'\:(x)$ are needed, but no others. Let us look at how Computer Algebra systems work with the Airy functions.
{\sc Maple} handles them as follows:
{\small \begin{verbatim} > Ai(x);
Ai(x) > diff(Ai(x),x);
1/2 3/2
2 BesselK(1/3, 2/3 x )
1/4 ---------------------------
1/4
x Pi
/ 3/2 \
1/2 5/4 | 3/2 BesselK(1/3, 2/3 x )|
2 x |- BesselK(4/3, 2/3 x ) + 1/2 ----------------------|
| 3/2 |
\ x /
+ 1/3 -----------------------------------------------------------------
Pi \end{verbatim} }\noindent
{\small \begin{verbatim} > simplify(diff(Ai(x),x$2)-x*Ai(x));
1/2 3/2 1/2 3/2 3/2
1/48 (- 3 2 BesselK(1/3, 2/3 x ) - 8 2 BesselK(-2/3, 2/3 x ) x
1/2 3 3/2 9/4 / 5/4
+ 16 2 x BesselK(1/3, 2/3 x ) - 48 x Ai(x) Pi) / (x Pi)
/ > diff(Bi(x),x);
d
---- Bi(x)
dx > diff(Bi(x),x$2);
2
d
----- Bi(x)
2
dx \end{verbatim} }\noindent So the derivative of $\mathop{\rm Ai}\nolimits\:(x)$ is represented by Bessel functions, whereas the function $\mathop{\rm Ai}\nolimits\:(x)$ itself is not, and therefore the expression {\tt diff(Ai(x),x\verb+$+2)-x*Ai(x)} is not simplified. On the other hand the derivative of $\mathop{\rm Bi}\nolimits\:(x)$ is {\sl not} a valid {\sc Maple} function. With {\sc Mathematica} we get
{\small \begin{verbatim} In[7]:= D[AiryAi[x],x]
Out[7]= AiryAiPrime[x]
In[8]:= D[AiryAiPrime[x],x]
Out[8]= x AiryAi[x]
In[9]:= D[AiryAi[x],{x,2}]-x*AiryAi[x]
Out[9]= 0
In[10]:= D[AiryBi[x],x]
Out[10]= AiryBiPrime[x]
In[11]:= D[AiryBiPrime[x],x]
Out[11]= x AiryBi[x]
In[12]:= D[AiryBi[x],{x,2}]-x*AiryBi[x]
Out[12]= 0 \end{verbatim} }\noindent Thus we see that in this situation {\sc Mathematica} does exactly what we suggest: It works with the independent functions $\mathop{\rm Ai}\nolimits\:(x)$, $\mathop{\rm Ai}\nolimits'\:(x)$, $\mathop{\rm Bi}\nolimits\:(x)$, $\mathop{\rm Bi}\nolimits'\:(x)$, and the derivative rules (\ref{eq:DR,RE AiryAi}) and (\ref{eq:DR,RE AiryBi}).
As a further example of an admissible family we consider the iterated integrals \[ \mathop{\rm erfc}\nolimits_n\:(x)=\int\limits_x^\infty \mathop{\rm erfc}\nolimits_{n-1}\:(t)\,dt \] of the (complementary) error function $\mathop{\rm erfc}\nolimits\:(x)=1-\mathop{\rm erf}\nolimits\:(x)=\mathop{\rm erfc}\nolimits_0\:(x)$ (see e.\ g.\ \cite{AS}, (7.2)) that form the admissible family with \[ \mathop{\rm erfc}\nolimits_n'\:(x)=-\mathop{\rm erfc}\nolimits_{n-1}\:(x)\;, \quad\quad \mathop{\rm erfc}\nolimits_{n+1}\:(x)=\frac{1}{2(n+1)}\mathop{\rm erfc}\nolimits_{n-1}\:(x)-\frac{x}{n+1}\,\mathop{\rm erfc}\nolimits_{n}\:(x) \;, \] and the initial functions \[ \mathop{\rm erfc}\nolimits_0\:(x)=\mathop{\rm erfc}\nolimits\:(x)\;, \quad\quad \mathop{\rm erfc}\nolimits_1\:(x)=-\ed{\sqrt\pi} \left(\sqrt\pi\,x\,\mathop{\rm erfc}\nolimits\:(x)-e^{-x^2}\right) \] (one may also use the initial value function $\mathop{\rm erfc}\nolimits_{-1}\;(x)=\frac{2}{\sqrt\pi}e^{-x^2}$). In particular, $\mathop{\rm erfc}\nolimits x$ is embedded into an admissible family.
{\sc Maple} deals with these functions as suggested:
{\small \begin{verbatim} > diff(erfc(n,x),x);
- erfc(n - 1, x)
> simplify(diff(erfc(n,x),x$2)+2*x*diff(erfc(n,x),x)-2*n*erfc(n,x));
0 \end{verbatim} }\noindent As a final example, we mention another family of iterated integrals, the {\sl Abramowitz functions} \[ A_n(x):=\int\limits_0^\infty t^n\,e^{-t^2-x/t}\,dt \] (see \cite{Abr}, and \cite{AS}, (27.5)) which form an admissible family with derivative rule \[ A_n'(x)= \ded x \left( \int\limits_0^\infty t^n\,e^{-t^2-x/t}\,dt\right) = \int\limits_0^\infty \ded x \left( t^n\,e^{-t^2-x/t}\right) dt = -\int\limits_0^\infty t^{n-1}\,e^{-t^2-x/t}\,dt= -A_{n-1}(x) \] of order one (see \cite{AS}, (27.5.2)), and recurrence formula \[ A_{n+1}(x)=\frac{n}{2}\,A_{n-1}(x)+\frac{x}{2}\,A_{n-2}(x) \] of order three (\cite{AS}, (27.5.3)).
Again, by this embedding into an admissible family, in particular the function $A_0(x)=\int\limits_0^\infty e^{-t^2-x/t}\,dt$ is covered by our approach.
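The recurrence for the Abramowitz functions can be spot-checked by direct numerical quadrature; a rough Python sketch (ours), truncating the integral at $t=12$:

```python
import math

def abramowitz(n, x, T=12.0, steps=4800):
    """A_n(x) = int_0^oo t^n exp(-t^2 - x/t) dt by composite Simpson on
       [0, T].  For x > 0 the integrand vanishes at t = 0 and decays
       like exp(-t^2), so truncating at T = 12 is harmless.
       (Function name is ours.)"""
    h = T / steps
    def f(t):
        return 0.0 if t == 0.0 else t**n * math.exp(-t * t - x / t)
    s = f(0.0) + f(T)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0
```

With $n=2$ the recurrence reads $A_3(x)=A_1(x)+\frac{x}{2}\,A_0(x)$, which the quadrature values reproduce to quadrature accuracy.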
\section{Embedding the inhomogeneous case} \label{sec:Embedding the inhomogeneous case}
Some families of functions are characterized by inhomogeneous differential rules and recurrence equations. Examples of this situation are the exponential integrals given by \[ E_n\:(x)=\int\limits_1^\infty \frac{e^{-xt}}{t^n}\,dt \] (see e.\ g.\ \cite{AS}, (5.1)), and the Struve functions ${\bf H}_n(x)$ and ${\bf L}_n(x)$ (see e.\ g.\ \cite{AS}, Chapter~12), for which we have the inhomogeneous properties \[ E_n'\:(x)=-E_{n-1}\:(x)\;, \quad\quad E_{n+1}\:(x)=\frac{e^{-x}}{n}-\frac{x}{n}\,E_n\:(x) \;,
\] (\cite{AS}, (5.1.14) and (5.1.26)), \begin{equation} {\bf H}_{n-1}(x)-{\bf H}_{n+1}(x)=2\,{\bf H}_n'(x)- \frac{x^n}{2^n\,\sqrt\pi\:\Gamma(n+3/2)} \;, \label{eq:Hnstrichorig} \end{equation} \[ {\bf H}_{n-1}(x)+{\bf H}_{n+1}(x)=\frac{2n}{x}\,{\bf H}_n(x)+ \frac{x^n}{2^n\,\sqrt\pi\:\Gamma(n+3/2)} \] (\cite{AS}, (12.1.9)--(12.1.10)), and \begin{equation} {\bf L}_{n-1}(x)+{\bf L}_{n+1}(x)=2\,{\bf L}_n'(x)- \frac{x^n}{2^n\,\sqrt\pi\:\Gamma(n+3/2)} \;, \label{eq:Lnstrichorig} \end{equation} \[ {\bf L}_{n-1}(x)-{\bf L}_{n+1}(x)=\frac{2n}{x}\,{\bf L}_n(x)+ \frac{x^n}{2^n\,\sqrt\pi\:\Gamma(n+3/2)} \] (\cite{AS}, (12.2.4)--(12.2.5)), respectively. Eliminating the inhomogeneous parts (using
$\Gamma(3/2+n)=(1/2+n)\,\Gamma(1/2+n)$), these examples are made into admissible families with the derivative rules \begin{eqnarray} E_n'\:(x) &=& -E_{n-1}\:(x) \;, \label{eq:Einstrich} \nonumber \\[3mm] {\bf H}_{n}'(x) &=& {\bf H}_{n-1}(x)-\frac{n}{x}\,{\bf H}_{n}(x) \;, \label{eq:StruveHstrich} \\[3mm] {\bf L}_{n}'(x) &=& {\bf L}_{n-1}(x)-\frac{n}{x}\,{\bf L}_{n}(x) \;, \label{eq:StruveLstrich} \end{eqnarray} and the recurrence equations \begin{eqnarray} E_{n+1}\:(x) &=& \ed{n}\Big( x\,E_{n-1}(x) +(n-1-x)\,E_n(x) \Big) \;, \label{eq:Einrecurrence} \nonumber \\[3mm] {\bf H}_{n+1}(x) &=& \ed{2n+1}\Big( x\,{\bf H}_{n-2}(x)+(1-4n)\,{\bf H}_{n-1}(x)+ \frac{x^2+2 n + 4 n^2}{x}\,{\bf H}_{n}(x) \Big) \;, \label{eq:StruveHrecurrence} \nonumber \\[3mm] {\bf L}_{n+1}(x) &=& \ed{2n+1}\Big( -x\,{\bf L}_{n-2}(x)-(1-4n)\,{\bf L}_{n-1}(x)+ \frac{x^2-2 n-4 n^2}{x}\,{\bf L}_{n}(x) \Big)
\;, \label{eq:StruveLrecurrence} \nonumber \end{eqnarray} so that the exponential integrals form an admissible family of order two, and the Struve functions ${\bf H}_{n}(x)$ and ${\bf L}_{n}(x)$ form admissible families of order three. Note that the above derivative rules (\ref{eq:StruveHstrich})--(\ref{eq:StruveLstrich}) are not listed in \cite{AS} although they are much simpler than the inhomogeneous relations (\ref{eq:Hnstrichorig})--(\ref{eq:Lnstrichorig}).
After bringing the inhomogeneous rules into the desired form, those families are recognized as admissible families, and our method can be applied.
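As a spot check, the homogeneous rule (\ref{eq:StruveHstrich}) and the recurrence for ${\bf H}_n$ can be tested numerically against the power series ${\bf H}_n(x)=\sum\limits_{k=0}^\infty \frac{(-1)^k\,(x/2)^{2k+n+1}}{\Gamma(k+3/2)\,\Gamma(k+n+3/2)}$ (see \cite{AS}, \S~12.1); a Python sketch (ours):

```python
import math

def struve_h(n, x, terms=30):
    """Struve function H_n(x) by its power series
       sum_k (-1)^k (x/2)^(2k+n+1) / (Gamma(k+3/2) Gamma(k+n+3/2))."""
    return sum((-1)**k * (x / 2)**(2 * k + n + 1)
               / (math.gamma(k + 1.5) * math.gamma(k + n + 1.5))
               for k in range(terms))
```

A central difference of ${\bf H}_1$ then matches ${\bf H}_0(x)-\frac{1}{x}{\bf H}_1(x)$, and the three-term recurrence for ${\bf H}_3$ is satisfied to machine precision.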
\section{Functions of the hypergeometric type as admissible families} \label{sec:Functions of the hypergeometric type as admissible families}
All functions introduced in this paper are special cases of functions of the hypergeometric type (see \cite{Koe92}). In this section we will show that the generalized hypergeometric function $_{p}F_{q}$ defined by \begin{equation} _{p}F_{q}\left.\left(\begin{array}{cccc} a_{1}&a_{2}&\cdots&a_{p}\\ b_{1}&b_{2}&\cdots&b_{q}\\
\end{array}\right| x\right) := \sum\limits_{k=0}^{\infty} A_k\,x^{k}= \sum\limits_{k=0}^{\infty} \frac {(a_{1})_{k}\cdot(a_{2})_{k}\cdots(a_{p})_{k}} {(b_{1})_{k}\cdot(b_{2})_{k}\cdots(b_{q})_{k}\,k!}x^{k} \label{eq:coefficientformula} \;, \end{equation} and thus by Theorem~\ref{th:Properties of admissible families}~(c) all functions of the hypergeometric type, form admissible families. To this end, we first deduce a derivative rule of order two for $_{p}F_{q}$.
Let us choose any of the numerator parameters $n:=a_k\;(k=1,\ldots,p)$ of $_{p}F_{q}$ as parameter $n$. Further we use the abbreviations \[ F_n(x)=\;_{p}F_{q}\left.\left(\begin{array}{cccc} n &a_{2}&\cdots&a_{p}\\ b_{1}&b_{2}&\cdots&b_{q}\\
\end{array}\right| x\right) = \sum\limits_{k=0}^{\infty} A_k(n)\,x^{k} \;. \] From the relation \[ \frac{(n+1)_k}{(n)_k}=\frac{n+k}{n} \] it follows that \[ n\,A_{k}(n+1)=(n+k)\,A_{k}(n) \;. \] Using the differential operator $\theta f(x)=x\,f'(x)$, we get by summation \begin{eqnarray*} n\,F_{n+1}(x)&=&n\sum\limits_{k=0}^{\infty} A_{k}(n+1)\,x^{k}= \sum\limits_{k=0}^{\infty} (n+k)\,A_{k}(n)\,x^{k} \\&=& n\,F_{n}(x)+\sum\limits_{k=0}^{\infty} k\,A_k(n)\,x^{k}= n\,F_{n}(x)+\theta F_n(x) \;, \end{eqnarray*} and therefore we are led to the derivative rule \begin{equation} \theta F_n(x)=n\,\Big(F_{n+1}(x)-F_n(x)\Big)\;, \quad\quad\mbox{or}\quad\quad F_n'(x)=\frac{n}{x}\,\Big(F_{n+1}(x)-F_n(x)\Big)\;. \label{eq:hypergeoDR} \end{equation} Hence we have established that for any of the numerator parameters $n:=a_k\;(k=1,\ldots,p)$ of $_{p}F_{q}$ such a simple (forward) derivative rule is valid.
We note that by similar means for each of the denominator parameters $n:=b_k\;(k=1,\ldots,q)$ of $_{p}F_{q}$ the simple (backward) derivative rule \begin{equation} \theta F_n(x)=(n-1)\,\Big(F_{n-1}(x)-F_n(x)\Big)\;, \quad\quad\mbox{or}\quad\quad F_n'(x)=\frac{n-1}{x}\,\Big(F_{n-1}(x)-F_n(x)\Big) \label{eq:hypergeoDRbackward} \end{equation} is derived.
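Both rules can be verified in exact rational arithmetic on truncated series; since they hold coefficientwise, truncations of equal length agree exactly. A Python sketch (ours):

```python
import math
from fractions import Fraction

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    r = Fraction(1)
    for i in range(k):
        r *= a + i
    return r

def coeff(num, den, k):
    """k-th Taylor coefficient A_k of pFq(num; den; x)."""
    c = Fraction(1, math.factorial(k))
    for a in num:
        c *= poch(a, k)
    for b in den:
        c /= poch(b, k)
    return c

def pfq(num, den, x, terms=25):
    """Truncated series of pFq at a rational point x."""
    return sum(coeff(num, den, k) * Fraction(x)**k for k in range(terms))

def theta_pfq(num, den, x, terms=25):
    """theta F = x F'(x) = sum_k k A_k x^k, same truncation length."""
    return sum(k * coeff(num, den, k) * Fraction(x)**k for k in range(terms))
```

For a $_2F_1$ with numerator parameter $n$ one then has exact equality of $\theta F_n$ with $n\,(F_{n+1}-F_n)$, and likewise for the backward rule in a denominator parameter.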
Next, we note that $F_n$ satisfies the well-known hypergeometric differential equation \begin{equation} \theta (\theta+b_1-1)\cdots (\theta+b_q-1)F_{n}(x) =x(\theta+a_1)(\theta+a_2)\cdots(\theta+a_p)F_{n}(x) \;. \label{eq:hypergeoDE} \end{equation} Replacing all occurrences of $\theta$ in (\ref{eq:hypergeoDE}) recursively by the derivative rule (\ref{eq:hypergeoDR}) or (\ref{eq:hypergeoDRbackward}), a recurrence equation for $F_n$ is obtained that turns out to have the same order as the differential equation (\ref{eq:hypergeoDE}), i.\ e.\ $\max\{p,q+1\}$.
We summarize the above results in the following \begin{theorem} \label{th:generalized hypergeometric function} {\rm The generalized hypergeometric function $\;_{p}F_{q}\left.\left(\begin{array}{cccc} a_{1}&a_{2}&\cdots&a_{p}\\ b_{1}&b_{2}&\cdots&b_{q}\\
\end{array}\right| x\right)$ satisfies the derivative rules \[ \theta F_n(x)=n\,\Big(F_{n+1}(x)-F_n(x)\Big) \] for any of its numerator parameters $n:=a_k\;(k=1,\ldots,p)$, and \[ \theta F_n(x)=(n-1)\,\Big(F_{n-1}(x)-F_n(x)\Big) \] for any of its denominator parameters $n:=b_k\;(k=1,\ldots,q)$, and recursive substitution of all occurrences of $\theta$ in the hypergeometric differential equation \[ \theta (\theta+b_1-1)\cdots (\theta+b_q-1)F_{n}(x) =x(\theta+a_1)(\theta+a_2)\cdots(\theta+a_p)F_{n}(x) \] generates a recurrence equation of the type (\ref{eq:Recurrence equation}) of order $\max\{p,q+1\}$ with respect to the parameter chosen. This recurrence equation has coefficients that are rational with respect to $x$, and $n$. In particular, $_{p}F_{q}$ forms an admissible family of order $\max\{p,q+1\}$ with respect to all of its parameters $a_k,b_k$.
$\Box$ } \end{theorem} We note that if some of the parameters of $_{p}F_{q}$ are specified, there may exist a lower order differential equation, and thus the order of the admissible family may be lower than the theorem states. We note further that this theorem is the main reason for the fact that so many special functions form admissible families: Most of them can be represented in terms of generalized hypergeometric functions.
\section{Algorithmic generation of differential equations} \label{sec:Algorithmic generation of differential equations}
In this section we show that the algorithm developed in \cite{Koe92} (see also \cite{KS}) to generate the uniquely determined differential equation of type (\ref{eq:Differential equation}) of lowest order valid for $f$ applies whenever $f$ is constructed from functions that are embedded into admissible families.
\begin{algorithm}[Find a simple differential equation] \label{algo:Find a simple DE} {\rm Let $f$ be a function given by an expression that is built from the functions $\exp x$, $\ln x$, $\sin x$, $\cos x$, $\mathop{\rm arcsin}\nolimits x$, $\mathop{\rm arctan}\nolimits x$, and any other functions that are embedded into admissible families, with the aid of the following procedures: differentiation, antidifferentiation, addition, multiplication, and the composition with rational functions and rational powers.
Then the following procedure generates a simple differential equation valid for $f$: \begin{enumerate} \item[{\rm (a)}] Find out whether there exists a simple differential equation for $f$ of order $N:=1$. To this end, differentiate $f$, and solve the linear equation \[ f'(x)+A_{0}f(x)=0 \] for $A_{0}$; i.\ e.\ set $A_{0}:=-\frac{f'(x)}{f(x)}$. If $A_{0}$ is rational in $x$, then you are done after multiplying by its denominator. \item[{\rm (b)}] Increase the order $N$ of the differential equation searched for by one. Expand the expression \[ f^{(N)}(x)+A_{N-1}f^{(N-1)}(x)+\cdots+A_{0}f(x) \;, \] apply the recurrence formulas of any admissible family $F_{n}$ of order $m$ involved recursively to minimize the occurrences of $F_{n-k}$ to at most $m$ successive $k$-values, and check whether the remaining summands contain exactly $N$ rationally independent expressions, considering the numbers $A_{0}, A_{1},\ldots, A_{N-1}$ as constants. Exactly in that case there exists a solution, which is found as follows: Sort with respect to the rationally independent terms and create a system of linear equations by setting their coefficients to zero. Solve this system for the numbers $A_0, A_1,\ldots, A_{N-1}$. These are rational functions in $x$, and if a solution exists, it is unique. After multiplication by the common denominator of $A_0, A_1,\ldots, A_{N-1}$ you get the differential equation searched for. Finally cancel common factors of the polynomial coefficients. \item[{\rm (c)}] If part (b) was not successful, repeat step (b). \end{enumerate} } \end{algorithm}
\par
\noindent{{\sl Proof:}}\hspace{5mm} Theorem~3 of \cite{KS} (compare \cite{Sta}) shows that for $f$ a differential equation of type (\ref{eq:Differential equation}) exists. We assume that differentiation is done by recursive descent through the expression tree, with an application of the chain, product and quotient rules on the corresponding subexpressions. It is clear that the algorithm works for members of admissible families, compare Theorem~\ref{th:mdimensionsl} and Corollary~\ref{cor:AF->DE}. Similarly the algorithm obviously works for derivatives and antiderivatives of admissible families. Further it is easily seen that the derivatives of sums, products, and the composition with rational functions and rational powers form either sums, or sums of products, all of which by a recursive use of the recurrence equations involved are represented by sums of fixed lengths, compare Theorem~\ref{th:Properties of admissible families}. Thus after a finite number of steps, part (b) of the algorithm will succeed (sharp a priori bounds for the resulting orders are given in \cite{KS}).
$\Box$\par
\noindent We note that from the implementational point of view the crucial step of the algorithm is the decision about rational independence in part (b). If this decision can be handled properly, then the proof given in \cite{Koe92} shows that the algorithm generates the simple differential equation of lowest order valid for $f$.
In our implementations, for testing whether some terms are rationally dependent, we divide each one by any other and test whether the quotient is a rational function in $x$ or not. This is an easy and fast approach which never leads to wrong results, but may miss a simpler solution, which, in practice, rarely happens.
Typically this happens, however, for orthogonal polynomials with prescribed $n$, for which a first order differential equation exists. In this case, the recurrence equation hides these rational dependencies, and in some sense (see \cite{GK}, \S~7) here it is even advantageous that the rational dependence is not realized.
Another example where our implementations yield a differential equation which is not of lowest order is given by {\small \begin{verbatim} In[13]:= SimpleDE[Sin[2 x]-2 Sin[x] Cos[x],x]
Out[13]= 4 F[x] + F''[x] == 0 \end{verbatim} }\noindent This happens because the functions $\sin\:(2x)$ and $2\,\sin x\cos x$ cannot be verified algebraically to be rationally dependent even though they are identical.
We note that, for elementary functions, we could use the Risch normalization procedure~\cite{Risch} to generate the rationally independent terms, but this does not work for special functions.
Further we note that in case of expressions of high complexity, the use of \cite{KS}, Algorithm 2, typically is faster. This algorithm, however, in general leads to a differential equation of higher order than Algorithm~\ref{algo:Find a simple DE}.
As a first application of Algorithm~\ref{algo:Find a simple DE} we consider the Airy functions $\mathop{\rm Ai}\nolimits_n$, again, for which the {\sc Mathematica} implementation of our algorithm yields
{\small \begin{verbatim} In[14]:= SimpleDE[AiryAi[n,x],x]
(3) Out[14]= (-1 - n) F[x] - x F'[x] + F [x] == 0 \end{verbatim} }\noindent i.\ e.\ the differential equation \begin{equation} \mathop{\rm Ai}\nolimits_{n}'''\:(x)-x\,\mathop{\rm Ai}\nolimits_{n}'\:(x)-(n+1)\,\mathop{\rm Ai}\nolimits_{n}\:(x)=0 \;. \label{eq:DE AiryAi} \end{equation} Similarly, we get for the square of the Airy function
{\small \begin{verbatim} In[15]:= SimpleDE[AiryAi[x]^2,x]
(3) Out[15]= -2 F[x] - 4 x F'[x] + F [x] == 0 \end{verbatim} }\noindent The next calculation confirms the differential equation for the Bateman functions $F_n$ (\ref{eq:DE Bateman})
{\small \begin{verbatim} In[16]:= SimpleDE[Bateman[n,x],x]
Out[16]= (2 n - x) F[x] + x F''[x] == 0 \end{verbatim} }\noindent Other examples are given with the aid of the iterated integrals of the complementary error function, and the Abramowitz functions:
{\small \begin{verbatim} In[17]:= SimpleDE[Erfc[n,x],x]
Out[17]= -2 n F[x] + 2 x F'[x] + F''[x] == 0 \end{verbatim} }\noindent (see \cite{AS} (7.2.2)) and
{\small \begin{verbatim} In[18]:= SimpleDE[Exp[a x]*Erfc[n,x],x]
2 Out[18]= (a - 2 n - 2 a x) F[x] + (-2 a + 2 x) F'[x] + F''[x] == 0
In[19]:= SimpleDE[Exp[a x^2]*Erfc[n,x],x]
2 2 2 Out[19]= (-2 a - 2 n - 4 a x + 4 a x ) F[x] + (2 x - 4 a x) F'[x] +
> F''[x] == 0
In[20]:= SimpleDE[Abramowitz[n,x],x]
(3) Out[20]= 2 F[x] + (1 - n) F''[x] + x F [x] == 0 \end{verbatim} }\noindent (see \cite{AS} (26.2.41)).
We note that the algorithm obviously works for antiderivatives. An example of that type is Dawson's integral (see e.\ g.\ \cite{AS} (7.1.17)) for which we get the differential equation
{\small \begin{verbatim} In[21]:= SimpleDE[E^(-x^2)*Integrate[E^(t^2),{t,0,x}],x]
Out[21]= 2 F[x] + 2 x F'[x] + F''[x] == 0 \end{verbatim} }\noindent For the Struve functions, our algorithm generates the differential equations
\[ (n^2+n^3+x^2-nx^2)\,{\bf H}_n(x)+x\,(x^2-n-n^2)\,{\bf H}_n'(x)+ (2-n)\,x^2\,{\bf H}_n''(x)+x^3\,{\bf H}_n'''(x) =0 \;, \]
and \[ (n^2+n^3-x^2+nx^2)\,{\bf L}_n(x)-x\,(x^2+n+n^2)\,{\bf L}_n'(x)+ (2-n)\,x^2\,{\bf L}_n''(x)+x^3\,{\bf L}_n'''(x) =0 \;, \] that are the homogeneous counterparts of the differential equation (12.1.1) in \cite{AS}.
Finally we give examples involving hypergeometric functions:
{\small \begin{verbatim} In[22]:= SimpleDE[Hypergeometric2F1[a,b,c,x],x]
Out[22]= a b F[x] + (-c + x + a x + b x) F'[x] + (-1 + x) x F''[x] == 0
In[23]:= SimpleDE[Hypergeometric2F1[a,b,a+b+1/2,x]^2,x]
Out[23]= 8 a b (a + b) F[x] + 2 (-a - 2 a - b - 4 a b - 2 b + x + 3 a x +
2 2 > 2 a x + 3 b x + 8 a b x + 2 b x) F'[x] +
> 3 x (-1 - 2 a - 2 b + 2 x + 2 a x + 2 b x) F''[x] +
2 (3) > 2 (-1 + x) x F [x] == 0 \end{verbatim} }\noindent Here the last function considered $ \left( _{2}F_{1}\left.\left(\begin{array}{c} \multicolumn{1}{c}{\begin{array}{cc}a&b\end{array}}\\ \multicolumn{1}{c}{a\!+\!b\!+\!1/2}\\
\end{array}\right| x\right) \right)^2 $ is the left hand side of Clausen's formula (\ref{eq:Clausen's formula}) that we will consider again in \S~\ref{sec:Algorithmic verification of identities}.
Now we investigate the case in which a derivative rule and a differential equation are given, and show that these two imply the existence of a recurrence equation: \begin{algorithm} {\rm If a family $f_n$ is given by a derivative rule of type (\ref{eq:Derivative rule}) and a differential equation of type (\ref{eq:Differential equation}), then it forms an admissible family for which a recurrence equation can be found algorithmically. } \end{algorithm}
\par
\noindent{{\sl Proof:}}\hspace{5mm} We present an algorithm which generates a recurrence equation for $f_n$: Iterative differentiation of the derivative rule (\ref{eq:Derivative rule}) with the explicit use of (\ref{eq:Derivative rule}) at each step yields \[ f_n^{(j)}(x)=\sum_{k=0}^M r_k^{j}(n,x)\,f_{n-k}(x) \] with rational functions $r_k^{j}$. The substitution of these derivative representations in the differential equation gives the recurrence equation searched for.
$\Box$\par
\noindent As an example we consider the Airy functions $\mathop{\rm Ai}\nolimits_n$, again, for which we have the derivative rule (\ref{eq:DR,RE AiryAi}) \[ \mathop{\rm Ai}\nolimits_n'\:(x)=\mathop{\rm Ai}\nolimits_{n+1}\:(x) \] and the differential equation (\ref{eq:DE AiryAi}) \[ \mathop{\rm Ai}\nolimits_{n}'''\:(x)-x\,\mathop{\rm Ai}\nolimits_{n}'\:(x)-(n+1)\,\mathop{\rm Ai}\nolimits_{n}\:(x)=0 \;. \] Differentiating the derivative rule successively and substituting the resulting expressions into the differential equation immediately yields the recurrence equation (\ref{eq:DR,RE AiryAi}), again.
If this family, however, is given by the backward derivative rule (compare (\ref{eq:DR,RE AiryAi})) \[ \mathop{\rm Ai}\nolimits_n'\:(x)=x\,\mathop{\rm Ai}\nolimits_{n-1}\:(x)+(n-1)\,\mathop{\rm Ai}\nolimits_{n-2}\:(x)\;, \] then differentiation yields \begin{eqnarray*} \mathop{\rm Ai}\nolimits_n''\:(x)&=&\mathop{\rm Ai}\nolimits_{n-1}\:(x)+x\,\mathop{\rm Ai}\nolimits_{n-1}'\:(x)+ (n-1)\,\mathop{\rm Ai}\nolimits_{n-2}'\:(x) \\&=& \mathop{\rm Ai}\nolimits_{n-1}\:(x)\!+\!x\Big( x\mathop{\rm Ai}\nolimits_{n-2}\:(x)\! +\!(n\!-\!2)\mathop{\rm Ai}\nolimits_{n-3}\:(x) \Big) \!+\!(n\!-\!1)\Big( x\mathop{\rm Ai}\nolimits_{n-3}\:(x)\!+\!(n\!-\!3)\mathop{\rm Ai}\nolimits_{n-4}\:(x)\Big) \\&=& \mathop{\rm Ai}\nolimits_{n-1}\:(x)+ x^2\,\mathop{\rm Ai}\nolimits_{n-2}\:(x)+ (2n - 3 )\,x\,\mathop{\rm Ai}\nolimits_{n-3}\:(x)+ (n^2-4 n+3)\,\mathop{\rm Ai}\nolimits_{n-4}\:(x) \;. \end{eqnarray*} After a similar procedure we get \begin{eqnarray*} \mathop{\rm Ai}\nolimits_n'''\:(x)&=& 3x\,\mathop{\rm Ai}\nolimits_{n-2}\:(x)+ (x^3+3n-5)\,\mathop{\rm Ai}\nolimits_{n-3}\:(x)+ (3n-6)\,x^2\,\mathop{\rm Ai}\nolimits_{n-4}\:(x) \\&&+ (3n^2-15n+15)\,x\,\mathop{\rm Ai}\nolimits_{n-5}\:(x)+ (n^3-9n^2+23n-15)\,\mathop{\rm Ai}\nolimits_{n-6}\:(x) \;, \end{eqnarray*} and the substitution into the differential equation gives finally \begin{eqnarray*} &&-(n+1)\,\mathop{\rm Ai}\nolimits_{n}\:(x) -x^2\,\mathop{\rm Ai}\nolimits_{n-1}\:(x)+ (4-n)\,x\,\mathop{\rm Ai}\nolimits_{n-2}\:(x)+ (3n-5+x^3)\,\mathop{\rm Ai}\nolimits_{n-3}\:(x) \\&&+ (3n\!-\!6)\,x^2\,\mathop{\rm Ai}\nolimits_{n-4}\:(x)+ (3n^2\!-\!15n\!+\!15)\,x\,\mathop{\rm Ai}\nolimits_{n-5}\:(x)+ (n^3\!-\!9n^2\!+\!23n\!-\!15)\,\mathop{\rm Ai}\nolimits_{n-6}\:(x) =0 \;, \end{eqnarray*} a recurrence equation of order 6 rather than the minimal order three. This shows that, in general, the order of the resulting recurrence equation is not best possible.
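The substitution procedure of the proof can be carried out mechanically for a fixed index; the following Python sketch (ours) expands $\mathop{\rm Ai}\nolimits_n'$ and $\mathop{\rm Ai}\nolimits_n'''$ via the backward rule for $n=7$, inserts them into the differential equation (\ref{eq:DE AiryAi}), and confirms that the resulting combination of $\mathop{\rm Ai}\nolimits_7,\ldots,\mathop{\rm Ai}\nolimits_1$ vanishes identically after reduction to the basis $\mathop{\rm Ai}\nolimits$, $\mathop{\rm Ai}\nolimits'$:

```python
# Expand derivatives of Ai_N via the backward rule
#     Ai_m' = x Ai_{m-1} + (m-1) Ai_{m-2},
# insert into Ai_N''' - x Ai_N' - (N+1) Ai_N, and reduce to (Ai, Ai').

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pdiff(a):
    return [i * c for i, c in enumerate(a)][1:] or [0]

def pmulx(a):
    return [0] + a

def pscale(c, a):
    return [c * v for v in a]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

N = 7

def deriv(expr):
    """Differentiate {k: r_k} meaning sum_k r_k(x) Ai_{N-k}."""
    out = {}
    def acc(k, poly):
        out[k] = padd(out.get(k, [0]), poly)
    for k, r in expr.items():
        acc(k, pdiff(r))                      # r_k'  Ai_{N-k}
        acc(k + 1, pmulx(r))                  # r_k x Ai_{N-k-1}
        acc(k + 2, pscale(N - k - 1, r))      # r_k (N-k-1) Ai_{N-k-2}
    return out

def airy_pair(n):
    """Ai_n = p Ai + q Ai' via Ai_{m+1} = x Ai_{m-1} + (m-1) Ai_{m-2}."""
    pairs = [([1], [0]), ([0], [1]), ([0, 1], [0])]
    for m in range(2, n):
        pairs.append(tuple(padd(pmulx(pairs[m - 1][i]),
                                pscale(m - 1, pairs[m - 2][i]))
                           for i in range(2)))
    return pairs[n]

f0 = {0: [1]}                  # Ai_N itself
f1 = deriv(f0)                 # Ai_N'
f3 = deriv(deriv(f1))          # Ai_N'''

# combo = Ai_N''' - x Ai_N' - (N+1) Ai_N
combo = dict(f3)
for k, r in f1.items():
    combo[k] = padd(combo.get(k, [0]), pscale(-1, pmulx(r)))
combo[0] = padd(combo.get(0, [0]), pscale(-(N + 1), f0[0]))

# reduce to the basis Ai, Ai' and check that everything cancels
tp, tq = [0], [0]
for k, r in combo.items():
    p, q = airy_pair(N - k)
    tp = padd(tp, pmul(r, p))
    tq = padd(tq, pmul(r, q))
```

Both reduced components vanish identically, and the shifts occurring in {\tt combo} reach down to $\mathop{\rm Ai}\nolimits_{N-6}$, reflecting the order-6 recurrence obtained above.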
Algebraically speaking, our result states that if the linear space spanned by $\{ f_n^{(j)}\;|\;j\in\N_0\}$ is finite-dimensional, and if $f_n'$ is an element of the linear space $V$ spanned by a finite number of the functions $\{f_{n\pm k}\}$, then the space generated by all of $\{f_{n\pm k}\}$ is of finite dimension, too. In contrast to Theorem~\ref{th:mdimensionsl}, however, the dimension of this space may in general be higher than the dimension of $V$. This shows the advantage of the use of admissible families.
As a further result of this section we note that using our general procedure developed in \cite{Koe92} we have \begin{algorithm}[Find a Laurent-Puiseux representation] \label{algo:Find a Laurent-Puiseux representation} {\rm Let $f$ be a function that is built from the functions $\exp x$, $\ln x$, $\sin x$, $\cos x$, $\mathop{\rm arcsin}\nolimits x$, $\mathop{\rm arctan}\nolimits x$, and any other functions that are embedded into admissible families, with the aid of the following procedures: differentiation, anti\-differentiation, addition, multiplication, and the composition with rational functions and rational powers.
If furthermore $f$ turns out to be of rational, exp-like, or hypergeometric type (see \cite{Koe92}), then a closed form Laurent-Puiseux representation $f(x)=\sum\limits_{k=k_0}^\infty a_k\,x^{k/n}$ can be obtained algorithmically.
$\Box$ } \end{algorithm} We remark that there is a decision procedure due to Petkov\v{s}ek \cite{P} for deciding the hypergeometric type from the recurrence equation obtained.
With Algorithm~\ref{algo:Find a Laurent-Puiseux representation}, it is possible to reproduce most of the results of the extensive bibliography on series \cite{Han}, and to generate others. As an example we present the power series representation of the square of the Airy function:
{\small \begin{verbatim} In[24]:= PowerSeries[AiryAi[x]^2,x]
1 k k 1 + 3 k
(-) 27 x (2 k)!
9 Out[24]= Sum[-(------------------------), {k, 0, Infinity}] +
Sqrt[3] Pi k! (1 + 3 k)!
k 3 k 1
12 x Pochhammer[-, k]
6 > Sum[-------------------------, {k, 0, Infinity}] +
1/3 2 2
3 3 (3 k)! Gamma[-]
3
1/3 k 2 + 3 k 5
2 3 12 (1 + k) x Pochhammer[-, k]
6 > Sum[--------------------------------------------, {k, 0, Infinity}]
1 2
(3 + 3 k)! Gamma[-]
3 \end{verbatim} }\noindent Note that, moreover, this technique generates hypergeometric representations, whenever such representations exist. The above example, e.\ g., is recognized as the hypergeometric representation \begin{eqnarray*} \mathop{\rm Ai}\nolimits\:(x)^2 &=& \ed{3^{4/3}\,\Gamma\:(2/3)^2}\; _{1}F_{2}\left.\left(\begin{array}{c} \multicolumn{1}{c}{ 1/6 }\\[1mm] \multicolumn{1}{c}{\begin{array}{cc}1/3 & 2/3\end{array}}\\
\end{array}\right| \frac{4}{9}\,x^3 \right) \\&& -\frac{x}{\sqrt 3\,\pi}\; _{1}F_{2}\left.\left(\begin{array}{c} \multicolumn{1}{c}{1/2}\\[1mm] \multicolumn{1}{c}{\begin{array}{cc}2/3 & 4/3 \end{array}}\\
\end{array}\right| \frac{4}{9}\,x^3 \right) + \frac{x^2}{3^{2/3}\,\Gamma\:(1/3)^2}\; _{1}F_{2}\left.\left(\begin{array}{c} \multicolumn{1}{c}{ 5/6 }\\[1mm] \multicolumn{1}{c}{\begin{array}{cc}4/3 & 5/3\end{array}}\\
\end{array}\right| \frac{4}{9}\,x^3 \right) \;. \end{eqnarray*} As soon as a hypergeometric representation is obtained, by Theorem~\ref{th:generalized hypergeometric function} derivative rules and recurrence equations with respect to all parameters involved may be derived. As an example, we consider the Laguerre polynomials: The power series representation for the Laguerre polynomial $L_n^{(\alpha)}(x)$ that our algorithm generates corresponds to the hypergeometric representation \[ L_n^{(\alpha)}(x)=\ueber{n+\alpha}{n}\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) \] from which by an application of Theorem~\ref{th:generalized hypergeometric function} we obtain the derivative rule \begin{eqnarray*} \ded x L_n^{(\alpha)}(x) &=& \ueber{n+\alpha}{n}\,\frac{-n}{x}\left( _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n+1}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) -\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) \right) \\&=& \frac{-(n+\alpha)}{x}\,\ueber{n-1+\alpha}{n-1}\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-(n-1)}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) +\frac{n}{x}\,\ueber{n+\alpha}{n}\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) \\&=& \ed{x}\left( -(n+\alpha)\,L_{n-1}^{(\alpha)}(x)+n\,L_n^{(\alpha)}(x)\right) \;, \end{eqnarray*} i.\ e.\ (\ref{eq:Laguerrestrich}), again, but we are also led to the derivative rule with respect to $\alpha$: \begin{eqnarray*} \ded x L_n^{(\alpha)}(x) &=& \ueber{n+\alpha}{n}\,\frac{\alpha}{x}\left( _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha}\\
\end{array}\right| x \right) -\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) \right) \\&=& \frac{\alpha}{x}\,\frac{n+\alpha}{\alpha}\,\ueber{n+\alpha-1}{n}\, _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha}\\
\end{array}\right| x \right) - \frac{\alpha}{x}\,\ueber{n+\alpha}{n}\; _1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[1mm] \multicolumn{1}{c}{\alpha+1}\\
\end{array}\right| x \right) \\&=& \ed x\left( (n+\alpha)\,L_n^{(\alpha-1)}(x)-\alpha\,L_n^{(\alpha)}(x)\right) \;. \end{eqnarray*} A further application of Theorem~\ref{th:generalized hypergeometric function} yields the recurrence equation \[ F_{\alpha+1} = \frac{1+\alpha}{(1 + \alpha + n)\,x}\Big( -\alpha\,F_{\alpha-1}+(\alpha + x)\,F_\alpha \Big) \] for $F_\alpha:=\,_1 F_1\left.\left(\begin{array}{c} \multicolumn{1}{c}{-n}\\[-1mm] \multicolumn{1}{c}{\alpha+1}
\end{array}\right| x \right)$ with respect to $\alpha$, and the use of the algorithm for the product (\cite{KS}, Theorem 3 (d), \cite{Zei1}, p.\ 342, and \cite{SZ}, {\sc Maple} function {\tt rec*rec}), applied to $L_n^{(\alpha)}=\ueber{n+\alpha}{n}\cdot F_\alpha$ generates (\ref{eq:alternateRELaguerre}), again.
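Both derivative rules for $L_n^{(\alpha)}$ can be confirmed numerically from the explicit series $L_n^{(\alpha)}(x)=\sum\limits_{k=0}^n (-1)^k {n+\alpha \choose n-k} \frac{x^k}{k!}$; a Python sketch (ours), writing the binomial coefficient with Gamma functions so that $\alpha$ may be non-integral:

```python
import math

def laguerre(n, a, x):
    """L_n^{(a)}(x) = sum_k (-1)^k binom(n+a, n-k) x^k / k!, with the
       binomial written via Gamma functions (a need not be an integer)."""
    s = 0.0
    for k in range(n + 1):
        binom = math.gamma(n + a + 1) / (math.gamma(k + a + 1)
                                         * math.factorial(n - k))
        s += (-1)**k * binom * x**k / math.factorial(k)
    return s

def laguerre_dx(n, a, x):
    """Exact d/dx L_n^{(a)}(x), differentiating the series term by term."""
    s = 0.0
    for k in range(1, n + 1):
        binom = math.gamma(n + a + 1) / (math.gamma(k + a + 1)
                                         * math.factorial(n - k))
        s += (-1)**k * binom * k * x**(k - 1) / math.factorial(k)
    return s
```

At sample points the exact derivative agrees with both $\frac{1}{x}\big(n\,L_n^{(\alpha)}-(n+\alpha)L_{n-1}^{(\alpha)}\big)$ and $\frac{1}{x}\big((n+\alpha)L_n^{(\alpha-1)}-\alpha\,L_n^{(\alpha)}\big)$.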
\section{Algorithmic verification of identities} \label{sec:Algorithmic verification of identities}
On the lines of \cite{Zei1} we can now present an implementable algorithm to verify identities between expressions using the results of the last section.
\begin{algorithm} \label{algo:Verification of identities} {\bf (Verification of identities)} {\rm Assume two functions $f_n(x)$ and $g_n(x)$ are given, to which Algorithm~\ref{algo:Find a simple DE} applies. Then the following procedure verifies whether $f_n$ and $g_n$ are identical: \begin{enumerate} \item[(a)] {\tt de1:=SimpleDE(f,x)}: \\ Determine the simple differential equation {\tt de1} corresponding to $f_n$. \item[(b)] {\tt de2:=SimpleDE(g,x)}: \\ Determine the simple differential equation {\tt de2} corresponding to $g_n$. \item[(c)] {\bf (Different differential equation implies different function)} If {\tt de1} and {\tt de2} have the same order, then \begin{itemize} \item[-] if they do not coincide up to common factors, i.\ e.\ if they do not have rational ratio, then $f_n$ and $g_n$ do not coincide; return this, and quit. \item[-] Otherwise $f_n$ and $g_n$ satisfy the same differential equation {\tt de1} of order $l$, say, and it remains to check $l$ initial values. Continue with (e). \end{itemize} \item[(d)] Suppose the orders of {\tt de1} and {\tt de2}, i.\ e.\ of \[ \sum_{j=0}^l p_j\,f_n^{(j)}=0 \quad\quad\mbox{and}\quad\quad \sum_{k=0}^{m} q_k\, g_n^{(k)}=0 \] (with polynomials $p_j\;(j=0,\ldots,l)$ and $q_k\;(k=0,\ldots,m)$), are different, and assume without loss of generality that $l>m$. Then, differentiate {\tt de2} $l-m$ times to get equations \[ S_p:=\sum_{k=0}^{p} q_k^p\, g_n^{(k)}=0\quad\quad\quad(p=m,\ldots,l) \;. \] Check whether there are nontrivial rational functions $A_p\not\equiv 0\;(p=m,\ldots,l)$ such that a linear combination $\sum\limits_{p=m}^l A_p\,S_p$ is equivalent to the left hand side of {\tt de1}, i.\ e.\ is a rational multiple of it.
If this is not the case, then $f_n$ and $g_n$ do not satisfy a common simple differential equation, and therefore are not identical; return this, and quit. Otherwise they satisfy a common simple differential equation; continue with (e). \item[(e)] Let $l$ be the order of the common simple differential equation for $f_n$ and $g_n$. For $k=0,\ldots,l-1$ check if $f_n^{(k)}(0)=g_n^{(k)}(0)$. (Note that by the holonomic structure the knowledge of the initial values (\ref{eq:holonomicIV}) is sufficient to generate those.) These initial conditions may depend on $n$, and are proved by application of a discrete version of the same algorithm. If one of these equations is falsified, then the identity $f_n\equiv g_n$ is disproved; return this, and quit. Otherwise, if all equations are verified, the identity $f_n\equiv g_n$ is proved. \end{enumerate} } \end{algorithm}
\par
\noindent{{\sl Proof:}}\hspace{5mm} By a well-known result about differential equations of the type considered, the solution of an initial value problem \[ \sum_{k=0}^l p_k(x)\,f_n^{(k)}(x)=0 \;, \quad\quad f_n^{(k)}(0)=a_k\;(k=0,\ldots,l-1) \] is unique. To prove that $f_n$ and $g_n$ are identical, it therefore suffices to show that they satisfy a common differential equation, and the same initial values. This is done by our algorithm.
$\Box$\par
\noindent
For the example expressions \[ f_n(x):= L_n^{(-1/2)}(x) \] and \[ g_n(x):= \frac{(-1)^n}{n!\,2^{2n}}\,H_{2n}\left(\sqrt x\right) \] we get the common differential equation \[ 2\, n\, f(x) + (1 - 2 x)\, f'(x) + 2\, x\, f''(x)=0 \;. \] Therefore to prove the identity \[ L_n^{(-1/2)}(x)= \frac{(-1)^n}{n!\,2^{2n}}\,H_{2n}\left(\sqrt x\right) \;, \] (see e.\ g.\ \cite{AS}, (22.5.38)), it is enough to verify the two initial equations $f_n(0)=g_n(0)$ and $f_n'(0)=g_n'(0)$. To establish the first of these conditions, with {\sc Mathematica}, e.\ g., we get
{\small \begin{verbatim} In[25]:= eq = Limit[LaguerreL[n,-1/2,x],x->0]==
Limit[(-1)^n/(n!*2^(2*n))*HermiteH[2*n,Sqrt[x]],x->0]
Out[25]= Pochhammer[1 + n, -1/2]/Sqrt[Pi] ==
>    (-1)^n Sqrt[Pi]/(n! Gamma[1/2 - n]) \end{verbatim} }\noindent which is to be verified. In this situation, we establish the first order recurrence equations for both sides
{\small \begin{verbatim} In[26]:= FindRecursion[Limit[LaguerreL[n,-1/2,x],x->0],n]
Out[26]= (-1 + 2 n) a[-1 + n] - 2 n a[n] == 0
In[27]:= FindRecursion[Limit[(-1)^n/(n!*4^n)*HermiteH[2*n,Sqrt[x]],x->0],n]
Out[27]= (-1 + 2 n) a[-1 + n] - 2 n a[n] == 0 \end{verbatim} }\noindent that coincide, so that it remains to prove the initial statement
{\small \begin{verbatim} In[28]:= eq /. n->0
Out[28]= True \end{verbatim} }\noindent and we are done. Similarly one may prove the second initial value statement $f_n'(0)=g_n'(0)$.
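As an independent numerical cross-check (outside the {\sc Mathematica} session; the helper functions below are our own illustration, not part of the package), one can evaluate both sides of the identity via the standard three-term recurrences for the generalized Laguerre and the Hermite polynomials:

```python
import math

def laguerre(n, alpha, x):
    # three-term recurrence: (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}
    prev, cur = 1.0, 1.0 + alpha - x
    if n == 0:
        return prev
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x)*cur - (k + alpha)*prev) / (k + 1)
    return cur

def hermite(n, x):
    # three-term recurrence: H_{k+1} = 2x H_k - 2k H_{k-1}
    prev, cur = 1.0, 2.0*x
    if n == 0:
        return prev
    for k in range(1, n):
        prev, cur = cur, 2*x*cur - 2*k*prev
    return cur

# check L_n^(-1/2)(x) == (-1)^n / (n! 4^n) H_{2n}(sqrt(x)) for small n
for n in range(6):
    for x in (0.3, 1.0, 1.7):
        lhs = laguerre(n, -0.5, x)
        rhs = (-1)**n / (math.factorial(n) * 4**n) * hermite(2*n, math.sqrt(x))
        assert abs(lhs - rhs) < 1e-9, (n, x, lhs, rhs)
print("identity verified for n = 0,...,5")
```

Such a spot check does not replace the algorithmic proof above, but it quickly exposes transcription errors in either side of the identity.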
Applying the same method, (\ref{eq:difference differential equation}) can be proved by the calculations
{\small \begin{verbatim} In[29]:= SimpleDE[(n+1)*Bateman[n+1,x]-(n-1)*Bateman[n-1,x],x]
Out[29]= (2 n - 2 x + 4 n^2 x - 4 n x^2 + x^3) F[x] + (-2 n x + 2 x^2) F'[x] +
>    (2 n - x) x^2 F''[x] == 0
In[30]:= SimpleDE[2*x*D[Bateman[n,x],x],x]
Out[30]= (2 n - 2 x + 4 n^2 x - 4 n x^2 + x^3) F[x] + (-2 n x + 2 x^2) F'[x] +
>    (2 n - x) x^2 F''[x] == 0 \end{verbatim} }\noindent and using the initial values $F_n(0)=0$ and $F_n'(0)=-2$ (see \cite{KS1}, (11)).
Also, one can prove Clausen's formula \begin{equation} \left( _{2}F_{1}\left.\left(\begin{array}{c} \multicolumn{1}{c}{\begin{array}{cc}a&b\end{array}}\\[1mm] \multicolumn{1}{c}{a\!+\!b\!+\!1/2}\\
\end{array}\right| x\right) \right)^2 = \;_{3}F_{2}\left.\left(\begin{array}{c} \multicolumn{1}{c}{\begin{array}{ccc}2a&2b&a+b\end{array}}\\[1mm] \multicolumn{1}{c}{\begin{array}{cc}a\!+\!b\!+\!1/2&2a\!+\!2b\end{array}}
\end{array}\right| x\right) \;, \label{eq:Clausen's formula} \end{equation} generating the common differential equation \begin{eqnarray*} && 8\,a\,b\,\left( a + b \right) \,f(x) + \\&&
2\,( -a - 2\,{a^2} - b - 4\,a\,b - 2\,{b^2} + x + 3\,a\,x +
2\,{a^2}\,x + 3\,b\,x + 8\,a\,b\,x + 2\,{b^2}\,x ) \,f'(x) + \\&&
3\,x\,\left( -1 - 2\,a - 2\,b + 2\,x + 2\,a\,x + 2\,b\,x \right) \, f''(x) + \\&& 2\,\left( -1 + x \right) \,{x^2}\,f'''(x) = 0 \end{eqnarray*} for both sides of (\ref{eq:Clausen's formula}), or other hypergeometric identities like the Kummer transformation \[ _1 F_1\left.\left(\begin{array}{c} a\\b \end{array}\right| x\right) =e^x\; _1 F_1\left.\left(\begin{array}{c} b-a\\b \end{array}\right| -x\right) \] or like
\[ _{0}{F}_{1}\left.\left(\begin{array}{c} a \!\end{array}\right| x\right) \cdot \;_{0}{F}_{1}\left.\left(\begin{array}{c} b \!\end{array}\right| x\right) = \;_{2}{F}_{3}\left.\left(\begin{array}{c} \multicolumn{1}{c}{\begin{array}{cc} \frac{a+b}{2}&\frac{a+b-1}{2}\end{array}}\\[1mm] \multicolumn{1}{c}{\begin{array}{ccc} a&b&a+b-1 \end{array}} \end{array}\right| 4\,x\right) \] and \[ _{1}{F}_{1}\left.\left(\begin{array}{c} a\\ b \end{array}\right| x\right) \cdot \;_{1}{F}_{1}\left.\left(\begin{array}{c} a\\ b \end{array}\right| -x\right) = \;_{2}{F}_{3}\left.\left(\begin{array}{ccc} \multicolumn{1}{c}{\begin{array}{cc} a&b-a \end{array}}\\[1mm] \multicolumn{1}{c}{\begin{array}{ccc} b&\frac{b}{2}&\frac{b+1}{2} \end{array}} \end{array}\right| \frac{x^2}{4} \right) \] corresponding to the Kummer differential equation \[ a\,f(x) - (b - x)\,f'(x) - x\, f''(x) = 0 \;, \] and to \begin{eqnarray*} &&
\left( 1 - a - b \right) \,\left( a + b \right) \,f(x) +
\left( - a\,b + {a^2}\,b + a\,{b^2} - 2\,x - 4\,a\,x -
4\,b\,x \right) \,f'(x) \\&&+
\left( a + {a^2} + b + 3\,a\,b + {b^2} - 4\,x \right) \,x\,f''(x) +
2\,\left( 1 + a + b \right) \,{x^2}\,f'''(x) + {x^3}\,f''''(x) = 0 \;, \end{eqnarray*} and \begin{eqnarray*} && 4\,a\,\left( a - b \right) \,x\,f(x) +
\left( b - 3\,{b^2} + 2\,{b^3} - {x^2} - 2\,b\,{x^2} \right) \,f'(x) \\&&+
x\,\left( -b + 5\,{b^2} - {x^2} \right) \,f''(x) +
\left( 1 + 4\,b \right) \,{x^2}\,f'''(x) + {x^3}\,f''''(x) = 0 \;, \end{eqnarray*} respectively.
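These transformation and product formulas can also be checked numerically via truncated partial sums of the hypergeometric series. The following Python sketch is our own illustration (the parameter values are arbitrary sample points inside the region of convergence); it verifies Clausen's formula, the Kummer transformation, and the first product formula:

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def hyp(num, den, x, terms=80):
    # truncated hypergeometric series pFq(num; den; x)
    s = 0.0
    for k in range(terms):
        t = x**k / math.factorial(k)
        for a in num:
            t *= poch(a, k)
        for b in den:
            t /= poch(b, k)
        s += t
    return s

# Clausen's formula at a = 0.3, b = 0.4, x = 0.2
a, b, x = 0.3, 0.4, 0.2
assert abs(hyp([a, b], [a + b + 0.5], x)**2
           - hyp([2*a, 2*b, a + b], [a + b + 0.5, 2*a + 2*b], x)) < 1e-9

# Kummer's transformation 1F1(a; b; x) = e^x 1F1(b-a; b; -x)
a, b, x = 0.5, 1.3, 0.7
assert abs(hyp([a], [b], x) - math.exp(x)*hyp([b - a], [b], -x)) < 1e-9

# 0F1(; a; x) 0F1(; b; x) = 2F3((a+b)/2, (a+b-1)/2; a, b, a+b-1; 4x)
a, b, x = 1.2, 1.7, 0.3
assert abs(hyp([], [a], x)*hyp([], [b], x)
           - hyp([(a + b)/2, (a + b - 1)/2], [a, b, a + b - 1], 4*x)) < 1e-9
print("Clausen, Kummer, and the 0F1 product formula verified numerically")
```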
Note that one can also reverse the order of the algorithm, i.\ e.\ first find common recurrence equations for $f_n$ and $g_n$ with respect to $n$, and then check the initial conditions (depending on $x$) with the aid of differential equations. This method should be compared with recent results of Zeilberger (\cite{Zei1}--\cite{Zei3}).
Moreover the given algorithm is easily extended to the case of several variables, if the family given forms an admissible family with respect to all of its variables, i.\ e.\ for each variable there exists \begin{itemize} \item[-] either a simple recurrence equation (corresponding to a ``discrete'' variable), \item[-] or a simple derivative rule (corresponding to a ``continuous'' variable), depending on shifts with respect to one of the discrete variables. \end{itemize} Note, however, that (for the moment) the algorithm only works if $f$ and $g$ are ``expressions'', and no symbolic sums, derivatives of symbolic order, etc.\ occur. In the next sections, we will, however, extend the above algorithm to these situations.
\section{Algorithmic verification of Rodrigues type formulas} \label{sec:Algorithmic verification of Rodrigues type formulas}
Here we present an algorithm to verify identities of the Rodrigues type \[ g(n,x)=f^{(n)}(n,x) \quad\quad\quad \;(f, g\;\mbox{functions}\;,\quad n\;\mbox{symbolic}) \;. \] This algorithm, however, only works if the function $f$ is of the hypergeometric type. On the other hand, for most Rodrigues type formulas in the literature, see e.\ g.\ \cite{AS}, this condition is satisfied.
The procedure is based on the following \begin{algorithm} {\bf (Find differential equation for derivatives of symbolic order)} \label{algo:Rodrigues} {\rm Let $f$ be of the hypergeometric type, i.\ e.\ there is a Laurent-Puiseux type representation $f(n,x)=\sum_k a_k x^k$. Then there is a simple differential equation for $g(n,x):=f^{(n)}(n,x)$ which can be obtained by the following algorithm: \begin{enumerate} \item[(a)] {\tt de1:=SimpleDE(f,x)}: \\ Calculate the simple differential equation {\tt de1} of $f$, see Algorithm~\ref{algo:Find a simple DE}.
\item[(b)] {\tt re1:=DEtoRE(de1,f,x,a,k)}: \\ Transfer the differential equation {\tt de1} into the corresponding recurrence equation {\tt re1} for $a_k$, see \cite{Koe92}, \S 6. \item[(c)] If {\tt re1} is not of the hypergeometric type (or is not equivalent to the hypergeometric type \cite{P}), then quit. \item[(d)] {\tt re2:=SymbolicDerivativeRE(re1,a,k,n)}: \\ Otherwise set $c_k:=(k+1)_n\,a_{k+n}$. Bring {\tt re1} into the form \[ a_{k+m}=R(k)\,a_k \;, \] rational $R$, and calculate the hypergeometric type recurrence equation {\tt re2} \begin{equation} c_{k+m}=\frac{(k+n+1)_m}{(k+1)_m}\,R(k+n)\,c_k \label{eq:REdiffN} \end{equation} for $c_k$. \item[(e)] {\tt de2:=REtoDE(re2,a,k,G,x)}:\\ Transfer the recurrence equation {\tt re2} into the corresponding differential equation {\tt de2} for the $n$th derivative $g(n,x):=f^{(n)}(x)$ of $f$, see \cite{Koe92}, \S~11. \end{enumerate} } \end{algorithm}
\par
\noindent{{\sl Proof:}}\hspace{5mm} Parts (a), (b) and (e) of the algorithm are described precisely in \cite{Koe92}. Now assume $g(n,x)=f^{(n)}(n,x)$, and that $f$ has the representation $f(n,x)=\sum_k a_k x^k$. Then we get \[ \sum_{k} c_k\,x^k=g(n,x)= f^{(n)}(n,x)=\sum_{k} (k+1-n)_n\,a_k\,x^{k-n} =\sum_{k} (k+1)_n\,a_{k+n}\,x^{k} \;. \] Therefore we have $c_k=(k+1)_n\,a_{k+n}$, and we get the recurrence equation \begin{eqnarray*} c_{k+m}&=& (k+m+1)_n\,a_{k+n+m}= (k+m+1)_n\,R(k+n)\,a_{k+n} \\[1mm]&=& \frac{(k+m+1)_n}{(k+1)_n}\,R(k+n)\,c_k = \frac{(k+m+n)!}{(k+m)!}\frac{k!}{(k+n)!}\,R(k+n)\,c_k \\&=& \frac{(k+m+n)!}{(k+n)!}\frac{k!}{(k+m)!}\,R(k+n)\,c_k = \frac{(k+n+1)_m}{(k+1)_m}\,R(k+n)\,c_k \;, \end{eqnarray*} and hence (\ref{eq:REdiffN}), for $c_k$. This finishes the proof.
$\Box$\par
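The middle step of this computation rests on the Pochhammer ratio identity $\frac{(k+m+1)_n}{(k+1)_n}=\frac{(k+n+1)_m}{(k+1)_m}$, both sides being equal to $\frac{(k+m+n)!\,k!}{(k+m)!\,(k+n)!}$. This can be confirmed with exact rational arithmetic; the following Python illustration is ours, not part of the package:

```python
from fractions import Fraction
from math import factorial

def poch(a, k):
    # Pochhammer symbol (a)_k for an integer a, as an exact Fraction
    r = Fraction(1)
    for i in range(k):
        r *= a + i
    return r

# (k+m+1)_n / (k+1)_n == (k+n+1)_m / (k+1)_m == (k+m+n)! k! / ((k+m)! (k+n)!)
for k in range(6):
    for m in range(1, 5):
        for n in range(6):
            lhs = poch(k + m + 1, n) / poch(k + 1, n)
            rhs = poch(k + n + 1, m) / poch(k + 1, m)
            both = Fraction(factorial(k + m + n) * factorial(k),
                            factorial(k + m) * factorial(k + n))
            assert lhs == rhs == both
print("Pochhammer ratio identity verified")
```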
\noindent As a first example we consider the identity \[ \mathop{\rm erfc}\nolimits_n(x)=\frac{(-1)^n\,e^{-x^2}}{2^n\,n!}\dedn{x}{n}\left( e^{x^2}\mathop{\rm erfc}\nolimits x\right) \] (see e.\ g.\ \cite{AS}, (7.2.9)), or equivalently \begin{equation} (-1)^n\,2^n\,n!\,e^{x^2}\,\mathop{\rm erfc}\nolimits_n(x)=\dedn{x}{n}\left( e^{x^2}\mathop{\rm erfc}\nolimits x\right) \;. \label{eq:erfcRodrigues} \end{equation} Algorithm~\ref{algo:Rodrigues} yields step by step
{\small \begin{verbatim} In[31]:= de1=SimpleDE[E^(x^2)*Erfc[x],x]
Out[31]= -2 F[x] - 2 x F'[x] + F''[x] == 0
In[32]:= re1=DEtoRE[de1,F,x,a,k]
Out[32]= -2 (1 + k) a[k] + (1 + k) (2 + k) a[2 + k] == 0
In[33]:= re2=SymbolicDerivativeRE[re1,a,k,n]
Out[33]= -2 (1 + k + n) a[k] + (2 + 3 k + k^2) a[2 + k] == 0
In[34]:= de2=REtoDE[re2,a,k,G,x]
Out[34]= -2 (1 + n) G[x] - 2 x G'[x] + G''[x] == 0 \end{verbatim} }\noindent thus finally the differential equation \[
-2\,(1 + n)\, g(x) - 2\, x\, g'(x) + g''(x)= 0 \] for the function $\dedn{x}{n}\left( e^{x^2}\mathop{\rm erfc}\nolimits x\right)$, which also can be obtained by the single statement
{\small \begin{verbatim} In[35]:= RodriguesDE[E^(x^2)*Erfc[x],x,n]
Out[35]= -2 (1 + n) F[x] - 2 x F'[x] + F''[x] == 0 \end{verbatim} }\noindent For the left hand term of (\ref{eq:erfcRodrigues}) we get
{\small \begin{verbatim} In[36]:= de3=SimpleDE[E^(x^2)*Erfc[n,x],x]
Out[36]= -2 (1 + n) F[x] - 2 x F'[x] + F''[x] == 0 \end{verbatim} }\noindent i.\ e.\ the same differential equation.
As next example we consider the Rodrigues type identity (\ref{eq:Rodrigues Bateman}) for the Bateman functions, and rewrite it as \begin{equation} \frac{n!\,e^{-x}}{x}\,F_n(x) = \frac{d^n}{dx^n}\left( e^{-2x}\,x^{n-1}\right) \;. \label{eq:Rodrigues Bateman2} \end{equation} Our implementation yields
{\small \begin{verbatim} In[37]:= RodriguesDE[E^(-2x)*x^(n-1),x,n]
Out[37]= 2 (1 + n) F[x] + 2 (1 + x) F'[x] + x F''[x] == 0
In[38]:= SimpleDE[E^(-x)/x*Bateman[n,x],x]
Out[38]= 2 (1 + n) F[x] + 2 (1 + x) F'[x] + x F''[x] == 0 \end{verbatim} }\noindent Algorithm~\ref{algo:Rodrigues} shows the applicability of Algorithm~\ref{algo:Verification of identities} if in the expressions involved Rodrigues type expressions occur, as soon as we can handle the initial values. Since in Algorithm~\ref{algo:Rodrigues} the function $f$ is assumed to be of hypergeometric type, this, however, can be done by a series representation using Algorithm~\ref{algo:Find a Laurent-Puiseux representation} if $f$ moreover is analytic, and if the function $f$ of Algorithm~\ref{algo:Rodrigues} does not depend on $n$: In this case Algorithm~\ref{algo:Find a Laurent-Puiseux representation} generates the generic coefficient $a_k$ of the series representation $f(x)=\sum\limits_{k=0}^\infty a_k\,x^k$, and therefore we get the initial values by Taylor's theorem: \[ \left(\dedn {x}{n} f\right)(0)=n!\,a_n\;. \] In our first example we conclude
{\small \begin{verbatim} In[39]:= PowerSeries[E^(x^2)*Erfc[x],x]
Out[39]= Sum[x^(2 k)/k!, {k, 0, Infinity}] +
>    Sum[(-2) 4^k x^(1 + 2 k) k!/(Sqrt[Pi] (1 + 2 k)!), {k, 0, Infinity}] \end{verbatim} }\noindent so that the first initial condition for identity (\ref{eq:erfcRodrigues}) is given by the calculation (see \cite{AS} (7.2.7)) \[ \frac{(-1)^n\,n!}{\Gamma\left( \frac{n}{2}+1\right)}= (-1)^n\,2^n\,n!\,\mathop{\rm erfc}\nolimits_n(0)=\dedn{x}{n}\left( e^{x^2}\mathop{\rm erfc}\nolimits x\right)(0) = \funkdeff{\frac{(2k)!}{k!}}{n=2k\;(k\in\N_0)} {-{{2\,k!\,{4^k}}\over {{\sqrt{\pi }}}} }{n\!=\!2k\!+\!1\;(k\in\N_0)} \!\!\!, \] and the second one is established similarly.
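The coefficient comparison above can be reproduced by multiplying the two constituent series numerically: the Cauchy product of the series for $e^{x^2}$ and $\mathop{\rm erfc} x$ must return exactly the coefficients reported by {\tt PowerSeries}. The following Python sketch is our own cross-check:

```python
import math

N = 13  # compute Taylor coefficients up to degree N-1
A = [0.0]*N  # e^(x^2) = sum x^(2j)/j!
for j in range(0, N, 2):
    A[j] = 1.0/math.factorial(j//2)
B = [0.0]*N  # erfc(x) = 1 - (2/sqrt(pi)) sum (-1)^m x^(2m+1)/(m!(2m+1))
B[0] = 1.0
for m in range(N):
    if 2*m + 1 < N:
        B[2*m+1] = -(2/math.sqrt(math.pi))*(-1)**m/(math.factorial(m)*(2*m+1))
# Cauchy product: coefficient n of the product series
C = [sum(A[i]*B[n-i] for i in range(n+1)) for n in range(N)]

# compare with the closed-form coefficients of the PowerSeries output
for n in range(N):
    if n % 2 == 0:
        k = n//2
        target = 1.0/math.factorial(k)
    else:
        k = (n - 1)//2
        target = -2*4**k*math.factorial(k)/(math.sqrt(math.pi)*math.factorial(2*k+1))
    assert abs(C[n] - target) < 1e-9, (n, C[n], target)
print("series coefficients of exp(x^2) erfc(x) confirmed")
```

Note that truncating each factor at degree $N-1$ is harmless here, since the $n$th product coefficient involves only terms of degree at most $n$.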
To identify the first initial values of our second example, we proceed as follows: The left hand side of (\ref{eq:Rodrigues Bateman2}) yields \begin{equation} \lim_{x\rightarrow 0}\frac{n!\,e^{-x}}{x}\,F_n(x) = n!\,\lim_{x\rightarrow 0}\frac{F_n(x)}{x}=n!\,F_n'(0)=-2\,n! \label{eq:agreement} \end{equation} (see \cite{KS1}, (11)), whereas from the identity \[ \Big( x^n\Big)^{(k)}(0)=\funkdef{n!}{k=n}{0} \] and Leibniz's formula we derive for the right hand side \begin{eqnarray*} \left( e^{-2x}\,x^{n-1}\right)^{(n)}(0) &=& \left(\sum_{k=0}^n \ueber{n}{k} \left( x^{n-1}\right)^{(k)} \left( e^{-2x}\right)^{(n-k)}\right)(0) \\&=& \ueber{n}{n-1}\,(n-1)!\,\left( e^{-2x}\right)'(0)= -2\,n! \;, \end{eqnarray*} in agreement with (\ref{eq:agreement}).
It is easily seen that we can always identify the initial values algorithmically by the method given if $f(n,x)=w(x)\,X(x)^n$ with a polynomial $X$, i.\ e.\ is of the form (\ref{eq:Rodriguestype}).
These results are summarized by \begin{algorithm} {\bf (Verification of identities)} {\rm With Algorithms~\ref{algo:Verification of identities} and \ref{algo:Rodrigues} identities involving Rodrigues type expressions can be verified if only symbolic derivatives $f^{(n)}$ of hypergeometric type analytic expressions $f$ occur that have the form $f(n,x)=w(x)\,X(x)^n$ for some polynomial $X$. } \end{algorithm}
\section{Algorithmic verification of formulas involving symbolic sums} \label{sec:Algorithmic verification of formulas involving symbolic sums}
In this section we study how identities involving symbolic sums can be established. The results depend on the following algorithm (compare \cite{SZ}, {\sc Maple} function {\tt cauchyproduct}):
\begin{algorithm} {\bf (Find recurrence equation for symbolic sums)} \label{algo:symbolic sums} {\rm Let $f_n(x)$ form an admissible family, and let $s_n(x)$ denote the symbolic sum $s_n(x):=\sum\limits_{k=0}^n f_k(x)$. Then the following algorithm generates a recurrence equation for $s_n$: \begin{enumerate} \item[(a)] {\tt re:=FindRecursion(f,k)}: \\ Calculate the simple recurrence equation {\tt re} of $f_k$, see \cite{Koe92}, \S 11. \item[(b)] {\tt de1:=REtoDE(re1,f,k,F,z)}: \\ Transfer the recurrence equation {\tt re} into the corresponding differential equation {\tt de1} valid for the generating function $F(z):=\sum\limits_{k=0}^\infty f_k(x)\,z^k$, see \cite{Koe92}, \S 11. \item[(c)] {\tt de2:=F(z)+(z-1)*F'(z)=0}: \\ Let {\tt de2} be the differential equation corresponding to the function \[ G(z):=\sum\limits_{k=0}^\infty g_k z^k=\sum\limits_{k=0}^\infty z^k= \frac{1}{1-z} \;. \] \item[(d)] {\tt de:=ProductDE(de1,de2,F,z)}: \\ Calculate the simple differential equation {\tt de} corresponding to the product $H(z):=F(z)\,G(z)$, see \cite{KS}, Theorem 3 (d). This differential equation has the order of {\tt de1}.
\item[(e)] {\tt re:=DEtoRE(de,F,z,s,n)}:\\ Transfer the differential equation {\tt de} into the corresponding recurrence equation {\tt re} for the coefficient $s_n$ of $H(z)$, see \cite{Koe92}, \S~6. \end{enumerate} } \end{algorithm}
\par
\noindent{{\sl Proof:}}\hspace{5mm} Parts (a), (b) and (e) of the algorithm are described precisely in \cite{Koe92}. The rest follows from the Cauchy product representation \[ H(z)=F(z)\,G(z)=\sum_{n=0}^\infty \left(\sum_{k=0}^n f_k\,g_{n-k}\right)\,z^n =\sum_{n=0}^\infty \left(\sum_{k=0}^n f_k\right)\,z^n \] of the product function $F(z)\,G(z)$.
$\Box$\par
\noindent As an example we consider the sum $\sum\limits_{k=0}^n L_{k}^{(\alpha)}(x)$. We get stepwise:
{\small \begin{verbatim} In[40]:= re=FindRecursion[LaguerreL[k,alpha,x],k]
Out[40]= (-1 + alpha + k) a[-2 + k] + (1 - alpha - 2 k + x) a[-1 + k] +
> k a[k] == 0
In[41]:= de1=REtoDE[re,a,k,F,z]
Out[41]= (-1 - alpha + x + z + alpha z) F[z] + (-1 + z)^2 F'[z] == 0
In[42]:= de2=F[z]+(z-1)*F'[z]==0;
In[43]:= de=ProductDE[de1,de2,F,z]
Out[43]= (-2 - alpha + x + 2 z + alpha z) F[z] + (-1 + z)^2 F'[z] == 0
In[44]:= DEtoRE[de,F,z,s,n]
Out[44]= (2 + alpha + n) s[n] + (-4 - alpha - 2 n + x) s[1 + n] +
> (2 + n) s[2 + n] == 0 \end{verbatim} }\noindent or by a single statement
{\small \begin{verbatim} In[45]:= re=SymbolicSumRE[LaguerreL[k,alpha,x],k,n]
Out[45]= (2 + alpha + n) a[n] + (-4 - alpha - 2 n + x) a[1 + n] +
> (2 + n) a[2 + n] == 0 \end{verbatim} }\noindent and substituting $n$ by $n-2$
{\small \begin{verbatim} In[46]:= Simplify[re /. n->n-2]
Out[46]= (alpha + n) a[-2 + n] + (-alpha - 2 n + x) a[-1 + n] + n a[n] == 0 \end{verbatim} }\noindent On the other hand, the calculation
{\small \begin{verbatim} In[47]:= FindRecursion[LaguerreL[n,alpha+1,x],n]
Out[47]= (alpha + n) a[-2 + n] + (-alpha - 2 n + x) a[-1 + n] + n a[n] == 0 \end{verbatim} }\noindent shows that the left and right hand sides of the identity \begin{equation} \sum_{k=0}^n L_k^{(\alpha)}(x)=L_n^{(\alpha+1)}(x) \label{eq:example identity} \end{equation} (see e.\ g.\ \cite{Tri}, VI (1.16)) satisfy the same recurrence equation.
In our example identity two initial values remain to be considered \[ L_0^{(\alpha)}(x)=L_0^{(\alpha+1)}(x)=1 \quad\quad\mbox{and}\quad\quad L_0^{(\alpha)}(x)+L_1^{(\alpha)}(x)=L_1^{(\alpha+1)}(x)=2 + \alpha - x \] that trivially are established.
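The full identity (\ref{eq:example identity}) can additionally be spot-checked numerically; the following Python sketch (our own illustration, using the standard three-term recurrence for the generalized Laguerre polynomials) evaluates both sides at a sample point:

```python
def laguerre(n, alpha, x):
    # three-term recurrence: (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}
    prev, cur = 1.0, 1.0 + alpha - x
    if n == 0:
        return prev
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x)*cur - (k + alpha)*prev)/(k + 1)
    return cur

alpha, x = 0.5, 1.2
for n in range(8):
    lhs = sum(laguerre(k, alpha, x) for k in range(n + 1))
    rhs = laguerre(n, alpha + 1, x)
    assert abs(lhs - rhs) < 1e-9, (n, lhs, rhs)
print("sum_{k<=n} L_k^(alpha)(x) == L_n^(alpha+1)(x) verified for n = 0,...,7")
```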
Thus Algorithm~\ref{algo:symbolic sums} shows the applicability of Algorithm~\ref{algo:Verification of identities} if symbolic sums occur in the expressions involved. This is summarized by
\begin{algorithm} {\bf (Verification of identities)} {\rm With Algorithms~\ref{algo:Verification of identities} and \ref{algo:symbolic sums} identities involving symbolic sums can be verified.
$\Box$ } \end{algorithm} We would like to mention that the function {\tt FindRecursion} is successful for composite $f_n$ as long as recurrence equations exist and are applied recursively. Here obviously no derivative rules are needed.
We note further that as a byproduct this algorithm in an obvious way can be generalized to sums $\sum\limits_{k=0}^n a_k\,b_{n-k}$ of the Cauchy product type. As an example, the algorithm generates the recurrence equation \begin{equation} 2 \,(1 + 2 n)\, s_n - (1 + n)\, s_{n+1} = 0 \label{eq:sumbinomial} \end{equation} for $s_n:=\sum\limits_{k=0}^n \ueber{n}{k}^2= n!^2 \sum\limits_{k=0}^n \frac{1}{k!^2}\frac{1}{(n-k)!^2}$, {\small \begin{verbatim} In[48]:= re=ConvolutionRESum[1/k!^2,1/k!^2,k,n]
Out[48]= 2 (1 + 2 n) a[n] - (1 + n)^3 a[1 + n] == 0
In[49]:= ProductRE[re,FindRecursion[n!^2,n],a,n]
Out[49]= 2 (1 + 2 n) a[n] + (-1 - n) a[1 + n] == 0 \end{verbatim} }\noindent compare \cite{Zei1}--\cite{Zei3}.
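The recurrence (\ref{eq:sumbinomial}), together with the closed form via the central binomial coefficient, is easily confirmed by direct computation (a Python illustration of ours):

```python
from math import comb

# s_n = sum of binomial(n, k)^2 over k = 0..n
s = [sum(comb(n, k)**2 for k in range(n + 1)) for n in range(12)]
for n in range(11):
    # recurrence (eq:sumbinomial): 2(1+2n) s_n - (1+n) s_{n+1} = 0
    assert 2*(1 + 2*n)*s[n] - (1 + n)*s[n + 1] == 0
    # closed form: s_n = binomial(2n, n)
    assert s[n] == comb(2*n, n)
print("recurrence and closed form for the squared binomial sum verified")
```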
Algorithm~\ref{algo:symbolic sums} may further be used to find a closed form representation of a symbolic sum in case the resulting term is hypergeometric:
\begin{algorithm} {\bf (Closed forms of hypergeometric symbolic sums)} {\rm Let $s_n:=\sum\limits_{k=0}^n f_k$ be a hypergeometric term, i.\ e.\ $\frac{s_{n+1}}{s_n}$ be a rational function, then the following procedure generates a closed form representation for $s_n$: \begin{enumerate} \item[(a)] {\tt re:=SymbolicSumRE(f,k,n)}: \\ Calculate the simple recurrence equation {\tt re} of $s_n$ using Algorithm~\ref{algo:symbolic sums}. \item[(b)] If {\tt re} is of the hypergeometric type, then solve it by the hypergeometric coefficient formula, else apply Petkovsek's algorithm to find the hypergeometric solution $s_n$ of {\tt re}.
$\Box$ \end{enumerate} } \end{algorithm} This result should be compared with the Gosper algorithm \cite{Gos}. Our procedure is an alternative decision procedure for the same purpose. Note that from the hypergeometricity of $s_n$ the hypergeometricity of $f_k$ follows \cite{Gos}, so that the first step of Algorithm~\ref{algo:symbolic sums} leads to a simple first order recurrence equation.
Applying our algorithm to our example case $s_n=\sum\limits_{k=0}^n \ueber{n}{k}^2$, we get from (\ref{eq:sumbinomial}), and the initial value $s_0=1$ the representation \[ s_n=4^n\,\frac{\left( \ed 2\right)_n}{n!}=\frac{(2n)!}{n!^2}=\ueber{2n}{n} \;. \] On the other hand, for $s_n=\sum\limits_{k=0}^n \ueber{n}{k}^3$, our procedure gives {\small \begin{verbatim} In[50]:= re=ConvolutionRESum[1/k!^3,1/k!^3,k,n]
Out[50]= 8 a[n] + (1 + n) (16 + 21 n + 7 n^2) a[1 + n] -
>    (1 + n) (2 + n)^5 a[2 + n] == 0
In[51]:= ProductRE[re,FindRecursion[n!^3,n],a,n]
Out[51]= 8 (1 + n)^2 a[n] + (16 + 21 n + 7 n^2) a[1 + n] - (2 + n)^2 a[2 + n] ==
>    0 \end{verbatim} }\noindent and by Petkovsek's algorithm it turns out that $s_n$ is not a hypergeometric term.
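The second order recurrence obtained here for $s_n=\sum_k\binom{n}{k}^3$ (the Franel numbers) can be confirmed by direct computation; the following Python sketch is our own check:

```python
from math import comb

# Franel numbers: f_n = sum of binomial(n, k)^3 over k = 0..n
f = [sum(comb(n, k)**3 for k in range(n + 1)) for n in range(13)]
for n in range(11):
    # the second order recurrence produced above
    assert 8*(1 + n)**2*f[n] + (16 + 21*n + 7*n**2)*f[n + 1] - (2 + n)**2*f[n + 2] == 0
print("second order recurrence for the cubed binomial sum verified")
```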
\end{document}
\begin{definition}[Definition:Standard Discrete Metric/Real Number Plane]
Let $\R^2$ be the real number plane.
The '''(standard) discrete metric''' on $\R^2$ is defined as:
:$\map {d_0} {x, y} := \begin {cases}
0 & : x = y \\
1 & : \exists i \in \set {1, 2}: x_i \ne y_i
\end {cases}$
where $x = \tuple {x_1, x_2}, y = \tuple {y_1, y_2} \in \R^2$.
\end{definition}
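As a quick sanity check of this definition, the metric axioms for $d_0$ can be verified on sample points of $\R^2$ (an illustrative Python sketch, not part of the definition itself):

```python
from itertools import product

def d0(x, y):
    # standard discrete metric on R^2: 0 iff the points agree in both coordinates
    return 0 if x == y else 1

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.5), (1.0, 2.5)]
for x, y, z in product(pts, repeat=3):
    assert d0(x, y) == d0(y, x)              # symmetry
    assert (d0(x, y) == 0) == (x == y)       # identity of indiscernibles
    assert d0(x, z) <= d0(x, y) + d0(y, z)   # triangle inequality
print("metric axioms hold on the sample points")
```

The triangle inequality holds because whenever $d_0(x, z) = 1$, the middle point $y$ cannot equal both $x$ and $z$, so the right hand side is at least $1$.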
Permutable prime
A permutable prime, also known as anagrammatic prime, is a prime number which, in a given base, can have its digits' positions switched through any permutation and still be a prime number. H. E. Richert, who is supposedly the first to study these primes, called them permutable primes,[1] but later they were also called absolute primes.[2]
Conjectured no. of terms: Infinite
First terms: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199
Largest known term: (10^8177207 − 1)/9
OEIS index: A258706 (absolute primes: every permutation of digits is a prime; only the smallest representatives of the permutation classes are shown)
Base 2
In base 2, only repunits can be permutable primes, because any 0 permuted to the ones place results in an even number. Therefore, the base 2 permutable primes are the Mersenne primes. The generalization can safely be made that for any positional number system, permutable primes with more than one digit can only have digits that are coprime with the radix of the number system. One-digit primes, meaning any prime below the radix, are always trivially permutable.
Base 10
In base 10, all the permutable primes with fewer than 49,081 digits are known
2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373, 733, 919, 991, R19 (1111111111111111111), R23, R317, R1031, R49081, ... (sequence A003459 in the OEIS)
Of the above, there are 16 unique permutation sets, with smallest elements
2, 3, 5, 7, R2, 13, 17, 37, 79, 113, 199, 337, R19, R23, R317, R1031, ... (sequence A258706 in the OEIS)
Note Rn := (10^n − 1)/9 is a repunit, a number consisting only of n ones (in base 10). Any repunit prime is a permutable prime with the above definition, but some definitions require at least two distinct digits.[3]
All permutable primes of two or more digits are composed from the digits 1, 3, 7, 9, because no prime number except 2 is even, and no prime number besides 5 is divisible by 5. It is proven[4] that no permutable prime exists which contains three different digits among the four digits 1, 3, 7, 9, as well as that there exists no permutable prime composed of two or more of each of two digits selected from 1, 3, 7, 9.
There is no n-digit permutable prime for 3 < n < 6·10^175 which is not a repunit.[1] It is conjectured that there are no non-repunit permutable primes other than the eighteen listed above. They can be split into seven permutation sets:
{13, 31}, {17, 71}, {37, 73}, {79, 97}, {113, 131, 311}, {199, 919, 991}, {337, 373, 733}.
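The base-10 lists above can be reproduced by brute force. A short Python sketch (our own illustration; digit permutations are interpreted as usual, with leading zeros simply lowering the value):

```python
from itertools import permutations

def is_prime(n):
    # simple trial division, adequate for small n
    if n < 2:
        return False
    i = 2
    while i*i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_permutable(n):
    # every permutation of the decimal digits must be prime
    digits = str(n)
    return all(is_prime(int(''.join(p))) for p in set(permutations(digits)))

found = [n for n in range(2, 1000) if is_permutable(n)]
assert found == [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97,
                 113, 131, 199, 311, 337, 373, 733, 919, 991]
print(found)
```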
Base 12
In base 12, the smallest elements of the unique permutation sets of the permutable primes with fewer than 9,739 digits are known (using inverted two and three for ten and eleven, respectively)
2, 3, 5, 7, Ɛ, R2, 15, 57, 5Ɛ, R3, 117, 11Ɛ, 555Ɛ, R5, R17, R81, R91, R225, R255, R4ᘔ5, ...
There is no n-digit permutable prime in base 12 for 4 < n < 12^144 which is not a repunit. It is conjectured that there are no non-repunit permutable primes in base 12 other than those listed above.
In base 10 and base 12, every permutable prime is a repunit or a near-repdigit, that is, it is a permutation of the integer P(b, n, x, y) = xxxx...xxxy (n digits, in base b) where x and y are digits which are coprime to b. Besides, x and y must also be coprime (since if a prime p divides both x and y, then p also divides the number), so if x = y, then x = y = 1. (This is not true in all bases, but exceptions are rare and could be finite in any given base; the only exceptions below 10^9 in bases up to 20 are: 139 in base 11, 36A in base 11, 247 in base 13, 78A in base 13, 29E in base 19 (M. Fiorentini, 2015).)
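The listed exceptions can be confirmed directly, for instance 139 in base 11 (digits 1, 3, 9, all distinct, hence not a near-repdigit) and 29E in base 19 (with E = 14). A Python sketch of ours:

```python
from itertools import permutations

def is_prime(n):
    # simple trial division, adequate for small n
    if n < 2:
        return False
    i = 2
    while i*i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def value(digits, base):
    # interpret a digit tuple as a number in the given base
    v = 0
    for d in digits:
        v = v*base + d
    return v

def permutable_in_base(digits, base):
    # every digit permutation must yield a prime
    return all(is_prime(value(p, base)) for p in set(permutations(digits)))

# 139 in base 11: three distinct digits, yet every permutation is prime
assert permutable_in_base((1, 3, 9), 11)
# 29E in base 19 is another of the listed exceptions
assert permutable_in_base((2, 9, 14), 19)
print("exceptional permutable primes confirmed")
```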
Arbitrary bases
Let P(b, n, x, y) be a permutable prime in base b and let p be a prime such that n ≥ p. If b is a primitive root mod p, and p does not divide x or |x − y|, then n is a multiple of p − 1. (Since b is a primitive root mod p and p does not divide |x − y|, the p numbers xxxx...xxxy, xxxx...xxyx, xxxx...xyxx, ..., xxxx...yxxx...xxxx (only the digit at the b^(p−2) place is y, the others are all x), and xxxx...xxxx (the repdigit with n xs) are pairwise distinct mod p. That is, one is 0, another is 1, another is 2, ..., and another is p − 1. Thus, since the first p − 1 numbers are all primes, the last number (the repdigit with n xs) must be divisible by p. Since p does not divide x, p must divide the repunit with n 1s. Since b is a primitive root mod p, the multiplicative order of b mod p is p − 1. Thus, n must be divisible by p − 1.)
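The parenthetical residue argument can be illustrated with concrete numbers, say b = 10, p = 7, x = 1, y = 3 and n = 7 (an illustrative Python sketch):

```python
# place the digit y at positions 10^0, ..., 10^(p-2) of the repdigit xx...x,
# and include the repdigit itself; the p residues mod p are pairwise distinct.
b, p, x, y, n = 10, 7, 1, 3, 7
repdigit = x*(b**n - 1)//(b - 1)            # 1111111 for x = 1, n = 7
numbers = [repdigit + (y - x)*b**i for i in range(p - 1)] + [repdigit]
residues = {m % p for m in numbers}
assert len(residues) == p                   # all of 0, 1, ..., p-1 occur
print(sorted(residues))
```

Distinctness follows because the powers b^0, ..., b^(p−2) are pairwise distinct mod p when b is a primitive root mod p.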
Thus, if b = 10, the digits coprime to 10 are {1, 3, 7, 9}. Since 10 is a primitive root mod 7, if n ≥ 7, then either 7 divides x (in this case, x = 7, since x ∈ {1, 3, 7, 9}) or |x − y| (in this case, x = y = 1, since x, y ∈ {1, 3, 7, 9}. That is, the prime is a repunit) or n is a multiple of 7 − 1 = 6. Similarly, since 10 is a primitive root mod 17, if n ≥ 17, then either 17 divides x (not possible, since x ∈ {1, 3, 7, 9}) or |x − y| (in this case, x = y = 1, since x, y ∈ {1, 3, 7, 9}. That is, the prime is a repunit) or n is a multiple of 17 − 1 = 16. Besides, 10 is also a primitive root mod 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193, ..., so n ≥ 17 is essentially impossible (since for these primes p, if n ≥ p, then n is divisible by p − 1), and if 7 ≤ n < 17, then x = 7, or n is divisible by 6 (the only possible n is 12).
If b = 12, the digits coprime to 12 are {1, 5, 7, 11}. Since 12 is a primitive root mod 5, if n ≥ 5, then either 5 divides x (in this case, x = 5, since x ∈ {1, 5, 7, 11}) or |x − y| (in this case, either x = y = 1 (that is, the prime is a repunit) or x = 1, y = 11 or x = 11, y = 1, since x, y ∈ {1, 5, 7, 11}) or n is a multiple of 5 − 1 = 4. Similarly, since 12 is a primitive root mod 7, if n ≥ 7, then either 7 divides x (in this case, x = 7, since x ∈ {1, 5, 7, 11}) or |x − y| (in this case, x = y = 1, since x, y ∈ {1, 5, 7, 11}. That is, the prime is a repunit) or n is a multiple of 7 − 1 = 6. Similarly, since 12 is a primitive root mod 17, if n ≥ 17, then either 17 divides x (not possible, since x ∈ {1, 5, 7, 11}) or |x − y| (in this case, x = y = 1, since x, y ∈ {1, 5, 7, 11}. That is, the prime is a repunit) or n is a multiple of 17 − 1 = 16.
Besides, 12 is also a primitive root mod 31, 41, 43, 53, 67, 101, 103, 113, 127, 137, 139, 149, 151, 163, 173, 197, ..., so n ≥ 17 is essentially impossible (since for these primes p, if n ≥ p, then n is divisible by p − 1), and if 7 ≤ n < 17, then x = 7 (in this case, since 5 divides neither x nor x − y, n must be divisible by 4) or n is divisible by 6 (the only possible n is 12).
References
1. Richert, Hans-Egon (1951). "On permutable primtall". Norsk Matematiske Tiddskrift. 33: 50–54. Zbl 0054.02305.
2. Bhargava, T.N.; Doyle, P.H. (1974). "On the existence of absolute primes". Math. Mag. 47 (4): 233. doi:10.1080/0025570X.1974.11976408. Zbl 0293.10006.
3. Chris Caldwell, The Prime Glossary: permutable prime at The Prime Pages.
4. A.W. Johnson, "Absolute primes," Mathematics Magazine 50 (1977), 100–103.
\begin{document}
\keywords{metric inversion, bi-Lipschitz homogeneity, Carnot groups} \subjclass[2010]{53C17, 30L10, 30L05}
\begin{abstract} We characterize Carnot groups admitting a $1$-quasiconformal metric inversion as the Lie groups of Heisenberg type whose Lie algebras satisfy the $J^2$-condition, thus characterizing a special case of inversion invariant bi-Lipschitz homogeneity. A more general characterization of inversion invariant bi-Lipschitz homogeneity for certain non-fractal metric spaces is also provided. \end{abstract}
\title{Invertible Carnot Groups}
\section{Introduction}\label{S:intro}
In \cite[Theorem 5.1]{CDKR-Heisenberg}, the authors characterize the nilpotent components of Iwasawa decompositions of real rank one simple Lie groups by the fact that they are of Heisenberg type and admit certain conformal inversions. The conformal inversions of \cite[Theorem 5.1]{CDKR-Heisenberg} generalize the usual M\"obius inversions of Euclidean space. Both of these notions of inversion are generalized in turn by the concept of \textit{metric inversion} as studied in \cite{BHX-inversions}. In the present paper, we study Carnot groups that admit such metric inversions.
The Carnot groups of \cite{CDKR-Heisenberg} are equipped with a left-invariant gauge distance that is bi-Lipschitz equivalent to a left-invariant sub-Riemannian (or more generally, sub-Finsler) distance. The study of such left-invariant distances can be generalized by the study of metric spaces that admit a transitive family of uniformly bi-Lipschitz self-homeomorphisms. These spaces are said to be \textit{uniformly bi-Lipschitz homogeneous}.
The concepts of metric inversion and bi-Lipschitz homogeneity are combined in the notion of \textit{inversion invariant bi-Lipschitz homogeneity} as studied in \cite{Freeman-iiblh} and \cite{Freeman-params}. In particular, \cite[Theorem 2.4]{Freeman-params} demonstrates that certain inversion invariant bi-Lipschitz homogeneous geodesic spaces are bi-Lipschitz equivalent to Carnot groups (cf. \cite{Ledonne-geodesic}, \cite{Ledonne-tangents}). However, \cite[Theorem 2.4]{Freeman-params} does not provide a characterization of Carnot groups via inversion invariant bi-Lipschitz homogeneity. Indeed, in light of \cite[Theorem 5.1]{CDKR-Heisenberg}, the invertibility of a Carnot group appears to be a very restrictive condition. The following theorem identifies the Carnot groups admitting a metric inversion under the additional assumption that the inversion is $1$-quasiconformal (see \rf{S:defs} and \rf{S:heisenberg} for relevant definitions).
\begin{theorem}\label{T:main} Suppose ${\mathfont G}$ is a sub-Riemannian Carnot group. The group ${\mathfont G}$ admits a $1$-quasiconformal metric inversion if and only if it is isomorphic to a generalized Heisenberg group. \end{theorem}
While the assumption of $1$-quasiconformality is strong, it is not altogether unnatural. Indeed, the inversions of \cite[Theorem 5.1]{CDKR-Heisenberg} are $1$-quasiconformal. Furthermore, \rf{T:main} illustrates a special case of the characterization of conformal maps on Carnot groups recently provided by \cite[Theorem 4.1]{CO14}. One should note, however, that the methods used to prove \rf{T:main} are somewhat different from those implemented in \cite{CO14}.
In general, metric inversions are $K$-quasiconformal, with $1\leq K<+\infty$ (see \rf{S:defs}). One can obtain an analogue of \rf{T:main} for general metric inversions if more information is assumed about the Carnot groups in question. In particular, the following fact is contained in recent work of Xiangdong Xie (see \cite{Xie-non-rigid}, \cite{Xie-rigidity}, and \cite{Xie-filiform} for relevant definitions and results).
\begin{fact}\label{F:xie} Suppose ${\mathfont G}$ is a non-rigid sub-Riemannian Carnot group. If ${\mathfont G}$ is not contained in one of the following three classes of groups, then it does not admit a metric inversion: \begin{enumerate}
\item{Euclidean groups}
\item{Heisenberg product groups}
\item{Complex Heisenberg product groups} \end{enumerate} \end{fact}
Let ${\mathfont G}$ be a non-rigid Carnot group that is not contained in one of the three classes listed above. Let $\hat{\mathfont G}$ denote a metric sphericalization of ${\mathfont G}$ (see \rf{S:defs}). It follows from the aforementioned work of Xie that any quasiconformal self-homeomorphism $f:\hat{\mathfont G}\to\hat{\mathfont G}$ must permute left cosets of certain proper subgroups of ${\mathfont G}$, and therefore must fix the point at infinity. \rf{F:xie} then follows from the observation that metric inversions do not fix the point at infinity.
While \rf{T:main} falls within the general study of inversion invariant bi-Lipschitz homogeneity, it exhibits a special case of this property in the particular setting of Carnot groups. Note that (non-abelian) Carnot groups are \textit{fractal} in the sense that their topological and Hausdorff dimensions do not agree. When we restrict ourselves to non-fractal metric spaces, we can use recent work of Kinneberg (see \cite{Kinneberg-fractal}) to obtain the following result.
\begin{theorem}\label{T:euclidean} Suppose $X$ is a proper and connected metric space whose Hausdorff and topological dimensions are both equal to $n\in{\mathfont N}$. The space $X$ is inversion invariant bi-Lipschitz homogeneous if and only if it is bi-Lipschitz equivalent to ${\mathfont R}^n$ (when $X$ is unbounded) or ${\mathfont S}^n$ (when $X$ is bounded). \end{theorem}
In fact, an analogue of \rf{T:euclidean} holds when the Hausdorff dimension of $X$ is strictly larger than its topological dimension, but this requires additional assumptions about the one-dimensional metric structure of $X$ (cf.\,\cite[Theorem 1.5]{Kinneberg-fractal}).
\begin{acknowledgments} The proof of \rf{T:main} was inspired by observations made in \cite[Section 6.3]{BS13} about the Tits classification of $2$-transitive group actions. The author is grateful to Xiangdong Xie for many helpful discussions during the preparation of this paper, and to the anonymous referees for their helpful suggestions.
In this revised version of the paper, we replace Lemma 4.3 (in the indexing of the previous version) with a clarified proof of Theorem 1.1. \end{acknowledgments}
\section{Notation and Definitions}\label{S:defs}
Given two positive numbers $A$ and $B$, we write $A\simeq_C B$ to indicate that $C^{-1}A\leq B\leq CA$, where $1\leq C<+\infty$ is independent of $A$ and $B$. When the quantity $C$ is understood or irrelevant, we simply write $A\simeq B$.
Given a metric space $(X,d)$, $r>0$, and $x\in X$ we write $B(x;r):=\{y\in X:d(x,y)<r\}$ to denote an open ball in $X$ centered at $x$ of radius $r$. A metric space $X$ is said to be \textit{proper} if the closures of open balls in $X$ are compact.
Given $1\leq L<+\infty$, an embedding $f:X\to Y$ is \textit{$L$-bi-Lipschitz} provided that for all points $x,y\in X$ we have $d_Y(f(x),f(y))\simeq_L d_X(x,y)$. Two spaces $X$ and $Y$ are \textit{$L$-bi-Lipschitz equivalent} if there exists an $L$-bi-Lipschitz homeomorphism between the two spaces. A space $X$ is \textit{bi-Lipschitz homogeneous} if there exists a collection $\mathcal{F}$ of bi-Lipschitz self-homeomorphisms of $X$ such that, for every pair $x,y\in X$, there exists $f\in \mathcal{F}$ with $f(x)=y$. When every map in $\mathcal{F}$ is $L$-bi-Lipschitz we say that $X$ is $L$-bi-Lipschitz homogeneous, or \textit{uniformly bi-Lipschitz homogeneous} when the particular distortion bound is not important.
An embedding $f:X\to Y$ is \textit{$\theta$-quasim\"obius} if $\theta:[0,+\infty)\to[0,+\infty)$ is a homeomorphism and, for all quadruples $x,y,z,w$ of distinct points in $X$, we have \[\frac{d(f(x),f(y))d(f(z),f(w))}{d(f(x),f(z))d(f(y),f(w))}\leq \theta\left(\frac{d(x,y)d(z,w)}{d(x,z)d(y,w)}\right).\] When there exists a constant $1\leq C<+\infty$ such that $\theta(t)=Ct$, we say that $f$ is \textit{strongly quasim\"obius}. See \cite{Kinneberg-fractal} for a detailed study of such mappings.
An embedding $f:X\to Y$ is \textit{metrically $K$-quasiconformal} if $1\leq K<+\infty$ and, for all $x\in X$, \[\limsup_{r\to0}\frac{\sup\{d(f(x),f(y)):d(x,y)\leq r\}}{\inf\{d(f(x),f(z)):d(x,z)\geq r\}}\leq K.\] In the present paper, since no other definition of quasiconformality is used, we drop the qualifier `metrically,' and simply refer to $K$-quasiconformal mappings.
Given an unbounded metric space $(X,d)$ and a point $p\in X$, we define $\hat{X}:=X\cup\{\infty\}$ and $X_p:=X\setminus\{p\}$. We say that a homeomorphism $\varphi:X_p\to X_p$ is an \textit{$L$-metric inversion} provided that there exists a constant $1\leq L<+\infty$ such that for any $x,y\in X_p$, \[d(\varphi(x),\varphi(y))\simeq_L\frac{d(x,y)}{d(x,p)d(y,p)}.\] We extend $\varphi$ to $\hat{X}$ by the definitions $\varphi(\infty):=p$ and $\varphi(p):=\infty$. We say that \textit{$(X,d)$ admits a metric inversion} provided that there exists a point $p\in X$ such that an $L$-metric inversion exists on $X_p$. We note that the inversions of \cite[Theorem 4.2]{CDKR-Heisenberg} are $1$-metric inversions. In general, while $L$-metric inversions are $L^2$-quasiconformal, they need not be $1$-quasiconformal.
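For orientation, we record the standard model example: the classical M\"obius inversion of punctured Euclidean space is a $1$-metric inversion at the origin. Indeed, setting $\iota(x):=x/|x|^2$ on ${\mathfont R}^n\setminus\{0\}$, one computes \[|\iota(x)-\iota(y)|^2=\frac{1}{|x|^2}-\frac{2\langle x,y\rangle}{|x|^2|y|^2}+\frac{1}{|y|^2}=\frac{|x|^2-2\langle x,y\rangle+|y|^2}{|x|^2|y|^2}=\frac{|x-y|^2}{|x|^2|y|^2},\] so that $|\iota(x)-\iota(y)|=|x-y|/(|x||y|)$ with no multiplicative error.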
The definition of metric inversion given above is closely related to the metric inversion described in \cite{BHX-inversions}. In \cite{BHX-inversions}, the authors construct a distance $d_p$ on $\hat{X}_p$ such that, for any $x,y\in X_p$, \[\frac{1}{4}\cdot\frac{d(x,y)}{d(x,p)d(y,p)}\leq d_p(x,y)\leq\frac{d(x,y)}{d(x,p)d(y,p)},\] with obvious analogues when $x$ or $y$ is equal to $\infty$. They also prove that the identity map from $(X_p,d)$ to $(X_p,d_p)$ is $16t$-quasim\"obius (\cite[Lemma 3.1]{BHX-inversions}) and $1$-quasiconformal at non-isolated points (\cite[Proposition 4.1]{BHX-inversions}). While the definition of metric inversion in \cite{BHX-inversions} is valid for both bounded and unbounded spaces, we note that if $X$ is an unbounded space admitting an $L$-metric inversion $\varphi$ at $p$, then $\varphi:(X_p,d_p)\to(X_p,d)$ is a $4L$-bi-Lipschitz homeomorphism.
A related concept is that of \textit{metric sphericalization}. Again following \cite{BHX-inversions}, the metric sphericalization of an unbounded metric space $(X,d)$ at some basepoint $p\in X$ is denoted by $(\hat{X},\hat{d}_p)$. Here $\hat{d}_p$ is a distance such that, for any $x,y\in \hat{X}$, \begin{equation}\label{E:sphere} \frac{1}{4}\cdot\frac{d(x,y)}{(1+d(x,p))(1+d(y,p))}\leq\hat{d}_p(x,y)\leq\frac{d(x,y)}{(1+d(x,p))(1+d(y,p))}. \end{equation} As with metric inversion (see \cite[Section 3.B]{BHX-inversions}), the identity map from $(X,d)$ to $(\hat{X},\hat{d}_p)$ is $16t$-quasim\"obius and $1$-quasiconformal at non-isolated points.
Following \cite{Freeman-iiblh} and \cite{Freeman-params}, a metric space $(X,d)$ is \textit{inversion invariant bi-Lipschitz homogeneous} provided that both $(X,d)$ and $(\hat{X}_p,d_p)$ are uniformly bi-Lipschitz homogeneous. This definition is independent of $p\in X$, up to a quantitative change in parameters. One can verify that, when $(X,d)$ is unbounded, this definition is equivalent to the statement that $(X,d)$ is bi-Lipschitz homogeneous and $(X,d)$ admits a metric inversion.
Inversion invariant bi-Lipschitz homogeneous metric spaces are often \textit{Ahlfors $Q$-regular} (see \cite[Theorem 1.1]{Freeman-iiblh}). Given $0< Q<+\infty$, a metric space $(X,d)$ with Borel measure $\mu$ is Ahlfors $Q$-regular provided that there exists a constant $1\leq C<+\infty$ such that $\mu(B(x;r))\simeq_C r^Q$ for every $x\in X$ and $0<r\leq\diam(X)$.
\textit{Carnot groups} are examples of Ahlfors $Q$-regular metric spaces. A Carnot group $\mathbb{G}$ of step $n\in{\mathfont N}$ is a connected, simply connected, nilpotent Lie group with stratified Lie algebra $\Lie(\mathbb{G})=V_1\oplus V_2\oplus\dots\oplus V_n$. The \textit{layers} $V_i$ are such that, for $1\leq j\leq n-1$, we have $[V_j,V_1]=V_{j+1}$. Here $[X,Y]=XY-YX$ denotes the Lie bracket. We require $V_n\not=\{0\}$, and that for each $1\leq j\leq n$ we have $[V_j,V_{n}]=\{0\}$. We refer to $V_1$ as the \textit{horizontal layer} of $\Lie(\mathbb{G})$. By left-translation $V_1$ is extended to a (left-invariant) distribution $\Delta$ on $\mathbb{G}$, referred to as the \textit{horizontal distribution}.
When $\Delta$ is equipped with a left-invariant norm $|\cdot|$, we define the associated \textit{sub-Finsler distance $d_{SF}$} on $\mathbb{G}$ as follows. Let $\gamma:[0,1]\to\mathbb{G}$ be an absolutely continuous path. The path $\gamma$ is \textit{horizontal} provided that for almost every $t\in[0,1]$ we have $\dot\gamma(t)\in \Delta$. The $d_{SF}$-length of a horizontal path $\gamma$ is $\ell_{SF}(\gamma):=\int_0^1|\dot\gamma(t)|\,dt$. We then define \begin{equation*}\label{E:cc-dist} d_{SF}(x,y):=\inf\{\ell_{SF}(\gamma):\,\gamma \text{ a horizontal path such that } \gamma(0)=x,\,\gamma(1)=y\}. \end{equation*} By well-known results of Chow and Rashevskii, $d_{SF}$ defines a geodesic distance on $\mathbb{G}$. Thus a \textit{sub-Finsler Carnot group} is a Carnot group equipped with a distance $d_{SF}$. When the norm on $\Delta$ is derived from an inner product, we obtain a \textit{sub-Riemannian distance} on ${\mathfont G}$, denoted by $d_{SR}$.
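We remark (as a standard fact, not needed in the sequel) that a Carnot group equipped with Haar measure and a sub-Finsler distance is Ahlfors $Q$-regular, where $Q$ is the homogeneous dimension \[Q:=\sum_{i=1}^n i\cdot\dim(V_i).\] For instance, the first Heisenberg group has $Q=1\cdot2+2\cdot1=4$, while its topological dimension is $3$; this quantifies the fractal behavior of non-abelian Carnot groups noted in \rf{S:intro}.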
Since the norm on $\Delta$ is left-invariant, for any element $g\in {\mathfont G}$, the left translation $\ell_g(x):=gx$ is an isometry of $({\mathfont G},d_{SF})$. Sub-Finsler distance is also homogeneous with respect to \textit{canonical dilations}. For an element $x\in {\mathfont G}$, let $X:=\log(x)=\sum_{i=1}^nX_i$, where $X_i\in V_i$. For any $t>0$, define the canonical dilation $\delta_t:({\mathfont G},d_{SF})\to({\mathfont G},d_{SF})$ as $\delta_t(x)=\exp\left(\sum_{i=1}^nt^iX_i\right)$. For points $x,y\in{\mathfont G}$, we have $d_{SF}(\delta_t(x),\delta_t(y))=t\,d_{SF}(x,y)$.
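For example, in the first Heisenberg group with exponential coordinates $(x,y,z)$ corresponding to the basis $X,Y\in V_1$ and $Z\in V_2$ with $[X,Y]=Z$, the canonical dilation takes the form \[\delta_t(x,y,z)=(tx,ty,t^2z),\] and one checks directly from the definitions that $\ell_{SF}(\delta_t\circ\gamma)=t\,\ell_{SF}(\gamma)$ for any horizontal path $\gamma$, from which the homogeneity of $d_{SF}$ follows.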
\section{Generalized Heisenberg Groups}\label{S:heisenberg}
Let $\mathfrak{n}$ denote a Lie algebra endowed with an inner product $\langle\cdot,\cdot\rangle$ and accompanying norm $|\cdot|$. Suppose that $\mathfrak{n}$ is either abelian or stratified of step two. In the step two case, this means that there exist non-trivial complementary orthogonal subspaces $\mathfrak{v}$ and $\mathfrak{z}$ such that $[\mathfrak{v},\mathfrak{v}]=\mathfrak{z}$ and $[\mathfrak{v},\mathfrak{z}]=\{0\}=[\mathfrak{z},\mathfrak{z}]$. For $X,Y\in\mathfrak{v}$ and $Z\in\mathfrak{z}$, let the map $J:\mathfrak{z}\to\End(\mathfrak{v})$ be defined via the formula $\langle J_ZX,Y\rangle=\langle Z,[X,Y]\rangle$. The algebra $\mathfrak{n}$ is of \textit{Heisenberg type} provided that, for all $X\in\mathfrak{v}$ and $Z\in\mathfrak{z}$, we have $|J_ZX|=|Z||X|$. Equivalently, $J_Z^2=-|Z|^2I$, where $I$ denotes the identity map. Various properties of the map $J$ are documented in \cite[Section 2(a)]{CDKR98}. A simply connected Lie group is said to be of Heisenberg type if its Lie algebra is of Heisenberg type.
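As a simple illustration, consider the first Heisenberg algebra: $\mathfrak{v}=\Span\{X,Y\}$ and $\mathfrak{z}=\Span\{Z\}$, with $\{X,Y,Z\}$ orthonormal and $[X,Y]=Z$. Then \[\langle J_ZX,Y\rangle=\langle Z,[X,Y]\rangle=1 \quad\text{and}\quad \langle J_ZX,X\rangle=\langle Z,[X,X]\rangle=0,\] so $J_ZX=Y$; similarly $J_ZY=-X$. Hence $J_Z^2=-I=-|Z|^2I$, and the algebra is of Heisenberg type.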
Given a Lie algebra $\mathfrak{n}$ of Heisenberg type, we say that $\mathfrak{n}$ satisfies the \textit{$J^2$-condition} provided that, for any $X\in\mathfrak{v}$ and any two orthogonal elements $Z,Z'\in\mathfrak{z}$, there exists some element $Z''\in\mathfrak{z}$ such that $J_Z J_{Z'}X=J_{Z''}X$. Note that if $\dim(\mathfrak{z})\in\{0,1\}$, this condition is vacuously satisfied.
We say that a Carnot group is a \textit{generalized Heisenberg group} if it is a Heisenberg group over ${\mathfont K}$, where (here and in the sequel) ${\mathfont K}$ denotes either the real numbers ${\mathfont R}$, complex numbers ${\mathfont C}$, quaternions ${\mathfont H}$, or octonions ${\mathfont O}$. These groups are defined as follows:
\begin{itemize}
\item{The Heisenberg group over ${\mathfont R}$, or the \textit{real Heisenberg group} $H_{\mathfont R}$, is ${\mathfont R}^n$.}
\item{The Heisenberg group over ${\mathfont C}$, or the \textit{complex Heisenberg group} $H_{\mathfont C}$, is the Carnot group with step two real Lie algebra $\mathfrak{n}=\mathfrak{v}\oplus \mathfrak{z}$, where $\mathfrak{v}:=\Span\{X_i,Y_i:1\leq i\leq n\}$ and $\mathfrak{z}:=\Span\{Z\}$. Equip $\mathfrak{n}$ with an inner product such that $\{X_i,Y_i,Z:1\leq i\leq n\}$ is an orthonormal basis. The only non-trivial bracket relations are $[X_i,Y_i]=Z$, for $1\leq i\leq n$.}
\item{The Heisenberg group over ${\mathfont H}$, or the \textit{quaternionic Heisenberg group} $H_{\mathfont H}$, is the Carnot group with step two real Lie algebra $\mathfrak{n}=\mathfrak{v}\oplus\mathfrak{z}$, where $\mathfrak{v}=\Span\{X_i,Y_i,V_i,W_i:1\leq i \leq n\}$ and $\mathfrak{z}=\Span\{Z_k:1\leq k \leq 3\}$. Equip $\mathfrak{n}$ with an inner product such that $\{X_i,Y_i,V_i,W_i,Z_k:1\leq i\leq n, 1\leq k \leq 3\}$ is an orthonormal basis. For $1\leq i \leq n$, the only nontrivial bracket relations are $[X_i,Y_i]=Z_1=[V_i,W_i]$, $[X_i,V_i]=Z_2=[W_i,Y_i]$, and $[X_i,W_i]=Z_3=[Y_i,V_i]$.}
\item{The Heisenberg group over ${\mathfont O}$, or the \textit{octonionic Heisenberg group} $H_{\mathfont O}$, is the Carnot group with step two real Lie algebra $\mathfrak{n}=\mathfrak{v}\oplus\mathfrak{z}$, where $\mathfrak{v}=\Span\{X_i:0\leq i\leq 7\}$ and $\mathfrak{z}=\Span\{Z_k:1\leq k \leq 7\}$. Equip $\mathfrak{n}$ with an inner product such that $\{X_i,Z_k:0\leq i\leq 7, 1\leq k \leq 7\}$ is an orthonormal basis. The only nontrivial bracket relations are $[X_0,X_k]=Z_k$ for $1\leq k\leq 7$ and $[X_i,X_j]=\varepsilon_{ijk}Z_k$, for $1\leq i,j,k\leq 7$. Here $\varepsilon$ is a completely antisymmetric tensor whose value is $+1$ when $ijk=124, 137, 156, 235, 267, 346, 457$.} \end{itemize}
Via exponential coordinates, parameterizations of the groups $H_{\mathfont K}$ can be obtained as follows: \begin{itemize}
\item{When ${\mathfont K}={\mathfont R}$, the abelian group $H_{\mathfont K}$ is equal to ${\mathfont R}^n$.}
\item{When ${\mathfont K}={\mathfont C}$, for each $1\leq i\leq n$, identify $x_iX_i+y_iY_i$ with $x_ie_0+y_ie_1\in{\mathfont C}$. Here $\{e_0,e_1\}$ is the canonical basis over ${\mathfont R}$ for ${\mathfont C}$. Thus $\Span\{X_i,Y_i:1\leq i\leq n\}$ is identified with ${\mathfont C}^n$. Identify $zZ$ with $ze_1\in\Im({\mathfont C})$.}
\item{When ${\mathfont K}={\mathfont H}$, for each $1\leq i\leq n$, identify $x_iX_i+y_iY_i+v_iV_i+w_iW_i$ with $x_ie_0+y_ie_1+v_ie_2+w_ie_3$. Here $\{e_i\}_{i=0}^3$ is the canonical basis over ${\mathfont R}$ for ${\mathfont H}$. Thus $\Span\{X_i,Y_i,V_i,W_i:1\leq i\leq n\}$ is identified with ${\mathfont H}^n$. For each $1\leq k\leq 3$, identify $z_kZ_k$ with $z_ke_k\in\Im({\mathfont H})$.}
\item{When ${\mathfont K}={\mathfont O}$, identify $\sum_{i=0}^7x_iX_i$ with $\sum_{i=0}^7x_ie_i$ and $\sum_{k=1}^7z_kZ_k$ with $\sum_{k=1}^7z_ke_k$. Here $\{e_i\}_{i=0}^7$ is the canonical basis over ${\mathfont R}$ for ${\mathfont O}$. Thus $\Span\{X_i:0\leq i\leq7\}$ is identified with ${\mathfont O}$ and $\Span\{Z_k:1\leq k \leq 7\}$ is identified with $\Im({\mathfont O})$.} \end{itemize}
Extending the above identifications by linearity allows us to parameterize $H_{\mathfont K}$ by ${\mathfont K}^n\oplus \Im({\mathfont K})$. When ${\mathfont K}={\mathfont R}$ we have $\Im({\mathfont K})=\{0\}$, and when ${\mathfont K}={\mathfont O}$ we have $n=1$. Let $(x,z)$ denote a point in $H_{\mathfont K}={\mathfont K}^n\oplus\Im({\mathfont K})$, where $x=(x_1,\dots,x_n)\in{\mathfont K}^n$ and $z\in\Im({\mathfont K})$. Via the Baker-Campbell-Hausdorff formula, for points $(x,z),(x',z')\in H_{\mathfont K}$, the group law reads \[(x,z)(x',z')=\left(x+x',z+z'-\frac{1}{2}\sum_{i=1}^n\Im\left(x_i\overline{x_i'}\right)\right).\]
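As a quick numerical sanity check (an illustration only, not part of the argument; the function names below are ours), one can verify that this product is associative, has $(0,0)$ as identity, and has $(x,z)^{-1}=(-x,-z)$, say for $n=1$ and ${\mathfont K}={\mathfont C}$, identifying $\Im({\mathfont C})$ with ${\mathfont R}$:

```python
# Sanity check of the Baker-Campbell-Hausdorff group law on H_C with n = 1,
# identifying Im(C) with R:
#   (x, z)(x', z') = (x + x', z + z' - (1/2) Im(x * conj(x'))).

def mult(p, q):
    """Group law on H_C = C x R."""
    (x, z), (xp, zp) = p, q
    return (x + xp, z + zp - 0.5 * (x * xp.conjugate()).imag)

def inv(p):
    """Group inverse: (x, z)^{-1} = (-x, -z)."""
    x, z = p
    return (-x, -z)

a, b, c = (1 + 2j, 0.3), (-0.5 + 1j, -1.2), (2 - 1j, 0.7)
lhs, rhs = mult(mult(a, b), c), mult(a, mult(b, c))
assert abs(lhs[0] - rhs[0]) + abs(lhs[1] - rhs[1]) < 1e-12  # associativity
x0, z0 = mult(a, inv(a))
assert abs(x0) + abs(z0) < 1e-12                            # inverses
```

The cancellation behind associativity is visible in the center component: both groupings produce $z+z'+z''-\frac{1}{2}\Im(x\overline{x'}+x\overline{x''}+x'\overline{x''})$.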
Let $m=\dim\left(\Im({\mathfont K})\right)$, so that $m\in\{0,1,3,7\}$. Given a canonical basis element $e_k\in \Im({\mathfont K})$, we define a map $L_k\in\End({\mathfont K}^n)$ such that, for $x\in{\mathfont K}^n$, $L_k(x):=e_kx=(e_kx_1,\dots,e_kx_n)$. In other words, $L_k$ denotes left multiplication in ${\mathfont K}^n$ by $e_k$. Passing through the above parameterization of $H_{\mathfont K}$ and extending by linearity, this gives rise to a map $J:\mathfrak{z}\to\End(\mathfrak{v})$ such that, for any $X,Y\in\mathfrak{v}$ and $Z\in\mathfrak{z}$, we have $\langle J_ZX,Y\rangle=\langle[X,Y],Z\rangle$. Furthermore, it is straightforward to verify that, for any $Z\in \mathfrak{z}$, the map $J_Z:\mathfrak{v}\to\mathfrak{v}$ satisfies $J_Z^2=-|Z|^2I$. For two orthogonal elements $Z,Z'\in\mathfrak{z}$, there exists $Z''\in\mathfrak{z}$ such that $J_ZJ_{Z'}=J_{Z''}$. When ${\mathfont K}={\mathfont H}$, this last observation follows from the associativity of left multiplication. In the case that ${\mathfont K}={\mathfont O}$, this follows from the fact that $\dim(\mathfrak{z})=\dim(\mathfrak{v})-1$. Thus we verify that generalized Heisenberg groups are of Heisenberg type and satisfy the $J^2$-condition.
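The identity $J_Z^2=-|Z|^2I$ can likewise be checked directly from the listed bracket relations; the following sketch (illustration only, with our own variable names) does so for the quaternionic case with $n=1$, building the structure constants from the relations given above:

```python
import numpy as np

# Structure constants of the quaternionic Heisenberg algebra with n = 1:
# basis (e_0, e_1, e_2, e_3) = (X, Y, V, W) of v, and (Z_1, Z_2, Z_3) of z.
# c[k, i, j] = <[e_i, e_j], Z_{k+1}>, antisymmetric in (i, j).
pairs = {0: [(0, 1), (2, 3)],  # [X, Y] = Z_1 = [V, W]
         1: [(0, 2), (3, 1)],  # [X, V] = Z_2 = [W, Y]
         2: [(0, 3), (1, 2)]}  # [X, W] = Z_3 = [Y, V]
c = np.zeros((3, 4, 4))
for k, ps in pairs.items():
    for i, j in ps:
        c[k, i, j], c[k, j, i] = 1.0, -1.0

# <J_Z e_i, e_j> = <[e_i, e_j], Z> gives (J_k)_{ji} = c[k, i, j].
J = [c[k].T for k in range(3)]

# Heisenberg-type condition J_Z^2 = -|Z|^2 I, tested on a random Z.
rng = np.random.default_rng(0)
z = rng.standard_normal(3)
Jz = sum(zk * Jk for zk, Jk in zip(z, J))
assert np.allclose(Jz @ Jz, -(z @ z) * np.eye(4))
```

The check for a general $Z$ succeeds because the maps $J_{Z_k}$ pairwise anticommute, so the cross terms in $J_Z^2$ cancel.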
For any Lie algebra $\mathfrak{n}$ and corresponding simply connected Lie group $N$ of Heisenberg type, one may construct the Lie algebra $\mathfrak{s}:=\mathfrak{n}\oplus\mathfrak{a}$, where $\mathfrak{a}$ is a one-dimensional Lie algebra with inner product. Let $\mathfrak{a}$ be spanned by the unit vector $H$. The Lie bracket on $\mathfrak{s}$ is determined by the requirements that $[H,X]=\frac{1}{2}X$ and $[H,Z]=Z$ for any $X\in \mathfrak{v}$ and any $Z\in\mathfrak{z}$. We extend the inner products on $\mathfrak{n}$ and $\mathfrak{a}$ to $\mathfrak{s}$ by requiring that $\mathfrak{n}$ is orthogonal to $\mathfrak{a}$. Proceeding as in \cite[Section 3(a)]{CDKR98}, one obtains the group $S:=\exp(\mathfrak{s})$ as a semidirect product $NA$, where $A:=\exp(\mathfrak{a})$. If we parameterize $S$ via $\mathfrak{v}\times\mathfrak{z}\times{\mathfont R}_+$ by identifying $(X,Z,t)$ with $\exp(X+Z)\exp(\log(t)H)\in S$, then an element $a_t=(0,0,t)\in A\subset S$ acts on $n=(X,Z,1)\in N\subset S$ by $a_t(n)=(t^{1/2}X,tZ,t)$. By translating the inner product on $\mathfrak{s}$, we obtain a left-invariant distance on $S$.
One can then proceed to construct the Siegel-type domain
\[D:=\left\{(X,Z,t):t-\frac{|X|^2}{4}>0\right\}\subset\mathfrak{v}\oplus\mathfrak{z}\oplus\mathfrak{a}.\] The domain $D$ can be explicitly identified with $S$, and under this identification $S$ acts simply transitively on $D$ by affine transformations (see \cite[Section 3(b)]{CDKR98}). We equip $D$ with the pull-back of the left-invariant distance on $S$. The group $N$ can be identified with the set
\[\partial D=\left\{\left(X,Z,\frac{|X|^2}{4}\right)\right\}\subset\mathfrak{v}\oplus\mathfrak{z}\oplus\mathfrak{a}.\]
Given $n=(x,z)=\exp(X+Z)\in N$, write \begin{equation}\label{E:gauge}
\|n\|:=\left(\frac{|X|^4}{16}+|Z|^2\right)^{1/4}. \end{equation}
The function $d_N(n,n'):=\|n'^{-1}n\|$ defines a distance on $N$ that is invariant under left multiplication. Furthermore, the action of $A$ extends to $\partial D$ such that for any $n,n'\in N$ and $a_t\in A$, we have $d_N(a_t(n),a_t(n'))=t^{1/2}d_N(n,n')$.
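Indeed, since $a_t(n)=\exp(t^{1/2}X+tZ)$ for $n=\exp(X+Z)$, the homogeneity of the gauge \rf{E:gauge} follows from the computation \[\|a_t(n)\|=\left(\frac{|t^{1/2}X|^4}{16}+|tZ|^2\right)^{1/4}=\left(t^2\left(\frac{|X|^4}{16}+|Z|^2\right)\right)^{1/4}=t^{1/2}\|n\|.\]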
When $N=H_{\mathfont K}$, the geometry of $D$ is identified by the following result (\cite{CDKR-Heisenberg}, \cite{CDKR98}, \cite[Theorem 4.1.9.A]{Vanhecke95}).
\begin{fact}\label{F:classification} Suppose that $D$ is the Siegel-type domain associated to a Lie algebra $\mathfrak{n}=\mathfrak{v}\oplus\mathfrak{z}$ of Heisenberg type. Let $m:=\dim(\mathfrak{z})$. The algebra $\mathfrak{n}$ satisfies the $J^2$-condition if and only if
\parbox[t]{\textwidth}{ \begin{enumerate}
\item[($m=0$)]{the space $D$ is isometric to real hyperbolic space.}
\item[($m=1$)]{the space $D$ is isometric to complex hyperbolic space.}
\item[($m=3$)]{the space $D$ is isometric to quaternionic hyperbolic space.}
\item[($m=7$)]{the space $D$ is isometric to octonionic hyperbolic space.} \end{enumerate}} In particular, $\mathfrak{n}$ satisfies the $J^2$-condition if and only if $\exp(\mathfrak{n})$ is isomorphic to $H_{\mathfont K}$. \end{fact}
Assume $N=H_{\mathfont K}$, and let $G_{\mathfont K}$ denote the isometry group of $D$. Then $H_{\mathfont K} AK$ is an Iwasawa decomposition of $G_{\mathfont K}$, where $K$ is the stabilizer of $(0,0,1)\in D$ and $A$ is as above. Writing $M$ to denote the centralizer of $A$ in $K$, \cite[Theorem 7.4]{CDKR98} provides the Bruhat decomposition $G_{\mathfont K}=(H_{\mathfont K} AM)\cup(H_{\mathfont K} \sigma H_{\mathfont K} AM)$. Here $\sigma$ is the geodesic inversion of $D$ described in \cite[Section 3(c)]{CDKR98}.
Via an appropriate Cayley transformation $C$, the domain $D$ can be identified with the unit ball $B\subset\mathfrak{v}\oplus\mathfrak{z}\oplus\mathfrak{a}$. This Cayley transformation $C:D\to B$ can be continuously extended to a homeomorphism between the one point compactification of $D\cup\partial D$ and the closed unit ball $\overline{B}$, where $C(0,0,0)=(0,0,1)$ and $C(\infty)=(0,0,-1)$. Via this identification, the action of $G_{\mathfont K}$ can be continuously extended to $\overline{B}$. The stabilizer of $(0,0,-1)\in\partial{B}$ is $H_{\mathfont K} AM$ (see the proof of \cite[Theorem 7.4]{CDKR98}). The stabilizer of both $(0,0,1)$ and $(0,0,-1)$ is $AM$. In this way we can view $G_{\mathfont K}$ as a group acting on $\hat H_{\mathfont K}=\partial B$. The subgroup $H_{\mathfont K} AM$ fixes the point at infinity $\infty\in\hat H_{\mathfont K}$, and the subgroup $AM$ fixes both the identity element $e\in H_{\mathfont K}$ and $\infty\in \hat H_{\mathfont K}$.
\section{Preliminary Facts and Lemmas}\label{S:prelims}
The following fact is a special case of \cite[Theorem 3.3]{Kramer-transitive}.
\begin{fact}\label{F:two-transitive} Let $G$ denote a group acting effectively and 2-transitively on a topological sphere. If $G$ is locally compact and $\sigma$-compact, then the identity component of $G$ is a simple Lie group isomorphic to either $G_{\mathfont K}$ (when ${\mathfont K}={\mathfont C}$, ${\mathfont H}$, or ${\mathfont O}$) or to an index two subgroup of $G_{\mathfont K}$ (when ${\mathfont K}={\mathfont R}$). \end{fact}
The following technical lemma will allow us to apply \rf{F:two-transitive} to prove \rf{T:main}.
\begin{lemma}\label{L:locally-compact} Suppose $X$ is a compact metric space and $G\subset\mathcal{C}(X,X)$ is a group of uniformly quasim\"obius self-homeomorphisms. The closure of $G$ is locally compact and $\sigma$-compact in the topology of uniform convergence. \end{lemma}
\begin{proof} Let $X^3$ denote the space of ordered triples from $X$ endowed with the product distance. Since $X$ is compact, $X^3$ is separable. Let $\{T_i\}$ denote a countable dense subset of $X^3$, where $T_i=(x_i,y_i,z_i)$. Given $T_i$, $T_j$, and $\varepsilon>0$, define \[B_{i,j}(\varepsilon):=\{g\in G:g(T_i)\subset N(T_j;\varepsilon)\},\] where, for $E\subset X$ and $r>0$, $N(E;r):=\{x\in X:\dist(x,E)<r\}$. The sets $B_{i,j}(\varepsilon)$ are open in the compact-open topology (and hence in the uniform convergence topology). Furthermore, for fixed $i,j$ there exists $\varepsilon_{i,j}>0$ such that the set $B_{i,j}(\varepsilon_{i,j})$ is equicontinuous (see \cite[Theorem 2.1]{Vaisala-qm}). In fact, one can take $\varepsilon_{i,j}:=\sep(T_j)/4$, where $\sep(T_j)$ denotes the minimal distance between distinct pairs in $T_j$. Therefore, by the Arzel\`a--Ascoli theorem, $B_{i,j}(\varepsilon_{i,j})$ has compact closure. One can check that $G\subset\cup_{i,j}B_{i,j}(\varepsilon_{i,j})$, and so we conclude that the closure of $G$ is locally compact. Since the collection $\{B_{i,j}(\varepsilon_{i,j})\}$ is countable, the closure of $G$ is $\sigma$-compact. \end{proof}
\section{Proofs of \rf{T:main} and \rf{T:euclidean}}\label{S:proofs}
\begin{proof}[Proof of \rf{T:main}] For use below, we begin by confirming that the composition of (metrically) $1$-quasiconformal maps between open sets of a sub-Riemannian Carnot group remains $1$-quasiconformal. To this end, let $f_1:\Omega_1\to\Omega_2$ and $f_2:\Omega_2\to\Omega_3$ denote $1$-quasiconformal homeomorphisms between open sets of a Carnot group ${\mathfont G}$. By \cite[Corollary 7.1]{CC06}, both $f_1$ and $f_2$ are smooth and Pansu differentiable everywhere in their domains. Furthermore, the Lie derivatives of their Pansu differentials are similarities at every point in their domains (when restricted to the horizontal layer). By \cite[Lemma 3.7]{CC06}, these properties are invariant under compositions. Therefore, (again using \cite[Corollary 7.1]{CC06}) we conclude that $f_2\circ f_1:\Omega_1\to\Omega_3$ remains $1$-quasiconformal.
In order to proceed with the proof, we invoke the assumption that, for some $p\in{\mathfont G}$, there exists a $1$-quasiconformal $L$-metric inversion $\varphi:({\mathfont G}_p,d_{SR})\to({\mathfont G}_p,d_{SR})$. For any $x\in{\mathfont G}$, write $\varphi_x:=\ell_{xp^{-1}}\circ\varphi\circ\ell_{xp^{-1}}^{-1}$. Since left translations are isometries, the map $\varphi_x:({\mathfont G}_x,d_{SR})\to({\mathfont G}_x,d_{SR})$ is a $1$-quasiconformal $L$-metric inversion such that $\varphi_x(x)=\infty$ and $\varphi_x(\infty)=x$.
Let $\{x,x',y,y'\}$ denote a quadruple of points in $\hat{{\mathfont G}}$ such that $x'=y'$ if and only if $x=y$. We consider the two possible cases for such a quadruple:
\begin{itemize}
\item{Assume that $x\not= y$ (and so $x'\not=y'$). If all points are finite, define the map $g:=\varphi_{x'}\circ \ell_z\circ \varphi_x:\hat{\mathfont G}\to\hat{\mathfont G}$, where $z:=\varphi_{x'}^{-1}(y')\varphi_x(y)^{-1}\in{\mathfont G}$. If $x=\infty$ and/or $x'=\infty$, then replace $\varphi_x$ and/or $\varphi_{x'}$, respectively, with the identity map $\id:\hat{\mathfont G}\to\hat{\mathfont G}$.}
\item{Assume that $x=y$ (and so $x'=y'$). If both points $x$ and $x'$ are finite, then replace $\varphi_x$ and $\varphi_{x'}$ with the identity map on ${\mathfont G}$. If $x$ and/or $x'$ is the point at infinity, then replace $\varphi_x$ and/or $\varphi_{x'}$, respectively, with the identity map and replace $\ell_z$ with the identity map.} \end{itemize}
In either case we have $g(x)=x'$ and $g(y)=y'$. Let $G_*$ denote the group generated by finite compositions of the maps $g$ as constructed above. By construction, $G_*$ acts two-transitively on $\hat{\mathfont G}$. For each $h\in G_*$, if $h$ fixes the point at infinity, then by the initial paragraph of the proof, $h:({\mathfont G},d_{SR})\to({\mathfont G},d_{SR})$ is $1$-quasiconformal away from at most a finite set. In the case that $h(\infty)\not=\infty$ we obtain the same conclusion for $h:({\mathfont G}\setminus h^{-1}(\infty),d_{SR})\to({\mathfont G}\setminus h(\infty),d_{SR})$.
Fix $h\in G_*$. If $h(\infty)\not=\infty$, then define $h_\infty:=\varphi_{h(\infty)}\circ h$. If $h(\infty)=\infty$, then define $h_\infty:=h$. In either case, the map $h_\infty:({\mathfont G},d_{SR})\to({\mathfont G},d_{SR})$ is a homeomorphism, and (as noted in the preceding paragraph) $h_\infty$ is $1$-quasiconformal away from a finite set. By \cite[Theorem 5.2]{BKR-ACQC}, $h_\infty:({\mathfont G},d_{SR})\to({\mathfont G},d_{SR})$ is weakly $H$-quasisymmetric (in the sense of \cite{Vaisala-cylinders}), where $H$ depends only on ${\mathfont G}$. Therefore, by \cite[Theorem 2.9]{Vaisala-cylinders}, we conclude that $h_\infty:({\mathfont G},d_{SR})\to({\mathfont G},d_{SR})$ is $\eta$-quasisymmetric, with $\eta$ depending only on ${\mathfont G}$. By \cite[Theorem 3.2]{Vaisala-qm}, $h_\infty:({\mathfont G},d_{SR})\to({\mathfont G},d_{SR})$ is also $\theta''$-quasim\"obius, with $\theta''$ depending only on ${\mathfont G}$. By \cite[Section 3.B]{BHX-inversions}, the identity map between $({\mathfont G},d_{SR})$ and the sphericalized space $({\mathfont G},\hat{d}_{SR})$ is $16t$-quasim\"obius. It then follows that $h_\infty:(\hat{\mathfont G},\hat{d}_{SR})\to(\hat{{\mathfont G}},\hat{d}_{SR})$ is $\theta'$-quasim\"obius, with $\theta'$ determined solely by $\theta''$. Since $h_\infty=\varphi_{h(\infty)}\circ h$, and $\varphi_{h(\infty)}:(\hat{{\mathfont G}},\hat{d}_{SR})\to(\hat{{\mathfont G}},\hat{d}_{SR})$ is a $256L^4t$-quasim\"obius homeomorphism, it follows that $h:(\hat{{\mathfont G}},\hat{d}_{SR})\to(\hat{{\mathfont G}},\hat{d}_{SR})$ is $\theta$-quasim\"obius, where $\theta$ is determined solely by $L$ and ${\mathfont G}$. In conclusion, the group $G_*$ consists of uniformly quasim\"obius self-homeomorphisms of $(\hat{{\mathfont G}},\hat{d}_{SR})$.
Note that canonical dilations of ${\mathfont G}$ were not included in the generating set for $G_*$. Let $G$ denote the closure of $G_*$ in the topology of uniform convergence. If $\Gamma$ denotes the group of canonical dilations of ${\mathfont G}$, then the closure of $\langle G_*,\Gamma\rangle$ is isomorphic to $G$. In particular, we may assume that $\Gamma\subset G$. This follows from \rf{L:locally-compact} and \cite[Theorem 3.3]{Kramer-transitive} because both $\langle G_*,\Gamma\rangle$ and $G_*$ act effectively and $2$-transitively on $(\hat{\mathfont G},\hat{d}_{SR})$ by uniformly quasim\"obius homeomorphisms.
Given a topological group $H$, let $H^\circ$ denote the identity component. Via \rf{F:two-transitive} we conclude that $G^\circ$ is isomorphic to one of $G_{\mathfont R}^\circ$, $G_{\mathfont C}$, $G_{\mathfont H}$, or $G_{\mathfont O}$. In short, $G^\circ$ is isomorphic to $G_{\mathfont K}^\circ$. More precisely (see \cite[Theorem 3.3 and Proposition 7.1]{Kramer-transitive}), there exists an isomorphism $\psi:G^\circ\to G_{\mathfont K}^\circ$ and a homeomorphism $F:(\hat{\mathfont G},\hat{d}_{SR})\to(\hat H_{\mathfont K},\hat{d}_H)$ such that, for any $g\in G^\circ$ and $x\in \hat{\mathfont G}$, we have $F(g(x))=\psi(g)(F(x))$. Here $d_H$ is defined by \rf{E:gauge}, and $\hat{d}_H$ denotes the sphericalized distance $\widehat{(d_H)_e}$. Since $G_{\mathfont K}^\circ$ acts two-transitively on $(\hat{H}_{\mathfont K},\hat{d}_H)$, we may assume $F(e)=e$ and $F(\infty)=\infty$.
Given $t>0$, the map $\psi(\delta_t)\in G_{\mathfont K}^\circ$ fixes the set $\{e,\infty\}$. Therefore, $\psi(\delta_t)\in AM$. It follows that there exist $a_{s(t)}\in A$ and $m_t\in M$ such that $\psi(\delta_t)=a_{s(t)}m_t$. Here $s:{\mathfont R}_+\to{\mathfont R}_+$ is a function such that, for any $t,r\in {\mathfont R}_+$, we have $s(rt)=s(r)s(t)$. Since, for any $x\in {\mathfont G}$, we have $F(\delta_r(x))=\psi(\delta_r)(F(x))$, it also follows that $\lim_{t\to+\infty}s(t)=+\infty$. Since $A$ commutes with $M$, we note that $\psi(\delta_t^{-1})=a_{1/s(t)}m_t^{-1}$.
Given $g\in{\mathfont G}$, the map $\psi(\ell_g)$ fixes the point at infinity in $\hat H_{\mathfont K}$. Therefore, $\psi(\ell_g)\in H_{\mathfont K} AM$, and there exist $h\in H_{\mathfont K}$, $a_r\in A$, and $m\in M$ such that $\psi(\ell_g)=\ell_h a_r m$. Combining this with the previous paragraph, we have $\psi(\delta_t^{-1}\ell_g\delta_t)=a_{1/s(t)}m_t^{-1}\ell_ha_rma_{s(t)}m_t$. We then observe that \begin{align*} a_{1/s(t)}m_t^{-1}\ell_ha_rma_{s(t)}m_t&=a_{r/s(t)}m_t^{-1}\ell_{[a_{1/r}(h)]}a_{s(t)}mm_t\\ &=a_rm_t^{-1}\ell_{[a_{1/(rs(t))}(h)]}mm_t. \end{align*}
Since $M$ is compact and $a_{1/(rs(t))}(h)\to e\in H_{\mathfont K}$ as $t\to+\infty$, there exists $m'\in M$ such that, up to a subsequence, \[a_{r}m_t^{-1}\ell_{[a_{1/(rs(t))}(h)]}mm_t\to a_rm'^{-1}mm'\] as $t\to+\infty$. Here the convergence is uniform on compact subsets of $H_{\mathfont K}$. On the other hand, the map $\delta_t^{-1}\ell_g\delta_t=\ell_{[\delta^{-1}_t(g)]}$ is locally uniformly convergent to the identity map of ${\mathfont G}$. Since $\psi:G^\circ\to G^\circ_{\mathfont K}$ is an isomorphism and both groups act effectively, $a_rm'^{-1}mm'=\id$ as a map of $(H_{\mathfont K},d_H)$. Via the Bruhat decomposition, this implies that $a_r=\id$ and $m'^{-1}mm'=\id$, and so $m=\id$. Therefore, $\psi(\ell_g)=\ell_h$. Since ${\mathfont G}$ acts simply transitively on itself by left translations, we conclude that $\psi({\mathfont G})=H_{\mathfont K}$.
In order to prove the reverse implication, suppose ${\mathfont G}$ is a generalized Heisenberg group equipped with a sub-Riemannian distance. Due to the comparability of the sub-Riemannian distance with the distance given by \rf{E:gauge}, by \cite[Theorem 4.2]{CDKR-Heisenberg}, the map $\sigma$ satisfies the definition of a metric inversion and is quasiconformal on ${\mathfont G}_e$. By (the proof of) \cite[Theorem 5.1]{CDKR-Heisenberg} the Riemannian differential of $\sigma$, when restricted to the horizontal distribution, is the composition of a dilation and an isometry at every point in ${\mathfont G}_e$. It then follows from \cite[Lemma 3.4 and Corollary 7.2]{CC06} that the map $\sigma$ is $1$-quasiconformal on ${\mathfont G}_e$. \end{proof}
The above proof can be compared with the methods appearing in \cite{BS13}. In \cite{BS13}, the ideal boundaries of rank one symmetric spaces are characterized via the notion of \textit{space inversions}. Such inversions are M\"obius involutions that satisfy several additional properties related to the M\"obius structure of a metric space. While metric inversions on a bi-Lipschitz homogeneous space can be used to construct analogues to space inversions, such constructions need not possess all the properties required to apply the innovative techniques behind the proof of \cite[Theorem 1.1]{BS13}.
\begin{proof}[Proof of \rf{T:euclidean}] This result is obtained by combining results from \cite{BK-rigidity}, \cite{Kinneberg-fractal}, and \cite{Freeman-params}. We first consider the case that $(X,d)$ is unbounded. Suppose $(X,d)$ is inversion invariant bi-Lipschitz homogeneous with respect to a collection of uniformly $L$-bi-Lipschitz self-homeomorphisms $\mathcal{F}$ and an $L$-metric inversion $\varphi$ based at some point $p\in X$. It follows from \cite[Theorem 2.7]{Freeman-params} that $(X,d)$ is doubling. Therefore, by \cite[Theorem 1.1]{Freeman-iiblh}, $(X,d)$ is Ahlfors $n$-regular. As noted in \cite[Fact 4.1]{Freeman-iiblh}, for any point $q\in X$, the sphericalized space $(\hat{X},\hat{d}_q)$ is also Ahlfors $n$-regular, with regularity constant depending only on the doubling and homogeneity constants for $(X,d)$.
Fix $q\in X$ and set $\delta:=(16L^3M(1+M))^{-1}$, where $1\leq M<+\infty$ is the quasi-self-similarity constant given by \cite[Theorem 2.7]{Freeman-params}. Note that $M$ depends only on $L$. Let $\{x_1,x_2,x_3\}$ denote a triple of distinct points in $\hat{X}$. We claim that there exists a strongly quasim\"obius homeomorphism $g:(\hat{X},\hat{d}_q)\to(\hat{X},\hat{d}_q)$ such that, for $i\not=j$, we have $\hat{d}_q(g(x_i),g(x_j))\geq\delta$. When $x_1\not=\infty$, there exists $f\in \mathcal{F}$ such that $f(x_1)=p$. When $x_1=\infty$, set $f:=\varphi$. In either case, $\varphi\circ f(x_1)=\infty$ and we define $y:=\varphi\circ f(x_2)$, $z:=\varphi\circ f(x_3)$. Consider the sphericalized space $(\hat{X},\hat{d}_y)$. By \cite[Theorem 2.7]{Freeman-params}, there exists a map $h:(X,d)\to(X,d)$ such that $h(y)=y$ and, for all $u,v\in X$, we have $d(h(u),h(v))\simeq_MC\,d(u,v)$, where $C:=d(y,z)^{-1}$. It follows from \rf{E:sphere} that \[\hat{d}_y(h(y),h(z))\geq\frac{1}{4}\cdot\frac{d(h(y),h(z))}{1+d(h(y),h(z))}\geq\frac{1}{4M}\cdot\frac{C\,d(y,z)}{1+MC\,d(y,z)}=\frac{1}{4M(1+M)}.\] We also find that \begin{align*} \hat{d}_y(h(\infty),h(z))=\hat{d}_y(\infty,h(z))&\geq\frac{1}{4}\cdot\frac{1}{1+d(h(y),h(z))}\\ &\geq\frac{1}{4}\cdot\frac{1}{1+MC\,d(y,z)}=\frac{1}{4(1+M)}. \end{align*} Finally, we note that $\hat{d}_y(y,\infty)\geq1/4$. Therefore, $h\circ\varphi\circ f$ maps $\{x_1,x_2,x_3\}$ to a $(4M(1+M))^{-1}$-separated triple in $(\hat{X},\hat{d}_y)$. By \cite[Lemma 3.2]{BHX-inversions}, there exists a $4L^3$-bi-Lipschitz homeomorphism $k:(\hat{X},\hat{d}_y)\to(\hat{X},\hat{d}_q)$. Therefore, for any triple of distinct points $\{x_1,x_2,x_3\}\subset (\hat{X},\hat{d}_q)$, there exists a map of the form $g=k\circ h\circ\varphi\circ f$ such that $g$ is a $Kt$-quasim\"obius self-homeomorphism of $(\hat{X},\hat{d}_q)$ that maps $\{x_1,x_2,x_3\}$ to a $\delta$-separated triple. Here $K$ depends only on $L$.
By \cite[Theorem 5.1]{Kinneberg-fractal}, we conclude that $(\hat{X},\hat{d}_q)$ is strongly quasim\"obius equivalent to ${\mathfont S}^n$. To justify this application of \cite[Theorem 5.1]{Kinneberg-fractal}, note that this theorem follows from results in \cite[Section 5]{BK-rigidity}. These results are proved under the assumption of a group action by uniformly quasim\"obius homeomorphisms. However, as stated at the beginning of \cite[Section 5]{BK-rigidity}, these results also hold under the weaker assumption that triples can be uniformly separated by uniformly quasim\"obius maps.
Since strongly quasim\"obius maps between bounded spaces are bi-Lipschitz (see \cite[Remark 3.2]{Kinneberg-fractal}), we find that $(\hat{X},\hat{d}_q)$ is bi-Lipschitz equivalent to ${\mathfont S}^n$. Finally, by \cite[Lemma 3.2 and Proposition 3.4]{BHX-inversions}, $(X,d)$ is bi-Lipschitz equivalent to ${\mathfont R}^n$.
To finish the proof, we consider the case that $(X,d)$ is bounded. Given any point $p\in X$, the space $(X_p,d_p)$ is unbounded and remains proper, connected, and inversion invariant bi-Lipschitz homogeneous. To verify that $(X_p,d_p)$ remains inversion invariant bi-Lipschitz homogeneous, note that the metric inversion of $(X_p,d_p)$ at any point $q\in X_p$ is bi-Lipschitz equivalent to $(X_q,d_q)$ via the identity map. We conclude as above that $(X_p,d_p)$ is bi-Lipschitz equivalent to ${\mathfont R}^n$. By \cite[Lemma 3.2 and Proposition 3.5]{BHX-inversions}, $(X,d)$ is bi-Lipschitz equivalent to ${\mathfont S}^n$. \end{proof}
\end{document}
1.11: Quantum Computation- A Short Course
Another Look at Mermin's EPR Gedanken Experiment
A Quantum Simulation
Another Simulation of a GHZ Gedanken Experiment
Quantum Simulation
Quantum Correlations Illustrated With Photons
The four remaining tutorials deal with this clash between quantum mechanics and local realism, and the simulation of physical phenomena. The first two examine entangled spin systems and the third entangled photon systems. The fourth provides a terse mathematical summary for both spin and photon systems. They all clearly show the disagreement between the predictions of quantum theory and those of local hidden-variable models.
Quantum theory is both stupendously successful as an account of the small-scale structure of the world and it is also the subject of an unresolved debate and dispute about its interpretation. J. C. Polkinghorne, The Quantum World, p. 1.
In Bohm's EPR thought experiment (Quantum Theory, 1951, pp. 611-623), both local realism and quantum mechanics were shown to be consistent with the experimental data. However, the local realistic explanation used composite spin states that were invalid according to quantum theory. The local realists countered that this was an indication that quantum mechanics was incomplete because it couldn't assign well-defined values to all observable properties prior to or independent of observation. In the 1980s N. David Mermin presented a related thought experiment [American Journal of Physics (October 1981, pp 941-943) and Physics Today (April 1985, pp 38-47)] in which the predictions of local realism and quantum mechanics disagree. As such Mermin's thought experiment represents a specific illustration of Bell's theorem.
A spin-1/2 pair is prepared in a singlet state and the individual particles travel in opposite directions to detectors which are set up to measure spin in three directions in x-z plane: along the z-axis, and angles of 120 and 240 degrees with respect to the z-axis. The detector settings are labeled 1, 2 and 3, respectively.
The switches on the detectors are set randomly so that all nine possible settings of the two detectors occur with equal frequency.
Local realism holds that objects have properties independent of measurement and that measurements at one location on a particle cannot influence measurements of another particle at a distant location even if the particles were created in the same event. Local realism maintains that the spin-1/2 particles carry instruction sets (hidden variables) which dictate the results of subsequent measurements. Prior to measurement the particles are in an unknown but well-defined state.
The following table presents the experimental results expected on the basis of local realism. Singlet spin states have opposite spin values for each of the three measurement directions. If A's spin state is (+-+), then B's spin state is (-+-). A '+' indicates spin-up and a measurement eigenvalue of +1. A '-' indicates spin-down and a measurement eigenvalue of -1. If A's detector is set to spin direction "1" and B's detector is set to spin direction "3", the measured result will be recorded as +-, with an eigenvalue of -1.
There are eight spin states and nine possible detector settings, giving 72 possible measurement outcomes all of which are equally probable. The next to bottom line of the table shows the average (expectation) value for the nine possible detector settings given the local realist spin states. When the detector settings are the same there is perfect anti-correlation between the detectors at A and B. When the detectors are set at different spin directions there is no correlation.
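The counting behind these averages can be checked exhaustively. The following pure-Python sketch (not part of the original tutorial; the variable names are ours) enumerates all eight instruction sets and all nine detector settings:

```python
from itertools import product

# Each instruction set assigns A's result (+1 or -1) for the three detector
# settings; B carries the opposite instructions (singlet anti-correlation).
settings = [0, 1, 2]
totals = {}  # (A setting, B setting) -> running sum of measurement products

for spins_A in product([+1, -1], repeat=3):
    spins_B = tuple(-s for s in spins_A)
    for i, j in product(settings, settings):
        totals[(i, j)] = totals.get((i, j), 0) + spins_A[i] * spins_B[j]

# Expectation value for each of the nine detector settings (8 states each)
E = {k: v / 8 for k, v in totals.items()}
same = [E[(i, i)] for i in settings]    # perfect anti-correlation: all -1.0
diff = [E[(i, j)] for i in settings for j in settings if i != j]  # all 0.0
overall = sum(E.values()) / 9           # the local-realist average: -1/3
```

The run reproduces the next-to-bottom line of the table: -1 when the settings agree, 0 when they differ, and -1/3 overall.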
As will now be shown quantum mechanics (bottom line of the table) disagrees with this local realistic analysis. The singlet state produced by the source is the following entangled superposition, where the arrows indicate the spin orientation for any direction in the x-z plane. As noted above the directions used are 0, 120 and 240 degrees, relative to the z-axis.
\[| \Psi \rangle=\frac{1}{\sqrt{2}}[ |\uparrow\rangle_{1} | \downarrow \rangle_{2}-| \downarrow \rangle_{1} | \uparrow \rangle_{2} ]=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \\ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right) \otimes \left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \\ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right)-\left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \\ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right) \otimes \left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \\ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{0} \\ {1} \\ {-1} \\ {0}\end{array}\right) \quad \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{0} \\ {1} \\ {-1} \\ {0}\end{array}\right) \nonumber \]
The single-particle spin operator in the x-z plane is constructed from the Pauli spin operators in the x- and z-directions. \(\varphi\) is the angle of orientation of the measurement magnet with the z-axis. Note that the Pauli operators measure spin in units of \(\frac{h}{4 \pi}\). This provides for some mathematical clarity in the forthcoming analysis.
\[\sigma_{\mathrm{Z}} :=\left( \begin{array}{cc}{1} & {0} \\ {0} & {-1}\end{array}\right) \qquad \sigma_{\mathrm{x}} :=\left( \begin{array}{ll}{0} & {1} \\ {1} & {0}\end{array}\right) \nonumber \]

\[\mathrm{S}(\varphi) :=\cos (\varphi) \cdot \sigma_{\mathrm{Z}}+\sin (\varphi) \cdot \sigma_{\mathrm{x}} \rightarrow \left( \begin{array}{cc}{\cos (\varphi)} & {\sin (\varphi)} \\ {\sin (\varphi)} & {-\cos (\varphi)}\end{array}\right) \nonumber \]
The joint spin operator for the two-spin system in tensor format is,
\[\left( \begin{array}{cc}{\cos \varphi_{A}} & {\sin \varphi_{A}} \\ {\sin \varphi_{A}} & {-\cos \varphi_{A}}\end{array}\right) \otimes \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \\ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right)= \begin{pmatrix} \cos \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \\ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) & \sin \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \\ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) \\ \sin \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \\ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) & -\cos \varphi_{A} \left( \begin{array}{cc}{\cos \varphi_{B}} & {\sin \varphi_{B}} \\ {\sin \varphi_{B}} & {-\cos \varphi_{B}}\end{array}\right) \end{pmatrix} \nonumber \]
In Mathcad syntax this operator is:
\[\mathrm{kronecker}\left(\mathrm{S}\left(\varphi_{\mathrm{A}}\right), \mathrm{S}\left(\varphi_{\mathrm{B}}\right)\right) \nonumber \]
When the detector settings are the same quantum theory predicts an expectation value of -1, in agreement with the analysis based on local realism.
\[\Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \mathrm{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi=-1 \quad \Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \mathrm{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi=-1 \quad \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \mathrm{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=-1 \nonumber \]
However, when the detector settings are different quantum theory predicts an expectation value of 0.5, in disagreement with the local realistic value of 0.
\[\Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \mathrm{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi=0.5 \quad \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=0.5 \quad \Psi^{\mathrm{T}} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \mathrm{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi=0.5 \nonumber \]
Considering all detector settings local realism predicts an expectation value of -1/3 [2/3(0) + 1/3(-1)], while quantum theory predicts an expectation value of 0 [2/3(1/2) + 1/3(-1)]. (See the two bottom rows in the table above.)
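These nine quantum expectation values can be reproduced without Mathcad. The sketch below (pure Python; the helper names `S`, `kron`, and `expectation` are ours, mirroring the Mathcad calls) builds the spin operator, the singlet state, and the tensor-product operator:

```python
import math

def S(phi):
    # single-particle spin operator in the x-z plane (units of h/4*pi)
    return [[math.cos(phi), math.sin(phi)],
            [math.sin(phi), -math.cos(phi)]]

def kron(A, B):
    # Kronecker (tensor) product of two square matrices
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def expectation(psi, M):
    # <psi|M|psi> for a real state vector psi
    return sum(psi[i] * M[i][j] * psi[j]
               for i in range(len(psi)) for j in range(len(psi)))

singlet = [0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0]
angles = [math.radians(a) for a in (0, 120, 240)]   # detector settings 1, 2, 3

E = [[expectation(singlet, kron(S(a), S(b))) for b in angles] for a in angles]
# diagonal entries (same settings) come out -1; off-diagonal entries 0.5
average = sum(sum(row) for row in E) / 9            # the quantum prediction: 0
```

Averaging the nine entries gives 0, matching the bottom row of the table.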
Furthermore, the following calculations demonstrate that the various spin operators do not commute and therefore represent incompatible observables. In other words, they are observables that cannot simultaneously be in well-defined states. Thus, quantum theory also rejects the realist's spin states used in the table.
\[\mathrm{S}(0 \cdot \operatorname{deg}) \cdot \mathrm{S}(120 \cdot \mathrm{deg})-\mathrm{S}(120 \cdot \mathrm{deg}) \cdot \mathrm{S}(0 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {1.732} \\ {-1.732} & {0}\end{array}\right) \nonumber \]

\[\mathrm{S}(0 \cdot \mathrm{deg}) \cdot \mathrm{S}(240 \cdot \mathrm{deg})-\mathrm{S}(240 \cdot \mathrm{deg}) \cdot \mathrm{S}(0 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {-1.732} \\ {1.732} & {0}\end{array}\right) \nonumber \]

\[\mathrm{S}(120 \cdot \mathrm{deg}) \cdot \mathrm{S}(240 \cdot \mathrm{deg})-\mathrm{S}(240 \cdot \mathrm{deg}) \cdot \mathrm{S}(120 \cdot \mathrm{deg})=\left( \begin{array}{cc}{0} & {1.732} \\ {-1.732} & {0}\end{array}\right) \nonumber \]
The local realist is undeterred by this argument and the disagreement with the quantum mechanical predictions, asserting that the fact that quantum theory cannot assign well-defined states to all elements of reality independent of observation is an indication that it provides an incomplete description of reality.
However, results available for experiments of this type with photons support the quantum mechanical predictions and contradict the local realists analysis shown in the table above. Thus, there appears to be a non-local interaction between the two spins at their measurement sites. Nick Herbert provides a memorable and succinct description of such non-local influences on page 214 of Quantum Reality.
A non-local interaction links up one location with another without crossing space, without decay, and without delay. A non-local interaction is, in short, unmediated, unmitigated, and immediate.
Jim Baggott puts it this way (The Meaning of Quantum Theory, page 135):
The predictions of quantum theory (in this experiment) are based on the properties of a two-particle state vector which ... is 'delocalized' over the whole experimental arrangement. The two particles are, in effect, always in 'contact' prior to measurement and can therefore exhibit a degree of correlation that is impossible for two Einstein separable particles.
"...if [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local. This is what the theorem says." -John S. Bell
The eigenvectors of the single-particle spin operator, S(\(\varphi\)), in the x-z plane are given below along with their eigenvalues.
\[\varphi_{u}(\varphi) :=\left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \\ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right) \qquad \varphi_{\mathrm{d}}(\varphi) :=\left( \begin{array}{c}{-\sin \left(\frac{\varphi}{2}\right)} \\ {\cos \left(\frac{\varphi}{2}\right)}\end{array}\right) \nonumber \]

\[\varphi_{\mathrm{u}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow 1 \quad \varphi_{\mathrm{d}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{d}}(\varphi) \text { simplify } \rightarrow 1 \quad \varphi_{\mathrm{d}}(\varphi)^{\mathrm{T}} \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow 0 \nonumber \]
Eigenvalue +1:

$$\mathrm{S}(\varphi) \cdot \varphi_{\mathrm{u}}(\varphi) \text { simplify } \rightarrow \left( \begin{array}{c}{\cos \left(\frac{\varphi}{2}\right)} \\ {\sin \left(\frac{\varphi}{2}\right)}\end{array}\right)$$

Eigenvalue -1:

$$\mathrm{S}(\varphi) \cdot \varphi_{\mathrm{d}}(\varphi) \text { simplify } \rightarrow \left( \begin{array}{c}{\sin \left(\frac{\varphi}{2}\right)} \\ {-\cos \left(\frac{\varphi}{2}\right)}\end{array}\right)$$
A summary of the quantum mechanical calculations:
\[\begin{pmatrix} \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(0 \cdot \operatorname{deg}), \mathrm{S}(0 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(120 \cdot \operatorname{deg}), \mathrm{S}(120 \cdot \mathrm{deg})) \Psi \\ \Psi^{T} \cdot \text { kronecker }(\mathrm{S}(240 \cdot \operatorname{deg}), \mathrm{S}(240 \cdot \mathrm{deg})) \Psi \end{pmatrix}^{T}=\left( \begin{array}{ccccccccc}{0.5} & {0.5} & {0.5} & {0.5} & {0.5} & {0.5} & {-1} & {-1} & {-1}\end{array}\right) \nonumber \]
Calculation of the overall spin expectation value:
\[\sum_{i=0}^{2} \sum_{j=0}^{2} \left[\Psi^{\mathrm{T}} \cdot \text { kronecker }[\mathrm{S}[\mathrm{i} \cdot(120 \cdot \mathrm{deg})], \mathrm{S}[j \cdot(120 \cdot \mathrm{deg})]] \Psi\right]=0 \nonumber \]
The expectation value as a function of the relative orientation of the detectors reveals the level of correlation between the two spin measurements. For \(\theta\) = 0° there is perfect anti-correlation; for \(\theta\) = 180° perfect correlation; for \(\theta\) = 90° no correlation; for \(\theta\) = 60° intermediate anti-correlation (-0.5) and for \(\theta\) = 120° intermediate correlation (0.5).
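For the singlet state the expectation value reduces to \(-\cos\theta\), where \(\theta\) is the relative detector angle; the five correlation values just quoted are points on this curve. A brief numerical check (pure Python; helper names are ours, redefined so the sketch is self-contained):

```python
import math

def S(phi):
    # single-particle spin operator in the x-z plane
    return [[math.cos(phi), math.sin(phi)],
            [math.sin(phi), -math.cos(phi)]]

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def E(theta):
    # singlet expectation value for detectors at relative angle theta
    psi = [0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0]
    M = kron(S(0), S(theta))
    return sum(psi[i] * M[i][j] * psi[j] for i in range(4) for j in range(4))

for deg in (0, 60, 90, 120, 180):
    t = math.radians(deg)
    assert abs(E(t) - (-math.cos(t))) < 1e-12
# E(0) = -1, E(60 deg) = -0.5, E(90 deg) = 0, E(120 deg) = 0.5, E(180 deg) = 1
```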
This thought experiment is simulated using the following quantum circuit. As shown below the results are in agreement with the previous theoretical quantum calculations. The initial Hadamard and CNOT gates create the singlet state from the |11> input. Rz(\(\theta\)) rotates spin B. The final Hadamard gates prepare the system for measurement. See arXiv:1712.05642v2 for further detail.
\[\begin{matrix} \text{Spin A} & | 1 \rangle & \rhd & H & \cdot & \cdots & H & \rhd & \text{Measure 0 or 1: Eigenvalue 1 or -1} \\ \; & \; & \; & \; & | & \; & \; & \; & \; \\ \text{Spin B} & | 1 \rangle & \rhd & \cdots & \oplus & R_{Z} (\theta) & H & \rhd & \text{Measure 0 or 1: Eigenvalue 1 or -1} \end{matrix} \nonumber \]
The quantum gates required to execute this circuit:
Hadamard gate
Rz rotation
Controlled NOT
$$\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \\ {0} & {1}\end{array}\right)$$ $$\mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \\ {1} & {-1}\end{array}\right) $$ $$ \mathrm{R}_{\mathrm{Z}}(\theta) :=\left( \begin{array}{cc}{1} & {0} \\ {0} & {\mathrm{e}^{\mathrm{i} \cdot \theta}}\end{array}\right)$$ $$\mathrm{CNOT} :=\left( \begin{array}{cccc}{1} & {0} & {0} & {0} \\ {0} & {1} & {0} & {0} \\ {0} & {0} & {0} & {1} \\ {0} & {0} & {1} & {0}\end{array}\right)$$
The operator representing the circuit is constructed from the matrix operators provided above.
\[\mathrm{Op}(\theta) :=\text { kronecker }(\mathrm{H}, \mathrm{H}) \cdot \text { kronecker }\left(\mathrm{I}, \mathrm{R}_{\mathrm{Z}}(\theta)\right) \cdot \mathrm{CNOT} \cdot \text { kronecker }(\mathrm{H}, \mathrm{I}) \nonumber \]
There are four equally likely measurement outcomes with the eigenvalues and overall expectation values shown below for relative measurement angles 0 and 120 deg (\(\frac{2 \pi}{3}\)).
|00>, eigenvalue +1:

\[\left[\left|\left( \begin{array}{c}{1} \\ {0} \\ {0} \\ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(0 \cdot \mathrm{deg}) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0 \nonumber \]

|01>, eigenvalue -1:

\[\left[\left|\left( \begin{array}{l}{0} \\ {1} \\ {0} \\ {0}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(0 \cdot \operatorname{deg}) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0.5 \nonumber \]

|10>, eigenvalue -1:

\[\left[\left|\left( \begin{array}{c}{0} \\ {0} \\ {1} \\ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}(0 \cdot \mathrm{deg}) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0.5 \nonumber \]

|11>, eigenvalue +1:

\[\left[\left|\left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}(0 \cdot \operatorname{deg}) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0 \nonumber \]
Expectation value: 0 - 0.5 - 0.5 + 0 = -1
|00>, eigenvalue +1:

\[\left[\left|\left( \begin{array}{c}{1} \\ {0} \\ {0} \\ {0}\end{array}\right)^{\mathrm{T}} \cdot \mathrm{Op}\left(\frac{2 \pi}{3}\right) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0.375 \nonumber \]

|01>, eigenvalue -1:

\[\left[\left|\left( \begin{array}{l}{0} \\ {1} \\ {0} \\ {0}\end{array}\right)^{\mathrm{T}} \cdot \operatorname{Op}\left(\frac{2 \pi}{3}\right) \cdot \left( \begin{array}{l}{0} \\ {0} \\ {0} \\ {1}\end{array}\right)\right|\right]^{2}=0.125 \nonumber \]
Expectation value: 0.375 - 0.125 + 0.375 - 0.125 = 0.5
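These outcome probabilities can be checked with a minimal pure-Python state-vector simulation of the circuit (the helper names are ours; no quantum-computing library is assumed):

```python
import cmath
import math

r2 = 1 / math.sqrt(2)
I = [[1, 0], [0, 1]]
H = [[r2, r2], [r2, -r2]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

def Rz(theta):
    return [[1, 0], [0, cmath.exp(1j * theta)]]

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def probs(theta):
    # apply the circuit gate by gate to the input state |11>
    v = [0, 0, 0, 1]
    for gate in (kron(H, I), CNOT, kron(I, Rz(theta)), kron(H, H)):
        v = matvec(gate, v)
    return [abs(a) ** 2 for a in v]   # outcome probabilities |00>,|01>,|10>,|11>

eig = [+1, -1, -1, +1]                # joint eigenvalue = product of qubit values

def expect(theta):
    return sum(e * p for e, p in zip(eig, probs(theta)))
# probs(0) approaches [0, 0.5, 0.5, 0] and expect(0) -> -1;
# probs(2*pi/3) approaches [0.375, 0.125, 0.125, 0.375] and expect -> 0.5
```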
Many years ago N. David Mermin published two articles (Physics Today, June 1990; American Journal of Physics, August 1990) in the general physics literature on a Greenberger-Horne-Zeilinger (American Journal of Physics, December 1990; Nature, 3 February 2000) thought experiment involving spins that sharply revealed the clash between local realism and the quantum view of reality.
Three spin-1/2 particles are created in a single event and move apart in the horizontal y-z plane. Subsequent spin measurements will be carried out in units of \(\frac{h}{4 \pi}\) with spin operators in the x- and y-directions.
The z-basis eigenfunctions are:
\[\mathrm{Sz}_{\mathrm{up}} :=\left( \begin{array}{c}{1} \\ {0}\end{array}\right) \qquad \mathrm{Sz}_{\mathrm{down}} :=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber \]
The x- and y-direction spin operators:
\[\sigma_{\mathrm{x}} :=\left( \begin{array}{cc}{0} & {1} \\ {1} & {0}\end{array}\right) \quad \text { eigenvals }\left(\sigma_{\mathrm{x}}\right)=\left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \quad \sigma_{\mathrm{y}} :=\left( \begin{array}{cc}{0} & {-\mathrm{i}} \\ {\mathrm{i}} & {0}\end{array}\right) \quad \text { eigenvals }\left(\sigma_{\mathrm{y}}\right)=\left( \begin{array}{c}{1} \\ {-1}\end{array}\right) \nonumber \]
The initial entangled spin state for the three spin-1/2 particles in tensor notation is:
\[| \Psi \rangle=\frac{1}{\sqrt{2}}\left[\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \\ {0}\end{array}\right) \otimes \left( \begin{array}{l}{1} \\ {0}\end{array}\right)-\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \\ {1}\end{array}\right) \otimes \left( \begin{array}{l}{0} \\ {1}\end{array}\right)\right]=\frac{1}{\sqrt{2}} \left( \begin{array}{c}{1} \\ {0} \\ {0} \\ {0} \\ {0} \\ {0} \\ {0} \\ {-1}\end{array}\right) \quad \Psi :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{c}{1} \\ {0} \\ {0} \\ {0} \\ {0} \\ {0} \\ {0} \\ {-1}\end{array}\right) \nonumber \]
The following operators represent the measurements to be carried out on spins 1, 2 and 3, in that order.
\[\sigma_{x}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{y}^{3} \quad \sigma_{y}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{y}^{3} \quad \sigma_{y}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{x}^{3} \quad \sigma_{x}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{x}^{3} \nonumber \]
The matrix tensor product is also known as the Kronecker product, which is available in Mathcad. The four operators in tensor format are formed as follows.
$$\sigma_{\mathrm{xyy}} :=\text { kronecker }\left(\sigma_{\mathrm{x}}, \text { kronecker }\left(\sigma_{\mathrm{y}}, \sigma_{\mathrm{y}}\right)\right)$$ $$\sigma_{\mathrm{yxy}} :=\text { kronecker }\left(\sigma_{\mathrm{y}}, \text { kronecker }\left(\sigma_{\mathrm{x}}, \sigma_{\mathrm{y}}\right)\right)$$
$$\sigma_{\mathrm{yyx}} :=\mathrm{kronecker}\left(\sigma_{\mathrm{y}}, \mathrm{kronecker}\left(\sigma_{\mathrm{y}}, \sigma_{\mathrm{x}}\right)\right)$$ $$\sigma_{\mathrm{xxx}} :=\text { kronecker }\left(\sigma_{\mathrm{x}}, \text { kronecker }\left(\sigma_{\mathrm{x}}, \sigma_{\mathrm{x}}\right)\right)$$
These composite operators are Hermitian and mutually commute which means they can have simultaneous eigenvalues.
$$\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yxy}}-\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$$ $$\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yyx}}-\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$$ $$\sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{xyy}} \rightarrow 0$$
$$\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{yyx}}-\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{yxy}} \rightarrow 0$$ $$\sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{yxy}} \rightarrow 0$$ $$\sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xxx}}-\sigma_{\mathrm{xxx}} \cdot \sigma_{\mathrm{yyx}} \rightarrow 0$$
The expectation values of the operators are now calculated.
\[\Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xyy}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{yxy}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{yyx}} \cdot \Psi=1 \qquad \Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xxx}} \cdot \Psi=-1 \nonumber \]
Consequently the product of the four operators has the expectation value of -1.
\[\Psi^{\mathrm{T}} \cdot \sigma_{\mathrm{xyy}} \cdot \sigma_{\mathrm{yxy}} \cdot \sigma_{\mathrm{yyx}} \cdot \sigma_{\mathrm{xxx}} \cdot \Psi=-1 \nonumber \]
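The four expectation values can be reproduced in a few lines of pure Python (helper names `kron3` and `expectation` are ours, standing in for the Mathcad kronecker calls):

```python
import math

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def kron3(A, B, C):
    # three-spin tensor-product operator
    return kron(A, kron(B, C))

def expectation(psi, M):
    # <psi|M|psi>; psi is real here, so no conjugation is needed
    return sum(psi[i] * M[i][j] * psi[j] for i in range(8) for j in range(8))

r2 = 1 / math.sqrt(2)
psi = [r2, 0, 0, 0, 0, 0, 0, -r2]      # GHZ state (|000> - |111>)/sqrt(2)

ops = [kron3(sx, sy, sy), kron3(sy, sx, sy), kron3(sy, sy, sx), kron3(sx, sx, sx)]
values = [expectation(psi, M) for M in ops]
# values -> 1, 1, 1, -1 (as complex numbers with zero imaginary part)
```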
Local realism assumes that objects have definite properties independent of measurement. In this example it assumes that the x- and y-components of the spin have definite values prior to measurement. This position leads to a contradiction with the above result, as demonstrated by Mermin (Physics Today, June 1990). Looking again at the measurement operators, notice that there is a \(\sigma_{x}\) measurement on the first spin in the first and fourth experiments. If the spin state is well-defined before measurement, those results have to be the same, either both +1 or both -1, so that the product of the two measurements is +1.
\[\left(\sigma_{x}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{y}^{3}\right)\left(\sigma_{y}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{y}^{3}\right)\left(\sigma_{y}^{1} \otimes \sigma_{y}^{2} \otimes \sigma_{x}^{3}\right)\left(\sigma_{x}^{1} \otimes \sigma_{x}^{2} \otimes \sigma_{x}^{3}\right) \nonumber \]
Likewise there is a \(\sigma_{y}\) measurement on the second spin in experiments one and three. By similar arguments those results will lead to a product of +1 also. Continuing with all pairs in the total operator, local realistic reasoning unambiguously shows that its expectation value should be +1, in sharp disagreement with the quantum mechanical result of -1. This result should cause all mathematically literate local realists to renounce and recant their heresy. However, they may resist, saying this is just a thought experiment that hasn't actually been performed. If you believe in quantum simulation, however, it has been performed.
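Mermin's pairing argument can also be verified exhaustively: this sketch (ours, pure Python) runs over all 64 local hidden-variable assignments of ±1 to the x- and y-components of the three spins and confirms that local realism forces the product of the four experiments to +1:

```python
from itertools import product

# mx[i], my[i] are the pre-assigned (+1 or -1) results a local-realist model
# gives for sigma_x and sigma_y measurements on spin i
for mx in product([+1, -1], repeat=3):
    for my in product([+1, -1], repeat=3):
        p_xyy = mx[0] * my[1] * my[2]
        p_yxy = my[0] * mx[1] * my[2]
        p_yyx = my[0] * my[1] * mx[2]
        p_xxx = mx[0] * mx[1] * mx[2]
        # each mx[i] and my[i] appears an even number of times in the product,
        # so it equals +1 for every one of the 64 assignments...
        assert p_xyy * p_yxy * p_yyx * p_xxx == +1
# ...while quantum mechanics gives (+1)(+1)(+1)(-1) = -1 for the GHZ state
```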
"Quantum simulation is a process in which a quantum computer simulates another quantum system. Because of the various types of quantum weirdness, classical computers can simulate quantum systems only in a clunky, inefficient way. But because a quantum computer is itself a quantum system, capable of exhibiting the full repertoire of quantum weirdness, it can efficiently simulate other quantum systems. The resulting simulation can be so accurate that the behavior of the computer will be indistinguishable from the behavior of the simulated system itself." (Seth Lloyd, Programming the Universe, page 149.) The thought experiment can be simulated using the quantum circuit shown below, which is an adaptation of one that can be found at arXiv:1712.05642v2.
\[\begin{matrix} | 1 \rangle & \rhd & H & \cdot & \cdots & \cdots & H & \rhd & \text{Measure, 0 or 1} \\ \; & \; & \; & | & \; & \; & \; & \; & \; \\ | 0 \rangle & \rhd & \cdots & \oplus & \cdot & S & H & \rhd & \text{Measure, 0 or 1} \\ \; & \; & \; & \; & | & \; & \; & \; & \; \\ | 0 \rangle & \rhd & \cdots & \cdots & \oplus & S & H & \rhd & \text{Measure, 0 or 1} \end{matrix} \nonumber \]
The matrix operators required for the implementation of the quantum circuit:
\mathrm{I} :=\left( \begin{array}{ll}{1} & {0} \\ {0} & {1}\end{array}\right) \quad \mathrm{H} :=\frac{1}{\sqrt{2}} \cdot \left( \begin{array}{cc}{1} & {1} \\ {1} & {-1}\end{array}\right) \quad \mathrm{S} :=\left( \begin{array}{cc}{1} & {0} \\ {0} & {-\mathrm{i}}\end{array}\right) \quad \mathrm{CNOT} :=\left( \begin{array}{llll}{1} & {0} & {0} & {0} \\ {0} & {1} & {0} & {0} \\ {0} & {0} & {0} & {1} \\ {0} & {0} & {1} & {0}\end{array}\right)
\(\mathrm{HII} :=\text { kronecker }(\mathrm{H}, \text { kronecker }(\mathrm{I}, \mathrm{I}))\) \(\mathrm{CNOTI} :=\text { kronecker }(\mathrm{CNOT}, \mathrm{I})\) \(\mathrm{ICNOT} :=\text { kronecker }(\mathrm{I}, \mathrm{CNOT})\)
\(\mathrm{ISS} :=\text { kronecker }(\mathrm{I}, \text { kronecker }(\mathrm{S}, \mathrm{S}))\) \(\mathrm{SIS} :=\text { kronecker }(\mathrm{S}, \text { kronecker }(\mathrm{I}, \mathrm{S}))\) \(\mathrm{SSI} :=\text { kronecker }(\mathrm{S}, \text { kronecker }(\mathrm{S}, \mathrm{I}))\)
\(\mathrm{HHH} :=\text{kronecker}(\mathrm{H}, \text{kronecker}(\mathrm{H}, \mathrm{H}))\)
First it is demonstrated that the first three steps of the circuit create the initial state.
\[[ \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left( \begin{array}{llllllll}{0.707} & {0} & {0} & {0} & {0} & {0} & {0} & {-0.707}\end{array}\right) \nonumber \]
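The Mathcad steps above can be checked with any linear algebra package. The sketch below (NumPy assumed as a stand-in for Mathcad's kronecker function; not part of the original worksheet) rebuilds the operators and confirms that the first three gates turn |100> into the entangled state (|000> - |111>)/sqrt(2):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Three-qubit versions of the gates, mirroring the kronecker() calls above
HII = np.kron(H, np.kron(I, I))
CNOTI = np.kron(CNOT, I)
ICNOT = np.kron(I, CNOT)

psi0 = np.zeros(8)
psi0[4] = 1                        # |100> in the 8-dimensional computational basis
state = ICNOT @ CNOTI @ HII @ psi0 # amplitudes 0.707 at |000> and -0.707 at |111>
```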
The complete circuit shown above simulates the expectation value of the \(\sigma_{x}\sigma_{y}\sigma_{y}\) operator. The presence of an S gate on a line before the final H gate indicates a measurement of \(\sigma_{y}\); its absence indicates a measurement of \(\sigma_{x}\). The subsequent simulations show the absence of S first on the middle line, then on the last line, and finally on all three lines for the simulation of the expectation value of \(\sigma_{x}\sigma_{x}\sigma_{x}\).
Eigenvalue |0> = +1; eigenvalue |1> = -1
\[[ \text{HHH} \cdot \text{ISS} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber \]
\[\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle= 1 \nonumber \]
Given the eigenvalue assignments above, the expectation value associated with this measurement outcome is 1/4[(1)(1)(1)+(1)(-1)(-1)+(-1)(1)(-1)+(-1)(-1)(1)] = 1. Note that 1/2 is the probability amplitude for each product state in the superposition; therefore the probability of each member of the superposition being observed is 1/4, the square of the amplitude. The same reasoning is used for the remaining simulations.
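The bookkeeping in the previous paragraph, squaring each amplitude and weighting by the parity of the bit string, can be expressed compactly. A minimal sketch (Python; the helper name is ours, not from the source):

```python
def expectation(amplitudes):
    """Expectation value of a joint spin measurement from measured amplitudes.

    amplitudes: dict mapping three-bit outcome strings to probability amplitudes.
    Each outcome |abc> carries the composite eigenvalue (-1)**(a+b+c),
    since |0> has eigenvalue +1 and |1> has eigenvalue -1.
    """
    return sum(abs(a) ** 2 * (-1) ** s.count('1') for s, a in amplitudes.items())

sxyy = expectation({'000': 0.5, '011': 0.5, '101': 0.5, '110': 0.5})  # +1
sxxx = expectation({'001': 0.5, '010': 0.5, '100': 0.5, '111': 0.5})  # -1
```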
\[[ \text{HHH} \cdot \text{SIS} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber \]
\[\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{y} \sigma_{x} \sigma_{y}\right\rangle= 1 \nonumber \]
\[[ \text{HHH} \cdot \text{SSI} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0.5} & {0} & {0} & {0.5} & {0} & {0.5} & {0.5} & {0}\end{array}\right) \nonumber \]
\[\frac{1}{2}( |000\rangle+| 011 \rangle+| 101 \rangle+| 110 \rangle ) \Rightarrow\left\langle\sigma_{y} \sigma_{y} \sigma_{x}\right\rangle= 1 \nonumber \]
\[[ \text{HHH} \cdot \text{ICNOT} \cdot \text{CNOTI} \cdot \text{HII} \cdot \left( \begin{array}{llllllll}{0} & {0} & {0} & {0} & {1} & {0} & {0} & {0}\end{array}\right)^{\mathrm{T}}]^{T} = \left(\begin{array}{llllllll}{0} & {0.5} & {0.5} & {0} & {0.5} & {0} & {0} & {0.5}\end{array}\right) \nonumber \]
\[\frac{1}{2}( |001\rangle+| 010 \rangle+| 100 \rangle+| 111 \rangle ) \Rightarrow\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle=- 1 \nonumber \]
Individually and in product form the simulated results are in agreement with the previous quantum mechanical calculations.
\[\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle\left\langle\sigma_{y} \sigma_{x} \sigma_{y}\right\rangle\left\langle\sigma_{y} \sigma_{y} \sigma_{x}\right\rangle=- 1 \nonumber \]
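All four simulations, and their product, can be run end to end in a few lines. This sketch (NumPy assumed as a stand-in for the Mathcad worksheet; helper names are ours) applies S before the final Hadamard on a qubit to select a \(\sigma_y\) measurement, exactly as in the circuit:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, -1j]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

psi0 = np.zeros(8)
psi0[4] = 1                                        # |100>
prep = np.kron(I, CNOT) @ np.kron(CNOT, I) @ kron3(H, I, I)
parity = np.array([(-1) ** bin(i).count('1') for i in range(8)])

def expect(q1, q2, q3):
    """Per-qubit measurement basis: S selects sigma_y, I selects sigma_x."""
    final = kron3(H, H, H) @ kron3(q1, q2, q3) @ prep @ psi0
    return float(np.abs(final) ** 2 @ parity)

xyy = expect(I, S, S)
yxy = expect(S, I, S)
yyx = expect(S, S, I)
xxx = expect(I, I, I)
product = xyy * yxy * yyx * xxx                    # (+1)(+1)(+1)(-1) = -1
```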
The appendix provides algebraic calculations of \(<\sigma_{x} \sigma_{y} \sigma_{y}>\) and \(<\sigma_{x} \sigma_{x} \sigma_{x}>\).
Truth tables for the operation of the circuit elements:
\[\mathrm{I}=\left( \begin{array}{ccc}{0} & {\text { to }} & {0} \\ {1} & {\text { to }} & {1}\end{array}\right) \quad H=\left[ \begin{array}{ccc}{0} & {\text { to }} & {\frac{(0+1)}{\sqrt{2}}} \\ {1} & {\text { to }} & {\frac{(0-1)}{\sqrt{2}}}\end{array}\right] \quad \mathrm{CNOT}=\left( \begin{array}{lll}{00} & {\text { to }} & {00} \\ {01} & {\text { to }} & {01} \\ {10} & {\text { to }} & {11} \\ {11} & {\text { to }} & {10}\end{array}\right) \quad \mathrm{S}=\left( \begin{array}{ccc}{0} & {\text { to }} & {0} \\ {1} & {\text { to }} & {-\mathrm{i}}\end{array}\right) \nonumber \]
\[\begin{array}{c}{|100 \rangle} \\ {H \otimes I \otimes I} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 100\rangle]} \\ {CNOT \otimes I} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 110\rangle]} \\ {I \otimes CNOT}\\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 111 \rangle ]} \\ {I \otimes S \otimes S} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-(-i)^{2}| 111 \rangle ]=\frac{1}{\sqrt{2}}[ |000\rangle+| 111 \rangle ]} \\ {H \otimes H \otimes H} \\ {\frac{1}{2}[ |000\rangle+| 011 \rangle+| 101 \rangle+| 110\rangle]} \\ {\left\langle\sigma_{x} \sigma_{y} \sigma_{y}\right\rangle= 1}\end{array} \nonumber \]
\[\begin{array}{c}{|100\rangle} \\ {H \otimes I \otimes I} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 100\rangle]} \\ {CNOT \otimes I} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 110\rangle]} \\ {I \otimes CNOT} \\ {\frac{1}{\sqrt{2}}[ |000\rangle-| 111\rangle]} \\ {H \otimes H \otimes H} \\ {\frac{1}{2}[ |001\rangle+| 010 \rangle+| 100 \rangle+| 111\rangle]} \\ {\left\langle\sigma_{x} \sigma_{x} \sigma_{x}\right\rangle=- 1}\end{array} \nonumber \]
A two-stage atomic cascade emits entangled photons (A and B) in opposite directions with the same circular polarization according to observers in their path. The experiment involves the measurement of photon polarization states in the vertical/horizontal measurement basis, and allows for the rotation of the right-hand detector through an angle \(\theta\), in order to explore the consequences of quantum mechanical entanglement. PA stands for polarization analyzer and could simply be a calcite crystal.
\[\begin{matrix} V & \lhd & \lceil & \; & \rceil & \; & \; & \; & \lceil & \; & \rceil & \rhd & V \\ \; & \; & | & 0 & | & \xleftarrow{A} & \xleftrightarrow{Source} & \xrightarrow{B} & | & \theta & | & \; & \; \\ H & \lhd & \lfloor & \; & \rfloor & \; & \; & \; & \lfloor & \; & \rfloor & \rhd & H \\ \; & \; & PA_{A} & \; & \; & \; & PA_{B} & \; & \; \end{matrix} \nonumber \]
The entangled two-photon polarization state is written in the circular and linear polarization bases,
\[| \Psi \rangle=\frac{1}{\sqrt{2}}[ |L\rangle_{A} | L \rangle_{B}+| R \rangle_{A} | R \rangle_{B} ]=\frac{1}{\sqrt{2}}[ |V\rangle_{A} | V \rangle_{B}-| H \rangle_{A} | H \rangle_{B} ] \quad \text{using} \quad | L \rangle=\frac{1}{\sqrt{2}}[ |V\rangle+ i | H \rangle ] \quad | R \rangle=\frac{1}{\sqrt{2}}[ |V\rangle- i | H \rangle ] \nonumber \]
The vertical (eigenvalue +1) and horizontal (eigenvalue -1) polarization states for the photons in the measurement plane are given below. \(\theta\) is the angle of the measuring PA.
\[\mathrm{V}(\theta) :=\left( \begin{array}{l}{\cos (\theta)} \\ {\sin (\theta)}\end{array}\right)\quad \mathrm{H}(\theta) :=\left( \begin{array}{l}{-\sin (\theta)} \\ {\cos (\theta)}\end{array}\right) \quad \mathrm{V}(0)=\left( \begin{array}{l}{1} \\ {0}\end{array}\right) \quad \mathrm{H}(0)=\left( \begin{array}{l}{0} \\ {1}\end{array}\right) \nonumber \]
If photon A has vertical polarization, photon B also has vertical polarization. The probability that photon B is measured to have vertical polarization at an angle θ, giving a composite eigenvalue of +1, is:
\[\left(\mathrm{V}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \rightarrow \cos (\theta)^{2} \nonumber \]
If photon A has vertical polarization, photon B also has vertical polarization. The probability that photon B is measured to have horizontal polarization at an angle θ, giving a composite eigenvalue of -1, is:
\[\left(\mathrm{H}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \rightarrow \sin (\theta)^{2} \nonumber \]
Therefore the overall quantum correlation coefficient or expectation value is:
\[\mathrm{E}(\theta) :=\left(\mathrm{V}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2}-\left(\mathrm{H}(\theta)^{\mathrm{T}} \cdot \mathrm{V}(0)\right)^{2} \text { simplify } \rightarrow \cos (2 \cdot \theta) \quad \mathrm{E}(0 \cdot \mathrm{deg})=1 \quad \mathrm{E}(30 \cdot \mathrm{deg})=0.5 \quad \mathrm{E}(90 \cdot \mathrm{deg})=-1 \nonumber \]
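The Mathcad result above can be checked numerically. A sketch (NumPy assumed; Hpol is our name for the horizontal state, to avoid clashing with the Hadamard symbol used elsewhere):

```python
import numpy as np

def V(theta):  # vertical polarization state at analyzer angle theta
    return np.array([np.cos(theta), np.sin(theta)])

def Hpol(theta):  # horizontal polarization state at analyzer angle theta
    return np.array([-np.sin(theta), np.cos(theta)])

def E(theta):
    """Correlation coefficient: P(+1) - P(-1) for photon B measured at theta."""
    return (V(theta) @ V(0)) ** 2 - (Hpol(theta) @ V(0)) ** 2  # = cos(2*theta)

# E(0 deg) = 1, E(30 deg) = 0.5, E(90 deg) = -1
results = {deg: E(np.radians(deg)) for deg in (0, 30, 90)}
```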
Now it will be shown that a local-realistic, hidden-variable model can be constructed which is in agreement with the quantum calculations for 0 and 90 degrees, but not for 30 degrees.
If objects have well-defined properties independent of measurement, the results for \(\theta\) = 0 degrees and \(\theta\) = 90 degrees require that the photons carry the following instruction sets, where the hexagonal vertices refer to \(\theta\) values of 0, 30, 60, 90, 120, and 150 degrees.
There are eight possible instruction sets, six of the type on the left and two of the type on the right. The white circles represent vertical polarization with eigenvalue +1 and the black circles represent horizontal polarization with eigenvalue -1. In any given measurement, according to local realism, both photons (A and B) carry identical instruction sets, in other words the same one of the eight possible sets.
The problem is that while these instruction sets are in agreement with the 0 and 90 degree quantum calculations, with expectation values of +1 and -1 respectively, they can't explain the 30 degree predictions of quantum mechanics. The figure on the left shows that the same result should be obtained 4 times with joint eigenvalue +1, and the opposite result twice with joint eigenvalue of -1. For the figure on the right the opposite polarization is always observed giving a joint eigenvalue of -1. Thus, local realism predicts an expectation value of 0 in disagreement with the quantum result of 0.5.
\[\frac{6 \cdot(1-1+1+1-1+1)+2 \cdot(-1-1-1-1-1-1)}{8}=0 \nonumber \]
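The same average can be obtained by brute-force enumeration of the eight instruction sets. In this sketch (ours, not from the source), a set assigns ±1 to the six hexagon vertices, with the 90-degree constraint s(v+3) = -s(v) built in; adjacent vertices are 30 degrees apart:

```python
from itertools import product

def instruction_sets():
    # Values at the 0, 30 and 60 degree vertices are free; the 90, 120 and 150
    # degree vertices are forced opposite by perfect anti-correlation at 90 deg.
    for bits in product((1, -1), repeat=3):
        yield list(bits) + [-b for b in bits]

# Correlation for detectors one vertex (30 degrees) apart, averaged over all
# 8 instruction sets and all 6 orientations:
corr_30 = sum(s[v] * s[(v + 1) % 6]
              for s in instruction_sets()
              for v in range(6)) / (8 * 6)
# Local realism gives 0, in disagreement with the quantum prediction of 0.5.
```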
This analysis is based on "Simulating Physics with Computers" by Richard Feynman, published in the International Journal of Theoretical Physics (volume 21, pages 481-485), and Julian Brown's Quest for the Quantum Computer (pages 91-100). Feynman used the experiment outlined above to establish that a local classical computer could not simulate quantum physics.
A local classical computer manipulates bits which are in well-defined states, 0s and 1s, shown above graphically in white and black. However, these classical states are incompatible with the quantum mechanical analysis which is consistent with experimental results. This two-photon experiment demonstrates that simulation of quantum physics requires a computer that can manipulate 0s and 1s, superpositions of 0 and 1, and entangled superpositions of 0s and 1s.
Simulation of quantum physics requires a quantum computer. The following quantum circuit simulates this experiment exactly. The Hadamard and CNOT gates transform the input, |10>, into the required entangled Bell state. R(\(\theta\)) rotates the polarization of photon B clockwise through an angle \(\theta\). Finally measurement yields one of the four possible output states: |00>, |01>, |10> or |11>.
\[\begin{matrix} | 1 \rangle & \rhd & H & \cdot & \cdots & \rhd & \text{Measure 0 or 1} \\ \; & \; & \; & | & \; & \; & \; \\ | 0 \rangle & \rhd & \cdots & \oplus & R(\theta) & \rhd & \text{Measure 0 or 1} \end{matrix} \nonumber \]
The following algebraic analysis of the quantum circuit shows that it yields the correct expectation value for all values of \(\theta\). This analysis requires the truth tables for the matrix operators. Recall from above that |0> = |V> with eigenvalue +1, and |1> = |H> with eigenvalue -1.
\[H=\left[ \begin{array}{ccc}{0} & {\text { to }} & {\frac{1}{\sqrt{2}} \cdot(0+1)} \\ {1} & {\text { to }} & {\frac{1}{\sqrt{2}} \cdot(0-1)}\end{array}\right] \quad \mathrm{CNOT}=\left( \begin{array}{ccc}{00} & {\mathrm{to}} & {00} \\ {01} & {\mathrm{to}} & {01} \\ {10} & {\mathrm{to}} & {11} \\ {11} & {\mathrm{to}} & {10}\end{array}\right) \quad \begin{matrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} & \xrightarrow{R(\theta)} & \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} \\ \begin{pmatrix} 0 \\ 1 \end{pmatrix} & \xrightarrow{R(\theta)} & \begin{pmatrix} - \sin \theta \\ \cos \theta \end{pmatrix} \end{matrix} \nonumber \]
\[\begin{array}{c}{|1 \rangle | 0 \rangle=| 10\rangle} \\ {\mathrm{H} \otimes \mathrm{I}} \\ {\frac{1}{\sqrt{2}}[ |0\rangle-| 1 \rangle ] | 0 \rangle=\frac{1}{\sqrt{2}}[ |00\rangle-| 10\rangle ]} \\ {\text {CNOT}} \\ {\frac{1}{\sqrt{2}}[ |00\rangle-| 11\rangle ]} \\ {\mathrm{I} \otimes \mathrm{R}(\theta)} \\ \frac{1}{\sqrt{2}}[ |0\rangle(\cos \theta | 0\rangle+\sin \theta | 1 \rangle )-| 1 \rangle(-\sin \theta | 0\rangle+\cos \theta | 1 \rangle ) ] \\ \Downarrow \\ \frac{1}{\sqrt{2}}[\cos \theta | 00\rangle+\sin \theta | 01 \rangle+\sin \theta | 10 \rangle-\cos \theta | 11 \rangle ] \\ \text{Probabilities} \\ \Downarrow \\ \frac{\cos ^{2} \theta}{2} | 00 \rangle+\frac{\sin ^{2} \theta}{2} | 01 \rangle+\frac{\sin ^{2} \theta}{2} | 10 \rangle+\frac{\cos ^{2} \theta}{2} | 11 \rangle \end{array} \nonumber \]
|00> = |VV> and |11> = |HH> have a composite eigenvalue of +1. |01> = |VH> and |10> = |HV> have a composite eigenvalue of -1. Therefore, the expectation value is \(E(\theta)=\cos^{2}\theta-\sin^{2}\theta=\cos(2\theta)\), in agreement with the quantum mechanical calculation above.
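The algebra above translates directly into a two-qubit simulation. The following sketch (NumPy assumed; not part of the original page) runs the circuit and recovers E(θ) = cos 2θ for any θ:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def R(theta):  # polarization rotation, per the truth table above
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def E(theta):
    psi0 = np.array([0, 0, 1, 0])                    # |10>
    psi = np.kron(I, R(theta)) @ CNOT @ np.kron(H, I) @ psi0
    p = np.abs(psi) ** 2                             # [P00, P01, P10, P11]
    return p[0] - p[1] - p[2] + p[3]                 # eigenvalues +1, -1, -1, +1
```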
This page titled 1.11: Quantum Computation- A Short Course is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Frank Rioux via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
March 2017, 37(3): 1749-1762. doi: 10.3934/dcds.2017073
Asymptotic properties of standing waves for mass subcritical nonlinear Schrödinger equations
Xiaoyu Zeng 1,
Department of Mathematics, School of Sciences, Wuhan University of Technology, Wuhan 430070, China
Received May 2016 Revised September 2016 Published December 2016
Fund Project: The author is supported by NSFC grants 11501555 and 11471331.
We study the following minimization problem:

$$d_{a_q}(q) := \inf_{\{ \int_{\mathbb{R}^2}|u|^2 dx = 1 \}} E_{q,a_q}(u),$$

where the functional $E_{q,a_q}(\cdot)$ is given by

$$E_{q,a_q}(u) := \int_{\mathbb{R}^{2}} \big(|\nabla u(x)|^{2}+V(x)|u(x)|^{2}\big)\,dx-\frac{2a_{q}}{q+2}\int_{\mathbb{R}^{2}}|u(x)|^{q+2}\,dx,$$

with $a_q>0$, $q\in(0,2)$, and $V(x)$ some type of trapping potential. Let $a^*:= \|Q\|_2^2$, where $Q$ is the unique (up to translations) positive radial solution of $\Delta u-u+u^3=0$ in $\mathbb{R}^2$. We prove that if $\lim_{q\nearrow2}a_q=a<a^*$, then the set of minimizers of $d_{a_q}(q)$ is compact in a suitable space as $q\nearrow2$. On the contrary, if $\lim_{q\nearrow2}a_q=a\geq a^*$, then by directly using asymptotic analysis we prove that all minimizers must blow up, and we give the detailed asymptotic behavior of the minimizers. These conclusions extend the results of Guo-Zeng-Zhou [Concentration behavior of standing waves for almost mass critical nonlinear Schrödinger equations, J. Differential Equations, 256 (2014), 2079-2100].
Keywords: Constrained variational method, energy estimates, blow-up, standing waves, nonlinear Schrödinger equation.
Mathematics Subject Classification: 35J20, 35J60.
Citation: Xiaoyu Zeng. Asymptotic properties of standing waves for mass subcritical nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1749-1762. doi: 10.3934/dcds.2017073
W. Z. Bao and Y. Y. Cai, Mathematical theory and numerical methods for Bose-Einstein condensation, Kinet. Relat. Models, 6 (2013), 1-135. doi: 10.3934/krm.2013.6.1. Google Scholar
T. Bartsch and Z.-Q. Wang, Existence and multiplicity results for some superlinear elliptic problems on $\mathbb{R}^N$, Comm. Partial Differential Equations, 20 (1995), 1725-1741. doi: 10.1080/03605309508821149. Google Scholar
H. Berestycki and P. L. Lions, Nonlinear scalar field equations. Ⅰ. Existence of a ground state, Arch. Rat. Mech. Anal., 82 (1983), 313-345. doi: 10.1007/BF00250555. Google Scholar
J. Byeon and Z. Q. Wang, Standing waves with a critical frequency for nonlinear Schrödinger equations, Arch. Ration. Mech. Anal., 165 (2002), 295-316. doi: 10.1007/s00205-002-0225-6. Google Scholar
T. Cazenave, Semilinear Schrödinger Equations, Courant Lecture Notes in Mathematics Vol. 10 Courant Institute of Mathematical Science/AMS, New York, 2003. Google Scholar
M. del Pino, M. Kowalczyk and J. C. Wei, Concentration on curves for nonlinear schrödinger equations, Comm. Pure Appl. Math., 60 (2007), 113-146. doi: 10.1002/cpa.20135. Google Scholar
B. Gidas, W. M. Ni and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$, in Mathematical analysis and applications Part A, Adv. in Math. Suppl. Stud. vol. 7, Academic Press, New York, (1981), 369–402. Google Scholar
Y. J. Guo and R. Seiringer, On the mass concentration for Bose-Einstein condensates with attractive interactions, Lett. Math. Phys., 104 (2014), 141-156. doi: 10.1007/s11005-013-0667-9. Google Scholar
Y. J. Guo, Z. -Q. Wang, X. Y. Zeng and H. S. Zhou, Properties for ground states of attractive Gross-Pitaevskii equations with multi-well potentials, arXiv: 1502.01839. Google Scholar
Y. J. Guo, X. Y. Zeng and H. S. Zhou, Concentration behavior of standing waves for almost mass critical nonlinear Schrödinger equations, J. Differential Equations, 256 (2014), 2079-2100. doi: 10.1016/j.jde.2013.12.012. Google Scholar
Y. J. Guo, X. Y. Zeng and H. S. Zhou, Energy estimates and symmetry breaking in attractive Bose-Einstein condensates with ring-shaped potentials, Ann. I. H. Poincaré-AN, 33 (2016), 809-828. doi: 10.1016/j.anihpc.2015.01.005. Google Scholar
Q. Han and F. H. Lin, Elliptic Partial Differential Equations, Courant Lecture Notes in Mathematics Vol. 1 2$^{nd}$ edition, Courant Institute of Mathematical Science/AMS, New York, 2011. Google Scholar
M. K. Kwong, Uniqueness of positive solutions of $Δ u-u+u^p=0$ in $\mathbb{R}^N$, Arch. Rational Mech. Anal., 105 (1989), 243-266. doi: 10.1007/BF00251502. Google Scholar
Y. Li and W.-M. Ni, Radial symmetry of positive solutions of nonlinear elliptic equations in $\mathbb{R}^n$, Comm. Partial Differential Equations, 18 (1993), 1043-1054. doi: 10.1080/03605309308820960. Google Scholar
E. H. Lieb, R. Seiringer and J. Yngvason, Bosons in a trap: A rigorous derivation of the Gross-Pitaevskii energy functional, Phys. Rev. A 61 (2000), 043602-1-13. Google Scholar
P. L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case Ⅰ, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 109-145. Google Scholar
P. L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case Ⅱ, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 223-283. Google Scholar
G. Z. Lu and J. C. Wei, On nonlinear schrödinger equations with totally degenerate potentials, C. R. Acad. Sci. Paris., 326 (1998), 691-696. doi: 10.1016/S0764-4442(98)80032-3. Google Scholar
M. Maeda, On the symmetry of the ground states of nonlinear Schrödinger equation with potential, Adv. Nonlinear Stud., 10 (2010), 895-925. doi: 10.1515/ans-2010-0409. Google Scholar
M. Reed and B. Simon, Methods of Modern Mathematical Physics. Ⅳ. Analysis of Operators Academic Press, New York-London, 1978. Google Scholar
H. A. Rose and M. I. Weinstein, On the bound states of the nonlinear Schrödinger equation with a linear potential, Physica D, 30 (1988), 207-218. doi: 10.1016/0167-2789(88)90107-8. Google Scholar
R. Seiringer, Hot topics in cold gases, XVIth International Congress on Mathematical Physics, World Sci. Publ., Hackensack, NJ, (2010), 231-245. doi: 10.1142/9789814304634_0013. Google Scholar
C. A. Stuart, Bifurcation for Dirichlet problems without eigenvalues, Proc. London Math. Soc., 45 (1982), 169-192. doi: 10.1112/plms/s3-45.1.169. Google Scholar
C. A. Stuart, Bifurcation from the essential spectrum, Springer, Berlin, 45 (1983), 169-192. doi: 10.1007/BFb0103282. Google Scholar
C. A. Stuart, Bifurcation from the essential spectrum for some non-compact non-linearities, Math. Methods Applied Sci., 11 (1989), 525-542. doi: 10.1002/mma.1670110408. Google Scholar
X. F. Wang, On concentration of positive bound states of nonlinear Schrödinger equations, Comm. Math. Phys., 153 (1993), 229-244. doi: 10.1007/BF02096642. Google Scholar
M. I. Weinstein, Nonlinear Schrödinger equations and sharp interpolations estimates, Comm. Math. Phys., 87 (1983), 567-576. Google Scholar
\begin{definition}[Definition:Binomial (Euclidean)/First Binomial]
Let $a$ and $b$ be two (strictly) positive real numbers such that $a + b$ is a binomial.
Then $a + b$ is a '''first binomial''' {{iff}}:
: $(1): \quad a \in \Q$
: $(2): \quad \dfrac {\sqrt {a^2 - b^2} } a \in \Q$
where $\Q$ denotes the set of rational numbers.
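For example (an illustrative instance added here, not part of the quoted definition): let $a = 3$ and $b = \sqrt 5$. Then:
: $(1): \quad a = 3 \in \Q$
: $(2): \quad \dfrac {\sqrt {a^2 - b^2} } a = \dfrac {\sqrt {9 - 5} } 3 = \dfrac 2 3 \in \Q$
so $3 + \sqrt 5$ is a '''first binomial'''.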
{{EuclidSaid}}
:''{{Definition:Euclid's Definitions - Book X (II)/1 - First Binomial}}''
{{EuclidDefRefNocat|X (II)|1|First Binomial}}
\end{definition}
Glucose absorption drives cystogenesis in a human organoid-on-chip model of polycystic kidney disease
Sienna R. Li1,2,3,4 na1,
Ramila E. Gulieva1,2,3,4 na1,
Louisa Helms1,2,3,4,5,
Nelly M. Cruz1,2,3,4,
Thomas Vincent1,2,3,4,6,
Hongxia Fu3,4,6,7,
Jonathan Himmelfarb1,2,3,4 na2 &
Benjamin S. Freedman ORCID: orcid.org/0000-0003-2228-73831,2,3,4,5,6 na2
In polycystic kidney disease (PKD), fluid-filled cysts arise from tubules in kidneys and other organs. Human kidney organoids can reconstitute PKD cystogenesis in a genetically specific way, but the mechanisms underlying cystogenesis remain elusive. Here we show that subjecting organoids to fluid shear stress in a PKD-on-a-chip microphysiological system promotes cyst expansion via an absorptive rather than a secretory pathway. A diffusive static condition partially substitutes for fluid flow, implicating volume and solute concentration as key mediators of this effect. Surprisingly, cyst-lining epithelia in organoids polarize outwards towards the media, arguing against a secretory mechanism. Rather, cyst formation is driven by glucose transport into lumens of outwards-facing epithelia, which can be blocked pharmacologically. In PKD mice, glucose is imported through cysts into the renal interstitium, which detaches from tubules to license expansion. Thus, absorption can mediate PKD cyst growth in human organoids, with implications for disease mechanism and potential for therapy development.
Autosomal dominant polycystic kidney disease (PKD) is commonly inherited as a heterozygous, loss-of-function mutation in either PKD1 or PKD2, which encode the proteins polycystin-1 (PC1) or polycystin-2 (PC2), respectively1,2. PKD is characterized by the growth of large, fluid-filled cysts from ductal structures in kidneys and other organs, and is among the most common life-threatening monogenic diseases and kidney disorders3. Tolvaptan (Jynarque), a vasopressin receptor antagonist that decreases water absorption into the collecting ducts, was recently approved for treatment of PKD in the United States, but only modestly delays cyst growth and has side effects that limit its use4,5. At the molecular level, PC1 and PC2 form a receptor-channel complex at the primary cilium that is poorly understood but possibly acts as a flow-sensitive mechanosensor6,7,8,9,10,11. Loss of this complex results in the gradual expansion and dedifferentiation of the tubular epithelium, including increased proliferation and altered transporter expression and localization12,13,14.
As mechanisms of PKD are difficult to decipher in vivo, and murine models do not fully phenocopy or genocopy the human disease, we have developed a human model of PKD in vitro15,16,17. We, together with other groups around the world, have invented methods to derive kidney organoids from human pluripotent stem cells (hPSC), which contain podocyte, proximal tubule, and distal tubule segments in contiguous, nephron-like arrangements17,18,19,20. Differentiation of these organoids is highly sensitive to the physical properties of the extracellular microenvironment21. Organoids derived from gene-edited hPSC with biallelic, truncating mutations in PKD1 or PKD2 develop cysts from kidney tubules, reconstituting the pathognomonic hallmark of the disease15,16,17. Interestingly, culture of organoids under suspension conditions dramatically increases the expressivity of the PKD phenotype, revealing a critical role for microenvironment in cystogenesis16.
Fluid flow is a major feature of the nephron microenvironment, which is believed to play an important role in PKD4,5,7,8,22. However, physiological rates of flow have not yet been achieved in kidney organoid cultures or PKD models. 'Kidney on a chip' microphysiological systems provide fit-for-purpose platforms integrating flow with kidney cells to model physiology and disease in a setting that more closely simulates the in vivo condition than monolayer cultures23,24,25,26,27. There is currently intense interest in integrating organ on chip systems with organoids, which can be derived from hPSC as a renewable and gene-editable cell source28,29,30,31,32. We therefore investigated the effect of flow on PKD in a human organoid on a chip microphysiological system.
Flow induces cyst swelling in PKD organoids
Prior to introducing flow, we first confirmed the specificity and timing of the PKD phenotype in static cultures. PKD1−/− or PKD2−/− hPSC were differentiated side-by-side with isogenic controls under static, adherent culture conditions to form kidney organoids. On day 18 of differentiation, prior to cyst formation, organoids were carefully detached from the underlying substratum and transferred to suspension cultures in low-attachment plates. Under these conditions, the majority of PKD1−/− or PKD2−/− organoids formed cysts within 1–2 weeks, whereas isogenic control organoids rarely formed cysts (Fig. 1a). In repeated trials, the difference between PKD organoids and isogenic controls was quantifiable and highly significant (Fig. 1a). Thus, PKD organoids formed cysts in a genotype-specific manner, strongly suggesting that this phenotype was specific to the disease state. This differs from other types of three-dimensional cultures of epithelial cells, in which hollow 'cysts' (spheroids) arise irrespective of PKD genotype and represent a default configuration of the epithelium rather than a disease-specific phenotype17,33,34,35.
Fig. 1: Organoid PKD cysts expand under flow.
a Representative images of organoids on days following transfer to suspension culture (upper), with quantification (lower) of cyst incidence as a fraction of the total number of organoids (mean ± s.e.m. from n ≥ 4 independent experiments per condition; ****p < 0.0001). b Schematic of workflow for fluidic condition. c Time-lapse phase contrast images of PKD organoids under flow (0.2 dynes/cm2), representative of four independent experiments. d Average growth rates of control organoids (Ctrl org.), non-cystic compartments of PKD organoids (PKD org.), and cystic compartments of PKD organoids (PKD cysts) under flow (0.2 dynes/cm2). Each experiment was performed for 6 h. Cyst growth rate was calculated on an individual basis as the maximal size of the cyst during the time course, divided by the time point at which the cyst reached this size (mean ± s.e.m. from n ≥ 4 independent experiments; each dot represents the average growth rate of organoids in a single experiment; ****p < 0.0001).
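The per-cyst growth rate defined in the caption (maximal area reached during the time course, divided by the time at which that maximum occurred) can be sketched as follows; the function name and sample time course are illustrative, not the authors' analysis code:

```python
def cyst_growth_rate(times_h, areas_um2):
    """Per-cyst growth rate as defined for Fig. 1d: maximal area reached
    during the time course divided by the time at which that maximum
    occurred (units: um^2 per hour)."""
    i_max = max(range(len(areas_um2)), key=areas_um2.__getitem__)
    return areas_um2[i_max] / times_h[i_max]

# Hypothetical 6 h time course sampled hourly; the cyst peaks at 5 h
# and then partially deflates.
times = [1, 2, 3, 4, 5, 6]
areas = [5000, 12000, 30000, 90000, 120000, 110000]
print(cyst_growth_rate(times, areas))  # 120000 / 5 = 24000.0 um^2/hr
```

Per-experiment averages (the dots in Fig. 1d) would then be means of these individual rates.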
To understand how flow affects PKD in organoids, we designed a microfluidic system that allows for live imaging of kidney organoids during the early stages of cyst formation (Fig. 1b). hPSC were first differentiated into organoids under static, adherent culture conditions for 26 days, at which time point tubular structures had formed with small cysts in the PKD cultures. Organoids were then purified by microdissection using a syringe needle16, and transferred into gas-permeable, tissue culture-treated polymer flow chambers (0.4 mm height × 3.8 mm width), which were optically clear and large enough to comfortably accommodate organoids and cysts. The channels were pre-coated with a thin layer of Matrigel, and organoids were allowed to attach overnight. PKD and isogenic control organoids were subjected to fluid flow with a wall shear stress of 0.2 dynes/cm2, which approximates physiological shear stress within kidney tubules27,36,37,38. In these devices, we observed that cysts in PKD organoids increased in size rapidly under flow (change in area of ~20,000 μm2/hr, or ~160 μm/hr in diameter), compared to non-cystic compartments within these organoids, or isogenic control organoids lacking PKD mutations, which did not swell appreciably (Fig. 1c, d and Supplementary Movie 1).
Diffusion can partially substitute for flow
Having observed that cysts expand under microfluidic conditions, it was important to establish a corresponding static condition lacking flow as a negative control. Initially we utilized the same chambers and syringe pump in the absence of pump activation, which is a commonly used control format for microfluidic experiments. However, we observed that food dye contained within the syringe failed to enter the microfluidic chamber under these conditions (Supplementary Fig. 1a). This indicated a lack of diffusion, which meant that organoids would be exposed only to the volume of media present within the channel of the microfluidic device (~200 µL), which was much lower than the volume they would encounter under fluidic conditions (~60 mL/6 h). Such a static condition could not be readily compared to fluidic conditions to determine the effects of flow, since other parameters such as volume and total solute mass would also be very different.
To more accurately control for the effects of flow, we designed a diffusive static condition that exposed organoids to an equivalent volume of culture media as in the flow condition. This consisted of a reservoir of media (maximum volume of 25 mL) connected to the microfluidic chip by wider tubing to allow for efficient and uninhibited diffusion of small molecules into the microfluidic channel. In this static format, food dye diffused from the media reservoir into the channel after 2–3 h (Supplementary Fig. 1b). Similarly, rhodamine-labeled dextran (10 kDa) diffused from the media reservoir into the channel and equilibrated to epifluorescence levels comparable to the fluidic condition within 48 h (Fig. 2a).
Fig. 2: Volume can partially substitute for flow in cyst expansion.
a Rhodamine dextran (10 kDa) epifluorescence in static (non-diffusive), diffusive static, and fluidic conditions. 'Lane' indicates channel interior, and b time lapse phase contrast images of cysts in these conditions. Images are representative of n ≥ 4 independent experiments. c Average growth rates (μm2/hr) of cysts in diffusive static condition with different volumes, compared to fluidic or non-diffusive static. Each experiment was performed for 6 h. Cyst growth rate was calculated on an individual basis as the maximal size of the cyst during the time course, divided by the time point at which the cyst reached this size. (n ≥ 8 cysts (dots) pooled from two or more independent experiments; ***p < 0.05). d Schematic of experiment testing effect of volume vs. pressure on cyst growth. Elements of the image were illustrated using Biorender software under license. e Representative phase contrast images and (f) quantification of growth rate of cysts suspended in either 0.5 or 10 mL of media under equivalent hydrostatic pressures (mean ± s.e.m. of n ≥ 14 cysts per condition pooled from three independent experiments; ***, p < 0.05). g Growth profiles of individual cysts (lines) over time in microfluidic devices from 0–5 h. Measurements made every 5 min using ImageJ software. Cyst area was normalized by dividing by the starting area. Data points are from three or more independent experiments. h Sum-of-squares values from linear regression models fit to each individual cyst (n ≥ 7 organoids per condition, pooled from four or more independent experiments; p = 0.0342 versus diffusive static and 0.0411 versus static). Error bars, standard error.
To further validate this 'diffusive static' condition, we varied the volume of media in the reservoir and analyzed cyst growth over a period of 12 h. Cysts exposed to a reservoir containing 1 mL of media expanded at a rate of ~3,000 μm2/hr, whereas a reservoir containing 25 mL increased expansion to ~10,000 μm2/hr, approximately half the rate observed in the fluidic condition (Fig. 2b, c, Supplementary Movies 2–4). Using the equation Pressure = ρgh, the hydrostatic pressure on organoids with 1 mL and 25 mL media reservoirs was calculated to be 1174 Pa and 1956 Pa, respectively. As this represented a substantial pressure difference of 5.9 mmHg, we conducted experiments to distinguish between the effects of pressure versus volume on cyst growth. Cystic organoids were suspended in either 500 µL or 10 mL, with a constant fluid column height of 1 cm (Fig. 2d). Cysts exposed to 10 mL of media grew significantly more than those exposed to 500 µL of media (Fig. 2e, f). Thus, media volume was identified as a major determinant of expansion that could partially substitute for flow in this system.
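The quoted pressures are consistent with fluid column heights of roughly 12 cm and 20 cm for the two reservoir configurations (heights inferred here from the stated pressures, assuming $\rho \approx 1000~\mathrm{kg/m^3}$ and $g \approx 9.81~\mathrm{m/s^2}$):
\begin{align*}
P_{1\,\mathrm{mL}} &= \rho g h_1 \approx 1000~\mathrm{kg/m^3} \times 9.81~\mathrm{m/s^2} \times 0.12~\mathrm{m} \approx 1174~\mathrm{Pa},\\
P_{25\,\mathrm{mL}} &= \rho g h_2 \approx 1000~\mathrm{kg/m^3} \times 9.81~\mathrm{m/s^2} \times 0.20~\mathrm{m} \approx 1956~\mathrm{Pa},\\
\Delta P &\approx 782~\mathrm{Pa} \times \frac{1~\mathrm{mmHg}}{133.3~\mathrm{Pa}} \approx 5.9~\mathrm{mmHg}.
\end{align*}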
Not all aspects of the fluidic condition were replicated by the diffusive static condition. Time-lapse microscopy under continuous flow revealed that PKD cysts exhibited fluctuating growth profiles, expanding and constricting (deflating) in cyclical, "breath-like" movements. Constrictions occurred rapidly when the cysts appeared to be fully inflated, suggesting that they resulted from rupture of the epithelium, for instance in response to expansive fluid force (Fig. 2g). Growth and constriction events occurred within hours after the initiation of flow, indicating a rapid physical mechanism rather than a slower one based on cell proliferation. This oscillatory behavior was unique to the fluidic condition, and was not observed in either the diffusive static or non-diffusive static conditions, nor in non-cystic controls (Fig. 2g and Supplementary Movie 1). Using the sum of squares method, we found that cyst dynamics (variance in size within an individual structure over time) were much greater in the fluidic condition, compared to either of the static conditions (Fig. 2h). As solute exposure was likely to occur much more rapidly in the fluidic condition, we proceeded to examine solute uptake under these conditions.
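The sum-of-squares metric used to quantify cyst dynamics (variance in an individual cyst's size around its overall linear growth trend, with larger residuals indicating more oscillatory behavior) can be sketched as below; names and data are illustrative, not the authors' code:

```python
def cyst_dynamics_ss(times, norm_areas):
    """Residual sum of squares around an ordinary least-squares line fit
    to a single cyst's normalized-area time course. Larger values mean
    the cyst deviates more from steady linear growth (oscillatory)."""
    n = len(times)
    mt = sum(times) / n
    ma = sum(norm_areas) / n
    # OLS slope and intercept
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (a - ma) for t, a in zip(times, norm_areas))
    slope = sxy / sxx
    intercept = ma - slope * mt
    # sum of squared residuals around the fitted growth trend
    return sum((a - (slope * t + intercept)) ** 2
               for t, a in zip(times, norm_areas))

# A steadily growing cyst has near-zero residuals; an oscillating
# ("breathing") cyst of the same mean size has large residuals.
print(cyst_dynamics_ss([0, 1, 2, 3], [1.0, 1.2, 1.4, 1.6]))  # ~0
print(cyst_dynamics_ss([0, 1, 2, 3], [1.0, 1.6, 1.0, 1.6]))  # > 0
```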
Cysts absorb glucose during flow-mediated expansion
Kidneys are highly reabsorptive organs, retrieving ~180 L of fluid and solutes per day through the tubular epithelium back into the blood. Glucose is an abundant renal solute and transport cargo, which might explain the effects of media exposure on cyst expansion, but whether kidney organoids absorb glucose is unknown. We therefore studied glucose transport in cysts and organoids using a fluorescent glucose analog, NBD glucose (2-(N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)-2-deoxyglucose). The low height of the channels in our flow devices enabled continuous time lapse imaging of fluorescent molecules without high background fluorescence. Glucose was observed to infiltrate into the devices under both diffusive static as well as fluidic conditions. Epifluorescence of NBD glucose gradually increased and plateaued at similar levels after 12 h in both the diffusive static condition and the fluidic condition, but did not accumulate detectably within the channels in the non-diffusive static condition (Fig. 3a).
Fig. 3: PKD organoids absorb glucose under fluidic and static conditions.
a NBD Glucose background levels in non-diffusive static, diffusive static, and fluidic conditions after 12 h (representative of three independent experiments). b Phase contrast and wide field fluorescence images of organoids in diffusive static and fluidic conditions, 5 h after introduction of NBD glucose (representative of three independent experiments). Arrows are drawn to indicate representative line scans. c Line scan analysis of glucose absorption in PKD cysts under static and fluidic conditions after 5 h (mean ± s.e.m. from n ≥ 7 cysts per condition pooled from three independent experiments; each n indicates the average of four line scans taken from a single cyst). Background fluorescence levels were calculated at each timepoint by measuring the fluorescence intensity of a square region placed in the non-organoid region of the image. d NBD Glucose absorption in the non-cystic compartment of PKD organoid, for Diffusive static 20 mL vs. 1 mL (110 µM NBD Glucose, mean ± s.e.m., n ≥ 4 independent experiments), and (e) diffusive static 25 mL vs. Fluidic (36.5 µM NBD Glucose, n ≥ 5 independent experiments). f Confocal fluorescence images of SGLT2 and ZO1 in PKD1 tubules (representative of three independent experiments). g Confocal fluorescent images of NBD Glucose in organoid tubules, fixed and stained with fluorescent cell surface markers (representative of three independent experiments). h Time-lapse images of NBD Glucose accumulation in a PKD organoid cyst, followed by washout into media containing unlabeled glucose after 24 h, all performed under continuous flow (representative of three independent experiments).
When this assay was performed in channels seeded with organoids, PKD cysts absorbed glucose under fluidic and diffusive static conditions (Fig. 3b and Supplementary Movies 5–6). Line scan analysis of these images showed that there was no significant difference in absorption between the fluidic and diffusive static conditions (Fig. 3c). Analysis of glucose absorption in organoid tubules over time confirmed that the volume of media in the static condition was a crucial factor in nutrient absorption (Fig. 3d). Glucose absorption in organoids over time under the diffusive static condition followed an S-shaped absorption curve, whereas glucose levels in the fluidic condition increased rapidly and then plateaued, approximating an exponential curve, but both conditions plateaued at approximately the same maximal level of glucose absorption (Fig. 3e). These studies suggested that flow has no additional effect on glucose absorption in organoids when compared to a static control presenting equivalent total glucose exposure.
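The line scan quantification described above (four intensity scans averaged per cyst, with background subtracted from a cell-free square region of the same image, as in Fig. 3c) can be sketched as follows; the function name and values are hypothetical:

```python
def background_corrected_profile(line_scans, background):
    """Average several intensity line scans from one cyst (each a list of
    pixel intensities sampled along the scan) and subtract the mean
    background intensity measured in a cell-free region of the image."""
    n_points = len(line_scans[0])
    averaged = [sum(scan[i] for scan in line_scans) / len(line_scans)
                for i in range(n_points)]
    return [v - background for v in averaged]

# Two hypothetical 2-point line scans with a background level of 5.0
print(background_corrected_profile([[10.0, 20.0], [30.0, 40.0]], 5.0))
# [15.0, 25.0]
```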
Glucose absorption was a general property of kidney organoids. In non-cystic structures, sodium-glucose transporter-2 (SGLT2) was expressed in organoid tubules and enriched at the apical surface, delineated by the tight junction marker ZO-1 (Fig. 3f). Immunofluorescence confirmed that NBD glucose was absorbed into and accumulated inside organoid proximal and distal tubules (Fig. 3g). Immunoblot analysis indicated similar levels of SGLT2 in control and PKD organoid cultures (Supplementary Fig. 2a, b). Cyst-lining epithelia expressed SGLT2, and accumulated glucose both intracellularly as well as inside their lumens (Supplementary Fig. 2c). Intracellular glucose levels were generally higher than extracellular levels, consistent with the tendency of NBD glucose to accumulate inside cells (Supplementary Fig. 3a–c). Although cysts were much less cell-dense than attached non-cystic compartments, cystic and non-cystic compartments accumulated similar total levels of glucose, owing to the larger size of the cysts (Supplementary Fig. 3d). When PKD organoids loaded with NBD glucose were switched into media containing only unlabeled glucose (washout), NBD glucose disappeared rapidly from these structures (Fig. 3h and Supplementary Movie 7–8). Thus, organoids continuously accumulated and released glucose in a dynamic fashion.
Inhibition of glucose transport blocks cyst growth
In animal models, inhibitors of glucose transport are suggested to have both positive and negative effects in PKD39,40. To test functionally whether cyst growth is linked to glucose transport in human organoids, cyst expansion was quantified in increasing concentrations of D-glucose under static conditions (96-well plate). Growth was maximal at 15–30 mM glucose, causing ~50% increase in cyst expansion, relative to lower or higher concentrations (Fig. 4a, b and Supplementary Fig. 4a). Live/dead analysis of cysts treated with 60 mM glucose detected cytotoxicity, explaining the reduction in cyst growth at this higher concentration (Supplementary Fig. 4b–d).
Fig. 4: PKD cysts expand in response to glucose stimulation.
a Representative time lapse brightfield images and (b) quantification of change in cyst size in PKD organoids in static suspension cultures containing the indicated D-Glucose concentrations (mean ± s.e.m., n ≥ 6 pooled from four independent experiments, each dot indicates a single cyst). c Representative time lapse images and (d) quantification of PKD organoids in 15 mM D-Glucose treated with phloretin (mean ± s.e.m., n ≥ 10 cysts pooled from four independent experiments, p = 0.0231). e Quantification of maximum intensity projections of live/dead staining in organoids treated with phloretin (mean ± s.e.m., n ≥ 11, pooled from two independent experiments, each dot indicates a cystic organoid). f Images of live staining with Calcein AM (representative of three independent experiments). g Brightfield images and (h) quantification of size changes in cystic PKD organoids in 15 mM D-Glucose treated with probenecid (mean ± s.e.m., n ≥ 9 pooled from two independent experiments).
The preceding findings, together with the rapid turnover of glucose in organoids described above, suggested that inhibition of glucose import might enable export mechanisms to dominate, resulting in blockade or even reversal of cyst growth due to osmotic effects. To test this hypothesis, we examined the effects of pharmacological transport inhibitors on cysts in static conditions. Phloretin, a broad spectrum inhibitor of glucose uptake, was tested in 15 mM glucose, and found to decrease cyst size by 77% at a concentration of 800 μM (Fig. 4c, d and Supplementary Fig. 5a). Live-dead staining at 24 and 48 h of phloretin treatment revealed no significant toxicity (Fig. 4d, e and Supplementary Fig. 5b, c). Treatment with either phloridzin, a non-selective inhibitor of both SGLT1 and SGLT2, or with dapagliflozin, a specific inhibitor of SGLT2, reduced cyst growth to baseline at non-toxic doses, further supporting the hypothesis (Supplementary Fig. 5d, e). Net shrinkage of cysts was not observed with phloridzin or dapagliflozin, suggesting either decreased potency of these compounds relative to phloretin, or an off-target effect of phloretin beyond glucose transport that further reduces cyst size. In contrast to SGLT inhibitors, probenecid, an inhibitor of the OAT1 transporter on the basolateral membrane, had no effect on cyst growth compared to controls at non-toxic doses (Fig. 4f–h and Supplementary Fig. 5f). Overall, these findings supported the hypothesis that pharmacological inhibitors of glucose uptake block cyst expansion in the PKD organoid model.
Organoid cysts polarize outwards
Some previous studies have suggested that cyst expansion may be due to increased secretory (basolateral-to-apical) solute transport41,42,43,44. However, glucose transport in the proximal tubule is predominantly reabsorptive (apical-to-basolateral) rather than secretory. To better understand the directionality of transport within organoids, we determined the apicobasal polarity of tubules and cysts using antibodies against tight junctions and cilia. In both PKD and control organoids, the ciliated surface of these tubules faced inwards (Fig. 5a). Surprisingly, however, PKD cysts were polarized with the apical ciliated surface facing outwards towards the media and exposed to flow (Fig. 5a). Thus, the external cyst surface resembled the apical surface of a tubule in this system. Line scan analysis confirmed this inverted polarization, with primary cilia and tight junction intensity profiles reversed in organoids vs. cysts (Fig. 5b).
Fig. 5: PKD cysts form via expansion of outwards-facing epithelium.
a Confocal immunofluorescence images of cilia (acetylated α-tubulin, abbreviated AcT) and tight junctions (ZO-1) in proximal tubules (LTL) of PKD and non-PKD organoids, as well as in PKD cyst lining epithelial cells. Dashed arrow indicates how line scans were drawn. Images are representative of three independent experiments. b ZO1 and AcT intensity profiles in cysts vs. organoids. Line scans were drawn through cilia from lumen to exterior of structures. (mean ± s.e.m. from n = 5 line scans pooled from three organoids or cysts per condition from three independent experiments). c Fluorescent images of stromal markers in PKD organoids compared to human kidney tissue from a female patient 50 years of age with autosomal dominant PKD. Scale bars 20 µm. d Fluorescent images of cysts after having been overlaid with collagen. Images are representative of three independent experiments. e Z-stack confocal images of early (day 30) PKD organoid cyst in adherent culture. Zoom shows boxed region. White arrow indicates a podocyte cluster continuous with the peripheral epithelium. Images are representative of three independent experiments. f Close-up image showing peripheral epithelium of control (non-PKD) organoid in adherent culture. Yellow arrowhead indicates region of epithelial invagination. Images are representative of three independent experiments. g Phase contrast time-lapse images showing formation of PKD cysts from non-cystic structures in adherent cultures. Red arrows indicate tubular structures internal to the peripheral cyst. Images are representative of three independent experiments. h Schematic model of absorptive cyst expansion in organoids. Fluid flow (blue arrows) is absorbed into outwards-facing proximal tubular epithelium, which generates internal pressure that drives expansion and stretching of the epithelium (red arrows). A simplified organoid lacking podocytes or multiple nephron branches is shown for clarity.
Close examination of PKD organoid cysts revealed that a subpopulation of these contained a layer of cells expressing alpha smooth muscle actin immediately beneath the cyst-lining epithelium, which formed a laminin-rich basement membrane (Fig. 5c). In contrast, in human kidney tissue the basement membrane and myofibroblast-like cells surrounded cysts externally (Fig. 5c). Thus, apical cell polarity aligned opposite the basement membrane in both systems. Simple spheroids of Madin-Darby Canine Kidney cells in suspension culture polarize outwards, but can reverse apicobasal polarity from outwards to inwards when embedded in collagen34. When PKD cysts in organoids were overlaid with collagen, however, cyst polarity remained inverted and did not repolarize with the ciliated surface facing away from the extracellular matrix, indicating that organoid cyst polarity was deeply entrenched and governed by more dominant, internal cues (Fig. 5d).
The observation that cysts polarized outwards seemed counter-intuitive, as tubule structures in human kidney organoids typically polarize inwards, with tight junctions and apical markers abutting one another from diametrically opposed epithelia (as shown in Fig. 5a)17. To resolve this conundrum, we closely examined PKD organoids in three-dimensional confocal image z-stacks. Lotus tetragonolobus lectin (LTL), which is expressed more strongly in tubules than in cysts, was used to label the epithelium, while primary cilia and ZO-1 were used to indicate cell polarity. These experiments revealed that young cysts comprised epithelial spheroid structures (predominantly LTL+) with underlying tubular infolds, which faced inwards (Fig. 5e). We further examined organoids without cysts (controls) in confocal microscopy z-stacks. We noted that epithelium lining the periphery of these organoids faced outwards, whereas 'tubules' internal to organoids were invaginations of this peripheral epithelium (Fig. 5f and Supplementary Fig. 6a, b). The innermost regions of these invaginated tubules were enriched for ECAD, a marker of distal tubule, whereas the external peripheral epithelia were enriched for LTL, a marker of proximal tubule (Fig. 5f and Supplementary Fig. 6a). Thus, organoids constituted a continuous, proximal-to-distal epithelium, with the apical surface polarized outwards on the peripheral (more proximal) epithelium and inwards in the internal (more distal) epithelium of the structure.
To observe the process of cyst formation in real time, we collected time-lapse images of young PKD organoids undergoing cystogenesis over eight days in culture. Consistently, cysts formed at the periphery of the organoids (Fig. 5g, Supplementary Fig. 7a, b, and Supplementary Movie 9). During the early stages of cystogenesis, tubular structures remained visible inside the cysts as they expanded (Fig. 5g, Supplementary Fig. 7a, b, and Supplementary Movie 9). Thus, time-lapse imaging supported the idea that cysts formed from the peripheral epithelium of the organoids that faced outwards towards the media, rather than from the internal tubular invaginations, which tended to stay anchored (Fig. 5h). This was consistent with an absorptive mechanism mediated by the peripheral epithelium.
Absorptive cysts form in vivo
It is important to understand how these findings in organoids might relate to PKD cyst formation in vivo, where cyst-lining epithelia face inwards rather than outwards. Microcysts smaller than 1 mm diameter and undetectable by magnetic resonance imaging are numerous in kidney sections from patients with early stages of PKD, and are proposed to form as focal outpouchings of tubular epithelium45,46. If such an outpouching remained connected to a small segment of the original tubule via apical junctions, it could accumulate fluid through tubular reabsorption. The preceding suggested a possible model for cyst formation in vivo (Fig. 6a). Absorption of glucose through the apical surface of the tubular epithelium is followed by water along the osmotic gradient via paracellular or transcellular routes to maintain balanced concentrations on either side of the epithelium. This absorptive activity lacks an appropriate outlet, creating pressure within the interstitium and causing it to detach from neighboring tubules, which undergo deformation and expansion to fill the resultant interstitial space. This process continues as the cyst grows, and may be exacerbated by the gradual loss or detachment of associated peritubular capillaries (which reduces the absorptive sink), and by growth of interstitial mesenchymal stromal cells, which provide a scaffold and synthesize extracellular matrix to accommodate the expanding epithelium.
Fig. 6: PKD cysts in vivo absorb glucose into the surrounding interstitium.
a Hypothetical schematic of absorptive cyst formation in kidney tissue. Fluid (blue arrows) is absorbed through proximal tubules into the underlying interstitium, which partially detaches from the epithelium. The tubules then expand and deform to fill the interstitial space, reaching a low-energy conformation in which the withheld volume is ultimately transferred back into the luminal space of the nascent microcyst. A simplified model is shown and represents one possible explanation of the findings. b PAS stains of 2-month-old and 6-month-old Pkd1RC/RC mice (C57BL/6 J background). Scale bars 50 µm. Images are representative of 4 animals per condition (two male and two female). c Confocal images of stromal basement membrane (LAMA1) with cilia (AcT) or (d) endothelial cells (CD31) in Pkd1RC/RC versus control (Pkd1+/+) 2-month-old mice. All mice were C57BL/6 J background. Yellow arrowheads indicate areas of detached or expanded interstitium surrounding the cyst. Images are representative of four animals per condition (two male and two female). e Schematic of glucose uptake assay, illustrated using Biorender software under license. f Representative images and (g) line scan analysis of PKD cysts after perfusion with fluorescent NBD glucose or unlabeled PBS control (mean ± s.e.m., n ≥ 17 cysts per condition pooled from a total of three female and two male Pkd1RC/RC mice of C57BL/6J background). Dashed magenta arrows indicate how line scans were drawn.
To investigate the plausibility of such a mechanism in vivo, we analyzed microcysts in the Pkd1RC/RC mouse strain, which has a hypomorphic Pkd1 gene mutation orthologous to patient disease variant PKD1 p. R3277C, and manifests a slowly progressive PKD during adulthood over a period of several months47,48. Histology sections and confocal images of 2-month-old mouse tissue revealed continuous basement membranes between tubules and microcysts, consistent with the possibility that microcysts form from tubular outpouchings that remain capable of absorption through the wall of the neighboring tubule (Fig. 6b, c). While these microcysts largely remained tightly associated with peritubular capillaries, suggesting that they continue to reabsorb, portions of the epithelium appeared to have detached from the endothelium, resulting in areas of fluid accumulation or interstitial expansion (Fig. 6b–d).
To determine whether PKD cysts absorbed glucose in vivo, we devised a methodology to inject mice with NBD glucose and immediately retrieve their kidneys (Fig. 6e). Fluorescence microscopy analysis of kidney tissue sections revealed that cyst-lining epithelia and the surrounding interstitium readily took up NBD glucose (Fig. 6f–g). Thus, cysts remained absorptive in vivo and PKD kidneys as a whole readily accumulated glucose.
Coupling the structural and functional characteristics of organoids with the controlled, microfluidic microenvironments of organ-on-a-chip devices is a promising approach to in vitro disease modeling28. Our study combines CRISPR-Cas9 gene editing to reconstitute disease phenotype with organoid-on-a-chip technology to understand the effect of flow, which is difficult to assess in vivo (where it is constant) and has hitherto been absent from kidney organoid models at physiological strength. The 'human kidney organoid on a chip' microphysiological system described here incorporates organoids with PKD mutations in a wide-channel format, which allows liquid to flow over the organoids, similar in geometry to other recently described organoid flow systems29,30. At the core of this system are human organoids that strikingly recapitulate the genotype-phenotype correlation in PKD. This is fundamentally different from other types of generic spheroids that form in vitro as a default configuration of the epithelium. While certain aspects of the organoid system differ from in vivo, we do not see a plausible explanation wherein the genotype-phenotype correlation is preserved, but the entire system is somehow irrelevant or opposite to the fundamental mechanism of PKD. Rather, the system is teaching us which aspects of PKD are most important for the phenotype. The system can be readily assembled from commercially available components, and produces a shear stress within the physiological range found in human kidney tubules27,36,37,38. This is ~6-fold greater than the maximum rate of 0.035 dyn/cm2 used in a previous kidney organoid-on-a-chip device, a shear stress that was nevertheless sufficient to stimulate expansion of vasculature within the device when compared to static conditions29, and to induce dilation of tubular structures derived from hPSC with mutations associated with autosomal recessive PKD (ARPKD)49.
The physiological relevance of such low flow rates is not clear, and the cohort of ARPKD cell lines that was studied includes hPSC previously generated by our laboratory that we found to lack definitive ARPKD mutations50. It is nevertheless interesting and encouraging that flow over the organoids was capable of inducing swelling in both systems.
Importantly, we have also developed a static module using the same basic chip that is capable of natural diffusion from a syringe reservoir. This enables us to distinguish the effects of flow from those of exposure to fluid volume and mass of reabsorbable solute, which is difficult to achieve in conventional systems with limited diffusion, such as a chamber connected tightly to a Luer lock syringe. Our discovery that volume can partially substitute for flow is reminiscent of a recent study in which immersion in >100-fold volumes induced three-dimensional morphogenesis of intestinal epithelial cells similar to flow51. In contrast, increased volume was unable to substitute for flow in the aforementioned study of endothelial expansion in kidney organoid cultures. This may reflect a sensitivity of vascular cells to fluid shear stress, or alternatively the limited volumes possible in closed loop systems29. In addition to volume, hydrostatic pressure is increased in our diffusive static condition, which may play a role in PKD phenotype52. Of note, cysts in our diffusive static condition did not exhibit the dramatic oscillations in size observed under flow, indicating roles for flow-induced mechanoregulation that cannot be readily replicated by diffusion effects, for instance involving stretch-activated ion channels.
Our findings indicate that flow, volume, and solute concentrations are positive regulators of cyst expansion. Cystogenesis can be enhanced through mechanisms of tubular absorption and glucose transport. A limitation of these systems is that the perfusion passes over the organoids, rather than through them as it does through tubules in vivo. However, as peripheral epithelia in our organoids face outwards towards the media, the net result is for the apical surface to be in contact with the directional flow, similar to the epithelium of a tubule in vivo. This fortuitously enables us to assess reabsorptive function, the primary characteristic of the kidney tubular network, which fluxes ~180 L through its apical surface every day. In this regard, the arrangement in the organoid system may have greater functional relevance than spheroid systems in which cyst polarity faces inwards but the liquid is trapped inside with no possibility of perfusion (unlike the arrangement in the kidneys).
The observation that PKD cysts can form inside-out, such that the secretion (basolateral-to-apical transport) would occur in the opposite direction from cysts in vivo, argues against secretion as the critical driver of cystogenesis in this system43. Our experiments in animals also demonstrate that kidney cysts remain reabsorptive even in advanced PKD. In our studies in vivo, we also made the interesting discovery that the tubular epithelium detaches focally from the underlying interstitium during pre-cystic stages of disease, which may reflect the consequences of a possible absorptive phenotype. Studies of PKD in living animals, however, carry significant constraints for studying mechanism. Kidneys are concealed within the body, preventing detailed time-lapse microscopy, and perturbing renal absorption is experimentally challenging and causes complex side effects. Demonstrating glucose absorption in cystic kidneys in vivo, and showing interstitial detachment, as we have done, required significant methods development and careful analysis. Further methods development and more detailed studies are required to causally link absorption, interstitial detachment, and cyst formation in vivo. Nevertheless, it is clear that renal cysts can continue to absorb glucose, even in vivo, and in organoids, glucose absorption is linked to the PKD phenotype, which is demonstrably specific to the genotype and thus mechanistically relevant.
These findings are consistent with macropuncture studies showing that wall pressures inside PKD cysts in vivo resemble those of their originating nephron segments, and studies of excised cysts in vitro, which demonstrate that the epithelium is slowly expanding and absorptive under steady-state conditions44,45. In a more recent clinical analysis, patients with ADPKD demonstrated lower excretion of renally secreted solutes, rather than higher levels of secretion53. Drugs that activate CFTR, which is hypothesized to drive a secretory phenotype in PKD, have shown promise in treating PKD in mice, rather than exacerbating the disease, which is also inconsistent with a secretory hypothesis54. Indeed, a phenotype related to absorption is a much more natural fit for the specialized properties of kidney epithelia (which are predominantly absorptive) than secretion. This is not to say that secretion cannot be a causative mechanism in PKD cystogenesis, but rather that absorption can also play a critical role. In our model, absorption of fluid into the interstitium creates space for epithelia to expand and fill. During this process of expansion and space filling by the epithelium, which is triggered by changes within the microenvironment surrounding the tubules, it is conceivable that secretory processes play a role.
Previously, we observed that transfer of PKD organoids from adherent cultures into suspension cultures was associated with dramatically increased rates of cystogenesis16. Our current findings add greatly to our understanding of this phenomenon. Upon release from the underlying substratum, the peripheral organoid epithelium grows out and envelops the rest of the organoid16. This forms an enclosed, outwards-facing structure in an ideal conformation to absorb fluid from the surrounding media and expand into a cyst. Although we did not detect differences in the levels of SGLT2, differences may exist in SGLT2 activity, or in the levels or activity of other transporters involved in absorption, resulting in increased absorptive flux in PKD epithelia, compared to non-PKD. Alternatively, there might exist a difference in the pliability of PKD epithelia versus non-PKD epithelia undergoing equivalent levels of absorptive flux. We note that polycystin-2 is a non-selective cation channel expressed at the apical plasma membrane9,10, which could conceivably play a role in transporter function and reabsorption. The polycystin complex may also possess force- or pressure-sensitive mechanoreceptor properties, which could regulate the epithelial response to fluid influx4,5,7,8,22,52.
Although we favor a direct role for glucose absorption in driving cyst expansion, glucose transport could also function independently of water transport to impact cyst formation, for instance by altering mitochondrial metabolism or signaling changes to the actin cytoskeleton, which could promote cystogenesis regardless of which direction the cells face55,56,57,58. Of note, cysts form not only in the proximal tubules that are primarily responsible for glucose reabsorption, but also in the collecting ducts, where they can reach very large sizes. As cysts can and must originate from these very different epithelial cell types, the process of cystogenesis is not likely to be explained by a simple absorption/secretion ratio for any one solute. One goal for future development of our PKD organoid system is to incorporate collecting ducts, as this lineage is important to PKD cystogenesis but does not mature in human kidney organoid cultures17,59,60.
A limitation of the current system is that the organoid phenotype is limited to biallelic mutants, in which disease processes are greatly accelerated61,62,63. In contrast, germline mutations in PKD patients are monoallelic, and phenotypes take decades to develop, likely due to the necessity of developing 'second hit' somatic mutations in the second allele64,65. The current system involving biallelic mutants may more closely phenocopy early-onset autosomal recessive PKD than late-onset autosomal dominant PKD, which should be considered when extrapolating these findings into a clinical context16. Generation of well-controlled allelic series of PKD organoids, together with methodologies to model the acquisition of somatic mutations, may ultimately produce human organoid models with greater fidelity to autosomal dominant PKD.
Canagliflozin (Invokana), an inhibitor of SGLT2, has recently been approved for the treatment of type 2 diabetes, and appears to have a protective effect in the kidneys66,67. SGLT inhibitors have not yet been tried in patients with PKD. Our findings suggest that blocking SGLT activity could reduce proximal tubule cysts by preventing glucose reabsorption. However, this would also expose the collecting ducts downstream to higher glucose concentrations. Indeed, it was previously suggested that inhibition of glucose transport reduces PKD in the Han:SPRD rat because its cysts originate from proximal tubules, whereas the same treatments in the PCK rat worsen PKD because its cysts originate in more distal nephron segments39,40. Caution must therefore be exercised when considering how to conduct human clinical trials for PKD with SGLT inhibitors.
In summary, we have developed a microfluidic kidney organoid module that enables detailed studies of renal tubular absorption and PKD cyst growth. The cyst-lining epithelium in this system is exposed to flow in a mirror image of the nephron structure in vivo. Using this system, we have identified glucose levels and its transport into cyst structures as a driver of cystic expansion in proximal nephron-like structures. Therapeutics that modulate reabsorption may therefore be beneficial in reducing cyst growth in specific nephron segments, with relevance for future PKD clinical trials4,66.
This research complied with all relevant ethical regulations. Human PKD kidney tissue (nephrectomy) was obtained with informed consent under a human subjects protocol approved by the University of Washington Institutional Review Board. No compensation was provided to study participants.
Kidney organoid differentiation
Work with hPSC was performed under the approval and auspices of the University of Washington Embryonic Stem Cell Research Oversight Committee. Specific cell lines used in this study are described below and are sourced from commercially available hPSC obtained with informed consent. hPSC stocks were maintained in mTeSR1 media with daily media changes and weekly passaging using Accutase or ReLeSR (STEMCELL Technologies, Vancouver). 5,000–20,000 hPSCs were plated per well of 24-well plates pre-coated with 300 µL of DMEM-F12 containing 0.2 mg/mL Matrigel, and sandwiched the following day with 0.2 mg/mL Matrigel in mTeSR1 (STEMCELL Technologies, Vancouver) to produce scattered, isolated spheroid colonies. 48 hrs after sandwiching, hPSC spheroids were treated with 12 μM CHIR99021 (Tocris Bioscience) for 36 h, then changed to RB (Advanced RPMI + 1X Glutamax + 1X B27 Supplement, all from Thermo Fisher Scientific) after 48 h, and replaced with fresh RB every 3 days thereafter.
Organoid perfusion in microfluidic chip
Ibidi μ-Slide VI 0.4 chips were coated with 3.0% Reduced Growth Factor Geltrex (Life Technologies) and left at 37 °C overnight to solidify. Kidney organoids (21–40 d) were picked from adherent culture plates, pipetted into the slide channels (2–3 per channel) with RB, and left for 24 hrs at 37 °C to attach. Organoids were distributed randomly within the channel. For the fluidic condition, 60 mL syringes filled with RB were attached to channels using clear tubing (Cole-Parmer, 0.02'' ID, 0.083'' OD). A clamp was used to close off the tubing, and the media in the syringe was changed to 25 mL RB + 36.5 μM 2-NBD-Glucose fluorescent glucose (Abcam ab146200). A Harvard Apparatus syringe infusion pump was used to direct media flow into the microfluidic chip at 160 μL/min (0.2 dynes/cm2). Media was collected at the outlet and filtered for repeated use. For the static condition, a 25 mL syringe containing RB was attached to the channel using wide clear tubing (Cole-Parmer, 0.125'' ID, 0.188'' OD). The syringe was detached momentarily, the plunger removed, and the open syringe reattached and filled slowly with 25 mL RB + 36.5 μM 2-NBD-Glucose. From this point on, the fluorescent glucose diffused from the open syringe into the channel via the tubing. Alternatively, NBD-glucose was substituted with food dye (invert sugar, 360 g/mol), or the organoids were perfused with media in the absence of any additives.
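As a sanity check, the quoted wall shear stress can be estimated from the manufacturer's linear approximation for this channel geometry, τ ≈ η · k · Φ (τ in dyn/cm², η in dyn·s/cm², Φ in mL/min). The channel factor k ≈ 176.1 and the media viscosity used below are assumptions taken from ibidi's published application note and typical culture media at 37 °C, not values reported in this study:

```python
# Hedged sketch: estimate wall shear stress in an ibidi mu-Slide VI 0.4 channel.
# Assumed values (not from this study):
#   - channel geometry factor ~176.1 (ibidi application note approximation)
#   - media viscosity ~0.72 mPa*s at 37 C = 0.0072 dyn*s/cm^2
ETA = 0.0072            # dynamic viscosity, dyn*s/cm^2 (assumed)
CHANNEL_FACTOR = 176.1  # mu-Slide VI 0.4 geometry factor (assumed)

def shear_stress_dyn_cm2(flow_ul_per_min):
    """Approximate wall shear stress (dyn/cm^2) for a flow rate in uL/min."""
    flow_ml_per_min = flow_ul_per_min / 1000.0
    return ETA * CHANNEL_FACTOR * flow_ml_per_min

tau = shear_stress_dyn_cm2(160)  # perfusion rate used above
print(round(tau, 2))  # ~0.2 dyn/cm^2, consistent with the quoted value
```

Under these assumptions, the 160 μL/min flow rate reproduces the stated 0.2 dynes/cm² shear stress.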
Image/video collection
Image collection was performed on a Nikon Ti Live-Cell Inverted Widefield microscope inside an incubated live imaging chamber supplemented with 5% carbon dioxide. Experiments in microfluidic devices were recorded for 6 h. During this time, cysts changed in volume (grew and shrank) and in some cases were destroyed due to bubbles arising in the tubing. Cyst growth rate in microfluidic devices was therefore calculated on an individual basis, when each cyst reached its maximal volume, which varied for each sample from 1 h to 5 h after the start of the experiment. For longer-term experiments conducted in static 96-well cultures, organoids were imaged at regular intervals (typically 24 h) and analyzed at the endpoint indicated in the figure graphs. Phase contrast and GFP (200 ms exposure) images were taken every 5 min for a maximum of 12 h. Images of fixed samples were collected on a Nikon A1R point scanning confocal microscope.
Mice
Kidney tissue from Pkd1RC/RC mice maintained on a C57BL/6J background (gift of the Mayo Clinic Translational PKD Center) and from C57BL/6J controls was utilized. To investigate the process of cystogenesis, younger Pkd1RC/RC mice (6–7 weeks of age) were used, along with wild-type C57BL/6J mice of the same age. Kidneys were harvested after systemic perfusion with ice-cold PBS, followed by fixation with paraformaldehyde fixative and immersion in 18–30% sucrose at 4 °C overnight. Tissues were embedded and frozen in optimal cutting temperature compound (OCT, Sakura Finetek, Torrance, CA). Cryostat-cut mouse kidney sections (5–10 μm) were stained for acetylated α-tubulin, laminin-1, and CD31 (see "Immunostaining" for primary antibodies and dilutions).
For perfusion experiments, NBD Glucose was freshly dissolved in PBS to a concentration of 1 mM. Freshly sacrificed Pkd1RC/RC mice (>8 months old) were incised through the chest and nicked at the vena cava with a 27-gauge needle. Keeping pressure on the vena cava, mice were perfused systemically through the heart with a syringe containing 10 ml of PBS, followed by a second syringe containing 5 ml of either PBS alone (control) or PBS + 1 mM NBD-Glucose. Kidneys were harvested immediately and embedded fresh without fixation or sucrose equilibration in OCT. Cryostat-cut mouse kidney sections (20 μm) were mounted in OCT and imaged on a confocal microscope with 10X objective. All animal studies were conducted in accordance with all relevant ethical regulations under protocols approved by the Institutional Animal Care and Use Committee at the University of Washington in Seattle. Mice were maintained on a standard diet under standard pathogen-free housing conditions, with food and water freely available.
Immunostaining
Immunostaining followed by confocal microscopy was used to localize various proteins and transporters in the cysts and organoids. Prior to staining, an equal volume of 8% paraformaldehyde was added to the culture media (4% final concentration) for 15 mins at room temperature. After fixing, samples were washed in PBS, blocked in 5% donkey serum (Millipore)/0.3% Triton-X-100/PBS, incubated overnight in 1% bovine serum albumin/0.3% Triton-X-100/10μM CaCl2/PBS with primary antibodies, washed, incubated with Alexa-Fluor secondary antibodies (Invitrogen), washed and imaged. Primary antibodies or labels include acetylated α-tubulin (Sigma T7451, 1:5000), ZO-1 (Invitrogen 61-7300, 1:200), Biotinylated LTL (Vector Labs B-1325, 1:500), E-Cadherin (Abcam ab11512, 1:500), SGLT2 (Abcam ab37296, 1:100), laminin-1 (Sigma L9393, 1:50), alpha smooth muscle actin (Sigma A2547, 1:500), CD31 (BD Biosciences 557355, 1:300). Fluorescence images were captured using a Nikon A1R inverted confocal microscope with objectives ranging from 10X to 60X.
Statistics and reproducibility
Experiments were performed using a cohort of PKD hPSC, previously generated and characterized, including three PKD2−/− hPSC lines and three isogenic control lines that were subjected to CRISPR mutagenesis (gRNA CGTGGAGCCGCGATAACCC) but were found to be unmodified at the targeted locus by Sanger sequencing of each allele and immunoblot16,17. Altogether these represented two distinct genetic backgrounds, genders, and cell types: (i) male WTC11 iPS cells (Coriell Institute Biobank, GM25256, two isogenic pairs) and (ii) female H9 ES cells (WiCell, Madison, Wisconsin, WA09, one isogenic pair). Quantification was performed on data obtained from experiments performed on controls and treatment conditions side by side on at least three different occasions or cell lines (biological replicates). Error bars represent mean ± standard error of the mean (s.e.m.). Statistical analyses were performed using GraphPad Prism software. To test significance, p-values were calculated using two-tailed, unpaired or paired t-tests (as appropriate to the experiment) with Welch's correction (unequal variances). For multiple comparisons, standard ANOVA was used. Statistical significance was defined as p < 0.05. Exact or approximate p-values are provided in the figure legends for experiments that showed statistical significance. For traces of cysts over time, a least squares regression model was applied to fit the data to lines in GraphPad Prism. Line scans of equal length were averaged from multiple images and structures based on raw data intensity values in the GFP channel. Lines were drawn transecting representative regions of each structure (e.g. avoiding heterogeneities, brightness artifacts, or areas where cysts and organoids overlapped), placed such that the first half of each line represented the background in the image. The intensity of each point (pixel) along the line was then averaged for all of the lines, producing an averaged line scan with error measurements.
Arrows are provided in representative images showing the direction and length of the line scans used to quantify the data. Unless otherwise noted, raw intensity values (bytes per pixel) were used without background subtraction.
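The line-scan averaging described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' analysis code; the array names and toy data are hypothetical, and it assumes all scans are of equal length, as stated:

```python
import numpy as np

def average_line_scan(scans):
    """Average multiple equal-length intensity line scans.

    scans: 2D array-like, one row per line scan (raw GFP intensity values).
    Returns the per-pixel mean and standard error (s.e.m.) along the line.
    """
    scans = np.asarray(scans, dtype=float)
    mean = scans.mean(axis=0)
    sem = scans.std(axis=0, ddof=1) / np.sqrt(scans.shape[0])
    return mean, sem

# Toy example: three 5-pixel scans, where the first half of each line
# represents image background (as in the placement rule described above).
scans = [[10, 10, 50, 80, 80],
         [12, 11, 52, 78, 82],
         [11, 12, 48, 82, 81]]
mean, sem = average_line_scan(scans)
print(mean[0], mean[-1])  # background level vs. structure intensity
```

The averaged trace and its error band correspond to the "averaged line scan with error measurements" plotted in the figures.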
Hydrostatic pressure calculation
The following calculation was performed:
$$\mathrm{Pressure}=\rho g h=\left(997\,\frac{kg}{m^{3}}\right)\left(9.81\,\frac{m}{s^{2}}\right)\left(x\,m\right)=\mathrm{Pressure}\left(\frac{kg}{m\cdot s^{2}}\right)$$
The height from channel to top of media in reservoir was measured to be:
Static 1 mL: ~12 cm
Static 25 mL: ~20 cm
Therefore, the calculation for each of these conditions was:
$$\mathrm{Pressure}_{1\,mL}=\rho g h=\left(997\,\frac{kg}{m^{3}}\right)\left(9.81\,\frac{m}{s^{2}}\right)\left(0.12\,m\right)=1173.7\left(\frac{kg}{m\cdot s^{2}}\right)\left(\frac{mmHg}{133.32\,Pa}\right)=8.8\,mmHg$$
$$\mathrm{Pressure}_{25\,mL}=\rho g h=\left(997\,\frac{kg}{m^{3}}\right)\left(9.81\,\frac{m}{s^{2}}\right)\left(0.20\,m\right)=1956.1\left(\frac{kg}{m\cdot s^{2}}\right)\left(\frac{mmHg}{133.32\,Pa}\right)=14.7\,mmHg$$
This amounted to a total difference in pressure of (14.7 − 8.8 = 5.9) mmHg.
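The arithmetic above can be reproduced in a few lines of Python. This is only a sketch for checking the numbers; the density, reservoir heights, and Pa-to-mmHg conversion are taken directly from the text:

```python
RHO = 997.0          # water density, kg/m^3 (from the text)
G = 9.81             # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.32 # unit conversion used above

def hydrostatic_pressure_mmHg(height_m):
    """Pressure = rho * g * h, converted from Pa to mmHg."""
    return RHO * G * height_m / PA_PER_MMHG

p_1ml = hydrostatic_pressure_mmHg(0.12)   # static 1 mL reservoir, ~12 cm
p_25ml = hydrostatic_pressure_mmHg(0.20)  # static 25 mL reservoir, ~20 cm
print(round(p_1ml, 1), round(p_25ml, 1), round(p_25ml - p_1ml, 1))
# → 8.8 14.7 5.9
```

Running this confirms the reported pressures of 8.8 and 14.7 mmHg and their 5.9 mmHg difference.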
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The main data supporting the results in this study are available within the paper and its supplementary information. The raw and analysed datasets generated during the study are too large and complex to be publicly shared (numerous cell lines, replicates, images, blots, and experiments, maintained and analysed in specialized file formats and with unique identifiers). Datapoints are shown as dots in the plots provided in this paper and the Supplement. All datasets, including raw data and statistical analysis, are available upon reasonable request from the corresponding author. PKD mutant cell lines used in this study may be obtained from the corresponding author upon request and in accordance with material transfer agreements from the University of Washington and any third-party originating sources. Source data are provided with this paper.
References
Mochizuki, T. et al. PKD2, a gene for polycystic kidney disease that encodes an integral membrane protein. Science 272, 1339–1342 (1996).
The polycystic kidney disease 1 gene encodes a 14 kb transcript and lies within a duplicated region on chromosome 16. The European Polycystic Kidney Disease Consortium. Cell 78, 725 (1994).
Groopman, E. E. et al. Diagnostic utility of exome sequencing for kidney disease. N. Engl. J. Med. 380, 142–151 (2019).
Torres, V. E., Gansevoort, R. T. & Czerwiec, F. S. Tolvaptan in later-stage polycystic kidney disease. N. Engl. J. Med. 378, 489–490 (2018).
Torres, V. E., Gansevoort, R. T. & Czerwiec, F. S. Tolvaptan in autosomal dominant polycystic kidney disease. N. Engl. J. Med. 368, 1259 (2013).
Praetorius, H. A. & Spring, K. R. Bending the MDCK cell primary cilium increases intracellular calcium. J. Membr. Biol. 184, 71–79 (2001).
Nauli, S. M. et al. Polycystins 1 and 2 mediate mechanosensation in the primary cilium of kidney cells. Nat. Genet. 33, 129–137 (2003).
Nauli, S. M. et al. Loss of polycystin-1 in human cyst-lining epithelia leads to ciliary dysfunction. J. Am. Soc. Nephrol. 17, 1015–1025 (2006).
DeCaen, P. G., Delling, M., Vien, T. N. & Clapham, D. E. Direct recording and molecular identification of the calcium channel of primary cilia. Nature 504, 315–318 (2013).
Delling, M., DeCaen, P. G., Doerner, J. F., Febvay, S. & Clapham, D. E. Primary cilia are specialized calcium signalling organelles. Nature 504, 311–314 (2013).
Delling, M. et al. Primary cilia are not calcium-responsive mechanosensors. Nature 531, 656–660 (2016).
Wilson, P. D. Polycystic kidney disease. N. Engl. J. Med. 350, 151–164 (2004).
Du, J. & Wilson, P. D. Abnormal polarization of EGF receptors and autocrine stimulation of cyst epithelial growth in human ADPKD. Am. J. Physiol. 269(2 Pt 1), C487–C495 (1995).
Charron, A. J., Nakamura, S., Bacallao, R. & Wandinger-Ness, A. Compromised cytoarchitecture and polarized trafficking in autosomal dominant polycystic kidney disease cells. J. Cell Biol. 149, 111–124 (2000).
Freedman, B. S. Modeling kidney disease with iPS cells. Biomark. Insights 10(Suppl 1), 153–169 (2015).
Cruz, N. M. et al. Organoid cystogenesis reveals a critical role of microenvironment in human polycystic kidney disease. Nat. Mater. 16, 1112–1119 (2017).
Freedman, B. S. et al. Modelling kidney disease with CRISPR-mutant kidney organoids derived from human pluripotent epiblast spheroids. Nat. Commun. 6, 8715 (2015).
Taguchi, A. et al. Redefining the in vivo origin of metanephric nephron progenitors enables generation of complex kidney structures from pluripotent stem cells. Cell Stem Cell 14, 53–67 (2014).
Takasato, M. et al. Kidney organoids from human iPS cells contain multiple lineages and model human nephrogenesis. Nature 526, 564–568 (2015).
Morizane, R. et al. Nephron organoids derived from human pluripotent stem cells model kidney development and injury. Nat. Biotechnol. 33, 1193–1200 (2015).
Garreta, E. et al. Fine tuning the extracellular environment accelerates the derivation of kidney organoids from human pluripotent stem cells. Nat. Mater. 18, 397–405 (2019).
Schrier, R. W. et al. Blood pressure in early autosomal dominant polycystic kidney disease. N. Engl. J. Med. 371, 2255–2266 (2014).
Ligresti, G. et al. A novel three-dimensional human peritubular microvascular system. J. Am. Soc. Nephrol. 27, 2370–2381 (2016).
Vernetti, L. et al. Functional coupling of human microphysiology systems: intestine, liver, kidney proximal tubule, blood-brain barrier and skeletal muscle. Sci. Rep. 7, 42296 (2017).
Weber, E. J. et al. Development of a microphysiological model of human kidney proximal tubule function. Kidney Int. 90, 627–637 (2016).
Homan, K. A. et al. Bioprinting of 3D convoluted renal proximal tubules on perfusable chips. Sci. Rep. 6, 34845 (2016).
Jang, K. J. et al. Human kidney proximal tubule-on-a-chip for drug transport and nephrotoxicity assessment. Integr. Biol. (Camb.) 5, 1119–1129 (2013).
Park, S. E., Georgescu, A. & Huh, D. Organoids-on-a-chip. Science 364, 960–965 (2019).
Homan, K. A. et al. Flow-enhanced vascularization and maturation of kidney organoids in vitro. Nat. Methods 16, 255–262 (2019).
Karzbrun, E., Kshirsagar, A., Cohen, S. R., Hanna, J. H. & Reiner, O. Human Brain Organoids on a Chip Reveal the Physics of Folding. Nat. Phys. 14, 515–522 (2018).
Phan, D. T. T. et al. A vascularized and perfused organ-on-a-chip platform for large-scale drug screening applications. Lab Chip 17, 511–520 (2017).
Li, M. & Izpisua Belmonte, J. C. Organoids—preclinical models of human disease. N. Engl. J. Med. 380, 569–579 (2019).
McAteer, J. A., Evan, A. P. & Gardner, K. D. Morphogenetic clonal growth of kidney epithelial cell line MDCK. Anat. Rec. 217, 229–239 (1987).
Wang, A. Z., Ojakian, G. K. & Nelson, W. J. Steps in the morphogenesis of a polarized epithelium. II. Disassembly and assembly of plasma membrane domains during reversal of epithelial cell polarity in multicellular epithelial (MDCK) cysts. J. Cell Sci. 95(Pt 1), 153–165 (1990).
Neufeld, T. K. et al. In vitro formation and expansion of cysts derived from human renal cortex epithelial cells. Kidney Int. 41, 1222–1236 (1992).
Duan, Y. et al. Shear-induced reorganization of renal proximal tubule cell actin cytoskeleton and apical junctional complexes. Proc. Natl Acad. Sci. USA 105, 11418–11423 (2008).
Essig, M., Terzi, F., Burtin, M. & Friedlander, G. Mechanical strains induced by tubular flow affect the phenotype of proximal tubular cells. Am. J. Physiol. Ren. Physiol. 281, F751–F762 (2001).
Ferrell, N. et al. A microfluidic bioreactor with integrated transepithelial electrical resistance (TEER) measurement electrodes for evaluation of renal epithelial cells. Biotechnol. Bioeng. 107, 707–716 (2010).
Wang, X. et al. Targeting of sodium-glucose cotransporters with phlorizin inhibits polycystic kidney disease progression in Han:SPRD rats. Kidney Int. 84, 962–968 (2013).
Kapoor, S. et al. Effect of sodium-glucose cotransport inhibition on polycystic kidney disease progression in PCK rats. PLoS One 10, e0125603 (2015).
Reif, G. A. et al. Tolvaptan inhibits ERK-dependent cell proliferation, Cl(−) secretion, and in vitro cyst growth of human ADPKD cells stimulated by vasopressin. Am. J. Physiol. Ren. Physiol. 301, F1005–F1013 (2011).
Grantham, J. J. et al. Chemical modification of cell proliferation and fluid secretion in renal cysts. Kidney Int. 35, 1379–1389 (1989).
Magenheimer, B. S. et al. Early embryonic renal tubules of wild-type and polycystic kidney disease kidneys respond to cAMP stimulation with cystic fibrosis transmembrane conductance regulator/Na(+),K(+),2Cl(−) Co-transporter-dependent cystic dilation. J. Am. Soc. Nephrol. 17, 3424–3437 (2006).
Grantham, J. J., Ye, M., Gattone, V. H. 2nd & Sullivan, L. P. In vitro fluid secretion by epithelium from polycystic kidneys. J. Clin. Invest. 95, 195–202 (1995).
Huseman, R., Grady, A., Welling, D. & Grantham, J. Macropuncture study of polycystic disease in adult human kidneys. Kidney Int 18, 375–385 (1980).
Grantham, J. J. et al. Detected renal cysts are tips of the iceberg in adults with ADPKD. Clin. J. Am. Soc. Nephrol. 7, 1087–1093 (2012).
Hopp, K. et al. Functional polycystin-1 dosage governs autosomal dominant polycystic kidney disease severity. J. Clin. Invest. 122, 4257–4273 (2012).
Hopp, K. et al. Tolvaptan plus pasireotide shows enhanced efficacy in a PKD1 model. J. Am. Soc. Nephrol. 26, 39–47 (2015).
Hiratsuka, K. et al. Organoid-on-a-chip model of human ARPKD reveals mechanosensing pathomechanisms for drug discovery. Sci. Adv. 8, eabq0866 (2022).
Freedman, B. S. et al. Reduced ciliary polycystin-2 in induced pluripotent stem cells from polycystic kidney disease patients with PKD1 mutations. J. Am. Soc. Nephrol. 24, 1571–1586 (2013).
Shin, W., Hinojosa, C. D., Ingber, D. E. & Kim, H. J. Human intestinal morphogenesis controlled by transepithelial morphogen gradient and flow-dependent physical cues in a microengineered gut-on-a-chip. iScience 15, 391–406 (2019).
Sharif-Naeini, R. et al. Polycystin-1 and -2 dosage regulates pressure sensing. Cell 139, 587–596 (2009).
Wang, K. et al. Alterations of proximal tubular secretion in autosomal dominant polycystic kidney disease. Clin. J. Am. Soc. Nephrol. 15, 80–88 (2020).
Yanda, M. K., Liu, Q. & Cebotaru, L. A potential strategy for reducing cysts in autosomal dominant polycystic kidney disease with a CFTR corrector. J. Biol. Chem. 293, 11513–11526 (2018).
Padovano, V., Podrini, C., Boletta, A. & Caplan, M. J. Metabolism and mitochondria in polycystic kidney disease research and therapy. Nat. Rev. Nephrol. 14, 678–687 (2018).
Rowe, I. et al. Defective glucose metabolism in polycystic kidney disease identifies a new therapeutic strategy. Nat. Med. 19, 488–493 (2013).
Natoli, T. A. et al. Inhibition of glucosylceramide accumulation results in effective blockade of polycystic kidney disease in mouse models. Nat. Med. 16, 788–792 (2010).
Yao, G. et al. Polycystin-1 regulates actin cytoskeleton organization and directional cell migration through a novel PC1-Pacsin 2-N-Wasp complex. Hum. Mol. Genet. 23, 2769–2779 (2014).
Taguchi, A. & Nishinakamura, R. Higher-order kidney organogenesis from pluripotent stem cells. Cell Stem Cell 21, 730–746 e736 (2017).
Czerniecki, S. M. et al. High-throughput screening enhances kidney organoid differentiation from human pluripotent stem cells and enables automated multidimensional phenotyping. Cell Stem Cell 22, 929–940.e4 (2018).
Wu, G. et al. Somatic inactivation of Pkd2 results in polycystic kidney disease. Cell 93, 177–188 (1998).
Lu, W. et al. Late onset of renal and hepatic cysts in Pkd1-targeted heterozygotes. Nat. Genet. 21, 160–161 (1999).
Kuraoka, S. et al. PKD1-dependent renal cystogenesis in human induced pluripotent stem cell-derived ureteric bud/collecting duct organoids. J. Am. Soc. Nephrol. 31, 2355–2371 (2020).
Qian, F., Watnick, T. J., Onuchic, L. F. & Germino, G. G. The molecular basis of focal cyst formation in human autosomal dominant polycystic kidney disease type I. Cell 87, 979–987 (1996).
Tan, A. Y. et al. Somatic mutations in renal cyst epithelium in autosomal dominant polycystic kidney disease. J. Am. Soc. Nephrol. 29, 2139–2156 (2018).
Perkovic, V. et al. Canagliflozin and renal outcomes in type 2 diabetes and nephropathy. N. Engl. J. Med. 380, 2295–2306 (2019).
Zelniker, T. A. et al. SGLT2 inhibitors for primary and secondary prevention of cardiovascular and renal outcomes in type 2 diabetes: a systematic review and meta-analysis of cardiovascular outcome trials. Lancet 393, 31–39 (2019).
Acknowledgements
The work was supported by NIH awards UG3TR002158 (Himmelfarb), UG3TR003288 (Himmelfarb), and UG3TR000504 (Himmelfarb); K01DK102826 (Freedman), R01DK117914 (Freedman), U01DK127553 (Freedman), UC2DK126006 (Shankland and Freedman), and 5U01HL152401 (Ho and Freedman); K25HL135432 (Fu); a Novo Nordisk sponsored research award (Freedman); gifts from the Northwest Kidney Centers, the Lara Nowak Macklin Research Fund, and the Mount Baker Foundation; and start-up funds from the University of Washington. Microscopes in the Lynn and Mike Garvey Imaging Core at the Institute for Stem Cell and Regenerative Medicine were used for imaging. We thank Peter Harris and the Mayo Clinic Pirnie Translational PKD Center for providing Pkd1RC/RC mice, and Bruce Conklin and the Gladstone Institute for the WTC-11 iPS cell line. We thank Ivan Gomez for assistance with mouse husbandry, and Anil Karihaloo for helpful discussions.
These authors contributed equally: Sienna R. Li, Ramila E. Gulieva.
These authors jointly supervised this work: Jonathan Himmelfarb, Benjamin S. Freedman.
Division of Nephrology, University of Washington School of Medicine, Seattle, WA, 98109, USA
Sienna R. Li, Ramila E. Gulieva, Louisa Helms, Nelly M. Cruz, Thomas Vincent, Jonathan Himmelfarb & Benjamin S. Freedman
Kidney Research Institute, University of Washington School of Medicine, Seattle, WA, 98109, USA
Institute for Stem Cell and Regenerative Medicine, University of Washington School of Medicine, Seattle, WA, 98109, USA
Sienna R. Li, Ramila E. Gulieva, Louisa Helms, Nelly M. Cruz, Thomas Vincent, Hongxia Fu, Jonathan Himmelfarb & Benjamin S. Freedman
Department of Medicine, University of Washington School of Medicine, Seattle, WA, 98109, USA
Department of Laboratory Medicine & Pathology, University of Washington School of Medicine, Seattle, WA, 98109, USA
Louisa Helms & Benjamin S. Freedman
Department of Bioengineering, University of Washington, Seattle, WA, 98109, USA
Thomas Vincent, Hongxia Fu & Benjamin S. Freedman
Division of Hematology, University of Washington School of Medicine, Seattle, WA, 98109, USA
Hongxia Fu
Author contributions
B.S.F., J.H., and H.F. designed experiments, provided necessary resources, and established the research framework; B.S.F., S.R.L., R.E.G., L.H., T.V., and N.M.C. performed experiments; B.S.F. wrote the manuscript with input from all of the authors.
Correspondence to Benjamin S. Freedman.
Competing interests
N.M.C. and B.S.F. are inventors on patents and/or patent applications related to human kidney organoid differentiation and modeling of PKD in this system (these include "Three-dimensional differentiation of epiblast spheroids into kidney tubular organoids modeling human microphysiology, toxicology, and morphogenesis" [Japan, US, and Australia], licensed to STEMCELL Technologies; "High-throughput automation of organoids for identifying therapeutic strategies" [PCT patent application pending]; "Systems and methods for characterizing pathophysiology" [PCT patent application pending]). B.S.F. and H.F. have ownership interest in Plurexa LLC. None of the preceding interests affected the results of the paper in any way, nor would they be affected by them; they are disclosed in the interest of transparency. All other authors declare no competing interests.
Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Li, S.R., Gulieva, R.E., Helms, L. et al. Glucose absorption drives cystogenesis in a human organoid-on-chip model of polycystic kidney disease. Nat Commun 13, 7918 (2022). https://doi.org/10.1038/s41467-022-35537-2 | CommonCrawl |
Analysis, Volume I
Hindustan Book Agency, January 2006. Third edition, 2014
Hardcover, 368 pages.
ISBN 81-85931-62-3 (first edition)
This is basically an expanded and cleaned up version of my lecture notes for Math 131A. In the US, it is available through the American Mathematical Society. It is part of a two-volume series; here is my page for Volume II. It is currently in its third edition (corrected), with a fourth edition in preparation.
There are no solution guides for this text.
Sample chapters (contents, natural numbers, set theory, integers and rationals, logic, decimal system, index)
Errata to older versions than the corrected third edition can be found here.
— Errata to the corrected third edition —
Page 1: On the final line, should be in math mode.
Page 7: In Example 1.2.6, Theorem 19.5.1 should be "Theorem 7.5.1 of Analysis II".
Page 8: In Example 1.2.7, "Exercise 13.2.9" should be "Exercise 2.2.9 of Analysis II". In Example 1.2.8, "Proposition 14.3.3" should be "Proposition 3.3.3 of Analysis II". In Example 1.2.9, "Theorem 14.6.1" should be "Theorem 3.6.1 of Analysis II".
Page 9: In Example 1.2.10, "Theorem 14.7.1" should be "Theorem 3.7.1 of Analysis II".
Page 11: In the final line, the comma before "For instance" should be a period.
Page 14: "without even aware" should be "without even being aware".
Page 17: In Definition 2.1.3, add "This convention is actually an oversimplification. To see how to properly merge the usual decimal notation for numbers with the natural numbers given by the Peano axioms, see Appendix B."
Page 19: After Proposition 2.1.8: "Axioms 2.1 and 2.2" should be "Axioms 2.3 and 2.4".
Page 20: In the proof of Proposition 2.1.11, the period should be inside the parentheses in both parentheticals. Also, Proposition 2.1.11 should more accurately be called Proposition Template 2.1.11.
Page 23, first paragraph: delete a right parenthesis in .
Page 27: In the final sentence of Definition 2.2.7, the period should be inside the parentheses. In proposition 2.2.8, " is positive" should be " is a positive natural number".
Page 29: Add Exercise 2.2.7: "Let be a natural number, and let be a property pertaining to the natural numbers such that whenever is true, is true. Show that if is true, then is true for all . This principle is sometimes referred to as the principle of induction starting from the base case ".
Page 31: "Euclidean algorithm" should be "Euclid's division lemma".
Page 39: in the sentence before Proposition 3.1.18, the word Proposition should not be capitalised.
Page 41: In the paragraph after Example 3.1.22, the final right parenthesis should be deleted.
Page 45: at the end of the section, add "Formally, one can refer to as "the set of natural numbers", but we will often abbreviate this to "the natural numbers" for short. We will adopt similar abbreviations later in the text; for instance the set of integers will often be abbreviated to "the integers"."
Page 47: In "If did contain itself, then by definition", add "of ". After "On the other hand, if did not contain itself," add "then by definition of ", and after "and hence", add "by definition of ".
Page 48: In the third to last sentence of Exercise 3.2.3, the period should be inside the parenthesis.
Page 49: "unique object " should be "unique object ", and similarly "exactly one " should be "exactly one ".
Page 49+: change all occurrences of "range" to "codomain" (including in the index). Before Example 3.3.2, add the following paragraph: "Implicit in the above definition is the assumption that whenever one is given two sets and a property obeying the vertical line test, one can form a function object. Strictly speaking, this assumption of the existence of the function as a mathematical object should be stated as an explicit axiom; however we will not do so here, as it turns out to be redundant. (More precisely, in view of Exercise 3.5.10 below, it is always possible to encode a function as an ordered triple consisting of the domain, codomain, and graph of the function, which gives a way to build functions as objects using the operations provided by the preceding axioms.)"
Page 51: Replace the first sentence of Definition 3.3.7 with "Two functions , are said to be equal if and only if they have the same domain and codomain (i.e., and ), and for all ." Then add afterwards: "According to this definition, two functions that have different domains or different codomains are, strictly speaking, distinct functions. However, when it is safe to do so without causing confusion, it is sometimes useful to "abuse notation" by identifying together functions of different domains or codomains if their values agree on their common domain of definition; this is analogous to the practice of "overloading" an operator in software engineering. See the discussion [in the errata] after Definition 9.4.1 for an instance of this."
Page 52: In Example 3.3.9, replace "an arbitrary set " with "a given set ". Similarly, in Exercise 3.3.3 on page 55, replace "the empty function" with "the empty function into a given set ".
Page 56: After Definition 3.4.1, replace "a challenge to the reader" with "an exercise to the reader". In Definition 3.4.1, " is a set in " should be " is a subset of ".
Page 62: Replace Remark 3.5.5 with "One can show that the Cartesian product is indeed a set; see Exercise 3.5.1."
Page 65: Split Exercise 3.5.1 into three parts. Part (a) encompasses the first definition of an ordered pair; part (b) encompasses the "additional challenge" of the second definition. Then add a part (c): "Show that regardless of the definition of ordered pair, the Cartesian product is a set. (Hint: first use the axiom of replacement to show that for any , the set is a set, then apply the axioms of replacement and union.)". In Exercise 3.5.2, add the following comment: "(Technically, this construction of ordered -tuple is not compatible with the construction of ordered pair in Exercise 3.5.1, but this does not cause a difficulty in practice; for instance, one can use the definition of an ordered -tuple here to replace the construction in Exercise 3.5.1, or one can make a rather pedantic distinction between an ordered -tuple and an ordered pair in one's mathematical arguments.)"
Page 66: In Exercise 3.5.3, replace "obey" with "are consistent with", and at the end add "in the sense that if these axioms of equality are already assumed to hold for the individual components of an ordered pair , then they hold for an ordered pair itself". Similarly replace "This obeys" with "This is consistent with" in Definition 3.5.1 on page 62.
Page 67: In Exercise 3.5.12, add "Let be an arbitrary set" after the first sentence, and let be a function from to rather than from to ; also should be an element of rather than a natural number. This generalisation will help for instance in establishing Exercise 3.5.13.
Page 68: In the first paragraph, the period should be inside the parenthetical; similarly in Example 3.6.2.
Page 71: The proof of Theorem 3.6.12 can be replaced by the following, after the first sentence: "By Lemma 3.6.9, would then have cardinality . But has equal cardinality with (using as the bijection), hence , which gives the desired contradiction." Then in Exercise 3.6.3, add "use this exercise to give an alternate proof of Theorem 3.6.12 that does not use Lemma 3.6.9.".
Page 73: In Exercise 3.6.8, add the hypothesis that is non-empty.
Page 77: "negative times positive equals positive" should be "negative times positive equals negative". Change "we call a negative integer", to "we call a positive integer and a negative integer".
Page 89: In the first paragraph, insert "Note that when , the definition of provided by Definition 4.3.11 coincides with the reciprocal of defined previously, so there is no incompatibility of notation caused by this new definition."
Page 94, bottom: "see Exercise 12.4.8" should be "see Exercise 1.4.8 of Analysis II".
Page 97: In Example 5.1.10, "1-steady" should be "0.1-steady", "0.1-steady" should be "0.01-steady", and "0.01-steady" should be "0.001-steady".
Page 104: In the proof of Lemma 5.3.7, after the mention of 0-closeness, add "(where we extend the notion of -closeness to include in the obvious fashion)", and after Proposition 4.3.7, add "(extended to cover the 0-close case)".
Page 113: In the second paragraph of the proof of Proposition 5.4.8, add "Suppose that " after the first sentence.
Page 122: Before Lemma 5.6.6: " root" should be " roots". In (e), add "Here ranges over the positive integers", and after "decreasing", add "(i.e., whenever )". One can also replace by for clarity.
Page 123, near top: "is the following cancellation law" should be "is another proof of the cancellation law from Proposition 4.3.12(c) and Proposition 5.6.3".
Page 124: In Lemma 5.6.9, add "(f) ."
Page 130: Before Corollary 6.1.17, "we see have" should be "we have".
Page 131: In Exercise 6.1.6, should be .
Page 134: In the paragraph after Definition 6.2.6, add right parenthesis after "greatest lower bound of ".
Page 138: In the second paragraph of Section 6.4, should be in math mode (three instances). After in the proof of Proposition 6.3.10, add "(here we use Exercise 6.1.3.)".
Page 140: In the first paragraph, should be in math mode.
Page 143, penultimate paragraph: add right parenthesis after " and are finite".
Page 144: In Remark 6.4.16, "allows to compute" should be "allows one to compute".
Page 147: "(see Chapter 1)" should be "(see Chapter 1 of Analysis II)".
Page 148: In the first sentence of Section 6.6, replace to . After Definition 6.6.1, add "More generally, we say that is a subsequence of if there exists a strictly increasing function such that for all .".
Page 153: Just before Proposition 6.7.3, "Section 6.7" should be "Section 5.6".
Page 157: At the end of Definition 7.1.6, add the sentence "In some cases we would like to define the sum when is defined on a larger set than . In such cases we use exactly the same definition as is given above."
Page 161: In Remark 7.1.12, change "the rule will fail" to "the rule may fail".
Page 163: In the proof of Corollary 7.1.14, the function should be replaced with its inverse (thus is defined by ). In Exercise 7.1.5, "Exercise 19.2.11" should be "Exercise 7.2.11 of Analysis II".
Page 166: In Remark 7.2.11 add "We caution however that in most other texts, the terminology "conditional convergence" is meant in this latter sense (that is, of a series that converges but does not converge absolutely)."
Page 172: In Corollary 7.3.7, can be taken to be a real number instead of rational, provided we mention Proposition 6.7.3 next to each mention of Lemma 5.6.9.
Page 175: A space should be inserted before the (why?) before the first display.
Page 176: In Exercise 7.4.1, add "What happens if we assume is merely one-to-one, rather than increasing?". Add a new Exercise 7.4.2.: "Obtain an alternate proof of Proposition 7.4.3 using Proposition 7.4.1, Proposition 7.2.14, and expressing as the difference of and . (This proof is due to Will Ballard.)"
Page 177: In the beginning of the proof of Theorem 7.5.1, add "By Proposition 7.2.14(c), we may assume without loss of generality that (in particular is well-defined for any ).".
Page 178: In the proof of Lemma 7.5.2, after selecting , add "without loss of generality we may assume that ". (This is needed in order to take n^th roots later in the proof.) One can also replace and with and respectively.
Page 186: In Exercise 8.1.4, Proposition 8.1.5 should be Corollary 8.1.6.
Page 187, After Definition 8.2.1, the parenthetical "(and Proposition 3.6.4)" may be deleted.
Page 188: In the final paragraph, after the invocation of Proposition 6.3.8, "convergent for each " should be "convergent for each ".
Page 189, middle: in "Why? use induction", "use" should be capitalised.
Page 190: In the remark after Lemma 8.2.5, "countable set" should be "at most countable set".
Page 193: In Exercise 8.2.6, both summations should instead be .
Page 198: In Example 8.4.2, replace "the same set" with "essentially the same set (in the sense that there is a canonical bijection between the two sets)".
Page 203: In Definition 8.5.8, "every non-empty subset of has a minimal element " should be "every non-empty subset of has a minimal element ".
Page 203: In Proposition 8.5.10, "Prove that is true" should be "Then is true".
Page 204: Before "Let us define a special class….", add "Henceforth we fix a single such strict upper bound function ".
Page 205: The assertion that is good requires more explanation. Replace "Thus this set is good, and must therefore be contained in " with: "We now claim that is good. By the preceding discussion, it suffices to show that when . If this is clear since in this case. If instead , then for some good . Then the set is equal to (why? use the previous observation that every element of is an upper bound for for every good ), and the claim then follows since is good. By definition of , we conclude that the good set is contained in ". In the statement of Lemma 8.5.15, add "non-empty" before "totally ordered subset".
Page 206: Remove the parenthetical "(also called the principle of transfinite induction)" (as well as the index reference), and in Exercise 8.5.15 use "Zorn's lemma" in place of "principle of transfinite induction". In Exercise 8.5.6, "every element of " should be "every element of ".
Page 208: In Exercise 8.5.18, "Tthus" should be "Thus". In Exercise 8.5.16, "total orderings of " should be "total orderings of ".
Page 215: Exercise 9.1.1 should be moved to be after Exercise 9.1.6, as the most natural proof of the former exercise uses the latter.
Page 216: In Exercise 9.1.8, add the hypothesis that is non-empty. In Exercise 9.1.9, delete the hypothesis that be a real number.
Page 221: At the end of Remark 9.3.7, should be .
Page 222: Replace the second sentence of proof of Proposition 9.3.14 by "Let be an arbitrary sequence of elements in that converges to ."
Page 223: Near bottom, in "Why? use induction", "use" should be capitalised.
Page 224: In Example 9.3.17, (why) should be (why?). In Example 9.3.16, "drop the set " should be "drop the set ", and change to .
Page 225: In Example 9.3.20, all occurrences of should be .
Page 226: After Definition 9.4.1, add "We also extend these notions to functions that take values in a subset of , by identifying such functions (by abuse of notation) with the function that agrees everywhere with (so for all ) but where the codomain has been enlarged from to ."
Page 230: In Exercise 9.4.1, "six equivalences" should be "six implications". "Exercise 4.25.10" should be "Exercise 4.25.10 of Analysis II".
Page 231: In the second paragraph after Example 9.5.2, Proposition 9.4.7 should be 9.3.9. In Example 9.5.2, all occurrences of should be . In the sentence starting "Similarly, if …", all occurrences of should be .
Page 232: In the proof of Proposition 9.5.3, in the parenthetical (Why? the reason…), "the" should be capitalised. Proposition 9.4.7 should be replaced by Definition 9.3.6 and Definition 9.3.3.
Page 233-234: In Definition 9.6.1, replace "if" with "iff" in both occurrences.
Page 235: In Definition 9.6.5, replace "Let …" with "Let be a subset of , and let …".
Page 237: Add Exercise 9.6.2: If are bounded functions, show that , and are also bounded functions. If we furthermore assume that for all , is it true that is bounded? Prove this or give a counterexample."
Page 248: Remark 9.9.17 is incorrect. The last sentence can be replaced with "Note in particular that Lemma 9.6.3 follows from combining Proposition 9.9.15 and Theorem 9.9.16."
Page 252: In the third display of Example 10.1.6, both occurrences of should be .
Page 253: In the paragraph before Corollary 10.1.12, after "and the above definition", add ", as well as the fact that a function is automatically continuous at every isolated point of its domain".
Page 256: In Exercise 10.1.1, should be , and "also limit point" should be "also a limit point".
Page 257: In Definition 10.2.1, replace "Let …" with "Let be a subset of , and let …". In Example 10.2.3, delete the final use of "local". In Remark 10.2.5, should be .
Page 259: In Exercise 10.2.4, delete the reference to Corollary 10.1.12.
Page 260: In Exercise 10.3.5, should be .
Page 261: In Lemma 10.4.1 and Theorem 10.4.2, add the hypotheses that , and that are limit points of respectively.
Page 262: In the parenthetical ending in "f^{-1} is a bijection", a period should be added.
Page 263: In Exercise 10.4.1(a), Proposition 9.8.3 can be replaced by Proposition 9.4.11.
Page 264: In Proposition 10.5.2, the hypothesis that be differentiable on may be weakened to being continuous on and differentiable on , with only assumed to be non-zero on rather than . In the second paragraph of the proof "converges to " should be "converges to ".
Page 265: In Exercise 10.5.2, Exercise 1.2.12 should be Example 1.2.12.
Page 266: "Riemann-Steiltjes" should be "Riemann-Stieltjes".
Page 267: In Definition 11.1.1, add " is nonempty and" before "the following property is true", and delete the mention of the empty set in Example 11.1.3. In Lemma 11.1.4, replace "connected" by "either connected or empty". (The reason for these changes is to be consistent with the notion of connectedness used in Analysis II and in other standard texts. -T.)
In the start of Appendix A.1, "relations between them (addition, equality, differentiation, etc.)" should be "operations between them (addition, multiplication, differentiation, etc.) and relations between them (equality, inequality, etc.)".
Page 276: In the proof of Lemma 11.3.3, the final inequality should involve on the RHS rather than .
Page 280: In Remark 11.4.2, add "We also observe from Theorem 11.4.1(h) and Remark 11.3.8 that if is Riemann integrable on a closed interval , then ."
Page 282: In Corollary 11.4.4, replace " " by " , defined by ", and add at the end "(To prove the last part, observe that .)"
Page 283: In the penultimate display, should be .
Page 284: Exercise 11.4.2 should be moved to Section 11.5, since it uses Corollary 11.5.2.
Page 288: In Exercise 11.5.1, (h) should be (g).
Page 291: In the paragraph before Definition 11.8.1, remove the sentences after "defined as follows". In Definition 11.8.1, add the hypothesis that be monotone increasing, and be an interval that is closed in the sense of Definition 9.1.15, and alter the definition of as follows. (i) If is empty, set . (ii) If is a point, set , with the convention that (resp. ) is when is the right (resp. left) endpoint of . (iii) If , set . (iv) If , , or , set equal to , , or respectively. After the definition, note that in the special case when is continuous, the definition of for simplifies to , and in this case one can extend the definition to functions that are continuous but not necessarily monotone increasing. In Example 11.8.2, restrict the domain of to , and delete the example of .
Page 292: In Example 11.8.6, restrict the domain of to . In Lemma 11.8.4 and Definition 11.8.5, add the condition that be an interval that is closed, and be monotone increasing or continuous.
Page 293: After Example 11.8.7, delete the sentence "Up until now, our function… could have been arbitrary.", and replace "defined on a domain" with "defined on an interval that is closed" (two occurrences).
Page 294: The hint in Exercise 11.8.5 is no longer needed in view of other corrections and may be deleted.
Page 295: In the proof of Theorem 11.9.1, after the penultimate display , one can replace the rest of the proof of continuity of with "This implies that is uniformly continuous (in fact it is Lipschitz continuous, see Exercise 10.2.6), hence continuous."
Page 297: In Definition 11.9.3, replace "all " with "all limit points of ". In the proof of Theorem 11.9.4, insert at the beginning "The claim is trivial when , so assume , so in particular all points of are limit points.". When invoking Lemma 11.8.4, add "(noting from Proposition 10.1.10 that is continuous)".
Page 298: After the assertion , add "Note that , being differentiable, is continuous, so we may use the simplified formula for the -length as opposed to the more complicated one in Definition 11.8.1."
Page 299: In Exercise 11.9.1, should lie in rather than . In Exercise 11.9.3, should lie in rather than . In the hint for Exercise 11.9.2, add "(or Proposition 10.3.3)" after "Corollary 10.2.9".
Page 300: In the proof of Theorem 11.10.2, Theorem 11.2.16(h) should be Theorem 11.4.1(h).
Page 310: in the last line, "all logicallly equivalent" should be "all logically equivalent".
Page 311: In Exercise A.1.2, the period should be inside the parentheses.
Page 327: In the proof of Proposition A.6.2, may be improved to ; similarly for the first line of page 328. Also, the "mean value theorem" may be given a reference as Corollary 10.2.9.
Page 329: At the end of Appendix A.7, add "We will use the notation to indicate that a mathematical object is being identified with a mathematical object ."
Page 334: In the last paragraph of the proof of Theorem B.1.4, "the number has only one decimal representation" should be "the number has only one decimal representation".
— Errata to the fourth edition —
General: all instances of "supercede" should be "supersede", and "maneuvre" should be "manoeuvre".
Page 15: "carry of digits" should be "carry digits". "Guiseppe" should be "Giuseppe".
Page 16: The semicolon before should be a colon.
Page 17: the computing language C should not be italicised.
Page 22: In Remark 2.1.5, the first "For instance" may be deleted.
Page 39: In the last part of Definition 3.1.15, "if" should be "iff".
Page 66: In Exercise 3.5.6 "the " should be "the sets ".
Page 69: In the paragraph before Definition 3.6.5, should be .
Page 70: In the fifth line of the proof of Lemma 3.6.9, should be .
Page 102: In the sixth line from the bottom of the proof, delete the first "yet".
Page 104: In the statement of Lemma 5.3.6, delete the space before the close parenthesis.
Page 109: In the top paragraph (after Proposition 5.3.11), "On obvious guess" should be "One obvious guess".
Page 116: The paragraph after Remark 5.4.11 may be deleted, since it is essentially replicated near Definition 6.1.1.
Page 159: In Lemma 7.1.4(a), one can replace with .
Page 161: In the third display, on the right-hand side of the equation, the sizes of the first two left parentheses should be interchanged.
Page 174: In Proposition 7.4.1, should be , and similarly should be (two occurrences).
Page 177: In Exercise 7.3.2, add the requirement to the geometric series formula.
Page 200: In Remark 8.3.5, "Exercise 7.2.6" should be "Exercise 7.2.6 of Analysis II".
Page 201: In the first paragraph of Section 8.4, "Section 7.3" should be "Section 7.3 of Analysis II".
Page 219: In Remark 9.1.25, "Theorem 1.5.7" should be "Theorem 1.5.7 of Analysis II".
Page 231: In Example 9.4.3, in the second limit, should be .
Page 234: To improve the logical ordering, Proposition 9.4.13 (and the preceding paragraph) can be moved to before Proposition 9.4.10 (and similarly Exercise 9.4.5 should be moved to before Exercise 9.4.3, 9.4.4).
Page 237: At the start of Section 9.7, "a continuous function attains" should be "a continuous function on a closed interval attains".
Page 255: In Theorem 10.1.13(h), enlarge the parentheses around .
Page 271: In Remark 11.1.2, "Section 2.4" should be "Section 2.4 of Analysis II".
Page 278: In Remark 11.3.5, replace "this is the purpose of the next section" with "see Proposition 11.3.12". (Also one can mention that this definition of the Riemann integral is also known as the Darboux integral.)
Page 281: In Remark 11.3.8, "Chapter 8" should be "Chapter 8 of Analysis II".
Page 294: should be (two occurrences).
General LaTeX issues: Use \text instead of \hbox for subscripted text. Some numbers (such as 0) are not properly placed in math mode in certain places. Some instances of \ldots should be \dots. \lim \sup should be \limsup, and similarly for \lim \inf.
Thanks to aaron1110, Adam, James Ameril, Paulo Argolo, José Antonio Lara Benítez, Dingjun Bian, Philip Blagoveschensky, Tai-Danae Bradley, Brian, Eduardo Buscicchio, Matheus Silva Costa, Gonzales Castillo Cristhian, Ck, William Deng, Kevin Doran, Lorenzo Dragani, Evangelos Georgiadis, Elie Goudout, Ti Gong, Cyao Gramm, Christian Gz., Ulrich Groh, Yaver Gulusoy, Minyoung Jeong, Erik Koelink, Brett Lane, David Latorre, Kyuil Lee, Matthis Lehmkühler, Bin Li, Percy Li, Ming Li, Mufei Li, Manoranjan Majji, Mercedes Mata, Simon Mayer, Pieter Naaijkens, Vineet Nair, Cristina Pereyra, Huaying Qiu, David Radnell, Tim Reijnders, Issa Rice, Eric Rodriguez, Pieter Roffelsen, Luke Rogers, Feras Saad, Gabriel Salmerón, Vijay Sarthak, Leopold Schlicht, Marc Schoolderman, Rainer aus dem Spring, SkysubO, Sundar, Karim Taha, Chaitanya Tappu, Winston Tsai, Kent Van Vels, Andrew Verras, Daan Wanrooy, John Waters, Yandong Xiao, Hongjiang Ye, Luqing Ye, Christopher Yeh, Muhammad Atif Zaheer, and the students of Math 401/501 and Math 402/502 at the University of New Mexico for corrections.
Darshan Pillay
Dear Professor Tao,
I am currently using the 3rd corrected edition of Analysis 1, and I am having some trouble understanding Definition 9.3.6 (convergence of functions at a point) and Definition 9.3.3 (local epsilon-closeness). In particular, when we set X to be the empty set, X has no adherent points, and I am not sure how to make sense of Definitions 9.3.3 and 9.3.6 in this case. In Definition 9.3.3 do we impose that X is non-empty, and in Definition 9.3.6 do we leave the limit undefined when X is empty?
In Definition 9.3.6, do we also require that E be non-empty?
The non-emptiness of or is a consequence of the hypotheses of the definition (since, as you say, there would otherwise not be any adherent points), but does not need to be explicitly stated; there is no way in which Definition 9.3.6 could be applied to an empty set since one could not produce the adherent point that is part of the concept being defined.
yalikes
I am reading the 3rd edition of Analysis 1, and I have a question about the definition of -length in the Riemann-Stieltjes integral (in the errata to the third edition, Definition 11.8.1).
Do we need the function to be bounded, or the domain of X to be a closed interval?
You see, if and
is not defined.
This means that for some , the -length is not defined for some bounded interval I. The length of an interval always has a definition, but
does not. Is this intended?
I am confused about this.
(English is not my native language, sorry.)
[Fair enough, I have now required to be closed. -T]
William Deng
If is now required to be closed, then would the domain in Examples and , or the domain in Examples and be considered closed? They wouldn't from the perspective of Definition , but would be relatively closed with respect to (as in Section of Analysis II).
[Here we are using the notion of closed set from Definition 9.1.15; I've updated the errata accordingly. -T]
Dear prof tao.
In Proposition 5.5.12, the book says "Since x^2 > 2, we can choose 0 < e < 1 such that , and thus (x-e)^2 > 2".
How can we choose such an e?
In Exercise , after the map is defined, it says "that sends every element of to its order ideal" when it should say "that sends every element of to its order ideal". In Exercise , there is a part that read "Show that the maximal elements of are precisely the total orderings of " when it should instead be "the total orderings of ".
Terry, this page used to load all the comments at once and one can easily search among them conveniently. Now it loads only a few. Do you know a way to search the comments conveniently (e.g. find all the comments by "Terence Tao")?
In Definition , don't we technically also need to know the nature of the equality relation which has been defined on the objects of in order to check anti-symmetry for instance? So instead of viewing a poset as just a pair , we should view it as , where is the equality relation which has been defined on the elements of ?
[All first-class mathematical objects are understood to have an equality relation assigned to them. -T]
In Zorn's lemma (Lemma ), is the assumption that really necessary? I believe the other hypothesis that every chain in has an upper bound ( ) is sufficient to ensure that .
[Technically yes, but in practice one only verifies the upper bound property for non-empty totally ordered sets when invoking Zorn's lemma and then one needs the additional hypothesis that is non-empty, which is in analogy with the base case of mathematical induction. I've added an erratum to reflect this. -T]
According to Example 8.4.2, if I is a set of the form \{i \in N : 1 ≤ i ≤ n\}, then the infinite Cartesian product is the same set as the n-fold Cartesian product defined in Definition 3.5.7. However, from Exercise 3.5.2 we know that an ordered n-tuple in the n-fold Cartesian product must be a surjective function. But the functions in the infinite Cartesian product are not necessarily surjective. Why are the two sets the same?
(English is not my native language; I hope you can understand. Sorry!)
Thank you for reading my comment!
[Fair enough, the more precise statement is that $\prod_{i=1}^n X_i$ and can be canonically identified through a one-to-one correspondence, rather than being literally equal, though in most cases one can safely "abuse notation" by viewing the two sets as the same. (The point being that a function in the latter set can be uniquely identified with a surjective function in the former set by taking to be the range of .) -T]
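The canonical identification mentioned in this reply can be made explicit; the display below is an illustrative sketch (the notation is not taken from the text):

```latex
% For I = \{ i \in \mathbb{N} : 1 \le i \le n \}, define
\Phi \colon \prod_{i \in I} X_i \to X_1 \times \cdots \times X_n, \qquad
\Phi(f) := (f(1), \dots, f(n)).
% \Phi is injective (a function is determined by its values) and surjective
% (every n-tuple (x_1, \dots, x_n) arises from the function f(i) := x_i),
% hence a canonical bijection between the two products.
```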
3 August, 2021 at 9:27 pm
Dear Prof Tao,
On page 100, the book says "If we are to define the real numbers from the Cauchy sequences of rationals as limits of Cauchy sequences, we have to know when two Cauchy sequences of rationals give the same limit, without first defining a real number (since that would be circular)."
I can't understand what argument would be circular.
a circular argument would be like this.
"why is A?" "because B" "then why is B?" "because A"
but I can't find what argument would be circular in this 100p paragraph.
solveallx
I don't think the inferences are circular, but rather, the definition would be circular. A circular argument goes the way that you say, but a circular definition would go something like this:
"What is A? It is a thing which we can define in terms of B. But what is B? It is a thing that we can define in terms of A."
Here the challenge is to define the equality of real numbers, assuming that we define a real number to be any Cauchy sequence of rational numbers. If x, y are two real numbers, how should we define x=y? We cannot say that this equation holds so long as x and y converge to the same real number, because if we did, then we would be defining the real numbers by reference to the real numbers. That would be a circular definition.
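For concreteness, the non-circular route can be sketched purely in terms of rationals (a sketch in the spirit of the book's construction, not a quotation from it):

```latex
% Equality of reals defined via rational Cauchy sequences only. Writing
% x = \mathrm{LIM}_{n\to\infty} a_n and y = \mathrm{LIM}_{n\to\infty} b_n for
% Cauchy sequences (a_n), (b_n) of rationals, one defines
x = y \;:\iff\; \forall \varepsilon \in \mathbb{Q}^{+} \;\exists N \;\forall n \ge N :\; |a_n - b_n| \le \varepsilon.
% Every quantity on the right-hand side is rational, so no real number is
% presupposed and the definition is not circular.
```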
4 August, 2021 at 4:47 am
Dear Professor Tao
I believe that due to the revisions that were made to Definition , the hint in Exercise is now obsolete since one can directly find piecewise constant functions majorizing and minorizing whose (piecewise constant) Riemann-Stieltjes integral is precisely . For instance, if , being continuous on a closed interval, is bounded by some real number , then the functions defined by and for majorize and minorize respectively and are both piecewise constant with respect to the partition of , and in both cases we have
[Erratum added, thanks – T.]
4 August, 2021 at 12:56 pm
Dear professor Tao, in your development of Fubini's theorem for infinite sums, you show that if a sum over a countable set is absolutely convergent then it equals the iterated sum. (By the way, I searched FAR and wide to find someone who proves this, and of about 30 texts that I looked at, yours was one of two that contained such a proof! So I deeply thank you for this!)
However, I am wondering if the converse is true (if the iterated series converges then the sum over the terms is absolutely convergent). I mention this in part because I wonder if you would enjoy including this in your text … and of course selfishly, because I'd enjoy seeing a proof. :)
Anyway, if you ever do include it in a future version I'll be happy to see it.
Thank you again for the text.
Adam Frank
[Convergence of the iterated series is not sufficient; see Example 1.2.5. But absolute convergence of the iterated series suffices; this follows from the Fubini-Tonelli theorem, which is covered in my measure theory text. -T]
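A standard illustration of the failure mentioned in this reply (in the spirit of the example Tao cites, though the example in the text may differ) is the doubly indexed sequence below:

```latex
% Doubly indexed sequence with convergent iterated sums but divergent absolute sum:
a_{m,n} = \begin{cases} 1 & m = n, \\ -1 & m = n+1, \\ 0 & \text{otherwise.} \end{cases}
% Row 1 sums to 1 and every row m \ge 2 sums to 1 - 1 = 0, while every column sums to 1 - 1 = 0:
\sum_{m=1}^{\infty} \sum_{n=1}^{\infty} a_{m,n} = 1, \qquad
\sum_{n=1}^{\infty} \sum_{m=1}^{\infty} a_{m,n} = 0, \qquad
\sum_{m,n} |a_{m,n}| = \infty.
% Both iterated sums converge yet disagree, so the sum cannot be absolutely convergent.
```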
(Note: I have not found the converse in ANY of the texts that I've searched.)
17 August, 2021 at 11:34 pm
guanyuminghe
in your introduction of Cartesian products (finite or infinite), you first give definitions, then state their existence by the finite choice lemma and the axiom of choice.
I noticed that in these two statements, you only say that the Cartesian product set is non-empty. I wonder how this non-emptiness can imply that the sets contain exactly the elements that the definitions give.
I appreciate your response about this text.
20 August, 2021 at 9:48 pm
I have a question about the proof of Proposition 8.1.5, in which you construct an infinite sequence by .
I wonder if this construction requires the axiom of choice or a similar assertion, since it selects an infinite sequence from an infinite set. By induction we can know that for every finite , there is a finite sequence. However, can we really show that the entire sequence, i.e. the function, is defined on the entire ?
I know that by the well-ordering principle, every is well-defined, but, the problem is, it is trying to define infinitely.
By the way, if it really requires the axiom of choice, I don't know how to apply the axiom since that the choices we made are all definitive.
Thank you for reading my comment.
The axiom of choice is not required here. For each , one can define the partial function , or equivalently define the partial graph . Applying the axioms of union (and replacement), one can then define the complete graph , and this gives the full function .
hello professor
Is equality a relation?
some books say that the set {(x, x) | x ∈ S} is the equality relation for any given set S.
But I think this definition already implies what's the same and what's different.
Is it correct definition of equality?
Yes, equality is a relation. The definition you gave does not use the notion of equality, so it's not a circular definition. What it uses is the ability of a quantifier to pick out any x in the domain and use it (repeatedly) to state something about it.
if you think like that, then can you prove that this equality relation is an equivalence relation only using the definition which I gave ? {(x, x)|x }
Yep. For all x: (x,x) is in the set. And if (x,y) is in the set then (x,y)=(x,x)=(y,x) so the relation is symmetric. And so on.
I said only using the definition. you did like this. (x,y)=(x,x), -> x=x, y=x -> x=x, x=y -> (y,x)=(x,x). but the equality signs in here " x=x, y=x -> x=x, x=y " is not what we defined. it is something that we should already know to define the set {(x, x)|x in S}.
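The diagonal-relation discussion above can be checked mechanically on a finite set. A sketch (note that the membership tests below rely on the metalanguage's primitive notion of equality of pairs, which is precisely the circularity worry raised in the last comment):

```python
# The diagonal relation on a finite set S: R = {(x, x) for x in S}.
# Mechanically verifying that R is reflexive, symmetric, and transitive.

S = {1, 2, 3}
R = {(x, x) for x in S}

reflexive = all((x, x) in R for x in S)
symmetric = all((y, x) in R for (x, y) in R)
transitive = all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

print(reflexive, symmetric, transitive)  # True True True
```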
Dear Professor Tao, I wonder about the relation between Proposition 5.4.14 and Definition 5.4.5. How should I understand it?
johnhtwan
In Analysis I Ch3, is "Equality of set" an axiom or definition? Coz in my hardcover book it's an axiom, while in electronic copy it's a definition. Thank you so much!
I sent an errata up to Chapter 6 of Analysis I (in a LaTeX pdf) on your email: [email protected]. But I guess that since you are a very busy person, I thought of bringing that to your notice here. (seeing that you take out time for comments here).
For your reference, I sent the email on September 8 (11:25 pm IST), entitled "Errata for Analysis I".
(Feel free to delete this comment.)
In the Errata it said that in Exercise 8.4.3, "there exists an injection ; in other words…" should be "there exists an injection with the identity map; in particular…". (This is needed in order to establish the converse part of the question.)
But I seem to solve the converse part of the question without this errata and I could not find any mistake in the proof. My proof is as follows. Could you please check whether it is valid?
Suppose for arbitrary sets such that there exists a surjection , there then exists an injection . We need to show that the axiom of choice is true.
Let be a set, and for each let be a non-empty set. Suppose that all the sets are disjoint from each other. Then for each , we define a function by . As is a non-empty set for each , we can see that is a surjection for each . So there exists an injection for each . Now if we define the set , we will have for all . By Exercise 8.4.2 we can know that the axiom of choice is true.
English is not my native language, hope you can understand, sorry!
3 October, 2021 at 4:46 pm
In order to select a single for each , you have to invoke the axiom of choice, which is what you are trying to prove.
4 October, 2021 at 2:12 am
Woody Young
Dear Professor Tao, I'd like to refer to Definition 6.2.6 in your book Analysis I for the supremum of sets of extended reals. Suppose where and . If is not bounded below, doesn't exist. Can I therefore suggest two small changes to Definition 6.2.6: [1] append to rule (c) "… provided that exists or the supremum of E is not defined." [2] Append "… provided that exists if rule (c) is applicable or the infimum of E is not defined." to the definition of infimum at the bottom of 6.2.6? Please correct me if I am wrong. (By the way, thank you very much for writing the book. :-))
By the analogue of Definition 5.5.10 for infima, the infimum of a set that is not bounded below is by definition.
Thanks, Prof. I just read Definition 5.5.10 and realized that the two cases were already covered. Thanks for your response.
In Exercise 8.5.19, we need to prove that the maximal elements of Ω are precisely the well-orderings (X,≤) of X. However, whether Ω has a maximal element is unknown. So I think it is better to change it to: if Ω has maximal elements, then the maximal elements of Ω are precisely the well-orderings (X,≤) of X. Exercise 8.5.16 also needs the same change.
What's your opinion about this?
I think maybe I need a further explanation. The question here is that whether X has at least one well-ordering is unknown. If it has one, then it is easy to prove that the maximal elements of Ω are precisely the well-orderings (X,≤) of X. If we only know that Ω has maximal elements, but do not know whether X has at least one well-ordering, then we should suppose for the sake of contradiction that the maximal element is not a well-ordering (X,≤) of X; then we can find a bigger one, a contradiction. So if Ω has maximal elements, then the well-orderings (X,≤) of X exist, and the maximal elements of Ω are precisely the well-orderings (X,≤) of X.
English is not my native language, hope you can understand, sorry!
[One does not need to know the existence of a well-ordering to prove the claim that the maximal elements of are the well-orderings of . If there are no such well-orderings, then both collections are empty. -T]
I'm sorry I didn't make it clear. What I wanted to say is: in order to prove the claim (call it claim A) that the maximal elements of Ω are the well-orderings of X, one must first show that there exists a maximal element of Ω, which one should use Zorn's lemma to prove. However, in this exercise, this part (there exists a maximal element of Ω) is arranged after the proof of claim A. This is a little strange. So I think it is better to change claim A to: if Ω has maximal elements, then the maximal elements of Ω are precisely the well-orderings of X.
If my understanding is wrong, could you please correct it?
Once again, thank you for reading my comment!
[A statement of the form "All As are Bs" does not require one to first establish that As exist; the statement is vacuously true if no such As exist. -T]
Yaver Gulusoy
Dear Prof. Tao, on page 22: in the sentence with "for instance, understanding numbers", the expression "for instance" is written twice.
[Erratum added, thanks- T.]
Issa Rice
I have a question about Exercise 9.4.4. I am interpreting the hint to first show to mean that eventually we want to evaluate the limit , and then use properties of exponentiation and the limit laws to say that . But evaluating seems to require Proposition 9.4.13, which comes after the proposition this exercise is trying to prove (Proposition 9.4.11). If that's right, it seems worth flagging that this is the case, or reordering the propositions somehow. But perhaps there is another way to do this exercise that I am missing?
[One can use and Proposition 9.4.9 rather than Proposition 9.4.13 -T.]
This also confuses me. Even if we choose to use Proposition 9.4.9, we probably have to say somehow that $\lim_{x \to a} (x/a)^p = \lim_{x \to 1} x^p$, but there is no statement in Section 9.4 about limits of a composition of two functions, so that we cannot use this fact in Exercise 9.4.4. I don't see either how to solve this exercise using only Proposition 9.4.9, since it does not give by itself the ability to turn a $\lim_{x \to a}$ into a $\lim_{x \to 1}$. But I may be missing something as well :)
Ah, I see the issue now. The proposed fix of reordering the exercises and propositions is probably the simplest way to address this issue.
While writing my previous comment, I noticed that the page now says "with a fourth edition in preparation". I've been keeping a list of corrections that I've been too lazy to type up, but this has now prompted me to type it all up. I hope it's not too late to include in the new edition. (Also sorry if some of these seem really nitpicky or minor stylistic things. I really like this book so I want to have it be typographically beautiful as well as free of errors.)
Page 15: For "why do we have to carry of digits", I think this should be either "why do we have to carry digits" (without "of") or e.g. "why is carry of digits required".
Page 16: When discussing and , there's an inconsistent use of colon vs semicolon. The sentence structure is the same here but one uses a colon the other uses a semicolon. I think they should both be a colon.
Page 16: The programming language C is italicized, but programming language names are not conventionally italicized (the term "C" also does not appear in the index, so it doesn't seem to be intended as an introduction of a new technical term either).
Page 39: In the last part of Definition 3.1.15 (for ), "if" should be "iff" for consistency with earlier in the definition.
Page 66: In Exercise 3.5.6 at the end, "the " should be just or "the sets ".
Page 70: In the fifth line of the proof of Lemma 3.6.9, should be (strangely, this is fine in the second edition but becomes capitalized in the uncorrected third edition).
Page 102: In the 6th line from the bottom of the proof, "yet" is repeated twice.
Page 104: In the statement of Lemma 5.3.6, at the end there is a space before the close parenthesis.
Page 160: In Remark 7.1.10, there are three places where an expression like appears. In each of these cases, the "is true" appears bigger than the surrounding text, I believe because \mbox is used instead of \text. With \text, it looks like , which has the correct size of "is true".
Page 167: In the statement of Proposition 7.2.12, should be in math mode so as not to be italicized, in order to be consistent with the other occurrence of .
Page 169: In the statement of Lemma 7.2.15, should be in math mode so as not to be italicized, in order to be consistent with the other occurrences of .
Page 175: In Example 7.4.2, should be (i.e. \cdots instead of \ldots) to match the addition operation (two occurrences).
Page 176 and 177: In Example 7.4.4, should be (i.e. \cdots instead of \ldots) to match the addition operation (two occurrences).
Page 237: should be (i.e. \cdots instead of \ldots) to match the .
Page 255: In Theorem 10.1.13(h), should be with bigger parentheses.
Page 278: Remark 11.3.5 says "However, the two definitions turn out to be equivalent; this is the purpose of the next section" but the discussion of Riemann sums happens later on in the same section.
Page 294: I think " " should be " ", and " -lengths " should be " -lengths ".
Throughout the text, and are used, but they should be and , i.e. using the LaTeX commands \limsup and \liminf. (I'd be happy to try to locate every instance of liminf and limsup if that's required to make the change, but I'm guessing for you it's possible to easily find these by searching the LaTeX source.)
[Corrections added, thanks – T.]
professor, in lemma 7.1.4 (a) , the statement is "Let m ≤ n < p be integers~~".
I think m ≤ n ≤ p is fine too. Is this an erratum?
and in Definition 7.1.1
a(i) is a real number assigned to each integer i between m and n.
but there is a real number a(n+1) in the definition of finite series.
A sequence is a function, and a function has a domain; here the domain is {i : m ≤ i ≤ n},
so we cannot define f(n+1), i.e. a(n+1).
[First erratum added. As for the second point, to define a concept recursively, one defines in terms of (see Proposition 2.1.16), so in this case one would define the summation concept for a finite sequence in terms of the summation concept for a finite sequence . -T]
In the proof of Lemma 7.1.13, the parenthesis sizes in the following equality appear to be mismatched:
The first opening parenthesis appears to be closed after , before the second (large) opening parenthesis is closed. I think the small open and close parentheses could be omitted entirely:
Thank you for writing this excellent book!
[Correction added, thanks – T.]
oiu850714
The errata in this page added Exercise 2.2.8 on page 29, but the book only has Exercise until 2.2.6, so it seems that the new Exercise should be numbered as Exercise 2.2.7?
Dear Professor Tao, are the additive combinatorics books understandable for a high-school-level student?
Trivial spelling error: All instances of "supercede" should instead be "supersede".
(Google :supercede v supersede" for many discussions on this. This isn't a British vs American spelling thing. E.g. Grammarist "the misspelling supercede has been recorded for multiple centuries. … It is interesting to note that the error has never been adopted as an accepted alternative, which is the case with some other widespread errors.")
If it's been spelled this way for hundreds of years and it appears in books then who's to say it's incorrect?
There are several references to a "Chapter 11.45" in the book–is this perhaps an error?
– p. 51 footnote 2
– p. 267
– p. 278 (twice)
May I ask what the corrections are? (I can't seem to find them in the above errata.) Thanks!
[These issues have been fixed since the corrected third edition: "Chapter 19" should refer to "Chapter 8 of Analysis II", etc. Not sure why in your edition, 19 has been replaced with 11.4. -T]
There is an erratum for page 49+: change all occurrences of "range" to "codomain".
It seems that the occurrence of "range" after Example 3.1.22. on page 41 should also be replaced.
[This has been corrected for the fourth edition -T.]
In Exercise 8.1.1 (show that X is infinite if and only if there exists a proper subset Y of X which has the same cardinality as X),
why is the axiom of choice required?
To prove this proposition, we need the statement "every infinite set has a countably infinite subset" as a lemma.
To make a countable subset of X, I think I can pick an element x_0 from X, pick x_1 from X\{x_0}, pick x_2 from X\{x_0, x_1}, and so forth; so we can pick x_n from X\{x_0,…,x_{n-1}} for every natural number n. I know this argument lacks some rigor, but it uses only Lemma 3.1.5 (single choice) and induction.
I have the same question for Exercise 8.1.9 too,
particularly, when proving countable unions of countable sets are countable.
Since the index set is countable and each element of the index set is assigned a countable set, we can make a surjective function f from N×N onto the countable union of countable sets, because there exist bijections for the index set and for each set assigned to an element of the index set.
28 December, 2021 at 11:10 am
Single choice lets you pick any finite number of distinct elements of for any natural number , but in order to pick a countably infinite number of distinct elements of one needs a stronger version of choice, such as countable choice, dependent choice, or the full axiom of choice. (The problem here is the lack of uniqueness in specifying the finite sequence , which prevents one from easily specifying an infinite from knowledge of existence of the finite sequences.)
p. 197: "We will give another proof of this result using measure
theory in Exercise 11.42.6." Might this be a typo? I can't seem to find any Exercise 11.42.6 in either Analysis I or II
[In recent editions this has been corrected to Exercise 7.2.6 of Analysis II. -T]
The above errata states, "Page 7: In Example 1.2.6, Theorem 19.5.1 should be "Theorem 7.5.1 of Analysis II"."
1. I can't find any mention of "Theorem 19.5.1", only one "Theorem 11.50.1" (https://books.google.com/books?id=ecTsDAAAQBAJ&pg=PA7)
2. Also, I can't find any Theorem 7.5.1 in Analysis II
p9: "maneuvre" is usually spelt either "maneuver" (US) or "manoeuvre" (UK)
p10: "See Theorem 11.37.4 and Exercise 11.37.1"–I can't seem to find either of these. My guess is that they should instead be Theorem 6.5.4 and Exercise 6.5.1 (both in Analysis II)
p13: Near bottom, "Section 11.26"–can't find this
p15: "Guiseppe" should be "Giuseppe"
[These errata are for the corrected third edition, not the original third edition, and have been corrected for the fourth edition. For instance, Theorem 7.5.1 of Analysis II should be Theorem 8.5.1. The third edition has slightly different page numbering than the recent editions, so if you could give more context to your errata than just page number (e.g., section number, proposition number, etc.) that would aid in locating the corresponding location in the current edition – T.]
22 December, 2021 at 10:04 pm
Above errata:
1. "Page 23" should be "Page 24"?
2. "Page 27: In the final sentence of Definition 2.2.7, the period should be inside the parentheses." I think instead of this, we should just delete the period before the left parenthesis?
There is an errata for page 49+. However the description of that errata says "Before Exercise 3.3.2, add the following paragraph…". It seems that "Exercise 3.3.2" should be "Example 3.3.2"?
Please take a look at this: https://math.stackexchange.com/questions/4342276
p107: "On obvious first guess for how to proceed would be define" should perhaps instead be "One obvious first guess for how to proceed would be to define"?
Possible repetition:
p126: "First, we define distance for real numbers … Proposition 4.3.3 works just as well for real numbers"
But p114 already defined distance and stated that Proposition 4.3.3 also works for reals.
p175, a bit after middle, "Exercise 11.32.2"–can't find this
Similarly at p177, just before Exercises
p177, near top, "Example 11.25.7"–can't find this
p195: "Of course, the real numbers R can contain many infinite sequences"–should "infinite" be replaced by "countable"?
p197, Remark 8.3.5, "Exercise 11.42.6"–can't find this
p198, bottom: "Section 11.43"–can't find
[Corrections added. The reference errors you mention have been corrected by the fourth edition – for instance, Remark 8.3.5 now refers to Exercise 7.2.6 of Analysis II. Unfortunately I cannot locate some of your corrections due to changes in numbering between editions.-T]
Waqar Ahmed
Typo in Example 9.4.3 on page 228, third edition: it should be "x tends to x0," not "x0 belongs to x".
2 January, 2022 at 12:23 am
Dear professor Tao.
Do 'Definition 8.2.4' work for finite sets as well?
I was wondering if we can say that a summation over a finite set is absolutely convergent
[Yes, though one can easily show by induction that every summation of a real-valued function over a finite set will be absolutely convergent. -T]
3 January, 2022 at 10:13 pm
thank you for your answer professor! but then how do we define convergence of summation over finite set? I mean it is weird to say that summation over finite set is convergent or absolutely convergent.
it doesn't really have a sequence of the partial sum.
[Definition 8.2.4 makes no reference to any "sequence of the partial sum" – T.]
2 January, 2022 at 8:40 am
In section 3.3, Definition 3.3.1, it is written that
if and only if is true.
(1) What kind of object is ? Is it a "mathematical statement" that has free variables, as mentioned in Appendix A.1?
(2) What is really the expression in first-order logic? People call the "independent variable" and the "dependent variable". But there is no such thing in first-order logic.
Yes, is a mathematical statement with free variables, otherwise known as a predicate. The expression is a mathematical expression with free variables: without further context, all three mathematical objects are free variables here.
The definition of a function is really an axiom, asserting that there exists a class of mathematical objects called functions , with each function having a domain , a codomain (both of which are sets), and an evaluation operation that takes and an element of the domain and returns an element of the codomain , with the property that for any predicate for free variables obeying the vertical line test, there exists a function such that for any , one has if and only if holds. This axiom is in fact redundant given the other axioms of set theory, as one can encode functions as a certain type of set; see Exercise 3.5.10. However, I prefer to introduce functions instead via the slightly informal Definition 3.3.1 to avoid making the first-order logic formalism too prominent in the text.
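The vertical-line-test construction in this reply can be mimicked over finite sets. A sketch, with illustrative helper names of my own (not from the text):

```python
# Building a function from a predicate P(x, y) that obeys the vertical line
# test: for each x in the domain there is exactly one y in the codomain
# with P(x, y) true.  (Finite-set sketch; names are hypothetical.)

def function_from_predicate(P, domain, codomain):
    graph = {}
    for x in domain:
        witnesses = [y for y in codomain if P(x, y)]
        if len(witnesses) != 1:
            raise ValueError(f"P fails the vertical line test at x = {x}")
        graph[x] = witnesses[0]
    return lambda x: graph[x]

# Example: the predicate "y is the square of x" passes the test.
f = function_from_predicate(lambda x, y: y == x * x, {0, 1, 2, 3}, set(range(10)))
print(f(3))  # 9
```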
I have a question about the erratum to the definition of -length in the Riemann-Stieltjes integral: Why do we need this erratum? What's wrong with the simple definition ? In Rudin's Principles of Mathematical Analysis, the definition of -length (or a similar one) is just .
This was discussed in a previous comment. Basically, the simpler definition of the (generalized) Riemann-Stieltjes integral gives results that do not agree with the Lebesgue-Stieltjes integral in some cases (though for Riemann-Stieltjes integrable functions they agree). | CommonCrawl |
2020 Colorado Academic Standards Online
Content Area: Mathematics // Grade Level: Seventh Grade // Standard Category: All Standards Categories
Seventh Grade, Standard 1. Number and Quantity
Prepared Graduates:
MP1. Make sense of problems and persevere in solving them.
MP2. Reason abstractly and quantitatively.
MP8. Look for and express regularity in repeated reasoning.
Grade Level Expectation:
7.RP.A. Ratios & Proportional Relationships: Analyze proportional relationships and use them to solve real-world and mathematical problems.
Evidence Outcomes:
Students Can:
Compute unit rates associated with ratios of fractions, including ratios of lengths, areas, and other quantities measured in like or different units. For example, if a person walks \(\frac{1}{2}\) mile in each \(\frac{1}{4}\) hour, compute the unit rate as the complex fraction \(\frac{\frac{1}{2}}{\frac{1}{4}}\) miles per hour, equivalently \(2\) miles per hour. (CCSS: 7.RP.A.1)
Identify and represent proportional relationships between quantities. (CCSS: 7.RP.A.2)
Determine whether two quantities are in a proportional relationship, e.g., by testing for equivalent ratios in a table or graphing on a coordinate plane and observing whether the graph is a straight line through the origin. (CCSS: 7.RP.A.2.a)
Identify the constant of proportionality (unit rate) in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships. (CCSS: 7.RP.A.2.b)
Represent proportional relationships by equations. For example, if total cost \(t\) is proportional to the number \(n\) of items purchased at a constant price \(p\), the relationship between the total cost and the number of items can be expressed as \(t = pn\). (CCSS: 7.RP.A.2.c)
Explain what a point \(\left(x,y\right)\) on the graph of a proportional relationship means in terms of the situation, with special attention to the points \(\left(0, 0\right)\) and \(\left(1, r\right)\) where \(r\) is the unit rate. (CCSS: 7.RP.A.2.d)
Use proportional relationships to solve multistep ratio and percent problems. Examples: simple interest, tax, markups and markdowns, gratuities and commissions, fees, percent increase and decrease, percent error. (CCSS: 7.RP.A.3)
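The complex-fraction unit rate in the first outcome above, and the equivalent-ratios test for proportionality, can be computed with exact rational arithmetic. A sketch (the table values are my own illustration):

```python
from fractions import Fraction

# Unit rate from the standard's example: 1/2 mile in each 1/4 hour
# (CCSS 7.RP.A.1): the complex fraction (1/2)/(1/4) miles per hour.
distance = Fraction(1, 2)    # miles
time = Fraction(1, 4)        # hours
unit_rate = distance / time  # miles per hour
print(unit_rate)             # 2

# Proportionality check (CCSS 7.RP.A.2.a): test for equivalent ratios
# in a table of (hours, miles) pairs.
table = [(Fraction(1, 4), Fraction(1, 2)), (Fraction(1, 2), 1), (1, 2)]
proportional = all(Fraction(d) / Fraction(t) == unit_rate for t, d in table)
print(proportional)  # True
```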
Academic Contexts and Connections:
Colorado Essential Skills and Mathematical Practices:
Recognize when proportional relationships occur and apply these relationships to personal experiences. (Entrepreneurial Skills: Inquiry/Analysis)
Recognize, identify, and solve problems that involve proportional relationships to make predictions and describe associations among variables. (MP1)
Reason quantitatively with rates and their units in proportional relationships. (MP2)
Use repeated reasoning to test for equivalent ratios, such as reasoning that walking \(\frac{1}{2}\) mile in \(\frac{1}{4}\) hour is equivalent to walking \(1\) mile in \(\frac{1}{2}\) hour and equivalent to walking \(2\) miles in \(1\) hour, the unit rate. (MP8)
Inquiry Questions:
How are proportional relationships related to unit rates?
How can proportional relationships be expressed using tables, equations, and graphs?
What are properties of all proportional relationships when graphed on the coordinate plane?
Coherence Connections:
This expectation represents major work of the grade.
In Grade 6, students understand ratio concepts and use ratio reasoning to solve problems.
This expectation connects with several others in Grade 7: (a) solving real-life and mathematical problems using numerical and algebraic expressions and equations, (b) investigating chance processes and developing, using, and evaluating probability models, and (c) drawing, constructing, and describing geometrical figures and describing the relationships between them.
In Grade 8, students (a) understand the connections between proportional relationships, lines, and linear equations, (b) define, evaluate, and compare functions, and (c) use functions to model relationships between quantities. In high school, students use proportional relationships to define trigonometric ratios, solve problems involving right triangles, and find arc lengths and areas of sectors of circles.
MP3. Construct viable arguments and critique the reasoning of others.
MP7. Look for and make use of structure.
7.NS.A. The Number System: Apply and extend previous understandings of operations with fractions to add, subtract, multiply, and divide rational numbers.
Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram. (CCSS: 7.NS.A.1)
Describe situations in which opposite quantities combine to make \(0\). For example, a hydrogen atom has \(0\) charge because its two constituents are oppositely charged. (CCSS: 7.NS.A.1.a)
Demonstrate \(p+q\) as the number located a distance \(\left|q\right|\) from \(p\), in the positive or negative direction depending on whether \(q\) is positive or negative. Show that a number and its opposite have a sum of \(0\) (are additive inverses). Interpret sums of rational numbers by describing real-world contexts. (CCSS: 7.NS.A.1.b)
Demonstrate subtraction of rational numbers as adding the additive inverse, \(p-q=p+\left(-q\right)\). Show that the distance between two rational numbers on the number line is the absolute value of their difference, and apply this principle in real-world contexts. (CCSS: 7.NS.A.1.c)
Apply properties of operations as strategies to add and subtract rational numbers. (CCSS: 7.NS.A.1.d)
Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers. (CCSS: 7.NS.A.2)
Understand that multiplication is extended from fractions to rational numbers by requiring that operations continue to satisfy the properties of operations, particularly the distributive property, leading to products such as \(\left(-1\right)\left(-1\right) = 1\) and the rules for multiplying signed numbers. Interpret products of rational numbers by describing real-world contexts. (CCSS: 7.NS.A.2.a)
Understand that integers can be divided, provided that the divisor is not zero, and every quotient of integers (with non-zero divisor) is a rational number. If \(p\) and \(q\) are integers, then \(-\left(\frac{p}{q}\right) = \frac{-p}{q} = \frac{p}{-q}\). Interpret quotients of rational numbers by describing real-world contexts. (CCSS: 7.NS.A.2.b)
Apply properties of operations as strategies to multiply and divide rational numbers. (CCSS: 7.NS.A.2.c)
Convert a rational number to a decimal using long division; know that the decimal form of a rational number terminates in \(0\)s or eventually repeats. (CCSS: 7.NS.A.2.d)
Solve real-world and mathematical problems involving the four operations with rational numbers. (Computations with rational numbers extend the rules for manipulating fractions to complex fractions.) (CCSS: 7.NS.A.3)
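Outcome 7.NS.A.2.d above (the decimal form of a rational number terminates in 0s or eventually repeats) can be demonstrated by long division with remainder-cycle detection. A sketch, with the repetend shown in parentheses:

```python
# Long division for p/q (p >= 0, q > 0), detecting whether the decimal
# expansion terminates or repeats (CCSS 7.NS.A.2.d).  A remainder seen
# twice means the digits between the two occurrences repeat forever.

def decimal_expansion(p, q):
    integer_part, remainder = divmod(p, q)
    digits, seen = [], {}
    while remainder != 0 and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digit, remainder = divmod(remainder, q)
        digits.append(str(digit))
    if remainder == 0:
        return f"{integer_part}." + ("".join(digits) or "0")
    start = seen[remainder]
    return f"{integer_part}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

print(decimal_expansion(1, 4))  # 0.25
print(decimal_expansion(1, 3))  # 0.(3)
print(decimal_expansion(1, 7))  # 0.(142857)
```

Since there are only q possible remainders, some remainder must recur within q steps, which is why every rational decimal either terminates or repeats.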
Solve problems with rational numbers using all four operations. (Entrepreneurial Skills: Critical Thinking/Problem Solving)
Compute with rational numbers abstractly and interpret quantities in context. (MP2)
Justify understanding and computational accuracy of operations with rational numbers. (MP3)
Use additive inverses, absolute value, the distributive property, and properties of operations to reason with and operate on rational numbers. (MP7)
How do operations with integers compare to and contrast with operations with whole numbers?
How can operations with negative integers be modeled visually?
How can it be determined if the decimal form of a rational number terminates or repeats?
In previous grades, students use the four operations with whole numbers and fractions to solve problems.
In Grade 7, this expectation connects with solving real-life and mathematical problems using numerical and algebraic expressions and equations. This expectation begins the formal study of rational numbers (a number expressible in the form \(\frac{a}{b}\) or \(-\frac{a}{b}\) for some fraction \(\frac{a}{b}\); the rational numbers include the integers) as extended from their study of fractions, which in these standards always refers to non-negative numbers.
In Grade 8, students extend their study of the real number system to include irrational numbers, radical expressions, and integer exponents. In high school, students work with rational exponents and complex numbers.
Seventh Grade, Standard 2. Algebra and Functions
7.EE.A. Expressions & Equations: Use properties of operations to generate equivalent expressions.
Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions with rational coefficients. (CCSS: 7.EE.A.1)
Demonstrate that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related. For example, \(a + 0.05a = 1.05a\) means that "increase by \(5\%\)" is the same as "multiply by \(1.05\)." (CCSS: 7.EE.A.2)
Recognize that the structures of equivalent algebraic expressions provide different ways of seeing the same problem. (MP7)
How is it determined that two algebraic expressions are equivalent?
What is the value of having an algebraic expression in equivalent forms?
In Grade 6, students apply and extend previous understandings of arithmetic to algebraic expressions.
In Grade 8, students use equivalent expressions to analyze and solve linear equations and pairs of simultaneous linear equations. In high school, students use equivalent expressions within various families of functions to reveal key features of graphs and how those features are related to contextual situations.
MP5. Use appropriate tools strategically.
MP6. Attend to precision.
7.EE.B. Expressions & Equations: Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form (whole numbers, fractions, and decimals), using tools strategically. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies. For example: If a woman making \(\$25\) an hour gets a \(10\%\) raise, she will make an additional \(\frac{1}{10}\) of her salary an hour, or \(\$2.50\), for a new salary of \(\$27.50\). If you want to place a towel bar \(9 \frac{3}{4}\) inches long in the center of a door that is \(27 \frac{1}{2}\) inches wide, you will need to place the bar about \(9\) inches from each edge; this estimate can be used as a check on the exact computation. (CCSS: 7.EE.B.3)
Use variables to represent quantities in a real-world or mathematical problem, and construct simple equations and inequalities to solve problems by reasoning about the quantities. (CCSS: 7.EE.B.4)
Solve word problems leading to equations of the form \(px \pm q = r\) and \(p\left(x \pm q\right) = r\), where \(p\), \(q\), and \(r\) are specific rational numbers. Solve equations of these forms fluently. Compare an algebraic solution to an arithmetic solution, identifying the sequence of the operations used in each approach. For example, the perimeter of a rectangle is \(54\) cm. Its length is \(6\) cm. What is its width? (CCSS: 7.EE.B.4.a)
Solve word problems leading to inequalities of the form \(px \pm q > r\), \(px \pm q \geq r\), \(px \pm q < r\), or \(px \pm q \leq r\), where \(p\), \(q\), and \(r\) are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem. For example: As a salesperson, you are paid \(\$50\) per week plus \(\$3\) per sale. This week you want your pay to be at least \(\$100\). Write an inequality for the number of sales you need to make and describe the solutions. (CCSS: 7.EE.B.4.b)
Adapt to different forms of equations and inequalities and reach solutions that make sense in context. (Personal Skills: Adaptability/Flexibility)
Use mental computation and estimation to check the reasonableness of their solutions. Make connections between the sequence of operations used in an algebraic approach and an arithmetic approach, understanding how simply reasoning about the numbers connects to writing and solving a corresponding algebraic equation. (MP1)
Represent a situation symbolically and solve, attending to the meaning of quantities and variables. (MP2)
Select an appropriate solution approach (calculator, mental math, drawing a diagram, etc.) based on the specific values and/or desired result of a problem. (MP5)
Use estimation, mental calculations, and understanding of real-world contexts to assess the reasonableness of answers to real-life and mathematical problems. (MP6)
Do the properties of operations apply to variables the same way they do to numbers? Why?
Why are there different ways to solve equations?
In what scenarios might estimation be better than an exact answer?
How can the reasonableness of a solution be determined?
In Grade 6, students reason about and solve one-step, one-variable equations and inequalities.
In Grade 7, this expectation connects with analyzing proportional relationships, using them to solve real-world and mathematical problems, and applying and extending previous understandings of operations with fractions to add, subtract, multiply, and divide rational numbers.
In Grade 8, students work with radicals and integer exponents, analyze and solve linear equations and pairs of simultaneous linear equations, and describe functional relationships.
Seventh Grade, Standard 3. Data, Statistics, and Probability
MP4. Model with mathematics.
7.SP.A. Statistics & Probability: Use random sampling to draw inferences about a population.
Understand that statistics can be used to gain information about a population by examining a sample of the population; explain that generalizations about a population from a sample are valid only if the sample is representative of that population. Explain that random sampling tends to produce representative samples and support valid inferences. (CCSS: 7.SP.A.1)
Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions. For example, estimate the mean word length in a book by randomly sampling words from the book; predict the winner of a school election based on randomly sampled survey data. Gauge how far off the estimate or prediction might be. (CCSS: 7.SP.A.2)
Infer about a population using a random sample. (Entrepreneurial Skills: Inquiry/Analysis)
Make conjectures about population parameters and support arguments with sample data. (MP3)
Use multiple samples to informally model the variability of sample statistics like the mean. (MP4)
Why would a researcher use sampling for a study or survey?
Why does random sampling give more trustworthy results than nonrandom sampling in a study or survey? How might methods for obtaining a sample for a study or survey affect the results of the survey?
How can the winner of an election be predicted from a sample before all of the ballots are counted?
This expectation supports the major work of the grade.
In Grade 6, students develop understanding of statistical variability.
In Grade 7, this expectation connects with drawing informal comparative inferences about two populations, investigating chance processes, and with developing, using, and evaluating probability models.
In high school, students understand and evaluate random processes underlying statistical experiments and also make inferences and justify conclusions from sample surveys, experiments, and observational studies.
7.SP.B. Statistics & Probability: Draw informal comparative inferences about two populations.
Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the centers by expressing it as a multiple of a measure of variability. For example, the mean height of players on the basketball team is \(10\) cm greater than the mean height of players on the soccer team, about twice the variability (mean absolute deviation) on either team; on a dot plot, the separation between the two distributions of heights is noticeable. (CCSS: 7.SP.B.3)
Use measures of center and measures of variability for numerical data from random samples to draw informal comparative inferences about two populations. For example, decide whether the words in a chapter of a seventh-grade science book are generally longer than the words in a chapter of a fourth-grade science book. (CCSS: 7.SP.B.4)
Interpret variability in statistical distributions and draw conclusions about the distance between their centers using units of mean absolute deviation. (Entrepreneurial Skills: Inquiry/Analysis)
Base arguments about the difference between two distributions on the relative variability of the distributions, not just the difference between the two distribution means. (MP3)
Model real-world populations with statistical distributions and compare the distributions using measures of center and variability. (MP4)
How do measures of center (such as mean) and variability (such as mean absolute deviation) work together to describe comparisons of data?
How can we use measures of center and variability to compare two data sets? Why is it not wise to compare two data sets using only measures of center?
This expectation is in addition to the major work of the grade.
In Grade 6, students study measures of center and variability to describe, compare, and contrast data sets.
In Grade 7, this expectation connects with using random sampling to draw inferences about a population.
In high school, students summarize, represent, and interpret data on a single count or measurement variable and also make inferences and justify conclusions from sample surveys, experiments, and observational studies.
7.SP.C. Statistics & Probability: Investigate chance processes and develop, use, and evaluate probability models.
Explain that the probability of a chance event is a number between \(0\) and \(1\) that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near \(0\) indicates an unlikely event, a probability around \(\frac{1}{2}\) indicates an event that is neither unlikely nor likely, and a probability near \(1\) indicates a likely event. (CCSS: 7.SP.C.5)
Approximate the probability of a chance event by collecting data on the chance process that produces it and observing its long-run relative frequency, and predict the approximate relative frequency given the probability. For example, when rolling a number cube \(600\) times, predict that a \(3\) or \(6\) would be rolled roughly \(200\) times, but probably not exactly \(200\) times. (CCSS: 7.SP.C.6)
Develop a probability model and use it to find probabilities of events. Compare probabilities from a model to observed frequencies; if the agreement is not good, explain possible sources of the discrepancy. (CCSS: 7.SP.C.7)
Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events. For example, if a student is selected at random from a class, find the probability that Jane will be selected and the probability that a girl will be selected. (CCSS: 7.SP.C.7.a)
Develop a probability model (which may not be uniform) by observing frequencies in data generated from a chance process. For example, find the approximate probability that a spinning penny will land heads up or that a tossed paper cup will land open-end down. Do the outcomes for the spinning penny appear to be equally likely based on the observed frequencies? (CCSS: 7.SP.C.7.b)
Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation. (CCSS: 7.SP.C.8)
Explain that, just as with simple events, the probability of a compound event is the fraction of outcomes in the sample space for which the compound event occurs. (CCSS: 7.SP.C.8.a)
Represent sample spaces for compound events using methods such as organized lists, tables, and tree diagrams. For an event described in everyday language (e.g., "rolling double sixes"), identify the outcomes in the sample space which compose the event. (CCSS: 7.SP.C.8.b)
Design and use a simulation to generate frequencies for compound events. For example, use random digits as a simulation tool to approximate the answer to the question: If \(40\%\) of donors have type A blood, what is the probability that it will take at least \(4\) donors to find one with type A blood? (CCSS: 7.SP.C.8.c)
Be innovative when designing simulations to generate frequencies of compound events by using random digits, dice, coins, or other chance objects to represent the probabilities of real-world events. (Entrepreneurial Skills: Creativity/Innovation)
Use probability models and simulations to predict outcomes of real-world chance events both theoretically and experimentally. (MP4)
Use technology, manipulatives, and simulations to determine probabilities and understand chance events. (MP5)
Since the probability of getting heads on the toss of a fair coin is \(\frac{1}{2}\), does that mean for every one hundred tosses of a coin exactly fifty of them will be heads? Why or why not?
What might a discrepancy in the predicted outcome and the actual outcome of a chance event tell us?
In prior grades, students study rational numbers and operations with rational numbers.
In Grade 7, probability concepts support the major work of understanding rational numbers. This expectation connects with analyzing proportional relationships, using them to solve real-world and mathematical problems, and using random sampling to draw inferences about a population.
In high school, students understand and evaluate random processes underlying statistical experiments, understand independence and conditional probability and use them to interpret data, and use the rules of probability to compute probabilities of compound events in a uniform probability model.
Seventh Grade, Standard 4. Geometry
7.G.A. Geometry: Draw, construct, and describe geometrical figures and describe the relationships between them.
Solve problems involving scale drawings of geometric figures, including computing actual lengths and areas from a scale drawing and reproducing a scale drawing at a different scale. (CCSS: 7.G.A.1)
Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Focus on constructing triangles from three measures of angles or sides, noticing when the conditions determine a unique triangle, more than one triangle, or no triangle. (CCSS: 7.G.A.2)
Describe the two-dimensional figures that result from slicing three-dimensional figures, as in cross sections of right rectangular prisms and right rectangular pyramids. (CCSS: 7.G.A.3)
Investigate what side and angle measurements are necessary to determine a unique triangle. (Entrepreneurial Skills: Inquiry/Analysis)
Reason abstractly by deconstructing three-dimensional shapes into two-dimensional cross-sections. (MP2)
Describe, analyze, and generalize about the resulting cross-section of a sliced three-dimensional figure and justify their reasoning. (MP3)
Appropriately use paper, pencil, ruler, compass, protractor, or technology to draw geometric shapes. (MP5)
How are proportions used to solve problems involving scale drawings?
What are some examples of cross-sections whose shapes may be identical but are from different three-dimensional figures?
In Grade 6, students solve real-world and mathematical problems involving area, surface area, and volume.
In Grade 7, this expectation connects with analyzing proportional relationships and using them to solve real-world and mathematical problems.
In Grade 8, students understand the connections between proportional relationships, lines, and linear equations, and understand congruence and similarity using physical models, transparencies, or geometry software. In high school, students use geometric constructions as a basis for geometric proof.
7.G.B. Geometry: Solve real-life and mathematical problems involving angle measure, area, surface area, and volume.
State the formulas for the area and circumference of a circle and use them to solve problems; give an informal derivation of the relationship between the circumference and area of a circle. (CCSS: 7.G.B.4)
Use facts about supplementary, complementary, vertical, and adjacent angles in a multistep problem to write and solve simple equations for an unknown angle in a figure. (CCSS: 7.G.B.5)
Solve real-world and mathematical problems involving area, volume, and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. (CCSS: 7.G.B.6)
Solve problems involving angle measure, area, surface area, and volume. (Entrepreneurial Skills: Inquiry/Analysis)
Persevere with complex shapes by analyzing their component parts and applying geometric properties and measures of area and volume. (MP1)
Model real-world situations involving area, surface area, and volume. (MP4)
Reason accurately with measurement units when calculating angles, circumference, area, surface area, and volume. (MP6)
How can the formula for the area of a circle be derived from the formula for the circumference of the circle?
What are the angle measure relationships in supplementary, complementary, vertical, and adjacent angles?
What are some examples of real-world situations where one would need to find (a) area, (b) volume, and (c) surface area?
In previous grades, students understand concepts of angle, measure angles, and solve real-world and mathematical problems involving area, surface area, and volume.
In Grade 8, students understand congruence and similarity using physical models, transparencies, or geometry software, and understand and apply the Pythagorean Theorem. Students also use the formulas for the volumes of cones, cylinders, and spheres to solve real-world and mathematical problems.
The P-12 concepts and skills that all students who complete the Colorado education system must master to ensure their success in a postsecondary and workforce setting.
What do students need to know?
Grade Level Expectation: High School:
The articulation of the concepts and skills of a standard that indicates a student is making progress toward being a prepared graduate.
Grade Level Expectations:
The articulation, at each grade level, of the concepts and skills of a standard that indicates a student is making progress toward being ready for high school.
How do we know that a student can do it?
The indication that a student is meeting an expectation at the mastery level.
Academic Context and Connections:
Academic context and connections are the subject-specific elements needed to create context for learning. This right side section highlights essential skills, practices and other important connections necessary for students to understand, apply and transfer the knowledge and skills within the Grade Level Expectation.
Sample questions intended to promote deeper thinking, reflection and refined understandings precisely related to the grade level expectation.
Relevance and Application:
Examples of how the grade level expectation is applied at home, on the job or in a real-world, relevant context.
Nature of the Discipline:
The characteristics and viewpoint one keeps as a result of mastering the grade level expectation.
Supportive Teaching Practices/Adults May
Embed standards throughout daily routines, within and across the environment and activities to ensure developmentally appropriate learning opportunities.
Colorado Essential Skills
The critical skills needed to prepare students to successfully enter the workforce or educational opportunities beyond high school embedded within statute (C.R.S. 22-7-1005) and identified by the Colorado Workforce Development Committee.
Colorado Essential Skills and Mathematical Practices
This section describes ways students engage with the mathematical content using mathematical practices and essential skills needed to prepare students to successfully enter the workforce or educational opportunities beyond high school pursuant to Colorado Revised Statute 22-7-1005 and identified by the Colorado Workforce Development Committee.
Inquiry Questions
The sample questions that are intended to promote deeper thinking, reflection and refined understandings precisely related to the grade level expectation.
Coherence Connections
This section describes how the content described by the Grade Level Expectation and Evidence Outcomes builds from content learned in prior grades, connects to content in the same grade, and supports student learning in later grades.
Elaboration on the GLE
This section provides greater context for the Grade Level Expectation (GLE) through a description of the understanding about the core ideas that should be developed by students.
Colorado Essential Skills and Science and Engineering Practices
Skills and major practices that scientists employ as they investigate and build models and theories about the world. These terms are used to emphasize that engaging in scientific investigation requires not only skill but also knowledge that is specific to each practice.
Cross Cutting Concepts
The crosscutting concepts have application across all domains of science. As such, they provide one way of linking across the domains through core ideas.
Computer Science Practices
This section highlights computer science practices connected to the GLE which describe the behaviors and ways of thinking that computationally literate students use to fully engage in today's data-rich and interconnected world. The practices naturally integrate with one another and contain language that intentionally overlaps to illuminate the connections among them.
These "big picture" questions ask students to more deeply explore the concepts and skills expressed in the GLE.
Essential Reasoning Skills
These skills develop critical thinking, build awareness of multiple perspectives, and engage students in "thinking about their thinking," considering their own attitudes, beliefs, and biases on issues.
Colorado Essential Skills and Real-World Application
The critical skills needed to prepare students to successfully enter the workforce or educational opportunities beyond high school embedded within statute (C.R.S. 22-7-1005) and identified by the Colorado Workforce Development Committee. Connections to how these skills relate to lifelong learning have been provided.
Expand and Connect
Ideas that can be used to expand student thinking around the concepts, connect to other musical concepts, or connect to other content areas outside of music.
Colorado Essential Skills and Meaning Making
Ways in which students demonstrate the ability to form, grapple with, and convey concepts and ideas through visual art and design with real-world application.
Learning Experience and Transfer
Ideas that can be used to expand student thinking, encourage conceptual curiosity, and connect multiple disciplines and literacies.
Nature and Skills of History
Nature and Skills of Geography
Nature and Skills of Economics
Nature and Skills of Civics
Disciplinary, Information, & Media Literacy
The disciplinary, information, and media literacy skills necessary to demonstrate mastery of the Evidence Outcomes.
Nature of Dance
The characteristics and viewpoint one keeps as a result of mastering the grade level expectation.
Health Skills
This section connects and focuses on the key health specific skills connected to this grade level expectation.
Components of a Physically Literate Individual
Connects the GLE to physical literacy and how it supports students' ability to move with competence and confidence in a wide variety of physical activities in multiple environments that benefit the healthy development of the whole person.
Kirchhoff's theorem
In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem, named after Gustav Kirchhoff, is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time from the determinant of a submatrix of the Laplacian matrix of the graph; specifically, the number is equal to any cofactor of the Laplacian matrix. Kirchhoff's theorem is a generalization of Cayley's formula, which provides the number of spanning trees in a complete graph.
Kirchhoff's theorem relies on the notion of the Laplacian matrix of a graph that is equal to the difference between the graph's degree matrix (a diagonal matrix with vertex degrees on the diagonals) and its adjacency matrix (a (0,1)-matrix with 1's at places corresponding to entries where the vertices are adjacent and 0's otherwise).
For a given connected graph G with n labeled vertices, let $\lambda _{1},\lambda _{2},\ldots ,\lambda _{n-1}$ be the non-zero eigenvalues of its Laplacian matrix. Then the number of spanning trees of G is
$t(G)={\frac {1}{n}}\lambda _{1}\lambda _{2}\cdots \lambda _{n-1}\,.$
An example using the matrix-tree theorem
First, construct the Laplacian matrix Q for the example diamond graph G (see image on the right):
$Q=\left[{\begin{array}{rrrr}2&-1&-1&0\\-1&3&-1&-1\\-1&-1&3&-1\\0&-1&-1&2\end{array}}\right].$
Next, construct a matrix Q* by deleting any row and any column from Q. For example, deleting row 1 and column 1 yields
$Q^{\ast }=\left[{\begin{array}{rrr}3&-1&-1\\-1&3&-1\\-1&-1&2\end{array}}\right].$
Finally, take the determinant of Q* to obtain t(G), which is 8 for the diamond graph. (Notice t(G) is the (1,1)-cofactor of Q in this example.)
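The computation above is easy to reproduce in a few lines of pure Python. The edge list and the vertex numbering below are our own labeling of the diamond graph; the determinant is evaluated by naive cofactor expansion, which is perfectly adequate at this size:

```python
def det(M):
    # Integer determinant by cofactor (Laplace) expansion along the
    # first row -- fine for the tiny matrices in this example.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Diamond graph from the example: vertices 0..3, five edges.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4
Q = [[0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] += 1
    Q[j][j] += 1
    Q[i][j] -= 1
    Q[j][i] -= 1

# Q*: delete row 0 and column 0 of Q, then take the determinant.
Qstar = [row[1:] for row in Q[1:]]
print(det(Qstar))  # → 8
```

Deleting a different row and column gives the same value, since all cofactors of the Laplacian are equal.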
Proof outline
(The proof below is based on the Cauchy-Binet formula. An elementary induction argument for Kirchhoff's theorem can be found on page 654 of Moore (2011).[1])
First notice that the Laplacian matrix has the property that the sum of its entries across any row and any column is 0. Thus we can transform any minor into any other minor by adding rows and columns, switching them, and multiplying a row or a column by −1. Thus the cofactors are the same up to sign, and it can be verified that, in fact, they have the same sign.
We proceed to show that the determinant of the minor $M_{11}$ counts the number of spanning trees. Let n be the number of vertices of the graph, and m the number of its edges. The incidence matrix E is an n-by-m matrix, which may be defined as follows: suppose that (i, j) is the kth edge of the graph, and that i < j. Then $E_{ik}=1$, $E_{jk}=-1$, and all other entries in column k are 0 (see oriented incidence matrix for understanding this modified incidence matrix E). For the preceding example (with n = 4 and m = 5):
$E={\begin{bmatrix}1&1&0&0&0\\-1&0&1&1&0\\0&-1&-1&0&1\\0&0&0&-1&-1\\\end{bmatrix}}.$
Recall that the Laplacian L can be factored into the product of the incidence matrix and its transpose, i.e., $L=EE^{\mathrm {T} }$. Furthermore, let F be the matrix E with its first row deleted, so that $FF^{\mathrm {T} }=M_{11}$.
Now the Cauchy-Binet formula allows us to write
$\det \left(M_{11}\right)=\sum _{S}\det \left(F_{S}\right)\det \left(F_{S}^{\mathrm {T} }\right)=\sum _{S}\det \left(F_{S}\right)^{2}$
where S ranges across subsets of [m] of size n − 1, and $F_{S}$ denotes the (n − 1)-by-(n − 1) matrix whose columns are those of F with index in S. Then every S specifies n − 1 edges of the original graph, and it can be shown that those edges induce a spanning tree if and only if the determinant of $F_{S}$ is +1 or −1, and that they do not induce a spanning tree if and only if the determinant is 0. This completes the proof.
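The Cauchy-Binet expansion can be verified numerically for the diamond graph: summing the squared subdeterminants over the ten 3-element column subsets gives the number of spanning trees, and exactly eight of the subdeterminants are ±1 (the two subsets that form triangles give 0). A small pure-Python check, with the incidence matrix E as printed above:

```python
from itertools import combinations

def det(M):
    # Naive integer determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Oriented incidence matrix E of the diamond graph, and F = E with
# its first row deleted.
E = [[ 1,  1,  0,  0,  0],
     [-1,  0,  1,  1,  0],
     [ 0, -1, -1,  0,  1],
     [ 0,  0,  0, -1, -1]]
F = E[1:]

total = 0
tree_count = 0
for S in combinations(range(5), 3):          # n - 1 = 3 of the m = 5 edges
    FS = [[row[j] for j in S] for row in F]  # columns of F indexed by S
    d = det(FS)
    total += d * d
    if d in (1, -1):
        tree_count += 1

print(total, tree_count)  # → 8 8
```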
Particular cases and generalizations
Cayley's formula
Main article: Cayley's formula
Cayley's formula follows from Kirchhoff's theorem as a special case, since every vector with 1 in one place, −1 in another place, and 0 elsewhere is an eigenvector of the Laplacian matrix of the complete graph, with the corresponding eigenvalue being n. These vectors together span a space of dimension n − 1, so there are no other non-zero eigenvalues.
Alternatively, note that since Cayley's formula counts the number of distinct labeled trees of a complete graph $K_{n}$, we need to compute any cofactor of the Laplacian matrix of $K_{n}$. The Laplacian matrix in this case is
${\begin{bmatrix}n-1&-1&\cdots &-1\\-1&n-1&\cdots &-1\\\vdots &\vdots &\ddots &\vdots \\-1&-1&\cdots &n-1\\\end{bmatrix}}.$
Any cofactor of the above matrix is $n^{n-2}$, which is Cayley's formula.
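For small n this is easy to confirm by direct computation. The sketch below (pure Python, naive determinant) checks that the cofactor of the Laplacian of the complete graph equals n^(n−2) for n = 2 through 6:

```python
def det(M):
    # Naive integer determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

for n in range(2, 7):
    # Laplacian of K_n: n - 1 on the diagonal, -1 everywhere else.
    L = [[n - 1 if i == j else -1 for j in range(n)] for i in range(n)]
    cofactor = det([row[1:] for row in L[1:]])   # delete row 0, column 0
    print(n, cofactor, n ** (n - 2))             # the last two columns agree
```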
Kirchhoff's theorem for multigraphs
Kirchhoff's theorem holds for multigraphs as well; the matrix Q is modified as follows:
• The entry $q_{i,j}$ equals −m, where m is the number of edges between i and j;
• when counting the degree of a vertex, all loops are excluded.
Cayley's formula for a complete multigraph is $m^{n-1}(n^{n-1}-(n-1)n^{n-2})$, obtained by the same method as above, since a simple graph is a multigraph with m = 1.
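As an illustration of the multigraph rules, consider a toy multigraph of our own: three vertices, two parallel edges between vertices 0 and 1, one edge between 1 and 2, and a loop at vertex 2. The loop is ignored, and the only spanning trees are the two obtained by picking one of the parallel edges together with the edge (1, 2):

```python
def det(M):
    # Naive integer determinant by cofactor expansion.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Toy multigraph: two parallel edges between 0 and 1, one edge between
# 1 and 2, and a loop at vertex 2 (which the rules above exclude).
edges = [(0, 1), (0, 1), (1, 2), (2, 2)]
n = 3
Q = [[0] * n for _ in range(n)]
for i, j in edges:
    if i == j:
        continue  # loops are excluded from the degree count
    Q[i][i] += 1; Q[j][j] += 1
    Q[i][j] -= 1; Q[j][i] -= 1

trees = det([row[1:] for row in Q[1:]])
print(trees)  # → 2: either parallel edge together with the edge (1, 2)
```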
Explicit enumeration of spanning trees
Kirchhoff's theorem can be strengthened by altering the definition of the Laplacian matrix. Rather than merely counting edges emanating from each vertex or connecting a pair of vertices, label each edge with an indeterminate and let the (i, j)-th entry of the modified Laplacian matrix be the sum over the indeterminates corresponding to edges between the i-th and j-th vertices when i does not equal j, and the negative sum over all indeterminates corresponding to edges emanating from the i-th vertex when i equals j.
The determinant of the modified Laplacian matrix obtained by deleting any row and column (as in finding the number of spanning trees from the original Laplacian matrix) is then a homogeneous polynomial (the Kirchhoff polynomial) in the indeterminates corresponding to the edges of the graph. After collecting terms and performing all possible cancellations, each monomial in the resulting expression represents a spanning tree consisting of the edges corresponding to the indeterminates appearing in that monomial. In this way, one can obtain an explicit enumeration of all the spanning trees of the graph simply by computing the determinant.
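The weighted determinant can be expanded symbolically in a few lines of pure Python by representing each polynomial as a map from monomials (sorted tuples of edge indices) to integer coefficients. This is a sketch of our own, not library code, and it uses the same sign convention as the Laplacian earlier in the article (positive sums on the diagonal, −x_k off the diagonal); after cancellation, the surviving monomials for the diamond graph are exactly its eight spanning trees, each with coefficient 1:

```python
from itertools import combinations

def pmul(p, q):
    # Product of polynomials stored as {monomial: coeff}; a monomial is a
    # sorted tuple of edge indices, kept with multiplicity.
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(sorted(m1 + m2))
            r[m] = r.get(m, 0) + c1 * c2
    return r

def pdet(M):
    # Symbolic determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    total = {}
    for j in range(len(M)):
        term = pmul(M[0][j], pdet([row[:j] + row[j + 1:] for row in M[1:]]))
        sign = -1 if j % 2 else 1
        for m, c in term.items():
            total[m] = total.get(m, 0) + sign * c
    return {m: c for m, c in total.items() if c}

# Modified Laplacian of the diamond graph: edge k carries indeterminate x_k.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4
M = [[{} for _ in range(n)] for _ in range(n)]
for k, (i, j) in enumerate(edges):
    xk = (k,)
    for a, b, s in ((i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)):
        M[a][b][xk] = M[a][b].get(xk, 0) + s

kirchhoff = pdet([row[1:] for row in M[1:]])  # delete row/column 0

# The surviving monomials are the spanning trees: every 3-edge subset
# except the two triangles {0, 1, 2} and {2, 3, 4}.
expected = set(combinations(range(5), 3)) - {(0, 1, 2), (2, 3, 4)}
print(sorted(kirchhoff))
```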
Matroids
The spanning trees of a graph form the bases of a graphic matroid, so Kirchhoff's theorem provides a formula to count the number of bases in a graphic matroid. The same method may also be used to count the number of bases in regular matroids, a generalization of the graphic matroids (Maurer 1976).
Kirchhoff's theorem for directed multigraphs
Kirchhoff's theorem can be modified to count the number of oriented spanning trees in directed multigraphs. The matrix Q is constructed as follows:
• The entry $q_{i,j}$ for distinct i and j equals −m, where m is the number of edges from i to j;
• The entry $q_{i,i}$ equals the indegree of i minus the number of loops at i.
The number of oriented spanning trees rooted at a vertex i is the determinant of the matrix obtained by removing the ith row and column of Q.
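A small check of our own, reading "rooted at i" as: every vertex is reachable from i along directed edges (the reading that matches the indegree convention above). For the toy digraph with arcs 0→1, 0→2, 1→2 there are exactly two spanning arborescences rooted at 0, and the determinant agrees:

```python
def det(M):
    # Naive integer determinant by cofactor expansion.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Toy digraph: arcs 0->1, 0->2, 1->2.
arcs = [(0, 1), (0, 2), (1, 2)]
n = 3
Q = [[0] * n for _ in range(n)]
for i, j in arcs:
    Q[j][j] += 1   # diagonal entry: indegree of j (no loops here)
    Q[i][j] -= 1   # off-diagonal: minus the number of arcs i -> j

root = 0
minor = [[Q[i][j] for j in range(n) if j != root]
         for i in range(n) if i != root]
count = det(minor)
print(count)  # → 2: {0->1, 0->2} and {0->1, 1->2}
```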
Counting spanning k-component forests
Kirchhoff's theorem can be generalized to count k-component spanning forests in an unweighted graph.[2] A k-component spanning forest is a subgraph with k connected components that contains all vertices and is cycle-free, i.e., there is at most one path between each pair of vertices. Given such a forest F with connected components $ F_{1},\dots ,F_{k}$, define its weight $ w(F)=|V(F_{1})|\cdot \dots \cdot |V(F_{k})|$ to be the product of the number of vertices in each component. Then
$\sum _{F}w(F)=q_{k},$
where the sum is over all k-component spanning forests and $ q_{k}$ is the coefficient of $ x^{k}$ of the polynomial
$(x+\lambda _{1})\dots (x+\lambda _{n-1})x.$
The last factor in the polynomial is due to the zero eigenvalue $ \lambda _{n}=0$. More explicitly, the number $ q_{k}$ can be computed as
$q_{k}=\sum _{\{i_{1},\dots ,i_{n-k}\}\subset \{1\dots n-1\}}\lambda _{i_{1}}\dots \lambda _{i_{n-k}}.$
where the sum is over all (n − k)-element subsets of $ \{1,\dots ,n-1\}$. For example,
${\begin{aligned}q_{n-1}&=\lambda _{1}+\dots +\lambda _{n-1}=\mathrm {tr} Q=2|E|\\q_{n-2}&=\lambda _{1}\lambda _{2}+\lambda _{1}\lambda _{3}+\dots +\lambda _{n-2}\lambda _{n-1}\\q_{2}&=\lambda _{1}\dots \lambda _{n-2}+\lambda _{1}\dots \lambda _{n-3}\lambda _{n-1}+\dots +\lambda _{2}\dots \lambda _{n-1}\\q_{1}&=\lambda _{1}\dots \lambda _{n-1}\\\end{aligned}}$
Since a spanning forest with n − 1 components corresponds to a single edge, the k = n − 1 case states that the sum of the eigenvalues of Q is twice the number of edges. The k = 1 case corresponds to the original Kirchhoff theorem, since the weight of every spanning tree is n.
The proof can be done analogously to the proof of Kirchhoff's theorem. An invertible $ (n-k)\times (n-k)$ submatrix of the incidence matrix corresponds bijectively to a k-component spanning forest with a choice of vertex for each component.
The coefficients $ q_{k}$ are up to sign the coefficients of the characteristic polynomial of Q.
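Numerically, the $q_{k}$ can therefore be read off from the Laplacian spectrum. A short sketch (function name ours), checked on the triangle $K_{3}$, whose Laplacian eigenvalues are 0, 3, 3, so the polynomial is $x(x+3)^{2}=x^{3}+6x^{2}+9x$:

```python
import numpy as np

def forest_weights(adj):
    """Return [q_1, ..., q_n] for an undirected graph given by its
    adjacency matrix, using the characteristic polynomial of the
    Laplacian Q = D - A, as in the identity above."""
    A = np.asarray(adj, dtype=float)
    Q = np.diag(A.sum(axis=1)) - A
    # np.poly builds the monic polynomial with the given roots; the
    # roots of prod_i (x + lambda_i) are -lambda_i.  Coefficients come
    # back highest degree first, so x^k has coefficient coeffs[n - k].
    coeffs = np.poly(-np.linalg.eigvalsh(Q))
    n = A.shape[0]
    return [int(round(coeffs[n - k])) for k in range(1, n + 1)]

# Triangle K3: q_1 = 9 (three spanning trees, each of weight n = 3),
# q_2 = 6 (three single-edge forests of weight 2*1), q_3 = 1.
print(forest_weights([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))  # [9, 6, 1]
```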
See also
• List of topics related to trees
• Markov chain tree theorem
• Minimum spanning tree
• Prüfer sequence
References
1. Moore, Cristopher (2011). The nature of computation. Oxford England New York: Oxford University Press. ISBN 978-0-19-923321-2. OCLC 180753706.
2. Biggs, N. (1993). Algebraic Graph Theory. Cambridge University Press.
• Harris, John M.; Hirst, Jeffry L.; Mossinghoff, Michael J. (2008), Combinatorics and Graph Theory, Undergraduate Texts in Mathematics (2nd ed.), Springer.
• Maurer, Stephen B. (1976), "Matrix generalizations of some theorems on trees, cycles and cocycles in graphs", SIAM Journal on Applied Mathematics, 30 (1): 143–148, doi:10.1137/0130017, MR 0392635.
• Tutte, W. T. (2001), Graph Theory, Cambridge University Press, p. 138, ISBN 978-0-521-79489-3.
• Chaiken, S.; Kleitman, D. (1978), "Matrix Tree Theorems", Journal of Combinatorial Theory, Series A, 24 (3): 377–381, doi:10.1016/0097-3165(78)90067-5, ISSN 0097-3165
External links
• A proof of Kirchhoff's theorem
The evolution of job stability and wages after the implementation of the Hartz reforms
The development of job stability and wages since the introduction of the Hartz reforms
Gianna C. Giannelli, Ursula Jaenichen & Thomas Rothe
Journal for Labour Market Research, volume 49, pages 269–294 (2016)
We address the concerns about rising inequality in the German labour market after the implementation of the Hartz reforms between 2003 and 2005. We focus on the quality of new jobs started between 1998 and 2010 in West Germany in terms of job stability and level of earnings. Using social security data drawn from the Integrated Employment Biographies, we analyse the distributions of job durations and wages and model their determinants at the worker level. Our results show a high degree of job stability during and after the reform years, decreasing wage levels and increasing wage dispersion.
Since the Hartz reforms of 2003 to 2005, there have been indications of increased inequality in the German labour market. Using stability and pay as indicators, we examine the quality of employment relationships that began between 1998 and 2010. With administrative data from the Integrated Employment Biographies, we analyse the distributions of job durations and wages and model their individual-level determinants. The results point to a high degree of stability in job durations, a decline in the wage level, and a rise in wage inequality during and after the period of the Hartz reforms.
The Hartz reforms were implemented from 2003 to 2005 after the so-called "placement scandal" of the Federal Employment Agency (Fleckenstein 2008). The aims of the Hartz reforms were to improve public employment services, enhance the efficiency of active labour market policies and decrease the number of unemployed persons in Germany. Among the reforms were the restructuring of the Federal Employment Agency, a reorganisation of the local employment agencies, and several minor legislative changes related to dismissal protection, fixed-term contracts and temporary agency work (see, e. g., Jacobi and Kluve 2007). The most important reform, Hartz IV, abolished the then-existing unemployment assistance for long-term unemployed workers and consolidated this program with social assistance for households in need, thereby worsening the conditions for a vast majority of longer-term unemployment benefit recipients.Footnote 1
In the years following the reforms, the labour market in Germany performed surprisingly well: from 2005 to 2008, unemployment decreased by one-third, while employment liable to social insurance increased by approximately the same amount (1.5 million persons, see Fig. 1). During the recession years of 2008/2009, both the decrease in employment and the increase in unemployment were modest, although GDP decreased dramatically (see Fig. 2). This phenomenon was noticed worldwide, and Krugman even called it "Germany's job miracle". Nevertheless, there are concerns about rising wage inequality (Card et al. 2013) and the increasing prevalence of atypical and low-wage jobs (Eichhorst and Tobsch 2015), tendencies that may have laid the groundwork for the introduction of a general minimum wage in Germany in 2015.
Number of Employed and Unemployed persons in Germany (Source: German Federal Employment Agency, quarterly average of monthly data, seasonally adjusted, own calculations)
Hiring and Separation Rates over the Business Cycle (Source: Quarterly hiring rate [hirings(t)/employment(t-1)] and separation rate [separations(t)/employment (t-1)] are calculated using official data of the German Federal Employment Agency. GDP is provided by the Federal Statistical Office)
This paper takes the positive overall performance of the German labour market following the Hartz reforms and during the great recession as a starting point for our analysis of whether there is an unsavoury side to this positive trend in the form of lower job quality. We use job duration and wages as indicators of job quality and look at new jobs started between 1998 and 2010 (the terms "job duration" and "job stability" are used as synonyms throughout the paper). We are interested in whether changes in job quality occurred in the "middle" of the labour market and therefore analyse the stability and wages of regular jobs covered by social security.
The quality of a specific job has various dimensions and many of them are difficult to measure. In addition to wages and stability, aspects such as mental and physical stress, autonomy and self-responsibility, temporary versus permanent contracts, working time, and the reconciliation of family and working life have been the subjects of research.Footnote 2 The main reasons for our choice of indicators of job quality are that job stability and earnings are extremely important economically and that our data allow us to observe these indicators very precisely (limitations are discussed later in the paper).
The expectation that the Hartz reforms have increased inequality and created a generation of jobs that are lower paid and less stable than jobs in the past is supported by search-theoretical arguments. Intensive monitoring and stricter use of sanctions will increase the job search intensity of unemployed workers. The worsening of conditions for unemployment benefit recipients will lead to lower reservation wages and increase the willingness of unemployed workers to accept jobs of a given quality. Rebien and Kettner (2011) report that after the reforms, job applicants more often accepted jobs with worse working conditions, e. g., longer commutes and even lower wages. In Sweden, Van den Berg and Vikström (2014) demonstrate that the use of sanctions induced unemployed job-seekers to take jobs with lower wages, fewer working hours and fewer qualification requirements.
The Hartz reforms also contained certain deregulative elements, such as a reduction of dismissal protection in small firms and a relaxation of the legislation on temporary agency work. The growing use of working contracts that offer less employment protection could be another source of the rising inequality in the labour market.
There are also good reasons to expect positive effects of labour market reforms on job quality. The matching of unemployed workers to jobs may have been positively influenced by the reorganisation of local employment agencies combined with the tightening of active labour market policies. Even if increased pressure on unemployed job-seekers will induce them to accept worse jobs in the first round, accepting these jobs might result in shorter unemployment durations, less depreciation of human capital, less stigmatisation and better signals to firms in the second round.Footnote 3 Similarly, atypical jobs may become stepping stones into permanent and better-paid jobs in the medium run.Footnote 4 All in all, it is an empirical question whether the Hartz reforms have had a positive or a negative influence on job quality.
It should be noted that the extent to which the Hartz reforms have influenced the German labour market is controversial. In a recent and prominent paper, Dustmann et al. (2014) argue that the scale of the Hartz reforms was not sufficiently large to substantially contribute to the positive changes in competitiveness and unemployment observed after the reform period. These authors stress the importance of factors such as flexibility in the German wage-setting institutions and the growing importance of firm-level negotiation of wages since the restructuring of the German economy after reunification. However, another view is that the Hartz reforms and "especially the merger between unemployment assistance and welfare, deeply changed the fundamental labor market institutions" (Möller 2015, p. 164). Similarly, the reduced generosity of the unemployment benefit system as a result of the Hartz reforms is considered a key determinant of the wage moderation during the years preceding the great recession (Gartner and Merkl 2011).
Although we share the view that the Hartz reforms had an important influence on the German labour market, we do not aim to estimate the size of this impact or to disentangle it from other factors that have contributed to the positive labour market outcomes observed in recent years. Our results on job durations and wages are descriptive in character; nevertheless, we will also perform econometric model analyses that account for worker and job heterogeneity.
We contribute new evidence to previous research in several ways. First, whereas existing studies have focussed primarily on wages, we analyse both wages and job duration because both aspects are fundamental dimensions of job quality. Second, a distinctive feature of our analysis is that we select cohorts of newly started jobs. We expect the employment contracts of workers taking new jobs to respond immediately to changes in both legislation and market conditions. Third, we update the evidence through 2010, giving us a period of analysis that allows us to consider a sufficiently long time span (five years) after the Hartz reforms.
The paper is organized as follows. In Sect. 2, we review the literature on job stability and wages in Germany. In Sect. 3, we describe our empirical strategy, and in Sect. 4, we present the results. The paper ends with concluding remarks.
The Hartz reforms and recent trends in job stability and wages in Germany: a review of the literature
There are several macro-level studies analysing the influence of the Hartz reforms on aggregate matching efficiency. Fahr and Sunde (2009) estimate positive effects of the first two reform packages (which addressed the organisation of the Federal Employment Agency and local employment agencies) on both the duration of unemployment and the matching probability. Subsequent aggregate studies confirm the improved matching efficiency following the implementation of the Hartz reforms (Hertweck and Sigrist 2013) and the increase in job-finding rates, not only for the short-term unemployed but also for the long-term unemployed (Klinger and Rothe 2012). Launov and Wälde (2016) analyse the impact of different elements of the Hartz reforms using structural equilibrium models. Comparing pre- and post-reform steady states, they estimate that the reform of the public employment agencies in Germany accounts for approximately 20 % of the reduction in equilibrium unemployment, whereas the Hartz IV reform accounts for another 5 %. Taken together, these studies confirm that the functioning of the German labour market improved after the reforms. The reform of the benefit system through the Hartz IV reform seems to be of minor importance in the macroeconomic context.
The cyclical behaviour of aggregate labour turnover in Germany gives a first impression of whether jobs have become more or less stable since the mid-nineties (Rothe 2009). Fig. 2 shows the overall trends in hiring and separation rates over the business cycle between 1996 and 2012. Beginning with the year 2001, both rates clearly drop and continue to decrease during the period of the Hartz reforms. Remarkably, both hiring and separation rates remain fairly low during the cyclical upturn following the Hartz reforms (instead of returning to higher levels) and in the recession years 2008 and 2009. Thus, there has been a reduction in worker turnover in Germany that started before the Hartz reforms and continued thereafter. If job stability had decreased significantly after the reform period, we would have expected to see a corresponding increase in the turnover rates. This first piece of evidence is confirmed and complemented by our micro-level analyses, which account for job heterogeneity.
Regarding existing micro-evidence for Germany, there are studies that should be mentioned even if most of them do not refer to the Hartz reforms. These studies look at long-term trends in job stability or wages, and several of them focus explicitly on inequality and the processes of sorting or polarization in the labour market.Footnote 5
Most analyses of long-term trends in individual job duration indicate a constant or declining level of job stability. Bergemann and Mertens (2011) find a tendency towards shorter job durations for the period 1984 to 1998, but other studies present evidence of a rather constant level of job stability until the middle of the last decade (Giannelli et al. 2012; Rhein 2010). Giesecke and Heisig (2011) look at the long period from 1984 to 2008 and thus cover the first years after the Hartz reforms. They find that overall mobility between firms has remained fairly constant, whereas less qualified workers have experienced a decline in job stability over time.
Boockmann and Steffes (2010) focus on the determinants of job duration in Germany and demonstrate that in addition to the socio-economic characteristics of workers, the internal structure of firms, as captured by the existence of working councils or further training opportunities, contributes to longer job duration. Furthermore, the results suggest a sorting process of workers with higher expected duration to firms offering more stable employment.
The recent literature on wages contains influential contributions that demonstrate a rise in wage dispersion in West Germany over the last several decades. Dustmann et al. (2009) focus on wages between 1975 and 2004 and document that inequality has continued to rise since the 1980s. In the 1980s, it was mainly the upper half of the wage distribution that was affected, but in the early 1990s, inequality also started to increase at the bottom half of the wage distribution. Antonczyk et al. (2010) find particularly pronounced growth of wage inequality at the bottom of the wage distribution from 2001 to 2006. Card et al. (2013) look at full-time workers in the years from 1985 to 2009 in West Germany and demonstrate that increased heterogeneity in the establishment component of wages strongly contributes to the rise in wage inequality. They also find evidence that (positive) assortative matching has increased, with workers' wage potential being more closely correlated with firms' wage premia over time. Cornelißen and Hübler (2011) estimate the influence of unobserved individual and firm heterogeneity on wage and job-duration functions using German linked employer-employee data for the years 1996 to 2002. Somewhat at odds with the results of Card et al. (2013), the estimated individual and firm effects show that high-wage workers tend to be stable workers and are more likely to be employed in low-wage firms, whereas low-wage workers are more likely to accept jobs with shorter durations in high-wage firms.Footnote 6
The study of Riphahn and Schnitzlein (2016) analyses individual wage mobility, i. e., the probability of shifting to a different quantile of the wage distribution or changes in the rank positions, and shows a substantial decline in wage mobility over time. The results are provided for East Germany, where the decline in wage mobility begins in the early 1990s, and for West Germany, where decreasing wage mobility has been observed since the late 1990s.
The paper by Arent and Nagl (2013) is relevant because it claims to estimate the effect of the Hartz reforms on wages. Their central result, a structural break in the wage equation in the year of the benefit reforms, differs from the findings mentioned above and from ours (we find that wages started to decrease before 2005; see also Ludsteck and Seth (2014) for a comment).
Our empirical analysis borrows from the literature discussed in this section, especially when we present evidence on empirical distributions and in the specification of our models of job durations and wages. Although the sorting of workers to firms seems to be a promising road for further research, the results of these studies are not unambiguous. Our own analyses are performed at the individual or job level, which seems to be adequate for our research question. Our aim is to assess the changes in job quality after the implementation of labour market reforms that apparently improved the functioning of the labour market and led to the creation of new jobs. We use stability and wages as indicators of job quality and analyse them for a large share of the labour market.
Empirical strategy
We start our analysis with an assessment of overall trends in the distributions of job durations and wages. We then estimate job duration and wage models to determine the size and direction of changes in job quality across the reform period while controlling for the heterogeneity of jobs and workers. Finally, we look at the job quality over time of three groups of disadvantaged workers, namely, unskilled workers, previously unemployed workers and temporary agency workers.
Following a relevant strand of the economic literature, we measure job quality by the level of wages and by job duration: the higher the wage, the better the job; the longer a job lasts, the better the job (Jahn and Rosholm 2014; Caliendo et al. 2013).
We use a flow sampling approach and select cohorts of jobs starting in the same year or period. This approach avoids oversampling of longer durations, known as length-bias (Cameron and Trivedi 2005). A drawback of this approach is that our sampling yields a large share of incomplete job spells because of the limited observation period. Because we also censor jobs after 24 months, our analysis of changes in survival probabilities and job-leaving hazards over time is confined to changes in the initial two years in a job and is based on samples with large numbers of censored job spells.
We examine entry wages (wages at the time of the first notification of a worker with a specific employer), which ensures that we do not mix wages for new jobs with the wages of incumbent workers who have already gained some experience on the job. Because new jobs will strongly reflect current labour market conditions, changes in the market wage or an increase in temporary, unstable jobs will be revealed rather quickly in our samples. With respect to wages, we see our analyses as complementing other studies based on stock samples.
The job duration model
To analyse job durations, we estimate the following piecewise exponential mixed proportional hazard model (Blossfeld et al. 2007; Cameron and Trivedi 2005; Gutierrez 2002):
$$\lambda _{ij}\left (t\mid x_{i}'\beta ,\nu _{i}\right )=\lambda _{0}(t)\,\exp (x_{i}'\beta )\,\nu _{i},\quad i=1,\ldots ,N;\quad j=1,\ldots ,J$$
$$\lambda _{0}(t)=\lambda _{j},\quad \tau _{j-1}<t<\tau _{j}$$
where \(\lambda _{ij}\) is the hazard rate representing the risk of leaving the current employer during the \(j\)th time interval of the job spell belonging to individual i. Job durations are split into at most J time intervals (pieces) to model changes in the risk of job termination conditional on the time already spent in the current job. The \(j\)th interval starts at duration \(\tau _{j-1}\) and ends at duration \(\tau _{j}\). The baseline hazard \(\lambda _{0}\left (t\right )\) is a step function that is constant within intervals. Unobserved heterogeneity is modelled by a gamma-distributed frailty term \(\nu _{i}\) that is assumed to be specific to the job spell. Only the variance \(\theta\) of the frailty term is estimated.
The vector \(x_{i}\) is a set of individual, job, firm, industry, and macroeconomic time invariant explanatory variables that are specific for an individual at the beginning of the job.
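For illustration, the survival function implied by a piecewise-constant baseline hazard can be evaluated in closed form. The following sketch is our own illustration of the model structure, not the authors' estimation code; it integrates the baseline hazard over the pieces and scales it by \(\exp (x'\beta )\) and a fixed frailty value:

```python
import math

def survival(t, cuts, hazards, x_beta=0.0, frailty=1.0):
    """S(t) under the piecewise exponential hazard above: `cuts` are
    the interval boundaries tau_1 < ... < tau_{J-1}, and `hazards` the
    baseline rates lambda_1, ..., lambda_J.  The covariate index
    x'beta and the frailty nu enter multiplicatively, as in the model
    equation."""
    scale = math.exp(x_beta) * frailty
    cum, prev = 0.0, 0.0
    for tau, lam in zip(cuts + [float("inf")], hazards):
        seg_end = min(t, tau)
        if seg_end > prev:
            cum += lam * (seg_end - prev)   # integrated hazard on this piece
        prev = tau
        if t <= tau:
            break
    return math.exp(-scale * cum)

# Baseline hazard 0.1 in the first 12 months, 0.05 afterwards:
# S(12) = exp(-1.2), S(24) = exp(-1.2 - 0.05*12) = exp(-1.8).
print(survival(12, cuts=[12], hazards=[0.1, 0.05]))
```

In the paper, the \(\lambda _{j}\), \(\beta\) and the frailty variance \(\theta\) are estimated by maximum likelihood; the sketch only shows how a set of estimated pieces maps into survival probabilities.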
The wage model
To analyse wages, we estimate censored regression models that allow us to take the threshold for social security contributions into account (Cameron and Trivedi 2005).
We model the observed censored wage \(W_{ij}\) of individual i in year j as the realization of a latent variable \(W_{ij}^*\):
$$W_{ij}^{*}=x_{ij}' \, \beta +\varepsilon _{i}$$
where \(x_{ij}\) are the covariates of individual i if her employment spell starts in year j, and \(\varepsilon _{i}\) is a normally distributed error term with variance \(\sigma ^{2}\).
The observed wage is:
$$W_{ij}=\begin{cases} W_{ij}^{*} & \text{if } W_{ij}^{*}<c_{j}\\ c_{j} & \text{if } W_{ij}^{*}\geq c_{j} \end{cases}$$
where \(c_{j}\) is the threshold for social insurance contributions, which varies over time.
The coefficients \(\beta\) measure the influence of the covariates on the latent variable \(W_{ij}^*\). The maximum likelihood estimation yields results for the vector of coefficients as well as for the variance \(\sigma ^{2}\).
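As an illustration of how such a model is estimated, the log-likelihood of the right-censored (Tobit-type) specification can be written down directly. The sketch below is ours, not the authors' code: wages below the threshold contribute through the normal density, and wages at the threshold through the upper tail probability:

```python
import math

def censored_loglik(beta, sigma, rows):
    """Log-likelihood of the censored wage model above.  Each row is
    (x, w, c): covariate vector, observed (possibly censored) wage and
    the contribution threshold c_j for the relevant entry year."""
    def phi(z):                       # standard normal density
        return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    def Phi(z):                       # standard normal cdf
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))
    ll = 0.0
    for x, w, c in rows:
        mu = sum(b * v for b, v in zip(beta, x))
        if w < c:                     # uncensored: density of W* at w
            ll += math.log(phi((w - mu) / sigma) / sigma)
        else:                         # censored: P(W* >= c_j)
            ll += math.log(1 - Phi((c - mu) / sigma))
    return ll
```

Maximizing this function over \(\beta\) and \(\sigma\) with a numerical optimizer reproduces censored-regression estimates; the paper itself relies on standard maximum likelihood routines.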
Data and variables
We draw our data from a 2 % sample of the German Integrated Employment Biographies (IEB), a large administrative data set regularly produced at the Institute for Employment Research (IAB) in Nuremberg.Footnote 7 The IEB contains employment liable to social insurance but does not include self-employment or civil servants. We focus on workers' employment spells and wages from 1998 to 2010.
The analysis concentrates on West Germany, which accounts for more than 80 % of the German labour market. A common analysis for East and West is hindered by the large differences that continue to exist between the two labour markets.Footnote 8 Selected results on overall trends in East Germany are contained in the appendix.
We select workers aged 25 to 54 when starting their jobs. Workers under 25 or over 54 are excluded because the job exits of younger workers often coincide with re-entry into the educational system and the transitions of older workers might be influenced by the alternative option of retirement. Because our focus is on employment spells, we also exclude apprentices of any age.
The duration of a job is defined as a period of employment in the same establishment.Footnote 9 Successive sub-spells within the same establishment are concatenated to generate the job spells, allowing for a maximum of 90 days of interruption. The start and end of the job spells are measured exactly in days. To keep the observation window as long as possible, we look at durations up to 24 months, therefore censoring longer job spells. In the case of overlapping job spells, we only keep the spell with the highest amount of earnings.
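The concatenation rule can be illustrated with a short sketch (ours, not the actual IEB processing code; the 90-day allowance enters as max_gap):

```python
from datetime import date

def concatenate_spells(sub_spells, max_gap=90):
    """Merge successive (start, end) sub-spells at the same
    establishment into job spells, allowing interruptions of at most
    `max_gap` days, as described above."""
    spells = []
    for start, end in sorted(sub_spells):
        if spells and (start - spells[-1][1]).days <= max_gap:
            # Short interruption: extend the current job spell.
            spells[-1] = (spells[-1][0], max(spells[-1][1], end))
        else:
            # Gap too long (or first notification): a new job begins.
            spells.append((start, end))
    return spells

# A 30-day interruption is bridged; a 120-day gap starts a new job:
spells = concatenate_spells([
    (date(2003, 1, 1), date(2003, 6, 30)),
    (date(2003, 7, 30), date(2003, 12, 31)),
    (date(2004, 4, 30), date(2004, 12, 31)),
])
print(len(spells))  # 2
```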
Entry wages are measured as those of the first sub-spell (lasting at most one year) contained in a possibly longer employment episode. We deflate the nominal amount with the German consumer price index to obtain the real daily wage. Right-censoring occurs at the threshold for social security contributions, which is adjusted almost every year.Footnote 10 The exact amount of wages above the threshold is unobserved.
Although the data allow us to distinguish between full-time and part-time work, they do not contain information on hours worked. Consequently, although we include part-timers (those liable to social insurance) in the analysis of job durations, we are forced to exclude them from the analysis of wages, which implies the neglect of a considerable part of the female workforce.Footnote 11 In addition, we exclude "mini-jobs" and other part-time jobs with short hours or low earnings from our analyses. Mini-jobs are legally required to fall below a certain earnings threshold (e. g., 400 € in 2012) and are largely exempted from social security contributions. The growth of such jobs in recent years is certainly relevant to our research question (Jahn et al. 2012; Möller 2014). However, due to the lack of information on hours worked and because earnings from mini-jobs often constitute a type of extra income, these jobs are difficult to analyse without any information on the household context.
With respect to individual characteristics, the administrative data contain information on workers' age, gender and nationality. Information regarding workers' skill levels combines different types of school and university training with a binary indicator of whether the worker has completed vocational training. We include indicators of labour market status preceding the new job, distinguishing "jobs taken after unemployment" (where unemployment is defined by benefit receipt or being registered with the local employment agency), "job-to-job changes" (allowing for 31 days of non-employment between jobs), "gaps" (periods without entries in our data) and "first spells", which are workers' first appearance in the IEB. Jobs with fixed-term contracts contribute to our results on durations and wages but cannot be distinguished from permanent jobs in our data. Regarding temporary agency work, the Hartz reforms implied a considerable liberalization and the increase in the number of temporary agency workers has been one of the main topics in the debate on recent labour market changes in Germany. In contrast to fixed-term contracts, temporary agency workers can be identified relatively well using the industry code (see Antoni and Jahn 2009).
We include industry and firm size class in our models as information on establishments.Footnote 12 To take account of business cycle effects, we include indicators for the state of the local labour market: GDP growth obtained from the German Federal States' Accounts and the regional unemployment rate made available by the Federal Statistical Office, with both variables measured at the level of the West German districts (Kreise). Seasonal effects are modelled by the inclusion of dummy variables for the quarters of job entry.
In the analysis of overall trends in job quality, we focus on yearly cohorts of new jobs. In our regression models of job duration and wages, we group the observations into three sub-periods: jobs beginning in the period before the reforms (1998 to 2002), jobs beginning during the period of the Hartz reforms (2003 to 2005) and jobs beginning in the post-reform period (2006 to 2009/2010).
We first discuss the results of the overall analysis of job duration and wages in new jobs (Sect. 4.1). We then turn to the model analysis (Sect. 4.2) and, finally, to the results concerning the three selected groups of disadvantaged workers: unskilled workers, temporary agency workers, and previously unemployed workers (Sect. 4.3).
Trends in the distributions of job durations and wages
The duration of a new job is defined as an uninterrupted period of work with the same employer (see Sect. 3.3). We adopt Kaplan-Meier survival function estimators to plot time trends of the distributions of job durations. For wages, we compare the quartiles (25th, median and 75th percentile) of the distribution of log wages. To visualize broad group differences, we first distinguish by gender and then by skill level.
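The Kaplan-Meier product-limit estimator behind these plots can be sketched in a few lines (our illustrative implementation, not the code used for the paper's figures; censoring at 24 months enters through the event indicator):

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve.  `events[i]` is 1 if spell i ended
    with a job exit and 0 if it was right-censored (e.g. cut off at 24
    months).  Returns (time, S(time)) pairs at each event time."""
    pairs = sorted(zip(durations, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        exits = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            exits += pairs[i][1]
            at_t += 1
            i += 1
        if exits:
            surv *= 1 - exits / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= at_t   # exits and censored spells leave the risk set
    return curve

# Five spells (months): survival falls to 0.8, 0.6 and 0.3 at 6, 12, 18.
print(kaplan_meier([6, 12, 12, 18, 24], [1, 1, 0, 1, 0]))
```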
Durations of newly started jobs
Fig. 3 shows the estimated probabilities of staying in a job at specific durations after the job begins for job spells started between January 1998 and December 2009 (December 2008 for survival probabilities of 18 and 24 months).
Job Survival Probabilities after 6, 12, 18 and 24 Months. Survival probabilities at 18 and 24 months are not observed for jobs starting in 2009 (Source: IEB, own calculations, N = 816,774)
Job stability is fairly high. The estimated 12-month survival probabilities for male workers are between 50 and 60 %, and they are even higher for female workers (see also Table 4 in the appendix). Over time, the survival probabilities for male workers seem to be fairly stable, whereas for female workers they show a tendency towards longer job durations. There is a temporary decrease in survival probabilities for jobs beginning in the year 2000; this decrease is more pronounced for women and for longer durations. We do not have a ready explanation for this result;Footnote 13 in any case, the size of this reduction is rather limited (up to 4 percentage points for women's 24-month survival probability, see Table 4 in the appendix).Footnote 14 Overall, the vertical distances between the survival probabilities at different durations remain fairly constant, meaning that within the groups of male and female workers, job durations do not become much more unevenly distributed over time (see also Fig. 7 in Sect. 4.3). As an intermediate result, there are no signs of a general downward trend in job durations during or after the reform period, which confirms the aggregated turnover rates presented in Fig. 2.Footnote 15.
Fig. 4 distinguishes between three skill levels (see also Table 5 in the appendix). The category of "unskilled" workers comprises those with lower than medium education and without vocational training. The category of "skilled" workers includes workers with up to a medium level of education, workers with vocational training, and workers with a degree that qualifies them for professional college or university attendance. Workers in the "highly skilled" category have college or university degrees.
Job Survival Probabilities after 6, 12, 18 and 24 Months, by Skill Level. Survival probabilities of 18 and 24 months are not observed for jobs starting in 2009. (Source: IEB, own calculations; N = 816,774)
For a given duration, highly skilled workers have the highest survival probabilities, whereas unskilled workers have the lowest. Over time, the survival probabilities decrease somewhat for unskilled workers, especially at job durations of 18 and 24 months. The opposite is true for highly skilled workers; their survival probabilities show a slight upward trend, which is stronger for longer job durations. Because the observed changes are rather small, this evidence indicates that heterogeneity in job durations within skill groups has also not increased significantly over time.
Wages in newly started jobs
This analysis is confined to full-time workers, because information on hours worked is not available for part-time workers (see Sect. 3.3).
By now, the growth in earnings inequality over the last several decades and the steady decline in real wages since the early 2000s are fairly well established for Germany, as well as for other industrialized countries (Dustmann et al. 2009; Card et al. 2013). With its focus on new jobs, our analysis presents further evidence on this issue.
The wage variable is measured as the log of daily real wages in 2005 prices. In Fig. 5, the 25th percentile, median, and 75th percentile of the wage distributions for men and women are plotted over the years (exact figures for the years 1998 and 2010 can be found in Table 6 in the appendix).
Distribution of Wages (Source: IEB, own calculations, N = 747,214)
In line with the results of Card et al. (2013), the decrease in real wages after 2001 is clearly visible. For both men and women in new jobs, the decline in wages is strongest in the 25th and 50th percentiles of the wage distribution, whereas the decline in the 75th percentiles is comparatively modest. Furthermore, wage inequality has been increasing among both men and women. The lower interquartile difference (50th percentile minus 25th percentile) has increased by 8 % for men and by 4 % for women, and the upper interquartile difference (75th percentile minus 50th percentile) has increased by 6 % for both men and women (Table 6).Footnote 16
As previously noted, "wage moderation" is one potential explanation of Germany's relatively good employment performance during the great recession in 2008/2009. In accordance with most of the studies summarized in Sect. 2, our results show that the decrease in real wages begins well before the Hartz reforms. Thus, one impact of the reforms might have been to strengthen a pre-existing tendency towards lower wages. The size of wage losses in our sample of new jobs is clearly larger than the wage decreases reported in Card et al. (2013).Footnote 17
The differences in entry wages between skill levels are substantial (Fig. 6, Table 7 in the appendix). Furthermore, the disadvantage of unskilled workers is increasing. The median log wage of highly skilled workers, for example, decreased from 4.79 in 1998 to 4.63 in 2010, implying a daily wage loss of 15 % (approximately 18 €). The corresponding median wage loss for skilled workers from 1998 to 2010 was also 15 % (10 €). However, the median wage loss for unskilled workers was 21 % (11 €; see Table 7). In addition, within-group inequality has been rising, which also accords with the results of Card et al. (2013). For the skilled and highly skilled worker groups, the decline in wages is strongest in the 25th percentile (that is, for workers with the lowest earnings in these groups).Footnote 18 Footnote 19
Distribution of Wages by Skill Level (Source: IEB, own calculations, N = 747,214)
Model analysis
Job duration
Although no general tendency towards shorter job durations over time was revealed by the analysis in Sect. 4.1.1, we now consider the possibility that changes in the composition of jobs have influenced this result. We estimate a duration model that controls for a broad range of explanatory variables. The objective is to determine whether the period in which the Hartz reforms were implemented or the subsequent period are associated with changes in job durations when we take into account changes in the structure of jobs, as captured by our covariates. Because we do not use an experimental or quasi-experimental design, the estimated covariate effects reflect empirical associations rather than causal influences.
The model is a single spell model, estimated separately for men and women, for employment spells beginning between January 1998 and December 2008. For persons with more than one new job in the observation window, we randomly select only one spell.Footnote 20
The results are presented in the form of hazard ratios, which – in the case of binary or categorical variables – indicate the influence of a variable on the risk of leaving the job relative to the reference group (see Table 1). The relationship between the risk of terminating a job and job duration is inverse, meaning that a value of the hazard ratio greater (smaller) than one implies a positive (negative) effect on the hazard and a negative (positive) effect on duration for that variable. The z‑values in Table 1 are based on the cluster-robust estimation of the variance-covariance matrix, because the district-level variables of unemployment rate and GDP growth give rise to a within-district correlation of regression model errors (Cameron and Miller 2015).Footnote 21
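The sign convention described above can be summarized with a generic mixed proportional hazards specification (a sketch; the exact parametrization of the baseline hazard and frailty distribution in Table 1 may differ):

```latex
\lambda(t \mid x, v) \;=\; v \,\lambda_0(t)\,\exp(x'\beta),
\qquad
\mathrm{HR}_j \;=\; \exp(\beta_j),
```

where $\lambda_0(t)$ is the baseline hazard and $v$ the individual frailty term. A hazard ratio $\mathrm{HR}_j > 1$ raises the risk of leaving the job and thus shortens the expected duration, while $\mathrm{HR}_j < 1$ implies a longer expected duration.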
Table 1 Job Duration Models for West Germany
Starting with our variables of interest, the time period indicators of the year of entry are smaller than one for both men and women during the reform period (2003–2005). For the post-reform period (2006–2008), the hazard ratio continues to be smaller than one for women but is insignificant for men. These results imply that job durations ceteris paribus are somewhat longer for new jobs started in the reform period (2003–2005) compared to the reference period (1998–2002). For men, in the years following the Hartz reforms, durations do not differ from those in the reference period, whereas for women, job durations continue to be slightly longer than those in the period preceding the Hartz reforms.
For the rest of the model results, the pattern of the baseline hazard, the associations of the covariates with the hazard rate, and the relevance of the frailty term are of interest as well. The pattern of the estimated risk of leaving a job is non-monotonic: the hazard ratios for the time periods at the top of the table increase slightly at first (for women, there is also a temporary decline in the third month) and then start to decrease after the first three months of employment. This non-monotonic pattern can be explained by the high initial risk of contract dissolution if a mismatch is detected, in combination with the stabilizing effect of job-specific human capital (Blossfeld et al. 2007, p. 121). In addition to this baseline pattern – and despite the large number of covariates – the importance of unobserved heterogeneity is confirmed in all of our models by highly significant likelihood ratio tests.
The results also point to a clear seasonal pattern in job durations. The estimated hazard ratios from the men's model are large and increase during the year, implying the longest durations for jobs beginning in the first three calendar months. For women, this pattern is less regular and less pronounced.
The state of the local labour market might influence job durations through fluctuations in labour demand. The respective control variables are insignificant, however. Persistent differences in economic conditions between regions might also influence job durations. The estimated hazard ratios for the federal states seem to reflect to some extent the north-south divide in Germany. The reference is North Rhine-Westphalia, which is the state with the largest population and is situated in the western centre. Significantly shorter job durations are estimated in the northern states of Schleswig-Holstein and Hamburg, and longer job durations are estimated in the southern states. For the state of Hesse, situated in the southern centre, and the economically strong states Baden-Württemberg and Bavaria, which are both situated in the south of Germany, we find lower risks of leaving jobs and thus longer durations (in Bavaria, this is only found for women).
The effect of firm size is clear and rather large. As predicted by theories of internal labour markets, job duration monotonically increases with the size of the firm for both men and women.
For some industries, we can also note fairly strong and significant effects. Compared to business services (the reference group), jobs tend to be very stable for men in manufacturing and for women in social and public services. Very large and significant hazard ratios for both men and women are estimated for temporary agency jobs, indicating far shorter durations. Although this confirms the general notion that temporary agency jobs are of low quality, a more careful analysis would need to look at subsequent jobs to see how many temporary agency jobs lead to regular jobs (e. g., Jahn and Rosholm 2014).
Part-time jobs with a minimum of 18 h per week are associated with much shorter durations for men but slightly longer durations for women. One interpretation of this finding is that for many men, part-time jobs are only temporary solutions until they find a full-time job. Women might have longer durations in part-time jobs, although these jobs are relatively hard to find.
Age and nationality have the expected effects: job duration is much shorter for younger age groups and foreigners. The skill effect is pronounced for both men and women. Workers for whom information on skill was missing and unskilled workers have far higher risks of leaving their jobs compared to the reference group (vocational training with at least an intermediate degree), which indicates the importance of completed vocational training in Germany. Among workers with higher educational levels, only men with a university or comparable degree have more stable jobs than the reference group. Among the categories describing previous labour market status, the reference group (job-to-job changers)Footnote 22 is the largest group. Persons beginning new jobs out of unemployment form another sizeable group. Although the group of persons starting a new job after a gap is also fairly large, the group of persons in their first jobs and the group of persons who are first unemployed and then have a gap before starting their current jobs are both rather small. The results show clearly that compared to job-to-job changers, all other groups experience shorter job durations. The negative association with duration is strongest for persons who are unemployed immediately before starting a job.
Wages
As in the duration model, we use one randomly selected employment spell per person beginning in the period 1998 to 2009. We estimate the models separately for men and women in West Germany (see Table 2).Footnote 23
Table 2 Wage Models for West Germany
Once again, the coefficientsFootnote 24 of the time period indicators are of central interest. They are negative for the reform period (2003–2005) and for the period following the Hartz reforms (2006–2009), indicating that real wages have been falling since the reference period. Although we control for cyclical effects by including GDP growth rates, an additional wage effect of the great recession in 2008/2009 cannot be excluded. The negative wage effect during the reform period is slightly stronger for women than for men; in the post-reform period, we observe much lower entry wages for both sexes. The skill level has the expected positive effect on wages.
Potential work experience has a significant effect only for men. The wages of foreigners are approximately 8 % lower for women and more than 10 % lower for men.
The preceding labour market status is highly relevant for wages in newly started jobs. There is a large negative wage differential for workers in their first job following unemployment. The estimated coefficients are −0.268 (men) and −0.228 (women), which correspond to percentage wage losses of 20 % or more compared to the reference wages of job-to-job changers.
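For coefficients of this magnitude, the exact transformation exp(β) − 1 should be used rather than reading β itself as a percentage. A minimal check, using the coefficients reported above:

```python
import math

def log_coef_to_pct(beta: float) -> float:
    """Exact percentage wage differential implied by a log-wage coefficient."""
    return (math.exp(beta) - 1.0) * 100.0

# Coefficients for a first job taken up out of unemployment (Table 2),
# relative to the reference group of job-to-job changers
for group, beta in [("men", -0.268), ("women", -0.228)]:
    print(f"{group}: {log_coef_to_pct(beta):+.1f} %")
```

Both differentials exceed 20 % in absolute value, consistent with the text; the naive reading of −0.268 as a 26.8 % loss would overstate the differential.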
Given the size of these coefficients, it should be stressed once again that these results are descriptive and do not reflect causal relationships.Footnote 25
The coefficient of the local unemployment rate is negative in the model for men's wages and positive in the model for women's wages.Footnote 26 The effects of the federal states variables are often insignificant, with persons from Lower Saxony/Bremen and Rhineland Palatinate/Saarland having significantly lower wages than those from the reference state of North Rhine-Westphalia.
Firm size is positively correlated with wages; larger firms pay higher wages. This effect seems to be stronger for women. In most industries, wages are lower than in business services, which is the reference group. In particular, workers in personal and domestic services and temporary agency workers are worse off.
Summary of results for job duration and wages
In summary, the results of the multivariate analysis presented in this section show that, ceteris paribus, the durations of new jobs did not decrease significantly either during or after the Hartz reform period compared to the preceding period (1998–2002). At the same time, there were real wage losses during the reform period and even greater wage losses in the period following the Hartz reforms.
Job durations and wages for selected groups of workers over time
Because our analysis is concerned with increasing inequality, it might be worthwhile to check whether the situations of certain groups of workers known to be disadvantaged have improved or worsened over time. We select three groups – temporary agency workers, workers who were unemployed before starting a new job, and unskilled workers – to determine whether trends for them differ from the overall trends assessed previously.
In Fig. 7, similar to Sect. 4.1, Kaplan-Meier survivor functions are plotted across periods for the entire sample (first panel) and for the groups of temporary agency workers, unskilled workers, and previously unemployed workers.
Survivor Functions for Different Groups. a Men, b Women (Source: IEB, own calculations, N = 832,158)
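The plotted survivor functions are standard Kaplan-Meier estimates: with $d_i$ job exits and $n_i$ jobs still at risk at duration $t_i$,

```latex
\hat{S}(t) \;=\; \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right),
```

so right-censored spells contribute to the risk sets $n_i$ without being counted as exits.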
Job durations for these groups have remained virtually unchanged over time. Comparing the survivor functions of these three groups with that of the entire sample confirms the lower level of job stability for the selected groups, especially for temporary agency workers.Footnote 27
Table 3 presents median log wages together with changes across time periods for the entire sample and for the three selected groups of workers. The model analysis in Sect. 4.2.2 clearly showed that these groups have lower wages ceteris paribus, and the median wages complement this finding. For all men, the median wage dropped by 3 % from the first to the second period and further declined by approximately 9 % from the second to the third period. For women, the decline between the first two periods is also approximately 3 % and the decline between the second and the third periods is approximately 4 %.
Temporary agency workers, the group with the lowest median wage, experienced sizable wage losses between the first two periods: the median wage dropped by approximately 6 % for men and by more than 10 % for women. Unskilled workers as a group clearly suffered the largest wage losses: for men, the median wage fell by approximately 8 % between the first two periods and by more than 10 % between the second and third periods. The median wage of female unskilled workers fell by approximately 6 % between the first and second periods and by approximately 8 % between the second and third periods. For men who were unemployed before starting the current job, there was also a very sharp decline in the median wage between the last two periods. For women starting a job after unemployment, the drop in the median wage was approximately 6 % both between the first two periods and between the last two periods.
Table 3 Median Wages for Different Groups (Source: IEB, own calculations)
It is reasonable to consider overlaps between these groups of disadvantaged workers (e. g., workers who are both unskilled and employed in a temporary work agency) and to examine the median wages of these overlap groups across periods. However, jobs with workers belonging to two or more of our disadvantaged groups account for less than 10 % of the sample, and there are only a few subgroups for which the wage trends were even worse than those reported in Table 3.Footnote 28
In summary, median wage losses were often larger in the three selected groups compared to overall wage trends.Footnote 29 The sizable wage losses observed for jobs held by disadvantaged groups of workers might partly reflect demand-side changes, such as changes in relative skill demand. On the supply side, the decrease in reservation wages of workers in these groups is likely linked to the changes in the benefit system, because the risk of (recurrent) unemployment for these workers is comparatively high.
Conclusions
In this paper, we have assessed whether a reduction in the quality of new jobs occurred in the German labour market during and/or after the implementation of the Hartz reforms.
Our results on overall trends in job durations indicate stable job durations for men and somewhat longer durations for women. Interestingly, the graphical analysis does not show a stronger tendency towards a more unequal distribution of job durations. Consistent with the finding of decreased labour turnover, the results of the duration analysis even imply somewhat longer job durations in the reform period. When looking at selected groups of workers, we also do not find evidence of decreased job stability over time. Although job stability seems to hold steady, we confirm both a decrease in real wages over time and an increase in overall wage dispersion. For our sample of newly started jobs, several of the observed wage decreases are quite large. The estimated models reveal stronger wage losses in the period 2006–2009 compared to the reform period 2003–2005, which is due at least in part to the great recession in 2008/2009. The selected groups experienced severe wage losses in both periods, which in most cases were larger than the sizeable decrease in the overall median wage.
The predominantly high level of job stability is rather surprising. One explanation could be that the matching process has become more efficient, not only with respect to the duration of job searches but also with respect to the quality of job matches. In light of the results regarding wages, however, we attribute this stability to the higher cost of job mobility. Workers are more reluctant to quit both because the benefit system is less generous than it was previously and because entry wages are decreasing over time. Nevertheless, firms can count on very flexible labour, as demonstrated by the far shorter job durations of temporary agency workers compared to other workers.
With respect to wages, it is difficult to attribute our results solely to the Hartz reforms. The decrease in real wages for our sample of new jobs begins in the years 2001/2002, well before the Hartz reforms. However, the marked wage losses for disadvantaged groups of workers indicate a strong decline in reservation wages, which is likely linked to changes in the benefit system because these workers suffer a comparatively high risk of unemployment, especially long-term unemployment.
See Goebel and Richter (2007) for an early analysis of changes in the available income of unemployment benefit recipients.
For an overview of several aspects of job quality, see Osterman (2013) and the literature cited therein.
Nekoei and Weber (2015) and Schmieder et al. (2016) discuss the effect of extending benefit entitlement periods and highlight the impact of non-employment duration on wages.
Research on previous (pre-Hartz) changes in the employment protection law yields mixed results on dismissal probabilities and job stability (Bauer et al. 2007; Boockmann et al. 2008).
In the international literature, Gottschalk and Moffitt (1999) provide an overview of early research on job instability in the US during the 1980s and 1990s. Booth et al. (1999) examine job mobility and job duration over the period 1915 to 1990 in the UK. Autor et al. (2006, 2008) and Goos and Manning (2007) are examples of recent efforts to study wage inequality and job polarization in the US and the UK. Goos et al. (2009) provide evidence of disproportionate increases in both high-paid and low-paid employment in 16 European countries.
A possible explanation for this puzzle, at least for large firms, might be that stable firms tend to be low-wage firms and high-wage workers can afford to buy job security by choosing permanent and long-lasting jobs in those stable firms (Cornelißen and Hübler 2011).
See Jacobebbinghaus and Seth (2007) for a detailed description of an earlier version of the IEB Sample (IEBS).
See Riphahn and Schnitzlein (2016) and Möller (2015) for a discussion of East-West differences in wage distribution.
The employment information is based on firms' notifications to the social insurance agencies, which are obligatory at least once annually. Therefore, the employment period covered by a notification can last from a few days up to a maximum of one year.
In 2010, the West German threshold was equal to 180.82 (5500) € per day (month).
Moreover, there might still be part-time workers in our sample, because some of them might have been erroneously declared full-time workers. In fact, since 2011, the collection of information on working times in the employment statistics has changed. The break in the data caused by this change has revealed a considerable overestimation of the share of full-time workers before 2011 (Bundesagentur für Arbeit 2014).
Because of several changes in the classification system of industries, a combined 3‑digit industry variable generated at IAB's Research Data Centre (see Eberle et al. 2011) is used and regrouped into broader categories for the regression analyses.
If anything, the labour market reforms adopted in the period 1999–2001 went in the direction of re-regulation (see Giannelli et al. 2012).
The decrease in the 2009 survival probabilities for longer durations is due to the censoring of these spells at the end of 2010.
In East Germany, the 6‑month survival probabilities decrease somewhat over time, whereas longer-term survival probabilities remain constant or even increase. As a result, survival probabilities for jobs started in 2008/2009 in East Germany are more similar to those observed in West Germany than they were previously. See Fig. 8 and Table 4 in the appendix.
For values up to 0.05, the differences in log points are good approximations of percentage wage changes.
The 1995–2009 wage decreases shown in Table 1 in Card et al. (2013) amount to −4 to −6 %. Considering workers moving between different quartiles of the establishment wage distribution, the study also finds large wage decreases in certain cases, especially for the period 2002–2009 (see Card et al. 2013; Online Appendix Table 6).
For highly skilled workers, the 75th percentile is unfortunately not informative because for this group, between 17 % (in 2010) and 29 % (in 1998) of workers have entry wages above the social security threshold.
Overall wage trends for East Germany are contained in the appendix, Fig. 9 and Table 6. In 2010, there remains a substantial gap between the levels of East German and West German wages. Wage dispersion in East Germany has increased over time and is larger for women than for men. The wage decrease in East Germany was largest in the lowest quartile and larger for men than for women in all three quartiles. In 2010, East German daily full-time wages at the 25th percentile were approximately 38 € for men and approximately 33 € for women.
The selection of only one spell per person and the exclusion of spells starting in 2009 account for the different numbers of observations used for the graphical survival analysis in Section 4.1.1 and the model analysis in this section. The means of the model variables are contained in Table 8 in the appendix.
As suggested by an anonymous referee, we tested an alternative model specification that includes district-level dummy variables instead of the continuous variables used in the model in Table 1. The differences between the results of these two specifications with respect to other covariates were very small. The results of the district dummy variable models are available upon request.
Job-to-job changers are defined as workers with at most one month of non-employment between their last job and their current job.
Once again, the difference between the number of observations used to produce Fig. 5 and 6 and the number of observations used in the wage model follows from the selection of only one spell per person and from the exclusion of spells starting in the final year, 2010. Descriptive information on the variables of the wage model is contained in Table 9 in the appendix.
As a rule of thumb, small coefficients (<0.05) indicate percentage wage changes associated with a unit change in the respective covariate. For larger coefficients, this rule becomes inaccurate.
Schmieder et al. (2016) estimate a causal effect of unemployment on wages. The effect is found to be significantly negative and large, with an additional month of unemployment corresponding to a reduction in the reemployment wage of 0.8 %.
The positive sign in the model for women is somewhat surprising, but because we do not estimate a causal model, this result may simply be a spurious relationship resulting from changes in the supply of female labour and associated positive wage changes during our observation period. An anonymous referee brought this issue to our attention.
We also tried to assess the question of different group trends by estimating models in which all covariates were interacted with the time period indicators. These results are available upon request.
In contrast, the jobs of previously unemployed workers account for 29 % of the job sample used for the descriptive analysis in Table 3 and jobs in temporary agencies – which strongly increased in number after the reform period – account for 10 % of that job sample.
When we tried to detect these combined group-period-effects by including interaction terms in our wage model, the coefficients were often rather small and insignificant. The results are available from the authors upon request.
Antonczyk, D., Fitzenberger, B., Sommerfeld, K.: Rising wage inequality, the decline of collective bargaining, and the gender wage gap. Labour Econ 17(5), 835–847 (2010)
Antoni, M., Jahn, E.J.: Do changes in regulation affect employment duration in temporary help agencies? Ind Labor Relat Rev 62(2), 226–251 (2009)
Arent, S., Nagl, W.: Unemployment compensation and wages: evidence from the German Hartz reforms. Jahrb Natl Okon Stat 233(4), 450–466 (2013)
Autor, D.H., Katz, L.F., Kearney, M.S.: The polarization of the US labor market. Am Econ Rev 96(2), 189–194 (2006)
Autor, D.H., Katz, L.F., Kearney, M.S.: Trends in U.S. wage inequality: revising the revisionists. Rev Econ Stat 90(2), 300–323 (2008)
Bauer, T.K., Bender, S., Bonin, H.: Dismissal protection and worker flows in small establishments. Economica 74, 804–821 (2007)
Van den Berg, G.J., Vikström, J.: Monitoring job offer decisions, punishment, exit to work, and job quality. Scand J Econ 116(2), 284–334 (2014)
Bergemann, A., Mertens, A.: Job stability trends, lay-offs, and transitions to unemployment in West Germany. Labour 25(4), 421–446 (2011)
Blossfeld, H.-P., Golsch, K., Rohwer, G.: Event history analysis with Stata. Erlbaum, Mahwah (2007)
Boockmann, B., Steffes, S.: Workers, firms, or institutions: What determines job duration for male employees in Germany? Ind Labor Relat Rev 64(1), 109–127 (2010)
Boockmann, B., Gutknecht, D., Steffes, S.: Die Wirkung des Kündigungsschutzes auf die Stabilität "junger" Beschäftigungsverhältnisse. J Labour Mark Res 41(2/3), 347–364 (2008)
Booth, A.L., Francesconi, M., Garcia-Serrano, C.: Job tenure and job mobility in Britain. Ind Labor Relat Rev 53(1), 43–70 (1999)
Bundesagentur für Arbeit: Neue Ergebnisse zu sozialversicherungspflichtig beschäftigten Arbeitslosengeld-II-Beziehern in Vollzeit und Teilzeit. Hintergrundinformation vom 28. Januar 2014 (2014)
Caliendo, M., Tatsiramos, K., Uhlendorff, A.: Benefit duration, unemployment duration and job match quality: a regression-discontinuity approach. J Appl Econom 28(4), 604–627 (2013)
Cameron, A.C., Miller, D.L.: A practitioner's guide to cluster-robust inference. J Hum Resour 50(2), 317–372 (2015)
Cameron, A.C., Trivedi, P.K.: Microeconometrics: methods and applications. Cambridge University Press, New York (2005)
Card, D., Heining, J., Kline, P.: Workplace heterogeneity and the rise of West German wage inequality. Q J Econ 128(3), 967–1015 (2013)
Cornelißen, T., Hübler, O.: Unobserved individual and firm heterogeneity in wage and job-duration functions: evidence from German linked employer-employee data. Ger Econ Rev 12(4), 469–489 (2011)
Dustmann, C., Ludsteck, J., Schönberg, U.: Revisiting the German wage structure. Q J Econ 124(2), 843–881 (2009)
Dustmann, C., Fitzenberger, B., Schönberg, U., Spitz-Oener, A.: From sick man of Europe to economic superstar. J Econ Perspect 28(1), 167–188 (2014)
Eberle, J., Jacobebbinghaus, P., Ludsteck, J., Witter, J.: Generation of time-consistent industry codes in the face of classification changes: simple heuristic based on the Establishment History Panel (BHP). FDZ Methodenreport 05/2011 (en), Nürnberg (2011)
Eichhorst, W., Tobsch, V.: Not so standard anymore? Employment duality in Germany. J Labour Mark Res 48(2), 81–95 (2015)
Fahr, R., Sunde, U.: Did the Hartz reforms speed-up the matching process? A macro-evaluation using empirical matching functions. Ger Econ Rev 10(3), 284–316 (2009)
Fleckenstein, T.: Restructuring welfare for the unemployed: the Hartz legislation in Germany. J Eur Soc Policy 18(2), 177–188 (2008)
Gartner, H., Merkl, C.: The roots of the German miracle. VoxEU.org/article/roots-german-miracle (2011). Accessed 7 September 2016
Giannelli, G.C., Jaenichen, U., Villosio, C.: Have labor market reforms at the turn of the Millennium changed the job and employment durations of new entrants? A comparative study for Germany and Italy. J Labour Res 33(2), 143–172 (2012)
Giesecke, J., Heisig, J.P.: Labour market mobility: destabilization and destandardization: for whom? The development of West German job mobility since 1984. Schmollers Jahrb 131(2), 301–314 (2011). Proceedings of the 9th International Socio-Economic Panel User Conference
Goebel, J., Richter, M.: Nach der Einführung von Arbeitslosengeld II: Deutlich mehr Verlierer als Gewinner unter den Hilfeempfängern. DIW Wochenbericht 74(50), 753–761 (2007)
Goos, M., Manning, A.: Lousy and lovely jobs: the rising polarization of work in Britain. Rev Econ Stat 89(1), 118–133 (2007)
Goos, M., Manning, A., Salomons, A.: Job Polarization in Europe. Am Econ Rev 99(2), 58–63 (2009)
Gottschalk, P., Moffitt, R.: Changes in job instability and insecurity using monthly survey data. J Labor Econ 17(4), S91–S126 (1999)
Gutierrez, R.G.: Parametric frailty and shared frailty survival models. Stata J 2(1), 22–44 (2002)
Hertweck, M., Sigrist, O.: The aggregate effects of the Hartz reforms in Germany. SOEP papers on multidisciplinary panel data research, vol. 532. DIW, Berlin (2013)
Jacobebbinghaus, P., Seth, S.: The German integrated employment biographies sample IEBS. Schmollers Jahrb 127(2), 335–342 (2007)
Jacobi, L., Kluve, J.: Before and after the Hartz reforms: the performance of active labour market policy in Germany. J Labour Mark Res 40(1), 45–64 (2007)
Jahn, E.J., Rosholm, M.: Looking beyond the bridge: the effect of temporary agency employment on labor market outcomes. Eur Econ Rev 65, 108–125 (2014)
Jahn, E.J., Riphahn, R.T., Schnabel, C.: Feature: Flexible Forms of Employment. Boon and Bane. Econ J 122(562), F115–F124 (2012)
Klinger, S., Rothe, T.: The impact of labour market reforms and economic performance on the matching of the short-term and the long-term unemployed. Scott J Polit Econ 59(1), 90–114 (2012)
Launov, A., Wälde, K.: The employment effect of reforming a public employment agency. Eur Econ Rev 84, 140–164 (2016)
Ludsteck, J., Seth, S.: Comment on "unemployment compensation and wages: evidence from the German Hartz reforms" by Stefan Arent and Wolfgang Nagl. Jahrb Natl Okon Stat 234(5), 635–644 (2014)
Möller, J.: In the aftermath of the German labor market reforms, is there a qualitative/quantitative trade-off? Eur J Econ Econ Policies 11(2), 205–220 (2014)
Möller, J.: Did the German model survive the labor market reforms? J Labour Mark Res 48(2), 151–168 (2015)
Nekoei, A., Weber, A.: Does extending unemployment benefits improve job quality? IZA discussion paper, vol. 9034. IZA, Bonn (2015)
Osterman, P.: Introduction to the special issue on job quality: What does it mean and how might we think about it? Ind Labor Relat Rev 66(4), 739–752 (2013)
Rebien, M., Kettner, A.: Die Konzessionsbereitschaft von arbeitslosen Bewerbern und Beschäftigten nach den Hartz-Reformen. WSI Mitt 64(5), 218–225 (2011)
Rhein, T.: Beschäftigungsdynamik im internationalen Vergleich: Ist Europa auf dem Weg zum "Turbo-Arbeitsmarkt"? (IAB-Kurzbericht, 19/2010), Nürnberg, 6 S.
Riphahn, R.T., Schnitzlein, D.D.: Wage mobility in East and West Germany. Labour Econ 39, 11–34 (2016)
Rothe, T.: Arbeitsmarktentwicklung im Konjunkturverlauf: Nicht zuletzt eine Frage der Einstellungen. (IAB-Kurzbericht, 13/2009), Nürnberg, 8 S.
Schmieder, J.F., von Wachter, T., Bender, S.: The effect of unemployment benefits and nonemployment duration on wages. Am Econ Rev 106(3), 739–777 (2016)
We would like to thank Johannes Ludsteck, Joseph Sakshaug, and the anonymous reviewers for helpful comments and suggestions.
Authors and affiliations: Gianna C. Giannelli (University of Florence, Florence, Italy; IZA, Bonn, Germany); Ursula Jaenichen and Thomas Rothe (IAB, Nuremberg, Germany).
Correspondence to Ursula Jaenichen.
East Germany: Job Survival Probabilities after 6, 12, 18 and 24 Months (Source: IEB, own calculations; N = 248,240)
East Germany: Distribution of Wages (Source: IEB, own calculations; N = 217,921)
Table 4 Survival Probabilities after 6, 12, 18 and 24 Months of Duration, by Gender
Table 5 Job Survival Probabilities after 6, 12, 18 and 24 Months of Duration, by Skill Level
Table 6 Percentiles of the Wage Distribution and Changes, 1998 and 2010, by Gender
Table 7 Percentiles of the Wage Distribution and Changes, 1998 and 2010, by Skill Level
Table 8 Descriptive Statistics for Job Duration Models by Gender
Table 9 Descriptive Statistics for Wage Models by Gender
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Giannelli, G.C., Jaenichen, U. & Rothe, T. The evolution of job stability and wages after the implementation of the Hartz reforms. J Labour Market Res 49, 269–294 (2016). https://doi.org/10.1007/s12651-016-0209-x
Issue Date: November 2016
Keywords: Hartz reforms; Wage inequality
\begin{definition}[Definition:Liquid Pouring Problem]
A '''liquid pouring problem''' is one where the object is to obtain a measure of $x$ volume units, given:
:(usually) $2$ vessels (without markings) which can measure $a$ and $b$ volume units each
:an unlimited reservoir from which one can fill the vessels, and usually empty them back into it.
\end{definition}
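The definition above can be decided mechanically by searching the finite space of fill-states. A minimal sketch in Python (the function name and interface are illustrative, not from the definition): moves are fill a vessel from the reservoir, empty a vessel into it, or pour one vessel into the other.

```python
# Breadth-first search over reachable fill-states of two vessels with
# capacities a and b, deciding whether x units can be isolated.
from collections import deque

def reachable(a, b, x):
    """Return True if some sequence of moves leaves x units in a vessel."""
    start = (0, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        u, v = queue.popleft()
        if u == x or v == x:
            return True
        t1 = min(u, b - v)  # amount pourable from first vessel to second
        t2 = min(v, a - u)  # amount pourable from second vessel to first
        moves = [
            (a, v), (u, b),            # fill either vessel
            (0, v), (u, 0),            # empty either vessel
            (u - t1, v + t1),          # pour first into second
            (u + t2, v - t2),          # pour second into first
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False
```

For the classic instance with vessels of 3 and 5 units, 4 units are obtainable; with vessels of 2 and 4 units, 3 units are not, since every reachable amount is a multiple of gcd(2, 4).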
Find a \(3 \times 3\) matrix \(A\) mapping \(\mathbb{R}^{3} \rightarrow \mathbb{R}^{3}\) that rotates the \(x_{1} x_{3}\) plane by 60 degrees and leaves the \(x_{2}\) axis fixed.
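A small sketch in plain Python (standard library only; the sign convention for which direction counts as positive rotation in the \(x_{1} x_{3}\) plane is a choice) of one such matrix:

```python
# A 3x3 matrix rotating the x1-x3 coordinate plane by 60 degrees while
# leaving the x2 axis fixed.  The middle row and column act as the
# identity on x2; the four corner entries rotate the x1-x3 plane.
import math

theta = math.radians(60)
c, s = math.cos(theta), math.sin(theta)

A = [[c,   0.0, -s],
     [0.0, 1.0, 0.0],
     [s,   0.0,  c]]

def matvec(M, v):
    """Apply a 3x3 matrix (nested lists) to a length-3 vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```

The matrix fixes \((0,1,0)\) exactly, and its columns are orthonormal with determinant 1, so it is a rotation; the opposite sign convention swaps \(s\) and \(-s\).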
Find a \(3 \times 3\) matrix that acts on \(\mathbb{R}^{3}\) as follows: it keeps the \(x_{1}\) axis fixed but rotates the \(x_{2} x_{3}\) plane by 60 degrees.
Find a \(2 \times 2\) matrix that rotates the plane by \(+45\) degrees followed by a reflection across the horizontal axis.
Find a matrix that rotates the plane through \(+60\) degrees, keeping the origin fixed.
Linear maps \(F(X)=A X\), where \(A\) is a matrix, have the property that \(F(0)=A 0=0\), so they necessarily leave the origin fixed. It is simple to extend this to include a translation.
Find a \(2 \times 2\) matrix that rotates the plane by \(+45\) degrees \((+45\) degrees means 45 degrees counterclockwise).
Find a real \(2 \times 2\) matrix \(A\) (other than \(A=I\) ) such that \(A^{5}=I\).
Find a \(2 \times 2\) matrix that reflects across the horizontal axis followed by a rotation of the plane by \(+45\) degrees.
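Several of these questions compose a rotation with a reflection, and the order of composition matters. A small illustrative sketch in plain Python (no libraries assumed):

```python
# Composing a 45-degree rotation R with a reflection F across the
# horizontal axis.  The matrix products in the two orders differ:
# "rotate then reflect" is not the same map as "reflect then rotate".
import math

t = math.radians(45)
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # counterclockwise rotation by +45
F = [[1.0, 0.0],
     [0.0, -1.0]]                   # reflection across the x-axis

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rotate_then_reflect = matmul(F, R)  # R is applied first, then F
reflect_then_rotate = matmul(R, F)  # F is applied first, then R
```

Writing the products out, the two matrices agree in the top-left entry but differ in sign elsewhere, which is the algebraic reason the two exercise statements have different answers.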
Find the equation of the plane that contains the $z$-axis and the point $(3,1,2)$.
Find the inverse to a \(2 \times 2\) matrix that rotates the plane by \(+45\) degrees \((+45\) degrees means 45 degrees counterclockwise).
What is the cosine of 60 degrees?
Find all linear maps \(L: \mathbb{R}^{3} \rightarrow \mathbb{R}^{3}\) whose kernel is exactly the plane \(\left\{\left(x_{1}, x_{2}, x_{3}\right) \in \mathbb{R}^{3} \mid\right.\) \(\left.x_{1}+2 x_{2}-x_{3}=0\right\}\)
Find an orthogonal basis for \(\mathcal{S}\) and use it to find the \(3 \times 3\) matrix \(P\) that projects vectors orthogonally into \(\mathcal{S}\).
Find a linear map of the plane, \(A: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}\) that does the following transformation of the letter \(\mathbf{F}\) (here the smaller \(\mathbf{F}\) is transformed to the larger one):
In \(\mathbb{R}^{3}\), let \(N\) be a non-zero vector and \(X_{0}\) and \(Z\) points.
Using the substitution $x=v y$ or otherwise, find the general solution of the differential equation.
physical properties of an iron nail
Iron is such an important element that an entire period of human history, the Iron Age, is named after it. Melted and combined with other metals it forms alloys such as steel, which is used to build everything from ships and planes to skyscrapers and trains, and its low cost makes it invaluable to many industries. Iron also matters biologically: it is needed to form hemoglobin, the substance that colors red blood cells and carries oxygen, and hemoglobin deficiency is one factor in the disease called anemia.

A physical property can be observed without changing the substance into another substance. Physical properties of an iron nail include:

- It is a solid with a definite melting point.
- Pure iron is greyish white to silver in color, with a metallic luster.
- A nail is long and narrow, a pointy-ended cylinder.
- Iron is malleable and ductile: it can be beaten into sheets and drawn into thin wires.
- It is ferromagnetic: an iron nail is attracted to a magnet and becomes strongly magnetized when placed in a magnetic field.
- It is a good conductor of heat and electricity. A simple test: wire a battery and a bulb into a circuit, clamp the object to be tested between the contacts, and see whether the bulb lights. An iron nail, copper foil, or a pencil lead will light it; wood, plastic, sulphur, or coal will not. This is also why you cannot safely hold a hot metal pan unless it has a wooden or plastic handle.
- It is denser than water. Drop a cork and an iron nail of the same volume into a container of water: the cork floats because it is less dense, and the nail sinks.

Mass and volume are extensive (size-dependent) properties: a golf ball has more mass than a table-tennis ball. Melting point, boiling point, conductivity, density, and state of matter are intensive (size-independent) properties: they do not depend on the amount of matter.

A physical change alters the form of a substance without creating a new one: cutting paper in half, melting ice to water, or magnetizing a nail, which merely aligns the existing iron atoms through the effect of a magnetic field on their dipoles. Grinding an iron nail into powder is also a physical change, and its mass stays the same. Because magnetism is a physical property, a magnet can separate iron from a mixture, just as a sifter separates out sand and boiling recovers dissolved salt.

A chemical change occurs when a substance becomes an entirely new and different substance: burning a match, digesting a pizza, or rusting a bean can. Rusting is the key chemical property of an iron nail. In damp (but not dry) air, iron combines with oxygen to become a different substance called iron oxide, or rust. The product of this chemical change has different properties from the original nail: rust is reddish brown rather than grey, flaky and brittle, and does not conduct electricity. Iron also dissolves readily in dilute acids and corrodes faster at high temperatures, which is why an iron barbed wire left outdoors turns red. The more general term for rusting and similar processes is corrosion. Another everyday example of a chemical change: a catalytic converter changes nitrogen dioxide to nitrogen gas and oxygen gas.

Practice classifying changes as physical or chemical: (a) an iron nail is attracted to a magnet: physical; (b) a piece of paper spontaneously ignites at 451 °F: chemical; (c) a bronze statue develops a green coating (patina) over time: chemical. Two more examples: an iron nail corroding in moist air is a chemical change, while melting copper metal is a physical change.

Finally, pure iron exists in several allotropic forms. α-iron (ferrite) crystallizes in a body-centered cubic structure and is magnetic and stable up to 768 °C. Between 768 °C and 910 °C it becomes β-iron, which is alpha iron that has lost its magnetism; it dissolves very little carbon (0.025% at 721 °C). Most commercial nails are alloys of iron and other metals, usually coated or anodized, but they still show pure iron's characteristic magnetism and rapid oxidation.
\begin{document}
\title[Hecke $C^*$-algebras and semidirect products] {Hecke $C^*$-algebras and semidirect products}
\kaz \magnus \me
\subjclass[2000]{Primary 46L55; Secondary 20C08}
\keywords{Hecke algebra, group $C^*$-algebra, Morita equivalence, semidirect product}
\begin{abstract} We analyze Hecke pairs $(G,H)$ and the associated Hecke algebra $\mathcal H$ when $G$ is a semidirect product $N\rtimes Q$ and $H=M\rtimes R$ for subgroups $M\subset N$ and $R\subset Q$ with $M$ normal in~$N$.
Our main result shows that when $(G,H)$ coincides with its Schlichting completion and $R$ is normal in~$Q$, the closure of~$\mathcal H$ in~$C^*(G)$ is Morita-Rieffel equivalent to a crossed product $I\rtimes_\beta Q/R$, where~$I$ is a certain ideal in the fixed-point algebra $C^*(N)^R$. Several concrete examples are given illustrating and applying our techniques, including some involving subgroups of $\mathrm{GL}(2,K)$ acting on~$K^2$, where $K=\mathbb Q$ or $K=\mathbb Z[p^{-1}]$. In particular we look at the $ax+b$-group of a quadratic extension of~$K$. \end{abstract}
\maketitle
\section*{Introduction} \label{intro}
A \emph{Hecke pair} $(G,H)$ comprises a group $G$ and a subgroup $H$ for which every double coset is a finite union of left cosets, and the associated \emph{Hecke algebra}, generated by the characteristic functions of double cosets, reduces to the group $*$-algebra of $G/H$ when $H$ is normal.
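For concreteness, with one common normalization (conventions vary; ours are recalled in \secref{prelim}), the product of $f,g\in\cc H$ is the convolution
\[
(f*g)(x)=\sum_{yH\in G/H} f(y)\,g(y^{-1}x),\qquad x\in G,
\]
which is a finite sum because $f$ is supported on finitely many double cosets, each of which is a finite union of left cosets; when $H$ is normal this is precisely the convolution product on the group $*$-algebra of $G/H$.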
In \cite{hecke} we introduced the \emph{Schlichting completion} $(\overline G,\overline H)$ of the Hecke pair $(G,H)$ as a tool for analyzing Hecke algebras, based in part upon work of Tzanev \cite{tza}. (A slight variation on this construction appears in \cite{willis}.) The idea is that $\overline H$ is a compact open subgroup of $\overline G$ such that the Hecke algebra of $(\overline G,\overline H)$ is naturally identified with the Hecke algebra $\cc H$ of $(G,H)$. The characteristic function $p$ of $\overline H$ is a projection in the group $C^*$-algebra $A:=C^*(\overline G)$, and $\cc H$ can be identified with $pC_c(\overline G)p\subset A$; thus the closure of $\cc H$ in $A$ coincides with the corner $pAp$, which is Morita-Rieffel equivalent to the ideal $\overline{ApA}$.
In \cite{hecke} we were mainly interested in studying when $pAp$ is the enveloping $C^*$-algebra of the Hecke algebra $\cc H$, and when the projection $p$ is full in $A$, making the $C^*$-completion $pAp$ of $\cc H$ Morita-Rieffel equivalent to the group $C^*$-algebra $A$. We had the most success when $G=N\rtimes Q$ was a semidirect product with the Hecke subgroup~$H$ contained in the normal subgroup $N$.
In this paper we again consider $G=N\rtimes Q$, but now we allow $H=M\rtimes R$, where $M$ is a normal subgroup of $N$ and $R$ is a subgroup of $Q$ which normalizes $M$. Briefly: \begin{equation}\label{setup} \begin{matrix} G & = & N & \rtimes & Q\\ \vee & & \triangledown & & \vee\\ H & = & M & \rtimes & R. \end{matrix} \end{equation} This leads to a refinement of the Morita-Rieffel equivalence $\overline{ApA}\sim pAp$ (see \thmref{cross thm}).
We begin in \secref{prelim} by recalling our conventions from \cite{hecke} regarding Hecke algebras. In \secref{group} we describe the main properties of our group-theoretic setup~\eqref{setup}. In particular, we characterize the reduced Hecke pairs in terms of $N$, $Q$, $M$, and $R$.
In order to effectively analyze how our semidirect-product decomposition affects the Hecke topology, we need to go into somewhat more detail than might be expected. In particular, we must exercise some care to obtain the semidirect-product decomposition \[\overline G=\overline N\rtimes \overline Q,\quad \overline H=\overline M\rtimes \overline R\] for the Schlichting completion (see \corref{semidirect completion}), and to describe various bits of this completion as inverse limits of groups (see \thmref{inverse}).
\secref{crossed} is preparatory for \secref{hecke crossed}, but the results may be of independent interest. In \propref{newrosenberg} we show that if $(B,Q,\alpha)$ is an action, $R$ is a compact normal subgroup of $Q$, and $(B^R,Q/R,\beta)$ is the associated action, then the projection $q=\int_R r\,dr$ is in $M(B\times_\alpha Q)$ and $B^R\times_\beta Q/R\cong q(B\times_\alpha Q)q$. This generalizes the result of \cite{rosenberg}.
We also show in \thmref{combine} that under this correspondence the ideal $\overline{(B\times_\alpha Q)p(B\times_\alpha Q)}$ corresponds to an ideal $I\times_\beta Q/R$, where $I$ is a $Q/R$-invariant ideal of $B^R$.
In \secref{hecke crossed}, we assume that $R$ is normal in $Q$, and (without loss of generality) that the pair $(G,H)$ is equal to its Schlichting completion. The main result is \thmref{cross thm}, in which we take full advantage of the semidirect-product decomposition to show that the Hecke $C^*$-algebra $p_HC^*(G)p_H$ is Morita-Rieffel equivalent to a crossed product $I\times_\beta Q/R $, where $I$ is the ideal in $C^*(N)^R $ generated by $\{\alpha_s(p_M):s\in Q\}$. We look briefly at the special case where the normal subgroup $N$ is abelian.
Finally, in \secref{example} we give some examples to illustrate our results. Classically, Hecke algebras have most commonly been studied for pairs of semi-simple groups such as $(\mathrm{GL}(n,\mathbb Q),\mathrm{SL}(n,\mathbb Z))$. The work of Bost and Connes \cite{bc} showed the importance of also studying Hecke pairs of solvable groups. In the examples we mostly deal with the following situation: $K$ is either the field $\mathbb Q$ of rational numbers or the ring $\mathbb Z[p^{-1}]$ of rational numbers with denominators of the form $p^n$; $N=K^2$; $M=\mathbb Z^2$; $Q$ is a subgroup of $\mathrm{GL}(2,K)$ containing the diagonal subgroup, acting on $N$ in the obvious way; and $R=Q\cap\mathrm{GL}(2,\mathbb Z)$. It is not difficult to see that the Schlichting completions are $p$-adic or adelic versions of the same groups.
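To indicate what this means in the simplest case: by \propref{subbase} below, the subbasic neighborhoods of the identity in $N=K^2$ are the lattices $M_q=\mathbb Z^2\cap q\mathbb Z^2$ for $q\in Q$, and since $Q$ contains the diagonal subgroup, finite intersections of these form a cofinal family of finite-index sublattices of $\mathbb Z^2$ (for example $n\mathbb Z^2$ for every $n$ when $K=\mathbb Q$). Completing accordingly, one finds \[ \overline N\cong\mathbb A_f^2 \midtext{and} \overline M\cong\hat{\mathbb Z}^2 \righttext{when} K=\mathbb Q, \] where $\mathbb A_f$ denotes the ring of finite adeles and $\hat{\mathbb Z}=\invlim\mathbb Z/n\mathbb Z$, while for $K=\mathbb Z[p^{-1}]$ one finds the $p$-adic analogues $\overline N\cong\mathbb Q_p^2$ and $\overline M\cong\mathbb Z_p^2$.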
As specific examples, we look at the algebra studied by Connes and Marcolli in \cite{connes-marcolli}; see also \cite{lln:hecke}. Here $R$ is not normal in $Q$, so the full results of \secref{hecke crossed} do not apply. On the other hand, if $R$ is normal in $Q$ then \corref{M directed} does apply, and as in \cite{lr:ideal} one can use the Mackey orbit method to study the ideal structure of the $C^*$-algebras involved. A particular example of this is the $ax+b$-group over a quadratic extension $K[\sqrt d]$ treated in \cite{laca-franken}, and we shall see that this example raises some interesting questions. We also look at a nilpotent example, namely a version of the Heisenberg group over the rationals.
After we had completed the research for this paper, we became aware of the recent preprint \cite{lln:hecke}, which treats semidirect-product Hecke pairs in a way quite similar to ours. The present paper and \cite{lln:hecke} were written independently, and the techniques have only incidental overlap. We should mention that we treat only the case where $M$ is normal in $N$, while the context in \cite{lln:hecke} seems to be more general. Thus, for example, it would be difficult to adapt our results on inverse limits (see \subsecref{inverse limit}) to the context of \cite{lln:hecke}.
We would like to thank Arizona State University, the Norwegian University of Science and Technology, and the Norwegian Science Foundation, all of which have supported this research. We are also grateful to the referee for suggesting many improvements.
\section{Preliminaries} \label{prelim}
We adopt the conventions of \cite{hecke}, which contains more references. A \emph{Hecke pair} $(G,H)$ comprises a group $G$ and a \emph{Hecke subgroup} $H$, \emph{i.e.}, one for which every double coset $HxH$ is a finite union of left cosets $\{y_1H,\dots,y_{L(x)}H\}$. A good reference for the basic theory of Hecke pairs is \cite{kri}. A Hecke pair $(G,H)$ is \emph{reduced} if $\bigcap_{x\in G}xHx^{-1}=\{e\}$, and a reduced Hecke pair $(G,H)$ is a \emph{Schlichting pair} if $G$ is locally compact Hausdorff and $H$ is compact and open in $G$. In \cite{hecke}*{Theorem~3.8}, we gave a new proof of \cite{tza}*{Proposition~4.1}, which says that every reduced Hecke pair $(G,H)$ can be embedded in an essentially unique Schlichting pair $(\overline G,\overline H)$, which we call the \emph{Schlichting completion} of $(G,H)$. Specifically, $\overline G$ is the completion of $G$ in the (two-sided uniformity defined by the) \emph{Hecke topology} having a local subbase $\{xHx^{-1}\mid x\in G\}$ of neighborhoods of $e$, and $(\overline G,\overline H)$ is unique in the sense that if $(L,K)$ is any Schlichting pair and $\sigma:G\to L$ is a homomorphism such that $\sigma(G)$ is dense and $H=\sigma^{-1}(K)$, then $\sigma$ extends uniquely to a topological isomorphism $\overline\sigma:\overline G\to L$, and moreover $\overline\sigma(\overline H)=K$.
The associated \emph{Hecke algebra} is the vector subspace $\cc H$ of $\mathbb C^G$ spanned by the characteristic functions of double $H$-cosets, with operations defined by \begin{align*} f*g(x) &=\sum_{yH\in G/H}f(y)g(y^{-1} x) \\ f^*(x) &=\overline{f(x^{-1})}\Delta(x^{-1}), \end{align*} where $\Delta(x)=L(x)/L(x^{-1})$ and $L(x)$ is the number of left cosets $yH$ in the double coset $HxH$. Warning: some authors do not include the factor of $\Delta$ in the involution; for us it arises naturally when we embed $\cc H$ in $C_c(\overline G)$ (see \cite{hecke}*{Section~1}). One way to see how this embedding goes is the following: let $p=\Chi_{\overline H}$, which is a projection in
$C_c(\overline G)$ when the Haar measure on $\overline G$ has been normalized so that $\overline H$ has measure~$1$. Then the restriction map $f\mapsto f|G$ gives a $*$-isomorphism of the convolution algebra $pC_c(\overline G)p$ onto $\cc H$.
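On basis elements this isomorphism is easy to describe: since $\overline H$ is open and $H=G\cap\overline H$, one checks that $\overline Hx\overline H\cap G=HxH$ for every $x\in G$. The characteristic function $\Chi_{\overline Hx\overline H}$ lies in $C_c(\overline G)$ and is bi-invariant under translation by $\overline H$, so $p*\Chi_{\overline Hx\overline H}*p=\Chi_{\overline Hx\overline H}$, and restriction gives \[ \Chi_{\overline Hx\overline H}\big|_G=\Chi_{HxH} \righttext{for} x\in G. \]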
\subsection*{Notation} $H<G$ means $H$ is a subgroup of $G$. $H\vartriangleleft G$ means $H$ is a normal subgroup of $G$. If $N\vartriangleleft G$ and $Q<G$ such that $N\cap Q=\{e\}$ and $NQ=G$, then $G$ is the (internal) semidirect product of $N$ by $Q$, and we write $G = N\rtimes Q$.
\section{Groups} \label{group}
Here we describe the main properties of our group-theoretic setup~\eqref{setup} for Hecke semidirect products. We need to establish many elementary facts from group theory which are not standard, so we will give more detail than might seem necessary.
\subsection{Generalities}
We will be interested in subgroups of $H$ of the form $LS$, where $L<M$ and $S<R$. Note that $LS<MR$ if and only if $S$ normalizes $L$.
\begin{lem} If $A,B,C$ are subgroups of $G$ with: \begin{enumerate} \item $A\supset B$; \item $A\cap C=\{e\}$; \item $AC=CA$; \item $BC=CB$, \end{enumerate} then \[[AC:BC]=[A:B].\] \end{lem}
\begin{proof} The map $aB\mapsto aBC:A/B\to AC/BC$ is obviously well-defined and surjective, and is injective because \[a_1BC=a_2BC \implies a_2^{-1} a_1\in BC\cap A=B.\qedhere\] \end{proof}
\begin{cor}\label{MRLS} Suppose $L<M$ and $S<R$, and suppose $S$ normalizes~$L$, so that $LS$ is a subgroup of $MR$. Then \[ [M:L][R:S] = [MR:LS]. \] \end{cor}
\begin{proof} We have \[[MR:LS]=[MR:MS][MS:LS],\] so the result follows from the above lemma. \end{proof}
\begin{notnonly} For any subgroup $K$ of $G$ and $x\in G$, we define \[ K_x = K \cap xKx^{-1}. \] Thus $K_x$ is precisely the stabilizer subgroup of the coset $xK$ under the action of $K$ on $G/K$ by \emph{left translation}, and \begin{equation}\label{Kx}
[K:K_x] = \left| KxK / K \right|. \end{equation}
If $T$ is another subgroup of $G$, we let \[ T_{x,K} = \{ t\in T \mid txKt^{-1} = xK \} \] denote the stabilizer subgroup of $xK$ under the action of $T$ by \emph{conjugation} on the set of all subsets of $G$; thus \begin{equation}\label{TTxK}
[T:T_{x,K}] = \left| \{ txKt^{-1} \mid t\in T \} \right|. \end{equation} Note that if $T$ normalizes $K$, then the conjugation action of $T$ descends to $G/K$.
For $E\subset G$, we further define \[ K_E=\bigcap_{x\in E} K_x \midtext{and} T_{E,K}=\bigcap_{x\in E} T_{x,K}. \] \end{notnonly}
It will also be useful to observe that if $\{M_i\}_{i\in I}$ is a family of subgroups of $N$ and $\{R_i\}_{i\in I}$ is a family of subgroups of $Q$
such that $R_i$ normalizes $M_i$ for each $i\in I$, then, because $N\cap Q = \{e\}$, we have \begin{equation}\label{MRMR} \bigcap_{i\in I}M_iR_i =\biggl(\bigcap_{i\in I}M_i\biggr) \biggl(\bigcap_{i\in I}R_i\biggr). \end{equation}
\begin{lem}\label{RnL} Let $L$ be a subgroup of $N$ which is normalized by~$R$. For any $r\in R$ and $n\in N$, the following are equivalent\textup: \begin{enumerate} \item $r\in R_{n,L}$; \item $rnr^{-1}\in nL$; \item $r\in nLRn^{-1}$. \end{enumerate} \end{lem}
\begin{proof}[Sketch of Proof] (i) $\implies$ (ii) $\implies$ (iii) is clear. (iii) $\implies$ (ii) uses $N\cap Q=\{e\}$. (ii) $\implies$ (i) because $R$ normalizes $L$. \end{proof}
Taking $L=M$ in \lemref{RnL} and using $H=MR$, we have \begin{equation}\label{star} R_{n,M} = R\cap nMRn^{-1} = R\cap nHn^{-1} \supset R\cap nRn^{-1} = R_n. \end{equation} From this we deduce:
\begin{lem}\label{HMR} For any $n\in N$ and $q\in Q$, \begin{enumerate} \item $H_n = M R_{n,M}$; \item $H_q = M_q R_q$; \item $H_{qn} \cap H_q = M_q ( q R_{n,M} q^{-1} \cap R)$. \end{enumerate} \end{lem}
\begin{proof} (i) Suppose $h=mr\in H_n$ for $m\in M$ and $r\in R$. Then \[r\in m^{-1} nMRn^{-1} = n (n^{-1}m^{-1}nMR)n^{-1} = nMRn^{-1} = nHn^{-1},\] so (using \eqref{star}) \[mr \in m(R\cap nHn^{-1}) \subset MR_{n,M}.\] Thus $H_n\subset MR_{n,M}$. Conversely, also using \eqref{star}, \[MR_{n,M} = M(R\cap nHn^{-1}) \subset MR \cap MnHn^{-1} = H \cap nHn^{-1} = H_n.\]
(ii) By \eqref{MRMR} we have \begin{align*} H_q &= H \cap qHq^{-1} = MR \cap (qMq^{-1})(qRq^{-1}) \\&= (M\cap qMq^{-1})(R\cap qRq^{-1}) = M_q R_q. \end{align*}
(iii) Using part~(i) and \eqref{MRMR} we have \begin{align*} H_{qn} \cap H_q &= H\cap qHq^{-1} \cap qnHn^{-1}q^{-1} = H \cap q(H_n)q^{-1} \\&= MR \cap (qMq^{-1})(qR_{n,M}q^{-1}) = M_q (R \cap qR_{n,M}q^{-1}). \end{align*} \end{proof}
\subsection{Hecke pairs}
Since $[H:H_x] = |HxH/H|$ for any $x\in G$, the pair $(G,H)$ is Hecke if and only if each subgroup $H_x$ has finite index in $H$. Applying this to the pair $(N\rtimes Q,Q)$, we see that $(N\rtimes Q,Q)$ is Hecke if and only if $[Q:Q_n] = [Q:Q_{n,\{e\}}]<\infty$ for each $n\in N$. The next proposition extends this observation to our more general context.
\begin{prop}\label{hecke} The following are equivalent\textup: \begin{enumerate} \item $(G,H)$ is a Hecke pair; \item $[R:R_q]$, $[M:M_q]$ and $[R:R_{n,M}]$ are all finite for each $q\in Q$ and $n\in N$; \item $(Q,R)$, $(G,M)$ and $(N/M\rtimes R,R)$ are Hecke pairs; \item $(Q,R)$, $(G,M)$ and $(NR,H)$ are Hecke pairs. \end{enumerate} \end{prop}
\begin{proof} If $(G,H)$ is a Hecke pair, then for all $q\in Q$ and $n\in N$ we have \[ [M:M_q][R:R_q]=[MR:M_qR_q]=[H:H_q]<\infty \] {and} \[ [R:R_{n,M}]=[MR:MR_{n,M}]=[H:H_n]<\infty, \] so (i) implies (ii). Conversely, assuming~(ii), for any $q\in Q$ and $n\in N$, \lemref{HMR} gives \begin{align*} [H:H_{qn}] &\leq [H:H_{qn}\cap H_q] = [MR:M_q(qR_{n,M}q^{-1}\cap R)]\\ &=[M:M_q][R:qR_{n,M}q^{-1}\cap R]\\ &=[M:M_q][R:R_q][R_q:qR_{n,M}q^{-1}\cap R], \end{align*} which is finite because for any subgroups $S\supset T$ of $G$ we have $[R\cap S:R\cap T]\le [S:T]$ and $[qSq^{-1}:qTq^{-1}]=[S:T]$. Thus~(ii) implies~(i).
If $q\in Q$ and $n\in N$ then $qnMn^{-1} q^{-1}=qMq^{-1}$, so $[M:M_q]<\infty$ for all $q\in Q$ if and only if $(G,M)$ is Hecke. As observed above, $R$ is a Hecke subgroup of $N/M\rtimes R$ if and only if, for each $nM\in N/M$, the stabilizer subgroup of $nM$ in $R$ (acting by conjugation) has finite index in $R$. Since this subgroup is precisely $R_{n,M}$, we have $[R:R_{n,M}]<\infty$ for all $n\in N$ if and only if $(N/M\rtimes R,R)$ is Hecke. Therefore (ii) if and only if (iii).
Finally, if $n\in N$ and $r\in R$ then $nrHr^{-1} n^{-1}=nHn^{-1}$, so \[ [H:H_{nr}]=[H:H_n]=[R:R_{n,M}], \] therefore (iii) if and only if (iv). \end{proof}
\begin{prop} \label{reduced} Suppose $(G,H)$ is a Hecke pair. Then the following are equivalent\textup: \begin{enumerate} \item $(G,H)$ is reduced; \item $M_Q=\{e\}$ and $R_{N,\{e\}}\cap R_Q = \{e\}$. \end{enumerate} \end{prop}
\begin{proof} Since $(G,H)$ is reduced if and only if $H_G=\{e\}$, the proposition will follow easily from the identity \begin{equation}\label{HG} H_G = M_Q (R_{N,M_Q}\cap R_Q). \end{equation} To establish~\eqref{HG}, we first use \lemref{HMR} (iii) and \corref{MRLS} to get \begin{align*} H_G &= \bigcap_{x\in G}H_x =\bigcap_{q\in Q,n\in N}H_{qn} =\bigcap_{q\in Q,n\in N}(H_q\cap H_{qn})\\ &=\bigcap_{q\in Q,n\in N}M_q(qR_{n,M}q^{-1}\cap R)\\ &=\biggl(\bigcap_{q\in Q}M_q\biggr)
\biggl(\bigcap_{q\in Q,n\in N}qR_{n,M}q^{-1}\cap R\biggr). \end{align*}
Further, \begin{align*} \bigcap_{q\in Q,n\in N}qR_{n,M}q^{-1} \cap R &= \bigcap_{q\in Q,n\in N}R_{qnq^{-1},qMq^{-1}}\cap R_q\\ &= \bigcap_{q\in Q,n\in N}R_{n,qMq^{-1}}\cap \bigcap_{q\in Q}R_q\\ &= R_{N,M_Q} \cap R_Q. \qedhere \end{align*} \end{proof}
Note that $R_{N,\{e\}}$ consists of those elements of $R$ which commute element-wise with $N$.
\subsection{Hecke topology} In addition to our semidirect product setup~\eqref{setup}, now assume that $(G,H)$ is a reduced Hecke pair. Let $(\overline G,\overline H)$ denote its Schlichting completion.
\begin{prop} \label{subbase} The relative Hecke topologies of the relevant subgroups have the following subbases at the identity: \begin{enumerate} \item for both $N$ and $M$\textup: $\{M_q \mid q\in Q\}$; \item for $Q$\textup: $\{qR_{n,M}q^{-1} \mid q\in Q,n\in N\}$; \item for $R$\textup: $\{R\cap qR_{n,M}q^{-1} \mid q\in Q,n\in N\}$. \end{enumerate} \end{prop}
\begin{proof} (i) follows from the computation \begin{align*} N\cap qnHn^{-1} q^{-1} =qn(N\cap H)n^{-1} q^{-1} =qnMn^{-1} q^{-1} =qMq^{-1} \end{align*} and its immediate consequence, $M\cap qnHn^{-1} q^{-1} =M_q$.
For (ii), we have \begin{align*} Q\cap nHn^{-1} =Q\cap nMRn^{-1} \subset Q\cap MNRN =Q\cap NR =R, \end{align*} so \[ Q\cap qnHn^{-1} q^{-1} =q(Q\cap nHn^{-1})q^{-1} =q(R\cap nHn^{-1})q^{-1} =qR_{n,M}q^{-1}. \]
Finally, (iii) follows from~(ii). \end{proof}
The following corollary should be compared with \cite{lln:hecke}*{Theorem~2.9(ii)}; their extra hypothesis is satisfied in our special case ($M\vartriangleleft N$), but it would be complicated to verify that our result follows from theirs because their construction is significantly different from ours.
\begin{cor} \label{semidirect completion} If $(G,H)$ as in~\eqref{setup} is a reduced Hecke pair with Schlichting completion $(\overline G,\overline H)$, then \[\overline G=\overline N\rtimes\overline Q \midtext{and} \overline H=\overline M\rtimes\overline R,\] where the closures are all taken in $\overline G$. \end{cor}
\begin{proof} First of all, to show that $\overline G$ is the semidirect product $\overline N\rtimes\overline Q$ of its subgroups $\overline N$ and $\overline Q$ requires: \begin{enumerate} \item $\overline N\vartriangleleft\overline G$; \item $\overline G=\overline N\;\overline Q$; \item $\overline N\cap\overline Q=\{e\}$; \item $\overline G$ has the product topology of $\overline N\times\overline Q$. \end{enumerate} Item~(i) is obvious. To see~(ii), note that the subgroup $\overline N\;\overline Q$ contains both $G=NQ$ and $\overline M\;\overline R$. Since $\overline M$ is compact, the subgroup $\overline M\;\overline R$ is closed, and it follows that $\overline H=\overline M\;\overline R$. This implies~(ii), since every coset in $\overline G/\overline H$ can be expressed in the form $x\overline H$ for $x\in G$.
For~(iii), note that the quotient map $\psi:G\to Q\subset\overline Q$ is continuous for the Hecke topology of $G$ and the relative Hecke topology of $Q$, because a typical subbasic neighborhood of $e$ in $Q$ is of the form $qR_{n,M}q^{-1}$ for $q\in Q$ and $n\in N$, and \[\psi^{-1}(qR_{n,M}q^{-1})=NqR_{n,M}q^{-1}\] contains the neighborhood \[H_{qn}\cap H_q=M_q(R\cap qR_{n,M}q^{-1})\] of $e$ in $G$. Since $\overline Q$ is a complete topological group, $\psi$ extends uniquely to a continuous homomorphism $\overline\psi:\overline G\to\overline Q$. Because $\psi$ takes $N$ to $e$ and agrees with the inclusion map on $Q$, by density and continuity $\overline\psi$ takes $\overline N$ to $e$ and agrees with the inclusion map on $\overline Q$. Therefore $\overline N\cap\overline Q=\{e\}$.
To see how~(iv) follows, note that the multiplication map $(n,q)\mapsto nq$ of $\overline N\times\overline Q$ onto $\overline G$ is continuous by definition, and its inverse $x\mapsto (x\overline\psi(x)^{-1},\overline\psi(x))$ is also continuous because $\overline\psi$ is, as shown above.
It only remains to show that $\overline H=\overline M\rtimes\overline R$, but this follows immediately: we have $\overline M\cap\overline R=\{e\}$, and the subgroup $\overline M\;\overline R$ has the product topology since $\overline N\;\overline Q$ does. \end{proof}
\subsection{Inverse limits} \label{inverse limit}
Here we again assume that $(G,H)$ is a reduced Hecke pair. For each of our groups $M$, $N$, $R$, $H$, and $Q$ we want to describe the closure as an inverse limit of groups, so that we capture both the algebraic and the topological structure. From \cite{hecke}*{Proposition~3.10}, we know that the closure is topologically the inverse limit of the coset spaces of finite intersections of stabilizer subgroups. To get the algebraic structure we need enough of these intersections to be normal subgroups. In the case of $M$ and $N$, we already have what we need, since each $M_q$ is normal in $N$, and hence also in $M$. However, for $R$ we need to do more work.
\begin{lem} \label{normsbgp} Suppose $L<M$ and $S<R$. Then $LS\vartriangleleft MR$ if and only if \begin{enumerate} \item $L\vartriangleleft MR$, \item $S\vartriangleleft R$, and \item $S\subset R_{M,L}$. \end{enumerate} Moreover, in this case \[ MR/LS\cong (M/L)\rtimes (R/S). \] \end{lem}
\begin{proof} First assume $LS\vartriangleleft MR$. Then \[S = R\cap LS \vartriangleleft R\cap MR = R,\] and since $M\vartriangleleft MR$, we also have \[L = M\cap LS \vartriangleleft MR.\] For~(iii), fix $s\in S$ and $m\in M$. Then $m^{-1} sm\in LS$ because $LS\vartriangleleft MR$, so $m^{-1} sms^{-1}\in LS$. On the other hand, $m^{-1} sms^{-1}\in M$ because $S\subset R$ and $R$ normalizes $M$. Thus \[m^{-1} sms^{-1}\in LS\cap M=L,\] so $s\in R_{m,L}$.
Conversely, assume (i)--(iii). Then it suffices to show that $M$ conjugates $S$ into $LS$: for $m\in M$ and $s\in S$ we have $m^{-1} sms^{-1}\in L$ by \lemref{RnL}~(ii), and hence $m^{-1} sm\in LS$.
For the last statement, it is routine to verify that the map \[ mrLS\mapsto (mL,rS) \righttext{for}m\in M,r\in R \] gives a well-defined isomorphism. \end{proof}
\begin{notnonly} For $E\subset Q$ and $F\subset N$ put \[ R^E_F =\bigcap_{q\in E}qR_{F,M}q^{-1} \cap R =\bigcap_{q\in E}\bigcap_{n\in F}qR_{n,M}q^{-1} \cap R. \] \end{notnonly}
Note that the families \[\{M_E:E\subset Q\text{ finite}\} \midtext{and} \{R^E_F:\text{ both }E\subset Q\text{ and }F\subset N\text{ finite}\}\] are neighborhood bases at~$e$ in the relative Hecke topologies of~$M$ and~$R$, respectively.
\begin{notnonly} Let $\cc E$ be the family of all subsets $E\subset Q$ such that: \begin{enumerate} \item $E$ is a finite union of cosets in $Q/R$; \item $e\in E$; \item $RE=E$, \end{enumerate} and let $\cc F$ be the family of all pairs $(E,F)$ such that: \begin{enumerate} \setcounter{enumi}{3} \item $E\in \cc E$; \item $F$ is a finite union of cosets in $N/M$; \item $q^{-1} Mq\subset F$ for all $q\in E$. \end{enumerate} \end{notnonly}
\begin{lem}\label{2-10} For all $(E,F)\in\cc F$: \begin{enumerate} \item $R^E_F\vartriangleleft R$; \item $[R:R^E_F]<\infty$. \end{enumerate} \end{lem}
\begin{proof} $R^E_F$ is a subgroup of $R$ because $R_{F,M}$ is. For $r\in R$ we have \[ rR^E_Fr^{-1} =\bigcap_{q\in E}r(qR_{F,M}q^{-1}\cap R)r^{-1} =\bigcap_{q\in E}rqR_{F,M}q^{-1} r^{-1}\cap R =R^E_F \] since $rE=E$. This proves (i).
For (ii), first note that $[R:R_{F,M}]<\infty$ because $|F/M|<\infty$ and $R_{n,M}$ only depends upon the coset $nM$. Thus \[R_0:=\bigcap_{r\in R}rR_{F,M}r^{-1}\] has finite index in $R$. For each coset $tR$ contained in $E$ we have \[ \bigcap_{q\in tR}qR_{F,M}q^{-1} =\bigcap_{r\in R}trR_{F,M}r^{-1} t^{-1} =tR_0t^{-1}. \] Thus \[\bigcap_{q\in tR}qR_{F,M}q^{-1}\cap R\] has finite index in $R$. Letting $E=\{t_1R,\dots,t_kR\}$, it follows that \[ \bigcap_{q\in E}qR_{F,M}q^{-1}\cap R =\bigcap_{i=1}^k \left(\bigcap_{q\in t_iR}qR_{F,M}q^{-1}\cap R\right) \] has finite index in $R$. \end{proof}
\begin{lem}\label{2-11} For all $E\in \cc E$: \begin{enumerate} \item $M_E\vartriangleleft N$; \item $M_E\vartriangleleft M$; \item $M_E\vartriangleleft H$; \item $[M:M_E]<\infty$. \end{enumerate} \end{lem}
\begin{proof} (i)~holds because $M_q\vartriangleleft N$ for each $q$, and~(ii) follows since $M_E\subset M$.
(iii). For $r\in R$ we have \[rM_Er^{-1} =\bigcap_{q\in E}r(qMq^{-1}\cap M)r^{-1} =\bigcap_{q\in E}rqMq^{-1} r^{-1}\cap M =M_E\] since $rE=E$. Thus $M_E\vartriangleleft MR=H$ by (ii).
(iv). For each coset $tR$ contained in $E$ we have \[\bigcap_{q\in tR}qMq^{-1} =\bigcap_{r\in R}trMr^{-1} t^{-1} =tMt^{-1}.\] Thus $\bigcap_{q\in tR}M_q=M_t$ has finite index in $M$, and it follows that $M_E = \bigcap_{q\in E}M_q$ has finite index in $M$ as well. \end{proof}
\begin{lem}\label{2-12} For all $(E,F)\in\cc F$ we have \[R^E_F\subset R_{M,M_E}.\] \end{lem}
\begin{proof} Fix $s\in R^E_F$ and $m\in M$; we need to show that $s\in R_{m,M_E}$. Thus, for $q\in E$, we must show \[m^{-1} sms^{-1}\in qMq^{-1}.\] We have $q^{-1} mq\in F$, so $s\in qR_{q^{-1} mq,M}q^{-1}$. It follows that \[ q^{-1} m^{-1} sms^{-1} q =(q^{-1} m^{-1} q)(q^{-1} sq)(q^{-1} mq)(q^{-1} s^{-1} q) \in M, \] hence $m^{-1} sms^{-1}\in qMq^{-1}$, as desired. \end{proof}
Lemmas~\ref{2-10}--\ref{2-12} yield the following:
\begin{prop} For all $(E,F)\in\cc F$ we have \[M_ER^E_F\vartriangleleft H \midtext{and} [H:M_ER^E_F]<\infty.\] \end{prop}
\begin{thm} \label{inverse} With the above notation, we have: \begin{enumerate} \item $\overline M=\invlim_{E\in\cc E}M/M_E$; \item $\overline N=\invlim_{E\in\cc E}N/M_E$; \item $\overline R=\invlim_{(E,F)\in\cc F}R/R^E_F$; \item $\overline H=\invlim_{(E,F)\in\cc F}M/M_E\rtimes R/R^E_F$, \end{enumerate} all as topological groups. \end{thm}
\begin{proof} By the preceding results, it suffices to show that for all finite subsets \[E'\subset Q\midtext{and} F'\subset N\] there exists $(E,F)\in\cc F$ such that \[M_E\subset M_{E'}\midtext{and} R^E_F\subset R^{E'}_{F'}.\] Put \[E''=\bigl(E'\cup\{e\}\bigr)R \midtext{and} F''=\bigl(F'\cup\{e\}\bigr)M.\] Since $(Q,R)$ is Hecke, $E:=RE''$ is a finite union of cosets in $Q/R$, and it follows that $E\in\cc E$. We have $M_E\subset M_{E'}$ {since} $E\supset E'$.
Let $M_0$ be the subgroup of $N$ generated by the conjugates $q^{-1} Mq$ for $q\in E$. Then $M_0\vartriangleleft N$ since $q^{-1} Mq\vartriangleleft N$ for each $q$. Since $E$ is a finite union of double cosets of $R$ in $Q$, and since $(Q,R)$ is Hecke, $E$ is a finite union of right cosets of $R$ in $Q$. Thus the family $\{q^{-1} Mq:q\in E\}$ is finite. Since $M_0$ is the product of the subgroups $q^{-1} Mq$ (because they are normal in $N$), it follows that $[M_0:M]<\infty$. Thus, putting $F=M_0F''$, we have $(E,F)\in\cc F$, and moreover $R^E_F\subset R^{E'}_{F'}$ {since} $E\supset E'$ and $F\supset F'$. \end{proof}
As a topological space, $\overline Q=\invlim_{E,F}Q/R_F^E$, but since the subgroups $R_F^E$ are not in general normal in $Q$, the group structure of $\overline Q$ is more complicated. For details on this, we refer to \cite{hecke}*{Remark~3.11}. In the special case where $Q$ is abelian, we do have $R^E_F\vartriangleleft Q$, so \[ \overline Q=\invlim_{E,F}Q/R_F^E \] as topological groups.
\section{Crossed products} \label{crossed}
In this section we prove a few results concerning crossed products, subgroups, and projections. We state these results in somewhat greater generality than we require, since they might be useful elsewhere and no extra work is required.
\subsection*{Compact subgroups}
Let $R$ be a compact normal subgroup of a locally compact group $Q$. We identify $Q$ and $C_c(Q)$ with their canonical images in $M(C^*(Q))$ and $C^*(Q)$, respectively. Normalize the Haar measure on $R$ so that $R$ has measure $1$. Then $q:=\Chi_R$ is a central projection in $M(C^*(Q))$, and the map $\tau:Q/R\to M(C^*(Q))$ defined by \begin{equation} \label{tau} \tau(sR)=sq\righttext{for} s\in Q \end{equation} integrates to give an isomorphism of $C^*(Q/R)$ with the ideal $C^*(Q)q$ of $C^*(Q)$.
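For completeness, we note why $q$ is a central projection: with this normalization, $q^*=q$ (the Haar measure of the compact group $R$ is inversion-invariant) and \[ q^2=\int_R\int_R rr'\,dr\,dr'=\int_R q\,dr'=q \] by left invariance of the Haar measure on $R$, while for $s\in Q$ we have \[ sqs^{-1}=\int_R srs^{-1}\,dr=\int_R r\,dr=q, \] since conjugation by $s$ is a topological automorphism of the compact group $R$ and therefore preserves its normalized Haar measure. As the unitaries $s\in Q$ generate $C^*(Q)$, it follows that $q$ is central in $M(C^*(Q))$.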
Let $\alpha$ be an action of $Q$ on a $C^*$-algebra $B$. We identify $B$ and $C^*(Q)$ with their canonical images in $M(B\times_\alpha Q)$. Thus $q$ is a projection in $M(B\times_\alpha Q)$, and we may regard $\tau$ as a homomorphism of $Q/R$ into $M(B\times_\alpha Q)$.
Let $\Phi(b)=\int_R\alpha_r(b)\,dr$ be the faithful conditional expectation of $B$ onto the fixed-point algebra $B^R$. Then an elementary calculation shows that \[ qbq=\Phi(b)q=q\Phi(b)\righttext{for} b\in B. \] Thus $qBq=B^Rq$, and $q$ commutes with every element of $B^R$. Hence the formula \begin{equation} \label{sigma} \sigma(b)=bq \end{equation} defines a homomorphism $\sigma$ of $B^R$ onto the $C^*$-subalgebra $B^Rq$ of $M(B\times_\alpha Q)$. We will deduce from \propref{newrosenberg} below that $\sigma$ is in fact an isomorphism.
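The elementary calculation referred to runs as follows: for $b\in B$ and $r\in R$ we have $rb=\alpha_r(b)r$ in $M(B\times_\alpha Q)$ and $rq=q$, so \[ qbq=\int_R rbq\,dr=\int_R\alpha_r(b)rq\,dr=\biggl(\int_R\alpha_r(b)\,dr\biggr)q=\Phi(b)q, \] and since $\Phi(b)$ is $R$-invariant, also $q\Phi(b)=\int_R\alpha_r(\Phi(b))r\,dr=\Phi(b)q$.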
Let $\beta$ be the action of $Q/R$ on $B^R$ obtained from $\alpha$. It is easy to see that the maps $\sigma$ and $\tau$ from \eqref{sigma} and~\eqref{tau} combine to form a covariant homomorphism $(\sigma,\tau)$ of the action $(B^R,Q/R,\beta)$, and that the integrated form \begin{equation} \label{theta} \theta:=\sigma\times \tau:B^R\times_\beta Q/R\to q(B\times_\alpha Q)q \end{equation} is surjective.
In the special case $R=Q$, the following is the main result of \cite{rosenberg}:
\begin{prop} \label{newrosenberg} Let $(B,Q,\alpha)$ be an action, let $R$ be a compact normal subgroup of $Q$, let $(B^R,Q/R,\beta)$ be the associated action, and let $q=\Chi_R$. Then the map $\theta:B^R\times_\beta Q/R\to q(B\times_\alpha Q)q$ from \eqref{theta} is an isomorphism. \end{prop}
\begin{proof} By the discussion preceding the statement of the proposition, it remains to verify that $\theta$ is injective, and we do this by showing that for every covariant representation $(\pi,U)$ of $(B^R,Q/R,\beta)$ on a Hilbert space $V$ there exists a representation $\rho$ of $q(B\times_\alpha Q)q$ on $V$ such that $\rho\circ\theta=\pi\times U$.
Recall from the theory of Rieffel induction \cite{rie:induced} that the conditional expectation $\Phi:B\to B^R$ gives rise to a $B^R$-valued inner product \[\<b,c\rangle_{B^R}=\Phi(b^*c)\] on $B$, so that the completion $X$ of $B$ is a Hilbert $B^R$-module. Moreover, $B$ acts on the left of $X$ by adjointable operators, so we can use $X$ to induce $\pi$ to a representation $\widetilde\pi$ of $B$ on $\widetilde V:=X\otimes_{B^R} V$. An easy computation shows that the formula \[ \widetilde U_s(b\otimes \xi)=\alpha_s(b)\otimes U_{sR}\xi \righttext{for}s\in Q,b\in B,\xi\in V \] determines a representation $\widetilde U$ of $Q$ on $\widetilde V$ such that $(\widetilde\pi,\widetilde U)$ is a covariant representation of $(B,Q,\alpha)$.
Thus $\widetilde\pi\times \widetilde U$ is a representation of the crossed product $B\times_\alpha Q$ on $\widetilde V$; let $\rho_1$ be its restriction to the corner $q(B\times_\alpha Q)q$. We have $\rho_1(q)\widetilde V=B^R\otimes_{B^R} V$, because if $b\in B$ and $\xi\in V$ then \begin{align*} \rho_1(q)(b\otimes \xi) &=\int_R\widetilde U_r(b\otimes \xi)\,dr \\&=\int_R\bigl(\alpha_r(b)\otimes U_{rR}\xi\bigr)\,dr \\&=\int_R\alpha_r(b)\,dr\otimes \xi. \end{align*} The subspace $B^R\otimes_{B^R}V$ is invariant for the representation $\rho_1$; let $\rho_2$ denote the associated subrepresentation of $q(B\times_\alpha Q)q$. A routine computation shows that \[ W(b\otimes \xi)=\pi(b)\xi \righttext{for}b\in B^R,\xi\in V \] determines a unitary map $W$ of $B^R\otimes_{B^R}V$ onto $V$ which implements an equivalence between the representations $\rho_2\circ \theta$ and $\pi\times U$. Thus we can take $\rho=\ad W\circ \rho_2$. \end{proof}
\begin{cor} \label{fix iso} Let $(B,Q,\alpha)$ be an action, let $R$ be a compact normal subgroup of $Q$, let $(B^R,Q/R,\beta)$ be the associated action, and let $q=\Chi_R$. Then the map $\sigma:B^R\to B^Rq$ from \eqref{sigma} is an isomorphism. \end{cor}
\begin{proof} It remains to observe that $\sigma$ is faithful, being the composition of the injective homomorphism $\theta$ with the canonical embedding of $B^R$ into $M(B^R\times_\beta Q/R)$. \end{proof}
\subsection*{Two projections}
If $A$ is a $C^*$-algebra and $p$ is a projection in $M(A)$, then one of the most basic applications of Rieffel's theory \cite{rie:induced} is that the ideal $\overline{ApA}$ is Morita-Rieffel equivalent to the corner $pAp$ via the $\overline{ApA}-pAp$ imprimitivity bimodule $Ap$. For later purposes, we will need a slightly more subtle variant:
\begin{lem} \label{morita} Let $A$ be a $C^*$-algebra, and let $p,q\in M(A)$ be projections with $p\le q$. Then $q\overline{ApA}q$ is Morita-Rieffel equivalent to $pAp$. \end{lem}
\begin{proof} Since $p\le q$ we have $p\in M(qAq)$ and $(qAq)p(qAq)=qApAq$, so applying the above Morita-Rieffel equivalence with $A$ replaced by $qAq$ shows that $q\overline{ApA}q$ is Morita-Rieffel equivalent to $p(qAq)p=pAp$, with imprimitivity bimodule $qAp$. \end{proof}
\subsection*{Central projection}
Let $\beta$ be an action of a locally compact group $T$ on a $C^*$-algebra $C$, and let $d\in M(C)$ be a central projection. Then $d$ may also be regarded as a multiplier of the crossed product $C\times_\beta T$, and it generates the ideal \[\overline{(C\times_\beta T)d(C\times_\beta T)}.\]
\begin{prop} With the above notation, we have: \begin{enumerate} \item $\overline{(C\times_\beta T)d(C\times_\beta T)}=I\times_\beta T$, where $I$ is the $T$-invariant ideal of $C$ generated by $d$. \item $I=\clspn\{\beta_t(d)C:t\in T\}=\{c\in C:p_\infty c=c\}$, where $p_\infty=\sup\{\beta_t(d):t\in T\}$. \end{enumerate} \end{prop}
\begin{proof} (i) follows from \cite{gre:local}*{Propositions~11~(ii) and 12~(i)}.
(ii) The first equality holds because $d$ is a central projection. For the second, note that the projections $\{\beta_t(d):t\in T\}$ are central, so their supremum $p_\infty$ is an open central projection in $C^{**}$, and the desired equality follows from, \emph{e.g.}, \cite{ped}*{Proposition~3.11.9}. To make this part of the proof self-contained, we include the argument: put \[J=\{c\in C:p_\infty c=c\}.\] For any $t\in T$ and $c\in C$ we have $\beta_t(d)\le p_\infty$, so \[p_\infty \beta_t(d)c=\beta_t(d)c.\] Thus $I\subset J$. Suppose $a\in J$ but $a\notin I$. Then there exists a nondegenerate representation $\pi$ of $C$ such that $\pi(a)\ne 0$ but $I\subset \ker\pi$. Extend $\pi$ to a weak*-weak-operator continuous representation $\overline\pi$ of $C^{**}$. Enlarge the set $\{\beta_t(d):t\in T\}$ to an upward-directed set $P$ of central projections in $M(C)$, so that there is an increasing net $\{p_i\}$ in $P$ converging weak* to $p_\infty$. Then $p_ia\to p_\infty a$ weak*, so $\pi(p_ia)\to \overline\pi(p_\infty a)$. We have $\overline\pi(p_\infty a)=\pi(a)$ because $a\in J$, and $\pi(p_ia)=0$ for all $i$, so we deduce that $\pi(a)=0$, a contradiction. \end{proof}
\begin{q} When will $p_\infty$ be a multiplier of $B^R$? (\exref{ex:Heisenberg} shows that it is not always so.) \end{q}
\subsection*{Combined Results}
With the notation and assumptions of Proposition~\ref{newrosenberg}, put \[A=B\times_\alpha Q.\] Also let $d\in M(B)$ be an $R$-invariant central projection, so that $d$ is also a central projection in $M(B^R)$. Put \[p_\infty=\sup\{\alpha_s(d):s\in Q\}.\] Then $p_\infty$ is an open central projection in $(B^R)^{**}$. Let $I$ be the $Q/R$-invariant ideal of $B^R$ generated by $d$. We have $dq=qd\in M(A)$, and we denote this projection by $p$.
The following theorem combines the previous results in this section:
\begin{thm} \label{combine} With the above notation, we have: \begin{enumerate} \item $\theta(I\times_\beta Q/R)=q\overline{ApA}q$. \item $I=\clspn\{\alpha_s(d)B^R:s\in Q\}=\{b\in B^R:p_\infty b=b\}$. \item $\sigma(I)=\clspn\{sds^{-1} qBq:s\in Q\}=\clspn\{sqdBqs^{-1}:s\in Q\}$. \item $pAp$ is Morita-Rieffel equivalent to $I\times_\beta Q/R$. \end{enumerate} \end{thm}
\begin{proof} The only part that still requires proof is (iii). We have \begin{align*} \sigma(I) &=\clspn_{s\in Q}\theta\circ\alpha_s(dB^R) \end{align*} because $\alpha_s(B^R)=B^R$. For each $s\in Q$ we have \begin{align*} \sigma\circ\alpha_s(dB^R) &=\sigma\circ\beta_{sR}(dB^R) \\&=\tau_{sR}\sigma(dB^R)\tau_{sR}^* \righttext{(covariance)} \\&=(sq)dB^Rq(sq)^* =sqdB^Rqs^{-1} =sqdqBqs^{-1} \\&=sqdBqs^{-1} \righttext{($dq=qd$)} \\&=sds^{-1} qBq \righttext{($sq=qs,sB=Bs$)}, \end{align*} and (iii) follows. \end{proof}
\section{Hecke crossed products} \label{hecke crossed}
In this section our main object of study is a Schlichting pair $(G,H)$ which has the semidirect-product decomposition of \eqref{setup}, with the additional condition that $R$ be normal in $Q$. We shall obtain crossed-product $C^*$-algebras which are Morita-Rieffel equivalent to the completion of the Hecke algebra inside $C^*(G)$, similarly to certain results of \cite{hecke}. At the end of the section we shall briefly indicate how our results can be applied if the Hecke pair is incomplete.
Put $A=C^*(G)$ and $B=C^*(N)$, and let $\alpha$ denote the canonical action of $Q$ on $B$ determined by conjugation of $Q$ on $N$. Then $A$ is isomorphic to the crossed product $B\times_\alpha Q$, and we identify these two $C^*$-algebras.
Normalize the Haar measures on $N$ and $Q$ so that $M$ and $R$ each have measure $1$. Then the product measure is a Haar measure on $G$, and $H$ has measure $1$. Thus $p_M:=\Chi_{M}$ is a central projection in $B$, hence is a projection in $M(A)$. Similarly, $p_R:=\Chi_{R}$ is a central projection in $C^*(Q)$, hence also a projection in $M(A)$, and we have \[ p_H:=\Chi_{H}=p_Mp_R=p_Rp_M\in A. \]
By \cite{hecke}*{Corollary~4.4} the Hecke algebra of the pair $(G,H)$ is $\cc H=p_HC_c(G)p_H$, whose closure in $A$ is the corner $p_HAp_H$. From \secref{crossed} we get isomorphisms \begin{align*} \theta&=\sigma\times\tau:B^R\times_\beta Q/R\xrightarrow{\cong} p_RAp_R\\ \sigma&:B^R\xrightarrow{\cong} B^Rp_R\\ \tau&:C^*(Q/R)\xrightarrow{\cong} C^*(Q)p_R, \end{align*} and an ideal \[I=\{b\in B^R:p_\infty b=b\}\vartriangleleft B^R,\] where \[p_\infty=\sup \{\alpha_s(p_M):s\in Q\}\in (B^R)^{**}.\] \thmref{combine} quickly gives the following analogue of \cite{hecke}*{Theorem~8.2}:
\begin{thm} \label{cross thm} With the above notation: \begin{flalign} &\theta(I\times_\beta Q/R)=p_R\overline{Ap_HA}p_R.\tag{i} \\&I=\clspn\{\alpha_s(p_M)B^R:s\in Q\}.\tag{ii} \\&\sigma(I) =\clspn\{sp_Ms^{-1} p_RBp_R:s\in Q\}\tag{iii} \\&\qquad=\clspn\{sp_Rp_MBp_Rs^{-1}:s\in Q\}\notag \\&\qquad=\clspn\{sp_Ms^{-1} p_Rnp_R:s\in Q,n\in N\}.\notag \\&\text{$p_HAp_H$ is Morita-Rieffel equivalent to $I\times_\beta Q/R$.}\tag{iv} \end{flalign}
\end{thm}
\begin{proof} The only thing left to prove is the last equality of part (iii), and this follows from \thmref{combine}, because $M$ is compact open in $N$, hence \[p_MB=\clspn\{p_Mn:n\in N\}\] (note that the projection $d$ from \thmref{combine} is $p_M$ here). \end{proof}
\begin{rem} Note that if $R$ is nontrivial then $p_H$ is never full in $A$: Since $N$ is normal in $G$ with $Q=G/N$, there is a natural homomorphism $C^*(G)\to C^*(Q)$ which maps $p_H$ to $p_R$. Thus $p_R$ is a nontrivial projection, which, being central, is not full in $C^*(Q)$. \end{rem}
We say that the family $\{sMs^{-1}:s\in Q\}$ of conjugates of $M$ is \emph{downward-directed} if the intersection of any two of them contains a third.
\begin{prop}\label{down} If $\{sMs^{-1}:s\in Q\}$ is downward-directed, then \[p_R\overline{Ap_HA}p_R=p_RAp_R\cong B^R\times_\beta Q/R.\] \end{prop}
\begin{proof} Because the pair $(G,H)$ is reduced we have \[\bigcap_{s\in Q}sMs^{-1}=\{e\},\] so the upward-directed set $\{sp_Ms^{-1}:s\in Q\}$ of projections has supremum $p_\infty=1$ in $(B^R)^{**}$. Therefore the ideal $I$ from \thmref{cross thm} coincides with $B^R$, and the result follows. \end{proof}
\begin{rem} In the above proposition, we have \[p_R\overline{Ap_HA}p_R=p_RAp_R\] although the ideal $\overline{Ap_HA}$ of $A$ is proper if $R$ is nontrivial. \end{rem}
As in \cite{hecke}*{Section~7}, we specialize to the case where $N$ is abelian. Taking Fourier transforms, the action $\alpha$ of $Q$ on $B$ becomes an action $\alpha'$ on $C_0(\widehat N)$: \[ \alpha'_s(f)(\phi)=f(\phi\circ \alpha_s) \righttext{for}s\in Q,f\in C_0(\widehat N),\phi\in \widehat N. \] The smallest $Q$-invariant subset of $\widehat N$ containing $M^\perp$ is \[ \Omega=\bigcup_{s\in Q}(sMs^{-1})^\perp. \] The Fourier transform of the fixed-point algebra $B^R$ is isomorphic to $C_0(\widehat N/R)$, where $\widehat N/R$ is the orbit space under the action of $R$. The smallest $Q/R$-invariant subset of $\widehat N/R$ containing $M^\perp/R$ is $\Omega/R$. Thus the Fourier transform of the ideal $I$ of $B^R$ is $C_0(\Omega/R)$. Let $\gamma$ be the associated action of $Q/R$ on $C_0(\Omega/R)$. The following corollary is analogous to \cite{hecke}*{Corollary~7.1}.
\begin{cor} With the assumptions and notation of Proposition~\ref{down}, if $N$ is abelian then $p_HAp_H$ is Morita-Rieffel equivalent to the crossed product $C_0(\Omega/R)\times_\gamma Q/R$. \end{cor}
We finish this section with a brief indication of how the above general theory can be used when $(G,H)$ is the Schlichting completion of a reduced Hecke pair $(G_0,H_0)$. More precisely, we assume that $G_0=N_0\rtimes Q_0$, $M_0\vartriangleleft N_0$, $R_0\vartriangleleft Q_0$, $R_0$ normalizes $M_0$, and that $(G_0,H_0)$ is a reduced Hecke pair (and Propositions~\ref{hecke}--\ref{reduced} give conditions under which the latter happens). By \corref{semidirect completion}, the closures $N$, $Q$, $M$, and $R$ of $N_0$, $Q_0$, $M_0$, and $R_0$, respectively, satisfy the conditions of the current section. The action $(B,Q,\alpha)$ restricts to an action $(B,Q_0,\alpha_0)$, and by density we have $B^R=B^{R_0}$. The map $sR_0\mapsto sR$ for $s\in R_0$ gives an isomorphism $Q_0/R_0\cong Q/R$ of discrete groups, and the action $\beta$ of $Q/R$ on $B^R$ corresponds to an action $\beta_0$ of $Q_0/R_0$ on $B^{R_0}$. Thus we have a natural isomorphism \[ B^R\times_\beta Q/R\cong B^{R_0}\times_{\beta_0} Q_0/R_0. \] Again by density, for all $s\in Q$ there exists $s_0\in Q_0$ such that $p_Rs=p_Rs_0$, and similarly for all $n\in N$ there exists $n_0\in N$ such that $np_M=n_0p_M$. We deduce:
\begin{cor} Using the above isomorphisms and identifications: \begin{enumerate} \item $I$ is the $Q_0/R_0$-invariant ideal of $B^{R_0}$ generated by $p_M$; \item $I\times_{\beta_0} Q_0/R_0\cong p_R\overline{Ap_HA}p_R$; \item $p_\infty=\sup\{sp_Ms^{-1}:s\in Q_0\}$; \item $I\cong \clspn\{sp_Ms^{-1} p_Rnp_R:s\in Q_0,n\in N_0\}$; \item $p_HAp_H$ is Morita-Rieffel equivalent to $I\times_{\beta_0} Q_0/R_0$. \end{enumerate} \end{cor}
As explained in \cite{hecke}, many of the nice properties of the Hecke algebra in \cite{bc} hold because the family $\{xHx^{-1}\mid x\in G\}$ of conjugates of $H$ is downward-directed; in particular this implies that the projection $p$ is full. In our situation $p$ can only be full if $R=\{e\}$, but we do have the following:
\begin{cor} \label{M directed} Suppose the conjugates $\{sMs^{-1}\mid s\in Q\}$ of $M$ are downward-directed. Then $I=B^{R_0}$ and $p_HAp_H$ is Morita-Rieffel equivalent to $B^{R_0}\times_{\beta_0}Q_0/R_0$. \end{cor} \begin{proof} We have $sp_Ms^{-1}=p_{sMs^{-1}}$, so by the assumptions $p_\infty=1$. \end{proof}
Continuing with $(G,H)$ being the Schlichting completion of $(G_0,H_0)$ as above, we again consider the special case where $N$, equivalently $N_0$, is abelian. Fourier transforming, by density we have \[ \Omega=\bigcup_{s\in Q_0}(sMs^{-1})^\perp, \] and there is an associated action $\gamma_0$ of $Q_0/R_0$ on $C_0(\Omega/R)$, giving:
\begin{cor} With the above notation, $p_HAp_H$ is Morita-Rieffel equivalent to $C_0(\Omega/R)\times_{\gamma_0} Q_0/R_0$. \end{cor}
\section{Examples} \label{example}
We shall here illustrate the results from the preceding sections with a number of examples. Some arguments are only sketched.
First note that the case $R=\{e\}$ is treated in \cite{hecke}*{Sections~7--8}.
\begin{ex} \label{ex:general.bad} The situation with $M=\{e\}$ and $R\vartriangleleft Q$ is also interesting. From \secref{group} we see that $(NQ,R)$ is Hecke if and only if $R_{n,\{e\}}=\{r\in R\mid rnr^{-1}=n\}$ has finite index in $R$ for all $n$. The pair is reduced if and only if $\bigcap_n R_{n,\{e\}}=\{e\}$, \emph{i.e.}, if the map $R\to \aut N$ is injective. Here $\overline N=N$, $p:=p_H=p_R$, and \thmref{cross thm} gives Morita-Rieffel equivalences between $\overline{ApA}$, $pAp$ and $C^*(N)^R\times Q/R$. \cite{hecke}*{Example~10.1} is a special case of this situation. \end{ex}
We shall next study $2\times 2$ matrix groups (and leave it to the reader to see how this generalizes to $n\times n$ matrices). For any ring $J$ we let $\mathrm{M}(2,J)$ denote the set of all $2\times 2$ matrices with entries in $J$; we let $\mathrm{GL}(2,J)$ denote the group of invertible elements of $\mathrm{M}(2,J)$; $\mathrm{SL}_{\pm}(2,J)$ denotes the subgroup of $\mathrm{GL}(2,J)$ consisting of those matrices with determinant $\pm 1$, and $\mathrm{SL}(2,J)$ is the subgroup of $\mathrm{GL}(2,J)$ of matrices with determinant $1$.
\begin{prop} \label{N bar no p} Suppose $N=\mathbb Q^2$, $M=\mathbb Z^2$, $Q$ is a subgroup of $\mathrm{GL}(2,\mathbb Q)$ containing the diagonal subgroup
$D=\{ (\begin{smallmatrix} \lambda &0\\ 0&\lambda \end{smallmatrix}) \mid\, \lambda\in \mathbb Q^\times\}$, and
$R=Q\cap \mathrm{GL}(2,\Z)$. Then $(NQ,MR)$ is a reduced Hecke pair, and the Schlichting completion is given by \[ \overline N={\mathcal A}_f^2 ,\quad\overline M={\mathcal Z}^2,\quad \overline R=\invlim R/R(s), \midtext{ and } \overline Q=\bigcup_{q\in Q/R} q\overline R, \] where $\overline Q$ has the topology from $\overline R$, i.e., $q_i\rightarrow e$ if and only if $q_i\in \overline R$ eventually and $q_i\rightarrow e$ in $ \overline R$. \end{prop}
\begin{proof} Given $q\in Q$ there is an integer matrix $k\in D$ such that $kq^{-1}$ is an integer matrix. From this it follows that $kq^{-1} \mathbb Z^2\subset \mathbb Z^2$ and therefore $kMk^{-1} \subset qMq^{-1}$. This implies that the sets $\{k\mathbb Z^2\}$ are downward-directed and form a base at $e$ for the Hecke topologies of $M$ and $N$, by \propref{subbase}. We also note that $\bigcap_k kMk^{-1}=\bigcap_k k\mathbb Z^2=\{e\}$, by \propref{reduced}. Thus $\overline N={\mathcal A}_f^2$ and $\overline M={\mathcal Z}^2$, with ${\mathcal A}_f$ the finite adeles and ${\mathcal Z}$ the integers in ${\mathcal A}_f$.
Next, if $n\in N$ there exists $s\in\mathbb Z$ such that $sn\in M$. Take
$n_1=(\begin{smallmatrix} 1/s\\ 0 \end{smallmatrix})$ and $n_2=(\begin{smallmatrix} 0\\ 1/s \end{smallmatrix})$. By definition $r\in R_{n,M}$ if and only if $(r-I)n\in \mathbb Z^2$. One checks that $R_{n_1,M}\cap R_{n_2,M}\subset R_{n,M}$ and that
\[
R_{n_1,M}\cap R_{n_2,M}=
\left\{r\in R\mid\, r-I \in \mathrm{M}(2, s\mathbb Z)\right\}. \] Call this subgroup $R(s)$; it is clearly a normal subgroup of finite index in $R$.
Suppose $q=(\begin{smallmatrix} a&b\\ c&d \end{smallmatrix})\in Q$, and without loss of generality we may assume $q\in \mathrm{M}(2,\mathbb Z)$. Putting $t=\det (q)=ad-bc$, for $r\in R(t)$ we have \[ q^{-1}( r- I)q= t^{-1} (\begin{smallmatrix} d&-b\\ -c&a \end{smallmatrix}) (r-I) (\begin{smallmatrix} a&b\\ c&d \end{smallmatrix}) \in \mathrm{M}(2,\mathbb Z), \] and it follows that $q^{-1} rq\in \mathrm{M}(2,\mathbb Z)$. The same argument holds for $r^{-1}$, so both $q^{-1} rq$ and $q^{-1} r^{-1} q$ are integer matrices in $Q$. Thus \[q^{-1} rq\in Q\cap \mathrm{GL}(2,\Z)=R.\] From this it follows that \[ R(t)\subset R\cap qRq^{-1} \quad\text{ for }\quad t=\det (q), \] and we have just observed that $[R:R(t)]<\infty$, so $[R:R_q] < \infty$.
The same argument also shows that $R(st)\subset R\cap qR(s)q^{-1}$ for any~$s$, and therefore to any given finite sets $E\subset Q$ and $F\subset N$ there exists $s\in\mathbb N$ such that $R(s)\subset R^E_F$. Combining all this with \propref{subbase} we see that the family $\{R(s)\mid\, s\in \mathbb N\}$ is a base at $e$ for the Hecke topology restricted to $R$ or~$Q$.
Finally, note that $\bigcap_s R(s)=\{e\}$. \end{proof}
A similar result holds when $\mathbb Q$ is replaced by other rings, \emph{e.g.}, $\mathbb Z[p^{-1}]$ for a prime number $p$ (not to be confused with the projection~$p$). We state it without proof:
\begin{prop} \label{N bar with p} Suppose $N=\mathbb Z[p^{-1}]^2$, $M=\mathbb Z^2$, $Q$ is a subgroup of $\mathrm{GL} (2,\mathbb Z[p^{-1}])$ containing the diagonal subgroup $D=\{ (\begin{smallmatrix} p^n&0\\ 0&p^n \end{smallmatrix}) \mid\, n\in \mathbb Z\}$, and
$R=Q\cap \mathrm{GL}(2,\Z)$.
Then $(NQ,MR)$ is a reduced Hecke pair, and the Schlichting completion is given by \[ \overline N=\mathbb Q_p^2, \quad\overline M={\mathcal Z}_p^2,\quad \overline R=\invlim R/R(p^n),\midtext{ and }\overline Q=\bigcup_{q\in Q/R}q\overline R, \] where as above $\overline Q$ has the topology from $\overline R$. \end{prop}
\begin{ex} \label{ex:GL2}
Let us first consider the maximal $p$-adic case with $Q=\mathrm{GL} (2,\mathbb Z[p^{-1}])$ and $R= \mathrm{GL}(2,\Z)$.
\begin{prop} Let \label{G=TH-p-adic} $T=\{ (\begin{smallmatrix} p^m&0\\ c&p^n \end{smallmatrix}) \mid\, m,n\in\mathbb Z,c\in \mathbb Z[p^{-1}]\}$. Then $T\,\mathrm{SL}_\pm (2,{\mathcal Z}_p)= \{ g\in \mathrm{GL} (2,\mathbb Q_p)\mid\, \det(g)\in \pm p^\mathbb Z \}$. \end{prop} \begin{proof} Clearly the left hand side is included in the right hand side. For the opposite inclusion it suffices to show that every $g\in \mathrm{M}(2,{\mathcal Z}_p)$ with $\det(g)\in p^\mathbb N$ is a member of the left hand side. Let $g=(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})$.
Case 1. Suppose $b=0$ and $ad=p^m$. If $a=p^nu$ with $u$ a unit in ${\mathcal Z}_p$, we must have $d=u^{-1} p^{m-n}$. So
\[ g=\begin{pmatrix} a&0\\ c&d \end{pmatrix} =\begin{pmatrix} p^n&0\\ 0&p^{m-n} \end{pmatrix} \begin{pmatrix} 1&0\\ x&1 \end{pmatrix} \begin{pmatrix} u&0\\ 0&u^{-1} \end{pmatrix} \] with $x=cu^{-1} p^{n-m}$. Now $x=y+z$ with $y\in \mathbb Z[1/p]$ and $z\in {\mathcal Z}_p$, and since $(\begin{smallmatrix} 1&0\\ z&1 \end{smallmatrix}) (\begin{smallmatrix} u&0\\ 0&u^{-1} \end{smallmatrix}) \in \mathrm{SL} (2,{\mathcal Z}_p)$ it follows that $g=(\begin{smallmatrix} a&0\\ c&d \end{smallmatrix})\in T\,\mathrm{SL} (2,{\mathcal Z}_p)$.
Case 2. Suppose $a=0$ and $b\neq 0$. Then
\[
g=\begin{pmatrix} 0&b\\ c&d \end{pmatrix} =\begin{pmatrix} b&0\\ d&-c \end{pmatrix} \begin{pmatrix} 0&1\\ -1&0 \end{pmatrix} \in T\mathrm{SL} (2,{\mathcal Z}_p). \]
Case 3. Suppose $a=p^mu$ and $b=p^nv$ with $u,v$ units in ${\mathcal Z}_p$. We may assume $m\geq n$; if not, we multiply by $(\begin{smallmatrix} 0&1\\ -1&0 \end{smallmatrix})$ as in Case~2. So $p^{-n}a\in {\mathcal Z}_p$. Then
\[ \begin{pmatrix} a&b\\ c&d \end{pmatrix} =\begin{pmatrix} p^{n}&0\\ v^{-1} d&p^{-n}ad-vc \end{pmatrix} \begin{pmatrix} p^{-n}a&v\\ -v^{-1}&0 \end{pmatrix}. \] The second matrix on the right hand side is in $\mathrm{SL} (2,{\mathcal Z}_p)$, while the first has determinant equal to $ad-bc$ which by assumption is in $p^\mathbb N$, so by Case~1 this matrix is in $T\,\mathrm{SL} (2,{\mathcal Z}_p)$. \end{proof}
\begin{thm} \label{GL2p} Let $Q=\mathrm{GL} (2,\mathbb Z[p^{-1}])$ and $R= \mathrm{GL}(2,\Z)$. Then \begin{enumerate} \item $\overline R=\invlim R/R(p^n)=\mathrm{SL} _{\pm}(2,{\mathcal Z}_p)$; \item $\overline Q=\bigcup_{q\in Q/R}q\mathrm{SL} _{\pm}(2,{\mathcal Z}_p)= \{ g\in \mathrm{GL} (2,\mathbb Q_p)\mid\, \det(g)\in \pm p^\mathbb Z \}$, \end{enumerate} where $\overline Q$ has the topology from $\overline R= \mathrm{SL} _{\pm}(2,{\mathcal Z}_p)$. \end{thm}
\begin{proof} Since $Q=TR$ we get $\overline Q=T\overline R$, which by \propref{G=TH-p-adic} equals the right hand side. That \cite{hecke}*{Theorem~3.8} applies to the pair $(\overline N\overline Q,\overline M\overline R)$ now follows from
Propositions~\ref{N bar with p} and \ref{G=TH-p-adic}, and density of $\mathrm{GL}(2,\Z)$ in $\mathrm{SL} _{\pm}(2,{\mathcal Z}_p)$ (see \cite{kri}*{Proposition~IV.6.3}). \end{proof}
Now look at the case where $Q=\mathrm{GL} (2,\mathbb Q)$ and $R= \mathrm{GL}(2,\Z)$. We first need a version of \propref{G=TH-p-adic}:
\begin{prop} Let \label{G=TH} $T=\{ (\begin{smallmatrix} a&0\\ c&d \end{smallmatrix}) \mid\, a,c,d\in\mathbb Q, ad\neq 0\}$. Then $T\,\mathrm{SL} (2,{\mathcal Z})= \left\{ g\in \mathrm{GL} (2,{\mathcal A}_f)\mid\, \det(g)\in \mathbb Q\right\}$. \end{prop} \begin{proof} Again one inclusion is obvious, so suppose $g=(\begin{smallmatrix} a&b\\ c&d \end{smallmatrix})\in \mathrm{GL} (2,{\mathcal A}_f)$ with $\det g\in\mathbb Q$, in fact without loss of generality we may assume $\det g=1$. For each prime $p$ let $g_p=(\begin{smallmatrix} a_p&b_p\\ c_p&d_p \end{smallmatrix})$ be the corresponding matrix in $\mathrm{GL} (2,\mathbb Q_p)$. For all but finitely many $p$ we will have $g_p\in \mathrm{SL} (2,{\mathcal Z}_p)$. In these cases take $k_p=g_p$.
In the other cases we cannot have both $a_p$ and $b_p$ zero, so by \propref{G=TH-p-adic} there is a matrix $k_p\in \mathrm{SL} (2,{\mathcal Z}_p)$ such that $g_pk_p^{-1}\in T\cap \mathrm{GL}(2,\mathbb Z[1/p])$. So $k=(k_p)\in \mathrm{SL} (2,{\mathcal Z})$ and $gk^{-1}\in T$ as claimed. \end{proof}
\begin{thm} \label{GL2} Let $Q=\mathrm{GL} (2,\mathbb Q)$ and $R=\mathrm{GL}(2,\Z)$. Then \begin{enumerate} \item $\overline R=\mathrm{SL} _{\pm}(2,{\mathcal Z})$; \item $\overline Q=\bigcup_{q\in Q/R}q\mathrm{SL} _{\pm}(2,{\mathcal Z})= \left\{ g\in \mathrm{GL} (2,{\mathcal A}_f)\mid\, \det(g)\in \mathbb Q\right\}$, \end{enumerate} where $\overline Q$ has the topology from $\overline R= \mathrm{SL} _{\pm}(2,{\mathcal Z})$. \end{thm}
\begin{proof} From \cite{kri}*{Proposition~IV.6.3} (the hard part is hidden there) it follows that \begin{align*} \overline R =\invlim R/R(s) =\invlim\mathrm{SL}_{\pm}(2,\mathbb Z_s) =\mathrm{SL} _{\pm}(2,{\mathcal Z}). \end{align*}
Since $\overline Q=T\overline R$, (ii) follows from \propref{G=TH}. \end{proof}
Note that the topology on $\overline Q$ is not the relative topology from $\mathrm{GL} (2,{\mathcal A}_f)$, in contrast with \thmref{GL2p}.
This is essentially the same result as \cite{lln:hecke}*{Proposition~2.5}. Since $R$ is not normal in $Q$ we cannot use \thmref{cross thm}, but it would be interesting to get a description of the $C^*$-algebra $p_RAp_HAp_R$ in these cases (see \cite{connes-marcolli}). However, note that we are not using exactly the same algebra, since in both \cite{connes-marcolli} and \cite{lln:hecke} the action of $Q$ is by left multiplication on $\mathrm{M}(2,\mathbb Q)$.
\end{ex}
\begin{ex} \label{ex:ax+b} Much recent work on Hecke algebras started with the study of the affine group over $\mathbb Q$ in \cite{bc}. Other number fields have also been extensively studied, as in, \emph{e.g.}, \cite{connes-marcolli} and \cite{laca-franken}. For a survey, see \cite{connes-marcolli}*{Section~1.4}. We shall here illustrate how our approach works for a quadratic extension of $\mathbb Q$. For details about the number theory used here we refer to the book \cite{nzm}.
Let $d$ be a square-free integer such that \footnote{If, for instance, $d=5$ one should instead use $M=\mathbb Z[(1+\sqrt 5)/2]$, \emph{etc.}\ (see \cite{nzm}*{Theorem~9.20}).} $d\not\equiv 1 \mod 4$, and let $N=\mathbb Q(\sqrt d)$, $M=\mathbb Z[\sqrt d]$, $Q=\mathbb Q(\sqrt d)^\times$, and $R=\{r\in Q\,\mid\,r,r^{-1}\in M\}$.
So \[
R=\{m+n\sqrt d\,\mid\,m,n\in \mathbb Z, m^2-dn^2=\pm 1\} \] is the group of units in the ring $M=\mathbb Z[\sqrt d]$. An alternative matrix description is as follows: \begin{align*} N&=\mathbb Q^2, \qquad M=\mathbb Z^2, \\ Q&=\left\{ \begin{pmatrix} a&db\\ b&a \end{pmatrix}
\biggm| a,b\in \mathbb Q, a^2-db^2\neq 0\right\},\\ R&=\left\{ \begin{pmatrix} m&dn\\ n&m \end{pmatrix}
\biggm| m,n\in \mathbb Z, m^2-dn^2=\pm 1\right\}. \end{align*} So we get $\overline N={\mathcal A}_f^2$ and $\overline M={\mathcal Z}^2$.
Here \thmref{cross thm} applies, so
\[
pAp\sim_{MR} C_0({\mathcal A}_f^2/\overline R)\rtimes Q/R. \] In this way we obtain \cite{laca-franken}*{Proposition~3.2} for the field $\mathbb Q(\sqrt d)$ without using the theory of semigroup crossed products, and this will also work in greater generality.
The structure of these crossed products can be studied by the Mackey-Takesaki orbit method as in \cite{lr:ideal}; note that the orbit closures in $ \overline N/\overline R$ under the action of $ Q/R$ are basically the same as the orbit closures in $\overline N$ under the action of $ Q$.
To determine $\overline R$ and its topology we need some more information. First, if $d<0$ then $R$ is finite (of order 2 or 4). So let us concentrate on the case with $d>1$. Then, by \cite{nzm}*{Theorem~7.26} we have $R\cong \{\pm 1\}\times\mathbb Z$, and in fact there exists $r_0\in R$ such that $R=\{\pm r_0^n\,\mid\,n\in\mathbb Z\}$. For instance if $d=2$ one can take $r_0=1+\sqrt 2$.
Let us look at $R(s)$. There is a smallest integer $n_s>0$ such that $r_0^{n_s}\equiv 1 \bmod s$. From this we get $\overline R=\invlim R/R(s)=\{\pm 1\}\times \invlim \mathbb Z/n_s\mathbb Z$. However, examples show that the behavior of the numbers $n_s$ is complicated, so a more exact description of $\overline R$ is difficult.
Perhaps counterintuitively, in general it turns out that \[
\overline R\subsetneq\{ m+n \sqrt d \mid\,m,n\in {\mathcal Z}, m^2-dn^2=\pm 1\}. \] This is because under the homomorphism $\mathbb Z[\sqrt d]\to \mathbb Z_s[\sqrt d]$ the units $R$ in $\mathbb Z[\sqrt d]$ are in general mapped onto a proper subgroup of the units in $\mathbb Z_s[\sqrt d]$. For instance $4$ is a unit in $\mathbb Z_{17}[\sqrt 2]$ (indeed $4\cdot 13\equiv 1 \bmod 17$), but $\pm (1+\sqrt 2)^n\not\equiv 4 \bmod 17$ for all $n$. \end{ex}
\begin{ex} \label{ex:Heisenberg} We shall here give a slightly different treatment of the Heisenberg group than in \cite{hecke}. Take \begin{align*} N&=\mathbb Q/\mathbb Z\times\mathbb Q, &M&=\{0\}\times\mathbb Z,\\ Q&=\left\{ \begin{pmatrix} 1&q\\ 0&1 \end{pmatrix}
\biggm| q\in \mathbb Q\right\}, &R&=\left\{ \begin{pmatrix} 1&r\\ 0&1 \end{pmatrix}
\biggm| r\in \mathbb Z\right\}, \end{align*} with the obvious action of $Q$ on $N$. If $x= (\begin{smallmatrix} 1&1/n\\ 0&1 \end{smallmatrix})$ with $n\in \mathbb N$ one checks that $M\cap xMx^{-1}=\{0\}\times n\mathbb Z$. So we have \[ \overline N=\mathbb Q/\mathbb Z\times {\mathcal A}_f={\mathcal A}_f/{\mathcal Z} \times {\mathcal A}_f \midtext{ and } \overline M=\{0\}\times{\mathcal Z}. \] If $n=(\begin{smallmatrix} a\\ b/m \end{smallmatrix})$ with $b,m\in\mathbb Z$ and $r= (\begin{smallmatrix} 1&r\\ 0&1 \end{smallmatrix})$, then $rnr^{-1} -n\in M$ if and only if $rb\in m\mathbb Z$. Thus \[ \overline Q=\left\{ \begin{pmatrix} 1&q\\ 0&1 \end{pmatrix}
\biggm| q\in {\mathcal A}_f\right\} \midtext{ and } \overline R=\left\{ \begin{pmatrix} 1&r\\ 0&1 \end{pmatrix}
\biggm| r\in {\mathcal Z}\right\}. \] We have $\widehat{\overline N}={\mathcal Z}\times {\mathcal A}_f$ and $\overline M^\perp={\mathcal Z}\times{\mathcal Z}$. Moreover, the dual action of $\overline Q$ on $\widehat{\overline N}$ is given by \[ (z,w) \begin{pmatrix} 1&q\\ 0&1 \end{pmatrix}= (z,qz+w). \] \begin{lem} \begin{align*} \Omega:&=\bigcup_{q\in \overline Q} q\overline M^\perp =\left\{(z,qz+w)\,\mid\, z,w\in {\mathcal Z}, q\in {\mathcal A}_f\right\}\\ &=\left\{(z,u) \in {\mathcal Z}\times {\mathcal A}_f\,\mid\, z_p=0 \implies u_p\in {\mathcal Z}_p \right\}. \end{align*} \end{lem} \begin{proof} Clearly if $(z,w)\in \Omega$ and $z_p=0$, then $w_p\in {\mathcal Z}_p$.
Conversely, suppose $(z,u)$ is an element of the right hand side. If $u_p\in {\mathcal Z}_p $, take $q_p=1$ and $w_p=u_p-z_p\in {\mathcal Z}_p$. For the finitely many $p$ with $u_p\notin {\mathcal Z}_p $, we have $u_p=x_p+v_p$ with $x_p\in\mathbb Q^\times$ and $v_p\in {\mathcal Z}_p $, and by assumption $z_p\neq 0$. Take $q_p =z_p^{-1} x_p\in\mathbb Q_p$, so $q_p z_p+w_p=u_p$. Thus with $q:=(q_p)\in{\mathcal A}_f$ and $w:=(w_p)\in{\mathcal Z}$, we have $q z+ w= u$. \end{proof} So here $\Omega$ is open but not closed, hence the projection $p_\infty$ defined in \secref{hecke crossed} is not in $M(B^R)$.
The orbits under the action of $R$ can be described as follows: $(0,w)$ is always a fixed point. If $z\neq 0$, then the $R$-orbit of $(z,w)$ is $(z, w+z{\mathcal Z})$. \end{ex}
\begin{bibdiv} \begin{biblist}
\bib{bc}{article}{
author={Bost, J.-B.},
author={Connes, A.},
title={Hecke algebras, type III factors and phase transitions with spontaneous symmetry breaking in number theory},
date={1995},
journal={Selecta Math. (New Series)},
volume={1},
pages={411\ndash 457}, }
\bib{connes-marcolli}{article}{
author={Connes, A.},
author={Marcolli, M.},
title={From Physics to Number Theory via Noncommutative Geometry. Part I: Quantum Statistical Mechanics of Q-lattices},
date={2004}, }
\bib{willis}{article}{
author={Gl\"ockner, H.},
author={Willis, G. A.},
title={Topologization of Hecke pairs and Hecke $C^*$-algebras},
journal={Topology Proceedings},
volume={26},
date={2001/2002},
pages={565\ndash 591}, }
\bib{gre:local}{article}{
author={Green, P.},
title={The local structure of twisted covariance algebras},
date={1978},
journal={Acta Math.},
volume={140},
pages={191\ndash 250}, }
\bib{hecke}{unpublished}{
author={Kaliszewski, S.},
author={Landstad, M.~B.},
author={Quigg, J.},
title={Hecke $C^*$-algebras, Schlichting completions, and Morita equivalence},
date={2005},
status={preprint},
eprint={arXiv:math.OA/0311222}, }
\bib{kri}{article}{
author={Krieg, A.},
title={Hecke algebras},
date={1990},
journal={Mem. Amer. Math. Soc.},
volume={87},
number={435}, }
\bib{laca-franken}{article}{
author={Laca, M.},
author={van Frankenhuijsen, M.},
title={Phase transitions on Hecke $C^*$-algebras and class-field theory over $\mathbb Q $},
date={2006},
journal={J. reine angew. Math.},
volume={595},
pages={25\ndash 53}, }
\bib{lln:hecke}{unpublished}{
author={Laca, M.},
author={Larsen, N.~S.},
author={Neshveyev, S.},
title={Hecke algebras of semidirect products and the finite part of the Connes-Marcolli $C^*$-algebra},
status={preprint},
date={2006}, }
\bib{lr:ideal}{article}{
author={Laca, M.},
author={Raeburn, I.},
title={The ideal structure of the Hecke $C^*$-algebra of Bost and Connes},
date={2000},
journal={Math. Ann.},
volume={318},
pages={433\ndash 451}, }
\bib{nzm}{book}{
author={Niven, I.},
author={Zuckerman, H.~S.},
author={Montgomery, H.~L.},
title={An introduction to the theory of numbers. Fifth edition},
publisher={Wiley},
address={New York},
date={1991}, }
\bib{ped}{book}{
author={Pedersen, G.~K.},
title={$C^*$-algebras and their automorphism groups},
publisher={Academic Press},
date={1979}, }
\bib{rie:induced}{article}{
author={Rieffel, M.~A.},
title={Induced representations of $C^*$-algebras},
date={1974},
journal={Adv. Math.},
volume={13},
pages={176\ndash 257}, }
\bib{rosenberg}{article}{
author={Rosenberg, J.},
title={Appendix to: ``Crossed products of UHF algebras by product type actions''},
date={1979},
journal={Duke Math. J.},
volume={46},
pages={25\ndash 26}, }
\bib{tza}{article}{
author={Tzanev, K.},
title={Hecke $C^*$-algebras and amenability},
journal={J. Operator Theory}, date={2003}, volume={50}, pages={169\ndash 178}, }
\end{biblist} \end{bibdiv}
\end{document}
Laplace–Stieltjes transform
The Laplace–Stieltjes transform, named for Pierre-Simon Laplace and Thomas Joannes Stieltjes, is an integral transform similar to the Laplace transform. For real-valued functions, it is the Laplace transform of a Stieltjes measure; however, it is often defined for functions with values in a Banach space. It is useful in a number of areas of mathematics, including functional analysis, and certain areas of theoretical and applied probability.
Real-valued functions
The Laplace–Stieltjes transform of a real-valued function g is given by a Lebesgue–Stieltjes integral of the form
$\int e^{-sx}\,dg(x)$
for s a complex number. As with the usual Laplace transform, one gets a slightly different transform depending on the domain of integration, and for the integral to be defined, one also needs to require that g be of bounded variation on the region of integration. The most common are:
• The bilateral (or two-sided) Laplace–Stieltjes transform is given by
$\{{\mathcal {L}}^{*}g\}(s)=\int _{-\infty }^{\infty }e^{-sx}\,dg(x).$
• The unilateral (one-sided) Laplace–Stieltjes transform is given by
$\{{\mathcal {L}}^{*}g\}(s)=\lim _{\varepsilon \to 0^{+}}\int _{-\varepsilon }^{\infty }e^{-sx}\,dg(x).$
The limit is necessary to ensure the transform captures a possible jump in g(x) at x = 0, as is needed to make sense of the Laplace transform of the Dirac delta function.
• More general transforms can be considered by integrating over a contour in the complex plane; see Zhavrid (2001).
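To illustrate the role of the limit in the unilateral transform, here is a short SymPy sketch (illustrative only, not part of the article): take g to be the Heaviside step function, so that dg is a unit point mass (Dirac delta) at 0, and the transform is identically 1.

```python
import sympy as sp

s = sp.symbols('s', positive=True)
x = sp.symbols('x', real=True)

# If g is the Heaviside step, dg is the Dirac delta at x = 0 -- exactly
# the jump the one-sided limit in the definition is designed to capture.
lst = sp.integrate(sp.exp(-s*x) * sp.DiracDelta(x), (x, -sp.oo, sp.oo))
print(lst)  # 1, independent of s
```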
The Laplace–Stieltjes transform in the case of a scalar-valued function is thus seen to be a special case of the Laplace transform of a Stieltjes measure. To wit,
${\mathcal {L}}^{*}g={\mathcal {L}}(dg).$
In particular, it shares many properties with the usual Laplace transform. For instance, the convolution theorem holds:
$\{{\mathcal {L}}^{*}(g*h)\}(s)=\{{\mathcal {L}}^{*}g\}(s)\{{\mathcal {L}}^{*}h\}(s).$
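As a concrete check of the convolution theorem (an illustrative SymPy sketch; the choice of Exp(1) is mine, not the article's): for the Exp(1) distribution function g(x) = 1 − e^(−x), the Stieltjes convolution g∗g is the distribution function of the sum of two independent Exp(1) variables, whose density is xe^(−x).

```python
import sympy as sp

s, x = sp.symbols('s x', positive=True)

# LST of g(x) = 1 - exp(-x): g is differentiable, so dg = exp(-x) dx.
lst_g = sp.integrate(sp.exp(-s*x) * sp.exp(-x), (x, 0, sp.oo))       # 1/(s + 1)

# LST of g*g, the cdf of the sum of two independent Exp(1) variables;
# the corresponding density is x*exp(-x) (Erlang with n = 2).
lst_gg = sp.integrate(sp.exp(-s*x) * x * sp.exp(-x), (x, 0, sp.oo))  # 1/(s + 1)**2

# Transform of the convolution equals the product of the transforms.
print(sp.simplify(lst_gg - lst_g**2))  # 0
```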
Often only real values of the variable s are considered, although if the integral exists as a proper Lebesgue integral for a given real value s = σ, then it also exists for all complex s with Re(s) ≥ σ.
The Laplace–Stieltjes transform appears naturally in the following context. If X is a random variable with cumulative distribution function F, then the Laplace–Stieltjes transform is given by the expectation:
$\{{\mathcal {L}}^{*}F\}(s)=\mathrm {E} \left[e^{-sX}\right].$
The Laplace–Stieltjes transform of a real random variable's cumulative distribution function is therefore equal to the random variable's moment-generating function, but with the sign of the argument reversed.
Vector measures
Whereas the Laplace–Stieltjes transform of a real-valued function is a special case of the Laplace transform of a measure applied to the associated Stieltjes measure, the conventional Laplace transform cannot handle vector measures: measures with values in a Banach space. These are, however, important in connection with the study of semigroups that arise in partial differential equations, harmonic analysis, and probability theory. The most important semigroups are, respectively, the heat semigroup, Riemann-Liouville semigroup, and Brownian motion and other infinitely divisible processes.
Let g be a function from [0,∞) to a Banach space X of strongly bounded variation over every finite interval. This means that, for every fixed subinterval [0,T] one has
$\sup \sum _{i}\left\|g(t_{i})-g(t_{i+1})\right\|_{X}<\infty $
where the supremum is taken over all partitions of [0,T]
$0=t_{0}<t_{1}<\cdots <t_{n}=T.$
The Stieltjes integral with respect to the vector measure dg
$\int _{0}^{T}e^{-st}dg(t)$
is defined as a Riemann–Stieltjes integral. Indeed, if π is a tagged partition of the interval [0,T] with subdivision $0=t_{0}\le t_{1}\le \cdots \le t_{n}=T$, distinguished points $\tau _{i}\in [t_{i},t_{i+1}]$ and mesh size $|\pi |=\max_{i} \left|t_{i+1}-t_{i}\right|,$ the Riemann–Stieltjes integral is defined as the value of the limit
$\lim _{|\pi |\to 0}\sum _{i=0}^{n-1}e^{-s\tau _{i}}\left[g(t_{i+1})-g(t_{i})\right]$
taken in the topology on X. The hypothesis of strong bounded variation guarantees convergence.
If in the topology of X the limit
$\lim _{T\to \infty }\int _{0}^{T}e^{-st}dg(t)$
exists, then the value of this limit is the Laplace–Stieltjes transform of g.
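The defining limit can be checked numerically. The sketch below is illustrative only: the left-tagged uniform partition, the truncation point T, and the choice of g as an exponential CDF are assumptions for the example, not part of the article. For that choice the transform is known to be λ/(λ+s), so the Riemann–Stieltjes sum should land close to 2/3 for λ = 2, s = 1.

```python
import math

def lst_riemann_stieltjes(g, s, T, n):
    """Approximate the Laplace-Stieltjes transform of g on [0, T] by a
    Riemann-Stieltjes sum over n equal subintervals, tagging each
    subinterval at its left endpoint tau_i = t_i."""
    ts = [T * i / n for i in range(n + 1)]
    total = 0.0
    for i in range(n):
        tau = ts[i]  # distinguished point in [t_i, t_{i+1}]
        total += math.exp(-s * tau) * (g(ts[i + 1]) - g(ts[i]))
    return total

# g is the exponential CDF with rate lam, so {L*g}(s) = lam / (lam + s).
lam, s = 2.0, 1.0
g = lambda t: 1.0 - math.exp(-lam * t)
approx = lst_riemann_stieltjes(g, s, T=40.0, n=200000)
exact = lam / (lam + s)
print(approx, exact)  # both close to 2/3
```

Truncating at a finite T is harmless here because the integrand decays exponentially; for a g of unbounded support with slower decay, T would have to grow with the required accuracy.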
Related transforms
The Laplace–Stieltjes transform is closely related to other integral transforms, including the Fourier transform and the Laplace transform. In particular, note the following:
• If g has derivative g′, then the Laplace–Stieltjes transform of g is the Laplace transform of its derivative:
$\{{\mathcal {L}}^{*}g\}(s)=\{{\mathcal {L}}g'\}(s),$
• We can obtain the Fourier–Stieltjes transform of g (and, by the above note, the Fourier transform of g′) by
$\{{\mathcal {F}}^{*}g\}(s)=\{{\mathcal {L}}^{*}g\}(is),\qquad s\in \mathbb {R} .$
Probability distributions
If X is a continuous random variable with cumulative distribution function F(t), then the moments of X can be computed using[1]
$\operatorname {E} [X^{n}]=(-1)^{n}\left.{\frac {d^{n}\{{\mathcal {L}}^{*}F\}(s)}{ds^{n}}}\right|_{s=0}.$
Exponential distribution
For an exponentially distributed random variable Y with rate parameter λ the LST is,
${\widetilde {Y}}(s)=\{{\mathcal {L}}^{*}F_{Y}\}(s)=\int _{0}^{\infty }e^{-st}\lambda e^{-\lambda t}dt={\frac {\lambda }{\lambda +s}}$
from which the first three moments can be computed as $1/\lambda$, $2/\lambda^{2}$ and $6/\lambda^{3}$.
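The moment formula can be sanity-checked without symbolic algebra. The sketch below is an assumption-laden illustration (the central finite-difference scheme, step size h, and the rate λ = 2 are my choices): it recovers E[X^n] = (−1)^n d^n/ds^n {L*F}(s) at s = 0 from the closed-form LST λ/(λ+s), which is smooth on s > −λ, so differencing across 0 is legitimate.

```python
import math

def lst_exponential(s, lam):
    # {L* F_Y}(s) = lam / (lam + s) for an Exp(lam) random variable
    return lam / (lam + s)

def moment_via_lst(n, lam, h=1e-2):
    """E[X^n] = (-1)^n times the n-th derivative of the LST at s = 0,
    approximated with an n-th order central finite difference."""
    deriv = sum((-1) ** k * math.comb(n, k)
                * lst_exponential((n / 2 - k) * h, lam)
                for k in range(n + 1)) / h ** n
    return (-1) ** n * deriv

lam = 2.0
print([moment_via_lst(n, lam) for n in (1, 2, 3)])  # ~ [1/2, 2/4, 6/8]
```

The three values should match 1/λ, 2/λ² and 6/λ³ to within the O(h²) truncation error of the central differences.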
Erlang distribution
For Z with an Erlang distribution (the distribution of a sum of n independent exponential random variables), we use the fact that the probability distribution of a sum of independent random variables is equal to the convolution of their probability distributions. So if
$Z=Y_{1}+\cdots +Y_{n}$
with the Yi independent then
${\widetilde {Z}}(s)={\widetilde {Y}}_{1}(s)\cdots {\widetilde {Y}}_{n}(s)$
therefore in the case where Z has an Erlang distribution,
${\widetilde {Z}}(s)=\left({\frac {\lambda }{\lambda +s}}\right)^{n}.$
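The product formula can be cross-checked by direct numerical integration, since for a distribution with density f the Laplace–Stieltjes transform of its CDF equals ∫ e^{−st} f(t) dt. The sketch below is illustrative (the Erlang density, the trapezoid rule, and the truncation point T are my assumptions): it compares the integral against (λ/(λ+s))^n.

```python
import math

def erlang_lst_numeric(s, lam, n, T=60.0, steps=200000):
    """Numerically integrate e^{-st} f_Z(t) dt on [0, T] with the
    trapezoid rule, where f_Z is the Erlang(n, lam) density."""
    h = T / steps
    def f(t):
        dens = lam ** n * t ** (n - 1) * math.exp(-lam * t) / math.factorial(n - 1)
        return math.exp(-s * t) * dens
    total = 0.5 * (f(0.0) + f(T))
    total += sum(f(i * h) for i in range(1, steps))
    return total * h

lam, s, n = 2.0, 1.0, 3
product_of_lsts = (lam / (lam + s)) ** n  # (2/3)^3
print(erlang_lst_numeric(s, lam, n), product_of_lsts)  # both close to 0.2963
```

This is exactly the convolution theorem in action: multiplying n copies of the exponential LST agrees with transforming the n-fold convolution (the Erlang density) once.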
Uniform distribution
For U with uniform distribution on the interval (a,b), the transform is given by
${\widetilde {U}}(s)=\int _{a}^{b}e^{-st}{\frac {1}{b-a}}dt={\frac {e^{-sa}-e^{-sb}}{s(b-a)}}.$
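The closed form for the uniform case can likewise be verified against a direct numerical integration. This is a sketch; the trapezoid rule and the sample parameters (s = 1.5 on (0, 2)) are illustrative assumptions.

```python
import math

def uniform_lst_closed_form(s, a, b):
    # (e^{-sa} - e^{-sb}) / (s (b - a)), valid for s != 0
    return (math.exp(-s * a) - math.exp(-s * b)) / (s * (b - a))

def uniform_lst_numeric(s, a, b, steps=100000):
    """Trapezoid-rule integration of e^{-st} / (b - a) over [a, b]."""
    h = (b - a) / steps
    f = lambda t: math.exp(-s * t) / (b - a)
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return total * h

s, a, b = 1.5, 0.0, 2.0
print(uniform_lst_closed_form(s, a, b), uniform_lst_numeric(s, a, b))
```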
References
1. Harchol-Balter, M. (2012). "Transform Analysis". Performance Modeling and Design of Computer Systems. pp. 433–449. doi:10.1017/CBO9781139226424.032. ISBN 9781139226424.
• Apostol, T.M. (1957), Mathematical Analysis (1st ed.), Reading, MA: Addison-Wesley; 2nd ed (1974) ISBN 0-201-00288-4.
• Apostol, T.M. (1997), Modular Functions and Dirichlet Series in Number Theory (2nd ed.), New York: Springer-Verlag, ISBN 0-387-97127-0.
• Grimmett, G.R.; Stirzaker, D.R. (2001), Probability and Random Processes (3rd ed.), Oxford: Oxford University Press, ISBN 0-19-857222-0.
• Hille, Einar; Phillips, Ralph S. (1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094.
• Zhavrid, N.S. (2001) [1994], "Laplace transform", Encyclopedia of Mathematics, EMS Press.
What's a concrete example where groupoids are better suited to describe conventional gauge theory?
asked Nov 13, 2017 in Theoretical Physics by JakobS (95 points)
Are Photons Goldstone Bosons?
asked Nov 6, 2017 in Theoretical Physics by JakobS (95 points)
bosons
quantum-electrodynamics
What is the simplest way to realize or visualize SU(3)?
asked Oct 30, 2017 in Mathematics by anonymous
lie-groups
lie-algebra
mathematical-modeling
The contradiction between the Gell-Mann Low theorem and the identity of Møller operator $H\Omega_{+}=\Omega_{+}H_0$
asked Oct 30, 2017 in Theoretical Physics by Alienware (185 points)
s-matrix-theory
Interpretation of Vacuum diagram
asked Oct 27, 2017 in Theoretical Physics by C Thone (110 points)
feynman-diagram
Is it possible that a particle is much heavier through a loop correction?
asked Oct 16, 2017 in Phenomenology by JakobS (95 points)
beyond-the-standard-model
What is the origin of QFT "difficulties": "physical" or "mathematical"?
asked Oct 14, 2017 in Chat by Vladimir Kalitvianski (132 points)
uv-and-ir-problems
Powers of an exponential in the perturbation expansion.
asked Oct 3, 2017 in Mathematics by MathematicalPhysicist (170 points)
pertubation-expansion
How to understand running coupling constant from the formal solution of Callan-Symanzik equation?
asked Sep 28, 2017 in Theoretical Physics by gamebm (10 points)
renormalisation-group
On partition functions of SCFTs of class $\mathcal{S}$ (Gaiotto theories)
asked Sep 19, 2017 in Theoretical Physics by conformal_gk (3,625 points)
partition-function
'Quantum' vs 'Classical' effects in Quantum Field Theory
asked Jul 28, 2017 in Theoretical Physics by chuxley (45 points)
Derive canonical commutation relations from Schwingers principle
asked Jul 8, 2017 in Theoretical Physics by Quantumwhisp (35 points)
lagrangian-formalism
commutator
Quantum theory of a single worldline
asked Jun 6, 2017 in Theoretical Physics by Slereah (520 points)
string-theory
What are quantum fields mathematically?
asked Jun 5, 2017 in Theoretical Physics by Oliver Gregory (50 points)
Asymptotic series in field theory and quantum mechanics
asked May 29, 2017 in Theoretical Physics by lux (15 points)
perturbation-theory
QFT and its non-rigorous assumptions
asked May 4, 2017 in Theoretical Physics by Slereah (520 points)
What is an observer in QFT?
quantum-interpretations
Theorem of QFT as an operator-valued function theory
asked Nov 27, 2016 in Theoretical Physics by Slereah (520 points)
A question about the asymptotic series in perturbative expansion in QFT
asked Aug 9, 2014 in Theoretical Physics by user26143 (405 points)
asymptotics
Quantum Anomalies for Bosons
asked Mar 21, 2014 in Theoretical Physics by annie heart (25 points)
quantum-anomalies
July 2021, 17(4): 1713-1727. doi: 10.3934/jimo.2020041
Optimal control and stabilization of building maintenance units based on minimum principle
Shi'an Wang and N. U. Ahmed
School of EECS, University of Ottawa, 800 King Edward Ave. Ottawa, ON K1N 6N5, Canada
* Corresponding author: Shi'an Wang
Received: August 2019. Revised: October 2019. Early access: February 2020. Published: July 2021.
In this paper we present a mathematical model describing the physical dynamics of a building maintenance unit (BMU) equipped with reaction jets. The momentum provided by the reaction jets is taken as the control variable. We introduce an objective functional based on the deviation of the BMU from its equilibrium state due to external high-wind forces. The Pontryagin minimum principle is then used to determine the optimal control policy that minimizes this deviation, thereby increasing the stability of the BMU and reducing the risk to the workers as well as the public. We present a series of numerical results corresponding to three different scenarios for the formulated optimal control problem. These results show that, under high-wind conditions, the BMU can be stabilized and brought to its equilibrium state with appropriate controls in a short period of time. It is therefore believed that the dynamic model presented here would be potentially useful for stabilizing building maintenance units, thereby reducing the risk to the workers and the general public.
Keywords: Building maintenance units, mathematical modeling, optimal control, stabilization, Pontryagin minimum principle.
Mathematics Subject Classification: Primary: 49J15; Secondary: 93C95.
Citation: Shi'an Wang, N. U. Ahmed. Optimal control and stabilization of building maintenance units based on minimum principle. Journal of Industrial & Management Optimization, 2021, 17 (4) : 1713-1727. doi: 10.3934/jimo.2020041
Figure 1. The schematic of a BMU
Figure 2. The schematic of the BMU body
Figure 3. Simulation results of scenario 1
Figure 4. Simulation results corresponding to $ U_{1} $ in scenario 2
Figure 6. Simulation results of case 1 in scenario 3
Table 1. Definition of Notations
Notation Description
$ x_{1} = \omega_{x} $ First component of the angular velocity
$ x_{2} = \omega_{y} $ Second component of the angular velocity
$ x_{3} = \gamma_{x} $ First component of the unit vertical vector $ \gamma $
$ x_{4} = \gamma_{y} $ Second component of the unit vertical vector $ \gamma $
$ x_{5} = \gamma_{z} $ Third component of the unit vertical vector $ \gamma $
$ \psi $ Costate vector (adjoint state)
$ C $ Constant inertia matrix
$ I=[0,T] $ Total operating period in seconds
$ U $ Control (decision) constraint set
$ {\mathcal U}_{ad} $ Set of admissible controls
$ \underline{u} $ Lower bound of the control variable
$ \overline{u} $ Upper bound of the control variable
$ J(u) $ Objective (cost) functional
$ \ell $ Integrand of running cost
$ \Phi $ Terminal cost
$ H $ Hamiltonian function
$ V(x) $ Lyapunov function candidate
$ x^{d} $ Desired state during the operating period
$ \bar{x} $ Target state at the terminal time
$ x^{e} $ Equilibrium state of the system
$ \langle a,\; b \rangle $ Scalar product of vectors $ a $ and $ b $
$ a \times b $ Cross product of vectors $ a $ and $ b $
$ x^{T} $ Transpose of vector $ x $
Algorithm 1: Computational Algorithm
Choose the appropriate initial state $ x(0) $;
Set the length of time horizon $ T \in R^{+} $ and the number of subintervals (of equal length) $ N \in \mathbb{Z}^{+} $;
Set step size $ \epsilon $, stopping criterion $ \tau $, maximum number of iterations $ K $ and control bounds $ \underline{u}, \; \overline{u} $.
Ensure:
Optimal cost $ J^o $;
Optimal state trajectory $ x^{o} $;
Optimal control trajectory $ u^o $.
1: Subdivide equally the time horizon $ I = [0, T] $ into $ N $ subintervals and assume the control function is piecewise-constant. That is, $ u^{n}(t) = u^{n}(t_{i}) $, for $ t \in [t_{i}, t_{i+1}),\; i = 0, 1, \cdots, N-1 $, where $ u^{n}(t),\; t \in I $ is the control (decision) policy at the $ n $th iteration (starting from $ n = 0 $).
2: Integrate the state equations from 0 to $ T $ with initial state $ x(0) = x_{0} $ and the assumed controls $ u^{(n)} \equiv u^{n}(t),\; t \in I $, store the obtained state trajectory $ x^{(n)} $ and the control vector $ u^{(n)} $.
3: Use $ x^{(n)} $ and $ u^{(n)} $ to integrate the adjoint equations backward in time starting from the costate $ \psi^{(n)}(T) $ at the terminal time. The terminal costate is given by $ \psi^{(n)}(T) = \Phi_{x}(x^{(n)}(T)) $ where $ \Phi $ is the terminal cost.
4: Use the triple {$ u^{(n)},\; x^{(n)},\; \psi^{(n)} $} to compute the gradient $ g_{n}(t) = \frac{\partial H}{\partial u^{(n)}}(x^{(n)},\; u^{(n)},\; \psi^{(n)}) = H_{u}(x^{(n)},\; u^{(n)},\; \psi^{(n)}) $ and store this vector.
5: Compute the cost functional $ J^{(n)}(u) $ using equation (19) and store this value.
6: If $ \| g_{n} \| < \tau $ then set
$ u^{o} = u^{(n)} $, $ J^{o} = J^{(n)} $
Otherwise, go to Step 7.
7: Construct the control policy for the next iteration as
$ u^{(n+1)}(t) = u^{(n)}(t) - \epsilon g_{n}(t),\; t \in I $ by choosing an appropriate $ \epsilon \in (0,1) $ such that $ u^{(n+1)} \in U $. For the chosen $ \epsilon $, if $ u^{(n+1)} > \overline{u} $ set $ u^{(n+1)} = \overline{u} $; if $ u^{(n+1)} < \underline{u} $ set $ u^{(n+1)} = \underline{u} $.
8: If $ n < K $ then set
$ n = n + 1 $, go to Step 2.
Otherwise, display "Stopped before required residual is obtained".
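The sweep structure of Algorithm 1 can be exercised on a small test problem. The code below is a minimal sketch, not the authors' BMU implementation: it assumes the scalar linear-quadratic problem minimize J = ∫₀^T (x² + u²) dt subject to ẋ = u, x(0) = 1, with zero terminal cost, for which the Hamiltonian is H = x² + u² + ψu, the costate satisfies ψ̇ = −2x with ψ(T) = 0, and H_u = 2u + ψ. All step sizes and bounds are illustrative choices.

```python
def solve_toy_ocp(T=1.0, N=100, x0=1.0, eps=0.05, iters=300,
                  u_lo=-2.0, u_hi=2.0):
    """Forward-backward sweep with a projected gradient step, mirroring
    Algorithm 1 on a toy LQ problem (an assumption for illustration)."""
    dt = T / N
    u = [0.0] * N                      # piecewise-constant control (Step 1)
    for _ in range(iters):
        # Step 2: integrate the state forward (explicit Euler)
        x = [x0] + [0.0] * N
        for i in range(N):
            x[i + 1] = x[i] + dt * u[i]
        # Step 3: integrate the costate backward, psi(T) = Phi_x = 0
        psi = [0.0] * (N + 1)
        for i in range(N - 1, -1, -1):
            psi[i] = psi[i + 1] + dt * 2.0 * x[i + 1]
        # Steps 4 and 7: gradient H_u = 2u + psi, descent step with clipping
        u = [min(u_hi, max(u_lo, u[i] - eps * (2.0 * u[i] + psi[i])))
             for i in range(N)]
    # Step 5: evaluate the cost for the final control
    x = [x0] + [0.0] * N
    for i in range(N):
        x[i + 1] = x[i] + dt * u[i]
    cost = sum(dt * (x[i] ** 2 + u[i] ** 2) for i in range(N))
    return cost, u

cost, u_opt = solve_toy_ocp()
print(round(cost, 3))  # near tanh(1) ~ 0.762, the known optimum for this LQ problem
```

Because this test problem is convex, the projected gradient iteration converges for a small enough step size; the general algorithm above only guarantees a stationary point of the Hamiltonian condition.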
Journal of the Korean Society of Safety (한국안전학회지)
The Korean Society of Safety (한국안전학회)
The Journal of the Korean Society of Safety (JKOSOS) publishes articles about safety in complex systems; the design of processes, procedures, products, equipment and facilities; and the development of new technology in industrial and social safety, such as mechanical safety, chemical safety, electrical safety, construction safety, ergonomics and system safety, transportation safety, disaster safety, nuclear safety, risk management, and safety policy. Evaluative reviews of the literature, definitive articles on methodology and procedures, and empirical articles reporting original research are considered for publication.
http://thesis.kosos.or.kr:801/default2.asp KSCI KCI
Volume 22 Issue 3 Serial No. 81
Evaluation of Fatigue Endurance for an MTB Frame
Kim, Taek Young;Lee, Man Suk;Lim, Woong;Kim, Ho Kyung 1
https://doi.org/10.14346/JKOSOS.2013.28.3.001
In order to evaluate the fatigue endurance of an MTB (mountain bike) frame, FEM (finite element method) analysis was performed. For evaluating the fatigue endurance of the MTB frame, the S-N data for Al-6061 fillet weldment were compared with the stress analysis results obtained through FEM analysis of the frame. Three loading conditions, pedalling, horizontal and vertical, were considered for the fatigue endurance evaluation. The horizontal loading (+1200 N) condition was found to be the most severe for the frame; the maximum von Mises stress of the frame under this condition was determined to be 294 MPa through FEM analysis. Conclusively, on the basis of a fatigue strength of 200 MPa at 50,000 cycles, the MTB frame has an inadequate safety factor of approximately 0.25, suggesting that this frame needs reinforcement.
Creep Rupture Life Prediction of High Temperature HRSG Tubes
Kim, Woo Joong;Kim, Jae Hoon;Jang, Jung Cheol;Kim, Beom Soo;Lee, Gi-Chun 6
The Heat Recovery Steam Generator (HRSG) is a device that recycles the exhaust gas of gas turbines in combined power and chemical plants. Since service temperatures are very high, damage to HRSG tubes occurs mainly in the superheater and reheater. The aim of this paper is to determine the relationship between life and hardness through creep-rupture and creep-interrupted tests on modified 9Cr-1Mo steel. The measured life, expressed as a function of hardness, was found to show a consistent tendency.
Forensic Engineering Study on the Evaluation of the Structural Stability of the Mobile Crane Accident
Kim, Jong-Hyuk;Kim, Eui-Soo 11
Forensic engineering is the area covering the investigation of products and structures that fail to perform or do not function as intended, causing personal injury or damage to property. To investigate the mobile crane's overturn accident in terms of forensic engineering, in this study we identified the accident mobile crane's position and posture before the accident by analyzing the traces resulting from contact between the outrigger and the ground, and accident remodeling was performed using the CATIA modeling program on the basis of this position and posture information. The accident analysis was performed by comparing this accident remodeling with the crane's specification and the table of allowable load for the boom's length and the working radius. Through these studies, safety accidents that may occur with mobile cranes can be minimized by performing specialized and systematic investigation of accident causes in terms of forensic engineering.
Low Cycle Fatigue Behavior of Cobalt-Base Superalloy ECY768 at Elevated Temperature
Yang, Ho-Young;Kim, Jae-Hoon;Ha, Jae-Suk;Yoo, Keun-Bong;Lee, Gi-Chun 18
The Co-base super heat resisting alloy ECY768 is employed in gas turbines because of its high temperature strength and oxidation resistance. The prediction of fatigue life for this superalloy is important for improving efficiency. In this paper, low cycle fatigue tests were performed with total strain range and temperature as variables. The relations between strain energy density and number of cycles to failure were examined in order to predict the low cycle fatigue life of the ECY768 superalloy. The lives predicted by strain energy methods were found to coincide with experimental data and with results obtained from the Coffin-Manson method. The fatigue lives predicted by the Coffin-Manson and strain energy methods were compared with the measured fatigue lives at different temperatures. Microstructural observation was performed to examine how increasing temperature affects low-cycle fatigue life.
ECO Driving Patterns Derived from the Analysis of the Problems of the Current Driving Pattern of Electric Multiple Unit in ATO System
Kim, Kyujoong;Lee, Keunoh;Kim, Juyong 23
This study focuses on finding ways to derive a train's optimal ECO driving pattern, which can improve ride quality and reduce driving energy consumption while keeping the time interval between stations. As a research method, we compared the ATO and MCS driving patterns of trains currently in operation, and concentrated on the factors that must be considered in simulation in order to resolve the issues of the existing ATO driving pattern while securing train operation safety. Most important are determining a driving pattern that minimizes energy consumption, by controlling powering within the speed limit and switching to coasting at an appropriate point considering the track conditions of each section, and determining the braking control starting time considering ride comfort and precise stopping.
A Case Study on the Fracture of Steering Apparatus for Aircraft
Park, Sung-Ji 29
An aircraft made an emergency landing after losing the capability to control steering. A torsion link, which is part of the steering apparatus, had become detached from the steering system, and the bolt connected to the steering link had fractured. At the same time, the FLIR (Forward Looking Infrared Radar) mounted in front of the steering link was also damaged. Early in this investigation, we considered that the failure of the FLIR had occurred first, that the FLIR then hit the steering link, and that finally the bolt fractured. The fractured section of the bolt showed a beach mark as well as a dimple mark. The outside of the bolt showed large deformation caused by a heavy load. As a result, we found the cause of the heavy load and the order in which the fractures of the bolt, link and FLIR occurred.
The Development of Outsole for Wet Traction Enhancement
Kim, Jung Soo 33
Many occupational workers and professionals have to walk on various floors for long periods of time. The objective of this study was to develop safety shoes with increased traction through material selection. To fulfill this objective, first, two kinds of filler were selected to compare the wear mechanism at the outsole surface. The developed rubber materials were tested with two kinds of portable slip meters. Sample safety shoes made with the developed rubber materials were also tested with subjects in the laboratory. During walking, the safety shoes were naturally abraded against the counter surface. The coefficient of friction (COF) gradually decreased up to 30,000 steps, while it increased abruptly from 30,000 to 40,000 steps. The experimental results showed that the COF tested with silica rubber was at least 10% higher than that with carbon black rubber in wet or detergent conditions. It is well recognized that filler properties play an important role in wet traction in the tire industry. However, it has been unclear whether filler properties are a decisive factor in safety shoes. Our study shows that silica exhibits higher slip resistance than carbon black, regardless of wear state, in wet or detergent conditions. These results will provide guidance for outsole compounders to develop new products and improve product performance.
Experimental Study of Fire Characteristics of a Tray Flame Retardant Cable
Kim, Sung Chan;Kim, Jung Yong;Bang, Kyoung Sik 39
The present study was conducted to investigate the combustion properties and fire behavior of an IEEE-383 qualified flame retardant cable. The reference reaction rate and reference temperature, which are commonly used in pyrolysis models of the fire propagation process, were obtained by thermogravimetric analysis of the cable component materials. The mass fraction of the FR-PVC sheath decreased abruptly near the temperature range of $250{\sim}260^{\circ}C$ and its maximum reaction rate was about $2.58{\times}10^{-3}$ [1/s]. For the XLPE insulation of the cable, the temperature causing the maximum mass fraction change ranged over about $380{\sim}390^{\circ}C$, where it reached a maximum reaction rate of $5.10{\times}10^{-3}$ [1/s]. The flame retardant cable was burned by a pilot-flame Meker burner and the burning behavior of the cable was observed during the fire test. The heat release rate of the flame retardant cable was measured by a laboratory-scale oxygen consumption calorimeter, and the mass loss rate of the cable was calculated from the cable mass measured during the burning test. The representative value of the effective heat of combustion was evaluated from the total released energy, integrated from the measured heat release rate and burned mass. This study can contribute to the study of electric cable fires and provides pyrolysis properties for computational modeling.
Structural Integrity Evaluation by System Stress Analysis for Fuel Piping in a Process Plant
Jeong, Seong Yong;Yoon, Kee Bong;Duyet, Pham Van;Yu, Jong Min;Kim, Ji Yoon 44
Process gas piping is one of the most basic components frequently used in refinery and petrochemical plants. Many kinds of by-product gas have been used as fuel in process plants. In some plants, natural gas is additionally introduced and mixed with the by-product gas to upgrade the fuel. In this case, the safety or design margin of the changed piping system of the plant should be re-evaluated based on a proper design code such as the ASME or API codes, since the internal pressure, temperature and gas compositions differ from the original plant design conditions. In this study, a series of piping stress analyses was conducted for process piping used to transport the mixed by-product and natural gas from a mixing drum to a knock-out drum in a refinery plant. The analysed piping section had actually been installed in a domestic plant and needed a safety audit since the design conditions had changed. Pipe locations of the maximum system stress and displacement were determined, which can serve as candidate inspection and safety monitoring points during the upcoming operation period. To study the effect of outside air temperature on safety, additional stress analyses were conducted for various temperatures in $0{\sim}30^{\circ}C$. Effects of the friction coefficient between the pipe and support were also investigated, showing that a proper choice matters where the friction coefficient is important. The maximum system stresses occurred mainly at elbow, tee and support locations, which shows that the thermal load contributes considerably to the system stress rather than the internal pressure or gravity loads.
Study on the Causes of Malfunctions of PCBs Applied to the Power Saving Mode of Electrical Systems and its Solution
Park, Hyung-Ki;Choi, Chung-Seog 51
The purpose of this study is to find the causes of malfunctions and defective operation of printed circuit boards(PCBs) built into home refrigerators to perform power saving functions. This study performed an electrostatic test of a PCB built-in using an Auto Triggering system; lightning and impulse tests using an LSS-15AX; and an impulse test using an INS-400AX. From the analysis of a secondarily developed product, it was found that electrostatic discharge(ESD) caused more malfunctions and defective operations than electric overstress(EOS) due to overvoltage. As a result of increasing the condenser capacity of the PCB circuit, withstanding voltage was increased to 7.4 kV. In addition, this study changed the power saving mode and connected a varistor to the #2 pin of an IC chip. As a result, the system consisting of all specimens of a finally developed product was operated stably with an applied voltage of less than 10 kV. This study found it necessary to perform quality control at the manufacturing stage in order to reduce the occurrence of electrostatic accidents to IC chips built into a PCB.
A Study on the Measurement of Electric Resistance of Footwear
Choi, Sang-Won;Lee, Seokwon 56
The occurrence of ventricular fibrillation is directly dependent on the magnitude and duration of the current. The current which flows through the human body is proportional to the touch voltage applied across the body and inversely proportional to the impedances in the circuit. The circuit impedances consist of the human body impedance, line impedance, equipment impedance, earth terminal impedance and the impedance of the shoes the person is wearing. The impedance of shoes greatly affects the severity of electric accidents. The human body impedances relevant to contact areas, contact conditions, current paths and touch voltages are already specified in IEC 60479-1. However, the impedance of shoes is ignored or substituted by a simple value because of the absence of sufficient data. For example, the impedance of shoes plus the ground contact resistance is postulated to be $1,000{\Omega}$ in IEC 61200-612. In IEEE 80, a bare foot is assumed and the shoe resistance plus ground contact resistance is taken as ${\rho}/4b{\Omega}$. In this paper, we measured and analyzed the impedance of shoes with respect to conditions such as applied weight, environmental variables and voltages. The results showed that the impedance of shoes depends on environmental variables regardless of the type of shoe. Most shoes showed a correlation with the applied force, whereas a few shoes showed characteristics related to the applied voltage. In terms of severity of electric shock, one third of the test samples proved to be dangerous in saltwater conditions.
Characteristics on Arc Waveform and RMS of Current by Conductive Powder
Kim, Doo Hyun;Kang, Yang Hyun 63
This paper analyzes the characteristics of the parallel arc waveform and the RMS of the current in the electrical tracking state caused by conductive powder. To achieve this goal, a field state investigation at metal processing companies in the Chungnam province area was conducted, during which conductive powder was collected. In experiments on electrical connection devices (breaker, connector) over which the conductive powder was scattered, the arc waveform and the RMS of the current were measured. The measured waveform and RMS (root-mean-square) of the current were analyzed to describe the characteristics and patterns of the electrical arc caused by the conductive powder. It was shown that conductive powder on an electrical connection device can carry enough current to start an electrical fire through high thermal energy. Also, the change of the sine waveform and the RMS of the current can be used to find the relationship between electrical fires and the fault signal caused by conductive powder. The results obtained in this paper will be very helpful for the prevention of electrical fires at metal processing companies.
Damage Pattern and Operation Characteristics of a Thermal Magnetic Type MCCB according to Thermal Stress
Lee, Jae-Hyuk;Choi, Chung-Seog 69
The purpose of this paper is to analyze the carbonization pattern and operation characteristics of an MCCB. The MCCB consists of the actuator lever, actuator mechanism, bimetallic strip, contacts, up and down operator, arc divider or extinguisher, metal operation pin, terminal part, etc. When the actuator lever of the MCCB is at the top or the internal metal operation pin is in contact with the front part, the MCCB is turned on or off; if the actuator lever or the internal metal operation pin moves to the back side, the MCCB is in the trip state. In the UL 94 vertical combustion test, white smoke was emitted from the MCCB an average of 17~24 seconds after the MCCB was ignited, and black smoke after an average of 45~50 seconds. It took 5~6 minutes for the MCCB surface to be half burnt and an average of 8~9 minutes for it to be entirely burnt. In the UL 94 test, the MCCB trip device operated after an average of 7~8 minutes. If an MCCB trip has occurred, it may have been caused by an electrical problem such as a short circuit or overcurrent, as well as by fire heat. From the entire-part combustion test according to KS C 3004, it was found that the metal operation pin could move to the MCCB trip position without any electrical problem.
A Study for Characteristics of Water that Penetrates Wood Flour due to Changes of Concentration of BDG
Kong, Il-Chean;Park, Il-Gyu;Lim, Kyung-Bum;Rie, Dong-Ho 74
A feature of deep-seated fires is that it is hard for them to spread to deeper sites, and there is a danger of re-ignition due to renewed contact with oxygen after extinguishment. In Korea, the certification criteria for wetting agents used for extinguishing deep-seated fires currently require a surface tension below 33 [mN/m]. To determine how much fire-fighting water can permeate into combustibles, this research analyzed permeation performance by measuring the permeation speed and the transmitted quantity released afterwards, pouring into a column solutions whose surface tension was varied by adjusting the concentration of the surfactant BDG (butyl diglycol). From the results, it can be determined that the transmitted quantity becomes smaller and the wetted area wider as surface tension is lowered, which can be interpreted as the quantity of absorbed liquid and the wetted area increasing because the fluid permeates into the core.
Evaluation of Damage on a Concrete Bridge Considering the Location of the Vehicle Fire
Park, Jang Ho;Kim, Sung Soo 80
Heat transfer analysis and thermal stress analysis of a concrete bridge were performed in order to investigate the damage to the bridge caused by a vehicle fire. Changes in material properties caused by the temperature rise, such as thermal conductivity, specific heat, density and elasticity, were considered. The heat transfer and thermal stress analyses were performed for various fire locations using ABAQUS. From the comparison of the numerical results, the degree of structural damage to the concrete bridge was investigated and considerations for the design of concrete bridges against fire were identified.
A Study on the Hazard Identification of Laboratory using 4M & HAZOP
Kim, T.H.;Rhie, K.W.;Seo, D.H.;Lee, I.M.;Yoon, C.S.;Lee, Y.K.;Park, J.I. 88
In university laboratories, areas of study are becoming diverse and complicated with the development of industry. New forms of potential risk factors, unlike existing ones, are increasing. In addition, many students conduct various experiments in the laboratory and could therefore be exposed to risk more often. Despite these risks, people do not properly recognize university laboratory safety activities or observe safety precautions, and they are continually exposed to laboratory accidents. In this study, we apply not the current diagnostic method, the checklist, but safety assessment methods widely used in industry, which can find many hazards that the checklist method could miss. This study uses 4M and Hazard & Operability (HAZOP) analysis to design a new laboratory safety assessment method.
A Study on the Risk Level of Work Types in Nuclear Power Plant Construction
Lee, Jong-Bin;Lee, Jun Kyung;Chang, Seong Rok 95
The goal of this study was to investigate significant factors influencing the level of safety at plant construction sites and to analyze the degree of risk by work classification. Currently, there are many construction sites for nuclear power plants for electricity generation, and the government has also planned to construct more nuclear power plants in the near future. However, much of the safety literature has neglected the degree of risk factors at plant construction sites. Safety managers participated in a brainstorming session to draw up decision criteria for the degree of risk (i.e., significant factors). They were then asked to answer a structured questionnaire developed to identify the most important factors. Finally, the analytic hierarchy process (AHP) was used to analyze the level of risk by work classification. The following results were obtained. First, a total of twelve factors for judging the degree of risk were found in the brainstorming session. Second, the questionnaire showed four significant factors: number of workers, working environment, skill of craft and accident experience. Third, the results of the AHP showed that architecture work is the most dangerous of the six work types. The results could be used to reduce the degree of risk at nuclear power plant construction sites.
A Study on the Work Ability and the Job Stress of the Workers in Manufacturing Industry of Automobile Parts
Mok, Yun-Soo;Lee, Dong Won;Chang, Seong Rok 100
According to Statistics Korea, in 2011 people over the age of 65 accounted for 11.8% of Korea's population. This number is expected to rise to 15.0% by 2019, making Korea an "aged society". As age increases, physical ability degrades to the point that workloads must be adjusted accordingly. However, workloads are assigned regardless of workers' ages or abilities. In addition, a decline in work efficiency due to aging also increases the risk of work-related injuries. Furthermore, stress-related diseases, along with musculoskeletal disorders (MSDs), are rising as main factors in industrial disasters, and excessive job stress negatively influences not only mental health but also physical health, making job stress a major cause of declining work ability and of turnover. The purpose of this research is to examine how the sociodemographic characteristics, MSD symptoms and musculoskeletal workload of workers in the automobile parts manufacturing industry influence work ability and job stress. The research found that work ability showed statistically significant differences according to age, years of work, sex, marital status and musculoskeletal workload, while job stress showed statistically significant differences according to age, years of work, marital status and musculoskeletal workload. In addition, it showed that as a worker's work ability decreases, job stress increases.
Variation of EEG Band Powers Related with Human Errors in Knowledge-based Responses
Lim, Hyeon-Kyo;Kim, Hong-Young 107
The problem solving and/or decision making processes usually encountered in daily life consist of sequences of human behaviors based upon one's knowledge. Rasmussen therefore introduced the Skill-Rule-Knowledge paradigm to counter human errors that can occur in nuclear power plants. Unfortunately, however, this has not been as easy as expected, since objective evidence has not been obtainable with conventional research techniques. With the help of EEG band power ratio techniques, this study tried to detect psycho-physiological symptoms of human errors, if any, while human beings perform knowledge-based behaviors such as simple arithmetic computations at different difficulty levels. A set of simulated tasks was carried out at a computer workstation. Four kinds of arithmetic computation tasks were given individually to 10 healthy male undergraduate students on different days, and during the experiment EEG and ECG were measured continuously for objective psycho-physiological analysis. According to the results, the ${\alpha}$/(${\alpha}+{\beta}$) as well as the ${\alpha}/{\beta}$ band power ratios were sensitive to task difficulty level, both decreasing consistently. However, neither of them revealed the influence of tasks of different difficulty levels with respect to task duration time. On the contrary, heart rate variability was more suggestive than expected. In conclusion, the band power of EEG waves can be helpful not only in assessing work difficulty level but also in assessing workers' skill development, if supported by cardiac measures such as HRV.
Accuracy of Paper-pencil Test used in Investigation of Control-display Stereotype - Focused on Stereotype for Control-burner Relationship of Four-stove Range -
Kee, Dohyung 114
The purpose of this study is to empirically investigate the accuracy of paper-pencil tests used in surveying control-display stereotypes. To do this, three paper-pencil tests dealing with the stereotype for the control-burner relationship of a four-stove gas range, in which three different gas range images were provided, were performed and the results were compared with those of existing studies. The result of the paper-pencil test using a simple image composed of lines and circles differed from that of the real model simulation, while the results of the other two tests and of a previous study providing more realistic images were the same as that of the real model simulation. Furthermore, the proportion of responses coinciding with the real model simulation increased as the images used became closer to a real range. It is concluded that paper-pencil tests well designed with realistic images may produce the same stereotype as a real model simulation.
A Study for Optimal Evacuation Simulation by Artificial Intelligence Evacuation Guidance Application
Jang, Jae-Soon;Kong, Il-Chean;Rie, Dong-Ho 118
For safe evacuation in a fire disaster, the evacuees must find the exit and evacuate quickly. In particular, if the evacuees do not know the location of the exit, they have to depend on the evacuation guidance system. As smoke spreads, visibility decreases, making it difficult to find the way to the exit by the naked eye. For these reasons, the evacuation guidance system is highly important. However, a guidance system that cannot change direction carries the risk of leading evacuees into dangerous areas. The evacuation safety assessment scenario based on evacuation simulation has the same problem: because evacuees in the simulation evacuate by the shortest route to the exit, the simulation result is the same as evacuation without a guidance system. In this study, an MAS (Multi Agent System)-based simulation program including the evacuation guidance system was used to implement changes in evacuation behavior due to fire. Using this method, confidence in evacuation safety assessment can be increased.
Safety Analysis of APR+ PAFS for CDF Evaluation
Kang, Sang Hee;Moon, Ho Rim;Park, Young Seop 123
The Advanced Power Reactor Plus(APR+), which is a GEN III+ reactor based on the APR1400, is being developed in Korea. In order to enhance the safety of the APR+, a passive auxiliary feedwater system(PAFS) has been adopted. The PAFS replaces the conventional active auxiliary feedwater system(AFWS) by introducing a natural driving force mechanism while maintaining the system's function of cooling the primary side and removing the decay heat. As the PAFS completely replaces the conventional AFWS, the cooling capacity of the PAFS must be verified for the core damage frequency(CDF) evaluation. For this reason, this paper discusses the cooling performance of the PAFS during transient accidents. The test case and scenarios were selected from the results of the sensitivity analysis in the APR+ Probabilistic Safety Assessment(PSA). The analysis was performed with the best-estimate thermal-hydraulic code RELAP5/MOD3.3. This study shows that the plant maintains a stable state without core damage under the given test scenarios, and the PSA results incorporating these analysis results show that the CDF values decrease. The analysis results can be used for a more realistic and accurate PSA.
Imagine you are suspending a cube from one vertex, allowing it to hang freely, and then lowering it into water. What shape does the surface of the water make around the cube?
Painting Cubes
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?
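One way to check the count of winning lines (including the 8 lines of the ordinary game) is a brute-force enumeration. The sketch below is an illustrative script, not part of the original problem; the helper name `winning_lines` is ours. A line is taken to be a run of k cells stepping by a fixed direction vector with components in {-1, 0, 1}.

```python
from itertools import product

def winning_lines(k, n):
    """Count winning lines on a k^n noughts-and-crosses board by brute force."""
    lines = set()
    for start in product(range(k), repeat=n):
        for step in product((-1, 0, 1), repeat=n):
            if all(s == 0 for s in step):
                continue  # the zero vector is not a direction
            cells = [tuple(c + i * d for c, d in zip(start, step))
                     for i in range(k)]
            # Keep the line only if all k cells lie on the board;
            # storing cells as a frozenset removes duplicate directions.
            if all(0 <= x < k for cell in cells for x in cell):
                lines.add(frozenset(cells))
    return len(lines)

print(winning_lines(3, 2))  # 8 lines in ordinary noughts and crosses
print(winning_lines(3, 3))  # 49 lines on the 3 by 3 by 3 board
```

The same counts follow from the known formula ((k+2)^n − k^n)/2, which for k = 3, n = 3 gives (125 − 27)/2 = 49.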
Cubist Cuts
Jake, Ryo and Charlie from Moorfield Junior School have explained how to cut up a 3x3x3 cube using 6 cuts:
"We got a 3 by 3 cube and then we cut it 2 times to make 3 lots of 9 cubes. Then we piled all the cubes on top of each other. Then we took another 2 cuts to leave 9 towers of 3 cubes. Next we layed them next to each other. After that we took another 2 cuts to leave the 27 unit cubes."
Chris B, Elliot and Joseph, also from Moorfield Juniors, sent us a diagram to show where these cuts should be:
Juliette noticed that it wouldn't be possible with fewer than 6 cuts:
"We need at least 6 cuts because we need one cut for each face of the small cube in the middle of the $3\times 3 \times 3$ cube."
Anthony noticed that, with a $4 \times 4 \times 4$ cube, we can use 6 cuts if we rearrange the cubes:
"First cut the cube in half down the middle, then stack the halves on top of each other (in an $8 \times 2 \times 4$ arrangement) and cut down the middle, to make four $4 \times 4$ slices each 1 unit thick. Then rearrange the cubes into the original arrangement and repeat the process in the other two directions. This will cut the cube into $1 \times 1 \times 1$ cubes. It cannot be done with fewer than 6 cuts because the cubes in the middle will each need at least one cut for each face"
The $n \times n \times n$ cube is a bit trickier. Try a few yourself before looking at this explanation.
First of all let's see how many cuts are needed to cut the cube into slices 1 unit deep. We can then do this in each of the three directions to cut the $n \times n \times n$ cube into unit cubes, and can multiply by three to find out how many cuts are needed in total.
For a cube with side length 3 or 4 units, we need 2 cuts, as Juliette and Anthony explained. For 5 units, we'll need an extra cut in each direction. To cut as efficiently as possible, we should use a method similar to Anthony's: first cut in half (or as close to in half as possible), then stack up the "halves" and repeat until we are left with "slices" 1 unit thick. We can then put the cube back together and repeat for the other two directions.
The general pattern is: for each doubling of $n$, we need an extra 3 cuts to cut an $n \times n \times n$ cube into $1 \times 1 \times 1$ cubes. This is shown in the table below.
$n$                        Number of cuts
$1$ to $2$                 $1 \times3$
$3$ to $4$                 $2 \times3$
$5$ to $8$                 $3 \times3$
$9$ to $16$                $4 \times3$
$(2^{k-1} + 1)$ to $2^k$   $k \times3$
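The doubling pattern in the table amounts to $3 \lceil \log_2 n \rceil$ cuts in total for an $n \times n \times n$ cube, when pieces may be rearranged between cuts. A small illustrative script (the function name `min_cuts` is ours) reproduces the table:

```python
import math

def min_cuts(n):
    """Minimum cuts to reduce an n x n x n cube to unit cubes,
    when pieces may be rearranged between cuts: 3 * ceil(log2(n))."""
    if n == 1:
        return 0  # already a unit cube
    return 3 * math.ceil(math.log2(n))

for n in (2, 3, 4, 5, 8, 9, 16):
    print(n, min_cuts(n))
```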
Monte Carlo integration
In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid,[1] Monte Carlo randomly chooses points at which the integrand is evaluated.[2] This method is particularly useful for higher-dimensional integrals.[3]
There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, sequential Monte Carlo (also known as a particle filter), and mean-field particle methods.
Overview
In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value is likely to be within those error bars.
The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral
$I=\int _{\Omega }f({\overline {\mathbf {x} }})\,d{\overline {\mathbf {x} }}$
where Ω, a subset of Rm, has volume
$V=\int _{\Omega }d{\overline {\mathbf {x} }}$
The naive Monte Carlo approach is to sample points uniformly on Ω:[4] given N uniform samples,
${\overline {\mathbf {x} }}_{1},\cdots ,{\overline {\mathbf {x} }}_{N}\in \Omega ,$
I can be approximated by
$I\approx Q_{N}\equiv V{\frac {1}{N}}\sum _{i=1}^{N}f({\overline {\mathbf {x} }}_{i})=V\langle f\rangle $.
This is because the law of large numbers ensures that
$\lim _{N\to \infty }Q_{N}=I$.
Given the estimation of I from QN, the error bars of QN can be estimated by the sample variance using the unbiased estimate of the variance.
$\mathrm {Var} (f)\equiv \sigma _{N}^{2}={\frac {1}{N-1}}\sum _{i=1}^{N}\left(f({\overline {\mathbf {x} }}_{i})-\langle f\rangle \right)^{2}.$
which leads to
$\mathrm {Var} (Q_{N})={\frac {V^{2}}{N^{2}}}\sum _{i=1}^{N}\mathrm {Var} (f)=V^{2}{\frac {\mathrm {Var} (f)}{N}}=V^{2}{\frac {\sigma _{N}^{2}}{N}}$.
As long as the sequence
$\left\{\sigma _{1}^{2},\sigma _{2}^{2},\sigma _{3}^{2},\ldots \right\}$
is bounded, this variance decreases asymptotically to zero as 1/N. The estimation of the error of QN is thus
$\delta Q_{N}\approx {\sqrt {\mathrm {Var} (Q_{N})}}=V{\frac {\sigma _{N}}{\sqrt {N}}},$
which decreases as ${\tfrac {1}{\sqrt {N}}}$. This is standard error of the mean multiplied with $V$. This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration against most deterministic methods that depend exponentially on the dimension.[5] It is important to notice that, unlike in deterministic methods, the estimate of the error is not a strict error bound; random sampling may not uncover all the important features of the integrand that can result in an underestimate of the error.
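The estimator Q_N and its error bar V·σ_N/√N above can be sketched as follows. This is an illustrative example with a hypothetical integrand f(x) = x² on [0, 1], whose exact integral is 1/3; the helper name `mc_integrate` is ours.

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    """Naive Monte Carlo estimate of the 1-D integral of f over [a, b].

    Returns (Q_N, delta_Q_N), where delta_Q_N = V * sigma_N / sqrt(N)
    uses the unbiased sample variance of f."""
    rng = random.Random(seed)
    V = b - a
    samples = [f(rng.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return V * mean, V * math.sqrt(var / n)

# Estimate of 1/3 with its one-sigma error bar
est, err = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(est, "+/-", err)
```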
While the naive Monte Carlo works for simple examples, an improvement over deterministic algorithms can only be accomplished with algorithms that use problem-specific sampling distributions. With an appropriate sample distribution it is possible to exploit the fact that almost all higher-dimensional integrands are very localized and only a small subspace contributes notably to the integral.[6] A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling—dividing the region into sub-domains—and importance sampling—sampling from non-uniform distributions—are two examples of such techniques.
Example
A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function
$H\left(x,y\right)={\begin{cases}1&{\text{if }}x^{2}+y^{2}\leq 1\\0&{\text{else}}\end{cases}}$
and the set Ω = [−1,1] × [−1,1] with V = 4. Notice that
$I_{\pi }=\int _{\Omega }H(x,y)dxdy=\pi .$
Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick N random numbers on Ω and compute
$Q_{N}=4{\frac {1}{N}}\sum _{i=1}^{N}H(x_{i},y_{i})$
In the figure on the right, the relative error ${\tfrac {Q_{N}-\pi }{\pi }}$ is measured as a function of N, confirming the ${\tfrac {1}{\sqrt {N}}}$ scaling of the error.
C example
Keep in mind that a true random number generator should be used.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int i, throws = 99999, insideCircle = 0;
    double randX, randY, pi;
    srand(time(NULL));
    for (i = 0; i < throws; ++i) {
        randX = rand() / (double) RAND_MAX;
        randY = rand() / (double) RAND_MAX;
        /* count points inside the quarter circle of radius 1 */
        if (randX * randX + randY * randY < 1) ++insideCircle;
    }
    pi = 4.0 * insideCircle / throws;
    printf("%f\n", pi);
    return 0;
}
Python example
A similar estimate written in Python.
from numpy import random

throws = 2000
inside_circle = 0
radius = 1

for _ in range(throws):
    # Choose a random X and Y centered around 0,0
    x = random.uniform(-radius, radius)
    y = random.uniform(-radius, radius)
    # If the point is inside the circle, increase the counter
    if x**2 + y**2 <= radius**2:
        inside_circle += 1

# Calculate the area estimate; it approaches pi as the number of throws grows
area = (((2 * radius) ** 2) * inside_circle) / throws
print(area)
Wolfram Mathematica example
The code below describes a process of integrating the function
$f(x)={\frac {1}{1+\sinh(2x)\log(x)^{2}}}$
from $0.8<x<3$ using the Monte-Carlo method in Mathematica:
func[x_] := 1/(1 + Sinh[2*x]*(Log[x])^2);
(*Sample from truncated normal distribution to speed up convergence*)
Distrib[x_, average_, var_] := PDF[NormalDistribution[average, var], 1.1*x - 0.1];
n = 10;
RV = RandomVariate[TruncatedDistribution[{0.8, 3}, NormalDistribution[1, 0.399]], n];
Int = 1/n Total[func[RV]/Distrib[RV, 1, 0.399]]*Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]
NIntegrate[func[x], {x, 0.8, 3}] (*Compare with real answer*)
Recursive stratified sampling
Recursive stratified sampling is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals. On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm. If the error estimate is larger than the required accuracy the integration volume is divided into sub-volumes and the procedure is recursively applied to sub-volumes.
The ordinary 'dividing by two' strategy does not work for multi-dimensions as the number of sub-volumes grows far too quickly to keep track. Instead one estimates along which dimension a subdivision should bring the most dividends and only subdivides the volume along this dimension.
The stratified sampling algorithm concentrates the sampling points in the regions where the variance of the function is largest thus reducing the grand variance and making the sampling more effective, as shown on the illustration.
The popular MISER routine implements a similar algorithm.
MISER Monte Carlo
The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance.[7]
The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of the integral $E_{a}(f)$ and $E_{b}(f)$ and variances $\sigma _{a}^{2}(f)$ and $\sigma _{b}^{2}(f)$, the variance Var(f) of the combined estimate
$E(f)={\tfrac {1}{2}}\left(E_{a}(f)+E_{b}(f)\right)$
is given by,
$\mathrm {Var} (f)={\frac {\sigma _{a}^{2}(f)}{4N_{a}}}+{\frac {\sigma _{b}^{2}(f)}{4N_{b}}}$
It can be shown that this variance is minimized by distributing the points such that,
${\frac {N_{a}}{N_{a}+N_{b}}}={\frac {\sigma _{a}}{\sigma _{a}+\sigma _{b}}}$
Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region.
The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step. The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection. The remaining sample points are allocated to the sub-regions using the formula for Na and Nb. This recursive allocation of integration points continues down to a user-specified depth where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error.
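The recursive idea can be sketched in one dimension as follows. This is a simplified toy version, not the actual MISER implementation: it spends part of the budget probing the standard deviation of f in each half, allocates the remaining points in proportion to those standard deviations, and recurses to a fixed depth. All names and parameter choices (probe fraction, recursion depth) are illustrative.

```python
import math
import random

rng = random.Random(1)

def stratified(f, a, b, n, depth=4):
    """Toy recursive stratified sampling of the 1-D integral of f over [a, b]."""
    if depth == 0 or n < 32:
        # Leaf: plain Monte Carlo estimate on this sub-interval
        m_ = max(n, 2)
        return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(m_)) / m_
    mid = (a + b) / 2
    probe = n // 8  # exploratory samples per half

    def sd(lo, hi):
        xs = [f(rng.uniform(lo, hi)) for _ in range(probe)]
        mu = sum(xs) / probe
        return math.sqrt(sum((x - mu) ** 2 for x in xs) / (probe - 1))

    sa, sb = sd(a, mid), sd(mid, b)
    rest = n - 2 * probe
    # Allocate remaining points in proportion to the standard deviations
    na = rest // 2 if sa + sb == 0 else int(rest * sa / (sa + sb))
    return (stratified(f, a, mid, na, depth - 1)
            + stratified(f, mid, b, rest - na, depth - 1))

# Sharply peaked integrand; the exact integral of exp(-100 x^2) over [0, 1]
# is (sqrt(pi)/20) * erf(10), approximately 0.0886227
print(stratified(lambda x: math.exp(-100 * x * x), 0.0, 1.0, 20_000))
```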
Importance sampling
There are a variety of importance sampling algorithms; the basic importance sampling estimator and the VEGAS algorithm are described below.
Importance sampling algorithm
Importance sampling provides a very important tool to perform Monte-Carlo integration.[3][8] The main result of importance sampling to this method is that the uniform sampling of ${\overline {\mathbf {x} }}$ is a particular case of a more generic choice, on which the samples are drawn from any distribution $p({\overline {\mathbf {x} }})$. The idea is that $p({\overline {\mathbf {x} }})$ can be chosen to decrease the variance of the measurement QN.
Consider the following example where one would like to numerically integrate a gaussian function, centered at 0, with σ = 1, from −1000 to 1000. Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only a very small part of them would be significant to the integral. This can be improved by choosing a different distribution from where the samples are chosen, for instance by sampling according to a gaussian distribution centered at 0, with σ = 1. Of course the "right" choice strongly depends on the integrand.
Formally, given a set of samples chosen from a distribution
$p({\overline {\mathbf {x} }}):\qquad {\overline {\mathbf {x} }}_{1},\cdots ,{\overline {\mathbf {x} }}_{N}\in V,$
the estimator for I is given by[3]
$Q_{N}\equiv {\frac {1}{N}}\sum _{i=1}^{N}{\frac {f({\overline {\mathbf {x} }}_{i})}{p({\overline {\mathbf {x} }}_{i})}}$
Intuitively, this says that if we pick a particular sample twice as much as other samples, we weight it half as much as the other samples. This estimator is naturally valid for uniform sampling, the case where $p({\overline {\mathbf {x} }})$ is constant.
The Metropolis–Hastings algorithm is one of the most used algorithms to generate ${\overline {\mathbf {x} }}$ from $p({\overline {\mathbf {x} }})$,[3] thus providing an efficient way of computing integrals.
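The gaussian example above can be sketched as follows. Here the sampling density p is chosen equal to the integrand f, so every weight f/p is exactly 1 and the variance collapses to zero (over the full real line the integral is exactly 1; restricting to [−1000, 1000] changes it only negligibly), while the uniform estimator is noisy because almost all of its samples land where f is essentially zero. The helper names are ours; this is an illustrative sketch, not library code.

```python
import math
import random

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(n, seed=0):
    """Integral of a standard gaussian, sampling from the gaussian itself."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        f = gaussian_pdf(x)  # integrand
        p = gaussian_pdf(x)  # sampling density (here identical, so f/p == 1)
        total += f / p
    return total / n

def uniform_estimate(n, seed=0):
    """Same integral with uniform sampling on [-1000, 1000]."""
    rng = random.Random(seed)
    V = 2000.0
    return V * sum(gaussian_pdf(rng.uniform(-1000, 1000)) for _ in range(n)) / n

print(importance_estimate(1000))  # exactly 1.0: zero-variance (degenerate) case
print(uniform_estimate(1000))     # noisy: most samples fall where f is ~ 0
```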
VEGAS Monte Carlo
Main article: VEGAS algorithm
The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region which creates the histogram of the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution.[9] In order to avoid the number of histogram bins growing like Kd, the probability distribution is approximated by a separable function:
$g(x_{1},x_{2},\ldots )=g_{1}(x_{1})g_{2}(x_{2})\ldots $
so that the number of bins required is only Kd. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. VEGAS incorporates a number of additional features, and combines both stratified sampling and importance sampling.[9]
See also
• Quasi-Monte Carlo method
• Auxiliary field Monte Carlo
• Monte Carlo method in statistical physics
• Monte Carlo method
• Variance reduction
Notes
1. Press et al, 2007, Chap. 4.
2. Press et al, 2007, Chap. 7.
3. Newman, 1999, Chap. 2.
4. Newman, 1999, Chap. 1.
5. Press et al, 2007
6. MacKay, David (2003). "chapter 4.4 Typicality & chapter 29.1" (PDF). Information Theory, Inference and Learning Algorithms. Cambridge University Press. pp. 284–292. ISBN 978-0-521-64298-9. MR 2012999.
7. Press, 1990, pp 190-195.
8. Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons.
9. Lepage, 1978
References
• Caflisch, R. E. (1998). "Monte Carlo and quasi-Monte Carlo methods". Acta Numerica. 7: 1–49. Bibcode:1998AcNum...7....1C. doi:10.1017/S0962492900002804. S2CID 5708790.
• Weinzierl, S. (2000). "Introduction to Monte Carlo methods". arXiv:hep-ph/0006269.
• Press, W. H.; Farrar, G. R. (1990). "Recursive Stratified Sampling for Multidimensional Monte Carlo Integration". Computers in Physics. 4 (2): 190. Bibcode:1990ComPh...4..190P. doi:10.1063/1.4822899.
• Lepage, G. P. (1978). "A New Algorithm for Adaptive Multidimensional Integration". Journal of Computational Physics. 27 (2): 192–203. Bibcode:1978JCoPh..27..192L. doi:10.1016/0021-9991(78)90004-9.
• Lepage, G. P. (1980). "VEGAS: An Adaptive Multi-dimensional Integration Program". Cornell Preprint CLNS 80-447.
• Hammersley, J. M.; Handscomb, D. C. (1964). Monte Carlo Methods. Methuen. ISBN 978-0-416-52340-9.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Newman, MEJ; Barkema, GT (1999). Monte Carlo Methods in Statistical Physics. Clarendon Press.
• Robert, CP; Casella, G (2004). Monte Carlo Statistical Methods (2nd ed.). Springer. ISBN 978-1-4419-1939-7.
International Tax and Public Finance
April 2019, Volume 26, Issue 2, pp 317–356
How sensitive is the average taxpayer to changes in the tax-price of giving?
Peter G. Backus
Nicky L. Grant
First Online: 26 June 2018
There is a substantial literature estimating the responsiveness of charitable donations to tax incentives for giving in the USA. One approach estimates the price elasticity of giving based on tax return data of individuals who itemize their deductions, a group substantially wealthier than the average taxpayer. Another estimates the price elasticity for the average taxpayer based on general population survey data. Broadly, results from both arms of the literature present a counterintuitive conclusion: the price elasticity of donations of the average taxpayer is larger than that of the average, wealthier, itemizer. We provide theoretical and empirical evidence that this conclusion results from a heretofore unrecognized downward bias in the estimator of the price elasticity of giving when non-itemizers are included in the estimation sample (generally with survey data). An intuitive modification to the standard model used in the literature is shown to yield a consistent and more efficient estimator of the price elasticity for the average taxpayer under a testable restriction. Strong empirical support is found for this restriction, and we estimate a bias in the price elasticity around − 1, suggesting the existing literature significantly over-estimates (in absolute value) the price elasticity of giving. Our results provide evidence of an inelastic price elasticity for the average taxpayer, with a statistically significant and elastic price response found only for households in the top decile of income.
Keywords: Charitable giving · Tax incentives · Bias
JEL Classification: D64 · H21 · H24 · D12
Some commentators have voiced the suspicion that, while a few sophisticated taxpayers (and their tax or financial advisors) might be sensitive to variations in tax rates, the average taxpayer is too oblivious or unresponsive to the marginal tax rate for anything like the economic model to be a realistic representation of reality.
Clotfelter (2002)
Do tax incentives for charitable giving lead people to give more? In the USA, taxpayers can deduct their charitable donations from their taxable income if they choose to itemize, or list, their deductible expenditures (e.g., donations, mortgage interest paid, state taxes paid) in their annual filing. Taxpayers can choose to subtract the sum of their itemized deductions or the standard deduction amount, whichever is greater, from their taxable income. The tax deductibility of donations was introduced into the US tax code in 1917 and has survived every tax reform since, fundamentally unchanged (Fack and Landais 2016). It has been called 'probably the most popular tax break in the Internal Revenue Code' (Reid 2017, p. 82). This deductibility of donations produces a price (or tax-price) of giving equal to 1 minus the marginal tax rate faced by the donor if she itemizes and equal to 1 if not. This fact has been exploited in a sizeable literature aimed at estimating the elasticity of charitable giving with respect to this price.
In general, estimates of the price elasticity of giving have been obtained using either tax-filer data (i.e., data from annual income tax forms), or from surveys. Estimating this elasticity using tax-filer data limits the sample to individuals who itemize their tax returns as no information on donations is recorded for non-itemizers.1 However, itemizers are substantially wealthier than non-itemizers on average.2 As such, the estimated price elasticity obtained using tax-filer data estimates the responsiveness of the average itemizer and may not reflect that of the relatively poorer average taxpayer.
In order to estimate the elasticity of the general taxpayer, we must consider non-itemizers, who account for about a fifth of total donations (Duquette 1999).3 This is often achieved using survey data from the general population of taxpayers, including non-itemizers. In their meta-analysis, Peloza and Steel (2005) report that studies using tax-filer data (40 of the 69 studies they surveyed) estimate a price elasticity of − 1.08 on average, compared to a mean elasticity of − 1.29 from studies using survey data (the remaining 29 studies), rejecting the null hypothesis that the mean responses are equal.4 This suggests the economically counterintuitive result that the average taxpayer is more responsive to changes in the price of giving than itemizers with higher average income.5 Such a conclusion is in contrast to what has been found in a related literature estimating the elasticity of taxable income, where higher-income individuals are found to be the most sensitive to changes in tax rates (e.g., Feldstein 1995; Saez 2004; see Saez et al. 2012 for an overview).
This paper provides an explanation for this result, showing it to arise from a downward bias in the estimator of the price elasticity using survey data in the standard model considered in the literature. Theoretical and empirical evidence is provided demonstrating this bias, which follows from a hitherto unrecognized source of endogenous price variation arising from changes in itemization status. It is shown that controlling for itemization status yields a consistent and more efficient estimator (relative to two-stage least squares estimators) of the price elasticity under a simple testable restriction, which we find is strongly supported by the data.
Results from this model find that the price response of the average taxpayer is inelastic, consistent with recent work in Hungerman and Ottoni-Wilhelm (2016). Only for those with income in the top decile do we find evidence of an elastic and statistically significant price response. This provides one explanation for the observation of Clotfelter (1985, 2002), and others (e.g., Aaron 1972), that the estimated price responsiveness of charitable giving seems unrealistically large for the average taxpayer. Our findings are also significant for public policy analysis as a price elasticity less than unity is indicative that the tax deductibility of charitable donations may not be 'treasury efficient.'6 Moreover, the optimal subsidies of giving derived in Saez (2004) depend heavily on the sensitivity of donors to the price of giving. For example, the optimal subsidy with a price elasticity of − 1 is eight times larger than with a price elasticity of − 0.5. This is important since we find with 95% probability that the price elasticity in the model removing the bias is bounded below by − 0.59 compared to − 1.59 for the standard model.
The literature in this area has long recognized two main sources of endogeneity in the price of giving. First, that the marginal tax rate is a function of taxable income, which, in turn, is a function of donations for itemizers (Auten et al. 2002). We follow a common practice in addressing this source of endogeneity (detailed below in Sect. 3). Second, that the price of giving is a function of itemization status itself, and hence donations, for so-called 'endogenous itemizers' (Clotfelter 1980), i.e., people that, conditional on their other deductible expenditures, are itemizers only because of the level of their donation. A common solution to this issue in the literature, using both tax-filer and survey data, has been to omit these endogenous itemizers, generally a small share of the sample, leaving only exogenous itemizers in the sample. In studies using tax-filer data, this exclusion is sufficient to expunge the endogenous price variation (e.g., Lankford and Wyckoff 1991; Randolph 1995; Auten et al. 2002; Bakija and Heim 2011), providing consistent estimation of the price elasticity of giving for the average itemizer.
However, if the interest is in consistently estimating the price elasticity of the average taxpayer, then we must use samples which include those who may itemize their tax returns in certain years and not in others. We show that in such samples a third, and heretofore unacknowledged, source of endogeneity remains even if endogenous itemizers are excluded. This is because non-itemizers face a price equal to 1, and not the lower price of 1 minus the marginal tax rate as for itemizers, because their donations are sufficiently small (conditional on their other tax deductible expenses). In short, as itemizing is a function of donations for endogenous itemizers, so is not itemizing a function of donations for all non-itemizers. As a result, estimators of the price elasticity of giving based on data which includes non-itemizers (e.g., Brown and Lankford 1992; Andreoni et al. 2003; Bradley et al. 2005; Brown et al. 2012; Yöruk 2010, 2013; Brown et al. 2015; Zampelli and Yen 2017) will be downward biased.7
To understand the intuition of the price endogeneity arising from the inclusion of non-itemizers in the sample, consider the case where a taxpayer switches from being an itemizer one year to a non-itemizer the next. By definition, her donations have decreased (holding other deductible expenditure constant) and the price of donating has increased. As such, a negative relationship will be found between the change in donations and the change in price, by construction; even in the extreme case where donation decisions are made at random.8 This leads to a difference in the mean donation of itemizers and non-itemizers (conditional on expenditures and other controls) that cannot be picked up in a fixed effect for those who switch itemization status in some years and not in others, being inherently time varying.
A natural approach to address this bias would be to form a two-stage least squares (2SLS) estimator, instrumenting for the change in price. We consider two exogenous instruments: the 'synthetic' and the actual change in marginal tax rates. Despite finding evidence that these instruments satisfy the identification condition, they explain only a small share of the variation in the price of giving, as most of the price variation comes from changes in itemization status. Consequently, we find that the 2SLS estimators yield standard errors too large to make any economically meaningful inference.
Instead, we develop an alternative approach. We show formally that the ordinary least squares (OLS) estimator of the price elasticity in a model which controls for the change in itemization status removes this bias when the average change in price for those who stop and start itemizing is of the same magnitude; a testable restriction. This restriction is shown to hold with probability close to 1, suggesting this estimator is consistent. Moreover, since it exploits the maximal exogenous variation in the price and is estimated via OLS, it is more efficient than any 2SLS estimator. In fact, we find the standard error of the OLS estimator of the price elasticity in this model to be half, or less, of that obtained via 2SLS. A final benefit of this approach is that it estimates the average treatment effect, rather than the local average treatment effect estimated by 2SLS.
The paper proceeds as follows. Section 2 provides the formal theoretical results and discusses the bias in the standard model based on survey data. Section 3 discusses the data and our instruments, while Sect. 4 presents the empirical results. Finally, conclusions are drawn in Sect. 5. Proofs of the theoretical results, along with additional empirical output, are provided in the Appendix.
2 Estimating price elasticity of donations
The standard empirical approach in estimating the price elasticity of donations has minimal theoretical underpinnings, modeling donations as a linear function of price, income and various controls. This empirical approach was first introduced in the seminal work of Taussig (1967) where
$$\begin{aligned} \log (D_{it})= & {} \alpha _{i}+\beta \log (P_{it})+\omega 'X_{it}+e_{it} \end{aligned}$$
$$\begin{aligned} P_{it}= & {} 1-I_{it}\tau _{it},\\ I_{it}= & {} 1(D_{it}+E_{it}>S_{it}) \end{aligned}$$
and \(\beta \) is the price elasticity of interest, \(D_{it}=D_{it}^{*}+1\) where \(D_{it}^{*}\) is the level of donation for household i at time t, \(S_{it}=S_{it}^{*}+1\) where \(S_{it}^{*}\) is the standard deduction, \(E_{it}\) is all other tax deductible expenditure, \(\tau _{it}\) is the marginal rate of income tax, \(P_{it}\) is the price of giving, \(X_{it}\) is a vector of personal characteristics including income and \(E_{it}\) (with corresponding parameter \(\omega \)), \(\alpha _{i}\) is all time invariant unobserved heterogeneity, and \(e_{it}\) is a random error term.9 Here \(I_{it}=1\) if an agent itemizes, namely if the sum of deductible expenditures \(\left( D^*_{it}+E_{it}\right) \) is larger than the standard deduction \(\left( S^*_{it}\right) \).10
At any time t, a household is either an exogenous itemizer (\(I_{it}=1\) where \(E_{it}>S_{it}^{*}\)), an endogenous itemizer (\(I_{it}=1\) where \(D_{it}+E_{it}>S_{it}\) and \(E_{it}\le S_{it}^{*}\)) or a non-itemizer (\(I_{it}=0\), i.e., \(D_{it}+E_{it}\le S_{it}\)). As noted above, including endogenous itemizers in the estimation sample has long been recognized to cause the OLS estimator to be downward biased. A common solution to this issue in the literature omits endogenous itemizers, generally a small share of the sample, leaving only non-itemizers and exogenous itemizers in the estimation sample.
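These definitions translate directly into code. The sketch below (dollar amounts in the usage line are hypothetical, chosen only to exercise each branch) classifies a household-year and returns the implied tax-price of giving:

```python
def itemization_status(D_star, E, S_star):
    """Classify a household-year following the paper's definitions.
    D_star: donations, E: other deductible expenditure, S_star: the
    standard deduction.  (The paper's D and S add 1 to handle logs of
    zero; the +1 cancels in the comparisons below.)"""
    if E > S_star:
        return "exogenous itemizer"     # itemizes regardless of donations
    if D_star + E > S_star:
        return "endogenous itemizer"    # itemizes only because of giving
    return "non-itemizer"

def price_of_giving(status, tau):
    """Tax-price of a dollar of giving: 1 - tau for itemizers, 1 otherwise."""
    return 1.0 - tau if status != "non-itemizer" else 1.0

# A household with $2,000 of donations, $11,000 of other deductions and a
# $12,000 standard deduction itemizes only because of its giving:
status = itemization_status(2000, 11000, 12000)   # "endogenous itemizer"
```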
This approach only addresses one side of the problem, as \(I_{it}\) is in general a function of \(D_{it}\), not just for endogenous itemizers. A non-itemizer has donations bounded above (since \(D_{it}\le S_{it}-E_{it}\)) and faces a higher price than an itemizer, whose donations are unbounded. This is the converse of the bias caused by endogenous itemizers, who have donations bounded below \(\left( S_{it}-E_{it}>0\text { and }D_{it}>S_{it}-E_{it}\right) \) and face a lower price than non-itemizers (as the marginal tax rate is greater than zero). We show formally that even when endogenous itemizers are omitted, a large bias remains as a result of households itemizing in some years and not in others, and this bias is not expunged by removing individual fixed effects.
To show this issue, we consider a model where endogenous itemizers are omitted (as is commonly done in the literature) and individual effects (\(\alpha _{i}\)) are removed via first differencing (FD).11 We omit endogenous itemizers for simplicity and to maintain comparability with the results in the literature. First differencing Eq. (1) gives
$$\begin{aligned} \Delta \log (D_{it})=\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+u_{it} \quad \text {where} \quad u_{it}=\Delta e_{it}. \end{aligned}$$
There are three sources of price variation: (1) changes in taxable income and other observables which determine \(\tau _{it}\) (which we control for), (2) exogenous variation in the marginal tax rate schedule (which can be exploited to identify the price effect) and (3) changes in itemization status, \(I_{it}\), which we show are endogenous. We define the following dynamic itemization behaviors for any i, t
I1. Continuing itemizer: \(\Delta I_{it}=0\), \(I_{i,t-1}=1\), \(I_{it}=1\)
I2. Stop itemizer: \(\Delta I_{it}=-1\), \(I_{i,t-1}=1\), \(I_{it}=0\)
I3. Start itemizer: \(\Delta I_{it}=1\), \(I_{i,t-1}=0\), \(I_{it}=1\)
I4. Continuing non-itemizer: \(\Delta I_{it}=0\), \(I_{i,t-1}=0\), \(I_{it}=0\)
Note we refer to I2 and I3 collectively as 'switchers.' Define \(V_{it}=S_{it}-E_{it}\) which is the standard deduction minus expenses plus one. So, \(I_{it}=0\) where \(V_{it}\ge 1\) and \(I_{it}=1\) where \(V_{it}<1\). Table 1 summarizes the changes in price and the bounds on changes in donations (if any) for the four dynamic itemization behaviors (I1–I4).
Changes in donations and price for I1–I4
I1 (continuing itemizer; \(I_{i,t-1}=1\), \(I_{it}=1\)): \(\Delta \log (D_{it})\) is unbounded; \(\Delta \log (P_{it})=\Delta \log (1-\tau _{it})\)
I2 (stop itemizer; \(I_{i,t-1}=1\), \(I_{it}=0\)): \(\Delta \log (D_{it})\le \log (V_{it})\); \(\Delta \log (P_{it})=-\log (1-\tau _{i,t-1})\)
I3 (start itemizer; \(I_{i,t-1}=0\), \(I_{it}=1\)): \(\Delta \log (D_{it})\ge -\log (V_{i,t-1})\); \(\Delta \log (P_{it})=\log (1-\tau _{it})\)
I4 (continuing non-itemizer; \(I_{i,t-1}=0\), \(I_{it}=0\)): \(-\log (V_{i,t-1})\le \Delta \log (D_{it})\le \log (V_{it})\); \(\Delta \log (P_{it})=0\)
To show the bias, we decompose the correlation between \(u_{it}\) and \(\Delta \log (P_{it})\) into four component parts corresponding to each quadrant of Table 1. For continuing non-itemizers (I4), the change in price equals zero and hence does not introduce any bias in the OLS estimator in Eq. (2). For continuing itemizers (I1), there is no bound on \(\Delta \log (D_{it})\) and since \( u_{it}\) is exogenous and uncorrelated with \(\Delta \log (1-\tau _{it})\) no bias is introduced by this group either.
However, when \(\Delta I_{it}=1\) (start itemizers, I3) then \(\Delta \log (D_{it})\) (and hence \(u_{it}\)) is bounded below where \(\Delta \log (P_{it})<0\), and so the two variables are negatively correlated. To see this more formally, note that for start itemizers \(I_{it}=1\) (i.e., \(E_{it}\ge S_{it}^{*}\) as we consider only exogenous itemizers) and \(I_{i,t-1}=0\) (i.e., \(D_{i,t-1}\le S_{i,t-1}-E_{i,t-1}\)), so donations in \(t-1\) are bounded from above and donations in t are unbounded.12 It then follows that \(\Delta \log (D_{it})\) is bounded from below for start itemizers. Formally,
$$\begin{aligned} \Delta I_{it}=1\;\Rightarrow \;\Delta \log (D_{it})\ge & {} \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1}) \end{aligned}$$
$$\begin{aligned} ~\ge & {} -\log (S_{i,t-1}-E_{i,t-1}) \end{aligned}$$
where (4) follows since \(\log (D_{it})\ge 0\). Given that \(\Delta \log (D_{it})\) is bounded below for start itemizers, the residuals, \(u_{it}\), are also bounded below. Since \(u_{it}\) are mean zero (with the inclusion of a constant) the residuals are skewed to the positive for start itemizers; a group who also faces a decrease in price from 1 to \(1-\tau _{it}\). The same argument holds in reverse for stop itemizers. Hence, changes in itemization status lead to a negative correlation between \(\Delta \text {log(}P_{it})\) and \(u_{it}\), even when \(\beta =0\).
Theorem 1 demonstrates this result (see proof in Appendix A), showing that the OLS-FD estimator of \(\beta \) in (2) is downward biased in the presence of switchers. For ease of exposition, we assume \(\omega =0\) and \(E[u_{it}]=0\).13 Equation (2) then collapses to
$$\begin{aligned} \Delta \log (D_{it})=\beta \Delta \log (P_{it})+u_{it} \end{aligned}$$
and the OLS-FD estimator of \(\beta \) in (5) is \(\hat{\beta }_\mathrm{FD}=\frac{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \log (D_{it})\Delta \log (P_{it})}{\sum _{i=1}^{N}\sum _{t=2}^{T}\Delta \log (P_{it})^{2}}\).14
To simplify the proof, we assume that \((D_{it},\tau _{it},u_{it})'\) is i.i.d.15 We also assume that \(\tau _{it}\) conditional on income is strictly exogenous which we achieve by controlling for income. While the marginal tax rate schedule itself is exogenous, \(\tau _{it}\) will be a nonlinear function of taxable income. As such \(\Delta \log (P_{it})\) is highly nonlinear in income and if we fail to control for any potential nonlinearity between \(\Delta \log (D_{it})\) and \(\Delta \log (Y_{it})\) then we introduce a correlation between \(\Delta \log (P_{it})\) and \(u_{it}\). In light of this, we check the robustness of our results to nonlinear specifications in income, results provided in Appendix D.16
Define \(p_{1}=\mathcal {P} \{\Delta I_{it}=1\}\), \( p_{-1}=\mathcal {P}\{\Delta I_{it}=-1\}\), \(\xi _{1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=1]\) and \(\xi _{-1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=-1]\).
Theorem 1
\(\hat{\beta }_\mathrm{FD}\overset{p}{\rightarrow }\beta +(p_{1}\xi _{1}+p_{-1}\xi _{-1})/E[(\Delta \log (P_{it}))^{2}]\) where \(\xi _{1},\xi _{-1}<0\).
Theorem 1 shows that there is a downward bias in the OLS-FD estimate of \(\beta \) when the probability of either stopping or starting to itemize is nonzero. In our sample, \(p_{1}\) and \(p_{-1}\) are approximately 0.1 and 0.08, respectively. The conditional covariance between \(u_{it}\) and \(\Delta \text {log(}P_{it})\) (\(\xi _{1}\), \(\xi _{-1}\)) is negative for both forms of switchers, so Theorem 1 implies a downward bias in the estimator of \(\beta \) in the standard model.17
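The mechanical nature of this bias can be seen in a small simulation. All parameter values below (tax rate 0.3, standard deduction 2, expense range, two periods) are illustrative assumptions, not taken from the paper, and for simplicity endogenous itemizers are not excluded; the switching bias is present either way. Log-donations are pure noise, so the true \(\beta \) is zero, yet the first-differenced OLS slope of \(\Delta \log (D_{it})\) on \(\Delta \log (P_{it})\) is far below zero because itemization status, and hence the price, is itself a function of donations.

```python
import math
import random

random.seed(0)
tau = 0.3                     # illustrative flat marginal tax rate
dlogD, dlogP = [], []
for _ in range(50000):        # households observed in two periods
    e = [random.gauss(0.0, 1.0) for _ in range(2)]    # log D_it: pure noise
    D = [math.exp(v) for v in e]
    E = [random.uniform(0.0, 2.0) for _ in range(2)]  # other deductibles
    I = [1 if D[t] + E[t] > 2.0 else 0 for t in range(2)]  # itemize?
    P = [1.0 - I[t] * tau for t in range(2)]          # tax-price of giving
    dlogD.append(e[1] - e[0])
    dlogP.append(math.log(P[1]) - math.log(P[0]))

# First-difference OLS slope: cov(dlogP, dlogD) / var(dlogP).
n = len(dlogD)
mx = sum(dlogP) / n
my = sum(dlogD) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(dlogP, dlogD))
         / sum((x - mx) ** 2 for x in dlogP))
# slope is strongly negative although donations ignore the price entirely
```

Only switchers move the covariance: start itemizers have positively skewed donation changes and a price fall, stop itemizers the reverse, exactly as in Table 1.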
The first thought toward a solution to this bias would be to search for an instrument for \(\Delta \log (P_{it})\). An obvious choice is the exogenous change in the tax rate (conditioning on a given level of taxable income). Exogenous variation in marginal tax rates has been explicitly relied upon in both tax-filer and survey data studies to estimate price elasticities of giving in the past (e.g., Feldstein 1995; Bakija and Heim 2011). We pursue an instrumental variable approach and find evidence that our proposed instruments (detailed in Sect. 3.1 below) satisfy the identification condition. However, the correlation between these instruments and \(\Delta \text {log}(P_{it})\) is small, as much of the variation in \(\Delta \text {log}(P_{it})\) arises from variations in \(\Delta I_{it}\). As such the 2SLS estimator yields large standard errors that make any meaningful economic inference implausible.
As such we seek a more efficient method to estimate the price elasticity. The source of the endogeneity in this problem differs from that commonly found in many instrumental variable settings as the source of the endogenous variation in \(\Delta \log (P_{it})\) is measurable (arising from changes in \(I_{it}\)). One complication arises as \(\Delta \log (P_{it})\) is a nonlinear function of \(I_{it}\) and \(I_{i,t-1}\). As such it is not immediately clear how to transform the standard model to expunge this endogenous variation in \(\Delta \log (P_{it})\). Intuitively, controlling for \(\Delta I_{it}\) removes the variation in \(\Delta \log (P_{it})\) from the change in itemization status and should (possibly under some restrictions) remove the endogenous price variation in \(\Delta \log (P_{it})\). This would then leave the maximal exogenous variation in price with which to consistently estimate \(\beta \) and with more precision than a 2SLS-FD estimator.18
Theorem 2 below formalizes this intuitive argument, showing that controlling for the change in itemization status removes the bias in Theorem 1 under a testable restriction: that the average change in price for stop and start itemizers is of the same magnitude. We define the 'itemizer model,' which, in contrast to the standard model of Eq. (2), controls for \(\Delta I_{it}\), as
$$\begin{aligned} \Delta \log (D_{it})=\gamma \Delta I_{it}+\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+e_{it}. \end{aligned}$$
Define \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\) and \(w_{it}=(z_{it}',X_{it}')'\). The OLS-FD estimator in the 'itemizer model' is \( \hat{\theta }_\mathrm{FD}^{I}=\left( \sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}w_{it}'\right) ^{-1}\sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}\Delta \log (D_{it})\), where we express \(\hat{\theta }_\mathrm{FD}^{I}=(\hat{\gamma }_\mathrm{FD}^{I},\hat{\beta }_\mathrm{FD}^{I},\hat{\omega }_\mathrm{FD}^{I'})'\).
Intuitively, the coefficient \(\gamma \) on \(\Delta I_{it}\) allows the mean change in donations for switchers (conditional on a given marginal tax rate and set of characteristics) to differ relative to non-switchers (by \(\gamma \) and \(-\gamma \), respectively). In this sense, this coefficient 'mops up' the bias derived in Theorem 1 by accommodating this mean shift in donations for switchers which is inherently correlated with the price causing a bias in the OLS estimator of \(\beta \) from Eq. (2).19
Note further that \(\gamma \) has no real economic interpretation here; it is a nuisance parameter that allows consistent estimation of \(\beta \). Even if donations were unresponsive to price, and indeed to any other factors, it must follow that \(\gamma >0\), as by definition the mean change in donations (conditional on other deductible expenses) is negative for stop itemizers, and vice versa for start itemizers. It could be that \(\gamma \) partly reflects a price effect, e.g., if there is an 'itemization effect' (Boskin and Feldstein 1977), namely if the response to a price change from a change in \(I_{it}\) differs from that to a corresponding price change from a change in \(\tau _{it}\) (or, more broadly, if there is any nonlinear relationship between \(\Delta \log (P_{it})\) and \(\Delta \log (D_{it})\)). In either case, \(\gamma \) would partly pick up this price effect and we would need to model this nonlinear price relationship. This issue is discussed further in Sect. 4.20
Define \(\bar{\tau }_{1}=E[\log (1-\tau _{it})|\Delta I_{it}=1]\), \(\bar{\tau }_{-1}=E[\log (1-\tau _{i,t-1})|\Delta I_{it}=-1]\) and \(C=\det (E[w_{it}w_{it}'])>0\) (ruling out any multi-collinear regressors in \(X_{it}).\)
Theorem 2
If \(E[e_{it}X_{it}]=0\) (exogenous controls), then
$$\begin{aligned} \hat{\beta }_\mathrm{FD}^{I}\overset{p}{\rightarrow }\beta +\frac{p_{1}p_{-1}}{C}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]). \end{aligned}$$
By Theorem 2 (formally proven in Appendix A), there is no bias when either \(p_{1}\) or \(p_{-1}\) are zero, which, as noted above, is not the case in our sample. More importantly, it shows there is no asymptotic bias in \(\hat{\beta }_\mathrm{FD}^{I}\) if the average price increase for stop itemizers (\(\bar{\tau }_{-1}\)) is of the same magnitude as the average price decrease for start itemizers (\(\bar{\tau }_{1}\)). If (for a given \(\Delta X_{it}\)) both stop and start itemizers have the same price elasticity (\(\beta \)) then the size of the endogenous response of \(\Delta \log (D_{it})\) conditional on \(\Delta X_{it}\) will be of equal magnitude (but opposite sign) provided they face the same magnitude of price change on average. This restriction \(\left( \bar{\tau }_{1}=\bar{\tau }_{-1}\right) \) is testable, and we find strong empirical support that it holds (discussed below). Moreover, if \(\bar{\tau }_{1}=\bar{\tau }_{-1}\) then Theorems 1 and 2 imply \(\hat{\beta }_\mathrm{FD}-\hat{\beta }_\mathrm{FD}^{I}\) consistently estimates the bias in \(\hat{\beta }_\mathrm{FD}\) shown in Theorem 1.
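A companion simulation illustrates Theorem 2 (all parameter values are again invented for illustration, not taken from the paper). Tax rates are drawn i.i.d., so \(\bar{\tau }_{1}=\bar{\tau }_{-1}\) by construction and the price still varies exogenously for continuing itemizers; adding \(\Delta I_{it}\) as a regressor then moves the estimated elasticity from strongly negative back to the true value of zero.

```python
import math
import random

def ols(X, y):
    """Tiny OLS via normal equations and Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= m * A[col][j]
            c[r] -= m * c[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (c[r] - sum(A[r][j] * beta[j]
                              for j in range(r + 1, k))) / A[r][r]
    return beta

# True price elasticity is zero by construction; marginal tax rates vary
# across households and years (illustrative values), so the price moves
# exogenously for continuing itemizers as well as for switchers.
random.seed(1)
rows_std, rows_item, y = [], [], []
for _ in range(60000):
    tau = [random.choice([0.2, 0.3, 0.4]) for _ in range(2)]
    e = [random.gauss(0.0, 1.0) for _ in range(2)]        # log D_it: noise
    D = [math.exp(v) for v in e]
    E = [random.uniform(0.0, 2.0) for _ in range(2)]      # other deductibles
    I = [1 if D[t] + E[t] > 2.0 else 0 for t in range(2)]
    P = [1.0 - I[t] * tau[t] for t in range(2)]
    dp = math.log(P[1]) - math.log(P[0])
    dI = I[1] - I[0]
    y.append(e[1] - e[0])
    rows_std.append([1.0, dp])          # standard model, Eq. (2)
    rows_item.append([1.0, dp, dI])     # itemizer model, Eq. (6)

beta_std = ols(rows_std, y)[1]    # biased well below zero
beta_item = ols(rows_item, y)[1]  # close to the true value of zero
```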
3 Data description, specification of the tax-price of giving and instruments
Our analysis uses data from the Panel Study of Income Dynamics (PSID) covering, biannually, 2000–2012.21 The PSID contains information on socioeconomic household characteristics, with substantial detail on income sources and amounts, certain types of expenditure, employment, household composition and residential location. In 2000, the PSID introduced the Center on Philanthropy Panel Study (COPPS) module which includes questions about charitable giving.22
The raw sample of data has 58,993 observations. Following Wilhelm (2006), we remove the low-income oversample, leaving us with a representative sample of American households. Households donating more than 50% of their taxable income, households with taxable income less than the standard deduction and households appearing only once during the observed period are omitted. These restrictions leave us with a working sample of 28,480 observations (6325 households observed in 4.5 waves on average). The unit of analysis is the household. All monetary figures are in 2014 prices.23
Actual itemization status \(\left( I_{it}\right) \) is reported in the survey. To identify the endogenous itemizers, we predict itemization status by determining if the sum of deductible expenditures of each household (donations, paid property taxes, mortgage interest, state taxes and medical expenses in excess of 7.5% of gross income) is larger than the standard deduction faced by the household (about $6,000 for single person households and $12,000 for married couples, moving roughly in line with inflation each year).24 Following convention, endogenous itemizers are defined as households who report that they itemize and are predicted to itemize, but only when donations are included among itemized deductions, i.e., \((0<S-E<D)\). Endogenous itemizers comprise approximately 3% of the overall sample and 7% of itemizers. Exogenous itemizers \((E>S)\) make up 46% of the sample and 93% of the itemizers.25
The marginal tax rates used to calculate the price are obtained using the National Bureau of Economic Research's Taxsim program (Feenberg and Coutts 1993). This allows for the calculation of rates and liabilities at both the state and federal level given a number of tax relevant household characteristics including earned income, passive income, various deductible expenditures, capital gains and marital status. As a result, the calculated marginal tax rates are a function of the observable characteristics we submit to Taxsim and the exogenous federal and state tax codes.
We define the marginal tax rate as
$$\begin{aligned} \tau _{it}=\frac{\tau {}_{it}^\mathrm{Fed}+\delta _{it}^\mathrm{State}\tau {}_{it}^\mathrm{State}-\tau {}_{it}^\mathrm{State}\tau {}_{it}^\mathrm{Fed}\delta _{it}^\mathrm{Fed}-\tau _{it}^\mathrm{State}\tau _{it}^\mathrm{Fed}\delta _{it}^\mathrm{State}}{1-\tau _{it}^\mathrm{State}\tau _{it}^\mathrm{Fed}\delta _{it}^\mathrm{Fed}} \end{aligned}$$
where \(\tau _{it}^\mathrm{Fed}\) is the federal marginal income tax rate faced by household i in year t, \(\tau _{it}^\mathrm{State}\) is the state marginal income tax rate (42 states have a state income tax), \(\delta _{it}^\mathrm{State}\) is a dummy equal to 1 if donations can be deducted from state tax returns (75% of these states allow donations to be deducted), \(\delta _{it}^\mathrm{Fed}\) is a dummy equal to 1 if federal taxes can be deducted from state returns (allowed in six states), and \(I_{it}\) is equal to 1 if i itemizes in year t and 0 otherwise.
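Equation (8) is straightforward to compute once the component rates are known. A minimal sketch (the rates in the usage note are illustrative, not drawn from the paper):

```python
def combined_mtr(tau_fed, tau_state, deduct_donation_state, deduct_fed_from_state):
    """Combined marginal tax rate on giving as in Eq. (8): federal and
    state rates interact when federal taxes are deductible from state
    returns (delta_Fed) and/or donations are deductible at the state
    level (delta_State).  The delta arguments are 0/1 indicators."""
    d_s = 1.0 if deduct_donation_state else 0.0
    d_f = 1.0 if deduct_fed_from_state else 0.0
    num = (tau_fed + d_s * tau_state
           - tau_state * tau_fed * d_f
           - tau_state * tau_fed * d_s)
    den = 1.0 - tau_state * tau_fed * d_f
    return num / den

# E.g., a 25% federal rate and 5% state rate, donations deductible at the
# state level but federal taxes not deductible from state returns:
rate = combined_mtr(0.25, 0.05, True, False)   # 0.25 + 0.05 - 0.25*0.05
```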
The actual marginal tax rate, \(\tau _{it}^{a}\), is calculated using i's tax relevant characteristics in t and i's actual level of giving in t. The price of giving for this household is then \(P_{it}^{a}=1-I_{it}\tau _{it}^{a}\). However, as noted in Auten et al. (2002), \(\tau _{it}^{a}\), and thus \(P_{it}^{a}\), will be endogenous, even for exogenous itemizers, as donations may be large enough to push i down to a lower tax bracket.
To address this source of endogeneity, distinct from the source we focus on in this paper, we follow Auten et al. (2002) and Brown et al. (2012) in constructing an alternative marginal tax rate, \(\tau _{it}^{b}\), calculated as the mean of the marginal tax rate obtained by setting i's giving in t to 0 (sometimes called the 'first-dollar' marginal tax rate in the literature) and the marginal tax rate obtained by setting i's giving in t to 1% of median household income (the level used in Auten et al. (2002), which corresponds roughly to the median level of giving in our sample). The price variable we use in the regression analysis below is then \(P_{it}^{b}=1-I_{it}\tau _{it}^{b}\), which, as Auten et al. (2002) note, will be 'consistent' with the actual price of giving but will not suffer from the endogeneity arising from donations pushing a taxpayer into a lower tax bracket (the first source of endogeneity noted in the introduction). The correlation between \(P_{it}^{a}\) and \(P_{it}^{b}\) is 0.992.26
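This price construction can be sketched as follows. Here `rate_given_giving` is a hypothetical stand-in for a Taxsim query at a given level of giving, and the toy two-rate schedule is purely illustrative:

```python
def tau_b(rate_given_giving, median_income):
    """Auten et al. (2002)-style rate: the mean of the first-dollar marginal
    rate (giving = 0) and the rate with giving set to 1% of median household
    income.  `rate_given_giving` stands in for a Taxsim query."""
    return 0.5 * (rate_given_giving(0.0) +
                  rate_given_giving(0.01 * median_income))

def price_b(tau, itemizes):
    """P_b = 1 - I * tau_b, with I the itemization dummy."""
    return 1.0 - tau if itemizes else 1.0

# Toy schedule: a 25% bracket that drops to 15% once deductions from giving
# exceed $400 (so the first-dollar and 1%-of-median rates differ).
toy_rate = lambda giving: 0.15 if giving > 400 else 0.25
t = tau_b(toy_rate, median_income=50_000)   # (0.25 + 0.15) / 2 = 0.20
print(price_b(t, itemizes=True))            # -> 0.8
```

Averaging the two rates smooths over bracket thresholds, which is why \(\tau^{b}\) is insensitive to the household's own choice of how much to give.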
To help clarify the intuition of the bias derived in Theorem 1, we present descriptive statistics for changes in price and donations for the four types of dynamic itemization behaviors (I1 to I4 from Sect. 2) in Table 2. We present complete descriptive statistics for all other control variables in Appendix B.
Table 2 Descriptive statistics of primary variables in first differences, by dynamic itemization type: I1 (continuing itemizers), I2 (continuing non-itemizers), I3 (start itemizers) and I4 (stop itemizers). Rows report \(\Delta \text {log}P\), \(\Delta \text {log}P|\Delta \text {log}P>0\), \(\Delta \text {log}P|\Delta \text {log}P<0\), \(\Delta \text {log}D\), \(\Delta \text {log}D|\Delta \text {log}P>0\) and \(\Delta \text {log}D|\Delta \text {log}P<0\). [Cell values were lost in extraction; the key figures are quoted in the surrounding text.] All monetary figures are in 2014 prices, deflated using the Consumer Price Index. Standard errors are reported in parentheses under the corresponding estimate
Taking all the taxpayers together, the mean change in price is essentially 0 (− 0.002 in column (1)) and the mean change in donations is 0.037, though there is a mass point at 0, with 21% of the observations experiencing no change in donations. Price changes for continuing itemizers, which come from changes in marginal tax rates (and taxable income, for which we control), are essentially 0 on average (0.004 in column (2)). However, the mean increase in the price of giving (\(\Delta \log P|\Delta \log P>0\)) for continuing itemizers is 0.080 (median \(=\) 0.042) and the mean decrease is − 0.089 (median \(=-\) 0.063). The implied elasticity for continuing itemizers is 0.337, i.e., small and positive.27
Start itemizers, for whom the price necessarily falls, see an average decrease of 0.257 in the log of price. Stop itemizers, who necessarily face a price increase, see an average log price increase of 0.261. The price changes for start and stop itemizers are driven largely by the change in itemization status. Note that the price for continuing non-itemizers does not change, being equal to 1 by definition.
As we argue above, the mean change in donations for start and stop itemizers (conditional on deductible expenditures, which we control for) must be larger and smaller, respectively, than the change in donations for non-switchers. For switchers, the implied elasticities are much larger (in absolute value): − 2.351 for start itemizers and − 1.832 for stop itemizers, with a weighted mean elasticity of − 2.107. By Theorem 1, the negative bias in the standard model comes from the price variation from switchers. As such, we would expect to find larger implied elasticities, in absolute value, for switchers relative to continuing itemizers, which is consistent with these results.
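The implied elasticities quoted here are simply ratios of group mean log changes, with the weighted mean combining them by group size. A sketch with purely hypothetical group means (these are not the sample values):

```python
def implied_elasticity(mean_dlogD, mean_dlogP):
    """Implied elasticity: mean change in log donations over mean change
    in log price for a group."""
    return mean_dlogD / mean_dlogP

def weighted_mean_elasticity(groups):
    """groups: iterable of (n, mean_dlogD, mean_dlogP) tuples."""
    n_total = sum(n for n, _, _ in groups)
    return sum(n * implied_elasticity(dD, dP)
               for n, dD, dP in groups) / n_total

# Hypothetical: start itemizers' log price falls 0.25 while giving rises
# 0.50; stop itemizers' log price rises 0.25 while giving falls 0.45.
print(weighted_mean_elasticity([(100, 0.50, -0.25),
                                (100, -0.45, 0.25)]))  # -> -1.9
```

Because switchers' price changes are mechanically tied to their giving, these large implied elasticities are exactly the endogenous variation Theorem 1 warns about, not a clean measure of price responsiveness.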
3.1 Instrumental variables for price
Any attempt to identify the price elasticity of giving, via 2SLS or otherwise, relies on exogenous variation in the tax code to introduce variation in marginal tax rates and thus in price. The largest changes to federal tax rates during our observed period occurred in the Economic Growth and Tax Relief Reconciliation Act of 2001 and the Jobs and Growth Tax Relief Reconciliation Act of 2003, which changed the federal income tax brackets and the marginal rates in those brackets. Other changes included adjustments to the manner in which dividends are taxed and changes to the Alternative Minimum Tax exemption levels (Tax Increase Prevention and Reconciliation Act of 2005), though Congress introduces a multitude of changes each year. In fact, the US Congress made nearly 5000 changes to the federal tax code between 2001 and 2012 (Olson 2012). Moreover, forty-three states impose some form of income tax, with rates ranging from 0.36% in Iowa on income below $1539 up to 11% on income over $200,000 in Hawaii. As state income tax rates are set by state legislatures, the evolution of those rates over time differs from state to state, providing temporal as well as cross-sectional exogenous variation in state marginal income tax rates. Though the most significant changes to the federal tax code took place in the early 2000s, this exogenous tax variation is not isolated to that particular period.
We isolate an exogenous change in the marginal tax rate following Gruber and Saez (2002) by constructing a 'synthetic' marginal tax rate, \(\tau _{it}^{s}\) in a manner analogous to \(\tau _{it}^{b}\) but using i's tax relevant characteristics in t, including giving set to 0, but the tax code in place at \(t+2\). Any difference between \(\tau _{it}^{s}\) and \(\tau _{it}^{b}\) is necessarily due to changes in the federal or state tax codes. Figure 1 plots the mean exogenous increases \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}>0\right) \) and decreases \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}<0\right) \) in marginal tax rates.
Fig. 1 Mean exogenous increases and decreases in marginal tax rates. Notes: The figure plots \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}>0\right) \) and \(\left( \left. \overline{\tau _{it}^{b}-\tau {}_{it}^{s}}\right| \tau _{it}^{b}-\tau {}_{it}^{s}<0\right) \) on the left-hand axis and the proportion of the sample in each year experiencing an exogenous change in their marginal tax rate on the right-hand axis
Between about 40 and 60% of the sample experiences an exogenous change in the marginal tax rate they face in a given year. Around 79% of households experience at least one exogenous change to their marginal tax rate. The mean exogenous increase in a household's marginal tax rate is 0.032 (median \(=\) 0.006), and the mean exogenous decrease is − 0.035 (median \(=-\) 0.017).
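The synthetic-rate construction behind these figures can be sketched as follows. Here `tax_code_t` and `tax_code_t2` are hypothetical stand-ins for Taxsim evaluated under the period-t and period-(t+2) codes, and the flat-rate schedules are purely illustrative:

```python
def synthetic_change(characteristics, tax_code_t, tax_code_t2):
    """Gruber-Saez style instrument: hold period-t characteristics fixed
    (giving set to 0) and swap in the t+2 tax code, so any difference
    tau_b - tau_s reflects only changes in the tax code itself."""
    tau_b = tax_code_t(characteristics, giving=0.0)
    tau_s = tax_code_t2(characteristics, giving=0.0)
    return tau_b - tau_s

# Toy codes: a flat 25% rate in t, cut to 22% by t+2.
code_t  = lambda x, giving: 0.25
code_t2 = lambda x, giving: 0.22
print(round(synthetic_change({}, code_t, code_t2), 2))  # -> 0.03
```

Since the household's characteristics are held fixed across the two evaluations, behavioral responses cannot generate a nonzero instrument value; only legislated changes can.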
The first instrumental variable we consider for \(\log (P_{it})\) is the synthetic change in the marginal tax rate (\(\tau _{it}^{b}-\tau {}_{it}^{s}\)) à la Gruber and Saez (2002). The correlation between \(\tau _{it}^{b}-\tau {}_{it}^{s}\) and \(\Delta \text {log}(P_{it})\) is, however, small (\(\rho =-\,0.067\)), as the majority of the variation in \(\Delta \text {log}(P_{it})\) (about 70%) arises from changes in itemization status. The exogenous change in marginal tax rates accounts for only 1.7% of the variation in \(\Delta \text {log}(P_{it})\).
Our second instrument is \(\Delta \tau _{it}^{b}\) which is excludable as the tax rate where \(D=0\) and the tax rate calculated by setting i's giving in t at 1% of median household income are unrelated to the household level of donation conditional on our set of controls. This implicit assumption is frequently relied upon in the literature for identification. The correlation between \(\Delta \tau _{it}^{b}\) and \(\Delta \text {log} (P_{it})\) is 0.341 and about 10% of the variation in \(\Delta \text {log}(P_{it})\) is explained by variation in \(\Delta \tau _{it}^{b}\).28
The primary results of our paper are presented in Table 3.29 We estimate Eq. (2) including logged net taxable income, logged non-donation deductible expenditures (sum of mortgage interest, state taxes paid, medical expenditure and property tax paid plus $1), logged age of the household head, the number of dependent children in the household as well as dummies for male household heads, being married, highest degree earned and home ownership.30 All estimated models control for state and year fixed effects.31
Table 3 Estimates of the price elasticity of giving. Columns: (1) standard model; (2) 2SLS with \(\tau _{it}^{s}-\tau _{it}^{b}\); (3) 2SLS with \(\Delta \tau _{it}^{b}\); (4) itemizer model. Rows report the coefficients on \(\Delta \text {log}P^{b}\) (− 1.237*** in column (1)), \(\Delta \)itemizer (0.437*** in column (4)) and \(\Delta \)log net income (0.122**), along with \(R^2\), the test that the 2SLS estimator satisfies the identification condition and the test of \(H_{0}:\beta _{\Delta \text {log}P^{b}}\le -1\). [Remaining cell values were lost in extraction; the point estimates are quoted in the surrounding text.]

Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as instruments, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered at the household level. The penultimate row shows the p value from the first-stage F test that the identification condition holds. The tests reported in the last row are one-sided t tests that giving is price elastic (\(\le -1\)) against the alternative hypothesis that it is price inelastic. Stars indicate statistical significance at the ***1, **5 and *10% levels
Column (1) presents results from OLS-FD estimation of Eq. (2), an estimate of the price elasticity of the average taxpayer. The estimated elasticity is − 1.24 (95% confidence interval − 1.59 to − 0.88). This result is closely in line with those surveyed in Peloza and Steel (2005) and Batina and Ihori (2010) and with more recent work also using the PSID (Brown et al. 2012; Yöruk 2010, 2013; Brown et al. 2015; Zampelli and Yen 2017). Note that the elasticities reported here are total elasticities, the measure most relevant to determining the efficiency of the tax incentive for giving, not intensive-margin elasticities as reported in some papers (e.g., McClelland and Kokoski 1994).
Though we exclude endogenous itemizers and construct the price in line with Auten et al. (2002) to address the two long-recognized sources of endogeneity in \(\tau \), the estimate in column (1) still derives from an estimator with a downward bias from the inclusion of non-itemizers (Theorem 1). To address this, we instrument for price using the approach outlined in Sect. 3.1.
Column (2) provides results applying the 2SLS-FD estimator to Eq. (2) using the 'synthetic' change in the marginal tax rate, \(\left( \tau _{it}^{b}-\tau {}_{it}^{s}\right) \), as an instrument for \(\Delta \text {log}\left( P_{it}^{b}\right) \). Though the correlation between \(\left( \tau _{it}^{b}-\tau _{it}^{s}\right) \) and \(\Delta \text {log}\left( P_{it}^{b}\right) \) is small, there is strong evidence that the instrument satisfies the identification condition. The point estimate of − 2.54 is, however, very imprecisely estimated (95% confidence interval − 6.33 to 1.26).
Column (3) proceeds similarly to column (2), now using \(\Delta \tau _{it}^{b}\) as an instrument for \(\Delta \text {log}\left( P_{it}^{b}\right) \).32 The point estimate is closer to zero than in column (2), with a corresponding reduction in the standard error, though the confidence interval is still quite wide and there is not sufficient evidence to rule out the hypothesis that giving is price elastic (95% confidence interval − 1.33 to 0.52).
The scope of inference on the true price elasticity in columns (2) and (3) is limited since the exogenous variation in the price from the instruments is small. Consequently, t tests of the null hypotheses that \(\beta =0\) and \(\beta \le -1\) both have low power and there is little of economic interest that we can draw from these results.
There are also other issues with inference based on columns (2) and (3). Our interest is in estimating the price elasticity of the average taxpayer, but these models estimate the elasticity from variation in the price from exogenous changes in the marginal tax rate, the local average treatment effect (LATE). This is not the same as the effect of a change in the price of giving over the whole population, i.e., the average treatment effect, the parameter of interest.
Moreover, identifying the LATE requires that the instrument affects the endogenous variable in the same direction for everyone (i.e., the monotonicity condition holds). This condition may not hold in our setting as we may have 'defiers,' i.e., people for whom the realized change in the price is the opposite of the change predicted by the instrument. To understand 'defiers' in our setting consider the case of an exogenous increase in marginal tax rates. For continuing and start itemizers this will lower the price of giving. However, for stop itemizers the price will increase in spite of the exogenous increase in the marginal tax rates, i.e., they are 'defiers'. Such 'defiers' (and their converse) make up about 12% of our sample. Violations of the monotonicity assumption (Imbens and Angrist 1994) imply the 2SLS estimator does not necessarily estimate the LATE.
Column (4) presents results for the OLS-FD estimator of the itemizer model in Eq. (6). As shown in Theorem 2, this specification yields a consistent estimator of \(\beta \) (the average treatment effect) when \(\bar{\tau }_{1}-\bar{\tau }_{-1}=0\), and we find evidence that this restriction holds (p value \(=\) 0.797) with a sample estimate of \(\bar{\tau }_1-\bar{\tau }_{-1} \) of − 0.001. The point estimate of the price elasticity in column (4), − 0.08 (95% confidence interval − 0.58 to 0.42), is very close to and not significantly different from 0.33
This result provides strong evidence that the true price response of the average taxpayer is inelastic. Given the strong evidence that the OLS-FD estimator in column (4) is consistent, with a standard error about one half of those from the estimators in columns (2) and (3), we conclude that, contrary to what the 2SLS-FD estimates might suggest, giving is not price elastic. This finding is also in contrast to the general findings of many previous studies, though it is consistent with the more recent work of Hungerman and Ottoni-Wilhelm (2016).
As noted above, consistency of the OLS-FD estimator in the itemizer model implies that we can consistently estimate the size of the bias in the estimates obtained from the standard model via the difference between the estimated elasticities in columns (1) and (4) of Table 3 which is \(-1.16\). This sizeable bias, about the same size as the average estimated price elasticity from survey data, could explain why such strong price responses have been found in the literature using survey data.
4.1 The extensive margin
Recent work (Hungerman and Ottoni-Wilhelm 2016; Almunia et al. 2017) focuses greater attention on the impact of tax incentives at the extensive margin, i.e., the decision to give any nonzero amount. We estimate the effect of the price of giving on the decision to donate using a linear probability model in first differences and report results in Table 4.34 While we do not formally derive the bias in the models considered in Table 4, the intuition for the bias is the same as in Theorem 1 for Eq. (2): itemization status is a function of whether or not one gives, so the price is endogenous.
Table 4 The price effect at the extensive margin. Columns follow Table 3: (1) standard model; (2) 2SLS with \(\tau _{it}^{s}-\tau _{it}^{b}\); (3) 2SLS with \(\Delta \tau _{it}^{b}\); (4) itemizer model. [Cell values were lost in extraction; one surviving entry is 0.018*.]

Results in column (1) are obtained from OLS-FD estimation of Eq. (donlit111). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as instruments, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered at the household level. The penultimate row shows the p value from the first-stage F test that the identification condition holds. The tests reported in the last row are one-sided t tests that the price effect is elastic (\(\le -1\)) against the alternative hypothesis that it is price inelastic. Stars indicate statistical significance at the ***1, **5 and *10% levels
The pattern of the results is similar to those in Table 3. The lack of evidence for an effect at the extensive margin in column (4) in Table 4 is consistent with the findings in both Hungerman and Ottoni-Wilhelm (2016) who also consider the American context, and with Almunia et al. (2017) who study tax incentives for giving in the UK and find an extensive-margin elasticity of about − 0.1.
The above results suggest that the average taxpayer is not responsive to changes in the price of giving, consistent with Clotfelter's observation. To test the robustness of the results in Table 3 to mis-specification, we follow the good practice outlined in Athey and Imbens (2015) and re-estimate Eqs. (2) and (6) under various specifications and estimation samples. We present the results of these robustness checks in Appendix D. In summary, the results presented in Table 3, and the estimated size of the bias therefrom, remain stable across the various models considering changes to the sub-sample used in estimation, nonlinearities in income, different specifications of the dependent variable and the exclusion of other itemizable expenditures as a control.
4.2 Testing for nonlinearities in the price effect
Note that the estimate of \(\gamma \), the coefficient on \(\Delta I_{it}\), in Table 3 suggests that (conditional on \(\Delta X_{it}\)) average log donations of start and stop itemizers relative to non-switchers are \(+\,0.44\) and \(-\,0.44\), respectively, which corresponds with the intuition in Sect. 2. At first sight, this could be interpreted as the donors' response to the price change from the change in itemization status, and hence part of the true price effect. However, by the discussion in Sect. 2 we know that \(\gamma \) must be greater than zero and reflects the response to endogenous price changes of switchers, not purely a true price effect. It may be that the price response for switchers differs from that of non-switchers; in this case \(\gamma \) may indeed pick up some genuine responsiveness of donations to changes in the price, and we may overestimate the bias.
While controlling for itemization status allows consistent estimation of the price elasticity of giving for the average taxpayer, as seen above, further complications arise if there are other problems with the standard specification (Eq. (2)). Another key restriction of Eq. (2) is that the price effect is linear in \(\Delta \text {log}\left( P_{it}\right) \) and is the same for switchers and continuing itemizers. However, if the average response (ceteris paribus) to a, say, 30% price drop is more than 10 times the change from a 3% price drop, then the intercept would shift for switchers even absent the bias in the standard model. In this case, part of \(\gamma \) reflects endogenous movement in \(\Delta \text {log}\left( P_{it}\right) \) and part will pick up a price response.
There are economic reasons to think the response to a change in P coming from a change in itemization status may differ from the response to a change in P from changes in the marginal tax rate. Such an 'itemization effect' was posited early in the literature (Boskin and Feldstein 1977). Dye (1978) points out that taxpayers are more likely to know their itemization status than their marginal tax rate. The change induced in P by a change in itemization status is large and thus likely to be more salient, whereas changes in the marginal tax rate can be very small. Dye (1978) estimates a specification very similar to the itemizer specification we study. He, like us, finds that itemization status is a highly significant determinant of giving. However, Dye misinterprets this estimated effect, claiming that the identified price effect in the literature is really an itemization effect, failing to attribute any of the estimated effect to the bias demonstrated above.35
Caution must therefore be taken in how we interpret \(\gamma \) and \(\beta \) in the presence of omitted nonlinearities in the price effect. When changes in itemization status are controlled for, the price response we estimate (\(\beta \)) is the average price response to changes in the marginal tax rate, which are quite small. If there are strong nonlinearities, we cannot infer that this estimated elasticity reflects the response to larger changes in price such as those coming from changes in itemization status. We consider the possibility of an itemization effect and more general nonlinearities in the effect of \(\Delta \text {log}\left( P_{it}\right) \) on \(\Delta \text {log}\left( D_{it}\right) \) in our model, with corresponding results presented in Table 5.
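The nonlinear price terms examined here amount to adding constructed regressors to the first-difference specification. A sketch of how those regressors can be built (variable names are our own; the 0.15 cutoff mirrors the large-change indicator reported in Table 5, whose cutoffs correspond to upper quantiles of the \(\Delta \text {log}P\) distribution):

```python
def nonlinear_price_terms(dlogP, is_switcher, cutoff=0.15):
    """Build the extra regressors used to test for nonlinear price effects:
    a quadratic term, a switcher interaction and a large-change interaction."""
    return {
        "dlogP":            list(dlogP),
        "dlogP_sq":         [p * p for p in dlogP],
        "switcher_x_dlogP": [s * p for s, p in zip(is_switcher, dlogP)],
        "large_x_dlogP":    [p if abs(p) > cutoff else 0.0 for p in dlogP],
    }

# Three observations: a start itemizer with a large price fall and two
# continuing itemizers with small rate-driven changes.
terms = nonlinear_price_terms([-0.26, 0.03, -0.04], [1, 0, 0])
print(terms["large_x_dlogP"])  # -> [-0.26, 0.0, 0.0]
```

As the example suggests, the switcher and large-change interactions load almost entirely on the same observations, which is the source of the multicollinearity discussed below.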
Table 5 Nonlinear effect of \(\Delta \text {log}\left( P_{it}\right) \). Columns: (1) switcher interaction; (2) quadratic; (3)–(5) interactions with indicators for large price changes (e.g., \(|\Delta \text {log}P|>0.15\)). Rows report the coefficients on \(\Delta \text {log}P\), \(\Delta \text {log}P^2\), Switcher\(\times \Delta \text {log}P\) and \(\Delta \text {log}P\times \)1(\(|\Delta \text {log}P|>0.15\)), followed by one-sided hypothesis tests of \(\beta _{\Delta \text {log}P^{b}}+\beta _{\text {Switcher}\times \Delta \text {log}P}\le -1\), \(\beta _{\Delta \text {log}P^{b}}+2\beta _{\Delta \text {log}P^2}E[{\Delta \text {log}P^{b}}]\le -1\) and \(\beta _{\Delta \text {log}P^{b}}+\beta _{\Delta \text {log}P\times 1(|\Delta \text {log}P|>0.15)}\le -1\). [Cell values were lost in extraction.] All standard errors are clustered at the household level. The hypothesis tests reported in the bottom rows are one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that donations are price inelastic. Stars indicate statistical significance at the ***1, **5 and *10% levels
In column (1) we re-estimate the itemizer model allowing the price elasticity to differ for start or stop itemizers ('switchers'). The estimated price elasticity for switchers (\(\hat{\beta }=-\,0.205\), 95% confidence interval − 1.11 to 0.70) does not significantly differ from that of non-switchers or from 0. Note, however, that the high correlation (multicollinearity) between \(\Delta I\) and \(Switcher\times \Delta \text {log}P\) (\(\hat{\rho }=-\,0.936\)) yields a large standard error, making it difficult to identify the price elasticity for switchers and, similarly, to estimate precisely the coefficients on the nonlinear terms in columns (2)–(5).
In column (2), we include the square of \(\Delta \text {log}(P_{it})\) in Eq. (2) but find little evidence of a quadratic price specification. We then interact \(\Delta \text {log}(P_{it})\) with dummies taking a value of 1 if \(\Delta \text {log}(P_{it})\) is in the top quartile of the \(\Delta \text {log}(P_{it})\) distribution (column (3)), in the top decile (column (4)) or in the top percentile (column (5)). In columns (3), (4) and (5), the coefficient on the interaction term is close to 0 and statistically insignificant at conventional levels.
In the last rows of Table 5, we present results from one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that donations are price inelastic for those facing larger price changes. For column (1), this corresponds to a test that the price response of switchers is inelastic, and for columns (3)–(5), a test for those experiencing price changes in the top quartile, decile and percentile, respectively, as defined above. We find strong evidence to reject the hypothesis that giving is price elastic for larger price changes in (3)–(5).36 It is less clear whether the price response of switchers is elastic (\(p=0.084\)).
A key feature here is the stability of the coefficient on \(\Delta \)itemizer over the different nonlinear specifications of the price. If there were a strong itemization or nonlinear price effect, we would expect the estimate of \(\gamma \) to fall. However, we find stable estimates of \(\gamma \) around 0.43 even allowing for different possible nonlinearities in \(\Delta \text {log}(P_{it})\). As such, we conclude there is little evidence of a nonlinear price effect, and hence little evidence that we have overstated the bias found in Table 3.
We next turn to potential heterogeneity in the price elasticity of giving over income. This may be interesting in its own right, but we consider it in light of our results above, which suggest the average taxpayer is not responsive to changes in the price of giving. However, studies using samples of (wealthier than average) itemizers consistently find evidence that itemizers are indeed responsive. An interesting question is then whether those people are responsive because they itemize or because they are wealthier.
4.3 Heterogeneity in the price elasticity over income
Studies using tax-filer data do not suffer from the bias derived in Theorem 1. An example of this kind of study is Bakija and Heim (2011), who find evidence of a price elasticity of around − 1. Itemizers are, on average, higher income earners than non-itemizers. For example, the sample of itemizers in Bakija and Heim (2011) has a mean income of about $1 million. Given that itemizers in this sample are on average extremely wealthy, we cannot easily discern whether the price effect estimated in Bakija and Heim, and elsewhere (e.g., Randolph 1995; Auten et al. 2002), reflects the responsiveness of the average itemizer.
Some researchers (e.g., Feldstein and Taylor 1976; Reece and Zieschang 1985) have found the economically counterintuitive result that the price elasticity is largest for those with lowest incomes. Peloza and Steel (2005) find that the price elasticities for higher income donors seem to be slightly greater than, though not significantly different from, those for lower income donors. Bakija and Heim (2011) find little evidence the magnitude of the price effect varies with income, though their sample is disproportionately wealthy even for tax-filer data.
In Table 6, we present some descriptive statistics for taxable income decile groups.37 Note that while the probability of being a continuing itemizer increases monotonically with income, the probability of switching itemization status rises with income and then falls. We return to this feature below. In column (6), we show the results of the test of the restriction outlined in Theorem 2. The restriction holds for every decile group (at the 10% level). In the analysis that follows, we combine the bottom two decile groups due to the lack of price variation among the lowest income earners. As can be seen in Table 6, the variance of \(\Delta \text {log}(P_{it})\) at the bottom of the income distribution is about 1/4 of that at the top, making identification of the price effect difficult for these relatively poorer households.38
Table 6 Descriptive statistics by income decile group. Columns report mean income ($'000) separately for non-itemizers and itemizers, P[Switcher], P[Cont. itemizer], Var[\(\Delta \text {log}P\)] and the test of \(H_{0}: \bar{\tau }_{1}=\bar{\tau }_{-1}\). [Cell values were lost in extraction.] These income groups, but combining the bottom two decile groups, form the basis of Figs. 2, 3 and 4
In Fig. 2, we plot the estimated price elasticities from both the standard model and the itemizer model across these income groups.
Fig. 2 Variation in estimated price elasticities over income. Notes: The markers plot \(\hat{\beta }_\mathrm{FD}\) (triangles) and \(\hat{\beta }_\mathrm{FD}^{I}\) (circles) for each income group (bottom quintile, upper eight deciles). Gray markers are statistically insignificant at the 10% level, and black markers are significant at the 10% level
Black and gray markers indicate that we reject and fail to reject, respectively, the null hypothesis that \(\beta =0\) (at the 10% level) against the alternative that \(\beta <0\) within the various income groups. Estimates from the standard model are triangles, and the circles are estimates from the itemizer model. With the standard model, we find large and significant price elasticities for the bottom quintile and the next five decile groups as well as for the top decile group. The estimated price elasticities for the eighth and ninth decile groups are close to, and not statistically different from, 0. These results suggest a nonlinear relationship between the price responsiveness of taxpayers and their income, with lower/middle income taxpayers as well as the wealthiest taxpayers being most sensitive to changes in the price of giving. In contrast, the results from the itemizer specification suggest that the bottom 90% of the income distribution is not sensitive to changes in the price of giving. We do find some evidence in the itemizer model that the highest income earners are sensitive, as the estimated elasticity for the top decile group (p value \(=\) 0.094) is statistically significant. Note that the estimate from the itemizer model lies below that of the standard model save for the top decile group, where they are virtually equivalent.
We fail to reject the required restriction for the consistency of the itemizer model, i.e., \(\bar{\tau }_{1}=\bar{\tau }_{-1}\), for every decile group at the 10% level (see the last column of Table 6). As such, by Theorems 1 and 2 (now across each decile group), the difference between the estimated price elasticities in each model is a consistent estimator of the bias in the price elasticity within each income decile from the standard model. The mean of the estimated biases over the decile groups is − 1.06 and is largest (in absolute value) for the middle deciles, where the probability of switching status is highest.
Below we plot the size of the estimated bias (\(\hat{\beta }_\mathrm{FD}-\hat{\beta }^I_\mathrm{FD}\)) against the probability of switching within each income decile group.
Fig. 3 Estimated bias plotted against the probability of switching itemization status across income decile groups. Notes: The markers plot \(\hat{\beta }_\mathrm{FD}-\hat{\beta }_\mathrm{FD}^{I}\) by the probability of switching itemization status in each income group (bottom quintile, upper eight deciles). The line is the linear fit to these points
By Theorem 1, the size of the bias is increasing in \(p_{1},p_{-1}\) and decreasing in Var(\(\Delta \text {log}P)\) for given \(\xi _{1},\xi _{-1}\), which are unobservable (though we know both are negative by Theorem 1). If \(\xi _{1},\xi _{-1}\) were roughly equal across income deciles, or did not move in any systematic way, we should expect to see some negative (though not necessarily linear) relationship between the bias in the OLS estimator in the standard model and the probability of switching across income deciles. We see some support for this in Fig. 3, which shows the magnitude of the estimated bias by the probability of switching. The correlation between the probability of switching status and the size of the bias is \(-\,0.44\).
It is difficult to conceive of an economic rationale for the standard model's finding that lower income households are more responsive to tax incentives than richer households. The results and discussion in this section, utilizing Theorems 1 and 2, provide some evidence that this finding is at least in part due to a bias from utilizing endogenous price variation from switching itemization status.
While we find evidence that the average taxpayer is not sensitive to changes in the price of giving, it remains the case that previous studies using tax-filer data have regularly found price elasticities close to − 1. We find evidence that the average higher income earner also exhibits sensitivity to changes in the price of giving, with price elasticities of around − 1 for the top decile group. However, higher income people are also more likely to itemize, as can be seen in Table 6. An obvious question is then whether the significant effects found here for the average high earner and the significant effects found in, for example, Bakija and Heim (2011) are driven by people being itemizers or by their being higher income earners. As noted above, estimates obtained from tax-filer data are consistent and do not suffer from the bias derived in Theorem 1. To test this, we estimate our model for continuing itemizers (equivalent to using tax-filer data) over different income decile groups and present results in Fig. 4.
Price elasticity by income group for continuing itemizers. Notes Each marker is the estimated price elasticity of giving for the group (bottom quintile, upper eight deciles). The whiskers show the 95% confidence interval around each estimate
Note from Table 6 that non-itemizers have lower average within-decile-group income than itemizers (columns 1 and 2).
We find evidence that the highest earning continuing itemizers, those in the top decile group, do exhibit a rather substantial sensitivity to changes in the price of giving with elasticities around − 2, though we cannot reject a unitary price elasticity. Results are similar for itemizers among the wealthiest 5% (\(\hat{\beta }=-1.99\), se \(=\) 0.81). However, continuing itemizers at lower levels of income do not seem to be sensitive to changes in the price of giving. We estimate the model for all continuing itemizers below the top income decile together and obtain an estimated price elasticity of − 0.25 (95% confidence interval − 0.97 to 0.46), which we find to be statistically different from the estimated price elasticity for continuing itemizers in the top decile of income (p value \(=\) 0.047). These results, taken together with those in Fig. 2, suggest that it is being a higher earner, not simply being an itemizer, that corresponds to greater price sensitivity: the average person (not just the average itemizer) in the top income decile is sensitive to price changes, while we find no evidence that lower income itemizers are.
5 Conclusions
Many studies estimating the price elasticity of donations use survey data, as it records the donation behavior of the general population, including price variation from changes in itemization status not often seen in tax-filer data; this allows estimation of the response of the average taxpayer (and not the average, wealthier itemizer). In this paper, we show that the estimator of the price elasticity utilizing variation in price from changes in itemization status (largely in survey data) is severely biased downwards, even when endogenous itemizers are omitted as is done in the literature.
We derive the form of the bias of the OLS-FD estimator in the standard model and show a downward bias when agents switch itemization status. It is shown that the approach of instrumenting the change in price with exogenous changes in the marginal tax rate, though identified, produces standard errors so large as to make economically meaningful inference difficult. We improve inference by developing an estimator which has no asymptotic bias and is more efficient than an instrumental variable estimator. We do this by deriving the bias of the OLS estimator of the price elasticity in a model which controls for the change in itemization status (a measurable source of endogeneity in the price) and show that it is zero under a testable restriction which is strongly supported by the data. The standard errors of the price elasticity in this estimator are also under half of those from the 2SLS estimators we consider.
Empirically, we find that the consistent estimates of the price elasticity for the average taxpayer obtained using the itemizer model are price inelastic. However, even in the consistent and efficient OLS estimator, the standard errors are fairly large, such that the question of whether the price elasticity is closer to 0 or − 1 remains open, though the lower bound of the 95% confidence interval we estimate is − 0.59. The bias in the estimator obtained from the standard model in the literature is large, approximately of order − 1. This finding is robust to numerous variations in specification and sample. Our results suggest that Clotfelter may be right in suggesting that the average taxpayer is unlikely to be responsive to the price of giving.
Estimates of the price elasticity in the standard model across different income levels show the size of price elasticity is generally decreasing (in absolute value) in income. We provide evidence that this perhaps surprising result is at least in part explained by the bias in the estimator of the price elasticity in the standard literature model. Correcting for this bias with the itemizer model, we no longer find evidence that lower income households respond most to tax incentives with estimates of the price elasticities in each income decile being closer to, and not significantly different from, 0. We do find evidence that higher income households are indeed responsive. This result differs from the findings in the literature using tax-filer data as our result is for the average taxpayer or average higher income taxpayer, whereas results from tax-filer data are for the average itemizer. We find that it is the higher income people, who are also more likely to be itemizers, that are sensitive to changes in the price of giving. Itemizers with incomes in the bottom 90% of the income distribution do not appear to respond to changes in the price of giving. This suggests it is the fact that people are higher income that corresponds to them being sensitive to changes in the price, not the fact that they itemize.
Considering these results together with the existing work using tax-filer data suggests that a rethinking of the tax deductibility of donations may be called for. It is well established in the literature that itemizing households are sensitive to changes in the price of giving (e.g., Bakija and Heim 2011). Lowry (2014) shows that taxpayers claimed $134.5 billion of charitable deductions in 2010, 53% of which came from taxpayers with income below $250,000, roughly the same income as the top decile in our data. Our results suggest the cost of tens of billions of dollars in lost tax revenue is not producing the benefit found in the literature, in the form of increased charitable donations, for the average taxpayer, and in fact for the bottom 90% of the income distribution. As such, and given the evidence presented here, the government may consider amending the charitable deduction for those households below the top marginal tax bracket or revising the subsidy in line with Saez (2004).
One exception to this rule occurred between 1982 and 1986, when non-itemizers could deduct some or all of their donations.
According to IRS records, the mean income of taxpayers who itemized their tax returns in 2013 was $147,938 compared to $48,050 for non-itemizers.
A similar proportion is found in our data.
In Batina and Ihori (2010), another survey of this literature, the mean price elasticity for tax-filer studies is − 1.25 versus − 1.62 in studies using survey data. A similar pattern is found in Steinberg (1990) which surveys 24 early studies. More recently, Bakija and Heim (2011) find elasticities very close to − 1 using a panel of tax-filer data and Yöruk (2010, 2013), Reinstein (2011), Brown et al. (2012) and Brown et al. (2015) generally find price elasticities in excess, sometimes substantially so, of − 1 using the same survey panel data we use. In their working paper, Andreoni et al. (1999) use a Gallup survey of household giving and find price elasticities ranging from − 1.73 to − 3.35, magnitudes that they note are 'consistent with the body of literature' (p. 11). More recently, Yöruk (2013, p. 1708) notes that 'most estimates in the literature suggest that a 1% increase in the tax-price of giving is associated with more than 1% decrease in the amount of charitable gifts'.
Brown (1987) also points out this result, but ultimately concludes this finding arises from the failure to estimate the price using a Tobit type estimator.
Tax deductibility of charitable donations is treasury efficient when the foregone tax revenue (and thus the decrease in the public provision of a public good) is exceeded by the increase in aggregate giving (the private provision of the public good). Conventionally, the threshold for efficiency has been a price elasticity of at least − 1 (Feldstein and Clotfelter 1976). However, some have argued that the threshold ought to be larger (in absolute value) due to concerns about tax evasion (Slemrod 1988), while others have argued that the deduction might be efficient even at price elasticities smaller than − 1 (Roberts 1984).
Some of these studies do not exclude the endogenous itemizers (e.g., Brown and Lankford 1992; Bradley et al. 2005; Yöruk 2010) meaning estimated price elasticities will suffer from both the known bias from endogenous itemizers and the bias outlined here from endogenous non-itemizers. Gruber (2004) and Reinstein (2011) impute itemization status, though such an approach can introduce nonclassical measurement error. In neither case, however, is the main aim of the study the consistent estimation of the price elasticity of giving.
The same argument holds in reverse for those who start itemizing.
As is conventional in the literature, donations (\(D_{it}\)) are measured as a transformation of \(D_{it}^{*}\) which is strictly greater than zero so that \(\log (D_{it})\) exists and is nonnegative. In Appendix D, Table 11, we test the sensitivity of our results to other transformations considered in the literature (e.g., the inverse hyperbolic sine transformation or \(D_{it}=D^*_{it}+10\)).
Note that itemization status is not assigned, but rather people must choose to itemize themselves and some people may not itemize despite their deductible expenditure exceeding the standard deduction. One possible reason for this was found in Benzarti (2015) who shows that there is a cost of itemizing in terms of effort that amounts to about $644 on average though with substantial heterogeneity around that figure. In this paper, we use actual itemization status as reported by the surveyed household.
The FD estimator is used to simplify the exposition of the issue which will also occur more generally when using within group (WG) type estimators.
Note that \(S_{it}-E_{it}\ge 1\) when \(I_{it}=0\) since \(D_{it}\le S_{it}-E_{it}\) where \(S_{it}=S_{it}^{*}+1\) and \(S_{it}^{*}\ge E_{it}\) by definition when \(I_{it}=1\) and \(D_{it}\ge 1\) as \(D_{it}=D_{it}^{*}+1\).
This assumption is made without loss of generality as we can make all the arguments below partialling out \(X_{it}\) which we assume is exogenous. This method is used in the proof of Theorem 2 below.
In practice, a constant would be included in (5) so that the OLS-FD estimator would be demeaned ensuring \(E[u_{it}]=0\). All the arguments in the proof of Theorem 1 will go through unchanged on the variables demeaned, and this restriction is enforced for simplicity to clarify the exposition of the result.
Extensions to non-i.i.d. data hold straightforwardly utilizing more general weak law of large numbers results allowing quite general forms of heteroskedasticity and dependence in the data.
Theorem 1 can be generalized to much weaker assumptions on the correlation of \(u_{it}\) and \(\tau _{it}\), though we wish to highlight that even when \(\tau _{it}\) is exogenous the change in price will not be, as changes in itemization status are endogenous.
Note this problem as outlined here is unique to the US tax system though the literature on tax incentives for charitable giving extends to other countries. For example, Fack and Landais (2010) use data from France, Bönke et al. (2013) use data from Germany and Scharf and Smith (2010) and Almunia et al. (2017) use UK data. Each study contends with different issues surrounding the estimation of the price elasticity given the differently structured tax incentives for giving in each country. Our results here may be of limited use in applications to similar studies in a different setting.
Another possible benefit to OLS versus a 2SLS approach is that a 2SLS estimator based on instruments with a small correlation with the endogenous variable can cause the normal approximation to the distribution of the 2SLS estimator to be poor, even in large samples, e.g., Hansen et al. (1996), Staiger and Stock (1997). Hence, the OLS estimator may provide more accurate inference than our 2SLS-FD estimators.
Note that we do not posit that the OLS-FD estimator in this auxiliary regression provides a consistent estimator of \(\beta \) by this argument alone. Equation (6) includes two endogenous variables, both \(\Delta \log (P_{it})\) and \(\Delta I_{it}\), where we derive the bias in the estimate of \(\beta \) in this estimator in Theorem 2 below. We can show this bias is zero under an intuitive and testable restriction.
If there is an itemization effect, then the standard model is fundamentally misspecified, even aside from the bias in Theorem 1. Identifying this itemization effect would prove problematic, as we know \(\gamma \) would be a biased estimate of it since it in part reflects the mean differences in \(\Delta \log (D_{it})\) arising purely from the definition of different types of itemizers. We consider the possibility of an itemization effect in Sect. 4.2.
A significant topic of interest in this area has been the timing of donations and the responsiveness to permanent and transitory changes in the price (e.g., Randolph 1995; Bakija and Heim 2011). Due to the biannual nature of our data, we do not consider this in our paper.
As we are using survey data, one might be concerned with measurement error in the donations variable. Wilhelm (2006, 2007) contends that the data collected in the COPPS module are of better quality than most household giving survey data given the experience of the PSID staff. Recent work in Gillitzer and Skov (2017) suggests that it is in tax data that the measurement error might be found, not survey data. Moreover, the measurement error that might be of concern is in donations. If that error is random with mean 0, then the precision of the estimates of the price elasticity will be reduced, but the estimator will not necessarily suffer from inconsistency or bias. In our case, one might reasonably argue that the error is in fact not centered at 0, as people may systematically over-report giving (this could be the case in both survey and tax records). In such a case, only the constant in our regression would be biased. If, moreover, the measurement error is constant over time within households, i.e., households consistently over- or under-report by the same proportion, then it will be washed out via the first differencing of the data. While measurement error can be a serious problem in tax data or survey data, we do not believe it to be prohibitively so in our analysis.
Deflated using the US Consumer Price Index: http://www.bls.gov/cpi/.
Self-reporting itemizers make up 48% of the sample. Our predicted itemization status gives an itemization rate of 53% and matches the declared itemization status in 78% of the cases. Our 'over-prediction' of itemization status is consistent with findings in Benzarti (2015) who shows taxpayers systematically forego savings they might accrue from itemizing in order to avoid the hassle of itemizing.
There is a smaller share of the sample (6.5%) who report themselves as itemizers, but whom we fail to predict as such. We include these households as exogenous itemizers. We have re-estimated all our models excluding them, and results are qualitatively the same.
Replacing \(P_{it}^a\) with \(P_{it}^b\) in our regression may lead to measurement error. Instead, some (e.g., Yöruk 2010, 2013; Brown et al. 2012, 2015) have used the price calculated using the first-dollar marginal tax rate as an instrument for \(P_{it}^{a}\) to address the endogeneity identified by Auten et al. (2002). Given the very high correlation between \(P_{it}^{a}\) and first-dollar price in our data, we find that the use of the first-dollar price as an instrument or as a proxy provides qualitatively similar results. It is important to note that such a 2SLS approach is valid in studies which exclude non-itemizers (e.g., Auten et al. 2002; Bakija and Heim 2011) as the bias caused by switching itemization status (Theorem 1) is not present in their sample. However, this is not a valid instrument for \(\Delta \log (P_{it}^{a})\) when switchers are included in the sample as \(\Delta \log (P_{it}^b)\) is a function of the switch in itemization status.
We calculate this as the sample proportion weighted mean of the implied elasticity for continuing itemizers facing a price increase and that of those continuing itemizers facing a price decrease.
A potential alternative is to use the price constructed with the 'synthetic' marginal tax rate as an instrument for \(P_{it}^{a}\). This approach has been effectively used in studies of tax-filer data (e.g., Bakija and Heim 2011). Though the change in 'synthetic price' is a function of the switch in itemization status, this synthetic change in the price (unlike the synthetic change in the marginal tax rate) would not be a valid instrument in our setting which includes switchers.
We present and briefly discuss full regression results, including estimates on the parameters of control variables in Appendix C.
In general, non-donation itemizable expenditures (E) are not measured in survey data, and even when information on E is available, as is the case with the PSID, it has not been, to our knowledge, included in models of donations in the literature to date. Such expenditures will be correlated with price via itemization status and likely correlated with donations since changes in, say, medical expenditures may affect one's donation amount. As such, omitting other expenses will result in a biased estimator of the price elasticity. Including them, however, can be problematic as donations and non-donation deductible expenditures may be co-determined. We consider this issue further and check the robustness of our results controlling for expenditures in Appendix D.
Note that conventionally models with a dependent variable distributed with a mass point at 0 might be treated as censored and thus require sophisticated econometric techniques (e.g., McClelland and Kokoski 1994 and a double hurdle model in Huck and Rasul 2008). However, such a mass point does not necessarily indicate censoring. In our case, it is not that we do not observe donations below a particular level; rather, the donation of zero is part of the choice set of the (non)-donor. Angrist and Pischke (2009) note that despite the convention, the use of nonlinear models like Tobits when a bound is not indicative of censoring is not appropriate. We therefore use OLS to estimate the effect of changes in the price on the mean of the donations distribution including zero donations. Results were qualitatively the same when using a correlated random effects Tobit (see Backus and Grant 2016).
We also estimated columns (2) and (3) including higher order polynomials of each instrument, though no meaningful increases in precision were obtained in either case.
Another issue stemming from measurement error in donations (see footnote 22) could be that it infects the price variable since the price variable is a function of taxable income which is a function of donations. This more 'classical' measurement error would produce a bias toward 0 in our estimator of the price elasticity. However, such a bias would be present in both the 'traditional' and 'itemizer' specifications, and it is not clear how the pattern of our results could be strictly the product of such a bias.
We also perform this estimation using a conditional logit, and results were qualitatively similar.
Despite featuring in some prominent early publications, the 'itemizer effect' has largely been ignored in the literature since, with Brown (1987) being an exception.
The test reported in the final row in column (2) evaluates the quadratic price response at the mean change in log price, and we find strong evidence to reject the null that the price response is elastic, similar to the case of Eq. (3) in Table 3.
To avoid losing observations that become singletons when the subsamples are defined, we calculate the mean net household income over the observed period and then estimate the model for different levels of mean household income \(\left( \bar{y_{i}}\right) \) rather than annual income \(\left( y_{it}\right) \).
Results are similar keeping the bottom two decile groups separate, though we find very large standard errors for the bottom decile group which wash out some of the features we are interested in showing in Fig. 3 below.
Note that a full and formal treatment of the simultaneous nature of the determination of E and D is beyond the scope of the current paper. The fact that our empirical results are not sensitive to the inclusion or exclusion of E suggests that concerns over endogeneity bias arising from the co-determination of E and D or the omission of E may be minor in practice.
We are grateful for comments from seminar attendees at the University of Cape Town, University of Pretoria, University of Manchester, University of Barcelona, CREST, PEUK 2016 and the European Economic Association Conference 2017 as well as James Banks, Matias Cortes, Manasa Patnam and Jacopo Mazza. This paper was previously circulated under the title 'Consistent Estimation of the Tax-Price Elasticity of Charitable Giving with Survey data'.
To simplify the proofs of Theorems 1 and 2, we make the assumption that \(\tau _{it}\) is independent of \(u_{it}\), which is slightly stronger than the assumption that \(\tau _{it}\) is strictly exogenous. The results do not hinge on this slight strengthening of the exogeneity assumption, but it simplifies the proofs and exposition.
Proof of Theorem 1
Define \(p_{1}=\mathcal {P}\{\Delta I_{it}=1\}\), \(p_{-1}=\mathcal {P}\{\Delta I_{it}=-1\}\), \(p_{0}=\mathcal {P}\{\Delta I_{it}=0\}\), \(\xi _{1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=1]\), \(\xi _{-1}=E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=-1]\). Under the i.i.d assumption then by the Khintchine Weak Law of Large Numbers (KWLLN)
$$\begin{aligned} \hat{\beta }_\mathrm{FD}\overset{p}{\rightarrow }\beta +\frac{E[u_{it}\Delta \log (P_{it})]}{E[\Delta \log (P_{it})^{2}]} \end{aligned}$$
where we now show that
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})]=p_{1}\xi _{1}+p_{-1}\xi _{-1} \end{aligned}$$
where both \(\xi _{1},\xi _{-1}<0\) which establishes the result.
We use the Law of Iterated Expectations (LIE) to rewrite \(E[u_{it}\Delta \log (P_{it})]\) as a weighted sum of the conditional expectations of \(u_{it}\Delta \log (P_{it})\) for the I1-I4 itemizer types defined in Sect. 2.
Firstly, note that when \(\Delta I_{it}=0\) and \(I_{it}=I_{i,t-1}=0\) (I4) then \(\Delta \log (P_{it})=0\) and for \(I_{i,t}=I_{i,t-1}=1\), \(\Delta \log (P_{it})=\Delta \log (1-\tau _{it})\) so
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=0]=E[u_{it}|I_{it}=I_{i,t-1}=1] E[\Delta \log (1-\tau _{it})|I_{it}=I_{i,t-1}=1]p_{0,1} \end{aligned}$$
as \(u_{it}\) is assumed independent of \(\Delta \log (1-\tau _{it})\) where \(p_{0,1}=\mathcal {P}\{I_{it}=I_{i,t-1}=1\}\) and
$$\begin{aligned} E[u_{it}|I_{it}=I_{i,t-1}=1]=E[u_{it}|E_{it}>S_{it},E_{i,t-1}>S_{i,t-1}]=E[u_{it}]=0 \end{aligned}$$
since \(\omega =0\). More generally when \(\omega \ne 0\) then the same result follows assuming \(E[u_{it}|E_{it}]=E[u_{it}]\) which could be achieved by controlling for (polynomials of) \(E_{it}\).
By the LIE utilizing \(E[u_{it}\Delta \log (P_{it})|\Delta I_{it}=0]=0\), we can re-express
$$\begin{aligned} E[u_{it}\Delta \log (P_{it})]= & {} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]p_{1}\nonumber \\&\quad -E[\log (1-\tau _{i,t-1})u_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
$$\begin{aligned}= & {} \xi _{1}p_{1}+\xi _{-1}p_{-1}. \end{aligned}$$
noting \(\Delta \log (P_{it})=\log (1-\tau _{it})\) for \(\Delta I_{it}=1\) and \(\Delta \log (P_{it})=-\log (1-\tau _{i,t-1})\) for \(\Delta I_{it}=-1\).
The event \(\Delta I_{it}=1\) (I2) is equivalent to \(E_{it}\ge S_{it}^*\) (itemizer at time \(t\)) and \(D_{i,t-1}\le S_{i,t-1}-E_{i,t-1}\) (non-itemizer at time \(t-1\)) so that
$$\begin{aligned} \Delta \log {D_{it}}\ge \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1}) \end{aligned}$$
where \(\Delta \log {D_{it}}=\beta \log (1-\tau _{it})+u_{it}\) (as \(\Delta \log (P_{it})=\log (1-\tau _{it})\)) so that
$$\begin{aligned} u_{it}\ge & {} \log (D_{it})-\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it}) \end{aligned}$$
$$\begin{aligned}\ge & {} -\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it}) \end{aligned}$$
where (8) follows as \(\log (D_{it})\ge 0\) as \(D_{it}=D_{it}^{*}+1\) where \(D_{it}^{*}\ge 0\). Define \(h_{it}:=-\log (S_{i,t-1}-E_{i,t-1})-\beta \log (1-\tau _{it})\) then
$$\begin{aligned} E[u_{it}|\Delta I_{it}=1]= & {} E[u_{it}|u_{it}\ge h_{it},E_{it}\ge S_{it}] \end{aligned}$$
$$\begin{aligned}\ge & {} E[u_{it}|u_{it}\ge h_{it}] \end{aligned}$$
$$\begin{aligned}> & {} 0 \end{aligned}$$
where (11) follows by (9) and noting \(E_{it}\) is mean independent of \(u_{it}\). The final inequality follows as \(E[u_{it}]=0\), where defining \(p_{11}=\mathcal {P}\{u_{it}\ge h_{it}\}\)
$$\begin{aligned} 0=E[u_{it}]=E[u_{it}|u_{it}\ge h_{it}]p_{11}+E[u_{it}|u_{it}\le h_{it}](1-p_{11}) \end{aligned}$$
where \(E[u_{it}|u_{it}\le h_{it}]<0\) since \(h_{it}\le 0\) (as \(\beta \le 0\) and \(S_{i,t-1}-E_{i,t-1}\ge 1\) when \(I_{i,t-1}=0\)) and \(\Pr \{h_{it}<0\}>0\), noting \(E[u_{it}|u_{it}\le h_{it}]=E[u_{it}|u_{it}\le h_{it}, h_{it}=0]\Pr \{h_{it}=0\} + E[u_{it}|u_{it}\le h_{it}, h_{it}<0]\Pr \{h_{it}<0\}\). Hence
$$\begin{aligned} E\left[ u_{it}|u_{it}\ge h_{it}\right] >0 \end{aligned}$$
follows from (13) noting that \(0<p_{11}<1\). Finally, since \(\log (1-\tau _{it})\le 0\) for all \(i,t\) and is strictly less than zero for some \(i,t\), then
$$\begin{aligned} E[\log (1-\tau _{it})|\Delta I_{it}=1]<0. \end{aligned}$$
By independence of \(\tau _{it}\) and \(u_{it}\)
$$\begin{aligned} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]=E[\log (1-\tau _{it})|\Delta I_{it}=1]E[u_{it}|\Delta I_{it}=1] \end{aligned}$$
where (14) and (16) imply
$$\begin{aligned} E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]\le E[\log (1-\tau _{it})|\Delta I_{it}=1]E[u_{it}|u_{it}\ge h_{it}] \end{aligned}$$
where together with the inequality in (15) implies
$$\begin{aligned} \xi _{1}:=E[\log (1-\tau _{it})u_{it}|\Delta I_{it}=1]<0. \end{aligned}$$
A similar argument holds in reverse for the second term on the RHS of (6) for \(\Delta I_{it}=-1\) where
$$\begin{aligned} \xi _{-1}:=-E[\log (1-\tau _{i,t-1})u_{it}|\Delta I_{it}=-1]<0. \end{aligned}$$
establishing the result.
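To illustrate the mechanism behind Theorem 1, the following Monte Carlo sketch is a deliberately stylized data-generating process, not the paper's model: all parameter values, the uniform tax-rate distribution and the simple itemization rule are illustrative assumptions. Households itemize when other deductible expenses plus (pre-subsidy) giving exceed a standard deduction, so switching is partly driven by the donation shock; the OLS-FD estimator on the full sample is then biased downwards, while the continuing-itemizer ('tax-filer') sample recovers the true elasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
beta = -0.5   # assumed true price elasticity (illustrative)
S = 6.0       # standard deduction threshold (arbitrary units)

E = rng.uniform(0.0, 5.0, N)    # other deductible expenses (fixed over time)
fe = rng.normal(0.0, 0.3, N)    # household fixed effect

def period(u):
    """One cross-section: itemization status, log price and log donations."""
    tau = rng.uniform(0.15, 0.35, N)        # exogenous marginal tax rate
    base_D = np.exp(1.0 + fe + u)           # pre-subsidy donation level
    itemize = E + base_D >= S               # itemize when deductions beat S
    logP = np.where(itemize, np.log(1.0 - tau), 0.0)
    logD = 1.0 + beta * logP + fe + u       # donations respond to the price
    return logD, logP, itemize

u1, u2 = rng.normal(0.0, 0.6, N), rng.normal(0.0, 0.6, N)
logD1, logP1, I1 = period(u1)
logD2, logP2, I2 = period(u2)
dy, dp = logD2 - logD1, logP2 - logP1

def ols_slope(y, x):
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (xc @ xc))

b_full = ols_slope(dy, dp)              # switchers included: biased downwards
cont = I1 & I2                          # continuing itemizers ('tax-filer' sample)
b_cont = ols_slope(dy[cont], dp[cont])  # no switching variation: consistent
print(f"true {beta:.2f}, full sample {b_full:.2f}, continuing itemizers {b_cont:.2f}")
```

The downward bias in `b_full` arises exactly as in the proof: switching in is more likely after a positive donation shock, pairing large \(\Delta \log (D_{it})\) with a negative price change.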
Proof of Theorem 2

We restate our itemizer specification (Eq. (2) in Sect. 2)
$$\begin{aligned} \Delta \log (D_{it})=\gamma \Delta I_{it}+\beta \Delta \log (P_{it})+\omega '\Delta X_{it}+e_{it} \end{aligned}$$
where \(X_{it}\) is a \(k\times 1\) vector of controls and \(u_{it}=e_{it}+\gamma \Delta {I}_{it}\). To show the result, decompose \(\Delta X_{it}\)
$$\begin{aligned} \Delta X_{it}=\Xi z_{it}+v_{it}^{\Delta X} \end{aligned}$$
where \(\Xi \) is a \(k\times 2\) matrix of OLS coefficients from regressing \(\Delta X_{it}\) on \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\), where by definition \(E[z_{it}v_{it}^{\Delta X'}]=0\). Plugging (21) into (20)
$$\begin{aligned} \Delta \log (D_{it})=\gamma ^{*}\Delta I_{it}+\beta ^{*}\Delta \log (P_{it})+\omega 'v_{it}^{\Delta X}+e_{it} \end{aligned}$$
where \(\gamma ^{*}=\gamma +\omega '\Xi _{1}\), \(\beta ^{*}=\beta +\omega '\Xi _{2}\) where \(\Xi _{j}\) is the jth column of \(\Xi \) for \(j=\{1,2\}\). We see in the population regressions in (20) and (22) that
$$\begin{aligned} \beta =\beta ^{*}-\omega '\Xi _{2} \end{aligned}$$
likewise it is straightforward to show that the sample estimator satisfies
$$\begin{aligned} \hat{\beta }_\mathrm{FD}^{I}=\hat{\beta }_\mathrm{FD}^{I,*}-\hat{\omega }_\mathrm{FD}^{I,*'}\hat{\Xi }_{2} \end{aligned}$$
where \(\hat{\beta }_\mathrm{FD}^{I,*}\), \(\hat{\omega }_\mathrm{FD}^{I,*}\) are the OLS estimators in (22) and \(\hat{\Xi }_{2}\) is the estimator of \(\Xi _{2}\) from OLS regression in (21). Namely, we have 'partialled out' \(\Delta X_{it}\). Below we show the following two results
$$\begin{aligned} \hat{\beta }_\mathrm{FD}^{I,*}\rightarrow \beta ^{*}+\frac{p_{1}p_{-1}}{C}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]) \end{aligned}$$
$$\begin{aligned} \hat{\omega }_\mathrm{FD}^{I,*}\rightarrow \omega \end{aligned}$$
where \(\hat{\Xi }_{2}\overset{p}{\rightarrow }\Xi _{2}\) by the KWLLN; this, together with the fact that \(\beta =\beta ^{*}-\omega '\Xi _{2}\) and the results in (29), (25) and (26), implies the result of Theorem 2.
To show (25) and (26), define \(w_{it}^{*}=(z_{it}',v_{it}^{\Delta X'})'\) and write the OLS estimator in (22) as
$$\begin{aligned} \hat{\theta }_\mathrm{FD}^{I,*}:=\left( \sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}^{*}w_{it}^{*'}\right) ^{-1}\sum _{i=1}^{N}\sum _{t=2}^{T}w_{it}^{*}\Delta \log (D_{it}) \end{aligned}$$
where \(\hat{\theta }_\mathrm{FD}^{I,*}:=(\hat{\gamma }_\mathrm{FD}^{I,*},\hat{\beta }_\mathrm{FD}^{I,*},\hat{\omega }_\mathrm{FD}^{I,*'})'\). Under the i.i.d assumption by an application of KWLLN
$$\begin{aligned} \hat{\theta }_\mathrm{FD}^{I,*}&\overset{p}{\rightarrow }&E[w_{it}^{*}w_{it}^{*'}]^{-1}E[w_{it}^{*}\Delta \log (D_{it})] \end{aligned}$$
$$\begin{aligned}= & {} \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*}\\ \omega \end{array}\right) +\left( \begin{array}{cc} E[z_{it}z_{it}'] &{} E[z_{it}v_{it}^{\Delta X'}]\\ E[v_{it}^{\Delta X}z_{it}'] &{} E[v_{it}^{\Delta X}v_{it}^{\Delta X'}] \end{array}\right) ^{-1}\left( \begin{array}{c} E[e_{it}z_{it}]\\ E[e_{it}v_{it}^{\Delta X}] \end{array}\right) \end{aligned}$$
$$\begin{aligned}= & {} \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*}\\ \omega \end{array}\right) +\left( \begin{array}{cc} E[z_{it}z_{it}']^{-1} &{} 0\\ 0 &{} E[v_{it}^{\Delta X}v_{it}^{\Delta X'}]^{-1} \end{array}\right) \left( \begin{array}{c} E[e_{it}z_{it}]\\ 0 \end{array}\right) \end{aligned}$$
where (30) follows plugging in \(\Delta \log (D_{it})=\gamma ^{*}\Delta I_{it}+\beta ^{*}\Delta \log (P_{it})+\omega ^{'}v_{it}^{\Delta X}+e_{it}\) and (31) follows as \(E[e_{it}v_{it}^{\Delta X}]=0\) and \(E[z_{it}v_{it}^{\Delta X'}]=0\). Hence we establish (26). It follows by (30) (noting \(z_{it}=(\Delta I_{it},\Delta \log (P_{it}))'\)) that
$$\begin{aligned}&\left( \begin{array}{c} \hat{\gamma }_\mathrm{FD}^{I,*}\\ \hat{\beta }_\mathrm{FD}^{I,*} \end{array}\right) \overset{p}{\rightarrow } \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +E[z_{it}z_{it}']^{-1}E[e_{it}z_{it}]\\&= \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +\left( \begin{array}{cc} E[(\Delta I_{it})^{2}] &{} E[\Delta I_{it}\Delta \log (P_{it})]\\ E[\Delta I_{it}\Delta \log (P_{it})] &{} E[(\Delta \log (P_{it}))^{2}] \end{array}\right) ^{-1}\left( \begin{array}{c} E[e_{it}\Delta I_{it}]\\ E[e_{it}\Delta \log (P_{it})] \end{array}\right) \\&= \left( \begin{array}{c} \gamma ^{*}\\ \beta ^{*} \end{array}\right) +\frac{1}{\text {det}(E[z_{it}z_{it}'])}\left( \begin{array}{cc} E[(\Delta \log (P_{it}))^{2}] &{} -E[\Delta I_{it}\Delta \log (P_{it})]\\ -E[\Delta I_{it}\Delta \log (P_{it})] &{} E[(\Delta I_{it})^{2}] \end{array}\right) \left( \begin{array}{c} E[e_{it}\Delta I_{it}]\\ E[e_{it}\Delta \log (P_{it})] \end{array}\right) . \end{aligned}$$
Expanding out the second element in the limit and defining \(C=\text {det}(E[z_{it}z_{it}'])\), which is greater than zero by assumption (no perfect collinearity between the regressors),
$$\begin{aligned}&\hat{\beta }_\mathrm{FD}^{I,*}-\beta ^{*} \overset{p}{\rightarrow } \frac{1}{C}\left( E[e_{it}\Delta \log (P_{it})]E[(\Delta I_{it})^{2}]\nonumber \right. \\&\quad \left. -E[\Delta I_{it}\Delta \log (P_{it})]E[e_{it}\Delta I_{it}]\right) \end{aligned}$$
$$\begin{aligned}&= \frac{1}{C}\left( (p_{1}+p_{-1})E[e_{it}\Delta \log (P_{it})]\nonumber \right. \\&\quad \left. -(\bar{\tau }_{1}p_{1}+\bar{\tau }_{-1}p_{-1})E[e_{it}\Delta I_{it}]\right) \end{aligned}$$
$$\begin{aligned}&= \frac{1}{C}\left( \left( E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{1}E[e_{it}\Delta I_{it}]\right) p_{1}\nonumber \right. \\&\quad \left. +\left( E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{-1}E[e_{it}\Delta I_{it}]\right) p_{-1}\right) \end{aligned}$$
$$\begin{aligned}&= \frac{1}{C}p_{1}p_{-1}(\bar{\tau }_{1}-\bar{\tau }_{-1})(E[e_{it}|\Delta I_{it}=-1]+E[e_{it}|\Delta I_{it}=1]) \end{aligned}$$
and the second equality follows as
$$\begin{aligned} E[(\Delta I_{it})^{2}]= & {} E[(\Delta I_{it})^{2}|\Delta I_{it}=1]p_{1}+E[(\Delta I_{it})^{2}|\Delta I_{it}=-1]p_{-1}\\= & {} p_{1}+p_{-1} \end{aligned}$$
$$\begin{aligned} E[\Delta I_{it}\Delta \log (P_{it})]= & {} E[\Delta \log (P_{it})|\Delta I_{it}=1]p_{1}\nonumber \\&\quad -E[\Delta \log (P_{it})|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
$$\begin{aligned}= & {} E[\Delta \log (1-\tau _{it})|\Delta I_{it}=1]p_{1}\nonumber \\&\quad +E[\Delta \log (1-\tau _{i,t-1})|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
$$\begin{aligned}= & {} \bar{\tau }_{1}p_{1}+\bar{\tau }_{-1}p_{-1} \end{aligned}$$
and the final equality (35) uses the law of iterated expectations (LIE) and strict exogeneity of \(\tau _{it}\) so that
$$\begin{aligned} E[e_{it}\Delta \log (P_{it})]= & {} E[e_{it}|\Delta I_{it}=1]\bar{\tau }_{1}p_{1}-E[e_{it}|\Delta I_{it}=-1]\bar{\tau }_{-1}p_{-1} \end{aligned}$$
$$\begin{aligned} E[e_{it}\Delta I_{it}]= & {} E[e_{it}|\Delta I_{it}=1]p_{1}-E[e_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
$$\begin{aligned} E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{1}E[e_{it}\Delta I_{it}]=(\bar{\tau }_{1}-\bar{\tau }_{-1})E[e_{it}|\Delta I_{it}=-1]p_{-1} \end{aligned}$$
where by a similar argument we can show
$$\begin{aligned} E[e_{it}\Delta \log (P_{it})]-\bar{\tau }_{-1}E[e_{it}\Delta I_{it}]=(\bar{\tau }_{1}-\bar{\tau }_{-1})E[e_{it}|\Delta I_{it}=1]p_{1}. \end{aligned}$$
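The two grouping identities above are purely algebraic consequences of the conditional-expectation expressions for \(E[e_{it}\Delta \log (P_{it})]\) and \(E[e_{it}\Delta I_{it}]\). A numerical sketch verifying both identities at random parameter values (variable names are shorthand for the paper's objects):

```python
import random

random.seed(0)
for _ in range(5):
    e1, em1 = random.uniform(-1, 1), random.uniform(-1, 1)  # E[e|dI=1], E[e|dI=-1]
    t1, tm1 = random.uniform(0, 1), random.uniform(0, 1)    # tau_bar_1, tau_bar_{-1}
    p1, pm1 = random.uniform(0, 0.5), random.uniform(0, 0.5)  # P(dI=1), P(dI=-1)

    E_e_dlogP = e1 * t1 * p1 - em1 * tm1 * pm1  # E[e * dlogP]
    E_e_dI = e1 * p1 - em1 * pm1                # E[e * dI]

    # First identity: subtracting tau_bar_1 times E[e*dI] isolates the dI=-1 term.
    assert abs((E_e_dlogP - t1 * E_e_dI) - (t1 - tm1) * em1 * pm1) < 1e-12
    # Second identity: subtracting tau_bar_{-1} times E[e*dI] isolates the dI=1 term.
    assert abs((E_e_dlogP - tm1 * E_e_dI) - (t1 - tm1) * e1 * p1) < 1e-12
```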
Table 7 presents descriptive statistics for all other control variables. There is substantial variation over the dynamic itemizer types (columns (1) to (4)). Continuing itemizers (column (1)) are the most likely to have donated and give the largest donations on average; more than five times that of continuing non-itemizers (column (2)) and more than double the mean donations of start and stop itemizers. Continuing itemizers also have the highest mean income and lowest mean price. The donating probability, mean donation and mean income of the start (column (3)) and stop (column (4)) itemizers are quite similar.
Table 7 (Descriptive statistics for all variables by itemizer type) reports means and standard deviations by dynamic itemizer type (continuing itemizer, continuing non-itemizer, start itemizer, stop itemizer) and by current status (itemizer, non-itemizer) for: net taxable income, \(1-I\tau ^{a}\), age of the head, married\(^d\), no high school\(^d\), some college\(^d\), college grad\(^d\), graduate school\(^d\), number of dependent children, deductible expenses and homeowner\(^d\). All monetary figures are in 2014 prices, deflated using the Consumer Price Index. Standard deviations are shown in (). Variables with \(^d\) are 0/1 dummies.
In Table 8, we present fuller regression results from our main analysis summarized in Table 3. The first three variables show the same information as in Table 3.
Table 8 (Full regression results of Table 3) reports, alongside the price and income variables of Table 3, the coefficients on the first-differenced controls (\(\Delta \)Age (head), \(\Delta \)Age (head)\(^2\), \(\Delta \)Married\(^d\), \(\Delta \)Dependent children, \(\Delta \)Homeowner\(^d\), \(\Delta \)Not HS grad\(^d\), \(\Delta \)Some college\(^d\), \(\Delta \)College grad\(^d\), \(\Delta \)Graduate school\(^d\) and \(\Delta \)Log deductions) for the OLS-FD model and for the IV models using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau _{it}^{b}\), together with the test \(H_{0}:\) the IV estimator is identified.
Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (3) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as the instrument, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first stage F test. The tests reported in the last row are the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
The superscript d indicates that the variable is a 0/1 dummy
In addition to the price and income effects discussed in detail above, we find evidence of a quadratic relationship between age and donations with, looking at the results in column (4), households headed by people aged 49.3 years giving the most, ceteris paribus. We also find that married people give substantially more than unmarried people, consistent with Mesch et al. (2011) and Rooney et al. (2005). We do not find evidence that the number of dependent children affects giving. We do not find evidence that more educated people give more. This might in part be due to the lack of within-household variation in the level of education of the head. Some studies using cross-sectional data find strong positive effects of education on giving (e.g., Mesch et al. 2011), but other studies have found no evidence of an effect (e.g., Andreoni et al. 2003). We do not find evidence that non-giving itemizable expenditure affects giving. We include this here because its exclusion will likely result in an omitted variable bias (correlated with price and giving), but it might also be a 'bad control' (Angrist and Pischke 2009) and so we test the robustness of our results to its exclusion in Appendix D below.
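The peak age implied by a quadratic age profile is the vertex of the parabola, \(-\beta _{1}/(2\beta _{2})\) where \(\beta _{1}\) and \(\beta _{2}\) are the coefficients on age and age squared. A minimal sketch, with hypothetical coefficients (not the paper's estimates) chosen to reproduce a peak at 49.3 years:

```python
def peak_age(beta_age, beta_age2):
    """Vertex of the quadratic age profile beta_age * a + beta_age2 * a^2."""
    return -beta_age / (2.0 * beta_age2)

# Hypothetical coefficients that would imply the reported peak of 49.3:
assert abs(peak_age(0.0986, -0.001) - 49.3) < 1e-9
```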
We present results of a number of robustness checks starting with the inclusion of the PSID poor oversample and the exclusion of the 'never' itemizers in Table 9.
Table 9 (Robustness checks I: include poor oversample and 'never itemizers') reports the \(\Delta \)Log price and \(\Delta \)Log net income coefficients when including the 'poor' sample and when excluding never itemizers, together with the test \(H_{0}:\beta _{\Delta \text {log}P}\le -1\).
Results in columns (1) and (2) are obtained via OLS-FD estimation of Eqs. (2) and (6), respectively, including the PSID oversample of poor households. Results in columns (3) and (4) are obtained via OLS-FD estimation of Eqs. (2) and (6), respectively, and excluding those households who never itemize during the observed period. All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
Table 10 (Robustness checks II: allowing for a nonlinear income effect) reports specifications with quadratic income, cubic income, income decile groups, and income deciles \(\times \) year; the coefficient on \(\Delta \)Log net income\(^2\) is − 0.068*, and the bottom row tests \(H_{0}:\beta _{\Delta Log price}<-1\).
Results are obtained from OLS-FD estimation of Eq. (2). All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
Results in columns (1) and (2) are obtained from OLS-FD estimation of Eqs. (2) and (6), respectively, including observations that are in the PSID oversample of poor households. These are excluded from our primary analysis. Results in columns (3) and (4) are obtained from OLS-FD estimation of Eqs. (2) and (6), respectively, excluding those households who never itemize during the observed period and therefore experience no change in the price of giving. In both cases the pattern is the same: price elasticities in excess of − 1 from the standard model and price elasticities close to and not different from 0, but different from − 1, from the itemizer model.
We also test the robustness of the results to the inclusion of nonlinear income effects in Table 10 by re-estimating Eq. (6) including quadratic (column (1)) and cubic (column (2)) income terms, income decile group dummies (column (3)) and, following Bakija and Heim (2011), decile groups interacted with years (column (4)).
The results do not qualitatively differ from our main findings in columns (1) and (4) of Table 3.
We next test the robustness of our results to the specification of the dependent variable. In our analysis, we use the log of donations plus $1 as the dependent variable. This is an arbitrary, though common, choice in the literature. In Table 11, we re-estimate Eq. (2) using different specifications for the dependent variable.
Table 11 (Robustness checks III: allowing for different specifications of donations) reports estimates for level donations and for logged donations plus $5 and plus $10, among other transformations of the dependent variable.
Results are obtained from OLS-FD estimation of Eq. (2), but we vary the specification of the dependent variable. All standard errors are clustered (at the household level). The test reported in the bottom row is the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
Robustness checks IV, excluding other itemizable expenditures (E)
Results in column (1) are obtained from OLS-FD estimation of Eq. (2). Results in columns (2) and (3) are from 2SLS-FD estimation of Eq. (2) using \(\tau _{it}^{s}-\tau _{it}^{b}\) and \(\Delta \tau ^{b}_{it}\) as the instrument, respectively. Results in column (4) are from OLS-FD estimation of Eq. (6). All standard errors are clustered (at the household level). The penultimate row shows the p value from the first stage F test. The tests reported in the last row are the one-sided t tests of the estimated price elasticities being elastic (\(\le -1\)) against the alternative hypothesis that the donations are price inelastic. Stars indicate statistical significance according to the following schedule: ***1, **5 and *10%
In column (1) we use level donations, in column (2) we use logged donations plus $5, in column (3) we use logged donations plus $10, and in column (4) we use an inverse hyperbolic sine transformation instead of taking logs. Again, our result maintains.
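The inverse hyperbolic sine transformation used in column (4) is \(\text{ihs}(x)=\log (x+\sqrt{x^{2}+1})\). Unlike \(\log (x+1)\) it requires no arbitrary dollar shift to handle zero donations, and it behaves like \(\log (2x)\) for large donations, so coefficients retain an approximate elasticity interpretation. A minimal sketch:

```python
import math

def ihs(x):
    """Inverse hyperbolic sine: log(x + sqrt(x^2 + 1)).

    Defined at zero without a dollar shift; approximately log(2x)
    for large x, so it mimics a log specification in the tails."""
    return math.log(x + math.sqrt(x * x + 1.0))

assert ihs(0.0) == 0.0                       # zero donations need no shift
assert abs(ihs(3.7) - math.asinh(3.7)) < 1e-12  # matches the stdlib asinh
# For large donations, ihs(x) is close to log(2x):
assert abs(ihs(1000.0) - math.log(2 * 1000.0)) < 1e-6
```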
Finally, we check the robustness of our results to the exclusion of other itemizable expenditures (E). As noted above, non-donation E will be correlated with price via itemization status and likely correlated with donations, and therefore its omission, as is done throughout the literature, will result in a biased estimator of the price elasticity. Including E, however, might be problematic as donations and non-donation deductible expenditures may be co-determined, though this may be mitigated by the fact that more than half of non-donation deductible expenditures are accounted for by mortgage interest payments and real estate taxes (Lowry 2014), which are likely to be predetermined in most cases. That is, non-donation itemizable expenditure may be a 'bad control' (Angrist and Pischke 2009). The role of E in estimating the price elasticity of giving has received very little attention in the literature. E is not generally available in survey data and is therefore omitted, and the studies using tax-filer data have not addressed this issue to our knowledge. In Table 12, we present results analogous to those presented in Table 3 above, but obtained excluding log E from the model.
Again, our result maintains as the estimated price elasticities are not sensitive to the exclusion of E. While such robustness checks are not exhaustive, the stability of our result to variation in data transformation, estimation sample, estimator and specification provides further support of our main result.
Aaron, H. (1972). Federal encouragement of private giving. In D. Dillon (Ed.), Tax impacts on philanthropy. Princeton: Tax Institute of America.
Almunia, M., Lockwood, B., & Scharf, K. (2017). More giving or more givers? The effects of tax incentives on charitable donations in the UK. CESifo Working Paper Series No. 6591.
Andreoni, J., Brown, E., & Rischall, I. (1999). Charitable giving by married couples: Who decides and why does it matter? Working Paper.
Andreoni, J., Brown, E., & Rischall, I. (2003). Charitable giving by married couples: Who decides and why does it matter? Journal of Human Resources, 38(1), 111–133.
Angrist, J., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Athey, S., & Imbens, G. (2015). A measure of robustness to misspecification. American Economic Review, 105(5), 476–480.
Auten, G. E., Sieg, H., & Clotfelter, C. T. (2002). Charitable giving, income, and taxes: An analysis of panel data. The American Economic Review, 92(1), 371–382.
Backus, P., & Grant, N. (2016). Consistent estimation of the tax-price elasticity of charitable giving with survey data. Manchester Economics Discussion Paper EDP-1606.
Bakija, J., & Heim, B. (2011). How does charitable giving respond to incentives and income? New estimates from panel data. National Tax Journal, 64(2), 615–650.
Batina, R. G., & Ihori, T. (2010). Public goods: Theories and evidence. New York: Springer.
Benzarti, Y. (2015). How taxing is tax filing? Leaving money on the table because of hassle costs. Ph.D. thesis, University of California, Berkeley.
Bönke, T., Massarrat-Mashhadi, N., & Sielaff, C. (2013). Charitable giving in the German welfare state: Fiscal incentives and crowding out. Public Choice, 154, 39–58.
Boskin, M. J., & Feldstein, M. S. (1977). Effects of the charitable deduction on contributions by low income and middle income households: Evidence from the National Survey of Philanthropy. The Review of Economics and Statistics, 59(3), 351–354.
Bradley, R., Holden, S., & McClelland, R. (2005). A robust estimation of the effects of taxation on charitable contributions. Contemporary Economic Policy, 23(4), 545–554.
Brown, E. (1987). Tax incentives and charitable giving: Evidence from new survey data. Public Finance Quarterly, 15(4), 386–396.
Brown, E., & Lankford, H. (1992). Gifts of money and gifts of time: Estimating the effects of tax prices and available time. Journal of Public Economics, 47(3), 321–341.
Brown, S., Harris, M. N., & Taylor, K. (2012). Modelling charitable donations to an unexpected natural disaster: Evidence from the U.S. Panel Study of Income Dynamics. Journal of Economic Behavior and Organization, 84, 97–110.
Brown, S., Greene, W. H., & Taylor, K. (2015). An inverse hyperbolic sine heteroskedastic latent class panel tobit model: An application to modelling charitable donations. Economic Modelling, 50, 321–341.
Clotfelter, C. T. (1980). Tax incentives and charitable giving: Evidence from a panel of taxpayers. Journal of Public Economics, 13(3), 319–340.
Clotfelter, C. T. (1985). Federal tax policy and charitable giving. Chicago: University of Chicago Press.
Clotfelter, C. T. (2002). The economics of giving. In J. Barry & B. Manno (Eds.), Giving better, giving smarter. Washington, DC: National Commission on Philanthropy and Civic Renewal.
Duquette, C. (1999). Is charitable giving by nonitemizers responsive to tax incentives? New evidence. National Tax Journal, 52(2), 195–206.
Dye, R. (1978). Personal charitable contributions: Tax effects and other motives. In Proceedings of the seventieth annual conference on taxation. Columbus: National Tax Association–Tax Institute of America.
Saez, E., Slemrod, J., & Giertz, S. H. (2012). The elasticity of taxable income with respect to marginal tax rates: A critical review. Journal of Economic Literature, 50(1), 3–50.
Fack, G., & Landais, C. (2010). Are tax incentives for charitable giving efficient? Evidence from France. American Economic Journal: Economic Policy, 2(2), 117–141.
Fack, G., & Landais, C. (2016). Introduction. In G. Fack & C. Landais (Eds.), Charitable giving and tax policy: A historical and comparative perspective, CEPR. Oxford: Oxford University Press.
Feenberg, D., & Coutts, E. (1993). An introduction to the TAXSIM model. Journal of Policy Analysis and Management, 12(1), 189.
Feldstein, M., & Clotfelter, C. (1976). Tax incentives and charitable contributions in the United States: A microeconometric analysis. Journal of Public Economics, 5(1–2), 1–26.
Feldstein, M., & Taylor, A. (1976). The income tax and charitable contributions. Econometrica, 44(6), 1201–1222.
Feldstein, M. S. (1995). Behavioral responses to tax rates: Evidence from the Tax Reform Act of 1986. The American Economic Review, 85(2), 170–174.
Gillitzer, C., & Skov, P. (2017). The use of third-party information reporting for tax deductions: Evidence and implications from charitable deductions in Denmark. Working paper.
Gruber, J. (2004). Pay or pray? The impact of charitable subsidies on religious attendance. Journal of Public Economics, 88(12), 2635–2655.
Gruber, J., & Saez, E. (2002). The elasticity of taxable income: Evidence and implications. Journal of Public Economics, 84, 2657–2684.
Hansen, L., Heaton, J., & Yaron, A. (1996). Finite-sample properties of some alternative GMM estimators. Journal of Business and Economic Statistics, 14(3), 262–280.
Huck, S., & Rasul, I. (2008). Testing consumer theory in the field: Private consumption versus charitable goods. ELSE Working Paper #275, Department of Economics, University College London.
Hungerman, D., & Ottoni-Wilhelm, M. (2016). What is the price elasticity of charitable giving? Toward a reconciliation of disparate estimates. Working Paper.
Imbens, G., & Angrist, J. (1994). Identification and estimation of local average treatment effects. Econometrica, 62(2), 467–475.
Lankford, R. H., & Wyckoff, J. H. (1991). Modeling charitable giving using a Box–Cox standard tobit model. The Review of Economics and Statistics, 73(3), 460–470.
Lowry, S. (2014). Itemized tax deductions for individuals: Data analysis. Technical Report 7-5700, Congressional Research Service.
McClelland, R., & Kokoski, M. F. (1994). Econometric issues in the analysis of charitable giving. Public Finance Review, 22(4), 498–517.
Mesch, D. J., Brown, M. S., Moore, Z. I., & Hayat, A. D. (2011). Gender differences in charitable giving. International Journal of Nonprofit and Voluntary Sector Marketing, 16, 342–355.
Olson, N. (2012). 2012 annual report to Congress. Technical report, National Taxpayer Advocate.
Peloza, J., & Steel, P. (2005). The price elasticities of charitable contributions: A meta-analysis. Journal of Public Policy and Marketing, 24(2), 260–272.
Randolph, W. (1995). Dynamic income, progressive taxes, and the timing of charitable contributions. Journal of Political Economy, 103(4), 709–738.
Reece, W., & Zieschang, K. (1985). Consistent estimation of the impact of tax deductibility on the level of charitable contributions. Econometrica, 53(2), 271–293.
Reid, T. (2017). A fine mess: A global quest for a simpler, fairer, and more efficient tax system. New York, NY: Penguin.
Reinstein, D. A. (2011). Does one contribution come at the expense of another? The B.E. Journal of Economic Analysis & Policy, 11(1), 1–54.
Roberts, R. (1984). A positive model of private charity and wealth transfers. Journal of Political Economy, 92(1), 136–148.
Rooney, P. M., Mesch, D. J., Chin, W., & Steinberg, K. (2005). The effects of race, gender, and survey methodologies on giving in the US. Economics Letters, 86, 173–180.
Saez, E. (2004). The optimal treatment of tax expenditures. Journal of Public Economics, 88, 2657–2684.
Scharf, K., & Smith, S. (2010). The price elasticity of charitable giving: Does the form of tax relief matter? IFS Working Paper W10/07.
Slemrod, J. (1988). Are estimated tax elasticities really just tax evasion elasticities? The case of charitable contributions. The Review of Economics and Statistics, 71(3), 517–522.
Staiger, D., & Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65(3), 557–586.
Steinberg, R. (1990). Taxes and giving: New findings. Voluntas, 1(2), 61–79.
Taussig, M. (1967). Economic aspects of the personal income tax treatment of charitable contributions. National Tax Journal, 20(1), 1–19.
Wilhelm, M. O. (2006). New data on charitable giving in the PSID. Economics Letters, 92(1), 26–31.
Wilhelm, M. O. (2007). The quality and comparability of survey data on charitable giving. Nonprofit and Voluntary Sector Quarterly, 36(1), 65–84.
Yöruk, B. (2010). Charitable giving by married couples revisited. The Journal of Human Resources, 45(2), 497–516.
Yöruk, B. (2013). The impact of charitable subsidies on religious giving and attendance: Evidence from panel data. The Review of Economics and Statistics, 95(5), 1708–1721.
Zampelli, E., & Yen, S. (2017). The impact of tax price changes on charitable contributions. Contemporary Economic Policy, 35(1), 113–124.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1.3.011 Arthur Lewis Building, University of Manchester, Manchester, UK
Backus, P.G. & Grant, N.L. Int Tax Public Finance (2019) 26: 317. https://doi.org/10.1007/s10797-018-9500-9
First Online 26 June 2018
\begin{document}
\begin{center} {\Huge \textbf{Time consistent portfolio management }}
\mbox{}\\[0pt]
Ivar Ekeland\\[0pt] CEREMADE et Institut de Finance\\[0pt] Universite Paris-Dauphine\\[0pt] 75775 Paris CEDEX 16\\[0pt] [email protected]\\[0pt]
Oumar Mbodji \\[0pt] Department of Mathematics \& Statistics\\[0pt] McMaster University \\[0pt] 1280 Main Street West \\[0pt] Hamilton, ON, L8S 4K1\\[0pt] [email protected]
Traian A.~Pirvu \footnote{ Work supported under NSERC grant 298427-04. We thank the referees for valuable advice, suggestions and a thorough reading of the first version.} \\[0pt] Department of Mathematics \& Statistics\\[0pt] McMaster University \\[0pt] 1280 Main Street West \\[0pt] Hamilton, ON, L8S 4K1\\[0pt] [email protected]
\mbox{}\\[0pt]
\end{center}
\noindent \textbf{Abstract.} This paper considers the portfolio management problem for an investor with finite time horizon who is allowed to consume and take out life insurance. Natural assumptions, such as different discount rates for consumption and life insurance lead to time inconsistency. This situation can also arise when the investor is in fact a group, the members of which have different utilities and/or different discount rates. As a consequence, the optimal strategies are not implementable. We focus on hyperbolic discounting, which has received much attention lately, especially in the area of behavioural finance. Following \cite{EkePir}, we consider the resulting problem as a leader-follower game between successive selves, each of whom can commit for an infinitesimally small amount of time.
We then define policies as subgame perfect equilibrium strategies. Policies are characterized by an integral equation which is shown to have a solution in the case of CRRA utilities. Our results can be extended for more general preferences as long as the equations admit solutions.
Numerical simulations reveal that for the Merton problem with hyperbolic discounting, the consumption increases up to a certain time, after which it decreases; this pattern does not occur in the case of exponential discounting, and is therefore known in the literature as the ``consumption puzzle". Other numerical experiments explore the effect of a time-varying aggregation rate on the insurance premium.
\noindent \textbf{AMS classification}: 60G35, 60H20, 91B16, 91B70
\noindent \textbf{Key words:} Portfolio optimization, Pensioner's problem, Policies, Hyperbolic discounting.
\section{Introduction}
The investment/consumption problem in a stochastic context was considered by Merton \cite{Mer69} and \cite{Mer71}. His model consists in a risk-free asset with constant rate of return and one or more stocks, the prices of which are driven by geometric Brownian motion. The horizon $T$ is prescribed, the portfolio is self-financing, and the investor seeks to maximize the expected utility of intertemporal consumption plus the final wealth. Merton provided a closed form solution when the utilities are of constant relative risk aversion (CRRA) or constant absolute risk aversion (CARA) type. It turns out that for (CRRA) utilities the fraction of wealth invested in the risky asset is constant through time. Moreover for the case of (CARA) utilities, they are linear in wealth.
Richard \cite{Rich} added life insurance to the investor's portfolio, assuming an arbitrary but known distribution of death time. In the same vein Pliska \cite{Pliska} studied optimal life insurance and consumption for an income earner whose lifetime is random and unbounded. More recently Kwak et al.~\cite{Kwak} looked at the problem of finding optimal investment and consumption for a family whose parents receive deterministic labor income until some deterministic time horizon.
The aim of this paper is to revisit these problems in the case when the psychological discount rate is not constant. By now there is substantial evidence that people discount the future at a non-constant rate. More precisely, there is experimental evidence (see Frederick et. al. \cite{Frederick} for a review) that people are more sensitive to a given time delay if it occurs earlier: for instance, a person might prefer to get two oranges in 21 days than one orange in 20 days, but also prefer to get one orange right now than two oranges tomorrow. This is known as \textbf{ the common difference effect}, and would not occur if future utilities are discounted at a constant rate. Individual behaviour is best described by \textbf{hyperbolic discounting}, where the discount factor is $h(t)=(1+at)^{- \frac{b}{a}},$ with $a,b>0$. The corresponding discount rate is $r\left( t\right) =\frac{b}{ 1+at} $, which starts from $r\left( 0\right) =\frac{b }{a}$ and decreases to zero. Because of its empirical support, hyperbolic discounting has received a lot of attention in the areas of: microeconomics, macroeconomics and behavioural finance. We just mention here among others the works of Loewenstein and Prelec \cite{LoPre}, Laibson \cite{Lai} and Barro \cite{Bar}.
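The common difference effect described above can be checked numerically with the hyperbolic discount factor $h(t)=(1+at)^{-b/a}$. The parameter values below ($a=1$, $b=2$, time measured in days) are illustrative choices, not taken from the paper; with them, one orange today is preferred to two tomorrow, yet two oranges in 21 days are preferred to one in 20 days, a reversal impossible under exponential discounting:

```python
def h(t, a=1.0, b=2.0):
    """Hyperbolic discount factor h(t) = (1 + a*t)**(-b/a)."""
    return (1.0 + a * t) ** (-b / a)

# One orange now beats two oranges tomorrow ...
assert 1 * h(0) > 2 * h(1)
# ... yet two oranges in 21 days beat one orange in 20 days.
assert 2 * h(21) > 1 * h(20)

# Under exponential discounting exp(-r*t) no such reversal can occur:
# the ratio 2*exp(-r*(t+1)) / exp(-r*t) = 2*exp(-r) is the same at every t.
```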
It is well-known that, for non-constant discount rates, optimal strategies are time inconsistent: for $t_1 < t_2$, the planner at time $t_1 $ will find a strategy $f_1$ to be optimal on $[t_1, \infty)$, while the planner at time $t_2$ will find a different strategy $f_2$ to be optimal on that interval. As a result, the planner at time $t_2$ will not implement the strategy devised by the planner at time $t_1$, unless there exists some commitment mechanism. If there is none, then the strategy $f_1$, which is optimal from the perspective of the planner at time $t_1$, is not implementable, and the planner at time $t_1$ must look for a second-best strategy. This situation was first analyzed by Strotz \cite{STRO}, and this line of research has been pursued by many others (see Pollak \cite{Pol}, Phelps \cite{Phelps}, Peleg and Yaari \cite{PeYa}, Goldmann \cite{Gol}, Laibson \cite{Lai}, Barro \cite{Bar}, Krusell and Smith \cite{KruSm}), mostly in the framework of planning a discrete-time economy with production (Ramsey's problem). It is by now well established that time-consistent strategies are Stackelberg equilibria of a leader-follower game among successive selves (today's self has divergent interests from tomorrow's). More recently, the problem has been taken up again by Karp \cite{Karp2}, \cite{Karp}, \cite{Karp-Fuji}, \cite{Karp-Lee}, Luttmer and Mariotti \cite{LutMar}, and Ekeland and Lazrak \cite{EkeLaz}, \cite{Ekl1}, \cite{EkL}, always within the framework of planning economic growth. In a series of papers Bj\"{o}rk and Murgoci \cite{Bj1} and Bj\"{o}rk, Murgoci and Zhou \cite{Bj2} look at the mean-variance problem, which is also time inconsistent.
Ekeland and Pirvu \cite{EkePir} seem to have been the first to have considered the Merton problem with non-constant psychological discount rates. They studied the case of an investor who has a CRRA utility $u\left( c\right) =-\frac{1}{p }c^{p},\ p<1$, and a quasi-exponential discount factor $h\left( t\right) $, that is, $h\left( t\right) $ must belong to one of the families: \begin{eqnarray*} h\left( t\right) &=&\lambda e^{-r_{1}t}+\left( 1-\lambda \right) e^{-r_{2}t}, \\ h\left( t\right) &=&\left( 1+at\right) e^{-rt}. \end{eqnarray*} Extending the basic idea of Ekeland and Lazrak \cite{EkeLaz} to the stochastic framework, they find time-consistent strategies in the limiting case when the investor can commit only during an infinitesimal time interval. They show that time-consistent strategies exist if a certain BSDE has a solution, and they show that, because of the special form of the discount factor, this BSDE reduces to a system of two ODEs which has a solution.
The aim of this paper is to extend these results to more general discount rates, and more general problems. Quasi-exponential discount rates, although mathematically convenient, are not realistic. As we saw earlier, empirically observed discount rates among individuals tend to be hyperbolic. But there is another, perhaps more compelling, reason why general discount rates are of interest. Standard portfolio theory assumes that the investor is an individual. However, in most situations investment decisions are made by a group, such as the management team in the case when the investor entrusts his portfolio to professionals. Even when the investor manages the portfolio directly, the word ``investor'', which is suggestive of a single decision-maker, very often hides a different reality, namely the family: one would expect the husband and the wife to take part in investment decisions concerning the couple. By now, the relevant economic literature has made abundantly clear (see Chiappori and Ekeland, \cite{Chia}) that the group cannot be represented by a single utility function. Instead, there should be one utility function, and one discount factor per member of the group. The actual decision taken is the result of negotiations within the group, a kind of black box which cannot be opened by outsiders. However, if the group is efficient, that is, if the outcome is Pareto optimal, then it can be modelled by maximising a suitable convex combination of the members' utilities, the weight conferred to each individual representing his/her power within the group. In the (very) particular case when all members of the group have the same utility, but different discount rates, the group behaves as a single individual with non-constant discount rate.
The difficulty in dealing with non-constant discount rates is to define time-consistent strategies and to prove that they exist. We follow the approach pioneered by Ekeland and Lazrak \cite{EkeLaz} in the deterministic framework, by considering the limiting case when the decision-maker can commit only during an infinitesimal amount of time. This approach was already followed in our earlier work \cite{EkePir}, in the case of quasi-exponential discount factors. The proofs in that paper do not readily extend to the case of general discount factors, so in the present work we present a different method. Whereas \cite{EkePir} characterizes the time-consistent strategies in terms of a certain BSDE, we now characterize them in terms of a certain ``value function'', which is shown to satisfy a certain integral equation which has a natural interpretation. Assuming utilities to be (CRRA), we decouple time and space, and reduce it to a one-dimensional integral equation, which we solve by a fixed-point argument. Moreover this one-dimensional equation is amenable to numerical treatment, so one can compare the equilibrium policies arising from different choices of discounting. The numerical scheme we employ consists of the discretization of the one-dimensional equation in three steps. This is based on a Riemann sum approximation of the integral. We obtain closed form solutions in certain cases.
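The Riemann-sum-plus-fixed-point idea can be sketched on a generic equation of the form $g(t)=f(t)+\int_t^T K(t,s)\,g(s)\,ds$; the functions $f$ and $K$ below are placeholders chosen only to make the iteration converge, not the paper's actual equation:

```python
import math

# Generic sketch: solve g(t) = f(t) + int_t^T K(t,s) g(s) ds by
# Riemann-sum discretization and Picard (fixed-point) iteration.
T, n = 1.0, 200
dt = T / n
grid = [i * dt for i in range(n + 1)]

f = lambda t: math.exp(-t)              # placeholder inhomogeneity
K = lambda t, s: 0.1 * math.exp(-(s - t))  # placeholder kernel (contraction)

g = [f(t) for t in grid]                # initial guess
for _ in range(100):
    g_new = []
    for i, t in enumerate(grid):
        # Left-endpoint Riemann sum of the integral from t to T.
        integral = sum(K(t, grid[j]) * g[j] for j in range(i, n)) * dt
        g_new.append(f(t) + integral)
    converged = max(abs(x - y) for x, y in zip(g, g_new)) < 1e-12
    g = g_new
    if converged:
        break

assert g[-1] == f(T)    # no integral remains at the horizon
assert g[0] > f(0)      # the integral term adds value earlier on
```

Because the kernel is small (a contraction), the iteration converges geometrically; the same structure carries over to the decoupled one-dimensional equation once the CRRA separation of time and space is made.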
We show that hyperbolic discounting may result in consumption patterns which are observationally different from the optimal strategies in the Merton model. The latter predicts that consumption grows smoothly over time if the interest rate exceeds the discount rate (or decays smoothly otherwise). However, household data indicates that consumption is hump-shaped. This is referred to in the literature as the consumption puzzle, and we show that it can arise as a time-consistent strategy in certain cases of hyperbolic discounting.
By running numerical simulations we study the effect on the life insurance process of the weight given by the decision-maker to the beneficiaries.
\textbf{Organization of the paper}: The remainder of this paper is organized as follows. In Section 2 we describe the model and formulate the objective. Section 3 introduces the value function. Section 4 presents the main result. Section 5 deals with CRRA utilities. Numerical results are discussed in Section 6. An extension to multiple managers is discussed in Section 7. The paper ends with an appendix containing some proofs.
\section{The Model}
\subsection{The decisions}
Consider a financial market consisting of a savings account and one stock (the risky asset). The inclusion of more risky assets can be achieved by notational changes. The savings account accrues interest at the riskless rate $r>0.$ The stock price per share follows an exponential Brownian motion \begin{equation*} dS(t)=S(t)\left[ \alpha \,dt+\sigma \,dW(t)\right] ,\quad 0\leq t<\infty , \end{equation*} where $\{W(t)\}_{t\geq 0}$ is a one-dimensional Brownian motion on a filtered probability space\\ $(\Omega ,\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P}).$ The filtration $\{\mathcal{F}_{t}\}$ is the completed filtration generated by $\{W(t)\}$. Let us denote by $\mu \triangleq \alpha -r>0$ \emph{the excess return}.
A decision-maker in this market continuously invests in the stock and the bond, consumes, and buys life insurance, while receiving income at the continuous deterministic rate $i(t).$ This assumption is key in deriving our results. Relaxing it to accommodate problems relevant to small enterprises is not straightforward and would be an interesting research direction.
Life insurance is offered as a succession of term contracts with infinitesimally small horizon. At every time $t$, a contract is offered, costing $1$ unit of account. If the holder dies immediately after, the insurance company pays $l\left( t\right)$ to his/her beneficiaries. The deterministic function $l\left( t\right)$ is prescribed.
At every time $t$, the investor chooses $\zeta (t)$, the investment in the risky asset, $c(t)$ the consumption, and $p(t)$, the amount of life insurance. Given an adapted process $\{\zeta (t),c(t),p(t)\}_{t\geq 0}$, the equation describing the dynamics of wealth ${X^{\zeta ,c,p}(t)}$ is given by \begin{eqnarray} dX^{\zeta ,c,p}(t) &=&rX^{\zeta ,c,p}(t)dt-c(t)dt-p(t)dt+i(t)dt+\zeta (t)(\alpha \,dt+\sigma dW(t)) \notag \label{equ:wealth-one} \\ X^{\zeta ,c,p}(0) &=&X\left( 0\right), \end{eqnarray} the initial wealth $X(0)$ being exogenously specified.
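For intuition, the wealth dynamics \eqref{equ:wealth-one} can be simulated by an Euler-Maruyama scheme. The sketch below is purely illustrative: it assumes constant controls $\zeta, c, p$ and a constant income rate, which are not the equilibrium policies derived later in the paper.

```python
import math
import random

def simulate_wealth(x0, r, alpha, sigma, zeta, c, p, i, T, n_steps, seed=0):
    """Euler-Maruyama scheme for
    dX = (r X - c - p + i) dt + zeta (alpha dt + sigma dW)."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += (r * x - c - p + i + alpha * zeta) * dt + sigma * zeta * dw
    return x

# With zeta = 0 the noise vanishes and dX = (rX + i - c - p) dt is a linear
# ODE with an explicit solution, against which the scheme can be checked.
r, i, c, p, T = 0.05, 0.04, 0.03, 0.01, 1.0
x_num = simulate_wealth(1.0, r, 0.12, 0.2, 0.0, c, p, i, T, 100_000)
k = (i - c - p) / r
x_exact = (1.0 + k) * math.exp(r * T) - k
assert abs(x_num - x_exact) < 1e-4
```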
We assume a benchmark deterministic time horizon $T$. The investor is alive at time $t=0$ and has a lifetime denoted by $\tau $, which is a non-negative random variable defined on the probability space $(\Omega , \mathcal{F},\mathbb{P})$ and independent of the Brownian motion $W.$ Denote by $g\left( t\right) $ its density and by $G\left( t\right) $ its distribution:
\begin{equation*} G(t)\triangleq\mathbb{P}(\tau <t)=\int_{0}^{t}g(u)\,du. \end{equation*}
It will be useful for later computations to introduce the hazard function $ \lambda \left( t\right) $, that is, the instantaneous death rate, defined by
\begin{equation*} \lambda (t)\triangleq\lim_{\varepsilon \downarrow 0}\frac{\mathbb{P}(t\leq \tau
<t+\varepsilon \ |\ \tau \geq t)}{\varepsilon }=\frac{g(t)}{1-G(t)}, \end{equation*} so that $g(t)=\lambda (t)\exp \{-\int_{0}^{t}\lambda (u)\,du\}$. We have, from the definition:
\begin{equation}\label{*0}
\mathbb{P}(\tau <s\ |\ \tau >t)=1-\exp \{-\int_{t}^{s}\lambda (u)\,du\}. \end{equation}
and
\begin{equation}\label{*1}
\mathbb{P}(\tau>T | \tau>t)= \exp\{-\int_{t}^{T} \lambda(u)\,du \}.
\end{equation}
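As a sanity check on \eqref{*0} and \eqref{*1}, the following sketch takes a constant hazard rate $\lambda$ (an illustrative assumption; the paper allows a general $\lambda(t)$) and verifies numerically that the conditional death probability agrees with the integral of the density $g$:

```python
import math

lam = 0.02  # constant hazard rate (illustrative)

def survival(t, s):
    """P(tau > s | tau > t) = exp(-(s - t) * lam) for a constant hazard."""
    return math.exp(-lam * (s - t))

def death_density(u):
    """g(u) = lam * exp(-lam * u), the unconditional density of tau."""
    return lam * math.exp(-lam * u)

# Check P(tau < s | tau > t) = (1 / P(tau > t)) * integral_t^s g(u) du,
# with the integral evaluated by the midpoint rule.
t, s, n = 1.0, 5.0, 10_000
du = (s - t) / n
integral = sum(death_density(t + (j + 0.5) * du) for j in range(n)) * du
assert abs(integral / survival(0.0, t) - (1.0 - survival(t, s))) < 1e-8
```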
Next we turn to risk preferences.
\subsection{Utility functions}
\begin{definition}\label{util}
A utility function $U$ is a strictly increasing, strictly concave differentiable real-valued function defined on $[0,\infty)$ which satisfies the Inada conditions
\begin{equation}\label{in} U'(0)\triangleq \lim_{x\downarrow 0} U'(x)=\infty,\qquad U'(\infty)\triangleq \lim_{x\rightarrow \infty} U'(x)=0. \end{equation}
\end{definition}
The strictly decreasing $C^1$ function $U'$ maps $(0,\infty)$ onto $(0,\infty)$ and hence has a strictly decreasing, $C^{1}$ inverse $I: (0,\infty)\rightarrow (0,\infty).$
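For the power utilities used later, $U(x)=x^{\gamma}/\gamma$, both $U'(x)=x^{\gamma-1}$ and its inverse $I(y)=y^{1/(\gamma-1)}$ are explicit. A quick numerical check, with the illustrative value $\gamma=-1$:

```python
gamma = -1.0  # illustrative CRRA exponent, gamma < 1, gamma != 0

def U_prime(x):
    """Marginal utility U'(x) = x^(gamma - 1)."""
    return x ** (gamma - 1.0)

def I(y):
    """Inverse of U': I(y) = y^(1 / (gamma - 1))."""
    return y ** (1.0 / (gamma - 1.0))

# I inverts U' on (0, infinity).
for x in (0.5, 1.0, 3.0):
    assert abs(I(U_prime(x)) - x) < 1e-12

# Inada conditions: U' blows up near 0 and vanishes at infinity.
assert U_prime(1e-8) > 1e8 and U_prime(1e8) < 1e-8
```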
The legacy process of the decision-maker, $\{Z^{\zeta ,c,p}(t)\}_{t\geq 0},$ is defined by \begin{equation}\label{le} Z^{\zeta ,c,p}(t)\triangleq\eta (t)X^{\zeta ,c,p}(t)+l(t)p(t), \end{equation}
where $\eta \left( t\right) $ and $ l\left( t\right) $ \ are prescribed deterministic and continuous functions. The legacy is the sum of two terms:\ the first one, $\eta (t)X^{\zeta ,c,p}(t)$, is the part of his wealth which will benefit his heirs (after taxes, and various costs), and the second one $l(t)p(t)$ is the life insurance. Although the insurance premium $p(t)$ is allowed to be negative we require that the legacy $Z^{\zeta ,c,p}(t)$ stays positive. A negative $p(t)$ means that the decision maker can sell life insurance.
Let $U_1, U_2, U_3$ be utility functions as in Definition \ref{util}; $U_1$ is the utility from intertemporal consumption, $U_2$ is the utility of the final wealth and $U_3$ is the utility of the legacy.
Next, we define the admissible strategies. Sometimes, to ease notation, we write ${X}^{t,x}(s)$ and ${Z}^{t,x}(s)$ for the processes ${X}^{\zeta ,c,p}(s)$ and ${Z}^{\zeta ,c,p}(s)$ conditional on ${X}^{\zeta ,c,p}(t)=x.$
\begin{definition} \label{def:portfolio-proportions} An $\mathbb{R}^{3}$-valued stochastic process $\{\zeta (t),c(t),p(t)\}_{t\geq 0}$ is called an admissible strategy process if
\begin{itemize} \item it is progressively measurable with respect to the sigma algebra $ \sigma (\{\ W(t)\}_{t\geq 0})$,
\item $c(t)\geq 0,\ Z^{\zeta ,c,p}(t)\geq 0\,\,\mbox{for all}\,\,t,\text{ a.s.};$ ${X}^{\zeta ,c,p}(T)\geq 0,\text{ a.s.}$
\item moreover we require that for all $t, x\geq 0$ \begin{equation} \label{189}
\mathbb{E} \sup_{\{t\leq s\leq T\}} |U_{1}(c(s))|<\infty,\,\,\mathbb{E}
|U_{2}({X}^{t,x}(T))| <\infty,\,\, \mathbb{E}\sup_{\{t\leq s\leq T\}} |U_{3}(Z^{t,x}(s))|<\infty. \end{equation} \end{itemize} \end{definition}
The last set of inequalities is purely technical and is satisfied, e.g., for bounded strategies. These conditions are essential in proving our main result and are related to the fact that the expected utility criterion is continuously updated. \subsection{The intertemporal utility}
In order to evaluate the performance of an investment-consumption-insurance strategy the decision maker uses an expected utility criterion. For an admissible strategy process\\ $\{{\zeta }(s),{c}(s),p(s)\}_{s\geq 0}$ and its corresponding wealth process $\{X^{\zeta ,c,p}(s)\}_{s\geq 0},$ we denote the expected intertemporal utility by \begin{eqnarray} J(t,x,\zeta ,c,p) &\triangleq &\mathbb{E}\bigg[\int_{t}^{T\wedge \tau }h(s-t)U_{1}(c(s))\,ds+nh(T-t){U}_{2}(X^{\zeta
,c,p}(T))1_{\{\tau >T|\tau >t\}} \notag \\ &+&m(\tau -t)\hat{h}(\tau -t){U}_{3}(Z^{\zeta ,c,p}(\tau ))1_{\{\tau
\leq T|\tau >t\}}\bigg|X^{\zeta ,c,p}(t)=x\bigg], \label{01FUNCT} \end{eqnarray} where:
\begin{itemize}
\item $n>0$ is a constant
\item $m\left( t\right) >0$ is a continuous function
\item $h$ and $\hat{h}$ are continuously differentiable, positive and decreasing functions, such that $h\left( 0\right) =\hat{h}\left( 0\right) =1.$
\end{itemize}
The interpretation is as follows. The decision-maker will collect $X^{\zeta ,c,p}(T)$ at time $T$, if he is still alive at time $T$, and the coefficient $n$ is the weight he attributes to getting that lump sum, as compared to the utility of continuous consumption up to time $T$. The function $h\left( t\right) $ is his discount function, and it is no longer restricted to the exponential and quasi-exponential type.
He may, however, die before time $T$, in which case his wealth will accrue to others, and the decision-maker is taking the utility of his beneficiaries into account when managing his portfolio.
Since the death time $\tau $ is independent of the uncertainty driving the stock, we have the following simplified expression for the functional $J,$ which is proved in Appendix A.
\begin{lemma} \label{L1} \label{ain200} The functional $J$ of \eqref{01FUNCT} equals \begin{eqnarray} \label{w9} J(t,x,\zeta ,c,p)&=& \mathbb{E}\bigg[ \int_{t}^{T} Q(s,t) U_{1}(c(s))\,ds \\ &+&\int_{t}^{T} q(s,t){U}_{3}(Z^{t,x}(s))\,ds+n Q(T,t) {U}_{2}(X^{t,x}(T))\bigg], \end{eqnarray}
where \begin{eqnarray} q(s,t) &\triangleq &\bar{h}(s-t)\lambda (s)\exp \{-\int_{t}^{s}\lambda (z)\,dz\},\quad \bar{h}(t)\triangleq m(t)\hat{h}(t) \label{((o} \\ Q(s,t) &\triangleq &h(s-t)\exp \{-\int_{t}^{s}\lambda (z)\,dz\} \label{((o0} \end{eqnarray} \end{lemma}
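To make the kernels \eqref{((o}--\eqref{((o0} concrete, the sketch below takes a hyperbolic discount function $h(t)=(1+k_1 t)^{-k_2/k_1}$, $\hat{h}=h$, a constant Pareto weight $m(t)=m$, and a constant hazard rate; all of these choices are illustrative.

```python
import math

k1, k2 = 5.0, 3.0   # hyperbolic discount parameters (illustrative)
lam, m = 0.02, 1.5  # constant hazard rate and Pareto weight (illustrative)

def h(t):
    return (1.0 + k1 * t) ** (-k2 / k1)

def Q(s, t):
    """Q(s,t) = h(s-t) * exp(-integral_t^s lam dz)."""
    return h(s - t) * math.exp(-lam * (s - t))

def q(s, t):
    """q(s,t) = m * h(s-t) * lam(s) * exp(-integral_t^s lam dz), h-hat = h."""
    return m * lam * h(s - t) * math.exp(-lam * (s - t))

# Q(t,t) = h(0) = 1; both kernels depend on (s,t) only through s - t here
# because lam is constant -- this fails for a general hazard lam(t).
assert abs(Q(2.0, 2.0) - 1.0) < 1e-12
assert abs(Q(3.0, 1.0) - Q(4.0, 2.0)) < 1e-12
```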
A natural objective for the decision maker is to maximize the above expected utility criterion. However, because neither $q$ nor $Q$ is exponential, time inconsistency sets in: a strategy that is optimal at time $0$ will no longer be considered optimal at later times, so it will not be implemented unless the decision-maker at time $0$ can constrain the decision-maker at all times $t>0$.
\subsection{Time-consistent strategies}
We now introduce a special class of time-consistent strategies, which will henceforth be called \emph{policies}. That is, we consider that the decision-maker at time $t$ can commit his successors up to time $\varepsilon $, with $\varepsilon \rightarrow 0$, and we seek strategies which it is optimal to implement right now conditioned on them being implemented in the future.
More precisely, suppose that a strategy $f$ is time-consistent. This means that, if it has been applied up to time $t$, the decision-maker at time $t$ will apply it as well. Since there is no commitment mechanism to force him to do so, he will only apply strategy $f$ if it is in his own best interests. Denote his current wealth by $X(t)$. He has two possibilities: either to stick to the strategy $f$, or to apply another one. To simplify matters, we will assume that the decision-maker considers only a very short time interval, $[t, t+\epsilon]$, so short in fact that all strategies can be assumed to be constant on that interval. The decision-maker then just compares the effect of investing $\bar{\zeta}$, consuming $\bar{c}$ and buying $\bar{p}$ worth of insurance, as required by the strategy $f$ at time t, with the effect of investing $\zeta$, consuming $c$ and buying $p$ worth of insurance, for different (constant) values. There will be, as usual, an immediate effect, corresponding to the change in consumption between $t$ and $t+\epsilon$, and a long-term effect, corresponding to the change in wealth at time $t+\epsilon$.
Let us formalize this idea:
\begin{definition} \label{finiteh}An admissible trading strategy $\{\bar{\zeta}(s),\bar{c}(s), \bar{p}(s)\}_{t\leq s\leq T}$ is a \emph{policy} if there exists a map $ F=(F_{1},F_{2},F_{3}):[0,T]\times \mathbb{R}\rightarrow \mathbb{R}\times \lbrack 0,\infty )\times \mathbb{R}$ such that for any $t,x>0$ \begin{equation} {\lim \inf_{\epsilon \downarrow 0}}\frac{J(t,x,\bar{\zeta},\bar{c},\bar{p} )-J(t,x,\zeta _{\epsilon },c_{\epsilon },p_{\epsilon })}{\epsilon }\geq 0, \label{opt} \end{equation} where: \begin{equation}\label{0000eq} \bar{\zeta}(s)={F_{1}(s,\bar{X}(s))},\quad \bar{c}(s)=F_{2}(s,\bar{X} (s)),\quad \bar{p}(s)=F_{3}(s,\bar{X}(s)) \end{equation} and the wealth process $\{\bar{X}(s)\}_{s\in \lbrack t,T]}$ is a solution of the stochastic differential equation (SDE): \begin{equation} d\bar{X}(s)=[r\bar{X}(s)+\mu F_{1}(s,\bar{X}(s))-F_{2}(s,\bar{X}(s))-F_{3}(s, \bar{X}(s))+i(s)]ds+\sigma F_{1}(s,\bar{X}(s))dW(s). \label{0dyn} \end{equation}
Here, the process $\{{\zeta }_{\epsilon }(s),{c}_{\epsilon }(s),p_{\epsilon }(s)\}_{s\in \lbrack t,T]}$ is another investment-consumption strategy defined by \begin{equation} \zeta _{\epsilon }(s)= \begin{cases} \bar{\zeta}(s),\quad s\in \lbrack t,T]\backslash E_{\epsilon ,t} \\ \zeta (s),\quad s\in E_{\epsilon ,t}, \end{cases} \label{1e} \end{equation} \begin{equation} c_{\epsilon }(s)= \begin{cases} \bar{c}(s),\quad s\in \lbrack t,T]\backslash E_{\epsilon ,t} \\ c(s),\quad s\in E_{\epsilon ,t}, \end{cases} \label{2e} \end{equation} \begin{equation} p_{\epsilon }(s)= \begin{cases} \bar{p}(s),\quad s\in \lbrack t,T]\backslash E_{\epsilon ,t} \\ p(s),\quad s\in E_{\epsilon ,t}, \end{cases} \label{3e} \end{equation} with $E_{\epsilon,t}=[t,t+\epsilon],$ and $\{{\zeta}(s),{c}(s),{p} (s)\}_{s\in E_{\epsilon,t} }$ is any strategy for which $\{{\zeta} _{\epsilon}(s),{c}_{\epsilon} (s), {p}_{\epsilon} (s)\}_{s\in[t,T]}$ is an admissible policy. \end{definition}
In other words, policies are Markov strategies such that unilateral deviations during an infinitesimally small time interval are penalized. Note that:
\begin{itemize} \item this does not mean that unilateral deviations during a finite interval of time are penalized as well: it is possible that deviating from the policy between $t_{1}$ and $t_{2}$ will be to the advantage of all the decision-makers operating between $t_{1}$ and $t_{2}.$
\item however, if a Markov strategy is not a policy, then it certainly will not be implemented, for at some point it will be to the advantage of some lone decision-maker to deviate, during a very small time interval, which is enough to compromise all the plans laid by his predecessors. \end{itemize}
So time-consistency in the sense of Definition \ref{finiteh} is a minimal requirement for rationality: policies are the only Markov strategies that the decision-maker should consider.
\section{The Value Function}
We now extend to this situation the notion of a value function, which is classical in optimal control.
Let $m\triangleq m(0),$ and $I_{1}, I_{3}$ be the inverse functions of $U'_{1}, U'_{3}.$
\begin{definition} \label{de1} Let $v:[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$ be a $ C^{1,2}$ function, concave in the second variable. We shall say that $v$ is a value function if we have: \begin{equation} v(t,x)=J(t,x,\bar{\zeta},\bar{c},\bar{p}). \label{100ie19(*} \end{equation} Here the admissible process $\{\bar{\zeta}(s),\bar{c}(s),\bar{p}(s)\}_{s\in \lbrack t,T]}$ is given by: \begin{equation} \bar{\zeta}(s)\!\triangleq\! {-\frac{\mu \frac{\partial v}{\partial x}(s,\bar{X}(s))}{ \sigma ^{2}\frac{\partial ^{2}v}{\partial x^{2}}(s,\bar{X}(s))}},\,\, \bar{c} (s)\!\triangleq\! I_{1}\!\left( \frac{\partial v}{\partial x}(s,\bar{X}(s))\right) ,\,\, \bar{p}(s)\!\triangleq\! \frac{1}{l(s)}\!\left[ I_{3}\left( \frac{1}{m}\frac{ \partial v}{\partial x}(s,\bar{X}(s))\right)\! -\!\eta (s)\bar{X}(s)\right]\!, \label{ie} \end{equation} where $\bar{X}(s)$ is the corresponding wealth process defined by the SDE \begin{eqnarray} \label{sde0} \!\!\!\!\!\!d\bar{X}(s)&=&\bigg[r\bar{X}(s)-\frac{ \mu ^{2}\frac{\partial v}{\partial x}(s,\bar{X}(s))}{\sigma ^{2}\frac{ \partial ^{2}v}{\partial x^{2}}(s,\bar{X}(s))}-I_{1}\left( \frac{ \partial v}{\partial x}(s,\bar{X}(s))\right) \!\! \\\notag &-& \!\!\frac{1}{l(s)}\left[ I_{3}\left( \frac{1}{m}\frac{\partial v}{\partial x}(s,\bar{X}(s))\right) -\eta (s)\bar{X}(s)\right] +i(s)\bigg]ds-\frac{\mu \frac{\partial v}{ \partial x}(s,\bar{X}(s))}{\sigma \frac{\partial ^{2}v}{\partial x^{2}}(s, \bar{X}(s))}dW(s) \label{1o*} \\ \bar{X}\left( t\right) \!\!\!\!\! &=&\!\!\!\!\!x. \end{eqnarray} \end{definition} The economic interpretation is very natural: if one applies the Markov strategy associated with $v$ by the relations (\ref{ie}), and computes the corresponding value of the investor's criterion starting from $x$ at time $t$ , one gets precisely $v\left( t,x\right) $. In other words this is fundamentally a fixed-point characterization.
Let us define the functions $F_{1},F_{2},F_{3}$ by: \begin{equation} \!\!\!\!\!F_{1}(t,x)\triangleq-\frac{\mu \frac{\partial v}{\partial x}(t,x)}{\sigma ^{2}\frac{\partial ^{2}v}{\partial x^{2}}(t,x)},\,F_{2}(t,x)\triangleq I_{1} \left( \frac{ \partial v}{\partial x}(t,x)\right),\, F_{3}(t,x)\triangleq \frac{1}{l(t)}\left[ I_{3} \left( \frac{1}{m}\frac{\partial v}{\partial x} (t,x)\right)-\eta (t)x\right]. \label{109con} \end{equation} Next we impose a technical assumption; for a $C^{1,2}$ function $f:[0,T]\times \mathbb{R}\rightarrow \mathbb{R},$ let us define the operator $L f$ by $$ L f(t,x)\triangleq \frac{\partial f}{\partial t}(t,x)+(r x+\mu F_{1}(t,x)-F_{2}(t,x)-F_{3}(t,x)+i(t))\frac{\partial f}{\partial x}(t,x)+\frac{\sigma^{2} F_{1}^{2}(t,x)}{2} \frac{\partial^2 f}{\partial x^2}(t,x).$$
\begin{assumption}\label{A2} Assume that the PDEs
\begin{equation}\label{09}
L f(t,x)=0,\quad f(s,x)=g(x),
\end{equation}
have a $C^{1,2}$ solution $f:[0,s]\times \mathbb{R}\rightarrow \mathbb{R}$ with exponential growth. Here $t<s\leq T,$ and $g(x)$ is
one of the functions
$$ U_{1}(F_{2}(s,x)):\,\, t<s\leq T,\quad U_{3}(\eta (s)x+l(s)F_{3}(s,x)):\,\, t<s\leq T,\quad U_{2}(x):\,\, s=T.$$
\end{assumption}
\section{Main Result}
The following theorem states the central result of our paper. It involves the notions of policy and value function, for which we gave economic intuition above.
\begin{theorem}\label{existence} Let $v$ be a value function which satisfies Assumption \ref{A2}. Then, $\{\bar{\zeta}(s),\bar{c}(s),\bar{p}(s)\}_{s\in \lbrack t,T]}$ given by \eqref{ie} is a policy. \end{theorem}
We proceed in two steps. First we show that the value function $v$ satisfies a partial differential equation with a non-local term and this is done in the following Lemma, which is proved in Appendix B.
\begin{lemma}\label{pDe}
The function $v$ solves the following equation
\begin{equation} \label{12dE} \!\!\!\!\!\!\!\!\frac{\partial {v}}{\partial t}(t,x)+\bigg(rx+ \mu F_1(t,x) -F_2(t,x)-F_3(t,x) +i(t) \bigg) \frac{\partial {v}}{\partial x}(t,x)+ \end{equation} $$\frac{\sigma^{2} F^{2}_{1}(t,x)} {2}\frac{\partial^{2} v}{\partial x^{2}}(t,x)+{U}_{1}(F_2(t,x))+m{U}_{3}(\eta(t)x+l(t)F_3(t,x))= $$
$$
\mathbb{E}\bigg[ \int_{t}^{T} \frac{\partial Q}{\partial{t}} (s,t) U_{1}(F_2(s, \bar{X}^{t,x}(s) ) )\,ds+\int_{t}^{T}\frac{\partial q}{\partial{t}} (s,t){U}_{3}(\bar{Z}^{t,x}(s))\,ds+n \frac{\partial Q}{\partial{t}}(T,t){U}_{2}(\bar{X}^{t,x}(T))\bigg], $$ with boundary condition $v(T,x)=n {U}_{2}(x),$ and the processes $\bar{X}$ of \eqref{0dyn}, and $ \bar{Z}^{t,x} (s)\triangleq \eta(s)\bar{X}^{t,x} (s)+ l(s) F_3(s, \bar{X}^{t,x} (s)).$ \end{lemma}
We now proceed to the second step. In view of the concavity of $v$ in the variable $x,$ and the definition of $(F_1, F_2, F_3)$ (see \eqref{109con}), equation \eqref{12dE} can be rewritten as \begin{equation}\label{13ddE} \!\!\!\!\!\!\!\!\frac{\partial v}{\partial t}(t,x)+\sup_{\zeta ,c,p}\bigg[ \bigg(rx+\mu \zeta -c-p+i(t)\bigg) \frac{\partial v}{\partial x}(t,x)+$$$$\frac{1}{2}\sigma ^{2}\zeta ^{2} \frac{\partial ^{2}v}{\partial x^{2}}(t,x)+U_{1}(c)+m{U}_{3}(\eta(t)x+l(t)p)\bigg] = \end{equation} $$
\mathbb{E}\bigg[ \int_{t}^{T} \frac{\partial Q}{\partial{t}} (s,t)U_{1}(F_2(s, \bar{X}^{t,x}(s) ) )\,ds+\int_{t}^{T}\frac{\partial q}{\partial{t}} (s,t){U}_{3}(\bar{Z}^{t,x}(s))\,ds+n \frac{\partial Q}{\partial{t}}(T,t){U}_{2}(\bar{X}^{t,x}(T))\bigg]. $$ We notice that
$$ {J}(t,x,\zeta_{\epsilon},c_{\epsilon}, p_{\epsilon})- J(t,x,\bar{\zeta},\bar{c}, \bar{p})=$$$$
\mathbb{E}\bigg[ \int_{t}^{t+\epsilon} Q(s,t)[U_{1}(c(s))-U_{1}(F_2(s,{X}^{t,x}(s)))]\,ds\bigg]+$$$$
\!\!\!\!\! \mathbb{E}\bigg[ \int_{t}^{t+\epsilon} q(s,t)[{U}_{3}(Z^{t,x}(s))-{U}_{3}(\bar{Z}^{t,x}(s))]\,ds \bigg]+$$$$
\!\!\!\!\! \mathbb{E}[ v(t+\epsilon,{X}^{t,x}(t+\epsilon))-v(t+\epsilon,\bar{X} ^{t,x}(t+\epsilon))]+$$$$ \!\!\!\!\! \mathbb{E}\left[\int_{t+\epsilon}^{T} [Q(s,t)-Q(s,t-\epsilon)][U_{1}(F_2(s, \bar{X}^{t,x}(s)))-U_{1}(F_2(s,{X}^{t,x}(s)))]\,ds\right]+$$$$\!\!\!\!\! \mathbb{E}\left[\int_{t+\epsilon}^{T} \!\!\![q(s,t)-q(s,t-\epsilon)][{U}_{3}(Z^{t,x}(s))-{U}_{3}(\bar{Z}^{t,x}(s))]\,ds\right]+$$$$\!\!\!\!\! \mathbb{E}\left[ n [Q(T,t)-Q(T,t-\epsilon)] [ {U}_{2}(X^{t,x}(T))- {U}_{2}(\bar{X}^{t,x}(T))]\right]. $$
The RHS of this equation has six terms and we will treat each of these terms separately:
$1.$ In the light of inequality \eqref{189} and the Lebesgue Dominated Convergence Theorem
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{ \mathbb{E}\bigg[ \int_{t}^{t+\epsilon} Q(s,t)[U_{1}(c(s))-U_{1}(F_2(s,{X}^{t,x}(s)))]\,ds\bigg]}{\epsilon}$$$$= [U_{1}(c(t))-U_{1}(F_2(t,x))]. \end{equation*}
$2.$ In the light of inequality \eqref{189} and the Lebesgue Dominated Convergence Theorem
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{\mathbb{E}\bigg[ \int_{t}^{t+\epsilon} q(s,t)[{U}_{3}(Z^{t,x}(s))-{U}_{3}(\bar{Z}^{t,x}(s))]\,ds \bigg] }{\epsilon}= m [{U}_{3}(\eta(t)x+p(t)l(t))-{U}_{3}(\eta(t)x+F_3(t,x) l(t))]. \end{equation*}
$3.$ One has
$$ \mathbb{E}[v(t+\epsilon,\bar{X}^{t,x}(t+\epsilon))- v(t+\epsilon,{X} ^{t,x}(t+\epsilon))]=$$$$\mathbb{E}[ v(t+\epsilon,\bar{X}^{t,x}(t+\epsilon))- v(t,x)]-\mathbb{E} [v(t+\epsilon,{X}^{t,x}(t+\epsilon))- v(t,x)]. $$ Moreover $$ \mathbb{E}[v(t+\epsilon,\bar{X}^{t,x}(t+\epsilon))- v(t,x)]=\mathbb{E} \int_{t}^{t+\epsilon}d[ v(u, \bar{X}^{t,x}(u))]. $$ It\^{o}'s formula yields
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{\mathbb{E}\int_{t}^{t+\epsilon} d[ v(u, \bar{X}^{t,x}(u))] }{\epsilon}=$$$$\bigg[ \frac{\partial {v}}{\partial t}(t,x)+\bigg(rx+\mu F_1(t,x) -F_2(t,x)-F_3(t,x) +i(t) \bigg) \frac{\partial {v}}{\partial x}(t,x) + \frac{\sigma^{2} F^{2}_{1}(t,x)} {2}\frac{\partial^{2} v}{\partial x^{2}}(t,x)\bigg]. \end{equation*}
Similarly $$ {\lim_{\epsilon\downarrow 0}}\frac{ \mathbb{E}[ v(t+\epsilon,{X}^{t,x}(t+\epsilon))- v(t,x)] }{\epsilon}= {\lim_{\epsilon\downarrow 0}}\frac{\mathbb{E}\int_{t}^{t+\epsilon} d[ v(u, {X}^{t,x}(u))] }{\epsilon}=$$$$ \bigg[ \frac{\partial v}{\partial t}(t,x)+ \bigg(rx+\mu \zeta(t) -c(t)-p(t)+i(t)\bigg) \frac{\partial v}{\partial x}(t,x)+\frac{1}{2}\sigma ^{2}\zeta ^{2} (t) \frac{\partial ^{2}v}{\partial x^{2}}(t,x) \bigg]. $$
$4.$ In the light of inequality \eqref{189} and the Lebesgue Dominated Convergence Theorem it follows that
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{ \mathbb{E}\left[\int_{t+\epsilon}^{T} [Q(s,t)-Q(s,t-\epsilon)][U_{1}(F_2(s, \bar{X}^{t,x}(s)))-U_{1}(F_2(s,{X}^{t,x}(s)))]\,ds\right]}{\epsilon}=0. \end{equation*}
$5.$ Similarly
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{\mathbb{E}\left[\int_{t+\epsilon}^{T} [q(s,t)-q(s,t-\epsilon)][{U}_{3}(Z^{t,x}(s))-{U}_{3}(\bar{Z}^{t,x}(s))]\,ds\right]}{\epsilon}=0. \end{equation*}
$6.$ Finally, by the same token
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\frac{\mathbb{E}\left[ n [Q(T,t)-Q(T,t-\epsilon)] [ {U}_{2}(X^{t,x}(T))- {U}_{2}(\bar{X}^{t,x}(T))]\right]}{\epsilon}=0. \end{equation*}
Therefore
$$ {\lim_{\epsilon\downarrow 0}}\frac{J(t,x,\bar{\zeta},\bar{c}, \bar{p})- {J}(t,x,\zeta_{\epsilon},c_{\epsilon}, p_{\epsilon})}{\epsilon}= $$$$ \!\!\!\!\!\!\!\!\!\! \bigg[ \frac{\partial {v}}{\partial t}(t,x)+\bigg(rx+\mu F_1(t,x) -F_2(t,x)-F_3(t,x) +i(t) \bigg) \frac{\partial {v}}{\partial x}(t,x)+ $$$$ \frac{\sigma^{2} F^{2}_{1}(t,x)} {2}\frac{\partial^{2} v}{\partial x^{2}}(t,x)+U_{1}(F_2(t,x))+ m {U}_{3}(\eta(t)x+l(t)F_3(t,x)) \bigg]- $$$$ \!\!\!\!\!\!\!\!\!\! \bigg[ \frac{\partial v}{\partial t}(t,x)+ \bigg(rx+\mu \zeta(t) -c(t)-p(t)+i(t)\bigg) \frac{\partial v}{\partial x}(t,x)+$$$$\frac{1}{2}\sigma ^{2}\zeta ^{2}(t) \frac{\partial ^{2}v}{\partial x^{2}}(t,x)+U_{1}(c(t))+m {U}_{3}(\eta(t)x+l(t)p(t))\bigg]\geq 0, $$
where the last inequality comes from \eqref{12dE} and \eqref{13ddE}.
\begin{flushright} $\square$ \end{flushright}
\section{CRRA Preferences} Finding a value function is a complicated problem. We are able to deal with the case of power-type utilities, that is (with some abuse of notation) $U_{1}(x)=U_{2}(x)=U_{3}(x)=U_{\gamma} (x)\triangleq\frac{x^{\gamma}}{\gamma},$ with $\gamma<1.$ In this case we look for a value function $v$ of the form \begin{equation} v(t,x)=a(t)U_{\gamma }(x+b(t)), \label{vo*} \end{equation} where the functions $a(t),$ $b(t)$ are to be found. We consider here the case $\gamma \neq 0$ (the case of logarithmic utility will be treated separately). In the light of equations \eqref{109con} one gets \begin{eqnarray} F_{1}(t,x) &=&\frac{\mu (x+b(t))}{\sigma ^{2}(1-\gamma )},\, \,F_{2}(t,x)=[a(t)]^{\frac{1}{\gamma -1}}(x+b(t)), \label{0109con} \\ F_{3}(t,x) &=&\frac{1}{l(t)}\left[ ([\frac{a(t)}{m}]^{\frac{1}{\gamma -1} }-\eta (t))x+[\frac{a(t)}{m}]^{\frac{1}{\gamma -1}}b(t)\right] . \label{00109con} \end{eqnarray} By \eqref{1o*} the associated wealth process has the following dynamics: \begin{eqnarray*} \!\!\!\!\!d\bar{X}(s) &=&\bigg[\left( r+\frac{\eta (s)}{l(s)}\right) \bar{X} (s)+\frac{\mu ^{2}}{\sigma ^{2}(1-\gamma )}(\bar{X}(s)+b(s))\\&-&(a(s))^{\frac{1 }{\gamma -1}}\left( 1+\frac{1}{m^{\frac{1}{\gamma -1}}l(s)}\right) (\bar{X} (s)+b(s))\bigg]ds \\ &&+i(s)ds+\frac{\mu (\bar{X}(s)+b(s))}{\sigma (1-\gamma )}dW(s). \end{eqnarray*}
Let us define the process ${Y}(s)\triangleq \bar{X}(s)+b(s)$ which has the dynamics \begin{eqnarray}\label{?!} d{Y}(s) &=&\bigg[(r+\frac{\eta (s)}{l(s)})Y(s)+\frac{\mu ^{2}}{\sigma ^{2}(1-\gamma )}Y(s)-[a(s)]^{\frac{1}{\gamma -1}}\left( 1+\frac{1}{m^{\frac{1 }{\gamma -1}}l(s)}\right) Y(s) \\\notag &&+i(s)+b^{\prime }(s)-(r+\frac{\eta (s)}{l(s)})b(s)\bigg]ds+\frac{\mu Y(s)}{ \sigma (1-\gamma )}dW(s). \end{eqnarray} For considerations that will become clear later on we choose $b(s)$ such that \begin{equation*} i(s)+b^{\prime }(s)-(r+\frac{\eta (s)}{l(s)})b(s)=0,\,\,\,\mbox{and} \,\,\,b(T)=0. \end{equation*} By solving this ODE we get \begin{equation} b(s)=\int_{s}^{T}i(u)e^{-\int_{s}^{u}\left( r+\frac{\eta (x)}{l(x)}\right) \,dx}du, \label{?} \end{equation} the discounted value of future income. Solving for the process $Y(s)$ we get
\begin{eqnarray*} Y(s)&=&Y(t) \exp\bigg(\int_{t}^{s}\bigg(r+\frac{\mu^2}{2\sigma^2 (1-\gamma)^2}+ \frac{\eta(u)}{l(u)}-(a(u))^{\frac{1}{\gamma-1}} \bigg(1+\frac{1}{m^{\frac{1 }{\gamma-1}} l(u)}\bigg) \bigg)du\\&+& \frac{\mu(W(s)-W(t))}{\sigma(1-\gamma)} \bigg) \end{eqnarray*}
Therefore
\begin{eqnarray*} \bar{X}^{t,x}(T)&=&(x+b(t))\exp\bigg(\int_{t}^{T}\bigg(r+\frac{\mu^2}{2\sigma^2 (1-\gamma)^2}+\frac{\eta(u)}{l(u)}-(a(u))^{\frac{1}{\gamma-1}} \bigg(1+\frac{ 1}{m^{\frac{1}{\gamma-1}} l(u)}\bigg) \bigg)du\\&+& \frac{\mu(W(T)-W(t))}{ \sigma(1-\gamma)} \bigg). \end{eqnarray*} By plugging $v$ of \eqref{vo*} (with $(F_{1},F_{2},F_{3})$ of \eqref{0109con}, \eqref{00109con}) into \eqref{100ie19(*} and \eqref{1o*}, we obtain the following integral equation (IE) for $a(t)$ \begin{eqnarray} a(t)\!\! \!&=&\!\!\! \int_{t}^{T}\!\left[ Q(s,t)+m^{\frac{\gamma }{1-\gamma }}q(s,t)\right] (a(s))^{\frac{\gamma }{\gamma -1} }e^{K(s-t)+\left( \int_{t}^{s}\frac{\gamma \eta (z)}{l(z)}-\gamma (a(z))^{ \frac{1}{\gamma -1}}\left( 1+\frac{1}{m^{\frac{1}{\gamma -1}}l(z)}\right) \,dz\right) }\,ds \label{IE0} \\ &+&nQ(T,t)e^{K(T-t)+\left( \int_{t}^{T}\frac{\gamma \eta (z)}{l(z)}-\gamma (a(z))^{\frac{1}{\gamma -1}}\left( 1+\frac{1}{m^{\frac{1}{\gamma -1}}l(z)} \right) dz\right) },\qquad a(T)=n. \notag \end{eqnarray} with \begin{equation}\label{Kk} K\triangleq \gamma \left( r+\frac{\mu ^{2}}{2(1-\gamma )\sigma ^{2}}\right) . \end{equation}
Let us summarize this finding:
\begin{lemma}\label{op0} \label{int} Let $a(t)$ be a solution of the fixed-point problem \eqref{IE0}. Define $b(t)$ by \eqref{?}. Then $v(t,x)=a(t)U_{\gamma }(x+b(t))$ is a value function. \end{lemma}
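The function $b(t)$ of \eqref{?} is the discounted value of future income. As a quick check, assume a constant income rate $i$ and a constant coefficient $\rho = r+\eta/l$ (illustrative values); then $b$ is explicit and one can verify numerically that it solves the ODE $i+b'(s)-\rho b(s)=0$ with $b(T)=0$:

```python
import math

i, rho, T = 0.04, 0.06, 10.0  # constant income and r + eta/l (illustrative)

def b(s):
    """b(s) = integral_s^T i * exp(-rho (u - s)) du = (i/rho)(1 - e^{-rho(T-s)})."""
    return (i / rho) * (1.0 - math.exp(-rho * (T - s)))

# Verify i + b' - rho*b = 0 via a central finite difference, and b(T) = 0.
eps = 1e-6
for s in (0.0, 3.0, 7.0):
    db = (b(s + eps) - b(s - eps)) / (2 * eps)
    assert abs(i + db - rho * b(s)) < 1e-6
assert abs(b(T)) < 1e-12
```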
We turn our attention to the integral equation \eqref{IE0}. Set
\begin{equation}\label{Mm} M(z)\triangleq 1+\frac{1}{m^{\frac{1}{\gamma -1}}l(z)} \end{equation}
\begin{assumption} \label{A1} We require that: \begin{equation} \min_{t\in \left[ 0,T\right] }(1-\gamma M(t)+\lambda (t))\geq 0. \label{&^} \end{equation} \end{assumption}
Assumption \ref{A1} is met if $\gamma \leq 0.$ If $m=1$ and $l(t)= \frac{1}{\lambda (t)},$ then Assumption \ref{A1} is also satisfied (this is the situation considered by \cite{Pliska}). In the case when $\min_{t\in \left[ 0,T\right] }(1-\gamma M(t)+\lambda (t))<0$ it can happen that $a(t)$ reaches $0,$ which leads to unbounded consumption.
The following Proposition is proved in Appendix C.
\begin{proposition}\label{pio} \label{existenceODE} If Assumption \ref{A1} is satisfied, then there exists a unique global $C^{1}$ solution of the integral equation \eqref{IE0}. \end{proposition}
In other words, for the problem under consideration, there always exists a value function of the special type $v(t,x)=a(t)U_{\gamma }(x+b(t))$ (note that there may be others as well). We now proceed to deduce the existence of policies.
\begin{theorem}\label{existence1} Let $v$ be the value function of Lemma \ref{op0}. Then, $\{\bar{\zeta}(s),\bar{c}(s),\bar{p}(s)\}_{s\in \lbrack t,T]}$ given by \eqref{ie} is a policy. \end{theorem} The proof follows from Theorem \ref{existence}, Lemma \ref{op0} and Proposition \ref{pio}, as long as we show that $\{\bar{\zeta}(s),\bar{c}(s),\bar{p}(s)\}_{s\in \lbrack t,T]}$ is an admissible strategy and that Assumption \ref{A2} is met. The first claim follows once \eqref{189} is established. Taking into account the special form of $v,$ $U_{\gamma},$ and $\bar{X}(s)+b(s)$ (see \eqref{?!}), the Burkholder-Davis-Gundy inequality yields \eqref{189}. Next, showing that Assumption \ref{A2} holds boils down to constructing a $C^{1,2}$ solution to certain PDEs. Again by exploiting the special structure, one can construct solutions (for the PDEs of Assumption \ref{A2}) of the form $\varphi(t) U_{\gamma}(x+b(t)),$ with $\varphi(t)$ the solution of some ODE.
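In practice, the solution $a(t)$ of \eqref{IE0} can be computed by the backward Riemann-sum discretization mentioned in the introduction: put a grid on $[0,T]$, start from $a(T)=n$, and at each grid point approximate the inner and outer integrals by right-endpoint Riemann sums, which involve only already-computed values. A minimal Python sketch, with all parameter values illustrative (constant hazard, hyperbolic discount, $\gamma=-1$ so that Assumption \ref{A1} holds):

```python
import math

# All parameter values are illustrative; gamma < 0 so Assumption A1 holds.
gamma, n_w = -1.0, 1.0            # CRRA exponent and terminal weight n
r, mu, sigma = 0.05, 0.07, 0.2    # market coefficients
lam, m_w, eta, ell = 0.02, 1.0, 0.5, 50.0  # hazard, Pareto weight, eta, l
k1, k2, T = 5.0, 3.0, 4.0         # hyperbolic discount and horizon

K = gamma * (r + mu**2 / (2 * (1 - gamma) * sigma**2))
M = 1.0 + 1.0 / (m_w ** (1.0 / (gamma - 1.0)) * ell)

def h(t):
    return (1.0 + k1 * t) ** (-k2 / k1)

def Q(s, t):
    return h(s - t) * math.exp(-lam * (s - t))

def q(s, t):
    return m_w * lam * h(s - t) * math.exp(-lam * (s - t))

def solve_a(N=400):
    """Backward Riemann-sum discretization of the integral equation for a(t)."""
    dt = T / N
    a = [0.0] * (N + 1)
    a[N] = n_w
    for i in range(N - 1, -1, -1):
        t = i * dt
        # Accumulate the exponent integral and the outer integral with
        # right-endpoint sums, so only known values a[j], j > i, are used.
        expo, total = 0.0, 0.0
        for j in range(i + 1, N + 1):
            s = j * dt
            expo += (gamma * eta / ell
                     - gamma * a[j] ** (1.0 / (gamma - 1.0)) * M) * dt
            kernel = Q(s, t) + m_w ** (gamma / (1.0 - gamma)) * q(s, t)
            total += kernel * math.exp(K * (s - t) + expo) * dt
        a[i] = total + n_w * Q(T, t) * math.exp(K * (T - t) + expo)
    return a

a = solve_a()
assert a[-1] == n_w and min(a) > 0.0
```

This is only the basic Riemann-sum idea, at cost $O(N^2)$; the three-step scheme discussed in the Numerical Results section refines this discretization.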
\subsection{The Case of Logarithmic Utility }
In this special case we can solve the analogue of the integral equation \eqref{IE0} in closed form. Indeed, for $\gamma=0$ (the case of logarithmic utility, with the convention $U_{0}(x)=\ln x$) we make the ansatz \begin{equation} \label{vo**} v(t,x)=a(t)U_{\gamma}(x+b(t))+d(t). \end{equation}
Then \eqref{IE0} becomes
\begin{equation}\label{I1E0} a(t)= \int_{t}^{T} [Q(s,t)+q(s,t)]\,ds+n Q(T,t), \end{equation} with $b(t)$ given in \eqref{?} and an appropriate choice of function $d(t)$. The equilibrium policy is then given through \eqref{109con} which becomes
\begin{eqnarray} F_{1}(t,x) &=&\frac{\mu (x+b(t))}{\sigma ^{2}},\, \,F_{2}(t,x)=[a(t)]^{-1}(x+b(t)), \label{10109con} \\ F_{3}(t,x) &=&\frac{1}{l(t)}\left[ ([\frac{a(t)}{m}]^{-1} -\eta (t))x+[\frac{a(t)}{m}]^{-1}b(t))\right] . \label{100109con} \end{eqnarray}
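In the logarithmic case the fixed point disappears: $a(t)$ is given explicitly by \eqref{I1E0} and can be evaluated by direct quadrature. A sketch with illustrative parameters (hyperbolic discount, constant hazard and Pareto weight):

```python
import math

# Illustrative parameters: hyperbolic discount, constant hazard and weight.
k1, k2 = 5.0, 3.0
lam, m_w, n_w, T = 0.02, 1.0, 1.0, 4.0

def h(t):
    return (1.0 + k1 * t) ** (-k2 / k1)

def Q(s, t):
    return h(s - t) * math.exp(-lam * (s - t))

def q(s, t):
    return m_w * lam * h(s - t) * math.exp(-lam * (s - t))

def a_log(t, n_steps=20_000):
    """a(t) = integral_t^T [Q(s,t) + q(s,t)] ds + n Q(T,t), midpoint rule."""
    if t >= T:
        return n_w
    ds = (T - t) / n_steps
    integral = sum(Q(t + (j + 0.5) * ds, t) + q(t + (j + 0.5) * ds, t)
                   for j in range(n_steps)) * ds
    return integral + n_w * Q(T, t)

assert abs(a_log(T) - n_w) < 1e-12
assert a_log(0.0) > a_log(2.0) > 0.0
```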
\begin{remark} Let us notice that the amount invested in the stock is the same as in the standard Merton problem with exponential discounting. This somewhat surprising result is explained by the constant return and volatility of the stock. We conjecture that in a stochastic volatility model these amounts would differ. The consumption and insurance policies differ from the optimal ones except in the case of exponential discounting. In fact, this is the topic of the next subsection. \end{remark}
\subsection{The Classical Merton Problem}
The case of exponential discounting, $h(t)=\hat{h}(t)=e^{-\rho t}$ and constant Pareto weight $m(t)=m,$
deserves special consideration. In that case, the equation
\eqref{12dE} becomes the classical HJB equation given by dynamic programming
\begin{equation} \label{1112dE} \!\!\!\!\!\!\!\!-(\lambda(t)+\rho){v}(t,x)+\frac{\partial {v}}{\partial t}(t,x)+\bigg(rx+\mu F_1(t,x) -F_2(t,x)-F_3(t,x) +i(t) \bigg) \frac{\partial {v}}{\partial x}(t,x)+$$$$ \frac{\sigma^{2} F^{2}_{1}(t,x)} {2}\frac{\partial^{2} v}{\partial x^{2}}(t,x)+U_{\gamma}(F_2 (t,x)) +m {U}_{\gamma}(\eta(t)x+l(t)F_3(t,x))=0, \end{equation} with the boundary condition $v(T,x)=n {U}_{\gamma}(x),$ and $(F_1, F_2, F_3)$ given through \eqref{109con}. Therefore for the case of exponential discounting the optimal strategy given by dynamic programming coincides with the policy (given through \eqref{100ie19(*}, \eqref{1o*} \eqref{109con}). This non-linear equation can be linearized by Fenchel-Legendre transform and therefore it can be shown that it has a unique solution. Moreover, it can be computed by the ansatz \eqref{vo*}. The function $a(t)$ solves an ODE which can be solved explicitly to yield $$a(t)=\left[ n^{\frac{1}{1-\gamma}}e^{\int_{t}^{T} \frac{K+\frac{\gamma\eta(s)}{l(s)}-\rho-\lambda(s)}{1-\gamma} ds} +\int_{t}^{T} \left( \frac{1+\lambda(u)-\gamma M(u)}{1-\gamma}\right) e^{\int_{t}^{u} \frac{K+\frac{\gamma\eta(s)}{l(s)}-\rho-\lambda(s)}{1-\gamma} ds} du\right]^{1-\gamma},$$
with $K$ given by \eqref{Kk} and $M(z)$ given by \eqref{Mm}.
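Under the additional simplification that all coefficients are constant, this closed-form expression can be evaluated directly. The Python sketch below is illustrative only: $K$, $M$, $\lambda$, $\eta$ and $l$ are frozen at hypothetical constant values (placeholders for \eqref{Kk}, \eqref{Mm} and the model coefficients), and the integral is approximated by the trapezoidal rule. The boundary condition $a(T)=n$ serves as a sanity check.

```python
import math

# Placeholder constants: K and M stand in for \eqref{Kk} and \eqref{Mm};
# lambda, eta, l are frozen at hypothetical constant values.
gamma, rho, n, T = -1.0, 0.05, 2.0, 4.0
K, M, lam, eta, l = 0.06, 1.0, 0.01, 1.0, 100.0

def a(t, steps=2000):
    """Evaluate the closed-form a(t) by trapezoidal quadrature."""
    g = (K + gamma * eta / l - rho - lam) / (1.0 - gamma)  # constant exponent rate
    first = n ** (1.0 / (1.0 - gamma)) * math.exp(g * (T - t))
    coef = (1.0 + lam - gamma * M) / (1.0 - gamma)
    step = (T - t) / steps
    total = 0.0
    for i in range(steps + 1):
        u = t + i * step
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * coef * math.exp(g * (u - t))
    return (first + total * step) ** (1.0 - gamma)

print(a(T), a(0.0))   # a(T) recovers the boundary value n
```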
\subsection{The Merton Problem with Hyperbolic Discounting} In this section we assume that the decision-maker gets no income ($i(t)=0$), he/she does not buy life insurance, and there is no possibility of him/her dying before $T.$ Moreover, we assume that discounting is hyperbolic, i.e., $h(t)=(1+k_1 t)^{-\frac{k_2}{k_1}},$ with $k_1, k_2$ positive. In \cite{LoPre} it is shown that CRRA type utilities and hyperbolic or exponential discounting exhibit \textit{the common difference effect}. Due to this effect, people are more sensitive to a given time delay if it occurs earlier rather than later. More precisely, if a person is indifferent between receiving $x>0$ immediately, and $y>x$ at some later time $s,$ then he or she will strictly prefer the better outcome if both outcomes are postponed by some time $t:$ \begin{equation*} U(x)=h(s)U(y),\qquad \mbox{implies\,\,\,that}\qquad U(x)h(t)<h(t+s)U(y). \end{equation*} Furthermore, they assume that the delay needed to compensate for the larger outcome is a linear function of time, that is \begin{equation*} U(x)=h(s)U(y),\qquad \mbox{implies\,\,\,that}\qquad U(x)h(t)=h(kt+s)U(y), \end{equation*} for some constant $k.$ They show that the only solution of this functional equation is $U(x)=\frac{x^{\gamma}}{\gamma}.$ We pay special attention to this case because it explains the consumption puzzle; there is a satiation point in the consumption rate before maturity, and exponential discounting cannot capture it (in fact, with exponential discounting the optimal consumption rate is either increasing or decreasing at all times, depending on the relationship between the discount rate and the interest rate). Moreover, it shows that optimal strategies and policies are not observationally equivalent. In the following we illustrate this point by a numerical experiment.
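The common difference effect can be checked numerically for hyperbolic discounting and CRRA utility. The sketch below uses hypothetical parameters, with $\gamma\in(0,1)$ so that utilities are positive: it solves the indifference relation $U(x)=h(s)U(y)$ for $x$ and verifies that postponing both outcomes by any $t>0$ makes the larger, later outcome strictly preferred.

```python
import math

# Hypothetical parameters: hyperbolic discount and CRRA utility with
# gamma in (0,1), so that utilities are positive.
k1, k2, gamma = 5.0, 2.0, 0.5

def h(t):                      # hyperbolic discount factor
    return (1.0 + k1 * t) ** (-k2 / k1)

def U(x):                      # CRRA utility x^gamma / gamma
    return x ** gamma / gamma

y, s = 10.0, 1.0
x = (gamma * h(s) * U(y)) ** (1.0 / gamma)   # indifference: U(x) = h(s) U(y)
assert abs(U(x) - h(s) * U(y)) < 1e-12

# postponing both outcomes by t > 0: the later, larger outcome is now preferred
for t in (0.5, 1.0, 2.0):
    assert h(t) * U(x) < h(t + s) * U(y)
print("common difference effect holds for x =", round(x, 4), "< y =", y)
```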
We consider one stock following a geometric Brownian motion with drift $\alpha=0.12,$ volatility $ \sigma=0.2,$ interest rate $r=0.05,$ and horizon $T=4.$ This set of parameters is chosen for illustration. Inspired by \cite{Lai}, let the discount function $h(x)=(1+k_1 x)^{-\frac{k_2}{k_1}}$ be one of three hyperbolic discount choices: case $1.\,\, k_1=5$; case $2.\,\, k_1=10$; case $3.\,\, k_1=15;$ in each case $k_2$ is chosen such that $h(1)=0.3.$ We set $ \gamma=-1$ (this choice reflects risk aversion). Let us consider three cases: $n=1, 10, 30.$ We apply the numerical scheme developed in the Numerical Results Section.
As we see from these graphs, the consumption rate policy is increasing up to a satiation point, after which it is decreasing. This phenomenon is observed in the data (people consume more and more up to some age, around 60 years, after which consumption decreases); it may be explained by a drop in income. As the parameter $n$ gets higher (when the agent gets more utility from terminal wealth), the satiation point comes earlier.
Lemma \ref{13} shows that the consumption rate policy is not always monotone.
\begin{lemma}\label{nonMon} \label{13} One can find a hyperbolic discount function such that the consumption rate policy is neither increasing nor decreasing in time. \end{lemma}
Appendix D proves this Lemma.\\\\
\subsection{The Stationary Case}
Let us now consider the stationary problem. The coefficients in the model are assumed constant, taking their stationary values, i.e., $n=0,$ $m(t)=m,$ $l(t)=l,$ $\lambda (t)=\lambda,$ $i(t)=i,$ $\eta(t)=\eta,$ and $T=\infty.$ For simplicity
we assume that
$$q(s,t)= m \lambda \exp\{-(\lambda+r_1)(s-t)\},\qquad Q(s,t)= \exp\{-(\lambda+r_2)(s-t)\},$$ for some $r_1$ and $r_2$ positive. Before engaging in the formal definition, let us point out the following key fact. For an admissible time homogeneous (stationary) policy process $\{{\zeta}(t),{c}(t), p(t)\}_{t\in[0,\infty)}$ and its corresponding wealth process $\{X(t)\}_{t\in[0,\infty)}$ (see \eqref{equ:wealth-one}) the expected utility functional $J(t,x,\zeta,c)$ is time homogeneous, i.e., \begin{eqnarray*} J(t,x,\zeta,c,p)&=&J(0,x,\zeta,c,p) \\\notag &\triangleq& \mathbb{E}\left[\int_{0}^{\infty}\!\!\!\!\!\! \exp\{-(\lambda+r_2)s\}U(c(s))\,ds+ \int_{0}^{\infty}\!\!\!\!\!\! m\lambda \exp\{-(\lambda+r_1)s\}U(Z^{0,x}(s))\,ds\right]. \end{eqnarray*} The intuition is that the clock can be reset so that the expected utility criterion takes zero as the starting point. We define policies similarly to the finite horizon case.
\begin{definition}\label{finite^}
An admissible trading strategy $\{\bar{\zeta}(s),\bar{c}(s), \bar{p}(s) \}_{s\in \lbrack 0,\infty]}$ is a policy if there exists a map $F=(F_{1},F_{2}, F_3):\mathbb{R}\rightarrow \mathbb{R}\times \lbrack 0,\infty )\times \mathbb{R}$ such that for any $x>0$ \begin{equation} {\lim \inf_{\epsilon \downarrow 0}}\frac{J(0,x,\bar{\zeta},\bar{c}, \bar{p})-J(0,x,\zeta _{\epsilon },c_{\epsilon }, p_{\epsilon})}{\epsilon }\geq 0, \label{opt} \end{equation} where
\begin{equation} \bar{\zeta}(s)={F_{1}(\bar{X}(s))},\quad \bar{c}(s)= F_{2}(\bar{X}(s)), \quad \bar{p}(s)=F_{3}(\bar{X}(s)) \label{^0000eq} \end{equation} and the wealth process $\{\bar{X}(s)\}_{s\in \lbrack 0,\infty]}$ is a solution of the stochastic differential equation (SDE) \begin{equation} d\bar{X}(s)=[r\bar{X}(s)+\mu F_{1}(\bar{X}(s))-F_{2}(\bar{X}(s))-F_{3}(\bar{X}(s))+i(s)]ds+\sigma F_{1}(\bar{X} (s))dW(s). \label{^0dyn} \end{equation}
Moreover, the process $\{{\zeta}_{\epsilon}(s),{c}_{\epsilon}(s), p_{\epsilon}(s)\}_{s\in[0,\infty]}$ is another investment-consumption strategy defined by \begin{equation} \label{1e} \zeta_{\epsilon}(s)= \begin{cases} \bar{\zeta}(s),\quad s\in[0,\infty]\backslash E_{\epsilon} \\ \zeta(s), \quad s\in E_{\epsilon}, \end{cases} \end{equation}
\begin{equation} \label{2e} c_{\epsilon}(s)= \begin{cases} \bar{c}(s),\quad s\in[0,\infty]\backslash E_{\epsilon} \\ c(s), \quad s\in E_{\epsilon}, \end{cases} \end{equation} \begin{equation} \label{3e} p_{\epsilon}(s)= \begin{cases} \bar{p}(s),\quad s\in[0,\infty]\backslash E_{\epsilon} \\ p(s), \quad s\in E_{\epsilon}, \end{cases} \end{equation} with $E_{\epsilon}=[0,\epsilon],$ and $\{{\zeta}(s),{c}(s),{p}(s)\}_{s\in E_{\epsilon} }$ is any strategy for which $\{{\zeta}_{\epsilon}(s),{c} _{\epsilon} (s), {p}_{\epsilon} (s)\}_{s\in[0,\infty]}$ is an admissible policy. \end{definition}
Similarly we define the value function. \begin{definition}\label{de1^}
A function $v:\mathbb{R}\rightarrow \mathbb{R}$ is a value function if it solves the following system of equations
\begin{equation} v(x)=J(0,x,\bar{\zeta},\bar{c}, \bar{p}) \label{00ie19(*} \end{equation}
$$\bar{\zeta}(s)={F_{1}(\bar{X}(s))},\quad \bar{c}(s)= F_{2}(\bar{X}(s)), \quad \bar{p}(s)=F_{3}(\bar{X}(s)), $$
\begin{equation}\label{o*} d\bar{X}(s)=[r\bar{X}(s)+\mu F_{1}(\bar{X}(s))-F_{2}(\bar{X} (s))- F_{3}(\bar{X}(s))+i(s)]ds+\sigma F_{1}(\bar{X}(s))dW(s),\end{equation}
\begin{equation}\label{))+} F_{1}(x)=-\frac{\mu v'(x)}{\sigma ^{2} v''(x)},\,\,F_{2}(x)=\left(v'(x)\right)^{\frac{1}{\gamma-1}} ,\,\,F_{3}(x)=\frac{1}{l} \left[ \left( \frac{1}{m}v'(x)\right)^{\frac{1}{\gamma-1}}-\eta x\right]. \end{equation} \end{definition}
Let us look for the value function of the form \begin{equation} \label{vo!} v(x)=aU_{\gamma}(x+b), \end{equation}
for some constants $a$ and $b.$ By solving $(3.4)$, we get $b=\frac{i}{r+\frac{\eta}{l}}$. Let $\beta=1+\frac{m^{\frac{1}{1-\gamma}}}{l}$ and let $K$ be given by \eqref{Kk}. The constant $a$ should solve the following equation
\begin{equation}\label{eq_a} a=\frac{a^{\frac{\gamma}{\gamma-1}}} {\lambda+r_1-K-\frac{\gamma\eta}{l}+\gamma\beta a^{\frac{1}{\gamma-1}}} + m \lambda \frac{ (\frac{a}{m})^{\frac{\gamma}{\gamma-1}} } {\lambda+r_2-K-\frac{\gamma\eta}{l}+\gamma\beta a^{\frac{1}{\gamma-1}} } \end{equation} with the transversality conditions \begin{equation}\label{TC} \lambda+r_j-K-\frac{\gamma\eta}{l}+\gamma\beta a^{\frac{1}{\gamma-1}}>0\qquad j=1,2.\end{equation}
\begin{lemma}\label{QuadraticSol} There is a unique solution of \eqref{eq_a} and \eqref{TC}. \end{lemma}
Appendix E proves this Lemma.\\\\
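For illustration, equation \eqref{eq_a} can also be solved numerically by bisection, restricted to the region where the transversality conditions \eqref{TC} hold (there the right-hand side is continuous and, by Lemma \ref{QuadraticSol}, the root is unique). All parameter values below are hypothetical, and the bracket is parameter-specific: it is chosen so that both denominators are positive on it.

```python
import math

# Hypothetical stationary parameters (gamma < 0 reflects risk aversion);
# K is a placeholder for the constant of \eqref{Kk}.
gamma, lam, r1, r2, K, eta, l, m = -1.0, 0.02, 0.04, 0.05, 0.06, 1.0, 50.0, 1.0
beta = 1.0 + m ** (1.0 / (1.0 - gamma)) / l

def G(a):
    """Right-hand side of the fixed-point equation minus a."""
    d = gamma * beta * a ** (1.0 / (gamma - 1.0))
    denom1 = lam + r1 - K - gamma * eta / l + d
    denom2 = lam + r2 - K - gamma * eta / l + d
    rhs = (a ** (gamma / (gamma - 1.0)) / denom1
           + m * lam * (a / m) ** (gamma / (gamma - 1.0)) / denom2)
    return rhs - a

# Bisection on a bracket inside the transversality region (both denominators
# positive there); the bracket endpoints depend on the chosen parameters.
lo, hi = 3000.0, 1e6
assert G(lo) > 0 > G(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if G(mid) > 0:
        lo = mid
    else:
        hi = mid
a_star = 0.5 * (lo + hi)

# check the transversality conditions for j = 1, 2
for rj in (r1, r2):
    assert lam + rj - K - gamma * eta / l + gamma * beta * a_star ** (1.0 / (gamma - 1.0)) > 0
print(a_star)
```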
We are ready to state the main result of this section.
\begin{theorem}\label{existence1} Let $v$ be defined by \eqref{vo!} with $a$ the solution of \eqref{eq_a}. The function $(F_1, F_2, F_3)$ of \eqref{))+} defines a policy through \eqref{^0dyn} and \eqref{^0000eq}. \end{theorem}
Proof: The proof for the most part parallels that of Theorem \ref{existence}. The only part which requires more analysis is showing that
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\mathbb{E}\left[\int_{\epsilon}^{\infty} \exp\{-(\lambda+r_i)s\}[U_{\gamma}(F_2( \bar{X}^{0,x}(s)))-U_{\gamma}(F_2({X}^{0,x}(s)))]\,ds\right]=0,\quad i=1,2, \end{equation*} which is equivalent to
\begin{equation*} {\lim_{\epsilon\downarrow 0}}\mathbb{E}\left[\int_{\epsilon}^{\infty} \exp\{-(\lambda+r_i)s\}[(\bar{X}^{0,x}(s)+b)^{\gamma}-({X}^{0,x}(s)+b)^{\gamma}]\,ds\right]=0,\quad i=1,2. \end{equation*} The result follows from the Lebesgue dominated convergence theorem if we prove that
\begin{equation}\label{000} \mathbb{E}\left[\int_{0}^{\infty} \exp\{-(\lambda+r_i)s\}[(\bar{X}^{0,x}(s)+b)^{\gamma}+({X}^{0,x}(s)+b)^{\gamma}]\,ds\right]<\infty,\quad i=1,2. \end{equation} Notice that from the transversality conditions \eqref{TC} one gets
\begin{equation*} \mathbb{E}\left[\int_{0}^{\infty} \exp\{-(\lambda+r_i)s\}(\bar{X}^{0,x}(s)+b)^{\gamma}\right]<\infty,\quad i=1,2. \end{equation*}
Moreover, if $s\in[\epsilon, \infty]$ then
$$\left( \frac{{X}^{0,x}(s)+b}{\bar{X}^{0,x}(s)+b}\right)^{\gamma} =\left( \frac{{X}^{0,x}(\epsilon)+b}{\bar{X}^{0,x}(\epsilon)+b}\right)^{\gamma} \triangleq R(\epsilon),$$ and $R(\epsilon)$ and $\bar{X}^{0,x}(s)$ are independent. Thus, H\"{o}lder's inequality and a standard argument yield \begin{equation*} \mathbb{E}\left[\int_{0}^{\infty} \exp\{-(\lambda+r_i)s\}({X}^{0,x}(s)+b)^{\gamma}\right]<\infty,\quad i=1,2, \end{equation*} so \eqref{000} holds true.
\begin{flushright} $\square$ \end{flushright}
\section{Numerical Results}
We provide a numerical scheme to approximate the integral equation \eqref{IE0}. For simplicity we assume that $\eta(t)=1.$
In a first step let us discretize the interval $\left[0, T \right]$ by introducing the points $t_n = T + n\epsilon$, where
$\epsilon=-\frac{T}{N}.$ Recall that with $K$ given by \eqref{Kk} and $M(\cdot)$ of \eqref{Mm}, the equation \eqref{IE0} can be written
in differential form as
\begin{eqnarray}\label{lll} a'(t) &=&(\gamma M(t)-\lambda(t)-1)(a(t))^{\frac{\gamma}{\gamma-1}}+\left(\lambda(t)-\frac{h'(T-t)}{h(T-t)}-K-\frac{\gamma}{l(t)}\right)a(t)\\\notag &+& \int_{t}^{T}L(s,t)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t)}\right)ds, \end{eqnarray} where $$A(s)\triangleq\exp(\int_s^T \gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz)$$
and
$$L(s,t)\triangleq\left[\left(\frac{h'(T-t)}{h(T-t)}-\frac{h'(s-t)}{h(s-t)}\right)Q(s,t)+\left(\frac{h'(T-t)}{h(T-t)}-\frac{\bar{h}'(s-t)}{\bar{h}(s-t)}\right)q(s,t)\right]e^{\int_{t}^{s}K+\frac{\gamma}{l(u)}du}.$$ From the definition of $A(s),$ it follows that
$$A'(s)=-\gamma (a(s))^{\frac{1}{\gamma-1}}M(s) A(s).$$
Our approximation scheme is done in three steps. In a first step, we construct the sequences $a_{n}^1$ and $A_{n}^1$ recursively by
$$ a_{n+1}^1\triangleq a_n^1 + \epsilon a'(t_n),\qquad A_{n+1}^1\triangleq A_n^1 + \epsilon A'(t_n). $$
\begin{lemma}\label{Le1}
If $a_n^1$ and $A_n^1$, $n = 0\cdots N$ are defined by $a_0^1=1,$ $A^1_0=1$ and
\\\\
$\left\{\begin{array}{lll} a_{n+1}^1&=&a_n^1+\epsilon\big((\gamma M(t_n)-\lambda(t_n)-1)(a_n^1)^{\frac{\gamma}{\gamma-1}}+\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K-\frac{\gamma}{l(t_n)}\right)a_n^1\\ &+& \int_{t_n}^{T}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}} \left(\frac{A(s)}{A(t_n)}\right)ds\big) \\ A_{n+1}^1&=&A_n^1-\gamma\epsilon (a(t_n))^{\frac{1}{\gamma-1}}M(t_n) A_n^1\\ \end{array}\right. $
Then there exists a constant $C$ such that
$$|a_n^1-a(t_n)|\leq C|\epsilon| \,\,\mbox{and}\,\,|A_n^1-A(t_n)| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\end{lemma}
Appendix F proves this Lemma.\\\\
In a second step we discretize the integral $ \int_{t_n}^{T}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)ds.$
This will lead to the following Lemma.
\begin{lemma}\label{Le2}
If $a_n^2$ and $A_n^2$, $n = 0\cdots N$ are defined by $a_0^2=1, A_0^2=1$ and\\\\
$\left\{\begin{array}{lll} a_{n+1}^2&=&a_n^2+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^2)^{\frac{\gamma}{\gamma-1}}+\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K-\frac{\gamma}{l(t_n)}\right)a_n^2\\ &-&\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)\\ A_{n+1}^2&=&A_n^2-\gamma\epsilon (a_n^2)^{\frac{1}{\gamma-1}}M(t_n) A_n^2 \end{array}\right. $
Then there exists a constant $C$ such that
$$|a_n^2-a_n^1|\leq C|\epsilon| \,\,\mbox{and}\,\,|A_n^2-A_n^1| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\end{lemma}
Appendix G proves this Lemma.\\\\
In a third step we introduce an explicit scheme.
\begin{lemma}\label{Le3}
If $a_n^3$ and $A_n^3$, $n = 0\cdots N$ are defined by $a_0^3=1, A_0^3=1$ and\\\\
$\left\{\begin{array}{lll} a_{n+1}^3&=&a_n^3+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^3)^{\frac{\gamma}{\gamma-1}} +\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K-\frac{\gamma}{l(t_n)}\right)a_n^3\\ &-&\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)(a_j^3)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)\\ A_{n+1}^3&=&A_n^3-\gamma\epsilon (a_n^3)^{\frac{1}{\gamma-1}}M(t_n) A_n^3\\ \end{array}\right.$
Then there exists a constant $C$ such that
$$|a_n^3-a_n^2|\leq C|\epsilon| \,\,\mbox{and}\,\,|A_n^3-A_n^2| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\end{lemma}
Appendix H proves this Lemma.\\\\
By using the preceding lemmas and the Lipschitz continuity of the function $a(t),$ we summarize the results of this section in the following Theorem.
\begin{theorem}\label{th} Let $a_N(t)$ be the function obtained by the linear interpolation of the points\\ $(t_n=T-\frac{nT}{N}, a_n^3).$ Then
$$|a_N(t)-a(t)|\leq C|\epsilon|, \qquad \forall t \in \left[0,T\right],$$
for some positive constant $C$ independent of $N.$ \end{theorem}
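A minimal sketch of the third-step explicit scheme follows. To keep it short we take a simplified, hypothetical instance: hyperbolic discount $h$, no mortality ($\lambda\equiv 0$, so $q\equiv 0$ and $Q(s,t)=h(s-t)$), constant $l$ and $M$ (placeholders for $l(\cdot)$ and $M(\cdot)$), and terminal value $a(T)=1$.

```python
import math

# Simplified hypothetical instance: hyperbolic h(t) = (1 + k1 t)^(-k2/k1),
# no mortality (lambda = 0, hence q = 0 and Q(s,t) = h(s-t)), constant l, M.
T, N = 4.0, 400
gamma, K, M, l, k1, k2 = -1.0, 0.06, 0.5, 100.0, 5.0, 2.0
eps = -T / N                                   # negative step, t_n = T + n*eps
t = [T + n * eps for n in range(N + 1)]

def hh(u):                                     # h'(u)/h(u) for the hyperbolic h
    return -k2 / (1.0 + k1 * u)

def Lker(s, tt):                               # kernel L(s,t) when lambda = 0
    Q = (1.0 + k1 * (s - tt)) ** (-k2 / k1)    # Q(s,t) reduces to h(s-t)
    return (hh(T - tt) - hh(s - tt)) * Q * math.exp((K + gamma / l) * (s - tt))

a = [1.0] * (N + 1)                            # a_0^3 = a(T) = 1
A = [1.0] * (N + 1)
for n in range(N):
    integral = eps ** 2 * sum(
        Lker(t[j], t[n]) * a[j] ** (gamma / (gamma - 1.0)) * (A[j] / A[n])
        for j in range(n))
    a[n + 1] = (a[n]
                + eps * (gamma * M - 1.0) * a[n] ** (gamma / (gamma - 1.0))
                + eps * (-hh(T - t[n]) - K - gamma / l) * a[n]
                - integral)
    A[n + 1] = A[n] - gamma * eps * a[n] ** (1.0 / (gamma - 1.0)) * M * A[n]
print(a[N])                                    # approximates a(0)
```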
In the following we perform a numerical experiment. Let $T=4, r=0.05, \mu=0.07, \sigma=0.2, \gamma=-1, N=1000, \lambda(t)=\frac{1}{200}+\frac{9}{8000}t, l(t)=\frac{1}{\lambda(t)}, \eta(t)=1.$ The discount function is exponential, $h(t)=\hat{h}(t)=\exp(-\rho t)$ with $\rho=0.8$. The Pareto weight is $m(t)=\log(\frac{T+\epsilon-t}{\epsilon})$ with $\epsilon=10^{-15}.$ We choose this set of parameters for illustration. As people get older, they perhaps weigh their heirs' utility more, so a time-decreasing aggregation rate seems the natural choice. It is for this reason that we consider a decreasing function $m.$ We plot the maps $F_2$ and $F_3$ which lead to the policies. Furthermore, we plot the difference in $F_3$ when $m$ is variable as opposed to constant. The results show that a higher utility weight $m$ leads to a higher amount spent on life insurance.
\section{Conclusion and future research}
We have studied a portfolio management problem in which an agent invests in a risky asset, consumes, and buys life insurance in order to maximize the utility of himself/herself and his/her heirs. Different discount rates for the agent and the heirs lead to time inconsistency. Moreover, a time-varying aggregation weight leads to time inconsistency as well. The way we deal with this predicament is by looking for subgame perfect Nash equilibrium strategies, which we call policies. We find them in special cases. Our model is rich enough to capture different aspects of portfolio theory. We perform numerical experiments in order to explain the effect of discounting on the one hand and the effect of aggregation on the other. Hyperbolic discounting is emphasized in a Merton-type problem (for simplicity we shut off some parameters). The surprising result is that the policies and optimal strategies are not always observationally equivalent. For example, in certain cases, watching one's consumption rate we can infer (from its time monotonicity) whether it is an optimal or an equilibrium one. Indeed, a non-monotone consumption rate cannot be optimal in the case of exponential discounting. The consumption rate policy exhibits a satiation point (it has a hump-shaped behaviour), and this may explain the consumption puzzle. A time-varying aggregation rate is benchmarked to a constant one. Our numerical experiment supports the intuition that the more the manager cares for his/her heirs, the more he/she will spend on life insurance.
We have introduced a system of equations: the integral equation (\ref{100ie19(*}) together with the SDE \eqref{1o*} and PDE \eqref{109con}. Their validity has been established in the case when the utility function and the bequest function are of (CRRA) type, but we think that it extends to any concave utilities, as in the deterministic case. This paper can be seen as a first step in the general direction of extending stochastic control away from the optimization paradigm towards time-consistent strategies. The mathematical difficulties are considerable: we have no general existence nor uniqueness theory for the equations (\ref{100ie19(*}) or ( \ref{12dE}), which replace the classical HJB equation of optimal control. In the present paper we sidestep the difficulty by using an Ansatz, but we hope that future work, by ourselves or others, will solve these problems.
We conclude by pointing out that most portfolios are not managed by an individual, but by a group, either a professional management team, or the investor himself and his family. As we mentioned in the introduction, we should then introduce one utility function, one psychological discount rate, and one Pareto weight for each member of the group. Consider for instance a group with two members (husband and wife). Member $i$ (for $i=1,2$) has utility $ u_{i}\left( c_{1},c_{2}\right) $, where $c_{1}$ is the consumption of the husband and $c_{2}$ is the \ consumption of the wife, and discount factor $ h_{i}\left( t\right) $, so the utility derived at time $t$ by member $i$ from the couple consuming $\left( c_{1},c_{2}\right) $ at time $s>t$ is $ h_{i}\left( s-t\right) u_{i}\left( c_{1},c_{2}\right) $. The utilities of the husband and the wife have to be aggregated by Pareto weights. If for instance we assume that, as is the case in most couples, one member specializes in long-term decisions and the other in short-term ones, we find that there should exist some decreasing function $\mu \left( t\right) $ , with $0\leq \mu \left( t\right) \leq 1$, such that the behaviour of the couple between $t$ and $T$ is adequately described by maximising the intertemporal criterion: \begin{equation*} \int_{t}^{T}\left[ \mu \left( s-t\right) h_{1}\left( s-t\right) u_{1}\left( c_{1}\left( s\right) ,c_{2}\left( s\right) \right) +\left( 1-\mu \left( s-t\right) \right) h_{2}\left( s-t\right) u_{2}\left( c_{1}\left( s\right) ,c_{2}\left( s\right) \right) \right] ds \end{equation*} plus some terminal criterion (legacy) at time $T$. Even in the case when both members of the group have a constant psychological discount rate, so that $h_{i}\left( t\right) =\exp \left( -r_{i}t\right) $, and even if $ r_{1}=r_{2}$, the group will exhibit time-inconsistency.
Our model covers the particular case when $u_{1}=u_{2}=U.$ This group is time-inconsistent: their expected intertemporal utility is: \begin{eqnarray*} J(t,x,\zeta ,c_1, c_2)\!\!\!\!\!&=\!\!\!\!\!& \mathbb{E}\bigg[ \int_{t}^{T} \!\!\! h_1(s-t) U(c_1(s))\,ds\!+\!\!\int_{t}^{T} \!\!\! m(s-t)\,\, h_2(s-t){U}(c_2(s))\,ds\!\\&+&\!h_1(T-t) U(X^{\zeta, c_1, c_2 }(T)) \bigg], \end{eqnarray*} which falls within our model. Assuming that the function $m(\cdot)$ is decreasing with $m(0)\simeq \infty,$ and $m(T)\simeq 0$ would capture the situation when one member plans for short time and the other plans for long time.
The more general case when $u_{1} \neq u_{2}$ together with other macroeconomic problems with heterogeneous agents will be the subject of future research.
\section{Appendix}
{\bf {Appendix A}}: Proof of Lemma \ref{L1}: We first establish that
\begin{equation}\label{q1}
\mathbb{E}\bigg[ \int_{t}^{T\wedge\tau}h(s-t)U_{1}(c(s))\,ds\bigg]= \mathbb{E}\bigg[ \int_{t}^{T} Q(s,t) U_{1}(c(s))\,ds\bigg] \end{equation}
In the light of equations \eqref{*0}, \eqref{*1} and the random variable $\tau$ being independent of the Brownian motion $W$, it follows that
$$ \mathbb{E}\bigg[ \int_{t}^{T\wedge\tau}h(s-t)U_{1}(c(s))\,ds\bigg]= \mathbb{E}\bigg[ \exp\{-\int_{t}^{T}\lambda(z)\,dz\}\int_{t}^{T}h(s-t)U_{1}(c(s))\,ds+$$$$ \int_{t}^{T}\lambda(u)\exp\{-\int_{t}^{u}\lambda(z)\,dz\} \int_{t}^{u}h(s-t)U_{1}(c(s))\,ds du\bigg]. $$
Moreover
$$\mathbb{E}\bigg[ \int_{t}^{T}\lambda(u)\exp\{-\int_{t}^{u}\lambda(z)\,dz\} \int_{t}^{u}h(s-t)U_{1}(c(s))\,ds du\bigg]= $$ $$-\mathbb{E}\bigg[ \frac{\partial}{\partial u} \left (\exp\{-\int_{t}^{u}\lambda(z)\,dz\} \right) \int_{t}^{u}h(s-t)U_{1}(c(s))\,ds du\bigg],$$
and integration by parts leads to \eqref{q1}. It is easy to see that
\begin{equation}\label{q2}
\mathbb{E}\bigg[ \bar{h}(\tau-t) {U}_{3}(Z^{t,x}(\tau)) 1_{\{\tau\leq T|\tau>t\}}\bigg]=\mathbb{E}\bigg[\int_{t}^{T} q(s,t){U}_{3}(Z^{t,x}(s))\,ds\bigg]. \end{equation}
Finally let us prove that
\begin{equation}\label{q3}
\mathbb{E}\bigg[ nh(\tau-t){U}_{2}(X^{t,x}(T)) 1_{\{\tau>T|\tau>t\}}\bigg]=
\mathbb{E}\bigg[ n Q(T,t) {U}_{2}(X^{t,x}(T))\bigg]. \end{equation}
This follows from \eqref{*1}. In the light of \eqref{q1}, \eqref{q2} and \eqref{q3}, it follows that \eqref{w9} holds true.
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix B}}: Proof of Lemma \ref{pDe}:
Let the functions $(t,x)\rightarrow f^i(t,\cdot,x)$ satisfy the following PDEs: \begin{equation}\label{PDE1} \frac{\partial f^i}{\partial t}+(r x+\mu F_{1}(t,x)-F_{2}(t,x)-F_{3}(t,x)+i(t))\frac{\partial f^i}{\partial x}+\frac{\sigma^{2} F_{1}^{2}(t,x)}{2} \frac{\partial^2 f^i}{\partial x^2}=0, \quad i=1,2,3. \end{equation} with the boundary conditions $$ f^1(t,s,x)=U_{1}(F_{2}(s,x)),\quad f^2(t,s,x)=U_{3}(\eta (s)x+l(s)F_{3}(s,x)),\quad f^3(t,T,x)=U_{2}(x).$$ In the light of Assumption \ref{A2}, these PDEs have $C^{1,2}$ solutions. According to the Feynman-Kac formula $$f^1 (t,s,x)=\mathbb{E}[U_{1}(F_{2}(s,\bar{X}^{t,x}(s)))],\,f^2 (t,s,x)=\mathbb{E}[U_{3}(\bar{Z}^{t,x}(s))],\, f^3 (t,T,x)=\mathbb{E}[U_{2}(\bar{X}^{t,x}(T))].$$ Therefore \begin{equation}\label{1ie1} v(t,x)=\int_{t}^{T}(Q(s,t) f^1(t,s,x)+ q(s,t) f^2(t,s,x)) \,ds+ n Q(T,t) f^3(t,T,x) \end{equation} By differentiating under the integral sign in \eqref{1ie1} we obtain
\begin{eqnarray}\label{*1ie1} \frac{\partial v}{\partial t}(t,x)&=&\int_{t}^{T}(Q(s,t) \frac{\partial f^1}{\partial t}(t,s,x)+ q(s,t) \frac{\partial f^2}{\partial t}(t,s,x)) \,ds+ n Q(T,t) \frac{\partial f^3}{\partial t}(t,T,x) \\\notag&+&(Q(t,t) f^1(t,t,x)+ q(t,t) f^2(t,t,x)) \\\notag &+& \int_{t}^{T}(\frac{\partial Q}{\partial{t}} (s,t) f^1(t,s,x)+ \frac{\partial q}{\partial{t}} (s,t) f^2(t,s,x)) \,ds+ n \frac{\partial Q}{\partial{t}} (T,t)f^3(t,T,x) , \end{eqnarray}
\begin{equation}\label{*31ie1} \frac{\partial v}{\partial x}(t,x)=\int_{t}^{T}(Q(s,t) \frac{\partial f^1}{\partial x}(t,s,x)+ q(s,t) \frac{\partial f^2}{\partial x}(t,s,x)) \,ds+ n Q(T,t) \frac{\partial f^3}{\partial x}(t,T,x), \end{equation}
and
\begin{equation}\label{*14ie1} \frac{\partial^2 v}{\partial x^2}(t,x)=\int_{t}^{T}(Q(s,t) \frac{\partial^2 f^1}{\partial x^2}(t,s,x)+ q(s,t) \frac{\partial^2 f^2}{\partial x^2}(t,s,x)) \,ds+ n Q(T,t) \frac{\partial^2 f^3}{\partial x^2}(t,T,x) \end{equation}
By combining \eqref{*1ie1}, \eqref{*31ie1}, \eqref{*14ie1} and \eqref{PDE1}, we obtain \eqref{12dE}.
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix C}}: Proof of Proposition \ref{existenceODE}. We proceed in several steps. In a first step, we obtain lower and upper bounds for $a(t).$ The second step shows that when the discount functions are linear combinations of exponentials, the equation becomes an ODE system for which we have local existence. The local solution, together with the bounds obtained in the first step, leads to global existence. In a last step, we approximate the discount functions by linear combinations of exponentials, and the solution is constructed as the limit of the solutions of the ODE systems. In the following, we will go over step one only, since the second step follows as in \cite{EkePir} and the last step follows from a density argument. For simplicity we assume that $\eta(t)=1.$ Now, let us define $$\bar{a}(t)=a(t) e^{\left[K(t-T)+\int_{t}^{T}\left(-\frac{\gamma}{l(z)}+\gamma a(z)^{\frac{1}{\gamma-1}}M(z)\right)dz\right]}.$$
It follows that
\begin{eqnarray*} \bar{a}(t)&=&\int_{t}^{T}\left[Q(s,t)+q(s,t)\right][a(s)]^{\frac{\gamma}{\gamma-1}}e^{\left[K(s-T)-\int_{s}^{T}\left(\frac{\gamma}{l(z)}+\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right)\right]}ds+nQ(T,t) \end{eqnarray*}
Consequently
$$\bar{a}'(t)=-(Q(t,t)+q(t,t))(a(t))^{\frac{\gamma}{\gamma-1}}e^{\left[K(t-T)-\int_{t}^{T}\left(\frac{\gamma}{l(z)}+\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)\right)dz\right]}+ n \frac{\partial}{\partial{t}}Q(T,t)$$$$+\int_{t}^{T}\frac{\partial}{\partial{t}}\left[Q(s,t)+ q(s,t)\right](a(s))^{\frac{\gamma}{\gamma-1}}e^{\left[K(s-T)+\int_{s}^{T}\left(\frac{\gamma}{l(z)}+\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)\right)dz\right]}ds $$
From direct calculations \begin{eqnarray*} \frac{\partial}{\partial{t}}Q(T,t)&=&(\lambda(t)h(T-t)-h'(T-t))\exp\left[-\int_{t}^{T}\lambda(z)dz\right]\\
&=& \left(\lambda(t)-\frac{h'}{h}(T-t)\right)Q(T,t) \end{eqnarray*}
and \begin{eqnarray*} \frac{\partial}{\partial{t}}\left[Q(s,t)+q(s,t)\right]&=&(\lambda(t)h(s-t)-h'(s-t))\exp\left[-\int_{t}^{s}\lambda(z)dz\right]\\ &+&\lambda(s)(\lambda(t)\bar{h}(s-t)-\bar{h}'(s-t))\exp\left[-\int_{t}^{s}\lambda(z)dz\right]. \end{eqnarray*} Therefore
$$ \frac{\partial}{\partial{t}}\left[Q(s,t)+ q(s,t)\right] =\left(\lambda(t)-\frac{h'(s-t)}{h(s-t)}\right)Q(s,t)+\left(\lambda(t)-\frac{\bar{h}'(s-t)}{\bar{h}(s-t)}\right) q(s,t) $$
Thus
\begin{eqnarray*} \bar{a}'(t)&=&-(\lambda(t)+1)(a(t))^{\frac{\gamma}{\gamma-1}}e^{\left[K(t-T)-\int_{t}^{T}\left(\frac{\gamma}{l(z)}+\gamma [a(z)]^{\frac{1}{\gamma-1}}M(z)\right)dz\right]}\\ &+&n\left(\lambda(t)-\frac{h'(T-t)}{h(T-t)}\right)Q(T,t)\\ &+& \int_{t}^{T}\left[\left(\lambda(t)-\frac{h'(s-t)}{h(s-t)}\right)Q(s,t)+\left(\lambda(t)-\frac{\bar{h}'(s-t)}{\bar{h}(s-t)}\right)q(s,t) \right] \\&\times&(a(s))^{\frac{\gamma}{\gamma-1}}e^{\left[K(s-T)+\int_{s}^{T}\left(\frac{\gamma}{l(z)}+\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)\right)dz\right]}ds \end{eqnarray*}
Hence
\begin{eqnarray*} \big(a'(t)&+&(K+\frac{\gamma}{l(t)}-\gamma (a(t))^{\frac{1}{\gamma-1}}M(t))a(t)\big)e^{\left[-\int_{t}^{T}\left(K+\frac{\gamma}{l(z)}\right)dz+\int_{t}^{T}\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right]}\\ &=&-(\lambda(t)+1)(a(t))^{\frac{\gamma}{\gamma-1}}e^{\left[-\int_{t}^{T}\left(K+\frac{\gamma}{l(z)}\right)dz+\int_{t}^{T}\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right]}\\ &+&n\left(\lambda(t)-\frac{h'(T-t)}{h(T-t)}\right)Q(T,t)\\ &+& \int_{t}^{T}\left[\left(\lambda(t)-\frac{h'(s-t)}{h(s-t)}\right)Q(s,t)+\left(\lambda(t)-\frac{\bar{h}'(s-t)}{\bar{h}(s-t)}\right)q(s,t) \right] \\&\times&(a(s))^{\frac{\gamma}{\gamma-1}}e^{\left[-\int_{s}^{T}\left(K+\frac{\gamma}{l(z)}\right)dz+\int_{s}^{T}\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right]}ds \end{eqnarray*}
Consequently
$$ \left[a'(t)+(K+\frac{\gamma}{l(t)}-\gamma (a(t))^{\frac{1}{\gamma-1}}M(t))a(t)\right] =$$$$-(\lambda(t)+1)(a(t))^{\frac{\gamma}{\gamma-1}} +n\left(\lambda(t)-\frac{h'(T-t)}{h(T-t)}\right)Q(T,t)e^{\left[\int_{t}^{T}K+\frac{\gamma}{l(z)}dz -\int_{t}^{T}\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right]}+$$$$ \int_{t}^{T}\left[\left(\lambda(t)-\frac{h'(s-t)}{h(s-t)}\right)Q(s,t)+\left(\lambda(t)-\frac{\bar{h}'(s-t)}{\bar{h}(s-t)}\right)q(s,t) \right] (a(s))^{\frac{\gamma}{\gamma-1}}e^{\left[\int_{t}^{s}\left(K+\frac{\gamma}{l(z)}\right)dz+\int_{s}^{t}\gamma (a(z))^{\frac{1}{\gamma-1}}M(z)dz\right]}ds$$
From the definition of $a(t)$ we get that
\begin{equation}\label{211} \left[a'(t)+(K+\frac{\gamma}{l(t)}-\gamma (a(t))^{\frac{1}{\gamma-1}}M(t))a(t)\right] =-(\lambda(t)+1)(a(t))^{\frac{\gamma}{\gamma-1}} +\lambda(t)a(t)-\frac{h'(T-t)}{h(T-t)}a(t) \end{equation} $$+ \int_{t}^{T}\bigg(\frac{h'(T-t)}{h(T-t)}\big(Q(s,t)+q(s,t)\big)-\frac{h'(s-t)}{h(s-t)}Q(s,t)- \frac{\bar{h}'(s-t)}{\bar{h}(s-t)}q(s,t) \bigg) (a(s))^{\frac{\gamma}{\gamma-1}}e^{\left[K(s-t)+\int_{s}^{t}\gamma [a(z)]^{\frac{1}{\gamma-1}}M(z)dz\right]}ds $$
Since $-\rho\leq \frac{h'}{h}(z)\leq \rho$ and $-\rho\leq \frac{\bar{h}'}{\bar{h}}(z)\leq \rho$ and $-\rho' \leq \frac{\gamma}{l(z)}\leq \rho' $ for $0\leq z\leq T$ the equation \eqref{211} leads to
\begin{equation}\label{)} a'(t)\leq -(1+\lambda(t)-\gamma M(t))(a(t))^{\frac{\gamma}{\gamma-1}}+(\lambda(t)+3\rho-K+\rho')a(t) \end{equation}
and
\begin{equation}\label{))} a'(t)\geq -(1+\lambda(t))(a(t))^{\frac{\gamma}{\gamma-1}}-(K+\lambda(t)+3\rho-\rho')a(t) \end{equation}
Let us denote $$C_1\triangleq\min_{t\in\left[0,T\right]} (1-\gamma M(t)+\lambda(t)),\qquad C_0\triangleq\max_{t\in\left[0,T\right]} (\lambda(t)+3\rho-K+\rho')$$
$$D_0\triangleq\max_{t\in\left[0,T\right]} (K+\lambda(t)+3\rho),\qquad D_1\triangleq\max_{t\in\left[0,T\right]}(1+\lambda(t))$$ so that equations \eqref{)} and \eqref{))} will become
\begin{equation}\label{pp1} a'(t)\leq -C_1(a(t))^{\frac{\gamma}{\gamma-1}}+C_0 a(t) \end{equation}
and \begin{equation}\label{pp} a'(t)\geq -D_1 (a(t))^{\frac{\gamma}{\gamma-1}} -D_0 a(t) \end{equation}
Now, $C_0,D_1,D_0>0$ and by Assumption \ref{A1}, it follows that $C_1>0$. Consequently, we get lower and upper bounds on $a(t)$ by integrating \eqref{pp1} and \eqref{pp} as in \cite{EkePir}.
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix D}}: Proof of Lemma \ref{nonMon}: We take $n=2$ and the coefficients as in the numerical experiment. In the light of the differential equation
\begin{eqnarray} \label{&1234} a^{\prime }(t)&=&-\left[\frac{h^{\prime }(T-t)}{h(T-t)}+K\right] a(t)+ (\gamma-1)(a(t))^{\frac{\gamma}{\gamma-1}} \\ &+&\int_{t}^{T} h(T-t)\frac{\partial}{\partial t}\left[\frac{h(s-t)}{h(T-t)} \right](a(s))^{\frac{\gamma}{\gamma-1}}e^{-\left(\int_{t}^{s}\gamma(a(u))^{ \frac{1}{\gamma-1}}\,du\right)}\,ds. \notag \end{eqnarray}
we notice that on $[T-\epsilon, T]$
\begin{equation*} a^{\prime }(t)\approx -\left[\frac{h^{\prime }(T-t)}{h(T-t)}+K\right] a(t)+ (\gamma-1)(a(t))^{\frac{\gamma}{\gamma-1}} +O(\epsilon). \end{equation*}
Consequently, for the choice of our parameters in the numerical experiment we get that $\frac{h^{\prime }(T-t)}{h(T-t)}+K<-1$ for small $\epsilon$; keeping in mind that $a(T)=2,$ we see that $a(t)$ is increasing on $[T-\epsilon, T].$ For hyperbolic discounting it can be shown that $\frac{\partial}{\partial t}\left[\frac{h(s-t)}{h(T-t)}\right]<0.$ It is obvious that $a(t)$ is decreasing in a neighborhood of 0 due to the negative contribution of the term $$\int_{t}^{T} h(T-t)\frac{\partial}{\partial t}\left[\frac{ h(s-t)}{h(T-t)}\right](a(s))^{\frac{\gamma}{\gamma-1}}e^{-\left(\int_{t}^{s}\gamma( a(u))^{\frac{1}{\gamma-1}}\,du\right)}\,ds.$$ In conclusion, the consumption rate policy, $\frac{F_{2}(t,x)}{x}=[a(t)]^{\frac{1}{\gamma -1}},$ (see \eqref{0109con}) is neither increasing nor decreasing in time.
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix E}}: Proof of Lemma \ref{QuadraticSol}: Let $x\triangleq a^{\frac{1}{\gamma-1}}$, $\alpha_j\triangleq \lambda+r_j-K-\frac{\gamma\eta}{l},$ $j=1,2.$ Equation \eqref{eq_a} becomes
\begin{equation} \frac{1}{x}=\frac{1}{\alpha_1+\gamma\beta x }+\frac{\lambda m^{\frac{1}{1-\gamma}}}{\alpha_2+\gamma\beta x } \end{equation}
For the sake of simplicity, consider the case $m=1$.
We want to find $x>0$ which solves
\begin{equation}\label{Q} \gamma \beta (1-\gamma\beta+\lambda) x^2+ (\alpha_2(1-\gamma\beta)+\alpha_1(\lambda-\gamma\beta) ) x- \alpha_1\alpha_2=0 \end{equation}
and the transversality conditions \eqref{TC}, i.e.,
\begin{equation}\label{TC2}
\alpha_1+\gamma\beta x>0 \qquad \mbox{and} \qquad \alpha_2+\gamma\beta x>0.
\end{equation}
This can be done by splitting the analysis into three cases: $\gamma\in(0,1), \gamma=0,$ and $\gamma<0.$ We omit the details.
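For completeness, \eqref{Q} follows from clearing denominators in the displayed equation above (case $m=1$); a sketch of the algebra:

```latex
% Multiply both sides of 1/x = 1/(\alpha_1+\gamma\beta x) + \lambda/(\alpha_2+\gamma\beta x)
% by x(\alpha_1+\gamma\beta x)(\alpha_2+\gamma\beta x):
\begin{align*}
(\alpha_1+\gamma\beta x)(\alpha_2+\gamma\beta x)
  &= x(\alpha_2+\gamma\beta x) + \lambda x(\alpha_1+\gamma\beta x),\\
\alpha_1\alpha_2 + \gamma\beta(\alpha_1+\alpha_2)\,x + \gamma^2\beta^2 x^2
  &= (\alpha_2+\lambda\alpha_1)\,x + \gamma\beta(1+\lambda)\,x^2.
\end{align*}
% Collecting all terms on one side gives
%   \gamma\beta(1-\gamma\beta+\lambda)x^2
%     + (\alpha_2(1-\gamma\beta)+\alpha_1(\lambda-\gamma\beta))x
%     - \alpha_1\alpha_2 = 0,
% which is exactly \eqref{Q}.
```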
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix F}}: Proof of Lemma \ref{Le1}: Let $a\triangleq \inf_{t\in\left[0,T\right]} a(t)$. Lemma \ref{existenceODE} guarantees that $a>0.$ We show that
\begin{equation}\label{120}
a_n^1\geq \frac{2a}{3}\quad \forall n\in 0,1,\cdots, N,
\end{equation}
and this makes the recursive scheme well defined. We prove \eqref{120} by mathematical induction. Assume that
\begin{equation}\label{112}
a_k^1\geq \frac{2a}{3}\quad \forall k\in 0,1,\cdots, n,
\end{equation}
and prove that $a_{n+1}^1\geq \frac{2a}{3}.$ Let us define
$$e_n \triangleq a_n^1 - a(t_n),\quad f_n \triangleq A_n^1 - A(t_n).$$
By a second order Taylor expansion of $a$ at $t_n$, we get $$a(t_{n+1}) = a(t_n) + \epsilon a'(t_n) + c_n\epsilon^2,$$ where $c_n$ depends on $a$ and is bounded by a constant $c$ independent of $n$. Consequently,
\begin{eqnarray*}
e_{n+1} &=& a_{n+1}^1 - a(t_{n+1})\\
&=& a_n^1+\epsilon\bigg((\gamma M(t_n)-\lambda(t_n)-1)(a_n^1)^{\frac{\gamma}{\gamma-1}} +\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a_n^1\\ &+&\int_{t_n}^{T}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)ds\bigg)\\ &-&\bigg(a(t_n)+ \epsilon\bigg( (\gamma M(t_n)-\lambda(t_n)-1)(a(t_n))^{\frac{\gamma}{\gamma-1}} +\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a(t_n)\\ &+& \int_{t_n}^{T}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left( \frac{A(s)}{A(t_n)}\right)ds\bigg)+ c_n\epsilon^2\bigg)\\ &=&e_n +\epsilon (\gamma M(t_n)-\lambda(t_n)-1)\big( (a_n^1)^{\frac{\gamma}{\gamma-1}}-(a(t_n))^{\frac{\gamma}{\gamma-1}} \big)\\ &+&\epsilon \left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)e_n -c_n \epsilon^2.
\end{eqnarray*}
By the Mean Value Theorem applied to the function $x\mapsto x^{\frac{\gamma}{\gamma-1}},$ one gets that
$$|(a_k^1)^{\frac{\gamma}{\gamma-1}}-(a(t_k))^{\frac{\gamma}{\gamma-1}}|\leq \left|{\frac{\gamma}{\gamma-1}}\right|\left(\frac{2a}{3}\right)^{\frac{1}{\gamma-1}} |a_k^1-a(t_k)|.$$
Therefore there exists $M>0$, such that
\begin{equation}\label{II}
|e_{k+1}|\leq |e_k|(1+M|\epsilon|)+c\epsilon^2,\quad \forall k\in0,1,\cdots, n.
\end{equation}
By iterating \eqref{II} for $k=0\cdots n,$ one gets
$$|e_{n+1}|\leq c\epsilon^2 \frac{(1+M \epsilon)^{n+1}-1}{M\epsilon}\leq|c|\epsilon^2 \frac{e^{MT}-1}{\frac{MT}{N}}\leq C |\epsilon|,$$
for some constant $C$ independent of $n.$ Therefore
$$a^1_{n+1} \geq a(t_{n+1})-|e_{n+1}|\geq a-C|\epsilon|\geq 2a/3 $$ for $|\epsilon|$ small enough. This proves \eqref{120}.
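The iteration behind the bound on $|e_{n+1}|$ is a discrete Gronwall argument; a sketch, assuming $e_0=0$ and $|\epsilon|=T/N$:

```latex
\begin{align*}
|e_{n+1}| &\le (1+M|\epsilon|)\,|e_n| + c\epsilon^2
           \le c\epsilon^2 \sum_{k=0}^{n} (1+M|\epsilon|)^k
           = c\epsilon^2\,\frac{(1+M|\epsilon|)^{n+1}-1}{M|\epsilon|},\\
(1+M|\epsilon|)^{n+1} &\le e^{M(n+1)|\epsilon|} \le e^{MT},
\qquad \mbox{hence} \qquad
|e_{n+1}| \le \frac{c\,(e^{MT}-1)}{M}\,|\epsilon|.
\end{align*}
```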
Moreover it follows that
$$|a_n^1-a(t_n)|=|e_n|\leq C|\epsilon|, \; \forall n\in 0,1,\cdots,N.$$
Similar arguments show that
$$|A_n^1-A(t_n)| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix G}}: Proof of Lemma \ref{Le2}: We show that
\begin{equation}\label{0120}
a_n^2\geq \frac{a}{2}\quad \forall n\in 0,1,\cdots, N,
\end{equation}
and this makes the recursive scheme well defined. We prove \eqref{0120} by mathematical induction.
Assume that
\begin{equation}\label{0112}
a_k^2\geq \frac{a}{2}\quad \forall k\in 0,1,\cdots, n,
\end{equation}
and prove that $a_{n+1}^2\geq \frac{a}{2}.$ Let $r_n\triangleq a_n^2-a_n^1,$ so
\begin{eqnarray*}
r_{n+1} &=& a_{n+1}^2-a_{n+1}^1\\
&=&a_n^2+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^2)^{\frac{\gamma}{\gamma-1}}+\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a_n^2\\ &-&\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)-\epsilon \int_{t_n}^{T}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)ds \\
&-& a_n^1-\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^1)^{\frac{\gamma}{\gamma-1}} -\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a_n^1\\
&=&r_n+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)\big( (a_n^2)^{\frac{\gamma}{\gamma-1}}-(a_n^1)^{\frac{\gamma}{\gamma-1}}\big)\\
&+&\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)r_n \\
&+&\epsilon \sum_{j=0}^{n-1}\left[-\epsilon L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)-\int_{t_{j+1}}^{t_{j}}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)ds\right].
\end{eqnarray*}
Moreover
\begin{eqnarray*}
&&\big|-\epsilon L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)-\int_{t_{j+1}}^{t_{j}}L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)ds\big|\\
&=&\big|\int_{t_{j+1}}^{t_{j}}\left(L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)-L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(s)}{A(t_n)}\right)\right) ds\big|\\
&\leq&\frac{1}{A(t_n)}\big|\int_{t_{j+1}}^{t_{j}} \bigg( [L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}A(t_j)-L(s,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}A(t_j)]+[L(s,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}A(t_j)\\
&-&L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}A(t_j)]+[L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}A(t_j)-L(s,t_n)(a(s))^{\frac{\gamma}{\gamma-1}}A(s)]\bigg)ds\big| \\
&\leq&\frac{1}{A(t_n)}\big( K_0(t_j-t_{j+1})^2+K_1(t_j-t_{j+1})^2+K_2(t_j-t_{j+1})^2\big)\leq K_4 \epsilon^2, \end{eqnarray*}
for some positive constants $K_0, K_1, K_2, K_4.$ The last inequalities follow from the boundedness of $a(t)$ (see Lemma \ref{existenceODE})
and from the boundedness of coefficients. Arguing as in Lemma \ref{Le1}, we can then find $M>0$ such that
\begin{equation}\label{1II}
|r_{k+1}|\leq |r_k|(1+M|\epsilon|)+c\epsilon^2,\quad \forall k\in0,1,\cdots, n.
\end{equation}
By iterating \eqref{1II} for $ k=0\cdots n,$ one gets
$$|r_{n+1}|\leq c\epsilon^2 \frac{(1+M \epsilon)^{n+1}-1}{M\epsilon}\leq|c|\epsilon^2 \frac{e^{MT}-1}{\frac{MT}{N}}\leq C |\epsilon|,$$
for some constant $C$ independent of $n.$ Therefore
$$a^2_{n+1} \geq a^1_{n+1}-|r_{n+1}|\geq 2a/3-C|\epsilon|\geq a/2 $$ for $|\epsilon|$ small enough. This proves \eqref{0120}. Moreover it follows that
$$|a_n^2-a_n^1|=|r_n|\leq C|\epsilon|, \; \forall n\in 0,1,\cdots,N.$$
Similar arguments show that
$$|A_n^2-A_n^1| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\begin{flushright} $\square$ \end{flushright}
{\bf {Appendix H}}: Proof of Lemma \ref{Le3}: We show that
\begin{equation}\label{00120}
a_n^3\geq \frac{a}{4}\quad \forall n\in 0,1,\cdots, N,
\end{equation}
and this makes the recursive scheme well defined. We prove \eqref{00120} by mathematical induction.
Assume that
\begin{equation}\label{00112}
a_k^3\geq \frac{a}{4}\quad \forall k\in 0,1,\cdots, n,
\end{equation}
and prove that $a_{n+1}^3\geq \frac{a}{4}.$ Let us introduce $u_n\triangleq a_n^3-a_n^2$ and $v_n\triangleq A_n^3-A_n^2$. It follows that
\begin{eqnarray*}
u_{n+1} &=& a_{n+1}^3-a_{n+1}^2\\
&=&a_n^3+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^3)^{\frac{\gamma}{\gamma-1}}+\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a_n^3\\ &-&\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)(a_j^3)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)-a_n^2-\epsilon(\gamma M(t_n)-\lambda(t_n)-1)(a_n^2)^{\frac{\gamma}{\gamma-1}}\\ &-&\epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)a_n^2+\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)\\ &=&u_n+\epsilon(\gamma M(t_n)-\lambda(t_n)-1)( (a_n^3)^{\frac{\gamma}{\gamma-1}}- (a_n^2)^{\frac{\gamma}{\gamma-1}})\\ &+& \epsilon\left(\lambda(t_n)-\frac{h'(T-t_n)}{h(T-t_n)}-K\right)u_n-\epsilon^2 \sum_{j=0}^{n-1}L(t_j,t_n)r_{j,n},
\end{eqnarray*}
where $r_{j,n}\triangleq (a_j^3)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)- (a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right).$ By triangle inequality it follows that
\begin{eqnarray*}
|r_{j,n}|&\leq&|(a_j^3)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)-(a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)|+|(a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^3}{A_n^3}\right)- (a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^2}{A_n^3}\right)|\\
&+&|(a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^2}{A_n^3}\right)-(a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^2}{A_n^2}\right)|+|(a_j^2)^{\frac{\gamma}{\gamma-1}}\left(\frac{A_j^2}{A_n^2}\right)-(a(t_j))^{\frac{\gamma}{\gamma-1}}\left(\frac{A(t_j)}{A(t_n)}\right)|\\
&\leq&M_1|u_j|+M_2|v_j|+M_3|v_n|+M_4|\epsilon|,
\end{eqnarray*}
for some positive constants $M_1, M_2, M_3, M_4,$ where the last inequality follows from the boundedness of $a(t)$ (see Lemma \ref{existenceODE})
and of other coefficients in our model. Arguing as in the previous Lemmas one can find a positive constant $C$ such that
\begin{equation}\label{M}
|u_{n+1}|\leq |u_n|+C|\epsilon u_n|+C|\epsilon|\left(\max_{j\in 0,\cdots,n }|u_j|+\max_{j\in 0,\cdots,n}|v_j|\right)+C\epsilon^2.
\end{equation}
On the other hand
$$v_{n+1}=v_n -\gamma \epsilon M(t_n) \big((a_n^3)^{\frac{\gamma}{\gamma-1}}A_n^3-(a_n^2)^{\frac{\gamma}{\gamma-1}}A_n^2\big),$$
hence
$$|v_{n+1}|\leq|v_n|+|\gamma\epsilon M(t_n)|\big( (a_n^3)^{\frac{\gamma}{\gamma-1}}|A_n^3-A_n^2| + A_n^2 |(a_n^3)^{\frac{\gamma}{\gamma-1}}-(a_n^2)^{\frac{\gamma}{\gamma-1}}| \big).$$
However this implies that
\begin{equation}\label{M1}
|v_{n+1}|\leq |v_n|+M|\epsilon|(|u_n|+|v_n|),
\end{equation}
for some positive constant $M.$ Let us define $x_n=\max_{j\in 0,\cdots,n }|u_j|$, $y_n=\max_{j\in 0,\cdots,n}|v_j| $ and $z_n=x_n + y_n$. Inequalities \eqref{M} and \eqref{M1} hold also for $k\leq n,$ i.e.,
$$ |u_{k+1}|\leq |u_k|+C|\epsilon u_k|+C|\epsilon| (x_k + y_k)+C\epsilon^2,\,\,|v_{k+1}|\leq |v_k|+M|\epsilon|(|u_k|+|v_k|).$$
By taking maximum over $k\in 0,\cdots,n$ in these inequalities one obtains
$$x_{n+1}\leq x_n+2C|\epsilon| x_n+C|\epsilon|y_n+C\epsilon^2$$ and
$$y_{n+1}\leq y_n + M|\epsilon| (x_n+y_n).$$
By adding these inequalities, it follows that
$$z_{n+1}\leq z_n+(2C+M)|\epsilon|z_n+M\epsilon^2.$$
This in turn yields that $z_n\leq C |\epsilon|,$ for some positive constant still denoted (with some abuse of notation) by $C.$ Therefore
$$a^3_{n+1} \geq a^2_{n+1}-|u_{n+1}|\geq a/2-C|\epsilon|\geq a/4 $$ for $|\epsilon|$ small enough. This proves \eqref{00120}. Moreover, $z_n\leq C |\epsilon|$ implies that
$$|a_n^3-a_n^2|=|u_n|\leq C|\epsilon|, \; \forall n\in 0,1,\cdots,N.$$
and
$$|A_n^3-A_n^2|=|v_n| \leq C|\epsilon|,\quad \forall n\in 0,1,\cdots, N.$$
\begin{flushright} $\square$ \end{flushright}
\end{document}
\begin{document}
\title{{\footnotesize\bf{CLASSIFICATION OF $\aleph_{0}$-CATEGORICAL $C$-MINIMAL PURE $C$-SETS}}} \author{{\footnotesize FRAN\c COISE DELON, MARIE-H\' EL\`ENE MOURGUES}}
\maketitle
\noindent {\footnotesize Fran\c coise Delon \\ Universit\'e de Paris et Sorbonne Universit\'e, CNRS, IMJ-PRG, F-75006 Paris, France. \\ [email protected] }\\
\noindent {\footnotesize Marie-Hélène Mourgues \\ Université de Paris-Est Créteil, 61 Avenue du Général de Gaulle, 94000 Créteil, France; \\ Universit\'e de Paris et Sorbonne Universit\'e, CNRS, IMJ-PRG, F-75006 Paris, France. \\ [email protected] } \\
\begin{abstract} We classify all $\aleph_0$-categorical and $C$-minimal $C$-sets up to elementary equivalence. As usual, the Ryll-Nardzewski Theorem makes the classification of indiscernible $\aleph_0$-categorical $C$-minimal sets the first step. We first define {\it solvable} good trees, via a finite induction. The trees involved in the initial and induction steps have a set of nodes which either consists of a singleton, or has dense branches without endpoints and the same number of branches at each node. The class of {\it colored} good trees is the elementary class of solvable good trees. We show that a pure $C$-set $M$ is indiscernible, finite or $\aleph_0$-categorical, and $C$-minimal iff its canonical tree $T(M)$ is a colored good tree. The classification of general $\aleph_0$-categorical and $C$-minimal $C$-sets is done via finite trees with labeled vertices and edges, where labels are natural numbers or infinity, together with complete theories of indiscernible, $\aleph_0$-categorical or finite, $C$-minimal $C$-sets.
\\ \\ Key words: $C$-minimality; $\aleph_0$-categoricity; trees; first-order theories.\\ Mathematics Subject Classification: 03 C 35, 03 C 45, 03 C 64, 03 G 10, 05 C 05, 06 A 07, 06 A 12 \end{abstract}
\section{Introduction}
$C$-sets are sets equipped with a $C$-relation. They can be understood as a slight weakening of ultrametric structures. They generalize in particular linear orders and allow rich combinatorics. They are therefore not classifiable unless one restricts their class, which is what we do here: we consider $\aleph_0$-categorical and $C$-minimal $C$-sets. $C$-minimality is the minimality notion fitting in this context: any definable subset in one variable is quantifier free definable using the $C$-relation alone. In the case of ultrametric structures this corresponds to finite Boolean combinations of closed or open balls.
We classify here all $\aleph_0$-categorical and $C$-minimal $C$-sets up to elementary equivalence (in other words we classify all finite or countable such structures). Although $C$-minimality is a generalization of o-minimality, our result does not generalize Pillay and Steinhorn's result: they classify (Theorem 6.1 in \cite{P-S}) \underline{all} $\aleph_0$-categorical and o-minimal linearly ordered structures while we only classify $\aleph_0$-categorical and $C$-minimal \underline{pure} $C$-sets.
To state our result let us introduce some material. A $C$-set $M$ has a canonical tree, $T(M)$, in which $M$ appears as the set of leaves, with the $C$-relation defined as follows : for $\alpha \in M$, call $br(\alpha):= \{ x \in T(M) ; x \leq \alpha \}$ the branch $\alpha$ defines in $T(M)$ ; then for $\alpha, \beta$ and $\gamma$ in $M$, $M \models C(\alpha, \beta, \gamma)$ iff in $T(M)$, $br(\beta) \cap br(\gamma)$ strictly contains $br(\alpha) \cap br(\beta)$ (which then must be equal to $br(\alpha) \cap br(\gamma)$). Let us give a very simple example: call \emph{trivial} a $C$-relation satisfying $C(\alpha, \beta, \gamma)$ iff $\alpha \not= \beta = \gamma$ and suppose $M$ is not a singleton; then $C$ is trivial on $M$ iff $T(M)$ consists of a root, say $r$, and the elements of $M$ as leaves, all having $r$ as a predecessor.
The $C$-set $(M,C)$ and the tree $(T(M),<)$ are uniformly bi-interpretable. As usual, the Ryll-Nardzewski Theorem makes the classification of indiscernible $\aleph_0$-categorical $C$-minimal sets the first step in our work. Recall that a structure is said to be indiscernible iff all its elements have the same complete type \footnote{Notice that, if $M$ is indiscernible the set of leaves is indiscernible in $T(M)$ but the tree $T(M)$, except the singleton, never is. Its set of nodes may be indiscernible, see for example 1-colored good trees in Section 4.}. We characterize indiscernible, $\aleph_0$-categorical and $C$-minimal $C$-sets by their canonical tree. First we define by induction solvable trees. Consider on leaves above a node $a$ the equivalence relation ``$br(\alpha) \cap br(\beta)$ contains nodes strictly bigger than $a$". An equivalence class is called a \emph{cone} at $a$. So, the number of cones at $a$ coincides with the intuitive notion of the number of branches.
A 0-solvable good tree is a singleton (with the only possible $C$-relation: the empty relation). There are three types of 1-solvable good trees. Either the tree $T$ consists of a unique node with at least two leaves immediately above. Or for any leaf $\alpha$ of $T$, $br(\alpha)$ consists of a dense linear order and its leaf $\alpha$, and at each node there is the same number (a natural number greater than 2 or infinity) of cones. Or each $br(\alpha)$ consists of a dense linear order, $\alpha$ and a predecessor of $\alpha$, and there are two numbers $m$ and $\mu$ (natural numbers greater than 1, or infinity) such that at each node of T there are exactly $\mu$ infinite cones and $m$ cones which consist of a single leaf.
An $(n + 1)$-solvable good tree is an $n$-solvable good tree in which each leaf is substituted with a copy of a $1$-solvable good tree, the same at each leaf, with some constraints on the parameters $m$ and $\mu$ occurring on both sides of the construction. A solvable good tree is an $n$-solvable good tree for some integer $n$, and a colored good tree is a tree elementarily equivalent to a solvable one. We prove that a pure $C$-set $M$ is indiscernible, finite or $\aleph_0$-categorical and $C$-minimal iff its canonical tree $T(M)$ is a colored good tree. \\
The reduction of the general classification to that of indiscernible structures uses a very precise description of definable subsets in one variable. $\aleph_0$-categoricity is combined with the classical description coming from $C$-minimality to produce a ``canonical partition'' of the structure in finitely many definable subsets, each of them maximal indiscernible. The characterization of $\aleph_0$-categorical and $C$-minimal $C$-sets is done via finite trees with labeled vertices and edges, where labels are natural numbers, or infinity, and complete theories of indiscernible, $\aleph_0$-categorical or finite $C$-minimal $C$-sets.
The reconstruction of the structure from such a finite labeled tree uses again an induction on the depth of the tree. \\
Chapter 2 lists some preliminaries. In Chapter 3 we draw a certain amount of consequences of indiscernibility, $\aleph_0$-categoricity and $C$-minimality of a $C$-structure, which leads to the notion of precolored good tree (no inductive definition this time). Chapters 4 to 6 are dedicated to colored good trees. Chapter 4 presents $1$-colored good trees, which in fact are the same thing as precolored good trees of depth 1. In Chapter 5 we define the extension of a colored good tree by a $1$-colored good tree, construction which is the core of the inductive definition of $(n+1)$-colored good trees from $n$-colored good trees. General colored good trees are defined and completely axiomatized in Chapter 6. In Chapter 7 we show that the classes of precolored good trees, of colored good trees as well as of canonical trees of indiscernible, finite or $\aleph_0$-categorical and $C$-minimal $C$-sets do in fact coincide. Chapter 8 gives a complete classification of $\aleph_0$-categorical and $C$-minimal $C$-sets.
\section{Preliminaries } \subsection{$C$-sets and good trees} \begin{defi} A $C$-{\rm relation} is a ternary relation, usually called $C$, satisfying the four axioms:\\ 1: $C(x,y,z) \rightarrow C(x,z,y)$ \\ 2: $C(x,y,z) \rightarrow \neg C(y,x,z)$ \\ 3: $C(x,y,z) \rightarrow [C(x,y,w) \vee C(w,y,z)]$ \\ 4: $ x\not= y \rightarrow C(x,y,y)$.\\ A $C$-{\rm set} is a set equipped with a $C$-relation. \end{defi}
\noindent $C$-relations appear in \cite{AN}, \cite{M-S} or \cite{H-M}, where they satisfy additional axioms. Our present definition comes from \cite{D2}. As already mentioned in the introduction, a $C$-set $M$ has a canonical tree, which is in fact bi-interpretable with $M$, as we explain now.
\begin{defi} \label{good} We call {\rm tree} an order in which for any element $x$ the set $\{ y ; y \leq x \}$ is linearly ordered. \\
Call a tree {\rm good} if :\\ - it is a meet semi-lattice (i.e. any two elements $x$ and $y$ have an infimum, or {\rm meet}, $x \wedge y$, which means: $ x \wedge y \leq x,y$ and $(z\leq x,y) \rightarrow z \leq x \wedge y $), \\ - it has maximal elements, or {\rm leaves}, everywhere (i.e. $\ \forall x, \exists y \ ( y \geq x \wedge \neg \exists z>y))$\\ - and any of its elements is a leaf or a node (i.e. of form $x \wedge y $ for some distinct $x$ and $y$). \end{defi} Let $T$ be a good tree. It is convenient to consider $T$ in the language $\{<, \wedge, L\}$ where $\wedge$ is the function $T \times T \rightarrow T$ defined above and $L$ a unary predicate for the set of leaves (cf. Definition \ref{good}).
\begin{prop}\label{bi-inter} $C$-sets and good trees are bi-interpretable classes. \end{prop} Let us explain these two interpretations in a few words. More details can be found in \cite{D2}. \\
Call branch of a tree any maximal subchain. The set of branches of $T$ carries a canonical $C$-relation: $C(\alpha, \beta, \gamma)$ iff $\alpha \cap \beta = \alpha \cap \gamma \subsetneq \beta \cap \gamma$.
Now, leaves of $T$ may be identified to branches via the map $\alpha \mapsto br(\alpha) := \{ \beta \in T; \beta \leq \alpha \}$. Thus, if $Br_l(T)$ denotes the set of branches with a leaf of $T$, the two-sorted structure $(T, <, Br_l(T), \in)$ is definable in $(T, <)$, and the canonical $C$-relation on $Br_l(T)$ also.\textbf{ We denote this $C$-set $M(T)$}. This gives the definition of a $C$-set in a good tree. The canonical tree of a $C$-set provides the reverse construction. It is (almost) the representation theorem of Adeleke and Neumann (\cite{AN}, 12.4), slightly modified according to \cite{D2}. Let us describe their construction. Given a $C$-set $(M,C)$, define on $M^2$ binary relations $$(\alpha, \beta) \preccurlyeq (\gamma, \delta) :\Leftrightarrow \neg C(\gamma, \alpha, \beta) \& \neg C(\delta, \alpha, \beta)$$ $$(\alpha, \beta) R (\gamma, \delta) :\Leftrightarrow \neg C(\alpha, \gamma, \delta) \& \neg C(\beta, \gamma, \delta) \& \neg C(\gamma, \alpha, \beta) \& \neg C(\delta, \alpha, \beta). $$ Then the relation $\preccurlyeq$ is a pre-order, $R$ is the corresponding equivalence relation and the quotient $T := M^{2}/R$ is a good tree. \footnote{Adeleke and Neumann work in fact with the set of pairs of distinct elements of $M$, instead of $M^2$ as we do (and reverse order). It is the reason why we get maximal elements everywhere in the tree, meanwhile they did not get any. In the other direction also, $Br_l(T)$ is interpretable in $T$ meanwhile the ``covering set of branches'' considered by Adeleke and Neumann is not determined by $T$.} \\[2 mm]
Proposition \ref{can} summarizes these facts in a more precise way than Proposition \ref {bi-inter} did. \begin{prop}\label{can} Given a $C$-set ${M}$, there is a unique good tree such that ${M}$ is isomorphic to its set of branches with leaf, equipped with the canonical $C$-relation. This tree is called {\rm the canonical tree of} $M$ and is denoted $T(M)$. \\ Let $L$ be the set of leaves of $T(M)$. Then $\langle M,C \rangle$ and $\langle T(M),<, \wedge, L \rangle$ are first-order bi-interpretable, quantifier free and without parameters, and $M$ and $L(T(M))$ are definably isomorphic. Therefore an embedding ${M} \subseteq {N}$ induces an embedding $T(M) \subseteq T(N)$. Moreover, given a good tree $T$, $T(M(T))$ and $T$ are definably isomorphic. \end{prop}
\subsection{$C$-structures and $C$-minimality}
\begin{defi} A $C$-{\rm structure} is a $C$-set possibly equipped with additional structure. \\ A $C$-structure ${\mathcal M}$ is called $C$-{\rm minimal} iff for any structure ${\mathcal N} \equiv {\mathcal M}$ any definable subset of $N$ is definable by a quantifier free formula in the pure language $\{C\}$. \end{defi}
\begin{rem} Any finite $C$-structure is $C$-minimal. \end{rem}
$C$-minimality has been introduced by Deirdre Haskell, Dugald Macpherson and Charlie Steinhorn as the minimality notion suitable to $C$-relations (\cite{H-M}, \cite{M-S}).
We define now some particular definable subsets of ${\mathcal M}$ which, due to $C$-minimality, generate by Boolean combination all definable subsets of ${\mathcal M}$.
If we want to distinguish between nodes and leaves of the tree $T(M)$, we will use Latin letters $x, y$, etc. to denote nodes and Greek letters $\alpha, \beta$, etc. for leaves (cf. Definition \ref{good}). According to the representation theorem, elements of $M$ are also represented by Greek letters.
\begin{defi}\label{Mcone} \begin{itemize} \item For $\alpha$ and $\beta$ two distinct elements of $M$, the subset of $M$: ${\mathcal C}(\alpha \wedge \beta, \beta) :=\{\gamma \in M; C(\alpha, \gamma, \beta)\}$ is called the {\em cone} of $\beta$ at $\alpha \wedge \beta$; $\alpha \wedge \beta$ is called its \emph{basis}. \\
We also use the notation, for elements $y > x$ from $T(M)$, ${\mathcal C}(x,y) := {\mathcal C}(x,\alpha)$ for any (or some) $\alpha \in M$ such that $br(\alpha)$ contains $y$, and we say that ${\mathcal C}(x,y)$ is the cone of $y$ at $x$. \item For $\alpha$ and $\beta$ in $M$, the subset of $M$: ${\mathcal C}(\alpha \wedge \beta) :=\{\gamma \in M; \neg C( \gamma, \alpha, \beta)\}= \{\gamma \in M ; \alpha \wedge \beta \in br(\gamma)\}$ is called the {\em thick cone} at $\alpha \wedge \beta$; $\alpha \wedge \beta$ is its \emph{basis}. Note that, if $\alpha \neq \beta$, the thick cone at $\alpha \wedge \beta$ is the disjoint union of all cones at $\alpha \wedge \beta$\footnote{ In the particular case of ultrametric spaces the $C$-relation is defined as follows: $C(x,y,z)$ iff $d(x,y)=d(x,z)<d(y,z)$. The thick cones are the closed balls and cones are the open balls. Some balls may be open and closed. In the same way as a closed ball, say of radius $r \not= 0$, is partitioned into open balls of radius $r$, a thick cone at a node $n$ is partitioned in cones at $n$.}.
\item For $x < y \in T(M)$ the {\em pruned cone} at $x$ of $y$ is the cone at $x$ of $y$ minus the thick cone at $y$, in other words the set ${\mathcal C}(]x,y[) =\{\gamma \in M ; x < (\gamma \wedge y) < y\}$. The interval $]x, y[$ is called the {\em axis} of the pruned cone, $x$ its \emph{basis}.
\end{itemize} \end{defi}
Note that the word ``cone'' follows the terminology of Haskell, Macpherson and Steinhorn while our ``thick cone'' replace their ``0-levelled set'' (with the motivation that we do not use here $n$-levelled sets for $n \not=0$). We also replace ``interval'' by ``pruned cone'' with the intention that an ``interval'' always lives in a linear order. \\[2 mm]
It is easy to see that the subsets of $M$ definable by an atomic formula of the language $\{ C \}$ are $M$, $\emptyset$, singletons, cones and complements of thick cones. We can therefore rephrase the above definition of $C$-minimality as follows: A $C$-structure ${\mathcal M}$ is $C$-minimal iff for any structure ${\mathcal N} \equiv {\mathcal M}$ any definable subset of $N$ is a Boolean combination of cones and thick cones.
\begin{prop}\label{induiteCmin} Let ${\mathcal M}$ be a $C$-minimal $C$-set and $A$ a cone, thick cone or pruned cone with a dense axis in $M$. Then, considered as a pure $C$-set, $A$ is $C$-minimal too. \end{prop} \pr The trace of a cone on a cone, say $A$, is a (relative) cone: this means that this trace can be described as $\{ x \in A ; C(\alpha, \beta, x) \}$ for two parameters $\alpha$ and $\beta$ from $A$. More generally the trace of a possibly thick cone on a possibly thick cone is a possibly thick cone. Thus the above statement is trivial for cones. For a pruned cone, $C$-minimality is ensured by the axis density, see \cite{D2}, p. 70, Example and Lemma 3.12 (the $C$-minimality considered there is in some sense ``external'' and a priori stronger than the ``internal'' one considered in the above statement). \hfill $\Box$ \\[2 mm]
We explain now how the bi-interpretation we have seen between $M$ and $T(M)$ remains valid in the expanded context of $C$-minimality. Given a $C$-structure ${\mathcal M}$ consider $M$ as the set of leaves of $T(M)$ and add to the tree structure of $T(M)$ all subsets of some cartesian power $T(M)^n$ which are $\emptyset$-definable in ${\mathcal M}$ as $\emptyset$-definable sets. The structure obtained is called the \emph{structure induced by ${\mathcal M}$ on }$T(M)$. The reverse construction is a bit more subtle:
\begin{defi} Let ${\mathcal N}$ be a structure and $A$ a $\emptyset$-definable subset of $N$. By definition the language of the \emph{structure induced} by ${\mathcal N}$ on $A$ consists of all subsets of some $A^n$ which are definable in ${\mathcal N}$ without parameters. \\ We say that $A$ is \emph{stably embedded} in ${\mathcal N}$ if for every integer $n$ every subset of $A^n$ which is definable in ${\mathcal N}$ with parameters is definable with parameters from $A$. \\ In this case the subsets of some $A^n$ definable in ${\mathcal N}$ or in the structure induced by ${\mathcal N}$ on $A$ are the same. \end{defi}
\begin{prop}\label{stablyembedded} Whatever additional structure we consider on $T(M)$, $M$ is stably embedded in $T(M)$. \end{prop}
\pr Consider $\varphi$ a formula without parameters of the expanded tree $T(M)$ with $n+m$ variables, parameters $c=(c_1,\dots,c_m)$ from $T(M)$ and the set $D := \{ x=(x_1,\dots,x_n) \in M^n ; T(M) \models \varphi (c,x) \}$. Each $c_i$ is of the form $c_i = \alpha_i \wedge \beta_i$ for some $\alpha_i , \beta_i \in M$ hence $D = \{ x \in M^n ; T(M) \models \varphi ( \alpha_1 \wedge \beta_1,\dots, \alpha_m \wedge \beta_m,x) \}$, a set which is definable with parameters from $M$. \hfill $\Box$
\begin{prop}\label{Cminimal} Let ${\mathcal M}$ be a $C$-minimal $C$-structure and $T$ its canonical tree with the structure induced by ${\mathcal M}$. Then each branch $br (\alpha)$ of $Br_{l}(T)$ is o-minimal in $T$, in the sense that any subset of $br (\alpha)$ definable in $T$ is a finite union of intervals with bounds in $br (\alpha) \cup \{ -\infty\}$. \end{prop}
\pr Haskell and Macpherson \cite{H-M} Lemma 2.7 (i). \hfill $\Box$
\begin{rem} Using ``rosy theories'' and a result of Pillay (Theorem 1.4 in \cite{P}) we see that any branch $br(\alpha)$ of $T$ is in fact stably embedded in $(T,\alpha)$ and o-minimal for the induced structure. \end{rem}
\subsection{Some definability properties in the canonical tree}
We have defined (possibly thick or pruned)(Definition \ref{Mcone}) cones as subsets of $M$. But they have their counterparts in the canonical tree that we define below. So cones are subsets of $M$ as well as of $T(M)$, we hope the context and the distinct notation ${\mathcal C}$ or $\Gamma$ will make the choice clear. \\
As previously, when we want to make a difference, Latin letters $x, y, etc...$ denote nodes of $T(M)$ which are not leaves and Greek letters $\alpha, \beta, etc...$ leaves.
\begin{defi}\label{Tcone} \begin{itemize} \item For $\alpha$ and $\beta$ two distinct elements of $M$, the subset of $T(M)$: $\Gamma(\alpha \wedge \beta, \beta) := \{t \in T(M); \alpha \wedge \beta < t \wedge \beta \}$ is called the {\em cone} of $\beta$ at $\alpha \wedge \beta$\footnote{Be aware that in \cite{H-M} a cone of nodes always contains its basis, in other words a cone at $a$ is the union of $a$ and what we call here a cone.}. Note that it is the canonical tree of ${\mathcal C}(\alpha \wedge \beta, \beta)$. \\ As for cones in $M$, we also use the notation, for elements $y > x$ from $T$, $\Gamma(x,y) := \Gamma(x,\alpha)$ for any (or some) $\alpha \in M$ such that $br(\alpha)$ contains $y$ and we say that $\Gamma(x,y)$ is the cone of $y$ at $x$. \item For $\alpha$ and $\beta$ in $M$, the subset of $T(M)$: $\Gamma (\alpha \wedge \beta)= \{t \in T(M); \alpha \wedge \beta \leq t \}$ is called the {\em thick cone} at $\alpha \wedge \beta$.
Note that it is the canonical tree of ${\mathcal C}(\alpha \wedge \beta)$. Let $x$ be a node of $T(M)$, note that $\Gamma (x)= \bigcup\limits_{\stackrel{\alpha \in M}{x \in br(\alpha)}} \Gamma(x, \alpha) \cup \{x\}$. \item For $x < y \in T(M)$, the {\em pruned cone} at $x$ of $y$ is the set $\Gamma(]x,y[) = \{ t \in T(M); x < (t \wedge y) < y\} = \Gamma(x, \beta) \setminus \Gamma(y)$ where $\beta$ is any leaf whose branch contains $y$. It is the canonical tree of ${\mathcal C}(]x,y[)$. The interval $]x, y[$ is called the {\em axis} of the pruned cone. \\ \end{itemize} \noindent The \emph{basis} of a (possibly thick or pruned) cone is defined analogously to what is done for subsets of $M$. \end{defi}
\begin{defi} We say that a leaf $\alpha$ of $T$ is \rm{isolated} if there exists a node $x$ in $T$ such that $x < \alpha $ and there is no node between $x$ and $\alpha$, in other words, $\alpha$ gets a predecessor in $T$. If $\alpha$ is an isolated leaf, then its unique predecessor is denoted by $p(\alpha)$. \end{defi}
\begin{defi}\label{inner} Let $x$ be a node of $T$. We say that a cone $\Gamma$ at $x$ is an {\rm inner cone} if the following two conditions hold: \begin{enumerate} \item $x$ has no successor on any branch $br(\alpha)$ where $\alpha$ is a leaf and $\alpha \in \Gamma$. Note that $x$ has a successor (say $x^+$) on $br(\alpha)$ for some $\alpha \in \Gamma$ iff $\Gamma$ is a thick cone (the thick cone at $x^+$). \item
There exists $t \in \Gamma$ such that, for any $t' \in T$ with $x < t' < t$, $t'$ is of the same tree-type as $x$.
\end{enumerate}
Otherwise, we say that $\Gamma$ is a {\rm border cone}.
\end{defi}
\begin{rem} An inner cone is always infinite. The cone $\Gamma(p(\alpha), \alpha)$ at the predecessor $p(\alpha)$ of an isolated leaf $\alpha$ is a border cone which consists only of that leaf. \end{rem}
\begin{defi}\label{color of a node}
The \emph{color} of a node $x$ of a tree $T$ is the pair $(m,\mu) \in ({\baton N} \cup \{\infty \})^2$ where $m$ is the number of border cones at $x$ and $\mu$ is the number of inner cones at $x$. \end{defi}
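For a schematic illustration (the configuration below is assumed purely for the sake of the example and is not part of the formal development), the color simply records the two counts of Definition \ref{inner} separately:

```latex
% Assumed configuration: a node x carrying exactly three cones,
%   \Gamma_1 = \{\alpha_1\},\ \Gamma_2 = \{\alpha_2\}: two isolated leaves, hence border cones
%   (cf. the remark above), and \Gamma_3 infinite, with every node strictly between x and
%   some t \in \Gamma_3 of the same tree-type as x, hence an inner cone.
\[
m = 2 \quad \text{(border cones at } x\text{)}, \qquad
\mu = 1 \quad \text{(inner cones at } x\text{)},
\]
\[
\text{so the color of } x \text{ is } (m,\mu) = (2,1).
\]
```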
\begin{lem}\label{lem:color definable in pure order} Suppose the $C$-set ${\mathcal M}$ is finite or $\aleph_0$-categorical. Then the color of a node of $T(M)$ is $\emptyset$-definable in the pure order of $T(M)$, which means that there are unary formulas $\varphi_k$ and $\psi_k$, $k \in {\baton N} \cup \{ \infty \}$, of the language $\{ < \}$ such that, for any node $x$ of $T(M)$ and $k$, $$ T(M) \models \varphi_k (x) \mbox{ iff there are exactly $k$ border cones at } x,$$ $$ T(M) \models \psi_k (x) \mbox{ iff there are exactly $k$ inner cones at } x.$$ \end{lem}
\pr By the Ryll-Nardzewski Theorem, or finiteness, Condition 2 of Definition \ref{inner} is first-order. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\section{Canonical trees of indiscernible finite or $\aleph_{0}$-categorical $C$-minimal $C$-sets}
We say that a structure is {\it indiscernible} if it realizes only one complete $1$-type over $\emptyset$.
\subsection{Indiscernible finite or $\aleph_{0}$-categorical $C$-structures with $o$-minimal branches in their canonical trees}
\begin{defi} A {\em basic} interval of a linearly ordered set $O$ will mean a singleton or a dense (nonempty and infinite) convex subset with bounds in $O \cup \{- \infty\}$.\end{defi}
For $T$ a good tree and $\alpha$ a leaf of $T$ the set $br(\alpha)$ is a chain of $T$ with maximal element $\alpha$.
\begin{defi} A basic {\emph{one-typed} interval of} $T$ is a basic interval, say $I$, of $br(\alpha) \setminus \{ \alpha \}$ for some leaf $\alpha$ of $T$, such that all elements of $I$ have the same tree-type over $\emptyset$. \end{defi}
\begin{theo}\label{theo:ind1} Let ${\mathcal M}$ be an indiscernible finite or $\aleph_{0}$-categorical $C$-structure. Let $T$ be its canonical good tree. Assume that for each leaf $\alpha $ of $T$, any subset of the chain $br (\alpha)$ definable in $T$ is a finite union of basic intervals with bounds in $br (\alpha) \cup \{ -\infty\}$.
Then there exists an integer $n \geq 1$ such that for any leaf $\alpha$ of $T$, the branch $br(\alpha)$ can be written as a disjoint union of its leaf and $n$ basic one-typed intervals, $br(\alpha) = \bigcup_{j=1}^{n} I_{j}(\alpha) \cup \{\alpha\}$ with $I_{j}(\alpha) < I_{j+1}(\alpha)$. This decomposition is unique if we assume that the $I_{j}(\alpha)$ are maximal one-typed, that is, $I_{j}(\alpha) \cup I_{j+1}(\alpha)$ is not a one-typed basic interval. Possible forms of each $I_{j}(\alpha)$ are $\{ x \}$, $]x,y[$ and $]x,y]$. The decomposition is independent of the leaf $\alpha$, that is, the form (a singleton or not, open or closed on the right) of $I_{j}(\alpha)$ for a fixed $j$
as well as the tree-type of its elements do not depend on the leaf $\alpha$.
\end{theo}
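As a minimal illustration of the statement (assuming depth $n = 1$, that is, all nodes of $br(\alpha) \setminus \{\alpha\}$ of the same tree-type; this is a special case of the theorem, not a new claim), the decomposition takes one of three shapes:

```latex
% Assumed: n = 1, so br(\alpha) = I_1(\alpha) \cup \{\alpha\} with I_1(\alpha) one of:
\[
I_{1}(\alpha) = \{p(\alpha)\}, \qquad
I_{1}(\alpha) = \; ]-\infty, \alpha[\;, \qquad
I_{1}(\alpha) = \; ]-\infty, p(\alpha)]\;,
\]
% a singleton (its unique element is then the root of T),
% an interval open on the right (\alpha has no predecessor),
% or an interval closed on the right (\alpha isolated, with predecessor p(\alpha)).
```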
\begin{rem} Remember (Proposition \ref{Cminimal}) that Haskell and Macpherson have shown that, if ${\mathcal M}$ is $C$-minimal, then for each leaf $\alpha$, any subset of $br (\alpha)$ definable in $T$ is a finite union of intervals with bounds in $br (\alpha) \cup \{ -\infty\}$. Thus the conclusion of the above theorem remains the same if we add the hypothesis that ${\mathcal M}$ is $C$-minimal and remove the condition on $Br_l (T)$. \end{rem}
\noindent {\bf Proof of Theorem \ref{theo:ind1}}. In the following, a ``branch of $T$'' will always mean a branch with a leaf, i.e. an element of $Br_{l}(T)$. By the Ryll-Nardzewski Theorem, the $\aleph_{0}$-categoricity of ${\mathcal M}$ implies that for any integer $p$ there is a finite number of $p$-types over $\emptyset$. Now $T$ is interpretable without parameters in ${\mathcal M}$, where it appears as a definable quotient of $M^2$. Since there is a finite number of $2p$-types over $\emptyset$ in $M$, there is a finite number of $p$-types in $T$. Hence, $T$ is finite or $\aleph_{0}$-categorical. Thus we can partition the tree $T$ into finitely many sets $S$ such that two nodes in $T$ have the same complete type over $\emptyset$ iff they are in the same set $S$. The trace on any branch $br(\alpha)$ of such a set $S$ is definable and thus, by $o$-minimality, a finite union of intervals. In fact it consists of a unique interval: if a node $x$ belongs to the leftmost interval of $S \cap br (\alpha)$, then by definition of the sets $S$ any other element of $S \cap br (\alpha)$ will too (consider the parameter-free formula $\exists \beta \in L \ $( $x$ belongs to the first interval of $br(\beta))$). For the same reason, if $S \cap br (\alpha)$ has a first element, then this interval is in fact a singleton. (We are here making use of the tree structure: the set $\{ y \in T ; y<x \}$ is linearly ordered.)\\ Hence, for a given leaf $\alpha$, $br(\alpha)$ is the order sum of finitely many maximal one-typed intervals.
Using indiscernibility, the number of such basic intervals, the form (singleton, open or closed on the right) of each of them, and the tree-type of its elements depend only on the index and not on the branch. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{intersection of I's} Let $\alpha$, $\beta$ be two distinct leaves of $T$. Let $j^{\star}$ be the unique index such that $\alpha \wedge \beta \in I_{j^{\star}}(\alpha)$. Then, $\forall j < j^{\star}$, $I_{j}(\alpha)= I_{j}(\beta)$. Moreover, $I_{j^{\star}}(\alpha)\cap I_{j^{\star}}(\beta)$ is an initial segment of both $I_{j^{\star}}(\alpha)$ and $ I_{j^{\star}}(\beta)$. \end{lem} \pr By definition, $ br(\alpha) \cap br(\beta) = I_{1}(\alpha) \cup \cdots \cup I_{j^{\star}-1}(\alpha) \cup \{t \in I_{j^{\star}}(\alpha); t \leq \alpha \wedge \beta\}$ (or $\{t \in I_{j^{\star}}(\alpha); t \leq \alpha \wedge \beta\}$ if $j^{\star}= 1$). The same is true with $\beta$ instead of $\alpha$.\\ Therefore, by definition and uniqueness of the partition of each branch into maximal basic one-typed intervals, we get $\forall j < j^{\star}$,
$I_{j}(\alpha) = I_{j}(\beta)$. Moreover, $\{t \in I_{j^{\star}}(\alpha); t \leq \alpha \wedge \beta\} = \{t \in I_{j^{\star}}(\beta); t \leq \alpha \wedge \beta\} = I_{j^{\star}}(\alpha)\cap I_{j^{\star}}(\beta)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\subsection{Precolored good trees} By Lemma \ref{lem:color definable in pure order} all nodes of a one-typed basic interval are of the same color. In order to describe the theory of the canonical tree of an indiscernible $\aleph_{0}$-categorical or finite $C$-minimal $C$-structure, we now define precolored good trees, which are constructed from the conclusion of Theorem \ref{theo:ind1}, replacing ``one-typed basic interval'' by the (in general different) notion of ``one-colored basic interval''. \\[2 mm]
In this subsection, $T$ will be a good tree, $L$ its set of leaves and $N$ its set of nodes.
\begin{defi}{One-colored basic interval}\label{def:one-colored interval}\\ We say that a basic interval $I$ of $br(\alpha) \setminus \{ \alpha \}$ for some leaf $\alpha$ of $T$ is \emph{one-colored} if $I$ satisfies one of the following conditions:
\begin{enumerate} \item[(0)] $I$ is a singleton $\{ x \}$ and the color of $x$ is $(k,0)$, for $k$ a natural number greater than or equal to $2$, or infinity, that is, there are exactly $k$ distinct cones at $x$, all border cones. We say that $I$ is of \emph{color} $(k, 0)$. \item[(1.a)] $I$ is open on both the left and right sides: $I= ]x, y[$. Any element of $I$ is of color $(0, k)$, for $k$ an integer greater than or equal to $2$, or infinity, that is, there are exactly $k$ distinct cones at any element of $I$, and all are inner cones. We say that the basic interval $I$ is of \emph{color} $(0, k)$. \item[(1.b)] $I$ is open on the left side and closed on the right side: $I= ]x, y]$, and any element of $I$ is of color $(m, \mu)$, for $m, \mu \in {\baton N}^{\ast} \cup \{\infty\}$, that is, there are exactly $m$ border cones (i.e. $m$ distinct leaves) and $\mu$ inner cones at any point of $I$. We say that the basic interval $I$ is of \emph{color} $(m, \mu)$. \end{enumerate}
\end{defi}
\begin{defi}\label{def: precolored good tree} We say that $T$ is a \emph{precolored good tree} if there exists an integer $n$ such that for all $\alpha \in L$: \begin{itemize} \item[(1)] the branch $br(\alpha)$ can be written as a disjoint union of its leaf and $n$ basic one-colored intervals $br(\alpha) = \bigcup_{j=1}^{n} I_{j}(\alpha) \cup \{\alpha\}$, with $I_{j}(\alpha) < I_{j+1}(\alpha)$. \item[(2)] The $I_{j}(\alpha)$ are maximal one-colored, that is, $I_{j}(\alpha) \cup I_{j+1}(\alpha)$ is not a one-colored basic interval, and for all $j \in \{1, \cdots, n\}$, the color of $I_{j}(\alpha)$ is independent of $\alpha$. \item[(3)] For any $\alpha, \beta \in L$ and $j \in \{1, \cdots, n\}$, if $\alpha \wedge \beta \in I_{j}(\alpha)$, then $\alpha \wedge \beta \in I_{j}(\beta)$, $I_{j}(\alpha) \cap I_{j}(\beta)$ is an initial segment of both $I_{j}(\alpha)$ and $I_{j}(\beta)$; and for any $i < j$, $I_i(\alpha) = I_i(\beta)$. \end{itemize} The integer $n$, which is unique by maximality of the basic one-colored intervals, is called the \emph{depth} of the precolored good tree $T$. \end{defi}
\begin{cor}\label{cor:colored good tree} Let $M$ be a finite or $\aleph_{0}$-categorical, indiscernible and $C$-minimal $C$-set. Then $T(M)$ is a precolored good tree. \end{cor} \pr The result follows directly from Theorem \ref{theo:ind1}, Lemma \ref{lem:color definable in pure order} and Lemma \ref{intersection of I's}. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop}\label{isolated or not} Let $T$ be a precolored good tree. Then either all leaves of $T$ are isolated, or no leaf of $T$ is isolated. \end{prop} \pr Let $\alpha$ be a leaf of $T$. Assume that $\alpha$ has a predecessor $p(\alpha)$; then the last interval $I_n(\alpha)$ is closed on the right, that is, either $I_n(\alpha) = \{p(\alpha)\}$ of color $(k,0)$, or $I_n(\alpha) = ]x, p(\alpha)]$ of color $(m, \mu)$ with $m \neq 0$. By definition of precolored good trees, either for any leaf $\beta$, the last interval of $br(\beta)$ is of color $(k,0)$, or for any leaf $\beta$, the last interval of $br(\beta)$ is of color $(m,\mu)$ with $m \neq 0$. In both cases, $\beta$ has a predecessor. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{def:partial e}
Definition of the functions $e_{1}, \dots, e_{n-1}$ on leaves.\\ Let $T$ be a precolored good tree of depth $n$. For any leaf $\alpha$ and for $1 \leq j < n$, we denote by $e_{j}(\alpha)$ the lower bound of $I_{j+1}(\alpha)$ and by $E_{j}$ the range of the function $e_{j}$. \end{defi} \begin{prop}\label{prop: partial e} Let $T$ be a precolored good tree of depth $n$. Let $\alpha$, $\beta$ be two leaves of $T$. For $1 \leq j < n$, if $e_{j}(\alpha)$, $e_{j}(\beta) \leq \alpha \wedge \beta$, then $e_{j}(\alpha) = e_{j}(\beta)$. Hence, we can extend the functions $e_{j}$ to partial functions from $T$ to $N$ in the following way:
$Dom(e_{j}) = \bigcup_{\alpha\in L} (\{e_{j}(\alpha)\} \cup I_{j+1}(\alpha) \cup \cdots \cup I_{n}(\alpha) \cup \{ \alpha \}) $, and,
$\forall \alpha \in L, \forall x \in br(\alpha) \cap Dom(e_{j})$, $e_{j}(x) = e_{j}(\alpha)$. \\ The range of $e_j$ is still $E_j$. The partial functions $e_j$ are definable in the pure order. \end{prop}
\pr Let $\alpha$, $\beta$ be two leaves and $j$ an index such that $e_{j}(\alpha)$, $e_{j}(\beta) \leq \alpha \wedge \beta$. We can assume without loss of generality that $e_{j}(\alpha) \leq e_{j}(\beta) \leq \alpha \wedge \beta$. Let $j^\star$ be the unique index such that $\alpha \wedge \beta \in I_{j^\star}(\alpha)$. By definition of $e_j$, $j + 1 \leq j^\star$. Either $j + 1 < j^\star$ and, by Definition \ref{def: precolored good tree} (3), $I_{j+1}(\alpha) = I_{j+1}(\beta)$, hence $e_{j}(\alpha) = e_{j}(\beta)$; or $j + 1 = j^\star$, and by Definition \ref{def: precolored good tree} (3) again, $I_{j+1}(\alpha) \cap I_{j+1}(\beta)$ is an initial segment of both $I_{j+1}(\alpha)$ and $I_{j+1}(\beta)$, hence $e_{j}(\alpha) = e_{j}(\beta)$. \\ By Lemma \ref{lem:color definable in pure order}, the color of a node is definable in the pure order. Now, all nodes of $I_{j}(\alpha)$ have the same color, $I_{j}(\alpha)$ is a maximal interval of $br(\alpha)$ with this property, and there are only finitely many such maximal intervals in $br(\alpha)$. This shows that the bounds of $I_{j}(\alpha)$ are $\{ \alpha \}$-definable in the pure order.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm] The next proposition describes the form of maximal basic one-colored intervals in terms of the functions $e_j$.
By convention, when a basic interval is denoted $]a,b[$, $b$ has no predecessor. {\bf We extend the definition of} $p$: for any $c \in T$ having a predecessor, this predecessor is denoted $p(c)$.
\begin{prop}\label{intervals of precolored} Let $T$ be a precolored good tree of depth $n$.\\ Assume first $n = 1$. Then, uniformly in $\alpha$, $I_1(\alpha)$ is of the form, either (0): $\{r\} = \{p(\alpha)\}$ where $r$ is the root, or (1.a): $]- \infty, \alpha[$, or (1.b): $]- \infty, p(\alpha)]$. \\ Assume now $n >1$. Then, uniformly in $\alpha$, \\ - $I_1(\alpha)$ is of the form, either (0): $\{ r \}$ (and $r = e_1(\alpha)$ or $r=p(e_1(\alpha))$), or (1.a): $]- \infty, e_{1}(\alpha)[$, or (1.b): $( \; ]- \infty, e_1(\alpha)]$ or $]- \infty, p(e_1(\alpha)]\; )$; \\ - for $2 \leq j \leq n-1$, $I_j(\alpha)$ is of the form, either (0): $\{ e_{j-1}(\alpha) \}$, or (1.a): $]e_{j-1}(\alpha), e_{j}(\alpha)[$, or (1.b): $(\; ]e_{j-1}(\alpha), e_{j}(\alpha)]$ or $]e_{j-1}(\alpha), p(e_{j}(\alpha))]\; )$; \\ - $I_n(\alpha)$ is of the form, either (0): $\{e_{n-1}(\alpha)\} = \{p(\alpha)\}$, or (1.a): $]e_{n-1}(\alpha), \alpha[$, or (1.b): $]e_{n-1}(\alpha), p(\alpha)]$. \\ Moreover, for $j < n$, if $I_{j}(\alpha)$ is open on the right, then $I_{j+1}(\alpha)$ is a singleton. \\ Finally $T$ has isolated leaves iff $I_n(\alpha)$ is of form (0) or (1.b). \end{prop}
\pr Note first that $I_{1}(\alpha)$ is a singleton iff $T$ has a root, and in this case the unique element of $I_{1}(\alpha)$ must be this root. \\ Case $n = 1$. Then, for any leaf $\alpha$, $br(\alpha) = I_1(\alpha) \cup\{\alpha\}$, so, by definition of one-colored basic intervals, the assertion is clear.\\ Case $n > 1$. For $j < n$, recall that $e_{j}(\alpha)$ is the lower bound of $I_{j+1}(\alpha)$. If $I_{j+1}(\alpha)$ is a singleton, then its unique element must be $e_{j}(\alpha)$. If $I_{j+1}(\alpha)$ is not a singleton, it is open on the left, hence $e_{j}(\alpha)$ is in $I_{j}(\alpha)$. \\ If $I_{1}(\alpha) = \{r\}$, then $r = e_{1}(\alpha)$ if $I_2(\alpha)$ is not a singleton, and $r = p(e_{1}(\alpha))$ otherwise. If $I_{1}(\alpha)$ is open on the right, it must be case $(1.a)$. If it is closed on the right, either $I_{2}(\alpha)$ is the singleton $\{e_{1}(\alpha) \}$, hence $I_{1}(\alpha) = ]- \infty, p(e_1(\alpha)]$, or $I_{2}(\alpha)$ is open on the left with lower bound $e_{1}(\alpha)$, hence $I_{1}(\alpha) = ]- \infty, e_1(\alpha)]$. \\ For $2 \leq j \leq n-1$, the argument is similar. The case $j = n$ is similar to the case $n = 1$. \\ The other assertions are trivial. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop}\label{the set of $p$} Let $T$ be a precolored good tree of depth $n$ with isolated leaves. \\ If $I_n(\alpha) = \{p(\alpha)\}$ for any $\alpha \in L$, then the set $p(L) := \{p(\alpha); \alpha \in L\}$ is a maximal antichain of $T$. If $I_n (\alpha)= ]e_{n-1}(\alpha), p(\alpha)]$, then $p(L) = {\displaystyle \bigcup_{\alpha \in L} I_n(\alpha)}$. \end{prop} \pr If $I_n(\alpha) = \{p(\alpha)\}$ for any $\alpha \in L$, let $\alpha$ and $\beta$ be two distinct leaves such that $p(\alpha) \leq p(\beta)$. Then $\alpha \wedge \beta = p(\alpha)$. Hence, by Lemma \ref{intersection of I's}, $p(\alpha) = p(\beta)$. This shows that $p(L)$ is an antichain of $T$. To prove it is maximal, let $t \in T$; either $t$ is a leaf and $t > p(t)$, or $t$ is a node, hence there exists a leaf $\alpha$ such that $t < \alpha$, thus $t \leq p(\alpha)$.\\ Assume now $I_n(\alpha) = ]e_{n-1}(\alpha), p(\alpha)]$ (in other words, $I_n(\alpha)$ is of type $(1.b)$) and let $x \in I_n(\alpha)$. Suppose that $x < p(\alpha)$; then $\Gamma(x, \alpha)$ is an inner cone at $x$, by definition of inner cones. By Definition \ref{def:one-colored interval} $(1.b)$, there exists a border cone at $x$, say $\Gamma(x, \beta)$, hence $x = p(\beta) \in p(L)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\section{$1$-colored good trees}
In Section 6 we will introduce a very concrete class, the class of colored good trees, which will turn out to coincide with the class of precolored good trees. Its definition is inductive. The present section defines $1$-colored good trees. Section 5 will present a construction which gives the induction step.
\subsection{Definition}
\begin{defi}\label{defi:$1$-colored}
Let $T$ be a good tree. We say that $T$ is a $1$-\emph{colored good tree} if $T$ satisfies one of the following groups of properties.
\begin{itemize}
\item[(0)]
$T$ consists of a unique node and $m$ leaves, where $m$ is a natural number greater than or equal to $2$, or infinity.
\item[(1.a)] There exists $\mu$, a natural number greater than or equal to $2$ or infinity, such that for any leaf $\alpha$ of $T$,
$]- \infty, \alpha [ $ is densely ordered and at each node of $T$ there are exactly $\mu$ cones, all infinite.
\item[(1.b)] There exists $(m, \mu)$, where $m$ and $\mu$ are natural numbers greater than or equal to $1$ or infinity, such that for any leaf $\alpha$ of $T$, $\alpha$ has a predecessor, the node $p(\alpha)$, $]- \infty, p(\alpha) ] $ is densely ordered
and at each node of $T$ there are exactly $m$ leaves and $\mu$ infinite cones.
\end{itemize}
We will say that (0), (1.a) or (1.b) is the \emph{type} of the $1$-colored good tree and
$(m,0)$, $(0,\mu)$, or $(m, \mu)$ its \emph{branching color}.
\end{defi}
\begin{rem}\label{precolored depth one implies 1-colored} By Proposition \ref{intervals of precolored} a precolored good tree $T$ of depth $1$ is a $1$-colored good tree of branching color $(m, \mu)$, where $(m, \mu)$ is the color of any node of $T$.
\end{rem}
\subsection{Examples}
In the following pictures, a continuous line means a dense order and a dashed line means that there is no node between its two extremities. \definecolor{xdxdff}{rgb}{0.49,0.49,1} \definecolor{qqqqff}{rgb}{0,0,1} \definecolor{uququq}{rgb}{0.25,0.25,0.25} \begin{enumerate} \item[(0)] Trees of form $(0)$ are canonical trees of $C$-sets equipped with the trivial $C$-relation ($C(\alpha, \beta, \gamma)$ iff $\alpha \not= \beta = \gamma$), in other words of pure sets.
\label{dessin:1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=0.8cm]
\clip(-5.5,-12.36) rectangle (14.8,2.6); \draw (2.14,2.58) node[anchor=north west] {$ \alpha_3 $}; \draw [dash pattern=on 2pt off 2pt] (0,0)-- (2,2); \draw (0.28,2.68) node[anchor=north west] {$ \alpha_2 $}; \draw (-1.8,2.62) node[anchor=north west] {$ \alpha_1 $}; \draw [dash pattern=on 2pt off 2pt] (0,2)-- (0,0); \draw [dash pattern=on 2pt off 2pt] (-2,2)-- (0,0); \draw (-2,-1.26) node[anchor=north west] {Fig.1 $\; \;Type \; (0)$ $m = 3, \mu = 0$}; \draw (0.02,-0.22) node[anchor=north west] {$r$};
\fill (0,0) circle (2.5pt); \fill (2,2) circle (2.5pt); \fill (0,2) circle (2.5pt); \fill (-2,2) circle (2.5pt); \end{tikzpicture}
\nopagebreak
\item[(1.a)] Example of color $(0,\mu)$. \\ Let $\mathbb Q$ be the set of rational numbers and $\mu$ an integer $\geq 2$ or $\aleph_0$. Let ${\mathcal M}$ be the set of functions with finite support from $\mathbb Q$ to $\mu$, equipped with the $C$-relation:
$C(\alpha, \beta, \gamma)$ iff the maximal initial segment of $\mathbb Q$ where $\beta$ and $\gamma$ coincide (as functions) strictly contains the maximal initial segment where $\alpha$ and $\beta$ coincide. \\ The thick cone at $\alpha \wedge \beta$ is the set $\{ \gamma \in M ; \gamma$ coincides with $\alpha$ and $\beta$ on the maximal initial segment where $\alpha$ and $\beta$ coincide$\}$. If $\alpha$ and $\beta$ are different and $q$ is the first rational number where $\alpha (q) \not= \beta (q)$, then there are $\mu$ possible values for $\gamma (q)$; in other words, there are $\mu$ different cones at $\alpha \wedge \beta$. So the canonical tree of ${\mathcal M}$ is $1$-colored of type $(1.a)$ and branching color $(0,\mu)$.
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\hspace{3.5 cm}
\draw [rotate around={-178.1:(5.99,3.73)}] (5.99,3.73) ellipse (0.55cm and 0.27cm); \draw [<-](6,3.8)-- (6,0); \draw (6.48,3.62)-- (6,0); \draw (5.46,3.64)-- (6,0); \draw [rotate around={-143.1:(3.85,3.05)}] (3.85,3.05) ellipse (0.55cm and 0.27cm); \draw [<-](3.7,3.1)-- (6,0); \draw (4.32,3.24)-- (6,0); \draw (3.47,2.67)-- (6,0); \draw (4,-1.26) node[anchor=north west] {Fig.2$\;\;Type \; (1.a)$\;$m = 0, \mu = 2$} ;
\draw (6,4.5) node[anchor=north west] {$\alpha_{2}$}; \draw (3.2,4) node[anchor=north west] {$\alpha_{1}$}; \draw (6,0)-- (6,-1); \draw (6.32,-0.15) node[anchor=north west] {}; \fill (6,0) circle (2.5pt); \fill (3.7,3.1) circle (2.5pt); \fill (6,3.8) circle (2.5pt); \end{tikzpicture}
\item[(1.b)] Example of color $(m,\mu)$, $m \geq 1$ and $\mu \geq 2$.\\ Consider a tree $T$ of type $(1.a)$ of color $(0,\mu)$. Decompose it into nodes and leaves as $N \cup L$. For any $m \geq 1$ consider now the tree $N \cup (N \times m)$ with the order extending that of $N$, elements of $N \times m$ all incomparable, and $a < (b,r)$ iff $a \leq b$, for $a,b \in N$ and $r < m$ (in other words: we remove the leaves of $T$ and add $m$ new leaves at each node; so the set of nodes remains the same). This tree is of type $(1.b)$ of color $(m,\mu)$.
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \hspace{4cm}
\draw [rotate around={-178.1:(10.27,3.73)}] (10.27,3.73) ellipse (0.55cm and 0.27cm); \draw [rotate around={-178.1:(8.27,3.6)}] (8.27,3.6) ellipse (0.55cm and 0.27cm);
\draw (10.81,3.68)-- (10,0); \draw (9.71,3.67)-- (10,0); \draw (8.81,3.67)-- (10,0); \draw (7.73,3.50)-- (10,0);
\draw [dash pattern=on 2pt off 2pt] (11.76,2.52)-- (10,0); \draw [dash pattern=on 2pt off 2pt] (12.76,1.52)-- (10,0); \draw (8,-1.26) node[anchor=north west] {Fig. 3 $\;\;Type \; (1.b)$ $m = 2, \mu = 2$};
\draw (10,0)-- (10,-1.02); \begin{scriptsize}
\draw (10.2,0) node[anchor=north west] {$p(\alpha_{1})= p(\alpha_{2})$};
\fill (10,0) circle (2.5pt);
\fill (11.76,2.52) circle (2.5pt);
\fill (12.76,1.52) circle (2.5pt); \draw (12.22,2.78) node {$\alpha_{1}$}; \draw (13.22,1.78) node {$\alpha_{2}$}; \end{scriptsize} \end{tikzpicture}
\indent
Example of color $(m,\mu)$, $m \geq 1$ and $\mu = 1$.\\ The construction is similar to the previous one: for $O$ a dense linear order without endpoints and $m$ a natural number greater than or equal to $1$ or infinity, consider the tree $T = O \cup (O \times m)$ with the order extending that of $O$, elements of $O \times m$ all incomparable, and $a < (b,r)$ iff $a \leq b$, for $a,b \in O$ and $r < m$. The set of nodes of $T$ is $O$, the vertical line in the picture below. It is a branch without a leaf, i.e. a maximal chain of $T$ without a greatest element, the unique such branch in $T$. Note that $O$ is definable in $T$. Furthermore $O$ and $T$ are bi-interpretable (for $m=\infty$ we have to assume $T$ and $O$ countable).
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \hspace{5cm}
\draw (10,3)-- (10,0);
\draw [dash pattern=on 2pt off 2pt] (11.3,0.6)-- (10,-0.3); \draw [dash pattern=on 2pt off 2pt] (11.3,1.6)-- (10,0.7); \draw [dash pattern=on 2pt off 2pt] (11.3,2.6)-- (10,1.7); \draw (8,-1.26) node[anchor=north west] {Fig. 4$\;\;Type \;(1.b)$ $m = 1, \mu = 1$};
\draw (10,0)-- (10,-1.02);
\draw (10.1,0) node[anchor=north west] {$p(\gamma )$}; \draw (10.1,1) node[anchor=north west] {$p(\beta)$}; \draw (10.1,2) node[anchor=north west] {$p(\alpha)$}; \draw (11.5,1.3) node {$\beta$}; \draw (11.5,0.3) node {$\gamma$}; \draw (11.5,2.3) node {$\alpha$};
\fill (11.3,2.6) circle (2.5pt); \fill (11.3,1.6) circle (2.5pt); \fill (10,-0.3) circle (2.5pt); \fill (10,0.7) circle (2.5pt); \fill (10,1.7) circle (2.5pt);
\fill (11.3,0.6) circle (2.5pt);
\end{tikzpicture}
\end{enumerate}
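The $C$-relation of the type $(1.a)$ example above (finitely supported functions from $\mathbb Q$ to $\mu$) can be verified on explicit functions; the three functions below, for $\mu = 2$, are chosen purely for illustration:

```latex
% Illustrative choice: \alpha, \beta, \gamma : \mathbb{Q} \to 2 with finite support,
%   \alpha \equiv 0, \qquad \beta(0) = 1 \text{ and } \beta = 0 \text{ elsewhere},
%   \gamma(0) = \gamma(1) = 1 \text{ and } \gamma = 0 \text{ elsewhere}.
% \alpha and \beta first differ at 0, while \beta and \gamma first differ at 1,
% so the maximal initial segments of agreement are:
\[
]-\infty, 0[ \;\text{ for } \alpha, \beta, \qquad\qquad ]-\infty, 1[ \;\text{ for } \beta, \gamma.
\]
% Since ]-\infty, 0[ \;\subsetneq\; ]-\infty, 1[\,, the relation C(\alpha, \beta, \gamma) holds,
% in other words \alpha \wedge \beta < \beta \wedge \gamma in the canonical tree.
```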
\subsection{Axiomatisation and quantifier elimination}
\begin{defi}\label{theory of 1-colored} For $m$ and $\mu$ in ${\baton N} \cup \{ \infty \}$ such that $m + \mu \geq 2$, we denote $\Sigma_{(m, \mu)}$ the set of axioms in the language ${\mathcal L}_1:=\{L, N, \leq, \wedge\} $ describing $1$-colored good trees of branching color $(m, \mu)$, and $S_1$ the set of all these ${\mathcal L}_1$-theories, $S_1 := \{\Sigma_{(m, \mu)} ; (m,\mu) \in ({\baton N} \cup \{ \infty \}) \times ({\baton N} \cup \{ \infty \})$ with $m + \mu \geq 2\}$.
\end{defi}
When dealing with models of $\Sigma_{(m, \mu)}$, $\mu \not= 0$, we want to have the predecessor function in the language. For this reason we introduce $D_p := \{ x ; \{ y ; y<x \} \mbox{ has a maximal element } \}$, $p$ the function equal to the predecessor function on $D_p$ and to the identity on its complement, and $F_p := p(D_p)$. Note that these definitions make sense in any tree, and that
in a model of $\Sigma_{(m, \mu)}$ with $m \not= 0$ we have $D_p=L$ and $F_p=N$.
\begin{defi}\label{def:p} ${\mathcal L}_1 :=\{L, N, \leq, \wedge \}$ and ${\mathcal L}_{1}^+ := {\mathcal L}_1 \cup \{ p, D_p, F_p \} $.
\end{defi}
\begin{prop}\label{prop:va et vient} Any theory in $S_{1}$ is $\aleph_{0}$-categorical, hence complete. Moreover, it admits quantifier elimination in a natural language, $\Sigma_{(m, 0)}$ in $\{ L, N \}$, $\Sigma_{(0, \mu)}$ in
${\mathcal L}_1$ and $\Sigma_{(m, \mu)}$ with $m, \mu \not= 0$ in ${\mathcal L}_{1}^+$ (namely in $\{L, N, \leq, \wedge, p \}$).
\end{prop}
\pr Trees of form $(0)$ consist of one node and leaves. They are clearly $\aleph_{0}$-categorical and eliminate quantifiers in the language $\{ L, N \}$. \\ So from now on, we assume that $\Sigma = \Sigma_{(m,\mu)}$, where $\mu \neq 0$. Note that in this case, a model of $\Sigma$ has no root. We will prove $\aleph_{0}$-categoricity and quantifier elimination using a back-and-forth between finite ${\mathcal L}_{1}$-substructures in the case where $m = 0$ (and ${\mathcal L}_{1}^+$-substructures in the case where $m \neq 0$) of any two countable models of $\Sigma$, say $T$ and $T'$. We will use the following facts. \\[2 mm]
{\bf Fact 0:} 1. Assume first $m=0$. Then all leaves (respectively all nodes) of $T$ and $T'$ have same quantifier free ${\mathcal L}_{1}$-type. Any singleton is an ${\mathcal L}_{1}$-substructure. \\ 2. Assume now $m \not= 0$. Then all leaves (respectively all nodes) of $T$ and $T'$ have same quantifier free ${\mathcal L}_{1}^+$-type. Any node is an ${\mathcal L}_{1}^+$-substructure. If $\alpha$ is a leaf, then $\{ \alpha, p(\alpha) \}$ is an ${\mathcal L}_{1}^+$-substructure. \\
\pr Completeness of the quantifier-free types `$t \in N$' and `$t \in L$' is proven by inspection of quantifier-free formulas. The assertions about substructures are clear. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi \\[2 mm]
In what follows $A$ is a finite subset of $T$ which is a substructure in the language ${\mathcal L}_{1}$ if $m = 0$ (resp. ${\mathcal L}_{1}^+$ if $m \not= 0$), hence closed under $\wedge$ (resp. $\wedge$ and $p$), and $\varphi$ is a partial ${\mathcal L}_{1}$-isomorphism (resp. ${\mathcal L}_{1}^+$-isomorphism) from $T$ to $T'$ with domain $A$. \\[2 mm]
{\bf Fact 1:} Let $t$ be an element of $T$, $t \notin A$. Then there exists a unique node $n_{t}$ of $T$ such that $n_{t}$ is less than or equal to an element of $A$, and for any $a \in A$, $t \wedge a = n_{t} \wedge a$. \\
\pr The set $B = \{ t \wedge a ; a \in A \}$ is a linearly ordered finite set (of nodes, since $t$ is not in $A$). Let $n_{t}$ be its greatest element. So, there exists $y \in A$ such that $n_{t} = t \wedge y$, and therefore $n_{t} \leq y$. Moreover, it is easy to see that, since $n_{t}$ is the greatest element of $B$, for any $a \in A$, $t \wedge a = n_{t} \wedge a$. Uniqueness is clear.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi \\[2 mm]
Note that $n_t \leq t$, and $n_t = t$ iff $t$ is a node smaller than an element of $A$. \\[2 mm]
\noindent {\bf Fact 2:} Assume first that $m = 0$. Let $t \in T\setminus A$. Then the ${\mathcal L}_{1}$-substructure $\left\langle A \cup \{t\} \right\rangle$ generated by $A$ and $t$ is the minimal subset containing $A$, $t$ and $n_t$ (that is, $A \cup \{ t,n_t\}$ if $n_t \not= t$ and $A \cup \{ t \}$ if $n_t = t$). \\ Assume now that $m \neq 0$. Let $x$ be a node of $T \setminus A$. Then the ${\mathcal L}_{1}^+$-substructure $\left\langle A \cup \{x\} \right\rangle$ generated by $A \cup \{ x \}$ is the minimal subset containing $A$, $x$ and $n_x$. If $\alpha$ is a leaf of $T \setminus A$, the ${\mathcal L}_{1}^+$-substructure $\left\langle A \cup \{\alpha\} \right\rangle$ generated by $A \cup \{ \alpha \}$ is the minimal subset containing $A$, $\alpha$, $n_\alpha$ and $p(\alpha)$.\\ \pr Assume first that $x$ is a node of $T\setminus A$. Then for any $a \in A$, $x \wedge a = n_x \wedge a$. By definition, there is $z \in A$ such that $n_x \leq z$, so for any $a \in A$, $n_x \wedge a = n_x$ or $n_x \wedge a = z \wedge a \in A$. Now $p(x)=x$. Thus $\left\langle A \cup \{ x \} \right\rangle = A \cup \{ x, n_x\}$ (or $A \cup \{ x \}$ if $n_x = x$). \\ Assume now that $\alpha$ is a leaf of $T\setminus A$. If $\alpha$ is not isolated, the same argument applies. If $\alpha$ is isolated, then for any $a \in A$, $p(\alpha) \wedge a = \alpha \wedge a = n_\alpha \wedge a$. And as above, the minimal subset containing $A$, $\alpha$, $n_\alpha$ and $p(\alpha)$ is closed under $p$ and $\wedge$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi \\[2 mm]
\noindent {\bf Fact 3:} Let $\Gamma$ be a cone at $a \in A$, such that $\Gamma \cap A = \emptyset$. Then there exists a cone $\Gamma'$ of $T'$ at $\varphi(a)$ such that $\Gamma' \cap \varphi(A) = \emptyset$. Moreover, if $\Gamma$ is infinite, resp. consists of a single leaf, then there is such a $\Gamma'$ infinite, resp. consisting of a single leaf.\\ \pr If $\Gamma$ is an infinite cone and $\mu$ is infinite, resp. $\Gamma= \{\alpha\}$ and $m$ is infinite, the result is obvious since $A$ is finite.\\
If now $\Gamma$ is infinite and $\mu$ is finite, there are exactly $\mu$ infinite cones at both $a$ and $\varphi(a)$; since $A_{>a} := \{ x \in A ; x>a \}$ and $A'_{>\varphi(a)} := \{ x \in A' ; x>\varphi(a) \}$ have the same quantifier-free type,
one of the cones at $\varphi(a)$, say $\Gamma'$, must be such that $\Gamma' \cap \varphi(A) = \emptyset$.
If $\Gamma= \{\alpha\}$ and $m \neq 0$ is finite, then, $a = p(\alpha)$ and there are exactly $m$ leaves above both $a$ and $\varphi(a)$. We consider again $A_{>a}$ and $A'_{>\varphi(a)}$; since $\alpha \notin A$, there exists $\alpha' \notin \varphi(A)$ above $\varphi(a)$.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm]
\noindent {\bf Fact 4:} Let $x \in T \setminus A$ such that $n_x = x$. Then, $x$ is a node and $\varphi$ can be extended to a partial ${\mathcal L}_{1}$-isomorphism if $m=0$ (resp. ${\mathcal L}_{1}^+$-isomorphism if $m\not=0$) with domain $\left\langle A \cup \{x\} \right\rangle = A \cup \{x\}$. \\
\pr
Since $n_x = x$, $\left\langle A \cup \{x\} \right\rangle$ is equal to $A \cup \{x\}$. Since $A$ is finite and closed under $\wedge$ it contains a smallest element, say $a$, bigger than $x$. If the set $ \{y \in A; y < x\}$ is not empty, set $b := Max \{y \in A; y < x\}$ and $I := ]\varphi(b), \varphi(a)[$; set $I := ]- \infty, \varphi(a)[$ otherwise. If $m = 0$, $I$ is dense. If $m \neq 0$, since $A$ is closed under $p$, $a$ is not a leaf, neither is $\varphi (a)$, so in this case too, $I $ is dense. So in both cases, there is $x'$ in $I$. For such an $x'$, $A \cup \{x\}$ and $\varphi (A) \cup \{x'\}$ are isomorphic trees, closed under $p$ and $\wedge$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm] \noindent {\bf Fact 5:} Let $t \in T \setminus A$. Then $\varphi$ can be extended to a partial ${\mathcal L}_{1}$-isomorphism (resp. a partial ${\mathcal L}_{1}^+$-isomorphism) with domain $\left\langle A \cup \{n_t\} \right\rangle$.\\
\pr By Fact 4. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm] \noindent {\bf Fact 6:} Let $t \in T \setminus A$. Then $\varphi$ can be extended to a partial ${\mathcal L}_{1}$-isomorphism (resp. a partial ${\mathcal L}_{1}^+$-isomorphism) with domain $\left\langle A \cup \{t\} \right\rangle$.\\
\pr
By Fact 5, we can assume that $t \neq n_t$ and $n_t \in A$. Let $\Gamma$ be the cone of $t$ at $n_t$; then by definition of $n_t$, $\Gamma \cap A = \emptyset$. Assume first that $m =0$. Since $\Gamma$ is infinite, there exists, by Fact 3, an infinite cone $\Gamma'$ at $\varphi(n_t)$ such that $\Gamma' \cap \varphi(A) = \emptyset$. Then we can extend $\varphi$ to $\left\langle A \cup \{t\} \right\rangle$, by setting $\varphi(t) = t'$, where $t'$ is any node of $\Gamma'$ if $t$ is a node, or any leaf of $\Gamma'$ if $t$ is a leaf.\\ Assume now that $m \neq 0$. If $\Gamma$ consists of a leaf, id est, $t$ is a leaf and $n_t = p(t)$, then, by Fact 3, there exists a cone $\Gamma'$ at $\varphi(n_t)$ which consists only of a leaf $\alpha'$. Then, we can extend $\varphi$ to $\left\langle A \cup \{t\} \right\rangle$, by setting $\varphi(t) = \alpha'$. If $\Gamma$ is infinite then, by Fact 3, there exists an infinite cone $\Gamma'$ at $\varphi(n_t)$ in $T'$ such that $\Gamma' \cap \varphi(A) = \emptyset$. If $t$ is a node, we can extend $\varphi$ to $\left\langle A \cup \{t\} \right\rangle$ by setting $\varphi(t) = t'$, where $t'$ is any node of $\Gamma'$. If $t$ is a leaf, $p(t) \in \Gamma$ and we can extend $\varphi$ to $\left\langle A \cup \{t\} \right\rangle$ by setting $\varphi(t) = t'$ and $\varphi(p(t)) = p(t')$, where $t'$ is any leaf of $\Gamma'$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm]
By Facts 1 to 6, the family of partial isomorphisms between finite substructures of $T$ and $T'$ respectively has the forth (and back) property, which shows quantifier elimination. By Fact 0 this family is not empty whatever $T$ and $T'$ are. If they are countable, Facts 1 to 6 allow us to extend any of these partial isomorphisms to an isomorphism between $T$ and $T'$, which shows $\aleph_{0}$-categoricity. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{theo}
\label{theo:1-colored are precolored} \begin{enumerate} \item Precolored good trees of depth $1$ are exactly the $1$-colored good trees. For such a tree its color is its branching color. \item If $T$ is such a tree, $M(T)$ is $C$-minimal, indiscernible and $\aleph_0$-categorical (or finite). \end{enumerate} \end{theo} \pr Let $T$ be a $1$-colored good tree of color $(m, \mu)$. By quantifier elimination (in the language $\{L,N\}$, ${\mathcal L}_{1}$ or ${\mathcal L}_1^+$, see Proposition \ref{prop:va et vient}) all nodes of $T$ have same tree-type. Singletons consisting of a leaf (in case $m \neq 0$) are the border cones and the infinite cones (in case $\mu \neq 0$) are the inner cones. Moreover all leaves have same type. So, any branch of $T$ is the union of its leaf and a one-colored basic interval of color $(m, \mu)$ and $T$ is a precolored good tree.\\ Conversely, it has already been noticed in Remark \ref{precolored depth one implies 1-colored} that $1$-precolored good trees are $1$-colored good trees.\\
Again by quantifier elimination, any definable subset of $T$ is a Boolean combination of cones and thick cones, which gives $C$-minimality. $\aleph_0$-categoricity was proven in Proposition \ref{prop:va et vient}. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{cor}\label{cone-equiv} In a 1-colored good tree $T$ of type $(1.a)$ any cone is elementarily equivalent to $T$. If $T$ is of type $(1.b)$ any infinite cone is elementarily equivalent to $T$. If $c$ is a node of $T$ the pruned cone $]-\infty,c[$ is elementarily equivalent to $T$. \end{cor} \pr In all cases the subtree we consider is a 1-colored good tree of same type and same color as $T$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\section{Extension of trees}
\subsection{General construction}\label{sec:construction of extension}
Let $T$ and $T_0$ be two trees.
We define $T \rtimes T_{0}$, the ``extension of $T$ by $T_0$'', as the tree consisting of $T$ in which each leaf is replaced by a copy of $T_{0}$.
More formally, let $L_T$ and $N_T$ be respectively the set of leaves and nodes of $T$, $L_{0}$ and $ N_{0}$ the set of leaves and nodes of $T_0$. As a set, $T \rtimes T_{0}$ is the disjoint union of $N_T$ and $L_T \times T_0$.
The order on $T \rtimes T_{0}$ is defined as follows:
$\forall x, x' \in N_T$, $T \rtimes T_0 \models x \leq x'$ iff $ T \models x \leq x'$;
$\forall (\alpha, t), (\alpha', t') \in L_T \times T_{0} $,
$T \rtimes T_0 \models (\alpha, t) \leq (\alpha', t') $ iff $ T \models \alpha = \alpha'$ and $T_{0} \models t \leq t'$;
$\forall x \in N_T, \; (\alpha, t) \in L_T \times T_{0} $, $T \rtimes T_0 \models x \leq (\alpha, t) $ iff $ T \models x \leq \alpha $. \\ Note that, by construction,
$N_T$ embeds canonically in $T \rtimes T_0$ as an initial subtree of $N_{T \rtimes T_0}$. \\ Some illustrations will be given at the end of the next subsection.
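The order just defined is easy to test on finite instances. The following Python sketch is an illustration only, not part of the paper: the encoding of $T \rtimes T_0$ as tagged tuples and the names \texttt{extension\_leq}, \texttt{leq\_T}, \texttt{leq\_T0} are our own conventions.

```python
# Finite model of the order on T ⋊ T0, encoded as the disjoint union
# N_T ∪ (L_T × T0): nodes of T are tagged ('n', x), pairs ('l', (alpha, t)).

def extension_leq(leq_T, leq_T0):
    """Order on T ⋊ T0 obtained from the three defining clauses."""
    def leq(u, v):
        ku, xu = u
        kv, xv = v
        if ku == 'n' and kv == 'n':             # both in N_T: compare in T
            return leq_T(xu, xv)
        if ku == 'l' and kv == 'l':             # both in L_T × T0
            (a1, t1), (a2, t2) = xu, xv
            return a1 == a2 and leq_T0(t1, t2)  # same copy, compare in T0
        if ku == 'n' and kv == 'l':             # node of T below a copy of T0
            a2, _t2 = xv
            return leq_T(xu, a2)
        return False                            # a pair is never below a node
    return leq

# Tiny example: T has root 'r' with leaves 'a', 'b'; T0 is the chain 0 < 1.
leq_T = lambda x, y: x == y or (x == 'r' and y in ('a', 'b'))
leq_T0 = lambda s, t: s <= t
leq = extension_leq(leq_T, leq_T0)

assert leq(('n', 'r'), ('l', ('a', 0)))           # r below the copy at a
assert leq(('l', ('b', 0)), ('l', ('b', 1)))      # order inside one copy
assert not leq(('l', ('a', 1)), ('l', ('b', 1)))  # distinct copies incomparable
```

Reflexivity, antisymmetry and transitivity of \texttt{leq} follow clause by clause from the corresponding properties of \texttt{leq\_T} and \texttt{leq\_T0}.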
\begin{lem}\label{assoc}
$T \rtimes T_0$ is a tree. \\ If $T$ is a singleton, $T \rtimes T_0$ is the same thing as $T_0$. If $T_0$ is a singleton, $T \rtimes T_0$ is the same thing as $T$. \\ The set of nodes of $T \rtimes T_0$ is the disjoint union $N_T \cup L_T \times N_{0}$, its set of leaves is $ L_T \times L_{0}$. \\ $T \rtimes T_0$ is good if $T$ and $T_0$ are. \\ For trees $T_1$, $T_2$ and $T_3$, $(T_1 \rtimes T_2) \rtimes T_3$ and $T_1 \rtimes (T_2 \rtimes T_3)$ are canonically isomorphic trees.
\end{lem}
\pr Clear from the definition. Associativity comes essentially from the associativity of Cartesian product and Boolean union. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{def: sim} We define the equivalence relation $\sim$ corresponding to the construction of $T \rtimes T_{0}$: \\ - $\sim$ is the equality on $N_T$; \\ - on $L_T \times T_0$ equivalence classes are the copies of $T_0$, id est the subsets $\{ \alpha \} \times T_0$ for $\alpha \in L_T$. \end{defi}
\begin{lem}\label{compatible} Distinct equivalence classes $a,b$ satisfy: $\exists u \in a, \exists v \in b, \ u<v$ iff $\forall u \in a, \forall v \in b, \ u<v$. Consequently the quotient $T \rtimes T_0 / \sim$ inherits the tree structure of $T \rtimes T_0$, and $T \rtimes T_0 / \sim$ and $T$ are isomorphic trees. \\ The $\sim$-class of any element of $N_T$ is a singleton. Consequently the embedding $N_T \subseteq T \rtimes T_0$ gives, when taking $\sim$-classes, the embedding $N_T \subseteq T$. \end{lem} \pr Clear from the definition of the equivalence relation. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
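For readers who like to experiment, here is a minimal Python sketch (ours, not the authors') of the equivalence relation $\sim$ and of the quotient of Lemma \ref{compatible} on a finite example; the tagged-tuple encoding and the helper \texttt{cls} are ad hoc.

```python
# ~-classes on T ⋊ T0 for T with root 'r' and leaves 'a', 'b', and T0 the
# chain 0 < 1: singletons on N_T, one class per copy {alpha} × T0.

N_T, L_T, T0 = ['r'], ['a', 'b'], [0, 1]
points = [('n', x) for x in N_T] + [('l', (a, t)) for a in L_T for t in T0]

def cls(u):
    kind, x = u
    if kind == 'n':
        return ('node', x)     # ~ is equality on N_T
    alpha, _t = x
    return ('copy', alpha)     # the class of (alpha, t) is {alpha} × T0

quotient = {cls(u) for u in points}
# The quotient has one class per node of T and one per leaf of T,
# i.e. it is in bijection with T = N_T ∪ L_T, as the lemma states.
assert len(quotient) == len(N_T) + len(L_T)
```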
\subsection{Extension of good trees }\label{construction of extension good trees}
Recall that $p$ denotes the predecessor (partial) function. \\ From now on $T$ and $T_0$ are good trees, neither of them a singleton, and we furthermore require three conditions.
\begin{defi}\label{def: conditions stars} We define Conditions $(\star)$, $(\star\star)$ and $(\star\star\star)$:\\ $(\star)$ Either all leaves of $T$ are isolated or all leaves of $T$ are non isolated.\\ $(\star\star)$ If $T$ has non isolated leaves, $T_{0}$ should have a root. \\
$(\star\star\star)$ If $T$ has isolated leaves, then $p(L_T)$ is \textit{convex}, \textit{id est} $\forall x,y,z \in T, \ (x,z \in p(L_T) \ \wedge \ x < y <z) \rightarrow y \in p(L_T)$. \end{defi}
\begin{lem}\label{1-color conditions stars} All 1-colored good trees satisfy Conditions $(\star)$ and $(\star\star\star)$. \end{lem}
\pr If $T$ is 1-colored of type $(0)$ all leaves of $T$ are isolated and $p(L_T)$ consists of the root. If $T$ is of type $(1.a)$ all leaves are non isolated. If $T$ is of type $(1.b)$, all leaves are isolated and $p(L_T)$ is equal to the set of nodes of $T$,
which is convex. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
As already noticed, $T \rtimes T_0$ is a good tree with set of leaves $L_{T} \times L_0$ and set of nodes $N_T \cup L_T \times N_{0} $. \\ Let us call $\sigma$ the canonical embedding of $N_T$ in $T \rtimes T_{0}$ and, for each $\alpha \in L_T$, $\tau_\alpha$ the embedding of $T_0$ in $T \rtimes T_{0}$, $x \mapsto (\alpha,x)$. \\ In the case where $T_0$ has a root, $L_T$ also embeds in $T \rtimes T_{0}$ by the map $\rho: \alpha \mapsto (\alpha,r_0)$, where $r_0$ is the root of $T_0$. Via $\sigma$ and $\rho$, $T$ embeds as an initial subtree of $T \rtimes T_0$ and $\tau_\alpha (T_0)$ is the thick cone at $\rho(\alpha)$.
\\ If $T_0$ has no root, the embedding of $N_T$ does not extend naturally to an embedding of $T$ into $T \rtimes T_{0}$ but $T$ will appear as a quotient of $T \rtimes T_{0}$. Define in this case $\rho: L_T \rightarrow T \rtimes T_{0}$ as the (non injective) map $\alpha \mapsto \sigma \circ p (\alpha)$. Note that by $(\star\star)$, $T$ has isolated leaves hence $p(\alpha)$ is defined and is a node of $N_T$ thus $\sigma \circ p (\alpha)$ is well defined. In this case, $\tau_\alpha (T_0)$ is a cone at $\rho(\alpha)$.\\ In both cases, $\rho(\alpha) = \inf \tau_\alpha (T_0)$. \\[2 mm]
From now on we will consider $\sigma$ as the identity and not write it.
\begin{lem}\label{lem: class} For any $(\alpha, t ) \in L_T \times T_0$, if $cl(\alpha,t)$ denotes the equivalence class of $(\alpha,t)$, we have: \\ - $cl(\alpha,t) = \tau_\alpha (T_0)$. \\ - If $T_0$ has a root, say $r_0$, $cl(\alpha,t) $ is the thick cone at $\rho(\alpha)$; so $cl(\alpha,t) = cl(\alpha,r_0)$. \\ - If $T_0$ has no root, $cl(\alpha,t)$ is the cone of $t$ at $\rho(\alpha)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \end{lem}
\begin{defi}\label{definition des e,E...} The partial function $e:T \rtimes T_0 \rightarrow T \rtimes T_0$ is defined as follows: \\ - $Dom (e) = L_T \times T_{0} $ if $T_0$ has a root and $Dom (e) = (L_T \times T_{0}) \cup p(L_T)$ if $T_0$ has no root; \\ - $\forall (\alpha, t) \in L_T \times T_{0}$, $e((\alpha, t)) = \rho(\alpha)$, and if $T_0$ has no root, for any $\alpha \in L_T$, $e(p(\alpha)) = p(\alpha)$. \\ We set $E := \rho (L_T)$,
$E_\geq := \{ x ; \exists y \in E, y \leq x \}$, $E_{>}:= E_{\geq} \setminus E$, $E_{<}$ the complement of $E_{\geq}$ in $T \rtimes T_0$ and $E_{\leq} := E_{<} \cup E$. \end{defi}
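Continuing the finite toy example (our illustration, not the paper's notation; \texttt{points} and the tagged-tuple encoding are assumptions), the map $e$ and the set $E$ of this definition can be computed directly in the case where $T_0$ has a root:

```python
# T has root 'r' and leaves 'a', 'b'; T0 = {0, 1} is rooted at 0, so
# rho(alpha) = (alpha, 0) and E = rho(L_T).

points = [('n', 'r'),
          ('l', ('a', 0)), ('l', ('a', 1)),
          ('l', ('b', 0)), ('l', ('b', 1))]

def e(u):
    kind, x = u
    if kind == 'l':               # Dom(e) = L_T × T0 since T0 has a root
        alpha, _t = x
        return ('l', (alpha, 0))  # e((alpha, t)) = rho(alpha)
    raise ValueError("outside Dom(e)")

E = {e(u) for u in points if u[0] == 'l'}
assert E == {('l', ('a', 0)), ('l', ('b', 0))}  # one point of E per leaf of T
```

Note that $E$ here is an antichain, in accordance with the proposition below.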
\begin{prop}\label{prop: sim} \begin{enumerate} \item\label{root}
If $T_0$ has a root, then $E$ is an antichain and $x \sim y$ iff $( x =y $ or ($x,y \in Dom(e)$ and $e(x) = e(y)))$. \item\label{noroot} If $T_0$ has no root, then $x \sim y$ iff $( x =y$ or ($x,y \in Dom(e)$ and $e(x) = e(y) < x \wedge y))$. \item In both cases, $\forall \alpha \in L, E \cap br(\alpha) \mbox{ has } e(\alpha) \mbox{ as a greatest element}$. \end{enumerate} \end{prop} \pr Assume first that $T_0$ has a root, say $r_0$. By definition, for any $(\alpha, t) \in L_T \times T_0 = Dom(e)$, $e((\alpha, t)) = \rho(\alpha) = (\alpha, r_0)$. Hence, $E$ is an antichain. And, for all $(\alpha, \beta) \in L$, $E \cap br((\alpha, \beta)) = \{ e((\alpha, \beta)) \}$. \\
Moreover, the equivalence class of $(\alpha, t)$ is the thick cone at $(\alpha, r_0) = e((\alpha, t))$. Therefore, $(\alpha, t) \sim (\alpha', t')$ iff $e((\alpha, t)) = e((\alpha', t'))$. \\ Assume now that $T_0$ has no root. Then $T$ has isolated leaves and for any $(\alpha, t) \in L_T \times T_0$, $e((\alpha, t)) = \rho(\alpha) = p(\alpha) = e(p(\alpha))$. By definition, the equivalence class of $(\alpha, t)$ is the cone of $t$ at $\rho(\alpha)$, so $(\alpha, t) \sim (\alpha', t')$ iff $\rho(\alpha) = \rho(\alpha')$ and $(\alpha, t) \wedge (\alpha', t') > \rho(\alpha)$. In other words, $(\alpha, t) \sim (\alpha', t')$ iff $e((\alpha, t)) = e((\alpha', t')) < (\alpha, t) \wedge (\alpha', t')$. This proves the second assertion. \\ If $T_0$ has no root, $E = \{ p(\alpha) ; \alpha \in L_T \}$. So let $(\alpha, \beta) $ be a leaf of $T \rtimes T_0$ and $\alpha'$ be a leaf of $T$, such that $p(\alpha') \in br((\alpha, \beta))$. Then, $p(\alpha') \leq \alpha$ in $T$. So, $p(\alpha') \leq p(\alpha) = e(\alpha, \beta)$. Hence, $e(\alpha, \beta)$ is the greatest element of $E \cap br((\alpha, \beta))$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \\
The following pictures illustrate extensions $T \rtimes T_0$ with $T_0$ a 1-colored good tree. They are organized in two groups; the first group has two pictures, the second one three. On the left of both groups is the tree $T$; on the right are the possible kinds of extensions it gives rise to. In the first pair of pictures $T$ has non isolated leaves, so $T_0$ must have a root, hence be of type (0). In the second group of pictures $T$ has isolated leaves, so $T_0$ may or may not have a root. \\ As previously, a continuous line means a dense linear order and a dashed line means a gap. \pagebreak
1. $T$ with non isolated leaves.\\
We have represented only two branches of $T$. The picture is drawn with $T_0$ of color $(3,0)$.
\nopagebreak
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.3cm,y=0.3cm]
\draw (9.06,-30.16)-- (7.34,-9.6); \draw (8.3,-21.04)-- (17.42,-9.85); \draw [->] (8.3,-21.04) -- (7.34,-9.6); \draw [->] (8.3,-21.04) -- (17.42,-9.85); \draw (7.72,-7.09) node[anchor=north west] {$\alpha_1$}; \draw (17.41,-9.41) node[anchor=north west] {$\alpha_2$}; \draw (8.9,-32) node[anchor=north west] {$T$}; \draw (39.61,-29.82)-- (38.23,-9.06); \draw (39,-20.61)-- (48.51,-9.21); \draw [->] (39,-20.61) -- (38.23,-9.06); \draw [->] (39,-20.61) -- (48.51,-9.21);
\draw (37.5,-32) node[anchor=north west] {$T \rtimes T_0$}; \draw (28, -35)node[anchor=north west] {Fig.5};
\draw [dash pattern=on 4pt off 4pt](38.23,-9.06)-- (34.25,-5.03); \draw [dash pattern=on 4pt off 4pt](38.23,-9.06)-- (37.15,-4.88); \draw [dash pattern=on 4pt off 4pt](38.23,-9.06)-- (40.59,-4.78); \draw [dash pattern=on 4pt off 4pt](48.51,-9.21)-- (47.09,-4.93); \draw [dash pattern=on 4pt off 4pt](48.51,-9.21)-- (49.79,-4.68); \draw [dash pattern=on 4pt off 4pt](48.51,-9.21)-- (52.74,-4.83); \begin{scriptsize} \draw (32.33,-8.78) node[anchor=north west] {$e(\alpha_1, \beta_i)$}; \draw (48.51,-8.78) node[anchor=north west] {$e(\alpha_2, \beta_i)$}; \draw (30.05,-1.92) node[anchor=north west] {$(\alpha_1, \beta_1)$}; \draw (34.66,-1.92) node[anchor=north west] {$(\alpha_1, \beta_2)$}; \draw (39.08,-1.92) node[anchor=north west] {$(\alpha_1, \beta_3)$}; \draw (44.2,-1.96) node[anchor=north west] {$(\alpha_2, \beta_1)$}; \draw (48.95,-1.96) node[anchor=north west] {$(\alpha_2, \beta_2)$}; \draw (53.15,-1.96) node[anchor=north west] {$(\alpha_2, \beta_3)$}; \draw (39.3,-20.8) node[anchor=north west]{$x$}; \draw (8.5,-21) node[anchor=north west]{$x$}; \end{scriptsize}
\begin{scriptsize} \fill (7.34,-9.6) circle (2.5pt); \fill (8.3,-21.04) circle (2.5pt); \fill (17.42,-9.85) circle (2.5pt); \fill (38.23,-9.06) circle (2.5pt); \fill (39,-20.61) circle (2.5pt); \fill (48.51,-9.21) circle (2.5pt); \fill (34.25,-5.03) circle (2.5pt); \fill (37.15,-4.88) circle (2.5pt); \fill (40.59,-4.78) circle (2.5pt); \fill (47.09,-4.93) circle (2.5pt); \fill (49.79,-4.68) circle (2.5pt); \fill (52.74,-4.83) circle (2.5pt); \end{scriptsize} \end{tikzpicture}
\pagebreak
2. $T$ with isolated leaves. \\ Triangles to the right represent infinite cones, triangles to the left represent unions of cones (finite or infinite cones, depending on the trees' colors). In the first picture on the right $T_0$ has no root; in the last picture it has color $(3,0)$.
\nopagebreak
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.35cm,y=0.35cm]
\draw (8.6,-8.55)-- (5,-1); \draw (8.6,-8.55)-- (9.4,-1); \draw (9.4,-1)-- (5,-1); \draw (8.6,-8.55)-- (8.6,-18.5); \draw [dash pattern=on 4pt off 4pt] (8.6,-8.55)-- (11.87,-2.66); \draw [dash pattern=on 4pt off 4pt] (8.6,-8.55)-- (15.97,-3.08);
\draw (11.23,-0.3) node[anchor=north west] {$ \alpha_1 $}; \draw (15.29,-0.86) node[anchor=north west] {$ \alpha_2 $};
\draw (7.5,-20) node[anchor=north west] { $T$ };
\draw (28.1,-4.8)-- (28.1,5.35); \draw (28.1,5.35)-- (23,15.3); \draw (28.1,5.35)-- (29,15.3); \draw (23,15.3)-- (29,15.3);
\draw (25.7,-5.9) node[anchor=north west] { $T \rtimes T_0$};
\draw (30.8,14.61) node[anchor=north west] {$ \tau_{\alpha_1}(T_0) $}; \draw (38.59,14.61) node[anchor=north west] {$ \tau_{\alpha_2}(T_0) $};
\draw (28.1,5.35)-- (31.28,15.37); \draw (31.28,15.37)-- (36.56,15.37);
\draw (36.56,15.37)-- (28.1,5.35);
\draw (28.1,5.35)-- (39.62,15.37); \draw (39.62,15.37)-- (46.69,15.78);
\draw (46.69,15.78)-- (28.1,5.35);
\draw (27,-24.43)-- (21,-13.7); \draw (27,-24.43)-- (27.5,-13.7); \draw (21,-13.7)-- (27.5,-13.7);
\draw (27,-24.43)-- (27,-32); \draw [dash pattern=on 4pt off 4pt] (27,-24.43)-- (30.94,-18.15); \draw [dash pattern=on 4pt off 4pt] (27,-24.43)-- (42.5,-17.55);
\draw (25,-33.1) node[anchor=north west] { $T \rtimes T_0$ }; \draw (22,-35) node[anchor=north west] {Fig. 6};
\draw [dash pattern=on 4pt off 4pt] (30.94,-18.15)-- (28.94,-13.72); \draw [dash pattern=on 4pt off 4pt] (30.94,-18.15)-- (31.42,-13.77); \draw [dash pattern=on 4pt off 4pt] (30.94,-18.15)-- (33.42,-13.77);
\draw [dash pattern=on 4pt off 4pt] (39.28,-13.77)-- (42.5,-17.55); \draw [dash pattern=on 4pt off 4pt] (42.8,-13.77)-- (42.5,-17.55); \draw [dash pattern=on 4pt off 4pt] (45.63,-13.85)-- (42.5,-17.55); \begin{scriptsize} \draw (8.5,-8.77) node[anchor=north west] {$ p(\alpha_1) = p(\alpha_2) $}; \draw (28,5.1) node[anchor=north west] {$ e(\alpha_1,t) = e(\alpha_2,t) $}; \draw (25.6,-11.2) node[anchor=north west] {$ (\alpha_1,\beta_1) $}; \draw (29.26,-11.2) node[anchor=north west] {$ (\alpha_1,\beta_2) $}; \draw (33.1,-11.11) node[anchor=north west] {$ (\alpha_1,\beta_3) $}; \draw (37.25,-11.28) node[anchor=north west] {$ (\alpha_2,\beta_1) $}; \draw (41,-11.24) node[anchor=north west] {$ (\alpha_2,\beta_2) $}; \draw (44.8,-11.28) node[anchor=north west] {$ (\alpha_2,\beta_3) $}; \draw (41,-18.15) node[anchor=north west] {$e(\alpha_2, \beta_i)$}; \draw (30.51,-18.15) node[anchor=north west] {$e(\alpha_1, \beta_i)$}; \end{scriptsize}
\fill (8.6,-8.55) circle (2.5pt); \fill (11.87,-2.66) circle (2.5pt); \fill (15.97,-3.08) circle (2.5pt); \fill (27,-24.43) circle (2.5pt);
\fill (28.1,5.35) circle (2.5pt); \fill (30.94,-18.15) circle (2.5pt); \fill (42.5,-17.55) circle (2.5pt); \fill (28.94,-13.72) circle (2.5pt); \fill (31.42,-13.77) circle (2.5pt); \fill (33.42,-13.77) circle (2.5pt); \fill (39.28,-13.77) circle (2.5pt); \fill (42.8,-13.77) circle (2.5pt); \fill (45.63,-13.85) circle (2.5pt);
\end{tikzpicture}
\pagebreak
\noindent The tree $T \rtimes T_0 $ equipped with $E$ does not know about $T$ and $T_0$, as the example below shows. But it almost does, as we will see in the first two items of Corollary \ref{ouf2}.
\begin{exa}\label{nesaitpas} Let $\cdot \colon$ and $\triangleleft$ be 1-colored good trees of color $(2,0)$ and $(0,2)$ respectively. Then $ \cdot \colon \! \! \rtimes ( \cdot \colon \! \! \rtimes \triangleleft ) = ( \cdot \colon \! \! \rtimes \cdot \colon \! \!) \rtimes \triangleleft $. Consider on both sides of the identity the final extension, namely on the left side the extension with factors $\cdot \colon$ and $\cdot \colon \! \! \rtimes \triangleleft$, and on the right side the extension with factors $\cdot \colon \! \! \rtimes \cdot \colon$ and $ \triangleleft $. Then, on both sides, $E$ consists of the successors of the root. But $\cdot \colon \! \! \rtimes \triangleleft$ has a root while $\triangleleft$ does not. Hence, the tree $T \rtimes T_0 $ equipped with $E$ does not even know whether $T_0$ has a root or not. \end{exa}
\subsection{Language and theory of $T \rtimes T_{0}$}\label{language of extension}
As previously defined ${\mathcal L}_1 = \{\leq, \wedge, N, L\}$. Let ${\mathcal L}_2 := {\mathcal L}_1 \cup \{ e, E, F_e \}$.
\\[2 mm]
We will have to consider on the tree $T$ {\bf some additional structure given by additional unary functions}. As they naturally appear these functions are partial but, again, in a model-theoretic framework they have to be defined everywhere. So each such function $f$ appears together with two unary predicates $D_f$ and $F_f$ for the domain and the range of the original $f$. In this way, {\bf let ${\mathcal F}$ be a finite set of unary functions and ${\mathcal P} = \{ D_f, F_f ; f \in {\mathcal F} \}$ a set of unary predicates}. They will be required to satisfy: \\[2 mm]
\textbf{Conditions} $(4\star)$: for any $f \in {\mathcal F}$,\\ \textbf{.} $L \subseteq D_f$, $D_f =\{ x \, ; \exists y \in F_f, y \leq x \}$
and $F_f \cap L = \emptyset$,\\ \textbf{.} $\forall t \not\in D_f,\ f(t) = t$, and $f(D_f) = F_f$, \\ \textbf{.} $\forall t \in D_f,\ f(t) \leq t$, \\ \textbf{.} $\forall t \in F_f, \ f(t)=t$.
\\[2 mm]
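On a finite tree, Conditions $(4\star)$ are directly checkable. The sketch below is an illustration under our own finite encoding (trees as sets with an order predicate); the helper \texttt{satisfies\_4star} is hypothetical, not from the paper.

```python
def satisfies_4star(T, L, leq, f, D_f, F_f):
    """Check Conditions (4*) for one unary function f on a finite tree T (a set)."""
    range_ok = (L <= D_f                                            # L ⊆ D_f
                and D_f == {x for x in T
                            if any(leq(y, x) for y in F_f)}         # D_f = {x ; ∃y∈F_f, y≤x}
                and not (F_f & L))                                  # F_f ∩ L = ∅
    fix_out  = (all(f(t) == t for t in T - D_f)                     # f identity off D_f
                and {f(t) for t in D_f} == F_f)                     # f(D_f) = F_f
    decrease = all(leq(f(t), t) for t in D_f)                       # f(t) ≤ t on D_f
    fix_F    = all(f(t) == t for t in F_f)                          # f identity on F_f
    return range_ok and fix_out and decrease and fix_F

# Chain r < a with single leaf a; f collapses everything onto r.
T, L = {'r', 'a'}, {'a'}
leq = lambda x, y: x == y or (x == 'r' and y == 'a')
assert satisfies_4star(T, L, leq, f=lambda t: 'r', D_f=T, F_f={'r'})
```

For instance, taking $F_f$ to contain the leaf \texttt{'a'} violates the first clause, and the checker rejects it.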
We define ${\mathcal L} = {\mathcal L}_1 \cup {\mathcal F} \cup {\mathcal P}$ and ${\mathcal L}' = {\mathcal L}_2 \cup {\mathcal F} \cup {\mathcal P}$. Note that Conditions $(4\star)$ are first order in ${\mathcal L}$. We interpret ${\mathcal L}'$ on $T \rtimes T_{0}$ as follows: \\
- we have already defined the ${\mathcal L}_2$-structure; \\ - for $f$ a function in ${\mathcal F}$: \\ \textbf{.} $F_f^{T \rtimes T_0} = F_f^T$ and $D_f^{T \rtimes T_0} = (D_f^T \cap N_T) \dot\cup L_T \times T_0$ (recall that $N_T$ embeds as an initial subtree in $T \rtimes T_{0}$); \\ \textbf{.} $\forall x \in (D_{f}^T \cap N_T)$, $f ^{T \rtimes T_0}(x) = f^{T}(x)$ and \\ $\forall (\alpha, t) \in L_T \times T_0$, $f ^{T \rtimes T_0} (\alpha, t) = f^{T}(\alpha)$ (which belongs to $N_T$ since $L_T \subseteq D_f^T$ and $f(D_f^T) \cap L_T = \emptyset$ (both conditions due to $(4\star)$) hence to $N_{T \rtimes T_0}$). \\
Conditions $(4\star)$ are true on $T \rtimes T_{0}$ for the set of functions $ {\mathcal F} \cup \{ e \}$, with $D_e = E_\geq$ and $F_e = E$. \\[2 mm]
We will see (in Corollary \ref{ouf2}) how the construction of $T \rtimes T_{0}$ can be retraced in its ${\mathcal L}_2$-theory up to the phenomenon pointed out in Example \ref{nesaitpas}, and also that the definition of its ${\mathcal L}'$-structure is canonical (in Lemma \ref{ouf3}).
\begin{defi}\label{Sigma"} Let $\Sigma''$ be the following theory in the language ${\mathcal L}_2$: \\ - $(\leq,\wedge)$ is a good tree; \\ - $E$ is convex: $\forall x,y,z, \ (x,z \in E \ \wedge \ x < y <z) \rightarrow y \in E$; \\ - $D_e = E_\geq $; \\ - $E = e(D_e) = e(L)$ and $\forall x \not\in D_e, e(x) = x$;\\
- $L \subseteq D_e$ and $E \cap L = \emptyset$;\\ - $\forall x, e(x) \leq x $;\\ - $\forall x \in D_e, \ E \cap br(x)$ has $e(x)$ as a greatest element, where $br(x) := \{y; y \leq x \}$. \end{defi}
In models of $\Sigma''$, $E_\geq$ is the same thing as $D_e$ and is therefore quantifier free definable. This allows us to use the notations $E_\geq$, $E_<$, $E_\leq$ and $E_>$ freely. \\
In the following statement cases $(1)$ and $(2)$ correspond to the two possible extensions producing a same model of $\Sigma''$, as seen in Example \ref{nesaitpas}.
\begin{lem}\label{ouf} Let $\Lambda$ be a model of $\Sigma''$. Consider on $\Lambda$ a binary relation $\sim$ such that: \\ - either $E$ is an antichain and \\ either (1): $x \sim y$ iff ($x,y \in E_<$ and $x =y$) or ($x,y \in E_\geq$ and $e(x) = e(y))$, \\ or (2): $x \sim y$ iff ($x,y \in E_<$ and $x =y$) or ($x,y \in E_\geq$ and $e(x) = e(y) < x \wedge y)$, \\ - or $E$ is not an antichain and (2). \\
Then $\sim$ is an equivalence relation compatible with the order in the sense of Lemma \ref{compatible}. More precisely, for $x \in \Lambda$ such that the class $\bar{x}$ of $x$ is not a singleton, then $\bar{x} = \Gamma(e(x))$ in case (1) and $\bar{x} = \Gamma(e(x),x)$ in case (2). \end{lem} \pr Let $x \in \Lambda$ such that $\bar{x}$ is not a singleton.\\
Let $y \in \bar{x}$, then $x,y \in E_\geq$ and by definition of $\sim$, $e(y) = e(x)$. Since $e(y) \leq y$, $y \in \Gamma(e(x))$. If we are in case (2), $e(x) = e(y) < x \wedge y$, thus $y \in \Gamma(e(x),x)$.\\ Conversely, let $y \in \Gamma(e(x))$, then $y \in E_{\geq}$ and $e(x) \leq x \wedge y$. Since $e(x) \leq y$ and $e(y) \leq y$, $e(x)$ and $e(y)$ are comparable. In case (1), $E$ is an antichain, thus $e(x) = e(y)$. Assume now $y \in \Gamma(e(x),x)$, so $x \wedge y > e(x)$. Then, $e(x) \in br(y) \cap E$, hence $e(x) \leq e(y)$. If $x \wedge y \leq e(y)$, then by convexity of $E$, $x \wedge y \in E$ so $x \wedge y \leq e(x)$, which gives a contradiction. Thus $e(y) < x \wedge y$, and therefore $e(y) \leq e(x)$. Finally, $e(x) = e(y) < x \wedge y$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \\[2mm]
{\bf Notations} Let $\Lambda$ be a model of $\Sigma''$ and $\sim$ as above. We denote by $\bar\Lambda$ the good tree $\bar\Lambda := \Lambda / \sim$ and, for $x \in \Lambda$, by $\bar x$ the equivalence class of $x$ in $\bar\Lambda$.
\begin{cor}\label{ouf2} Let $\Lambda$ and cases (1) and (2) be as in Lemma \ref{ouf}. \begin{enumerate} \item In case (1), $\Lambda$ is the disjoint union $E_< \dot\cup \dot\bigcup_{ x \in E} \Gamma(x)$, where $E_\leq$ is an initial subtree, $E$ is an antichain and $\sim$ is the identity on $E_<$. Hence $\bar \Lambda$ is a tree canonically isomorphic to $E_\leq$ with $E$ its set of leaves. If all thick cones $\Gamma(x)$, $x \in E$ are isomorphic trees, say all isomorphic to $\Gamma_0$, then $\Lambda = \bar \Lambda \rtimes \Gamma_{0}$.
\item In case (2), $\Lambda = E_\leq \dot\cup \dot\bigcup_{ x \in E_>} \Gamma(e(x);x)$ with $E_\leq$ an initial subtree and $\sim$ the equality on $E_\leq$; $E_\leq$ embeds canonically in the tree of nodes of $\bar\Lambda$. If all cones $\Gamma(e(x),x)$, $x \in E_>$, are isomorphic trees, say all isomorphic to $\Gamma_0$, then $\Lambda = \bar \Lambda \rtimes \Gamma_{0}$. \item In both cases, $E_\leq$ can be identified with $\bar E_\leq := \{ \bar x ; x \in E_\leq \}$ and $E$ with $\bar E := \{ \bar x ; x \in E \}$ and considered as living in $\bar\Lambda$.
\end{enumerate} \end{cor}
\pr 1. In this case $E$ is an antichain and by definition of the relation $\sim$, $\Lambda$ is the disjoint union of an initial tree with the union of disjoint final trees indexed by points from $E$, namely $\Lambda = E_< \dot\cup \dot\bigcup_{ x \in E} \Gamma(x)$ which is also $E_\leq \dot\cup E_>$, with $\sim$ the equality on $E_\leq$ and $\bar x=\overline{e(x)}$ for $x \in E_>$. Thus the inclusion $E_\leq \subseteq \Lambda$ induces the equality $E_\leq = \bar \Lambda$ where more precisely $E_<$ is identified with the set of nodes of $\bar \Lambda$ and $E$ with its set of leaves. \\ 2. By definition of $\sim$ in case (2), $\Lambda$ has the form indicated. Hence the inclusion $E_\leq \subseteq \Lambda$ induces an inclusion $E_\leq \subseteq \bar \Lambda$. Take any $c \in E$. By axioms of $\Sigma''$, $c = e(\alpha)$ for some leaf $\alpha \geq c$. Since $E \cap L = \emptyset$, $\alpha > c$ and $c = \alpha \wedge \beta$ for another leaf $\beta \not= \alpha$. If $e(\beta) \not= e(\alpha)$ then $\bar \beta \not= \bar \alpha$ hence $\bar c$ is a node of $\bar \Lambda$. If $e(x) = e(\alpha)$ for any leaf $x$ such that $c = \alpha \wedge x$, then any cone at $c$ is an equivalence class; now there are at least two different cones, hence, again, $\bar c$ is a node of $\bar \Lambda$. Thus $E_\leq$ is contained in the set of nodes of $\bar \Lambda$. \\ 3. Follows directly from 1. and 2.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{ouf3}
Suppose furthermore $\bar\Lambda$ equipped with an ${\mathcal L}$-structure model of $(4\star)$. For $f \in {\mathcal F}$ we denote by $\bar f$ the interpretation in $\bar \Lambda$ of the symbol $f$ from ${\mathcal L}$. Then there is exactly one ${\mathcal L}'$-structure on $\Lambda$ defined as follows: for each function $f \in {\mathcal F}$: \begin{enumerate} \item For $x \in E_\leq$, $x \in D_f$ iff, in $\bar\Lambda$, $\bar x \in D_{\bar f}$ and in this case $f(x)$ is the unique $y \in E_\leq$ such that $\bar y = \bar f(\bar x)$ in $\bar\Lambda$. \item For $x \in E_\geq$, $f(x) = f(e(x))$. \end{enumerate} This ${\mathcal L}'$-structure on $\Lambda$ satisfies Conditions $(4\star)$ for the set of functions $ {\mathcal F} \cup \{ e \}$
with $F_e = E$ and $F_f = F_{\bar f}$ (following the identification stated in Corollary \ref{ouf2}, (3)) for $f \in {\mathcal F}$.
\end{lem}
\pr The uniqueness of $y$ in 1 is given by Corollary \ref{ouf2}, and 1 and 2 are compatible since $e$ is the identity on $E$. For $f \in {\mathcal F}$ and $x \in E_\geq$, $f(x) \in E_\leq$; now $e(x) := max (E \cap br(x))$ hence $f(x) \leq e(x) \leq x$; for $x \in E_\leq$, ``$f(x) = \overline{f(x)} \leq \bar x = x$''. The other Conditions $(4\star)$ for $f$ on $\Lambda$ follow from $E \cap L = \emptyset$ and Conditions $(4\star)$ for $\bar f$ on $\bar \Lambda$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm] \begin{defi} Let $T$ and $T_0$ be good trees satisfying conditions $(\star)$, $(\star \star)$ and $(\star \star \star)$. Assume $T$ furthermore equipped with an ${\mathcal L}$-structure model of $(4\star)$. We introduce the theory $\Sigma'$ in the language ${\mathcal L}'$ consisting of $\Sigma''$ strengthened as follows. Let $\sim$ be the relation defined as in Lemma \ref{ouf}, (1) if $T_0$ has a root and (2) if it has not. Then we add the axioms and axiom schemes:
\\ - for any $f \in {\mathcal F}$, conditions 1 and 2 of Lemma \ref{ouf3}; \\
- for all $x \in E_\geq$ if $T_0$ has a root or $x \in E_>$ if $T_0$ has no root, the $\sim$-class of $x$ is elementarily equivalent to $T_0$ (as a pure tree); \\ - the quotient modulo $\sim$ and $T$ are elementarily equivalent ${\mathcal L}$-structures; \\ - if $T_0$ has no root then by Condition $(\star\star)$ leaves of the quotient modulo $\sim$ have a predecessor and, interpreted in the quotient modulo $\sim$, $\bar E = \bar p(\bar L)$,
where $\bar L$ and $\bar p$ denote the interpretation in $\bar \Lambda$ of the symbols $L$ and $p$. \end{defi}
\begin{prop}\label{prop:better-axiomatization} $\Sigma'$ is a complete axiomatization of $T \rtimes T_{0}$. If $(T, {\mathcal L})$ and $T_0$ are $\aleph_0$-categorical or finite then $\Sigma'$ has a unique model of cardinality finite or countable. \end{prop}
\pr We note first that $T \rtimes T_{0}$ is a model of $\Sigma'$. We now prove the completeness of this theory. \\
Assume first that $T_0$ has a root. Take $\Lambda \models \Sigma'$. Assume CH for short and $\Lambda$ as well as $T$ and $T_0$ saturated of cardinality finite or $\aleph_1$. As an ${\mathcal L}_2$-structure, $\Lambda$ must be the extension $T \rtimes T_{0}$ described in Corollary \ref{ouf2}, case (1). By Lemma \ref{ouf3} the rest of the ${\mathcal L}$-structure on $\Lambda$ is likewise determined by its restriction to $E_\leq$, id est by the ${\mathcal L}$-structure $T$. So $\Sigma '$ has a unique saturated model of cardinality finite or $\aleph_1$. This shows the completeness of $\Sigma '$. \\
We consider now the case where $T_0$ has no root and suppose as previously that $\Lambda$, $T$ and $T_0$ are saturated of cardinality finite or $\aleph_1$. This time
$\Lambda = E_\leq \dot\cup \dot\bigcup_{ x \in E_>} \Gamma(e(x);x)$ and $N_T = E_\leq$ (recall that, by Lemma \ref{ouf2} (3), $N_T$ lives also in $\Lambda$). By the third axiom scheme, $\bar E=\bar p(L_T)$ hence the ${\mathcal L}_2$-structure on $\Lambda$ must be the extension $T \rtimes T_{0}$ described in Corollary \ref{ouf2}, case (2). By Lemma \ref{ouf3} again the rest of the ${\mathcal L}$-structure on $\Lambda$ is determined by the ${\mathcal L}$-structure $T$. This shows the uniqueness of the saturated model of cardinality finite or $\aleph_1$ and the completeness of $\Sigma '$. \\
If $T$ and $T_0$ are the unique finite or countable models of their respective theories, we show in the same way as above that $T \rtimes T_{0}$ is the unique finite or countable model of $\Sigma'$, which shows that this theory is $\aleph_0$-categorical too. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{prop:axiomatization-syntaxe} If $\Sigma$ is a complete axiomatization of $T$ as an ${\mathcal L}$-structure and $\Sigma_0$ is a complete axiomatization of $T_0$ (in ${\mathcal L}_1$), $\Sigma \rtimes \Sigma_0$ will denote the theory $\Sigma'$ (of ${\mathcal L}'$). \end{defi}
\subsection{When $T_0$ is 1-colored} \label{When $T_0$ is 1-colored}
In this section we work under the additional assumption that $T_0$ is 1-colored. We show that, in this case the properties we are interested in transfer from $T$ to $T \rtimes T_0$. \\
Let us recall (see Section 2) that $M(T)$ and $M(T \rtimes T_0)$ denote the $C$-structures with canonical trees $T$ and $T \rtimes T_0$ respectively.
\begin{prop}\label{prop:better-axiomatization-qe}
If $T$ eliminates quantifiers in ${\mathcal L} \cup \{ p,D_p,F_p \}$ (as defined in \ref{def:p}), then $\Sigma \rtimes \Sigma_0$ eliminates quantifiers in ${\mathcal L}' \cup \{ p,D_p,F_p \}$.
\end{prop}
\pr We keep the notation of the proof of Proposition \ref{prop:better-axiomatization}. So $T$, $T_0$ and $\Lambda = T \rtimes T_0$ are the finite or $\aleph_1$-saturated models of $\Sigma$, $\Sigma_0$ and $\Sigma \rtimes \Sigma_0$ respectively.
Take any finite tuple from $\Lambda$. Close this tuple under $e$. Write it in the form $(x,y_1,\dots,y_{m})$ where $x$ is a tuple from $E_{\leq}$ and $y_1,\dots,y_{m}$ are tuples from $E_{>}$ such that all components of each $y_i$ have the same image under $e$, call it $e(y_i)$ (thus $e(y_1), \dots, e(y_m)$ are among the coordinates of $x$), and $e(y_i) \not= e(y_j)$ for $i \not= j$. Take $(x',y'_1,\dots,y'_{m}) \in \Lambda$ having the same quantifier-free $({\mathcal L}' \cup \{ p,D_p,F_p \})$-type as $(x,y_1,\dots,y_{m})$. Thus $x' \in E_\leq$ and $(y'_1,\dots,y'_{m}) \in E_>$. Since $E_{\leq}$ embeds canonically in $T$, we may see $x$ and $x'$ as living in $T$, where they have the same complete type since $T$ eliminates quantifiers in ${\mathcal L} \cup \{ p,D_p,F_p \}$. Thus there is an automorphism $\sigma$ of $T$ sending $x$ to $x'$. Any automorphism $f$ of $\Lambda$ extending $\sigma \upharpoonright E_\leq$ will send, for each $i$, $e(y_i)$ to $\sigma (e(y_i))$. Hence $f(y_i)$ and $y'_i$ are in the same copy of $T_0$, say $T_0^i$. \\ Assume first that $T_0$ is of type (0). Since $T_0$ consists of one root and leaves, and $f(y_i)$ as well as $y'_i$ consists of distinct leaves, there is an automorphism $\sigma_i$ of $T_0^i$ sending $f(y_i)$ to $y'_i$. The union of $\sigma$, the $\sigma_i$ and the identity on the other copies of $T_0$ is an automorphism of $\Lambda$ sending $(x,y_1,\dots,y_{m})$ to $(x',y'_1,\dots,y'_{m})$.
\\
We consider now the case where $T_0$ has no root thus $E_\leq = N_T$ and \\
$\Lambda = E_\leq \ \dot\cup\ \dot\bigcup \{ \Gamma(e(z) \, ;z) \, ; z \in E_> \}$.
\\ - If $T_0$ is of type $(1.a)$, it eliminates quantifiers in ${\mathcal L}_1$, which gives $\sigma_i$ as above. \\ - If $T_0$ is of type $(1.b)$, it eliminates quantifiers in ${\mathcal L}_1\cup \{ p \}$ and in $T_0$ the interpretation of $D_p$ is $L_{T_0}$. For each embedding of $T_0$ in $\Lambda$ as a cone $\Gamma(e(x);x)$ we have the inclusions $L_{T_0} \subseteq L_{\Lambda} \subseteq D_{p_{\Lambda}}$ and, for any leaf $\alpha$ of (this) $T_0$, $p_{T_0}(\alpha) = p_{\Lambda}(\alpha)$. Thus, for each $i$, $f(y_i)$ and $y'_i$ have the same type in $T_0$, which gives $\sigma_i$ as previously. \\ In all cases, the automorphism of $\Lambda$ we have constructed respects the language ${\mathcal L}' \cup \{ p_L, p_L(L) \}$ where $p_L$ is the restriction of the predecessor function to the set of leaves and $p_L(L)$ its image. Thus we have shown that $\Lambda$ eliminates quantifiers in this language. Now adding $p_L$ to ${\mathcal L}'$ is quantifier-free equivalent to adding $p$: $D_p = (D_{\bar p} \cap E_\leq) \cup L$ and $p$ coincides with $\bar p$ on $D_{\bar p} \cap E_\leq$ and with $p_L$ on $L$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop}\label{prop:induc-C-min.} Consider on $M(T \rtimes T_0)$ and $M(T)$ the structure induced by their canonical tree, respectively $(T \rtimes T_0,{\mathcal L}')$ and $(T,{\mathcal L})$. Then
$M(T \rtimes T_0)$ is $C$-minimal iff $M(T)$ is.
\end{prop}
\pr
Let $\Lambda \models \Sigma \rtimes \Sigma_0$.
For $A \subseteq L(\bar \Lambda)$, let $A_{\Lambda} := \{ \alpha \in L(\Lambda) ; \bar \alpha \in A \}$. Then $A$ is a cone in $\bar \Lambda$ iff $A_{\Lambda}$ is a cone in $\Lambda$, of the same type (thick or not), except when $A$ consists of a non-isolated leaf (in $\bar \Lambda$) and $A_\Lambda$ is a cone.
This proves two things.
First $\bar \Lambda$ is $C$-minimal if $\Lambda$ is.
Secondly if $\bar \Lambda$ is $C$-minimal any subset of $\Lambda$ of the form $A_\Lambda$ is a Boolean combination of cones and thick cones. The general case is processed by hand. \\[2 mm]
\textbf{Fact:} For $x$ a leaf of $\Lambda$, a composition of functions from ${\mathcal F} \cup \{ p,e \}$ applied to $x$ is, up to equality, a constant, or of the form $x$ or $p(x)$ (the latter only if $T_0$ is of type $(1.b)$), or $t(e(x))$ where $t$ is a composition of functions from ${\mathcal F} \cup \{ p \}$ (hence a term of ${\mathcal L} \cup \{p\}$). \\[2 mm]
Assume the innermost (rightmost) function in the term is $p$. If $T_0$ is of type $(0)$ we may replace $p$ with $e$. If $T_0$ is of type $(1.b)$, $p(x) \not\in D_p$. Conclusion: at most one $p$ at the right. If a term $t$ is a composition of functions from ${\mathcal F} \cup \{ e \}$, then $t(p(x)) = t(x)$. Indeed, $e(x)<x$ if $x \in L$, hence $e(x) = e(p(x))$ (by definition $e(x) = max (E \cap br_x)$), and $f(x) = f(e(x))$. Conclusion: in a composition, no $p$ at the right is needed. Finally, for $f \in {\mathcal F} \cup \{ e \}$, $f(x)=f(e(x))$. So, if a term is neither $x$ nor $p(x)$, we may assume its innermost function is $e$. \relax\ifmmode\eqno\dashv\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\
So non-constant terms in $x$ are all smaller than $x$, thus linearly ordered. Consequently, up to a definable partition of $L(\Lambda)$ (namely into the two sets $\{ x ; t(x) \geq t'(x) \}$ and $\{ x ; t(x) < t'(x) \}$), terms of the form $t(x) \wedge t'(x)$ need not be considered. To summarize, it is enough to consider subsets definable by formulas $t(x) \leq t'(x)$, $t(x) = t'(x)$, $t(x) \leq a$, $t(x) = a$, $t(x) \in E, D_e, F_f$ or $D_f$, and $(t(x) \wedge a) = b$, where $t$ and $t'$ are of the form described in the above fact. To a one-variable formula $\varphi$ from ${\mathcal L}$ without constants, associate
a formula $\varphi_{\Lambda}$ (also from ${\mathcal L}$, in one variable and without constants) such that
$\Lambda \models \varphi_{\Lambda} (x)$ iff $\bar \Lambda \models \varphi (\bar x)$. Then $\varphi (e(x))$ is equivalent to:\\ - $\varphi_\Lambda (x)$ when $T_0$ has a root, and \\ - $\psi_\Lambda (x)$ with $\psi (y) = \varphi (\bar p (y))$ when $T_0$ has no root, \\ both already handled. It remains to consider: \\ - $t(e(x)) < x $ and, if $T_0$ is of type $(1.b)$, $t(e(x)) < p(x) < x $, which are always true, \\ - $x \in E, F_f$, always false, as are $p(x) \in E, F_f$ (recall that $p(x)$ occurs only if $T_0$ is of type $(1.b)$),\\ - $x, p(x) \in E_\geq, D_f$, always true, \\ - $t(x) \square b$ and $(t(x) \wedge a) \square b$ with $\square \in \{ <,= ,> \}$, formulas that we treat now. \\ For $b \in E_>$, $t(e(x)) \geq b$ is always false and $t(e(x)) < b$ is equivalent to $t(e(x)) < e(b)$. For $b \in E_\leq$, $\Lambda \models t(e(x)) \square b$ iff $\bar \Lambda \models t(e(x)) \square \bar b$. For $b \in E_>$, $(t(e(x)) \wedge a) \geq b$ is always false and $(t(e(x)) \wedge a) < b$ iff $(t(e(x)) \wedge a) < e(b)$. For $a \in E_>$, $(t(e(x)) \wedge a) = (t(e(x)) \wedge e(a))$. Finally, for $a$ and $b$ in $E_\leq$, $\Lambda \models (t(e(x)) \wedge a) \square b$ iff $\bar \Lambda \models (t(e(x)) \wedge a) \square \bar b$. We are left with the formulas $x \square b$, $p(x) \square b$, $(x \wedge a)\square b$ and $(p(x) \wedge a)\square b$, which are routine.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop}\label{prop:induc-C-indisc} As previously consider on $M(T \rtimes T_0)$ and $M(T)$ the structure induced by their canonical tree. Then
$M(T \rtimes T_{0})$ is indiscernible iff $M(T)$ is.
\end{prop}
\pr
The right-to-left implication follows clearly from our proof of $C$-minimality transfer from $L(T)$ to $L(T \rtimes T_{0})$. The other direction is trivial since $T$ is a definable quotient of $L(T \rtimes T_{0})$ (and leaves are sent to leaves in the quotient).
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
We conclude this section with a uniformity result:
\begin{prop}\label{rootnoroot} \begin{enumerate} \item The tree $T_0$ has a root iff the ${\mathcal L}_2$-structure $T \rtimes T_0$ satisfies both sentences ``$\forall x \in L$, $x \in D_p$'' and ``$\forall x, y \in L, \neg (p(x) < p(y))$''. \item The equivalence relation $\sim$ is ${\mathcal L}_2$-definable, uniformly in $T$ and uniformly in $T_0$. This makes $T$ uniformly ${\mathcal L}_2$-interpretable in $T \rtimes T_0$. \end{enumerate} \end{prop}
\pr 1. Note that an element $(\alpha , \beta)$ of $L_T \times T_0 = L_{ T \rtimes T_0 }$ belongs to $D_p$ iff $\beta$ has a predecessor in $T_0$, and in this case $p((\alpha, \beta)) = (\alpha, p(\beta))$. So $D_p \supseteq L$ iff $T_0$ is of type $(0)$ or $(1.b)$. \\ Assume first that $T_0$ has a root, say $r_0$. \\ Let $(\alpha, \beta)$, $(\alpha', \beta')$ be two leaves of $T \rtimes T_0$ such that $p(\alpha, \beta) \leq p(\alpha', \beta')$. By definition of the order in $T \rtimes T_0$ and the remark above, $\alpha = \alpha'$ and $p(\beta) \leq p(\beta')$. But $p(\beta) = p(\beta') = r_0$, thus $p(\alpha, \beta) = p(\alpha', \beta')$. So the second sentence of (1) is satisfied. \\ Assume now that $T_0$ is of type $(1.b)$. Let $(\alpha, \beta)$ be a leaf of $T \rtimes T_0$; then, by definition of such a $1$-colored good tree, in $T_0$ any element of $]-\infty, p(\beta)[$ is the predecessor of a leaf, say $p(\beta')$. So we have in $T \rtimes T_0$, $p(\alpha, \beta) < p(\alpha, \beta')$. \\ 2. The first item above allows us to distinguish, by a first-order sentence, whether $T_0$ has a root or not. Items \ref{root} and \ref{noroot} of Lemma \ref{ouf}
give the appropriate definitions in both cases. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
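For a finite instance the two sentences of item 1 of Proposition \ref{rootnoroot} can be checked mechanically. The following Python sketch is ours and not part of the text: trees are encoded as child-to-parent dictionaries (the root mapped to \texttt{None}), so that $x < y$ means $x$ is a strict ancestor of $y$ and leaves are maximal. Only the rooted case of $T_0$ has finite models (types $(1.a)$ and $(1.b)$ involve dense orders), so only that case is illustrated.

```python
# Hypothetical illustration: a finite T >< T0 with T0 rooted of type (0).
# Trees are child -> parent dictionaries, root mapped to None.

def ancestors(tree, x):
    """Strict ancestors of x, closest first."""
    out = []
    while tree[x] is not None:
        x = tree[x]
        out.append(x)
    return out

def leaves(tree):
    return [x for x in tree if all(tree[y] != x for y in tree)]

def lt(tree, x, y):
    """Strict tree order: x < y iff x is a strict ancestor of y."""
    return x in ancestors(tree, y)

# T has root r and leaves a, b; onto each leaf we glue a copy of a
# type (0) tree T0 (a root with three leaves), identifying the leaf
# of T with the root of that copy.
TxT0 = {'r': None, 'a': 'r', 'b': 'r'}
TxT0.update({(l, i): l for l in ('a', 'b') for i in (1, 2, 3)})

# First sentence: every leaf has a predecessor (here: a parent).
assert all(TxT0[x] is not None for x in leaves(TxT0))
# Second sentence: predecessors of leaves are pairwise incomparable.
assert all(not lt(TxT0, TxT0[x], TxT0[y])
           for x in leaves(TxT0) for y in leaves(TxT0))
```

Here the predecessors of the leaves form the antichain $\{a, b\}$, so both sentences hold, in accordance with the rooted case of the proposition.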
Returning to Example \ref{nesaitpas}, the previous proposition tells us that, if $T \rtimes T_0$ knows that $T_0$ is 1-colored, then it also knows whether or not $T_0$ has a root.
\section{Solvable and general colored good trees}
\begin{lem}\label{rem:induction (star))} Let $T_0$ be a 1-colored good tree and $T$ a good tree such that $T \rtimes T_0$ is well defined. Then: \\ - $T_0$ satisfies Condition $(\star\star\star)$; \\ - leaves of $T \rtimes T_0$ are isolated iff leaves of $T_0$ are, and
$T \rtimes T_0$ satisfies Condition $(\star)$; \\ - $T \rtimes T_0$ satisfies Condition $(\star\star\star)$. \end{lem} \pr The first assertion comes from Lemma \ref{1-color conditions stars}. The second one is clear. Let us prove the third one. If $T \rtimes T_0$ has isolated leaves then $T_0$ is of type $(0)$ or $(1.b)$ and for all $(\alpha, \beta) \in L_{T\rtimes T_0}$, $p(\alpha, \beta) = (\alpha, p(\beta)) \in L_T \times T_0$. If $T_0$ is of type $(0)$, $T$ embeds canonically in $T \rtimes T_0$ and, via this embedding, $p(L_{T \rtimes T_0}) = L_T$, an antichain in $T \rtimes T_0$, hence convex. If $T_0$ is of type $(1.b)$ then $p(L_{T \rtimes T_0}) = (T \rtimes T_0) \setminus ( L_{T \rtimes T_0} \cup N_T)$, which is clearly convex. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{defi:colored trees} A \emph{solvable good tree} is either a singleton or a tree of the form
$(\dots(T_{1}\rtimes T_{2} )\rtimes \cdots )\rtimes T_{n}$ for some integer $n \geq 1$, where $T_{1}, \cdots, T_{n}$ are $1$-colored good trees such that, for each $i$, $1 \leq i \leq n-1$, if $T_i$ is of type $(1.a)$ then $T_{i+1}$ is of type $(0)$. \end{defi}
\begin{rem}\label{rem:bien defini} - By Lemma \ref{rem:induction (star))} and an easy induction on $n$,
$(\dots(T_{1}\rtimes T_{2} )\rtimes \cdots )\rtimes T_{n}$ is a well defined good tree. \\ - Taking into account the associativity of the extension proven in Lemma \ref{assoc}, we will allow ourselves to write simply
$T_{1} \rtimes \cdots \rtimes T_{n}$ instead of $(\dots(T_{1}\rtimes T_{2} )\rtimes \cdots )\rtimes T_{n}$. \\ - If $ T_{1} \rtimes \cdots \rtimes T_{n}$ is a solvable good tree as in Definition \ref{defi:colored trees} then for any $k \leq n$, $ T_{1} \rtimes \cdots \rtimes T_{k}$ and $ T_{k+1} \rtimes \cdots \rtimes T_{n}$ are solvable good trees. \\ - Conversely, let $T' =T_{1}\rtimes \cdots \rtimes T_{n}$ and $T'' =T_{n+1}\rtimes \cdots \rtimes T_{n+m}$ be solvable good trees as in Definition \ref{defi:colored trees} and such that, if $T_n$ is of type $(1.a)$ then $T_{n+1}$ is of type $(0)$. Then $T' \rtimes T'' = T_{1}\rtimes \cdots \rtimes T_{n +m}$ and $T' \rtimes T''$ is a solvable good tree. \\ - $T$ is a solvable good tree iff it is either a singleton or a $1$-colored good tree or of the form $T = T' \rtimes T_n$ for $T'$ a solvable good tree which is not a singleton and $T_n$ a $1$-colored good tree. \end{rem}
One difficulty is that a solvable good tree may have decompositions into iterated extensions of $1$-colored good trees of different lengths.
\begin{exa} 1. Consider the extension $T = T_1 \rtimes T_2$ where $ T_1$ and $ T_2$ are 1-colored.
If $ T_1$ is of type (1.b) of color, say $(1,1)$ and $ T_2$ is of type (1.a) of color $(0,2)$, then $T_1 \rtimes T_2$ is still 1-colored of type (1.a) of color $(0,2)$. \\ 2. Consider now the extension $T_{1} \rtimes T_2 \rtimes T_{3}$ where $T_{1},T_2$ and $T_{3}$ are 1-colored. If $T_1$ and $T_3$ are of type (1.a) of color $(0,m)$ and $T_2$ is of type (0) of color $(m,0)$, then $T_{1} \rtimes T_2 \rtimes T_{3}$ is again of type (1.a) of color $(0,m)$. \end{exa}
We will now do two things: introduce technical tools in order to characterize decompositions of minimal length and find all exceptional situations where two or more terms of the decomposition ``collapse''.
\begin{defi}\label{defi:branching color} Let $T$ be a good tree and $x$ a node of $T$. Extending Definition \ref{defi:$1$-colored}, we call \emph{branching color} of $x$, and denote $b$-$col_T(x)$, the pair $(m_T(x),\mu_T(x))$, $m_T(x),\mu_T(x) \in \mathbb N \cup \{\infty\}$, where $m_T(x)$ is the number of cones at $x$ which are also thick cones (in other words the number of elements of $T$ which have $x$ as a predecessor) and $\mu_T(x)$ is the number of cones at $x$ which are not thick cones. \end{defi}
In a pure solvable tree $T = T' \rtimes T_n$, in all ``non exceptional situations'' we will be able to define the function $e$ associated to the extension $T' \rtimes T_n$ in terms of changes of branching color.
\begin{rem}\label{first rem.} - Branching color is definable in the pure order of $T$ in the sense of Lemma 2.14 (no $\aleph_0$-categoricity needed now).\\ - Let $T$ be a $1$-colored good tree. Then the branching color of any of its nodes is its color in the sense of Definition \ref{color of a node} (so the same for any node of $T$). \end{rem}
\begin{lem}\label{Lemma: b-color extension} Let $ T'$ be a solvable good tree, not a singleton, and $T_n$ a 1-colored good tree such that $T := T' \rtimes T_n$ is well defined. Let $E,E_>$ and $E_<$ be as in Definition \ref{definition des e,E...}. Then for any $x \in N_T$, \\ - if $x \in E_<$, then $b$-$col_{T}(x) = b$-$col_{T'}(x)$; \\ - if $x \in E$ and $T_n$ has a root, then $b$-$col_{T}(x)$ is the branching color (in $T_n$) of the root of $T_n$ (hence of the form $(m,0)$); \\ - if $x \in E$ and $T_n$ has no root, then $b$-$col_{T}(x) = (0, \mu_{T'}(x)+m_{T'}(x))$; \\ - if $x \in E_>$, then $b$-$col_{T}(x)$ is the color of any node of $T_n$. \end{lem} \pr Clear by construction of $T' \rtimes T_n$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
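In the finite case the first two clauses of Lemma \ref{Lemma: b-color extension} can be illustrated concretely. The Python sketch below is ours and not part of the text: trees are child-to-parent dictionaries, and since in a finite tree every cone at a node is thick, branching colors there always have the form $(m,0)$ with $m$ the number of successors.

```python
# Hypothetical illustration of the branching-color lemma, finite case.
# Trees as child -> parent dictionaries, root mapped to None.

def leaves(tree):
    return [x for x in tree if all(tree[y] != x for y in tree)]

def b_col(tree, x):
    """Branching color (m, mu) of a node: in a finite tree every cone
    at x is thick, so mu = 0 and m is the number of successors of x."""
    return (sum(1 for y in tree if tree[y] == x), 0)

def extend(tree, tree0, root0):
    """T >< T0 for a rooted T0: glue a disjoint copy of T0 onto each
    leaf of T, identifying that leaf with the root of the copy."""
    new = dict(tree)
    for leaf in leaves(tree):
        for y, py in tree0.items():
            if y != root0:
                new[(leaf, y)] = leaf if py == root0 else (leaf, py)
    return new

T  = {'r': None, 'a': 'r', 'b': 'r'}        # root r, two leaves
T0 = {'s': None, 1: 's', 2: 's', 3: 's'}    # root s, three leaves
T1 = extend(T, T0, 's')

# x in E_< keeps its color from T; x in E (an old leaf of T) takes
# the branching color of the root of T0, as in the lemma.
assert b_col(T1, 'r') == b_col(T, 'r') == (2, 0)
assert b_col(T1, 'a') == b_col(T0, 's') == (3, 0)
```

Only the rooted clauses are testable this way: the remaining clauses involve non-thick cones, hence infinite trees.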
\begin{prop}\label{prop: e definable in order} Let $T = T' \rtimes T_n$ be as in Lemma \ref{Lemma: b-color extension} and let $e$ be as in Definition \ref{definition des e,E...}. Then the function $e$ is definable in the pure order except when $T_n$ is of type $(1.a)$ of color $(0, \mu_n)$ and, if $T' = T_{n-1}$ or $T' = T^- \rtimes T_{n-1}$ for $T_{n-1}$ a 1-colored good tree as given by Remark \ref{rem:bien defini}, then either: \\ Exception 1: $T_{n-1}$ is $1$-colored of type $(1.b)$ of color $(m_{n-1}, \mu_{n-1})$ and $\mu_n = m_{n -1} + \mu_{n-1}$ or,\\ Exception 2: $T_{n-1}$ is $1$-colored of type $(0)$ and, if $ T^- = T_{n-2}$ or $ T^- = T^{=} \rtimes T_{n-2}$ for $T_{n-2}$ a 1-colored good tree, then $T_{n-2}$ is of type $(1.a)$ of color $(0, \mu_{n-2})$ and $ \mu_{n-2} = m_{n -1} = \mu_{n}$.
\end{prop}
\pr In this proof ``definable'' means ``definable in the pure order''.
Note that if the restriction of $e$ to $L_T$ is definable, then $E_{\geq} = \{x \in T; \exists \alpha \in L_T, \ x \geq e(\alpha)\}$ is definable and for all $x \in E_{\geq }$, $e(x) = e (\alpha)$ for any $\alpha \in L_T$, $\alpha \geq x$, so $e$ is definable.\\
If $T_n$ has a root, then $e(\alpha) = p(\alpha)$, for any $\alpha \in L_T$, so $e$ is definable.\\
If $T_n$ is of type $(1.b)$, then by Lemma \ref{Lemma: b-color extension}, the color of any element of $E_>$ is $(m_n, \mu_n)$, with $m_n \neq 0$, while, if $x \in E$, $b$-$col_{T}(x) = (0, \mu_{T'}(x)+m_{T'}(x))$. Therefore, for any $\alpha \in L_T$, $e(\alpha) = max \; (br(\alpha) \cap \{x \in N; b$-$col(x) = (0, \mu) \})$, so $e$ is definable.\\ So from now on $T_n$ is of type $(1.a)$ hence, by Condition $(\star \star)$, $T_{n-1}$ is of type (0) or $(1.b)$. We will prove that if $T$ satisfies neither the conditions of Exception 1 nor those of Exception 2, then $e$ is definable.\\
Again by Lemma \ref{Lemma: b-color extension}, the branching color of any element of $E_>$ is $(0, \mu_n)$, and if $x \in E$, then $b$-$col_{T}(x) = (0, \mu_{T'}(x)+m_{T'}(x))$. We are going to apply Lemma \ref{Lemma: b-color extension} one more time, this time to the extension $T'= T^- \rtimes T_{n-1}$ and its corresponding subsets $E'_<$, $E'$ and $E'_>$. \\ If $T_{n-1}$ is of type $(1.b)$, then $E \subset E'_>$, therefore for any $x \in E$, the branching color of $x$ in $T'$ is its branching color in $T_{n-1}$. If the first Exception is not realized, then $\mu_n \neq m_{n-1} + \mu_{n-1}$ and $e$ is definable as follows: for any $\alpha \in L_T$, $e(\alpha) = max \; (br(\alpha) \cap \{x \in N_T ; b$-$col(x) = (0, m_{n-1} +\mu_{n-1}) \})$. \\ If $T_{n-1}$ is of type $(0)$, $E = E'$, hence for any $x \in E$, $b$-$col_{T'}(x) = (m_{n-1},0)$, so $b$-$col_{T}(x) = (0, m_{n-1})$. Therefore if $\mu_n \neq m_{n-1}$, $e$ is definable as above. Now, if $\mu_n = m_{n-1}$, we must consider the branching colors of nodes of $E'_<$, thus we must look down at the tree $T^-= T^{=} \rtimes T_{n-2}$ and its corresponding subsets $E^-, E^-_<$ and $E^-_>$. If $T_{n-2}$ is of type $(0)$ or $(1.b)$, by the previous discussion $E^-$ is definable in the pure order and $E' = E$ is the subset of all successors of nodes of $E^-$, hence definable in the pure order too. If $T_{n-2}$ is of type $(1.a)$, then the branching color of the nodes of $E^-_>$ is $(0, \mu_{n-2})$. If the second Exception is not realized, $\mu_{n-2} \neq m_{n-1}$, so, as previously, the function $e$ is definable. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{defi:n-colored trees} We define $n$-\emph{solvable good trees} by induction on $n \in {\baton N}$:\\ - a $0$-solvable good tree is a singleton; \\ - a $1$-solvable good tree is the same thing as a $1$-colored good tree;\\ - an $(n+1)$-solvable good tree is a tree of the form $T \rtimes T_{n+1}$ with $T$ an $n$-solvable good tree and $T_{n+1}$ a $1$-colored good tree, which is not a $k$-solvable good tree for any $k \leq n$. \end{defi}
\begin{prop}\label{décomposition canonique} An $n$-solvable good tree $T$ with $n>1$ has a unique decomposition $T' \rtimes T_{n}$ with $T'$ an $(n-1)$-solvable good tree and $T_{n}$ a $1$-colored good tree. If $n>0$ it has a unique decomposition $T_{1}\rtimes \cdots \rtimes T_{n}$ such that each $T_{i}$ is a $1$-colored good tree. In this decomposition, no two consecutive factors realize Exception 1 and no three consecutive factors realize Exception 2. If $T_{1}\rtimes \cdots \rtimes T_{n}$ is $n$-solvable and such that each $T_{i}$ is $1$-colored, then for any $k$ and $\ell$, $1 \leq k \leq \ell \leq n$, $T_{k}\rtimes \cdots \rtimes T_{\ell}$ is $(\ell - k +1)$-solvable. \end{prop} \pr By definition, if $n>1$, there exist an $(n-1)$-solvable good tree $T'$ and a $1$-colored good tree $T_{n}$ such that $T = T' \rtimes T_{n}$. Since $T$ is an $n$-solvable good tree, it realizes neither Exception $1$ nor Exception $2$. Hence, by Proposition \ref{prop: e definable in order}, the function $e$ is definable in $T$ and $T' = E_{\leq}$ if $\forall \alpha \in L$, $e(\alpha) = p(\alpha)$, $T' = E_{<}$ otherwise. This gives the uniqueness of $T'$, and the uniqueness of $T_n$ as well since $\sim$ (defined in \ref{def: sim}) is definable from $e$ and $\sim$-classes are subtrees isomorphic to $T_n$. \\ Uniqueness of the decomposition $T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{n}$ follows by induction on $n > 0$. The last assertion is now clear. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{cor}\label{unicité n-solvable} If $T$ is a solvable good tree, then there exists a unique $n \in {\baton N}$ such that $T$ is an $n$-solvable good tree. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \end{cor}
From now on $n$ is supposed to be positive.
\begin{defi}\label{Ln} We first define and interpret by induction the language ${\mathcal L}_{n}$ on $n$-solvable good trees. \\ The language ${\mathcal L}_{1} = \{ \leq, \wedge, N, L \}$ has already been defined and ${\mathcal L}_{n+1} := {\mathcal L}_n \cup \{e_{n}, E_n, E_{\geq,n} \}$
where $e_n$ is a symbol for a unary function and $E_n$ and $E_{\geq,n}$ are unary predicate symbols. \\ The language ${\mathcal L}_{1}$ is interpreted naturally as in any good tree. \\ If $T'$ is an $(n+1)$-solvable good tree, it has a unique decomposition $T' = T \rtimes T_{n+1}$ with $T$ an $n$-solvable good tree and $T_{n+1}$ a $1$-colored good tree. We now refer to Subsection \ref{language of extension} with the following adaptations: ${\mathcal F} := \{ e_1,\dots,e_{n-1} \}$, the language denoted ${\mathcal L}$ in \ref{language of extension} now becomes the language ${\mathcal L}_n$ and ${\mathcal L}'$ now becomes ${\mathcal L}_{n+1}$. By induction hypothesis ${\mathcal L}_{n}$ is interpreted on $T$ and satisfies $(4\star)$. This gives the interpretation of ${\mathcal L}_{n+1}$ on $T'$ and shows that it satisfies $(4\star)$. \\ Next we define ${\mathcal L}_{n}^+ := {\mathcal L}_{n} \cup \{ p,D_p,F_p \}$ (for $p,D_p$ and $F_p$ as defined before Definition \ref{def:p}). In any $n$-solvable good tree ${\mathcal L}_{n}^+$ is an extension by definition of ${\mathcal L}_{n}$.
\end{defi}
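As an informal illustration of the interpretation of $e_1$ in the finite case (the encoding and all names below are ours, not part of the text): on a finite $2$-solvable tree, $e_1(x)$ is the maximal element of $E_1$ on the branch of $x$, and the composition law $e_i \circ e_j = e_{\min\{i,j\}}$ reduces here to the idempotence of $e_1$.

```python
# Hypothetical illustration: e_1 on a finite 2-solvable tree built by
# gluing a rooted 1-colored tree onto each leaf of another.  Trees as
# child -> parent dictionaries; E1 marks the gluing nodes.

T = {'r': None, 'a': 'r', 'b': 'r'}
T.update({(l, i): l for l in ('a', 'b') for i in (1, 2, 3)})
E1 = {'a', 'b'}                  # the old leaves of the base tree

def branch_down(x):
    """The branch of x, listed from x towards the root."""
    out = [x]
    while T[x] is not None:
        x = T[x]
        out.append(x)
    return out

def e1(x):
    """e_1(x) = max of E1 on the branch of x, defined on E_{>=,1}."""
    for y in branch_down(x):     # first E1-node met going down is the max
        if y in E1:
            return y
    raise ValueError("x is not in the domain of e_1")

assert e1(('a', 2)) == 'a' and e1('b') == 'b'
# e_1 is idempotent: the i = j = 1 instance of e_i o e_j = e_min{i,j}.
assert all(e1(e1(x)) == e1(x) for x in ['a', 'b', ('a', 1), ('b', 3)])
```

The same encoding extends to $n$ levels by keeping one set $E_i$ of marked nodes per gluing stage; the composition law then follows because $E_1, \dots, E_{n-1}$ occur in this order along every branch.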
\begin{prop}\label{theory de n coloré} Let $T$ be an $n$-solvable good tree, $\Sigma$ its complete theory in the language ${\mathcal L}_{n}$ and $T_0$ a $1$-colored good tree, $\Sigma_0$ its complete theory in the language ${\mathcal L}_{1}$. Then $\Sigma \rtimes \Sigma_0$ (as defined in Definition \ref{prop:axiomatization-syntaxe}) is the complete theory of $T \rtimes T_0$ in the language ${\mathcal L}_{n+1}$. \end{prop}
\pr
We proceed by induction on $n$. The case $n = 1$ is given by Proposition \ref{prop:va et vient} and the induction step by Proposition \ref{prop:better-axiomatization}. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop}\label{prop: all definable in order} Let $T$ be an $n$-solvable good tree. Then \\ - $T$ eliminates quantifiers in the language ${\mathcal L}_{n}^+$, \\ - functions and predicates of ${\mathcal L}_n$ are definable in the pure order, \\ - $T$ is finite or $\aleph_0$-categorical, \\ - $M(T)$ is indiscernible and $C$-minimal. \end{prop} \pr The proof runs by induction on $n$. The first item follows from Propositions \ref{prop:va et vient} (case $n=1$) and \ref{prop:better-axiomatization-qe} (induction step), and the second one from Propositions \ref{décomposition canonique} and \ref{prop: e definable in order} (induction step; nothing to prove here when $n=1$). The third one follows from Proposition \ref{prop:va et vient} for the case $n=1$ and from 5.13 for the induction step, and the fourth one from Proposition \ref{theo:1-colored are precolored} for the case $n=1$ and from Propositions 5.16 and 5.17 for the induction step. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{defi}\label{def:theories of n-colored}
Let $T_{1}, T_{2}, \cdots,T_{n}$ be $1$-colored good trees realizing neither Exception 1 nor Exception 2.
Let $\Sigma_{1}, \Sigma_{2}, \cdots,\Sigma_{n}$ be their theories in the language ${\mathcal L}_1$ and $\Sigma_{1} \rtimes \dots \rtimes \Sigma_{n}$
the ${\mathcal L}_n$-theory defined by induction using Proposition \ref{prop:better-axiomatization}, Definition \ref{prop:axiomatization-syntaxe} and extension associativity. By Proposition \ref{prop: all definable in order}, $\Sigma_{1} \rtimes \dots \rtimes \Sigma_{n}$ is an extension by definition of its restriction to ${\mathcal L}_1$ and we will also consider it as a theory in the language ${\mathcal L}_1$.
We denote by $S_n$, $n \geq 1$, the set of all theories $\Sigma_{1}\rtimes \Sigma_{2} \rtimes \cdots \rtimes \Sigma_{n}$ in the language ${\mathcal L}_1$, and by $S_0$ the ${\mathcal L}_1$-theory of the singleton.
\end{defi}
\begin{defi} For $n \in \mathbb N \cup \{ \infty \}$, we call $n$-\emph{colored} any model of $S_n$. \end{defi}
\begin{cor}\label{cor} For any $n \in \mathbb N \cup \{ \infty \}$, any finite or countable $n$-colored good tree is $n$-solvable. \end{cor} \pr By Proposition \ref{prop: all definable in order} any theory in $S_n$ is $\aleph_0$-categorical. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{rem} The class of $n$-colored good trees, $n$ at least two, is not elementary, as the following example shows (but the class of all $i$-colored good trees for some $i \leq n$ is). Take 1-colored good trees, $T$ of color $(0,\infty)$ and, for each $n \in \mathbb N ^{\geq 1} \cup \{ \infty \}$, $T_n$ of color $(1,n)$. By Proposition \ref{prop: e definable in order}, for $n \in \mathbb N ^{\geq 1}$, all $T_n \rtimes T$ are 2-colored. But any non-trivial ultraproduct of them is 1-colored, as it is elementarily equivalent to $T_\infty \rtimes T$, which realizes Exception 1. \end{rem}
The following theorem summarizes much of what has been proven in this section.
\begin{theo}\label{prop: $Sn$ complete} For any integer $n$, any theory in $S_{n}$ is complete and admits quantifier elimination in the language ${\mathcal L}_{n}^+$. Furthermore $S_{n}$ is the set of all complete theories of $n$-colored good trees. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \end{theo}
\section{Classification of indiscernible $\aleph_{0}$-categorical $C$-minimal pure $C$-sets}
\begin{theo}\label{tous pareils}
Let $M$ be a pure $C$-set. Then the following assertions are equivalent:
\begin{itemize}
\item[(i)]
$M$ is finite or
$\aleph_{0}$-categorical, $C$-minimal and indiscernible
\item[(ii)]
$T(M)$ is a precolored good tree. \item[(iii)] $T(M)$ is a colored good tree. \end{itemize} \end{theo} \pr $(i) \Rightarrow (ii)$: This is Corollary \ref{cor:colored good tree}.\\ $(iii) \Rightarrow (i)$: This is Theorem \ref{prop: $Sn$ complete}.\\ $(ii) \Rightarrow (iii)$\\ We will prove the result by induction on the depth $n$ of $T(M)$.\\
The case of depth $1$ is given by Remark \ref{precolored depth one implies 1-colored}.\\
Assume that any precolored good tree of depth $n$ is a colored good tree.
Let $T$ be a precolored good tree of depth $n+1$. By Corollary \ref{intervals of precolored}, for any leaf $\alpha$, the last one-colored interval
$I_{n+1}(\alpha)$ of the branch $br(\alpha)$ is either $\{p(\alpha)\}$, case $(0)$, or $]e_n(\alpha), \alpha[$, case $(1.a)$, or $]e_n(\alpha), p(\alpha)]$, case $(1.b)$. \\
In case $(0)$ the thick cone $T_{\alpha}$ at $p(\alpha)$ is a $1$-colored good tree of type $(0)$, and in case $(1.a)$ or $(1.b)$, the cone $T_{\alpha}$ of $\alpha$ at $e_n(\alpha)$ is a $1$-colored good tree of type $(1.a)$ or $(1.b)$ respectively. Let us call $(m_{n+1}, \mu_{n+1})$ the color (independent of $\alpha$) of the $1$-colored good tree $T_{\alpha}$. Thus by Proposition \ref{prop:va et vient}, for any $\alpha$, $T_{\alpha} \models \Sigma_{m_{n+1}, \mu_{n+1}}$. Let $T_{n+1}$ be the countable or finite $1$-colored good tree model of $\Sigma_{m_{n+1}, \mu_{n+1}}$. \\
Now, $T$ is an ${\mathcal L}_2$-structure when interpreting $e$ by $e_{n}$, $E = Im (e_{n})$, $E_\geq = Dom (e_n)$ and, as such, a model of $\Sigma''$ (cf \ref{Sigma"}).
Let us consider on $T$ the equivalence relation $\sim$ associated to $e_n$, as defined in \ref{ouf}, (1) if $T_{n+1}$ is of type (0) and (2) otherwise, and $\overline T:=T/\sim$. Suppose $T$ is countable or finite. So, by categoricity of 1-colored good trees and (\ref{ouf2}), $T \equiv \overline T \rtimes T_{n+1}$.\\
By induction hypothesis and Corollary \ref{cor} there are 1-colored good trees $T_1,\dots,T_k$ such that $\overline T = T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{k}$, hence, $T = T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{k}\rtimes T_{n+1}$. Hence $T$ is a colored good tree. This remains true for any $T' \equiv T$ by definition of colored good trees. This allows us to remove the temporary assumption that $T$ is countable or finite. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{rem}\label{plus de couleur} Since a tree of the form $T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{n}$ where the $T_i$ are 1-colored is always an $m$-colored good tree for some $m \leq n$, the proof of $(ii) \Rightarrow (iii)$ above shows that a precolored good tree of depth $n$ is an $m$-colored good tree, with $m \leq n$. \end{rem}
\begin{cor} A good tree is precolored of depth $n$ iff it is $n$-colored. \end{cor}
\pr
We proceed again by induction on $n$. The case $n = 1$ is Theorem \ref{theo:1-colored are precolored}.\\ Let now $T$ be a precolored good tree of depth $n + 1$; then by the remark above, $T$ is $m$-colored with $m \leq n +1$. Assume for a contradiction that $m < n +1$; then by induction hypothesis, $T$ is precolored of depth $m$, which contradicts the uniqueness of the depth of a precolored good tree (see Definition \ref{def: precolored good tree}), hence $m = n + 1$. Conversely, if $T$ is $n$-colored, then $T$ is a precolored good tree whose depth must therefore be $n$.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
We now make the equivalence between colored and precolored good trees completely precise. In what follows, the $E_{i}$ and the $E_{ \geq, i}$ are predicates of the language ${\mathcal L}_{n}$ as in Definition \ref{Ln}, $E_{<, i} := N_T \setminus E_{ \geq, i}$ and $E_{ \leq, i} := E_{<, i} \cup E_i$.
\begin{defi} \label{precolored-n-colored-df} Let $T \equiv T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{n}$ be an $n$-colored good tree, $ n \geq 1$, where each $T_i$ is $1$-colored. For $n=1$ we set $I_1 := N_T$. For $n \geq 2$ and $i$, $1 \leq i \leq n$, we define by induction on $i$ the subset $I_i$ of $N_T$ as follows:\\ - $I_1 := E_1 = E_{ \leq, 1}$ if $T_1$ is of type $(0)$ or $(1.b)$, and $I_1 := E_{ <, 1}$ if $T_1$ is of type $(1.a)$; \\ - for $i$, $1 < i < n$, $I_{i} := E_i = E_{ \leq, i} \setminus \bigcup_{1 \leq j \leq i-1} I_{j}$ if $T_i$ is of type $(0)$ or $(1.b)$, and $I_{i} := E_{<, i} \setminus \bigcup_{1 \leq j \leq i-1} I_{j}$ if $T_i$ is of type $(1.a)$; \\
- $I_n := E_{n-1}$ if $T_{n}$ is of type $(0)$, and $I_n := E_{>,n-1} \cap N_T$ otherwise. \end{defi}
\begin{prop} \label{precolored-n-colored} Let $T \equiv T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{n}$ be an $n$-colored good tree, $ n \geq 1$, where each $T_i$ is $1$-colored. Then, for each $i$, $1 \leq i \leq n$, and each leaf $\alpha$ of $T$, the set $I_{i} \cap br(\alpha)$ is the $1$-colored basic interval $I_i(\alpha)$ of $T$ seen as a precolored good tree of depth $n$ (as in Definition \ref{def: precolored good tree}).
\end{prop}
\pr It is clear from their definition that the $I_i$ cover $N_T$. Thus it is enough to prove that all nodes of each $I_i$ have the same tree-type in $T$. This will follow from quantifier elimination (given in Theorem \ref{prop: $Sn$ complete}). Up to logical equivalence, in ${\mathcal L}_n$ atomic formulas in the single variable $x$ are either tautologies, or always false, or of the form $E_i$ or $E_{\geq,i}$ applied to $x$ or $e_j(x)$, or an equality between two such terms. Indeed, since $e_i \circ e_j = e_{\min\{i,j\}}$ there is no need to consider terms in $x$ with more than one function $e_i$; since $e_i(x) \leq x$ and $e_i(x) < e_j(x)$ if $i<j$, there is no need of $\wedge$ or $<$ either. Now, an equality $e_j(x) = x$ holds iff $E_j(x)$ does, and $E_j(x)$ depends only on the types of $T_j$ and $T_{j+1}$ if $x \in I_j$.
We still have to deal with the function $p$. Now, $p$ always coincides with either the identity or some function $e_j$; moreover an equality $p(y) = y$ or $p(y) = e_j(y)$ is determined by the formula $I_j(y)$ and the type of $T_j$. More precisely, $I_j \cap D_p = \emptyset$ if $T_j$ is of type $(1)$; if $T_j$ is of type $(0)$ then $I_j \subseteq D_p $ and $p$ and $e_j$ coincide on $I_j$. This completes the proof. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{prop} \label{precolored-n-colored2} Let $T \equiv T_{1}\rtimes T_{2} \rtimes \cdots \rtimes T_{n}$ be an $n$-colored good tree, $ n \geq 1$, where each $T_i$ is $1$-colored of color $(m_i, \mu_i)$, and let $c$ be a node of $T$. Then the color of $c$ is $(m_i, \mu_i)$ where $i$ is the unique index such that $c \in I_i$. Inner cones at $c$ have the same theory as $T_{i} \rtimes \cdots \rtimes T_{n}$ and border cones have the same theory as $ T_{i+1} \rtimes \cdots \rtimes T_{n}$. If $I_i(\alpha)$ is dense for $\alpha \in L$ then $(T \setminus \Gamma(c)) \equiv T$. \end{prop}
\pr All nodes in $I_i$ have the same tree type, which is different from the tree type of any node of $I_{i+1}$.
Thus, let $\Gamma$ be a cone at $c$. Either $\Gamma$ contains a nonempty dense interval $]c, d[$ included in $I_i$, in which case $\Gamma$ is inner by definition; there are $\mu_i$ such cones. Or $\Gamma = \Gamma (c, \alpha)$ for some leaf $\alpha$ and either $I_i(\alpha) = ]e_{i-1}(\alpha), c]$ or $I_i(\alpha) = \{c\}$; there are $m_i$ such cones. To determine the theory of these cones, we may without loss of generality argue in the countable model. There is a copy of $T_i \rtimes T_{i+1} \rtimes \cdots \rtimes T_{n}$ containing $c$, in which $c$ can be identified with a node $d$ of $T_i$. Any cone $\Gamma$ at $c$ is canonically isomorphic to ${\mathcal C} \rtimes T_{i+1} \rtimes \cdots \rtimes T_{n}$, where ${\mathcal C}$ is a cone of $T_i$ at $d$. If $\Gamma$ is inner then ${\mathcal C}$ is inner in $T_i$, hence isomorphic to $T_i$ (see Corollary \ref{cone-equiv}). Thus, inner cones of $T$ at $c$ are isomorphic to $T_i \rtimes T_{i+1} \rtimes \cdots \rtimes T_{n}$. If $\Gamma$ is border then ${\mathcal C}$ is a leaf of $T_i$. Thus, border cones of $T$ at $c$ are isomorphic to $\bullet \rtimes T_{i+1} \rtimes \cdots \rtimes T_{n}$ with $\bullet$ a singleton, hence to $T_{i+1} \rtimes \cdots \rtimes T_{n}$.\\ In the same way, if $I_i(\alpha)$ is dense for $\alpha \in L$, then $(T_i \setminus \Gamma(d)) \equiv T_i$, hence $(T \setminus \Gamma(c)) = T_{1} \rtimes \cdots \rtimes T_{i-1} \rtimes (T_i \setminus \Gamma(d)) \rtimes T_{i+1} \rtimes \cdots \rtimes T_{n}$.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\section{General classification}
In this section we reduce the general classification of finite or $\aleph_0$-categorical and $C$-minimal $C$-sets to the classification of indiscernible ones, achieved in Section 7. By the Ryll-Nardzewski Theorem, any $\aleph_0$-categorical structure is a finite union of indiscernible subsets. In a $C$-minimal structure ${\mathcal M}$ these subsets have a very particular form. Let us give an idea: there exists a finite subtree $\Theta$ of $T := T(M)$, closed under $\wedge$ and $\emptyset$-algebraic, with the following properties: \\ - any $a \in \Theta$, except its root, has a predecessor in $\Theta$ since $\Theta$ is finite, call it $a^-$; now, in $T$, $]a^-,a[$ is either empty or not a singleton and dense, and in the second case, the pruned cone ${\mathcal C}(]a^-,a[)$ is indiscernible in $M$, \\ - for $a$ as above and $b \in \Theta$, $b>a$, the pruned cone ${\mathcal C}(]a^-,b[)$ is not indiscernible, \\ - and more... \\% to deal with cones at $a \in \Theta$ for example.
An equivalence relation is defined over $\Theta$ which identifies, for example, points $a$ and $b$ such that neither ${\mathcal C}(]a^-,a[)$ nor ${\mathcal C}(]b^-,b[)$ is empty and ${\mathcal C}(]a^-,a[) \cup {\mathcal C}(]b^-,b[)$ is indiscernible (this is only an example; there are other elements to be identified). We call vertices the elements of the quotient $\bar \Theta$ of $\Theta$. They are finite antichains of $T$. We consider on $\bar \Theta$ the order induced by the order of $\Theta$ (it is the classical order on antichains); it makes $\bar \Theta$ a finite tree. An (oriented) edge links vertices $A$ and $B>A$ iff $A$ is the predecessor of $B$ in $\bar \Theta$, $A=B^-$. Vertices and edges of $\bar \Theta$ are labeled.
As an example, on a vertex $A$, a first label gives the (finite) cardinality of $A$ seen as a subset of $T$, and a label on the edge $(A^-,A)$ records whether, for $a \in A$, $]a^-,a[$ is empty or not: this second label exists iff this interval is not empty, and it gives the complete theory of the indiscernible $C$-set ${\mathcal C}(]a^-,a[)$. \\ There are other labels on vertices, which are also either cardinals in $\mathbb N \cup \{ \infty \}$ or complete theories of indiscernible finite or $\aleph_0$-categorical and $C$-minimal $C$-sets. Conversely, we have isolated eleven properties which are true in $\bar \Theta$ and such that, given a labeled graph $\Xi$ sharing these eleven properties, there is a finite or $\aleph_0$-categorical and $C$-minimal $C$-set $M$ such that $\bar \Theta (M) = \Xi$. In this sense, the classification of finite or $\aleph_0$-categorical and $C$-minimal $C$-sets is reduced to that of indiscernible ones.
\subsection{The canonical partition}
\begin{prop}\label{canonical partition} Let $\cal M$ be a finite or $\aleph_0$-categorical structure, then there exists a unique partition of $M$ into a finite number of $\emptyset$-definable subsets which are maximal indiscernible. \end{prop} \pr By $\aleph_0$-categoricity, there is a finite number of $1$-types over $\emptyset$. By compactness, each of these types is isolated by one of its formulas. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi \begin{defi} We call this partition the {\rm canonical partition}. Thereafter it will be denoted $(M_1, \cdots, M_r)$. \end{defi}
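To illustrate the canonical partition on a toy structure (an example of ours, not taken from the text): let $M$ consist of two disjoint infinite sets named by unary predicates $P$ and $Q$, with no further structure. The theory is $\aleph_0$-categorical and there are exactly two $1$-types over $\emptyset$, isolated by $P(x)$ and $Q(x)$ respectively, so

```latex
\[
M = P \,\dot\cup\, Q, \qquad (M_1, M_2) = (P, Q).
\]
```

Each part is indiscernible (any permutation of $P$ fixing $Q$ pointwise is an automorphism) and maximal, since an element of $P$ and an element of $Q$ never have the same type over $\emptyset$.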
We reformulate here for convenience the description given in $[D]$ in the proof of Proposition $3.7$, with a small difference: instead of working with $T(M)$ we will work with $ T(M)^\ast$ defined as follows: $T^\ast:=T$ if $T$ has a root and $T^\ast:=T \cup \{-\infty\}$ otherwise, with $-\infty < T$. In the latter case, we say that ``$- \infty$ exists''. Note that the tree $T^\ast$ always has a root, which is either the root of $T$ or $- \infty$. By $C$-minimality each $M_i$ of the canonical decomposition is a finite boolean combination of cones and thick cones. We will be more precise. Let $D$ be the set of bases of the cones and thick cones appearing in these combinations.
\begin{defi}\label{definition of Theta} We define $\Theta_0:= \{ x \in T(M)^\ast; \: \exists c \in D,\ x \leq c\}$ and $\Theta_1:= \{x \in \Theta_0;\ \exists i \neq j,\ \exists \alpha \in M_i,\ \exists \beta \in M_j,\ x \in br(\alpha) \cap br(\beta) \}$. We define:
$U := \{$suprema of branches from $\Theta_1 \}$
$B := \{$branching points of $\Theta_1 \}$
$S := \{ c \in \Theta_1 \setminus (U \cup B); $ the thick cone at $c$ without the cone of \underline{the} branch of $\Theta_1$ intersects non trivially both $M_i$ and $M_j$ for some couple $(i,j)$, $i \neq j \}$
$I := \big\{$ infima $\in \Theta_1 \setminus (U \cup B \cup S)$ of intervals on branches of $\Theta_1$ which are maximal for being contained in $\{ c \in \Theta_1 \setminus (U \cup B \cup S); $ the thick cone at $c$ without the cone of \underline{the} branch of $\Theta_1$ is entirely contained in a same $M_i \}\big\}$
$\Theta := U \cup B \cup S \cup I$.
\end{defi}
\begin{rem} - Since $D$ is finite, $\Theta_0$ and $\Theta_1$ are trees with finitely many branches, which implies that $U$ and $B$ are finite; $S$ is finite since it is contained in $D$; $I$ is finite by o-minimality of branches of $\Theta_1$. Hence $\Theta$ is finite. \\ - $\Theta_1$, $U$, $B$, $S$, $I$ and $\Theta$ are all definable from the $M_i$, hence $\emptyset$-definable since the $M_i$ are. As $\Theta $ is finite, it is contained in the algebraic closure of the empty set. \\ - $\Theta$ is a subtree of $T(M)^\ast$ closed under $\wedge$. Because it is finite, each element of $\Theta$ has a predecessor in $\Theta$. Elements of $\Theta$ which are nodes (or leaves) in $T(M)$ may not be nodes (or leaves) in $\Theta$. So, to avoid confusion, we will use the words {\rm vertices} and {\rm edges} for the tree $\Theta$. \\ - We have the equivalence: $\cal M$ is not indiscernible iff $\Theta$ is not empty iff the root of $T(M)^\ast$ belongs to $\Theta$. \end{rem}
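As a schematic illustration of Definition \ref{definition of Theta} (our example, under the stated assumptions): suppose $T(M)$ has a root $r$ and $M$ is the union of two cones $M_1$ and $M_2$ at $r$, each indiscernible, with distinct complete theories. Then $D = \{r\}$, and for $\alpha \in M_1$, $\beta \in M_2$ we have $br(\alpha) \cap br(\beta) = \{x ; x \leq \alpha \wedge \beta\} = \{r\}$, so the construction collapses to

```latex
\[
\Theta_0 = \Theta_1 = \{r\}, \qquad U = \{r\}, \qquad B = S = I = \emptyset,
\qquad \Theta = \{r\},
\]
```

in accordance with the last point of the remark: $M$ is not indiscernible and the root of $T(M)^\ast$ belongs to $\Theta$.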
\begin{prop}\label{canonical partition $C$-minimal} Let $\cal M$ be a $C$-minimal, $\aleph_0$-categorical structure. Then the subsets $M_1, \cdots, M_r$ of the canonical partition are the orbits over $\emptyset$ of $acl(\emptyset)$-definable subsets of the form: \begin{itemize} \item cones \item almost thick cones (i.e. cofinite unions of cones at the same basis) \item pruned cones ${\mathcal C}(]b,a[)$ where $b < a$ and $]b,a[$ is a dense interval without extremities, \end{itemize} all these cones having their basis in $\Theta$, as well as the other extremity (namely $a$) of the axis in the case of pruned cones.
\end{prop}
\pr By definition of $\Theta$, any $M_i$ is a finite union of pruned cones ${\mathcal C}(]b,a[)$, cones and thick cones at $a$, with $a,b \in \Theta$ and $b$ the predecessor of $a$ in $\Theta$. By $\emptyset$-definability, $M_i$ is the union of the orbits over $\emptyset$ of these sets (for more details, see \cite{D2}, Proposition 3.7). This gives the proposition except for the fact that $]b,a[$ is a dense interval without extremities. This follows from $\aleph_0$-categoricity using the following facts.
\begin{fact}\label{premier-intervalle1} Assume some subset of the canonical partition is of the form $M_j = \bigcup_{i=1}^n{\mathcal C}(]b_i,a_i[)$. Let $(b,a)$ be one of the couples $(b_i, a_i)$. Then all the elements of the pruned cone ${\mathcal C}(]b,a[)$ have the same type over $(b,a)$ in $\cal M$. \end{fact} \pr We may assume $\cal M$ is $\omega$-homogeneous. Then, for $x,y \in {\mathcal C}(]b,a[)$ there exists an automorphism of $\cal M$ sending $x$ to $y$. Such an automorphism preserves $M_j$, hence preserves $a$ and $b$. Therefore $x$ and $y$ have the same type over $(b,a)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi
\begin{fact}\label{premier-intervalle2} All nodes of $]b,a[$ have the same type over $(b,a)$. \end{fact} \pr This is a direct consequence of the preceding Fact, since any node of $]b,a[$ is of the form $b \wedge x$, where $x \in {\mathcal C}(]b,a[)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm]
Now, since all the nodes of $]b,a[$ have the same type over $\emptyset$, either $]b,a[$ is dense, or consists of a unique node, or contains an infinite discrete order, which is not possible by $\aleph_0$-categoricity.\\ In the case where $]b,a[$ consists of a single node, say $c$, ${\mathcal C}(]b,a[)$ is an almost thick cone, namely the thick cone at $c$ without ${\mathcal C}(c,b)$. So, ${\mathcal C}(]b,a[)$ moves from the third category to the second category of subsets.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm] In particular, Fact \ref{premier-intervalle1} has the following consequence. \begin{fact}\label{} If $a \in \Theta$ has a predecessor in $ T(M)^\ast$, then this predecessor also belongs to $\Theta$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi \end{fact}
{\bf Notations.} Our aim is now to understand the structure induced by $\cal M$ on a pruned cone ${\mathcal C}(]b,a[)$ of the canonical partition as in Proposition \ref{canonical partition $C$-minimal}. It is in general not a pure $C$-set, but we know by Proposition \ref{induiteCmin} that, as a pure $C$-set, it is $C$-minimal.
So what we have done in the previous sections applies to the $C$-minimal pure $C$-set ${\mathcal C}(]b,a[)$. This means that its canonical tree $\Gamma(]b,a[)$ is a colored good tree, say an $n$-colored good tree for some integer $n$, which must be greater than 1 since $]b,a[$ contains at least one node. Thus $\Gamma(]b,a[) =: T \equiv T_{1} \rtimes \cdots \rtimes T_{n}$ for 1-colored good trees $T_{1}, \cdots, T_{n}$. Recall (from Section \ref{language of extension}) that $T_1$ may be taken to be a definable quotient of $T$. We call this $T_1$ the {\it first level} of $T$. Since $]b,a[$ is dense, $T_1$ is infinite, of type $(1.a)$ or $(1.b)$. Its set of nodes, $N_1$, embeds definably in $T$, as the set $I_1$ defined in Definition \ref{precolored-n-colored-df}.
Note that, when $M$ is countable, the elementary equivalence becomes an isomorphism: $\Gamma(]b,a[) = T_{1} \rtimes \cdots \rtimes T_{n}$. \\
If $\Sigma$ is the complete theory of the pure tree $T$, $\Sigma_{1}$ will denote the theory of its first level $T_{1}$ and $\Sigma_{>1}$ the theory of the $(n-1)$-colored good tree $T_{2} \rtimes \cdots \rtimes T_{n}$ or, to understand it in definable terms from $T$, the theory of each non trivial $\sim_1$-equivalence class for $\sim_1$ the relation corresponding to the extension $T_1 \rtimes (T_2 \rtimes \cdots \rtimes T_n)$ (see Section \ref{def: sim}). For $i \in \{1, \cdots, n\}$, $(m_i, \mu_i)$ will denote the color of the $1$-colored good tree $T_i$.
\begin{lem}\label{premier-intervalle3} Let ${\mathcal C}(]b,a[)$ be a pruned cone as in Fact \ref{premier-intervalle1}. Then $]b,a[$ is included in the set of nodes of the first level of $\Gamma(]b,a[)$, the colored good tree associated to ${\mathcal C}(]b,a[)$. \end{lem} \pr Any $\alpha \in {\mathcal C}(]b,a[)$ satisfies $(\alpha \wedge a) > b$ hence $I_1(\alpha)$ (considered in $\Gamma(]b,a[)$) intersects $]b,a[$ non trivially. Take any $c \in I_1 \cap ]b,a[$. Then the formula ``$x$ belongs to $I_1$ (taken in the tree $\Gamma(]b,a[))$'' is true for $x=c$.
By Fact \ref{premier-intervalle2} it is then true for any $x \in ]b,a[$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
Until now we have used the fact that each set $M_i$ of the canonical partition is indiscernible. We now use that it is {\it maximal} indiscernible, i.e. if $i \neq j$, there are no $\alpha \in M_i$ and $\beta \in M_j$ with the same type.
\begin{lem}\label{aumoins2} Let $a \in \Theta$ be maximal in $\Theta$, $a$ not the root of $\Theta$. Let $a^-$ be its predecessor in $\Theta$. If the interval $]a^-, a[$ is empty, then $a$ is not a leaf of $T(M)$ and there exist at least two cones at $a$ with different complete theories as colored good trees. \end{lem} \pr Since $a$ is maximal, following the notation of Definition \ref{definition of Theta}, $a$ is in $U$, i.e. $a$ is the supremum of some branch of $\Theta_1$. If $]a^-, a[$ is empty, $a$ is in $\Theta_1$, hence $a$ belongs to at least two branches of different type in $M$. In particular $a$ is not a leaf. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{interdit1} Let $M$ be a $C$-minimal $C$-set. Let $a, b \in T(M)$, with $b < a$ and such that the interval $]b, a [$ is not empty, not a singleton and is dense. Assume that the canonical tree $\Gamma(]b, a [)$ of the pruned cone ${\mathcal C}(]b, a [)$ is an $n$-colored good tree and let $\Sigma(]b, a [)$ be its complete theory. Assume furthermore that $]b, a [$ is contained in the set of nodes of the first level of $\Gamma(]b, a [)$.
Let ${\mathcal C}$ be the union of at least two cones at $a$, such that each of these cones is indiscernible. Then $T({\mathcal C}(]b, a [) \cup {\mathcal C})$ is a model of $\Sigma(]b, a [)$ if and only if one of the following cases occurs (where we follow the conventions preceding Lemma \ref{premier-intervalle3}): \begin{enumerate} \item[(a)]
$m_1 = 0$, $ n \geq 2$, and $T({\mathcal C})$ is an $(n-1)$-colored good tree model of $\Sigma(]b, a [)_{> 1}$.
\item[(b)] $m_1 = 0$, and ${\mathcal C}$ is the union of exactly $\mu _1 $ cones at $a$, all with canonical tree a model of $\Sigma(]b, a [)$. \item[(c)] $m_1 \neq 0$ and, \\ - if $n = 1$, then ${\mathcal C}$ is the union of exactly $m_1$ cones each consisting of a leaf, and $\mu_1$ cones with canonical tree a model of $\Sigma(]b, a [)$.\\ - if $n \geq 2$, then ${\mathcal C}$ is the union of exactly $m_1$ cones with canonical tree a model of $\Sigma(]b, a [)_{> 1}$ and exactly $\mu_1$ cones with canonical tree a model of $\Sigma(]b, a [)$. \end{enumerate}
\end{lem} \pr By hypothesis, $]b,a[$ is contained in the first level of $\Gamma(]b,a[)$ and $\mu_1 \neq 0$ since $]b,a[$ is dense. Note that ${\mathcal C}$ becomes the thick cone at $a$ in the $C$-set ${\mathcal C}(]b, a [) \cup {\mathcal C} =: {\mathcal H} $. \\[2 mm]
We first prove the ``if'' direction.\\ - Assume $(a)$. Then $T_1$ is of type (1.a) and, in $T({\mathcal H})$, $a$ is the root of an $(n-1)$-colored good tree model of $\Sigma(]b, a [)_{> 1}$.
Let $T_1'$ be the first level of $\Gamma(]b, a [)$ plus the additional element $a$ which is now the leaf of the branch $]b,a[$. Then, $T_1'$ is a model of $\Sigma(]b, a [)_{1}$. If $M$ is countable, by $\aleph_0$-categoricity,
the $(n-1)$-colored good tree $T({\mathcal C})$ is isomorphic to $\Gamma(]b, a [)_{>1}$. Hence, $T({\mathcal H}) = T_1' \rtimes \Gamma(]b, a [)_{>1}$. In general, due to Proposition \ref{prop:better-axiomatization}, $T({\mathcal H}) \equiv T_1' \rtimes \Gamma(]b, a [)_{>1}$. Hence $T({\mathcal H})$ is a model of $\Sigma (]b,a[)$. \\
- Assume $(b)$. Take any model ${\mathcal G}$ of $\Sigma (]b,a[)$ and $d$ any node in the first level of ${\mathcal G}$. So ${\mathcal G}$ appears as the disjoint union of the pruned cone $\Gamma (]-\infty,d[)$ (considered in ${\mathcal G}$), $\{ d \}$ and $\mu_1$ cones at $d$, which are all models of $\Sigma (]b,a[)$ by Proposition \ref{precolored-n-colored2}. By Proposition \ref{precolored-n-colored2} again, $\Gamma (]-\infty,d[)$ is a model of $\Sigma (]b,a[)$. By hypothesis $(b)$, $T({\mathcal H})$ admits a similar decomposition with $a$ instead of $d$. Since $\Sigma (]b,a[)$ is complete, we are able to carry out an infinite back and forth between ${\mathcal G}$ and $T({\mathcal H})$. Hence $T({\mathcal H})$ is a model of $\Sigma (]b,a[)$. \\
- Assume $n=1$, so $\Sigma (]b,a[) = \Sigma_{m_1,\mu_1}$, and $(c)$. We argue similarly to Case (b). Take ${\mathcal G}$ any model of $\Sigma (]b,a[)$ and $d$ any node of ${\mathcal G}$. So ${\mathcal G}$ is the disjoint union of its pruned cone $\Gamma (]-\infty,d[)$, $\{ d \}$, $m_1$ leaves immediately above $d$ and $\mu_1$ inner cones at $d$. By Proposition \ref{precolored-n-colored2} these $\mu_1$ cones at $d$ are all models of $\Sigma_{m_1,\mu_1}$ and
$\Gamma (]-\infty,d[)$ is a model of $\Sigma (]b,a[)$. By hypothesis $(c)$, $T({\mathcal H})$ admits a similar decomposition with $a$ instead of $d$. Thus ${\mathcal G} \equiv T({\mathcal H})$. \\ - Finally, assume $n \geq 2$ and $(c)$. As above, take any model ${\mathcal G}$ of $\Sigma (]b,a[)$ and $d$ a node in the first level of $T({\mathcal G})$. So ${\mathcal G}$ is the disjoint union of its pruned cone $\Gamma (]-\infty,d[)$, $\{ d \}$, $m_1$ border cones at $d$ and $\mu_1$ inner cones at $d$. By Proposition \ref{precolored-n-colored2}, $\Gamma (]-\infty,d[)$ and inner cones at $d$ are models of $\Sigma(]b,a[)$ and border cones at $d$ are models of $\Sigma (]b,a[)_{>1}$. Again, by hypothesis $(c)$, $T({\mathcal H})$ admits a similar decomposition with $a$ instead of $d$, thus ${\mathcal G} \equiv T({\mathcal H})$. \\[2 mm]
Conversely, assume $T({\mathcal H})$ is an $n$-colored good tree model of $\Sigma(]b, a [)$. Since $]b,a[$ belongs to the first level of $T({\mathcal H})$, the color of $a$ is $(m_1, \mu_1)$ or $(m_2, \mu_2)$. \\
Assume first that the color of $a$ is $(m_1, \mu_1)$. Let $\Gamma(a, \alpha)$ be a cone at $a$, then either $\Gamma(a, \alpha)$ is an inner cone and its theory is $\Sigma(]b,a[)$, or $\Gamma(a, \alpha)$ is a border cone, model of $\Sigma(]b,a[)_{>1}$ if $n>1$ and consisting of a leaf otherwise (by Proposition \ref{precolored-n-colored2} again). \\
If $m_1 = 0$, then there are only inner cones at $a$, all models of $\Sigma(]b, a [)$, and we are in case (b).\\
If $m_1 \neq 0$, and $n =1$, the assertion is clear.\\
If $m_1 \neq 0$ and $n \geq 2$, then there are $m_1$ border cones at $a$, all models of $\Sigma(]b,a[)_{>1}$, and $\mu_1$ inner cones at $a$, all models of $\Sigma(]b,a[)$, and we are in case (c).\\ Assume now that the color of $a$ is $(m_2, \mu_2)$. Then, necessarily, for any leaf $\alpha$ of $T({\mathcal H})$ greater than $a$, $I_1(\alpha)$ is open on the right with upper bound $a$, hence the first level of $T({\mathcal H})$ is of type $(1.a)$. So $ m_1 = 0$, and $a$ is the root of an $(n-1)$-colored good tree model of $\Sigma(]b, a [)_{> 1}$. So we are in case (a). \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{V} Let $\Sigma \in S_n$ be a complete theory of $n$-colored good trees without root and $V$ a new unary predicate. Let ${\mathcal L}_1^V$ be the language $ {\mathcal L}_1 \cup \{V\}$ and $\Sigma^V$ be the ${\mathcal L}_1^V$-theory which consists of $\Sigma$ together with the axiom ${\cal V}$: $V$ is a ``branch'' (i.e. a maximal chain) in the first level of any (some) model of $\Sigma$ and $V$ has no leaf. Let $\wedge_V$ be the function $\wedge_V: x \mapsto x \wedge V$. Then the theory $\Sigma^V$ is complete, admits quantifier elimination in the language ${\mathcal L}_{n}^{V+} := {\mathcal L}_n^+ \cup \{V,\wedge_V\}$, and is $\aleph_0$-categorical. Its models have an indiscernible and $C$-minimal set of leaves. \end{lem} \pr Consistency of $\Sigma^V$: consider a tree $T= T_{1}\rtimes T_2 \rtimes \cdots \rtimes T_{n}$ model of $\Sigma$ with $T_{1}$ countable or finite. Since $T$ has no root, $T_{1}$ is not only infinite but has $2^{\aleph_{0}}$ branches. Hence $2^{\aleph_{0}}$ many of them have no leaf, which shows $\Sigma^V$ to be consistent. \\[2 mm] We first prove the Lemma for $n = 1$. \\ Let $\Sigma = \Sigma_{m,\mu} \in S_1$, $\mu \neq 0$, be a complete theory of $1$-colored good trees without root.
We will use a back and forth argument between finite ${\mathcal L}_{1}^{V +}$-substructures of any two countable models $T$ and $T'$ of $\Sigma^V$, as in the proof of Proposition \ref{prop:va et vient}. In what follows, Facts 1 to 6 refer to this proof. \\[1 mm]
{\bf Fact}: If $m=0$ complete quantifier free ${\mathcal L}_{1}^{V+}$-types of $\Sigma$ are: $x \in L$, $x \in V$, $x \in N \setminus V$. If $m \not= 0$ complete quantifier free ${\mathcal L}_{1}^{V+}$-types of $\Sigma$ are: $x \in L$ and $p(x) \in V$, $x \in L$ and $p(x) \not\in V$, $x \in V$, $x \in N \setminus V$. In both cases the ${\mathcal L}_{1}^{V+}$-substructure generated by a singleton $x$ is the smallest subset containing $x$, $p(x)$ and $x \wedge V$. \\ \pr If $x \notin L$, then $p(x) = x$. If $x \in L$, then $x \notin V$ and $p(x) \wedge V = x \wedge V$. Moreover, for all $n \in {\baton N}$, $p^n (x) = x $ or $p^n (x) = p(x)$. The fact is now clear. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\dashv$
\fi \\ This fact shows that the family of partial isomorphisms between finite substructures of $T$ and $T'$ is not empty. We now show it has the back and forth property. Let $A$ be a finite ${\mathcal L}_{1}^{V+}$-substructure of $T_{}$, and $\varphi$ be a partial ${\mathcal L}_{1}^{V+}$-isomorphism from $T_{}$ to $T'_{}$ with domain $A$. Let $x \in T \setminus A$. By Fact 1 there exists a node $n_x$ such that $x \wedge n_x$ is the maximal element of the set $\{x \wedge y; y \in A\}$.\\ 1. Assume first that $x \in V^{T_{}} \setminus A$; thus $x$ is not a leaf; since $n_x \leq x$, $n_x$ belongs to $V^{T_{}}$. Hence, as in Fact 2, since $A$ is an ${\mathcal L}_{1}^{V+}$-substructure, the ${\mathcal L}_{1}^{V+}$-substructure generated by $A$ and $x$, $\left\langle A \cup \{x\} \right\rangle_V$, is the minimal subset containing $A$, $x$ and $n_x$.\\
Assume furthermore that $x = n_x$, so $\left\langle A \cup \{x\}\right\rangle_V = A \cup \{x\}$. As in Fact 4, there exist $a \in A \cap V^{T_{}}$ and $b \in A \cup \{ - \infty \}$ such that $]b, a[ \cap A = \emptyset$ and $x \in ]b, a[$. Set $\varphi (- \infty) = - \infty$. Then $\varphi (b) < \varphi(a)$ and $]\varphi (b), \varphi(a)[$ is included in $V^{T'_{}}$. For any $x'$ in this interval, $A \cup \{x\}$ and $\varphi(A) \cup \{x'\}$ are isomorphic ${\mathcal L}_{1}^{V+}$-structures. We extend $\varphi$ to $x$ by sending it to $x'$. \\ Now, we can assume that $n_x \neq x$ and $n_x \in A$. Since $V$ has no leaf it is possible to find $x' \in V^{T'}$, $x' > \varphi(n_x)$. So $\left\langle A \cup \{x\}\right\rangle_V$ is ${\mathcal L}_{1}^{V +}$-isomorphic to $\left\langle\varphi(A) \cup \{x'\}\right\rangle_V$.\\ 2. Assume now that $x \in T \setminus (V^{T_{}} \cup A)$. By case 1 we may assume $x \wedge V \in A$, thus $x \wedge V \leq n_x$. So, the ${\mathcal L}_{1}^{V+}$-substructure $\left\langle A \cup \{x\} \right\rangle_V$ is the minimal subset containing $A$, $x$, $n_x$ and $p(x)$. If $x \wedge V < n_x$, none of $x, p(x), n_x$ touches $V$. We use quantifier elimination of $\Sigma$ in ${\mathcal L}_{1}^+$ to find $x' \in T'$ such that $(A,x,n_x)$ and $(A',x',n_{x'})$ have the same quantifier free ${\mathcal L}_{1}^+$-type. They must have the same quantifier free ${\mathcal L}_{1}^{V+}$-type. If $x \wedge V = n_x$ then $n_x \in A$ and we have an analogue of Fact 3 (with its corresponding proof): \\
Let $\Gamma$ be a cone at $a \in A$ such that $\Gamma \cap (A \cup V^T) = \emptyset$. Then there exists a cone $\Gamma'$ of $T'$ at $\varphi(a)$ such that $\Gamma' \cap (\varphi(A) \cup V^{T'}) = \emptyset$. Moreover, if $\Gamma$ is infinite, resp. consists of a single leaf, then there is such a $\Gamma'$ infinite, resp. consisting of a single leaf.\\ This allows us to find $x' \in T'$ such that $(A,x)$ and $(A',x')$ have the same quantifier free ${\mathcal L}_{1}^{V+}$-type, and completes the forth construction.
The back construction is the same.\\ So we have proven elimination of quantifiers in the language ${\mathcal L}_{1}^{V+}$, completeness and $\aleph_0$-categoricity. This completes the case $n = 1$. \\[2 mm]
General case. Let $(T,V)$ be a countable model of $\Sigma^V$. Then $T$ is an $n$-solvable good tree, so by Proposition 6.10, $T = T_{1}\rtimes T_{>1}$ where $T_1$ is a model of $\Sigma_1$ and $T_{>1}$ is an $(n-1)$-colored good tree model of $\Sigma_{>1}$. By $\aleph_0$-categoricity of $\Sigma$, $T_1$ is the unique countable model of $\Sigma_1$ and $T_{>1}$ is the unique countable or finite model of $\Sigma_{>1}$. Since $V$ is included in the first level $T_1$, $(T_1,V)$ is a model of $\Sigma_1^V := \Sigma_1\cup \{\cal V\}$, in fact the unique countable one by the case $n = 1$. Thus $(T,V)$ is the unique countable model of $\Sigma^V$. This proves $\aleph_0$-categoricity of $\Sigma^V$ and its completeness.\\
To prove that $\Sigma^V$ admits quantifier elimination in the language ${\mathcal L}_{n}^{V + }$,
we will proceed as in the proof of Proposition \ref{prop:better-axiomatization-qe}. \\ Take any finite tuple from $T$ and close it under $e_1$. Write it in the form $(x,y_1,\dots,y_{m})$ where $x$ is a tuple from $(E_1)_{\leq}$ and $y_1,\dots,y_{m}$ are tuples from $(E_1)_{>}$ such that all components of each $y_i$ have the same image under $e_1$, call it $e_1(y_i)$ (thus $e_1(y_1), \dots, e_1(y_m)$ are components of $x$), and $e_1(y_i) \not= e_1(y_j)$ for $i \not= j$. Take $(x',y'_1,\dots,y'_{m}) \in T$ having the same quantifier free ${\mathcal L}_{n}^{V + }$-type as $(x,y_1,\dots,y_{m})$. Since the complete theory $\Sigma_1^V$ eliminates quantifiers, $x$ and $x'$ have the same complete type in $(E_1)_{\leq}$, which embeds canonically in $ T_1$, and there exists an ${\mathcal L}_{1}^{V+}$-automorphism $\sigma$ of $T_1$ sending $x$ to $x'$. Since $\Sigma_{>1}$ eliminates quantifiers in ${\mathcal L}_{n-1} \cup \{p, D_p, F_p\}$, the rest of the proof runs similarly with $e_1$, $E_1$ instead of $e$ and $E$.\\ Indiscernibility and $C$-minimality of the set of leaves follow from quantifier elimination. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{interdit2} Let $a, b \in T(M)$, with $b < a$ and such that the interval $]b, a [$ is not empty, not a singleton and is dense. Assume that the canonical tree $\Gamma(]b, a [)$ of the pruned cone ${\mathcal C}(]b, a [)$ is an $n$-colored good tree with colors $(m_i,\mu_i)$ for $1 \leq i \leq n$ and that $]b, a [$ is contained in its first level. Let $\Sigma(]b, a [)^ V$ be the complete theory of the tree $\Gamma(]b,a[)$ enriched with $]b,a[$. Assume furthermore that there is $c \in T(M)$, $c>a$, such that $]a,c[$ is not empty and $(\Gamma(]a, c[),]a, c[)$ is a model of $\Sigma(]b, a [)^V$. Then $(\Gamma(]b, c[), ]b,c[)$ is a model of $\Sigma(]b, a [)^V$ iff there are exactly $m_1 + \mu_1$ cones at $a$ and, among those that do not contain $c$, $m_1$ are models of $\Sigma(]b, a [)_{>1}$ if $n>1$ (respectively $m_1$ are leaves if $n=1$) and $\mu_1 - 1$ are models of $\Sigma(]b, a [)$. \end{lem}
\pr
According to Lemma \ref{V}, $(\Gamma(]b, c[), ]b,c[) \models\Sigma(]b, a [)^V$ iff $[\Gamma(]b, c[) \models\Sigma(]b, a [)$ and $]b,c[$ lies in the first level of $\Gamma(]b, c[)]$. \\
Assume first that $(\Gamma(]b, c[),]b, c[)$ is a model of $\Sigma(]b, a [)^V$. Since $]b,c[$ is included in the first level of the tree $\Gamma(]b, c[)$ and $a<c$, the color of $a$ is $(m_1, \mu_1)$ and the cone of $c$ at $a$ is one of the $\mu_1$ inner cones at $a$. Now all inner cones at $a$ are models of $\Sigma(]b, a[)$, and all border cones at $a$ are models of $\Sigma(]b, a [)_{>1}$.\\%[2 mm]
For the converse we argue as in the proof of Lemma \ref{interdit1}, case (c). Take any model $({\mathcal G},V)$ of $\Sigma(]b, a [)^V$ and $v \in V$. So ${\mathcal G}$ is the disjoint union of the pruned cone $\Gamma (]-\infty,v[)$ (considered in ${\mathcal G}$), $\{ v \}$, $m_1$ border cones at $v$ and $\mu_1$ inner cones at $v$; call ${\mathcal C}$ the inner cone intersecting $V$ non-trivially. Now, both $(\Gamma (]-\infty,v[), ]-\infty,v[)$ and $({\mathcal C}, {\mathcal C} \cap V)$ are models of $\Sigma(]b, a [)^V$, inner cones at $v$ are models of $\Sigma(]b, a [)$ and border cones at $v$ are models of $\Sigma(]b, a [)_{>1}$. By the hypotheses, there exists a similar decomposition of $\Gamma (]b,c[)$ with $a$ in place of $v$. All involved theories are complete, which makes it possible to carry out an infinite back and forth between $(\Gamma (]b,c[), ]b,c[)$ and $({\mathcal G},V)$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\subsection{The labeled tree $\bar{\Theta}$}
The automorphism group of ${\mathcal M}$ acts on $\Theta$. Let $\overline \Theta := \{ A_1,\dots,A_s \}$ be the set of orbits of elements from $\Theta$. Each $A_i$ is a finite $\emptyset$-definable antichain of $T^\ast$.
\begin{defi} For $A$ and $B$ antichains in $T^\ast$, let us define: \\ - the relation $A < B \ : \ \Longleftrightarrow \ \forall a \in A, \ \exists b \in B, \ a < b$ and $\forall b \in B, \ \exists a \in A, \ a<b$ (given $b$, this $a$ is unique); \\ - for $A$ and $B$ (finite) antichains in $T^\ast$ such that $A<B$ and, for any $a \in A$ and $b,c \in B$ with $a<b$ and $a<c$, either $b=c$ or $a=b \wedge c$, we define $]A,B[$ as the (definable) subset of $M$ consisting of the union of cones of elements from $B$ at nodes from $A$, with the thick cones at nodes from $B$ removed.
We extend this notation to $]\{-\infty\},A[$, written also $]-\infty,A[$, which will denote the complement of the union of thick cones at all $a \in A$.
\end{defi}
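In the simplest case the second definition reduces to a pruned cone. The following sketch, with hypothetical singleton antichains $A=\{a\}$ and $B=\{b\}$, $a<b$, makes this explicit (the condition on meets is then vacuous):

```latex
% Sketch (hypothetical instance): A = {a}, B = {b} with a < b.
% The cone of b at a is {x in M : x /\ b > a}, the thick cone at b is
% {x in M : x /\ b = b}; removing the latter from the former gives
\[
]\{a\},\{b\}[ \;=\; \{\, x \in M : x \wedge b > a \,\} \setminus \{\, x \in M : x \wedge b = b \,\}
\;=\; \{\, x \in M : a < x \wedge b < b \,\},
\]
% i.e. the underlying set of the pruned cone C(]a,b[).
```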
\begin{lem}\label{fact2} Let $A$ and $B$ be in $\overline \Theta$. Then \\ - if there are $a \in A$ and $b \in B$ with $a<b$ (or $a=b$) then $A<B$ (or $A=B$). \\ - $(\overline \Theta,<)$ is a finite meet-semi-lattice tree; its root, say $A_0$, is a singleton (either $\{r\}$ if $r$ is a root of $T$, or $\{-\infty\}$). This allows us to define the predecessor $A^-$ of an element $A \not= A_0$ of $\overline \Theta$.
\\ - If $A<B$ there is $k \in {\mathbb N}^{\geq 1}$ such that each $a \in A$ is smaller than exactly $k$ elements from $B$. \\ - If $A=B^-$, $a \in A, b,c \in B, a<b, a<c, b \not= c$ then $a = b \wedge c$. \end{lem} \pr By construction all elements of $A$ have the same type in ${\cal M}$. Now $B$ is $\emptyset$-definable, thus if for some $a \in A$ there is $b \in B$ such that $a < b$, the same is true for any $a \in A$. For the same reason, if for some $b \in B$ there is $a \in A$ such that $a < b$, it is true for any $b \in B$. The same holds with $a = b$ instead of $a < b$. This shows the first assertion.\\ The next two assertions are clear.\\ As for the last one: by construction, $b \wedge c \in \Theta$, thus $b \wedge c$ belongs to some element of $\overline \Theta$, which must be $A$, since $A = B^-$ and $a \leq b \wedge c$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
\noindent We now aim to collect on $\overline\Theta$ and the indiscernible blocks ${M}_i$ enough information to be able to reconstruct
${\mathcal M}$ from them.
To each $A \in \overline \Theta$, associate\\ - its cardinality $n_A$;
\\ - an integer $s_A$, complete theories $\Sigma_{A,1}, \dots,\Sigma_{A,s_A}$ in ${\cal L}_1$, all different, and coefficients $k_{A,1}, \dots,k_{A,s_A} \in \mathbb N^{\geq 1} \cup \{ \infty \}$ such that, at each $a \in A$, there are exactly $k_{A,1}+ \dots+k_{A,s_A}$ cones containing no branch from $\Theta$, $k_{A,1}$ of which are models of $\Sigma_{A,1}$, \dots, and $k_{A,s_A}$ of which are models of $\Sigma_{A,s_A}$ (we are here applying Ryll-Nardzewski again); \\ - if $A \not= A_0$, $]A^-,A[ \not= \emptyset$, $b \in A^-$, $a \in A$ and $b<a$, the complete ${\cal L}_1$-theory $\Sigma_{(A^-,A)}$ of $\Gamma(]b,a[)$. \\[2 mm]
We consider the $s_A$, $\Sigma_{A,i}$ and $k_{A,i}$ (respectively the $\Sigma_{(A^-,A)}$) as labels on the vertices (respectively the edges) of $\overline \Theta$ and $ \Theta$, and the $n_A$ as labels on the vertices of $\overline \Theta$. The $\Sigma_{A,i}$ (respectively $\Sigma_{(A^-,A)}$) may also be understood as indexing those cones at any/some $a \in A$ (respectively pruned cones $\Gamma(]b,a[)$ for $b \in A^-$, $a \in A$, $b<a$) which are models of it.
\begin{lem}\label{exremarques} \begin{enumerate} \item Assume $A \not= A_0$. There is no theory $\Sigma_{(A^-,A)}$ labeling $(A^-,A)$ iff $]A^-,A[ = \emptyset$. \item For $A \in \overline \Theta$ and any/some $a \in A$, $\Theta$ has a unique branch at $a$ iff there is a unique $B \in \overline \Theta$ such that $B^-=A$, and furthermore $n_A=n_B$ holds. \item $T^* \not= T$ iff $s_{A_0}=0$, $A_0$ has a unique successor in $\overline \Theta$, say $B$, and $n_B=1$. \end{enumerate} \end{lem}
\pr (1) holds by definition of the labels of $\overline \Theta$. \\ (2) is clear.\\ (3) The ``only if'' direction is clear. Let us prove the ``if'' direction. The unique element, say $a_0$, of $A_0$ is either $- \infty$ or the root of $T$. If $A_0$ has a successor, $a_0$ is not a leaf, and if it is different from $- \infty$ it must be a branching point of $T$. Now the hypotheses force $\overline \Theta$ to have a unique branch at its root. Therefore $a_0 = - \infty$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
The next lemma gives a list of constraints.
\begin{lem}\label{premierescontraintes}
Let $A_0$ and $A \in \overline \Theta$, $A_0$ the root of $\overline \Theta$.
\begin{enumerate}[label=(\arabic*)] \item If $A \not= A_0$, $n_{A^-}$ divides $n_{A}$; $n_{A_0}=1$. \item If $A$ is maximal in $\overline \Theta$, then either $s_A =0$, or $\sum_{1 \leq i \leq s_A} k_{A,i} \geq 2$.
\item
If $-\infty $ exists and $B \in \overline \Theta$ is such that $B^-=A_0$, then $]A_0,B[ \not=\emptyset$. \item
If $\Theta$ has a unique branch at any/some $a \in A$, and $A \not= \{ -\infty \} $ if $-\infty $ exists, then $s_A \geq 1$.
\item Assume $A \neq A_0$, $a \in A$, $b \in A^-$, $b < a$. If $]A^-,A[$ is not empty, then the theory of $\Gamma(]b,a[)$ considered as an ${\cal L}_1^V$-structure with $V = ]b,a[$ is a theory of a colored good tree enriched with a branch without leaf, as described in Lemma \ref{V}.
\item Assume $s_A \not = 0$. Then at most one $k_{A,i}$ is infinite and the $\Sigma_{A,i}$ are complete theories of colored good trees.
\item Assume $A$ maximal in $\overline \Theta$, $A$ not the root of $\overline \Theta$. If $]A^-,A[$ is empty then $s_A \geq 2$.
\item Theories $\Sigma_{A,1}, \dots,\Sigma_{A,s_A}$ are all different.
\item Assume $A$ maximal in $\overline \Theta$, $A$ not the root of $\overline \Theta$ and such that $]A^-,A[$ is not empty. Assume that models of $\Sigma_{(A^-,A)}$ are $n$-colored trees with colors $(m_i,\mu_i)$ for $1 \leq i \leq n$. Then, none of the following situations can appear: \begin{enumerate} \item $m_1 = 0$, $n \geq 2$, $s_A = 1$, $\Sigma_{A,1} = (\Sigma_{(A^-,A)})_ {>1}$ and $ k_{A,1} = m_2$. \item $m_1 = 0$, $s_A = 1$, $\Sigma_{A,1} = \Sigma_{(A^-,A)}$ and $k_{A,1} = \mu_1$. \item $m_1 \neq 0$, $\mu_1 \neq 0$, $n = 1$, $s_A = 2$, $\Sigma_{A,1} = \Sigma_{(A^-,A)}$, $k_{A,1} = \mu_1$, $\Sigma_{A,2} = \Sigma_{(0,0)}$ (i.e. the theory of a tree consisting only of a leaf) and $k_{A,2} = m_1$.\\ $m_1 \neq 0$, $\mu_1 \neq 0$, $n \geq 2$, $s_A = 2$, $\Sigma_{A,1} = \Sigma_{(A^-,A)}$, $k_{A,1} = \mu_1$, $\Sigma_{A,2} = (\Sigma_{(A^-,A)})_ {>1}$, and $k_{A,2} = m_1$. \end{enumerate}
\item Assume $A$ not maximal, not the root of $\overline \Theta$ and such that $]A^-,A[$ is not empty. Assume furthermore that models of $\Sigma_{(A^-,A)}$ are $n$-colored trees with colors $(m_i,\mu_i)$ for $1 \leq i \leq n$. Then the conjunction of the following conditions cannot occur:\\ - at least one edge of $\overline \Theta$ starting at $A$ has a label, \\ - if $B$ is the successor of $A$ on such an edge, the label of $(A,B)$ is $\Sigma_{(A^-,A)}$,\\ - either [$m_1 \geq 1$, $\mu_1 \geq 2$, $s_A = 2$, $\Sigma_{A,1} = \Sigma_{(A^-,A)}$, $k_{A,1} = \mu_1 - 1$ and $\Sigma_{A,2} = (\Sigma_{(A^-,A)})_{>1}$] or [$m_1 = 0$, $s_A = 1$, $\Sigma_{A,1} = \Sigma_{(A^-,A)}$ and $k_{A,1} = \mu_1$] or [$\mu_1 = 1$, $s_A = 1$, $\Sigma_{A,1} = (\Sigma_{(A^-,A)})_{>1}$ and $k_{A,1} = m_1$].
\setcounter{saveenum}{\value{enumi}} \end{enumerate}
\end{lem}
\pr (1) $n_{A^-}$ divides $n_{A}$ by indiscernibility of elements from $A$. It has already been noticed in Fact \ref{fact2} that $A_0$ is a singleton. \\ (2) If $A$ is maximal in $\overline \Theta$, either any $a \in A$ is a leaf of $T(M)$ and then $s_A =0$, or any such $a$ is a node in $T(M)$ through which no branch of $\Theta$ goes and then $\sum_{1 \leq i \leq s_A} k_{A,i} \geq 2$. \\ (3) If $- \infty$ exists, no branch of $T$ has a first element. \\ (4) Indeed $a$ must be a node in $T(M)^\ast$. \\
(5) This is Lemma \ref{premier-intervalle2}. \\
(6) At most one $k_{A,i}$ is infinite by strong minimality of the node $a$, for any $a \in A$. Cones of $M$ are $C$-minimal by Proposition \ref{induiteCmin} and $\aleph_0$-categorical since they are definable in ${\cal M}$. The cones considered here are furthermore indiscernible by construction, so their canonical trees are colored good trees by Theorem \ref{tous pareils}. \\ (7) This is a reformulation of Lemma \ref{aumoins2}. \\ (8) By construction. \\ (9)
The situation has already been set out in Lemma \ref{interdit1}, which we apply here with $b \in A^-$, $a \in A$ and ${\mathcal C}$ the thick cone at $a$. In this way $T({\mathcal C}(]b, a [) \cup {\mathcal C})$ becomes the cone $\Gamma(b, a)$ of $a$ at $b$. Condition (8) prevents ${\mathcal C}(b \wedge a,a)$ from being a model of $\Sigma_{(A^-,A)}$, hence indiscernible: were it the case, ${\mathcal C}(b \wedge a,a)$ would also be indiscernible in $M$, contradicting the maximal indiscernibility of (the orbit of) ${\mathcal C}(]b, a [)$. \\ (10) Follows from Lemma \ref{interdit2}. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
A last constraint is given by the next proposition.
\begin{prop}\label{10} \begin{enumerate}[label=(\arabic*)] \setcounter{enumi}{\value{saveenum}}
\item
The tree $\overline \Theta$ labeled with coefficients $n_A$, $s_A$, $k_{A,i}$ and theories $\Sigma_{A,i}$ (and $\Sigma_{(A^-,A)}$) on its vertices (and edges) has no non-trivial automorphism.
\end{enumerate} \end{prop} By construction, two elements from $\Theta$ having the same type in ${{\mathcal M}}$ are identified in $\overline \Theta$. Thus, to prove the above proposition it is enough to show that, if ${\mathcal M}$ is the countable model, then any automorphism of $\overline \Theta$ lifts to an automorphism of ${{\mathcal M}}$. This proof requires some new tools, which we introduce now.
\subsection{Connection and sticking}
\subsubsection{Connection $\sqcup$ of $C$-structures.}\label{conn}
Let $k_i$, $i \in I$, be cardinals such that $\sum_{i \in I} k_i > 1$ and let ${{\cal H}}_i$, $i \in I$, be $C$-structures. The underlying set of the {\it connection} ${\cal H} := \bigsqcup_{i \in I} {\cal H}_i.k_i$ is the disjoint union of $k_i$ copies of $H_i$, $i \in I$. Its canonical tree is the disjoint incomparable union of $k_i$ copies of $T( H_i)$, $i \in I$, plus an additional root, say $r$; that is: for $a,b \in T( H)$, $a \leq b$ in $T( H)$ iff $a=r$, or $a$ and $b$ are in a same copy of $T( H_i)$ for some $i$ and $a \leq b$ in this $T(H_i)$. For $i \in I$, we call $H_{i,j}$, $j \in k_i$, the different copies of $H_i$ canonically embedded in $H$. \\[2 mm]
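The definition of the canonical tree of a connection can be summarized as follows (a sketch; $r$ and the copies $T(H_{i,j})$ are as above, and the numerical instance in the comment is hypothetical):

```latex
% Sketch: the canonical tree of a connection is a new root r below the
% pairwise incomparable copies T(H_{i,j}).
% Hypothetical instance: if I = {1,2}, k_1 = 2, k_2 = 3 and each H_i is a
% single leaf, then T(H) is a root r with k_1 + k_2 = 5 leaves above it.
\[
T\Big(\bigsqcup_{i\in I}\mathcal{H}_i.k_i\Big)
\;=\; \{r\}\ \dot\cup\ \dot\bigcup_{i\in I}\ \dot\bigcup_{j\in k_i} T(H_{i,j}),
\qquad r < T(H_{i,j}) \ \text{for all } i,j.
\]
```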
{\bf Language}: Assumptions are as follows. Each ${\cal H}_i$ is a $C$-structure in the language ${\cal L}({\cal H}_i)$.
The structure on ${{\mathcal H}}_i$ is in fact given via its canonical tree: each $T({\mathcal H}_i)$ is a structure in a language ${\cal L}(T({\mathcal H}_i))$ such that ${\cal L}(T({\mathcal H}_i)) \setminus {\mathcal L}_1$ consists only of predicate or unary function symbols. Among predicates are $D_f$ and $F_f$ for each unary function $f \in {\cal L}(T({\mathcal H}_i))$ and the interpretation of the triple $(f,D_f,F_f)$ in $T({\mathcal H}_i)$ is required to satisfy Conditions $(4\ast)$ of section \ref{language of extension}. \\
The different languages ${\cal L}(T({\mathcal H}_i)) \setminus {\mathcal L}_1$, $i \in I$, are disjoint. \\[2 mm]
We consider $T({H})$ in the language $$ {\cal L}(T({{\mathcal H}})) := {\cal L}_1 \dot \cup \{ T_i ; i \in I \} \dot\cup \dot\bigcup_{i \in I} ({\cal L}(T({\mathcal H}_i)) \setminus {\cal L}_1)
\dot \cup \{ E_r \}$$ where each $T_i$ is a unary predicate for the union of the $k_i$ copies $T(H_{i,j})$ of $T(H_i)$,
$E_r$ is a unary predicate interpreted as $\{ r \}$ if $r$ is the root of $T(H)$
and ${\cal L}(T({\cal H}_i)) \setminus {\cal L}_1$ is interpreted in $T({\mathcal H})$ as described now. On each $T(H_{i,j})$, $j \in k_i$, ${\cal L}(T({\cal H}_i))$ has its natural interpretation. We interpret it ``trivially'' outside of the $T(H_{i,j})$: a unary function of ${\cal L}(T({\mathcal H}_i))$ is defined as the identity outside of $T_i$ and an $n$-ary predicate
is taken to be empty outside of $\bigcup _{j \in k_i} T_{i,j}^n$.
Note that each $T({\mathcal H}_{i})$ is an ${\cal L}(T({\mathcal H}))$-substructure of $T({\mathcal H})$. \\
We set $L_i := T_i \cap L$; that is, $L_i$ is a predicate for the subset $\bigcup_{j \in k_i} H_{ij}$ of $H$.
\begin{lem}\label{connection} If $I$ is finite then $\bigsqcup_{i \in I} {\cal H}_i.k_i$ is completely axiomatized by the axioms and axiom schemes expressing, for each $i \in I$: \begin{enumerate} \item [1.] $C$-structure with a root, say $r$, in its canonical tree; $E_r = \{ r \}$; \item [2.] $ \forall x \; ( \bigvee_{k \in I} L_k(x))$ and $ \forall x \; ( L_i(x) \rightarrow \bigwedge_{j \neq i} \neg L_j (x))$; \item [3$_i$.] \ $L_i$ is a union of cones at $r$; \item [4$_i$.] $L_i$ has exactly $\overline k_i$ cones at $r$, where $\overline k_i \in \mathbb N \cup \{ \infty\}$ and $\overline k_i = k_i$ iff $k_i \in \mathbb N$; \item [5$_i$.]
$(x \not\in D_f \rightarrow f(x)=x)$
and $(x \in D_f \rightarrow r < f(x) \leq x)$, for any unary function $f \in {{\mathcal L}}(T({\mathcal H}_i))$;
\item [6$_i$.] $R \subseteq T_i^n$ and $\neg R(x)$ for any tuple $x$ having among its coordinates $y$ and $z$ such that $E_r(y \wedge z)$, for any $n$-ary predicate $R \in {\cal L}(T({\mathcal H}_i)) \setminus {\cal L}_1$;
\item [7$_i$.] by axioms 5$_i$, for any cone ${\mathcal C}$ at $r$, $T({\mathcal C})$ is an ${{\mathcal L}}(T({\mathcal H}_i))$-substructure for any $i \in I$; if ${\mathcal C}$ is contained in $L_i$, then $T({\mathcal C})$ is required to be elementarily equivalent to $T({\mathcal H}_i)$ as an ${{\mathcal L}}(T({\mathcal H}_i))$-structure. \end{enumerate}
If for any $i \in I$, $T({\mathcal H}_i)$ eliminates quantifiers (respectively is $\aleph_0$-categorical), then $T(\bigsqcup_{i \in I} {\mathcal H}_i.k_i)$ has the same property. \end{lem}
\pr The three results, completeness, transfer of quantifier elimination, and transfer of $\aleph_0$-categoricity, are proved using a back-and-forth argument. Axioms 1 to 6 imply that: \\ - $\{ r \}$ is an ${{\mathcal L}}(T({\mathcal H}))$-substructure with a uniquely determined isomorphism type \\ - any cone at $r$ in $T(H)$ is an ${{\mathcal L}}(T({\mathcal H}))$-substructure (due to axioms 5)\\ - there is no interaction between these cones or $\{ r \}$ via predicates or functions from ${{\mathcal L}}(T({\mathcal H})) \setminus {\mathcal L}_1$ (due to axioms 6, indeed $x$ and $y$ are in different cones at $r$ exactly when $x \wedge y = r$). \\ Consequently the ${{\mathcal L}}(T({\mathcal H}))$-structure of the canonical tree of a model is completely determined by its restrictions to cones at $r$. To prove the lemma, we consider first the case where $I$ is a singleton:
\begin{claim} \label{qe} Assume $k_i >1$. Then the theory given by axioms $1$, $3_i$, $4_i$, $5_i$, $6_i$ and $7_i$ completely axiomatizes ${\cal H}_i.k_i$. If ${\cal H}_i$ is $\aleph_0$-categorical, so is $ {\cal H}_i.k_i$. If $T({\mathcal H}_i)$ eliminates quantifiers in ${\mathcal L} (T({\mathcal H}_i))$, so does $T( {\mathcal H}_i.k_i)$ in ${\mathcal L} (T({\mathcal H}_i)) \cup \{ E_r \}$. \end{claim}
\pr For any model $M$ of this theory, $T(M)$ is the disjoint union of $\{ r \}$ and $\overline k_i$ cones at $r$, all elementarily equivalent to $T( H_i)$ as ${{\mathcal L}}(T(H_i))$-structures. Take two $\aleph_0$-saturated models $M$ and $N$ of this theory and a finite tuple $x$ from $T(M)$. By the considerations above we may assume $x$ contains $r$ and thus decomposes as
$x=(r, x_1,...,x_n)$ with $x_i$ a tuple consisting of elements all in the same cone at $r$, and $x_i$ and $x_j$ in different cones for $i \not= j$. Two elements $y$ and $z$ are in the same cone at $r$ iff $\neg E_r(y \wedge z)$. Consequently a tuple $y \in T(N)$ with the same quantifier-free type over $\emptyset$ as $x$ decomposes in the same way $y = (r,y_1,...,y_n)$. Let $a \in T(M)$ be a single element. Assume first $a$ is in the same cone $\Gamma$ at $r$ as, say, $x_1$. Since $\Gamma$ has the same theory as $T(H_i)$ it eliminates quantifiers and there is $b \in T(N)$ in the cone of $y_1$ at $r$ such that $(x_1,a)$ and $(y_1,b)$ have the same quantifier-free type in this cone (type in the theory of $T(H_i)$). If $a$ is in the cone at $r$ of none of the $x_i$, then, as the number of such cones is $\overline k_i$ in both $M$ and $N$, there exists $b \in T(N)$ in none of the cones of the $y_i$ with the same quantifier-free type as $a$. In both cases $(x,a)$ and $(y,b)$ have the same quantifier-free type in $T(M)$. \relax\ifmmode\eqno\dashv\else\mbox{}\quad\nolinebreak
$\dashv$
\fi\\[2 mm]
An arbitrary model $M$ of the axioms of Lemma \ref{connection} is of the form $\bigsqcup_{i \in I} L_i (M)$ with $L_i (M) \equiv {\cal H}_i.k_i$, by the case where $I$ is a singleton, or trivially if $k_i =1$. A finite tuple $x$ from $T(M)$ containing $r$ may be uniquely written $x=(r,(x_i)_{i \in I})$ with $x_i$ a finite tuple in $T_i(M) \setminus \{ r \}$. Another tuple $y$ in a model with the same quantifier-free type as $x$ is of the form $(r,(y_i)_{i \in I})$ with $y_i$ in $T_i$ with the same type as $x_i$.
As in the proof of the previous claim we can carry out an infinite back-and-forth between two $\aleph_0$-saturated models. We argue with complete types for each component in some $T_i$ to prove completeness, and with complete quantifier-free types to transfer quantifier elimination, using the claim above, or direct quantifier elimination in $H_i$ if $k_i = 1$. The transfer of $\aleph_0$-categoricity is clear. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{lem}\label{connection-minimality} Assume $I$ is finite and that $k_i$ is infinite for at most one $i \in I$, say $i_0$. \\ If all $T({\mathcal H}_i)$ are pure trees and $T({\mathcal H}_i) \not\equiv T({\mathcal H}_k)$
for $i \not= k$ in $I$, then $\bigsqcup_{i \in I} {\cal H}_i . k_i$ is a pure tree too. \\ If $T({\mathcal H}_{i_0})$ is a pure colored good tree and
for any $i \in I$, ${\cal H}_i$ is $C$-minimal, then $\bigsqcup_{i \in I} {\cal H}_i . k_i$ is $C$-minimal too. \end{lem} \pr We extend each language ${\mathcal L} (T( {\mathcal H}_i))$ with new relations to get quantifier elimination in $T( {\mathcal H}_i)$. By Lemma \ref{connection}, $T({\mathcal H})$ eliminates quantifiers in $ {\cal L}(T({\mathcal H}))$. This shows that definable subsets of a model are Boolean combinations of definable subsets of the $L_i$. Since $I$ and all $k_i$ except at most one are finite, each $L_i$ is a finite union of cones at $r$ or the complement of such a union. \\ This shows $L_i$ is quantifier-free definable with the pure $C$-relation and parameters. The condition ``$T({\mathcal H}_i) \not\equiv T({\mathcal H}_k)$'' provides a definition without parameters. \\ Since the $L_i$ are quantifier-free definable with the pure $C$-relation, ${\mathcal H}$ is $C$-minimal if all $L_i$ are. Let us prove $L_i$ is $C$-minimal. For $i \not= i_0$ the argument is the same as just used: since $k_i$ is finite, definable subsets of $L_i$ are Boolean combinations of definable subsets of its cones at $r$. As these cones are $C$-minimal (by Proposition \ref{induiteCmin}), $L_i$ is $C$-minimal too. For $i= i_0$ with $k_{i_0}$ infinite, consider on the canonical tree $T_{i_0}$ of $L_{i_0}$ the singleton $E := \{ r \}$, $e$ the constant function sending $T_{i_0}$ to $r$ and $\sim$ the equivalence relation defined as in Lemma \ref{ouf}, case (2). Now Proposition \ref{prop:better-axiomatization} applies and shows that, if $T(H_{i_0})$ is an $n$-colored good tree and $X$ is a 1-colored good tree of color $(\infty,0)$, then $T_{i_0} \equiv X \rtimes T(H_{i_0})$ as pure trees. Thus $T_{i_0}$ is a pure $(n+1)$-colored good tree, hence its set of leaves is $C$-minimal.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\subsubsection{Sticking $\triangleleft$ in a pruned cone $M$ of a $C$-structure $\cal C$ whose canonical tree has a root }
Let two $C$-structures be given: first $\cal C$, which has a root in its canonical tree, and then ${\mathcal M} := (M,V)$, where $V$ is a branch without leaf from $T(M)$. We define the $C$-structure ${\mathcal M} \triangleleft \; \cal C$, the {\it sticking of ${\mathcal C}$ into} $(M,V)$. The underlying set of ${\mathcal M} \triangleleft \; \cal C$ is the disjoint union $M \dot \cup \cal C$, its canonical tree the disjoint union $T(M) \dot \cup T(\cal C) $ equipped with the unique order extending those of $T(M)$ and $T({\cal C}) $ such that
$V = \{ t \in T(M) ; t < T(\cal C) \}$.
\\[2 mm]
{\bf Canonicity}: ${\mathcal M} \triangleleft \; \cal C$ is the unique $C$-set which is the union of $M $ and $\cal C$ and where $\cal C$ becomes a thick cone with basis the supremum of $V$. \\[2 mm]
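Spelled out, the order just defined on $T(M) \dot\cup T({\mathcal C})$ is the following (a sketch restating the definition above):

```latex
% The order on T(M) u T(C): both original orders are kept, no element of
% T(C) lies below an element of T(M), and an element s of T(M) lies below
% all of T(C) exactly when s belongs to the branch V.
\[
s \leq t \iff
\begin{cases}
s \leq_{T(M)} t & \text{if } s,t \in T(M),\\
s \leq_{T(\mathcal{C})} t & \text{if } s,t \in T(\mathcal{C}),\\
s \in V & \text{if } s \in T(M),\ t \in T(\mathcal{C}).
\end{cases}
\]
```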
{\bf Language}: As in the previous subsection, we assume additional structure is given on the canonical trees by languages ${\cal L}(T({\mathcal M}))$ and ${\cal L}(T({\mathcal C}))$, which are such that ${\cal L}(T({\mathcal M})) \setminus {\mathcal L}_1$ and ${\cal L}(T({\mathcal C})) \setminus {\mathcal L}_1$ consist only of predicate or unary function symbols. Among predicates of ${\cal L}(T({\mathcal M})) \setminus {\cal L}_1$ there is $V$. Among predicates are $D_f$ and $F_f$ for each unary function $f \in {\cal L}(T({\mathcal M}))$ or ${\cal L}(T({\mathcal C}))$ and the interpretation of the triple $(f,D_f,F_f)$ in $T(M)$ is required to satisfy Conditions $(4\ast)$ of section \ref{language of extension}. \\[2 mm]
We consider ${\mathcal M} \triangleleft \; \cal C$ in the language
$$ {{\mathcal L}}(T({\mathcal M} \triangleleft \; {\cal C})) := {\cal L}_1 \dot \cup \{ E_a, E_{\geq a}, G_a \} \dot\cup ({\cal L}(T({\mathcal M})) \setminus {\cal L}_1 ) \dot\cup ({\cal L}(T( {\cal C})) \setminus {\cal L}_1 ) \dot\cup \{ \wedge_{V} \}$$
where $E_a, E_{\geq a}$ and $G_a$ are unary predicates for the elements of, respectively, the singleton consisting of the basis, call it $a$, of the thick cone $ \cal C$, $ \cal C$ and $M$; ${\cal L}(T({\mathcal M}))$ and ${\cal L}(T(\cal C))$ are naturally interpreted in $T(M)$ and $T( \cal C)$ respectively, and then trivially (see below) outside of $T(M)$ and $T( \cal C)$ respectively; $ \wedge_{V}$ is the unary function sending a point $x \in T(M)$ to $x \wedge V$ and the identity on $T({\mathcal C})$.
\begin{lem}\label{sticking}
${\mathcal M} \triangleleft \; \cal C$ is completely axiomatized by the axioms and axiom schemes expressing \begin{enumerate} \item $C$-set \item $E_{\geq a}$ is a thick cone in the canonical tree, call $a$ its basis \item $E_a$ is the singleton $\{ a \}$ \item $G_a$ is the complement of $E_{\geq a}$ \item $V = \{ x \in G_a ; x < a \}$ \item $G_a(x) \rightarrow \wedge_V(x) = x \wedge V$; $E_{\geq a}(x) \rightarrow \wedge_V(x) = x$\ \item $x \not\in D_f \rightarrow f(x)=x$ for any unary function $f \in {{\mathcal L}}(T({\mathcal M} \triangleleft \; \cal C))$ \item
$x \in D_f \rightarrow a \leq f(x) \leq x$, for any unary function $f \in {{\mathcal L}}(T({{\mathcal C}}))$; $\neg R(x)$ for any tuple $x$ having some coordinate in $G_a$ and any predicate $R \in {\cal L}(T({{\mathcal C}})) \setminus {\cal L}_1$ \item by axioms 7 and 8, $E_{\geq a}$ is an ${{\mathcal L}}(T({{\mathcal C}}))$-substructure; it is required to be elementarily equivalent to $T({{\mathcal C}})$ \item
$x \in D_f \rightarrow f(x) \leq x$, for any unary function $f \in {{\mathcal L}}(T({\mathcal M}))$; $\neg R(x)$ for any tuple $x$ having some coordinate in $E_{\geq a}$ and any predicate $R \in {\cal L}(T({\mathcal M})) \setminus {\cal L}_1$ \item by axioms 7 and 10, $G_a$ is an ${{\mathcal L}}(T({\mathcal M}))$-substructure; it is required to be elementarily equivalent to $T({{\mathcal M}})$. \end{enumerate}
If $T({\mathcal M})$ and $T(\cal C)$ eliminate quantifiers, or are $\aleph_0$-categorical, then $T({\mathcal M} \triangleleft \; \cal C)$ has the same property. If ${\mathcal M}$ and $\cal C$ are $C$-minimal then ${\mathcal M} \triangleleft \; \cal C$ has the same property. \end{lem} \pr The axioms imply that any model has a canonical tree of the form $G_a \triangleleft \; E_{\geq a}$, with the interpretation of the language we have considered. Consequently it is easy to carry out an infinite back and forth between two $\aleph_0$-saturated models. This shows all assertions except $C$-minimality. By transfer of quantifier elimination, $G_a$ and $E_{\geq a}$ are stably embedded in $G_a \triangleleft \; E_{\geq a}$. Since the set of leaves of $G_a \triangleleft \; E_{\geq a}$ is the union of those of $G_a$ and $E_{\geq a}$, ${\mathcal M} \triangleleft \; \cal C$ is $C$-minimal if
${\mathcal M}$ and $\cal C$ are. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\subsection{Proof of Proposition \ref{10} and reconstruction of $\cal M$ from $\overline \Theta (\cal M)$}
Consider a finite meet-semi-lattice tree ${\Xi}_0$, $A_0$ its root and $A \in \Xi_0 \setminus \{A_0\}$. Vertices and edges are labeled as follows. \\ - All vertices are labeled. Labels of a vertex $A \in {\Xi}_0$ are of several types: two integers $n_A \geq 1$ and $s_A$, cardinals $k_{A,1}, \dots,k_{A,s_A} \in \mathbb N^{\geq 1} \cup \{ \infty \}$ and complete ${\cal L}_1$-theories $\Sigma_{A,1}, \dots,\Sigma_{A,s_A}$ which are not, at this point, assumed to be all different. \\ - Some edges are labeled by a complete ${\cal L}_1$-theory. For $A \not= A_0$, the complete ${\cal L}_1$-theory possibly labeling $(A^-,A)$ is denoted $\Sigma_{(A^-,A)}$. \\
We must now reformulate conditions $(1)$ to $(10)$ of Lemma \ref{premierescontraintes}, and $(11)$ of Proposition \ref{10} in terms of meet-semi-lattice and labels only. For example, due to Lemma \ref{exremarques}, the condition ``$- \infty$ exists in $T(M)$'' will be replaced by ``$s_{A_0} = 0$, $A_0$ has a unique successor (in $\Xi_0$), say $B$ and $n_B = 1$''. So conditions $(1'), (2'),(3'), (6'), (7'), (8'), (9')$ and $(10')$ are the same as $(1), (2), (3), (6), (7), (8), (9)$ and $(10)$ in Lemma \ref{premierescontraintes} and $(11')$ is the same as $(11)$ in Proposition \ref{10} with $\overline \Theta$ replaced with $\Xi_0$, ``$- \infty$ exists in $T(M)$'' replaced as indicated and ``$]A^-,A[$ not empty'' replaced with ``there is an ${\cal L}_1$-theory labeling $(A^-,A)$''. The other conditions are:
\begin{enumerate}[label=(\arabic*')]\setcounter{enumi}{3}
\item Assume $A \neq A_0$. If $A$ has a unique successor, say $B$, and $n_B = n_A$, then $s_A \geq 1$. (This reformulation of (4) into (4') uses Lemma \ref{deplier}.) \item An ${\cal L}_1$-theory possibly labeling an edge of $\Xi_0$ is a complete theory of a colored good tree.
\end{enumerate}
\begin{lem}\label{deplier} Given a finite meet-semi-lattice tree $\Xi_0$, $A_0$ its root, labeled with a coefficient $n_A$ at each $A \in \Xi _0$ and satisfying (1'), there is a unique ordered set $\Xi$ which is the disjoint union of antichains $U_A$, $A \in \Xi _0$, and satisfies, for all $A,B \in \Xi_0$: \\
(a) $|U_A| = n_A$, \\ (b) (for all $a \in U_A$ there exists $b \in U_B$ with $a<b$ in $\Xi$) iff $A<B$ in $\Xi_0$, \\
(c) if $ B^- = A$ and $a \in U_A$, then there are exactly $n_B/n_A$ elements $b \in U_B$ such that $b>a$. \\
Furthermore: \\ (d) $\Xi$ is a meet-semi-lattice tree, \\ (e) the set of the $U_A$ ordered by the order induced by the order of $\Xi$ is isomorphic to $\Xi_0$, \\
(f) any automorphism of the labeled tree $\Xi_0$ lifts to an automorphism of the tree $\Xi$, \\ (g) given two points in $\Xi$ belonging to the same antichain of $\Xi_0$, there is an automorphism of $\Xi$ sending one to the other one, \\ (h) for $A \in \Xi_0$, $\Xi$ has a unique branch starting from some (or any) $a \in A$ iff ($\Xi_0$ has a unique branch starting from $A$ and, if $B^- = A$ then $n_B=n_A$). \end{lem}
\pr We define inductively an order on $\Xi := \dot \bigcup_{A \in \Xi_0} U_A$. We take $U_{A_0}$ a singleton, as it should be. Let $\Xi_1 \subseteq \Xi_0$ satisfy $[( A,B \in \Xi_0 \ \& \ A<B \ \& \ B \in \Xi_1) \Rightarrow A \in \Xi_1 ]$ and assume $\dot \bigcup_{A \in \Xi_1} U_A$ is already ordered in such a way that the $U_A$ are antichains and satisfy (a), (b) and (c) for $A,B \in \Xi_1$. Let $X \in \Xi_0 \setminus \Xi_1$ be such that $X^- =: B \in \Xi_1$. Since $X^- = B$, $n_B$ divides $n_X$, which allows us to take for each $y \in U_B$ an antichain $W_y$ with $n_X(n_B)^{-1}$ elements and $U_X := \dot \bigcup_{y \in U_B} W_y$; for $x \in U_X$ and $y \in U_B$ we set $x>y$ iff $x \in W_y$, with no other order relation between elements from $U_B \cup U_X$. So we have extended the order from $\dot \bigcup_{A \in \Xi_1} U_A$ to $\dot \bigcup_{A \in \Xi_1} U_A \dot \cup U_X$. Due to (a), (b) and (c) we made the only possible choice. By construction (a), (b) and (c) are true on $\Xi_1 \cup \{ X \}$. \\
(d) The order $\Xi$ we constructed is a meet-semi-lattice tree because $\Xi_0$ is one and $n_{A_0} =1$. \\ (e) and (h) are clear. \\ (f) is proven by induction. Let $\sigma$ be an automorphism of the labeled tree $\Xi_0$, and let $\Xi_1 \subseteq \Xi_0$, $X$ and $B$ be as at the beginning of the proof of (a), (b) and (c), but assume now furthermore that $\Xi_1$ is closed under $\sigma$. We also assume there is a partial automorphism $\tau$ of the tree $\Xi$ defined on $\dot \bigcup_{A \in \Xi_1} U_A$ and lifting $\sigma \upharpoonright \Xi_1$. Let ${\mathcal X} = \{ X, \sigma (X), \sigma ^2 (X),\dots, \sigma^{r-1} (X) \}$ be the orbit of $X$ under $\sigma$. Since $\sigma$ preserves the order, ${\mathcal X}$ is an antichain and $\sigma ^i (X)^- = \sigma ^i (B)$, which belongs to $\Xi_1$ since $\Xi_1$ is closed under $\sigma$.
So we can extend $\tau$ to $\dot \bigcup_{A \in \Xi_1 \cup {{\mathcal X}}} U_A$ by taking any bijective map $U_{\sigma ^i (X)} \rightarrow U_{\sigma^{i+1} (X)} $ for any $i$, $0 \leq i < r$. \\ (g) Let $A \in \Xi_0$ and $x, y \in U_A$. We carry out the induction of the proof of (f), starting with $\sigma$ the identity of $\Xi_0$, $\tau$ the identity on $\bigcup _{\{ X \in \Xi_0 ; \neg ( X \geq A)\} } U_X $ and choosing a bijection $U_A \rightarrow U_A$ sending $x$ to $y$. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
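As an illustration of Lemma \ref{deplier}, here is a small hypothetical instance (the tree $\Xi_0$ and the labels $n_A$ below are chosen only for the example; they satisfy (1')):

```latex
% Hypothetical instance: Xi_0 the chain A_0 < A_1 < A_2 with labels
% n_{A_0} = 1, n_{A_1} = 2, n_{A_2} = 6 (so (1') holds: 1 | 2 | 6).
% By (a) and (c), Xi is forced to be:
\[
U_{A_0} = \{a_0\}, \qquad
U_{A_1} = \{a, a'\}, \qquad
U_{A_2} = W_a \,\dot\cup\, W_{a'} \ \text{with } |W_a| = |W_{a'}| = 3,
\]
% where a_0 < a < (every element of W_a), a_0 < a' < (every element of
% W_{a'}), and each element of U_{A_1} has exactly n_{A_2}/n_{A_1} = 3
% successors in U_{A_2}.
```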
\begin{theo}\label{thefirst} Given a finite meet-semi-lattice tree $\Xi_0$ labeled with coefficients and theories satisfying (1') to (7'), consider the language ${\mathcal L} := \{ C \} \cup \{ P_{A,i} ; A \in \Xi_0, 1 \leq i \leq s_A \} \cup \{ P_{A^-,A} ; A \in \Xi_0, A \not= A_0 \}$ where all new symbols represent unary predicates.
Then there exists a unique finite-or-countable ${\mathcal L}$-structure ${\mathcal M}$ such that the tree $\Xi$ built from $\Xi_0$ and $\{ n_A ; A \in \Xi _0 \}$ according to Lemma \ref{deplier} embeds in $T(M)^*$
in such a way that for any $A \in \Xi_0$, $A \neq A_0$: \\ (a) Let $A,B \in \Xi_0$, $B = A^-$, and $a, b \in \Xi$, $a \in A$, $ b \in B$, $b<a$; then, either there is no theory labeling the edge $(B,A)$ and $b$ is the predecessor of $a$ in $T(M)$, or $(\Gamma(]b,a[),]b,a[)$ is a model of $\Sigma_{(B,A)}^V$ (as defined in Lemma \ref{V} from the theory labeling $(B,A)$); $P_{B,A}$ is the union of all pruned cones ${\mathcal C}(]b,a[)$ for $a$ and $b$ as above. Any cone at $b$ which does not contain $a$ is contained in one of the $P_{B,i}$ and, for each $i \leq s_B$, $P_{B,i} \cap {\mathcal C}(b)$ consists of exactly $k_{B,i}$ cones at $b$, all with a canonical tree which is a model of $\Sigma_{B,i}$. \\ (b) ``Pieces'' $P_{A,i}$ and $P_{A^-,A}$, $A \in \Xi_0, 1 \leq i \leq s_A$, are stably and purely embedded in ${\mathcal M}$ and the structure of ${\mathcal M}$ is induced by them, in the sense that the definable sets of ${\mathcal M}$ are exactly the Boolean combinations of definable sets of these pieces. \\ Then ${\mathcal M}$ is $C$-minimal and $\aleph_0$-categorical and any automorphism of the labeled tree $\Xi$ that preserves the class in $\Xi_0$ extends to an automorphism of $T({\mathcal M})$.
Unlike the proof of Lemma \ref{deplier}, here we use a downward induction, more precisely an induction on the depth of vertices, which we now define. \begin{defi} Let $\Xi$ be a finite semi-lattice tree. The \emph{depth} of a vertex in $\Xi$ is given by the minimal function from $\Xi$ to $\omega$ such that: \\ - if $a$ is a maximal element of $\Xi$, $depth(a) = 0$, \\ - if $x < y$, $depth(x) \geq depth (y ) + 1.$ \end{defi}
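Equivalently, since $\Xi$ is finite, the depth of a vertex is the length of a longest chain above it:
$$depth(x) = \max \{ k \in \omega \; ; \; \exists \, x = x_0 < x_1 < \dots < x_k \mbox{ in } \Xi \}.$$
Indeed, this function assigns $0$ to maximal elements, satisfies $depth(x) \geq depth(y) + 1$ whenever $x < y$, and a straightforward induction shows that any function satisfying both conditions dominates it.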
\pr We will define simultaneously $C$-structures ${\mathcal M}_a$ and ${\mathcal N}_a$ for $a \in \Xi $,
by induction on $depth(a)$, ${\mathcal M}_a$ for each of these $a$ and ${\mathcal N}_a$ if furthermore $a$ is not the root of $ \Xi$. The $M_a$ are intended to become thick cones in ${\mathcal M}$ and the $N_a$ cones, and they will be the only possible choice thanks to the canonicity of both constructions of connection and sticking. Their languages are, if $a \in A \in \Xi_0$, ${\mathcal L} ({\mathcal M}_a) := \{ C \} \cup \{ P_{B,i} ; B \in \Xi_0, B>A, 1 \leq i \leq s_B \} \cup \{ P_{B^-,B} ; B \in \Xi_0, B>A \}$ and, if $N_a \not= M_a$, ${\mathcal L} ({\mathcal N}_a) := {\mathcal L} ({\mathcal M}_a) \cup \{ P_{A^-,A} \}$. As previously we work with canonical trees:
$T({\mathcal M}_a)$ and $T({\mathcal N}_a)$ will be shown by induction to eliminate quantifiers in languages ${\cal L}(T({\mathcal M}_a))$ and ${\cal L}(T({\mathcal N}_a))$ respectively, and to be $\aleph_0$-categorical trees. By induction too the ${\mathcal M}_a$ and the ${\mathcal N}_a$ are $C$-minimal. \\
Let us start. \\ Theories such as $\Sigma_{A,i}$ or $\Sigma_{(A^-,A)}$, $A \in \Xi_0$, appear among the labels. By (6') each theory $\Sigma_{A,i}$ is the theory of some $n$-colored good tree for some integer $n$ and we consider $\Sigma_{A,i}$ in its elimination language ${\mathcal L} (\Sigma_{A,i}) := {\mathcal L}_n^+$. Let $\Gamma_{A,i}$ be the unique finite-or-countable model of $\Sigma_{A,i}$ and ${\mathcal C}_{A,i}$ the $C$-set with canonical tree $\Gamma_{A,i}$. \\ By (5') if the label $\Sigma_{(A^-,A)}$ exists, consider $\Sigma_{(A^-,A)}^V$, its enrichment as in Lemma \ref{V}. It eliminates quantifiers in the language ${{\mathcal L}}(\Sigma_{(A^-,A)}^V) := {{\mathcal L}}_n^{V+}$. Let $(\Gamma_{(A^-,A)},V_A)$ be the unique finite-or-countable model of $\Sigma_{(A^-,A)}^V$ and ${\mathcal C}_{(A^-,A)}$ the $C$-set with canonical tree $\Gamma_{(A^-,A)}$. \\
- Let $A$ be maximal in $ \Xi_0$ and $a \in A$. Due to axiom $(2')$ either $s_A=0$ or $\Sigma _{1 \leq i \leq s_A} k_{A,i} \geq 2$. If $s_A=0$ we take for ${\mathcal M}_a$ a singleton and ${\mathcal L} (T({\mathcal M}_a)) := {\mathcal L}_1$. If $\Sigma _{1 \leq i \leq s_A} k_{A,i} \geq 2$ we define ${\mathcal M}_a := \bigsqcup_{1 \leq i \leq s_A} {\mathcal C}_{A,i} \cdot k_{A,i}$.
Each $\Gamma_{A,i}$ is considered in its elimination language ${\mathcal L} (\Sigma_{A,i})$ and ${\mathcal L} (T({\mathcal M}_a))$ is given by Lemma \ref{connection}. It eliminates quantifiers. It is to be noticed that in both cases $T({\mathcal M}_a)$ has $a$ as a root. \\ - If $A$ is not maximal in $ \Xi_0$ and $a \in A$, we take for ${\mathcal M}_a$
the connection of $k_{A,i}$ copies of ${\mathcal C}_{A,i}$ and $(n_B:n_A)$ copies of ${\mathcal N}_b$, for $1 \leq i \leq s_A$ and $B^-=A, b \in B, b>a$. Due to condition (4') this connection is well defined since the number of connected $C$-structures is at least 2. Here again the $\Gamma_{A,i}$, the ${\mathcal N}_b$ and $T({\mathcal M}_a)$ are considered in their elimination languages (some ${\mathcal L}_{n_{A,i}}^+$ for the $\Gamma_{A,i}$, given by induction hypothesis for the ${\mathcal N}_b$, and by Lemma \ref{connection} for $T({\mathcal M}_a)$) and $T({\mathcal M}_a)$ has $a$ as a root. \\
- For $A$ different from the root $A_0$ of $ \Xi _0$, if there is a theory $\Sigma_{(A^-,A)}$ we set ${\mathcal N}_a = {\mathcal M}_a \! \triangleright \! ({\mathcal C}_{(A^-,A)},V_A) $.
If there is no theory labeling $(A^-,A)$ we set ${\mathcal N}_a = {\mathcal M}_a$. \\ - In the case where $s_{A_0} = 0$, $A_0$ has a unique successor $B$ in $\Xi_0$ with $n_B = 1$, call $b$ the unique element of $B$; we define ${\mathcal M} = {\mathcal N}_{b}$; then $T(M)$ has no root and $A_0$ embeds in $T(M)^\ast$ as $\{- \infty\}$. Else, we define ${\mathcal M} = {\mathcal M}_{a_0}$, where $A_0 = \{a_0\}$. \\[2 mm]
We now look a bit more carefully at languages in the above construction. An easy downwards induction shows that, for $a,c \in A \in \Xi_0$, the two structures $(T(M_a), {\mathcal L}(T({\mathcal M}_a)))$ and $(T(M_c), {\mathcal L}(T({\mathcal M}_c)))$ are isomorphic, as are $(T(N_a), {\mathcal L}(T({\mathcal N}_a)))$ and $(T(N_c), {\mathcal L}(T({\mathcal N}_c)))$ when $A \not= A_0$. And indeed we choose to identify the languages ${\mathcal L}(T({\mathcal M}_a))$ and ${\mathcal L}(T({\mathcal M}_c))$ on one hand and ${\mathcal L}(T({\mathcal N}_a))$ and ${\mathcal L}(T({\mathcal N}_c))$ on the other hand. This means that in the situation where $a,c > b$, $b \in A^-$ when constructing ${\mathcal M}_b$ by a connection, $T({\mathcal N}_a)$ and $T({\mathcal N}_c)$ are considered as two copies of the same structure, like ${\mathcal H}_{i,j}$ and ${\mathcal H}_{i,k}$ in Subsection \ref{conn}.
We do not do any other identification: if for example the same language ${\mathcal L}_n$ appears as elimination language in ${\mathcal N}_a$ and ${\mathcal M}_b$ or in ${\mathcal N}_a$ and ${\mathcal N}_c$ for two nodes $a$ and $c$ which do not belong to the same antichain, then it will be duplicated, one avatar for each node.
\\[2 mm]
Note that the ${\mathcal L} (T({\mathcal M}_a))$-structure of $T(M_a)$ is definable in ${\mathcal L} ({\mathcal M}_a)$ and the ${\mathcal L} (T({\mathcal N}_a))$-structure of $T(N_a)$ is definable in ${\mathcal L} ({\mathcal N}_a)$. Hence the ${\mathcal L} (T({\mathcal M}))$-structure of $T(M)$ is definable in ${\mathcal L} ({\mathcal M})$. By construction $\Xi$ embeds into $T(M)^*$ and ${\mathcal M}$ satisfies properties (a) and (b) ((b) follows from quantifier elimination). By induction
this ${\mathcal M}$ is unique (above $\Xi$) due to $\aleph_0$-categoricity of the label theories and canonicity of connection and sticking. It is $\aleph_0$-categorical and $C$-minimal due to Lemmas \ref{connection}, \ref{sticking} and \ref{connection-minimality}. \\ Let $\tau$ be an automorphism of $\Xi$ preserving the projection $\Xi \rightarrow \Xi_0$. We define by induction an automorphism $\rho$ of $T({\mathcal M})$ extending $\tau$. Again there are two induction steps. Either $\rho$ is defined on $\Xi \cup T(M_a)$ (or on $\Xi \cup T(N_b)$ for each $b, b^-=a$) and we want to extend it to $\Xi \cup T(N_a)$ (or to $\Xi \cup T(M_a)$). Since $\tau$ preserves classes in $\Xi_0$ it preserves labels, and the conclusion follows by $\aleph_0$-categoricity of the theories involved and canonicity of the sticking (or connection) construction. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi\\[2 mm]
{\bf Proof of Proposition \ref{10}:} As already noticed just after the statement of Proposition \ref{10}, it is enough to prove that any automorphism of the labeled tree $\overline \Theta ({\mathcal M})$ lifts up to an automorphism of $T({\mathcal M})^*$ (${\mathcal M}$ is here the countable model). Thus Proposition \ref{10} follows immediately from Lemma \ref{deplier} and Theorem \ref{thefirst}. \relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\begin{theo}\label{thesecond} If the labeled tree $\Xi_0$ satisfies furthermore (8'), (9'), (10') and (11') then ${\mathcal M}$ is a pure $C$-set, $\Theta({\mathcal M})= \Xi$ and $\overline \Theta({\mathcal M})= \Xi_0$.
\end{theo}
\pr
Let $\Xi _{\geq a} := \{x \in \Xi; x \geq a\}$. We show, by induction on vertex depth, that $\Xi _{\geq a} = \Theta({\mathcal M}_a) $ and, if $N_a \not= M_a$ and $b \in B := A^-$, $b<a$, $\Theta({\mathcal N}_a) = \Xi _{\geq a} \cup \{ b \}$ where $b$ plays here the role of $- \infty$ for the tree $\Theta( {\mathcal N}_a)$. \\ 1. $a \in \Theta({\mathcal M}_a)$: this means that ${\mathcal M}_a$ is not indiscernible, which follows from (9') for $A$ maximal in $\Xi _0$ and from (10') if $A$ is not maximal. \\ 2. $a$ remains in $\Theta({\mathcal N}_a)$ either trivially if $a$ has a predecessor in $T(M)$ or because of (10'). Since $a$ is in $\Theta({\mathcal N}_a)$ it is $\emptyset$-definable (in $ {\mathcal N}_a$) and the tree $\Xi _{\geq a}$ remains in $\Theta( {\mathcal N}_a)$. \\ 3. So $ \Xi$ embeds in $\Theta({\mathcal M})$. Elements of $\Xi$ are thus $\emptyset$-algebraic. Elements of $\Xi_0$ are $\emptyset$-definable due to (11'). An induction (using Lemma \ref{connection-minimality} and (8')) shows that ${\mathcal M}$ is a pure $C$-set.
\\ 4. Any point in $T( M) \setminus \Xi$ is in some canonical copy of either some pruned cone $\Gamma_{(A^-,A)}$ or some cone $\Gamma_{A,i}$. Since $C$-sets associated to these trees are indiscernible, an element of $\Gamma_{(A^-,A)}$ or $\Gamma_{A,i}$ can belong to $\Theta ({\mathcal M})$ only if it belongs to $U$ (see Definition \ref{definition of Theta}), which is impossible in both situations. \\
This proves that $\Theta({\mathcal M})$ is exactly $\Xi$ and consequently $\bar \Theta({\mathcal M})$ is $\Xi_0$.
\relax\ifmmode\eqno\Box\else\mbox{}\quad\nolinebreak
$\Box$
\fi
\end{document}
Distance from a point to a line
In Euclidean geometry, the distance from a point to a line is the shortest distance from a given point to any point on an infinite straight line. It is the perpendicular distance of the point to the line, the length of the line segment which joins the point to the nearest point on the line. The algebraic expression for calculating it can be derived and expressed in several ways.
Knowing the distance from a point to a line can be useful in various situations—for example, finding the shortest distance to reach a road, quantifying the scatter on a graph, etc. In Deming regression, a type of linear curve fitting, if the dependent and independent variables have equal variance this results in orthogonal regression in which the degree of imperfection of the fit is measured for each data point as the perpendicular distance of the point from the regression line.
Line defined by an equation
In the case of a line in the plane given by the equation ax + by + c = 0, where a, b and c are real constants with a and b not both zero, the distance from the line to a point (x0, y0) is[1][2]: p.14
$\operatorname {distance} (ax+by+c=0,(x_{0},y_{0}))={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.$
The point on this line which is closest to (x0, y0) has coordinates:[3]
$x={\frac {b(bx_{0}-ay_{0})-ac}{a^{2}+b^{2}}}{\text{ and }}y={\frac {a(-bx_{0}+ay_{0})-bc}{a^{2}+b^{2}}}.$
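As a concrete check, both formulas above translate directly into code. The following minimal Python sketch (the function names are illustrative) computes the distance and the nearest point:

```python
import math

# Perpendicular distance from (x0, y0) to the line a*x + b*y + c = 0,
# and the coordinates of the point on that line closest to (x0, y0).
def point_line_distance(a, b, c, x0, y0):
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

def closest_point(a, b, c, x0, y0):
    d = a * a + b * b
    x = (b * (b * x0 - a * y0) - a * c) / d
    y = (a * (-b * x0 + a * y0) - b * c) / d
    return (x, y)

# Distance from the origin to the line x + y - 2 = 0 is sqrt(2),
# attained at the point (1, 1).
print(point_line_distance(1, 1, -2, 0, 0))  # ≈ 1.4142 (sqrt 2)
print(closest_point(1, 1, -2, 0, 0))        # (1.0, 1.0)
```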
Horizontal and vertical lines
In the general equation of a line, ax + by + c = 0, a and b cannot both be zero unless c is also zero, in which case the equation does not define a line. If a = 0 and b ≠ 0, the line is horizontal and has equation y = −c/b. The distance from (x0, y0) to this line is measured along a vertical line segment of length |y0 − (−c/b)| = |by0 + c|/|b| in accordance with the formula. Similarly, for vertical lines (b = 0) the distance between the same point and the line is |ax0 + c|/|a|, as measured along a horizontal line segment.
Line defined by two points
If the line passes through two points P1 = (x1, y1) and P2 = (x2, y2) then the distance of (x0, y0) from the line is:[4]
$\operatorname {distance} (P_{1},P_{2},(x_{0},y_{0}))={\frac {|(x_{2}-x_{1})(y_{1}-y_{0})-(x_{1}-x_{0})(y_{2}-y_{1})|}{\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}}.$
The denominator of this expression is the distance between P1 and P2. The numerator is twice the area of the triangle with its vertices at the three points, (x0, y0), P1 and P2. See: Area of a triangle § Using coordinates. The expression is equivalent to h = 2A/b, which can be obtained by rearranging the standard formula for the area of a triangle: A = 1/2 bh, where b is the length of a side, and h is the perpendicular height from the opposite vertex.
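A minimal Python sketch of the two-point formula, with the numerator and denominator computed exactly as described above:

```python
import math

# Distance from p0 to the line through p1 and p2: the numerator is twice
# the area of triangle (p0, p1, p2), the denominator is |p1 p2|.
def distance_from_two_points(p1, p2, p0):
    (x1, y1), (x2, y2), (x0, y0) = p1, p2, p0
    num = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1))
    return num / math.hypot(x2 - x1, y2 - y1)

# The point (0, 1) is at distance 1 from the x-axis through (0, 0) and (1, 0).
print(distance_from_two_points((0, 0), (1, 0), (0, 1)))  # 1.0
```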
Line defined by point and angle
If the line passes through the point P = (Px, Py) with angle θ, then the distance of some point (x0, y0) to the line is
$\operatorname {distance} (P,\theta ,(x_{0},y_{0}))=|\cos(\theta )(P_{y}-y_{0})-\sin(\theta )(P_{x}-x_{0})|$
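A minimal Python sketch of the point-and-angle formula (theta in radians):

```python
import math

# Distance from (x0, y0) to the line through (px, py) with direction angle theta.
def distance_point_angle(px, py, theta, x0, y0):
    return abs(math.cos(theta) * (py - y0) - math.sin(theta) * (px - x0))

# A horizontal line (theta = 0) through the origin: the point (5, 3) is 3 away.
print(distance_point_angle(0.0, 0.0, 0.0, 5.0, 3.0))  # 3.0
```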
Proofs
An algebraic proof
This proof is valid only if the line is neither vertical nor horizontal, that is, we assume that neither a nor b in the equation of the line is zero.
The line with equation ax + by + c = 0 has slope −a/b, so any line perpendicular to it will have slope b/a (the negative reciprocal). Let (m, n) be the point of intersection of the line ax + by + c = 0 and the line perpendicular to it which passes through the point (x0, y0). The line through these two points is perpendicular to the original line, so
${\frac {y_{0}-n}{x_{0}-m}}={\frac {b}{a}}.$
Thus, $a(y_{0}-n)-b(x_{0}-m)=0,$ and by squaring this equation we obtain:
$a^{2}(y_{0}-n)^{2}+b^{2}(x_{0}-m)^{2}=2ab(y_{0}-n)(x_{0}-m).$
Now consider,
${\begin{aligned}(a(x_{0}-m)+b(y_{0}-n))^{2}&=a^{2}(x_{0}-m)^{2}+2ab(y_{0}-n)(x_{0}-m)+b^{2}(y_{0}-n)^{2}\\&=\left(a^{2}+b^{2}\right)\left((x_{0}-m)^{2}+(y_{0}-n)^{2}\right)\end{aligned}}$
using the above squared equation. But we also have,
$(a(x_{0}-m)+b(y_{0}-n))^{2}=(ax_{0}+by_{0}-am-bn)^{2}=(ax_{0}+by_{0}+c)^{2}$
since (m, n) is on ax + by + c = 0. Thus,
$\left(a^{2}+b^{2}\right)\left((x_{0}-m)^{2}+(y_{0}-n)^{2}\right)=(ax_{0}+by_{0}+c)^{2}$
and we obtain the length of the line segment determined by these two points,
$d={\sqrt {(x_{0}-m)^{2}+(y_{0}-n)^{2}}}={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.$[5]
A geometric proof
This proof is valid only if the line is not horizontal or vertical.[6]
Drop a perpendicular from the point P with coordinates (x0, y0) to the line with equation Ax + By + C = 0. Label the foot of the perpendicular R. Draw the vertical line through P and label its intersection with the given line S. At any point T on the line, draw a right triangle TVU whose sides are horizontal and vertical line segments with hypotenuse TU on the given line and horizontal side of length |B| (see diagram). The vertical side of ∆TVU will have length |A| since the line has slope -A/B.
∆PRS and ∆TVU are similar triangles, since they are both right triangles and ∠PSR ≅ ∠TUV since they are corresponding angles of a transversal to the parallel lines PS and UV (both are vertical lines).[7] Corresponding sides of these triangles are in the same ratio, so:
${\frac {|{\overline {PR}}|}{|{\overline {PS}}|}}={\frac {|{\overline {TV}}|}{|{\overline {TU}}|}}.$
If point S has coordinates (x0,m) then |PS| = |y0 - m| and the distance from P to the line is:
$|{\overline {PR}}|={\frac {|y_{0}-m||B|}{\sqrt {A^{2}+B^{2}}}}.$
Since S is on the line, we can find the value of m,
$m={\frac {-Ax_{0}-C}{B}},$
and finally obtain:[8]
$|{\overline {PR}}|={\frac {|Ax_{0}+By_{0}+C|}{\sqrt {A^{2}+B^{2}}}}.$
A variation of this proof is to place V at P and compute the area of the triangle ∆UVT two ways to obtain that $D|{\overline {TU}}|=|{\overline {VU}}||{\overline {VT}}|$ where D is the altitude of ∆UVT drawn to the hypotenuse of ∆UVT from P. The distance formula can then be used to express $|{\overline {TU}}|$, $|{\overline {VU}}|$, and $|{\overline {VT}}|$ in terms of the coordinates of P and the coefficients of the equation of the line to get the indicated formula.
A vector projection proof
Let P be the point with coordinates (x0, y0) and let the given line have equation ax + by + c = 0. Also, let Q = (x1, y1) be any point on this line and n the vector (a, b) starting at point Q. The vector n is perpendicular to the line, and the distance d from point P to the line is equal to the length of the orthogonal projection of ${\overrightarrow {QP}}$ on n. The length of this projection is given by:
$d={\frac {|{\overrightarrow {QP}}\cdot \mathbf {n} |}{\|\mathbf {n} \|}}.$
Now,
${\overrightarrow {QP}}=(x_{0}-x_{1},y_{0}-y_{1}),$ so ${\overrightarrow {QP}}\cdot \mathbf {n} =a(x_{0}-x_{1})+b(y_{0}-y_{1})$ and $\|\mathbf {n} \|={\sqrt {a^{2}+b^{2}}},$
thus
$d={\frac {|a(x_{0}-x_{1})+b(y_{0}-y_{1})|}{\sqrt {a^{2}+b^{2}}}}.$
Since Q is a point on the line, $c=-ax_{1}-by_{1}$, and so,[9]
$d={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.$
Although the distance is given as a modulus, the sign can be useful to determine which side of the line the point is on, in a sense determined by the direction of the normal vector (a, b).
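The projection argument translates directly into code. The following sketch assumes b ≠ 0 so that a point Q on the line can be chosen on the y-axis; it reproduces the same distance as the closed-form expression:

```python
import math

# Distance from (x0, y0) to a*x + b*y + c = 0 via projection onto the
# normal vector n = (a, b), using an arbitrary point Q on the line.
def distance_via_projection(a, b, c, x0, y0):
    q = (0.0, -c / b)               # a point Q on the line (assumes b != 0)
    qp = (x0 - q[0], y0 - q[1])     # vector from Q to P
    dot = a * qp[0] + b * qp[1]     # QP . n
    return abs(dot) / math.hypot(a, b)

# Same line and point as before: x + y - 2 = 0 and the origin.
print(distance_via_projection(1, 1, -2, 0, 0))  # ≈ 1.4142 (sqrt 2)
```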
Another formula
It is possible to produce another expression to find the shortest distance of a point to a line. This derivation also requires that the line be neither vertical nor horizontal.
The point P is given with coordinates ($x_{0},y_{0}$). The equation of a line is given by $y=mx+k$. The equation of the normal of that line which passes through the point P is given by $y={\frac {x_{0}-x}{m}}+y_{0}$.
The point at which these two lines intersect is the closest point on the original line to the point P. Hence:
$mx+k={\frac {x_{0}-x}{m}}+y_{0}.$
We can solve this equation for x,
$x={\frac {x_{0}+my_{0}-mk}{m^{2}+1}}.$
The y coordinate of the point of intersection can be found by substituting this value of x into the equation of the original line,
$y=m{\frac {(x_{0}+my_{0}-mk)}{m^{2}+1}}+k.$
Using the equation for finding the distance between two points, $d={\sqrt {(X_{2}-X_{1})^{2}+(Y_{2}-Y_{1})^{2}}}$, we can deduce that the formula to find the shortest distance between a line and a point is the following:
$d={\sqrt {\left({{\frac {x_{0}+my_{0}-mk}{m^{2}+1}}-x_{0}}\right)^{2}+\left({m{\frac {x_{0}+my_{0}-mk}{m^{2}+1}}+k-y_{0}}\right)^{2}}}={\frac {|k+mx_{0}-y_{0}|}{\sqrt {1+m^{2}}}}.$
Recalling that m = -a/b and k = - c/b for the line with equation ax + by + c = 0, a little algebraic simplification reduces this to the standard expression.[3]
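A minimal Python sketch of the slope–intercept version of the formula:

```python
import math

# Distance from (x0, y0) to the line y = m*x + k.
def distance_slope_intercept(m, k, x0, y0):
    return abs(k + m * x0 - y0) / math.sqrt(1 + m * m)

# For y = x (m = 1, k = 0), the point (0, 2) lies at distance sqrt(2).
print(distance_slope_intercept(1.0, 0.0, 0.0, 2.0))  # ≈ 1.4142 (sqrt 2)
```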
Vector formulation
The equation of a line can be given in vector form:
$\mathbf {x} =\mathbf {a} +t\mathbf {n} $
Here a is a point on the line, and n is a unit vector in the direction of the line. Then, as the scalar t varies, x gives the locus of the line.
The distance of an arbitrary point p to this line is given by
$\operatorname {distance} (\mathbf {x} =\mathbf {a} +t\mathbf {n} ,\mathbf {p} )=\|(\mathbf {p} -\mathbf {a} )-((\mathbf {p} -\mathbf {a} )\cdot \mathbf {n} )\mathbf {n} \|.$
This formula can be derived as follows: $\mathbf {p} -\mathbf {a} $ is a vector from a to the point p. Then $(\mathbf {p} -\mathbf {a} )\cdot \mathbf {n} $ is the projected length onto the line and so
$\mathbf {a} +((\mathbf {p} -\mathbf {a} )\cdot \mathbf {n} )\mathbf {n} $
is a vector that is the projection of $\mathbf {p} -\mathbf {a} $ onto the line and represents the point on the line closest to $\mathbf {p} $. Thus
$(\mathbf {p} -\mathbf {a} )-((\mathbf {p} -\mathbf {a} )\cdot \mathbf {n} )\mathbf {n} $
is the component of $\mathbf {p} -\mathbf {a} $ perpendicular to the line. The distance from the point to the line is then just the norm of that vector.[4] This more general formula is not restricted to two dimensions.
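Because this formulation is not restricted to two dimensions, it can be sketched for coordinate tuples of any length (n is assumed to be a unit vector, as in the text):

```python
# Distance from point p to the line x = a + t*n, with n a unit direction
# vector; works in any dimension. Computes the norm of the component of
# p - a perpendicular to the line.
def distance_vector_form(a, n, p):
    pa = [pi - ai for pi, ai in zip(p, a)]
    t = sum(x * y for x, y in zip(pa, n))       # (p - a) . n, projected length
    rej = [x - t * y for x, y in zip(pa, n)]    # perpendicular component
    return sum(x * x for x in rej) ** 0.5

# Line through the origin along the x-axis in 3D; the point (3, 4, 0) is 4 away.
print(distance_vector_form((0, 0, 0), (1, 0, 0), (3, 4, 0)))  # 4.0
```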
Another vector formulation
If the vector space is orthonormal and if the line goes through point a and has a direction vector n, the distance between point p and the line is[10]
$\operatorname {distance} (\mathbf {x} =\mathbf {a} +t\mathbf {n} ,\mathbf {p} )={\frac {\left\|(\mathbf {p} -\mathbf {a} )\times \mathbf {n} \right\|}{\|\mathbf {n} \|}}.$
Note that cross products only exist in dimensions 3 and 7.
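For comparison, a sketch of the cross-product form in three dimensions; here n need not be a unit vector, since the formula divides by its norm:

```python
# Distance from point p to the 3D line through a with direction n,
# computed as ||(p - a) x n|| / ||n||.
def distance_cross(a, n, p):
    pa = [pi - ai for pi, ai in zip(p, a)]
    cx = pa[1] * n[2] - pa[2] * n[1]
    cy = pa[2] * n[0] - pa[0] * n[2]
    cz = pa[0] * n[1] - pa[1] * n[0]
    norm_cross = (cx * cx + cy * cy + cz * cz) ** 0.5
    norm_n = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return norm_cross / norm_n

# Same configuration as above, but with a non-unit direction vector.
print(distance_cross((0, 0, 0), (2, 0, 0), (3, 4, 0)))  # 4.0
```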
See also
• Hesse normal form
• Line-line intersection
• Distance between two lines
• Distance from a point to a plane
• Skew lines#Distance
Notes
1. Larson & Hostetler 2007, p. 452
2. Spain 2007
3. Larson & Hostetler 2007, p. 522
4. Sunday, Dan. "Lines and Distance of a Point to a Line". softSurfer. Archived from the original on 2021-05-07.
5. Between Certainty and Uncertainty: Statistics and Probability in Five Units With Notes on Historical Origins and Illustrative Numerical Examples
6. Ballantine & Jerbert 1952 do not mention this restriction in their article
7. If the two triangles are on opposite sides of the line, these angles are congruent because they are alternate interior angles.
8. Ballantine & Jerbert 1952
9. Anton 1994, pp. 138-9
10. Weisstein, Eric W. "Point-Line Distance--3-Dimensional". mathworld.wolfram.com. Retrieved 2021-06-06.
References
• Anton, Howard (1994), Elementary Linear Algebra (7th ed.), John Wiley & Sons, ISBN 0-471-58742-7
• Ballantine, J.P.; Jerbert, A.R. (1952), "Distance from a line or plane to a point", American Mathematical Monthly, 59 (4): 242–243, doi:10.2307/2306514, JSTOR 2306514
• Larson, Ron; Hostetler, Robert (2007), Precalculus: A Concise Course, Houghton Mifflin Co., ISBN 978-0-618-62719-6
• Spain, Barry (2007) [1957], Analytical Conics, Dover Publications, ISBN 978-0-486-45773-4
• Weisstein, Eric W. "Point-Line Distance--3-Dimensional". MathWorld.
Further reading
• Deza, Michel Marie; Deza, Elena (2013), Encyclopedia of Distances (2nd ed.), Springer, p. 86, ISBN 9783642309588
Triangle $ABC$ is isosceles with angle $A$ congruent to angle $B$. The measure of angle $C$ is 30 degrees more than the measure of angle $A$. What is the number of degrees in the measure of angle $C$?
Let $x$ be the number of degrees in the measure of angle $A$. Then angle $B$ measures $x$ degrees as well and angle $C$ measures $x+30$ degrees. Since the interior angles of a triangle sum to 180 degrees, we solve $x+x+x+30=180$ to find $x=50$. Therefore, angle $C$ measures $x+30=50+30=\boxed{80}$ degrees.
\begin{definition}[Definition:Dicyclic Group]
For even $n$, the '''dicyclic group''' $\Dic n$ of order $4 n$ is the group having the presentation:
:$\Dic n = \gen {a, b: a^{2 n} = e, b^2 = a^n, b^{-1} a b = a^{-1} }$
\end{definition}
Assessing circularity interventions: a review of EEIOA-based studies
Glenn A. Aguilar-Hernandez1,
Carlos Pablo Sigüenza-Sanchez1,
Franco Donati1,
João F. D. Rodrigues1 &
Arnold Tukker1
Journal of Economic Structures volume 7, Article number: 14 (2018) Cite this article
Environmentally extended input–output analysis (EEIOA) can be applied to assess the economic and environmental implications of a transition towards a circular economy. In spite of the existence of several such applications, a systematic assessment of the opportunities and limitations of EEIOA to quantify the impacts of circularity strategies is currently missing. This article brings the current state of EEIOA-based studies for assessing circularity interventions up to date and is organised around four categories: residual waste management, closing supply chains, product lifetime extension, and resource efficiency. Our findings show that residual waste management can be modelled by increasing the amount of waste flows absorbed by the waste treatment sector. Closing supply chains can be modelled by adjusting input and output coefficients to reuse and recycling activities and specifying such actions in the EEIOA model if they are not explicitly presented. Product lifetime extension can be modelled by combining an adapted final demand with adjusted input coefficients in production. The impacts of resource efficiency can be modelled by lowering input coefficients for a given output. The major limitation we found was that most EEIOA studies are performed using monetary units, while circularity policies are usually defined in physical units. This problem affects all categories of circularity interventions, but is particularly relevant for residual waste management, due to the disconnect between the monetary and physical value of waste flows. For future research, we therefore suggest the incorporation of physical and hybrid tables in the assessment of circularity interventions when using EEIOA.
In the early 1990s, the concept of circular economy was proposed by Pearce and Turner (1990) as a model to transform the traditional open-ended economy into an ongoing closed-loop system from a material perspective. Since then, several scholars and practitioners have adopted multiple definitions for circularity (Winans et al. 2017). After considering 114 conceptual frameworks, Kirchherr et al. (2017) define it as an economic system that substitutes product end-of-life with a set of circularity interventions.
Circularity interventions are actions or processes that preserve resources inside the economy (Lieder and Rashid 2016a; Bocken et al. 2017). Such actions are based on three principles (Ellen MacArthur Foundation 2013; Ghisellini et al. 2016):
Minimising waste disposal through the use of waste flows as inputs for other economic activities;
Optimising material loops through the design of products and services that allows extending product lifetime, reuse and recycling materials at their end-of-life;
Promoting a restorative environment through the development of renewable energy that decreases material extraction and its environmental impacts.
Implementing circularity interventions has become a prominent topic in sustainability policies (McDowall et al. 2017). For instance, the European Commission presented an action plan for the circular economy in which interventions are related to the design of long-lasting products, material closed-loops at multiple supply chain levels, resource efficiency and sustainable waste management (EC 2015). Another example is that of the Chinese circular economy initiatives of the 1990s, which seek to prolong product lifetime and to enhance resource efficiency (Geng et al. 2012, 2016). These and other governments have implemented circularity actions as mechanisms to achieve economic prosperity and environmental sustainability (Andersen 2007; Ghisellini et al. 2016; Geissdoerfer et al. 2017).
In order to maximise the economic and environmental benefits of circularity interventions, it is important to assess their cost-effectiveness. This can be done through the application of analytical methods that assess the impact of particular policies (Elia et al. 2017; Potting et al. 2017). However, there is no recognised framework for measuring how effective a country is in making a transition to circularity (EEA 2016; Linder et al. 2017). Such an approach needs to integrate indicators with a clear understanding of the circularity mechanism influencing multiple economic activities and their environmental performance (Lieder and Rashid 2016b; Pauliuk 2017).
The assessment of circularity interventions can be addressed by environmentally extended input–output analysis (EEIOA). In fact, as described further below, EEIOA has been used to evaluate the impacts of residual waste management, reusing and recycling activities, product lifetime extension, and resource efficiency (Duchin 1992; Iacovidou et al. 2017).
Assessing these interventions through EEIOA has in turn required adapting that same framework, leading to the development of new methods. For example, the study of the interdependency between production and waste generation led to the development of waste input–output models (Nakamura 1999b). In addition, the analysis of resource use and emissions at country level in relation to potential leakage on a global level (WEF 2014; Rutherford and Böhringer 2015) resulted in the development of multiregional models for assessing the impacts embodied in international trade (Peters and Hertwich 2009; Wiedmann 2009; Tukker and Dietzenbacher 2013). Finally, circularity interventions are usually implemented using financial incentives such as subsidies and taxes that need to be endogenised to account for all impacts of the policy (Ferrão et al. 2014). The theoretical integration of financial incentives in the waste input-output model was achieved by Rodrigues et al. (2016). Such adaptations of EEIOA framework have been relevant to evaluate the potential impacts of current circular implementation.
To promote the further advancement and implementation of best practices in the use of EEIOA to assess the economic and environmental implications of circularity interventions, it is important to critically evaluate existing studies. To the best of our knowledge, no such review has previously been compiled.
We fill this knowledge gap by offering a literature review of EEIOA-based circularity interventions and suggest opportunities for improvement. The paper proceeds as follows. Section 2 describes the data and methods used in the literature survey. Section 3 presents the actual literature review, describing how in the past circularity interventions have been addressed, organised around four categories: residual waste management, closing supply chains, product lifetime extension, and resource efficiency. Section 4 synthesises the main methodological aspects of each intervention type. Section 5 then discusses the major contributions and limitations as well as opportunities for improvement and Sect. 6 closes with some final remarks.
In order to facilitate the identification of EEIOA-based studies related to circular strategies, we organised circularity interventions based on the resource flow framework proposed by Ellen MacArthur Foundation (2013), Bocken et al. (2016), and Kirchherr et al. (2017). Given such framework, we then collected 13 keywords that are commonly used to identify circular strategies (Ghisellini et al. 2016; Bocken et al. 2017; den Hollander et al. 2017a). Table 1 shows the categories evaluated in this review as well as their definition and corresponding keywords.
Table 1 Circularity intervention categories
We applied the keywords of Table 1 to query online databases of peer-reviewed scientific publications in English (i.e. Web of Science and Scopus) and identified 163 documents that combined 'input–output analysis' and at least one term related to circularity interventions when screening title, abstract and keywords. Afterwards, we manually examined the content of the documents, restricting our analysis to 47 relevant documents. We then performed a backwards/forwards snowballing process (Wohlin 2014), identifying additional relevant literature from the citation network. In total we found 93 relevant documents.
In order to identify basic attributes of the selected publications, we collected data on the year of publication and number of citations, circularity intervention covered, and EEIOA model characteristics.
Figure 1 shows the number of articles published in each year and the number of yearly citations of all previously published papers. The figure shows that there has been a gradual increase in the number of EEIOA-based studies that assess circularity, with 60% of all relevant literature published in the past 5 years. Figure 2 shows that the majority of studies are focused on the interaction between recycling and waste treatment systems (n[CSC + RWM] = 35). Moreover, residual waste management is the most common intervention, present in 68 study cases, followed by closing supply chains (n[CSC] = 54), product lifetime extension (n[PLE] = 17) and resource efficiency (n[RE] = 13).
Number of publications and citations per year (status on 11 June 2018)
Number of publications per circularity intervention category (status on 11 June 2018). RWM residual waste management, CSC closing supply chains, PLE product lifetime extension, and RE resource efficiency
Table 2 presents a characterisation of the top-10 most cited papers. Table 3 provides a technical characterisation of the type of model and/or approach used in different studies concerning the type of table, units, time and geographical scope. Most studies (88%) use harmonised input–output tables (IOTs), use hybrid units (53%), are focused on a specific year (85%) and are applied to a single country (75%). A detailed list of specific characteristics of the reviewed publication is provided in the Additional file 1.
Table 2 Overview of top-10 most cited articles related to the assessment of circularity interventions (status on 11 June 2018)
Table 3 Summary of EEIOA model characteristic by type of table, units, time and geographical dimensions
Although there are examples of circular intervention assessments at the macro-economic level developed by governments and private institutions in the grey literature (for example, Bastein et al. 2013; Pratt and Lenaghan 2015; Rutherford and Böhringer 2015; McKinsey&Company 2016), most of these studies apply bottom-up methods, computable general equilibrium (CGE) models or other approaches rather than EEIOA (Winning et al. 2017). Apart from our primary focus on the peer-reviewed literature, this was an additional reason to exclude these studies and to concentrate on identifying novel methods and best practices in EEIOA-based cases.
We now perform a methodological review of EEIOA-based studies which assess residual waste management, closing supply chains, product lifetime extension, and resource efficiency. Each intervention differs in its approach to splitting and extending sectors in the input–output tables, adjusting technical and final demand coefficients, and incorporating hybrid-unit data.
Residual waste management
Nakamura and Kondo (2002, 2009) introduced harmonised waste input–output tables, which are used to determine the waste embodied in a given level of consumption. Waste input–output analysis (WIOA) is a hybrid model, combining economic and physical units, in which the interactions between industries and waste treatment sectors are represented explicitly. This model extends EEIOA to capture the interdependence between the production of goods and waste disposal.
Several studies applied the WIOA model to measure the direct and indirect waste of consumption at the national level, for countries such as Taiwan, France and the UK (Jensen et al. 2013; Liao et al. 2015; Beylot et al. 2016b; Salemdeeb et al. 2016). In a study at the sub-national scale, Tsukui et al. (2011, 2017) developed an interregional WIOA to quantify the embodied waste generated by consumption patterns in the city of Tokyo. These cases applied a traditional Leontief inverse matrix to estimate the goods and waste embodied in final demand.
By applying monetary supply-use principles in the WIOA framework, Lenzen and Reynolds (2014) developed a method to construct waste supply-use tables. They argued that a supply-use approach has an advantage because it includes the allocation matrix of the WIOA model in the accounting system, which enables the simultaneous generation of industry and commodity multipliers (Lenzen and Rueda-Cantuche 2012). In addition, a supply-use model can distinguish between multiple waste types and treatment methods. The researchers demonstrated that the multipliers of WIOA and waste supply-use analysis (WSUA) are equivalent by employing Miyazawa's partitioned inverse method. An application of WSUA was presented by Reynolds et al. (2014), in which the authors assessed the direct and indirect flows of waste generated by intermediate sectors of the Australian economy.
Fry et al. (2016) constructed multiregional waste supply-use tables using the Industrial Ecology Virtual Laboratory as a computational platform (Lenzen et al. 2014). They measured the waste footprint of Australian consumption, considering the impacts of imports. The authors also focused on the impacts driven by consumption patterns in each Australian state and territory, revealing the waste footprint at the sub-national level.
Similarly, Tisserant et al. (2017) developed a harmonised multiregional solid waste account using coefficients from physical and monetary values in EXIOBASE v2.2.0 (Tukker et al. 2013; Wood et al. 2015). They collected data on 35 waste treatment services (measured in tonnes), which were used to calculate global waste footprints and identify the main contributing sectors per country. With the resulting waste footprints, they evaluated the possibility of achieving the targets for material recycling proposed by the European Commission in the Circular Economy Package (EC 2018).
By extending satellite accounts, Li et al. (2013) introduced a wastewater material composition vector that distinguishes the composition of wastewater flows. In addition, Court et al. (2015) incorporated an accounting system for hazardous waste materials as an extension of EEIOA.
In a study of landfilling scenarios using waste input–output tables, Yokoyama et al. (2006) created the additional sectors 'landfill mining' and 'gasification'. These activities were evaluated in scenarios of increasing gasification industry demand and adopting new landfill infrastructure. The scenarios required adapting the technical coefficients, which can take positive or negative values depending on the interaction between industries. For final demand, the authors assumed that consumption patterns are proportional to domestic population growth and fixed the respective final demand values accordingly. Their final outcome showed the impacts on CO2 emissions and waste generation under certain assumptions of sustainable waste management.
Duchin (1990, 1992) proposed an analysis of waste treatment scenarios by adapting the technology matrix and final demand values in an EEIOA framework. In her studies, the author computed numerical examples and identified waste disposal in final consumption by adjusting final demand values in a static model. This approach describes an entire economy in terms of its sectors and their interrelationships, accounting for the associated environmental impacts.
By converting the monetary values of input–output tables into physical units, Nakamura et al. (2007b) proposed a material flow analysis (MFA) that uses monetary coefficients to express inter-industrial physical flows. The waste input–output material flow analysis (WIO–MFA) was used to trace the final destination of materials and their specific elements through the supply chain (Nakamura and Nakajima 2005; Nakamura et al. 2009; Nakajima et al. 2013; Ohno et al. 2014). For example, in an analysis of the metal industry, Ohno et al. (2016) applied WIO–MFA to assess the material network of metals and alloying elements. To create the network, they followed three steps: disaggregating sectors and converting monetary to physical units; calculating the technical coefficients; and multiplying the input coefficient matrix element-wise with two filtering matrices: a physical flow filter, a binary matrix that excludes non-physical flows, and a loss filter matrix that removes inputs lost as process waste.
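The filtering step can be sketched numerically as follows. All matrices, sector labels and coefficient values below are illustrative assumptions for exposition, not data from Ohno et al. (2016); the essential operation is the element-wise application of the two filters to the input coefficient matrix.

```python
import numpy as np

# Toy input coefficient matrix in physical units (rows: inputs, cols: sectors).
# Sector layout and values are assumed for illustration.
A = np.array([
    [0.0, 0.6, 0.1],   # steel
    [0.2, 0.0, 0.3],   # alloy scrap
    [0.5, 0.1, 0.0],   # services (non-physical input)
])

# Binary physical-flow filter: 1 where the input is an actual material flow.
phi = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [0, 0, 0],   # services carry no mass
])

# Loss filter: share of each input that ends up in the product
# (1 minus the process-waste fraction); assumed values.
gamma = np.array([
    [1.0, 0.9, 0.95],
    [0.8, 1.0, 0.85],
    [0.0, 0.0, 0.0],
])

# Element-wise application of both filters yields the material-flow
# coefficient matrix used to trace metals through the supply chain.
A_mfa = A * phi * gamma
```

The service row is zeroed out by the physical-flow filter, and each remaining coefficient is scaled down by the share of the input that survives processing.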
From a product-level perspective, Nakamura and Kondo (2006) evaluated the end-of-life scenarios of electric home appliances, landfilling, shredding, recycling, and recycling with design for disassembly, by combining the WIOA framework and life cycle costing analysis. Reynolds et al. (2016b) also demonstrated the use of waste input–output life cycle assessment (WIO–LCA) in the context of New Zealand food waste. They included mass values, economic cost, calories and resources wasted accounts as model inputs. In a recent study, Reutter et al. (2017) combined input–output multipliers with the Australian economic cost of food waste, which can be used to quantify the embodied net surplus of wasted food.
Closing supply chains
To assess the economic activities of the 3Rs (recycling, reuse and reduction), Huang et al. (1994) collected data to include these sectors in a supply-use framework. They applied a traditional Leontief approach in which each new industry produces a single economic commodity. Under this assumption, the authors allocated the monetary flows of recycling and reuse sectors in a new supply-use table that allows the analysis of policy initiatives related to closing supply chains.
Nakamura (1999a) applied a similar principle to create a harmonised industry-by-industry framework that accounts for recycling activities. He represented the flows of goods and services, waste, and pollutants among five industries, including recycling sectors. These activities were expressed in both physical and monetary units because, in many cases, the market value of waste was not represented in the accounting system.
In an analysis of electronics waste recycling, Choi et al. (2011) constructed an EEIOA model that collects data on recyclable end-of-life products and related economic sectors. They considered e-waste values in a satellite account that is connected to recycling sectors in a similar way as primary materials are linked to mining industries. The authors then included a new industry and new product categories for recycling activities, and adjusted the environmental extension to represent the e-waste flows through the supply chain.
To assess the economic impact of product recovery and remanufacturing in France, Ferrer and Ayres (2000) incorporated the remanufacturing sector in a harmonised industry-by-industry matrix. This harmonised system was adjusted to consider different demands for labour, energy, primary materials, and inputs from other economic sectors. They assumed that manufacturing and remanufacturing final demand were equivalent in physical terms; however, remanufactured products have a lower price. They quantified the impacts of the new sector in terms of market share and labour increase.
Beylot et al. (2016b) studied the potential contribution of waste management policies to reducing carbon emissions and resource use. The authors used WIOA, obtaining physical units from the French physical supply-use tables. These physical values were used to calculate technological requirement matrices related to waste flows. By considering changes in final demand coefficients, they established scenarios for increasing recycling rates and adopting the best available technologies for waste incineration. The closing-supply-chain scenarios were extrapolated to evaluate the short-term impacts of recycling policies.
Focusing on the case of Australian consumption, Reynolds et al. (2015) evaluated the effects of non-profit organisations on reducing food waste. In a waste supply-use table, they created a new 'food charity' sector and extrapolated food waste data from government and industry reports using a top-down estimation method. According to Reynolds et al. (2016a), this technique allows waste flows to be estimated per industry simultaneously but separately, with each waste flow having a unique composition defined by the direct production inputs. This relationship is provided by the technology matrix, which is also connected to available waste data to construct the new intermediate sector.
In a study investigating the impact of Portuguese packaging waste management, Ferrão et al. (2014) analysed the effects of municipal waste and recycling strategies on economic added value and job creation. They described four basic types of recycling materials: paper and wood, plastic, glass and metals. For each material type, they considered that the magnitude of the recycling sector relative to the respective non-recycling activity is given by the ratio of the net payback value to the total amount of intra-sectoral transactions. The researchers adjusted the ratio of recycling to non-recycling materials in order to evaluate waste management scenarios for packaging alternatives.
In an analysis of the tire industry, Rodrigues et al. (2016) modified a waste supply-use model to capture the effects of policies related to closing supply chains, such as extended producer responsibility. In this scheme, waste management is financed by compensation, represented as producers' fees in terms of the waste volume processed. The researchers modelled the flow of compensation fees by introducing the financial requirements of waste management into the adapted waste supply-use table. They also adjusted the coefficients of waste treatment intermediate industries in the technical matrix and introduced an exogenous stimulus that is used to compare a reference scenario with the alternative strategy.
To explore the optimal structure of end-of-life treatment and recycling strategies, Kondo and Nakamura (2005) introduced a model that integrates WIOA into a linear programming analysis (WIO-LP). The researchers replaced the fixed constant values of waste input–output tables with an adaptable allocation matrix that can respond to specific constraints. This approach is generally defined as a minimisation problem. For example, Lin (2011) applied the WIO–LP model to analyse the optimal system configuration for reducing environmental loads, such as CO2 emissions from wastewater treatment. The researcher considered a set of constraints to reduce the amount of a certain type of environmental impact generated by both producing and waste treatment sectors.
In a recent study, Ohno et al. (2017) evaluated the optimal scenarios of steel recycling for end-of-life vehicles in Japan through the integration of linear programming into a waste input–output material flow analysis. They considered quality-oriented scrap recycling and identified which scenarios can achieve the maximum recovery potential for alloying elements.
By using industrial accounts for the Taiwanese economy, Chen and Ma (2015) assessed the linkages of industrial material and waste flows at the national level. They rearranged the structure of the accounting system to adopt a framework that resembles WIOA. This accounting system enables the identification of eco-industrial network patterns, for example, by examining the potential of by-products as inputs for other industries.
Product lifetime extension
In an assessment of the Japanese automobile industry, Kagawa et al. (2008) studied the implications of changing passenger vehicle lifetimes. They applied a cumulative product lifetime model to describe the patterns of final consumption, which is used to adjust the final demand in the scenarios of extending automobile lifetime. The authors then developed a structural decomposition analysis (SDA) with the new scenarios in order to quantify the drivers of end-of-life automobiles between certain periods.
Takase et al. (2005) extended the Japanese household final demand in the WIOA to assess waste reduction scenarios based on shared transport services and long-lasting products. These schemes were analysed by adjusting final demand coefficients. For shared transportation, for example, the authors explored a scenario in which users replace private cars with train travel. This scenario was expressed by increasing the demand for public transport services and decreasing car industry outputs. They changed the coefficients in each scenario and compared the embodied waste disposal and CO2 emissions. In addition, they incorporated potential rebound effects by assuming a fixed budget for final demand and allocating the remaining budget proportionally to all goods in the new consumption portfolio.
In a further study, Kagawa et al. (2015) adapted the WIOA framework to the lifetime distribution model, which is used to forecast secondary material demand and supply. They incorporated a stationary stock variable in the lifetime distribution analysis and expressed stocks, discarded products and newly purchased products as functions of time. These variables were inserted into the final demand, which implies a dynamic function that can be used to predict future demand. Similarly, secondary supply flows were predicted from the disposal of scrap materials at end-of-life.
Shortly after, Nishijima (2017) used an EEIOA integrated with lifetime distribution analysis to quantify the effect of extending air conditioner lifetimes on CO2 emissions. He calculated the new final demand for household air conditioners by multiplying the production price per air conditioner unit by the number of new air conditioners sold. After adjusting final demand, he performed a structural decomposition analysis to assess the effects of changes in final demand, technical coefficients and direct CO2 emission coefficients in the air conditioner sector.
Resource efficiency
Duchin and Levine (2010) introduced an EEIOA framework for estimating the average number of times that a resource passes through each supply chain stage. They established the principles of transforming input–output tables into an absorbing Markov chain (AMC) model based on their shared mathematical characteristics. For instance, both approaches are matrix-based and able to represent transaction flows through different economic activities. The monetary flows of the input–output framework are analogous to the AMC's transition states, which represent the probability of a resource moving between sectors.
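The idea of counting expected passes through each stage can be illustrated with a toy absorbing Markov chain. The stages and transition probabilities below are assumed for illustration only, not taken from Duchin and Levine (2010); the fundamental matrix \(N = (I - Q)^{-1}\) of an AMC gives the expected number of visits to each transient stage.

```python
import numpy as np

# Transition probabilities among transient stages (illustrative values):
# rows/cols: [extraction, manufacturing, use]. The remaining probability
# in each row flows to an absorbing state (e.g. landfill).
Q = np.array([
    [0.0, 0.9, 0.0],
    [0.0, 0.0, 0.8],
    [0.0, 0.3, 0.0],   # 30% of used material is recycled back to manufacturing
])

# Fundamental matrix of the absorbing Markov chain: N[i, j] is the expected
# number of visits to stage j for a unit of resource starting in stage i.
N = np.linalg.inv(np.eye(3) - Q)

# Expected passes through manufacturing for material entering at extraction;
# recycling raises this above the single pass of a purely linear chain.
passes = N[0, 1]
```

In this toy system the recycling loop between use and manufacturing makes the expected number of manufacturing passes exceed the 0.9 that a linear, no-recycling chain would give.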
A key study evaluating AMC attributes is that of Eckelman et al. (2012), who argued that the AMC approach starts from resource extraction and follows a downstream perspective, instead of the upstream, consumption-based perspective of a traditional EEIOA framework.
In follow-up research, Duchin and Levine (2013) integrated the AMC into a linear programming model that distinguishes key sections of a resource-specific network. This integrated model provided detailed insights into the structure of global resource interactions. Furthermore, the model applied multiregional constraints that were adapted to minimise global resource use while satisfying a specified final demand.
In a study investigating the distribution of metals over time along the supply chain, Nakamura et al. (2014) established an IO-based dynamic MFA model that considers open-loop recycling and explicitly takes into account scrap quality and losses at the production stage. This approach was constructed by converting the monetary coefficients of input–output tables into a physical representation for the MFA model. Their work on the MaTrace model was complemented by Takeyama et al.'s (2016) study of alloying steel elements in Japan, which applied the MaTrace framework to demonstrate the potential reduction in the dissipation of alloying elements.
More recently, Pauliuk et al. (2017) developed the dynamic approach in a multiregional context, which was used to determine the regional distribution and losses of steel production throughout multiple lifetime stages. They described their 'MaTrace' model as a supply-driven approach that traces specific materials through the life cycles of multiple products and complements the life cycle perspective; they compared it with other techniques, such as AMC and the Ghosh inverse matrix. The researchers also introduced a material-based circularity indicator by considering the cumulative mass of material present in the system over a certain time interval relative to an ideal reference case.
In an analysis of material use for Japanese household consumption, Shigetomi et al. (2015) decomposed the household final demand into consumption expenditures by householder age bracket. The disaggregated expenditures were used to quantify the material intensity of each household group, which revealed the material hotspots of final demand. The authors identified the major contributors to the material footprint and projected future consumption trends based on a linear regression model. This analysis assumed that future household size will be proportional to the predicted population growth.
Skelton and Allwood (2013) explored the impacts of material efficiency on key steel-using industries by applying a multiregional input–output (MRIO) approach. They focused on an upstream perspective to identify opportunities along the steel supply chain. A diagonalised final demand vector was applied to identify the final destination of steel output from each sector. They assessed the major contributors to the footprint in terms of their potential incentives to implement material efficiency strategies. They measured such incentives in a supply-side approach based on the Ghosh inverse matrix (Miller and Blair 2009), which allows quantifying the effects of changes in value added. The researchers modelled price changes assuming that carbon tax scenarios are implemented. The resulting prices were applied to the system in order to measure the variation in the share of input expenditure going to the steel sector, which expresses each industry's incentive to incorporate material efficiency practices.
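A minimal sketch of the supply-side (Ghosh) mechanics may help here. The flow matrix, outputs and the size of the value-added shock below are illustrative assumptions, not the dataset or sector detail used by Skelton and Allwood (2013); the sketch only shows how a cost increase in one sector propagates downstream through the Ghosh inverse.

```python
import numpy as np

# Illustrative inter-industry flow matrix Z and gross outputs x.
Z = np.array([[10.0, 30.0],
              [20.0,  5.0]])
x = np.array([100.0, 80.0])

# Ghosh (output) coefficients: allocation of each sector's output downstream.
B = Z / x[:, None]          # B[i, j] = Z[i, j] / x[i]

# Ghosh inverse (Miller and Blair 2009).
G = np.linalg.inv(np.eye(2) - B)

# A supply-side cost shock, e.g. a carbon tax raising sector 0's value
# added per unit of output (assumed magnitude), propagates downstream as:
dv = np.array([0.05, 0.0])  # change in value-added coefficients
dp = dv @ G                 # resulting change in sectoral prices
```

The shocked sector's own price rises by more than the initial 0.05 because part of its output cycles back through intermediate use, and the unshocked sector's price rises too via its purchases from the shocked sector.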
Giljum et al. (2015) analysed geographical trade patterns, identifying embedded materials on a bilateral basis. They extended the MRIO model by adding material extraction data, grouped into four broad types: metals, minerals, fossil fuels and biomass. Each category was used to calculate the domestic material consumption and raw material consumption per country. In the same way, Wiedmann et al. (2015) calculated material footprint time series that were used to represent changes in resource productivity at the global level. They presented a multivariate regression analysis across countries to understand the driving forces of national material footprints. A broader perspective was adopted by Tukker et al. (2016), who estimated resource footprints considering the indicator dashboard of resource efficiency, which includes carbon, water, energy and land metrics (EC 2011). The authors correlated each resource footprint with quality-of-life indicators, namely the human development index and the happy planet index, bringing a social dimension to resource efficiency measures.
Synthesis of EEIOA frameworks on the assessment of circularity interventions
In the following section, we synthesise the findings from the literature review in terms of the current application of EEIOA in a circular economy context. To illustrate further developments and best practices of such methods, we consider a simplified waste supply-use analysis (WSUA) based on Rodrigues et al. (2016). Although we found applications of traditional EEIOA and other hybrid models, we use the waste input–output approach because it provides a suitable framework for creating end-of-life scenarios, which usually underpin circular strategies (Kirchherr et al. 2017).
The majority of the studies suggested that WIOA can be applied to effectively measure the resource flows of circularity interventions. In addition, WIOA can benefit from a supply-use approach, which can express the interaction of products and industries at a higher level of detail (Lenzen and Reynolds 2014).
Figure 3 shows a basic waste supply-use table that contains three main parts: the final demand vector (\(y\)), the technology matrix (\(A\)) and the intensity vector (\(b'\)). The \(y\)-vector is subdivided into final consumption of products (\(y^{P}\)) and final waste generation (\(y^{W}\)). The \(A\)-matrix comprises a set of submatrices that account for the direct requirements of products or services (\(P\)), sectors or industries (\(S\)), waste (\(W\)), and waste treatment or recycling sectors (\(T\)). The \(b'\)-vector contains the direct impact coefficients that correspond to the production intensities of the \(S\) and \(T\) sectors (\(e^{S}\) and \(e^{T}\), respectively).

We can assess the effects of incorporating circularity interventions by adjusting final demand and technology coefficients. Several authors applied changes in the \(y\)-vector and \(A\)-matrix to explore scenarios of enhancing waste treatment and recycling activities (for example, Yokoyama et al. 2006; Beylot et al. 2016a, 2018). In many cases, representing these sectors requires the extension of intermediate demand to account explicitly for the specific flows of each circular strategy.
Simplified waste supply-use table. y = final demand vector; A = technology matrix; b′ = intensity vector. P = product or service, S = sector or industry, W = waste, T = waste treatment or recycling activity. \(y^{P}\) elements are monetary values (M.EURO). \(y^{W}\) elements are physical units (tonnes). \(A^{\text{PS}}\) and \(A^{\text{SP}}\) elements are coefficients from monetary units (M.EUR/M.EUR). \(A^{\text{WS}}\) elements are coefficients from physical and monetary units (tonnes/M.EUR). \(A^{\text{TW}}\) and \(A^{\text{WT}}\) elements are coefficients from physical units (tonnes/tonnes). \(A^{\text{PT}}\) elements are coefficients from monetary and physical units (M.EUR/tonnes). \(e^{S}\) elements represent coefficients from physical values, depending on the environmental pressure, and monetary units (e.g. CO2 tonnes/M.EUR). \(e^{T}\) elements represent coefficients from physical values, depending on the environmental pressure, and physical units (e.g. CO2 tonnes/tonnes). Empty cells contain zeros
Considering a reference scenario (\(y, A,b'\)), it is possible to adapt the intermediate flows and final demand coefficients to represent the changes of new circularity actions (\(y^{\text{alt}} ,A^{\text{alt}} ,b'^{\text{alt}}\)). We can then calculate the embodied impacts of the reference scenario (\(m\)) and the alternative circular strategy (\(m^{\text{alt}}\)) through a traditional Leontief inverse (Miller and Blair 2009), as shown in Eqs. (1) and (2):
$$m = b'\left( {I - A} \right)^{ - 1} y;$$
$$m^{\text{alt}} = b'^{\text{alt}} \left( {I - A^{\text{alt}} } \right)^{ - 1} y^{\text{alt}} .$$
The net effect of circularity interventions (\(\Delta m\)) can be quantified as the difference between \(m\) and \(m^{\text{alt}}\) (see Eq. 3). This net impact represents a measure of the potential effect of a specific circularity scenario. For example, if we analyse the implications of a certain circularity action on the carbon footprint and the net effect is positive (i.e. \(\Delta m > 0\)), the alternative circularity scenario has a lower embodied carbon impact than the reference scenario. Such avoided impact from the application of a circularity intervention could be used as a point of comparison between different scenarios.
$$\Delta m = \left( {m - m^{\text{alt}} } \right).$$
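Equations (1)–(3) can be illustrated with a small numerical sketch. The technology matrices, intensities and final demands below are toy values chosen for illustration, not calibrated data; the alternative scenario simply assumes slightly lower input coefficients and final demand.

```python
import numpy as np

# Toy reference scenario (all numbers illustrative).
A = np.array([[0.2, 0.3],
              [0.1, 0.1]])        # technology matrix
b = np.array([0.5, 1.2])          # impact intensities (e.g. t CO2 per unit output)
y = np.array([100.0, 50.0])       # final demand

# Alternative circularity scenario: adjusted coefficients and demand (assumed).
A_alt = np.array([[0.18, 0.3],
                  [0.1,  0.08]])
b_alt = b.copy()
y_alt = np.array([100.0, 45.0])

# Eqs. (1)-(2): embodied impacts via the Leontief inverse.
m     = b     @ np.linalg.inv(np.eye(2) - A)     @ y
m_alt = b_alt @ np.linalg.inv(np.eye(2) - A_alt) @ y_alt

# Eq. (3): net effect; a positive value indicates avoided impact.
dm = m - m_alt
```

With these assumed values the alternative scenario yields a positive \(\Delta m\), i.e. the circularity intervention avoids part of the embodied impact of the reference scenario.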
We can synthesise the lessons from the literature to determine the best practices for constructing an alternative final demand (\(y^{\text{alt}}\)), technology matrix (\(A^{\text{alt}}\)) and intensity vector (\(b^{{ ' {\text{alt}}}}\)) that capture the effects of each circularity intervention. Based on the literature review, we then deduce the causality sequence of adapting scenarios for residual waste management, closing supply chains, product lifetime extension, and resource efficiency. The following sub-sections can be used as a reference point for analysing specific scenarios of the circularity transition.
We now focus on the description of primary and secondary sequences for each circularity action. The primary sequence refers to the first element of an EEIOA that is adapted in order to represent the implementation of a circularity intervention. Following a causality chain, the secondary sequence denotes the first order of indirect impacts in response to the primary stimulus. We schematise such sequences in order to demonstrate the adjustment of waste supply-use tables for modelling each circularity alternative. Figure 4 indicates causal links as follows: primary sequence (green square, solid line border '–'), secondary sequence (red square, dashed line border '‐‐‐'); the up arrow ('↑') represents a relative increase of the technical coefficients in the A-matrix, the down arrow ('↓') indicates a relative reduction of the technical coefficients in the A-matrix, and the up-down arrows ('↑↓') represent sequences in which technical coefficients can increase or decrease in different sectors or industries due to the same causal link. As in Nakamura and Kondo (2002), the A-matrix might contain negative values that reflect the causality sequence of waste flows through economic activities. For instance, the inputs of recycling activities can be expressed as negative inputs of the treatment sectors that would be required if recycling processes were not available (Nakamura and Kondo 2002).
Modelling causality sequence of a residual waste management, b closing supply chains, c product lifetime extension, and d resource efficiency. y = final demand vector, A = technology matrix, b′ = intensity vector. P = product or service, S = sector or industry, W = waste, T = waste treatment or recycling activity. Green square with solid line border ('―') indicates primary sequence, and red square with dash line border ('‐‐‐') represents secondary sequence. '↑' indicates a relative increase in A-matrix coefficients, '↓' indicates a relative decrease in A-matrix coefficients, '↑↓' indicates a simultaneous change in different sectors or industries caused by the same causal link
Modelling residual waste management
Residual waste management can be modelled by adjusting the amounts of waste treated by specific waste treatment sectors. Several authors created new waste treatment sectors with improved technology (for example, Nakamura and Kondo 2009; Liao et al. 2015; Beylot et al. 2016b), which could be added to a waste supply-use table. These activities would require augmenting their inputs from the rest of the economy in order to process the quantity of waste established in a specific circularity scenario (Yokoyama et al. 2006).
Figure 4a shows the causality sequence of changing the A-matrix for waste reduction scenarios. As the primary sequence, waste materials need to be absorbed by waste treatment sectors (↑ in \(A^{\text{TW}}\) elements). A secondary effect of such action is an increase in the direct requirements of waste treatment sectors in order to satisfy the new intermediate demand (↑ in \(A^{\text{PT}}\) coefficients). As a consequence of rising production, waste disposal from waste treatment activities and their suppliers is expected to increase (↑ in \(A^{\text{TW}}\) and \(A^{\text{WS}}\) elements). This sequence appears to create an ongoing loop in which absorbing waste leads to increased waste disposal in order to process the new residuals. However, disposal would need to be constrained by the processing capacity of waste treatment sectors. In our present framework, we do not address how capacity constraints should be modelled explicitly; nevertheless, we consider this an important aspect for future studies.
It is important to note that, in some cases, the causality sequence cannot be represented by changes in the A-matrix block alone. For example, increasing \(A^{\text{TW}}\) coefficients might not lead directly to an increment of \(A^{\text{PT}}\) coefficients. Instead, a secondary sequence can be observed through changes in the intermediate demand block of waste treatment inputs.
Modelling closing supply chains
Closing supply chains can be modelled by changing the input and output coefficients of closed-loop activities, such as reuse and recycling sectors. These sectors can be represented as new end-of-life systems that use waste outputs from industries as inputs to generate a usable product for the economy (Nakamura and Kondo 2006; Chen and Ma 2015). In many cases, such new activities would be added to the EEIOA in order to model the recycling of specific materials (for example, Ferrer and Ayres 2000; Choi et al. 2011; Reynolds et al. 2015).
A common assumption is that closing supply chains reduces the extraction of virgin materials as they are replaced by secondary circular flows (Ferrer and Ayres 2000). This substitution approach can be modelled by replacing specific commodities in the use matrix of industries with secondary materials, components, etc. (i.e. ↑↓ in \(A^{\text{PS}}\) coefficients).
Figure 4b presents the causality sequence of closing supply chain scenarios. The primary sequence of closed-loop strategies implies adapting the use matrix of a specific industry. Assuming that industry (S) replaces a primary product (\(P^{{\prime }}\)) with a secondary material from a recycling activity (\(P^{{\prime \prime }}\)), the coefficients of the \(A^{\text{PS}}\)-matrix decrease for the virgin materials (↓ for \(a^{{{\text{P}}{\prime }{\text{s}}}}\)). Likewise, the direct requirements of S rise for the input of secondary goods (↑ for \(a^{{{\text{P}}{\prime \prime }{\text{s}}}}\)). A proportional exchange between \(P^{{\prime }}\) and \(P^{{\prime \prime }}\) can be expressed in monetary terms, if the prices of both products are fixed, as well as by direct substitution in physical units (Ferrer and Ayres 2000; Ferrão et al. 2014). In the secondary sequence, we observe the adjustment of the waste fractions treated by waste treatment industries (↑↓ in \(A^{\text{TW}}\) elements). This effect arises because replacing \(P^{{\prime }}\) with \(P^{{\prime \prime }}\) could also change the waste generated by industry S, and hence the direct requirements of the waste treatment sectors that dispose of the new waste fractions.
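The primary sequence of this substitution can be sketched for a single use-matrix column. The coefficient values and the 30% substitution rate below are assumed for illustration; with fixed prices, the monetary coefficient removed from the primary input is added one-for-one to the secondary input (cf. Ferrer and Ayres 2000).

```python
import numpy as np

# Use-matrix column for industry S (direct requirement coefficients);
# rows: [primary material P', secondary material P'', other inputs].
# Values are illustrative.
a_S = np.array([0.40, 0.00, 0.25])

# Assumed substitution rate: share of the primary input replaced by the
# secondary circular flow.
s = 0.3
a_S_alt = a_S.copy()
a_S_alt[0] -= s * a_S[0]   # down-arrow: primary input coefficient a^{P'S}
a_S_alt[1] += s * a_S[0]   # up-arrow: secondary input coefficient a^{P''S}
```

Under the fixed-price assumption the column total is unchanged: the industry buys the same value of material inputs, but sourced partly from the recycling activity.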
Modelling product lifetime extension
The scenarios of extending product lifetime can be modelled by combining an adjusted final demand with adjusted input coefficients in production sectors, possibly together with a higher input from maintenance activities. In general, it is expected that extending product lifetime would decrease the quantity of goods consumed by final demand (Kagawa et al. 2009; Nishijima 2017). Therefore, a primary effect of prolonging product lifetime would be a reduction in final consumption of a certain product (\(y_{i}^{P}\)).
Figure 4c illustrates the causality chain of product lifetime extension. Assuming that a product \(i\) is designed to maximise its durability, the demand for this good would be expected to decrease (↓ for \(y_{i}^{P}\)). Although this effect might improve environmental performance by reducing the consumption of product \(i\), the potential economic savings could be spent on other goods or services, thus causing a rebound effect (Zink and Geyer 2017).
A possible approach to account for these rebound effects is proposed by Takase et al. (2005). They suggested that the total expenditure of the new final demand (\(x^{P}\)) would remain the same as total consumption in the reference scenario (\(x\)). By applying their assumption, we can distribute the leftover budget proportionally over the rest of the goods and thereby include a quick estimate of the rebound effect in the alternative final demand (\(y^{P^*}\)), as shown in Eq. (4):
$$y^{P^{*}} = y^{P} \left( \frac{x}{x^{P}} \right).$$
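As a numerical illustration of Eq. (4), the following sketch reduces the demand for one product and rescales the whole final demand vector so that total expenditure matches the reference scenario. The budget figures and the 20% demand reduction are assumed for illustration only.

```python
import numpy as np

# Hypothetical final demand in monetary units (three product groups)
y_ref = np.array([30.0, 50.0, 20.0])   # reference final demand

# Lifetime extension: demand for product 0 drops by 20% (assumed figure)
y_P = y_ref.copy()
y_P[0] *= 0.8

# Eq. (4): rescale so total expenditure stays at the reference level,
# spreading the savings proportionally over all products (rebound effect)
x = y_ref.sum()        # total expenditure in the reference scenario
x_P = y_P.sum()        # total expenditure after the demand reduction
y_star = y_P * (x / x_P)
```

After rescaling, the durable product's demand is still below its reference value, while spending on the other categories rises, so the total budget is preserved as Takase et al. (2005) assume.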
As a secondary effect, extending product lifetime could require adjusting the production recipe, which leads to changes in the input requirements of industries (Bakker et al. 2014; den Hollander et al. 2017b). After all, consumers have only limited opportunities to prolong a product's lifetime when the product design is unchanged.
Depending on the product design, some industries might need to increase their material inputs in order to manufacture a more durable product (Murray et al. 2015). This operational adjustment is expressed in Fig. 4c by the simultaneous increase and reduction of technology matrix coefficients (in \(A^{\text{PS}}\)). For example, if a change of the production recipe for obtaining a durable good requires reducing the input of commodity \(i\) and increasing the input of product \(k\), then we can model such adjustments in the \(A^{\text{PS}}\)-matrix (↑ for \(a_{k,j}^{\text{PS}}\) and ↓ for \(a_{i,j}^{\text{PS}}\)).
Modelling resource efficiency
In comparison with the previous interventions, resource efficiency is the least studied circularity action from an EEIOA perspective (see Fig. 2), yet it may be one of the most interesting in terms of future development of the EEIOA method. We found that studies related to resource efficiency mostly focus on the calculation of resource footprints as aggregated values (for example, Giljum et al. 2015; Wiedmann et al. 2015; Tukker et al. 2016). However, a resource footprint by itself does not capture whether resource efficiency policies would reduce the extraction of materials from the environment or contribute to minimising waste disposal. For assessing the impacts of resource efficiency measures, we can model the effects of such an intervention by lowering input coefficients while keeping output constant.
Figure 4d presents the causal links of resource efficiency actions. In terms of the primary sequence, the application of material efficiency can reduce the input requirements of the economic activities in which the intervention is implemented (↓ in \(A^{\text{PS}}\) coefficients). In a similar sequence as in modelling closing supply chains (see Sect. 4.2), a secondary implication of changes in \(A^{\text{PS}}\) can be expected in the operational changes of waste treatment, in which the technical coefficients of waste treatment sectors adapt in response to variations in waste disposal (↑↓ in \(A^{\text{TW}}\) elements). To compare different scenarios, it is important to use an accounting system in which the \(A^{\text{PS}}\)-matrix is expressed in physical terms, because the use of monetary units as a proxy can misrepresent physical reality (Dietzenbacher 2005).
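A minimal sketch of this primary sequence, under assumed physical coefficients and extraction intensities, lowers one input coefficient while final demand stays constant and compares the resulting extraction footprints:

```python
import numpy as np

# Hypothetical two-sector physical technology matrix
# (sector 0: material supply, sector 1: manufacturing)
A = np.array([
    [0.00, 0.25],   # material input per unit of manufacturing output
    [0.05, 0.00],
])
y = np.array([0.0, 100.0])       # final demand (physical units)
e = np.array([2.0, 0.1])         # assumed extraction per unit output

def extraction_footprint(A, y, e):
    """Footprint = e' (I - A)^{-1} y."""
    x = np.linalg.solve(np.eye(A.shape[0]) - A, y)
    return float(e @ x)

f_ref = extraction_footprint(A, y, e)

# Resource efficiency: manufacturing needs 20% less material per unit
# of output at the same final demand (a lowered A^PS coefficient)
A_eff = A.copy()
A_eff[0, 1] *= 0.8

f_eff = extraction_footprint(A_eff, y, e)
# The extraction footprint of the same final demand decreases.
```

Because output is held constant while the input coefficient falls, the footprint reduction here stems entirely from the efficiency gain, which is the comparison the text describes.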
In this review, our purpose was to critically evaluate the current application of EEIOA to the assessment of circularity interventions. We now focus on the main contributions and limitations of EEIOA in order to suggest a possible direction for the development of the method in the assessment of circular strategies.
From the reviewed studies, we found common agreement that the assessment of circularity can benefit from the development of EEIOA models in which end-of-life scenarios are integrated. Such models usually comprise hybrid units in which secondary and waste flows can be considered (for example, Nakamura and Kondo 2009; Lenzen and Reynolds 2014). In addition, identifying these flows at the multiregional scale has led to a better understanding of the impacts of international trade on resource and waste footprints in specific countries (as in Duchin and Levine 2013; Wiedmann et al. 2015; Fry et al. 2016; Tukker et al. 2016; Tisserant et al. 2017).
On the other hand, we observed that a major aspect to develop is the representation of flows as economic transactions. The monetary values of input–output tables cannot effectively address the allocation of resource flows, because the monetary value per physical unit can differ significantly across supply chains (Weisz and Duchin 2006). This variation is caused by the assumption of an average price for materials with diverse physical properties and qualities (Tukker et al. 2016).
Price variation can become a critical factor in EEIOA with high sectoral and product aggregation (Wiedmann et al. 2015), and is likely to be a limitation for adequately tracing specific resource flows. For instance, if we assessed the recycling and reuse flows of a specific material such as 'recovered aluminium', input–output tables with a broad classification of materials and industries (e.g. 'metal products' and 'mining sector') would assume that the price per physical unit of 'recovered aluminium' is equivalent to that of the aggregated 'metal products'. This example shows that a highly aggregated EEIOA could in many cases be too limited to model specific material flows.
To avoid this lack of resolution in some EEIOA models, a reasonable approach could be to disaggregate products and sectors into more detailed categories. The new classification may help to monitor specific resource flows in a circular economy model (as shown by Choi et al. 2011; Li et al. 2013). However, disaggregating sectors in EEIOA presents a challenge in itself, because sectoral data may not be available at the required level of detail. This is particularly the case in waste input–output frameworks, for which many studies report a limited dataset to split and link waste treatment sectors to the rest of the economy (Salemdeeb et al. 2016).
According to the studies, the lack of data sets on waste and material recovery can be an issue in terms of waste valuation. Several authors recognised a deficiency in accounting for the economic value of waste, as this value can be low or absent in the EEIOA model (Nakamura 1999a; Liao et al. 2015). The lack of economic valuation renders input–output accounts incomplete and, in some cases, leads to an underestimation of the embodied waste generated by final demand. For example, in the study of the Australian waste footprint by Fry et al. (2016), waste flows related to overseas production could not be considered due to the lack of waste values in other regions. This led to an underestimation of the waste footprint resulting from Australian consumption by at least 1.5 million tonnes.
Underestimating waste generation may be caused by three aspects (Tisserant et al. 2017). First, some waste treatment sectors might not be included in the EEIOA model. Second, a standard EEIOA does not consider informal or illegal activities that could affect the estimation of waste footprint. Finally, EEIOA might not capture some of the flows that are not linked to monetary or physical transactions between sectors (i.e. direct reuse flows). In general, these aspects have an impact on the quality of waste data availability in many countries, which can be a significant source of uncertainty.
To address the lack of sector-specific data, proxies can be estimated to integrate the values of circular strategies into the EEIOA framework. For instance, to identify the patterns of industrial waste disposal, Reynolds et al. (2016b) assumed that the shares of waste generation in New Zealand followed the same trend as in other developed economies (e.g. the UK and Australia) and used this as a proxy for the estimation of waste generation. In many cases, this type of assumption introduces uncertainties that may affect the reliability of the analysis (Ohno et al. 2016). Although the importance of uncertainties is recognised in the literature (Wiedmann 2009), most of the reviewed studies mention the level of uncertainty without addressing it in much detail, which raises a recurrent issue about data reliability when analysing circular economy interventions with EEIOA.
In terms of modelling circularity scenarios, EEIOA may be of limited use when assessing environmental implications in the future (de Koning 2018). For example, because the technical coefficients of a circular economy scenario are fixed, EEIOA cannot capture the effects of volumes on prices, nor the effects of prices on the use of certain products. Without additional model components (see, for example, Gibon et al. 2015), EEIOA also has limited ability to represent future energy systems whose environmental impacts differ from the current way of production. Moreover, there is no direct feedback effect from nature to the economy in standard EEIOA, which restricts the assessment of different circularity gains.
This article presented a review of EEIOA-based studies that assessed the economic and environmental implications of residual waste management, closing supply chains, product lifetime extension, and resource efficiency interventions. We evaluated the selected articles based on their methodological characteristics in order to synthesise the main EEIOA-based frameworks used to analyse each circularity intervention. Furthermore, our results provide a point of reference for modelling future circular strategies at the macro-scale by applying EEIOA.
By considering a simplified waste supply-use model, we explained the causality sequence of modelling circularity interventions. For residual waste management, a waste treatment action can be modelled by augmenting the values of waste absorbed by a certain waste treatment sector, which in turn requires more inputs from the rest of the economy in order to process the new amount of waste disposal.
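This causality can be sketched with a hypothetical allocation of one waste flow over two treatment sectors; the shares and input intensities below are assumptions for illustration only, not data from the reviewed studies.

```python
import numpy as np

# Shares of a waste flow allocated to two treatment sectors
# (landfill, recycling); each scenario's shares sum to 1.
alloc_ref = np.array([0.7, 0.3])    # reference scenario
alloc_pol = np.array([0.4, 0.6])    # policy scenario: more recycling

waste = 1000.0                      # tonnes of waste generated

# Assumed monetary inputs each treatment sector needs per tonne treated
inputs_per_tonne = np.array([5.0, 20.0])

treated_ref = alloc_ref * waste
treated_pol = alloc_pol * waste

# Augmenting the waste absorbed by recycling raises the inputs that
# treatment sectors require from the rest of the economy.
demand_ref = float(inputs_per_tonne @ treated_ref)
demand_pol = float(inputs_per_tonne @ treated_pol)
```

Under these assumed intensities, shifting waste towards the more input-intensive recycling sector increases the treatment sectors' total requirements from the rest of the economy, mirroring the secondary effect described above.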
Closing supply chains can be assessed by adjusting input and output coefficients for industries that adopt closed-loop strategies, which involve the replacement of virgin materials with secondary circular flows. In addition, these interventions require specifying new sectors in the EEIOA model if the circular activities are not explicitly represented.
Product lifetime extension can be modelled by adapting the final demand coefficients to reflect an expected reduction in final consumption. However, it is important to consider a potential rebound effect of prolonging product lifetime, caused by spending the savings on final demand on other product or service categories. Furthermore, modelling product lifetime extension might involve accounting for potential changes in the production recipe of durable goods.
Resource efficiency interventions can be analysed by reducing input coefficients while maintaining the output. Such an action could minimise the input requirements of the economic activities in which the intervention is applied, and it can be used to model the structural changes in a technology matrix caused by resource efficiency strategies.
We observe that the development of waste input–output analysis (WIOA) will dominate the assessment of the circularity transition, because it is the most suitable framework to link the flows of waste with the rest of the economy in an EEIOA system. However, WIOA is constrained by the monetary flows in EEIOA (Nakamura and Kondo 2009), which can be considered a major limitation for the analysis of circular strategies, especially in the case of residual waste management, due to the lack of economic valuation of waste. This challenge can be avoided in future applications by physical and hybrid tables, which can be used to analyse the potential impacts of material efficiency and secondary flows more accurately (Tisserant et al. 2017).
The recent development of hybrid-unit input–output and supply-use tables, in which tangible products and waste types are expressed in physical units (i.e. mass) and service sectors in monetary units (for example, Merciai and Schmidt 2018), will advance the modelling of circularity interventions in a consistent framework. In addition, detailed sectoral data could enable the assessment of circular strategies such as re-use, remanufacturing, and refurbishment (Ellen MacArthur Foundation 2013). Combining both aspects, hybrid tables and detailed production data, would allow an improvement of current EEIOA models for assessing the economic and environmental implications of a circularity transition.
EEIOA:
environmentally extended input–output analysis
RWM:
residual waste management
CSC:
closing supply chains
PLE:
product lifetime extension
MR EE IOA:
multiregional environmentally extended input–output analysis
IOA:
input–output analysis
MFA:
material flow analysis
WIOA:
waste input–output analysis
WIO-MFA:
waste input–output material flow analysis
LCA:
life cycle assessment
IO-LCA:
hybrid input–output life cycle assessment
IO-LCC:
hybrid input–output life cycle costing analysis
IOTs:
input–output tables
SUTs:
supply-use tables
WSUA:
waste supply-use analysis
WIO-LCA:
waste input–output life cycle assessment
3R's:
recycling, reuse, reduce
WIO-LP:
waste input–output linear programming analysis
SDA:
structural decomposition analysis
AMC:
absorbing Markov chain model
MRIO:
multiregional input–output model
S:
sector or industry
T:
waste treatment sector
Andersen MS (2007) An introductory note on the environmental economics of the circular economy. Sustain Sci 2:133–140. https://doi.org/10.1007/s11625-006-0013-6
Aye L, Ngo T, Crawford RH et al (2012) Life cycle greenhouse gas emissions and energy analysis of prefabricated reusable building modules. Energy Build 47:159–168. https://doi.org/10.1016/j.enbuild.2011.11.049
Bakker C, Wang F, Huisman J, den Hollander M (2014) Products that go round: exploring product life extension through design. J Clean Prod 69:10–16. https://doi.org/10.1016/j.jclepro.2014.01.028
Bastein T, Roelofs E, Rietveld E, Hoogendoorn A (2013) Opportunities for a Circular Economy in the Netherlands
Beylot A, Boitier B, Lancesseur N, Villeneuve J (2016a) A consumption approach to wastes from economic activities. Waste Manag 49:505–515. https://doi.org/10.1016/j.wasman.2016.01.023
Beylot A, Vaxelaire S, Villeneuve J (2016b) Reducing gaseous emissions and resource consumption embodied in french final demand: how much can waste policies contribute? J Ind Ecol 20:905–916. https://doi.org/10.1111/jiec.12318
Beylot A, Boitier B, Lancesseur N, Villeneuve J (2018) The Waste Footprint of French Households in 2020. J Ind Ecol 22:356–368. https://doi.org/10.1111/jiec.12566
Bocken NMP, de Pauw I, Bakker C, van der Grinten B (2016) Product design and business model strategies for a circular economy. J Ind Prod Eng 33:308–320. https://doi.org/10.1080/21681015.2016.1172124
Bocken NMP, Ritala P, Huotari P (2017) The circular economy: exploring the introduction of the concept among S&P 500 firms. J Ind Ecol. https://doi.org/10.1111/jiec.12605
Chen P-C, Ma H (2015) Using an industrial waste account to facilitate national level industrial symbioses by uncovering the waste exchange potential. J Ind Ecol 19:950–962. https://doi.org/10.1111/jiec.12236
Choi T, Jackson RW, Green Leigh N, Jensen CD (2011) A baseline input–output model with environmental accounts (IOEA) applied to E-waste recycling. Int Reg Sci Rev 34:3–33. https://doi.org/10.1177/0160017610385453
Court CD, Munday M, Roberts A, Turner K (2015) Can hazardous waste supply chain "hotspots" be identified using an input-output framework? Eur J Oper Res 241:177–187. https://doi.org/10.1016/j.ejor.2014.08.011
de Koning A (2018) Creating global scenarios of environmental impacts with structural economic models. Leiden University, Leiden
den Hollander MC, Bakker CA, Hultink EJ (2017a) Product design in a circular economy: development of a typology of key concepts and terms. J Ind Ecol. https://doi.org/10.1111/jiec.12610
den Hollander MC, Bakker CA, Hultink EJ (2017b) Product design in a circular economy: development of a typology of key concepts and terms. J Ind Ecol 21:517–525. https://doi.org/10.1111/jiec.12610
Dietzenbacher E (2005) Waste treatment in physical input–output analysis. Ecol Econ 55:11–23. https://doi.org/10.1016/j.ecolecon.2005.04.009
Duchin F (1990) The conversion of biological materials and wastes to useful products. Struct Chang Econ Dyn 1:243–261. https://doi.org/10.1016/0954-349X(90)90004-R
Duchin F (1992) Industrial input-output analysis: implications for industrial ecology. Proc Natl Acad Sci 89:851–855
Duchin F, Levine SH (2010) Embodied resource flows and product flows: combining the absorbing markov chain with the input-output model. J Ind Ecol 14:586–597. https://doi.org/10.1111/j.1530-9290.2010.00258.x
Duchin F, Levine SH (2013) Embodied resource flows in a global economy: an approach for identifying the critical links. J Ind Ecol 17:65–78. https://doi.org/10.1111/j.1530-9290.2012.00498.x
EC (2011) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Roadmap to a Resource Efficient Europe
EC (2015) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: closing the loop—An EU action plan for the Circular Economy. Brussels
EC (2018) Circular economy package. Four legislative proposals on waste
Eckelman MJ, Reck BK, Graedel TE (2012) Exploring the global journey of Nickel with Markov chain models. J Ind Ecol 16:334–342. https://doi.org/10.1111/j.1530-9290.2011.00425.x
EEA (2016) Circular economy in Europe—developing the knowledge base. Luxembourg
Elia V, Gnoni MG, Tornese F (2017) Measuring circular economy strategies through index methods: a critical analysis. J Clean Prod 142:2741–2751. https://doi.org/10.1016/j.jclepro.2016.10.196
Ellen MacArthur Foundation (2013) Towards the circular economy. Technical report. https://www.ellenmacarthurfoundation.org/publications/towards-the-circular-economy-vol-1-an-economic-and-business-rationale-for-an-accelerated-transition
Ferrão P, Ribeiro P, Rodrigues J et al (2014) Environmental, economic and social costs and benefits of a packaging waste management system: a Portuguese case study. Resour Conserv Recycl 85:67–78. https://doi.org/10.1016/j.resconrec.2013.10.020
Ferrer G, Ayres RU (2000) The impact of remanufacturing in the economy. Ecol Econ 32:413–429. https://doi.org/10.1016/S0921-8009(99)00110-X
Fry J, Lenzen M, Giurco D, Pauliuk S (2016) An Australian multi-regional waste supply-use framework. J Ind Ecol 20:1295–1305. https://doi.org/10.1111/jiec.12376
Geissdoerfer M, Savaget P, Bocken NMP, Hultink EJ (2017) The circular economy—a new sustainability paradigm? J Clean Prod. https://doi.org/10.1016/j.jclepro.2016.12.048
Geng Y, Fu J, Sarkis J, Xue B (2012) Towards a national circular economy indicator system in China: an evaluation and critical analysis. J Clean Prod 23:216–224. https://doi.org/10.1016/j.jclepro.2011.07.005
Geng Y, Sarkis J, Ulgiati S (2016) Sustainability, wellbeing, and the circular economy in China and worldwide. Science 80:76–79
Ghisellini P, Cialani C, Ulgiati S (2016) A review on circular economy: the expected transition to a balanced interplay of environmental and economic systems. J Clean Prod 114:11–32. https://doi.org/10.1016/j.jclepro.2015.09.007
Gibon T, Wood R, Arvesen A et al (2015) A methodology for integrated, multiregional life cycle assessment scenarios under large-scale technological change. Environ Sci Technol 49:11218–11226. https://doi.org/10.1021/acs.est.5b01558
Giljum S, Bruckner M, Martinez A (2015) Material footprint assessment in a global input–output framework. J Ind Ecol 19:792–804. https://doi.org/10.1111/jiec.12214
Huang GH, Anderson WP, Baetz BW (1994) Environmental input–output analysis and its application to regional solid-waste management planning. J Environ Manag 42:63–79
Iacovidou E, Velis CA, Purnell P et al (2017) Metrics for optimising the multi-dimensional value of resources recovered from waste in a circular economy: a critical review. J Clean Prod 166:910–938. https://doi.org/10.1016/j.jclepro.2017.07.100
Jensen CD, Mcintyre S, Munday M, Turner K (2013) Responsibility for regional waste generation: a single-region extended input–output analysis for wales. Reg Stud 47:913–933. https://doi.org/10.1080/00343404.2011.599797
Kagawa S, Kudoh Y, Nansai K, Tasaki T (2008) The economic and environmental consequences of automobile lifetime extension and fuel economy improvement: Japan's case. Econ Syst Res 20:3–28. https://doi.org/10.1080/09535310801890615
Kagawa S, Nansai K, Kudoh Y (2009) Does product lifetime extension increase our income at the expense of energy consumption? Energy Econ 31:197–210. https://doi.org/10.1016/j.eneco.2008.08.011
Kagawa S, Nakamura S, Kondo Y et al (2015) Forecasting replacement demand of durable goods and the induced secondary material flows: a case study of automobiles. J Ind Ecol 19:10–19. https://doi.org/10.1111/jiec.12184
Kirchherr J, Reike D, Hekkert M (2017) Conceptualizing the circular economy: an analysis of 114 definitions. Resour Conser Recycl. https://doi.org/10.1016/j.resconrec.2017.09.005
Kondo Y, Nakamura S (2004) Evaluating alternative life-cycle strategies for electrical appliances by the waste input–output model. Int J Life Cycle Assess 9:236–246. https://doi.org/10.1007/BF02978599
Kondo Y, Nakamura S (2005) Waste input–output linear programming model with its application to eco-efficiency analysis. Econ Syst Res 17:393–408. https://doi.org/10.1080/09535310500283526
Lenzen M, Reynolds CJ (2014) A supply-use approach to waste input–output analysis. J Ind Ecol 18:212–226. https://doi.org/10.1111/jiec.12105
Lenzen M, Rueda-Cantuche JM (2012) A note on the use of supply-use tables in impact analyses. Sort 36:139–152
Lenzen M, Geschke A, Wiedmann T et al (2014) Compiling and using input-output frameworks through collaborative virtual laboratories. Sci Total Environ 485–486:241–251. https://doi.org/10.1016/j.scitotenv.2014.03.062
Li J, Lin C, Huang SA (2013) Considering variations in waste composition during waste input–output modeling. J Ind Ecol 17:892–899. https://doi.org/10.1111/jiec.12068
Liao MI, Chen PC, Ma HW, Nakamura S (2015) Identification of the driving force of waste generation using a high-resolution waste input-output table. J Clean Prod 94:294–303. https://doi.org/10.1016/j.jclepro.2015.02.002
Lieder M, Rashid A (2016a) Towards circular economy implementation: a comprehensive review in context of manufacturing industry. J Clean Prod 115:36–51. https://doi.org/10.1016/j.jclepro.2015.12.042
Lieder M, Rashid A (2016b) Towards circular economy implementation: a comprehensive review in context of manufacturing industry. J Clean Prod 115:36–51. https://doi.org/10.1016/j.jclepro.2015.12.042
Lin C (2011) Identifying lowest-emission choices and environmental Pareto frontiers for wastewater treatment: wastewater treatment input–output model based linear programming. J Ind Ecol 15:367–380. https://doi.org/10.1111/j.1530-9290.2011.00339.x
Linder M, Sarasini S, van Loon P (2017) A metric for quantifying product-level circularity. J Ind Ecol. https://doi.org/10.1111/jiec.12552
McDowall W, Geng Y, Huang B et al (2017) Circular economy policies in China and Europe. J Ind Ecol 21:651–661. https://doi.org/10.1111/jiec.12597
McKinsey & Company (2016) The circular economy: moving from theory to practice
Merciai S, Schmidt J (2018) Methodology for the construction of global multi-regional hybrid supply and use tables for the EXIOBASE v3 database. J Ind Ecol. https://doi.org/10.1111/jiec.12713
Miller RE, Blair PD (2009) Input–output analysis: foundations and extensions, 2nd edn. Cambridge University Press, New York
Murray A, Skene K, Haynes K (2015) The circular economy: an interdisciplinary exploration of the concept and application in a global context. J Bus Ethics 140:369–380. https://doi.org/10.1007/s10551-015-2693-2
Nakajima K, Ohno H, Kondo Y et al (2013) Simultaneous material flow analysis of nickel, chromium, and molybdenum used in alloy steel by means of input–output analysis. Environ Sci Technol 47:4653–4660. https://doi.org/10.1021/es3043559
Nakamura S (1999a) An interindustry approach to analyzing economic and environmental effects of the recycling of waste. Ecol Econ 28:133–145. https://doi.org/10.1016/S0921-8009(98)00031-7
Nakamura S (1999b) Input–output analysis of waste cycles. In: Proceedings first international symposium on environmentally conscious design and inverse manufacturing. IEEE, pp 475–480
Nakamura S, Kondo Y (2002) Input–output analysis of waste management. J Ind Ecol 6:39–63. https://doi.org/10.1162/108819802320971632
Nakamura S, Kondo Y (2006) A waste input–output life-cycle cost analysis of the recycling of end-of-life electrical home appliances. Ecol Econ 57:494–506. https://doi.org/10.1016/j.ecolecon.2005.05.002
Nakamura S, Kondo Y (2009) Waste input–output analysis: concepts and application to industrial ecology. Springer, Heidelberg
Nakamura S, Nakajima K (2005) Waste input–output material flow analysis of metals in the Japanese economy. Mater Trans 46:2550–2553. https://doi.org/10.2320/matertrans.46.2550
Nakamura S, Nakajima K, Kondo Y, Nagasaka T (2007a) The waste input-output approach to materials flow analysis—concepts and application to base metals. J Ind Ecol 11:50–63. https://doi.org/10.1162/jiec.2007.1290
Nakamura S, Nakajima K, Kondo Y, Nagasaka T (2007b) The waste input–output approach to materials flow analysis. J Ind Ecol 11:50–63. https://doi.org/10.1162/jiec.2007.1290
Nakamura S, Nakajima K, Yoshizawa Y et al (2009) Analyzing polyvinyl chloride in Japan with the waste input–output material flow analysis model. J Ind Ecol 13:706–717. https://doi.org/10.1111/j.1530-9290.2009.00153.x
Nakamura S, Kondo Y, Kagawa S et al (2014) MaTrace: tracing the fate of materials over time and across products in open-loop recycling. Environ Sci Technol 48:7207–7214. https://doi.org/10.1021/es500820h
Nishijima D (2017) The role of technology, product lifetime, and energy efficiency in climate mitigation: a case study of air conditioners in Japan. Energy Pol 104:340–347. https://doi.org/10.1016/j.enpol.2017.01.045
Ohno H, Matsubae K, Nakajima K et al (2014) Unintentional flow of alloying elements in steel during recycling of end-of-life vehicles. J Ind Ecol 18:242–253. https://doi.org/10.1111/jiec.12095
Ohno H, Nuss P, Chen WQ, Graedel TE (2016) Deriving the metal and alloy networks of modern technology. Environ Sci Technol 50:4082–4090. https://doi.org/10.1021/acs.est.5b05093
Ohno H, Matsubae K, Nakajima K et al (2017) Optimal recycling of steel scrap and alloying elements: input–output based linear programming method with its application to end-of-life vehicles in Japan. Environ Sci Technol. https://doi.org/10.1021/acs.est.7b04477
Pauliuk S (2017) Critical appraisal of the circular economy standard BS 8001:2017 and a dashboard of quantitative system indicators for its implementation in organizations. Resour Conserv Recycl 129:81–92. https://doi.org/10.1016/j.resconrec.2017.10.019
Pauliuk S, Kondo Y, Nakamura S, Nakajima K (2017) Regional distribution and losses of end-of-life steel throughout multiple product life cycles—insights from the global multiregional MaTrace model. Resour Conserv Recycl 116:84–93. https://doi.org/10.1016/j.resconrec.2016.09.029
Pearce D, Turner R (1990) Economics of natural resources and the environment. Harvester Wheatsheaf, New York
Peters GP, Hertwich EG (2009) The application of multi-regional input–output analysis to industrial ecology. In: Suh S (ed) Handbook of input–output economics in industrial ecology. Springer, Dordrecht, pp 847–848
Potting J, Hekkert M, Worrell E, Hanemaaijer A (2017) Circular economy: measuring innovation in the product chain. The Hague
Pratt K, Lenaghan M (2015) The carbon impacts of the circular economy. Technical report
Reutter B, Lant P, Lane J et al (2017) Food waste consequences: environmentally extended input-output as a framework for analysis. J Clean Prod 153:506–514. https://doi.org/10.1016/j.jclepro.2016.09.104
Reynolds CJ, Piantadosi J, Boland J (2014) A waste supply-use analysis of australian waste flows. J Econ Struct 3:5. https://doi.org/10.1186/s40008-014-0005-0
\begin{document}
\begin{frontmatter} \title{Compressive Sampling of Polynomial Chaos Expansions: Convergence Analysis and Sampling Strategies} \author{Jerrad Hampton} \author{Alireza Doostan\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding Author: Alireza Doostan}
\address{Aerospace Engineering Sciences Department, University of Colorado, Boulder, CO 80309, USA}
\begin{abstract} Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with high-dimensional random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as {\it coherence}, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an $\ell_1$-minimization problem. {\color{black}Utilizing results} for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under {\color{black}their} respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the {\it coherence-optimal} sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
\end{abstract} \begin{keyword} Compressive Sampling \sep Polynomial Chaos \sep Sparse Approximation \sep $\ell_1$-minimization \sep Markov Chain Monte Carlo \sep Hermite Polynomials \sep Legendre Polynomials \sep Stochastic PDEs \sep Uncertainty Quantification \end{keyword} \end{frontmatter}
\section{Introduction} \label{sec:intro}
A precise approach to analyzing modern, sophisticated engineering systems requires understanding how various Quantities of Interest (QoI) behave as functions of uncertain system inputs. An inadequate understanding may give unfounded confidence in the QoI, or impose needless restrictions on the system inputs due to unwarranted doubt concerning the QoI. This process of Uncertainty Quantification (UQ) has received much recent study~\cite{Ghanem91a,LeMaitre10,Xiu10a}.
Probability is a natural framework for modeling uncertain inputs by assuming the input depends on a $d$-dimensional random vector $\bm{\Xi}:=(\Xi_1,\cdots,\Xi_d)$ with some joint probability density function $f(\bm{\xi})$. In this manner we model the scalar QoI, denoted by $u(\bm{\Xi})$, as an unknown function of the input, which we seek to approximate. In this work we approximate $u(\bm{\Xi})$, assumed to have a finite variance, using an expansion in multivariate orthogonal polynomials, each of which we denote by $\psi_k(\bm{\Xi})$, yielding a Polynomial Chaos (PC) expansion~\cite{Ghanem91a,Xiu02},
\begin{align} \label{Eq:PCEDef} u(\bm{\Xi}) &= \mathop{\sum}\limits_{k=0}^\infty c_k \psi_{k}(\bm{\Xi}),\\ \nonumber &\approx \mathop{\sum}\limits_{k\in\mathcal{C}} c_k \psi_{k}(\bm{\Xi}). \end{align}
Under conditions discussed in Section~\ref{subsec:PCE}, the index set $\mathcal{C}$ may have few elements, allowing us to accurately reconstruct $u$ from a relatively small number of basis polynomials, i.e., there exists a {\it sparse} representation for $u$ as a linear combination of orthogonal polynomials in $\bm{\Xi}$. For computation we truncate the expansion in (\ref{Eq:PCEDef}) so that we have $\bm{c}=(c_1,\cdots,c_P)^T$ and \begin{align} \label{Eq:PCETrunc} u(\bm{\Xi}) &\approx \mathop{\sum}\limits_{k=1}^P c_k \psi_{k}(\bm{\Xi}), \end{align}
where the error introduced by this truncation to a finite number of terms is referred to as {\it truncation error}. The polynomials $\psi_{k}(\bm{\Xi})$ are naturally selected to be orthogonal with respect to the measure $f(\bm\xi)$ of the inputs $\bm\Xi$, \cite{Xiu02,Soize05}. For instance, when $\bm\Xi$ follows a jointly uniform or Gaussian distribution (with independent components), $\psi_{k}(\bm{\Xi})$ are multivariate Legendre or Hermite polynomials, respectively. For the purposes of analysis, we assume that $\psi_{k}(\bm{\Xi})$ are normalized such that $\mathbb{E}[\psi^2_{k}(\bm{\Xi})]=1$, where $\mathbb{E}$ denotes the mathematical expectation operator. If we can accurately identify the coefficients $c_k = \mathbb{E}[u(\bm\Xi) \psi_{k}(\bm\Xi)]$ for our approximation, then as $P\rightarrow\infty$ our PC approximation converges to $u$ in the mean-square sense.
To identify $\bm{c}$ we consider non-intrusive, i.e., sampling-based, methods where we do not require changes to deterministic solvers for $u$ as we generate realizations of $\bm{\Xi}$ to identify $u(\bm{\Xi})$. We denote these realizations $\bm{\xi}^{(i)}$ and $u(\bm{\xi}^{(i)})$, respectively. We let $i=1:N$ so that $N$ is the number of independent samples considered, and define
\begin{align} \label{eqn:psi_u} \bm{u}&:=(u(\bm{\xi}^{(1)}),\cdots,u(\bm{\xi}^{(N)}))^T;\\ \bm{\Psi}(i,j)&:=\psi_{j}(\bm{\xi}^{(i)}).\nonumber \end{align}
These definitions imply the matrix equality $\bm{\Psi}\bm{c}=\bm{u}$. We also introduce a diagonal positive-definite matrix $\bm{W}$ such that $\bm{W}(i,i)$ is a function of $\bm{\xi}^{(i)}$ that depends on our sampling strategy and is described in Sections~\ref{sec:motivation} and~\ref{sec:sampling}. To approximate $\bm{c}$ we use Basis Pursuit Denoising (BPDN), \cite{Chen98,Chen01,Donoho06b,Bruckstein09}. This involves solving either the $\ell_1$-minimization problem \begin{align} \label{eqn:constrained}
\mathop{\arg\min}_{\bm{c}}\|\bm{c}\|_1 \mbox{ subject to } \|\bm{W}\bm{u}-\bm{W}\bm{\Psi}\bm{c}\|_2\le\delta, \end{align} where $\delta$ is a tolerance of solution inaccuracy due to the truncation error, or the closely related
\begin{align} \label{eqn:regularized}
\mathop{\arg\min}_{\bm{c}}\frac{1}{2}\|\bm{W}\bm{u}-\bm{W}\bm{\Psi}\bm{c}\|_2^2 + \lambda\|\bm{c}\|_1, \end{align}
where $\lambda$ is a regularization parameter.
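Many solvers exist for the regularized problem above; as a minimal, hedged sketch (not the solver used in this work, with illustrative function and variable names), iterative soft-thresholding (ISTA) applied to the weighted objective reads:

```python
import numpy as np

def ista_weighted_bpdn(Psi, u, W, lam, iters=10000):
    """ISTA for min_c 0.5*||W u - W Psi c||_2^2 + lam*||c||_1."""
    A, b = W @ Psi, W @ u
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
    c = np.zeros(Psi.shape[1])
    for _ in range(iters):
        z = c - step * (A.T @ (A @ c - b))          # gradient step on the quadratic term
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return c
```

In practice one would use an accelerated or specialized BPDN solver; this sketch only illustrates the role of the diagonal weight matrix $\bm{W}$ in the objective.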
The solutions to these problems are closely related to the solution of either \begin{align*}
\mathop{\arg\min}_{\bm{c}}\|\bm{c}\|_0 \mbox{ subject to } \|\bm{W}\bm{u}-\bm{W}\bm{\Psi}\bm{c}\|_2\le\delta, \end{align*} which is similar to (\ref{eqn:constrained}), or the closely related \begin{align*}
\mathop{\arg\min}_{\bm{c}}\frac{1}{2}\|\bm{W}\bm{u}-\bm{W}\bm{\Psi}\bm{c}\|_2^2 + \lambda\|\bm{c}\|_0, \end{align*}
which is similar to (\ref{eqn:regularized}). Here, $\|\bm{c}\|_0=\#(c_k\ne 0)$ is the number of non-zero entries of $\bm c$. Solutions to these problems are of great practical interest for sparse approximation and have received significant study in the field of Compressive Sampling/Compressed Sensing, see, e.g.,~\cite{Candes06a,Donoho06b,Elad10a,Eldar12a}, and more recently in UQ, \cite{Doostan10b,Doostan11a,Blatman11,Mathelin12a,Yan12,Yang13,Karagiannis14,Peng14,Schiavazzi14,Sargsyan14,Jones14a}.
\subsection{Contributions of This Work} \label{subsec:Contribution}
This work is concerned with convergence analysis and sampling strategies to recover a sparse stochastic function in both Hermite and Legendre PC expansions from $\ell_1$-minimization problem (\ref{eqn:constrained}). As an extension of our previous work in \cite{Doostan10b,Doostan11a,Peng14}, the main contributions of this study are three-fold.
Firstly, we {\color{black}utilize properties} of these polynomials, in conjunction with the analysis of sparse function recovery in~\cite{CandesPlan,RauhutWard}, to give a framework which admits a bound on the number of samples sufficient for a successful solution of (\ref{eqn:constrained}). To the best of our knowledge, the Hermite results are the first of their type, and the Legendre recovery bounds, while here obtained from different techniques, are similar to those in~\cite{RauhutWard}.
Secondly, we provide a contribution of particular practical interest in that we analyze sampling Hermite polynomials uniformly over a $d$-dimensional ball -- with a radius depending on the order of approximation -- instead of sampling from the standard Gaussian measure. {\color{black}This sampling arises in a context similar to that of the Chebyshev distribution as a sampling distribution for Legendre polynomials.} Interestingly, as explained in Section~\ref{subsubsec:asymmethod}, this sampling of a Hermite polynomial expansion is analogous to a {\it Hermite function} expansion,~\cite{Szego}, of an appropriately weighted solution of interest. We provide analytic and numeric results justifying the use of this {\it importance sampling} distribution for the recovery of sparse Hermite PC expansions.
Finally, we analytically identify an importance sampling distribution with a statistical {\it optimality}, in terms of the {\it coherence} of the PC basis as a key recovery parameter of the method, and identify a Markov Chain Monte Carlo sampler for which we provide associated numeric results. This approach, here referred to as {\it coherence-optimal} sampling, provides a general sampling scheme for the reconstruction of sparse Hermite and Legendre PC expansions, and may be extended to other types of orthogonal bases.
The motivation to design a sampling strategy based on the coherence is similar to that of~\cite{RauhutWard,Krahmer13}, but utilizing a different pre-conditioning from~\cite{Krahmer13}, considering unbounded bases and asymptotic scenarios, and providing a procedure for generating samples.
The remainder of this work is organized as follows. Section~\ref{sec:ProblemAndSolution} states the problem. Section~\ref{sec:motivation} provides key background information and motivates our approach, while Section~\ref{sec:sampling} describes our sampling methods and provides key theoretical results. Section~\ref{sec:examples} demonstrates the performance of the sampling methods, and Section~\ref{sec:Proofs} presents the proofs of the theorems from Section~\ref{sec:sampling}.
\section{Problem Statement and Solution Approach} \label{sec:ProblemAndSolution}
We first describe the random inputs to the system, letting the random vector $\bm{\Xi}$, defined on the probability space $(\Omega,\mathcal{F},\mathbb{P})$, represent the input uncertainties to the physical problem under consideration. We assume that $(\Omega,\mathcal{F},\mathbb{P})$ is formed by the product of $d$ probability spaces $(\mathbb{R},\mathbb{B}(\mathbb{R}),\mathbb{P}_i)$ associated with each $\Xi_i$, where $\mathbb{B}$ denotes the Borel $\sigma$-algebra. We note that this implies that $\mathcal{F}=\mathbb{B}(\mathbb{R}^d)$, the $d$-dimensional Borel $\sigma$-algebra, and $\Omega=\mathbb{R}^d$. It is further implied that $\mathbb{P}$ is Lebesgue measurable and that the $\Xi_i$ are independent random variables. For convenience, we assume that the $\Xi_i$ are identically distributed with distribution function $f(\xi)$, and abuse this notation by allowing that $\bm{\Xi}$ is distributed according to $f(\bm{\xi})$, noting that the two distributions may be differentiated by the presence of a scalar or vector function argument.
We consider the physical system through which the input uncertainty $\bm{\Xi}$ propagates to be given by operators defined on a bounded Lipschitz continuous domain $\mathcal{D}\subset\mathbb{R}^D$ for $D\in\{1,2,3\}$, with a boundary denoted by $\partial\mathcal{D}$. Letting operators $\mathcal{L},\mathcal{B}$ and $\mathcal{I}$ depend on the physics of the problem being considered, we assume that a solution $u$ satisfies
\begin{align*} \mathcal{L}(\bm{x},t,\bm{\Xi};u(t,\bm{x},\bm{\Xi}))=0 &\qquad\bm{x}\in\mathcal{D},\\ \mathcal{B}(\bm{x},t,\bm{\Xi};u(t,\bm{x},\bm{\Xi}))=0 &\qquad\bm{x}\in\partial\mathcal{D},\\ \mathcal{I}(\bm{x},0,\bm{\Xi};u(0,\bm{x},\bm{\Xi}))=0 &\qquad\bm{x}\in\mathcal{D}. \end{align*}
We note that the problems considered in Section~\ref{sec:examples} depend only on space or time, but the methods considered here are independent of the underlying physical problem. We assume that conditioned on the $i$th independent sample of $\bm{\Xi}$, denoted by $\bm{\xi}^{(i)}$, a numerical solution to this problem may be identified by a fixed solver; we utilize FEniCS~\cite{FEniCS} for the examples in the present work. For any fixed $\bm{x}_0\in\mathcal{D}$ and $t_0> 0$, our objective is to reconstruct $u(\bm{x}_0,t_0,\bm{\Xi})$ via problem (\ref{eqn:constrained}) using information obtained from $\{u(\bm{x}_0,t_0,\bm{\xi}^{(i)})\}_{i=1}^N$, that is $N$ independent realizations of the QoI. For a cleaner presentation, we suppress the dependence of $u$ on $\bm{x}_0$ and $t_0$.
\subsection{Approximately Sparse PCE} \label{subsec:PCE}
Here we discuss the polynomials in (\ref{Eq:PCETrunc}) as utilized in this work, and the key sparsity assumption which this approximation frequently facilitates. We consider an arbitrary number of input dimensions, denoted by $d$, and the set of orthogonal polynomials in any mixture of these coordinates of total order less than or equal to $p$. To explain the total order, let {\color{black}$\bm{k} = (k_1,\dots,k_d)$} be a $d\times 1$ multi-index such that $k_i\in{\color{black}\mathbb{N}\cup\{0\}}$ represents the order of the polynomial $\psi_{k_i}(\Xi_i)$, orthogonal with respect to the measure of $\Xi_i$. The $d$-dimensional polynomials $\psi_{\bm{k}}(\bm{\Xi})$ are constructed by the tensorization of {\color{black}$\psi_{k_i}(\Xi_i)$},
\begin{align*} \psi_{\bm{k}}(\bm{\Xi})=\mathop{\prod}\limits_{i=1}^d\psi_{k_i}(\Xi_i). \end{align*} The total order of $p$ implies that we consider all polynomials satisfying
\begin{align*}
\|\bm{k}\|_1\le p\qquad k_i\in{\color{black}\mathbb{N}\cup\{0\}}\quad\forall i. \end{align*}
We note that a direct combinatorial count implies that a $d$-dimensional approximation of total order $p$ has $P={p+d\choose d}$ basis polynomials. This total-order basis yields a polynomial approximation of a general function that favors lower-order polynomials. If the coefficients have a sufficiently rapid decay or if certain dimensions are dominant in an accurate reconstruction, then we have
\begin{align*} u(\bm{\Xi})\approx \mathop{\sum}\limits_{\bm{k}\in\mathcal{C}} c_{\bm{k}} \psi_{\bm{k}}(\bm{\Xi}), \end{align*}
where $s:=|\mathcal{C}| \ll P$ is the operative sparsity of the approximation that leads to stable and convergent approximations of $\bm{c}$ from (\ref{eqn:constrained}), using $N<P$ random samples of $u(\bm\Xi)$. To demonstrate this, we rely primarily on existing theorems from~\cite{CandesPlan} as presented in Section~\ref{sec:motivation}. Subsequently, in Section \ref{sec:sampling}, we adapt these results to the case of sparse PC expansions for which we utilize basic properties of orthogonal polynomials, and further present our main results on the choices of random sampling of $\bm\Xi$ and, hence, $u(\bm\Xi)$.\\
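The count $P={p+d\choose d}$ can be verified by directly enumerating the total-order index set; a brief, hedged sketch (the function name is illustrative):

```python
from itertools import product
from math import comb

def total_order_multi_indices(d, p):
    """All d-dimensional multi-indices k with ||k||_1 <= p (total-order basis)."""
    return [k for k in product(range(p + 1), repeat=d) if sum(k) <= p]

# e.g., d = 3 and p = 4 gives P = C(4+3, 3) = 35 basis polynomials
assert len(total_order_multi_indices(3, 4)) == comb(4 + 3, 3)
```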
\noindent{\bf Notation:} In the sequel we occasionally use a multi-index notation for polynomials, but also find it convenient to index polynomials by a scalar, e.g., $k$, from $1$ to $P$.
\section{\texorpdfstring{Definitions and Background}{Definitions and Background}} \label{sec:motivation}
To contextualize the results from~\cite{CandesPlan}, presented in Section~\ref{sec:candes_theorems}, we first introduce two main definitions that are used both in these results, as well as in constructing our sampling methods.
\subsection{\texorpdfstring{Sampling Definitions}{Sampling Definitions}} \label{subsubsec:sampling} We first consider the set of polynomials, $\{\psi_k(\bm{\xi})\}_{k=1}^{P}$, as defined in Section~\ref{sec:ProblemAndSolution} and define $B(\bm{\xi})$ to be \begin{align} \label{eqn:btspec}
B(\bm{\xi}):=\mathop{\max}\limits_{k=1:P}|\psi_k(\bm{\xi})|. \end{align}
This represents the least upper bound, taken uniformly over the basis polynomials of interest. In addition, we consider \begin{align} \label{eqn:GDef} G(\bm{\xi})\ge B(\bm{\xi}) \qquad \forall\bm{\xi}\in\Omega. \end{align} Here, $G(\bm{\xi})$ is any pointwise upper bound on the tight envelope $B(\bm{\xi})$ for all $\bm{\xi}\in\Omega$, where $\Omega$ is the sample space of potential values of $\bm{\xi}$ as a realization of $\bm{\Xi}$.
We note that for several orthonormal polynomials of interest a bound on $B(\bm{\xi})$ may be attained,~\cite{RauhutWard,Szego,Hermite1,AskeyWainger,LagAsym,Jacobi}. In this case, we have that $\psi_k(\bm{\xi})/G(\bm{\xi})\le 1$. It follows that for any set $\mathcal{S}\subseteq\Omega$, \begin{align} \label{eqn:normalizingconstant} c = \left(\int_{\mathcal{S}}f(\bm{\xi})G^2(\bm{\xi})d\bm{\xi}\right)^{-1/2} \end{align} is such that \begin{align*} c^2\int_{\mathcal{S}}f(\bm{\xi})G^2(\bm{\xi})d\bm{\xi}=1, \end{align*} and \begin{align} \label{eqn:optimal_pdf_gen} f_{\bm{Y}}(\bm{\xi}):=c^2f(\bm{\xi})G^2(\bm{\xi}), \end{align} is a probability distribution supported on $\mathcal{S}$, which we consider as the distribution for $\bm{Y}$. Let $\delta_{i,j}$ denote the Kronecker delta such that $\delta_{i,j}=1$ if $i=j$ and $0$ if $i\ne j$. Note that for $i,j=1:P$, \begin{align} \label{Eqn:TransformIntegral}
\left|\int_{\mathcal{S}}\frac{\psi_i(\bm{\xi})}{cG(\bm{\xi})}\frac{\psi_j(\bm{\xi})}{cG(\bm{\xi})}c^2f(\bm{\xi})G^2(\bm{\xi})d\bm{\xi}-\delta_{i,j}\right|&\le \epsilon_{i,j}, \end{align}
and we may select $\mathcal{S}$ such that $\epsilon_{i,j}$ may be made as small as needed, e.g., if we take $\mathcal{S} = \Omega$, then $\epsilon_{i,j}=0$. For this purpose we employ the heuristic of selecting $\mathcal{S}$ to encompass the largest values of $f(\bm{\xi})$ until $\mathcal{S}$ is large enough to satisfy the condition (\ref{Eqn:CoherenceUnbounded}), discussed in Section~\ref{subsubsec:Coherence}. The justification for this is that in unbounded domains, e.g., for Hermite polynomials, regions of small $f(\bm{\xi})$ typically correspond to larger $\sup_{k=1:P}|\psi_k(\bm{\xi})|$ as $p$ grows~\cite{Szego,AskeyWainger,Hermite1}.
While this formulation is useful for identifying distributions for $\bm{Y}$, unfortunately, we may no longer guarantee that $\mathbb{E}[\psi_i(\bm{Y})\psi_j(\bm{Y})]-\delta_{i,j}$ is small. Fortunately, from (\ref{Eqn:TransformIntegral}) if we let \begin{align} \label{eqn:weightFunction} w(\bm{Y})&:=\frac{1}{cG(\bm{Y})}, \end{align}
then $|\mathbb{E}[w^2(\bm Y)\psi_i(\bm{Y})\psi_j(\bm{Y})]-\delta_{i,j}|\le \epsilon_{i,j}$. In this way we consider $w(\bm{Y})$ to be a weight function so that $\{w(\bm{Y})\psi_i(\bm{Y})\}_{i=1}^P$, are approximately orthonormal random variables. This function defines the diagonal positive-definite matrix $\bm{W}$ from (\ref{eqn:constrained}) as \begin{align*}
\bm{W}(i,i)= w(\bm{\xi}^{(i)}), \end{align*} where $\bm{\xi}^{(i)}$ is the $i$th realization of $\bm{Y}$. For notational consistency, and to emphasize the conceptual connection, we denote all realized random vectors by $\bm{\xi}$ regardless of the sampling distribution, noting that the weight function, $w$, depends on that distribution. Additionally, we note that for simulation the normalizing constant, $c$, of our sampling distribution is not required.
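As a concrete one-dimensional illustration of this construction (a hedged sketch, not taken from this work): for Legendre polynomials, $f$ is uniform on $[-1,1]$, and choosing $G(\xi)\propto(1-\xi^2)^{-1/4}$ makes $f_{\bm Y}$ the Chebyshev density, with squared weight $w^2(\xi)=(\pi/2)\sqrt{1-\xi^2}$. A Monte Carlo check of the weighted orthonormality:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
N, p = 400000, 4
xi = np.cos(np.pi * rng.random(N))            # samples from the Chebyshev density f_Y
w2 = (np.pi / 2) * np.sqrt(1 - xi ** 2)       # squared weight w^2 = 1/(c G)^2 for this G
Psi = np.column_stack([np.sqrt(2 * k + 1) * legendre.legval(xi, np.eye(p + 1)[k])
                       for k in range(p + 1)])  # normalized Legendre polynomials
G = (Psi * w2[:, None]).T @ Psi / N           # Monte Carlo estimate of E[w^2 psi_i psi_j]
assert np.max(np.abs(G - np.eye(p + 1))) < 0.05
```

The weighted Gram matrix is close to the identity, confirming that $\{w(\bm Y)\psi_k(\bm Y)\}$ are (approximately) orthonormal random variables.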
\subsection{\texorpdfstring{Coherence Definition}{Coherence Definition}} \label{subsubsec:Coherence}
Consider realizations of $w(\bm{Y})\psi_k(\bm{Y})$ for $k=1:P$. We investigate the coherence parameter defined as in~\cite{CandesPlan} by
\begin{align} \label{Eqn:CoherenceBounded}
\mu(\bm{Y}) &:= \sup_{k=1:P,\bm{\xi}\in\Omega}|w(\bm{\xi})\psi_k(\bm{\xi})|^2. \end{align} As we shall see, this conceptually simple parameter allows us to bound the number of samples necessary to accurately recover $\bm{c}$ via a solution to (\ref{eqn:constrained}). From (\ref{eqn:weightFunction}) {\color{black}and (\ref{Eqn:CoherenceBounded})} we are motivated to take $G(\bm{\xi})$ to be $B(\bm{\xi})$ as defined in (\ref{eqn:btspec}), and as we shall show in Section \ref{subsec:MCMC}, this choice leads to the minimal coherence. Fortunately, asymptotic results give us approximations to the distribution $f_{\bm Y}(\bm\xi)$ of $\bm{Y}$ in certain cases. These approximations also lead to easier simulation of $\bm{Y}$ {\color{black}when $f_{\bm Y}(\bm\xi)$ corresponds to the choice of $G(\bm{\xi})=B(\bm{\xi})$, as described in Section \ref{subsubsec:MCMCmethod}.}
We utilize the definition in (\ref{Eqn:CoherenceBounded}) when analyzing Legendre polynomials which are bounded on the domain $[-1,1]^d$. However, we note that (\ref{Eqn:CoherenceBounded}) is not useful when $\sup_{k=1:P,\bm{\xi}\in\Omega}|w(\bm{\xi})\psi_k(\bm{\xi})|^2$ is infinite, such as when $\psi_k(\bm{\xi})$ are Hermite polynomials and $w(\bm{\xi}) = 1$. If $N$ is the number of samples of $\bm{Y}$ which we will take, following \cite{CandesPlan}, we consider a truncation of $\Omega$ to some appropriate $\mathcal{S}$ and let
\begin{align} \label{Eqn:CoherenceUnbounded}
\mu(\bm{Y}) &:= \mathop{\min}\limits_{\mathcal{S}}\left\{\sup_{k=1:P,\bm{\xi}\in\mathcal{S}}|w(\bm{\xi})\psi_k(\bm{\xi})|^2,\cdots\right.\\ \nonumber
\mbox{subject to }&\left.\mathbb{P}(\mathcal{S}^c)<\frac{1}{NP};\ \mathop{\sum}\limits_{k=1}^{P}\mathbb{E}\left[|w(\bm{Y})\psi_k(\bm{Y})|^2\bm{1}_{\mathcal{S}^{c}}\right]\le\frac{1}{20}P^{-1/2}\right\}, \end{align}
where $\mathcal{S}$ is a subset of the support of $f$, a superscript $c$ denotes a set complement, and $\bm{1}$ is the indicator function. While (\ref{Eqn:CoherenceBounded}) highlights the quantity that we seek to bound, the conditions in (\ref{Eqn:CoherenceUnbounded}) ensure that the truncation from $\Omega$ to $\mathcal{S}$ has a limited effect on the orthogonality of the set of random variables, $\{w(\bm{Y})\psi_k(\bm{Y})\}_{k=1}^P$. As normal random variables are unbounded, we use (\ref{Eqn:CoherenceUnbounded}) in the analysis of Hermite polynomials with a truncation $\mathcal{S}$ that captures the essential behavior of $w(\bm{\xi})\psi_k(\bm{\xi})$. These definitions are compatible in that either definition may be used for the following theorems.
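For a bounded basis the supremum in (\ref{Eqn:CoherenceBounded}) can be evaluated numerically. A hedged one-dimensional sketch for Legendre polynomials under $w\equiv 1$: since $|P_k|$ attains its maximum of $1$ at $\xi=\pm 1$, the coherence equals $2p+1$:

```python
import numpy as np
from numpy.polynomial import legendre

p = 6
xi = np.linspace(-1.0, 1.0, 20001)            # grid including the endpoints
mu = max(np.max((2 * k + 1) * legendre.legval(xi, np.eye(p + 1)[k]) ** 2)
         for k in range(p + 1))               # sup over k of |psi_k|^2, w = 1
assert abs(mu - (2 * p + 1)) < 1e-6           # coherence is 2p+1 for d = 1
```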
\subsection{\texorpdfstring{Convergence Theorems}{Convergence Theorems}} \label{sec:candes_theorems}
The following theorems use the coherence parameter in either (\ref{Eqn:CoherenceBounded}) or (\ref{Eqn:CoherenceUnbounded}) to bound the number of samples necessary to recover a sparse signal with high probability. \label{subsec:coherenceTheorems} \begin{thm} \label{thm:SampleDepth}\cite{CandesPlan} Let $\bm{c}$ be a fixed arbitrary vector in $\mathbb{R}^{P}$ with at most $s$ non-zero elements such that $\bm{\Psi c} = \bm{u}$, where $\bm\Psi$ is defined as in (\ref{eqn:psi_u}). With probability at least $1-5/P-e^{-\beta}$, and $C$ an absolute constant, if \begin{align} \label{eqn:NecSamp} N\ge C(1+\beta)\mu(\bm{Y})s\log(P), \end{align}
then $\bm{c}=\arg\min_{\bm{c}}\{\|\bm{c}\|_1 :\bm{W\Psi c} = \bm{Wu}\}$. \end{thm}
When allowing for truncation error and considering a regularized version of this $\ell_1$-minimization problem as in (\ref{eqn:regularized}), a similar result may be stated. Following \cite{CandesPlan}, we require the condition that $\|\bm{\Psi}^{T}\bm{W}^2\bm{z}\|_{\infty}\le\nu$, for some $0\le \nu<\infty$, where $\bm{z}$ is the truncation error associated with the model $\bm{u}=\bm{\Psi}\bm{c}+\bm{z}$ for an arbitrary solution vector $\bm{c}$. Additionally, we denote by $\sigma_w$ the standard deviation of the weighted truncation error $w(\bm Y)z(\bm Y)$.
\begin{thm} \label{thm:SampleDepthNoise}\cite{CandesPlan}
Let $\bm{c}$ be a fixed arbitrary vector in $\mathbb{R}^{P}$, and $\bm{c}_s$ be a vector such that $\bm{c}_s(i) = \bm{c}(i)$ for the $s$ largest $|\bm{c}(i)|$, and $\bm{c}_s(i) = 0$ otherwise. For some $\bar{s}$, let \begin{align*} N\ge C(1+\beta)\mu(\bm{Y})\bar{s}\log(P). \end{align*} With probability at least $1-6/P-6e^{-\beta}$, and $C$ an absolute constant, the solution to \begin{align*}
\hat{\bm{c}}=\mathop{\min}\limits_{\bar{\bm{c}}\in\mathbb{R}^P} \frac{1}{2}\|\bm{W\Psi}\bar{\bm{c}}-\bm{Wu}\|_2^2+\lambda\sigma_w\|\bar{\bm{c}}\|_1, \end{align*} with $\lambda = 10\sqrt{\frac{\log(P)}{N}}$ obeys for any $\bm{c}$, \begin{align*}
\|\hat{\bm{c}}-\bm{c}\|_2 &\le \mathop{\min}\limits_{1\le s\le \bar{s}}C(1+\alpha)\left[\frac{\|\bm{c}-\bm{c}_s\|_1}{\sqrt{s}}+\sigma_w\sqrt{\frac{s\log(P)}{N}}\right];\\
\|\hat{\bm{c}}-\bm{c}\|_1 &\le \mathop{\min}\limits_{1\le s\le \bar{s}}C(1+\alpha)\left[\|\bm{c}-\bm{c}_s\|_1+\sigma_w s\sqrt{\frac{\log(P)}{N}}\right], \end{align*} where $\alpha:=\sqrt{\frac{(1+\beta)s\log^5(P)}{N}}$. \end{thm}
We note that when $\|\bm{\Psi}^{T}\bm{W}^2\bm{z}\|_{\infty}$ cannot be bounded by a $\nu$, we may be interested in a subset $\mathcal{S}$ of $\Omega$ that will be sampled with sufficiently high probability and admit a bound on $\|\bm{\Psi}^{T}\bm{W}^2\bm{z}\bm{1}_{(\bm{Y}_1,\cdots,\bm{Y}_N)\in\mathcal{S}}\|_{\infty}$. This may be related to the truncation of $\Omega$ to $\mathcal{S}$ in the conditions of (\ref{Eqn:CoherenceUnbounded}).
These results show how a bound on $\mu(\bm{Y})$ translates into a bound on the number of samples needed to recover a solution vector, and provide a theoretical justification to the identification of distributions for $\bm{Y}$ which yield a smaller bound on $\mu(\bm{Y})$. With these bounds we may utilize Theorems~\ref{thm:SampleDepth} and~\ref{thm:SampleDepthNoise} to bound the number of samples required to recover solutions of any particular sparsity. \section{Sampling Methods} \label{sec:sampling}
Here we describe the sampling methods that we consider in this work, and present theorems related to recovery when we use them. We first consider a sampling according to random variables defined by the orthogonality measure in Section~\ref{subsec:std}. Such a sampling, dubbed here {\it standard sampling}, is commonly used in PC regression, \cite{Hosder06,LeMaitre10,Doostan11a,Mathelin12a}. Second, we consider sampling from a distribution {\color{black}related to} an asymptotic analysis of the orthogonal polynomials $\psi_k(\bm\Xi)$ in Section~\ref{subsec:asym}, and refer to it as {\it asymptotic sampling}. Finally, in Section~\ref{subsec:MCMC}, we introduce the {\it coherence-optimal sampling} that corresponds to minimizing the coherence parameters defined in Section~\ref{subsubsec:Coherence}.
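To preview the coherence-optimal strategy: the target density is proportional to $f(\bm\xi)B^2(\bm\xi)$ and is known only up to its normalizing constant, so Metropolis--Hastings sampling is natural. A hedged one-dimensional Hermite sketch (not the exact sampler of this work; the proposal scale and the order $p$ are illustrative):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e

def B2(xi, p=4):
    """Squared envelope max_k psi_k(xi)^2 over normalized Hermite polynomials."""
    return max(hermite_e.hermeval(xi, np.eye(p + 1)[k]) ** 2 / factorial(k)
               for k in range(p + 1))

def target(xi):
    return np.exp(-xi ** 2 / 2) * B2(xi)        # unnormalized f(xi) * B^2(xi)

rng = np.random.default_rng(7)
chain, x, tx = [], 0.0, target(0.0)
for _ in range(20000):
    y = x + 2.0 * rng.standard_normal()         # random-walk proposal
    ty = target(y)
    if rng.random() < ty / tx:                  # Metropolis acceptance ratio
        x, tx = y, ty
    chain.append(x)
chain = np.asarray(chain[2000:])                # discard burn-in
```

The resulting samples place more mass in the tails than the standard Gaussian, which is where high-order Hermite polynomials are large.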
\subsection{Standard Sampling} \label{subsec:std}
Here we consider sampling $\bm{\xi}$ according to $f(\bm\xi)$, the distribution with respect to which the PC bases are naturally orthogonal. This implies taking $w(\bm\xi) = 1$.
\subsubsection{Standard Sampling Method} \label{subsubsec:stdmethod} For the $d$-dimensional Legendre polynomials the standard method corresponds to sampling from the uniform distribution on $[-1,1]^d$, while for $d$-dimensional Hermite polynomials this corresponds to samples from a multi-variate normal distribution such that each of $d$ coordinates is an independent standard normal random variable.
\subsubsection{Theorems} \label{subsubsec:stdTheorems}
A standard sampling of Hermite polynomials leads to a coherence bounded as in Theorem~\ref{thm:NatSampleCoherence}, while a standard sampling of Legendre polynomials leads to a coherence bounded as in Theorem~\ref{thm:NatSampleCoherenceLeg}. We note that these results hold for a number of dimensions $d$ and a set of orthogonal polynomials of arbitrary total order $p$ as defined in Section~\ref{subsec:PCE}.
\begin{thm} \label{thm:NatSampleCoherence} Assume that $d=o(p)$, that is, $d$ is asymptotically dominated by $p$. Additionally, let $N = O(P^k)$ for some $k>0$, that is, the number of samples does not grow faster than a polynomial in the number of basis polynomials considered. For $d$-dimensional Hermite polynomials of total order $p\ge 1$, the coherence in (\ref{Eqn:CoherenceUnbounded}) is bounded by \begin{align} \mu(\bm{\Xi})&\le C_p\cdot \eta_p^p, \end{align} for some constants $C_p,\eta_p$ depending on $p$. For $d = o(p)$, and as $p\rightarrow\infty$, we may take $C_p$ and $\eta_p$ to be larger than but arbitrarily close to $1$ and $\exp(2-\log(2))\approx 3.6945$, respectively. \end{thm}
We note that together with Theorems~\ref{thm:SampleDepth} and~\ref{thm:SampleDepthNoise}, this implies that with high probability, the number of samples required for recovery from Hermite polynomials grows exponentially with the total order of approximation. The following theorem for Legendre polynomials is analogous to previous results in~\cite{RauhutWard}, and provides a similar result for the number of samples required for signal recovery.\\
\noindent{\bf Remark:} When sampling Hermite polynomials, we have the technical requirement that $N=O(P^k)$ for some finite $k$, and we note that this condition is satisfied here as $N<P$ is the case of interest in compressive sampling.
\begin{thm} \label{thm:NatSampleCoherenceLeg} A standard sampling of the $d$-dimensional Legendre polynomials of total order $p$ gives a coherence of \begin{align} \label{eqn:LegNatBound} \mu(\bm{\Xi})&\le \exp(2p). \end{align} \end{thm}
As we shall see in Section \ref{subsec:JacProofNatSamp}, for the case of $p<d$ the bound in (\ref{eqn:LegNatBound}) may be improved to $\mu(\bm{\Xi})\le 3^p \approx \exp(1.1p)$. Additionally, for $p>d$, the bound in (\ref{eqn:LegNatBound}) is loose, but a sharper dimension-dependent bound is given by $(2p/d+1)^d$.
\subsection{Asymptotic Sampling} \label{subsec:asym}
{\color{black}Here we consider taking $G(\bm{\xi})$ to approximate or coincide with the asymptotic (in order) envelope for the polynomials as the order $p$ goes to infinity. Specifically, for the case of Hermite polynomials we consider a relatively simple envelope function over a significant range of $\bm\xi$, corresponding to a uniform sampling, though this envelope does not coincide with $B(\bm\xi)$ and is loose compared to known behavior of Hermite polynomials at high orders, \cite{krasikov2004new}. The uniform approximation is, however, both simple to simulate and analyze. For the case of Legendre polynomials, we take $G(\bm{\xi})$ to be $B(\bm\xi)$ for asymptotically large order $p$, which corresponds to Chebyshev sampling. For both cases, sampling with this choice of $G(\bm\xi)$ leads to coherence parameters with weaker dependence on $p$, as compared to the standard sampling.}
\subsubsection{Asymptotic Sampling Method} \label{subsubsec:asymmethod}
For $d$-dimensional Hermite polynomials, {\color{black}we sample uniformly from within the $d$-dimensional ball of radius $\sqrt{2}\sqrt{2p+1}$, which corresponds to $G(\bm\xi)=\exp(\|\bm{\xi}\|_2^2/4)$ on this ball.} This choice of {\color{black}uniform sampling and} radius is motivated by the analysis of Section~\ref{subsec:keyHermite}. For completeness, we outline one algorithm for sampling uniformly from the $d$-dimensional ball of radius $r$. First, let $\bm{Z}:=(Z_1,\cdots,Z_d)$ be a vector of $d$ independent normally distributed random variables with zero mean and the same variance. If $U$ is another independent random variable that is uniformly distributed on $[0,1]$, then
\begin{align*}
\bm{Y}&:=\frac{\bm{Z}}{\|\bm{Z}\|_2}rU^{1/d}, \end{align*}
represents a random sample uniformly distributed within the $d$-dimensional ball of radius $r$. This may be verified by noting that $\bm{Z}/\|\bm{Z}\|_2$ is uniformly distributed on the $d$-dimensional hypersphere, while $rU^{1/d}$ has the radial distribution corresponding to uniform sampling within the ball. Additionally, this leads to a weight function given by
\begin{align*}
w(\bm{\xi}):= \exp(-\|\bm{\xi}\|_2^2/4). \end{align*}
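The construction above can be sketched as follows (a minimal NumPy sketch; the function name is our illustrative choice):

```python
import numpy as np

def uniform_ball_samples(N, d, p, rng=None):
    """Sample N points uniformly from the d-ball of radius sqrt(2)*sqrt(2p+1),
    via Y = Z/||Z||_2 * r * U^(1/d), and return the associated asymptotic-
    sampling weights w(xi) = exp(-||xi||_2^2 / 4)."""
    rng = np.random.default_rng(rng)
    r = np.sqrt(2.0) * np.sqrt(2.0 * p + 1.0)
    Z = rng.standard_normal(size=(N, d))          # direction on the sphere
    U = rng.uniform(size=(N, 1))                  # radial component
    Y = Z / np.linalg.norm(Z, axis=1, keepdims=True) * r * U ** (1.0 / d)
    w = np.exp(-np.sum(Y ** 2, axis=1) / 4.0)
    return Y, w
```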
\noindent{\bf Remark} ({\it Connection with Hermite function expansion}). We highlight that the application of the weight function $w(\bm{\xi})= \exp(-\|\bm{\xi}\|_2^2/4)$ to the Hermite polynomials $\psi_k(\bm \xi)$ leads to the so-called {\it Hermite functions}, i.e., $\exp(-\|\bm{\xi}\|_2^2/4)\psi_k(\bm \xi)$, which are orthogonal with respect to the uniform measure,~\cite{Szego}. This implies that the Hermite polynomial expansion with asymptotic sampling is analogous to a Hermite function expansion of $w(\bm{\Xi})u(\bm\Xi)$. Notice that in a standard Hermite function expansion, the solution of interest, $u(\bm{\Xi})$, is expanded in $\{\exp(-\|\bm{\Xi}\|_2^2/4)\psi_k(\bm \Xi)\}$. The only computational difference between solving for a Hermite polynomial expansion under this sampling and a Hermite function expansion is whether, during computation of the coefficients, the realized $u(\bm{\xi})$ are multiplied by $w(\bm{\xi})$ or not. \\[-.3cm]
For the $d$-dimensional Legendre polynomials this corresponds to sampling from the Chebyshev distribution on $[-1,1]^d$,~\cite{RauhutWard}, that is, the distribution of each of the $d$ coordinates is
\begin{align*} f_Y(\xi)&:=\frac{1}{\pi\sqrt{1-\xi^2}}, \end{align*}
for $\xi\in[-1,1]$. Each coordinate is easily simulated from $\cos(\pi U)$ where $U$ is uniformly distributed on $[0,1]$. Additionally, this leads to a weight function given by
\begin{align*}
w(\bm{\xi}):=\mathop{\prod}\limits_{i=1}^d(1-\xi_i^2)^{1/4}. \end{align*}
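The Chebyshev sampling and its weight can be sketched analogously (a minimal NumPy sketch; names are our illustrative choices):

```python
import numpy as np

def chebyshev_samples(N, d, rng=None):
    """Sample N points from the d-dimensional Chebyshev distribution via
    cos(pi*U) coordinate-wise, returning the samples and the weights
    w(xi) = prod_i (1 - xi_i^2)^(1/4)."""
    rng = np.random.default_rng(rng)
    Y = np.cos(np.pi * rng.uniform(size=(N, d)))
    w = np.prod((1.0 - Y ** 2) ** 0.25, axis=1)
    return Y, w
```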
\subsubsection{Theorems} \label{subsubsec:asymTheorems}
{\color{black}Analysis of the Hermite and Legendre polynomials sampled according to these alternative distributions leads to a coherence with a weaker asymptotic dependence on $p$. In Theorem~\ref{thm:TransformedSamples} and Theorem~\ref{thm:TransformedSamplesLeg} we quantify such a dependence.}
\begin{thm} \label{thm:TransformedSamples} Assume that $N = O(P^k)$ for some $k>0$, that is, the number of samples does not grow faster than a polynomial in the number of basis polynomials considered. We note that this includes the important and common case that $N\le P$. Let $V(r,d)=(r\sqrt{\pi})^d/\Gamma(d/2+1)$ denote the volume inside the hypersphere with radius $r$ in dimension $d$.
For Hermite polynomials, sampling uniformly from the $d$-dimensional ball of radius $\sqrt{2}\sqrt{(2+\epsilon_{p})p+1}$, and weighting realized $\psi_k(\bm{\xi}^{(i)})$ on this ball by $w(\bm\xi^{(i)}) = \exp(-\|\bm{\xi}^{(i)}\|_2^2/4)$, gives
\begin{equation} \mu(\bm{Y})=O(\pi^{-d/2}V(\sqrt{2p},d))=O((2p)^{d/2}/\Gamma(d/2+1)). \end{equation}
Here, we note that $\epsilon_{p}\rightarrow 0$ if $d = o(p)$, and that the radius of the sampling is a factor of $\sqrt{2}$ times larger than the radius of the volume in the coherence, due to a normalization explained in Section~\ref{sec:Proofs}. \end{thm}
In the uniform sampling in this work we set $\epsilon_{p}$ in Theorem~\ref{thm:TransformedSamples} to zero, leaving as an open problem the determination of an optimal $\epsilon_{p}$, and hence the sampling radius, for uniform sampling of Hermite polynomials. Additionally, this theorem is applicable to sampling Hermite functions with a standard, i.e., uniform, sampling, as $w(\bm{\xi})\psi_k(\bm{\xi})$ is a Hermite function.
In the case of Legendre polynomials sampled from the Chebyshev distribution, the coherence is completely independent of the order of approximation, which agrees with previous results in~\cite{RauhutWard}.
\begin{thm} \label{thm:TransformedSamplesLeg} For the sampling of $d$-dimensional Legendre polynomials according to the $d$-dimensional Chebyshev distribution and weight $\psi_k(\bm{\xi})$ proportional to $w(\bm\xi) = \prod_{i=1}^d(1-\xi_i^2)^{1/4}$, regardless of the relationship between $d$ and $p$, we have that
\begin{align} \label{eqn:LegTransformedSample} \mu(\bm{Y})&\le 3^d. \end{align} \end{thm}
It is worthwhile highlighting that the combination of Theorems \ref{thm:NatSampleCoherenceLeg} and \ref{thm:TransformedSamplesLeg} suggests sampling Legendre polynomials by uniform distribution when $d>p$ and Chebyshev distribution when $d<p$. A similar observation has been made in \cite{Yan12}.
\subsection{Coherence-optimal Sampling} \label{subsec:MCMC}
Here we consider taking $G(\bm{\xi})=B(\bm{\xi})$ in (\ref{eqn:optimal_pdf_gen}), which implies sampling $\bm{\xi}$ according to the distribution
\begin{equation} \label{eqn:optimal_pdf} f_{\bm Y}(\bm \xi)=c^2f(\bm{\xi})B^2(\bm{\xi}), \end{equation}
with some appropriate normalizing constant $c$. Corresponding to this sampling, we apply the weight function
\begin{align*} w(\bm{\xi})=\frac{1}{B(\bm{\xi})}. \end{align*}
Notice that in (\ref{eqn:optimal_pdf}), $f(\bm{\xi})$ is the measure with respect to which the polynomials $\psi_k(\bm\xi)$ are naturally orthogonal.
\subsubsection{MCMC Sampling Method} \label{subsubsec:MCMCmethod}
While $B(\bm{\xi})$, as defined in (\ref{eqn:btspec}), is straightforward to evaluate for a fixed $\bm{\xi}$ by iterating over each $k=1:P$, the quantity is difficult to evaluate over a range of $\bm{\xi}$, thus making it difficult to accurately compute the normalizing constant $c$ in (\ref{eqn:optimal_pdf}). This motivates sampling $\bm{\Xi}$ from (\ref{eqn:optimal_pdf}) via a Markov chain Monte Carlo (MCMC) approach, specifically using the Metropolis-Hastings sampler,~\cite{Hastings}. The MCMC method uses the computable point-wise evaluation of $B(\bm{\xi})$, and does not require identification of the constant $c$ needed to normalize (\ref{eqn:optimal_pdf}) to a probability distribution. Additionally, this sampling distribution allows easy evaluation of $w(\bm{\xi})$ using only the realized sample.
The MCMC sampler requires a proposal, or candidate, distribution. When $p>d$ we suggest those obtained from Section~\ref{subsec:asym}, giving a uniform sampling on a $d$-dimensional ball for Hermite polynomials, and $d$-dimensional Chebyshev sampling for Legendre polynomials. Similarly, when $p\le d$ we suggest those obtained from Section~\ref{subsec:std}, giving standard normal sampling for Hermite polynomials, and uniform sampling on $[-1,1]^d$ for Legendre polynomials. These are the proposal distributions used for the sampling in this work. Note that each proposal distribution covers the entire domain $\mathcal{S}$, and if the proposal and target distributions approximately match, then the acceptance rate is high and few burn-in samples are needed to approximately draw from the desired distribution for $\bm{Y}$. Identifying better proposal distributions is of interest and remains to be studied. One caveat we note is that the proofs of Theorems~\ref{thm:SampleDepth} and~\ref{thm:SampleDepthNoise} require independent sampling, so that it is proper to restart the chain after each accepted sample; a more practical method is to discard intermediate samples so that serial dependence is small,~\cite{MCMCBook}. We note that in applications where evaluation of the QoI is expensive, the generation of the samples, $\{\bm{\xi}^{(i)}\}_{i=1}^N$, is typically not a bottleneck, so that the extra cost of MCMC sampling is frequently acceptable in practice.
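As a one-dimensional illustration, and assuming for the sake of the sketch that the envelope in (\ref{eqn:btspec}) is $B(\xi)=\max_{k\le p}|\psi_k(\xi)|$ over the orthonormal (probabilists') Hermite polynomials, an independence Metropolis-Hastings sampler with a standard normal proposal (the $p\le d$ suggestion above) takes a particularly simple form, since $f$ cancels against the proposal density in the acceptance ratio:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def B(xi, p):
    """Assumed envelope: max_k |psi_k(xi)| over orthonormal probabilists'
    Hermite polynomials of order k = 0..p (an illustrative stand-in)."""
    return max(abs(hermeval(xi, [0.0] * k + [1.0])) / np.sqrt(factorial(k))
               for k in range(p + 1))

def coherence_optimal_samples(N, p, thin=100, rng=None):
    """Independence Metropolis-Hastings targeting f(xi)*B(xi)^2 with a
    standard normal proposal. With proposal density g = f, the acceptance
    ratio reduces to B(y)^2 / B(x)^2."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal()
    Bx = B(x, p)
    out = []
    while len(out) < N:
        for _ in range(thin):          # discard intermediate samples
            y = rng.standard_normal()
            By = B(y, p)
            if rng.uniform() < (By / Bx) ** 2:
                x, Bx = y, By
        out.append(x)
    return np.array(out)
```

Restarting the chain after each accepted sample, as the proofs strictly require, would replace the inner thinning loop with a fresh chain per kept sample.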
\subsubsection{Theorem} \label{subsubsec:MCMCTheorem}
Theorem~\ref{thm:LowPTransformedSamples} justifies the intuition that taking $G(\bm{\xi})$ associated with sampling to be the envelope function $B(\bm{\xi})$ leads to a minimal $\mu(\bm{Y})$.
\begin{thm} \label{thm:LowPTransformedSamples} Let $\mathcal{S}$ be a minimal set satisfying the conditions of (\ref{Eqn:CoherenceUnbounded}), that is, no subset $\mathcal{S}_s$ of $\mathcal{S}$ with $\mu(\mathcal{S}_s)<\mu(\mathcal{S})$ satisfies the conditions of (\ref{Eqn:CoherenceUnbounded}). Let $B(\bm{\xi})$ be as in (\ref{eqn:btspec}). If we sample from the distribution proportional to $f(\bm{\xi})B^2(\bm{\xi})$ and weight $\psi_k(\bm{\xi})$ proportional to $w(\bm\xi) =1/B(\bm{\xi})$, then the coherence parameter achieves a minimum over all sampling schemes of $\psi_{k}(\bm{\xi})$, $k=1:P$, and distributions supported on $\mathcal{S}$. \end{thm}
In the next section we explore how the sample distributions associated with these results perform when used to approximate sparse functions in the appropriate PC basis.
\section{Numerical Examples} \label{sec:examples}
Here we numerically investigate the different sampling schemes discussed in Section~\ref{sec:sampling}, considering the coherence parameter in Section \ref{subsec:ComputedCoherence}, randomly generated manufactured sparse functions in Section~\ref{subsec:RandomSignals}, the solution to an elliptic PDE with random coefficient in Section~\ref{subsec:Elliptic}, and the amount of reaction at a given time in an adsorption model from \cite{Makeev02} in Section~\ref{subsec:EllipticOperator}.
\subsection{Computed Coherence} \label{subsec:ComputedCoherence}
The coherence parameter of Section~\ref{subsubsec:Coherence} can be estimated from a large sample of realized points. Doing so leads to the results in Figure~\ref{Fig:HermCoh} for Hermite polynomials, and those in Figure~\ref{Fig:LegCoh} for Legendre polynomials. We consider three sampling schemes: the standard scheme, where we sample based on the underlying distribution of the random variables in question as in Section~\ref{subsubsec:stdmethod}; an asymptotically motivated method to ensure a coherence with weaker dependence on the order $p$ as in Section~\ref{subsubsec:asymmethod}; and a coherence-optimal sampling based on the distribution proportional to the envelope of basis functions as in Section~\ref{subsubsec:MCMCmethod}. We see that standard sampling tends to perform poorly at high orders, while asymptotic sampling tends to perform poorly for high-dimensional problems. Additionally, coherence-optimal sampling performs well in all regimes. These observations are consistent with the theoretical results presented in Section \ref{sec:sampling}.
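A Monte Carlo estimate of this kind can be sketched in one dimension as follows (assuming, for illustration only, that the coherence takes the form $\sup_{\xi}\max_k |w(\xi)\psi_k(\xi)|^2$; the precise definition appears in Section~\ref{subsubsec:Coherence}, and the function name is ours):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def estimate_coherence(samples, weights, p):
    """Estimate the coherence for 1-D Hermite polynomials as the maximum,
    over realized points, of max_k |w(xi) psi_k(xi)|^2, with psi_k the
    orthonormal probabilists' Hermite polynomials up to order p."""
    mu = 0.0
    for xi, w in zip(samples, weights):
        env = max(abs(hermeval(xi, [0.0] * k + [1.0])) / np.sqrt(factorial(k))
                  for k in range(p + 1))
        mu = max(mu, (w * env) ** 2)
    return mu
```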
\begin{figure}
\caption{Computed $\mu(\bm{Y})$ for different sampling methods of Hermite polynomials for different $d$ and $p$.}
\label{Fig:HermCoh}
\end{figure}
\begin{figure}
\caption{Computed $\mu(\bm{Y})$ for different sampling methods of Legendre polynomials for different $d$ and $p$.}
\label{Fig:LegCoh}
\end{figure}
\subsection{Manufactured Sparse Functions} \label{subsec:RandomSignals}
In this section, we investigate the reconstruction accuracy of the competing sampling schemes on randomly generated sparse solution vectors, $\bm{c}$, such that $\bm{\Psi}\bm{c}=\bm{u}$. Here, $\bm{c}$ is chosen to have a uniformly selected random support and independent standard normal random variables for {\color{black}the values of} the supported coordinates. We measure reconstruction accuracy as a function of sparsity, denoted by $s$, and the number of independent samples of $\bm{Y}$, denoted by $N$. We declare $\hat{\bm{c}}$ to be a successful recovery of $\bm{c}$ if $\|\hat{\bm{c}}-\bm{c}\|_2/\|\bm{c}\|_2\le 0.01$, where $\hat{\bm{c}}$ is a solution to (\ref{eqn:constrained}) and in this work is computed using the $\ell_1$-minimization solver of SparseLab~\cite{SparseLab}, which is based on a primal-dual interior-point method. Each success probability is calculated from 2500 independent realizations of $\bm\Psi$ and $\bm c$.
For a more comparable presentation, we normalize the number of samples by the number of basis functions considered, $N/P\in[0.1,1]$, and similarly normalize the sparsity by the number of samples, $s/N\in[0.1,1]$. To compare the ability to recover solutions, we identify the probability of recovery on a $90\times 90$ uniform grid in $(N/P,s/N)$ for different $(d,p)$ pairs as well as for different distributions of $\bm{Y}$. The results are presented in Figures~\ref{Fig:HermRand} and~\ref{Fig:LegRand} for Hermite and Legendre polynomials, respectively, where we consider three sampling schemes of Sections~\ref{subsubsec:stdmethod},~\ref{subsubsec:asymmethod}, and~\ref{subsubsec:MCMCmethod}. For the coherence-optimal sampling, in conjunction with Theorem~\ref{thm:LowPTransformedSamples}, we use a Metropolis-Hastings sampling to generate realizations from the appropriate distribution, where we discard 99 samples before every one kept, which both provides a burn-in effect and reduces the serial correlation between samples.
The results in Figures~\ref{Fig:HermRand} and~\ref{Fig:LegRand} identify a {\it phase transition},~\cite{Donoho09b}, in the ability of $\ell_1$-minimization to recover $\bm c$. For a number of solution realizations, given by $N/P$, the method succeeds -- with probability one -- in reconstructing solutions with high enough sparsity, given by small $s/N$, and fails to do so for low sparsity. Between these two phases, the method recovers the solution with probability smaller than one. Here, we observe differentiation in the quality of solution recovery in the transition region based on how $\bm{\Psi}$ is sampled. In particular, we highlight the following notable observations: for the high-order case $(d,p)=(2,30)$, the standard Hermite sampling performs poorly as compared to the uniform sampling; for the high-dimensional case $(d,p)=(30,2)$, the standard Legendre sampling is much better than the Chebyshev sampling; and for the moderate values $(d,p)=(5,5)$, the two sampling methods lead to similar performance. In all cases, the MCMC sampling leads to recovery that is similar to those of the other two sampling strategies or provides considerable improvements.\\
\noindent{\bf Remark:} Though our results following from~\cite{CandesPlan} do not necessarily imply uniform recovery over all functions of a certain sparsity, the results in this section are appropriately interpreted in the context of uniform recovery. For a more detailed definition of uniform recovery, we refer the interested reader to \cite{Rauhut10}.
\begin{figure}
\caption{Hermite Recovery Phase Diagrams: The rows correspond to differing dimension and total order while the columns correspond to the different sampling schemes. The color of each square represents a probability of successful function recovery.}
\label{Fig:HermRand}
\end{figure}
\begin{figure}
\caption{Legendre Recovery Phase Diagrams: The rows correspond to differing dimension and total order while the columns correspond to the different sampling schemes. The color of each square represents a probability of successful function recovery.}
\label{Fig:LegRand}
\end{figure}
\subsection{An Elliptic PDE with Random Input} \label{subsec:Elliptic}
As an application of Legendre PC expansions, we next consider the solution of the linear elliptic PDE
\begin{eqnarray} \label{Eqn:EllOperator} \nabla\cdot(a(\bm{x},\bm{\Xi})\nabla u(\bm{x},\bm{\Xi}))&=&1,\quad \bm{x}\in\mathcal{D},\\ u(\bm{x},\bm{\Xi})&=&0 ,\quad \bm{x}\in\partial\mathcal{D},\nonumber \end{eqnarray}
on the unit square $\mathcal{D}= (0,1)\times(0,1)$ with boundary $\partial\mathcal{D}$. The diffusion coefficient $a$ is considered random and is modeled by
\begin{align} \label{eqn:gaussian_field} a(\bm{x},\bm{\Xi})=a_0 +\sigma_a\sum_{k=1}^{d}\sqrt{\zeta_k}\varphi_{k}(\bm{x})\Xi_{k}, \end{align}
in which the random variables $\{\Xi_{k}\}_{k=1}^d$, $d=20$, are independent draws from the U([-1,1]) distribution, and we choose $a_0 = 0.1$ and $\sigma_a=0.017$. In (\ref{eqn:gaussian_field}), $\{\zeta_k\}_{k=1}^d$ are the $d$ largest eigenvalues associated with $\{\varphi_k\}_{k=1}^d$, the $L_2([0,1]^2)$-orthonormalized eigenfunctions of
\begin{align} \label{eqn:GaussCov} C_{aa}(\bm{x},\bm{y}) = \exp{\left[-\frac{(x_1-y_1)^2}{l_{1}^2}-\frac{(x_2-y_2)^2}{l_{2}^2}\right]} \end{align}
with correlation lengths $l_{1}=0.8, l_{2}=0.1$ in the spatial dimensions. Given these choices of parameters, the model in (\ref{eqn:gaussian_field}) leads to strictly positive realizations of $a$.
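The eigenpairs $\{\zeta_k,\varphi_k\}$ of (\ref{eqn:GaussCov}) may be approximated numerically in several ways; a simple Nystr\"om-type discretization on a tensor grid is sketched below (the grid size $n$ and the function name are our illustrative choices, not necessarily the discretization used for the results in this section):

```python
import numpy as np

def kl_modes(d=20, l1=0.8, l2=0.1, n=24):
    """Nystrom approximation of the d largest eigenpairs of the squared-
    exponential covariance on the unit square, using midpoints of an
    n x n tensor grid and equal quadrature weights."""
    x = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(x, x, indexing="ij")
    pts = np.column_stack([X.ravel(), Y.ravel()])
    D1 = (pts[:, None, 0] - pts[None, :, 0]) ** 2 / l1 ** 2
    D2 = (pts[:, None, 1] - pts[None, :, 1]) ** 2 / l2 ** 2
    C = np.exp(-(D1 + D2))                 # covariance matrix on the grid
    h = 1.0 / n ** 2                       # quadrature weight per grid cell
    vals, vecs = np.linalg.eigh(h * C)
    idx = np.argsort(vals)[::-1][:d]       # d largest eigenvalues
    return vals[idx], vecs[:, idx] / np.sqrt(h)   # L2-orthonormalized modes
```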
For any realization of $\bm{\Xi}$, we use the finite element solver FEniCS~\cite{FEniCS} to compute an approximate solution $u(\bm{\Xi})=u((0.5,0.5),\bm{\Xi})$.
To identify $u(\bm{\Xi})$ as a function of the random inputs $\bm{\Xi}$, we use a Legendre PC expansion of total order $p=4$, which for this problem with $d=20$ stochastic dimensions yields $P=10,626$ basis functions. We note that the root-mean-squared error is considered here as the primary measure of recovery.
We investigate the ability to recover $u(\bm{\Xi})$ via (\ref{eqn:constrained}), using each of the three sampling schemes considered for Legendre polynomials. For this elliptic problem we further improve the quality of the MCMC sampling through an initial burn-in of 1,000 discarded samples~\cite{MCMCBook}. We provide bootstrapped estimates of the various moment-based measures from a pool of samples generated beforehand. Specifically, samples for each realization are drawn from a pool of 50,000 previously generated samples, which are used to calculate bootstrap estimates of averages and standard deviations.
To identify the solution of (\ref{eqn:constrained}) we use the SPGL1 package,~\cite{SPGL1,SPGL2}, with a truncation error in (\ref{eqn:constrained}), denoted by $\delta$, and determined for each set of samples by two-fold (also known as hold-out) cross-validation,~\cite{CrossValidation}. Specifically, we calculate this $\delta$ from $N$, an even number of available samples, by splitting the available samples into two equally sized sets, one a training set, and the other a validation set. This process, as we have implemented it, is summarized by the following algorithm:
\begin{enumerate} \item For a number of $\delta$, construct solutions, $\bm{c}_\delta$, from the training set and the solution of (\ref{eqn:constrained}). We use a set of potential $\delta$ defined by $10^{-(-1:0.05:5)}$.
\item For each $\bm{c}_\delta$ use the validation set to identify the reconstruction error $\epsilon^{(1)}_\delta:= \|\bm{\Psi}\bm{c}_\delta-\bm{u}\|_2$. \item Repeat with the training and validation sets swapped to attain $\epsilon^{(2)}_\delta$. \item Identify the $\delta_0$ that minimizes $\epsilon^{(1)}_\delta+\epsilon^{(2)}_\delta$. \item Set the truncation error to $\delta_\star=\sqrt{2}\delta_0$, and identify a solution vector via the combined $N$ samples from both the training and validation sets. \end{enumerate}
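The steps above can be sketched compactly as follows (the solver interface `solve_l1` is a placeholder for the SPGL1 call; names are our illustrative choices):

```python
import numpy as np

def cross_validated_delta(solve_l1, Psi, u):
    """Two-fold cross-validation for the truncation error delta.

    `solve_l1(Psi, u, delta)` is assumed to return a coefficient vector
    solving the l1-minimization with tolerance delta."""
    N = len(u)
    half = N // 2
    deltas = 10.0 ** (-np.arange(-1.0, 5.05, 0.05))   # candidate tolerances
    err = np.zeros_like(deltas)
    for tr, va in [(slice(0, half), slice(half, N)),
                   (slice(half, N), slice(0, half))]:
        for i, delta in enumerate(deltas):
            c = solve_l1(Psi[tr], u[tr], delta)
            err[i] += np.linalg.norm(Psi[va] @ c - u[va])
    delta0 = deltas[np.argmin(err)]
    delta_star = np.sqrt(2.0) * delta0
    # final solve on the combined training and validation samples
    return delta_star, solve_l1(Psi, u, delta_star)
```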
We utilize this method of cross-validation to calibrate the truncation error for each realized sample of the calculated solution to (\ref{eqn:constrained}). Here, a lower cross-validated truncation error suggests a computed solution vector with a more accurate recovery.
In Figure~\ref{Fig:EllipticSD} we see plots of computed moments for the distribution of the relative root-mean-squared error between the computed and reference solutions, obtained from 100 independent replications for each sample size, $N$. In addition, Figure~\ref{Fig:EllipticTol} presents similar plots for the truncation error computed with each sampling. We note here that the cross-validated computation of $\delta$ provides an estimate of the anticipated root-mean-squared error for additional independent samples.
We note that the standard and coherence-optimal samplings offer significant improvements over asymptotic, i.e., Chebyshev, sampling using similar sample sizes, $N$, both in terms of accuracy and robustness to differing realized samples. These observations are compatible with the theoretical results of Section \ref{sec:sampling}, which demonstrate a smaller coherence for the uniform sampling -- as compared to Chebyshev sampling -- for the case of $d>p$. The coherence-optimal sampling by construction leads to the smallest coherence. We also notice that at particularly low sample sizes any given sampling method tends to recover a particular but ultimately poor approximation. As the number of samples increases the recovery improves, but the variability in the solution recovery may appear to increase first, as this preferred solution is recovered less frequently.
\begin{figure}
\caption{Plots for the moments of root-mean-squared error for independent residuals for the various sampling methods as a function of the number of samples.}
\label{Fig:EllipticSD}
\end{figure}
\begin{figure}
\caption{Plots for the moments of cross-validated estimates of tolerance for the various sampling methods as a function of the number of samples.}
\label{Fig:EllipticTol}
\end{figure}
\subsection{Surface Reaction Model} \label{subsec:EllipticOperator}
Another problem of interest in this work is to quantify the uncertainty in the solution $\rho$ of the non-linear evolution equation
\begin{align} \label{eqn:reaction} \left\{\begin{array}{l}\frac{d\rho}{dt} = \alpha(1-\rho) - \gamma\rho - \kappa(1-\rho)^2\rho, \\ \rho(t=0) = 0.9,\end{array}\right. \end{align}
modeling the surface coverage of certain chemical species, as examined in \cite{Makeev02,Lemaitre04b}. We consider uncertainty in the adsorption, $\alpha$, and desorption, $\gamma$, coefficients, and model them as shifted log-normal variables. Specifically, we assume
\begin{align*} \alpha &= 0.1 + \exp(0.05\ \Xi_1),\\ \gamma &= 0.001 + 0.01\exp(0.05\ \Xi_2), \end{align*}
where $\Xi_1, \Xi_2$ are independent standard normal random variables; hence, the dimension of our random input is $d=2$. The reaction rate constant $\kappa$ in (\ref{eqn:reaction}) is assumed to be deterministic and is set to $\kappa = 10$.
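For reference, the evolution equation (\ref{eqn:reaction}) with these parameter mappings can be integrated directly; below is a minimal sketch with a fixed-step RK4 scheme (the step size, final time, and function name are our illustrative choices):

```python
import numpy as np

def surface_coverage(xi1, xi2, T=4.0, kappa=10.0, dt=1e-3):
    """Integrate d(rho)/dt = alpha(1-rho) - gamma*rho - kappa*(1-rho)^2*rho
    from rho(0) = 0.9 to t = T with a fixed-step RK4 scheme, where alpha and
    gamma are mapped from the standard normal inputs xi1, xi2."""
    alpha = 0.1 + np.exp(0.05 * xi1)
    gamma = 0.001 + 0.01 * np.exp(0.05 * xi2)
    f = lambda r: alpha * (1.0 - r) - gamma * r - kappa * (1.0 - r) ** 2 * r
    r = 0.9
    for _ in range(int(round(T / dt))):
        k1 = f(r)
        k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2)
        k4 = f(r + dt * k3)
        r += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return r
```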
Our QoI is $\rho_c:=\rho(t=4,\Xi_1,\Xi_2)$, and to approximate this, we consider a Hermite PC expansion of total order $p=32$, giving $P=561$ basis functions. This high-order approximation is necessary due to the large gradient of $\rho_c$ in terms of the random variables, as evidenced by the relatively slow decay of coefficients in the reference solution presented in Figure~\ref{Fig:RefSolODE}. This is computed using Gauss-Hermite quadrature approximation of the PC coefficients.
We utilize the same computational process as in Section~\ref{subsec:Elliptic} to identify approximate solutions. In Figure~\ref{Fig:ODERMSE}, we see plots of moments for the relative root-mean-squared error -- between the reference and $\ell_1$-minimization solutions -- as a function of the number of samples, $N$. These moments are obtained from 200 independent replications for each $N$. We find that the standard sampling fails to converge, while the {\color{black}uniform} and coherence-optimal samplings lead to converged solutions as $N$ is increased. Figure~\ref{Fig:ODETol} presents plots for the truncation error computed with each sampling via cross-validation. One interesting fact to notice is that recovery for standard sampling appears to get worse for larger sample sizes. This may be an effect of the poor numerical conditioning of the high-order $p=32$ Hermite polynomials under standard sampling, where very rare events with very large realized $|\psi_k(\bm{\xi})|$ are necessary to capture the orthogonality of the polynomials. It further affirms the results of Figure~\ref{Fig:HermCoh} and Theorem~\ref{thm:NatSampleCoherence}, that standard sampling of Hermite polynomials is not suited for high-order problems.
\begin{figure}
\caption{PC coefficients of reference solution for the QoI $\rho_c:=\rho(t=4,\Xi_1,\Xi_2)$ in the surface reaction model.}
\label{Fig:RefSolODE}
\end{figure}
\begin{figure}
\caption{Moments of root-mean-squared error between the reference and $\ell_1$-minimization solutions for the various sampling methods as a function of the number of samples.}
\label{Fig:ODERMSE}
\end{figure}
\begin{figure}
\caption{Plots for the moments of cross-validated estimates of tolerance for the various sampling methods as a function of the number of samples.}
\label{Fig:ODETol}
\end{figure}
\section{\texorpdfstring{Proofs}{Proofs}} \label{sec:Proofs}
Here we present proofs for the theorems in Section~\ref{sec:sampling}. These proofs, except for that of Theorem~\ref{thm:LowPTransformedSamples}, rely on an analysis of the appropriate orthonormal polynomials. We first work toward proofs for Theorems~\ref{thm:NatSampleCoherence} and~\ref{thm:TransformedSamples}, which rely on understanding Hermite polynomials. To do so we require a few technical lemmas concerning the broad behavior of the polynomials asymptotically in order.
This analysis is focused on three domains for one-dimensional polynomials. The first region nearly coincides with the so-called oscillatory region of $\psi_{p}(\xi)$,~\cite{AskeyWainger}, within which all zeros of $\psi_k(\xi)$ are found for $k\le p$. A second region, referred to as the monotonic region,~\cite{AskeyWainger}, is the complementary region where the polynomials tend to increase monotonically in magnitude, and in this region we focus on bounding the extreme values of $\psi_k(\xi)$. The third region of importance is the boundary between the monotonic and oscillatory regions, referred to as the boundary region.
For multidimensional Hermite polynomials, we identify a domain $\mathcal{S}$ which fully contains the multidimensional analogue to the oscillatory and boundary regions, and partially contains the monotonic region. The size of the monotonic region included is determined so as to satisfy the conditions of (\ref{Eqn:CoherenceUnbounded}), while admitting a useful bound on the extreme values of $|\psi_k(\bm{\xi})|$ for $\bm{\xi}\in\mathcal{S}$. The method for our selection of $\mathcal{S}$ is to include $\bm{\xi}$ corresponding to the largest values of the density function, $f(\bm{\xi})$, until $\mathcal{S}$ is verified to satisfy (\ref{Eqn:CoherenceUnbounded}). In the case of Hermite polynomials, this heuristic is justified as Hermite polynomials tend to inversely relate with $f(\bm{\xi})$ for $\bm{\xi}$ within the monotonic region,~\cite{Szego}. The selection of $\mathcal{S}$ determines our radius for sampling uniformly from the $d$-dimensional ball, and we will show that this involves taking $\mathcal{S}=\{\bm{\xi}:\|\bm{\xi}\|_2\le r_p\}$ for an $r_p$ that grows asymptotically like $2\sqrt{p}$.
\subsection{Key Hermite Lemmas} \label{subsec:keyHermite}
For convenience with the cited literature, we prove our results using the orthonormalized physicists' Hermite polynomials (orthonormal polynomials with respect to $f(\bm{\xi}):=\pi^{-d/2}\exp(-\|\bm{\xi}\|^2)$). We note that our results in Section~\ref{sec:sampling} are in terms of the probabilists' polynomials (orthogonal with respect to $f(\bm{\xi}):=(2\pi)^{-d/2}\exp(-\|\bm{\xi}\|^2/2)$), but the two sets are related as follows. If $\{\psi_k(\xi)\}$ denotes the orthonormalized physicists' polynomials and $\{\psi^{\prime}_k(\xi)\}$ represents the orthonormalized probabilists' polynomials, then for each $k$, $\psi_{k}(\sqrt{2}\xi)=\psi^{\prime}_k(\xi)$. We point the reader to Section 5.5 of~\cite{Szego} for a derivation of this key relation. The effect on the results of the proof is that the probabilists' polynomials require a sampling radius that is $\sqrt{2}$ times larger than that for sampling the physicists' polynomials. This radial effect does not affect the volume of the points in the interior, particularly as seen in Theorem~\ref{thm:TransformedSamples}, as the radius change is cancelled out by the change in the normalizing constant for the distribution ($\pi^{-d/2}$ vs. $(2\pi)^{-d/2}$).
We bound (\ref{Eqn:CoherenceUnbounded}) for the $d$-dimensional Hermite polynomials as follows. Let $\bm{\xi}$ be a $d\times 1$ vector and $\bm{k}$ be a $d\times 1$ multi-index. In this framework $\psi_{\bm{k}}(\bm{\xi})$ is an orthonormal polynomial whose order in the $i$th dimension is given by $k_i := \bm{k}(i)$, and whose total order is at most $p$. As the total order is at most $p$, $\|\bm{k}\|_1\le p$, and as the weight function is formed by a tensor product of one dimensional weight functions, $\psi_{\bm{k}}$ is a tensor product of univariate orthogonal polynomials. In this way the bounds in arbitrary dimension are tensor products of one-dimensional bounds, which are more easily derived.
As mentioned previously, the behavior of the Hermite polynomials in the monotonic region, and the radially symmetric concentration of the weight function $\pi^{-1/2}\exp(-\|\bm{\xi}\|_2^2)$ suggests the candidate set $\mathcal{S}:=\{\|\bm{\xi}\|_2\le r_p\}$ with $r_p$ to satisfy the conditions of (\ref{Eqn:CoherenceUnbounded}). We recall that the minimum over admissible $\mathcal{S}$ yields a coherence parameter less than any given choice of $\mathcal{S}$, so that this selection of $\mathcal{S}$ leads to an upper bound on a minimal $\mu(\bm{Y})$.
Being of classical and modern importance, several classes of one-dimensional orthogonal polynomials (e.g., Hermite, Jacobi, Legendre, Laguerre) have received much analysis and key results are available in the literature,~\cite{RauhutWard,Szego,Hermite1,AskeyWainger,LagAsym,Jacobi}.
In particular, for our interest in Hermite polynomials, a direct consequence of bounds from~\cite{AskeyWainger} gives the bounds in Table~\ref{Tab:Bounds} for some positive constant $C$ and $n:=2k+1$. The key conclusion is that we may bound $\exp(-\xi^2/2)\psi_k(\xi)$ for $\xi$ in each of these regions. \begin{table} \center
\begin{tabular}{|l|l|}
\hline Range for $\xi$ & Bound for $|\psi_k(\xi)|$\\ \hline
$0\le |\xi|\le n^{1/2}-n^{-1/6}$ & $Cn^{-1/8}(n^{1/2}-|\xi|)^{-1/4}\exp(\xi^2/2)$\\ \hline
$n^{1/2}-n^{-1/6}\le |\xi|\le n^{1/2}+n^{-1/6}$ & $Cn^{-1/12}\exp(\xi^2/2)$\\ \hline
\end{tabular} \caption{Bounds in the {\color{black}oscillatory} and boundary regions of Hermite polynomials from~\cite{AskeyWainger}. Here, $C$ is some positive constant and $n:=2k+1$.} \label{Tab:Bounds} \end{table}
The bounds in Table~\ref{Tab:Bounds} are sufficient for both our uses within the oscillatory region and the boundary of the oscillatory and monotonic region. We derive a bound within the monotonic region using results in~\cite{Hermite1}. We first summarize the needed results as follows. Let $\sigma_k(\xi):=\sqrt{\xi^2-2k}$ for $|\xi|\ge \sqrt{2k}+\epsilon_k$ where $\epsilon_k\rightarrow 0$ as $k\rightarrow\infty$. We note that our analysis does not address how rapidly we may take $\epsilon_k$ to $0$, and for our purposes it is more convenient to redefine $\epsilon_k$ such that $|\xi|\ge\sqrt{(2+\epsilon_k)k+1}$, again letting $\epsilon_k\rightarrow 0$. This lack of effective analysis for $\epsilon_k$ implies a lack of effective analysis for derived quantities, and all results are guaranteed to hold in an asymptotic sense without an analysis as to how rapidly convergence occurs. We do refer the reader to~\cite{Hermite1} for some analysis of how $\epsilon_k$ may be taken to zero, specifically in a worst case, $\epsilon_k=O(k^{-1/6})$. As a matter of notation, while $\sigma_k(\xi)$ depends on both $k$ and $\xi$, in what follows we suppress the dependence on $\xi$. Following~\cite{Hermite1}, we may approximate $\psi_k(\xi)$ when $|\xi|\ge\sqrt{(2+\epsilon_k)k+1}$ by
\begin{align} \nonumber \frac{c^{\prime}_k}{C_k}\exp\left(\frac{\xi^2-\sigma_k \xi-k}{2}\right)(\sigma_k+\xi)^k\sqrt{\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)} &\le \psi_k(\xi);\\ \label{eqn:BetterAsym} \frac{c_k}{C_k}\exp\left(\frac{\xi^2-\sigma_k \xi-k}{2}\right)(\sigma_k+\xi)^k\sqrt{\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)} &\ge \psi_k(\xi), \end{align}
where both $c^{\prime}_k, c_k\rightarrow 1$ as $k\rightarrow \infty$, and $C_k=\sqrt{2^{k}k!}$ is the appropriate constant so that the $\{\psi_k\}$ are orthonormal with respect to the weight $f(\xi)=\pi^{-1/2}\exp(-\xi^2)$. For smaller $|\xi|$ the polynomials are effectively oscillatory and more technically troublesome to work with. Thankfully, as we are more concerned with approximating key integrals where we know the value ($1$ or $0$ by orthonormality) over the real line, we do not need to delve closely into the analysis for small $|\xi|$, and understanding the behavior for large $|\xi|$ is sufficient.
The key technical results to bound $\psi_k$ in the monotonic region are presented in the following lemma, where the motivating idea is to show that the polynomial $\psi_k(\xi)$ is tightly bounded by an envelope with a well behaved exponential parameter, denoted by $\eta_k(\xi)$. Due to the length of the proof we delay the proof to Appendix A. \begin{lem} \label{lem:PolyBeh} Let $C_k$ and $\sigma_k$ be as in (\ref{eqn:BetterAsym}), and define the function $\eta_k(\xi)$ implicitly by, \begin{align} \label{eqn:ExpApproximation} \frac{1}{C_k}\exp\left(\frac{\xi^2-\sigma_k \xi-k}{2}\right)(\sigma_k+\xi)^k\sqrt{\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)}&=\exp\left(\eta_k(\xi)\xi^2\right). \end{align}
That is, we approximate $|\psi_k(\xi)|$ by $\exp\left(\eta_k(\xi)\xi^2\right)$ with the exponent $\eta_k(\xi)$ implicitly defined by the approximation in (\ref{eqn:BetterAsym}).
It follows that \begin{enumerate} \item For $\epsilon> 0$, $\mathop{\lim}\limits_{k\rightarrow\infty}\eta_k(\sqrt{(2+\epsilon)k+1})= \frac{1}{2}-\frac{\log(2)}{2(2+\epsilon)}$. \item For a sequence of $\epsilon_k>0$ such that $\epsilon_k\rightarrow 0$ as $k\rightarrow\infty$, and for $\xi_1>\xi_0\ge\sqrt{(2+\epsilon_k)k+1}$, $\eta_k(\xi_1)<\eta_k(\xi_0)$. \item For a sequence of $\epsilon_k$ such that $\epsilon_k\rightarrow 0$, some finite $K$ and $k_1\ge K$, $k_0<k_1$, and for $\xi\ge\sqrt{(2+\epsilon_{k_1})k_1+1}$, it follows that $\eta_{k_0}(\xi)<\eta_{k_1}(\xi)$. \end{enumerate} \end{lem}
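The following snippet is a small numerical sanity check (not part of the proof) of the envelope approximation and of the monotonicity claims in points 2 and 3 of the lemma; here $c_k$ is set to $1$, and the function names are ours.

```python
import math

def psi(k, x):
    # orthonormalized physicists' Hermite polynomial via a stable recurrence
    p_prev, p_cur = 0.0, 1.0
    for n in range(k):
        p_prev, p_cur = p_cur, (x * math.sqrt(2.0 / (n + 1)) * p_cur
                                - math.sqrt(n / (n + 1)) * p_prev)
    return p_cur

def log_envelope(k, x):
    # logarithm of the right-hand side of (eqn:BetterAsym) with c_k set to 1,
    # valid in the monotonic region x > sqrt(2k); C_k = sqrt(2^k k!)
    sigma = math.sqrt(x * x - 2.0 * k)
    log_Ck = 0.5 * (k * math.log(2.0) + math.lgamma(k + 1.0))
    return ((x * x - sigma * x - k) / 2.0 + k * math.log(sigma + x)
            + 0.5 * math.log(0.5 * (1.0 + x / sigma)) - log_Ck)

def eta(k, x):
    # the implicit exponent eta_k(x) of (eqn:ExpApproximation)
    return log_envelope(k, x) / (x * x)

# the envelope is already tight at k = 2: within about 1% at x = 3
ratio = math.exp(log_envelope(2, 3.0)) / psi(2, 3.0)
```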
Using Lemma~\ref{lem:PolyBeh}, we show the following result which is useful for a direct bound on the coherence parameter. \begin{lem} \label{lem:CohBounds} For some choice of $\epsilon_p$ such that $\epsilon_p\rightarrow 0$, and $p\ge p_0$ for some $p_0$ it follows that for $r_p\ge \sqrt{(2+\epsilon_p)p+1}$, \begin{align} \label{eqn:CohBound1}
\mathop{\sup}\limits_{k\le p}\int_{|\xi|>r_p}\psi^2_k(\xi)\frac{e^{-\xi^2}}{\sqrt{\pi}}d\xi&\le\frac{(1+\delta_p)\mbox{erfc}(\sqrt{(1-2\eta_p(r_p))r_p^2})}{\sqrt{1-2\eta_p(r_p)}}, \end{align} where $\mbox{erfc}(\cdot)$ is the complement to the error function and $\delta_p\rightarrow 0$. Considering multidimensional polynomials and letting $\delta_p\rightarrow 0$, \begin{align} \label{eqn:CohBound2}
\mathop{\sup}\limits_{\substack{\|\bm{\xi}\|_2\le r_p\\ \|\bm{k}\|_1\le p}}|\psi_{\bm{k}}(\bm{\xi})|\le (1+\delta_p)\exp\left(\eta_p(r_p)r_p^2\right). \end{align} Here, $\eta_p$ is defined implicitly as in (\ref{eqn:ExpApproximation}), or equivalently, explicitly as in (\ref{eqn:ExpApp2}). \end{lem} \begin{proof}
To show the first point, note from Lemma~\ref{lem:PolyBeh} that for $|\xi|\ge\sqrt{2p+1}$, \begin{align*}
|\psi_k(\xi)|\le c_k\exp(\eta_k(\xi)\xi^2). \end{align*} By the second point of Lemma~\ref{lem:PolyBeh}, $\eta_k(\xi)$ decreases as $\xi$ increases, yielding the bound on the integral in (\ref{eqn:CohBound1}), and by the third point of that Lemma, $\eta_p(\xi)\ge \eta_k(\xi)$ for all $k\le p$. The $\delta_p$ accounts for the approximation in (\ref{eqn:BetterAsym}) being inaccurate for finite $p$, but we do not address how quickly $\delta_p$ converges to zero.
To show (\ref{eqn:CohBound2}) note that for column vectors $\bm{\eta}$ and $\bm{\xi}^2$ with coordinates representing coordinates of $\eta$ and $\xi^2$ in each dimension, $|\bm{\eta}^T\bm{\xi}^2|\le\|\bm{\eta}\|_\infty\|\bm{\xi}^2\|_1$, with equality holding if $\|\bm{\eta}\|_\infty$ is achieved at the one coordinate on which $\bm{\xi}^2$ is supported. Further noting that $\|\bm{\xi}^2\|_1=\|\bm{\xi}\|_2^2$, it follows from the third point of Lemma~\ref{lem:PolyBeh} that, for large enough $p$, \begin{align*}
\mathop{\sup}\limits_{\substack{\|\bm{\xi}\|_2\le r_p\\ \|\bm{k}\|_1\le p}}|\psi_{\bm{k}}(\bm{\xi})|&\le (1+\delta_p)|\psi_{p}(r_p)|, \end{align*} where the bound on $\psi_p$ from (\ref{eqn:BetterAsym}) shows (\ref{eqn:CohBound2}). \end{proof}
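As an illustrative numerical check of (\ref{eqn:CohBound1}) (not part of the proof), the tail integral can be computed by quadrature for the single order $k=p=10$ with $\epsilon_p=0.5$, and compared to the right-hand side with the $(1+\delta_p)$ slack omitted; at this modest $p$ the inequality already holds with room to spare.

```python
import math

def psi(k, x):
    # orthonormalized physicists' Hermite polynomial via a stable recurrence
    p_prev, p_cur = 0.0, 1.0
    for n in range(k):
        p_prev, p_cur = p_cur, (x * math.sqrt(2.0 / (n + 1)) * p_cur
                                - math.sqrt(n / (n + 1)) * p_prev)
    return p_cur

def eta(k, x):
    # implicit exponent of (eqn:ExpApproximation) with c_k set to 1
    sigma = math.sqrt(x * x - 2.0 * k)
    log_Ck = 0.5 * (k * math.log(2.0) + math.lgamma(k + 1.0))
    return ((x * x - sigma * x - k) / 2.0 + k * math.log(sigma + x)
            + 0.5 * math.log(0.5 * (1.0 + x / sigma)) - log_Ck) / (x * x)

def tail(k, r, upper=12.0, steps=20000):
    # int_{|x| > r} psi_k(x)^2 exp(-x^2)/sqrt(pi) dx, by the midpoint rule
    # (the integrand is even, hence the factor of 2)
    h = (upper - r) / steps
    s = sum(psi(k, r + (i + 0.5) * h) ** 2 * math.exp(-(r + (i + 0.5) * h) ** 2)
            for i in range(steps))
    return 2.0 * s * h / math.sqrt(math.pi)

p, eps_p = 10, 0.5
r_p = math.sqrt((2.0 + eps_p) * p + 1.0)
e = eta(p, r_p)
bound = math.erfc(math.sqrt((1.0 - 2.0 * e) * r_p ** 2)) / math.sqrt(1.0 - 2.0 * e)
actual = tail(p, r_p)
```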
\subsection{Proof of Theorem~\ref{thm:NatSampleCoherence}} \label{subsec:thmNSCH}
With Lemma \ref{lem:CohBounds} we are prepared to prove Theorem~\ref{thm:NatSampleCoherence} for the case of Hermite polynomials. Let $\mathcal{S} = \{\bm{\xi}:\|\bm{\xi}\|_2\le r_p\}$ where $r_p$ is as in Lemma~\ref{lem:CohBounds}, and we show that the conditions for (\ref{Eqn:CoherenceUnbounded}) are satisfied. Let the total number of polynomials be given by $P={p +d\choose d}$ where $p$ is the total order of the approximation and $d$ the number of dimensions. Recall that the number of samples from the orthogonal polynomial basis is $N$. We show that \begin{align*} \mathbb{P}(\mathcal{S}^c) = \mbox{erfc}(r_p)&<\frac{1}{NP};\\ \mathop{\sum}\limits_{k=1}^{P}\mathbb{E}\left[\psi^2_k(\bm{Z})\bm{1}_{\mathcal{S}^{c}}\right]\le \frac{P(1+\delta_p)\mbox{erfc}(\sqrt{(1-2\eta_p(r_p))r_{p}^2})}{\sqrt{1-2\eta_p(r_p)}}& < \frac{1}{20\sqrt{P}}, \end{align*} where $\bm{Z}$ is normally distributed with variance $1/2$, and we recall that substituting $\bm{Z}^{\prime} = \sqrt{2}\bm{Z}$ scales the physicists' polynomials to probabilists' polynomials.
By Lemma~\ref{lem:CohBounds} these are satisfied whenever \begin{align*} \mbox{erfc}(r_p)&<\frac{1}{NP};\\ (1+\delta_p)\frac{\mbox{erfc}\left(\sqrt{(1-2\eta_p(r_p))r_p^2}\right)}{\sqrt{1-2\eta_p(r_p)}}& < \frac{1}{20P^{3/2}}. \end{align*} Noting that $\delta_p\rightarrow 0$, and $\mbox{erfc}(r_p)=O(e^{-r_p^2}/r_p)=O(\exp(-(2+\epsilon_p)p)/\sqrt{(2+\epsilon_p)p})$~\cite{NISTFunctionHandbook}, it follows that the first inequality is satisfied for $r_p=\sqrt{(2+\epsilon_p)p+1}$ if $NP = o(\sqrt{(2+\epsilon_p)p}\exp((2+\epsilon_p)p))$. Recall that we assume that $N=O(P^k)$, and it remains to show that $P^k = o(\sqrt{p}\exp((2+\epsilon_p)p))$, which we address shortly.
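For a concrete sense of how slack the first condition is, the following snippet evaluates it at one illustrative parameter choice ($p=10$, $d=2$, $\epsilon_p=0.5$, and $N=P$, i.e., $N=O(P^k)$ with $k=1$); the numbers are ours and serve only as an example.

```python
import math

# one concrete instance of the first condition: P(S^c) = erfc(r_p) < 1/(N P)
p, d, eps_p = 10, 2, 0.5
P = math.comb(p + d, d)          # number of basis polynomials of total order <= p
N = P                            # an illustrative choice with N = O(P^k), k = 1
r_p = math.sqrt((2.0 + eps_p) * p + 1.0)
lhs, rhs = math.erfc(r_p), 1.0 / (N * P)
```

Here `lhs` is of order $10^{-13}$ while `rhs` is of order $10^{-4}$, so the condition holds by many orders of magnitude.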
From the first point of Lemma~\ref{lem:PolyBeh}, we see for large $p$ that $1-2\eta_p(\sqrt{(2+\epsilon_p)p+1})\ge c$ for a positive constant $c$. The second inequality is then satisfied for $r_p$ if $P^{3/2} = o(\sqrt{p}\exp(c_\epsilon p))$ for an appropriate constant $c_\epsilon>0$ depending on $\epsilon_p$.
It remains to ensure that both of these bounds allow $\epsilon_p$ to go to zero. If $d$ is fixed, this holds as $P=O(p^d)= o(\exp(\delta p))$ for any $\delta>0$, establishing the bounds for both conditions for a fixed $d$. We consider the case where $d = c\cdot p$ for some $c>0$. Using Stirling's approximation we have that \begin{align*} P &= \frac{(p+d)!}{p!d!}=\frac{((c+1)p)!}{p!(cp)!},\\
&\approx \sqrt{\frac{c+1}{pc2\pi}}(c+1)^p\left(1+\frac{1}{c}\right)^{cp}, \end{align*} where the approximation holds with arbitrarily high accuracy as $p,d\rightarrow\infty$. This gives us that for large $p$,
\begin{align*} P &\approx \sqrt{\frac{c+1}{pc2\pi}}\beta^p, \end{align*}
where $\beta = (c+1)^{c+1}/c^c$. Note that $\beta$ goes to $1$ as $c\rightarrow 0$. It follows in the limit that \begin{align*} P^k &\approx \left(\frac{\sqrt{c+1}}{\sqrt{cp2\pi}}\right)^k\exp(\alpha k p), \end{align*} where $\alpha = \log(\beta)\rightarrow 0$. As $c\cdot p=d > 0$, it follows that $P^k = o(\sqrt{p}\exp(\delta p))$ for any fixed $k,\delta > 0$, establishing both inequalities needed for the conditions of (\ref{Eqn:CoherenceUnbounded}) when $d=o(p)$ and $N=O(P^k)$.
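The quality of this Stirling-based approximation is easy to check numerically; the snippet below (illustrative only, with parameter values of our choosing) compares it to the exact binomial coefficient at $p=40$, $c=1/2$, where the relative error is already below one percent.

```python
import math

def P_exact(p, d):
    # P = (p + d)! / (p! d!)
    return math.comb(p + d, d)

def P_stirling(p, c):
    # the Stirling-based approximation above, with d = c * p and
    # beta = (c + 1)^{c + 1} / c^c
    beta = (c + 1.0) ** (c + 1.0) / c ** c
    return math.sqrt((c + 1.0) / (p * c * 2.0 * math.pi)) * beta ** p

p, c = 40, 0.5
rel_err = abs(P_stirling(p, c) / P_exact(p, int(c * p)) - 1.0)
```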
Having shown $\mathcal{S}$ is acceptable, we now bound $\mu(\bm{\Xi})$ with this choice of $\mathcal{S}$. By Lemma~\ref{lem:CohBounds} and the definition of $\eta_k$ therein, together with the bounds in Table~\ref{Tab:Bounds} we have that \begin{align*} \mu(\bm{\Xi})&\le \exp\left(2\eta_p(r_p)r_p^2\right),\\ &= \exp\left(\left[1-\frac{\log(2)}{2+\epsilon_p}+o(1)\right]\epsilon_p\right)\exp\left(\left[1-\frac{\log(2)}{2+\epsilon_p}+o(1)\right]2p\right). \end{align*} Letting \begin{align*} C_p:=&\exp\left(\left[1-\frac{\log(2)}{2+\epsilon_p}+o(1)\right]\epsilon_p\right);\\ \eta_p:=&\exp\left(2\left[1-\frac{\log(2)}{2+\epsilon_p}+o(1)\right]\right), \end{align*} it follows that \begin{align*} \mu(\bm{\Xi})&\le C_p\eta_p^p. \end{align*} As $\epsilon_p \rightarrow 0$, it follows that \begin{align*} C_p&\rightarrow 1;\\ \eta_p&\rightarrow\exp(2-\log(2))\approx 3.6945. \end{align*}
\subsection{Proof of Theorem~\ref{thm:TransformedSamples}} \label{subsec:ProofTransformedHermite}
Here, we consider a transformation such that $\phi_k(\bm{\xi}) = \psi_k(\bm{\xi})/G(\bm{\xi})$, with $G(\bm{\xi})>0$ so that $|\phi_k(\bm{\xi})| \le C$ uniformly in $k$ and $\bm{\xi}$ for some constant $C$. We may then use that $\psi_k(\bm{\xi}) = \phi_k(\bm{\xi})G(\bm{\xi})$ to identify $\psi_k(\bm{\xi})$ and satisfy the conditions of (\ref{Eqn:CoherenceUnbounded}). We note that this approach corresponds to a weight function $w(\bm{\xi}) = 1/G(\bm{\xi})$. In this framework, we sample $\{\psi_k(\bm{\xi})\}$ from a distribution proportional to $f(\bm{\xi})G^2(\bm{\xi})$, where $f(\bm{\xi})$ is the distribution with respect to which the $\psi_k(\bm{\xi})$ are orthogonal, and use that the $\{\phi_k(\bm{\xi})\}$ form a bounded and approximately orthogonal system.
By (\ref{eqn:CohBound2}) of Lemma~\ref{lem:CohBounds} and the bounds in Table~\ref{Tab:Bounds}, for $\|\bm{\xi}\|_2\le \sqrt{(2+\epsilon_p)p+1}$ and $k\le p$, \begin{align} \label{eqn:hermite_function_bound}
|\psi_k(\bm{\xi})\exp(-\|\bm{\xi}\|_2^2/2)|&\le C, \end{align}
which suggests taking $G(\bm{\xi})=\exp(\|\bm{\xi}\|^2/2)$. \\
\noindent{\bf Remark.} Notice that the function on the left-hand side of the inequality in (\ref{eqn:hermite_function_bound}) is referred to as the Hermite function, whose upper bound $C$ is explicitly known, for instance, from \cite{Abramowitz10}. \\
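As a concrete version of this remark, Cram\'er's inequality gives $|\psi_k(\xi)|e^{-\xi^2/2}\le K$ in our normalization, with the classical constant $K\approx 1.086435$; the snippet below (illustrative only) confirms this by a grid search over the first $26$ Hermite functions, where the maximum is in fact $1$, attained by $\psi_0$ at $\xi=0$.

```python
import math

CRAMER_K = 1.086435   # classical constant: |psi_k(x)| * exp(-x^2/2) < K

def psi(k, x):
    # orthonormalized physicists' Hermite polynomial via a stable recurrence
    p_prev, p_cur = 0.0, 1.0
    for n in range(k):
        p_prev, p_cur = p_cur, (x * math.sqrt(2.0 / (n + 1)) * p_cur
                                - math.sqrt(n / (n + 1)) * p_prev)
    return p_cur

# grid search of the Hermite functions for k = 0, ..., 25 over [-8, 8]
max_val = max(abs(psi(k, i / 100.0 - 8.0)) * math.exp(-(i / 100.0 - 8.0) ** 2 / 2.0)
              for k in range(26) for i in range(1601))
```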
From the argument in the proof of Theorem~\ref{thm:NatSampleCoherence} for the case of one dimensional polynomials, \begin{align*}
\left|\frac{1}{\sqrt{\pi}}\int_{|\xi|\le \sqrt{(2+\epsilon_p)p+1}}(\psi_i(\xi)\exp(-\xi^2/2))(\psi_j(\xi)\exp(-\xi^2/2))d\xi -\delta_{i,j}\right|&\le \epsilon_{i,j}, \end{align*} where $\epsilon_{i,j}$ is small enough to ensure that the conditions of (\ref{Eqn:CoherenceUnbounded}) hold. Considering a corresponding change for multidimensional polynomials, let \begin{align*}
\phi_k(\bm{\xi})&=\pi^{-d/4}\exp(-\|\bm{\xi}\|_2^2/2) V^{1/2}\left(\sqrt{(2+\epsilon_p)p+1},d\right)\psi_{k}(\bm{\xi}), \end{align*} where $V(r,d)=(r\sqrt{\pi})^d/\Gamma(d/2+1)$ represents the volume of a $d$-dimensional ball of radius $r$.
If we instead consider a draw from the uniform distribution on the ball of radius $\sqrt{(2+\epsilon_p)p+1}$, then for a $d$-dimensional $\bm{\xi}$,
\begin{align*}
\left|\int_{\|\bm{\xi}\|_2\le \sqrt{(2+\epsilon_p)p+1}}\frac{\phi_i(\bm{\xi})\phi_j(\bm{\xi})}{V(\sqrt{(2+\epsilon_p)p+1},d)}d\bm{\xi} -\delta_{i,j}\right|&\le \epsilon_{i,j}, \end{align*}
and we have that $|\phi_k(\bm{\xi})|$ is bounded, and of order $\pi^{-d/4}V^{1/2}(\sqrt{(2+\epsilon_p)p+1},d)$, giving a bound on the coherence parameter of order $\pi^{-d/2}V(\sqrt{2p},d)$.
\subsection{Key Legendre Lemma} \label{subsec:keyLegendre} A key technical simplification is present when working with Legendre polynomials, namely, we may fix $\mathcal{S}$ to be $[-1,1]^d$, since a finite set of polynomials on a bounded domain is necessarily bounded. The technical results we require are presented in the following Lemma.
\begin{lem} \label{lem:Legendre} For the $1$-dimensional Legendre polynomials, \begin{align} \label{eqn:JacBoundChebNat}
\sup_{\xi\in[-1,1]}|\psi_k(\xi)|&=\sqrt{2k+1}. \end{align} Further, \begin{align} \label{eqn:JacBoundChebTrans}
\sup_{\xi\in[-1,1]}\sqrt{\pi/2}\,(1-\xi^2)^{1/4}|\psi_k(\xi)|&\le\sqrt{\frac{2k+1}{k}} \le \sqrt{3}. \end{align} \end{lem} \begin{proof} These are classical results, with (\ref{eqn:JacBoundChebNat}) following from Theorem 7.32.1 of~\cite{Szego}. We note that a direct application of these theorems does require normalizing the polynomials to be orthonormal. Similarly, (\ref{eqn:JacBoundChebTrans}) follows from Theorem 7.3.3 of~\cite{Szego} and is a direct restatement of Lemma 5.1 of~\cite{RauhutWard}. \end{proof}
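Both bounds are easy to verify numerically. The snippet below (illustrative only) uses Bonnet's recursion for the Legendre polynomials, normalized to be orthonormal with respect to the uniform density $1/2$ on $[-1,1]$, and applies the Chebyshev preconditioning weight $\sqrt{\pi/2}\,(1-\xi^2)^{1/4}$, the normalization under which Lemma 5.1 of~\cite{RauhutWard} yields the $\sqrt{3}$ bound.

```python
import math

def psi_leg(k, x):
    # Legendre polynomials via Bonnet's recursion, scaled by sqrt(2k+1) to be
    # orthonormal with respect to the uniform density 1/2 on [-1, 1]
    p_prev, p_cur = 0.0, 1.0
    for n in range(k):
        p_prev, p_cur = p_cur, ((2 * n + 1) * x * p_cur - n * p_prev) / (n + 1)
    return math.sqrt(2 * k + 1) * p_cur

# (eqn:JacBoundChebNat): the sup is attained at the endpoints, psi_k(1) = sqrt(2k+1)
endpoint_ok = all(abs(psi_leg(k, 1.0) - math.sqrt(2 * k + 1)) < 1e-12
                  for k in range(1, 12))

def cheb_sup(k, n_grid=10000):
    # grid maximum of sqrt(pi/2) (1 - x^2)^{1/4} |psi_k(x)| over [-1, 1]
    return max(math.sqrt(math.pi / 2.0) * (1.0 - x * x) ** 0.25 * abs(psi_leg(k, x))
               for x in (i / (n_grid / 2.0) - 1.0 for i in range(n_grid + 1)))
```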
\subsection{Proof of Theorems~\ref{thm:NatSampleCoherenceLeg} and~\ref{thm:TransformedSamplesLeg}} \label{subsec:JacProofNatSamp} To show Theorem~\ref{thm:NatSampleCoherenceLeg}, we note that when $p\le d$, (\ref{eqn:LegNatBound}) follows from \begin{align*}
\mu(\bm{Y})&\le\mathop{\max}\limits_{\|\bm{k}\|_1\le p}\mathop{\prod}\limits_{i=1}^d\|\psi_{k_i}\|^2_{\infty}\\ &\le 3^p \le \exp(2p), \end{align*} where we note that at most $p$ of the $d$ dimensions can be non-constant polynomials. Similarly, when $p>d$,
\begin{align*}
\mu(\bm{Y})&\le\mathop{\max}\limits_{\|\bm{k}\|_1\le p}\mathop{\prod}\limits_{i=1}^d\|\psi_{k_i}\|^2_{\infty}\\ &\le \left(\frac{2p}{d}+1\right)^{d}\\ &\le \exp(2p), \end{align*}
where the third bound is loose for small $d$.
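The product bound above can be checked by brute force for small $p$ and $d$; the snippet below (illustrative only) enumerates all multi-indices with $\|\bm{k}\|_1\le p$ and confirms both the $3^p$ value when $p\le d$ and the $(2p/d+1)^d\le\exp(2p)$ chain when $p>d$.

```python
import math
from itertools import product

def mu_bound(p, d):
    # brute-force max over multi-indices with ||k||_1 <= p of prod_i (2 k_i + 1),
    # i.e. the product of the squared sup-norms sup|psi_{k_i}|^2 = 2 k_i + 1
    best = 0
    for k in product(range(p + 1), repeat=d):
        if sum(k) <= p:
            best = max(best, math.prod(2 * ki + 1 for ki in k))
    return best
```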
To show (\ref{eqn:LegTransformedSample}), note that (\ref{eqn:JacBoundChebTrans}) implies that when sampling from the Chebyshev distribution, independent of $p$, \begin{align*}
\mu(\bm{Y})&\le\mathop{\max}\limits_{\|\bm{k}\|_1\le p}\mathop{\prod}\limits_{i=1}^d\|\psi_{k_i}\|^2_{\infty}\\ &\le \mathop{\prod}\limits_{i=1}^d\frac{2k_i+1}{k_i}\le 3^d. \end{align*}
\subsection{Proof of Theorem~\ref{thm:LowPTransformedSamples}} \label{subsec:MCMCProof} The proof of Theorem~\ref{thm:LowPTransformedSamples} follows from a similar logic to the other proofs, but is approachable in a more general measure theoretic setting. By the definition of $B(\bm{\xi})$ in (\ref{eqn:btspec}), we have for all $\bm{\xi}\in\mathcal{S}$ that \begin{align*}
\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{B(\bm{\xi})} = 1. \end{align*} This shows that sampling $\bm{Y}$ according to $B(\bm{\xi})$ gives a $\mu(\bm{Y})$ which is achieved uniformly over all values of $\bm{\xi}$. Let \begin{align*} c=\left(\int_{\mathcal{S}}f(\bm{\xi})B^2(\bm{\xi})d\bm{\xi}\right)^{-1/2}; \end{align*} that is, $c^2$ normalizes $f(\bm{\xi})B^2(\bm{\xi})$ to a probability distribution on $\mathcal{S}$. Then for $i,j=1:P$, \begin{align*} \int_{\mathcal{S}}\frac{\psi_{i}(\bm{\xi})}{cB(\bm{\xi})}\frac{\psi_{j}(\bm{\xi})}{cB(\bm{\xi})}c^2f(\bm{\xi})B^2(\bm{\xi})d\bm{\xi} =\int_{\mathcal{S}}\psi_{i}(\bm{\xi})\psi_{j}(\bm{\xi})f(\bm{\xi})d\bm{\xi} \approx \delta_{i,j}, \end{align*} and we assume that $\mathcal{S}$ is chosen so that the approximation holds to within the requirements of (\ref{Eqn:CoherenceUnbounded}). As \begin{align} \label{eqn:boundBt}
\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{cB(\bm{\xi})} = c^{-1}, \end{align} for all $\bm{\xi}\in\mathcal{S}$, it follows that the coherence parameter for the scheme associated with sampling from the distribution $c^2f(\bm{\xi})B^2(\bm{\xi})$ and $\bm{\xi}\in\mathcal{S}$ is $c^{-2}$.
We define the measure $\nu$ on Lebesgue measurable subsets of $\mathcal{S}$, denoted by $\mathcal{A}$, via \begin{align*} \nu(\mathcal{A}):=\int_{\mathcal{A}}f(\bm{\xi})d\lambda(\bm{\xi}), \end{align*} where $\lambda(\bm{\xi})$ is the Lebesgue measure, and $f$ is the distribution with respect to which the $\{\psi_{k}(\bm{\xi})\}$ are orthogonal. Let $\hat{B}$ be a function differing from $B$ on a set of non-zero $\nu$-measure, so that the sampling scheme corresponding to $\hat{B}$ differs on a set of non-zero $\nu$-measure. By (\ref{Eqn:CoherenceUnbounded}), no subset $\mathcal{S}_s$ of $\mathcal{S}$ with $\mu(\mathcal{S}_s)<\mu(\mathcal{S})$ satisfies the conditions of (\ref{Eqn:CoherenceUnbounded}), implying that $\hat{B}$ may not be infinite (corresponding to applying a weight of zero) on any set of positive measure and still satisfy these conditions. As (\ref{eqn:boundBt}) is achieved for all $\bm{\xi}\in\mathcal{S}$, it follows that for the sampling scheme associated with $\hat{B}$, there is a set $\mathcal{A}_{\star}$ with $\nu(\mathcal{A}_{\star})>0$ such that \begin{align*}
\int_{\mathcal{A}_{\star}}\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{\hat{c}\hat{B}(\bm{\xi})}d\lambda(\bm{\xi})> \lambda(\mathcal{A}_{\star})c^{-1}, \end{align*} and it follows by the Mean Value Theorem for integrals that \begin{align*}
\mathop{\sup}\limits_{\bm{\xi}\in\mathcal{A}_{\star}}\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{\hat{c}\hat{B}(\bm{\xi})}>c^{-1}. \end{align*} This implies that, \begin{align*}
\mathop{\sup}\limits_{\bm{\xi}\in\mathcal{S}}\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{\hat{c}\hat{B}(\bm{\xi})}\ge\mathop{\sup}\limits_{\bm{\xi}\in\mathcal{A}_{\star}}\mathop{\sup}\limits_{k=1:P}\frac{|\psi_{k}(\bm{\xi})|}{\hat{c}\hat{B}(\bm{\xi})}>c^{-1}. \end{align*} It follows by the definition of $\mu(\bm{Y})$ given in (\ref{Eqn:CoherenceUnbounded}) for the selected $\mathcal{S}$ that the coherence parameter $\mu(\bm{Y})$ for the sampling scheme associated with $\hat{B}$ is larger than $c^{-2}$.
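A discrete analogue of this optimality argument is easy to exhibit numerically. The snippet below (illustrative only; the grid, weights, and perturbation are of our choosing) builds $B$ from a small Legendre basis on a grid, checks that its coherence equals $c^{-2}=\int f B^2$, and that a non-constant perturbation $\hat{B}$ strictly increases the coherence.

```python
import math
import random

def psi_leg(k, x):
    # orthonormal Legendre polynomials (uniform density 1/2 on [-1, 1])
    p_prev, p_cur = 0.0, 1.0
    for n in range(k):
        p_prev, p_cur = p_cur, ((2 * n + 1) * x * p_cur - n * p_prev) / (n + 1)
    return math.sqrt(2 * k + 1) * p_cur

P = 5
grid = [i / 200.0 - 1.0 for i in range(401)]       # discrete stand-in for S
f = [1.0 / len(grid)] * len(grid)                  # discrete orthogonality weights

def coherence(B_fun):
    # coherence of the scheme sampling from c^2 f B^2 (discrete analogue)
    c2 = 1.0 / sum(fi * b * b for fi, b in zip(f, B_fun))
    return max(max(psi_leg(k, x) ** 2 for k in range(P)) / (c2 * b * b)
               for x, b in zip(grid, B_fun))

B = [max(abs(psi_leg(k, x)) for k in range(P)) for x in grid]   # as in (eqn:btspec)

random.seed(1)
B_hat = [b * (1.0 + 0.3 * random.random()) for b in B]          # perturbed scheme
```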
\section{Conclusions} \label{sec:conc} We provided an analysis of Hermite and Legendre polynomials which allowed us to bound a coherence parameter and generate recovery guarantees for sparse polynomial chaos expansions obtained via $\ell_1$-minimization. We also identified alternative random sampling schemes which provide sharper guarantees and demonstrably improved polynomial chaos reconstructions relative to random sampling from the orthogonality measure of these bases. These sampling methods were derived based on the properties of Hermite and Legendre polynomials. Furthermore, we presented a Markov chain Monte Carlo method for generating samples that minimize the coherence parameter, thereby achieving optimality in the number of random solution realizations. Such sampling was referred to as coherence-optimal sampling.
The sampling methods were compared on arbitrary manufactured stochastic functions, and the different sampling strategies were tested for identifying the solution of a 20-dimensional elliptic boundary value problem, where positive results were attained for the coherence-optimal sampling method. Similarly positive results were observed when computing the solution to a non-linear ordinary differential equation, where a high order Hermite polynomial chaos expansion was needed for an accurate solution approximation.
\section*{\texorpdfstring{Appendix A: Proof of Lemma~\ref{lem:PolyBeh}}{Appendix A: Proof of Lemma}} \label{sec:App}
\begin{proof} We may rewrite (\ref{eqn:ExpApproximation}) as \begin{align} \label{eqn:ExpApp2} \eta_k(\xi)&=\frac{1}{2}-\frac{\sigma_k}{2\xi}-\frac{\log(C_k)}{\xi^2}-\frac{k}{2\xi^2}+\frac{k\log(\sigma_k +\xi)}{\xi^2}+ \frac{\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)}{2\xi^2}. \end{align} A straightforward, but lengthy algebraic substitution for $\xi$ yields that for any $\epsilon>0$, as $k\rightarrow\infty$, \begin{align} \label{eqn:exponentialConstantApproximation} \eta_k(\sqrt{(2+\epsilon)k})=\frac{1}{2}-\frac{\log(2)}{2(2+\epsilon)}+o(1). \end{align}
To show the second point of the Lemma note that $\sigma_k=\sqrt{\xi^2-2k}$ implies that $\partial \sigma_k/\partial \xi=\xi/\sigma_k$, and differentiating the expression (\ref{eqn:ExpApp2}) with respect to $\xi$ gives \begin{align} \label{eqn:ExpDerivative} \frac{\partial\eta_k(\xi)}{\partial \xi} =&\frac{\sigma_k}{\xi^2}+\frac{2\log(C_k)}{\xi^3}+\frac{k}{\xi^3}+\frac{k}{\xi^2\sigma_k}\\ \nonumber &-\left(\frac{1}{2\sigma_k}+\frac{2k\log(\sigma_k+\xi)}{\xi^3}+\frac{\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)}{\xi^3}+\frac{\xi^2-\sigma_k^2}{2\xi^2\sigma_k^2(\sigma_k+\xi)}\right). \end{align} Using that $\sigma_k^2=\xi^2-2k$, the above may be rewritten as \begin{align*} &\left(2\sigma_k^2\xi^3\right)\frac{\partial\eta_k(\xi)}{\partial \xi} = \xi^2\left(2k+4\log(C_k)-1\right)+\xi\sigma_k\\ &-\left(2\sigma_k^2\left(2k\log(\sigma_k+\xi) +\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)\right)+4k(k+2\log(C_k))\right). \end{align*} It follows that $\partial\eta_k(\xi)/\partial \xi<0$ whenever \begin{align*} 2\sigma_k^2&\left(2k\log(\sigma_k+\xi) +\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)\right)+4k(k+2\log(C_k))\\
&> \xi^2\left(2k+4\log(C_k)-1\right)+\xi\sigma_k. \end{align*} Substituting $\sigma_k^2$ for $\xi$ and $k$, gives that this condition is equivalent to \begin{align*} \frac{\xi^2}{2k}&\left(2k\log(\sigma_k+\xi) +\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)-k-2\log(C_k)+\frac{1}{2}-\frac{\sigma_k}{2\xi}\right)\\
&>\left(2k\log(\sigma_k+\xi)+\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)-k-2\log(C_k)\right). \end{align*} From this form and using that $\sigma_k/\xi<1$, it follows that the derivative is negative for large enough $\xi$. More precisely, let \begin{align*} X_\xi &:= 2k\log(\sigma_k+\xi)+\log\left(\frac{1}{2}\left(1+\frac{\xi}{\sigma_k}\right)\right)-k-2\log(C_k);\\ Y_\xi &:= \frac{1}{2}-\frac{\sigma_k}{2\xi};\\ Z_\xi &:= \frac{\xi^2}{2k}, \end{align*} where we wish to identify $\xi$ such that $Z_\xi(X_\xi+Y_\xi)>X_\xi$, which is equivalent to $Z_\xi Y_\xi>(1-Z_\xi)X_\xi$. Let $\epsilon_k\ge 0$, and note that if $\xi\ge\sqrt{(2+\epsilon_k)k+1}$ then $\sigma_k/\xi<1$ which implies that $Y_\xi>0$. Further, for $\xi\ge\sqrt{(2+\epsilon_k)k+1}$, it follows that $Z_\xi>1$. We now identify an $\epsilon_k$ such that for $\xi\ge\sqrt{(2+\epsilon_k)k+1}$, we verify that $X_\xi>0$, from which it follows that $Z_\xi Y_\xi>(1-Z_\xi)X_\xi$. Note that $\epsilon_k\ge 0$, implies that $\sigma_k \ge 1$ and as such for $\xi\ge\sqrt{(2+\epsilon_k)k+1}$, \begin{align*} X_\xi&\ge 2k\log\left(\sqrt{(2+\epsilon_k)k}\right)-k-2\log(C_k). \end{align*} Recalling that $C_k = \sqrt{2^kk!}$, we conclude from properties of the Log Gamma function~\cite{LogGammaAnalysisBook} that \begin{align*} \log(k!)&=(k+1)\log(k+1)-(k+1)-\frac{1}{2}\log\left(\frac{k}{2\pi}\right) + O(k^{-1});\\ 2\log(C_k)&=k\log(2)+(k+1)\log(k+1)-(k+1)-\frac{1}{2}\log\left(\frac{k}{2\pi}\right) + O(k^{-1}). \end{align*} From this we may simplify terms, leading to a lower bound on $X_\xi$ for some $C>0$ given by \begin{align*} X_\xi&\ge k\log\left(\frac{(2+\epsilon_k)k}{2(k+1)}\right)+\left(1-\frac{\log(2\pi)}{2}\right) - \log\left(\frac{k+1}{\sqrt{k}}\right) -\frac{C}{k}. \end{align*} Noting that $1>\log(2\pi)/2$, it follows that we may guarantee that $X_\xi$ is positive for some sequence of $\epsilon_k>0$ which admits that $\epsilon_k\rightarrow 0$. 
It follows that we have a monotonic derivative for $\eta_k(\xi)$ with respect to $\xi$ for $\xi\ge\sqrt{(2+\epsilon_k)k}$, and thus conclude the second point of this Lemma.
To show the third point, we utilize a differential-difference equation~\cite{Szego} for orthonormal Hermite polynomials, \begin{align} \label{eqn:HermDiff} \sqrt{2(k+1)}\psi_{k+1}(\xi)&=2\xi\psi_k(\xi)-\psi_k^{\prime}(\xi). \end{align} We note here that the approximation of (\ref{eqn:ExpApproximation}) for $\psi_k$ from~\cite{Hermite1} extends to the derivative $\psi_k^{\prime}$, so that for sufficiently large $k$ the approximation in (\ref{eqn:ExpApproximation}) is arbitrarily accurate for $\psi_k$ and, when differentiated, for $\psi^{\prime}_k$. Differentiating with the use of the chain rule gives that \begin{align} \label{eqn:HermChain} \frac{\partial}{\partial \xi} \exp(\eta_k(\xi) \xi^2) = \exp(\eta_k(\xi) \xi^2)\left(\xi^2\frac{\partial \eta_k(\xi)}{\partial \xi} + 2\xi\eta_k(\xi)\right). \end{align} Plugging (\ref{eqn:ExpApp2}) and (\ref{eqn:ExpDerivative}) into (\ref{eqn:HermChain}), and in turn into (\ref{eqn:HermDiff}), we have that \begin{align*} \psi_{k+1}(\xi)&\approx \frac{2\xi\exp(\eta_k \xi^2)-\frac{\partial}{\partial \xi} \exp(\eta_k \xi^2)}{\sqrt{2(k+1)}},\\ & = \frac{\exp(\eta_k \xi^2)}{\sqrt{2(k+1)}}\left(\xi\left(1+\frac{\xi}{2\sigma_k}\right)+\frac{\xi^2-\sigma_k^2}{2\sigma_k^2(\sigma_k+\xi)}\right). \end{align*} It follows by (\ref{eqn:ExpApproximation}) that \begin{align} \nonumber \frac{\psi_{k+1}(\xi)}{\psi_{k}(\xi)}&\approx\frac{\exp(\eta_{k+1}(\xi)\xi^2)}{\exp(\eta_k(\xi)\xi^2)},\\ \label{eqn:HermiteRatio} &\approx\frac{1}{\sqrt{2(k+1)}}\left(\xi\left(1+\frac{\xi}{2\sigma_k}\right)+\frac{\xi^2-\sigma_k^2}{2\sigma_k^2(\sigma_k+\xi)}\right). \end{align}
Letting $k$ go to infinity, it follows that for some $\epsilon_k\rightarrow 0$ and $|\xi|>\sqrt{(2+\epsilon_{k+1})(k+1)+1}$, both approximations considered, that of $\exp(\eta_k(\xi)\xi^2)$ to $\psi_k(\xi)$ and that of $\partial \exp(\eta_k(\xi)\xi^2)/\partial\xi$ to $\psi^{\prime}_k(\xi)$, are arbitrarily accurate~\cite{Hermite1}. If the right-hand side of (\ref{eqn:HermiteRatio}) is larger than 1, and $k$ is large enough so that the approximation is sufficiently accurate for $|\xi|=\sqrt{(2+\epsilon_{k+1})(k+1)+1}$, then, as we now show, (\ref{eqn:HermiteRatio}) implies that $\eta_{k+1}(\xi)>\eta_k(\xi)$.
We note that larger $\epsilon_k,\epsilon_{k+1}$ complicate the proof, but small enough $\epsilon_k,\epsilon_{k+1}$ do not affect the comparisons made, and thus for brevity of presentation we set $\epsilon_k=\epsilon_{k+1}=0$ for the remainder of this proof. Note that the ratio $\psi_{k+1}(\xi)/\psi_{k}(\xi)$ is monotonically increasing for $|\xi|>\sqrt{2(k+1)+1}$ as $\partial \sigma_k/\partial \xi < 1$, and so it suffices to check when $|\xi|=\sqrt{2(k+1)+1},$ and thus $\sigma_k=\sqrt{3}$. In this case the ratio in (\ref{eqn:HermiteRatio}) satisfies \begin{align} \label{eqn:HermGap} \sqrt{\frac{2k+3}{2k+2}}+\sqrt{\frac{3}{2(k+1)}}+\frac{k/3}{\sqrt{2k+2}(\sqrt{2k+3}+\sqrt{3})}+\frac{k\sqrt{2}/\sqrt{3}}{\sqrt{k+1}}&\approx\frac{\psi_{k+1}(\xi)}{\psi_{k}(\xi)},\\ &>1, \end{align} as the first term is always larger than $1$ and all terms are positive, with the last term increasing in $k$. It follows that the lemma is established when both $k_1$ and $k_0$ are larger than some $K$ which ensures that both the approximation in (\ref{eqn:ExpApproximation}) is sufficiently accurate and that the gap in (\ref{eqn:HermGap}) is sufficiently large.
For $k_0<K$ note that $\sigma_{k_0}(\sqrt{2k_1+1})=\sqrt{2(k_1-k_0)+1}$ satisfies \begin{align*} \frac{\sigma_{k_0}(\sqrt{2k_1+1})}{\sqrt{2k_1+1}}&=\frac{\sqrt{2(k_1-k_0)+1}}{\sqrt{2k_1+1}}\rightarrow 1, \end{align*} as $k_1/k_0 \rightarrow \infty$, while $\sigma_{k_1}(\sqrt{2k_1+1})=1$ remains fixed. From the expression for $\eta_k(\xi)$ in (\ref{eqn:ExpApp2}) it follows for any $k_0$, a large enough $K$, and $k_1\ge K$ that $\eta_{k_0}(\sqrt{2k_1+1})<\eta_{k_1}(\sqrt{2k_1+1})$, showing the third point of this Lemma. \end{proof}
\end{document}
Models and algorithms for genome rearrangement with positional constraints
Krister M. Swenson1,2,
Pijus Simonaitis3 &
Mathieu Blanchette4
Traditionally, the merit of a rearrangement scenario between two gene orders has been measured based on a parsimony criterion alone; two scenarios with the same number of rearrangements are considered equally good. In this paper, we acknowledge that each rearrangement has a certain likelihood of occurring based on biological constraints, e.g. physical proximity of the DNA segments implicated or repetitive sequences.
We propose optimization problems with the objective of maximizing overall likelihood, by weighting the rearrangements. We study a binary weight function suitable for representing sets of genome positions that are most likely to have swapped adjacencies. We give a polynomial-time algorithm for the problem of finding a minimum weight double cut and join scenario among all minimum length scenarios. In the process we solve an optimization problem on colored noncrossing partitions, which is a generalization of the Maximum Independent Set problem on circle graphs.
We introduce a model for weighting genome rearrangements and show that under simple yet reasonable conditions, a fundamental distance can be computed in polynomial time. This is achieved by solving a generalization of the Maximum Independent Set problem on circle graphs. Several variants of the problem are also mentioned.
A huge body of work exists on modeling the evolution of whole chromosomes [1]. The main difference between such models is the set of rearrangements that they allow. The moves of interest are usually inversion, transposition, translocation, chromosome fission and fusion, deletion, insertion, and duplication.
Almost all versions of the problem are NP-Hard if content modifying operations such as duplication, loss, and insertion are allowed [2, 3]. Fortunately, a model that considers genomes with equal content (i.e., no duplications or insertions/deletions) is quite pertinent, particularly in eukaryotes, since syntenic blocks of genes can be assigned between genomes so that each block occurs exactly once in each genome. For two genomes with equal content, double cut and join (DCJ) has been the model of choice since it elegantly includes inversion, translocation, chromosome circularization and linearization, as well as chromosome fission and fusion [4, 5].
One of the most important problems in comparative genomics is the inference of ancestral gene orders, i.e., paleogenetics. Given a realistic model of evolution, one can infer ancestral adjacencies of high confidence from present-day genomes [6–8]. However, methods that attempt to infer deeper structure for ancestral species suffer due to the huge number of parsimonious scenarios between genomes [9–11].
The apparent difficulty of the ancestral inference problem—because of the potentially astronomical number of parsimonious sorting scenarios—highlights the importance of methods that infer scenarios that conform to some extra biological constraints. Yet, aside from methods that weight inversions based on their length [12–16], to our knowledge no algorithmic work exists in this direction.
In this paper we use a weight function on rearrangements suitable for modeling positional constraints, i.e., sets of positions in the genome that are likely to swap adjacencies. Two examples of constraints that fit this paradigm are: (1) the physical 3D location of DNA segments in a nucleus and, (2) repetitive sequences that are the cause or consequence of rearrangement mechanisms. We illustrate the utility of our model with 3D constraints in the "Positional constraints as colored adjacencies" section.
We propose a general optimization problem that minimizes the sum of weights over the moves in a scenario. A more constrained version of the problem asks for such a scenario out of all possible unweighted parsimonious scenarios. Our algorithm solves this version of the problem in polynomial time given a binary weight function, despite an exponential growth of the number of parsimonious DCJ scenarios with respect to the distance [17, 18]. The commutation properties of DCJ moves as studied in [17] link certain DCJ scenarios to noncrossing partitions. Our algorithm relies on solving a new optimization problem on colored noncrossing partitions, called Minimum Noncrossing Colored Partition. It is a generalization of the Maximum Independent Set problem on circle graphs [19–21].
Genomes as sets of signed integers
A gene, or more generally a syntenic block of genes, will be represented by a signed integer. A chromosome is a sequence of blocks, and a genome is a set of chromosomes. Thus, we write a genome in list notation where a block is a positive integer if read in one direction in the genome, and a negative integer if read in the opposite direction. For example, a genome A can be written as
$$\begin{aligned} \{(\circ ,5,-1,-2,6,-4,-8,\circ ), (\circ ,-3,7,\circ ), (9,10)\}, \end{aligned}$$
where \(\circ\) represents a telomere at the end of a linear chromosome. Genome A has two linear chromosomes and a circular chromosome (9, 10).
Alternatively, the organization of the blocks on the chromosomes can be given by the set of adjacencies between the extremities of consecutive blocks. A block b has a tail extremity, written \(b_t\), and a head extremity, written \(b_h\). Thus, the adjacency between 5 and \(-\)1 in A is \(\{5_h,1_h\}\). A block that is on the end of a linear chromosome implies a telomeric adjacency. The first chromosome has two such adjacencies: \(\{\circ ,5_t\}\) and \(\{8_t,\circ \}\). A circular chromosome has no telomeres, i.e., the last block is adjacent to the first. We can write genome A using adjacencies as
$$\begin{aligned} A=\big \{&\big \{\{\circ ,5_t\},\{5_h,1_h\},\{1_t,2_h\},\{2_t,6_t\},\{6_h,4_h\}, \{4_t,8_h\},\{8_t,\circ \}\big \}, \\&\big \{\{\circ ,3_h\},\{3_t,7_t\},\{7_h,\circ \}\big \}, \\&\big \{\{9_h,10_t\},\{10_h,9_t\}\big \} \big \}. \end{aligned}$$
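The adjacency-set representation above can be derived mechanically from the list notation. The following sketch (function and variable names are ours, not from the paper) encodes an extremity as a (block, 't'/'h') pair and a telomere as the token 'o':

```python
# Sketch: derive the adjacency-set representation of a genome from its
# list notation. A chromosome is (blocks, is_linear), blocks signed ints.

def adjacencies(chromosomes):
    """Return one frozenset of adjacencies per chromosome."""
    def left(b):   # extremity met first when reading block b
        return (abs(b), 't') if b > 0 else (abs(b), 'h')
    def right(b):  # extremity met last when reading block b
        return (abs(b), 'h') if b > 0 else (abs(b), 't')

    genome = []
    for blocks, is_linear in chromosomes:
        adj = set()
        for u, v in zip(blocks, blocks[1:]):
            adj.add(frozenset({right(u), left(v)}))
        if is_linear:  # telomeric adjacencies at both ends
            adj.add(frozenset({'o', left(blocks[0])}))
            adj.add(frozenset({right(blocks[-1]), 'o'}))
        else:          # circular: last block is adjacent to the first
            adj.add(frozenset({right(blocks[-1]), left(blocks[0])}))
        genome.append(frozenset(adj))
    return genome
```

Running this on genome A above reproduces, for the first chromosome, the five internal adjacencies plus the two telomeric ones, and the two adjacencies of the circular chromosome (9, 10).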
DCJ and sorting DCJs
Double cut and join (DCJ) is an operation on a genome that cuts one or two adjacencies, and glues the resulting ends back together according to the following rules [4]:
If a single adjacency is cut, then add new telomeres to the resulting ends (resulting in two new telomeric adjacencies).
If two adjacencies are cut, then glue the four resulting ends back together in one of two new ways.
Application of a single DCJ corresponds to diverse genomic operations such as inversion, chromosome linearization and circularization, transposition, and excision of a circular chromosome.
The colored adjacency graph G(A, B, col). Black edges are adjacency edges and gray edges are cross edges. The color function col maps adjacency edges of genome A to the alphabet \(\{a,b,c,d\}\)
The DCJ distance between genomes A and B is the minimum number of DCJ moves needed to transform A into B. DCJs that move A closer to B, called sorting DCJs, can be found using a graph. The colored adjacency graph for A and B is a graph G(A, B, col) whose vertices are the extremities and telomeres of A and B, and whose edges are colored by the color function col. For each adjacency in A or B an adjacency edge links the corresponding nodes of the adjacency, and a cross edge links non-telomere vertices from A to vertices with the same label in B. The graph for genomes
$$\begin{aligned} A=\big \{&\big \{\{\circ ,5_t\},\{5_h,1_h\},\{1_t,2_h\},\{2_t,6_t\},\{6_h,4_h\}, \{4_t,8_h\},\{8_t,\circ \}\big \}, \\&\big \{\{\circ ,3_h\},\{3_t,7_t\},\{7_h,\circ \}\big \} \big \}, \text {and} \\ B=\big \{&\big \{\{\circ ,1_t\},\{1_h,2_t\},\{2_h,3_t\},\{3_h,4_t\},\{4_h,5_t\}, \{5_h,6_t\},\{6_h,\circ \}\big \}, \\&\big \{\{\circ ,7_t\},\{7_h,8_t\},\{8_h,\circ \}\big \} \big \} \end{aligned}$$
is given in Fig. 1. It is easy to confirm that the adjacency and cross edges each form a matching, so that each connected component of the graph will be either a cycle or a path. Note that connected components of the graph are only loosely related to the chromosomes; connected components can span multiple chromosomes.
We denote a cross edge by the label of the vertices it connects, and a connected component of the graph by the set of cross edges that comprise it. The connected components of the graph in Fig. 1 are \(\{5_t,4_h,6_h\}\), \(\{5_h,6_t,2_t,1_h\}\), \(\{1_t,2_h,3_t,7_t\}\), \(\{8_t,7_h\}\), and \(\{3_h,4_t,8_h\}\). The length of a path or a cycle is the number of cross edges it has.
All possible DCJs that move one genome closer to the other. Adjacency edges are contracted, so that only the cross edges are shown in the connected components. Endpoints that are affected by the DCJ are circled. In the top row, extracting a cycle from (a) an even-length path, (b) an odd-length path, and (c) a cycle are depicted. Even-length paths can be combined to form two odd-length paths if one of the paths has endpoints in genome A and the other in genome B, as depicted in (d). An even-length path can be split into two odd-length paths if the split is done in the genome with fewer vertices in the path, as depicted in (e)
To find sorting DCJs, we categorize the connected components by length. In Fig. 1 there is one cycle, two even-length paths, and two odd-length paths. The formula for the DCJ distance is
$$\begin{aligned} d_{DCJ}(A,B) = N - (C + I/2) \end{aligned}$$
where N is the number of blocks, C is the number of cycles, and I is the number of odd-length paths in G(A, B) [4]. Figure 2 depicts a comprehensive list of the possible sorting DCJs on an adjacency graph, and describes the conditions under which they may be applied. See Proposition 1 of [17] for a more thorough treatment. G(A, A), for some genome A, will always have 2M paths of length one and \(N - M\) cycles of length two, where M is the number of chromosomes and N is the number of blocks.
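The distance formula can be evaluated directly from the component counts. A minimal sketch (the function name is ours); the integer division is safe because the number of odd-length paths is always even:

```python
# DCJ distance d = N - (C + I/2), computed from the component counts of
# the adjacency graph: N blocks, C cycles, I odd-length paths.

def dcj_distance(n_blocks, n_cycles, n_odd_paths):
    assert n_odd_paths % 2 == 0  # odd-length paths come in even numbers
    return n_blocks - (n_cycles + n_odd_paths // 2)
```

For the graph of Fig. 1 (8 blocks, one cycle, two odd-length paths) this gives 8 - (1 + 1) = 6. For G(A, A) with N = 8 blocks and M = 2 chromosomes (so N - M = 6 cycles and 2M = 4 paths of length one) it gives 0, as expected.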
The minimum weighted rearrangements problem
Consider a genome \(A_i\) made of a set of linear or circular chromosomes. Each rearrangement on this genome may have a certain likelihood of occurring. In the "Locality and the adjacency graph" section we will describe a DCJ move on \(G(A_i,B)\) as a reconnection of two adjacency edges of \(G(A_i,B)\); the resulting graph \(G(A_{i+1},B)\) is identical to \(G(A_i,B)\) aside from the connectivity of two adjacency edges. Therefore there is a bijection between edges of \(G(A_i,B)\) and edges of \(G(A_{i+1},B)\), so we can weight all pairs of genome adjacencies occurring in a sorting scenario by weighting all pairs of adjacency edges in G(A, B). For the set P of all pairs of adjacency edges in genome A, the weight function for a pair is \(w:P \rightarrow \mathbb {R} _+\), where \(\mathbb {R} _+\) denotes the non-negative real numbers. The higher the value of w, the less likely the rearrangement is to occur; e.g., a value of 0 represents a most likely rearrangement.
A sequence of rearrangements \(\rho _1,\rho _2,\ldots ,\rho _d\) such that \((\cdots ((A\rho _1)\rho _2)\cdots \rho _d) = B\) is called a sorting scenario. The weight of a scenario is the sum of the weights of all the rearrangements in the scenario, i.e., \(\sum _{i=1}^d{w(\rho _i)}\). The Minimum Weighted Rearrangements problem is the following.
Minimum Weighted Rearrangements
INPUT: Genomes A and B and a weight function w.
OUTPUT: A scenario of rearrangements turning A into B.
MEASURE: The weight of the scenario.
Positional constraints as colored adjacencies
Although chromosomes are represented as linear or circular sequences of syntenic blocks, in reality they correspond to molecules whose conformation within the nucleus is complex. Recent technological advances, called Hi-C, allow the mapping of chromosome conformation in various cell types and species [22–26]. The positional constraints introduced here are based on the principle that rearrangements (DCJ moves) involving pairs of adjacencies that are close in 3D space are more frequent than others. This model is supported by the pioneering work of Véron et al. [27], who showed that loci that are distant in the linear ordering of the human chromosome yet close in the ordering of the mouse chromosome, are physically close (in 3D) in the human chromosome. Recently we have conducted a study on rearrangement scenarios showing that breakpoint pairs comprising a rearrangement are closer than expected by chance for intrachromosomal and interchromosomal rearrangements. This is true for multiple cell types from multiple laboratories [28]. In this paper, we use the observation that many moves are local to constrain the rearrangement scenarios that we compute. We call this the positional constraint.
A A 2D cartoon of a possible 3D configuration for genome A. Adjacencies between syntenic blocks are classified by physically close regions, which are marked by dashed circles and labeled by the alphabet \(\{a,b,c,d\}\). B Genome A after a reciprocal translocation has occurred at position b. C Genome A after an excision has occurred at position b
We incorporate the constraint by grouping adjacencies of the genome into classes that are more likely to swap endpoints. This idea is illustrated in Fig. 3, where the physical (3D) structure of genome A is drawn and the adjacencies are grouped into colored localities. According to Véron et al. [27] and our recent results [28], rearrangements are more likely to occur between adjacencies at the same position.
Locality and the adjacency graph
The update of colors by a DCJ. a Adjacency edges with colors x and y are reconfigured in two different ways for the same DCJ operation. In this case the reconfigurations are achieved by swapping either both right-hand endpoints or both left-hand endpoints of the adjacency edges. b The adjacency edge with color x is split to make two adjacencies of color x with two new telomeres
Each adjacency edge in G corresponds to an adjacency in genome A or B. The color of an adjacency is given to the adjacency edge it corresponds to. Figure 1 shows a coloring for the adjacencies of genome A that matches the localities in Fig. 3. The application of a DCJ operation to a genome has the effect of swapping the endpoints of two adjacency edges, or splitting an adjacency edge as in the case of Fig. 4b.
Throughout a DCJ sorting scenario, adjacency edges always keep the same color. Thus, each DCJ operation corresponds to one of two possible updates of the same pair of adjacency edges, as depicted in Fig. 4a.
A positional weight function
We categorize rearrangements into two sets: those that are likely, and those that are not. Such a categorization is powerful enough to encapsulate the positional property discussed earlier.
A DCJ \(\rho\) acts on one or two adjacencies. Our model labels each adjacency with some color from an alphabet \(\Sigma\), and weights a DCJ based on the colors that are acted upon. Call \(i_\rho\) and \(j_\rho\) the adjacencies affected by \(\rho\); \(i_\rho = j_\rho\) if the DCJ acts on only a single adjacency, e.g., case (e) in Fig. 2. The color of an adjacency \(i_\rho\) is written \(col(i_\rho )\). Given a DCJ \(\rho\), our weight function is
$$\begin{aligned} w(\rho ) = \left\{ \begin{array}{ll} 0 &{} \text {if}\,\, i_\rho = j_\rho \,\, \mathrm{or}\,\, col(i_\rho ) = col(j_\rho )\\ 1 &{} \text {otherwise.}\\ \end{array} \right. \end{aligned}$$
We call those DCJ moves that have zero weight likely, while we call all others rare. It is trivial to evaluate our weight function for a given DCJ; simply check the colors of the two adjacency edges that are affected.
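The weight function is straightforward to express in code. A minimal sketch, assuming adjacencies are identified by arbitrary hashable labels and the coloring is given as a dictionary (both conventions are ours):

```python
# The positional weight function: a DCJ is "likely" (weight 0) when it
# acts on a single adjacency or on two adjacencies of the same color,
# and "rare" (weight 1) otherwise.

def dcj_weight(i, j, col):
    if i == j or col[i] == col[j]:
        return 0
    return 1
```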
Two restricted versions of the general problem are now described. The problem Minimum Local Scenario is exactly Minimum Weighted Rearrangements with the positional weight function w.
(MLS ) Minimum Local Scenario
INPUT: Genomes A and B and positional weight function w.
The problem Minimum Local Parsimonious Scenario introduces the constraint that the scenario output is also a parsimonious scenario, i.e., a scenario of minimum length.
(MLPS ) Minimum Local Parsimonious Scenario
OUTPUT: A parsimonious scenario of rearrangements turning A into B.
Minimum local parsimonious scenario
Since a solution to Minimum Local Parsimonious Scenario is limited to sorting moves, most connected components of G(A, B, col) must be sorted independently of each other, the exception being for even-length paths; all but one DCJ in Fig. 2 act on a single connected component. We first give a method for computing the number of rare operations per connected component when no pair of even-length paths exist, as in Fig. 2d. We then show in the "Even-length paths" section how to solve the problem when such pairs exist.
Colored partitions
Consider a connected component C of the graph G(A, B, col). If C is monochromatic, i.e., has adjacency edges of a single color, then the component can be sorted with likely DCJs according to the moves listed in Fig. 2; the move that operates on more than one component in Fig. 2d need not be used since each path can be split on its own with a local move, as in Fig. 2e. If C is polychromatic then DCJs must be performed to separate the colors, since a fully sorted genome has components that each have only a single colored adjacency edge in genome A.
Colored partitions for the set [1, 8] where \(col(1)=b\), \(col(2)=a\), \(col(3)=b\), \(col(4)=c\), \(col(5)=a\), \(col(6)=d\), \(col(7)=a\), and \(col(8)=c\). Vertices are circles numbered by their order in the set [1, 8] and labeled by their color. Thick black lines are drawn between vertices that are in the same class of the partition. A The crossing partition \(\{\{1,3\},\{2,5,7\},\{4,8\},\{6\}\}\). B The optimal noncrossing partition \(\{\{1,3\},\{2\},\{4,8\},\{5,7\},\{6\}\}\). C The instance embedded on a line
Recall that \(AA\)-paths and \(BB\)-paths are paths that start and end in the same genome. In this subsection, we assume that there does not exist both an \(AA\)-path and a \(BB\)-path in the graph (Fig. 2d). Ouangraoua and Bergeron established that the DCJs in a sorting scenario can be done in any order for such a graph and that every component will be sorted independently, thereby defining a noncrossing partition on each component (see sections 3 and 4 of [17]). Later in this section we show that Minimum Local Parsimonious Scenario on a single component is equivalent to the following problem concerning a generalization of noncrossing partitions. A partition of a set is a collection of pairwise disjoint subsets whose union is the entire set. The subsets are called classes. [1, n] is the set of integers from 1 to n.
A noncrossing partition is a partition \(\mathcal {P}\) of [1, n] such that for any classes \(S_i,S_j \in \mathcal {P}\) if we have \(p < q < p' < q'\) for \(p,p' \in S_i\) and \(q,q' \in S_j\) , then \(S_i = S_j\). A noncrossing colored partition is a noncrossing partition where for any \(p,p' \in S_i\), \(col(p) = col(p')\).
Another way to define a noncrossing partition is on a convex polygon. A noncrossing partition is a partition of the vertices of an n-gon with the property that if you draw a line between all pairs of vertices in the same class, for all classes, then no two lines from different classes intersect. A colored partition has colored vertices, and respects the property that any pair of vertices in the same class of the partition have the same color (see Fig. 5A, B).
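The two conditions in the definition (classes are monochromatic, and no two classes interleave) can be checked directly. A brute-force sketch with names of our choosing; on the instance of Fig. 5 it rejects the crossing partition A and accepts the noncrossing partition B:

```python
from itertools import combinations

# Check whether a partition (list of sets of integers) is a valid
# noncrossing colored partition: every class uses a single color, and
# no p < q < p' < q' exists with p, p' and q, q' in different classes.

def is_noncrossing_colored(partition, col):
    for cls in partition:
        if len({col[x] for x in cls}) > 1:
            return False                      # class mixes colors
    for S, T in combinations(partition, 2):
        for p, p2 in combinations(sorted(S), 2):
            for q, q2 in combinations(sorted(T), 2):
                if p < q < p2 < q2 or q < p < q2 < p2:
                    return False              # classes S and T cross
    return True
```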
(MNCP) Minimum Noncrossing Colored Partition
INPUT: Set size n, color set \(\Sigma\), and color function \(col:[1,n] \rightarrow \Sigma\).
OUTPUT: A noncrossing colored partition.
MEASURE: The cardinality of the partition.
We present a polynomial-time algorithm for the Minimum Noncrossing Colored Partition problem, which according to Lemma 2 (later in this section) gives a solution to Minimum Local Parsimonious Scenario on a single component. We describe the algorithm on an instance that has been embedded on a line where the left-most vertex ① represents the smallest element of the set, as shown in Fig. 5C. For an interval [i, j], let NCP(i, j) be the number of classes in the MNCP on that subproblem. Thus, NCP(1, n) corresponds to the Minimum Noncrossing Colored Partition of [1, n].
For any interval [i, j] we have \(NCP(i,i)=1\), and the following recurrence.
$$\begin{aligned} NCP(i,j) = \min \left\{ \begin{array}{ll} NCP(i,j-1)+1 &{} \text {for}\,\, i < j,\\ NCP(i,j-1) &{} \text {for}\,\, i < j \,\, \mathrm{and}\,\, col(i) = col(j)\\ NCP(i,k-1)+NCP(k,j) &{} \text {for all}\,\, k\,\, \mathrm{where}\,\, i < k < j\\ \end{array} \right. \end{aligned}$$
The first case corresponds to the creation of a new class with the single element j. The second case is applicable when element j is the same color as element i; in this case i and j become part of the same class, all the other classes staying the same. The third case tests combinations of subproblems; this case is pertinent when \(col(i) = col(k-1)\) or \(col(k) = col(j)\). It is easy to confirm that any feasible solution to MNCP is scored by the recurrence. This dynamic program runs in \(O(n^3)\) time.
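The recurrence translates directly into a memoized dynamic program. A sketch under our naming conventions (col maps positions 1..n to colors); on the instance of Fig. 5 it returns 5, the cardinality of the optimal partition of Fig. 5B:

```python
from functools import lru_cache

# O(n^3) dynamic program for Minimum Noncrossing Colored Partition,
# following the recurrence: NCP(i, i) = 1, and NCP(i, j) is the minimum
# over opening a new class for j, merging j into the class of i when the
# colors match, and splitting the interval at every k with i < k < j.

def min_noncrossing_colored_partition(col, n):
    @lru_cache(maxsize=None)
    def ncp(i, j):
        if i == j:
            return 1
        best = ncp(i, j - 1) + 1              # j opens a new class
        if col[i] == col[j]:
            best = min(best, ncp(i, j - 1))   # j joins the class of i
        for k in range(i + 1, j):             # combine two subproblems
            best = min(best, ncp(i, k - 1) + ncp(k, j))
        return best
    return ncp(1, n)
```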
We now show the link between MLPS and MNCP. Consider component C to be sorted. Pick an arbitrary vertex of C if it is a cycle, or either endpoint of C if it is a path, and consider an ordering of the vertices of genome A based on a traversal of the edges of C from that vertex. Embed the vertices of the component on a circle with respect to that ordering, and the edges so that they remain inside the circle. Call this a circular embedding of the component. Consider a sorting scenario for C that corresponds to a sequence of adjacency graphs \(C_0,C_1,\ldots ,C_d\) (\(C=C_0\)). Call \(C_i^{\circ }\) the graph \(C_i\) with vertices embedded according to the circular embedding of \(C_0\).
Lemma 1
[17] \(C_i^{\circ }\) has no pair of crossing adjacency edges for any i.
By construction, all adjacency edges in \(C_0^{\circ }\) connect adjacent vertices on the circle, so none of them cross. Assume that \(C_j^{\circ }\) has crossing adjacency edges and \(C_{j-1}^{\circ }\) does not. This implies that the jth DCJ did not split a component. This is a contradiction since every sorting move on C splits a component, never creating both an \(AA\)-path and \(BB\)-path. \(\square\)
Lemma 2
Given a connected component C, Minimum Local Parsimonious Scenario on C can be solved by Minimum Noncrossing Colored Partition.
First, transform an instance of MLPS on a single component to an instance of MNCP. Given a cycle C representing genomes A and B, map the adjacency edges of A, ordered according to a circular embedding of C, to the set of elements [1, n]. The color function col maps each element to its corresponding adjacency edge's color.
Now transform an optimal solution of MNCP into an optimal solution for MLPS. Clearly, any partition of [1, n] corresponds to a partition of adjacency edges of genome A. We show that there always exists a scenario of DCJs whose prefix separates C into connected components according to the partition. Any two edges of the same component can be chosen for a DCJ [17] and the DCJs on a cycle can be done in any order (Lemma 1). Since the ordering of the edges on the cycle corresponds to the ordering on [1, n], an edge partition of size k can be achieved with \(k-1\) DCJs. Since k is minimum over all feasible partitions and the remaining DCJs of the scenario are likely, the constructed scenario has a minimum number of rare DCJs. \(\square\)
In fact, the two problems are equivalent. We omit the reduction in the other direction since it is out of the scope of this paper.
Even-length paths
A Minimum Noncrossing Colored Partition can be computed in polynomial time for a single component independent of all others. Yet it is possible to mix components in a parsimonious DCJ scenario. As described in Fig. 2, the only parsimonious DCJs that mix components are those that act on one edge from an \(AA\)-path and one edge from a \(BB\)-path. Call AA (BB respectively) the set of \(AA\)-paths (\(BB\)-paths respectively) in the adjacency graph. The key observation is that once a path has been mixed with another, the result is always two odd-length paths which subsequently cannot be mixed with any other. Thus we devote this section to the computation of which pairs of paths \((p,q) \in AA \times BB\) will be mixed in an optimal solution, and which paths will remain unmixed.
Any pair (p, q) can be mixed in several ways. For all possible DCJs that mix them, we compute the MNCP on the resulting components. The minimum MNCP over all mixings is the cost in rare moves for mixing the two paths. To compute the pairs of paths to be mixed in an optimal solution, we use the inverse of these costs—the number of likely moves—as weights in a bipartite graph.
Take the elements of AA and BB as vertices in a complete bipartite graph, and label each edge (p, q) with the maximum number of likely DCJs for the mixing of paths p and q. Any even-length path could alternatively be used independently of any other, so there is a vertex \(v'\) for each \(v \in AA \cup BB\) with a single edge \((v,v')\) labeled by the number of likely moves on v alone (computed using the MNCP on that component). Algorithm 1 computes the minimum number of rare DCJs in a parsimonious scenario. It is easy to modify the algorithm to give the list of DCJs.
The function MNCPonComp(c, col) computes the Minimum Noncrossing Colored Partition on the given component c. In other words it builds the color function col according to the component c and then calls MNCP(1, n, col) where n is the number of adjacency edges on the A side of the component c. The function maxMix(p, q) computes the maximum number of likely DCJs over all possible DCJs that use one edge from p and one edge from q. The function d(AA) computes the sum of DCJ distances from each component in AA using Formula 1. The function \(maxMatching(V_A,V_B,w)\) builds the bipartite graph with vertices \(V_A\) on one side and vertices \(V_B\) on the other, and the edges described by the weight function w.
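The matching step can be sketched by brute force for small instances; a real implementation would use a polynomial-time maximum-weight bipartite matching algorithm. Here the dummy vertices \(v'\) of Algorithm 1 are folded into per-path solo weights, and all names and the input encoding are ours:

```python
from itertools import combinations, permutations

# Brute-force stand-in for the maximum-weight matching of Algorithm 1.
# mix_w[(p, q)] = number of likely DCJs gained by mixing AA-path p with
# BB-path q; solo_w[v] = number of likely DCJs when path v is sorted on
# its own (an unmatched path plays the role of a (v, v') dummy edge).

def best_path_mixing(AA, BB, mix_w, solo_w):
    best = None
    for r in range(min(len(AA), len(BB)) + 1):
        for aa_sub in combinations(AA, r):          # which AA-paths mix
            for bb_perm in permutations(BB, r):     # with which BB-paths
                total = sum(mix_w[(a, b)] for a, b in zip(aa_sub, bb_perm))
                unmatched = (set(AA) - set(aa_sub)) | (set(BB) - set(bb_perm))
                total += sum(solo_w[v] for v in unmatched)
                best = total if best is None else max(best, total)
    return best
```

The brute force enumerates every way of pairing a subset of AA-paths with distinct BB-paths, which is exactly the set of matchings in the bipartite graph described above.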
To summarize, any path can be mixed at most once in a parsimonious scenario. Potential mixings, as well as potential non-mixings, are encoded into a bipartite graph with edges weighted by the cost of a mix. A maximum weight matching in this graph corresponds to a scenario that minimizes the number of rare moves on the paths. All other connected components of the graph are sorted using the Minimum Noncrossing Colored Partition on the component.
The running time of our algorithm is dominated by the weighting of the edges on the bipartite graph. Consider all mixings done between elements of AA and elements of BB. A particular adjacency edge e from a given path \(p \in AA\) will take part in exactly one DCJ with every edge f from a path \(q \in BB\) throughout the weighting process. Therefore for each pair (e, f), e being an edge from a path in AA and f being an edge from a path in BB, we will compute the MNCP on the resulting mix. If the number of edges in the paths AA (respectively BB) is n(AA) (respectively n(BB)), then the running time of our algorithm is \(O(n(AA)n(BB)n^3)\). In the worst case, half of the edges are used in \(AA\)-paths and half in \(BB\)-paths, yielding a running time of \(O(n^5)\).
Faster mixing of even-length paths
In the previous section, edges of the bipartite graph are scored by the function maxMix that computes the maximum number of likely DCJs over all possible mixings of two paths. The analysis includes the multiplicative term n(AA)n(BB) reflecting the process of actually trying all possible mixings when labeling the edges of the bipartite graph. We now show how to mix paths more efficiently.
An \(AA\)-path and a \(BB\)-path
Define the A-edges of a component of the graph G(A, B) to be those edges connecting two nodes in genome A. Consider paths \(p \in AA\) and \(q \in BB\), where p is the path with A-edges \(e_1,e_2,\ldots ,e_k\) and telomeres t1 and t2, and q is the path with A-edges \(f_1,f_2,\ldots ,f_\ell\) and telomeres t3 and t4 (see Fig. 6). Construct two different cycles from p and q: cycle c1 results from joining t1 to t3 and t2 to t4 by cross edges, and cycle c2 results from joining t1 to t4 and t2 to t3. The A-edges of p can then be ordered circularly in c1 where edge \(e_1\) follows edge \(e_k\). Similarly, \(f_1\) follows \(f_\ell\) in c2. We show that there is a bijection between scenarios that start by mixing p and q, and scenarios that act on one of these two cycles by first performing a DCJ between an e edge and an f edge.
There is an obvious bijection between edges of \(p \cup q\) and c1, and between edges of \(p \cup q\) and c2. Consider the mix move acting on edge \(e_i\) in p and \(f_j\) in q. The result is either:
paths \(e_1,e_2,\ldots ,\,e_i,f_{j-1},f_{j-2},\ldots ,f_1\) and \(e_k,e_{k-1},\ldots ,\,e_{i+1},f_j,f_{j+1},\ldots ,f_\ell\), or
paths \(e_1,e_2,\ldots ,\,e_{i-1},f_j,f_{j-1},\ldots ,f_1\) and \(e_k,e_{k-1},\ldots ,\,e_i,f_{j+1},f_{j+2},\ldots ,f_\ell\), or
paths \(e_1,e_2,\ldots ,\,e_i,f_{j+1},f_{j+2},\ldots ,f_\ell\) and \(e_k,e_{k-1},\ldots ,\,e_{i+1},f_j,f_{j-1},\ldots ,f_1\), or
paths \(e_1,e_2,\ldots ,\,e_{i-1},f_j,f_{j+1},\ldots ,f_\ell\) and \(e_k,e_{k-1},\ldots ,\,e_i,f_{j-1},f_{j-2},\ldots ,f_1\).
The DCJ acting on \(e_i\) and \(f_j\) in c1 yields two cycles partitioning the edges as they are in either Case 1 or Case 2. The DCJ acting on \(e_i\) and \(f_j\) in c2 yields two cycles partitioning the edges as they are in either Case 3 or Case 4. Since odd-length paths and cycles can only be sorted by cycle-extraction moves (see Fig. 2), each scenario mixing \(e_i\) and \(f_j\) maps to a scenario on c1 or c2. The bijection follows from the fact that moves on a cycle can be ordered in any way (Lemma 1).
Due to the bijection between mixing scenarios on p and q, and scenarios on c1 or c2, the MNCP by mixing p and q must be either the MNCP on c1 or the MNCP on c2. Thus, our algorithm to compute maxMix(p, q) returns the maximum of MNCPonComp(c1, col) or MNCPonComp(c2, col) or \(MNCPonComp(p,col) + MNCPonComp(q,col)\).
Our new version of maxMix removes a linear factor from the overall computation time. Let \(a_{1}, \dots , a_{x}\) be the sizes of the paths in AA and \(b_{1}, \dots , b_{y}\) the sizes of the paths in BB, so that \(|AA|=\sum _{i=1}^{x}a_{i}\) and \(|BB|=\sum _{j=1}^{y}b_{j}\).
"Colored partitions" section shows that the number of steps required to solve MNCP on a component of size m is less than \(c \times m^3\), for some constant c. For each pair of paths, we compute MNCPonComp three times, so the number of steps required to label all the edges of the complete bipartite graph is at most
$$\begin{aligned} 3c\sum _{i=1}^x\sum _{j=1}^y(a_i+b_j)^3&= 3c\sum _{i=1}^x\sum _{j=1}^y(a_i^3+b_j^3+3a_i^2b_j+3a_ib_j^2) \\&=3c\Big (y\sum _{i=1}^xa_i^3+x\sum _{j=1}^yb_j^3+3|BB|\sum _{i=1}^xa_i^2+3|AA|\sum _{j=1}^yb_j^2\Big ). \end{aligned}$$
The terms y, x, |AA|, and |BB| are clearly O(n). Since the largest terms \(\sum _{i=1}^{x}a_{i}^3\) and \(\sum _{j=1}^{y}b_{j}^3\) are in \(O(n^3)\), the complexity of the bipartite graph labeling step is \(O(n^4)\). Since sorting all non-even paths takes \(O(n^3)\) time, our complete algorithm takes \(O(n^4)\) time in the worst case.
The number of parsimonious DCJ scenarios between two genomes is exponential in the distance between them. However, many of the scenarios are probably unrealistic in the biological sense. This paper takes a step towards modeling realistic scenarios by posing optimization problems that take into account positional constraints. An example of such a positional constraint is the 3D proximity of genome segments given by Hi-C experiments.
An \(O(n^4)\) algorithm is proposed for computing a parsimonious DCJ scenario that is most likely, given an edge-coloring function that classifies DCJs as "likely" or "unlikely". In practice the algorithm will run in \(O(n^3)\) time since we expect long even-length paths to be rare in nature. For example, the adjacency graph for the mouse/human syntenic map built by Véron et al. [27] from one-to-one orthologs in Biomart has only 182 edges in even-length paths out of a total of 13,302 edges. The largest connected component has 35 edges.
From a biological perspective, a solution to Minimum Local Parsimonious Scenario corresponds to finding a maximum likelihood scenario in a situation where likely and unlikely moves are both rare, and the difference between the likelihoods of likely and unlikely moves is not very large. In this situation, a most parsimonious scenario made of k unlikely moves is more likely than a non-parsimonious scenario made of \(k+1\) likely moves. Thus the maximum likelihood scenario is the most parsimonious scenario that involves the smallest number of unlikely moves.
We introduce the Minimum Noncrossing Colored Partition problem—a generalization of the Maximum Independent Set problem on circle graphs—for weighting the edges of a bipartite graph, on which we obtain a maximum matching. While this technique is essential to our algorithm for finding DCJ scenarios, we believe it will also come in handy for an algorithm that finds likely inversion scenarios (e.g., for handling the infamous "hurdles"). A multitude of biologically relevant variations on this problem exist, including variations on the model of genome rearrangement, a variant where edges have multiple colors, and a bi-directional sorting variant where edges are weighted on both genomes according to the chromatin conformation on each. Models that incorporate uncertainty or evolution in the Hi-C data would also be relevant. We hope that this work provokes further study from both the algorithmic and the biological perspectives.
Fertin G, Labarre A, Rusu I, Tannier E, Vialette S. Combinatorics of genome rearrangements. Cambridge: MIT Press; 2009.
Blin G, Fertin G, Sikora F, Vialette S. The exemplar breakpoint distance for non-trivial genomes cannot be approximated. WALCOM: algorithms and computation. Berlin: Springer; 2009. p. 357–68.
Jiang M. The zero exemplar distance problem. J Comput Biol. 2011;18(9):1077–86.
Bergeron A, Mixtacki J, Stoye J. A unifying view of genome rearrangements. In: Bucher P, Moret BME, editors. Proceedings of 6th international workshop algorithms in bioinformatics (WABI'06). Lecture notes in computer science. vol. 4175. Berlin: Springer; 2006. p. 163–73.
Yancopoulos S, Attie O, Friedberg R. Efficient sorting of genomic permutations by translocation, inversion and block interchange. Bioinformatics. 2005;21(16):3340–6.
Bertrand D, Gagnon Y, Blanchette M, El-Mabrouk N. Reconstruction of ancestral genome subject to whole genome duplication, speciation, rearrangement and loss. Algorithms in bioinformatics. Berlin: Springer; 2010. p. 78–89.
Ouangraoua A, Tannier E, Chauve C. Reconstructing the architecture of the ancestral amniote genome. Bioinformatics. 2011;27(19):2664–71.
Jones BR, Rajaraman A, Tannier E, Chauve C. Anges: reconstructing ancestral genomes maps. Bioinformatics. 2012;28(18):2388–90.
Rajan V, Xu AW, Lin Y, Swenson KM, Moret BME. Heuristics for the inversion median problem. BMC Bioinform. 2010;11(Suppl 1):54. doi:10.1186/1471-2105-11-S1-S30.
Aganezov S, Alekseyev M. On pairwise distances and median score of three genomes under DCJ. BMC Bioinform. 2012;13(Suppl 19):1.
Haghighi M, Sankoff D. Medians seek the corners, and other conjectures. BMC Bioinform. 2012;13(Suppl 19):5.
Blanchette M, Kunisawa T, Sankoff D. Parametric genome rearrangement. Gene. 1996;172(1):11–7.
Pinter RY, Skiena S. Genomic sorting with length-weighted reversals. Genome Inform. 2002;13:103–11.
Lefebvre JF, El-Mabrouk N, Tillier ERM, Sankoff D. Detection and validation of single gene inversions. In: Proceedings of 11th international conference on intelligent systems for molecular biology (ISMB'03). Bioinformatics. vol. 19. Oxford: Oxford University Press; 2003. p. 190–96.
Bender MA, Ge D, He S, Hu H, Pinter RY, Skiena S, Swidan F. Improved bounds on sorting by length-weighted reversals. J Comp Syst Sci. 2008;74(5):744–74.
Galvão GR, Dias Z. Approximation algorithms for sorting by signed short reversals. In: Proceedings of the 5th ACM conference on bioinformatics, computer biology, and health informatics. ACM; 2014. p. 360–69.
Ouangraoua A, Bergeron A. Combinatorial structure of genome rearrangements scenarios. J Comput Biol. 2010;17(9):1129–44.
Braga MDV, Stoye J. The solution space of sorting by DCJ. J Comput Biol. 2010;17(9):1145–65.
Gavril F. Algorithms for a maximum clique and a maximum independent set of a circle graph. Networks. 1973;3:261–73.
Valiente G. A new simple algorithm for the maximum-weight independent set problem on circle graphs. In: Proceedings of 14th international symposium algorithms and computation (ISAAC'03). Lecture notes in computer science. vol. 2906. Berlin: Springer; 2003. p. 129–37.
Nash N, Gregg D. An output sensitive algorithm for computing a maximum independent set of a circle graph. Inf Process Lett. 2010;110(16):630–4.
Duan Z, Andronescu M, Schutz K, McIlwain S, Kim YJ, Lee C, Shendure J, Fields S, Blau CA, Noble WS. A three-dimensional model of the yeast genome. Nature. 2010;465(7296):363–7.
Zhang Y, McCord RP, Ho Y-J, Lajoie BR, Hildebrand DG, Simon AC, Becker MS, Alt FW, Dekker J. Spatial organization of the mouse genome and its role in recurrent chromosomal translocations. Cell. 2012;148(5):908–21. doi:10.1016/j.cell.2012.02.002.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, Hu M, Liu JS, Ren B. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485(7398):376–80.
Sexton T, Yaffe E, Kenigsberg E, Bantignies F, Leblanc B, Hoichman M, Parrinello H, Tanay A, Cavalli G. Three-dimensional folding and functional organization principles of the Drosophila genome. Cell. 2012;148(3):458–72.
Le TBK, Imakaev MV, Mirny LA, Laub MT. High-resolution mapping of the spatial organization of a bacterial chromosome. Science. 2013;342(6159):731–4.
Veron A, Lemaitre C, Gautier C, Lacroix V, Sagot M-F. Close 3d proximity of evolutionary breakpoints argues for the notion of spatial synteny. BMC Genomics. 2011;12(1):303. doi:10.1186/1471-2164-12-303.
Swenson KM, Blanchette M. Large-scale mammalian rearrangements preserve chromatin conformation. Berlin: Springer; 2015. p. 243–56.
All authors contributed to this paper. All authors read and approved the final manuscript.
We would like to thank Anne Bergeron for her helpful comments during the preparation of this manuscript. This work was funded in part by a Grant from the Fonds de Recherche du Québec en Nature et Technologies.
A preliminary version of this paper appeared in the 15th Workshop on Algorithms in Bioinformatics (WABI 2015).
LIRMM, CNRS, Université Montpellier, 161 rue Ada, 34392, Montpellier, France
Krister M. Swenson
Institut de Biologie Computationnelle (IBC), Montpellier, France
ENS Lyon, 46 allée d'Italie, 69364, Lyon, France
Pijus Simonaitis
McGill Centre for Bioinformatics and School of Computer Science, McGill University, Montréal, H3C2B4, Canada
Mathieu Blanchette
Correspondence to Krister M. Swenson.
Krister M. Swenson, Pijus Simonaitis and Mathieu Blanchette contributed equally to this work
Swenson, K.M., Simonaitis, P. & Blanchette, M. Models and algorithms for genome rearrangement with positional constraints. Algorithms Mol Biol 11, 13 (2016). https://doi.org/10.1186/s13015-016-0065-9
Double cut and join (DCJ)
Weighted genome rearrangement
Noncrossing partitions
Chromatin conformation
Hi-C
\begin{document}
\title{Non-Hermitian $\mathcal{PT}$-symmetric and Hermitian Hamiltonians' correspondence: Isospectrality and mass signature } \author{Omar Mustafa$^{1}$ and S.Habib Mazharimousavi$^{2}$ \\
Department of Physics, Eastern Mediterranean University, \\ G Magusa, North Cyprus, Mersin 10, Turkey\\ $^{1}$E-mail: [email protected]\\ \ $^{2}$E-mail: [email protected]} \maketitle
\begin{abstract}
A transformation of the form $x\rightarrow \pm iy\in i\mathbb{R}$; $x,y\in \mathbb{R}$, or an equivalent similarity transformation with a metric operator $\eta$, is shown to map non-Hermitian $\mathcal{PT}$-symmetric Hamiltonians into Hermitian partner Hamiltonians in Hilbert space. Isospectrality and mass signature are also discussed.
PACS codes: 03.65.Ge, 03.65.Ca
Keywords: non-Hermitian $\mathcal{PT}$-symmetric Hamiltonians, Hermitian partner Hamiltonians, isospectrality, mass signature. \end{abstract}
\section{Introduction}
Recent developments on non-Hermitian Hamiltonians have documented that Hermiticity is no longer a necessary condition to secure the reality of the spectrum [1-43]. Such developments are very much inspired by what is nowadays known as the Bender and Boettcher [1] conjecture, relaxing the Hermiticity condition and introducing the concept of $\mathcal{PT}$-symmetric quantum mechanics (PTQM). Here, $\mathcal{P}$ denotes space reflection, $x\longrightarrow -x$ (i.e., the parity operator), and $\mathcal{T}$ mimics time reversal: $i\longrightarrow -i$. More specifically, if $\rho =\mathcal{PT}$ and $\rho \,H\,\rho ^{-1}=H$, then $H$ is $\mathcal{PT}$-symmetric. Moreover, if $\rho \,\Psi =\pm \Psi $ (i.e., $\Psi $ retains $\mathcal{PT}$-symmetry) the eigenvalues of a $\mathcal{PT}$-symmetric Hamiltonian are real; otherwise the eigenvalues come out in complex-conjugate pairs (a phenomenon known as spontaneous breakdown of $\mathcal{PT}$-symmetry).
Such a PTQM theory, nevertheless, has stimulated intensive research on non-Hermitian Hamiltonians and led to the so-called pseudo-Hermitian Hamiltonians (i.e., Hamiltonians satisfying $\xi \,H\,\xi ^{-1}=H^{\dagger }$ or $\xi \,H=H^{\dagger }\,\xi $, where $\xi $ is a Hermitian invertible linear operator and $(^{\dagger })$ denotes the adjoint) of Mostafazadeh [20-25], which form a broader class of non-Hermitian Hamiltonians with real spectra that encloses the $\mathcal{PT}$-symmetric ones. Moreover, relaxing the requirement that $\xi $ be Hermitian (cf., e.g., Bagchi and Quesne [38]), or linear and/or invertible (cf., e.g., Solombrino [32], Fityo [33], and Mustafa and Mazharimousavi [34-37]), can still lead to real spectra.
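The pseudo-Hermiticity relation $\xi \,H\,\xi ^{-1}=H^{\dagger }$ is easy to exhibit on a finite-dimensional toy. The $2\times 2$ matrix below is our own illustrative example (the paper's $\xi $ is a general Hermitian invertible operator): $H$ is non-Hermitian yet has real spectrum, and a diagonal $\xi $ intertwines it with its adjoint.

```python
# 2x2 toy model of pseudo-Hermiticity: xi H xi^{-1} = H^dagger.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[0, 1], [4, 0]]            # non-Hermitian (H != H^T), real entries
H_dag = [[0, 4], [1, 0]]        # conjugate transpose of the real H
xi = [[4, 0], [0, 1]]           # Hermitian, invertible
xi_inv = [[0.25, 0], [0, 1]]

assert matmul(matmul(xi, H), xi_inv) == H_dag
# trace = 0, det = -4  =>  eigenvalues +2 and -2, both real
assert H[0][0] + H[1][1] == 0 and H[0][0]*H[1][1] - H[0][1]*H[1][0] == -4
```

Here the real spectrum $\{+2,-2\}$ coexists with non-Hermiticity precisely because such a $\xi $ exists.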
In the process, on the other hand, some quantum mechanical models of certain exceptional $\mathcal{PT}$-symmetric complex interactions, i.e., $\mathcal{PT}$-symmetric potentials satisfying \begin{equation} \mathcal{PT}V\left( x\right) =V\left( x\right) \iff V\left( x\right) =\left[ V\left( -x\right) \right] ^{\ast }, \end{equation} just happen to have partners that are strictly equivalent to real potentials after being exposed to some supersymmetric quantum mechanical treatment [11] or an integral, Fourier-like transformation [12]. Jones and Mateo [4] have, moreover, used a Darboux-type similarity transformation and have shown that for the Bender and Boettcher [1] non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian $H=p^{2}-g\left( ix\right) ^{N};$ $N=4$, there exists an equivalent Hermitian Hamiltonian $h=\sigma ^{-1}H\sigma $; $\sigma =\exp \left( Q/2\right) $, where $\sigma $ is Hermitian and positive definite. A similar proposal was carried out by Bender et al. [3]. For more details the reader is advised to refer to [3,4]. In our current methodical proposal, we try to have our input in this direction and fill this gap, at least partially.
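Condition (1) can be verified mechanically. The sketch below checks it numerically for the textbook potential $V(x)=ix^{3}$ (our choice of example, not taken from the paper), and shows that adding a real odd term breaks the symmetry:

```python
def V(x):
    return 1j * x**3          # PT-symmetric: [V(-x)]* = V(x)

def W(x):
    return 1j * x**3 + x      # real odd term added: PT-symmetry broken

# Eq. (1): PT-symmetry means V(x) == conj(V(-x)) for real x.
for x in (-2.0, -0.5, 0.3, 1.7):
    assert V(x) == (V(-x)).conjugate()

assert W(1.0) != (W(-1.0)).conjugate()
```

The same point-wise check applies to any sampled real potential grid, which makes it handy for validating model inputs before a numerical diagonalization.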
Through the forthcoming proposition (in section 2) or through a similarity transformation (in section 3) with a metric operator $\eta $ (defined in (21) below) we report that for every non-Hermitian complex $\mathcal{PT}$ -symmetric Hamiltonian (with positive mass $m=m_{+}=+\left\vert m\right\vert $) there exists a Hermitian partner Hamiltonian (with negative mass $ m=m_{-}=-\left\vert m\right\vert $) in Hilbert space $L^{2}\left(
\mathbb{R}
\right) =\mathcal{H}$. In section 3, we also discuss isospectrality and orthonormalization conditions associated with both the Hermitian partner (not necessarily $\mathcal{PT}$-symmetric) and the non-Hermitian $\mathcal{PT }$-symmetric Hamiltonians. An obvious correspondence is constructed, therein. This has not been discussed elsewhere, to the best of our knowledge. We give our concluding remarks in section 4.
\section{A transformation toy: $x\longrightarrow \pm iy\,;\,\,x,\,y\in
\mathbb{R}
$}
In connection with an over simplified transformation toy $x\longrightarrow \pm iy\,\in i
\mathbb{R}
;\,\,x,\,y\in
\mathbb{R}
$ ($x\longrightarrow \pm iy$ to be understood as $x\longrightarrow +iy$ and/or $x\longrightarrow -iy$), 't Hooft and Nobbenhuis [44] have used a complex space-time symmetry transformation
\begin{equation*} x\longrightarrow iy\Longleftrightarrow \,p_{x}\rightarrow -ip_{y}\,;\,\,x,\,y\in
\mathbb{R}
, \end{equation*} between de-Sitter and anti-de-Sitter space to identify vacuum solutions with zero cosmological constant (used later on by Assis and Fring [45] to provide a simple proof of the reality of the spectrum of $p^{2}+z^{2}\left( iz\right) ^{2m+1}$). However, in their instructive harmonic oscillator [44] example \begin{equation} H_{x}=\frac{p_{x}^{2}}{2m}+\frac{1}{2}m\omega ^{2}x^{2}=\omega \left( a_{x}^{\dagger }a_{x}+\frac{1}{2}\right) , \end{equation} with the annihilation and creation operators \begin{equation} a_{x}=\sqrt{\frac{1}{2m\omega }}\left( m\omega x+ip_{x}\right) \,,\text{ \ \ \ }a_{x}^{\dagger }=\sqrt{\frac{1}{2m\omega }}\left( m\omega x-ip_{x}\right) , \end{equation} they have shown (using $x\longrightarrow iy,\,p_{x}\rightarrow -ip_{y}$) that the corresponding Hamiltonian reads \begin{equation} H_{y}=\omega \left( a_{y}^{\dagger }a_{y}-\frac{1}{2}\right) , \end{equation} with \begin{equation} a_{y}=\sqrt{\frac{1}{2m\omega }}\left( m\omega y+ip_{y}\right) ,\text{ \ } a_{y}^{\dagger }=\sqrt{\frac{1}{2m\omega }}\left( -m\omega y+ip_{y}\right) . \end{equation} Under such settings, $H_{x}\longrightarrow -H_{y}$ and whilst the eigenvalues of $H_{x}$ are $\left[ \omega \left( n+1/2\right) \right] $ those of $H_{y}$ read $\left[ -\omega \left( n+1/2\right) \right] $. Consequently, the ground state $\sim \exp \left( -m\omega x^{2}/2\right) $ in the $x$-space is normalizable, whereas the ground state $\sim \exp \left( +m\omega y^{2}/2\right) $ in the $y$-space is non-normalizable.
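For clarity, the effect of the substitution can also be checked directly at the level of the Hamiltonian (our own one-line side remark): \begin{equation*} \frac{p_{x}^{2}}{2m}+\frac{1}{2}m\omega ^{2}x^{2}\;\longrightarrow \;\frac{\left( -ip_{y}\right) ^{2}}{2m}+\frac{1}{2}m\omega ^{2}\left( iy\right) ^{2}=-\left( \frac{p_{y}^{2}}{2m}+\frac{1}{2}m\omega ^{2}y^{2}\right) , \end{equation*} i.e., minus a standard harmonic oscillator in the $y$ variables, consistent with the inverted spectrum $\left[ -\omega \left( n+1/2\right) \right] $ quoted above.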
Within similar spiritual lines, Tanaka [46] has shown that a transformation of the form \begin{equation} x\in
\mathbb{R}
\longrightarrow -iy\in i
\mathbb{R}
\,,\text{ \ \ }p_{x}\longrightarrow ip_{y}\in i
\mathbb{R}
\end{equation} would map a non-Hermitian $PT$-symmetric potential $V\left( x\right) \in
\mathbb{C}
$\ (or any non-Hermitian $PT$-symmetric function $f\left( x\right) \in
\mathbb{C}
$ in general, so to speak) into a Hermitian (but not necessarily $PT$ -symmetric) potential $V\left( y\right) \in
\mathbb{R}
$. The proof of which is straightforward. Using equation (1), one would write (with $z=-iy$ for simplicity of notations) \begin{equation} V\left( x\right) \mid _{x\rightarrow z}=\left[ V\left( -x\right) \right] ^{\ast }\mid _{x\rightarrow z}=V^{\ast }\left( -z^{\ast }\right) . \end{equation} This would in turn imply that \begin{equation} V\left( -iy\right) =V^{\ast }\left( -iy\right) \in
\mathbb{R}
\Longrightarrow V\left( y\right) =V^{\ast }\left( y\right) \in
\mathbb{R}
, \end{equation} where $V\left( y\right) $ is, therefore, a real-valued function. Some illustrative examples can be found in section 6 of [46].
In this respect, a remedy for the 't Hooft and Nobbenhuis [44] harmonic oscillator Hamiltonian above may be sought in a mass parametrization recipe accompanied by the de-Sitter and anti-de-Sitter transformation $x\in
\mathbb{R}
\longrightarrow iy\in i
\mathbb{R}
$. That is, \begin{equation} m=m_{\pm }\Longrightarrow m=\left\{ \begin{tabular}{l} $m_{+}=+\left\vert m\right\vert >0$ \\ $m_{-}=-\left\vert m\right\vert <0$ \end{tabular} \right. \end{equation} Such a mass parametrization would, in turn, suggest that the 't Hooft and Nobbenhuis [44] harmonic oscillator \begin{equation} H_{x}=H_{x;m_{+}}=\frac{p_{x}^{2}}{2m_{+}}+\frac{1}{2}m_{+}\omega ^{2}x^{2} \end{equation} in (2) with $m_{+}=-m_{-}$ reads \begin{equation} H_{y;m_{-}}=\frac{p_{y}^{2}}{2m_{-}}+\frac{1}{2}m_{-}\omega ^{2}y^{2}\text{ } \in L^{2}\left(
\mathbb{R}
\right) =\mathcal{H}\text{.} \end{equation} In this case both $H_{x;m_{+}}$ and $H_{y;m_{-}}$ are isospectral and both admit normalizable eigenfunctions. For example, the ground state in $x$ -space $\sim \exp \left( -m_{+}\omega x^{2}/2\right) $ and that in the $y$ -space $\sim \exp \left( -m_{-}\omega y^{2}/2\right) $ are both normalizable. The mass parametrization recipe does the trick, therefore.
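For transparency, the substitution chain behind the last two equations can be written out explicitly (our own intermediate step, combining the map $x\longrightarrow iy$, $p_{x}\longrightarrow -ip_{y}$ with the relabeling $m_{+}=-m_{-}$): \begin{equation*} H_{x;m_{+}}\longrightarrow \frac{\left( -ip_{y}\right) ^{2}}{2m_{+}}+\frac{1}{2}m_{+}\omega ^{2}\left( iy\right) ^{2}=-\left( \frac{p_{y}^{2}}{2m_{+}}+\frac{1}{2}m_{+}\omega ^{2}y^{2}\right) =\frac{p_{y}^{2}}{2m_{-}}+\frac{1}{2}m_{-}\omega ^{2}y^{2}=H_{y;m_{-}}. \end{equation*}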
Such observations unavoidably suggest the following proposition.
\begin{proposition} \emph{For every non-Hermitian complex }$\mathcal{PT}$\emph{-symmetric Hamiltonian with positive mass (i.e., }$m=m_{+}=+\left\vert m\right\vert $) \emph{\ there exists a Hermitian (neither necessarily isospectral nor necessarily }$\mathcal{PT}$-symmetric\emph{) partner Hamiltonian with negative mass (i.e., }$m=m_{-}=-\left\vert m\right\vert $\emph{) in Hilbert space }$L^{2}\left(
\mathbb{R}
\right) =\mathcal{H}$\emph{.} \end{proposition}
\begin{proof} Let \begin{equation} H_{x;m_{+}}=\frac{p_{x}^{2}}{2m_{+}}+V\left( x;m_{+}\right) \text{ };\text{ \ }V\left( x;m_{+}\right) =V^{\ast }\left( -x;m_{+}\right) \in
\mathbb{C}
, \end{equation} be a non-Hermitian complex $\mathcal{PT}$-symmetric Hamiltonian (with $m=+\left\vert m\right\vert $) with corresponding $\mathcal{PT}$-symmetric eigenfunctions $\Psi \left( x;m_{+}\right) $ such that \begin{equation*} H_{x;m_{+}}\Psi \left( x;m_{+}\right) =E_{m_{+}}\Psi \left( x;m_{+}\right) . \end{equation*} Then a mapping of the sort \begin{equation} x\in
\mathbb{R}
\longrightarrow \pm iy\in i
\mathbb{R}
\Longleftrightarrow \text{ \ }p_{x}\longrightarrow \mp ip_{y}\in i
\mathbb{R}
\,;\,\,x,\,y\in
\mathbb{R}
, \end{equation} would imply \begin{equation} H_{x;m_{+}}\Psi \left( x;m_{+}\right) =E_{m_{+}}\Psi \left( x;m_{+}\right) \Longleftrightarrow H_{y;m_{-}}\Phi \left( y;m_{-}\right) =E_{m_{-}}\Phi \left( y;m_{-}\right) , \end{equation} where the substitution $m_{+}=-m_{-}$ is used and \begin{equation} H_{y;m_{-}}=\frac{p_{y}^{2}}{2m_{-}}+V\left( y;m_{-}\right) \text{ }\in L^{2}\left(
\mathbb{R}
\right) ;\text{ \ }V\left( y;m_{-}\right) =V^{\ast }\left( y;m_{-}\right) \in
\mathbb{R}
. \end{equation} This Hamiltonian is Hermitian (\emph{but neither necessarily isospectral with }$H_{x;m_{+}}$\emph{\ of (12) nor necessarily }$\mathcal{PT}$-symmetric). QED. \end{proof}
Illustrative examples are ample. The complex "shifted by an imaginary constant" $\mathcal{PT}$-symmetric oscillator Hamiltonian (cf., e.g., Mustafa and Znojil [18]), accompanied by a properly regularized attractive/repulsive core (with the mass term kept intact), \begin{equation} H_{x;m_{+}}=\frac{p_{x}^{2}}{2m_{+}}+V\left( x;m_{+}\right) =\frac{p_{x}^{2} }{2m_{+}}+\frac{m_{+}\omega ^{2}}{2}\left( x-ic\right) ^{2}+\frac{G\left( m_{+},\alpha \right) }{\left( x-ic\right) ^{2}}, \end{equation} would, under the transformation $x\longrightarrow \pm iy$ and with \begin{equation*} G\left( m_{+},\alpha \right) =\frac{\hslash ^{2}\left( \alpha ^{2}-1/4\right) }{2m_{+}}, \end{equation*} imply \begin{equation} V\left( y;m_{+}\right) =-\frac{m_{+}\omega ^{2}}{2}\left( y\mp c\right) ^{2}- \frac{\hslash ^{2}}{2m_{+}}\frac{\left( \alpha ^{2}-1/4\right) }{\left( y\mp c\right) ^{2}}\in
\mathbb{R}
, \end{equation} which is not only real-valued but also $\mathcal{PT}$-symmetric (with parity performing reflection about $y=\pm c$ rather than $y=0$). In such a case, \begin{equation*} H_{x;m_{+}}\longrightarrow H_{y;m_{-}}=-H_{y;m_{+}}\in L^{2}\left(
\mathbb{R}
\right) =\mathcal{H} \end{equation*} where \begin{eqnarray} H_{y;m_{-}} &=&\frac{P_{y}^{2}}{2m_{-}}+V\left( y;m_{-}\right) \notag \\ &=&\frac{P_{y}^{2}}{2m_{-}}+\frac{m_{-}\omega ^{2}}{2}\left( y\mp c\right) ^{2}+\frac{\hslash ^{2}}{2m_{-}}\frac{\left( \alpha ^{2}-1/4\right) }{\left( y\mp c\right) ^{2}}. \end{eqnarray} Obviously, $H_{y;m_{-}}$ is not only Hermitian but also $\mathcal{PT}$-symmetric and shares the same eigenvalues with $H_{x;m_{+}}$ in (16), i.e., \begin{equation*} E_{n}=E_{m_{+}}=E_{m_{-}}=\left\{ \begin{tabular}{l} $2n+1;$ for $\alpha =\pm 1/2,$ \\ $4n+2+2q\alpha ;$ otherwise, \end{tabular} \right. \,n=0,1,\cdots , \end{equation*} where $q=\pm 1$ denotes quasi-parity. Obviously, the spectrum remains discrete, real, and bounded from below, and the wave functions remain normalizable (cf., e.g., Znojil [7] for more details), with a $c$ shift of the coordinate up or down.
Moreover, the Bender's and Boettcher's [1] non-Hermitian $\mathcal{PT}$ -symmetric Hamiltonian, with the $\mathcal{PT}$-symmetric potential $V\left( x\right) =-g\left( ix\right) ^{\nu }\in
\mathbb{C}
;\nu ,g\in
\mathbb{R}
,\ \nu \geq 2,$ $g>0$, \begin{equation} H_{x;m_{+}}=\frac{p_{x}^{2}}{2m_{+}}+V\left( x;m_{+}\right) =\frac{p_{x}^{2} }{2m_{+}}-g\left( ix\right) ^{\nu }, \end{equation} would, under the transformation $x\longrightarrow -iy$, yield \begin{equation} H_{y;m_{-}}=\frac{P_{y}^{2}}{2m_{-}}+V\left( y;m_{-}\right) =\frac{P_{y}^{2} }{2m_{-}}-g\left( y\right) ^{\nu }, \end{equation} which is Hermitian (but non-$\mathcal{PT}$-symmetric for odd $\nu $ and $\mathcal{PT}$-symmetric for even $\nu $, i.e., conditionally $\mathcal{PT}$-symmetric). Whilst Hermiticity is secured in the partner Hermitian Hamiltonian, the boundary conditions and normalizability are not. Therefore, isospectrality is a different issue that remains to be determined and will be partially discussed below.
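For the record, the transformation of the potential in the last step can be checked in one line (our own side remark): \begin{equation*} V\left( x;m_{+}\right) =-g\left( ix\right) ^{\nu }\;\overset{x\rightarrow -iy}{\longrightarrow }\;-g\left( i\left( -iy\right) \right) ^{\nu }=-g\,y^{\nu }=V\left( y;m_{-}\right) . \end{equation*}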
The above were just a few of the many examples available in the literature in which non-Hermitian $\mathcal{PT}$-symmetric Hamiltonians find Hermitian partners in the regular Hilbert space. Whenever one encounters such cases, the possibility of isospectrality should always be tested in the process. In the light of the above proposition, we may observe that our simple transformation toy $x\in
\mathbb{R}
\longrightarrow \pm iy\in i
\mathbb{R}
$, could be interpreted as a counterclockwise/clockwise rotation by $\theta =\pm \pi /2$ of the full real $x$-axis and would, effectively, just map a point $z_{1}=x$ into a point $z_{2}=\pm iy$ on the imaginary $y$-axis of the complex $z$-plane.
\section{A similarity transformation toy: isospectrality and mass signature}
In the search for a more technical metric-operator language, one may very well use the Ben-Aryeh and Barak [5] similarity transformation with a metric operator \begin{equation} \eta =\exp \left( -i\beta \,x\partial _{x}\right) \,;\text{ \ }\beta ,x\in
\mathbb{R}
. \end{equation} This operator transforms a power series \begin{equation} F\left( x\right) =\sum\limits_{n=0}^{\infty }A_{n}\,x^{n}\in
\mathbb{R}
, \end{equation} into \begin{equation} G\left( x\right) =\eta \,F\left( x\right) \,\eta ^{-1}=\sum\limits_{n=0}^{\infty }A_{n}\,\left( e^{-i\beta }x\right) ^{n}\in
\mathbb{C}
. \end{equation} Here, $G\left( x\right) $ is a non-Hermitian $\mathcal{PT}$-symmetric function that satisfies the similarity transformation relation $\eta \,^{-1}G\left( x\right) \,\eta =F\left( x\right) \in
\mathbb{R}
$. To reflect such a result onto the transformation toy of the above section, we choose $\beta =\pm \pi /2$. This immediately mandates that a non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian $H_{\mathcal{PT}}$ can be mapped into its partner Hermitian Hamiltonian $H$ (\emph{neither necessarily isospectral nor necessarily }$\mathcal{PT}$-symmetric) through a similarity transformation \begin{equation} \eta ^{-1}H_{\mathcal{PT}}\,\eta =H\iff H_{\mathcal{PT}}=\eta H\eta ^{-1}, \end{equation} where \begin{equation} H_{\mathcal{PT}}=\frac{p_{x}^{2}}{2m_{+}}+V\left( x;m_{+}\right) \,;\text{ } V\left( x;m_{+}\right) =\left[ V\left( -x;m_{+}\right) \right] ^{\ast }\in
\mathbb{C}
, \end{equation} and \begin{equation} H=\frac{p_{x}^{2}}{2m_{-}}+V\left( ix;m_{-}\right) \text{ };\text{ \ } V\left( ix;m_{-}\right) =V\left( x;m_{-}\right) =V^{\ast }\left( x;m_{-}\right) \in
\mathbb{R}
. \end{equation} $H$ denotes the Hermitian partner Hamiltonian in Hilbert space with real eigenvalues, therefore.
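As a brief justification of what follows (our own side remark, via the standard Baker--Campbell--Hausdorff expansion and the commutator $\left[ x\partial _{x},x\right] =x$, whose $n$-fold nesting again returns $x$), the action of $\eta $ on the coordinate reads \begin{equation*} \eta \,x\,\eta ^{-1}=\sum\limits_{n=0}^{\infty }\frac{\left( -i\beta \right) ^{n}}{n!}\,x=e^{-i\beta }x\,, \end{equation*} which for $\beta =\mp \pi /2$ gives $\eta \,x\,\eta ^{-1}=\pm ix$.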
Under such settings, one can easily show that $\eta x\eta ^{-1}=\pm ix$ (i.e., $x\rightarrow \pm ix$, which practically imitates our original transformation toy above) and consequently a non-Hermitian $\mathcal{PT}$ -symmetric potential $V_{\mathcal{PT}}\left( x\right) $ would be transformed into its real-valued (by the virtue of equation (2)) partner potential $ V\left( \pm ix;m_{+}\right) \in
\mathbb{R}
$ through the relation \begin{equation} \eta ^{-1}V_{\mathcal{PT}}\left( x\right) \eta =\eta ^{-1}V\left( x;m_{+}\right) \eta =V\left( \pm ix;m_{+}\right) =\left[ V\left( \pm ix;m_{+}\right) \right] ^{\ast }\in
\mathbb{R}
. \end{equation}
On the other hand, the proof of the related isospectrality between $H_{ \mathcal{PT},m_{+}}$ in (25) and its Hermitian partner Hamiltonian $ H_{m_{-}} $ in (26) seems to be a straightforward one. Let $E_{n,m_{+}}$ and $\Psi _{n}\left( x;m_{+}\right) $ be the eigenvalues and eigenfunctions of the complex $\mathcal{PT}$-symmetric Hamiltonian $H_{\mathcal{PT},m_{+}}$, respectively, then \begin{eqnarray*} H_{\mathcal{PT},m_{+}}\Psi _{n}\left( x;m_{+}\right) &=&E_{n,m_{+}}\Psi _{n}\left( x;m_{+}\right) \implies \\ \eta ^{-1}\eta H_{m_{+}}\left[ \eta ^{-1}\Psi _{n}\left( x;m_{+}\right) \right] &=&E_{n,m_{+}}\left[ \eta ^{-1}\Psi _{n}\left( x;m_{+}\right) \right] \implies \\ H_{m_{+}}\Phi _{n}\left( x;m_{+}\right) &=&E_{n,m_{+}}\Phi _{n}\left( x;m_{+}\right) \Longrightarrow \end{eqnarray*} \begin{equation} H_{m_{-}}\Phi _{n}\left( x;m_{-}\right) =E_{n,m_{-}}\Phi _{n}\left( x;m_{-}\right) , \end{equation} where $\eta ^{-1}\Psi _{n}\left( x;m_{+}\right) =\Phi _{n}\left( x;m_{+}\right) \in L^{2}\left(
\mathbb{R}
\right) $ are the eigenfunctions for $H_{m_{+}}$ in the Hilbert space. Both the non-Hermitian complex $\mathcal{PT}$-symmetric Hamiltonian $H_{\mathcal{PT},m_{+}}$ and its Hermitian partner Hamiltonian $H_{m_{+}}$ are, therefore, isospectral. Under such settings, we may observe that our examples in the previous section fit into such an isospectrality argument, no doubt.
However, an immediate example of "temporary-fragile-isospectrality" may be sought in the complex $\mathcal{PT}$-symmetric potential $V\left( x\right) =-A\func{sech}^{2}\left( x-ic\right) $, which, upon the de-Sitter/anti-de-Sitter transformation, is mapped into $V\left( x\right) =-A/\cos ^{2}x$ and manifests an unbounded spectrum because of the negative sign. Nonetheless, an immediate remedy may be sought in the parametrization of the coupling parameter, i.e., $A\longrightarrow -B\in
\mathbb{R}
$ (in analogy with the mass parametrization in (9)). This would, in turn, take $V\left( x\right) =-A/\cos ^{2}x$ (which does not support bound states) into $V\left( x\right) =B/\cos ^{2}x\in
\mathbb{R}
$ (which supports bound states).
In due course, we find it an unavoidable conclusion that the complex $\mathcal{PT}$-symmetric Hamiltonians find their Hermitian partners in the regular Hilbert space through either a simple transformation toy (13) or a similarity transformation toy (23), accompanied by a mass parametrization recipe (9) and/or an analogous coupling-constant recipe.
Nevertheless, having established this fact, we may now try to explore the orthonormalization conditions. Since $\Phi _{n}\left( x\right) \in L^{2}\left(
\mathbb{R}
\right) $ are the eigenfunctions for $H$ in Hilbert space, they satisfy the regular quantum mechanical orthonormalization condition \begin{equation} \left\langle \Phi _{k}\left( x\right) \left\vert \Phi _{n}\left( x\right) \right. \right\rangle =\delta _{kn}. \end{equation} Consequently, the established connection $\Phi _{n}\left( x\right) =\eta ^{-1}\Psi _{n}\left( x\right) \in L^{2}\left(
\mathbb{R}
\right) $ would imply \begin{equation} \left\langle \eta ^{-1}\Psi _{k}\left( x\right) \left\vert \eta ^{-1}\Psi _{n}\left( x\right) \right. \right\rangle =\delta _{kn}, \end{equation} which in turn yields \begin{equation} \left\langle \Psi _{k}\left( x\right) \left\vert \mp i\eta ^{-1}\left[ \eta ^{-1}\Psi _{n}\left( x\right) \right] \right. \right\rangle =\delta _{kn}\iff \left\langle \Psi _{k}\left( x\right) \left\vert \Psi _{n}\left( -x\right) \right. \right\rangle =\pm i\delta _{kn}. \end{equation} An obvious and immediate correspondence between the regular quantum mechanical orthonormalization condition (20) and that associated with the non-Hermitian complex $\mathcal{PT}$-symmetric Hamiltonians (22) is constructed, therefore. However, we could not find any example that satisfies such a condition. The orthonormalizable set of wave functions satisfying this condition is an empty set. This should be anticipated, since the normalizable wave functions of the Hermitian Hamiltonians are not expected to be safely transformed (along with the associated well-defined boundary conditions in Hilbert space) into the complex space.
\section{Concluding remarks}
In this work, we have introduced a simple transformation, $x\longrightarrow \pm iy\,;\,\,x,\,y\in
\mathbb{R}
$, that allowed non-Hermitian $\mathcal{PT}$-symmetric Hamiltonians to find their Hermitian (\emph{neither necessarily isospectral nor necessarily }$\mathcal{PT}$-symmetric) partners in Hilbert space. We have also introduced a similarity transformation recipe (with a metric operator $\eta $ in (21)) that proved to provide more direct mathematical access to the orthonormalization conditions associated with both the Hermitian (not necessarily $\mathcal{PT}$-symmetric) and the non-Hermitian $\mathcal{PT}$-symmetric (not necessarily isospectral) Hamiltonians.
Moreover, the parametrized-mass signature (an aspect almost forgotten and usually deliberately dismissed for the sake of simplicity of mathematical manipulation) is shown to play a significant role in the current methodical proposal. An analogous coupling-parameter recipe is shown to play a similar role to that of the parametrized mass. Yet, along the lines of the latter, Znojil [43], in his mass-sign duality proposal, has observed that the non-Hermitian cubic oscillator Hamiltonians $H_{\pm }=p^{2}\pm m^{2}x^{2}+ifx^{3}$ with opposite-sign mass signatures are (up to a constant shift) isospectral. For the feasibly significant role it may play, the mass term should always, therefore, be kept intact in the associated Hamiltonians.
Finally, as long as our non-Hermitian $\mathcal{PT}$-symmetric Hamiltonians $H_{\mathcal{PT}}$ find their Hermitian partners (\emph{neither necessarily isospectral nor necessarily }$\mathcal{PT}$-symmetric) in the Hilbert space (where boundary conditions and, consequently, orthonormalizability are feasibly very well defined), either through a simple transformation toy (13) or a similarity transformation toy (23) accompanied by a mass parametrization recipe (9) and/or an analogous coupling-constant recipe, non-Hermitian $\mathcal{PT}$-symmetric quantum mechanics remains safe and deserves to be advocated, irrespective of the orthodox (though rather fragile) mathematical Hermiticity requirement.
\end{document}
When thermal infrared space telescopes spot asteroids, are they seeing the body's own thermal emission, or reflected TIR from the Sun?
From the Space SE question Why has the Earth-Sun libration point L1 been chosen over L2 for NEOCam to detect new NEOs?:
above: Profoundly not-to-scale illustration of NEOCam in an orbit around the Sun-Earth libration point L1, about 1.5 million kilometers from Earth. Presumably the Sun-shield and Earth-shield block light (both infrared and visible) from the Sun and the Earth so that the instrument can work at the cold temperature necessary to detect the faint infrared light radiated from NEOs.
above: Infrared astronomer Amy Mainzer illustrates how asteroids warmed by the sun will stand out more brightly in the infrared compared to reflected visible light from the sun. One coffee cup is black the other white in the false-color infrared thermal image. From here.
And discussion under the answer explains the importance of phase angle; they will be easier to detect if at least some fraction of the sunlit side of the asteroid is visible from the thermal infrared telescope, but I think that this is because for slowly rotating asteroids you need the sun to be hitting it to warm it up enough so that it will "glow by itself" sufficiently to be visible in the telescope.
If I understand correctly, the advantage of using thermal IR to look for NEOs is that you want to find relatively small ones that aren't previously known, and this method is more sensitive to the smallest objects.
But I am not sure WHY that is true, and I am also not 100% sure of the source of the NIR light; is it strictly Planckian-like thermal gray-body radiation emitted from the warmed asteroid itself, or does it contain a reflected component from the Sun as well, or does that in fact dominate?
Question: Why exactly would one choose a thermal infrared (TIR) versus visible light telescope for NEO hunting? Is the TIR sought gray-body radiation from the object itself, or does it contain a significant component of or is even dominated by reflected light from the Sun?
"Bonus points" for an answer that delineates in which circular orbits and phase angles a 100 meter, albedo = 0.1 (all wavelengths) body is likely to be brighter in say 5 to 10 microns from reflected sunlight than from it's own thermal radiation. Perhaps the answer is different in the limits of zero and high rotation rate?
observational-astronomy asteroids temperature space-telescope near-earth-object
uhoh
$\begingroup$ Potentially helpful are answer(s) to How do the stars in the near-infrared (NIR) radiate? $\endgroup$
– uhoh
$\begingroup$ Does this help? neocam.ipac.caltech.edu/page/whyinfrared $\endgroup$
– Peter Erwin
OK, let's try some simple calculations. (Short answer: it's overwhelmingly the body's own thermal emission.)
The mid-IR light (let's use 10 microns, since a key design goal for NEOCam was ensuring imaging out to that wavelength) from the Sun can be approximated by emission from a 5800 K blackbody. The reflected 10-micron sunlight for an asteroid at a distance of $D$ is $L_{\rm sun} / (4 \pi D^{2})$, multiplied by the cross-sectional area of the asteroid (for simplicity, $\pi R_{\rm ast}^{2}$), multiplied by the albedo at 10 microns.
The emitted thermal radiation from the asteroid can be approximated by blackbody emission per unit surface area, multiplied by the surface area of the asteroid ($4 \pi R_{\rm ast}^2$), multiplied by the emissivity at 10 microns.
Let's assume a 100-m-radius asteroid located 1 AU from the Sun, with a temperature of 300 K.
The monochromatic (10-micron) luminosity of the Sun is $4 \pi R_{\rm sun}^{2}$ BB$(5800, 10\mu{\rm m}) \approx 2.7 \times 10^{10}$ W/Hz, where BB$(T,\lambda)$ is the monochromatic power at wavelength $\lambda$ emitted per unit surface area for a blackbody with temperature $T$. At 1 AU, a 100-m-radius asteroid could reflect a total of $\approx 3.0 \times 10^{-9}$ W/Hz at 10 microns. (Assuming albedo $= 1$, which is not possible.)
The maximum monochromatic thermal luminosity from the asteroid is $4 \pi R_{\rm ast}^{2}$ BB$(300, 10\mu{\rm m})$, which works out to be $\approx 1.3 \times 10^{-6}$ W/Hz. (Assuming emissivity $= 1$.)
OK, what about albedo and emissivity? A good estimate for the 10-micron emissivity of asteroids seems to be $\approx 0.9$, which would reduce the asteroid's thermal luminosity to $\approx 1.2 \times 10^{-6}$ W/Hz. Since emissivity + albedo $= 1$, this means the 10-micron albedo would be $\approx 0.1$ (so, a good assumption on your part), which reduces the reflected sunlight to $\approx 3.0 \times 10^{-10}$ W/Hz.
I've ignored issues like orientation geometry (how much of the reflected side of the asteroid can you actually see?) and the variation of temperature across the surface of the asteroid (higher on the dayside, lower on the nightside; less difference for faster-rotating asteroids), but these are secondary effects. The upshot is that the asteroid's thermal emission at 10 microns will be several thousand times brighter than the reflected sunlight.
Note that the amount of reflected sunlight is proportional to $R_{\rm ast}^2$, but so is the amount of thermal emission, so the asteroid's size is, to first order, actually irrelevant. (Though not if you look at visible wavelengths, where reflected sunlight dominates.)
Edited to add: At 5 microns, the asteroid's thermal emission will still be about a hundred times brighter than the reflected sunlight.
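The estimates above can be reproduced with a short, stdlib-only Python sketch (my own quick check, not the gist code linked below; it assumes the same 5800 K Sun, 300 K asteroid, 100 m radius, 1 AU distance, albedo 0.1 and emissivity 0.9):

```python
import math

# Physical constants and geometry (SI units)
H_PLANCK = 6.626e-34   # Planck constant [J s]
C_LIGHT = 2.998e8      # speed of light [m/s]
K_B = 1.381e-23        # Boltzmann constant [J/K]
R_SUN = 6.96e8         # solar radius [m]
AU = 1.496e11          # Sun-asteroid distance [m]

def bb_surface(T, wavelength):
    """Monochromatic power per unit surface area per Hz emitted by a
    blackbody at temperature T, i.e. pi * B_nu(T)."""
    nu = C_LIGHT / wavelength
    return 2.0 * math.pi * H_PLANCK * nu**3 / C_LIGHT**2 / math.expm1(H_PLANCK * nu / (K_B * T))

def fluxes(wavelength, r_ast=100.0, t_ast=300.0, albedo=0.1, emissivity=0.9):
    """Return (thermal, reflected) monochromatic luminosities [W/Hz]
    for an asteroid of radius r_ast at 1 AU from the Sun."""
    l_sun_nu = 4.0 * math.pi * R_SUN**2 * bb_surface(5800.0, wavelength)
    reflected = albedo * l_sun_nu / (4.0 * math.pi * AU**2) * math.pi * r_ast**2
    thermal = emissivity * 4.0 * math.pi * r_ast**2 * bb_surface(t_ast, wavelength)
    return thermal, reflected

thermal10, reflected10 = fluxes(10e-6)
thermal5, reflected5 = fluxes(5e-6)
print(f"10 um: thermal/reflected ~ {thermal10 / reflected10:.0f}")
print(f" 5 um: thermal/reflected ~ {thermal5 / reflected5:.0f}")
```

With albedo and emissivity set to 1, this reproduces the ≈ 3.0 × 10⁻⁹ and ≈ 1.3 × 10⁻⁶ W/Hz figures quoted above; with the 0.1/0.9 values it gives a thermal-to-reflected ratio of a few thousand at 10 microns and of order a hundred at 5 microns.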
Edited to add: If you want to experiment with different wavelengths, asteroid temperatures, etc., I put some Python code I wrote for the computations in this Github gist.
Peter Erwin
mechanics of rigid bodies
(12 points)4. Series 33. Year - E. torsional pendulum
Take a homogeneous rod, at least $40 \mathrm{cm}$ long. Attach two cords of the same material (e.g. thread or fishing line) to it, symmetrically with respect to its centre, and attach the other ends of the cords to some fixed body (e.g. stand, tripod) so that both cords would have the same length and they'd be parallel to each other. Measure the period of torsion oscillations of the rod depending on the distance $d$ of the cords, for multiple lengths of the cords, and find the relationship between these two variables. During torsion oscillations, the rod rotates in a horizontal plane and its centre remains still.
mechanics of rigid bodies, oscillations
(12 points)3. Series 33. Year - E. dense measurement
Construct a hydrometer (for example from straw and plasteline) and measure dependence of water density on the concetration of salt dissolved in it.
Buoyant Matěj.
(8 points)2. Series 33. Year - 5. wheel with a spring
We have a perfectly rigid homogeneous disc with a radius $R$ and mass $m$, to which a rubber band is connected. One end of the band is fixed at a distance $2R$ from the edge of the disc, and the other end is attached to the edge of the disc. The rubber band behaves as an ideal, thin spring with stiffness $k$, rest length $2R$ and negligible mass. The disc is secured in the middle, so it is able to rotate about one axis around this point, but cannot move or change the rotation axis. Find the relation between the magnitude of the moment of force by which the rubber band speeds up or slows down the rotation of the disc and the angle $\phi $. Also, derive the equation of motion.
Bonus: Define the period of system's small oscillations.
Karel had a headache.
(9 points)5. Series 32. Year - 5. bouncing ball
We spin a rigid ball in the air with angular velocity $\omega $ high enough parallel with the ground. After that we let the ball fall from height $h_0$ onto a horizontal surface. It bounces back from the surface to height $h_1$ and falls to a slightly different spot than the initial spot of fall. Determine the distance between those two spots of fall onto ground, given the coefficient of friction $f$ between the ball and the ground is small enough.
mechanics of rigid bodies, mechanics of a point mass
Matej observed Fykos birds playing with a ball
(9 points)5. Series 32. Year - P. 1 second problems
Suggest several ways to slow down the Earth so that we would not have to add the leap second to certain years. How much would it cost?
(9 points)4. Series 32. Year - 5. frisbee
A thin homogeneous disc rolls on a flat horizontal surface along a circle of radius $R$. The velocity of the disc's centre is $v$. Find the angle $\alpha $ between the disc plane and the vertical. The friction between the disc and the surface is sufficiently large. You may work in the approximation where the radius of the disc is much smaller than $R$.
Jáchym hopes that contestants will come up with a solution.
(8 points)3. Series 32. Year - 4. destruction of a copper loop
A copper flexible circular loop of radius $r$ is placed in a uniform magnetic field $B$. The vector of magnetic induction is perpendicular to the plane determined by the loop. The maximal allowed tensile strength of the material is $\sigma _p$. The flux linkage of this circular loop is changing in time as $\Phi (t) = \Phi _0 + \alpha t,$ where $\alpha $ is a positive constant. How long does it take to reach $\sigma _p$?
Hint: The tension force can be calculated as $T = |BIr|$.
mechanics of rigid bodies, magnetic field
Vítek thinks back to AP Physics.
(10 points)1. Series 32. Year - S. theoretical mechanics
Before we dive into the art of analytical mechanics, we should brush up on classical mechanics on the following series of problems.
A homogeneous marble with a very small radius sits on top of a crystal sphere. After being granted an arbitrarily small speed, the marble starts rolling down the sphere without slipping. Where will the marble separate from the sphere and fall off?
Instead of the sphere from the previous problem, the marble now sits on a crystal paraboloid given by the equation $y = c - ax^2$. Again, where will the marble separate from the paraboloid?
A cyclist going at the speed $v$ takes a sharp turn to a road perpendicular to his original direction. During the turn, he traces out a part of a circle with radius $r$. How much does the cyclist have to lean into the turn? You may neglect the moment of inertia of the wheels and approximate the cyclist as a mass point.
Bonus: Do not neglect the moment of inertia of the wheels.
(6 points)6. Series 31. Year - 3. non-analytic spring
Imagine a pole of length $b = 5 \mathrm{cm}$ and mass $m = 1 \mathrm{kg}$ and a spring of initial length $c = 10 \mathrm{cm}$, spring constant $k = 200 \mathrm{N\cdot m^{-1}}$ and negligible mass, that are connected at one of their ends. The other ends of the spring and the pole are affixed at the same height $a = 10 \mathrm{cm}$ from each other. The spring and the pole can both freely rotate about the fixed points and their joint. Label $\phi $ the angle of the pole to the horizontal. Find all angles $\phi $, for which the system is in an equilibrium. Which of these are stable and which unstable?
Jachym was supposed to come up with an easy problem.
(7 points)6. Series 31. Year - 4. dimensional analysis
Matej was making a gun and wanted to measure the speed of the projectiles leaving the barrel. Unfortunately, he doesn't have any measuring device other than a ruler. However, he found a block made half of steel and half of wood. He lays it down at the edge of the table (of height $100 \mathrm{cm}$ and length $200 \mathrm{cm}$) and shoots at it horizontally. With the steel part of the block facing the gun, the bullet bounces off perfectly elastically and lands $50 \mathrm{cm}$ from the edge of the table. The block slides $5 \mathrm{cm}$ on the table. Then Matej turns the block around and shoots into the wooden side. This time the bullet stays in the block and the block slides only $4 \mathrm{cm}$. Help Matej with calculating the speed of the bullet. It might also be helpful to know that when Matej lifts one edge of the table by at least $20 \mathrm{cm}$, the moving block won't stop sliding.
Matej wanted all the variables to have the same unit.
A first-class approach of higher derivative Maxwell–Chern–Simons–Proca model
Regular Article - Theoretical Physics
Silviu-Constantin Sararu
The European Physical Journal C volume 75, Article number: 526 (2015)
A preprint version of the article is available at arXiv.
The equivalence between a higher derivative extension of Maxwell–Chern–Simons–Proca model and some gauge invariant theories from the point of view of the Hamiltonian path integral quantization in the framework of the gauge-unfixing approach is investigated. The Hamiltonian path integrals of the first-class systems take manifestly Lorentz-covariant forms.
The quantization of a second-class constrained system can be achieved by reformulating the original theory as a first-class one and then quantizing the resulting first-class theory. This quantization procedure was applied to various models [1–18] using a variety of methods to replace the original second-class model with an equivalent model in which only first-class constraints appear. The conversion of the original second-class system into an equivalent gauge invariant theory can be accomplished without enlarging the phase space, starting from the possibility of interpreting a second-class constraints set as resulting from a gauge-fixing procedure of a first-class constraints set and "undoing" the gauge-fixing [19–23]. The gauge-unfixing method relies on separating the second-class constraints into two subsets, one of them being first-class and the other one providing some canonical gauge conditions for the first-class subset. Starting from the canonical Hamiltonian of the original second-class system, we construct a first-class Hamiltonian with respect to the first-class subset through an operator that projects any smooth function defined on the phase space into an application that is in strong involution with the first-class subset. Another method to construct the equivalent first-class theory relies on an appropriate extension of the original phase space through the introduction of some new variables. The first-class constraints set and the first-class Hamiltonian are constructed as power series in the new variables [24–27]. Various aspects of the equivalence [28] between the self-dual model [29] and the Maxwell–Chern–Simons (MCS) theory [30, 31] have been studied using one of the two methods mentioned above [17, 32–34]. A generalization of the Proca action for a massive vector field with derivative self-interactions in \(D=4\) has been constructed in [35]. 
In [36–40] one finds higher derivative extensions that involve the Maxwell and/or Chern–Simons (CS) terms [28–30]. The Lagrangian of such a model is the sum of Maxwell, CS, and higher derivative extensions of these terms. The generalized MCS–Podolsky model [38, 41] is such a theory and was introduced in order to smooth ultraviolet singularities. Starting from the observation that the study of Einstein–Chern–Simons–Proca massive gravity (ECSPMG) (the Lagrangian of ECSPMG is the sum of the Einstein, (third derivative order) CS, and Proca-like mass terms) [42] is often accompanied [39, 40, 43] by the analysis of the MCS–Proca model (a non-higher derivative model) [37, 38, 43–45], we consider a model described by the Lagrangian action containing the Maxwell term, a higher derivative extension of the CS topological invariant [36], and a Proca mass term
$$\begin{aligned} S&=\int d^{3}x\bigg [ -\frac{a}{4}\partial _{[\mu }A_{\nu ]}\partial ^{[\mu }A^{\nu ]} \nonumber \\&\quad + \frac{1}{2b}\varepsilon _{\mu \nu \rho }\left( \partial _{\lambda }\partial ^{\lambda }A^{\mu }\right) \partial ^{\nu }A^{\rho } -\frac{m^{2}}{2}A_{\mu }A^{\mu }\bigg ]; \end{aligned}$$
and we investigate the previous higher derivative extension of the MCS–Proca model from the point of view of the Hamiltonian path integral quantization, using the gauge-unfixing (GU) approach. The choice of the extended MCS–Proca (MECS–Proca) model will become more transparent in Sect. 3.1, where we will find that the MECS–Proca model and the ECSPMG theory have similarities regarding the number of physical degrees of freedom and the presence of ghost and tachyon excitations. In order to construct an equivalent first-class system starting from the MECS–Proca model in the framework of the GU approach, we need to know the structure of the constraints set of the model. As the second term in the action (1) contains higher derivative terms \(\left\{ \partial _{\lambda }\partial ^{\lambda }A_{\mu }\right\} \), the canonical analysis will be done by a variant of the Ostrogradsky method [46–51] developed in Ref. [52], based on an equivalent first order formalism [53, 54] and applied to a number of particle and field theoretic models [52, 55–57]. The Hamiltonian analysis of a higher derivative extension of a theory displays a constraints set with a more complicated structure than the constraints set of the usual theory (where the Lagrangian is a function of the fields and their first derivatives only). The separation of a second-class constraints set with a complicated structure into two subsets (one of them being first-class and the other one providing some canonical gauge conditions for the first-class subset) is an intricate issue. In general, in the structure of the constraints set of the higher derivative extension we find a reminiscence of the structure of the constraints set of the usual theory. In order to make the approach of the MECS–Proca model more transparent, we first consider the MCS–Proca model and apply the quantization procedure mentioned above. 
Next, we focus on the Hamiltonian analysis of the MECS–Proca model and the construction of the equivalent first-class system using gauge-unfixing method. Then we construct the Hamiltonian path integral of the equivalent first-class system. After integrating out the auxiliary fields and performing some field redefinitions, we discover the manifestly Lorentz-covariant path integral corresponding to the Lagrangian formulation of the first-class system, which reduces to the Lagrangian path integral for a Stückelberg coupling between a scalar field and a 1-form or to the Lagrangian path integral for two kinds of 1-forms with CS coupling.
The paper is organized in four sections. In Sect. 2, starting from the MCS–Proca model, we construct an equivalent first-class model using the gauge-unfixing method and obtain the path integral corresponding to the first-class system associated with this model. Section 3 contains the main results of the present paper. First, we perform a Hamiltonian analysis and study the excitations and mass counts of the MECS–Proca model. Second, we exemplify in detail the gauge-unfixing method on the MECS–Proca model and then construct the path integral of the equivalent first-class system associated with this second-class theory. Section 4 ends the paper with the main conclusions.
The MCS–Proca model
The MCS–Proca model is described by the Lagrangian action [37, 38, 43–45]
$$\begin{aligned} S&=\int d^{3}x\bigg (-\frac{a}{4}\partial _{[\mu }A_{\nu ]}\partial ^{[\mu }A^{\nu ]}\nonumber \\&\quad -b\varepsilon _{\mu \nu \rho }A^{\mu }\partial ^{\nu }A^{\rho }-\frac{m^{2}}{2}A_{\mu }A^{\mu }\bigg ), \end{aligned}$$
where a and b are some real constants. We work with the Minkowski metric tensor of 'mostly minus' signature \(\sigma _{\mu \nu }=diag(+--)\). The canonical analysis [58, 59] of the model described by the Lagrangian action (2) displays the second-class constraints (scc)
$$\begin{aligned} \chi ^{( 1) }\equiv & {} p^{0}\approx 0,\end{aligned}$$
$$\begin{aligned} \chi ^{( 2) }\equiv & {} \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-m^{2}A^{0}\approx 0, \end{aligned}$$
and the canonical Hamiltonian
$$\begin{aligned} H_{c}&=\int d^{2}x\left( -\frac{1}{2a}p_{i}p^{i}-A_{0}\partial _{i}p ^{i}+\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}\right. \nonumber \\&\quad \left. +\,b\varepsilon _{0ij}A^{0}\partial ^{i}A^{j}\!+\!\frac{b}{a}\varepsilon _{0ij}A^{i}p^{j}\!-\!\frac{b^{2}}{2a}A_{i}A^{i}\!+\!\frac{m^{2}}{2}A_{\mu }A^{\mu }\right) , \end{aligned}$$
where \(p^{\mu }\) are the canonical momenta conjugated with the fields \( A_{\mu }\). The number of physical degrees of freedom [21] of the original system is equal to
$$\begin{aligned} \mathscr {N}_{O}= & {} \left( 6\ \mathrm {canonical\ variables}-2\ \mathrm {scc}\right) /2 \nonumber \\= & {} 2. \end{aligned}$$
The same result, with respect to the number of degrees of freedom, is obtained in Refs. [43, 44]. Moreover, in Refs. [43, 44] it is shown that the MCS–Proca model describes a topological mass mix with two massive degrees of freedom, with masses \(\sqrt{b^{2}+m^{2}}\pm \vert b\vert \).
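As a quick consistency check of this spectrum (our remark, for the normalization \(a=1\)), the two masses multiply to \(m^{2}\) and differ by \(2\vert b\vert \), so they are the roots of a quartic dispersion relation:

$$\begin{aligned} m_{\pm }&=\sqrt{b^{2}+m^{2}}\pm \vert b\vert \;\Rightarrow \; m_{+}m_{-}=m^{2},\quad m_{+}-m_{-}=2\vert b\vert , \\ 0&=k^{4}-\left( 2m^{2}+4b^{2}\right) k^{2}+m^{4}. \end{aligned}$$

In the limit \(m\rightarrow 0\) the masses go to \(2\vert b\vert \) and \(0\), recovering the single topologically massive excitation of the pure MCS theory, while for \(b\rightarrow 0\) both reduce to the Proca mass \(m\).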
According to the GU method, we consider the constraint (4) as the first-class constraint (fcc) and the remaining constraint (3) as the corresponding canonical gauge condition. Further, we redefine the first-class constraint
$$\begin{aligned} G\equiv -\frac{1}{m^{2}}\left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-m^{2}A^{0}\right) \approx 0. \end{aligned}$$
The other choice, considering the constraint (3) as the first-class constraint and the constraint (4) as the corresponding canonical gauge condition, yields a path integral that cannot be written (after integrating out auxiliary variables) in a manifestly covariant form [11, 14]. The next step of the GU approach is represented by the construction of a first-class Hamiltonian with respect to the constraint (7),
$$\begin{aligned} H_{GU} =H_{c}-\chi ^{(1) }[ G,H_{c}] +\frac{1}{2} \chi ^{(1) }\chi ^{( 1) }[ G,[ G,H_{c}] ] -\cdots . \end{aligned}$$
The concrete form of first-class Hamiltonian, \(H_{GU}\) is given by
$$\begin{aligned} H_{GU}= & {} \int d^{2}x\left[ -\frac{1}{2a}p_{i}p^{i}+\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}\right. \nonumber \\&+\,\frac{b}{a}\varepsilon _{0ij}A^{i}p^{j}-\frac{b^{2}}{2a}A_{i}A^{i}-\frac{m^{2}}{2}A_{0}A^{0} \nonumber \\&-\,A_{0}\left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-m^{2}A^{0}\right) \nonumber \\&\left. +\,\frac{1}{2}\left( \frac{1}{m}\partial _{i}p_{0}+mA_{i}\right) \left( \frac{1}{m}\partial ^{i}p_{0}+mA^{i}\right) \right] . \end{aligned}$$
It can be verified that the Hamiltonian gauge algebra relation is given by
$$\begin{aligned}{}[G,H_{GU}] =0. \end{aligned}$$
The equations of motion are
$$\begin{aligned} \dot{A}_{0}= & {} -\partial _{i}\left( A^{i}+\frac{1}{m^{2}}\partial ^{i}p^{0}\right) , \end{aligned}$$
$$\begin{aligned} \dot{A}_{i}= & {} -\frac{1}{a}\left( p_{i}+b\varepsilon _{0ij}A^{j}\right) +\partial _{i}A_{0}+\frac{1}{m^{2}}\partial _{i}\Lambda , \end{aligned}$$
$$\begin{aligned} \dot{p}^{0}= & {} \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-m^{2}A^{0}-\Lambda , \end{aligned}$$
$$\begin{aligned} \dot{p}^{i}= & {} a\partial _{j}\partial ^{[j}A^{i]}-\frac{b}{a} \varepsilon _{0ij}\left( p^{j}+b\varepsilon ^{0jk}A_{k}\right) \nonumber \\&-m^{2}\left( A^{i}+\frac{1}{m^{2}}\partial ^{i}p^{0}\right) -b\varepsilon _{0ij}\partial ^{j}A^{0}-\frac{b}{m^2}\varepsilon ^0ij\partial _{j}\Lambda ,\nonumber \\ \end{aligned}$$
where \(\Lambda \) is an arbitrary function. Under the canonical gauge condition \(p^{0}\approx 0\) (\(\Lambda =0\)), Eqs. (11)–(14) return to the equations of motion for the MCS–Proca model. The number of physical degrees of freedom of the GU system is equal to
$$\begin{aligned} \mathscr {N}_{GU}= & {} ( 6\ \mathrm {canonical\ variables}-2\times 1\ \mathrm {fcc}) /2 \nonumber \\= & {} 2=\mathscr {N}_{O}. \end{aligned}$$
The original second-class theory and the gauge-unfixed system are classically equivalent since they possess the same number of physical degrees of freedom and, moreover, the corresponding algebras of classical observables are isomorphic. Consequently, the two systems become equivalent at the level of the path integral quantization, which allows us to replace the Hamiltonian path integral of the MCS–Proca model with that of the gauge-unfixed first-class system
$$\begin{aligned} Z_{GU}= & {} \int \mathscr {D}\left( A_{\mu },p^{\mu },\lambda \right) \mu \left( \left[ A_{\mu }\right] \right) \exp \bigg \{i\int d^{3}x\bigg [ \left( \partial _{0}A_{\mu }\right) p^{\mu } \nonumber \\&-\,\mathscr {H}_{GU}+\frac{1}{m^{2}}\lambda \left( \partial ^{i}p_{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-m^{2}A^{0}\right) \bigg ] \bigg \},\nonumber \\ \end{aligned}$$
where the integration measure '\(\mu \left( \left[ A_{\mu }\right] \right) \)' associated with the model subject to the first-class constraint (7) includes some suitable canonical gauge conditions and it is chosen such that the path integral (16) is convergent [60].
Using in the path integral the notation
$$\begin{aligned} \bar{A}_{0}=A_{0}+\frac{1}{m^{2}}\lambda \end{aligned}$$
and performing partial integrations over the momentum \(p^{i}\) and field \(A_{0}\), the argument of the exponential takes the form
$$\begin{aligned} S_{GU}= & {} \int d^{3}x\bigg [ -\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}-\frac{a}{2}\left( \partial _{0}A_{i}-\partial _{i}\bar{A}_{0}\right) \nonumber \\&\times \,\left( \partial ^{0}A^{i}-\partial ^{i}\bar{A}^{0}\right) -b\varepsilon _{0ij}\bar{A}^{0}\partial ^{i}A^{j}\nonumber \\&-\,b\varepsilon _{i0j}A^{i}\partial ^{0}A^{j}-b\varepsilon _{ij0}A^{i}\partial ^{j}\bar{A}^{0}\nonumber \\&-\,\frac{1}{2}\left( \frac{1}{m}\partial _{i} p_{0} +mA_{i}\right) \left( \frac{1}{m}\partial ^{i} p^{0}+mA^{i}\right) \nonumber \\&-\,\frac{1}{2}\left( \frac{1}{m}\partial _{0} p_{0}+m\bar{A}_{0}\right) \left( \frac{1}{m}\partial ^{0} p^{0}+m\bar{A}^{0}\right) \bigg ]. \end{aligned}$$
In terms of the notation \(\varphi =-\frac{1}{m}p^{0}\), the last functional reads
$$\begin{aligned} S_{GU}= & {} \int d^{3}x\left[ -\frac{a}{4}\partial _{[\mu }\bar{A} _{\nu ]}\partial ^{[\mu }\bar{A}^{\nu ]}-b\varepsilon _{\mu \nu \rho } \bar{A}^{\mu }\partial ^{\nu }\bar{A}^{\rho }\right. \nonumber \\&\left. -\,\frac{1}{2}\left( \partial _{\mu }\varphi -m\bar{A}_{\mu }\right) \left( \partial ^{\mu }\varphi -m\bar{A}^{\mu }\right) \right] , \end{aligned}$$
where \(\bar{A}_{\mu }\equiv \left\{ \bar{A}_{0},A_{i}\right\} \); this functional describes a Stückelberg coupling between the scalar field \(\varphi \) and the 1-form \( \bar{A}_{\mu }\) [61]. The scalar field \(\varphi \) plays the role of the Stückelberg scalar. A similar result (for \(a=0\) or \(b=0\)) has been obtained in [3–5, 9, 10] using the extended phase space method [24–27]. There, the extra field of the extended phase space method was identified with the Stückelberg scalar. In contrast, in the GU approach we find that the Stückelberg scalar corresponds to \(-\frac{1}{m}p^{0}\), where \(p^{0}\) is the canonical momentum conjugated with the original field \(A_{0}\).
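For completeness, the gauge invariance of the action (19) can be displayed explicitly. Under the standard Stückelberg transformations (with an arbitrary scalar gauge parameter \(\epsilon \); our notation),

$$\begin{aligned} \delta _{\epsilon }\bar{A}_{\mu }=\partial _{\mu }\epsilon ,\qquad \delta _{\epsilon }\varphi =m\epsilon , \end{aligned}$$

the combination \(\partial _{\mu }\varphi -m\bar{A}_{\mu }\) and the Maxwell term are strictly invariant, while the CS term varies only by a total derivative, \(\delta _{\epsilon }\left( -b\varepsilon _{\mu \nu \rho }\bar{A}^{\mu }\partial ^{\nu }\bar{A}^{\rho }\right) =-b\partial ^{\mu }\left( \epsilon \varepsilon _{\mu \nu \rho }\partial ^{\nu }\bar{A}^{\rho }\right) \), so the action (19) is gauge invariant.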
In the following we prove that, starting from the Hamiltonian path integral of the gauge system (19) in a suitable gauge, we recover the MCS–Proca model. The canonical analysis of the model described by the Lagrangian action (19) displays the first-class constraints,
$$\begin{aligned} G_{1} \equiv p^{0}\approx 0,\quad G_{2} \equiv \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-mp\approx 0, \end{aligned}$$
and the Hamiltonian
$$\begin{aligned} H= & {} \int d^{2}x\left[ -\frac{1}{2a}p_{i}p^{i}-A_{0}\partial _{i}p^{i}+\frac{ a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}\right. \nonumber \\&+\,b\varepsilon _{0ij}A^{0}\partial ^{i}A^{j}+\frac{b}{a}\varepsilon _{0ij}A^{i}p^{j}-\frac{b^{2}}{2a}A_{i}A^{i} \nonumber \\&\left. -\,\frac{1}{2}p^{2}+mA^{0}p+\frac{1}{2}\left( \partial _{i}\varphi -mA_{i}\right) \left( \partial ^{i}\varphi -mA^{i}\right) \right] , \nonumber \\ \end{aligned}$$
where \(\left\{ p^{\mu },p\right\} \) are the canonical momenta conjugate with the fields \(\left\{ A_{\mu },\varphi \right\} \). Taking
$$\begin{aligned} C^{1} \equiv \varphi \approx 0, \quad C^{2} \equiv -p+mA_{0}\approx 0, \end{aligned}$$
as the unitary gauge-fixing conditions, the Hamiltonian path integral is given by
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{\mu },p^{\mu },\varphi ,p\right) \delta \left( G_{1}\right) \delta \left( G_{2}\right) \delta \left( C^{1}\right) \delta \left( C^{2}\right) \nonumber \\&\times \exp \left\{ i\int d^{3}x\left[ \left( \partial _{0}A_{\mu }\right) p^{\mu }+\left( \partial _{0}\varphi \right) p-\mathscr {H}\right] \right\} . \end{aligned}$$
Integrating over the momentum \(p^{0}\) and fields \(\{\varphi \), \(A_{0}\}\) and representing \(\delta \left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-mp\right) \) as a functional integral,
$$\begin{aligned} \int \mathscr {D}\lambda \exp \left\{ i\int d^{3}x\left[ \lambda \left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-mp\right) \right] \right\} , \end{aligned}$$
the path integral takes the form
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{i},p^{i},p,\lambda \right) \exp \bigg \{i \int d^{3}x\bigg [ \left( \partial _{0}A_{i}\right) p^{i} \nonumber \\&+\,\frac{1}{2a}p_{i}p^{i}+\frac{1}{m}p\partial _{i}p^{i}-\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]} \nonumber \\&-\,\frac{b}{m}\varepsilon _{0ij}p\partial ^{i}A^{j}-\frac{b}{a}\varepsilon _{0ij}A^{i}p^{j}+\frac{b^{2}}{2a}A_{i}A^{i}-\frac{1}{2}p^{2} \nonumber \\&-\,\frac{m^{2}}{2}A_{i}A^{i}+\lambda \left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-mp\right) \bigg ]\bigg \}. \end{aligned}$$
Using in the path integral the notation
$$\begin{aligned} A_{0}=\frac{1}{m}p+\lambda , \end{aligned}$$
the argument of the exponential becomes
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{\mu },p^{i},p\right) \exp \bigg \{ i\int d^{3}x\bigg [ \left( \partial _{0}A_{i}\right) p^{i} \nonumber \\&+\,\frac{1}{2a}p_{i}p^{i}-\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}-\frac{b}{a}\varepsilon _{0ij}A^{i}p^{j} \nonumber \\&-\,\frac{b^{2}}{2a}A_{i}A^{i}+\frac{1}{2}p^{2}-\frac{m^{2}}{2}A_{i}A^{i}+A_{0} \nonumber \\&\times \,\left( \partial _{i}p^{i}-b\varepsilon ^{0ij}\partial _{i}A_{j}-mp\right) \bigg ]\bigg \}. \end{aligned}$$
After integration over the momenta \(p^{i}\) and p, we find that the argument of the exponential is just the MCS–Proca Lagrangian
$$\begin{aligned} Z= & {} \int \mathscr {D}A_{\mu }\exp \bigg \{i\int d^{3}x\bigg ( -\frac{a}{4} \partial _{[\mu }A_{\nu ]}\partial ^{[\mu }A^{\nu ]}\nonumber \\&-\,b\varepsilon _{\mu \nu \rho }A^{\mu }\partial ^{\nu }A^{\rho }-\frac{m^{2} }{2}A_{\mu }A^{\mu }\bigg )\bigg \}. \end{aligned}$$
The MCS–Proca model can be correlated to another first-class theory whose field spectrum comprises two types of 1-form gauge fields. For this purpose we consider the following fields/momenta combinations:
$$\begin{aligned} \mathscr {P}_{i} \equiv p_{i}+b\varepsilon _{0ij}A^{j},\quad \mathscr {F}_{i} \equiv A_{i}+\frac{1}{m^{2}}\partial _{i}p_{0},\quad \mathscr {F}_{0} \equiv A_{0}, \end{aligned}$$
which are in (strong) involution with the first-class constraint (7),
$$\begin{aligned} \left[ \mathscr {P}_{i},G\right] =\left[ \mathscr {F}_{i},G\right] =\left[ \mathscr {F}_{0},G\right] =0. \end{aligned}$$
We observe that the first-class Hamiltonian (9) can be written in terms of these gauge invariant quantities as
$$\begin{aligned} H_{GU}= & {} \int d^{2}x\left[ -\frac{1}{2a}\mathscr {P}_{i}\mathscr {P}^{i}+ \frac{a}{4}\partial _{[i}\mathscr {F}_{j]}\partial ^{[ i} \mathscr {F}^{j]}\right. \nonumber \\&\left. +\,\frac{m^{2}}{2}\mathscr {F}_{i}\mathscr {F}^{i}-\frac{m^{2}}{2} \mathscr {F}_{0}\mathscr {F}^{0}+m^{2}\mathscr {F}_{0}G\right] . \end{aligned}$$
By direct computation we find that \(\mathscr {F}_{\mu }\equiv \left\{ \mathscr {F}_{0},\mathscr {F}_{i}\right\} \) satisfy the equations
$$\begin{aligned} \partial ^{\nu }\partial _{[\nu }\mathscr {F}_{0]}= & {} \frac{m^{2}}{a}\mathscr {F}_{0}+\frac{2b}{a}\varepsilon _{0ij}\partial ^{i}\mathscr {F}^{j}+\mathscr {O}\left( G\right) , \end{aligned}$$
$$\begin{aligned} \partial ^{\nu }\partial _{[\nu }\mathscr {F}_{i]}= & {} \frac{m^{2}}{a} \mathscr {F}_{i}+\frac{2b}{a^{2}}\varepsilon _{0ij}\mathscr {P}^{j}+\mathscr {O }\left( G\right) , \end{aligned}$$
and that it is divergenceless,
$$\begin{aligned} \partial ^{\mu }\mathscr {F}_{\mu }=0. \end{aligned}$$
Enlarging the phase space by adding some bosonic canonical variables \(\left\{ V^{\mu },P_{\mu }\right\} \), we can write the solution to Eq. (34) as
$$\begin{aligned} \mathscr {F}_{\mu }=-\frac{1}{m}\varepsilon _{\mu \nu \rho }\partial ^{\nu }V^{\rho }. \end{aligned}$$
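One can verify that this ansatz solves Eq. (34) identically, by the antisymmetry of the Levi-Civita symbol:

$$\begin{aligned} \partial ^{\mu }\mathscr {F}_{\mu }=-\frac{1}{m}\varepsilon _{\mu \nu \rho }\partial ^{\mu }\partial ^{\nu }V^{\rho }\equiv 0. \end{aligned}$$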
When we replace the solution (35) in the first-class constraint (7), the constraint takes the form
$$\begin{aligned} -\frac{1}{m^{2}}\left( \partial ^{i}p_{i}-b\varepsilon _{0ij}\partial ^{i}A^{j}+m\varepsilon _{0ij}\partial ^{i}V^{j}\right) \approx 0, \end{aligned}$$
and remains first-class. From the gauge transformation of the quantity \(\partial _{i}p_{0}\), we obtain
$$\begin{aligned} \partial _{i}p_{0}=m\varepsilon _{0ij}P^{j}. \end{aligned}$$
Using Eqs. (35) and (37) in the first-class Hamiltonian (9), we obtain for the first-class Hamiltonian the following form:
$$\begin{aligned} H_{GU}^{\prime }= & {} \int d^{2}x\bigg [\frac{a}{4}\partial _{[ i}A_{j]}\partial ^{[ i}A^{j]}-\frac{1}{4}\partial _{[ i}V_{j]}\partial ^{[ i}V^{j]} \nonumber \\&-\,\frac{1}{2a}\left( p_{i}+b\varepsilon _{0ij}A^{j}\right) \left( p^{i}+b\varepsilon ^{0ik}A_{k}\right) \nonumber \\&+\,\frac{m^{2}}{2}\left( A_{i}+\frac{1}{m}\varepsilon _{0ij}P^{j}\right) \left( A_{i}+\frac{1}{m}\varepsilon ^{0ik}P_{k}\right) \nonumber \\&+\,\frac{1}{2m}\varepsilon _{0ij}\partial ^{[ i}V^{j]}\nonumber \\&\times \,\left( \partial _{k}p^{k}-b\varepsilon _{0kl}\partial ^{k}A^{l}+m\varepsilon _{0kl}\partial ^{k}V^{l}\right) \bigg ]. \end{aligned}$$
At this moment we have a dynamical system with the phase space locally parameterized by \(\left\{ A_{i},p^{i},V^{\mu },P_{\mu }\right\} \), subject to the first-class constraint (36) and with too many degrees of freedom,
$$\begin{aligned} {\mathscr {N}}_{GU}^{\prime }= & {} ( 10\ \mathrm {canonical\ variables}-2\times 1\ \mathrm {fcc}) /2 \nonumber \\= & {} 4\ne \mathscr {N}_{GU}. \end{aligned}$$
In order to cut the two extra degrees of freedom, we impose in addition to the first-class constraint (36) two supplementary first-class constraints,
$$\begin{aligned} -\partial _{i}P^{i}\approx 0,\qquad P^{0}\approx 0, \end{aligned}$$
and we obtain a first-class system with the right number of physical degrees of freedom,
$$\begin{aligned} \mathscr {N}_{GU}^{\prime }= & {} ( 10\ \mathrm {canonical\ variables}-2\times 3\ \mathrm {fcc}) /2 \nonumber \\= & {} 2=\mathscr {N}_{GU}. \end{aligned}$$
Since the number of physical degrees of freedom is the same for both first-class theories and for each of them we are able to identify a set of fundamental classical observables such that they are in one-to-one correspondence and possess the same Poisson brackets, the first-class theories are equivalent. As a result, the GU and the first-class systems remain equivalent also at the level of the Hamiltonian path integral quantization. This further implies that the first-class system is completely equivalent with the original second-class theory. Due to this equivalence we can replace the Hamiltonian path integral of MCS–Proca model with the one associated with the first-class system,
$$\begin{aligned} Z^{\prime }= & {} \int \mathscr {D}\left( A_{i},V^{\mu },p^{i},P_{\mu },\lambda 's\right) \mu \left( [A_{i}],[V^{\mu }]\right) \nonumber \\&\times \,\exp \bigg \{i\int d^{3}x\bigg [ \left( \partial _{0}A_{i}\right) p^{i}+\left( \partial _{0}V^{\mu }\right) P_{\mu } \nonumber \\&-\,\mathscr {H}_{GU}^{\prime }+\lambda ^{(1)}\partial _{i}P^{i}-\lambda ^{(2)}P^{0}\nonumber \\&+\,\frac{1}{m^{2}}\lambda \left( \partial _{i}p^{i}-b\varepsilon _{0ij}\partial ^{i}A^{j}+m\varepsilon _{0ij}\partial ^{i}V^{j}\right) \bigg ]\bigg \}. \end{aligned}$$
If we perform in the path integral the partial integrations over \(\left\{ V^{0},p_{i},P_{\mu },\lambda ^{(2)}\right\} \) and use the notations
$$\begin{aligned} \bar{A}_{0}=\frac{1}{m^{2}}\left( \lambda -\frac{ m}{2}\varepsilon _{0ij}\partial ^{[ i}V^{j]}\right) , \qquad \bar{V}_{0}=\lambda ^{(1)}, \end{aligned}$$
the argument of the exponential becomes
$$\begin{aligned} S_{GU}^{\prime }= & {} \int d^{3}x\bigg [ -\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[ i}A^{j]} \nonumber \\&-\,\frac{a}{2}\left( \partial _{0}A_{i}-\partial _{i}\bar{A}_{0}\right) \left( \partial ^{0}A^{i}-\partial ^{i}\bar{A}^{0}\right) \nonumber \\&-\,b\varepsilon _{0ij}\bar{A}^{0}\partial ^{i}A^{j}-b\varepsilon _{i0j}A^{i}\partial ^{0}A^{j}-b\varepsilon _{ij0}A^{i}\partial ^{j}\bar{A}^{0}\nonumber \\&+\,\frac{1}{4}\partial _{[ i}V_{j]}\partial ^{[i}V^{j]}\!+\!\frac{1}{2}\left( \partial _{0}V_{i}\!-\!\partial _{i}\bar{V}_{0}\right) \left( \partial ^{0}V^{i}\!-\!\partial ^{i}\bar{V}^{0}\right) \nonumber \\&+\,m\varepsilon _{0ij}\bar{A}^{0}\partial ^{i}V^{j}\!+\!m\varepsilon _{i0j}A^{i}\partial ^{0}V^{j}\!+\!m\varepsilon _{ij0}A^{i}\partial ^{j}\bar{V}^{0}\bigg ].\nonumber \\ \end{aligned}$$
The argument of the exponential takes a manifestly Lorentz-covariant form
$$\begin{aligned} S_{GU}^{\prime }= & {} \int d^{3}x\bigg ( -\frac{a}{4}\partial _{[ \mu } \bar{A}_{\nu ]}\partial ^{[ \mu }\bar{A}^{\nu ]}-b\varepsilon _{\mu \nu \rho }\bar{A}^{\mu }\partial ^{\nu }\bar{A}^{\rho }\nonumber \\&+\,\frac{1}{4}\partial _{[ \mu }\bar{V}_{\nu ]}\partial ^{[ \mu }\bar{V}^{\nu ]}+m\varepsilon _{\mu \nu \rho }\bar{A}^{\mu }\partial ^{\nu }\bar{V}^{\rho }\bigg ), \end{aligned}$$
where \(\bar{A}_{\mu }\equiv \left\{ \bar{A}_{0},A_{i}\right\} \) and \(\bar{V} _{\mu }\equiv \left\{ \bar{V}_{0},V_{i}\right\} \). The functional (45) associated with the first-class system describes a CS coupling between the two 1-forms, \(\bar{A}_{\mu }\) and \(\bar{V}_{\mu }\) [62].
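The first-class nature of the theory (45) is reflected in two independent Abelian gauge invariances (written here for arbitrary scalar gauge parameters \(\epsilon \) and \(\tilde{\epsilon }\); our notation),

$$\begin{aligned} \delta \bar{A}_{\mu }=\partial _{\mu }\epsilon ,\qquad \delta \bar{V}_{\mu }=\partial _{\mu }\tilde{\epsilon }, \end{aligned}$$

under which both Maxwell-type terms are strictly invariant, while the CS self-coupling and the CS cross-coupling vary only by total derivatives.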
The higher derivative MCS–Proca model
Hamiltonian analysis of the MECS–Proca model
The starting point of the approach developed in [52] consists in converting the original higher derivative theory into an equivalent first order theory by introducing new fields to account for higher derivative terms. To pass from the higher derivative theory to a first order one, we define the variables \(B_{\mu }\) as
$$\begin{aligned} B_{\mu }=\partial _{0}A_{\mu }, \end{aligned}$$
and enforce the Lagrangian constraints
$$\begin{aligned} B_{\mu }-\partial _{0}A_{\mu }=0, \end{aligned}$$
by means of the Lagrange multipliers \(\xi ^{\mu }\)
$$\begin{aligned} \mathscr {L}= & {} -\frac{a}{4}\partial _{[ i}A_{j]}\partial ^{[i}A^{j]}-\frac{a}{2}\left( B_{i}-\partial _{i}A_{0}\right) \left( B^{i}-\partial ^{i}A^{0}\right) \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{0}B^{0}+\partial _{k}\partial ^{k}A^{0}\right) \partial ^{i}A^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{0}B^{i}+\partial _{k}\partial ^{k}A^{i}\right) B^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{0}B^{i}+\partial _{k}\partial ^{k}A^{i}\right) \partial ^{j}A^{0}\nonumber \\&-\,\frac{m^{2}}{2}A_{\mu }A^{\mu }+\xi ^{\mu }\left( B_{\mu }-\partial _{0}A_{\mu }\right) . \end{aligned}$$
From the definitions of the canonical momenta \(\left\{ \Pi _{\mu },p^{\mu },\pi ^{\mu }\right\} \) conjugate to the fields \(\left\{ \xi ^{\mu },A_{\mu },B_{\mu }\right\} \)
$$\begin{aligned} \Pi _{\mu } =\frac{\partial L}{\partial \dot{\xi }^{\mu }},\quad p^{\mu } =\frac{\partial L}{\partial \dot{A}_{\mu }}, \quad \pi ^{\mu } =\frac{\partial L}{\partial \dot{B}_{\mu }}, \end{aligned}$$
we obtain the primary constraints,
$$\begin{aligned}&\Phi _{\mu }^{(\xi )}\equiv \Pi _{\mu }\approx 0, \end{aligned}$$
$$\begin{aligned}&\Phi ^{(A)\mu }\equiv p^{\mu }+\xi ^{\mu }\approx 0, \end{aligned}$$
$$\begin{aligned}&\Phi _{i}^{(B)}\equiv \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) \approx 0, \end{aligned}$$
$$\begin{aligned}&\Phi ^{(B)}\equiv \pi _{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\approx 0. \end{aligned}$$
If we write the primary constraints (52)–(53) in an equivalent form
$$\begin{aligned}&\Phi _{i}^{\prime (B)}\equiv \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) -\frac{1}{2b}\varepsilon _{0ij}\partial ^{j}\Pi ^{0}\approx 0, \nonumber \\ \end{aligned}$$
$$\begin{aligned}&\Phi ^{\prime (B)}\equiv \pi _{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}\Pi ^{j}\approx 0, \end{aligned}$$
the nonvanishing elements of the algebra of the primary constraints (pc) are
$$\begin{aligned}&\left[ \Phi _{\mu }^{(\xi )}(x),\Phi ^{(A)\nu }(y)\right] _{x_{0}=y_{0}}=-\delta _{\mu }^{\nu }\delta ^{2}(\mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi _{i}^{\prime (B)}(x),\Phi _{j}^{\prime (B)}(y)\right] _{x_{0}=y_{0}}=\frac{1}{b}\varepsilon _{0ij}\delta ^{2}(\mathbf {x}-\mathbf {y} ). \end{aligned}$$
The canonical Hamiltonian is given by
$$\begin{aligned} H_{c}= & {} \int d^{2}x\left. \left( \Pi _{\mu }\dot{\xi }^{\mu }+p^{\mu }\dot{A} _{\mu }+\pi ^{\mu }\dot{B}_{\mu }-\mathscr {L}\right) \right| _{\left\{ pc \right\} } \nonumber \\= & {} \int d^{2}x\bigg [ \frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}\!+\! \frac{a}{2}\left( B_{i}\!-\!\partial _{i}A_{0}\right) \left( B^{i}\!-\!\partial ^{i}A^{0}\right) \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{k}\partial ^{k}A^{0}\right) \partial ^{i}A^{j}-\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j} \nonumber \\&-\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}A^{i}\right) \partial ^{j}A^{0}-\xi ^{\mu }B_{\mu }+\frac{m^{2}}{2} A_{\mu }A^{\mu }\bigg ], \nonumber \\ \end{aligned}$$
and the total Hamiltonian is
$$\begin{aligned} H_{T}= & {} H_{c}+\int d^{2}x\bigg (u^{\left( \xi \right) \mu }\Phi _{\mu }^{(\xi )}+u_{\mu }^{\left( A\right) }\Phi ^{(A)\mu } \nonumber \\&+\,u^{\left( B\right) i}\Phi _{i}^{\prime (B)}+u^{\left( B\right) }\Phi ^{\prime (B)}\bigg ), \end{aligned}$$
where \(\left\{ u^{\left( \xi \right) \mu },u_{\mu }^{\left( A\right) },u^{\left( B\right) i},u^{\left( B\right) }\right\} \) are Lagrange multipliers.
The consistency of the primary constraints (50), (51), (54) leads to the determination of the Lagrange multipliers \(\left\{ u^{\left( \xi \right) \mu },u_{\mu }^{\left( A\right) },u^{\left( B\right) i}\right\} \), while the consistency of the remaining primary constraint \(\Phi ^{\prime (B)}\approx 0\) generates the secondary constraint,
$$\begin{aligned} \Phi _{II}^{(B)}\equiv \xi _{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}B^{j}\approx 0. \end{aligned}$$
The consistency of the secondary constraint yields the tertiary constraint,
$$\begin{aligned} \Phi _{III}^{(B)}\equiv \partial _{i}\xi ^{i}+m^{2}A_{0}-\frac{1}{2b} \varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j}\approx 0. \end{aligned}$$
Requiring the conservation of the constraint \(\Phi _{III}^{(B)}\approx 0\), we get the quartic constraint,
$$\begin{aligned} \Phi _{IV}^{(B)}\equiv m^{2}\partial _{i}A^{i}+m^{2}B_{0}\approx 0. \end{aligned}$$
The consistency condition of the quartic constraint \(\Phi _{IV}^{(B)}\approx 0\) determines the multiplier \(u^{\left( B\right) }\), and no new constraints are produced.
The constraints (50), (51), (54), (55), and (60)–(62) are second-class and irreducible. The nonzero Poisson brackets among the constraint functions read
$$\begin{aligned}&\left[ \Phi _{\mu }^{( \xi ) }( x),\Phi ^{( A) \nu }( y) \right] _{x_{0}=y_{0}}=-\delta _{\mu }^{\nu }\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi _{\mu }^{( \xi ) }( x),\Phi _{II}^{( B) }( y) \right] _{x_{0}=y_{0}}=-\delta _{\mu }^{0}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi _{\mu }^{( \xi ) }( x),\Phi _{III}^{( B) }( y) \right] _{x_{0}=y_{0}} =\delta _{\mu }^{i}\partial _{i}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi ^{( A) 0 }( x),\Phi _{III}^{( B) }( y) \right] _{x_{0}=y_{0}} =-m^{2}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi ^{( A) i }( x),\Phi _{III}^{( B) }(y) \right] _{x_{0}=y_{0}} =\frac{1}{2b}\varepsilon ^{0ij}\partial _{k}\partial ^{k}\partial _{j} \delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi ^{(A) \mu }(x),\Phi _{IV}^{(B) }(y) \right] _{x_{0}=y_{0}} =m^{2}\delta _{i}^{\mu }\partial ^{i}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi ^{\prime (B) }(x),\Phi _{IV}^{(B) }(y) \right] _{x_{0}=y_{0}} =-m^{2}\delta ^{2}(\mathbf {x}- \mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi _{i}^{\prime (B) }(x),\Phi _{j}^{\prime (B) }(y) \right] _{x_{0}=y_{0}} =\frac{1}{b}\varepsilon _{0ij}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \Phi _{i}^{\prime (B) }(x),\Phi _{II}^{(B) }(y) \right] _{x_{0}=y_{0}} =\frac{1}{b}\varepsilon _{0ij}\partial ^{j}\delta ^{2}( \mathbf {x}-\mathbf {y}) . \end{aligned}$$
The number of physical degrees of freedom of the original system is equal to
$$\begin{aligned} \bar{\mathscr {N}}_{O}= & {} ( 18\ \mathrm {canonical\ variables}-12\ \mathrm {scc }) /2 \nonumber \\= & {} 3. \end{aligned}$$
We notice that the number of physical degrees of freedom of the extended model is higher than the number of physical degrees of freedom of the MCS–Proca model,
$$\begin{aligned} \bar{\mathscr {N}}_{O}>{\mathscr {N}}_{O}. \end{aligned}$$
This result was expected due to the higher-derivative nature of the MECS–Proca model. In addition, the number of physical degrees of freedom of the MECS–Proca model coincides with that of the ECSPMG theory.
The analysis of the excitations and mass spectrum of the MECS–Proca model reveals that if the sign of the Maxwell term is the usual one, then the excitation masses are complex; with the wrong sign, the reality of the excitation masses is restored for a known condition satisfied by the parameters \(b\) and \(m\), but the model faces ghost problems. The action (1) can be rewritten in terms of the transverse operator \( \theta _{\mu \nu }=\sigma _{\mu \nu }-\frac{\partial _{\mu }\partial _{\nu }}{\square } \), the longitudinal operator \( \omega _{\mu \nu }=\frac{\partial _{\mu }\partial _{\nu }}{\square } \), and the operator associated with the topological term, \(S_{\mu \nu }=\varepsilon _{\mu \rho \nu }\partial ^{\rho }\), as
$$\begin{aligned} S=\int d^{3}x\frac{1}{2}A^{\mu }{\mathscr {O}}_{\mu \nu }A^{\nu }, \end{aligned}$$
where \({\mathscr {O}}_{\mu \nu }= \left( a \square -m^{2}\right) \theta _{\mu \nu }-m^{2}\omega _{\mu \nu }+\frac{1}{b}\square S_{\mu \nu }\). The propagator in the momentum space for the MECS–Proca model is
$$\begin{aligned} \mathscr {P}_{\mu \nu }= & {} -\frac{ak^{2}+m^{2}}{\left( ak^{2}+m^{2}\right) ^{2}- \frac{1}{b^{2}}k^{6}}\theta _{\mu \nu }\nonumber \\&-\,\frac{1}{m^{2}}\omega _{\mu \nu }+ \frac{\frac{1}{b}k^{2}}{\left( ak^{2}+m^{2}\right) ^{2}-\frac{1}{b^{2}}k^{6}} S_{\mu \nu }. \end{aligned}$$
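As a cross-check, the inversion \({\mathscr {O}}_{\mu \nu }\) leading to this propagator can be sketched symbolically. The snippet below is an illustration of ours (not part of the original derivation) and assumes the standard operator algebra \(\theta ^{2}=\theta \), \(\omega ^{2}=\omega \), \(\theta \omega =\omega \theta =0\), \(S\theta =\theta S=S\), \(S\omega =\omega S=0\), \(S^{2}=-\square \theta \), with \(\square \rightarrow -k^{2}\) in momentum space:

```python
import sympy as sp

k2, m, b = sp.symbols('k2 m b', positive=True)  # k2 stands for k^2
a = sp.Symbol('a')
box = -k2  # d'Alembertian in momentum space

# an operator X = t*theta + w*omega + s*S is stored as the triple (t, w, s);
# the product rule encodes theta^2=theta, omega^2=omega, theta*omega=0,
# S*theta = theta*S = S, S*omega = 0 and S*S = -box*theta
def compose(X, Y):
    t1, w1, s1 = X
    t2, w2, s2 = Y
    return (t1*t2 - box*s1*s2, w1*w2, t1*s2 + s1*t2)

O = (a*box - m**2, -m**2, box/b)       # wave operator of the action
D = (a*k2 + m**2)**2 - k2**3/b**2      # denominator of the propagator
P = (-(a*k2 + m**2)/D, -1/m**2, (k2/b)/D)

# O composed with P must give the identity theta + omega
assert all(sp.simplify(u - v) == 0 for u, v in zip(compose(O, P), (1, 1, 0)))
```

The composition of the \(\theta \)-, \(\omega \)- and \(S\)-coefficients reproduces the identity, confirming the three components of the propagator term by term.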
Taking into consideration that only the \(\theta \)-component of the propagator,
$$\begin{aligned} \mathscr {P}^{\left( \theta \right) }=-\frac{ak^{2}+m^{2}}{\left( ak^{2}+m^{2}\right) ^{2}-\frac{1}{b^{2}}k^{6}}, \end{aligned}$$
contributes to the current–current transition amplitude, we study the residues at each simple pole of \(\mathscr {P}^{\left( \theta \right) }\) [39, 40].
We analyze the roots of the cubic equation
$$\begin{aligned} -\frac{1}{b^{2}}\left( k^{2}\right) ^{3}+a^{2}\left( k^{2}\right) ^{2}+2am^{2}k^{2}+m^{4}=0, \end{aligned}$$
whose discriminant is
$$\begin{aligned} D=4\frac{m^{8}}{b^{4}}\left( -a^{3}\frac{b^{2}}{m^{2}}-\frac{27}{4} \right) . \end{aligned}$$
For \(a=1\) (Maxwell's term with the usual sign) the discriminant is less than zero and the equation has one real root and two complex conjugate roots. Also, for \(a=-1\) (Maxwell's term with the wrong sign) the roots of Eq. (77) are complex unless \(\frac{b^{2}}{m^{2}}\ge \frac{27}{4}\). In the limit case \(\frac{b^{2}}{m^{2}}=\frac{27}{4}\) two of the roots coalesce and the roots are
$$\begin{aligned} k_{1}^{2}=k_{2}^{2}=4k_{3}^{2}=3m^{2}. \end{aligned}$$
Therefore, if \(a=-1\) and \(\frac{b^{2}}{m^{2}}>\frac{27}{4}\) the equation has three distinct real roots. In [42] (see also [43]) Eq. (77) for \(a=-1\) was obtained from the poles of the propagator of the ECSPMG model, where it was noted that if \(\frac{b^{2}}{ m^{2}}>\frac{27}{4}\) then the three distinct real roots are all positive. The absence of tachyons in a theory is guaranteed by the existence of only positive poles, and consequently the MECS–Proca model is free of tachyons for \(a=-1\) and \(\frac{b^{2}}{m^{2}}>\frac{27}{4}\). Analyzing the signs of the residues at each simple pole of the \(\theta \)-component of the propagator, we see that not all residues have the same sign. Since these signs tell us whether ghost excitations arise, the MECS–Proca model is plagued by ghosts. We notice that the same problems of the ECSPMG theory concerning the presence of ghost and tachyon excitations are also present here, in the MECS–Proca model.
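These statements can be checked numerically; the following sketch (our illustration, with \(m^{2}=1\) and sample values of \(b^{2}\)) finds the roots of Eq. (77):

```python
import numpy as np

# roots of Eq. (77) in the variable x = k^2:
#   -(1/b^2) x^3 + a^2 x^2 + 2 a m^2 x + m^4 = 0
def pole_roots(a, b2, m2=1.0):
    return np.roots([-1.0 / b2, a * a, 2.0 * a * m2, m2 * m2])

# a = -1 with b^2/m^2 = 9 > 27/4: three distinct real, positive roots
r = pole_roots(-1, 9.0)
assert np.all(np.abs(r.imag) < 1e-8) and np.all(r.real > 0)

# limit case b^2/m^2 = 27/4: double root 3 m^2 and simple root 3 m^2 / 4
r = np.sort(pole_roots(-1, 27.0 / 4.0).real)
assert np.allclose(r, [0.75, 3.0, 3.0], atol=1e-4)

# a = +1 (usual sign of the Maxwell term): one real root and a complex pair
r = pole_roots(+1, 9.0)
assert np.sum(np.abs(r.imag) > 1e-8) == 2
```

The three cases reproduce, respectively, the tachyon-free regime, the coalescence of two roots at \(\frac{b^{2}}{m^{2}}=\frac{27}{4}\), and the complex masses obtained for the usual sign of the Maxwell term.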
The construction of the first-class system
Imposing the requirement that the constraints (50)–(51) hold strongly and eliminating the unphysical sector \(\{\xi ^{\mu },\Pi _{\mu }\}\), so that the reduced phase space is locally parameterized by \(\left\{ A_{\mu },B_{\mu },p^{\mu },\pi ^{\mu }\right\} \), we arrive at a system subject to the second-class constraints,
$$\begin{aligned} \chi _{i}^{( 1) }\equiv & {} \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) \approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{( 1) }\equiv & {} \pi _{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{( 2) }\equiv & {} -p_{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}B^{j}\approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{( 3) }\equiv & {} -\partial _{i}p^{i}+m^{2}A_{0}-\frac{1}{2b} \varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j} \approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{( 4) }\equiv & {} m^{2}\partial _{i}A^{i}+m^{2}B_{0}\approx 0, \end{aligned}$$
while the canonical Hamiltonian (58) takes the form
$$\begin{aligned} H_{c}\!= & {} \!\int d^{2}x\bigg [ \frac{a}{4}\partial _{ [i}A_{j]}\partial ^{ [i}A^{j]}\!+\!\frac{a}{2}\left( B_{i}\!-\!\partial _{i}A_{0}\right) \left( B^{i}-\partial ^{i}A^{0}\right) \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{k}\partial ^{k}A^{0}\right) \partial ^{i}A^{j}-\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j}\nonumber \\&-\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}A^{i}\right) \partial ^{j}A^{0} +p^{\mu }B_{\mu }+\frac{m^{2}}{2}A_{\mu }A^{\mu }\bigg ]. \nonumber \\ \end{aligned}$$
The nontrivial Poisson brackets between the constraint functions are listed below:
$$\begin{aligned}&\left[ \chi _{i}^{( 1) }( x),\chi _{j}^{(1) }( y) \right] _{x_{0}=y_{0}} = \frac{1}{b}\varepsilon _{0ij}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \chi _{i}^{( 1) }( x),\chi ^{( 2) }( y) \right] _{x_{0}=y_{0}} =\frac{1}{b}\varepsilon _{0ij}\partial ^{j}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \chi ^{( 1) }( x),\chi ^{( 4) }( y) \right] _{x_{0}=y_{0}} =-m^{2}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \chi ^{( 2) }( x),\chi ^{( 3) }( y) \right] _{x_{0}=y_{0}} =m^{2}\delta ^{2}( \mathbf {x}-\mathbf {y}), \end{aligned}$$
$$\begin{aligned}&\left[ \chi ^{( 3) }( x),\chi ^{( 4) }( y) \right] _{x_{0}=y_{0}} =-m^{2}\partial _{k}\partial ^{k}\delta ^{2}( \mathbf {x}-\mathbf {y}). \end{aligned}$$
If we make the linear combination of the constraints \(\chi ^{( 2) }\approx 0\) and \(\chi _{i}^{( 1) }\approx 0\)
$$\begin{aligned} \bar{\chi } ^{( 2) }=\chi ^{( 2) }+\partial ^{i}\chi _{i}^{( 1) }\approx 0, \end{aligned}$$
the matrix of the Poisson brackets among the constraint functions becomes
$$\begin{aligned} C_{\alpha _{0}\beta _{0}}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \frac{1}{b}\varepsilon _{0ij} &{} \mathbf {0} &{} \mathbf {0} &{} \mathbf {0} &{} \mathbf {0} \\ \mathbf {0} &{} 0 &{} 0 &{} 0 &{} -m^{2} \\ \mathbf {0} &{} 0 &{} 0 &{} m^{2} &{} 0 \\ \mathbf {0} &{} 0 &{} -m^{2} &{} 0 &{} -m^{2}\partial _{k}\partial ^{k} \\ \mathbf {0} &{} m^{2} &{} 0 &{} m^{2}\partial _{k}\partial ^{k} &{} 0 \end{array}\right) . \end{aligned}$$
We notice that the constraints \(\chi _{i}^{(1)}\approx 0\) generate a submatrix (of the matrix of the Poisson brackets among the constraint functions) of maximum rank; therefore, they form an independent subset of second-class constraints. Thus, in the sequel we examine from the point of view of the GU method only the constraints \(\chi _{A}\equiv \left\{ \chi ^{( 1) },\bar{\chi } ^{( 2) },\chi ^{( 3) },\chi ^{( 4) }\right\} \approx 0\).
The second-class constraint set \(\chi _{A}\approx 0\) cannot be straightforwardly separated into two subsets such that one of them is first-class and the other provides canonical gauge conditions for the first-class subset. To make this possible, we write the constraint set in an equivalent form
$$\begin{aligned} \chi _{A}^{\prime }=E_{AB}\chi _{B}, \end{aligned}$$
where \(E_{AB}\) is an invertible matrix
$$\begin{aligned} E_{AB}= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \frac{\partial _{k}\partial ^{k}}{m^{2}} &{} 0 &{} -\frac{1}{m^{2}} &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} \frac{1}{m^{2}} \end{array}\right) . \end{aligned}$$
The concrete form of the constraints \(\chi _{A}^{\prime }\approx 0\) is
$$\begin{aligned} \chi ^{\prime ( 1) }\equiv & {} \frac{1}{m^{2}}\left( \partial _{i}p^{i}-m^{2}A_{0}+\partial _{k}\partial ^{k}\pi _{0}\right) \approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{\prime ( 2) }\equiv & {} -p_{0}+\partial _{i}\pi ^{i}\approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{\prime ( 3) }\equiv & {} -\pi _{0}+\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\approx 0, \end{aligned}$$
$$\begin{aligned} \chi ^{\prime ( 4) }\equiv & {} \partial _{i}A^{i}+B_{0}\approx 0, \end{aligned}$$
with the matrix of the Poisson brackets among the constraint functions expressed by
$$\begin{aligned} C_{AB}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 1 &{} 0 &{} 0 \\ -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} -1 &{} 0 \end{array}\right) . \end{aligned}$$
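The passage from \(C_{\alpha _{0}\beta _{0}}\) to \(C_{AB}\) can be verified schematically with computer algebra, treating the Laplacian \(\partial _{k}\partial ^{k}\) as a commuting symbol \(\Delta \) and suppressing the \(\delta \)-function smearing (an illustration of ours, not part of the original text):

```python
import sympy as sp

m, Delta = sp.symbols('m Delta')  # Delta stands for the Laplacian

# Poisson-bracket block of {chi^(1), chibar^(2), chi^(3), chi^(4)}
C = sp.Matrix([
    [0,     0,     0,            -m**2],
    [0,     0,     m**2,          0],
    [0,    -m**2,  0,            -m**2 * Delta],
    [m**2,  0,     m**2 * Delta,  0],
])

# invertible transformation E_{AB}
E = sp.Matrix([
    [Delta / m**2, 0, -1 / m**2, 0],
    [0,            1,  0,        0],
    [-1,           0,  0,        0],
    [0,            0,  0,        1 / m**2],
])

# the Laplacian is transpose-even, so the transformed matrix is E C E^T,
# which should reduce to the constant symplectic-like matrix C_{AB}
Cprime = (E * C * E.T).applyfunc(sp.simplify)
assert Cprime == sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0],
                            [0, 0, 0, 1], [0, 0, -1, 0]])
```

All \(\Delta \)-dependent entries cancel, leaving the constant, invertible matrix of the equivalent constraint set.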
Examining the structure of the constraint set (95)–(98) we notice that the constraints \(\chi ^{\prime ( 1) }\approx 0\) and \(\chi ^{\prime ( 2) }\approx 0\) retain a remnant of the structure of the constraint set of the MCS–Proca model (3)–(4), while the constraints \(\chi ^{\prime ( 3) }\approx 0\) and \(\chi ^{\prime ( 4) }\approx 0\) have no counterparts. It was proved in Ref. [20] that, for a dynamical system subject to the second-class constraints \(\left\{ \chi _{\alpha _{0}}\approx 0\right\} _{\alpha _{0}=\overline{1,2M_{0}}}\), the subsets \(\left\{ \chi _{1},\chi _{2},\ldots ,\chi _{M_{0}}\right\} \) and \(\left\{ \chi _{1},\chi _{2},\ldots ,\chi _{M_{0}-1},\chi _{M_{0}+1}\right\} \) of the full set of constraints are first-class sets on \(\Sigma _{2M_{0}}\). According to the above, we consider the subset \(G_{a}\equiv \left\{ \chi ^{\prime ( 1) },\ \chi ^{\prime ( 3) }\right\} \) as the first-class constraint set and the remaining constraints, \(C_{a}\equiv \left\{ \chi ^{\prime ( 2) },\ \chi ^{\prime ( 4) }\right\} \), as the corresponding canonical gauge conditions.
Starting from the canonical Hamiltonian of the original second-class system we construct a first-class Hamiltonian with respect to the first-class subset in two steps [22]. First, we construct the first-class Hamiltonian with respect to the constraint \(G_{1}\approx 0\)
$$\begin{aligned} H_{GU}^{1}= & {} H_{c}-C_{1}\left[ G_{1},H_{c}\right] +\frac{1}{2}C_{1}C_{1} \left[ G_{1}\left[ G_{1},H_{c}\right] \right] -\cdots \nonumber \\= & {} H_{c}+\int d^{2}x\bigg [ \left( - p_{0}+\partial _{i}\pi ^{i}\right) \left( \partial _{k}A^{k}+B_{0}\right) \nonumber \\&+\,\frac{1}{m^{2}}\left( - p_{0}+\partial _{i}\pi ^{i}\right) \partial _{k}\partial ^{k}\left( p_{0}+\frac{1}{2b}\varepsilon _{0lm}\partial ^{l}B^{m}\right) \nonumber \\&+\,\frac{1}{2m^{2}}\left( -p_{0}+\partial _{i}\pi ^{i}\right) \partial _{k}\partial ^{k}\left( -p_{0}+\partial _{j}\pi ^{j}\right) \bigg ],\nonumber \\ \end{aligned}$$
and then, with this at hand, we obtain the first-class Hamiltonian with respect to the constraint \(G_{2}\approx 0\)
$$\begin{aligned}&H_{GU} =H_{GU}^{1}-C_{2}\left[ G_{2},H_{GU}^{1}\right] \nonumber \\&\quad +\,\frac{1}{2}C_{2}C_{2}\left[ G_{2}\left[ G_{2},H_{GU}^{1}\right] \right] -\cdots \nonumber \\&\quad =H_{GU}^{1}-\int d^{2}x\left[ \left( \partial _{i}A^{i}+B_{0}\right) \partial ^{j}\left( \pi _{j}+\frac{1}{2b}\varepsilon _{0jk}B^{k}\right) \right] . \nonumber \\ \end{aligned}$$
The Hamiltonian gauge algebra relations are given by
$$\begin{aligned}{}[G_{1},H_{GU}] =[G_{2},H_{GU}] =0. \end{aligned}$$
The equations of motion generated by the first-class Hamiltonian and the first-class constraints read
$$\begin{aligned} \dot{A}_{0}= & {} -\frac{1}{m}\partial _{i}\left[ mA^{i}+\frac{1}{m}\partial ^{i}\left( p_{0}+\frac{1}{2b}\varepsilon _{0jk}\partial ^{j}B^{k}\right) \right] , \end{aligned}$$
$$\begin{aligned} \dot{A}_{i}= & {} B_{i}-\frac{1}{m^{2}}\partial _{i}\Lambda ^{1}, \end{aligned}$$
$$\begin{aligned} \dot{p}^{0}= & {} -\frac{a}{2}\partial _{i}\left( B^{i}-\partial ^{i}A^{0}\right) +\frac{3}{4b}\varepsilon ^{0ij}\partial _{k}\partial ^{k}\partial _{i}A_{j}\nonumber \\&+\,\frac{1}{2}\partial _{i}p^{i}-m^{2}A^{0}+\Lambda ^{1},\end{aligned}$$
$$\begin{aligned} \dot{p}^{i}= & {} a\partial _{j}\partial ^{[j}A^{i]}-\frac{1}{2b} \varepsilon ^{0ij}\partial _{k}\partial ^{k}B_{j}+\frac{1}{b}\varepsilon ^{0ij}\partial _{k}\partial ^{k}\partial _{j}A_{0} \nonumber \\&-\,m\left[ mA^{i}+\frac{1}{m}\partial ^{i}\left( p_{0}+\frac{1}{2b} \varepsilon _{0jk}\partial ^{j}B^{k}\right) \right] \nonumber \\&-\,\frac{1}{2b}\varepsilon ^{0ij}\partial _{j}\Lambda ^{2}, \end{aligned}$$
$$\begin{aligned} \dot{B}_{0}= & {} -\Lambda ^{2}+\frac{1}{m^{2}}\partial _{k}\partial ^{k}\Lambda ^{1}, \end{aligned}$$
$$\begin{aligned} \dot{B}_{i}= & {} -ab\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) - \frac{1}{2}\partial _{k}\partial ^{k}A_{i}\nonumber \\&-\,\frac{1}{2}\partial _{i}B_{0} -b\varepsilon _{0ij}p^{j}\nonumber \\&-\,\partial _{i}\partial ^{j}\left[ mA_{j}+\frac{1}{ m}\partial _{j}\left( p_{0}+\frac{1}{2b}\varepsilon _{0kl}\partial ^{k}B^{l}\right) \right] , \end{aligned}$$
$$\begin{aligned} \dot{\pi }_{0}= & {} \frac{1}{2b}\varepsilon _{0ij}\partial ^{i}B^{j}, \end{aligned}$$
where \(\Lambda ^{1}\) and \(\Lambda ^{2}\) are some arbitrary functions. Under the gauge-fixing conditions
$$\begin{aligned}&B_{0}+\partial _{i}A^{i}\approx 0,\end{aligned}$$
$$\begin{aligned}&p_{0}+\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}B^{j}\approx 0 \end{aligned}$$
(\(\Lambda ^{1}=0\) and \(\Lambda ^{2}=\partial _{i}B^{i}\)), Eqs. (103)–(109) reduce to the equations of motion for the MECS–Proca model.
The number of physical degrees of freedom of the dynamical system with the phase space locally parameterized by \(\left\{ A_{\mu },\ B_{\mu },\ p^{\mu },\ \pi ^{\mu }\right\} \), subject to the second-class constraints (80) and first-class constraints (95) and (97) is equal to
$$\begin{aligned} \bar{\mathscr {N}}_{GU}= & {} ( 12\ \mathrm {canonical\ variables}-2\mathrm { scc} -2\times 2\ \mathrm {fcc}) /2 \nonumber \\= & {} 3=\bar{\mathscr {N}}_{O}. \end{aligned}$$
Stückelberg coupling
Based on the equivalence between the first-class system and the original second-class theory, we replace the Hamiltonian path integral of the MECS–Proca model with that of the first-class system. The Hamiltonian path integral of the first-class system constructed above reads
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{\mu },B_{\mu },p^{\mu },\pi ^{\mu },\lambda ^{( 1) },\lambda ^{( 2) }\right) \mu \left( [A_{\mu }],[B_{\mu }]\right) \nonumber \\&\times \,\delta \left[ \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) \right] \nonumber \\&\times \,\mathrm {det}^{1/2}\left( \frac{1}{b} \varepsilon _{0ij}\delta ( x-y) \right) \nonumber \\&\times \,\exp \bigg \{ i\int d^{3}x\bigg [ \left( \partial _{0}A_{\mu }\right) p^{\mu }+\left( \partial _{0}B_{\mu }\right) \pi ^{\mu }-\mathscr {H}_{GU}\nonumber \\&-\,\frac{1}{m^{2}}\lambda ^{(1) }\left( \partial _{i}p^{i}-m^{2}A_{0}+\partial _{k}\partial ^{k}\pi _{0}\right) \nonumber \\&-\,\lambda ^{(2) }\left( -\pi _{0}+\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\right) \bigg ] \bigg \}, \end{aligned}$$
where the integration measure '\(\mu \left( [ A_{\mu }],[B_{\mu }]\right) \)' includes some suitable canonical gauge conditions. Performing partial integration over the momenta \(\pi _{i}\) in the path integral, we arrive at the argument of the exponential in the form
$$\begin{aligned} S_{GU}= & {} \int d^{3}x\bigg \{ \left( \partial _{0}A_{\mu }\right) p^{\mu }+\left( \partial _{0}B_{0}\right) \pi ^{0}\nonumber \\&-\,\frac{1}{2b}\left( \partial _{0}B_{i}\right) \varepsilon ^{0ij}\left( B_{j}-\partial _{j}A_{0}\right) -\frac{a}{4}\partial _{[ i}A_{j]}\partial ^{[ i}A^{j]}\nonumber \\&-\,\frac{a}{2}\left( B_{i}-\partial _{i}A_{0}\right) \left( B^{i}-\partial ^{i}A^{0}\right) \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{k}\partial ^{k}A^{0}\right) \partial ^{i}A^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) \left( B^{j}-\partial ^{j}A^{0}\right) \nonumber \\&-\,\frac{1}{2}\left[ mA_{i}+\frac{1}{m}\partial _{i}\left( p_{0}+ \frac{1}{2b}\varepsilon _{0jk}\partial ^{j}B^{k}\right) \right] \nonumber \\&\times \,\left[ mA^{i}+\frac{1}{m}\partial ^{i}\left( p_{0}+\frac{1}{2b}\varepsilon _{0ln}\partial ^{l}B^{n}\right) \right] \nonumber \\&-\,p^{i}B_{i}-\frac{m^{2}}{2}A_{0}A^{0}+\frac{1}{2b}B^{0}\varepsilon _{0jk}\partial ^{j}B^{k}\nonumber \\&-\,\frac{1}{m^{2}}\lambda ^{(1) }\left( \partial _{i}p^{i}-m^{2}A_{0}+\partial _{k}\partial ^{k}\pi _{0}\right) \nonumber \\&-\,\lambda ^{(2) }\left( -\pi _{0}+\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\right) \bigg \}. \end{aligned}$$
Integration over \(p^{i}\) leads to a \(\delta \) function of the form
$$\begin{aligned} \delta \left( \partial _{0}A_{i}-B_{i}+\frac{1}{m^{2}}\partial _{i}\lambda ^{(1) }\right) , \end{aligned}$$
which permits calculation of the integral over \(B_{i}\). Performing the partial integrations over the Lagrange multiplier \(\lambda ^{(2) }\) and the momentum \(\pi _{0}\), the argument of the exponential becomes
$$\begin{aligned} S_{GU}= & {} \int d^{3}x\bigg \{ \left( \partial _{0}A_{0}\right) \left( p^{0}+\frac{ 1}{2b}\varepsilon ^{0ij}\partial _{i}\partial _{0}A_{j}\right) \nonumber \\&-\,\frac{a}{4}\partial _{[ i}A_{j]}\partial ^{[ i}A^{j]}-\frac{a}{2}\left[ \partial _{0}A_{i}-\partial _{i}\left( A_{0}-\frac{1}{m^{2}}\lambda ^{(1) }\right) \right] \nonumber \\&\times \,\left[ \partial ^{0}A^{i}-\partial ^{i}\left( A^{0}-\frac{1}{m^{2}}\lambda ^{(1) }\right) \right] \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0ij}\partial _{\lambda }\partial ^{\lambda }\left( A^{0}-\frac{1}{m^{2}}\lambda ^{(1) }\right) \partial ^{i}A^{j}\nonumber \\&+\,\frac{1 }{2b}\varepsilon _{i0j}\left( \partial _{\lambda }\partial ^{\lambda }A^{i}\right) \partial ^{0}A^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{\lambda }\partial ^{\lambda }A^{i}\right) \partial ^{j}\left( A^{0}-\frac{1}{m^{2}}\lambda ^{(1) }\right) \nonumber \\&-\,\frac{1}{2}\left[ mA_{i}+\frac{1}{m}\partial _{i}\left( p_{0}+\frac{1}{2b}\varepsilon _{0jk}\partial ^{j}\partial ^{0}A^{k}\right) \right] \nonumber \\&\times \,\left[ mA^{i}+\frac{1}{m}\partial ^{i}\left( p_{0}+\frac{1}{2b} \varepsilon _{0ln}\partial ^{l}\partial ^{0}A^{n}\right) \right] \nonumber \\&-\,\frac{m^{2}}{2}A_{0}A^{0}+\lambda ^{(1) }A_{0}\bigg \}. \end{aligned}$$
Using the notation
$$\begin{aligned} \varphi =-\frac{1}{m}\left( p^{0}+\frac{1}{2b}\varepsilon ^{0ij}\partial _{i}\partial _{0}A_{j}\right) ,\quad \bar{A}_{0}=A_{0}-\frac{1}{m^{2}}\lambda ^{( 1) }, \end{aligned}$$
and integrating over the Lagrange multiplier \(\lambda ^{( 1) }\), the argument of the exponential from the Hamiltonian path integral takes a manifestly Lorentz-covariant form,
$$\begin{aligned} S_{GU}= & {} \int d^{3}x\bigg [ -\frac{a}{4}\partial _{[ \mu }\bar{A}_{\nu ]}\partial ^{[ \mu }\bar{A}^{\nu ]}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{\mu \nu \rho }\left( \partial _{\lambda }\partial ^{\lambda }\bar{A}^{\mu }\right) \partial ^{\nu }\bar{A}^{\rho }\nonumber \\&-\,\frac{1}{2}\left( \partial _{\mu }\varphi -m\bar{A}_{\mu }\right) \left( \partial ^{\mu }\varphi -m\bar{A}^{\mu }\right) \bigg ], \end{aligned}$$
where \(\bar{A}_{\mu }=\left\{ \bar{A}_{0},A_{i}\right\} \), and describes a Stückelberg coupling between the scalar field \(\varphi \) and the 1-form \(\bar{A}_{\mu }\). It is obvious that (118) is a higher-derivative extension (involving the CS term) of the result obtained in the previous section. Similarly to the MCS–Proca model, we find that the Stückelberg scalar corresponds to a combination of the original fields \(A_{i}\) and the momentum \(p^{0}\).
The canonical analysis of the model described by the Lagrangian action (118) displays the constraints (the phase space is locally parameterized by \(\left\{ A_{\mu },p^{\mu },B_{\mu },\pi ^{\mu },\varphi ,p\right\} \))
$$\begin{aligned} \chi _{i}\equiv & {} \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}-\partial ^{j}A^{0}\right) \approx 0 \end{aligned}$$
$$\begin{aligned} G_{1}\equiv & {} \pi _{0}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{i}A^{j}\approx 0, \end{aligned}$$
$$\begin{aligned} G_{2}\equiv & {} -p_{0}+\partial _{i}\pi ^{i}\approx 0, \end{aligned}$$
$$\begin{aligned} G_{3}\equiv & {} -\partial _{i}p^{i}+mp-\frac{1}{2b}\varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j}\approx 0, \end{aligned}$$
$$\begin{aligned} H= & {} \int d^{2}x\bigg [ \frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}+\frac{a}{2}\left( B_{i}-\partial _{i}A_{0}\right) \left( B^{i}-\partial ^{i}A^{0}\right) \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{k}\partial ^{k}A^{0}\right) \partial ^{i}A^{j}-\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j} \nonumber \\&-\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}A^{i}\right) \partial ^{j}A^{0}-p^{\mu }B_{\mu }-\frac{1}{2}p^{2} \nonumber \\&+\,mA^{0}p+\frac{1}{2}\left( \partial _{i}\varphi -mA_{i}\right) \left( \partial ^{i}\varphi -mA^{i}\right) \bigg ]. \end{aligned}$$
The constraints (119) are second-class and the other three constraints are first-class. In order to recover the MECS–Proca model we choose the gauge conditions
$$\begin{aligned} C^{1} \equiv \varphi \approx 0, \quad C^{2} \equiv A_{0}\approx 0, \quad C^{3} \equiv B_{0}\approx 0 \end{aligned}$$
such that \(\left\{ G_{\Delta },C^{\Delta '}\right\} _{\Delta ,\Delta ' =\overline{1,3}}\) form a second-class constraint set and the Hamiltonian path integral is convergent. The Hamiltonian path integral of the gauge system (118) is given by
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{\mu },p^{\mu },B_{\mu },\pi ^{\mu },\varphi ,p\right) \delta \left( \chi _{i}\right) \delta \left( G_{\Delta }\right) \delta \left( C^{\Delta '}\right) \nonumber \\&\times \, \exp \bigg \{ i\int d^{3}x\bigg [ \left( \partial _{0}A_{\mu }\right) p^{\mu }+\left( \partial _{0}B_{\mu }\right) \pi ^{\mu }\nonumber \\&+\,\left( \partial _{0}\varphi \right) p-\mathscr {H}\bigg ] \bigg \}. \end{aligned}$$
We integrate over the momenta \(\left\{ \pi _{i},\pi _{0},p_{0}\right\} \) and the fields \(\{\varphi ,A_{0}\}\) and represent \(\delta \left( -\partial _{i}p^{i}+mp-\frac{1}{2b}\varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j}\right) \) as a functional integral,
$$\begin{aligned}&\int \mathscr {D}\lambda \exp \bigg \{ -i\int d^{3}x\lambda \bigg ( -\partial _{i}p^{i}+mp \nonumber \\&\quad -\,\frac{1}{2b}\varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j}\bigg ) \bigg \}. \end{aligned}$$
The path integral then takes the form
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{i},p^{i},B_{\mu },p,\lambda \right) \delta \left( C^{3}\right) \exp \left\{ i\int d^{3}x\right. \nonumber \\&\times \,\bigg [ \left( \partial _{0}A_{i}\right) p^{i}-\frac{1}{2b}\varepsilon ^{0ij}\left( \partial _{0}B_{i}\right) B_{j}\!+\!\frac{1}{2b}\varepsilon ^{0ij}\left( \partial ^{0}B_{0}\right) \partial _{i}A_{j}\nonumber \\&-\,\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}- \frac{a}{2}B_{i}B^{i}+\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j} \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0ij}B^{0}\partial ^{i}B^{j}-p_{i}B^{i}+\frac{1}{ 2}p^{2}-\frac{m^{2}}{2}A_{i}A^{i} \nonumber \\&\left. -\,\lambda \left( -\partial _{i}p^{i}+mp-\frac{1}{2b} \varepsilon _{0ij}\partial _{k}\partial ^{k}\partial ^{i}A^{j}\right) \bigg ] \right\} . \end{aligned}$$
Integration over the momenta \(p^{i}\) leads to a \(\delta \) function of the form
$$\begin{aligned} \delta \left( \partial _{0}A_{i}-B_{i}-\partial _{i}\lambda \right) , \end{aligned}$$
which permits calculation of the integral over \(B_{i}\). After integration over the momentum p and field \(B_{0}\), the path integral reads
$$\begin{aligned} Z= & {} \int \mathscr {D}\left( A_{i},\lambda \right) \exp \left\{ i\int d^{3}x\bigg ( -\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}\right. \nonumber \\&-\,\frac{a}{2}\left( \partial _{0}A_{i}-\partial _{i}\lambda \right) \left( \partial ^{0}A^{i}-\partial ^{i}\lambda \right) \nonumber \\&+\,\frac{1}{2b}\varepsilon ^{i0j}\left( \partial _{\mu }\partial ^{\mu }A_{i}\right) \partial _{0}A_{j}+\frac{1}{2b}\varepsilon ^{0ij}\left( \partial _{\mu }\partial ^{\mu }\lambda \right) \partial _{i}A_{j}\nonumber \\&\left. +\,\frac{1}{2b}\varepsilon ^{ij0}\left( \partial _{\mu }\partial ^{\mu }A_{i}\right) \partial _{j}\lambda -\frac{m^{2}}{2}A_{\mu }A^{\mu }\bigg ) \right\} . \end{aligned}$$
Using the notation \(A_{0}=\lambda \) the argument of the exponential from the Hamiltonian path integral is exactly the MECS–Proca Lagrangian,
$$\begin{aligned} Z= & {} \int \mathscr {D} A_{\mu } \exp \left\{ i\int d^{3}x\bigg ( -\frac{a}{4}\partial _{[\mu }A_{\nu ]}\partial ^{[\mu }A^{\nu ]}\right. \nonumber \\&\left. +\,\frac{1}{2b}\varepsilon ^{\mu \nu \rho }\left( \partial _{\lambda }\partial ^{\lambda }A_{\mu }\right) \partial _{\nu }A_{\rho }- \frac{m^{2}}{2}A_{\mu }A^{\mu }\bigg ) \right\} . \end{aligned}$$
Chern–Simons coupling
In the sequel we show that the MECS–Proca model may be related to another first-class theory. Starting from the GU system constructed above, subject to the second-class constraints (80) and the first-class constraints (95) and (97), and whose evolution is governed by the first-class Hamiltonian (101), we consider the following combinations of fields and momenta:
$$\begin{aligned}&\mathscr {F}_{0} \equiv A_{0},\quad \mathscr {F}_{i}\equiv A_{i}+\frac{1}{m^{2}}\partial _{i}\left( p_{0}-\partial _{j}\pi ^{j}\right) , \end{aligned}$$
$$\begin{aligned}&\mathscr {P}_{i} \equiv p_{i}-\frac{1}{2b}\varepsilon _{0ij}\partial _{k}\partial ^{k}A^{j}-\frac{1}{2b}\varepsilon _{0ij}\partial ^{j}B^{0},\quad \mathscr {B}_{i}\equiv B_{i}, \nonumber \\ \end{aligned}$$
which are in (strong) involution with the first-class constraints \(G_{a}\approx 0 \)
$$\begin{aligned} \left[ \mathscr {F}_{0},G_{a}\right] =\left[ \mathscr {F}_{i},G_{a}\right] = \left[ \mathscr {P}_{i},G_{a}\right] =\left[ \mathscr {B}_{i},G_{a}\right] =0, \end{aligned}$$
and, moreover, \(\mathscr {F}_{\mu }\equiv \left\{ \mathscr {F}_{0},\mathscr {F} _{i}\right\} \) is divergenceless on the surface \(\chi _{i}^{\left( 1\right) }\approx 0\)
$$\begin{aligned} \partial ^{\mu }\mathscr {F}_{\mu }=\mathscr {O}\left( \chi _{i}^{\left( 1\right) }\right) . \end{aligned}$$
Similarly to the case of the MCS–Proca model, the first-class Hamiltonian (101) can be written in terms of these quantities
$$\begin{aligned} H_{GU}= & {} \int d^{2}x\bigg [ \frac{a}{4}\partial _{[ i}\mathscr {F} _{j]}\partial ^{[ i}\mathscr {F}^{j]} +\frac{a}{2}\left( \mathscr {B} _{i}-\partial _{i}\mathscr {F}_{0}\right) \nonumber \\&\times \,\left( \mathscr {B}^{i}-\partial ^{i}\mathscr {F}^{0}\right) -\frac{1}{2b}\varepsilon _{0ij}\left( \partial _{k}\partial ^{k}\mathscr {F} ^{0}\right) \partial ^{i}\mathscr {F}^{j}\nonumber \\&-\,\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}\mathscr {F} ^{i}\right) \partial ^{j}\mathscr {F}^{0}+\frac{m^{2}}{2}\mathscr {F}_{i}\mathscr {F}^{i}\nonumber \\&+\,\frac{m^{2}}{2}\mathscr {F}_{0}\mathscr {F}^{0}+\mathscr {B}^{i}\mathscr {P} _{i}-\left( \partial ^{i}\mathscr {F}_{i}\right) \partial ^{j}\chi _{j}^{( 1) }\bigg ]. \end{aligned}$$
Enlarging the phase space by adding the bosonic pairs \(\left\{ V^{\mu },P_{\mu }\right\} \), we can write the solution to Eq. (134) in the form
When we replace the solution (136) in (95), the constraint takes the form
$$\begin{aligned} \frac{1}{m^{2}}\left( \partial _{i}p^{i}+m\varepsilon _{0ij}\partial ^{i}V^{j}+\partial _{k}\partial ^{k}\pi _{0}\right) \approx 0, \end{aligned}$$
and it remains first-class. Computing the Poisson bracket between the quantity \( \partial _{i}p_{0}\) and the first-class constraint (95), and the Poisson bracket between \(P_{i}\) and (137), we see that these two quantities are correlated through the relation
Using Eqs. (136) and (138), we write the first-class Hamiltonian as
$$\begin{aligned} H_{GU}^{\prime }= & {} \int d^{2}x\bigg \{ \frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]} \nonumber \\&+\,\frac{a}{2}\left[ B_{i}+\frac{1}{m}\partial _{i}\left( \varepsilon _{0jk}\partial ^{j}V^{k}\right) \right] \nonumber \\&\times \,\left[ B^{i}+\frac{1}{m}\partial ^{i}\left( \varepsilon ^{0ln}\partial _{l}V_{n}\right) \right] \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0ij}\partial _{k}\partial ^{k}\left( \frac{1}{m} \varepsilon ^{0ln}\partial _{l}V_{n}\right) \partial ^{i}A^{j}\nonumber \\&-\,\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j}+\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}A^{i}\right) \nonumber \\&\times \, \partial ^{j}\left( \frac{1}{m}\varepsilon ^{0ln}\partial _{l}V_{n}\right) +\frac{1}{4}\partial ^{[ i}V^{j]}\partial _{[ i}V_{j]}\nonumber \\&+\,\frac{m^{2}}{2}\left( A_{i}+\frac{1}{m}\varepsilon _{0ij}P^{j}-\frac{1}{m^{2}}\partial _{i}\partial _{j}\pi ^{j}\right) \nonumber \\&\times \,\left( A^{i}+\frac{1}{m}\varepsilon ^{0il}P_{l}-\frac{1}{m^{2}}\partial ^{i}\partial ^{l}\pi _{l}\right) \nonumber \\&-\,\partial ^{i}\left( A_{i}+\frac{1}{m}\varepsilon _{0ik}P^{k}-\frac{1}{m^{2}}\partial _{i}\partial _{k}\pi ^{k}\right) \nonumber \\&\times \, \partial ^{j}\left( \pi _{j}+\frac{1}{2b} \varepsilon _{0jk}B^{k}\right) \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0jk}B^{0}\partial ^{j}B^{k} +p^{i}B_{i}\bigg \}. \end{aligned}$$
If we count the number of physical degrees of freedom of the system with the phase space locally parameterized by \(\left\{ A_{i},B_{\mu },V^{\mu },p^{i},\pi ^{\mu },P_{\mu }\right\} \) subject to the second-class constraints (80), first-class constraints (97) and (137) and whose evolution is governed by the first-class Hamiltonian (139), we obtain
$$\begin{aligned} \bar{\mathscr {N}}_{GU}^{\prime }= & {} (16\ \mathrm {canonical\ variables}-2 \mathrm { scc}-2\times 2\ \mathrm {fcc}) /2 \nonumber \\= & {} 5\ne \bar{\mathscr {N}}_{GU}. \end{aligned}$$
Imposing the first-class constraints
$$\begin{aligned} -\partial ^{i}P_{i}\approx 0,\qquad P_{0}\approx 0, \end{aligned}$$
the number of physical degrees of freedom is conserved
$$\begin{aligned} \bar{\mathscr {N}}_{GU}^{\prime }= & {} ( 16\ \mathrm {canonical\ variables}-2 \mathrm { scc}-2\times 4\ \mathrm {fcc}) /2 \nonumber \\= & {} 3=\bar{\mathscr {N}}_{GU}. \end{aligned}$$
For each first-class theory derived above, we can identify a set of fundamental classical observables that are in one-to-one correspondence and possess the same Poisson brackets. Since the number of physical degrees of freedom is the same for both theories and the corresponding algebras of classical observables are isomorphic, the procedure exposed previously preserves the equivalence between the two first-class theories. As a result, the GU and the first-class system remain equivalent also at the level of the Hamiltonian path-integral quantization. This further implies that the first-class system is completely equivalent to the MECS–Proca model. Owing to this equivalence, we can replace the Hamiltonian path integral of the MECS–Proca model with the one associated with the first-class system,
$$\begin{aligned} Z^{\prime }= & {} \int \mathscr {D}\left( A_{i},B_{\mu },V^{\mu },p^{i},\pi ^{\mu },P_{\mu },\lambda 's\right) \nonumber \\&\times \,\mu \left( [ A_{i}],[B_{\mu }],[V^{\mu }]\right) \nonumber \\&\times \,\delta \left[ \pi _{i}+\frac{1}{2b}\varepsilon _{0ij}\left( B^{j}+\frac{1}{m}\varepsilon ^{0kl}\partial ^{j}\partial _{k}V_{l}\right) \right] \nonumber \\&\times \,\mathrm {det}^{1/2}\left( \frac{1}{b} \varepsilon _{0ij}\delta ( x-y)\right) \exp \bigg \{i\int d^{3}x\bigg [ \left( \partial _{0}A_{i}\right) p^{i}\nonumber \\&+\,\left( \partial _{0}B_{\mu }\right) \pi ^{\mu }+\left( \partial _{0}V^{\mu }\right) P_{\mu }-\mathscr {H}_{GU}^{\prime }\nonumber \\&-\,\frac{1}{m^{2}}\lambda ^{( 1) }\left( \partial _{i}p^{i}+m\varepsilon _{0ij}\partial ^{i}V^{j}+\partial _{k}\partial ^{k}\pi _{0}\right) \nonumber \\&-\,\lambda ^{( 2) }\left( -\pi _{0}+\frac{1}{2b} \varepsilon _{0ij}\partial ^{i}A^{j}\right) \nonumber \\&+\,\lambda ^{(3) }\partial ^{i}P_{i}-\lambda ^{(4) }P_{0}\bigg ] \bigg \}. \end{aligned}$$
After a partial integration over the momenta \(\pi _{i}\) in the path integral, the argument of the exponential reads
$$\begin{aligned} S_{GU}^{\prime }= & {} \int d^{3}x\bigg \{ \left( \partial _{0}A_{i}\right) p^{i}+\left( \partial _{0}B_{0}\right) \pi ^{0}+\left( \partial _{0}V^{\mu }\right) P_{\mu } \nonumber \\&+\,\frac{1}{2b}\left( \partial _{0}B_{i}\right) \varepsilon ^{0ij}\left[ -B_{j}-\partial _{j}\left( \frac{1}{m}\varepsilon ^{0kl}\partial _{k}V_{l}\right) \right] \nonumber \\&-\,\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[i}A^{j]}-\frac{a}{2}\left[ B_{i}+\partial _{i}\left( \frac{1}{m}\varepsilon _{0jk}\partial ^{j}V^{k}\right) \right] \nonumber \\&\times \,\left[ B^{i}+\partial ^{i}\left( \frac{1}{m}\varepsilon ^{0ln}\partial _{l}V_{n}\right) \right] \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0ij} \partial _{k}\partial ^{k}\left( \frac{1}{m} \varepsilon ^{0ln}\partial _{l}V_{n}\right) \partial ^{i}A^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{k}\partial ^{k}A^{i}\right) B^{j}-\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{k}\partial ^{k}A^{i}\right) \nonumber \\&\times \, \partial ^{j}\left( \frac{1}{m}\varepsilon ^{0ln}\partial _{l}V_{n}\right) -\frac{1}{4}\partial ^{[i}V^{j]}\partial _{[i}V_{j]} \nonumber \\&-\,\frac{m^{2}}{2}\left[ A_{i}+\frac{1}{m}\varepsilon _{0ij}P^{j}+\frac{1}{m^{2}}\partial _{i}\left( \frac{1}{2b}\varepsilon _{0jk}\partial ^{j}B^{k}\right) \right] \nonumber \\&\times \,\left[ A^{i}+\frac{1}{m}\varepsilon ^{0il}P_{l}+\frac{1}{m^{2}}\partial _{i}\left( \frac{1}{2b}\varepsilon _{0ln}\partial ^{l}B^{n} \right) \right] \nonumber \\&+\,\frac{1}{2b}\varepsilon _{0jk}B^{0}\partial ^{j}B^{k}-p^{i}B_{i}\nonumber \\&-\,\frac{1}{m^{2}}\lambda ^{( 1) }\left( \partial _{i}p^{i}+m\varepsilon _{0ij}\partial ^{i}V^{j}+\partial _{k}\partial ^{k}\pi _{0}\right) \nonumber \\&-\,\lambda ^{(2) }\left( -\pi _{0}+\frac{1}{2b} \varepsilon _{0ij}\partial ^{i}A^{j}\right) \nonumber \\&+\,\lambda ^{(3) }\partial ^{i}P_{i} -\lambda ^{(4) }P_{0}\bigg \}. \end{aligned}$$
This permits the calculation of the integral over \(B_{i}\). Performing a partial integration over the field \(V_{0}\), the momenta \(\left\{ \pi _{0},P_{0},P_{i}\right\} \), and the Lagrange multipliers \(\left\{ \lambda ^{(2)},\lambda ^{(4) }\right\} \), the argument of the exponential in the Hamiltonian path integral reads
$$\begin{aligned} S_{GU}^{\prime }= & {} \int d^{3}x\bigg \{ -\frac{a}{4}\partial _{[i}A_{j]}\partial ^{[ i}A^{j]} \nonumber \\&-\,\frac{a}{2}\left[ \partial _{0}A_{i}+\partial _{i}\left( \frac{1}{m^{2}}\lambda ^{(1) }+\frac{1}{m}\varepsilon _{0jk}\partial ^{j}V^{k}\right) \right] \nonumber \\&\times \,\left[ \partial ^{0}A^{i}+\partial ^{i}\left( \frac{1}{m^{2}}\lambda ^{(1) }+\frac{1}{m}\varepsilon ^{0ln}\partial _{l}V_{n}\right) \right] \nonumber \\&-\,\frac{1}{2b}\varepsilon _{0ij}\partial _{\lambda }\partial ^{\lambda }\left( \frac{1}{m^{2}} \lambda ^{(1) }+\frac{1}{m}\varepsilon ^{0kl}\partial _{k}V_{l}\right) \partial ^{i}A^{j}\nonumber \\&+\,\frac{1}{2b}\varepsilon _{i0j}\left( \partial _{\lambda }\partial ^{\lambda }A^{i}\right) \partial ^{0}A^{j} -\frac{1}{2b}\varepsilon _{ij0}\left( \partial _{\lambda }\partial ^{\lambda }A^{i}\right) \nonumber \\&\times \,\partial ^{j}\left( \frac{1}{m^{2}} \lambda ^{(1) }+\frac{1}{m}\varepsilon ^{0kl}\partial _{k}V_{l}\right) +\frac{1}{4}\partial _{[ i}V_{j]}\partial ^{[ i}V^{j]}\nonumber \\&+\,\frac{ 1}{2}\left( \partial _{0}V_{i}-\partial _{i}\lambda ^{(3) }\right) \left( \partial ^{0}V^{i}-\partial ^{i}\lambda ^{(3) }\right) \nonumber \\&-\,m\varepsilon _{0ij}\left( \frac{1}{m^{2}}\lambda ^{(1) }+\frac{1}{m}\varepsilon ^{0kl}\partial _{k}V_{l}\right) \left( \partial ^{i}V^{j}\right) \nonumber \\&+\,m\varepsilon _{i0j}A^{i}\left( \partial ^{0}V^{j}-\partial ^{j}\lambda ^{(3) }\right) \bigg \}. \end{aligned}$$
Using the notations
$$\begin{aligned} \bar{A}_{0}=-\left( \frac{1}{m^{2}}\lambda ^{( 1) }+\frac{1}{m}\varepsilon _{0jk}\partial ^{j}V^{k}\right) ,\qquad \bar{V}_{0}=\lambda ^{(3) }, \end{aligned}$$
the argument of the exponential from the Hamiltonian path integral takes a manifestly Lorentz-covariant form,
$$\begin{aligned} S_{GU}^{\prime }= & {} \int d^{3}x\bigg [ -\frac{a}{4}\partial _{[ \mu } \bar{A}_{\nu ]}\partial ^{[ \mu }\bar{A}^{\nu ]} \nonumber \\&+\,\frac{1}{2b} \varepsilon _{\mu \nu \rho }\left( \partial _{\lambda }\partial ^{\lambda } \bar{A}^{\mu }\right) \partial ^{\nu }\bar{A}^{\rho }+\frac{1}{4}\partial _{[\mu }\bar{V}_{\nu ]}\partial ^{[ \mu }\bar{V}^{\nu ]}\nonumber \\&+\,m\varepsilon _{\mu \nu \rho }\bar{A}^{\mu } \partial ^{\nu }\bar{V}^{\rho } \bigg ], \end{aligned}$$
where \(\bar{A}_{\mu }=\left\{ \bar{A}_{0},A_{i}\right\} \) and \(\bar{V}_{\mu }=\left\{ \bar{V}_{0},V_{i}\right\} \). The above functional describes a CS coupling between the 1-form \( \bar{A}_{\mu }\) and the 1-form \(\bar{V}_{\mu }\), and it is a higher derivative extension of the functional (45).
In this paper, the MCS–Proca model has been analyzed from the point of view of the Hamiltonian path-integral quantization, in the framework of the gauge-unfixing approach. The same quantization procedure was applied to a higher order derivative extension of the MCS–Proca model. The first step of this approach is the construction of an equivalent first-class system. In order to construct the first-class system equivalent to the MECS–Proca model, we performed a partial gauge-unfixing (we maintained the second-class constraints (80)), whereas in the case of the MCS–Proca model we accomplished a total gauge-unfixing. Neither model required an extension of the original phase space in order to construct the equivalent first-class system. The second step involved the construction of the Hamiltonian path integral corresponding to the equivalent first-class system for each model. The Hamiltonian path integral of the first-class systems took a manifestly Lorentz-covariant form after the auxiliary fields were integrated out and some field redefinitions were performed. Starting from the Hamiltonian path integral of the equivalent non-higher derivative first-class system, we arrived at the Lagrangian path integral corresponding to a Stückelberg coupling between a scalar field and a 1-form or, for an appropriate phase-space extension, at the Lagrangian path integral for two kinds of 1-forms with a CS coupling (a non-higher order derivative term). The results obtained in the case of the MECS–Proca model are higher derivative extensions (involving the CS term) of the results obtained in the case of the MCS–Proca model.
T.J. Allen, M.J. Bowick, A. Lahiri, Topological mass generation in 3+1 dimensions. Mod. Phys. Lett. A 6, 559 (1991)
A.S. Vytheeswaran, Gauge unfixing in second class constrained systems. Ann. Phys. 236, 297 (1994)
E.B. Park, Y.W. Kim, Y.J. Park, Y. Kim, W.T. Kim, Batalin–Tyutin quantization of the Chern–Simons–Proca theory. Mod. Phys. Lett. A 10, 1119 (1995). arXiv:hep-th/9504151
N. Banerjee, R. Banerjee, S. Ghosh, Quantization of second class systems in the Batalin–Tyutin formalism. Ann. Phys. 241, 237 (1995). arXiv:hep-th/9403069
H. Sawayanagi, Hamiltonian BRST quantization of an Abelian massive vector field with an antisymmetric tensor field. Mod. Phys. Lett. A 10, 813 (1995)
C. Bizdadea, S.O. Saliu, The BRST quantization of massive abelian two-form gauge fields. Phys. Lett. B 368, 202 (1996)
C. Bizdadea, Some remarks on the BRST quantization of massive Abelian two-form gauge fields. Phys. Rev. D 53, 7138 (1996)
C. Bizdadea, The hamiltonian BRST quantization of massive abelian p-form gauge fields. J. Phys. A: Math. Gen. 29, 3985 (1996)
N. Banerjee, R. Banerjee, Generalized Hamiltonian embedding of the Proca model. Mod. Phys. Lett. A 11, 1919 (1996). arXiv:hep-th/9511212
Y.W. Kim, M.I. Park, Y.J. Park, S.J. Yoon, BRST quantization of the Proca model based on the BFT and the BFV formalism. Int. J. Mod. Phys. A 12, 4217 (1997). arXiv:hep-th/9702002
A.S. Vytheeswaran, Gauge invariances in the Proca model. Int. J. Mod. Phys. A 13, 765 (1998). arXiv:hep-th/9701050
S.T. Hong, Y.W. Kim, Y.J. Park, K.D. Rothe, Symplectic embedding and Hamilton–Jacobi analysis of Proca model. Mod. Phys. Lett. A 17, 435 (2002). arXiv:hep-th/0112170
H. Ruegg, M. Ruiz-Altaba, The Stueckelberg field. Int. J. Mod. Phys. A 19, 3265 (2004). arXiv:hep-th/0304245
E.M. Cioroianu, S.C. Sararu, O. Balus, First-class approaches to massive 2-forms. Int. J. Mod. Phys. A 25, 185 (2010). arXiv:1001.5146
E.M. Cioroianu, Note on the dynamics of a pseudo-classical spinning particle. Mod. Phys. Lett. A 26, 589 (2011)
S.C. Sararu, Massive p-forms: first-class approaches. Int. J. Mod. Phys. A 27, 1250119 (2012)
S.C. Sararu, On covariant quantization of the massive self-dual 3-forms in 7 dimensions. Int. J. Theor. Phys. 51, 2623 (2012)
S.C. Sararu, From massive self-dual p-forms towards gauge p-forms. Cent. Eur. J. Phys. 11, 59 (2013)
K. Harada, H. Mukaida, Gauge invariance and systems with second class constraints. Z. Phys. C 48, 151 (1990)
P. Mitra, R. Rajaraman, New results on systems with second-class constraints. Ann. Phys. 203, 137 (1990)
M. Henneaux, C. Teitelboim, Quantization of Gauge Systems (Princeton University Press, Princeton, 1992)
R. Anishettyt, A.S. Vytheeswaran, Gauge invariance in second-class constrained systems. J. Phys. A: Math. Gen. 26, 5613 (1993)
C. Bizdadea, S.O. Saliu, The BRST quantization of second-class constrained systems. Nucl. Phys. B 456, 473 (1995)
L.D. Faddeev, S.L. Shatashvili, Realization of the Schwinger term in the Gauss law and the possibility of correct quantization of a theory with anomalies. Phys. Lett. B 167, 225 (1986)
I.A. Batalin, E.S. Fradkin, Operator quantization of dynamical systems with irreducible first- and second-class constraints. Phys. Lett. B 180, 157 (1986)
I.A. Batalin, E.S. Fradkin, Operational quantization of dynamical systems subject to second-class constraints. Nucl. Phys. B 279, 514 (1987)
I.A. Batalin, I.V. Tyutin, Existence theorem for the effective gauge algebra in the generalized canonical formalism with Abelian conversion of second-class constraints. Int. J. Mod. Phys. A 6, 3255 (1991)
S. Deser, R. Jackiw, "Self-duality" of topologically massive gauge theories. Phys. Lett. B 139, 371 (1984)
P.K. Townsend, K. Pilch, P. van Nieuwenhuizen, Self-duality in odd dimensions. Phys. Lett. B 136, 38 (1984)
S. Deser, R. Jackiw, P. van Nieuwenhuizen, Three-dimensional massive gauge theories. Phys. Rev. Lett. 48, 975 (1982)
S. Deser, R. Jackiw, S. Templeton, Topologically massive gauge theories. Ann. Phys. 140, 372 (1982)
R. Banerjee, H.J. Rothe, K.D. Rothe, Equivalence of the Maxwell–Chern–Simons theory and a self-dual model. Phys. Rev. D 52, 3750 (1995). arXiv:hep-th/9504067
R. Banerjee, H.J. Rothe, K.D. Rothe, Hamiltonian embedding of the self-dual model and equivalence with Maxwell–Chern–Simons theory. Phys. Rev. D 55, 6339 (1997). arXiv:hep-th/9611077
R. Banerjee, H.J. Rothe, Batalin–Fradkin–Tyutin embedding of a self-dual model and the Maxwell–Chern–Simons theory. Nucl. Phys. B 447, 183 (1995). arXiv:hep-th/9504066
L. Heisenberg, Generalization of the Proca action. JCAP 05, 015 (2014). arXiv:1402.7026
S. Deser, R. Jackiw, Higher derivative Chern–Simons extensions. Phys. Lett. B 451, 73 (1999). arXiv:hep-th/9901125
A. de Souza Dutra, C.P. Natividade, Class of self-dual models in three dimensions. Phys. Rev. D 61, 027701 (2000). arXiv:hep-th/0002114
D. Bazeia, R. Menezes, J.R. Nascimento, R.F. Ribeiro, C. Wotzasek, Dual equivalence in models with higher-order derivatives. J. Phys. A 36, 9943 (2003). arXiv:hep-th/0210311
A. Accioly, M. Diase, Algorithm for probing the unitarity of topologically massive models. Int. J. Theor. Phys. 44, 1123 (2005). arXiv:hep-th/0511242
A. Accioly, M. Diase, Is it physically sound to add a topologically massive term to three-dimensional massive electromagnetic or gravitational models? Int. J. Mod. Phys. A 21, 559 (2006). arXiv:hep-th/0507186
B. Podolsky, A generalized electrodynamics part I-non-quantum. Phys. Rev. 62, 68 (1942)
C. Pinheiro, G.O. Pires, N. Tomimura, Some quantum aspects of three-dimensional Einstein–Chern–Simons–Proca massive gravity. Nuovo Cim. B 111, 1023 (1996). arXiv:gr-qc/9704004
S. Deser, B. Tekin, Massive, topologically massive, models. Class. Quant. Grav. 19, L97 (2002). arXiv:hep-th/0203273
R. Banerjee, S. Kumar, Self-dual models and mass generation in planar field theory. Phys. Rev. D 63, 125008 (2001). arXiv:hep-th/0007148
R. Banerjee, B. Chakraborty, T. Scaria, Polarization vectors, doublet structure and Wigner's little group in planar field theory. Int. J. Mod. Phys. A 16, 3967 (2001). arXiv:hep-th/0011011
M.V. Ostrogradsky, Memoires sur les equations differentielles relatives au probleme des isoperimetres. Mem. Ac. St. Petersbourg VI, 385 (1850)
D.M. Gitman, S.L. Lyakhovich, I.V. Tyutin, Canonical quantization of the Yang–Mills lagrangian with higher derivatives. Sov. Phys. J. 28, 554 (1985)
D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints (Springer-Verlag, Berlin, Heidelberg, 1990)
V.V. Nesterenko, Singular Lagrangians with higher derivatives. J. Phys. A 22, 1673 (1989)
S. Kumar, Lagrangian and Hamiltonian formulations of higher order Chern–Simons theories. Int. J. Mod. Phys. A 18, 1613 (2003). arXiv:hep-th/0112121
C.M. Reyes, Testing symmetries in effective models of higher derivative field theories. Phys. Rev. D 80, 105008 (2009). arXiv:0901.1341
R. Banerjee, P. Mukherjee, B. Paul, Gauge symmetry and W-algebra in higher derivative systems. J. High Energy Phys. 08, 085 (2011). arXiv:1012.2969
M.S. Plyushchay, Massive relativistic point particle with rigidity. Int. J. Mod. Phys. A 4, 3851 (1989)
M.S. Plyushchay, The model of the relativistic particle with torsion. Nucl. Phys. B 362, 54 (1991)
P. Mukherjee, B. Paul, Gauge invariances of higher derivative Maxwell–Chern–Simons field theory: a new Hamiltonian approach. Phys. Rev. D 85, 045028 (2012). arXiv:1111.0153
B. Paul, Gauge symmetry and Virasoro algebra in quantum charged rigid membrane: a first order formalism. Phys. Rev. D 87, 045003 (2013). arXiv:1212.5902
R. Banerjee, P. Mukherjee, B. Paul, New Hamiltonian analysis of Regge–Teitelboim minisuperspace cosmology. Phys. Rev. D 89, 043508 (2014). arXiv:1307.4920
P. Dirac, Generalized Hamiltonian dynamics. Can. J. Math. 2, 129 (1950)
P. Dirac, Lectures on Quantum Mechanics (Academic Press, New York, 1967)
R. Ferraro, M. Henneaux, M. Puchin, On the quantization of reducible gauge systems. J. Math. Phys. 34, 2757 (1993). arXiv:hep-th/9210070
E.C.G. Stueckelberg, Interaction energy in electrodynamics and in the field theory of nuclear forces. Helv. Phys. Acta 11, 225 (1938)
C. Bizdadea, E.M. Cioroianu, S.O. Saliu, Irreducible Hamiltonian BRST approach to topologically coupled abelian forms. Phys. Scr. 60, 120 (1999). arXiv:hep-th/9912201
The author wishes to thank E. M. Cioroianu for useful discussions and comments. I am very grateful to Prof. S. Deser for calling my attention to Ref. [43]. I would also like to thank the referees for bringing Ref. [44] to my attention.
Department of Physics, University of Craiova, 13 A. I. Cuza Str., 200585, Craiova, Romania
Silviu-Constantin Sararu
Correspondence to Silviu-Constantin Sararu.
Funded by SCOAP3
Sararu, SC. A first-class approach of higher derivative Maxwell–Chern–Simons–Proca model. Eur. Phys. J. C 75, 526 (2015). https://doi.org/10.1140/epjc/s10052-015-3741-x
\begin{document}
\quad \\ \quad \\ \noindent This document accompanies the main paper. It contains additional information, including additional tables and figures referenced in the paper.
\section{Recall bias adjustment and processing of survey data} For survey data, we implemented an additional data cleaning step to ensure consistency in the entries in each column, particularly where these were character variables. Each coverage survey estimate was then linked to a ``birth cohort year'', which we used as the reference year for the estimate in this work. The birth cohort year was determined using the period of data collection and the age of the birth cohort that the survey estimate relates to, as in the WUENIC methodology.
Similar to the WUENIC approach, we applied a recall-bias adjustment to DTP3 and PCV3 survey estimates. Estimates based on vaccination cards only, or on vaccination cards and recall, were used for the adjustment. In the pre-cleaned input data file, these estimates were labelled as ``Card'' and ``Card or History'', respectively, in the column for evidence of vaccination. For country-vaccine-year combinations with multiple estimates labelled as ``crude'' or ``valid'', the ``valid'' estimates were retained in the analysis as these were considered more accurate. The formula used for the adjustment is: \begin{equation}
\textrm{VD3}_{\textrm{(card+history)}} = \textrm{VD3}_{\textrm{(card only)}} \times \frac{\textrm{VD1}_{\textrm{(card+history)}}}{\textrm{VD1}_{\textrm{(card only)}}} \label{eq:eq1} \end{equation} where VD3 denotes the third dose of DTP or PCV vaccine and VD1, the first dose. We note that for each vaccine, the adjustment was applied only when all the data needed to compute equation (\ref{eq:eq1}) were available. After the adjustment, the original ``Card or History'' survey estimates of DTP3 and PCV3 were replaced with corresponding bias-adjusted estimates for further processing.
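The adjustment in equation (\ref{eq:eq1}) is straightforward to apply once the three inputs are available. The following sketch illustrates the computation with made-up numbers (written in Python as language-neutral pseudocode; the actual implementation is in R):

```python
def adjust_recall_bias(vd3_card, vd1_card, vd1_card_or_history):
    """Scale a card-only third-dose estimate by the first-dose
    card-or-history to card-only ratio (equation 1)."""
    return vd3_card * vd1_card_or_history / vd1_card

# Illustrative values (not real data): DTP3 (card only) 60%,
# DTP1 (card only) 70%, DTP1 (card or history) 84%.
print(adjust_recall_bias(60.0, 70.0, 84.0))  # 72.0
```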
For a given vaccine, country and year, if only one survey estimate was available, it was accepted if the sample size was greater than 300 or if the estimate was labelled ``valid''; otherwise, it was not accepted. Where multiple estimates were available for the same vaccine, country and year, ``Card or History'' estimates were prioritized over ``Card''-only estimates, and either of these was accepted if the corresponding sample size was greater than 300 or if the evidence of vaccination was based on valid doses. We note that for DTP3 and PCV3, only the bias-adjusted estimates were considered when available. When multiple estimates remained after the preceding step (whether from different surveys or the same survey) for the same vaccine, country and year, the estimate with the largest sample size was accepted. If the sample sizes were missing, the first valid estimate or, failing that, the first estimate available was chosen. The resulting survey estimates were used in the rest of the analyses.
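The selection rules above amount to a small decision procedure. The sketch below is an illustrative rendering only (Python rather than the R implementation, and the field names are hypothetical, not the package's actual column names):

```python
def select_estimate(estimates):
    """Pick one survey estimate for a vaccine-country-year combination.

    Each estimate is a dict with hypothetical keys: 'evidence'
    ('Card or History' or 'Card'), 'sample_size' and 'valid'.
    """
    def acceptable(e):
        n = e.get('sample_size')
        return (n is not None and n > 300) or e.get('valid', False)

    # "Card or History" estimates are preferred over "Card"-only ones.
    for evidence in ('Card or History', 'Card'):
        pool = [e for e in estimates
                if e['evidence'] == evidence and acceptable(e)]
        if pool:
            # Largest sample size wins; missing sizes rank last.
            return max(pool, key=lambda e: e.get('sample_size') or -1)
    return None
```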
\section{Software} \label{sec:software} In order to support the reproducibility and replication of the immunisation coverage modelling methods described in this report, we developed a set of tools in the R programming language \citep{Rlang2021}. The {\tt imcover} package provides functions for assembling the common sources of immunisation coverage data and for fitting the balanced data single likelihood (BDSL) and irregular data multiple likelihood (IDML) models described in Section 3 using full Bayesian inference with Stan \citep{stan}. The latest version of {\tt imcover} can be installed from GitHub by typing the following command within the R console: \\ \texttt{devtools::install\_github(`wpgp/imcover')}.
In order to properly install the package, a C++ compiler is required. Internally, {\tt imcover} relies on Stan code which is translated into C++ and compiled, allowing for faster computations. On a Windows PC, the Rtools program provides the necessary compiler. This is available from \url{https://cran.r-project.org/bin/windows/Rtools/} for R version 4.0 or later. On Mac OS X, users should follow the instructions to install XCode. Users are advised to check the Stan installation guide for further information on the necessary system set-up and compilers (\url{https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started}).
A typical workflow using {\tt imcover} to produce national- and regional-level immunisation coverage estimates is illustrated in Figure \ref{fig:imcoveroverview}. The user should first download the time series of reported coverage data, which may come from multiple sources (i.e. administrative, official, and survey estimates). Second, these datasets are processed in several steps to filter, clean duplicate records and correct possible reporting biases, and are then assembled into a single dataset. This stage creates datasets of format \texttt{ic.df} within R. This format is an extension of the common data frame and enables some of the specialised processing steps by {\tt imcover}. Third, the model is fitted against the assembled coverage dataset. The user has the option at this stage to specify additional parameters and prior choices for the statistical model. Fourth, after the model has been fitted, a model object is returned. This object's class extends the model objects from \texttt{rstan}, from which parameter estimates and other results can be extracted in R. In the fifth stage, post-processing is done on the model results. These functions provide support to produce summaries of the estimates, predictions forward in time, standardised visualisations, and population-weighted regional aggregations of immunisation coverage estimates. The details of this workflow are illustrated below with a worked example of coverage data for the WHO AFRO region. Further information on the R package can be found within the documentation, including a long-form vignette; see \texttt{help(imcover)}.\\
\begin{figure}
\caption{Overview of steps supported by the {\tt imcover} package. }
\label{fig:imcoveroverview}
\end{figure}
\noindent {\bf Workflow example}\\ In the following sections we provide a worked example to produce time series of modelled estimates of immunisation coverage using the model-based approach implemented in R using {\tt imcover}. After loading the package, we first obtain the data from the WHO Immunisation Data Portal (\url{https://immunizationdata.who.int}). An internet connection is required for this step.\\
\begin{lstlisting}[language=R]
# load the package within the R environment
library(imcover)

# download administrative and official records
cov <- download_coverage()

# download survey records
svy <- download_survey()
\end{lstlisting}
Data downloading is handled by two functions: \texttt{download\_coverage} and \texttt{download\_survey}. These handle administrative/official estimates or survey datasets, respectively. By default, the downloaded files are stored as temporary documents in the user's R temporary directory; however, the functions provide the option for users to save the downloaded files to a user-specified location and load them later from a local file path. In this way, a user can also load their own source of immunisation coverage data and process it into a standardised format for modelling.\\
\noindent {\bf Data processing and formatting}\\ As part of the download function, a series of checks and cleaning steps are applied by default. The goal of these checks is to identify the core attributes in the input data necessary to construct an immunisation coverage dataset. Specifically, these attributes include a country, time, vaccine identifier, and percent of the population covered by that vaccine. In the absence of a reported coverage percentage, the number of doses administered and target population can be used to estimate coverage. Identifying the core attributes allows the user to harmonise multiple source files into a unified data format for modelling. Administrative and official coverage estimates are processed similarly. Within \texttt{download\_survey}, the household survey datasets require some more specialised processing. For instance, multi-dose vaccine reports can have reporting and recall biases. Note that all the processing steps can be carried out using separate functions available in {\tt imcover} if advanced users want more control over pre-processing.
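The idea of reducing each source file to the same core attributes can be illustrated as follows (a Python sketch of the logic, not the package's R code; the column mapping and record fields are hypothetical):

```python
def harmonise(records, colmap):
    """Map source-specific column names onto the core attributes
    (country, time, vaccine, coverage)."""
    out = []
    for rec in records:
        row = {core: rec.get(src) for core, src in colmap.items()}
        # Derive coverage from doses / target population when absent.
        if row.get('coverage') is None and rec.get('doses') is not None:
            row['coverage'] = 100.0 * rec['doses'] / rec['target']
        out.append(row)
    return out
```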
Within the {\tt imcover} package, processed coverage datasets are stored in \texttt{ic.df} format, or an ``immunisation coverage data frame''. This format extends the common data frame object of R, where observations are rows and attributes are stored in columns. \texttt{ic.df} objects support all standard methods and ways of working with data frames in R. This includes selecting records and columns by indices or column names, merging data frames, appending records, renaming, etc. The advantage of the \texttt{ic.df} format over a standard data frame is that it includes information to identify the columns containing core coverage information as well as notes on the data pre-processing that has been done. These allow users to combine disparate sources of information on immunisation coverage into a harmonised analysis dataset without having to adjust for missing or differently named columns.
\begin{lstlisting}[language=R]
# note the type of object created
class(cov)
#> [1] "ic.df" "data.frame" \end{lstlisting}
The data files available from the WHO website require some additional cleaning before analysis. Notice that the data objects created by {\tt imcover} can work with all standard R commands.
\begin{lstlisting}[language=R]
# Further data cleaning of immunization records
# drop some record categories (PAB, HPV, WUENIC)
cov <- cov[!cov$coverage_category %in% c('PAB', 'HPV', 'WUENIC'), ]
cov$coverage_category <- tolower(cov$coverage_category) # clean-up

# create a combined dataset
dat <- rbind(cov, svy)

# remove records with missing coverage values
dat <- dat[!is.na(dat$coverage), ]

# mismatch in vaccine names between coverage and survey datasets
dat[dat$antigen == 'DTPCV1', 'antigen'] <- 'DTP1'
dat[dat$antigen == 'DTPCV2', 'antigen'] <- 'DTP2'
dat[dat$antigen == 'DTPCV3', 'antigen'] <- 'DTP3'
# subset records
dat <- ic_filter(dat,
vaccine = c("DTP1", "DTP3", "MCV1", "MCV2", "PCV3"),
time = 2000:2020) \end{lstlisting}
In preparation for the statistical modelling we carry out several additional pre-processing steps. Firstly, some records observe inconsistencies in the levels of coverage between multi-dose vaccines. To maintain consistency, where coverage of later doses cannot exceed earlier doses, we model the ratio between first and third dose. In this example, we only adjust DTP1 and DTP3, but other multi-dose vaccines could be processed in a similar manner.
\begin{lstlisting}[language=R]
# adjustment - use ratio for DTP3
dat <- ic_ratio(dat, numerator = 'DTP3', denominator = 'DTP1')
\end{lstlisting}
The \texttt{ic.df} object will now store a note that this processing step has been carried out so that the ratio is correctly back-transformed and that coverage estimates and predictions are adjusted appropriately.
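The ratio trick and its back-transform amount to the following (a Python sketch of the logic behind \texttt{ic\_ratio}; the package's internal details differ):

```python
def to_ratio(cov3, cov1):
    """Model the third dose as a fraction of the first, so that the
    fitted DTP3 can never exceed the fitted DTP1."""
    return cov3 / cov1

def from_ratio(ratio, cov1):
    """Back-transform a modelled ratio to third-dose coverage."""
    return ratio * cov1
```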
Secondly, we need to force coverage estimates to lie between 0\% and 100\% so that we can model the data with a logit transformation.
\begin{lstlisting}[language=R]
# maintain coverage between 0-100
dat <- ic_adjust(dat, coverage_adj = TRUE)
\end{lstlisting}
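Keeping coverage strictly inside the open interval makes the logit transform well defined. A minimal sketch of the idea (in Python; the clamping tolerance here is an assumption, not the value used by the package):

```python
import math

def clamp_logit(coverage_pct, eps=1e-4):
    """Clamp a coverage percentage into (0, 1), then logit-transform."""
    p = min(max(coverage_pct / 100.0, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))
```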
\noindent {\bf Fitting models with {\tt imcover}}\\ The core of {\tt imcover} is the functionality to fit a Bayesian statistical model of multiple time series. As discussed in Section 3, this includes the BDSL and the IDML models. The sources of coverage data (in this example administrative, official and survey estimates) are taken as multiple, partial estimates of the true, unobserved immunization coverage in a country. A Bayesian estimation approach allows us to incorporate these multiple datasets, place prior beliefs on which sources are more reliable, share information between countries, and quantify uncertainty in our estimate of the true immunization coverage.
{\tt imcover} provides an interface to Stan \citep{stan} for statistical computation. This means that, in addition to {\tt imcover}, many of the R programming language tools for assessing model performance and visualizing results from \texttt{rstan} will work for \texttt{imcover} results.
For this example we subset the records to the AFR region based on a regional identifier and the most recent years.
\begin{lstlisting}[language=R]
# enable parallel processing in rstan
options(mc.cores = parallel::detectCores())

# Fit model to a single region of data
fit <- ic_fit(dat[dat$region == 'AFRO', ],
chains = 4,
iter = 2000,
warmup = 1000) \end{lstlisting}
The previous command uses the IDML model; \texttt{ic\_fit\_single} can be used to fit the BDSL model to the same data. Other options to these fitting functions include \texttt{prior\_sigma}, which sets the scale parameter for the truncated Cauchy priors on $\sigma$, and \texttt{upper\_sigma}, which allows users to set an upper bound on the scale parameter for each data source.
\noindent {\bf Model outputs}\\ After fitting the model, we can work with the parameter estimates and estimates of coverage. As noted previously, the fitted models returned by \texttt{ic\_fit} are \texttt{rstan}-class objects and are compatible with many other tools, such as \texttt{bayesplot}. {\tt imcover} wraps these objects into a new class, \texttt{ic.fit}, which, along with the fitted model object, contains additional information on the model type and data pre-processing steps to facilitate most common analyses of immunisation coverage.
For example, we can easily make graphs of coverage estimates using a generic \texttt{plot} function.
\begin{lstlisting}[language=R]
# Plot of coverage
plot(ic_filter(fit, vaccine = 'DTP1',
               country = c('BWA', 'SLE', 'GHA', 'NGA')),
     ncol = 2)

# Plot of coverage overlaid with WUENIC estimates
plot(ic_filter(fit, vaccine = 'DTP1',
               country = c('BWA', 'SLE', 'GHA', 'NGA')),
     ncol = 2,
     wuenic = TRUE)
\end{lstlisting}
The main output of interest from {\tt imcover} is a table of coverage estimates for each country, vaccine, and time point, along with uncertainty around each estimate. We can extract these data as a table, save them to a data frame, or write them out to a file for use in a report.
\begin{lstlisting}[language=R]
# Extract coverage estimates to a data frame
ic <- ic_coverage(fit,
stat = "quantile", # customise the summary function
probs = c(0.025, 0.5, 0.975)) \end{lstlisting}
Population-weighted regional aggregations of immunisation coverage can also be calculated.
\begin{lstlisting}[language=R]
ic_regional(fit,
            stat = c("mean", "quantile"),
            probs = c(0.025, 0.5, 0.975)) \end{lstlisting}
Once we have a fitted model for a specific time period, we can also use it to predict coverage at time points in the near future.
\begin{lstlisting}[language=R]
# Predict for future time points (3 years post 2018)
fit <- predict(fit, t = 3)  # update ic.fit to include 'prediction' info

# Update the graph to include fitted estimates, observed data, and predictions
plot(ic_filter(fit, vaccine = 'DTP1',
     country = c('BWA', 'SLE', 'GHA', 'NGA')),
     ncol = 2,
     prediction = TRUE) \end{lstlisting}
This section has introduced the core functionality of the {\tt imcover} package and its use in modelling national time series of immunization coverage. Providing a coordinated toolset to download, process, and model immunization data should allow for consistent analyses whose methods are transparent and replicable.
\section{Supplementary tables and figures} \begin{figure}
\caption{WHO Member States and regions.}
\end{figure}
\begin{figure}
\caption{Time series plots of the processed input administrative, official and survey data for some example countries.}
\end{figure}
\begin{figure}
\caption{Distributions of (a) original, (b) logit-transformed and (c) probit-transformed processed pooled national immunization coverage data. }
\label{fig:histplot}
\end{figure}
\begin{figure}
\caption{Plots of simulated and modelled estimates of national immunization coverage for five countries and three vaccines using (a) the balanced data single likelihood (BDSL) model and (b) the irregular data multiple likelihood (IDML) model. The solid black lines and grey-shaded areas are the estimates of $p_{ijt}$ and the green lines are the true values. The data were generated under scenario 1 described in Section 5. }
\label{fig:simplot1}
\end{figure}
\begin{table} \caption{Posterior estimates of the parameters of the multiple likelihood model for AMR region} \centering \begin{tabular}{ c r r r r r }
\hline \hline \hline
\textbf{Parameter} & \textbf{Mean} & \textbf{Std. dev.} & \textbf{2.5\%} & \textbf{50\%} & \textbf{97.5\%} \\ [0.5 ex]
\hline
$\hat{\lambda}^{(a)}$ & 0.5008 & 0.2858 & -0.0738 & 0.5090 & 1.0555 \\
$\hat{\lambda}^{(o)}$ & 0.5017 & 0.2856 & -0.0741 & 0.5095 & 1.0574 \\
$\hat{\lambda}^{(s)}$ & -0.6286 & 0.2854 & -1.2026 & -0.6203 & -0.0797 \\
$\hat{\sigma}_1$ & 0.4215 & 0.0110 & 0.4011 & 0.4212 & 0.4439 \\
$\hat{\sigma}_2$ & 0.4116 & 0.0109 & 0.3905 & 0.4118 & 0.4335 \\
$\hat{\sigma}_3$ & 0.3998 & 0.0002 & 0.3993 & 0.3999 & 0.4000 \\
$\hat{\sigma}_{\beta}$ & 0.4856 & 0.2770 & 0.0292 & 0.4972 & 1.0211 \\
$\hat{\sigma}_{\alpha}$ & 3.0274 & 2.0690 & 0.5602 & 2.6560 & 7.7328 \\
$\hat{\rho}_{\gamma}$ & 0.1471 & 0.5073 & -0.8011 & 0.1630 & 0.9543 \\
$\hat{\sigma}_{\gamma}$ & 0.1066 & 0.0620 & 0.0057 & 0.1043 & 0.2381 \\
$\hat{\rho}_{\phi}$ & 0.9502 & 0.0251 & 0.8882 & 0.9568 & 0.9808 \\
$\hat{\sigma}_{\phi}$ & 0.2682 & 0.0329 & 0.2084 & 0.2670 & 0.3327 \\
$\hat{\rho}_{\delta}$ & 0.9311 & 0.0645 & 0.7649 & 0.9501 & 0.9960 \\
$\hat{\sigma}_{\delta}$ & 0.2696 & 0.0453 & 0.1873 & 0.2665 & 0.3559 \\
$\hat{\sigma}_{\psi}$ & 0.7860 & 0.0687 & 0.6642 & 0.7833 & 0.9269 \\
$\hat{\rho}_{\omega}$ & 0.3165 & 0.0249 & 0.2670 & 0.3169 & 0.3651 \\
$\hat{\sigma}_{\omega}$ & 1.2770 & 0.0206 & 1.2367 & 1.2767 & 1.3152 \\
\hline \hline \hline \end{tabular} \end{table}
\begin{table} \caption{Posterior estimates of the parameters of the multiple likelihood model for EUR} \centering \begin{tabular}{ c r r r r r }
\hline \hline \hline
\textbf{Parameter} & \textbf{Mean} & \textbf{Std. dev.} & \textbf{2.5\%} & \textbf{50\%} & \textbf{97.5\%} \\ [0.5 ex]
\hline
$\hat{\lambda}^{(a)}$ & 0.2615 & 0.2950 & -0.3199 & 0.2586 & 0.8420 \\
$\hat{\lambda}^{(o)}$ & 0.2822 & 0.2951 & -0.3015 & 0.2792 & 0.8635 \\
$\hat{\lambda}^{(s)}$ & -0.2474 & 0.2953 & -0.8318 & -0.2501 & 0.3380 \\
$\hat{\sigma}_1$ & 0.4639 & 0.0102 & 0.4440 & 0.4637 & 0.4842 \\
$\hat{\sigma}_2$ & 0.4639 & 0.0102 & 0.4440 & 0.4637 & 0.4842 \\
$\hat{\sigma}_3$ & 0.3994 & 0.0006 & 0.3979 & 0.3996 & 0.4000 \\
$\hat{\sigma}_{\beta}$ & 0.2410 & 0.1743 & 0.0105 & 0.2155 & 0.6291 \\
$\hat{\sigma}_{\alpha}$ & 3.9051 & 2.2320 & 1.1716 & 3.4282 & 9.4420 \\
$\hat{\rho}_{\gamma}$ & 0.1223 & 0.5833 & -0.8987 & 0.1599 & 0.9814 \\
$\hat{\sigma}_{\gamma}$ & 0.0434 & 0.0282 & 0.0042 & 0.0383 & 0.1085 \\
$\hat{\rho}_{\phi}$ & 0.9571 & 0.0114 & 0.9302 & 0.9585 & 0.9749 \\
$\hat{\sigma}_{\phi}$ & 0.2472 & 0.0180 & 0.2125 & 0.2472 & 0.2823 \\
$\hat{\rho}_{\delta}$ & 0.9769 & 0.0229 & 0.9141 & 0.9838 & 0.9991 \\
$\hat{\sigma}_{\delta}$ & 0.1202 & 0.0207 & 0.0822 & 0.1193 & 0.1625\\
$\hat{\sigma}_{\psi}$ & 0.7441 & 0.0613 & 0.6291 & 0.7428 & 0.8680\\
$\hat{\rho}_{\omega}$ & 0.6527 & 0.0257 & 0.6032 & 0.6519 & 0.7037 \\
$\hat{\sigma}_{\omega}$ & 0.6738 & 0.0125 & 0.6499 & 0.6736 & 0.6989 \\
\hline \hline \hline \end{tabular} \end{table}
\begin{table} \caption{Posterior estimates of the parameters of the multiple likelihood model for EMR} \centering \begin{tabular}{ c r r r r r }
\hline \hline \hline
\textbf{Parameter} & \textbf{Mean} & \textbf{Std. dev.} & \textbf{2.5\%} & \textbf{50\%} & \textbf{97.5\%} \\ [0.5 ex]
\hline
$\hat{\lambda}^{(a)}$ & 0.5188 & 0.3051 & -0.0849 & 0.5231 & 1.1021 \\
$\hat{\lambda}^{(o)}$ & 0.5189 & 0.3050 & -0.0774 & 0.5219 & 1.1056 \\
$\hat{\lambda}^{(s)}$ & -0.6581 & 0.3054 & -1.2607 & -0.6528 & -0.0769 \\
$\hat{\sigma}_1$ & 0.7680 & 0.0199 & 0.7298 & 0.7679 & 0.8077 \\
$\hat{\sigma}_2$ & 0.6896 & 0.0189 & 0.6540 & 0.6893 & 0.7275 \\
$\hat{\sigma}_3$ & 0.3990 & 0.0010 & 0.3963 & 0.3993 & 0.4000 \\
$\hat{\sigma}_{\beta}$ & 1.3293 & 0.4876 & 0.1831 & 1.3620 & 2.2276 \\
$\hat{\sigma}_{\alpha}$ & 2.5990 & 1.8366 & 0.6515 & 2.2021 & 7.0152 \\
$\hat{\rho}_{\gamma}$ & 0.6071 & 0.4976 & -0.6624 & 0.8617 & 0.9990 \\
$\hat{\sigma}_{\gamma}$ & 0.1149 & 0.0621 & 0.0153 & 0.1118 & 0.2426 \\
$\hat{\rho}_{\phi}$ & 0.8985 & 0.0478 & 0.7949 & 0.9027 & 0.9717 \\
$\hat{\sigma}_{\phi}$ & 0.4852 & 0.0375 & 0.4127 & 0.4846 & 0.5598 \\
$\hat{\rho}_{\delta}$ & 0.3475 & 0.5466 & -0.8263 & 0.4814 & 0.9864 \\
$\hat{\sigma}_{\delta}$ & 0.0540 & 0.0385 & 0.0042 & 0.0484 & 0.1379 \\
$\hat{\sigma}_{\psi}$ & 0.5925 & 0.0769 & 0.4518 & 0.5881 & 0.7563 \\
$\hat{\rho}_{\omega}$ & 0.4987 & 0.0458 & 0.4068 & 0.4998 & 0.5872 \\
$\hat{\sigma}_{\omega}$ & 0.7634 & 0.0249 & 0.7148 & 0.7634 & 0.8131 \\
\hline \hline \hline \end{tabular} \end{table}
\begin{table} \caption{Posterior estimates of the parameters of the multiple likelihood model for SEAR} \centering \begin{tabular}{ c r r r r r }
\hline \hline \hline
\textbf{Parameter} & \textbf{Mean} & \textbf{Std. dev.} & \textbf{2.5\%} & \textbf{50\%} & \textbf{97.5\%} \\
\hline
$\hat{\lambda}^{(a)}$ & 0.4308 & 0.2925 & -0.1347 & 0.4283 & 1.0051 \\
$\hat{\lambda}^{(o)}$ & 0.1046 & 0.2916 & -0.4622 & 0.1020 & 0.6730 \\
$\hat{\lambda}^{(s)}$ & -0.2016 & 0.2930 & -0.7794 & -0.2024 & 0.3754 \\
$\hat{\sigma}_1$ & 1.4097 & 0.0417 & 1.3302 & 1.4091 & 1.4937 \\
$\hat{\sigma}_2$ & 0.9489 & 0.0310 & 0.8900 & 0.9483 & 1.0120 \\
$\hat{\sigma}_3$ & 0.3975 & 0.0025 & 0.3905 & 0.3982 & 0.3999 \\
$\hat{\sigma}_{\beta}$ & 1.2842 & 0.4016 & 0.7332 & 1.2084 & 2.2674 \\
$\hat{\sigma}_{\alpha}$ & 2.5514 & 1.6480 & 0.8068 & 2.1492 & 6.6952 \\
$\hat{\rho}_{\gamma}$ & 0.8774 & 0.2489 & -0.0015 & 0.9595 & 0.9989 \\
$\hat{\sigma}_{\gamma}$ & 0.1622 & 0.0606 & 0.0462 & 0.1581 & 0.2950 \\
$\hat{\rho}_{\phi}$ & 0.4686 & 0.1939 & 0.0796 & 0.4738 & 0.8472 \\
$\hat{\sigma}_{\phi}$ & 0.4585 & 0.0418 & 0.3786 & 0.4579 & 0.5420 \\
$\hat{\rho}_{\delta}$ & 0.2717 & 0.3399 & -0.3255 & 0.2403 & 0.9580 \\
$\hat{\sigma}_{\delta}$ & 0.1897 & 0.0485 & 0.0979 & 0.1882 & 0.2878 \\
$\hat{\sigma}_{\psi}$ & 0.2524 & 0.1558 & 0.0167 & 0.2475 & 0.5612 \\
$\hat{\rho}_{\omega}$ & 0.8599 & 0.0519 & 0.7290 & 0.8694 & 0.9354 \\
$\hat{\sigma}_{\omega}$ & 0.3221 & 0.0393 & 0.2471 & 0.3208 & 0.4052 \\
\hline \hline \hline \end{tabular} \end{table}
\begin{table} \caption{Posterior estimates of the parameters of the multiple likelihood model for the WPR} \centering \begin{tabular}{ c r r r r r }
\hline \hline \hline
\textbf{Parameter} & \textbf{Mean} & \textbf{Std. dev.} & \textbf{2.5\%} & \textbf{50\%} & \textbf{97.5\%} \\
\hline
$\hat{\lambda}^{(a)}$ & 0.5084 & 0.2952 & -0.0589 & 0.5050 & 1.0857 \\
$\hat{\lambda}^{(o)}$ & 0.5584 & 0.2953 & -0.0062 & 0.5552 & 1.1388 \\
$\hat{\lambda}^{(s)}$ & -0.6892 & 0.2961 & -1.2604 & -0.6901 & -0.1178 \\
$\hat{\sigma}_1$ & 0.8704 & 0.0218 & 0.8273 & 0.8700 & 0.9135 \\
$\hat{\sigma}_2$ & 0.8575 & 0.0212 & 0.8167 & 0.8574 & 0.8994 \\
$\hat{\sigma}_3$ & 0.3989 & 0.0011 & 0.3960 & 0.3992 & 0.4000 \\
$\hat{\sigma}_{\beta}$ & 1.2773 & 0.2275 & 0.8910 & 1.2561 & 1.7798 \\
$\hat{\sigma}_{\alpha}$ & 3.1161 & 1.8398 & 0.7977 & 2.7570 & 7.6872 \\
$\hat{\rho}_{\gamma}$ & 0.4081 & 0.5328 & -0.7494 & 0.5413 & 0.9989 \\
$\hat{\sigma}_{\gamma}$ & 0.0722 & 0.0517 & 0.0051 & 0.0634 & 0.1939 \\
$\hat{\rho}_{\phi}$ & 0.6871 & 0.0751 & 0.5332 & 0.6889 & 0.8303 \\
$\hat{\sigma}_{\phi}$ & 0.5605 & 0.0377 & 0.4881 & 0.5607 & 0.6351 \\
$\hat{\rho}_{\delta}$ & 0.5274 & 0.5582 & -0.7790 & 0.8258 & 0.9974 \\
$\hat{\sigma}_{\delta}$ & 0.0913 & 0.0511 & 0.0021 & 0.0947 & 0.1878 \\
$\hat{\sigma}_{\psi}$ & 0.5964 & 0.0868 & 0.4250 & 0.5956 & 0.7719 \\
$\hat{\rho}_{\omega}$ & 0.6172 & 0.0410 & 0.5363 & 0.6168 & 0.6965 \\
$\hat{\sigma}_{\omega}$ & 0.8232 & 0.0299 & 0.7642 & 0.8238 & 0.8820 \\
\hline \hline \hline \end{tabular} \end{table}
\begin{table}[ht] \caption{Summary of the differences between WUENIC and modelled estimates for each WHO region and globally.
} \centering \begin{tabular}{l r r} \hline \hline \hline
Region & Median & Interquartile \\
& difference & range\\ \hline
EMR & -1.33 & 4.59 \\
AMR & -4.62 & 6.59 \\
EUR & -1.89 & 2.75 \\
AFR & 0.21 & 10.35 \\
WPR & -3.22 & 7.13 \\
SEAR & -0.36 & 3.22 \\
Global & -1.90 & 5.44 \\ \hline \hline \hline \end{tabular} \label{tab:wueniccomptab} \end{table}
\begin{figure}
\caption{Plots of modelled estimates and corresponding uncertainties at the country level for AFR}
\end{figure}
\begin{figure}
\caption{Plots of modelled estimates and corresponding uncertainties at the country level for AMR}
\end{figure}
\begin{figure}
\caption{Plots of modelled estimates and corresponding uncertainties at the country level for the EUR}
\end{figure}
\begin{figure}
\caption{Plots of modelled estimates and corresponding uncertainties at the country level for the SEAR}
\end{figure}
\begin{figure}
\caption{Plots of modelled estimates and corresponding uncertainties at the country level for the WPR}
\end{figure}
\begin{figure}
\caption{Modelled estimates of immunization coverage (black line) and corresponding uncertainty estimates (grey shaded area) for select countries. The plots are overlaid with processed input administrative, official and survey data, as well as corresponding WUENIC estimates. Predictions for 2021 and 2022 are shown on the right-hand side of the dotted vertical lines. }
\label{fig:wueniccomp1}
\end{figure}
\begin{figure}
\caption{Differences between WUENIC estimates and the modelled estimates for all WHO countries.}
\label{Fig:wueniccomp2}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for AFR}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for AMR}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for EMR}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for EUR}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for SEAR}
\end{figure}
\begin{figure}
\caption{Comparing WUENIC estimates with the modelled estimates for WPR}
\end{figure}
\begin{figure}
\caption{(a) Modelled estimates of immunization coverage and corresponding uncertainties for each vaccine and WHO region; (b) Regional trends in modelled estimates of coverage for each vaccine. }
\label{fig:regfig}
\end{figure}
\section{Processing code} \begin{lstlisting}[language=R]
# Global immunisation coverage modelling
library(imcover)
library(rstan)
options(mc.cores = parallel::detectCores())
# path to files - note: change here
dir <- '/main/path/to/workspace'
# get WUENIC input files
cov <- download_coverage(use_cache = TRUE)
svy <- download_survey(use_cache = TRUE)
# drop some categories (PAB, HPV, WUENIC) and clean up
cov <- cov[!cov$coverage_category %in% c('PAB', 'HPV', 'WUENIC'), ]
cov$coverage_category <- tolower(cov$coverage_category)
# combined dataset
dat <- rbind(cov, svy)
# remove records with missing values
dat <- dat[!is.na(dat$region), ]
dat <- dat[!is.na(dat$coverage), ]
# mismatch in vaccine names between coverage and survey datasets
dat[dat$antigen == 'DTPCV1', 'antigen'] <- 'DTP1'
dat[dat$antigen == 'DTPCV2', 'antigen'] <- 'DTP2'
dat[dat$antigen == 'DTPCV3', 'antigen'] <- 'DTP3'
# select only some vaccines to analyse
dat <- ic_filter(dat, vaccine = c("DTP1", "DTP3", "MCV1", "MCV2", "PCV3"))
table(dat$coverage_category, dat$antigen)
# subset the time
dat <- ic_filter(dat, time = 2000:2020)
# drop observations with zero values
table(dat$coverage == 0)  # n=165
dat <- dat[dat$coverage > 0, ]
# adjustment - use ratio for DTP3 and require 0 < coverage < 100
dat <- ic_ratio(dat, numerator = 'DTP3', denominator = 'DTP1')
dat <- ic_adjust(dat, coverage_adj = TRUE)
# model fitting - multiple likelihood model
fit <- ic_fit(dat,
chains = 4,
iter = 2000,
warmup = 1000,
prior_lambda = c(0.5, 0.5, 0.5),
prior_sigma = c(2, 2, 0.2),
upper_sigma = c(100, 100, 0.4),
lower_sigma = c(0, 0, 0))
# save the fitted object
saveRDS(fit, file = file.path(dir, 'output', 'fit.rds'))
# post-processing
regs <- names(fit)  # regions
# WAIC calculation
waics <- lapply(regs, function(r) {
print(r)
ww <- waic(fit[[r]])
print(ww)
})
# coverage estimates and predictions
# loop over the regions in the fitted object
for (r in regs) {
print(r)
# extract region
f <- fit[[r]]
# filter based on year of introduction
f <- filter_yovi(f)
# predict for 2021 and 2022
f <- predict(f, t=2)
# summarise coverage estimates
ic_post <- ic_coverage(f,
object = 'posterior',
stat = c('mean', 'quantile'),
probs = c(0.025, 0.5, 0.975))
# and predictions
ic_pred <- ic_coverage(f,
object = 'prediction',
stat = c('mean', 'quantile'),
probs = c(0.025, 0.5, 0.975))
ic <- rbind(ic_post, ic_pred)
ic <- ic[order(ic$country, ic$vaccine, ic$time),]
# write out table of coverage
write.csv(ic, file.path(dir, 'output', paste0(r, '_coverage.csv')),
row.names = FALSE)
}
# regional aggregation summaries
# first load the population denominator
denom <- read.csv(file.path(dir, 'input', 'denom.csv'))
# this was created with: denom <- imcover::download_wuenic()
# process each region ...
for (r in regs) {
print(r)
# population-weighted summary
icr <- ic_regional(fit[[r]],
denom = denom,
filter_yovi = FALSE)
# create output table
write.csv(icr,
file.path(dir, 'output', paste0(r, '_regional_coverage.csv')),
row.names = FALSE)
} \end{lstlisting}
\end{document}
Conservation laws in discrete geometry
Len G. Margolin 1 and Roy S. Baty 2
Computational Physics Division, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
Theoretical Design Division, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
* Corresponding author: L. G. Margolin
Received April 2018; Revised January 2019; Published May 2019
The small length scales of the dissipative processes of physical viscosity and heat conduction are typically not resolved in the numerical simulation of high Reynolds number flows in the discrete geometry of computational grids. Historically, the simulations of flows with shocks and/or turbulence have relied on solving the Euler equations with dissipative regularization. In this paper, we begin by reviewing the regularization strategies used in shock wave calculations in both a Lagrangian and an Eulerian framework. We exhibit the essential similarities with Large Eddy Simulation models of turbulence, namely that almost all of these depend on the square of the size of the computational cell. In our principal result, we justify that dependence by deriving the evolution equations for a finite-sized volume of fluid. Those evolution equations, termed finite scale Navier-Stokes (FSNS), contain dissipative terms similar to the artificial viscosity first proposed by von Neumann and Richtmyer. We describe the properties of FSNS, provide a physical interpretation of the dissipative terms and show the connection to recent concepts in fluid dynamics, including inviscid dissipation and bi-velocity hydrodynamics.
Keywords: Conservation laws, regularized Euler, finite scale.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Len G. Margolin, Roy S. Baty. Conservation laws in discrete geometry. Journal of Geometric Mechanics, 2019, 11 (2) : 187-203. doi: 10.3934/jgm.2019010
Kazhdan–Margulis theorem
In Lie theory, an area of mathematics, the Kazhdan–Margulis theorem is a statement asserting that a discrete subgroup in semisimple Lie groups cannot be too dense in the group. More precisely, in any such Lie group there is a uniform neighbourhood of the identity element such that every lattice in the group has a conjugate whose intersection with this neighbourhood contains only the identity. This result was proven in the 1960s by David Kazhdan and Grigory Margulis.[1]
Statement and remarks
The formal statement of the Kazhdan–Margulis theorem is as follows.
Let $G$ be a semisimple Lie group: there exists an open neighbourhood $U$ of the identity $e$ in $G$ such that for any discrete subgroup $\Gamma \subset G$ there is an element $g\in G$ satisfying $g\Gamma g^{-1}\cap U=\{e\}$.
Note that in general Lie groups this statement is far from being true; in particular, in a nilpotent Lie group, for any neighbourhood of the identity there exists a lattice in the group which is generated by its intersection with the neighbourhood: for example, in $\mathbb {R} ^{n}$, the lattice $\varepsilon \mathbb {Z} ^{n}$ satisfies this property for $\varepsilon >0$ small enough.
Proof
The main technical result of Kazhdan–Margulis, which is interesting in its own right and from which the better-known statement above follows immediately, is the following.[2]
Given a semisimple Lie group $G$ without compact factors, endowed with a norm $|\cdot |$, there exist $c>1$, a neighbourhood $U_{0}$ of $e$ in $G$, and a compact subset $E\subset G$ such that, for any discrete subgroup $\Gamma \subset G$, there exists a $g\in E$ such that $|g\gamma g^{-1}|\geq c|\gamma |$ for all $\gamma \in \Gamma \cap U_{0}$.
The neighbourhood $U_{0}$ is obtained as a Zassenhaus neighbourhood of the identity in $G$: the theorem then follows by standard Lie-theoretic arguments.
There also exist other proofs. There is one proof which is more geometric in nature and which can give more information,[3][4] and there is a third proof, relying on the notion of invariant random subgroups, which is considerably shorter.[5]
Applications
Selberg's hypothesis
One of the motivations of Kazhdan–Margulis was to prove the following statement, known at the time as Selberg's hypothesis (recall that a lattice is called uniform if its quotient space is compact):
A lattice in a semisimple Lie group is non-uniform if and only if it contains a unipotent element.
This result follows from the more technical version of the Kazhdan–Margulis theorem together with the fact that an element can have conjugates arbitrarily close to the identity only if it is unipotent.
Volumes of locally symmetric spaces
A corollary of the theorem is that the locally symmetric spaces and orbifolds associated to lattices in a semisimple Lie group cannot have arbitrarily small volume (given a normalisation for the Haar measure).
For hyperbolic surfaces this is due to Siegel, and there is an explicit lower bound of $\pi /21$ for the smallest covolume of a quotient of the hyperbolic plane by a lattice in $\mathrm {PSL} _{2}(\mathbb {R} )$ (see Hurwitz's automorphisms theorem). For hyperbolic three-manifolds the lattice of minimal volume is known and its covolume is about 0.0390.[6] In higher dimensions the problem of finding the lattice of minimal volume is still open, though it has been solved when restricting to the subclass of arithmetic groups.[7]
Wang's finiteness theorem
Together with local rigidity and finite generation of lattices the Kazhdan-Margulis theorem is an important ingredient in the proof of Wang's finiteness theorem.[8]
If $G$ is a simple Lie group not locally isomorphic to $\mathrm {SL} _{2}(\mathbb {R} )$ or $\mathrm {SL} _{2}(\mathbb {C} )$ with a fixed Haar measure and $v>0$ there are only finitely many lattices in $G$ of covolume less than $v$.
See also
• Margulis lemma
Notes
1. Kazhdan, David; Margulis, Grigory (1968). Translated by Z. Skalsky. "A proof of Selberg's hypothesis". Math. USSR Sbornik. 4: 147–152. doi:10.1070/SM1968v004n01ABEH002782. MR 0223487.
2. Raghunathan 1972, Theorem 11.7.
3. Gelander, Tsachik (2011). "Volume versus rank of lattices". Journal für die reine und angewandte Mathematik. 2011 (661): 237–248. arXiv:1102.3574. doi:10.1515/CRELLE.2011.085. S2CID 122888051.
4. Ballmann, Werner; Gromov, Mikhael; Schroeder, Viktor (1985). Manifolds of nonpositive curvature. Progress in Mathematics. Vol. 61. Birkhäuser Boston, Inc., Boston, MA. doi:10.1007/978-1-4684-9159-3. ISBN 978-1-4684-9161-6.
5. Gelander, Tsachik (2018). "Kazhdan-Margulis theorem for invariant random subgroups". Advances in Mathematics. 327: 47–51. arXiv:1510.05423. doi:10.1016/j.aim.2017.06.011. S2CID 119314646.
6. Marshall, Timothy H.; Martin, Gaven J. (2012). "Minimal co-volume hyperbolic lattices, II: Simple torsion in a Kleinian group". Annals of Mathematics. 176: 261–301. doi:10.4007/annals.2012.176.1.4. MR 2925384.
7. Belolipetsky, Mikhail; Emery, Vincent (2014). "Hyperbolic manifolds of small volume" (PDF). Documenta Mathematica. 19: 801–814. arXiv:1310.2270. doi:10.4171/dm/464. S2CID 303659.
8. Theorem 8.1 in Wang, Hsien-Chung (1972), "Topics on totally discontinuous groups", in Boothby, William M.; Weiss, Guido L. (eds.), Symmetric Spaces, short Courses presented at Washington Univ., Pure and Applied Mathematics., vol. 1, Marcel Dekker, pp. 459–487, Zbl 0232.22018
References
• Gelander, Tsachik (2014). "Lectures on lattices and locally symmetric spaces". In Bestvina, Mladen; Sageev, Michah; Vogtmann, Karen (eds.). Geometric group theory. pp. 249–282. arXiv:1402.0962. Bibcode:2014arXiv1402.0962G.
• Raghunathan, M. S. (1972). Discrete subgroups of Lie groups. Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag. MR 0507234.
Hydrazine
Hydrazine is an inorganic compound with the chemical formula N2H4. It is a simple pnictogen hydride, and is a colorless and flammable liquid with an ammonia-like odor.
Hydrazine is highly toxic and dangerously unstable unless handled in solution, e.g., as hydrazine hydrate. The world hydrazine hydrate market amounted to $350 million.[1] Hydrazine is mainly used as a foaming agent in preparing polymer foams, but applications also include its uses as a precursor to polymerization catalysts, pharmaceuticals, and agrochemicals.
About two million tons of hydrazine hydrate were used in foam blowing agents in 2015. Additionally, hydrazine is used in various rocket fuels and to prepare the gas precursors used in air bags. Hydrazine is used within both nuclear and conventional electrical power plant steam cycles as an oxygen scavenger to control concentrations of dissolved oxygen in an effort to reduce corrosion.[2]
Hydrazines refer to a class of organic substances derived by replacing one or more hydrogen atoms in hydrazine by an organic group.
Gas producers and propellants
The majority use of hydrazine is as a precursor to blowing agents. Specific compounds include azodicarbonamide and azobisisobutyronitrile, which release large volumes of gas per gram of precursor upon decomposition. In a related application, sodium azide, the gas-forming agent in air bags, is produced from hydrazine by reaction with sodium nitrite.
Hydrazine is also used as a propellant onboard space vehicles, such as the NASA Dawn probe to Ceres and Vesta, and to both reduce the concentration of dissolved oxygen in and control pH of water used in large industrial boilers. The F-16 fighter jet, NASA Space Shuttle, and U-2 spy plane use hydrazine to fuel their emergency power units.[3]
Precursor to pesticides and pharmaceuticals
Hydrazine is a precursor to several pharmaceuticals and pesticides. Often these applications involve conversion of hydrazine to heterocyclic rings such as pyrazoles and pyridazines. Examples of commercialized bioactive hydrazine derivatives include cefazolin, rizatriptan, anastrozole, fluconazole, metazachlor, metamitron, metribuzin, paclobutrazol, diclobutrazole, propiconazole, hydrazine sulfate, diimide, triadimefon, and dibenzoylhydrazine.
Hydrazine compounds can be effective as active ingredients in admixture with or in combination with other agricultural chemicals such as insecticides, miticides, nematicides, fungicides, antiviral agents, attractants, herbicides or plant growth regulators.[4]
Small-scale, niche, and research
The Italian catalyst manufacturer Acta has proposed using hydrazine as an alternative to hydrogen in fuel cells. The chief benefit of using hydrazine is that it can produce over 200 mW/cm2 more than a similar hydrogen cell without the need to use expensive platinum catalysts.[5] As the fuel is liquid at room temperature, it can be handled and stored more easily than hydrogen. By storing the hydrazine in a tank containing a carbonyl compound (a double-bonded carbon-oxygen group), the fuel reacts to form a safe solid called a hydrazone. Flushing the tank with warm water then releases the liquid hydrazine hydrate. Hydrazine has a higher electromotive force, 1.56 V, compared to 1.23 V for hydrogen. In the cell, hydrazine breaks down to form nitrogen and hydrogen, which bonds with oxygen, releasing water. Hydrazine was used in fuel cells manufactured by Allis-Chalmers Corp., including some that provided electric power in space satellites in the 1960s.
A mixture of 63% hydrazine, 32% hydrazine nitrate and 5% water is a standard propellant for experimental bulk-loaded liquid propellant artillery. The propellant mixture above is one of the most predictable and stable, with a flat pressure profile during firing. Misfires are usually caused by inadequate ignition. The movement of the shell after a misignition causes a large bubble with a larger ignition surface area, and the greater rate of gas production causes very high pressure, sometimes including catastrophic tube failures (i.e. explosions).[6] From January–June 1991, the U.S. Army Research Laboratory conducted a review of early bulk-loaded liquid propellant gun programs for possible relevance to the electrothermal chemical propulsion program.
The United States Air Force (USAF) regularly uses H-70, a 70% hydrazine 30% water mixture, in operations employing the General Dynamics F-16 "Fighting Falcon" fighter aircraft and the Lockheed U-2 "Dragon Lady" reconnaissance aircraft. The single jet engine F-16 utilizes hydrazine to power its Emergency Power Unit (EPU), which provides emergency electrical and hydraulic power in the event of an engine flame out. The EPU activates automatically, or manually by pilot control, in the event of loss of hydraulic pressure or electrical power in order to provide emergency flight controls. The single jet engine U-2 utilizes hydrazine to power its Emergency Starting System (ESS), which provides a highly reliable method to restart the engine in flight in the event of a stall.[7]
Hydrazine was first used as a component in rocket fuels during World War II. A 30% mix by weight with 57% methanol (named M-Stoff in the German Luftwaffe) and 13% water was called C-Stoff by the Germans.[8] The mixture was used to power the Messerschmitt Me 163B rocket-powered fighter plane. Hydrazine was also used as a propellant with the German high test peroxide T-Stoff oxidizer. Unmixed hydrazine was referred to as B-Stoff by the Germans, a designation also used later for the ethanol/water fuel for the V-2 missile.
Hydrazine is used as a low-power monopropellant for the maneuvering thrusters of spacecraft, and was used to power the Space Shuttle's auxiliary power units (APUs). In addition, monopropellant hydrazine-fueled rocket engines are often used in terminal descent of spacecraft. Such engines were used on the Viking program landers in the 1970s as well as the Phoenix lander and Curiosity rover which landed on Mars in May 2008 and August 2012, respectively.
In all hydrazine monopropellant engines, the hydrazine is passed over a catalyst such as iridium metal supported by high-surface-area alumina (aluminium oxide), which causes it to decompose into ammonia, nitrogen gas, and hydrogen gas according to the following reactions:[9]
1) N2H4 -> N2 + 2 H2
2) 3 N2H4 -> 4 NH3 + N2
3) 4 NH3 + N2H4 -> 3 N2 + 8 H2
The first two reactions are extremely exothermic (the catalyst chamber can reach 800 °C in a matter of milliseconds,[10]) and they produce large volumes of hot gas from a small volume of liquid,[11] making hydrazine a fairly efficient thruster propellant with a vacuum specific impulse of about 220 seconds.[12] Reaction 2 is the most exothermic, but produces a smaller number of molecules than that of reaction 1. Reaction 3 is endothermic and reverts the effect of reaction 2 back to the same effect as reaction 1 alone (lower temperature, greater number of molecules). The catalyst structure affects the proportion of the NH3 that is dissociated in reaction 3; a higher temperature is desirable for rocket thrusters, while more molecules are desirable when the reactions are intended to produce greater quantities of gas.
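The trade-off between temperature and gas volume can be quantified. Combining reactions 2 and 3 with an ammonia dissociation fraction x gives the standard overall equation 3 N2H4 -> 4(1-x) NH3 + (1+2x) N2 + 6x H2; the helper name below and the use of the quoted 220 s specific impulse are our own illustration, not from the source:

```python
# Overall decomposition: 3 N2H4 -> 4(1-x) NH3 + (1+2x) N2 + 6x H2,
# where x is the fraction of NH3 dissociated by reaction 3.
def gas_moles_per_mole_n2h4(x):
    """Moles of gaseous product per mole of N2H4 consumed."""
    assert 0.0 <= x <= 1.0
    return (4 * (1 - x) + (1 + 2 * x) + 6 * x) / 3.0

print(gas_moles_per_mole_n2h4(1.0))  # 3.0 -- full dissociation, same as reaction 1
print(gas_moles_per_mole_n2h4(0.0))  # ~1.667 -- reaction 2 alone

# The quoted vacuum specific impulse of about 220 s corresponds to an
# effective exhaust velocity of roughly 2.16 km/s:
g0 = 9.80665       # standard gravity, m/s^2
v_e = 220.0 * g0   # ~2157 m/s
```

More dissociation (larger x) means more gas molecules but a lower chamber temperature, which is the balance the catalyst structure controls.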
Other variants of hydrazine that are used as rocket fuel are monomethylhydrazine, (CH3)NH(NH2) (also known as MMH), and unsymmetrical dimethylhydrazine, (CH3)2N(NH2) (also known as UDMH). These derivatives are used in two-component rocket fuels, often together with dinitrogen tetroxide, N2O4. These reactions are extremely exothermic, and the burning is also hypergolic (it starts burning without any external ignition).[13]
There are ongoing efforts in the aerospace industry to replace hydrazine and other highly toxic substances. Promising alternatives include hydroxylammonium nitrate, 2-dimethylaminoethylazide (DMAZ)[14] and energetic ionic liquids.
Safety

Potential routes of hydrazine exposure include dermal, ocular, inhalation and ingestion.[15]
Hydrazine exposure can cause skin irritation/contact dermatitis and burning, irritation to the eyes/nose/throat, nausea/vomiting, shortness of breath, pulmonary edema, headache, dizziness, central nervous system depression, lethargy, temporary blindness, seizures and coma. Exposure can also cause organ damage to the liver, kidneys and central nervous system.[16] Hydrazine is documented as a strong skin sensitizer with potential for cross-sensitization to hydrazine derivatives following initial exposure.[17] In addition to occupational uses reviewed above, exposure to hydrazine is also possible in small amounts from tobacco smoke.
The official U.S. guidance on hydrazine as a carcinogen is mixed but generally there is recognition of potential cancer-causing effects. The National Institute for Occupational Safety and Health (NIOSH) lists it as a "potential occupational carcinogen". The National Toxicology Program (NTP) finds it is "reasonably anticipated to be a human carcinogen". The American Conference of Governmental Industrial Hygienists (ACGIH) grades hydrazine as "A3—confirmed animal carcinogen with unknown relevance to humans". The U.S. Environmental Protection Agency (EPA) grades it as "B2—a probable human carcinogen based on animal study evidence".[18]
The International Agency for Research on Cancer (IARC) rates hydrazine as "2A—probably carcinogenic to humans" with a positive association observed between hydrazine exposure and lung cancer.[19] Based on cohort and cross-sectional studies of occupational hydrazine exposure, a committee from the National Academies of Sciences, Engineering and Medicine concluded that there is suggestive evidence of an association between hydrazine exposure and lung cancer, with insufficient evidence of association with cancer at other sites.[20] The European Commission's Scientific Committee on Occupational Exposure Limits (SCOEL) places hydrazine in carcinogen "group B—a genotoxic carcinogen". The genotoxic mechanism the committee cited references hydrazine's reaction with endogenous formaldehyde and formation of a DNA-methylating agent.[21]
In the event of a hydrazine exposure-related emergency, NIOSH recommends removing contaminated clothing immediately, washing skin with soap and water, and for eye exposure removing contact lenses and flushing eyes with water for at least 15 minutes. NIOSH also recommends anyone with potential hydrazine exposure to seek medical attention as soon as possible. There are no specific post-exposure laboratory or medical imaging recommendations, and the medical work-up may depend on the type and severity of symptoms. The World Health Organization (WHO) recommends potential exposures be treated symptomatically with special attention given to potential lung and liver damage. Past cases of hydrazine exposure have documented success with Pyridoxine (Vitamin B6) treatment.
NIOSH Recommended Exposure Limit (REL): 0.03 ppm (0.04 mg/m3) 2-hour ceiling
OSHA Permissible Exposure Limit (PEL): 1 ppm (1.3 mg/m3) 8-hour Time Weighted Average
ACGIH Threshold Limit Value (TLV): 0.01 ppm (0.013 mg/m3) 8-hour Time Weighted Average
The odor threshold for hydrazine is 3.7 ppm, thus if a worker is able to smell an ammonia-like odor then they are likely over the exposure limit. However, this odor threshold varies greatly and should not be used to determine potentially hazardous exposures.[22]
For aerospace personnel, the USAF uses an emergency exposure guideline, developed by the National Academy of Science Committee on Toxicology, which is utilized for non-routine exposures of the general public and is called the Short-Term Public Emergency Exposure Guideline (SPEGL). The SPEGL, which does not apply to occupational exposures, is defined as the acceptable peak concentration for unpredicted, single, short-term emergency exposures of the general public and represents rare exposures in a worker's lifetime. For hydrazine the 1-hour SPEGL is 2 ppm, with a 24-hour SPEGL of 0.08 ppm.[23]
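The ppm and mg/m3 figures quoted above are mutually consistent under the usual conversion mg/m3 = ppm × MW / 24.45 at 25 °C and 1 atm; this cross-check, and the function name, are our own:

```python
MW_N2H4 = 32.045  # molar mass of hydrazine, g/mol

def ppm_to_mg_m3(ppm, mw=MW_N2H4):
    # 24.45 L/mol is the molar volume of an ideal gas at 25 C, 1 atm.
    return ppm * mw / 24.45

print(round(ppm_to_mg_m3(0.03), 2))  # 0.04   (NIOSH REL)
print(round(ppm_to_mg_m3(1.0), 1))   # 1.3    (OSHA PEL)
print(round(ppm_to_mg_m3(0.01), 3))  # 0.013  (ACGIH TLV)
```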
Handling and medical surveillance
A complete surveillance program for hydrazine should include systematic analysis of biologic monitoring, medical screening and morbidity/mortality information. The CDC recommends surveillance summaries and education be provided for supervisors and workers. Pre-placement and periodic medical screening should be conducted with specific focus on potential effects of hydrazine upon functioning of the eyes, skin, liver, kidneys, hematopoietic, nervous and respiratory systems.
Common controls used for hydrazine include process enclosure, local exhaust ventilation and personal protective equipment (PPE). Guidelines for hydrazine PPE include non-permeable gloves and clothing, indirect-vent splash resistant goggles, face shield and in some cases a respirator. The use of respirators for the handling of hydrazine should be the last resort as a method of controlling worker exposure. In cases where respirators are needed, proper respirator selection and a complete respiratory protection program consistent with OSHA guidelines should be implemented.
For USAF personnel, Air Force Occupational Safety and Health (AFOSH) Standard 48-8, Attachment 8 reviews the considerations for occupational exposure to hydrazine in missile, aircraft and spacecraft systems. Specific guidance for exposure response includes mandatory emergency shower and eyewash stations and a process for decontaminating protective clothing. The guidance also assigns responsibilities and requirements for proper PPE, employee training, medical surveillance and emergency response. USAF bases requiring the use of hydrazine generally have specific base regulations governing local requirements for safe hydrazine use and emergency response.
Structure

Each H2N-N subunit is pyramidal. The N-N single bond distance is 1.45 Å (145 pm), and the molecule adopts a gauche conformation.[24] The rotational barrier is twice that of ethane. These structural properties resemble those of gaseous hydrogen peroxide, which adopts a "skewed" anticlinal conformation, and also experiences a strong rotational barrier.
Synthesis and production
Diverse routes have been developed.[25] The key step is the creation of the nitrogen–nitrogen single bond. The many routes can be divided into those that use chlorine oxidants (and generate salt) and those that do not.
Oxidation of ammonia via oxaziridines from peroxide
Hydrazine can be synthesized from ammonia and hydrogen peroxide in the Peroxide process (sometimes called Pechiney-Ugine-Kuhlmann process, the Atofina–PCUK cycle, or ketazine process).[25] The net reaction follows:[26]
2NH3 + H2O2 -> H2NNH2 + 2H2O
In this route, the ketone and ammonia first condense to give the imine, which is oxidised by hydrogen peroxide to the oxaziridine, a three-membered ring containing carbon, oxygen, and nitrogen. Next, the oxaziridine gives the hydrazone by treatment with ammonia, a process that creates the nitrogen-nitrogen single bond. This hydrazone condenses with one more equivalent of ketone.
The resulting azine is hydrolyzed to give hydrazine and regenerate the ketone, methyl ethyl ketone:
Me(Et)CNNC(Et)Me + 2 H2O -> 2 Me(Et)CO + N2H4
Unlike most other processes, this approach does not produce a salt as a by-product.[27]
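A quick way to sanity-check the net peroxide-process equation above is an element balance; the checker below is our own illustration:

```python
from collections import Counter

def side(*terms):
    """Sum element counts over (coefficient, formula-dict) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

NH3, H2O2 = {"N": 1, "H": 3}, {"H": 2, "O": 2}
N2H4, H2O = {"N": 2, "H": 4}, {"H": 2, "O": 1}

# 2 NH3 + H2O2 -> N2H4 + 2 H2O
lhs = side((2, NH3), (1, H2O2))
rhs = side((1, N2H4), (2, H2O))
print(lhs == rhs)  # True: N, H and O all balance
```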
Chlorine-based oxidations
In the Olin Raschig process, chlorine-based oxidants oxidize ammonia without the presence of a ketone. In the peroxide process, hydrogen peroxide oxidizes ammonia in the presence of a ketone.
Hydrazine is produced in the Olin-Raschig process from sodium hypochlorite (the active ingredient in many bleaches) and ammonia, a process announced in 1907. This method relies on the reaction of monochloramine with ammonia to create the nitrogen–nitrogen single bond as well as a hydrogen chloride byproduct:[28]
NH2Cl + NH3 -> H2NNH2 + HCl
Related to the Raschig process, urea can be oxidized instead of ammonia. Again sodium hypochlorite serves as the oxidant. The net reaction is shown:[29]
(H2N)2CO + NaOCl + 2 NaOH -> N2H4 + H2O + NaCl + Na2CO3
The process generates significant byproducts and is mainly practised in Asia.[25]
The Bayer Ketazine Process is the predecessor to the peroxide process. It employs sodium hypochlorite as oxidant instead of hydrogen peroxide. Like all hypochlorite-based routes, this method produces an equivalent of salt for each equivalent of hydrazine.[25]
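The net equation for the urea route can be checked the same way; again the helper function and formula dictionaries are our own illustration:

```python
from collections import Counter

def side(*terms):
    """Sum element counts over (coefficient, formula-dict) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

UREA   = {"C": 1, "O": 1, "N": 2, "H": 4}   # (H2N)2CO
NAOCL  = {"Na": 1, "O": 1, "Cl": 1}
NAOH   = {"Na": 1, "O": 1, "H": 1}
N2H4   = {"N": 2, "H": 4}
H2O    = {"H": 2, "O": 1}
NACL   = {"Na": 1, "Cl": 1}
NA2CO3 = {"Na": 2, "C": 1, "O": 3}

# (H2N)2CO + NaOCl + 2 NaOH -> N2H4 + H2O + NaCl + Na2CO3
lhs = side((1, UREA), (1, NAOCL), (2, NAOH))
rhs = side((1, N2H4), (1, H2O), (1, NACL), (1, NA2CO3))
print(lhs == rhs)  # True
```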
Acid-base behavior
Hydrazine forms a monohydrate that is more dense (1.032 g/cm3) than the anhydrous material. Hydrazine has basic (alkali) chemical properties comparable to those of ammonia. It is difficult to diprotonate:[30]

[N2H5]+ + H2O -> [N2H6]^2+ + OH-

K_b = 8.4 × 10^-16
with the values:[31]

K_b = 1.3 × 10^-6, pK_a = 8.1

(for ammonia, K_b = 1.78 × 10^-5)
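The constants above can be related numerically; the comparison with ammonia below is our own arithmetic, not from the source:

```python
import math

Kb_hydrazine = 1.3e-6
Kb_ammonia = 1.78e-5

pKb_hydrazine = -math.log10(Kb_hydrazine)  # ~5.89
pKa_conjugate = 14.0 - pKb_hydrazine       # ~8.11, matching the quoted pKa of 8.1

# Hydrazine is roughly an order of magnitude weaker a base than ammonia:
ratio = Kb_ammonia / Kb_hydrazine          # ~13.7
```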
Redox reactions
The heat of combustion of hydrazine in oxygen (air) is 1.941 × 10^7 J/kg (8345 BTU/lb).[32]
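The two quoted units agree, using the conversion 1 BTU/lb = 2326 J/kg; this check is our own:

```python
J_PER_KG_PER_BTU_PER_LB = 2326.0  # 1 BTU/lb expressed in J/kg

h_c = 1.941e7  # heat of combustion of hydrazine, J/kg
print(round(h_c / J_PER_KG_PER_BTU_PER_LB))  # 8345, matching the quoted BTU/lb figure
```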
Hydrazine is a convenient reductant because the by-products are typically nitrogen gas and water. Thus, it is used as an antioxidant, an oxygen scavenger, and a corrosion inhibitor in water boilers and heating systems. It is also used to reduce metal salts and oxides to the pure metals in electroless nickel plating and plutonium extraction from nuclear reactor waste. Some color photographic processes also use a weak solution of hydrazine as a stabilizing wash, as it scavenges dye coupler and unreacted silver halides. Hydrazine is the most common and effective reducing agent used to convert graphene oxide (GO) to reduced graphene oxide (rGO) via hydrothermal treatment.[33]
Hydrazinium salts
Hydrazine can be monoprotonated to form various solid salts of the hydrazinium cation (N2H5+) by treatment with mineral acids. A common salt is hydrazinium sulfate, [N2H5]HSO4, also called hydrazine sulfate.[34] Hydrazine sulfate was investigated as a treatment of cancer-induced cachexia, but proved ineffective.[35]
Double protonation gives the hydrazinium dication (H3NNH3)^2+, of which various salts are known.[36]
Organic chemistry

Hydrazines are part of many organic syntheses, often those of practical significance in pharmaceuticals (see applications section), as well as in textile dyes and in photography.[25]
Hydrazine is used in the Wolff-Kishner reduction, a reaction that transforms the carbonyl group of a ketone into a methylene bridge (or an aldehyde into a methyl group) via a hydrazone intermediate. The production of the highly stable dinitrogen from the hydrazine derivative helps to drive the reaction.
Being bifunctional, with two amines, hydrazine is a key building block for the preparation of many heterocyclic compounds via condensation with a range of difunctional electrophiles. With 2,4-pentanedione, it condenses to give the 3,5-dimethylpyrazole.[37] In the Einhorn-Brunner reaction hydrazines react with imides to give triazoles.
Being a good nucleophile, N2H4 can attack sulfonyl halides and acyl halides.[38] The tosylhydrazine also forms hydrazones upon treatment with carbonyls.
Hydrazine is used to cleave N-alkylated phthalimide derivatives. This scission reaction allows phthalimide anion to be used as amine precursor in the Gabriel synthesis.[39]
Hydrazone formation
Illustrative of the condensation of hydrazine with a simple carbonyl is its reaction with propanone to give the diisopropylidene hydrazine (acetone azine). The latter reacts further with hydrazine to yield the hydrazone:[40]
2 (CH3)2CO + N2H4 -> 2 H2O + [(CH3)2C=N]2
[(CH3)2C=N]2 + N2H4 -> 2 (CH3)2C=NNH2
The propanone azine is an intermediate in the Atofina-PCUK process. Direct alkylation of hydrazines with alkyl halides in the presence of base yields alkyl-substituted hydrazines, but the reaction is typically inefficient due to poor control on level of substitution (same as in ordinary amines). The reduction of hydrazones to hydrazines present a clean way to produce 1,1-dialkylated hydrazines.
In a related reaction, 2-cyanopyridines react with hydrazine to form amide hydrazides, which can be converted using 1,2-diketones into triazines.
Occurrence

Hydrazine is the intermediate in the anaerobic oxidation of ammonia (anammox) process.[41] It is produced by some yeasts and by the open-ocean anammox bacterium Brocadia anammoxidans.[42] The false morel produces the poison gyromitrin, an organic derivative of hydrazine that is converted to monomethylhydrazine by metabolic processes. Even the most popular edible "button" mushroom Agaricus bisporus produces organic hydrazine derivatives, including agaritine, a hydrazine derivative of an amino acid, and gyromitrin.[43][44]
History

The name "hydrazine" was coined by Emil Fischer in 1875; he was trying to produce organic compounds that consisted of mono-substituted hydrazine.[45] By 1887, Theodor Curtius had produced hydrazine sulfate by treating organic diazides with dilute sulfuric acid; however, he was unable to obtain pure hydrazine, despite repeated efforts.[46][47][48] Pure anhydrous hydrazine was first prepared by the Dutch chemist Lobry de Bruyn in 1895.[49][50][51]
External links

The Late Show with Rob! Tonight's Special Guest: Hydrazine (PDF)—Robert Matunas
Hydrazine—chemical product info: properties, production, applications.
Hydrazine toxicity
CDC—NIOSH Pocket Guide to Chemical Hazards
References

"Hydrazine Hydrate Market Size—Industry Share Report 2024". www.gminsights.com.
Tsubakizaki S, Takada M, Gotou H, Mawatari K, Ishihara N, Kai R (2009). "Alternatives to Hydrazine in Water Treatment at Thermal Power Plants". Mitsubishi Heavy Industries Technical Review 6 (2): 43–47.
Suggs HJ, Luskus LJ, Kilian HJ, Mokry JW (1979). "Exhaust Gas Composition of the F-16 Emergency Power Unit". USAF technical report SAM-TR-79-2.
Toki T, Koyanagi T, Yoshida K, Yamamoto K, Morita M (1994). "Hydrazine compounds useful as pesticides". US patent US5304657A, assigned to Ishihara Sangyo Kaisha Ltd.
"Liquid asset". The Engineer (Centaur Media plc), 15 Jan 2008.
Knapton JD, Stobie IC, Elmore L (Mar 1993). "A Review of the Bulk-Loaded Liquid Propellant Gun Program for Possible Relevance to the Electrothermal Chemical Propulsion Program". Army Research Laboratory, ADA263143.
"Ground Servicing of Aircraft and Static Grounding/Bonding". USAF technical manual TO 00-25-172, 13 Mar 2017.
Clark, John D. (1972). Ignition! An Informal History of Liquid Rocket Propellants. New Brunswick, NJ: Rutgers University Press. p. 13. ISBN 978-0-8135-0725-5.
Haws JL, Harden DG (1965). "Thermodynamic Properties of Hydrazine". Journal of Spacecraft and Rockets 2 (6): 972–974. doi:10.2514/3.28327.
Vieira R, Pham-Huu C, Kellera N, Ledouxa MJ (2002). "New carbon nanofiber/graphite felt composite for use as a catalyst support for hydrazine catalytic decomposition". Chem. Comm. 44 (9): 954–955. doi:10.1039/b202032g. PMID 12123065.
Chen X, Zhang T, Xia L, Li T, Zheng M, Wu Z, Wang X, Wei Z, Xin Q, Li C (Apr 2002). "Catalytic Decomposition of Hydrazine over Supported Molybdenum Nitride Catalysts in a Monopropellant Thruster". Catal. Lett. 79: 21–25. doi:10.1023/A:1015343922044.
"Monopropellant Hydrazine Thrusters". EADS Astrium. http://cs.astrium.eads.net/sp/SpacecraftPropulsion/MonopropellantThrusters.html
Mitchell MC, Rakoff RW, Jobe TO, Sanchez DL, Wilson B (2007). "Thermodynamic analysis of equations of state for the monopropellant hydrazine". Journal of Thermophysics and Heat Transfer 21 (1): 243–246. doi:10.2514/1.22798.
Heister S (28 Sep 2004). "Rocket Propellant Development Efforts at Purdue University". Slideshow presentation, World Wide Energy Conference.
NIOSH (1988). "Occupational Safety and Health Guideline for Hydrazine—Potential Human Carcinogen".
US EPA. "Hydrazine 302-01-2".
WHO (1991). "Hydrazine". International Programme on Chemical Safety, Health and Safety Guide No. 56. Geneva: WHO.
OSHA. "Occupational Chemical Database—Hydrazine". www.osha.gov.
IARC (Jun 2018). "Hydrazine".
Institute of Medicine (2005). "Ch. 9: Hydrazines and Nitric Acid". Gulf War and Health, Vol. 3: Fuels, Combustion Products, and Propellants. Washington, DC: The National Academies Press. p. 347. doi:10.17226/11180. ISBN 9780309095273.
European Commission (Aug 2010). "Recommendation from the Scientific Committee on Occupational Exposure Limits for Hydrazine".
New Jersey Department of Public Health (Nov 2009). "Hazardous Substance Fact Sheet—Hydrazine".
USAF (1 Sep 1997). "Air Force Occupational Safety and Health (AFOSH) Standard 48-8".
Miessler, Gary L.; Tarr, Donald A. (2004). Inorganic Chemistry (3rd ed.). Pearson Prentice Hall. ISBN 9780130354716.
Schirmann JP, Bourdauducq P (2002). "Hydrazine". Ullmann's Encyclopedia of Industrial Chemistry. Weinheim: Wiley InterScience. doi:10.1002/14356007.a13_177.
Matar, Sami; Hatch, Lewis F. (2001). Chemistry of Petrochemical Processes (2nd ed.). Burlington: Gulf Professional Publishing. p. 148. ISBN 9781493303465.
Riegel, Emil Raymond; Kent, James Albert (2003). "Hydrazine". Riegel's Handbook of Industrial Chemistry (10th ed.). New York: Springer. p. 192. ISBN 9780306474118.
Adams R, Brown BK (1922). "Hydrazine Sulfate". Org. Synth. 2: 37. doi:10.15227/orgsyn.002.0037.
"Hydrazine: Chemical product info". chemindustry.ru (archived 22 Jan 2018).
Holleman AF, Wiberg E, Wiberg N (2001). Inorganic Chemistry (1st English ed.). San Diego: Academic Press. ISBN 9780123526519.
Handbook of Chemistry and Physics (83rd ed.) (2002). CRC Press.
NOAA (1999). "Hydrazine—Chemical Hazard Properties Table".
Stankovich S, Dikin DA, Piner RD, Kohlhaas KA, Kleinhammes A, Jia Y, Wu Y, Nguyen ST, Ruoff RS (2007). "Synthesis of graphene-based nanosheets via chemical reduction of exfoliated graphite oxide". Carbon 45 (7): 1558–1565. doi:10.1016/j.carbon.2007.02.034.
"Hydrazine Sulfate". hazard.com.
Gagnon B, Bruera E (May 1998). "A review of the drug treatment of cachexia associated with cancer". Drugs 55 (5): 675–688. doi:10.2165/00003495-199855050-00005. PMID 9585863.
"Diazanediium". CharChem.
Wiley RH, Hexner PE (1951). "3,5-Dimethylpyrazole". Org. Synth. 31: 43. doi:10.15227/orgsyn.031.0043.
Friedman L, Litle RL, Reichle WR (1960). "p-Toluenesulfonyl Hydrazide". Org. Synth. 40: 93. doi:10.15227/orgsyn.040.0093.
Weinshenker NM, Shen CM, Wong JY (1977). "Polymeric Carbodiimide. Preparation". Org. Synth. 56: 95. doi:10.15227/orgsyn.056.0095.
Day AC, Whiting MC (1970). "Acetone Hydrazone". Organic Syntheses 50: 3. doi:10.15227/orgsyn.050.0003.
Strous M, Jetten MS (2004). "Anaerobic Oxidation of Methane and Ammonium". Annu. Rev. Microbiol. 58: 99–117. doi:10.1146/annurev.micro.58.030603.123605. PMID 15487931.
Handwerk, Brian (9 Nov 2005). "Bacteria Eat Human Sewage, Produce Rocket Fuel". National Geographic.
Hashida C, Hayashi K, Jie L, Haga S, Sakurai M, Shimizu H (1990). "Quantities of agaritine in mushrooms (Agaricus bisporus) and the carcinogenicity of mushroom methanol extracts on the mouse bladder epithelium" (in Japanese). Nippon Koshu Eisei Zasshi 37 (6): 400–405. PMID 2132000.
Sieger AA (1 Jan 1998). "Spore Prints #338". Bulletin of the Puget Sound Mycological Society.
Fischer E (1875). "Ueber aromatische Hydrazinverbindungen" [On aromatic hydrazine compounds]. Ber. Dtsch. Chem. Ges. 8: 589–594. doi:10.1002/cber.187500801178.
Curtius T (1887). "Ueber das Diamid (Hydrazin)" [On diamide (hydrazine)]. Ber. Dtsch. Chem. Ges. 20: 1632–1634. doi:10.1002/cber.188702001368.
Curtius T, Jay R (1889). "Diazo- und Azoverbindungen der Fettreihe. IV. Abhandlung. Ueber das Hydrazin" [Diazo and azo compounds of alkanes. Fourth treatise. On hydrazine]. Journal für praktische Chemie. On p. 129 Curtius admits: "Das freie Diamid NH2-NH2 ist noch nicht analysirt worden." [Free hydrazine has not yet been analyzed.]
Curtius T, Schulz H (1890). "Ueber Hydrazinhydrat und die Halogenverbindungen des Diammoniums" [On hydrazine hydrate and the halogen compounds of diammonium]. Journal für praktische Chemie 150: 521–549.
Lobry de Bruyn CA (1894). "Sur l'hydrazine (diamide) libre" [On free hydrazine (diamide)]. Recl. Trav. Chim. Pays-Bas 13 (8): 433–440. doi:10.1002/recl.18940130816.
Lobry de Bruyn CA (1895). "Sur l'hydrate d'hydrazine" [On the hydrate of hydrazine]. Recl. Trav. Chim. Pays-Bas 14 (3): 85–88. doi:10.1002/recl.18950140302.
Lobry de Bruyn CA (1896). "L'hydrazine libre I" [Free hydrazine, Part 1]. Recl. Trav. Chim. Pays-Bas 15 (6): 174–184. doi:10.1002/recl.18960150606.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Hydrazine". | CommonCrawl |
\begin{document}
\title{\textcolor{black}{Solvable non-Markovian dynamic network}}
\author{Nicos Georgiou} \email[]{[email protected]}
\affiliation{School of Mathematics and Physical Sciences, University of Sussex}
\author{Istvan Z. Kiss} \email[]{[email protected]}
\affiliation{School of Mathematics and Physical Sciences, University of Sussex}
\author{Enrico Scalas} \email[]{[email protected]}
\affiliation{School of Mathematics and Physical Sciences, University of Sussex}
\date{\today}
\begin{abstract} Non-Markovian processes are widespread in natural and human-made systems, yet explicit modelling and analysis of such systems is underdeveloped. We consider a non-Markovian dynamic network with random link activation and deletion (RLAD) and heavy-tailed Mittag-Leffler distributed inter-event times.
We derive an analytically and computationally tractable system of Kolmogorov-like forward equations utilising the Caputo derivative for the probability of having a given number of active links in the network, and we solve them. \textcolor{black}{We also study simulations of the RLAD with power-law inter-event times and show excellent agreement with the Mittag-Leffler model. This agreement holds even when the RLAD network dynamics is coupled with the susceptible-infected-susceptible (SIS) spreading dynamics. Thus, the analytically solvable Mittag-Leffler model provides an excellent approximation to the case in which the network dynamics is characterised by power-law distributed inter-event times. We further discuss possible generalizations of our result.}
\end{abstract}
\pacs{}
\keywords{non-Markovian, networks, fractional calculus, heavy tailed, Mittag-Leffler}
\maketitle
\section{Introduction}
Non-Poisson temporal statistics, where time intervals between isolated, consecutive actions are typically not exponentially distributed, seem to be the norm rather than the exception in many systems: for example, periods of infectiousness \cite{lloyd2001realistic}, inter-order and inter-trade durations in financial markets \cite{scalas2006durations}, and social interactions, including emails \cite{eckmann2004entropy,malmgren2008poissonian}, phone calls \cite{jiang2013calling}, and fluid individual-to-individual contacts \cite{schneider2013unravelling,moinet2014burstiness}. The absence of the robust tools and mathematical machinery of Markovian theory is the source of many challenges in the modelling and analysis of non-Markovian systems. The burst in research activity that successfully combines networks and non-Markovian processes stems from the need to develop more realistic models and new analytical tools. Notable examples include studying non-Poisson dynamics of networks \cite{hoffmann2012generalized} and non-Markovian epidemics on networks \cite{min2011spreading,van2013non,jo2014analytically}.
The non-Markovian property is particularly pervasive when considering the dynamics of time-evolving networks, be it with fast or slow timescale \cite{holme2012temporal,perra2012activity}.
Deriving simple, solvable paradigm models can facilitate progress
in developing new mathematical tools and methods for analysis and increases our understanding of the true implications of non-Markovianity for complex systems. {Empirically, it turns out that many inter-event distributions have power-law tails (see \cite{vajna2013} and references therein). Therefore, it is also necessary to develop methods able to deal with such distributions.}
It is now widely accepted that human contact patterns are highly dynamic and may evolve concurrently with an epidemic; many Markovian models for this setup exist \cite{gross2009adaptive,marceau2010adaptive,kiss2012modelling}. Here, we take the next step and consider a dynamic network with non-exponential waiting times between consecutive updates, each of which is either a link activation or a link deletion \cite{kiss2012modelling}. As a first step in the rigorous analysis of networks with non-Markovian dynamics, we consider a random link activation-deletion (RLAD) model that naturally leads to a stochastically evolving network \cite{raberto2011graphs, kiss2012modelling}. This model amounts to considering undirected and unweighted networks, where an event consists of selecting a link at random, independently of whether it is present or not, followed by its activation, if the link is absent, or its deletion, if the link is active. Such operations are separated by inter-event times sampled from the Mittag-Leffler distribution, which allows for analytical tractability. This exactly solvable model of non-Markovian network dynamics is an important special case
of a more general theory for non-Markovian processes outlined in \cite{haenggi1981old}, and it is related to recent outstanding developments in probability theory \cite{orsingherpolito,meerschaerttoaldo}. Indeed, we provide a bottom-up derivation for the master equation of some fractional birth and death processes in a finite capacity system, introduced in \cite{orsingherpolito}. This allows us to compute theoretically the exact distribution of the total number of links in the network at any time and its large-time limit.
We demonstrate the power of the analytical model by comparing it with simulations using more widely-used power-law distributed times. The rigorous analysis of this model, including explicit expressions for the distribution of the number of links in the network for $t\ge 0$, is followed by considering a Markovian $SIS$ epidemic on our non-Markovian dynamic network. \textcolor{black}{Finally, we briefly discuss the generalization of our method to general Markov chains with random state changes occurring according to a generic renewal process}.
\section{An exactly solvable model}
\subsection{Basic ingredients}
Consider an arbitrary graph on $N$ nodes as an initial state of the dynamics. We are interested in the number of (unique undirected) links in the network at a given time $t$. We denote this number by $X(t)$, and it takes values in $\mathcal{S}=\{ 0, 1, \ldots, M\}$ where $M = N(N-1)/2$, the maximal possible number of links. The time periods where $X(t)$ remains constant are called sojourn times or inter-event times. We assume that sojourn times $\{ T_i\}_{i \ge 1}$ are drawn independently from the family of Mittag-Leffler distributions with parameter (or order) $\beta \in (0,1)$ \cite{frac-calc-henry}. Their cumulative distribution function (c.d.f.) \ is indexed by this $\beta$ and it is given by
\begin{equation}\label{eq:m-l:cdf}
F^{(\beta)}_{T}(t)= \mathbb{P}\{ T \le t \} = 1 - E_{\beta}(-t^{\beta}).
\end{equation} Here $E_{\beta}(z)$ is the Mittag-Leffler function, defined by
\begin{equation} \label{eq:mlf}
E_{\beta}(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(1 + \beta n)}.
\end{equation} $E_{\beta}$ is entire for all $\beta > 0$. At $\beta = 0$ the series converges only on the open unit disc, though the function can be extended analytically to $\mathbb{C} \smallsetminus \{1\}$. Equations \eqref{eq:m-l:cdf}, \eqref{eq:mlf} define a proper c.d.f. only when $\beta \in (0,1]$. \textcolor{black}{This is equivalent to the claim that, for $\beta \in (0,1]$, $E_\beta (-t^\beta)$ is completely monotone. A $C^\infty[0,\infty)$ function $f(t)$ is completely monotone if $(-1)^n d^n f(t)/dt^n \geq 0$ for all non-negative integers $n$ and all $t>0$. Now, Mainardi and Gorenflo \cite{mainardigorenflo} proved that, for $\beta \in (0,1)$, $E_\beta(-t^\beta)$ can be written as a mixture of exponentials, namely \begin{equation} E_\beta (-t^\beta) = \int_0^\infty \exp(-rt) K_\beta (r) \, dr, \end{equation} where \begin{equation} K_\beta (r) = \frac{1}{\pi} \frac{r^{\beta-1} \sin(\beta \pi)}{r^{2\beta} + 2 r^\beta \cos(\beta \pi) + 1}, \end{equation} and \begin{equation} \int_0^\infty K_\beta (r) \, dr =1. \end{equation} Therefore, complete monotonicity of $E_\beta (-t^\beta)$ is an immediate corollary of Bernstein's theorem \cite{bernstein,schillingsongvondracek}. A direct proof that $E_\beta (-x)$ is completely monotone can be found in reference \cite{pollard}.} When $0<\beta <1$ these distributions are heavy-tailed with infinite mean, while at $\beta = 1$, $T$ is exponentially distributed with mean $1$. This family of distributions interpolates between a stretched exponential for small $t$ and a power law for large $t$ \cite{mainardigorenflo}. Namely, one has \begin{eqnarray} E_\beta (-t^\beta) & \simeq & \exp(-t^\beta/\Gamma(1+\beta)), \,\, t \ll 1, \nonumber \\ E_\beta (-t^\beta) & \sim & \frac{\sin(\beta \pi)}{\pi} \frac{\Gamma(\beta)}{t^\beta}, \, \, t \to \infty \label{eq:ref:stupid}. \end{eqnarray} Therefore, the use of these distributions is more general than it might seem at first glance. 
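As an aside for readers who wish to experiment numerically, Mittag-Leffler$(\beta)$ waiting times can be sampled exactly with a standard two-uniform inverse-transform recipe (a minimal sketch; the function name is ours and is not part of the model):

```python
import math
import random

def mittag_leffler_sample(beta, gamma=1.0, rng=random):
    """One Mittag-Leffler(beta) waiting time with scale gamma.

    Uses the two-uniform inverse-transform formula for 0 < beta < 1;
    at beta = 1 it reduces to an exponential with mean gamma.
    """
    u = 1.0 - rng.random()   # uniform on (0, 1]: avoids log(0)
    v = 1.0 - rng.random()   # uniform on (0, 1]: avoids tan(0)
    if beta == 1.0:
        return -gamma * math.log(u)
    w = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
         - math.cos(beta * math.pi))
    # max(..., 0.0) clips a possible tiny negative rounding error at v = 1
    return -gamma * math.log(u) * max(w, 0.0) ** (1.0 / beta)
```

For $0<\beta<1$ the samples are heavy tailed with infinite mean, consistent with the tail asymptotics in \eqref{eq:ref:stupid}.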
A word of notational caution: Here $\beta$ is the order of the polynomial decay of the survival function, but most commonly power-law distributions are identified by the order of decay of their densities, which in our case is $1 + \beta \in (1,2)$.
Mittag-Leffler sojourn times lead to a simpler analytical treatment of non-Markovianity in the presence of extreme power-law tails than their cognate, the Pareto distribution. However, we explain below how the theoretical framework developed here can be used to approximate the behaviour of systems that are not exactly solvable, with the more commonly used Pareto power-law distribution. For this we must introduce a scaling parameter (time change) $\gamma > 0$ for the waiting times: We say that a random variable $T$ is Mittag-Leffler$_\gamma(\beta)$ distributed if and only if \begin{equation}\label{eq:MLgamma} F^{(\beta,\gamma)}_{T}(t)= \mathbb{P}\{ T \le t \} = 1 - E_{\beta}(- (t/\gamma)^{\beta}). \end{equation} For $\gamma=1$ the c.d.f.\! reduces to that of equation \eqref{eq:m-l:cdf}, and we see that $T$ is Mittag-Leffler$_1(\beta)$ if and only if $\gamma T$ is Mittag-Leffler$_\gamma(\beta)$.
For the rigorous derivation of the evolution equations, we restrict for clarity to the $\gamma=1$ case and remark later on how the equations behave with the extra scaling. Fix a parameter $\beta \in (0,1) $. The network evolves in a semi-Markov way: Let $T_1, T_2, \ldots$ be independent Mittag-Leffler$(\beta)$ times and define the partial sums \textcolor{black}{ \begin{equation} S_n = \sum_{k=1}^n T_k, \; \; n\ge 1. \end{equation} }The sequence $S_1, S_2, \ldots $ denotes the event times at which the state of the network $X(t)$ attempts to change. A change in the state means that an undirected link is either deleted or activated. For extra flexibility, the model is introduced with an extra delay parameter $\alpha \in [0,1)$: when $\alpha \neq 0$, active links may remain unchanged even if there is an attempt at a change.
It is useful to define the embedded Markov chain for the number of links in the network, $X_{n}, n\ge 1$, with state space $\mathcal{S} $. Initially $X_0 =i$, as we start with $i$ present links and the number of links in the network increases, remains or decreases according to the following transition probabilities \begin{multline} \label{eq:trans1}
q_{k, k-1} = P_0\{ X_{j+1} = k-1 | X_j = k \} \\
= \begin{cases}
0, & k =0,\\
1-\alpha, & k = M,\\
(1-\alpha)\frac{k}{M}, & \text{ otherwise, }
\end{cases} \end{multline}
\begin{equation}\label{eq:trans3}
q_{k,k}=\alpha,
\end{equation}
and \begin{multline} \label{eq:trans2}
q_{k, k+1} = P_0\{ X_{j+1} = k+1 | X_j = k \} \\
= \begin{cases}
1-\alpha, &k =0,\\
0, &k = M,\\
1- \alpha-\frac{k(1-\alpha)}{M}, &\text{ otherwise. }
\end{cases} \end{multline} In words, at the time of the $i$-th event, we pick a link uniformly at random out of all available links. With probability $\alpha$ nothing changes, otherwise on the event that a change will happen in the system, we delete or add a link in the following way: If the link was active (present) in the network, it is now deleted, otherwise it is now activated. Notice that the embedded dynamics are equivalent to the $\alpha$-delayed version of the Ehrenfest chain.
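The transition probabilities \eqref{eq:trans1}, \eqref{eq:trans3}, \eqref{eq:trans2} assemble into a tridiagonal stochastic matrix $Q$. A minimal construction sketch (illustrative; the function name is ours):

```python
def transition_matrix(N, alpha=0.0):
    """Transition matrix Q of the alpha-delayed Ehrenfest chain for the
    number of active links on N nodes (state space {0, ..., M})."""
    M = N * (N - 1) // 2
    Q = [[0.0] * (M + 1) for _ in range(M + 1)]
    for k in range(M + 1):
        Q[k][k] = alpha                              # delayed event: no change
        if k > 0:
            Q[k][k - 1] = (1 - alpha) * k / M        # delete one of k active links
        if k < M:
            Q[k][k + 1] = (1 - alpha) * (M - k) / M  # activate an absent link
    return Q
```

Note that the generic rates $(1-\alpha)k/M$ and $(1-\alpha)(M-k)/M$ automatically reproduce the boundary cases $k=0$ and $k=M$.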
To connect the embedded chain $X_n$ with process $X(t)$, define the counting process
\begin{equation} \label{eq:m-l:count}
N_{\beta}(t) = \max\{ n \in \mathbb{N} : S_n \le t \}
\end{equation} that gives the number of events up to a finite time horizon $t$. This process is also called a fractional Poisson process. Then we have
\begin{equation}\label{eq:subordinator}
X(t) = X_{N_{\beta}(t)} = X_n 1\!\!1\{ S_n \le t < S_{n+1} \},
\end{equation} i.e. the state of the process at time $t$ is the same as that of the embedded chain after the last event before time $t$ occurred.
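The subordination in \eqref{eq:subordinator} translates directly into an event-driven simulation: draw a Mittag-Leffler time, advance the clock, and update the embedded chain. A self-contained sketch follows (illustrative only; function names are ours, and the waiting-time sampler uses a standard two-uniform inverse-transform formula):

```python
import math
import random

def ml_time(beta, rng):
    # Mittag-Leffler(beta) waiting time via the two-uniform inverse transform
    u = 1.0 - rng.random()
    v = 1.0 - rng.random()
    if beta == 1.0:
        return -math.log(u)
    w = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
         - math.cos(beta * math.pi))
    return -math.log(u) * max(w, 0.0) ** (1.0 / beta)

def simulate_links(N, beta, t_max, x0, alpha=0.0, seed=None):
    """Event-driven simulation of the number of active links X(t) of the
    non-Markovian RLAD; returns the state at time t_max."""
    rng = random.Random(seed)
    M = N * (N - 1) // 2
    t, x = 0.0, x0
    while True:
        t += ml_time(beta, rng)
        if t > t_max:
            return x
        if rng.random() < alpha:
            continue                      # delayed event: state unchanged
        if rng.random() < x / M:
            x -= 1                        # chosen link was active: delete it
        else:
            x += 1                        # chosen link was absent: activate it
```

For $\beta=1$ this reduces to the Markovian RLAD, and long runs should fluctuate around $M/2$ active links.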
\subsection{Semi-Markov Master Equation}
All information about $X(t)$ is encoded in the pairs $\{(X_n, T_n)\}_{n \ge 1}$ which are a discrete-time Markov renewal process, satisfying
\begin{align}\label{eq:mrp}
\bP\{ X_{n+1} = j, & T_{n+1} \le t | (X_0, S_0), \ldots, (X_n =i, S_n)\} \notag\\
&= \bP\{ X_{n+1} = j, T_{n+1}\le t | X_n = i \}.
\end{align} $X(\cdot)$ is then a semi-Markov process subordinated to $N_\beta (t)$ \cite{raberto2011graphs} and satisfies the forward equations
\begin{align}
p_{i,j}(t) \!= \overline F^{(\beta)}_T(t)\delta_{ij}
\!\!+\! \sum_{\ell \in \mathcal{S}} q_{\ell, j} \!\! \int_0^t \!\! p_{i,\ell}(u)f^{(\beta)}_{T}(t-u)\,du.
\label{eq:master}
\end{align}
\textcolor{black}{Incidentally, a semi-Markov process is Markovian if and only if the distribution of $\{T_n\}_{n\geq 1}$ is exponential \cite{cinlar}}. Above we introduced $p_{i,j}(t) = \bP\{X(t) = j | X(0) = i\}$, the tail (complementary cumulative distribution function) $\overline F^{(\beta)}_T(t) = 1-F^{(\beta)}_T(t)$, and the Mittag-Leffler density $f^{(\beta)}_{T} (t)$ of order $\beta$. These equations are proved by conditioning on the time of the last event before time $t$.
By taking Laplace transforms in \eqref{eq:master}, and using the known Laplace transform
of the Mittag-Leffler survival function and probability density function
\begin{equation}\label{eq:ml-lap}
\mathcal{L}\left( \overline F^{(\beta)}_T(t) ; s \right) = \frac{s^{\beta-1}}{1 + s^{\beta}} \text{ and }
\mathcal{L}\left( f^{(\beta)}_T(t) ; s \right) = \frac{1}{1 + s^{\beta}},
\end{equation}
followed by some straightforward algebra, the evolution equations for $p_{i,j}(t)$ become (\textcolor{black}{see Appendix A}),
\begin{align}\label{eq:final0}
&\frac{d^{\beta} p_{i,j}(t)}{ d\, t ^{\beta}} = -(1-\alpha)\ p_{i,j}(t) \\
&\phantom{xxx}+ (1-\alpha)\left(\frac{M -j +1}{M} p_{i, j-1}(t) + \frac{j+1}{M}p_{i,j+1}(t)\right).\notag
\end{align}
Similarly, the equations of the boundary terms are
\begin{equation}\label{eq:final1}
\frac{d^{\beta} p_{i,0}(t)}{ d\, t ^{\beta}} = (1-\alpha)\left(- p_{i,0}(t)+ \frac{1}{M}p_{i,1}(t)\right) \quad
\end{equation}
\begin{equation}\label{eq:final2}
\frac{d^{\beta} p_{i,M}(t)}{ d\, t ^{\beta}} = (1-\alpha)\left(- p_{i,M}(t)+ \frac{1}{M}p_{i,M-1}(t)\right).
\end{equation}
Symbol $d^\beta / dt^\beta$ in \eqref{eq:final0}, \eqref{eq:final1}, \eqref{eq:final2},
denotes the $\beta$ {\em fractional Caputo derivative} \cite{frac-calc-henry} of a function $f(t)$ given by \[
\frac{d^{\beta} f(t)}{ d\, t ^{\beta}} = \frac{1}{\Gamma(1-\beta)} \int_{0}^t (t-t')^{-\beta} \frac{d\,f(t')}{dt'}\,dt'. \]
When $\beta = 1$, equations \eqref{eq:final0}, \eqref{eq:final1}, \eqref{eq:final2}
reduce (as expected) to the standard Kolmogorov equations for the Markovian RLAD \cite{kiss2012modelling}.
These equations also explain analytically why $\alpha$ is called the delay parameter.
When considering the scaled Mittag-Leffler$_\gamma(\beta)$ times, equation \eqref{eq:final0} becomes
\begin{align}\label{eq:final3}
&\frac{d^{\beta} p^{(\gamma)}_{i,j}(t)}{ d\, t ^{\beta}} = -\gamma^{-\beta}(1-\alpha)\ p^{(\gamma)}_{i,j}(t) \\
&\phantom{xx}+ \gamma^{-\beta}(1-\alpha)\left(\frac{M -j +1}{M} p^{(\gamma)}_{i, j-1}(t) +\frac{j+1}{M}p^{(\gamma)}_{i,j+1}(t)\right)\notag
\end{align}
and similarly for the boundary equations. Specifically we see, as in the Markovian case, that a scaled
sojourn time distribution results in a (fractional) scalar multiple of the forward equations.
\subsection{Exact solution}
Equation \eqref{eq:master} gives an analytical way to obtain the fractional equation for the evolution
of the transition probabilities, but it is not very useful for computational purposes.
Instead, it is fruitful to find the solution of the system of equations \eqref{eq:final0}, \eqref{eq:final1},
\eqref{eq:final2} by a simple conditioning argument on the values of $N_{\beta}(t)$ (\textcolor{black}{see Appendix B})
\begin{equation} \label{eq:anal:istvan}
p_{i,j}(t) = \overline F^{(\beta)}_T(t)\delta_{ij} + \sum_{n=1}^\infty q^{(n)}_{i,j} \bP\{N_\beta(t) = n\},
\end{equation}
where $q^{(n)}_{i,j}$ are the $n$-step transitions of the embedded discrete Markov chain, namely the entries
of the $n$-th power
of the transition matrix $Q$ defined by equations \eqref{eq:trans1}, \eqref{eq:trans3} and \eqref{eq:trans2}.
The distribution of the fractional Poisson process has a simple expression
generalising the Poisson distribution \cite{scalas2004}, namely
\begin{equation}
\label{eq:m-l:number}
\mathbb{P}\{N_\beta (t) = n\}= \frac{t^{\beta n}}{n!} E_\beta^{(n)} (-t^\beta),
\end{equation}
where $E_\beta^{(n)} (-t^\beta)$ denotes the $n$-th derivative of $E_\beta (z)$ computed for $z = -t^\beta$.
Equation \eqref{eq:anal:istvan} can also be verified to satisfy \eqref{eq:final0} using Laplace transforms.
(\textcolor{black}{see Appendix B}).
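Equation \eqref{eq:anal:istvan} is straightforward to evaluate once the distribution of $N_\beta(t)$ is available. In the sketch below (illustrative; names are ours) that distribution is estimated by Monte Carlo over the Mittag-Leffler renewal process and combined with the powers of $Q$:

```python
import math
import random
from collections import Counter

def ml_time(beta, rng):
    # Mittag-Leffler(beta) time via the two-uniform inverse transform
    u = 1.0 - rng.random()
    v = 1.0 - rng.random()
    if beta == 1.0:
        return -math.log(u)
    w = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
         - math.cos(beta * math.pi))
    return -math.log(u) * max(w, 0.0) ** (1.0 / beta)

def count_events(beta, t, rng):
    # one realisation of N_beta(t): number of renewals up to time t
    n, s = 0, 0.0
    while True:
        s += ml_time(beta, rng)
        if s > t:
            return n
        n += 1

def transition_probs(N, i, t, beta, alpha=0.0, samples=20000, seed=0):
    """Estimate p_{i,.}(t): average row i of Q**n over the Monte Carlo
    distribution of N_beta(t), as in the conditioning formula above."""
    rng = random.Random(seed)
    M = N * (N - 1) // 2
    counts = Counter(count_events(beta, t, rng) for _ in range(samples))
    Q = [[0.0] * (M + 1) for _ in range(M + 1)]
    for k in range(M + 1):
        Q[k][k] = alpha
        if k > 0:
            Q[k][k - 1] = (1 - alpha) * k / M
        if k < M:
            Q[k][k + 1] = (1 - alpha) * (M - k) / M
    p = [0.0] * (M + 1)
    row = [0.0] * (M + 1)
    row[i] = 1.0                                  # row i of Q**0
    for n in range(max(counts) + 1):
        w = counts.get(n, 0) / samples
        for j in range(M + 1):
            p[j] += w * row[j]
        row = [sum(row[k] * Q[k][j] for k in range(M + 1))
               for j in range(M + 1)]
    return p
```

The $n=0$ term automatically carries weight $\bP\{N_\beta(t)=0\}=\overline F^{(\beta)}_T(t)$, matching the first term of \eqref{eq:anal:istvan}.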
\section{Results and applications}
\textcolor{black}{All simulations are event driven, both for dynamic networks and when these are coupled with epidemic dynamics. Waiting times for all the possible events are generated from appropriate distributions. Hence, the next change or update is always determined by the smallest waiting time, and the event corresponding to it is executed. This is then followed by the necessary update of the waiting times of the events affected by the most recent change. In reference \cite{boguna}, readers can find an alternative efficient simulation method which effectively extends the ideas of the Gillespie algorithm from the Markovian to the non-Markovian case.}
\subsection{Explicit calculation of $p_{i,j} (t)$}
The probabilities
involving the counting process $N_{\beta}(t)$ have an explicit integral representation \cite{politi2011full} and for numerical purposes they
can be approximated well either with Monte Carlo simulations
or with a numerical integration scheme.
Once the transition probabilities of the embedded Markov
chain are known, every term is known in \eqref{eq:anal:istvan} and it can be used to exactly calculate the non-equilibrium probabilities $p_{i,j}(t)$ (\textcolor{black}{see Appendix B}).
The excellent agreement between theory and simulation is shown in Fig. \ref{fig0}.
\begin{figure}
\caption{(Color online) \textcolor{black}{Comparison between Monte Carlo simulations and theory.} The discrete markers are the estimated probabilities $p_{190,j}(250)$, averaged over 10000 \textcolor{black}{Monte Carlo} simulations starting from a fully connected network with $N=20$ nodes and for $\beta = 1, 0.7, 0.5$, as we move from left to right. \textcolor{black}{The Monte Carlo simulations were performed using an event-driven algorithm taking non-Markovianity into account.}
The solid curves are the theoretical predictions as dictated by equation \eqref{eq:anal:istvan}.}
\label{fig0}
\end{figure}
An immediate application is to use equation \eqref{eq:anal:istvan} and compute theoretically and numerically
the expected number of active links in the network at a given time. Starting from any initial number of active links
$i_0$, use \eqref{eq:anal:istvan} to compute
\begin{align}\label{eq:expQ}
\bE^{i_0}(X(t)) &= \mathbb{E}^{i_0}\! \big(\textbf{e}_{i_0}^T Q^{N_{\beta}(t)}\textbf{v}_{0,M}\big)\notag \\
&= \textbf{e}_{i_0}^T\mathbb{E}^{i_0}\!\big( Q^{N_{\beta}(t)}\big)\textbf{v}_{0,M}.
\end{align}
In the equation above
$\textbf{e}_{i_0}$ is the standard basis vector with $1$ at the $i_0$-th coordinate and
$\textbf{v}_{0,M} = (0,1,2, \ldots, M)^T$. Note that in the particular case where $Q$ diagonalizes,
the analytical expression for the expectation \eqref{eq:expQ}
is merely a linear combination of different values
of the probability generating function of $N_{\beta}(t)$, $G^{(\beta)}(s;t)$, given by (see \cite{laskin2003fpp})
$
G^{(\beta)}(z;t) = \mathbb{E}(z^{N_{\beta}(t)}) = E_{\beta}((z-1)t^{\beta}).
$
Note that when $Q$ diagonalises, there is no need for
simulating a large number of realisations to estimate the expectation; a fast numerical integration scheme
is sufficient.
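To illustrate the last point, note that the mean of the delayed Ehrenfest chain obeys the one-step recursion $\mathbb{E}(X_{n+1}|X_n) = \lambda X_n + (1-\alpha)$ with $\lambda = 1 - 2(1-\alpha)/M$, so that $\mathbb{E}^{i_0}(X(t)) = M/2 + (i_0 - M/2)\, G^{(\beta)}(\lambda; t)$. The sketch below is our own illustration (the mean recursion is a standard Ehrenfest computation not spelled out in the text, and function names are ours); it estimates the generating function by Monte Carlo:

```python
import math
import random

def ml_time(beta, rng):
    # Mittag-Leffler(beta) time via the two-uniform inverse transform
    u = 1.0 - rng.random()
    v = 1.0 - rng.random()
    if beta == 1.0:
        return -math.log(u)
    w = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
         - math.cos(beta * math.pi))
    return -math.log(u) * max(w, 0.0) ** (1.0 / beta)

def count_events(beta, t, rng):
    # one realisation of N_beta(t)
    n, s = 0, 0.0
    while True:
        s += ml_time(beta, rng)
        if s > t:
            return n
        n += 1

def expected_links(N, i0, t, beta, alpha=0.0, samples=1000, seed=0):
    """E[X(t)] = M/2 + (i0 - M/2) * E[lam**N_beta(t)]; the probability
    generating function E[z**N_beta(t)] of the fractional Poisson
    process is estimated here by Monte Carlo."""
    rng = random.Random(seed)
    M = N * (N - 1) // 2
    lam = 1.0 - 2.0 * (1.0 - alpha) / M
    g = sum(lam ** count_events(beta, t, rng)
            for _ in range(samples)) / samples
    return M / 2 + (i0 - M / 2) * g
```

For large $t$ the generating-function term vanishes and the mean relaxes towards $M/2$.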
\subsection{Approximation of Pareto-distributed inter-event times}
We now compare the behaviour between two RLAD networks; one with
Mittag-Leffler times and one where we
alter the waiting time distribution to a generalised Pareto($\delta$) with density
\begin{equation}
f_T(t) = \frac{\delta - 1}{(1 + t)^{\delta}}, \quad t >0.
\end{equation}
The exponent is set to $\delta = 1 + \beta$ so that the tails of the Mittag-Leffler and the
Pareto distributions have the same behaviour at infinity.
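For reference, generalised Pareto($\delta$) times can be sampled by inverse transform from the c.d.f. $F_T(t) = 1-(1+t)^{1-\delta}$, which follows from the density above. A short illustrative sketch (function names are ours):

```python
import random

def pareto_time(delta, rng=random):
    """Generalised Pareto(delta) waiting time with density
    (delta - 1) / (1 + t)**delta, sampled by inverse transform of
    F(t) = 1 - (1 + t)**(1 - delta)."""
    u = 1.0 - rng.random()               # uniform on (0, 1]: avoids 0**negative
    return u ** (-1.0 / (delta - 1.0)) - 1.0

def empirical_survival(samples, t):
    """Fraction of samples exceeding t (empirical c.c.d.f.)."""
    return sum(1 for x in samples if x > t) / len(samples)
```

Setting $\delta = 1+\beta$ and tuning the Mittag-Leffler scale $\gamma$ so that the two survival functions match over the time horizon of interest reproduces the comparison described below.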
In fact we compare the two networks over three layers of increasing complexity.
First, in Fig.~\ref{fig:1}(b,c) we plot the probability mass function for the number of singly-counted links
at a pre-specified time horizon $T=2000$, averaged over
5000 simulations. As a point of reference, output from the Markovian RLAD is shown in Fig.~\ref{fig:1}(a).
The theoretical curve at equilibrium is the large-time limit of equation \eqref{eq:anal:istvan} and it is the mass function of a binomial distribution
with $M$ trials and success probability $1/2$. Second,
in Fig.~\ref{fig:1}(d) we plot $\bE(X(t))$ as a function of time up to time
$T=2000$. The two curves could also be computed based on equation \eqref{eq:expQ}.
The excellent agreement is
achieved by finding a suitable scaling $\gamma$ so that the c.c.d.f.~(survival functions)
of the two distributions match well, at least up to the pre-specified time horizon that we want to study.
Further details can be found in \textcolor{black}{Appendix C}. The matching is good for
$\beta < 0.9$ while for larger $\beta$, this idea can be used to
study the stochastic dominance between the two coupled networks and offer rigorous bounds.
\subsection{Markovian SIS on non-Markovian RLAD}
Finally, we compare the two network \textcolor{black}{dynamics} indirectly, when we allow a Markovian epidemic to run
while the networks evolve. As discussed above, human activity tends to be bursty and non-Markovian \cite{barabasi2010bursts}.
During an epidemic, individuals become wary of the risk posed by it and
one way to avoid infection is by limiting or reducing their number of contacts.
This justifies the deletion of links as time evolves. On the other hand, close contacts
cannot realistically be removed and some level of communication and social cohesion must be maintained.
Such behaviour in activation-deletion is not necessarily Markovian in nature, thus alternative
non-Markovian dynamic network models are necessary.
Nodes in the network represent individuals from
a population and links describe the contact patterns amongst these.
Each individual can be either infected ($I$) or susceptible ($S$).
\textcolor{black}{An infected individual remains infected for exponentially distributed periods of time $T_H$ i.e. $T_H \sim \mathrm{Exp}(1/\tau_H)$, where $\tau_H$ is the average time in which infectious individuals are healed. Similarly, infection occurs at the points of a Poisson process with time to infection $T_I$ exponentially distributed, i.e. $T_I \sim \mathrm{Exp}(1/\tau_I)$, where $\tau_I$ is the average time in which an infection spreads across a link connecting a susceptible and an infected node. In this framework, both network and epidemic dynamics can be considered in the context of event-driven simulations, where the timing of the next state change is always determined by the smallest waiting time and the precise event corresponding to it.} The epidemic does not interfere with the network dynamics, however its propagation
is intertwined with the background dynamic network topology.
Initially, before the infection starts spreading, we assume that all links are present,
in order to avoid early stochastic extinction.
The simulations in Fig.~\ref{fig:1} show the prevalence (proportion of infected individuals, Fig.~\ref{fig:1}(e))
on a Mittag-Leffler$_\gamma(\beta)$ RLAD network (solid lines) and a direct comparison (square markers)
with the Pareto($\delta$). Again, we use the same sets of $\beta, \gamma, \delta$ as before and
we emphasise the excellent agreement between the two.
Incidentally, as expected, the non-Markovian network dynamics create a striking effect by
slowing down the network dynamics and thus effectively
blocking the attainment of statistical equilibrium in a realistic time horizon (Fig.~\ref{fig0}, \ref{fig:1}(a,b,c)).
This leads to a heightened level of infectiousness in the population (Fig.~\ref{fig:1}(e)) and
highlights the importance of quick reactions. Naturally the statistical equilibrium will
be reached after a much longer time period, but the delayed curves
can now be theoretically computed or approximated.
\begin{figure*}\label{fig:1}
\end{figure*}
One way to explain this delayed convergence to equilibrium is to look at the mixing time of the embedded chain in the total variation distance.
To be more specific, the number of active links is a continuous time, irreducible birth-death chain with a unique binomial invariant distribution $\pi$, independent of the delay parameter $\alpha$, given by
\begin{equation}\label{eq:inv}
\pi_k= \lim_{t \to \infty}\bP\{ X(t) = k|X(0)=i\} = { M \choose k}2^{-M}, \quad k \in \mathcal{S}.
\end{equation} \textcolor{black}{This can also be deduced from the fact that in the aperiodic $\alpha$-delayed case, at equilibrium, individual graphs are uniformly distributed. That is because the chain on the set of distinct graphs has a doubly stochastic transition matrix. With this in mind, the degree distribution of a single node in a network chosen uniformly at random can be immediately computed as follows. Let $v_1$ be a selected node in $G$ and define $G_{v_1}$ as the subgraph of $G$ in which $v_1$ and all its incident links are deleted. $G_{v_1}$ is now a graph on the set $\{v_2, \ldots, v_N\}$ that has at most $K = {N-1 \choose 2} = M - N+1$ links. Let $\text{deg}(v_1)$ denote the degree of $v_1$. Then,
\begin{align}
\bP\{ \text{deg}(v_1) = \ell \}
&= \!\!\!\!\sum_{\text{graphs } G: \text{ deg }(v_1) = \ell } \!\!\!\!\!\!\!2^{-M}\notag\\
&= 2^{-M} \text{card}\{ \text{graphs } G: \text{ deg }(v_1) = \ell \}\notag\\
&=2^{-M}{ N-1 \choose \ell} \text{card}\{ \text{graphs }G_{v_1} \}\notag \\
&=2^{-K-N+1}{ N-1 \choose \ell}\sum_{i=0}^K{K \choose i}\notag\\
&= { N-1 \choose \ell}2^{-N+1}. \label{eq:deg:inv}
\end{align}
The third equality is a counting argument: the number of graphs in which $v_1$ has exactly $\ell$ incident links is obtained by first selecting where those links go, and then constructing the subgraph $G_{v_1}$. This can be understood heuristically as follows. Select a link and focus on one of the two nodes. If this node has $h$ active links, this number will either go to $h+1$ with probability $(N-1-h)/(N-1)$ or to $h-1$ with probability $h/(N-1)$. This leads to an invariant binomial degree distribution \eqref{eq:deg:inv},
with an average degree of $(N-1)/2$, which amounts to $N(N-1)/4$ active edges in the network in line with the average of the link distribution from \eqref{eq:inv}.} The chain mixing time $t_{\text{mix}}(\varepsilon)$ is the minimal time so that the total variation distance between the measures $\pi$ and $p(t)$ is smaller than some tolerance $\varepsilon$, i.e.
\begin{equation}
\| p(t_{\text{mix}}(\varepsilon))-\pi \|_{\text{TV}} = \sup_{k \in \mathcal{S}}|p_{k}(t_{\text{mix}}(\varepsilon)) - \pi_k| < \varepsilon.
\end{equation} For the Markovian RLAD with $\alpha > 0$, use Theorem 1.1 and Example 4.3 in \cite{Che-Sal-13}
to see
\begin{equation}
t_{\text{mix}}(\varepsilon) \le C\varepsilon^{-2} M^2\log M.
\end{equation} Thus, the Markov chain approximates its equilibrium relatively well by time $C\varepsilon^{-2} M^2\log M$. In particular this implies that, on average, the Markovian RLAD continuous-time chain needs $O(M^2\log M)$ time, and therefore, by the law of large numbers, this order of events until it is well mixed. In fact this bound also holds for the embedded discrete chain. This $n = M^2\log M$ should be considered a \emph{necessary} lower bound on the number of steps needed for the sample average of the probabilities of the embedded chain to approximate $\pi$. Therefore, in order to have an acceptable level of accuracy for the embedded chain when the RLAD has Mittag-Leffler waiting times, using \eqref{eq:m-l:number}, we need time of the higher polynomial order $O(M^{2/\beta}(\log M)^{1/\beta})$ to guarantee on average the same number of events, and thus a near-equilibrium behaviour for the embedded chain. Study of the slow-down phenomenon for non-Markovian dynamic networks, using the total variation distance, can be found for example in \cite{S-W-P-2015}.
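The binomial form of the invariant distribution \eqref{eq:inv} can be checked directly against the embedded transition probabilities \eqref{eq:trans1}, \eqref{eq:trans3}, \eqref{eq:trans2}, since $\pi Q = \pi$ holds exactly. A minimal numerical check (illustrative; names are ours):

```python
import math

def binom_pmf(M, k):
    # Binomial(M, 1/2) probability mass function
    return math.comb(M, k) * 0.5 ** M

def is_invariant(N, alpha=0.3, tol=1e-12):
    """Check that Binomial(M, 1/2) is stationary for the alpha-delayed
    Ehrenfest chain on the number of links: (pi Q)_j == pi_j for all j."""
    M = N * (N - 1) // 2
    pi = [binom_pmf(M, k) for k in range(M + 1)]
    for j in range(M + 1):
        s = pi[j] * alpha
        if j > 0:
            s += pi[j - 1] * (1 - alpha) * (M - (j - 1)) / M  # up-move into j
        if j < M:
            s += pi[j + 1] * (1 - alpha) * (j + 1) / M        # down-move into j
        if abs(s - pi[j]) > tol:
            return False
    return True
```

The cancellation is exact: $\binom{M}{j-1}\frac{M-j+1}{M} + \binom{M}{j+1}\frac{j+1}{M} = \binom{M-1}{j-1} + \binom{M-1}{j} = \binom{M}{j}$, independently of $\alpha$.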
\section{Discussion and conclusions}
\subsection{Generalization}
\textcolor{black}{Equation \eqref{eq:anal:istvan} can be generalized to any counting process and any discrete Markov chain and, as a consequence, to any embedded Markovian graph dynamics \cite{raberto2011graphs}. To be more specific, let $q_{i,j}$ denote the one-step transition probability from state $i$ to state $j$ for a discrete Markov chain $X_n$ and let $N(t)$ be a generic counting renewal process. Then, for the process \begin{equation} \label{generalisation} X(t) = X_{N (t)} = X_n 1\!\!1\{ S_n \le t < S_{n+1} \}, \end{equation}
the probabilities $p_{i,j} (t) = \mathbb{P} \{ X(t) =j | X(0) = i \}$ are given by \begin{equation} \label{generalisation1} p_{i,j}(t) = \overline F_T(t)\delta_{ij} + \sum_{n=1}^\infty q^{(n)}_{i,j} \bP\{N(t) = n\}, \end{equation} where the symbols have the same meaning as in \eqref{eq:anal:istvan} and $\{T_i \}_{i=1}^\infty$ is a sequence of i.i.d. positive random variables with the usual meaning of inter-event times with arbitrary distribution, not necessarily with fat tails and infinite mean. The reader is invited to follow the first proof of Appendix B by replacing $N_\beta (t)$ with a generic counting renewal process. This will convince the reader of the wide generality of this result. A heuristic argument to justify \eqref{eq:anal:istvan} and \eqref{generalisation1} runs as follows. In the time interval $(0,t)$, $n \geq 0$ events may have occurred. In the case $n=0$, at time $t$ the process is still in state $i$ and $\mathbb{P}\{N(t) =0\} = \mathbb{P}\{T>t\} = \overline F_T(t)$. If $n \geq 1$, the probability of being in state $j$ after $n$ events is given by $q^{(n)}_{i,j}$. Given the independence between the renewal process and the Markov chain, the probability of being in state $j$ at time $t$ and $n$ transitions occurring in the time interval $(0,t)$ is $q^{(n)}_{i,j} \bP\{N(t) = n\}$. Now, all these events are exhaustive and mutually exclusive. Then, total probability and infinite additivity imply that $p_{i,j} (t)$ is given by \eqref{generalisation1}. These considerations suggest further generalizations taking into account possible dependence within the couple $\{(X_n,T_n)\}_{n \geq 1}$ as well as serial dependence or state dependence of inter-event times.}
\subsection{Example: A simple probabilistic model for relaxation in dielectrics}
\textcolor{black}{In order to illustrate the generalization discussed above with an example, we consider relaxation phenomena \cite{weron}. Probabilistic modelling of relaxation assumes that a physical system (e.g. a molecule) can exist in two states $A$ and $B$. We further assume that state $A$ is transient and state $B$ absorbing, so that the deterministic embedded chain has the following transition probabilities $q_{A,A} = 0$, $q_{A,B} = 1$, $q_{B,A}=0$, and $q_{B,B} = 1$. This means that if the system is prepared in state $A$, it will jump to state $B$ at the first step and it will stay there forever. Now suppose that the inter-event time $T$ is random and follows an exponential distribution with rate $\lambda =1$ for the sake of simplicity. Based on equation \eqref{generalisation1}, we immediately have $p_{A,A} (t) = \overline F_T(t)=\exp(-t)$. Therefore, the probability of finding the system in the initial state decays exponentially towards zero. This relaxation function is the solution of \begin{equation} \frac{d}{dt} p_{A,A} (t) = - p_{A,A} (t), \; \; p_{A,A} (0) =1. \end{equation} The response function is defined as $\xi_D (t) = -d p_{A,A} (t)/dt$ and its Laplace transform is $1/(1+s)$. For $s=-i \omega$ this is the Debye model \cite{weron}. If inter-event times follow the Mittag-Leffler distribution, we get $p_{A,A} (t) = \overline F_T(t)=E_\beta (-t^\beta)$. This is the solution of \cite{scalas2004} \begin{equation} \frac{d^\beta}{dt^\beta} p_{A,A} (t) = - p_{A,A} (t), \; \; p_{A,A} (0) =1. \end{equation} In this case, the Laplace transform of the response function $\xi_{CC} (t) = -d p_{A,A} (t)/dt$ is $1/(1+s^\beta)$ and for $s=-i \omega$, we get the Cole-Cole model \cite{colecole1,colecole2,weron}.}
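The relaxation functions above can be evaluated directly from the power series $E_\beta(z)=\sum_{k\ge 0} z^k/\Gamma(\beta k+1)$, which converges for all $z$ but whose truncation is numerically adequate only for moderate $|z|$. The sketch below checks two known special cases: $E_1(-t)=\mathrm{e}^{-t}$ (Debye) and $E_{1/2}(-x)=\mathrm{e}^{x^2}\operatorname{erfc}(x)$ with $x=t^{1/2}$ (the $\beta=1/2$ Cole-Cole relaxation); the truncation at 100 terms is an arbitrary choice:

```python
import math

def mittag_leffler(z, beta, n_terms=100):
    """Truncated power series E_beta(z) = sum_k z^k / Gamma(beta*k + 1).
    Converges for every z, but the truncation is only adequate for moderate |z|."""
    return sum(z**k / math.gamma(beta * k + 1.0) for k in range(n_terms))

# Debye relaxation (beta = 1): p_AA(t) = E_1(-t) must equal exp(-t).
for t in (0.1, 0.5, 1.0, 2.0):
    assert abs(mittag_leffler(-t, 1.0) - math.exp(-t)) < 1e-10

# Cole-Cole relaxation for beta = 1/2: p_AA(t) = E_{1/2}(-t^{1/2}), and the
# known identity E_{1/2}(-x) = exp(x^2) * erfc(x) provides an exact reference.
for t in (0.25, 1.0):
    x = math.sqrt(t)
    assert abs(mittag_leffler(-x, 0.5) - math.exp(x * x) * math.erfc(x)) < 1e-8
```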
\subsection{Final considerations}
In conclusion, we provide an exactly solvable non-Markovian dynamic network model. The RLAD is particularly attractive because of the analytical and numerical tractability afforded by fractional calculus. We are able to explicitly use the master equation formalism and
analytically derive the distribution of the number of links $X(t)$ in the network at arbitrary times, and consequently compute $\bE(X(t))$. We highlight an important connection and a possible avenue for approximating non-Markovian problems using fractional calculus, by coupling the model with a Pareto network and showing agreement with the tractable model. \textcolor{black}{Moreover, we discuss how our result can be extended to a generic counting renewal process.}
\begin{acknowledgments} \textcolor{black}{This paper was partially supported by an SDF start-up research grant provided by the University of Sussex.} \end{acknowledgments}
\section*{Appendix}
In this Appendix, we cover the rigorous proofs of the equations shown in the main text and further clarify some notions. Some details about the procedure used to couple the Pareto distribution with the Mittag-Leffler are also highlighted.
\subsection*{A. Derivation of fractional equations.}
\noindent We want to show that equations \eqref{eq:final0}, \eqref{eq:final1} and \eqref{eq:final2} in the main text are obtained from \eqref{eq:master}.
The analysis proceeds by way of Laplace transforms. They are defined as \begin{equation} \mathcal{L} (g(t);s) = \int_0^\infty dt \, g(t) \, \mathrm{e}^{-st} \end{equation} for a suitable function $g(t)$. In the case of the Mittag-Leffler distribution defined in the main text, we have
\begin{equation}\label{eq:ml-lap}
\mathcal{L}\left( \overline F^{(\beta)}_T(t) ; s \right) = \frac{s^{\beta-1}}{1 + s^{\beta}} \text{ and }
\mathcal{L}\left( f^{(\beta)}_T(t) ; s \right) = \frac{1}{1 + s^{\beta}}.
\end{equation}
For the computation that follows we use the symbol $\tilde g(s)$ to denote the Laplace transform $\mathcal{L}(g;s)$ of any function $g$. Taking the Laplace transform of \eqref{eq:master} and using equations (4), (5), (6) in the main text for our particular example, we have for $1 \le j \le M-1$
\begin{align} \label{eq:Lmaster:bulk}
\tilde p_{i,j}(s) &= \widetilde{ \overline F}^{(\beta)}_T(s)\delta_{ij} + \tilde f^{(\beta)}_{T}(s)\alpha \tilde p_{i,j}(s) \\
&+ \tilde f^{(\beta)}_{T}(s)(1-\alpha)\notag \\
&\times \Big( \frac{M -j +1}{M} \tilde p_{i, j-1}(s) + \frac{j+1}{M}\tilde p_{i,j+1}(s) \Big).\notag
\end{align} The boundary cases $j =0, j=M$ have Laplace transforms
\begin{equation}
\tilde p_{i,0}(s) = \widetilde{ \overline F}^{(\beta)}_T(s)\delta_{i0}
+ \tilde f^{(\beta)}_{T}(s)\left(\frac{1-\alpha}{M}\tilde p_{i,1}(s) + \alpha \tilde p_{i,0}(s)\right),
\label{eq:Lmaster:j0}
\end{equation}
\begin{align}
\tilde p_{i,M}(s) &= \widetilde{ \overline F}^{(\beta)}_T(s)\delta_{iM} \notag \\
&+ \tilde f^{(\beta)}_{T}(s)\left(\frac{1-\alpha}{M}\tilde p_{i,M-1}(s) + \alpha \tilde p_{i,M}(s)\right)
\label{eq:Lmaster:jM}
\end{align}
respectively. We finish the computation starting from \eqref{eq:Lmaster:bulk}, in the case where $ 1 \le j \le M-1$.
The remaining cases
follow similarly. Multiply both sides of \eqref{eq:Lmaster:bulk} by $s$ and then
subtract $p_{i,j}(0)=\delta_{ij}$ from both sides. Then, using \eqref{eq:ml-lap}, equation \eqref{eq:Lmaster:bulk} becomes
\begin{align*}
\mathcal{L}\Big(& \frac{d p_{i,j}(t)}{dt}; s \Big) = s\widetilde{ \overline F}^{(\beta)}_T(s)\delta_{ij} -p_{i,j}(0)
+ \frac{s}{1+ s^{\beta}}\alpha \tilde p_{i,j}(s)\\
&\phantom{x}
+ \frac{s(1-\alpha)}{1+ s^{\beta}}\Big( \frac{M -j +1}{M} \tilde p_{i, j-1}(s) + \frac{j+1}{M}\tilde p_{i,j+1}(s) \Big)\\
&= \frac{s^{\beta}}{1+s^{\beta}}\delta_{ij} -\delta_{ij} + \frac{s}{1+ s^{\beta}}\alpha \tilde p_{i,j}(s)\\
&\phantom{x}
+ \frac{s(1-\alpha)}{1+ s^{\beta}}\Big( \frac{M -j +1}{M} \tilde p_{i, j-1}(s) + \frac{j+1}{M}\tilde p_{i,j+1}(s) \Big),
\end{align*}
thus, after some algebraic manipulations we have
\begin{align} \label{eq:almost}
&\frac{1+s^{\beta}}{s}\mathcal{L}\Big( \frac{d p_{i,j}(t)}{dt}; s \Big) = \frac{-\delta_{ij}}{s} + \alpha \tilde p_{i,j}(s)\\
&\phantom{x}+ (1-\alpha)\!\left(\frac{M -j +1}{M} \tilde p_{i, j-1}(s) + \frac{j+1}{M}\tilde p_{i,j+1}(s)\right).\notag
\end{align}
Focus for the moment on the factor $s^{-1}(1 +s^{\beta})$. Its inverse Laplace transform is
\begin{equation}\label{eq:caputo}
\mathcal{L}^{-1}\big( s^{-1}(1 +s^{\beta}); t \big) = \frac{t^{-\beta}}{\Gamma(1-\beta)} + 1 = \Phi_{\beta}(t) + 1.
\end{equation}
The kernel $\Phi_{\beta} (t)$ is used in fractional calculus to define the \emph{Caputo fractional derivative of order $\beta$} (see reference [19] in the main text)
of a function $f(t)$, given by
\[
\frac{d^{\beta} f(t)}{ d\, t ^{\beta}} = \int_{0}^t \Phi_{\beta}(t-t')\frac{d\,f(t')}{dt'}\,dt'.
\]
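As a numerical illustration of this definition, the convolution can be evaluated by a midpoint rule for $f(t)=t$, whose Caputo derivative of order $\beta$ is known in closed form, $t^{1-\beta}/\Gamma(2-\beta)$. The grid size below is an arbitrary accuracy/cost trade-off, and the weakly singular kernel limits the attainable accuracy:

```python
import math

def caputo_derivative(df, t, beta, n=100_000):
    """Midpoint-rule evaluation of the Caputo derivative
    int_0^t Phi_beta(t - t') f'(t') dt' with Phi_beta(s) = s^{-beta}/Gamma(1-beta).
    The kernel is weakly singular at t' = t, so convergence in n is slow."""
    h = t / n
    c = 1.0 / math.gamma(1.0 - beta)
    total = 0.0
    for k in range(n):
        tp = (k + 0.5) * h                         # midpoint of the k-th cell
        total += c * (t - tp) ** (-beta) * df(tp) * h
    return total

# For f(t) = t (so f'(t) = 1), the Caputo derivative of order beta is
# t^{1-beta} / Gamma(2 - beta).
beta, t = 0.5, 1.0
exact = t ** (1.0 - beta) / math.gamma(2.0 - beta)
assert abs(caputo_derivative(lambda s: 1.0, t, beta) - exact) < 1e-2
```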
Thus, use \eqref{eq:caputo} to write
the left hand side of \eqref{eq:almost} as a product of two Laplace transforms. Then take the Laplace inverse of \eqref{eq:almost} to conclude
\begin{align}\label{al:final0}
&\frac{d^{\beta} p_{i,j}(t)}{ d\, t ^{\beta}} = -(1-\alpha)\ p_{i,j}(t) \\
&\quad+ (1-\alpha)\left(\frac{M -j +1}{M} p_{i, j-1}(t) + \frac{j+1}{M}p_{i,j+1}(t)\right).\notag
\end{align}
The equations for the boundary terms, \eqref{eq:final1} and \eqref{eq:final2}, are derived similarly.
\subsection*{B. Solution to the fractional equations.}
The solution to equations (12), (13), (14) can be seen to be equation (15) in two different ways. One is the standard law of total probability, where the space is partitioned according to the number of jumps of the counting process $N_{\beta}(t)$: \begin{align*}
p_{i,j}&(t) = \bP\{X(t) = j | X(0) = i\} \\
&= \sum_{k=0}^\infty \bP\{X(t) = j, N_\beta(t) =k | X(0) = i\}\\
&= \sum_{k=0}^\infty \bP\{X(t) = j | N_\beta(t) =k , X(0) = i\} \bP\{ N_{\beta}(t) = k\}\\
&= \bP\{X(t) = j | N_\beta(t) = 0 , X(0) = i\} \bP\{ N_{\beta}(t) = 0\} \\
&\phantom{x}+ \sum_{k=1}^\infty \bP\{X(t) = j | N_\beta(t) =k , X(0) = i\} \bP\{ N_{\beta}(t) = k\}\\
&= \bP\{X(t) = j | T_1\ge t , X(0) = i\} \bP\{ T_1 \ge t \} \\
&\phantom{x}+ \sum_{k=1}^\infty \bP\{X(t) = j | N_\beta(t) =k , X(0) = i\} \bP\{ N_{\beta}(t) = k\}\\
&= \delta_{ij} \overline F^{(\beta)}_T(t)\\
&+ \sum_{k=1}^\infty \bP\{X_k = j | S_k \le t < S_{k+1} , X_0 = i\} \\
&\phantom{xxxxx}\times \bP\{ N_{\beta}(t) = k\}\\
&= \delta_{ij} \overline F^{(\beta)}_T(t)+ \sum_{k=1}^\infty \bP\{X_k = j | X_0 = i\} \bP\{ N_{\beta}(t) = k\}, \end{align*} which finally leads to \begin{equation}\label{eq:s1} p_{i,j}(t) = \delta_{ij} \overline F^{(\beta)}_T(t)+ \sum_{k=1}^\infty q^{(k)}_{ij} \bP\{ N_{\beta}(t) = k\}. \end{equation}
Equation \eqref{eq:s1} is equation \eqref{eq:anal:istvan} in the main text and, as stated there, gives the theoretical solution to the fractional equations, thanks to an explicit integral representation of $\bP\{ N_{\beta}(t) = n\}$. It is given by \begin{align*}
\bP\{ N_{\beta}(t) = n\} &= \frac{t^{\beta n}}{n!}E_{\beta}^{(n)}(-t^\beta) \\
&= \int_{0}^\infty F_{S_{\beta}}(t;u)\left( 1 - \frac{u}{n}\right) \frac{u^{n-1}}{(n-1)!}e^{-u}\,du. \end{align*} Function $F_{S_{\beta}}(t;u)$ is the c.d.f.~of a stable random variable $S_{\beta}(\nu, \gamma, \delta)$ with index $\beta$, skewness parameter $\nu=1$, scale $\gamma = (u\cos(\pi \beta/2))^{1/\beta}$ and location $\delta=0$. This integral representation was used to numerically compute the solid curve in Figure \ref{fig0} \cite{politi2011full}.
We now verify via Laplace transforms that this solution \eqref{eq:s1} indeed satisfies the fractional equations. For simplicity we set the delay parameter $\alpha=0$ and we only show it for equation \eqref{eq:final0}. We need \[ \mathcal{L}\left( \frac{d^{\beta} g}{dt^\beta}; s\right) = s^{\beta}\tilde g(s) - s^{\beta-1} g(0+), \,\,\] \[ \mathcal{L}\left( \bP\{N_{\beta}(t) = n\}; s\right) =\widetilde{\overline F}^{(\beta)}_T(s) \left(\tilde f^{(\beta)}_{T}(s)\right)^n\!\!= \frac{\widetilde{\overline F}^{(\beta)}_T(s)}{(1+s^{\beta})^n}. \] The Laplace transform of \eqref{eq:final0} reads \begin{align*} s^{\beta}&\tilde p_{i,j}(s) - s^{\beta-1} \delta_{ij} \\ &= -\tilde p_{i,j}(s) + \frac{M -j +1}{M} \tilde p_{i, j-1}(s) + \frac{j+1}{M}\tilde p_{i,j+1}(s), \end{align*} or, after an algebraic manipulation, \begin{equation}\label{eq:magic} (1+s^{\beta}) \tilde p_{i,j}(s) = s^{\beta-1} \delta_{ij} + q_{j-1,j} \tilde p_{i, j-1}(s) + q_{j+1,j}\tilde p_{i,j+1}(s). \end{equation} To verify \eqref{eq:magic}, directly take the Laplace transform in \eqref{eq:s1} to write \begin{equation} \tilde p_{i,j}(s) = \delta_{ij} \widetilde{\overline F}^{(\beta)}_T(s)+ \widetilde{\overline F}^{(\beta)}_T(s)\sum_{k=1}^\infty q^{(k)}_{ij} \left(\tilde f^{(\beta)}_{T}(s)\right)^k \end{equation} and substitute into the right-hand side of \eqref{eq:magic}, which now reads \begin{align*}
s^{\beta-1}&\delta_{ij} + q_{j-1,j} \tilde p_{i, j-1}(s) + q_{j+1,j}\tilde p_{i,j+1}(s) \\
&= s^{\beta-1} \delta_{ij} \\
&\,\,+ q_{j-1,j}\widetilde{\overline F}^{(\beta)}_T(s) \left( \delta_{i,j-1} + \sum_{k=1}^\infty q^{(k)}_{i,j-1} \left(\tilde f^{(\beta)}_{T}(s)\right)^k \right) \\
&\phantom{x}+ q_{j+1,j}\widetilde{\overline F}^{(\beta)}_T(s)\left(\delta_{i,j+1} + \sum_{k=1}^\infty q^{(k)}_{i,j+1} \left(\tilde f^{(\beta)}_{T}(s)\right)^k\right)\\
&= s ^{\beta-1}\delta_{ij} + (q_{j-1,j} \delta_{i,j-1} + q_{j+1,j}\delta_{i,j+1} )\widetilde{\overline F}^{(\beta)}_T(s)\\
&+ \widetilde{\overline F}^{(\beta)}_T\!\!(s)\sum_{k=1}^\infty (q_{j-1,j}q^{(k)}_{i,j-1} + q_{j+1,j}q^{(k)}_{i,j+1} )\left(\tilde f^{(\beta)}_{T}(s)\right)^k \\
&= s^{\beta-1} \delta_{ij} + q_{i,j}\widetilde{\overline F}^{(\beta)}_T(s) \\
&\quad+(1+s^{\beta}) \widetilde{\overline F}^{(\beta)}_T(s)\sum_{k=1}^\infty q^{(k+1)}_{i,j}\left(\tilde f^{(\beta)}_{T}(s)\right)^{k+1} \\
&= s^{\beta-1} \delta_{ij} + q_{i,j}s^{\beta-1} \tilde f^{(\beta)}_{T}(s)\\
&\quad+(1+s^{\beta}) \widetilde{\overline F}^{(\beta)}_T(s)\sum_{k=1}^\infty q^{(k+1)}_{i,j}\left(\tilde f^{(\beta)}_{T}(s)\right)^{k+1} \\
&= (1+s^{\beta}) \widetilde{\overline F}^{(\beta)}_T(s) \left[\delta_{ij}
+\sum_{k=1}^\infty q^{(k)}_{i,j}\left(\tilde f^{(\beta)}_{T}(s)\right)^{k}\right] \\
&=(1+s^{\beta}) \tilde p_{i,j}(s), \end{align*} which is the left hand side of \eqref{eq:magic}.
\subsection*{C. Stochastic coupling with the Pareto distribution.}
Now we show how the complementary cumulative distribution functions (c.c.d.f.s) of the Pareto($\delta$) distribution and of the Mittag-Leffler$_\gamma(\beta)$ distribution with the same tail exponent can be matched just by tuning the scaling factor $\gamma$.
On a double logarithmic scale, the c.c.d.f.s have a linear behavior at infinity with slope $1 -\delta= -\beta$. Our simulations have a time horizon of $T=2000$, so the scaling $\gamma$ is chosen such that the c.c.d.f.s agree well for values around and before the time horizon. The initial value of $\gamma$ to be tested for matching is the solution to the equation \[
\frac{\sin(\beta \pi)}{\pi} \frac{\Gamma(\beta)}{(t/\gamma)^\beta} = \frac{1}{t^{\delta-1}} \Longleftrightarrow \gamma = \left(\frac{\pi}{\sin(\beta \pi)\Gamma(\beta)}\right)^{1/\beta}, \] that implies the agreement of the asymptotic behavior of the survival functions. This first $\gamma$ choice will need to be adjusted, depending on our choice of time horizon, but the match can be achieved relatively well for moderate $\beta$ values ($\beta < 0.9$) (see Figure \ref{fig:match}).
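The initial matching scale can be evaluated directly. For $\beta=0.5$ it gives exactly $\gamma=\pi\approx 3.14$, the value used in the right panel of Figure \ref{fig:match}; for $\beta=0.7$ it gives $\gamma\approx 4.79$, which is then adjusted (to $4$ in the left panel) for the chosen time horizon. A short sketch:

```python
import math

def matching_scale(beta):
    """Initial Mittag-Leffler scale gamma matching the Pareto(delta) tail with
    delta = 1 + beta, obtained by equating the asymptotic survival functions."""
    return (math.pi / (math.sin(beta * math.pi) * math.gamma(beta))) ** (1.0 / beta)

# beta = 0.5: sin(pi/2) = 1 and Gamma(1/2) = sqrt(pi), so gamma = (sqrt(pi))^2 = pi.
assert abs(matching_scale(0.5) - math.pi) < 1e-12
# beta = 0.7: initial value ~4.79, to be adjusted for the finite time horizon.
assert 4.7 < matching_scale(0.7) < 4.9
```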
\begin{figure}
\caption{(Color online) The figure shows the c.c.d.f.~of a Pareto distribution with tail exponent $\delta$ and the matching with the corresponding Mittag-Leffler$_{\gamma}(\beta)$. \textcolor{black}{The left panel presents the case $\beta=0.7$, $\gamma=4$ and $\delta=1.7$, whereas the right panel presents the case $\beta=0.5$, $\gamma=3.14$ and $\delta=1.5$. The Pareto distribution is drawn using diamonds whereas the Mittag-Leffler is drawn using a solid line.} }
\label{fig:match}
\end{figure}
\end{document}
\begin{definition}[Definition:Linearly Dependent Real Functions]
Let $f \left({x}\right)$ and $g \left({x}\right)$ be real functions defined on a closed interval $\left[{a \,.\,.\, b}\right]$.
Let $f$ and $g$ be constant multiples of each other:
:$\exists c \in \R: \forall x \in \left[{a \,.\,.\, b}\right]: f \left({x}\right) = c g \left({x}\right)$
or:
:$\exists c \in \R: \forall x \in \left[{a \,.\,.\, b}\right]: g \left({x}\right) = c f \left({x}\right)$
Then $f$ and $g$ are '''linearly dependent'''.
\end{definition}
\begin{document}
\title{A new type of singular perturbation approximation for stochastic bilinear systems}
\author{Martin Redmann\thanks{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, 10117 Berlin Germany (Email: {\tt [email protected]})}. The author~gratefully acknowledges the support from the DFG through the research unit FOR2402.}
\maketitle
\begin{abstract} Model order reduction (MOR) techniques are often used to reduce the order of spatially-discretized (stochastic) partial differential equations and hence reduce computational complexity. A particular class of MOR techniques is balancing related methods, which rely on simultaneously diagonalizing the system Gramians. This has been extensively studied for deterministic linear systems. The balancing procedure has already been extended to bilinear equations \cite{typeIBT}, an important subclass of nonlinear systems. The choice of Gramians in \cite{typeIBT} is referred to as the standard approach. In \cite{hartmann}, a balancing related MOR scheme for bilinear systems called singular perturbation approximation (SPA) has been described that relies on the standard choice of Gramians. However, no error bound for this method could be proved. In this paper, we extend the setting used in \cite{hartmann} by considering a stochastic system with bilinear drift and linear diffusion term. Moreover, we propose a modified reduced order model and choose a different reachability Gramian. Based on this new approach, an $L^2$-error bound is proved for SPA, which is the main result of this paper. This bound is new even for deterministic bilinear systems. \end{abstract}
\begin{keywords}
model order reduction, singular perturbation approximation, nonlinear stochastic systems, L\'evy process \end{keywords}
\begin{AMS} Primary: 93A15, 93C10, 93E03. Secondary: 15A24, 60J75. \end{AMS}
\section{Introduction}
Many phenomena in real life can be described by partial differential equations (PDEs). For an accurate mathematical modeling of these real world applications, it is often required to take random effects into account. Uncertainties in a PDE model can, for example, be represented by an additional noise term leading to stochastic PDEs (SPDEs) \cite{dapratozab,newspde,zabczyk,prevotroeckner}.
It is often necessary to numerically approximate time-dependent SPDEs since analytic solutions do not exist in general. Discretizing in space can be considered as a first step. This can, for example, be done by spectral Galerkin \cite{galerkin, galerkinhaus, galerkinjentzen} or finite element methods \cite{MR1637047,MR2646102,Kruse14}. This usually leads to large-scale SDEs. Solving such complex SDE systems causes large computational cost. In this context, model order reduction (MOR) is used to save computational time by replacing high dimensional systems by systems of low order in which the main information of the original system should be captured.
\subsection{Literature review}
Balancing related MOR schemes were developed for deterministic linear systems first. Famous representatives of this class of methods are balanced truncation (BT) \cite{antoulas,moore,obiand} and singular perturbation approximation (SPA) \cite{fernic,spa}.
BT was extended in \cite{bennerdamm, redmannbenner} and SPA was generalized in \cite{redSPA} to stochastic linear systems. With this first extension, however, no $L^2$-error bound can be achieved \cite{bennerdammcruz, dammbennernewansatz}. Therefore, an alternative approach based on a different reachability Gramian was studied for stochastic linear systems leading to an $L^2$-error bound for BT \cite{dammbennernewansatz} and for SPA \cite{redmannspa2}.
BT \cite{typeIBT,bennerdamm} and SPA \cite{hartmann} were also generalized to bilinear systems, which we refer to as the standard approach for these systems. Although bilinear terms are very weak nonlinearities, they can be seen as a bridge between linear and nonlinear systems. This is because many nonlinear systems can be represented by bilinear systems using a so-called Carleman linearization. Applications of these equations can be found in various fields \cite{brunietal,Mohler,rugh}. The standard approach for bilinear systems has the drawback that no $L^2$-error bound has been shown so far. A first error bound for the standard ansatz was recently proved in \cite{beckerhartmann}, where an output error bound in $L^\infty$ was formulated for infinite dimensional bilinear systems. Based on the alternative choice of Gramians in \cite{dammbennernewansatz}, a new type of BT for bilinear systems was considered \cite{redmanntypeiibilinear} providing an $L^2$-error bound under the assumption of a possibly small bound on the controls.
A more general setting extending both the stochastic linear and the deterministic bilinear case was investigated in \cite{redstochbil}. There, BT was studied and an $L^2$-error bound was proved overcoming the restriction of bounded controls in \cite{redmanntypeiibilinear}. In this paper, we consider SPA for the same setting as in \cite{redstochbil} in order to generalize the work in \cite{hartmann}. Moreover, we modify the reduced order model (ROM) in comparison to \cite{hartmann} and show an $L^2$-error bound which closes the gap in the theory in this context.
For further extensions of balancing related MOR techniques to other nonlinear systems, we refer to \cite{bennergoyal, Scherpen}.
\subsection{Setting and ROM}\label{settingstochstabgen}
Let every stochastic process appearing in this paper be defined on a filtered probability space $\left(\Omega, \mathcal F, \left(\mathcal F_t\right)_{t\geq 0}, \mathbb P\right)$\footnote{We assume that $\left(\mathcal F_t\right)_{t\geq 0}$ is right-continuous and $\mathcal F_0$ contains all sets $A$ with $\mathbb P(A)=0$.}. Suppose that $M=\left(M_1, \ldots, M_{v}\right)^T$ is an $\left(\mathcal F_t\right)_{t\geq 0}$-adapted and $\mathbb R^{v}$-valued mean zero L\'evy
process with $\mathbb E \left\|M(t)\right\|^2_2=\mathbb E\left[ M^T(t)M(t)\right]<\infty$ for all $t\geq 0$. Moreover, we assume that for all $t, h\geq 0$ the random variable $M\left(t+h\right)-M\left(t\right)$ is independent of $\mathcal F_t$.
We consider a large-scale stochastic control system with bilinear drift that can be interpreted as a spatially-discretized SPDE. We investigate the system
\begin{subequations}\label{controlsystemoriginal} \begin{align} d x(t)&=[A x(t)+ Bu(t) + \sum_{k=1}^m N_k x(t) u_k(t)]dt+ \sum_{i=1}^v H_i x(t-) dM_i(t),\label{stateeq}\\ y(t)&= {C} x(t),\;\;\;t\geq 0.\label{originalobserveq} \end{align} \end{subequations} We assume that $A, N_k, H_i\in \mathbb R^{n\times n}$ ($k\in\left\{1, \ldots, m\right\}$ and $i\in\left\{1, \ldots, v\right\}$), $B\in \mathbb R^{n\times m}$ and $C\in \mathbb R^{p\times n}$. Moreover, we define $x(t-):=\lim_{s\uparrow t} x(s)$. The control $u=\left(u_1, \ldots, u_{m}\right)^T$ is assumed to be deterministic and square integrable, i.e.,
\begin{align*}
\left\|u\right\|_{L^2_T}^2:=\int_0^T \left\|u(t)\right\|_2^2 dt<\infty \end{align*} for every $T>0$. By \cite[Theorem 4.44]{zabczyk} there is a matrix $K=\left(k_{ij}\right)_{i, j=1, \ldots, v}$ such that $\mathbb E[M(t)M^T(t)]=K t$. $K$ is called covariance matrix of $M$.
In this paper, we study SPA to obtain a ROM. SPA is a balancing related method and relies on defining a reachability Gramian $P$ and an observability Gramian $Q$. These matrices are selected such that $P$ characterizes the states in (\ref{stateeq}) and $Q$ the states in (\ref{originalobserveq}) that barely contribute to the system dynamics; see \cite{redstochbil} for estimates on the reachability and observability energy. The estimates in \cite{redstochbil} are global, whereas the standard choice of Gramians leads to results valid only in a small neighborhood of zero \cite{bennerdamm, graymesko}.
In order to ensure the existence of these Gramians, throughout the paper it is assumed that \begin{align}\label{stochstab} \lambda\left(A\otimes I+I\otimes A+\sum_{k=1}^m N_k\otimes N_k+\sum_{i, j=1}^v H_i\otimes H_j k_{ij}\right)\subset \mathbb C_-. \end{align} Here, $\lambda\left(\cdot\right)$ denotes the spectrum of a matrix. The reachability Gramian $P$ and the observability Gramian $Q$ are, according to \cite{redstochbil}, defined as the solutions to \begin{align}\label{newgram2}
A^T P^{-1}+P^{-1}A+\sum_{k=1}^m N^T_k P^{-1} N_k + \sum_{i, j=1}^v H_i^T P^{-1} H_j k_{i j} &\leq -P^{-1}BB^T P^{-1},\\
\label{gengenlyapobs} A^T Q+Q A+\sum_{k=1}^m N_k^T Q N_k +\sum_{i, j=1}^v H_i^T Q H_j k_{ij} &\leq -C^T C,
\end{align} where the existence of a positive definite solution to (\ref{newgram2}) goes back to \cite{dammbennernewansatz, redmannspa2}.
We approximate the large-scale system (\ref{controlsystemoriginal}) by a system which has a much smaller state dimension $r\ll n$. This reduced order model (ROM) is supposed to be chosen such that the corresponding output $y_r$ is close to the original one, i.e., $y_r\approx y$ in some metric. In order to be able to remove both the unimportant states in (\ref{stateeq}) and (\ref{originalobserveq}) simultaneously, the first step of SPA is a state space transformation \begin{align*}
	(A, B, C, H_i, N_k)\mapsto (\tilde A, \tilde B, \tilde C, \tilde H_i, \tilde N_k):=(SAS^{-1}, SB, CS^{-1}, SH_iS^{-1}, SN_kS^{-1}), \end{align*} where $S=\Sigma^{-\tfrac{1}{2}} X^T L_Q^T $ and $S^{-1}=L_PY\Sigma^{-\tfrac{1}{2}}$. The ingredients of the balancing transformation are computed by the Cholesky factorizations $P=L_PL_P^T$, $Q=L_QL_Q^T$, and the singular value decomposition $X\Sigma Y^T=L_Q^TL_P$. This transformation does not change the output $y$ of the system, but it guarantees that the new Gramians are diagonal and equal, i.e., $S P S^T=S^{-T}Q S^{-1}=\Sigma=\diag(\sigma_1,\ldots, \sigma_n)$ with $\sigma_1\geq \ldots \geq \sigma_n$ being the Hankel singular values (HSVs) of the system.
(A, B, C, H_i, N_k)\mapsto (\tilde A, \tilde B, \tilde C, \tilde H_i, \tilde N_k):=(SAS^{-1}, SB, CS^{-1}, SH_iS^{-1}, SN_kS^{-1}), \end{align*} where $S=\Sigma^{-\tfrac{1}{2}} X^T L_Q^T $ and $S^{-1}=L_PY\Sigma^{-\tfrac{1}{2}}$. The ingredients of the balancing transformation are computed by the Cholesky factorizations $P=L_PL_P^T$, $Q=L_QL_Q^T$, and the singular value decomposition $X\Sigma Y^T=L_Q^TL_P$. This transformation does not change the output $y$ of the system, but it guarantees that the new Gramians are diagonal and equal, i.e., $S P S^T=S^{-T}Q S^{-1}=\Sigma=\diag(\sigma_1,\ldots, \sigma_n)$ with $\sigma_1\geq \ldots \geq \sigma_n$ being the Hankel singular values (HSVs) of the system.
We partition the balanced coefficients of (\ref{controlsystemoriginal}) as follows: \begin{align}\label{partvonorimodel}
\tilde A=\smat{A}_{11}&{A}_{12}\\ {A}_{21}&{A}_{22}\srix,\;\tilde B=\smat B_1 \\ B_2\srix,\; \tilde N_k=\smat{N}_{k, 11}&{N}_{k, 12}\\ {N}_{k, 21}&{N}_{k, 22}\srix,\;\tilde H_i=\smat{H}_{i, 11}&{H}_{i, 12}\\ {H}_{i, 21}&{H}_{i, 22}\srix,\;\tilde C= \smat C_1 & C_2\srix,
\end{align} where $A_{11}, N_{k, 11}, H_{i, 11}\in \mathbb R^{r\times r}$ ($k\in\left\{1, \ldots, m\right\}$ and $i\in\left\{1, \ldots, v\right\}$), $B_1\in \mathbb R^{r\times m}$ and $C_1\in \mathbb R^{p\times r}$ etc. Furthermore, we partition the state variable $\tilde x$ of the balanced system and the diagonal matrix of HSVs \begin{align}\label{partitionfullmodel}
\tilde x=\mat{c} x_1 \\ x_2\rix\text{ and }\Sigma=\mat{cc} \Sigma_1& \\ & \Sigma_2\rix,
\end{align} where $x_1$ takes values in $\mathbb R^r$ ($x_2$ accordingly), $\Sigma_1$ is the diagonal matrix of large HSVs and $\Sigma_2$ contains the small ones.
Based on the balanced full model (\ref{controlsystemoriginal}) with matrices as in (\ref{partvonorimodel}), the ROM is obtained by neglecting the state variables $x_2$ corresponding to the small HSVs. The ROM using SPA is obtained by setting $dx_2(t)=0$ and furthermore neglecting the diffusion and bilinear term in the equation related to $x_2$. The resulting algebraic constraint can be solved and leads to $x_2(t)=-A_{22}^{-1}(A_{21} x_1(t)+B_2 u(t))$. Inserting this expression into the equation for $x_1$ and into the output equation, the reduced system is \begin{subequations}\label{romstochstatebt} \begin{align}\label{romstateeq}
dx_r&=[\bar A x_r+\bar B u+\sum_{k=1}^m (\bar N_{k} x_r + \bar E_{k} u)u_k]dt+\sum_{i=1}^v (\bar H_{i} x_r+ \bar F_{i} u) dM_i,\\
y_r(t)&=\bar Cx_r(t)+ \bar D u(t), \;\;\;t\geq 0,
\end{align}
\end{subequations} with matrices defined by \begin{align*}
&\bar A:=A_{11}- A_{12} A_{22}^{-1} A_{21},\;\;\;\bar B:=B_1-A_{12}A_{22}^{-1} B_2,\;\;\;\bar C:=C_1-C_2 A_{22}^{-1} A_{21},\\ &\bar D:=-C_2 A_{22}^{-1} B_2,\;\;\; \bar E_{k}:=-N_{k, 12} A_{22}^{-1} B_2,\;\;\; \bar F_{i}:=-H_{i, 12} A_{22}^{-1} B_2,\\ &\bar H_i:=H_{i, 11}-H_{i, 12} A_{22}^{-1} A_{21}, \;\;\;\bar N_k:=N_{k, 11}-N_{k, 12} A_{22}^{-1} A_{21},
\end{align*} where $x_r(0)=0$ and the time dependence in (\ref{romstateeq}) is omitted to shorten the notation. This straightforward ansatz is based on observations from the deterministic case ($N_k=H_i=0$), where $x_2$ represents the fast variables, i.e., $\dot x_2(t) \approx 0$ after a short time, see \cite{spa}.
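The reduced matrices above can be assembled mechanically from the balanced partition. A toy sketch for a balanced system of dimension $n=2$ reduced to order $r=1$, so that every block, including $A_{22}$, is a scalar; all numerical values are illustrative only, not taken from any particular model:

```python
# Toy assembly of the SPA reduced-order matrices for n = 2, r = 1: every block
# of the balanced partition is a scalar and A22 is inverted directly.
# All numerical values are illustrative only.
A11, A12, A21, A22 = -1.0, 0.3, 0.2, -2.0
B1, B2 = 1.0, 0.5
C1, C2 = 1.0, 0.4
N11, N12 = 0.1, 0.05      # single bilinear matrix N (m = 1), first block row
H11, H12 = 0.2, 0.1       # single diffusion matrix H (v = 1), first block row

A22_inv = 1.0 / A22

A_bar = A11 - A12 * A22_inv * A21   # \bar A = A11 - A12 A22^{-1} A21
B_bar = B1 - A12 * A22_inv * B2     # \bar B = B1 - A12 A22^{-1} B2
C_bar = C1 - C2 * A22_inv * A21     # \bar C = C1 - C2 A22^{-1} A21
D_bar = -C2 * A22_inv * B2          # \bar D = -C2 A22^{-1} B2
N_bar = N11 - N12 * A22_inv * A21   # \bar N_k
H_bar = H11 - H12 * A22_inv * A21   # \bar H_i
E_bar = -N12 * A22_inv * B2         # \bar E_k
F_bar = -H12 * A22_inv * B2         # \bar F_i

assert abs(A_bar - (-0.97)) < 1e-9
assert abs(D_bar - 0.1) < 1e-9
```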
For stochastic systems, however, this ansatz might fail no matter how small the HSVs corresponding to $x_2$ are. Although the motivation thus rests on a perhaps less convincing argument, it leads to a viable MOR method for which an error bound can be proved. An averaging principle would be a mathematically well-founded alternative to this naive approach. Averaging principles for stochastic systems have for example been investigated in \cite{avp2,avp1}. A further strategy to derive a ROM in this context can be found in \cite{berglundgentz}.
Moreover, notice that system (\ref{romstochstatebt}) is no longer a bilinear system due to the quadratic term in the control $u$. This is an essential difference from the ROM proposed in \cite{hartmann}.
\subsection{Main result}
The work in this paper on SPA for system (\ref{controlsystemoriginal}) can be interpreted as a generalization of the deterministic bilinear case \cite{hartmann}. This extension builds a bridge between stochastic linear systems and stochastic nonlinear systems such that SPA can possibly be applied to many more stochastic equations and applications.
In this paper, we provide an alternative to \cite{redstochbil}, where BT was studied. We extend the work of \cite{hartmann} combined with a modification of the ROM and the choice of a new Gramian defined through (\ref{newgram2}). Based on this, we obtain an error bound that was not even available for the deterministic bilinear case. This is the main result of this paper and is formulated in the following theorem. Its proof requires new techniques that cannot be found in the literature so far. \begin{theorem}\label{mainthmintro} Let $y$ be the output of the full model (\ref{controlsystemoriginal}) with $x(0)=0$ and $y_r$ be the output of the ROM (\ref{romstochstatebt}) with zero initial state. Then, for all $T>0$, it holds that \begin{align*}
\left(\mathbb E\left\|y-y_r\right\|_{L^2_{T}}^2\right)^{\frac{1}{2}}\leq 2 (\tilde \sigma_{1}+\tilde \sigma_{2}+\ldots + \tilde \sigma_\nu) \left\|u\right\|_{L^2_T}\exp\left(0.5 \left\|u^0\right\|_{L^2_T}^2\right),
\end{align*} where $\tilde\sigma_{1}, \tilde\sigma_{2}, \ldots,\tilde\sigma_\nu$ are the distinct diagonal entries of $\Sigma_2=\diag(\sigma_{r+1},\ldots,\sigma_n)=\diag(\tilde\sigma_{1} I, \tilde\sigma_{2} I, \ldots, \tilde\sigma_\nu I)$ and $u^0=(u^0_1, \dots, u_m^0)^T$ is the control vector with components defined by $u_k^0 \equiv \begin{cases} 0 & \text{if }N_k = 0,\\ u_k & \text{else}. \end{cases}$ \end{theorem}
Theorem \ref{mainthmintro} is proved in Section \ref{proofmainthm}. We observe that an exponential term enters the bound in Theorem \ref{mainthmintro} which is due to the bilinearity in the drift. Setting $N_k=0$ for all $k=1, \ldots, m$ the exponential becomes a one which is the bound of the stochastic linear case \cite{redmannspa2}. The result in Theorem \ref{mainthmintro} tells us that the ROM (\ref{romstochstatebt}) yields a very good approximation if the truncated HSVs (diagonal entries of $\Sigma_2$) are small and the vector $u^0$ of control components with a non-zero $N_k$ is not too large. The exponential in the error bound can be an indicator that SPA performs badly if $u^0$ is very large.
The remainder of the paper deals with the proof of Theorem \ref{mainthmintro}.
\section{$L^2$-error bound for SPA}\label{errorboundsBT}
The proof of the error bound in Theorem \ref{mainthmintro} is divided into two parts. We first investigate the error that we encounter by removing the smallest HSV from the system in Section \ref{sectionremovesmallhsv}. In this reduction step, the structure from the full model (\ref{controlsystemoriginal}) to the ROM (\ref{romstochstatebt}) changes. Therefore, when removing the other HSVs from the system, another case needs to be studied in Section \ref{secneigeb}. There, an error bound between two neighboring ROMs is achieved, i.e., the larger ROM has exactly one HSV more than the smaller one. The results of Sections \ref{sectionremovesmallhsv} and \ref{secneigeb} are then combined in Section \ref{proofmainthm} in order to prove the general error bound.
For simplicity, let us from now on assume that system (\ref{controlsystemoriginal}) is already balanced and has a zero initial condition ($x_0=0$). Thus, (\ref{newgram2}) and (\ref{gengenlyapobs}) become \begin{align}\label{balancedreach}
A^T \Sigma^{-1}+\Sigma^{-1}A+\sum_{k=1}^m N_k^T \Sigma^{-1} N_k + \sum_{i, j=1}^v H_i^T \Sigma^{-1} H_j k_{i j} &\leq -\Sigma^{-1}BB^T \Sigma^{-1},\\ \label{balancedobserve}
A^T \Sigma+\Sigma A+\sum_{k=1}^m N_k^T \Sigma N_k +\sum_{i, j=1}^v H_i^T \Sigma H_j k_{ij} &\leq -C^T C,
\end{align} i.e., $P=Q=\Sigma=\diag(\sigma_1, \ldots, \sigma_n)>0$.
\subsection{Error bound of removing the smallest HSV}\label{sectionremovesmallhsv}
We introduce the variable $x_{\mp} =\smat x_1-x_r \\ x_2+A_{22}^{-1}(A_{21} x_r + B_2u)\srix$ since the corresponding output \begin{align}\label{xminusoutput}
y_{\mp}(t)&=Cx_{\mp}(t)=Cx(t)-\bar C x_r(t) - \bar D u = y(t)-y_r(t), \;\;\;t\geq 0,
\end{align} is the output error between the full and the reduced system (\ref{romstochstatebt}). We aim to find an equation for $x_{\mp}$. This is done through the state variable $x_-=\smat x_1-x_r \\ x_2\srix$. The differential $d(x_1-x_r)$ is obtained by subtracting the state equation (\ref{romstateeq}) of the reduced system from the first $r$ rows of (\ref{stateeq}). The corresponding right side is then rewritten using $x_{\mp}$. Moreover, the right side of the differential of $x_2$, compare with the last $n-r$ rows of (\ref{stateeq}), is also formulated with the help of $x_{\mp}$. This results in \begin{align}\label{statexminus}
dx_-&=[Ax_{\mp}+ \smat 0 \\ c_0\srix+\sum_{k=1}^m N_{k} x_{\mp} u_k]dt +\sum_{i=1}^v [H_{i} x_{\mp} + \smat 0 \\ c_i\srix] dM_i,
\end{align} where $c_0(t):=\sum_{k=1}^m [ N_{k, 21}x_r(t) -N_{k, 22}A_{22}^{-1}(A_{21} x_r(t) + B_2 u(t))] u_k(t)$ and $c_i(t):= H_{i, 21} x_r(t)-H_{i, 22}A_{22}^{-1}(A_{21} x_r(t) + B_2 u(t))$ for $i=1, \ldots, v$.
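For later reference, we record the elementary identity behind these rewritings. Abbreviating $h:=A_{22}^{-1}(A_{21} x_r + B_2u)$, a direct computation shows \begin{align*}
\smat A_{21} & A_{22}\srix \smat -x_r \\ h\srix = -A_{21}x_r + A_{22}A_{22}^{-1}\left(A_{21} x_r + B_2u\right)=B_2u.
\end{align*}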
We furthermore introduce the counterpart of $x_{\mp}$ with reversed signs. This is $x_{\pm}=\smat x_1+x_r \\ x_2-A_{22}^{-1}(A_{21} x_r + B_2u)\srix$. Using the state $x_+=\smat x_1+x_r \\ x_2\srix$, with a differential obtained by combining (\ref{stateeq}) and (\ref{romstateeq}) again, and expressing its right side with $x_{\pm}$, we have \begin{align}\label{xplus}
dx_+=[Ax_{\pm}+2 B u- \smat 0 \\ c_0\srix +\sum_{k=1}^m N_{k} x_{\pm} u_k]dt +\sum_{i=1}^v [H_{i} x_{\pm} - \smat 0 \\ c_i\srix] dM_i.
\end{align} We will see that the proof of the error bound can be reduced to the task of finding suitable estimates for $\mathbb E[x_-^T(t) \Sigma x_-(t)]$ and $\mathbb E[x_+^T(t) \Sigma^{-1} x_+(t)]$. This idea was also used to determine an error bound for BT \cite{redstochbil}. However, the proof for SPA requires different techniques to find the estimates.
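The estimates below repeatedly use Young's inequality for the Euclidean inner product, \begin{align*}
2\left\langle a, b\right\rangle_2\leq \left\|a\right\|_2^2+\left\|b\right\|_2^2,\qquad a, b\in\mathbb R^n,
\end{align*} combined with Gronwall-type arguments (Lemma \ref{gronwall}).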
\begin{theorem}\label{mainthm} Let $y$ be the output of the full model (\ref{controlsystemoriginal}) with $x(0)=0$, $y_r$ be the output of the ROM (\ref{romstochstatebt}) with $x_{r}(0)=0$ and $\Sigma_2=\sigma I$, $\sigma>0$, in (\ref{partitionfullmodel}). Then, it holds that \begin{align*}
\left(\mathbb E\left\|y-y_r\right\|_{L^2_{T}}^2\right)^{\frac{1}{2}}\leq 2 \sigma \left\|u\right\|_{L^2_T}\exp\left(0.5 \left\|u^0\right\|_{L^2_T}^2\right).
\end{align*} \begin{proof} We derive a suitable upper bound for $\mathbb E[x_-^T(t) \Sigma x_-(t)]$ by first applying Ito's formula. Hence, Lemma \ref{lemstochdiff} and Equation (\ref{statexminus}) yield \begin{align}\label{productruleapplied} \mathbb E\left[x_-^T(t)\Sigma x_-(t)\right]=&2 \int_0^t\mathbb E\left[x_-^T\Sigma\left(Ax_{\mp}+\sum_{k=1}^m (N_{k} x_{\mp} u_k) + \smat 0 \\ c_0\srix \right)\right]ds\\ \nonumber &+ \int_0^t\sum_{i, j=1}^v \mathbb E\left[\left(H_{i} x_{\mp} + \smat 0 \\ c_i\srix\right)^T\Sigma\left(H_{j} x_{\mp} + \smat 0 \\ c_j\srix\right)\right]k_{ij} ds. \end{align} We find an estimate for the terms related to $N_k$, that is \begin{align}\label{Nkestimate} \sum_{k=1}^m 2 x_-^T(s)\Sigma N_{k} x_{\mp}(s) u_k(s)&=\sum_{k=1}^m 2\left\langle \Sigma^{\frac{1}{2}} x_-(s)u_k(s), \Sigma^{\frac{1}{2}} N_k x_{\mp}(s) \right\rangle_2 \\ \nonumber
&\leq \sum_{k=1}^m \left\| \Sigma^{\frac{1}{2}} x_-(s)u^0_k(s)\right\|_2^2 + \left\|\Sigma^{\frac{1}{2}} N_k x_{\mp}(s)\right\|_2^2\\ \nonumber
&=x_-^T(s) \Sigma x_-(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m x_{\mp}^T(s) N_k^T\Sigma N_{k} x_{\mp}(s), \end{align} where $u^0$ is defined as in Theorem \ref{mainthmintro}. Moreover, adding a zero, we rewrite \begin{align}\label{Aestimate} 2 x_-^T(s) \Sigma Ax_{\mp}(s) &=2 x_{\mp}^T(s) \Sigma Ax_{\mp}(s)- 2 \smat 0 \\ h(s)\srix^T \Sigma Ax_{\mp}(s) \\ \nonumber & = x_{\mp}^T(s) (A^T \Sigma + \Sigma A) x_{\mp}(s) - 2 \smat 0 \\ h(s)\srix^T \Sigma Ax_{\mp}(s),
\end{align} where $h(s)=A_{22}^{-1}(A_{21} x_r(s) + B_2u(s))$. With (\ref{Nkestimate}) and (\ref{Aestimate}), (\ref{productruleapplied}) becomes {\allowdisplaybreaks\begin{align}\nonumber \mathbb E\left[x_-^T(t)\Sigma x_-(t)\right]\leq& \mathbb E\int_0^t x_{\mp}^T\left(A^T \Sigma+\Sigma A+\sum_{k=1}^m N_k^T\Sigma N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma H_{j}k_{ij}\right)x_{\mp} ds\\ \label{insetobserveequation} & +\mathbb E\int_0^t 2 x_-^T\Sigma \smat 0 \\ c_0\srix + \sum_{i, j=1}^v \left(2 H_{i} x_{\mp} + \smat 0 \\ c_i\srix\right)^T\Sigma\smat 0 \\ c_j\srix k_{ij} ds\\
&+\int_0^t \mathbb E\left[x_-^T \Sigma x_-\right] \left\|u^0\right\|_{2}^2 ds - \mathbb E \int_0^t 2 \smat 0 \\ h\srix^T \Sigma Ax_{\mp}ds.\nonumber \end{align}} Taking the partitions of $x_-$ and $\Sigma$ into account, we see that $x_-^T\Sigma \smat 0 \\ c_0\srix=x_2^T \Sigma_2 c_0$. Furthermore, the partitions of $x_{\mp}$ and $H_i$ yield \begin{align}\label{relforH} &\left(2 H_{i} x_{\mp} + \smat 0 \\ c_i\srix\right)^T\Sigma\smat 0 \\ c_j\srix =\left(2 H_{i} x_{\mp} + \smat 0 \\ c_i\srix\right)^T\smat 0 \\ \Sigma_2 c_j\srix\\ \nonumber &=\left(2 \smat H_{i, 21} & H_{i, 22}\srix (x - \smat x_r \\ -h\srix) + c_i\right)^T \Sigma_2 c_j = \left(2 \smat H_{i, 21} & H_{i, 22}\srix x - c_i\right)^T \Sigma_2 c_j, \end{align} since $\smat H_{i, 21} & H_{i, 22}\srix \smat x_r \\ -h\srix=c_i$. Using the partition of $A$, it holds that \begin{align}\label{relforA} -2 \smat 0 \\ h\srix^T \Sigma Ax_{\mp}&=-2 \smat 0 & h^T\Sigma_2 \srix Ax_{\mp}=-2 h^T\Sigma_2 \smat A_{21} & A_{22}\srix (x+\smat -x_r \\ h\srix)\\ \nonumber &=-2 h^T\Sigma_2 (\smat A_{21} & A_{22}\srix x + B_2u), \end{align} because $\smat A_{21} & A_{22}\srix \smat -x_r \\ h\srix=B_2u$. We insert (\ref{balancedobserve}) and (\ref{xminusoutput}) into inequality (\ref{insetobserveequation}) and exploit the relations in (\ref{relforH}) and (\ref{relforA}). Hence, \begin{align*}
\mathbb E\left[x_-^T(t)\Sigma x_-(t)\right]\leq& - \mathbb E\left\|y-y_r\right\|^2_{L^2_{t}}+\int_0^t \mathbb E\left[x_-^T \Sigma x_-\right] \left\|u^0\right\|_{2}^2 ds\\ & +\mathbb E\int_0^t 2 x_2^T \Sigma_2 c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}\srix x - c_i\right)^T \Sigma_2 c_j k_{ij} ds\\ & - \mathbb E \int_0^t 2 h^T\Sigma_2 (\smat A_{21} & A_{22}\srix x + B_2u) ds. \end{align*} We define the function $\alpha_-(t):=\mathbb E\int_0^t 2 x_2^T \Sigma_2 c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}\srix x - c_i\right)^T \Sigma_2 c_j k_{ij} ds - \mathbb E \int_0^t 2 h^T\Sigma_2 (\smat A_{21} & A_{22}\srix x + B_2u) ds$ and apply Lemma \ref{gronwall} implying \begin{align*}
\mathbb E\left[x_-^T(t)\Sigma x_-(t)\right]\leq& \alpha_-(t)- \mathbb E\left\|y-y_r\right\|_{L^2_{t}}^2\\
& +\int_0^t (\alpha_-(s) - \mathbb E\left\|y-y_r\right\|_{L^2_{s}}^2) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds. \end{align*} Since $\Sigma$ is positive definite, we obtain an upper bound for the output error by \begin{align*}
\mathbb E\left\|y-y_r\right\|_{L^2_{t}}^2\leq \alpha_-(t) +\int_0^t \alpha_-(s) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds. \end{align*} Defining the term $\alpha_+(t):=\mathbb E\int_0^t 2 x_2^T \Sigma_2^{-1} c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}\srix x - c_i\right)^T \Sigma_2^{-1} c_j k_{ij} ds - \mathbb E \int_0^t 2 h^T\Sigma_2^{-1} (\smat A_{21} & A_{22}\srix x + B_2u) ds$ and exploiting the assumption that $\Sigma_2=\sigma I$ leads to \begin{align}\label{firstbound}
\mathbb E\left\|y-y_r\right\|_{L^2_{t}}^2\leq \sigma^2\left[\alpha_+(t) +\int_0^t \alpha_+(s) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds\right]. \end{align} The remaining step is to find a bound for the right side of (\ref{firstbound}) that does not depend on $\alpha_+$ anymore. For that reason, a bound for the expression $\mathbb E[x_+^T(t) \Sigma^{-1} x_+(t)]$ is derived next using Ito's lemma again. From (\ref{xplus}) and Lemma \ref{lemstochdiff}, we obtain \begin{align}\label{productruleappliedplus} \mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right]=&2 \int_0^t\mathbb E\left[x_+^T\Sigma^{-1}\left(Ax_{\pm}+2 Bu+\sum_{k=1}^m (N_{k} x_{\pm} u_k) - \smat 0 \\ c_0\srix \right)\right]ds\\ \nonumber &+ \int_0^t\sum_{i, j=1}^v \mathbb E\left[\left(H_{i} x_{\pm} - \smat 0 \\ c_i\srix\right)^T\Sigma^{-1}\left(H_{j} x_{\pm} - \smat 0 \\ c_j\srix\right)\right]k_{ij} ds. \end{align} Analogously to (\ref{Nkestimate}), it holds that \begin{align*}
\sum_{k=1}^m 2 x_+^T(s)\Sigma^{-1} N_{k} x_{\pm}(s) u_k(s) \leq x_+^T(s) \Sigma^{-1} x_+(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m x_{\pm}^T(s) N_k^T\Sigma^{-1} N_{k} x_{\pm}(s). \end{align*} Additionally, we rearrange the term related to $A$ as follows \begin{align*}
2x_+^T(s) \Sigma^{-1} Ax_{\pm}(s) &= 2x_{\pm}^T(s) \Sigma^{-1} Ax_{\pm}(s)+ 2\smat 0 \\ h(s)\srix^T \Sigma^{-1} Ax_{\pm}(s) \\
&= x_{\pm}^T(s)(A^T \Sigma^{-1} + \Sigma^{-1} A)x_{\pm}(s)+ 2\smat 0 \\ h(s)\srix^T \Sigma^{-1} Ax_{\pm}(s).
\end{align*} Moreover, we have \begin{align*}
4 x_+^T(s)\Sigma^{-1} Bu(s) = 4 x_{\pm}^T(s)\Sigma^{-1} Bu(s)+ 4\smat 0 \\ h(s)\srix^T \Sigma^{-1} Bu(s).
\end{align*} We plug in the above results into (\ref{productruleappliedplus}) which gives us \begin{align}\nonumber &\mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right]\\ \nonumber&\leq \mathbb E\int_0^t x_{\pm}^T\left(A^T \Sigma^{-1}+\Sigma^{-1} A+\sum_{k=1}^m N_k^T\Sigma^{-1} N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma^{-1} H_{j}k_{ij}\right)x_{\pm} ds\\ \label{insetobserveequation2} &\quad -\mathbb E\int_0^t 2 x_+^T\Sigma^{-1} \smat 0 \\ c_0\srix + \sum_{i, j=1}^v \left(2 H_{i} x_{\pm} - \smat 0 \\ c_i\srix\right)^T\Sigma^{-1}\smat 0 \\ c_j\srix k_{ij} ds\\ &\quad+ \mathbb E \int_0^t 2\smat 0 \\ h\srix^T \Sigma^{-1} (Ax_{\pm}+2 Bu) ds + \mathbb E \int_0^t 4 x_{\pm}^T\Sigma^{-1} Bu ds\nonumber \\
&\quad+\int_0^t \mathbb E\left[x_+^T \Sigma^{-1} x_+\right] \left\|u^0\right\|_{2}^2 ds. \nonumber \end{align} From inequality (\ref{balancedreach}) and the Schur complement condition on definiteness, it follows that\begin{align}\label{schurposdef}
\mat{cc}\hspace{-0.15cm} A^T \Sigma^{-1}\hspace{-0.025cm}+\Sigma^{-1}\hspace{-0.025cm}A+\hspace{-0.025cm}\sum_{k=1}^m N_k^T \Sigma^{-1} N_k + \hspace{-0.025cm}\sum_{i, j=1}^v H_i^T \Sigma^{-1} H_j k_{i j} & \Sigma^{-1}B\\
B^T \Sigma^{-1}& -I\rix\leq 0.
\end{align} We multiply (\ref{schurposdef}) with $\smat x_{\pm} \\ 2u\srix^T$ from the left and with $\smat x_{\pm}\\ 2u\srix$ from the right. Hence, \begin{align}\label{ugrosserlyap}
&4 \left\|u\right\|_{2}^2\geq \\ \nonumber & x_{\pm}^T\left(A^T \Sigma^{-1}+\Sigma^{-1} A+\sum_{k=1}^m N_k^T\Sigma^{-1} N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma^{-1} H_{j}k_{ij}\right)x_{\pm}+4x_{\pm}^T\Sigma^{-1} Bu. \end{align} Applying this result to (\ref{insetobserveequation2}) yields {\allowdisplaybreaks\begin{align}\label{insetobserveequationbla}
\mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right] \leq & 4 \left\|u\right\|_{L^2_t}^2+\int_0^t \mathbb E\left[x_+^T \Sigma^{-1} x_+\right] \left\|u^0\right\|_{2}^2 ds\\\nonumber & +\mathbb E \int_0^t 2\smat 0 \\ h\srix^T \Sigma^{-1} (Ax_{\pm}+2 Bu) ds\\ \nonumber & -\mathbb E\int_0^t 2 x_+^T\Sigma^{-1} \smat 0 \\ c_0\srix + \sum_{i, j=1}^v \left(2 H_{i} x_{\pm} - \smat 0 \\ c_i\srix\right)^T\Sigma^{-1}\smat 0 \\ c_j\srix k_{ij} ds. \end{align}} We first of all see that $x_+^T\Sigma^{-1} \smat 0 \\ c_0\srix=x_2^T\Sigma_2^{-1}c_0$ using the partitions of $x_+$ and $\Sigma$.
With the partition of $H_i$, we moreover have \begin{align*} &\left(2 H_{i} x_{\pm} - \smat 0 \\ c_i\srix\right)^T\Sigma^{-1}\smat 0 \\ c_j\srix =\left(2 H_{i} x_{\pm} - \smat 0 \\ c_i\srix\right)^T\smat 0 \\ \Sigma_2^{-1} c_j\srix\\ &=\left(2 \smat H_{i, 21} & H_{i, 22}\srix (x + \smat x_r \\ -h\srix) - c_i\right)^T \Sigma_2^{-1} c_j = \left(2 \smat H_{i, 21} & H_{i, 22}\srix x + c_i\right)^T \Sigma_2^{-1} c_j. \end{align*} In addition, it holds that \begin{align*} &2\smat 0 \\ h\srix^T \Sigma^{-1} (Ax_{\pm}+2 Bu) = 2\smat 0 & h^T\Sigma_2^{-1}\srix (Ax_{\pm}+2 Bu)\\ &= 2 h^T\Sigma_2^{-1} (\smat A_{21} & A_{22}\srix (x+\smat x_r \\ -h\srix)+2 B_2u) = 2 h^T\Sigma_2^{-1} (\smat A_{21} & A_{22}\srix x+ B_2u)
\end{align*} Plugging the above relations into (\ref{insetobserveequationbla}) leads to \begin{align}\label{insetobserveequationblabla}
\mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right] \leq & 4\left\|u\right\|_{L^2_t}^2+\int_0^t \mathbb E\left[x_+^T \Sigma^{-1} x_+\right] \left\|u^0\right\|_{2}^2 ds\\\nonumber &+\mathbb E\int_0^t 2 h^T\Sigma_2^{-1} (\smat A_{21} & A_{22}\srix x+ B_2u) ds\\ \nonumber & -\mathbb E\int_0^t 2 x_2^T\Sigma_2^{-1} c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}\srix x + c_i\right)^T\Sigma_2^{-1} c_j k_{ij} ds. \end{align} We add $2\mathbb E\int_0^t \sum_{i, j=1}^v c_i^T\Sigma_2^{-1} c_j k_{ij} ds$ to the right side of (\ref{insetobserveequationblabla}) and preserve the inequality since this term is nonnegative due to Lemma \ref{proppossemidef}. This results in \begin{align*}
\mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right] \leq 4 \left\|u\right\|_{L^2_t}^2-\alpha_+(t)+\int_0^t \mathbb E\left[x_+^T(s) \Sigma^{-1} x_+(s)\right] \left\|u^0(s)\right\|_{2}^2 ds. \end{align*} Gronwall's inequality in Lemma \ref{gronwall} yields \begin{align}\label{keineahnung}
&\mathbb E\left[x_+^T(t)\Sigma^{-1} x_+(t)\right] \\ \nonumber &\leq 4\left\|u\right\|_{L^2_t}^2-\alpha_+(t)
+\int_0^t (4 \left\|u\right\|_{L^2_s}^2-\alpha_+(s)) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds. \end{align} We find an estimate for the following expression: \begin{align}\label{keineahnung2}
&\int_0^t \left\|u\right\|_{L^2_s}^2 \left\|u^0(s)\right\|_2^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_2^2dw\right) ds\\ \nonumber
&\leq \left\|u\right\|_{L^2_t}^2 \left[-\exp\left(\int_s^t \left\|u^0(w)\right\|_2^2dw\right)\right]_{s=0}^t\\
&=\left\|u\right\|_{L^2_t}^2\left(\exp\left(\int_0^t \left\|u^0(s)\right\|_2^2ds\right)-1\right). \nonumber \end{align} Combining (\ref{keineahnung}) with (\ref{keineahnung2}), we obtain \begin{align}\label{analoggronwallest}
&\alpha_+(t)+\int_0^t \alpha_+(s) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds\\ \nonumber
&\leq 4\left\|u\right\|_{L^2_t}^2 \exp\left(\int_0^t \left\|u^0(s)\right\|_2^2ds\right). \end{align} Comparing this result with (\ref{firstbound}) implies
\begin{align}\label{secondbound}
\left(\mathbb E\left\|y-y_r\right\|_{L^2_{t}}^2\right)^{\frac{1}{2}}\leq 2\sigma \left\|u\right\|_{L^2_t} \exp\left(0.5 \left\|u^0\right\|_{L^2_t}^2\right). \end{align} \end{proof} \end{theorem}\\ We proceed with the study of an error bound between two ROMs that are neighboring. \subsection{Error bound for neighboring ROMs}\label{secneigeb} In this section, we investigate the output error between two ROMs, in which the larger ROM has exactly one HSV more than the smaller one. This concept of neighboring ROMs was first introduced in \cite{redmannspa2} but in the much simpler stochastic linear setting.
The reader might wonder why a second case is considered besides the one in Section \ref{sectionremovesmallhsv} since one might just start with a full model that has the same structure as the ROM (\ref{romstochstatebt}). The reason is that it is not clear how the Gramians need to be chosen for (\ref{romstochstatebt}). In order to investigate the error between two ROMs by SPA, a finer partition than the one in (\ref{partvonorimodel}) is required. We partition the matrices of the balanced full system (\ref{controlsystemoriginal}) as follows: \begin{subequations}\label{finerpartdef} \begin{align}
A=\smat{A}_{11}&{A}_{12}&A_{13}\\ {A}_{21}&{A}_{22}&A_{23}\\ {A}_{31}&{A}_{32}&A_{33}\srix,\quad B=\smat B_1 \\ B_2\\B_3\srix,\quad C= \smat C_1 & C_2& C_3\srix,\\ H_i=\smat{H}_{i, 11}&{H}_{i, 12}&{H}_{i, 13}\\ {H}_{i, 21}&{H}_{i, 22}&{H}_{i, 23}\\ {H}_{i, 31}&{H}_{i, 32}&{H}_{i, 33}\srix,\quad N_k=\smat{N}_{k, 11}&{N}_{k, 12}&{N}_{k, 13}\\ {N}_{k, 21}&{N}_{k, 22}&{N}_{k, 23}\\ {N}_{k, 31}&{N}_{k, 32}&{N}_{k, 33}\srix.
\end{align}
\end{subequations} The partitioned balanced solution to (\ref{stateeq}) and the Gramians are then of the form \begin{align}\label{finepartsig}
x=\smat x_{1}\\
x_{2}\\
x_{3}\srix\; \text{and} \; \Sigma=\smat \Sigma_{1}& & \\
&\Sigma_{2}& \\
& &\Sigma_{3}\srix.
\end{align} We first introduce the ROM obtained by truncating $\Sigma_3$. According to the procedure described in Section \ref{settingstochstabgen}, the reduced system is obtained by setting $dx_3$ equal to zero, neglecting the bilinear and the diffusion term in this equation. Solving the resulting algebraic constraint yields the approximation $\tilde x_3=-A_{33}^{-1}(A_{31}x_1+A_{32}x_2+B_3u)$ for $x_3$. Inserting this result for $x_3$ in the equations for $x_1$, $x_2$ and into the output equation (\ref{originalobserveq}) leads to \begin{subequations}\label{romnegsig3}
\begin{align}\label{stateromsig3} d\smat x_1 \\ x_2\srix&=\left[\hat A\smat x_1 \\ x_2\\\tilde x_3\srix+\hat B u+\sum_{k=1}^m \hat N_k \smat x_1 \\ x_2\\ \tilde x_3\srix u_k\right]dt +\sum_{i=1}^v \hat H_i \smat x_1 \\ x_2\\ \tilde x_3\srix dM_i, \\ \label{outromsig3}
\bar y(t)&= C \smat x_1(t) \\ x_2(t)\\ \tilde x_3(t)\srix, \;\;\;t\geq 0,
\end{align}
\end{subequations} where $\smat x_1(0) \\ x_2(0)\srix=\smat 0 \\ 0\srix$ and \begin{align*} \hat A=\smat{A}_{11}&{A}_{12}&A_{13}\\ {A}_{21}&{A}_{22}&A_{23}\srix,\; \hat B=\smat B_1 \\ B_2\srix, \; \hat H_i=\smat{H}_{i, 11}&{H}_{i, 12}&{H}_{i, 13}\\ {H}_{i, 21}&{H}_{i, 22}&{H}_{i, 23}\srix,\; \hat N_k=\smat{N}_{k, 11}&{N}_{k, 12}&{N}_{k, 13}\\ {N}_{k, 21}&{N}_{k, 22}&{N}_{k, 23}\srix.
\end{align*} We aim to determine the error between this ROM and the reduced system of neglecting $\Sigma_2$ and $\Sigma_3$. This is \begin{subequations}\label{romnegsig2sig3}
\begin{align}\label{stateromsig2sig3} dx_r&=\left[\hat A_r \smat x_r \\ -h_1\\ -h_2\srix+B_1 u+\sum_{k=1}^m \hat N_{r, k} \smat x_r \\ -h_1\\ -h_2\srix u_k\right]dt + \sum_{i=1}^v \hat H_{r, i} \smat x_r \\ -h_1\\ -h_2\srix dM_i, \\ \label{outromsig2sig3}
\bar y_r(t)&=\smat C_1 & C_2 &C_3 \srix\smat x_r(t) \\ -h_1(t)\\ -h_2(t)\srix, \;\;\;t\geq 0,
\end{align} \end{subequations} where $x_r(0)=0$, \begin{align*} \hat A_r=\smat{A}_{11}&{A}_{12}&A_{13}\srix,\; \hat H_{r, i}=\smat{H}_{i, 11}&{H}_{i, 12}&{H}_{i, 13}\srix,\; \hat N_{r, k}=\smat{N}_{k, 11}&{N}_{k, 12}&{N}_{k, 13}\srix \end{align*} and we define \begin{align}\label{inverserep}
h(t)=\smat h_1(t) \\ h_2(t)\srix= \smat A_{22}& A_{23}\\ A_{32}& A_{33}\srix^{-1}
\left(\smat {A}_{21}\\{A}_{31}\srix x_r(t)+\smat {B}_{2}\\{B}_{3}\srix u(t)\right).
\end{align} In order to find a bound for the error between (\ref{outromsig3}) and (\ref{outromsig2sig3}), state variables analogous to $x_\mp$ and $x_\pm$ from Section \ref{sectionremovesmallhsv} are constructed in the following and corresponding equations are derived. For simplicity, we use a similar notation again and define \begin{align*} \hat x_\mp=\smat x_1-x_r \\ x_2+h_1\\ \tilde x_3+h_2\srix\; \text{and}\; \hat x_\pm=\smat x_1+x_r \\ x_2-h_1\\ \tilde x_3-h_2\srix. \end{align*} One can see that these states are obtained by combining the states appearing on the right sides of (\ref{stateromsig3}) and (\ref{stateromsig2sig3}). Furthermore, the output of $\hat x_\mp$ leads to the output error \begin{align}\label{outputerrorbrom} C\hat x_\mp(t)= \bar y(t) - \bar y_r(t), \quad t\geq 0, \end{align} which is a direct consequence of (\ref{outromsig3}) and (\ref{outromsig2sig3}).
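Written out, this identity is just the difference of the output equations (\ref{outromsig3}) and (\ref{outromsig2sig3}): \begin{align*}
C\hat x_\mp(t)= C\smat x_1(t) \\ x_2(t)\\ \tilde x_3(t)\srix - \smat C_1 & C_2 &C_3 \srix\smat x_r(t) \\ -h_1(t)\\ -h_2(t)\srix=\bar y(t) - \bar y_r(t).
\end{align*}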
Now, we derive the differential equations for $\hat x_\mp$ and $\hat x_\pm$. Using (\ref{inverserep}), we find that \begin{align}\nonumber
\smat {A}_{21}&{A}_{22}& A_{23}\\{A}_{31}&{A}_{32}& A_{33} \srix \smat x_r \\ -h_1\\-h_2\srix &= \smat {A}_{21}\\ {A}_{31}\srix x_r -\smat {A}_{22}& A_{23}\\{A}_{32}& A_{33} \srix h\\ &= \smat {A}_{21}\\ {A}_{31}\srix x_r -\smat {A}_{22}& A_{23}\\{A}_{32}& A_{33} \srix \smat {A}_{22}& A_{23}\\{A}_{32}& A_{33} \srix^{-1} \left(\smat {A}_{21}\\{A}_{31}\srix x_r+\smat {B}_{2}\\{B}_{3}\srix u\right)\nonumber \\ & = -\smat {B}_{2}\\{B}_{3}\srix u.\label{reltobeusef}
\end{align} Applying the first line of (\ref{reltobeusef}), we obtain the following equation \begin{align}\nonumber d 0 &=\left[\smat {A}_{21}&{A}_{22}& A_{23}\srix\smat x_r \\ -h_1\\ -h_2\srix+ {B}_{2} u - \hat c_0+\sum_{k=1}^m \smat{N}_{k, 21}&{N}_{k, 22}&{N}_{k, 23} \srix \smat x_r \\ -h_1\\ -h_2\srix u_k\right]dt\\ \label{zerosde} &\quad+\sum_{i=1}^v \left[\smat{H}_{i, 21}&{H}_{i, 22}&{H}_{i, 23} \srix \smat x_r \\ -h_1\\ -h_2\srix - \hat c_i\right] dM_i, \end{align} where $\hat c_0=\sum_{k=1}^m \smat{N}_{k, 21}&{N}_{k, 22}&{N}_{k, 23} \srix \smat x_r \\ -h_1\\ -h_2\srix u_k$ and $\hat c_i=\smat{H}_{i, 21}&{H}_{i, 22}&{H}_{i, 23} \srix \smat x_r \\ -h_1\\ -h_2\srix$ for $i=1, \ldots, v$. We supplement (\ref{stateromsig2sig3}) with (\ref{zerosde}) and combine this with (\ref{stateromsig3}). Hence, we obtain \begin{align}\label{xminplu2} d \hat x_-&=\left[\hat A \hat x_\mp+ \smat 0\\ \hat c_0\srix +\sum_{k=1}^m \hat {N}_{k} \hat x_\mp u_k\right]dt +\sum_{i=1}^v \left[\hat {H}_{i} \hat x_\mp + \smat 0\\ \hat c_i\srix\right] dM_i, \end{align} where $\hat x_- = \smat x_1 - x_r \\ x_2\srix$ and furthermore \begin{align}\label{xplusmin2} d \hat x_+&=\left[\hat A \hat x_\pm+ 2 \hat B u - \smat 0\\ \hat c_0\srix +\sum_{k=1}^m \hat {N}_{k} \hat x_\pm u_k\right]dt +\sum_{i=1}^v \left[\hat {H}_{i} \hat x_\pm - \smat 0\\ \hat c_i\srix\right] dM_i, \end{align} where $\hat x_+ = \smat x_1 + x_r \\ x_2\srix$. We now state the output error between the systems (\ref{romnegsig3}) and (\ref{romnegsig2sig3}) for the case that the ROMs are neighboring, i.e., the larger model has exactly one HSV more than the smaller one.
\begin{theorem}\label{mainthm2} Let $\bar y$ be the output of the ROM (\ref{romnegsig3}), $\bar y_r$ be the output of the ROM (\ref{romnegsig2sig3}) and $\Sigma_2=\sigma I$, $\sigma>0$, in (\ref{finepartsig}). Then, it holds that \begin{align*}
\left(\mathbb E\left\|\bar y-\bar y_r\right\|_{L^2_{T}}^2\right)^{\frac{1}{2}}\leq 2 \sigma \left\|u\right\|_{L^2_T}\exp\left(0.5 \left\|u^0\right\|_{L^2_T}^2\right).
\end{align*} \begin{proof} We make use of equations (\ref{xminplu2}) and (\ref{xplusmin2}) in order to prove this bound. We set $\hat \Sigma= \smat \Sigma_1& \\ & \Sigma_2\srix$ as a submatrix of $\Sigma$ in (\ref{finepartsig}). Lemma \ref{lemstochdiff} now yields \begin{align}\label{productruleapplied2} \mathbb E\left[\hat x_-^T(t)\hat \Sigma \hat x_-(t)\right]=&2 \int_0^t\mathbb E\left[\hat x_-^T\hat \Sigma\left(\hat A\hat x_{\mp}+\sum_{k=1}^m (\hat N_{k} \hat x_{\mp} u_k) + \smat 0 \\ \hat c_0\srix \right)\right]ds\\ \nonumber &+ \int_0^t\sum_{i, j=1}^v \mathbb E\left[\left(\hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\left(\hat H_{j} \hat x_{\mp} + \smat 0 \\ \hat c_j\srix\right)\right]k_{ij} ds. \end{align} We see that the right side of (\ref{productruleapplied2}) contains the submatrices $\hat A, \hat B, \hat H, \hat N$ and $\hat \Sigma$. In order to be able to refer to the full matrix inequality (\ref{balancedobserve}), we derive upper bounds below for certain terms involving the full matrices $A, B, H, N$ and $\Sigma$. With the same estimate as in (\ref{Nkestimate}) and the control vector $u^0$ defined in Theorem \ref{mainthmintro}, we have \begin{align*}
\sum_{k=1}^m 2 \hat x_-^T(s)\hat \Sigma \hat N_{k} \hat x_{\mp}(s) u_k(s)\leq \hat x_-^T(s) \hat \Sigma \hat x_-(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m \hat x_{\mp}^T(s) \hat N_k^T\hat \Sigma \hat N_{k} \hat x_{\mp}(s). \end{align*} Adding the term $\sum_{k=1}^m \left(\smat{N}_{k, 31}&{N}_{k, 32}&{N}_{k, 33} \srix \hat x_{\mp}(s)\right)^T \Sigma_3 \smat{N}_{k, 31}&{N}_{k, 32}&{N}_{k, 33} \srix \hat x_{\mp}(s)$ to the right side of this inequality results in \begin{align}\label{Nkestimate2}
\sum_{k=1}^m 2 \hat x_-^T(s)\hat \Sigma \hat N_{k} \hat x_{\mp}(s) u_k(s)\leq \hat x_-^T(s) \hat \Sigma \hat x_-(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m \hat x_{\mp}^T(s) N_k^T \Sigma N_{k} \hat x_{\mp}(s). \end{align} Moreover, it holds that \begin{align*} \hat x_{\mp}^T (A^T\Sigma+\Sigma A)\hat x_{\mp} &= 2 \hat x_{\mp}^T \Sigma A\hat x_{\mp} \\ &= 2 \smat x_1-x_r\\x_2+h_1 \srix^T \hat\Sigma \hat A\hat x_{\mp} + 2 (\tilde x_3+h_2)^T \Sigma_3\smat {A}_{31}&{A}_{32}& A_{33}\srix\hat x_{\mp}. \end{align*} We derive $\smat {A}_{31}&{A}_{32}& A_{33}\srix \smat x_{1}\\ x_{2}\\ \tilde x_{3}\srix=-B_3u$ by the definition of $\tilde x_3$. Moreover, it can be seen from the second line of (\ref{reltobeusef}) that $\smat {A}_{31}&{A}_{32}& A_{33}\srix\hat x_{\mp}=0$. Hence, \begin{align}\label{estimateforea} \hat x_{\mp}^T (A^T\Sigma+\Sigma A)\hat x_{\mp} = 2 \hat x_-^T \hat\Sigma \hat A\hat x_{\mp} + 2\smat 0\\ h_1\srix^T \hat \Sigma \hat A\hat x_{\mp}. \end{align} It remains to find a suitable upper bound related to the expression depending on $\hat H_i$. We first of all see that \begin{align*}
&\sum_{i, j=1}^v \left(\hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\left(\hat H_{j} \hat x_{\mp} + \smat 0 \\ \hat c_j\srix\right)k_{ij}\\
&= \hat x_{\mp}^T\sum_{i, j=1}^v \hat H^T_{i}\hat \Sigma \hat H_{j}k_{ij} \hat x_{\mp} + \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\smat 0 \\ \hat c_j\srix k_{ij}. \end{align*} The term $\sum_{i, j=1}^v \left(\smat{H}_{i, 31}&{H}_{i, 32}&{H}_{i, 33} \srix \hat x_{\mp}(s)\right)^T \Sigma_3 \smat{H}_{j, 31}&{H}_{j, 32}&{H}_{j, 33} \srix \hat x_{\mp}(s) k_{ij}$ is nonnegative through Lemma \ref{proppossemidef}. Adding this term to the right side of the above equation yields \begin{align}\label{estimatereltoh}
&\sum_{i, j=1}^v \left(\hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\left(\hat H_{j} \hat x_{\mp} + \smat 0 \\ \hat c_j\srix\right)k_{ij}\\ \nonumber
&\leq \hat x_{\mp}^T\sum_{i, j=1}^v H^T_{i} \Sigma H_{j}k_{ij} \hat x_{\mp} + \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\smat 0 \\ \hat c_j\srix k_{ij}. \end{align} Applying (\ref{Nkestimate2}), (\ref{estimateforea}) and (\ref{estimatereltoh}) to (\ref{productruleapplied2}) results in {\allowdisplaybreaks\begin{align}\nonumber \mathbb E\left[\hat x_-^T(t)\hat \Sigma \hat x_-(t)\right]\leq& \mathbb E\int_0^t \hat x_{\mp}^T\left(A^T \Sigma+\Sigma A+\sum_{k=1}^m N_k^T\Sigma N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma H_{j}k_{ij}\right)\hat x_{\mp} ds\\ \label{insetobserveequation12} & +\mathbb E\int_0^t 2 \hat x_-^T\hat \Sigma \smat 0 \\ \hat c_0\srix + \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T \hat \Sigma\smat 0 \\ \hat c_j\srix k_{ij} ds\\
&+\int_0^t \mathbb E\left[\hat x_-^T \hat \Sigma \hat x_-\right] \left\|u^0\right\|_{2}^2 ds - \mathbb E \int_0^t 2 \smat 0 \\ h_1\srix^T \hat \Sigma \hat A \hat x_{\mp}ds.\nonumber \end{align}} Using that $\hat c_i=\smat{H}_{i, 21}&{H}_{i, 22}&{H}_{i, 23} \srix \smat x_r \\ -h_1\\ -h_2\srix$, we have \begin{align}\nonumber &\left(2 \hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma\smat 0 \\ \hat c_j\srix =\left(2 \hat H_{i} \hat x_{\mp} + \smat 0 \\ \hat c_i\srix\right)^T\smat 0 \\ \Sigma_2 \hat c_j\srix\\ \nonumber &=\left(2 \smat H_{i, 21} & H_{i, 22} & H_{i, 23}\srix (\smat x_1\\ x_2 \\ \tilde x_3\srix - \smat x_r \\ -h_1\\-h_2\srix) + \hat c_i\right)^T \Sigma_2 \hat c_j\\ \label{blaconfH}
&= \left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix - \hat c_i\right)^T \Sigma_2 \hat c_j. \end{align} It can be seen further that \begin{align}\nonumber -2\smat 0\\ h_1\srix^T \hat \Sigma \hat A\hat x_{\mp}&=-2 \smat 0 & h_1^T\Sigma_2 \srix \hat A\hat x_{\mp}=-2 h_1^T\Sigma_2 \smat A_{21} & A_{22}& A_{23}\srix (\smat x_1\\ x_2 \\ \tilde x_3\srix +\smat -x_r \\ h_1\\h_2\srix)\\ &=-2 h_1^T\Sigma_2 (\smat A_{21} & A_{22}& A_{23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix + B_2u)\label{relforA2} \end{align} taking the first line of (\ref{reltobeusef}) into account. Inserting (\ref{blaconfH}) and (\ref{relforA2}) into (\ref{insetobserveequation12}) and using the fact that $2 \hat x_-^T\hat \Sigma \smat 0 \\ \hat c_0\srix= 2 x_2^T \Sigma_2 \hat c_0 $ leads to {\allowdisplaybreaks\begin{align}\label{insetobserveequation122}
\mathbb E\left[\hat x_-^T(t)\hat \Sigma \hat x_-(t)\right]& \leq \int_0^t \mathbb E\left[\hat x_-^T \hat \Sigma \hat x_-\right] \left\|u^0\right\|_{2}^2 ds +\hat \alpha_-(t)\\ \nonumber &+\mathbb E\int_0^t \hat x_{\mp}^T\left(A^T \Sigma+\Sigma A+\sum_{k=1}^m N_k^T\Sigma N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma H_{j}k_{ij}\right)\hat x_{\mp} ds, \end{align}} where we set $\hat \alpha_-(t):=\mathbb E\int_0^t 2 x_2^T \Sigma_2 \hat c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix - \hat c_i\right)^T \Sigma_2 \hat c_j k_{ij} ds -\mathbb E\int_0^t 2 h_1^T\Sigma_2 (\smat A_{21} & A_{22}& A_{23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix + B_2u) ds$. With (\ref{balancedobserve}) and (\ref{outputerrorbrom}), we obtain \begin{align*}
\mathbb E\left[\hat x_-^T(t)\hat \Sigma \hat x_-(t)\right]& \leq \int_0^t \mathbb E\left[\hat x_-^T \hat \Sigma \hat x_-\right] \left\|u^0\right\|_{2}^2 ds +\hat \alpha_-(t)- \mathbb E\left\|\bar y-\bar y_r\right\|_{L^2_{t}}^2. \end{align*} Applying Lemma \ref{gronwall} to this inequality yields \begin{align*}
\mathbb E\left[\hat x_-^T(t)\hat \Sigma \hat x_-(t)\right]\leq& \hat \alpha_-(t)- \mathbb E\left\|\bar y-\bar y_r\right\|_{L^2_{t}}^2\\
& +\int_0^t \hat \alpha_-(s)\left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds. \end{align*} Since the left side of the above inequality is nonnegative, we obtain \begin{align*}
&\mathbb E\left\|\bar y - \bar y_r\right\|_{L^2_{t}}^2\\ &\leq \hat \alpha_-(t)
+\int_0^t \hat \alpha_-(s)\left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds. \end{align*} We exploit that $\Sigma_2=\sigma I$. Hence, we have \begin{align}\label{boundmitalpmin}
&\mathbb E\left\|\bar y - \bar y_r\right\|_{L^2_{t}}^2\\ \nonumber&\leq \sigma^2\left(\hat \alpha_+(t)
+\int_0^t \hat \alpha_+(s)\left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds\right), \end{align} where we set $\hat \alpha_+(t):=\mathbb E\int_0^t 2 x_2^T \Sigma_2^{-1} \hat c_0 + \sum_{i, j=1}^v \left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix - \hat c_i\right)^T \Sigma_2^{-1} \hat c_j k_{ij} ds -\mathbb E\int_0^t 2 h_1^T\Sigma_2^{-1} (\smat A_{21} & A_{22}& A_{23}\srix \smat x_1\\ x_2 \\ \tilde x_3\srix + B_2u) ds$. In order to find a suitable bound for the right side of (\ref{boundmitalpmin}), Ito's lemma is applied to $\mathbb E[\hat x_+^T(t) \hat \Sigma^{-1}\hat x_+(t)]$. Due to (\ref{xplusmin2}) and Lemma \ref{lemstochdiff}, we obtain \begin{align}\label{productruleappliedplus22} \mathbb E\left[\hat x_+^T(t)\hat \Sigma^{-1} \hat x_+(t)\right]=&2 \int_0^t\mathbb E\left[\hat x_+^T\hat \Sigma^{-1}\left(\hat A\hat x_{\pm}+2 \hat Bu+\sum_{k=1}^m (\hat N_{k} \hat x_{\pm} u_k) - \smat 0 \\ \hat c_0\srix \right)\right]ds\\ \nonumber &+ \int_0^t\sum_{i, j=1}^v \mathbb E\left[\left(\hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma^{-1}\left(\hat H_{j} \hat x_{\pm} - \smat 0 \\ \hat c_j\srix\right)\right]k_{ij} ds. \end{align} Analogously to (\ref{Nkestimate2}), it holds that \begin{align}\label{blaNestimatg} &\sum_{k=1}^m 2 \hat x_+^T(s)\hat\Sigma^{-1} \hat N_{k} \hat x_{\pm}(s) u_k(s) \\ \nonumber
&\leq \hat x_+^T(s) \hat \Sigma^{-1} \hat x_+(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m \hat x_{\pm}^T(s) \hat N_k^T\hat \Sigma^{-1} \hat N_{k} \hat x_{\pm}(s)\\ \nonumber
&\leq \hat x_+^T(s) \hat \Sigma^{-1} \hat x_+(s) \left\|u^0(s)\right\|_{2}^2 +\sum_{k=1}^m \hat x_{\pm}^T(s) N_k^T \Sigma^{-1} N_{k} \hat x_{\pm}(s). \end{align} Furthermore, we see that \begin{align*} &\hat x_{\pm}^T (A^T\Sigma^{-1}+\Sigma^{-1} A)\hat x_{\pm}+4\hat x_{\pm}^T \Sigma^{-1} Bu = 2 \hat x_{\pm}^T \Sigma^{-1} (A\hat x_{\pm}+2Bu) \\ &= 2 \smat x_1 + x_r\\x_2 - h_1 \srix^T \hat\Sigma^{-1} (\hat A\hat x_{\pm}+2\hat B u) + 2 (\tilde x_3 - h_2)^T \Sigma_3^{-1}(\smat {A}_{31}&{A}_{32}& A_{33}\srix\hat x_{\pm}+2B_3u). \end{align*} Since $\smat {A}_{31}&{A}_{32}& A_{33}\srix \smat x_{1}\\ x_{2}\\ \tilde x_{3}\srix= \smat {A}_{31}&{A}_{32}& A_{33}\srix \smat x_r\\ -h_1\\ -h_2\srix =-B_3u$ by the definition of $\tilde x_3$ and the second line of (\ref{reltobeusef}), we obtain $\smat {A}_{31}&{A}_{32}& A_{33}\srix\hat x_{\pm}=-2B_3 u$. Thus, \begin{align}\label{estimateforea22} &\hat x_{\pm}^T (A^T\Sigma^{-1}+\Sigma^{-1} A)\hat x_{\pm}+4\hat x_{\pm}^T \Sigma^{-1} Bu \\ \nonumber &= 2 \hat x_+^T \hat\Sigma^{-1} (\hat A\hat x_{\pm}+2\hat B u) + 2\smat 0\\ -h_1\srix^T \hat \Sigma^{-1} (\hat A\hat x_{\pm}+2\hat B u). \end{align} Finally, we see that \begin{align}\label{relHterms}
&\sum_{i, j=1}^v \left(\hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma^{-1}\left(\hat H_{j} \hat x_{\pm} - \smat 0 \\ \hat c_j\srix\right)k_{ij}\\ \nonumber
&= \hat x_{\pm}^T\sum_{i, j=1}^v \hat H^T_{i}\hat \Sigma^{-1} \hat H_{j}k_{ij} \hat x_{\pm} - \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma^{-1}\smat 0 \\ \hat c_j\srix k_{ij}\\\nonumber
&\leq \hat x_{\pm}^T\sum_{i, j=1}^v H^T_{i} \Sigma^{-1} H_{j}k_{ij} \hat x_{\pm} - \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma^{-1}\smat 0 \\ \hat c_j\srix k_{ij} \end{align} by applying Lemma \ref{proppossemidef}. With (\ref{blaNestimatg}), (\ref{estimateforea22}) and (\ref{relHterms}), equation (\ref{productruleappliedplus22}) becomes \begin{align}\nonumber &\mathbb E\left[\hat x_+^T(t)\hat \Sigma^{-1} \hat x_+(t)\right]\\ \nonumber&\leq \mathbb E\int_0^t \hat x_{\pm}^T\left(A^T \Sigma^{-1}+\Sigma^{-1} A+\sum_{k=1}^m N_k^T\Sigma^{-1} N_{k} +\sum_{i, j=1}^v H^T_{i}\Sigma^{-1} H_{j}k_{ij}\right)\hat x_{\pm} ds\\ \label{insetobserveequation22} &\quad -\mathbb E\int_0^t 2 \hat x_+^T\hat \Sigma^{-1} \smat 0 \\ \hat c_0\srix + \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T \hat \Sigma^{-1}\smat 0 \\ \hat c_j\srix k_{ij} ds\\ &\quad+ \mathbb E \int_0^t 2\smat 0 \\ h_1\srix^T \hat\Sigma^{-1} (\hat A\hat x_{\pm}+2 \hat Bu) ds + \mathbb E \int_0^t 4 \hat x_{\pm}^T\Sigma^{-1} Bu ds\nonumber \\
&\quad+\int_0^t \mathbb E\left[\hat x_+^T \hat \Sigma^{-1} \hat x_+\right] \left\|u^0\right\|_{2}^2 ds. \nonumber \end{align} Similar to (\ref{ugrosserlyap}), we obtain \begin{align*}
&4 \left\|u\right\|_{2}^2\geq \\ & \hat x_{\pm}^T\left(A^T \Sigma^{-1}+\Sigma^{-1} A+\sum_{k=1}^m N_k^T\Sigma^{-1} N_{k}+\sum_{i, j=1}^v H^T_{i}\Sigma^{-1} H_{j}k_{ij}\right)\hat x_{\pm}+4 \hat x_{\pm}^T\Sigma^{-1} Bu. \end{align*} This leads to \begin{align}\nonumber
&\mathbb E\left[\hat x_+^T(t)\hat \Sigma^{-1} \hat x_+(t)\right]\\ \nonumber&\leq 4 \left\|u\right\|_{L^2_t}^2 +\int_0^t \mathbb E\left[\hat x_+^T \hat \Sigma^{-1} \hat x_+\right] \left\|u^0\right\|_{2}^2 ds\\ \label{insetobserveezzquation22} &\quad -\mathbb E\int_0^t 2 \hat x_+^T\hat \Sigma^{-1} \smat 0 \\ \hat c_0\srix + \sum_{i, j=1}^v \left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T \hat \Sigma^{-1}\smat 0 \\ \hat c_j\srix k_{ij} ds\\ &\quad+ \mathbb E \int_0^t 2\smat 0 \\ h_1\srix^T \hat\Sigma^{-1} (\hat A\hat x_{\pm}+2 \hat Bu) ds.\nonumber \end{align} In the following (\ref{insetobserveezzquation22}) is expressed by terms depending on $\Sigma_2$. We obtain $\hat x_+^T\hat \Sigma^{-1} \smat 0 \\ \hat c_0\srix=x_2^T\Sigma_2^{-1} \hat c_0$ exploiting the partitions of $\hat x_+$ and $\hat \Sigma$. The terms depending on $\hat H_i$ become \begin{align} \nonumber &-\sum_{i, j=1}^v\left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\hat \Sigma^{-1}\smat 0 \\ \hat c_j\srix k_{ij} =-\sum_{i, j=1}^v\left(2 \hat H_{i} \hat x_{\pm} - \smat 0 \\ \hat c_i\srix\right)^T\smat 0 \\ \Sigma_2^{-1} \hat c_j\srix k_{ij}\\ \nonumber &=-\sum_{i, j=1}^v\left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix (\smat x_1 \\ x_2\\\tilde x_3\srix + \smat x_r \\ -h_1\\-h_2\srix) - \hat c_i\right)^T \Sigma_2^{-1} \hat c_j k_{ij}\\ \nonumber &= -\sum_{i, j=1}^v\left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix \smat x_1 \\ x_2\\\tilde x_3\srix + \hat c_i\right)^T \Sigma_2^{-1} \hat c_j k_{ij}\\ \label{letzesh} &\leq -\sum_{i, j=1}^v\left(2 \smat H_{i, 21} & H_{i, 22}& H_{i, 23}\srix \smat x_1 \\ x_2\\\tilde x_3\srix - \hat c_i\right)^T \Sigma_2^{-1} \hat c_j k_{ij} \end{align} adding $\sum_{i, j=1}^v \hat c_i^T\Sigma_2^{-1} \hat c_j k_{ij}$ which is positive due to Lemma \ref{proppossemidef}. 
Furthermore, using the first line of (\ref{reltobeusef}), it holds that \begin{align}\nonumber &2\smat 0 \\ h_1\srix^T \hat \Sigma^{-1} (\hat A\hat x_{\pm}+2 \hat Bu) = 2\smat 0 & h_1^T\Sigma_2^{-1}\srix (\hat A\hat x_{\pm}+2 \hat Bu)\\ \nonumber &= 2 h_1^T\Sigma_2^{-1} (\smat A_{21} & A_{22}&A_{23}\srix (\smat x_1 \\ x_2\\\tilde x_3\srix + \smat x_r \\ -h_1\\-h_2\srix)+2 B_2u) \\ \label{letztesa} &= 2 h_1^T\Sigma_2^{-1} (\smat A_{21} & A_{22}&A_{23}\srix \smat x_1 \\ x_2\\\tilde x_3\srix + B_2u).
\end{align} We insert (\ref{letzesh}) and (\ref{letztesa}) into (\ref{insetobserveezzquation22}) and obtain \begin{align*}
\mathbb E\left[\hat x_+^T(t)\hat \Sigma^{-1} \hat x_+(t)\right]\leq 4 \left\|u\right\|_{L^2_t}^2 +\int_0^t \mathbb E\left[\hat x_+^T \hat \Sigma^{-1} \hat x_+\right] \left\|u^0\right\|_{2}^2 ds-\hat \alpha_+(t). \end{align*} With Lemma \ref{gronwall}, analogously to (\ref{analoggronwallest}), we find \begin{align}\label{analoggronwallest22}
&\hat\alpha_+(t)+\int_0^t \hat \alpha_+(s) \left\|u^0(s)\right\|_{2}^2 \exp\left(\int_s^t \left\|u^0(w)\right\|_{2}^2 dw\right) ds\\ \nonumber
&\leq 4\left\|u\right\|_{L^2_t}^2 \exp\left(\int_0^t \left\|u^0(s)\right\|_2^2ds\right). \end{align} The relations (\ref{boundmitalpmin}) and (\ref{analoggronwallest22}) yield the claim. \end{proof} \end{theorem}
\subsection{Proof of Theorem \ref{mainthmintro}}\label{proofmainthm} We apply the results in Theorems \ref{mainthm} and \ref{mainthm2}. We remove the HSVs step by step and exploit the triangle inequality in order to bound the error between the outputs $y$ and $y_r$. We have \begin{align*}
&\left(\mathbb E \left\|y-y_r\right\|^2_{L^2_T}\right)^{\frac{1}{2}}\\&\leq
\left(\mathbb E\left\|y-y_{r_\nu}\right\|^2_{L^2_T}\right)^{\frac{1}{2}}+\left(\mathbb E\left\|y_{r_\nu}-y_{r_{\nu-1}}\right\|^2_{L^2_T}\right)^{\frac{1}{2}}+\ldots
+\left(\mathbb E\left\|y_{r_2}-y_{r}\right\|^2_{L^2_T}\right)^{\frac{1}{2}},
\end{align*} where $y_{r_i}$ are the outputs of the ROMs with dimensions $r_i$ defined by $r_{i+1}=r_{i}+m(\tilde\sigma_{i})$ for $i=1, 2, \ldots, \nu-1$. Here, $m(\tilde\sigma_{i})$ denotes the multiplicity of $\tilde\sigma_{i}$ and $r_1=r$. In the reduction step from $y$ to $y_{r_\nu}$, only the smallest HSV $\tilde\sigma_\nu$ is removed from the system. Hence, by Theorem \ref{mainthm}, we have \begin{align*}
\left(\mathbb E \left\|y-y_{r_\nu}\right\|^2_{L^2_T}\right)^{\frac{1}{2}}\leq 2 \tilde\sigma_\nu \left\|u\right\|_{L^2_T}\exp\left(0.5 \left\|u^0\right\|_{L^2_T}^2\right). \end{align*} The ROMs of the outputs $y_{r_j}$ and $y_{r_{j-1}}$ are neighboring according to Section \ref{secneigeb}, i.e., only the HSV $\tilde\sigma_{r_{j-1}}$ is removed in the reduction step. By Theorem \ref{mainthm2}, we obtain \begin{align*}
\left(\mathbb E \left\|y_{r_j}-y_{r_{j-1}}\right\|^2_{L^2_T}\right)^{\frac{1}{2}}\leq 2 \tilde\sigma_{r_{j-1}} \left\|u\right\|_{L^2_T} \exp\left(0.5 \left\|u^0\right\|_{L^2_T}^2\right)
\end{align*} for $j=2, \ldots, \nu $. This provides the claimed result.
\section{Conclusions} In this paper, we investigated a large-scale stochastic bilinear system. In order to reduce the state space dimension, a model order reduction technique called singular perturbation approximation was extended to this setting. This method is based on the Gramians proposed in \cite{redstochbil}, which characterize how much a state contributes to the system dynamics. This choice of Gramians, as well as the structure of the reduced system, differs from that in \cite{hartmann}. With this modification, we provided a new $L^2$-error bound that identifies the cases in which the reduced order model obtained by singular perturbation approximation delivers a good approximation to the original model. This error bound is new even for deterministic bilinear systems.
\appendix
\section{Supporting Lemmas}
In this appendix, we state three important results and the corresponding references that we frequently use throughout this paper. \begin{lemma}\label{lemstochdiff} Let $a, b_1, \ldots, b_v$ be $\mathbb R^d$-valued processes, where $a$ is $\left(\mathcal F_t\right)_{t\geq 0}$-adapted and almost surely Lebesgue integrable and the functions $b_i$ are integrable with respect to the mean zero square integrable L\'evy process $M=(M_1, \ldots, M_v)^T$ with covariance matrix $K=\left(k_{ij}\right)_{i, j=1, \ldots, v}$. If the process $x$ is given by \begin{align*}
dx(t)=a(t) dt+ \sum_{i=1}^v b_i(t)dM_i,
\end{align*} then, we have \begin{align*}
\frac{d}{dt}\mathbb E\left[x^T(t) x(t)\right]=2 \mathbb E\left[x^T(t) a(t)\right] + \sum_{i, j=1}^v \mathbb E\left[b_i^T(t) b_j(t)\right]k_{ij}.
\end{align*} \begin{proof} We refer to \cite[Lemma 5.2]{redmannspa2} for a proof of this lemma. \end{proof} \end{lemma} \begin{lemma}\label{proppossemidef} Let $A_1, \ldots, A_v$ be $d_1\times d_2$ matrices and $K=(k_{ij})_{i, j=1, \ldots, v}$ be a positive semidefinite matrix, then \begin{align*}\tilde K:=\sum_{i,j=1}^v A_i^T A_j k_{ij} \end{align*} is also positive semidefinite.
\begin{proof}
The proof can be found in \cite[Proposition 5.3]{redmannspa2}.
\end{proof} \end{lemma}
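As an illustrative numerical sanity check of this lemma (not part of the original proof), one can draw random matrices $A_1,\ldots,A_v$ and a random positive semidefinite $K$ and verify that $\tilde K$ has no negative eigenvalues:

```python
import numpy as np

# Random A_1, ..., A_v and a random positive semidefinite K
rng = np.random.default_rng(0)
v, d1, d2 = 4, 3, 5
A = [rng.standard_normal((d1, d2)) for _ in range(v)]
M = rng.standard_normal((v, v))
K = M @ M.T  # positive semidefinite by construction

# tilde K = sum_{i,j} A_i^T A_j k_{ij}
K_tilde = sum(K[i, j] * A[i].T @ A[j] for i in range(v) for j in range(v))

# K is symmetric, hence tilde K is symmetric; check its smallest eigenvalue
eigs = np.linalg.eigvalsh(0.5 * (K_tilde + K_tilde.T))
print(eigs.min())  # nonnegative up to rounding error
```

The underlying argument is that $x^T \tilde K x = \sum_{i,j} k_{ij}\, (A_i x)^T (A_j x) \geq 0$, since for each coordinate this is a quadratic form of the positive semidefinite $K$.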
\begin{lemma}[Gronwall lemma]\label{gronwall} Let $T>0$, let $z, \alpha: [0, T]\rightarrow \mathbb R$ be measurable and bounded functions and let $\beta: [0, T]\rightarrow \mathbb R$ be a nonnegative integrable function. If \begin{align*}
z(t)\leq \alpha(t)+\int_0^t \beta(s) z(s) ds,
\end{align*} then it holds that \begin{align*}
z(t)\leq \alpha(t)+\int_0^t \alpha(s)\beta(s) \exp\left(\int_s^t \beta(w)dw\right) ds
\end{align*} for all $t\in[0, T]$.
\begin{proof}
The result is shown as in \cite[Proposition 2.1]{gronwalllemma}.
\end{proof} \end{lemma}
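For illustration (with $\alpha$ and $\beta$ chosen freely, not taken from the paper), one can check the lemma numerically: the function $z$ solving the integral equation $z = \alpha + \int_0^t \beta z\, ds$ with equality attains the Gronwall bound exactly, so an Euler solution of that equation should agree with the right-hand side up to discretization error:

```python
import numpy as np

# Illustrative choices of alpha and beta (assumptions for this sketch)
alpha = lambda t: 1.0 + 0.1 * np.sin(t)
beta = lambda t: 0.5 * (1.0 + np.cos(t))

T, n = 2.0, 100_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

# Euler scheme for I' = beta (alpha + I), I(0) = 0, so that z = alpha + I
# solves the equality case z = alpha + int_0^t beta z ds
I = 0.0
for k in range(n):
    I += dt * beta(t[k]) * (alpha(t[k]) + I)
z_T = alpha(T) + I

# Gronwall bound: alpha(T) + int_0^T alpha(s) beta(s) exp(int_s^T beta(w) dw) ds
B = np.cumsum(beta(t)) * dt  # B[k] approximates int_0^{t_k} beta
integrand = alpha(t) * beta(t) * np.exp(B[-1] - B)
bound_T = alpha(T) + dt * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])

print(z_T, bound_T)  # the two values agree up to discretization error
```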
\end{document}
Evaluate $\left\lceil\sqrt{27}\right\rceil - \left\lfloor\sqrt{26}\right\rfloor$.
Because $\sqrt{25}<\sqrt{26}<\sqrt{27}<\sqrt{36}$, we have $\left\lceil\sqrt{27}\right\rceil=6$ and $\left\lfloor\sqrt{26}\right\rfloor=5$. The expression thus evaluates to $6-5=\boxed{1}$.
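The evaluation above can be confirmed with a one-line computation:

```python
import math

# ceil(sqrt(27)) = 6 and floor(sqrt(26)) = 5, since 5 < sqrt(26) < sqrt(27) < 6
val = math.ceil(math.sqrt(27)) - math.floor(math.sqrt(26))
print(val)  # 1
```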
geometry higher-category-theory homotopy-theory quantum-field-theory
nLab > Latest Changes: cobordism hypothesis
CommentTimeNov 20th 2012
added at cobordism hypothesis a pointer to
Yonatan Harpaz, The Cobordism Hypothesis in Dimension 1 (arXiv:1210.0229)
where the case for (∞,1)-categories is spelled out and proven in detail.
CommentTimeFeb 17th 2014
At cobordism hypothesis the section titled For noncompact cobordisms used to contain nothing but a link to Calabi-Yau object. I have now added a few lines of text at least, trying to convey the rough idea.
(edited Feb 17th 2014)
added also a section For cobordisms with singuarities (boundaries/branes and defects/domain walls) with just a few lines on the idea for the moment, just enough to highlight theorem 4.3.11.
(Which is easily missed; much of the best magic happens on the last 10 pages of 111…)
What would be the established term for these "diagrams indicating types of singularities" from which, according to the cobordism-with-singularities hypothesis/theorem, the cobordism-with-singularities (∞,n)-category is freely generated as a symmetric monoidal (∞,n)-category with all duals?
So I mean for instance the simple diagram
0 ⟶ ∗
indicating a domain wall separating the left phase ("0") from the right ("∗"). The archetypical path in a fundamental category crossing a stratum.
May these be called "catastrophe diagrams"? Is there any half-way established term available?
CommentAuthorDavid_Corfield
So paths which go up and down through the strata, like we discussed once? I wonder if there's a term in a paper like Diagrammatics, Singularities, and their Algebraic Interpretations.
Yes, paths, and then higher dimensional paths.
Thanks for pointing out that reference again. I skimmed through it, but I am not sure if it has the kind of term I am looking for (maybe it doesn't exist).
In any case, I have added a pointer to the reference to 3d TQFT and to 4d TQFT.
Taking a look at the workshop you're attending, I see Catherine Meusburger is speaking on 'Diagrams for Gray categories with duals', which is based on Gray categories with duals and their diagrams. Todd gets a mention:
The definition of a diagrammatic calculus for Gray categories follows the pattern for categories and 2-categories. The diagrams are a three-dimensional generalisation of the two-dimensional diagrams defined above, and were previously studied informally by Trimble [31].
CommentTimeSep 2nd 2014
(edited Sep 2nd 2014)
added to cobordism hypothesis in the section on the framed version a brief paragraph Implications – The canonical O(n)-action on fully dualizable objects
(this statement used to be referred to further below in the entry, but wasn't actually stated)
Added corresponding cross-pointers to dual object (n = 1) and to orthogonal spectrum (n = ∞). Somebody needs to create an entry for Serre automorphism (n = 2).
CommentTimeSep 4th 2014
How does that result fit with the (ℤ_2)^n automorphism ∞-group of (∞,n)-Cat? What are the fully dualizable objects of the latter?
On the other hand, what would happen if we opted instead for a profunctor or span-like approach? Objects there are generally fully dualizable?
I see just out there is Rune Haugseng's Iterated spans and "classical" topological field theories. Urs and Joost get cited.
In §6 we then prove that Span_n(C) is symmetric monoidal and that all its objects are fully dualizable
Oh, but
Conjecture 1.3 (Lurie). The O(k)-action on the underlying ∞-groupoid of Span_k(C) is trivial, for all ∞-categories C with finite limits.
Anyway, I'm more interested with my first question.
Christopher Schommer-Pries has some useful notes – Dualizability in Low-Dimensional Higher Category Theory.
I guess it'll only be the terminal object which is dualizable.
Thanks for highlighting Haugseng's article! Would have missed that otherwise.
Have added pointers to his results to (infinity,n)-category of correspondences and elsewhere.
CommentTimeOct 13th 2014
added a tad more on the definition of cobordisms with (X,ξ)-structure and then added in particular the proof idea of how the cobordism hypothesis for (X,ξ)-structure follows from the framed case
added now also the proof of 2.4.26 from the (X,ξ)-version, i.e. the reduction to the special case that X = BG. This is of course just a straightforward corollary, but I have added a line discussing how the result is really the correct concept of homotopy invariants in the sense defined/discussed at infinity-action.
What's the status of the generalized tangle hypothesis?
This appears as theorem 4.4.4 in Lurie's writeup.
Started a section on the case of (un-)oriented field theories.
After recalling some of the statements from Lurie's article, I am after making explicit the following corollary. While being a simple corollary, this way of stating it explicitly is immensely useful for the study of unoriented local prequantum field theories. I wonder if this has been made explicit "in print" elsewhere before:
Let $Phases^\otimes \in Ab_\infty(\mathbf{H})$ be an abelian ∞-group object, regarded as a (∞,n)-category with duals internal to $\mathbf{H}$.
At least if $\mathbf{H} = $ ∞Grpd, then local unoriented-topological field theories of the form
$$Bord_n^\sqcup \longrightarrow Corr_n(\mathbf{H}_{/Phases})^{\otimes_{phased}}$$
are equivalent to a choice
of $X \in \mathbf{H}$ equipped with an O(n)-∞-action
a homomorphism of O(n)-∞-actions $L \colon X \to Phases$ (where $Phases^\otimes$ is equipped with the canonical ∞-action induced from the framed cobordism hypothesis), hence to morphisms
$$\array{ X//O(n) && \stackrel{L//O(n)}{\longrightarrow} && Phases//O(n) \\ & \searrow && \swarrow \\ && B O(n) } \,.$$
started discussion of some simple but interesting examples at local prequantum field theory – Higher CS theory – Levels.
But I am being interrupted now…
I followed up that proposition Exchanging fields for structure with a remark amplifying its relevance/meaning.
(edited Oct 24th 2014)
Given a "structure", i.e. an (X,ζ)-structure in the terminology of Lurie's writeup, and hence given $Bord_n^{(X,\zeta)}$, what is actually a direct way (i.e. not via the full cobordism hypothesis) to define the "(X,ζ)-diffeomorphism group" of an n-dimensional manifold Σ, i.e.
$$\Pi(Diff_{(X,\zeta)}(\Sigma)) \coloneqq \Omega^n_{\Sigma} Bord_n^{(X,\zeta)}$$
?
(Notice: no geometric realization on the right.)
I think I know what it is, but I am a little vague on how to formally derive this from the "definition" of $Bord_n^{(X,\zeta)}$.
I think the right answer is to form the homotopy pullback along the canonical map
$$Diff(\Sigma) \longrightarrow \mathbf{Aut}_{/BO(n)}(\Sigma)$$
of automorphisms in the "slice of the slice" over the classifying map $X \to BO(n)$ of ζ.
I have spelled this out now as def. 3.2.9 on page 34 at Local prequantum field theory (schreiber). After that definition there are spelled-out proofs that with this definition we do get the expected higher extensions of $B Diff(\Sigma)$.
So this looks right. But if anyone cares to give me a sanity check, that would be appreciated.
I have forwarded that question to MO
CommentAuthorTobias Fritz
CommentTimeJul 16th 2017
Shouldn't the statement of the cobordism theorem either assume that $\mathcal{C}$ has duals, or alternatively have the map $\mathrm{pt}^*$ land in the core of $\mathcal{C}^{\mathrm{fd}}$ rather than in the core of $\mathcal{C}$ itself?
(I would have fixed this myself if I was absolutely sure about it.)
(edited Jul 16th 2017)
Woops. Yes, you are absolutely right. I have added the qualifier "with duals" at the beginning of the Statement.
But the entry would deserve some further polishing. If you feel energetic about this at the moment, you should edit it.
Added the reference
David Ayala, John Francis, The cobordism hypothesis, (arXiv:1705.02240)
This claims to have the proof modulo conjecture 1.2, which is to appear shortly. Has it appeared?
Is there as yet a published complete proof of the cobordism hypothesis? The page says
This is almost complete, except for one step that is not discussed in detail. But a new (unpublished) result by Søren Galatius bridges that step in particular and drastically simplifies the whole proof in general.
Do we know if the status has changed?
(edited Nov 10th 2018)
Yes, it's time this was sorted. Lurie having proved it but then not supplying all details I'm sure killed off other people's incentive to work on it, since the credit had already been claimed.
Last I checked, the proof of that conjecture by Ayala-Francis has not appeared. That was a few months back and I should check again. But clearly there is no evident public announcement of the proof.
On the culture of announcing conjectures in homotopy theory as theorems, see also Clark Barwick's The future of homotopy theory (pdf)
The politically efficient way to proceed is shown by number theory: Huge excitement built up by conjectures, even if their actual content doesn't mean much to most researchers.
CommentTimeNov 2nd 2021
added pointer to today's
Daniel Grady, Dmitri Pavlov, The geometric cobordism hypothesis (arXiv:2111.01095)
By the way, the following might be interesting to compare to:
The FRS-theorem on rational 2d CFT says essentially that a rational 2d CFT is equivalently
local geometric data encoded by any of chiral algebra, vertex operator algebras or whatever one uses;
global topological data encoded, holographically, by Reshetikhin-Turaev-style 3d TQFT but with coefficients not in the category of vector space but in the modular tensor category of representations of the chiral/vertex local data.
This is somewhat reminiscent of Theorem 1.0.4 on p. 3 of Grady & Pavlov 2022, where the right hand side decomposes the geometric FQFT into local geometric data ($\mathcal{S}$) with coefficients in purely global topological data ($\mathcal{C}^\times_d$).
Are there any direct implications for physics that come to mind?
I see you were answering this in #29 as I was asking in #30.
The potential application of all extended functorial QFT to physics is in it being a non-perturbative definition of QFT. For example, one way you might go about claiming the mass gap problem is to produce a 4d extended functorial QFT which locally and perturbatively reduces to QCD or similar, and then to show that this exhibits the "mass gap".
Accordingly, the definition of EF-QFT shares with that of AQFT the issue that it provides a definition for non-perturbative QFT without however, in itself, getting us any closer to actually constructing any interesting examples.
That being so, classification results like a geometric cobordism hypothesis-theorem serve to at least break down the open problem of constructing examples into small sub-problems, which may be more tractable.
one way you might go about claiming the mass gap problem…
Is there a route from Hypothesis H to an extended TQFT? In Mathematical Foundations of Quantum Field and Perturbative String Theory, you have the sections:
I. Cobordism representations; II. Systems of algebras of observables; III. Quantization from classical field theories,
so I guess I'm wondering about a route from III to I. KK-reductions don't do this, I take it.
(edited Nov 2nd 2021)
Yes. I have indicated this before, let me say it again:
Hypothesis H says that quantum states/observables are the co/homology of (the loop space of) the (twisted+equivariant+differential-)Cohomotopy cocycle space of some background spacetime super-orbifold (thought of as an asymptotic boundary of a black brane spacetime).
In simple examples corresponding to asymptotic boundaries of codimension 3-branes (MK6s/M5s) intersecting MO9-planes and for trivial twisted+equivariant+differential structure, Segal's theorem identifies (around p. 15 of arXiv:1912.10425, recalled on p. 18 of our arXiv:2105.02871) this Cohomotopy cocycle space with the configuration space of ordered points in the 3-space transversal to these branes, and hence in this situation the states/observables are the (co)homology of configuration spaces of points in Euclidean space.
It is "well known" (to those who know it well), that the (co)homology of such configuration spaces of points is an alternative way of encoding quantum field theory (see at correlator as differential form on configuration space of points). Here the cohomology in degree one less than the dimension of the ambient space encodes the usual propagators, while the cohomology in lower degrees encodes higher order observables as expected in extended QFT.
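As a concrete illustration of the cohomology entering here, a short sketch (not from the discussion above) computes Betti numbers of the ordered configuration space $\mathrm{Conf}_k(\mathbb{R}^n)$ from the classical product formula for its Poincaré polynomial, $P(t) = \prod_{j=1}^{k-1}(1 + j\,t^{n-1})$; for $n = 3$, the transversal 3-space case, the cohomology sits in even degrees:

```python
# Betti numbers of the ordered configuration space Conf_k(R^n),
# via the classical product formula for its Poincare polynomial:
#   P(t) = prod_{j=1}^{k-1} (1 + j * t^(n-1))

def betti_conf(n, k):
    """Return the Betti numbers of Conf_k(R^n) as a list indexed by degree."""
    poly = [1]  # coefficient list: poly[d] = rank of H^d
    for j in range(1, k):
        new = poly + [0] * (n - 1)
        # multiply the running polynomial by (1 + j * t^(n-1))
        for d, c in enumerate(poly):
            new[d + n - 1] += j * c
        poly = new
    return poly

# Example: 3 ordered points in R^3; cohomology in even degrees only,
# with top class in degree (k-1)*(n-1) = 4:
print(betti_conf(3, 3))  # [1, 0, 3, 0, 2]
```

The degree-$(n-1)$ classes are the ones encoding the usual propagators in the correlator-as-differential-form picture mentioned above.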
In less simple examples this picture will pick up bells and whistles. For example, if the branes sit at an orbifold singularity, then the Rourke-Sanderson theorem – refining Segal's theorem to the equivariant case – says that we get the evident equivariant version of the configuration space of points, instead, and the quantum states/observables will be its equivariant cohomology. This is much richer in detail, but the general picture remains the same.
This is something I hope we might look at eventually: There will be a resulting equivariant version of the algebra of horizontal chord diagrams, etc.
Incidentally, just last night we received the message that our proposal for an NYUAD Research Institute Center "for Topological and Quantum Systems" has been approved, which, in one of its four sub-clusters, is to be concerned with this and related questions. There will be openings for 6-7 postdoc positions advertised soon, to start by Sept. 2022. I'll post links as soon as the new Center's webpage has gone live.
Congratulations on the funding, Urs! That's great news.
Thanks for the explanation, and well done! If you are ever looking for a philosopher to be briefly in residence,…
Author: Dmitri Pavlov
and then to show that this exhibits the "mass gap".
Assuming such a fully extended functorial field theory has been constructed, can we say what it means for such a field theory to have a mass gap, specifically in the language of functorial field theory?
That being so, classification results like a geometric cobordism hypothesis-theorem serve to at least break down the open problem of constructing examples into small sub-problems, which may be more tractable.
We are already working on computing the right side of the GCH in a rather general setting. Remark 3.0.8 gives a glimpse, but really we will be treating almost completely arbitrary targets C, not just B^d(A). This will recover nonabelian differential cohomology with coefficients in the tangent Lie-∞ algebroid of C.
Nov 3rd 2021
While there are bound to be subtle technical details, the broad idea is simple:
The "masses" exhibited by any $(d+1)$-dimensional FQFT on a spacetime manifold $\Sigma^d$ without boundary are the elements of the operator spectrum of its Hamiltonian over $\Sigma^d$, which is the derivative with respect to $t$ of the FQFT's value on the cobordisms of the form $\Sigma^d \xrightarrow{\; \Sigma^d \times [0,t] \;} \Sigma^d$.
Here the geometric structure must be such that one can make sense of $[0,\epsilon]$. The default would be (pseudo-)Riemannian structure, with $[0,\epsilon] \subset \mathbb{R}^1$ regarded as the evident (pseudo-)Riemannian manifold with metric $d s^2 = \pm d t \otimes d t$ and volume $\pm \epsilon$.
A "mass gap" means that this Hamiltonian operator does not have continuous spectrum around 0, but a "spectral gap" between 0 and the next smallest eigenvalue. (Googling for a reference, I see that the first lines of Wikipedia's Spectral gap (physics) has the right keyword combination. I can try to dig out more authoritative/original references, if desired.)
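A toy numerical sketch of this (my own illustration, with a made-up 3-state Hamiltonian) of how the FQFT's values $Z(t) = e^{-t H}$ on cylinder cobordisms determine the Hamiltonian and hence the spectral gap:

```python
import numpy as np

# Toy illustration: a FQFT assigns to the cylinder cobordism over [0, t]
# the operator Z(t) = exp(-t H).  The Hamiltonian H is recovered as the
# t-derivative at t = 0, and the "mass gap" is the distance from the
# ground-state energy 0 to the next point of spec(H).

# A hypothetical Hamiltonian with ground-state energy 0 and gap 1.2,
# written in a randomly rotated basis so nothing is diagonal by accident:
eigenvalues = np.array([0.0, 1.2, 2.5])
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
H = U @ np.diag(eigenvalues) @ U.T

def Z(t):
    """Value of the toy FQFT on the length-t cylinder cobordism."""
    return U @ np.diag(np.exp(-t * eigenvalues)) @ U.T

# Recover H as -dZ/dt at t = 0 by a finite difference:
eps = 1e-6
H_recovered = (np.eye(3) - Z(eps)) / eps
spec = np.sort(np.linalg.eigvalsh(H_recovered))

gap = spec[spec > 1e-3].min() - spec.min()
print(round(gap, 3))  # the spectral ("mass") gap, here 1.2
```

The mass gap problem would then amount to showing that, for the FQFT locally reducing to Yang-Mills, this gap is strictly positive.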
(edited Nov 3rd 2021)
Thanks, this clarifies it a lot!
What is the value of d for physically realistic cases? You say that Σ is a spacetime, which would seem to imply d=4, but then to evaluate on Σ⨯[0,t] we would need a 5d-FQFT, since dim(Σ⨯[0,t])=d+1=5.
Or perhaps Σ is just a space, and then d=3 and dim(Σ⨯[0,t])=4? The line segment [0,t], is it supposed to be interpreted like time?
Nov 4th 2021
Oh, I see, I misspoke, right: $\Sigma^d$ is meant to be space, and then $\Sigma^d \times [0,\epsilon]$ is a slab of spacetime, with $[0,\epsilon] \subset \mathbb{R}^1$ an interval inside "the time axis". So for the Clay Millennium problem, taken at face value, $d = 3$.
(edited Nov 4th 2021)
I see, thanks for clarifying this.
Do we know what the classical (or rather prequantum) Yang–Mills theory is supposed to assign to manifolds of positive codimension?
For the classical Chern–Simons theory this is clear: the action on manifolds of codimension 0 is given by the holonomy of the Chern–Simons 2-gerbe, so we now know how to construct a fully extended functorial field theory: the geometric cobordism hypothesis computes the space of FFTs with geometric structure B_∇(G) and target B^3(U(1)) as the derived hom of simplicial presheaves Hom(B_∇(G), B^3_∇(U(1))) (now with ∇ on the right side also), and from the work of Freed–Hopkins we can compute this as the space of invariant polynomials on the Lie algebra of G.
Is there any similar description for the classical Yang–Mills theory? Something in terms of differential cohomology, perhaps?
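To make the Freed-Hopkins side of the Chern-Simons example concrete, a small check (my own sketch, not from the paper) that the degree-2 pairing $\langle X, Y\rangle = \mathrm{tr}(X Y)$ on $\mathfrak{su}(2)$, the polynomial underlying the level-$k$ Chern-Simons Lagrangians, is indeed an invariant polynomial, i.e. ad-invariant:

```python
import numpy as np

# Check ad-invariance of the pairing <X, Y> = tr(XY) on su(2):
#   tr([Z,X] Y) + tr(X [Z,Y]) = 0  for all basis elements X, Y, Z.
# This is the degree-2 invariant polynomial whose integral multiples
# give the Chern-Simons levels in the Freed-Hopkins classification.

# Basis of su(2): (i/2) * Pauli matrices (anti-Hermitian, traceless)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [0.5j * s for s in (s1, s2, s3)]

def bracket(a, b):
    return a @ b - b @ a

def pairing(x, y):
    return np.trace(x @ y)

max_violation = max(
    abs(pairing(bracket(z, x), y) + pairing(x, bracket(z, y)))
    for x in basis for y in basis for z in basis
)
print(max_violation < 1e-12)  # True: tr(XY) is ad-invariant
```

Ad-invariance here is just cyclicity of the trace; the point of Freed-Hopkins is that such polynomials exhaust the derived hom computing the space of fully extended classical Chern-Simons theories.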
We have written about this in "The stack of Yang-Mills fields on Lorentzian manifolds" (arXiv:1704.01378).
If you are serious about attacking Yang-Mills via extended FQFT, it might make sense to try to get into contact with Alexander Schenkel, who is running a program on the analogous attack via homotopy AQFT. It should be fruitful to think about the two in parallel.
Re #42: Thanks for the reference!
Did I gather it correctly from your paper that B^4 R is (or would be) a natural target for the 4d prequantum Yang–Mills theory (as a functorial field theory), just like B^3 U(1) is a natural target for the 3d prequantum Chern–Simons theory?
(edited Nov 7th 2021)
I would say that the $\mathbf{B}^3 U(1)$ in 3d Chern-Simons theory is the coefficient which classifies the higher pre-quantum line bundle, the one whose transgression to a spatial slice gives the actual (traditional) pre-quantum line bundle on the space of on-shell fields over that spatial slice.
In this sense any $(d+1)$-dimensional field theory would replace this coefficient by $\mathbf{B}^{d+1} U(1)$. Taking it to be $\mathbf{B}^{d+1}\mathbb{R}$ is to pre-suppose that the pre-quantum line bundles will be topologically trivial. That is unlikely to be the case in full generality but may be reasonable to assume for starters and for non-topological field theories under assumptions that suppress topological effects in favor of the local geometric structure.
In the article "The stack of Yang-Mills fields" we discuss just what would be the base of the pre-quantum line bundle, namely the stack of fields and on-shell fields.
After that I had started a project with Igor Khavkine on constructing higher prequantum line bundles in generality from Lagrangian data in the guise of "Lepage gerbes" on jet bundles. It looks like we never put our stack of notes online, but there are talk notes (pdf) at Prequantum covariant field theory. This project was interrupted when the gods called me to leave Prague for Abu Dhabi. But I know that Dave Carchedi has been building on this, at least in parts, though I'd have to contact him to know if there is anything in writing.
Re #44:
I see, so would it be correct to say that although we may expect the classical Yang–Mills theory to involve gerbes and similar stuff, this has not yet been figured out in details?
Your project on Lepage gerbes looks incredibly interesting.
Here is my line of thought on constructing quantized FFTs:
Produce the prequantum data (some form of nonabelian differential cohomology) from the classical data.
Convert the prequantum data in (1) to a map like in the right side of the geometric cobordism hypothesis.
Use the geometric cobordism hypothesis to convert (2) to a fully extended functorial field theory.
Integrate the prequantum data in (1) using pushforwards in nonabelian differential cohomology, producing another map like on the right side of the GCH.
Use the GCH to convert (4) to a fully extended functorial field theory, which is the quantization of (3).
Status so far:
(3) and (5) are supplied by the geometric cobordism hypothesis.
(2) is current work in progress (the third paper in the series), and should be out soon.
(4) is planned (the fourth paper in the series), in principle we know what to do.
(1) was missing so far, but it seems like your work with Khavkine provides a complete solution.
Yes, I think the topic is very much open. Mostly because essentially nobody seems to be attacking along that path.
That's maybe not too surprising, since people tend to work on small steps for which there are existing hints of success, instead of embarking on one long journey through deserts and over mountains, from which one has not yet seen anyone come back.
And this is certainly somewhat puzzling about the head-on approaches to non-perturbative Yang-Mills via AQFT or FQFT: there are next to no hints for whether or how they will work. There is no partial result and few heuristic arguments for what it is that will eventually make the zoo of hadron masses come out by these approaches. If and when it eventually works, it is going to be a dramatic success of pure abstract thinking.
That's why I came to feel that the alternative approach via holographic QCD inside M-theory is more promising: even though it superficially sounds more crazy, there is a wealth of hints for how it makes things work (all those strings are, after all, the original and still the best idea for how confinement works, namely via tensionful color flux tubes) and, more importantly, hints that it works (from the close match of the zoo of predictions/measurements of hadron masses here).
By the way, it should be only the moduli stack of fields which is a stack of non-abelian differential cocycles. But the higher pre-quantum gerbe on that stack should be abelian, and the quantization should be by push-forward in some abelian cohomology twisted by that pre-quantum gerbe.
This is, in any case, what happens in familiar examples, notably this is how the quantization of 3d Chern-Simons theory works (geometric quantization by push-forward) where the push-forward is in differential K-theory but computes the quantum states for non-abelian CS theory.
It is this picture of quantization via push-forward in abelian ("linear") cohomology over non-abelian stacks which I was after in Quantization via Linear homotopy types.
We should add something about Daniel and Dimitri's paper to the section For cobordisms with geometric structure.
Out of interest, is there a form of generalized tangle hypothesis of which this is the stabilization?
I recall we spoke about such things back at the n-Café here, following a conversation on $n$-categories of tangles as kinds of fundamental $n$-category with duals of stratified spaces. But the question was how to deal with geometric structure not just on the normal bundle.
I admit not to have read the new article beyond the introduction, but without going into details, can one say in a few words what the key new insight is that makes the new proof happen?
Given that a fair bit of high-powered effort by several people had previously been invested into a proof of just the topological case while still leaving gaps, it's a striking claim that not only this but also a grand generalization now drops out on a few pages. What is the new insight which makes this work and that previous authors had missed?
The locality paper is inseparable from the geometric cobordism hypothesis paper. So it's not a "few pages", but 40+41=81 pages (using version 2 of the GCH paper that will be uploaded soon).
For comparison, the sketch of a proof in Lurie's paper is in Section 3.1 and 3.4 (and some fragments from 3.2, 3.3), which occupy 5+9=14 pages, plus a couple more for the relevant parts of 3.2, 3.3.
Some insights:
The locality property is invoked in the very first step of the proof to reduce to the geometric framed case. (In Lurie's paper, Remark 2.4.20 instead deduces the locality principle from the cobordism hypothesis. However, very roughly this step corresponds in purpose to Section 3.2 there, which reduces to the case of unoriented manifolds instead of framed, although the actual details are completely different.)
The geometric framed case produces a d-truncated bordism category (before adding thin homotopies) because there are no nontrivial structure-preserving diffeomorphisms of d-dimensional bordisms embedded (or immersed) into R^d. This is used in the proof many times to simplify the arguments. (As far as I can see, there are no analogues in Lurie's paper.)
The site FEmb_d (which encodes the geometric structures) plays a crucial role. In particular, encoding the homotopical action of O(d) using the site of d-manifolds and open embeddings is crucial for simplifying our proofs. (As far as I can see, there are no analogues in Lurie's paper.)
Invariance under thin homotopies is encoded using a further localization of simplicial presheaves on FEmb_d. This is new and important for our proofs. In the topological case, this recovers precisely the (∞,d)-category of bordisms of Hopkins–Lurie, as opposed to just the d-category of bordisms. (As far as I can see, there are no analogues in Lurie's paper.)
The machinery of the locality paper is used to establish the filtrations and pushout squares for the geometric framed case. (Very roughly, corresponds in purpose to Section 3.3 in Lurie's paper (only the small part that is actually used in 3.4) and Claims 3.4.12 and 3.4.17, which are stated without proof (and without a sketch of a proof). The actual details are completely different, though.)
As an additional remark, all of these insights also apply to the topological cobordism hypothesis, even if we are not interested in the geometric case.
And even with these insights, the proof takes more than 80 pages.
Thanks. So is it right that you are saying the geometric framed case is actually more amenable to direct proof than the topological case, but then implies it?
I have a vague memory that the remaining gap in the existing proof had to do with showing (or showing convincingly) that some space of Morse functions/handle decompositions is contractible or something like this. Is this an issue you solve or circumvent?
> So is it right that you are saying the geometric framed case is actually more amenable to direct proof than the topological case, but then implies it?

I would say that the geometric case inspired ways of thinking that we may not have encountered otherwise. For example, the necessity of incorporating the site Cart in the picture also led us to consider the site FEmb_d as a natural extension. If we had stayed in the purely topological case, then we could have tried to use spaces with an action of O(d) and might not have noticed the site of d-manifolds and open embeddings, which is more convenient to use in practice.

> I have a vague memory that the remaining gap in the existing proof had to do with showing (or showing convincingly) that some space of Morse functions/handle decompositions is contractible or something like this. Is this an issue you solve or circumvent?

This was resolved by Eliashberg and Mishachev in 2011, who proved that the space of framed generalized Morse functions is contractible: https://arxiv.org/abs/1108.1000. This replaces Section 3.5 in Lurie's paper.

We also develop a tool with similar functionality; in our locality paper this is Section 6.6 (the 1-truncatedness of our bordism categories corresponds to the contractibility of the space of framed generalized Morse functions used by Lurie to cut bordisms).

However, it is not quite accurate to say that this is "the remaining gap", since some of the more important parts of Lurie's argument in the other sections (3.1–3.4), such as Claim 3.4.12 and Claim 3.4.17, have their proofs omitted altogether; not even a sketch is present.
> the necessity of incorporating the site Cart in the picture also led us to consider the site FEmb_d as a natural extension. If we had stayed in the purely topological case, then we could have tried to use spaces with an action of O(d) and might not have noticed the site of d-manifolds and open embeddings, which is more convenient to use in practice.

Thanks, that's really interesting. I'll try to find time to look at your locality article in more detail (once I've got that darn proof typed up that's absorbing my time and energy… ;-).
re #34: Just to say that our new Center for Quantum and Topological Systems is now live: https://nyuad.nyu.edu/en/research/faculty-labs-and-projects/center-for-quantum-and-topological-systems.html
Part of granted activity is related to the intersection of
a) quantum programming languages
b) linear & modal homotopy type theory
c) twisted generalized cohomology theory
as indicated in the refined trilogy diagram that I am showing at https://ncatlab.org/nlab/show/computational+trilogy#ComputationalTrilogyTopologizedQuantized. [edit: now I see the typo, will fix…]
We'll be hiring a fair number of postdocs fairly soon. I'll post the job openings as they become public.
I guess the citation (I think?) to "Topological and Quantum Systems" is to a document about the Center?
Comment time: Dec 15th 2021
Just to say that the postdoc job advertisements for our new "Center for Topological and Quantum Systems" (as per comment #53) are now out, see:
apply.interfolio.com/100414
Comment time: Jul 1st 2022 (edited Jul 1st 2022)
Starting from Monday, July 4 at 9 am Central Daylight Time (UTC-5), Dan Grady and I will give a series of 4 lectures (90 minutes each) on the geometric cobordism hypothesis. BigBlueButton credentials can be obtained at https://www.carqueville.net/nils/GCH.html.
Comment time: Jul 3rd 2022
> Starting from Monday, July 4 at 9 am Central Daylight Time (UTC-5), Dan Grady and I will give a series of 4 lectures (90 minutes each) on the geometric cobordism hypothesis. BigBlueButton credentials can be obtained at https://www.carqueville.net/nils/GCH.html.

That's great.

(When is the time to switch from saying "XYZ Cobordism Hypothesis" to "XYZ Cobordism Theorem"? Maybe a point could be made here.)
Clarified that Lurie's paper gives a sketch and not a complete proof. Removed an unsubstantiated claim about a simplified proof to appear: the reference should be added once a proof is provided.
Surely it would be better to have on this page Grady and Pavlov's complete proof, rather than Lurie's sketched proof. | CommonCrawl |
\begin{document}
\title{Generating Posets with Interfaces}
\begin{abstract}
We generate and count isomorphism classes of gluing-parallel posets
with interfaces (iposets) on up to eight points, and on up to ten
points with interfaces removed. In order to do so, we introduce a
new class of iposets with full interfaces and show that considering
these is sufficient. We also describe the software (written in
Julia) that we have used for our exploration and define a new
incomplete isomorphism invariant which may be computed in polynomial
time yet identifies only very few pairs of non-isomorphic iposets. \end{abstract}
\section{Introduction}
In concurrency theory, partially ordered sets (posets) are used to model executions of programs which exhibit both sequentiality and concurrency of events \cite{DBLP:journals/ipl/Winkowski77,
Pratt86pomsets, DBLP:books/sp/Vogler92}. Series-parallel posets have been investigated due to their algebraic malleability---they are freely generated by serial and parallel composition \cite{DBLP:journals/fuin/Grabowski81,
DBLP:journals/tcs/Gischer88}---and form a model of concurrent Kleene algebra \cite{DBLP:journals/jlp/HoareMSW11}. Interval orders are another class of posets that arise naturally in the semantics of Petri nets \cite{DBLP:books/sp/Vogler92, DBLP:journals/tcs/JanickiK93,
DBLP:journals/iandc/JanickiY17}, higher-dimensional automata \cite{DBLP:journals/tcs/Glabbeek06, Hdalang}, and distributed systems \cite{DBLP:journals/cacm/Lamport78}. Series-parallel posets and interval orders are incomparable.
This paper continues work begun in \cite{DBLP:conf/RelMiCS/FahrenbergJST20} to consolidate series-parallel posets and interval orders. To this end, we have equipped posets with interfaces and extended the serial composition to an operation which glues posets along their interfaces. We have investigated the algebraic structure of the so-defined gluing-parallel posets, which encompass both series-parallel posets and interval orders, in \cite{DBLP:conf/RelMiCS/FahrenbergJST20, BeyondN2}. Here we concern ourselves with the combinatorial properties of this class.
An iposet is a poset with interfaces. We generate (and count) all isomorphism classes of iposets and of gluing-parallel iposets on up to 8 points, and of gluing-parallel posets (with interfaces removed) on up to 10 points. In order to do so, we introduce a new subclass of iposets with full interfaces and then generate all isomorphism classes of such ``\!\emph{Winkowski}'' iposets on up to 8 points and of gluing-parallel Winkowski iposets on up to 9 points.
We have found eleven forbidden substructures for gluing-parallel (i)posets: five on 6 points, one on 8 points, and five others on 10 points. We currently do not know whether there are any forbidden substructures on 11 points or more.
To conduct our exploration we have written software in Julia, using the LightGraphs package. We use a recursive algorithm to generate iposets and Julia's built-in threading support for parallelization. For isomorphism checking we use a new incomplete invariant which may be computed in linear time yet identifies only relatively few pairs of non-isomorphic (i)posets. We also detail the software and process used to find forbidden substructures. Our software and generated data are freely available; McKay's similarly freely available data has been of great help in our work.
After a preliminary Section \ref{se:posets} on posets we introduce interfaces in Section \ref{se:iposets}. Section \ref{se:software} then reports on our software and Section \ref{se:forbidden} on forbidden substructures. Before we then can examine Winkowski iposets in Section \ref{se:wink} we need to concern ourselves with discrete iposets in Section \ref{se:discrete}.
We expose the numbers of non-isomorphic (i)posets in various classes throughout the paper. Table \ref{ta:spio} shows the numbers of posets, series-parallel posets, interval orders, the union of the latter two classes, and series-parallel interval orders. Table \ref{ta:gpi} counts iposets and gluing-parallel (i)posets, Table \ref{ta:discrete} shows the numbers of some subclasses of discrete iposets, and Table \ref{ta:wink} exposes the numbers of Winkowski and gluing-parallel Winkowski iposets. The appendix contains the counts of iposets and gluing-parallel iposets, and of their Winkowski subclasses, split by the numbers of sources and targets.
The main contributions of this paper are, especially when compared to \cite{BeyondN2}, the exposition of the new subclass of Winkowski iposets and the showcase of Julia as a programming language for combinatorial exploration. Further, we believe that our new incomplete isomorphism invariant may be useful also in other contexts, but this remains to be explored.
\section{Posets} \label{se:posets}
A poset $(P,\mathord<)$ is a finite set $P$ equipped with an irreflexive transitive binary relation $<$ (asymmetry of $<$ follows). We use Hasse diagrams to visualize posets, but put greater elements to the right of smaller ones. Posets are equipped with a serial and a parallel composition. They are based on the disjoint union (coproduct) of sets, which we write $X\sqcup Y = \{(x,1)\mid x\in X\}\cup \{(y,2)\mid y \in Y\}$.
\begin{definition}
Let $(P_1, <_1)$ and $(P_2, <_2)$ be posets.
\begin{enumerate}
\item The \emph{parallel composition} $P_1\otimes P_2$ is the
coproduct with $P_1\sqcup P_2$ as carrier set and order defined as
\begin{equation*}
(p,i)<(q,j) \,\Leftrightarrow\, i=j \land p<_i q,\qquad
i,j\in\{1,2\}.
\end{equation*}
\item The \emph{serial composition} $P_1\glue P_2$ is the ordinal
sum, which again has the disjoint union as carrier set, but order
defined as
\begin{equation*}
(p,i)<(q,j) \Leftrightarrow (i=j \land p<_i q) \lor i<j, \qquad
i,j\in\{1,2\}.
\end{equation*}
\end{enumerate} \end{definition}
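For concreteness, both compositions act on the same carrier set (the disjoint union); only the order differs. The following Python fragment is purely illustrative (the software accompanying this paper is written in Julia; the representation of a poset as a pair of a point set and a set of strict-order pairs, and the names \texttt{parallel} and \texttt{serial}, are ours):

```python
def parallel(P1, lt1, P2, lt2):
    """Parallel composition: disjoint union of carriers, no cross relations."""
    P = {(p, 1) for p in P1} | {(q, 2) for q in P2}
    lt = {((p, i), (q, i)) for i, lti in ((1, lt1), (2, lt2)) for (p, q) in lti}
    return P, lt

def serial(P1, lt1, P2, lt2):
    """Serial composition (ordinal sum): additionally, every point of P1
    precedes every point of P2."""
    P, lt = parallel(P1, lt1, P2, lt2)
    lt = lt | {((p, 1), (q, 2)) for p in P1 for q in P2}
    return P, lt
```

For example, the serial composition of two singleton posets is the two-element chain, while their parallel composition is the two-element antichain.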
A poset is \emph{series-parallel} (an \emph{sp-poset}) if it is either empty or can be obtained from the singleton poset by finitely many serial and parallel compositions. It is well known \cite{DBLP:journals/siamcomp/ValdesTL82,
DBLP:journals/fuin/Grabowski81} that a poset is series-parallel if and only if it does not contain the induced subposet \begin{equation*}
\N=\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (1,0) {\intpt};
\node (3) at (1,-1) {\intpt};
\path (0) edge (2) (1) edge (2) (1) edge (3);
\end{tikzpicture}}}. \end{equation*} Further, generation of sp-posets is free: they form the free algebra in the variety of double monoids.
An \emph{interval order} \cite{journals/mpsy/Fishburn70} is a relational structure $(P,<)$ with $<$ irreflexive such that $w< y$ and $x< z$ imply $w< z$ or $x< y$, for all $w,x,y,z\in P$. Transitivity of $<$ follows, and interval orders are therefore posets. Interval orders are precisely those posets that do not contain the induced subposet \begin{equation*}
\twotwo=\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (1,0) {\intpt};
\node (3) at (1,-1) {\intpt};
\path (0) edge (2) (1) edge (3);
\end{tikzpicture}}}. \end{equation*}
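For small posets, both forbidden-substructure characterizations translate directly into brute-force tests. The following Python sketch is illustrative only (the function names are ours; a poset is given as a point set together with its full strict order relation):

```python
from itertools import permutations

def contains_n(P, lt):
    """True iff (P, lt) contains an induced N: among four distinct points
    w, x, y, z, exactly the relations w<y, x<y, x<z hold."""
    return any(
        (w, y) in lt and (x, y) in lt and (x, z) in lt
        and not any(e in lt for e in
                    [(w, x), (x, w), (w, z), (z, w), (y, z), (z, y)])
        for w, x, y, z in permutations(P, 4)
    )

def is_interval_order(P, lt):
    """True iff w<y and x<z imply w<z or x<y, i.e. no induced 2+2."""
    return not any(
        (w, y) in lt and (x, z) in lt
        and (w, z) not in lt and (x, y) not in lt
        for w, y, x, z in permutations(P, 4)
    )
```

Note that N itself is an interval order while 2+2 is series-parallel, witnessing the incomparability of the two classes mentioned above.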
Concurrency theory employs both sp-posets and interval orders (and their labeled variants), the former for their algebraic malleability and the latter because the precedence order of events in distributed systems typically is an interval order. We are interested in classes of posets which retain the pleasurable algebraic properties of sp-posets but include interval orders.
Posets $(P_1,\mathord<_1)$ and $(P_2,\mathord<_2)$ are isomorphic if there is a bijection $f:P_1\to P_2$ such that for all $x,y\in P_1$, $x<_1 y \liff f(x)<_2 f(y)$. It is well-known (and easy to see, given that posets are isomorphic if and only if their Hasse diagrams are) that the poset isomorphism problem is just as hard as graph isomorphism. State of the art is Brinkmann and McKay \cite{DBLP:journals/order/BrinkmannM02} which reports generating isomorphism classes of posets on up to sixteen points.
Also \emph{counting} posets up to isomorphism is difficult and has been achieved up to sixteen points. On the other hand, both sp-posets and interval orders admit generating functions, so counting these is trivial. Table \ref{ta:spio} shows the numbers of posets, sp-posets and interval orders on $n$ points up to isomorphism for $n\le 11$, as well as the numbers of posets which are sp-or-interval and those which are series-parallel compositions of interval orders.
\begin{table}[tb]
\centering
\caption{Different types of posets on $n$ points: all posets;
sp-posets; interval orders; sp or interval; sp-interval orders.}
\label{ta:spio}
\begin{tabular}{r|rrrrr}
$n$ & $\PP(n)$ & $\SP(n)$ & $\textsf{IO}(n)$ &
$\textsf{SP\texttt+IO}(n)$ & $\textsf{SPIO}(n)$ \\\hline
0 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
2 & 2 & 2 & 2 & 2 & 2 \\
3 & 5 & 5 & 5 & 5 & 5 \\
4 & 16 & 15 & 15 & 16 & 16 \\
5 & 63 & 48 & 53 & 59 & 59 \\
6 & 318 & 167 & 217 & 252 & 253 \\
7 & 2045 & 602 & 1014 & 1187 & 1203 \\
    8 & 16,999 & 2256 & 5335 & 6161 & 6327 \\
    9 & 183,231 & 8660 & 31,240 & 35,038 & 36,449 \\
    10 & 2,567,284 & 33,958 & 201,608 & 218,770 & 229,660 \\
    11 & 46,749,427 & 135,292 & 1,422,074 \\[1ex]
EIS & 112 & 3430 & 22493 & &
\end{tabular} \end{table}
\section{Posets with Interfaces} \label{se:iposets}
Let $[n]=\{1,\dotsc,n\}$ for $n\ge 1$ and $[0]=\emptyset$. We write $P^{\min}$ for the set of minimal and $P^{\max}$ for the set of maximal elements of poset $P$.
\begin{definition}
A \emph{poset with interfaces} (\emph{iposet}) is a poset $P$
together with two injective functions
\begin{equation*}
[n]\overset{s}{\longrightarrow} P\overset{t}{\longleftarrow}[m]
\end{equation*}
such that the images $s([n])\subseteq P^{\min}$ and
$t([m])\subseteq P^{\max}$. \end{definition}
An iposet as above is denoted $(s,P,t):n\to m$. We let $\iPos$ be the set of iposets and define the identity iposets $\id_n=(\id_{[n]}, [n], \id_{[n]}):n\to n$ for $n\ge 0$. For notational convenience we also define \emph{source} and \emph{target} functions $\src, \tgt: \iPos\to \Nat$ which map $P:n\to m$ to $\src(P)=n$ and $\tgt(P)=m$. Any \emph{poset} $P$ is an iposet with trivial interfaces, $\src(P)=\tgt(P)=0$.
Iposets $(s_1,P_1,t_1):n_1\to m_1$ and $(s_2,P_2,t_2):n_2\to m_2$ are \emph{isomorphic} if there is a poset isomorphism $f:P_1\to P_2$ such that $f\circ s_1= s_2$ and $f\circ t_1= t_2$; this implies $n_1=n_2$ and $m_1=m_2$. The mappings $\src$ and $\tgt$ are invariant under isomorphisms.
We extend the serial and parallel compositions to iposets. Below, $\phi_{ n, m}:[ n+ m]\to[ n]\otimes[ m]$ are the isomorphisms given by \begin{equation*}
\phi_{ n, m}( i)=
\begin{cases}
(i,1) &\text{if } i\le n, \\
(i-n,2) &\text{if } i> n,
\end{cases} \end{equation*} and $(P_1\sqcup P_2)_{/t_1\equiv s_2}$ denotes the quotient of the disjoint union obtained by identifying $(t_1(k),1)$ with $(s_2(k),2)$ for every $k\in[m]$.
\begin{definition}
Let $(s_1,P_1,t_1):n_1\to m_1$ and $(s_2,P_2 ,t_2):n_2\to m_2$ be
iposets.
\begin{enumerate}
\item Their \emph{parallel composition} is the iposet
$(s,P_1\otimes P_2,t):n_1+n_2\to m_1+m_2$ with
$s=(s_1\otimes s_2)\circ \phi_{ n_1, n_2}$ and
$t=(t_1\otimes t_2)\circ \phi_{ m_1, m_2}$.
\item For $m_1=n_2$, their \emph{gluing composition} is the iposet
$(s_1,P_1\glue P_2,t_2):n_1\to m_2$ with carrier set
$(P_1\sqcup P_2)_{/t_1\equiv s_2}$ and order defined as
\begin{equation*}
(p,i)<(q,j) \Leftrightarrow (i=j \land p<_i q) \lor (i<j \land
p\notin t_1[m_1] \land q\notin s_2[n_2]).
\end{equation*}
\end{enumerate} \end{definition}
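An illustrative Python sketch of the gluing composition (not the representation used in the accompanying Julia code; here an iposet is, by our own convention, a dict with a point set, the strict order as a set of pairs, and interface tuples \texttt{s} and \texttt{t}):

```python
def glue(X, Y):
    """Gluing composition of iposets X: n->m and Y: m->l.
    The k-th target point of X is identified with the k-th source point
    of Y; non-interface points of X precede non-interface points of Y."""
    assert len(X['t']) == len(Y['s']), "interface arities must match"
    ident = dict(zip(Y['s'], X['t']))  # s2(k) is identified with t1(k)
    def lx(p): return (p, 1)
    def ly(q): return (ident[q], 1) if q in ident else (q, 2)
    points = {lx(p) for p in X['points']} | {ly(q) for q in Y['points']}
    lt = ({(lx(p), lx(q)) for p, q in X['lt']}
          | {(ly(p), ly(q)) for p, q in Y['lt']}
          | {(lx(p), ly(q))
             for p in X['points'] if p not in X['t']
             for q in Y['points'] if q not in Y['s']})
    return {'points': points, 'lt': lt,
            's': tuple(lx(p) for p in X['s']),
            't': tuple(ly(q) for q in Y['t'])}
```

In particular, gluing two singletons along a shared interface point yields a single point, whereas gluing along empty interfaces reduces to the serial composition of posets.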
Thus $P_1\glue P_2$ is defined precisely if $\tgt(P_1)=\src(P_2)$, and in that case, $\src(P_1\glue P_2)=\src(P_1)$ and $\tgt(P_1\glue P_2)=\tgt(P_2)$. Isomorphism classes of iposets form the morphisms in a category with objects the natural numbers and gluing as composition, or equivalently, a \emph{local partial
$\ell r$-semigroup} \cite{DBLP:conf/RelMiCS/CalkFJSZ21, Chantale}. For the parallel composition, $\src(P_1\otimes P_2)= \src(P_1)\otimes \src(P_2)$ and $\tgt(P_1\otimes P_2)=\tgt(P_1)\otimes \tgt(P_2)$, extending $\iPos$ to a \emph{partial interchange monoid} \cite{CranchDS20}.
\begin{remark}[Interchange]
\label{re:interchange}
The equation
$(P_1\otimes P_2)\glue(Q_1\otimes Q_2)=(P_1\glue
Q_1)\otimes(P_2\glue Q_2)$ does \emph{not} hold in general, not even
up to isomorphism \cite{BeyondN2}; but see Lemma
\ref{le:interchange} below for a special case. \end{remark}
A composition $P=P_1\glue P_2$ or $P=P_1\otimes P_2$ is \emph{trivial} if $P=P_1$ or $P=P_2$ as posets; see also Lemma \ref{le:trivialglue} below. As before, we are interested in iposets and posets which can be obtained from elementary iposets by finitely many (nontrivial) gluing and parallel compositions. Let \begin{equation*}
\single4 = \big\{ [0]\to [1]\from [0], \quad [0]\to [1]\from [1],
\quad [1]\to [1]\from [0], \quad [1]\to [1]\from [1] \big\} \end{equation*} be the set of iposets on the singleton poset (the source and target maps are uniquely determined by their type here).
\begin{definition}
An iposet is \emph{gluing-parallel} (a \emph{gp-iposet}) if it is
empty or can be obtained from elements of $\single4$ by finitely
many applications of $\glue$ and $\otimes$. \end{definition}
We are interested in generating and counting iposets and gp-iposets up to isomorphism, but also in doing so for gluing-parallel \emph{posets}: those posets which are gluing-parallel when regarded as iposets with empty interfaces. The following property defines a class in-between iposets and gp-iposets.
\begin{definition}
Iposet $(s,P,t):n\to m$ is \emph{interface consistent} if
$s^{-1}(x)<s^{-1}(y) \liff t^{-1}(x)<t^{-1}(y)$ for all
$x,y\in s([n])\cap t([m])$. \end{definition}
Here $<$ is the (implicit) natural ordering on $[n]$ and $[m]$. It is clear that gluing and parallel compositions of interface consistent iposets are again interface consistent, hence any gp-iposet is interface consistent. Table \ref{ta:gpi} shows the numbers of posets, sp-posets and gp-posets up to isomorphism, as well as of iposets, interface consistent iposets, and gp-iposets. In the appendix, Tables \ref{ta:ip123split} to \ref{ta:ip8split} show the numbers of the three classes of iposets split by the numbers of their sources and targets.
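Interface consistency is straightforward to check. In the illustrative Python encoding used above (interface maps given, by our own convention, as tuples \texttt{s} and \texttt{t} of points, so that positions encode $s^{-1}$ and $t^{-1}$), the condition reads:

```python
def interface_consistent(ip):
    """Check interface consistency: for points lying in both interfaces,
    the orderings of their source and target positions must agree."""
    spos = {p: i for i, p in enumerate(ip['s'])}  # p -> s^{-1}(p)
    tpos = {p: i for i, p in enumerate(ip['t'])}  # p -> t^{-1}(p)
    common = set(spos) & set(tpos)
    return all((spos[x] < spos[y]) == (tpos[x] < tpos[y])
               for x in common for y in common)
```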
\begin{table}[tb]
\centering
\caption{Different types of posets and iposets on $n$ points: all
posets; sp-posets; gp-posets; iposets; interface consistent
iposets; gp-iposets}
\label{ta:gpi}
\begin{tabular}{r|rrrrrr}
$n$ & $\PP(n)$ & $\SP(n)$ & $\GP(n)$ & $\IP(n)$ & $\ICI(n)$ &
$\GPI(n)$ \\\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 4 & 4 & 4 \\
2 & 2 & 2 & 2 & 17 & 16 & 16 \\
3 & 5 & 5 & 5 & 86 & 74 & 74 \\
4 & 16 & 15 & 16 & 532 & 420 & 419 \\
5 & 63 & 48 & 63 & 4068 & 3030 & 2980 \\
    6 & 318 & 167 & 313 & 38,933 & 28,495 & 26,566 \\
    7 & 2045 & 602 & 1903 & 474,822 & 355,263 & 289,279 \\
    8 & 16,999 & 2256 & 13,943 & 7,558,620 & 5,937,237 & 3,726,311 \\
    9 & 183,231 & 8660 & 120,442 & \\
    10 & 2,567,284 & 33,958 & 1,206,459 & \\
    11 & 46,749,427 & 135,292 & & \\[1ex]
EIS & 112 & 3430 & 345673 & 331158 & & 331159
\end{tabular} \end{table}
\section{Software} \label{se:software}
Before we continue with our exploration, we describe the software we have used to generate most numbers in the tables contained in this paper and to explore the structural properties of (i)posets.\footnote{
Our software is available at
\url{https://github.com/ulifahrenberg/pomsetproject/tree/main/code/20220303/},
and the data at
\url{https://github.com/ulifahrenberg/pomsetproject/tree/main/data/}.} This started as a piece of Python code written to confirm or disprove some conjectures, but did not allow us to compute the $\GP(n)$ and $\GPI(n)$ sequences beyond $n=6$. During the BSc internship of the first author of this paper, it was converted to Julia, using the LightGraphs package \cite{Graphs21}. This conversion, and major improvements in candidate generation and isomorphism checking (see below), allowed us to generate all gp-posets on $n=9$ points and all gp-iposets on $n=7$ points. Afterwards we managed to compute $\GP(10)$ and $\GPI(8)$ using massively parallelized computations.\footnote{
\label{fn:norway}
We have used several ARM based servers running Linux with processor
from High Silicon, model Hi1616, 64 cores, 256 GiB memory, 36 TiB of
local disk, provided by the University of Oslo HPC services
\url{https://www.uio.no/english/services/it/research/hpc/}.}
Let $G_n$ denote the set of isomorphism classes of gp-iposets on $n$ points for $n\ge 0$ and \begin{equation*}
G_n(k, \ell) = \big\{ P\in G_n \bigmid \src(P)=k, \tgt(P)=\ell
\big\} \end{equation*} for $k,\ell\in\{0,\dotsc,n\}$. That is, $G_n(k,\ell)$ is the set of iposets on $n$ points with $k$ points in the starting interface and $\ell$ in the terminating interface. Then
$G_n=\bigcup_{k, \ell} G_n(k, \ell)$, $\GPI(n)=|G_n|$, and
$\GP(n)=|G_n(0,0)|$. Our algorithm for generating $G_n$ is recursive and based on the following property, where we have extended $\glue$ and $\otimes$ to sets of iposets the usual way.
\begin{lemma}
\label{le:gp-rec}
For all $n>1$ and $0\le k,\ell\le n$,
\begin{equation*}
G_n(k, \ell) =
\bigcup_{\substack{1\le p, q<n \\ m=p+q-n \\ 0\le m<p \\ 0\le m<q}}
G_p(k, m)\glue G_q(m, \ell) \cup
\bigcup_{\substack{p+q=n \\ p, q\ge 1 \\ k_1+k_2=k \\ \ell_1+\ell_2=\ell}}
G_p(k_1, \ell_1)\otimes G_q(k_2, \ell_2).
\end{equation*} \end{lemma}
\begin{proof}
By definition, $R\in G_n(k, \ell)$ if and only if $R$ is a gluing or
parallel composition of smaller gp-iposets. If $R=P\glue Q$, then
$P\in G_p(k, m)$ and $Q\in G_q(m, \ell)$ for some $p, q<n$, and we
can assume $p, q\ge 1$ because otherwise the composition would be
trivial. The number of points of $P\glue Q$ is $p+q-m$, hence
$m=p+q-n$, and $m<p, q$ because we can assume that both $P$ and $Q$
have at least one non-interface point; otherwise composition would
again be trivial.
If $R=P\otimes Q$, then $P\in G_p(k_1, \ell_1)$ and
$Q\in G_q(k_2, \ell_2)$ for some $p+q=n$, and we can again assume
$p, q\ge 1$, and $k_1+k_2=k$ and $\ell_1+\ell_2=\ell$ by definition
of $\otimes$. We have shown that $G_n(k, \ell)$ is included in the
expression on the right-hand side; the reverse inclusion is
trivial. \end{proof}
\begin{lstfloat}[tb]
\caption{Julia function (parts) to compute $G_n(k, \ell)$.}
\label{ls:gpclosure}
\vspace*{-3ex}
\begin{jllisting}[numbers=left] function gpclosure(n, k, l, iposets, filled, locks)
#Return memoized if exists
if filled[k, l, n]
return (x[1] for x in vcat(iposets[:, k, l, n]...))
end
#If opposites exist, return opposites
if filled[l, k, n]
return (reverse(x[1]) for x in vcat(iposets[:, l, k, n]...))
end
#Otherwise, generate recursively
lock(locks[end, k, l, n])
#First, the gluings
Threads.@threads for p in 1:(n-1)
Threads.@threads for q in 1:(n-1)
m = p + q - n
if m < 0 || m ≥ p || m ≥ q || k > p || l > q
continue
end
for P in gpclosure(p, k, m, iposets, filled, locks)
for Q in gpclosure(q, m, l, iposets, filled, locks)
R = glue(P, Q)
neR = ne(R) #number of edges
pushuptoiso!(iposets[neR, k, l, n], R, locks[neR, k, l, n])
end
end
end
end
#Now, the parallel compositions
...
filled[k, l, n] = true
unlock(locks[end, k, l, n])
return (x[1] for x in vcat(iposets[:, k, l, n]...)) end
\end{jllisting}
\vspace*{-3ex} \end{lstfloat}
Listing \ref{ls:gpclosure} shows part of the recursive Julia function which implements the above algorithm. The multi-dimensional array \jlinl{iposets} is used for memoization and initialized with the four singletons in $\single4$, \jlinl{filled} is used to denote which parts of \jlinl{iposets} have been computed, and \jlinl{locks} is used for locking. The heart of the procedure is the call to \jlinl{pushuptoiso!} in line 23, which checks whether \jlinl{iposets} already contains an element isomorphic to the freshly glued iposet and, if not, pushes it into the array.
Due to the multi-threaded implementation and tight locking, we were able to generate $G_7$ in about 4 minutes on a standard laptop. Generating $G_8$ took altogether 300 hours, in a distributed computation which generated each $G_8(k, \ell)$ separately on four different computers: two standard laptops and two Norwegian supercomputers. For generating iposets and analyzing forbidden substructures we have benefited greatly from Brendan McKay's poset collections.\footnote{See
\url{http://users.cecs.anu.edu.au/~bdm/data/digraphs.html}.}
Deciding whether iposets are isomorphic is just as difficult as for posets. Brinkmann and McKay \cite{DBLP:journals/order/BrinkmannM02} develop an algorithm to compute canonical representations: mappings $f$ from posets to labeled posets so that $f(P)=f(P')$ precisely if $P$ and $P'$ are isomorphic. It is clear that if any such algorithm were to run in polynomial time, then poset isomorphism, and thus also graph isomorphism, would be in \textsf{P}.
Canonical representations are complete isomorphism invariants. In our software we are instead using an \emph{incomplete} isomorphism invariant inspired by bisimulation \cite{book/Milner89} which can be computed in polynomial time. For digraph $G$ and a point $x$ of $G$, denote by $\indeg(x)$ and $\outdeg(x)$ the in- and out-degrees of $x$ in $G$.
\begin{definition}
An \emph{in-out bisimulation} between digraphs $G_1=(V_1, E_1)$ and
$G_2=(V_2, E_2)$ is a relation $R\subseteq V_1\times V_2$ such that
\begin{itemize}
\item for all $(x_1, x_2)\in R$, $\indeg(x_1)=\indeg(x_2)$ and
$\outdeg(x_1)=\outdeg(x_2)$;
\item for all $(x_1, x_2)\in R$ and $(x_1, y_1)\in E_1$, there
exists $(x_2, y_2)\in E_2$ with $(y_1, y_2)\in R$;
\item for all $(x_1, x_2)\in R$ and $(x_2, y_2)\in E_2$, there
exists $(x_1, y_1)\in E_1$ with $(y_1, y_2)\in R$.
\end{itemize} \end{definition}
Note that this is the same as a standard bisimulation \cite{book/Milner89} between the transition systems (without initial state) given by enriching digraphs with propositions stating each vertex's in- and out-degree. Digraphs are said to be in-out-bisimilar if there exists an in-out-bisimulation joining them.
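As for standard bisimulations, the largest in-out bisimulation between two digraphs can be computed by greatest-fixed-point refinement. The following Python sketch (Python rather than the paper's Julia; all names are ours) starts from all degree-matching pairs and removes pairs violating the forth or back condition until stable; we read ``joining them'' as the bisimulation relating every vertex of both graphs, which is an assumption on our part.

```python
def degrees(g):
    """In- and out-degrees of a digraph given as vertex -> successor set."""
    indeg = {v: 0 for v in g}
    for v in g:
        for w in g[v]:
            indeg[w] += 1
    return indeg, {v: len(g[v]) for v in g}

def largest_inout_bisim(g1, g2):
    """Greatest fixed point: refine the set of degree-matching pairs
    until the forth and back conditions of the definition hold."""
    in1, out1 = degrees(g1)
    in2, out2 = degrees(g2)
    R = {(x1, x2) for x1 in g1 for x2 in g2
         if in1[x1] == in2[x2] and out1[x1] == out2[x2]}
    changed = True
    while changed:
        changed = False
        for (x1, x2) in list(R):
            forth = all(any((y1, y2) in R for y2 in g2[x2]) for y1 in g1[x1])
            back = all(any((y1, y2) in R for y1 in g1[x1]) for y2 in g2[x2])
            if not (forth and back):
                R.discard((x1, x2))
                changed = True
    return R

def inout_bisimilar(g1, g2):
    """Read as: the largest in-out bisimulation relates every vertex."""
    R = largest_inout_bisim(g1, g2)
    return {x for x, _ in R} == set(g1) and {y for _, y in R} == set(g2)
```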
\begin{definition}
Let $P$ be a poset. The functions $\inhash, \outhash: P\to \Nat$
are the least fixed points to the equations
\begin{equation*}
\inhash(x) = \indeg(x) + |P| \sum_{y<x} \inhash(y), \qquad
\outhash(x) = \outdeg(x) + |P| \sum_{x<y} \outhash(y).
\end{equation*} \end{definition}
By acyclicity these hashes are well defined, and they may be computed in linear time. A \emph{hash isomorphism} between posets $P$ and $Q$ is a bijection $f:P\to Q$ such that \begin{equation*}
(\inhash(f(x)), \outhash(f(x)))=(\inhash(x), \outhash(x)) \end{equation*} for all $x\in P$.
\begin{samepage} \begin{lemma}
Let $P$ and $Q$ be posets.
\begin{enumerate}
\item If $P$ and $Q$ are isomorphic, then they are hash isomorphic.
\item If $P$ and $Q$ are hash isomorphic, then they are
in-out-bisimilar.
\end{enumerate} \end{lemma} \end{samepage}
\begin{proof}
If $f:P\to Q$ is an isomorphism, then it is also a hash isomorphism.
If $f$ is a hash isomorphism, then the relation defined by $f$ is an
in-out-bisimulation. \end{proof}
Checking for existence of a hash isomorphism can be done in polynomial time, for example by sorting the hashes.
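For illustration, here is a Python sketch (not the paper's Julia) of the hash computation and the sorted-hash check. We represent a poset by its full strict order relation and take $\indeg(x)=|\{y : y<x\}|$, an assumption on our part; the fixed points are computed recursively, which terminates by acyclicity.

```python
def hashes(lt):
    """inhash/outhash pairs for a poset given as x -> {y : y < x}
    (the full strict order relation, so indeg(x) = |{y : y < x}|)."""
    n = len(lt)
    gt = {x: {y for y in lt if x in lt[y]} for x in lt}
    inh, outh = {}, {}
    def ih(x):  # inhash(x) = indeg(x) + |P| * sum_{y<x} inhash(y)
        if x not in inh:
            inh[x] = len(lt[x]) + n * sum(ih(y) for y in lt[x])
        return inh[x]
    def oh(x):  # outhash(x) = outdeg(x) + |P| * sum_{x<y} outhash(y)
        if x not in outh:
            outh[x] = len(gt[x]) + n * sum(oh(y) for y in gt[x])
        return outh[x]
    return {x: (ih(x), oh(x)) for x in lt}

def hash_isomorphic(ltP, ltQ):
    """A hash isomorphism exists iff the multisets of hash pairs agree."""
    return sorted(hashes(ltP).values()) == sorted(hashes(ltQ).values())
```

A two-point chain and a two-point antichain are distinguished, while two differently labeled copies of the chain are identified.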
\begin{figure}
\caption{The two pairs of non-isomorphic posets on 6 points which
are hash isomorphic.}
\label{fi:hashnoiso6}
\end{figure}
\begin{table}[bp]
\centering
\caption{Numbers of pairs of non-isomorphic but hash isomorphic
posets on $n$ points; their proportion as part of all pairs of
non-isomorphic posets; average numbers of bijections checked for
isomorphism.}
\label{ta:hashnoiso7+}
\begin{tabular}{r|rrr}
$n$ & $\textsf{NIHI}(n)$ & $\textsf{NIHI}(n) / \PP(n)^2$ &
$\textsf{nperm}(n)$ \\\hline
5 & 0 & $0\phantom{{}\cdot 10^{-6}}$ & $0\phantom{{}\cdot
10^{-6}}$ \\
6 & 2 & $2\cdot 10^{-6}$ & $8\cdot 10^{-6}$ \\
7 & 45 & $1\cdot 10^{-6}$ & $6\cdot 10^{-6}$ \\
8 & 928 & $3\cdot 10^{-7}$ & $2\cdot 10^{-6}$ \\
9 & 20443 & $6\cdot 10^{-8}$ & $4\cdot 10^{-7}$
\end{tabular} \end{table}
\begin{example}
Hash isomorphisms are complete for posets on up to 5 points: if
$|P|, |Q|\le 5$, then $P$ and $Q$ are isomorphic if and only if they
are hash isomorphic. On 6 points, there are two pairs of
non-isomorphic posets which are hash isomorphic, depicted in Figure
\ref{fi:hashnoiso6}. Proportionally to the number of \emph{all}
pairs of non-isomorphic posets, the number of ``false positives''
grows rather slowly, see Table \ref{ta:hashnoiso7+}. Hence it
appears that hash isomorphism is a rather tight invariant which
allows one to avoid most of the costly isomorphism checks. \end{example}
\begin{lstfloat}[tb]
\caption{Julia function to check poset isomorphism.}
\label{ls:posetiso}
\vspace*{-3ex}
\begin{jllisting}[numbers=left] function isomorphic(P::Poset, pv::Vprof, Q::Poset, qv::Vprof)
#Start with the easy stuff
P == Q && return true
n = nv(P) #number of points
(n != nv(Q) || ne(P) != ne(Q)) && return false
#Check for hash isomorphism
used = zeros(Bool, n)
@inbounds for i in 1:n
found = false
for j in 1:n
if pv[i] == qv[j] && !used[j]
used[j] = true; found = true
break
end
end
!found && return false
end
#Collect all hash isomorphisms, mapping points to their possible images
targets = Array{Array{Int}}(undef, n)
targets .= [[]]
@inbounds for v in 1:n
for u in 1:n
pv[v] == qv[u] && push!(targets[v], u)
end
end
#Check all target permutations if they are isos
for pos_isom in Iterators.product(targets...)
if bijective(pos_isom) && isomorphic(P, Q, pos_isom)
return true
end
end
return false end
\end{jllisting}
\vspace*{-3ex} \end{lstfloat}
Listing \ref{ls:posetiso} shows our Julia code for checking whether two posets are isomorphic. The hashes are precomputed, so the function \jlinl{isomorphic} takes two posets \jlinl{P} and \jlinl{Q} as arguments as well as their hashes, \jlinl{pv} and \jlinl{qv} of type \jlinl{Vprof} for ``vertex profile''. After checking whether there is a hash isomorphism, the hashes are used to constrain possible isomorphisms: only bijections \jlinl{pos_isom} which are hash isomorphisms are given to the isomorphism checker \jlinl{isomorphic(P,
Q, pos_isom)}. Table \ref{ta:hashnoiso7+} also shows the average number of bijections that are checked for being isomorphisms.
\section{Forbidden Substructures} \label{se:forbidden}
Recall that an induced subposet $(Q, \mathord<_Q)$ of a poset $(P, \mathord<_P)$ is a subset $Q\subseteq P$ with the order $<_Q$ the restriction of $<_P$ to $Q$. We have shown in \cite{BeyondN2} that gluing-parallel posets are closed under induced subposets: if $Q$ is an induced subposet of a gp-poset $P$, then also $Q$ is gluing-parallel. This raises the question of whether gp-posets admit a finite set of forbidden substructures: a set $\mcal F$ of posets which are incomparable under the induced-subposet relation and such that any poset is gluing-parallel if and only if it contains none of the structures in $\mcal F$ as induced subposets.
\begin{proposition}[\cite{DBLP:conf/RelMiCS/FahrenbergJST20}]
\label{pr:forbidden1}
The following posets are contained in~$\mcal F$:
\begin{gather*}
\NN =\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (0,-2) {\intpt};
\node (3) at (1,0) {\intpt};
\node (4) at (1,-1) {\intpt};
\node (5) at (1,-2) {\intpt};
\path (0) edge (3) (1) edge (3) (1) edge (4) (2) edge (4)
(2) edge (5);
\end{tikzpicture}}}
\qquad
\NPLUS =\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (1) at (0,-1) {\intpt};
\node (2) at (0,-2) {\intpt};
\node (3) at (1,0) {\intpt};
\node (4) at (1,-1) {\intpt};
\node (5) at (1,-2) {\intpt};
\node (6) at (2,0) {\intpt};
\path (3) edge (6) (1) edge (3) (1) edge (4) (2) edge (4)
(2) edge (5);
\end{tikzpicture}}}
\qquad
\NMINUS =\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (0,-2) {\intpt};
\node (3) at (1,0) {\intpt};
\node (4) at (1,-1) {\intpt};
\node (-1) at (-1,-2) {\intpt};
\path (0) edge (3) (1) edge (3) (1) edge (4) (2) edge (4)
(-1) edge (2);
\end{tikzpicture}}}
\\
\TC =\! \vcenter{\hbox{
\begin{tikzpicture}[y=.5cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (0,-2) {\intpt};
\node (3) at (1,0) {\intpt};
\node (4) at (1,-1) {\intpt};
\node (5) at (1,-2) {\intpt};
\path (0) edge (3) (1) edge (3) (1) edge (5) (2) edge (4)
(2) edge (5) (0) edge (4);
\end{tikzpicture}}}
\qquad
\LN =\! \vcenter{\hbox{
\begin{tikzpicture}[y=.7cm]
\node (0) at (0,0) {\intpt};
\node (1) at (0,-1) {\intpt};
\node (2) at (1,0) {\intpt};
\node (3) at (1,-1) {\intpt};
\node (4) at (2,0) {\intpt};
\node (5) at (2,-1) {\intpt};
\path (0) edge (2) (2) edge (4) (1) edge (4) (1) edge (3)
(3) edge (5);
\end{tikzpicture}}}
\end{gather*} \end{proposition}
\begin{lstfloat}[tb]
\caption{Julia code to find forbidden substructures.}
\label{ls:forbiddensubs}
\vspace*{-3ex}
\begin{jllisting}[numbers=left] function findforbiddensubs()
n = 5
fsubs = Array{Poset}(undef, 0)
while true
n += 1
pngs = posetsnotgp(n)
newfsubs = nosubs(pngs, fsubs)
if !isempty(newfsubs)
println("Found new forbidden substructure(s) on $n points:")
for s in newfsubs
println(string(s))
end
append!(fsubs, newfsubs)
end
end
return fsubs end function posetsnotgp(n)
ps = posets(n)
gps = [ip.poset for ip in gpiposets(n, 0, 0)]
return diffuptoiso(ps, gps) end function nosubs(posets, subs)
res = Array{Poset}(undef, 0)
for p in posets
hasnosub = true
for s in subs
sg, _ = subgraph(p, s)
if sg
hasnosub = false
break
end
end
hasnosub && push!(res, p)
end
return res end
\end{jllisting}
\vspace*{-3ex} \end{lstfloat}
Listing \ref{ls:forbiddensubs} shows our implementation of the semi-algorithm to find forbidden substructures. These are collected in the array \jlinl{fsubs} and printed out as they are found. The function \jlinl{posetsnotgp} returns the posets on $n$ points which are not gluing-parallel, using the function \jlinl{diffuptoiso} (not shown) which computes the difference between two (i)poset arrays up to isomorphism. The function \jlinl{nosubs} returns all elements of \jlinl{posets} which have no induced subposet isomorphic to any element of \jlinl{subs}; this latter check is carried out in \jlinl{subgraph} which we also do not show here. Using McKay's files of posets and our own precomputed files of iposets, \jlinl{findforbiddensubs} finds the forbidden substructures of Proposition \ref{pr:forbidden1} almost immediately. After a few seconds it finds another forbidden substructure on 8 points (see below), and after an hour it verifies that there are no new forbidden substructures on 9 points.
\begin{figure}
\caption{Additional forbidden substructures for gp-posets.}
\label{fi:forbidden2}
\end{figure}
\begin{proposition}[\cite{BeyondN2}]
\label{pr:forbidden2}
When restricting to posets on at most 10 points, $\mcal F$ contains
precisely the five posets of Proposition \ref{pr:forbidden1} and the
six posets in Figure \ref{fi:forbidden2}. \end{proposition}
In order to find the forbidden substructures on 10 points in Figure \ref{fi:forbidden2}, we used another, distributed algorithm which took about two weeks to run. We generated 45 separate files containing the gp-iposets on 10 points obtained from gluing elements of $G_n(0, k)$ and $G_m(k,0)$ for $n\in\{1, \dotsc, 9\}$ and $m\in\{10-n, \dotsc, 9\}$ (thus $k=10-n-m$), each reduced up to isomorphism, and one file containing all gp-iposets on 10 points obtained as parallel compositions of smaller gp-iposets. Then we took \jlinl{posets10.txt}, removed posets containing one of our forbidden substructures on 6 or 8 points, and then successively filtered it through these 46 files, using \jlinl{diffuptoiso}.
Whether there are further forbidden substructures (on 11 points or more), and whether $\mcal F$ is a finite set, remains open.
\section{Discrete Iposets} \label{se:discrete}
\begin{table}[tbp]
\centering
\caption{Numbers of discrete iposets, gp-discrete iposets, starters,
and gp-starters on $n$ points. (Numbers of terminators and
gp-terminators are the same as in the two last columns.)}
\label{ta:discrete}
\begin{tabular}{r|rrrr}
$n$ & $\textsf{D}(n)$ & $\textsf{GPD}(n)$ & $\textsf{S}(n)$ &
$\textsf{GPS}(n)$ \\\hline
0 & 1 & 1 & 1 & 1 \\
1 & 4 & 4 & 2 & 2 \\
2 & 13 & 12 & 5 & 4\\
3 & 45 & 33 & 16 & 8 \\
4 & 184 & 88 & 65 & 16 \\
5 & 913 & 232 & 326 & 32 \\
6 & 5428 & 609 & 1957 & 64 \\
7 & 37.764 & 1596 & 13.700 & 128 \\
8 & 300.969 & 4180 & 109.601 & 256 \\
9 & 2.702.152 & 10.945 & 986.410 & 512 \\
10 & 26.977.189 & 28.656 & 9.864.101 & 1024 \\[1ex]
EIS & & 27941 & 522 & 79
\end{tabular} \end{table}
This section explores the ``fine structure'' of iposets. An iposet $(s, P, t):n\to m$ is \emph{discrete} if $P$ is; it is a \emph{starter} if, additionally, $t:[m]\to P$ is bijective, and a \emph{terminator} if $s:[n]\to P$ is bijective. Any discrete iposet is the gluing of a starter with a terminator and is gluing-parallel if and only if it is interface consistent. The following is clear.
\begin{lemma}
\label{le:trivialglue}
A gluing $P=P_1\glue P_2$ is trivial if and only if $P_1$ is a
starter or $P_2$ is a terminator. \qed \end{lemma}
The next proposition shows numbers of some classes of discrete iposets, see also Table~\ref{ta:discrete}.
\begin{proposition}
Let $n\ge 0$. Up to isomorphism,
\begin{enumerate}
\item there are $2^n$ gp-starters and $2^n$ gp-terminators on $n$
points;
\item there are $\sum_{k=0}^n \frac{n!}{k!}$ starters and
$\sum_{k=0}^n \frac{n!}{k!}$ terminators on $n$ points;
\item there are $\sum_{s,t=0}^n \sum_{u=\max(0,s+t-n)}^{\min(s,t)}
\binom{s}{u} \binom{t}{u}$ gp-discrete iposets on $n$ points;
\item there are
$\sum_{s,t=0}^n \sum_{u=\max(0,s+t-n)}^{\min(s,t)} \binom{s}{u}
\binom{t}{u} u!$ discrete iposets on $n$ points.
\end{enumerate} \end{proposition}
\begin{samepage} The third term above can be simplified by \begin{equation*}
\sum_{s,t=0}^n\, \sum_{u=\max(0,s+t-n)}^{\min(s,t)} \binom{s}{u}
\binom{t}{u} = \sum_{i=0}^n \binom{n+2+i}{n-i}, \end{equation*} using a version of Vandermonde's identity; we are not aware of any such simplification for the last term. \end{samepage}
\begin{proof}
\mbox{}
\begin{enumerate}
\item For any $k\in\{0,\dotsc,n\}$, there are $\binom{n}{k}$
non-isomorphic interface consistent starters on $n$ points with
$k$ of them in the starting interface.
\item Similarly, there are $\binom{n}{k} k!$ non-isomorphic starters
on $n$ points with $k$ points in the starting interface when not
requiring interface consistency.
\item Let $s,t,u\in\{0,\dotsc,n\}$ and consider the number of
non-isomorphic interface consistent discrete iposets on $n$ points
with $s$ points in the starting, $t$ points in the terminating,
and $u$ points in both interfaces; then necessarily
$s+t-n\le u\le \min(s,t)$. The points not in both interfaces only
give rise to one isomorphism class, the points in the overlap may
be chosen in $\binom{s}{u} \binom{t}{u}$ non-isomorphic ways, and
their order is unique by interface consistency.
\item The argument is the same as above; but the missing interface
consistency requirement adds a factor $u!$. \qedhere
\end{enumerate} \end{proof}
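The counting formulas and the Vandermonde-style simplification are easy to check numerically. The following Python sketch (Python rather than the paper's Julia; function names are ours) evaluates the four items and the closed form, reproducing the first rows of Table \ref{ta:discrete}.

```python
from math import comb, factorial

def gp_starters(n):          # item 1: 2^n gp-starters (and gp-terminators)
    return 2 ** n

def starters(n):             # item 2: sum_{k=0}^n n!/k!
    return sum(factorial(n) // factorial(k) for k in range(n + 1))

def gp_discrete(n):          # item 3: gp-discrete iposets on n points
    return sum(comb(s, u) * comb(t, u)
               for s in range(n + 1) for t in range(n + 1)
               for u in range(max(0, s + t - n), min(s, t) + 1))

def discrete(n):             # item 4: dropping interface consistency adds u!
    return sum(comb(s, u) * comb(t, u) * factorial(u)
               for s in range(n + 1) for t in range(n + 1)
               for u in range(max(0, s + t - n), min(s, t) + 1))

def gp_discrete_closed(n):   # Vandermonde-style simplification of item 3
    return sum(comb(n + 2 + i, n - i) for i in range(n + 1))
```

For small $n$ these values agree with the columns $\textsf{D}$, $\textsf{GPD}$, $\textsf{S}$ and $\textsf{GPS}$ of Table \ref{ta:discrete}, and the closed form matches the double sum.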
A discrete iposet $(s,P,t):n\to n$ is a \emph{symmetry} if it is both a starter and a terminator, that is, if $s$ and $t$ are both bijective. All points of $P$ are then in both the starting and the terminating interface, but the permutation $t^{-1}\circ s:[n]\to [n]$ is not necessarily the identity. It is clear that there are precisely $n!$ non-isomorphic symmetries on $n$ points, and that any discrete iposet $P$ may be written $P\cong \sigma\glue Q\cong R\glue \tau$ for symmetries $\sigma$, $\tau$ and interface consistent iposets $Q$ and $R$.
We finish this section with a special case of the interchange property relating parallel and gluing compositions, \cf Remark \ref{re:interchange}.
\begin{lemma}[\cite{BeyondN2}]
\label{le:interchange}
Let $P_1$, $P_2$, $Q_1$, $Q_2$ be iposets such that
$\tgt(P_1)=\src(Q_1)$ and $\tgt(P_2)=\src(Q_2)$. Then
$(P_1\otimes P_2)\glue(Q_1\otimes Q_2)=(P_1\glue
Q_1)\otimes(P_2\glue Q_2)$ if and only if $P_1\glue Q_1$ or
$P_2\glue Q_2$ is discrete. \end{lemma}
\section{Iposets with Full Interfaces} \label{se:wink}
We now introduce a class of iposets where \emph{all} minimal and/or maximal points are in the interfaces; we name these after Winkowski \cite{DBLP:journals/ipl/Winkowski77} who, to the best of our knowledge, was the first to consider posets with interfaces, and who only considered such full-interface iposets.
\begin{definition}
An iposet $(s,P,t):n\to m$ is \emph{left Winkowski} if
$s([n])= P^{\min}$, \emph{right Winkowski} if $t([m])= P^{\max}$,
and \emph{Winkowski} if it is both left and right Winkowski. \end{definition}
Note that starters are precisely discrete right Winkowskis, terminators are precisely discrete left Winkowskis, and symmetries are precisely the discrete Winkowskis.
\begin{lemma}
\label{le:W-glue}
Let $P= P_1\glue P_2$ nontrivially.
\begin{itemize}
\item $P$ is left Winkowski if and only if $P_1$ is;
\item $P$ is right Winkowski if and only if $P_2$ is;
\item $P$ is Winkowski if and only if $P_1$ is left Winkowski and
$P_2$ is right Winkowski.
\end{itemize} \end{lemma}
\begin{proof}
We show that $P^{\min}=P_1^{\min}$ and $P^{\max}=P_2^{\max}$;
the lemma then follows. By non-triviality there must be $x\in P_1$
which is not in the target interface. Now
$P_1^{\min}\subseteq P^{\min}$ by definition of $\glue$, so assume
$y\in P^{\min}\setminus P_1^{\min}$. Then $y\notin P_1$, which
implies $x<y$ in contradiction to $y\in P^{\min}$. (Note that
non-triviality of $P=P_1\glue P_2$ is necessary here: if $P_1$ is a
starter, we may have $y\notin P_1$ but still $y\in P^{\min}$.) The
proof for $P^{\max}=P_2^{\max}$ is symmetric. \end{proof}
For parallel compositions, it is clear that $P_1\otimes P_2$ is (left/right) Winkowski if and only if $P_1$ and $P_2$ are. Our immediate interest in Winkowski iposets is to speed up generation of gp-iposets by only considering gp-Winkowskis. It is clear that any iposet has a decomposition $P=S\glue W\glue T$ into a starter $S$, a Winkowski $W$ and a terminator $T$; by the next lemma, this also holds in the gluing-parallel case.
\begin{lemma}
\label{le:decomp-SWT}
Any gp-iposet $P$ has a decomposition $P=S\glue W\glue T$ into a
starter $S$, a Winkowski $W$ and a terminator $T$ which are all
gluing-parallel. \end{lemma}
\begin{proof}
Let $n=|P|$ be the number of points in $P$. If $n\le 1$, then the
claim is trivially true as all iposets on $0$ or $1$ points are
gluing-parallel. Let $n\ge 2$, assume that the claim is true for
all iposets with fewer than $n$ points, and let $P$ be
gluing-parallel.
If $P= P_1\glue P_2$ nontrivially with $P_1$ and $P_2$
gluing-parallel, then by the induction hypothesis,
$P_1= S_1\glue W_1\glue T_1$ and $P_2= S_2\glue W_2\glue T_2$ for
$S_1$, $S_2$ gp-starters, $W_1$, $W_2$ gp-Winkowski, and $T_1$,
$T_2$ gp-terminators. Now
$P= P_1\glue P_2= S_1\glue( W_1\glue T_1\glue S_2\glue W_2)\glue
T_2$, and $W_1\glue T_1\glue S_2\glue W_2$ is gluing-parallel
because all four components are, and is Winkowski because $W_1$ and
$W_2$ are.
If $P= P_1\otimes P_2$ nontrivially with $P_1$ and $P_2$
gluing-parallel, then by the induction hypothesis,
$P_1= S_1\glue W_1\glue T_1$ and $P_2= S_2\glue W_2\glue T_2$ for
$S_1$, $S_2$ gp-starters, $W_1$, $W_2$ gp-Winkowskis, and $T_1$,
$T_2$ gp-terminators. Now
\begin{equation*}
P= P_1\otimes P_2=( S_1\glue W_1\glue T_1)\otimes( S_2\glue
W_2\glue T_2)=( S_1\otimes S_2)\glue( W_1\otimes W_2)\glue(
T_1\otimes T_2)
\end{equation*}
by Lemma \ref{le:interchange}, $S_1\otimes S_2$ is a gp-starter,
$W_1\otimes W_2$ is gp-Winkowski, and $T_1\otimes T_2$ is a
gp-terminator. \end{proof}
For generating gluing-parallel iposets it is thus sufficient to generate gp-Winkowskis, gp-starters and gp-terminators. The next lemma shows that these are also the only classes which need to be considered in the recursion.
\begin{lemma}
\label{le:gp-SWT}
For $P$ a gluing-parallel Winkowski iposet, the following are
exhaustive:
\begin{enumerate}
\item $P=\id_0$ or $P=\id_1$;
\item $P= P_1\otimes P_2$ nontrivially for $P_1$ and $P_2$
gp-Winkowski;
\item \label{en:gp-SWT.glue} $P= P_1\glue P_2$ nontrivially for
$P_1$ gp-Winkowski or a gp-terminator and $P_2$ gp-Win\-kow\-ski
or a gp-starter.
\end{enumerate} \end{lemma}
\begin{proof}
The first two cases are clear. Otherwise, $P= P_1\glue P_2$
nontrivially for $P_1$ and $P_2$ gluing-parallel. By Lemma
\ref{le:W-glue}, $P_1$ is left Winkowski and $P_2$ right Winkowski,
and by Lemma \ref{le:decomp-SWT} we can decompose $P_1=W_1\glue T_1$
and $P_2=S_2\glue W_2$ into gp-starters, gp-Winkowskis and
gp-terminators. Then $P=W_1\glue T_1\glue S_2\glue W_2$. There are
four cases to consider:
\begin{enumerate}
\item If both $W_1$ and $W_2$ are identities, then neither $T_1$ nor
$S_2$ are (by non-triviality of $P=P_1\glue P_2$), hence
$P=T_1\glue S_2$ nontrivially.
\item If $W_1$ is an identity, but $W_2$ is not, then also $T_1$ is
not an identity. Now if $S_2$ is an identity, then $P=T_1\glue
W_2$ nontrivially; otherwise, $T_1\glue S_2$ is Winkowski by Lemma
\ref{le:W-glue} and $P=(T_1\glue S_2)\glue W_2$.
\item The case of $W_2$ being an identity but not $W_1$ is
symmetric.
\item If neither $W_1$ nor $W_2$ is an identity and $T_1\glue S_2$
is, then $P=W_1\glue W_2$ nontrivially. If instead $T_1\glue S_2$ is
not an identity, then $T_1\glue S_2\glue W_2$ is Winkowski by
Lemma \ref{le:W-glue} and $P=W_1\glue(T_1\glue S_2\glue W_2)$
nontrivially. \qedhere
\end{enumerate} \end{proof}
Denoting by $G_n^\textup{W}$, $G_n^\textup{S}$ and $G_n^\textup{T}$ the subsets of $G_n$ consisting of Winkowskis, starters, and terminators, respectively, we have thus shown the following refinement of Lemma \ref{le:gp-rec}.
\begin{lemma}
For all $n>1$ and $0\le k,\ell\le n$,
\begin{multline*}
G_n^\textup{W}(k, \ell) =
\smash[b]{\bigcup_{\substack{1\le p, q<n \\ m=p+q-n \\ 0\le m<p \\
0\le m<q}}} \big( G_p^\textup{W}(k, m)\cup G_p^\textup{T}(k,
m) \big) \glue \big( G_q^\textup{W}(m, \ell)\cup G_q^\textup{S}(m,
\ell) \big)
\\[1ex]
\cup \bigcup_{\substack{p+q=n \\ p, q\ge 1 \\ k_1+k_2=k \\
\ell_1+\ell_2=\ell}} G_p^\textup{W}(k_1, \ell_1)\otimes
G_q^\textup{W}(k_2, \ell_2).
\end{multline*}
\qed \end{lemma}
\begin{table}
\centering
\caption{Different types of (i)posets on $n$ points: posets;
iposets; gp-iposets; Winkowski iposets; interface consistent
Winkowskis; gp-Winkowskis}
\label{ta:wink}
\begin{tabular}{r|rrrrrr}
$n$ & $\PP(n)$ & $\IP(n)$ & $\GPI(n)$ & $\textsf{WIP}(n)$ &
$\textsf{ICW}(n)$ & $\textsf{GPWI}(n)$ \\\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 4 & 4 & 1 & 1 & 1 \\
2 & 2 & 17 & 16 & 3 & 2 & 2 \\
3 & 5 & 86 & 74 & 13 & 8 & 8 \\
4 & 16 & 532 & 419 & 75 & 43 & 42 \\
5 & 63 & 4068 & 2980 & 555 & 311 & 284 \\
6 & 318 & 38.933 & 26.566 & 5230 & 3018 & 2430 \\
7 & 2045 & 474.822 & 289.279 & 63.343 & 39.196 & 25.417 \\
8 & 16.999 & 7.558.620 & 3.726.311 & 1.005.871 & 682.362 & 314.859 \\
9 & 183.231 & & & & & 4.509.670 \\[1ex]
EIS & 112 & 331158 & 331159
\end{tabular} \end{table}
Note that in order to find forbidden substructures, it is not enough to generate $G_n^\textup{W}(0,0)$ as was the case for general iposets; indeed $G_n^\textup{W}(0,0)=\emptyset$ for $n\ge 1$, since any nonempty poset has minimal and maximal points, which in a Winkowski iposet must lie in the interfaces. Generating $G_7^\textup{W}$ took about 4 seconds and $G_8^\textup{W}$ ca.\ 12 minutes on a standard laptop (compare this with the 4 minutes for $G_7$ and 300 hours for $G_8$). Generating $G_9^\textup{W}$ took 79 hours on one of the machines mentioned in footnote \ref{fn:norway}. Table \ref{ta:wink} shows the numbers of Winkowskis and gp-Winkowskis on $n$ points up to isomorphism, and Tables \ref{ta:wip123split} to \ref{ta:wip9split} in the appendix show the split into sources and targets.
\newcommand{\etalchar}[1]{$^{#1}$} \newcommand{\Afirst}[1]{#1} \newcommand{\afirst}[1]{#1}
\appendix \setcounter{table}0 \renewcommand{A.\arabic{table}}{A.\arabic{table}}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 1, 2 and 3 points split by number of
sources and targets.}
\label{ta:ip123split}
\begin{tabular}[t]{r|rr}
$\IP(1)$\!\! & 0 & 1 \\\hline
0 & 1 & 1 \\
1 & & 1
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrr}
$\IP(2)$\!\! & 0 & 1 & 2 \\\hline
0 & 2 & 2 & 1 \\
1 & & 3 & 2 \\
2 & & & 2 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrr}
$\ICI(2)$\!\! & 0 & 1 & 2 \\\hline
0 & 2 & 2 & 1 \\
1 & & 3 & 2 \\
2 & & & 1
\end{tabular}
\begin{tabular}[t]{r|rrrr}
$\IP(3)$\!\! & 0 & 1 & 2 & 3 \\\hline
0 & 5 & 6 & 4 & 1 \\
1 & & 9 & 8 & 3 \\
2 & & & 10 & 6 \\
3 & & & & 6 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrrr}
$\ICI(3)$ & 0 & 1 & 2 & 3 \\\hline
0 & 5 & 6 & 4 & 1 \\
1 & & 9 & 8 & 3 \\
2 & & & 9 & 3 \\
3 & & & & 1
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 4 points split by number of sources
and targets.}
\label{ta:ip4split}
\begin{tabular}[t]{r|rrrrr} $\IP(4)$\!\! & 0 & 1 & 2 & 3 & 4 \\\hline 0 & 16 & 22 & 19 & 8 & 1 \\ 1 & & 36 & 37 & 20 & 4 \\ 2 & & & 48 & 36 & 12 \\ 3 & & & & 42 & 24 \\ 4 & & & & & 24 \\ \end{tabular} \qquad
\begin{tabular}[t]{r|rrrrr} $\ICI(4)$\!\! & 0 & 1 & 2 & 3 & 4 \\\hline 0 & 16 & 22 & 19 & 8 & 1 \\ 1 & & 36 & 37 & 20 & 4 \\ 2 & & & 46 & 30 & 6 \\ 3 & & & & 19 & 4 \\ 4 & & & & & 1 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrr} $\GPI(4)$\!\! & 0 & 1 & 2 & 3 & 4 \\\hline 0 & 16 & 22 & 19 & 8 & 1 \\ 1 & & 36 & 37 & 20 & 4 \\ 2 & & & 45 & 30 & 6 \\ 3 & & & & 19 & 4 \\ 4 & & & & & 1 \\ \end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 5 points split by number of sources
and targets.}
\label{ta:ip5split}
\begin{tabular}[t]{r|rrrrrr} $\IP(5)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 63 & 101 & 106 & 63 & 16 & 1 \\ 1 & & 180 & 214 & 148 & 48 & 5 \\ 2 & & & 295 & 250 & 112 & 20 \\ 3 & & & & 282 & 192 & 60 \\ 4 & & & & & 216 & 120 \\ 5 & & & & & & 120 \\ \end{tabular} \qquad
\begin{tabular}[t]{r|rrrrrr} $\ICI(5)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 63 & 101 & 106 & 63 & 16 & 1 \\ 1 & & 180 & 214 & 148 & 48 & 5 \\ 2 & & & 290 & 232 & 88 & 10 \\ 3 & & & & 209 & 80 & 10 \\ 4 & & & & & 33 & 5 \\ 5 & & & & & & 1 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrr} $\GPI(5)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 \\\hline 0 & 63 & 101 & 106 & 62 & 16 & 1 \\ 1 & & 180 & 214 & 146 & 48 & 5 \\ 2 & & & 281 & 220 & 88 & 10 \\ 3 & & & & 198 & 80 & 10 \\ 4 & & & & & 33 & 5 \\ 5 & & & & & & 1 \\ \end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 6 points split by number of sources
and targets.}
\label{ta:ip6split}
\begin{tabular}[t]{r|rrrrrrr} $\IP(6)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline 0 & 318 & 576 & 720 & 552 & 217 & 32 & 1 \\ 1 & & 1131 & 1536 & 1303 & 589 & 112 & 6 \\ 2 & & & 2305 & 2221 & 1212 & 320 & 30 \\ 3 & & & & 2549 & 1812 & 720 & 120 \\ 4 & & & & & 1872 & 1200 & 360 \\ 5 & & & & & & 1320 & 720 \\ 6 & & & & & & & 720 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrr} $\ICI(6)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline 0 & 318 & 576 & 720 & 552 & 217 & 32 & 1 \\ 1 & & 1131 & 1536 & 1303 & 589 & 112 & 6 \\ 2 & & & 2289 & 2155 & 1098 & 240 & 15 \\ 3 & & & & 2245 & 1242 & 280 & 20 \\ 4 & & & & & 690 & 170 & 15 \\ 5 & & & & & & 51 & 6 \\ 6 & & & & & & & 1 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrr} $\GPI(6)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline 0 & 313 & 565 & 703 & 523 & 205 & 32 & 1 \\ 1 & & 1104 & 1493 & 1235 & 561 & 112 & 6 \\ 2 & & & 2146 & 1931 & 993 & 240 & 15 \\ 3 & & & & 1911 & 1092 & 280 & 20 \\ 4 & & & & & 644 & 170 & 15 \\ 5 & & & & & & 51 & 6 \\ 6 & & & & & & & 1 \\ \end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 7 points split by number of sources
and targets.}
\label{ta:ip7split}
\begin{tabular}[t]{r|rrrrrrrr} $\IP(7)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\hline 0 & 2045 & 4162 & 6026 & 5692 & 3074 & 771 & 64 & 1 \\ 1 & & 8945 & 13756 & 13925 & 8210 & 2352 & 256 & 7 \\ 2 & & & 22664 & 24956 & 16465 & 5654 & 864 & 42 \\ 3 & & & & 30610 & 23572 & 10440 & 2400 & 210 \\ 4 & & & & & 22880 & 14400 & 5280 & 840 \\ 5 & & & & & & 14040 & 8640 & 2520 \\ 6 & & & & & & & 9360 & 5040 \\ 7 & & & & & & & & 5040 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrrr} $\ICI(7)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\hline 0 & 2045 & 4162 & 6026 & 5692 & 3074 & 771 & 64 & 1 \\ 1 & & 8945 & 13756 & 13925 & 8210 & 2352 & 256 & 7 \\ 2 & & & 22601 & 24653 & 15829 & 5024 & 624 & 21 \\ 3 & & & & 29054 & 20072 & 6760 & 880 & 35 \\ 4 & & & & & 14489 & 4870 & 700 & 35 \\ 5 & & & & & & 1777 & 312 & 21 \\ 6 & & & & & & & 73 & 7 \\ 7 & & & & & & & & 1 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrrr} $\GPI(7)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\hline 0 & 1903 & 3813 & 5423 & 4878 & 2563 & 680 & 64 & 1 \\ 1 & & 8056 & 12179 & 11811 & 6865 & 2110 & 256 & 7 \\ 2 & & & 19129 & 19567 & 12305 & 4246 & 624 & 21 \\ 3 & & & & 21295 & 14420 & 5433 & 880 & 35 \\ 4 & & & & & 10439 & 4112 & 700 & 35 \\ 5 & & & & & & 1647 & 312 & 21 \\ 6 & & & & & & & 73 & 7 \\ 7 & & & & & & & & 1 \\ \end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of iposets on 8 points split by number of sources
and targets.}
\label{ta:ip8split}
\begin{tabular}[t]{r|rrrrrrrrr} $\IP(8)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\hline 0 & 16999 & 38280 & 63088 & 70946 & 49255 & 18152 & 2809 & 128 & 1 \\ 1 & & 89699 & 154451 & 182680 & 134680 & 53651 & 9451 & 576 & 8 \\ 2 & & & 279685 & 350957 & 278197 & 122505 & 25810 & 2240 & 56 \\ 3 & & & & 472927 & 410905 & 207923 & 56322 & 7392 & 336 \\ 4 & & & & & 406232 & 253640 & 96600 & 20160 & 1680 \\ 5 & & & & & & 218200 & 126120 & 43680 & 6720 \\ 6 & & & & & & & 118080 & 70560 & 20160 \\ 7 & & & & & & & & 75600 & 40320 \\ 8 & & & & & & & & & 40320 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrrrr} $\ICI(8)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\hline 0 & 16999 & 38280 & 63088 & 70946 & 49255 & 18152 & 2809 & 128 & 1 \\ 1 & & 89699 & 154451 & 182680 & 134680 & 53651 & 9451 & 576 & 8 \\ 2 & & & 279367 & 349229 & 273877 & 116985 & 22555 & 1568 & 28 \\ 3 & & & & 463000 & 384873 & 173073 & 34857 & 2576 & 56 \\ 4 & & & & & 334532 & 152970 & 30605 & 2520 & 70 \\ 5 & & & & & & 68080 & 14711 & 1484 & 56 \\ 6 & & & & & & & 3854 & 518 & 28 \\ 7 & & & & & & & & 99 & 8 \\ 8 & & & & & & & & & 1 \\ \end{tabular}
\begin{tabular}[t]{r|rrrrrrrrr} $\GPI(8)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\hline 0 & 13943 & 30333 & 48089 & 50187 & 32790 & 12348 & 2251 & 128 & 1 \\ 1 & & 68571 & 113701 & 125539 & 88295 & 36791 & 7789 & 576 & 8 \\ 2 & & & 193330 & 221192 & 164078 & 73774 & 17438 & 1568 & 28 \\ 3 & & & & 263828 & 206161 & 98655 & 25233 & 2576 & 56 \\ 4 & & & & & 169476 & 85192 & 22937 & 2520 & 70 \\ 5 & & & & & & 44362 & 12173 & 1484 & 56 \\ 6 & & & & & & & 3559 & 518 & 28 \\ 7 & & & & & & & & 99 & 8 \\ 8 & & & & & & & & & 1 \\ \end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 1, 2 and 3 points split by
number of sources and targets.}
\label{ta:wip123split}
\begin{tabular}[t]{r|rr}
$\textsf{WIP}(1)$\!\! & 0 & 1 \\\hline
0 & 0 & 0 \\
1 & & 1 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrr}
$\textsf{WIP}(2)$\!\! & 0 & 1 & 2 \\\hline
0 & 0 & 0 & 0 \\
1 & & 1 & 0 \\
2 & & & 2 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrr}
$\textsf{GPWI}(2)$\!\! & 0 & 1 & 2 \\\hline
0 & 0 & 0 & 0 \\
1 & & 1 & 0 \\
2 & & & 1 \\
\end{tabular}
\begin{tabular}[t]{r|rrrr}
$\textsf{WIP}(3)$\!\! & 0 & 1 & 2 & 3 \\\hline
0 & 0 & 0 & 0 & 0 \\
1 & & 1 & 1 & 0 \\
2 & & & 4 & 0 \\
3 & & & & 6 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrrr}
$\textsf{GPWI}(3)$\!\! & 0 & 1 & 2 & 3 \\\hline
0 & 0 & 0 & 0 & 0 \\
1 & & 1 & 1 & 0 \\
2 & & & 4 & 0 \\
3 & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 4 points split by number of
sources and targets.}
\label{ta:wip4split}
\begin{tabular}[t]{r|rrrrr}
$\textsf{WIP}(4)$\!\! & 0 & 1 & 2 & 3 & 4 \\\hline
0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 2 & 3 & 1 & 0 \\
2 & & & 11 & 6 & 0 \\
3 & & & & 18 & 0 \\
4 & & & & & 24 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrrrr}
$\textsf{GPWI}(4)$\!\! & 0 & 1 & 2 & 3 & 4 \\\hline
0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 2 & 3 & 1 & 0 \\
2 & & & 10 & 6 & 0 \\
3 & & & & 9 & 0 \\
4 & & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 5 points split by number of
sources and targets.}
\label{ta:wip5split}
\begin{tabular}[t]{r|rrrrrr}
$\textsf{WIP}(5)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 5 & 11 & 7 & 1 & 0 \\
2 & & & 41 & 43 & 8 & 0 \\
3 & & & & 81 & 36 & 0 \\
4 & & & & & 96 & 0 \\
5 & & & & & & 120 \\
\end{tabular}
\qquad
\begin{tabular}[t]{r|rrrrrr}
$\textsf{GPWI}(5)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 5 & 11 & 7 & 1 & 0 \\
2 & & & 39 & 36 & 8 & 0 \\
3 & & & & 61 & 18 & 0 \\
4 & & & & & 16 & 0 \\
5 & & & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 6 points split by number of
sources and targets.}
\label{ta:wip6split}
\begin{tabular}[t]{r|rrrrrrr}
$\textsf{WIP}(6)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 16 & 47 & 47 & 15 & 1 & 0 \\
2 & & & 200 & 285 & 135 & 10 & 0 \\
3 & & & & 598 & 408 & 60 & 0 \\
4 & & & & & 600 & 240 & 0 \\
5 & & & & & & 600 & 0 \\
6 & & & & & & & 720 \\
\end{tabular}
\begin{tabular}[t]{r|rrrrrrr}
$\textsf{GPWI}(6)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 16 & 47 & 46 & 15 & 1 & 0 \\
2 & & & 190 & 238 & 102 & 10 & 0 \\
3 & & & & 406 & 256 & 30 & 0 \\
4 & & & & & 222 & 40 & 0 \\
5 & & & & & & 25 & 0 \\
6 & & & & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 7 points split by number of
sources and targets.}
\label{ta:wip7split}
\begin{tabular}[t]{r|rrrrrrrr}
$\textsf{WIP}(7)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 63 & 243 & 343 & 185 & 31 & 1 & 0 \\
2 & & & 1203 & 2198 & 1609 & 391 & 12 & 0 \\
3 & & & & 5323 & 5185 & 1605 & 90 & 0 \\
4 & & & & & 6808 & 3720 & 480 & 0 \\
5 & & & & & & 4800 & 1800 & 0 \\
6 & & & & & & & 4320 & 0 \\
7 & & & & & & & & 5040 \\
\end{tabular}
\begin{tabular}[t]{r|rrrrrrrr}
$\textsf{GPWI}(7)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 63 & 239 & 318 & 173 & 31 & 1 & 0 \\
2 & & & 1096 & 1727 & 1129 & 260 & 12 & 0 \\
3 & & & & 3284 & 2699 & 838 & 45 & 0 \\
4 & & & & & 2864 & 1112 & 80 & 0 \\
5 & & & & & & 595 & 75 & 0 \\
6 & & & & & & & 36 & 0 \\
7 & & & & & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of Winkowski iposets on 8 points split by number of
sources and targets.}
\label{ta:wip8split}
\begin{tabular}[t]{r|rrrrrrrrr}
$\textsf{WIP}(8)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 318 & 1533 & 2891 & 2319 & 707 & 63 & 1 & 0 \\
2 & & & 8895 & 20195 & 20222 & 8333 & 1099 & 14 & 0 \\
3 & & & & 56783 & 71835 & 37396 & 5688 & 126 & 0 \\
4 & & & & & 112751 & 72140 & 17580 & 840 & 0 \\
5 & & & & & & 74000 & 35400 & 4200 & 0 \\
6 & & & & & & & 42120 & 15120 & 0 \\
7 & & & & & & & & 35280 & 0 \\
8 & & & & & & & & & 40320 \\
\end{tabular}
\begin{tabular}[t]{r|rrrrrrrrr}
$\textsf{GPWI}(8)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 313 & 1432 & 2413 & 1856 & 616 & 63 & 1 & 0 \\
2 & & & 7402 & 13942 & 12152 & 4736 & 626 & 14 & 0 \\
3 & & & & 29702 & 30062 & 14150 & 2433 & 63 & 0 \\
4 & & & & & 36058 & 20366 & 4230 & 140 & 0 \\
5 & & & & & & 13812 & 3507 & 175 & 0 \\
6 & & & & & & & 1316 & 126 & 0 \\
7 & & & & & & & & 49 & 0 \\
8 & & & & & & & & & 1 \\
\end{tabular} \end{table}
\begin{table}[tbp]
\centering
\caption{Numbers of gp-Winkowski iposets on 9 points split by number of
sources and targets.}
\label{ta:wip9split}
\begin{tabular}[t]{r|rrrrrrrrrr}
$\textsf{GPWI}(9)$\!\! & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & & 1903 & 10109 & 20397 & 20173 & 9935 & 2123 & 127 & 1 & 0 \\
2 & & & 57949 & 126041 & 135862 & 74501 & 18507 & 1456 & 16 & 0 \\
3 & & & & 298002 & 355788 & 221772 & 65086 & 6585 & 84 & 0 \\
4 & & & & & 478218 & 340355 & 115612 & 13988 & 224 & 0 \\
5 & & & & & & 276343 & 108612 & 15337 & 350 & 0 \\
6 & & & & & & & 49569 & 8960 & 336 & 0 \\
7 & & & & & & & & 2555 & 196 & 0 \\
8 & & & & & & & & & 64 & 0 \\
9 & & & & & & & & & & 1 \\
\end{tabular} \end{table}
\end{document} | arXiv |
Parasites & Vectors
Detection of Dirofilaria immitis and other arthropod-borne filarioids by an HRM real-time qPCR, blood-concentrating techniques and a serological assay in dogs from Costa Rica
Alicia Rojas1,3,
Diana Rojas1,
Víctor M Montenegro2 &
Gad Baneth3
Parasites & Vectors volume 8, Article number: 170 (2015)
Canine filarioids are important nematodes transmitted to dogs by arthropods. Diagnosis of canine filariosis is accomplished by the microscopic identification of microfilariae, serology or PCR for filarial-DNA. The aim of this study was to evaluate a molecular assay for the detection of canine filariae in dog blood, to compare its performance to other diagnostic techniques, and to determine the relationship between microfilarial concentration and infection with other vector-borne pathogens.
Blood samples from 146 dogs from Costa Rica were subjected to the detection of canine filarioids by four different methods: the microhematocrit tube test (MCT), Knott's modified test, serology and a high resolution melt and quantitative real-time PCR (HRM-qPCR). Co-infection with other vector-borne pathogens was also evaluated.
Fifteen percent of the dogs were positive for Dirofilaria immitis by at least one of the methods. The HRM-qPCR produced distinctive melting plots for the different filarial worms and revealed that 11.6% of dogs were infected with Acanthocheilonema reconditum. The latter assay had a limit of detection of 2.4×10−4 mf/μl and detected infections with lower microfilarial concentrations than the microscopic techniques and the serological assay. The MCT and Knott's test only detected dogs with D. immitis microfilaremias above 0.7 mf/μl. Nevertheless, there was a strong correlation between the microfilarial concentrations obtained by the Knott's modified test and the HRM-qPCR (r = 0.906, p < 0.0001). Interestingly, one dog was found infected with Cercopithifilaria bainae. Moreover, no association was found between microfilaremia and co-infection, and there was no significant difference in microfilarial concentration between dogs infected only with D. immitis and dogs co-infected with Ehrlichia canis, Anaplasma platys or Babesia vogeli.
This is the first report of A. reconditum and C. bainae in Costa Rica and Central America. Among the evaluated diagnostic techniques, the HRM-qPCR showed the most sensitive and reliable performance in the detection of blood filarioids in comparison to the Knott's modified test, the MCT and the serological assay.
Background
Canine arthropod-borne filarioids include nematodes of the superfamily Filarioidea which are transmitted by arthropods such as mosquitoes, fleas, lice and ticks [1]. Dirofilaria immitis, D. repens, Acanthocheilonema reconditum, Onchocerca lupi and Thelazia callipaeda are among the most important species that affect dogs. Animals infected with these parasites may remain asymptomatic or suffer from subcutaneous abnormalities, formation of nodules in subcutaneous tissues or life-threatening pathologies that include cardiovascular complications [2].
The distribution of canine filarioids depends on the presence of the vector, climate conditions (such as temperature, relative humidity and precipitation), the density of the human population and the presence of other canid populations that serve as reservoirs for these filarioids [3]. In the case of Costa Rica, D. immitis is the only canine filarioid reported to date. In 2009, a seroprevalence study of 84 owned dogs revealed that 2.3% were infected with heartworm [4]. In addition, seven cases of human dirofilariosis have been reported in Costa Rica since 1984 [5-9].
The diagnosis of canine filariosis in clinical laboratories can be accomplished by the identification of microfilariae, serology or PCR for filarial DNA from the dog's blood. The gold standard of filarial detection has been the modified Knott's test, which relies on the observer's expertise and ability to morphologically identify microfilariae concentrated from the blood [10]. Serological diagnosis of D. immitis is based on the detection of a female adult antigen, and has been applied for clinical purposes and in epidemiological studies [11]; however, it restricts detection only to D. immitis, disregarding other canine filarioids. Molecular detection techniques have been designed to detect different genetic loci that identify canine filarioids in general or certain species with high sensitivity and specificity [12-15]. Nevertheless, none of these previous studies compared the validity of the Knott's test against all of the other diagnostic methods included in this study.
The purpose of this study was to evaluate infection with canine filarioids by two blood-concentrating techniques (Knott's modified test and the microcapillary test), a serological assay and a novel quantitative HRM real-time qPCR; to compare the performance of the tests; and to determine the relationship between microfilarial concentration and co-infection with other vector-borne pathogens, demographic data and PCV values.
Methods
Animals and sample collection
One hundred and forty six blood samples from dogs were obtained from the Costa Rican regions of San Ramón, Alajuela (Costa Rica's Central Valley, elevation 1060 m); Kéköldi, Limón (The Atlantic coast, elevation 169 m); Liberia, Guanacaste (Pacific coast, elevation 142 m) and Chomes, Puntarenas (Pacific Coast, elevation 8 m), during the rainy season (May to November) of 2012 as a part of a previous study [16]. The regions were chosen because they represented different geophysical and climate conditions. A questionnaire was filled for each animal with information regarding sex and age. Blood was obtained from the cephalic vein and collected in EDTA and serum tubes. The samples were transported at 4°C to the laboratory. After allowing blood to clot, sera were separated by centrifugation and stored at −20°C until further analysis. The packed cell volume (PCV) of each dog was measured by glass microcapillary centrifugation from EDTA blood samples. Dogs were divided into three groups according to their PCV value: group 1 (PCV: 7-24%), group 2 (PCV: 25-34%) and group 3 (PCV: 35-50%). The study was approved by the Inter-Institutional Committee for the Care and Use of Animals (CICUA), Universidad de Costa Rica.
Microcapillary test (MCT)
EDTA blood samples were centrifuged in microhematocrit tubes and the buffy coat was analyzed for the presence of microfilariae by light microscopy at 100× and 400× magnification. The number of microfilariae was recorded for each sample, as described elsewhere [17].
Knott's modified test
Knott's modified test was performed with EDTA blood samples from dogs as described by Castillo and Guerrero [18] with the following modifications. Briefly, 0.5 ml of EDTA blood was added to 4.5 ml of 2% formalin, mixed by inversion and centrifuged at 3000 × g for 5 minutes. The volume of supernatant was measured for each sample and later discarded. The sediment was mixed with 35 μl of 0.1% methylene blue and 20 μl of this mixture were observed by a light microscope at 100× and 400× magnifications. No morphometric distinction was made between microfilariae of different species. The number of microfilariae per microliter (mf/μl) was calculated according to the following formula:
$$ mf/\mu l=\frac{mf\ observed\times \left\{\left[\left({V}_{blood}+{V}_{formalin}\right)-{V}_{supernatant}\right]+{V}_{methylene\ blue}\right\}}{V_{sample}\times {V}_{blood}} $$
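The formula can be sketched as a small helper function (hypothetical, not part of the study; the default volumes in μl follow the protocol above, except the supernatant volume, which varies per sample and is illustrative here):

```python
def microfilariae_per_ul(mf_observed, v_blood=500.0, v_formalin=4500.0,
                         v_supernatant=4800.0, v_methylene_blue=35.0,
                         v_sample=20.0):
    """Estimate microfilariae per microliter of blood from a Knott's test count.

    All volumes are in microliters. The sediment volume is what remains of the
    blood/formalin mixture after the supernatant is discarded, plus the
    methylene blue added for staining. Scaling the count in the observed
    aliquot up to the sediment volume and dividing by the blood volume yields
    microfilariae per microliter of blood.
    """
    sediment_volume = (v_blood + v_formalin) - v_supernatant + v_methylene_blue
    return mf_observed * sediment_volume / (v_sample * v_blood)

# e.g. 10 microfilariae seen in a 20 μl aliquot of 235 μl of stained sediment
print(microfilariae_per_ul(10))  # 0.235
```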
Serological examination
The commercial kit VetScan® Canine Heartworm Rapid Test (Abaxis Inc, Union City, CA) was employed for the detection of D. immitis. This rapid assay detects circulating D. immitis female adult antigen in sera and the manufacturer declares a sensitivity and specificity of 98% and 100%, respectively [19]. The test was performed and its results were interpreted according to the manufacturer's instructions.
DNA extraction from dog samples
DNA from EDTA blood samples was extracted with a commercial kit (Illustra Blood Genomic Prep Mini Spin Kit, GE Healthcare, Buckinghamshire, UK), following the manufacturer's instructions.
Screening for filarioid DNA with HRM real-time PCR
A high resolution melt (HRM) real-time PCR was performed using primers that target a partial sequence of the mitochondrial 12S gene of filarioids of approximately 115 bp [15]. Primers (F5′-TTTAAACCGAAAAAATATTGACTGAC-3′ and R5′- AAAAACTAAACAATCATACATGTGCC-3′) were designed to detect D. immitis, Brugia malayi and B. pahangi [15] but they are also able to amplify the DNA of other filarial species. Three microliters of each DNA sample were diluted in a final volume of 20 μl with 10 μl of Maxima Hot Start PCR Master Mix (Thermo Fisher Scientific Inc., Surrey, UK), 4.4 μl sterile PCR grade water, 0.6 μl of SYTO-9 (Invitrogen, Carlsbad, US) and 1 μl of each primer at 500 nM. The protocol was modified by performing an initial hold of 4 min at 95°C and 50 cycles of 5 s at 95°C, 15 s at 58°C and 10 s at 72°C. The melt curve was constructed from 60°C to 95°C with increments of 1°C/sec, followed by a hybridization step. An HRM curve was measured from 70 to 85°C at 0.1°C/sec. Reactions were performed with a Rotor Gene 6000™ cycler (Corbett, Sydney, AU). All runs included a non-template control (NTC) with PCR-grade water and DNA from the blood sample of a laboratory-bred pathogen-free dog. As positive controls, DNA extracted from blood samples with D. immitis and A. reconditum, from heavily infected dogs from Puntarenas and Guanacaste, Costa Rica, respectively, was used and run in each reaction. Additionally, DNA from D. repens-positive blood samples from Israel was employed for the standardization of the assay. All positive amplicons obtained in the study were confirmed by sequencing (described below).
Co-infection analysis
Specific PCR reactions for D. immitis and A. reconditum were performed to detect potential co-infection cases in the positive samples detected by the general filarioid HRM real-time PCR (described above). Detection of D. repens was not attempted due to the reported absence of this filarioid in the Americas [20]. Accordingly, positive samples for A. reconditum were run in a HRM real-time PCR specific for D. immitis, and the positive samples for D. immitis were run in a HRM real-time PCR specific for A. reconditum.
Dirofilaria immitis detection was targeted using primers DI COI-F1 (5′- AGTGTAGAGGGTCAGCCTGAGTTA-3′) and DI COI-R1 (5′- ACAGGCACTGACAATACCAAT-3′) [12] at a concentration of 250 nM, which amplify a 200 bp fragment of the cytochrome oxidase (cox1) gene of D. immitis. The conditions consisted of an initial hold of 4 min at 95°C and 50 cycles of 15 sec at 95°C, 30 sec at 59°C and 5 sec at 72°C. The melt curve went from 60°C to 95°C with a raise of 1°C/1 sec, followed by a hybridization step from 90°C to 50°C. Finally, an HRM curve was performed from 70°C to 82°C, with an increment of 0.1°C/sec. Each run included a non-template control with PCR grade water, a negative control and a positive control of D. immitis.
The HRM real-time PCR for A. reconditum-DNA was carried out using primers AR COI-F1 (5′- AGTGTTGAGGGACAGCCAGAATTG-3′) and AR COI-R1 (5′-CCAAAACTGGAACAGACAAAACAAGC-3′) at a concentration of 500 nM, which amplify a 200 bp fragment of the cytochrome oxidase (cox1) gene of A. reconditum. The conditions consisted of an initial hold of 4 min at 95°C and 50 cycles of 15 sec at 95°C, 30 sec at 59°C and 5 sec at 72°C. The melt curve went from 60°C to 95°C with a raise of 1°C/1 sec, followed by a hybridization step from 90°C to 50°C. Finally, an HRM curve was performed from 70°C to 85°C, with an increment of 0.1°C/sec.
Quantitative HRM real-time PCR (HRM-qPCR) for D. immitis
A standard curve for the absolute quantification of D. immitis by HRM-qPCR was developed. Accordingly, a serial dilution of the DNA extracted from the blood of a D. immitis-infected dog with known microfilariae concentration (14.33 D. immitis mf/μl of blood, determined twice by the Knott's modified test) was used as the standard points for the curve. This quantitative real-time PCR targets the mitochondrial 12S gene of filarial species with conditions and reaction volumes as described above. Thus, three-fold serial dilutions of the DNA-positive control were prepared in sterile PCR grade water (Sigma, St. Louis, USA). The serial dilutions ranged from 1.4×101 to 8.1×10−6 mf/μl. All the points of the standard curve (11 in total) were analyzed in triplicate. The standard curve was prepared by plotting the logarithm of mf/μl against the threshold cycle (Ct) values. The slope, intercept, efficiency and R² values of this curve were obtained.
All the positive samples for D. immitis were quantified with the standard curve. The estimated microfilarial concentration (mf/μl) was calculated by interpolating the Ct value of each sample on the standard curve equation. In order to normalize variation within and between PCR runs, a correction factor was calculated for each run by dividing the Ct of a standard point in the original standard curve by the Ct of the same point in that run. The Ct of each sample was then corrected by multiplying it by this correction factor.
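The interpolation and run-to-run correction steps can be sketched as follows (the slope, intercept and Ct values below are illustrative placeholders, not the study's fitted parameters):

```python
def mf_from_ct(ct, slope, intercept):
    """Interpolate a (corrected) Ct value on a standard curve of the form
    Ct = slope * log10(mf/ul) + intercept, solved for mf/ul."""
    return 10 ** ((ct - intercept) / slope)

def corrected_ct(ct_sample, ct_standard_in_curve, ct_standard_in_run):
    """Normalize between-run variation: scale the sample Ct by the ratio of
    the standard point's Ct in the original curve to its Ct in this run."""
    return ct_sample * (ct_standard_in_curve / ct_standard_in_run)

# Illustrative parameters: a slope of about -3.42 corresponds to ~96% efficiency
slope, intercept = -3.42, 30.0
ct = corrected_ct(ct_sample=33.42, ct_standard_in_curve=30.0,
                  ct_standard_in_run=30.0)
print(round(mf_from_ct(ct, slope, intercept), 3))  # 0.1
```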
Positive DNA amplicons were purified (EXO-Sap, New England Biolabs Inc., Ipswich, MA, USA) and subsequently sequenced using BigDye Terminator cycle sequencing chemistry on an Applied Biosystems ABI3700 DNA Analyzer with ABI's Data Collection and Sequence Analysis software (ABI, Carlsbad, US). Samples were identified when the sequence of the amplicon indicated that the closest GenBank accession was at least 97% identical to the identified species. The data were analyzed with the Chromas Lite Version 2.01 software and compared to the GenBank database using the BLASTn 2.2.26 program (http://www.ncbi.nlm.nih.gov/BLAST).
Statistical analysis
Infection rates (%) of canine filarioids were expressed with 95% confidence intervals. To estimate the potential association between nominal variables, the Chi-square or Fisher's exact tests were applied, according to the sample size. A two-tailed T-test was employed to evaluate differences between microfilarial concentrations in dogs infected only with D. immitis and those co-infected with Babesia vogeli, Hepatozoon canis, Ehrlichia canis or Anaplasma platys [16], and to evaluate the difference in microfilarial concentration across PCV values. A two-tailed Pearson correlation test and a linear regression test were performed to evaluate the correlation between the microfilarial concentrations obtained by the Knott's modified test and the HRM real-time qPCR. A paired two-tailed T-test was employed to compare the mean microfilarial concentration of D. immitis obtained by the Knott's modified test and the HRM real-time qPCR. Additionally, Cohen's kappa coefficient was calculated to determine the agreement in the detection of cases of D. immitis infection between the four diagnostic tests employed. All tests were evaluated under the null hypothesis of independence. Significance was determined with p < 0.05. The Bonferroni correction was applied in cases where multiple comparisons were performed. Statistical analyses were performed using the IBM SPSS Statistics 20.0 software.
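The agreement analysis can be illustrated with a minimal Cohen's kappa computation for two binary tests compared on the same dogs (the counts below are made up for illustration, not the study's data):

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for agreement between two binary diagnostic tests,
    given the cells of their 2x2 concordance table."""
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n
    # Expected chance agreement from the marginal positive/negative rates
    p_expected = ((both_pos + a_only) * (both_pos + b_only)
                  + (b_only + both_neg) * (a_only + both_neg)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts: 10 dogs positive by both tests, 2 by test A only,
# 3 by test B only, and 85 negative by both
print(round(cohens_kappa(10, 2, 3, 85), 2))  # 0.77
```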
Results
Microcapillary test (MCT)
A total of 8.9% (13/146; 95% C.I.: 5.0-14.4%) of the blood samples were positive for filarioids by the MCT (Additional file 1: Table S1). The MCT did not distinguish between microfilariae species. All positive dogs were from the region of Chomes, Puntarenas.
Knott's modified test
Seventeen percent (25/146, 95% C.I.: 11.4-24.2%) of the dogs were found to harbor microfilariae by the Knott's modified test, which was not employed to distinguish between microfilariae species. All of the microfilariae were found in dogs from Chomes, Kéköldi and San Ramón (Additional file 1: Table S1). The average number of microfilariae ranged from 0.05 to 22.7 mf/μl. The dogs from Chomes presented the highest microfilaremia compared to the other two locations (Fisher's exact test, p < 0.001).
Serological assay
Eleven percent of the samples (16/146, 95% C.I.: 6.4-17.2%) were positive for D. immitis antigen (Additional file 1: Table S1). Additionally, two samples were classified as inconclusive according to the manufacturer's interpretation criteria.
Molecular methods
The HRM real-time PCR screening revealed that 22.6% of the dogs (33/146, 95% C.I.: 16.1-30.2%) were positive for filarioid DNA. Of these, 51.5% (17/33, 95% C.I.: 33.5-69.2%) were identified as D. immitis and 48.5% (16/33, 95% C.I.: 30.8-66.5%) as A. reconditum according to their DNA sequences. Moreover, A. reconditum and Dirofilaria spp. produced clearly distinct HRM curves (Figure 1). The GenBank accession numbers with the closest match and identity percentages for D. immitis DNA sequences were FN391554.1 (97%) and HQ540423.1 (100%), and for A. reconditum JF461460.1 (97%).
Figure 1. HRM real-time qPCR analysis for the identification of the 12S rRNA gene of canine filarioids. Normalized HRM curves of positive samples with Acanthocheilonema reconditum (blue) (Tm = 74.39 ± 0.03°C), Dirofilaria immitis (green) (Tm = 74.92 ± 0.04°C) and D. repens (red) (Tm = 75.54 ± 0.05°C).
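Species assignment from melting temperatures like those above can be sketched as a nearest-Tm lookup (in the study, amplicons were additionally confirmed by sequencing; the tolerance window below is an illustrative choice, not a validated cutoff):

```python
# Mean melting temperatures (°C) of the 12S amplicons for each species
REFERENCE_TM = {
    "Acanthocheilonema reconditum": 74.39,
    "Dirofilaria immitis": 74.92,
    "Dirofilaria repens": 75.54,
}

def classify_by_tm(tm, tolerance=0.15):
    """Return the species whose reference Tm is closest to the observed Tm,
    or None if no reference lies within the tolerance window."""
    species, ref = min(REFERENCE_TM.items(), key=lambda kv: abs(kv[1] - tm))
    return species if abs(ref - tm) <= tolerance else None

print(classify_by_tm(74.93))  # Dirofilaria immitis
print(classify_by_tm(76.50))  # None
```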
No cases of dog co-infection with D. immitis and A. reconditum were revealed by the specific HRM real-time PCRs.
One dog with a positive result in the HRM real-time PCR screening for filarioids yielded a low-quality, inconclusive sequence with the closest match to a dermal filarioid. Therefore, skin scrapes and conjunctival swabs of this dog obtained from a previous study [16] were submitted to the Dipartimento di Medicina Veterinaria, Università degli Studi di Bari, in Italy, for additional testing. PCR for the detection of the 12S and cox1 genes of Cercopithifilaria sp. was performed on these samples and revealed the presence of Cercopithifilaria bainae (100% identity to GenBank accession numbers JF461461 and JF461457 for the 12S and cox1 genes, respectively).
Quantification of D. immitis by the HRM real-time PCR
The standard curve for the quantification of D. immitis is shown in Additional file 2: Figure S1. The curve had an efficiency of 96%, R² = 0.985 and a limit of detection of 2.4×10−4 mf/μl. The microfilaremia of the dogs ranged from 6.6×10−6 to 34.2 mf/μl. Three dogs presented lower microfilaremia than the lowest concentration in the curve. Additionally, two dogs had higher microfilaremia than the highest point of the standard curve. The concentrations of these samples were calculated by extrapolation of the curve assuming linearity, and should thus be considered estimates.
Evaluation of method performance
Filarioids were detected in 24.0% (35/146, 95% C.I.: 17.3-31.7%) of the dogs when the results of all the employed methods were combined. The HRM real-time qPCR detected 94.3% (33/35, 95% C.I.: 80.8-99.3%) of the total positives, whereas the MCT, the Knott's test and the serological assay detected 37.1% (13/35, 95% C.I.: 21.5-55.1%), 71.4% (25/35, 95% C.I.: 53.7-85.7%) and 45.7% (16/35, 95% C.I.: 28.8-63.4%), respectively.
Dirofilaria immitis was identified by the HRM real-time PCR and the serological assay in 15.1% (22/146, 95% C.I.: 9.7-22.0%) of the samples. The HRM real-time PCR detected 77.3% (17/22, 95% C.I.: 54.6-92.2%) and the serological assay 72.7% (16/22, 95% C.I.: 49.8-89.3%) of the D. immitis-positive dogs (Table 1). Five samples were detected only by the serological assay and 6 only by the HRM real-time PCR. There was a moderate statistical agreement in the detection of D. immitis by the HRM real-time PCR and the Knott's and microcapillary tests (all κ > 0.522, all p < 0.005), and almost perfect agreement between Knott's test and the microcapillary test (κ = 0.91, p < 0.0001). However, there was no agreement in the detection of dirofilariosis cases by the serological assay and the HRM real-time PCR, Knott's and microcapillary tests (all κ < −0.09, all p > 0.12).
Table 1 Comparative detection of Dirofilaria immitis by different diagnostic assays
The quantification of the microfilaremia level by the HRM-qPCR allowed the comparison of positive samples detected by other tests as well. Accordingly, the MCT and Knott's test only detected as positive those dogs with microfilaremias above 0.7 mf/μl by the HRM-qPCR, and missed 4 dogs with lower concentrations of microfilariae (Figure 2). However, three samples, detected as positive for microfilariae by the microscopic methods, were found negative by this molecular assay. The serological assay detected cases of dirofilariosis among samples of all microfilariae concentrations (Figure 2), and 5 samples were found positive only serologically and were negative by the techniques dependent on detection of microfilaremia (MCT, Knott's test and HRM-qPCR). The mean values of microfilariae/μl obtained by the HRM-qPCR (6.94 ± 8.5 mf/μl) and the Knott's test (5.32 ± 7.22 mf/μl) did not differ significantly (two-tailed paired T-test, d.f.: 15, p = 0.075). Moreover, there was a strong positive correlation between the microfilarial concentrations obtained by the Knott's modified test and the HRM real-time qPCR (two-tailed Pearson correlation test, r = 0.906, p < 0.0001) (Figure 3).
Figure 2. Comparison of the HRM real-time qPCR, microscopic and serological methods in D. immitis detection. Each point of the curve corresponds to a sample positive for D. immitis according to the HRM real-time qPCR and/or the microcapillary test (MCT), Knott's test (KT) and a serological assay. The concentration of mf/μl was obtained by interpolation to the standard curve. The MCT and KT detected only samples with concentrations higher than 0.7 mf/μl as shown in the high microfilaremic section of the graph.
Figure 3. Correlation between microfilarial concentrations obtained by HRM real-time qPCR for D. immitis and the Knott's test. The coefficient of linear regression is shown in the graph. Each point corresponds to a different dog blood sample.
The real-time PCR identified filarioids that the other assays could not detect. Acanthocheilonema reconditum was correctly identified only by the real-time PCR and confirmed by sequencing in 11.0% (16/146, 95% C.I.: 6.4-17.2%) of the samples. Notably, the serological assay did not cross-react with this filarial species.
Co-infection with vector-borne hemopathogens and D. immitis
Sixty-five percent (11/17, 95% C.I.: 38.3-85.8%) and 19% (3/16, 95% C.I.: 4.1-45.6%) of the dogs with molecularly detected D. immitis (Table 2) and A. reconditum, respectively, were co-infected with protozoal or bacterial vector-borne pathogens such as Babesia vogeli, Ehrlichia canis and Anaplasma platys detected in our previous study [16]. There was no difference in microfilarial concentration between dogs with single infection with D. immitis (mean concentration: 13.1 ± 16.2 mf/μl) and dogs co-infected with the other hemopathogens (mean concentration: 6.7 ± 5.6 mf/μl) (two-tailed T-test, T = 0.58, d.f. = 15, p = 0.238). Notably, the only dog in the study co-infected with three pathogens (D. immitis, B. vogeli and A. platys) had the lowest microfilaremia (6.6×10−6 mf/μl).
Table 2 Dirofilaria immitis -microfilarial concentration and co-infection with other vector-borne hemopathogens in dogs from Costa Rica
Association of location, age, sex and PCV values with detection of filarioids
The presence of filarioids, as detected by the HRM real-time PCR, varied with respect to the location, sex, age and PCV value of the dogs (Additional file 3: Table S2). The distribution of D. immitis and A. reconditum was significantly higher in Chomes and Kéköldi, respectively, than in the other sampled regions (Chi-square test, p < 0.0001 for each location). With regard to age, 82% (14/17, 95% C.I.: 56.6-96.2%) of the cases with D. immitis occurred in dogs younger than 4 years, and 50% (8/16, 95% C.I.: 24.6-75.3%) of the dogs with A. reconditum were younger than 1 year. Infection with these filarioids was observed in more males (29.5%; C.I. 95%: 18.6-39.5%) than females (17.6%; C.I. 95%: 9.5-28.8%). However, no significant association was found between filarioid infection and the sex or age of the dogs (Chi-square test, p = 0.132). Additionally, there was no significant difference between the PCV values of dogs with D. immitis or A. reconditum and those of dogs negative for filarioids (two-tailed T-test, p = 0.36 and p = 0.26, respectively).
Discussion
Canine filarioids are arthropod-borne pathogens that cause severe disease in dogs and potentially also in humans. The wide distribution of these parasites is attributed to the adaptation of their vectors to their final hosts and the environment, as well as to climate change [1]. This study describes the presence of A. reconditum and C. bainae in dogs from Costa Rica and Central America for the first time. Moreover, it compared the performance of three different methods currently employed in clinical practice and a novel HRM real-time qPCR for the detection of D. immitis.
Dirofilaria immitis was detected in 15% of the dogs sampled from Costa Rica by the combination of the HRM real-time PCR and a serological assay. The prevalence of D. immitis found in this study is higher than the 2.3% obtained in a previous serologic study from Costa Rica [4]. The higher prevalence of infection found may be explained by the use of a combination of detection techniques including molecular and serological assays, and also by sampling regions of Costa Rica with a potentially higher abundance of this nematode. We found that the coastal region of Chomes is endemic for the parasite since 88% of the cases were from this area and those dogs also had the highest microfilaremias. This finding is in agreement with the increased distribution of this filarioid in shorelines [21]. A study performed on convenience samples of dogs from neighboring Nicaragua did not detect D. immitis by PCR [22], however, serosurveys from the Caribbean and South America have described prevalence rates of infection that reach 74% [20]. No other Dirofilaria spp. were detected by our molecular assay, even though microfilariae resembling D. repens were reported recently in dogs from Chile [2].
Acanthocheilonema reconditum was detected in 11% of the sampled dogs. Prevalence studies in the Americas have reported rates of infection that range from 0.1% to 22% in the United States [23] and Brazil [24], respectively. The high prevalence of this filarioid in our study may be due to the widespread parasitism of A. reconditum's intermediate hosts (e.g. fleas and lice) among dog populations. Despite the fact that the pathogenicity of A. reconditum is low compared to other filarioids [25], the occurrence of this parasite should be highlighted since it constitutes an important differential diagnosis for D. immitis in studies of dogs employing morphological detection techniques.
The HRM real-time qPCR performed in the present study successfully quantified D. immitis microfilariae and distinguished filarioids that were not detected by the other employed assays. Moreover, there was a strong correlation between the microfilarial concentrations obtained by Knott's modified test and the current PCR (Figure 3). This can be explained by the use of a positive control quantified by Knott's test for preparing the standard curve of the HRM-qPCR, as done in other qPCR protocols for detecting parasites [26]. These results show that, although the quantification of microfilariae by the two assays was mostly similar, the qPCR had the advantage of detecting positive samples with lower microfilarial concentrations (Figure 2). The molecular assay employed herein was able to detect cases with very low microfilaremia, which may occur during initial microfilaremia or following incomplete treatment [27].
A previously reported duplex quantitative real-time PCR for the detection of D. immitis and D. repens found a lower limit of detection (8.0×10−6 mf/μl) than the present assay (2.4×10−4 mf/μl) [14]. Nevertheless, the present method has the advantage of detecting other filarioids with a single pair of primers and separating them based on their HRM-curves, which makes it less laborious in the screening of large numbers of dogs. A limitation of our method was the use of a positive sample as the starting point of the standard curve. The latter required the extrapolation of microfilaremia values above the curve. Although challenging, a potential solution to this limitation is the isolation, quantification and DNA-extraction of higher number of microfilariae obtained from an in vitro culture [28].
The microscopic assays, i.e. the MCT and the modified Knott's method, were useful in detecting more than half of the dogs infected with filarioids. The difficulty of identification in this study stems from the observation of only one microfilaria in more than 40% of the preparations, and from the epidemiological bias of working in a region where A. reconditum had not previously been reported. In clinical practice, both microscopic methods depend on the observer's expertise to morphologically identify and classify microfilariae [10]. Additionally, microscopic methods are known to have lower sensitivity for the detection of microfilariae than molecular tools, as demonstrated in the present study, which makes the diagnosis of cases with low parasite burdens, or of dogs exposed to parasiticides, more difficult [29-31]. The fact that the MCT detected mainly dogs with high microfilaremia (Figure 2) could be due to the small amount of blood employed for this test. On the other hand, the Knott's test detected microfilarial concentrations similar to those of the HRM-qPCR (Figure 3), but failed to detect four low-concentration positive samples (Figure 2). Moreover, three samples were positive only by either the MCT or the Knott's test (HRM real-time PCR negative), possibly due to the presence of PCR inhibitors, a low DNA-extraction yield, or misidentification of filarioids. The present study highlights the importance of proper identification of the different filarial species, especially in samples with low concentrations of microfilariae, and emphasizes the value of applying more than one screening technique in epidemiological studies.
Serological tests are the preferred method to diagnose D. immitis infection in clinical practice due to their high sensitivity and simplicity [1]. Moreover, the serological assay detected D. immitis antigenemia in five dogs which were negative by both molecular and microscopic methods. These cases are probably associated with occult infection in amicrofilaremic dogs, as previously described [32]. The negative serological results in microfilaremia-positive dogs may be due to a low female worm burden or previous adulticidal treatment [33].
The agreement in the detection of D. immitis cases among the HRM real-time PCR, the modified Knott's test and the microcapillary test relies on the fact that these three assays detect circulating microfilariae. In contrast, the serological assay did not statistically agree with the molecular and microscopic methods, since it detects circulating antigens that are also present in occult infections [31].
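Agreement between pairs of diagnostic assays of this kind is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The snippet below is an illustration of that statistic on hypothetical positive/negative calls, not the study's actual data or its specific agreement test.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) raters scoring the same samples."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n               # marginal positive rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical positive/negative calls by two assays on ten dogs.
qpcr  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
knott = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
print(round(cohens_kappa(qpcr, knott), 3))   # 0.783
```

Here the two assays disagree on a single low-burden dog, which is enough to pull kappa well below 1 despite 90% raw agreement.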
The majority of the dogs with D. immitis (65%) were co-infected with other vector-borne pathogens such as E. canis, A. platys and B. vogeli. This situation may aggravate the dogs' clinical manifestations and complicate diagnosis and treatment [34,35]. However, in our study co-infection was not found to alter the burden of D. immitis infection, as reflected by the microfilarial concentrations.
The detection of Cercopithifilaria bainae was a surprising finding. This filarioid was first described in Brazil [36] and has since been reported in clinical cases from Italy [37], Romania [38] and Portugal [39], and in ticks from Italy, Spain, Portugal, Greece, Brazil, Australia, Malaysia, South Africa and Pakistan [40]. Our study constitutes the first report of this nematode in Costa Rica and Central America. The intermediate host of this nematode, the tick Rhipicephalus sanguineus [41], was found in a third of the dogs included in this study [16]. Therefore, the screening of additional skin samples from other dogs in Costa Rica could better describe the real prevalence of C. bainae in this country.
The present study molecularly detected D. immitis, A. reconditum and C. bainae in dogs from Costa Rica. The latter two were detected for the first time in Costa Rica and Central America. Among the techniques employed to detect filarioids, the HRM real-time qPCR was the most sensitive and had the advantage of detecting and accurately discriminating the filarial species found in the dog populations, in comparison with the Knott's test, the microcapillary test and a serological assay. Therefore, the implementation of molecular techniques in the diagnosis of canine filarioids in clinical practice should be recommended.
Otranto D, Dantas-Torres F, Brianti E, Traversa D, Petrić D, Genchi C, et al. Vector-borne helminths of dogs and humans in Europe. Parasit Vectors. 2013;6:16.
López J, Valiente-Echeverría F, Carrasco M, Mercado R, Abarca K. Identificación morfológica y molecular de filarias canina en una comunidad semi-rural de la Región Metropolitana de Chile. Rev Chilena Infectol. 2012;29:284–9.
Brown HE, Harrington LC, Kaufman PE, McKay T, Bowman DD, Nelson CT, et al. Key factors influencing canine heartworm, Dirofilaria immitis, in the United States. Parasit Vectors. 2012;5:245.
Scorza AV, Duncan C, Miles L, Lappin MR. Prevalence of selected zoonotic and vector-borne agents in dogs and cats in Costa Rica. Vet Parasitol. 2011;183:178–83.
Beaver PC, Brenes R, Vargas Solano G. Zoonotic filaria in a subcutaneous artery of a child in Costa Rica. Am J Trop Med Hyg. 1984;33:583–5.
Brenes R, Beaver PC, Monge E, Zamora L. Pulmonary dirofilariasis in a Costa Rican man. Am J Trop Med Hyg. 1985;34:1142–3.
Beaver PC, Brenes R, Ardon J. Dirofilaria from the index finger of a man in Costa Rica. Am J Trop Med Hyg. 1986;35:988–90.
Rodríguez B, Arroyo R, Caro L, Orihel TC. Human dirofilariasis in Costa Rica. A report of three new cases of Dirofilaria immitis infection. Parasite. 2002;9:193–5.
Rodríguez B, Ros-Alvarez T, Grant S, Orihel TC. Human dirofilariasis in Costa Rica: Dirofilaria immitis in periorbital tissues. Parasite. 2003;10:87–9.
Magnis J, Lorentz S, Guardone L, Grimm F, Magi M, Naucke TJ, et al. Morphometric analyses of canine blood microfilariae isolated by the Knott's test enables Dirofilaria immitis and D. repens species-specific and Acanthocheilonema (syn. Dipetalonema) genus-specific diagnosis. Parasit Vectors. 2013;6:48.
Hoch H, Strickland K. Canine and feline dirofilariasis: life cycle, pathophysiology, and diagnosis. Compend Contin Educ Vet. 2008;30:133–40.
Rishniw M, Barr SC, Simpson KW, Frongillo MF, Franz M, Dominguez Alpizar JL. Discrimination between six species of canine microfilariae by a single polymerase chain reaction. Vet Parasitol. 2006;135:303–14.
Casiraghi M, Bazzocchi C, Mortarino M, Ottina E, Genchi C. A simple molecular method for discriminating common filarial nematodes of dogs (Canis familiaris). Vet Parasitol. 2006;141:368–72.
Latrofa MS, Dantas-Torres F, Annoscia G, Genchi M, Traversa D, Otranto D. A duplex real-time polymerase chain reaction assay for the detection of and differentiation between Dirofilaria immitis and Dirofilaria repens in dogs and mosquitoes. Vet Parasitol. 2012;185:181–5.
Wongkamchai S, Monkong N, Mahannol P, Taweethavonsawat P, Loymak S, Foongladda S. Rapid detection and identification of Brugia malayi, B. pahangi, and Dirofilaria immitis by high-resolution melting assay. Vector Borne Zoonotic Dis. 2013;13:31–6.
Rojas A, Rojas D, Montenegro V, Gutierrez R, Yasur-Landau D, Baneth G. Vector-borne pathogens in dogs from Costa Rica: First molecular description of Babesia vogeli and Hepatozoon canis infections with a high prevalence of monocytic ehrlichiosis and the manifestations of co-infection. Vet Parasitol. 2014;199:121–8.
Acuña P, Chávez A. Determinación de la prevalencia de Dirofilaria immitis en los distritos de San Martín de Porres, Rímac y Cercado de Lima. Rev Inv Vet Perú. 2002;13:108–10.
Castillo A, Guerrero O. Técnica de concentración para microfilarias (en sangre). In: Castillo A, Guerrero O, editors. Técnicas de diagnóstico parasitológico. San José, Costa Rica: Editorial de Universidad de Costa Rica; 2006. p. 74–5.
Abaxis Inc. VetScan Canine Heartworm Rapid Test. 2015. http://www.abaxis.com/veterinary/products/canine-heartworm-rapid-test.html. Accessed 18 Feb 2015.
Dantas-Torres F, Otranto D. Dirofilariosis in the Americas: a more virulent Dirofilaria immitis? Parasit Vectors. 2013;6:288.
Bowman D, Little SE, Lorentzen L, Shields J, Sullivan MP, Carlin EP. Prevalence and geographic distribution of Dirofilaria immitis, Borrelia burgdorferi, Ehrlichia canis, and Anaplasma phagocytophilum in dogs in the United States: results of a national clinic-based serologic survey. Vet Parasitol. 2009;160:138–48.
Wei L, Kelly P, Ackerson K, Zhang J, El-Mahallawy HS, Kaltenboeck B, et al. First report of Babesia gibsoni in Central America and survey for vector-borne infections in dogs from Nicaragua. Parasit Vectors. 2014;7:126.
Theis JH, Stevens F, Law M. Distribution, prevalence, and relative risk of filariasis in dogs from the State of Washington (1997–1999). J Am Anim Hosp Assoc. 2001;37:339–47.
Reifur L, Thomaz-Soccol V, Montiani-Ferreira F. Epidemiological aspects of filariosis in dogs on the coast of Paraná state, Brazil: with emphasis on Dirofilaria immitis. Vet Parasitol. 2004;122:273–86.
Brianti E, Gaglio G, Napoli E, Giannetto S, Dantas-Torres F, Bain O, et al. New insights into the ecology and biology of Acanthocheilonema reconditum (Grassi, 1889) causing canine subcutaneous filariosis. Parasitology. 2012;139:530–6.
Espírito-Santo MC, Alvarado-Mora MV, Pinto PL, de Brito T, Botelho-Lima L, Heath AR, et al. Detection of Schistosoma mansoni infection by TaqMan® Real-Time PCR in a hamster model. Exp Parasitol. 2014;143:83–9.
Nakagaki K, Yoshida M, Nogami S. Experimental infection of Dirofilaria immitis in raccoon dogs. J Parasitol. 2007;93:432–4.
Taylor AE. Maintenance of filarial worms in vitro. Exp Parasitol. 1960;9:113–20.
Hou H, Shen G, Wu W, Gong P, Liu Q, You J, et al. Prevalence of Dirofilaria immitis infection in dogs from Dandong, China. Vet Parasitol. 2011;183:189–93.
Giangaspero A, Marangi M, Latrofa MS, Martinelli D, Traversa D, Otranto D, et al. Evidences of increasing risk of dirofilarioses in southern Italy. Parasitol Res. 2013;112:1357–61.
McCall JW, Genchi C, Kramer LH, Guerrero J, Venco L. Heartworm disease in animals and humans. Adv Parasitol. 2008;66:193–285.
Rawlings CA, Dawe DL, McCall JW, Keith JC, Prestwood AK. Four types of occult Dirofilaria immitis infection in dogs. J Am Vet Med Assoc. 1982;180:1323–6.
Ionica AM, Matei IA, Mircean V, Dumitrache MO, D'Amico G, Gyorke A, et al. Current surveys on the prevalence and distribution of Dirofilaria spp. and Acanthocheilonema reconditum infections in dogs in Romania. Parasitol Res. 2015;114:975–82.
De Tommasi AS, Otranto D, Dantas-Torres F, Capelli G, Breitschwerdt EB, de Caprariis D. Are vector-borne pathogen co-infections complicating the clinical presentation in dogs? Parasit Vectors. 2013;6:97.
Tabar MD, Altet L, Martínez V, Roura X. Wolbachia, filariae and Leishmania coinfection in dogs from a Mediterranean area. J Small Anim Pract. 2013;54:174–8.
Almeida GL, Vicente JJ. Cercopithifilaria bainae sp. n. parasita de Canis familiaris (L.) (Nematoda Filarioidea). Atas Soc Biol Rio de Janeiro. 1984;24:18.
Otranto D, Brianti E, Dantas-Torres F, Weigl S, Latrofa MS, Gaglio G, et al. Morphological and molecular data on the dermal microfilariae of a species of Cercopithifilaria from a dog in Sicily. Vet Parasitol. 2011;182:221–9.
Ionică AM, D'Amico G, Mitková B, Kalmár Z, Annoscia G, Otranto D, et al. First report of Cercopithifilaria spp. in dogs from Eastern Europe with an overview of their geographic distribution in Europe. Parasitol Res. 2014;113:2761–4.
Cortes HC, Cardoso L, Giannelli A, Latrofa MS, Dantas-Torres F, Otranto D. Diversity of Cercopithifilaria species in dogs from Portugal. Parasit Vectors. 2014;7:261.
Latrofa MS, Dantas-Torres F, Giannelli A, Otranto D. Molecular detection of tick-borne pathogens in Rhipicephalus sanguineus group ticks. Ticks Tick Borne Dis. 2014;5:943–6.
Brianti E, Otranto D, Dantas-Torres F, Weigl S, Latrofa MS, Gaglio G, et al. Rhipicephalus sanguineus (Ixodida, Ixodidae) as intermediate host of a canine neglected filarial species with dermal microfilariae. Vet Parasitol. 2012;183:330–7.
The authors thank Prof. Domenico Otranto for his help in the analysis of Cercopithifilaria bainae samples and Prof. Dennis León for his assistance. The authors thank Abaxis Veterinary Diagnostics for the donation of the serological kits and Bayer Health Care-Animal Health Division for kindly supporting the publication of this manuscript in the framework of the 10th CVBD World Forum symposium.
Departamento de Parasitología, Centro de Investigación en Enfermedades Tropicales, Facultad de Microbiología, Universidad de Costa Rica, P.O. Box 11501–2060, San José, Costa Rica
Alicia Rojas & Diana Rojas
Laboratorio de Parasitología, Escuela de Medicina Veterinaria, Universidad Nacional, P.O. Box 86–3000, Heredia, Costa Rica
Víctor M Montenegro
Koret School of Veterinary Medicine, Hebrew University of Jerusalem, P.O. Box 12, Rehovot, 76100, Israel
Alicia Rojas & Gad Baneth
Correspondence to Gad Baneth.
AR, DR and GB participated in the study design. VM collected the dog samples, performed the MCT and the serological assay. AR and DR performed Knott's test and extracted DNA. AR performed the molecular assays. AR, DR and GB interpreted the results and helped to draft the manuscript. All authors read and approved the final manuscript.
Arthropod-borne helminth detection in dogs from Costa Rica according to diagnostic method and sampling location.
Additional file 2: Figure S1.
Standard curve for the quantification of Dirofilaria immitis by an HRM real-time qPCR. The equation of the curve and R2 are shown in the graph.
Canine filarioids distribution in Costa Rica according to demographic and clinical data. Demographic data of dogs include sampling location, sex and age. Packed cell volume (PCV) values are shown as percentages.
Rojas, A., Rojas, D., Montenegro, V.M. et al. Detection of Dirofilaria immitis and other arthropod-borne filarioids by an HRM real-time qPCR, blood-concentrating techniques and a serological assay in dogs from Costa Rica. Parasites Vectors 8, 170 (2015). https://doi.org/10.1186/s13071-015-0783-8
Dirofilaria immitis
Acanthocheilonema reconditum
Cercopithifilaria bainae
Canine filariosis
Knott's test
10th Symposium on Canine Vector-Borne Diseases
Twin Trees Bros.
To meet the demand of ICPC (International Cacao Plantation Consortium), you have to check whether two given trees are twins or not.
Example of two trees in the three-dimensional space.
The term tree in graph theory means a connected graph in which the number of edges is one less than the number of nodes. ICPC, in addition, gives three-dimensional grid points as the locations of the tree nodes. Their definition of two trees being twins is that there exists a geometric transformation which gives a one-to-one mapping of all the nodes of one tree to the nodes of the other such that for each edge of one tree, there exists an edge in the other tree connecting the corresponding nodes. The geometric transformation should be a combination of the following transformations:
translations, in which coordinate values are added with some constants,
uniform scaling with positive scale factors, in which all three coordinate values are multiplied by the same positive constant, and
rotations of any amounts around either $x$-, $y$-, and $z$-axes.
Note that two trees can be twins in more than one way, that is, with different correspondences of nodes.
Write a program that decides whether two trees are twins or not and outputs the number of different node correspondences.
Hereinafter, transformations will be described in the right-handed $xyz$-coordinate system.
Trees in the sample inputs 1 through 4 are shown in the following figures. The numbers in the figures are the node numbers defined below.
For the sample input 1, each node of the red tree is mapped to the corresponding node of the blue tree by the transformation that translates $(-3, 0, 0)$, rotates $-\pi / 2$ around the $z$-axis, rotates $\pi / 4$ around the $x$-axis, and finally scales by $\sqrt{2}$. By this mapping, nodes #1, #2, and #3 of the red tree at $(0, 0, 0)$, $(1, 0, 0)$, and $(3, 0, 0)$ correspond to nodes #6, #5, and #4 of the blue tree at $(0, 3, 3)$, $(0, 2, 2)$, and $(0, 0, 0)$, respectively. This is the only possible correspondence of the twin trees.
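The mapping described for sample input 1 can be verified numerically. The sketch below builds the standard rotation matrices about the $z$- and $x$-axes and applies the stated transformation order (translate, rotate, then scale); it is an illustration of the transformation classes, not a solution to the problem.

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def apply(m, p):
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

def transform(p, translate, rotations, scale):
    """Translate, then apply each rotation matrix in order, then scale uniformly."""
    q = tuple(p[i] + translate[i] for i in range(3))
    for m in rotations:
        q = apply(m, q)
    return tuple(scale * c for c in q)

# Sample input 1: translate (-3,0,0), rotate -pi/2 about z,
# rotate pi/4 about x, then scale by sqrt(2).
maps = [rot_z(-math.pi / 2), rot_x(math.pi / 4)]
for p in [(0, 0, 0), (1, 0, 0), (3, 0, 0)]:
    print(tuple(round(c, 6) + 0.0 for c in transform(p, (-3, 0, 0), maps, math.sqrt(2))))
# (0.0, 3.0, 3.0)
# (0.0, 2.0, 2.0)
# (0.0, 0.0, 0.0)
```

The three printed points are exactly the blue nodes #6, #5, and #4, confirming the correspondence stated above.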
For the sample input 2, red nodes #1, #2, #3, and #4 can be mapped to blue nodes #6, #5, #7, and #8. Another node correspondence exists that maps nodes #1, #2, #3, and #4 to #6, #5, #8, and #7.
For the sample input 3, the two trees are not twins. There exist transformations that map nodes of one tree to distinct nodes of the other, but the edge connections do not agree.
For the sample input 4, there is no transformation that maps nodes of one tree to those of the other.
The input consists of a single test case of the following format.
$n$
$x_1$ $y_1$ $z_1$
$\vdots$
$x_n$ $y_n$ $z_n$
$u_1$ $v_1$
$\vdots$
$u_{n−1}$ $v_{n−1}$
$x_{n+1}$ $y_{n+1}$ $z_{n+1}$
$\vdots$
$x_{2n}$ $y_{2n}$ $z_{2n}$
$u_n$ $v_n$
$\vdots$
$u_{2n−2}$ $v_{2n−2}$
The input describes two trees. The first line contains an integer $n$ representing the number of nodes of each tree ($3 \leq n \leq 200$). Descriptions of two trees follow.
Description of a tree consists of $n$ lines that give the vertex positions and $n - 1$ lines that show the connection relation of the vertices.
Nodes are numbered $1$ through $n$ for the first tree, and $n + 1$ through $2n$ for the second tree.
The triplet $(x_i, y_i, z_i)$ gives the coordinates of the node numbered $i$. $x_i$, $y_i$, and $z_i$ are integers in the range between $-1000$ and $1000$, inclusive. Nodes of a single tree have distinct coordinates.
The pair of integers $(u_j , v_j )$ means that an edge exists between nodes numbered $u_j$ and $v_j$ ($u_j \ne v_j$). $1 \leq u_j \leq n$ and $1 \leq v_j \leq n$ hold for $1 \leq j \leq n - 1$, and $n + 1 \leq u_j \leq 2n$ and $n + 1 \leq v_j \leq 2n$ hold for $n \leq j \leq 2n - 2$.
Output the number of different node correspondences if two trees are twins. Output a zero, otherwise.
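A full solution must enumerate candidate node correspondences; for any single candidate, though, there is a cheap necessary check. A translation plus uniform positive scaling plus rotations multiplies every pairwise distance by the same positive factor and preserves orientation (no reflections). The sketch below tests exactly those two conditions for one proposed correspondence; it is not a complete solver.

```python
from itertools import combinations
import math

def signed_volume(p0, p1, p2, p3):
    """Signed volume (scalar triple product) of the tetrahedron p0 p1 p2 p3."""
    u = [p1[k] - p0[k] for k in range(3)]
    v = [p2[k] - p0[k] for k in range(3)]
    w = [p3[k] - p0[k] for k in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def could_be_similar(pts_a, pts_b, rel=1e-6):
    """Necessary conditions for pts_b[i] to be the image of pts_a[i] under
    some translation + uniform positive scaling + rotations:
    a single common distance ratio, and a preserved orientation sign."""
    ratio = None
    for i, j in combinations(range(len(pts_a)), 2):
        da = math.dist(pts_a[i], pts_a[j])
        db = math.dist(pts_b[i], pts_b[j])
        if da == 0 or db == 0:
            return False                      # coincident nodes are not expected
        if ratio is None:
            ratio = db / da
        elif abs(db / da - ratio) > rel * ratio:
            return False
    for quad in combinations(range(len(pts_a)), 4):
        va = signed_volume(*(pts_a[k] for k in quad))
        if abs(va) > 1e-9:                    # first non-degenerate quadruple decides
            vb = signed_volume(*(pts_b[k] for k in quad))
            return va * vb > 0                # a reflection would flip the sign
    return True                               # all points coplanar: sign check vacuous

# Sample-input-1 correspondence: red nodes 1,2,3 -> blue nodes 6,5,4.
red  = [(0, 0, 0), (1, 0, 0), (3, 0, 0)]
blue = [(0, 3, 3), (0, 2, 2), (0, 0, 0)]
print(could_be_similar(red, blue))   # True
print(could_be_similar([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                       [(0, 0, 0), (2, 0, 0), (0, 3, 0)]))  # False: two ratios
```

Once constant ratio is established, one non-degenerate quadruple suffices for the orientation test, so the second loop exits early in practice.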
Sample Input 1
Sample Output 1
Source: https://onlinejudge.u-aizu.ac.jp/problems/1403 | CommonCrawl |
Statistical test for two distributions where only 5-number summary is known
I have two distributions for which only the 5-number summary (minimum, 1st quartile, median, 3rd quartile, maximum) and the sample size are known. Contrary to the question here, not all data points are available.
Is there any non-parametric statistical test which allows me to check whether the underlying distributions of the two are different?
distributions nonparametric
bonifaz
Under the null hypothesis that the distributions are the same and both samples are obtained randomly and independently from the common distribution, we can work out the sizes of all $5\times 5$ (deterministic) tests that can be made by comparing one letter value to another. Some of these tests appear to have reasonable power to detect differences in distributions.
The original definition of the $5$-letter summary of any ordered batch of numbers $x_1 \le x_2 \le \cdots \le x_n$ is the following [Tukey EDA 1977]:
For any number $m = (i + (i+1))/2$ in $\{(1+2)/2, (2+3)/2, \ldots, (n-1+n)/2\}$ define $x_m = (x_i + x_{i+1})/2.$
Let $\bar{i} = n+1-i$.
Let $m = (n+1)/2$ and $h = (\lfloor m \rfloor + 1)/2.$
The $5$-letter summary is the set $\{X^{-} = x_1, H^{-}=x_h, M=x_m, H^{+}=x_\bar{h}, X^{+}=x_n\}.$ Its elements are known as the minimum, lower hinge, median, upper hinge, and maximum, respectively.
For example, in the batch of data $(-3, 1, 1, 2, 3, 5, 5, 5, 5, 7, 13, 21)$ we may compute that $n=12$, $m=13/2$, and $h=7/2$, whence
$$\eqalign{ &X^{-} &= -3, \\ &H^{-} &= x_{7/2} = (x_3+x_4)/2 = (1+2)/2 = 3/2, \\ &M &= x_{13/2} = (x_6+x_7)/2 = (5+5)/2 = 5, \\ &H^{+} &= x_{\overline{7/2}} = x_{19/2} = (x_9+x_{10})/2 = (5+7)/2 = 6, \\ &X^{+} &= x_{12} = 21. }$$
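Tukey's definition translates directly into code. The sketch below implements it as stated (1-indexed, averaging neighbors at half-integral positions) and reproduces the worked example, assuming the 12-element batch.

```python
def five_letter_summary(data):
    """Tukey's 5-letter summary: (min, lower hinge, median, upper hinge, max).

    For half-integral positions q, x_q averages the two neighboring order
    statistics, exactly as in the definition above.
    """
    x = sorted(data)
    n = len(x)

    def order_stat(q):              # q is 1-indexed, integral or half-integral
        i = int(q)
        return x[i - 1] if q == i else (x[i - 1] + x[i]) / 2

    m = (n + 1) / 2
    h = (int(m) + 1) / 2            # (floor(m) + 1) / 2
    return (x[0], order_stat(h), order_stat(m), order_stat(n + 1 - h), x[-1])

batch = [-3, 1, 1, 2, 3, 5, 5, 5, 5, 7, 13, 21]   # n = 12, as in the example
print(five_letter_summary(batch))   # (-3, 1.5, 5.0, 6.0, 21)
```

The returned values match the worked computation: $X^{-}=-3$, $H^{-}=3/2$, $M=5$, $H^{+}=6$, $X^{+}=21$.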
The hinges are close to (but usually not exactly the same as) the quartiles. If quartiles are used, note that in general they will be weighted arithmetic means of two of the order statistics and thereby will lie within one of the intervals $[x_i, x_{i+1}]$ where $i$ can be determined from $n$ and the algorithm used to compute the quartiles. In general, when $q$ is in an interval $[i, i+1]$ I will loosely write $x_q$ to refer to some such weighted mean of $x_i$ and $x_{i+1}$.
With two batches of data $(x_i, i=1,\ldots, n)$ and $(y_j, j=1,\ldots,m),$ there are two separate five-letter summaries. We can test the null hypothesis that both are iid random samples of a common distribution $F$ by comparing one of the $x$-letters $x_q$ to one of the $y$-letters $y_r$. For instance, we might compare the upper hinge of $x$ to the lower hinge of $y$ in order to see whether $x$ is significantly less than $y$. This leads to a definite question: how to compute this chance,
$${\Pr}_F(x_q \lt y_r).$$
For fractional $q$ and $r$ this is not possible without knowing $F$. However, because $x_q \le x_{\lceil q \rceil} $ and $y_{\lfloor r \rfloor} \le y_r,$ then a fortiori
$${\Pr}_F(x_q \lt y_r) \le {\Pr}_F(x_{\lceil q \rceil} \lt y_{\lfloor r \rfloor}).$$
We can thereby obtain universal (independent of $F$) upper bounds on the desired probabilities by computing the right hand probability, which compares individual order statistics. The general question in front of us is
What is the chance that the $q^\text{th}$ smallest of $n$ values will be less than the $r^\text{th}$ smallest of $m$ values drawn iid from a common distribution?
Even this does not have a universal answer unless we rule out the possibility that probability is too heavily concentrated on individual values: in other words, we need to assume that ties are not possible. This means $F$ must be a continuous distribution. Although this is an assumption, it is a weak one and it is non-parametric.
The distribution $F$ plays no role in the calculation, because upon re-expressing all values by means of the probability transform $F$, we obtain new batches
$$X^{(F)} = F(x_1) \le F(x_2) \le \cdots \le F(x_n)$$
$$Y^{(F)} = F(y_1) \le F(y_2) \le \cdots \le F(y_m).$$
Moreover, this re-expression is monotonic and increasing: it preserves order and in so doing preserves the event $x_q \lt y_r.$ Because $F$ is continuous, these new batches are drawn from a Uniform$[0,1]$ distribution. Under this distribution--and dropping the now superfluous "$F$" from the notation--we easily find that $x_q$ has a Beta$(q, n+1-q)$ = Beta$(q, \bar{q})$ distribution:
$$\Pr(x_q\le x) = \frac{n!}{(n-q)!(q-1)!}\int_0^x t^{q-1}(1-t)^{n-q}dt.$$
Similarly the distribution of $y_r$ is Beta$(r, m+1-r)$. By performing the double integration over the region $x_q \lt y_r$ we can obtain the desired probability,
$$\Pr(x_q \lt y_r) = \frac{\Gamma (m+1) \Gamma (n+1) \Gamma (q+r)\, _3\tilde{F}_2(q,q-n,q+r;\ q+1,m+q+1;\ 1)}{\Gamma (r) \Gamma (n-q+1)}$$
Because all values $n, m, q, r$ are integral, all the $\Gamma$ values are really just factorials: $\Gamma(k) = (k-1)! = (k-1)(k-2)\cdots(2)(1)$ for integral $k\ge 1.$ The little-known function $_3\tilde{F}_2$ is a regularized hypergeometric function. In this case it can be computed as a rather simple alternating sum of length $n-q+1$, normalized by some factorials:
$$\Gamma(q+1)\Gamma(m+q+1)\ {_3\tilde{F}_2}(q,q-n,q+r;\ q+1,m+q+1;\ 1) \\ =\sum_{i=0}^{n-q}(-1)^i \binom{n-q}{i} \frac{q(q+r)\cdots(q+r+i-1)}{(q+i)(1+m+q)(2+m+q)\cdots(i+m+q)} \\ = 1 - \frac{\binom{n-q}{1}q(q+r)}{(1+q)(1+m+q)} + \frac{\binom{n-q}{2}q(q+r)(1+q+r)}{(2+q)(1+m+q)(2+m+q)} - \cdots.$$
This has reduced the calculation of the probability to nothing more complicated than addition, subtraction, multiplication, and division. The computational effort scales as $O((n-q)^2).$ By exploiting the symmetry
$$\Pr(x_q \lt y_r) = 1 - \Pr(y_r \lt x_q)$$
the new calculation scales as $O((m-r)^2),$ allowing us to pick the easier of the two sums if we wish. This will rarely be necessary, though, because $5$-letter summaries tend to be used only for small batches, rarely exceeding $n, m \approx 300.$
Suppose the two batches have sizes $n=8$ and $m=12$. The relevant order statistics for $x$ and $y$ are $1,3,5,7,8$ and $1,3,6,9,12,$ respectively. Here is a table of the chance that $x_q \lt y_r$ with $q$ indexing the rows and $r$ indexing the columns:
q\r      1        3        6        9        12
1        0.4      0.807    0.9762   0.9987   1.
3        0.0491   0.2962   0.7404   0.9601   0.9993
5        0.0036   0.0521   0.325    0.7492   0.9856
7        0.0001   0.0032   0.0542   0.3065   0.8526
8        0.       0.0004   0.0102   0.1022   0.6
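The entries in such a table can be cross-checked without special functions. Although not the route taken above, an equivalent counting argument gives the same numbers: $x_q \lt y_r$ happens exactly when at least $q$ of the $x$'s fall among the $q+r-1$ smallest of the pooled $n+m$ values, which is a hypergeometric tail probability.

```python
from math import comb

def prob_xq_less_yr(n, m, q, r):
    """P(x_(q) < y_(r)) for iid continuous samples of sizes n and m.

    x_(q) < y_(r) exactly when at least q of the x's lie among the
    q+r-1 smallest of the pooled n+m values: a hypergeometric tail.
    """
    k = q + r - 1
    total = comb(n + m, k)
    return sum(comb(n, j) * comb(m, k - j) for j in range(q, min(k, n) + 1)) / total

# Reproduce entries of the table above (n = 8, m = 12):
print(round(prob_xq_less_yr(8, 12, 1, 1), 4))    # 0.4
print(round(prob_xq_less_yr(8, 12, 3, 1), 4))    # 0.0491
print(round(prob_xq_less_yr(8, 12, 5, 3), 4))    # 0.0521
print(round(prob_xq_less_yr(8, 12, 7, 6), 4))    # 0.0542
print(round(prob_xq_less_yr(8, 12, 8, 12), 4))   # 0.6
```

This form needs only binomial coefficients, so it is convenient for tabulating critical values at other sample sizes.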
A simulation of 10,000 iid sample pairs from a standard Normal distribution gave results close to these.
To construct a one-sided test at size $\alpha,$ such as $\alpha = 5\%,$ to determine whether the $x$ batch is significantly less than the $y$ batch, look for values in this table close to or just under $\alpha$. Good choices are at $(q,r)=(3,1),$ where the chance is $0.0491,$ at $(5,3)$ with a chance of $0.0521$, and at $(7,6)$ with a chance of $0.0542.$ Which one to use depends on your thoughts about the alternative hypothesis. For instance, the $(3,1)$ test compares the lower hinge of $x$ to the smallest value of $y$ and finds a significant difference when that lower hinge is the smaller one. This test is sensitive to an extreme value of $y$; if there is some concern about outlying data, this might be a risky test to choose. On the other hand the test $(7,6)$ compares the upper hinge of $x$ to the median of $y$. This one is very robust to outlying values in the $y$ batch and moderately robust to outliers in $x$. However, it compares middle values of $x$ to middle values of $y$. Although this is probably a good comparison to make, it will not detect differences in the distributions that occur only in either tail.
Being able to compute these critical values analytically helps in selecting a test. Once one (or several) tests are identified, their power to detect changes is probably best evaluated through simulation. The power will depend heavily on how the distributions differ. To get a sense of whether these tests have any power at all, I conducted the $(5,3)$ test with the $y_j$ drawn iid from a Normal$(1,1)$ distribution: that is, its median was shifted by one standard deviation. In a simulation the test was significant $54.4\%$ of the time: that is appreciable power for datasets this small.
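The power simulation just described can be sketched in a few lines. The code assumes, as seems intended, that the $x$ batch is drawn from Normal$(0,1)$ and the $y$ batch from Normal$(1,1)$; the estimate will fluctuate around the quoted $54.4\%$ depending on the seed and replication count.

```python
import random

random.seed(2)

def one_sided_test(xs, ys, q=5, r=3):
    """Reject when the q-th smallest x falls below the r-th smallest y."""
    return sorted(xs)[q - 1] < sorted(ys)[r - 1]

reps = 20000
rejections = sum(
    one_sided_test([random.gauss(0, 1) for _ in range(8)],
                   [random.gauss(1, 1) for _ in range(12)])
    for _ in range(reps)
)
print(rejections / reps)   # roughly 0.54; the size under H0 is 0.0521 per the table
```

Replacing the shifted alternative with two Normal$(0,1)$ batches recovers the nominal size, which is a useful check on the simulation itself.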
Much more can be said, but all of it is routine stuff about conducting two-sided tests, how to assess effect sizes, and so on. The principal point has been demonstrated: given the $5$-letter summaries (and sizes) of two batches of data, it is possible to construct reasonably powerful non-parametric tests to detect differences in their underlying populations, and in many cases we might even have several choices of test to select from. The theory developed here has a broader application to comparing two populations by means of appropriately selected order statistics from their samples (not just those approximating the letter summaries).
These results have other useful applications. For instance, a boxplot is a graphical depiction of a $5$-letter summary. Thus, along with knowledge of the sample size shown by a boxplot, we have available a number of simple tests (based on comparing parts of one box and whisker to another one) to assess the significance of visually apparent differences in those plots.
whuber
I'm pretty confident there isn't going to be one already in the literature, but if you seek a nonparametric test, it would have to be under the assumption of continuity of the underlying variable -- you could look at something like an ECDF-type statistic - say some equivalent to a Kolmogorov-Smirnov-type statistic or something akin to an Anderson-Darling statistic (though of course the distribution of the statistic will be very different in this case).
The distribution for small samples will depend on the precise definitions of the quantiles used in the five number summary.
Consider, for example, the default quartiles and extreme values in R (n=10):
> summary(x)[-4]
Min. 1st Qu. Median 3rd Qu. Max.
-2.33500 -0.26450 0.07787 0.33740 0.94770
compared to those generated by its command for the five number summary:
> fivenum(x)
[1] -2.33458172 -0.34739104 0.07786866 0.38008143 0.94774213
Note that the upper and lower quartiles differ from the corresponding hinges in the fivenum command.
By contrast, at n=9 the two results are identical (when they all occur at observations)
(R comes with nine different definitions for quantiles.)
The case for all three quartiles occurring at observations (when n=4k+1, I believe, possibly under more cases under some definitions of them) might actually be doable algebraically and should be nonparametric, but the general case (across many definitions) may not be so doable, and may not be nonparametric (consider the case where you're averaging observations to produce quantiles in at least one of the samples ... in that case the probabilities of different arrangements of sample quantiles may no longer be unaffected by the distribution of the data).
Once a fixed definition is chosen, simulation would seem to be the way to proceed.
Because it will be nonparametric at a subset of possible values of $n$, the fact that it's no longer distribution free for other values may not be such a big concern; one might say nearly distribution free at intermediate sample sizes, at least if $n$'s are not too small.
Let's look at some cases that should be distribution free, and consider some small sample sizes. Say a KS-type statistic applied directly to the five number summary itself, for sample sizes where the five number summary values will be individual order statistics.
Note that this doesn't really 'emulate' the K-S test exactly, since the jumps in the tail are too large compared to the KS, for example. On the other hand, it's not easy to assert that the jumps at the summary values should be for all the values between them. Different sets of weights/jumps will have different type-I error characteristics and different power characteristics, and I am not sure what is best to choose (choosing slightly different from equal values could help get a finer set of significance levels, though). My purpose, then, is simply to show that the general approach may be feasible, not to recommend any specific procedure. An arbitrary set of weights to each value in the summary will still give a nonparametric test, as long as they're not taken with reference to the data.
Anyway, here goes:
Finding the null distribution/critical values via simulation
At n=5 and 5 in the two samples, we needn't do anything special - that's a straight KS test.
At n=9 and 9, we can do uniform simulation:
ks9.9 <- replicate(10000,ks.test(fivenum(runif(9)),fivenum(runif(9)))$statistic)
plot(table(ks9.9)/10000,type="h"); abline(h=0,col=8)
# Here's the empirical cdf:
cumsum(table(ks9.9)/10000)
## (cdf evaluated at the possible D values 0.2, 0.4, 0.6, 0.8; the numeric probabilities did not survive here)
so at $n_1 = n_2=9$, you can get roughly $\alpha=0.1$ ($D_{crit}=0.6$), and roughly $\alpha=0.005$ ($D_{crit}=0.8$). (We shouldn't expect nice alpha steps. When the $n$'s are moderately large we should expect not to have anything but very big or very tiny choices for $\alpha$).
$n_1 = 9, n_2=13$ has a nice near-5% significance level ($D=0.6$)
$n_1 = n_2=13$ has a nice near-2.5% significance level ($D=0.6$)
At sample sizes near these, this approach should be feasible, but if both $n$s are much above 21 ($\alpha \approx 0.2$ and $\alpha\approx 0.001$), this won't work well at all.
A very fast 'by inspection' test
We see a rejection rule of $D\geq 0.6$ coming up often in the cases we looked at. What sample arrangements lead to that? I think the following two cases:
(i) When the whole of one sample is on one side of the other group's median.
(ii) When the boxes (the range covered by the quartiles) don't overlap.
So there's a nice super-simple nonparametric rejection rule for you -- but it usually won't be at a 'nice' significance level unless the sample sizes are reasonably close to the 9-13 range.
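As a rough cross-check of that rule (a Python sketch rather than R, purely for illustration; it assumes Tukey hinges at n = 9, a uniform null, and the claimed correspondence with D ≥ 0.6, so treat the simulated rate as only loosely comparable to the quoted levels):

```python
import numpy as np

rng = np.random.default_rng(0)

def fivenum9(x):
    # For n = 9, Tukey's five-number summary falls on the order statistics
    # 1, 3, 5, 7 and 9 (0-based indices 0, 2, 4, 6, 8).
    s = np.sort(x)
    return s[[0, 2, 4, 6, 8]]

def reject(x, y):
    # (i) the whole of one sample on one side of the other sample's median,
    # or (ii) the boxes (ranges covered by the hinges) do not overlap.
    fx, fy = fivenum9(x), fivenum9(y)
    one_sided = (x.min() > np.median(y)) or (x.max() < np.median(y)) \
        or (y.min() > np.median(x)) or (y.max() < np.median(x))
    boxes_apart = (fx[1] > fy[3]) or (fy[1] > fx[3])
    return bool(one_sided or boxes_apart)

n_sim = 20000
hits = sum(reject(rng.uniform(size=9), rng.uniform(size=9)) for _ in range(n_sim))
alpha_hat = hits / n_sim  # should be loosely comparable to the alpha levels above
print(alpha_hat)
```

The rule itself requires no distributional input, which is the point: only the arrangement of the two samples matters under the null.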
Getting a finer set of possible $\alpha$ levels
Anyway, producing tables for similar cases should be relatively straightforward. At medium to large $n$, this test will only have very small (or very large) possible $\alpha$ levels and won't be of practical use except for cases where the difference is obvious.
Interestingly, one approach to increasing the achievable $\alpha$ levels would be to set the jumps in the 'fivenum' cdf according to a Golomb-ruler. If the cdf values were $0,\frac{1}{11},\frac{4}{11},\frac{9}{11}$ and $1$, for example, then the difference between any pair of cdf-values would be different from any other pair. It might be worth seeing if that has much effect on power (my guess: probably not a lot).
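The Golomb-ruler property of those particular cdf values is easy to verify by brute force (again in Python rather than R, just as a quick check):

```python
# Check that the cdf values 0, 1/11, 4/11, 9/11, 1 form a Golomb-ruler-style
# set: every pair of marks has a distinct difference, so every |F1 - F2| gap
# the five-number cdf comparison can produce is distinct too.
from fractions import Fraction
from itertools import combinations

marks = [Fraction(k, 11) for k in (0, 1, 4, 9, 11)]
diffs = [b - a for a, b in combinations(marks, 2)]
print(len(diffs) == len(set(diffs)))  # True: all 10 pairwise differences distinct
```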
Compared to these K-S like tests, I'd expect something more like an Anderson-Darling to be more powerful, but the question is how to weight for this five-number summary case. I imagine that can be tackled, but I'm not sure the extent to which it's worth it.
Let's see how it goes on picking up a difference at $n_1=9,n_2=13$. This is a power curve for normal data, and the effect, del, is in number of standard deviations the second sample is shifted up:
This seems like quite a plausible power curve. So it seems to work okay at least at these small sample sizes.
What about robust, rather than nonparametric?
If nonparametric tests aren't so crucial, and robust tests are okay instead, we could look at some more direct comparison of the three quartile values in the summary, such as an interval for the median based on the IQR and the sample size (derived under some nominal distribution around which robustness is desired, such as the normal -- this is the reasoning behind notched box plots, for example). This should tend to work much better at large sample sizes than the nonparametric test, which will suffer from a lack of appropriate significance levels.
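As a sketch of what such a robust comparison could look like, here is the notch-style median interval used by R's `boxplot.stats`, median ± 1.58·IQR/√n (the constant comes from a normal-theory calibration); the quartile values and sample sizes below are made up for illustration:

```python
import math

def median_notch(q1, med, q3, n):
    # Notched-box-plot interval: median +/- 1.58 * IQR / sqrt(n).
    half = 1.58 * (q3 - q1) / math.sqrt(n)
    return med - half, med + half

# Two hypothetical samples of size 50 with these quartiles:
lo1, hi1 = median_notch(q1=-0.7, med=0.0, q3=0.7, n=50)
lo2, hi2 = median_notch(q1=0.3, med=1.0, q3=1.7, n=50)
overlap = (lo1 <= hi2) and (lo2 <= hi1)
print(round(hi1, 3), overlap)  # 0.313 False
```

Non-overlapping notches suggest (roughly) a significant difference in medians, which is exactly the kind of comparison the five-number summary supports directly.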
– Glen_b
$\begingroup$ Very nice! I wonder off-hand if given the summary statistics you could actually calculate the maximum or minimum possible D statistic for the KS test. For example, you can draw the CDFs based on the summary statistics, and then there will be p-box windows for each sample CDF. Based on those two p-box windows you could calculate the maximum or minimum possible D statistic - and then look up the test statistic in usual tables. $\endgroup$ – Andy W Feb 18 '14 at 18:22
I don't see how there could be such a test, at least without some assumptions.
You can have two different distributions that have the same 5 number summary:
Here is a trivial example, where I change only 2 numbers, but clearly more numbers could be changed
set.seed(123)
#Create data
x <- rnorm(1000)
#Modify it without changing 5 number summary
x2 <- sort(x)
x2[100] <- x2[100] - 1  # move the 100th order statistic down; it stays between the min and the lower hinge
x2[900] <- x2[900] + 1  # move the 900th up; it stays between the upper hinge and the max
fivenum(x)
fivenum(x2)
– Peter Flom
$\begingroup$ This example only demonstrates a limitation in the power of such a procedure, but otherwise does not seem to shed much light on it. $\endgroup$ – whuber♦ Feb 17 '14 at 23:22
$\begingroup$ I think it means that, without some assumptions, the power of such a test would be inestimable. What could such a test look like? $\endgroup$ – Peter Flom Feb 17 '14 at 23:34
$\begingroup$ Power calculations will always require assumptions, even with nonparametric tests. Try finding a power curve for a Kolmogorov-Smirnov without more assumptions than you need for carrying out the test itself. $\endgroup$ – Glen_b Feb 18 '14 at 2:35
$\begingroup$ There is a small finite number of tests that can be considered: they compare the values in one summary to those in another. One of them would be (for example) a comparison of the upper hinge of one dataset to the lower hinge of another. For sufficiently large sample sizes, this would indicate a significant difference in one population compared to another. It is related to the joint probability that $X\gt Y$ for independent random variables $X$ and $Y$. Although you don't get much control over the significance level, these tests can be reasonably powerful against a large set of alternatives. $\endgroup$ – whuber♦ Feb 18 '14 at 8:15
$\begingroup$ @whuber Without any measure of the error or accuracy of the measurements? Or is that supplied by sample size? The quantiles, and even more the max and min, are hard to work with in this way. $\endgroup$ – Peter Flom Feb 18 '14 at 11:27
The king of Ebonchester has just returned from his recent conquests of Baronshire and with him he brought four cart loads of plunder. However Baronshire is known for its lacklustre fiscal regulation, so the king would like to test the authenticity of every coin. The problem is his old accountant died at his side in battle (a good king never goes anywhere without his accountant). Thus he wants to recruit the most meticulous counters in the land.
Having placed posters on every street corner offering the handsomely paid job, he was soon inundated with applications. Each applicant had to prove their worth by determining the forged coin from a stack of 12 (which could either be lighter or heavier) in just 3 weighings.
Since the great infrastructure drive of a few years prior, everyone in the kingdom has internet access, and it turns out that everyone and their mother had rushed to Stack Exchange in preparation for their interview and every single one of them knew the answer.
In the first test once the scale has fallen to the left or right during a weighing, it must fall the same way, or balance equally, in all future weighings.
As per test 2 but now a coin increases or decreases (mod 12), when light or heavy respectively, by the current number on the coin itself. For example if coin 7 was light, after the next weighing it would have its label switched with coin 2, and if it were heavy it would switch with 12.
Question: What is the minimum number of weighings required (if possible) to be certain of which coin is the counterfeit and whether it is light or heavy in each case?
The relabelings double the number of a light coin, and set the number of a heavy coin to 12. In particular, because 12 is divisible by 4, the odd coin must have an even number after one weighing and must be divisible by 4 after two.
Therefore, after two weighings, there are only four possible situations due to relabeling: either 4 is light, 8 is light, 12 is light, or 12 is heavy. Distinguishing them in one weighing is not possible, but we can do it with two weighings.
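The relabelling dynamics behind this claim are easy to verify by brute force (a small Python check of the statement above):

```python
def relabel_light(c):
    # A light coin's label doubles mod 12, writing 0 as 12.
    return (2 * c - 1) % 12 + 1

# After two weighings every possible light coin lands on 4, 8 or 12
# (a heavy coin's label is always set straight to 12).
after_two = sorted({relabel_light(relabel_light(c)) for c in range(1, 13)})
print(after_two)  # [4, 8, 12]
```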
First weighing: 1,4,7,10 against 2,5,8,11. Second weighing: 3,9 against 6,12.
If the first weighing balances, then the odd coin is now labeled either 6 or 12. If these two together are lighter than 3 and 9 (which must be two regular coins after relabeling), the odd coin is lighter, otherwise it is heavier. After the second weighing, the odd coin is labeled 12 and we are done.
If the first weighing does not balance, suppose 1,4,7,10 was lighter. Then either the odd coin was light and is now labeled 2 or 8, or the odd coin was heavy and is now labeled 12. If the second weighing balances, the odd coin is light. After the second weighing, it is now labeled 4. Otherwise, the odd coin is heavy and is labeled 12.
The case where 2,5,8,11 is lighter is similar: if the second weighing balances, the odd coin is light and is now labeled 8; otherwise, it is heavy and labeled 12.
You can't distinguish 24 possibilities of (forged coin, heavier or lighter) with only 15 possible weighing results.
If you know which coin was originally fake, you can also determine its current label if required.
Again, you can't distinguish the 24 possibilities with only 2 weighings, as those can only give 9 distinct results.
Read his answer to know why this works.
The result is either Right Heavy, Left Heavy, or Balanced.
Right heavy: There are three possibilities now. Either 1 is light, 2 is light, or 4 is heavy. Weigh 1 against 7, then 2 against 7 to find out which, being careful to put 7 on the right.
Balance: There are three possibilities now. Either 3 is light, 5 is heavy or 6 is heavy. Weigh 7 against 5, then 7 against 6, to find out which, being careful to put 7 on the left.
Left heavy: this is symmetric to the right heavy case.
Right heavy: 7,8 vs 1,2.
If the scale tips right, then one of 7 or 8 is light. Weigh 7 vs 1 to check which.
If the scale balances, then one of 9 or 10 is heavy. Weigh 1 vs 9 to check which.
Balance: The counterfeit coin is either 11 or 12. Weigh 1 vs 11, then 1 vs 12 to find out.
Here's why this many weighings are necessary. After three weighings, there are only 15 possible results. To see this: every result looks like UUU, UUB, UBU, UBB, BUU, BUB, BBU, or BBB, where B = balanced, U = unbalanced. For the first seven of these, U can be either Left or Right, leading to $2\times 7+1=15$ possible results. Since there are 24 > 15 possibilities to distinguish between, three weighings is insufficient. | CommonCrawl |
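That outcome count can be confirmed by enumeration; the constraint that an unbalanced result must always tip the same way is encoded here as "at most one distinct non-B symbol per sequence":

```python
from itertools import product

def consistent(seq):
    # Every non-balanced result in the sequence must tip the same way.
    tips = {s for s in seq if s != "B"}
    return len(tips) <= 1

count = sum(consistent(seq) for seq in product("LRB", repeat=3))
print(count)  # 15, fewer than the 24 (coin, light/heavy) possibilities
```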
CoCoA
CoCoA (Computations in Commutative Algebra)[6] is a free computer algebra system developed by the University of Genova, Italy, used to compute with numbers and polynomials. The CoCoA Library (CoCoALib[7]) is available under the GNU General Public License. CoCoA has been ported to many operating systems, including Macintosh on PPC and x86, Linux on x86, Unix on x86-64 and PPC, Solaris on SPARC, and Windows on x86. CoCoA is mainly used by researchers (see citations at [8] and [9]), but can be useful even for "simple" computations.
CoCoA
• Original author(s): Abbott, J., Bigatti, A. M. and Robbiano, L.[1]
• Initial release: 1988[2]
• Stable release: 5.4.0 / 11 April 2022[3]
• Preview release: 5.4.1j / 21 February 2023[3]
• Written in: C++
• Operating system: Windows, Linux/Unix, macOS
• Type: Computer algebra system
• License: GNU GPL
• Website: cocoa.dima.unige.it

CoCoALib
• Original author(s): Abbott, J. and Bigatti, A. M.[1]
• Initial release: 9 March 2007[4]
• Stable release: 0.99800 / 28 April 2022[5]
• Preview release: 0.99718 / 14 February 2022[5]
• Written in: C++
• Type: Library
• License: GNU GPL
• Website: cocoa.dima.unige.it
CoCoA's features include:
• Very big integers and rational numbers using the GNU Multi-Precision Library
• Multivariate Polynomials
• Gröbner basis
• User interfaces: text; Emacs-based; Qt-based
It is able to perform simple and sophisticated operations on multivariate polynomials and on various data related to them (ideals, modules, matrices, rational functions). For example, it can readily compute Gröbner bases, syzygies and minimal free resolutions, intersections, divisions, the radical of an ideal, the ideal of zero-dimensional schemes, Poincaré series and Hilbert functions, factorization of polynomials, and toric ideals. The capabilities of CoCoA and the flexibility of its use are further enhanced by the dedicated high-level programming language.
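CoCoA's own language is not shown in this article; as a small illustration of the kind of computation meant by "Gröbner bases", here is an analogous (and much smaller-scale) calculation using Python's SymPy as a stand-in, not CoCoA itself:

```python
# Gröbner basis of the ideal <x*y - 1, y**2 - x> under lex order with x > y.
# From y**2 = x, the generator x*y - 1 reduces to y**3 - 1, giving the
# reduced basis {x - y**2, y**3 - 1}.
from sympy import groebner, symbols

x, y = symbols("x y")
gb = groebner([x*y - 1, y**2 - x], x, y, order="lex")
print(gb.exprs)  # [x - y**2, y**3 - 1]
```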
Its mathematical core, CoCoALib, has been designed as an open source C++ library, focussing on ease of use and flexibility.
CoCoALib is based on GNU Multi-Precision Library.
CoCoALib is used by ApCoCoA[10] and NmzIntegrate.[11]
See also
• List of computer algebra systems
• Standard Template Library
References
1. "Citing CoCoA". cocoa.dima.unige.it. Retrieved 2022-07-06.
2. "CoCoA History". cocoa.dima.unige.it. Archived from the original on 1998-01-09. Retrieved 2023-03-09.
3. "CoCoA 5 Release Notes". cocoa.dima.unige.it. Retrieved 2022-07-06.
4. "CoCoALib Beta Archive". cocoa.dima.unige.it. Archived from the original on 2007-04-26. Retrieved 2023-03-09.
5. "CoCoALib". cocoa.dima.unige.it. Retrieved 2022-07-06.
6. "CoCoA website".
7. "CoCoALib home page".
8. "CoCoA - Mathematical software - swMATH".
9. "CoCoA in GoogleScholar".
10. "ApCoCoA website".
11. "Normaliz website". Archived from the original on 2015-12-08. Retrieved 2014-05-22.
External links
• Official website
• ApCoCoA, an extension of CoCoA
Atlantic Economic Journal
Synergies in Labour Market Institutions: the Nonlinear Effect of Minimum Wages on Youth Employment
Adam Brzezinski
Best Undergraduate Paper
Empirical evidence on the effect of minimum wages on youth employment is inconclusive, with studies pointing to negative, positive or insignificant effects. In trying to explain some of the conflicting evidence, this research paper examines synergies of minimum wages with other labour market institutions using an unbalanced panel dataset of 19 OECD countries over 1985–2013. Institutions that enforce labour market rigidity, such as unemployment benefits and union density, are found to exacerbate the negative effect of minimum wages on youth employment, while government expenditure on training programmes for the unemployed dampen it. This finding of significant synergy effects indicates that panel data models which omit interactive terms between minimum wages and institutions might be misspecified. In addition, the analysis suggests that the negative effect of minimum wages is most severe in rigid labour markets with high unemployment benefits and union density. Therefore, policymakers need to consider the full spectrum of institutions they face before adjusting minimum wages.
Keywords: Minimum wages · Institutions · Policy complementarities · Youth employment
The online version of this article (doi: 10.1007/s11293-017-9537-7) contains supplementary material, which is available to authorized users.
JEL codes: J20, J38, J48, J58, J65
Minimum wages have seen a remarkable return to the spotlight in 2015/2016, with the introduction of minimum wages in Germany and announcements of substantial minimum wage hikes in the UK and Japan. In this context, two questions become ever more relevant to policymakers: what are the benefits of minimum wages, and what are their costs? The first question has produced a plethora of papers that agree that minimum wages decrease inequality but disagree on the size of this effect (e.g. Neumark and Wascher 2006; Manning 2003).
In contrast, the discussion of minimum wage costs yields strong disagreement. Empirical papers point to negative, positive or insignificant minimum wage effects on employment. The following analysis aims to tackle this question, where the focus is youth employment. The reason for the restriction to the younger age class is twofold. Firstly, the soaring youth unemployment in post-crisis Europe has become a pressing issue to policymakers. Secondly, minimum wages should have a more pronounced impact on young workers, since they are more likely to work in low-skill sectors (Manning 2003).
The present paper employs a fixed-effects panel data model covering 19 Organisation for Economic Cooperation and Development (OECD) member countries in order to estimate a nonlinear relationship between minimum wages and youth employment. In particular, it will be shown that the minimum wage effect on youth employment varies with labour market institutions. The motivation for considering nonlinearities follows Coe and Snower's (1997) hypothesis that institutions form synergies in their interactions.
In the context of minimum wages, this hypothesis has only been thoroughly examined by Neumark and Wascher (2004). However, Neumark and Wascher's paper suffers from the low quality of institutional variables. The present research paper employs better institutional and control variables as well as a larger number of observations. In doing so, significant negative synergies are estimated between minimum wages and institutions that enforce labour market rigidity such as unemployment benefits and union density, while a positive synergy is found with active labour market policies that constitute training for the unemployed.
Two key conclusions follow. Firstly, many minimum wage panel data models are misspecified as they omit the interactive term with labour market institutions, which might account for some of the discrepancy in empirical evidence. Secondly, policymakers need to consider the institutional setting faced before adjusting minimum wages. This paper suggests that minimum wage raises in rigid markets will depress youth employment particularly strongly.
Theoretical Background and Empirical Evidence
Basic neoclassical theory suggests that, in a competitive labour market, minimum wages (MWs) above the efficient level will depress employment by decreasing labour demand. This effect will be more detrimental in low-skill sectors where MWs are binding. Given that young workers tend to work in such sectors, MWs lead to higher youth unemployment following this theory (Neumark and Wascher 2008).
However, more recent models dispute the validity of the neoclassical prediction. For instance, job search costs can engender monopsonistic markets in which firms can set wages below the competitive level without losing workers (Burdett and Mortensen 1998). In such a setting MWs can increase employment by limiting employers' bargaining power (Manning 2003).
The implicit theoretical foundation of the following analysis diverges from both these theories. Firstly, it builds upon the distinction between supply and demand side effects as proposed by Brown et al. (2014). The authors argue that MWs raise labour supply by increasing the return on working, but depress labour demand through elevating costs faced by firms. Hence, the overall effect is ambiguous, but can potentially be investigated in greater detail by examining labour market institutions, broadly defined as the policies and conventions that determine the costs, flexibility and incentives of employment (Betcherman 2012). Coe and Snower (1997) argue that MWs can create synergy effects with other labour market institutions with respect to employment. In particular, the authors argue that institutions that make labour markets rigid, such as unemployment benefits, can aggravate the negative effect of MWs. This theory of policy complementarities or synergies will be empirically tested.
Review of Empirical Evidence
Empirical research with regards to the minimum wage effect on youth employment is rich but inconclusive. Earlier papers tend to confirm the neoclassical prediction of negative effects. For instance, Brown et al. (1982) find that a 10% increase in MWs decreases U.S. teenage employment by 1–3% in a time series study, which is confirmed in a later study by Neumark and Wascher (1992) for 16–24 year-olds for a U.S. panel dataset.
This consensus view was first challenged by Card and Krueger (1994). Using the 1992 New Jersey MW increase as a natural experiment, the authors found significant employment growth for restaurant employees in New Jersey compared to a control group in Pennsylvania. While this paper was criticised inter alia for the control group used (Neumark and Wascher 2006), it heralded the emergence of more papers questioning the former consensus. For instance, Dickens et al. (1998) rejected the null-hypothesis that MWs in the UK depress employment, while Christl et al. (2017) found positive employment effects for small MW increases.
Following Coe and Snower's (1997) theory, the present paper argues that some of the conflicting evidence can be resolved by analysing policy complementarities. The first attempt to do so was conducted by Neumark and Wascher (2004), who found negative synergies between MWs and restrictive labour market policies in a panel analysis of 17 OECD countries. However, when analysing different groups of countries directly, the authors found that MWs have the most detrimental effect in liberal countries, which partially contradicts their former results. Another limitation is the reliance on time-invariant institutional variables. Despite the many questions that remain open after Neumark and Wascher's analysis, few papers aimed to clarify the results and any synergies were at best included as an aside, if at all (Dolton and Bondibene 2011; Christl et al. 2017; Addison and Ozturk 2010). Hence, it is the aim of this research paper to dissolve some of the controversy discussed above by explicitly investigating potential synergy effects.
Data and Methodology
The analysis is based on an unbalanced panel dataset of 19 OECD countries spanning the time frame 1985–2013, drawn from the OECD Labour Force Statistics and Minimum Wage Databases.1 The dependent variable is the youth employment rate, defined as the ratio of the working population to the total population of 15–24 year olds.2
The minimum wage variable is defined as the ratio of the minimum to median wage, dubbed the Kaitz index (KI). The KI can be interpreted as the relative price of unskilled to skilled labour. The median rather than mean wage is used in the denominator such that the KI does not change with the distribution of income.
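For concreteness, the Kaitz index is just the ratio of the statutory minimum to the median wage; a minimal illustration with made-up wage data:

```python
# The Kaitz index: minimum wage divided by the median wage.  Using the
# median (rather than the mean) keeps the index insensitive to changes in
# the shape of the upper tail of the wage distribution, as noted above.
import statistics

wages = [9.5, 11.0, 12.5, 14.0, 18.0, 25.0, 60.0]  # illustrative hourly wages
minimum_wage = 9.0                                  # illustrative statutory minimum
kaitz = minimum_wage / statistics.median(wages)
print(round(kaitz, 3))  # 0.643
```

Note that doubling the top wage in this example would leave the index unchanged, while it would lower a mean-based version.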
The core of the analysis rests on the following four institutional variables, which are used to test Coe and Snower's (1997) hypothesis of institutional complementarities:
The Employment Protection Index measures the costs of hiring and firing. It is constructed by the OECD based on eight items including severance pay and length of notice before firing. The indicator takes values between zero and six and is higher with stricter employment protection.
Union density is defined as the proportion of wage and salary earners covered by a union. This serves to approximate the bargaining power of incumbent workers (insiders).
Active labour market policies are designed to increase the competitiveness of unemployed workers and are measured as government expenditure as a proportion of GDP.3
Unemployment benefit generosity is estimated by the OECD and defined as benefits received by an average production worker as a percentage of previous earnings.
Three control variables are used in all specifications. Firstly, the unemployment rate of 25–64 year-olds, harmonised by the OECD for better international comparability, is used as a labour demand control. Secondly, the number of 15–24 year olds as a proportion of the total working-age population serves as a supply-side control. Finally, purchasing power parity (PPP)-adjusted logged gross domestic product (GDP) per capita measures average worker productivity.
The model specification is as follows:
$$ emp_{it}=\beta\, mw_{it-1}+\mathbf{I}_{it}^{\mathrm{T}}\psi + mw_{it-1}\cdot \mathbf{I}_{it}^{\mathrm{T}}\Phi +\mathbf{X}_{it}^{\mathrm{T}}\Theta +\tau_t+\partial_i+\varepsilon_{it} $$
where $emp_{it}$ is the youth employment rate; $mw_{it-1}$ is the lagged minimum wage variable; $\mathbf{I}_{it}^{\mathrm{T}}$ is the transposed vector of institutional variables and $\psi$ is the corresponding coefficient vector; $\mathbf{X}_{it}^{\mathrm{T}}$ are control variables; $\tau_t$ are year dummies; and $\partial_i$ are country fixed effects. The interactive coefficients between MWs and the institutions contained in the coefficient vector $\Phi$ are at the core of the analysis, as they capture Coe and Snower's (1997) hypothesis of synergies.
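Under this specification the marginal effect of minimum wages on youth employment is $\beta + \mathbf{I}^{\mathrm{T}}\Phi$, i.e. it shifts with the standardized institutions. A sketch with purely illustrative coefficients (placeholders, not the estimates in Table 1, though the signs mirror the synergies the paper finds):

```python
# Marginal effect implied by the model: d(emp)/d(mw) = beta + Phi . I,
# where I holds the standardized institutional variables.  All numbers
# below are illustrative placeholders, not estimated coefficients.
def mw_marginal_effect(beta, phi, institutions):
    return beta + sum(p * i for p, i in zip(phi, institutions))

beta = -0.3              # baseline MW coefficient (illustrative)
phi = [-0.1, -0.1, 0.1]  # interactions: benefits, union density, ALMPs
rigid = [1.5, 1.5, 0.0]  # high benefits and unionisation, average ALMPs
fluid = [-1.0, -1.0, 0.0]  # low benefits and unionisation, average ALMPs

print(round(mw_marginal_effect(beta, phi, rigid), 4))  # -0.6
print(round(mw_marginal_effect(beta, phi, fluid), 4))  # -0.1
```

The point of the sketch is only that the same minimum wage change can have very different employment effects depending on the institutional vector it interacts with.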
MWs enter the regression in lagged form, as this is a way to break the potential two-way causality between MWs and employment: Governments might alter MWs in response to changes in employment levels. For similar reasons, unemployment benefits are lagged. In contrast, the remaining variables are included contemporaneously as they are unlikely to be endogenous to youth employment.4
Note that while MWs are not exogenous to employment levels, they are likely to be conditionally exogenous. First, the inclusion of labour supply and demand variables already controls for how governments adjust MWs after employment shocks. Fixed effects further control for time-invariant characteristics that determine how different countries set MWs, which can include legislative and cultural reasons. This should strip out most non-random factors from the error term that simultaneously influence employment rates and MWs.
For these reasons, while an instrumental variable (IV) approach would certainly lend further credibility to the results, it does not seem to be necessary. Moreover, a convincing instrument has yet to be found. The most notable attempt was made by Dolton and Bondibene (2011) who instrument MWs with an index capturing how "left" a government is. However, the political orientation of governments is determined by voter preferences that are related to employment, which makes this method share the bias that it is trying to counteract.
Finally, note that country-clustered robust standard errors are employed, which control for unknown forms of serial correlation and heteroscedasticity (Wooldridge 2010). Failing to account for these factors can lead to downward-biased standard errors and the over-rejection of the null-hypotheses (Angrist and Pischke 2009).
Main Results and Cross-Country Analysis
Regression Results
Regression results are summarised in Table 1, whereby institutional variables were standardised by subtracting the mean and dividing by the standard deviation (s.d.) for greater comparability. The results indicate that MWs generally depress youth employment. At average institutional characteristics, the elasticity of the youth employment rate with respect to MWs is −0.24 (column 2), similar to Neumark and Wascher's (2004) estimate of −0.16. More importantly, however, there is strong evidence for synergies in labour market institutions as three of the four interactive variables produce significant results. Since this forms the core hypothesis of the paper, each interaction is discussed in turn below.
[Table 1, flattened in extraction; only some cells survive. Column heads include a 'no interactions' specification. Recoverable rows: MWs, lagged (-0.288***, -0.322**); institutional variables: employment protection (0.176**), union density (0.146***), ALMPs; interaction with MWs (-0.120*); elasticity of employment w.r.t. MWs (at average characteristics); number of countries.]
Data source: OECD Labour Force Statistics and Minimum Wage Databases; Nickell (2006); Gwartney et al. (2014); own calculations. The data spans the time period 1985–2013; for more details refer to online supplemental Appendix Table 1.
Control variables and year dummies are included in the regression but omitted in the table for simplicity. Robust standard errors clustered by country.
*** p < 0.01, ** p < 0.05, * p < 0.1
First, looking at the initial model specification (column 1), the interaction of MWs with employment protection is insignificant at 5%. This is not surprising following Bentolila and Bertola's (1990) model of hiring and firing costs, which stresses that employment protection has two opposing effects. On the one hand, a firm's inability to ascertain the productivity of prospective workers becomes more costly under high firing costs, to which firms respond with decreased hiring. However, it simply becomes harder to fire workers under high protection. Here it seems that these effects offset each other, rendering the interaction insignificant. Therefore this interaction will be omitted, yielding the preferred model in column 2.
Second, unemployment benefits produce significant negative synergies with MWs. Note that high benefits decrease the opportunity cost of unemployment, depressing the labour supply. Hence, in a high unemployment benefit regime, MWs fail to motivate workers to seek labour while increasing labour costs faced by firms.
Third, high union density also leads to a significantly aggravated minimum wage effect. Note that unions advocate for the protection of the workplace of insiders at the expense of job opportunities for outsiders (Kawaguchi and Murao 2014). Hence, when MWs are raised the jobs of non-unionised workers are disproportionately at risk. Now, young workers are less likely to be represented by unions, both because they have little knowledge of how unions work and because they are hardly targeted by unions (Keune 2015; Checchi and Nunziata 2011). Hence, MW effects on youth employment are worse under high union density.
In contrast, MWs interacting with active labour market policies (ALMPs) generate positive synergies. Recall that ALMPs primarily consist of training programmes for the unemployed. Hence, higher government expenditure on ALMPs should lead to a more productive and more motivated pool of job-seekers, providing employers with more suitable job applicants and decreasing labour market frictions. Furthermore, Scarpetta (1996) argues that ALMPs draw discouraged workers back into the labour force. Hence, under high ALMPs a raise in MWs increases the incentive to seek jobs while not substantially burdening employers who face more productive and better matching job applicants, leading to a positive synergy.
These results illustrate the importance of accounting for synergies between MWs and institutions. Hence, panel data models which omit interactive terms are misspecified and can lead to biased MW estimates. In the present discussion, omitting interactive terms (last column) leads to an estimated elasticity of youth employment to MWs of −0.36, much larger than that of the preferred model at average characteristics.
It also follows that policymakers cannot view MWs in isolation from other institutions. In particular, the results imply that unionisation and unemployment benefits should be viewed as policy substitutes to MWs, as they aggravate the negative demand side effect while depressing the positive supply side effect of MWs. The implication of these complementarities across different countries will be discussed in greater detail.
Cross-Country Analysis
The analysis thus far suggests that the ideal setting for MWs consists of low unemployment benefits, low union density and high ALMPs. However, a country with these characteristics does not exist in practice. This is evident in Fig. 1, which plots countries according to their mean expenditure on ALMPs (y-axis) and a labour market fluidity index consisting of the mean union density and unemployment benefits (x-axis).⁵ A high value of this index indicates a fluid labour market, i.e. low unemployment benefits and union density. Two similarly-sized camps of countries can be distinguished in Fig. 1: free-market economies with low ALMPs and low unemployment benefits or unionisation appear in the bottom right corner, while interventionist economies are positioned on the top left.
Classification of countries by mean expenditure on active labor market policies (ALMPs) and labor market fluidity. Data source: OECD Labour Force Statistics and Minimum Wage Databases; Nickell (2006); Gwartney et al. (2014); own calculations. The data spans the time period 1985–2013; for more details refer to online supplemental Appendix Table 1
For the purpose of analysis, these two camps are again subdivided into two groups each. Group 1 consists of the most fluid markets (former Soviet satellite states and Korea). Group 2 comprises the remaining free-market economies and consists mainly of traditionally liberal Anglo-Saxon economies. Interventionist countries subdivide into those with moderate and those with high market rigidity (groups 3 and 4, respectively).
Given this distribution of countries, a relevant policy question lies in assessing the cost of MWs across the two country types. An indicative answer is provided in Tables 2 (groups 1 and 2) and 3 (groups 3 and 4), which show the implied marginal effects of MWs when plugging each country's mean institutional characteristics into the preferred model of Table 1. The differences in predicted effects are striking. For the most restrictive countries (group 4), a one percentage point (p.p.) increase in MWs on average decreases the youth employment rate by a substantial 0.64 p.p., despite their high ALMPs. Meanwhile, for the most fluid labour markets (group 1) the effect is insignificant or even marginally positive, regardless of the low ALMPs. Hence, with the exception of France and Spain, which have extremely high expenditures on ALMPs and are only moderately rigid, the model predicts interventionist countries to suffer more from MWs. This indicates that labour market rigidity, as captured by unionisation and unemployment benefits, is the defining factor determining the effect of MWs on youth employment.
Table 2: Implied marginal effect (ME) of minimum wages (MWs), free-market countries
Data source: OECD Labour Force Statistics and Minimum Wage Databases; Nickell (2006); Gwartney et al. (2014); own calculations. The data spans the time period 1985–2013; for more details refer to online supplemental Appendix Table 1
Table 3: Implied marginal effect (ME) of minimum wages (MWs), interventionist countries
In order to confirm this hypothesis, the marginal effect across country types was estimated directly by allowing the slope of MWs to differ between free-market (groups 1 and 2) and interventionist (groups 3 and 4) countries, as in the model below:
$$ emp_{it} = \beta\, mw_{it-1} + \mathbf{I}_{it}^{\mathrm{T}}\psi + \gamma\, (mw_{it-1} \times intervention_i) + \mathbf{X}_{it}^{\mathrm{T}}\Theta + \tau_t + \delta_i + \varepsilon_{it} $$
where intervention_i takes the value 1 for interventionist countries and 0 otherwise, while the other variables remain as before. The estimated marginal effects are reported in Table 4 and show that the marginal MW effect on youth employment is insignificant at the 5% level for free-market economies but significant and negative for interventionist ones.
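To make the estimation concrete, the sketch below fits this kind of interaction model by ordinary least squares with country and year dummies on synthetic data. Everything here (the fe_interaction_ols helper, the simulated panel, the coefficient values) is an illustrative assumption, not the paper's data or code:

```python
import numpy as np

def fe_interaction_ols(emp, mw, intervention, country, year):
    """OLS of employment on lagged MW and its interaction with an
    intervention dummy, with country and year fixed effects entered as
    dummy variables. coef[0] is beta (baseline MW effect) and coef[1]
    is gamma (the interaction coefficient)."""
    countries = np.unique(country)
    years = np.unique(year)
    cols = [mw, mw * intervention]                             # beta, gamma
    cols += [(country == c).astype(float) for c in countries]  # country FE
    cols += [(year == y).astype(float) for y in years[1:]]     # year FE (one dropped)
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, emp, rcond=None)
    return coef

# Synthetic panel: 10 countries x 20 years, half "interventionist";
# true beta = -0.1 and gamma = -0.5, so the MW effect is -0.6 for them.
rng = np.random.default_rng(0)
nc, nt = 10, 20
country = np.repeat(np.arange(nc), nt)
year = np.tile(np.arange(nt), nc)
intervention = (country >= 5).astype(float)
mw = rng.normal(size=nc * nt)
emp = -0.1 * mw - 0.5 * mw * intervention + 0.1 * rng.normal(size=nc * nt)
coef = fe_interaction_ols(emp, mw, intervention, country, year)
```

The marginal MW effect for interventionist countries is then coef[0] + coef[1], mirroring how the paper reports separate marginal effects by country type.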
Table 4: Estimated marginal effect of minimum wages, free-market versus interventionist countries
Figure 2 summarises the policy implications of this section by plotting the implied marginal effect of minimum wages at each country's mean institutional characteristics against labour market fluidity. Three simple policy lessons follow. Firstly, raising MWs in fluid labour markets should depress youth employment relatively mildly, if at all. Secondly, the high economic costs of MWs in moderately rigid labour markets can be avoided through suitable ALMPs (France and Spain). Thirdly, this solution is not viable if labour markets are too rigid. These conclusions oppose Neumark and Wascher (2004), where free-market countries were found to suffer more from MW increases.
Marginal effect of minimum wages (MWs) by labor market fluidity. Data source: OECD Labour Force Statistics and Minimum Wage Databases; Nickell (2006); Gwartney et al. (2014); own calculations. The data spans the time period 1985–2013; for more details refer to online supplemental Appendix Table 1
Robustness Checks
Panel data studies in minimum wage research share two key characteristics: they cover short time periods for a small number of countries and rely on unbalanced datasets. Both features can bias standard errors and coefficients. Nevertheless, these problems were not discussed in previous studies and are therefore addressed below.⁶
Firstly, an unbalanced dataset is problematic if observations are missing non-randomly, since the data are then selected on unobservable characteristics, potentially biasing coefficients (Wooldridge 2010). This could be a problem in the present case, as fewer observations are associated with less-developed countries. There is no clear economic argument that this panel imbalance must bias the interactive variables: that would require synergies to work differently in countries with missing observations, for which there is no obvious reason. In order to examine whether the imbalance poses a problem, the analysis was repeated using a balanced subsample covering two thirds of observations (Table 5, column 3). The interactive coefficients are less than one standard error away from the original ones and remain significant. This does not rule out bias when using the full sample, but indicates that any bias is probably negligible.
Table 5: Main robustness checks
Data source: OECD Labour Force Statistics and Minimum Wage Databases; Nickell (2006); Gwartney et al. (2014); own calculations. The data spans the time period 1985–2013; for more details refer to online supplemental Appendix Table 1. Institutional variables and controls were included but are not reported. Robust standard errors are clustered by country. Bootstrapping used 600 repetitions.
*** p < 0.01, ** p < 0.05, * p < 0.10
Secondly, the small number of countries in the study is a potential problem for clustered standard errors, as these are only consistent when the number of clusters approaches infinity (Angrist and Pischke 2009). A breach of this asymptotic condition can lead to downward-biased standard errors, as shown in Monte Carlo studies by Cameron et al. (2008). The authors recommend bootstrapping methods to counter this problem, since they constitute an asymptotic refinement and can decrease biases in finite samples. Essentially, bootstrapping works by drawing a large number of subsamples (with replacement) from the original data, estimating coefficients for each subsample and calculating standard errors based on the distribution of these coefficients (Angrist and Pischke 2009).⁷ This method was adopted in column 2. Standard errors indeed increase, but the coefficients of the interactive variables remain significant, although the interaction with unemployment benefits is significant only at the 10% level. This indicates that the results are moderately robust to the problem of limited clusters, but the availability of institutional data for more countries would resolve this issue more thoroughly.
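The bootstrap procedure described above can be sketched as a pairs cluster bootstrap, in which entire countries are resampled with replacement so that within-country error correlation is preserved. This is a generic illustration on synthetic data; the paper's exact implementation (and refinements such as the wild cluster bootstrap of Cameron et al.) may differ:

```python
import numpy as np

def cluster_bootstrap_se(y, X, cluster, n_boot=600, seed=1):
    """Pairs cluster bootstrap: resample whole clusters (here, countries)
    with replacement, re-estimate OLS on each resample, and report the
    standard deviation of the bootstrap coefficients as standard errors."""
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster)
    rows_of = {c: np.flatnonzero(cluster == c) for c in clusters}
    draws = []
    for _ in range(n_boot):
        picked = rng.choice(clusters, size=len(clusters), replace=True)
        rows = np.concatenate([rows_of[c] for c in picked])
        coef, *_ = np.linalg.lstsq(X[rows], y[rows], rcond=None)
        draws.append(coef)
    return np.std(draws, axis=0, ddof=1)

# Toy data with a common shock per cluster, the kind of within-cluster
# error correlation that clustered/bootstrapped standard errors address.
rng = np.random.default_rng(2)
n_clusters, per = 25, 30
cluster = np.repeat(np.arange(n_clusters), per)
x = rng.normal(size=n_clusters * per)
shock = rng.normal(size=n_clusters)[cluster]   # shared within each cluster
y = 1.0 + 2.0 * x + shock + rng.normal(size=n_clusters * per)
X = np.column_stack([np.ones_like(x), x])
se = cluster_bootstrap_se(y, X, cluster)       # SEs for [intercept, slope]
```

Because the cluster-level shock is preserved inside each resampled country, the bootstrap distribution reflects the true sampling variability better than naive i.i.d. formulas would.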
Conclusion and Limitations
Consistent with Coe and Snower's (1997) hypothesis, MWs have been found to generate synergies with labour market institutions. In particular, restrictive institutions such as high union density and high unemployment benefits aggravate the negative impact of MWs, while active labour market policies create a positive synergy with MWs.
Two main conclusions follow. Firstly, the interactions between MWs and other labour market institutions are omitted relevant variables in many studies. This model misspecification could at least partially explain the large discrepancy in empirical results in the MW literature. Secondly, policymakers have to consider possible complementary effects when altering MW policies. In particular, the analysis indicates that MWs can be very costly in rigid labour markets with high union density and generous unemployment benefits.
There are some important limitations to the present analysis. First, only four institutional variables are incorporated and although the sample size exceeds that of most previous studies, data availability remains a problem. Moreover, while the low number of countries under analysis and the panel imbalance have been examined in robustness checks, these issues can only be fully resolved by incorporating more countries into the analysis.
Second, although there are reasons to believe that MWs are conditionally exogenous given suitable controls and fixed effects, a valid IV estimation would greatly improve the credibility of the analysis. This is an area with a large scope for improvement in MW research.
Last and perhaps most importantly, the economic costs of MWs are only one side of the coin. Policymakers might be willing to sacrifice even large amounts of employment in order to reap the benefits of MWs in the form of decreased wage inequality. However, whether the benefits of MWs exceed the costs remains a political rather than an economic debate. Nevertheless, this paper highlights that the costs can be very high in some settings, in which MWs should be applied with caution.
1. Only in a few exceptions was the OECD database supplemented with different sources. Firstly, the OECD changed its methodology in 2000 for the unemployment benefits variable. Hence, for all countries the gross replacement rates from Nickell (2006) were used up until 2004, the last year of availability. After this, the series was interpolated using the net replacement rates from the OECD database. Secondly, employment protection data missing for Luxembourg during 2002–2007 was interpolated using an index of hiring and firing costs (Gwartney et al. 2014). Similarly, employment protection data was extended backwards for New Zealand during 1986–1989 using the Nickell (2006) employment protection index. The results presented in this paper are not impacted significantly by these interpolations.
2. For a description and summary statistics of the countries covered refer to online supplemental Appendix Table 1.
3. These policies span four categories: expenditure on training of the unemployed (typically the largest component), direct job creation, employment incentives and start-up incentives.
4. However, changing the lag order of all variables does not impact the results (online supplemental Appendix Table 2).
5. The index is a weighted average of the standardised mean unemployment benefit and union density, whereby high values indicate high market fluidity. More weight is given to unemployment benefits, but changing this does not impact country positions significantly.
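As an illustration, the index construction just described, a sign-flipped weighted average of standardized series, might be sketched as follows; the 0.6 weight and the example country values are hypothetical, not the paper's:

```python
import numpy as np

def fluidity_index(benefits, density, w_benefits=0.6):
    """Weighted average of the standardized (z-scored) series, with the
    sign flipped so that LOW benefits and LOW union density yield a HIGH
    (fluid) index. The 0.6 weight on benefits is an illustrative choice,
    not the paper's exact value."""
    z = lambda v: (v - v.mean()) / v.std()
    return -(w_benefits * z(benefits) + (1.0 - w_benefits) * z(density))

benefits = np.array([10., 20., 30., 60.])   # hypothetical country means
density  = np.array([15., 25., 50., 70.])
idx = fluidity_index(benefits, density)     # country 0 is the most fluid
```

Standardizing first puts the two institutional variables on a common scale, so the weight controls their relative importance directly.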
6. Further robustness checks with different time-controls and lag orders can be found in online supplemental Appendix Table 2.
7. The online supplemental Appendix contains a more complete description of the bootstrapping method used.
I would like to thank Dr. Piotr Jelonek for his continuous support as a supervisor at the University of Warwick and the International Atlantic Economic Society for organising the Best Undergraduate Paper Competition and for supporting me throughout the publication process.
Addison, J. T., & Ozturk, O. D. (2010). Minimum wages, labor market institutions, and female employment and unemployment: A cross-country analysis. IZA Discussion Paper No. 5162. Bonn: Institute for the Study of Labor.
Angrist, J. D., & Pischke, J. S. (2009). Mostly harmless econometrics. Princeton: Princeton University Press.
Bentolila, S., & Bertola, G. (1990). Firing costs and labour demand: How bad is Eurosclerosis? The Review of Economic Studies, 57(3), 381–402.
Betcherman, G. (2012). Labor market institutions: A review of the literature. World Bank Policy Research Working Paper, 6276.
Brown, C. C., Gilroy, C., & Kohen, A. I. (1982). The effect of the minimum wage on employment and unemployment. Journal of Economic Literature, 20(2), 487–528.
Brown, A. J., Merkl, C., & Snower, D. J. (2014). The minimum wage from a two-sided perspective. Economics Letters, 124(3), 389–391.
Burdett, K., & Mortensen, D. T. (1998). Wage differentials, employer size, and unemployment. International Economic Review, 39(2), 257–273.
Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2008). Bootstrap-based improvements for inference with clustered errors. The Review of Economics and Statistics, 90(3), 414–427.
Card, D., & Krueger, A. B. (1994). Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania. The American Economic Review, 84(5), 772–793.
Checchi, D., & Nunziata, L. (2011). Models of unionism and unemployment. European Journal of Industrial Relations, 17(2), 141–152.
Christl, M., Köppl-Turyna, M., & Kucsera, D. (2017). Revisiting the employment effects of minimum wages in Europe. German Economic Review (forthcoming).
Coe, D. T., & Snower, D. J. (1997). Policy complementarities: The case for fundamental labor market reform. International Monetary Fund Staff Papers, 44(1), 1–35.
Dickens, R., Machin, S., & Manning, A. (1998). Estimating the effect of minimum wages on employment from the distribution of wages: A critical view. Labour Economics, 5(2), 109–134.
Dolton, P., & Bondibene, C. R. (2011). An evaluation of the international experience of minimum wages in an economic downturn. Report prepared for the Low Pay Commission, March.
Gwartney, J., Lawson, R., & Hall, J. (2014). 2014 economic freedom dataset, published in Economic Freedom of the World: 2014 Annual Report. Fraser Institute.
Kawaguchi, D., & Murao, T. (2014). Labor-market institutions and long-term effects of youth unemployment. Journal of Money, Credit and Banking, 46(S2), 95–116.
Keune, M. (2015). Trade unions and young workers in seven EU countries. YOUnion Final Report 2015. YOUnion – Union for Youth.
Manning, A. (2003). Monopsony in motion: Imperfect competition in labor markets. Princeton: Princeton University Press.
Neumark, D., & Wascher, W. (1992). Employment effects of minimum and subminimum wages: Panel data on state minimum wage laws. Industrial & Labor Relations Review, 46, 55–81.
Neumark, D., & Wascher, W. (2004). Minimum wages, labor market institutions, and youth employment: A cross-national analysis. Industrial & Labor Relations Review, 57(2), 223–248.
Neumark, D., & Wascher, W. (2006). Minimum wages and employment: A review of evidence from the new minimum wage research. NBER Working Paper No. 12663. National Bureau of Economic Research.
Neumark, D., & Wascher, W. (2008). Minimum wages. Cambridge: MIT Press.
Nickell, W. (2006). The CEP-OECD institutions data set (1960–2004). London: Centre for Economic Performance, London School of Economics and Political Science.
Scarpetta, S. (1996). Assessing the role of labour market policies and institutional settings on unemployment: A cross-country study. OECD Economic Studies, 26(1), 43–98.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data. Cambridge: MIT Press.
University of Oxford, Oxford, UK
Brzezinski, A. Atl Econ J (2017) 45: 251. https://doi.org/10.1007/s11293-017-9537-7
Living Reviews in Solar Physics
December 2004, 1:2
Astrospheres and Solar-like Stellar Winds
Brian E. Wood
Stellar analogs for the solar wind have proven to be frustratingly difficult to detect directly. However, these stellar winds can be studied indirectly by observing the interaction regions carved out by the collisions between these winds and the interstellar medium (ISM). These interaction regions are called "astrospheres", analogous to the "heliosphere" surrounding the Sun. The heliosphere and astrospheres contain a population of hydrogen heated by charge exchange processes that can produce enough H I Lyα absorption to be detectable in UV spectra of nearby stars from the Hubble Space Telescope (HST). The amount of astrospheric absorption is a diagnostic for the strength of the stellar wind, so these observations have provided the first measurements of solar-like stellar winds. Results from these stellar wind studies and their implications for our understanding of the solar wind are reviewed here. Of particular interest are results concerning the past history of the solar wind and its impact on planetary atmospheres.
Keywords: Solar wind · Mass loss rate · Stellar wind · Hubble Space Telescope · Termination shock
1 Introduction
It goes without saying that it is generally much easier to study the nearby Sun than it is to study much more distant solar-like stars. Nevertheless, stellar research can address questions about the Sun that observations of the Sun alone cannot answer. The Sun only provides one example of a cool main sequence star, so it cannot tell us by itself how its various properties relate to each other. By observing other solar-like stars, we can see how properties such as stellar activity, rotation, and age are correlated. This can teach us a lot about why the Sun has the properties it does today. It can also tell us what the Sun was like in the past and what it will be like in the future.
Many stellar analogs of solar phenomena are available for study: photospheres, chromospheres, coronae, starspots, magnetic fields, rotation, asteroseismology, etc. (see Skumanich, 1972; Linsky, 1980; Vogt et al., 1987; Gustafsson and Jørgensen, 1994; Johns-Krull and Valenti, 1996; Christensen-Dalsgaard, 2003; Favata and Micela, 2003; Güdel, 2004). Comparing solar properties with those observed for other stars provides a useful context for the solar measurements, improving our understanding of the Sun as well as of stars in general. However, one major solar phenomenon that has proven to be very difficult to study for other stars is the solar wind.
Some types of stellar winds are very easy to detect and study spectroscopically. The massive, radiation-pressure driven winds of hot stars and the cool, massive winds of red giants and super-giants both produce P Cygni emission line profiles that allow the measurement of wind properties with reasonable precision (Harper et al., 1995; Mullan et al., 1998; Kudritzki and Puls, 2000). However, these stars are not solar-like and the winds of these stars are not analogous to the much weaker wind that we see emanating from the Sun. The weak and fully ionized solar wind provides no spectral diagnostics analogous to those used to study more massive stellar winds. Directly detecting a truly solar-like wind around another solar-like star has therefore proven to be a formidable problem.
The first clear detections of winds around other solar-like stars have come from UV spectra of nearby stars from the Hubble Space Telescope (HST). Stellar H I Lyα lines at 1216 Å are always contaminated by very broad, saturated H I absorption. For a long time, this absorption was assumed to be entirely from interstellar H I. However, for some of the nearest stars, the interstellar medium (ISM) cannot account for all of the observed absorption. With the assistance of complex hydrodynamic models of the solar wind/ISM interaction, the excess Lyα absorption has been convincingly identified as being partly due to heated H I gas within our own heliosphere and partly due to analogous H I gas within the "astrospheres" surrounding the observed stars. Note that the word "astrosphere" is used here as the stellar analog for "heliosphere", although "asterosphere" has also been used in the past (see Schrijver et al., 2003). "Astrosphere" has a longer history, with published usage in the literature dating at least back to 1978 (Fahr, 1978). The term "heliosphere" itself only dates back to the 1960s (Dessler, 1967).
The detection of astrospheric Lyα absorption represents an indirect detection of solar-like stellar winds, since astrospheres do not exist in the absence of a stellar wind. Furthermore, the amount of astrospheric absorption is dependent on the strength of the wind, so the astrospheric absorption has provided the first estimates of mass loss rates for solar-like stars. This article reviews the study of solar-like winds around other stars, especially results using the astrospheric Lyα absorption technique.
In Section 2, some background material is provided about the solar wind, ISM, and heliosphere.
In Section 3, techniques used to try to directly detect solar-like winds are reviewed.
In Section 4, the astrospheric Lyα absorption diagnostic is described in detail.
Section 5 provides a review of what the astrospheric analyses have taught us about the solar wind, and discusses some implications of these results both within and outside the realm of solar/stellar physics.
Finally, the article ends in Section 6 with some concluding comments about the future of this subject.
2 Background Material
2.1 The solar wind
Before describing how solar-like winds are detected around other stars, it is worthwhile to briefly review what is known about the solar wind and how we study its properties. The solar wind was first detected through its role in the formation of aurorae and the creation of comet tails. As far back as 1896, Kristian Birkeland proposed that aurorae were due to particles emanating from the Sun (see review by Stern, 1989), while Biermann (1951) first described how "corpuscular radiation" from the Sun was responsible for the plasma tails of comets. Today almost everything we know about the solar wind comes from in situ measurements of its properties from satellites. These measurements date back to the Soviet Luna missions in 1959 (Gringauz et al., 1962) and NASA's Mariner 2 mission in 1962 (Neugebauer and Snyder, 1962). Numerous other spacecraft have participated in studying the solar wind since then, but of particular note are the venerable Voyager 1 and Voyager 2 satellites, which have returned data on the solar wind from 1977 through the present day (see Lazarus and McNutt Jr, 1990). More recently, the Ulysses spacecraft, launched in 1990 and still operating, has provided a first look at the solar wind outside of the ecliptic plane (McComas et al., 2000).
Within the ecliptic plane the solar wind is dominated by low speed streams with typical velocities, proton densities, and temperatures (at 1 AU) of V = 400 km s⁻¹, n(H⁺) = 5 cm⁻³, and T = 10⁵ K, respectively, although high speed streams with lower densities and V ≈ 800 km s⁻¹ are not uncommon (see Feldman et al., 1977). At the maximum of the 11-year solar activity cycle, similar solar wind behavior is seen at almost all latitudes. However, Figure 1 shows that at solar minimum the wind above 30° ecliptic latitude is uniformly high speed, low density wind with V ≈ 800 km s⁻¹ (McComas et al., 2000, 2002). This high speed wind originates from coronal holes, which are particularly prominent on the Sun during solar minimum conditions. These solar wind data imply a total mass loss rate for the Sun of Ṁ⊙ ≈ 2 × 10⁻¹⁴ M⊙ yr⁻¹. Although thermal temperatures for the wind at 1 AU are of order T = 10⁵ K, densities are low enough that the wind cannot equilibrate to this temperature, and the ionization state of the wind is actually frozen in at coronal temperatures closer to T = 10⁶ K. Hydrogen is fully ionized, meaning that protons are the dominant constituents of the wind by mass.
Figure 1:
The solar wind velocity (red/blue line) and density (green line) observed by Ulysses as a function of ecliptic latitude (McComas et al., 2000). During solar minimum conditions, high latitudes are dominated by high speed, low density wind, while low latitudes see mostly lower speed wind with higher densities.
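The mass loss rate quoted above follows from a back-of-envelope calculation, assuming a spherically symmetric, proton-only wind at the typical slow-wind values (n = 5 cm⁻³, V = 400 km s⁻¹ at 1 AU); the constants are standard physical values, not taken from this article:

```python
import math

# Mdot = 4*pi*r^2 * n * m_p * V, evaluated at r = 1 AU for a
# spherically symmetric, proton-only wind.
AU    = 1.496e11    # m
M_P   = 1.673e-27   # kg, proton mass
M_SUN = 1.989e30    # kg
YEAR  = 3.156e7     # s

n = 5.0e6           # protons m^-3 (5 cm^-3)
V = 4.0e5           # m/s (400 km/s)

mdot_kg_s = 4.0 * math.pi * AU**2 * n * M_P * V   # ~1e9 kg/s
mdot_msun_yr = mdot_kg_s * YEAR / M_SUN           # ~1.5e-14 Msun/yr
```

Adding the helium contribution and the faster high-latitude streams raises this toward the quoted 2 × 10⁻¹⁴ M⊙ yr⁻¹.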
Perhaps the most fundamental question to ask about the solar wind is why it exists. Addressing this question is also necessary to assess whether similar winds should exist around other stars. Even before satellites proved the existence of a more-or-less steady solar wind, Parker (1958) predicted that such a wind should be present due to the existence of the 10⁶ K corona surrounding the Sun. In Parker's model, the solar wind exists because of thermal expansion from the hot corona. The predictions of this simple model agree remarkably well with the observed properties of the low speed wind that dominates in the ecliptic plane, although additional wind acceleration mechanisms invoking MHD waves have been proposed to explain the high speed streams (see MacGregor and Charbonneau, 1994; Cranmer, 2002). Thus, any star that has a hot corona analogous to that of the Sun should also have a wind analogous to that of the Sun. Observations with X-ray satellites such as Einstein and ROSAT clearly demonstrate that coronae are a ubiquitous phenomenon among cool main sequence stars (see Schmitt, 1997; Hünsch et al., 1999), so solar-like winds should be present around all solar-like stars. However, that does not mean that they are easy to detect (see Section 1).
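A quick consequence of Parker's isothermal wind model is the location of the sonic (critical) point, r_c = GM⊙/(2c_s²), where the flow speed equals the sound speed. The sketch below evaluates it for a 10⁶ K corona; the choice μ = 0.5 (fully ionized hydrogen) is an assumption of this illustration, and the constants are standard values:

```python
import math

# Sonic point of Parker's isothermal wind: r_c = G * M_sun / (2 * c_s^2).
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
R_SUN = 6.957e8     # m
K_B   = 1.381e-23   # J/K
M_P   = 1.673e-27   # kg

T, mu = 1.0e6, 0.5                  # coronal temperature; ionized H plasma
cs2 = K_B * T / (mu * M_P)          # isothermal sound speed squared
cs_km_s = math.sqrt(cs2) / 1e3      # sound speed, ~130 km/s
r_c = G * M_SUN / (2.0 * cs2)       # radius where the wind turns supersonic
r_c_rsun = r_c / R_SUN              # a few solar radii
```

The wind therefore becomes supersonic only a few solar radii above the surface, which is why the flow is already highly supersonic by 1 AU.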
2.2 The local interstellar medium
The solar wind does not expand forever. Eventually it runs into the local interstellar medium (LISM). The interaction region between the solar wind and LISM is the subject of Section 2.3, and it is through analogous interaction regions around other stars that solar-like stellar winds can be detected (see Section 4), but before the wind/ISM interaction regions are discussed, it is necessary to review the properties of the undisturbed LISM.
The principal method by which one studies the ISM is by observing absorption lines that interstellar material produces in spectra of distant stars. These studies reveal that ISM column densities remain rather low within about 100 pc of the Sun in most directions and then increase dramatically (Sfeir et al., 1999). This low density region is called the Local Bubble. Figure 2 shows a map of the Local Bubble in the Galactic plane (Lallement et al., 2003). The hot plasma within the Bubble has been detected directly from observations of the soft X-ray background (Snowden et al., 1995). Most locations within the Local Bubble are very hot (T ∼ 10⁶ K) and rarefied (nₑ ∼ 10⁻³ cm⁻³), and therefore completely ionized.
Figure 2: Map of the Local Bubble in the Galactic plane, where the contours indicate 20 mÅ and 50 mÅ equivalent widths for the Na I D2 line (Lallement et al., 2003). The distance scale is in parsecs.
Although most of the volume of the Local Bubble consists of this hot material, the absorption line studies clearly demonstrate that there are cooler, partially neutral clouds embedded within the Local Bubble. Furthermore, since even the shortest lines of sight show absorption from H I and other low temperature species (Linsky, 1998; Linsky et al., 2000), the Sun must be located within one of these clouds. The cloud immediately surrounding the Sun has been called the Local Interstellar Cloud (LIC), which is roughly 5–7 pc across and has a total mass of about 0.32M⊙ (Redfield and Linsky, 2000). There are similar clouds that are apparently adjacent to the LIC (e.g. the "G" cloud and "Hyades" clouds, see Lallement and Bertin, 1992; Redfield and Linsky, 2001), although it is debatable whether the LIC is truly distinct from these clouds. Velocity gradients within a single cloud could in principle create the appearance of multiple clouds in absorption line studies.
The first evidence that the LIC is not entirely ionized came not from absorption line studies but from observations of solar Lyα emission scattering off interstellar H I gas flowing into the heliosphere (Bertaux and Blamont, 1971; Quémerais et al., 1999, 2000). Interstellar atoms have also been observed directly with particle detectors on board spacecraft such as Ulysses (Witte et al., 1993, 1996). Both measurements of LIC material flowing through the heliosphere and LISM absorption line studies have been used to estimate the direction and magnitude of the LIC vector, and the resulting vectors are in good agreement. The heliocentric vector derived from absorption lines has a magnitude of 25.7 km s⁻¹ directed towards Galactic coordinates l = 186.1° and b = −16.4° (Lallement and Bertin, 1992; Lallement et al., 1995).
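For reference, the quoted heliocentric vector can be resolved into Cartesian Galactic components with the standard spherical-to-Cartesian conversion (here x points toward l = 0°, b = 0° and z toward b = +90°; the axis convention is an assumption of this illustration, not stated in the text):

```python
import math

# LIC flow: 25.7 km/s toward Galactic l = 186.1 deg, b = -16.4 deg.
v = 25.7                             # km/s
l = math.radians(186.1)
b = math.radians(-16.4)

vx = v * math.cos(b) * math.cos(l)   # roughly the anti-center direction
vy = v * math.cos(b) * math.sin(l)
vz = v * math.sin(b)                 # small component below the plane
```

The dominant −x component shows that the flow comes almost exactly from the Galactic anti-center direction, with only modest out-of-plane motion.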
Other properties of the undisturbed LISM just beyond the heliosphere are less precisely known. Absorption line studies are hampered by probable variations of densities, temperatures, and ionization states within the LIC (see Cheng and Bruhweiler, 1990; Slavin and Frisch, 2002; Wood et al., 2003b), meaning that line-of-sight averages of these properties towards even the nearest stars are potentially different from the actual circumsolar LISM properties. Studies of LISM particles streaming through the heliosphere are hampered by the fact that the properties of these particles are often altered in the outer heliosphere, thereby requiring the assistance of models to extrapolate back to undisturbed LISM conditions (see Izmodenov et al., 2004). In any case, typical temperatures measured for the LIC are T = 6000–8000 K, typical hydrogen densities are n(H I) = 0.1–0.2 cm⁻³, and typical proton and electron densities are n(H⁺) ≈ nₑ = 0.04–0.2 cm⁻³ (Witte et al., 1993, 1996; Wood and Linsky, 1997, 1998; Izmodenov et al., 1999a; Redfield and Linsky, 2000; Frisch and Slavin, 2003).
2.3 The structure of the heliosphere
Wind/ISM interactions provide the means by which solar-like stellar winds can be detected (see Section 4). Our understanding of these interactions relies heavily on a long history of efforts to model the solar wind/ISM interaction. This heliospheric modeling is summarized briefly here, but for more comprehensive reviews see Holzer (1989); Baranov (1990); Suess (1990) and Zank (1999).
Modeling the large scale structure of the heliosphere began not long after the solar wind's discovery (Parker, 1961, 1963). The basic structure of the heliosphere, which is shown schematically in Figure 3, is dominated by three prominent boundaries: the termination shock (TS), heliopause (HP), and bow shock (BS). The solar wind is highly supersonic, and the oval-shaped termination shock is where the radial wind is shocked to subsonic speeds. The 26 km s⁻¹ laminar ISM flow is also generally believed to be supersonic, although in principle it could be subsonic if the poorly known ISM magnetic field is strong enough (Zank et al., 1996). Nevertheless, most heliospheric models assume the existence of a bow shock, where the ISM flow is shocked to subsonic speeds (see Figure 3). In between the TS and BS is the heliopause, which is a contact surface separating the plasma flows of the solar and interstellar winds.
Figure 3: Schematic picture of the heliospheric interface from Izmodenov et al. (2002), which can be divided into the 4 regions shown in the figure, with significantly different plasma properties. Region 1: supersonic solar wind; Region 2: subsonic solar wind; Region 3: disturbed interstellar gas and plasma; and Region 4: undisturbed interstellar medium.
The heliospheric structure shown in Figure 3 is inferred almost entirely from hydrodynamic models. However, in 2004, Voyager 1 crossed the TS at a distance of 94 AU from the Sun in roughly the upwind direction of the ISM flow (Stone et al., 2005). Precursors of this crossing were seen years prior to the accepted crossing date (Krimigis et al., 2003; Burlaga et al., 2003; McDonald et al., 2003). At the time of this writing, Voyager 1's sister spacecraft Voyager 2 has begun to see these precursors but has not yet officially crossed the TS (Opher et al., 2006). The 94 AU TS distance measured by Voyager 1 is consistent with model predictions (Izmodenov et al., 2003). As for the HP and BS, recent models generally predict upwind distances of ∼ 140 AU and ∼ 240 AU, respectively. The Voyager spacecraft may not survive long enough to get out this far.
The plasma component of the LISM is diverted around the heliopause due to the strong plasma interactions, but neutrals in the LISM can penetrate into the solar system through the HP and TS. These neutrals were first detected through Lyα backscatter emission (Bertaux and Blamont, 1971). However, even after this discovery most hydrodynamic models of the heliosphere continued to ignore the neutrals since the collisional interactions involving neutrals are much weaker than those involving charged particles. Essentially, the assumption was made that the neutrals would pass through the heliosphere unimpeded, having little or no effect on the heliospheric structure.
It was recognized in the 1970s that the LISM neutrals could in fact play an important role in the solar wind/ISM collision through charge exchange interactions (Holzer, 1972; Wallis, 1975). However, modeling this is very difficult, because charge exchange drives the neutral H far out of thermal and ionization equilibrium. This means that simple fluid approximations break down, and one must resort to complex multi-fluid codes or, ideally, fully kinetic codes. It was not until much later that codes treating the plasma and neutrals in a self-consistent manner were first developed (Baranov and Malama, 1993, 1995; Baranov and Zaitsev, 1995; Zank et al., 1996; Izmodenov et al., 1999a; Müller et al., 2000; Izmodenov et al., 2001). These models demonstrate that the heliospheric structure is indeed influenced significantly by the neutrals in many different ways.
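The breakdown of the fluid approximation can be illustrated with a rough mean-free-path estimate, here assuming a proton density of 0.1 cm−3 and a charge-exchange cross section of order 5 × 10−15 cm², a representative value at LISM flow speeds:

```latex
\begin{equation}
  \lambda_{\rm ex} \sim \frac{1}{n_{p}\,\sigma_{\rm ex}}
    \approx \frac{1}{(0.1\ {\rm cm^{-3}})\,(5\times10^{-15}\ {\rm cm^{2}})}
    = 2\times10^{15}\ {\rm cm}
    \approx 130\ {\rm AU} .
\end{equation}
```

Since this is comparable to the size of the heliosphere itself, the neutrals are neither collisionless nor collision-dominated, and multi-fluid or kinetic treatments become necessary.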
For the purposes of this article, the importance of heliospheric codes that treat neutrals properly is twofold: it is only because of the neutral hydrogen in the LISM that heliospheric and astrospheric Lyα absorption is detectable at all, and it is only with such self-consistent neutral-plasma codes that this absorption can be modeled and stellar mass loss rates extracted from the data. Figure 4 shows a heliospheric model that uses a hybrid kinetic code, where the protons are modeled as a fluid but a kinetic code is used for the neutrals (Lipatov et al., 1998; Müller et al., 2000; Wood et al., 2000b). The strong plasma interactions heat and compress LISM protons in between the HP and BS, and through charge exchange these high temperatures and densities are transmitted to the neutral H. As a consequence, the heliosphere and astrospheres are permeated by a population of hot hydrogen, which produces a substantial amount of Lyα absorption in HST observations of nearby stars. Most of this absorption comes from the "hydrogen wall" region in between the HP and BS, where densities of the hot H I are particularly high (see Figure 4d). The heliospheric and astrospheric Lyα diagnostic is described in detail in Section 4.
Figure 4: (a) Proton temperature, (b) proton density, (c) neutral hydrogen temperature, and (d) neutral hydrogen density distributions for a heliospheric model from Wood et al. (2000b). The positions of the termination shock (TS), heliopause (HP), and bow shock (BS) are indicated in (a), and streamlines indicating the plasma flow direction are shown in (b). The distance scale is in AU.
3 Direct Wind Detection Techniques
The astrospheric Lyα absorption diagnostic described in detail in Section 4 represents only an indirect detection of stellar winds, since the H I that produces the absorption is essentially LISM rather than wind material. The H I is nevertheless heated by its interaction with the stellar wind, and the astrospheric absorption has therefore proven to be very useful for successfully detecting and measuring solar-like winds around other stars. However, there have been attempts to detect these winds more directly, and these mostly unsuccessful efforts are briefly summarized here.
One can try to look for free-free radio emission from solar-like winds, since they are fully ionized and should therefore produce emission at some level. However, current radio telescopes can only detect winds if they are much stronger than that of the Sun. There have been some claims of very high mass loss rates for a few very active stars using observations at millimeter wavelengths (Mullan et al., 1992), but these interpretations of the data are highly controversial (Lim and White, 1996; van den Oord and Doyle, 1997). The problem is that the coronae of these active stars are also sources of radio emission, which makes it difficult to identify a wind as the source of the emission. Furthermore, it has been argued that massive winds around active stars should absorb the flaring coronal emission that is often observed from these stars, suggesting that massive winds cannot be present (Lim and White, 1996). Nondetections of radio emission have been used to derive upper limits to the mass loss rates of various stars, but these upper limits are typically 2–3 orders of magnitude higher than the solar mass loss rate, so these are not very stringent constraints (Brown et al., 1990; Drake et al., 1993; Lim et al., 1996b; Gaidos et al., 2000).
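The difficulty of detecting solar-like winds in the radio can be seen from the standard scaling for thermal free-free emission from a spherically symmetric, fully ionized wind (Wright and Barlow, 1975), quoted here only as an order-of-magnitude guide:

```latex
\begin{equation}
  S_{\nu} \propto \left(\frac{\dot M}{v_{\infty}}\right)^{4/3}
           \frac{\nu^{0.6}}{d^{2}} ,
\end{equation}
```

where d is the stellar distance and v∞ the wind speed. With the solar mass loss rate of only ∼ 2 × 10−14 M⊙ yr−1, the predicted flux densities for even the nearest stars fall far below current sensitivity limits unless Ṁ is orders of magnitude larger than solar.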
Variable ultraviolet absorption features observed from the close, eclipsing binary V471 Tau (K2 V+DA) have been interpreted as being due to a wind from the K2 V star (Mullan et al., 1989; Bond et al., 2001). Even if this interpretation is correct, it is questionable whether the wind produced by this star can be considered to be truly "solar-like" given the close presence of the white dwarf companion. In addition, instead of a spherically symmetric wind it has been proposed that the UV absorption could instead be indicative of coronal material being funneled directly from the K2 V star to the white dwarf through the magnetospheric interaction of the two stars (Lim et al., 1996a).
One final wind detection technique that has been proposed is to look for X-ray emission surrounding nearby stars, caused by charge exchange between highly ionized heavy atoms in the stellar wind and inflowing LISM neutrals. This is very analogous to the process by which comets produce X-rays (see Lisse et al., 2001; Cravens, 2002). In the heliosphere, this charge exchange X-ray emission may be responsible for a significant fraction of the observed soft X-ray background (Cravens, 2000). Wargelin and Drake (2002) searched for circumstellar X-ray emission in Chandra observations of the nearest star, Proxima Cen, but they failed to detect any. Based on this nondetection, they quote an upper limit for Proxima Cen's mass loss rate of Ṁ < 14Ṁ⊙. This can be compared with the upper limit of Ṁ < 350Ṁ⊙ derived from a nondetection of radio emission from Proxima Cen (Lim et al., 1996b), and the upper limit of Ṁ < 0.2Ṁ⊙ derived from the nondetection of astrospheric Lyα absorption (Wood et al., 2001). The astrospheric Lyα absorption diagnostic (see Section 4) is roughly two orders of magnitude more sensitive than the X-ray diagnostic and roughly three orders of magnitude more sensitive than the radio measurement.
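The relative sensitivities quoted above follow directly from the ratios of the upper limits:

```latex
\begin{equation}
  \frac{14\,\dot M_{\odot}}{0.2\,\dot M_{\odot}} = 70 \approx 10^{1.8} ,
  \qquad
  \frac{350\,\dot M_{\odot}}{0.2\,\dot M_{\odot}} = 1750 \approx 10^{3.2} ,
\end{equation}
```

i.e., roughly two and three orders of magnitude, respectively.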
4 Detecting Winds Through Astrospheric Absorption
4.1 Analyzing H I Lyman-alpha lines
Spectroscopic analyses of stellar H I Lyman-α lines have proven to be the best way so far to clearly detect and measure weak solar-like winds, but analysis of this line is complex. The history of Lyα absorption observations and analyses is summarized here, with emphasis on how these studies eventually led to the detection of solar-like stellar winds.
The Lyα line at 1216 Å is the most fundamental transition of the most abundant atom in the universe. As such, the line is a valuable diagnostic for many purposes. For cool stars, the Lyα line is a strong radiative coolant for stellar chromospheres, and is therefore an important chromospheric diagnostic. Cool stars produce very little continuum emission at 1216 Å, so Lyα lines observed from cool stars are strong, isolated emission lines. However, these lines are always heavily contaminated by interstellar absorption.
Figure 5 shows the Lyα lines observed from the two members of the α Cen binary, which is the nearest star system to us at a distance of only 1.3 pc. The centers of the stellar emission lines are obliterated by broad, saturated H I absorption. Narrower absorption from deuterium (D I) is visible −0.33 Å from the H I absorption. The D I absorption is entirely from interstellar material in between α Cen and the Sun, and interstellar absorption also accounts for much of the H I absorption. The H I and D I absorption lines are therefore valuable diagnostics for the LISM, and stellar Lyα data like that in Figure 5 have proven to be useful for measuring the properties of warm, neutral ISM gas, and also for mapping out the distribution of this gas in the vicinity of the Sun (see Linsky et al., 2000; Redfield and Linsky, 2000).
Figure 5: HST/GHRS spectra of the Lyα lines of α Cen A and B, showing broad absorption from interstellar H I and narrow absorption from D I (Linsky and Wood, 1996).
The importance of these data is magnified further by the measurements the data provide of D/H ratios. The LISM D/H ratio has important applications for both cosmology and our understanding of Galactic chemical evolution (see McCullough, 1992; Linsky, 1998; Moos et al., 2002; Wood et al., 2004). The light element abundances in the universe are powerful diagnostics for Big Bang nucleosynthesis calculations, with the exact abundances depending on the cosmic baryon density, Ωb. The deuterium abundance is particularly sensitive to Ωb, so measuring the primordial D/H ratio and thereby constraining cosmological models has been a major goal in astronomy (see Boesgaard and Steigman, 1985; Burles et al., 2001). The most accurate and meaningful D/H measurements come from analyses of LISM absorption like that in Figure 5. Unfortunately, the LISM D/H ratio only provides a lower limit to the primordial D/H ratio. Since deuterium is destroyed in stellar interiors, the D/H ratio is expected to have decreased with time, so Galactic chemical evolution models are required to extrapolate back to a primordial value (see Prantzos, 1996; Tosi et al., 1998; Chiappini et al., 2002). Primordial D/H values can also be measured more directly by measuring D/H in more pristine intergalactic material (see Kirkman et al., 2003).
The desire to improve our understanding of the LISM and measure D/H has provided the primary impetus behind stellar Lyα analyses. This work dates back to Copernicus (York and Rogerson, 1976; Dupree et al., 1977), which was the first ultraviolet astronomical satellite to provide high quality stellar Lyα spectra. Copernicus was followed by the long-lived International Ultraviolet Explorer (IUE) satellite, which also provided Lyα spectra that could be analyzed for LISM and D/H purposes (see Murthy et al., 1987; Diplas and Savage, 1994). However, it was the Goddard High Resolution Spectrograph (GHRS) instrument aboard HST that was the first UV spectrometer capable of fully resolving the D I and H I absorption line profiles. The GHRS spectrometer was replaced by the Space Telescope Imaging Spectrometer (STIS) in 1997.
The first Lyα analyses from HST data were for the lines of sight to Capella and Procyon (Linsky et al., 1993, 1995). However, the third analysis, which was of the α Cen data shown in Figure 5, presented a dilemma. The observed H I absorption is simply inconsistent with D I and other ISM absorption lines (Mg II, Fe II, etc.). The D I absorption and the other non-H I lines are centered at a heliocentric velocity of v = −18.0 ± 0.2 km s−1, and the widths of these lines suggest an interstellar temperature of T = 5400 ± 500 K. However, the H I absorption implies v = −15.8 ± 0.2 km s−1 and T = 8350 K. In other words, the H I absorption is broader than it should be, and it is also redshifted by 2.2 km s−1 from where it should be (Linsky and Wood, 1996). Interestingly enough, the H I redshift was also discerned earlier in much lower quality IUE spectra (Landsman et al., 1984).
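The temperatures quoted above are inferred from the widths of the absorption lines. In the standard decomposition used in ISM absorption-line analyses, the Doppler parameter b of each species combines thermal and turbulent broadening:

```latex
\begin{equation}
  b^{2} = \frac{2kT}{m} + \xi^{2} ,
\end{equation}
```

where m is the atomic mass and ξ the turbulent velocity. Because the thermal term scales as 1/m, comparing the widths of lines from species of very different mass (e.g., D I versus Mg II and Fe II) separates T from ξ and predicts how broad the H I absorption should be; it is the failure of that prediction for α Cen that signals the extra absorption component.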
Linsky and Wood (1996) interpreted the problem as being due to the presence of an extra H I absorption component that contributes no absorption to any of the weaker ISM lines. This was also one of the hypotheses suggested by Landsman et al. (1984) based on the IUE data. Two-component fits to the HST/GHRS data from Linsky and Wood (1996) suggest that the extra H I absorption can be explained by the existence of a redshifted H I absorption component with a temperature of T ≈ 30000 K and a column density (in cm−2) of log N(H I) ≈ 15.0. This temperature is much hotter than typical LISM material, and the column density is almost three orders of magnitude lower than the ISM H I column density towards α Cen. The low column density would explain why the absorption component is only seen in the H I line and not in any of the other ISM lines from atomic species with much lower abundances. However, the interpretation for this absorption component was initially a mystery.
Fortuitously, the α Cen Lyα analysis was being performed at about the same time as the first heliospheric models including neutrals in a self-consistent manner were being developed (see Section 2.3). One session of the 1995 IUGG (International Union of Geodesy and Geophysics) General Assembly brought together interstellar and heliospheric experts, and it was realized during that meeting that the heated heliospheric hydrogen predicted by the new heliospheric models had precisely the right properties to account for the extra α Cen absorption component. It was quickly realized that if hot hydrogen existed around the Sun, then it should exist around other solar-like stars as well, so the initial α Cen analysis suggested that the excess H I absorption could be partly due to astrospheric as well as heliospheric absorption (Linsky and Wood, 1996). Thus was born a new way to detect and study stellar winds.
More work was required to verify this interpretation of the Lyα data. Gayley et al. (1997) made the first direct comparison between the α Cen Lyα data and the predictions of various heliospheric models. This work demonstrated that heliospheric Lyα absorption can only account for the excess absorption seen on the red side of the Lyα line, and that astrospheric absorption is required to explain the excess absorption seen on the blue side of the line. A more refined interpretation for the α Cen Lyα absorption was therefore developed, and is shown schematically in Figure 6. The four middle panels in Figure 6 show what happens to the α Cen B Lyα line profile as it journeys from the star towards the Sun. The Lyα profile first has to traverse the hot hydrogen in the star's astrosphere, which erases the central part of the line. The Lyα emission then makes the long interstellar journey, resulting in additional absorption, including some from D I. Finally, the profile has to travel through the heliosphere, resulting in additional absorption on the red side of the line. Most of the astrospheric and heliospheric absorption is from material in the "hydrogen wall" region mentioned at the end of Section 2.3 (see Figure 4).
Figure 6: Schematic diagram showing how a stellar Lyα profile changes from its initial appearance at the star as it passes through various regions that absorb parts of the profile before it reaches an observer at Earth: the stellar astrosphere, the LISM, and finally the heliosphere (Wood et al., 2003b). The lower panel shows the actual observed Lyα profile of α Cen B. The upper solid line is the assumed stellar emission profile and the dashed line is the ISM absorption alone. The excess absorption is due to heliospheric H I (green shading) and astrospheric H I (red shading).
Why is the heliospheric absorption redshifted relative to the interstellar absorption? For the α Cen line of sight, which is roughly in the upwind direction relative to the LISM flow seen by the Sun, the heliospheric absorption is redshifted primarily because of the deceleration and deflection of interstellar material as it crosses the bow shock. Heliospheric models predict that heliospheric Lyα absorption should always be redshifted relative to the ISM absorption, even in downwind directions, although the physical explanation for the redshift in the downwind direction is more complicated (Izmodenov et al., 1999b; Wood et al., 2000b). Conversely, astrospheric absorption will always be blueshifted relative to the ISM absorption, since we are viewing that absorption from outside the astrosphere rather than inside. It is very fortunate that heliospheric and astrospheric material produce excess absorption on opposite sides of the Lyα line, as this makes it possible to identify the source of the absorption.
The bottom panel of Figure 6 shows the observed Lyα profile of α Cen B (Linsky and Wood, 1996). As mentioned above, the non-H I ISM lines observed towards α Cen suggest v = −18.0 ± 0.2 km s−1 and T = 5400 ± 500 K for the ISM material in this direction. The dashed line in the bottom panel of Figure 6 shows what the H I absorption looks like when forced to be consistent with these results. No matter what is assumed for the stellar line profile, and no matter what is assumed for the ISM H I column density, there is always excess absorption on both sides of the H I absorption feature that cannot be explained by ISM absorption. As suggested above, the red side excess is best interpreted as heliospheric absorption and the blue side excess is best interpreted as astrospheric absorption.
This example illustrates how heliospheric and astrospheric absorption is detected. The ISM H I absorption is estimated by forcing the H I fit parameters to be consistent with D I and other ISM lines. In many cases, this still leads to excellent fits to the data, but in some cases there is evidence for excess H I absorption on one or both sides of the line, indicating the presence of heliospheric and/or astrospheric absorption. The Lyα analyses can be simplified even further if one assumes that D/H = 1.5 × 10−5, in addition to forcing v and T to be consistent for D I and H I. This assumption should be valid for nearby stars, since recent work suggests that D/H ≈ 1.5 × 10−5 throughout the Local Bubble, with no evidence for variation (Linsky, 1998; Moos et al., 2002; Wood et al., 2004).
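Fixing the D/H ratio turns the cleanly measurable D I column density into a direct constraint on the ISM H I column, since

```latex
\begin{equation}
  N({\rm H\,I})_{\rm ISM} = \frac{N({\rm D\,I})}{1.5\times10^{-5}} ,
\end{equation}
```

so any H I absorption in excess of that implied by the measured N(D I) (and the shared velocity and temperature) must arise from non-ISM material, i.e., the heliosphere or an astrosphere.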
The heliospheric/astrospheric interpretation of the excess Lyα absorption has strong theoretical support, but additional evidence for the validity of this interpretation is still valuable. The best purely empirical demonstration that it is correct comes by comparing the Lyα absorption observed towards α Cen with that observed towards a distant M dwarf companion of the α Cen system called Proxima Cen (Wood et al., 2001). This comparison is made in Figure 7. The Lyα absorption profiles agree well on the red side of the line where the heliospheric absorption is located. However, the blue-side excess absorption seen towards α Cen is not seen towards Proxima Cen. This means that the blue-side excess absorption seen towards α Cen has to be from circumstellar material surrounding α Cen that does not extend as far as the distant companion Proxima Cen (∼ 12 000 AU away), consistent with the astrospheric interpretation. Apparently, Proxima Cen must have a weaker wind than the α Cen binary, which results in a much smaller astrosphere and much less astrospheric Lyα absorption. (The two α Cen stars are close enough that they will share the same astrosphere, and the astrospheric absorption will therefore be characteristic of the combined winds of both stars.) This example suggests how the astrospheric absorption might be used as a diagnostic for the mass loss rates of solar-like stars, which is a subject that is discussed in detail in Section 4.3.
Figure 7: Comparison between the Lyα spectra of α Cen B (green histogram) and Proxima Cen (red histogram) from Wood et al. (2001). The inferred ISM absorption is shown as a green dashed line. The α Cen and Proxima Cen data agree well on the red side of the H I absorption, but on the blue side the Proxima Cen data do not show the excess Lyα absorption seen toward α Cen (i.e., the astrospheric absorption).
4.2 Comparing heliospheric absorption with model predictions
In this article, we are more interested in the astrospheric Lyα absorption than the heliospheric absorption, since we are concerned with using the astrospheric detections to measure the properties of solar-like stellar winds. However, hydrodynamic models of the astrospheres are required to do this (see Section 4.3). In order to believe the results of these models, it is crucial to demonstrate that they are properly extrapolated from heliospheric models that can reproduce observed heliospheric absorption. Thus, the heliospheric absorption and efforts to reproduce it using models are reviewed in this section.
Gayley et al. (1997) were the first to clearly demonstrate that heliospheric models are capable of reproducing the observed excess Lyα absorption, at least on the red side of the line (see Section 4.1). However, the exact amount of absorption predicted depends on exactly what properties are assumed for the surrounding LISM. Since many of these LISM properties are not precisely known (see Section 2.2), there is the hope that the heliospheric absorption can be used as a diagnostic for the LISM properties, such that the heliospheric absorption is only reproduced when the correct LISM parameters are assumed. In practice, this has proven to be very difficult.
One problem has been finding enough detections of heliospheric Lyα absorption to provide proper constraints for the models, although the situation has been gradually improving. The first heliospheric absorption detection was for the α Cen line of sight described in detail in Section 4.1. The second detection was for the downwind line of sight towards Sirius (Izmodenov et al., 1999b), though the analysis of Hébrard et al. (1999) suggests that this detection is not very secure. The upwind line of sight towards 36 Oph (Wood et al., 2000a) provided the third detection, and there is a marginal detection for the crosswind line of sight towards HZ 43 (Kruk et al.). An HST archival Lyα survey by Wood et al. (2005b) resulted in four more detections (70 Oph, ξ Boo, 61 Vir, and HD 16…). Most recently, a detailed analysis of all reconstructed stellar Lyα lines based on HST data has found evidence for very broad, weak heliospheric absorption for three lines of sight (χ1 Ori, HD 28205, and HD 28568) observed within 20° of the downwind direction (Wood et al., 2007b). This brings the grand total of heliospheric absorption detections to 11. In addition, Lemoine et al. (2002) and Vidal-Madjar and Ferlet (2002) have claimed to find evidence for weak heliospheric absorption towards the Capella and G191-B2B lines of sight, but these claims rely on subtle statistical arguments rather than clearly visible excess absorption.
The heliospheric absorption detections can be supplemented with other lines of sight observed by HST that at least provide useful upper limits for the amount of heliospheric absorption. Figure 8 shows the Lyα absorption profiles observed for three of the lines of sight with detected heliospheric absorption (36 Oph, α Cen, and Sirius) and three additional lines of sight with nondetections (31 Com, β Cas, and ϵ Eri). The figure zooms in on the red sides of the profiles where the heliospheric absorption should be located. The θ angles in the figure indicate the angle of the line of sight with respect to the upwind direction of the interstellar flow. The six sight lines sample diverse orientation angles, ranging from the nearly upwind direction towards 36 Oph (θ = 12°) to the nearly downwind line of sight to ϵ Eri (θ = 148°). The dotted lines show the ISM absorption alone, constrained by forcing consistency with D I (see Section 4.1) — only 36 Oph, α Cen, and Sirius show excesses that reveal the presence of heliospheric absorption. A successful heliospheric model should accurately reproduce the amount of heliospheric absorption detected towards these three stars, while predicting no detectable absorption towards the other three.
Figure 8: Comparison of the H I absorption predicted by a four-fluid heliospheric model (dashed lines) and the observations, where the model heliospheric absorption is shown after having been added to the ISM absorption (dotted lines). Reasonably good agreement is observed, although there is a slight underprediction of absorption towards 36 Oph and Sirius, and a slight overprediction towards ϵ Eri (Wood et al., 2000b).
Figure 8 shows a model that agrees reasonably well with the data, despite a slight overprediction of absorption towards ϵ Eri and slight underpredictions for 36 Oph and Sirius (Wood et al., 2000b). This model assumes T = 8000 K, n(H I) = 0.14 cm−3, and n(H+) = 0.10 cm−3 for the ambient LISM, parameters well within the range of values inferred by other means (see Section 2.2). However, not all heliospheric models that assume these parameters find agreement with the data.
This brings us to the second problem with trying to infer ambient LISM parameters from the heliospheric Lyα data: Results currently seem to be very model dependent. It is mentioned in Section 2.3 just how difficult it is to properly consider neutrals in heliospheric models due to charge exchange processes driving the neutral H out of thermal equilibrium. The model used in Figure 8 is a "four-fluid" model of the type developed by Zank et al. (1996), where one fluid represents the protons, and three distinct fluids are used to represent the neutral hydrogen, one fluid for each distinct region where charge exchange occurs (inside the TS, between the TS and HP, and between the HP and BS). However, there are other approaches, such as the hybrid kinetic code of Müller et al. (2000) and the Monte Carlo kinetic code of Baranov and Malama (1993, 1995). The heliospheric absorption predicted by these kinetic models is not identical to that predicted by the four-fluid models (Wood et al., 2000b; Izmodenov et al., 2002). Currently the kinetic models seem to have more difficulty reproducing the observed heliospheric absorption than the four-fluid models, especially in downwind directions where they tend to predict too much absorption. However, the kinetic models should in principle yield more accurate velocity distributions for the neutral H than codes with multi-fluid approximations. A complex multi-component treatment of the protons in the heliosphere seems to improve the kinetic models' ability to fit the data (Malama et al., 2006; Wood et al., 2007b). Clearly more work is required to attain some sort of convergence in the models before LISM parameters can be unambiguously derived from the data.
However, a third difficulty with using the heliospheric absorption to infer ambient LISM properties is that the absorption may not be as sensitive to these properties as one might wish. Izmodenov et al. (2002) experiment with different LISM proton and neutral hydrogen densities and find surprisingly little change in the predicted Lyα absorption, at least in upwind and sidewind directions. This may be bad news for the diagnostic power of the heliospheric absorption, but it is actually good news for the astrospheric analyses that are described in Section 4.3. In using astrospheric models to help extract stellar mass loss rates from the astrospheric absorption, one has to assume that the LISM does not vary much from one location to another. The results of Izmodenov et al. (2002) suggest that the models are not very sensitive to the modest variations in LISM properties that one might expect to be present in the solar neighborhood.
Finally, there are some aspects of heliospheric physics that are only beginning to be considered in the models. The models mentioned previously do not consider either the heliospheric magnetic field carried outwards by the solar wind (see Nerney et al., 1991), or the poorly known interstellar magnetic field. Not only is a proper MHD treatment of the heliosphere difficult, but the problem is inherently three dimensional, whereas the models mentioned previously assume a 2D axisymmetric geometry. Using a 2D approach, Florinski et al. (2004) find that a strong ISM field oriented parallel to the LISM flow does not yield significantly different predictions for heliospheric Lyα absorption than models without magnetic fields. However, 3D models are required to include the heliospheric field, and 3D models are also required to consider ISM field orientations other than parallel to the flow vector.
Initial 3D models developed to model these effects (see Linde et al., 1998) did not include neutrals in a self-consistent manner. Dealing with both neutrals and magnetic fields properly in a 3D model is a very formidable problem. Nevertheless, the 3D models without neutrals do suggest that MHD effects could in principle lead to changes in the heliospheric structure that could affect the Lyα absorption. Examples include the unstable jet sheet and north-south asymmetries predicted by Opher et al. (2003, 2004, 2006). In addition, Ratkiewicz et al. (1998) find that if the LISM magnetic field is skewed with respect to the ISM flow, the effective nose of the heliosphere could be significantly shifted from the upwind direction. Even in the absence of magnetic fields, latitudinal variations in solar wind properties could also cause asymmetries in the heliosphere (Pauls and Zank, 1997). It is possible that all these asymmetries suggested by 3D models could be detectable in Lyα absorption. However, neutrals must be included properly in the models to make clearer predictions. Only very recently has this been done (Izmodenov et al., 2005; Pogorelov et al., 2006). Wood et al. (2007a) have made the first comparison between such models and the Lyα data, finding that the absorption predicted by the models is modestly affected by the assumed LISM field strength and orientation, allowing some constraints on these quantities to be inferred from the data. However, with the absorption being modestly dependent on so many uncertain LISM properties, including particle densities, it is probably unreasonable to expect analysis of the absorption by itself to yield a single set of acceptable LISM parameters.
4.3 Measuring stellar mass loss rates
The α Cen line of sight provided the first detection of astrospheric Lyα absorption (Linsky and Wood, 1996), but the use of models by Gayley et al. (1997) was necessary to clearly demonstrate that heliospheric absorption could not explain the blue-side excess as well as the stronger red-side excess (see Figure 6). By that time, there were already two other lines of sight, ϵ Ind and λ And, with excess Lyα absorption found only on the blue side of the absorption line, which clearly could not be heliospheric in origin (Wood et al., 1996). This excess absorption was immediately interpreted as being solely astrospheric. Thus, a case could be made for either α Cen AB or ϵ Ind/λ And being the first stars with detected solar-like coronal winds.
There are now a total of 13 published detections of astrospheric absorption. These detections are listed in Table 1, along with Proxima Cen, whose nondetection is used to derive an upper limit for its mass loss rate (see below).
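The physical basis for extracting mass loss rates from astrospheric absorption is that the size of an astrosphere is set by the balance between the wind ram pressure and the pressure of the surrounding ISM. As a rough sketch (ignoring magnetic and thermal pressures), equating the wind ram pressure ρw vw² = Ṁ vw / 4πR² to the ISM ram pressure gives an upwind standoff distance of

```latex
\begin{equation}
  R \approx \sqrt{\frac{\dot M\, v_{w}}{4\pi\, \rho_{\rm ISM}\, v_{\rm ISM}^{2}}} ,
\end{equation}
```

so a larger Ṁ inflates the astrosphere and its hydrogen wall, increasing the amount of astrospheric Lyα absorption. The detailed conversion from observed absorption to Ṁ still requires full hydrodynamic models.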
Table 1:
Published Astrospheric Detections and Mass Loss Measurements
Columns: Spectral Type, d (pc), Surf. Area (A⊙), Log L_X, V_ISM (km s−1), θ (deg), Ṁ (Ṁ⊙)
α Cen: G2 V+K0 V
Prox Cen: M5.5 V
ϵ Eri: K1 V
61 Cyg A: K5 V
ϵ Ind: K5 V
EV Lac: M3.5 V
70 Oph: K0 V+K5 V
36 Oph: K1 V+K1 V
ξ Boo: G8 V+K4 V
61 Vir (a): G5 V
δ Eri: K0 IV
λ And (a): G8 IV-III+M V
DK UMa (a): G4 III-IV
HD 128987
(Numeric entries are not reproduced here.)
(a) Uncertain detection.
References: (1) Linsky and Wood (1996). (2) Gayley et al. (1997). (3) Wood et al. (2001). (4) Dring et al. (1997). (5) Wood and Linsky (1998). (6) Wood et al. (1996). (7) Müller et al. (2001a). (8) Müller et al. (2001b). (9) Wood et al. (2000a). (10) Wood et al. (2002). (11) Wood et al. (2005b). (12) Wood et al. (2005a).
Some of these stars are binaries where the individual components are clearly close enough that both stars will reside within the same astrosphere, meaning that the observed astrospheric absorption is indicative of the combined mass loss of both stars. For these binaries (α Cen, 36 Oph, and λ And), the spectral types of both stars are listed in Table 1 and the listed stellar surface areas are the combined areas of both stars. In contrast, 61 Cyg A's companion is far enough away that 61 Cyg A should have an astrosphere all to itself, so it is listed alone in Table 1.
Three detections are flagged as uncertain in Table 1 for various reasons that will not be discussed here (see Wood et al., 2002, 2005b). Results regarding these three lines of sight should be considered with caution. There was originally another uncertain astrospheric detection, 40 Eri A (Wood and Linsky, 1998; Wood et al., 2002), which has now been dropped entirely from the list of astrospheric detections. Since astrospheric hydrogen scatters stellar Lyα photons, solar-like stars should in principle be surrounded by faint nebulae of astrospheric Lyα emission. There was an unsuccessful attempt to detect this emission surrounding 40 Eri A, and the lack of success was used to argue that the tentative detection of astrospheric absorption for 40 Eri A could not be correct (Wood et al., 2003a).
The astrospheric absorption detections represent an indirect detection of solar-like winds from the observed stars, and the amount of absorption observed has diagnostic power. Frisch (1993) envisioned using astrospheres as probes for the interstellar medium, but so far work has focused on the use of the astrospheric absorption for estimating stellar mass loss rates. The higher the stellar mass loss rate, the larger the astrosphere will be, and the more Lyα absorption it will produce. However, extracting quantitative mass loss measurements from the data requires the assistance of astrospheric models analogous to the heliospheric models discussed in Section 2.3.
The first step in modeling an astrosphere is to determine what the interstellar wind is like in the rest frame of the star. The proper motions and radial velocities of the nearby stars in Table 1 are known very accurately, as are their distances (see Perryman et al., 1997). The ISM flow vector is generally assumed to be the same as the LIC flow vector seen by the Sun (Lallement et al., 1995). Although multiple ISM velocity components are often seen towards even very nearby stars, the components are never separated by more than 5–10 km s−1, meaning that the LIC vector should be a reasonable approximation for these other clouds. An example is the nearby "G" cloud. Since α Cen, 70 Oph, and 36 Oph are known to lie within this cloud rather than the LIC, the "G" cloud vector from Lallement and Bertin (1992) is used instead of the LIC vector, but this does not change things very much. In any case, Table 1 lists the ISM wind velocity seen by each star (V_ISM). The θ value in Table 1 indicates the angle between our line of sight to the star and the upwind direction of the ISM flow seen by the star.
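The rest-frame transformation described above is simple vector arithmetic. The sketch below (NumPy) uses the measured LIC flow speed of ~26 km s−1, but the stellar space velocity, the sight-line direction, and the convention that θ is measured between the star-to-Sun direction and the upwind direction are illustrative assumptions, not data for any star in Table 1.

```python
import numpy as np

# Sketch of the first modeling step: transform the LIC flow into a star's
# rest frame. All vectors are heliocentric Cartesian (km/s); the stellar
# numbers below are illustrative placeholders, not published values.
v_lic = np.array([-26.0, 0.0, 0.0])    # LIC wind seen by the Sun (~26 km/s)
v_star = np.array([-20.0, 15.0, 5.0])  # hypothetical stellar space velocity
star_dir = np.array([0.5, -0.7, 0.3])  # direction from Sun toward the star
star_dir = star_dir / np.linalg.norm(star_dir)

# ISM wind velocity in the star's rest frame; its magnitude is V_ISM
v_ism_star = v_lic - v_star
v_ism = np.linalg.norm(v_ism_star)

# Upwind direction is opposite the wind velocity; theta is the angle
# between the star->Sun sight line and that upwind direction.
upwind = -v_ism_star / v_ism
sun_dir = -star_dir
theta = np.degrees(np.arccos(np.clip(np.dot(sun_dir, upwind), -1.0, 1.0)))

print(f"V_ISM = {v_ism:.1f} km/s, theta = {theta:.1f} deg")
```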
Astrospheric models are extrapolated from a heliospheric model that fits the observed heliospheric Lyα absorption. The principal heliospheric model used in the past for this purpose is the four-fluid model described in Section 4.2, which is the source of the predicted heliospheric absorption in Figure 8. To convert this to an astrospheric model, the model is recomputed using the same wind and ISM input parameters, except for the ISM wind speed, which is changed to the V_ISM value appropriate for the star (see Table 1). The stellar wind proton density is varied to experiment with mass loss rates different from that of the Sun. Figure 9 shows four models of the α Cen astrosphere computed assuming different mass loss rates in the range Ṁ = 0.2–2 Ṁ⊙ (Wood et al., 2001). The model astrospheres naturally become larger as the mass loss is increased. The figure shows the H I density distribution, but the models also provide temperature distributions and flow patterns. From these models, astrospheric absorption can be computed for the θ = 79° line of sight to the star, for comparison with the data. Figure 10 shows this comparison for α Cen, along with similar comparisons for five other stars in Table 1 (Wood et al., 2002). The model with Ṁ = 2 Ṁ⊙ agrees best with the data, so this is the estimated mass loss rate of α Cen. Figure 11 shows similar data-model comparisons for six additional stars from Table 1 (Wood et al., 2005a).
Figure 9:
Distribution of H I density predicted by hydrodynamic models of the Alpha/Proxima Cen astrospheres, assuming stellar mass loss rates of (from top to bottom) 0.2 Ṁ⊙, 0.5 Ṁ⊙, 1.0 Ṁ⊙, and 2.0 Ṁ⊙ (Wood et al., 2001). The distance scale is in AU. Streamlines show the H I flow pattern.
Figure 10:
Closeups of the blue side of the H I Lyα absorption lines for six stars with detected astrospheric absorption, plotted on a heliocentric velocity scale. Narrow D I ISM absorption is visible in all the spectra just blueward of the saturated H I absorption. Green dashed lines indicate the interstellar absorption alone, and blue lines in each panel show the additional astrospheric absorption predicted by hydrodynamic models of the astrospheres assuming various mass loss rates (Wood et al., 2002).
Figure 11:
A figure analogous to Figure 10, but for six other lines of sight (Wood et al., 2005a).
Table 1 lists all mass loss rates measured in this manner, and Figures 12 and 13 show the astrospheric models that lead to the best fits to the data. The astrospheres vary greatly in size, both due to different mass loss rates and to different ISM wind speeds. It is worth noting that the largest of these astrospheres (ϵ Eri, 70 Oph) would be comparable in size to the full Moon in the night sky, if we could see them.
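The full-Moon comparison follows from small-angle arithmetic. In this sketch, the ~5000 AU astrospheric extent is an assumed round number for illustration (the actual model dimensions are those of Wood et al., 2002), and 3.2 pc is roughly the distance of ϵ Eri.

```python
import math

# Illustrative small-angle check of the full-Moon comparison in the text.
# The ~5000 AU astrospheric extent is an assumed round number, not a
# published model dimension.
AU_PER_PC = 206265.0   # 1 pc = 206265 AU (definition of the parsec)

extent_au = 5000.0     # assumed astrospheric diameter
d_pc = 3.2             # approximate distance of eps Eri

theta_rad = extent_au / (d_pc * AU_PER_PC)
theta_deg = math.degrees(theta_rad)
print(f"angular size ~ {theta_deg:.2f} deg (full Moon ~ 0.5 deg)")
```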
Figure 12:
Maps of H I density from hydrodynamic models of stellar astrospheres (Wood et al., 2002). The models shown are the ones that lead to the best fits to the data in Figure 10. The distance scale is in AU. The star is at coordinate (0,0) and the ISM wind is from the right. The dashed lines indicate the Sun-star line of sight.
Figure 13:
Maps of H I density from hydrodynamic models of stellar astrospheres (Wood et al., 2005a), analogous to Figure 12. The models shown are the ones that lead to the best fits to the data in Figure 11.
Given the model dependence and other difficulties described in Section 4.2, one might wonder whether mass loss rates derived using the astrospheric models are at all reliable. However, the crucial point is that the heliospheric model from which the astrospheric models are extrapolated successfully reproduces the heliospheric absorption. For the astrospheric modeling purposes described here, this is more important than whether the input parameters of that model are actually correct. In essence, the observed heliospheric absorption calibrates the models before they are applied to astrospheres, making the mass loss measurement technique semi-empirical. Nevertheless, the mass loss rates measured using the astrospheric Lyα absorption technique should not be considered highly precise. Uncertainties in the mass loss rates are probably of order a factor of 2 (Wood et al., 2002).
Other assumptions are implicitly made in this mass loss measurement procedure. One is that the LISM does not vary greatly from one star to the next. However, for these very nearby stars LISM variations should be modest, and as discussed in Section 4.2 modest variations probably do not greatly affect the predicted astrospheric absorption. Another assumption is that the stellar wind speeds of the stars in Table 1 are similar to that of the Sun. An argument in favor of this assumption is that, based on past experience, stellar wind velocities are generally not very different from the surface escape speeds, regardless of the type of wind or star one is considering (see Section 2.1). This includes the Sun, which has a surface escape speed of 619 km s−1, very similar to observed solar wind velocities. All the main sequence stars in Table 1 have similar escape speeds, so one might expect them to have similar wind velocities. However, the magneto-centrifugal wind acceleration models of Holzwarth and Jardine (2007) suggest that rapidly rotating stars might have wind speeds much faster than their surface escape speeds. Thus, the appropriateness of the constant wind speed assumption is still a matter for debate. The one star in Table 1 with an escape velocity significantly different from the Sun's is λ And. The G8 IV-III primary that surely dominates the wind from this binary is not a main sequence star, and its surface escape speed is about 3 times lower than the Sun's. Perhaps in the future the mass loss rate listed in Table 1 should be remeasured assuming a lower wind velocity.
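The escape-speed argument can be checked directly from v_esc = (2GM/R)^(1/2). Solar values reproduce the quoted 619 km s−1; the λ And entry below uses assumed, illustrative subgiant parameters (roughly a solar mass at several solar radii), not measured values for that star.

```python
import math

# Surface escape speed v_esc = sqrt(2GM/R). Solar inputs give ~618 km/s,
# matching the 619 km/s quoted in the text. The lambda And parameters are
# rough illustrative assumptions for a G8 IV-III subgiant.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

def v_esc_kms(mass_msun, radius_rsun):
    """Escape speed at the stellar surface, in km/s."""
    return math.sqrt(2 * G * mass_msun * M_SUN / (radius_rsun * R_SUN)) / 1e3

v_sun = v_esc_kms(1.0, 1.0)
v_lam_and = v_esc_kms(1.0, 9.0)  # assumed subgiant mass and radius

print(f"Sun: {v_sun:.0f} km/s; lambda And primary (assumed): "
      f"{v_lam_and:.0f} km/s (~{v_sun / v_lam_and:.1f}x lower)")
```

With these assumed parameters the subgiant's escape speed comes out about 3 times lower than the Sun's, consistent with the factor quoted in the text.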
In addition to the mass loss measurements for stars with detected astrospheric absorption, Table 1 also lists an upper limit derived from a nondetection for Proxima Cen. Upper limits cannot be computed for most stars with nondetections, because a likely interpretation for most nondetections is that the star is surrounded by hot, fully ionized ISM material, rather than partially neutral gas like that which surrounds the Sun. The Local Bubble in which the Sun is located is mostly filled with this hot material (see Section 2.2). The Sun just happens to be located within one of the cooler, partially neutral clouds that lie within the Bubble. For Proxima Cen, the hot ISM explanation for the astrospheric nondetection can be discarded, because Proxima Cen's distant companion α Cen has detected astrospheric absorption, proving that Proxima Cen is not located within the hot ISM. Thus, for Proxima Cen a meaningful upper limit to the stellar mass loss rate can be derived.
No mass loss measurement is reported for HD 128987 in Table 1, since no astrospheric model seems to be able to reproduce the apparent astrospheric absorption, bringing into question the astrospheric interpretation of the absorption for this star (Wood et al., 2005a). The fundamental problem is the very low LISM wind speed seen by this star (VISM = 8 km s−1; see Table 1). This low speed is not sufficient to result in much deceleration and heating of neutral H within the astrosphere, thereby yielding little predicted absorption. Thus, the apparent astrospheric detection for HD 128987 remains a mystery at this time.
5 Implications for the Sun and Solar System
5.1 Inferring the mass loss history of the Sun
In addition to stellar mass loss rates, Table 1 lists coronal X-ray luminosities from the ROSAT PSPC instrument (see Hünsch et al., 1999; Schmitt and Liefke, 2004). Solar-like winds have their origins in stellar coronae (see Section 2.1), so one might expect the winds to be correlated with coronal properties such as X-ray emission. Thus, in Figure 14 the mass loss rates measured from the astrospheric Lyα absorption (per unit surface area) are plotted versus X-ray surface fluxes (Wood et al., 2005a). For the main sequence stars mass loss increases with coronal activity. A power law is fitted to these GK stars in Figure 14. Quantitatively, this relation is
$$\dot M \propto F_{\rm{X}}^{1.34 \pm 0.18}. \qquad (1)$$
The saturation line in Figure 14 indicates the maximum X-ray flux observed from solar-like stars (Güdel et al., 1997).
Figure 14:
Measured mass loss rates (per unit surface area) plotted versus X-ray surface flux (Wood et al., 2005a). The filled and open circles are main sequence and evolved stars, respectively. For the main sequence stars with F_X < 8 × 10^5 ergs cm−2 s−1, mass loss appears to increase with coronal activity, so a power law has been fitted to these stars, and the shaded region is the estimated uncertainty in the fit. The saturation line represents the maximum F_X value observed from solar-like stars.
It is interesting to note that during the solar cycle, the Sun's wind strength is actually anticorrelated with its X-ray flux. The solar wind is weaker at solar maximum than at solar minimum despite coronal X-ray fluxes being much higher (Lazarus and McNutt Jr, 1990). This is presumably due to the fact that winds are more associated with the large scale dipole component of the solar magnetic field instead of the small scale active regions responsible for most of the Sun's X-ray emission. The dipole field actually weakens at solar maximum along with the wind. However, the interior magnetic dynamo is ultimately responsible for both the small scale and large scale fields, so as a whole both field components should increase with increasing dynamo activity, consistent with the mass loss/activity correlation in Figure 14 (Schrijver et al., 2003).
The evolved stars are clearly inconsistent with the main sequence stars in Figure 14. The very active coronae of λ And and DK UMa produce surprisingly weak winds, though it should be noted that both of these astrospheric detections are flagged as questionable in Table 1. There are three main sequence stars with F_X > 8 × 10^5 ergs cm−2 s−1, whose low mass loss measurements are not consistent with the wind-activity correlation that seems to exist for the low activity main sequence stars. Two of these stars (Proxima Cen and EV Lac) are tiny M dwarfs. If these were the only discrepant data points, one could perhaps argue that the discrepancy is due to these M dwarfs being significantly less solar-like than the G and K dwarfs that make up the rest of the main sequence sample. However, this interpretation is invalidated by the third discrepant measurement, that of ξ Boo. Being a binary with two rather solar-like stars (G8 V+K4 V), there is no easy way to dismiss the ξ Boo measurement, which implies that the power law relation does not extend to high activity levels for any type of star. More mass loss measurements of active stars would clearly be helpful to better define the characteristics of solar-like winds at high coronal activity levels.
Based on the available data, the mass-loss/activity relation appears to change its character at F_X ≈ 8 × 10^5 ergs cm−2 s−1. One possible explanation concerns the existence of polar spots on very active stars. Low activity stars presumably have starspot patterns like the Sun's, with spots confined to low latitudes. However, for very active stars not only are spots detected at high latitudes, but a majority of these stars show evidence for large polar spots (Strassmeier, 2002). The existence of high latitude and polar spots represents a fundamental change in the stellar magnetic geometry (Schrijver and Title, 2001), and it is possible that this dramatic change in magnetic field structure could affect the winds emanating from these stars. Perhaps stars with polar spots have a magnetic field with a strong dipolar component that could envelop the entire star and inhibit stellar outflows, thereby explaining why active stars have weaker winds than the mass-loss/activity relation of less active main sequence stars would predict. For ξ Boo A, high latitude spots of some sort have been detected (Toner and Gray, 1988). Petit et al. (2005) have detected a strong global dipole field component for ξ Boo A, consistent with the picture presented above. They also detected a large-scale toroidal field component, which would have no solar analog whatsoever, consistent with the idea that very active solar-like stars have significantly different magnetic field structures from those of the Sun and other low-activity stars.
Figure 14 illustrates how mass loss varies with coronal activity. But what about age? There is a known connection between activity and age, for the following reasons. The gravitational contraction of interstellar clouds that results in star formation leads to rapid rotation for young, newly born stars. This rapid rotation leads to vigorous dynamo activity and therefore high surface magnetic activity and high coronal X-ray emission. However, the magnetic fields of these young, rapidly rotating stars drag against their winds, and this magnetic braking gradually slows the stellar rotation. This in turn leads to lower activity levels and X-ray fluxes. An enormous amount of effort has been expended in the past few decades to observationally establish exactly how rotation relates to stellar age (see Skumanich, 1972; Soderblom et al., 1993) and how rotation relates to stellar activity, which is most easily measured through X-ray emission (see Pallavicini et al., 1981; Walter, 1982, 1983; Caillault and Helfand, 1985; Micela et al., 1985; Fleming et al., 1989; Stauffer et al., 1994). For solar-like stars, Ayres (1997) finds
$$V_{\rm rot} \propto t^{-0.6 \pm 0.1} \qquad (2)$$
for the rotation/age relation, while X-ray flux and rotation are related by
$$F_{\rm X} \propto V_{\rm rot}^{2.9 \pm 0.3}. \qquad (3)$$
Equations (1), (2), and (3) can be combined to obtain the following relation between mass loss and age for solar-like stars:
$$\dot M \propto t^{-2.33 \pm 0.55}. \qquad (4)$$
This is the first empirically determined mass loss evolution law for solar-like stars, and Figure 15 shows what this relation implies for the mass loss history of the Sun in particular (Wood et al., 2005a).
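Equation (4) follows by chaining the three power laws: Ṁ ∝ F_X^a, F_X ∝ V_rot^b, and V_rot ∝ t^c give Ṁ ∝ t^(abc), with the relative uncertainties of the exponents added in quadrature. A quick numerical check:

```python
import math

# Combine Eqs. (1)-(3): Mdot ∝ F_X^a, F_X ∝ V_rot^b, V_rot ∝ t^c
# gives Mdot ∝ t^(a*b*c). Because only products of powers are involved,
# the relative errors of the exponents add in quadrature.
a, da = 1.34, 0.18
b, db = 2.9, 0.3
c, dc = -0.6, 0.1

p = a * b * c  # combined exponent
dp = abs(p) * math.sqrt((da / a)**2 + (db / b)**2 + (dc / abs(c))**2)

print(f"Mdot \u221d t^({p:.2f} \u00b1 {dp:.2f})")  # → Mdot ∝ t^(-2.33 ± 0.55)
```

This reproduces the exponent and uncertainty quoted in Equation (4).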
Figure 15:
The mass loss history of the Sun suggested by the power law relation from Figure 14 (Wood et al., 2005a). The low mass-loss rate measurement for ξ Boo implies that the wind weakens at t ≈ 0.7 Gyr as one goes back in time.
The truncation of the power law relation in Figure 14 leads to the truncation of the mass-loss/age relation in Figure 15 at about t = 0.7 Gyr. The location of ξ Boo is shown in order to infer what the solar wind might have been like at earlier times. Despite the high activity cutoff, the mass loss measurements obtained so far clearly suggest that winds are generally stronger for young solar-like stars, and as a consequence the solar wind was presumably much stronger early in the Sun's lifetime. This has many important implications, some of which are discussed in Sections 5.2, 5.3, and 5.4.
5.2 Magnetic braking
The magnetic braking process by which stars like the Sun shed angular momentum is the reason why stellar activity decreases with time and why the solar mass loss/age relation in Figure 15 can be inferred from the mass loss/activity relation in Figure 14 (see Section 5.1). Winds play an important role in this process, because it is the wind that the stellar field drags against in slowing down the star's rotation. The efficiency of this braking is clearly related to the density of the wind, and therefore to the mass loss rate. Thus, the wind evolution law in Equation (4) has consequences for how the effectiveness of the magnetic braking changes with time. Models for magnetic braking suggest relations of the form
$$\frac{\dot \Omega}{\Omega} \propto \frac{\dot M}{M}\left(\frac{R_{\rm A}}{R}\right)^{m}, \qquad (5)$$
where Ω is the angular rotation rate and RA is the Alfvén radius (Weber and Davis Jr, 1967; Stepień, 1988; Gaidos et al., 2000). The exponent m is a number between 0 and 2, where m = 2 corresponds to a purely radial magnetic field. Mestel (1984) claims that more reasonable magnetic geometries suggest m = 0–1. The Alfvén radius is
$$R_{\rm A} = \sqrt{\frac{V_{\rm w}\dot M}{B_{\rm r}^{2}}}, \qquad (6)$$
where V_w is the stellar wind speed and B_r is the disk-averaged radial magnetic field.
For a star like the Sun, the star's mass and radius are relatively invariant. If V_w does not vary with time, which was also an assumption used in the derivation of mass loss rates from the astrospheric absorption (see Section 4.3), then the time dependences of all quantities in Equations (5) and (6) are known except for that of B_r. If B_r is expressed as a power law, B_r ∝ t^α, Equations (4), (5), and (6) combined suggest
$$\alpha = 1/m - (1.17 \pm 0.28)(m + 2)/m. \qquad (7)$$
Assuming that m is in the physically allowable range of m = 0–2 yields the upper limit α < −1.3, while the more likely range of m = 0–1 suggested by Mestel (1984) implies α < −1.7. In any case, the empirical mass loss evolution law in Equation (4) is consistent with theoretical descriptions of magnetic braking only if disk-averaged stellar magnetic fields decline at least as fast as t^−1.3.
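These limits can be verified by evaluating Equation (7) at the boundary values. Since α(m) is least negative at the largest allowed m, each upper limit comes from the top of the m range together with the lower error bound on the coefficient, 1.17 − 0.28 = 0.89:

```python
# Numerical check of Eq. (7) and the quoted limits on the magnetic field
# decay exponent alpha (B_r ∝ t^alpha).

def alpha(m, c=1.17):
    """Eq. (7): alpha = 1/m - c*(m+2)/m, with c = 1.17 +/- 0.28."""
    return 1.0 / m - c * (m + 2.0) / m

# alpha(m) = -c + (1 - 2c)/m is least negative at the largest m (for c > 0.5),
# so upper limits use the top of the m range and the lower bound c = 0.89.
print(round(alpha(2.0, c=0.89), 2))  # m = 0-2 range: alpha < ~ -1.3
print(round(alpha(1.0, c=0.89), 2))  # m = 0-1 range: alpha < ~ -1.7
```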
5.3 The Faint Young Sun problem
Evolutionary models of the solar interior lead to the prediction that the Sun should have been about 25% fainter 3.8 Gyr ago (Gough, 1981; Bahcall et al., 2001). This is a fairly robust result that has been known for a long time. However, this creates significant difficulties for those who study the climate history of planets in our solar system. With the young Sun this faint, the planets should have been significantly colder than they are now. On Earth and Mars, surface water should have been frozen, but this contradicts geologic evidence that abundant water existed and flowed on the surfaces of both of these planets. This is the so-called "Faint Young Sun" paradox (Sagan and Mullen, 1972; Kasting, 1991). Most attempts to solve this problem have appealed to increased amounts of greenhouse gases in the early atmospheres of Earth and Mars, which allowed the surface temperatures to remain similar to present-day temperatures despite a fainter Sun (see Walker, 1985; Kasting and Ackerman, 1986; Kasting, 1991).
However, a couple of theories have been advanced that involve the solar wind. One of these is simply that the young solar wind was strong enough to significantly decrease the mass of the Sun, meaning that the young Sun was more massive and thus more luminous than standard solar evolutionary models predict (Guzik et al., 1987; Sackmann and Boothroyd, 2003). If the Sun was only about 2% more massive 3.8 Gyr ago, then the Sun would still have been bright enough to maintain sufficiently warm temperatures on Earth and Mars. The mass loss evolution law derived from stellar wind measurements in Equation (4) does indeed suggest that the solar wind was stronger in the past. Unfortunately, it was not strong enough. If Equation (4) is correct, the Sun could not have been more than 0.2% more massive 3.8 Gyr ago. Thus, this theory ultimately is not supported by the stellar wind measurements (Minton and Malhotra, 2007).
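The shortfall can be roughed out by integrating Equation (4) over the Sun's history. The sketch below assumes a present-day solar mass loss rate of ~2 × 10^−14 M⊙ yr−1, a solar age of 4.6 Gyr, and the power law holding back only to the t ≈ 0.7 Gyr truncation of Figure 15; all of these are illustrative assumptions, not values taken from the text.

```python
# Rough integral of Eq. (4) to estimate the total mass the Sun could have
# shed since ~3.9 Gyr ago. Assumed inputs (for illustration only):
MDOT_NOW = 2e-14   # Msun/yr, approximate present-day solar mass loss rate
T_NOW = 4.6e9      # yr, solar age
T_START = 0.7e9    # yr, power-law truncation (Figure 15)
P = -2.33          # central exponent from Eq. (4)

# Mdot(t) = MDOT_NOW * (t/T_NOW)^P, integrated analytically over [T_START, T_NOW]
mass_lost = MDOT_NOW * T_NOW / (P + 1) * (1.0 - (T_START / T_NOW)**(P + 1))

print(f"mass lost: {mass_lost:.1e} Msun (~{100 * mass_lost:.2f}% of a solar mass)")
```

The central exponent yields roughly 0.1% of a solar mass, far short of the ~2% needed, consistent with the conclusion of Minton and Malhotra (2007).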
However, a less direct way has been proposed in which a stronger young solar wind could contribute to a solution of the Faint Young Sun problem. There have been several claims that cosmic rays have important effects on the Earth's climate by stimulating cloud formation, which is believed to generally cool the Earth's atmosphere (see Svensmark, 1998). However, cosmic rays are modulated by the solar wind, and a stronger solar wind would reduce the flux of cosmic rays into the Earth's atmosphere. Thus, the idea is that the stronger wind of the young Sun led to lower cosmic ray fluxes, helping the Earth (and perhaps Mars) to maintain warm temperatures (Shaviv, 2003). However, the whole idea of cosmic rays influencing the terrestrial climate to this degree remains very controversial (see Carslaw et al., 2002).
5.4 Erosion of planetary atmospheres
Even if the solar wind plays no role in the solution to the Faint Young Sun paradox described in Section 5.3, the solar wind may still have had a dramatic effect on planets in the past through the eroding effects of the wind on planetary atmospheres. Solar wind sputtering processes have been proposed as having had important effects on the atmospheres of both Venus and Titan (Chassefière, 1997; Lammer et al., 2000), but it is the Martian atmosphere that may have been the most dramatically affected by the solar wind. The connection between the Martian atmosphere and the question of whether life once existed on the planet makes the Mars example even more interesting.
Mars apparently had running water on its surface in the distant past, but it is now very dry. Isotopic evidence strongly suggests that the Martian atmosphere was once much thicker and more conducive to the existence of surface water (see Carr, 1996; Jakosky and Phillips, 2001). What caused the loss of the early Martian atmosphere, and presumably the loss of surface water and habitability as well? Solar wind sputtering is a leading candidate for the cause of these changes (Luhmann et al., 1992; Perez de Tejada, 1992; Jakosky et al., 1994; Kass and Yung, 1995; Lundin, 2001). Unlike Earth, the Martian atmosphere is not currently protected from the solar wind by a strong magnetosphere. There is evidence that Mars once had a magnetic field, but it disappeared at least 3.9 Gyr ago (Acuña et al., 1999). At that point, Mars would have been exposed to a solar wind about 80 times stronger than the current wind according to Figure 15. The stronger solar wind makes wind erosion an even more likely culprit behind the disappearance of most of the Martian atmosphere.
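The "about 80 times stronger" figure is just Equation (4) evaluated between the present solar age (~4.6 Gyr) and the earliest epoch the relation covers (~0.7 Gyr, roughly when Mars lost its magnetic field ~3.9 Gyr ago):

```python
# Check of the "~80 times stronger" figure from Eq. (4):
# Mdot(t) / Mdot(now) = (t / t_now)^(-2.33), evaluated at t = 0.7 Gyr.
ratio = (4.6 / 0.7) ** 2.33
print(f"Mdot(0.7 Gyr) / Mdot(now) ~ {ratio:.0f}")  # → ~80
```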
Studies of Martian atmosphere evolution that consider the wind evolution law inferred from the astrospheric measurements, along with modern estimates of the history of solar X-ray and UV fluxes, have already begun (Guinan et al., 2003; Lammer et al., 2003; Ribas et al., 2005). Investigations into how the solar wind may have affected other planets will undoubtedly also be an active area of future research, stimulated in part by the discovery of planets around other stars. Many of these extrasolar planets orbit very close to their stars, which means that they will be exposed to wind fluxes orders of magnitude higher than the Earth or Mars have ever seen. Knowledge of the evolution of solar-like winds is crucial for understanding the atmospheric evolution of these extrasolar planets, and efforts are already underway to try to model wind erosion effects on such planets (Grießmeier et al., 2004; Preusse et al., 2005).
The study of solar-like winds around other stars is not an easy one, due to the difficulty in detecting these winds. However, such measurements are the only way to empirically infer the history of the solar wind and to assess its potential effects on atmospheres of planets in our solar system. At this time, the indirect astrospheric Lyα absorption technique is the only way to detect weak solar-like stellar winds. Radio and X-ray observations can place limits on wind fluxes (see Section 3), but there is no reason to believe that these diagnostics will approach the sensitivity level of the Lyα diagnostic in the near future.
Currently there are only about a dozen mass loss measurements for solar-like stars, a rather small data sample. Additional astrospheric detections are required to better constrain relations between stellar winds, activity, and age. Unfortunately, the STIS instrument on HST failed in August 2004, and at the time of this writing is still unavailable. Furthermore, there is no future mission even in the planning stages that would be capable of the high resolution UV spectrometry necessary to detect absorption signatures of stellar astrospheres. Thus, the future of this subject area is very uncertain, at least on the data side. On the theory side, there is hope that improvements in our ability to numerically model the heliosphere and astrospheres will allow more precise analyses of existing data, and in this way improve our understanding of solar-like stellar winds and astrospheres.
I would like to thank my primary collaborators Jeff Linsky, Hans Müller, Gary Zank, and Vlad Izmodenov for their contributions to the astrospheric Lyα research. I would also like to thank David McComas and Rosine Lallement for providing figures for this article, and Priscilla Frisch for fruitful discussions. Support for this work was provided by NASA through grants NAG5-9041 and NNG05GD69G to the University of Colorado, and through grant number AR-09957 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. Extensive use of NASA's Astrophysics Data System Bibliographic Services has been made in the preparation of this article.
Acuña, M.H., Connerney, J.E.P., Ness, N.F., Lin, R.P., Mitchell, D., Carlson, C.W., McFadden, J., Anderson, K.A., Rème, H., Mazelle, C., Vignes, D., Wasilewski, P., Cloutier, P., 1999, "Global Distribution of Crustal Magnetization Discovered by the Mars Global Surveyor MAG/ER Experiment", Science, 284, 790–793ADSCrossRefGoogle Scholar
Ayres, T.R., 1997, "Evolution of the Solar Ionizing Flux", J. Geophys. Res., 102, 1641–1652ADSCrossRefGoogle Scholar
Bahcall, J.N., Pinsonneault, M.H., Basu, S., 2001, "Solar Models: Current Epoch and Time Dependences, Neutrinos, and Helioseismological Properties", Astrophys. J., 555, 990–1012. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...555..990BADSCrossRefGoogle Scholar
Baranov, V.B., 1990, "Gasdynamics of the Solar Wind Interaction with the Interstellar Medium", Space Sci. Rev., 52, 89–120ADSCrossRefGoogle Scholar
Baranov, V.B., Malama, Y.G., 1993, "Model of the solar wind interaction with the local interstellar medium — Numerical solution of self-consistent problem", J. Geophys. Res., 98, 15, 157–15, 163ADSCrossRefGoogle Scholar
Baranov, V.B., Malama, Y.G., 1995, "Effect of Local Interstellar Medium Hydrogen Fractional Ionization on the Distant Solar Wind and Interface Region", J. Geophys. Res., 100, 14, 755–14, 762ADSCrossRefGoogle Scholar
Baranov, V.B., Zaitsev, N.A., 1995, "On the Problem of the Solar Wind Interaction with Magnetized Interstellar Plasma", Astron. Astrophys., 304, 631–637. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1995A%26A...304..631B
Bertaux, J.-L., Blamont, J.E., 1971, "Evidence for a Source of an Extraterrestrial Hydrogen Lyman-α Emission: the Interstellar Wind", Astron. Astrophys., 11, 200–217. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1971A%26A....11..200B
Biermann, L., 1951, "Kometenschweife und solare Korpuskularstrahlung", Z. Astrophys., 29, 274–286. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1951ZA.....29..274B
Boesgaard, A.M., Steigman, G., 1985, "Big Bang Nucleosynthesis: Theories and Observations", Annu. Rev. Astron. Astrophys., 23, 319–378. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1985ARA%26A..23..319B
Bond, H.E., Mullan, D.J., O'Brien, M.S., Sion, E.M., 2001, "Detection of Coronal Mass Ejections in V471 Tauri with the Hubble Space Telescope", Astrophys. J., 560, 919–927. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...560..919B
Brown, A., Veale, A., Judge, P., Bookbinder, J.A., Hubeny, I., 1990, "Stringent Limits on the Ionized Mass Loss from A and F Dwarfs", Astrophys. J., 361, 220–224. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1990ApJ...361..220B
Burlaga, L.F., Ness, N.F., Stone, E.C., McDonald, F.B., Acuña, M.H., Lepping, R.P., Connerney, J.E.P., 2003, "Search for the Heliosheath with Voyager 1 Magnetic Field Measurements", Geophys. Res. Lett., 30, 2072–2075
Burles, S., Nollett, K.M., Turner, M.S., 2001, "Big Bang Nucleosynthesis Predictions for Precision Cosmology", Astrophys. J. Lett., 552, L1–L5. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...552L...1B
Caillault, J.-P., Helfand, D.J., 1985, "The Einstein Soft X-ray Survey of the Pleiades", Astrophys. J., 289, 279–299. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1985ApJ...289..279C
Carr, M.H., 1996, Water on Mars, Oxford University Press, New York, U.S.A.
Carslaw, K.S., Harrison, R.G., Kirkby, J., 2002, "Cosmic Rays, Clouds, and Climate", Science, 298, 1732–1737
Chassefière, E., 1997, "Loss of Water on the Young Venus: The Effect of a Strong Primitive Solar Wind", Icarus, 126, 229–232
Cheng, K.-P., Bruhweiler, F.C., 1990, "Ionization Processes in the Local Interstellar Medium: Effects of the Hot Coronal Substrate", Astrophys. J., 364, 573–581. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1990ApJ...364..573C
Chiappini, C., Renda, A., Matteucci, F., 2002, "Evolution of Deuterium, 3He and 4He in the Galaxy", Astron. Astrophys., 395, 789–801
Christensen-Dalsgaard, J., 2003, "Problems, Connections and Expectations of Asteroseismology", Astrophys. Space Sci., 284, 277–294
Cranmer, S.R., 2002, "Coronal Holes and the High-Speed Solar Wind", Space Sci. Rev., 101, 229–294
Cravens, T.E., 2000, "Heliospheric X-ray Emission Associated with Charge Transfer of the Solar Wind with Interstellar Neutrals", Astrophys. J. Lett., 532, L153–L156. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2000ApJ...532L.153C
Cravens, T.E., 2002, "X-ray Emission from Comets", Science, 296, 1042–1046
Dessler, A.J., 1967, "Solar Wind and Interplanetary Magnetic Field", Rev. Geophys., 5, 1
Diplas, A., Savage, B.D., 1994, "An IUE survey of Interstellar H I Lyα Absorption. 1. Column Densities", Astrophys. J. Suppl. Ser., 93, 211–228. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1994ApJS...93..211D
Drake, S.A., Simon, T., Brown, A., 1993, "Detection of Radio Continuum Emission from Procyon", Astrophys. J., 406, 247–251. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1993ApJ...406..247D
Dring, A.R., Linsky, J.L., Murthy, J., Henry, R.C., Moos, H.W., Vidal-Madjar, A., Audouze, J., Landsman, W.B., 1997, "Lyman-Alpha Absorption and the D/H Ratio in the Local Interstellar Medium", Astrophys. J., 488, 760–775. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997ApJ...488..760D
Dupree, A.K., Baliunas, S.L., Shipman, H.L., 1977, "Deuterium and Hydrogen in the Local Interstellar Medium", Astrophys. J., 218, 361–369. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1977ApJ...218..361D
Fahr, H.J., 1978, "Change of Interstellar Gas Parameters in Stellar-wind-dominated Astrospheres: Solar Case", Astron. Astrophys., 66, 103–117. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1978A%26A....66..103F
Favata, F., Micela, G., 2003, "Stellar Coronal Astronomy", Space Sci. Rev., 108, 577–708
Feldman, W.C., Asbridge, J.R., Bame, S.J., Gosling, J.T., 1977, "Plasma and Magnetic Fields from the Sun", in The Solar Output and its Variation, (Ed.) White, O.R., Proceedings of a Workshop, held in Boulder, Colorado, April 26–28, 1976, p. 351, Colorado Associated University Press, Boulder, U.S.A.
Fleming, T.A., Gioia, I.M., Maccacaro, T., 1989, "The Relation between X-ray Emission and Rotation in Late-Type Stars from the Perspective of X-ray Selection", Astrophys. J., 340, 1011–1023. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1989ApJ...340.1011F
Florinski, V., Pogorelov, N.V., Zank, G.P., Wood, B.E., Cox, D.P., 2004, "On the Possibility of a Strong Magnetic Field in the Local Interstellar Medium", Astrophys. J., 604, 700–706. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2004ApJ...604..700F
Frisch, P.C., 1993, "G-star Astropauses: A Test for Interstellar Pressure", Astrophys. J., 407, 198–206. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1993ApJ...407..198F
Frisch, P.C., Slavin, J.D., 2003, "The Chemical Composition and Gas-to-Dust Mass Ratio of Nearby Interstellar Matter", Astrophys. J., 594, 844–858. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...594..844F
Gaidos, E.J., Güdel, M., Blake, G.A., 2000, "The Faint Young Sun Paradox: An Observational Test of an Alternative Solar Model", Geophys. Res. Lett., 27, 501–503
Gayley, K.G., Zank, G.P., Pauls, H.L., Frisch, P.C., Welty, D.E., 1997, "One- versus Two-Shock Heliosphere: Constraining Models with Goddard High Resolution Spectrograph Lyman-α Spectra toward α Centauri", Astrophys. J., 487, 259–270. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997ApJ...487..259G
Gough, D.O., 1981, "Solar Interior Structure and Luminosity Variations", Solar Phys., 74, 21–34. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1981SoPh...74...21G
Grießmeier, J.-M., Stadelmann, A., Penz, T., Lammer, H., Selsis, F., Ribas, I., Guinan, E.F., Motschmann, U., Biernat, H.K., Weiss, W.W., 2004, "The Effect of Tidal Locking on the Magnetospheric and Atmospheric Evolution of 'Hot Jupiters'", Astron. Astrophys., 425, 753–762. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2004A%26A...425..753G
Gringauz, K.I., Bezrukikh, V.V., Ozerov, V.D., Rybchinskii, R.E., 1962, "The Study of Interplanetary Ionized Gas, High-Energy Electrons and Corpuscular Radiation of the Sun, Employing Three-Electrode Charged Particle Traps on the Second Soviet Space Rocket", Planet. Space Sci., 9, 97
Güdel, M., 2004, "X-ray Astronomy of Stellar Coronae", Astron. Astrophys. Rev., 12, 71–237. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2004A%26ARv..12...71G
Güdel, M., Guinan, E.F., Skinner, S.L., 1997, "The X-Ray Sun in Time: A Study of the Long-Term Evolution of Coronae of Solar-Type Stars", Astrophys. J., 483, 947–960. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997ApJ...483..947G
Guinan, E.F., Ribas, I., Harper, G.M., 2003, "Far-Ultraviolet Emissions of the Sun in Time: Probing Solar Magnetic Activity and Effects on Evolution of Paleoplanetary Atmospheres", Astrophys. J., 594, 561–572. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...594..561G
Gustafsson, B., Jørgensen, U.G., 1994, "Models of Late-Type Stellar Photospheres", Astron. Astrophys. Rev., 6, 19–65
Guzik, J.A., Willson, L.A., Brunish, W.M., 1987, "A Comparison between Mass-Losing and Standard Solar Models", Astrophys. J., 319, 957–965. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1987ApJ...319..957G
Harper, G.M., Wood, B.E., Linsky, J.L., Bennett, P.D., Ayres, T.R., Brown, A., 1995, "A Semiempirical Determination of the Wind Velocity Structure for the Hybrid-Chromosphere Star α Trianguli Australis", Astrophys. J., 452, 407–422. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1995ApJ...452..407H
Hébrard, G., Mallouris, C., Ferlet, R., Koester, D., Lemoine, M., Vidal-Madjar, A., York, D., 1999, "Ultraviolet Observations of Sirius A and Sirius B with HST-GHRS: An Interstellar Cloud with a Possible Low Deuterium Abundance", Astron. Astrophys., 350, 643–658. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/1999A%26A...350..643H
Holzer, T.E., 1972, "Interaction of the Solar Wind with the Neutral Component of the Interstellar Gas", J. Geophys. Res., 77, 5407
Holzer, T.E., 1989, "Interaction between the Solar Wind and the Interstellar Medium", Annu. Rev. Astron. Astrophys., 27, 199–234. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1989ARA%26A..27..199H
Holzwarth, V., Jardine, M., 2007, "Theoretical Mass Loss Rates of Cool Main-Sequence Stars", Astron. Astrophys., 463, 11–21. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2007A%26A...463...11H
Hünsch, M., Schmitt, J.H.M.M., Sterzik, M.F., Voges, W., 1999, "The ROSAT All-Sky Survey Catalogue of the Nearby Stars", Astron. Astrophys. Suppl., 135, 319–338
Izmodenov, V., Gloeckler, G., Malama, Y., 2003, "When Will Voyager 1 and 2 Cross the Termination Shock?", Geophys. Res. Lett., 30, 1351–1354
Izmodenov, V., Alexashov, D., Myasnikov, A., 2005, "Direction of the Interstellar H Atom Inflow in the Heliosphere: Role of the Interstellar Magnetic Field", Astron. Astrophys., 437, L35–L38. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005A%26A...437L..35I
Izmodenov, V.V., Geiss, J., Lallement, R., Gloeckler, G., Baranov, V.B., Malama, Y.G., 1999a, "Filtration of Interstellar Hydrogen in the Two-Shock Heliospheric Interface: Inferences on the Local Interstellar Cloud Electron Density", J. Geophys. Res., 104, 4731–4742
Izmodenov, V.V., Lallement, R., Malama, Y.G., 1999b, "Heliospheric and Astrospheric Hydrogen Absorption towards Sirius: No Need for Interstellar Hot Gas", Astron. Astrophys., 342, L13–L16. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1999A%26A...342L..13I
Izmodenov, V.V., Gruntman, M., Malama, Y.G., 2001, "Interstellar Hydrogen Atom Distribution Function in the Outer Heliosphere", J. Geophys. Res., 106, 10,681–10,690
Izmodenov, V.V., Wood, B.E., Lallement, R., 2002, "Hydrogen Wall and Heliosheath Lyman-α Absorption toward Nearby Stars: Possible Constraints on the Heliospheric Interface Plasma Flow", J. Geophys. Res., 107, 1308–1322
Izmodenov, V.V., Malama, Y.G., Gloeckler, G., Geiss, J., 2004, "Filtration of Interstellar H, O, N Atoms through the Heliospheric Interface: Inferences on Local Interstellar Abundances of the Elements", Astron. Astrophys., 414, L29–L32
Jakosky, B.M., Phillips, R.J., 2001, "Mars' Volatile and Climate History", Nature, 412, 237–244
Jakosky, B.M., Pepin, R.O., Johnson, R.E., Fox, J.L., 1994, "Mars Atmospheric Loss and Isotopic Fractionation by Solar Wind Induced Sputtering and Photochemical Escape", Icarus, 111, 271–288
Johns-Krull, C.M., Valenti, J.A., 1996, "Detection of Strong Magnetic Fields on M Dwarfs", Astrophys. J. Lett., 459, L95–L98. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...459L..95J
Kass, D.M., Yung, Y.L., 1995, "Loss of Atmosphere from Mars due to Solar Wind-Induced Sputtering", Science, 268, 697–699
Kasting, J.F., 1991, "CO2 Condensation and the Climate of Early Mars", Icarus, 94, 1–13
Kasting, J.F., Ackerman, T.P., 1986, "Climatic Consequences of Very High Carbon Dioxide Levels in the Earth's Early Atmosphere", Science, 234, 1383–1385
Kirkman, D., Tytler, D., Suzuki, N., O'Meara, J.M., Lubin, D., 2003, "The Cosmological Baryon Density from the Deuterium-to-Hydrogen Ratio in QSO Absorption Systems: D/H toward Q1243+3047", Astrophys. J. Suppl. Ser., 149, 1–28. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJS..149....1K
Krimigis, S.M., Decker, R.B., Hill, M.E., Armstrong, T.P., Gloeckler, G., Hamilton, D.C., Lanzerotti, L.J., Roelof, E.C., 2003, "Voyager 1 Exited the Solar Wind at a Distance of ∼85 AU from the Sun", Nature, 426, 45–48
Kruk, J.W., Howk, J.C., André, M., Moos, H.W., Oegerle, W.R., Oliveira, C., Sembach, K.R., Chayer, P., Linsky, J.L., Wood, B.E., Ferlet, R., Hébrard, G., Lemoine, M., Vidal-Madjar, A., Sonneborn, G., 2002, "Abundances of Deuterium, Nitrogen, and Oxygen toward HZ 43A: Results from the FUSE Mission", Astrophys. J. Suppl. Ser., 140, 19–36. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJS..140...19K
Kudritzki, R.-P., Puls, J., 2000, "Winds from Hot Stars", Annu. Rev. Astron. Astrophys., 38, 613–666
Lallement, R., Bertin, P., 1992, "Northern-hemisphere observations of nearby interstellar gas: Possible detection of the local cloud", Astron. Astrophys., 266, 479–485. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1992A%26A...266..479L
Lallement, R., Ferlet, R., Lagrange, A.M., Lemoine, M., Vidal-Madjar, A., 1995, "Local Cloud structure from HST-GHRS", Astron. Astrophys., 304, 461–474. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1995A%26A...304..461L
Lallement, R., Welsh, B.Y., Vergely, J.L., Crifo, F., Sfeir, D.M., 2003, "3D Mapping of the Dense Interstellar Gas around the Local Bubble", Astron. Astrophys., 411, 447–464
Lammer, H., Stumptner, W., Molina-Cuberos, G.J., Bauer, S.J., Owen, T., 2000, "Nitrogen Isotope Fractionation and its Consequence for Titan's Atmospheric Evolution", Planet. Space Sci., 48, 529–543
Lammer, H., Lichtenegger, H.I.M., Kolb, C., Ribas, I., Guinan, E.F., Abart, R., Bauer, S.J., 2003, "Loss of Water from Mars: Implications for the Oxidation of the Soil", Icarus, 165, 9–25
Landsman, W.B., Henry, R.C., Moos, H.W., Linsky, J.L., 1984, "Observations of Interstellar Hydrogen and Deuterium toward Alpha Centauri A", Astrophys. J., 285, 801–807. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1984ApJ...285..801L
Lazarus, A.J., McNutt Jr, R.L., 1990, "Plasma Observations in the Distant Heliosphere — A View from Voyager", in Physics of the Outer Heliosphere, (Eds.) Grzedzielski, S., Page, D.E., Proceedings of the 1st COSPAR Colloquium, held in Warsaw, Poland, 19–22 September 1989, vol. 1 of COSPAR Colloquia Series, pp. 229–234, Pergamon, Oxford, U.K.; New York, U.S.A.
Lemoine, M., Vidal-Madjar, A., Hébrard, G., Désert, J.-M., Ferlet, R., Lecavelier des Étangs, A., Howk, J.C., André, M., Blair, W.P., Friedman, S.D., Kruk, J.W., Lacour, S., Moos, H.W., Sembach, K., Chayer, P., Jenkins, E.B., Koester, D., Linsky, J.L., Wood, B.E., Oegerle, W.R., Sonneborn, G., York, D.G., 2002, "Deuterium Abundance toward G191-B2B: Results from the FUSE Mission", Astrophys. J. Suppl. Ser., 140, 67–80. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJS..140...67L
Lim, J., White, S.M., 1996, "Limits to Mass Outflows from Late-Type Dwarf Stars", Astrophys. J. Lett., 462, L91–L94. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...462L..91L
Lim, J., White, S.M., Cully, S.L., 1996a, "The Eclipsing Radio Emission of the Precataclysmic Binary V471 Tauri", Astrophys. J., 461, 1009–1015. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...461.1009L
Lim, J., White, S.M., Slee, O.B., 1996b, "The Radio Properties of the dMe Flare Star Proxima Centauri", Astrophys. J., 460, 976–983. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...460..976L
Linde, T., Gombosi, T.I., Roe, P.L., Powell, K.G., DeZeeuw, D.L., 1998, "Heliosphere in the Magnetized Local Interstellar Medium — Results of a Three-Dimensional MHD Simulation", J. Geophys. Res., 103, 1889–1904
Linsky, J.L., 1980, "Stellar Chromospheres", Annu. Rev. Astron. Astrophys., 18, 439–488. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1980ARA%26A..18..439L
Linsky, J.L., 1998, "Deuterium Abundance in the Local ISM and Possible Spatial Variations", Space Sci. Rev., 84, 285–296
Linsky, J.L., Wood, B.E., 1996, "The α Centauri Line of Sight: D/H Ratio, Physical Properties of Local Interstellar Gas, and Measurement of Heated Hydrogen (the "Hydrogen Wall") Near the Heliopause", Astrophys. J., 463, 254–270. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...463..254L
Linsky, J.L., Brown, A., Gayley, K.G., Diplas, A., Savage, B.D., Ayres, T.R., Landsman, W.B., Shore, S.N., Heap, S.R., 1993, "Goddard High-Resolution Spectrograph Observations of the Local Interstellar Medium and the Deuterium/Hydrogen Ratio Along the Line of Sight toward Capella", Astrophys. J., 402, 694–709. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1993ApJ...402..694L
Linsky, J.L., Diplas, A., Wood, B.E., Brown, A., Ayres, T.R., Savage, B.D., 1995, "Deuterium and the Local Interstellar Medium: Properties for the Procyon and Capella Lines of Sight", Astrophys. J., 451, 335–351. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1995ApJ...451..335L
Linsky, J.L., Redfield, S., Wood, B.E., Piskunov, N., 2000, "The Three-dimensional Structure of the Warm Local Interstellar Medium. I. Methodology", Astrophys. J., 528, 756–766. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2000ApJ...528..756L
Lipatov, A.S., Zank, G.P., Pauls, H.L., 1998, "The Interaction of Neutral Interstellar H with the Heliosphere: A 2.5D Particle-Mesh Boltzmann Simulation", J. Geophys. Res., 103, 20,631–20,642
Lisse, C.M., Christian, D.J., Dennerl, K., Meech, K.J., Petre, R., Weaver, H.A., Wolk, S.J., 2001, "Charge Exchange-Induced X-Ray Emission from Comet C/1999 S4 (LINEAR)", Science, 292, 1343–1348
Luhmann, J.G., Johnson, R.E., Zhang, M.H.G., 1992, "Evolutionary Impact of Sputtering of the Martian Atmosphere by O+ Pickup Ions", Geophys. Res. Lett., 19, 2151–2154
Lundin, R., 2001, "Erosion by the Solar Wind", Science, 291, 1909
MacGregor, K.B., Charbonneau, P., 1994, "Stellar Winds with Non-WKB Alfvén Waves I. Wind Models for Solar Coronal Conditions", Astrophys. J., 430, 387–398. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1994ApJ...430..387M
Malama, Y.G., Izmodenov, V.V., Chalov, S.V., 2006, "Modeling of the Heliospheric Interface: Multi-Component Nature of the Heliospheric Plasma", Astron. Astrophys., 445, 693–701. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2006A%26A...445..693M
McComas, D.J., Barraclough, B.L., Funsten, H.O., Gosling, J.T., Santiago-Muñoz, E., Skoug, R.M., Goldstein, B.E., Neugebauer, M., Riley, P., Balogh, A., 2000, "Solar Wind Observations Over Ulysses' First Full Polar Orbit", J. Geophys. Res., 105, 10,419–10,434
McComas, D.J., Elliott, H.A., von Steiger, R., 2002, "Solar Wind from High-Latitude Coronal Holes at Solar Maximum", Geophys. Res. Lett., 29, 28–31
McCullough, P.R., 1992, "The Interstellar Deuterium-to-Hydrogen Ratio: A Reevaluation of Lyman Absorption-Line Measurements", Astrophys. J., 390, 213–218. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1992ApJ...390..213M
McDonald, F.B., Stone, E.C., Cummings, A.C., Heikkila, B., Lal, N., Webber, W.R., 2003, "Enhancements of Energetic Particles near the Heliospheric Termination Shock", Nature, 426, 48–51
Mestel, L., 1984, "Angular Momentum Loss During Pre-Main Sequence Contraction", in Cool Stars, Stellar Systems, and the Sun, (Eds.) Baliunas, S.L., Hartmann, L., Proceedings of the Third Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, Held in Cambridge, Massachusetts, October 5–7, 1983, pp. 49–59, Pergamon, New York, U.S.A.
Micela, G., Sciortino, S., Serio, S., Vaiana, G.S., Bookbinder, J., Golub, L., Harnden Jr, F.R., Rosner, R., 1985, "Einstein X-ray Survey of the Pleiades — The Dependence of X-ray Emission on Stellar Age", Astrophys. J., 292, 172–180. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1985ApJ...292..172M
Minton, D.A., Malhotra, R., 2007, "Assessing the Massive Young Sun Hypothesis to Solve the Warm Young Earth Puzzle", Astrophys. J., 660, 1700–1706. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2007ApJ...660.1700M
Moos, H.W., Sembach, K.R., Vidal-Madjar, A., York, D.G., Friedman, S.D., Hébrard, G., Kruk, J.W., Lehner, N., Lemoine, M., Sonneborn, G., Wood, B.E., Ake, T.B., André, M., Blair, W.P., Chayer, P., Gry, C., Dupree, A.K., Ferlet, R., Feldman, P.D., Green, J.C., Howk, J.C., Hutchings, J.B., Jenkins, E.B., Linsky, J.L., Murphy, E.M., Oegerle, W.R., Oliveira, C., Roth, K., Sahnow, D.J., Savage, B.D., Shull, J.M., Tripp, T.M., Weiler, E.J., Welsh, B.Y., Wilkinson, E., Woodgate, B.E., 2002, "Abundances of Deuterium, Nitrogen, and Oxygen in the Local Interstellar Medium: Overview of First Results from the FUSE Mission", Astrophys. J. Suppl. Ser., 140, 3–17. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJS..140....3M
Mullan, D.J., Sion, E.M., Bruhweiler, F.C., Carpenter, K.G., 1989, "Evidence for a Cool Wind from the K2 Dwarf in the Detached Binary V471 Tauri", Astrophys. J. Lett., 339, L33–L36. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1989ApJ...339L..33M
Mullan, D.J., Doyle, J.G., Redman, R.O., Mathioudakis, M., 1992, "Limits on Detectability of Mass Loss from Cool Dwarfs", Astrophys. J., 397, 225–231. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1992ApJ...397..225M
Mullan, D.J., Carpenter, K.G., Robinson, R.D., 1998, "Large Variations in the Winds of Single Cool Giants: λ Velorum and γ Crucis", Astrophys. J., 495, 927–932. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1998ApJ...495..927M
Müller, H.-R., Zank, G.P., Lipatov, A.S., 2000, "Self-Consistent Hybrid Simulations of the Interaction of the Heliosphere with the Local Interstellar Medium", J. Geophys. Res., 105, 27,419–27,438
Müller, H.-R., Zank, G.P., Wood, B.E., 2001a, "Modeling the Interstellar Medium-Stellar Wind Interactions of λ Andromedae and ϵ Indi", Astrophys. J., 551, 495–506. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...551..495M
Müller, H.-R., Zank, G.P., Wood, B.E., 2001b, "Modeling Stellar Wind Interaction with the ISM: Exploring Astrospheres and their Lyman-α Absorption", in The Outer Heliosphere: The Next Frontiers, (Eds.) Scherer, K., Fichtner, H., Fahr, H.J., Marsch, E., Proceedings of COSPAR Colloquium, Potsdam, Germany, July, 2001, vol. 11 of COSPAR Colloquia Series, pp. 53–56, Pergamon; Elsevier Science, New York, U.S.A.; Amsterdam, Netherlands
Murthy, J., Henry, R.C., Moos, H.W., Landsman, W.B., Linsky, J.L., Vidal-Madjar, A., Gry, C., 1987, "IUE Observations of Hydrogen and Deuterium in the Local Interstellar Medium", Astrophys. J., 315, 675–686. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1987ApJ...315..675M
Nerney, S., Suess, S.T., Schmahl, E.J., 1991, "Flow Downstream of the Heliospheric Terminal Shock — Magnetic Field Kinematics", Astron. Astrophys., 250, 556–564. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1991A%26A...250..556N
Neugebauer, M., Snyder, C., 1962, "Solar Plasma Experiment", Science, 138, 1095–1097
Opher, M., Liewer, P.C., Gombosi, T.I., Manchester, W., DeZeeuw, D.L., Sokolov, I., Toth, G., 2003, "Probing the Edge of the Solar System: Formation of an Unstable Jet Sheet", Astrophys. J. Lett., 591, L61–L65. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...591L..61O
Opher, M., Liewer, P.C., Velli, M., Bettarini, L., Gombosi, T.I., Manchester, W., DeZeeuw, D.L., Sokolov, I., 2004, "Magnetic Effects at the Edge of the Solar System: MHD Instabilities, the de Laval Nozzle Effect and an Extended Jet", Astrophys. J., 611, 575–586
Opher, M., Stone, E.C., Liewer, P.C., 2006, "The Effects of a Local Interstellar Magnetic Field on Voyager 1 and 2 Observations", Astrophys. J. Lett., 640, L71–L74. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2006ApJ...640L..71O
Pallavicini, R., Golub, L., Rosner, R., Vaiana, G.S., Ayres, T., Linsky, J.L., 1981, "Relations among stellar X-ray emission observed from Einstein, stellar rotation and bolometric luminosity", Astrophys. J., 248, 279–290. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1981ApJ...248..279P
Parker, E.N., 1958, "Dynamics of the Interplanetary Gas and Magnetic Fields", Astrophys. J., 128, 664–676. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1958ApJ...128..664P
Parker, E.N., 1961, "The Stellar-Wind Regions", Astrophys. J., 134, 20–27. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1961ApJ...134...20P
Parker, E.N., 1963, Interplanetary Dynamical Processes, vol. 8 of Interscience Monographs and Texts in Physics and Astronomy, Interscience Publishers, New York, U.S.A.
Pauls, H.L., Zank, G.P., 1997, "Interaction of a Nonuniform Solar Wind with the Local Interstellar Medium. 2. A Two-Fluid Model", J. Geophys. Res., 102, 19,779–19,788
Perez de Tejada, H., 1992, "Solar Wind Erosion of the Mars Early Atmosphere", J. Geophys. Res., 97, 3159–3167
Perryman, M.A.C., Lindegren, L., Kovalevsky, J., Hoeg, E., Bastian, U., Bernacca, P.L., Crézé, M., Donati, F., Grenon, M., van Leeuwen, F., van der Marel, H., Mignard, F., Murray, C.A., Le Poole, R.S., Schrijver, H., Turon, C., Arenou, F., Froeschlé, M., Petersen, C.S., 1997, "The Hipparcos Catalogue", Astron. Astrophys., 323, L49–L52. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997A%26A...323L..49P
Petit, P., Donati, J.-F., Aurière, M., Landstreet, J.D., Lignières, F., Marsden, S., Mouillet, D., Paletou, F., Toqué, N., Wade, G.A., 2005, "Large-Scale Magnetic Field of the G8 Dwarf ξ Bootis A", Mon. Not. R. Astron. Soc., 361, 837–849. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005MNRAS.361..837P
Pogorelov, N.V., Zank, G.P., Ogino, T., 2006, "Three-Dimensional Features of the Outer Heliosphere due to Coupling between the Interstellar and Interplanetary Magnetic Fields. II. The Presence of Neutral Hydrogen Atoms", Astrophys. J., 644, 1299–1316. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2006ApJ...644.1299P
Prantzos, N., 1996, "The Evolution of D and 3He in the Galactic Disk", Astron. Astrophys., 310, 106–114. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996A%26A...310..106P
Preusse, S., Kopp, A., Büchner, J., Motschmann, U., 2005, "Stellar Wind Regimes of Close-in Extrasolar Planets", Astron. Astrophys., 434, 1191–1200. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005A%26A...434.1191P
Quémerais, E., Bertaux, J.-L., Lallement, R., Berthé, M., Kyrölä, E., Schmidt, W., 1999, "Interplanetary Lyman-α Line Profiles Derived from SWAN/SOHO Hydrogen Cell Measurements: The Full-sky Velocity Field", J. Geophys. Res., 104, 12,585–12,604
Quémerais, E., Bertaux, J.-L., Lallement, R., Berthé, M., Kyrölä, E., Schmidt, W., 2000, "SWAN/SOHO H Cell Measurements: The First Year", Adv. Space Res., 26, 815–818
Ratkiewicz, R., Barnes, A., Molvik, G.A., Spreiter, J.R., Stahara, S.S., Vinokur, M., Venkateswaran, S., 1998, "Effect of Varying Strength and Orientation of Local Interstellar Magnetic Field on Configuration of Exterior Heliosphere: 3D MHD Simulations", Astron. Astrophys., 335, 363–369. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1998A%26A...335..363R
Redfield, S., Linsky, J.L., 2000, "The Three-dimensional Structure of the Warm Local Interstellar Medium. II. The Colorado Model of the Local Interstellar Cloud", Astrophys. J., 534, 825–837. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2000ApJ...534..825R
Redfield, S., Linsky, J.L., 2001, "Microstructure of the Local Interstellar Cloud and the Identification of the Hyades Cloud", Astrophys. J., 551, 413–428. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...551..413R
Ribas, I., Guinan, E.F., Güdel, M., Audard, M., 2005, "Evolution of the Solar Activity over Time and Effects on Planetary Atmospheres. I. High-Energy Irradiances (1–1700 Å)", Astrophys. J., 622, 680–694. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005ApJ...622..680R
Sackmann, I.-J., Boothroyd, A.I., 2003, "Our Sun. V. A Bright Young Sun Consistent with Helioseismology and Warm Temperatures on Ancient Earth and Mars", Astrophys. J., 583, 1024–1039. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...583.1024S
Sagan, C., Mullen, G., 1972, "Earth and Mars: Evolution of Atmospheres and Surface Temperatures", Science, 177, 52–56
Schmitt, J.H.M.M., 1997, "Coronae on Solar-Like Stars", Astron. Astrophys., 318, 215–230. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997A%26A...318..215S
Schmitt, J.H.M.M., Liefke, C., 2004, "NEXXUS: A Comprehensive ROSAT Survey of Coronal X-ray Emission Among Nearby Solar-like Stars", Astron. Astrophys., 417, 651–665. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2004A%26A...417..651S
Schrijver, C.J., Title, A.M., 2001, "On the Formation of Polar Spots in Sun-like Stars", Astrophys. J., 551, 1099–1106. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2001ApJ...551.1099S
Schrijver, C.J., DeRosa, M.L., Title, A.M., 2003, "Asterospheric Magnetic Fields and Winds of Cool Stars", Astrophys. J., 590, 493–501. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...590..493S
Sfeir, D.M., Lallement, R., Crifo, F., Welsh, B.Y., 1999, "Mapping the Contours of the Local Bubble: Preliminary Results", Astron. Astrophys., 346, 785–797. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1999A%26A...346..785S
Shaviv, N.J., 2003, "Toward a Solution to the Early Faint Sun Paradox: A Lower Cosmic Ray Flux from a Stronger Solar Wind", J. Geophys. Res., 108, 1437–1444
Skumanich, A., 1972, "Time Scales for Ca II Emission Decay, Rotational Braking, and Lithium Depletion", Astrophys. J., 171, 565–567. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1972ApJ...171..565S
Slavin, J.D., Frisch, P.C., 2002, "The Ionization of Nearby Interstellar Gas", Astrophys. J., 565, 364–379. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJ...565..364S
Snowden, S.L., Freyberg, M.J., Plucinsky, P.P., Schmitt, J.H.M.M., Truemper, J., Voges, W., Edgar, R.J., McCammon, D., Sanders, W.T., 1995, "First Maps of the Soft X-Ray Diffuse Background from the ROSAT XRT/PSPC All-Sky Survey", Astrophys. J., 454, 643–653. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1995ApJ...454..643S
Soderblom, D.R., Stauffer, J.R., MacGregor, K.B., Jones, B.F., 1993, "The Evolution of Angular Momentum Among Zero-Age Main-Sequence Solar-Type Stars", Astrophys. J., 409, 624–634. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1993ApJ...409..624S
Stauffer, J.R., Caillault, J.-P., Gagné, M., Prosser, C.F., Hartmann, L.W., 1994, "A Deep Imaging Survey of the Pleiades with ROSAT", Astrophys. J. Suppl. Ser., 91, 625–657. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1994ApJS...91..625S
Stepień, K., 1988, "Spin-Down of Cool Stars During their Main-Sequence Life", Astrophys. J., 335, 907–913. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1988ApJ...335..907S
Stern, D.P., 1989, "A Brief History of Magnetospheric Physics Before the Spaceflight Era", Rev. Geophys., 27, 103–114. Related online version (cited on 25 March 2004): http://www.phy6.org/Education/Intro.html
Stone, E.C., Cummings, A.C., McDonald, F.B., Heikkila, B.C., Lal, N., Webber, W.R., 2005, "Voyager 1 Explores the Termination Shock Region and the Heliosheath Beyond", Science, 309, 2017–2020. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005Sci...309.2017SADSCrossRefGoogle Scholar
Strassmeier, K.G., 2002, "Doppler Images of Starspots", Astron. Nachr., 323, 309–316. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2002AN....323..309SADSCrossRefGoogle Scholar
Suess, S.T., 1990, "The Heliopause", Rev. Geophys., 28, 97–115ADSCrossRefGoogle Scholar
Svensmark, H., 1998, "Influence of Cosmic Rays on Earth's Climate", Phys. Rev. Lett., 81, 5027–5030
Toner, C.G., Gray, D.F., 1988, "The Starpatch on the G8 Dwarf ξ Bootis A", Astrophys. J., 334, 1008–1020. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/1988ApJ...334.1008T
Tosi, M., Steigman, G., Matteucci, F., Chiappini, C., 1998, "Is High Primordial Deuterium Consistent with Galactic Evolution?", Astrophys. J., 498, 226–235. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1998ApJ...498..226T
van den Oord, G.H.J., Doyle, J.G., 1997, "Constraints on Mass Loss from dMe Stars: Theory and Observations", Astron. Astrophys., 319, 578–588. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997A%26A...319..578V
Vidal-Madjar, A., Ferlet, R., 2002, "Hydrogen Column Density Evaluations toward Capella: Consequences on the Interstellar Deuterium Abundance", Astrophys. J. Lett., 571, L169–L172. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJ...571L.169V
Vogt, S.S., Penrod, G.D., Hatzes, A.P., 1987, "Doppler Images of Rotating Stars Using Maximum Entropy Image Reconstruction", Astrophys. J., 321, 496–515. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1987ApJ...321..496V
Walker, J.C.G., 1985, "Carbon Dioxide on the Early Earth", Origins of Life, 16, 117–127
Wallis, M., 1975, "Local Interstellar Medium", Nature, 254, 207–127
Walter, F.M., 1982, "On the Coronae of Rapidly Rotating Stars. III. An Improved Coronal Rotation-Activity Relation in Late-Type Dwarfs", Astrophys. J., 253, 745–751. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1982ApJ...253..745W
Walter, F.M., 1983, "On the Coronae of Rapidly Rotating Stars. IV. Coronal Activity in F Dwarfs and Implications for the Onset of the Dynamo", Astrophys. J., 274, 794–800. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1983ApJ...274..794W
Wargelin, B.J., Drake, J.J., 2002, "Stringent X-Ray Constraints on Mass Loss from Proxima Centauri", Astrophys. J., 578, 503–514. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJ...578..503W
Weber, E.J., Davis Jr, L., 1967, "The Angular Momentum of the Solar Wind", Astrophys. J., 148, 217–228. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1967ApJ...148..217W
Witte, M., Rosenbauer, H., Banaszkewicz, M., Fahr, H.J., 1993, "The ULYSSES Neutral Gas Experiment — Determination of the Velocity and Temperature of the Interstellar Neutral Helium", Adv. Space Res., 13, 121–130
Witte, M., Rosenbauer, H., Banaszkewicz, M., 1996, "Recent Results on the Parameters of the Interstellar Helium from the Ulysses/Gas Experiment", Space Sci. Rev., 78, 289–296
Wood, B.E., Linsky, J.L., 1997, "A New Measurement of the Electron Density in the Local Interstellar Medium", Astrophys. J. Lett., 474, L39–L42. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1997ApJ...474L..39W
Wood, B.E., Linsky, J.L., 1998, "The Local ISM and its Interaction with the Winds of Nearby Late-Type Stars", Astrophys. J., 492, 788–803. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1998ApJ...492..788W
Wood, B.E., Alexander, W.R., Linsky, J.L., 1996, "The Properties of the Local Interstellar Medium and the Interaction of the Stellar Winds of ϵ Indi and λ Andromedae with the Interstellar Environment", Astrophys. J., 470, 1157–1171. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1996ApJ...470.1157W
Wood, B.E., Linsky, J.L., Zank, G.P., 2000a, "Heliospheric, Astrospheric, and Interstellar Lyα Absorption toward 36 Ophiuchi", Astrophys. J., 537, 304–311. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2000ApJ...537..304W
Wood, B.E., Müller, H.-R., Zank, G.P., 2000b, "Hydrogen Lyman-α Absorption Predictions by Boltzmann Models of the Heliosphere", Astrophys. J., 542, 493–503. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2000ApJ...542..493W
Wood, B.E., Linsky, J.L., Müller, H.-R., Zank, G.P., 2001, "Observational Estimates for the Mass-Loss Rates of α Centauri and Proxima Centauri Using Hubble Space Telescope Lyα Spectra", Astrophys. J. Lett., 547, L49–L52. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2001ApJ...547L..49W
Wood, B.E., Müller, H.-R., Zank, G.P., Linsky, J.L., 2002, "Measured Mass-Loss Rates of Solar-like Stars as a Function of Age and Activity", Astrophys. J., 574, 412–425. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2002ApJ...574..412W
Wood, B.E., Linsky, J.L., Müller, H.-R., Zank, G.P., 2003a, "A Search for Lyα Emission from the Astrosphere of 40 Eridani A", Astrophys. J., 591, 1210–1219. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2003ApJ...591.1210W
Wood, B.E., Redfield, S., Linsky, J.L., 2003b, "The 3D-Structure of the LISM", in The Interstellar Environment of the Heliosphere, (Eds.) Breitschwerdt, D., Haerendel, G., International Colloquium in Honour of Stanislaw Grzedzielski, Paris 2001, vol. 285 of MPE Report, pp. 25–47, MPE, Garching, Germany. Related online version (cited on 13 May 2004): http://arXiv.org/abs/astro-ph/0107033
Wood, B.E., Linsky, J.L., Hébrard, G., Williger, G.M., Moos, H.W., Blair, W.P., 2004, "Two New Low Galactic D/H Measurements from the Far Ultraviolet Spectroscopic Explorer (FUSE)", Astrophys. J., 609, 838–853. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/2004ApJ...609..838W
Wood, B.E., Müller, H.-R., Zank, G.P., Linsky, J.L., Redfield, S., 2005a, "New Mass-Loss Measurements from Astrospheric Lyα Absorption", Astrophys. J. Lett., 628, L143–L146. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005ApJ...628L.143W
Wood, B.E., Redfield, S., Linsky, J.L., Müller, H.-R., Zank, G.P., 2005b, "Stellar Lyα Emission Lines in the Hubble Space Telescope Archive: Intrinsic Line Fluxes and Absorption from the Heliosphere and Astrospheres", Astrophys. J. Suppl. Ser., 159, 118–140. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2005ApJS..159..118W
Wood, B.E., Izmodenov, V.V., Linsky, J.L., Alexashov, D., 2007a, "Dependence of Heliospheric Lyα Absorption on the Interstellar Magnetic Field", Astrophys. J., 659, 1784–1791. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2007ApJ...659.1784W
Wood, B.E., Izmodenov, V.V., Linsky, J.L., Malama, Y.G., 2007b, "Lyα Absorption from Heliosheath Neutrals", Astrophys. J., 657, 609–617. Related online version (cited on 2 June 2007): http://adsabs.harvard.edu/abs/2007ApJ...657..609W
York, D.G., Rogerson, J.B., 1976, "The Abundance of Deuterium Relative to Hydrogen in Interstellar Space", Astrophys. J., 203, 378–385. Related online version (cited on 13 May 2004): http://adsabs.harvard.edu/abs/1976ApJ...203..378Y
Zank, G.P., 1999, "Interaction of the Solar Wind with the Local Interstellar Medium: A Theoretical Perspective", Space Sci. Rev., 89, 413–688
Zank, G.P., Pauls, H.L., Williams, L.L., Hall, D.T., 1996, "Interaction of the Solar Wind with the Local Interstellar Medium: A Multifluid Approach", J. Geophys. Res., 101, 21,639–21,656
1. JILA, University of Colorado, Boulder, USA
Wood, B.E. Living Rev. Sol. Phys. (2004) 1: 2. https://doi.org/10.12942/lrsp-2004-2
Change summary
This revision includes 23 new references, two new Figures 11 and 13, and two revised Figures 14 and 15, which replace former Figures 12 and 13 of the original publication. Changes have been made to Sections 2.1, 2.3, 4.2, 5.2, 5.3 and 5.4. The most substantial revisions have taken place in Sections 4.3 and 5.1. Section 6, Conclusions, has been added.
See below for more details on the changes.
Page 9: I now acknowledge the early solar wind detection of the Soviet Luna missions of 1959. Added reference to Cranmer (2002).
Page 13: I have reworded the third paragraph to take into account Voyager 1's 2004 crossing of the termination shock.
Page 20: In Section 4.2, I have revised the third paragraph to take into account numerous new heliospheric absorption detections that have been made since 2004. I have also revised the text below to consider the results of more sophisticated heliospheric models that have been published in the past few years. Added references to Malama et al. (2006), Wood et al. (2007b), Opher et al. (2006), Izmodenov et al. (2005), Pogorelov et al. (2006), and Wood et al. (2007a).
Page 23: This is where the most substantial revisions have taken place, in order to take into account the increase in astrospheric absorption detections from 6 to 13. This enlarges Table 1 and results in the addition of the two new Figures 11 and 13. The discussion of the wind measurements, and the inferred wind/activity and wind/age relations, is necessarily revised significantly, basically along the lines of Wood et al. (2005a). The recent work of Holzwarth and Jardine (2007) is also noted.
Page 31: Along with Section 4.3, this is where the most substantial revisions have taken place, in order to take into account the increase in astrospheric absorption detections from 6 to 13. Figures 14 and 15 have been revised. The discussion of the wind measurements, and the inferred wind/activity and wind/age relations, is necessarily revised significantly, basically along the lines of Wood et al. (2005a). Added references to Schmitt and Liefke (2004), Strassmeier (2002), Schrijver and Title (2001), Toner and Gray (1988), and Petit et al. (2005).
Page 33: Some minor quantitative adjustments have had to be made based on the changes in Section 5.1, but there are no serious textual changes. In detail, 1.00 ± 0.26 was changed to 1.17 ± 0.28 in Equation 7, and the upper limit for m = 0–2 from α < −1.0 to α < −1.3, for m = 0–1 from α < −1.2 to α < −1.7.
Page 35: In the second paragraph, 0.1% was changed to 0.2% and a reference to Minton and Malhotra (2007) added.
Page 35: The final paragraph has been revised to discuss the importance of solar-like winds for the atmospheric evolution of extrasolar planets, especially ones that orbit very close to their stars. Added references to Ribas et al. (2005), Grießmeier et al. (2004), and Preusse et al. (2005).
Page 36: Section 6 has been added to briefly discuss the future of the subject, and to provide a less abrupt ending to the article. | CommonCrawl |
\begin{definition}[Definition:Plane Number/Example]
An example of a plane number is $6$.
Its divisors $2$ and $3$ are its sides.
Category:Definitions/Number Theory
\end{definition} | ProofWiki |
Vinculum (symbol)
A vinculum (from Latin vinculum 'fetter, chain, tie') is a horizontal line used in mathematical notation for various purposes. It may be placed as an overline (or underline) over (or under) a mathematical expression to indicate that the expression is to be considered grouped together. Historically, vincula were extensively used to group items together, especially in written mathematics, but in modern mathematics this function has almost entirely been replaced by the use of parentheses.[1] It was also used to mark Roman numerals whose values are multiplied by 1,000.[2] Today, however, the common usage of a vinculum to indicate the repetend of a repeating decimal[3][4] is a significant exception and reflects the original usage.
Vinculum usage:
• ${\overline {\rm {AB}}}$: line segment from A to B
• 1⁄7 = $0.{\overline {142857}}$: the repetend 142857 repeated, 0.1428571428571428571...
• ${\overline {a+bi}}$: complex conjugate
• $Y={\overline {\rm {AB}}}$: boolean NOT (A AND B)
• ${\sqrt[{n}]{ab+2}}$: radical over ab + 2
• $a-{\overline {b+c}}$ = a − (b + c): bracketing function
History
The vinculum, in its general use, was introduced by Frans van Schooten in 1646 as he edited the works of François Viète (who had himself not used this notation). However, earlier versions, such as using an underline as Chuquet did in 1484, or in limited form as Descartes did in 1637, using it only in relation to the radical sign, were common.[5]
Usage
Modern
A vinculum can indicate a line segment where A and B are the endpoints:
• ${\overline {\rm {AB}}}.$
A vinculum can indicate the repetend of a repeating decimal value:
• 1⁄7 = $0.{\overline {142857}}$ = 0.1428571428571428571...
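The digits placed under the vinculum can be found mechanically by long division: the decimal expansion starts repeating as soon as a remainder recurs. A small illustrative sketch (the function name `repetend` is ours, not standard notation):

```python
def repetend(numerator, denominator):
    """Return (non-repeating digits, repeating digits) of numerator/denominator
    in base 10, e.g. the digits that would be written under a vinculum."""
    digits, remainders = [], {}
    n = numerator % denominator
    while n and n not in remainders:
        remainders[n] = len(digits)   # position where this remainder first appeared
        n *= 10
        digits.append(str(n // denominator))
        n %= denominator
    if not n:                         # terminating decimal: no repetend
        return "".join(digits), ""
    start = remainders[n]             # repetition begins where the remainder recurred
    return "".join(digits[:start]), "".join(digits[start:])
```

For 1⁄7 this yields an empty non-repeating part and the repetend 142857; for 1⁄6 it yields "1" followed by the repetend "6".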
A vinculum can indicate the complex conjugate of a complex number:
• ${\overline {2+3i}}=2-3i$
Logarithm of a number less than 1 can conveniently be represented using vinculum:
• $\log 2=0.301\Rightarrow \log 0.2={\overline {1}}.301=-0.699$
In Boolean algebra, a vinculum may be used to represent the operation of inversion (also known as the NOT function):
• $Y={\overline {\rm {AB}}},$
meaning that Y is false only when A and B are both true, or by extension, Y is true when either A or B is false.
Similarly, it is used to show the repeating terms in a periodic continued fraction. Quadratic irrational numbers are the only numbers that have these.
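For instance, √n (for n not a perfect square) has a periodic continued fraction that can be computed with the standard (m, d, a) recurrence; the period of √n always ends when a partial quotient reaches 2⌊√n⌋. A brief illustrative sketch (function name is ours):

```python
from math import isqrt

def sqrt_continued_fraction(n):
    """Return (a0, period) with sqrt(n) = [a0; period, period, ...]."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        raise ValueError("n is a perfect square; sqrt(n) is rational")
    m, d, a, period = 0, 1, a0, []
    while a != 2 * a0:          # the period of sqrt(n) always ends at 2*a0
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period
```

So √2 = [1; 2, 2, 2, ...] and √7 = [2; 1, 1, 1, 4, 1, 1, 1, 4, ...], with the repeating block conventionally written under a vinculum.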
Historical
Formerly its main use was as a notation to indicate a group (a bracketing device serving the same function as parentheses):
$a-{\overline {b+c}},$
meaning to add b and c first and then subtract the result from a, which would be written more commonly today as a − (b + c). Parentheses, used for grouping, are only rarely found in the mathematical literature before the eighteenth century. The vinculum was used extensively, usually as an overline, but Chuquet in 1484 used the underline version.[6]
In India, the use of this notation is still tested in primary school.[7]
As a part of a radical
The vinculum is used as part of the notation of a radical to indicate the radicand whose root is being indicated. In the following, the quantity $ab+2$ is the whole radicand, and thus has a vinculum over it:
${\sqrt[{n}]{ab+2}}.$
In 1637 Descartes was the first to unite the German radical sign √ with the vinculum to create the radical symbol in common use today.[8]
The symbol used to indicate a vinculum need not be a line segment (overline or underline); sometimes braces can be used (pointing either up or down).[9]
Encodings
Main article: Overline § Implementations
In Unicode
• U+0305 ◌̅ COMBINING OVERLINE
TeX
In LaTeX, a text <text> can be overlined with $\overline{\mbox{<text>}}$. The inner \mbox{} is necessary to override the math-mode (here invoked by the dollar signs) which the \overline{} demands.
See also
• Overline § Math and science similar-looking symbols
• Overline § Implementations in word processing and text editing software
• Underline
References
1. Cajori, Florian (2012) [1928]. A History of Mathematical Notations. Vol. I. Dover. p. 384. ISBN 978-0-486-67766-8.
2. Ifrah, Georges (2000). The Universal History of Numbers: From Prehistory to the Invention of the Computer. Translated by David Bellos, E. F. Harding, Sophie Wood, Ian Monk. John Wiley & Sons.
3. Childs, Lindsay N. (2009). A Concrete Introduction to Higher Algebra (3rd ed.). Springer. pp. 183–188.
4. Conférence Intercantonale de l'Instruction Publique de la Suisse Romande et du Tessin (2011). Aide-mémoire. Mathématiques 9-10-11. LEP. pp. 20–21.
5. Cajori 2012, p. 386
6. Cajori 2012, pp. 390–391
7. https://www.khanacademy.org/math/middle-school-math-india/x888d92141b3e0e09:bridge-7th/x888d92141b3e0e09:untitled-302/e/b7-bodmas-1
8. Cajori 2012, p. 208
9. Abbott, Jacob (1847) [1847], Vulgar and decimal fractions (The Mount Vernon Arithmetic Part II), p. 27
External links
• Weisstein, Eric W. "Periodic Continued Fraction". MathWorld.
• Weisstein, Eric W. "Vinculum". MathWorld.
| Wikipedia |
Improved production of melanin from Aspergillus fumigatus AFGRD105 by optimization of media factors
Nitya Meenakshi Raman1,
Pooja Harish Shah1,
Misha Mohan1 &
Suganthi Ramasamy1
Melanins are indolic polymers produced by many genera of plants, animals and microorganisms, and are targeted mainly for their wide range of applications in cosmetics, agriculture and medicine. An approach to analyse the cumulative effect of parameters for enhanced melanin production was carried out using response surface methodology. In the present study, optimization of media and process parameters for melanin production from Aspergillus fumigatus AFGRD105 (GenBank: JX041523; NFCCI accession number: 3826) was carried out by an initial univariate approach followed by statistical response surface methodology. The univariate approach was used to standardise the parameters entering the 12-run Plackett–Burman design, which was used to screen for critical parameters. Further optimization of parameters was carried out using the Box–Behnken design. The significant factors identified were temperature, moisture and sodium dihydrogen phosphate concentration. The yield of every run of both designs was confirmed to be melanin by laboratory analysis in the presence of acids, base and water. This is the first report confirming an increase in melanin production from A. fumigatus AFGRD105 without the addition of costly additives.
Melanin has been reported to be produced by various bacteria (Lagunas-Munoz et al. 2006), fungi and many members of the plant kingdom. In microorganisms, melanin biosynthesis has been studied largely for its roles in UV protection, antibiotic binding and resistance in pathogenic bacteria, and its biosynthetic pathways have been examined as potential drug targets in antimicrobial therapy. Wide uses of melanin have been previously reported in the fields of cosmetics, protective agents in eye wear, insecticidal crystals and photo-protective creams (Zhang et al. 2007).
The production cost of any biotechnological process can be considerably reduced by optimization of the process (Sangkharak and Prasertsan 2007). The use of multivariate experimental design techniques is becoming increasingly widespread in applied biotechnology, as they allow the simultaneous study of several control variables, resulting in faster implementation and a more cost-effective approach than traditional univariate approaches (Nikel et al. 2005). The statistical method is a versatile technique for investigating multiple process variables because it allows the process to be optimized with fewer experimental trials (Bajaj et al. 2009).
Several experimental design models can be employed to reduce the number of experiments under different conditions. Plackett–Burman and Box–Behnken designs are among the most widely used statistical techniques for optimization of biological processes. The Plackett–Burman experimental design is a two-level factorial design, which identifies the critical physicochemical parameters by screening N variables in N + 1 experiments (Plackett and Burman 1946), but it does not consider the interaction effects among the variables. The variables found significant in this initial screening can be further optimized using response surface methodology (RSM), a collection of statistical techniques that uses design of experiments (DoE) for building models, evaluating the effects of factors and predicting optimum conditions (Abdel-Nabi et al. 1998). It is now extensively applied in the optimization of medium composition, conditions of enzymatic hydrolysis, fermentation and food manufacturing processes. Based on the results of the single-factor experiments and the Plackett–Burman (PB) design, RSM with a Box–Behnken design was used to study the influence of the major factors, and the interactions between them, on the response value.
For this study, an initial standardization was carried out, optimizing the age of A. fumigatus AFGRD105 for maximum melanin production using standard media without substitution of other carbon or nitrogen sources. Statistical experimental methods were then applied to screen the significant medium components affecting melanin production and to evaluate the optimal levels of the significant variables, standardizing the parameters for study by an initial univariate approach followed by the use of statistical designs.
The identified strain A. fumigatus AFGRD105 (GenBank: JX041523) was grown on Sabourauds dextrose agar plates and slants incubated for 5 days at 45 °C, after which they were maintained at 4 °C for further subculturing. The entire set of experiments was performed using freshly subcultured strains. As major (wild-type) strains used in a study must be deposited in a publicly accessible culture collection, the strain was deposited at the National Facility for Culture Collection of Fungi (NFCCI, WDCM 932), Pune, India, and the NFCCI accession number 3826 was obtained.
Univariate approach confirming the optimum age, carbon and nitrogen source by submerged fermentation
Age: The strain was grown on Sabourauds Dextrose Agar (Dextrose 20 g, Peptone 10 g, Agar 15 g, Distilled Water 1000 mL) for a period of 10 days and checked for growth and melanization of the conidia present in the culture. Once the greying of the conidial samples was found, melanin isolation was repeated for each day till the tenth day. The lyophilized samples of melanin were weighed and plotted for comparison.
Optimization of carbon sources: A. fumigatus AFGRD105 was grown on medium substituted with various carbon sources (Dextrose, Galactose, Sucrose, Mannitol and Sorbitol) at the same concentration of Dextrose mentioned in the composition of the Sabourauds Dextrose Agar medium. Further, the optimum carbon source obtained was added in a range of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 g/100 mL and dry weight of the samples was measured at the optimized age of the culture and concentration of melanin was determined by isolating it from each sample.
Optimization of nitrogen sources: Medium with varying concentrations of Peptone and Yeast Extract (0.25, 0.5, 0.75, 1.0, 1.5 and 2.0 g/100 mL) was substituted in Sabourauds Dextrose Agar medium composition and the selected strain was inoculated for the optimized number of days. Dry weight of the fungus grown on each substituted media was measured. Melanin was isolated and the amount of melanin was determined.
C:N ratio utilization: For determining the C:N ratio, 250 mL shake flasks were used containing dextrose as the carbon source and peptone as the nitrogen source. A suspension of 5 × 10⁶ spores/mL was standardised in 500 µL, inoculated into 100 mL of media and placed on a rotary shaker for a period of 10 days, with further additions of dextrose from the 6th day. In addition to dextrose and peptone, NaH2PO4, KH2PO4, MgSO4, CaCl2 and FeSO4 were added, and the pH was adjusted to seven prior to sterilization. The equivalent ratio of carbon and nitrogen with the amount of dextrose added is given in Table 1.
Table 1 Media composition for determination of C:N ratio
Multivariate approach
The optimum conditions for the age of the culture (time required for optimum production), carbon source and nitrogen source were subjected to other parameters using statistical package. The concentration of these sources was then subjected to high and low values averaging the optimum value. All the data were treated with the aid of design expert from Stat-Ease (8.0.7.1).
Plackett–Burman Experimental Design: The Plackett–Burman (PB) design, an efficient technique for medium-component optimization (Yong et al. 2011), was used to identify factors that significantly influenced melanin production from A. fumigatus AFGRD105. The PB design is a special type of two-level fractional factorial design based on the principle of balanced incomplete blocks. It can identify the main factors from a list of candidate factors with the least number of experiments. The total number of trials to be carried out according to the Plackett–Burman design is n + 1, where n is the number of variables (medium components). Each variable is represented at two levels, high and low, denoted by (+1) and (−1), respectively. Eleven process parameters, including pH, temperature, inoculum volume, incubation time, substrate, moisture content, NaH2PO4, KH2PO4, MgSO4, CaCl2 and FeSO4, were set at the two levels of +1 and −1. This design is used for the characterization of a model that identifies the significant variables when there is no interaction among the factors (Plackett and Burman 1946). The statistical significance of the model was assessed by Fisher's test and ANOVA. The factors involved in the experimental design are listed in Table 1 and the design parameters are given in Table 2.
Table 2 Experimental variables at different levels used for production of Melanin
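In coded units, the 12-run PB matrix can be generated from Plackett and Burman's published 11-element generator row by taking its 11 cyclic shifts and appending a row of all low levels. A minimal sketch of that construction (illustrative only; it is not the Design-Expert software used in the study, and `plackett_burman_12` is a hypothetical helper name):

```python
import numpy as np

# Plackett & Burman's generator row for N = 12 runs (+1 = high level, -1 = low level)
GENERATOR = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

def plackett_burman_12():
    """12-run, 11-factor Plackett-Burman design: the 11 cyclic shifts of the
    generator row, followed by a final row with every factor at its low level."""
    rows = [np.roll(GENERATOR, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

X = plackett_burman_12()
# Each column is balanced (six +1 and six -1) and the columns are mutually
# orthogonal, which is what lets main effects be estimated independently.
```

Orthogonality can be checked directly: X.T @ X equals 12 times the identity matrix.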
Optimization of growth parameters by Box–Behnken design: The independent variables found to have a significant positive effect in the PB design were optimized by response surface methodology (Table 3). In general, response surface methodology comprises the Box–Behnken (BB) design and the Central Composite Design (CCD). The CCD is a five-level fractional factorial design which is strongly dependent on the accuracy of the central point. Based on the results of the PB design, the BB design was conducted to obtain the optimal levels of the main factors identified in the PB experiment. Each variable was studied at three different levels: low, intermediate and high (−1, 0, +1). The experimental design comprised 17 flasks for the strain with three factors (Table 4). Response surface graphs were obtained to understand the effect of the variables, individually and in combination, and to determine their optimum levels for maximum melanin production. The data obtained from the 17 experiments were used to find the optimum point of the process parameters using the Box–Behnken design in response surface methodology.
Table 3 Twelve-trial Plackett–Burman design matrix for the experimental variables with coded values for melanin production
Table 4 Experimental variables for RSM at different levels used for production of Melanin
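For three factors, a Box–Behnken design places runs at the midpoints of the cube's edges (each pair of factors at ±1 with the remaining factor held at 0) plus replicated centre points, which gives the 17 flasks mentioned above (12 edge points + 5 centre replicates). A minimal sketch of this construction in coded units (illustrative only; `box_behnken` is a hypothetical helper name, not part of the software used in the study):

```python
from itertools import combinations, product

def box_behnken(n_factors=3, center_points=5):
    """Coded Box-Behnken design: for each pair of factors take the four
    (+/-1, +/-1) combinations with all other factors held at 0, then add
    replicated centre points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([0] * n_factors for _ in range(center_points))
    return runs

design = box_behnken(3, 5)   # 3 pairs * 4 combinations + 5 centre points = 17 runs
```

Every non-centre run varies exactly two factors at a time, which is what allows the quadratic response surface to be fitted from only 17 experiments.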
Extraction of melanin from A. fumigatus AFGRD105
Conidia were collected from A. fumigatus AFGRD105, grown for 5 days on SDA slants or plates, by adding 5 mL of sterile Phosphate Buffered Saline (PBS) of 1X concentration (8 g NaCl, 0.2 g KCl, 1.44 g Na2HPO4, 0.24 g KH2PO4, pH 7.4, Distilled Water 1000 mL), centrifuged at 8000 g for 30 min and washed in PBS thrice. A final wash was done using 1 M Sorbitol and 0.1 M Sodium Citrate (pH 5.5). 5 μL Macerozyme (10 mg/mL), the cell-lysing enzyme (Himedia; from Rhizopus spp.), was added and incubated overnight at 30 °C to generate protoplasts. The protoplasts were collected by centrifugation, washed thrice with PBS and left overnight in 4.0 M Guanidine Thiocyanate (Himedia) at room temperature. The dark particles collected by centrifugation at 5000 g for 10 min were subjected to three washes with PBS, followed by treatment with Reaction Buffer (10.0 mM Tris, 1.0 mM CaCl2 and 0.5 % SDS, pH 7.8) containing 10 μL of 10 mg/mL Proteinase K, and incubated at 37 °C. The debris obtained was boiled in 6.0 M HCl for 90 min. After the acid treatment, the melanin particles were collected by filtration through Whatman paper and washed extensively with distilled water at 2 h intervals until a neutral pH was obtained. The pH of the distilled water used to wash the crude melanin was checked with methyl orange until it came close to seven, indicating complete removal of the acid, and the product was lyophilized as required.
Analysis of melanin and its biomass trend
The lyophilized particles were checked for colour; solubility in inorganic solvents (distilled water (pH 7), 1 N NaOH and 1 N HCl); solubility in organic solvents (ethanol, warm chloroform, warm acetone, benzene and phenol); precipitation (1 % ferric chloride, 1 N HCl and 1 N H2SO4); oxidation (6 % sodium hypochlorite, 30 % H2O2); and reduction (H2S and 5 % sodium hydrosulphite). Tests were carried out in parallel with synthetic melanin (Myko Teck Pvt Ltd, Goa) for comparison. To further validate the results obtained using RSM, a comparative analysis of melanin production before and after optimization was carried out.
Aspergillus fumigatus has been exploited as a major source of secondary metabolites with potential commercial applications in the field of enzymes, pharmaceuticals, cosmetics and agriculture. This current study on optimization of melanin production plays a vital role in the cost effectiveness of melanin production.
The direct dry weight measurement of A. fumigatus AFGRD105 grown on supplemented media resulted in a typical sigmoid pattern. It was also observed in the present study that the dry matter weight of the substrate gradually decreased as the growth of A. fumigatus AFGRD105 progressed. The growth time was set at 5 days for optimizing the carbon and nitrogen sources, since melanin production remains the same after day 5 and the conidial samples tend to become a dry mass of spores (Fig. 1).
Variation in the production of dry weight and melanin production of the A. fumigatus AFGRD105 over a period of 10 days
Of the carbon sources tested, dextrose, the standard sugar used in Sabouraud's Dextrose Agar medium, was found to be the best carbon source for the growth of A. fumigatus AFGRD105 and for melanin production. The other sources, glucose, sucrose and sorbitol, showed progressively lower dry weights of the fungus on the 5th day, with mannitol being the lowest. The same pattern was observed for melanin production, although the lowest melanin production, found with mannitol, can be attributed to the low dry weight of the mycelium. A gradual increase was, however, found as the amount of dextrose in the medium was increased (Fig. 2).
Effect of carbon source, dextrose, peptone and yeast extract on mycelial dry weight and melanin production by A. fumigatus AFGRD105
The effect of nitrogen source on both mycelial growth and melanin production was observed, with the highest mycelial biomass achieved in the medium containing 1.0 g/100 mL of peptone for A. fumigatus AFGRD105. It is interesting to note that a higher concentration of the nitrogen source is essentially not needed for mycelial growth, which is the opposite of the result for the carbon source (Fig. 2). Therefore, A. fumigatus AFGRD105 demonstrated enhanced production of mycelial biomass and melanin when cultured in media containing the unsubstituted SDA.
For the initial C:N ratios of 7.2, 16.2 and 20.6, the amount of peptone as nitrogen source was held at the same level (Table 1) whereas the concentration of dextrose varied; thus the limiting source in this experiment was the nitrogen source. The production profile of melanin is shown in Fig. 3, confirming that the production of melanin in Aspergillus fumigatus is sourced from spores, unlike other metabolites that can be sourced from both hyphae and conidia. Dextrose was added on day 6 to maintain a final equivalent ratio of carbon and nitrogen sources of 25.25 in all cases (Table 1). Figure 3 suggests that the addition of dextrose did not affect the production of melanin due to limitation of the nitrogen source, but the presence of excess dextrose from day 6 enhanced the rate of production of melanin.
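As a rough sketch of how such C:N mass ratios can be computed (the elemental fractions used below, 0.40 for the carbon content of dextrose and 0.16 for the nitrogen content of peptone, are illustrative assumptions, not values taken from this study):

```python
def cn_ratio(dextrose_g, peptone_g, c_frac=0.40, n_frac=0.16):
    """Carbon-to-nitrogen mass ratio of a dextrose/peptone medium.

    c_frac: carbon mass fraction of dextrose (C6H12O6: 72/180 = 0.40).
    n_frac: assumed nitrogen mass fraction of peptone (illustrative).
    """
    return (dextrose_g * c_frac) / (peptone_g * n_frac)

# Holding peptone fixed while raising dextrose raises the C:N ratio,
# mirroring how the different initial ratios were obtained.
print(round(cn_ratio(2.0, 1.0), 2))  # 5.0
```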
a Production of melanin for various C:N ratio. b Comparison of melanin production and biomass of Aspergillus fumigatus inoculated as hyphal and spore suspensions
The levels of the variables for the PB design were selected (Table 3) according to the previous single-factor experiments. Based on this selection, a 12-run PB experiment was chosen to pick out the main factors in the fermentative process for the production of melanin. The Pareto chart showed that the values for temperature, moisture and sodium dihydrogen phosphate were above the Bonferroni limit, indicating the significance of these factors. The main factors were picked out at the 95 % confidence level based on their effects. According to the t test results, temperature, moisture and sodium dihydrogen phosphate were considered the three major factors affecting production. The remaining factors were below the 90 % level and were therefore considered insignificant. Experimental runs and their respective melanin yields are presented in Table 3. The adequacy of the model was checked using analysis of variance (ANOVA), tested using Fisher's statistical analysis. The model F value of 5.05 implied that the model was significant, with only a 0.25 % chance that an F value this large could occur due to noise. The p values denote the significance of the coefficients and are also important in understanding the pattern of the mutual interactions between the variables.
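A 12-run PB matrix of the kind used for such screening can be generated from the standard Plackett and Burman (1946) cyclic generator; the sketch below only shows the construction of the balanced two-level matrix, and the assignment of the eleven columns to actual factors is left open:

```python
def pb12():
    """12-run Plackett-Burman design: 11 cyclic shifts of the
    standard generator row, plus a final row of all -1."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = pb12()
# Every column is balanced: six high (+1) and six low (-1) settings,
# so each main effect is estimated from an equal number of runs.
assert all(sum(row[j] for row in design) == 0 for j in range(11))
```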
The major factors including temperature, moisture and sodium dihydrogen phosphate were selected for further optimization by using BB. Based on the results of PB experiment, the BB experiment was designed and conducted, as is shown in Tables 3 and 5. Each of the three major factors (temperature, moisture and sodium dihydrogen phosphate) was designed in three levels (Table 4). The BB experiment results were submitted to ANOVA using the Design Expert software (version 8.0, Stat-Ease Inc., Minneapolis, USA), and the regression model was given as mentioned below which indicated that the experimental results of BB could be fitted into the final equation of factors as second order regression.
$$\begin{aligned} R1\,(\text{Yield}) & = 0.055135 - 0.00359A + 0.002987B + 0.001471C - 0.93AB \\ & \quad - 0.0019AC - 0.00011BC + 0.0041A^{2} - 0.0025B^{2} - 0.0035C^{2} \end{aligned}$$
where R1 is the yield of melanin obtained and A, B and C are the coded values for temperature, moisture and sodium dihydrogen phosphate, respectively.
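A minimal sketch of evaluating the fitted model at coded factor levels, with the coefficients transcribed directly from the regression equation above (including the AB coefficient of 0.93 as printed):

```python
def r1_yield(A, B, C):
    """Predicted melanin yield at coded levels A (temperature),
    B (moisture) and C (sodium dihydrogen phosphate)."""
    return (0.055135
            - 0.00359 * A + 0.002987 * B + 0.001471 * C
            - 0.93 * A * B - 0.0019 * A * C - 0.00011 * B * C
            + 0.0041 * A**2 - 0.0025 * B**2 - 0.0035 * C**2)

# At the centre point (all factors at their middle coded level, 0)
# the model returns the intercept.
print(r1_yield(0, 0, 0))  # 0.055135
```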
Table 5 Seventeen trial Box–Behnken design matrix for the experimental variables with coded values
The ANOVA of the quadratic regression model demonstrated that the above-mentioned equation is highly significant. It was in reasonable agreement with the predicted R2 of 0.8212. The lack-of-fit value for the regression was not significant (0.1010), indicating that the model equation was adequate for predicting melanin production under any combination of values of the variables.
The present statistical analysis using the PB design identified the critical parameters that affected melanin production; further optimization using the BB design determined that production was predominantly influenced by temperature, moisture and sodium dihydrogen phosphate. The graphical representation provides a method to visualize the relationship between the response and the experimental levels of each variable, and the type of interactions between test variables, in order to deduce the optimum conditions. The contour plots show the region of desirability, together with the point prediction from the analysis of variance for the response surface cubic model, for the production of melanin (Fig. 4).
The surface plots of response surface methodology showing the effect of temperature, moisture and sodium dihydrogen phosphate and their mutual interaction on melanin production. a Sodium dihydrogen phosphate and temperature; b temperature and moisture; c moisture and sodium dihydrogen phosphate. d Melanin production before and after optimization
The interaction effects and optimal levels of the variables were determined by plotting the three-dimensional (3D) response surface curves. The most efficient and economical conditions were to use the lowest concentrations of all the parameters for an optimal response. By using the response surface methodology, an attempt was made to understand the important variables to obtain an efficient response for maximum yield of melanin.
The pigment could not be dissolved in water, acid, ethanol, warm chloroform, warm acetone or benzene. The pigment was soluble in a concentrated alkaline solution or in phenol. The dissolved, extracted black pigment was lightened by the oxidizing agents NaClO and H2O2, as well as by the reducing agents H2S and Na2S2O4 (sodium hydrosulfite). The black pigment also reacted positively in a test for polyphenols with FeCl3, producing a flocculent brown precipitate. Our extracts reacted identically to synthetic melanin.
Melanin production before and after optimization, illustrated in Fig. 4d, confirms that the optimized process parameters reached a higher concentration than regular laboratory conditions. These process parameters were considered after the fungus was allowed to form greenish-grey conidia, confirming the presence of melanin, essentially after the 4th day and monitored on an hourly basis. Optimization increased the production of melanin from 3.4 mg/L to 6.6 mg/L, a roughly twofold increase. The biomass trend also indicates that the production of melanin is constant after a period of 5 days, similar to the results obtained in Fig. 1. This production of melanin by A. fumigatus AFGRD105 without the addition of any precursors and with the use of minimum medium components indicates a cost-effective alternative.
Temperature and moisture have largely been associated with the fungal populations in aiding their growth even under lesser influential conditions (Nielsen et al. 2004; McGinnis 2007); whereas phosphates have largely been targeted in melanin related studies as an element in every composition of media (Alviano et al. 1991). The potential positive correlation of factors like temperature, moisture and phosphates may also be attributed to the results obtained on the direct dependency of growth of fungus with melanin production.
The production of melanin with respect to C:N utilization in the medium by fungi is undertaken here for the first time. Although various other fungi have been subjected to the production of biotechnologically prominent metabolites, this study confirms the production of melanin by the conidia of Aspergillus fumigatus, as no melanin was obtained from the hyphae. With respect to these results, it can be confirmed that the yield of melanin can be improved by carbon even though growth has been arrested by the nitrogen source.
Statistical designs are effective tools that can be used to account for the main as well as the interactive influences of fermentation parameters on the process performance. Among them, RSM is a collection of certain statistical techniques for designing experiments, building models, evaluating the effect of the factors and searching for optimal conditions for desirable responses. Therefore, during the past decades, RSM has been extensively applied in the optimization of medium composition, fermentation conditions and food manufacturing processes (Vazquez and Martin 1997; Park et al. 2005).
Statistical designs have aided in analysing the important factors with minimum labour and low time consumption, and also proved useful in optimizing the medium composition for melanin production from A. fumigatus AFGRD105. Optimizing the media conditions revealed that higher yield was positively correlated with temperature, moisture and sodium dihydrogen phosphate.
In this present study, the statistical methodology, combination of the PB design and the Box–Behnken design, is demonstrated to be effective and reliable in selecting the statistically significant factors and finding the optimal concentration of those factors in the fermentation medium for melanin production (Bajaj and Singhal 2009; Yong et al. 2011). The interaction effects and optimal levels of the variables were determined by plotting the three-dimensional (3D) response surface curves. The shape of the response surface curves showed strong positive interaction between these tested variables.
Melanin production has also been detailed in Escherichia coli, Klebsiella sp., Bacillus thuringiensis and B. cereus (Lagunas-Munoz et al. 2006; Shrishailnath et al. 2010; Chen et al. 2004; Zhang et al. 2007). The significant variables were quite different from those in this study, which may be due to the fungal source of melanin used here. Studies using Brevundimonas sp. SGJ showed an increased yield of melanin with pH 5.31, tryptone 1.440 g/L, l-tyrosine 1.872 g/L and CuSO4 0.0366 g/L; in that study, the use of RSM resulted in a 3.05-fold increase in melanin production (Surwase et al. 2013).
In this work, the process parameters temperature, moisture and sodium dihydrogen phosphate were selected and optimized to produce melanin. Design Expert from Stat-Ease was used to develop the design of the experiment, and the BB design in RSM was used to optimize the process conditions. It was thus concluded that the point prediction from the analysis of variance for the response surface cubic model can be used as a basic tool for the production of melanin from A. fumigatus AFGRD105. This enhanced production of melanin can be further used in research platforms for cosmetics and dyes.
RSM: response surface methodology
DoE: design of experiments
PCR: polymerase chain reaction
ITS: internal transcribed sequences
PB: Plackett–Burman
BB: Box–Behnken
SSF: solid state fermentation
Abdel-Nabi MA, Ismail AMS, Ahmed SA, Abel Fattah AF (1998) Production and immobilization of alkaline protease from Bacillus mycoides. Bioresour Technol 64:205–210
Alviano CS, Farbiarz SR, De Souza W, Angluster J, Travassos LR (1991) Characterization of Fonsecaea pedrosoi melanin. J Gen Microbiol 137(4):837–844. doi:10.1099/00221287-137-4-837
Bajaj IB, Singhal RS (2009) Enhanced production of poly (γ-glutamic acid) from Bacillus licheniformis NCIM 2324. Bioresour Technol 100:826–832
Bajaj IB, Lele SS, Singhal RS (2009) A statistical approach to optimization of fermentative Licheniformis NCIM 2324 by using metabolic precursors. Appl Biochem Biotechnol 159:133–141
Chen Y, Deng Y, Wang J, Cai J, Ren G (2004) Characterization of melanin produced by a wild-type strain of Bacillus thuringiensis. J Gen Appl Microbiol 50:183–188. doi:10.2323/jgam.50.183
Lagunas-Munoz VH, Cabrera-Valladares N, Bolıvar F, Gosset G, Martınez A (2006) Optimum melanin production using recombinant Escherichia coli. J Appl Microbiol 101:1002–1008. doi:10.1111/j.1365-2672.2006.03013.x
McGinnis MR (2007) Indoor mould development and dispersal. Med Mycol 45(1):1–9. doi:10.1080/13693780600928495
Nielsen KF, Holm G, Uttrup LP, Nielsen PA (2004) Mould growth on building materials under low water activities. Influence of humidity and temperature on fungal growth and secondary metabolism. Int Biodeter Biodegr 54(4):325–336. doi:10.1016/j.ibiod.2004.05.002
Nikel PI, Pettinari MJ, Mendez BS, Galvagno MA (2005) Statistical optimization of a culture medium for biomass and poly (3-hydroxybutyrate) production by a recombinant Escherichia coli strain using agroindustrial byproducts. Int Microbiol 8:243–250
Park PK, Cho DH, Kim EY, Chu KH (2005) Optimization of carotenoid production by Rhodotorula glutinis using statistical experimental design. World J Microbiol Biotechnol 21:429–434. doi:10.1007/s11274-004-1891-3
Plackett RL, Burman JP (1946) The design of optimum multifactorial experiments. Biometrika 33:305–325. doi:10.1093/biomet/33.4.305
Sangkharak K, Prasertsan P (2007) Optimization of polyhydroxybutyrate production from a wild type and two mutant strains of Rhodobacter sphaeroides using statistical method. J Biotechnol 132:331–340. doi:10.1016/j.jbiotec.2007.07.721
Shrishailnath S, Kulkarni G, Yaligara V, Kyoung L, Karegoudar T (2010) Purification and physiochemical characterization of melanin pigment from Klebsiella sp. GSK. J Microbiol Biotechnol 20:1513–1520
Surwase SN, Jadhav SB, Phugare SS, Jadhav JP (2013) Optimization of melanin production by Brevundimonas sp. SGJ using response surface methodology. 3 Biotech 3(3):187–194. doi:10.1007/s13205-012-0082-4
Vazquez M, Martin AM (1997) Optimization of Phaffia rhodozyma continuous culture through response surface methodology. Biotechnology 57:314–320. doi:10.1002/(SICI)1097-0290(19980205)57:3%3C314:AID-BIT8%3E3.3.CO;2-V
Yong X, Raza W, Yu G, Ran W, Shen Q, Yang X (2011) Optimization of the production of poly-γ-glutamic acid by Bacillus amyloliquefaciens C1 in solid-state fermentation using dairy manure compost and monosodium glutamate production residues as basic substrates. Bioresour Technol 102:7548–7554. doi:10.1016/j.biortech.2011.05.05
Zhang J, Cai J, Deng Y, Chen Y, Ren G (2007) Characterization of melanin produced by a wild-type strain of Bacillus cereus. Front Biol China 2:26–29. doi:10.1007/s11515-007-0004-8
NMR, PHS and MM performed all the experiments, assisted by SR. NMR and SR wrote the manuscript, assisted by all co-authors. NMR and SR designed the study. SR assisted NMR and PHS with the statistical analysis and in the discussion on the interpretation of the data. MM was committed to all the experiments. All authors read and approved the final manuscript.
We would like to thank the lab technicians at the NFCCI for providing the identification of the isolated wild strain. We would also like to thank the management of Dr. G. R. Damodaran College of Science for providing the facilities to carry out this work.
Department of Biotechnology, Dr. G. R. Damodaran College of Science, 641 014, Coimbatore, India
Nitya Meenakshi Raman, Pooja Harish Shah, Misha Mohan & Suganthi Ramasamy
Correspondence to Suganthi Ramasamy.
Raman, N.M., Shah, P.H., Mohan, M. et al. Improved production of melanin from Aspergillus fumigatus AFGRD105 by optimization of media factors. AMB Expr 5, 72 (2015). https://doi.org/10.1186/s13568-015-0161-0
Response-surface methodology
Plackett–Burman design
\begin{document}
\author{Christian Hirsch} \address[Christian Hirsch]{University of Groningen, Bernoulli Institute, Nijenborgh~9, 9747 AG Groningen, The Netherlands} \email{[email protected]}
\thanks{This work is supported by The Danish Council for Independent Research | Natural Sciences, grant DFF -- 7014-00074 \emph{Statistics for point processes in space and beyond}, and by the \emph{Centre for Stochastic Geometry and Advanced Bioimaging}, funded by grant 8721 from the Villum Foundation.}
\title[Mod two Invariants of Weyl groups]{On the decomposability of mod 2 cohomological invariants of Weyl groups}
\begin{abstract} We compute the invariants of Weyl groups in mod 2 Milnor $K$-theory and more general cycle modules, which are annihilated by 2. Over a base field of characteristic coprime to the group order, the invariants decompose as direct sums of the coefficient module. All basis elements are induced either by Stiefel-Whitney classes or specific invariants in the Witt ring. The proof is based on Serre's splitting principle that guarantees detection of invariants on elementary abelian 2-subgroups generated by reflections. \end{abstract}
\maketitle
\goodbreak \section{Introduction} \label{introSec}
\noindent
Let $G$ be a smooth affine algebraic group over a field $k_0$ of characteristic not 2. Motivated from the concept of characteristic classes in topology, the idea behind \emph{cohomological invariants} as presented by J.-P.~Serre in \cite{CohInv} is to provide tools for detecting that two torsors are not isomorphic. Loosely speaking, such an invariant assigns a value in an abelian group to an algebraic object, such as a quadratic form or an \'etale algebra.
\smallbreak
The formal definition of a cohomological invariant is due to J.-P.~Serre and appears in his lectures \cite{CohInv}, where also a brief account of the history of the subject is given. First, we identify the pointed set of isomorphism classes of $G$-torsors over a field $k$ with the first non-abelian Galois cohomology $H^1(k, G)$. Further, let $M$ be a functor from the category $\mathcal F _{k_0}$ of finitely generated field extensions of $k_0$ to abelian groups. Then, a \emph{cohomological invariant} of $G$ with values in the coefficient space $M$ is a natural transformation from $H^1( -, G)$ to $M( - )$ considered as functors on $\mathcal F _{k_0}$. Interesting examples of the functor $M$ include Witt groups or Milnor $K$-theory modulo $2$, which is the same as Galois cohomology with $\mathbb Z/2$-coefficients by Voevodsky's proof of the Milnor conjecture.
\smallbreak
In general, the cohomological invariants of a given algebraic group with values in some functor $M$ are hard to compute and there are only a few explicit computations carried out yet. One exception are the cohomological invariants of the orthogonal group over a field of characteristic not $2$ with values in Milnor $K$-theory modulo $2$. These invariants are generated by Stiefel-Whitney classes $$ w_i:\, H^1(-, O_n) \to K^{\mathsf M}_i( - )/2 $$ introduced by Delzant \cite{De62}. Now, every finite group $G$ embeds in a symmetric group $S_n$ for an appropriate $n$, and this group in turn embeds in $O_n$. Pulling back the Stiefel-Whitney classes along such homomorphisms $G \to S_n \to O_n$ is a rich source of cohomological invariants of finite groups considered as group scheme of finite type over a base field $k_0$.
\smallbreak
In this work, we show that most cohomological invariants of a Weyl group $G$ over a field $k_0$ of characteristic coprime to $|G|$ arise in this way if the coefficient space is a cycle module $M_\ast$ in the sense of Rost~\cite{Ro96}, which is annihilated by~$2$. More precisely, there exists a finite family of invariants $\{a_i\}_{i \in I}$ with values in $K_\ast^M/2$, such that every invariant $a$ over $k_0$ with values in $M_\ast$ decomposes uniquely as $$ a = \sum\limits_{i \in I}a_i m_i, $$ for some constant invariants $m_i \in M_\ast(k_0)$. In characteristic 0, any Weyl group is a product of the irreducible ones mentioned above. Hence, invoking a product formula of J.-P.~Serre yields the decomposition for cohomological invariants.
\smallbreak
The proof of this result is constructive, in the sense that we give precise formulas for the generators $\{a_i\}_{i \in I}$. For most Weyl groups the invariants are induced by Stiefel-Whitney classes coming from embeddings of the Weyl group into certain orthogonal groups. Note that these embeddings make use of the fact that such a Weyl group can be realized as orthogonal reflection group over every field of characteristic not~$2$. However, if the Weyl group has factors of type $D_{2n}$, $E_7$ or $E_8$, then besides Stiefel-Whitney classes also specific Witt-type invariants appear, which induce invariants in mod 2 Milnor $K$-theory via the Milnor isomorphism. All basis elements are invariants derived from either the Stiefel-Whitney or the Witt-ring invariants.
\smallbreak
Crucial for the derivation is Serre's splitting principle for Weyl groups: if two invariants coincide on the elementary abelian $2$-subgroups generated by reflections, then these are the same. This allows the following proof strategy. Since Stiefel-Whitney classes and Witt invariants provide us with a family of invariants, we only have to show that a given invariant coincides on the elementary abelian subgroups with a combination from this list. The invariants are then computed case by case for the various types.
\smallbreak
J.-P.~Serre has recently computed with a different method the invariants of Weyl groups with values in Galois cohomology, see his 2018 Oberwolfach talk \cite{Se18}.
In an e-mail exchange on an earlier version of the present paper, J.-P.~Serre explains how to remove many of the restrictions on the characteristic of $k_0$. An excerpt of his letter is reproduced in Section \ref{serreSec}. J.~Ducoat provided a proof of Serre's splitting principle and attempted to compute the invariants for groups of type $B_n$ and $D_n$ \cite{Du11}. However, many proofs are incomplete as they are ``left to the reader'' or ``similar to previous ones''. Moreover, Theorem 5 on page 4 about the invariants of $W(D_n)$ is not correct as stated, because an invariant in degree $n/2$ is missing. Therefore, we provide detailed computations also for the types $B_n$ and $D_n$.
\medbreak
The content of this article is as follows. In Section \ref{resultsSect}, we state the main result and fix notations and conventions. Next, Section \ref{techLemSec} contains preliminary results. The proof of the main result occupies the rest of the paper. It also includes an appendix, elucidating how to use a {\tt GAP}-program to determine the invariants for $E_7$ and $E_8$.
\smallbreak
\bigbreak
\section*{Acknowledgments} The present manuscript has a long history. It is a condensed version of my diploma thesis at LMU Munich supervised by F.~Morel. I am very grateful for his comments and insights that shaped this work in many ways. The thesis is available online and contains additional background material from algebraic geometry \cite{Hi10} as well as results for reflection groups that are not of Weyl type. Moreover, I thank S.~Gille for massive help and discussions on earlier versions of the manuscript. He was also the one to mention the thesis during a presentation of J.-P.~Serre at the 2018 Oberwolfach meeting. I am very grateful to J.-P.~Serre for a highly insightful e-mail exchange and for sharing with me an early version of his report \cite{Se18}. His remarks helped to both substantially raise the quality of the presentation, and also improve the contents such as removing restrictions on the characteristic in the present paper. Moreover, an earlier version also contained an irritating assumption that $-1$ be a square in $k_0$. Thanks to a more appropriate representation of $W(B_2)$ pointed out by J.-P.~Serre, also this assumption could be removed in the present version. Finally, I thank the anonymous referee for the careful reading of the manuscript and valuable observations that helped to improve the presentation.
\bigbreak
\bigbreak
\begin{center} {\bf\large Part I: Results and methods. } \end{center}
\section{Main theorem and proof strategy} \label{resultsSect}
\subsection{Cycle modules} We consider in this work invariants with values in a cycle module $M_\ast$ in the sense of Rost, which is annihilated by $2$. Recall that a cycle module over a field $k_0$ is a covariant functor $$ k\, \longmapsto\, M_\ast(k)\,: = \;\bigoplus\limits_{n \in \mathbb Z}M_n(k) $$ on the category $\mathcal F _{k_0}$ with values in graded Milnor $K$-theory modules. For a field extension $\iota:k\subseteq L$, the image of $z \in M_\ast(k)$ in $M_\ast(L)$ is denoted by $\iota_\ast(z)$. By definition, cycle modules have further structure and we refer the reader to \cite{Ro96} for details.
The main example of a cycle module is Milnor $K$-theory: \begin{align*} \mc F_{k_0} & \to \text{$\mathbb Z$-graded rings}\\ k &\mapsto K^{\mathsf M}_*(k) = \oplus_{n \ge 0} K^{\mathsf M}_n(k). \end{align*} For $a_1, \dots, a_n \in k^\times$, we denote pure symbols in $K^{\mathsf M}_n(k)$ by $\{a_1, \dots, a_n\}$. The graded abelian group $M_\ast(k)$ has the structure of a graded $K^{\mathsf M}_\ast(k)$-module for every field $k \in \mathcal F _{k_0}$. Hence, if $M_\ast$ is annihilated by 2, it becomes a $K^{\mathsf M}_\ast(k)/2$-module. For ease of notation, we set $\Kt^{\mathsf M}_\ast(k): = K^{\mathsf M}_\ast(k)/2$ and denote the image of a symbol $\{ a_1, \dots, a_n\} \in K^{\mathsf M}_n(k)$ in $\Kt^{\mathsf M}_n(k)$ by $\{a_1, \dots, a_n\}$. We say that $M_\ast$ has a \emph{$\Kt^{\mathsf M}_\ast$-structure} if $M_\ast$ is annihilated by 2.
\bigbreak
\noindent {\it From now on cycle module means cycle module with $\Kt^{\mathsf M}_\ast$-structure. }
\bigbreak
\subsection{Invariants with values in cycle modules} Let $G$ and $M_\ast$ be a linear algebraic group and a cycle module over $k_0$, respectively. Recall from Section \ref{introSec} that a \emph{cohomological invariant} of $G$ with values in $M_n$ is a natural transformation from $H^1(\, -, G)$ to $M_n(\, -\, )$. We denote the set of all invariants of degree $n$ of $G$ with values in $M_\ast$ by $\ms{Inv}^n(G, M_\ast)$, and set $$\ms{Inv}(G, M_\ast):= \ms{Inv}_{k_0}(G, M_\ast):= \bigoplus\limits_{n \in \mathbb Z}\ms{Inv}^n(G, M_\ast).$$
\smallbreak
For $k \in \mathcal F _{k_0}$, any invariant $a \in \ms{Inv}_{k_0}(G, M_\ast)$ restricts to a natural transformation of functors $H^1(\, -, G) \to M_\ast(\, -\, )$ on the full sub-category $\mathcal F _k$ of $\mathcal F _{k_0}$. We denote this restricted invariant by $\ms{res}_{k/k_0}(a)$ or by the same symbol $a$ if the meaning is clear from the context. A particular example of invariants are the \emph{constant invariants}, which are in one-to-one correspondence with elements of $M_\ast(k_0)$: The constant invariant $c \in M_\ast(k_0)$ maps every $x \in H^1(k, G)$ onto the image of $c$ in $M_\ast(k)$ for all $k \in \mathcal F _{k_0}$. The set $\ms{Inv}(G, M_\ast)$ is a $\Kt^{\mathsf M}_\ast(k_0)$-module, so that if $a:\, H^1(\, -, G) \to \Kt^{\mathsf M}_\ast(\, -\, )$ is a Milnor $K$-theory invariant of degree $m$ and $x \in M_n(k_0)$, then $$ a \cdot x\,:\; H^1(k, G)\, \to \, M_{m + n}(k), \; T\, \mapsto a_k(T) x_k $$ is an invariant with values in $M_\ast$ of degree $m + n$. We now define precisely what it means that an invariant can be represented uniquely as a sum of basis elements.
\begin{definition} \label{decomposableDef} Let $M_\ast$ be a cycle module over the field $k_0$, and $G$ a linear algebraic group over $k_0$.
\smallbreak
\begin{itemize}
\item[(i)]
A subgroup $S\subseteq\ms{Inv}_{k_0}^{\ast}(G,M_{\ast})$ is a \emph{free $M_{\ast}(k_0)$-module} with \emph{basis} $a^{(i)} \in \ms{Inv}^{d_i}_{k_0}(G, \Kt^{\mathsf M}_{\ast})$, $i\in I$, if
\begin{align*}
\bigoplus_{i\in I}M_{*- d_i}(k_0) \to S, \qquad\qquad
\{m_i\}_{i \in I} \mapsto \sum_{i \in I} a^{(i)}\cdot m_i
\end{align*}
is an isomorphism of abelian groups. \smallbreak
\item[(ii)] $\ms{Inv}(G, M_\ast)$ is \emph{ completely decomposable} with a finite {basis} $a_i \in \ms{Inv}^{d_i}_{k_0}(G, \Kt^{\mathsf M}_\ast)$ if $\ms{Inv}_k^\ast(G, M_\ast)$ is a free $M_\ast(k)$-module with the corresponding basis $\ms{res}_{k/k_0}(a_i) \in \ms{Inv}_k^{d_i}(G, \Kt^{\mathsf M}_\ast)$, $i \in I$, for all $k \in \mathcal F _{k_0}$. \end{itemize} \end{definition}
\smallbreak
After these preparations, we now state the main result.
\begin{theorem} \label{reflThm}
Let $G$ be an irreducible Weyl group. Let $k_0$ be a field of characteristic coprime to $|G|$ and $M_\ast$ a cycle module over $k_0$. Then, $\ms{Inv}^\ast_{k_0}(G, M_\ast)$ is completely decomposable. \end{theorem}
The proof of Theorem \ref{reflThm} is constructive and we describe the generators explicitly. These depend on the type of the Weyl group and will be given in the course of the computation later on. Now, we explain the strategy starting with a reminder on Weyl groups.
Let $\mathbb E$ be a finite-dimensional real vector space with scalar product $( -, - )$ and orthogonal group $O(\mathbb E)$. Then, $s_v:\mathbb E \to\mathbb E$, \begin{align*}
s_v(w):= w - \frac{2(v, w)}{(v, v)} v, \end{align*} defines the reflection at a vector $v \in \mathbb E$ with $(v, v) \ne 0$.
\smallbreak
Now, the \emph{Weyl group} $W(\Sigma)$ associated with a crystallographic root system $\Sigma\subseteq\mathbb E$ is the subgroup of $O(\mathbb E)$ generated by all reflections $s_\alpha$ at the roots $\alpha \in \Sigma$. By definition of a root system, the scalars ${2(\alpha, \beta)}/{(\alpha, \alpha)}$ are integers for all $\alpha, \beta \in \Sigma$ and the reflections act on the root system. The Weyl group is {\it irreducible} if the corresponding root system is irreducible.
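For instance, in the root system of type $B_2$, with roots $\pm e_1$, $\pm e_2$ and $\pm e_1 \pm e_2$, the reflection at the root $\alpha = e_1$ sends $\beta = e_1 + e_2$ to
$$
s_{e_1}(e_1 + e_2) \,=\, e_1 + e_2 - \frac{2(e_1, e_1 + e_2)}{(e_1, e_1)}\, e_1 \,=\, -e_1 + e_2,
$$
which is again a root, and the Cartan integer $2(\alpha, \beta)/(\alpha, \alpha) = 2$ is indeed integral.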
\smallbreak
The irreducible root systems are classified by types $A_n, B_n, C_n, D_n, E_6, E_7$, $E_8$, $F_4, G_2$. Let $\Sigma$ be such an irreducible root system. Then, there exists a Euclidean space $\mathbb E=\mathbb R^n$ for an appropriate $n$, such that (i) $\Sigma\subseteq V:=\bigoplus\limits_{i \le n}\mathbb Z [1/2]e_i$, where $e_1,\ldots ,e_n$ is the standard basis of $\mathbb R^n$, and (ii) $W(\Sigma)$ maps $V$ into itself. This can be deduced using the realizations of these root systems in Bourbaki \cite[PLATES I-VIII]{LIE4-6}. If now $k_0$ is a field of characteristic not $2$, then $W(\Sigma)$ acts via scalar extension on $V_{k_0}:=k_0\otimes_{\mathbb Z [1/2]}V$ and can thus be realized as an orthogonal reflection group over $k_0$, considering $V_{k_0}$ as a regular bilinear space with the scalar product induced by the restriction of the standard scalar product of $\mathbb E = \mathbb R^n$ to $V$.
\medbreak
The strategy of proof for an irreducible Weyl group $G$ is as follows. We leverage different embeddings of the Weyl group $G$ into an orthogonal group $O_n$ over the field $k_0$. Now, the invariants of $O_n$ with values in $\Kt^{\mathsf M}_\ast$ are generated by the Stiefel-Whitney classes, see \cite{CohInv}. Each embedding $G\hookrightarrow O_n$ gives rise to a family of invariants in $\ms{Inv}(G, \Kt^{\mathsf M}_\ast)$ by composing the Stiefel-Whitney classes with the natural transformation $H^1( -, G) \to H^1( -, O_n)$. As we shall see in Sections \ref{BnSubSect} -- \ref{E6-8SubSect}, these already generate $\ms{Inv}(G, M_\ast)$ except if $G$ is of type $D_{2n}$, $E_7$, or $E_8$. The `missing' invariants have their source in certain Witt invariants.
\medbreak
Having a family of invariants with values in $\Kt^{\mathsf M}_\ast$ at our disposal, we deduce Theorem \ref{reflThm} for an irreducible Weyl group $G$ by showing that this set of invariants contains a basis of $\ms{Inv}(G, M_\ast)$ in the sense of Definition \ref{decomposableDef}. The main tool is the following adaptation of Serre's splitting principle, which is proven in \cite[Corollary 4.10]{GiHi19}. Loosely speaking, if $k_0$ is a field of characteristic coprime to $|G|$, then $\ms{Inv}(G, M_\ast)$ is detected by the maximal elementary abelian $2$-subgroups of $G$ generated by reflections. We let $\Omega(G)$ denote the set of conjugacy classes of maximal elementary $2$-abelian subgroups of $G$, which are generated by reflections.
\smallbreak
Note that the proof of Theorem \ref{reflThm} for Weyl groups of type $G_2$ in Section \ref{G2InvSubSect} is purely group theoretic, in the sense that it uses only the semi-direct product decomposition of the group and not the geometry of the corresponding root system.
\smallbreak
\begin{proposition}[Serre's splitting principle] \label{splitCorollary}
Let $k_0$ be a field, $G$ a Weyl group whose order is coprime to the characteristic of $k_0$, and $M_\ast$ a cycle module over $k_0$.
Then, the canonical map \begin{align*}
\big({\mathsf{res}^P_G}\big)_{[P]}:\, \ms{Inv}(G, M_\ast)& \to \prod_{[P] \in \Omega(G)} \ms{Inv}(P, M_\ast)^{N_G(P)} \end{align*} is injective, where $N_G(P)$ is the normalizer of the maximal elementary $2$-abelian subgroup $P$ of $G$, which is generated by reflections. \end{proposition}
\smallbreak
\noindent We point out that the assumption that the order of the irreducible Weyl group $G$ and the characteristic of $k_0$ are coprime seems not to be necessary, see Section~\ref{serreSec}. This assumption comes from the article \cite{GiHi19}, where the splitting principle is proven for more general orthogonal reflection groups. Removing it there would also remove the assumption from Theorem \ref{reflThm}.
\medbreak
\begin{remark} \label{splittingPrincipleRem} For groups of type $A_n$, $D_n$, $E_6$, $E_7$, or $E_8$, any two roots are conjugate \cite[Rem.\ 4, Sect.\ 2.9]{Hu90}. Hence, an induction argument shows that for these types, there is up to conjugacy only one maximal elementary abelian $2$-subgroup $P$ generated by reflections. In particular, by Proposition \ref{splitCorollary}, the restriction map $\mathsf{res}_G^P$ is injective for simply-laced groups. \end{remark}
\medbreak
The computation of the invariants of an arbitrary Weyl group follows from Theorem~\ref{reflThm} by a product formula of Serre. To state the product formula precisely, we first introduce the notion of a product of invariants. Identifying $H^1(k, G\times G')$ with $ H^1(k, G) \times H^1(k, G')$, for invariants $a \in \ms{Inv}_{k_0}(G, \Kt^{\mathsf M}_\ast)$ and $b \in \ms{Inv}_{k_0}(G', M_\ast)$, we define the product $ab$ through \begin{align*}
(ab)_k:\,H^1(k, G\times G') &\to M_\ast(k)\\
(T, T')&\mapsto a_k(T) b_k(T'). \end{align*} \begin{proposition}[Product formula] \label{productLem} Let $M_\ast$ be a cycle module and $G, G'$ algebraic groups over $k_0$. If $\ms{Inv}^\ast_{k_0}(G, M_\ast)$ is completely decomposable with finite basis $\{a_i\}_{i \in I}$, then the map
\begin{align*} \bigoplus_{i \in I}\ms{Inv}^\ast_k(G', M_\ast) & \to \ms{Inv}^\ast_k(G \times G', M_\ast)\\ (b_i)_{i \in I } &\mapsto \sum_{i \in I }\ms{res}_{k/k_0}(a_i) b_i \end{align*} is an isomorphism for all $k \in \mathcal F _{k_0}$. In particular, if the invariants of both $G$ and $G'$ are completely decomposable, then so is $\ms{Inv}^\ast_{k_0}(G \times G', M_\ast)$. \end{proposition} \begin{proof}
We follow the outline given in \cite[Part I, Exercise 16.5]{CohInv}. Replacing $a_i$ by $\ms{res}_{k/k_0}(a_i)$ we can assume $k = k_0$.
\smallbreak
To show surjectivity, let $a \in \ms{Inv}^\ast_{k_0}(G \times G', M_\ast)$. Then, for every $k \in \mathcal F _{k_0}$ and $T' \in H^1(k, G')$ we define an invariant $\bar a \in \ms{Inv}^\ast_k(G, M_\ast)$ by mapping $T \in H^1(\ell, G)$, for $\ell \in \mathcal F _k$, to $\bar a_\ell(T) = a_\ell(T \times T'_\ell)$, where $T'_\ell$ denotes the image of $T'$ in $H^1(\ell, G')$ under the base change map. Since $\ms{Inv}(G, M_\ast)$ is completely decomposable, $\bar a$ can be uniquely expressed as $\sum_i \ms{res}_{k/k_0}(a_i) b_i(T')$ for suitable $b_i(T') \in M_\ast(k)$. It remains to prove that $b_i \in \ms{Inv}(G', M_\ast)$ for all $i$. To achieve this goal, let $\iota:k\subseteq k_1$ be a field extension in $\mathcal F _{k_0}$ and $T' \in H^1(k, G')$. Then, $$ \iota_\ast\Big(\sum\limits_{i \in I}\ms{res}_{k/k_0}(a_i)(T) b_i(T')\Big) = \sum\limits_{i \in I}\ms{res}_{k_1/k_0}(a_i)(T_{k_1}) b_i(T'_{k_1}). $$
Since the $a_i$'s are invariants, $$ \sum\limits_{i \in I}\ms{res}_{k_1/k_0}(a_i) \iota_\ast(b_i(T'))\, = \, \sum\limits_{i \in I}\ms{res}_{k_1/k_0}(a_i) b_i(T'_{k_1}). $$ As the $a_i$'s form a basis, we get $b_i(T'_{k_1}) = \iota_\ast(b_i(T'))$, as asserted.
\smallbreak
To show injectivity, we assume $\sum\limits_{i \in I}a_ib_i = 0$ and claim that $b_i = 0$ for all $i \in I$. Fix a field $k$ and $T' \in H^1(k, G')$. Then $\sum\limits_{i \in I}a_ib_i(T') \in \ms{Inv}^\ast_k(G, M_\ast)$ is the constant zero invariant. Since the $a_i$'s are a basis, we get $b_i(T') = 0$ for all $i \in I$. Since $k$ and $T'$ were arbitrary, this implies that the $b_i$'s are constant zero. \end{proof}
\smallbreak
Since every Weyl group is a product of irreducible ones, we get the following corollary. \begin{corollary}
Let $G$ be a Weyl group, $k_0$ a field of characteristic coprime to $|G|$, and $M_\ast$ a cycle module over $k_0$. Then, $\ms{Inv}^\ast_{k_0}(G, M_\ast)$ is completely decomposable. \end{corollary}
\goodbreak \section{Preparations for the proof} \label{techLemSec}
In this section, we establish several key lemmas on cycle modules. We also discuss auxiliary results used in the type-by-type proof of Theorem \ref{reflThm} for irreducible Weyl groups.
\smallbreak
\subsection{Cycle complex computations.} We start with a computation of cycle module cohomology which seems to be well known, but for which we have not found an appropriate reference. To this end, we recall first the cycle complex associated with a cycle module $M_\ast$ over $k_0$. We refer the reader to Rost \cite{Ro96} for further details.
\smallbreak
Let $X$ be a scheme essentially of finite type over $k_0$. That is, $X$ is of finite type over $k_0$ or the localization of such a $k_0$-scheme. Then, the \emph{cycle complex} is given by $$ \bigoplus\limits_{x \in X^{(0)}}M_n(k_0 (x)) \xrightarrow{d^0_{X, n}} \bigoplus\limits_{x \in X^{(1)}}M_{n - 1}(k_0 (x)) \xrightarrow{d^1_{X, n}} \bigoplus\limits_{x \in X^{(2)}}M_{n - 2}(k_0 (x))\to \cdots, $$ where $X^{(p)}\subseteq X$ denotes the set of points of codimension $p \ge 0$ in $X$ and $k_0(x)$ is the residue field of $x \in X$. In general, the differentials $d^p_{X, n}$ are sums of compositions of second residue maps and transfer maps. If $X$ is an integral scheme with function field $k_0 (X)$ and regular in codimension 1, then the components of $d^0_{X, n}$ are the \emph{second residue maps} $\partial_x:\,M_n(k_0 (X)) \to M_{n - 1}(k_0 (x))$. In particular, the cohomology group in dimension 0, also called \emph{unramified cohomology} of $X$ with values in $M_n$, equals $$ \CMu n(X):= \Ker\Big( M_n(k_0(X))\xrightarrow{\:(\partial_{x})_{x \in X^{(1)}}\;} \bigoplus\limits_{x \in X^{(1)}}M_{n - 1}(k_0 (x))\, \Big). $$
In case $X = \ms{Spec}(R)$, we use affine notation and write $M_{n, \mathsf{unr}}(R)$ instead of $\CMu n(X)$.
\smallbreak
\begin{lemma} \label{cheeseLine} Let $M_\ast$ be a cycle module over $k_0$ and $R$ a regular and integral $k_0$-algebra with fraction field $K$, which is essentially of finite type. Let $a_1, \dots, a_l \in R$ be such that $a_i - a_j \in R^\times$ for all $i \ne j$. Then, $$
\CMu n(R[T]_{\prod\limits_{i \le l }(T - a_i)}) \simeq\, \CMu n(R)\;\oplus\; \bigoplus\limits_{i \le l}\{ T - a_i\} \cdot\CMu {n - 1}(R), $$
where we consider $\{ T - a_i\}$ as an element of $K_1^{\mathsf M}(K(T))$ and $\CMu{n - 1}(R)$ as a subset of $M_{n - 1}(K(T))$. \end{lemma} \begin{proof} Setting $f(T): = \prod\limits_{i \le l }(T - a_i)$, we consider the following short exact sequence of cycle complexes, where for a cohomological complex $P^{{\scriptscriptstyle \bullet}}$ we denote by $P^{{\scriptscriptstyle \bullet}}[1]$ the shifted complex with $P^i$ in degree $i + 1$: $$ \xymatrix{ \mathrm{C}^{{\scriptscriptstyle \bullet}}(R[T]/R[T] \cdot f(T), M_{n - 1})[1]\;\; \ar@{>->}[r] & \mathrm{C}^{{\scriptscriptstyle \bullet}}(R[T], M_n) \ar@{->>}[r] &
\mathrm{C}^{{\scriptscriptstyle \bullet}}(R[T]_{f(T)}, M_n). } $$ Using homotopy invariance, the associated long exact cohomology sequence starts with $$
0 \to \CMu n(R) \to \CMu n(R[T]_{f(T)}) \to \CMu {n - 1}(R[T]/R[T] \cdot f(T)). $$ We claim that the map on the right-hand side of this exact sequence is a split surjection. Indeed, by the Chinese remainder theorem, $$
R[T]/R[T] \cdot f(T)\, \simeq\, \prod\limits_{i \le l} R[T]/R[T] \cdot (T - a_i)\, \simeq\, \prod\limits_{i \le l}R, $$
so that $\CMu{n - 1}(R[T]/R[T] \cdot f(T))\simeq\CMu{n - 1}(R)^{\oplus\, l}$. Disentangling the definitions of the appearing maps shows that $$
\CMu {n - 1}(R)^{\oplus l} \to \CMu n (R[T]_{f(T)}),\quad
(x_1, \dots, x_l) \longmapsto \sum\limits_{i \le l}\{T - a_i\} x_i
$$
defines the asserted splitting.
\end{proof}
\medbreak
By induction and homotopy invariance, Lemma \ref{cheeseLine} implies the well-known computation of the unramified cohomology of a Laurent ring.
\bigbreak
\noindent
\begin{corollary}
\label{Z2Invariants}
Let $M_\ast$ be a cycle module over $k_0$. Then,
$$
\CMu{n}(k_0 [T_1^{\pm}, \dots, T_l^{\pm}]) \simeq\ \bigoplus_{\substack{r \le l \\ 1 \le i_1 < \cdots < i_r \le l}}
\{ T_{i_1}, \dots, T_{i_r}\} \cdot M_{n - r}(k_0).
$$
\end{corollary}
\subsection{Invariants of $(\mathbb Z/2)^n$}
\label{Z/2InvSec}
Corollary \ref{Z2Invariants} implies that the invariants of $(\mathbb Z/2)^n$ with values in a cycle module are completely decomposable. This is shown for invariants of $(\mathbb Z/2)^n$ with values in $\Kt^{\mathsf M}_\ast$ in Serre's lectures \cite[Part I, Sect.\ 16]{CohInv}. Writing $(\alpha) \in H^1(k, \mathbb Z/2)$ for the class of $\alpha \in k^\times$, every index set $1 \le i_1 < \cdots < i_l \le n$ gives rise to an invariant \begin{align*} x_{i_1, \dots, i_l}:\, H^1(k, (\mathbb Z/2)^n) \simeq H^1(k, \mathbb Z/2)^n \to \Kt^{\mathsf M}_l(k)\\ \big[ (\alpha_1), \dots, (\alpha_n)\big] \mapsto \{\alpha_{i_1}, \dots, \alpha_{i_l}\} . \end{align*} We show that they form a basis of $\ms{Inv}((\mathbb Z/2)^n, M_\ast)$ for every cycle module $M_\ast$ with $\Kt^{\mathsf M}_\ast$-structure.
\smallbreak
Let $k \in \mathcal F _{k_0}$, $a \in \ms{Inv}^\ast_k((\mathbb Z/2)^n, M_\ast)$ and write $K: = k(t_1, \dots, t_n)$ for the rational function field in $n$ variables over the field $k$. Then, $T: k(\sqrt{t_1}, \dots, \sqrt{t_n})\supseteq k(t_1, \dots, t_n)$ is a versal $(\mathbb Z/2)^n$-torsor, so that by \cite[Part I, Thm.~11.1]{CohInv} or \cite[Thm.\ 3.5]{GiHi19}, $$ a_K(T)\, \in \, \CMu{\ast}(k[t_1^{\pm}, \dots, t_n^{\pm}]). $$ By Corollary \ref{Z2Invariants}, there exist unique $m_{i_1,\dots, i_l} \in M_\ast(k)$ with $$ a_K(T) = \sum_{\substack{l \le n \\ 1 \le i_1 < \dots < i_l \le n}}\big\{ t_{i_1}, \dots, t_{i_l}\big\} m_{i_1,\dots, i_l}. $$ Then, the invariant $$ b: = \sum_{\substack{l \le n \\ 1 \le i_1 < \dots < i_l \le n}}x_{i_1,\dots, i_l} m_{i_1,\dots, i_l} $$ agrees with $a$ on the versal torsor $T$. Hence, the detection principle in the form of \cite[Part I, 12.2]{CohInv} or \cite[Thm.\ 3.7]{GiHi19} implies that $a = b$, as asserted.
\subsection{Invariants of Weyl groups of type $G_2$} \label{G2InvSubSect} Assume here that the base field is of characteristic not~$2$ or~$3$.
The group $W(G_2)$ is a semi-direct product of a normal subgroup $L$ of order $3$ and a subgroup $P \simeq (\mathbb Z/2)^2$ generated by the reflections at two orthogonal roots, see \cite[Chap.\ VI, \S 4, No 13]{LIE4-6}. Since there is up to conjugacy only one such $P$, Proposition \ref{splitCorollary} shows that the restriction map $\ms{res}_{W(G_2)}^P$ is injective. Since the projection $W(G_2)\simeq P\ltimes L \to P$ induces a splitting, we deduce that $\ms{res}_{W(G_2)}^P$ is in fact an isomorphism.
In view of the results for other Weyl groups it is worthwhile to note that a basis for the invariants can also be expressed in terms of the Stiefel-Whitney invariants to be introduced in Section \ref{swInvSubSect} below. As in Section \ref{b2Sec} below, we see that the restriction of the Stiefel-Whitney classes in degrees 1 and 2 to $P$ correspond to the invariants $x_1 + x_2$ and $x_{1, 2}$. Finally, considering the morphism $W(G_2) \to O_1 = \{\pm 1\}$ sending one of the two classes of reflections to $-1$ and the other to 1 yields the invariant $x_1$ (or $x_2$).
\subsection{Torsor computations} Henceforth, we switch freely between the interpretation of $H^1(k, O_n)$ via cocycles on the one hand and via quadratic forms on the other hand. For this purpose, we recall how to view $H^1(k, O_n)$ in terms of non-abelian Galois cohomology \cite{cg}. Let $c \in Z^1(\Gamma, O_n)$ be a cocycle. That is, $c$ is a continuous map from the absolute Galois group $\Gamma$ of a separable closure $k_s/k$ to $O_n(k_s)$ and satisfies the cocycle condition $c_{\sigma\tau} = c_\sigma \cdot\sigma(c_\tau)$. To construct a quadratic form $q_c$ over $k$, we first define an action $\star$ of $\Gamma$ on $k_s^n$ via $\sigma \star v = c_\sigma(\sigma(v))$. Then, we let $v_1, \dots, v_n \in k_s^n$ denote a $k$ basis of the vector space \begin{align}
\label{vStarEq}
V^{\star\Gamma} = \{v \in k_s^n:\, \sigma \star v = v\text{ for all }\sigma \in \Gamma\}. \end{align} Now, we let $q_c$ be the quadratic form whose associated bilinear form $b_{q_c}$ is determined by $b_{q_c}(e_i, e_j) = \langle v_i, v_j \rangle$, where $\langle \cdot, \cdot\rangle$ denotes the standard scalar product in $k_s^n$.{ In other words, $q_c$ is the restriction to $V^{\star\Gamma}$ of the quadratic form associated with the standard scalar product $\langle \cdot, \cdot \rangle$.} We will come back frequently to the following three pivotal examples, where $V = k^2_s$.
\medbreak
\begin{example} \label{abQuadratic} Consider the group homomorphism $(\mathbb Z/2)^2 \to O_2$, \begin{align*} e_1\mapsto \begin{pmatrix} 0&1\\1&0\end{pmatrix}&, \; e_2\mapsto \begin{pmatrix} 0&-1\\-1&0\end{pmatrix}. \end{align*}
Let $(\alpha, \beta) \in (k^\times/{k^\times}^2)^2$ be a $(\mathbb Z/2)^2$-torsor over $k$. Then, $v_1 = (\sqrt\alpha, -\sqrt\alpha)^\top$, $v_2 = (\sqrt\beta, \sqrt\beta)^\top$ defines a basis of $V^{\star\Gamma}$ and the induced bilinear form is the diagonal form $q_{(\alpha, \beta)} = \langle 2\alpha, 2\beta\rangle$. \end{example}
\begin{example} \label{aQuadratic} Consider the group homomorphism $\mathbb Z/2 \to O_2$, \begin{align*} e_1&\mapsto \begin{pmatrix} 0&1\\1&0\end{pmatrix}. \end{align*} Let $\alpha \in k^\times/k^{\times2}$ be a $\mathbb Z/2$-torsor. Applying the above example with $\beta = 1$, we see that the induced bilinear form is the diagonal form $q_{(\alpha)} = \langle 2\alpha, 2\rangle$. \end{example}
\begin{example} \label{abQuadratic2} Consider the group homomorphism $(\mathbb Z/2)^2 \to O_2$, \begin{align*} e_1\mapsto \begin{pmatrix} 0&1\\1&0\end{pmatrix}&, \;e_2\mapsto \begin{pmatrix} 0&1\\1&0\end{pmatrix}. \end{align*} Let $(\alpha, \beta) \in (k^\times/{k^\times}^2)^2$ be a $(\mathbb Z/2)^2$-torsor over $k$. Then, $v_1 = (1, 1)^\top$, $v_2 = (\sqrt{\alpha\beta}, -\sqrt{\alpha\beta})^\top$ defines a basis of $V^{\star\Gamma}$. The induced bilinear form is the diagonal form $q_{(\alpha, \beta)} = \langle 2, 2\alpha\beta\rangle$. \end{example}
\subsection{An embedding of $S_{2^n}$ into $O_{2^n}$} \label{Sym - OnEmbeddingSubSect} Next, we describe a specific embedding $(\mathbb Z/2)^n \to O_{2^n}$ on the torsor level. For any $0 \le l \le 2^n - 1$ let $b(l) \subseteq [0, n - 1]$ be the set of positions of the bits in the binary representation of $l$. That is, $l = \sum_{i \in b(l)} 2^i$. Furthermore, for $S \subseteq [0, n - 1]$, let $f_S$ be the map flipping the bits at all positions in $S$. In other words, $f_S:\, [0, 2^n - 1] \to [0, 2^n - 1]$, \begin{align*}
f_S(l):= b^{-1}(b(l) \Delta S), \end{align*} where $R \Delta S = (R \setminus S) \cup (S \setminus R)$ is the symmetric difference. In this notation, the group homomorphism $\phi:\, (\mathbb Z/2)^n \to S_{2^n} \subseteq O_{2^n}$ \begin{align*}
\phi\Big(\sum_{s \in S}e_s\Big):=f_S \end{align*} induces a map $\phi_*:\, H^1(k, (\mathbb Z/2)^n) \to H^1(k, O_{2^n})$, which we now describe explicitly.
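The maps $b$ and $f_S$ are plain bit manipulations. The following sketch (Python, illustrative only; all helper names are ours) checks that each $f_S$ is an involution and that $S \mapsto f_S$ is a group homomorphism, i.e.\ $f_{R \Delta S} = f_R \circ f_S$, which is what makes $\phi$ a homomorphism:

```python
def b(l):
    # b(l): the set of bit positions in the binary representation of l
    return {i for i in range(l.bit_length()) if (l >> i) & 1}

def b_inv(S):
    # inverse of b: reassemble the integer from a set of bit positions
    return sum(1 << i for i in S)

def f(S, l):
    # f_S: flip the bits of l at the positions in S, i.e. b(l) -> b(l) Δ S
    return b_inv(b(l) ^ S)

n = 4
universe = range(2 ** n)
subsets = [{i for i in range(n) if (m >> i) & 1} for m in range(2 ** n)]

for S in subsets:
    # each f_S is an involution on [0, 2^n - 1]
    assert all(f(S, f(S, l)) == l for l in universe)
for R in subsets:
    for S in subsets:
        # S |-> f_S is a homomorphism: f_{R Δ S} = f_R ∘ f_S
        assert all(f(R ^ S, l) == f(R, f(S, l)) for l in universe)
```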
\begin{lemma} \label{pfisterLemma} Let $\epsilon_0, \dots, \epsilon_{n - 1} \in k^\times / k^{\times2}$. Then,
$$\phi_*(\epsilon_0, \dots, \epsilon_{n - 1}) = \langle 2^n \rangle \otimes \langle\langle -\epsilon_0\rangle\rangle \otimes \langle\langle -\epsilon_1\rangle\rangle \otimes \cdots \otimes \langle\langle -\epsilon_{n - 1}\rangle\rangle.$$ \end{lemma}
\noindent Since any two simply transitive actions on $[0, 2^n - 1]$ are conjugate in $S_{2^n}$, Lemma \ref{pfisterLemma} is more useful than it may seem at first.
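For instance, for $n = 1$ the lemma recovers Example \ref{aQuadratic}: the generator of $\mathbb Z/2$ maps to the transposition of the two coordinates, and $$ \phi_*(\epsilon_0) = \langle 2 \rangle \otimes \langle\langle -\epsilon_0\rangle\rangle = \langle 2 \rangle \otimes \langle 1, \epsilon_0\rangle = \langle 2, 2\epsilon_0\rangle, $$ which agrees, up to reordering, with the diagonal form $q_{(\epsilon_0)} = \langle 2\epsilon_0, 2\rangle$ computed there.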
\begin{proof}
Consider a cocycle representation $c \in Z^1(\Gamma, (\mathbb Z/2)^n)$ of the torsor $(\epsilon_0, \dots, \epsilon_{n - 1}) \in (k^\times/k^{\times2})^n$. That is, the $i$th component
of $c_\sigma$ equals 1 if and only if $\sigma\big(\sqrt{\epsilon_i}\big) = - \sqrt{\epsilon_i}$. To determine the quadratic form defined by the induced cocycle
$\sigma \mapsto \phi(c_\sigma)$, we assert that a basis of the $k$-vector space $V^{\star \Gamma}$ from \eqref{vStarEq} is given by $\{v_0, \dots, v_{2^n - 1}\}$,
where $v_p$ has components
$$(v_p)_\ell = (-1)^{|b(p) \cap b(\ell)|} \prod_{\substack{i \in b(p)}} \sqrt{\epsilon_i}.$$
First, $v_p \in V^{\star\Gamma}$, since writing $c_\sigma = \sum_{i \in S}e_i$ for some $S = S(\sigma) \subseteq [0, n - 1]$ shows that \begin{align*}
\sigma\Big((-1)^{|b(p) \cap b(\ell)|}\prod_{\substack{i \in b(p)}}\sqrt{\epsilon_i}\Big) = (-1)^{|b(p) \cap b(\ell)| + |b(p) \cap S|}\prod_{i \in b(p)}\sqrt{\epsilon_i} = (v_p)_{f_S(\ell)}. \end{align*} Moreover, to prove the linear independence of the $\{v_p\}_p$, we note that \begin{align*} b(v_p, v_p) = \sum_{u \le 2^n - 1} (v_p)_u(v_p)_u = 2^n\prod_{i \in b(p)}\epsilon_i. \end{align*} Hence, it suffices to show that $b(v_p, v_q) = 0$ if $p \ne q$. By assumption, there is at least one $i_0 \in b(p) \Delta b(q)$, so that pairing any $L \subseteq [0, n - 1]\setminus \{i_0\}$ with $L \cup \{i_0\}$ shows that \begin{align*}
b(v_p, v_q)& = \prod_{i \in b(p)}\sqrt{\epsilon_i} \cdot\prod_{j \in b(q)}\sqrt{\epsilon_j} \cdot\, \sum_{L \subseteq [0, n - 1]}(-1)^{|b(p) \cap L| + |b(q) \cap L|}\\
& = \prod_{i \in b(p)}\sqrt{\epsilon_i} \cdot\prod_{j \in b(q)}\sqrt{\epsilon_j}\hspace{-.2cm} \sum_{L \subseteq [0, n - 1] \setminus \{i_0\}}\hspace{-.4cm}\big((-1)^{|b(p) \cap L| + |b(q) \cap L|} + (-1)^{|b(p) \cap L| + |b(q) \cap L| + 1}\big) \end{align*} vanishes, as claimed. \end{proof}
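The orthogonality relations $b(v_p, v_q) = \delta_{pq}\, 2^n \prod_{i \in b(p)} \epsilon_i$ underlying the proof can also be checked numerically over $\mathbb R$ for sample positive values of the $\epsilon_i$ (a floating-point sketch, not a proof; all names are ours):

```python
import math

n = 3
eps = [2.0, 3.0, 5.0]  # sample positive values for epsilon_0, ..., epsilon_{n-1}

def bits(p):
    # b(p): positions of the bits of p
    return [i for i in range(n) if (p >> i) & 1]

def v(p):
    # (v_p)_l = (-1)^{|b(p) ∩ b(l)|} * prod_{i in b(p)} sqrt(eps_i)
    c = math.prod(math.sqrt(eps[i]) for i in bits(p))
    return [(-1) ** len(set(bits(p)) & set(bits(l))) * c for l in range(2 ** n)]

for p in range(2 ** n):
    for q in range(2 ** n):
        val = sum(x * y for x, y in zip(v(p), v(q)))
        expected = 2 ** n * math.prod(eps[i] for i in bits(p)) if p == q else 0.0
        assert math.isclose(val, expected, abs_tol=1e-9)
```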
\subsection{Stiefel-Whitney Invariants} \label{swInvSubSect}
The \emph{total Stiefel-Whitney class} is defined by \begin{align*} w_\ast\,:\; H^1(k, O_n)& \to \Kt^{\mathsf M}_\ast(k)\\ \langle \alpha_1, \dots, \alpha_n\rangle &\mapsto \prod_{i \le n} (1 + \{\alpha_i\}), \end{align*} where $\langle\alpha_1, \dots, \alpha_n\rangle$ is the class in $H^1(k, O_n)$ of the diagonal form. Its homogeneous components, the Stiefel-Whitney classes $w_d$, generate the invariants of the orthogonal group $O_n$ with values in $\Kt^{\mathsf M}_\ast$, as Serre shows in \cite[Part I, Sect.\ 17]{CohInv}.
\smallbreak
\begin{theorem} \label{swThm} Let $k_0$ be a field of characteristic not $2$. Then, the Stiefel-Whitney invariants form a basis in the sense of Definition~\ref{decomposableDef} of $\ms{Inv}(O_n, \Kt^{\mathsf M}_\ast)$ for all $n \ge 1$. \end{theorem}
\smallbreak
By \cite[Rem.\ 17.4]{CohInv} the product of Stiefel-Whitney classes is given by \begin{equation} \label{SWProduct}
w_r w_s = \{-1\}^{b^{-1}(b(r) \cap b(s))} w_{r + s - b^{-1}(b(r) \cap b(s))}, \end{equation} where $b( \cdot)$ denotes the binary representation of Section \ref{Sym - OnEmbeddingSubSect}.
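In bit-level terms, $b^{-1}(b(r) \cap b(s))$ is the bitwise AND $r \mathbin{\&} s$, and the index $r + s - b^{-1}(b(r) \cap b(s))$ is the bitwise OR $r \mid s$, since $r + s = (r \mid s) + (r \mathbin{\&} s)$. A quick sketch (Python, names ours) of this index arithmetic:

```python
def and_index(r, s):
    # b^{-1}(b(r) ∩ b(s)): intersect the bit sets and reassemble the integer
    return r & s

def product_index(r, s):
    # index of the Stiefel-Whitney class occurring in w_r * w_s
    return r + s - and_index(r, s)

# r + s - (r & s) is exactly the bitwise OR r | s
assert all(product_index(r, s) == (r | s)
           for r in range(64) for s in range(64))
```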
\begin{example} \label{modSW} Later, we will meet some examples where it is easier to do the computations with a slight variant of the Stiefel-Whitney classes. Therefore, we introduce \emph{modified} Stiefel-Whitney classes $\widetilde{w_d} \in \ms{Inv}^d(O_n, \Kt^{\mathsf M}_\ast)$: For even $n$, we put $\widetilde{w_d}(q):= w_d(\langle 2\rangle\otimes q)$ for all $d \le n$ and for odd $n$, we set inductively $\widetilde{w_0} = 1$ and $\widetilde{w_{d + 1}}(q) = w_{d + 1}(\langle 2\rangle\otimes q)-\{2\}\widetilde{w_d}(q)$. Then, we obtain for even $\mathsf{rank}(q)$ that
$$ \widetilde{w_d}(\langle 2 \rangle \otimes q) = w_d(q) = \widetilde{w_d}(\langle 1 \rangle + \langle 2\rangle \otimes q).$$
Alternatively, one could also give a more direct definition of modified Stiefel-Whitney classes not depending on the parity of $q$ by setting $\widetilde{w_d}(q)$ as $w_d(q)$ if $d$ is odd and as $w_d(q) + \{2\} w_{d - 1}(q)$ if $d$ is even. \end{example}
Finally, we recall another kind of invariants.
\begin{example}[Witt-ring invariants] \label{pfisterInvariant} The image of an $n$-dimensional quadratic form in the Witt ring $W$ yields an invariant in $\ms{Inv}^*(O_n, W)$. Since the definition of invariants only makes use of the functor property, this concept makes sense even though $W$ is not a cycle module. While of limited use for the orthogonal group itself, this invariant is a rich source of invariants for groups $G$ embedding into $O_n$. Indeed, for Weyl groups $G$ of type $D_{2n}, E_7, E_8$, we construct embeddings such that the restrictions become invariants with values in a suitable power of the fundamental ideal $I \subseteq W$. Since the Milnor morphism \begin{align*} f^{\ms{Mil}}_n:\, \Kt^{\mathsf M}_n & \to I^n/I^{n + 1}\\ \{\alpha_1\} \cdots \{\alpha_n\} &\mapsto \langle\langle \alpha_1 \rangle\rangle \otimes \cdots \otimes \langle\langle \alpha_n \rangle\rangle \end{align*} with $\langle\langle a \rangle\rangle := \langle 1, -a\rangle$ induces an isomorphism between mod 2 Milnor K-theory and the graded Witt ring \cite[Theorem 4.1]{OVV07}, we obtain elements in $\ms{Inv}^\ast(G, \Kt^{\mathsf M}_\ast)$. \end{example}
\medbreak
\subsection{A technical lemma} The following technical lemma simplifies the computations of invariants.
\smallbreak
\begin{lemma} \label{orbitSum1} Let $R$ be a commutative ring, $I$ a finite index set, $M$ an $R$-module and $G$ a finite group acting on $I$. The operation of $G$ on $I$ induces an operation of $G$ on the $R$-module $N:= \oplus_{i \in I}M$ by permutation of coordinates. Let $I = I_1\sqcup I_2\sqcup\cdots\sqcup I_k$ be its orbit decomposition. Then, $N^G\cong \oplus_{i \le k} N_i$, where for $i \le k$, $$N_i:= \Big\{\sum_{j \in I_i}\iota_j(m):\, m \in M\Big\}\cong M.$$ Here, $\iota_j:\, M \to N$ denotes the inclusion along the $j$th coordinate. \end{lemma}
\begin{proof}
Since $(\sum_{j \ne i}N_j) \cap N_i = \{0\}$ and $\oplus_{i \le k}N_i\subseteq N^G$ hold for every $i$, it remains to show that the
$N_i$ generate $N^G$. To prove this, note that any $x \in N$ can be written uniquely as $x = \sum_{i \in I}\iota_i(m_i)$ for
certain $m_i \in M$. We prove by induction on the number of non-zero $m_i$ that any $x \in N^G$ lies in the module
generated by the $N_i$. We may suppose $I = [1;|I|]$, $m_1 \ne 0$ and denote by $I_1$ the orbit containing 1.
Now, comparing the $g(1)$th entry of $x$ and of $g. x$ yields that $m_{g(1)} = m_1$ for every $g \in G$.
In particular, we can split off a summand $\sum_{j \in I_1}\iota_j(m_j) = \sum_{j \in I_1}\iota_j(m_1) \in N_1$ from $x$. Applying induction to $x-\sum_{j \in I_1}\iota_j(m_1)$ concludes the proof. \end{proof}
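The content of the lemma is that a $G$-fixed vector must be constant on each orbit, hence a sum of the distinguished orbit vectors. A minimal sketch (Python, names ours) for $M = \mathbb Z$ and $G = \mathbb Z/4$ acting on $I = \{0, \dots, 5\}$ with orbits $\{0,1,2,3\}$, $\{4\}$, $\{5\}$:

```python
from itertools import product

g = {0: 1, 1: 2, 2: 3, 3: 0, 4: 4, 5: 5}   # generator of G = Z/4 acting on I
orbits = [{0, 1, 2, 3}, {4}, {5}]

def act(x):
    # permutation action of the generator g on N = ⊕_I M, here M = Z, |I| = 6
    return tuple(x[g[i]] for i in range(6))

def constant_on_orbits(x):
    # x is a Z-linear combination of the orbit vectors of the lemma
    return all(len({x[i] for i in orb}) == 1 for orb in orbits)

# a vector is G-fixed iff it is constant on every orbit
for x in product(range(3), repeat=6):
    assert (act(x) == x) == constant_on_orbits(x)
```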
\smallbreak
In particular, Lemma \ref{orbitSum1} yields the following orbit decomposition. \begin{corollary} \label{orbitSum} Let $R_*$ be a commutative, graded ring, $I^1, \dots, I^{r}$ be finite index sets, $M_*$ be a graded $R_*$-module and $G$ a finite group acting on each of the $I^{\ell}$. The operation of $G$ on the $I^{\ell}$ induces an operation of $G$ on the graded $R_*$-module $N_*:= \oplus_{\ell \le r}\oplus_{I^{\ell}}M_{* - d_\ell}$, where the $d_\ell$ are certain non-negative integers. Let $I^{\ell} = I^{\ell}_1\sqcup I_2^{\ell}\sqcup\cdots\sqcup I^{\ell}_{n_\ell}$ be the orbit decomposition. Then, $N^G\cong \oplus_{\ell \le r}\oplus_{i \le n_\ell} N_{\ell, i}$, where for $ \ell \le r$, $ i \le n_\ell$, we put $$(N_{\ell, i})_*:= \Big\{\sum_{j \in I^{\ell}_i}\iota_j(m)\;:\,\; m \in M_{* - d_\ell}\Big\}\cong M_{*-d_\ell}.$$ \end{corollary}
\begin{center} {\bf\large Part II: Computation of the invariants of irreducible Weyl groups } \end{center}
\noindent Throughout this part~$k_0$ denotes a field of characteristic not~$2$. When we compute the invariants of an irreducible Weyl group~$G=W(\Sigma)$, where~$\Sigma$ is an irreducible root system, we assume also that the characteristic of~$k_0$ and the order of~$G$ are coprime.
\smallbreak
In the following we use the description of irreducible root systems of type other than $G_2$ given in Bourbaki~\cite[PLATES I-VIII]{LIE4-6} (recall that for Weyl groups of type~$G_2$ we have already computed the invariants in Section \ref{G2InvSubSect}). We have $\Sigma\subseteq\bigoplus\limits_{i\le n}e_i\mathbb Z [1/2]\subseteq\mathbb R^n$ for an appropriate $n$. Applying $k_0\otimes_{\mathbb Z [1/2]}(-)$ we get an embedding of $\Sigma$ into $k_0^n$, such that all $\alpha\in\Sigma$ are anisotropic for the standard scalar product of $k_0^n$. Hence the associated reflections generate a finite subgroup of $O_n(k_0)$ which is isomorphic to $G$. In the following we will identify $G$ with this subgroup of $O_n(k_0)$. \smallbreak
We provide a family of elements $\{ x_i\}_{i \in I}\subseteq\ms{Inv}(G, \Kt^{\mathsf M}_\ast)$, forming a basis of $\ms{Inv}(G, M_\ast)$ for all cycle modules over $k_0$. For this we have to show that given $k \in \mathcal F _{k_0}$ and an invariant $a \in \ms{Inv}_k^\ast(G, M_\ast)$, then there exist unique $c_i \in M_\ast(k)$ such that $$ a = \sum\limits_{i \in I}\ms{res}_{k/k_0}(x_i) c_i. $$ To verify this claim, we may assume $k = k_0$ and let $e_1, \dots, e_n$ denote the standard basis elements of the $k_0$-vector space $k_0^n$.
If $a_1, \dots, a_n \in \Sigma$ are pairwise orthogonal, then $P(a_1, \dots, a_n)$ denotes the elementary 2-abelian subgroup generated by the corresponding reflections $s_{a_1}, \dots, s_{a_n}$. For $1 \le i_1 < \cdots < i_l \le n$, we write $x_{a_{i_1}, \dots, a_{i_l}}$ for the invariant $$ H^1( -, (\mathbb Z/2) \cdot s_{a_1}\times\dots\times (\mathbb Z/2) \cdot s_{a_n})\xrightarrow{\simeq}H^1( -, (\mathbb Z/2)^n) \xrightarrow{x_{i_1, \dots, {i_l}}}\Kt^{\mathsf M}_l(\, -\, ), $$ see Section \ref{Z/2InvSec} for the definition of the invariant $x_{i_1, \dots, i_l}$.
\section{Weyl groups of type $A_n$} The invariants of Weyl groups of type $A_n$ with values in $\Kt^{\mathsf M}_\ast$ are induced by the Stiefel-Whitney classes $\{w_i\}_i$, see \cite[Part I, Sect.\ 25]{CohInv}. The proof carries over essentially verbatim to invariants with values in cycle modules $M_\ast$ with $\Kt^{\mathsf M}_\ast$-structure using the splitting principle in the form of Proposition~\ref{splitCorollary} and the computation of $\ms{Inv}((\mathbb Z/2)^n, M_\ast)$ in Corollary \ref{Z/2InvSec}. The result is as follows. Here, we identify $H^1(k, S_n)$ with the set of isomorphism classes of \'etale algebras of dimension $n$ over $k$, and denote for such an algebra $E$ by $q_E$ its trace form.
\begin{proposition}
Let $n \ge 1$. Then, $\ms{Inv}(S_n, M_\ast)$ is completely decomposable with basis $\{E\mapsto w_i(q_E)\}_{i \le \lfloor n /2 \rfloor}$. \end{proposition}
\goodbreak \section{Weyl groups of type $B_n/C_n$.} \label{BnSubSect} First, we note that the Weyl group $W(C_n)$ is isomorphic to the Weyl group $W(B_n)$. Hence, determining the invariants for $W(B_n)$ will also yield the invariants for $W(C_n)$.
\subsection{Invariants of $B_2$} \label{b2Sec} First, we consider $W(B_2)$, which is isomorphic to the dihedral group of order $8$. In particular, $G:= W(B_2) = \langle\sigma, \tau\rangle\subseteq S_4$ admits the permutation representation defined by \begin{align*} \sigma& = \begin{pmatrix} 1 & 2& 3 & 4 \\ 2 & 3& 4 & 1 \end{pmatrix}, \;\phantom{aaaa} \tau = \begin{pmatrix} 1 & 2& 3& 4 \\ 2 & 1& 4 & 3 \end{pmatrix}. \end{align*} \smallbreak
\smallbreak
Considering $G$ as an orthogonal reflection group over $k_0$ yields an embedding $\phi:\, G\subseteq O_2$ of algebraic groups over $k_0$ given by \begin{align*} \sigma\mapsto \begin{pmatrix} 0 & -1 \\ 1 & 0\\ \end{pmatrix} \text{, } \tau\mapsto \begin{pmatrix} 0 & 1 \\ 1 & 0\\ \end{pmatrix}. \end{align*} Now, $\phi$ determines an action of $G$ on $k_0 [X, Y]$ given by ${^\sigma X} = Y$, ${^\sigma Y} = -X$, ${^{\tau} X} = Y$, ${^{\tau} Y} = X$. In particular, $k_0 [X, Y]^G = k_0 [X^2 + Y^2, X^2Y^2]\cong k_0 [A, B]$, where $A:= X^2 + Y^2$, $B:= 4X^2Y^2$. Fix the notation $E:= k_0(X, Y)$, $K:=k_0 (X^2 + Y^2, X^2Y^2)$. Now, the group $G$ acts freely on the open subscheme \begin{align*}
U:= D \big(XY(X - Y)(X + Y)\big) = D(X^2Y^2(X^2 - Y^2)^2)\subseteq \mathbb A^2, \end{align*} where for a polynomial $f$, we denote by $D(f)\subseteq \mathbb A^2$ the open subset given by $f \ne 0$.
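As a sanity check on the matrices defining $\phi$, the following sketch (Python, not part of the argument; names ours) verifies that they generate a group of order $8$ satisfying the dihedral relations $\sigma^4 = \tau^2 = 1$ and $\tau\sigma\tau = \sigma^{-1}$:

```python
# phi(sigma), phi(tau) as integer 2x2 matrices
S = ((0, -1), (1, 0))   # rotation by 90 degrees
T = ((0, 1), (1, 0))    # reflection swapping the coordinates
I = ((1, 0), (0, 1))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# close {S, T} under multiplication
group = {I}
while True:
    new = {mul(A, B) for A in group | {S, T} for B in group | {S, T}}
    if new <= group:
        break
    group |= new

assert len(group) == 8                          # dihedral group of order 8
assert mul(mul(S, S), mul(S, S)) == I           # sigma^4 = 1
assert mul(T, T) == I                           # tau^2 = 1
assert mul(mul(T, S), T) == mul(S, mul(S, S))   # tau sigma tau = sigma^{-1}
```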
By \cite[Part I, Thm.~12.3]{CohInv} or \cite[Thm.\ 3.7]{GiHi19}, the evaluation at the versal torsor $\ms{Spec}(E) \to \ms{Spec}(K)$ yields an injection $\ms{Inv}(G, M_\ast) \to \CMu \ast(U/G)$. To check that this map is also surjective, we first compute $\CMu \ast(U/G)$. An explicit computation yields \begin{align*}
U/G&\cong \ms{Spec}(k_0 [X, Y, X^{-2}Y^{-2}(X^2 - Y^2)^{-2}]^G)\\
& = \ms{Spec}(k_0 [X^2 + Y^2, X^2Y^2, X^{-2}Y^{-2}, (X^2 - Y^2)^{-2}])\\
&\cong \ms{Spec} \big(k_0 \big[A, B, B^{-1}, (B - A^2)^{-1}\big]\big). \end{align*}
\smallbreak
To compute $M_{\ast, \mathsf{unr}}(U/G)$, note that $V:= D(A)\subseteq U/G$ is isomorphic to the spectrum of \begin{align*}
k_0 \big[A, B, B^{-1}, A^{-1}, (B - A^2)^{-1}\big] \cong k_0 \big[A, B',(B')^{-1}, A^{-1}, (B' - 1)^{-1}\big], \end{align*} where the isomorphism is induced by mapping $B'$ to $B/A^2$. Now, by applying Lemma~\ref{cheeseLine} twice and homotopy invariance, \begin{align*}
\CMu{\ast}(V)&\cong M_\ast(k_0)\oplus \{B/A^2 - 1\}M_{\ast -1}(k_0)\oplus \{A\}M_{\ast -1}(k_0)\oplus\\
&\phantom=\oplus \{B\}M_{\ast -1}(k_0)\oplus \{A\}\{B/A^2 - 1\}M_{\ast -2}(k_0)\\
&\phantom=\oplus \{A\}\{B\}M_{\ast -2}(k_0). \end{align*} $M_{\ast, \mathsf{unr}}(U/G)$ can be computed as the kernel of the boundary $\partial = \partial^A_{(A)}:\,M_\ast(V) \to M_{\ast -1}(\mathbb G_m)$. Thus, for every $t \in M_\ast(k_0)$, \begin{align*} \partial(t)& = 0, \\ \partial(\{B/A^2 - 1\}t)& = \partial(\{B - A^2\}t)
= \{B\}\partial(t)
= 0, \\ \partial(\{B\}t)&
= \{B\}\partial(t)
= 0, \\ \partial(\{A\}t)& = t, \\ \partial(\{A\}\{B/A^2 - 1\}t)& = \partial(\{A\}\{B - A^2\}t)
= \{B\}\partial(\{A\}t)
= \{B\}t.\\ \partial(\{A\}\{B\}t)&
= \{B\}\partial(\{A\}t)
= \{B\}t. \end{align*} Writing $M_\ast$ short for $M_\ast(k_0)$, we conclude that $M_{\ast, \mathsf{unr}}(U/G)$ is given by \begin{align*}
&M_\ast\oplus \{B - A^2\}M_{\ast -1}\oplus \{B\}M_{\ast -1}\oplus \{A\}\{B(B - A^2)\}M_{\ast -2}\\
&\cong M_\ast\oplus \{B - A^2\}M_{\ast -1}\oplus \{B\}M_{\ast -1}\oplus \{A\}\{B - A^2\}M_{\ast -2}. \end{align*} It remains to construct invariants mapping to the three non-constant basis elements of $\CMu{\ast}(U/G)$. Pulling back $w_1, w_2 \in \ms{Inv}(O_2, \Kt^{\mathsf M}_\ast)$ along the embedding $\phi$ gives invariants in $\ms{Inv}(G, \Kt^{\mathsf M}_\ast)$ that -- by abuse of notation -- we again denote by $w_1, w_2$. We first compute the value $w_1(E/K)$ of $w_1$ at the versal torsor $E/K$ constructed above. To do this, we note that the determinant of $\phi(\sigma^i\tau)$ is $-1$, while the determinant of $\phi(\sigma^i)$ is 1. Now, $XY(X^2 - Y^2) \in E$ maps to its negative by each reflection and is fixed by all the $\sigma^i$. Thus, $w_1(E/K) = \{X^2Y^2(X^2 - Y^2)^2\} = \{B(A^2 -B)\}$.
\medbreak
Another invariant comes from the embedding $G\subseteq S_4$. We may define $v_1:=\mathsf{res}^G_{S_4}(\widetilde{w_1})$. Again, we compute $v_1(E/K)$. We note that $\widetilde{w_1} \in \ms{Inv}^1(S_4, \Kt^{\mathsf M}_\ast)$ may be computed as follows. Start with an arbitrary $x \in H^1(k, S_4)$; then $\widetilde{w_1}(x) = \mathsf{sgn}_*(x) \in H^1(k, \mathbb Z/2)\cong k^\times/k^{\times2}\cong\Kt^{\mathsf M}_1(k)$. The kernel of $\mathsf{sgn}$ consists exactly of the elements $\{\mathrm{id}, \tau, \sigma^2, \sigma^2\tau\}$ with $\sigma, \tau$ as above. Since $XY$ is fixed by this kernel and is mapped to its negative by $\sigma$, the value of $v_1$ at the versal torsor is $\{X^2Y^2\} = \{B\}$. Consequently, it remains to find an invariant mapping to the basis element $\{A\}\{B - A^2\}$ of $M_{\ast, \mathsf{unr}}(U/G)$.
\smallbreak
Finally, we compute the value of $w_2 \in \ms{Inv}^2(G, \Kt^{\mathsf M}_\ast)$ at $E/K$. First consider the elementary abelian 2-subgroup generated by reflections $P:= \langle\tau, \tau'\rangle$, where $\tau' = \sigma^2\tau$. Explicitly, \begin{align*} \phi(\tau) = \begin{pmatrix} 0 & 1\\ 1 &0 \end{pmatrix}, \phi(\tau') = \begin{pmatrix} 0 &-1 \\ -1 &0 \end{pmatrix}. \end{align*} Recalling that the action of $G$ on $E$ is defined via $\phi$, we now consider the versal $P$-torsor $E/E^P = k_0 (X, Y)/k_0(X^2 + Y^2, XY)$. Then, $\tau \in P = \mathsf{Gal}(E/E^P)$ acts via $\tau(X) = Y, \tau(Y) = X$ and $\tau'$ via $\tau'(X) = -Y, \tau'(Y) = -X$. Thus, this $(\mathbb Z/2)^2$-torsor over $E^P$ is equivalently described by the pair $((X-Y)^2, (X + Y)^2) \in ((E^P)^\times/(E^P)^{\times 2})^2$. We conclude that the value of $\mathsf{res}^P_{O_2}(w_2)$ at this $P$-torsor is $\{(X-Y)^2\}\{(X + Y)^2\} \in \Kt^{\mathsf M}_2(E^P)$.
\smallbreak
By the computations above, the value of $\mathsf{res}^G_{O_2}(w_2)$ at $E/K$ is of the form $$\alpha_1 + \{B - A^2\}\alpha_2 + \{B\}\alpha_3 + \{A\}\{B(B - A^2)\}\alpha_4 \in \Kt^{\mathsf M}_2(K)$$ for some $\alpha_1 \in \Kt^{\mathsf M}_2(k_0)$, $\alpha_2, \alpha_3 \in \Kt^{\mathsf M}_1(k_0)$, $\alpha_4 \in \Kt^{\mathsf M}_0(k_0)$. Now, consider the diagram $$ \xymatrix{
H^1(K, G)\ar[r]^-{w_2}\ar[d]_{\mathsf{res}^{E^P}_K}& \Kt^{\mathsf M}_2(K) \ar[d]\\ H^1(E^P, G)\ar[r]^-{w_2}& \Kt^{\mathsf M}_2(E^P)\\ H^1(E^P, P).\ar[u]^{\mathsf{ind}^G_P}& } $$ The square commutes by the definition of invariants. Denote by $E \in H^1(K, G)$ the $G$-torsor $E/K$ and by $F \in H^1(E^P, P)$ the $P$-torsor $E/E^P$. Interpreting the torsors as cocycles yields $$\mathsf{ind}_P^G(F) = \mathsf{res}^{E^P}_K(E) \in H^1(E^P, G).$$ Observing that $B = (2XY)^2$ is a square in $E^P$, this means $$ \{(X-Y)^2\}\{(X + Y)^2\} = \alpha_1 + \{B - A^2\}\alpha_2 + \{A\}\{A^2 -B\}\alpha_4. $$ Applying the identity $\{\beta\}\{\beta'\} = \{\beta + \beta'\}\{-\beta\beta'\}$ to the left-hand side gives $\{2A\}\{B - A^2\}$, so that we may choose $\alpha_1 = 0$, $\alpha_2 = \{2\}$ and $\alpha_4 = 1$. We conclude that the injection $\ms{Inv} (G, M_\ast) \to M_{\ast, \mathsf{unr}}(U/G)$ is surjective. This finishes the computation of $\ms{Inv} (G, M_\ast)$ and we obtain the following.
\begin{proposition} The invariants $\ms{Inv} (W(B_2), M_\ast)$ are completely decomposable with basis consisting of the invariants $\{1, v_1, w_1, w_2\}$. \end{proposition}
We conclude this section with a corollary of the proof. \begin{corollary} \label{i2Corollary} Let $P_1 = P(e_1, e_2)$ and $P_2 = P(e_1 - e_2, e_1 + e_2)$. Then, \begin{align*} \mathsf{res}^{P_1}_{W(B_2)}(v_1)& = x_{\{e_1\}} + x_{\{e_2\}}, \\ \mathsf{res}^{P_1}_{W(B_2)}(w_1)& = x_{\{e_1\}} + x_{\{e_2\}}, \\ \mathsf{res}^{P_1}_{W(B_2)}(w_2)& = x_{\{e_1, e_2\}}, \end{align*} and \begin{align*} \mathsf{res}^{P_2}_{W(B_2)}(v_1)& = 0, \\ \mathsf{res}^{P_2}_{W(B_2)}(w_1)& = x_{\{e_1 - e_2\}} + x_{\{e_1 + e_2\}}, \\ \mathsf{res}^{P_2}_{W(B_2)}(w_2)& = x_{\{e_1 + e_2, e_1 - e_2\}} + \{2\} \cdot(x_{\{e_1 - e_2\}} + x_{\{e_1 + e_2\}}). \end{align*} \end{corollary}
\subsection{Invariants of $B_n$} After dealing with the case $n = 2$, we now compute the invariants of Weyl groups of type $B_n$ for general $n$. The root system $B_n$ is the disjoint union $\Delta_1\sqcup \Delta_2\subseteq \mathbb R^n$, where $\Delta_1 = \{\pm e_i:\,1 \le i \le n\}$ are the short roots and $\Delta_2 = \{\pm e_i \pm e_j:\, 1 \le i < j \le n\}$ are the long roots. This root system induces an orthogonal reflection group over any $k_0$ satisfying the above requirements. Furthermore, $W(B_n)\cong S_n\ltimes (\mathbb Z/2)^n$ as abstract groups. Put $m:= [n/2]$ and for $ i \le m$ define $a_i:= e_{2i - 1} - e_{2i}$ and $b_i:= e_{2i - 1} + e_{2i}$. For each $ L \le m$ the elements of $X_L:= \{a_1, b_1, \dots, a_L, b_L, e_{2L + 1}, e_{2L + 2}, \dots, e_n\}$ are mutually orthogonal. Defining $P_L:= P(X_L)$, we prove by induction on $n$ that $\Omega(W(B_n)) = \{[P_0], \dots, [P_m]\}$.
The claim is clear for $n = 2$. In the general case, let $P$ be any maximal elementary abelian $2$-subgroup generated by reflections. First assume that $P$ contains a short root, say $e_n$. Now, observe that $\langle e_n\rangle^{\perp} \cap B_n = B_{n - 1}$ and use induction. If $P$ contains a long root, we may assume this root to be $a_1$. Then, $\langle a_1\rangle^\perp \cap B_n = \{\pm b_1\}\cup B_{n - 2}$, where we consider $B_{n - 2}$ to be embedded in $\mathbb R^n$ using the last $n - 2$ coordinates. In particular, we may again use the induction hypothesis.
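As a sanity check (illustrative only), the following Python snippet verifies for small $n$ that $X_L$ consists of $n$ mutually orthogonal roots, so that $P_L$ is generated by $n$ commuting reflections:

```python
def e(i, n):
    """Standard basis vector e_i of R^n (1-indexed)."""
    v = [0] * n
    v[i - 1] = 1
    return v

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def X_L(L, n):
    """The set X_L = {a_1, b_1, ..., a_L, b_L, e_{2L+1}, ..., e_n}."""
    vecs = []
    for i in range(1, L + 1):
        a = [p - q for p, q in zip(e(2*i - 1, n), e(2*i, n))]  # a_i
        b = [p + q for p, q in zip(e(2*i - 1, n), e(2*i, n))]  # b_i
        vecs += [a, b]
    vecs += [e(j, n) for j in range(2*L + 1, n + 1)]
    return vecs

for n in range(2, 8):
    for L in range(n // 2 + 1):
        vecs = X_L(L, n)
        assert len(vecs) == n                      # always n roots
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                assert dot(vecs[i], vecs[j]) == 0  # mutually orthogonal
```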
\medbreak
To determine $\ms{Inv} (B_n, M_\ast)$, we introduce additional pieces of notation. We denote $P_L$-torsors over a field $k$ by $(\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_n) \in (k^\times/k^{\times 2})^n$. From the $(\mathbb Z/2)^n$-section, we know that $\ms{Inv}(P_L, M_\ast)$ is completely decomposable with basis $\{x_I\}_{I\subseteq [1;n]}$. Since this parameterization is inconvenient in the present setting, we change the index set by putting \begin{align*}
\Lambda^d_L:= \{(A, B, C, E):\,& A, B, C\subseteq[1;L] \text{ pairwise disjoint},\; E\subseteq[2L + 1;n], \\
&|A| + |B| + 2|C| + |E| = d\}. \end{align*} We reindex the basis of $\ms{Inv} (P_L, M_\ast)$ by defining for every $(A, B, C, E) \in \Lambda^d_L$: \begin{align*} x_{A, B, C, E}^L:\, H^1(k, P_L)& \to \Kt^{\mathsf M}_\ast(k)\\ (\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_n)&\mapsto \prod_{a \in A}\{\alpha_a\} \prod_{b \in B}\{\beta_b\} \prod_{c \in C}\{\alpha_c\}\{\beta_c\} \prod_{e \in E}\{\epsilon_e\}. \end{align*} In the same spirit, we also write $$ P(A, B, C, E):= P(\{a_p\}_{p \in A}\cup \{b_q\}_{q \in B}\cup \{a_r, b_r\}_{r \in C}\cup \{e_s\}_{s \in E}). $$
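The reindexing is a genuine relabeling: the tuples in $\bigcup_d \Lambda^d_L$ correspond bijectively to the $2^n$ subsets $I\subseteq [1;n]$. A small Python enumeration (illustrative only) confirms the count:

```python
from itertools import product

def Lambda(L, n):
    """All tuples (A, B, C, E) with A, B, C pairwise disjoint subsets of
    {1,...,L} and E a subset of {2L+1,...,n}, tagged with their degree d."""
    tuples = []
    # each index in [1;L] lies in exactly one of A, B, C, or in none of them
    for assign in product(range(4), repeat=L):
        A = frozenset(i + 1 for i, s in enumerate(assign) if s == 0)
        B = frozenset(i + 1 for i, s in enumerate(assign) if s == 1)
        C = frozenset(i + 1 for i, s in enumerate(assign) if s == 2)
        for mask in product(range(2), repeat=n - 2 * L):
            E = frozenset(2*L + 1 + i for i, s in enumerate(mask) if s == 1)
            d = len(A) + len(B) + 2 * len(C) + len(E)
            tuples.append((d, (A, B, C, E)))
    return tuples

for n in range(2, 7):
    for L in range(n // 2 + 1):
        tuples = Lambda(L, n)
        assert len(tuples) == 2 ** n          # matches the 2^n basis elements x_I
        assert all(d <= n for d, _ in tuples) # degrees are bounded by n
```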
\medbreak
\noindent For $d \le n$, we now construct the specific $W(B_n)$-invariant $$u_d:= \rho^*(\widetilde{w_d}) \in \ms{Inv}^d(W(B_n), M_\ast),$$ where $\widetilde{w_d} \in \ms{Inv}^d(S_n, \Kt^{\mathsf M}_\ast)$ denotes the $d$th modified Stiefel-Whitney class and $ \rho:\, W(B_n)\cong S_n\ltimes (\mathbb Z/2)^n \to S_n $ is the canonical projection. The map $W(B_n) \to S_n$ sends both $s_{a_i}, s_{b_i}$ to the transposition $(2i - 1, 2i)$ and $s_{e_i}$ to the neutral element. Let $k \in \mc F_{k_0}$ and $(\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_n)$ be a $P_L$-torsor over $k$. Using Example \ref{abQuadratic2} and $\{2\}\{2\} = 0$, we see that the value of the total modified Stiefel-Whitney class at this torsor is $\prod_{i \le L }(1 + \{\alpha_i\beta_i\})$. Hence, \begin{equation} \label{bnUd} \mathsf{res}^{P_L}_{W(B_n)}(u_d) = \sum_{\substack{(A, B, \varnothing, \varnothing) \in \Lambda^d_L}} x^L_{A, B, \varnothing, \varnothing}. \end{equation} Next, we construct an invariant $v_d$ such that \begin{equation} \label{bnVdEq} \mathsf{res}^{P_L}_{W(B_n)}(v_d) = \sum_{(\varnothing, \varnothing, C, E) \in \Lambda^d_L} x^L_{\varnothing, \varnothing, C, E}. \end{equation} To that end, we note that $W(B_n)$ embeds into $S_{2n}$ via $\sigma \prod_{i \in I}s_{e_i}\mapsto \sigma \cdot(\sigma + n) \prod_{i \in I}(i, i + n)$, where $I\subseteq [1;n]$, $\sigma \in S_n$ and $\sigma + n \in S_{2n}$ is given by \begin{align*} k\mapsto \begin{cases} k &\text{ if }k \le n,\\ n + \sigma(k - n) &\text{ if }k > n. \end{cases} \end{align*} We define the modified Stiefel-Whitney invariants $\widetilde{w_d} \in \ms{Inv}^d(S_{2n}, \Kt^{\mathsf M}_\ast)$ as before and put $v_d' := \mathsf{res}^{W(B_n)}_{S_{2n}}(\widetilde{w_d}) \in \ms{Inv}^d(W(B_n), \Kt^{\mathsf M}_\ast)$ for $d \le n$.
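One can spot-check in Python that the image of $W(B_n)$ in $S_{2n}$ is closed under composition and has order $2^n n!$; we model the image permutations directly (illustrative only):

```python
from itertools import permutations, product
from math import factorial

def embed(sigma, I, n):
    """Image in S_{2n} of the signed permutation sigma * prod_{i in I} s_{e_i}:
    the index j goes to sigma(j), shifted by n exactly when the sign flips."""
    p = [0] * (2 * n)
    for j in range(1, n + 1):
        if j in I:
            p[j - 1] = sigma[j - 1] + n
            p[j + n - 1] = sigma[j - 1]
        else:
            p[j - 1] = sigma[j - 1]
            p[j + n - 1] = sigma[j - 1] + n
    return tuple(p)

n = 3
image = set()
for sigma in permutations(range(1, n + 1)):
    for bits in product((0, 1), repeat=n):
        I = {i + 1 for i, b in enumerate(bits) if b}
        image.add(embed(sigma, I, n))

assert len(image) == 2 ** n * factorial(n)   # |W(B_n)| = 2^n * n!

def compose(p, q):                           # (p o q)(k) = p(q(k))
    return tuple(p[q[k] - 1] for k in range(2 * n))

# The image is closed under composition, hence a subgroup of S_{2n}.
assert all(compose(p, q) in image for p in image for q in image)
```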
We define $v_d$ recursively by setting $v_0 := 0$ and $$v_d := v_d' + \sum_{k \le d - 1}u_{d - k} v_k.$$ To show that this invariant satisfies \eqref{bnVdEq}, we first note that already the restriction of $v_d'$ to $P_L$ agrees with the right-hand side of \eqref{bnVdEq} up to mixed lower-order terms.
\begin{lemma}
\label{bnVdLem} We have \begin{equation} \label{bnVd}
\mathsf{res}^{P_L}_{W(B_n)}(v_d') =\hspace{-.2cm} \sum_{(\varnothing, \varnothing, C, E) \in \Lambda^d_L} \hspace{-.2cm}x^L_{\varnothing, \varnothing, C, E} + \sum_{k \le d - 1}\{-1\}^{d - k}\hspace{-.2cm}\sum_{(A, B, C, E) \in \Lambda^k_L}\hspace{-.2cm} x^L_{A, B, C, E}. \end{equation} \end{lemma}
\begin{proof}
Observe that the map $W(B_n) \to S_{2n}$ sends $s_{e_i} \mapsto (i, i + n)$ and \begin{align*}
s_{a_i}\mapsto(2i - 1, 2i)(2i - 1 + n, 2i + n), \;
s_{b_i}\mapsto (2i - 1, 2i + n)(2i, 2i - 1 + n) \end{align*} Hence, by Lemma \ref{pfisterLemma}, the composition $P_L \to W(B_n) \to S_{2n} \to O_{2n}$ maps a $P_L$-torsor to the quadratic form $$\langle\langle - \alpha_1, - \beta_1 \rangle\rangle\oplus\dots\oplus \langle\langle - \alpha_L, - \beta_L \rangle\rangle\oplus \lan2, 2\epsilon_{2L + 1}, \dots, 2, 2\epsilon_n\rangle.$$ We claim that the total modified Stiefel-Whitney class evaluated at this quadratic form equals
\begin{align}
\label{vdswEq}
\prod_{i \le L}(1 + \{-1\}(\{\alpha_i\} + \{\beta_i\}) + \{\alpha_i\}\{\beta_i\})\prod_{ 2L + 1\le i \le n}(1 + \{\epsilon_i\}). \end{align*} To see this, it suffices to check that $w(\lan2\rangle\otimes \langle\langle-\alpha, -\beta\rangle\rangle) = 1 +\{-1\}(\{\alpha\} + \{\beta\}) + \{\alpha\}\{\beta\}$. We compute
\begin{align*}
w( \lan2\rangle\otimes\langle\langle-\alpha, -\beta\rangle\rangle) &= (1 + \{2\})(1 + \{2\alpha\})(1 + \{2\beta\})(1 + \{-2\beta\} + \{-\alpha\})\\
&= (1 + \{\alpha\} + \{2\}\{\alpha\}) (1 + \{\alpha\} + \{2\beta\}\{-\alpha\})\\
&= 1 + \{\alpha\}\{\alpha\} + \{2\}\{\alpha\} + \{2\beta\}\{-\alpha\}\\
&= 1 +\{-1\}\{\alpha\} + \{-1\}\{\beta\} + \{\alpha\}\{\beta\}.
\end{align*}
Thus, translating \eqref{vdswEq} into the new notation, we obtain that
$$\mathsf{res}^{P_L}_{W(B_n)}(v_d') = \sum_{(\varnothing, \varnothing, C, E) \in \Lambda^d_L} x^L_{\varnothing, \varnothing, C, E} + \sum_{k \le d - 1}\{-1\}^{d - k}\hspace{-.5cm}\sum_{(A, B, C, E) \in \Lambda^k_L} x^L_{A, B, C, E}.\qedhere$$ \end{proof}
In light of Lemma \ref{bnVdLem}, to establish \eqref{bnVdEq}, it remains to understand the product structure between $u_{d - k}$ and $v_k$. To that end, we restrict the products to $P_L$.
\begin{lemma} \label{bnLemma1} We have $$
\sum_{(A, B, \varnothing, \varnothing) \in \Lambda_L^d}x^L_{A, B, \varnothing, \varnothing} \sum_{(\varnothing, \varnothing, C, E) \in \Lambda_L^f}x^L_{\varnothing, \varnothing, C, E} = \sum_{\substack{(A, B, C, E) \in \Lambda^{d + f}_L\\2|C| + |E| = f}}x^L_{A, B, C, E}. $$ \end{lemma}
\begin{proof}
First, since $x^L_{A, B, \varnothing, \varnothing} x^L_{\varnothing, \varnothing, C, E} = \{-1\}^{|A \cap C| + |B \cap C|} x^L_{A - C, B - C, C, E}$, \begin{align*}
&\sum_{(A, B, \varnothing, \varnothing) \in \Lambda_L^d}x^L_{A, B, \varnothing, \varnothing} \sum_{(\varnothing, \varnothing, C, E) \in \Lambda_L^f}x^L_{\varnothing, \varnothing, C, E}\\
&\qquad= \sum_{k \ge 0}\sum_{\substack{(A, B, \varnothing, \varnothing) \in \Lambda_L^d\\(\varnothing, \varnothing, C, E) \in \Lambda_L^f\\|A \cap C| + |B \cap C| = k}} \{-1\}^kx^L_{A - C, B - C, C, E} \\
&\qquad= \sum_{\substack{(A, B, C, E) \in \Lambda_L^{d + f}\\2|C| + |E| = f }} x^L_{A, B, C, E}
+ \sum_{k \ge 1}
\sum_{\substack{(A, B, \varnothing, \varnothing) \in \Lambda_L^d\\(\varnothing, \varnothing, C, E) \in \Lambda_L^f\\|A \cap C| + |B \cap C| = k}} \{-1\}^kx^L_{A - C, B - C, C, E}. \end{align*} To show that the second sum vanishes, fix $k \ge 1$ and $(A', B', C, E) \in \Lambda_L^{d + f - k}$. Then, define
\begin{align*}
S&:=\{(A, B):\, (A, B, \varnothing, \varnothing) \in \Lambda_L^d \text{ and }A - C = A' \text{ and }B - C = B'\}\\
& = \{(A'\cup U, B'\cup V):\, U, V\subseteq C\text{ and } U \cap V = \varnothing \text{ and }|U| + |V| = k\}.
\end{align*}
Using this description, we conclude $|S| = 2^k\binom{|C|}{k}$. Since $k \ge 1$, this is even and we obtain the desired vanishing of the second sum. \end{proof} In the rest of this section, we show that $\ms{Inv} (W(B_n), M_\ast)$ is completely decomposable and that the products $\{u_{d - r} v_r\}_{\substack{\max(0, 2d - n) \le r \le d\\ d \le n}}$ yield a basis. \smallbreak
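The counting argument concluding the proof above can be spot-checked in Python: for disjoint $U, V\subseteq C$ with $|U| + |V| = k$ one indeed finds $2^k\binom{|C|}{k}$ pairs, an even number for $k \ge 1$ (illustrative only):

```python
from itertools import combinations
from math import comb

def count_pairs(C, k):
    """Number of pairs (U, V) of disjoint subsets of C with |U| + |V| = k."""
    total = 0
    for u in range(k + 1):
        for U in combinations(C, u):
            rest = [c for c in C if c not in U]
            total += comb(len(rest), k - u)
    return total

for size in range(6):
    C = list(range(size))
    for k in range(1, size + 1):
        cnt = count_pairs(C, k)
        assert cnt == 2 ** k * comb(size, k)  # |S| = 2^k * binom(|C|, k)
        assert cnt % 2 == 0                   # even for k >= 1: terms cancel mod 2
```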
Before determining the structure of $\ms{Inv} (W(B_n), M_\ast)$, it is helpful to know something about the image of the restriction maps $\ms{Inv} (W(B_n), M_\ast) \to \ms{Inv} (P_L, M_\ast)$. Let $d, k, \ell, L$ be non-negative integers, $L \le m$. Then, the invariant $$
\phi^d_{L, k, \ell}:=\sum_{\substack{(A, B, C, E) \in \Lambda^d_L\\|C| = k, |E| = \ell}}x^L_{A, B, C, E} $$
is non-trivial if and only if there exists $(A, B, C, E) \in \Lambda_L^d$ with $|C| = k$ and $|E| = \ell$.
\begin{lemma}
\label{prevLem} The image of the restriction map $\ms{Inv} (W(B_n), M_\ast) \to \ms{Inv} (P_L, M_\ast)$ is contained in the free submodule with basis $$
\big\{\phi^d_{L, k, \ell}:\, 2k + \ell \le d \le n, \; 2(d - k-\ell) \le 2L \le n - \ell\big\}. $$ \end{lemma}
\begin{proof}
Let us first show that $\phi^d_{L, k, \ell} \ne 0$ iff $2k + \ell \le d \le n$ and $2(d - k-\ell) \le 2L \le n - \ell$. First, the conditions $2k + \ell \le d$ and $2L + \ell \le n$ are necessary. Furthermore, from the pairwise disjointness of $A, B, C$, we conclude $|A| + |B| + |C| \le L$. This is equivalent to $d - (2k + \ell) + k \le L$. Thus, $d - k-\ell \le L$ is also necessary. To check sufficiency, suppose we are given $L, k, \ell, d$ satisfying the restrictions. Then, $([1;d - \ell-2k], \varnothing, [d - \ell-2k + 1;d - \ell-k], [2L + 1;2L + \ell]) \in \Lambda^d_L$. Thus, $\phi^d_{L, k, \ell} \ne 0$. Next, we check that the image of the restriction map is indeed contained in the $M_\ast(k_0)$-submodule generated by the $\phi^d_{L, k, \ell}$.
\smallbreak
Observe that all of the following elements normalize $P_L$:
$$\{s_{e_{2i - 1} - e_{2j-1}} s_{e_{2i} - e_{2j}}\}_{i, j \le L},\qquad \{s_{e_i - e_j}\}_{i, j \ge 2L + 1}\quad \text{and}\quad \{s_{e_{2i}}\}_{i \le L}.$$ Let $N_L\subseteq N_{W(B_n)}(P_L)$ be the subgroup generated by these elements. We claim that $N_L$ permutes the $x^L_{A, B, C, E}$. Applying $s_{e_{2i - 1} - e_{2j-1}} s_{e_{2i} - e_{2j}}$ for $i, j \le L$ to a $P_L$-torsor $$ (\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_n) $$
interchanges $\alpha_i \leftrightarrow \alpha_j$ and $\beta_i \leftrightarrow \beta_j$. Thus, $x^L_{A, B, C, E}$ maps to $x^L_{A', B', C', E}$ where $A'/B'/C'$ is obtained from $A/B/C$ by applying the transposition $(i, j)$ to the respective sets. Similarly, we see that swapping the $i$th and the $j$th coordinate for $i, j \ge 2L + 1$ maps $x^L_{A, B, C, E}$ to $x^L_{A, B, C, E'}$ where $E'$ is obtained from $E$ by applying to it the transposition $(i, j)$. Finally, changing the $(2i)$th sign maps $x^L_{A, B, C, E}$ to $x^L_{A', B', C, E}$ where $A' = (A - \{i\})\cup (B \cap \{i\})$ and $B' = (B - \{i\})\cup (A \cap\{i\})$. That is, if $i \in A$ we remove it from $A$ and put it into $B$ and vice versa.
\smallbreak
Iteratively applying these operations to an arbitrary $(A_0, B_0, C_0, E_0) \in \Lambda_L^d$ shows that its orbit under $N_L$ equals $ \{(A, B, C, E) \in \Lambda_L^d\;:\,\; |C| = |C_0|, |E| = |E_0|\}$. Now, the lemma follows from Corollary \ref{orbitSum}. \end{proof}
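The orbit description at the end of the proof can be verified by brute force for small parameters. The following Python snippet (illustrative only) implements the three operations and checks, for $n = 5$, $L = 2$ and a starting tuple of degree $4$, that the orbit consists exactly of the tuples with the same values of $|C|$ and $|E|$:

```python
from itertools import combinations

def swap(S, i, j):
    return frozenset(j if x == i else i if x == j else x for x in S)

def orbit(start, n, L):
    """Orbit of a tuple (A, B, C, E) under the three operations from N_L."""
    seen, frontier = {start}, [start]
    while frontier:
        A, B, C, E = frontier.pop()
        nxt = []
        for i, j in combinations(range(1, L + 1), 2):        # permute [1;L]
            nxt.append((swap(A, i, j), swap(B, i, j), swap(C, i, j), E))
        for i, j in combinations(range(2*L + 1, n + 1), 2):  # permute E-part
            nxt.append((A, B, C, swap(E, i, j)))
        for i in range(1, L + 1):                            # sign change at 2i
            nxt.append(((A - {i}) | (B & {i}), (B - {i}) | (A & {i}), C, E))
        for t in nxt:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

n, L = 5, 2
start = (frozenset({1}), frozenset(), frozenset({2}), frozenset({5}))  # d = 4
expected = set()
for c in range(1, L + 1):
    for a in set(range(1, L + 1)) - {c}:
        for A, B in [({a}, set()), (set(), {a})]:
            for e in range(2*L + 1, n + 1):
                expected.add((frozenset(A), frozenset(B),
                              frozenset({c}), frozenset({e})))
assert orbit(start, n, L) == expected   # all tuples with |C| = 1, |E| = 1
```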
By Proposition \ref{splitCorollary}, the injection $\ms{Inv} (W(B_n), M_\ast) \to \prod_{L \le m}\ms{Inv} (P_L, M_\ast)$ has its image inside $\prod_{L \le m}\ms{Inv} (P_L, M_\ast)^{N_L}$ and Lemma \ref{prevLem} gives a good description of this object. However, this map is not surjective. One reason is the following: if an element $(z_L)_L$ of the right-hand side comes from a $W(B_n)$-invariant, then certainly the restrictions of $z_L$ and $z_{L'}$ to $P_L \cap P_{L'}$ must coincide. To address this, we prove the following refined lemma.
\smallbreak
\begin{lemma} \label{refinedBn}
The image of $\ms{Inv} (W(B_n), M_\ast) \to \prod_{L \le m}\ms{Inv} (P_L, M_\ast)$ lies in the subgroup generated by $\{s \cdot M_{*-|s|}(k_0):\,s \in S\}$, where $$ S:= \Big\{ \Big(\sum_{2k + \ell = r} \phi_{L, k, \ell}^d\Big)_L:\, \max(0, 2d - n) \le r \le d \le n\Big\}\subseteq \prod_{L \le m}\ms{Inv} (P_L, \Kt^{\mathsf M}_\ast). $$ \end{lemma}
\begin{proof}
Let $\widetilde z \in \ms{Inv} (W(B_n), M_\ast)$ be a homogeneous invariant and $z = (z_L)_L \in \prod_{L \le m}\ms{Inv} (P_L, M_\ast)$ be the image of $\widetilde z$ under the restriction maps. By Lemma \ref{prevLem}, $z = \big(\sum_{d, k, \ell}\phi_{L, k, \ell}^dm_{L, d, k, \ell}\big)_L$ for some $m_{L, d, k, \ell} \in M_{\ast -d}(k_0)$, where the sums are over all those $d, k, \ell$ such that $\phi_{L, k, \ell}^d \ne 0$.
\smallbreak
As a first step, we show that $m_{L, d, k, \ell}$ is independent of $L$ in the sense that $m_{L, d, k, \ell} = m_{L', d, k, \ell}$ whenever $\phi^d_{L, k, \ell} \ne 0$ and $\phi^d_{L', k, \ell} \ne 0$. We then denote by $m_{d, k, \ell}$ the common value. Observe that $(A_0, B_0, C_0, E_0) \in \Lambda^d_{L'} \cap\Lambda^d_L$, where
$$ (A_0, B_0, C_0, E_0):=([1;d - \ell-2k], \varnothing, [d - \ell-2k + 1;d - \ell-k], [n - \ell + 1;n]). $$
Hence, since $z$ comes from an invariant of $W(B_n)$, $$ \mathsf{res}^{P(A_0, B_0, C_0, E_0)}_{P_L}(z_L) = \mathsf{res}^{P(A_0, B_0, C_0, E_0)}_{P_{L'}}(z_{L'}). $$ Comparing the $x_{A_0, B_0, C_0, E_0}$-components on both sides yields $m_{L, d, k, \ell} = m_{L', d, k, \ell}$.
\smallbreak
Now, let us have a look at the second obstruction. We want to prove $m_{d, k, \ell} = m_{d, k', \ell'}$, if $2k + \ell = 2k' + \ell'$ and if there exist $L, L'$ such that $\phi^d_{L', k', \ell'} \ne 0$ and $\phi^d_{L, k, \ell} \ne 0$. It suffices to prove this in the case $k' - k = 1$. Since there exist $L, L'$ satisfying $\phi^d_{L', k', \ell'}, \phi^d_{L, k, \ell} \ne 0$, we can choose some $L$ such that $\phi^d_{L + 1, k', \ell'}, \phi^d_{L, k, \ell} \ne 0$.
\smallbreak
Let $y$ be the restriction of $\widetilde z$ to $P([1;d - \ell-2k], \varnothing, [L - k + 1;L], [2L + 3;2L + \ell])\times W(B_2)$, where $B_2$ is embedded via the $(2L + 1)$th and the $(2L + 2)$th coordinates. By Proposition \ref{productLem}, $$ y = \sum_{\substack{A\subseteq[1;d - \ell-2k]\\ C\subseteq [L - k + 1;L]\\ E\subseteq[2L + 3;2L + \ell]}}x^L_{A, \varnothing, C, E} y_{A, C, E} $$
for uniquely determined $y_{A, C, E} \in \ms{Inv}^{\ast -|A|-2|C|-|E|}(W(B_2), M_\ast)$. Furthermore, by the results of Section \ref{b2Sec}, $$ y_{A, C, E} = m^{(0)}_{A, C, E} + w_1m^{(1a)}_{A, C, E} + v_1m^{(1b)}_{A, C, E} + w_2m^{(2)}_{A, C, E} $$ for uniquely determined $$
m^{(0)}_{A, C, E} \in M_{\ast -|A|-2|C|-|E|}(k_0), \; m^{(1a)}_{A, C, E}, m^{(1b)}_{A, C, E} \in M_{\ast -|A|-2|C|-|E|-1}(k_0) $$ and $$
m^{(2)}_{A, C, E} \in M_{\ast -|A|-2|C|-|E|-2}(k_0). $$ Restricting $y$ further to $P([1;d - \ell-2k], \varnothing, [L - k + 1;L], [2L + 1;2L + \ell])$ and considering the $x_{[1;d - 2k-\ell], \varnothing, [L - k + 1;L], [2L + 1;2L + \ell]}$-component, Corollary \ref{i2Corollary} yields that $$ m_{d, k, \ell} = m^{(2)}_{([1;d - \ell-2k], [L - k + 1;L], [2L + 3;2L + \ell])}. $$ On the other hand, restricting $y$ to $P([1;d - \ell-2k], \varnothing, [L - k + 1;L + 1], [2L + 3;2L + \ell])$ and considering the $x_{[1;d - 2k-\ell], \varnothing, [L - k + 1;L + 1], [2L + 3;2L + \ell]}$-component, we obtain from Corollary~\ref{i2Corollary} that $$ m_{d, k', \ell'} = m^{(2)}_{([1;d - \ell-2k], [L - k + 1;L], [2L + 3;2L + \ell])}. $$ This proves the lemma. \end{proof}
Combining Lemmas \ref{bnLemma1} and \ref{refinedBn}, we deduce the following decomposition of $\ms{Inv} (W(B_n), M_\ast)$.
\smallbreak
\begin{corollary} The group $\ms{Inv} (W(B_n), M_\ast)$ is completely decomposable with basis $$
\big\{u_{d - r}v_r:\;\max(0, 2d - n) \le r \le d \le n\big\}. $$ \end{corollary}
\goodbreak \section{Weyl groups of type $F_4$.} \label{F4SubSect}
\noindent The root system $F_4$ is the disjoint union $\Delta_1\sqcup \Delta_2\sqcup \Delta_3\subseteq \mathbb R^4$ with long roots $\Delta_1:=\{\pm e_i\pm e_j:\, 1\le i < j \le 4\}$ and short roots $$ \Delta_2:=\{\pm e_i:\, 1 \le i \le 4\}, \qquad \Delta_3:=\{1/2(\pm e_1\pm e_2\pm e_3\pm e_4)\}. $$ Moreover, $\Omega(W(F_4)) = \{[P_0], [P_1], [P_2]\}$, where $$P_0:= P(e_1, e_2, e_3, e_4), \qquad P_1:= P(a_1, b_1, e_3, e_4), \qquad P_2:= P(a_1, b_1, a_2, b_2).$$ \medbreak
Indeed, the set of long roots of $F_4$ is the root system $D_4$, which up to conjugacy has a unique maximal set of pairwise orthogonal vectors, namely $a_1, b_1, a_2, b_2$. On the other hand, if we have a maximal set of pairwise orthogonal roots containing a short root, say $e_4$, then $\langle e_4\rangle^\perp \cap F_4 = B_3$. We have determined before that up to conjugacy $B_3$ contains two maximal sets of pairwise orthogonal roots; namely $\{e_1, e_2, e_3\}$ and $\{a_1, b_1, e_3\}$.
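As an aside, the root count and the identification of the $D_4$ subsystem can be spot-checked in Python (illustrative only; all roots are scaled by $2$ so that the coordinates stay integral):

```python
from itertools import combinations, product

def roots_F4():
    """F4 root system, scaled by 2 so all coordinates are integers."""
    roots = set()
    for i, j in combinations(range(4), 2):
        for si, sj in product((2, -2), repeat=2):
            v = [0, 0, 0, 0]
            v[i], v[j] = si, sj
            roots.add(tuple(v))                    # 2(+-e_i +- e_j)
    for i in range(4):
        for s in (2, -2):
            v = [0, 0, 0, 0]
            v[i] = s
            roots.add(tuple(v))                    # 2(+-e_i)
    for signs in product((1, -1), repeat=4):
        roots.add(signs)                           # +-e_1 ... +-e_4 (times 2)
    return roots

F4 = roots_F4()
assert len(F4) == 48                               # F4 has 48 roots

norm = lambda v: sum(x * x for x in v)
long_roots = {v for v in F4 if norm(v) == max(norm(w) for w in F4)}
assert len(long_roots) == 24
# The roots of maximal length are exactly the +-e_i +- e_j, i.e. a copy of D_4.
assert long_roots == {v for v in F4 if sorted(map(abs, v)) == [0, 0, 2, 2]}
```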
\smallbreak
Furthermore, the inclusion $P_2\subseteq W(B_4)\subseteq W(F_4)$ shows that the restriction map $$ \ms{Inv} (W(F_4), M_\ast) \to \ms{Inv} (W(B_4), M_\ast) $$ is injective. Recall that $\ms{Inv} (W(B_4), M_\ast)$ is a free $M_\ast(k_0)$-module with the basis $$\{1, u_1, v_1, u_2, v_1u_1, v_2, v_2u_1, v_3, v_4\}.$$ \medbreak
Before constructing specific invariants, we first point to another restriction in degree $2$. Since $\mathsf{res}^{P_2}_{W(B_4)}(v_1) = \mathsf{res}^{P_2}_{W(B_4)}(v_3) = 0$, the image of the restriction $\mathsf{res}^{P_2}_{W(F_4)}$ is contained in the free submodule $S\subseteq\ms{Inv}^*(P_2, M_\ast)$ with basis $\{1, y_1, y_2, y_2', y_3, y_4\}$, where $y_1 = \ms{res}_{W(B_4)}^{P_2}(u_1)$, $y_2 = \ms{res}_{W(B_4)}^{P_2}(u_2)$, $y_2' = \ms{res}_{W(B_4)}^{P_2}(v_2)$, $y_3 = \ms{res}_{W(B_4)}^{P_2}(v_2u_1)$ and $y_4 = \ms{res}_{W(B_4)}^{P_2}(v_4)$.
\smallbreak
Now, let $a \in \ms{Inv} (P_2, M_\ast)$ be any invariant which is induced by an invariant from $\ms{Inv} (W(F_4), M_\ast)$. Then, we can find unique $m_d \in M_{\ast -d}(k_0)$, $m_2, m_2' \in M_{\ast -2}(k_0)$ such that $$ a = \sum_{\substack{ d \le 4\\d \ne 2}} \Big(\hspace{-.2cm}\sum_{(A, B, C) \in \Lambda^d}\hspace{-.2cm}x_{A, B, C}\Big)m_d +
\Big(\hspace{-.2cm}\sum_{(A, B, \varnothing) \in \Lambda^2}\hspace{-.2cm}x_{A, B, \varnothing}\Big)m_2 + \Big(\hspace{-.2cm}\sum_{(\varnothing, \varnothing, C) \in \Lambda^2}\hspace{-.2cm}x_{\varnothing, \varnothing, C}\Big)m_2'. $$ Now, $s_{1/2(e_1 + e_2 + e_3 + e_4)}$ lies in the normalizer of $P_2$, as it leaves ${a_1}$, ${a_2}$ fixed and swaps ${b_1}$ with $-{b_2}$. Since $a$ comes from $\ms{Inv} (W(F_4), M_\ast)$, the action of $s_{1/2(e_1 + e_2 + e_3 + e_4)}$ leaves $a$ invariant. Hence, \begin{align*}
a = &\sum_{\substack{d \le 4\\d \ne 2}} \Big(\hspace{-.2cm}\sum_{(A, B, C) \in \Lambda^d}\hspace{-.2cm}x_{A, B, C}\Big)m_d
+ (x_{\{a_1, a_2\}} + x_{\{b_1, b_2\}} + x_{\{a_1, b_1\}} + x_{\{a_2, b_2\}})m_2 \\
&+ (x_{\{a_1, b_2\}} + x_{\{a_2, b_1\}})m_2'. \end{align*} Comparing coefficients yields $m_2 = m_2'$.
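The behaviour of $s_{1/2(e_1 + e_2 + e_3 + e_4)}$ on $a_1, a_2, b_1, b_2$ used above follows from the reflection formula $s_v(x) = x - 2\frac{(x, v)}{(v, v)}v$; a Python spot-check (illustrative only):

```python
from fractions import Fraction

v = tuple(Fraction(1, 2) for _ in range(4))   # v = 1/2 (e1 + e2 + e3 + e4)

def s(x):
    """Reflection in the hyperplane orthogonal to v."""
    c = 2 * sum(a * b for a, b in zip(x, v)) / sum(a * a for a in v)
    return tuple(a - c * b for a, b in zip(x, v))

a1, b1 = (1, -1, 0, 0), (1, 1, 0, 0)
a2, b2 = (0, 0, 1, -1), (0, 0, 1, 1)
neg = lambda x: tuple(-t for t in x)

assert s(a1) == a1 and s(a2) == a2            # a_1 and a_2 are fixed
assert s(b1) == neg(b2) and s(b2) == neg(b1)  # b_1 and b_2 swap up to sign
```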
\bigbreak
Thus, the image of the restriction $\ms{Inv} (W(F_4), M_\ast) \to \ms{Inv} (P_2, M_\ast)$ is contained in the free submodule with basis $\{1, y_1, y_2 + y_2', y_3, y_4\}$. Therefore, the image of the restriction $\ms{Inv} (W(F_4), M_\ast) \to \ms{Inv} (W(B_4), M_\ast)$ is contained in the free $M_\ast(k_0)$-module with basis $\{1, u_1, v_1, u_2 + v_2, v_1u_1, v_2u_1, v_3, v_4\}$.
\smallbreak
Now, we need to construct $F_4$-invariants which restrict to these elements. First observe that $D_4\subseteq F_4$ and that $W(F_4)$ stabilizes $D_4$. Thus, any $g \in W(F_4)$ maps the simple system $S = \{e_1 - e_2, e_2 - e_3, e_3 - e_4, e_3 + e_4\}$ to another simple system $S'\subseteq D_4$. Since all simple systems are conjugate, there exists a \emph{unique} $h \in W(D_4)$ mapping $S'$ to $S$. This procedure induces a permutation of the $3$ outer vertices $\{e_1 - e_2, e_3 - e_4, e_3 + e_4\}$ of the Coxeter graph, thereby giving rise to a group homomorphism $\psi:\, W(F_4) \to S_3$.
\smallbreak
Then, we define $v_1:= \psi^*(\widetilde{w_1})$, where $\widetilde{w_1} \in \ms{Inv} (S_3, \Kt^{\mathsf M}_\ast)$ is the first modified Stiefel-Whitney class. To determine the restriction of $v_1$ to $P_L$, note that the map $W(F_4) \to S_3$ sends $W(D_4)$ to the identity and $s_{e_4} $ to the transposition $(2, 3)$. Since $s_{e_i} = g_i s_{e_4} g_i^{-1}$, where $g_i \in W(D_4)$ denotes the element switching the 4th and the $i$th coordinate ($ i \le 3$), we conclude that all $s_{e_i}$ are sent to $(2, 3)$. Thus, the value of $\mathsf{res}^{P_L}_{W(F_4)}(v_1)$ at the $P_L$-torsor $(\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_4)$ is $\sum_{i \ge 2L + 1} \{\epsilon_i\}$.
\smallbreak
The embedding $W(F_4)\subseteq O_4$ as orthogonal reflection group yields invariants $\mathsf{res}^{W(F_4)}_{O_4}(w_d) \in \ms{Inv}^d(W(F_4), \Kt^{\mathsf M}_\ast)$, where $w_d \in \ms{Inv}^d(O_4, \Kt^{\mathsf M}_\ast)$ is the $d$th unmodified Stiefel-Whitney class. Again, if $2$ is not a square in $k_0$, then these invariants do not have a nice form when restricted to the $P_L$. Therefore, we modify them slightly and define invariants $\widehat{w_d}$. The image of a $P_L$-torsor $(\alpha_1, \beta_1, \dots, \alpha_L, \beta_L, \epsilon_{2L + 1}, \dots, \epsilon_4)$ in $H^1(k, O_4)$ under the map $P_L\subseteq W(F_4)\subseteq O_4$ may be computed by using Example \ref{abQuadratic} and is given by $\lan2\alpha_1, 2\beta_1, \dots, 2\alpha_L, 2\beta_L, \epsilon_{2L + 1}, \dots, \epsilon_4\rangle$. We would like to have $$ \mathsf{res}^{P_L}_{W(F_4)}(\widehat{w_d}) = \sum_{(A, B, C, E) \in \Lambda^d_L}x^L_{A, B, C, E}. $$
Since the restriction of $w_1$ to $P_L$ is already given by $\sum_{(A, B, C, E) \in \Lambda^1_L}x^L_{A, B, C, E}$, we put $\widehat w_1:= w_1$. Now, for $d = 2$, $$\mathsf{res}^{P_L}_{O_4}(w_2) = \sum_{(A, B, C, E) \in \Lambda^2_L}x^L_{A, B, C, E} + \sum_{(A, B, \varnothing, \varnothing) \in \Lambda^1_L}\{2\} x^L_{A, B, \varnothing, \varnothing},$$ so that $\widehat w_2:= w_2 - \{2\} (w_1 - v_1)$ has the desired property. The restriction of $w_3$ to $P_L$ is $$
\mathsf{res}^{P_L}_{O_4}(w_3) = \sum_{(A, B, C, E) \in \Lambda^3_L}x^L_{A, B, C, E} + \sum_{\substack{(A, B, \varnothing, E) \in \Lambda^2_L\\|E| = 1}}\{2\} x^L_{A, B, \varnothing, E}, $$ so that we set $\widehat{w_3}:= w_3-\{2\} (w_1 - v_1) v_1$. Finally, the restriction of $w_4$ to $P_L$ is $$
\mathsf{res}^{P_L}_{O_4}(w_4) = \sum_{(A, B, C, E) \in \Lambda^4_L}x^L_{A, B, C, E} + \sum_{\substack{(A, B, C, E) \in \Lambda^3_L\\2|C| + |E| = 2}}\{2\} x^L_{A, B, C, E} $$ so that we set $\widehat{w_4}:= w_4-\{2\}w_2(w_1 - v_1)$. Furthermore, define $u_1:= w_1 - v_1 \in \ms{Inv}^1(W(F_4), \Kt^{\mathsf M}_\ast)$.
\smallbreak
Now, we restrict the so-constructed invariants to $W(B_4)$. We claim that
\smallbreak
\begin{itemize} \item[(a)] $u_1, v_1 \in \ms{Inv}^1(W(F_4), \Kt^{\mathsf M}_\ast)$ restrict to $u_1, v_1 \in \ms{Inv}^1(W(B_4), \Kt^{\mathsf M}_\ast)$;
\smallbreak
\item[(b)] $u_1v_1, (\widehat{w_2}-u_1v_1) \in \ms{Inv}^2(W(F_4), \Kt^{\mathsf M}_\ast)$ restrict to $u_1v_1, u_2 + v_2 \in \ms{Inv}^2(W(B_4), \Kt^{\mathsf M}_\ast)$; and
\smallbreak
\item[(c)] $u_1\widehat{w_2}, (\widehat{w_3}-u_1\widehat{w_2}) \in \ms{Inv}^3(W(F_4), \Kt^{\mathsf M}_\ast)$ restrict to $u_1v_2, v_3$. \end{itemize}
\smallbreak
Finally, $\widehat w_4 \in \ms{Inv}^4(W(F_4), \Kt^{\mathsf M}_\ast)$ restricts to $v_4 \in \ms{Inv}^4(W(B_4), \Kt^{\mathsf M}_\ast)$. To prove these claims, we only need to consider the restrictions to $\ms{Inv} (P_L, \Kt^{\mathsf M}_\ast)$, where the identities are clear by construction. Thus, $\ms{Inv} (W(F_4), M_\ast)$ is a free $M_\ast(k_0)$-module with basis $$ \{1, \widehat{w_1}, v_1, \widehat{w_2}, \widehat{w_1}v_1, \widehat{w_3}, \widehat{w_2}v_1, \widehat{w_4}\}. $$ The construction of the $\widehat{w_d}$ also yields the following result. \begin{proposition} $\ms{Inv} (W(F_4), M_\ast)$ is completely decomposable with basis $$ \{1, w_1, v_1, w_2, v_1w_1, w_3, v_1w_2, w_4\}. $$ \end{proposition} \begin{remark}
Alternatively to the approach above, one could rely on transfer-restriction arguments to characterize the invariants of $W(B_4)$ that extend to $W(F_4)$ as those whose restriction to $W(D_4)$ is fixed under the action of $W(F_4)/W(D_4)$. \end{remark}
\section{Weyl groups of type $D_n$.} \label{DnSubSect}
\noindent The root system $D_n$, $n\ge2$, consists of the elements $$ D_n = \{\pm e_i\pm e_j:\, 1 \le i < j \le n\}. $$
Let $m:= [n/2]$, $a_i:= e_{2i - 1} - e_{2i}$ and $b_i:= e_{2i - 1} + e_{2i}$. By Remark \ref{splittingPrincipleRem}, this root system defines an orthogonal reflection group over $k_0$ with $|\Omega(W(D_n))| = 1$. More precisely, $P:= P(a_1, b_1, \dots, a_m, b_m)$ is a maximal elementary abelian $2$-group generated by reflections. Furthermore, $W(D_n)$ is a subgroup of $S_n\ltimes (\mathbb Z/2)^n\cong W(B_n)$ in the precise sense that $$
W(D_n) = \{\sigma \cdot \prod_{i \in I}s_{e_i} \in S_n\ltimes (\mathbb Z/2)^n:\, |I| \text{ even} \}. $$ \begin{remark}
We note that for odd $n$ the invariants of $W(D_n)$ can be deduced from those of $W(B_n)$, since $W(B_n) = \{\pm 1\} \times W(D_n)$. For instance, since $W(D_3) \cong W(A_3)$, this gives the invariants for $W(B_3)$. \end{remark}
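The decomposition $W(B_n) = \{\pm 1\} \times W(D_n)$ for odd $n$ can be spot-checked in Python for $n = 3$, modelling elements as signed permutation matrices (illustrative only):

```python
from itertools import permutations, product
from math import prod

n = 3  # odd

WB, WD = set(), set()
for sigma in permutations(range(n)):
    for signs in product((1, -1), repeat=n):
        # signed permutation matrix of sigma * prod_{i in I} s_{e_i}
        M = tuple(tuple(signs[i] if sigma[i] == j else 0 for j in range(n))
                  for i in range(n))
        WB.add(M)
        if prod(signs) == 1:    # |I| even, i.e. an element of W(D_n)
            WD.add(M)

minus_I = tuple(tuple(-(i == j) for j in range(n)) for i in range(n))
assert minus_I in WB and minus_I not in WD  # -1 flips an odd number of signs

neg = lambda M: tuple(tuple(-x for x in row) for row in M)
assert {neg(M) for M in WD} | WD == WB      # W(B_n) = {+-1} x W(D_n)
assert not ({neg(M) for M in WD} & WD)      # and the two cosets are disjoint
```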
Similarly to the $B_n$-section, we define
$$\Lambda^d:= \{(A, B, C):\, A, B, C\subseteq[1;m] \text{ pairwise disjoint}, \; |A| + |B| + 2|C| = d\}$$ and $x_{A, B, C}:\, H^1(k, P) \to \Kt^{\mathsf M}_d(k)$ \begin{align*}
x_{A, B, C}(\alpha_1, \beta_1, \dots, \alpha_m, \beta_m) = \prod_{a \in A}\{\alpha_a\} \cdot\prod_{b \in B}\{\beta_b\} \cdot \prod_{c \in C}\{\alpha_c\}\{\beta_c\}. \end{align*} As in the $B_n$-section, we now construct specific invariants. First, for $d \le m$ the group homomorphism $\rho:\, W(D_n)\subseteq W(B_n) \to S_n$ induces the invariant $u_d:= \rho^*(\widetilde{w_d}) \in \ms{Inv}^d(W(D_n), \Kt^{\mathsf M}_\ast)$ with $\mathsf{res}^P_{W(D_n)}(u_d) = \sum_{\substack{(A, B, \varnothing) \in \Lambda^d}} x_{A, B, \varnothing}.$
\smallbreak
Furthermore, from Section \ref{BnSubSect} we have an embedding $W(D_n)\subseteq W(B_n)\subseteq S_{2n}$. Starting with a $W(D_n)$-torsor $x \in H^1(k, W(D_n))$, we may consider its image $q_x \in H^1(k, O_{2n})$ induced by the map $W(D_n) \to S_{2n} \to O_{2n}$. Observe that $W(D_n) \to S_{2n}$ sends \begin{align*}
s_{a_i}\mapsto(2i - 1, 2i)(2i - 1 + n, 2i + n),\;
s_{b_i}\mapsto (2i - 1, 2i + n)(2i, 2i - 1 + n). \end{align*} Thus, starting with a $P$-torsor $(\alpha_1, \beta_1, \dots, \alpha_m, \beta_m)$, we may apply Lemma \ref{pfisterLemma} to see that under the composition $P \to W(D_n) \to S_{2n} \to O_{2n}$ this torsor maps to $$ \langle\langle - \alpha_1, - \beta_1 \rangle\rangle\oplus\dots\oplus\langle\langle - \alpha_m, - \beta_m \rangle\rangle\;\; (\oplus \langle 1, 1\rangle), $$ where the expression in parentheses appears only for odd $n$. We would like to have an element $v \in \ms{Inv} (W(D_n), \Kt^{\mathsf M}_\ast)$ such that $\mathsf{res}^P_{W(D_n)}(v)$ is given by \begin{align*}
H^1(k, P)& \to \Kt^{\mathsf M}_\ast(k)\\ (\alpha_1, \beta_1, \dots, \alpha_m, \beta_m)&\mapsto (1 + \{\alpha_1\}\{\beta_1\}) \cdots(1 + \{\alpha_m\}\{\beta_m\}). \end{align*} To achieve this goal, we proceed recursively as in Section \ref{BnSubSect}. First, we compute the value of the total Stiefel-Whitney class $w \in \ms{Inv} (O_4, \Kt^{\mathsf M}_\ast)$ at a $2$-fold Pfister form: \begin{align*} w(\langle \langle- \alpha, - \beta\rangle\rangle)& = (1 + \{\alpha\})(1 + \{\beta\})(1 + \{\alpha\} + \{\beta\}) \\
&= 1 + \{-1\}\{\alpha\} + \{-1\}\{\beta\} + \{\alpha\}\{\beta\}. \end{align*} Hence, setting $v' := \mathsf{res}^{W(D_n)}_{O_{2n}}(w)$, we obtain as in Lemma \ref{bnVdLem} that $$\mathsf{res}^P_{W(D_n)}(v_d') = \hspace{-.2cm} \sum_{(\varnothing, \varnothing, C) \in \Lambda^d} \hspace{-.2cm}x_{\varnothing, \varnothing, C} + \sum_{k \le d - 1}\{-1\}^{d - k}\hspace{-.2cm}\sum_{(A, B, C) \in \Lambda^k}\hspace{-.2cm} x_{A, B, C}.$$ Thus, proceeding recursively by setting $v_0 := 0$ and then $$v_d := v_d' + \sum_{k \le d - 1}u_{d - k} v_k$$ yields the desired invariant. Moreover, $ \mathsf{res}^P_{W(D_n)}(v_d) = \sum_{\substack{(\varnothing, \varnothing, C) \in \Lambda^d}} x_{\varnothing, \varnothing, C} $ and, by Lemma \ref{bnLemma1}, \begin{align}
\label{dnProdEq}
\mathsf{res}^P_{W(D_n)}(u_d) \mathsf{res}^P_{W(D_n)}(v_e) = \sum_{\substack{(A, B, C) \in \Lambda^{d + e}\\2|C| = e }}x_{A, B, C}. \end{align}
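The displayed value of the total Stiefel-Whitney class at a $2$-fold Pfister form uses only the relations $\{xy\} = \{x\} + \{y\}$ and $\{x\}\{x\} = \{-1\}\{x\}$ in $\Kt^{\mathsf M}_\ast(k)/2$. As an illustrative double-check (a sketch with an ad-hoc encoding: an element is a set of monomials, a monomial a sorted tuple of symbols, and {\tt m1} stands for $\{-1\}$):

```python
# Verify w(<<-a,-b>>) = (1+{a})(1+{b})(1+{a}+{b})
#                     = 1 + {-1}{a} + {-1}{b} + {a}{b}  in K^M_*(k)/2.

def reduce_mono(mono):
    syms = sorted(mono)
    done = False
    while not done:
        done = True
        for s in set(syms):
            if s != 'm1' and syms.count(s) >= 2:
                syms.remove(s)            # relation {s}{s} -> {-1}{s}
                syms.append('m1')
                done = False
    return tuple(sorted(syms))

def mul(x, y):
    out = set()
    for mx in x:
        for my in y:
            out ^= {reduce_mono(mx + my)}  # mod-2 sum of monomials
    return out

one, a, b = {()}, {('a',)}, {('b',)}
# <<-a,-b>> = <1, a, b, ab> and {ab} = {a} + {b}, so the total class is
# (1 + {a})(1 + {b})(1 + {a} + {b}):
w = mul(mul(one | a, one | b), one | a | b)
assert w == {(), ('a', 'm1'), ('b', 'm1'), ('a', 'b')}
```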
Now, suppose that $n = 2m$ is even. In this case, we need to construct one further invariant. Since $W(D_n)\cong S_n\ltimes (\mathbb Z/2)^{n - 1}$, we have an embedding $S_n\subseteq W(D_n)$ such that $|W(D_n)/S_n| = 2^{n - 1}$. More precisely, $W(D_n)/S_n$ consists of the cosets $g_IS_n$, where $g_I:=\prod_{i \in I}s_{e_i}$ and where $I\subseteq [1;n]$ has even cardinality. The left action of $W(D_n)$ on these cosets induces a map $W(D_n) \to S_{2^{n - 1}} \to O_{2^{n - 1}}$. Thus, any $k \in \mathcal F_{k_0}$ and $y \in H^1(k, W(D_n))$ induce a quadratic form $q_y \in H^1(k, O_{2^{n - 1}})$ and thereby an invariant $\omega \in \ms{Inv} (W(D_n), W)$. In fact, we claim that $\omega \in \ms{Inv} (W(D_n), I^m)$, where $I(k)\subseteq W(k)$ is the fundamental ideal.
\smallbreak
To prove this, we start by showing that $\mathsf{res}^P_{W(D_n)}(\omega) \in \ms{Inv}(P, I^m)$. It is convenient to understand the map $W(D_n) \to S_{2^{n - 1}}$ on the subgroup $P$. \begin{lemma}
\label{pactLem} Let $L = \{\{2i - 1, 2i\}:\, i \le m\}$ and define $f:\, 2^{[1;n]} \to 2^L$, \begin{align*}
f(I):= \{\{2i - 1, 2i\}:\, \text{ either }2i - 1 \in I\text{ or }2i \in I\text{, but not both}\}. \end{align*} Then, \begin{enumerate} \item
The action of $P$ on $W(D_n)/S_n$ has the $2^{m - 1}$ orbits $\mathcal O_{\mathcal J}:= \{g_I S_n\mid f(I) = {\mathcal J}\}$, ${\mathcal J}\subseteq L$, $|{\mathcal J}|$ even.
\smallbreak
\item Let $\mathcal O_{\mathcal J}$ be an arbitrary orbit from (1). Put $A_\mathcal J:= \{i \le m:\, \{2i - 1, 2i\} \in \mathcal J\}$ and $B_\mathcal J:= \{i \le m:\, \{2i - 1, 2i\}\not \in \mathcal J\}$. Then, $P(\{a_i\}_{i \in B_\mathcal J}\cup\{b_j\}_{j \in A_\mathcal J})$ acts trivially on $\mathcal O_{\mathcal J}$ and the action of $P_{\mathcal J}:= P(\{a_i\}_{i \in A_\mathcal J}\cup\{b_j\}_{j \in B_\mathcal J})$ on $\mathcal O_{\mathcal J}$ is simply transitive. \end{enumerate} \end{lemma}
\begin{proof} (1) Let $I\subseteq [1;n]$. If $\{2i - 1, 2i\}\not \in f(I)$, then $s_{a_i}g_I = g_Is_{a_i}$ and $s_{b_i}g_I = g_{I\Delta\{2i - 1, 2i\}}s_{a_i}$, where $\Delta$ denotes the symmetric difference. On the other hand, if $\{2i - 1, 2i\} \in f(I)$, then $s_{a_i}g_I = g_{I\Delta\{2i - 1, 2i\}}s_{a_i}$ and $s_{b_i}g_I = g_Is_{a_i}$. In particular, the action of $P$ leaves $f(I)$ unchanged, and these relations show that any two cosets $g_IS_n$ and $g_{I'}S_n$ with $f(I) = f(I')$ lie in the same $P$-orbit. This proves assertion (1).
\smallbreak
(2) By the proof of part (1), $P(\{a_i\}_{i \in B_\mathcal J}\cup\{b_j\}_{j \in A_\mathcal J})$ acts trivially on $\mathcal O_{\mathcal J}$. Since $|P(\{a_i\}_{i \in A_\mathcal J}\cup\{b_j\}_{j \in B_\mathcal J})| = 2^m = |\mathcal O_{\mathcal J}|$, assertion (2) follows after verifying that $P(\{a_i\}_{i \in A_\mathcal J}\cup\{b_j\}_{j \in B_\mathcal J})$ acts freely on $\mathcal O_{\mathcal J}$. So suppose that $I\subseteq [1;n]$, $M\subseteq A_\mathcal J$ and $N\subseteq B_\mathcal J$ are such that $f(I) = \mathcal J$ and $g:= \prod_{i \in M}s_{a_i} \cdot \prod_{j \in N}s_{b_j}$ fixes $g_IS_n$. The proof of part (1) gives that $g g_IS_n = g_{I'}S_n$, where $I' = I\Delta(\cup_{i \in M \cup N}\{2i - 1, 2i\})$. Observing that $I' = I$ if and only if $M = N = \varnothing$ concludes the proof. \end{proof}
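For small $n$, the orbit structure asserted in Lemma \ref{pactLem} can be replicated by brute force. The following illustrative sketch models the cosets $g_IS_n$ by the even-cardinality subsets $I\subseteq[1;n]$ and the generators of $P$ by the commutation relations from the proof, and checks the orbit count, the orbit sizes and that the orbits are exactly the fibres of $f$, for $n = 6$.

```python
from itertools import combinations

n, m = 6, 3
pairs = [frozenset({2 * i - 1, 2 * i}) for i in range(1, m + 1)]

def f(I):
    # pairs {2i-1, 2i} meeting I in exactly one element
    return frozenset(p for p in pairs if len(I & p) == 1)

def act(kind, i, I):
    p = pairs[i - 1]
    hit = len(I & p) == 1
    flip = hit if kind == 'a' else not hit   # s_{a_i} flips the pair iff hit,
    return frozenset(I ^ p) if flip else I   # s_{b_i} iff not hit

evens = [frozenset(c) for r in range(0, n + 1, 2)
         for c in combinations(range(1, n + 1), r)]

orbits, seen = [], set()
for I in evens:
    if I in seen:
        continue
    orb, stack = {I}, [I]
    while stack:
        J = stack.pop()
        for i in range(1, m + 1):
            for kind in ('a', 'b'):
                K = act(kind, i, J)
                if K not in orb:
                    orb.add(K)
                    stack.append(K)
    seen |= orb
    orbits.append(orb)

assert len(orbits) == 2 ** (m - 1)                       # 2^(m-1) orbits ...
assert all(len(o) == 2 ** m for o in orbits)             # ... of size 2^m
assert all(len({f(I) for I in o}) == 1 for o in orbits)  # orbits = fibres of f
```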
Using Lemma \ref{pactLem}, we conclude the following. Consider an arbitrary $y = (\alpha_1, \dots, \alpha_m, \beta_1, \dots, \beta_m) \in H^1(k, P)$ and let $q_y \in H^1(k, O_{2^{n - 1}})$ be the quadratic form induced by the composition $P \to W(D_n) \to S_{2^{n - 1}} \to O_{2^{n - 1}}$. The decomposition of the action of $P$ into orbits $\mathcal O_{\mathcal J}$ induces a decomposition of $q_y$ as $q_y\cong \oplus_{\mathcal J}q_{\mathcal J}$. More precisely, the action of $P$ on $\mathcal O_{\mathcal J}$ induces a map $P \to S_{2^m}$ and $q_{\mathcal J}$ is defined to be the image of $y \in H^1(k, P)$ under the composition $P \to S_{2^m} \to O_{2^m}$. By Lemma \ref{pactLem}, this composition factors through the projection $P \to P_{\mathcal J}$. Now, by Lemma \ref{pfisterLemma}, its remark and Lemma \ref{pactLem}, \begin{align}
\label{qjEq} q_{\mathcal J}\cong \langle 2^m\rangle\otimes \bigotimes_{i \in A_{\mathcal J}}\langle\langle - \alpha_i\rangle\rangle\otimes \bigotimes_{j \in B_{\mathcal J}}\langle\langle - \beta_j\rangle\rangle. \end{align} Thus, the image of $q_y = \oplus_{\mathcal J}q_{\mathcal J}$ in $W(k)$ lies in $I^m(k)$, so that $\mathsf{res}^P_{W(D_n)}(\omega) \in \ms{Inv}(P, I^m)$.
\smallbreak
Now, we pass from $P$ to $W(D_n)$. First, $\omega$ induces an invariant $\overline{\omega} \in \ms{Inv}^0(W(D_n), I^*/I^{* + 1})$ through the projection $W \to (I^*/I^{* + 1})_0 = W/I$. Since the image of $\mathsf{res}^P_{W(D_n)}(\omega)$ lies in $I^m\subseteq I$, we conclude that $\mathsf{res}^P_{W(D_n)}(\overline{\omega}) = 0$. As $P$ is up to conjugation the only maximal elementary abelian $2$-subgroup of $W(D_n)$ generated by reflections, Corollary \ref{splitCorollary} gives that $\overline\omega = 0 \in \ms{Inv}^0(W(D_n), I^*/I^{* + 1})$, i.e., $\omega \in \ms{Inv}(W(D_n), I)$. Iterating this procedure $m$ times shows that $\omega \in \ms{Inv}(W(D_n), I^m)$.
\smallbreak
By Example \ref{pfisterInvariant}, there exists an invariant $e_m:\, I^m(k) \to \Kt^{\mathsf M}_m(k)/2$ satisfying \begin{align}
\label{emyEq} e_m(\langle\langle\alpha_1\rangle\rangle\otimes \cdots\otimes\langle\langle\alpha_m\rangle\rangle) = \prod_{i \le m}\{\alpha_i\}. \end{align} Then, $$e_m(y):= e_m(\lan2^m\rangle\otimes\omega(y)) + \{-1\}u_{m - 1}(y)$$ defines an element of $\ms{Inv}^m(W(D_n), \Kt^{\mathsf M}_\ast)$ and, in the vein of Lemma \ref{bnVdLem}, we now determine its restriction to $P$. \begin{lemma}
\label{dnEmLem} \begin{equation} \label{dnEm}
\mathsf{res}^P_{W(D_n)}(e_m) = \sum_{\substack{(A, B, \varnothing) \in \Lambda^m\\ |A|\text{ even}}}x_{A, B, \varnothing} \end{equation} \end{lemma} \begin{proof}
First, since $\mathsf{res}^P_{W(D_n)}(u_{m - 1}) = \sum_{(A, B, \varnothing) \in \Lambda^{m - 1}}x_{A, B, \varnothing}$, it suffices to show that the restriction of the invariant $e_m'(y) := e_m(\lan2^m\rangle\otimes\omega(y))$ to $P$ is given by
\begin{align}
\label{dnEmp}
\sum_{\substack{(A, B, \varnothing) \in \Lambda^m\\ |A|\text{ even}}}x_{A, B, \varnothing} + \{-1\}\sum_{\substack{(A, B, \varnothing) \in \Lambda^{m - 1}}}x_{A, B, \varnothing}.
\end{align}
Indeed, by identities \eqref{qjEq} and \eqref{emyEq}, evaluating $\mathsf{res}^P_{W(D_n)}(e_m')$ at the torsor $(\alpha_1, \dots, \alpha_m, \beta_1, \dots, \beta_m) \in H^1(k, P)$ gives that
\begin{align*}
\sum_{\substack{(A, B, \varnothing) \in \Lambda^m\\ |A|\text{ even}}}\prod_{i \in A}\{-\alpha_i\}\prod_{j \in B}\{-\beta_j\}
&=\hspace{-.6cm} \sum_{\substack{(A, B, \varnothing) \in \Lambda^m\\ |A|\text{ even}}}\sum_{\substack{U \subseteq A\\ V \subseteq B}}\{-1\}^{m - |U| - |V|}\prod_{i \in U}\{\alpha_i\}\hspace{-.1cm}\prod_{j \in V}\{\beta_j\}\\
&= \hspace{-.5cm}\sum_{\substack{U, V \subseteq [1, m]\\ U \cap V = \varnothing}}\hspace{-.1cm} N_{U, V}\{-1\}^{m - |U| - |V|}\prod_{i \in U}\{\alpha_i\}\hspace{-.0cm}\prod_{j \in V}\{\beta_j\},
\end{align*}
where
$$N_{U, V} := |\{A \subseteq [1,m]:\, A\supset U , A \cap V = \varnothing , |A| \text{ even}\}|.$$
To conclude the proof, we distinguish on the value of $|U| + |V|$. First, the contributions coming from $|U| +|V| = m$ give precisely the leading-order expression in \eqref{dnEmp}.
Next, suppose that $|U| + |V| = m - k$ with $k \ge 1$. Then, $N_{U, V} = 2^{k - 1}$, so that the corresponding contribution vanishes mod 2 if and only if $k \ge 2$. Now, we conclude the proof by noting that the contributions for $k = 1$ yield precisely the summation expression in \eqref{dnEmp}. \end{proof}
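The counting argument for $N_{U, V}$ can be checked directly; the small enumeration sketch below (illustrative, with $k := m - |U| - |V|$) confirms $N_{U, V} = 2^{k - 1}$ for $k \ge 1$, and $N_{U, V} \in \{0, 1\}$ according to the parity of $|U|$ when $k = 0$.

```python
from math import comb

def n_uv(m, u, v):
    """|{A subset of [1,m] : u in A, A meets v trivially, |A| even}|,
    for disjoint u, v; only the sizes of u and v matter."""
    free = m - len(u) - len(v)
    return sum(comb(free, r) for r in range(free + 1)
               if (len(u) + r) % 2 == 0)

m = 6
assert n_uv(m, {1, 2}, {3, 4, 5, 6}) == 1        # k = 0, |U| even: N = 1
assert n_uv(m, {1}, {2, 3, 4, 5, 6}) == 0        # k = 0, |U| odd:  N = 0
assert n_uv(m, {1, 2}, {3, 4, 5}) == 1           # k = 1: N = 2^0
assert n_uv(m, {1, 2}, {3, 4}) == 2 ** (2 - 1)   # k = 2: N = 2^1
assert n_uv(m, {1, 2}, {3}) == 2 ** (3 - 1)      # k = 3: N = 2^2
```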
\smallbreak
Now, we derive a central set of constraints for the image of the restriction map $\ms{Inv}(W(D_n), M_\ast) \to \ms{Inv}(P, M_\ast)$. For $d \le n$ and $i \le [d/2]$ put $$
\phi^d_i:= \sum_{\substack{(A, B, C) \in \Lambda^d\\|C| = i}}x_{A, B, C} \in \ms{Inv}^d(P, \Kt^{\mathsf M}_\ast) $$ and $
\psi_1:= \sum_{\substack{(A, B, \varnothing) \in \Lambda^m\\ |A| \text{ even}}}x_{A, B, \varnothing}. $
\begin{lemma}
\label{pactLem2} The image of the restriction map $\ms{Inv}(W(D_n), M_\ast) \to \ms{Inv}(P, M_\ast)$ is contained in the free $M_\ast(k_0)$-module with basis $$ S = \{\phi^d_i:\, d \le n, \; \max(0, d - m) \le i \le [d/2]\}\cup R, $$ where $R = \varnothing$, if $n$ is odd and $R = \{\psi_1\}$, if $n$ is even. \end{lemma}
\begin{proof}
Arguing as in the $B_n$-section shows that all elements of $S$ are non-zero. Furthermore, both $s_{e_{2i - 1} - e_{2j-1}} s_{e_{2i} - e_{2j}}$ and $s_{e_{2i - 1}} s_{e_{2j-1}}$ normalize $P$.
Let us denote by $N_1, N_2\subseteq N(P)$ the subgroups generated by the first, respectively second kind of elements and let us denote by $N$ the subgroup generated by $N_1$ and $N_2$. At the torsor level, conjugation by the first kind of elements swaps $\alpha_i \leftrightarrow \alpha_j$ and $\beta_i \leftrightarrow \beta_j$. Thus for $(A, B, C) \in \Lambda^d$, the invariant $x_{A, B, C}$ maps to $x_{A', B', C'}$, where $A' = (i, j)A$, $B' = (i, j)B$ and $C' = (i, j)C$. On the other hand, conjugation by the second kind of elements swaps $\alpha_i \leftrightarrow \beta_i$ and $\alpha_j \leftrightarrow \beta_j$. Thus, it maps $x_{A, B, C}$ to $x_{A', B', C}$, where $A' = (A - \{i, j\})\cup (B \cap \{i, j\})$ and $B' = (B - \{i, j\})\cup (A \cap\{i, j\})$. That is, if $i \in A$, we remove it from $A$ and put it into $B$ and vice versa; then we do the same for $j$. Thus, $N$ acts on $\ms{Inv} (P, \Kt^{\mathsf M}_\ast)$ by permuting the $x_{A, B, C}$ and hence we can apply Corollary \ref{orbitSum}.
In the next step, we determine the orbit of $x_{A_0, B_0, C_0}$ under $N$ for an arbitrary $(A_0, B_0, C_0) \in \Lambda^d$. First, suppose that $n$ is odd or that $C_0 \ne \varnothing$ or that ($n = 2m$ is even and $d < m$). Then, we claim that the orbit of $x_{A_0, B_0, C_0}$ under $N_2$ is given by $\{x_{A, B, C_0}:\,(A, B, C_0) \in \Lambda^d, \;A\cup B = A_0\cup B_0\}$. It suffices to show that for any $a \in A_0$, there exists an element of $N_2$ mapping $x_{A_0, B_0, C_0}$ to $x_{A_0-\{a\}, B_0\cup\{a\}, C_0}$. As soon as this is proven, one observes that the symmetric statement with $b \in B_0$ also holds; iterating these operations, we indeed get the claimed orbit. For $n$ odd, $s_{e_{2a-1}} s_{e_n}$ maps $x_{A_0, B_0, C_0}$ to $x_{A_0-\{a\}, B_0\cup\{a\}, C_0}$. If $C_0 \ne \varnothing$ choose $c \in C_0$; then $s_{e_{2a-1}} s_{e_{2c-1}}$ maps $x_{A_0, B_0, C_0}$ to $x_{A_0-\{a\}, B_0\cup\{a\}, C_0}$. Finally, if $n = 2m$ is even and $d < m$, then there exists $i \in [1;m]$ such that $i\not \in A_0\cup B_0\cup C_0$ and the element $s_{e_{2a-1}} s_{e_{2i - 1}}$ does the trick. Thus, the orbit of $x_{A_0, B_0, C_0}$ under $N_2$ equals $\{x_{A, B, C_0}:\,(A, B, C_0) \in \Lambda^d, \;A\cup B = A_0\cup B_0\}$. Similarly, for any $(A_1, B_1, C_1) \in \Lambda^d$ the orbit of $x_{A_1, B_1, C_1}$ under $N_1$ equals $\{x_{A, B, C}:\,(A, B, C) \in \Lambda^d, \;|A| = |A_1|, \;|B| = |B_1|, \;|C| = |C_1|\}$. Combining these results, the orbit of $x_{A_0, B_0, C_0}$ under $N$ is given by $\{x_{A, B, C}:\,(A, B, C) \in \Lambda^d, \;|C| = |C_0|\}$.
\smallbreak
Finally, let $C_0 = \varnothing$, $n = 2m$ be even and $d = m$. Then, the orbit of $x_{A_0, B_0, \varnothing}$ under $N_2$ equals $\{x_{A, B, \varnothing}:\,(A, B, \varnothing) \in \Lambda^d, \;A\cup B = A_0\cup B_0, |B| - |B_0|\;\text{is even}\}$. Using that for any $(A_1, B_1, C_1) \in \Lambda^d$ the orbit of $x_{A_1, B_1, C_1}$ under $N_1$ is given by $\{x_{A, B, C}:\, (A, B, C) \in \Lambda^d, \;|A| = |A_1|, \;|B| = |B_1|, \;|C| = |C_1|\}$, we see that the orbit of $x_{A_0, B_0, \varnothing}$ under $N$ is $\{x_{A, B, \varnothing}:\, (A, B, \varnothing) \in \Lambda^d, \;|B| -|B_0|\;\text{is even}\}$.
\smallbreak
Hence, applying Corollary \ref{orbitSum} concludes the proof. \end{proof}
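For small $m$, the orbit computations in the proof can be replicated by brute force. The sketch below (illustrative only) realizes the generators of $N_1$ as simultaneous index permutations of $A, B, C$ and those of $N_2$ as $A/B$-membership swaps of a pair $\{i, j\}$, and checks, for $m = d = 3$ (so $n = 6$ even), that $|C|$ is an orbit invariant and that the triples with $C = \varnothing$ split into exactly two orbits according to the parity of $|B|$.

```python
from itertools import combinations

m, d = 3, 3
idx = list(range(1, m + 1))

def lam(d):
    """All triples (A, B, C) of pairwise disjoint subsets of [1, m]
    with |A| + |B| + 2|C| = d."""
    out = []
    for a in range(d + 1):
        for b in range(d + 1 - a):
            if (d - a - b) % 2:
                continue
            c = (d - a - b) // 2
            for A in combinations(idx, a):
                r1 = [i for i in idx if i not in A]
                for B in combinations(r1, b):
                    r2 = [i for i in r1 if i not in B]
                    for C in combinations(r2, c):
                        out.append((frozenset(A), frozenset(B), frozenset(C)))
    return out

def n1(i, j, t):   # transposition (i j) applied to A, B and C
    sw = lambda S: frozenset(i if x == j else j if x == i else x for x in S)
    return tuple(sw(S) for S in t)

def n2(i, j, t):   # swap the A/B-membership of i and of j; C untouched
    A, B, C = t
    p = {i, j}
    return (frozenset((A - p) | (B & p)), frozenset((B - p) | (A & p)), C)

orbits, seen = [], set()
for t in lam(d):
    if t in seen:
        continue
    orb, stack = {t}, [t]
    while stack:
        s = stack.pop()
        for i, j in combinations(idx, 2):
            for g in (n1, n2):
                u = g(i, j, s)
                if u not in orb:
                    orb.add(u)
                    stack.append(u)
    seen |= orb
    orbits.append(orb)

assert sorted(len(o) for o in orbits) == [4, 4, 12]
assert all(len({len(C) for (_, _, C) in o}) == 1 for o in orbits)  # |C| invariant
for o in orbits:
    if all(not C for (_, _, C) in o):                    # C = empty and d = m:
        assert len({len(B) % 2 for (_, B, _) in o}) == 1  # |B| mod 2 invariant
```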
In particular, as Lemma \ref{bnLemma1} gives that $\mathsf{res}^P_{W(D_n)}(u_{d - 2i}v_{2i}) = \phi^d_i$ and as $\mathsf{res}^P_{W(D_n)}(e_m) = \psi_1$, we obtain the following result. \begin{corollary}
\label{dnCor} $\ms{Inv} (W(D_n), M_\ast)$ is completely decomposable with basis $$ \{u_{d - 2i}v_{2i}:\, d \le n, \max(0, d - m) \le i \le [d/2]\}\cup R, $$ where $R = \varnothing$ for odd $n$ and $R = \{e_m\}$ for even $n$. \end{corollary} \begin{remark}
A relation between $W(B_n)$ and $W(D_n)$ explains why in Corollary \ref{dnCor} we only see $v_d$ with even $d$. Indeed, the kernel of the determinant of the $2n$-dimensional representation of $W(B_n)$ contains $W(D_n)$. Since for odd $d$ all the $W(B_n)$-invariants $v_d$ are divisible by $v_1$, and since $v_1$ vanishes, we deduce that they all reduce to $0$ on $W(D_n)$. \end{remark}
\goodbreak \section{Weyl groups of type $E_6$, $E_7$, and $E_8$} \label{E6-8SubSect}
\subsection{Type $E_6$} Up to conjugacy, $P:= P(a_1, b_1, a_2, b_2)$ is the unique maximal elementary abelian subgroup generated by reflections in $W(E_6)$. Since the injection $\ms{Inv} (W(E_6), M_\ast) \to \ms{Inv} (P, M_\ast)$ factors through $\ms{Inv} (W(D_5), M_\ast)$, the map $\ms{Inv} (W(E_6), M_\ast) \to \ms{Inv} (W(D_5), M_\ast)$ is injective. Recall that a basis of $\ms{Inv} (W(D_5), M_\ast)$ is given by $\{1, u_1, u_2, v_2, v_2u_1, v_4\}$.
\smallbreak
So let $a \in \ms{Inv} (P, M_\ast)$ be an invariant which comes from a $W(E_6)$-invariant. Since the inclusion $P\subseteq W(E_6)$ factors through $W(D_5)\subseteq W(E_6)$, $a$ decomposes uniquely as $$ a = \sum_{\substack{ d \le 4\\d \ne 2}}\sum_{(A, B, C) \in \Lambda^d}x_{A, B, C}m_d + \sum_{(A, B, \varnothing) \in \Lambda^2}x_{A, B, \varnothing}m_2 + \sum_{(\varnothing, \varnothing, C) \in \Lambda^2}x_{\varnothing, \varnothing, C}m_2' $$ for certain $m_d \in M_{\ast -d}(k_0)$, $m_2, m_2' \in M_{\ast -2}(k_0)$. Now, the element $$ g:= s_{\frac12(e_1 - e_2 - e_3 - e_4 - e_5 - e_6 - e_7 + e_8)} s_{\frac12( - e_1 + e_2 + e_3 + e_4 - e_5 - e_6 - e_7 + e_8)} \in W(E_6) $$ lies in the normalizer of $P$, since \begin{align*} g s_{a_1} g^{-1} = s_{b_2}, \qquad g s_{b_1} g^{-1} = s_{b_1}, \qquad g s_{a_2} g^{-1} = s_{a_2}, \qquad g s_{b_2} g^{-1} = s_{a_1}. \end{align*} The induced action of $g$ on a $P$-torsor $(\alpha_1, \alpha_2, \beta_1, \beta_2)$ is thus given by swapping $\alpha_1 \leftrightarrow \beta_2$, while leaving $\alpha_2, \beta_1$ fixed. Therefore, applying $g$ to the invariant $a$ yields \begin{align*} \sum_{\substack{ d \le 4\\d \ne 2}} \sum_{(A, B, C) \in \Lambda^d}x_{A, B, C}m_d
+ \sum_{i,j \in \{1, 2\}}x_{\{a_i, b_j\}} m_2 + (x_{\{a_1, a_2\}} + x_{\{b_1, b_2\}})m_2'. \end{align*} Since $a$ comes from an invariant of $W(E_6)$, it stays invariant under $g$ and comparing coefficients, we conclude that the image of the restriction $\ms{Inv} (W(E_6), M_\ast) \to \ms{Inv} (W(D_5), M_\ast)$ lies in the free submodule with basis $$\{1, u_1, u_2 + v_2, v_2u_1, v_4\}.$$ The embedding of $W(E_6)$ in $O_8$ as orthogonal reflection group gives rise to the invariants $\mathsf{res}^{W(E_6)}_{O_8}(\widetilde{w_d}) \in \ms{Inv}^d(O_8, \Kt^{\mathsf M}_\ast)$, which we again denote by $\widetilde{w_d}$. For any $k \in \mathcal F _{k_0}$ and $(\alpha_1, \beta_1, \alpha_2, \beta_2) \in (k^\times/k^{\times 2})^4$, the map $P \to W(E_6)\subseteq O_8$ induces the quadratic form $$ \lan2\alpha_1, 2\beta_1, 2\alpha_2, 2\beta_2, 1, 1, 1, 1\rangle. $$ Thus, the total modified Stiefel-Whitney class evaluated at this torsor equals $$ (1 + \{\alpha_1\})(1 + \{\alpha_2\})(1 + \{\beta_1\})(1 + \{\beta_2\}). $$ Now, \begin{align*} \mathsf{res}^P_{W(D_5)}(u_1)& = \mathsf{res}^P_{W(E_6)}(\widetilde{w_1}),
&\mathsf{res}^P_{W(D_5)}(u_2 + v_2) &= \mathsf{res}^P_{W(E_6)}(\widetilde{w_2}),\\
\mathsf{res}^P_{W(D_5)}(v_2u_1)& = \mathsf{res}^P_{W(E_6)}(\widetilde{w_3}),
&\mathsf{res}^P_{W(D_5)}(v_4) &= \mathsf{res}^P_{W(E_6)}(\widetilde{w_4}). \end{align*} Hence, $\{\widetilde{w_d}\}_{d \le 4}$ form a basis of $\ms{Inv}(W(E_6), M_\ast)$ as $M_\ast(k_0)$-module.
\medbreak
\subsection{Type $E_7$} Up to conjugacy, $P:= P(a_1, b_1, a_2, b_2, a_3, b_3, a_4)$ is the unique maximal elementary abelian subgroup generated by reflections in $W(E_7)$. Looking at the root systems, we see that there is an inclusion $W(D_6)\times \langle s_{a_4}\rangle\subseteq W(E_7)$. Invoking the same factorization argument as before, the restriction map $$ \ms{Inv} (W(E_7), M_\ast) \to \ms{Inv} (W(D_6)\times\langle s_{a_4}\rangle, M_\ast ) $$ is injective. We first recall that $\ms{Inv} (W(D_6)\times \langle s_{a_4}\rangle, M_\ast)$ is a free $M_\ast(k_0)$-module with basis \begin{enumerate} \item[(0)] 1
\smallbreak
\item $u_1, x_{\{a_4\}}$
\smallbreak
\item $u_2, v_2, u_1x_{\{a_4\}}$
\smallbreak
\item $(u_3 - e_3), e_3, u_1v_2, u_2x_{\{a_4\}}, v_2x_{\{a_4\}}$
\smallbreak
\item $u_2v_2, v_4, (u_3 - e_3)x_{\{a_4\}}, e_3x_{\{a_4\}}, u_1v_2x_{\{a_4\}}$
\smallbreak
\item $v_4u_1, u_2v_2x_{\{a_4\}}, v_4x_{\{a_4\}}$
\smallbreak
\item $v_6, v_4u_1x_{\{a_4\}}$
\smallbreak
\item $v_6x_{\{a_4\}}.$ \end{enumerate} Defining $g:= s_{\frac12(e_1 - e_2 - e_3 - e_4 - e_5 - e_6 - e_7 + e_8)} s_{\frac12( - e_1 + e_2 + e_3 + e_4 - e_5 - e_6 - e_7 + e_8)} \in W(E_7)$ as in the $E_6$-case yields that \begin{align*} g s_{a_1} g^{-1}& = s_{b_2},
&g s_{b_1} g^{-1}& = s_{b_1},
&g s_{a_2} g^{-1}& = s_{a_2},
&g s_{b_2} g^{-1}& = s_{a_1},\\
g s_{a_3} g^{-1} &= s_{a_3},
&g s_{b_3} g^{-1}& = s_{a_4},
&g s_{a_4} g^{-1}& = s_{b_3}. \end{align*} The action of $g$ on a $P$-torsor $(\alpha_1, \beta_1, \dots, \alpha_3, \beta_3, \alpha_4) \in (k^\times/k^{\times 2})^7$ is thus given by swapping $\alpha_1 \leftrightarrow \beta_2$, $\beta_3 \leftrightarrow\alpha_4$ while leaving $\beta_1, \alpha_2, \alpha_3$ fixed. Arguing just as in the $E_6$-case, we see that the image of $\ms{Inv} (W(E_7), M_\ast) \to \ms{Inv} (W(D_6)\times\langle s_{a_4}\rangle, M_\ast)$ lies in the free $M_\ast(k_0)$-module with basis \begin{enumerate} \item[(0)] 1
\smallbreak
\item $u_1 + x_{\{a_4\}}$
\smallbreak
\item $v_2 + u_2 + u_1x_{\{a_4\}}$
\smallbreak
\item $u_1v_2 + (u_3 - e_3) + u_2x_{\{a_4\}}, e_3 + v_2x_{\{a_4\}}$
\smallbreak
\item $v_4 + (u_3 - e_3)x_{\{a_4\}}, u_2v_2 + u_1v_2x_{\{a_4\}} + e_3x_{\{a_4\}}$
\smallbreak
\item $v_4x_{\{a_4\}} + u_2v_2x_{\{a_4\}} + v_4u_1$
\smallbreak
\item $v_4u_1x_{\{a_4\}} + v_6$
\smallbreak
\item $v_6x_{\{a_4\}}$. \end{enumerate} Now, we provide specific $W(E_7)$-invariants. First, the embedding $W(E_7)\subseteq O_8$ gives us invariants $\mathsf{res}^{W(E_7)}_{O_8}(\widetilde{w_d}) \in \ms{Inv}^d(W(E_7), \Kt^{\mathsf M}_\ast)$, which we again denote by $\widetilde{w_d}$. Then, \begin{align*} \mathsf{res}^P_{W(E_7)}(\widetilde{w_1})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(u_1 + x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_2})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(u_2 + v_2 + u_1x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_3})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(u_3 + u_1v_2 + u_2x_{\{a_4\}} + v_2x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_4})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(u_2v_2 + v_4 + u_3x_{\{a_4\}} + u_1v_2x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_5})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(v_4u_1 + v_4x_{\{a_4\}} + u_2v_2x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_6})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(v_6 + v_4u_1x_{\{a_4\}})\\ \mathsf{res}^P_{W(E_7)}(\widetilde{w_7})& = \mathsf{res}^P_{W(D_6)\times\langle s_{{a_4}}\rangle}(v_6x_{\{a_4\}}). \end{align*} So we still lack invariants in degree $3$ and $4$. To construct the missing invariant in degree $3$, we mimic the construction of the invariant ${e_m}$ in the $D_n$-section. Let $U\cong S_6\times \langle s_{a_4}\rangle \subseteq W(E_7)$ be the subgroup generated by the reflections at $$ \{e_1 + e_2, e_2 - e_3, e_3 - e_4, e_4 - e_5, e_5 - e_6, e_7 - e_8\}. $$
Then, $|U\backslash W(E_7)| = 2016$ and we obtain a map $W(E_7) \to S_{2016} \to O_{2016}$. To be more precise, there is a right action of $W(E_7)$ on the right cosets $U\backslash W(E_7)$ given by right multiplication. This induces an anti-homomorphism $W(E_7) \to S_{2016}$ and precomposing this map with $g\mapsto g^{-1}$, we obtain the desired homomorphism. We need the following lemma which tells us that we are in a situation which is quite similar to the $D_n$-case:
\begin{lemma} \label{e7GapLem} Let $k \in \mathcal F_{k_0}$ and $y \in H^1(k, P)$ be a $P$-torsor. Let $q_y$ be the quadratic form induced by $y$ under the composition $P \to W(E_7) \to S_{2016} \to O_{2016}$. Then, the image of $q_y$ in $W(k)$ is contained in $I^3(k)$. \end{lemma}
\begin{proof} This can be checked by a computational algebra system; see the appendix. \end{proof}
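The coset count $|U\backslash W(E_7)| = 2016$ is a plain index computation. As a sanity check (illustrative; it uses the classical order $|W(E_7)| = 2903040$, which is not quoted in the text above, and $|U| = |S_6\times\langle s_{a_4}\rangle| = 6!\cdot 2$):

```python
from math import factorial

# |W(E_7)| = 2903040 (classical fact); U is isomorphic to S_6 x <s_{a_4}>.
order_we7 = 2903040
order_u = factorial(6) * 2                 # = 1440
assert order_we7 % order_u == 0
assert order_we7 // order_u == 2016        # = |U \ W(E_7)|
```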
\medbreak
\noindent We now argue similarly to the $D_n$-case. In concrete terms, if $y$ is a $W(E_7)$-torsor, and $q_y$ is the quadratic form induced by $y$ under the composition $W(E_7) \to S_{2016} \to O_{2016}$, then the image of $q_y$ in $W(k)$ is contained in $I^3(k)$ and we define the invariant \begin{align}
\label{f3pEq}
f_3'(y):= e_3(\langle 2^3\rangle\otimes q_y). \end{align} In the $D_n$-case, namely in Lemma \ref{dnEmLem}, we could compute the restriction of the invariant $e_m$ to the maximal elementary abelian $2$-subgroup explicitly. In principle, this would also be possible in the present setting. However, the computations would be substantially more involved. Therefore, we provide a more conceptual argument. To that end, we recall from Section \ref{DnSubSect} that if $g \in W(E_7)$ is contained in the normalizer $N_{W(E_7)}(P)$ of $P$ in $W(E_7)$, then $g$ acts both on the invariants $\{x_{A, B, C}\}_{(A, B, C)\in \Lambda^d} \in \ms{Inv}^d (P, M_\ast)$ and on the indexing set $\Lambda^d$. \begin{lemma}
\label{orbitLem}
Let $d \le 7$ and $g \in N_{W(E_7)}(P)$. Also, let $a \in \ms{Inv}^d(W(E_7), \Kt^{\mathsf M}_\ast)$ be an invariant and represent its restriction to $\ms{Inv}^d(P, \Kt^{\mathsf M}_\ast)$ as
\begin{align}
\label{orbitEq}
\mathsf{res}^P_{W(E_7)}(a) = \sum_{\ell \le d} \sum_{I \in \Lambda^{\ell}}m_Ix_I,
\end{align}
for certain coefficients $m_I \in \Kt^{\mathsf M}_{d - |I|}(k_0)$. Then, $m_I = m_{g(I)}$ for all $\ell \le d$ and $I \in \Lambda^\ell$. \end{lemma} \begin{proof}
First, since the restriction is invariant under the action of $g$,
\begin{align}
\label{orbitEq2}
\sum_{\ell \le d} \sum_{I \in \Lambda^{\ell}}(m_I - m_{g(I)})x_I = 0.
\end{align}
Now, suppose that the assertion of the lemma fails, and choose a counterexample $I^* \in \Lambda^{\ell^*}$ with maximal $\ell^*$. Then, we first evaluate both sides of \eqref{orbitEq} at the function field $E = k_0(A_1, B_1, \dots, A_3, B_3, A_4)$ in the indeterminates $A_1, B_1, \dots, A_3, B_3, A_4$ corresponding to the roots in $P$, and then apply the Milnor residue maps corresponding to the indeterminates associated with the index set $I^*$. Since $\ell^*$ was chosen to be maximal, the identity \eqref{orbitEq2} reduces to $m_{I^*} - m_{g(I^*)} = 0$, which concludes the proof. \end{proof} In words, just as in Corollary \ref{orbitSum}: when the restriction of an invariant is represented as in \eqref{orbitEq}, basis elements in the same orbit share the same coefficient.
In particular, we have seen above that in degrees $1$ and $2$ all basis elements are in a single orbit and are therefore the restriction of the corresponding modified Stiefel-Whitney classes. Thus, applying Lemma \ref{orbitLem} with $a = f_3'$, there exist $m_\ell \in \Kt^{\mathsf M}_{3 - \ell}(k_0)$, $\ell \in \{0, 1, 2\}$ and $m_{A, B, C} \in \mathbb Z/2$, $(A, B, C) \in \Lambda^3$ such that $$\mathsf{res}^P_{W(E_7)}(f_3') = \sum_{(A, B, C) \in \Lambda^3}m_{A, B, C}x_{A, B, C} + \sum_{\ell \le 2}m_\ell \mathsf{res}^P_{W(E_7)}(\widetilde{w_\ell}).$$ Then, proceeding as in the definition of $e_m$ in Section \ref{DnSubSect}, we define an invariant $f_3 \in \ms{Inv}^3(W(E_7), \Kt^{\mathsf M}_\ast)$ by stripping off the mixed terms from $f_3'$. That is, $$f_3 := f_3' - \sum_{\ell \le 2}m_\ell \widetilde{w_\ell}.$$ In the appendix, we expound on how a computational algebra system shows that \begin{align}
\label{e7BarEq}
\mathsf{res}^P_{W(E_7)}(f_3)= \mathsf{res}^P_{W(D_6)\times\langle s_{a_4}\rangle}(u_1v_2 + u_3 - e_3 + u_2x_{\{a_4\}}). \end{align} Finally, we can proceed in a similar fashion in order to remove the mixed terms from the product expression \begin{align*} &(u_1 + x_{\{a_4\}})(u_1v_2 + (u_3 - e_3) + u_2x_{\{a_4\}}). \end{align*} Thus, $\ms{Inv} (W(E_7), M_\ast)$ is completely decomposable with basis $\{\widetilde{w_d}\}_{d \le 7}\cup\{f_3, f_3 \widetilde{w_1}\}$.
\medbreak
\subsection{Type $E_8$} Up to conjugacy, $P:= P(a_1, b_1, a_2, b_2, a_3, b_3, a_4, b_4)$ is the unique maximal elementary abelian subgroup generated by reflections in $W(E_8)$. By the same arguments as in the $E_6/E_7$-case, we obtain that the restriction map $\ms{Inv} (W(E_8), M_\ast) \to \ms{Inv} (W(D_8), M_\ast)$ is injective. We first recall that $\ms{Inv} (W(D_8), M_\ast)$ is a free $M_\ast(k_0)$-module with the basis $$\{1, u_1, u_2, v_2, u_3, v_2u_1,e_4, v_4, (u_4 - e_4), v_2u_2, v_2u_3, v_4u_1, v_4u_2, v_6, v_6u_1, v_8\}.$$ Again, we define $g \in W(E_8)$ as in the $E_6$ or $E_7$-case and check that it normalizes $P$: \begin{align*}
g s_{a_1} g^{-1}& = s_{b_2},&
g s_{b_1} g^{-1} = s_{b_1}, \qquad &
g s_{a_2} g^{-1} = s_{a_2},& g s_{b_2} g^{-1} = s_{a_1},\\
g s_{a_3} g^{-1}& = s_{a_3},&
g s_{b_3} g^{-1} = s_{a_4}, \qquad &
g s_{a_4} g^{-1} = s_{b_3}, & g s_{b_4} g^{-1} = s_{b_4}. \end{align*} The action of $g$ on a $P$-torsor $(\alpha_1, \beta_1, \alpha_2, \beta_2, \alpha_3, \beta_3, \alpha_4, \beta_4)$ is thus given by swapping $\alpha_1 \leftrightarrow \beta_2$, $\beta_3 \leftrightarrow\alpha_4$ while leaving $\beta_1, \alpha_2, \alpha_3, \beta_4$ fixed. Again, applying the same kind of arguments as in the $E_6$-case, we see that the image of the restriction map $\ms{Inv} (W(E_8), M_\ast) \to \ms{Inv} (W(D_8), M_\ast)$ is contained in the free submodule with basis $$\{1, u_1, u_2 + v_2, u_3+ v_2u_1,e_4 + v_4, (u_4 - e_4) + v_2u_2, v_2u_3 + v_4u_1, v_4u_2 + v_6, v_6u_1, v_8\}.$$ We need to construct $W(E_8)$-invariants mapping to these basis elements. On the one hand, the inclusion $W(E_8)\subseteq O_8$ gives modified Stiefel-Whitney classes $\widetilde{w_d} \in \ms{Inv}^d(W(E_8), \Kt^{\mathsf M}_\ast)$. Again, \begin{alignat*}3
\ms{res}^P_{W(E_8)}(\widetilde{w_1})& = \ms{res}^P_{W(D_8)}(u_1),&&
\hspace{-1.2cm}\ms{res}^P_{W(E_8)}(\widetilde{w_5}) = \ms{res}^P_{W(D_8)}(v_2u_3 + v_4u_1),\\
\ms{res}^P_{W(E_8)}(\widetilde{w_2}) &= \ms{res}^P_{W(D_8)}(u_2 + v_2),&&
\hspace{-.8cm}\ms{res}^P_{W(E_8)}(\widetilde{w_6}) = \ms{res}^P_{W(D_8)}(v_4u_2 + v_6),\\
\ms{res}^P_{W(E_8)}(\widetilde{w_3}) &= \ms{res}^P_{W(D_8)}(u_3 + u_1v_2) ,&&
\ms{res}^P_{W(E_8)}(\widetilde{w_7}) = \ms{res}^P_{W(D_8)}(v_6u_1),\\
\ms{res}^P_{W(E_8)}(\widetilde{w_4}) &= \ms{res}^P_{W(D_8)}(u_4 + u_2v_2 + v_4),\quad&&
\hspace{.4cm}\ms{res}^P_{W(E_8)}(\widetilde{w_8}) = \ms{res}^P_{W(D_8)}(v_8). \end{alignat*} The situation is very similar to the $E_7$-case except that now, we miss a basis invariant in degree $4$. Let $U\subseteq W(E_8)$ be the subgroup generated by the reflections at $$ \{e_1 + e_2, e_2 - e_3, e_3 - e_4, e_4 - e_5, e_5 - e_6, e_6 - e_7, e_7 - e_8\}. $$
By observing that $U\cong S_8$ or by using a computational algebra system, we conclude that $|U\backslash W(E_8)| = 17280$. As in the $E_7$-case, we obtain a map $W(E_8) \to S_{17280} \to O_{17280}$. Again, we need the following lemma. \begin{lemma} \label{e8GapLem} Let $k \in \mathcal F_{k_0}$ and $y \in H^1(k, P)$ be a $P$-torsor. Let $q_y$ be the quadratic form induced by $y$ under the composition $P \to W(E_8) \to S_{17280} \to O_{17280}$. Then, the image of $q_y$ in $W(k)$ is contained in $I^4(k)$. \end{lemma}
\begin{proof} Again, this can be checked by a computational algebra system; see the appendix. \end{proof}
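As in the $E_7$-case, the coset count $|U\backslash W(E_8)| = 17280$ can be sanity-checked from the group orders alone (illustrative; it uses the classical order $|W(E_8)| = 696729600$, which is not quoted in the text above, and $|U| = |S_8| = 8!$):

```python
from math import factorial

# |W(E_8)| = 696729600 (classical fact); U is isomorphic to S_8.
order_we8 = 696729600
assert order_we8 % factorial(8) == 0
assert order_we8 // factorial(8) == 17280  # = |U \ W(E_8)|
```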
\smallbreak
\noindent As in the $D_n$-case, we obtain from this an invariant $f_4 \in \ms{Inv}^4(W(E_8), \Kt^{\mathsf M}_\ast)$. More precisely, if $y$ is a $W(E_8)$-torsor and $q_y$ is the quadratic form induced by $y$ under the composition $W(E_8) \to S_{17280} \to O_{17280}$, then the image of $q_y$ in $W(k)$ is contained in $I^4(k)$ and we define $f_4'(y):= e_4(q_y)$. We then proceed as in the $E_7$-case and set $$f_4 := f_4' - \sum_{\ell \le 3}m_\ell \widetilde{w_\ell}$$ for suitable $m_\ell \in \Kt^{\mathsf M}_{4 - \ell}(k_0)$ in order to strip off the mixed contributions from $f_4'$.
\smallbreak
The restriction of $f_4$ to $P$ is determined through a computational algebra system; see the appendix. The result is $ \mathsf{res}^P_{W(D_8)}(v_2u_2 + (u_4 - e_4)). $ Thus, we conclude that $\ms{Inv} (W(E_8), M_\ast)$ is completely decomposable with basis $ \{f_4\}\cup\{\widetilde{w_d}\}_{d \le 8}. $
\medbreak
\section{Appendix A -- Excerpts from a letter by J.-P.~Serre} \label{serreSec}
[...] Hence, the only technical point which remains is the ``splitting principle'': if the restrictions of an invariant to every cube is 0, the invariant is 0. In your text with Gille, you prove that result under the restrictive condition that the characteristic $p$ does not divide the order $|G|$ of the group $G$. The proof you give (which is basically the same as in my UCLA lectures) is based on the fact that the polynomial invariants of $G$ (in its natural representation) make up a polynomial algebra; in geometric language, the quotient ${\rm Aff}^n/G$ is isomorphic to ${\rm Aff}^n$. This is OK when $p$ does not divide $|G|$, but it is also true in many other cases. For instance, it is true for all $p$ $(\ne 2)$ for the classical types (provided, for type $A_n$, that we choose for lattice the natural lattice for $GL_{n+1}$, namely $\mathbb Z^{n+1}$). For types $G_2, F_4, E_6, E_7$, it is true if $p > 3$ and for $E_8$ it is true for $p > 5$: this is not easy to prove, but it has been known to topologists since the 1950's (because the question is related to the determination of the mod $p$ cohomology of the corresponding compact Lie groups). When I started working on these questions, I found natural to have to exclude, for instance, the characteristics 3 and 5 for $E_8$. It is only a few years ago that I realized that even these small restrictions are unnecessary: the splitting principle holds for every $p > 2$.
I have sketched the proof in my Oberwolfach report: take for instance the case of $E_8$; the group $G = W(E_8)$ contains $W(D_8)$ as a subgroup of odd index, namely 135; moreover, the reflections of $W(D_8)$ are also reflections of $W(E_8)$; hence every cube of $W(D_8)$ is a cube of $W(E_8)$; if a cohomological invariant of $W(E_8)$ gives 0 over every cube, its restriction to $W(D_8)$ has the same property, hence is 0 because $D_8$ is a classical type; since the index of $W(D_8)$ is odd, then this invariant is 0. It is remarkable that a similar proof works in every other case. [...]
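The odd-index claim in the letter can be verified arithmetically; the group orders $|W(E_8)| = 696729600$ and $|W(D_8)| = 2^7 \cdot 8!$ are standard. A minimal Python check:

```python
from math import factorial

# Standard Weyl group orders: |W(E8)| = 696729600, |W(D8)| = 2^(8-1) * 8!
order_WE8 = 696729600
order_WD8 = 2 ** 7 * factorial(8)

# The index [W(E8) : W(D8)] is 135, which is odd, as used in the argument.
assert order_WE8 % order_WD8 == 0
index = order_WE8 // order_WD8
assert index == 135 and index % 2 == 1
```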
\section{Appendix B -- Computations for $E_7$ and $E_8$} \label{apdxSec}
\noindent For the computations involving $E_7$ and $E_8$, we use the computational algebra system {\tt GAP} and the {\tt GAP}-package {\tt CHEVIE} \cite{CH}. The complete source code used for the proof of Lemmas \ref{e7GapLem} and \ref{e8GapLem} together with detailed instructions on how to reproduce the results are provided on the author's GitHub page:\, {\tt https://github.com/Christian-Hirsch/orbit-e78}.
\subsection{Computations concerning $W(E_7)$} \label{e7Apdx}
The proof of Lemma \ref{e7GapLem} requires detailed information on the action of $P$ on $U\backslash W(E_7)$. We analyze this action via the procedure {\tt fullCheck(7, U, P)}.
First, {\tt fullCheck(7, U, P)} computes the action of $P$ on $U\backslash W(E_7)$ and also its orbits $\mc O_1, \dots, \mc O_r$. Then, for each orbit $\mc O_k$, it determines a subset $A_k \subseteq \{a_1, b_1, a_2, b_2, a_3, b_3, a_4\}$, such that $P(\{a_1, b_1, a_2, b_2, a_3, b_3, a_4\}-A_k)$ acts trivially on $\mc O_k$ and such that $P(A_k)$ acts simply transitively on $\mc O_k$. A priori, there is no reason that such a subset should exist; however -- as checked by the program -- it exists in the case we are considering. The return value of the procedure {\tt fullCheck} is an array whose $k$th entry is the set $A_k$. Inspecting the return value reveals that each $A_k$ consists of at least 3 elements and that the subsets consisting of 3 elements have the desired form.
More precisely, to call {\tt fullCheck(7, U, P)}, we need to determine the indices of the roots generating $U$ and $P$. In the following, the roots are expressed as linear combinations of the simple system of roots given by $ v_1 = \tfrac12(e_1 - e_2 - e_3 - e_4 - e_5 - e_6 - e_7 + e_8)$, $v_2 = e_1 + e_2$, $v_i = e_{i - 1} - e_{i - 2}$, $3 \le i \le 7$. Additionally, \begin{align*} b_2& = v_2 + v_3 + 2v_4 + v_5\\ b_3& = v_2 + v_3 + 2v_4 + 2v_5 + 2v_6 + v_7\\ -a_4& = 2v_1 + 2v_2 + 3v_3 + 4v_4 + 3v_5 + 2v_6 + v_7 \end{align*} We claim that $U$ and $P$ are represented by the indices $[2, 4, 5, 6, 7, 63]$ and $[3, 2, 5, 28, 7, 49, 63]$, respectively. This can be checked by printing the basis representation of the $E_7$ roots:
\noindent gap$>$ p := [ 3, 2, 5, 28, 7, 49, 63 ]; \newline gap$>$ for u in p do Print(CoxeterGroup("E", 7).roots[u]);Print("$\backslash$ n");od; \newline [ 0, 0, 1, 0, 0, 0, 0 ] \newline [ 0, 1, 0, 0, 0, 0, 0 ] \newline [ 0, 0, 0, 0, 1, 0, 0 ] \newline [ 0, 1, 1, 2, 1, 0, 0 ] \newline [ 0, 0, 0, 0, 0, 0, 1 ] \newline [ 0, 1, 1, 2, 2, 2, 1 ] \newline [ 2, 2, 3, 4, 3, 2, 1 ] \newline
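As an independent cross-check of these index assignments (a small Python sketch, not part of the {\tt GAP} session), the stated expansions of $b_2$, $b_3$ and $-a_4$ can be expanded in the basis $v_1, \dots, v_7$ and compared with the vectors printed by {\tt GAP}:

```python
# Expand the linear combinations of simple roots v_1, ..., v_7 given in the
# text and compare with the coefficient vectors printed by GAP above.
def add(*roots):
    return tuple(map(sum, zip(*roots)))

def mul(c, r):
    return tuple(c * x for x in r)

# v[i] is the i-th simple root as a coefficient vector in the basis itself
v = {i: tuple(1 if j == i - 1 else 0 for j in range(7)) for i in range(1, 8)}

b2 = add(v[2], v[3], mul(2, v[4]), v[5])
b3 = add(v[2], v[3], mul(2, v[4]), mul(2, v[5]), mul(2, v[6]), v[7])
minus_a4 = add(mul(2, v[1]), mul(2, v[2]), mul(3, v[3]), mul(4, v[4]),
               mul(3, v[5]), mul(2, v[6]), v[7])

assert b2 == (0, 1, 1, 2, 1, 0, 0)        # 4th vector printed (root no. 28)
assert b3 == (0, 1, 1, 2, 2, 2, 1)        # 6th vector printed (root no. 49)
assert minus_a4 == (2, 2, 3, 4, 3, 2, 1)  # 7th vector printed (root no. 63)
```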
We can now call the {\tt fullCheck}-procedure.
\noindent gap$>$ Aks := fullCheck(7, [2, 4, 5, 6, 7, 63], [3, 2, 5, 28, 7, 49, 63]); \newline
Verifying that all $\{A_k\}_{k \le r}$ consist of at least 3 elements can be achieved via the command
\noindent gap$>$ for Ak in Aks do if Length(Ak)$ < $3 then Print("Fail");fi;od; \newline
To see that those $A_k$ with $|A_k| = 3$ correspond precisely to the elements
\begin{align*}
\{(A, B, C) \in \Lambda_3:\, |C| = 1\} &\cup \{(A, B, \varnothing) \in \Lambda_3:\, |A|\text{ odd}\} \\
&\cup \{(A, B, \varnothing, a_4):\, (A, B, \varnothing) \in \Lambda_2\},
\end{align*} we use the {\tt e7Correct}-procedure. It checks that the $\{A_k\}_{k \le r}$ do not contain elements which are not in the claimed set above. Since there are precisely 28 sets $A_k$ with 3 elements, which matches the cardinality of the above set, this reasoning yields the claimed description.
\noindent gap$>$ Y := Filtered(Aks, Ak-$>$ Length(Ak)$ < $4);
\newline \noindent gap$>$ e7Correct(Y);
\subsection{Computations concerning $W(E_8)$} \label{e8Apdx}
Since the arguments are very similar to the $E_7$-case, we only explain the most important changes. First, we consider the maximal elementary abelian subgroup generated by reflections $P = P(a_1, b_1, a_2, b_2, a_3, b_3, a_4, b_4)$ and the subgroup $$U = \langle s_{e_1 + e_2}, s_{e_2 - e_3}, s_{e_3 - e_4}, s_{e_4 - e_5}, s_{e_5 - e_6}, s_{e_6 - e_7}, s_{e_7 - e_8}\rangle.$$ In addition to the computations provided in Appendix \ref{e7Apdx}, we note that $$b_4 = 2v_1 + 3v_2 + 4v_3 + 6v_4 + 5v_5 + 4v_6 + 3v_7 + 2v_8.$$ Then, $P$ and $U$ are represented by the indices $[3, 2, 5, 32, 7, 61, 97, 120]$ and $[2, 4, 5, 6, 7, 8, 97]$:
\noindent gap$>$ a := [3, 2, 5, 32, 7, 61, 97, 120]; \newline [ 3, 2, 5, 32, 7, 61, 97, 120 ] \newline gap$>$ for u in a do Print(CoxeterGroup("E", 8).roots[u]); Print("$\backslash$ n"); od; \newline [ 0, 0, 1, 0, 0, 0, 0, 0 ] \newline [ 0, 1, 0, 0, 0, 0, 0, 0 ] \newline [ 0, 0, 0, 0, 1, 0, 0, 0 ] \newline [ 0, 1, 1, 2, 1, 0, 0, 0 ] \newline [ 0, 0, 0, 0, 0, 0, 1, 0 ] \newline [ 0, 1, 1, 2, 2, 2, 1, 0 ] \newline [ 2, 2, 3, 4, 3, 2, 1, 0 ] \newline [ 2, 3, 4, 6, 5, 4, 3, 2 ] \newline
To understand the orbit structure, we proceed as in the $E_7$-case:
\noindent gap$>$ Aks := fullCheck(8, [2, 4, 5, 6, 7, 8, 97], [3, 2, 5, 32, 7, 61, 97, 120]); \newline \noindent gap$>$ for Ak in Aks do if Length(Ak)$ < $4 then Print("Fail");fi;od; \newline \noindent gap$>$ Y := Filtered(Aks, Ak-$>$Length(Ak)$ < $5); \newline \noindent gap$>$ e8Correct(Y);
\addresseshere
\end{document}
Antoine Song
Antoine Song (born 18 July 1992 in Paris) is a French[1] mathematician whose research concerns differential geometry. In 2018, he proved Yau's conjecture. He is a Clay Research Fellow (2019–2024).[2] He obtained his Ph.D. from Princeton University in 2019 under the supervision of Fernando Codá Marques.[3]
Existence of minimal surfaces
It is known that any closed surface possesses infinitely many closed geodesics. The first problem in the minimal submanifolds section of Yau's list asks whether any closed three-manifold has infinitely many closed smooth immersed minimal surfaces. At the time, Almgren–Pitts min-max theory was known to guarantee the existence of at least one minimal surface. Kei Irie, Fernando Codá Marques, and André Neves solved this problem in the generic case,[4] and later Antoine Song claimed it in full generality.[5]
Selected publications
• "Existence of infinitely many minimal hypersurfaces in closed manifolds" (2018)
• Joint with Marques and Neves: "Equidistribution of minimal hypersurfaces for generic metrics" (2019), Inventiones mathematicae
References
1. Song's CV
2. "Antoine Song | Clay Mathematics Institute". www.claymath.org.
3. Antoine Song at the Mathematics Genealogy Project
4. "Density of minimal hypersurfaces for generic metrics | Annals of Mathematics".
5. Song, Antoine (2018). "Existence of infinitely many minimal hypersurfaces in closed manifolds". arXiv:1806.08816 [math.DG].
\begin{document}
\begin{titlepage}
\date{} \title{The Price of Anarchy is Unbounded for Congestion Games with Superpolynomial Latency Costs}
\thispagestyle{empty}
\begin{abstract} We consider non-cooperative unsplittable congestion games where players share resources, and each player's strategy is pure and consists of a subset of the resources on which it applies a fixed weight. Such games represent unsplittable routing flow games and also job allocation games. The congestion of a resource is the sum of the weights of the players that use it, and a player's cost function is the sum of the utilities of the resources in its strategy. The social cost is the total weighted sum of the players' costs. The quality of Nash equilibria is determined by the price of anarchy ($PoA$), which expresses how much worse the social outcome in the worst equilibrium is than in the optimal coordinated solution. In the literature the predominant work has been on games with polynomial utility costs, where it has been proven that the price of anarchy is bounded by the degree of the polynomial. However, no general bounds exist for non-polynomial utility functions.
Here, we consider general versions of these games in which the utility of each resource is an arbitrary non-decreasing function of the congestion. In particular, we consider a large family of superpolynomial utility functions which are asymptotically larger than any polynomial. We demonstrate that for every such function there exist games for which the price of anarchy is unbounded and increasing with the number of players (even if they have infinitesimal weights) while network resources remain fixed. We give tight lower and upper bounds which show this dependence on the number of players. Furthermore, we provide an exact characterization of the $PoA$ of all congestion games whose utility costs are bounded above by a polynomial function. Heretofore, such results existed only for games with polynomial cost functions.
\end{abstract} \end{titlepage}
\section{Introduction} We consider non-cooperative congestion games on a set of resources which are shared among players. A player's strategy consists of a subset (or all) of the resources where it applies a fixed weight. Strategies of players are pure in the sense that each player picks one strategy among a set of available strategies. The resource utilization is unsplittable within a strategy, since a player applies the same weight on each resource. The congestion of a resource is simply the sum of the weights of the players that use it. The utility of each resource is a function of its congestion. Each player selfishly minimizes its own cost which is the sum of the utilities of all the resources along its strategy.
We examine pure Nash equilibria which are game states where each player has chosen a locally optimal strategy from which there is no better alternative strategy. Rosenthal \cite{rosenthal1} shows that if all players have the same weight then pure Nash equilibria always exist. There may be multiple Nash equilibria for the same game. The quality of a Nash equilibrium is measured with respect to the social cost function which is simply the weighted sum of all the players' costs. We measure the impact of the selfishness of the players with the {\em price of anarchy} which is the ratio of the social cost of the worst Nash equilibrium versus the coordinated optimal social cost.
Resource congestion games can represent network flow games and job distribution games. In networks, each resource corresponds to a link. A player with weight $w$ represents a routing request from a source node to a destination node of demand $w$ which is fulfilled along a path of the network. The utility cost of the player relates to the delay for sending the demand in the network along the chosen route. In job distribution games, each resource represents a machine. Each player has a job that consists of small sub-tasks that can execute at each node. The weight of the player $w$ relates to the work to be assigned to each machine in order to execute the job. The cost of the player relates to the delay to finish its job. In both the network and job games, the price of anarchy represents the impact of selfishness to the overall performance of the system, which in one case is the total network delay, and in the other the total work to execute all jobs.
Congestion games were introduced and studied in~\cite{monderer1,rosenthal1}. Most of the literature considers linear or polynomial utility functions.
Koutsoupias and Papadimitriou \cite{KP99} introduced the notion of price of anarchy in the specific {\em parallel link networks} model in which they provide the bound $PoA = 3/2$.
Roughgarden and Tardos \cite{roughgarden3} provided the first result for splittable flows in general networks in which they showed that $PoA\le 4/3$.
Pure equilibria with atomic flow have been studied in
\cite{AAE05,BM06,SAGT,CK05,libman1,STZ04} for classic congestion games and their bottleneck-game variations (where the cost is determined by the maximum congested edge), and with splittable flow in \cite{roughgarden1,roughgarden2,roughgarden3,roughgarden5}. Mixed equilibria with atomic flow have been studied in
\cite{czumaj1,GLMMb04,KMS02,KP99,LMMR04,MS01,P01}, and with splittable flow in \cite{correa1,FKS02}.
\section{Contributions: Functional Characterization of PoA Boundedness}
Let $\mathcal{L} = \{ l_1(x), l_2(x), \ldots \}$ denote a class of arbitrary non-decreasing latency cost functions.
\begin{definition} Function $l_k(x) \in \mathcal{L}$ is defined to be a superpolynomial function if it cannot be bounded from above by any polynomial function $x^p$, i.e.\ $\not\exists\, p: \lim_{x \to \infty} \frac{l_k(x)}{x^p} = 0$. \label{superpolynomial} \end{definition}
We also define our notion of boundedness for the Price of Anarchy.
\begin{definition} The Price of Anarchy of a congestion game $G$ is bounded if it does not arbitrarily increase with the number of players.
\label{boundedness} \end{definition}
Under our notion of boundedness, the Price of Anarchy depends only on intrinsic game parameters such as the parameters of the latency cost function, player strategies etc. but is {\it independent} of the number of players. Consider for example a game with 2 resources and $n$ players who can use either resource. The $PoA$ is 1 and independent of $n$. In this paper, we prove the existence of a large class of latency functions for which there exist games in which the $PoA$ increases with the number of players while other network parameters such as network topology or number of resources, player weights and cost functions remain fixed.
Let $G$ be an unsplittable congestion game with player weights derived from weight set $W \subseteq \mathbb{R}^+$ and latency cost functions derived from $\mathcal{L}$. We assume that $W$ is bounded by $w = \max_{i \in W} i$, representing the weight of the largest player. For any function $l_k() \in \mathcal{L}$, define the set of ordered triples
\begin{equation} O_k = \left\{ (x,y,z)_k \in \mathbb{R}^3 \,\middle|\, x \geqslant y \geqslant z,\ 0 < z \leqslant w,\ \frac{l_k(x+z)}{l_k(x)} = \frac{x}{y} \right\} \label{orderedtriples} \end{equation}
As per the usual convention, for any ordered triple $(j,t,i) \in \mathbb{R}^3$, we denote $(j,t,i) \geqslant (x,t,i)_k$ if $j \geqslant x$. Next, for each $l_k()$, we define two special parameters:
\begin{eqnarray} g^*_k &= &\max\limits_{(x,y,z)_k \in O_k} \left\{ \frac{l_k(x+z)}{l_k(y)} \right\} \label{g*k}
\\ \widehat{g}_k &= &\max\limits_{x \in \mathbb{R}^+, y \geqslant z} \frac{l_k(x+z)}{l_k(y)} - \frac{x l_k(x)}{y l_k(y)} \label{ghatk} \end{eqnarray}
\noindent Let the ordered triple values at $g^*_k$ and $\widehat{g}_k$ be denoted by $(j^*,t^*,i^*)_k$ and $(\hat{j}, \hat{t}, \hat{i})_k$, respectively, i.e.\ $g^*_k = l_k(j^* + i^*)/l_k(t^*)$ and $\widehat{g}_k = \frac{l_k(\hat{j}+\hat{i})}{l_k(\hat{t})} - \frac{\hat{j} l_k(\hat{j})}{\hat{t} l_k(\hat{t})}$. Note that $(\hat{j},\hat{t},\hat{i}) \leqslant (j^*,t^*,i^*)$ since both $l_k(x+z)$ and $x l_k(x)$ are increasing in $x$, and hence by definition of the ordered triple $(x,y,z)_k$ we must have $\widehat{g}_k \leqslant g^*_k$.
We consider three disjoint subclasses of latency functions from $\mathcal{L}$ as described below and evaluate their $PoA$ bounds.
\begin{itemize}
\item
$\mathcal{L}_1 = \{ l_k \in \mathcal{L} \,|\, \forall i \in \mathbb{R}^+\ \lim_{x \to \infty} \frac{l_k(x+i)}{l_k(x)} > 1 \}$. Thus $\mathcal{L}_1$ contains superpolynomial functions such as $l_k(x) = a_k 2^x$, $l_k(x) = a_k x^x$ etc. with coefficient $a_k>0$.
\item
$\mathcal{L}_2 = \{ l_k \in \mathcal{L} \,|\, l_k(x) = a_k \cdot e^{\log^{1+\epsilon}x} \}$, where $\epsilon >0$ is any constant, coefficient $a_k >0$ and $\log$ refers to the natural logarithm. The functions in $\mathcal{L}_2$ are superpolynomials with the property $\lim_{x \to \infty} \frac{l_k(x+i)}{l_k(x)} = 1$. Thus $\mathcal{L}_2$ contains slower growing superpolynomial functions such as $l_k(x) = a_k x^{\log x}$, $l_k(x) = a_k x^{\log^2 x}$ etc.
\item $\mathcal{L}_3$ is the class of non-superpolynomial increasing functions inclusive of and bounded from above by polynomials, for example, $l_k(x) = (\sum_{q=0}^d a_q x^q)(\sum_{q=0}^d b_q \log^q x)$.
\end{itemize}
Our first result is an exact bound on the Price of Anarchy of unsplittable congestion games.
\begin{result} For every subset of cost functions $\mathcal{L}' \subseteq \mathcal{L}$, there exist games $G$ where the Price of Anarchy is
\begin{equation} \displaystyle PoA(G) =
\max\limits_{\substack{ l_k \in \mathcal{L}',\ \forall (x,t,i)_k \in O_k \\ (j,t,i) \geqslant (x,t,i)_k,\ 0 < i \leqslant w}} \frac{\widehat{g}_k\, j\, l_k(j)}{\widehat{g}_k\, t\, l_k(t) + j\, l_k(j) - t\, l_k(j+i)}
\end{equation}
\label{PoAbound:main} \end{result}
As a corollary from above, note that every game $G$ has a $PoA$ lower bounded by the maximum value $g^*_k$ attained at an ordered triple, i.e.\ $PoA = \Omega(g^*_k)$, since the expression above equates to $g^*_k$ when $j = x$ for any $(x,t,i)_k \in O_k$.
Result~\ref{PoAbound:main} compactly describes a necessary and sufficient condition for the boundedness of the Price of Anarchy of any game with cost functions drawn from $\mathcal{L}$. To the best of our knowledge, this is the first exact formulation of Price of Anarchy bounds (both lower and upper bounds) for unsplittable congestion games with {\it arbitrary} latency cost functions. Previous results by Awerbuch {\it et al.} and Monien {\it et al.} \cite{awerbuchSICOMP,monienSICOMP} have provided a tight characterization for games with polynomial latency cost functions. In \cite{roughgardenSTOC09}, Roughgarden provides generalized existence conditions for the Price of Anarchy of congestion games using smoothness characterizations.
We demonstrate in this paper, for every superpolynomial cost function in $\mathcal{L}_1$ and $\mathcal{L}_2$, the existence of games for which the $PoA$ is unbounded, and we also provide a tight bound on the $PoA$ for all games with polynomially bounded latency costs.
\begin{result} The relation between the Price of Anarchy, latency cost functions and the number of players is as follows:
\begin{enumerate} \item For every superpolynomial cost function in $\mathcal{L}_1$ and $\mathcal{L}_2$, there exist congestion games where the $PoA$ is arbitrarily large and increases with the number of players even with bounded weights. \item For every polynomially bounded congestion game (with latency functions drawn from $\mathcal{L}_3$), the $PoA$ is independent of the number of players and bounded only by the parameters of the cost function. \end{enumerate}
\label{PoA:iffcondition} \end{result}
Result~\ref{PoA:iffcondition} directly relates the Price of Anarchy of unsplittable congestion games to the growth rates of the latency cost functions that control the player costs in these games. More significantly, it has strongly negative implications for the Price of Anarchy of many such games. These implications were heretofore unknown, as the only known results to date were on the Price of Anarchy of congestion games with polynomial cost functions.
Our result implies that the Price of Anarchy is finite only for those games with latency cost functions bounded by some polynomial. Latency costs growing faster than polynomial functions have strongly negative consequences for the Price of Anarchy of every game with player costs controlled by these functions. For every cost function in this class, there are games with {\it unbounded} Price of Anarchy, even if the weights of all players are infinitesimally small.
{\it Remark}: The $PoA$ is bounded for all games with latency functions from $\mathcal{L}_3$, such as the well-known polynomial cost functions \cite{awerbuchSICOMP,monienSICOMP} of degree $d \geqslant 0$ described by $l_k(x) = \sum_{q=0}^d a_q x^q$. We show in this paper that the $PoA$ is bounded even for other polynomially bounded functions, for example, $l_k(x) = (\sum_{q=0}^d a_q x^q)(\sum_{q=0}^m b_q \log^q x)$.
The verdict on the existence of the Price of Anarchy for games with faster growing cost functions is strongly negative. This includes both slowly growing super-polynomial cost functions such as $l_k(x) = a_k x^{\log^{\epsilon} x}$, $\epsilon >0$ as well as fast-growing exponential cost functions such as $l_k(x) = x!$ and $l_k(x) = a^x$, where $a > 1$.
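The crossover behind this verdict can be illustrated numerically (an illustration of the superpolynomial definition, not a computation from the paper): for $l_k(x) = x^{\log x}$ we have $l_k(x)/x^p = x^{\log x - p}$, so $x^p$ dominates below $x = e^p$ and $l_k$ dominates above it, for every fixed $p$.

```python
import math

# l(x) = x^{log x} overtakes x^p exactly when log x > p, i.e. for x > e^p,
# so no fixed polynomial bounds it from above.
for p in (2, 5, 10):
    x = math.exp(p + 1)               # just past the crossover point
    assert x ** math.log(x) > x ** p  # superpolynomial dominates
    y = math.exp(p - 1)               # just before the crossover point
    assert y ** math.log(y) < y ** p  # polynomial still dominates
```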
\section{Game Formulation} \label{section:definitions}
An {\em unsplittable congestion game} is a strategic game
$G = (\Pi, W, R, \mathcal{S}, (l_r)_{r \in R})$ where: \begin{itemize} \item $\Pi = \{\pi_1,\ldots, \pi_N \}$ is a non-empty and finite set of players.
\item Player weight set $W \subseteq \mathbb{R}^+$, where each player $\pi_i$ has an associated weight $w_{\pi_i} \in W$ (also denoted as $i$ later in the analysis), and the maximum player weight is $w =\max_{\pi_i \in \Pi} w_{\pi_i}$.
\item $R = \{r_1,\ldots,r_\zeta \}$ is a non-empty and finite set of resources.
\item Strategy profile $\mathcal{S} = \mathcal{S}_{\pi_1} \times \mathcal{S}_{\pi_2} \times \cdots \times \mathcal{S}_{\pi_N}$, where $\mathcal{S}_{\pi_i}$ is a {\em strategy set} for player $\pi_i$, such that $\mathcal{S}_{\pi_i} \subseteq 2^R$. Each strategy $S_{\pi_i} \in \mathcal{S}_{\pi_i}$ is {\em pure} in the sense that it is a single element from $\mathcal{S}_{\pi_i}$ (in contrast to a {\em mixed strategy} which is a probability distribution over the strategy set of the player). A {\em game state} is any $S \in \mathcal{S}$. We consider {\em finite games} which have finite $\mathcal{S}$ (finite number of states). \end{itemize}
In any game state $S$, let $S_{\pi_i}$ denote the strategy of player $\pi_i$. We define the following terms with respect to a state $S$: {\it Congestion}: Each resource $r \in R$ has a congestion $C_r (S) = \sum_{\pi_i \in \Pi \wedge r \in S_{\pi_i}} w_{\pi_i}$, which is the sum of the weights of the players that use it. {\it Utility:} In any game state $S$, each resource $r \in R$ has a {\em utility cost} (also referred to as {\em latency cost}) $l_r(C_r(S))$. {\it Player Cost:} In any game state $S$, each player $\pi_i \in \Pi$ has a {\em player cost} $pc_{\pi_i}(S) = \sum_{r \in S_{\pi_i}} l_r(C_r(S))$. {\it Social Cost:} In any game state $S$, the {\em social cost} is $SC(S) = \sum_{r \in R} l_r(C_r(S)) \cdot C_r(S)$. Note that the social cost is the weighted sum of the players' costs.
When the context is clear, we will drop the dependence on $S$. For any state $S$, we use the standard notation $S = (S_{\pi_i},S_{-{\pi_i}})$ to emphasize the dependence on player $\pi_i$. Player $\pi_i$ is \emph{locally optimal} (or {\em stable}) in state $S$ if $pc_{\pi_i}(S) \leq pc_{\pi_i}(S'_{\pi_i},S_{-{\pi_i}})$ for all strategies $S'_{\pi_i} \in \mathcal{S}_{\pi_i}$. A greedy move by a player $\pi_i$ is any change of its strategy from $S'_{\pi_i}$ to $S_{\pi_i}$ which improves the player's cost, that is, $pc_{\pi_i}(S_{\pi_i},S_{-{\pi_i}}) < pc_{\pi_i}(S'_{\pi_i},S_{-{\pi_i}})$.
A state $S$ is in a {\em Nash Equilibrium} if every player is locally optimal, namely, no greedy move is available for any player. A Nash Equilibrium realizes the notion of a stable selfish outcome. In the games that we study there could exist multiple Nash Equilibria.
A state $S^*$ is called {\em optimal} if it has minimum attainable social cost: for any other state $S$, $SC(S^*) \le SC(S)$. We quantify the quality of the states which are Nash Equilibria with the \emph{price of anarchy} ($PoA$) (sometimes referred to as the coordination ratio). Let $\cal P$ denote the set of distinct Nash Equilibria. Then the price of anarchy of game $G$ is: \begin{equation*} PoA(G) =\max\limits_{ S \in {\cal P}} \frac{SC(S)}{SC(S^*)} \end{equation*}
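These definitions can be made concrete by brute force on a toy game (an illustrative example, not one from this paper): two unit-weight players, a resource $a$ with $l_a(x) = x$ and a resource $b$ with constant latency $l_b(x) = 2$, each player choosing $\{a\}$ or $\{b\}$.

```python
from fractions import Fraction
from itertools import product

# Toy unsplittable congestion game: 2 unit-weight players, resources 'a'
# (latency x) and 'b' (constant latency 2); strategies are {'a'} or {'b'}.
latency = {"a": lambda x: Fraction(x), "b": lambda x: Fraction(2)}
strategy_sets = [[frozenset("a"), frozenset("b")]] * 2
weight = [1, 1]

def congestion(profile):
    C = {"a": 0, "b": 0}
    for i, st in enumerate(profile):
        for r in st:
            C[r] += weight[i]
    return C

def player_cost(profile, i):
    C = congestion(profile)
    return sum(latency[r](C[r]) for r in profile[i])

def social_cost(profile):            # SC(S) = sum_r l_r(C_r) * C_r
    C = congestion(profile)
    return sum(latency[r](C[r]) * C[r] for r in C if C[r] > 0)

def is_nash(profile):                # no strictly improving unilateral move
    for i, st_set in enumerate(strategy_sets):
        for alt in st_set:
            dev = profile[:i] + (alt,) + profile[i + 1:]
            if player_cost(dev, i) < player_cost(profile, i):
                return False
    return True

states = list(product(*strategy_sets))
opt = min(social_cost(s) for s in states)
worst_nash = max(social_cost(s) for s in states if is_nash(s))
poa = worst_nash / opt
print(poa)  # 4/3: both players on 'a' is a Nash equilibrium of cost 4, optimum is 3
```

The worst equilibrium (both players on $a$, where neither deviation strictly improves the player cost) has social cost 4, against an optimal split of cost 3.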
\section{Preliminaries}
Let $S$ denote an arbitrary (not necessarily an equilibrium) state of game $G$ with resource set $R$. We group resources in $R$ together based on congestion and cost parameters. Let $R_{j,k}^t \subseteq R$ denote an equivalence class of resources such that for every $r \in R_{j,k}^t$, we have $C_r(S) = j$, $C_r(S^*) =t$ and latency costs governed by function $l_k(C_r(S)) \in \mathcal{L}$, i.e.\ the costs of using $r \in R_{j,k}^t$ in states $S$ and $S^*$ are given by $l_k(j)$ and $l_k(t)$, respectively. For notational convenience, we label the set of resource equivalence classes by $\mathcal{E} = \{ R_{j,k}^t \}$.
For any resource $r \in R$, let $\Pi_{r} = \{ \pi \,|\, r \in S_{\pi} \}$
and $\Pi^*_{r} = \{ \pi \,|\, r \in S^*_{\pi} \}$. Let $\sigma_i \subseteq \Pi$ denote the set of players in $\Pi$ with weight $i$, $ 0 < i \leqslant w$
and let $\alpha_{ir} = |\sigma_i \bigcap \Pi^*_r|$ denote the number of players of weight $i$ utilizing resource $r$ in the optimal state $S^*$. Thus $\sum_{i: \sigma_i \neq \phi} i \cdot \alpha_{ir} = t$ for all $r \in R_{j,k}^t$.
For any $t > 0$, let $f(t)$ be the total number of combinations of players of different weights which can satisfy the equation $\sum_{i: \sigma_i \neq \phi} i \cdot \alpha_{ir} = t$, where $f(t)$ can be an exponential function of $t$. We denote a particular such player combination by the index $t_a$, $1 \leqslant a \leqslant f(t)$ and let $R^{t_a}_{j,k} \subseteq R^t_{j,k}$ denote the equivalence class of resources with identical configurations of players in the optimal state as represented by $t_a$. If $t=0$, we will sometimes use the notation $0_0$ to denote the empty configuration. We represent the optimal configuration in state $S^*$ on any resource $r \in R^{t_a}_{j,k}$, by the vector $\bar{L}_{j,k}^{t_a} = \langle \{ \alpha_{j,k}^{i,t_a} \} \rangle$, where $\alpha_{j,k}^{i,t_a} > 0$ denotes the number of players of weight $i$ in configuration $t_a$ on resource $r$ in optimal state $S^*$. Henceforth we use the notation $i \in \bar{L}_{j,k}^{t_a}$ to denote the presence of a player of weight $i$ in configuration $t_a$, i.e.\ if $\alpha_{j,k}^{i,t_a} > 0$.
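For integer weights, $f(t)$ is a coin-change count, i.e.\ the number of solutions $\alpha_i \geqslant 0$ of $\sum_i i \cdot \alpha_i = t$. A small sketch (integer weights are an assumption made here for illustration; the paper allows real weights):

```python
# Count the combinations of player weights from W summing to t, via the
# classic order-independent coin-change recurrence.
def f(t, W):
    ways = [1] + [0] * t
    for i in W:
        for s in range(i, t + 1):
            ways[s] += ways[s - i]
    return ways[t]

assert f(4, [1, 2, 3]) == 4   # 1+1+1+1, 1+1+2, 2+2, 1+3
assert f(1, [1, 2, 3]) == 1
```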
We derive our $PoA$ bound by obtaining a constrained maximization formulation of the $PoA$ using resource equivalence classes that can then be bounded. In particular, consider the term $|R^{t_a}_{j,k}| i \alpha_{j,k}^{i,t_a} l_k(t)$ representing resource equivalence class $R_{j,k}^{t_a}$. This term represents the net contribution of all players of a given weight $i$ occupying the subset of resources $R_{j,k}^{t_a}$ towards the optimal social cost. More formally define the terms
\begin{eqnarray} \lambda_{j,k}^{i,t_a} &=
&\frac{|R_{j,k}^{t_a}| \cdot i \cdot \alpha_{j,k}^{i,t_a} l_k(t)}{SC(S^*)}, \quad R_{j,k}^t \in \mathcal{E}, t > 0, 1 \leqslant a \leqslant f(t), i: \sigma_i \in \Pi, \label{FML:c2} \\
\lambda_{j,k}^{i,0} &= &\frac{|R_{j,k}^0|}{SC(S^*)}, \quad \quad R_{j,k}^0 \in \mathcal{E}, i: \sigma_i \in \Pi. \label{FML:c3} \end{eqnarray}
Each term of Eq.~\ref{FML:c2} represents the fractional net contribution of players of weight $i$ occupying resources in $R_{j,k}^{t_a}$ with $t>0$ towards the optimal social cost $SC(S^*)$. However as shown in the lemma below, these terms also represent exactly the contribution of these players towards the total Price of Anarchy. Also, Eq.~\ref{FML:c3} is defined for arbitrary $i$ for consistency with Eq.~\ref{FML:c2}, however in actuality $i$ can be assumed 0 since there are no players of any weight on resources in $R_{j,k}^0$ in the optimal state.
Denote the coordination ratio of any unsplittable congestion game $G$ as $H(S) = SC(S)/SC(S^*)$ for arbitrary state $S$ and optimal state $S^*$. The following lemma relates the coordination ratio to the coefficients $\lambda_{j,k}^{i,t_a}$.
\begin{lemma} Given any game state $S$, the coordination ratio $H(S)$ of an unsplittable congestion game with latency cost functions derived from class $\mathcal{L}$ can be expressed as
\begin{flalign} H(S) \quad = &\quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \lambda_{j,k}^{i,t_a} \cdot \frac{j}{t} \cdot \frac{l_k(j)}{l_k(t)} + \quad \sum_{R_{j,k}^0 \in \mathcal{E}} \lambda_{j,k}^{0,0} \cdot j \cdot l_k(j) \label{H(S):obj} \\ &\mbox{where} \notag \\ &\quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \lambda_{j,k}^{i,t_a} \quad = \quad 1 \label{lambda:c2} \\ &\quad \lambda_{j,k}^{0,0} \quad \geqslant \quad 0 \label{lambda:prop3} \end{flalign}
\label{H(S):Formulation} \end{lemma} {\it Proof}: Please see Appendix.
Let $\mathcal{P}$ denote the set of Nash equilibrium states of $G$. Then from the definition of the Price of Anarchy, we have \begin{equation} PoA(G) \quad = \quad \max_{S \in \mathcal{P}} H(S) \label{FML:obj} \end{equation}
Note that for any group of resources $R_{j,k}^t$, the term $\frac{jl_k(j)}{tl_k(t)}$ represents the localized $PoA$. Thus given constraint~\ref{lambda:c2} we can also view the overall $PoA$ of the game as the average of the localized $PoA$'s on each resource class. Also note that while the $\lambda_{j,k}^{i,t_a}$, $t > 0$, terms representing actual optimal player configurations are constrained, the $\lambda_{j,k}^{0,0}$ terms representing resources contributing to the equilibrium cost but not the optimal are not. However as shown later they cannot be too large.
\section{ Price of Anarchy Lower Bounds} \label{LB}
For any arbitrary cost function in $\mathcal{L}$, we describe a specific game which bounds the Price of Anarchy from below. Consider a game $G = (\Pi, W, R, \mathcal{S}, l_k)$, where $l_k \in \mathcal{L}$ and players $\Pi = \{ \pi_1, \ldots, \pi_{N} \}$
such that every player has a demand of weight exactly $w \in W$. The set of resources $R$, where $\zeta = |R|$, can be divided into two disjoint sets $R = A \cup B$, $A \cap B = \emptyset$, such that $A = \{a_0, \ldots, a_{\zeta_1 - 1} \}$ and $B = \{b_0, \ldots, b_{\zeta_2 - 1} \}$, that is, $\zeta = \zeta_1 + \zeta_2$.
Consider parameters $\alpha, \beta, \gamma, \delta \geq 0$, such that $\zeta_1 \geq \alpha + \beta$, and $\zeta_2 \geq \gamma + \delta$. Each player $\pi_i$ has two strategies $\mathcal{S}_{\pi_i} = \{s_i, {\overline s}_i \}$. Strategy $s_{i}$ occupies $\alpha$ consecutive resources from the set $A$, $s^A_{i} = \{ a_{(i-1) \mod \zeta_1}, \ldots, a_{(i + \alpha - 1) \mod \zeta_1} \}$, and $\beta$ consecutive resources from $B$, $s^B_{i} = \{ b_{(i-1) \mod \zeta_2}, \ldots, b_{(i + \beta - 1) \mod \zeta_2} \}$; namely, $s_{i} = s^A_{i} \cup s^B_{i}$. Strategy ${\overline s}_{i}$ occupies $\gamma$ consecutive resources from the set $A$ such that the first resource is immediately after the last in $s^A_i$, ${\overline s}^A_{i} = \{ a_{(i + \alpha -1) \mod \zeta_1}, \ldots, a_{(i + \alpha + \gamma - 1) \mod \zeta_1} \}$, and $\delta$ consecutive resources from $B$, such that the first resource is immediately after the last in $s^B_i$, ${\overline s}^B_{i} = \{ b_{(i + \beta -1) \mod \zeta_2}, \ldots, b_{(i + \beta + \delta - 1) \mod \zeta_2} \}$; namely, ${\overline s}_{i} = {\overline s}^A_{i} \cup {\overline s}^B_{i}$.
We consider the game state $S = (s_1,\ldots, s_N)$ which consists of the first strategy of each player, and game state ${\overline S} = ({\overline s}_1,\ldots, {\overline s}_N)$ which consists of the second strategy of each player. We take the number of players $N = \kappa_1 \zeta_1$ and $N = \kappa_2 \zeta_2$, for integers $\kappa_1, \kappa_2 \geq 0$.
\begin{lemma} \label{lemma:LB-equil} State $S$ is a Nash equilibrium. \end{lemma} {\it Proof}: Please see Appendix.
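The lemma can be checked numerically on a concrete instance of the cyclic construction (all parameter values below are illustrative choices, not from the paper, and the two strategies of each player are built to be disjoint), with the superpolynomial latency $l(x) = 2^x$:

```python
# Concrete instance of the cyclic lower-bound construction with l(x) = 2^x.
zeta1 = zeta2 = 4
alpha, beta, gamma, delta = 2, 1, 1, 2
w, N = 1, 4                                  # N = kappa1*zeta1 = kappa2*zeta2

def l(x):                                    # resource latency
    return 2.0 ** x

def s(i):                                    # first strategy of player i
    return ({("a", (i - 1 + q) % zeta1) for q in range(alpha)} |
            {("b", (i - 1 + q) % zeta2) for q in range(beta)})

def sbar(i):                                 # second strategy, starting where s(i) ends
    return ({("a", (i - 1 + alpha + q) % zeta1) for q in range(gamma)} |
            {("b", (i - 1 + beta + q) % zeta2) for q in range(delta)})

# congestion in state S = (s_1, ..., s_N)
C = {}
for i in range(1, N + 1):
    for r in s(i):
        C[r] = C.get(r, 0) + w

j1, j2 = N * alpha * w // zeta1, N * beta * w // zeta2
assert all(C[("a", q)] == j1 for q in range(zeta1))
assert all(C[("b", q)] == j2 for q in range(zeta2))

# Nash check: cost of staying on s(i) versus deviating to sbar(i)
for i in range(1, N + 1):
    stay = sum(l(C[r]) for r in s(i))
    Cdev = dict(C)
    for r in s(i):
        Cdev[r] -= w
    for r in sbar(i):
        Cdev[r] = Cdev.get(r, 0) + w
    move = sum(l(Cdev[r]) for r in sbar(i))
    assert stay <= move                      # no profitable deviation
```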
As observed in the proof of Lemma \ref{lemma:LB-equil}, in state $S$ each resource $r \in A$ has congestion equal to $j_1 = C_r(S) = N \alpha w / \zeta_1$, and each resource $r \in B$ has congestion equal to $j_2 = C_r(S) = N \beta w / \zeta_2$. Similarly, in state ${\overline S}$ each resource $r \in A$ has congestion equal to $t_1 = C_r(S) = N \gamma w / \zeta_1$, and each resource $r \in B$ has congestion equal to $t_2 = C_r(S) = N \delta w / \zeta_2$. Similar to the previous section, we define parameters $\lambda_1, \lambda_2 \geq 0$, such that $\lambda_1 + \lambda_2 = 1$ and: $$ \lambda_1 = \frac {\zeta_1 t_1 l_k(t_1)}
{\zeta_1 t_1 l_k(t_1) + \zeta_2 t_2 l_k(t_2)}, \qquad \qquad \lambda_2 = \frac {\zeta_2 t_2 l_k(t_2)}
{\zeta_1 t_1 l_k(t_1) + \zeta_2 t_2 l_k(t_2)}. $$ From Lemma \ref{lemma:LB-equil}, $S$ is a Nash equilibrium, and thus, for any player $i$, $pc_{\pi_i}(S) \leq pc_{\pi_i}(S')$. With an appropriate choice of the game parameters ($\alpha, \beta, \gamma, \delta, \zeta_1, \zeta_2, \kappa_1, \kappa_2$) and of the real-valued weight $w$, we can in fact obtain $pc_{\pi_i}(S) = pc_{\pi_i}(S')$ for each player $\pi_i \in \Pi$. In other words, $$ \alpha \cdot l_k(j_1) + \beta \cdot l_k(j_2) = \gamma \cdot l_k(j_1 + w) + \delta \cdot l_k(j_2 + w). $$ Multiplying both sides by $Nw$ and using $j_1 = N\alpha w/\zeta_1$, $j_2 = N\beta w/\zeta_2$, $t_1 = N\gamma w/\zeta_1$, $t_2 = N\delta w/\zeta_2$, we get $$ \zeta_1 j_1 \cdot l_k(j_1) + \zeta_2 j_2 \cdot l_k(j_2) = \zeta_1 t_1 \cdot l_k(j_1 + w) + \zeta_2 t_2 \cdot l_k(j_2 + w), $$ and hence, $$ \zeta_1 j_1 \cdot l_k(j_1) - \zeta_1 t_1 \cdot l_k(j_1 + w) = \zeta_2 t_2 \cdot l_k(j_2 + w) - \zeta_2 j_2 \cdot l_k(j_2), $$ or equivalently, $$
\lambda_1 \cdot \frac {j_1 l_k(j_1) - t_1 l_k(j_1 + w)}
{t_1 l_k(t_1)} = \lambda_2 \cdot \frac {t_2 l_k(j_2 + w) - j_2 l_k(j_2)}
{t_2 l_k(t_2)}, $$ which gives, \begin{equation} \label{eqn:lambdas} \lambda_1 \cdot \left [ \frac {j_1 l_k(j_1)}
{t_1 l_k(t_1)}
- \frac {l_k(j_1 + w)}
{l_k(t_1)} \right ] = \lambda_2 \cdot \left [ \frac {l_k(j_2 + w)}
{l_k(t_2)}
- \frac {j_2 l_k(j_2)}
{t_2 l_k(t_2)} \right]. \end{equation}
\begin{lemma} \label{POALB} For game $G$, the price of anarchy is bounded by: $$PoA(G) \geqslant \max \left( g^*_k,
\max_{\substack{(j_1,t_1,w) \geqslant (x,t_1,w)_k \\ \forall (x,t_1,w)_k \in O_k}}
\frac {{\widehat g}_k j_1 l_k(j_1)}
{{\widehat g}_k t_1 l_k(t_1) + j_1 l_k(j_1) - t_1 l_k(j_1 + w)} \right).$$ \end{lemma} {\it Proof}: Please see Appendix.
\section{Price of Anarchy Upper Bounds} \subsection{Constrained Maximization}
Let $S$ be any Nash equilibrium state of $G$. We find the upper bound on the $PoA$ via the lemma below in which we convert the unconstrained maximization of Eq.~\ref{H(S):obj} into a constrained version. Consider an arbitrary resource equivalence class $R_{j,k}^{t_a}$ in state $S$ of $G$. For any player of weight $i \in \bar{L}_{j,k}^{t_a}$, $0 < i \leqslant \min(t,w)$, define
\begin{eqnarray} f_{j,k}^{i,t_a} &= & \frac{j l_k(j)}{t l_k(t)} - \frac{l_k(j+i)}{l_k(t)}, \label{fjk:def} \\ f_{j,k}^{0,0} &= & j l_k(j) \label{fjk0:def} \end{eqnarray}
\noindent Also let $g_{j,k}^{i,t_a} = - f_{j,k}^{i,t_a}$. We first define the notion of underloaded and overloaded resource sets conditioned on the value of $f_{j,k}^{i,t_a}$.
\begin{definition} Resource subset $R_{j,k}^{t_a}$ is defined to be overloaded with respect to players of weight $i$ if $f_{j,k}^{i,t_a} \geqslant 0$ and underloaded if $f_{j,k}^{i,t_a} < 0$ ($g_{j,k}^{i,t_a} > 0$). Define $F = \{ f_{j,k}^{i,t_a} : f_{j,k}^{i,t_a} \geqslant 0 \}$ and $T = \{ g_{j,k}^{i,t_a} : g_{j,k}^{i,t_a} > 0 \}$. \label{overloaded} \end{definition}
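For intuition, this classification can be evaluated for a concrete latency function; here $l_k(x) = x^2$ is an arbitrary illustrative choice, not one drawn from the paper:

```python
# f compares the load term j*l(j)/(t*l(t)) against the switching term l(j+i)/l(t)
l = lambda x: x ** 2  # illustrative latency function

def f(j, t, i):
    return j * l(j) / (t * l(t)) - l(j + i) / l(t)

assert f(4, 2, 1) > 0   # overloaded w.r.t. weight-1 players: 8 - 25/4 > 0
assert f(2, 3, 2) < 0   # underloaded: 8/27 - 16/9 < 0
```

The sign flips exactly when the equilibrium load term stops dominating the cost of a weight-$i$ player switching in.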
The first term in the definition of $f_{j,k}^{i,t_a}$ above relates to the overall social cost of players using resource class $R_{j,k}^{t_a}$, while the second term relates to the cost incurred by a player of weight $i$ switching to a resource in $R_{j,k}^{t_a}$ from its current strategy. Thus the magnitude of $f_{j,k}^{i,t_a}$ indicates the contribution of the corresponding resource class $R_{j,k}^{t_a}$ to the overall price of anarchy, and also the excess of load over switching costs in that resource class. However, for any equilibrium state $S$, the switching costs must exceed the players' costs when taken over all resource classes, and thus the weight of the overloaded resource classes is constrained by the underloaded resource classes, as we show through the lemma below.
\begin{lemma} Let $S \in \mathcal{P}$ be any Nash equilibrium state of game $G$. Then we must have, \begin{equation} \sum_{R_{j,k}^0 \in \mathcal{E}} \lambda_{j,k}^{0,0} f_{j,k}^{0,0} + \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{f_{j,k}^{i,t_a} \in F} \lambda_{j,k}^{i,t_a} f_{j,k}^{i,t_a} \quad \leqslant \quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{g_{j,k}^{i,t_a} \in T} \lambda_{j,k}^{i,t_a} g_{j,k}^{i,t_a} \label{ML:1} \end{equation} \label{lemma:mainconstraint} \end{lemma} {\it Proof}: Please see Appendix.
Next in order to bound the $PoA$ objective function, we relate the functions $f_{j,k}^{i,t_a}$ and $g_{j,k}^{i,t_a}$ above to the ordered triples of any latency function $l_k()$. From the definition of ordered triples in Eq.~\ref{orderedtriples}, we obtain
\begin{lemma} Let $(x,t,i)_k \in O_k$ be any ordered triple of function $l_k()$. Then we must have, $(j,t,i) < (x,t,i)_k \in O_k$ for all $g_{j,k}^{i,t_a} \in T$ and $(j,t,i) \geqslant (x,t,i)_k \in O_k$ for all $f_{j,k}^{i,t_a} \in F$. \label{ordtriple} \end{lemma}
Rewriting lemma~\ref{lemma:mainconstraint} using lemma~\ref{ordtriple} above and combining with Eq.~\ref{FML:obj}, the upper bound on the $PoA$ can therefore be expressed as the following constrained maximization:
\begin{definition}[Maximization Problem] \begin{flalign} &PoA \quad \leqslant \quad \max_{S \in \mathcal{P}} H(S) \quad := \quad \sum_{R_{j,k}^0 \in \mathcal{E}} \lambda_{j,k}^{0,0} \cdot j l_k(j) \quad + \notag \\ & \quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} \cdot \frac{j l_k(j)}{t l_k(t)} + \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) < (x,t,i)_k } \lambda_{j,k}^{i,t_a} \cdot \frac{j l_k(j)}{t l_k(t)} \label{PoA:1} \\ &\quad \mbox{s.t.} \notag \\ &\sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} f_{j,k}^{i,t_a} \quad \leqslant \quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) < (x,t,i)_k } \lambda_{j,k}^{i,t_a} g_{j,k}^{i,t_a} \label{PoA:2} \\ &\quad \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \lambda_{j,k}^{i,t_a} \quad = \quad 1 \label{PoA:3} \\ &\quad \lambda_{j,k}^{0,0} \quad \geqslant \quad 0 \label{PoA:4} \end{flalign} \end{definition}
The following lemma provides an exact formulation for evaluating the upper bound on the $PoA$ of any game $G$ by bounding the objective function $H(S)$ for any equilibrium state $S$.
\begin{lemma} \[ PoA(G) \quad \leqslant \quad \max \left( g^*_k, \max_{\substack{(j,t,i) \geqslant (x,t,i)_k \\ \forall (x,t,i)_k \in O_k}} \frac{\widehat{g}_k j l_k(j)}{\widehat{g}_k t l_k(t) + j l_k(j)- t l_k(j+i)} \right) \] \label{POAUB} \end{lemma} {\it Proof}: Please see Appendix.
\section{$PoA$: Functional Characterizations}
Now we can combine the lower and upper bounds for the $PoA$ as derived in lemma~\ref{POALB} and lemma~\ref{POAUB} to get the tight bounds described in Eq.~\ref{PoAbound:main} of Result~\ref{PoAbound:main}. Next we bound the expression in Eq.~\ref{PoAbound:main} for arbitrary latency functions $l_k()$.
\begin{theorem} For every latency function $l_k \in \mathcal{L}_1$, there exist congestion games with arbitrarily large $PoA$s depending only on the number of players. \label{l1bound} \end{theorem} \begin{proof} First consider the following example where $i=1$ and $l_k(x+1) > x l_k(x)$ (for example, latency functions such as $l_k(x) = x!$ or $l_k(x) = x^x$). Note that in these cases, ordered triples do not exist since $tl_k(x+1) > x l_k(x)$ for all $t \geqslant 1$. For such functions, $\widehat{g}_k = \max_{x,t \geqslant 1} \left( \frac{l_k(x+i)}{l_k(t)} - \frac{x l_k(x)}{tl_k(t)} \right)$ is unbounded and therefore so is $g^*_k$. Since the $PoA \geqslant g^*_k$ it is also unbounded.
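The two claims in this first example — that no ordered triple exists and that $\widehat{g}_k$ grows without bound — can be checked directly for $l_k(x) = x!$ with $i = 1$ (a small numerical sketch over a finite range):

```python
from math import factorial

l = factorial  # l_k(x) = x!

# no ordered triple: t*l(x+1) > x*l(x) for every t >= 1, since (x+1)! = (x+1)*x!
for x in range(1, 12):
    for t in range(1, 12):
        assert t * l(x + 1) > x * l(x)

# g_hat grows without bound: with t = 1 the expression equals l(x+1) - x*l(x) = x!
g_hat = lambda x, t: l(x + 1) / l(t) - x * l(x) / (t * l(t))
vals = [g_hat(x, 1) for x in range(1, 10)]
assert vals[0] == 1.0
assert all(b > a for a, b in zip(vals, vals[1:]))  # strictly increasing
```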
Consider a more general example of latency functions in $\mathcal{L}_1$ for which ordered triples exist. Given any $i>0$, let $\lim_{x \to \infty} \frac{l_k(x+i)}{l_k(x)} = 1 + \delta$, where $\delta >0$ is a constant independent of $t$. Choose an ordered triple $(x,t,i)$ such that $tl_k(x+i) = xl_k(x)$ and $l_k(x+i) \geqslant (1+\delta)l_k(x)$. Thus $x \geqslant (1+\delta)t$. Substituting in the expression for the $PoA$ above with $j=x$, we have $PoA \geqslant \frac{xl_k(x)}{tl_k(t)}$. Choose $t$ large enough so that
\[ PoA > \frac{l_k\left( (1+\delta)t \right)}{l_k(t)} = \frac{l_k(t + \delta t)}{l_k(t + \delta t -i)} \cdot \frac{l_k(t + \delta t -i)}{l_k(t + \delta t -2i)} \cdots \frac{l_k(t+i)}{l_k(t)} \geqslant \left(1 + \delta \right)^{\delta t/i} \]
From Section~\ref{LB}, since $t$ is controlled by the number of players in the game which can be arbitrarily large while player weight $i$ is bounded by a given constant $w$, the $PoA$ is unbounded. \end{proof}
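The telescoping step in the display above is purely algebraic and holds for any positive latency function: the product of successive ratios collapses to $l_k(t+\delta t)/l_k(t)$. A quick sketch, with an arbitrary positive $l$, $t$, and step $i$:

```python
import math

l = lambda x: math.exp(x)  # any positive function works for the telescoping identity
t, i, steps = 10, 2, 5     # delta*t = steps * i

prod = 1.0
for m in range(1, steps + 1):
    prod *= l(t + m * i) / l(t + (m - 1) * i)

assert math.isclose(prod, l(t + steps * i) / l(t), rel_tol=1e-9)
```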
\begin{theorem} For every latency function $l_k \in \mathcal{L}_2$, there exist congestion games with arbitrarily large $PoA$ depending only on the number of players. \label{l2bound} \end{theorem} \begin{proof} Let $t_0 \in \mathbb{R}^+$ be a sufficiently large constant. Consider ordered triples $(x,t,i)_k \in O_k$ with $t \geqslant t_0$, $x \geqslant t$, for cost function $l_k() \in \mathcal{L}_2$. We can safely assume that $\frac{l_k(x)}{l_k(t)}$ is bounded for all $t \geqslant t_0$, since otherwise $g^*_k \geq \frac{xl_k(x)}{tl_k(t)}$ is unbounded and so is the $PoA$. Assume $\frac{l_k(x)}{l_k(t)} \leqslant \kappa$ for all $t \geqslant t_0$, or equivalently \begin{equation} \log^{1+\epsilon}x - \log^{1+\epsilon}t \leqslant \log \kappa, \quad \forall t \geqslant t_0 \label{xlx} \end{equation}
Also $\lim_{x \to \infty} l_k(x+i)/l_k(x) = 1$ and $l_k(x+i)/l_k(x) = x/t$, which implies $\exists \epsilon_t \to 0$ such that $x =(1+\epsilon_t)t$.
Since $g^*_k$ is assumed bounded and $jl_k(j) > tl_k(j+i)$ for all $j \geqslant x: (x,t,i)_k \in O_k$, we can bound the $PoA$ expression
in Eq.~\ref{PoAbound:main} as
\begin{equation} PoA = \Omega \left( \max_{\substack{(j,t,i) \geqslant (x,t,i)_k \\ \forall (x,t,i)_k \in O_k}} \frac{\widehat{g}_k \, tl_k(j+i)}{\widehat{g}_k t l_k(t) + j l_k(j)- t l_k(j+i)} \right) \label{mainW*} \end{equation}
Denote the term above by $y$. Taking the partial derivative of $y$ with respect to $j$ and equating it to 0 gives us
\begin{equation}
\left. \frac{\partial y}{\partial j} \right|_0 \implies \widehat{g}_k t l(t) - jl_k(j) = \left(l_k(j+i) \right) \frac{(jl_k(j))'}{tl'_k(j+i)} \label{dydj:eq} \end{equation} \noindent where $()'$ denotes the partial derivative with respect to $j$.
For any given value of $t: (x,t,i)_k \in O_k$, $y$ is maximized for the $j\geqslant x$ that satisfies Eq.~\ref{dydj:eq}. Substituting this in Eq.~\ref{mainW*} and simplifying, we get
\begin{equation} PoA \geqslant \max_{\substack{(j,t,i) \geqslant (x,t,i)_k \\ \forall (x,t,i)_k \in O_k}} \frac{\widehat{g}_k }{ \frac{\left(jl_k(j)\right)'}{t \cdot l'_k(j+i)} -1} \label{PoAlb:1} \end{equation}
Let $\alpha_i(j) =l_k'(j+i)/l_k(j+i)$, $i \geqslant 0$. Using the fact that $\alpha_i(x) = (1+\epsilon) \frac{\log^{\epsilon}(x+i)}{x+i}$, $i \geqslant 0$ for $l_k(x) = a_k e^{\log^{1+\epsilon}x}$, we have
\begin{eqnarray} \frac{\left(jl_k(j)\right)'}{t \cdot l'_k(j+i)} \quad - \quad 1 \quad &= &\quad \frac{\alpha_0(j) + 1/j}{\alpha_i(j)} \cdot \frac{j l_k(j)}{t l_k(j+i)} \quad - \quad 1 \nonumber \\ &= &\quad \left(1 + \frac{i}{j} \right) \left( \frac{1 +(1+\epsilon)\log^{\epsilon}j}{(1+\epsilon)\log^{\epsilon}(j+i)} \right) \left( \frac{j l_k(j)}{t l_k(j+i)} \right) \quad - \quad 1 \nonumber \\ &\leqslant &\quad \left(1 + \frac{i}{j} \right) \left(1 + \frac{1}{(1+\epsilon)\log^{\epsilon}j} \right) \left(\frac{j}{t} \right) \quad - \quad 1 \label{PoAlb:4} \end{eqnarray}
Similarly Eq.~\ref{dydj:eq} can be simplified as
\begin{flalign}
&\frac{j l_k(j)}{t l_k(t)} \quad = \quad \frac{\widehat{g}_k \cdot \alpha_i(j)}{ \alpha_0(j) - \alpha_i(j) +1/j} \quad \leqslant \quad \widehat{g}_k \frac{1+\epsilon}{1+i/j} \log^{\epsilon}(j+i) \quad \leqslant \quad \beta \log^{\epsilon}(j+i) \label{PoAlb:5} \end{flalign} where $\beta = (1+\epsilon) \widehat{g}_k$ is a constant dependent only on the parameters of latency function $l_k(x)$.
Since $j \geq x$ where $(x,t,i)_k$ is an ordered triple, let $j = \gamma x$ where $\gamma \geqslant 1$ and $x = (1+\epsilon_t)t$ as defined earlier. Substituting in Eq.~\ref{PoAlb:5}, we get
\begin{flalign} &\gamma (1+\epsilon_t) e^{\log^{1+\epsilon}(\gamma x) - \log^{1+\epsilon}t} \leqslant \beta \log^{\epsilon}(\gamma x+i) \notag \\ &\Rightarrow \log \gamma + (\log \gamma + \log x)^{1+\epsilon} - \log^{1+\epsilon}t \leqslant \log \beta + \epsilon \log \log(2\gamma x) \notag \\ &\Rightarrow \log \gamma + \epsilon' \log \gamma \log x + \log^{1+\epsilon}x - \log^{1+\epsilon}t \leqslant \log \beta + \epsilon \log \log(2\gamma x ) \label{PoAlb:6} \end{flalign} \noindent where $\epsilon'$ is a constant. Further substituting from Eq.~\ref{xlx}, we get,
\begin{flalign} &\log \gamma(1 + \epsilon' \log x) + \log \kappa \leqslant \log \beta + \epsilon \log \log(2\gamma x) \label{PoAlb:7} \\ &\Rightarrow \log \gamma \leqslant \frac{\zeta + \epsilon \log \log 2\gamma x}{1 + \epsilon' \log x} \end{flalign} \noindent where $\zeta$ is a constant. Since $\epsilon$, $\epsilon'$ and $\zeta$ are constants and $x$ can be chosen so that $x \gg i$, we have $\log \gamma = \Theta(\frac{\log \log x}{\log x} )$ and so $\gamma = \Theta(1+\frac{\log \log x}{\log x})$. Substituting $j/t = (1+\epsilon_t) \gamma$ in Eq.~\ref{PoAlb:4}, we notice that all the terms in the first expression on the RHS converge to 1. Further substituting this expression in Eq.~\ref{PoAlb:1} for the $PoA$, we get $PoA = \Omega((1+\epsilon)\log^{\epsilon}j)$. Since the congestion $j$ depends on the number of players and can be arbitrarily large, we get the desired result. \end{proof}
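The closed form $\alpha_i(x) = (1+\epsilon)\frac{\log^{\epsilon}(x+i)}{x+i}$ used in the proof follows from differentiating $l_k(x) = a_k e^{\log^{1+\epsilon} x}$ (the constant $a_k$ cancels in $l'_k/l_k$). It can be checked against a central-difference approximation; the values of $\epsilon$ and $x$ in this sketch are arbitrary:

```python
import math

eps = 0.5
l = lambda x: math.exp(math.log(x) ** (1 + eps))   # a_k = 1; cancels in l'/l

def alpha(x, i=0):
    # claimed logarithmic derivative l'_k(x+i)/l_k(x+i)
    return (1 + eps) * math.log(x + i) ** eps / (x + i)

x, h = 50.0, 1e-5
numeric = (l(x + h) - l(x - h)) / (2 * h * l(x))   # central difference of l'/l
assert abs(numeric - alpha(x)) < 1e-6
```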
Finally, we consider games from $\mathcal{L}_3$. \cite{monienSICOMP} describes upper bounds for games with polynomial costs. Here we present a generalized result for all congestion games with latency functions drawn from the class of polynomially bounded functions.
\begin{theorem} For every congestion game with latency functions drawn from $\mathcal{L}_3$, the $PoA$ is independent of the number of players and bounded only by the parameters of the cost function (such as the degree of the polynomial). \label{l3bound} \end{theorem} {\it Proof}: Please see Appendix.
\section{Conclusions} We provide the first characterization of the price of anarchy for congestion games with superpolynomial utilities. We provide tight bounds for a large family of utility functions and show how the price of anarchy increases with the number of players, while other game parameters, such as the number of resources and player weights, remain fixed. We also extend and generalize the previously known bounds on games with polynomial utility functions to games with utility functions inclusive of and bounded above by polynomials. Our results lead to several interesting open questions: by restricting player strategy sets and network topologies, can we find interesting families of games with bounded price of anarchy even with superpolynomial utilities? Another interesting problem is to determine whether there are approximate games with bounded $PoA$.
\appendix \section{Appendix} {\it Proof of lemma~\ref{H(S):Formulation}}:
After substituting the values of $\lambda_{j,k}^{i,t_a}$ from Eqs.~\ref{FML:c2} and~\ref{FML:c3} into the coordination ratio $H(S)$ in Eq.~\ref{H(S):obj}, we get,
\begin{eqnarray*} H(S) \cdot SC(S^*) \quad & = & \displaystyle \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}}
|R_{j,k}^{t_a}| i \alpha_{j,k}^{i,t_a} \frac{j \cdot l_k(j)}{t} + \sum_{R_{j,k}^0 \in \mathcal{E}}
|R_{j,k}^{0}| j \cdot l_k(j) \\ \quad & = & \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{r \in R_{j,k}^{t_a}} j \cdot l_k(j) \frac{\sum_{i \in \bar{L}_{j,k}^{t_a}} i \alpha_{j,k}^{i,t_a}}{t} + \sum_{R_{j,k}^0 \in \mathcal{E}} \sum_{r \in R_{j,k}^{0}} j \cdot l_k(j) \\ \quad & = & \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{r \in R_{j,k}^t} j \cdot l_k(j) \\ \quad & = & \sum_{r \in R} C_r(S) \cdot l_k(C_r(S)) \\ \quad & = & SC(S) \end{eqnarray*}
\noindent where we use the fact that $\sum_{i \in \bar{L}_{j,k}^{t_a}} i \cdot \alpha_{j,k}^{i,t_a} =t$ for all values of optimal congestion $t >0$. Similarly, to prove constraint~\ref{lambda:c2}, note that
\begin{flalign} \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \lambda_{j,k}^{i,t_a} \cdot SC(S^*) \notag \\ \quad = \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}}
|R_{j,k}^{t_a}| \cdot i \cdot \alpha_{j,k}^{i,t_a} l_k(t) \notag \\ \quad = \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{r \in R_{j,k}^{t}} t \cdot l_k(t) \notag \\ \quad = SC(S^*) \notag \end{flalign}
$\Box$
{\it Proof of lemma~\ref{lemma:LB-equil}}:
Since $N$ is a multiple of $\zeta_1$ and $\zeta_2$, in state $S$ each resource in $A$ is utilized by $N \alpha / \zeta_1 = \kappa_1 \alpha$ players, and each resource in $B$ is utilized by $N \beta / \zeta_2 = \kappa_2 \beta$ players. Therefore, $$pc_{\pi_i}(S) = \sum_{r \in s_i} l_k(C_r(S)) = \sum_{r \in s^A_i} l_k(C_r(S)) + \sum_{r \in s^B_i} l_k(C_r(S)) = \alpha \cdot l_k(\kappa_1 \alpha w) + \beta \cdot l_k(\kappa_2 \beta w).$$
Let $S' = ({\overline s}_{i}, S_{-\pi_i})$ denote the state derived from $S$ when player $\pi_i$ switches its strategy from $s_i$ to ${\overline s}_i$. Since $s_i \cap {\overline s}_i = \emptyset$, each resource $r \in {\overline s}_i$ will have congestion $C_r(S') = C_r(S) + w$, since player $\pi_i$ adds weight $w$ to $r$, while every other resource in $R$ will have the same congestion in both states. Consequently, $$pc_{\pi_i}(S') = \sum_{r \in {\overline s}_i} l_k(C_r(S')) = \sum_{r \in {\overline s}^A_i} l_k(C_r(S')) + \sum_{r \in {\overline s}^B_i} l_k(C_r(S')) = \gamma \cdot l_k(\kappa_1 \alpha w + w) + \delta \cdot l_k(\kappa_2 \beta w + w).$$
In order to prove that $S$ is a Nash equilibrium, it suffices to show that $pc_{\pi_i}(S') - pc_{\pi_i}(S) \geq 0$. We have, $$ pc_{\pi_i}(S') - pc_{\pi_i}(S) =\gamma \cdot l_k(\kappa_1 \alpha w + w) + \delta \cdot l_k(\kappa_2 \beta w + w) - \alpha \cdot l_k(\kappa_1 \alpha w) - \beta \cdot l_k(\kappa_2 \beta w)\nonumber. $$ Therefore, we only need to show that: \begin{equation} \label{eqn:abgd} \alpha \cdot l_k(\kappa_1 \alpha w) - \gamma \cdot l_k(\kappa_1 \alpha w + w) \leq \delta \cdot l_k(\kappa_2 \beta w + w) - \beta \cdot l_k(\kappa_2 \beta w). \end{equation} If $\alpha \cdot l_k(\kappa_1 \alpha w) - \gamma \cdot l_k(\kappa_1 \alpha w + w) \leq 0$, then by taking $\delta = \beta$, since $l_k$ is a non-decreasing function, we get $\delta \cdot l_k(\kappa_2 \beta w + w) - \beta \cdot l_k(\kappa_2 \beta w) \geq 0$; hence, Eq. \ref{eqn:abgd} holds.
If $\alpha \cdot l_k(\kappa_1 \alpha w) - \gamma \cdot l_k(\kappa_1 \alpha w + w) > 0$, then we set $\beta = \beta' \zeta_2 / \zeta_1$ and $\delta = \delta' \zeta_2 / \zeta_1$, for some $\beta', \delta' \geq 0$, and we get: $$ \delta \cdot l_k(\kappa_2 \beta w + w) - \beta \cdot l_k(\kappa_2 \beta w) = \delta' \frac {\zeta_2} {\zeta_1} \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w + w\right) - \beta' \frac {\zeta_2} {\zeta_1} \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w\right). $$ Then, Eq. \ref{eqn:abgd} is equivalent to: $$ \zeta_1 \left(\alpha \cdot l_k(\kappa_1 \alpha w) - \gamma \cdot l_k(\kappa_1 \alpha w + w)\right) \leq \zeta_2 \left(\delta' \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w + w\right) - \beta' \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w\right)\right). $$ By taking $\zeta_2 \geq \kappa_1 \alpha \zeta_1$, and by setting $\delta'$ and $\beta'$ such that $\delta' - \beta' \geq (\alpha - \gamma)/(\kappa_1 \alpha)$, we get: \begin{eqnarray} \zeta_2 \left(\delta' \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w + w\right) - \beta' \cdot l_k\left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w\right)\right) & \geq & \zeta_2 (\delta' - \beta') l_k \left(\kappa_2 \beta' \frac {\zeta_2} {\zeta_1} w\right)\nonumber \\ & \geq & \zeta_1 (\delta' - \beta') l_k(\kappa_2 \beta' \kappa_1 \alpha w)\nonumber \\ & \geq & \zeta_1 (\alpha - \gamma) l_k(\kappa_1 \alpha w + w)\nonumber \\ & \geq & \zeta_1 \left(\alpha \cdot l_k(\kappa_1 \alpha w) - \gamma \cdot l_k(\kappa_1 \alpha w + w)\right),\nonumber \end{eqnarray} as needed.
$\Box$
{\it Proof of lemma~\ref{POALB}}:
From Lemma \ref{lemma:LB-equil}, state $S$ is a Nash equilibrium. Therefore, \begin{eqnarray} PoA(G) & \geq & \frac {SC(S)} {SC({\overline S})}
= \frac {\sum_{r \in R} C_r(S) \, l_k(C_r(S))}
{\sum_{r \in R} C_r({\overline S}) \, l_k(C_r(\overline S))} \nonumber \\ & = & \frac {\sum_{r \in A} j_1 l_k(j_1) + \sum_{r \in B} j_2 l_k(j_2)}
{\sum_{r \in A} t_1 l_k(t_1) + \sum_{r \in B} t_2 l_k(t_2)} \nonumber \\ & = & \frac {\zeta_1 j_1 l_k(j_1) + \zeta_2 j_2 l_k(j_2)}
{\zeta_1 t_1 l_k(t_1) + \zeta_2 t_2 l_k(t_2)} \nonumber \\ & = & \frac {\zeta_1 t_1 l_k(t_1)}
{\zeta_1 t_1 l_k(t_1) + \zeta_2 t_2 l_k(t_2)}
\cdot \frac {j_1 l_k(j_1)} { t_1 l_k(t_1)}
+ \frac {\zeta_2 t_2 l_k(t_2)}
{\zeta_1 t_1 l_k(t_1) + \zeta_2 t_2 l_k(t_2)}
\cdot \frac {j_2 l_k(j_2)} { t_2 l_k(t_2)}\nonumber \\ & = & \lambda_1 \cdot \frac {j_1 l_k(j_1)} { t_1 l_k(t_1)}
+ \lambda_2 \cdot \frac {j_2 l_k(j_2)} { t_2 l_k(t_2)}\nonumber \\ & \geq & \lambda_1 \cdot \frac {j_1 l_k(j_1)} { t_1 l_k(t_1)}.\nonumber \end{eqnarray}
Since $\lambda_1 + \lambda_2 = 1$, from Eq. \ref{eqn:lambdas} we can get that $$ \lambda_1 F = (1 - \lambda_1) {\widehat g}_k, $$ where $$F = \frac {j_1 l_k(j_1)}
{t_1 l_k(t_1)}
- \frac {l_k(j_1 + w)}
{l_k(t_1)}, $$ and ${\widehat g}_k$ is obtained from Eq. \ref{ghatk} such that it maximizes $$\frac {l_k(j_2 + w)}
{l_k(t_2)}
- \frac {j_2 l_k(j_2)}
{t_2 l_k(t_2)}.$$ Consequently, $$ \lambda_1 = \frac {{\widehat g}_k} {F + {\widehat g}_k}. $$ Therefore, \begin{equation} PoA(G) \geq \lambda_1 \cdot \frac {j_1 l_k(j_1)} { t_1 l_k(t_1)} = \frac {{\widehat g}_k j_1 l_k(j_1)}
{{\widehat g}_k t_1 l_k(t_1) + j_1 l_k(j_1) - t_1 l_k(j_1 + w)}. \label{lb-lb1} \end{equation} A lower bound on the price of anarchy follows by considering all ordered triples of the form $(j_1, t_1, w)$ that maximize the right-hand side of Eq. \ref{lb-lb1}. A second lower bound for the price of anarchy is $g^*_k$, defined in Eq. \ref{g*k}, for the case where $\lambda_1 = 1$.
$\Box$
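The final substitution in the proof above rests on the algebraic identity $\frac{\widehat{g}_k}{F + \widehat{g}_k}\cdot\frac{j_1 l_k(j_1)}{t_1 l_k(t_1)} = \frac{\widehat{g}_k j_1 l_k(j_1)}{\widehat{g}_k t_1 l_k(t_1) + j_1 l_k(j_1) - t_1 l_k(j_1+w)}$, which holds for any latency function since $(F + \widehat{g}_k)\, t_1 l_k(t_1)$ expands to the denominator on the right. A quick numerical sketch, with an arbitrary latency and arbitrary parameter values:

```python
l = lambda x: x ** 3  # arbitrary latency; the identity is purely algebraic
g_hat, j1, t1, w = 2.0, 7, 3, 2

F = j1 * l(j1) / (t1 * l(t1)) - l(j1 + w) / l(t1)
lhs = g_hat / (F + g_hat) * (j1 * l(j1) / (t1 * l(t1)))
rhs = g_hat * j1 * l(j1) / (g_hat * t1 * l(t1) + j1 * l(j1) - t1 * l(j1 + w))
assert abs(lhs - rhs) < 1e-9
```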
{\it Proof of lemma~\ref{lemma:mainconstraint}}:
Substituting above for $\lambda_{j,k}^{i,t_a}$, $g_{j,k}^{i,t_a}$ and $f_{j,k}^{i,t_a}$ from Eqs.~\ref{FML:c2} and~\ref{fjk:def} and simplifying, we need to prove:
\begin{flalign} \frac{ \displaystyle \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{a=1}^{f(t)}
|R_{j,k}^{t_a}| \sum_{i \in \bar{L}_{j,k}^{t_a}} i \cdot \alpha_{j,k}^{i,t_a} \Big(j l_k(j)/t - l_k(j+i) \Big) +
\sum_{R_{j,k}^0 \in \mathcal{E}} |R_{j,k}^{0}| \cdot j l_k(j) }{SC(S^*)} \quad \leqslant \quad 0 \label{c2:1} \end{flalign}
Since $SC(S^*) > 0$, consider the numerator. We use the following simple observation
\begin{equation} \sum_{r \in R_{j,k}^t} \sum_{i: \pi \in \Pi_r \land \pi \in \sigma_i} i \cdot l_k(j) =
|R_{j,k}^{t}| \cdot j \cdot l_k(j) \quad \forall R_{j,k}^t \in \mathcal{E} \label{Rjk:explain2} \end{equation}
\noindent since each player $\pi \in \sigma_i$ contributes $i$ towards the equilibrium congestion value $j$ of every resource $r \in R_{j,k}^t$ that is contained in its equilibrium strategy $S_{\pi}$ and each such resource contributes $l_k(j)$ towards its player cost.
\begin{flalign} & \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{a=1}^{f(t)}
|R_{j,k}^{t_a}| \cdot j \cdot l_k(j) \sum_{i \in \bar{L}_{j,k}^{t_a}} \frac{i \alpha_{j,k}^{i,t_a}}{t} +
\sum_{R_{j,k}^0 \in \mathcal{E}} |R_{j,k}^{0}| \cdot j \cdot l_k(j) \notag \\ &\quad \quad \quad - \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{a=1}^{f(t)} \sum_{i \in \bar{L}_{j,k}^{t_a}}
|R_{j,k}^{t_a}| i \alpha_{j,k}^{i,t_a} l_k(j+i) \notag \\ & \quad \equiv \sum_{R_{j,k}^t \in \mathcal{E}}
|R_{j,k}^{t}| \cdot j \cdot l_k(j) - \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{a=1}^{f(t)} \sum_{i \in \bar{L}_{j,k}^{t_a}}
|R_{j,k}^{t_a}| \cdot i \alpha_{j,k}^{i,t_a} l_k(j + i) \notag \\ & \quad = \sum_{R_{j,k}^t \in \mathcal{E}} \sum_{r \in R_{j,k}^t} \left( \sum_{i: \pi \in \Pi_r \land \pi \in \sigma_i} i \cdot l_k(j) - \sum_{i: \pi \in \Pi^*_r \land \pi \in \sigma_i} i \cdot l_k(j + i) \right) \label{c2:4.2} \\ &= \sum_{r \in R} \sum_{i: \pi \in \Pi_r \land \pi \in \sigma_i} i \cdot l_r ( C_r ) - \sum_{r \in R} \sum_{i: \pi \in \Pi^*_r \land \pi \in \sigma_i} i \cdot l_r( C_r + i ) \label{c2:5} \\ &= \sum_{0<i \leqslant w} \sum_{\pi \in \sigma_i} i \cdot pc_{\pi}(S_{\pi}, S_{-\pi}) - \sum_{0<i \leqslant w} \sum_{\pi \in \sigma_i} i \cdot pc_{\pi}(S^*_{\pi}, S_{-\pi}) \label{c2:7} \\ &\leqslant 0 \label{c2:8} \end{flalign}
\noindent where the first term of Eq.~\ref{c2:4.2} uses Eq.~\ref{Rjk:explain2}. The second term follows from the fact that for each resource $r \in R_{j,k}^{t_a}$ there are exactly $\alpha_{j,k}^{i,t_a}$ players of weight $i$ that contain $r$ in their optimal strategies and $l_k(j + i)$ represents the cost to each such player of switching to resource $r$ while all other players remain in state $S$. The left term of Eq.~\ref{c2:5} represents $i$ times the cost to a player of weight $i$ of any resource in $R$ in state $S$ while the right term represents $i$ times the switching cost to any resource in $R_{j,k}^t$ summed up over all resources. Eq.~\ref{c2:7} represents the summation of player costs in state $S$ and the switching cost to state $S^*$ over all resources and then over all players. Finally Eq.~\ref{c2:8} follows since $S$ is a Nash equilibrium.
$\Box$
From the definition of $g^*_k$ in Eq.~\ref{g*k} and using $j l_k(j) < tl_k(j+i)$ for all $(j,t,i) < (x,t,i)_k$ along with constraint~\ref{PoA:3}, we have
\begin{lemma} \[ \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) < (x,t,i)_k } \lambda_{j,k}^{i,t_a} \cdot \frac{j l_k(j)}{t l_k(t)} \quad \leqslant \quad g^*_k \] \label{ginTbound} \end{lemma}
Also from the LHS of lemma~\ref{lemma:mainconstraint} and the definition of $\widehat{g}$ in Eq.~\ref{ghatk}, we get
\begin{lemma} \[ \sum_{R_{j,k}^0 \in \mathcal{E}} \lambda_{j,k}^{0,0} \cdot j l_k(j) \quad \leqslant \quad \widehat{g}_k \] \label{rjk0bound} \end{lemma}
{\it Proof of lemma~\ref{POAUB}}:
Applying lemmas~\ref{ginTbound} and~\ref{rjk0bound} to the objective function $H(S)$ in Eq.~\ref{PoA:1}, we have
\begin{equation*} H(S) \quad \leqslant \quad \widehat{g}_k + g^*_k + \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} \cdot \frac{j l_k(j)}{t l_k(t)} \end{equation*}
For every ordered triple $(x,t,i)_k \in O_k$ and any $(j,t,i) \geqslant (x,t,i)_k$ define $\widehat{\lambda}_{j,k}^{i,t_a} = \widehat{g}_k / (\widehat{g}_k + f_{j,k}^{i,t_a})$ and $\widehat{W}_{j,k}^{i,t_a} = \widehat{\lambda}_{j,k}^{i,t_a} \frac{j l_k(j)}{t l_k(t)}$. Letting $W^* = \max_{(j,t,i) \geqslant (x,t,i)_k} \widehat{W}_{j,k}^{i,t_a}$ we can rewrite the expression above as,
\begin{equation*} H(S) \quad \leqslant \quad \widehat{g}_k + g^*_k + W^* \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} \left( 1 + \frac{f_{j,k}^{i,t_a}}{\widehat{g}_k} \right) \end{equation*}
Using the RHS of lemma~\ref{lemma:mainconstraint} to bound the last term above, we get
\begin{eqnarray} H(S) &\leqslant & \widehat{g}_k + g^*_k + W^* \left( \sum_{\substack{R_{j,k}^{t_a} \in \mathcal{E} \\ i \in \bar{L}_{j,k}^{t_a}}} \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} + \frac{1}{\widehat{g}_k} \sum_{\substack{R_{j,k}^{t_a} \in \mathcal{E} \\ i \in \bar{L}_{j,k}^{t_a}}} \sum_{ (j,t,i) < (x,t,i)_k } \lambda_{j,k}^{i,t_a} g_{j,k}^{i,t_a} \right) \notag \\ &\leqslant &\widehat{g}_k + g^*_k + W^* \sum_{R_{j,k}^{t_a} \in \mathcal{E}} \sum_{i \in \bar{L}_{j,k}^{t_a}} \left( \sum_{ (j,t,i) \geqslant (x,t,i)_k } \lambda_{j,k}^{i,t_a} + \sum_{ (j,t,i) < (x,t,i)_k } \lambda_{j,k}^{i,t_a} \right) \notag \\ &\leqslant &\widehat{g}_k + g^*_k + W^* \notag \\ &= &\widehat{g}_k + g^*_k + \max_{\substack{(j,t,i) \geqslant (x,t,i)_k \\ \forall (x,t,i)_k \in O_k}} \frac{\widehat{g}_k j l_k(j)} {\widehat{g}_k t l_k(t) + jl_k(j) - t l_k(j+i)} \label{w*} \end{eqnarray}
Note that if ordered triples exist for the cost function $l_k()$, then since $j^* l_k(j^*) = t^* l_k(j^* + i^*)$, $\frac{j^*l_k(j^*)}{t^*l_k(t^*)} = g^*_k$, and $(j,t,i) \geqslant (j^*,t^*,i^*)_k$, we must have $\max_{(j,t,i)\geqslant (x,t,i)_k, \forall (x,t,i)_k \in O_k} W^*\geqslant g^*_k \geqslant \widehat{g}_k$, and so the last term dominates. However, for rapidly growing cost functions, ordered triples may not exist, and then only the first two terms above are relevant. This leads to the expression in the lemma, as desired.
$\Box$
{\it Proof of theorem~\ref{l3bound}}:
Let $y$ denote the $PoA$ expression in Eq.~\ref{PoAbound:main}. Taking the partial derivative $\frac{\partial y}{\partial j}$ and equating it to $0$ gives us
\begin{equation}
\left. \frac{\partial y}{\partial j} \right|_0 \implies \widehat{g}_k l_k(t) = l_k(j+i) - j l_k(j) \frac{l'_k(j+i)}{(jl_k(j))'} \label{dydj2} \end{equation}
Similarly evaluating the partial derivative with respect to $t$ gives \begin{equation} \frac{\partial y}{\partial t} = \alpha \left( l_k(j+i) - \widehat{g}_k (tl_k(t))' \right) \label{dydt2} \end{equation} where $\alpha >0$. Evaluating the two together, it can be shown that $\frac{\partial y}{\partial t}$ is decreasing in $t$ and that $t=i$ at the maximum value of $y$. Given $0 <i \leq w$, the bound on the $PoA$ can then be obtained by solving the following minimization using standard KKT conditions:
\begin{flalign} &\min \left( \widehat{g}_k / \left( 1 - \frac{t l'_k(j+i)}{(j l_k(j))'} \right)
\right) \label{POAminKKT} \\ & \mbox{s.t.} \notag \\ & \widehat{g}_k l_k(t) = l_k(j+i) - j l_k(j) \frac{l'_k(j+i)}{(jl_k(j))'} \notag \\ &j l_k(j) \geq t l_k(j+i) \notag \\ &0 <i \leq w \notag \end{flalign}
$\Box$
\end{document} | arXiv |
Wogonoside inhibits invasion and migration through suppressing TRAF2/4 expression in breast cancer
Yuyuan Yao1,
Kai Zhao1,
Zhou Yu1,
Haochuan Ren1,
Li Zhao1,
Zhiyu Li2,
Qinglong Guo1 &
Na Lu ORCID: orcid.org/0000-0002-4981-52811
This article has been updated
The Correction to this article has been published in Journal of Experimental & Clinical Cancer Research 2019 38:444
Twist1 is involved in tumor initiation and progression, which especially contributes to tumor invasion and metastasis. Wogonoside is the main in-vivo metabolite of wogonin, and it is also a natural product with potential treatment effects against cancer.
In this study, we investigated the in-vitro anti-invasion and in-vivo anti-metastasis effects of wogonoside on breast cancer cells and uncovered its underlying mechanism.
The results showed that wogonoside could suppress the growth and metastasis of breast tumor in the orthotopic model of MDA-MB-231 cells. We found that wogonoside could reduce the overexpression of TNF-α, TRAF2 and TRAF4 in later stage of tumor, and improved tumor microenvironment. Therefore, TNF-α was utilized to induce metastases of breast cancer cell in vitro. Wogonoside could inhibit invasion and migration in TNF-α-induced MDA-MB-231, MDA-MB-435, and BT-474 cells. Mechanically, wogonoside inactivated NF-κB signaling through decreasing the protein expression of TRAF2/4, which further inhibited Twist1 expression. Consequently, wogonoside could down-regulate MMP-9, MMP-2, vimentin and CD44v6 expression in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells. Then, these findings were proved in TNF-α + TGF-β1-induced MCF7 cells.
Wogonoside might be a potential therapeutic agent for the treatment of tumor metastasis in breast cancer.
Breast cancer is a leading cause of cancer death in women worldwide [1,2,3]. One of the major reasons for the high morbidity and mortality of breast cancer is the invasive behavior of breast cancer cells, which leads to cancer metastasis [4]. Several cytokines in the microenvironment can assist breast cancer cells to invade and metastasize. Among these cytokines, tumor necrosis factor α (TNF-α) is frequently overexpressed in advanced breast cancer [5].
Twist1 is a bHLH transcription factor known to be an essential player in the aggressive phenotype of epithelial-mesenchymal transition (EMT) and in cell migration in the developing neural crest [6, 7]. Twist1 is overexpressed in many primary tumors including colon, breast, prostate, and gastric carcinomas [7,8,9]. In agreement with its role in embryonic cell migration, Twist1 overexpression is associated with increased tumor cell migration, invasion, and metastasis [10,11,12]. Twist1 is also correlated with changes in classical EMT biomarkers such as E-cadherin and vimentin [11, 13]. In addition, downregulation of Twist1 expression suppressed p65-mediated malignancy, which demonstrates that Twist1 is a central modulator downstream of NF-κB [14, 15]. The Twist1 promoter contains a functional p65-binding motif, and several lines of evidence show that TNF-α-mediated Twist1 expression in breast cancer cells contributes to their aggressive phenotype [15].
Wogonoside is a bioactive flavonoid extracted from the root of Scutellaria baicalensis Georgi [16]. It was reported that wogonoside had preclinical anticancer efficacy in various cancer models, including breast cancer, bladder cancer and hematopoietic malignancies [17]. However, the mechanism by which wogonoside inhibits metastasis remained unclear. In the present study, we evaluated the inhibitory effect of wogonoside in an orthotopic model and in TNF-α-induced metastasis. The results showed that wogonoside inhibited growth and metastasis in vivo, and suppressed TNF-α-induced invasion and migration in several breast cancer cell lines. The anti-metastatic mechanism of wogonoside was mainly attributable to blockade of the TRAF2/4-Twist1 axis.
Wogonoside (>98% purity, Langze Pharmaceutical Co., Ltd., Nanjing, China) was dissolved in dimethylsulfoxide (DMSO) as a stock solution, stored at −20 °C, and diluted with medium before each experiment [17, 18]. MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide] was purchased from Sigma Chemical Co. (St. Louis, MO, USA). A nuclear/cytosol fractionation kit (KeyGEN, Nanjing, China) was used according to the manufacturer's directions. Human recombinant TNF-α and TGF-β1 were from PeproTech Inc. (PeproTech, IL, USA). Primary antibodies against CD44v6 were from Abcam plc. (Abcam, Cambridge, UK), antibodies against E-cadherin, vimentin, p-IκBα and β-Tubulin were from Cell Signaling Technology (CST, MA, USA), antibodies against MMP-9, p-IKKα, IKKα, GAPDH, Lamin A, TNF-α, TRAF2 and TRAF4 were from Bioworld (Bioworld, MN, USA), antibodies against MMP-2, IκBα, NF-κB p65 were from Santa Cruz Biotechnology (Santa Cruz, CA, USA), and antibodies against Twist1 were from Signalway Antibody (SAB, MD, USA). IRDye®800-conjugated secondary antibodies were from Rockland Inc. (PA, USA).
Four-week-old female BALB/c nude mice (Slaccas Shanghai Laboratory Animal Co., Ltd., Shanghai, China) were used for the orthotopic model of MDA-MB-231 cells. The animals were maintained in a pathogen-free environment (23 ± 2 °C, 55 ± 5% humidity) on a 12 h light/12 h dark cycle with food and water supplied ad libitum throughout the experimental period. Animal study and euthanasia was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of the China Pharmaceutical University.
Human breast cancer MDA-MB-231, MCF7, MDA-MB-435, and BT-474 cells were originally obtained from the Cell Bank of the Shanghai Institute of Cell Biology. The MDA-MB-231, MDA-MB-435 and BT-474 cells were cultured in DMEM medium (Gibco, Grand Island, NY) and MCF7 cells were cultured in RPMI 1640 medium (Gibco, Grand Island, NY), both containing 10% fetal bovine serum (Gibco), 100 U/ml penicillin, and 100 μg/ml streptomycin, in a stable environment with 5% CO2 at 37 °C.
Cell viability assay
Cells were plated at a density of 5 × 10³ cells/well in 96-well plates. After 24 h culture, the cells were exposed to different concentrations of wogonoside for 48 h in a 5% CO2 incubator at 37 °C. Then, MTT was added to the medium and the cells were incubated at 37 °C for 4 h. The supernatant was removed and dimethyl sulfoxide was used to dissolve the precipitate. The absorbance was measured spectrophotometrically at 570 nm.
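Viability in an MTT readout is conventionally reported as the ratio of treated to control absorbance. The paper reports only the 570 nm absorbance, so the following is a minimal illustrative sketch of that standard convention, not a formula stated by the authors:

```python
def percent_viability(a_treated, a_control, a_blank=0.0):
    """Percent viability from MTT absorbance readings (570 nm).

    Standard convention: (treated - blank) / (control - blank) * 100.
    Illustrative helper; absorbance values below are hypothetical.
    """
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical absorbance values for illustration
viability = percent_viability(0.45, 0.90)  # 50.0
```

A cytotoxicity threshold (e.g. viability below some cutoff) would then be applied per concentration, as in the MTT screens described above.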
Cell attachment assay
The 96-well plates were coated with matrigel (Corning, NY, USA) overnight at 4 °C and blocked with 1% BSA for 4 h at 37 °C. After cells were treated with different concentrations of wogonoside for 48 h, cells were collected in serum-free medium at 5 × 10⁵ cells/ml. Aliquots (100 μl) of the cell suspensions were seeded into the wells and incubated for 1 h at 37 °C. After that, unattached cells were removed by washing thrice with PBS and the attached cells were determined by MTT assay.
Wound healing assay
Cells were seeded in a six-well plate and allowed to attach overnight, with growth to 80% confluence. The cell monolayers were then wounded with white pipette tips and washed twice with phosphate-buffered saline. The cells were then incubated with wogonoside in medium supplemented with 1% serum for 48 h. The number of migrated cells was determined under an inverted microscope.
Cell invasion assay [19]
The transwell chambers (12 mm in diameter, 8 μm pore-size, Millipore, Billerica, MA) were loaded with 0.1 ml of matrigel (Corning, NY, USA) in a 24-well plate at 37 °C for 1 h. After cells were pretreated with wogonoside for 48 h, cells were collected in serum-free medium at a final concentration of 2 × 10⁵ cells/ml. 400 μl cell suspensions were then placed in the upper transwell chamber, and 600 μl medium containing 10% fetal bovine serum was added to the lower compartment. Following incubation for 24 h, cells on the upper surface were removed, and invasive cells on the lower surface were fixed with 100% methanol and stained with hematoxylin and eosin. Invasive cells were then quantified by manual counting; three randomly chosen fields were analyzed for each group.
Western blot analysis
Cells were harvested after pretreatment with wogonoside for 48 h. Western blot was performed as previously described [20]. The membrane was blocked with 5% BSA in PBS at 37 °C for 1 h and incubated overnight at 4 °C with the indicated antibodies, and then with IRDye®800-conjugated secondary antibody for 1 h at 37 °C. The samples were visualized with the Odyssey Infrared Imaging System (LI-COR Inc., Lincoln, NE, USA).
Real-time PCR analysis
Cells were pre-treated with wogonoside for 48 h. The mRNA levels of E-cadherin, vimentin, MMP-9 and Twist1 were then determined with a method described previously [20]. The primer sets used for the PCR amplifications were as follows: Twist1 (forward, 5′- GGAGTCCGCAGTCTTACGAG-3′, reverse, 5′- TCTGGAGGACCTGGTAGAGG-3′), E-cadherin (forward, 5′-CCACCAAAGTCACGCTGAAT-3′, reverse, 5′-GGAGTTGGGAAATGTGAGC-3′), MMP-9 (forward: 5′-GCAGAGGAATACCTGTACCGC-3′, reverse, 5′-AGGTTTGGAATCTGCCCAGGT-3′), vimentin (forward, 5′-ATGAAGGTGCTGCAAAAC-3′, reverse, 5′-GTGACTGCACCTGTCTCCGGTA-3′), human GAPDH (forward, 5′-TGGGTGTGAACCATGAGAAG-3′, reverse, 5′-GCTAAGCAGTTGGTGGTGC-3′).
Transient transfection
Cells were seeded in six-well plates at 70% confluency. The transient transfection assay was performed by using Lipofectamine®2000 Transfection Reagent (Thermo, MA, USA) according to the manufacturer's protocol. Briefly, 8 μl of Transfection Reagent and the Twist (1 μg, Addgene), TRAF2 (1 μg, Addgene) or TRAF4 (1 μg, Addgene) plasmids were separately diluted in 250 μl medium. The two dilutions were then mixed gently and incubated for 20 min at room temperature. Finally, the complexes were added to each well containing cells and medium. The plate was rocked back and forth and incubated at 37 °C in a CO2 incubator for 12 h.
Cytokines detected by ELISA
The concentration of cytokines in the supernatant was detected with an ELISA kit (Boster Biotechnology, Wuhan, China). Briefly, supernatant diluted with sample diluent buffer was added to the microwells (100 μl/well) and the plates were incubated at 37 °C for 90 min. The supernatant was discarded, antibody diluted with antibody diluent buffer was added (100 μl/well), and the plates were incubated at 37 °C for 1 h. The plates were then washed with PBS 3 times, and Avidin-Biotin-Peroxidase Complex (ABC) diluted with ABC diluent buffer was added to the microwells. After incubation at 37 °C for 30 min, the ABC solution was discarded and the plates were washed with PBS 5 times. TMB color-developing agent was added (90 μl/well) at 37 °C for 20 min, followed by TMB stop solution (100 μl/well). The absorbance was measured at 450 nm.
Orthotopic model of MDA-MB-231 cells
Orthotopic injections were performed following the previous study with minor modifications [21]. MDA-MB-231 cells (1 × 10⁵/25 μl) were mixed with 25 μl matrigel on ice, and the cell suspension was then quickly injected into the fourth Mammary Fat Pad (MFP). Animals were observed for 30 min until full recovery. Seventy-seven days later, the mice were randomly divided into three groups (8 mice/group): the negative group (intraperitoneal injection of 0.9% normal saline); the wogonoside-treated group (gavage of 80 mg/kg wogonoside once every other day); and the gemcitabine positive group (intraperitoneal injection of 80 mg/kg gemcitabine once every 2 days). Ninety-eight days later, the nude mice were killed and the tumor xenografts were segregated and measured. Additionally, in the blank control group (16 mice, without any drug administration), 8 mice were killed at day 63 and the remaining 8 at day 98, and the tumor xenografts were segregated and measured. Visceral tissues resected from control and test mice were fixed in formalin and examined with H&E staining. Tumor volume (TV) was calculated using the following formula:
$$ TV\ \left({\mathrm{mm}}^3\right)=\frac{D}{2}\times {d}^2 $$
D and d are the longest and the shortest diameters, respectively. At the same time the animals were weighed twice per week and monitored for mortality throughout the experimental period.
Relative tumor volume (RTV) was calculated according to the equation:
$$ RTV=\frac{V_t}{V_0} $$
where $V_0$ is the tumor volume at day 0 and $V_t$ is the tumor volume at day $t$. The evaluation index for inhibition was the relative tumor growth ratio:
$$ \frac{T}{C}=\frac{T_{RTV}}{C_{RTV}}\times 100\% $$
where $T_{RTV}$ and $C_{RTV}$ represent the RTV of the treated and control groups, respectively.
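The three formulas above (TV, RTV, and T/C) can be collected into a short script. The diameters used in the example are hypothetical, chosen only to illustrate the arithmetic:

```python
def tumor_volume(D, d):
    """Tumor volume TV (mm^3) = (D / 2) * d^2, where D and d are the
    longest and shortest tumor diameters in mm."""
    return (D / 2.0) * d ** 2

def relative_tumor_volume(v_t, v_0):
    """Relative tumor volume RTV = V_t / V_0."""
    return v_t / v_0

def t_over_c(t_rtv, c_rtv):
    """Relative tumor growth ratio T/C = T_RTV / C_RTV * 100 (%)."""
    return t_rtv / c_rtv * 100.0

# Hypothetical example: a tumor growing from 10 x 6 mm to 14 x 8 mm
v0 = tumor_volume(10, 6)                 # 180.0 mm^3
vt = tumor_volume(14, 8)                 # 448.0 mm^3
rtv = relative_tumor_volume(vt, v0)
```

A T/C below 100% then indicates growth inhibition in the treated group relative to control.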
This study was approved in SPF Animal Laboratory of China Pharmaceutical University. In all experiments, the ethics guidelines for investigations in conscious animals were followed, with approval from the local Ethics Committee for Animal Research.
Immunohistochemistry
The expression of Twist1, E-cadherin, vimentin, MMP-9, TNF-α, TRAF2, and TRAF4 in the nude mouse model was assessed according to the method described previously [22], using a goat-anti-rabbit antibody and an Ultra-Sensitive TMSAP kit. All reagents used in the experiments were supplied by Maixin-Bio Co., Fuzhou, China.
Statistical analysis
The data were obtained from at least three independent experiments and all data in different experimental groups were expressed as the mean ± SD. We compared the TNF-α-treated group or TNF-α + TGF-β1-treated group to the control group in vitro, and the saline-treated group to the control group in vivo. Differences between groups were tested with one-way analysis of variance (ANOVA) and Dunnett's post hoc test. The changes in tumor weight and tumor volume over time were tested using a random-effects mixed model. Metastasis incidence rates were evaluated using percentages of animals with metastases, and tested using Fisher's exact test. The significance of differences is indicated as *p < 0.05 and **p < 0.01.
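As a sketch of the group comparison step, the one-way ANOVA F statistic can be computed in a few lines of pure Python. The data below are synthetic, chosen only to make the arithmetic checkable by hand; the study applied ANOVA with Dunnett's post hoc test to its own measurements:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square, for a list of sample lists."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Synthetic example: three groups with shifted means
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```

The F statistic is then compared against the F distribution with (k − 1, N − k) degrees of freedom; Dunnett's test additionally corrects the treatment-vs-control comparisons for multiplicity.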
Wogonoside suppresses breast cancer growth and metastasis of MDA-MB-231 cells in vivo
The anti-metastatic effect of wogonoside was assessed with the orthotopic model of MDA-MB-231 cells in vivo (Fig. 1a). As shown in Fig. 1b, during the 21-day treatment, tumor volume was reduced by wogonoside or gemcitabine (80 mg/kg), which showed inhibitory effects on tumor growth of MDA-MB-231 cells. The tumor weight was also decreased compared with the control group (Fig. 1c). The inhibitory rate of wogonoside was about 46%, while that of gemcitabine was approximately 71%. However, as shown in Table 1, eight (100%), three (37.5%) and four (50%) mice were found to have metastases after pathological examination in the control, gemcitabine-treated and wogonoside-treated groups, respectively. Thus, wogonoside reduced the probability of metastasis by 50%, while gemcitabine reduced it by 62.5%. In the orthotopic model of MDA-MB-231 cells, cancer cells could metastasize to several organs, including brain, lung, liver and bone. Afterwards, we analyzed the incidence of metastasis in each organ according to the pathological sections. The results showed that wogonoside could suppress the formation of metastases in brain, lung, liver and bone (Fig. 1d). The instances of organ metastasis were then tallied to indicate the risk of metastases in each organ. For example, breast-to-lung metastases were found in 8 mice (100%) in the control group, and lung metastases were found in 4 animals (50%) in the wogonoside-treated group [23] (Table 2). In the primary tumor, we tested the expression of metastasis-associated proteins with immunohistochemistry and western blot assays. The results demonstrated that wogonoside could increase the expression of E-cadherin and decrease the expression of MMP-9, vimentin and Twist1 (Fig. 1e and f).
Wogonoside suppresses breast cancer growth and metastasis of MDA-MB-231 cells in vivo. a Diagram shows the experimental course of MDA-MB-231 orthotopic model. b Effect of wogonoside (80 mg/kg) and gemcitabine (80 mg/kg) on tumor growth was investigated in the model of MDA-MB-231 orthotopic model. c Effect of wogonoside (80 mg/kg) and gemcitabine (80 mg/kg) on tumor weight was investigated in the model of MDA-MB-231 orthotopic model. d H&E stained brains, lungs, livers, bones and spleens of mice from wogonoside-treated and control group to confirm the presence of micrometastases (image magnification: 200×). e Immunohistochemical detection of E-cadherin, MMP-9, vimentin and Twist1 protein levels in MDA-MB-231 orthotopic site (image magnification: 400×). f The expression of E-cadherin, MMP-9, vimentin and Twist1 proteins were analyzed in MDA-MB-231 orthotopic site by western blot using specific antibodies. A GAPDH antibody was used to check equivalent protein loading. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
Table 1 Probability of metastasis
Table 2 Instance of organ metastasis
TNF-α and TRAF2/4 were overexpressed in late stage of metastatic breast cancer
Mouse tumors resected at different time points are shown in Fig. 2a. By ELISA and immunohistochemical analysis, we found that the content of TNF-α in 98-day primary tumors was increased compared to 63-day primary tumors (Fig. 2b and c). Meanwhile, the expression of TNF-α, TRAF2 and TRAF4 was also enhanced in 98-day primary tumors (Fig. 2d). Coincidentally, as shown in Fig. 2e and f, the immunohistochemistry and western blot assays verified that wogonoside could reduce the protein expression of TNF-α, TRAF2, and TRAF4. Therefore, we asked whether wogonoside inhibited metastasis through suppressing TRAF2/4 expression.
TNF-α and TRAF2/4 were overexpressed in late stage of metastatic breast cancer. a Diagram shows the experimental course of MDA-MB-231 orthotopic model (n = 8). b The effect of wogonoside on TNF-α content of tumor tissue. c Immunohistochemical detection of TNF-α protein levels in 63-day and 98-day MDA-MB-231 orthotopic site (image magnification: 400×). d The expression of TRAF4, TRAF2 and TNF-α proteins were analyzed in 63-day and 98-day MDA-MB-231 orthotopic site by western blot using specific antibodies. A GAPDH antibody was used to check equivalent protein loading. e Immunohistochemical detection of TRAF4, TRAF2 and TNF-α protein levels in MDA-MB-231 orthotopic site (image magnification: 400×). f The expression of TRAF4, TRAF2 and TNF-α proteins were analyzed in MDA-MB-231 orthotopic site by western blot using specific antibodies. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
Wogonoside inhibits TNF-α-induced migration, adhesion and invasion in MDA-MB-231, MDA-MB-435, and BT-474 cells
As shown in Fig. 3a, the result of MTT assay revealed that the treatment of TNF-α (20 ng/ml) and wogonoside (0, 50, 100 and 150 μM) caused no significant cytotoxicity in MDA-MB-231, MDA-MB-435, and BT-474 cells. These concentrations were then applied to all subsequent experiments. Cancer cell adhesion to basement membranes is important for tumor invasion since it is a key step in proteinase-dependent cell locomotion. The results of cell attachment assay showed that the adhesive capabilities of TNF-α-induced MDA-MB-231, MDA-MB-435, and BT-474 cells were decreased after treatment of wogonoside (Fig. 3b).
Wogonoside inhibits TNF-α-induced migration, adhesion and invasion in MDA-MB-231, MDA-MB-435, and BT-474 cells. MDA-MB-231, MDA-MB-435, and BT-474 cells were exposed to different concentrations of TNF-α and wogonoside for 48 h. a Effect of wogonoside on cell viability by MTT assay. b 100 μl cell suspension (2 × 10⁵ cells/ml) was added to 96-well plates pre-coated with matrigel. After incubating for 60 min, adherent cells were determined by MTT assay. c-d A monolayer of cells was scraped with a pipette tip and then treated with TNF-α and wogonoside. The migrating cells were assessed with a microscope equipped with a camera (image magnification: 100×). e-f The invasive ability was evaluated by a matrigel-coated transwell invasion assay (image magnification: 200×). Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
We next examined the effect of wogonoside on the TNF-α-induced migration of these three cells. As shown in Fig. 3c and d, migrated cells were quantified by scratch width. The inhibitory rate of wogonoside (50, 100 and 150 μM) was about 1%, 43% and 63% in MDA-MB-231 cells, 12%, 49% and 69% in MDA-MB-435 cells, 1%, 20% and 34% in BT-474 cells, respectively.
Then, we investigated the effects of wogonoside on the TNF-α-induced invasion of MDA-MB-231, MDA-MB-435, and BT-474 cells in vitro. We found that cells in the control group were able to invade freely through the matrigel, whereas this ability was inhibited by wogonoside. As shown in Fig. 3e and f, wogonoside could inhibit the invasion of MDA-MB-231, MDA-MB-435, and BT-474 cells in a concentration-dependent manner, and the inhibition rate at 150 μM was about 57% in MDA-MB-231 cells, 60% in MDA-MB-435 cells, and 24% in BT-474 cells.
Wogonoside inhibits the expression of metastasis-associated proteins through suppressing Twist1 expression in breast cancer cells
In the process of cancer metastasis, MMP-9, CD44v6 and vimentin are responsible for cell migration, invasion and cell-matrix adhesion. Therefore, the effects of wogonoside on the expression of MMP-9, MMP-2, CD44v6, vimentin and Twist1 in MDA-MB-231 and MDA-MB-435 cells were detected by western blot analysis. As shown in Fig. 4a, we observed that wogonoside reduced the expression of MMP-9, MMP-2, CD44v6 and vimentin in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells. TNF-α could induce Twist1 expression and overexpression of Twist1 could promote further metastasis, which increased the expression of MMP-9, MMP-2, CD44v6 and vimentin in cancer cells. Therefore, we detected Twist1 expression. As shown in Fig. 4b and c, wogonoside decreased the protein and mRNA expression of Twist1, which suggested that wogonoside could inhibit Twist1 expression at the transcriptional level. The inhibition rate of Twist1 mRNA level was about 9%, 37% and 65% in TNF-α-induced MDA-MB-231 cells and 7%, 21% and 45% in TNF-α-induced MDA-MB-435 cells.
Wogonoside inhibits the expression of metastasis-associated proteins through suppressing Twist1 expression in breast cancer cells. MDA-MB-231 and MDA-MB-435 cells were exposed to different concentrations of TNF-α and wogonoside for 48 h. a The expression of MMP-9, MMP-2, CD44v6, and vimentin proteins in the cells were analyzed by western blot using specific antibodies. b The expression of Twist1 protein in the cells was analyzed by western blot using specific antibodies. c Twist1 mRNAs were measured with real-time PCR. GAPDH was used as the internal control. The relative levels were calculated as the ratio of the relative biomarker mRNA to GAPDH mRNA. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
Wogonoside inhibits TNF-α-induced NF-κB signaling through suppressing the expression of TRAF2/4
Previous study has demonstrated that NF-κB pathway is involved with the regulation of Twist1 expression at the level of transcription. Therefore, we examined the key kinases in NF-κB signaling and the nuclear translocation of p65. The results showed that wogonoside inhibited the phosphorylation of IκBα and IKKα in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells (Fig. 5a). And the nuclear translocation of NF-κB p65 was also inhibited by wogonoside in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells (Fig. 5b).
Wogonoside inhibits TNF-α-induced NF-κB signaling through suppressing the expression of TRAF2/4. MDA-MB-231 and MDA-MB-435 cells were exposed to different concentrations of TNF-α and wogonoside for 48 h. a The expression of p-IKKα, IKKα, p-IκBα and IκBα proteins in the cells were analyzed by western blot using specific antibodies. b The expression and localization of NF-κB p65 protein in the cells was analyzed by western blot using specific antibodies. c The expression of TRAF2 protein in the cells was analyzed by western blot using specific antibodies. d The expression and localization of TRAF4 protein in the cells was analyzed by western blot using specific antibodies. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
Since the TRAF2 complex acts upstream of NF-κB signaling and could be responsible for the phosphorylation of IKK, we examined the inhibitory effect of wogonoside on TRAF2 expression. The results showed that wogonoside could inhibit TRAF2 expression in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells (Fig. 5c). In addition, according to the orthotopic model experiment, wogonoside could decrease TRAF2 and TRAF4 in vivo. Therefore, we further investigated whether TRAF4 expression was also decreased in these two cell lines. We found that wogonoside mainly inhibited the nuclear protein level of TRAF4 instead of the cytoplasmic protein level (Fig. 5d).
Wogonoside inhibits TNF-α + TGF-β1-induced migration, adhesion and invasion in MCF7 cells in vitro
Since the treatment of TNF-α alone in MCF7 cells could not induce EMT efficiently, we added TGF-β1 (5 ng/ml) to promote the EMT process. An MTT assay showed that TNF-α (20 ng/ml) and TGF-β1 (5 ng/ml) or the combination of wogonoside, TNF-α and TGF-β1 had no cytotoxic effect (Fig. 6a). These concentrations were used in the following experiments. As shown in Fig. 6b, we found that MCF7 cells transformed from an epithelial morphology to an elongated fibroblast-like cell morphology under the treatment of TNF-α and TGF-β1. However, wogonoside (150 μM) could inhibit TNF-α + TGF-β1-induced morphological changes in MCF7 cells. The results of cell attachment assay showed that the adhesive capabilities of TNF-α + TGF-β1-induced MCF7 cells were decreased after treatment of wogonoside (Fig. 6c). Afterwards, we investigated the anti-invasive and anti-migratory effects of wogonoside on TNF-α + TGF-β1-triggered EMT with matrigel invasion and wound healing assays. Wogonoside inhibited the migration of TNF-α + TGF-β1-stimulated MCF7 cells across the wounded space in a concentration-dependent manner (Fig. 6d and e). And treatment with wogonoside could also reduce the invasiveness of TNF-α + TGF-β1-induced MCF7 cells through matrigel (Fig. 6f and g).
Wogonoside inhibits TNF-α + TGF-β1-induced migration, adhesion and invasion in MCF7 cells in vitro. MCF7 cells were exposed to different concentrations of TNF-α, TGF-β1 and wogonoside for 48 h. a An MTT assay showed that TNF-α, TGF-β1 and wogonoside had no effect on cell viability. b Morphological changes in MCF7 cells were observed under an inverted light microscope (400×). c 100 μl cell suspension (2 × 10⁵ cells/ml) was added to 96-well plates pre-coated with matrigel. After incubating for 60 min, adherent cells were determined by MTT assay. d-e A monolayer of cells was scraped with a pipette tip and then treated with TNF-α, TGF-β1 and wogonoside. The migrating cells were assessed with a microscope equipped with a camera (image magnification: 100×). f-g The invasive ability was evaluated by a matrigel-coated transwell invasion assay (image magnification: 200×). h The expression of E-cadherin, MMP-9, MMP-2, CD44v6, and vimentin proteins in the cells were analyzed by western blot using specific antibodies. i E-cadherin, MMP-9, vimentin and Twist1 mRNAs were measured with real-time PCR. GAPDH was used as the internal control. The relative levels were calculated as the ratio of the relative biomarker mRNA to GAPDH mRNA. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
As TNF-α and TGF-β1 triggered EMT process in MCF7 cells, we investigated the effect of wogonoside on EMT biomarkers. Quantization of western blot assay showed that wogonoside up-regulated the protein and mRNA expression of E-cadherin while it down-regulated the protein and mRNA expression of MMP-9 and vimentin in a concentration-dependent manner (Fig. 6h and i).
Wogonoside inhibits Twist1 expression through suppressing TRAF2/4 expression in MCF7 cells
As shown in Fig. 7a, the expression of TRAF2, TRAF4, and Twist1 in MDA-MB-231, MCF7, MDA-MB-435, and BT-474 cells was detected by western blot analysis. Twist1 protein levels differed among the four cell lines: MDA-MB-231 and MDA-MB-435 with high expression (100%), MCF7 with medium expression (75%), and BT-474 with low expression (55%).
Wogonoside inhibits Twist1 expression through suppressing TRAF2/4 expression in MCF7 cells. MCF7 cells were exposed to different concentrations of TNF-α, TGF-β1 and wogonoside for 48 h. a TRAF4, TRAF2 and Twist1 expression in MDA-MB-231, MCF7, MDA-MB-435, and BT-474 cells were analyzed by western blot using specific antibodies. Comparison was made with the expression in MDA-MB-231 cells. b MCF7 cells were transfected with TRAF4, TRAF2 and Twist plasmid. TRAF4, TRAF2 and Twist1 expression with treatment of wogonoside were analyzed by western blot. c The expression of TRAF2 and Twist1 protein in the cells were analyzed by western blot using specific antibodies. d The expression and localization of TRAF4 protein in the cells was analyzed by western blot using specific antibodies. e The expression of p-IKKα, IKKα, p-IκBα and IκBα proteins in the cells were analyzed by western blot using specific antibodies. f The expression and localization of NF-κB p65 protein in the cells was analyzed by western blot using specific antibodies. Each experiment was performed at least three times. Data are presented as mean ± SD. *p < 0.05 compared with the control group; **p < 0.01 compared with the control group
Since MCF7 cells exhibited lower expression of Twist1, TRAF2 and TRAF4, we transfected the overexpressed plasmids of TRAF2, TRAF4 and Twist into MCF7 cells. The results showed that transfection of TRAF2 and TRAF4 plasmids could induce Twist1 expression while transfection of Twist plasmids could induce TRAF2/4 expression. On the other hand, the overexpressed proteins of TRAF2, TRAF4 and Twist1 were all decreased by wogonoside (Fig. 7b).
Indeed, the stimulation of TNF-α and TGF-β1 could also enhance the expression of TRAF2, TRAF4 and Twist1 in MCF7 cells. Hence, we investigated whether the mechanism of wogonoside against EMT was involved with TRAF2/4 expression. The results showed that wogonoside inhibited the total protein of TRAF2 and the nuclear protein level of TRAF4 in TNF-α + TGF-β1-induced MCF7 cells (Fig. 7c and d). Correspondingly, wogonoside could inhibit the phosphorylation of IκBα and IKKα in TNF-α + TGF-β1-induced MCF7 cells (Fig. 7e). Meanwhile, as shown in Fig. 7f, nuclear translocation of NF-κB p65 was also inhibited by wogonoside.
Wogonoside is not only a bioactive component of Scutellaria baicalensis Georgi, but also the main in-vivo metabolite of wogonin. Wogonin was reported to exhibit anti-metastatic effects in various solid tumors, such as breast cancer, melanoma and hepatocellular cancer [24,25,26]. Although the anti-metastatic effect of wogonin has been demonstrated, its mechanism has not been fully revealed, and whether its main in-vivo metabolite has the same anti-metastatic effect was unclear. In this study, we demonstrated for the first time that wogonoside possesses anti-metastatic potential in breast cancer and uncovered its mechanism.
MDA-MB-231 is a breast cancer cell line derived from triple-negative breast cancer (TNBC), a subtype of breast cancer with poor prognosis and limited treatment options. As a first-choice drug for patients with TNBC, gemcitabine, a pyrimidine analog, suppresses DNA replication and induces apoptosis of breast cancer cells, but also causes considerable damage to bone marrow, liver and kidney. As an in-vivo metabolite of wogonin, wogonoside showed lower toxicity and did not act as a cytotoxic agent, so we used gemcitabine as a positive control for comparison with wogonoside in breast cancer. Although the inhibitory rate of wogonoside on the growth of the primary tumor was lower than that of gemcitabine, its effect on experimental metastasis was comparable to gemcitabine. This suggested that wogonoside possesses potential anti-tumor activity in the treatment of metastatic breast cancer. According to our results, wogonoside could suppress tumor metastasis through inhibition of primary tumor invasion by increasing the expression of E-cadherin and decreasing the expression of MMP-9, vimentin and Twist1. On the other hand, we found that the expression of TNF-α, TRAF2 and TRAF4 was enhanced in the primary tumor over time. This might be due to internal tumor necrosis and the pro-inflammatory environment, which promote overexpression of TNF-α and activation of the NF-κB pathway in the later stage of tumor growth; these factors generally promote invasion and metastasis. Hence, TNF-α was used to induce the metastatic process of breast cancer cells.
Tumor metastasis accounts for 90% of cancer-associated deaths, and invasion plays a critical role in metastasis [27]. During invasion, tumor cells first lose cell-cell junctions, subsequently degrade, remodel, and adhere to the surrounding ECM, and eventually migrate through the ECM to distant sites [28]. Therefore, wound healing, transwell invasion and cell adhesion assays were used to measure the anti-migration, anti-invasion and anti-adhesion effects of wogonoside in vitro. We found that wogonoside could inhibit invasion and migration in TNF-α-induced MDA-MB-231, MDA-MB-435 and BT-474 cells. Meanwhile, the abnormal expression of MMP-9, MMP-2, vimentin and CD44v6 in cancer cells would lead to decreased adhesion, enhanced migration and invasion. Thus, wogonoside inhibited metastasis-related protein expression to block the TNF-α-induced metastatic process. Twist1 is a master regulator of morphogenesis, which can induce EMT to facilitate breast tumor metastasis [6]. We found that wogonoside reduced the mRNA and protein expression of Twist1 in TNF-α-induced MDA-MB-231 and MDA-MB-435 cells, which indicated that the anti-metastatic effect of wogonoside in breast cancer was dependent on Twist1 expression.
Activation of the NF-κB pathway is associated with Twist1 expression and EMT in cancer cells [29, 30]. NF-κB activation in response to inflammatory cytokines and growth factors is frequently observed in metastatic breast cancer cells, and NF-κB has been shown to be essential for EMT and metastasis in a model of breast cancer progression [31]. We found that wogonoside inhibited the activation of NF-κB signaling by suppressing the expression of TRAF2 and TRAF4. Upon TNF-α stimulation, TRAF2 is recruited and participates in the activation of the receptor; the receptor complex collaborates with TRAF2 to bring the TGF-β-activated kinase 1 (TAK1) kinase complex close to the IKK complex, which is subsequently phosphorylated [32]. TRAF4, another important member of the TRAF family, is mainly expressed in the nucleus, and its nuclear expression is correlated with poor survival in breast cancer patients [33]. TRAF4 is also required for the activation of TAK1 and TGF-β-induced EMT [34]. Therefore, TRAF2 and TRAF4 both positively regulate the activation of the NF-κB pathway, which transcriptionally promotes Twist1 expression. Consequently, wogonoside down-regulated TRAF2 and TRAF4 expression to block NF-κB signaling and Twist1 transcription.
As Twist1 is a critical regulator of the EMT process, we investigated the inhibitory effect of wogonoside on EMT in TNF-α + TGF-β1-induced MCF7 cells. During EMT, cells lose their epithelial characteristics, including cell adhesion and polarity, and acquire a mesenchymal morphology and the ability to migrate. Biochemically, cells switch off the expression of epithelial markers such as the adherens junction protein E-cadherin and turn on mesenchymal markers including vimentin and fibronectin [35]. We demonstrated that wogonoside inhibited migration, invasion and cytoskeletal remodeling in TNF-α + TGF-β1-induced MCF7 cells. In addition, wogonoside increased E-cadherin expression and reduced vimentin expression by decreasing Twist1 expression. Meanwhile, when TNF-α and TGF-β1 activated the NF-κB pathway, wogonoside also suppressed the expression of TRAF2 and TRAF4, further verifying that wogonoside has an inhibitory effect on TRAF2/4 expression after exogenous stimulation.
In conclusion, wogonoside inhibits TRAF2/4 expression, thereby inactivating NF-κB signaling and ultimately suppressing Twist1 protein levels and the EMT process. Accordingly, wogonoside inhibited the invasion and migration of breast cancer cells in vitro and in vivo. Therefore, wogonoside may be a potential therapeutic agent for the treatment of metastatic breast cancer.
In the original publication of this article [1], there are mistakes in Fig. 3c and Fig. 3e.
ABC: Avidin-biotin-peroxidase-complex
EMT: Epithelial-mesenchymal transition
MFP: Mammary fat pad
RTV: Relative tumor volume
TAK1: TGF-β-activated kinase 1
TGF-β: Transforming growth factor-β
TNBC: Triple-negative breast cancer
TNF-α: Tumor necrosis factor-α
TV: Tumor volume
Hollestelle A, Nagel JHA, Smid M, Lam S, Elstrodt F, Wasielewski M, Ng SS, French PJ, Peeters JK, Rozendaal MJ, et al. Distinct gene mutation profiles among luminal-type and basal-type breast cancer cell lines. Breast Cancer Res Treat. 2010;121:53–64.
Williams SL, Birdsong GG, Cohen C, Siddiqui MT. Immunohistochemical detection of estrogen and progesterone receptor and HER2 expression in breast carcinomas: comparison of cell block and tissue block preparations. Int J Clin Exp Pathol. 2009;2:476–80.
Ebrahimi M, Ebrahimie E, Shamabadi N, Ebrahimi M. Are there any differences between features of proteins expressed in malignant and benign breast cancers? J Res Med Sci. 2010;15:299–309.
Loganathan J, Jiang JH, Smith A, Jedinak A, Thyagarajan-Sahu A, Sandusky GE, Nakshatri H, Sliva D. The mushroom Ganoderma lucidum suppresses breast-to-lung cancer metastasis through the inhibition of pro-invasive genes. Int J Oncol. 2014;44:2009–15.
Aggarwal BB, Shishodia S, Ashikawa K, Bharti AC. The role of TNF and its family members in inflammation and cancer: lessons from gene deletion. Curr Drug Targets Inflamm Allergy. 2002;1:327–41.
Yang J, Mani SA, Donaher JL, Ramaswamy S, Itzykson RA, Come C, Savagner P, Gitelman I, Richardson A, Weinberg RA. Twist, a master regulator of morphogenesis, plays an essential role in tumor metastasis. Cell. 2004;117:927–39.
Weiss MB, Abel EV, Mayberry MM, Basile KJ, Berger AC, Aplin AE. TWIST1 is an ERK1/2 effector that promotes invasion and regulates MMP-1 expression in human melanoma cells. Cancer Res. 2012;72:6382–92.
Ansieau S, Bastid J, Doreau A, Morel AP, Bouchet BP, Thomas C, Fauvet F, Puisieux I, Doglioni C, Piccinin S, et al. Induction of EMT by twist proteins as a collateral effect of tumor-promoting inactivation of premature senescence. Cancer Cell. 2008;14:79–89.
Maestro R, Dei Tos AP, Hamamori Y, Krasnokutsky S, Sartorelli V, Kedes L, Doglioni C, Beach DH, Hannon GJ. Twist is a potential oncogene that inhibits apoptosis. Genes Dev. 1999;13:2207–17.
Kwok WK, Ling MT, Lee TW, Lau TCM, Zhou C, Zhang XM, Chua CW, Chan KW, Chan FL, Glackin C, et al. Up-regulation of TWIST in prostate cancer and its implication as a therapeutic target. Cancer Res. 2005;65:5153–62.
Eckert MA, Lwin TM, Chang AT, Kim J, Danis E, Ohno-Machado L, Yang J. Twist1-induced invadopodia formation promotes tumor metastasis. Cancer Cell. 2011;19:372–86.
Lee TK, Poon RTP, Yuen AP, Ling MT, Kwok WK, Wang XH, Wong YC, Guan XY, Man K, Chau KL, Fan ST. Twist overexpression correlates with hepatocellular carcinoma metastasis through induction of epithelial-mesenchymal transition. Clin Cancer Res. 2006;12:5369–76.
Yang Z, Zhang XH, Gang H, Li XJ, Li ZM, Wang T, Han J, Luo T, Wen FQ, Wu XT. Up-regulation of gastric cancer cell invasion by Twist is accompanied by N-cadherin and fibronectin expression. Biochem Biophys Res Commun. 2007;358:925–30.
Vesuna F, Lisok A, Kimble B, Raman V. Twist modulates breast cancer stem cells by transcriptional regulation of CD24 expression. Neoplasia. 2009;11:1318–28.
Li CW, Xia W, Huo L, Lim SO, Wu Y, Hsu JL, Chao CH, Yamaguchi H, Yang NK, Ding Q, et al. Epithelial-mesenchymal transition induced by TNF-alpha requires NF-kappaB-mediated transcriptional upregulation of Twist1. Cancer Res. 2012;72:1290–300.
Li H, Hui H, Xu J, Yang H, Zhang X, Liu X, Zhou Y, Li Z, Guo Q, Lu N. Wogonoside induces growth inhibition and cell cycle arrest via promoting the expression and binding activity of GATA-1 in chronic myelogenous leukemia cells. Arch Toxicol. 2016;90:1507–22.
Huang Y, Zhao K, Hu Y, Zhou Y, Luo X, Li X, Wei L, Li Z, You Q, Guo Q, Lu N. Wogonoside inhibits angiogenesis in breast cancer via suppressing Wnt/beta-catenin pathway. Mol Carcinog. 2016;55:1598–612.
Sun Y, Zhao Y, Yao J, Zhao L, Wu Z, Wang Y, Pan D, Miao H, Guo Q, Lu N. Wogonoside protects against dextran sulfate sodium-induced experimental colitis in mice by inhibiting NF-kappaB and NLRP3 inflammasome activation. Biochem Pharmacol. 2015;94:142–54.
Cheng Y, Zhao K, Li G, Yao J, Dai Q, Hui H, Li Z, Guo Q, Lu N. Oroxylin A inhibits hypoxia-induced invasion and migration of MCF-7 cells by suppressing the Notch pathway. Anti-Cancer Drugs. 2014;25:778–89.
Lu Z, Lu N, Li C, Li F, Zhao K, Lin B, Guo Q. Oroxylin A inhibits matrix metalloproteinase-2/9 expression and activation by up-regulating tissue inhibitor of metalloproteinase-2 and suppressing the ERK1/2 signaling pathway. Toxicol Lett. 2012;209:211–20.
Chakrabarti R, Kang Y. Transplantable mouse tumor models of breast cancer metastasis. Methods Mol Biol. 2015;1267:367–80.
Wang H, Zhao L, Zhu LT, Wang Y, Pan D, Yao J, You QD, Guo QL. Wogonin reverses hypoxia resistance of human colon cancer HCT116 cells via downregulation of HIF-1alpha and glycolysis, by inhibiting PI3K/Akt signaling pathway. Mol Carcinog. 2014;53(Suppl 1):E107–18.
Jiang J, Thyagarajan-Sahu A, Loganathan J, Eliaz I, Terry C, Sandusky GE, Sliva D. BreastDefend prevents breast-to-lung cancer metastases in an orthotopic animal model of triple-negative human breast cancer. Oncol Rep. 2012;28:1139–45.
Liu X, Tian S, Liu M, Jian L, Zhao L. Wogonin inhibits the proliferation and invasion, and induces the apoptosis of HepG2 and Bel7402 HCC cells through NFkappaB/Bcl-2, EGFR and EGFR downstream ERK/AKT signaling. Int J Mol Med. 2016;38:1250–1256.
Zhao K, Wei L, Hui H, Dai Q, You QD, Guo QL, Lu N. Wogonin suppresses melanoma cell B16-F10 invasion and migration by inhibiting Ras-medicated pathways. PLoS One. 2014;9:e106458.
Chen P, Lu N, Ling Y, Chen Y, Hui H, Lu Z, Song X, Li Z, You Q, Guo Q. Inhibitory effects of wogonin on the invasion of human breast carcinoma cells by downregulating the expression and activity of matrix metalloproteinase-9. Toxicology. 2011;282:122–8.
Chaffer CL, Weinberg RA. A perspective on cancer cell metastasis. Science. 2011;331:1559–64.
Friedl P, Wolf K. Tumour-cell invasion and migration: diversity and escape mechanisms. Nat Rev Cancer. 2003;3:362–74.
Castanon I, Baylies MK. A Twist in fate: evolutionary comparison of Twist structure and function. Gene. 2002;287:11–22.
Sosic D, Richardson JA, Yu K, Ornitz DM, Olson EN. Twist regulates cytokine gene expression through a negative feedback loop that represses NF-kappaB activity. Cell. 2003;112:169–80.
Huber MA, Azoitei N, Baumann B, Grunert S, Sommer A, Pehamberger H, Kraut N, Beug H, Wirth T. NF-kappaB is essential for epithelial-mesenchymal transition and metastasis in a model of breast cancer progression. J Clin Invest. 2004;114:569–81.
Borghi A, Verstrepen L, Beyaert R. TRAF2 multitasking in TNF receptor-induced signaling to NF-kappaB, MAP kinases and cell death. Biochem Pharmacol. 2016;116:1–10.
Yi P, Xia W, Wu RC, Lonard DM, Hung MC, O'Malley BW. SRC-3 coactivator regulates cell resistance to cytotoxic stress via TRAF4-mediated p53 destabilization. Genes Dev. 2013;27:274–87.
Zhang L, Zhou F, Garcia de Vinuesa A, de Kruijf EM, Mesker WE, Hui L, Drabsch Y, Li Y, Bauer A, Rousseau A, et al. TRAF4 promotes TGF-beta receptor signaling and drives breast cancer metastasis. Mol Cell. 2013;51:559–72.
Hay ED. An overview of epithelio-mesenchymal transformation. Acta Anat (Basel). 1995;154:8–20.
This work was supported by the National Science & Technology Major Project (No. 2017ZX09301014, 2017ZX09101003-005-023, 2017ZX09101003-003-007), Program for Changjiang Scholars and Innovative Research Team in University (IRT1193), the Project Program of State Key Laboratory of Natural Medicines, China Pharmaceutical University (SKLNMZZCX201606), the National Natural Science Foundation of China (No. 81603135, 81673461, 81373449, 81373448), the Fundamental Research Funds for the Central Universities (2016ZPY005).
All data generated or analysed during this study are included in this published article.
State Key Laboratory of Natural Medicines, Jiangsu Key Laboratory of Carcinogenesis and Intervention, School of Basic Medicine and Clinical Pharmacy, China Pharmaceutical University, 24 Tongjiaxiang, Nanjing, 210009, People's Republic of China
Yuyuan Yao, Kai Zhao, Zhou Yu, Haochuan Ren, Li Zhao, Qinglong Guo & Na Lu
Department of Medicinal Chemistry, School of Pharmacy, China Pharmaceutical University, 24 Tongjiaxiang, Nanjing, 210009, People's Republic of China
Zhiyu Li
YYY, KZ, QLG and NL proposed the study. YYY and KZ performed research and wrote the first draft. ZY, HCR, LZ and ZYL collected and analyzed the data. All authors contributed to the design and interpretation of the study and to further drafts. NL is the guarantor. All authors read and approved the final manuscript.
Correspondence to Qinglong Guo or Na Lu.
All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.
Yao, Y., Zhao, K., Yu, Z. et al. Wogonoside inhibits invasion and migration through suppressing TRAF2/4 expression in breast cancer. J Exp Clin Cancer Res 36, 103 (2017) doi:10.1186/s13046-017-0574-5
Wogonoside
TNF-α
Twist1
TRAF2
\begin{document}
\date{}
\pagerange{\pageref{firstpage}--\pageref{lastpage}} \volume{} \pubyear{} \artmonth{}
\doi{}
\label{firstpage}
\begin{abstract} Data collected from wearable devices and smartphones can shed light on an individual's behavioral and circadian routine. Phone use can be modeled as an alternating event process between a state of active use and a state of being idle. Markov chains and alternating recurrent event models are commonly used to model state transitions in such settings, and random effects can be incorporated to introduce diurnal effects. While state labels can be derived prior to modeling dynamics, this approach omits informative regression covariates that can influence state memberships. We instead propose an alternating recurrent event proportional hazards (PH) regression to model the transitions between latent states. We propose an Expectation-Maximization (EM) algorithm for imputing latent state labels and estimating regression parameters. We show that our E-step simplifies to the hidden Markov model (HMM) forward-backward algorithm, allowing us to recover an HMM with logistic regression transition probabilities. In addition, we show that PH modeling of discrete-time transitions implicitly penalizes the logistic regression likelihood and results in shrinkage estimators for the relative risk. We derive asymptotic distributions for our model parameter estimates and compare our approach against competing methods through simulation, as well as in a digital phenotyping study that followed smartphone use in a cohort of adolescents with mood disorders. \end{abstract}
\begin{keywords} Alternating Recurrent Event Processes; Expectation Maximization Algorithm; Hidden Markov Models; Latent Variable Modeling; Longitudinal Data \end{keywords}
\maketitle
\section{Introduction} \label{intro}
Diurnal and circadian rhythm studies often model physiological processes as periodic cycles, such as a person's active and rest cycle. Sleep and diurnal rhythm are essential components of many circadian physiological processes with a clear time-of-day effect on active and rest cycles \citep{lagona2014latent,morris2012circadian}. While classification of physiological processes is an ongoing area of research, in many instances, processes can be discretized into a few state categories such as active and rest state labels. Here we consider this problem of estimating an individual's cycles between active and rest states in a mobile health (mHealth) setting based on wearable device or smartphone sensor data. If the true state labels are known, then a Markov chain can be used to model state transitions over time, otherwise a hidden Markov model (HMM) can be used to simultaneously perform classification and state transition estimation \citep{langrock2013combining}. In addition, HMMs have been extended to incorporate time-of-day effects as periodicity or seasonality using random effects \citep{stoner2020advanced,holsclaw2017bayesian,bartolucci2015discrete}. Continuous-time hidden Markov models (CT-HMM) have also been used for state classification in similar contexts but have difficulty accounting for random effects \citep{bartolucci2019shared,liu2015efficient,bureau2003applications,jackson2003multistate}. Most mixed effects HMM estimation procedures estimate logistic regression for discrete-time processes and do not account for time between states \citep{maruotti2012mixed,altman2007mixed}.
It is important to note that active and rest state transitions form an ergodic process, and the sojourn times between these states can be modeled with a proportional hazards (PH) regression in both directions, active-to-rest and rest-to-active. These two directions of transition can be viewed as an alternating recurrent event PH model \citep{krol2015semimarkov,wang2020penalized,shinohara2018alternating}. Hazard rates have previously been incorporated into continuous-time Markov chains (CTMC) to model sojourn times \citep{hubbard2016using}. However, the ability of alternating recurrent event processes to accurately model sojourn times complements HMMs and provides many useful properties in addition to being computationally scalable. This novel model can be viewed as a latent state analog of an alternating recurrent event process \citep{wang2020penalized,krol2015semimarkov}. Notably, if the underlying data generating process is a continuous-time process, then PH models are a more appropriate modeling choice \citep{abbott1985logistic,ingram1989empirical}. If the data generating process involves discrete-time transitions, we show that PH modeling penalizes the logistic regression likelihood, inducing shrinkage during estimation.
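This implicit penalty can be made explicit with a short calculation (a sketch we add for intuition, assuming unit-length time steps between observations). Writing $\lambda = e^{\eta}$ for a transition hazard and $p = \text{expit}(\eta) = \lambda / (1 + \lambda)$ for the corresponding logistic transition probability, the exponential density and survival contributions $f = \lambda e^{-\lambda}$ and $S = e^{-\lambda}$ satisfy $$ \log f = \log p + \left\{ \log(1+\lambda) - \lambda \right\}, \qquad \log S = \log(1-p) + \left\{ \log(1+\lambda) - \lambda \right\}, $$ so the PH log-likelihood equals the logistic regression log-likelihood plus the penalty term $\log(1+\lambda) - \lambda \leq 0$ at every time step. The penalty is maximized as $\lambda \to 0$, shrinking the estimated transition probabilities toward zero, i.e., shrinking the transition probability matrix toward the identity.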
We propose an approach that takes advantage of the strengths of both HMMs and alternating recurrent event models to jointly estimate latent states while providing flexible modeling of sojourn times. Our Expectation-Maximization (EM) algorithm imputes latent active and rest state labels while modeling state transitions with an alternating recurrent event process using exponential PH regressions \citep{dempster1977maximum}. Informative regression covariates, often omitted in state labeling, are incorporated into the latent state imputation through the EM algorithm. We show that the E-step in this case simplifies to imputations involving an HMM forward-backward algorithm in which state transition probabilities are defined as logistic or multinomial regression probabilities \citep{baum1970maximization,altman2007mixed}. We also show that the M-step reduces to fitting independent PH models weighted by the E-step imputations, allowing for a potentially large number of latent states as well as providing a means to obtain large-sample inference. Our EM approach involving PH models provides a scalable M-step while returning the multinomial regression transition probabilities commonly found in HMMs \citep{maruotti2012mixed,holsclaw2017bayesian,altman2007mixed}. Furthermore, we show that applying PH models to discrete-time transitions implicitly penalizes logistic regression, shrinking the transition probability matrices of HMMs towards the identity matrix and mitigating overfitting in many practical settings. As a result, the PH model favors processes with a low incidence of state transitions, such as diurnal cycles where we expect few state transitions within a 24h period.
We apply this approach to estimate active-rest diurnal cycles in a sample of patients with affective disorders using their passively collected smartphone sensor data, namely accelerometer and screen on/off data from patient smartphones, combined with time-of-day random intercepts. We are able to quantify the strength of a patient's routine by representing the magnitude of the time-of-day random intercepts as the regularity of a patient's diurnal rhythm. This quantification of the strength of routine can be correlated with a myriad of relevant clinical outcomes, as the regularity of diurnal rhythms plays an important role in psychopathology, with past studies having shown associations between irregular rhythms and adverse health outcomes \citep{monk1990social,monk1991social}. In addition, we fit a population-level HMM to study effects of individual-specific covariates on state transitions. \iffalse The rest of this manuscript is organized as follows. In Section \ref{methods}, we introduce the data and derive the EM algorithm for estimating the alternating recurrent event exponential PH model and the accompanying HMM. In Section \ref{results}, we compare our alternating recurrent event model and HMM with competing approaches through simulation and also by estimating active/rest cycles using smartphone data in a cohort of patients with affective disorders. \fi \section{Data and Methods} \label{methods}
Our data consist of $i \in \{ 1,\dots, I \}$ individuals, with each individual $i$ having a sequence $j \in \{ 1, \dots, n_i \}$ of covariates to be modeled with a separate hidden Markov model. The HMM of the $i$th individual has a sequence of $n_i+1$ active or rest states $\mathbf{A}_i = \left\{ A(t_{i0}),\dots,A(t_{ij}),\dots,A(t_{in_i}) \right\}$, where the hourly time-stamps $t_{ij}$ are increasing in $j$. In our example, we denote active and rest states as $A(t_{ij})=1$ and $A(t_{ij})=2$ respectively, with an outline of our HMM in Web Figure 1. Within each sequence, we define event times for $n_i$ state transitions as $\Delta(t_{ij}) = t_{ij} - t_{i(j-1)}$, which follow an exponential distribution and can be fitted with a PH regression. Linking multiple exponential event time processes results in a recurrent event model. Furthermore, recurrent exponential PH models are analogous to a non-homogeneous Poisson process, which retains independent increments, allowing us to chain multiple transitions together.
The covariates used in the exponential PH regression are mean acceleration magnitudes (Euclidean norm) from the preceding hour evaluated at the $n_{i}$ transitions and are outlined in Web Figure 1. We denote the intercept and covariates as $\mathbf{X}^\top_{i} = \left[ \mathbf{x}(t_{i1}),\mathbf{x}(t_{i2}),\dots,\mathbf{x}(t_{i n_{i}}) \right] \in \mathbb{R}^{p \times n_i}$. We make an ergodic state transition assumption where states will inevitably communicate with each other, i.e., the active-to-rest and rest-to-active transitions will eventually occur. This allows the survival function to capture the likelihood contribution when a state transition did not occur, meaning the transition will occur at some future time. Because we do not have the true state labels $\mathbf{A} = \{\mathbf{A}_1,\dots \mathbf{A}_I \}$, we must rely on state dependent observations. We use screen-on counts for each time-stamp $\mathbf{y}_{i} = \left\{ Y(t_{i0}),Y(t_{i1}),\dots,Y(t_{i n_i }) \right\}$ as observations from state dependent distributions $Y(t_{ij}) | \left\{ A({t_{ij}}) = s \right\} \sim \text{Poisson}( \mu_s )$, where $s \in \{1,2\}$, $\mu_s$ are state-specific parameters and we expect $\mu_2 \approx 0$ for the rest state.
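As a concrete illustration of this generative model, the following minimal Python sketch (all parameter values are hypothetical, not estimates from the study) simulates an hourly two-state chain in which a transition out of the current state $s$ occurs with probability $\text{expit}(\mathbf{x}^\top \boldsymbol{\beta}_s)$, and screen-on counts are drawn from the state-dependent Poisson distributions with $\mu_2 \approx 0$ for rest:

```python
import numpy as np

def simulate_chain(n, beta1, beta2, mu=(5.0, 0.1), seed=0):
    """Simulate hourly active(1)/rest(2) states and screen-on counts.
    At each hourly step, the chain leaves its current state s with
    probability expit(x' beta_s); counts are Poisson with
    state-specific means mu."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 2))
    x[:, 0] = 1.0                       # intercept plus one covariate
    expit = lambda e: 1.0 / (1.0 + np.exp(-e))
    A = np.empty(n, dtype=int)
    A[0] = 1
    for j in range(1, n):
        s = A[j - 1]
        eta = x[j] @ (beta1 if s == 1 else beta2)
        A[j] = (3 - s) if rng.random() < expit(eta) else s   # 1 <-> 2
    y = rng.poisson(np.where(A == 1, mu[0], mu[1]))
    return x, A, y
```

With intercepts around $-2$ the hourly switching probability is roughly 0.1, so the chain spends multi-hour stretches in each state, mimicking diurnal cycling.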
\subsection{Alternating Recurrent Event PH Model} \label{PHHMM}
In our two-state setting, rates of transition from state $s$ to the other state are defined as $\lambda_s(t_{ij}) = \exp \left( \mathbf{x}^\top(t_{ij}) \boldsymbol{\beta}_s \right)$. For example, $\lambda_1(t_{ij})$ denotes the rate of transition from state 1-to-2 (active-to-rest). Alternating recurrent event PH models often need to account for the longitudinal nature of the data, i.e., repeated measurements. Mixed effects or frailties can be used to account for the recurrent nature of the data \citep{wang2020penalized,mcgilchrist1991regression}. Modifying the standard exponential PH model with shared log-normal frailties or normal random intercepts, the state transition hazards become $ \lambda_s(t_{ij}) = \exp \left( \eta_s(t_{ij}) \right) = \exp \left( \mathbf{x}^\top(t_{ij}) \boldsymbol{\beta}_s + \mathbf{z}^\top(t_{ij}) \mathbf{b}_s \right)$, where $\mathbf{b}_s \sim \mathrm{N}(\mathbf{0}_{24}, \sigma^2_s \mathbf{I}_{24} )$. Here $\mathbf{z}(t_{ij})$ are 24 hour-of-day indicators, one-hot vectors designed to toggle the appropriate random intercepts within $\{ \mathbf{b}_1,\mathbf{b}_2 \}$.
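A short sketch of how the one-hot vector $\mathbf{z}(t_{ij})$ toggles an hour-of-day random intercept inside the hazard (the numeric values below are hypothetical illustrations, not fitted estimates):

```python
import numpy as np

# Hazard lambda_s(t) = exp(x' beta_s + z(t)' b_s): because z(t) is a
# one-hot hour-of-day vector, z(t)' b_s simply selects b_s[hour].
def hazard(x, hour, beta, b):
    return float(np.exp(x @ beta + b[hour]))

# Hypothetical values: a raised 7am intercept makes a rest-to-active
# transition more likely in the morning than at 3am.
beta = np.array([-1.0, 0.3])
b = np.zeros(24)
b[7] = 0.8
x = np.array([1.0, 0.5])                # intercept + one covariate
lam_7am = hazard(x, 7, beta, b)         # exp(-0.85 + 0.8)
lam_3am = hazard(x, 3, beta, b)         # exp(-0.85)
```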
A HMM for individual $i$ is given by the complete data likelihood \begin{equation} \label{eq1} \begin{array}{rl}
L ( \boldsymbol{\beta}_1, \mathbf{b}_1, \sigma^2_1, \boldsymbol{\beta}_2, \mathbf{b}_2, \sigma^2_2, \mu_1, \mu_2 | \mathbf{A}_i ) &=
\left\{ \prod_{s=1}^2 L(\boldsymbol{\beta}_s, \sigma^2_s, \mathbf{b}_s |\mathbf{A}_i ) \right\} L ( \mu_1, \mu_2 | \mathbf{A}_i ) \\
&= \left\{ \prod_{s=1}^2 L(\boldsymbol{\beta}_s | \mathbf{A}_i, \mathbf{b}_s ) f ( \mathbf{b}_s | \sigma^2_s ) \right\} L ( \mu_1, \mu_2 | \mathbf{A}_i ) \end{array} \end{equation} where $\mathbf{A}_i$ are the true state labels. The PH likelihoods for state transitions are given as $$
L(\boldsymbol{\beta}_s | \mathbf{A}_i, \mathbf{b}_s ) f ( \mathbf{b}_s | \sigma^2_s ) = \left[ \prod^{n_{i}}_{j=1} \left\{ f ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) \right\}^{d_s(t_{ij})} \left\{ S ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) \right\}^{c_s(t_{ij})} \right] f ( \mathbf{b}_s | \sigma^2_s ) $$
where $f ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) = \lambda_s(t_{ij}) \exp \left( -\lambda_s(t_{ij}) \Delta(t_{ij}) \right)$ and $S ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) = \exp \left( -\lambda_s(t_{ij}) \Delta(t_{ij}) \right)$ are derived from the exponential distribution. Note that $\prod_{s=1}^2 L(\boldsymbol{\beta}_s | \mathbf{A}_i, \mathbf{b}_s ) f ( \mathbf{b}_s | \sigma^2_s )$ is the likelihood of an alternating event process \citep{krol2015semimarkov,wang2020penalized}. We denote the indicators for state 1-to-2 transitions, $d_1(t_{ij}) = \mathbb{I} \left[ A(t_{i(j-1)})=1, A(t_{ij}) = 2 \right]$, as $\mathbf{d}_1 = \left\{ d_1(t_{11}), d_1(t_{12}), \dots, d_1(t_{ij}), \dots \right\}$ and 2-to-1 transitions as $\mathbf{d}_2$. We interpret failure to transition out of state 1, $c_1(t_{ij}) = \mathbb{I} \left[ A(t_{i(j-1)})=1, A(t_{ij}) = 1 \right]$, as censoring, denoted as $\mathbf{c}_1 = \left\{ c_1(t_{11}), c_1(t_{12}), \dots, c_1(t_{ij}), \dots \right\}$, and we similarly define $\mathbf{c}_2$. The screen-on count state-conditional Poisson likelihood is given as $ \begin{array}{c}
L \left( \mu_1, \mu_2 | \mathbf{A}_i \right) = \prod^{n_{i}}_{j=0} \prod^{2}_{s=1} f \left( y(t_{ij}) | \mu_s \right)^{u_s(t_{ij})} \end{array} $ where state memberships $\mathbf{u}$ are denoted as indicators $u_s(t_{ij}) = \mathbb{I} \left[ A(t_{ij}) = s \right]$. Since the true labels are unknown, $\mathbf{d}_s$, $\mathbf{c}_s$, and $\mathbf{u}$ are latent variables and \eqref{eq1} becomes a mixture model.
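With known labels $\mathbf{A}_i$, the alternating-process likelihood above factors into density terms at observed transitions and survival terms at censored steps. A minimal Python sketch (single individual, with the linear predictors $\eta_s$ supplied directly rather than computed from covariates):

```python
import numpy as np

def alt_loglik(A, dt, eta1, eta2):
    """Complete-data log-likelihood of the alternating exponential PH
    process for one individual: every step contributes the survival
    factor -lambda*dt, and an observed transition additionally
    contributes log(lambda) from the exponential density."""
    lam1, lam2 = np.exp(eta1), np.exp(eta2)
    ll = 0.0
    for j in range(1, len(A)):
        lam = lam1[j] if A[j - 1] == 1 else lam2[j]   # hazard of leaving A[j-1]
        ll += -lam * dt[j]                            # shared exp(-lam*dt) factor
        if A[j] != A[j - 1]:
            ll += np.log(lam)                         # density term for a transition
    return ll
```

For example, with unit hazards and unit gaps, a sequence staying in state 1 for one step and then transitioning contributes $-1$ and $\log 1 - 1$, totaling $-2$.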
\subsection{EM Algorithm for PH Regression and HMM Parameters} \label{est}
The log-likelihood of \eqref{eq1} is a linear function of the latent variables $\mathbf{d}_s$, $\mathbf{c}_s$, and $\mathbf{u}$, making it natural to optimize with an EM algorithm \citep{dempster1977maximum}. Through the EM algorithm, the indicators $\mathbf{d}_s$, $\mathbf{c}_s$, and $\mathbf{u}$ are imputed as continuous probabilities in the process of obtaining maximum likelihood estimates (MLEs). As a result, the alternating recurrent event exponential PH model reduces to two weighted frailty models. We denote PH model weights as $\mathbf{w}(t_{ij}) = \left\{ {c}_1(t_{ij}), {d}_1(t_{ij}), {c}_2(t_{ij}), {d}_2(t_{ij}) \right\}$, which belong to a 4-dimensional probability simplex, i.e., values are non-negative and $\| \mathbf{w}(t_{ij}) \|_1 = 1$. Poisson mixture model weights $\mathbf{u}(t_{ij}) = \left\{ {u}_1(t_{ij}), u_2(t_{ij}) \right\}$ belong to a 2-dimensional probability simplex. Our EM algorithm iteratively estimates the weights $\mathbf{w}(t_{ij})$ and $\mathbf{u}(t_{ij})$ using the forward-backward algorithm of \cite{baum1970maximization} and $\left\{ \boldsymbol{\beta}_1, \mathbf{b}_1, \sigma^2_1, \boldsymbol{\beta}_2, \mathbf{b}_2, \sigma^2_2 \right\}$ using survival modeling. While the complete data likelihood is written as an alternating event process, we see an equivalence with logistic regression transition probabilities in the E-step calculations, effectively retooling alternating event processes to fit HMMs to data with heterogeneous event-time transitions.
\subsubsection{E-step} \label{EStep} In the E-step, we derive the expectation of $\mathbf{w}(t_{ij})$ conditional on model parameters and observed data in the $i$th HMM denoted by $\mathbf{y}_{i}$ and $\mathbf{X}_{i}$, as $$ \begin{array}{rl}
\mathbb{E} \left[ {d}_1(t_{ij}) | \mathbf{y}_{i}, \mathbf{X}_{i}, {\Theta}_{i} \right] &=
\text{Pr} \left( A(t_{i(j-1)})=1, A(t_{ij}) = 2 | \mathbf{y}_{i}, \mathbf{X}_{i}, {\Theta}_{i} \right) \\
&= \left( \frac{ {\alpha}_1 (t_{i(j-1)}) {\nu}_2 (t_{ij})}{ \text{Pr}(\mathbf{Y}_{i} =\mathbf{y}_{i} | \mathbf{X}_{i}, {\Theta}_{i}) } \right) \frac{f ( \Delta(t_{ij}) | \lambda_1(t_{ij}) ) }{f ( \Delta(t_{ij}) | \lambda_1(t_{ij}) ) + S ( \Delta(t_{ij}) | \lambda_1(t_{ij}) )} , \end{array} $$
$\mathbb{E} \left[ {c}_1(t_{ij}) | \mathbf{y}_{i}, \mathbf{X}_{i}, {\Theta}_{i} \right] = \text{Pr} \left( A(t_{i(j-1)})=1, A(t_{ij}) = 1 | \mathbf{y}_{i}, \mathbf{X}_{i}, {\Theta}_{i} \right)$, $\Theta_{i} = \left\{ \boldsymbol{\delta}_{i}, \boldsymbol{\beta}_s, \mathbf{b}_s, \sigma^2_s, \mu_s \right\}$ and $$ \begin{array}{rl}
{\alpha}_s (t_{ij}) &\propto \text{Pr} \left( A(t_{ij})=s | y_{i0}, \dots, y_{ij}, \mathbf{x}_{i1}, \dots, \mathbf{x}_{ij}, {\Theta}_{i} \right) \\
{\nu}_s (t_{ij}) &\propto \text{Pr} \left( A(t_{ij})=s | y_{ij}, \dots, y_{in_{i}}, \mathbf{x}_{i(j+1)}, \dots, \mathbf{x}_{in_{i}}, {\Theta}_{i} \right) \end{array} $$ are forward and backward probabilities of a HMM. Vectors $\boldsymbol{\delta}_{i} \in \mathbb{R}_{\geq0}^{1 \times 2}$ are of initial state distribution probabilities for the HMM. The transition probability matrix $\boldsymbol{\Gamma}(t_{ij}) \in \mathbb{R}_{\geq0}^{2 \times 2}$, derived by normalizing the alternating recurrent event exponential PH models are $$ \boldsymbol{\Gamma}(t_{ij}) = \begin{bmatrix} \gamma_{11}(t_{ij}) & \gamma_{12}(t_{ij}) \\ \gamma_{21}(t_{ij}) & \gamma_{22}(t_{ij}) \end{bmatrix} = \begin{bmatrix} 1-\text{expit}\left( \eta_1(t_{ij}) \right) & \text{expit}\left( \eta_1(t_{ijk}) \right) \\ \text{expit}\left( \eta_2(t_{ij}) \right) & 1 - \text{expit}\left( \eta_2(t_{ij}) \right) \end{bmatrix} $$
where $f ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) / \left\{ f ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) + S ( \Delta(t_{ij}) | \lambda_s(t_{ij}) ) \right\} = \left\{ 1 + \exp(-\eta_s(t_{ij})) \right\}^{-1} =\\ \text{expit}\left( \eta_s(t_{ij}) \right)$. The weights from $\mathbf{w}(t_{ij})$ can more generally be written as \\$\text{Pr}\left( A(t_{i(j-1)})=q, A(t_{ij})=r | \mathbf{y}_{i}, \mathbf{X}_{i}, {\Theta}_{i} \right) \propto \alpha_q (t_{i(j-1)}) \nu_r (t_{ij}) \gamma_{qr} (t_{ij})$. The E-step involves imputing $\mathbf{w}(t_{ij})$ through an HMM, using a forward-backward algorithm, where the transition probabilities are standard logistic functions. We denote the forward and backward probability vectors $\boldsymbol{\alpha}^\top (t_{ij}) = \boldsymbol{\delta}_{i} \mathbf{P}(t_{i0}) \prod^{j}_{m=1} \boldsymbol{\Gamma} (t_{im}) \mathbf{P}(t_{im})$ and $
\boldsymbol{\nu} (t_{ij}) = \mathbf{P}(t_{ij}) \prod^{n_{i}}_{m=j+1} \boldsymbol{\Gamma} (t_{im}) \mathbf{P}(t_{im}) \mathbf{1} $, where $\boldsymbol{\alpha} (t_{ij})$ and $\boldsymbol{\nu} (t_{ij})$ are $2 \times 1$ vectors. The state-dependent distributions are contained in the $2 \times 2$ diagonal matrix $\mathbf{P}(t_{ij})= \text{diag} \Big( f \left(y(t_{ij}) | \mu_1 \right), f \left(y(t_{ij}) | \mu_2 \right) \Big)$. Note that $\text{Pr}(\mathbf{Y}_{i} =\mathbf{y}_{i} | \mathbf{X}_{i}, {\Theta}_{i}) = \boldsymbol{\alpha}^\top (t_{ij}) \boldsymbol{\Gamma} (t_{i(j+1)}) \boldsymbol{\nu} (t_{i(j+1)})$, which is used to normalize the E-step probabilities. The E-step update for iteration $l+1$ simplifies to calculating probabilities $\mathbf{w}^{(l+1)}(t_{ij})$ as $$ \begin{bmatrix} c^{(l+1)}_1(t_{ij}) & d^{(l+1)}_1(t_{ij}) \\ d^{(l+1)}_2(t_{ij}) & c^{(l+1)}_2(t_{ij}) \end{bmatrix} \propto \left( {\boldsymbol{\alpha}^\top}^{(l)}(t_{i(j-1)}) \otimes \boldsymbol{\nu}^{(l)}(t_{ij}) \right) \odot \boldsymbol{\Gamma}^{(l)}(t_{ij}) $$
such that the sum of all elements is equal to one, $\| \mathbf{w}^{(l+1)}(t_{ij}) \|_1 = 1$. Operations $\otimes$ and $\odot$ are Kronecker and Hadamard products, respectively. The update for the Poisson mixture model weights is $\mathbf{u}^{(l+1)} (t_{ij}) \propto \boldsymbol{\alpha}^{(l)}(t_{ij}) \odot \left( \boldsymbol{\Gamma}^{(l)}(t_{i(j+1)}) \boldsymbol{\nu}^{(l)}(t_{i(j+1)}) \right)$, such that $\| \mathbf{u}^{(l+1)}(t_{ij}) \|_1 = 1$.
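The full E-step recursion can be sketched in a few lines of Python (a simplified single-sequence illustration with per-step rescaling for numerical stability; this is an assumption-laden sketch, not the authors' implementation): transition matrices use expit off-diagonals, emissions are Poisson, and the routine returns the joint transition probabilities $\mathbf{w}$ and marginal memberships $\mathbf{u}$.

```python
import numpy as np
from math import lgamma

def expit(e):
    return 1.0 / (1.0 + np.exp(-e))

def e_step(y, eta1, eta2, mu, delta=(0.5, 0.5)):
    """One forward-backward pass: returns joint transition probabilities
    w[j-1] = Pr(A(t_{j-1})=q, A(t_j)=r | y), each 2x2 summing to one,
    and state marginals u[j] = Pr(A(t_j)=s | y)."""
    n = len(y)
    # state-dependent Poisson densities P(t_j), an n x 2 array
    P = np.array([[np.exp(yj * np.log(m) - m - lgamma(yj + 1)) for m in mu]
                  for yj in y])
    # logistic transition matrices Gamma(t_j)
    G = np.empty((n, 2, 2))
    for j in range(n):
        p1, p2 = expit(eta1[j]), expit(eta2[j])
        G[j] = [[1.0 - p1, p1], [p2, 1.0 - p2]]
    # scaled forward probabilities alpha
    a = np.empty((n, 2))
    a[0] = np.asarray(delta) * P[0]; a[0] /= a[0].sum()
    for j in range(1, n):
        a[j] = (a[j - 1] @ G[j]) * P[j]; a[j] /= a[j].sum()
    # scaled backward probabilities nu (excluding the current emission)
    v = np.empty((n, 2)); v[-1] = 0.5
    for j in range(n - 2, -1, -1):
        v[j] = G[j + 1] @ (P[j + 1] * v[j + 1]); v[j] /= v[j].sum()
    u = a * v; u /= u.sum(axis=1, keepdims=True)
    w = np.empty((n - 1, 2, 2))
    for j in range(1, n):
        wj = np.outer(a[j - 1], P[j] * v[j]) * G[j]
        w[j - 1] = wj / wj.sum()
    return w, u
```

Because per-step rescaling cancels in the normalized ratios, the returned $\mathbf{w}$ and $\mathbf{u}$ equal the exact smoothed probabilities.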
\subsubsection{M-step} \label{MStep}
The M-step update for $\left\{ \mu_1^{(l+1)}, \mu_2^{(l+1)} \right\}$ involves solving the mixture model
$L \left( \mu_1, \mu_2 | \mathbf{u}^{(l+1)} \right) = \prod^{n_{i}}_{j=0} \prod^{2}_{s=1} f \left( y(t_{ij}) | \mu_s \right)^{u^{(l+1)}_s(t_{ij})}$, where solutions are known for most distributions and $\mu^{(l+1)}_s = \left({ \sum_{ij} u_s^{(l+1)}(t_{ij}) } \right)^{-1} \left({ \sum_{ij} u_s^{(l+1)}(t_{ij}) y(t_{ij}) } \right)$ in the Poisson setting. From $\left\{ \mu_1^{(l+1)}, \mu_2^{(l+1)} \right\}$, we have updates $\mathbf{P}^{(l+1)}(t_{ij}) $. The update for $\left\{ \boldsymbol{\beta}_s^{(l+1)}, \mathbf{b}_s^{(l+1)}, {\sigma^2_s}^{(l+1)} \right\}$ involves fitting two frailty models for $L \left (\boldsymbol{\beta}_s, \sigma^2_s, \mathbf{b}_s | \mathbf{c}_s^{(l+1)}, \mathbf{d}_s^{(l+1)} \right)$, which can be accomplished by recognizing that $\left\{ \mathbf{d}_s^{(l+1)}, \mathbf{c}_s^{(l+1)}\right\}$ can be factored into indicators and log-likelihood weights, or case weights. The weights can be interpreted as the probability that a specific state transition occurred, e.g., if $\widehat{d}_1(t_{ij})\approx1$, then an active-to-rest transition likely occurred. We duplicate each row of data to be both a transition event and a censored outcome and then weight the rows by $\mathbf{d}_s^{(l+1)}$ and $\mathbf{c}_s^{(l+1)}$, respectively. A data augmentation example is outlined in Web Table 1.
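A brief sketch of the weighted Poisson M-step and the row-duplication scheme, using hypothetical counts and E-step weights (the arrays below are illustrative, not the example in Web Table 1):

```python
import numpy as np

# Hypothetical observed counts and E-step membership weights for one series;
# u[:, s] plays the role of u_s^{(l+1)}(t_ij).
y = np.array([9.0, 8.0, 1.0, 0.0, 7.0, 2.0])
u = np.array([[0.90, 0.10],
              [0.80, 0.20],
              [0.10, 0.90],
              [0.05, 0.95],
              [0.95, 0.05],
              [0.20, 0.80]])

# Weighted Poisson M-step: mu_s = sum_j u_s(t_j) y(t_j) / sum_j u_s(t_j)
mu = (u * y[:, None]).sum(axis=0) / u.sum(axis=0)

# Data augmentation for the weighted frailty fit: each row is duplicated as
# a transition event (case weight d_s) and a censored outcome (case weight c_s).
d = np.array([0.1, 0.7, 0.2])                       # transition weights d_s(t_ij)
c = 1.0 - d                                         # stay weights c_s(t_ij)
event = np.r_[np.ones_like(d), np.zeros_like(c)]    # event indicator column
case_weight = np.r_[d, c]                           # likelihood weights
```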
While there are numerous survival packages in \texttt{R}, we need a package that can incorporate weights, parametric PH models and normally distributed random intercepts \citep{therneau2015package}. The \texttt{R} package \texttt{tramME} can be used to fit our weighted exponential parametrization of shared log-normal frailty models \citep{tamasi2021tramme}. The package \texttt{tramME} uses a transformation model approach combined with an efficient implementation of the Laplace approximation to fit the shared log-normal frailty models \citep{hothorn2018most,hothorn2020most,kristensen2016tmb}. By updating $\left\{ \boldsymbol{\beta}_1^{(l+1)}, \mathbf{b}_1^{(l+1)}, \boldsymbol{\beta}_2^{(l+1)}, \mathbf{b}_2^{(l+1)} \right\}$ we also update our transition rates, $\lambda^{(l+1)}_1(t_{ij})$ and $\lambda^{(l+1)}_2(t_{ij})$ which are used to calculate $\boldsymbol{\Gamma}^{(l+1)} (t_{ij})$.
The M-step updates for the initial state distribution are $\boldsymbol{\delta}^{(l+1)}_{i} \propto \\ \left( \boldsymbol{\delta}^{(l)}_{i} \mathbf{P}^{(l+1)}(t_{i0}) \right) \odot \left( \boldsymbol{\Gamma}^{(l+1)}(t_{i1}) \boldsymbol{\nu}^{(l+1)}(t_{i1}) \right) $, such that $\| \boldsymbol{\delta}^{(l+1)}_{i} \|_1 = 1$. Finally, we iteratively calculate the E-step: $\left\{ \mathbf{w}^{(l+1)}(t_{ij}), \mathbf{u}^{(l+1)}(t_{ij}) \right\}$, and M-step: $\left\{ \boldsymbol{\delta}^{(l+1)}_{i}, \boldsymbol{\beta}^{(l+1)}_s, \mathbf{b}^{(l+1)}_s, {\sigma^2}^{(l+1)}_s, \mu^{(l+1)}_s \right\}$ until convergence to obtain the maximum likelihood estimates. Estimating shared population parameters can be done by taking the product of all individual likelihoods and applying EM.
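The initial state update can be sketched in a few lines of NumPy; all numeric inputs below are illustrative placeholders:

```python
import numpy as np

# Hypothetical quantities at the start of one chain (illustrative values only).
delta  = np.array([0.5, 0.5])             # current initial distribution
P0     = np.diag([0.7, 0.2])              # state-dependent densities at t_{i0}
Gamma1 = np.array([[0.9, 0.1],
                   [0.2, 0.8]])           # transition matrix at t_{i1}
nu1    = np.array([0.3, 0.1])             # backward probabilities at t_{i1}

# delta^{(l+1)} is proportional to (delta P(t_{i0})) Hadamard (Gamma(t_{i1}) nu(t_{i1}))
delta_new = (delta @ P0) * (Gamma1 @ nu1)
delta_new /= delta_new.sum()              # normalize so ||delta||_1 = 1
```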
\subsection{Comparison of Relative Risks from PH and Logistic Regression} \label{penalty} The estimated relative risks or coefficient estimates from PH and logistic regression have been shown to be similar under a variety of situations \citep{abbott1985logistic,ingram1989empirical,thompson1977treatment,callas1998empirical}. Many useful properties of PH modeling can be leveraged in Markov chains, which can improve the robustness of parameter estimation and reduce computational burden. Though many analyses look at the Cox PH case, we can adapt these approaches to the exponential PH, which is a special case of the Cox PH. Following the derivations of \cite{abbott1985logistic}, we have $\lambda(t)=-d \log \{S(t)\} / d t$ and $S( \Delta(t) ) =\exp \left\{ -\exp \left(\zeta_0( \Delta(t) )+\sum_k \beta_{k} x_{k}\right) \right\}$ with $\zeta_0 (\Delta (t)) =\log \left\{ \int_0^{ \Delta(t) } \lambda_{0}(z) d z \right\} = \log(\Delta(t)) + \beta_0 $ where $\lambda_{0}(z) = \exp(\beta_0)$ is the baseline hazard from the exponential distribution. When the event times are discrete, $\Delta( t ) = 1$ can be arbitrarily assigned as the event time and $\zeta_0 ( \Delta(t) ) = \beta_0$. Under this setting, $\log \left\{ -\log \left( S(\Delta(t_{ij})) \right)\right\} = \eta_s(t_{ij})$ and the Taylor expansion of the survival function $S(\Delta(t_{ij})=1 \mid \lambda_s(t_{ij})) = \exp(-\lambda_s(t_{ij}))$ at $\lambda_s(t_{ij})=0$ results in the power series $ \exp(-\exp(\eta_s(t_{ij}))) = 1-\lambda_s(t_{ij})+\lambda_s(t_{ij})^{2} / 2 !-\lambda_s(t_{ij})^{3} / 3 ! \ldots+(-1)^{n} \lambda_s(t_{ij})^{n} / n ! \ldots = 1-\lambda_s(t_{ij}) + R_{\text{PH}} \left( \lambda_s(t_{ij}) \right) $ where $1-P_{ij} = S(\Delta(t_{ij})=1 \mid \lambda_s(t_{ij})) \approx 1-\lambda_s(t_{ij})$ when the incidence rate $\lambda_s(t_{ij})\approx 0$.
The respective power series of $1-\text{expit}(\eta_s(t_{ij}))$ is given as $ 1-\text{expit}(\eta_s(t_{ij})) = 1-\lambda_s(t_{ij})+\lambda_s(t_{ij})^{2}-\lambda_s(t_{ij})^{3} \ldots+(-1)^{n} \lambda_s(t_{ij})^{n} \ldots = 1-\lambda_s(t_{ij}) + R_{\text{log}} \left( \lambda_s(t_{ij}) \right) $ where we set $\exp(-\exp(\eta_s(t_{ij}))) = 1-\text{expit}(\eta_s(t_{ij})) = 1-P_{ij} \approx 1-\lambda_s(t_{ij})$ when the incidence rates are low. Note that in the case of low incidence rates, $R_{\text{PH}} \left( \lambda_s(t_{ij}) \right) \leq R_{\text{log}} \left( \lambda_s(t_{ij}) \right)$ and writing $\eta_s(t_{ij})$ as a function of $R_{\text{PH}} \left( \lambda_s(t_{ij}) \right)$ or $R_{\text{log}} \left( \lambda_s(t_{ij}) \right)$, we get $\log \left( P_{ij} + R_{\text{PH}} \left( \lambda_s(t_{ij}) \right) \right) \leq \log \left( P_{ij} + R_{\text{log}} \left( \lambda_s(t_{ij}) \right) \right)$. As noted in previous studies, PH and logistic regression estimate similar relative risk under a group event time, low incidence rate setting \citep{abbott1985logistic,ingram1989empirical}. In many practical settings, discrete-time applications of binary outcome models involve low incidence rates, letting exponential PH regression serve as an alternative to logistic regression. The analogous Markov chain with rare outcomes is a process with an extended stay in the current state, commonly found in diurnal biological processes. However, when event times are heterogeneous, then PH regression is the more appropriate choice for estimating relative risk. As noted in \cite{abbott1985logistic}, in many cases, we observed shrinkage towards zero for the relative risk under PH regression when compared to the logistic regression due to the inequality between the remainder terms $R_{\text{PH}} \left( \lambda_s(t_{ij}) \right)$ and $R_{\text{log}} \left( \lambda_s(t_{ij}) \right)$ \citep{ingram1989empirical,thompson1977treatment,callas1998empirical}.
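The low-incidence agreement between the two link functions can be checked numerically; the following sketch compares $\exp(-\exp(\eta))$ with $1-\text{expit}(\eta)$ at an assumed incidence rate of $\lambda = 0.01$ (the rate is an arbitrary illustrative choice):

```python
import math

def surv_ph(eta):
    # PH survival probability for a unit interval: exp(-exp(eta))
    return math.exp(-math.exp(eta))

def surv_logit(eta):
    # Logistic "survival" probability: 1 - expit(eta)
    return 1.0 - 1.0 / (1.0 + math.exp(-eta))

eta = math.log(0.01)     # low incidence: lambda = exp(eta) = 0.01
lam = math.exp(eta)

# Both survival probabilities are close to 1 - lambda at low incidence,
# and the PH remainder R_PH is no larger than the logistic remainder R_log.
r_ph  = surv_ph(eta)    - (1.0 - lam)
r_log = surv_logit(eta) - (1.0 - lam)
```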
We further expand on this observed shrinkage by showing that the exponential PH regression is a penalized logistic regression. The exponential-family canonical form of the exponential PH and logistic regression likelihoods is given as $ L\left( \boldsymbol{\beta}_s \mid \mathbf{A}_i \right) = \prod^{n_{i}}_{j=1} \exp \left\{ d_s(t_{ij}) \eta_s(t_{ij}) \right\} \exp \left\{ -\Psi\left( \eta_s(t_{ij}) \right) \right\}, \ $ with $d_s(t_{ij})\in\{0,1\}$. Here, $\Psi_{\text{log}}(\eta) = \log(1+\exp(\eta))$ for logistic regression and $\Psi_{\text{PH}}(\eta) = \Delta (t) \exp(\eta) = \exp(\eta)$ for exponential PH regression under the discrete-time setting. We see that the logistic regression likelihood is uniformly bounded below by the PH likelihood due to $\exp(-\log(1+\exp(\eta))) > \exp(-\exp(\eta))$, which simply follows from the inequality $\log(1+z) < z$ for $z>0$. The convex optimization problems of PH and logistic regression are to minimize the loss functions $J_{\text{log}}(\boldsymbol{\beta}_s ) = - \log L_{\text{log}} \left( \boldsymbol{\beta}_s \mid \mathbf{A}_i \right) $ and $J_{\text{PH}}(\boldsymbol{\beta}_s) = - \log L_{\text{PH}} \left( \boldsymbol{\beta}_s \mid \mathbf{A}_i \right)$. There is a positive convex penalty difference between the logistic and PH optimization problems, $ J_{\text{PH}}(\boldsymbol{\beta}_s) = J_{\text{log}}(\boldsymbol{\beta}_s) + \mathcal{P}(\boldsymbol{\beta}_s) $ with $ \mathcal{P}(\boldsymbol{\beta}_s) = \sum^{n_{i}}_{j=1} -\log(1+ \exp( \eta_s(t_{ij}) ) ) + \exp( \eta_s(t_{ij}) ) > 0 $, where the penalty is the difference of $\Psi_{\text{PH}}(\eta)$ and $\Psi_{\text{log}}(\eta)$. It is straightforward to see that $ \mathcal{P}(\boldsymbol{\beta}_s)$ is convex, with Hessian $\nabla_{\boldsymbol{\beta}_s}^2 \mathcal{P}(\boldsymbol{\beta}_s) = \mathbf{X}^\top \boldsymbol{\Omega} \mathbf{X} \succeq 0$ and $\boldsymbol{\Omega}$ a diagonal matrix with elements $\exp(\eta_s(t_{ij})) \left[ 1 - \{1+ \exp(\eta_s(t_{ij}))\}^{-2} \right] > 0$.
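The per-observation penalty term $\Psi_{\text{PH}}(\eta) - \Psi_{\text{log}}(\eta) = \exp(\eta) - \log(1+\exp(\eta))$ can be evaluated directly; this sketch verifies that it is positive, nearly flat for $\eta \ll 0$, and dominated by $\exp(\eta)$ for $\eta \gg 0$ (the evaluation points are arbitrary):

```python
import math

def penalty_term(eta):
    # Per-observation gap between the PH and logistic cumulant functions:
    # Psi_PH(eta) - Psi_log(eta) = exp(eta) - log(1 + exp(eta)) > 0
    return math.exp(eta) - math.log1p(math.exp(eta))

# Positive everywhere; nearly flat for eta << 0; grows like exp(eta) for eta >> 0.
low  = penalty_term(-5.0)   # close to zero
mid  = penalty_term(0.0)    # equals 1 - log(2)
high = penalty_term(3.0)    # dominated by exp(3)
```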
Under a simple parameterization, such as the intercept-only model, it follows that the penalty $\mathcal{P}(\boldsymbol{\beta})$ favors a low incidence rate. The function $-\log(1+ \exp(\eta)) + \exp(\eta)$ is relatively flat for $\eta<0$ and dominated by $\exp(\eta)$ for $\eta \gg 0$. However, in practice this shrinks coefficients towards the solution of $\text{argmin}_{\boldsymbol{\beta}_s} \{ \mathcal{P}(\boldsymbol{\beta}_s) \}$, which tends to be near $\boldsymbol{\beta}_s=0$ when considering that each $\eta_s(t_{ij})= \mathbf{x}^\top (t_{ij}) \boldsymbol{\beta}_s$ is a different linear combination of $\boldsymbol{\beta}_s$. The penalty $\mathcal{P}(\boldsymbol{\beta}_s)$ modifies the convex hull of the logistic regression loss function to induce shrinkage and results in a PH loss function for which estimation software is readily available.
As noted in our E-step, the conditional expectations of the EM algorithm reduce to transition probability matrices that are composed of logistic regression probabilities. In the case of an HMM with 3 or more states, the E-step reduces to transition probabilities composed of multinomial logistic regression probabilities. However, the M-step conveniently involves fitting several independent exponential PH models. In order to illustrate this point, we define the complete data likelihood of transitioning out of state 1 in a 3-state HMM as, \begin{equation} \label{HMM3states} \begin{array}{rl}
L( \boldsymbol{\beta}_{12}, \boldsymbol{\beta}_{13} | \mathbf{A}_i ) &= L( \boldsymbol{\beta}_{12} | \mathbf{A}_i ) L( \boldsymbol{\beta}_{13} | \mathbf{A}_i ) \\
&= \prod^{n_{i}}_{j=1} \left\{ \lambda_{12}(t_{ij}) \right\}^{d_{12}(t_{ij})} \left\{ \lambda_{13}(t_{ij}) \right\}^{d_{13}(t_{ij})}
S ( \Delta(t_{ij}) | \sum_{k=2}^3 \lambda_{1k}(t_{ij}) ) \end{array} \end{equation}
with $d_{1m}(t_{ij}) = \mathbb{I} \left[ A(t_{i(j-1)})=1, A(t_{ij}) = m \right]$ for $m\in \{1,2,3\}$, $\sum_{m=1}^3 d_{1m}(t_{ij}) = 1$ and $\lambda_{1k}(t_{ij}) = \exp( \eta_{1k}(t_{ij}) ) = \exp( \mathbf{x}^\top (t_{ij}) \boldsymbol{\beta}_{1k} )$ for $k \in \{ 2, 3 \}$. The minimum of two or more independent exponential random variables follows an exponential distribution with rate equal to the sum of the rates, in our case $\sum_{k=2}^3 \lambda_{1k}(t_{ij})$. The likelihood contribution of staying in the same state is the survival function $S \left( \Delta(t_{ij}) | \sum_{k=2}^3 \lambda_{1k}(t_{ij}) \right) = \prod_{k=2}^3 S \left( \Delta(t_{ij}) | \lambda_{1k}(t_{ij}) \right) = \prod_{k=2}^3 \exp( -\Delta(t_{ij}) \lambda_{1k}(t_{ij}) )$. Continuing our example, the E-step for the transition from state 1 to state 2 is given by the multinomial probability $\mathbb{E} \left[ d_{12}(t_{ij}) \right] = \lambda_{12}(t_{ij}) / \left\{ 1 + \sum_{k=2}^3 \lambda_{1k}(t_{ij}) \right\}$, where we leave the derivation details to Web Appendix B. Note that in the 3-state HMM, E-step imputations are constrained to a 9-dimensional simplex. The M-step for the 3-state HMM parameters from likelihood \eqref{HMM3states} can be estimated with 2 independent exponential PH models. This EM procedure can be extended to an arbitrary number of states. In the M-step we fit independent weighted PH models rather than a cumbersome multinomial regression.
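The E-step probabilities for the 3-state example can be computed directly from the transition intensities; the linear predictor values below are hypothetical:

```python
import math

# Hypothetical linear predictors for the two transitions out of state 1.
eta_12, eta_13 = -2.0, -3.0
lam_12, lam_13 = math.exp(eta_12), math.exp(eta_13)

# Multinomial E-step probabilities: E[d_1k] = lam_1k / (1 + sum_k lam_1k),
# with the remaining mass assigned to staying in state 1.
denom  = 1.0 + lam_12 + lam_13
p_stay = 1.0 / denom            # E[d_11]
p_12   = lam_12 / denom         # E[d_12]
p_13   = lam_13 / denom         # E[d_13]
```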
As noted in previous work, when the event times are heterogeneous, PH regression is the correct model for estimating relative risk \citep{abbott1985logistic,ingram1989empirical,thompson1977treatment,callas1998empirical}. However, PH and logistic regression estimate similar relative risks in the discrete and grouped event time setting with low incidence rates. We showed that under the discrete-time setting, exponential PH regression is an implicitly penalized logistic regression, resulting in shrinkage of the relative risk estimates. This is desirable in many situations, specifically in our case where we are building HMMs, a model with a complicated error-in-response mechanism. The penalty $\mathcal{P}(\boldsymbol{\beta}_s)$ slightly favors low probabilities of transitioning out of a state, which is useful to mitigate false positives. As a result, penalization shrinks the transition probability matrices $\boldsymbol{\Gamma} (t_{ij})$ towards the identity matrix and favors an extended sojourn time. Of note, post-estimation HMM procedures such as the Viterbi algorithm for finding the most likely path also favor low probabilities of transitioning out of a state \citep{viterbi1967error,forney1973viterbi}. In addition, we also invoke mixed effect modeling, which numerically benefits from penalization. Next, fitting multiple independent weighted frailty models in the M-step is a more straightforward procedure than fitting an analogous weighted mixed multinomial logistic regression. The PH model is the correct model for heterogeneous event times, but even in the case of discrete-time transitions, PH modeling's implicit penalization has several useful properties for estimating HMMs.
\subsection{Competing Methods} \label{comp}
We denote the alternating recurrent event exponential PH model outlined in Section \ref{est} as PH-HMM. For our competing methods, we use the CT-HMM and discrete-time mixed effect logistic regression HMM (denoted as DT-HMM), described in our Web Appendix A. In addition, we define a two step estimator based on Poisson mixture model (PMM) \textit{maximum a posteriori} (MAP) estimates in the Web Appendix A. All competing methods are initialized using PMM MAP estimates for state labels and EM is repeated until $\| \Theta^{(l+1)} - \Theta^{(l)} \|_1 \leq 10^{-4}$.
\section{Results} \label{results}
The EM algorithm converges and obtains the MLEs $\left\{ \widehat{\boldsymbol{\beta}}_s, \widehat{\mathbf{b}}_s, \widehat{\sigma}^2_s, \widehat{\mathbf{c}}_s, \widehat{\mathbf{d}}_s \right\}$, with the final likelihood evaluation being equivalent to that of weighted exponential frailty models. As a result, we can generalize existing large-sample asymptotic inference for mixed effect models to our weighted setting, using the weights derived in Section \ref{EStep}.
\begin{theorem} \label{thm1} Regression coefficients $\boldsymbol{\beta}_s$ are asymptotically normally distributed $\widehat{\boldsymbol{\beta}}_s \stackrel{D}{\sim} \mathrm{N} \left( \boldsymbol{\beta}_s, \Sigma_s \right)$ where $\Sigma_s$ are corresponding $\boldsymbol{\beta}_s$ elements of the inverse observed informations $\mathcal{I}^{-1}(\boldsymbol{\beta}_s, \mathbf{b}_s) $. The observed informations are given as $ \mathcal{I}(\boldsymbol{\beta}_s, \mathbf{b}_s) = \\ \mathbf{U}^\top \left(
- \nabla^2_{\boldsymbol{\eta}_s} \log L(\boldsymbol{\beta}_s | \mathbf{b}_s, \widehat{\mathbf{c}}_s, \widehat{\mathbf{d}}_s ) \right) \mathbf{U} + \mathrm{diag}(\mathbf{0}_{ p \times p}, \sigma_s^{-2} \mathbf{I} ) $ where $
- \nabla^2_{\boldsymbol{\eta}_s} \log L(\boldsymbol{\beta}_s | \mathbf{b}_s, \widehat{\mathbf{c}}_s, \widehat{\mathbf{d}}_s ) = \\ \textup{diag} \big( \Delta(t_{i1}) \lambda_s (t_{i1}), \Delta(t_{i1}) \lambda_s (t_{i1}), \dots, \Delta(t_{ij}) \lambda_s(t_{ij}), \Delta(t_{ij}) \lambda_s(t_{ij}), \dots \big) \mathbf{W}_s $, $$ {\mathbf{U}^\top = \left[ \begin{array}{llllllll} \mathbf{x} (t_{i1}) & \mathbf{x} (t_{i1}) & \mathbf{x} (t_{i2}) & \mathbf{x} (t_{i2}) & \dots & \mathbf{x} (t_{ij}) & \mathbf{x} (t_{ij}) & \dots \\ \mathbf{z} (t_{i1}) & \mathbf{z} (t_{i1}) & \mathbf{z} (t_{i2}) & \mathbf{z} (t_{i2}) & \dots & \mathbf{z} (t_{ij}) & \mathbf{z} (t_{ij}) & \dots \end{array} \right]} $$ and $ \mathbf{W}_s = \textup{diag} \big( \widehat{d}_s(t_{i1}), \widehat{c}_s(t_{i1}), \widehat{d}_s(t_{i2}), \widehat{c}_s(t_{i2}),\dots, \widehat{d}_s(t_{ij}), \widehat{c}_s(t_{ij}),\dots \big)$. \end{theorem}
\subsection{Simulation Studies} \label{sim_study}
For the alternating event process, we use a 24h period sine function as our time varying covariate, $x(t) = \sin \left( {2 \pi t}/{24} \right) \in [-1,1]$ and independently draw censoring times from a uniform distribution, $r_{ij} \sim U(0,h_{\text{max}})$. For the PH models we sequentially increment $t_{ij}$ by $\Delta(t_{ij})$ and simulate covariates $x(t_{ij})$. We simulate the complete data likelihood with multiple individuals but shared $\boldsymbol{\beta}_s$, by drawing from $$ \begin{array}{l} \lambda_1(t_{ij}) \mid ( A(t_{i(j-1)}) = 1 ) = \exp \left( \eta_1(t_{ij}) \right) = \exp \left( \mathbf{x}^\top (t_{ij}) \boldsymbol{\beta}_1\right) = \exp \left( \beta_{10} + \beta_{11} x(t_{ij}) \right) \end{array} $$ and $\lambda_2(t_{ij}) \mid ( A(t_{i(j-1)}) = 2 ) = \exp \left( \beta_{20} + \beta_{21} x(t_{ij}) \right) $ with $v_s(t_{ij}) \sim \text{Exp} \left( \lambda_s(t_{ij}) \right)$, $\Delta(t_{ij}) = \min \{v_s(t_{ij}), r_{ij}\}$, and $d_s (t_{ij}) = \mathbb{I}[ \Delta(t_{ij}) = v_s(t_{ij}) ] = \mathbb{I}[ A(t_{i(j-1)})=s, A(t_{ij}) \ne s ]$. Pseudocode for generating the alternating survival data can be found in Algorithm 1 of the Web Appendix. Similarly, for a discrete-time alternating event process, we simulate transition events as $d_s (t_{ij}) \sim \text{Bernoulli}\left( \text{expit}(\eta_s(t_{ij})) \right)$ and increment $t_{ij}$ by $\Delta(t_{ij})=1$. Pseudocode for generating the discrete alternating event data can be found in Algorithm 2 of the Web Appendix. Once true state labels $A(t_{ij})$ are obtained, we simulate state dependent observations from Poisson distributions to complete the simulation of the HMMs.
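A minimal Python sketch of this data-generating scheme is given below; the paper's Algorithm 1 appears in the Web Appendix, so this reimplementation, including its parameter values, should be read as an illustrative assumption rather than the exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_alternating(beta, h_max=10.0, t_end=240.0):
    """Simulate one alternating two-state event process.

    beta[s] = (intercept, slope) for state s+1; the covariate is a 24h sine.
    State-dependent Poisson counts would be drawn afterwards from the labels.
    """
    t, state = 0.0, 1
    times, states = [], []
    while t < t_end:
        x = np.sin(2 * np.pi * t / 24)                 # time-varying covariate
        lam = np.exp(beta[state - 1][0] + beta[state - 1][1] * x)
        v = rng.exponential(1.0 / lam)                 # latent transition time
        r = rng.uniform(0.0, h_max)                    # censoring time
        delta = min(v, r)                              # observed gap time
        t += delta
        if v <= r:                                     # transition event occurred
            state = 2 if state == 1 else 1
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

# Illustrative coefficients with opposite-signed covariate effects.
times, states = simulate_alternating(beta=[(-2.0, 1.0), (-2.0, -1.0)])
```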
We simulated 500 replicates using four different sets of parameters and three methods for generating the data. For each set of parameters, we simulated data using Algorithm 1 with maximum censoring times set to $h_{\text{max}}=10$ and $h_{\text{max}}=1$ to evaluate performance under heterogeneous and grouped event times. We also used Algorithm 2 in order to study models fitted to data simulated from a discrete-time process. We have a total of 12 different cases outlined in Table \ref{sim}. In summary, Cases 1.1--1.3 looked at a low incidence rate of state transition and a large distributional difference between $f (y(t_{ij}) | {\mu_1})$ and $f (y(t_{ij}) |{\mu_2})$; Cases 2.1--2.3 looked at a high incidence rate and large distributional difference; Cases 3.1--3.3 looked at a low incidence rate and small distributional difference; and Cases 4.1--4.3 looked at a high incidence rate and small distributional difference. For each case we fit models using the PMM, DT-HMM, CT-HMM and PH-HMM approaches. The E-step update can be computed quickly in parallel, where each $i$th HMM is processed independently. We present the mean accuracy of recovering the true state labels using the MAP and the empirical standard error (SE), over 500 replicates, in Table \ref{sim}. We also present mean parameter estimates, empirical standard errors and mean square errors (MSE) in Tables \ref{sim_beta} and \ref{sim_etc}.
First, we present our findings regarding the accuracy of recovering the true label. When the difference between state dependent distributions is large, as in Cases 1.1--1.3 and 2.1--2.3, all competing methods have comparable accuracy. In Cases 3.1--3.3 and 4.1--4.3, with a small distributional difference, we observe lower accuracy in PMM. In Cases 3.2 and 4.2, with low incidence rates and grouped event times, DT-HMM, CT-HMM and PH-HMM have comparable accuracy for reasons outlined in Section \ref{penalty}. When $\beta_{11}$ and $\beta_{21}$ have opposite signs, the range of $\Delta(t_{ij})$ is small and the incidence is low, the estimated CT-HMM transition probabilities from the matrix exponential, $\boldsymbol{\Gamma} (t_{ij}) = \exp \left( \mathbf{Q}(t_{ij}) \Delta(t_{ij}) \right)$, approximate the transition probability matrices from DT-HMM and PH-HMM. As a result, in these cases, DT-HMM, CT-HMM and PH-HMM have comparable accuracy. However, given heterogeneous event times and high incidence rates (Case 4.1), the model misspecification of CT-HMM noticeably reduces accuracy, which is also reflected in the high MSE of its parameter estimates. In the case of small distributional difference and heterogeneous event times (Cases 3.1 and 4.1), we observed that DT-HMM was more accurate than PH-HMM but had greater bias in its coefficient estimates (see Tables \ref{sim_beta} and \ref{sim_etc}).
We outline our findings regarding heterogeneous event times, $\Delta(t_{ij}) \in [0,h_{\text{max}}]$ for $h_{\text{max}}=10$ (Cases 1.1, 2.1, 3.1 and 4.1). In these cases, we observed that DT-HMM has poor performance in estimating the relative risk, specifically the baseline risk or intercept. In Cases 1.1, 2.1, 3.1 and 4.1 we observed large MSE in the DT-HMM coefficient estimates, emphasizing that discrete-time approaches are sensitive to heterogeneous event times. Of note, in Case 2.1 (heterogeneous event times, high incidence rates and a large distributional difference), PH-HMM clearly outperforms CT-HMM and DT-HMM. This finding is in line with Section \ref{penalty}, where we noted that DT-HMM and PH-HMM results are similar only in the grouped event time, low incidence setting.
Next, we discuss additional implications of the derivations from Section \ref{penalty} in our simulation study. When evaluating parameter estimation, we observed higher MSE of $\mu_1$ and $\mu_2$ for PMM in small distributional difference settings (Cases 3.1--3.3 and 4.1--4.3). The PMM method is a two step estimator and does not use covariate information to estimate $\mu_1$ and $\mu_2$, leading to highly variable estimates. Because PMM does not use $\{ \mu_1, \mu_2 \}$ and $\boldsymbol{\beta}_s$ simultaneously during estimation, PMM estimates are generally less accurate than those of the other competing methods. In the cases with a low incidence rate and discrete or grouped event times (Cases 1.2, 1.3, 3.2 and 3.3), we observed that DT-HMM, CT-HMM and PH-HMM yielded similar parameter estimates. In cases with a small distributional difference, we observed that all estimates become less accurate when compared with the large distributional difference setting. When considering a grouped event time, low incidence rate with a PH data generating process (Cases 1.2 and 3.2), we observed that the relative risks from DT-HMM are inflated when compared with the truth and with the PH-HMM estimates. We may contrast these results with the cases where the data is generated from a logistic regression (Cases 1.3, 2.3, 3.3 and 4.3). DT-HMM is best suited for estimation when the underlying process is a logistic regression. However, in Cases 1.3, 2.3, 3.3 and 4.3, PH-HMM yields estimates where $\beta_{10}$ and $\beta_{20}$ are pulled towards $-\infty$ and $\beta_{11}$ and $\beta_{21}$ are pulled towards 0. These findings are aligned with the derivations found in Section \ref{penalty}. This shrinkage slightly favors lower probabilities of transitioning out of a state, i.e., shrinking state transition matrices towards the identity matrix. In other words, PH modeling penalizes HMMs generated from discrete-time processes to favor an extended sojourn time while maintaining comparable estimates for $\mu_s$.
This shrinkage is useful for mitigating overfitting, especially when the goal is to fit a complicated HMM with no prior knowledge on the true data generating process.
From our simulation studies, we found that PH-HMM excels in a number of different cases. DT-HMM and CT-HMM are sensitive to heterogeneous event times, leading to inaccurate state label recovery and biased parameter estimation. While DT-HMM is more accurate than PH-HMM at state label recovery in Cases 3.1 and 4.1, its parameter estimates have higher MSE and bias. DT-HMM generally has higher MSE than PH-HMM in cases where the data is generated from PH models (Cases 1.1, 1.2, 2.1, 2.2, 3.1, 3.2, 4.1 and 4.2). The shrinkage properties outlined in Section \ref{penalty} carry over into our simulation study. In most cases, including when the data is generated from a logistic regression, DT-HMM and PH-HMM have comparable accuracy and state dependent distribution estimates, even though PH-HMM coefficients exhibit shrinkage. This shrinkage property has many uses which extend beyond simple simulation studies, as we outline next in our real data analysis examples.
\subsection{Application: mHealth Data} A sample of $I = 41$ individuals was recruited via the Penn/CHOP Lifespan Brain Institute or through the Outpatient Psychiatry Clinic at the University of Pennsylvania as part of a study of affective instability in youth \citep{xia2022mobile}. All participants provided informed consent to all study procedures. This study was approved by the University of Pennsylvania Institutional Review Board. For each individual, roughly 3 months of data were collected using the Beiwe platform \citep{torous2016new}. Accelerometer measures (meters per second squared) for the x, y and z axes, screen-on events for Android devices and screen-unlock events for iOS (Apple) devices were acquired through the Beiwe platform; we refer to both as ``screen-on events'' in this manuscript. In general, we recommend a minimum of 30 days of data in order to fit our individual-specific model with hour-of-day random intercepts.
First, we analyzed the data under a discrete-time setting where we impute data for each missing hour. By collecting both screen-on events and accelerometer data, we are able to construct a missing at random (MAR) model for imputing missing accelerometer data. In our case, there is missing accelerometer data during periods of screen-on activity, i.e., accelerometer data is missing over a given hour while many screen-on events are observed over the same period. On the other hand, there are periods of dormancy: accelerometer data is missing and there are also no screen-on events. More specifically, periods of dormancy occur when accelerometer measurements are missing due to user and device related factors, such as the phone being powered off or being in airplane mode. Periods of dormancy have greater probability of missing accelerometer features and are identified using a two state hidden semi-Markov model with Bernoulli state dependent distributions \citep{bulla2010hsmm}. Missing mean acceleration magnitudes from dormant periods were imputed using the minimum of the acceleration features (excluding outliers), while missing data assigned to periods of screen-on activity were imputed by regressing accelerometer features on $Y(t_{ij})$ over all hours where data is completely observed. For the heterogeneous event time example, we did not impute missing data and instead absorbed the duration of dormancy into the event times $\Delta(t_{ij})$. A period of consecutive missing acceleration magnitudes over 24h constitutes the end of a Markov chain and the start of a new chain, where the likelihoods of multiple HMM sequences can be multiplied together for parameter estimation.
\subsubsection{Estimating Strength of Routine in Youth with Affective Disorders} In psychiatric studies, regularity of a rhythm is defined as the association of time-of-day and state membership; the effect of hour-of-day on activity and rest state membership represents the diurnal rhythm. As an illustrative example, we fit a model with hour-of-day effects as normally distributed random intercepts in our HMMs; for each individual we fit PH models $\lambda_s(t_{ij}) = \exp \left( \mathbf{x}^\top(t_{ij}) \boldsymbol{\beta}_{s} + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s} \right)$, where $ \mathbf{b}_s \sim \mathrm{N} ( \mathbf{0}_{24}, \tau_s \mathbf{I}_{24} )$ are 24 hour-of-day random intercepts. With hour-of-day random intercepts, $\mathbf{b}_1$ and $\mathbf{b}_2$ are each of length $24$ and $[\mathbf{z}(t_{ij})]_r=1$ only if $t_{ij}$ is in the $r$th hour of the day. Rates of transition from active-to-rest states are given as $\lambda_1(t_{ij})$ and rates of transition from rest-to-active states are given as $\lambda_2(t_{ij})$; an example of PH-HMM outputs can be found in Figure \ref{fig:3}. The inverse variances of the random intercepts can be interpreted as an L2 penalty on the hour-of-day effects, which disappears as $\tau_s \rightarrow \infty$. We quantify the strength of diurnal effects for an individual by looking at the variances $\tau_s$, where large variances correspond with large hour-of-day effect sizes and greater regularity in diurnal rhythms, with an example in Figure \ref{fig:3}.
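The indicator design $\mathbf{z}(t_{ij})$ described above can be constructed as a one-hot matrix over the 24 hours of the day; the helper below is a hypothetical illustration, not code from the analysis:

```python
import numpy as np

def hour_of_day_design(hours):
    """One-hot design matrix for 24 hour-of-day random intercepts:
    [z(t)]_r = 1 only if t falls in the r-th hour of the day."""
    Z = np.zeros((len(hours), 24))
    Z[np.arange(len(hours)), np.asarray(hours) % 24] = 1.0
    return Z

# Hour 25 wraps to hour 1 of the next day.
Z = hour_of_day_design([0, 13, 23, 25])
```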
\subsubsection{Population HMM: Differences Between Operating Systems}
In addition, we can fit a population model, with random intercepts specific to each individual, through estimation using the likelihood $\prod_{i=1}^I L( \boldsymbol{\beta}_1, \mathbf{b}_1, \sigma^2_1, \boldsymbol{\beta}_2, \mathbf{b}_2, \sigma^2_2, \mu_1, \mu_2 | \mathbf{A}_i )$. However, for iOS devices we only have screen-unlock events, i.e., entering a passcode to unlock the phone. Android devices have screen-on events, which occur when the phone screen turns on, such as when receiving a message; the phone does not need to be unlocked for the screen to be turned on. Screen-unlock events are less frequent and a subset of screen-on events, causing the counts $Y(t_{ij})$ to be lower for iOS devices. The relationship between acceleration $x(t_{ij})$ and $Y(t_{ij})$ may experience effect modification due to operating system (OS). We can test for an interaction between OS and acceleration, while controlling for the interaction with user sex in our regression and for other individual effects with random intercepts. Android devices and males serve as the baseline in this analysis. For the active-to-rest model, $\lambda_1(t_{ij})$, and the rest-to-active model, $\lambda_2(t_{ij})$, $ \lambda_s(t_{ij}) = \exp \left( \mathbf{x}^\top(t_{ij}) \boldsymbol{\beta}_{s} + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s} \right) = \exp \left( \beta_{s0} + \beta_{s1} x(t_{ij}) + \beta_{s2}x(t_{ij}) \mathrm{sex} + \beta_{s3}x(t_{ij}) \mathrm{OS} + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s} \right)$, where $\mathbf{b}_s \sim \mathrm{N} (\mathbf{0}_{41},\sigma^2_s \mathbf{I}_{41})$ are 41 individual specific random intercepts and $\mathbf{z}^\top (t_{ij})$ are individual specific indicators. We fit our competing methods and test for interaction in the discrete and heterogeneous event time settings, where estimates can be found in Tables \ref{HMM_fit} and \ref{HMM_fit2}.
We found that there was no significant interaction between OS and acceleration in the discrete-time active-to-rest model but did find significant interaction in the heterogeneous event time active-to-rest model. In addition, we found that there was significant interaction between OS and acceleration in the discrete-time rest-to-active model but did not find significant interaction in the heterogeneous event time rest-to-active model. During the active state, we know that counts $Y(t_{ij})$, are lower for iOS devices. In order to compensate for lower screen-on counts in iOS devices, iOS rest-to-active transitions require a higher magnitude of acceleration to achieve the same transition rate as an Android device, hence the negative sign of $\beta_{23}$. During the rest state, we know iOS devices have zero-inflated counts, where excess zeros are due to not being able to record a screen-on event. In the active-to-rest model, we may achieve the same transition rate while having higher acceleration in iOS devices than Android devices, hence the positive sign of $\beta_{13}$. Our results suggest that the magnitude of effect acceleration has on state transition depends on OS, but further investigation is needed.
For the discrete-time setting, we estimated the active state distribution $\mathbb{E}[y(t_{ij}) | A(t_{ij})=1] = \widehat{\mu}_1 \approx 8$ and the rest state distribution $\mathbb{E}[y(t_{ij}) | A(t_{ij})=2] = \widehat{\mu}_2 \approx 0.4$. For the heterogeneous event time setting, we estimated $\mathbb{E}[y(t_{ij}) | A(t_{ij})=1] = \widehat{\mu}_1 \approx 9$ and $\mathbb{E}[y(t_{ij}) | A(t_{ij})=2] = \widehat{\mu}_2 \approx 1$. Screen-on counts separate into stark clusters: the gap between $\widehat{\mu}_1$ and $\widehat{\mu}_2$ resembles the large distributional difference from our simulation study. We found that the rate of transition from active to rest is negatively associated with acceleration, while the rate of transition from rest to active is positively associated with acceleration; both associations are statistically significant ($p<0.05$) for all competing methods. These HMM parameter estimates relating screen-on counts to acceleration align with common intuition. The PMM method, which models the MAP estimates first and then combines them into a population level model, does not account for acceleration when imputing state labels and resulted in a poor fit. CT-HMM and PH-HMM estimates are comparable and aligned with the findings from the simulation study. In addition to large distributional differences, diurnal active-rest cycles are expected to be low incidence rate processes, where we anticipate only a few transitions between states in a 24 hour period. We observed that the magnitudes of the relative risk estimates for CT-HMM and PH-HMM are comparable to each other but smaller than the DT-HMM estimates. This difference becomes more noticeable in the heterogeneous event time setting, which DT-HMM is not equipped to handle. For many parameters, DT-HMM relative risk estimates are 3 times those of PH-HMM, while estimates of the state dependent distribution parameters $\{ \mu_1, \mu_2 \}$ are similar. 
PH-HMM allows us to achieve comparable estimation of state dependent distributions while leveraging shrinkage to avoid overfitting coefficients.
\vspace*{-1cm} \section{Discussion} \label{discuss}
For a latent state setting, we proposed a method for estimating an alternating recurrent event exponential PH model with shared log-normal frailties using the EM algorithm. Our E-step imputation involves a discrete-time HMM with logistic or multinomial regression transition probabilities and normally distributed random intercepts. The HMM obtained during the E-step of our EM algorithm is an alternative method for estimating mixed hidden Markov models, which are typically fit via logistic or multinomial regressions \citep{altman2007mixed,maruotti2012mixed}. The M-step conveniently involves fitting several independent PH models rather than a multinomial regression with many states. In addition, we showed that the PH model applied to the discrete-time setting is a penalized logistic regression that shrinks transition probability matrices towards the identity matrix. Our framework can accommodate random intercepts to account for longitudinal structure, such as data collected from the same individual or hour-of-day periodic effects. We derived asymptotic distributions for the PH regression coefficients and random intercepts, where coefficients have a hazard ratio interpretation akin to the Cox PH model. Our PH-HMM approach is a flexible method for modeling complex mHealth datasets, where heterogeneous event times can be incorporated into the PH regression while accounting for latent states. If the underlying data is a discrete-time process, then PH modeling offers slight penalization that mitigates overfitting; otherwise, PH models are more appropriate for heterogeneous event time processes.
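The E-step imputation described above can be illustrated with a minimal forward-backward pass. The sketch below (assumptions: two states, Poisson emissions, a fixed $2\times 2$ transition matrix, no random intercepts) computes the posterior state probabilities $\widehat{u}_s(t)$; in the full algorithm the transition matrix would itself come from the fitted PH regressions, and the M-step would refit those regressions using these posteriors as weights.

```python
import math

def poisson_pmf(y, mu):
    return math.exp(-mu) * mu ** y / math.factorial(y)

def forward_backward(y, P, mu, pi0=(0.5, 0.5)):
    """Posterior probabilities u_s(t) = Pr(A(t)=s | y) for a two-state
    HMM with Poisson emission means mu[0], mu[1] and transition matrix P.
    Per-step normalization is used for numerical stability."""
    T = len(y)
    # forward pass
    a = [[0.0, 0.0] for _ in range(T)]
    for s in range(2):
        a[0][s] = pi0[s] * poisson_pmf(y[0], mu[s])
    for t in range(1, T):
        for s in range(2):
            a[t][s] = poisson_pmf(y[t], mu[s]) * sum(
                a[t - 1][r] * P[r][s] for r in range(2))
        c = sum(a[t]); a[t] = [v / c for v in a[t]]
    # backward pass
    b = [[1.0, 1.0] for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for s in range(2):
            b[t][s] = sum(P[s][r] * poisson_pmf(y[t + 1], mu[r]) * b[t + 1][r]
                          for r in range(2))
        c = sum(b[t]); b[t] = [v / c for v in b[t]]
    # combine and normalize: u_s(t) proportional to a[t][s] * b[t][s]
    post = []
    for t in range(T):
        w = [a[t][s] * b[t][s] for s in range(2)]
        z = sum(w); post.append([v / z for v in w])
    return post

# Sticky transitions with well-separated emission means
# (active ~ 10 counts, rest ~ 1 count), as in the simulation cases.
P = [[0.9, 0.1], [0.1, 0.9]]
u = forward_backward([12, 9, 11, 0, 1, 0], P, mu=(10.0, 1.0))
```

With well-separated emission means the posteriors are nearly degenerate, which is the "stark clusters" regime where all competing methods attain high state-labeling accuracy.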
We estimated two models in our real data analysis: one in which missing data were accounted for through heterogeneous event times and another in which missing data were imputed. Because screen activity and accelerometer data are highly associated, and screen activity data are never missing, the MAR assumption in our imputation model is well founded. That said, if any residual dependence remains in the missingness even after accounting for screen activity, an MNAR model and its corresponding sensitivity analyses would be more appropriate. We presented a flexible regression procedure that can accommodate different parameterizations of random effects and the use of other statistical structures such as semiparametric regression and multilevel models. However, computational complexity and interpretability should be considered when parameterizing complicated models such as HMMs. A key advantage of our method is that complicated statistical structures can be incorporated into independent PH regressions, which then simplify to multinomial transition probabilities during the E-step. Though model selection statistics such as AIC/BIC were not explored in our manuscript, we evaluated the practical implications of PH modeling under a variety of criteria. Simulation results and the mHealth data analysis suggest that our PH-HMM excels in a variety of situations.
\backmatter
\pagebreak
\section*{Supporting Information} Additional supporting information may be found online in the Supporting Information section at the end of the article.
\section*{Figures and Tables}
\begin{table} \caption{{Simulation: Case Parameters and Accuracy for Competing Methods}. Mean accuracy and empirical standard errors based on 500 replicates. Each replicate had $I=50$ individuals with $n_{i}=25$ state transitions. The survival models were simulated with Algorithm 1 and the logistic models were simulated with Algorithm 2 from the Web Appendix. Cases 1.1--1.3 looked at low incidence rate and large distributional difference; Cases 2.1--2.3 looked at high incidence rate and large distributional difference; Cases 3.1--3.3 looked at low incidence rate and small distributional difference; and Cases 4.1--4.3 looked at high incidence rate and small distributional difference. } \begin{center} \addtolength{\leftskip} {-2cm} \addtolength{\rightskip}{-2cm} \footnotesize
\begin{tabular}{ |ccc||cccc| }
\hline
& & & \multicolumn{4}{c|}{Methods: mean accuracy (SE)} \\
\cline{4-7}
Case & Simulation & Parameters & PMM & DT-HMM & CT-HMM & PH-HMM \\
\hline
1.1 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 10 \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.9851(0.0035) & 0.9904(0.0026) & 0.9835(0.0035) & 0.9842(0.0037) \\
\hline
1.2 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 1 \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.9847(0.0215) & 0.9985(0.0011) & 0.9982(0.0012) & 0.9984(0.0011) \\
\hline
1.3 &
$\begin{array}{l}
\text{logistic} \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.9852(0.0034) & 0.9972(0.0015) & 0.9972(0.0015) & 0.9972(0.0015) \\ \hline \\ \hline
2.1 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 10 \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.9850(0.0035) & 0.9943(0.0021) & 0.9790(0.0048) & 0.9908(0.0026) \\
\hline
2.2 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 1 \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.9856(0.0037) & 0.9980(0.0012) & 0.9974(0.0013) & 0.9978(0.0014) \\
\hline
2.3 &
$\begin{array}{l}
\text{logistic} \end{array}$
&
$\begin{array}{lll}
\mu_1=10 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.9850(0.0036) & 0.9968(0.0015) & 0.9967(0.0016) & 0.9967(0.0016) \\ \hline \\ \hline
3.1 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 10 \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.8973(0.0085) & 0.9255(0.0083) & 0.8953(0.0100) & 0.8716(0.0161) \\
\hline
3.2 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 1 \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.8966(0.0242) & 0.9879(0.0042) & 0.9865(0.0043) & 0.9820(0.0067) \\
\hline
3.3 &
$\begin{array}{l}
\text{logistic} \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -3 & \beta_{11} = -1 \\
\mu_2=1 & \beta_{20} = -3 & \beta_{21} = 1 \end{array}$
& 0.8964(0.0205) & 0.9769(0.0051) & 0.9770(0.0051) & 0.9770(0.0051) \\ \hline \\ \hline
4.1 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 10 \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.8967(0.0187) & 0.9638(0.0054) & 0.8027(0.0242) & 0.9033(0.0137) \\
\hline
4.2 &
$\begin{array}{l}
\text{survival} \\
h_{\text{max}} = 1 \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.8945(0.0299) & 0.9862(0.0039) & 0.9841(0.0043) & 0.9778(0.0067) \\
\hline
4.3 &
$\begin{array}{l}
\text{logistic} \end{array}$
&
$\begin{array}{lll}
\mu_1=5 & \beta_{10} = -2 & \beta_{11} = -5 \\
\mu_2=1 & \beta_{20} = -2 & \beta_{21} = 5 \end{array}$
& 0.8982(0.0085) & 0.9770(0.0047) & 0.9756(0.0050) & 0.9742(0.0054) \\ \hline \end{tabular} \end{center} \label{sim} \end{table}
\begin{table} \caption{{Simulation: Estimates, Standard Errors and Mean Square Errors for ${\mu}_1$ and $\boldsymbol{\beta}_1$}. Mean parameter estimates (Est.), empirical standard error (SE) and mean squared error based (MSE) on 500 replicates. Cases 1.1--1.3 looked at low incidence rate and large distributional difference; Cases 2.1--2.3 looked at high incidence rate and large distributional difference; Cases 3.1--3.3 looked at low incidence rate and small distributional difference; and Cases 4.1--4.3 looked at high incidence rate and small distributional difference.} \begin{center} \addtolength{\leftskip} {-2cm} \addtolength{\rightskip}{-2cm} \footnotesize
\begin{tabular}{ |cc||rrrr|rrrr|rrrr| }
\hline
Case & Model & $\mu_{1}$ & Est. & SE & MSE & $\beta_{10}$ & Est. & SE & MSE & $\beta_{11}$ & Est. & SE & MSE \\
\hline
1.1 & PMM & 10 & 10.001 & 0.132 & 0.017 & -3 & -2.918 & 0.104 & 0.017 & -1 & -0.921 & 0.136 & 0.025 \\ survival & DT-HMM & 10 & 9.999 & 0.129 & 0.016 & -3 & -1.294 & 0.116 & 2.923 & -1 & -1.091 & 0.166 & 0.036 \\ $h_{\text{max}}=10$ & CT-HMM & 10 & 9.904 & 0.132 & 0.027 & -3 & -3.075 & 0.107 & 0.017 & -1 & -1.028 & 0.148 & 0.023 \\ & PH-HMM & 10 & 9.885 & 0.134 & 0.031 & -3 & -3.141 & 0.113 & 0.033 & -1 & -1.072 & 0.153 & 0.029 \\
\hline
1.2 & PMM & 10 & 9.985 & 0.233 & 0.054 & -3 & -2.113 & 0.261 & 0.855 & -1 & -1.150 & 0.503 & 0.275 \\ survival & DT-HMM & 10 & 9.993 & 0.123 & 0.015 & -3 & -3.699 & 0.359 & 0.617 & -1 & -1.058 & 0.561 & 0.318 \\ $h_{\text{max}}=1$ & CT-HMM & 10 & 9.987 & 0.124 & 0.015 & -3 & -3.053 & 0.356 & 0.129 & -1 & -1.045 & 0.554 & 0.308 \\ & PH-HMM & 10 & 9.998 & 0.123 & 0.015 & -3 & -2.957 & 0.342 & 0.119 & -1 & -1.032 & 0.538 & 0.290 \\
\hline
1.3 & PMM & 10& 10.000 & 0.130 & 0.017 & -3 & -2.614 & 0.184 & 0.183 & -1 & -0.768 & 0.242 & 0.112 \\ logistic & DT-HMM & 10 & 10.000 & 0.126 & 0.016 & -3 & -3.013 & 0.217 & 0.047 & -1 & -1.025 & 0.301 & 0.091 \\ & CT-HMM & 10 & 9.999 & 0.126 & 0.016 & -3 & -3.077 & 0.207 & 0.049 & -1 & -0.966 & 0.281 & 0.080 \\ & PH-HMM & 10 & 9.999 & 0.126 & 0.016 & -3 & -3.077 & 0.207 & 0.049 & -1 & -0.966 & 0.281 & 0.080 \\
\hline
2.1 & PMM & 10 & 9.990 & 0.129 & 0.017 & -2 & -1.995 & 0.126 & 0.016 & -5 & -3.758 & 0.430 & 1.727 \\ survival & DT-HMM & 10 & 9.985 & 0.125 & 0.016 & -2 & -0.123 & 0.162 & 3.548 & -5 & -5.684 & 0.574 & 0.797 \\ $h_{\text{max}}=10$ & CT-HMM & 10 & 9.826 & 0.149 & 0.053 & -2 & -2.256 & 0.097 & 0.075 & -5 & -3.353 & 0.345 & 2.831 \\ & PH-HMM & 10 & 9.952 & 0.129 & 0.019 & -2 & -2.129 & 0.106 & 0.028 & -5 & -4.371 & 0.326 & 0.501 \\
\hline
2.2 & PMM & 10 & 10.001 & 0.244 & 0.059 & -2 & -1.508 & 0.240 & 0.300 & -5 & -2.759 & 0.523 & 5.296 \\ survival & DT-HMM & 10 & 10.006 & 0.136 & 0.018 & -2 & -2.679 & 0.395 & 0.618 & -5 & -5.605 & 1.420 & 2.378 \\ $h_{\text{max}}=1$ & CT-HMM & 10 & 9.994 & 0.136 & 0.018 & -2 & -2.135 & 0.380 & 0.162 & -5 & -5.465 & 1.069 & 1.356 \\ & PH-HMM & 10 & 10.013 & 0.136 & 0.019 & -2 & -1.935 & 0.324 & 0.109 & -5 & -4.807 & 0.896 & 0.838 \\
\hline
2.3 & PMM & 10 & 9.998 & 0.130 & 0.017 & -2 & -2.016 & 0.159 & 0.025 & -5 & -2.619 & 0.257 & 5.736 \\ logistic & DT-HMM & 10 & 9.996 & 0.128 & 0.016 & -2 & -2.016 & 0.238 & 0.057 & -5 & -5.173 & 0.722 & 0.550 \\ & CT-HMM & 10 & 9.995 & 0.128 & 0.016 & -2 & -2.348 & 0.166 & 0.149 & -5 & -3.448 & 0.314 & 2.508 \\ & PH-HMM & 10 & 9.997 & 0.128 & 0.016 & -2 & -2.335 & 0.165 & 0.139 & -5 & -3.404 & 0.308 & 2.641 \\
\hline
3.1 & PMM & 5 & 5.004 & 0.158 & 0.025 & -3 & -2.509 & 0.073 & 0.247 & -1 & -0.571 & 0.096 & 0.194 \\ survival & DT-HMM & 5 & 5.016 & 0.114 & 0.013 & -3 & -1.298 & 0.153 & 2.921 & -1 & -1.085 & 0.210 & 0.051 \\ $h_{\text{max}}=10$ & CT-HMM & 5 & 4.819 & 0.127 & 0.049 & -3 & -3.422 & 0.146 & 0.200 & -1 & -1.092 & 0.198 & 0.047 \\ & PH-HMM & 5 & 4.633 & 0.156 & 0.159 & -3 & -4.125 & 0.249 & 1.328 & -1 & -1.414 & 0.309 & 0.267 \\
\hline
3.2 & PMM & 5 & 4.997 & 0.215 & 0.046 & -3 & -0.693 & 0.139 & 5.343 & -1 & -0.747 & 0.192 & 0.101 \\ survival & DT-HMM & 5 & 5.004 & 0.093 & 0.009 & -3 & -3.616 & 0.447 & 0.579 & -1 & -1.146 & 0.631 & 0.419 \\ $h_{\text{max}}=1$ & CT-HMM & 5 & 4.991 & 0.095 & 0.009 & -3 & -3.113 & 0.443 & 0.209 & -1 & -1.111 & 0.606 & 0.378 \\ & PH-HMM & 5 & 5.060 & 0.101 & 0.014 & -3 & -2.265 & 0.401 & 0.701 & -1 & -0.877 & 0.714 & 0.523 \\
\hline
3.3 & PMM & 5 & 4.995 & 0.216 & 0.046 & -3 & -1.463 & 0.130 & 2.379 & -1 & -0.382 & 0.123 & 0.396 \\ logistic & DT-HMM & 5 & 5.007 & 0.098 & 0.010 & -3 & -3.009 & 0.236 & 0.056 & -1 & -1.038 & 0.317 & 0.102 \\ & CT-HMM & 5 & 5.004 & 0.098 & 0.010 & -3 & -3.098 & 0.223 & 0.059 & -1 & -0.990 & 0.292 & 0.085 \\ & PH-HMM & 5 & 5.004 & 0.097 & 0.009 & -3 & -3.097 & 0.223 & 0.059 & -1 & -0.985 & 0.291 & 0.085 \\
\hline
4.1 & PMM & 5 & 5.003 & 0.182 & 0.033 & -2 & -1.862 & 0.101 & 0.029 & -5 & -2.085 & 0.173 & 8.529 \\ survival & DT-HMM & 5 & 5.012 & 0.093 & 0.009 & -2 & -0.143 & 0.235 & 3.502 & -5 & -5.801 & 0.998 & 1.635 \\ $h_{\text{max}}=10$ & CT-HMM & 5 & 4.307 & 0.207 & 0.523 & -2 & -3.215 & 0.220 & 1.525 & -5 & -1.716 & 0.127 & 10.802 \\ & PH-HMM & 5 & 4.865 & 0.136 & 0.037 & -2 & -2.952 & 0.141 & 0.927 & -5 & -2.806 & 0.219 & 4.860 \\
\hline
4.2 & PMM & 5 & 4.985 & 0.217 & 0.047 & -2 & -0.481 & 0.140 & 2.326 & -5 & -1.233 & 0.167 & 14.215 \\ survival & DT-HMM & 5 & 5.013 & 0.090 & 0.008 & -2 & -2.521 & 0.624 & 0.659 & -5 & -5.764 & 2.183 & 5.340 \\ $h_{\text{max}}=1$ & CT-HMM & 5 & 4.994 & 0.090 & 0.008 & -2 & -2.213 & 0.456 & 0.253 & -5 & -5.074 & 1.228 & 1.510 \\ & PH-HMM & 5 & 5.088 & 0.099 & 0.018 & -2 & -1.229 & 0.246 & 0.655 & -5 & -2.477 & 0.413 & 6.534 \\
\hline
4.3 & PMM & 5 & 5.010 & 0.154 & 0.024 & -2 & -1.162 & 0.066 & 0.707 & -5 & -1.216 & 0.097 & 14.327 \\ logistic & DT-HMM & 5 & 5.018 & 0.093 & 0.009 & -2 & -1.973 & 0.319 & 0.102 & -5 & -5.239 & 0.978 & 1.012 \\ & CT-HMM & 5 & 5.015 & 0.091 & 0.009 & -2 & -2.400 & 0.185 & 0.194 & -5 & -3.270 & 0.289 & 3.078 \\ & PH-HMM & 5 & 5.024 & 0.091 & 0.009 & -2 & -2.338 & 0.179 & 0.146 & -5 & -3.033 & 0.273 & 3.945 \\ \hline \end{tabular} \end{center} \label{sim_beta} \end{table}
\begin{table} \caption{{Simulation: Estimates, Standard Errors and Mean Square Errors for ${\mu}_2$ and $\boldsymbol{\beta}_2$}. Mean parameter estimates, empirical standard error and mean squared error based on 500 replicates. Cases 1.1--1.3 looked at low incidence rate and large distributional difference; Cases 2.1--2.3 looked at high incidence rate and large distributional difference; Cases 3.1--3.3 looked at low incidence rate and small distributional difference; and Cases 4.1--4.3 looked at high incidence rate and small distributional difference.} \begin{center} \addtolength{\leftskip} {-2cm} \addtolength{\rightskip}{-2cm} \footnotesize
\begin{tabular}{ |cc||rrrr|rrrr|rrrr| }
\hline
Case & Model & $\mu_{2}$ & Est. & SE & MSE & $\beta_{20}$ & Est. & SE & MSE & $\beta_{21}$ & Est. & SE & MSE \\
\hline
1.1 & PMM & 1 & 1.001 & 0.043 & 0.002 & -3 & -2.904 & 0.089 & 0.017 & 1 & 0.909 & 0.125 & 0.024 \\ survival & DT-HMM & 1 & 0.998 & 0.040 & 0.002 & -3 & -1.291 & 0.105 & 2.932 & 1 & 1.083 & 0.157 & 0.032 \\ $h_{\text{max}}=10$ & CT-HMM & 1 & 1.001 & 0.043 & 0.002 & -3 & -3.059 & 0.095 & 0.013 & 1 & 1.027 & 0.142 & 0.021 \\ & PH-HMM & 1 & 1.004 & 0.043 & 0.002 & -3 & -3.120 & 0.099 & 0.024 & 1 & 1.074 & 0.148 & 0.027 \\
\hline
1.2 & PMM & 1 & 1.006 & 0.214 & 0.046 & -3 & -2.140 & 0.297 & 0.828 & 1 & 1.130 & 0.468 & 0.236 \\ survival & DT-HMM & 1 & 0.994 & 0.038 & 0.001 & -3 & -3.735 & 0.382 & 0.686 & 1 & 1.062 & 0.564 & 0.322 \\ $h_{\text{max}}=1$ & CT-HMM & 1 & 0.995 & 0.038 & 0.001 & -3 & -3.081 & 0.376 & 0.148 & 1 & 1.037 & 0.554 & 0.307 \\ & PH-HMM & 1 & 0.994 & 0.038 & 0.001 & -3 & -2.986 & 0.363 & 0.132 & 1 & 1.088 & 0.551 & 0.310 \\
\hline
1.3 & PMM & 1 & 1.001 & 0.043 & 0.002 & -3 & -2.595 & 0.170 & 0.193 & 1 & 0.727 & 0.216 & 0.121 \\ logistic & DT-HMM & 1 & 0.999 & 0.040 & 0.002 & -3 & -3.009 & 0.207 & 0.043 & 1 & 1.001 & 0.293 & 0.086 \\ & CT-HMM & 1 & 0.999 & 0.040 & 0.002 & -3 & -3.073 & 0.197 & 0.044 & 1 & 0.944 & 0.275 & 0.078 \\ & PH-HMM & 1 & 0.999 & 0.040 & 0.002 & -3 & -3.073 & 0.197 & 0.044 & 1 & 0.944 & 0.275 & 0.078 \\
\hline
2.1 & PMM & 1 & 1.003 & 0.043 & 0.002 & -2 & -1.933 & 0.119 & 0.019 & 5 & 3.936 & 0.497 & 1.379 \\ survival & DT-HMM & 1 & 1.001 & 0.041 & 0.002 & -2 & -0.119 & 0.169 & 3.566 & 5 & 5.612 & 0.525 & 0.649 \\ $h_{\text{max}}=10$ & CT-HMM & 1 & 0.999 & 0.046 & 0.002 & -2 & -2.112 & 0.101 & 0.023 & 5 & 3.838 & 0.435 & 1.540 \\ & PH-HMM & 1 & 1.002 & 0.041 & 0.002 & -2 & -2.090 & 0.109 & 0.020 & 5 & 4.357 & 0.387 & 0.563 \\
\hline
2.2 & PMM & 1 & 1.010 & 0.207 & 0.043 & -2 & -1.514 & 0.250 & 0.299 & 5 & 2.824 & 0.472 & 4.958 \\ survival & DT-HMM & 1 & 0.998 & 0.039 & 0.002 & -2 & -2.655 & 0.354 & 0.554 & 5 & 5.646 & 1.266 & 2.016 \\ $h_{\text{max}}=1$ & CT-HMM & 1 & 0.999 & 0.039 & 0.002 & -2 & -2.076 & 0.341 & 0.122 & 5 & 5.242 & 0.975 & 1.006 \\ & PH-HMM & 1 & 0.998 & 0.039 & 0.002 & -2 & -1.962 & 0.287 & 0.083 & 5 & 4.740 & 0.734 & 0.605 \\
\hline
2.3 & PMM & 1 & 1.000 & 0.044 & 0.002 & -2 & -1.933 & 0.146 & 0.026 & 5 & 2.508 & 0.252 & 6.274 \\ logistic & DT-HMM & 1 & 0.998 & 0.039 & 0.001 & -2 & -2.024 & 0.240 & 0.058 & 5 & 5.198 & 0.724 & 0.562 \\ & CT-HMM & 1 & 0.998 & 0.039 & 0.001 & -2 & -2.352 & 0.166 & 0.152 & 5 & 3.456 & 0.320 & 2.487 \\ & PH-HMM & 1 & 0.998 & 0.039 & 0.001 & -2 & -2.347 & 0.165 & 0.147 & 5 & 3.419 & 0.309 & 2.596 \\
\hline
3.1 & PMM & 1 & 1.009 & 0.111 & 0.012 & -3 & -2.615 & 0.076 & 0.154 & 1 & 0.611 & 0.095 & 0.161 \\ survival & DT-HMM & 1 & 0.979 & 0.053 & 0.003 & -3 & -1.298 & 0.148 & 2.918 & 1 & 1.107 & 0.209 & 0.055 \\ $h_{\text{max}}=10$ & CT-HMM & 1 & 1.011 & 0.065 & 0.004 & -3 & -3.351 & 0.149 & 0.145 & 1 & 1.127 & 0.198 & 0.055 \\ & PH-HMM & 1 & 1.069 & 0.085 & 0.012 & -3 & -4.057 & 0.308 & 1.213 & 1 & 1.633 & 0.379 & 0.543 \\
\hline
3.2 & PMM & 1 & 1.024 & 0.205 & 0.043 & -3 & -0.838 & 0.170 & 4.701 & 1 & 1.026 & 0.184 & 0.035 \\ survival & DT-HMM & 1 & 0.986 & 0.042 & 0.002 & -3 & -3.779 & 0.697 & 1.091 & 1 & 1.192 & 0.880 & 0.810 \\ $h_{\text{max}}=1$ & CT-HMM & 1 & 0.990 & 0.043 & 0.002 & -3 & -3.274 & 0.696 & 0.558 & 1 & 1.158 & 0.877 & 0.793 \\ & PH-HMM & 1 & 0.977 & 0.043 & 0.002 & -3 & -2.396 & 0.988 & 1.339 & 1 & 1.390 & 1.195 & 1.577 \\
\hline
3.3 & PMM & 1 & 1.017 & 0.155 & 0.024 & -3 & -1.581 & 0.228 & 2.064 & 1 & 0.509 & 0.149 & 0.263 \\ logistic & DT-HMM & 1 & 0.984 & 0.043 & 0.002 & -3 & -3.060 & 0.248 & 0.065 & 1 & 1.083 & 0.350 & 0.129 \\ & CT-HMM & 1 & 0.985 & 0.043 & 0.002 & -3 & -3.148 & 0.235 & 0.077 & 1 & 1.030 & 0.323 & 0.105 \\ & PH-HMM & 1 & 0.985 & 0.043 & 0.002 & -3 & -3.146 & 0.234 & 0.076 & 1 & 1.025 & 0.322 & 0.104 \\
\hline
4.1 & PMM & 1 & 1.008 & 0.141 & 0.020 & -2 & -2.123 & 0.207 & 0.058 & 5 & 2.038 & 0.151 & 8.794 \\ survival & DT-HMM & 1 & 0.993 & 0.044 & 0.002 & -2 & -0.143 & 0.225 & 3.499 & 5 & 5.699 & 0.819 & 1.158 \\ $h_{\text{max}}=10$ & CT-HMM & 1 & 0.897 & 0.107 & 0.022 & -2 & -2.250 & 0.157 & 0.087 & 5 & 1.604 & 0.177 & 11.564 \\ & PH-HMM & 1 & 0.992 & 0.064 & 0.004 & -2 & -2.798 & 0.130 & 0.654 & 5 & 2.921 & 0.246 & 4.383 \\
\hline
4.2 & PMM & 1 & 1.013 & 0.199 & 0.040 & -2 & -0.690 & 0.291 & 1.801 & 5 & 1.497 & 0.162 & 12.300 \\ survival & DT-HMM & 1 & 0.986 & 0.043 & 0.002 & -2 & -2.645 & 0.531 & 0.697 & 5 & 6.051 & 2.109 & 5.543 \\ $h_{\text{max}}=1$ & CT-HMM & 1 & 0.990 & 0.043 & 0.002 & -2 & -2.272 & 0.379 & 0.217 & 5 & 4.916 & 1.030 & 1.065 \\ & PH-HMM & 1 & 0.973 & 0.044 & 0.003 & -2 & -1.447 & 0.240 & 0.363 & 5 & 2.745 & 0.341 & 5.201 \\
\hline
4.3 & PMM & 1 & 1.007 & 0.112 & 0.013 & -2 & -1.344 & 0.075 & 0.435 & 5 & 1.443 & 0.103 & 12.664 \\ logistic & DT-HMM & 1 & 0.987 & 0.041 & 0.002 & -2 & -2.001 & 0.341 & 0.116 & 5 & 5.295 & 1.026 & 1.137 \\ & CT-HMM & 1 & 0.984 & 0.041 & 0.002 & -2 & -2.424 & 0.190 & 0.216 & 5 & 3.267 & 0.308 & 3.097 \\ & PH-HMM & 1 & 0.980 & 0.041 & 0.002 & -2 & -2.384 & 0.183 & 0.181 & 5 & 3.040 & 0.276 & 3.919 \\ \hline \end{tabular} \end{center} \label{sim_etc} \end{table}
\begin{figure}
\caption{{Example of Discrete-Time mHealth Data and Fitted PH-HMM}. Probabilities of being in the active state, screen-on counts, mean acceleration magnitude, and hour-of-day random intercepts are plotted against time (hours). Regression models: $\eta_s(t_{ij}) = \beta_{s0} + \beta_{s1} x(t_{ij}) + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s}$, $\mathbf{b}_{s} \sim \mathrm{N}(\mathbf{0}_{24}, \tau_s \mathbf{I}_{24})$ were fitted for individual $i$, and MAP estimates were calculated using the final E-step probabilities $\text{Pr}(A(t_{ij})=1)=\widehat{u}_1(t_{ij})$. Random intercepts capture the diurnal rhythm of active-rest cycles, with active states mainly occurring between the hours of 6am-10pm. Large values of $\tau_s$ correspond to a high magnitude of the cyclic diurnal effects. }
\label{fig:3}
\end{figure}
\begin{table} \caption{{Population HMM: Parameter Estimates for Discrete-Time Setting}. Parameter estimates using the EM algorithm with asymptotic standard errors. Regressions for state transitions are given as $\eta_s(t_{ij}) = \beta_{s0} + \beta_{s1} x(t_{ij}) + \beta_{s2}x(t_{ij}) \mathrm{sex} + \beta_{s3}x(t_{ij}) \mathrm{OS} + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s}$, where $\mathbf{b}_s \sim \mathrm{N}(\mathbf{0}_{41}, \sigma^2_s\mathbf{I}_{41} )$ are individual-specific random intercepts; Android devices and males serve as the baseline. } \begin{center} \addtolength{\leftskip} {-2cm} \addtolength{\rightskip}{-2cm} \small
\begin{tabular}{ |cc||cccc| }
\hline
& & \multicolumn{4}{c|}{Methods: Estimate (SE)} \\
\cline{3-6}
Transition & Parameters & PMM & DT-HMM & CT-HMM & PH-HMM \\
\hline & $\beta_{10}$ & 2.8973(0.8384) & 10.8254(1.4074) & 9.0154(1.2162) & 8.5893(1.2036) \\ Active & $\beta_{11}$ & -0.4182(0.0850) & -1.2431(0.1428) & -1.0828(0.1232) & -1.0395(0.1219) \\ to & $\beta_{12}$ & 0.0288(0.0121) & 0.0379(0.0184) & 0.0257(0.0126) & 0.0257(0.0126) \\ Rest & $\beta_{13}$ & 0.0070(0.0124) & 0.0052(0.0189) & 0.0054(0.0130) & 0.0050(0.0129) \\ & $\sigma^2_{1}$ & 0.1099 & 0.2585 & 0.1171 & 0.1160 \\
\hline & $\beta_{20}$ & -6.6357(0.3846) & -16.2898(1.3782) & -6.0404(0.4941) & -5.9631(0.5029) \\ Rest & $\beta_{21}$ & 0.5359(0.0405) & 1.5280(0.1401) & 0.4671(0.0512) & 0.4599(0.0521) \\ to & $\beta_{22}$ & -0.0278(0.0190) & -0.0441(0.0203) & -0.0357(0.0183) & -0.0355(0.0183) \\ Active & $\beta_{23}$ & -0.0431(0.0191) & -0.0541(0.0208) & -0.0503(0.0187) & -0.0505(0.0188) \\ & $\sigma^2_{2}$ & 0.2802 & 0.3210 & 0.2631 & 0.2634 \\
\hline & $\mu_1$ & 8.5321 & 8.0866 & 8.0251 & 8.0114 \\ & $\mu_2$ & 0.5238 & 0.4263 & 0.4195 & 0.4153 \\
\hline \end{tabular} \end{center} \label{HMM_fit} \end{table}
\begin{table} \caption{{Population HMM: Parameter Estimates for Heterogeneous Event Time Setting}. Parameter estimates using the EM algorithm with asymptotic standard errors. Regressions for state transitions are given as $\eta_s(t_{ij}) = \beta_{s0} + \beta_{s1} x(t_{ij}) + \beta_{s2}x(t_{ij}) \mathrm{sex} + \beta_{s3}x(t_{ij}) \mathrm{OS} + \mathbf{z}^\top (t_{ij}) \mathbf{b}_{s}$, where $\mathbf{b}_s \sim \mathrm{N}(\mathbf{0}_{41}, \sigma^2_s\mathbf{I}_{41} )$ are individual-specific random intercepts; Android devices and males serve as the baseline. } \begin{center} \addtolength{\leftskip} {-2cm} \addtolength{\rightskip}{-2cm} \small
\begin{tabular}{ |cc||cccc| }
\hline
& & \multicolumn{4}{c|}{Methods: Estimate (SE)} \\
\cline{3-6}
Transition & Parameters & PMM & DT-HMM & CT-HMM & PH-HMM \\
\hline & $\beta_{10}$ & 0.3714(0.7637) & 6.4420(1.2828) & 2.8110(1.1055) & 2.0846(1.0867) \\ Active & $\beta_{11}$ & -0.2567(0.0786) & -0.7771(0.1305) & -0.5526(0.1130) & -0.5209(0.1120) \\ to & $\beta_{12}$ & 0.0541(0.0214) & 0.0496(0.0231) & 0.0528(0.0233) & 0.0685(0.0285) \\ Rest & $\beta_{13}$ & 0.0867(0.0221) & 0.0094(0.0241) & 0.0840(0.0240) & 0.1127(0.0296) \\ & $\sigma^2_{1}$ & 0.3554 & 0.3941 & 0.4102 & 0.6132 \\
\hline & $\beta_{20}$ & -6.4771(0.3775) & -20.1172(1.2551) & -6.9784(0.4544) & -6.8518(0.4673) \\ Rest & $\beta_{21}$ & 0.4597(0.0411) & 2.0046(0.1298) & 0.5196(0.0481) & 0.4889(0.0500) \\ to & $\beta_{22}$ & -0.0211(0.0235) & -0.0620(0.0294) & -0.0383(0.0217) & -0.0327(0.0241) \\ Active & $\beta_{23}$ & 0.0033(0.0240) & -0.0874(0.0309) & -0.0199(0.0225) & -0.0077(0.0252) \\ & $\sigma^2_{2}$ & 0.4051 & 0.6207 & 0.3422 & 0.4040 \\
\hline & $\mu_1$ & 9.3939 & 9.0530 & 8.9522 & 8.9428 \\ & $\mu_2$ & 1.0534 & 0.9579 & 0.9528 & 0.9534 \\
\hline \end{tabular} \end{center} \label{HMM_fit2} \end{table}
\label{lastpage}
\end{document}
\begin{document}
\def\mathbb N{\mathbb N} \def\mathbb C{\mathbb C} \def\mathbb Q{\mathbb Q} \def\mathbb R{\mathbb R} \def\mathbb T{\mathbb T} \def\mathbb A{\mathbb A} \def\mathbb Z{\mathbb Z} \def\frac{1}{2}{\frac{1}{2}}
\begin{titlepage} \author{Abed Bounemoura~\footnote{[email protected], Mathematics Institute, University of Warwick}} \title{\LARGE{\textbf{An example of instability in high-dimensional Hamiltonian systems}}} \end{titlepage}
\maketitle
\begin{abstract} In this article, we use a mechanism first introduced by Herman, Marco, and Sauzin to show that if a Gevrey or analytic perturbation of a quasi-convex integrable Hamiltonian system is not too small with respect to the number of degrees of freedom, then the classical exponential stability estimates do not hold. Indeed, we construct an unstable solution whose drifting time is polynomial with respect to the inverse of the size of the perturbation. A different example was already given by Bourgain and Kaloshin, with a linear time of drift but with a perturbation which is larger than ours. As a consequence, we obtain a better upper bound on the threshold of validity of exponential stability estimates. \end{abstract}
\section{Introduction}
\paraga Consider a near-integrable Hamiltonian system of the form \begin{equation*} \begin{cases} H(\theta,I)=h(I)+f(\theta,I) \\
|f| < \varepsilon \end{cases} \end{equation*}
with angle-action coordinates $(\theta,I) \in \mathbb T^n \times \mathbb R^n$, and where $f$ is a small perturbation, of size $\varepsilon$, in some suitable topology defined by a norm $|\,.\,|$.
If the system is analytic and $h$ satisfies a generic condition, it is a remarkable result due to Nekhoroshev (\cite{Nek77}, \cite{Nek79}) that the action variables are stable for an exponentially long interval of time with respect to the inverse of the size of the perturbation: one has
\[ |I(t)-I_0| \leq c_1\varepsilon^b, \quad |t|\leq c_2\exp(c_3\varepsilon^{-a}), \] for some positive constants $c_1,c_2,c_3,a,b$ and provided that the size of the perturbation $\varepsilon$ is smaller than a threshold $\varepsilon_0$. Of course, all these constants strongly depend on the number of degrees of freedom $n$, and when the latter goes to infinity, the threshold $\varepsilon_0$ and the exponent of stability $a$ go to zero.
More precisely, in the case where $h$ is quasi-convex and the system is analytic or even Gevrey, we know that the exponent $a$ is of order $n^{-1}$, and this is essentially optimal (see \cite{LN92}, \cite{Pos93}, \cite{MS02}, \cite{LM05}, \cite{KZ09} and \cite{BM10} for more information on the optimality of the stability exponent).
\paraga This fact was used by Bourgain and Kaloshin in \cite{BK05} to show that if the size of the perturbation is \[ \varepsilon_n \sim e^{-n}, \] then there is no exponential stability: they constructed unstable solutions for which the time of drift is linear with respect to the inverse of the size of the perturbation, that is
\[ |I(\tau_n)-I_0|\sim 1, \quad \tau_n \sim \varepsilon_n^{-1}. \] More precisely, in the first part of \cite{BK05}, Bourgain proved this result for a specific example of a Gevrey non-analytic perturbation of a quasi-convex system, then for an analytic perturbation he obtained a time $\tau_n \sim \varepsilon_n^{-1-c}$, for any $c>0$. In the second part of \cite{BK05}, using much more elaborate techniques (especially Mather theory), Kaloshin proved the above result for both Gevrey and analytic perturbations and for a wider class of integrable Hamiltonians, including convex and quasi-convex systems.
Their motivation was the implementation of stability estimates in the context of Hamiltonian partial differential equations, which requires understanding the relative dependence between the size of the perturbation and the number of degrees of freedom. Their result indicates that for infinite dimensional Hamiltonian systems, Nekhoroshev's mechanism does not survive and that ``fast diffusion" should prevail. Of course, in their example, one cannot simply take $n=\infty$ as the size of the perturbation $\varepsilon_n \sim e^{-n}$ goes to zero and the time of instability $\tau_n \sim e^n$ goes to infinity exponentially fast with respect to $n$. A more precise interpretation concerns the threshold of validity $\varepsilon_0$ in Nekhoroshev's theorem: it has to satisfy \[ \varepsilon_0 <\!\!< e^{-n}, \] and so it deteriorates faster than exponentially with respect to $n$.
\paraga As was noticed by the authors in \cite{BK05}, their examples share some similarities with the mechanism introduced by Herman, Marco and Sauzin in \cite{MS02} (see also \cite{LM05} for the analytic case). In the present article, we use the approach of \cite{MS02} and \cite{LM05} to show, using simpler arguments than those contained in \cite{BK05}, that if the size of the perturbation is \[ \varepsilon_n \sim e^{-n\ln (n\ln n)}, \] then it is still too large to have exponential stability: we will show that one can find an unstable solution where the time of drift is polynomial, more precisely
\[ |I(\tau_n)-I_0|\sim 1, \quad \tau_n \sim \varepsilon_n^{-n}. \] As in the first part of \cite{BK05}, we will construct specific examples of Gevrey and analytic perturbations of a quasi-convex system. We refer to Theorem~\ref{thmnonpert} and Theorem~\ref{thmnonpertana} below for precise statements. Hence one can infer that the threshold $\varepsilon_0$ in Nekhoroshev's theorem further satisfies \[ \varepsilon_0 <\!\!< e^{-n\ln (n\ln n)} <\!\!< e^{-n}, \] and this gives another evidence that the finite dimensional mechanism of stability cannot extend so easily to infinite dimensional systems. Let us point out that our time of drift is worse than the one obtained in \cite{BK05}, but this stems from the fact that the size of our perturbation is smaller than theirs, and so it is natural for the time of instability to be larger. Moreover, our exponent $n$ in the time of drift can be a bit misleading since in any case, that is, even for a linear time of drift, $\tau_n$ goes to infinity exponentially fast with $n$, so such results do not apply at all to infinite dimensional Hamiltonian systems. A natural question, which was raised by Marco, is the following.
\begin{question} Given $\varepsilon>0$ arbitrarily small, construct an $\varepsilon_n$-perturbation of an integrable system having an unstable orbit with a time of instability $\tau_n$ such that \[ \lim_{n\rightarrow +\infty}\varepsilon_n=\varepsilon, \quad \lim_{n\rightarrow +\infty}\tau_n<+\infty. \] \end{question}
We believe that one can give a positive answer to this question by using a more clever construction. However, even if one can formally let $n$ go to infinity, this by no means implies the existence of an unstable solution for an infinite-dimensional Hamiltonian system, which is a very difficult problem (see \cite{CKSTT} and \cite{GG10} for related results in some examples of Hamiltonian partial differential equations).
\section{Main results}
\subsection{The Gevrey case}
\paraga Let us first state our result in the Gevrey case. Let $n\geq 3$ be the number of degrees of freedom, and given $R>0$, let $B=B_R$ be the open ball of $\mathbb R^n$ around the origin, of radius $R>0$ with respect to the supremum norm $|\,.\,|$, and $\overline{B}$ its closure.
The phase space is $\mathbb T^n \times B$, and we consider a Hamiltonian system of the form \[ H(\theta,I)=h(I)+f(\theta,I), \quad (\theta,I)\in \mathbb T^n \times B. \] Our quasi-convex integrable Hamiltonian $h$ is the simplest one, namely \[ h(I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n, \quad I=(I_1,\dots,I_n)\in B. \] Let us recall that, given $\alpha \geq 1$ and $L>0$, a function $f\in C^{\infty}(\mathbb T^n \times \overline{B})$ is $(\alpha,L)$-Gevrey if, using the standard multi-index notation,
\[ |f|_{\alpha,L}=\sum_{l\in \mathbb N^{2n}}L^{|l|\alpha}(l!)^{-\alpha}|\partial^l f|_{C^0(\mathbb T^n \times \overline{B})} < \infty. \] The space of such functions, with the above norm, is a Banach space that we denote by $G^{\alpha,L}(\mathbb T^n \times \overline{B})$. One can see that analytic functions correspond exactly to $\alpha=1$.
\paraga Now we can state our theorem.
\begin{theorem}\label{thmnonpert}
Let $n\geq 3$, $R>1$, $\alpha>1$ and $L>0$. Then there exist positive constants $c,\gamma,C$ and $n_0\in\mathbb N^*$ depending only on $R,\alpha$ and $L$ such that for any $n\geq n_0$, the following holds: there exists a function $f_n \in G^{\alpha,L}(\mathbb T^n \times \overline{B})$ with $\varepsilon_n=|f_n|_{\alpha,L}$ satisfying \[e^{-2(n-2)\ln (4n\ln 2n)}\leq \varepsilon_n \leq c\, e^{-2(n-2)\ln (n\ln 2n)},\] such that the Hamiltonian system $H_n=h+f_n$ has an orbit $(\theta(t),I(t))$ for which the estimates
\[ |I(\tau_n)-I_0|\geq 1, \quad \tau_n\leq C\left(\frac{c}{\varepsilon_n}\right)^{n\gamma}, \] hold true. \end{theorem}
As we have already explained, this statement gives an upper bound on the threshold of applicability of Nekhoroshev's estimates, which is an important issue when trying to use abstract stability results for ``realistic" problems, for instance for the so-called planetary problem (see \cite{Nie96}).
So let us consider the set of Gevrey quasi-convex integrable Hamiltonians $\mathcal{H}=\mathcal{H}(n,R,\alpha,L,M,m)$ defined as follows: $h\in\mathcal{H}$ if $h\in G^{\alpha,L}(\overline{B})$ and satisfies both
\[ \forall I\in B, \quad |\partial ^k h(I)|\leq M, \quad 1\leq |k_1|+\cdots+|k_n|\leq 3, \] and
\[ \forall I\in B, \forall v\in\mathbb R^n, \quad \nabla h(I).v=0 \Longrightarrow \nabla^2 h(I)v.v \geq m|v|^2.\] From Nekhoroshev's theorem (see \cite{MS02} for a statement in Gevrey classes), we know that there exists a positive constant $\varepsilon_0(\mathcal{H})=\varepsilon_0(n,R,\alpha,L,M,m)$ such that the following holds: for any $h\in\mathcal{H}$, there exist positive constants $c_1,c_2,c_3,a$ and $b$ such that if
\[ f\in G^{\alpha,L}(\mathbb T^n \times \overline{B}), \quad |f|_{\alpha,L}<\varepsilon_0(\mathcal{H}),\] then any solution $(\theta(t),I(t))$ of the system $H=h+f$, with $I(0)\in B_{R/2}$, satisfies
\[ |I(t)-I_0| \leq c_1\varepsilon^b, \quad |t|\leq c_2\exp(c_3\varepsilon^{-a}), \] where $\varepsilon=|f|_{\alpha,L}$. Then we can state the following corollary of our Theorem~\ref{thmnonpert}.
\begin{corollary} With the previous notations, one has the upper bound \[ \varepsilon_0(\mathcal{H})<e^{-2(n-2)\ln (4n\ln 2n)}. \] \end{corollary}
This improves the upper bound $\varepsilon_0(\mathcal{H})<e^{-n}$ obtained in \cite{BK05} for Gevrey functions.
\subsection{The analytic case}
\paraga Let us now state our result in the analytic case. Here $B=B_R$ is still the open ball of $\mathbb R^n$ around the origin, of radius $R>0$ with respect to the supremum norm, and we will also consider a Hamiltonian system of the form \[ H(\theta,I)=h(I)+f(\theta,I), \quad (\theta,I)\in \mathbb T^n \times B, \] where \[ h(I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n, \quad I=(I_1,\dots,I_n)\in B. \] Given $\rho>0$, let us introduce the space $\mathcal{A}_\rho(\mathbb T^n \times B)$ of bounded real-analytic functions on $\mathbb T^n \times B$ admitting a bounded holomorphic extension to the complex neighbourhood
\[ V_\rho=V_\rho(\mathbb T^n \times B)=\{(\theta,I)\in(\mathbb C^n/\mathbb Z^n)\times \mathbb C^{n} \; | \; |\mathcal{I}(\theta)|<\rho,\;d(I,B)< \rho\}, \] where $\mathcal{I}(\theta)$ is the imaginary part of $\theta$ and the distance $d$ is associated to the supremum norm on $\mathbb C^n$. Such a space $\mathcal{A}_\rho(\mathbb T^n \times B)$ is obviously a Banach space with the norm
\[ |f|_\rho=|f|_{C^0(V_\rho)}=\sup_{z\in V_\rho}|f(z)|, \quad f\in\mathcal{A}_\rho(\mathbb T^n \times B). \] Furthermore, for bounded real-analytic vector-valued functions defined on $\mathbb T^n \times B$ admitting a bounded holomorphic extension to $V_\rho$, we shall extend this norm componentwise (in particular, this applies to Hamiltonian vector fields and their time-one maps).
\paraga Now we can state our theorem.
\begin{theorem}\label{thmnonpertana}
Let $n\geq 4$, $R>1$, and $\sigma>0$. Then there exist positive constants $\rho,\gamma,C$ and $n_0\in\mathbb N^*$ depending only on $R$ and $\sigma$, and a constant $c_n$ that may also depend on $n$, such that for any $n\geq n_0$, the following holds: there exists a function $f_n \in \mathcal{A}_\rho(\mathbb T^n \times B)$ with $\varepsilon_n=|f_n|_{\rho}$ satisfying \[e^{-2(n-3)\ln (4n\ln 2n)}\leq \varepsilon_n \leq c_n\, e^{-2(n-3)\ln (n\ln 2n)},\] such that the Hamiltonian system $H_n=h+f_n$ has an orbit $(\theta(t),I(t))$ for which the estimates
\[ |I(\tau_n)-I_0|\geq 1, \quad \tau_n\leq C\left(\frac{c_n}{\varepsilon_n}\right)^{n\gamma}, \] hold true. \end{theorem}
In the above statement, the constant $\sigma$ has to be chosen sufficiently small but independently of the choice of $n$ and $R$ (see Proposition~\ref{LocMar}). Moreover, the theorem is slightly different from the one in the Gevrey case, since there is a constant $c_n$ depending also on $n$: this comes from the use of suspension arguments due to Kuksin and Pöschel (\cite{Kuk93}, \cite{KP94}) in the analytic case, which are more difficult than in the Gevrey case.
Here we can also define a threshold of validity in the Nekhoroshev theorem $\varepsilon_0(\mathcal{H})=\varepsilon_0(n,R,\rho,M,m)$ and state the following corollary of our Theorem~\ref{thmnonpertana}.
\begin{corollary} With the previous notations, one has the upper bound \[ \varepsilon_0(\mathcal{H})<e^{-2(n-3)\ln (4n\ln 2n)}. \] \end{corollary}
This improves the upper bound $\varepsilon_0(\mathcal{H})<e^{-n}$ obtained in \cite{BK05} for analytic functions. For concrete Hamiltonians like in the planetary problem, the actual distance to the integrable system is essentially of order $10^{-3}$, hence the above corollary shows that Nekhoroshev's estimates cannot be applied for $n>3$.
\subsection{Some remarks}
\paraga Theorem~\ref{thmnonpert} and Theorem~\ref{thmnonpertana} are obtained from the constructions in \cite{MS02} and \cite{LM05}, but one has to choose properly the dependence with respect to $n$ of the various parameters involved.
As the reader will see, we will use only rough estimates, leading to the factor $n$ in the time of instability: this could easily be improved, but we do not know whether it is possible in our case (that is, with a perturbation of size $e^{-n\ln (n\ln n)}$) to obtain a linear time of drift.
Let us also note that we have restricted the perturbation to a compact subset of $\mathbb A^n=\mathbb T^n \times \mathbb R^n$ just in order to evaluate Gevrey or analytic norms. In fact, in both theorems the Hamiltonian vector field generated by $H_n=h+f_n$ is complete and the unstable solution $(\theta(t),I(t))$ satisfies
\[ \lim_{t\rightarrow \pm \infty}|I(t)-I_0|=+\infty, \] which means that it is bi-asymptotic to infinity.
\paraga It is important to note that our approach leads, as in the first part of \cite{BK05}, to results for an autonomous perturbation of a quasi-convex integrable system (or, equivalently, for a time-dependent, time-periodic perturbation of a convex integrable system). In the second part of \cite{BK05}, for a class of convex integrable systems, Kaloshin was able to reduce the case of an autonomous perturbation to the case of a time-dependent, time-periodic perturbation, partly because of his more general (but more involved) approach. We could have tried to apply his general arguments to our case, but for simplicity we decided not to pursue this further.
\paraga In this text, we will have to deal with time-one maps associated to Hamiltonian flows. So given a function $H$, we will denote by $\Phi_t^H$ the time-$t$ map of its Hamiltonian flow and by $\Phi^H=\Phi^H_1$ the time-one map. We shall use the same notation for time-dependent functions $H$, that is $\Phi^H$ will be the time-one map of the Hamiltonian isotopy (the flow between $t=0$ and $t=1$) generated by $H$.
\section{Proof of Theorem~\ref{thmnonpert}}
The proof of Theorem~\ref{thmnonpert} is contained in section~\ref{sectnonpert}, but first in section~\ref{mechanism} we recall the mechanism of instability presented in the paper \cite{MS02} (see also \cite{MS04}).
This mechanism has two main features. The first one is that it deals with perturbations of integrable maps rather than perturbations of integrable flows, the latter being recovered by a suspension process. This point of view, which is only a technical matter, was already used for example in \cite{Dou88} and offers more flexibility in the construction. The second feature, which is the most important one, is that instead of trying to detect instability in a map close to integrable by means of the usual splitting estimates, we will start with a map having already ``unstable" orbits and try to embed it in a near-integrable map. This will be realized through a ``coupling lemma", which is really the heart of the mechanism.
As we will see, the construction offers an easy and very efficient way of computing the drifting time of unstable solutions, therefore avoiding all the technicalities that are usually required for such a task.
\subsection{The mechanism} \label{mechanism}
\paraga Given a potential function $U : \mathbb T \rightarrow \mathbb R$, we consider the following family of maps $\psi_q : \mathbb A \rightarrow \mathbb A$ defined by \begin{equation}\label{maps1} \psi_q(\theta,I)=\left(\theta +qI, I-q^{-1}U'(\theta + qI)\right), \quad (\theta,I)\in\mathbb A, \end{equation} for $q \in \mathbb N^*$. If we require $U'(0)=-1$, for example if we choose \[ U(\theta)=-(2\pi)^{-1}\sin (2\pi\theta),\quad U'(\theta)=-\cos(2\pi\theta),\] then it is easy to see that $\psi_q(0,0)=(0,q^{-1})$ and by induction \begin{equation}\label{driftstand} \psi_q^k(0,0)=(0,kq^{-1}) \end{equation} for any $k \in \mathbb Z$ (see figure~\ref{drift}). After $q$ iterations, the point $(0,0)$ drifts from the circle $I=0$ to the circle $I=1$ and it is bi-asymptotic to infinity, in the sense that the sequence $\left(\psi_q^k(0,0)\right)_{k \in \mathbb Z}$ is not contained in any semi-infinite annulus of $\mathbb A$. \begin{figure}
\caption{Drifting point for the map $\psi_q$}
\label{drift}
\end{figure} Clearly these maps are exact-symplectic, but obviously they have no invariant circles and so they cannot be ``close to integrable". However, we will use the fact that they can be written as a composition of time-one maps, \begin{equation}\label{driftstand2} \psi_q=\Phi^{q^{-1}U} \circ \left(\Phi^{\frac{1}{2}I^2} \circ \cdots \circ \Phi^{\frac{1}{2}I^2}\right) =\Phi^{q^{-1}U} \circ \left(\Phi^{\frac{1}{2}I^2}\right)^q, \end{equation} to embed $\psi_q$ in the $q^{th}$-iterate of a near-integrable map of $\mathbb A^n$, for $n \geq 2$. To do so, we will use the following ``coupling lemma", which is easy but very clever.
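Although this plays no role in the proofs, the drift formula~(\ref{driftstand}) is easy to test numerically; the following Python sketch (the value $q=7$ is an arbitrary choice of ours) iterates the map $\psi_q$ of (\ref{maps1}) starting from the origin:

```python
import math

def psi(q, theta, I):
    # psi_q(theta, I) = (theta + qI, I - q^{-1} U'(theta + qI)),
    # with U'(theta) = -cos(2 pi theta); the angle lives on T = R/Z
    theta = (theta + q * I) % 1.0
    I = I + math.cos(2 * math.pi * theta) / q
    return theta, I

q = 7
theta, I = 0.0, 0.0
for _ in range(q):
    theta, I = psi(q, theta, I)
# after q iterations the point has drifted from the circle I = 0 to I = 1
```

The angle returns to $0$ (mod $1$) at every step while the action climbs by $q^{-1}$ per iterate, as predicted by (\ref{driftstand}).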
\begin{lemma}[Herman-Marco-Sauzin] \label{coupling} Let $m,m' \geq 1$, let $F : \mathbb A^m \rightarrow \mathbb A^m$ and $G : \mathbb A^{m'} \rightarrow \mathbb A^{m'}$ be two maps, and let $f : \mathbb A^m \rightarrow \mathbb R$ and $g : \mathbb A^{m'} \rightarrow \mathbb R$ be two Hamiltonian functions generating complete vector fields. Suppose there is a point $a \in \mathbb A^{m'}$ which is $q$-periodic for $G$ and such that the following ``synchronisation" conditions hold: \begin{equation}\label{sync} g(a)=1, \quad dg(a)=0, \quad g(G^k(a))=0, \quad dg(G^k(a))=0, \tag{S} \end{equation} for $1 \leq k \leq q-1$. Then the mapping \[ \Psi=\Phi^{f \otimes g} \circ (F \times G) : \mathbb A^{m+m'} \longrightarrow \mathbb A^{m+m'} \] is well-defined and for all $x \in \mathbb A^m$, \[ \Psi^q(x,a)=(\Phi^f \circ F^q(x),a). \] \end{lemma}
Here $\otimes$ denotes the product of functions acting on separate variables, \textit{i.e.} \[ f \otimes g(x,x')=f(x)g(x'), \quad x \in \mathbb A^m , \; x' \in \mathbb A^{m'}. \] Let us give an elementary proof of this lemma since it is a crucial ingredient.
\begin{proof} First note that since the Hamiltonian vector fields $X_f$ and $X_g$ are complete, an easy calculation shows that for all $x\in\mathbb A^m$, $x'\in\mathbb A^{m'}$ and $t\in \mathbb R$, one has \begin{equation}\label{coupl} \Phi_{t}^{f \otimes g}(x,x')=\left(\Phi_{t}^{g(x')f}(x),\Phi_{t}^{f(x)g}(x')\right) \end{equation} and therefore $X_{f \otimes g}$ is also complete. Using the above formula and condition~(\ref{sync}), the points $(F^{k}(x),G^{k}(a))$, for $1\leq k \leq q-1$, are fixed by $\Phi^{f \otimes g}$ and hence \[ \Psi^{q-1}(x,a)=(F^{q-1}(x),G^{q-1}(a)). \] Since $a$ is $q$-periodic for $G$ this gives \[ \Psi^{q}(x,a)=\Phi^{f \otimes g}(F^{q}(x),a), \] and we end up with \[ \Psi^{q}(x,a)=(\Phi^{f}(F^{q}(x)),a) \] using once again~(\ref{sync}) and~(\ref{coupl}). \end{proof}
Therefore, if we set $m=1$, $F=\Phi^{\frac{1}{2}I_1^2}$ and $f=q^{-1}U$ in the coupling lemma, the $q^{th}$-iterate $\Psi^q$ will leave the submanifold $\mathbb A \times \{a\}$ invariant, and its restriction to this annulus will coincide with our ``unstable map" $\psi_q$. Hence, after $q^2$ iterations of $\Psi$, the $I_1$-component of the point $((0,0),a) \in \mathbb A^2$ will move from $0$ to $1$.
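Again purely as an illustration (this toy situation is simpler than the one used below, where the second factor will involve a pendulum), the coupling lemma can be tested numerically in the simplest case $m=m'=1$: take $G=\Phi^{\frac{1}{2}I_2^2}$, the $q$-periodic point $a=(0,q^{-1})$, and for $g$ the trigonometric function $g(\theta_2)=\big(q^{-1}\sum_{l=0}^{q-1}\cos 2\pi l\theta_2\big)^2$, which will reappear later in the construction and satisfies the synchronisation conditions along the orbit of $a$. Since $f\otimes g$ then depends only on the angles, its time-one map fixes the angles and kicks the actions, so everything is explicit:

```python
import math

q = 5
U  = lambda th: -math.sin(2 * math.pi * th) / (2 * math.pi)   # f = U/q
Up = lambda th: -math.cos(2 * math.pi * th)                    # U'

def S(th):   # S = (1/q) sum_{l<q} cos(2 pi l th), so that g = S^2
    return sum(math.cos(2 * math.pi * l * th) for l in range(q)) / q

def Sp(th):  # derivative S'
    return -2 * math.pi * sum(l * math.sin(2 * math.pi * l * th) for l in range(q)) / q

def Psi(th1, I1, th2, I2):
    # F x G: uncoupled time-one maps of (1/2)I_1^2 and (1/2)I_2^2
    th1 = (th1 + I1) % 1.0
    th2 = (th2 + I2) % 1.0
    # Phi^{f (x) g} with f = U/q and g = S^2: since f (x) g depends only on
    # the angles, its time-one map fixes the angles and kicks the actions
    I1 -= Up(th1) * S(th2) ** 2 / q
    I2 -= U(th1) * 2 * S(th2) * Sp(th2) / q
    return th1, I1, th2, I2

x = (0.0, 0.0, 0.0, 1.0 / q)   # ((0,0), a) with a = (0, 1/q), q-periodic for G
for _ in range(q * q):
    x = Psi(*x)
# Psi^{q^2}((0,0), a) = (psi_q^q(0,0), a) = ((0,1), a)
```

After $q^2$ iterations the $I_1$-component has indeed drifted from $0$ to $1$, while the second factor has returned to $a$, exactly as the coupling lemma predicts.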
\paraga The difficult part is then to find what kind of dynamics we can put on the second factor to apply this coupling lemma. In order to have a continuous system with $n$ degrees of freedom at the end, we may already choose $m'=n-2$ so the coupling lemma will give us a discrete system with $n-1$ degrees of freedom.
First, a natural attempt would be to try \[ G=G_n=\Phi^{\frac{1}{2}I_2^2+\cdots+\frac{1}{2}I_{n-1}^2}.\] Indeed, in this case \[ F \times G_n=\Phi^{\frac{1}{2}(I_1^2+I_2^2+\cdots+I_{n-1}^2)}=\Phi^{\tilde{h}} \] where \[ \tilde{h}(I_1,\dots,I_{n-1})=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2) \] and the unstable map $\Psi$ given by the coupling lemma appears as a perturbation of the form $\Psi=\Phi^u \circ \Phi^{\tilde{h}}$, with $u=f\otimes g$. However, this cannot work. Indeed, for $j\in\{2,\dots,n-1\}$, one can choose a $p_j$-periodic point $a^{(j)}\in\mathbb A$ for the map $\Phi^{\frac{1}{2}I_j^2}$, and then setting \[ a_n=(a^{(2)},\dots,a^{(n-1)})\in\mathbb A^{n-2}, \quad q_n=p_2 \cdots p_{n-1},\] the point $a_n$ is $q_n$-periodic for $G_n$ provided that the numbers $p_j$ are mutually prime. One can easily see that the latter condition forces the product $q_n$ to go to infinity as $n$ goes to infinity. So necessarily the point $a_n$ gets arbitrarily close to its first iterate $G_n(a_n)$ when $n$ (and therefore $q_n$) is large: this is because $q_n$-periodic points for $G_n$ are equi-distributed on $q_n$-periodic tori. As a consequence, a function $g_n$ with the property \[ g_n(a_n)=1, \quad g_n(G_n(a_n))=0, \] will necessarily have very large derivatives at $a_n$ if $q_n$ is large. Then as the size of the perturbation is essentially given by
\[ |f\otimes g_n|=|q_{n}^{-1}U\otimes g_n|=q_{n}^{-1}|U|\,|g_n|,\] one can check that it is impossible to make this quantity converge to zero when the number of degrees of freedom $n$ goes to infinity.
\paraga As in \cite{MS02}, the idea to overcome this problem is the following one. We introduce a new sequence of ``large" parameters $N_n \in \mathbb N^*$ and in the second factor we consider a family of suitably rescaled pendula on $\mathbb A$ given by \[ P_n(\theta_2,I_2)=\frac{1}{2} I_2^2 +N_n^{-2}V(\theta_2), \] where $V(\theta)=-\cos 2\pi\theta$. The other factors remain unchanged, so \[ G_n=\Phi^{\frac{1}{2}(I_2^2+I_3^2+\cdots+I_{n-1}^2)+N_{n}^{-2}V(\theta_2)}. \] In this case, the map $\Psi$ given by the coupling lemma is also a perturbation of $\Phi^{\tilde{h}}$ but of the form $\Psi=\Phi^u \circ \Phi^{\tilde{h}+v}$, with $v=N_n^{-2}V$. But now for the map $G_n$, due to the presence of the pendulum factor, it is possible to find a periodic orbit with an irregular distribution: more precisely, a $q_n$-periodic point $a_n$ such that its distance to the rest of its orbit is of order $N_{n}^{-1}$, no matter how large $q_n$ is.
\paraga Let us denote by $(p_j)_{j\geq 0}$ the ordered sequence of prime numbers and let us choose $N_n$ as the product of the $n-2$ prime numbers $\{p_{n+3},\dots,p_{2n}\}$, that is \[ N_n=p_{n+3}p_{n+4}\cdots p_{2n}\in\mathbb N^*.\] Our goal is to prove the following proposition.
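For concreteness, and with the indexing convention $p_1=2$ (the estimates below are insensitive to this choice), the first values of $N_n$ can be computed by the following short sketch:

```python
def primes(count):
    # first `count` primes by trial division (enough for these small counts)
    ps = []
    cand = 2
    while len(ps) < count:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

def N(n):
    # N_n = p_{n+3} p_{n+4} ... p_{2n}: a product of n - 2 consecutive primes
    ps = primes(2 * n)
    prod = 1
    for j in range(n + 3, 2 * n + 1):
        prod *= ps[j - 1]       # ps[j-1] is p_j with the convention p_1 = 2
    return prod

# for instance N_5 = p_8 p_9 p_10 = 19 * 23 * 29 = 12673
```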
\begin{proposition}\label{pertu} Let $n\geq 3$, $\alpha>1$ and $L_1>0$. Then there exist a function $g_n\in G^{\alpha,L_1}(\mathbb A^{n-2})$, a point $a_n\in\mathbb A^{n-2}$ and positive constants $c_1$ and $c_2$ depending only on $\alpha$ and $L_1$ such that if \[ M_n=2\left[c_1N_n e^{c_2(n-2)p_{2n}^{\frac{1}{\alpha}}}\right], \quad q_n=N_nM_n, \] then $a_n$ is $q_n$-periodic for $G_n$ and $(g_n, G_n, a_n, q_n)$ satisfy the synchronization conditions (\ref{sync}): \begin{equation*} g_n(a_n)=1, \quad dg_n(a_n)=0, \quad g_n(G_n^k(a_n))=0, \quad dg_n(G_n^k(a_n))=0, \end{equation*} for $1 \leq k \leq q_n-1$. Moreover, the estimate \begin{equation}\label{estimgn}
q_n^{-1}|g_n|_{\alpha,L_1}\leq N_{n}^{-2}, \end{equation} holds true. \end{proposition}
The rest of this section is devoted to the proof of the above proposition. Note that together with the coupling lemma (Lemma~\ref{coupling}), this proposition easily gives a result of instability (analogous to Proposition 2.1 in \cite{MS02}) for a discrete system which is a perturbation of the map $\Phi^{\tilde{h}}$, but we prefer not to state such a result in order to focus on the continuous case.
\paraga We first consider the simple pendulum \[ P(\theta,I)=\frac{1}{2} I^2 + V(\theta), \quad (\theta,I)\in\mathbb A. \] With our convention, the stable equilibrium is at $(0,0)$ and the unstable one is at $(1/2,0)$. Given any $M\in\mathbb N^*$, there is a unique point $b^M=(0,I_M)$ which is $M$-periodic for $\Phi^P$ (this is just the intersection between the vertical line $\{0\} \times \mathbb R$ and the closed orbit for the pendulum of period $M$). One can check that $I_M \in\, ]2,3[$ and as $M$ goes to infinity, $(0,I_M)$ tends to the point $(0,2)$ which belongs to the upper separatrix. Since $P_n(\theta,I)=\frac{1}{2} I^2 +N_n^{-2}V(\theta)$, one can see that \[ \Phi^{P_{n}}=(S_n)^{-1} \circ \Phi^{N_{n}^{-1}P} \circ S_{n}, \] where $S_{n}(\theta,I)=(\theta,N_n I)$ is the rescaling by $N_n$ in the action components. Therefore the point $b_n^M=(0,N_{n}^{-1}I_{M})$ is $q_n$-periodic for $\Phi^{P_{n}}$, for $q_n=N_nM$. Let $(\Phi_t^{P})_{t \in \mathbb R}$ be the flow of the pendulum, and \[ \Phi_t^{P}(0,I_{M})=(\theta_{M}(t),I_{M}(t)). \] The function $\theta_{M}(t)$ is analytic. The crucial observation is the following simple property of the pendulum (see Lemma 2.2 in \cite{MS02} for a proof). \begin{figure}
\caption{The point $b^M$ and its iterates}
\label{bN}
\end{figure}
\begin{lemma} Let $\sigma=-\frac{1}{2} +\frac{2}{\pi}\arctan e^{\pi} < \frac{1}{2}$. For any $M \in \mathbb N^*$, \[ \theta_{M}(t) \notin [-\sigma,\sigma], \] for $t\in[1/2,M-1/2]$. \end{lemma}
Hence no matter how large $M$ is, most of the points of the orbit of $b^M\in\mathbb A$ will be outside the set $\{-\sigma \leq \theta \leq \sigma\}\times \mathbb R$ (see figure~\ref{bN}). A function that vanishes, together with its first derivative, at all these points is then easily arranged: it suffices to take a function, depending only on the angle variables, with support in $\{-\sigma \leq \theta \leq \sigma\}$.
As for the other points, it is convenient to introduce the function \[ \tau_{M} : [-\sigma,\sigma] \longrightarrow \, ]-1/2,1/2[ \] which is the analytic inverse of $\theta_{M}$. One can give an explicit formula for this map: \[ \tau_{M}(\theta)=\int_{0}^{\theta}\frac{d\varphi}{\sqrt{I_M^2-4\sin^2 \pi\varphi}}. \] In particular, it is analytic and therefore it belongs to $G^{\alpha,L_1}([-\sigma,\sigma])$ for $\alpha\geq 1$ and $L_1>0$, and one can obtain the following estimate (see Lemma 2.3 in \cite{MS02} for a proof).
\begin{lemma}\label{lemmelambda} For $\alpha>1$ and $L_1>0$,
\[ \Lambda=\sup_{M\in\mathbb N^*}|\tau_M|_{\alpha,L_1}<+\infty. \] \end{lemma}
Note that $\Lambda$ depends only on $\alpha$ and $L_1$. Under the action of $\tau_{M}$, the points of the orbit of $b^M$ whose projection onto $\mathbb T$ belongs to $\{-\sigma \leq \theta \leq \sigma\}$ get equi-distributed, and we can use the following elementary lemma.
\begin{lemma}\label{funct} For $p \in \mathbb N^*$, the analytic function $\eta_p : \mathbb T \rightarrow \mathbb R$ defined by \[ \eta_p(\theta)=\left(\frac{1}{p}\sum_{l=0}^{p-1}\cos 2\pi l\theta \right)^2 \] satisfies \[ \eta_p(0)=1, \quad \eta_p'(0)=0, \quad \eta_p(k/p)=\eta_p'(k/p)=0, \] for $1 \leq k \leq p-1$, and
\[ |\eta_p|_{\alpha,L_1}\leq e^{2\alpha L_1(2\pi p)^{\frac{1}{\alpha}}}. \] \end{lemma}
The proof is trivial (see \cite{MS02}, Lemma 2.4).
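The stated properties of $\eta_p$ are also immediate to confirm numerically; the sketch below checks the values at the points $k/p$ and, by central finite differences, the vanishing of the derivative there (the choice $p=7$ is arbitrary):

```python
import math

def eta(p, theta):
    # eta_p(theta) = ((1/p) sum_{l=0}^{p-1} cos(2 pi l theta))^2
    s = sum(math.cos(2 * math.pi * l * theta) for l in range(p)) / p
    return s * s

p = 7
assert abs(eta(p, 0.0) - 1.0) < 1e-12          # eta_p(0) = 1
for k in range(1, p):
    assert abs(eta(p, k / p)) < 1e-12          # eta_p(k/p) = 0
h = 1e-6
for k in range(p):
    d = (eta(p, k / p + h) - eta(p, k / p - h)) / (2 * h)
    assert abs(d) < 1e-4                       # eta_p'(k/p) = 0
```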
\paraga We can now pass to the proof of Proposition~\ref{pertu}.
\begin{proof}[Proof of Proposition~\ref{pertu}] For $\alpha>1$ and $L_1>0$, consider the bump function $\varphi_{\alpha,L_1}\in G^{\alpha,L_1}(\mathbb T)$ given by Lemma~\ref{lemmeGev1} (see Appendix~\ref{Gev}).
We choose our function $g_n\in G^{\alpha,L_1}(\mathbb A^{n-2})$, depending only on the angle variables, of the form \[ g_n=g_n^{(2)}\otimes \cdots \otimes g_n^{(n-1)}, \] where \[ g_n^{(2)}(\theta_2)=\eta_{p_{n+3}}(\tau_{M_n}(\theta_2))\varphi_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1}((4\sigma)^{-1}\theta_2), \] and \[ g_n^{(i)}(\theta_i)=\eta_{p_{n+1+i}}(\theta_i), \quad 3\leq i \leq n-1. \] Let us write
\[ c_1=\left|\varphi_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1}\right|_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1}. \] Now we choose our point $a_n=(a_n^{(2)},\dots,a_n^{(n-1)})\in\mathbb A^{n-2}$. We set \[ a_n^{(2)}=b_n^{M_n}=(0,N_{n}^{-1}I_{M_n}), \] and \[ a_n^{(i)}=(0,p_{n+1+i}^{-1}), \quad 3\leq i \leq n-1. \]
Let us prove that $a_n$ is $q_n$-periodic for $G_n$. We can write \[ G_n=\Phi^{\frac{1}{2}I_2^2+N_{n}^{-2}V(\theta_2)}\times \Phi^{\frac{1}{2}(I_3^2+I_4^2+\cdots+I_{n-1}^2)}=\Phi^{P_n}\times \widehat{G}. \] Since $p_{n+4}, \dots, p_{2n}$ are mutually prime, the point $(a_n^{(3)},\dots,a_n^{(n-1)})\in\mathbb A^{n-3}$ is periodic for $\widehat{G}$, with period \[N_n'=p_{n+4} \cdots p_{2n}.\] By construction, the point $a_n^{(2)}=b_n^{M_n}\in\mathbb A$ is periodic for $\Phi^{P_n}$, with period $q_n=N_nM_n$, where \[ N_n=p_{n+3}p_{n+4}\cdots p_{2n}.\] This means that $a_n$ is periodic for the product map $G_n$, and the exact period is given by the least common multiple of $q_n$ and $N_n'$. Since $N_n'$ divides $q_n$, the period of $a_n$ is $q_n$.
Now let us show that the synchronization conditions~(\ref{sync}) hold true, that is \[ g_n(a_n)=1, \quad dg_n(a_n)=0, \quad g_n(G_n^k(a_n))=0, \quad dg_n(G_n^k(a_n))=0, \] for $1\leq k\leq q_n-1$. Since $\varphi_{\alpha,L_1}(0)=1$, then \[ g_n(a_n)=g_n^{(2)}(0)\cdots g_n^{(n-1)}(0)=1 \] and as $\varphi_{\alpha,L_1}'(0)=0$, then \[ dg_n(a_n)=0. \] To prove the other conditions, let us write $G_n^k(a_n)=(\theta_k,I_k)\in\mathbb A^{n-2}$, for $1\leq k\leq q_n-1$.
If $\theta_k^{(2)}$ does not belong to $]-\sigma,\sigma[$, then $g_n^{(2)}$ and its first derivative vanish at $\theta_k^{(2)}$ because it is the case for $\varphi_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1}$, so \[ g_n(\theta_k)=dg_n(\theta_k)=0. \] Otherwise, if $-\sigma<\theta_k^{(2)}<\sigma$, one can easily check that, identifying $k$ with its representative modulo $q_n$, \[ - \frac{N_n-1}{2}\leq k \leq \frac{N_n-1}{2} \] and therefore \[ \tau_{M_n}(\theta_k^{(2)})=\frac{k}{N_n}, \] while \[ \theta_k^{(i)}=\frac{k}{p_{n+i+1}}, \quad 3\leq i \leq n-1. \] If $N_n'=p_{n+4} \cdots p_{2n}$ divides $k$, that is $k=k'N_n'$ for some $k'\in\mathbb Z$, then \[ \tau_{M_n}(\theta_k^{(2)})=\frac{k}{N_n}=\frac{k'}{p_{n+3}} \] and therefore, by Lemma~\ref{funct}, $\eta_{p_{n+3}}\circ\tau_{M_n}$ vanishes with its differential at $\theta_k^{(2)}$, and so does $g_n^{(2)}$. Otherwise, $N_n'$ does not divide $k$ and then, for $3\leq i \leq n-1$, at least one of the functions $\eta_{p_{n+1+i}}$ vanishes with its differential at $\theta_k^{(i)}$, and so does $g_n^{(i)}$. Hence in any case \[ g_n(\theta_k)=dg_n(\theta_k)=0, \quad 1\leq k\leq q_n-1, \] and the synchronization conditions~(\ref{sync}) are satisfied.
Now it remains to estimate the norm of the function $g_n$. First, using Lemma~\ref{lemmeGev2}, one finds
\[ |g_n|_{\alpha,L_1} \leq \left|\varphi_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1}\right|_{\alpha,(4\sigma)^{-\frac{1}{\alpha}}L_1} |\eta_{p_{n+3}}\circ\tau_{M_n}|_{\alpha,L_1}|\eta_{p_{n+4}}|_{\alpha,L_1}\cdots|\eta_{p_{2n}}|_{\alpha,L_1}, \] which by definition of $c_1$ gives
\[ |g_n|_{\alpha,L_1} \leq c_1|\eta_{p_{n+3}}\circ\tau_{M_n}|_{\alpha,L_1}|\eta_{p_{n+4}}|_{\alpha,L_1}\cdots|\eta_{p_{2n}}|_{\alpha,L_1}. \] Then, by definition of $\Lambda$ (Lemma~\ref{lemmelambda}) and using Lemma~\ref{lemmeGev3} (with $\Lambda_1=\Lambda^{\frac{1}{\alpha}}$),
\[ |\eta_{p_{n+3}}\circ\tau_{M_n}|_{\alpha,L_1} \leq |\eta_{p_{n+3}}|_{\alpha,\Lambda^{\frac{1}{\alpha}}}, \] so using Lemma~\ref{funct} and setting $c_2=2\alpha\sup\left\{\Lambda^{\frac{1}{\alpha}},L_1\right\}(2\pi)^{\frac{1}{\alpha}}$, this gives \begin{eqnarray*}
|g_n|_{\alpha,L_1} & \leq & c_1 e^{2\alpha(\Lambda^{\frac{1}{\alpha}}+(n-3)L_1)(2\pi p_{2n})^{\frac{1}{\alpha}}} \\ & \leq & c_1 e^{2\alpha\sup\left\{\Lambda^{\frac{1}{\alpha}},L_1\right\}(n-2)(2\pi p_{2n})^{\frac{1}{\alpha}}} \\ & \leq & c_1 e^{c_2(n-2)p_{2n}^{\frac{1}{\alpha}}}. \end{eqnarray*} Finally, by definition of $M_n$ we obtain \begin{equation*}
|g_n|_{\alpha,L_1}\leq M_nN_n^{-1}, \end{equation*} and as $q_n=N_nM_n$, we end up with
\[ q_n^{-1}|g_n|_{\alpha,L_1}\leq N_n^{-2}. \] This concludes the proof. \end{proof}
\subsection{Proof of Theorem~\ref{thmnonpert}} \label{sectnonpert}
\paraga In the previous section, we were concerned with a perturbation of the integrable diffeomorphism $\Phi^{\tilde{h}}$, written in the form $\Phi^u \circ \Phi^{\tilde{h}+v}$. So now we will briefly describe a suspension argument to go from this discrete case to a continuous case (we refer once again to \cite{MS02} for the details).
Here we will make use of bump functions; the process is still valid, though more difficult, in the analytic category (see for example \cite{Dou88} or \cite{KP94}). The basic idea is to find a time-dependent Hamiltonian function on $\mathbb A^n$ such that the time-one map of its isotopy is $\Phi^u \circ \Phi^{\tilde{h}+v}$, or, equivalently, an autonomous Hamiltonian function on $\mathbb A^{n+1}$ such that its first return map to some $2n$-dimensional Poincaré section coincides with our map $\Phi^u \circ \Phi^{\tilde{h}+v}$.
Given $\alpha>1$ and $L>1$, let us define the function \[ \phi_{\alpha,L}=\left(\int_{\mathbb T}\varphi_{\alpha,L}\right)^{-1}\varphi_{\alpha,L}, \] where $\varphi_{\alpha,L}$ is the bump function given by Lemma~\ref{lemmeGev1}. If $\phi_0(t)=\phi_{\alpha,L}\big(t-\frac{1}{4}\big)$ and $\phi_1(t)=\phi_{\alpha,L}\big(t-\frac{3}{4}\big)$, the time-dependent Hamiltonian \[ H^*(\theta,I,t)=(\tilde{h}(I)+v(\theta)) \otimes \phi_0(t) + u(\theta) \otimes \phi_1(t) \] clearly satisfies \[ \Phi^{H^*}=\Phi^u \circ \Phi^{\tilde{h}+v}. \] But as $u$ and $v$ go to zero, $H^*$ converges to $\tilde{h} \otimes \phi_0$ rather than $\tilde{h}$. However, using classical generating functions, it is not difficult to modify the Hamiltonian in order to prove the following proposition (see Lemma 2.5 in \cite{MS02}).
\begin{proposition}[Marco-Sauzin]\label{sus} Let $n\geq 1$, $R>1$, $\alpha>1$, $L_1>0$ and $L>0$ satisfying \begin{equation}\label{LL1}
L_1^\alpha=L^\alpha(1+(L^\alpha+R+1/2)|\phi_{\alpha,L}|_{\alpha,L}). \end{equation} If $u_n,v_n\in G^{\alpha,L_1}(\mathbb T^{n-1})$, there exists $f_n\in G^{\alpha,L}(\mathbb T^n\times \overline{B})$, independent of the variable $I_n$, such that if \[ H_n(\theta,I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n+f_n(\theta,I), \quad (\theta,I)\in\mathbb A^n, \] for any energy $e\in\mathbb R$, the Poincaré map induced by the Hamiltonian flow of $H_n$ on the section $\{\theta_n=0\}\cap H_{n}^{-1}(e)$ coincides with the diffeomorphism \[ \Phi^{u_n}\circ\Phi^{\tilde{h}+v_n}.\] Moreover, one has \begin{equation}\label{taillesus}
\sup\{|u_n|_{C^0},|v_n|_{C^0}\}\leq |f_n|_{\alpha,L} \leq c_3\sup\{|u_n|_{\alpha,L_1},|v_n|_{\alpha,L_1}\}, \end{equation}
where $c_3=2|\phi_{\alpha,L}|_{\alpha,L}$ depends only on $\alpha$ and $L$. \end{proposition}
\paraga Now we can finally prove our theorem.
\begin{proof}[Proof of Theorem~\ref{thmnonpert}] Let $R>1$, $\alpha>1$ and $L>0$, and choose $L_1$ satisfying the relation~(\ref{LL1}). The constants $c_1$ and $c_2$ of Proposition~\ref{pertu} depend only on $\alpha$ and $L_1$, hence they depend only on $R$, $\alpha$ and $L$.
We can define $u_n, v_n\in G^{\alpha,L_1}(\mathbb T^{n-1})$ by \[ u_n=q_{n}^{-1}U\otimes g_n,\quad v_n=N_{n}^{-2}V, \] where $U(\theta_1)=-(2\pi)^{-1}\sin 2\pi\theta_1$, $V(\theta_2)=-\cos 2\pi\theta_2$ (so $v_n$ is formally defined on $\mathbb T$ but we identify it with a function on $\mathbb T^{n-1}$) and $g_n$ is the function given by Proposition~\ref{pertu}. Let us apply Proposition~\ref{sus}: there exists $f_n\in G^{\alpha,L}(\mathbb T^n\times \overline{B})$, independent of the variable $I_n$, such that if \[ H_n(\theta,I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n+f_n(\theta,I), \quad (\theta,I)\in\mathbb A^n, \] for any energy $e\in\mathbb R$, the Poincaré map induced by the Hamiltonian flow of $H$ on the section $\{\theta_n=0\}\cap H^{-1}(e)$ coincides with the diffeomorphism \[ \Phi^{u_n}\circ\Phi^{\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+v_n}=\Phi^{u_n}\circ\Phi^{\tilde{h}+v_n}.\] Let us show that our system $H_n$ has a drifting orbit. First consider its Poincaré section defined by \[ \Psi_n=\Phi^{u_n}\circ\Phi^{\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+v_n}=\Phi^{f_n \otimes g_n} \circ (F \times G_n),\] with \[ f_n=q_n^{-1}U, \quad F=\Phi^{\frac{1}{2} I_1^2}, \quad G_n=\Phi^{\frac{1}{2}(I_2^2+I_3^2+\cdots+I_{n-1}^2)+N_{n}^{-2}V(\theta_2)}. \] By Proposition~\ref{pertu}, we can apply the coupling lemma (Lemma~\ref{coupling}), so \[ \Psi_n^{q_n}((0,0),a_n)=(\Phi^f_n \circ F^{q_n}(0,0),a_n). \] Then, using~(\ref{driftstand2}), observe that \[ \Phi^{f_n} \circ F^{q_n}=\Phi^{q_n^{-1}U} \circ \left(\Phi^{\frac{1}{2}I_1^2}\right)^{q_n} =\psi_{q_n}, \] so \begin{eqnarray*} \Psi_n^{q_n^2}((0,0),a_n) & = & ((\Phi^{f_n} \circ F^{q_n})^{q_n}(0,0),a_n) \\ & = & (\psi_{q_n}^{q_n}(0,0),a_n) \\ & = & ((0,1),a_n), \end{eqnarray*} where the last equality follows from~(\ref{driftstand}). Hence, after $q_n^2$ iterations, the $I_1$-component of the point $x_n=((0,0),a_n)\in\mathbb A^{n-1}$ drifts from $0$ to $1$. 
Then, for the continuous system, the initial condition $(x_n,t=0,I_n=0)$ in $\mathbb A^n$ gives rise to a solution $(x(t),t,I_n(t))=(x(t),\theta_n(t),I_n(t))$ of the Hamiltonian vector field generated by $H_n$ such that \[ x(k)=\Psi_n^k(x_n), \quad k\in\mathbb Z. \] So after a time $\tau_n=q_n^2$, the point $(x_n,(0,0))$ drifts from $0$ to $1$ in the $I_1$-direction, and this gives our drifting orbit.
Now let $\varepsilon_n=|f_n|_{\alpha,L_1}$ be the size of our perturbation. Using the estimate~(\ref{estimgn}) and~(\ref{taillesus}) one finds \begin{equation}\label{estNeps} N_{n}^{-2}\leq \varepsilon_n \leq c_3 N_{n}^{-2}. \end{equation} By the prime number theorem, $p_n$ is equivalent to $n\ln n$, so there exists $n_0\in\mathbb N^*$ such that for $n\geq n_0$, one can ensure that \[ p_{2n}/4 \leq p_{n+i} \leq p_{2n}, \quad 3\leq i\leq n, \] which gives \begin{equation}\label{estimPN} (p_{2n}/4)^{n-2} \leq N_n \leq p_{2n}^{n-2}, \quad N_n^{\frac{1}{n-2}}\leq p_{2n} \leq 4N_n^{\frac{1}{n-2}}. \end{equation} We can also assume by the prime number theorem that for $n\geq n_0$, one has \begin{equation}\label{nompremier} 2n\ln 2n \leq p_{2n} \leq 2(2n \ln 2n)=4n\ln 2n. \end{equation} From the above estimates~(\ref{estimPN}) and~(\ref{nompremier}) one easily obtains \begin{equation}\label{estN} e^{(n-2)\ln (2^{-1}n\ln 2n)}\leq N_n\leq e^{(n-2)\ln (4n\ln 2n)}, \end{equation} and, together with~(\ref{estNeps}), one finds \begin{equation}\label{estpert} e^{-2(n-2)\ln (4n\ln 2n)}\leq \varepsilon_n \leq c_3e^{-2(n-2)\ln (2^{-1}n\ln 2n)}. \end{equation}
Finally it remains to estimate the time $\tau_n$. First recall that \[ M_n=2\left[c_1N_n e^{c_2(n-2)p_{2n}^{\frac{1}{\alpha}}}\right], \] and with~(\ref{estimPN}) \[ q_n=N_nM_n \leq 3c_1N_n^2 e^{4c_2(n-2)N_n^{\frac{1}{\alpha(n-2)}}}. \] Hence \[ q_n^2\leq 9c_1^2N_n^4 e^{8c_2(n-2)N_n^{\frac{1}{\alpha(n-2)}}}. \] Then using~(\ref{estN}) we have \[ N_n^{\frac{1}{\alpha(n-2)}}\leq (4n \ln 2n)^{\frac{1}{\alpha}} \] and from~(\ref{estNeps}) we know that \[ N_n^4 \leq \left(\frac{c_3}{\varepsilon_n}\right)^2, \] so we obtain \[ q_n^2\leq 9c_1^2\left(\frac{c_3}{\varepsilon_n}\right)^2 e^{8c_2(n-2)(4n \ln 2n)^{\frac{1}{\alpha}}}. \] Now taking $n_0$ larger if necessary, as $\alpha>1$, one can ensure that for $n\geq n_0$, \[ (4n)^{\frac{1}{\alpha}} \leq n, \quad (\ln 2n)^{\frac{1}{\alpha}}\leq \ln (2^{-1}n\ln 2n), \] so \[ 8c_2(n-2)(4n \ln 2n)^{\frac{1}{\alpha}} \leq 8c_2(n-2)n\ln (2^{-1}n\ln 2n). \] Therefore \begin{eqnarray*} q_n^2 & \leq & 9c_1^2\left(\frac{c_3}{\varepsilon_n}\right)^2 e^{8c_2n(n-2)\ln (2^{-1}n\ln 2n)} \\ & \leq & 9c_1^2\left(\frac{c_3}{\varepsilon_n}\right)^2\left(e^{2(n-2)\ln (2^{-1}n\ln 2n)}\right)^{4c_2n}. \end{eqnarray*} Finally by~(\ref{estpert}) we obtain \begin{eqnarray*} q_n^2 & \leq & 9c_1^2\left(\frac{c_3}{\varepsilon_n}\right)^2\left(\frac{c_3}{\varepsilon_n}\right)^{4c_2n} \\ & \leq & C\left(\frac{c}{\varepsilon_n}\right)^{n\gamma} \end{eqnarray*} with $C=9c_1^2$, $c=c_3$ and $\gamma=2+4c_2$. This ends the proof. \end{proof}
\section{Proof of Theorem~\ref{thmnonpertana}}
The proof of Theorem~\ref{thmnonpertana} will be presented in section~\ref{sectnonpertana}, but first in section~\ref{mechanismana}, following \cite{LM05}, we will explain how the mechanism of instability that we explained in the Gevrey context can be (partly) generalized to an analytic context.
Recall that the first feature of the mechanism is to study perturbations of integrable maps and to obtain a result for perturbations of integrable flows by a ``quantitative" suspension argument. In the Gevrey case, this was particularly easy using compactly-supported functions. In the analytic case, this is more difficult but such a result exists, and here we will use a version due to Kuksin and Pöschel (\cite{KP94}).
The second and main feature of the mechanism is the use of a coupling lemma, which enables us to embed a low-dimensional map having unstable orbits into a multi-dimensional near-integrable map. In the Gevrey case, we simply used the family of maps $\psi_q : \mathbb A \rightarrow \mathbb A$ defined as in~(\ref{maps1}) and the difficult part was the choice of the coupling, where we made important use of the existence of compactly-supported functions. We do not know if this approach can be easily extended to the analytic case. However, by a result of Lochak and Marco (\cite{LM05}), one can still follow this path by using instead a suitable family of maps $\mathcal{F}_q : \mathbb A^2 \rightarrow \mathbb A^2$ having a well-controlled unstable orbit.
\subsection{The modified mechanism}\label{mechanismana}
\paraga So let us describe this family of maps $\mathcal{F}_q : \mathbb A^2 \rightarrow \mathbb A^2$, $q\in\mathbb N^*$. We fix a width of analyticity $\sigma>0$ (to be chosen small enough in Proposition~\ref{LocMar} below). For $q$ large enough, $\mathcal{F}_q$ will appear as a perturbation of the following \textit{a priori} unstable map \[ \mathcal{F}_*=\Phi^{\frac{1}{2} (I_1^2+I_2^2)+\cos 2\pi\theta_1} : \mathbb A^2 \rightarrow \mathbb A^2. \] More precisely, for $q\in\mathbb N^*$, let us define an analytic function $f_q : \mathbb A^2 \rightarrow \mathbb R$, depending only on the angle variables, by \[ f_q(\theta_1,\theta_2)=f_{q}^{(1)}(\theta_1)f^{(2)}(\theta_2), \] where \[ f_{q}^{(1)}(\theta_1)=(\sin\pi\theta_1)^{\nu(q,\sigma)}, \quad f^{(2)}(\theta_2)=-\pi^{-1}(2+\sin 2\pi(\theta_2+6^{-1})). \] We still have to define the exponent $\nu(q,\sigma)$ in the above expression for $f_{q}^{(1)}$. Let us denote by $[\,.\,]$ the integer part of a real number. Given $\sigma>0$, let $q_\sigma$ be the smallest positive integer such that \[ \left[\frac{\ln q_\sigma}{4\pi\sigma}+1\right]=1, \] then we set \[ \nu(q,\sigma)=2\left[\frac{\ln q}{4\pi\sigma}+1\right], \quad q\geq q_\sigma. \] In particular, for $q\geq q_\sigma$, $\nu(q,\sigma) \geq 2$ and it is always an even integer, hence $f_{q}^{(1)}$ is a well-defined $1$-periodic function. The reasons for the choice of this function $f_q$ are explained at length in \cite{Mar05} and \cite{LM05}, and we refer to these papers for motivation.
Finally, for $q\geq q_\sigma$, we can define \begin{equation}\label{Fq} \mathcal{F}_q=\Phi^{q^{-1}f_q}\circ\mathcal{F}_* : \mathbb A^2 \rightarrow \mathbb A^2. \end{equation} Let us also define the family of points $(\xi_{q,k})_{k\in\mathbb Z}$ of $\mathbb A^2$ by their coordinates \[ \xi_{q,k} : (\theta_1=1/2, I_1=2, \theta_2=0, I_2=q^{-1}(k+1)). \] Clearly, the $I_2$-component of the point $\xi_{q,k}$ converges to $\pm \infty$ when $k$ goes to $\pm \infty$, hence the sequence $(\xi_{q,k})_{k\in\mathbb Z}$ is wandering in $\mathbb A^2$.
\paraga The following result was proved in \cite{LM05}, Proposition 2.1.
\begin{proposition}[Lochak-Marco]\label{LocMar} There exist a width $\sigma>0$, an integer $q_0$ and a constant $0<d<1$ such that for any $q\geq q_0$, the diffeomorphism $\mathcal{F}_q : \mathbb A^2 \rightarrow \mathbb A^2$ has a point $\zeta_q \in \mathbb A^2$ which satisfies \begin{equation*}
|\mathcal{F}_{q}^{kq}(\zeta_q)-\xi_{q,k}|\leq d^{\nu(q,\sigma)}. \end{equation*} \end{proposition}
As a consequence, the orbit of the point $\zeta_q \in \mathbb A^2$ under the map $\mathcal{F}_{q}$ is also wandering in $\mathbb A^2$. In particular, for $k=0$ and $k=3q$ the above estimate yields
\[ |\zeta_q-\xi_{q,0}|\leq d^{\nu(q,\sigma)}, \quad |\mathcal{F}_{q}^{3q^2}(\zeta_q)-\xi_{q,3q}|\leq d^{\nu(q,\sigma)}, \] and as
\[ |\xi_{q,3q}-\xi_{q,0}|=3, \] one obtains \begin{equation}\label{timeana}
|\mathcal{F}_{q}^{3q^2}(\zeta_q)-\zeta_q| \geq 3-2d^{\nu(q,\sigma)} \geq 1. \end{equation}
The proof of the above proposition is rather difficult and it would be too long to explain it. We just mention that crucial ingredients are on the one hand a conjugacy result for normally hyperbolic manifolds (in the spirit of Sternberg) adapted to this analytic and symplectic context, and on the other hand the classical method of correctly aligned windows introduced by Easton.
\paraga Now this family of maps $\mathcal{F}_q : \mathbb A^2 \rightarrow \mathbb A^2$ will be used in the coupling lemma. More precisely, recalling the notations of the coupling lemma~\ref{coupling}, in the following we shall take $m=2$, \[ F=F_n=\Phi^{\frac{1}{2} (I_1^2+I_2^2)+N_n^{-2}\cos 2\pi\theta_1},\] which is just a rescaled version of the map $\mathcal{F}_*$ we introduced before, and $f=f_n=q_{n}^{-1}f_{q_n}$, for some positive integer parameters $N_n$ and $q_n$ to be defined below.
It remains to choose the dynamics on the second factor, and here it will be an easy task. In order to have a result for a continuous system with $n$ degrees of freedom, we set $m'=n-3$, and it will be just fine to take \[ G=G_n=\Phi^{\frac{1}{2}(I_3^2+\cdots+I_{n-1}^{2})}. \] If $(p_j)_{j\geq 0}$ is the ordered sequence of prime numbers, now we let $N_n$ be the product of the $n-3$ prime numbers $\{p_{n+4},\dots,p_{2n}\}$, that is \begin{equation}\label{Nn} N_n=p_{n+4}p_{n+5}\dots p_{2n}. \end{equation} The next proposition is the analytic analogue of Proposition~\ref{pertu}, and its proof is even simpler.
\begin{proposition}\label{pertuana} Let $n\geq 4$ and $\sigma >0$. Then there exist a function $g_n\in \mathcal{A}_{\sigma}(\mathbb T^{n-3})$ and a point $a_n\in\mathbb A^{n-3}$ such that $a_n$ is $N_n$-periodic for $G_n$ and $(g_n, G_n, a_n, N_n)$ satisfy the synchronization conditions (\ref{sync}): \begin{equation*} g_n(a_n)=1, \quad dg_n(a_n)=0, \quad g_n(G_n^k(a_n))=0, \quad dg_n(G_n^k(a_n))=0, \end{equation*} for $1 \leq k \leq N_n-1$. Moreover, there exists a positive constant $c$ depending only on $\sigma$ such that if \begin{equation}\label{qn} q_n=2N_n^4[e^{c(n-3)p_{2n}}], \end{equation} the estimate \begin{equation}\label{estimgnana}
q_n^{-1/2}|g_n|_{\sigma}\leq N_{n}^{-2}, \end{equation} holds true. \end{proposition}
The function $g_n$ belongs to $\mathcal{A}_{\sigma}(\mathbb T^{n-3})$, but it can also be considered as a function in $\mathcal{A}_{\sigma}(\mathbb A^{n-3})$ depending only on the angle variables.
As in the previous section, one can easily see that the coupling lemma, together with both Proposition~\ref{LocMar} and Proposition~\ref{pertuana}, already give us a result of instability for a perturbation of an integrable map, but we shall not state it.
\begin{proof} Recall that for $p \in \mathbb N^*$, we have defined in Lemma~\ref{funct} an analytic function $\eta_p : \mathbb T \rightarrow \mathbb R$ by \[ \eta_p(\theta)=\left(\frac{1}{p}\sum_{l=0}^{p-1}\cos 2\pi l\theta \right)^2. \] We choose our function $g_n\in \mathcal{A}_{\sigma}(\mathbb T^{n-3})$ of the form \[ g_n=g_n^{(3)}\otimes \cdots \otimes g_n^{(n-1)}, \] where \[ g_n^{(i)}(\theta_i)=\eta_{p_{n+1+i}}(\theta_i), \quad 3\leq i \leq n-1, \] and our point $a_n=(a_n^{(3)},\dots,a_n^{(n-1)})\in\mathbb A^{n-3}$ where \[ a_n^{(i)}=(0,p_{n+1+i}^{-1}), \quad 3\leq i \leq n-1. \] Recalling the definition of $G_n$ and $N_n$, it is obvious that $a_n$ is $N_n$-periodic for $G_n$. Moreover, by Lemma~\ref{funct}, the function $\eta_p$ satisfies \[ \eta_p(0)=1, \quad \eta_p'(0)=0, \quad \eta_p(k/p)=\eta_p'(k/p)=0, \] for $1 \leq k \leq p-1$, from which one can easily deduce that $(g_n, G_n, a_n, N_n)$ satisfy the synchronization conditions (\ref{sync}).
Concerning the estimate, first note that
\[ |\eta_p|_{\sigma}\leq e^{4\pi\sigma p} \] so that
\[ |g_n|_{\sigma} \leq |\eta_{p_{n+4}}|_{\sigma}\cdots |\eta_{p_{2n}}|_{\sigma} \leq e^{4\pi \sigma(n-3)p_{2n}}. \] Therefore, if we set $c=8\pi\sigma$, then by definition of $q_n$ one has \[ q_n^{1/2} \geq N_n^2 e^{4\pi \sigma(n-3)p_{2n}}\] and this eventually gives us
\[ q_n^{-1/2}|g_n|_{\sigma}\leq N_{n}^{-2}, \] which is the desired estimate. \end{proof}
\subsection{Proof of Theorem~\ref{thmnonpertana}}\label{sectnonpertana}
\paraga First we shall recall the following result of Kuksin-Pöschel (\cite{KP94}, see also \cite{Kuk93}).
\begin{proposition}[Kuksin-Pöschel]\label{susana}
Let $\Psi_n : \mathbb A^{n-1} \rightarrow \mathbb A^{n-1}$ be a bounded real-analytic exact-symplectic diffeomorphism, which has a bounded holomorphic extension to some complex neighbourhood $V_{\varrho}$, for some width $\varrho>0$ independent of $n\in\mathbb N^*$. Assume also that $|\Psi_n-\Phi^{\tilde{h}}|_{\varrho}$ goes to zero when $n$ goes to infinity, where $\tilde{h}(I_1,\dots,I_{n-1})=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)$.
Then there exist $n_0\in\mathbb N^*$, $\rho<\varrho$ such that for any $n\geq n_0$, there exists $f_n\in \mathcal{A}_\rho(\mathbb T^n \times B)$, independent of the variable $I_n$, such that if \[ H_n(\theta,I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n+f_n(\theta,I), \quad (\theta,I)\in\mathbb A^n, \] for any energy $e\in\mathbb R$, the Poincaré map induced by the Hamiltonian flow of $H_n$ on the section $\{\theta_n=0\}\cap H_{n}^{-1}(e)$ coincides with $\Psi_n$. Moreover, the estimate \begin{equation}\label{taillesusana}
|\Psi_n-\Phi^{\tilde{h}}|_{\varrho} \leq |f_n|_\rho \leq \delta_n |\Psi_n-\Phi^{\tilde{h}}|_{\varrho}, \end{equation} holds true for some constant $\delta_n$ that may depend on $n\in\mathbb N^*$. \end{proposition}
This suspension result is slightly less accurate (since more difficult) than Proposition~\ref{sus}, as there is a constant $\delta_n$ depending on $n$. However, what really matters is that the resulting width of analyticity $\rho$ depends only on $\varrho$ and $R$, but not on $n$.
\paraga Now we can finally prove the theorem.
\begin{proof}[Proof of Theorem~\ref{thmnonpertana}] Let $n\geq 4$ and $R>1$, let $\sigma>0$ be given by Proposition~\ref{LocMar}, and let $N_n$ and $q_n$ be defined as in (\ref{Nn}) and (\ref{qn}) respectively.
We will first construct a map $\Psi_n$ with a well-controlled wandering point. To this end, by Proposition~\ref{pertuana} we can apply the coupling lemma~\ref{coupling} with the following data: \[ F_n=\Phi^{\frac{1}{2} (I_1^2+I_2^2)+N_n^{-2}\cos 2\pi\theta_1}, \quad f_n=q_{n}^{-1}f_{q_n}, \quad G_n=\Phi^{\frac{1}{2}(I_3^2+\cdots+I_{n-1}^{2})}, \] and with the function $g_n$ and the point $a_n$ given by the aforementioned proposition. This gives us the following: if \[ u_n=q_{n}^{-1} f_{q_n}\otimes g_n , \quad v_n=N_n^{-2}V \] where $V(\theta_1)=\cos 2\pi\theta_1$, then the $N_n$-iterate of the map \[ \Psi_n=\Phi^{u_n} \circ \Phi^{\tilde{h}+v_n} : \mathbb A^{n-1} \rightarrow \mathbb A^{n-1}\] satisfies the following relation: \begin{equation}\label{coup} \Psi_n^{N_n}(x,a_n)=(\Phi^{q_{n}^{-1}f_{q_n}}\circ F_n^{N_n}(x),a_n), \quad x\in \mathbb A^2. \end{equation} Now let us look at the map \[ \Phi^{q_{n}^{-1}f_{q_n}}\circ F_n^{N_n}=\Phi^{q_{n}^{-1}f_{q_n}}\circ\left(\Phi^{\frac{1}{2} (I_1^2+I_2^2)+N_n^{-2}\cos 2\pi\theta_1}\right)^{N_n}. \] If $S_n(\theta_1,\theta_2,I_1,I_2)=(\theta_1,\theta_2,N_nI_1,N_nI_2)$ is the rescaling by $N_n$ in the action components, one sees that \[ \Phi^{q_{n}^{-1}f_{q_n}}\circ F_n^{N_n}=S_n^{-1}\circ \mathcal{F}_{N_n^{-1}q_n}\circ S_n \] where $\mathcal{F}_{N_n^{-1}q_n}$ is defined in~(\ref{Fq}). Now by Proposition~\ref{LocMar}, choosing $n$ large enough so that $N_n^{-1}q_n\geq q_0$, this map has a wandering point $\zeta_{N_n^{-1}q_n}\in\mathbb A^2$, which by~(\ref{timeana}) satisfies
\[ \left|\mathcal{F}_{N_n^{-1}q_n}^{3N_n^{-2}q_n^2} \left(\zeta_{N_n^{-1}q_n}\right)-\zeta_{N_n^{-1}q_n}\right| \geq 1. \] Using the above conjugacy relation, one finds that the point \[ \chi_n=S_n^{-1}(\zeta_{N_n^{-1}q_n})\in\mathbb A^2 \] wanders under the iteration of $\Phi^{q_{n}^{-1}f_{q_n}}\circ F_n^{N_n}$, and that its drift is bigger than one after $N_n(3N_n^{-2}q_n^2)=3N_n^{-1}q_n^2$ iterations, that is
\[ \left|(\Phi^{q_{n}^{-1}f_{q_n}}\circ F_n^{N_n})^{3N_n^{-1}q_n^2}(\chi_n)-\chi_n\right|\geq 1. \] By the relation~(\ref{coup}) this gives a wandering point $x_n=(\chi_n,a_n)\in\mathbb A^{n-1}$ for the map $\Psi_n$, satisfying the estimate \begin{equation}\label{estimpsi}
|\Psi_n^{3q_n^2}(x_n)-x_n|\geq 1. \end{equation}
Next let us estimate the distance between $\Psi_n$ and the integrable diffeomorphism $\Phi^{\tilde{h}}$. First note that since $u_n,v_n\in \mathcal{A}_\sigma(\mathbb T^{n-1})$, $\Psi_n$ extends holomorphically to a complex neighbourhood of size $\sigma$. Let us now estimate the norms of $u_n$ and $v_n$. Obviously, one has
\[ N_n^{-2}\leq|v_n|_\sigma\leq e^{2\pi\sigma}N_n^{-2}. \] By definition of $f_q$ and the exponent $\nu(q,\sigma)$, one easily obtains
\[ |q_n^{-1}f_{q_n}|_\sigma \leq q_n^{-1/2}|f^{(2)}|_\sigma, \] and hence
\[ |u_n|_\sigma \leq |q_n^{-1}f_{q_n}|_\sigma |g_n|_\sigma \leq q_n^{-1/2} |g_n|_\sigma |f^{(2)}|_\sigma \leq N_n^{-2} |f^{(2)}|_\sigma, \] where the last inequality follows from the estimate~(\ref{estimgnana}). Then by using Cauchy estimates and general inequalities on time-one maps, we obtain \begin{equation}\label{estimdis}
N_n^{-2} \leq |\Psi_n-\Phi^{\tilde{h}}|_{\varrho} \leq c_\sigma N_n^{-2}, \end{equation} for $n$ large enough, and for some constants $c_\sigma$ and $\varrho>0$ depending only on $\sigma$ (for instance, one can choose $\varrho=6^{-1}\sigma$).
Now we can eventually apply Proposition~\ref{susana}: there exist $n_0\in\mathbb N^*$, $\rho<\varrho$ such that for any $n\geq n_0$, there exists $f_n\in \mathcal{A}_\rho(\mathbb T^n \times B)$, independent of the variable $I_n$, such that if \[ H_n(\theta,I)=\frac{1}{2}(I_1^2+\cdots+I_{n-1}^2)+I_n+f_n(\theta,I), \quad (\theta,I)\in\mathbb A^n, \] for any energy $e\in\mathbb R$, the Poincaré map induced by the Hamiltonian flow of $H_n$ on the section $\{\theta_n=0\}\cap H_{n}^{-1}(e)$ coincides with $\Psi_n$. Clearly, the wandering point $x_n$ for $\Psi_n$ gives us a wandering orbit $(x(t),t,I_n(t))=(x(t),\theta_n(t),I_n(t))$ for the Hamiltonian vector field generated by $H_n$, such that \[ x(k)=\Psi_n^k(x_n), \quad k\in\mathbb Z. \] In particular, after a time $\tau_n=3q_n^2$, by the above equality and the relation~(\ref{estimpsi}) this orbit drifts from $0$ to $1$.
Now it remains to estimate the size of the perturbation $\varepsilon_n=|f_n|_\rho$ and the time of drift $\tau_n$ in terms of the number of degrees of freedom $n$. First, by~(\ref{taillesusana}) and~(\ref{estimdis}), \begin{equation}\label{estNepsana} N_{n}^{-2}\leq \varepsilon_n \leq c_n N_{n}^{-2}, \end{equation} with $c_n=c_\sigma \delta_n$. Then, by the prime number theorem, taking $n_0$ large enough, one can ensure that \[ p_{2n}/4 \leq p_{n+i} \leq p_{2n}, \quad 4\leq i\leq n, \] which gives \begin{equation}\label{estimPNana} (p_{2n}/4)^{n-3} \leq N_n \leq p_{2n}^{n-3}, \quad N_n^{\frac{1}{n-3}}\leq p_{2n} \leq 4N_n^{\frac{1}{n-3}}. \end{equation} We can also assume by the prime number theorem that for $n\geq n_0$, one has \begin{equation}\label{nompremierana} 2n\ln 2n \leq p_{2n} \leq 2(2n \ln 2n)=4n\ln 2n. \end{equation} From the above estimates~(\ref{estimPNana}) and~(\ref{nompremierana}) one easily obtains \begin{equation}\label{estNana} e^{(n-3)\ln (2^{-1}n\ln 2n)}\leq N_n\leq e^{(n-3)\ln (4n\ln 2n)}, \end{equation} and, together with~(\ref{estNepsana}), one finds \begin{equation}\label{estpertana} e^{-2(n-3)\ln (4n\ln 2n)}\leq \varepsilon_n \leq c_n e^{-2(n-3)\ln (2^{-1}n\ln 2n)}. \end{equation} Concerning the time $\tau_n$, we have \[ \tau_n=3q_n^2 \leq 12 N_n^{8} e^{2c(n-3)p_{2n}} \leq 12 N_n^{8} e^{8c(n-3)N_n^{\frac{1}{n-3}}} , \] where the last inequality follows from~(\ref{estimPNana}). Then using~(\ref{estNana}) we have \[ N_n^{\frac{1}{(n-3)}}\leq 4n \ln 2n \] and from~(\ref{estNepsana}) we know that \[ N_n^8 \leq \left(\frac{c_n}{\varepsilon_n}\right)^4, \] so we obtain \[ \tau_n\leq 12\left(\frac{c_n}{\varepsilon_n}\right)^4 e^{32c(n-3)n \ln 2n}. \] Then one can ensure that for $n\geq n_0$, \[ \ln 2n \leq \ln (2^{-1}n\ln 2n), \] so \[ 32c(n-3)n \ln 2n \leq 32c(n-3)n \ln (2^{-1}n\ln 2n). 
\] Therefore \begin{eqnarray*} \tau_n & \leq & 12\left(\frac{c_n}{\varepsilon_n}\right)^4 e^{32c(n-3)n \ln (2^{-1}n\ln 2n)} \\ & \leq & 12\left(\frac{c_n}{\varepsilon_n}\right)^4\left(e^{2(n-3)\ln (2^{-1}n\ln 2n)}\right)^{16 cn}. \end{eqnarray*} Finally by~(\ref{estpertana}) we obtain \begin{eqnarray*} \tau_n & \leq & 12\left(\frac{c_n}{\varepsilon_n}\right)^4\left(\frac{c_n}{\varepsilon_n}\right)^{16cn} \\ & \leq & C\left(\frac{c_n}{\varepsilon_n}\right)^{n\gamma} \end{eqnarray*} with $C=12$ and $\gamma=4+16c$. This concludes the proof. \end{proof}
\appendix
\section{Gevrey functions}\label{Gev}
In this very short appendix, we recall some facts about Gevrey functions that we used in the text. We refer to~\cite{MS02}, Appendix A, for more details.
The most important property of $\alpha$-Gevrey functions is the existence, for $\alpha>1$, of bump functions.
\begin{lemma}\label{lemmeGev1} Let $\alpha>1$ and $L>0$. There exists a non-negative $1$-periodic function $\varphi_{\alpha,L}\in G^{\alpha,L}\left([-\frac{1}{2},\frac{1}{2}]\right)$ whose support is included in $[-\frac{1}{4},\frac{1}{4}]$ and such that $\varphi_{\alpha,L}(0)=1$ and $\varphi_{\alpha,L}'(0)=0$. \end{lemma}
The following estimate on the product of Gevrey functions follows easily from the Leibniz formula.
\begin{lemma}\label{lemmeGev2} Let $L>0$, and $f,g\in G^{\alpha,L}(\mathbb T^n \times \overline{B})$. Then
\[ |fg|_{\alpha,L}\leq |f|_{\alpha,L}|g|_{\alpha,L}. \] \end{lemma}
Finally, estimates on the composition of Gevrey functions are much more difficult (see Proposition A.1 in \cite{MS02}), but here we shall only need the following statement.
\begin{lemma}\label{lemmeGev3} Let $\alpha\geq 1$, $\Lambda_1>0, L_1>0$, and $I,J$ be compact intervals of $\mathbb R$. Let $f\in G^{\alpha,\Lambda_1}(I)$, $g\in G^{\alpha,L_1}(J)$ and assume $g(J)\subseteq I$. If
\[ |g|_{\alpha,L_1}\leq \Lambda_{1}^{\alpha},\] then $f\circ g \in G^{\alpha,L_1}(J)$ and
\[ |f\circ g|_{\alpha,L_1} \leq |f|_{\alpha,\Lambda_1}. \] \end{lemma}
{\it Acknowledgments.}
The author is indebted to Jean-Pierre Marco for suggesting him this problem to work on, for helpful discussions, comments and corrections on a first version of this paper. He also thanks the anonymous referee for several interesting suggestions. Finally, the author thanks the University of Warwick where this work has been finished while he was a Research Fellow through the Marie Curie training network ``Conformal Structures and Dynamics (CODY)".
\addcontentsline{toc}{section}{References}
\end{document}
Jacobi's formula
In matrix calculus, Jacobi's formula expresses the derivative of the determinant of a matrix A in terms of the adjugate of A and the derivative of A.[1]
If A is a differentiable map from the real numbers to n × n matrices, then
${\frac {d}{dt}}\det A(t)=\operatorname {tr} \left(\operatorname {adj} (A(t))\,{\frac {dA(t)}{dt}}\right)=\left(\det A(t)\right)\cdot \operatorname {tr} \left(A(t)^{-1}\cdot \,{\frac {dA(t)}{dt}}\right)$
where tr(X) is the trace of the matrix X. (The latter equality only holds if A(t) is invertible.)
As a special case,
${\partial \det(A) \over \partial A_{ij}}=\operatorname {adj} (A)_{ji}.$
Equivalently, if dA stands for the differential of A, the general formula is
$d\det(A)=\operatorname {tr} (\operatorname {adj} (A)\,dA).$
The formula is named after the mathematician Carl Gustav Jacob Jacobi.
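As a quick numerical sanity check, the two sides of Jacobi's formula can be compared on a small example. The sketch below (plain Python, no libraries; the 2 × 2 matrix path A(t) is an arbitrary choice, not taken from the article) differentiates det A(t) by central differences and compares the result with tr(adj(A(t)) dA/dt):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adj2(M):
    """Adjugate of a 2x2 matrix: swap the diagonal, negate the off-diagonal."""
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

def tr_prod(X, Y):
    """tr(XY) computed without forming the full product."""
    return sum(X[i][k] * Y[k][i] for i in range(2) for k in range(2))

def A(t):   # an arbitrary differentiable matrix path
    return [[1.0 + t, 2.0 * t], [t * t, 3.0 + t]]

def dA(t):  # its entrywise derivative
    return [[1.0, 2.0], [2.0 * t, 1.0]]

t, h = 0.7, 1e-6
lhs = (det2(A(t + h)) - det2(A(t - h))) / (2 * h)  # d/dt det A(t), numerically
rhs = tr_prod(adj2(A(t)), dA(t))                   # tr(adj(A) dA/dt)
print(abs(lhs - rhs))  # agreement up to finite-difference error
```

The same comparison against det(A) · tr(A⁻¹ · dA/dt) would work whenever A(t) is invertible.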
Derivation
Via Matrix Computation
We first prove a preliminary lemma:
Lemma. Let A and B be a pair of square matrices of the same dimension n. Then
$\sum _{i}\sum _{j}A_{ij}B_{ij}=\operatorname {tr} (A^{\rm {T}}B).$
Proof. The product AB of the pair of matrices has components
$(AB)_{jk}=\sum _{i}A_{ji}B_{ik}.$
Replacing the matrix A by its transpose AT is equivalent to permuting the indices of its components:
$(A^{\rm {T}}B)_{jk}=\sum _{i}A_{ij}B_{ik}.$
The result follows by taking the trace of both sides:
$\operatorname {tr} (A^{\rm {T}}B)=\sum _{j}(A^{\rm {T}}B)_{jj}=\sum _{j}\sum _{i}A_{ij}B_{ij}=\sum _{i}\sum _{j}A_{ij}B_{ij}.\ \square $
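The lemma is easy to confirm on concrete numbers; in this sketch the two 2 × 2 matrices are arbitrary test values:

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Left-hand side: the double sum over all entries.
entry_sum = sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

# Right-hand side: form A^T B explicitly, then take its trace.
AtB = [[sum(A[i][j] * B[i][k] for i in range(2)) for k in range(2)]
       for j in range(2)]
trace = AtB[0][0] + AtB[1][1]

print(entry_sum, trace)  # both equal 70 here
```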
Theorem. (Jacobi's formula) For any differentiable map A from the real numbers to n × n matrices,
$d\det(A)=\operatorname {tr} (\operatorname {adj} (A)\,dA).$
Proof. Laplace's formula for the determinant of a matrix A can be stated as
$\det(A)=\sum _{j}A_{ij}\operatorname {adj} ^{\rm {T}}(A)_{ij}.$
Notice that the summation is performed over some arbitrary row i of the matrix.
The determinant of A can be considered to be a function of the elements of A:
$\det(A)=F\,(A_{11},A_{12},\ldots ,A_{21},A_{22},\ldots ,A_{nn})$
so that, by the chain rule, its differential is
$d\det(A)=\sum _{i}\sum _{j}{\partial F \over \partial A_{ij}}\,dA_{ij}.$
This summation is performed over all n×n elements of the matrix.
To find ∂F/∂Aij consider that on the right hand side of Laplace's formula, the index i can be chosen at will. (Any other choice of i would eventually yield the same result, though the calculation could be much harder.) In particular, it can be chosen to match the first index of ∂ / ∂Aij:
${\partial \det(A) \over \partial A_{ij}}={\partial \sum _{k}A_{ik}\operatorname {adj} ^{\rm {T}}(A)_{ik} \over \partial A_{ij}}=\sum _{k}{\partial (A_{ik}\operatorname {adj} ^{\rm {T}}(A)_{ik}) \over \partial A_{ij}}$
Thus, by the product rule,
${\partial \det(A) \over \partial A_{ij}}=\sum _{k}{\partial A_{ik} \over \partial A_{ij}}\operatorname {adj} ^{\rm {T}}(A)_{ik}+\sum _{k}A_{ik}{\partial \operatorname {adj} ^{\rm {T}}(A)_{ik} \over \partial A_{ij}}.$
Now, if an element of a matrix Aij and a cofactor adjT(A)ik of element Aik lie on the same row (or column), then the cofactor will not be a function of Aij, because the cofactor of Aik is expressed in terms of elements not in its own row (or column). Thus,
${\partial \operatorname {adj} ^{\rm {T}}(A)_{ik} \over \partial A_{ij}}=0,$
so
${\partial \det(A) \over \partial A_{ij}}=\sum _{k}\operatorname {adj} ^{\rm {T}}(A)_{ik}{\partial A_{ik} \over \partial A_{ij}}.$
All the elements of A are independent of each other, i.e.
${\partial A_{ik} \over \partial A_{ij}}=\delta _{jk},$
where δ is the Kronecker delta, so
${\partial \det(A) \over \partial A_{ij}}=\sum _{k}\operatorname {adj} ^{\rm {T}}(A)_{ik}\delta _{jk}=\operatorname {adj} ^{\rm {T}}(A)_{ij}.$
Therefore,
$d(\det(A))=\sum _{i}\sum _{j}\operatorname {adj} ^{\rm {T}}(A)_{ij}\,dA_{ij},$
and applying the Lemma yields
$d(\det(A))=\operatorname {tr} (\operatorname {adj} (A)\,dA).\ \square $
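The entrywise identity ∂det(A)/∂Aij = adjT(A)ij obtained above can be checked by finite differences; the 3 × 3 matrix in this sketch is an arbitrary test case, and adjT(A)ij is computed directly as the cofactor Cij:

```python
def det3(M):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cofactor(M, i, j):
    """C_ij = (-1)^(i+j) times the minor deleting row i and column j."""
    rows = [r for r in range(3) if r != i]
    cols = [c for c in range(3) if c != j]
    a, b = rows
    c, d = cols
    minor = M[a][c] * M[b][d] - M[a][d] * M[b][c]
    return (-1) ** (i + j) * minor

M = [[2.0, 1.0, 0.0], [1.0, 3.0, 2.0], [0.0, 1.0, 4.0]]  # arbitrary matrix
h = 1e-6
worst = 0.0
for i in range(3):
    for j in range(3):
        up = [row[:] for row in M]; up[i][j] += h
        dn = [row[:] for row in M]; dn[i][j] -= h
        partial = (det3(up) - det3(dn)) / (2 * h)  # numerical d(det)/dA_ij
        worst = max(worst, abs(partial - cofactor(M, i, j)))
print(worst)  # tiny: each partial derivative matches the cofactor C_ij
```

Because the determinant is linear in each individual entry, the central difference here is exact up to floating-point rounding.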
Via Chain Rule
Lemma 1. $\det '(I)=\mathrm {tr} $, where $\det '$ is the differential of $\det $.
This equation means that the differential of $\det $, evaluated at the identity matrix, is equal to the trace. The differential $\det '(I)$ is a linear operator that maps an n × n matrix to a real number.
Proof. Using the definition of a directional derivative together with one of its basic properties for differentiable functions, we have
$\det '(I)(T)=\nabla _{T}\det(I)=\lim _{\varepsilon \to 0}{\frac {\det(I+\varepsilon T)-\det I}{\varepsilon }}$
$\det(I+\varepsilon T)$ is a polynomial in $\varepsilon $ of order n. It is closely related to the characteristic polynomial of $T$. The constant term ($\varepsilon =0$) is 1, while the linear term in $\varepsilon $ is $\mathrm {tr} \ T$.
Lemma 2. For an invertible matrix A, we have: $\det '(A)(T)=\det A\;\mathrm {tr} (A^{-1}T)$.
Proof. Consider the following function of X:
$\det X=\det(AA^{-1}X)=\det(A)\ \det(A^{-1}X)$
We calculate the differential of $\det X$ and evaluate it at $X=A$ using Lemma 1, the equation above, and the chain rule:
$\det '(A)(T)=\det A\ \det '(I)(A^{-1}T)=\det A\ \mathrm {tr} (A^{-1}T)$
Theorem. (Jacobi's formula) ${\frac {d}{dt}}\det A=\mathrm {tr} \left(\mathrm {adj} \ A{\frac {dA}{dt}}\right)$
Proof. If $A$ is invertible, by Lemma 2, with $T=dA/dt$
${\frac {d}{dt}}\det A=\det A\;\mathrm {tr} \left(A^{-1}{\frac {dA}{dt}}\right)=\mathrm {tr} \left(\mathrm {adj} \ A\;{\frac {dA}{dt}}\right)$
using the equation relating the adjugate of $A$ to $A^{-1}$. Now, the formula holds for all matrices, since the set of invertible matrices is dense in the space of matrices.
Via Diagonalization
Both sides of the Jacobi formula are polynomials in the matrix coefficients of A and A'. It is therefore sufficient to verify the polynomial identity on the dense subset where the eigenvalues of A are distinct and nonzero.
If A factors differentiably as $A=BC$, then
$\mathrm {tr} (A^{-1}A')=\mathrm {tr} ((BC)^{-1}(BC)')=\mathrm {tr} (B^{-1}B')+\mathrm {tr} (C^{-1}C').$
In particular, if L is invertible, then $I=L^{-1}L$ and
$0=\mathrm {tr} (I^{-1}I')=\mathrm {tr} (L(L^{-1})')+\mathrm {tr} (L^{-1}L').$
Since A has distinct eigenvalues, there exists a differentiable complex invertible matrix L such that $A=L^{-1}DL$ and D is diagonal. Then
$\mathrm {tr} (A^{-1}A')=\mathrm {tr} (L(L^{-1})')+\mathrm {tr} (D^{-1}D')+\mathrm {tr} (L^{-1}L')=\mathrm {tr} (D^{-1}D').$
Let $\lambda _{i}$, $i=1,\ldots ,n$ be the eigenvalues of A. Then
${\frac {\det(A)'}{\det(A)}}=\sum _{i=1}^{n}\lambda _{i}'/\lambda _{i}=\mathrm {tr} (D^{-1}D')=\mathrm {tr} (A^{-1}A'),$
which is the Jacobi formula for matrices A with distinct nonzero eigenvalues.
Corollary
The following is a useful relation connecting the trace to the determinant of the associated matrix exponential:
$\det e^{B}=e^{\operatorname {tr} \left(B\right)}$
This statement is clear for diagonal matrices, and a proof of the general claim follows.
For any invertible matrix $A(t)$, in the previous section "Via Chain Rule", we showed that
${\frac {d}{dt}}\det A(t)=\det A(t)\;\operatorname {tr} \left(A(t)^{-1}\,{\frac {d}{dt}}A(t)\right)$
Considering $A(t)=\exp(tB)$ in this equation yields:
${\frac {d}{dt}}\det e^{tB}=\operatorname {tr} (B)\det e^{tB}$
The desired result follows as the solution to this ordinary differential equation.
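The corollary is straightforward to verify numerically. The sketch below keeps to plain Python: the 2 × 2 matrix B and the truncation of the exponential series at 30 terms are arbitrary choices (adequate for a matrix of small norm, not a production algorithm):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(B, terms=30):
    """Matrix exponential of a 2x2 B via a truncated Taylor series."""
    E = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at the identity
    P = [[1.0, 0.0], [0.0, 1.0]]  # current term B^k / k!
    for k in range(1, terms):
        P = matmul(P, B)
        P = [[P[i][j] / k for j in range(2)] for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

B = [[0.4, -0.3], [0.2, 0.1]]          # arbitrary small matrix, tr(B) = 0.5
E = expm2(B)
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]
print(det_E, math.exp(0.4 + 0.1))      # det(e^B) vs e^{tr B}
```

For larger or badly conditioned matrices one would instead use a scaling-and-squaring routine such as the one behind `scipy.linalg.expm`, but the identity itself holds for every square matrix B.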
Applications
Several forms of the formula underlie the Faddeev–LeVerrier algorithm for computing the characteristic polynomial, and explicit applications of the Cayley–Hamilton theorem. For example, starting from the following equation, which was proved above:
${\frac {d}{dt}}\det A(t)=\det A(t)\ \operatorname {tr} \left(A(t)^{-1}\,{\frac {d}{dt}}A(t)\right)$
and using $A(t)=tI-B$, we get:
${\frac {d}{dt}}\det(tI-B)=\det(tI-B)\operatorname {tr} [(tI-B)^{-1}]=\operatorname {tr} [\operatorname {adj} (tI-B)]$
where adj denotes the adjugate matrix.
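This last identity can also be spot-checked numerically; the 2 × 2 matrix B and the evaluation point t in this sketch are arbitrary:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adj2(M):
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

B = [[1.0, 2.0], [3.0, 4.0]]  # arbitrary 2x2 example, tr(B) = 5

def char(t):                   # the characteristic polynomial det(tI - B)
    return det2([[t - B[0][0], -B[0][1]], [-B[1][0], t - B[1][1]]])

t, h = 2.5, 1e-6
lhs = (char(t + h) - char(t - h)) / (2 * h)          # d/dt det(tI - B)
M = [[t - B[0][0], -B[0][1]], [-B[1][0], t - B[1][1]]]
rhs = adj2(M)[0][0] + adj2(M)[1][1]                  # tr(adj(tI - B))
print(lhs, rhs)  # both equal 2t - tr(B), which is 0 at t = 2.5
```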
Remarks
1. Magnus & Neudecker (1999, pp. 149–150), Part Three, Section 8.3
References
• Magnus, Jan R.; Neudecker, Heinz (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics (Revised ed.). Wiley. ISBN 0-471-98633-X.
• Bellman, Richard (1997). Introduction to Matrix Analysis. SIAM. ISBN 0-89871-399-4.
Finite state machine implementation for left ventricle modeling and control
Jacob M. King1,
Clint A. Bergeron1 &
Charles E. Taylor ORCID: orcid.org/0000-0002-3408-29941
Simulation of a left ventricle has become a critical facet of evaluating therapies and operations that interact with cardiac performance. The ability to simulate a wide range of possible conditions, changes in cardiac performance, and production of nuances at transition points enables evaluation of precision medicine concepts that are designed to function through this spectrum. Ventricle models have historically been based on biomechanical analysis, with model architectures composed of continuous states that are not conducive to deterministic processing. Producing a finite-state machine governance of a left ventricle model would enable a broad range of applications: physiological controller development, experimental left ventricle control, and high-throughput simulations of left ventricle function.
A method for simulating left ventricular pressure-volume control utilizing a preload, afterload, and contractility sensitive computational model is shown. This approach uses a logic-based conditional finite state machine based on the four pressure-volume phases that describe left ventricular function. This was executed with a physical system hydraulic model using MathWorks' Simulink® and Stateflow tools.
The approach developed is capable of simulating changes in preload, afterload, and contractility in time based on a patient's preload analysis. Six pressure–volume loop simulations are presented: a baseline, a preload change only, an afterload change only, a contractility change only, a clinical control, and heart failure with normal ejection fraction. All simulations produced an absolute error of less than 1 mmHg in pressure and less than 1 mL in volume between the desired and simulated set points. The acceptable performance of the fixed-timestep architecture in the finite state machine allows for deployment to deterministic systems, such as experimental systems for validation.
The proposed approach allows for personalized data, revealed through an individualized clinical pressure–volume analysis, to be simulated in silico. The computational model architecture enables this control structure to be executed on deterministic systems that govern experimental left ventricles. This provides a mock circulatory system with the ability to investigate the pathophysiology for a specific individual by replicating the exact pressure–volume relationship defined by their left ventricular functionality; as well as perform predictive analysis regarding changes in preload, afterload, and contractility in time.
Every year since 1919, cardiovascular disease (CVD) has accounted for more deaths than any other major cause of death in the United States [1]. Based on data collected by the National Health and Nutrition Examination Survey (NHANES), CVD was listed as the underlying cause of death in 30.8% of all deaths in 2014, approximately 1 of every 3 deaths in the U.S., while CVD contributed to 53.8% of all deaths in that year. Additionally, data accumulated from 2011 to 2014 revealed that approximately 92.1 million American adults currently have one or more types of CVD, and projections estimate that by 2030, 43.9% of the U.S. population will have some form of this disease.
Research has revealed that CVD is a leading contributor to Congestive Heart Failure (CHF) [2]. CHF is a medical condition that occurs when the heart is incapable of meeting the demands necessary for maintaining an adequate amount of blood flow to the body, resulting in ankle swelling, breathlessness, fatigue, and potentially death [2]. In 2012, the total cost for CHF alone was estimated to be $30.7 billion with 68% attributed to direct medical costs. Furthermore, predictions indicate that by 2030, the total cost of CHF will increase almost 127% to an estimated $69.7 billion [1]. This prediction is based on data that revealed that one-third of the U.S. adult population has the predisposing conditions for CHF. With research revealing that 50% of people who develop CHF will die within 5 years of being diagnosed [1, 3], the need to evaluate treatments for this widening patient population is of growing importance.
One treatment alternative for patients with late-stage CHF is the use of a ventricular assist device (VAD) to directly assist with the blood flow demands of the circulatory system [2]. Implantable VADs have proven their potential as a quickly implemented solution for bridge to recovery, bridge to transplant, and destination therapy [4]. Given the severity of CHF, and the impending need for supplemental support from these cardiac assist devices, effective methods of identifying the recipient cardiovascular profile and matching that to the operation of the VAD is critical to the success of the intervention.
The effectiveness of CHF diagnosis and treatment therapy depends on an accurate and early assessment of the underlying pathophysiology attributed to a specific type of CVD, typically by means of analyzing ventricular functionality [2, 5, 6]. Clinical application of non-invasive cardiac imaging in the management of CHF patients with systolic and/or diastolic dysfunction has become the standard with the use of procedures such as echocardiography [7,8,9,10]. Echocardiography is a non-invasive ultrasound procedure used to assess the heart's structures and functionality, to include the left ventricular ejection fraction (LVEF), left ventricular end-diastolic volume (LVEDV), and left ventricular end-systolic volume (LVESV). Three-dimensional echocardiography of adequate quality has been shown to improve the quantification of left ventricular (LV) volumes and LVEF, as well as provide data with better accuracy when compared with values obtained from cardiac magnetic resonance imaging [2, 11]. At present, echocardiography has been shown to be the most accessible technology capable of diagnosing diastolic dysfunction; therefore, a comprehensive echocardiography examination incorporating all relevant two-dimensional and Doppler data is recommended [2]. Doppler techniques allow for the calculation of hemodynamic variations, such as stroke volume (SV) and cardiac output (CO), based on the velocity time integral through the LV outflow tract area.
A left ventricular pressure–volume (LV-PV) analysis, employing hemodynamic principles, has effectively served as a basis for understanding cardiac physiology and pathophysiology for decades [12, 13]. LV-PV analysis has been primarily restricted to clinical investigations in a research environment and therefore has not been extensively used, due to the invasive nature of the procedure [14, 15]. A broader predictive application for detecting and simulating CHF is more easily attainable with the development of single-beat methodologies that rely only on data collected through non-invasive techniques. These techniques include echocardiographic measurements of the left ventricular volume (LVV), Doppler data, peripheral estimates of left ventricular pressure (LVP), and the timing of the cardiac cycle [16,17,18,19,20,21].
Utilizing data obtained non-invasively, population and patient-specific investigations can be conducted by simulating the LV-PV relationship obtained through the PV analysis by means of a mock circulatory system (MCS) [22, 23]. An MCS is a mechanical representation of the human circulatory system, essential for in vitro evaluation of VADs, as well as other cardiac assist technologies [24,25,26,27,28,29]. An MCS effectively simulates the circulatory system by replicating specific cardiovascular conditions, primarily pressure [mmHg] and flow rate [mL/s], in an integrated bench-top hydraulic circuit. Utilizing these hydraulic cardiovascular simulators and data obtained through a clinical PV analysis, the controls that govern the LV portion of the MCS could be driven to produce the PV relationship of a CVD profile, a specific population, or a patient [30]. With research revealing the increasing need for these medical devices [31], a comprehensive in vitro analysis could be completed to assure beforehand that a particular cardiac assist device treatment will be effective. The ability of an MCS to replicate the exact PV relationship that defines the pathophysiology for a specific individual allows for a robust in vitro analysis to be completed, and a "patient specific diagnosis" created, ensuring a higher standard of patient care [32, 33].
This manuscript is organized as follows. "Background" section summarizes the principal theories governing the modeling of the PV relationship, its history in simulating cardiovascular hemodynamics within an MCS, and how a PV loop controller should perform for subsequent in vitro testing. "Method" section presents the proposed methodology for developing LV-PV control functionality, utilizing a logic-based conditional finite state machine (FSM) and a physical system modeling approach; the experimental results are then presented in "Results" section. "Discussion" section discusses the results of this investigation, followed by "Conclusion" section, which outlines the limitations of the approach and future investigations.
Pressure–volume relationship
The efficacy of the PV relationship, often referred to as a PV loop, in describing and quantifying the fundamental mechanical properties of the LV was first demonstrated in 1895 by Otto Frank [34]. Frank represented the cardiac cycle of ventricular contraction as a loop on a plane defined by ventricular pressure on the vertical axis and ventricular volume on the horizontal. By the late twentieth century, the PV analysis was considered the gold standard for assessing ventricular properties, primarily due to the research conducted by Suga and Sagawa [35,36,37]. Yet, this approach has failed to become the clinical standard for evaluating LV functionality due to the invasive nature of the procedure [14, 15]. However, due to recent advances in single-beat methodologies, the practical application of PV analysis is expanding [18,19,20]. Most recent are the efforts published in 2018 by Davidson et al. regarding the development of a beat-by-beat method for estimating the left ventricular PV relationship using inputs that are clinically accessible in an intensive care unit (ICU) setting and are supported by a brief echocardiograph evaluation [20].
There has been extensive clinical and computational research into understanding the PV relationship, which is presented in Fig. 1 [12, 21, 30, 38]. However, for the purpose of repeatability within a MCS, the culmination of this knowledge can be summarized by simplifying the performance of the LV through three principal factors: preload, afterload, and contractility [24, 25]. These have significant implications on VAD performance [39].
Left Ventricular Pressure–Volume Relationship (Stouffer [30]). a Schematic of LV pressure–volume loop in a normal heart. In Phase I, preceding the opening of the mitral valve, ventricular filling occurs with only a small increase in pressure and a large increase in volume, guided along the EDPVR curve. Phase II constitutes the first segment of systole called isovolumetric contraction. Phase III begins with the opening of the aortic valve; ejection initiates and LV volume falls as LV pressure continues to increase. Isovolumetric relaxation begins after the closure of the aortic valve constituting Phase IV. b Effects of increasing preload on a LV-PV loop with afterload and contractility remaining constant. Loop 2 has an increased preload compared to loop 1, produced by shifting the arterial elastance (Ea) line in parallel while keeping its slope (Ea) constant, resulting in an increase in SV. c Effects of increasing afterload on a LV-PV loop with preload and contractility held constant. This consists of increasing the slope of the Ea line. d Effects of increasing contractility on a LV-PV loop with preload and afterload remaining constant. This consists of increasing the slope (Ees) of the ESPVR line. Note that in b, c, and d, loop 2 represents the increase in the respective principal factor, i.e. preload, afterload, and contractility, when compared to loop 1
A schematic of the LV pressure–volume loop in a normal heart is presented in Fig. 1a. In Phase I, ventricular filling occurs with only a small increase in pressure and a large increase in volume, guided along the EDPVR curve. Phase I can additionally be divided into two sub-phases: rapid filling governed by the elastance of the ventricle, and atrial systole that brings the ventricle into optimal preload for contraction. Phase II constitutes the first segment of systole called isovolumetric contraction. Phase III begins with the opening of the aortic valve; ejection initiates and LV volume falls as LV pressure continues to increase. Phase III can be divided into two sub-phases: rapid ejection and reduced ejection. Isovolumetric relaxation begins after the closure of the aortic valve constituting Phase IV.
Ventricular preload refers to the amount of passive tension or stretch exerted on the ventricular walls (i.e. intraventricular pressure) just prior to systolic contraction [14, 29]. This load determines the end-diastolic sarcomere length and thus the force of contraction. Because the true sarcomere length is not easily measured clinically, preload is typically measured by ventricular pressure and volume at the point immediately preceding isovolumetric ventricular contraction. This correlation is described through the end-systolic pressure–volume relationship (ESPVR), as well as through the end-diastolic pressure–volume relationship (EDPVR). The effects of increasing preload on the PV relationship are displayed in Fig. 1b: a reduced isovolumetric contraction period and an increased stroke volume.
Afterload is defined as the forces opposing ventricular ejection [14]. Effective arterial elastance (Ea) is a lumped measure of total arterial load that incorporates the mean resistance with the pulsatile factors that vary directly with heart rate and systemic vascular resistance, and relates inversely with total arterial compliance. Ea is directly defined as the ratio of left ventricular end-systolic pressure (LVESP) to SV. In practice, another measure of afterload is the LVESP at the moment ventricular pressure begins to decrease to less than systemic arterial pressure. The effects of increasing afterload are presented in Fig. 1c: an increase in peak systolic pressure and a decrease in stroke volume.
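The definition of Ea as the ratio of LVESP to SV can be checked numerically. The short sketch below (Python, purely illustrative; the function name is ours) uses the LVESP and LVSV values reported for the example loop in "PV loop critical point determination" section, and its result agrees with the magnitude of the Ea line slope (c1 = − 1.7504) used there.

```python
# Effective arterial elastance: Ea = LVESP / SV, in mmHg/mL
def arterial_elastance(lvesp_mmHg, sv_mL):
    """Ratio of end-systolic pressure to stroke volume."""
    return lvesp_mmHg / sv_mL

# Example loop values from the text: LVESP = 110.13 mmHg, LVSV = 62.93 mL
Ea = arterial_elastance(110.13, 62.93)
print(round(Ea, 2))  # ~1.75, matching the magnitude of the c1 slope
```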
An acceptable clinical index of contractility that is independent of preload and afterload has not been completely defined [29]. In non-pathological conditions, contractility is best described by the pressure–volume point when the aortic valve closes. Contractility is typically measured by the slope of the ESPVR line, known as Ees, which is calculated as \(\frac{{\Delta {\text{P}}}}{{\Delta {\text{V }}}}\) [38]. An additional index of contractility is dP/dtmax, the maximum rate of ventricular pressure rise during the isovolumetric period. The effects of increasing contractility on the PV relationship are revealed in Fig. 1d, showing the ability of the stroke volume to be maintained with increasing peak systolic pressure.
For a given ventricular state, there is not just a single Frank-Starling curve, rather there is a set or family of curves [29]. Each curve is determined by the driving conditions of preload, afterload, and inotropic state (contractility) of the heart. While deviations in venous return can cause a ventricle to move along a single Frank-Starling curve, changes in the driving conditions can cause the PV relationship of the heart to shift to a different Frank-Starling curve. This allows clinicians to diagnose the pathophysiological state of a dysfunctional heart by analyzing the PV relationship of a patient.
Additionally, it provides the ability to simulate diseased states: heart failure [14], valvular disease [29], or specific cardiovascular dysfunction seen in pediatric heart failure [40].
Pressure–volume loop computational modeling
Comprehensive computational modeling of the LV-PV relationship has been effectively reported since the mid-1980s, following the extensive work completed by Suga and Sagawa [34,35,36]. In 1986, Burkhoff and Sagawa first developed a comprehensive analytical model for predicting ventricular efficiency, utilizing Windkessel modeling techniques and the understanding of the PV relationship principles previously developed by Suga and Sagawa. With the advancement and routine use of innovative technologies in the early twenty-first century (e.g. the conductance catheter, echocardiography), there was a significant increase in research efforts to determine potential clinical applications [12,13,14,15], improve predictive strategies [16,17,18,19], and refine computational models [41,42,43].
An elastance-based control of an electrical circuit analogue of a closed circulatory system with VAD assistance was developed in 2009 by Yu et al. [42]. Their state-feedback controller was designed to drive a voice coil actuator to track a reference volume, and consequently generate the desired ventricular pressure by means of position and velocity feedbacks. The controller was tested in silico by modifying the load conditions as well as contractility to produce an accurate preload response of the system. The MCS analogue and controller architecture was able to reproduce human circulatory functionality ranging from healthy to unhealthy conditions. Additionally, the MCS control system developed was able to simulate the cardiac functionality during VAD support.
In 2007, Colacino et al. developed a pneumatically-driven mock left ventricle as well as a native left ventricle model and connected each model to a numerical analogue of a closed circulatory system comprised of systemic circulation, a left atrium, and inlet/outlet ventricular valves [43]. The purpose of their research was to investigate the difference between the preload and afterload sensitivity of a pneumatic ventricle, when used as a fluid actuator in a MCS, and that of an elastance-based ventricle computational model. Their research concluded that the elastance-based model performed more realistically when reproducing specific cardiovascular scenarios and that many MCS designs could be considered inadequate if careful consideration is not made to the pumping action of the ventricle. Subsequent in vitro testing utilizing this control approach successfully reproduced an elastance mechanism of a natural ventricle by mimicking preload and afterload sensitivity [25]. Preload was modified by manually changing the fluid content of the closed loop hydraulic circuit, while afterload was varied by increasing or decreasing the systemic arterial resistance within a modified Windkessel model.
Recent advancements in contractility-based control
An MCS simulates the circulatory system by accurately and precisely replicating specific cardiovascular hemodynamic variables, mainly the respective pressure (mmHg) and flow rate (mL/s) for key circulatory constituents, in an integrated bench-top hydraulic circuit [23]. While this human circulatory system model is not an all-inclusive replacement for an in vivo analysis of a cardiac assist device's design, it is an effective method of evaluating fundamental design decisions beforehand by determining its influence on a patient's circulatory hemodynamics in a safe and controlled environment. Published research efforts typically either involve the development of the system [22, 25, 26, 44,45,46] or the dissemination of the results of a particular in vitro investigation [27, 28].
In 2017, Wang et al. was able to replicate the PV relationship with controllable ESPVR and EDPVR curves on a personalized MCS based on an elastance function for use in the evaluation of VADs [21]. The numerical elastance models were scaled to change the slopes of the ESPVR and EDPVR curves to simulate systolic and diastolic dysfunction. The results of their investigation produced experimental PV loops that are consistent with the respective theoretical loops; however, their model only includes a means of controlling preload and contractility, with no afterload control. Their model assumes afterload remains constant regardless of preload changes; due to the Frank-Starling mechanism, the ventricle reached the same LVESV despite an increase in LVEDV and preload.
Jansen-Park et al., 2015, determined the interactive effects between a simulated patient with VAD assistance on an auto-regulated MCS which includes a means of producing the Frank-Starling response and baroreflex [24]. In their study, a preload sensitive MCS was developed to investigate the interaction between the left ventricle and a VAD. Their design was able to simulate the physiological PV relationship for different conditions of preload, afterload, ventricular contractility, and heart rate. The Frank-Starling mechanism (preload sensitivity) was modeled by regulating the stroke volume based on the measured mean diastolic left atrial pressure, afterload was controlled by modifying systemic vascular resistance by means of an electrically controlled proportional valve, and contractility was changed depending on the end diastolic volume. The effects of contractility, afterload, and heart rate on stroke volume were implemented by means of two interpolating three-dimensional look-up tables based on experimental data for each state of the system. The structure of their MCS was based on the design developed by Timms et al. [27]. The results of their investigation revealed a high correlation to published clinical literature.
In 2011, Gregory et al. was able to replicate a non-linear Frank-Starling response in a MCS by modifying preload by means of opening a hydraulic valve attached to the systemic venous chamber [44]. Their research was able to successfully alter left and right ventricular contractility by changing preload to simulate the conditions of mild and severe biventricular heart failure. The EDV offset and a sensitivity gain were manually adjusted through trial and error to produce an appropriate degree of contractility with a fixed ventricular preload. The shape of the ESPVR curve was then modified by decreasing MCS volume until the ventricular volumes approached zero. These efforts, validated using published literature, improved a previously established MCS design developed by Timms et al. [28].
These control architectures were primarily hardware-determined rather than software-driven. In some cases, reproducibility is inhibited because hemodynamic conditions are tuned by manually adjusting parameters until a desired response is achieved. Utilizing a logic-based conditional finite state machine (FSM) and a physical system modeling control approach, a software-driven controller could be developed to respond to explicitly defined preload, afterload, and contractility events. This would enable the regulation of the PV relationship within the LV section of an MCS, without the limitation of dedicated hardware.
Logic-based finite state machine (FSM) and physical system modeling tools
MathWorks' Simulink® is a graphical environment for multi-domain physical system simulation and model-based design [47]. Simulink® provides a graphical user interface, an assortment of solver options, and an extensive block library for accurately modeling dynamic system performance. Stateflow® is a toolbox within Simulink® for constructing combinatorial and sequential decision-based control logic represented in state machine and flow chart structures. Stateflow® offers the ability to create graphical and tabular representations, such as state transition diagrams and truth tables, which can be used to model how a system reacts to time-based conditions and events, as well as external signals. The Simscape™ toolbox, utilized within the Simulink® environment, provides the ability to create models of physical systems that integrate block diagrams representing real-world physical connections. Dynamic models of complex systems, such as those with hydraulic and pneumatic actuation, can be generated and controlled by assembling fundamental components into a schematic-based modeling diagram. An additional toolbox utilized in this approach was Simscape Fluids™, which provides component libraries for modeling and simulating fluid systems. The block library for this toolbox includes all the necessary modules to create systems with a variety of domain elements, such as hydraulic pumps, fluid reservoirs, valves, and pipes. The advantage of using these toolbox libraries is that the blocks are version controlled and conform to regulatory processes that mandate traceable computational modeling tools.
Overview of methodology and model architecture
A method for simulating LV-PV control functionality utilizing explicitly defined preload, afterload, and contractility is needed for cardiovascular intervention assessment. The resulting solution must be capable of being compiled for hardware control of an MCS: logic and architecture compatible with deterministic processing that enable runtime setpoint changes. The approach used was a logic-based conditional finite state machine (FSM) based on the four PV phases that describe left ventricular functionality, developed with a physical system hydraulic plant model using Simulink®. The proposed aggregate model consists of three subsystems: a preload/afterload/contractility-based setpoint calculator ("PV loop critical point determination" section), a FSM controller ("PV loop modeling utilizing a state machine control architecture approach" section), and a hydraulic testing system ("Hydraulic testing model utilizing MathWorks' Simulink® and Simscape™ toolbox" section). The last subsystem acts as the simulated plant to evaluate the control architecture formed by the first two subsystems. The proposed method allows for multiple uses, including the simulation of parameter effects in time and the simulation of personalized data revealed through an individualized clinical PV analysis. This method provides the means to be simulated in silico and can be subsequently compiled for control of in vitro investigations. This provides an MCS with the ability to investigate the pathophysiology for a specific individual by replicating the exact PV relationship defined by their left ventricular functionality, as well as to perform predictive analysis regarding changes in preload, afterload, and contractility with time. Of critical importance were the non-isovolumetric state behaviors: a non-linear EDPVR curve, rate-limited ejection, and an energy-driven model of contraction.
This investigation was developed utilizing Matlab R2017b and a Dell T7500 Precision workstation with 8.0 gigabytes of RAM, a Dual Core Xeon E5606 processor, and a Windows 7 64-bit operating system.
PV loop critical point determination
A preload, afterload, and contractility sensitive computational model was developed utilizing Simulink® for determining the critical points for switching between PV loop states; the four phases described in Fig. 1. These critical points are LV End-Systolic Pressure (LVESP), LV End-Systolic Volume (LVESV), LV End-Diastolic Pressure (LVEDP), LV End-Diastolic Volume (LVEDV), LV End-Isovolumetric Relaxation Pressure (LVEIRP), LV End-Isovolumetric Relaxation Volume (LVEIRV), LV End-Isovolumetric Contraction Pressure (LVEICP), and LV End-Isovolumetric Contraction Volume (LVEICV). They can be resolved from the three equations that describe ESPVR, EDPVR, and Ea. ESPVR is typically described as a linear equation with a positive slope (Ees) and a negative or positive y-intercept; EDPVR can be defined with a third-order polynomial; and Ea is also linear, with a negative slope and a positive y-intercept [13]. Eqs. 1, 2, and 3 define ESPVR, EDPVR, and Ea, respectively.
$$P_{A} = a_{1} V_{A} + a_{0}$$
$$P_{B} = b_{3} V_{B}^{3} + b_{2} V_{B}^{2} + b_{1} V_{B} + b_{0}$$
$$P_{C} = c_{1} V_{C} + c_{0}$$
The point where Eqs. 1 and 3 intersect gives LVESV and LVESP; solving produces Eqs. 4 and 5.
$$LV_{ESV} = \frac{{c_{0} - a_{0} }}{{a_{1} - c_{1} }}$$
$$LV_{ESP} = a_{1} \left( {\frac{{c_{0} - a_{0} }}{{a_{1} - c_{1} }}} \right) + a_{0}$$
Setting the pressure in Eq. 3 equal to zero yields LVEDV, producing Eq. 6.
$$LV_{EDV} = \frac{{ - c_{0} }}{{c_{1} }}$$
Substituting Eq. 6 into Eq. 2 produces LVEDP (Eq. 7).
$$LV_{EDP} = b_{3} \left( {\frac{{ - c_{0} }}{{c_{1} }}} \right)^{3} + b_{2} \left( {\frac{{ - c_{0} }}{{c_{1} }}} \right)^{2} + b_{1} \left( {\frac{{ - c_{0} }}{{c_{1} }}} \right) + b_{0}$$
Due to isovolumetric relaxation,
$$LV_{EIRV} = LV_{ESV}$$
Thus, substituting Eq. 4 into Eq. 2 yields Eq. 8 for LVEIRP.
$$LV_{EIRP} = b_{3} \left( {\frac{{c_{0} - a_{0} }}{{a_{1} - c_{1} }}} \right)^{3} + b_{2} \left( {\frac{{c_{0} - a_{0} }}{{a_{1} - c_{1} }}} \right)^{2} + b_{1} \left( {\frac{{c_{0} - a_{0} }}{{a_{1} - c_{1} }}} \right) + b_{0}$$
Lastly, due to isovolumetric contraction, LVEICV equals LVEDV. The final unknown variable value to complete the four-phase cycle is LVEICP. This is resolved by utilizing an offset value based on LVESP.
$$LV_{EICV} = LV_{EDV}$$
$$LV_{EICP} = LV_{ESP} - offset$$
Figure 2 presents the computational model and example developed in Simulink® to reflect Eqs. 4 through 9, utilized to find the critical points which define the initiation of each phase. Figure 2a reflects the system of equations in this example, capable of being solved in real-time. Figure 2b presents a graph of these equations, with critical points noted. For this example, based on data collected using DataThief on loop 1 of Fig. 1b: a1 = 2.9745, a0 = − 17.133, b3 = 2.6435E−5, b2 = − 4.0598E−3, b1 = 0.16687, b0 = 8.5448, c1 = − 1.7504, and c0 = 185.02. The computational system produces LVEDP = 12.043 mmHg, LVEDV = 105.71 mL, LVESP = 110.13 mmHg, LVESV = 42.785 mL, LVEIRP = 10.323 mmHg, and LVEIRV = 42.785 mL. Using these parameters, LV Stroke Volume (LVSV) = 62.93 mL, LV Ejection Fraction (LVEF) = 0.595, and LV Stroke Work (LVSW) = 6929.9 mmHg*mL. These values are presented in Tables 1 and 2. These coefficient values can be interchanged with clinical values for individualized PV assessment, and can be controlled over time to determine the effects of ventricular functional shifts. Utilizing DataThief [48], an open-source program used to extract data from images, these coefficients can be obtained from a plot of a patient's left ventricular pressure–volume analysis of preload change.
Computational model of example PV loop developed in Simulink™ to reflect Eqs. 4, 5, 6, 7, and 8, to be utilized to find the critical points which define the initiation of phases 1, 2, and 4. a reflects the system of equations in this example, capable of solving in real-time. b presents a graph of these equations with critical points annotated. The driving values can be interchanged with clinical values for individualized PV assessment, as well as can be controlled over time for determining the effects of preload, afterload, and contractility changes. These values are presented in Tables 1 and 2
Table 1 Input parameters for all simulations presented
Table 2 Results for all simulations presented. Note, error was calculated as the absolute value of the difference between the desired and simulated LVESP, LVESV, LVEDP, and LVEDV
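To make the derivation concrete, Eqs. 4 through 8 can be evaluated numerically. The sketch below (Python, used here purely for illustration; the paper's implementation is in Simulink®) applies the example coefficients above and recovers the reported critical points to within rounding of the published values.

```python
# Critical-point calculation for the example PV loop (coefficients from the text).
# ESPVR: P = a1*V + a0;  EDPVR: P = b3*V^3 + b2*V^2 + b1*V + b0;  Ea: P = c1*V + c0
a1, a0 = 2.9745, -17.133
b3, b2, b1, b0 = 2.6435e-5, -4.0598e-3, 0.16687, 8.5448
c1, c0 = -1.7504, 185.02

def edpvr(v):
    """End-diastolic pressure-volume relationship (third-order polynomial)."""
    return b3 * v**3 + b2 * v**2 + b1 * v + b0

lv_esv = (c0 - a0) / (a1 - c1)   # Eq. 4: ESPVR/Ea intersection
lv_esp = a1 * lv_esv + a0        # Eq. 5
lv_edv = -c0 / c1                # Eq. 6: volume-axis intercept of the Ea line
lv_edp = edpvr(lv_edv)           # Eq. 7
lv_eirv = lv_esv                 # isovolumetric relaxation
lv_eirp = edpvr(lv_esv)          # Eq. 8
lv_sv = lv_edv - lv_esv
lv_ef = lv_sv / lv_edv

print(f"LVESV={lv_esv:.3f} mL, LVESP={lv_esp:.2f} mmHg")
print(f"LVEDV={lv_edv:.2f} mL, LVEDP={lv_edp:.3f} mmHg")
print(f"LVEIRP={lv_eirp:.3f} mmHg, SV={lv_sv:.2f} mL, EF={lv_ef:.3f}")
```

Running this reproduces the values listed in Tables 1 and 2 (LVESV ≈ 42.785 mL, LVESP ≈ 110.13 mmHg, LVEDV ≈ 105.71 mL, LVEDP ≈ 12.04 mmHg, LVEIRP ≈ 10.323 mmHg).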
PV loop modeling utilizing a state machine control architecture approach
Utilizing Simulink™ Stateflow®, sequential decision-based control logic represented in Mealy machine structure form was developed to control the transition between LV-PV phases. A Mealy machine is appropriate because this application requires that the output values are determined by both the current state and the current input values. A state transition diagram is presented in Fig. 3. The Variables in the block are parameters that are held constant: Piston cross-sectional area (A), b3, b2, b1, b0, Isovolumetric Rate, Isovolumetric Contraction Offset, Systolic Ejection Rate, and Systolic Ejection Offset. The Inputs are parameters that can change with time: LVESP, LVESV, LVEDV, LVEIRP, time (t), simulated pressure (P), and simulated volume (V). The Outputs are the output variables of the model: Force (F) applied to the piston in Newtons, Cycle_Count, and Heart_Rate [bpm]. The organization of the state transition diagram follows FSM convention: the single curved arrow denotes the initial time-dependent conditions of the model, the oval shapes are the states of the model, the dotted hoop arrows denote the output of the state until a specific condition is met, and the straight arrows are the transition direction once the condition annotated is satisfied. Time (t) is an input variable that discretely changes at the Fundamental Sampling Time of the simulation, \(\frac{1}{1024}{\text{s}}\). Correspondingly, the FSM operates at a sampling rate of 1024 Hz. After every complete cycle, the output variables Cycle_Count and Heart_Rate are calculated. Heart rate is determined based on the Cycle_Time that is updated with the current time at the initiation of Phase 1 for every cycle. Isovolumetric Rate is defined as the rate of change in the output variable, F, during isovolumetric relaxation and contraction. For isovolumetric relaxation, this rate is one-third the magnitude of that used for isovolumetric contraction.
The Isovolumetric Contraction Offset is defined as the value subtracted from the LVEDV to start the initialization of the Phase 2 state to compensate for the radius of curvature created due to transitioning from fill to eject, as well as the means by which end-diastolic pressure and volume are clinically quantified. The Systolic Ejection Rate is defined as the rate of change in the output variable, F, during systolic ejection. Systolic Ejection Offset is defined as the value subtracted from the LVESP to start the initialization of the Phase 3 state, establishing LVEICP.
State transition diagram of the sequential decision-based control logic, represented in Mealy machine structure form, developed to control the transitions between left ventricular PV phases. The Variables, parameters that are held constant, are piston cross-sectional area (A), b3, b2, b1, b0, Isovolumetric Contraction Offset, Systolic Ejection Rate, and Systolic Ejection Offset. The Inputs, parameters that can change with time, are \({\text{LV}}_{\text{ESP}}\), \({\text{LV}}_{\text{ESV}}\), \({\text{LV}}_{\text{EDV}}\), \({\text{LV}}_{\text{EIRP}}\), Time (t), Measured Pressure (P), and Measured Volume (V). The Output, the output variable of the model, is Force (F) applied to the piston in Newtons. The single curved arrow denotes the initial time-dependent conditions of the model. The oval shapes are the five states of the model. The dotted hoop arrow denotes the output of a state until a specific condition is met. The straight arrows indicate the transition direction once the annotated condition is satisfied. The sample rate is 1024 Hz
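The phase-transition logic described above can be sketched in code form. The following Python sketch is illustrative only, not the authors' Stateflow® implementation: the state names follow the four LV-PV phases (the initialization state is omitted), and every threshold and rate in `example_params` is a hypothetical placeholder rather than a tuned value from this work.

```python
# Minimal Mealy-style sketch of the LV-PV phase logic (illustrative only).
# A Mealy machine: the output (force command) depends on BOTH the current
# state and the current inputs (P, V), matching the description in the text.

def next_state_and_force(state, P, V, F, params):
    """One FSM step at the sampling rate; returns (next_state, new_force)."""
    if state == "isovolumetric_contraction":
        if P >= params["LVESP"] - params["systolic_ejection_offset"]:
            return "systolic_ejection", F
        return state, F + params["isovolumetric_rate"]      # ramp force up
    if state == "systolic_ejection":
        if V <= params["LVESV"]:
            return "isovolumetric_relaxation", F
        return state, F + params["systolic_ejection_rate"]
    if state == "isovolumetric_relaxation":
        if P <= params["LVEIRP"]:
            return "diastolic_filling", F
        # relaxation ramps at one-third the contraction magnitude (per the text)
        return state, F - params["isovolumetric_rate"] / 3.0
    # diastolic_filling: transition ahead of LVEDV by the contraction offset
    if V >= params["LVEDV"] - params["isovolumetric_contraction_offset"]:
        return "isovolumetric_contraction", F
    return state, F

# Placeholder values for illustration only (NOT the paper's tuned parameters).
example_params = {
    "LVESP": 110.0, "LVESV": 35.0, "LVEDV": 140.0, "LVEIRP": 5.0,
    "isovolumetric_rate": 3.0, "systolic_ejection_rate": 1.0,
    "systolic_ejection_offset": 10.0, "isovolumetric_contraction_offset": 5.0,
}
```

Calling `next_state_and_force` once per sample reproduces the incremental force ramping the text describes, rather than a step change to a constant desired force.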
Hydraulic testing model utilizing MathWorks' Simulink® and Simscape™ toolbox
A hydraulic testing model was developed for simulating hydraulic performance, as presented in Fig. 4. This system was designed to replicate the dynamics of a force-based piston pump model that drives the pressure within a chamber between two opposing check valves, constituting conditions similar to those observed within the left ventricular portion of an MCS. The Simulink® and Simscape™ block libraries provided all components needed to create a hydraulic testing platform capable of simulating this application. All modified parameter values are noted in the diagram, while any parameters not noted were left at the block's original parameter values. Additionally, for any element parameter denoted as 'Variable', these values were not held constant across all simulations presented. The values utilized in each simulation, not explicitly declared in Fig. 4, are displayed in Table 1.
Presented is the hydraulic testing model developed utilizing Simulink® and Simscape™. This system was designed to replicate the dynamics of a force-based piston pump model that drives the pressure within a chamber between two opposing check valves, conditions reflected within the left ventricular portion of an MCS. All block element parameter values that were modified are noted in the diagram, while any parameters not noted were left standard to the block's original parameter values. Additionally, for any element parameter denoted as 'Variable', these values were not left constant for all simulations presented. The hydraulic testing model is a one-input, four-output system. The input is the force [N] applied to the piston and is regulated by means of the Stateflow® control architecture. The outputs are simulated LVV [mL], simulated LVP [mmHg], simulated AoP [mmHg], and LAP [mmHg]
The hydraulic testing model is a one-input, four-output system. The input is the force [N] applied to the piston and is regulated by means of the Stateflow® control architecture. The outputs are simulated left ventricular volume (LVV) [mL], simulated left ventricular pressure (LVP) [mmHg], simulated aortic pressure (AoP) [mmHg], and left atrial pressure (LAP) [mmHg]. LVP and LVV are utilized by the Stateflow® control logic to govern state transitions while AoP and LAP are used for system fidelity and plotting purposes. The input force is applied to the Ideal Force Source block element which is then directed to an Ideal Translational Motion Sensor which converts an across variable measured between two mechanical translational nodes into a control signal proportional to position. The position signal is then converted into volume [mL] based on a piston diameter of 2 inches, thus a cross-sectional area of π × (2.54 cm)² = 20.27 cm². The input force [N] is also applied to a Translational Hydro-Mechanical Converter which converts hydraulic energy into mechanical energy in the form of translational motion of the converter output member. Two check valves (aortic and mitral), positioned in opposing directions, regulate the fluid flow direction as seen in the left ventricular section of an MCS. A Constant Volume element is positioned between the two check valves to simulate a constant volume filling chamber. A Hydraulic Pressure Sensor is positioned between the opposing check valves to monitor LVP, then outputs the simulated values to the Stateflow® control logic.
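The position-to-volume conversion performed downstream of the Ideal Translational Motion Sensor can be sketched as follows, a minimal Python illustration of the stated 2-inch piston geometry (the function name is hypothetical):

```python
import math

# 2-inch-diameter piston: radius 2.54 cm, area pi * 2.54^2 ≈ 20.27 cm²
PISTON_RADIUS_CM = 2.54
PISTON_AREA_CM2 = math.pi * PISTON_RADIUS_CM ** 2

def position_to_volume_ml(position_cm):
    """Convert sensed piston position [cm] to displaced volume [mL]; 1 cm³ = 1 mL."""
    return position_cm * PISTON_AREA_CM2
```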
Upstream of the mitral valve is a Hydraulic Reference source block governed by the EDPVR curve function with respect to simulated volume, LVV, and increased by an offset of 2 mmHg to ensure proper flow through the mitral check valve. This establishes a dynamic LAP, the initial pressure condition of the left heart. LAP is output from the model here for plotting purposes. Downstream of the aortic valve is a Spring-Loaded Accumulator block. This block element consists of a preloaded spring and a fluid chamber. As the fluid pressure at the inlet of the accumulator becomes greater than the prescribed preload pressure, fluid enters the accumulator and compresses the spring, creating stored hydraulic energy. A decrease in the fluid pressure causes the spring to decompress and eject the stored fluid into the system. The spring motion is restricted by a hard stop when the fluid volume becomes zero, as well as when the fluid volume is at the prescribed capacity of the fluid chamber. These settings are utilized to regulate the compliance, \(\frac{{\Delta {\text{V}}}}{{\Delta {\text{P}}}}\), of the aorta. Immediately following is a Hydraulic Pressure Sensor measuring AoP.
Additionally, a needle valve was positioned downstream of the aortic valve to simulate the resistance to flow attributable to the branching arteries of the aortic arch, as well as to provide the capability to simulate the effects of increasing and decreasing resistance with time. As previously stated, all block element parameter values that were modified are noted in the diagram presented in Fig. 4, while any parameters not noted were left at the block's original parameter values. For any element parameter denoted as 'Variable', these values were not held constant across all simulations presented. For each simulation, these values are displayed in Table 1.
The computational model effectively executed the trials assessing the performance of the FSM architecture. Solver settings and simulated fluid type were held constant throughout the analysis. The results presented were produced with MathWorks' ode14x solver (fixed-step, extrapolation) using a fundamental sampling time of \(\frac{1}{1024}\) s. This solver was chosen to accelerate the simulations and to ensure the resultant model is compatible with deterministic hardware systems. Validation of this solver was performed against a variable-step, variable-order solver (ode15s) to ensure accuracy. The fluid selected is a glycerol/water mixture with a fluid density of 1107.1 kg/m³ and a kinematic viscosity of 3.3 centistokes [49]. These characteristics equate to a fluid temperature of 25 °C or 77 °F.
The input variables utilized for each presented simulation are displayed in Table 1, while the results of each simulation are displayed in Table 2. All simulations were performed utilizing discrete changes, evenly incremented between the designated initial and final LVESP, LVESV, LVEDP, and LVEDV over a 10 s total simulation time. Each discrete variable is controlled by means of a Lookup Table element block that outputs the modified variable value depending on the specific cycle count number. Note that any variable presented as a vector changes with each cycle count, i.e. \([ 1,{ 2},{ 3}, \cdots ,{\text{n}}]\), where the nth value represents the input variable value for the entirety of the corresponding cycle. If a simulation has more cycles than input vector elements, then the system continues with a zero-order hold of the last value.
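The per-cycle lookup behavior, including the zero-order hold of the last vector element, can be sketched as follows (illustrative Python, not the Simulink® Lookup Table block itself):

```python
def cycle_value(vector, cycle_count):
    """Per-cycle lookup: the nth element holds for the entirety of the nth
    cycle (cycle counts start at 1), with a zero-order hold of the last
    element once the simulation runs past the end of the input vector."""
    idx = min(cycle_count - 1, len(vector) - 1)
    return vector[idx]
```

For example, with the vector `[100, 105, 110]` the value 110 is held for every cycle from the third onward.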
The parameters for the Spring-Loaded Accumulator block were developed based on a desired LVP response due to aortic compliance. The desired response consisted of a physiologically correct AoP waveform and a peak-to-peak AoP amplitude of approximately 40 mmHg, corresponding to a normal range of 120/80. The baseline of this response was created at a heart rate of 60 bpm and a compliance of 1. This corresponded to an Isovolumetric Rate of 225 N*sample/s, a Resistance value of 0.03, a Fluid Chamber Capacity of 517.15 mmHg, a Preload Pressure of 0.01 psi, and a Pressure at Full Capacity of 10.01 psi. Given the relationship \(\frac{1}{R*C} = I\), where R is resistance, C is compliance, and I is the impedance, I was held constant for all simulations at I = 33.333. For the simulations that required a heart rate beyond 60 bpm, the Isovolumetric Rate had to be increased accordingly. Utilizing this relationship to sustain a peak-to-peak AoP amplitude of 40 mmHg, the Fluid Chamber Capacity and the Preload Pressure were held constant, while Resistance and Pressure at Full Capacity were modified to produce the desired heart rate while sustaining aortic performance. Lastly, the Initial Volume of Fluid for each simulation was calculated to create an initial LVP corresponding to LVESP. This was done to decrease to one the number of initial cycles necessary to achieve simulation stability. All values utilized for these parameters are presented in Table 1. Error was calculated as the absolute value of the difference between the desired and simulated LVESP, LVESV, LVEDP, and LVEDV.
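The fixed-impedance relationship \(\frac{1}{R*C} = I\) used to rescale resistance can be illustrated with a short sketch (the helper name is hypothetical; the baseline values follow those quoted above):

```python
# I = 1/(R*C) was held constant at 33.333 across all simulations (per the text).
IMPEDANCE = 33.333

def resistance_for_compliance(compliance, impedance=IMPEDANCE):
    """Resistance R that preserves the fixed impedance I for a given compliance C."""
    return 1.0 / (impedance * compliance)
```

At the baseline compliance of 1, this recovers the quoted resistance of 0.03.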
An LV-PV loop; LVP, LAP, and AoP versus time; and volume versus time graphs for the 10 s total simulation time were presented for each simulation. Note that the driving force [N] produced by the FSM can be derived from the presented LVP and LVV plots by means of \({\text{Force }}\left[ {\text{N}} \right] = {\text{Pressure }}\left[ {\text{mmHg}} \right] \times \left[ {1\frac{\text{N}}{{{\text{cm}}^{2} }}/75.00615\,{\text{mmHg}}} \right] \times {\text{Piston area }}\left[ {{\text{cm}}^{2} } \right]\). The piston cross-sectional area is π × (2.54 cm)² = 20.27 cm². The piston position [cm] can additionally be derived from the volume time plot by means of \({\text{Piston position }}\left[ {\text{cm}} \right] = {\text{Volume }}\left[ {{\text{cm}}^{ 3} } \right] \div {\text{Piston area }}\left[ {{\text{cm}}^{2} } \right]\).
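The force conversion above can be written as a short sketch (illustrative Python; the constants follow the quoted conversion factor and piston geometry, and the function name is an assumption):

```python
import math

MMHG_PER_N_PER_CM2 = 75.00615          # 1 N/cm² ≈ 75.00615 mmHg
PISTON_AREA_CM2 = math.pi * 2.54 ** 2  # ≈ 20.27 cm² (2-inch-diameter piston)

def force_from_pressure(p_mmhg, area_cm2=PISTON_AREA_CM2):
    """Piston force [N] corresponding to a chamber pressure [mmHg]:
    Force = Pressure * (1 N/cm² / 75.00615 mmHg) * Piston area."""
    return p_mmhg * (1.0 / MMHG_PER_N_PER_CM2) * area_cm2
```

A systolic pressure of 120 mmHg, for instance, corresponds to roughly 32 N on this piston.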
Computational model verification
The LV-PV loop critical point computational model and FSM approach were effective at driving the hydraulic testing model to produce the characteristic LV-PV relationship, as presented in Fig. 5. The computational model parameters are the same as those presented in Fig. 2. As can be seen from the graph, with known ESPVR, EDPVR, and Ea curves, the computational model successfully provided the correct LVESP, LVESV, LVEDP, LVEDV, LVEIRP, and LVEIRV transition points within the state transition logic to produce the prescribed LV-PV relationship. Table 1 contains all input parameters and Table 2 presents the results of all simulations performed. For each LV-PV loop graph, the initial LV end-systolic and end-diastolic datasets are denoted with circle points. Figure 5a displays the LV-PV loop based on data collected using DataThief on loop 1 of Fig. 1b. The results presented reveal an error between the desired and simulated end-systolic and end-diastolic transition points in the datasets of less than 1 mmHg and 1 mL, respectively.
The outlined approach was effective at simulating the characteristic LV-PV relationship. Preload, afterload, and contractility changes in time were simulated by means of manipulating the input variables of the computational model via evenly-spaced discrete increments that change per cycle count. The LV-PV loop, pressure versus time, and volume versus time graphs are presented for each simulation. Displayed in a is the derived LV-PV loop, based on the computational model parameters determined using DataThief on loop 1 of Fig. 1b and presented in Fig. 2. The parameters for this LV-PV loop constitute the initial conditions for the subsequent simulations. b presents the system correctly responding to a discrete change in preload. c reveals the correct afterload change response to the PV relationship. d displays the correct system response to contractility change. Each simulation was run for a total simulation time of 10 s and the system takes one cycle before it settles. The system functions consistently for every subsequent cycle. The heart rate begins at approximately 60 bpm for each simulation. The reference force [N] produced by the FSM as well as the piston position [cm] can be derived from these time graphs
The system takes one cycle to initialize from a rest state before the control topology functions consistently for the remainder of the simulation. Additionally, the isovolumetric and systolic offsets and rates necessary to achieve this response are noted in Table 1. Figure 5a also presents the LVP, LAP, and AoP versus time and volume versus time graphs for the 10 s total simulation time. The reference force [N] produced by the FSM as well as the piston position [cm] can be derived from these time graphs.
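One setpoint calculation implied by the curve-based model can be reconstructed as the intersection of the linear ESPVR and Ea fits, since the end-systolic point lies on both lines. This sketch is an interpretation, not the authors' implementation; the coefficients used in the usage example are those quoted in this section of the text.

```python
def espvr_ea_intersection(espvr_slope, espvr_int, ea_slope, ea_int):
    """End-systolic (P, V) as the intersection of the two linear fits:
    m1*V + b1 = m2*V + b2  ->  V = (b2 - b1) / (m1 - m2)."""
    v = (ea_int - espvr_int) / (espvr_slope - ea_slope)
    p = espvr_slope * v + espvr_int
    return p, v

# Using the ESPVR (P = 2.9745V - 17.133) and Ea (P = -1.7504V + 185.02)
# coefficients quoted in the text:
lvesp, lvesv = espvr_ea_intersection(2.9745, -17.133, -1.7504, 185.02)
```

With these coefficients the end-systolic point falls near 110 mmHg and 43 mL, consistent with the magnitudes discussed for Fig. 5a.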
Preload, afterload, and contractility changes in time
As presented in Fig. 5b–d, the outlined approach was effective at simulating preload, afterload, and contractility changes in time by means of discretely manipulating the computational model over time. The initial parameters of the computational model are the same as those presented in Fig. 5a and presented in Table 1. Presented for each simulation is the LV-PV loop; LVP, LAP, and AoP versus time; and volume versus time graphs for the 10 s total simulation time.
As shown in Fig. 5b, the system displays the correct preload change response to the PV relationship as displayed in Fig. 1b. The Ea was initially defined by the equation \({\text{P}} = - 1.7504\left( {\text{V}} \right) + 185.02\). The y-axis intercept was increased from 185.02 mmHg at a rate of 5 mmHg per cycle, ending with a y-axis intercept of 215.02 mmHg for the last completed cycle. The results report an error of less than 1 mmHg and 1 mL for all targeted pressures and volumes.
Presented in Fig. 5c, the system reveals the correct afterload change response to the PV relationship as shown in Fig. 1c. Ea is initially defined by the equation \({\text{P}} = - 1.7504\left( {\text{V}} \right) + 185.02\). The y-axis intercept was decreased from 185.02 mmHg at a rate of 15 mmHg per cycle, ending with a y-axis intercept of 110.02 mmHg for the last completed cycle. The slope of Ea was correspondingly changed from − 1.7504 mmHg/mL, concluding with a slope of − 1.0408 mmHg/mL. This rate of change for the Ea slope was derived from the 15 mmHg per cycle rate of decrease in the y-axis intercept so as to maintain a consistent x-intercept, as shown in Fig. 1c. The results indicate an error of less than 1 mmHg and 1 mL for all targeted datasets.
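The constant-x-intercept slope adjustment described above can be verified numerically with a short sketch (the helper name is hypothetical; the coefficients are those quoted in the text):

```python
def ea_slope_for_y_intercept(y_int_new, y_int_0=185.02, slope_0=-1.7504):
    """New Ea slope that keeps the volume-axis (x) intercept fixed while
    the y-intercept steps down, mirroring the afterload change of Fig. 1c."""
    x_int = -y_int_0 / slope_0        # ≈ 105.70 mL for the quoted initial Ea
    return -y_int_new / x_int
```

Stepping the y-intercept down to 110.02 mmHg reproduces the quoted final slope of about −1.0408 mmHg/mL.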
As presented in Fig. 5d, the system displays the correct contractility change response to the PV relationship as revealed in Fig. 1d. The ESPVR curve is initially defined by the equation \({\text{P}} = 2.9745\left( {\text{V}} \right) - 17.133\). The slope of the ESPVR curve was decreased from 2.9745 mmHg/mL, concluding with a slope of 1.2245 mmHg/mL for the last completed cycle. The results report an error of less than 1 mmHg and 1 mL for all targeted pressures and volumes.
Clinical assessment of outlined approach
Figure 6 displays the results of simulating Heart Failure with Normal Ejection Fraction (HFNEF) and the Control, developed by means of a preload reduction analysis conducted in 2008 by Westermann et al. [50] and presented in Fig. 1 of their investigation. The ESPVR, Ea, and EDPVR curve coefficients were developed utilizing DataThief to find the associated LVESP, LVESV, LVEDP, and LVEDV for the initial and final loops, as well as to evaluate the EDPVR curve. These datasets were analyzed over a 10 s total simulation time, and presented for each simulation are the LV-PV loop; LVP, LAP, and AoP versus time; and volume versus time graphs. Both simulations reflect a mean heart rate [bpm] within the range of mean values noted in the reference material. All parameter values are presented in Table 1 and the results are in Table 2.
The outlined approach was effective at simulating Heart Failure with Normal Ejection Fraction (HFNEF) and the Control, developed by means of a preload reduction analysis conducted in 2008 by Westermann et al. [50] and presented in Fig. 1 of their investigation. The ESPVR, Ea, and EDPVR curve coefficients were developed utilizing DataThief to find the associated LVESP, LVESV, LVEDP, and LVEDV for the initial and final loops, as well as to evaluate the EDPVR curve. These datasets were analyzed over a 10 s total simulation time, and presented for each simulation are the LV-PV loop; LVP, LAP, and AoP versus time; and volume versus time graphs. a presents the Control, where the slope and y-intercept of Ea were divided into evenly-spaced increments to constitute 4 intermediate discrete steps between the initial and final cycle parameters. HFNEF is presented in b. The slope and y-intercept of Ea were likewise divided into evenly-spaced increments to constitute 4 intermediate discrete steps between the initial and final cycle parameters. For both simulations, the results produced an error of less than 1 mmHg and 1 mL for all targeted datasets and reflect a mean heart rate [bpm] within the range of mean values noted in the reference material. The reference force [N] produced by the FSM as well as the piston position [cm] can be derived from these time graphs
The Control is presented in Fig. 6a. The ESPVR curve was found to be defined by the equation \({\text{P}} = 1.2407\left( {\text{V}} \right) + 33.857\) and the EDPVR curve was found to be \({\text{P}} = 2.6928 \times 10^{ - 7} \left( V \right)^{3} - 9.3013 \times 10^{ - 6} \left( V \right)^{2} + 0.026968\left( V \right) + 2.9515\). Ea is initially defined by the equation \({\text{P}} = - 1.1365\left( {\text{V}} \right) + 211.17\) and defined by the equation \({\text{P}} = - 1.4501\left( {\text{V}} \right) + 160.11\) for the final cycle. The slope and y-intercept of Ea were divided into evenly-spaced increments to constitute 4 intermediate discrete steps between the initial and final cycle parameters. The results indicate an error of less than 1 mmHg and 1 mL for all targeted datasets.
HFNEF is presented in Fig. 6b. The ESPVR curve was found to be \({\text{P}} = 0.99741\left( {\text{V}} \right) + 72.586\) and the EDPVR curve was found to be \({\text{P}} = 1.4046 \times 10^{ - 5} \left( V \right)^{3} - 2.5351 \times 10^{ - 3} \left( V \right)^{2} + 0.15836\left( V \right) - 0.010234\). Ea is initially defined by the equation \({\text{P}} = - 1.4054\left( {\text{V}} \right) + 235.76\) and defined by the equation \({\text{P}} = - 1.3754\left( {\text{V}} \right) + 160.43\) for the final cycle. The slope and y-intercept of Ea were divided into evenly-spaced increments to constitute 4 intermediate discrete steps between the initial and final cycle parameters. The results produced an error of less than 1 mmHg and 1 mL for all targeted datasets.
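The evenly-spaced increments with 4 intermediate discrete steps used for both the Control and HFNEF simulations can be sketched as follows (an illustrative helper, not the authors' Lookup Table configuration):

```python
def stepped_values(start, end, n_intermediate=4):
    """Initial value, n_intermediate evenly spaced intermediate steps, then
    the final value: n_intermediate + 2 per-cycle values in total."""
    n = n_intermediate + 2
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]

# Ea y-intercepts for the Control simulation (211.17 -> 160.11 mmHg):
control_intercepts = stepped_values(211.17, 160.11)
```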
A novel method for simulating LV-PV control functionality utilizing explicitly defined preload, afterload, and contractility was delivered for cardiovascular intervention assessment. The proposed aggregate model consists of three subsystems: a preload-, afterload-, and contractility-sensitive computational setpoint calculator ("PV loop critical point determination" section), a FSM controller ("PV loop modeling utilizing a state machine control architecture approach" section), and a hydraulic testing system ("Hydraulic testing model utilizing MathWorks' Simulink® and Simscape™ toolbox" section). The computational model provides pressure and volume setpoints based on the coefficients revealed by best-fit equations for ESPVR, EDPVR, and Ea. The acquired setpoints drive the FSM controller to perform the prescribed PV relationship. Then the hydraulic testing system, which reproduces conditions comparable to those found in a left heart MCS with cardiac piston actuation, simulates the PV relationship defined by the inputs to the computational model.
The resulting solution was capable of being compiled for hardware control in an MCS through the architecture and solver type employed; deterministic processing is achievable and runtime setpoint changes can be made. Simulink® and its supplemental product library were effective at developing reproducible clinical conditions, which would be determined through an individualized clinical PV analysis, simulated in silico for this work with the ability to translate to future in vitro investigations. This provides an MCS with the capability to investigate the pathophysiology of a specific individual, with or without VAD support, by reproducing the precise PV relationship defined by their left ventricular functionality.
In silico verification of the LV-PV loop critical point computational model, FSM control architecture, and hydraulic testing system supports this modeling approach as an effective means of simulating the LV-PV relationship. In this work, a novel method for simulating the characteristic EDPVR curve and LAP during diastolic filling was presented. This approach proved to be an effective means of capturing the nuances in those sections of the PV curve that are critical for diastolic operation of mechanical circulatory support systems and not found in prior computational models [15, 41].
As shown in Fig. 5a and Table 2, the computational model was able to create specific points that the FSM could utilize as features governing the transitions between LV-PV states, given a clinical preload analysis similar to Fig. 1b. Additionally, the hydraulic testing model produced a suitable degree of realism for evaluating the feasibility of this methodology, generating realistic conditions including LAP and AoP. The delivered capabilities enable control of the PV relationship beyond that presented in prior work on elastance-based control with respect to dynamic afterload response [21, 24] and software-oriented control [44].
A key result of this investigation is a novel in silico method for simulating LV-PV relationships based on an analysis of a patient's ESPVR, EDPVR, and Ea curves. Displayed in Fig. 6 is the characteristic LV-PV loop of two individuals presented in the research conducted by Westermann et al. [50]. Simulated is Heart Failure with Normal Ejection Fraction (HFNEF) and the Control developed by means of a preload reduction analysis and quantified by means of data capture tools. Both simulations reflect a mean heart rate [bpm] within the range of mean values described in the reference material. This capability enables the utilization of the breadth of published PV curves on various patient types in the literature; illustrating how the digitized data from these graphs can be utilized with the computational model presented. Additionally, this FSM model could be implemented in embedded physiological control applications that are utilizing model predictive control and require a computationally efficient left ventricular simulator.
The limitations of this approach are mainly the ideal hydraulic testing system and the use of anticipatory limits at the transition points of the PV loop. If a force is applied to this computational model of the hydraulic system, the system responds with the corresponding pressure instantaneously within that sample period; no delay or rise time in the actuation components was modeled. This consideration is made in the FSM by increasing force incrementally instead of applying a constant desired force. Some parameters that define the hydraulic system, such as those within the Spring-Loaded Accumulator, are ideal assumptions based on the desired performance of the system. The focus of this work was on a control architecture that can be adapted to a variety of hardware platforms through manipulation of the output signal magnitude and response characteristics. Additionally, pressure sensor feedback is ideal in this modeling approach: the sensor sampling rate was set to 512 Hz, and an ideal sensor with low noise was assumed. Additionally, a manual offset was applied to the transition from diastolic filling to isovolumetric contraction, enabling a ramp in the transition from fill to eject. Moreover, an offset was utilized in the transition from isovolumetric contraction to ejection in order to allow the pressure to slowly increase to the desired LVESP during ejection.
Future work includes a sensitivity analysis regarding resistance, compliance, and force rates. This analysis will be useful in that it will quantify the exact limitations of the hydraulic testing system as well as the range of accuracy of the FSM approach. Isolated in vitro testing of this approach will be conducted on a nested-loop hydraulic system before being incorporated into an MCS for investigating accurate cardiovascular hemodynamic considerations, such as the accuracy of pressure and flow-rate sensor feedback. Additionally, what-if scenarios will be conducted on an MCS in order to create feasible scenarios that a patient may experience.
This research will assist in producing an investigatory method and MCS control logic that will advance the medical community by improving left ventricular in vitro analysis capabilities. The ability of an MCS to replicate the exact PV relationship that defines a pathophysiology allows for a robust in vitro analysis to be completed. This model of ventricular function could also be coupled with aortic and left atrium computational fluid dynamics (CFD) models that require inlet and outlet conditions manifested by the left ventricle. The FSM approach is computationally efficient due to its explicit computation and simple transition logic, which is preferential when small time steps and high-iteration solvers are employed. This efficiency and portability make the outcome of this work applicable to a variety of investigative purposes.
AoP [mmHg]:
aortic pressure
Ea:
arterial elastance
CFD:
computational fluid dynamics
CHF:
congestive heart failure
CO:
cardiac output
CVD:
cardiovascular disease
EDPVR:
end-diastolic pressure–volume relationship
ESPVR:
end-systolic pressure–volume relationship
FSM:
finite state machine
HFNEF:
Heart Failure with Normal Ejection Fraction
LAP [mmHg]:
left atrial pressure
LV:
left ventricular
LVEF:
left ventricular ejection fraction
LVEDP [mmHg]:
left ventricular end-diastolic pressure
LVEDV [mL]:
left ventricular end-diastolic volume
LVEICP [mmHg]:
left ventricular end-isovolumetric contraction pressure
LVEICV [mL]:
left ventricular end-isovolumetric contraction volume
LVEIRP [mmHg]:
left ventricular end-isovolumetric relaxation pressure
LVEIRV [mL]:
left ventricular end-isovolumetric relaxation volume
LVESP [mmHg]:
left ventricular end-systolic pressure
LVESV [mL]:
left ventricular end-systolic volume
LVP [mmHg]:
left ventricular pressure
LV-PV:
left ventricular pressure–volume
LVSV [mL]:
left ventricular stroke volume
LVSW [mmHg*mL]:
left ventricular stroke work
LVV [mL]:
left ventricular volume
MCS:
mock circulatory system
PSM:
patient-specific modeling
SV [mL]:
stroke volume
VAD:
ventricular assist device
V&V:
verification and validation
Benjamin EJ, et al. Heart disease and stroke statistics—2017. Update: a report from the American Heart Association. Circulation. 2017;135(10):e146–603.
Ponikowski P, et al. 2016 ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure—the task force for the diagnosis and treatment of acute and chronic heart failure of the European Society of Cardiology (ESC)—developed with the special contribution of the Heart Failure Association (HFA) of the ESC. Eur Heart J. 2016;37(27):2129–200.
Kochanek KD, Murphy SL, Xu J, et al. National vital statistics reports. Natl Vital Stat Rep. 2015;63(3):1–20.
Mancini D, Colombo PC. Left ventricular assist devices: a rapidly evolving alternative to transplant. J Am Coll Cardiol. 2015;65(23):2542–55.
Mueller C, et al. European Society of Cardiology-Acute Cardiovascular Care Association Position paper on acute heart failure: a call for interdisciplinary care. Eur Heart J Acute Cardiovasc Care. 2017;6(1):81–6.
Yancy CW, et al. 2017 ACC/AHA/HFSA focused update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure. J Am Coll Cardiol. 2017;70(6):776–803.
Gimelli A, et al. Non-invasive cardiac imaging evaluation of patients with chronic systolic heart failure: a report from the European Association of Cardiovascular Imaging (EACVI). Eur Heart J. 2014;35(48):3417–25.
Wake R, Fukuda S, Oe H, Abe Y, Yoshikawa J, Yoshiyam M. Echocardiographic evaluation of left ventricular diastolic function. In: Hot topics in echocardiography. Squeri A, ed. InTech; 2013.
Nagueh SF, et al. Recommendations for the evaluation of left ventricular diastolic function by echocardiography. J Am Soc Echocardiogr. 2009;22(2):107–33.
Jenkins C, Bricknell K, Hanekom L, Marwick TH. Reproducibility and accuracy of echocardiographic measurements of left ventricular parameters using real-time three-dimensional echocardiography. J Am Coll Cardiol. 2004;44(4):878–86.
Lang RM, et al. EAE/ASE recommendations for image acquisition and display using three-dimensional echocardiography. Eur Heart J Cardiovasc Imaging. 2012;13(1):1–46.
Burkhoff D. Pressure–volume loops in clinical research. J Am Coll Cardiol. 2013;62(13):1173–6.
Burkhoff D, Mirsky I, Suga H. Assessment of systolic and diastolic ventricular properties via pressure–volume analysis: a guide for clinical, translational, and basic researchers. Am J Physiol Heart Circ Physiol. 2005;289(2):H501–12.
Sorajja P, et al. SCAI/HFSA clinical expert consensus document on the use of invasive hemodynamics for the diagnosis and management of cardiovascular disease: editorial. Catheter Cardiovasc Interv. 2017;89(7):E233–47.
\title{Hardness Results for the Synthesis of $b$-bounded Petri Nets (Technical Report)} \author{Ronny Tredup} \institute{Universit\"at Rostock, Institut f\"ur Informatik, Theoretische Informatik, Albert-Einstein-Stra\ss e 22, 18059, Rostock } \maketitle
\begin{abstract} Synthesis for a type $\tau$ of Petri nets is the following search problem: For a transition system $A$, find a Petri net $N$ of type $\tau$ whose state graph is isomorphic to $A$, if there is one. To determine the computational complexity of synthesis for types of bounded Petri nets we investigate their corresponding decision version, called feasibility. We show that feasibility is NP-complete for (pure) $b$-bounded P/T-nets if $b\in \mathbb{N}^+$. We extend (pure) $b$-bounded P/T-nets by the additive group $\mathbb{Z}_{b+1}$ of integers modulo $(b+1)$ and show feasibility to be NP-complete for the resulting type. To decide if $A$ has the \emph{event state separation property} is shown to be NP-complete for (pure) $b$-bounded and group extended (pure) $b$-bounded P/T-nets. Deciding if $A$ has the \emph{state separation property} is proven to be NP-complete for (pure) $b$-bounded P/T-nets. \end{abstract}
\section{Introduction}
\emph{Synthesis} for a Petri net type $\tau$ is the task to find, for a given transition system (TS, for short) $A$, a Petri net $N$ of this type such that its state graph is isomorphic to $A$ if such a net exists. The decision version of synthesis is called $\tau$-\emph{feasibility}. It asks whether for a given TS $A$ a Petri net $N$ of type $\tau$ exists whose state graph is isomorphic to $A$.
Synthesis for Petri nets has been investigated and applied for many years and in numerous fields: It is used to extract concurrency and distributability data from sequential specifications like transition systems or languages \cite{DBLP:journals/fac/BadouelCD02}. Synthesis has applications in the field of process discovery to reconstruct a model from its execution traces \cite{DBLP:books/daglib/0027363}. In \cite{DBLP:journals/deds/HollowayKG97}, it is employed in supervisory control for discrete event systems and in \cite{DBLP:journals/tcad/CortadellaKKLY97} it is used for the synthesis of speed-independent circuits. This paper deals with the computational complexity of synthesis for types of \emph{$b$-bounded} Petri nets, that is, Petri nets for which there is a positive integer $b$ restricting the number of tokens on every place in any reachable marking.
In \cite{DBLP:conf/tapsoft/BadouelBD95,DBLP:series/txtcs/BadouelBD15}, synthesis has been shown to be solvable in polynomial time for bounded and pure bounded P/T-nets. The approach provided in \cite{DBLP:conf/tapsoft/BadouelBD95,DBLP:series/txtcs/BadouelBD15} guarantees a (pure) bounded P/T-net to be output if such a net exists. Unfortunately, it does not work for preselected bounds. In fact, in \cite{DBLP:journals/tcs/BadouelBD97} it has been shown that feasibility is NP-complete for $1$-bounded P/T-nets, that is, if the bound $b=1$ is chosen \emph{a priori}. In \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}, it was proven that this remains true even for strongly restricted input TSs. In contrast, \cite{DBLP:conf/stacs/Schmitt96} shows that it suffices to extend pure $1$-bounded P/T-nets by the additive group $\mathbb{Z}_2$ of integers modulo $2$ to bring the complexity of synthesis down to polynomial time. The work of \cite{TR2019a} confirms also for other types of $1$-bounded Petri nets that the presence or absence of interactions between places and transitions tips the scales of synthesis complexity. However, some questions in the area of synthesis for Petri nets are still open. Recently, in \cite{DBLP:conf/concur/SchlachterW17} the complexity status of synthesis for (pure) $b$-bounded P/T-nets, $2\leq b$, has been reported as unknown. Furthermore, it has not yet been analyzed whether extending (pure) $b$-bounded P/T-nets by the group $\mathbb{Z}_{b+1}$ also provides a tractable superclass if $b\geq 2$.
In this paper, we show that feasibility for (pure) $b$-bounded P/T-nets, $b\in \mathbb{N}^+$, is NP-complete. This makes their synthesis NP-hard. Moreover, we introduce (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets, $b\geq 2$. This type originates from (pure) $b$-bounded P/T-nets by adding interactions between places and transitions simulating addition of integers modulo $b+1$. This extension is a natural generalization of Schmitt's approach \cite{DBLP:conf/stacs/Schmitt96}, which does this for $b=1$. In contrast to the result of \cite{DBLP:conf/stacs/Schmitt96}, this paper shows that feasibility for (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets remains NP-complete if $b\geq 2$.
To prove the NP-completeness of feasibility we use its well-known close connection to the so-called \emph{event state separation property} (ESSP) and \emph{state separation property} (SSP). In fact, a TS $A$ is feasible with respect to a Petri net type if and only if it has the type-related ESSP \emph{and} SSP \cite{DBLP:series/txtcs/BadouelBD15}. The question of whether a TS $A$ has the ESSP or the SSP also defines decision problems. The possibility to decide efficiently if $A$ has at least one of both properties serves as a quick-fail pre-processing mechanism for feasibility. Moreover, if $A$ has the ESSP then synthesizing Petri nets up to language equivalence is possible \cite{DBLP:series/txtcs/BadouelBD15}. This makes the decision problems ESSP and SSP worth studying. In \cite{DBLP:journals/tcs/Hiraishi94}, both problems have been shown to be NP-complete for pure $1$-bounded P/T-nets. This has been confirmed for almost trivial inputs in \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}.
This paper shows feasibility, ESSP and SSP to be NP-complete for $b$-bounded P/T-nets, $b\in \mathbb{N}^+$. Moreover, feasibility and ESSP are shown to remain NP-complete for (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets if $b\geq 2$. Interestingly, \cite{T2019b} shows that SSP is decidable in polynomial time for (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets, $b\in \mathbb{N}^+$. So far, this is the first net family where the provable computational complexity of SSP differs from that of feasibility and ESSP.
All presented NP-completeness proofs are based on a reduction from the monotone one-in-three 3-SAT problem, which is known to be NP-complete \cite{DBLP:journals/dcg/MooreR01}. Every reduction starts from a given boolean input expression $\varphi$ and results in a TS $A_\varphi$. The expression $\varphi$ belongs to monotone one-in-three 3-SAT if and only if $A_\varphi$ has the (target) property ESSP, SSP or feasibility, respectively.
This paper is organized as follows: Section~\ref{sec:preliminaries} introduces the formal definitions and notions. Section~\ref{sec:unions} introduces the concept of unions applied by our proofs. Section~\ref{sec:hardness_results} provides the reductions and proves their functionality. A short conclusion completes the paper. This paper is an extended abstract of the technical report \cite{T2019b}. The proofs that had to be removed due to space limitations are given in \cite{T2019b}.
\section{Preliminaries}\label{sec:preliminaries}
See Figure~\ref{fig:types} and Figure~\ref{fig:example_prelis} for an example of the notions defined in this section. A \emph{transition system} (TS for short) $A = (S,E,\delta)$ consists of finite disjoint sets $S$ of states and $E$ of events and a partial \emph{transition function} $\delta: S\times E\rightarrow S$. Usually, we think of $A$ as an edge-labeled directed graph with node set $S$ where every triple $\delta(s,e)=s'$ is interpreted as an $e$-labeled edge $s\edge{e}s'$, called \emph{transition}. We say that an event $e$ \emph{occurs} at state $s$ if $\delta(s,e)=s'$ for some state $s'$ and abbreviate this with $s\edge{e}$. This notation is extended to words $w'=wa$, $w\in E^*, a\in E$ by inductively defining $s\edge{\varepsilon}s$ for all $s\in S$ and $s\edge{w'}s''$ if and only if $s\edge{w}s'$ and $s'\edge{a}s''$. If $w\in E^*$ then $s\edge{w}$ denotes that there is a state $s'\in S$ such that $s\edge{w}s'$. An \emph{initialized} TS $A=(S,E,\delta, s_0)$ is a TS with an initial state $s_0 \in S$ where every state is \emph{reachable}: $\forall s\in S, \exists w\in E^*: s_0\edge{w}s$. The language of $A$ is the set $L(A)=\{w\in E^* \mid s_{0}\edge{w}\}$. In the remainder of this paper, if not explicitly stated otherwise, we assume all TSs to be initialized and we refer to the components of an (initialized) TS $A$ consistently by $A=(S_A, E_A, \delta_A, s_{0,A})$.
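As a concrete illustration (our own encoding, not part of the paper), the partial transition function $\delta$ of a TS and its extension to words can be sketched as a Python dictionary; all identifiers below are hypothetical.

```python
# Hypothetical sketch: a TS with delta stored as a dict mapping
# (state, event) -> state; missing keys mean the event does not
# occur at that state.
delta = {
    ("s0", "a"): "s1",
    ("s1", "b"): "s2",
    ("s2", "a"): "s0",
}

def run(delta, s, word):
    """Extension of delta to words: the state reached from s by
    word, or None if some event along the way does not occur."""
    for e in word:
        s = delta.get((s, e))
        if s is None:
            return None
    return s

# a word w belongs to the language L(A) iff it runs from the
# initial state, i.e. run(delta, s0, w) is not None
assert run(delta, "s0", ["a", "b", "a"]) == "s0"
assert run(delta, "s0", ["b"]) is None
```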
The following notion of \emph{types of nets} has been developed in~\cite{DBLP:series/txtcs/BadouelBD15}. It allows us to uniformly capture several Petri net types in one general scheme. Every introduced Petri net type can be seen as an instantiation of this general scheme. A type of nets $\tau$ is a TS $\tau=(S_\tau, E_\tau,\delta_\tau)$ and a Petri net $N = (P, T, f, M_0)$ of type $\tau$, $\tau$-net for short, is given by finite and disjoint sets $P$ of places and $T$ of transitions, an initial marking $M_0: P\longrightarrow S_\tau$, and a flow function $f: P \times T \rightarrow E_\tau$. The meaning of a $\tau$-net is to realize a certain behavior by cascades of firing transitions. In particular, a transition $t \in T$ can fire in a marking $M: P \longrightarrow S_\tau$ and thereby produces the marking $M': P \longrightarrow S_\tau$ if for all $p\in P$ the transition $M(p)\edge{f(p,t)}M'(p)$ exists in $\tau$. This is denoted by $M \edge{t} M'$. Again, this notation extends to sequences $\sigma \in T^*$. Accordingly, $RS(N)=\{M \mid \exists \sigma\in T^*: M_0\edge{\sigma}M \}$ is the set of all reachable markings of $N$. Given a $\tau$-net $N=(P, T, f,M_0)$, its behavior is captured by the TS $A_N=(RS(N), T,\delta, M_0)$, called the state graph of $N$, where for every reachable marking $M$ of $N$ and transition $t \in T$ with $M \edge{t} M'$ the transition function $\delta$ of $A_N$ is defined by $\delta(M,t) = M'$.
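To make the firing rule concrete, the following sketch (our own naming, restricted to the plain $b$-bounded behaviour where an event $(m,n)$ removes $m$ tokens and then adds $n$) fires transitions and collects the reachable markings $RS(N)$ by exhaustive search.

```python
def fire(b, flow, marking, t):
    """Fire transition t in a marking (dict: place -> tokens in 0..b).
    flow maps (place, t) to a pair (m, n): remove m, then add n.
    Returns the successor marking, or None if t is not enabled."""
    succ = {}
    for p, s in marking.items():
        m, n = flow[(p, t)]
        if s < m or s - m + n > b:      # transition undefined in tau
            return None
        succ[p] = s - m + n
    return succ

def reachable(b, transitions, flow, m0):
    """All reachable markings RS(N), found by a simple search."""
    seen = {tuple(sorted(m0.items()))}
    stack = [m0]
    while stack:
        m = stack.pop()
        for t in transitions:
            m2 = fire(b, flow, m, t)
            if m2 is not None:
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    stack.append(m2)
    return seen

# one place p, one transition t with f(p, t) = (0, 1), bound b = 2:
# t can fire twice, giving the three markings 0, 1 and 2.
rs = reachable(2, ["t"], {("p", "t"): (0, 1)}, {"p": 0})
assert len(rs) == 3
```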
The following notion of $\tau$-regions allows us to define the type-related ESSP and SSP. If $\tau$ is a type of nets then a $\tau$-region of a TS $A$ is a pair of mappings $(sup, sig)$, where $sup: S_A \longrightarrow S_\tau$ and $sig: E_A\longrightarrow E_\tau$, such that, for each transition $s\edge{e}s'$ of $A$, we have that $sup(s)\edge{sig(e)}sup(s')$ is a transition of $\tau$. Two distinct states $s,s'\in S_A$ define an \emph{SSP atom} $(s,s')$, which is said to be $\tau$-solvable if there is a $\tau$-region $(sup, sig)$ of $A$ such that $sup(s)\not=sup(s')$. An event $e\in E_A$ and a state $s\in S_A$ at which $e$ does not occur, that is $\neg s\edge{e}$, define an \emph{ESSP atom} $(e,s)$. The atom is said to be $\tau$-solvable if there is a $\tau$-region $(sup, sig)$ of $A$ such that $\neg sup(s)\edge{sig(e)}$. A $\tau$-region solving an ESSP or an SSP atom $(x,y)$ is a \emph{witness} for the $\tau$-solvability of $(x,y)$.
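The region condition and the two kinds of separation atoms translate directly into checks. The sketch below (hypothetical naming) encodes the type's transition function as a dict `tau_delta` from (state, event) pairs of $\tau$ to successor states.

```python
def is_region(ts_delta, tau_delta, sup, sig):
    """(sup, sig) is a tau-region iff every transition s --e--> s'
    of A yields a transition sup(s) --sig(e)--> sup(s') of tau."""
    return all(tau_delta.get((sup[s], sig[e])) == sup[t]
               for (s, e), t in ts_delta.items())

def solves_ssp(sup, s, t):
    """A region solves the SSP atom (s, t) iff sup(s) != sup(t)."""
    return sup[s] != sup[t]

def solves_essp(tau_delta, sup, sig, e, s):
    """A region solves the ESSP atom (e, s) iff sig(e) does not
    occur at sup(s) in tau."""
    return (sup[s], sig[e]) not in tau_delta

# tau^1_1 (pure 1-bounded P/T-nets) written out as an explicit dict:
tau = {(0, (0, 1)): 1, (1, (1, 0)): 0, (0, (0, 0)): 0, (1, (0, 0)): 1}
A = {("s0", "a"): "s1"}                       # the TS s0 --a--> s1
sup, sig = {"s0": 0, "s1": 1}, {"a": (0, 1)}
assert is_region(A, tau, sup, sig)
assert solves_ssp(sup, "s0", "s1")
assert solves_essp(tau, sup, sig, "a", "s1")  # a does not occur at s1
```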
A TS $A$ has the $\tau$-ESSP ($\tau$-SSP) if all its ESSP (SSP) atoms are $\tau$-solvable. Naturally, $A$ is said to be $\tau$-feasible if it has the $\tau$-ESSP and the $\tau$-SSP. The following fact is well known from~\cite[p.161]{DBLP:series/txtcs/BadouelBD15}: A set $\mathcal{R}$ of $\tau$-regions of $A$ contains a witness for all ESSP and SSP atoms if and only if the \emph{synthesized $\tau$-net} $N^{\mathcal{R}}_A=(\mathcal{R} , E_A, f, M_0)$ has a state graph that is isomorphic to $A$. The flow function of $N^{\mathcal{R}}_A$ is defined by $f((sup, sig), e)= sig(e)$ and its initial marking is $M_0((sup,sig))=sup(s_{0,A})$ for all $(sup, sig) \in \mathcal{R}, e\in E_A$ . The regions of $\ensuremath{\mathcal{R}}$ become places and the events of $E_A$ become transitions of $N^{\mathcal{R}}_A$. Hence, for a $\tau$-feasible TS $A$ where $\mathcal{R}$ is known, we can synthesize a net $N$ with state graph isomorphic to $A$ by constructing $N^{\mathcal{R}}_A$.
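Given a set $\mathcal{R}$ of regions, the construction of the synthesized net $N^{\mathcal{R}}_A$ is mechanical; a sketch with our own naming, representing each region as a pair of `sup` and `sig` dicts:

```python
def synthesized_net(regions, events, s0):
    """Build N^R_A: regions become places (indexed 0..|R|-1), the
    events of A become transitions, f(r, e) = sig(e) and
    M0(r) = sup(s0) for every region r = (sup, sig)."""
    places = range(len(regions))
    flow = {(p, e): regions[p][1][e] for p in places for e in events}
    m0 = {p: regions[p][0][s0] for p in places}
    return list(places), list(events), flow, m0

# one region of the one-transition TS s0 --a--> s1:
region = ({"s0": 0, "s1": 1}, {"a": (0, 1)})
places, trans, flow, m0 = synthesized_net([region], ["a"], "s0")
assert flow[(0, "a")] == (0, 1) and m0[0] == 0
```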
\begin{figure}
\caption{ The types $\tau^2_0,\tau^2_1,\tau^2_2$ and $\tau^2_3$. $\tau^2_0$ is sketched by the $(m,n)$-labeled transitions where edges with different labels represent different transitions. Discard from $\tau^2_0$ the $(1,1)$, $(1,2)$, $(2,1)$ and $(2,2)$ labeled transitions to get $\tau^2_1$ and add for $i\in \{0,1,2\}$ the $i$-labeled transitions and remove $(0,0)$ to have $\tau^2_2$. Discarding $(1,1),(1,2),(2,1),(2,2)$ leads from $\tau^2_2$ to $\tau^2_3$. }
\label{fig:types}
\end{figure}
In this paper, we deal with the following $b$-bounded types of Petri nets: \begin{enumerate} \item The type of \emph{$b$-bounded P/T-nets} is defined by $\tau^b_0=(\{0,\dots, b\}, \{0,\dots, b\}^2,\delta_{\tau^b_0})$ where for $s\in S_{\tau^b_0}$ and $(m,n)\in E_{\tau^b_0}$ the transition function is defined by $\delta_{\tau^b_0}(s,(m,n))=s-m+n$ if $s\geq m$ and $ s-m+n \leq b$, and undefined otherwise.
\item The type of \emph{pure $b$-bounded P/T-nets} is a restriction of $\tau^b_0$-nets that discards all events $(m,n)$ from $E_{\tau^b_0}$ where both, $m$ and $n$, are positive. To be exact, $\tau^b_1=(\{0,\dots, b\}, E_{\tau^b_0} \setminus \{(m,n) \mid 1 \leq m,n \leq b\}, \delta_{\tau^b_1})$, and for $s\in S_{\tau^b_1}$ and $e\in E_{\tau^b_1}$ we have $\delta_{\tau^b_1}(s,e)=\delta_{\tau^b_0}(s,e)$.
\item The type of \emph{$\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets} originates from $\tau^b_0$ by extending the event set $E_{\tau^b_0}$ with the elements $0,\dots, b$. The transition function additionally simulates the addition modulo $(b+1)$. More exactly, this type is defined by $\tau^b_2=(\{0,\dots, b\}, (E_{\tau^b_0}\setminus \{(0,0)\}) \cup \{0,\dots, b\}, \delta_{\tau^b_2})$ where for $s\in S_{\tau^b_2}$ and $e\in E_{\tau^b_2}$ we have that $\delta_{\tau^b_2}(s,e)=\delta_{\tau^b_0}(s,e)$ if $e\in E_{\tau^b_0}$ and, otherwise, $\delta_{\tau^b_2}(s,e)=(s+e) \text{ mod } (b+1)$.
\item The type of \emph{$\mathbb{Z}_{b+1}$-extended pure $b$-bounded P/T-nets} is a restriction of $\tau^b_2$ being defined by $\tau^b_3=(\{0,\dots, b\}, E_{\tau^b_2}\setminus \{(m,n) \mid 1 \leq m,n \leq b\}, \delta_{\tau^b_3})$ where for $s\in S_{\tau^b_3}$ and $e\in E_{\tau^b_3}$ we have that $\delta_{\tau^b_3}(s,e)=\delta_{\tau^b_2}(s,e)$. \end{enumerate}
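The four transition functions differ only in which events they admit. The following sketch is our own encoding (`kind` in $\{0,1,2,3\}$ selects $\tau^b_{kind}$, pair events are tuples, group events are plain integers) and returns `None` where the function is undefined.

```python
def delta_type(kind, b, s, e):
    """Transition function of tau^b_kind at state s in {0, ..., b}."""
    if isinstance(e, tuple):                  # event (m, n)
        m, n = e
        if kind in (1, 3) and m >= 1 and n >= 1:
            return None                       # pure: no (m, n) with m, n >= 1
        if kind in (2, 3) and (m, n) == (0, 0):
            return None                       # group-extended: (0, 0) removed
        return s - m + n if s >= m and s - m + n <= b else None
    if kind in (2, 3) and 0 <= e <= b:        # group event: add modulo b+1
        return (s + e) % (b + 1)
    return None                               # tau^b_0, tau^b_1: no group events

assert delta_type(0, 2, 1, (1, 2)) == 2       # 1 - 1 + 2 = 2 <= b
assert delta_type(1, 2, 1, (1, 2)) is None    # not allowed in the pure type
assert delta_type(2, 2, 2, 2) == 1            # (2 + 2) mod 3
assert delta_type(0, 2, 0, (1, 0)) is None    # s < m
```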
Notice that the type $\tau^1_3$ coincides with Schmitt's type for which the considered decision problems and synthesis become tractable \cite{DBLP:conf/stacs/Schmitt96}. Moreover, in \cite{TR2019a} it has been shown that $\tau^1_2$, a generalization of $\tau^1_3$, allows polynomial time synthesis, too. Hence, in the following, if not explicitly stated otherwise, for $\tau\in \{\tau^b_0,\tau^b_1\}$ we let $b\in \mathbb{N}^+$ and for $\tau\in \{\tau^b_2,\tau^b_3\}$ we let $2\leq b\in \mathbb{N}$. If $\tau\in \{\tau^b_0,\tau^b_1, \tau^b_2,\tau^b_3\}$ and if $(sup, sig)$ is a $\tau$-region of a TS $A$ then for $e\in E_A$ we define $sig^-(e)=m$ and $sig^+(e)=n$ and $\vert sig(e)\vert =0$ if $sig(e)=(m,n)\in E_\tau$, respectively $sig^-(e)=sig^+(e)=0$ and $\vert sig(e)\vert =sig(e)$ if $sig(e)\in \{0,\dots, b\}$.
\begin{figure}\label{fig:example_prelis}
\end{figure}
The observations of the next Lemma are used to simplify our proofs:
\begin{lemma}\label{lem:observations} Let $\tau \in \{\tau^b_0, \tau^b_1, \tau^b_2, \tau^b_3\}$ and $A$ be a TS. \begin{enumerate} \item\label{lem:sig_summation_along_paths} Two mappings $sup: S_A\longrightarrow S_\tau$ and $sig: E_A\longrightarrow E_\tau$ define a $\tau$-region of $A$ if and only if for every word $w=e_1\dots e_\ell \in E_A^*$ and state $s_0\in S_A$ the following statement is true: If $s_{0}\edge{e_1} \dots \edge{e_\ell} s_\ell$, then $sup(s_{i})=sup(s_{i-1})-sig^-(e_i)+sig^+(e_i)+\vert sig(e_i)\vert$ for $i\in \{1,\dots, \ell\}$, where for $ \tau\in \{\tau^b_2, \tau^b_3\}$ this equation is considered modulo $(b+1)$. That is, every region $(sup, sig)$ is implicitly completely defined by the signature $sig$ and the support of the initial state: $sup(s_{0,A})$.
\item\label{lem:absolute_value} If $s_{0}, s_{1},\dots, s_{b}\in S_A$, $e\in E_A$ and $s_{0}\edge{e} \dots \edge{e} s_b$ then a $\tau$-region $(sup, sig)$ of $A$ satisfies $sig(e)= (m,n)$ with $m\not=n$ if and only if $(m,n) \in \{(1,0),(0,1)\}$. If $sig(e)=(0,1)$ then $sup(s_{0})=0$ and $sup(s_b)=b$. If $sig(e)=(1,0)$ then $sup(s_0)=b$ and $sup(s_b)=0$. \end{enumerate} \end{lemma}
\section{The Concept of Unions}\label{sec:unions}
For our reductions, we use the technique of \emph{component design} \cite{DBLP:books/fm/GareyJ79}. Every implemented constituent is a TS locally ensuring the satisfaction of some constraints. Commonly, all constituents are finally joined together in a target instance (TS) such that all required constraints are properly globally translated. However, the concept of unions saves us the need to actually create the target instance:
If $A_0, \dots, A_n$ are TSs with pairwise disjoint states (but not necessarily disjoint events) then $U(A_0, \dots, A_n)$ is their \emph{union} with set of states $S_U=\bigcup_{i=0}^n S_{A_i}$ and set of events $E_U=\bigcup_{i=0}^n E_{A_i}$. For a flexible formalism, we allow to build unions recursively: Firstly, we identify every TS $A$ with the union containing only $A$, that is, $A = U(A)$. Next, if $U_1= U(A^1_0,\dots,A^1_{n_1}), \dots, U_m=U(A^m_0,\dots,A^m_{n_m})$ are unions then $U(U_1, \dots, U_m)$ is the evolved union $U(A^1_0, \dots, A^1_{n_1},\dots, A^m_0, \dots, A^m_{n_m})$.
The concepts of regions, SSP, and ESSP are transferred to unions $U = U(A_0, \dots, A_n)$ as follows: A $\tau$-region $(sup, sig)$ of $U$ consists of $sup: S_U \rightarrow S_\tau$ and $sig: E_U \rightarrow E_\tau$ such that, for all $i \in \{0, \dots, n\}$, the projection $sup_i(s) = sup(s), s \in S_{A_i}$ and $sig_i(e) = sig(e), e \in E_{A_i}$ defines a region $(sup_i, sig_i)$ of $A_i$. Then, $U$ has the $\tau$-SSP if for all distinct states $s, s' \in S_U$ of the \emph{same} TS $A_i$ there is a $\tau$-region $(sup,sig)$ of $U$ with $sup(s) \not= sup(s')$. Moreover, $U$ has the $\tau$-ESSP if for all events $e \in E_U$ and all states $s \in S_U$ with $\neg s\edge{e}$ there is a $\tau$-region $(sup,sig)$ of $U$ where $sup(s) \edge{sig(e)}$ does not hold. We say $U$ is $\tau$-feasible if it has the $\tau$-SSP and the $\tau$-ESSP. In the same way, $\tau$-SSP and $\tau$-ESSP are translated to the state and event sets $S_U$ and $E_U$.
To merge a union $U = U(A_0, \dots, A_n)$ into a single TS, we define the joining $A(U)$ as the TS $A(U) = (S_U \cup Q, E_U \cup W \cup Y , \delta, q_0 )$ with additional connector states $Q=\{ q_0, \dots, q_n\}$ and fresh events $W=\{w_1, \dots, w_n\}$, $Y=\{ y_0, \dots, y_n\}$ connecting the individual TSs of $U$ by
\[\delta(s,e) = \begin{cases} s_{0,A_i}, & \text{if } s = q_i \text{ and } e=y_i \text{ and } 0 \leq i \leq n,\\ q_{i+1}, & \text{if } s = q_i \text{ and } e=w_{i+1} \text{ and } 0 \leq i \leq n-1,\\ \delta_i(s,e), & \text{if } s \in S_{A_i} \text{ and } e \in E_{A_i} \text{ and } 0 \leq i \leq n \end{cases} \]
Hence, $A(U)$ puts the connector states into a chain with the events from $W$ and links the initial states of TSs from $U$ to this chain using events from $Y$. For example, the upper part of Figure~\ref{fig:example_prelis} shows $A(U)$ where $U=U(A, A_{N^{\mathcal{R}_1}_A},A_{N^{\mathcal{R}_2}_A})$.
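The joining can be sketched as follows (hypothetical encoding: each TS is a pair of its transition dict and its initial state; connector states and the fresh events are tagged tuples):

```python
def join_union(tss):
    """Build A(U): chain connector states q_0, ..., q_n with fresh
    events w_1, ..., w_n and hang the i-th TS below q_i via y_i."""
    delta = {}
    for i, (d, s0) in enumerate(tss):
        delta.update(d)                               # keep A_i's edges
        delta[(("q", i), ("y", i))] = s0              # q_i --y_i--> s_{0,A_i}
        if i + 1 < len(tss):
            delta[(("q", i), ("w", i + 1))] = ("q", i + 1)
    return delta, ("q", 0)                            # initial state q_0

A0 = ({("s0", "a"): "s1"}, "s0")
A1 = ({("t0", "a"): "t1"}, "t0")                      # events may be shared
joined, q0 = join_union([A0, A1])
assert joined[(("q", 0), ("w", 1))] == ("q", 1)
assert joined[(("q", 1), ("y", 1))] == "t0"
assert q0 == ("q", 0)
```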
In \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}, we have shown that a union $U$ is a useful vehicle to investigate whether $A(U)$ has the $\tau$-feasibility, the $\tau$-ESSP and the $\tau$-SSP if $\tau=\tau^1_1$. The following lemma generalizes this observation for $\tau\in \{ \tau^b_0, \tau^b_1, \tau^b_2, \tau^b_3\}$:
\begin{lemma}\label{lem:union_validity} Let $\tau \in \{\tau^b_0, \tau^b_1, \tau^b_2, \tau^b_3\}$. If $U = U(A_0, \dots, A_n)$ of TSs $A_0, \dots, A_n$ is a union such that for every event $e\in E_U$ there is a state $s\in S_U$ with $\neg s\edge{e}$ then $U$ has the $\tau$-ESSP, respectively the $\tau$-SSP, if and only if $A(U)$ has the $\tau$-ESSP, respectively the $\tau$-SSP. \end{lemma}
\section{Main Result}\label{sec:hardness_results}
\begin{theorem}\label{the:hardness_results} \begin{enumerate} \item\label{the:hardness_results_essp} If $\tau \in \{\tau^b_0, \tau^b_1,\tau^b_2, \tau^b_3\}$ then to decide if a TS $A$ is $\tau$-feasible or has the $\tau$-ESSP is NP-complete. \item\label{the:hardness_results_ssp} If $\tau \in \{\tau^b_0, \tau^b_1\}$ then deciding whether a TS $A$ has the $\tau$-SSP is NP-complete. \end{enumerate} \end{theorem}
The proof of Theorem~\ref{the:hardness_results} is based on polynomial-time reductions of the cubic monotone one-in-three $3$-SAT problem to $\tau$-ESSP, $\tau$-feasibility and $\tau$-SSP, respectively. The input for this decision problem is a boolean expression $\varphi=\{C_0, \dots, C_{m-1}\}$ with $3$-clauses $C_i = \{X_{i,0}, X_{i,1}, X_{i,2}\}$ containing unnegated boolean variables $X_{i,0}, X_{i,1}, X_{i,2}$. $V(\varphi)$ denotes the set of all variables of $\varphi$. Every element $X\in V(\varphi)$ occurs in exactly three clauses, implying that $V(\varphi)=\{X_0,\dots, X_{m-1}\}$. Given $\varphi$, cubic monotone one-in-three $3$-SAT asks if there is a one-in-three model $M$ of $\varphi$. $M$ is a subset of $V(\varphi)$ such that $\vert M \cap C_i \vert =1$ for all $i\in \{0,\dots,m-1\}$.
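A one-in-three model is easy to check; the sketch below (our own naming) uses the instance $\varphi_0$ that serves as the paper's running example, with $X_i$ written as the integer $i$.

```python
def is_one_in_three_model(clauses, model):
    """M is a one-in-three model iff |M intersect C| = 1 for every clause C."""
    return all(len(set(c) & set(model)) == 1 for c in clauses)

# phi_0 from the paper's running example, clauses C_0, ..., C_5:
phi0 = [(0, 1, 2), (2, 0, 3), (1, 3, 0), (2, 4, 5), (1, 5, 4), (4, 3, 5)]
assert is_one_in_three_model(phi0, {0, 4})       # the model {X_0, X_4}
assert not is_one_in_three_model(phi0, {0, 1})   # C_0 contains two of them
```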
For Theorem~\ref{the:hardness_results}.\ref{the:hardness_results_essp}, we let $\tau\in \{\tau^b_0,\tau^b_1,\tau^b_2,\tau^b_3\}$ and reduce $\varphi$ to a union $U_\tau=U(K_\tau, T_\tau)$ which consists of the \emph{key} $K_{\tau}$ and the \emph{translator} $T_\tau$, both unions of TSs. The index $\tau$ emphasizes that the components' actual shape depends on $\tau$.
For Theorem~\ref{the:hardness_results}.\ref{the:hardness_results_ssp} the reduction starts from $\varphi$ and results in a union $W=U(K, T)$ consisting of \emph{key} $K$ and \emph{translator} $T$, both unions. $W$ needs no index as it has the same shape for $\tau^b_0$ and $\tau^b_1$.
The key $K_\tau$ provides a key ESSP atom $\alpha_\tau=(k,s_\tau)$ with event $k$ and state $s_\tau$. The key $K$ supplies a key SSP atom $\alpha=(s, s')$ with states $s,s'$. The translators $T_\tau$ and $T$ represent $\varphi$ by using the variables of $\varphi$ as events. The unions $K_\tau$ and $T_\tau$ as well as $K$ and $T$ share events which define their \emph{interface} $I_\tau=E_{K_\tau} \cap E_{T_\tau}$ and $I=E_{K} \cap E_{T}$. The construction ensures via the interface that $K_\tau$ and $T_\tau$ just as $K$ and $T$ interact in a way that satisfies the following objectives of \emph{completeness}, \emph{existence} and \emph{sufficiency}:
\begin{objective}[Completeness]\label{int:completeness} Let $(sup, sig)$ be a region of $K_\tau$ ($K$) solving the key atom. If $(sup', sig')$ is a region of $T_\tau$ ($T$) satisfying $sig'(e)=sig(e)$ for $e\in I_\tau$ ($e\in I$) then the signature of the variable events reveals a one-in-three model of $\varphi$. \end{objective}
\begin{objective}[Existence]\label{int:existence} There is a region $(sup_K, sig_K)$ of $K_\tau$ ($K$) which solves the key atom. If $\varphi$ is one-in-three satisfiable then there is a region $(sup_T, sig_T)$ of $T_\tau$ ($T$) such that $sig_T(e)=sig_K(e)$ for $e\in I_\tau$ ($e\in I$). \end{objective}
\begin{objective}[Sufficiency]\label{int:suffiency} If the key atom is $\tau$-solvable in $U_\tau$, respectively $W$, then $U_\tau$ has the $\tau$-ESSP and the $\tau$-SSP and $W$ has the $\tau$-SSP.
\end{objective}
Objective~\ref{int:completeness} ensures that the $\tau$-ESSP just as the $\tau$-feasibility of $U_\tau$ implies the one-in-three satisfiability of $\varphi$, respectively. More exactly, if $U_\tau$ has the $\tau$-ESSP or the $\tau$-feasibility then there is a $\tau$-region $(sup, sig)$ of $U_\tau$ that solves $\alpha_\tau$. By definition, this yields corresponding regions $(sup_K , sig_K)$ of $K_\tau$ and $(sup_T, sig_T)$ of $T_\tau$: $sup_{K}(s)=sup(s)$ and $sig_{K}(e)=sig(e)$ if $s\in S_{K_\tau}, e\in E_{K_\tau}$ and $sup_{T}(s)=sup(s)$ and $sig_{T}(e)=sig(e)$ if $s\in S_{T_\tau}, e\in E_{T_\tau}$. Similarly, the $\tau$-SSP of $W$ implies proper regions of $K$ and $T$ by a region $(sup, sig)$ of $W$ which solves $\alpha$. As $(sup, sig)$ solves $\alpha_\tau$ in $U_\tau$ ($\alpha$ in $W$) the region $(sup_K, sig_K)$ solves $\alpha_\tau$ in $K_\tau$ ($\alpha$ in $K$). Hence, by Objective~\ref{int:completeness}, the region $(sup_T, sig_T)$ of $T_\tau$ ($T$) reveals a one-in-three model of $\varphi$.
Reversely, Objective~\ref{int:existence} ensures that a one-in-three model of $\varphi$ defines a region $(sup, sig)$ of $U_\tau=(K_\tau, T_\tau)$ solving the key atom $\alpha_\tau$: $sup(s)=sup_K(s)$ if $s\in S_{K_\tau}$ and $sup(s)=sup_T(s)$ if $s\in S_{T_\tau}$ as well as $sig(e)=sig_K(e)$ if $e\in E_{K_\tau}$ and $sig(e)=sig_T(e)$ if $e\in E_{T_\tau}\setminus E_{K_\tau}$. Similarly, we get a region of $W$ that solves $\alpha$.
Objective~\ref{int:suffiency} guarantees that the solvability of the key atom $\alpha_\tau$ in $U_\tau$ ($\alpha$ in $W$) implies the solvability of all ESSP atoms and SSP atoms of $U_\tau$ (SSP atoms of $W$). Hence, by Objective~\ref{int:existence}, if $\varphi$ has a one-in-three model then $U_\tau$ has the $\tau$-ESSP and is $\tau$-feasible just as $W$ has the $\tau$-SSP.
The unions $U_\tau$ and $W$ satisfy the conditions of Lemma~\ref{lem:union_validity}. Therefore, the joining TS $A(U_\tau)$ has the $\tau$-ESSP and is $\tau$-feasible if and only if $\varphi$ is one-in-three satisfiable. Likewise, the TS $A(W)$ has the $\tau$-SSP if and only if there is a one-in-three model for $\varphi$. By definition, every TS $A$ has at most $\vert S_A\vert ^2$ SSP, respectively $\vert S_A\vert \cdot \vert E_A\vert$ ESSP atoms. Consequently, a non-deterministic Turing machine can verify a guessed proof of $\tau$-SSP, $\tau$-ESSP and $\tau$-feasibility in polynomial time in the size of $A$. Hence, all decision problems are in NP. All reductions are doable in polynomial time and deciding the one-in-three satisfiability of $\varphi$ is NP-complete. Thus, our approach proves Theorem~\ref{the:hardness_results}.
In order to prove the functionality of the constituents and to convey the corresponding intuition without becoming too technical, we proceed as follows. On the one hand, we precisely define the constituents of the unions for arbitrary bound $b$ and input instance $\varphi=\{C_0,\dots, C_{m-1}\}$, $C_i=\{X_{i,0}, X_{i,1}, X_{i,2}\}$, $i\in \{0,\dots, m-1\}$, $V(\varphi)=\{X_0,\dots, X_{m-1}\}$, and prove their functionality. On the other hand, for comprehensibility, we provide full examples for the types $\tau\in \{\tau^b_0,\tau^b_1\}$ and the unions $U_\tau$ and $W$. The illustrations also provide a $\tau$-region solving the corresponding key atom. For a running example, the input instance is $\varphi_0=\{C_0,\dots, C_{5}\}$ with clauses $C_0=\{X_0,X_1,X_2\},\ C_1= \{X_2,X_0,X_3\},\ C_2= \{X_1,X_3,X_0\},\ C_3= \{X_2,X_4,X_5\},\ C_4=\{X_1,X_5,X_4\},\ C_5= \{X_4,X_3,X_5\}$ that allows the one-in-three model $\{X_0,X_4\}$. A full example for $\tau\in \{\tau^b_2,\tau^b_3\}$ is given in \cite{T2019b}. For further simplification, we reuse gadgets for several unions as far as possible. This is not always possible, as small differences between two types of nets imply huge differences in the possibilities to build corresponding regions: The more complex (the transition function of) the considered types, the more difficult the task to connect the solvability of the key atom with the signature of the interface events, respectively to connect the signature of the interface events with an implied model. Moreover, the more difficult these tasks, the more complex the corresponding gadgets. Hence, less complex gadgets are useless for more complex types. Conversely, the more complex the gadgets, the more possibilities to solve all ESSP atoms and all SSP atoms are needed. Hence, more complex gadgets are not useful for less complex types. In the end, some constituents may differ only slightly at first glance, but their differences have a crucial and necessary impact.
Note that some techniques of the proof of Theorem~\ref{the:hardness_results} are very general advancements of our previous work \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}. For example, like in \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}, the proof of Theorem~\ref{the:hardness_results} is based on a reduction of cubic monotone one-in-three $3$-SAT. Moreover, we apply unions as part of \emph{component design} \cite{DBLP:books/fm/GareyJ79}. However, the reductions in \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18} fit only for the basic type $\tau^1_1$ and they are already useless for $\tau^1_0$. They fit even less for $\tau^b_0$ and $\tau^b_1$ if $b\geq 2$ and certainly not for their group extensions.
We proceed as follows. Section~\ref{sec:keys_1} and Section~\ref{sec:translators_1} introduce the keys $K_{\tau^b_0}, K_{\tau^b_1}, K$ and translators $T_{\tau^b_0}, T_{\tau^b_1} , T$ and prove their functionality. Section~\ref{sec:group_extensions_keys} and Section~\ref{sec:group_extensions_translators} present $K_{\tau^b_2}, K_{\tau^b_3}$ and $T_{\tau^b_2}, T_{\tau^b_3}$ and carry out how they work. Section~\ref{sec:liaison} proves that the keys and translators collaborate properly.
\subsection{The Unions $K_{\tau^b_0}$ and $K_{\tau^b_1}$ and $K$.}\label{sec:keys_1}
Let $\tau\in \{\tau^b_0, \tau^b_1\}$. The aim of $K_\tau$ and $K$ is summarized by the next lemma:
\begin{lemma}\label{lem:key_unions_1} The keys $K_{\tau}$ and $K$ implement the interface events $k_{0},\dots, k_{6m-1}$ and provide a key atom $a_{\tau}$ and $\alpha$, respectively, such that the following is true:
\begin{enumerate} \item\label{lem:key_unions_1_completeness}\emph{(Completeness)} If $(sup_K, sig_K)$ is a $\tau$-region of $K_\tau$, respectively of $K$, that solves $a_\tau$, respectively $\alpha$, then $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(0,b)$ or $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(b,0)$.
\item\label{lem:key_unions_1_existence}\emph{(Existence)} There is a $\tau$-region $(sup_K, sig_K)$ of $K_\tau$, respectively of $K$, that solves $a_{\tau}$, respectively $\alpha$, such that $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(0,b)$. \end{enumerate} \end{lemma}
Firstly, we introduce the keys $K_{\tau^b_0}, K_{\tau^b_1}$ and $K$ and show that they satisfy Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness}. Secondly, we present corresponding $\tau$-regions which prove Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_existence}.
\textbf{The union $\mathbf{K_{\tau^b_0}}$} contains the following TS $H_0$ which provides the ESSP atom $(k, h_{0, 4b+1})$:
\begin{tikzpicture} \node (init) at (-0.75,0) {$H_{0}=$}; \node (h0) at (0,0) {\nscale{$h_{0,0}$}}; \node (h1) at (1,0) {}; \node (dots1) at (1.25,0) {\nscale{$\dots$}}; \node (h_b_1) at (1.5,0) {};
\node (h_b) at (2.5,0) {\nscale{$h_{0,b}$}}; \node (h_b+1) at (3.5,0) {}; \node (dots_2) at (3.75,0) {\nscale{$\dots$}}; \node (h_2b_1) at (4,0) {};
\node (h_2b) at (5,0) {\nscale{$h_{0,2b}$}}; \node (h_2b+1) at (6.5,0) {\nscale{$h_{0,2b+1}$}}; \node (h_2b+2) at (7.5,0) {}; \node (dots_3) at (7.75,0) {\nscale{$\dots$}}; \node (h_3b) at (8,0) {};
\node (h_3b+1) at (9,0) { \nscale{$h_{0,3b+1}$} }; \node (h_3b+2) at (9,-1) {}; \node (dots_4) at (8.75,-1) {\nscale{$\dots$}}; \node (h_4b) at (8.5,-1) { };
\node (h_4b+1) at (7.5,-1) { \nscale{$h_{0,4b+1}$} }; \node (h_4b+2) at (6.5,-1) {}; \node (dots_5) at (6.25,-1) {\nscale{$\dots$}}; \node (h_5b) at (6,-1) { };
\node (h_5b+1) at (5,-1) { \nscale{$h_{0,5b+1}$} }; \node (h_5b+2) at (4,-1) { }; \node (dots_5) at (3.75,-1) {\nscale{$\dots$}}; \node (h_6b) at (3.5,-1) { }; \node (h_6b+1) at (2.5,-1) { \nscale{$h_{0,6b+1}$} };
\graph { (h0) ->["\escale{$k$}"] (h1); (h_b_1)->["\escale{$k$}"] (h_b) ->["\escale{$z$}"] (h_b+1); (h_2b_1)->["\escale{$z$}"] (h_2b)->["\escale{$o_0$}"] (h_2b+1)->["\escale{$k$}"] (h_2b+2); (h_3b)->["\escale{$k$}"] (h_3b+1)->["\escale{$z$}"] (h_3b+2); (h_4b)->[swap, "\escale{$z$}"] (h_4b+1)->[swap, "\escale{$o_1$}"] (h_4b+2); (h_5b)->[swap, "\escale{$o_1$}"] (h_5b+1)->[swap, "\escale{$k$}"] (h_5b+2); (h_6b)->[swap, "\escale{$k$}"] (h_6b+1); }; \end{tikzpicture}
\noindent $K_{\tau^b_0}$ also installs for $j\in \{0,\dots, 6m-1\}$ the TS $D_{j,0}$ providing the interface event $k_{j}$:
\noindent \begin{tikzpicture}[yshift=-5cm] \node (init) at (-1,0) {$D_{j,0}=$}; \foreach \i in {0,...,3} {\coordinate (\i) at (\i*1.2,0);} \foreach \i in {0,...,3} {\node (p\i) at (\i) {\nscale{$d_{j,0,\i}$}};}
\node (hdots_3) at (4.2,0) {\nscale{$\dots$}}; \node (db+2) at (4.4,0) {}; \node (db+3) at (5.5,0) { \nscale{$d_{j,0,b+2}$} };
\graph { (p0) ->["\escale{$o_{0}$}"] (p1) ->["\escale{$k_j$}"] (p2) ->["\escale{$o_{1}$}"] (p3); (db+2) ->["\escale{$o_{1}$}"] (db+3); }; \end{tikzpicture}
\noindent Overall, $K_{\tau^b_0}=U(H_0,D_{0,0}, \dots, D_{6m-1,0})$.
\begin{proof}[Proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} for $\tau^b_0$]
For $j\in \{0,\dots, 6m-1\}$ the TSs $H_0$ and $D_{j,0}$ interact as follows: If $(sup_K, sig_K)$ is a region of $K_{\tau^b_0}$ solving $(k, h_{0, 4b+1})$ then either $sig_K(o_0)=(0,b)$ \emph{and} $sig_K(o_1)=(0,1)$ or $sig_K(o_0)=(b,0)$ \emph{and} $sig_K(o_1)=(1,0)$. By $\edge{o_0}d_{j,0,1}$, $d_{j,0,2}\edge{o_1}$ and Lemma~\ref{lem:observations}, if $sig_K(o_0)=(0,b), sig_K(o_1)=(0,1)$ then $sup_K(d_{j,0,1})=b$ and $sup_K(d_{j,0,2})=0$. This implies $sig_K(k_j)=(b,0)$. Similarly, $sig_K(o_0)=(b,0), sig_K(o_1)=(1,0)$ implies $sup_K(d_{j,0,1})=0$ and $sup_K(d_{j,0,2})=b$ yielding $sig_K(k_j)=(0,b)$. Hence, it is $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(b,0)$ or $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(0,b)$.
To prove Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} for $K_{\tau^b_0}$ it remains to argue that a $\tau^b_0$-region $(sup, sig)$ of $K_{\tau^b_0}$ solving $(k, h_{0, 4b+1})$ satisfies $sig(o_0)=(0,b), sig(o_1)=(0,1)$ or $sig(o_0)=(b,0), sig(o_1)=(1,0)$. Let $ E_{0}=\{ (m,m) \mid 0\le m \leq b\}$.
By definition, if $sig(k)=(m,m) \in E_0$ then $sup(h_{0,3b+1}),sup(h_{0,5b+1})\geq m$. Event $(m,m)$ occurs at every state $s$ of $\tau^b_0$ satisfying $s\geq m$. Hence, by $\neg h_{0,4b+1}\edge{(m,m)}$, we get $sup(h_{0,4b+1}) < m$. Observe that $z$ always occurs $b$ times in a row. Therefore, by $sup(h_{0,3b+1}) \geq m$, $sup(h_{0,4b+1}) < m$ and Lemma~\ref{lem:observations}, we have $sig(z)=(1,0)$, $sig(o_1)=(0,1)$ and immediately obtain $sup(h_{0,2b})=0$ and $sup(h_{0,3b+1})=b$. Moreover, by $sig(k)=(m,m) $ and $sup(h_{0,3b+1})=b$ we get $sup(h_{0,2b+1})=b$ implying with $sup(h_{0,2b})=0$ that $sig(o_0)=(0,b)$. Thus, we have $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$.
Otherwise, if $sig(k)\not\in E_0$, then Lemma~\ref{lem:observations} ensures $sig(k)\in \{(1,0), (0,1)\}$. If $sig(k)=(0,1)$ then, by $s\edge{(0,1)}$ for every state $s\in \{0,\dots, b-1\}$ of $\tau^b_0$, we have $sup(h_{0, 4b+1})=b$. Moreover, again by $sig(k)=(0,1)$, we have $sup(h_{0,b})=sup(h_{0,3b+1})=b$ and $sup(h_{0,2b+1})=sup(h_{0, 5b+1})=0$. By $sup(h_{0, 3b+1})=sup(h_{0,4b+1})=b$ we have $sig(z)\in E_0$ which together with $sup(h_{0,b})=b$ implies $sup(h_{0, 2b})=b$. Thus, by $sup(h_{0,2b})=b$ and $sup(h_{0, 2b+1})=0$, we get $sig(o_0)=(b,0)$. Moreover, by $sup(h_{0, 4b+1})=b$ and $sup(h_{0, 5b+1 })=0$, we conclude $sig(o_1)=(1,0)$. Hence, we have $sig(o_0)=(b,0)$ and $sig(o_1)=(1,0)$. Similar arguments show that $sig(k)=(1,0)$ implies $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$. Overall, this proves the announced signatures of $o_0$ and $o_1$. Hence, $K_{\tau^b_0}$ satisfies Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness}.
\end{proof}
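As a sanity check (illustrative only, not part of the formal development), the completeness argument for $K_{\tau^b_0}$ can be verified exhaustively for $b=1$ by enumerating all candidate regions along the path of $H_0$. The sketch uses our own encoding and assumes that an event $(m,n)$ of $\tau^b_0$ is enabled at a state $s$ if and only if $s\geq m$ and $s-m+n\leq b$, and then leads to $s-m+n$:

```python
# Brute-force check of the completeness argument for b = 1, using our
# own encoding of tau^1_0: event (m, n) is enabled at state s iff
# s >= m and s - m + n <= 1, and firing it leads to s - m + n.
from itertools import product

B = 1
EVENTS = [(m, n) for m in range(B + 1) for n in range(B + 1)]
PATH = ["k", "z", "o0", "k", "z", "o1", "k"]   # event sequence of H_0

def run(start, sig):
    s, sups = start, [start]
    for e in PATH:
        m, n = sig[e]
        if s < m or s - m + n > B:
            return None                        # region inconsistent
        s = s - m + n
        sups.append(s)
    return sups

solving = 0
for start, k, z, o0, o1 in product(range(B + 1), *[EVENTS] * 4):
    sig = {"k": k, "z": z, "o0": o0, "o1": o1}
    sups = run(start, sig)
    if sups is None:
        continue
    m, n = sig["k"]
    if sups[5] >= m and sups[5] - m + n <= B:
        continue                               # k enabled at h_{0,4b+1}
    solving += 1
    # every solving region forces the claimed interface signatures
    assert (o0, o1) in {((0, 1), (0, 1)), ((1, 0), (1, 0))}

assert solving > 0
```

For $b=1$ the claimed pairs $((0,b),(0,1))$ and $((b,0),(1,0))$ collapse to $((0,1),(0,1))$ and $((1,0),(1,0))$, which is exactly what the enumeration confirms.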
\textbf{The union $\mathbf{K_{\tau^b_1}}$} uses the next TS $H_1$ to provide the key atom $(k, h_{1,2b+4})$:
\noindent \begin{tikzpicture} \node at (-0.75,0) {$H_1=$}; \node (h0) at (0,0) {\nscale{$h_{1,0}$}}; \node (h1) at (1,0) {\nscale{}};
\node (h_2_dots) at (1.25,0) {\nscale{$\dots$}};
\node (h_k_1) at (1.5,0) {}; \node (h_k) at (2.5,0) {\nscale{$h_{1,b}$}}; \node (h_k+1) at (3.7,0) {\nscale{$h_{1,b+1}$}}; \node (h_k+2) at (4.9,0) {\nscale{$h_{1,b+2}$}}; \node (h_k+3) at (6.1,0) {\nscale{}};
\node (h_k+4_dots) at (6.35,0) {\nscale{$\dots$}};
\node (h_2k+1) at (6.6,0) {\nscale{}}; \node (h_2k+2) at (7.8,0) {\nscale{$h_{1,2b+2}$}}; \node (h_2k+3) at (9.2,0) {\nscale{$h_{1,2b+3}$}}; \node (h_2k+4) at (10.6,0) {\nscale{$h_{1,2b+4}$}}; \node (h_2k+5) at (10.6,-1) {\nscale{$h_{1,2b+5}$}}; \node (h_2k+6) at (9.4,-1) {};
\node (h_k+5_dots) at (9.15,-1) {\nscale{$\dots$}};
\node (h_3k+4) at (8.9,-1) {}; \node (h_3k+5) at (7.7,-1) {\nscale{$h_{1,3b+5}$}}; \graph { (h0) ->["\escale{$k$}"] (h1) (h_k_1)->["\escale{$k$}"] (h_k) ->["\escale{$z_0$}"] (h_k+1)->["\escale{$o_0$}"] (h_k+2)->["\escale{$k$}"] (h_k+3); (h_2k+1)->["\escale{$k$}"] (h_2k+2)->["\escale{$z_1$}"] (h_2k+3)->["\escale{$z_0$}"] (h_2k+4)->["\escale{$o_2$}"] (h_2k+5)->[swap, "\escale{$k$}"] (h_2k+6); (h_3k+4)->[swap, "\escale{$k$}"] (h_3k+5); ;}; \end{tikzpicture}
\noindent Furthermore, $K_{\tau^b_1}$ contains for $j\in \{0,\dots, 6m-1\}$ the TS $D_{j,1}$ which provides the interface event $k_j$: \noindent \begin{tikzpicture}[baseline=-2pt] \node at (-1,0) {$D_{j,1}=$}; \foreach \i in {0,...,3} {\coordinate (\i) at (\i*1.4,0);} \foreach \i in {0,...,3} {\node (p\i) at (\i) {\nscale{$d_{j,1,\i}$}};} \graph { (p0) ->["\escale{$o_{0}$}"] (p1) ->["\escale{$k_j$}"] (p2) ->["\escale{$o_{2}$}"] (p3);}; \end{tikzpicture}
\noindent Altogether, $K_{\tau^b_1}=U(H_1,D_{0,1},\dots, D_{6m-1,1})$.
\begin{proof}[Proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} for $\tau^b_1$] For $j\in \{0,\dots, 6m-1\}$ the TSs $H_1$ and $D_{j,1}$ interact as follows: If $(sup_K, sig_K)$ is a $\tau^b_1$-region of $K_{\tau^b_1}$ solving $(k, h_{1,2b+4})$ then either $sig_K(o_0)=sig_K(o_2)=(b,0)$ or $sig_K(o_0)=sig_K(o_2)=(0,b)$. Clearly, $sig_K(o_0)=sig_K(o_2)=(b,0)$, respectively $sig_K(o_0)=sig_K(o_2)=(0,b)$, implies $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(0,b)$, respectively $sig_K(k_0)=\dots=sig_K(k_{6m-1})=(b,0)$.
We argue that the $\tau^b_1$-solvability of $(k, h_{1,2b+4})$ implies the announced signatures of $o_0,o_2$. If $(sup_K, sig_K)$ is a $\tau^b_1$-region that solves $(k, h_{1,2b+4})$ then, by definition of $\tau^b_1$ and Lemma~\ref{lem:observations}, we get $sig_K(k)\in \{(1,0), (0,1)\}$. Let $sig_K(k)=(0,1)$. The event $(0,1)$ occurs at every $s\in \{0,\dots, b-1\}$ of $\tau^b_1$. Hence, $\neg sup_K(h_{1,2b+4})\edge{(0,1)}$ implies $sup_K(h_{1,2b+4})=b$. Moreover, $k$ occurs $b$ times in a row. Thus, by $sig_K(k)=(0,1)$ and Lemma~\ref{lem:observations}, we obtain $sup_K(h_{1,b})=b$ and $sup_K(h_{1,b+2})=sup_K(h_{1,2b+5})=0$. This implies, by $h_{1,2b+4}\edge{o_2}h_{1,2b+5}$, $sup(h_{1,2b+4})=b$ and $sup(h_{1,2b+5})=0$, that $sig(o_2)=(b,0)$. Hence, by $sup_K(h_{1,b})=sup_K(h_{1,2b+4})=b$, $h_{1,b}\edge{z_0}$ and $\edge{z_0}h_{1,2b+4}$, we get $sig(z_0)=(0,0)$. Finally, by $sup(h_{1,b})=b$, $h_{1,b}\edge{z_0}$ and $sig(z_0)=(0,0)$ we deduce $sup(h_{1,b+1})=b$. Hence, by $h_{1,b+1}\edge{o_0}h_{1,b+2}$, $sup(h_{1,b+1})=b$ and $sup(h_{1,b+2})=0$ we have $sig(o_0)=(b,0)$. Altogether, we have that $sig(o_0)=sig(o_2)=(b,0)$. Similarly, one verifies that $sig_K(k)=(1,0)$ results in $sig(o_0)=sig(o_2)=(0,b)$. This proves Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} for $K_{\tau^b_1}$. \end{proof}
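For intuition, the region constructed in the proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_existence} can be replayed along $H_1$ for $b=2$. The following sketch (our own encoding, assuming event $(m,n)$ is enabled at $s$ iff $s\geq m$ and $s-m+n\leq b$, and leads to $s-m+n$) checks that the support of $h_{1,2b+4}$ is $b$, so $k$ with signature $(0,1)$ is not enabled there:

```python
# Illustrative replay (b = 2) of a region of H_1 that solves the key
# atom (k, h_{1,2b+4}): sig(k) = (0,1), sig(z0) = sig(z1) = (0,0),
# sig(o0) = sig(o2) = (b,0), sup(h_{1,0}) = 0.
b = 2
sig = {"k": (0, 1), "z0": (0, 0), "z1": (0, 0), "o0": (b, 0), "o2": (b, 0)}
# event sequence of H_1: k^b z0 o0 k^b z1 z0 o2 k^b
path = ["k"] * b + ["z0", "o0"] + ["k"] * b + ["z1", "z0", "o2"] + ["k"] * b

sup = [0]                                # sup(h_{1,0}) = 0
for e in path:
    m, n = sig[e]
    assert sup[-1] >= m and sup[-1] - m + n <= b, "not a region"
    sup.append(sup[-1] - m + n)

# event (0,1) is enabled at s iff s < b, so support b forbids k here
assert sup[2 * b + 4] == b
```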
\textbf{The union $\mathbf{K}$} uses the following TS $H_2$ to provide the key atom $(h_{2,0}, h_{2,b})$:
\noindent \begin{tikzpicture} \node at (-0.75,0) {$H_2=$}; \node (h0) at (0,0) {\nscale{$h_{2,0}$}}; \node (h1) at (1,0) {\nscale{}};
\node (hdots_1) at (1.25,0) {\nscale{$\dots$}};
\node (hb_1) at (1.5,0) {}; \node (hb) at (2.5,0) {\nscale{$h_{2,b}$}}; \node (hb+1) at (3.75,0) {\nscale{$h_{2,b+1}$}}; \node (hb+2) at (5,0) {};
\node (hdots_2) at (5.25,0) {\nscale{$\dots$}};
\node (h2b) at (5.5,0) {}; \node (h2b+1) at (6.5,0) {\nscale{$h_{2,2b+1}$}}; \node (h2b+2) at (7.75,0) {\nscale{$h_{2,2b+2}$}}; \node (h2b+3) at (9,0) {};
\node (hdots_3) at (9.25,0) {\nscale{$\dots$}};
\node (h3b+1) at (9.5,0) {}; \node (h3b+2) at (10.5,0) {\nscale{$h_{2,3b+2}$}}; \graph { (h0) ->["\escale{$k$}"] (h1); (hb_1)->["\escale{$k$}"] (hb) ->["\escale{$o_0$}"] (hb+1)->["\escale{$k$}"] (hb+2); (h2b) ->["\escale{$k$}"] (h2b+1)->["\escale{$o_2$}"] (h2b+2)->["\escale{$k$}"] (h2b+3); (h3b+1) ->["\escale{$k$}"] (h3b+2); }; \end{tikzpicture}
\noindent $K$ also contains the TSs $D_{0,1},\dots, D_{6m-1,1}$, thus $K=U(H_2, D_{0,1},\dots, D_{6m-1,1})$.
\begin{proof}[Proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} for $K$] $K$ works as follows: The event $k$ occurs $b$ times in a row at $h_{2,0}$. Therefore, by Lemma~\ref{lem:observations}, a region $(sup_K, sig_K)$ solving $(h_{2,0}, h_{2,b})$ satisfies $sig_K(k)\in \{(1,0), (0,1)\}$. If $sig_K(k)=(1,0)$ then $sup_K(h_{2,b})=sup_K(h_{2,2b+1})=0$ and $sup_K(h_{2,b+1})=sup_K(h_{2,2b+2})=b$ implying $sig_K(o_0)=sig_K(o_2)=(0,b)$. Otherwise, if $sig_K(k)=(0,1)$ then $sup_K(h_{2,b})=sup_K(h_{2,2b+1})=b$ and $sup_K(h_{2,b+1})=sup_K(h_{2,2b+2})=0$ which implies $sig_K(o_0)=sig_K(o_2)=(b,0)$. As already discussed for $K_{\tau^b_1}$, we have that $sig_K(o_0)=sig_K(o_2)=(b,0)$ ($sig_K(o_0)=sig_K(o_2)=(0,b)$) implies $sig_K(k_j)=(0,b)$ ($sig_K(k_j)=(b,0)$) for $j\in \{0,\dots, 6m-1\}$. Hence, Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness} is true for $K$. \end{proof}
It remains to show that $K_{\tau^b_0}, K_{\tau^b_1}$ and $K$ satisfy the objective of \emph{existence}:
\begin{proof}[Proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_existence}] We present corresponding regions. Let $S$ and $E$ be the set of all states and of all events of $K,K_{\tau^b_0}$ and $K_{\tau^b_1}$, respectively. We define mappings $sig:E\longrightarrow E_{\tau^b_1}$ and $sup: S \longrightarrow S_{\tau^b_1}$ by:
\[sig(e)= \begin{cases} (0,b), & \text{if } e\in \{k_{0},\dots, k_{6m-1}\} \\ (0,1), & \text{if } e = k\\ (0,0), & \text{if } e \in \{z, z_0, z_1 \} \\ (1,0), & \text{if } e = o_1\\ (b,0), & \text{if } e\in \{o_0, o_2 \} \\ \end{cases}
sup(s)= \begin{cases} 0 , & \text{if } s\in \{h_{0,0},h_{1,0}, h_{2,0}\} \\ b , & \text{if } s\in \{d_{j,0,0}, d_{j,1,0}\}\\
& \text{and } 0 \leq j \leq 6m-1 \end{cases} \] By $sig_{K}$, $sig_{K_{\tau^b_0}}$ and $sig_{K_{\tau^b_1}}$ ($sup_{K}$, $sup_{K_{\tau^b_0}}$ and $sup_{K_{\tau^b_1}}$) we denote the restriction of $sig$ ($sup$) to the events (states) of $K$, $K_{\tau^b_0}$ and $K_{\tau^b_1}$, respectively. As $sup$ defines the support of every corresponding initial state, by Lemma~\ref{lem:observations}, we obtain fitting regions $(sup_{K}, sig_{K})$, $(sup_{K_{\tau^b_0}},sig_{K_{\tau^b_0}})$ and $(sup_{K_{\tau^b_1}},sig_{K_{\tau^b_1}})$ that solve the corresponding key atom. Figure~\ref{fig:example_1} sketches this region for $K_{\tau^2_0}$, $K_{\tau^2_1}$ and $K$.
\end{proof}
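The region just defined can be replayed mechanically. For $b=2$, the following sketch (our own encoding, assuming event $(m,n)$ is enabled at $s$ iff $s\geq m$ and $s-m+n\leq b$, and leads to $s-m+n$) traces it along $H_2$ and confirms that it separates the key states $h_{2,0}$ and $h_{2,b}$:

```python
# Illustrative replay (b = 2) of the region of K restricted to H_2:
# sig(k) = (0,1), sig(o0) = sig(o2) = (b,0), sup(h_{2,0}) = 0.
b = 2
sig = {"k": (0, 1), "o0": (b, 0), "o2": (b, 0)}
# event sequence of H_2: k^b o0 k^b o2 k^b
path = ["k"] * b + ["o0"] + ["k"] * b + ["o2"] + ["k"] * b

sup = [0]                                # sup(h_{2,0}) = 0
for e in path:
    m, n = sig[e]
    assert sup[-1] >= m and sup[-1] - m + n <= b, "not a region"
    sup.append(sup[-1] - m + n)

assert sup[0] != sup[b]     # the SSP atom (h_{2,0}, h_{2,b}) is solved
```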
\begin{figure}
\caption{ Constituents $K_{\tau}, K, T$ for $\tau\in \{\tau^2_0,\tau^2_1\}$ and $\varphi_0$. TSs are defined by bold drawn states, edges and events. Labels with reduced opacity correspond to the region $(sup, sig)$ defined in Sections~\ref{sec:keys_1} and \ref{sec:translators_1}: $sup(s)$ is presented in square brackets below state $s$ and $sig(e)$ is depicted below every $e$-labeled transition. The model of $\varphi_0$ is $\{X_0, X_4\}$. }
\label{fig:example_1}
\end{figure}
\subsection{The Translators $T_{\tau^b_0}$ and $T_{\tau^b_1}$ and $T$}\label{sec:translators_1}
In this subsection, we present translator $T$, which we also use as $T_{\tau^b_0}$ and $T_{\tau^b_1}$, that is, $T_{\tau^b_0}=T_{\tau^b_1}=T$.
For every $i\in \{0,\dots, m-1\}$ the clause $C_i=\{X_{i,0},X_{i,1}, X_{i,2}\}$ is translated into the following three TSs which use the variables of $C_i$ as events:
\noindent \begin{tikzpicture}[scale=0.9]
\begin{scope} \node at (-0.75,0) {\scalebox{0.8}{$T_{i,0}=$}};
\node (t0) at (0,0) {\nscale{$t_{i,0,0}$}}; \node (t1) at (1,0) {\nscale{$t_{i,0,1}$}}; \node (t2) at (2,0) {};
\node (h_2_dots) at (2.25,0) {\nscale{$\dots$}};
\node (tb) at (2.4,0) {}; \node (tb+1) at (3.5,0) {\nscale{$t_{i,0,b+1}$}}; \node (tb+2) at (4.9,0) {\nscale{$t_{i,0,b+2}$}}; \node (tb+3) at (6,0) {};
\node (h_k+4_dots) at (6.25,0) {\nscale{$\dots$}};
\node (t2b+1) at (6.4,0) {}; \node (t2b+2) at (7.5,0) {\nscale{$t_{i,0,2b+2}$}}; \node (t2b+3) at (9,0) {\nscale{$t_{i,0,2b+3}$}}; \graph { (t0) ->["\escale{$k_{6i}$}"] (t1) ->["\escale{$X_{i,0}$}"] (t2) ; (tb) ->["\escale{$X_{i,0}$}"] (tb+1) ->["\escale{$x_{i}$}"] (tb+2)->["\escale{$X_{i,2}$}"] (tb+3); (t2b+1)->["\escale{$X_{i,2}$}"] (t2b+2)->["\escale{$k_{6i+1}$}"] (t2b+3); ;}; \end{scope}
\begin{scope}[yshift = -0.8cm] \node at (-0.75,0) {\scalebox{0.8}{$T_{i,1}=$}}; \node (t0) at (0,0) {\nscale{$t_{i,1,0}$}}; \node (t1) at (1.2,0) {\nscale{$t_{i,1,1}$}}; \node (t2) at (2.2,0) {};
\node (tdots) at (2.45,0) {\nscale{$\dots$}};
\node (tb) at (2.65,0) {}; \node (tb+1) at (3.7,0) {\nscale{$t_{i,1,b+1}$}}; \node (tb+2) at (5,0) {\nscale{$t_{i,1,b+2}$}}; \node (tb+3) at (6.5,0) {\nscale{$t_{i,1,b+3}$ }}; \graph { (t0) ->["\escale{$k_{6i+2}$}"] (t1) ->["\escale{$X_{i,1}$}"] (t2); (tb) ->["\escale{$X_{i,1}$}"] (tb+1) ->["\escale{$p_i$}"] (tb+2)->["\escale{$k_{6i+3}$}"] (tb+3);}; \end{scope}
\begin{scope}[yshift = -1.6cm] \node at (-0.75,0) {\scalebox{0.8}{$T_{i,2}=$}}; \foreach \i in {0,...,4} {\coordinate (\i) at (\i*1.2,0);} \foreach \i in {0,...,4} {\node (p\i) at (\i) {\nscale{$t_{i,2,\i}$}};} \graph { (p0) ->["\escale{$k_{6i+4}$}"] (p1) ->["\escale{$x_{i}$}"] (p2) ->["\escale{$p_i$}"] (p3)->["\escale{$k_{6i+5}$}"] (p4);}; \end{scope}
\end{tikzpicture}
\noindent Altogether, $T=U(T_{0,0}, T_{0,1}, T_{0,2},\dots, T_{m-1,0}, T_{m-1,1}, T_{m-1,2})$. Figure~\ref{fig:example_1} provides an example for $T$ where $b=2$ and $\varphi=\varphi_0$. In accordance with our general approach and Lemma~\ref{lem:key_unions_1}, the following lemma states the aim of $T$:
\begin{lemma}\label{lem:translator_1}
Let $\tau\in \{\tau^b_0, \tau^b_1\}$. \begin{enumerate} \item\label{lem:translator_1_completeness}\emph{(Completeness)} If $(sup_T, sig_T)$ is a $\tau$-region of $T$ such that $sig_T(k_0)=\dots = sig_T(k_{6m-1})=(0,b)$ or $sig_T(k_0)=\dots = sig_T(k_{6m-1})=(b,0)$ then $\varphi$ has a one-in-three model.
\item\label{lem:translator_1_existence}\emph{(Existence)} If $\varphi$ has a one-in-three model then there is a $\tau$-region $(sup_T, sig_T)$ of $T$ such that $sig_T(k_0)=\dots = sig_T(k_{6m-1})=(0,b)$.
\end{enumerate} \end{lemma}
\begin{proof} To fulfill its purpose, $T$ works as follows. By definition, if $(sup_T, sig_T)$ is a region of $T$ then $\pi_{i,0},\pi_{i,1},\pi_{i,2}$, defined by
\noindent \begin{tikzpicture}[scale=0.89]
\begin{scope} \node at (-1,0) {\scalebox{0.8}{$\pi_{i,0}=$}};
\node (t1) at (0,0) {\nscale{$sup_T(t_{i,0,1})$}}; \node (t2) at (2,0) {};
\node (h_2_dots) at (2.25,0) {\nscale{$\dots$}};
\node (tb) at (2.5,0) {}; \node (tb+1) at (4.5,0) {\nscale{$sup_T(t_{i,0,b+1})$}}; \node (tb+2) at (7,0) {\nscale{$sup_T(t_{i,0,b+2})$}}; \node (tb+3) at (9,0) {};
\node (h_k+4_dots) at (9.25,0) {\nscale{$\dots$}};
\node (t2b+1) at (9.5,0) {}; \node (t2b+2) at (11.5,0) {\nscale{$sup_T(t_{i,0,2b+2})$}};
\graph { (t1) ->["\escale{$sig_T(X_{i,0}$})"] (t2) ; (tb) ->["\escale{$sig_T(X_{i,0})$}"] (tb+1) ->["\escale{$sig_T(x_{i})$}"] (tb+2)->["\escale{$sig_T(X_{i,2})$}"] (tb+3); (t2b+1)->["\escale{$sig_T(X_{i,2})$}"] (t2b+2); ;}; \end{scope}
\begin{scope}[yshift = -0.8cm] \node at (-1,0) {\scalebox{0.8}{$\pi_{i,1}=$}}; \node (t1) at (0,0) {\nscale{$sup_T(t_{i,1,1})$}}; \node (t2) at (2,0) {};
\node (tdots) at (2.25,0) {\nscale{$\dots$}};
\node (tb) at (2.5,0) {}; \node (tb+1) at (4.5,0) {\nscale{$sup_T(t_{i,1,b+1})$}}; \node (tb+2) at (6.5,0) {\nscale{$sup_T(t_{i,1,b+2})$}};
\graph { (t1) ->["\escale{$sig_T(X_{i,1}$})"] (t2); (tb) ->["\escale{$sig_T(X_{i,1})$}"] (tb+1) ->["\escale{$sig_T(p_i$})"] (tb+2);}; \end{scope}
\begin{scope}[yshift = -1.6cm] \node at (-1,0) {\scalebox{0.8}{$\pi_{i,2}=$}}; \foreach \i in {1,...,3} {\coordinate (\i) at (\i*2-2,0);} \foreach \i in {1,...,3} {\node (p\i) at (\i) {\nscale{$sup_T(t_{i,2,\i})$}};} \graph { (p1) ->["\escale{$sig_T(x_{i})$}"] (p2) ->["\escale{$sig_T(p_i)$}"] (p3);}; \end{scope}
\end{tikzpicture} \newline
are directed labeled paths of $\tau$. For every $i\in \{0,\dots, m-1\}$, the events $k_{6i}, \dots, k_{6i+5}$ belong to the interface. By Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_completeness}, $K_\tau$ and $K$ ensure the following: If $(sup_K, sig_K)$ is a region of $K_\tau$, respectively $K$, that solves the key atom $a_\tau$, respectively $\alpha$, then either $sig_K(k_0)=\dots =sig_K(k_{6m-1})=(0,b)$ or $sig_K(k_0)=\dots =sig_K(k_{6m-1})=(b, 0)$. For every transition $s\edge{k_j}s'$, the first case implies $sup(s)=0$ and $sup(s')=b$ while the second case implies $sup(s)=b$ and $sup(s')=0$, where $j\in \{0,\dots, 6m-1\}$. Hence, a $\tau$-region $(sup_T, sig_T)$ of $T$ being compatible with $(sup_K, sig_K)$ satisfies exactly one of the next conditions: \begin{enumerate}
\item[(1)] \label{item:T_from_b_to_0} $sig_T(k_0)=\dots =sig_T(k_{6m-1})=(0,b)$ and for every $i\in \{0,\dots, m-1\}$ the paths $\pi_{i,0},\pi_{i,1},\pi_{i,2}$ start at $b$ and terminate at $0$.
\item[(2)]\label{item:T_from_0_to_b} $sig_T(k_0)=\dots =sig_T(k_{6m-1})=(b,0)$ and for every $i\in \{0,\dots, m-1\}$ the paths $\pi_{i,0},\pi_{i,1},\pi_{i,2}$ start at $0$ and terminate at $b$. \end{enumerate}
The construction of $T$ ensures that if (1), respectively (2), is satisfied, then for every $i\in \{0,\dots, m-1\}$ there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ such that $sig(X)=(1,0)$, respectively $sig(X)=(0,1)$. Each triple $T_{i,0}, T_{i,1}, T_{i,2}$ corresponds exactly to the clause $C_i$. Hence, $M=\{X\in V(\varphi) \vert sig_T(X)=(1,0)\}$, respectively $M=\{X\in V(\varphi) \vert sig_T(X)=(0,1)\}$, is a one-in-three model of $\varphi$. Having sketched the plan to satisfy Lemma~\ref{lem:translator_1}.\ref{lem:translator_1_completeness}, it remains to argue that the deduced conditions (1), (2) have the announced impact on the variable events.
For a start, let (1) be satisfied and $i\in \{0,\dots, m-1\}$. By $sig_T(k_{6i})=\dots=sig_T(k_{6i+5})=(0,b)$ we have that $sup_T(t_{i,0,1})=sup_T(t_{i,1,1})=sup_T(t_{i,2,1})=b$ and $sup_T(t_{i,0,2b+2})=sup_T(t_{i,1,b+2})=sup_T(t_{i,2,3})=0$. Notice that for every event $e\in \{X_{i,0}, X_{i,1}, X_{i,2}, x_i, p_i\}$ there is a state $s$ such that $s\edge{e}$ and $sup_T(s)=b$ or such that $\edge{e}s$ and $sup_T(s)=0$. Consequently, if $(m,n)\in E_\tau$ and $m < n$ then $sig(e)\not=(m,n)$. This implies the following condition:
\begin{enumerate} \item[(3)]\label{item:T_greater_or_equal_sup} If $e\in \{X_{i,0}, X_{i,1}, X_{i,2}, x_i, p_i\}$ and $s\edge{e}s'$ then $sup_T(s)\geq sup_T(s')$. \end{enumerate}
Moreover, every variable event $X_{i,0}, X_{i,1}, X_{i,2}$ occurs $b$ times consecutively in a row. Hence, by Lemma~\ref{lem:observations}, we have: \begin{enumerate}
\item[(4)]\label{item:T_state_changing_sig_is_from_1_to_0} If $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$, $sig_T(X)=(m,n)$ and $m\not=n$ then $(m,n)=(1,0)$. \end{enumerate}
The paths $\pi_{i,0}, \pi_{i,1}, \pi_{i,2}$ of $\tau$ start at $b$ and terminate at $0$. Hence, by definition of $\tau$, for every $\pi\in \{ \pi_{i,0}, \pi_{i,1}, \pi_{i,2} \}$ there has to be an event $e_\pi$, which occurs at $\pi$, such that $sig_T(e_\pi)=(m,n)$ with $m > n$.
If for $\pi\in \{ \pi_{i,0}, \pi_{i,1}\}$ it is true that $e_{\pi}\not\in\{X_{i,0}, X_{i,1}, X_{i,2}\}$ then for every $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ we have $sig_T(X)=(m,m)$ for some $m\in \{0,\dots, b\}$. This yields $sup(t_{i,0,b+1})=sup(t_{i,1,b+1})=b$ and $sup(t_{i,0,b+2})=0$ which with $sup(t_{i,1,b+2})=0$ implies $sig_T(x_i)=sig_T(p_i)=(b,0)$. By $sig_T(x_i)=(b,0)$, we obtain $sup(t_{i,2,2})=0$ and, by $sig_T(p_i)=(b,0)$, we obtain $sup(t_{i,2,2})=b$, a contradiction. Consequently, by condition (4), there has to be an event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ such that $sig_T(X)=(1,0)$. We discuss all possible cases to show that $X$ is unambiguous.
If $sig_T(X_{i,0})=(1,0)$ then, by Lemma~\ref{lem:observations}, we have that $sup_T(t_{i,0,b+1}) = 0$. By (3), this implies that $sup_T(t_{i,0,b+2}) = \dots = sup_T(t_{i,0,2b+1}) = 0 $ and $sig_T(x_i)=sig_T(X_{i,2})=(0,0)$. Moreover, $sig_T(x_i)=(0,0)$ and $sup(t_{i,2,1})=b$ imply $sup(t_{i,2,2})=b$ which with $sup(t_{i,2,3})=0$ implies $sig_T(p_i)=(b,0)$. By $sig_T(p_i)=(b,0)$ we obtain $sup(t_{i,1,b+1})=b$ which, by Lemma~\ref{lem:observations} and contraposition, shows that $sig_T(X_{i,1})\not=(1,0)$.
If $sig_T(X_{i,2})=(1,0)$ then, by Lemma~\ref{lem:observations}, we have that $sup_T(t_{i,0,b+2}) = b$. Again by (3), this implies that $sup_T(t_{i,0,1}) = \dots = sup_T(t_{i,0,b+2}) = b $ and $sig_T(x_i)=(m,m)$, $sig_T(X_{i,0})=(m',m')$ for some $m,m'\in \{0,\dots, b\}$. Especially, we have that $sig_T(X_{i,0})\not=(1,0)$. Moreover, by $sig_T(x_i)=(m,m)$, we obtain $sup_T(t_{i,2,2})=b$ implying with $sup_T(t_{i,2,3})=0$ that $sig_T(p_i)=(b,0)$. As in the previous case this yields $sig_T(X_{i,1})\not=(1,0)$.
Finally, if $sig_T(X_{i,1})=(1,0)$ then, by Lemma~\ref{lem:observations}, we get $sup_T(t_{i,1,b+1}) = 0$. By $sup_T(t_{i,1,b+1}) = sup_T(t_{i,1,b+2})= 0$ we conclude $sig_T(p_i)=(0,0)$ which with $sup_T(t_{i,2,3})=0$ implies $sup_T(t_{i,2,2})=0$. Using $sup_T(t_{i,2,1})=b$ and $sup_T(t_{i,2,2})=0$ we obtain $sig_T(x_i)=(b,0)$ implying that $sup_T(t_{i,0,b+1})=b$ and $sup_T(t_{i,0,b+2})=0$. By (3), this yields $sup_T(t_{i,0,1})=\dots =sup_T(t_{i,0,b+1})=b$ and $sup_T(t_{i,0,b+2})=\dots =sup_T(t_{i,0,2b+2})=0$ which, by Lemma~\ref{lem:observations}, implies $sig_T(X_{i,0})\not=(1,0)$ and $sig_T(X_{i,2})\not=(1,0)$.
So far, we have proven that if (1) is satisfied then for every $i\in \{0,\dots, m-1\}$ there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ such that $sig_T(X)=(1,0)$. Consequently, the set $M=\{X\in V(\varphi) \vert sig_T(X)=(1,0)\}$ is a one-in-three model of $\varphi$. One verifies, by analogous arguments, that (2) implies for every $i\in \{0,\dots, m-1\}$ that there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ with $sig_T(X)=(0,1)$, which makes $M=\{X\in V(\varphi) \vert sig_T(X)=(0,1)\}$ a one-in-three model of $\varphi$. Hence, a $\tau$-region of $T$ that satisfies (1) or (2) implies a one-in-three model of $\varphi$.
Conversely, if $M$ is a one-in-three model of $\varphi$ then there is a $\tau$-region $(sup_T, sig_T)$ satisfying (1) which, by Lemma~\ref{lem:observations}, is completely defined by $sup_T(t_{i,0,0})=sup_T(t_{i,1,0})=sup_T(t_{i,2,0})=0$ for $i\in \{0,\dots, m-1\}$ and
\[sig_T(e)= \begin{cases} (0,b), & \text{if } e\in \{k_{0},\dots, k_{6m-1}\} \\ (0,0), & \text{if } e\in V(\varphi)\setminus M\\ (0,0), & \text{if } (e=p_i, X_{i,1}\in M) \text{ or } (e= x_i , X_{i,1} \not\in M), 0\leq i\leq m-1 \\ (1,0), & \text{if } e \in M \\ (b,0), & \text{if } (e = x_i , X_{i,1} \in M) \text{ or } (e = p_i , X_{i,1} \not\in M), 0\leq i\leq m-1 \\ \end{cases} \] See Figure~\ref{fig:example_1}, for a sketch of this region for $\tau\in \{\tau^2_0, \tau^2_1\}$, $\varphi_0$ and $M=\{X_0, X_4\}$. This proves Lemma~\ref{lem:translator_1}. \end{proof}
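As a concrete instance (illustrative, our own encoding with the enabledness rule $s\geq m$, $s-m+n\leq b$): for $b=2$, $\varphi_0$ and $M=\{X_0, X_4\}$, the region above assigns, for clause $C_0=\{X_0,X_1,X_2\}$, the signatures $sig(X_0)=(1,0)$, $sig(X_1)=sig(X_2)=sig(x_0)=(0,0)$ and $sig(p_0)=(b,0)$, and each path $\pi_{0,j}$ indeed runs from $b$ down to $0$:

```python
# Clause C_0 = {X0, X1, X2} of phi_0 with model M = {X0, X4}; since
# X0 is in M and X1 is not, the region yields the signatures below.
b = 2
sig = {"X0": (1, 0), "X1": (0, 0), "X2": (0, 0),
       "x0": (0, 0),   # x_i with X_{i,1} not in M
       "p0": (b, 0)}   # p_i with X_{i,1} not in M

def replay(start, events):
    s = start
    for e in events:
        m, n = sig[e]
        assert s >= m and s - m + n <= b, "not a region"
        s = s - m + n
    return s

# each path pi_{0,j} starts at b and must terminate at 0 (condition (1))
assert replay(b, ["X0"] * b + ["x0"] + ["X2"] * b) == 0   # pi_{0,0}
assert replay(b, ["X1"] * b + ["p0"]) == 0                # pi_{0,1}
assert replay(b, ["x0", "p0"]) == 0                       # pi_{0,2}
```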
\subsection{The Key Unions $K_{\tau^b_2}$ and $K_{\tau^b_3}$ }\label{sec:group_extensions_keys}
The unions $U_{\tau^b_2}$ and $U_{\tau^b_3}$ install the same key. More precisely, if $\tau\in \{\tau^b_2,\tau^b_3\}$ then $K_\tau$ uses only the TS $H_3$ to provide the key atom $(k, h_{3,1,b-1})$ and the interface events $k$ and $z$:
\noindent \begin{tikzpicture} \node (init) at (-0.75,0) {$H_3=$};
\node (h0) at (0,0) {\nscale{$h_{3,0,0}$}}; \node (h1) at (1,0) {\nscale{}};
\node (h_2_dots) at (1.25,0) {\nscale{$\dots$}};
\node (h_b_2) at (1.5,0) {}; \node (h_b_1) at (2.6,0) {\nscale{$h_{3,0,b-1}$}}; \node (h_b) at (4.0,0) {\nscale{$h_{3,0,b}$}};
\node (h_b+1) at (0,-1) {\nscale{$h_{3,1,0}$}}; \node (h_b+2) at (1,-1) {};
\node (h_b+2_dots) at (1.25,-1) {\nscale{$\dots$}};
\node (h_2b_3) at (1.5,-1) {}; \node (h_2b_2) at (2.6,-1) {\nscale{$h_{3,1,b-1}$}};
\graph{ (h0) ->["\escale{$k$}"] (h1); (h0) ->["\escale{$u$}", swap](h_b+1)->["\escale{$k$}"](h_b+2); (h_b_2)->["\escale{$k$}"] (h_b_1)->["\escale{$k$}"] (h_b); (h_2b_3)->["\escale{$k$}"] (h_2b_2); (h_2b_2)->[swap, "\escale{$z$}"] (h_b); };
\end{tikzpicture}
\noindent The next lemma summarizes the intention behind $K_{\tau}$: \begin{lemma}\label{lem:key_unions_2} Let $\tau\in \{\tau^b_2, \tau^b_3\}$ and $E_0=\{(m,m) \vert 1 \leq m\leq b\}\cup \{ 0 \}$. \begin{enumerate} \item\label{lem:key_unions_2_completeness}\emph{(Completeness)} If $(sup_K, sig_K)$ is a $\tau$-region that solves $(k, h_{3,1,b-1})$ in $K_\tau$ then $sig(k)\in \{(1,0), (0,1)\}$ and $sig_K(z)\in E_0$.
\item\label{lem:key_unions_2_existence}\emph{(Existence)} There is a $\tau$-region $(sup_K, sig_K)$ of $K_\tau$ solving $(k, h_{3,1,b-1})$ such that $sig(k)=(0,1)$ and $sig_K(z)=0$. \end{enumerate} \end{lemma}
\begin{proof}
For the first statement, we let $(sup_K, sig_K)$ be a region solving $(k, h_{3,1,b-1})$. By $\edge{k}h_{3,1,b-1}$ and $\neg sup_K(h_{3,1,b-1})\edge{sig_K(k)}$ we immediately have $sig_K(k)\not\in E_0$. Moreover, for every group event $e\in \{0,\dots, b\}$ and every state $s$ of $\tau$ we have that $s\edge{e}$. Hence, by $\neg sup_K(h_{3,1,b-1})\edge{sig_K(k)}$ we have $sig_K(k)\not\in \{0,\dots, b\}$. The event $k$ occurs $b$ times in a row. Therefore, by Lemma~\ref{lem:observations}, we have that $sig_K(k)\in \{(1,0), (0,1)\}$ and if $sig_K(k)= (1,0)$ then $sup_K(h_{3,0,b})=0$ and if $sig_K(k)= (0,1)$ then $sup_K(h_{3,0,b})=b$. If $s\in \{0,\dots, b-1\}$ then $s\edge{(0,1)}$ is true. Furthermore, every state $s\in \{1,\dots, b\}$ satisfies $s\edge{(1,0)}$. Consequently, by $\neg sup_K(h_{3,1,b-1})\edge{sig_K(k)}$, if $sig_K(k)=(0,1)$ then $sup_K(h_{3,1,b-1})=b$ and if $sig_K(k)=(1,0)$ then $sup_K(h_{3,1,b-1})=0$. This implies for $(sup_K, sig_K)$ that $sig_K(z)\in E_0$ and proves Lemma~\ref{lem:key_unions_2}.\ref{lem:key_unions_2_completeness}. For Lemma~\ref{lem:key_unions_2}.\ref{lem:key_unions_2_existence} we easily verify that $(sup_K, sig_K)$ with $sig_K(k)=(0,1)$, $sig_K(u)=1$, $sig_K(z)=0$ and $sup_K(h_{3,0,0})=0$ properly defines a solving $\tau$-region.
\end{proof}
\subsection{The Translators $T_{\tau^b_2}$ and $T_{\tau^b_3}$}\label{sec:group_extensions_translators}
In this section we introduce $T_{\tau^b_2}$ which is used for $U_{\tau^b_2}$ and $U_{\tau^b_3}$, that is, $T_{\tau^b_3}=T_{\tau^b_2}$. Let $\tau\in\{\tau^b_2, \tau^b_3\}$. Firstly, the translator $T_{\tau}$ contains for every variable $X_j$ of $\varphi$, $j\in\{0,\dots, m-1\}$, the TSs $F_j, G_j$ below, that apply $X_j$ as event:
\noindent \begin{tikzpicture} \begin{scope} \node at (-0.75,0) {$F_j=$}; \node (f0) at (0,0) {\nscale{$f_{j,0,0}$}}; \node (f1) at (1,0) {\nscale{}};
\node (f_2_dots) at (1.25,0) {\nscale{$\dots$}};
\node (f_b_1) at (1.5,0) {}; \node (f_b) at (2.6,0) {\nscale{$f_{j,0,b}$}};
\node (f_b+1) at (0,-1) {\nscale{$f_{j,1,0}$}}; \node (f_b+2) at (1,-1) {};
\node (f_b+2_dots) at (1.25,-1) {\nscale{$\dots$}};
\node (f_2b_1) at (1.5,-1) {}; \node (f_2b) at (2.6,-1) {\nscale{$f_{j,1,b-1}$}};
\graph{ (f0) ->["\escale{$k$}"] (f1); (f0) ->["\escale{$v_j$}", swap](f_b+1)->["\escale{$k$}"](f_b+2); (f_b_1)->["\escale{$k$}"] (f_b); (f_2b_1)->["\escale{$k$}"] (f_2b); (f_2b)->[swap, "\escale{$X_{j}$}"] (f_b); }; \end{scope}
\begin{scope}[xshift= 5cm] \node at (-0.75,0) {$G_j=$}; \node (f0) at (0,0) {\nscale{$g_{j,0}$}}; \node (f1) at (1,0) {\nscale{}};
\node (f_2_dots) at (1.25,0) {\nscale{$\dots$}};
\node (f_b_1) at (1.5,0) {}; \node (f_b) at (2.6,0) {\nscale{$g_{j,b}$}}; \node (f_b+1) at (3.8,0) {\nscale{$g_{j,b+1}$}};
\graph{ (f0) ->["\escale{$k$}"] (f1); (f_b_1)->["\escale{$k$}"] (f_b)->["\escale{$X_j$}"](f_b+1); }; \end{scope}
\end{tikzpicture}
\noindent Secondly, translator $T_{\tau}$ implements for every clause $C_i=\{X_{i,0}, X_{i,1}, X_{i,2}\}$ of $\varphi$, $i\in \{0,\dots, m-1\}$, the following TS $T_i$ that applies the variables of $C_i$ as events:
\noindent \begin{tikzpicture} \node at (-0.75,0) {$T_i=$}; \node (t0) at (0,0) {\nscale{$t_{i,0}$}}; \node (t1) at (1,0) {\nscale{}};
\node (t_2_dots) at (1.25,0) {\nscale{$\dots$}};
\node (t_b_1) at (1.5,0) {}; \node (t_b) at (2.5,0) {\nscale{$t_{i,b}$}}; \node (t_b+1) at (3.7,0) {\nscale{$t_{i,b+1}$}}; \node (t_b+2) at (4.9,0) {\nscale{$t_{i,b+2}$}}; \node (t_b+3) at (6.1,0) {\nscale{$t_{i,b+3}$}}; \node (t_b+4) at (7.3,0) {\nscale{$t_{i,b+4}$}}; \node (t_b+5) at (8.5,0) {};
\node (t_b+5_dots) at (8.75,0) {\nscale{$\dots$}};
\node (t_2b+3) at (9,0) {\nscale{}}; \node (t_2b+4) at (10.2,0) {\nscale{$t_{i, 2b+4}$}};
\graph{ (t0) ->["\escale{$k$}"] (t1); (t_b_1)->["\escale{$k$}"] (t_b) ->["\escale{$X_{i,0}$}"] (t_b+1)->["\escale{$X_{i,1}$}"] (t_b+2)->["\escale{$X_{i,2}$}"] (t_b+3)->["\escale{$z$}"] (t_b+4)->["\escale{$k$}"] (t_b+5); (t_2b+3)->["\escale{$k$}"] (t_2b+4); }; \end{tikzpicture}
\noindent Altogether, we have $T_{\tau}=(F_0,G_0,\dots, F_{m-1}, G_{m-1}, T_0,\dots,T_{m-1})$.
The next lemma summarizes the functionality of $T_{\tau}$: \begin{lemma}\label{lem:translator_2}
If $\tau\in \{\tau^b_2, \tau^b_3\}$ then the following conditions are true: \begin{enumerate} \item\label{lem:translator_2_completeness}\emph{(Completeness)} If $(sup_T, sig_T)$ is a $\tau$-region of $T_\tau$ such that $sig_T(z)\in E_0$ and $sig_T(k)=(0,1)$, respectively $sig_T(k)=(1,0)$, then $\varphi$ is one-in-three satisfiable.
\item\label{lem:translator_2_existence}\emph{(Existence)} If $\varphi$ has a one-in-three model $M$ then there is a $\tau$-region $(sup_T, sig_T)$ of $T_\tau$ such that $sig_T(z)=0$ and $sig_T(k)=(0,1)$. \end{enumerate}
\end{lemma}
\begin{proof} Firstly, we argue for Lemma~\ref{lem:translator_2}.\ref{lem:translator_2_completeness}. Let $(sup_T, sig_T)$ be a region of $T_\tau$ that satisfies $sig_T(z)\in E_0$ and $sig_T(k)\in \{(1,0), (0,1)\}$. By definition, the path $p_i$ defined by
\noindent \begin{tikzpicture}
\node (init) at (-1,0) {$p_i=$}; \node (t_b) at (0,0) {\nscale{$sup_T(t_{i,b})$}}; \node (t_b+1) at (2.3,0) {\nscale{$sup_T(t_{i,b+1})$}}; \node (t_b+2) at (4.6,0) {\nscale{$sup_T(t_{i,b+2})$}}; \node (t_b+3) at (6.9,0) {\nscale{$sup_T(t_{i,b+3})$}};
\graph{
(t_b) ->["\escale{$sig_T(X_{i,0})$}"] (t_b+1)->["\escale{$sig_T(X_{i,1})$}"] (t_b+2)->["\escale{$sig_T(X_{i,2})$}"] (t_b+3);
};
\end{tikzpicture}
\noindent is a directed labeled path in $\tau$. By $sig_T(z)\in E_0$ and $t_{i,b+3}\edge{z}t_{i,b+4}$ we obtain that $sup_T(t_{i,b+3})=sup_T(t_{i,b+4})$. Moreover, $k$ occurs $b$ times in a row at $t_{i,0}$ and at $t_{i,b+4}$. By Lemma~\ref{lem:observations}, this implies that if $sig_T(k)=(1,0)$ then $sup_T(t_{i,b})=0$ and $sup_T(t_{i,b+4})=b$, and if $sig_T(k)=(0,1)$ then $sup_T(t_{i,b})=b$ and $sup_T(t_{i,b+4})=0$. Altogether, we obtain that the following conditions are true: if $sig_T(z)\in E_0$ and $sig_T(k) = (1,0)$ then the path $p_i$ starts at $0$ and terminates at $b$, and if $sig_T(z)\in E_0$ and $sig_T(k) = (0,1)$ then the path $p_i$ starts at $b$ and terminates at $0$.
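To make the application of Lemma~\ref{lem:observations} explicit, consider the case $sig_T(k)=(1,0)$ as a sketch: every $k$-labeled transition then decreases the support by exactly one, so along a run of $b$ consecutive $k$'s the first state must carry support $b$ and the last one support $0$, which forces
\[
sup_T(t_{i,\ell})=b-\ell \quad\text{and}\quad sup_T(t_{i,b+4+\ell})=b-\ell \quad\text{for all } \ell\in\{0,\dots,b\}.
\]
The case $sig_T(k)=(0,1)$ is symmetric, with the supports increasing from $0$ to $b$ along each run.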
By definition of $\tau$, both conditions imply that there has to be at least one event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ whose signature satisfies $sig_T(X)\not\in E_0$. Again, our intention is to ensure that for exactly one such variable event the condition $sig_T(X)\not\in E_0$ is true. Here, the TSs $F_0,G_0,\dots, F_{m-1},G_{m-1} $ come into play. The aim of $F_0,G_0,\dots, F_{m-1},G_{m-1} $ is to restrict the possible signatures for the variable events as follows: If $sig_T(k) = (1,0)$ then $X\in V(\varphi)$ implies $sig_T(X)\in E_0 \cup \{ b \}$ and if $sig_T(k) = (0,1)$ then $X\in V(\varphi)$ implies $sig_T(X)\in E_0 \cup \{ 1 \}$.
We now argue that the introduced conditions ensure that there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ with $sig_T(X)\not\in E_0$. Remember that, by definition, if $sig_T(X)\in E_0$ then $-sig^-_T(X) +sig^+_T(X) = \vert sig_T(X)\vert = 0$.
For a start, let $sig_T(z)\in E_0$ and $sig_T(k) = (1,0)$, implying that $p_i$ starts at $0$ and terminates at $b$, and assume $sig_T(X)\in E_0 \cup \{ b \}$ for all $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$. By Lemma~\ref{lem:observations}, we obtain: \begin{equation}\label{eq:modulo=b} (\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert) \equiv b \text{ mod } (b+1) \end{equation}
Clearly, if $sig_T(X_{i,0}), sig_T(X_{i,1}), sig_T(X_{i,2})\in E_0$, then we obtain a contradiction to (\ref{eq:modulo=b}) by $\vert sig_T(X_{i,0})\vert = \vert sig_T( X_{i,1} ) \vert =\vert sig_T( X_{i,2}) \vert=0$. Hence, there has to be at least one variable event $X\in \{ X_{i,0}, X_{i,1} , X_{i,2} \}$ with $sig_T(X)=b$.
If there are two different variable events $X, Y\in \{ X_{i,0}, X_{i,1} , X_{i,2} \}$ such that $sig_T(X)=sig_T(Y)=b$ and $sig_T(Z)\in E_0$ for $Z \in \{ X_{i,0}, X_{i,1} , X_{i,2} \}\setminus \{X, Y\}$ then, by symmetry and transitivity, we obtain: \begin{align}
& b \equiv (\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert) \text{ mod } (b+1) && \vert (1) \\ & (\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert) \equiv 2b \text{ mod } (b+1) && \vert \text{assumpt.} \\ & b \equiv 2b \text{ mod } (b+1) && \vert (2),(3) \\ & 2b \equiv (b-1) \text{ mod } (b+1) && \vert \text{def. } \equiv \\ & b \equiv (b-1) \text{ mod } (b+1) &&\vert (4),(5)\\ & \exists m\in \mathbb{Z}: m(b+1)=1 && \vert (6)
\end{align}
By $(7)$ we obtain $b=0$, a contradiction. Similarly, if we assume that $\vert sig_T(X_{i,0})\vert = \vert sig_T( X_{i,1} ) \vert =\vert sig_T( X_{i,2}) \vert=b$ then we obtain
\begin{align}
& (\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert) \equiv 3b \text{ mod } (b+1) && \vert \text{assumpt.} \\ & b \equiv 3b \text{ mod } (b+1) && \vert (2),(8) \\ & 3b \equiv (b-2) \text{ mod } (b+1) && \vert \text{def. } \equiv \\ & b \equiv (b-2) \text{ mod } (b+1) &&\vert (9),(10)\\ & \exists m\in \mathbb{Z}: m(b+1)=2 && \vert (11)
\end{align}
By $(12)$, we have $b\in \{0,1\}$ which contradicts $b\geq 2$. Consequently, if $sig_T(z)\in E_0$ and $sig_T(k) = (1,0)$ and $sig_T(X)\in E_0 \cup \{ b \}$ for all $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ then there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ with $sig_T(X)\not\in E_0$.
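The modular case analysis above can be condensed into a single sketch: with $sig_T(X)\in E_0\cup\{b\}$ for all three variable events, the sum $\Sigma=\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert$ can only take the values $0$, $b$, $2b$ or $3b$, and, modulo $b+1$,
\[
0\not\equiv b,\qquad 2b\equiv b-1\not\equiv b,\qquad 3b\equiv b-2\not\equiv b,
\]
where the incongruences use $b\geq 2$. Hence, $\Sigma=b$ is the only value compatible with (\ref{eq:modulo=b}), that is, exactly one variable event of $C_i$ has signature $b$.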
If we continue with $sig_T(z)\in E_0$, $sig_T(k) = (0,1)$ and $sig_T(X)\in E_0 \cup \{ 1 \}$ then, since $p_i$ now starts at $b$ and terminates at $0$, we find the following equation to be true: \begin{equation}\label{eq:modulo=0} (\vert sig_T(X_{i,0})\vert + \vert sig_T( X_{i,1} ) \vert +\vert sig_T( X_{i,2}) \vert) \equiv 1 \text{ mod } (b+1) \end{equation}
Analogously to the former case one argues that the assumption that not exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ is equipped with the signature $1$, that is, $sig_T(X)\not\in E_0$, leads to the contradiction $b\in \{0,1\}$. Altogether, we have shown: if $(sup_T, sig_T)$ is a region such that $sig_T(k)\in \{(0,1), (1,0)\}$ and $sig_T(z)\in E_0$ and if the TSs $F_0,G_0,\dots, F_{m-1}, G_{m-1}$ behave as announced, then for every $i\in \{0,\dots, m-1\}$ there is exactly one variable event $X\in \{X_{i,0}, X_{i,1}, X_{i,2}\}$ such that $sig_T(X)\not\in E_0$. In other words, in that case the set $M=\{X\in V(\varphi) \mid sig_T(X) \not\in E_0\}$ defines a one-in-three model of $\varphi$.
Hence, to complete the arguments for Lemma~\ref{lem:translator_2}.\ref{lem:translator_2_completeness}, it remains to argue for the announced functionality of $F_0,G_0,\dots, F_{m-1},G_{m-1}$. Let $j\in \{0,\dots, m-1\}$. We argue for $X_j$ that if $sig_T(k)=(1,0)$ then $sig_T(X_j)\in E_0\cup \{b\}$ and if $sig_T(k)=(0,1)$ then $sig_T(X_j)\in E_0\cup \{1\}$, respectively.
To begin with, let $sig_T(k)=(1,0)$. The event $k$ occurs $b$ times in a row at $f_{j,0,0}$ and $g_{j,0}$ and $b-1$ times in a row at $f_{j,1,0}$. By Lemma~\ref{lem:observations} this implies $sup_T(f_{j,0,b})=sup_T(g_{j,b})=0$ and $sup_T(f_{j,1,b-1})\in \{0,1\}$. Clearly, if $sup_T(f_{j,0,b})=sup_T(f_{j,1,b-1})=0$ then $sig_T(X_j)\in E_0$. We argue that $sup_T(f_{j,1,b-1})=1$ implies $sig_T(X_j) = b$.
Assume, for a contradiction, that $sig_T(X_j)\not=b$. If $sig_T(X_j)=(m,m)$ for some $m\in \{1,\dots, b\}$ then $-sig^-_T(X_j)+sig^+_T(X_j)=\vert sig_T(X_j) \vert = 0$. By Lemma~\ref{lem:observations} this contradicts $sup_T(f_{j,0,b})\not=sup_T(f_{j,1,b-1})$. If $sig_T(X_j)=(m,n)$ with $m\not=n$ then, by Lemma~\ref{lem:observations}, we have $sup_T(f_{j,0,b})=sup_T(f_{j,1,b-1})-sig^-_T(X_j)+sig^+_T(X_j)$, that is, $0=1-m+n$, which by $m\leq sup_T(f_{j,1,b-1})=1$ implies $sig_T(X_j)=(1,0)$. But, by $sup_T(g_{j,b})=0$ and $\neg 0 \edge{(1,0)}$ in $\tau$, this contradicts $sup_T(g_{j,b})\edge{sig_T(X_j)}$. Finally, if $sig_T(X_j) = e \in \{0,\dots, b-1 \}$ then we have $1 + e \not\equiv 0 \text{ mod } (b+1)$. Again, this is a contradiction to $sup_T(f_{j,1,b-1})\edge{sig_T(X_j)}sup_T(f_{j,0,b})$. Hence, we have $sig_T(X_j)=b$. Overall, it is proven that if $sig_T(k)=(1,0)$ then $sig_T(X_j)\in E_0\cup \{b\}$.
To continue, let $sig_T(k)=(0,1)$. Similar to the former case, by Lemma~\ref{lem:observations}, we obtain that $sup_T(f_{j,0,b})=sup_T(g_{j,b})=b$ and $sup_T(f_{j,1,b-1})\in \{b-1,b\}$. If $sup_T(f_{j,1,b-1})=b$ then $sig_T(X_j)\in E_0$. We show that $sup_T(f_{j,1,b-1})=b-1$ implies $sig_T(X_j)=1$: Assume $sig_T(X_j)=(m,n)\in E_\tau$. If $m=n$ or if $m>n$ then, by $sup_T(f_{j,0,b})=sup_T(f_{j,1,b-1})-sig^-_T(X_j)+sig^+_T(X_j)$, we have $sup_T(f_{j,0,b}) < b$, a contradiction. If $m < n$ then, by $sup_T(g_{j,b+1})=sup_T(g_{j,b})-sig^-_T(X_j)+sig^+_T(X_j)$, we get the contradiction $sup_T(g_{j,b+1}) > b$. Hence, $sig_T(X_j)\in \{0,\dots, b\}$. Again, $sig_T(X_j)=e\in \{0\}\cup\{2,\dots, b\}$ implies $(b-1 + e)\not \equiv b \text{ mod } (b+1)$, which contradicts $sup_T(f_{j,0,b})\equiv sup_T(f_{j,1,b-1}) + \vert sig_T(X_j) \vert \text{ mod } (b+1)$. Consequently, we obtain $sig_T(X_j)=1$, which shows that $sig_T(k)=(0,1)$ implies $sig_T(X_j)\in E_0\cup \{1\}$. Altogether, this proves Lemma~\ref{lem:translator_2}.\ref{lem:translator_2_completeness}.
To complete the proof of Lemma~\ref{lem:translator_2}, we show its second condition to be true. To do so, we start from a one-in-three model $M\subseteq V(\varphi)$ of $\varphi$ and define the following $\tau$-region $(sup_T, sig_T)$ of $T_\tau$ that satisfies Lemma~\ref{lem:translator_2}.\ref{lem:translator_2_existence}: For $e\in E_{T_\tau}$ we define $sig_T(e)=$
\[ \begin{cases} (0,1), & \text{if } e = k\\ 0, & \text{if } e\in \{z\}\cup (V(\varphi)\setminus M) \text{ or } e=v_j \text{ and } X_j\in M, 0\leq j\leq m-1 \\ 1, & \text{if } e \in M\cup\{u\} \text{ or } e=v_j \text{ and } X_j\not\in M, 0\leq j\leq m-1\\ \end{cases} \] By Lemma~\ref{lem:observations}, having $sig_T$, it is sufficient to define the values of the initial states of the constituent TSs of $T_\tau$. To do so, we define $sup_T(f_{j,0,0})=sup_T(g_{j,0})=sup_T(t_{j,0})=0$ for $j\in \{0,\dots, m-1\}$. One easily verifies that $(sup_T, sig_T)$ is a well defined region of $T_\tau$. See Figure~\ref{fig:example}, which presents a concrete example of $(sup_T, sig_T)$ for $b=2$, $\varphi_0$ and $M=\{X_0, X_4\}$. Finally, this proves Lemma~\ref{lem:translator_2}. \end{proof}
\subsection{The Liaison of Key and Translator}\label{sec:liaison}
The following lemma completes our reduction and finally proves Theorem~\ref{the:hardness_results}:
\begin{lemma}[Sufficiency]\label{lem:liaison} \begin{enumerate} \item\label{lem:liaison_essp} Let $\tau\in \{\tau^b_0,\tau^b_1, \tau^b_2,\tau^b_3\}$. $U_\tau$ is $\tau$-feasible, respectively has the $\tau$-ESSP, if and only if there is a $\tau$-region of $U_\tau$ solving its key atom $\alpha_\tau$, which is the case if and only if $\varphi$ has a one-in-three model.
\item\label{lem:liaison_ssp} Let $\tau'\in \{\tau^b_0,\tau^b_1\}$. $W$ has the $\tau'$-SSP if and only if there is a $\tau'$-region of $W$ solving its key atom $\alpha$ if and only if $\varphi$ has a one-in-three model. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{lem:key_unions_1}, Lemma~\ref{lem:translator_1}, respectively Lemma~\ref{lem:key_unions_2}, Lemma~\ref{lem:translator_2}, the respective key atoms are solvable if and only if $\varphi$ is one-in-three satisfiable. Clearly, if all corresponding atoms are solvable the key atom is, too. Hence, it remains to prove that the $\tau$-solvability ($\tau'$-solvability) of the key atom $\alpha_\tau$ ($\alpha$) implies the $\tau$-ESSP and $\tau$-SSP for $U_\tau$ ($\tau'$-SSP for $W$). Due to space limitation, the corresponding proofs are moved to the appendix. \end{proof}
\section{Conclusions}
In this paper, we show that deciding if a TS $A$ is $\tau$-feasible or has the $\tau$-ESSP, $\tau\in \{\tau^b_0,\dots,\tau^b_3\}$, is NP-complete. This makes the corresponding synthesis problems NP-hard. Moreover, we show that deciding whether $A$ has the $\tau'$-SSP, $\tau'\in \{\tau^b_0,\tau^b_1\}$, is also NP-complete. It remains for future work to investigate whether there are superclasses of (pure) $b$-bounded P/T-nets or their extensions for which synthesis becomes tractable. Moreover, one may search for parameters of the net types or the input TSs for which the decision problems are \emph{fixed parameter tractable}.
\begin{appendix}
\section{Example for $A(U_{\tau^b_2})$ and $A(U_{\tau^b_3})$}\label{sec:example}
\newcommand{\freezer}[4]{
\ifstrequal{#4}{0}{ \begin{scope}[nodes={set=import nodes}, xshift= #2cm, yshift=#3 cm] \coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate (c02) at (2,0) ; \coordinate (c10) at (0,-1) ; \coordinate (c11) at (2,-1) ;
\foreach \i in {c00} {\draw(\i) circle (0.3);} \foreach \i in {c01, c10} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c02, c11} {\draw[dotted, thick] (\i) circle (0.3);}
\node (f00) at (0,0) {\nscale{$f_{#1,0,0}$}}; \node (f01) at (1,0) {\nscale{$f_{#1,0,1}$}}; \node (f02) at (2,0) {\nscale{$f_{#1,0,2}$}};
\node (f10) at (0,-1) {\nscale{$f_{#1,1,0}$}}; \node (f11) at (2,-1) {\nscale{$f_{#1,1,1}$}};
\graph{ (f00) ->[thick,"\escale{$k$}"] (f01)->[thick,"\escale{$k$}"] (f02); (f10) ->[thick,"\escale{$k$}"] (f11); (f00) ->[thick,swap, "\escale{$v_#1$}"] (f10); (f11) ->[thick,swap, "\escale{$X_#1$}"] (f02); }; \end{scope} }{ \begin{scope}[nodes={set=import nodes}, xshift= #2cm, yshift=#3 cm]
\coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate (c02) at (2,0) ; \coordinate (c10) at (0,-1) ; \coordinate (c11) at (2,-1) ;
\foreach \i in {c00,c10} {\draw(\i) circle (0.3);} \foreach \i in {c01, c11} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c02} {\draw[dotted, thick] (\i) circle (0.3);}
\node (f00) at (0,0) {\nscale{$f_{#1,0,0}$}}; \node (f01) at (1,0) {\nscale{$f_{#1,0,1}$}}; \node (f02) at (2,0) {\nscale{$f_{#1,0,2}$}};
\node (f10) at (0,-1) {\nscale{$f_{#1,1,0}$}}; \node (f11) at (2,-1) {\nscale{$f_{#1,1,1}$}};
\graph{ (f00) ->[thick,"\escale{$k$}"] (f01)->[thick,"\escale{$k$}"] (f02); (f10) ->[thick,"\escale{$k$}"] (f11); (f00) ->[thick,swap, "\escale{$v_#1$}"] (f10); (f11) ->[thick,swap, "\escale{$X_#1$}"] (f02); }; \end{scope} } } \newcommand{\generator}[4]{
\ifstrequal{#4}{0}{ \begin{scope}[nodes={set=import nodes}, xshift = #2cm, yshift= #3cm, ]
\coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate(c02) at (2,0) ; \coordinate(c03) at (3,0) ; \foreach \i in {c00} {\draw(\i) circle (0.3);} \foreach \i in {c01} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c02,c03} {\draw[dotted, thick] (\i) circle (0.3);}
\node (g00) at (0,0) {\nscale{$g_{#1,0}$}}; \node (g01) at (1,0) {\nscale{$g_{#1,1}$}}; \node (g02) at (2,0) {\nscale{$g_{#1,2}$}}; \node (g03) at (3,0) {\nscale{$g_{#1,3}$}};
\graph{ (g00) ->[thick,"\escale{$k$}"] (g01)->[thick,"\escale{$k$}"] (g02)->[thick,"\escale{$X_#1$}"] (g03); }; \end{scope} }{ \begin{scope}[nodes={set=import nodes}, xshift = #2cm, yshift= #3cm, ]
\coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate(c02) at (2,0) ; \coordinate(c03) at (3,0) ; \foreach \i in {c00,c03} {\draw(\i) circle (0.3);} \foreach \i in {c01} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c02} {\draw[dotted, thick] (\i) circle (0.3);}
\node (g00) at (0,0) {\nscale{$g_{#1,0}$}}; \node (g01) at (1,0) {\nscale{$g_{#1,1}$}}; \node (g02) at (2,0) {\nscale{$g_{#1,2}$}}; \node (g03) at (3,0) {\nscale{$g_{#1,3}$}};
\graph{ (g00) ->[thick,"\escale{$k$}"] (g01)->[thick,"\escale{$k$}"] (g02)->[thick,"\escale{$X_#1$}"] (g03); }; \end{scope} }
} \newcommand{\translator}[7]{
\ifstrequal{#7}{1}{ \begin{scope}[nodes={set=import nodes}, xshift=#5 cm, yshift=#6 cm]
\coordinate (c0) at (0,0); \coordinate (c1) at (1,0) ; \coordinate (c2) at (2,0) ; \coordinate (c3) at (3,0) ; \coordinate (c4) at (4,0) ; \coordinate (c5) at (5,0) ; \coordinate(c6) at (6,0) ; \coordinate (c7) at (7,0) ; \coordinate (c8) at (8,0) ;
\foreach \i in {c0,c6, c3,c4,c5} {\draw(\i) circle (0.3);} \foreach \i in {c1,c7} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c2,c8} {\draw[dotted, thick] (\i) circle (0.3);}
\node (t0) at (0,0) {\nscale{$t_{#1,0}$}}; \node (t1) at (1,0) {\nscale{$t_{#1,1}$}}; \node (t2) at (2,0) {\nscale{$t_{#1,2}$}}; \node (t3) at (3,0) {\nscale{$t_{#1,3}$}}; \node (t4) at (4,0) {\nscale{$t_{#1,4}$}}; \node (t5) at (5,0) {\nscale{$t_{#1,5}$}}; \node (t6) at (6,0) {\nscale{$t_{#1,6}$}}; \node (t7) at (7,0) {\nscale{$t_{#1,7}$}}; \node (t8) at (8,0) {\nscale{$t_{#1,8}$}};
\graph{ (t0) ->[thick,"\escale{$k$}"] (t1)->[thick,"\escale{$k$}"] (t2)->[thick,"\escale{$X_#2$}"] (t3)->[thick,"\escale{$X_#3$}"] (t4)->[thick,"\escale{$X_#4$}"] (t5)->[thick,"\escale{$z$}"] (t6)->[thick,"\escale{$k$}"] (t7)->[thick,"\escale{$k$}"] (t8); }; \end{scope} }{ \ifstrequal{#7}{2}{ \begin{scope}[nodes={set=import nodes}, xshift=#5 cm, yshift=#6 cm]
\coordinate (c0) at (0,0); \coordinate (c1) at (1,0) ; \coordinate (c2) at (2,0) ; \coordinate (c3) at (3,0) ; \coordinate (c4) at (4,0) ; \coordinate (c5) at (5,0) ; \coordinate(c6) at (6,0) ; \coordinate (c7) at (7,0) ; \coordinate (c8) at (8,0) ;
\foreach \i in {c0,c6, c4,c5} {\draw(\i) circle (0.3);} \foreach \i in {c1,c7} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c2,c8, c3} {\draw[dotted, thick] (\i) circle (0.3);}
\node (t0) at (0,0) {\nscale{$t_{#1,0}$}}; \node (t1) at (1,0) {\nscale{$t_{#1,1}$}}; \node (t2) at (2,0) {\nscale{$t_{#1,2}$}}; \node (t3) at (3,0) {\nscale{$t_{#1,3}$}}; \node (t4) at (4,0) {\nscale{$t_{#1,4}$}}; \node (t5) at (5,0) {\nscale{$t_{#1,5}$}}; \node (t6) at (6,0) {\nscale{$t_{#1,6}$}}; \node (t7) at (7,0) {\nscale{$t_{#1,7}$}}; \node (t8) at (8,0) {\nscale{$t_{#1,8}$}};
\graph{ (t0) ->[thick,"\escale{$k$}"] (t1)->[thick,"\escale{$k$}"] (t2)->[thick,"\escale{$X_#2$}"] (t3)->[thick,"\escale{$X_#3$}"] (t4)->[thick,"\escale{$X_#4$}"] (t5)->[thick,"\escale{$z$}"] (t6)->[thick,"\escale{$k$}"] (t7)->[thick,"\escale{$k$}"] (t8); }; \end{scope}
}{ \begin{scope}[nodes={set=import nodes}, xshift=#5 cm, yshift=#6 cm]
\coordinate (c0) at (0,0); \coordinate (c1) at (1,0) ; \coordinate (c2) at (2,0) ; \coordinate (c3) at (3,0) ; \coordinate (c4) at (4,0) ; \coordinate (c5) at (5,0) ; \coordinate(c6) at (6,0) ; \coordinate (c7) at (7,0) ; \coordinate (c8) at (8,0) ;
\foreach \i in {c0,c6,c5} {\draw(\i) circle (0.3);} \foreach \i in {c1,c7} {\draw[dashed] (\i) circle (0.3);} \foreach \i in {c2,c8, c3, c4} {\draw[dotted, thick] (\i) circle (0.3);}
\node (t0) at (0,0) {\nscale{$t_{#1,0}$}}; \node (t1) at (1,0) {\nscale{$t_{#1,1}$}}; \node (t2) at (2,0) {\nscale{$t_{#1,2}$}}; \node (t3) at (3,0) {\nscale{$t_{#1,3}$}}; \node (t4) at (4,0) {\nscale{$t_{#1,4}$}}; \node (t5) at (5,0) {\nscale{$t_{#1,5}$}}; \node (t6) at (6,0) {\nscale{$t_{#1,6}$}}; \node (t7) at (7,0) {\nscale{$t_{#1,7}$}}; \node (t8) at (8,0) {\nscale{$t_{#1,8}$}};
\graph{ (t0) ->[thick,"\escale{$k$}"] (t1)->[thick,"\escale{$k$}"] (t2)->[thick,"\escale{$X_#2$}"] (t3)->[thick,"\escale{$X_#3$}"] (t4)->[thick,"\escale{$X_#4$}"] (t5)->[thick,"\escale{$z$}"] (t6)->[thick,"\escale{$k$}"] (t7)->[thick,"\escale{$k$}"] (t8); }; \end{scope} } } } \begin{figure}\label{fig:example}
\end{figure}
\section{Proofs for Section~\ref{sec:hardness_results}}
\subsection{Proofs of Lemma~\ref{lem:union_validity} and Lemma~\ref{lem:observations}}
\begin{proof}[Proof of Lemma~\ref{lem:union_validity}] \emph{If}: If $(sup, sig)$ is a $\tau$-region of $A(U)$ that, for $e\in E_U, s,s'\in S_U$, solves $(e,s)$, respectively $(s,s')$, then projecting $(sup,sig)$ to the component TSs of $U$ yields a $\tau$-region of $U$ that solves the respective separation atom in $U$. Hence, the $\tau$-(E)SSP of $A(U)$ implies the $\tau$-(E)SSP of $U$.
\emph{Only-if}: Let $0_\tau=(0,0)$ if $(0,0)\in E_\tau$ and, otherwise, $0_\tau=0$. A $\tau$-region $(sup, sig)$ of $U$ that solves $(s,s')$, respectively $(e, s)$, can be extended to a corresponding $\tau$-region $(sup', sig')$ of $A(U)$ by setting: \begin{align*} sup'(s'') &= \begin{cases} sup(s''), & \text{if } s'' \in S_U,\\ sup(s), & \text{if } s'' \in Q \end{cases}\\ sig'(e') &= \begin{cases} sig(e'), & \text{if } e' \in E_U,\\ 0_\tau, & \text{if } e' \in W ,\\ (sup(s) - sup(s_{0,A_i}),0) & \text{if } e' = y_i \text{ and } sup(s_{0,A_i}) < sup(s), 0 \leq i \leq n \\ ( 0, sup(s_{0,A_i} ) - sup(s)) & \text{if } e' = y_i \text{ and } sup(s_{0,A_i}) \geq sup(s), 0 \leq i \leq n \end{cases} \end{align*}
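As a sketch of why this extension is well defined at the connecting edges, assume, as in the construction of $A(U)$, that $y_i$ labels the edge from $q_i$ to the initial state $s_{0,A_i}$ of $A_i$. In the case $sup(s_{0,A_i}) < sup(s)$ we then compute
\[
sup'(s_{0,A_i}) \;=\; sup'(q_i)-{sig'}^-(y_i)+{sig'}^+(y_i) \;=\; sup(s)-(sup(s)-sup(s_{0,A_i}))+0 \;=\; sup(s_{0,A_i}),
\]
and the case $sup(s_{0,A_i}) \geq sup(s)$ is symmetric, using the producing signature $(0, sup(s_{0,A_i})-sup(s))$.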
The $\tau$-region $(sup', sig')$ defined in that way inherits the property to solve $(e,s)$, respectively $(s,s')$, from $(sup, sig)$ and solves $(e,q_i)$ for $i\in \{0,\dots, n\}$ as, by definition, $sup'(q_i)=sup(s)$ for all $i\in \{0,\dots, n\}$. Consequently, as for every event $e\in E_U$ there is at least one state $s\in S_U$ such that $(e,s)$ is a valid ESSP atom of $U$, the atom $(e,q_i)$ is solvable for every $e\in E_U$ and $i\in \{0,\dots, n\}$. As a result, to prove the $\tau$-(E)SSP for $A(U)$ it remains to argue that the SSP atoms $(q_0,\cdot),\dots, (q_n,\cdot)$ and the ESSP atoms $(w_1,\cdot),\dots, (w_n,\cdot), (y_0,\cdot),\dots, (y_n,\cdot)$ are solvable in $A(U)$. If $i\in \{0,\dots, n\}$ and $s\in S_{A(U)}, e\in E_{A(U)}$ then the following region $(sup, sig)$ simultaneously solves every valid atom $(y_i, \cdot)$, $(q_i,\cdot) $ and, if it exists, $(w_{i+1}, \cdot)$ in $A(U)$: \[ sup(s) =\begin{cases}
0, & \text{if } s=q_i\\
b, & \text{otherwise }
\end{cases}\\ \text{ } sig(e)=\begin{cases}
(0,b) & \text{if } e = y_i \text{ or } ( i < n \text{ and } e=w_{i+1})\\
(b,0) & \text{if } 1 < i \text{ and } e=w_{i-1}\\
0_\tau & \text{ otherwise} \\
\end{cases} \] \end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:observations}]
(\ref{lem:sig_summation_along_paths}): The first claim follows directly from the definitions of $\tau$ and $\tau$-regions.
(\ref{lem:absolute_value}): The \textit{If}-direction is trivial. For the \textit{Only-if}-direction we show that the assumption $(m,n)\not\in \{(1,0),(0,1)\}$ yields a contradiction.
By (\ref{lem:sig_summation_along_paths}), we have that $sup(s_b)=sup(s_0) + b\cdot(n-m)$. If $\vert n-m\vert > 1$, then we get a contradiction to $sup(s_0)\geq 0$ or to $sup(s_b)\leq b$. Hence, if $n\not=m$ then $\vert n-m\vert =1$ implying $m > n$ or $m < n$. For a start, we show that $ m > n$ implies $m=1, n=0$, that is, by $n \leq m-1$ and $sup(s_0)\leq b$ we obtain the estimation \[ sup(s_{b-1}) = sup(s_0) +(b-1)(n-m)\ \leq \ b + (b-1)(m-1-m) = 1 \] By $n < m \leq sup(s_{b-1})\leq 1$ we have $(m,n)=(1,0)$. Similarly, we obtain that $(m,n)=(0,1)$ if $m < n$. Hence, if $sig(e)=(m,n)$ and $n\not=m$ then $sig(e) \in \{(1,0),(0,1)\}$. The second statement follows directly from (\ref{lem:sig_summation_along_paths}).
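For completeness, here is a sketch of the omitted symmetric estimation for the case $m < n$ (recall $\vert n-m\vert =1$, hence $n-m=1$): by $sup(s_0)\geq 0$ we obtain
\[
sup(s_{b-1}) \;=\; sup(s_0)+(b-1)(n-m)\;\geq\; b-1,
\]
and $sup(s_b)=sup(s_{b-1})+(n-m)\leq b$ forces $sup(s_{b-1})=b-1$ and thus $sup(s_0)=0$. By $m\leq sup(s_0)=0$ we conclude $(m,n)=(0,1)$.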
\end{proof}
\subsection{Completion of the Proof of Lemma~\ref{lem:liaison}}
To complete the proof of Lemma~\ref{lem:liaison}, we stepwise prove the following statements in the given order:
\begin{enumerate}\label{en:todo_for_lemma_hardness_results}
\item\label{en:essp_atom_1} If $\tau=\tau^b_1$ then the $\tau$-solvability of $(k , h_{1, 2b+4})$ in $U_\tau$ implies its $\tau$-ESSP.
\item\label{en:essp_atom_0} If $\tau=\tau^b_0$ then the $\tau$-solvability of $(k , h_{0, 4b+1})$ in $U_\tau$ implies its $\tau$-ESSP.
\item\label{en:essp_implies_ssp} If $\tau\in \{\tau^b_0,\tau^b_1\}$ then the $\tau$-ESSP of $U_\tau$ implies its $\tau$-SSP.
\item\label{en:essp_atom_2} If $\tau\in \{\tau^b_2, \tau^b_3 \}$ then the $\tau$-solvability of $(k , h_{3,1,b-1})$ in $U_\tau$ implies its $\tau$-ESSP.
\item\label{en:essp_implies_ssp_Z} If $\tau\in \{\tau^b_2,\tau^b_3\}$ then the $\tau$-ESSP of $U_\tau$ implies its $\tau$-SSP.
\item\label{en:ssp_atom} If $\tau\in \{\tau^b_0,\tau^b_1\}$ then the $\tau$-solvability of $(h_{2,0} , h_{2,b})$ in $W$ implies its $\tau$-SSP.
\end{enumerate}
\subsubsection{Proof of Statement~\ref{en:essp_atom_1} and Statement~\ref{en:essp_atom_0}}
We prove for $ \tau \in \{ \tau^b_0, \tau^b_1\}$ that the $\tau$-solvability of the key atom $\alpha_\tau$ in $U_\tau$ implies the $\tau$-solvability of all ESSP atoms by presenting corresponding regions. More precisely, we provide for every ESSP atom $(e,s)$ of $U_\tau$ a $\tau$-region $(sup, sig)$ solving it.
For the sake of simplicity, these regions are often presented as rows of a table with the following shape and meaning; see Table~\ref{tab:first_table} for a first example.
\begin{enumerate}
\item\label{e} \emph{e}: Here, $e$ means the event of the ESSP atoms $(e, \cdot)$ which are solved by the region of this row. The corresponding states are listed in the \emph{states}-cell. It is always the case that a $\tau$-region $(sup, sig)$ that solves such an atom $(e, \cdot)$ satisfies $sig(e) = (0,n)$ for some $n\in \mathbb{N}^+$.
\item\label{states} \emph{states}: All listed states $s$ such that $(e, s)$ is $\tau$-solved by the region of the corresponding row.
\item\label{initials} \emph{initials}: By Lemma~\ref{lem:observations}, a $\tau$-region of $U_\tau$ is fully defined by its signature and the support of the initial states of the constituent TSs. Hence, this cell explicitly presents the supports of the initial states of the TSs of $U_\tau$, which are actually affected by an event having a signature different from $(0,0)$. The initial states of all other TSs, that is, all those constituents which have no event in their event set with a signature different from $(0,0)$, are assumed to be mapped to $b$. Certainly, this condemns the states of all unaffected TSs to have the same support $b$. As mentioned above, every $(e,\cdot )$ solving $\tau$-region $R=(sup, sig)$ satisfies $sig(e)=(0,n), n\in \mathbb{N}^+$. Thus, for every state $s$ of an unaffected TS the atom $(e, s)$ is automatically solved by $R$, as $sup(s)=b$ and $\neg b\edge{(0,n)}$ for $n\geq 1$. For the sake of readability, we never mention these states explicitly in the \emph{states}-cell.
\item \emph{sig}: The signatures of the events of $U_\tau$ with a value different from $(0,0)$. The signature of the other events is $(0,0)$.
\item \emph{constituents}: For the sake of transparency, the constituents which are affected by events with a signature different from $(0,0)$. Note that, by the discussion of (\ref{e}) and (\ref{initials}), for every state $s$ of a constituent which is not mentioned here it is true that the ESSP atom $(e,s)$ is also solved. \end{enumerate}
Moreover, especially in the presented tables, we apply several shortcuts to make the presentations more lucid: \begin{enumerate}
\item If $s\in S_{U_\tau}$ is an initial state of an affected TS with support $sup(s)=s_\tau \in S_\tau$ then we write $s=s_\tau$. \item We differentiate between $i$-indexed and $j$-indexed events and adopt the following convention: If '$i$' occurs explicitly in the index of a presented event, respectively state, for example $k_{6i+1}$, respectively $t_{i,0,0}$, then it is assumed that $i\in \{0,\dots, m-1\}$ is arbitrary but fixed. In contrast, if not stated explicitly otherwise, if '$j$' occurs explicitly in the index of a presented event or state then $j$ represents all possible values for this type of state or event. For example, we write $d_{j,1,0}$ to abridge the enumeration $d_{0,1,0},\dots, d_{6m-1,1,0}$. \end{enumerate}
\begin{proof}[Statement~\ref{en:essp_atom_1} ] Let $\tau=\tau^b_1$. The following table presents for atoms $(e, \cdot)$ of $U_\tau$ solving $\tau$-regions, where $e\in \{z_0,z_1,o_0,o_1, k,k_{6i}, k_{6i+2}, k_{6i+4}\}$.
\begin{longtable}{p{1cm} p{3.1cm}p{4.4cm} p{2.1cm} p{1.5cm}} \caption{For $\tau=\tau^b_1$: Solving $\tau$-regions for atoms $(e, \cdot)$ of $U_\tau$ where $e\in \{z_0,z_1,o_0,o_1, k,k_{6i}, k_{6i+2}, k_{6i+4}\}$.} \label{tab:first_table} \endfirsthead \endhead \emph{e}& \emph{initials}& \emph{states} & \emph{sig} & \emph{constituents}\\ \hline $z_0$ & $h_{1,0}=b$ & \raggedright{$S_{H_1}\setminus \{h_{1,2b+2},h_{1,3b+5}\} $} & \raggedright{$z_0=(0,b)$\\ $k=(1,0)$} & $H_1$\\ $z_0$ & $h_{1,0}=0$ & \raggedright{$h_{1,2b+2},h_{1,3b+5}$} & \raggedright{$z_0=(0,b)$,\\ $z_1=(b,0)$} & $H_1$\\ \hline
$z_1$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=0$} & \raggedright{$S_{H_1}\setminus \{h_{1,b},h_{1,b+1},h_{1,3b+5}\} $, $\{d_{j,1,1}, d_{j,1,2}, d_{j,1,3}\}$} & \raggedright{$z_1=(0,b)$, $o_0=(0,b)$, $k=(1,0)$} & $H_1, D_{j,1}$\\
$z_1$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=b$} & \raggedright{$\{h_{1,b},h_{1,b+1}, h_{1,3b+5}, d_{j,1,0}\}$} & \raggedright{$o_0=(b,0)$, $z_1=(0,b)$} & $H_1, D_{j,1}$\\ \hline
$o_0$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=0$} & \raggedright{$S_{H_1}\setminus \{h_{1,2b+4},\dots, h_{1,3b+5}\}$, $\{d_{j,1,1}, d_{j,1,2}, d_{j,1,3}\}$} & \raggedright{$o_0=(0,b)$, $z_0=(b,0)$} & $H_1, D_{j,1}$\\
$o_0$ & \raggedright{$h_{1,0}=0$, $d_{j,1,0}=0$} & \raggedright{$\{h_{1,2b+4},\dots, h_{1,3b+5}\}$} & \raggedright{$o_0=(0,b)$} & $H_1, D_{j,1}$\\ \hline
$o_1$ & \raggedright{$h_{1,0}=t_{j,0,0}=b$, $t_{j,1,0}=t_{j,2,0}=b$, $0\leq j\leq 3m-1: d_{2j,1,0}=b$, $d_{2j+1,1,0}=0$ } & \raggedright{$S_{H_1}\setminus \{h_{1,b+1},\dots, h_{1,2b+2}\}$, $0\leq j\leq 3m-1: S_{D_{2j,1}}$} & \raggedright{$o_1=(0,b)$, $z_0=(b,0)$, $z_1=(0,b)$, $k_{2j}=(b,0)$} & $H_1, D_{j,1}$, $T_\tau$\\
$o_1$ & \raggedright{$h_{1,0}=t_{j,0,0}=b$, $t_{j,1,0}=t_{j,2,0}=b$, $0\leq j\leq 3m-1: d_{2j,1,0}=0$, $d_{2j+1,1,0}=b$ } & \raggedright{$ \{h_{1,b+1},\dots, h_{1,2b+2}\}$, $0\leq j\leq 3m-1: S_{D_{2j+1,1}}$} & \raggedright{$o_1=(0,b)$, $z_1=(b,0)$, $k_{2j+1}=(b,0)$} & $H_1, D_{j,1}$, $T_\tau$\\
$o_1$ & \raggedright{$h_{1,0}=0$, $d_{j,1,0}=0$} & \raggedright{remaining states} & \raggedright{$o_1=(0,b)$} & $H_1, D_{j,1}, $\\ \hline
$k$ & key region & $S_{H_1}$& see Lemma~\ref{lem:key_unions_1}, Lemma~\ref{lem:translator_1} & \\ $k$ & $h_{1,0}=0$ & \raggedright{$S_{U_\tau} \setminus S_{H_1}$} & \raggedright{$k=(0,1)$, $z_0=(b,0)$} & $H_1$\\ \hline
$k_{6i}$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=b$, $t_{i,0,0}=0$} & $d_{6i,1,0}$ & \raggedright{$k_{6i}=(0,b)$, $o_0=(b,0)$} & $H_1, T_{i,0}, D_{j,1}$ \\ $k_{6i}$ & \raggedright{$t_{i,0,0}=d_{6i,1,0}=0$} & remaining states & $k_{6i}=(0,b)$ & $T_{i,0}, D_{6i,1}$\\ \hline
$k_{6i+2}$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=b$, $t_{i,1,0}=0$} & $d_{6i+2,1,0}$ & \raggedright{$k_{6i+2}=(0,b)$, $o_0=(b,0)$} & $H_1, T_{i,1}, D_{j,1}$ \\ $k_{6i+2}$ & \raggedright{$t_{i,1,0}=d_{6i+2,1,0}=0$} & remaining states & $k_{6i+2}=(0,b)$ & $T_{i,1}, D_{6i+2,1}$\\ \hline
$k_{6i+4}$ & \raggedright{$h_{1,0}=b$, $d_{j,1,0}=b$, $t_{i,2,0}=0$} & $d_{6i+4,1,0}$ & \raggedright{$k_{6i+4}=(0,b)$, $o_0=(b,0)$} & $H_1, T_{i,2}, D_{j,1}$ \\ $k_{6i+4}$ & \raggedright{$t_{i,2,0}=d_{6i+4,1,0}=0$} & remaining states & $k_{6i+4}=(0,b)$ & $T_{i,2}, D_{6i+4,1}$\\ \end{longtable}
It remains to prove the solvability of the valid atoms $(e,\cdot)$ of $U_\tau$ where $e\in E_{T_\tau}$. To do so, we need the following notations: If $i\in \{0,\dots, m-1\}$ and $\alpha \in \{0,1,2\}$ are arbitrary but fixed then by $i',i'',\beta, \gamma$ we mean the indices $i',i''\in \{0,\dots, m-1\}\setminus \{i\}$ and $\beta, \gamma \in \{0,1,2\}$ such that $X_{i,\alpha}=X_{i',\beta}=X_{i'',\gamma}$, that is, $i'$ and $\beta$, respectively $i''$ and $\gamma$, determine the second, respectively third, occurrence of $X_{i,\alpha}$ in $U_\tau$. The following table shows for an arbitrary but fixed $i\in \{0,\dots, m-1\}$ and all possible values for $\alpha\in \{0,1,2\}$ the solvability of $(X_{i,\alpha},s)$ for the states $s$ of $U_\tau$ which are not in $T_{i',\beta}$ or $T_{i'',\gamma}$ and the solvability of $(x_i,s)$ and $(p_i,s)$ for all states $s$ of $U_\tau$. Please note that if $\beta\in \{0,2\}$ then $X_{i',\beta}\in E_{T_{i',0}}$ and if $\beta=1$ then $X_{i',\beta}\in E_{T_{i',1}}$, and similarly for $\gamma$ and $X_{i'',\gamma}$. We abbreviate this case analysis by identifying $T_{i',\beta\text{mod}2}$, respectively $T_{i'',\gamma\text{mod}2}$, as the translator in which $X_{i',\beta}$, respectively $X_{i'',\gamma}$, occurs. To abridge, we define $S^{\beta}_\gamma=S_{T_{i', \beta\text{mod}2 }}\cup S_{T_{i'', \gamma\text{mod}2 }}$. By the arbitrariness of $i$, this approach proves the solvability of every valid ESSP atom $(e, \cdot)$ in $U_\tau$, where $e$ is an event of $T_\tau$.
\begin{longtable}{p{1cm} p{3.5cm}p{3cm} p{2.5cm} p{2cm}} \caption{For $\tau={\tau^b_1}$: Solving $\tau$-regions for atoms $(e, \cdot )$ of $U_\tau$ where $e\in E_{T_\tau}$.} \label{tab:second_table} \endfirsthead \endhead \emph{e}& \emph{initials}& \emph{states} & \emph{sig} & \emph{constituents}\\ \hline
$x_i$ & \raggedright{$t_{i,0,0}=0$, $t_{i,2,0}=b$, $d_{6i+4,0,0}=b$, } & $S_{T_{i,2}}$ & \raggedright{$x_i=(0,b)$, $k_{6i+4}=(b,0)$} & \raggedright{$T_{i,0}, T_{i,2}$, $D_{6i+4,1}$}\arraybackslash \\
$x_i$ & \raggedright{$t_{i,0,0}=b$, $t_{i,2,0}=0$, $\alpha=0: t_{i',\beta \text{mod}2,0}=b$, $t_{i'',\gamma\text{mod}2,0}=b$} & $S_{T_{i,0}}$ & \raggedright{$x_i=(0,b)$, $X_{i,0} =(1,0)$} & \raggedright{$T_{i,0}, T_{i,2}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$}\arraybackslash \\
$x_i$ & \raggedright{$t_{i,0,0}=t_{i,2,0}=0$} & remaining states & $x_i=(0,b)$ & $T_{i,0}, T_{i,2}$\\ \hline
$p_i$ & \raggedright{$t_{i,0,0}=t_{i,1,0}=b$, $t_{i,2,0}=b$,\newline $\alpha=1$: $t_{i',\beta\text{mod}2,0}=b$, $t_{i'',\gamma\text{mod}2,0}=b$} & $S_{T_{i,1}},S_{T_{i,2}} $ & \raggedright{$p_i=(0,b)$, $x_i=(b,0)$, $X_{i,1}=(1,0)$ } & \raggedright{$T_{i,0}, T_{i,1}, T_{i,2}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$}\arraybackslash \\
$p_i$ & \raggedright{$t_{i,1,0}=t_{i,2,0}=0$} & remaining states & $ p_i=(0,b)$ & $T_{i,1}, T_{i,2}$\\ \hline
$k_{6i+1}$ & \raggedright{$h_{1,0}=d_{j,1,0}=b$, $t_{i,0,0}=0$} & \raggedright{$d_{6i+1,1,0}$, $S_{T_\tau}\setminus S_{T_{i,0}}$} & \raggedright{$k_{6i+1}=(0,b)$, $o_0=(b,0)$} & $H_1, T_{i,0}, D_{j,1}$ \\
$k_{6i+1}$ & \raggedright{$t_{i,0,0}=b$, $d_{6i+1,1,0}=0$, $\alpha=2$: $t_{i',\beta\text{mod}2,0}=b$, $t_{i'',\gamma\text{mod}2,0}=b$} & remaining states & \raggedright{$k_{6i+1}=(0,b)$, $X_{i,2}=(1,0)$} & \raggedright{$T_{i,0}, D_{6i+1,1}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$}\arraybackslash \\ \hline
$k_{6i+3}$ & \raggedright{$h_{1,0}=d_{j,1,0}=b$, $t_{i,1,0}=t_{i,2,0}=b$} & \raggedright{$d_{6i+3,1,0}$, $S_{T_\tau}\setminus S_{T_{i,2}}$} & \raggedright{$k_{6i+3}=(0,b),\newline o_0=p_i=(b,0)$} & \raggedright{$H_1, T_{i,1}, T_{i,2}$, $D_{j,1}$}\arraybackslash \\
$k_{6i+3}$ & \raggedright{$d_{6i+3,1,0}=t_{i,1,0}=0$} & remaining states & $k_{6i+3}=(0,b)$ & $D_{6i+3,1}, T_{i,1}$ \\ \hline
$k_{6i+5}$ & \raggedright{$h_{1,0}=d_{j,1,0}=b$, $t_{i,1,0}=t_{i,2,0}=b$} & \raggedright{$d_{6i+5,1,0}$, $S_{T_\tau}\setminus S_{T_{i,1}}$} & \raggedright{$k_{6i+5}=(0,b),\newline o_0=p_i=(b,0)$} & \raggedright{$H_1, T_{i,1}, T_{i,2}$, $D_{j,1}$}\arraybackslash \\
$k_{6i+5}$ & \raggedright{$d_{6i+5,1,0}=t_{i,2,0}=0$} & remaining states & $k_{6i+5}=(0,b)$ & $D_{6i+5,1}, T_{i,2}$ \\ \hline
$X_{i,0}$ & \raggedright{$t_{i,0,0}=d_{6i,1,0}=b$, $\alpha=0$: $t_{i',\beta\text{mod}2, 0}=0$, $t_{i'',\gamma\text{mod}2,0}=0$} & $S_{T_{i,0}}$ & \raggedright{$X_{i,0}=(0,1)$, $k_{6i}=(b,0)$} & \raggedright{$D_{6i,1}, T_{i',\beta\text{mod}2} $, $T_{i,0}, T_{i'',\gamma\text{mod}2}$}\arraybackslash \\
$X_{i,0}$ & \raggedright{$ t_{i,0,0}=0 $, $\alpha=0$: $ t_{i',\beta\text{mod}2, 0} = 0 $, $ t_{i'',\gamma\text{mod}2,0} = 0 $} & \raggedright{$S_{U_\tau} \setminus (S_{T_{i,0}} \cup S^{\beta}_\gamma)$} & $X_{i,0}=(0,1)$ & \raggedright{$T_{i,0}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$}\arraybackslash \\ \hline
$X_{i,1}$ & \raggedright{$t_{i,1,0}=d_{6i+2,1,0}=b$, $\alpha=1$: $t_{i',\beta\text{mod}2, 0}=0$, $t_{i'',\gamma\text{mod}2,0}=0$} & $S_{T_{i,1}}$ & \raggedright{$X_{i,1}=(0,1)$, $k_{6i+2}=(b,0)$} & \raggedright{$D_{6i+2,1}, T_{i,1}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$} \arraybackslash \\
$X_{i,1}$ & \raggedright{$t_{i,1,0}=0$, $\alpha=1$: $t_{i',\beta\text{mod}2, 0}=0$, $t_{i'',\gamma\text{mod}2,0}=0$} & \raggedright{$S_{U_\tau} \setminus (S_{T_{i,1}}\cup S^{\beta}_\gamma)$} & $X_{i,1}=(0,1)$ & \raggedright{$T_{i,1}$, $T_{i',\beta\text{mod}2}$, $T_{i'',\gamma\text{mod}2}$} \arraybackslash \\ \hline
$X_{i,2}$ & \raggedright{$t_{i,0,0}=t_{i,2,0}=b$, $t_{i',\beta\text{mod}2 , 0}=0$, $t_{i'',\gamma\text{mod}2 ,0}=0$} & $S_{T_{i,0}}$ & \raggedright{$X_{i,2}=(0,1)$, $x_i=(b,0)$} & \raggedright{$T_{i,0}, T_{i,2}$, $T_{i',\beta\text{mod}2 }$, $T_{i'',\gamma\text{mod}2 }$} \arraybackslash \\
$X_{i,2}$ & \raggedright{ $t_{i,0,0}=0$, $t_{i',\beta\text{mod}2, 0}=0$, $t_{i'',\gamma\text{mod}2,0}=0$ } & \raggedright{$S_{U_\tau} \setminus (S_{T_{i,0}} \cup S^{\beta}_\gamma)$} & \raggedright{$X_{i,2}=(0,1)$} & \raggedright{$T_{i,2}$, $T_{i',\beta\text{mod}2 }$, $T_{i'',\gamma\text{mod}2 }$}\arraybackslash \\
\end{longtable} \end{proof}
\begin{proof}[Statement~\ref{en:essp_atom_0}] Let $\tau=\tau^b_0$. We show that the solvability of $(k, h_{0,4b+1})$ in $U_\tau$ implies the $\tau$-ESSP for $U_\tau$. The type $\tau$ has the following obvious property: If $U=U(A_1,\dots, A_n)$ is a union, $e\in E_{U}$ and if $S=\{s \in S_{A_i} \mid 1\leq i\leq n : e\not\in E_{A_i}\}$ is the set of states of all TSs implemented by $U$ which do not have $e$ in their event set, then we can solve $(e,s)$ for all states $s\in S$ by a $\tau$-region $(sup, sig)$ which is defined by $sup(s)=b$ for all $s\in S_U\setminus S$, $sup(s)=0$ for all $s\in S$, $sig(e)=(b,b)$ and $sig(e')=(0,0)$ for all $e'\in E_{U}\setminus \{e\}$. Hence, in the following, for all $e\in E_{U_\tau}$, we restrict ourselves to the presentation of regions of $U_\tau$ that altogether solve the ESSP atoms $(e, s)$ for states $s$ of TSs that actually implement $e$. By the former observation, this proves every atom $(e,\cdot)$ of $U_\tau$ to be solvable.
The following table presents corresponding regions for many ESSP atoms of $U_\tau$. However, some atoms need regions which are better discussed individually; these atoms are addressed first.
$(z)$: The solvability of $(z, s)$ for $s\in S_{H_0}\setminus \{h_{0,2b},h_{0,4b+1}, h_{0,6b+1}\}$ is already proven by the region that solves $(k, h_{0,4b+1})$ presented in the proofs of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_existence}, Lemma~\ref{lem:translator_1}.\ref{lem:translator_1_existence}. The first row of Table~\ref{tab:third_table} proves $(z, s)$ to be solvable for $s\in \{h_{0,2b},h_{0,4b+1}, h_{0,6b+1}\}$. Hence, every $(z, \cdot)$ is solvable.
$(k, o_1)$: The solvability of $(k, s)$ for $s\in S_{H_0}\setminus \{h_{0, 4b+2}, \dots, h_{0, 5b}\}$ is already established by the region that solves $(k, h_{0,4b+1})$ presented in the proof of Lemma~\ref{lem:key_unions_1}.\ref{lem:key_unions_1_existence}, Lemma~\ref{lem:translator_1}.\ref{lem:translator_1_existence}.
The solvability of $(o_1, s)$ for \[ s\in \{ h_{0,0},\dots, h_{0,b}, h_{0,2b+1},\dots, h_{0,3b+1}, h_{0,5b+1},\dots, h_{0,6b+1}\} \] and the solvability of $(k ,s)$ for $s\in \{ h_{0, 4b+2}, \dots, h_{0, 5b} \}$ can be done as follows: We use the region $(sup,sig)$ where $sup(h_{0,0})=b$, $sup(d_{j,0,0})=0$ and $sig(o_1)=(0,1)$, $sig(o_0)=(0,b)$, $sig(k)=(b,b)$, $sig(z)=(1,0)$, $sig(k_j)=(b,0)$ for $j\in \{0,\dots, 6m-1\}$ of $K_\tau$ and extend it for $i\in \{0,\dots, m-1\}$ appropriately corresponding to the region $(sup_T, sig_T)$ given for the proof of Lemma~\ref{lem:translator_1}.\ref{lem:translator_1_existence}: \begin{enumerate} \item $sup(t_{i,0,0})= sup(t_{i,1,0})= sup(t_{i,2,0}) =b$, \item $sig(x_i)=(0,b)$ if $sig''(x_i)=(b,0)$, else $sig(x_i)=sig''(x_i)$, \item $sig(p_i)=(0,b)$ if $sig''(p_i)=(b,0)$, else $sig(p_i)=sig''(p_i)$, \item for $X\in V(\varphi)$: $sig(X)=(0,1)$ if $sig''(X)=(1,0)$, else $sig(X)=sig''(X)$. \end{enumerate}
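Items 2--4 above amount to a componentwise mirroring of the inherited signature values: consuming pairs become producing pairs, all other values are kept. The following sketch illustrates this swap in the abstract; the event names and inherited values are hypothetical and only stand in for the concrete region $(sup_T, sig_T)$ of the cited lemma.

```python
# Hedged sketch: mirror the signature values (b,0) -> (0,b) and (1,0) -> (0,1),
# as prescribed by items 2-4; every other signature value is passed through.
def mirror_signature(sig, b):
    """Return a copy of sig with 'consume' pairs turned into 'produce' pairs."""
    swapped = {(b, 0): (0, b), (1, 0): (0, 1)}
    return {event: swapped.get(value, value) for event, value in sig.items()}

# Hypothetical inherited signature values, for illustration only.
b = 2
sig_T = {"x_0": (b, 0), "p_0": (0, b), "X_0": (1, 0)}
print(mirror_signature(sig_T, b))  # x_0 and X_0 are mirrored, p_0 is unchanged
```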
Moreover, to solve $(o_1, s)$ for \[ s\in \{ h_{0,b+1},\dots, h_{0,2b-1}, h_{0,3b+2},\dots, h_{0,4b}, d_{0,0,1},\dots, d_{6m-1,0,1}\} \] we extend the region $(sup,sig)$ of $K_\tau$ with $sup(h_{0,0})=0$, $sup(d_{j,0,0})=b$ and $sig(o_1)=(b,b)$, $sig(o_0)=(b,0)$, $sig(z)=(0,1)$, $sig(k_j)=(0,b)$ for $j\in \{0,\dots, 6m-1\}$, by $sup(s)=sup_T(s)$ and $sig(e)=sig_T(e)$ for $s\in S_{T_\tau}$ and $e\in E_{T_\tau}$ where $(sup_T, sig_T)$ is defined in the proof of Lemma~\ref{lem:translator_1}.\ref{lem:translator_1_existence}.
Finally, the region presented in the 4th row of Table~\ref{tab:third_table} solves $(o_1, s)$ for $s\in \{h_{0,2b}, d_{0,0,0}, \dots, d_{6m-1,0,0}\}$. Altogether, this justifies considering every atom $(k, \cdot)$ and $(o_1,\cdot)$ to be solvable in $U_\tau$.
$(o_0)$: The corresponding regions are given in Table~\ref{tab:third_table}.
For the solvability of atoms induced by the remaining events we exploit the already defined regions of Table~\ref{tab:first_table} and Table~\ref{tab:second_table}. If $(sup, sig)$ is a region of the last three rows of Table~\ref{tab:first_table} or a region of Table~\ref{tab:second_table}, then we use it to create a region $(sup',sig')$ in which the initials $h_{1,0}, d_{j,1,0}$ are replaced by $h_{0,0},d_{j,0,0}$, that is, $sup'(h_{0,0})=sup(h_{1,0})$, $sup'(d_{j,0,0})=sup(d_{j,1,0})$; we define $sup'(s)=sup(s)$ for the other affected initials, respectively, and let $sig'=sig$. One can easily verify that, altogether, the generated regions solve the remaining ESSP atoms of $U_\tau$.
\begin{longtable}{p{1cm} p{3cm}p{4.5cm} p{2cm} p{1.5cm}} \caption{For $\tau=\tau^b_0$: Solving $\tau$-regions for atoms $(e, \cdot)$ of $U_\tau$ where $e\in \{z,o_0,o_1\}$.} \label{tab:third_table} \endfirsthead \endhead \emph{e}& \emph{initials}& \emph{states} & \emph{sig} & \emph{constituents}\\ \hline $z$ & $h_{0,0}=0$ & \raggedright{$h_{0,2b},h_{0,4b+1}, h_{0,6b+1}$} & \raggedright{$z=(0,1)$,\\ $o_0=(b,0)$} & $H_0, D_{j,0}$\\ \hline
$o_0$ & \raggedright{$h_{0,0}=b$, $d_{j,0,0}=0$} & \raggedright{$h_{0,0},\dots, h_{0,2b-1}$, $d_{0,0,0},\dots, d_{6m-1,0,0}$} & \raggedright{$o_0=(0,b)$, $z=(1,0)$} & $H_0, D_{j,0}$\\
$o_0$ & \raggedright{$h_{0,0}=0$, $d_{j,0,0}=0$} & \raggedright{remaining states} & \raggedright{$o_0=(0,b)$} & $H_0, D_{j,0}$\\ \hline
$o_1$ & \raggedright{$h_{0,0}=d_{j,0,0}=b$} & \raggedright{$h_{0,2b}, d_{0,0,0},\dots, d_{6m-1,0,0}$} & \raggedright{$o_1=(0,1)$, $o_0=(b,0)$} & $H_0, D_{j,0}$\\ \end{longtable}
\end{proof}
\subsubsection{Proof of Statement~\ref{en:essp_implies_ssp}}
To justify Statement~\ref{en:essp_implies_ssp}, we observe that the constituents of $U_\tau$ are all \emph{linear} TSs, that is, every constituent $A$ of $U_\tau$ is a finite directed labeled path: $ A= s_0 \edge{e_1} \dots \edge{e_t} s_t$, where the states $s_0,\dots, s_t$ are pairwise distinct. The next lemma shows that the $\tau$-ESSP of a linear TS $A$ always implies its $\tau$-SSP. Consequently, if $\tau\in \{\tau^b_0, \tau^b_1\}$ then the $\tau$-ESSP of $U_\tau$ implies its $\tau$-SSP:
\begin{lemma}\label{lem:essp_implies_ssp} Let $b\in \mathbb{N}^+$, $\tau \in \{ \tau^b_0, \tau^b_1 \}$ and $ A= s_0 \edge{e_1} \dots \edge{e_t} s_t$ be a linear TS having the $\tau$-ESSP. \begin{enumerate}
\item\label{lem:essp_implies_ssp_infinite_sequence} If $q_0\edge{e_1}\dots \edge{e_m}q_m \edge{e_1}\dots \edge{e_m}q_{2m}$ is a subpath of $A$ then there has to be a $\tau$-region $(sup,sig)$ of $A$ such that $sup(q_0)\not=sup(q_{m})$.
\item\label{lem:essp_implies_ssp_proof} If $A$ is finite then it has the $\tau$-SSP.
\end{enumerate} \end{lemma}
\begin{proof} (\ref{lem:essp_implies_ssp_infinite_sequence}): Assume, for a contradiction, that $q_0\edge{e_1}\dots \edge{e_m}q_m \edge{e_1}\dots \edge{e_m}q_{2m}$ satisfies the equality $sup(q_0)=sup(q_{m})$ for every $\tau$-region $(sup,sig)$ of $A$. We argue that this sequence is continued by another transition $q_{2m}\edge{e_1}q_{2m+1}$ and that for every $\tau$-region $(sup, sig)$ of $A$ the equality $sup(q_1)=sup(q_{m+1})$ is satisfied. This makes $q_1\edge{e_2}\dots \edge{e_1}q_{m +1}\edge{e_2}\dots \edge{e_1}q_{2m+1}$ a new starting point from which, by the same argument, we get another sequence that can be continued. Hence, there would be a state $s\in S_A$ and an event $e\in E_A$ such that $s_t\edge{e}s$, which contradicts $s_t$ being the last state of the finite path $A$.
For the proof, let $(sup,sig)$ be an arbitrary region. By Lemma~\ref{lem:observations}.\ref{lem:sig_summation_along_paths} we obtain \begin{align} \label{eq:support_values_2} sup(q_m) &= sup(q_0) + \sum_{i=1}^{m} sig^-(e_i) + \sum_{i=1}^{m} sig^+(e_i) \\ \label{eq:support_values_3}
sup(q_{2m}) &= sup(q_m) +\sum_{i=1}^{m} sig^-(e_i) + \sum_{i=1}^{m} sig^+(e_i) \end{align} By Equation~\ref{eq:support_values_2} and $sup(q_0)=sup(q_m)$ we get $\sum_{i=1}^{m} sig^-(e_i) + \sum_{i=1}^{m} sig^+(e_i) =0$, implying, by Equation~\ref{eq:support_values_3}, $sup(q_{2m})=sup(q_m)=sup(q_0)$. Hence, as $(sup,sig)$ was arbitrary, a valid ESSP atom $(e_1, q_{2m})$ contradicts the $\tau$-ESSP of $A$. Thus, there is a state $q_{2m+1}$ such that $q_{2m}\edge{e_1}q_{2m+1}$. Moreover, as $\delta_\tau$ is a function, by $q_0\edge{e_1}, q_{m}\edge{e_1}$ and $sup(q_0)=sup(q_m)$ we obtain $sup(q_1)=sup(q_{m+1})$, too.
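The cancellation argument above is purely arithmetic. The following sketch makes it mechanical; the event names and signature values are hypothetical, and the sign conventions of the concrete types $\tau^b_0, \tau^b_1$ are abstracted into a single integer effect $sig^-(e)+sig^+(e)$ per event, as in the summation of Lemma~\ref{lem:observations}.\ref{lem:sig_summation_along_paths}.

```python
# Hedged sketch: if sup(q_0) = sup(q_m), the summed signatures along e_1 ... e_m
# cancel, so repeating the sequence yields sup(q_2m) = sup(q_m) = sup(q_0).
def support_after(sup_start, events, sig_minus, sig_plus):
    """Support after firing 'events', following the summation of the lemma."""
    return sup_start + sum(sig_minus[e] + sig_plus[e] for e in events)

# Hypothetical data whose net effect along e_1 ... e_m is zero.
events = ["e1", "e2"]
sig_minus = {"e1": -1, "e2": 2}
sig_plus = {"e1": 1, "e2": -2}

sup_q0 = 3
sup_qm = support_after(sup_q0, events, sig_minus, sig_plus)
sup_q2m = support_after(sup_qm, events, sig_minus, sig_plus)
assert sup_q0 == sup_qm == sup_q2m
```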
(\ref{lem:essp_implies_ssp_proof}): Assume that there is a sequence $s_{i}\edge{e_{i+1}}\dots \edge{e_j}s_j\edge{e_{j+1}} \dots \edge{e_t}s_t $ in $A$ such that $(s_i, s_j)$ is not solvable. As every region $(sup,sig)$ of $A$ satisfies $sup(s_i)=sup(s_j)$, by the $\tau$-ESSP we have that $s_j\not=s_t$ and $e_{i+1}=e_{j+1}$. Let $k\in \mathbb{N}$ be such that $j=i+k$ and let $1 \le \ell \leq k $ be the largest index such that $e_{i+1}=e_{j+1}, e_{i+2}=e_{j+2},\dots, e_{i+\ell}=e_{j+\ell}$. If $\ell < k$, then, by the ESSP of $A$, there is a region $(sup, sig)$ separating $e_{i+\ell+1}$ from $s_{j+\ell}$. By Lemma~\ref{lem:observations}.\ref{lem:sig_summation_along_paths} we have that \begin{align} sup(s_{i+\ell}) & =sup(s_i) + \sum_{m=1}^{\ell} sig^-(e_{i+m}) + \sum_{m=1}^{\ell} sig^+(e_{i+m})\\ sup(s_{j+\ell}) & =sup(s_j) + \sum_{m=1}^{\ell} sig^-(e_{i+m}) + \sum_{m=1}^{\ell} sig^+(e_{i+m}) \end{align}
which, by $sup(s_i)=sup(s_j)$, implies $sup(s_{i+\ell})=sup(s_{j+\ell})$, contradicting $\neg (sup(s_{j+\ell})\edge{sig(e_{i+\ell+1})})$. Hence, we have $\ell=k$. This implies that we have a sequence $s_i\edge{e_{i+1}}\dots\edge{e_{i+k}}s_j\edge{e_{i+1}}\dots \edge{e_{i+k}}s_{j+k}$ where $sup(s_i)=sup(s_j)$ for all regions of $A$. By (\ref{lem:essp_implies_ssp_infinite_sequence}), this is a contradiction. Hence, $(s_i,s_j)$ is $\tau$-solvable and $A$ has the $\tau$-SSP.
\end{proof}
\subsubsection{Proof of Statement~\ref{en:essp_atom_2}}
Let $\tau\in \{\tau^b_2,\tau^b_3\}$. We define the set of all initial states of the TSs implemented by $U_\tau$ as $I=\{h_{3,0,0}, t_{j,0}, f_{j,0,0},g_{j,0}\mid 0\leq j\leq m-1\}$. Many separation atoms are solved by Table~\ref{tab:fourth_table}, presented at the bottom of this subsection. However, some atoms need to be discussed individually or need additional instructions on how their corresponding rows in Table~\ref{tab:fourth_table} are to be interpreted.
($k$): The key region inhibits $k$ in $H_3$ and the region of the first row of the Table~\ref{tab:fourth_table} separates $k$ from the remaining states.
($z$): Let $i, \ell \in \{0,\dots, m-1\}$ be such that $X_\ell=X_{i,2}$ and let $i',i''\in \{0,\dots, m-1\}\setminus \{i\}$ be the indices of the translators (clauses) of the second and third occurrence of $X_{i,2}$: $X_{i,2}\in E_{T_{i}}\cap E_{T_{i'}}\cap E_{T_{i''}}$. Using these definitions, the region presented in the second row of Table~\ref{tab:fourth_table} shows the separation of $z$ in $T_i$ and from $h_{3,0,0}$. By the arbitrariness of $i$, this proves $z$ to be separable from all states of $T_\tau$.
For the separation of $z$ from the states of $S_{H_3}\setminus \{ h_{3,0,0} \}$ see row three of Table~\ref{tab:fourth_table} and, finally, see the 4th row of Table~\ref{tab:fourth_table} for the separation of $z$ from the remaining states, that is, $S_{F_j}\cup S_{G_j}$ for $j\in \{0,\dots, m-1\}$.
($v_\ell$): Let $\ell\in \{0,\dots, m-1\}$. The separation of $v_\ell$ in $U_\tau$ affects the variable event $X_\ell$ and we assume $i,i',i''\in \{0,\dots, m-1\}$ to be the respective indices such that $X_\ell\in E_{T_{i}} \cap E_{T_{i'}}\cap E_{T_{i''}}$. Using these indices, the seventh and eighth row of Table~\ref{tab:fourth_table} prove $v_\ell$ to be separable from all states of $U_\tau$.
($V(\varphi)$): For the separation of the variable events we proceed as follows: If $i, i',i''\in \{0,\dots, m-1\}$ and $\alpha \in \{0,1,2\}$ such that $X_{i,\alpha}\in E_{T_{i}} \cap E_{T_{i'}}\cap E_{T_{i''}}$ then we explicitly present regions for the separation of $X_{i,\alpha}$ at the states in question of $S_{U_\tau}\setminus (S_{T_{i'}} \cup S_{T_{i''}})$. By the arbitrariness of $i$ and $\alpha$, this proves $X_{i,\alpha}$ to be separable in $T_{i'}$ and $T_{i''}$, too, and, consequently, in $U_\tau$.
$(X_{i,0})$: Let $i,i',i'', \ell \in \{0,\dots, m-1\}$ such that $X_{i,0}=X_\ell$ and $X_\ell\in E_{T_{i}} \cap E_{T_{i'}}\cap E_{T_{i''}}$. The 9th row is dedicated to the separation of $X_\ell$ at the states $f_{\ell,1,0},\dots, f_{\ell,1,b-2}$ and $g_{\ell,0},\dots, g_{\ell, b-1}$ and $t_{i,0},\dots, t_{i,b-1}$. After that, the 10th row shows $X_\ell$ to be separable at the remaining states of $S_{U_\tau}\setminus (S_{T_{i'}} \cup S_{T_{i''}})$.
$(X_{i,1})$: Let $\ell_0,\ell_1,i_0,\dots, i_3\in \{0,\dots, m-1\}$ such that $X_{i,0}=X_{\ell_0}\in E_{T_{i}} \cap E_{T_{i_0}}\cap E_{T_{i_1}}$ and $X_{i,1}=X_{\ell_1}\in E_{T_{i}} \cap E_{T_{i_2}}\cap E_{T_{i_3}}$. The 11th row shows the separation of $X_{\ell_1}$ at $f_{\ell_1,1,0},\dots, f_{\ell_1, 1, b-2}$ and $g_{\ell_1,0},\dots, g_{\ell_1, b-1}$. After that, the 12th row shows $X_{\ell_1}$ to be separable at the remaining states of $S_{U_\tau}\setminus (S_{T_{i_2}} \cup S_{T_{i_3}})$. To separate $X_{\ell_1}$ at the states $t_{i,0}, \dots, t_{i,b}$, a more involved case analysis is necessary, as the variable event $X_{i,0}$ comes into play. Hence, to define an appropriate region, we have to analyze in which constellation the events $X_{i,0}$ and $X_{i,1}$ occur a second and a third time. Roughly speaking, the following cases are possible: \begin{enumerate} \item $X_{i,0}$ and $X_{i,1}$ occur a second (third) time together in another translator and $X_{i,1}$ occurs \emph{left} of $X_{i,0}$; for example, $i_0=i_2$ and $t_{i_0, b}\edge{ X_{i,1} }$ and $t_{i_0, b+1} \edge{ X_{i,0} }$, respectively $t_{i_0,b+2}\edge{ X_{i,0} }$, or $t_{i_0,b+1}\edge{X_{i,1}}$ and $t_{i_0,b+2}\edge{X_{i,0}}$.
\item $X_{i,0}$ and $X_{i,1}$ occur a second (third) time together in another translator and $X_{i,1}$ always occurs \emph{right} of $X_{i,0}$ as, for example, is the case for $T_i$.
\item $X_{i,0}$ and $X_{i,1}$ do not occur together again in a common translator. \end{enumerate} To define an appropriate region, in the following we discuss all possible cases individually. Firstly, for all cases, the signature $sig$ is defined by $sig(X_{\ell_1}) =(0, b)$, $sig(X_{\ell_0})=sig(v_{\ell_1})=1$, $sig(v_{\ell_0})=b$ and $sig(e)=0$ for $e\in E_{U_\tau}\setminus \{v_{\ell_0}, v_{\ell_1}, X_{\ell_0}, X_{\ell_1} \}$. Independently of the different cases, we have that $sup(t_{i,0})=sup(f_{\ell_1,0,0})=b$ and $sup(f_{\ell_0,0,0})=1$. The challenge for the further initials is to achieve that each further source $s$ of an $X_{\ell_1}$-labeled transition, that is, $s\edge{X_{\ell_1}}$, is mapped to $0$.
If $X_{\ell_0} \not\in E_{T_{i_2}}$, respectively $X_{\ell_0} \not\in E_{T_{i_3}}$, then we simply define $sup(t_{i_2,0})=0$, respectively $sup(t_{i_3,0})=0$.
If $X_{\ell_0} \in E_{T_{i_2}}$, respectively $X_{\ell_0} \in E_{T_{i_3}}$, and if $X_{\ell_0}$ occurs left from $X_{\ell_1}$ then the situation is similar to $T_i$ and we define $sup(t_{i_2,0})=b$, respectively $sup(t_{i_3,0})=b$. Otherwise, if $X_{\ell_0}$ occurs right from $X_{\ell_1}$ then we define $sup(t_{i_2,0})=0$, respectively $sup(t_{i_3,0})=0$.
Finally, the values of $t_{i_0,0}, t_{i_1,0}$ are defined in dependence of one of the former cases: Actually, if $t_{i_0,0} \in \{t_{i_2,0}, t_{i_3,0}\}$, respectively $t_{i_1,0} \in \{t_{i_2,0}, t_{i_3,0}\}$, then the former case properly defines the support of $t_{i_0,0}$, respectively $t_{i_1,0}$. Otherwise, we set $sup(t_{i_0,0})=0$, respectively $sup(t_{i_1,0})=0$.
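The case distinctions above for the initial supports of $t_{i_2,0}$ and $t_{i_3,0}$ can be summarized compactly. The following sketch merely restates the three cases; the predicate names are hypothetical and only encode whether $X_{\ell_0}$ occurs in the translator and on which side of $X_{\ell_1}$ it occurs.

```python
# Hedged sketch of the case analysis: decide sup(t_{i2,0}) (and, analogously,
# sup(t_{i3,0})) from the occurrence pattern of X_{l0} relative to X_{l1}.
def initial_support(b, contains_X_l0, X_l0_left_of_X_l1=False):
    """Support of the translator's initial state, following the three cases."""
    if not contains_X_l0:
        return 0  # X_{l0} is not an event of this translator
    if X_l0_left_of_X_l1:
        return b  # situation analogous to T_i
    return 0      # X_{l0} occurs right of X_{l1}

b = 2
assert initial_support(b, contains_X_l0=False) == 0
assert initial_support(b, contains_X_l0=True, X_l0_left_of_X_l1=True) == b
assert initial_support(b, contains_X_l0=True, X_l0_left_of_X_l1=False) == 0
```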
$(X_{i,2})$: The separation of $X_{i,2}$ can be shown in perfect analogy to the separation of $X_{i,1}$. Hence, for simplicity, we refrain from the explicit presentation of the separating regions.
\noindent\begin{longtable}{p{0.7cm} p{3.8cm}p{3cm} p{3.5cm} p{1.7cm}} \caption{Inhibiting regions of $U^{\tau}_\varphi$ for $z,u, v_\ell, X_{i,0}, X_{i,1}$} \label{tab:fourth_table} \endfirsthead \endhead \emph{e} & \emph{initials}& \emph{states} & \emph{sig} & \emph{constituents}\\ \hline
$k$ & $s\in I: s=0$ & remaining states & \raggedright{$k=(0,1)$, $z=v_j=1$} &$U_\tau$\\ \hline
$z$ & \raggedright{$t_{i,0}=t_{i',0}=t_{i'',0}=h_{3,0,0}=b$, $f_{j,0,0}=g_{j,0}=1$, $j\not\in \{i,i',i''\}: t_{j,0}=0$} & $S_{T_i}, h_{3,0,0}$ & \raggedright{$z=(0,b)$,\newline $u=X_\ell=1$, $v_\ell=b$} &$T^0_\tau, H_3, F_\ell, G_\ell$\\
$z$ & \raggedright{$h_{3,0,0}=f_{j,0,0}=g_{j,0}=0$, $t_{j,0}=1$} & $S_{H_3}\setminus \{h_{3,0,0}\}$ & $z=(0,b), k=1$, $u=2$ & $U_\tau$ \\
$z$ & \raggedright{ $h_{3,0,0}=b$, $t_{j,0}=0$} & remaining states & \raggedright{$z=(0,b)$, $u=1$} & $H_3, T^0_\tau$\\ \hline
$u$ & \raggedright{$s\in I: s=0$} & $h_{3,0,1},\dots, h_{3,0, b}$ & $u=(0,b)$, $z=2$, $k=1$ & $ U_\tau$\\
$u$ & \raggedright{$h_{3,0,0}=0$, $t_{j,0}=1$} & remaining states & $u=(0,b), z=1$ & $H_3, T^0_\tau$\\ \hline
$v_\ell$ & \raggedright{$s\in I: s=0$} & $f_{\ell,0,1},\dots, f_{\ell,0,b}$ & \raggedright{$v_\ell=(0,b)$, $X_\ell=2$, $k=1$} & $ U_\tau$\\
$v_\ell$ & \raggedright{$f_{\ell,0,0}=0$, $s\in I\setminus \{f_{\ell,0,0} \}: s=1$} & remaining states & $v_\ell=(0,b), X_\ell=1$ & $F_\ell, T_{i}, T_{i'}, T_{i''}$\\ \hline
$\underbrace{X_{i,0}}_{= X_\ell}$ & \raggedright{$g_{\ell,0}=1$, $t_{i,0}=t_{i',0}=t_{i'',0}=1$, $s\in I\setminus \{t_{i,0},t_{i',0}, t_{i'',0},g_{\ell,0}\}$: $s=0$} & $f_{\ell,1,0},\dots, f_{\ell,1, b-2}$, $t_{i,0},\dots, t_{i,b-1}$, $g_{\ell,0},\dots, g_{\ell, b-1}$ & \raggedright{$X_\ell=(0,b)$, $v_\ell=2$, $k=1$} & $ U_\tau$\\
$\underbrace{X_{i,0}}_{= X_\ell}$ & \raggedright{$f_{\ell,0,0}=b, g_{\ell,0}=0$, $t_{i,0}=t_{i',0}=t_{i'',0}=0$} & remaining states & $X_\ell=(0,b)$, $v_\ell=1$& $ F_\ell, G_\ell$, $T_{i}, T_{i'}, T_{i''}$ \\ \hline
$\underbrace{X_{i,1}}_{= X_{\ell_1}}$ & \raggedright{$g_{\ell_1,0}=1$,$t_{i,0}=t_{i_2,0}=t_{i_3,0}=1$, $s\in I\setminus\{g_{\ell_1,0},t_{i,0},t_{i_2,0},t_{i_3,0} \}: s=0$} & $f_{\ell_1,1,0},\dots, f_{\ell_1,1,b-2}$, $g_{\ell_1,0},\dots, g_{\ell_1, b-1}$ & $X_{\ell_1}=(0,b)$, $v_{\ell_1}=2$, $k=1$ & $ U_\tau$\\
$\underbrace{ X_{i,1} }_{= X_{\ell_1}} $ & \raggedright{ $f_{ \ell_1,0, 0} = b, g_{\ell_1,0}=0 $, $t_{i,0}=t_{i_2,0}=t_{i_3,0}=0$} & \raggedright{remaining states of $S_{U_\tau}\setminus (S_{T_{i_2} }\cup S_{T_{i_3} })$, except $t_{i,0},\dots, t_{i,b+1} $}& \raggedright{$X_{\ell_1}=(0,b)$, $v_{\ell_1}=1$} & $ F_{\ell_1}, G_{\ell_1}$, $T_{i}, T_{i_2}, T_{i_3}$ \\ \end{longtable}
\subsubsection{Proof of Statement~\ref{en:essp_implies_ssp_Z}}
Actually, the $\tau$-SSP is already proven by the $\tau$-regions presented for the $\tau$-ESSP in Statement~\ref{en:essp_atom_2}: \begin{enumerate}
\item If $s_0\edge{k}\dots \edge{k}s_{n}$ is a sequence in $U_\tau$, then the pairwise separation of $s_{i-1}, s_i$ for $i\in \{1,\dots, n\}$ is done by the region that solves $(k,h_{3,1, b-1})$.
\item The separation of $h_{3,0,0}, \dots, h_{3,0,b}$ from $h_{3,1,0},\dots, h_{3,1, b-1}$ is done by the region of the 2nd row of Table~\ref{tab:fourth_table} as well as the separation of $f_{j,0,0},\dots, f_{j,0, b}$ from $f_{j, 1,0},\dots, f_{j, 1,b-1}$ for $j\in \{0,\dots, m-1\}$. This finishes the state separation in $H_3, F_0,\dots, F_{m-1}$.
\item If $i,\ell\in \{0,\dots, m-1\}$ such that $X_{i,0}=X_\ell$ then \begin{enumerate} \item the 10th row of Table~\ref{tab:fourth_table} presents a region, that separates $t_{i,0},\dots, t_{i,b}$ from $t_{i,b+1},\dots, t_{i, 2b+4}$ and $g_{\ell,0},\dots, g_{\ell,b}$ from $g_{\ell,b+1}$, \item the region of the 12th row separates $t_{i,b+1}$ from $t_{i,b+2},\dots, t_{i, 2b+4}$ and the corresponding region for $X_{i,2}$ separates $t_{i,b+2}$ from $t_{i,b+3},\dots, t_{i, 2b+4}$ and \item the region of the 4th row separates $t_{i,b+3}$ from $t_{i,b+4},\dots, t_{i, 2b+4}$. \end{enumerate} By the arbitrariness of $i$, this completes the state separation in $U_\tau$. \end{enumerate}
\subsubsection{Proof of Statement~\ref{en:ssp_atom}}
Let $\tau\in \{\tau^b_0,\tau^b_1\}$. To show that the solvability of $(h_{2,0}, h_{2,1})$ in $W$ implies its $\tau$-SSP, we explicitly present regions of $W$ that, altogether, solve the SSP atoms induced by $H_2$ and $D_{0,1},\dots, D_{6m-1,1}$. Furthermore, we observe that the regions of Table~\ref{tab:first_table} and Table~\ref{tab:second_table}, which were originally dedicated to the $\tau^b_1$-separation of $k_{6i},\dots, k_{6i+5},x_i,p_i,X_{i,0}, X_{i,1}, X_{i,2}$, can be fitted into $\tau$-regions of $W$. In fact, this is doable simply by replacing the support of the initial state $h_{1,0}$ by the appropriate value for $h_{2,0}$. Consequently, the union $T_\tau$ has the $\tau$-ESSP with regions of $W$. Moreover, $T_\tau$ consists only of linear TSs. Hence, by Lemma~\ref{lem:essp_implies_ssp}, $T_\tau$ has the $\tau$-SSP. Finally, with the added regions concerning $H_2, D_{0,1},\dots, D_{6m-1,1}$, the $\tau$-SSP for $W$ is proven:
For a start, if $(s,s')$ satisfies the condition $s,s'\in S_0=\{h_{2,0}, \dots, h_{2,b}\}$ or $s,s'\in S_1=\{h_{2,b+1}, \dots, h_{2,2b+1}\}$ or $s,s'\in S_2=\{h_{2,2b+2}, \dots, h_{2,3b+2}\}$ or $s,s'\in S_3=\{d_{j,1,0}, d_{j,1,1}\}$ or $s,s'\in S_4=\{d_{j,1,2}, d_{j,1,3}\}$ then $(s,s')$ is already solved by the key region presented for the proof of Lemma~\ref{lem:key_unions_1} and Lemma~\ref{lem:translator_1}. Moreover, the $\tau$-region $(sup,sig)$ where: \begin{enumerate} \item
$sup(h_{2,0})=sup(d_{j,1,0})=0$ and $sig(o_0)=(0,b)$ separates all states of $S_0$ from all states of $S_1\cup S_2$,
\item $sup(h_{2,0})=sup(d_{j,1,0})=0$ and $sig(o_1)=(0,b)$ separates all states of $S_1$ from all states of $ S_2$,
\item $sup(d_{j,1,0})=0$ and $sig(k_j)=(0,b)$ solves $(s,s')$ where $s\in S_3$ and $s'\in S_4$. Altogether, this proves $H_2, D_{0,1},\dots, D_{6m-1,1}$ to have the $\tau$-SSP with regions of $W$. With the former discussion this implies the $\tau$-SSP of $W$. \end{enumerate}
\end{appendix}
\end{document} | arXiv |
Academic Quant News
The latest quantitative finance news from the academic world
This site archives
Research articles for the 2020-04-13
A new multilayer network construction via Tensor learning
Giuseppe Brandi,T. Di Matteo
Multilayer networks proved to be suitable in extracting and providing dependency information of different complex systems. The construction of these networks is difficult and is mostly done with a static approach, neglecting time delayed interdependences. Tensors are objects that naturally represent multilayer networks and in this paper, we propose a new methodology based on Tucker tensor autoregression in order to build a multilayer network directly from data. This methodology captures within and between connections across layers and makes use of a filtering procedure to extract relevant information and improve visualization. We show the application of this methodology to different stationary fractionally differenced financial data. We argue that our result is useful to understand the dependencies across three different aspects of financial risk, namely market risk, liquidity risk, and volatility risk. Indeed, we show how the resulting visualization is a useful tool for risk managers depicting dependency asymmetries between different risk factors and accounting for delayed cross dependencies. The constructed multilayer network shows a strong interconnection between the volumes and prices layers across all the stocks considered while a lower number of interconnections between the uncertainty measures is identified.
An extensive study of stylized facts displayed by Bitcoin returns
F.N.M. de Sousa Filho,J.N. Silva,M.A. Bertella,E. Brigatti
In this paper, we explore some stylized facts in the Bitcoin market using the BTC-USD exchange rate time series of historical intraday data from 2013 to 2018. Despite Bitcoin presents some very peculiar idiosyncrasies, like the absence of macroeconomic fundamentals or connections with underlying asset or benchmark, a clear asymmetry between demand and supply and the presence of inefficiency in the form of very strong arbitrage opportunity, all these elements seem to be marginal in the definition of the structural statistical properties of this virtual financial asset, which result to be analogous to general individual stocks or indices. In contrast, we find some clear differences, compared to fiat money exchange rates time series, in the values of the linear autocorrelation and, more surprisingly, in the presence of the leverage effect. We also explore the dynamics of correlations, monitoring the shifts in the evolution of the Bitcoin market. This analysis is able to distinguish between two different regimes: a stochastic process with weaker memory signatures and closer to Gaussianity between the Mt. Gox incident and the late 2015, and a dynamics with relevant correlations and strong deviations from Gaussianity before and after this interval.
Containment efficiency and control strategies for the Corona pandemic costs
Claudius Gros,Roser Valenti,Lukas Schneider,Kilian Valenti,Daniel Gros
The rapid spread of the Coronavirus (COVID-19) confronts policy makers with the problem of measuring the effectiveness of containment strategies and the need to balance public health considerations with the economic costs of a persistent lockdown. We introduce a modified epidemic model, the controlled-SIR model, in which the disease reproduction rate evolves dynamically in response to political and societal reactions. An analytic solution is presented. The model reproduces official COVID-19 cases counts of a large number of regions and countries that surpassed the peak of the outbreak. A single unbiased feedback parameter is extracted from field data and used to formulate an index that measures the efficiency of containment policies (the CEI index). CEI values for a range of countries are given. For two variants of the controlled-SIR model, detailed estimates of the total medical and socio-economic costs are evaluated over the entire course of the epidemic. Costs comprise medical care cost, the economic cost of social distancing, as well as the economic value of lives saved. Under plausible parameters, strict measures fare better than a hands-off policy. Strategies based on actual case numbers lead to substantially higher total costs than strategies based on the overall history of the epidemic.
Continuous Time Random Walk with correlated waiting times. The crucial role of inter-trade times in volatility clustering
Jarosław Klamut,Tomasz Gubiec
In many physical, social or economical phenomena we observe changes of a studied quantity only in discrete, irregularly distributed points in time. The stochastic process used by physicists to describe this kind of variables is the Continuous Time Random Walk (CTRW). Despite the popularity of this type of stochastic processes and strong empirical motivation, models with a long-term memory within the sequence of time intervals between observations are missing. Here, we fill this gap by introducing a new family of CTRWs. The memory is introduced to the model by the assumption that many consecutive time intervals can be the same. Surprisingly, in this process we can observe a slowly decaying nonlinear autocorrelation function without a fat-tailed distribution of time intervals. Our model applied to high-frequency stock market data can successfully describe the slope of decay of nonlinear autocorrelation function of stock market returns. The model achieves this result with no dependence between consecutive price changes. It proves the crucial role of inter-event times in the volatility clustering phenomenon observed in all stock markets.
Effective alleviation of rural poverty depends on the interplay between productivity, nutrients, water and soil quality
Sonja Radosavljevic,L. Jamila Haider,Steven J. Lade,Maja Schluter
Most of the world's poorest people live in rural areas and depend on their local ecosystems for food production. Recent research has highlighted the importance of self-reinforcing dynamics between low soil quality and persistent poverty, but little is known about how they affect poverty alleviation. We investigate how the intertwined dynamics of household assets, nutrients (especially phosphorus), water and soil quality influence food production and determine the conditions for escape from poverty for the rural poor. We have developed a suite of dynamic, multidimensional poverty trap models of households that combine economic aspects of growth with ecological dynamics of soil quality, water and nutrient flows to analyze the effectiveness of common poverty alleviation strategies such as intensification through agrochemical inputs, diversification of energy sources and conservation tillage. Our results show that (i) agrochemical inputs can reinforce poverty by degrading soil quality, (ii) diversification of household energy sources can create possibilities for effective application of other strategies, and (iii) sequencing of interventions can improve the effectiveness of conservation tillage. Our model-based approach demonstrates the interdependence of economic and ecological dynamics, which precludes blanket solutions for poverty alleviation. Stylized models as developed here can be used for testing the effectiveness of different strategies given the biophysical and economic settings in the target region.
Evolving efficiency and robustness of global oil trade networks
Wen-Jie Xie,Na Wei,Wei-Xing Zhou
As a vital strategic resource, oil has an essential influence on the world economy, diplomacy and military development. Using oil trade data to dynamically monitor and warn about international trade risks is an urgent need. Based on the UN Comtrade data from 1988 to 2017, we construct unweighted and weighted global oil trade networks (OTNs). Complex network theory has some advantages in analyzing global oil trade as a system with numerous economies and complicated relationships. This paper establishes a trading-based network model for global oil trade to study the evolving efficiency, criticality and robustness of economies and the relationships between oil trade partners. The results show that for unweighted OTNs, the efficiency of oil flows gradually increases with the growing complexity of the OTNs, and the weighted efficiency indicators are more capable of highlighting the impact of major events on the OTNs. The identified critical economies and trade relationships have more important strategic significance in the real market. Simulated deliberate attacks corresponding to national bankruptcy, trade blockade, and economic sanctions have a more significant impact on robustness than random attacks. As economies promote high-quality economic development and continuously enhance their positions in the OTNs, more attention needs to be paid to the identified critical economies and trade relationships. To conclude, some suggestions for application are given according to the results.
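The efficiency and attack-robustness notions mentioned in the abstract can be illustrated on a toy graph. The sketch below uses a hypothetical six-economy network (not the actual OTN data): global efficiency is computed as the average inverse BFS distance over ordered pairs, and a deliberate attack on the hub is compared with the removal of a peripheral economy.

```python
from collections import deque

def efficiency(adj):
    """Global efficiency: mean of 1/d(i, j) over ordered node pairs (BFS hops)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

def remove_node(adj, k):
    """Deliberate attack: delete economy k and all its trade links."""
    return {u: [v for v in nbrs if v != k] for u, nbrs in adj.items() if u != k}

# hypothetical toy trade network: hub economy A plus a ring of five partners
adj = {
    "A": ["B", "C", "D", "E", "F"],
    "B": ["A", "C", "F"],
    "C": ["A", "B", "D"],
    "D": ["A", "C", "E"],
    "E": ["A", "D", "F"],
    "F": ["A", "E", "B"],
}

eff_full = efficiency(adj)
eff_no_hub = efficiency(remove_node(adj, "A"))
eff_no_leaf = efficiency(remove_node(adj, "F"))
```

Removing the hub degrades efficiency more than removing a peripheral node, which is the qualitative point behind the deliberate-attack simulations.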
Holding-Based Evaluation upon Actively Managed Stock Mutual Funds in China
Huimin Peng
We analyze actively managed mutual funds in China from 2005 to 2017. We develop performance measures for asset allocation and selection. We find that stock selection ability from the holding-based model is positively correlated with selection ability estimated from the Fama-French three-factor model, which is a price-based regression model. We also find that industry allocation from the holding-based model is positively correlated with timing ability estimated from the price-based Treynor-Mazuy model most of the time. We conclude that most actively managed funds have positive stock selection ability but not asset allocation ability, which is due to the difficulty in predicting policy changes.
Optimal multi-asset trading with linear costs: a mean-field approach
Matt Emschwiller,Benjamin Petit,Jean-Philippe Bouchaud
Optimal multi-asset trading with Markovian predictors is well understood in the case of quadratic transaction costs, but remains intractable when these costs are $L_1$. We present a mean-field approach that reduces the multi-asset problem to a single-asset problem, with an effective predictor that includes a risk averse component. We obtain a simple approximate solution in the case of Ornstein-Uhlenbeck predictors and maximum position constraints. The optimal strategy is of the "bang-bang" type similar to that obtained in [de Lataillade et al., 2012]. When the risk aversion parameter is small, we find that the trading threshold is an affine function of the instantaneous global position, with a slope coefficient that we compute exactly. We relate the risk aversion parameter to the desired target risk and provide numerical simulations that support our analytical results.
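A minimal sketch of a "bang-bang" threshold rule driven by an Ornstein-Uhlenbeck predictor. The threshold and dynamics parameters below are illustrative assumptions; in the paper the threshold is derived from the optimization rather than fixed by hand.

```python
import random

def ou_path(n, kappa=0.1, sigma=1.0, seed=3):
    """Discretised Ornstein-Uhlenbeck predictor: p <- (1 - kappa) * p + sigma * noise."""
    rng = random.Random(seed)
    p, path = 0.0, []
    for _ in range(n):
        p = (1.0 - kappa) * p + sigma * rng.gauss(0.0, 1.0)
        path.append(p)
    return path

def bang_bang_positions(pred, threshold, max_pos=1.0):
    """Jump to +/-max_pos when the predictor crosses +/-threshold; hold otherwise."""
    pos, out = 0.0, []
    for p in pred:
        if p > threshold:
            pos = max_pos
        elif p < -threshold:
            pos = -max_pos
        out.append(pos)
    return out

positions = bang_bang_positions(ou_path(500), threshold=1.5)
```

The position is saturated at the maximum allowed size and only flips when the signal becomes strong enough, so trading (and hence linear cost) is incurred only at threshold crossings.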
Pricing of counterparty risk and funding with CSA discounting, portfolio effects and initial margin
Francesca Biagini,Alessandro Gnoatto,Immacolata Oliva
In this paper we extend the existing literature on xVA along three directions. First, we enhance current BSDE-based xVA frameworks to include initial margin by following the approach of Cr\'epey (2015a) and Cr\'epey (2015b). Next, we solve the consistency problem that arises when the front-office desk of the bank uses trade-specific discount curves that differ from the discount curve adopted by the xVA desk. Finally, we address the existence of multiple aggregation levels for contingent claims in the portfolio between the bank and the counterparty by providing suitable extensions of our proposed single-claim xVA framework.
Pricing variance swaps with stochastic volatility and stochastic interest rate under full correlation structure
Teh Raihana Nazirah Roslan,Wenjun Zhang,Jiling Cao
This paper considers the pricing of discretely-sampled variance swaps under equity-interest rate hybridization. Our modeling framework consists of equity following the dynamics of the Heston stochastic volatility model, while the stochastic interest rate is driven by the Cox-Ingersoll-Ross (CIR) process, with a full correlation structure imposed among the state variables. This full correlation structure prevents a fully analytical pricing formula for hybrid variance-swap models, due to the non-affine property embedded in the model itself. We address this issue by obtaining an efficient semi-closed-form pricing formula of variance swaps for an approximation of the hybrid model via the derivation of characteristic functions. Subsequently, we implement numerical experiments to evaluate the accuracy of our pricing formula. Our findings confirm that the impact of the correlation between the underlying and the interest rate is significant for pricing discretely-sampled variance swaps.
Quantifying horizon dependence of asset prices: a cluster entropy approach
L. Ponta,A. Carbone
Market dynamics are quantified in terms of the entropy $S(\tau,n)$ of the clusters formed by the intersections between the series of the prices $p_t$ and the moving average $\widetilde{p}_{t,n}$. The entropy $S(\tau,n)$ is defined according to Shannon as $-\sum P(\tau,n)\log P(\tau,n),$ with $P(\tau,n)$ the probability for the cluster to occur with duration $\tau$. \par The investigation is performed on high-frequency data of the Nasdaq Composite, Dow Jones Industrial Avg and Standard \& Poor 500 indexes downloaded from the Bloomberg terminal. The cluster entropy $S(\tau,n)$ is analysed in raw and sampled data over a broad range of temporal horizons $M$ varying from one to twelve months over the year 2018. The cluster entropy $S(\tau,n)$ is integrated over the cluster duration $\tau$ to yield the Market Dynamic Index $I(M,n)$, a synthetic figure of price dynamics. A systematic dependence of the cluster entropy $S(\tau,n)$ and the Market Dynamic Index $I(M,n)$ on the temporal horizon $M$ is evidenced. \par Finally, the Market Horizon Dependence, defined as $H(M,n)=I(M,n)-I(1,n)$, is compared with the horizon dependence of the pricing kernel with different representative agents obtained via a Kullback-Leibler entropy approach. The Market Horizon Dependence $H(M,n)$ of the three assets is compared against the values obtained by implementing the cluster entropy $S(\tau,n)$ approach on artificially generated series (Fractional Brownian Motion).
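The cluster construction described in the abstract is straightforward to implement. The sketch below uses synthetic random-walk prices (not Bloomberg data): clusters are the maximal runs between intersections of $p_t$ with its moving average, and the Shannon entropy of the empirical duration distribution is evaluated.

```python
import math
import random

def moving_average(x, n):
    """Simple n-point trailing moving average."""
    return [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]

def cluster_entropy(prices, n):
    """Shannon entropy -sum P log P of the durations of the clusters cut out
    by the intersections of the price series with its n-point moving average."""
    ma = moving_average(prices, n)
    p = prices[n - 1:]
    signs = [1 if a >= b else -1 for a, b in zip(p, ma)]
    durations, run = [], 1
    for prev, cur in zip(signs, signs[1:]):
        if cur == prev:
            run += 1
        else:
            durations.append(run)
            run = 1
    durations.append(run)
    counts = {}
    for d in durations:
        counts[d] = counts.get(d, 0) + 1
    total = len(durations)
    return -sum(c / total * math.log(c / total) for c in counts.values())

# toy usage on a synthetic Gaussian random walk
rng = random.Random(1)
prices = [0.0]
for _ in range(2000):
    prices.append(prices[-1] + rng.gauss(0.0, 1.0))
entropy = cluster_entropy(prices, 20)
```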
The effect of stay-at-home orders on COVID-19 infections in the United States
James H. Fowler,Seth J. Hill,Remy Levin,Nick Obradovich
In March and April 2020, public health authorities in the United States acted to mitigate transmission of COVID-19. These actions were not coordinated at the national level, which creates an opportunity to use spatial and temporal variation to measure their effect with greater accuracy. We combine publicly available data sources on the timing of stay-at-home orders and daily confirmed COVID-19 cases at the county level in the United States (N = 132,048). We then derive from the classic SIR model a two-way fixed-effects model and apply it to the data with controls for unmeasured differences between counties and over time. Mean county-level daily growth in COVID-19 infections peaked at 17.2% just before stay-at-home orders were issued. Two-way fixed-effects regression estimates suggest that orders were associated with a 3.8 percentage point (95% CI 0.7 to 8.6) reduction in the growth rate after one week and an 8.6 percentage point (3.0 to 14.1) reduction after two weeks. By day 22 the reduction (18.2 percentage points, 12.3 to 24.0) had surpassed the growth at the peak, indicating that growth had turned negative and the number of new daily infections was beginning to decline. A hypothetical national stay-at-home order issued on March 13, 2020 when a national emergency was declared might have reduced cumulative county infections by 62.3%, and might have helped to reverse exponential growth in the disease by April 5. The results here suggest that a coordinated nationwide stay-at-home order may have reduced by hundreds of thousands the current number of infections and by thousands the total number of deaths from COVID-19. Future efforts in the United States and elsewhere to control pandemics should coordinate stay-at-home orders at the national level, especially for diseases for which local spread has already occurred and testing availability is delayed.
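A toy discrete-time SIR simulation, with made-up parameters, illustrates the qualitative pattern reported in the abstract: daily growth of new cases is positive before a transmission-reducing intervention and turns negative afterwards.

```python
def sir_growth(beta0, beta1, t_policy, gamma=0.125, days=60, pop=1e6, i0=100.0):
    """Discrete-time SIR; transmission drops from beta0 to beta1 at t_policy
    (a crude stand-in for a stay-at-home order). Returns the daily growth
    rates of new infections."""
    s, i, r = pop - i0, i0, 0.0
    new_cases = []
    for t in range(days):
        beta = beta0 if t < t_policy else beta1
        inf = beta * s * i / pop          # new infections today
        rec = gamma * i                   # recoveries today
        s, i, r = s - inf, i + inf - rec, r + rec
        new_cases.append(inf)
    return [(b - a) / a for a, b in zip(new_cases, new_cases[1:])]

growth = sir_growth(beta0=0.3, beta1=0.05, t_policy=20)
```

Early on, while the susceptible pool is nearly untouched, daily growth is roughly beta - gamma, so cutting beta below gamma flips the sign of the growth rate.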
Time-inhomogeneous Gaussian stochastic volatility models: Large deviations and super roughness
Archil Gulisashvili
We introduce time-inhomogeneous stochastic volatility models, in which the volatility is described by a nonnegative function of a Volterra type continuous Gaussian process that may have extremely rough sample paths. The drift function and the volatility function are assumed to be time-dependent and locally $\omega$-continuous for some modulus of continuity $\omega$. The main results obtained in the paper are sample path and small-noise large deviation principles for the log-price process in a Gaussian model under very mild restrictions. We use these results to study the asymptotic behavior of binary up-and-in barrier options and binary call options.
What You See and What You Don't See: The Hidden Moments of a Probability Distribution
Empirical distributions have their in-sample maxima as natural censoring. We look at the "hidden tail", that is, the part of the distribution in excess of the maximum for a sample size of $n$. Using extreme value theory, we examine the properties of the hidden tail and calculate its moments of order $p$. The method is useful in showing how large a bias one can expect, for a given $n$, between the visible in-sample mean and the true statistical mean (or higher moments), which is considerable for $\alpha$ close to 1. Among other properties, we note that the "hidden" moment of order $0$, that is, the exceedance probability for power law distributions, follows an exponential distribution and has expectation $\frac{1}{n}$ regardless of the parametrization of the scale and tail index.
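The in-sample bias can be illustrated numerically. The sketch below uses an illustrative tail index and sample size (my assumptions, not the paper's): it draws Pareto samples and counts how often the visible sample mean falls below the true mean, which happens most of the time because the "hidden tail" is missing from each sample.

```python
import random

def pareto_sample(alpha, n, rng):
    """Standard Pareto(alpha) with scale 1, via inverse-transform sampling."""
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

rng = random.Random(42)
alpha, n, trials = 1.25, 1000, 200
true_mean = alpha / (alpha - 1.0)          # finite for alpha > 1
sample_means = [sum(pareto_sample(alpha, n, rng)) / n for _ in range(trials)]
underestimates = sum(m < true_mean for m in sample_means)
```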
| CommonCrawl |
Separability criteria
Although there exists a clear definition of what separable and entangled states are, in general it is difficult to determine whether a given state is entangled or separable. Linear maps which are positive but not completely positive (PnCP) are a useful tool to investigate the entanglement of given states via separability criteria.
PnCP maps and separability criteria
Every linear map $\Lambda$ which describes a physical transformation must preserve the positivity of every state $\varrho$: if this were not true, the transformed system could have negative eigenvalues, which would be in contradiction with the statistical interpretation of the eigenvalues as probabilities. In order to preserve the positivity of every state $\varrho$, $\Lambda$ must be a positive map. But the system $S_d$ could be statistically coupled to another system $S_n$, called an "ancilla". If we perform a physical transformation, represented by the positive map $\Lambda$, on the system $S_d$ statistically coupled to the system $S_n$, we must consider the action of the tensor product of maps $id_n \otimes \Lambda$ on the compound system $S_n \otimes S_d$, where $id_n$ is the identity on the state space of the system $S_n$. If we want $\Lambda$ to be a fully consistent physical transformation, it is not sufficient for $\Lambda$ to be positive: the tensor product $id_n \otimes \Lambda$ must be positive for every $n$, i.e. the map $\Lambda$ must be completely positive. Complete positivity is necessary because of entangled states of the bipartite system $S_n \otimes S_d$. If all the physical states of a bipartite system were separable, then positivity of the map $\Lambda$ would be sufficient. Indeed, we know that if $\varrho \geq 0$ is separable, then $\varrho \equiv \varrho_{nd} = \sum_i p_i \varrho_i^n \otimes \varrho_i^d$, and therefore:
$$(id_n \otimes \Lambda)[\varrho] = \sum_i p_i \Big(id_n[\varrho_i^n] \otimes \Lambda[\varrho_i^d]\Big) = \sum_i p_i \Big(\varrho_i^n \otimes \Lambda[\varrho_i^d]\Big) \geq 0 \, .$$
If instead the state $\varrho$ of the bipartite system is entangled ($\varrho \equiv \varrho^{ent}$), it cannot be written as a convex combination of product states as above, and therefore, in order to have $(id_n \otimes \Lambda)[\varrho^{ent}] \geq 0$, the tensor product $id_n \otimes \Lambda$ must be positive for every $n$, i.e. the map $\Lambda$ must be completely positive.
Therefore positive but not completely positive (PnCP) maps move entangled states out of the space of physical states and thus are a useful tool in the identification of separable or entangled states via separability criteria, such as the following.
Theorem [Separability criterion via PnCP maps]: A state $\varrho \in \mathcal{S}_{d \times d}$ is separable if and only if $(id_d \otimes \Lambda)[\varrho] \geq 0$ for all PnCP maps $\Lambda : M_d \to M_d$.
The following theorem provides an operationally useful separability criterion:
Theorem: A state $\varrho \in \mathcal{S}_{d \times d}$ is entangled if and only if there exists a PnCP map $\Lambda$ such that
$$Tr[(id_d \otimes \Lambda)[P_d^+]\varrho] < 0 .$$
Here $P_d^+$ is the projector onto the maximally entangled state $|\Psi_d^+\rangle = \frac{1}{\sqrt{d}}\sum_{i=1}^d |i \rangle \otimes |i\rangle$. The operator $(id_d \otimes \Lambda)[P_d^+]$ is called an entanglement witness and is uniquely associated to the positive map $\Lambda$ via the Choi-Jamiolkowski isomorphism.
The simplest example of a PnCP map is transposition, from which we get the PPT criterion.
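For concreteness, the PPT (partial transposition) test can be implemented in a few lines of NumPy. This is a standard textbook check, not specific to this article: the Bell state fails it (hence is certified entangled), while the maximally mixed state passes.

```python
import numpy as np

def partial_transpose(rho, d_a, d_b):
    """Transpose the second (B) factor of a (d_a*d_b) x (d_a*d_b) density matrix."""
    r = rho.reshape(d_a, d_b, d_a, d_b)            # indices (i, mu, j, nu)
    return r.transpose(0, 3, 2, 1).reshape(d_a * d_b, d_a * d_b)

def is_ppt(rho, d_a, d_b, tol=1e-10):
    """Peres (PPT) test: every separable state has a positive partial transpose."""
    return float(np.linalg.eigvalsh(partial_transpose(rho, d_a, d_b)).min()) > -tol

# the Bell state (|00> + |11>)/sqrt(2) versus the maximally mixed state
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
bell = np.outer(psi, psi)
```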
But there are also two other PnCP maps that provide important separability criteria.
Reduction criterion
Since it is based on a decomposable map, this criterion is not very strong; however, it is interesting because it plays an important role in entanglement distillation and it leads to the extended reduction criterion, which we will analyze in the following subsection.
Definition: The linear map $\Lambda_r : \mathcal{S}_{d_A \times d_B} \to \mathcal{S}_{d_A \times d_B}$ such that
$$\Lambda_r[\varrho] = \mathbf{I}\,(\mathrm{Tr}\,\varrho) - \varrho,$$
with $\varrho \in \mathcal{S}_{d_A \times d_B}$ and $\mathbf{I}$ the identity operator, is called the reduction map.
It can be easily proved that the reduction map is positive but not completely positive (PnCP) and decomposable.
Theorem [Reduction criterion]: If the state $\; \varrho_{AB} \in \mathcal{S}_{d_A \times d_B}$ is separable, then $\; (\mathbf{I} \otimes \Lambda_r)[\varrho_{AB} ] \geq 0$, i.e. the following two conditions hold:
$$\; \varrho_A \otimes \mathbf{I}_B - \varrho_{AB} \geq 0 \qquad \mathbf{I}_A \otimes \varrho_B - \varrho_{AB} \geq 0 ,$$
where $\; \varrho_A$ and $\; \varrho_B$ are the reduced density matrices of the subsystems SA and SB respectively.
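The two reduction-criterion conditions can be checked numerically. The code below is an illustrative sketch on standard example states: the Bell state violates the criterion, the maximally mixed state does not.

```python
import numpy as np

def reduced_states(rho, d_a, d_b):
    """Reduced density matrices of a bipartite state."""
    r = rho.reshape(d_a, d_b, d_a, d_b)              # indices (i, mu, j, nu)
    rho_a = np.trace(r, axis1=1, axis2=3)            # trace out subsystem B
    rho_b = np.trace(r, axis1=0, axis2=2)            # trace out subsystem A
    return rho_a, rho_b

def violates_reduction(rho, d_a, d_b, tol=1e-10):
    """True if either reduction-criterion matrix has a negative eigenvalue."""
    rho_a, rho_b = reduced_states(rho, d_a, d_b)
    m1 = np.kron(rho_a, np.eye(d_b)) - rho
    m2 = np.kron(np.eye(d_a), rho_b) - rho
    return (np.linalg.eigvalsh(m1).min() < -tol) or (np.linalg.eigvalsh(m2).min() < -tol)

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
bell = np.outer(psi, psi)
```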
Extended reduction criterion
This criterion is based on a PnCP non-decomposable map, found independently by Breuer and Hall, which is an extension of the reduction map to even-dimensional Hilbert spaces with $d = 2k$. On these spaces there exist antisymmetric unitary operators $U^T = -U$. The corresponding antiunitary map $U[\,\cdot\,]^T U^\dagger$ maps any pure state to a state orthogonal to it. Therefore we can define the positive map $\Lambda_{er}$ as follows.
Definition: The linear map $\Lambda_{er} : \mathcal{S}_d \to \mathcal{S}_d$ such that
$$\Lambda_{er}[\varrho] = \Lambda_r[\varrho] - U\varrho^T U^\dagger$$
is called extended reduction map.
This map is positive but not completely positive and non-decomposable; moreover, the entanglement witness corresponding to $\Lambda_{er}$ can be proved to be optimal.
From $\Lambda_{er}$ we get the following separability condition.
Theorem [Extended reduction criterion]: If the state $\; \varrho \in \mathcal{S}_{d \times d }$ is separable, then $\; (\mathbf{I}_d \otimes \Lambda_{er})[\varrho] \geq 0$.
Notice that, since $\Lambda_{er}$ is non-decomposable, it can detect the entanglement of PPT entangled states and thus turns out to be useful for characterizing the entanglement properties of various classes of quantum states.
Other separability criteria
There are also separability criteria which are not based on PnCP maps, such as the range criterion and the matrix realignment criterion.
Range criterion
Let us consider a state $\varrho_{AB}$ where the dimensions of the two subsystems are $d_A$ and $d_B$ respectively. If $d_A \cdot d_B > 6$, then there exist states which are entangled but nevertheless PPT. Therefore, a separability criterion independent of the PPT criterion is needed in order to detect the entanglement of these states. This can be done with separability criteria based on PnCP maps where the chosen PnCP map is not decomposable. However, in (P. Horodecki, Phys. Lett. A 232, 1997) another criterion was formulated specifically to detect the entanglement of some PPT states: the range criterion.
Range criterion: If the state $\varrho_{AB}$ is separable, then there exists a set of product vectors $\{\psi_i^A \otimes \phi_i^B\}$ that spans the range of $\varrho_{AB}$, while $\{\psi_i^A \otimes (\phi_i^B)^*\}$ spans the range of the partial transpose $\varrho_{AB}^{T_B}$, where the complex conjugation $(\phi_i^B)^*$ is taken in the same basis in which the partial transposition operation on $\varrho_{AB}$ is performed.
An interesting application of the range criterion in detecting PPT entangled states is the unextendible product basis method.
Definition: An unextendible product basis is a set $S_{UPB}$ of orthonormal product vectors in $\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B$ such that there is no product vector that is orthogonal to all of them.
Thus, from the definition it directly follows that any vector belonging to the orthogonal subspace $\mathcal{H}_{UPB}^{\perp}$ is entangled and, by the range criterion, any mixed state with support contained in $\mathcal{H}_{UPB}^{\perp}$ is entangled.
Matrix realignment criterion and linear contractions criteria
Another strong class of separability criteria, independent of the criteria based on PnCP maps and, in particular, of the PPT criterion, consists of those based on linear contractions on product states.
Matrix realignment criterion or computable cross norm (CCN) criterion: If the state $\varrho_{AB}$ is separable, then the matrix $\mathcal{R}(\varrho_{AB})$ with elements
$$\langle m|\langle \mu| \mathcal{R}(\varrho_{AB})|n\rangle |\nu\rangle \equiv \langle m|\langle n| \varrho_{AB}|\nu \rangle |\mu\rangle$$
has trace norm not greater than 1.
The above condition can be generalized as follows.
Linear contraction criterion: If the map $\Lambda$ satisfies the condition
$$||\Lambda[|\phi_A\rangle\langle\phi_A| \otimes |\phi_B\rangle\langle\phi_B|]||_{Tr} \leq 1$$
for all pure product states $|\phi_A\rangle\langle\phi_A| \otimes |\phi_B\rangle\langle\phi_B|$, then for any separable state $\varrho_{AB}$ one has $||\Lambda[\varrho_{AB}]||_{Tr} \leq 1$.
The matrix realignment criterion is just a particular case of the above criterion in which the matrix realignment map $\mathcal{R}$, which permutes matrix elements, satisfies the above contraction condition on product states. Moreover, this criterion has been found to be useful for the detection of some PPT entangled states.
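A NumPy sketch of the realignment test. The index convention below differs superficially from the one displayed in the text, but the singular values, and hence the trace norm, are the same, so the criterion is unaffected. The example states are standard illustrations.

```python
import numpy as np

def realign(rho, d_a, d_b):
    """Realignment map: R(rho)[(m,n),(mu,nu)] = rho[(m,mu),(n,nu)]."""
    r = rho.reshape(d_a, d_b, d_a, d_b)              # indices (m, mu, n, nu)
    return r.transpose(0, 2, 1, 3).reshape(d_a * d_a, d_b * d_b)

def trace_norm(m):
    """Sum of singular values."""
    return float(np.linalg.svd(m, compute_uv=False).sum())

# the Bell state has ||R(rho)||_tr = 2 > 1 (entangled); I/4 stays below 1
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
bell = np.outer(psi, psi)
```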
M. Keyl, Phys. Rep. 369, no.5, 431-548 (2002).
C. H. Bennett et al., Phys. Rev. Lett. 76, 722 (1996).
M. Horodecki, P. Horodecki, R. Horodecki, Phys. Lett. A 223, 1 (1996).
G. Lindblad, Commun. Math. Phys. 40, 147-151 (1975).
M. D. Choi, Linear Alg. Appl. 10, 285 (1975).
A. Jamiolkowski, Rep. Math. Phys. 3, 275 (1972).
N. Cerf, R. Adami, Phys. Rev. A 60, 898-909 (1999).
H.-P. Breuer, Phys. Rev. Lett. 97, 080501 (2006).
W. Hall, Construction of indecomposable positive maps based on a new criterion for indecomposability, e-print quant-ph/0607035.
M. Horodecki, P. Horodecki, R. Horodecki, Phys. Rev. Lett. 78, 574 (1997)
C. H. Bennet et al., Phys. Rev. Lett. 82, 5385 (1999)
D. DiVincenzo et al., Comm. Math. Phys. 238, 379 (2003)
O. Rudolph, Lett. Math. Phys. 70, 57 (2004)
K. Chen, L.-A. Wu, Quantum Inf. Comp. 3, 193 (2003)
Category:Entanglement Category:Handbook of Quantum Information | CommonCrawl |
The New Palgrave Dictionary of Economics
| Editors: Palgrave Macmillan
Growth and Learning-By-Doing
Paul Beaudry
DOI: https://doi.org/10.1057/978-1-349-95121-5_2027-1
Learning by doing refers to improvements in productive efficiency arising from the generation of experience obtained by producing a good or service. The formal modelling of learning by doing was initiated in Arrow (1962) and was motivated by two main factors. The first motivating factor was empirical: several studies of wartime production found that input requirements decreased as a result of production experience. For example, Searle (1945) studied productivity changes in the Second World War shipbuilding programmes. During the Second World War, US production of ships increased dramatically, from 26 vessels in 1939 to 1,900 ships in 1943, an almost fiftyfold increase. Searle (1945) noticed that unit labour requirements decreased at a constant rate for a given percentage increase in output. On average, a doubling of output was associated with declines of 16 to 22 per cent in the number of man-hours required to build Liberty ships, Victory ships, tankers and standard cargo vessels. Alchian (1963) studied the relationship between the amount of direct labour required to produce an airframe and the number of airframes produced in the United States during the Second World War. He found that a doubling of production experience decreased labour input by approximately one-third. Other empirical studies of learning by doing include Rapping (1965), Irwin and Klenow (1994) and Thornton and Thompson (2001).
Keywords: Arrow, K.; Economic growth theory; Learning by doing; Lucas, R.; Productive efficiency
JEL Classifications
This chapter was originally published in The New Palgrave Dictionary of Economics, 2nd edition, 2008. Edited by Steven N. Durlauf and Lawrence E. Blume
The second motivating factor behind the work of Arrow (1962) was a search for a theory of economic growth which did not rely on exogenous change in productivity as a driving force. In particular, Arrow's contribution and its extensions in Levhari (1966a, b) were to show how economic growth could be sustained in a market with perfect competition. Arrow's original model is quite sophisticated, but the main insight can be derived in a simpler setting, as shown in Sheshinski (1967) and presented here. Consider a one good economy, where the production of the good requires capital and labour input according to the constant returns to scale production function:
$$ Y=F\left(K,AL\right),\qquad F\left(\lambda K,\lambda AL\right)=\lambda F\left(K,AL\right). $$
In this specification of the production technology, A represents the efficiency of labour in producing the good. The main idea in the learning by doing literature is that A is a function of past experience. Arrow assumed that experience can be measured by cumulative investment or, in other words, the capital stock. The form of the relationship between A and the capital stock is posited to be:
$$ {A}_t={\left({K}_t\right)}^{\alpha },0<\alpha <1 $$
where the assumption that 0 < α < 1 is motivated by the empirical studies. In order to close the system, assume that the labour force grows exponentially at the rate η and let capital accumulation be driven by a constant saving rate s out of income where, in the absence of depreciation, this implies
$$ \dot{K}=sY $$
In this environment, on the assumption that the change in A is an unintended consequence of production, it can be shown that a balanced growth path exists where per-capita income and per-capita capital grow at the rate
$$ \alpha \frac{\eta }{1-\alpha } $$
The two important aspects to note about the resulting growth rate are that it is positive if η > 0 and that it is independent of the savings rate s. The additional property – that the rate of growth of income is tied to a positive rate of population growth – is generally seen as a weakness of this type of model. This property can be partially remedied, as shown in Romer (1986), if one assumes that α = 1. In this case, even in the absence of labour force growth there exists a balanced growth path where the rate of growth is given by
$$ sF\left(1,L\right) $$
The drawback of this specification (α = 1) is that the growth rate now depends on the size of the labour force, which is referred to as a 'scale effect'. The attractive feature of this specification is that the growth rate can be modified by an economic decision variable such as the savings rate. An alternative way of modifying Arrow's original model is to posit, as in Lucas (1988), that A depends on the per-capita value of the capital stock instead of on the level of the capital stock. This assumption is justified in Lucas (1988) on the grounds that A reflects the knowledge of the average worker with respect to how best to operate the technology. In the case where the relationship is given by \( A=\frac{K}{L} \), the steady growth rate of per-capita output is given by \( sF(1,1) - \eta \). This formulation has the attractive property that it is positive even if η = 0, and it does not exhibit a scale effect. Accordingly it offers a succinct theory of economic growth. Lucas conjectured that the assumption of constant returns to learning (that is, α = 1) could be justified in a model where there is bounded learning in any one good but where there is continual entry of new goods over time. This idea is formally studied in Stokey (1988) and Young (1993). There is also a large literature that discusses how learning by doing can interact with international trade and potentially give rise to income divergence across countries; see for example Lucas (1993) and Young (1991).
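The balanced-growth result αη/(1−α) can be checked numerically. The sketch below iterates a discrete-time version of Arrow's model; the Cobb-Douglas technology and all parameter values are my illustrative assumptions, not Arrow's specification.

```python
import math

def simulate_arrow(alpha=0.5, theta=0.3, s=0.2, eta=0.02, T=2000):
    """Discrete-time Arrow learning-by-doing model with Cobb-Douglas technology
    F(K, AL) = K**theta * (A*L)**(1 - theta) and learning A = K**alpha.
    Returns the per-capita capital growth rate measured on the last step."""
    K, L = 1.0, 1.0
    for _ in range(T):
        A = K ** alpha
        K, L = K + s * (K ** theta) * ((A * L) ** (1 - theta)), L * math.exp(eta)
    # one extra step to measure the (by now balanced) per-capita growth rate
    A = K ** alpha
    K2 = K + s * (K ** theta) * ((A * L) ** (1 - theta))
    L2 = L * math.exp(eta)
    return math.log((K2 / L2) / (K / L))

g = simulate_arrow()    # theory predicts alpha * eta / (1 - alpha) = 0.02 here
```

After the transition dies out, the measured per-capita growth rate matches αη/(1−α) and, as the text notes, it does not move with the savings rate s.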
Alchian, A. 1963. Reliability of progress curves in airframe production. Econometrica 31: 679–693.CrossRefGoogle Scholar
Arrow, K. 1962. The economic implications of learning by doing. Review of Economic Studies 29: 155–173.CrossRefGoogle Scholar
Irwin, D., and P. Klenow. 1994. Learning by doing spillovers in the semiconductor industry. Journal of Political Economy 102: 1200–1227.CrossRefGoogle Scholar
Levhari, D. 1966a. Further implications of learning by doing. Review of Economic Studies 33: 31–38.CrossRefGoogle Scholar
Levhari, D. 1966b. Extensions of arrow's 'Learning by Doing'. Review of Economic Studies 33: 117–131.CrossRefGoogle Scholar
Lucas, R. Jr. 1988. On the mechanics of economic development. Journal of Monetary Economics 22: 3–42.CrossRefGoogle Scholar
Lucas, R. Jr. 1993. Making a miracle. Econometrica 61: 251–272.CrossRefGoogle Scholar
Rapping, L. 1965. Learning and World War II production functions. Review of Economics and Statistics 47: 81–86.CrossRefGoogle Scholar
Romer, P. 1986. Increasing returns and long run growth. Journal of Political Economy 94: 1002–1037.CrossRefGoogle Scholar
Searle, A. 1945. Productivity of labour and industry. Monthly Labor Review 61: 1132–1147.Google Scholar
Sheshinski, E. 1967. Optimal accumulation with learning by doing. In Essays on the theory of economic growth, ed. K. Shell. Cambridge, MA: MIT Press.Google Scholar
Stokey, N. 1988. Learning by doing and the introduction of new goods. Journal of Political Economy 96: 701–717.CrossRefGoogle Scholar
Thornton, R., and P. Thompson. 2001. Learning from experience and learning from others: An exploration of learning and spillovers in wartime shipbuilding. American Economic Review 91: 1350–1368.CrossRefGoogle Scholar
Young, A. 1991. Learning by doing and the dynamic effects of international trade. Quarterly Journal of Economics 106: 369–405.CrossRefGoogle Scholar
Young, A. 1993. Invention and bounded learning by doing. Journal of Political Economy 101: 443–472.CrossRefGoogle Scholar
Beaudry P. (2008) Growth and Learning-By-Doing. In: Palgrave Macmillan (eds) The New Palgrave Dictionary of Economics. Palgrave Macmillan, London
Received 09 September 2016
Accepted 09 September 2016
| CommonCrawl |
Because of their connections with public-key cryptography, trapdoor functions are surrounded by a lot of mystery. While one-way functions (functions that are easy to compute, yet hard to invert) like integer multiplication are familiar and intuitive, the idea of a function that is hard to invert except if one possesses some secret, the "trapdoor", seems a more remote possibility. The few such functions suitable for cryptographic purposes that have been found (like the modular exponentiation functions as used by the RSA and Rabin cryptosystems) unsurprisingly require heavy use of number theory to understand and analyze.
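The Rabin case mentioned above admits a compact illustration: extracting square roots modulo $N$ is as hard as factoring, because two essentially different roots of the same value reveal a factor through a gcd. The toy modulus and numbers below are of course hypothetical and far too small for cryptographic use; with the trapdoor (the factors of $N$) a second root would be found via the CRT rather than by brute force.

```python
import math

def factor_from_roots(N, x, y):
    """Given x^2 = y^2 (mod N) with y != +/-x (mod N), gcd(x - y, N) is a
    nontrivial factor of N -- the classical link between square-root
    extraction mod N and factoring that underlies the Rabin trapdoor."""
    assert (x * x - y * y) % N == 0
    return math.gcd(x - y, N)

p, q = 101, 103                 # the trapdoor: the secret factors of N
N = p * q
x = 1234
target = (x * x) % N
# brute-force a second, essentially different square root of the same value
y = next(z for z in range(2, N)
         if (z * z) % N == target and z not in (x, N - x))
factor = factor_from_roots(N, x, y)
```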
For every $m \in \mathbb N$ there exists a (not necessarily unique) $n \in \mathbb N$ such that $f_N(n) = m$.
For every $m > 0$, finding any $n$ with $f_N(n) = m$ is equivalent to factoring $N$.
What at first glance might appear incomprehensible is actually a rather simple decision procedure, obfuscated through arithmetic.
Since the factorization is nontrivial and preimages for any values of $g$ and $h$ are readily found, it follows that knowing the prime factors $p$ and $q$ of $N$ is both necessary and sufficient for making the original function $f_N$ produce any value other than $0$. | CommonCrawl |
\begin{definition}[Definition:Vertical Line/Definition 2]
A line is '''vertical''' {{iff}} it is parallel to the path taken by a body released from rest in a constant gravitational field.
\end{definition}
Asian-Australasian Journal of Animal Sciences
Published by the Asian Australasian Association of Animal Production Societies
Effects of Fermented Potato Pulp on Performance, Nutrient Digestibility, Carcass Traits and Plasma Parameters of Growing-finishing Pigs
Li, P.F. (Ministry of Agriculture Feed Industry Centre, State Key Laboratory of Animal Nutrition, China Agricultural University) ;
Xue, L.F. (Ministry of Agriculture Feed Industry Centre, State Key Laboratory of Animal Nutrition, China Agricultural University) ;
Zhang, R.F. (Ministry of Agriculture Feed Industry Centre, State Key Laboratory of Animal Nutrition, China Agricultural University) ;
Piao, Xiangshu (Ministry of Agriculture Feed Industry Centre, State Key Laboratory of Animal Nutrition, China Agricultural University) ;
Zeng, Z.K. (Ministry of Agriculture Feed Industry Centre, State Key Laboratory of Animal Nutrition, China Agricultural University) ;
Zhan, J.S. (Daqing Essence Starch Co., Ltd.)
Received : 2011.06.08
Accepted : 2011.07.20
Published : 2011.10.01
A total of 629 Duroc${\times}$Landrace${\times}$Large White crossbred pigs were utilized in three experiments (Exp. 1, 222 pigs weighing $25.6{\pm}2.0\;kg$ BW; Exp. 2, 216 pigs weighing $56.2{\pm}4.3\;kg$ BW; Exp. 3, 191 pigs weighing $86.4{\pm}4.6\;kg$ BW) conducted to determine the effects of fermented potato pulp on performance, nutrient digestibility, carcass traits and plasma parameters in growing-finishing pigs. Each experiment lasted 28 d. The pigs were assigned to one of two corn-soybean meal-based diets containing 0 or 5% fermented potato pulp. The inclusion of fermented potato pulp increased weight gain (p<0.05) in experiments 1 and 2 and increased feed intake (p<0.05) in experiment 2. Feed conversion was improved (p<0.05) in experiment 2 and showed a tendency to improve (p<0.10) in experiments 1 and 3 when pigs were fed fermented potato pulp. Fermented potato pulp increased (p<0.05) dry matter digestibility in experiments 1 and 3 and energy digestibility in experiment 2. Feeding fermented potato pulp decreased plasma urea nitrogen (p<0.05) and alanine aminotransferase (p<0.05) in experiments 1 and 2, while plasma aspartate aminotransferase was decreased (p<0.05) in experiment 3. Dietary fermented potato pulp did not affect the carcass characteristics of finishing pigs. Feeding fermented potato pulp reduced (p<0.05) fecal ammonia concentration in all three experiments. In conclusion, feeding growing-finishing pigs diets containing 5% fermented potato pulp improved weight gain and feed conversion without any detrimental effects on carcass traits. The improvements in pig performance appeared to be mediated by improvements in nutrient digestibility.
Fermented Potato Pulp;Growing-finishing Pigs;Performance;Blood Parameters;Carcass Characteristics;Fecal Noxious Gas
\begin{document}
\title
{Action of micro-differential operators on quantized contact transformations}
\author{Mehdi Benchoufi}
\maketitle
\begin{abstract}
Quantized contact transforms (QCT) have been constructed in~\cite{SKK73}. We give here a complete proof of the fact that such QCT commute with the action of microdifferential operators. To our knowledge, such a proof did not exist in the literature. We apply this result to the microlocal Radon transform. \end{abstract}
\tableofcontents
\section{Introduction} \subsection{Overview of the results}
For a manifold $M$, let us denote by $T^*M$ the cotangent bundle of $M$ and by $\dT{}^*M$ the bundle $T^*M$ with the zero-section removed. We will consider the following situation: let $X$ and $Y$ be two complex manifolds of the same dimension, let $Z$ be a closed submanifold of $X\times Y$, let $U\subset \dT{}^*X$ and $V\subset \dT{}^*Y$ be open subsets, and assume that the conormal bundle $\dT{}^*_Z(X\times Y)$ induces a contact transformation \eq\label{eq:diag2} &&\xymatrix{
&\dT{}^*_Z(X\times Y)\cap (U\times V^a)\ar[ld]_-{\sim}\ar[rd]^-{\sim}&\\
\dT{}^*X\supset U\ar[rr]^-{\sim}&&V\subset \dT{}^*Y. } \eneq Let $F\in\Derb(X)$ and let $\phi_K(F)$ denote the integral transform of $F$ with kernel $K\in\Derb(X\times Y)$. Assume moreover that $X$ and $Y$ are complexifications of real analytic manifolds $M$ and $N$, respectively. We will prove an isomorphism between $\muhom(F,\sho_X)$ on $U\cap T_M^*X$ and $\muhom(\phi_K(F),\sho_Y)$ on $V\cap T_N^*Y$, which follows immediately from \cite[Lem.~11.4.3]{KS90}. Our main result will be the commutation of this isomorphism with the action of microdifferential operators. Although considered well known, the proof of this commutation does not appear clearly in the literature (see \cite[p.~467]{SKK73}), and is far from being obvious. In fact, we will consider a more general setting, replacing sheaves of microfunctions with sheaves of the type $\muhom(F,\sho_X)$. It is known that, under suitable hypotheses, one can quantize this contact transform and get an isomorphism between microfunctions on $U\cap T_M^*X$ and microfunctions on $V\cap T_N^*Y$ \cite{KKK86}.
Then, we will specialize our results to the case of projective duality. We will study the {\em microlocal} Radon transform understood as a quantization of projective duality, both in the real and the complex case.
In the real case, denote by $P$ the real projective space (say of dimension $n$), by $P^*$ its dual and by $S$ the incidence relation: \eq\label{eq:incidence1} &&S\eqdot\{(x,\xi)\in P\times P^*; \langle x,\xi\rangle=0\}. \eneq In this setting, there is a well-known correspondence between distributions on $P$ and $P^*$, due to Gelfand, Gindikin and Graev \cite{GGG} and to Helgason \cite{Hel}. However, it has been known since the 1970s, under the influence of Sato's school, that to fully understand what happens on real (analytic) manifolds, it may be worth looking at their complexifications.
Hence, denote by $\BBP$ the complex projective space of dimension $n$, by $\BBP^*$ the dual projective space and by $\BBS\subset\BBP\times\BBP^*$ the incidence relation. We have the correspondence \eq\label{eq:diag1} &&\xymatrix@C-0pc@R+0pc{
&\dT{}^*_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)\ar[ld]_-{\sim}\ar[rd]^-{\sim}&\\
\dT{}^*\mathbb{P}\ar[rr]^-{\sim}&&\dT{}^*\mathbb{P}^* } \eneq
This contact transformation induces an equivalence of categories between perverse sheaves modulo constant ones on the complex projective space and perverse sheaves modulo constant ones on its dual, as shown by Brylinski \cite{B86}, or between coherent $\mathcal{D}$-modules modulo flat connections, as shown by D'Agnolo-Schapira \cite{DS94}.
In continuation of the previous cited works, we shall consider the contact transform induced by~\eqref{eq:diag1} \eq\label{eq:diag2} &&\xymatrix{
&\dT{}^*_\BBS(\BBP\times\BBP^*)\cap (\dT{}^*_P\BBP\times \dT{}^*_{P^*}\BBP^*)\ar[ld]_-{\sim}\ar[rd]^-{\sim}&\\
\dT{}_P^*\BBP\ar[rr]^-{\sim}&&\dT{}_{P^*}^*\BBP^* } \eneq The above contact transformation leads to the well-known fact that the Radon transform establishes an isomorphism of sheaves of microfunctions on $P$ and $P^*$ (see~\cite{KKK86}). We will apply our main result to prove the commutation of this isomorphism with the action of microdifferential operators.
\subsection{Main theorems}\label{The problem}
We will use the language of sheaves and $\mathcal{D}$-modules, and we refer the reader to \cite{KS90} and \cite{K03} for a detailed development of these topics. We will denote by $\cor$ a commutative unital ring of finite global dimension.
\paragraph*{Notations for integral transforms}\label{def:radon_transforms} Let $X,Y$ be real manifolds and $S$ a closed submanifold of $X\times Y$. Consider the diagrams $X \from[f] S \to[g] Y$, $X \from[q_1] X\times Y \to[q_2] Y$. Let $F\in\Derb(\cor_Y)$ and $K\in\Derb(\cor_{X\times Y})$. The integral transform of $F$ with respect to the kernel $K$ is defined to be $\Phi_{K}(F):=\reim{q_1}(K\tens\opb{q_2}F)$. We will denote by $\Phi_{S}(F)$ the integral transform of $F$ with respect to the kernel $\cor_S[d_S-d_X]$.
\paragraph*{Results on the functor $\muhom$ }\label{sec:pre_notations_results}
To establish our main results, we will need the following complement on the functor $\muhom$.
For $(M_i)_{i=1,2,3}$, three manifolds, we write $M_{ij}\eqdot M_i\times M_j$ ($1\leq i,j\leq3$). We consider the operation of composition of kernels: \eq\label{eq:conv} &&\ba{l} \conv[2]\;\cl\;\Derb(\cor_{M_{12}})\times\Derb(\cor_{M_{23}})\to\Derb(\cor_{M_{13}})\\ \hs{10ex}\ba{rcl}(K_1,K_2)\mapsto K_1\conv[2] K_2&\eqdot& \reim{q_{13}}(\opb{q_{12}}K_1\tens\opb{q_{23}}K_2)\\ &\simeq&\reim{q_{13}}\opb{\delta_2}(K_1\etens K_2).\ea \ea \eneq
We add a subscript $a$ to $p_j$ to denote by $p_j^a$ the composition of $p_j$ and the antipodal map on $T^*M_{j}$. We define the composition of kernels on cotangent bundles (see~\cite[Prop.~4.4.11]{KS90}) \eq\label{eq:aconv} &&\hs{-0ex}\ba{rcl} \aconv[2]\;\cl\;\Derb(\cor_{T^*M_{12}})\times\Derb(\cor_{T^*M_{23}}) &\to&\Derb(\cor_{T^*M_{13}})\\ (K_1,K_2)&\mapsto&K_1\aconv[2] K_2\eqdot \reim{p_{13}}(\opb{p_{12^a}} K_1\tens\opb{p_{23}} K_2)\\ \ea \eneq
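As a sanity check on the composition \eqref{eq:conv} (a standard fact, which follows from the projection formula), the constant sheaf on the diagonal acts as a unit: taking $M_3=M_2$, one has, for any $K\in\Derb(\cor_{M_{12}})$,
\eqn
&&K\conv[2]\cor_{\Delta_{M_2}}\simeq K.
\eneqn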
Let $F_i$, $G_i$, $H_i$ ($i=1,2$) be objects of $\Derb(\cor_{M_{12}})$, $\Derb(\cor_{M_{23}})$ and $\Derb(\cor_{M_{34}})$, respectively.
Let $U_i$ be an open subset of $T^*M_{ij}$ $\lp i=1,2$, $j=i+1\rp$ and set
\eqn
U_3=U_1\aconv[2]U_2\eqdot p_{13}(\opb{p_{12^a}}(U_1)\cap\opb{p_{23}}(U_2))
\eneqn
In \cite{KS90}, a canonical morphism in $\Derb(\cor_{T^*M_{13}})$ is constructed
\eq\label{aconv_natural_morphism}
&&\muhom(F_1,F_2)\vert_{U_1}\aconv[2]\muhom(G_1,G_2)\vert_{U_2}\to\muhom(F_1\conv[2]G_1,F_2\conv[2]G_2)\vert_{U_3}.
\eneq
We will show that the composition $\aconv$ is associative, and that the morphism (\ref{aconv_natural_morphism}) is compatible with this associativity.
\paragraph*{Complex contact transformations} Consider now two complex manifolds $X$ and $Y$ of the same dimension $n$, open $\mathbb{C}^{\times}$-conic subsets $U$ and $V$ of $\dT X$ and $\dT Y$, respectively, and a smooth closed submanifold $\Lambda$ of $U\times V^a$. Assume that the projections $p_1\vert_\Lambda$ and $p_2^a\vert_\Lambda$ induce isomorphisms, hence a homogeneous symplectic isomorphism $\chi\cl U\isoto V$:
\eqn &&\xymatrix{
&\Lambda\subset U\times V^a\ar[ld]^-{p_1}_-{\sim}\ar[rd]_-{p_2^a}^-{\sim}&\\
\dTX\supset U\ar[rr]^-{\sim}_-{\chi}& &V\subset \dTY } \eneqn
Let us consider a perverse sheaf $L$ on $X\times Y$ satisfying $(\opb{p_1}(U)\cup\opb{{p_2^a}}(V))\cap\SSi(L)\subset\Lambda$ and a section $s$ of $\muhom(L,\Omega_{X\times Y/X})$ on $\Lambda$, where $\Omega_{X\times Y/X}:=\mathcal{O}_{X\times Y}\tens_{\opb{q_2}\mathcal{O}_{Y}}\opb{q_2}\Omega_Y$. Recall that one denotes by $\she_X^{\mathbb{R}}$ the sheaf of rings $\she_X^{\mathbb{R}}:=\muhom(\mathbb{C}_{\Delta_{X}},\Omega_{X\times X/X})[d_X]$, and by $\she_{X}$ the subsheaf of $\she_X^{\mathbb{R}}$ of finite order microdifferential operators. In the following theorem, statement (i) is well known, see \cite{SKK73}, and (ii) is proved in \cite{KS90}. The fact that the isomorphism (\ref{eq:intro_qct_main_theorem}) is compatible with the action of microdifferential operators was established at the germ level in \cite{KS90}; from a global perspective, it was announced for microfunctions in various papers, but to our knowledge no detailed proof exists. We will prove our main theorem:
\begin{theorem}\label{th:microfunction_contact_iso}
Let $G\in\Derb(\C_Y)$ and assume to be given a section $s$ of $\muhom(L,\Omega_{X\times Y/X})$, non-degenerate on $\Lambda$.
\bnum
\item For $W\subset U$, $P\in\she_X(W)$, there is a unique $Q\in\she_Y(\chi(W))$ satisfying $P\cdot s=s\cdot Q$ ($P,Q$ considered as sections of $\she_{X\times Y}$). The morphism induced by $s$
\eqn
\opb{\chi}\she_Y\vert_{V}\to\she_X\vert_{U}
\eneqn
\eqn
P\mapsto Q
\eneqn
is a ring isomorphism.
\item We have the following isomorphism in $\Derb(\C_U)$
\eq\label{eq:intro_qct_main_theorem}
&&\opb{\chi}\muhom(G,\sho_Y)\vert_V\isoto\muhom(\Phi_{L[n]}(G),\sho_X)\vert_U
\eneq
\item The isomorphism (\ref{eq:intro_qct_main_theorem}) is compatible with the action of $\she_Y$ and $\she_X$ on the left and right side of (\ref{eq:intro_qct_main_theorem}) respectively.
\enum \end{theorem}
We will see that the action of microdifferential operators in Theorem \ref{th:microfunction_contact_iso} (iii) is derived from the morphism (\ref{aconv_natural_morphism}).
\paragraph*{Projective duality for microfunctions}\label{sec:notation_from_results}
For $M$ a real analytic manifold and $X$ its complexification, we shall identify $T_M^*X$ with $i\cdot T^*M$. We denote by $\sha_M$, $\shb_M$, $\shc_M$ the sheaves of real analytic functions, hyperfunctions and microfunctions, respectively.
In this article, we will quantize the contact transform associated with the Lagrangian submanifold $\dT{}_\mathbb{S}^*(\mathbb{P}\times\mathbb{P}^*)$. We will construct and denote by $\chi$ the homogeneous symplectic isomorphism between $\dT{}^*\mathbb{P}$ and $\dT{}^*\mathbb{P}^*$.
For $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we denote by $\mathbb{C}_{P}(\varepsilon)$ the locally constant sheaf of rank one on $P$ attached to $\varepsilon$ (see Section \ref{sec:notations_projective} for a precise definition).
For an integer $p\in\mathbb{Z}$ and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we will define the sheaves $\sha_{P}(\varepsilon,p)$ of real analytic functions, $\shb_P(\varepsilon,p)$ of hyperfunctions and $\mathscr{C}_{P}(\varepsilon,p)$ of microfunctions on $P$ (resp.\ $P^*$), twisted by some power of the tautological line bundle.
For $X,Y$ either the manifold $\mathbb{P}$ or $\mathbb{P}^*$ and any two integers $p,q$, we denote by $\mathcal{O}_{X\times Y}(p,q)$ the line bundle on $X\times Y$ with homogeneity $p$ in the $X$ variable and $q$ in the $Y$ variable. We set $\Omega_{X\times Y/X}(p,q):=\Omega_{X\times Y/X} \tens_{\mathcal{O}_{X\times Y}}\mathcal{O}_{X\times Y}(p,q)$, $\she_X^{\mathbb{R}}(p,q):=\muhom(\mathbb{C}_{\Delta_{X}},\Omega_{X\times X/X}(p,q))[d_X]$, and we define $\she_X(p,q)$ accordingly. Let us notice that $\she_X^{\mathbb{R}}(-p,p)$ is a sheaf of rings.
Let $n$ be the dimension of $P$ (of course $n=d_{\mathbb{P}}$). For an integer $k$ and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we set $k^{*}:=-n-1-k$ and $\varepsilon^{*}:=-n-1-\varepsilon\ (\mathrm{mod}\ 2)$. We have:
\begin{theorem}\label{pre_main_theorem}
\bnum
\item Let $k$ be an integer such that $-n-1<k<0$ and let $s$ be a global non-degenerate section on $\dT{}^*_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$ of $H^{1}(\muhom(\mathbb{C}_{\mathbb{S}},\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*)))$. For $P\in\she_\mathbb{P}(-k,k)$, there is a unique $Q\in\she_{\mathbb{P}^*}(-k^*,k^*)$ satisfying $P\cdot s=s\cdot Q$.
The morphism induced by $s$
\eqn
\oim{\chi}\she_{\mathbb{P}}(-k,k)\to\she_{\mathbb{P}^*}(-k^*,k^*)
\eneqn
\eqn
P\mapsto Q
\eneqn
is a ring isomorphism.
\item There exists such a non-degenerate section $s$.
\enum \end{theorem}
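Let us note that the map $k\mapsto k^{*}=-n-1-k$ is an involution preserving the range $-n-1<k<0$ appearing in Theorem \ref{pre_main_theorem}:
\eqn
&&(k^{*})^{*}=-n-1-(-n-1-k)=k,\qquad -n-1<k<0\iff -n-1<k^{*}<0.
\eneqn
For instance, for $n=2$ and $k=-1$, one gets $k^{*}=-2$.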
In fact, we will see that the non-degenerate section of Theorem \ref{pre_main_theorem} is provided by the Leray section.
Now, from classical adjunction formulas for $\she$-modules, we get a correspondence between solutions of systems of microdifferential equations on the projective space and solutions of systems of microdifferential equations on its dual. We will prove the following theorem, which was proved in \cite{DS96} for $\mathcal{D}$-modules:
\begin{theorem}
Let $k$ be an integer such that $-n-1<k<0$. Let $\mathcal{N}$ be a coherent $\she_{\mathbb{P}}(-k,k)$-module and $F\in\Derb(\mathbb{P})$. Then, we have an isomorphism in $\Derb(\C_{\dT{}^*\mathbb{P}^*})$
\eqn
\hspace{-15em}\oim{\chi}\rhom{_{\she_{\mathbb{P}}(-k,k)}}(\mathcal{N},\muhom(F,\mathcal{O}_\mathbb{P}(k)))\simeq
\eneqn
\eqn
\hspace{15em}\rhom{_{\she_{\mathbb{P}^*}(-k^*,k^*)}}(\underline{\Phi}_{\mathbb{S}}^{\mu}(\mathcal{N}),\muhom(\Phi_{\mathbb{C}_{\mathbb{S}}[-1]}F,\mathcal{O}_{\mathbb{P}^*}(k^*)))
\eneqn
\end{theorem} where $\underline{\Phi}_{\mathbb{S}}^{\mu}$ is the counterpart of $\underline{\Phi}_\mathbb{S}$ for $\she$-modules; it will be defined in Section \ref{sec:int_trans_emodules}.
Let us mention that, through a difficult result from \cite{KSIW06}, $\muhom(F,\mathcal{O}_\mathbb{P})$ is well-defined in the derived category of $\she$-modules.
\begin{corollary}\label{main_theorem}
Let $k$ be an integer such that $-n-1<k<0$ and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$. The section $s$ of Theorem \ref{pre_main_theorem} defines an isomorphism:
$$
\chi_{*}\mathscr{C}_{P}(\varepsilon,k)\vert_{\dT{}^*_P\mathbb{P}}\simeq\mathscr{C}_{P^{*}}(\varepsilon^*,k^{*})\vert_{\dT{}^*_{P^*}\mathbb{P}^*}
$$
Moreover, this morphism is compatible with the respective action of $\oim{\chi}\she_{\mathbb{P}}(-k,k)$ and $\she_{\mathbb{P}^*}(-k^*,k^*)$. \end{corollary}
\textbf{Acknowledgements} I would like to express my gratitude to Pierre Schapira for suggesting this problem to me and for his enlightening insights, to which this work owes much.
\section{Reminders on Algebraic Analysis and complements}\label{sec:reminder_algebraic_analysis}
In this section, we recall classical results of Algebraic Analysis, with the exception of section \ref{complements_muhom}.
\subsection{Notations for manifolds}\label{not:12345}
\bnum
\item Let $M_i$ ($i=1,2,3$) be manifolds. For short, we write $M_{ij}\eqdot M_i\times M_j$ ($1\leq i,j\leq3$), $M_{123}=M_1\times M_2\times M_3$, $M_{1223}=M_1\times M_2 \times M_2\times M_3$, etc.
\item $\delta_{M_i}\cl M_i\to M_i\times M_i$ denote the diagonal embedding, and $\Delta_{M_i}$ the diagonal set of $M_i\times M_i$.
\item We will often write for short $\cor_i$ instead of $\cor_{{M_i}}$ and $\cor_{\Delta_i}$ instead of $\cor_{\Delta_{M_i}}$ and similarly with $\omega_{M_i}$, etc., and with the index $i$ replaced with several indices $ij$, etc.
\item We denote by $\pi_i$, $\pi_{ij}$, etc.\ the projection $T^*M_{i}\to M_{i}$, $T^*M_{ij}\to M_{ij}$, etc.
\item For a fiber bundle $E\to M$, we denote by $\dot{E}\to M$ the fiber bundle with the zero-section removed.
\item We denote by $q_i$ the projection $M_{ij}\to M_i$ or the projection $M_{123}\to M_i$ and by $q_{ij}$ the projection $M_{123}\to M_{ij}$. Similarly, we denote by $p_i$ the projection $T^*M_{ij}\to T^*M_i$ or the projection $T^*M_{123}\to T^*M_i$ and by $p_{ij}$ the projection $T^*M_{123}\to T^*M_{ij}$.
\item We also need to introduce the maps $p_{j^a}$ or $p_{ij^a}$, the composition of $p_{j}$ or $p_{ij}$ and the antipodal map $a$ on $T^*M_j$. For example, \eqn &&p_{12^a}((x_1,x_2,x_3;\xi_1,\xi_2,\xi_3))=(x_1,x_2;\xi_1,-\xi_2). \eneqn \item We let $\delta_2\cl M_{123} \to M_{1223}$ be the natural diagonal embedding. \enum
\subsection{Sheaves}
We follow the notations of~\cite{KS90}.
Let $X$ be a good topological space, i.e. separated, locally compact, countable at infinity, of finite global cohomological dimension and let $\cor$ be a commutative unital ring of finite global dimension.
For a locally closed subset $Z$ of $X$, we denote by $\cor_Z$ the sheaf which is constant on $Z$ with stalk $\cor$ and zero elsewhere.
We denote by $\Derb(\cor_X)$ the bounded derived category of the category of sheaves of $\cor$-modules on $X$. If $\shr$ is a sheaf of rings, we denote by $\Derb(\shr)$ the bounded derived category of the category of left $\shr$-modules.
Let $Y$ be a good topological space and $f$ a morphism $Y\to X$. We denote by $\roim{f},\opb{f},\reim{f},\epb{f},\rhom,\ltens$ the six Grothendieck operations. We denote by $\etens$ the exterior tensor product.
We denote by $\omega_X$ the dualizing complex on $X$, by $\omega_X^{\otimes-1}$ the sheaf-inverse of $\omega_X$ and by $\omega_{Y/X}$ the relative dualizing complex.
In the following, we assume that $X$ is a real manifold. Recall that $\omega_X\simeq\ori_X\,[\dim X]$, where $\ori_X$ is the orientation sheaf and $\dim X$ the dimension of $X$. We denote by $\RD_X(\scbul)$ and $\RD'_X(\scbul)$ the duality functors $\RD_X(\scbul)=\rhom(\scbul,\omega_X)$ and $\RD'_X(\scbul)=\rhom(\scbul,\cor_X)$, respectively.
For $F\in\Derb(\cor_X)$, we denote by $SS(F)$ its singular support, also called micro-support. For a subset $Z\subset T^*X$, we denote by $\Derb(\cor_X;Z)$ the localization of the category $\Derb(\cor_X)$ by the full subcategory of objects whose micro-support is contained in $T^*X\setminus Z$.
For a closed submanifold $M$ of $X$, we denote by $\nu_M$, $\mu_M$, $\muhom$ the functor of specialization along $M$, the functor of microlocalization along $M$ and the functor of microlocalization of $\rhom$, respectively.
Let $M_i$ ($i=1,2,3$) be manifolds. We shall consider the operations of composition of kernels: \eq\label{eq:conv} &&\ba{l} \conv[2]\;\cl\;\Derb(\cor_{M_{12}})\times\Derb(\cor_{M_{23}})\to\Derb(\cor_{M_{13}})\\ \hs{10ex}\ba{rcl}(K_1,K_2)\mapsto K_1\conv[2] K_2&\eqdot& \reim{q_{13}}(\opb{q_{12}}K_1\ltens\opb{q_{23}}K_2)\\ &\simeq&\reim{q_{13}}\opb{\delta_2}(K_1\letens K_2)\ea \ea \eneq \eq\hs{10ex}\label{eq:2_conv} &&\ba{l} \conv[23]\;\cl\;\Derb(\cor_{M_{12}})\times\Derb(\cor_{M_{23}})\times\Derb(\cor_{M_{34}})\to\Derb(\cor_{M_{14}})\\ \hs{10ex}\ba{rcl}(K_1,K_2,K_3)\mapsto K_1\conv[2] K_2 \conv[3] K_3 &\eqdot& \reim{{q_{14}}}(\opb{{q_{12}}}K_1\ltens\opb{{q_{23}}}K_2\ltens\opb{{q_{34}}}K_3)\\ \ea \ea \eneq
Let us mention a variant of $\circ$: \eqn &&\ba{l} \sconv[2]\;\cl\;\Derb(\cor_{M_{12}})\times\Derb(\cor_{M_{23}}) \to\Derb(\cor_{M_{13}})\\ \hs{10ex}(K_1,K_2)\mapsto K_1\sconv[2] K_2\eqdot \roim{q_{13}}\bl\opb{q_{2}}\omega_{2}\tens\epb{\delta_2}(K_1\etens K_2)\br \ea \eneqn There is a natural morphism $K_1 \conv[2] K_2 \to K_1 \sconv[2] K_2$.
We refer the reader to \cite{KS90} for a detailed presentation of sheaves on manifolds.
\subsection{$\mathcal{O}$-modules and $\mathcal{D}$-modules} We refer to \cite{K03} for the notations and the main results of this section.
Let $(X,\mathcal{O}_X)$ be a complex manifold. We denote by $d_X$ its complex dimension and by $\mathcal{D}_X$ the sheaf of rings of finite order holomorphic differential operators on $X$.
For an invertible $\mathcal{O}_X$-module $\mathcal{F}$, we denote by $\mathcal{F}^{\tens -1}:=\hom{_{\mathcal{O}_X}}(\mathcal{F},\mathcal{O}_X)$ the inverse of $\mathcal{F}$. We denote by $\text{Mod}(\mathcal{D}_X)$ the abelian category of left $\mathcal{D}_X$-modules and by $\text{Mod}(\mathcal{D}_X^{op})$ that of right $\mathcal{D}_X$-modules. We denote by $\Omega_X$ the right $\mathcal{D}_X$-module of holomorphic $d_X$-forms.
Let $\Derb(\mathcal{D}_X)$ be the bounded derived category of the category of left $\mathcal{D}_X$-modules, $\Derb_{\text{coh}}(\mathcal{D}_X)$ its full triangulated subcategory whose objects have coherent cohomology.
Let $\Derb_{\text{good}}(\mathcal{D}_X)$ be the triangulated subcategory of $\Derb(\mathcal{D}_X)$ whose objects have all cohomologies consisting of good $\mathcal{D}_X$-modules (see \cite{K03} for a classical reference).
We refer in the following to \cite{K03}. Let $f:Y\to X$ be a morphism of complex manifolds. We denote by $\mathcal{D}_{Y\to X}$ and $\mathcal{D}_{X\from Y}$ the transfer bimodules.
For $\mathcal{M}\in\Derb(\mathcal{D}_X)$, $\mathcal{N}\in\Derb(\mathcal{D}_Y)$, we denote by $\opb{\underline{f}}\mathcal{M}$, $\oim{\underline{f}}\mathcal{N}$, the pull-back and the direct image of $\mathcal{D}$-modules respectively.
We refer to \cite{DS96} for functorial properties of inverse and direct image of $\mathcal{D}$-modules.
\subsection{$\she$-modules}\label{sec:reminder_e_mod}
We refer in the following to \cite{SKK73} (see also \cite{S85} for an exposition). For a complex manifold $X$, one denotes by $\she_X$ the filtered sheaf of rings of finite order holomorphic microdifferential operators on $T^*X$. We denote by $\Derb_{\text{coh}}(\she_X)$ the full triangulated subcategory of $\Derb(\she_X)$ whose objects have coherent cohomology.
For $m\in\mathbb{Z}$, we denote by $\she_{X}(m)$ the abelian subgroup of $\she_X$ of microdifferential operators of order less than or equal to $m$. For a section $P$ of $\she_X$, we denote by $\sigma(P)$ the principal symbol of $P$.
Let $\pi_X$ denote the natural projection $T^*X\to X$. Let us recall that $\she_X$ is flat over $\opb{\pi_X}\mathcal{D}_X$. To a $\mathcal{D}_X$-module $\shm$, we associate an $\she_X$-module defined by \eqn \she\shm:=\she_X\tens_{\opb{\pi_X}\mathcal{D}_X}\opb{\pi_X}\shm \eneqn
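For instance, for a single differential operator $P$ and $\shm=\mathcal{D}_X/\mathcal{D}_XP$, one gets
\eqn
\she(\mathcal{D}_X/\mathcal{D}_XP)\simeq\she_X/\she_XP,
\eneqn
whose support is the characteristic variety $\{\sigma(P)=0\}\subset T^*X$.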
To a morphism of manifolds $f\cl Y\to X$, we associate the diagram of natural morphisms:
\eq\label{diag:microlocal1}
&&\xymatrix{
T^*Y\ar[dr]_-{\pi_Y}&\ar[l]_-{f_d}Y\times_XT^*X\ar[r]^-{f_\pi}\ar[d]^-\pi&T^*X\ar[d]^-{\pi_X}\\
&Y\ar[r]^-f&X
}\eneq
where $f_d$ is the transpose of the tangent map $Tf\cl TY\to Y\times_XTX$.
For $\mathcal{M},\mathcal{N}$ objects of respectively $\Derb(\she_X)$ and $\Derb(\she_Y)$, we denote by $\opb{\underline{f}}\mathcal{M}$ and $\oim{\underline{f}}\mathcal{N}$ the pull-back and the direct image of $\she$-modules respectively.
\subsection{Hyperfunctions and microfunctions}\label{sec:reminder_special_sheaves}
Let $M$ be a real analytic manifold and $X$ a complexification of $M$. We shall sometimes identify $T_M^*X$ with $i\cdot T^*M$. We denote by $\sha_M\eqdot\sho_X\vert_{M}$, $\shb_M\eqdot\rhom(\RD'_X\C_M,\sho_X)$, $\shc_M\eqdot\muhom(\RD'_X\C_M,\sho_X)$, the sheaves of real analytic functions, hyperfunctions, microfunctions, respectively. Let us denote by \textit{sp} the isomorphism \eq\label{def:spectrum_isomorphism} sp\cl \shb_M \isoto \roim{\pi_{M}}\shc_M \eneq
There is a natural action of the sheaf of microdifferential operators $\she_X$ on $\shc_M$.
If $Z$ is a closed complex submanifold of $X$ of codimension $d$, we denote by \eqn \shb_{Z\vert X}:=H^d_{[Z]}(\mathcal{O}_X) \eneqn the algebraic cohomology of $\mathcal{O}_X$ with support in $Z$.
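For instance, for $X=\mathbb{C}$ with coordinate $z$ and $Z=\{0\}$, one gets
\eqn
\shb_{\{0\}\vert \mathbb{C}}\simeq\sho_{\mathbb{C}}(*\{0\})/\sho_{\mathbb{C}},
\eneqn
the sheaf of meromorphic functions with poles at the origin modulo holomorphic functions, generated over $\mathcal{D}_{\mathbb{C}}$ by the class of $1/z$.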
\subsection{Integral transforms for sheaves and $\mathcal{D}$-modules}\label{sec:d_modules_duality_intro}
\subsubsection{Integral transforms for sheaves}
Let $X$ and $Y$ be complex manifolds of respective dimension $d_X,d_Y$. Let $S$ be a closed submanifold of $X\times Y$ of dimension $d_S$. We set $d_{S/X}:=d_S-d_X$. Consider the diagram of complex manifolds \eq\label{eq:duality_diagram} \xymatrix@C-0pc@R+0pc{
&S \ar[dl]_-{f} \ar[dr]^-{g} &&&\widetilde{S} \ar[dl]_-{g} \ar[dr]^-{f} &\\
X & & Y,& Y & & X } \eneq where the second diagram is obtained by interchanging $X$ and $Y$.
For $F\in\Derb(\mathbb{C}_X)$ and $G\in\Derb(\mathbb{C}_Y)$, we define \eqn \xymatrix@C-0pc@R+0pc{
\Phi_S(F):=\reim{g}\opb{f}F[d_{S/Y}],&\Phi_{\widetilde{S}}(G):=\reim{f}\opb{g}G[d_{S/X}]\\ } \eneqn \eqn \xymatrix@C-0pc@R+0pc{
\Psi_S(F):=\roim{g}\epb{f}F[d_{X/S}],&\Psi_{\widetilde{S}}(G):=\roim{f}\epb{g}G[d_{Y/S}]\\ } \eneqn
For $K\in\Derb(\mathbb{C}_{X\times Y})$, and given the diagram $X\from[q_1] X\times Y \to[q_2] Y$, we define the integral transform of $F$ with kernel $K$ \eqn \xymatrix@C-0pc@R+0pc{
\Phi_K(F):=\reim{q_2}(K\tens\opb{q_1}F)\\ } \eneqn
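Let us note that $\Phi_S$ is a particular case of $\Phi_K$: denoting by $i\cl S\hookrightarrow X\times Y$ the closed embedding, so that $f=q_1\circ i$ and $g=q_2\circ i$, one gets
\eqn
\Phi_{\mathbb{C}_S[d_{S/Y}]}(F)\simeq\reim{q_2}\reim{i}\opb{i}\opb{q_1}F[d_{S/Y}]\simeq\reim{g}\opb{f}F[d_{S/Y}]=\Phi_S(F).
\eneqn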
\subsubsection{Integral transforms for $\mathcal{D}$-modules}
Let $X,Y$ be complex manifolds of equal dimension $n>0$, and $S$ a complex manifold. Consider again the situation (\ref{eq:duality_diagram}).
We suppose \eq\label{hyp:duality_diagram} \left\{ \begin{array}{ll}
\hspace{1em}\text{$f,g$ are smooth and proper,}\\
\hspace{1em}\text{$S$ is a complex submanifold of $X\times Y$ of codimension $c>0$}\\ \end{array} \right. \eneq
Let $\mathcal{M}\in\Derb(\mathcal{D}_X)$, $\mathcal{N}\in\Derb(\mathcal{D}_Y)$. Let us denote by $\widetilde{S}$ the image of $S$ by the map $r:X\times Y\to Y\times X, (x,y)\mapsto(y,x)$. One sets \eqn \xymatrix@C-0pc@R+0pc{
\underline{\Phi}_S(\mathcal{M}):=\oim{\underline{g}}\opb{\underline{f}}\mathcal{M},&\underline{\Phi}_{\widetilde{S}}(\mathcal{N}):=\oim{\underline{f}}\opb{\underline{g}}\mathcal{N} } \eneqn
We refer to \cite[Prop.~2.6]{DS94} for adjunction formulae related to these integral transforms.
Let us recall that we denote by $\Omega_X$ the sheaf of holomorphic $n$-forms and let \eqn \shb_{S\vert X\times Y}^{(n,0)}:=\opb{q_1}\Omega_X\tens_{\opb{q_1}\mathcal{O}_X}\shb_{S\vert X\times Y} \eneqn This $(\mathcal{D}_{Y},\mathcal{D}_{X})$-bimodule allows the computation of $\underline{\Phi}_S$ because of the isomorphism, proven in \cite[Prop 2.12]{DS94} \eqn \mathcal{D}_{Y\from S}\ltens{_{\mathcal{D}_{S}}}\mathcal{D}_{S\to X}\isoto\shb_{S\vert X\times Y}^{(n,0)} \eneqn
leading to \eqn \underline{\Phi}_S(\mathcal{M})\simeq\reim{q_2}(\shb_{S\vert X\times Y}^{(n,0)}\ltens{_{\opb{q_1}\mathcal{D}_{X}}}\opb{q_1}\mathcal{M}) \eneqn
\subsection{Microlocal integral transforms}
\subsubsection{Integral transforms for $\she$-modules}\label{sec:int_trans_emodules}
Let $X,Y$ be complex manifolds and let $S$ be a closed submanifold of $X\times Y$. We consider again diagram (\ref{eq:duality_diagram}) under hypothesis (\ref{hyp:duality_diagram}).
We define the functor \eqn \underline{\Phi}_S^{\mu}\cl\Derb(\mathcal{E}_X)\to \Derb(\mathcal{E}_Y),\quad \underline{\Phi}_S^{\mu}(\mathcal{M}):=\oim{\underline{g}}\opb{\underline{f}}\mathcal{M} \eneqn
We define the $\she_{X \times Y}$-module attached to $\shb_{S\vert X\times Y}$, \eqn \shc_{S\vert X\times Y}:=\she\shb_{S\vert X\times Y} \eneqn
and we consider the $(\she_Y,\she_X)$-bimodule \eq\label{microlocal_associated_kernel} \shc_{S\vert X\times Y}^{(n,0)}:=\opb{\pi}\opb{q_1}\Omega_X\ltens{_{\opb{\pi}\opb{q_1}\mathcal{O}_X}}\shc_{S\vert X\times Y} \eneq
One can notice that \eqn \she_{Y\from S}\ltens{_{\she_{S}}}\she_{S\to X}\isoto\shc_{S\vert X\times Y}^{(n,0)} \eneqn
and hence, we have \eq\label{eq:microlocal_integral_transform} \underline{\Phi}_S^{\mu}(\mathcal{M})\simeq\reim{p_2^a}(\shc_{S\vert X\times Y}^{(n,0)}\ltens{_{\opb{p_1}\she_{X}}}\opb{p_1}\mathcal{M}) \eneq
Let $\mathcal{M}\in\Derb_{\text{good}}(\mathcal{D}_X)$. The functors $\underline{\Phi}_S^{\mu}$ and $\underline{\Phi}_S$ are linked through the following isomorphism in $\Derb(\mathbb{C}_{\dT{}^*Y})$ (see \cite{SS94}) \eq\label{eq:e_d_modules_transform} \xymatrix@C-0pc@R+0pc{
\she(\underline{\Phi}_S(\mathcal{M}))\simeq\underline{\Phi}_S^{\mu}(\she\mathcal{M})\\ } \eneq
\subsubsection{Microlocal integral transform of the structure sheaf} Consider two open subsets $U$ and $V$ of $T^*X$ and $T^*Y$, respectively, and let $\Lambda$ be a closed complex Lagrangian submanifold of $U\times V^a$: \eq\label{eq:1b} &&\xymatrix{
&U\times V^a\ar[ld]_-{p_1}\ar[rd]^-{p_{2^a}}&\\
T^*X\supset U& &V\subset T^*Y }\eneq
As detailed in Section 11.4 of \cite{KS90}, let $K\in\Derb(\mathbb{C}_{X\times Y})$ and let $SS(K)$ denote its micro-support. Let us suppose that $p_1\vert_{\Lambda}$ and $p_2^a\vert_{\Lambda}$ are isomorphisms, that $K$ is cohomologically constructible, simple with shift $0$ along $\Lambda$, and that $(\opb{p_1}(U)\cup\opb{(p_2^{a})}(V))\cap SS(K)\subset\Lambda$.
Let $p=(p_X,p_Y^a)\in\Lambda$ and let us consider some section $s\in H^0(\muhom(K,\Omega_{X\times Y/Y}))_{p}$, where $\Omega_{X\times Y/Y}:=\mathcal{O}_{X\times Y}\tens_{\opb{q_1}\mathcal{O}_{X}}\opb{q_1}\Omega_X$. The section $s$ gives a morphism $K\to\Omega_{X\times Y/Y}$ in $\Derb(\mathbb{C}_{X\times Y};p)$. Then, there is a natural morphism \eq \begin{array}{ll}
\Phi_{K[d_X]}(\mathcal{O}_X) & \to\mathcal{O}_Y \end{array} \eneq
We recall the result: \begin{theorem}[{{\cite[Th.11.4.9]{KS90}}}]
There exists $s\in H^0(\muhom(K,\Omega_{X\times Y/Y}))_{p}$ such that the associated morphism $\Phi_{K[d_X]}(\mathcal{O}_X)\to\mathcal{O}_Y$ is an isomorphism in the category $\Derb(\mathbb{C}_Y;p_Y)$. Moreover, this morphism is compatible with the action of microdifferential operators on $\mathcal{O}_X$ in $\Derb(\mathbb{C}_X;p_X)$ and the action of microdifferential operators on $\mathcal{O}_Y$ in $\Derb(\mathbb{C}_Y;p_Y)$. \end{theorem}
Also, we will make use of the following theorem proven in~\cite[Th.~7.2.1]{KS90}:
\begin{theorem}[{{\cite[Th.~7.2.1]{KS90}}}]\label{main_theorem_contact_muhom}
Let $K\in\Derb(\mathbb{C}_{X\times Y})$ and assume that
\bnum
\item $K$ is cohomologically constructible,
\item $(p_{1}^{-1}(U)\cup (p_{2}^{a})^{-1}(V))\cap SS(K)\subset\Lambda$,
\item the natural morphism $\mathbb{C}_{\Lambda}\longrightarrow \mu\mathpzc{hom}(K,K)|_\Lambda$ is an isomorphism.
\enum
Denote by $\chi$ the homogeneous symplectic isomorphism $U\isoto V$ induced by $\Lambda$. Then for any $F_{1},F_{2}\in\Derb(\mathbb{C}_X;U)$, the natural morphism
$$
\chi_{*}\mu\mathpzc{hom}(F_{1},F_{2})\longrightarrow\mu\mathpzc{hom}(\Phi_K(F_{1}),\Phi_K(F_{2}))
$$
is an isomorphism in $\Derb(\mathbb{C}_Y;V)$. \end{theorem}
\subsection{Complements on the functor $\muhom$}\label{complements_muhom}
\subsubsection{Associativity for the composition of kernels} The next result is well known although, to our knowledge, no proof of it has been written down in the literature.
\begin{lemma}\label{eq:associativity_lemma}
Let $M_{1},M_{2},M_{3},M_{4}$ be real manifolds, and let $K,L,M$ be objects of $\Derb(\cor_{M_{12}})$, $\Derb(\cor_{M_{23}})$, $\Derb(\cor_{M_{34}})$ respectively. Then the composition of kernels $\conv$ defined in (\ref{eq:conv}) is associative: we have the following isomorphism
\eq\label{diag:associativity_composition}
\begin{array}{rcl}
(K\conv[2] L)\conv[3] M & \isoto & K\conv[2] (L\conv[3] M)\\
\end{array}
\eneq
such that for any $N\in \Derb(\cor_{M_{45}})$, the diagram below commutes:
\eq\label{diag:tensor}
&&\xymatrix{
((K\conv[2] L)\conv[3] M) \conv[4] N\ar[r]\ar[d]&(K\conv[2] L)\conv[3] (M \conv[4] N)\ar[dd]\\
(K\conv[2] (L\conv[3] M)) \conv[4] N\ar[d]&\\
K\conv[2] ((L\conv[3] M )\conv[4] N)\ar[r]&K\conv[2] (L\conv[3] (M \conv[4] N)).
}\eneq
\end{lemma} \begin{proof}
Consider the following diagram
\eqn
\xymatrix@C-1pc@R+1pc{
&&&&&M_{1234}\ar@<0.0ex>[ld]_-{q_{124}^{3}}\ar@<0.15ex>[ld]\ar[dd]_-{q_{14}^{23}} \ar@<0.0ex>[rd]^-{q_{134}^{2}}\ar@<0.15ex>[rd]
\ar@/_3pc/@<0.0ex>[lllldd]\ar@/_3pc/@<0.15ex>[lllldd]\ar@/^3pc/@<0.0ex>[rrrrdd]\ar@/^3pc/@<0.15ex>[rrrrdd]&&&&&\\
&&&&M_{124}\ar@<0.0ex>[rd]\ar@<0.15ex>[rd]_-{q_{14}^{2}}\ar@/^1pc/@<0.0ex>[rrrrrdd]\ar@/^1pc/@<0.15ex>[rrrrrdd]^-{q_{24}^{1}}\ar@/_3pc/[lllldd]_-{q_{12}^{4}}&&
M_{134}\ar@<0.0ex>[ld]^-{q_{14}^{3}}\ar@<0.15ex>[ld]\ar@/_1pc/@<0.0ex>[llllldd]_-{q_{13}^{4}}\ar@/_1pc/@<0.15ex>[llllldd]\ar@/^3pc/[rrrrdd]^-{q_{34}^{1}}&&&&\\
&M_{123}\ar[ld]_-{q_{12}^{3}}\ar@<0.0ex>[d]^-{q_{13}^{2}}\ar@<0.11ex>[d]\ar[rrrrd]^-{q_{23}^{1}}&&&&M_{14}&&&&
M_{234}\ar[lllld]_-{q_{23}^{4}}\ar@<0.0ex>[d]_-{q_{24}^{3}}\ar@<0.11ex>[d]\ar[rd]^-{q_{34}^{2}}&\\
M_{12}&M_{13}&&&&M_{23}&&&&M_{24}&M_{34}
}
\eneqn
where the thick squares are cartesian, and where for clarity we use the following notation: $q_{ij}^{k}$ denotes the projection $M_{ijk}\to M_{ij}$ (independently of the order of appearance of the indices), and $q_{ij}^{kl}$ the projection $M_{ijkl}\to M_{ij}$. We now have:
\eqn
\begin{array}{rcl}
\reim{{q_{14}^{3}}}(\opb{{q_{13}^{4}}}(\reim{{q_{13}^{2}}}(\opb{{q_{12}^{3}}}K\tens\opb{{q_{23}^{1}}}L))\tens\opb{{q_{34}^{1}}}M) & \simeq & \reim{{q_{14}^{3}}}(\reim{{q_{134}^{2}}}(\opb{{q_{12}^{34}}}K\tens\opb{{q_{23}^{14}}}L)\tens\opb{{q_{34}^{1}}}M)\\
& \simeq & \reim{{q_{14}^{23}}}(\opb{{q_{12}^{34}}}K\tens\opb{{q_{23}^{14}}}L\tens\opb{{q_{34}^{12}}}M)\\
& \eqdot & K\conv[2] L \conv[3] M
\end{array}
\eneqn
The same way, we get the isomorphism
\eqn
\begin{array}{rcl}
K\conv[2] L \conv[3] M & \simeq & \reim{{q_{14}^{2}}}(\opb{{q_{12}^{4}}}K \tens (\opb{{q_{24}^{1}}}(\reim{{q_{24}^{3}}}(\opb{{q_{23}^{4}}}L\tens\opb{{q_{34}^{2}}}M))))
\end{array}
\eneqn
which proves isomorphism (\ref{diag:associativity_composition}). It then follows immediately that, given $N\in \Derb(\cor_{M_{45}})$, diagram (\ref{diag:tensor}) commutes.
\end{proof}
\subsubsection{Associativity for the composition of $\muhom$}\label{sec:asso_composition_muhom}
We define the composition of kernels on cotangent bundles (see~\cite[section 3.6, (3.6.2)]{KS90}). \eq\label{eq:aconv} &&\hs{-0ex}\ba{rcl} \aconv[2]\;\cl\;\Derb(\cor_{T^*M_{12}})\times\Derb(\cor_{T^*M_{23}}) &\to&\Derb(\cor_{T^*M_{13}})\\ (K_1,K_2)&\mapsto&K_1\aconv[2] K_2\eqdot \reim{p_{13}}(\opb{p_{12^a}} K_1\tens\opb{p_{23}} K_2)\\ &&\hs{8ex}\simeq\reim{p_{13^a}}(\opb{p_{12^a}} K_1 \tens\opb{p_{23^a}} K_2). \ea \eneq
There is a variant of the composition $\circ$, constructed in \cite{KS14}: \eq\label{eq:star} &&\ba{l} \sconv[2]\;\cl\;\Derb(\cor_{M_{12}})\times\Derb(\cor_{M_{23}}) \to\Derb(\cor_{M_{13}})\\ \hs{10ex}(K_1,K_2)\mapsto K_1\sconv[2] K_2\eqdot \roim{q_{13}}\bl\opb{q_{2}}\omega_{2}\tens\epb{\delta_2}(K_1\letens K_2)\br. \ea \eneq For $K_1\in \Derb(\cor_{M_{12}})$ and $K_2\in \Derb(\cor_{M_{23}})$, there is a natural morphism $K_1 \conv[2] K_2 \to K_1 \sconv[2] K_2$.
Let us state a theorem proven in \cite[Prop.~4.4.11]{KS90} and refined in \cite{KS14}.
\begin{theorem}\label{th:main_associativity_theorem}
Let $F_i\in\Derb(\cor_{M_{12}})$ and $G_i\in\Derb(\cor_{M_{23}})$ ($i=1,2$). Let $U_i$ be an open subset of $T^*M_{i,i+1}$ ($i=1,2$) and set $U_3=U_1\aconv[2]U_2$. There exists a canonical morphism in $\Derb(\cor_{T^*M_{13}})$, functorial in $F_1$ (resp. $F_2$):
\eq\label{eq:micro_action_star_compo}
&&\muhom(F_1,F_2)\vert_{U_1}\aconv[2]\muhom(G_1,G_2)\vert_{U_2}\to\muhom(F_1\sconv[2]G_1,F_2\conv[2]G_2)\vert_{U_3}.
\eneq
and hence
\eq\label{eq:micro_action_compo}
&&\muhom(F_1,F_2)\vert_{U_1}\aconv[2]\muhom(G_1,G_2)\vert_{U_2}\to\muhom(F_1\conv[2]G_1,F_2\conv[2]G_2)\vert_{U_3}.
\eneq \end{theorem}
We state the main theorem of this section. \begin{theorem}\label{micro_associativity}
Let $F_i,G_i,H_i$ ($i=1,2$) be objects of $\Derb(\cor_{M_{12}})$, $\Derb(\cor_{M_{23}})$, $\Derb(\cor_{M_{34}})$ respectively. Then:
\banum
\item
\eqn
&&\hspace{-3em}\left(\muhom(F_{1},F_{2})\aconv[2]\muhom(G_{1},G_{2})\right)\aconv[3]\muhom(H_{1},H_{2})\\
&&\hspace{3em}\isoto\muhom(F_{1},F_{2})\aconv[2]\left(\muhom(G_{1},G_{2})\aconv[3]\muhom(H_{1},H_{2})\right)
\eneqn
\item The above isomorphism is compatible with the composition $\conv$ in the sense that the following diagram commutes
\eqn
\xymatrix@C-6pt{
\hspace{-5em}(\muhom(F_{1},F_{2})\aconv[2]\muhom(G_{1},G_{2}))\aconv[3]\muhom(H_{1},H_{2})
\ar[r]^-{\sim} \ar[d] &
\muhom(F_{1},F_{2})\aconv[2](\muhom(G_{1},G_{2})\aconv[3]\muhom(H_{1},H_{2}))
\ar[d]
\\
\muhom(F_{1}\conv[2] G_{1},F_{2}\conv[2] G_{2})\aconv[3]\muhom(H_{1},H_{2})
\ar[d] &
\muhom(F_{1},F_{2})\aconv[2]\muhom(G_{1}\conv[3] H_{1},G_{2}\conv[3] H_{2})
\ar[d]
\\
\muhom((F_{1}\conv[2] G_{1})\conv[3] H_{1},(F_{2}\conv[2]G_{2})\conv[3] H_{2})
\ar[r]^-{\sim} &
\muhom(F_{1}\conv[2] (G_{1}\conv[3] H_{1}),F_{2}\conv[2] (G_{2}\conv[3] H_{2}))
}
\eneqn
\eanum
\end{theorem} \begin{proof}
\banum
\item
This follows from Lemma \ref{eq:associativity_lemma} applied to the kernels $\muhom(F_1,F_2)$, $\muhom(G_1,G_2)$, $\muhom(H_1,H_2)$ on $T^*M_{12}$, $T^*M_{23}$, $T^*M_{34}$ respectively.
\item We shall skip the proof, which is tedious but straightforward.
\eanum \end{proof}
\section{Complex quantized contact transformations}
\subsection{Kernels on complex manifolds}\label{sec:kernels_complex_manifolds}
Consider two complex manifolds $X$ and $Y$ of respective dimension $d_X$ and $d_Y$. We shall follow the notations of Section~\ref{not:12345}.
For $K\in\Derb(\mathbb{C}_{X\times Y})$, we recall that we defined the functor $\Phi_K:\Derb(\mathbb{C}_Y)\rightarrow\Derb(\mathbb{C}_X)$, $\Phi_K(G)=Rq_{1 !}(K\tens q_{2}^{-1}(G)),\text{ for } G\in\Derb(\mathbb{C}_Y)$. With regard to the notation of Section~\ref{not:12345}, let us notice that $\Phi_K(G)$ is $K\circ G$. We refer also to Section \ref{The problem} for the definition of $\Omega_{X\times Y/X}$.
We recall the following result.
\begin{lemma}\label{le:sects1}
There is a natural morphism
\eqn
&&\Omega_{X\times Y/X}\conv\sho_Y\,[d_Y]\to\sho_X.
\eneqn \end{lemma} \begin{proof}
We have
\eqn
\Omega_{X\times Y/X}\conv\sho_Y\,[d_Y]&=&
\reim{q_1}(\Omega_{X\times Y/X}\tens\opb{q_2}\sho_Y[d_Y])\\
&\to&\reim{q_1}(\Omega_{X\times Y/X}[d_Y])\to[\int]\sho_X,
\eneqn
where the last arrow is
the integration morphism on complex manifolds. \end{proof}
The following lemma will be useful for the proof of Lemma \ref{lem:conv_section}. Let us first denote by $M_i$ ($i=1,2,3,4$) four complex manifolds, $L_i\in\Derb(\C_{M_{i,i+1}})$, $1\leq i\leq3$. We set for short \eqn &&d_i=\dim_{\C} M_i, d_{ij}=d_{i}+d_{j}, \Omega_{ij/i}=\Omega_{M_{ij}/M_i}=\Omega^{(0,d_j)}_{M_{ij}}. \eneqn Set for $1\leq i\leq 3$, \eqn &&K_{i}=\muhom(L_{i},\Omega_{i,j/i}[d_j]),\quad j=i+1\\ &&L_{ij}=L_i\circ L_j\quad j=i+1,\quad L_{123}=L_1\circ L_2\circ L_3,\\ &&\tw K_{ij}=\muhom(L_{ij},\Omega_{i,j/i}\,[d_j]\circ \Omega_{j,k/j}\,[d_k]) \quad j=i+1, k=j+1\\ &&\tw K_{123}=\muhom(L_{123}, \Omega_{12/1}[d_2]\circ\Omega_{23/2}[d_3]\circ\Omega_{34/3}[d_4])\\ &&K_{ij}=\muhom(L_{ij},\Omega_{i,k/i}[d_{k}])\quad j=i+1, k=j+1,\\ &&K_{123}=\muhom(L_{123}, \Omega_{14/1}[d_4]). \eneqn
We recall that we have the sequence of natural morphisms: \eq\label{compo_forms_integration} \Omega_{i,j/i}\conv \Omega_{j,k/j}&=& \reim{q_{i,k}}(\opb{q_{i,j}}\Omega_{i,j/i}\tens\opb{q_{j,k}}\Omega_{j,k/j}) \nonumber\\ &\to&\reim{q_{i,k}}(\Omega_{i,j,k/i}) \nonumber\\ &\to&\Omega_{i,k/i}[-d_{j}] \eneq
\begin{lemma}\label{lem:conv_int_commutation}
The following diagram commutes:
\eqn
&&\xymatrix{
&K_1\circ K_2\circ K_3\ar[ld]\ar[rd]\ar@{}[d]|-A&\\
\tw K_{12}\circ K_3\ar[r]\ar[d]&\tw K_{123}\ar[d]&K_1\circ\tw K_{23}\ar[l]\ar[d]\\
K_{12}\circ K_3\ar[r]\ar@{}[ru]|-B&K_{123}&K_1\circ K_{23}\ar[l]\ar@{}[lu]|-C
}\eneqn \end{lemma} \begin{proof}
Diagram $A$ commutes by the associativity of the functor $\muhom$ (Theorem \ref{micro_associativity}). Let us prove that diagrams $B$ and $C$ commute. Of course, it is enough to consider diagram $B$. To make the notations easier, we assume that $M_1=M_4=\rmpt$. We are reduced to proving the commutativity of the diagram:
\eqn
\xymatrix{
\muhom(L_{2},\Omega_{2}\circ \Omega_{2,3/2}[d_{23}])\circ \muhom(L_{3},\sho_3)\ar[r]\ar[d]^-{\int_2}
&\muhom(L_{23},\Omega_{2}\circ \Omega_{2,3/2}[d_{23}]\circ\sho_3)\ar[d]^-{\int_2}\\
\muhom(L_{2},\Omega_{3}[d_{3}])\circ \muhom(L_{3},\sho_3)\ar[r]&\muhom(L_{23},\Omega_{3}[d_{3}]\circ\sho_3)
}
\eneqn
For $F,F'\in\Derb(\cor_{12})$, $G,G'\in\Derb(\cor_{23})$, we saw in Theorem \ref{micro_associativity} (b) that the morphism $\muhom(F,F')\conv\muhom(G,G')\to\muhom(F\conv G,F'\conv G')$ is functorial in $F,F',G,G'$. This fact applied to the morphism
\eqn
\Omega_{2}\circ \Omega_{2,3/2}[d_{23}] \to \Omega_{3}[d_{3}]
\eneqn
gives that the above diagram commutes and so diagram $B$ commutes. \end{proof}
Let $Z$ be a complex manifold and let $\Lambda\subset T^*(X\times Y)$ and $\Lambda'\subset T^*(Y\times Z)$ be two conic Lagrangian smooth locally closed complex submanifolds.
Let $L$, $L'$, be perverse sheaves on $X\times Y$, $Y\times Z$, with microsupport $SS(L)\subset\Lambda$, $SS(L')\subset\Lambda'$ respectively. We set \eqn &&L''\eqdot L[d_Y] \circ L' \eneqn
Assume that \eq\label{set_transversality_condition} p_2^{a}\vert_{\Lambda}:\Lambda\to T^*{Y} \text{ and } p_2\vert_{\Lambda'}:\Lambda'\to T^*{Y} \text{ are transversal} \eneq
and that
\eq\label{set_composability_condition}\hspace{3em}
\text{the map } \Lambda\times_{T^*Y}\Lambda'\to\Lambda\conv\Lambda'
\text{ is an isomorphism.}
\eneq
Let us set \eqn &&\shl\eqdot\muhom(L,\Omega_{X\times Y/X}) \eneqn Note that $\shl\in\Derb(T^*(X\times Y))$ is concentrated in degree $0$. Indeed, it is proven in~\cite[Th.~10.3.12]{KS90} that perverse sheaves are exactly the sheaves which are pure with shift zero at any point of the non-singular locus of their microsupport, and Theorem 9.5.2 together with Definition 9.5.1 of~\cite{KS85} show that applying $\muhom(\bullet,\Omega_{X\times Y/X})$ to such sheaves yields objects concentrated in degree $0$. Moreover, $\shl$ is a $(\she_X,\she_Y)$-bimodule: the actions come from morphism (\ref{eq:micro_action_compo}) and the integration morphism (\ref{compo_forms_integration}). We define similarly $\shl'$ and $\shl''$.
Now consider three open subsets $U$, $V$ and $W$ of $\dTX$, $\dTY$, $\sdot{T\mspace{2mu}}{}^*{Z}$, respectively.
Let $K_{U\times V^a}$ be the constant sheaf on $(U\times V^a)\cap \Lambda$ with stalk $H^0\rsect(U\times V^a;\shl)$, extended by $0$ elsewhere.
Similarly, let $K'_{V\times W^a}$ be the constant sheaf on $(V\times W^a)\cap \Lambda'$ with stalk $H^0\rsect(V\times W^a;\shl')$, extended by $0$ elsewhere,
and let $K''_{U\times W^a}$ be the constant sheaf on $(U\times W^a)\cap (\Lambda\circ\Lambda')$ with stalk $H^0\rsect(U\times W^a;\shl'')$, extended by $0$ elsewhere.
Let $s,s'$ be sections of $\sect(U\times V^a;\shl)$ and $\sect(V\times W^a;\shl')$ respectively. We define the product $s\cdot s'$ to be the section of $\sect(U\times W^a;\shl'')$, image of $1$ by the following sequence of morphisms \eqn \begin{array}{rcl}
\mathbb{C}_{\Lambda\circ\Lambda'}&\isofrom&\mathbb{C}_{\Lambda}\circ\mathbb{C}_{\Lambda'}\\
&:=&
\reim{p_{13}}(\opb{p_{12^a}}\mathbb{C}_{\Lambda}\tens \opb{p_{23}}\mathbb{C}_{\Lambda'})\\
&\to&\reim{p_{13}}(\opb{p_{12^a}}K_{U\times V^a}\tens \opb{p_{23}}K_{V\times W^a})\\
&\to&
\reim{p_{13}}(\opb{p_{12^a}}\muhom(L,\Omega_{X\times Y/X})\tens \opb{p_{23}}\muhom(L',\Omega_{Y\times Z/Y}))\\
&:=&\muhom(L,\Omega_{X\times Y/X})\circ \muhom(L',\Omega_{Y\times Z/Y})
\to\shl'' \end{array} \eneqn where the first isomorphism comes from assumption (\ref{set_composability_condition}).
\begin{lemma}\label{lem:conv_section} Assume that conditions (\ref{set_transversality_condition}) and (\ref{set_composability_condition}) are satisfied. Let $s,s'$ be sections of $\sect(U\times V^a;\shl)$ and $\sect(V\times W^a;\shl')$ respectively, and let $G\in\Derb(\mathbb{C}_Y)$, $H\in\Derb(\mathbb{C}_Z)$. Then, \bnum \item $s$ defines a morphism \eqn &&\alpha_G(s)\cl\mathbb{C}_{\Lambda}\circ\muhom(G,\sho_Y)\vert_V\to\muhom(L[d_Y]\conv G,\sho_X)\vert_U \eneqn \item Considering the morphism \eqn &\alpha_H(s\cdot s')\cl\mathbb{C}_{\Lambda\circ\Lambda'}\circ\muhom(H,\sho_Z)\vert_W\to\muhom(L[d_Y]\circ L'[d_Z]\conv H,\sho_X)\vert_U \eneqn we have the isomorphism \eqn
\alpha_H(s\cdot s') \simeq \alpha_{L'[d_Z]\circ H}(s)\circ \Phi_{\mathbb{C}_{\Lambda}}(\alpha_H(s')) \eneqn \enum \end{lemma}
\begin{proof}
\bnum
\item Given $s$ and two objects $G_1,G_2\in\Derb(\mathbb{C}_Y)$, we have a morphism
\eqn
\mathbb{C}_{\Lambda}\circ\muhom(G_1,G_2)\vert_V&\to&\muhom(L\conv G_1,\Omega_{X\times Y/X}\conv G_2)\vert_U
\eneqn
corresponding to the composition of morphisms:
\eq\label{decomposition_morphism_s}
\begin{array}{rcl}
\reim{p_{1}}(\mathbb{C}_{\Lambda}\tens\opb{p_{2^a}}\muhom(G_1,G_2)\vert_V)&\to &\reim{p_{1}}(K_{U\times V^a}\tens\opb{p_{2^a}}\muhom(G_1,G_2)\vert_V)\\
&\to&\reim{p_{1}}(\muhom(L,\Omega_{X\times Y/X})\tens\opb{p_{2^a}}\muhom(G_1,G_2)\vert_V)\\
&\to&\muhom(L\conv G_1,\Omega_{X\times Y/X}\conv G_2)\vert_U
\end{array}
\eneq
where the second morphism comes from the natural morphism $K_{U\times V^a}\to\muhom(L,\Omega_{X\times Y/X})$. We conclude by choosing $G_1=G$, $G_2=\mathcal{O}_Y$ and by using Lemma~\ref{le:sects1}:
\eqn
&&\xymatrix@R=0ex@C=0ex{
\muhom(L\conv G_1,\Omega_{X\times Y/X}\conv \mathcal{O}_Y)&\to&\muhom(L\conv G_1, \mathcal{O}_X[-d_Y])&\isoto&\muhom(L[d_Y]\conv G_1, \mathcal{O}_X)
}
\eneqn
\item Let $H\in\Derb(\mathbb{C}_Z)$. We set $\shh\eqdot\muhom(H,\sho_Z)$. It suffices to prove that the following diagram commutes:
\eqn
\xymatrix@C-1pc@R+1pc{\hspace{-3em}
(\mathbb{C}_{\Lambda}\circ\mathbb{C}_{\Lambda'})\circ\shh \ar[r]^-{\simeq}\ar[d]_-{\simeq} \ar@{}[rdddddd]|-*+[o][F-]{A} \ar@{.>}[rdddd]^(.4){ \Phi_{\mathbb{C}_{\Lambda}}(\alpha_H(s'))} & \mathbb{C}_{\Lambda}\circ(\mathbb{C}_{\Lambda'}\circ\shh)\ar[d]\\
\mathbb{C}_{\Lambda}\circ\mathbb{C}_{\Lambda'}\circ\shh \ar@{.>}[rddddddd]^(.4){\alpha(s\cdot s')} \ar[d] & \mathbb{C}_{\Lambda}\circ K'\circ \shh \ar[d] \\
K\circ K' \circ \shh \ar[ddd] & \mathbb{C}_{\Lambda}\circ \muhom(L',\Omega_{Y\times Z/Y}) \circ \shh \ar[d] \\
& \mathbb{C}_{\Lambda}\circ\muhom(L'\circ H,\Omega_{Y\times Z/Y}\circ \sho_Z) \ar[d]^-{\int_Z} \\
& \mathbb{C}_{\Lambda}\circ\muhom(L'[d_Z]\circ H,\sho_Y) \ar[d]\ar@/_-3pc/@{.>}[dddd]^(.35){\alpha_{L'[d_Z]\circ H}(s)} \\
K\circ \muhom(L',\Omega_{Y\times Z/Y}) \circ \shh \ar[r]^(.6){\int_Z}\ar[d] \ar@{}[rd]|-*+[o][F-]{B}&K\circ \muhom(L'[d_Z]\circ H, \sho_Y) \ar[d] \\
\muhom(L,\Omega_{X\times Y/X})\circ \muhom(L',\Omega_{Y\times Z/Y})\circ \shh \ar[r]^-{\int_Z} \ar[dd] \ar@{}[rdd]|-*+[o][F-]{C} & \muhom(L,\Omega_{X\times Y/X}) \circ \muhom( L'[d_Z]\circ H, \sho_Y) \ar[d] \\
& \muhom(L\circ L'[d_Z]\circ H,\Omega_{X\times Y/X} \circ \sho_Y) \ar[d]^-{\int_Y} \\
\muhom(L\circ L' \circ H,\Omega_{X\times Y/X}\circ \Omega_{Y\times Z/Y}\circ\sho_Z) \ar[r]^-{\int_{Y,Z}}& \muhom(L[d_Y]\circ L'[d_Z]\circ H,\sho_X)
}
\eneqn
where we omitted the subscripts $U\times V^a$ and $V\times W^a$ of $K_{U\times V^a}$ and $K'_{V\times W^a}$, and the subscripts $H$ and $L'[d_Z]\circ H$ of $\alpha_H$ and $\alpha_{L'[d_Z]\circ H}$, respectively.
We know from Theorem \ref{th:main_associativity_theorem} that the operation $\conv$ is functorial, so that diagrams $A$ and $B$ commute. For instance, diagram $A$ decomposes as follows: \eqn \xymatrix@C-1pc@R+1pc{
\mathbb{C}_{\Lambda}\circ\mathbb{C}_{\Lambda'}\circ\shh \ar[rr]^-{\simeq}\ar[d] && \mathbb{C}_{\Lambda}\circ(\mathbb{C}_{\Lambda'}\circ\shh)\ar[d]\\
\mathbb{C}_{\Lambda}\circ K' \circ \shh \ar[r] \ar[dd] & \mathbb{C}_{\Lambda}\circ \muhom(L',\Omega_{Y\times Z/Y}) \circ \shh \ar[r]^-{\simeq} \ar[dd] & \mathbb{C}_{\Lambda}\circ (\muhom(L',\Omega_{Y\times Z/Y}) \circ \shh) \ar[d]^-{\int_Z}\\
&& \mathbb{C}_{\Lambda}\circ \muhom(L'[d_Z]\circ H,\sho_Y) \ar[d]\\
K\circ K' \circ \shh \ar[r] & K\circ \muhom(L',\Omega_{Y\times Z/Y}) \circ \shh \ar[r]^-{\int_Z} & K \circ \muhom(L'[d_Z]\circ H,\sho_Y) } \eneqn
Besides, diagram $C$ commutes by Lemma \ref{lem:conv_int_commutation}.
Finally, the bottom diagonal dotted arrow corresponds to $\alpha(s\cdot s')$, since the following diagram commutes
\eqn
\xymatrix@C-1pc@R+1pc{
\mathbb{C}_{\Lambda\circ\Lambda'} \ar[r]^-{\simeq}\ar@{.>}[rd]^(0.5){\alpha(s\cdot s')} \ar[d]& \mathbb{C}_{\Lambda}\circ\mathbb{C}_{\Lambda'} \ar[d]\\
K''_{U\times W^a}\ar[r] & \muhom(L\circ L'[d_Y],\Omega_{X\times Z/X})
}
\eneqn
\enum \end{proof} \begin{remark}
In the following, unless necessary, we will omit the subscript of $\alpha$. \end{remark} \begin{theorem}\label{th:KS14b}
Let $s\in \sect(U\times V^a;\shl)$, $G\in\Derb(\mathbb{C}_Y)$. Then,
(i) $s$ defines a morphism
\eq\label{eq:mors2}
&&\alpha(s)\cl\mathbb{C}_{\Lambda}\circ\muhom(G,\sho_Y)\vert_V\to\muhom(L[d_Y]\conv G,\sho_X)\vert_U.
\eneq
(ii) Moreover, if $P\in\sect(U;\she_X)$ and $Q\in\sect(V;\she_Y)$ satisfy $P\cdot s=s\cdot Q$, then the diagram below commutes
\eq\label{diag:Ps=sQ}
&&\xymatrix{
\mathbb{C}_{\Lambda}\circ\muhom(G,\sho_Y)\ar[rr]^-{\alpha(s)}\ar[d]_-{\Phi_{\mathbb{C}_{\Lambda}}(\alpha(Q))}&&\muhom(L[d_Y]\conv G,\sho_X)\ar[d]^-{\alpha(P)}\\
\mathbb{C}_{\Lambda}\circ\muhom(G,\sho_Y)\ar[rr]_-{\alpha(s)}&&\muhom(L[d_Y]\conv G,\sho_X).
}\eneq \end{theorem}
\begin{proof}
(i) is already proven in Lemma \ref{lem:conv_section}.
(ii) With regard to the notation of Lemma \ref{lem:conv_section}, we consider the triplet of manifolds $X,X,Y$, the Lagrangian $\Lambda=T^*_{\Delta_X}(X\times X)$ and $\shl:= \muhom(\mathbb{C}_{\Delta_X}[-n],\Omega_{X\times X/X})$. Then, assumption (\ref{set_transversality_condition}) is satisfied and, noticing that $\Phi_{\mathbb{C}_{\Delta_X}}\simeq Id_X$, we conclude by Lemma \ref{lem:conv_section} that
\eqn
\alpha(P) \circ \alpha(s)\simeq \alpha(P\cdot s) \simeq \alpha(s\cdot Q) \simeq \alpha(s)\circ \Phi_{\mathbb{C}_{\Lambda}}(\alpha(Q))
\eneqn
\end{proof}
\subsection{Main theorem}\label{sec:qct_main_theorem}
In this section, we will apply Theorem \ref{th:KS14b} when we are given a homogeneous symplectic isomorphism. Let us recall some useful results.
For $\shm$ a coherent left $\she_X$-module generated by a section $u\in\shm$, we denote by $\shi_{\shm}$ the annihilator left ideal of $\she_X$ given by: \eqn \shi_{\shm}:=\{P\in\she_X;Pu=0\} \eneqn and by $\overline{\shi}_{\shm}$ the symbol ideal associated to $\shi_{\shm}$: \eqn \overline{\shi}_{\shm}:=\{\sigma(P);P\in\shi_{\shm}\} \eneqn
\begin{definition}[{\cite{K03}}]
Let $\shm$ be a coherent $\she_X$-module generated by an element $u\in\shm$. We say that $(\shm,u)$ is a simple $\she_X$-module if $\overline{\shi}_{\shm}$ is reduced and $\overline{\shi}_{\shm}=\{\phi\in\sho_{T^*X};\phi\vert_{supp(\shm)}=0\}$. \end{definition}
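For instance, for $\shm=\she_X/\she_XP$ generated by the class $u$ of $1$, one has $\shi_{\shm}=\she_XP$ and $\overline{\shi}_{\shm}$ is the ideal generated by $\sigma(P)$, so that $(\shm,u)$ is simple when this ideal is reduced; this is the case, e.g., for $P=\partial_{z_1}$ on $X=\mathbb{C}^n$, where $\sigma(P)=\zeta_1$.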
Consider two complex manifolds $X$ and $Y$, open subsets $U$ and $V$ of $\dT{}^* X$ and $\dT{}^* Y$, respectively, and denote by $p_1$ and $p_2$ the projections $U \xleftarrow{p_1} U\times V^a \xrightarrow{p_2} V$. Let $\Lambda$ be a smooth closed Lagrangian submanifold of $U\times V^a$. We will make use of the following result from~\cite[Th.~4.3.1]{SKK73},~\cite[Prop.~8.5]{K03}:
\begin{theorem}[{\cite{SKK73}},{\cite{K03}}]\label{th:quantized_iso}
Let $(\shm,u)$ be a simple $\she_{X\times Y}$-module defined on $U\times V^a$ such that $\supp{\shm}=\Lambda$. Assume that the projection $\Lambda\to U$ is a diffeomorphism. Then, there is an isomorphism of $\she_{X}$-modules:
\eqn
\begin{array}{rrl}
\she_X\vert_{U} & \isoto & \oim{(p_{1}\vert_{U\times V^a})}\shm\\
P & \mapsto & P\cdot u
\end{array}
\eneqn \end{theorem}
Assume that the projections $p_1\vert_\Lambda$ and $p_2^a\vert_\Lambda$ induce isomorphisms. We denote by $\chi$ the homogeneous symplectic isomorphism $\chi:=p_2^a\vert_\Lambda\circ\opb{(p_1\vert_\Lambda)}$,
\eq\label{eq:intro_contact_iso_diagram} &&\xymatrix{
&\Lambda\subset U\times V^a\ar[ld]_\sim^-{p_1\vert_\Lambda}\ar[rd]^\sim_-{p_2^a\vert_\Lambda}&\\
\dTX\supset U\ar[rr]^-\sim_-\chi& &V\subset \dTY } \eneq \begin{corollary}\label{coro:quantized_iso_microdiff}
Let $(\shm,u)$ be a simple $\she_{X\times Y}$-module defined on $U\times V^a$. Assume $\supp{\shm}=\Lambda$. Then, in the situation of $(\ref{eq:intro_contact_iso_diagram})$, we have an anti-isomorphism of algebras
\eqn
\oim{\chi}\she_X\vert_U\simeq\she_Y\vert_V
\eneqn \end{corollary}
Consider two complex manifolds $X$ and $Y$ of the same dimension $n$, open subsets $U$ and $V$ of $\dT{}^*X$ and $\dT{}^*Y$, respectively, $\Lambda$ a smooth closed Lagrangian submanifold of $U\times V^a$ and assume that the projections $p_1\vert_\Lambda$ and $p_2^a\vert_\Lambda$ induce isomorphisms, hence a homogeneous symplectic isomorphism $\chi\cl U\isoto V$:
\eq\label{eq:contact_iso_diagram} &&\xymatrix{
&\Lambda\subset U\times V^a\ar[ld]_\sim^-{p_1}\ar[rd]^\sim_-{p_2^a}&\\
\dTX\supset U\ar[rr]^-\sim_-\chi& &V\subset \dTY } \eneq We consider a perverse sheaf $L$ on $X\times Y$ satisfying \eq\label{eq:condition_microsupport} &&(\opb{p_1}(U)\cup\opb{{p_2^a}}(V))\cap\SSi(L)=\Lambda. \eneq and a section $s$ in $\sect(U\times V^a;\muhom(L,\Omega_{X\times Y/X}))$.
Let $G\in\Derb(\mathbb{C}_Y)$. From Theorem \ref{th:KS14b} (i), the left composition by $s$ defines the morphism $\alpha(s)$ in $\Derb(\C_U)$:
\eq
&&\mathbb{C}_{\Lambda}\circ\muhom(G,\sho_Y)\vert_V\xrightarrow{\alpha(s)}\muhom(L[n]\conv G,\sho_X)\vert_U
\eneq
Condition (\ref{eq:condition_microsupport}) implies that $\supp(\muhom(L,\Omega_{X\times Y/X})\vert_{\opb{{p_2^{a}}}(V)})\subset\Lambda$. Since $p_1$ induces an isomorphism from $\Lambda$ to $U$ and $\chi\circ p_1\vert_\Lambda=p_2^a\vert_\Lambda$, we get a morphism in $\Derb(\C_U)$
\eq\label{eq:contact_transform_1}
&&\opb{\chi}\muhom(G,\sho_Y)\vert_V\xrightarrow{\alpha(s)}\muhom(\Phi_{L[n]}(G),\sho_X)\vert_U
\eneq
\begin{theorem}\label{th:qct_main_theorem}
Assume that the section $s$ is non-degenerate on $\Lambda$. Then, for $G\in\Derb(\C_Y)$, we have the following isomorphism in $\Derb(\C_U)$
\eq\label{eq:qct_main_theorem}
&&\opb{\chi}\muhom(G,\sho_Y)\vert_V\isoto\muhom(\Phi_{L[n]}(G),\sho_X)\vert_U
\eneq
Moreover, this isomorphism is compatible with the action of $\she_Y$ and $\she_X$ on the left and right side of (\ref{eq:qct_main_theorem}) respectively.
\end{theorem}
\begin{proof}
Let us first prove that the morphism \eqref{eq:contact_transform_1} is an isomorphism; a proof at the level of germs is available in \cite[Th.~11.4.9]{KS90}. Let $L^*$ be the perverse sheaf $\opb{r}\rhom(L,\omega_{X\times Y/Y})$, where $r$ is the map $X\times Y\to Y\times X$, $(x,y)\mapsto(y,x)$. Let $s'$ be a section of $\muhom(L^*,\Omega_{Y\times X/Y})$, non-degenerate on $r(\Lambda)$. Applying the same construction as above, we get a natural morphism
\eqn
\oim{\chi}\muhom(\Phi_{L[n]}(G),\sho_X)\vert_U\to\muhom(\Phi_{L^{*}[n]}\circ\Phi_{L[n]}G,\sho_Y)\vert_V\simeq &&\muhom(\Phi_{L^{*}\circ L[n]}G,\sho_Y)\vert_V
\eneqn
We know from~\cite[Th.~7.2.1]{KS90} that $\C_{\Delta_{X}}\simeq L^{*}\circ L$, so that we get a morphism in $\Derb(\C_V)$
\eq\label{eq:contact_transform_2}
&&\oim{\chi}\muhom(\Phi_{L[n]}(G),\sho_X)\vert_U\xrightarrow{\alpha(s')}\muhom(G,\sho_Y)\vert_V
\eneq
We must prove that (\ref{eq:contact_transform_1}) and (\ref{eq:contact_transform_2}) are inverse to each other. By Lemma \ref{lem:conv_section}~(ii), the composition of these two morphisms is $\alpha(s'\cdot s)$, with $s'\cdot s\in\she_X$.
To any left $\she_X$-module $\shm$ corresponds a right $\she_X$-module $\Omega_X\tens_{\sho_X}\shm$. Fixing a non-degenerate form $t_X$ of $\Omega_X\vert_U$ (resp. $t_Y$ of $\Omega_Y\vert_{V}$), we now apply Theorem~\ref{th:quantized_iso}: $s$ and $s'$ are non-degenerate sections, so that $(\she_{X\times Y}\vert_{U\times V^a},t_X\tens s)$ and $(\she_{Y\times X}\vert_{V^a\times U},s'\tens t_Y)$ are simple, hence isomorphic to $(\opb{p_1}\she_X\vert_U,1)\simeq(\opb{{p_2^a}}\she_Y\vert_{V},1)$. Since $\Omega_X$, resp. $\Omega_Y$, is an invertible $\sho_X$-module, resp. $\sho_Y$-module, we get as well that the left-right $(\she_X\vert_U,\she_Y\vert_{V})$ bi-module, resp. left-right $(\she_Y\vert_{V},\she_X\vert_U)$ bi-module, generated by $s$, resp. $s'$, is isomorphic to $\opb{p_1}\she_X\vert_U\simeq\opb{{p_2^a}}\she_Y\vert_{V}$.
Then, following the proof of \cite[Th.~11.4.9]{KS90}, $s$ and $s'$ define ring isomorphisms associating to each $P\in\she_X(U)$, resp. $P'\in\she_X(U)$, some $Q\in\she_Y(V)$, resp. $Q'\in\she_Y(V)$, such that $P\cdot s=s\cdot Q$, resp. $s'\cdot P'=Q'\cdot s'$. Hence, $\alpha(s')\circ \alpha(s)$ is an automorphism of $\muhom(G,\sho_Y)\vert_V$, defined by the left action of $s'\cdot s\in\she_X$. Therefore, we can choose $s'$ so that $\alpha(s')\circ \alpha(s)$ is the identity.
We are now in a position to prove Theorem \ref{th:qct_main_theorem}: we constructed above, for each $P\in\opb{p_{1}}\she_X\vert_U$, some $Q\in\opb{{p_2^a}}\she_Y\vert_{V}$ such that $P\cdot s=s\cdot Q$, and we can apply Theorem~\ref{th:KS14b} to conclude. \end{proof}
\section{Radon transform for sheaves}\label{part:radon_transform}
We are going to apply the results of the previous chapter to the case of projective duality. Recall that projective duality for $\mathcal{D}$-modules was established by D'Agnolo-Schapira \cite{DS96}. We will extend their results in a microlocal setting.
\subsection{Notations}\label{sec:notations_projective}
In the following, we will quantize the contact transform associated with the Lagrangian submanifold $\dT{}_\mathbb{S}^*(\mathbb{P}\times\mathbb{P}^*)$, where $\mathbb{S}$ is the hypersurface of $\mathbb{P}\times \mathbb{P}^{*}$ defined by the incidence relation $\langle \xi,x\rangle=0,(x,\xi)\in\mathbb{P}\times \mathbb{P}^{*}$.
We denote by $\dT{}^*_P\mathbb{P}$, resp. $\dT{}^*_{P^*}\mathbb{P}^*$, the conormal space to $P$ in $\dT{}^*\mathbb{P}$, resp. to $P^*$ in $\dT{}^*\mathbb{P}^*$, and we will construct and denote by $\chi$ the homogeneous symplectic isomorphism between $\dT{}^*\mathbb{P}$ and $\dT{}^*\mathbb{P}^*$.
For $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we denote by $\mathbb{C}_{P}(\varepsilon)$ the following sheaves: for $\varepsilon=0$, we set \eqn \mathbb{C}_{P}(0):=\mathbb{C}_{P} \eneqn and for $\varepsilon=1$, $\mathbb{C}_{P}(1)$ is the sheaf defined by the following exact sequence: \eq\label{def:proj_loc_constant_sheaf} 0\rightarrow \mathbb{C}_{P}(1) \rightarrow \eim{q}\mathbb{C}_{\widetilde{P}}\xrightarrow{tr}\mathbb{C}_{P}\rightarrow0 \eneq where $q$ is the $2:1$ map from the universal cover $\widetilde{P}$ of $P$ to $P$, and $tr$ is the integration morphism $tr:\eim{q}\mathbb{C}_{\widetilde{P}}\simeq\eim{q}\epb{q}\mathbb{C}_{P}\to\mathbb{C}_{P}$.
Let $p\in\mathbb{Z}$ be an integer and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$. We define the sheaves of real analytic functions and hyperfunctions on $P$, resp. $P^*$, twisted by some power of the tautological line bundle: \eqn \sha_{P}(\varepsilon,p)\eqdot\sha_{P}\tens_{\sho_\mathbb{P}}\sho_\mathbb{P}(p)\tens_{\mathbb{C}}\mathbb{C}_{P}(\varepsilon) \eneqn
\eqn \shb_P(\varepsilon,p):= \shb_P\tens_{\sha_{P}} \sha_{P}(\varepsilon,p)\simeq\rhom(\RD'_\mathbb{P}\C_P,\sho_\mathbb{P}(p))\tens\mathbb{C}_{P}(\varepsilon) \eneqn
We define the sheaves of microfunctions on $P$ resp. $P^{*}$ twisted by some power of the tautological bundle, \eqn \mathscr{C}_{P}(\varepsilon,p):=H^0(\mu\mathpzc{hom}(D'_\mathbb{P}\mathbb{C}_{P},\mathcal{O}_{\mathbb{P}}(p)))\tens\mathbb{C}_{P}(\varepsilon) \eneqn and similarly with $P^*$ instead of $P$. We notice that for $n$ odd, $D'_\mathbb{P}\mathbb{C}_{P}\simeq\mathbb{C}_{P}(0)=\mathbb{C}_{P}$, and for $n$ even $D'_\mathbb{P}\mathbb{C}_{P}\simeq\mathbb{C}_{P}(1)$.
For $X,Y$ either the manifold $\mathbb{P}$ or $\mathbb{P}^*$ and any two integers $p,q$, we denote by $\mathcal{O}_{X\times Y}(p,q)$ the line bundle on $X\times Y$ with homogeneity $p$ in the $X$ variable and $q$ in the $Y$ variable. We set \eqn \begin{array}{rrl}
\Omega_{X\times Y/X}(p,q)&:=&\Omega_{X\times Y/X} \tens_{\mathcal{O}_{X\times Y}}\mathcal{O}_{X\times Y}(p,q)\\
\she_X^{\mathbb{R}}(p,q)&:=&\muhom(\mathbb{C}_{\Delta_{X}},\Omega_{X\times X/X}(p,q))[d_X] \end{array} \eneqn and we define accordingly $\she_X(p,q)$. Let us notice that $\she_X^{\mathbb{R}}(-p,p)$ is a sheaf of rings.
Let $n$ be the dimension of $P$ (of course, $n=d_{\mathbb{P}}$). For an integer $k$ and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we set \eqn k^{*}:=-n-1-k \eneqn \eqn \varepsilon^{*}:=-n-1-\varepsilon\hspace{1ex}\mathrm{mod}\,(2) \eneqn
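Let us note, as an elementary check of this notation, that the map $k\mapsto k^{*}$ is an involution, and that the value $k=-n$, which corresponds to the classical Radon transform, gives $k^{*}=-1$:
\eqn
(k^{*})^{*}=-n-1-(-n-1-k)=k,\qquad k=-n\;\Longrightarrow\;k^{*}=-n-1+n=-1
\eneqn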
\subsection{Projective duality: geometry} \label{geometric_situation}
\subsubsection{Notations}\label{notation_tensored_microfunctions}
We refer to the notations of Section~\ref{sec:notation_from_results}. We recall that we denote by \eqn \text{$V$, $\mathbb{V}$, an $(n+1)$-dimensional real and complex vector space, respectively,}\\ \text{$P$, $\mathbb{P}$, the $n$-dimensional real and complex projective space, respectively,}\\ \text{$S$, $\mathbb{S}$, the real and complex incidence hypersurface in $P\times P^*$, $\mathbb{P}\times\mathbb{P}^*$, respectively.} \eneqn When necessary, we will emphasize the dimension by writing $\mathbb{P}_{n}$, resp. $\mathbb{P}^{*}_{n}$.
Let $X,Y$ be complex manifolds. We recall that we denote by $q_1$ and $q_2$ the respective projections of $X\times Y$ on each of its factors.
For $K\in\Derb(\mathbb{C}_{X\times Y})$, we recall that we defined the functor: \eqn && \Phi_K \cl \Derb(\mathbb{C}_X)\rightarrow\Derb(\mathbb{C}_Y) \\ && F\mapsto Rq_{2 !}(K\tens q_{1}^{-1}F) \eneqn
For an integer $k$ and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$, we set $k^{*}=-n-1-k$ and $\varepsilon^{*}=-n-1-\varepsilon \text{ mod}(2)$. We refer to Section \ref{sec:notation_from_results} for the definition of the sheaves of twisted microfunctions $\mathscr{C}_{P}(\varepsilon,k),\mathscr{C}_{P^{*}}(\varepsilon^*,k^*)$.
\subsubsection{Geometry of projective duality}
For a manifold $X$, we denote by $P^*X$ the projectivization of the cotangent bundle of $X$. The following results are well-known. However, we will give a proof of Proposition \ref{dual_topo_iso}, since it is more straightforward than the one usually found in the literature.
\begin{proposition}\label{dual_topo_iso}
There is a homogeneous complex symplectic isomorphism
\eq\label{eq:dual_topo_cotangent_iso_t}
\dT{}^{*}\mathbb{P} \simeq \dT{}^{*}\mathbb{P}^{*}
\eneq
and a contact isomorphism
\eq\label{eq:dual_topo_coproj_iso_t}
P^*\mathbb{P}\simeq \mathbb{S}\simeq P^*\mathbb{P}^*
\eneq \end{proposition}
\begin{proof}
We have the natural morphism
\eqn
\mathbb{V}\setminus\{0\} \xrightarrow{\rho} \mathbb{P}
\eneqn
According to (\ref{diag:microlocal1}), this morphism, after removing the zero section, induces the following diagram
\eqn
&&\xymatrix{
T{}^*(\mathbb{V}\setminus\{0\}) \ar[dr]&\mathbb{V}\setminus\{0\} \times_{\mathbb{P}} T{}^*\mathbb{P}\ar[l]_-{^t\rho'}\ar[r]\ar[d]&T{}^*\mathbb{P}\ar[d]\\
&\mathbb{V}\setminus\{0\}\ar[r]&\mathbb{P}
}
\eneqn
We notice that $^t\rho'$ is an immersion. Let us denote by $\mathbb{H}$, $\mathbb{H}^*$, the incidence hypersurfaces:
\eqn
\mathbb{H}=\{(\xi,x)\in\mathbb{V}^*\times(\mathbb{V}\setminus\{0\});\langle\xi,x\rangle=0\}\\
\mathbb{H}^*=\{(x,\xi)\in\mathbb{V}\times(\mathbb{V}^*\setminus\{0\});\langle x,\xi\rangle=0\}
\eneqn
Noticing that for $x\in\mathbb{V}\setminus\{0\}$, $\rho$ is constant along the fiber above $\rho(x)$, we see that $^t\rho'$ is an immersion into the incidence hypersurface $\mathbb{H}$. Besides, $^t\rho'$ is a morphism of fibered spaces and so, by a dimension argument, we conclude that this immersion is also onto.
Removing the zero sections, we get the diagram
\eq\label{dia:incidence_map_duality}
&&\xymatrix{
T{}^*(\mathbb{V}\setminus\{0\}) \ar[d]_-{\simeq} & \mathbb{H} \ar@{_{(}->}[l] \ar[d]_-{\simeq} &\mathbb{V}\setminus\{0\} \times_{\mathbb{P}} \dT{}^*\mathbb{P}\ar[l]\ar[r]\ar[d]_-{\simeq} & \dT{}^*\mathbb{P}\\
T{}^*(\mathbb{V}^*\setminus\{0\}) & \mathbb{H}^* \ar@{_{(}->}[l] & (\mathbb{V}^*\setminus\{0\} )\times_{\mathbb{P}^*} \dT{}^*\mathbb{P}^* \ar[l]\ar[r]& \dT{}^*\mathbb{P}^*
}
\eneq
where the isomorphism between $\mathbb{H}$ and $\mathbb{H}^*$ follows from the following symplectic isomorphism:
\eqn
\dT{}^*(\mathbb{V}\setminus\{0\}) \simeq \dT{}^*(\mathbb{V}^*\setminus\{0\}) \\
(x,\xi) \mapsto (\xi,-x)
\eneqn
Now, taking the quotient by the action of $\mathbb{C}^*$ on both sides of the isomorphism between $(\mathbb{V}\setminus\{0\} )\times_{\mathbb{P}} \dT{}^*\mathbb{P}$ and $(\mathbb{V}^*\setminus\{0\} )\times_{\mathbb{P}^*} \dT{}^*\mathbb{P}^*$, we get the isomorphism:
\eqn
\dT{}^*\mathbb{P} \simeq (\mathbb{V}^*\setminus\{0\} )\times_{\mathbb{P}^*} P^*\mathbb{P}^* \simeq \dT{}^*\mathbb{P}^*
\eneqn
This gives (\ref{eq:dual_topo_cotangent_iso_t}).
Besides, passing to the quotient by the action of $\mathbb{C}^*\times\mathbb{C}^*$ on the two central columns of diagram (\ref{dia:incidence_map_duality}), we get (\ref{eq:dual_topo_coproj_iso_t}). \end{proof}
\begin{proposition}\label{projective_duality_iso}
Consider the double fibrations
\eq\label{dia:projective_duality_iso}
&&\xymatrix{
&\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times \mathbb{P}^{*})\ar[ld]_\sim^-{p_1}\ar[rd]^\sim_-{p_{2}^{a}}&\\
\dT{}^{*}\mathbb{P}\ar[rr]^-\sim_-\chi& & \dT{}^{*}\mathbb{P}^{*}
}
\eneq
Then, $p_{1}$ and $p_{2}^{a}$ are isomorphisms and $\chi=p_{2}^{a} \circ \opb{p_{1}}$ is a homogeneous symplectic isomorphism. \end{proposition}
Now, we are going to prove the following
\begin{proposition}\label{rc_dual_topo_iso}
The diagram (\ref{dia:projective_duality_iso}) induces
\eqn
&&\xymatrix{
& \dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times \mathbb{P}^{*})\cap(\dT{}^{*}_{P}\mathbb{P}\times\dT{}^{*}_{P^{*}}\mathbb{P}^{*})\ar[ld]_\sim^-{p_1}\ar[rd]^\sim_-{p_{2}^{a}}&\\
\dT{}^{*}_{P}\mathbb{P}\ar[rr]^-\sim_-\chi& & \dT{}^{*}_{P^{*}}\mathbb{P}^{*}
}
\eneqn \end{proposition}
\subsection{Projective duality for microdifferential operators} \label{presentation_preparatory_theorem}
Let $k,k'$ be integers and $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$. We follow the notations of the sections \ref{sec:notation_from_results} and \ref{sec:reminder_algebraic_analysis}. We define similarly a twisted version of $\shb_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}$ and $\shc_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}$.
We set \eqn \shb_{\mathbb{S}}^{(n,0)}(k,k'):=\opb{q_{2}}\mathcal{O}_{\mathbb{P}^*}(k')\tens_{\opb{q_2}\mathcal{O}_{\mathbb{P}^*}}\shb_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}\tens_{\opb{q_1}\mathcal{O}_\mathbb{P}}\opb{q_{1}}(\mathcal{O}_{\mathbb{P}}(k)\tens_{\mathcal{O}_\mathbb{P}}\Omega_\mathbb{P}) \eneqn
and the $(\she_{\mathbb{P}}(-k,k),\she_{\mathbb{P}^*}(-k^*,k^*))$-module \eqn \shc_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}(k,k'):=\she \shb_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}(k,k') \eneqn
We notice that $\she_{\mathbb{P}}(-k,k)$ is nothing but $\mathcal{O}_{\mathbb{P}}(-k)\mathcal{D}\tens{_{\opb{\pi{_{\mathbb{P}}}}\mathcal{D}_\mathbb{P}}}\she_{\mathbb{P}}\tens{_{\opb{\pi{_{\mathbb{P}}}}\mathcal{D}_\mathbb{P}}}\mathcal{D}\mathcal{O}_{\mathbb{P}}(k)$. According to the diagram (\ref{dia:projective_duality_iso}), we denote by $\chi$ the homogeneous symplectic isomorphism \eqn \chi:=p_{2}^{a}\vert_{\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times \mathbb{P}^{*})}\circ\opb{p_{1}\vert_{\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times \mathbb{P}^{*})}} \eneqn
We have: \begin{theorem}[{\cite[p.~469]{DS96}}]\label{th:projective_non_degenerate_section}
Assume $-n-1<k<0$. There exists a section $s$ of $\muhom(\mathbb{C}_{\mathbb{S}}[-1],\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))$, non-degenerate on $\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$. \end{theorem} \begin{proof}
From the exact sequence:
\eq\label{support_distinguished_triangle}
0 \longrightarrow \mathbb{C}_{(\mathbb{P}\times\mathbb{P}^{*})\setminus \mathbb{S}}\longrightarrow \mathbb{C}_{\mathbb{P}\times\mathbb{P}^{*}} \longrightarrow \mathbb{C}_{\mathbb{S}} \longrightarrow 0
\eneq
we get the natural morphism
\eqn
\begin{array}{rcl}
\rsect((\mathbb{P}\times\mathbb{P}^{*})\setminus \mathbb{S};\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))\ & \to & \rsect_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^{*};\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))[1] \\
& \simeq & \rsect(\mathbb{P}\times\mathbb{P}^{*};\rhom(\mathbb{C}_{\mathbb{S}};\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))) [1]\\
& \simeq & \rsect(T^*{}(\mathbb{P}\times\mathbb{P}^*);\muhom(\mathbb{C}_{\mathbb{S}};\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))) [1]\\
& \to &
\rsect(\dT{}^*(\mathbb{P}\times\mathbb{P}^*);\muhom(\mathbb{C}_{\mathbb{S}}[-1];\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*)))\\
\end{array}
\eneqn
Let $z=(z_0,... ,z_n)$ be a system of homogeneous coordinates on $\mathbb{P}$ and $\zeta= (\zeta_0,... ,\zeta_n)$ the dual coordinates on $\mathbb{P}^*$. As explained in \cite{DS96}, a non-degenerate section is provided by the Leray section, defined for $(z,\zeta)\in(\mathbb{P}\times\mathbb{P}^*)\setminus\mathbb{S}$ by
\eq\label{def:projective_non_degenerate_section}
s(z,\zeta)=\frac{\omega'(z)}{\langle z,\zeta\rangle^{n+1+k}}
\eneq
where $\omega'(z)$ is the Leray form $\omega^{\prime }(z) = \sum_{k=0}^{n}(-1)^{k}z_{k}\, dz_{0}
\wedge \ldots\wedge dz_{k-1}\wedge dz_{k+1}\wedge\ldots\wedge dz_{n}$; see Leray~\cite{L59}. \end{proof}
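For the reader's convenience, let us write the Leray section explicitly in the lowest-dimensional case. For $n=1$, the condition $-n-1<k<0$ forces $k=-1$, so that $n+1+k=1$ and, with the pairing $\langle z,\zeta\rangle=z_{0}\zeta_{0}+z_{1}\zeta_{1}$, the section reduces to
\eqn
s(z,\zeta)=\frac{\omega'(z)}{\langle z,\zeta\rangle}=\frac{z_{0}\,dz_{1}-z_{1}\,dz_{0}}{z_{0}\zeta_{0}+z_{1}\zeta_{1}}
\eneqn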
Let $s$ be a section of $H^1(\muhom(\mathbb{C}_{\mathbb{S}}[-1],\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*)))$, non-degenerate on $\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$. \begin{theorem}\label{th:projective_operator_main_theorem}
Assume $-n-1<k<0$. Then, we have an isomorphism in $\Derb(\C_{\dT{}^*\mathbb{P}})$
\eqn
\underline{\Phi}_{\mathbb{S}}^{\mu}(\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}}) \simeq \she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}\\
\oim{\chi}\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}} \simeq \she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}
\eneqn \end{theorem} \begin{proof}
Let $\mathcal{F}$, $\mathcal{G}$ be line bundles on $\mathbb{P}$, and $\mathbb{P}^*$ respectively. We know from \cite{SKK73} that a global non-degenerate section $s\in\sect(\dT{}^*\mathbb{P}\times\dT{}^*\mathbb{P}^*;\shc_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}\tens_{\opb{p_1}\she_{\mathbb{P}}}\she\mathcal{F}\tens_{\opb{p_2}\she_{\mathbb{P}^*}}\mathcal{G}^{\tens -1}\she)$ induces an isomorphism of $\she$-modules
\eqn
\underline{\Phi}_{\mathbb{S}}^{\mu}(\she\mathcal{F}\vert_{\dT{}^*\mathbb{P}}) \simeq \she\mathcal{G}\vert_{\dT{}^*\mathbb{P}^*}
\eneqn
Now, let us set $\mathcal{F}=\mathcal{O}_{\mathbb{P}}(k)$, $\mathcal{G}=\mathcal{O}_{\mathbb{P}^{*}}(k^{*})$. Then, Theorem \ref{th:projective_non_degenerate_section} provides such a non-degenerate section in $\sect(\dT{}^*\mathbb{P}\times\dT{}^*\mathbb{P}^*;\shc_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}\tens_{\opb{p_1}\she_{\mathbb{P}}}\she\mathcal{O}_{\mathbb{P}}(k)\tens_{\opb{p_2}\she_{\mathbb{P}^*}}\mathcal{O}_{\mathbb{P}^{*}}(k^{*})^{\tens -1}\she)$. Hence, we have an isomorphism
\eqn
\underline{\Phi}_{\mathbb{S}}^{\mu}(\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}}) \simeq \she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}
\eneqn
On the other hand, $s$ is a non-degenerate section of $\shc_{\mathbb{S}\vert \mathbb{P}\times \mathbb{P}^*}^{(n,0)}(-k,k^*)$, hence we can apply Theorem \ref{th:quantized_iso}. Let us set
\eqn
\she_{\mathbb{P}\times \mathbb{P}^*}(k,k^*):=\she_{\mathbb{P}}(-k,k)\etens_{\opb{\pi}\mathcal{O}_{\mathbb{P}\times\mathbb{P^*}}}\she_{\mathbb{P}^*}(-k^*,k^*)
\eneqn
Theorem \ref{th:quantized_iso} gives the following isomorphisms
\eqn
\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}} \simeq \oim{p_1}(\she_{\mathbb{P}\times \mathbb{P}^*}(k,k^*).s)\vert_{\dT{}^*\mathbb{P}}\\
\oim{p_2}(\she_{\mathbb{P}\times \mathbb{P}^*}(k,k^*).s)\vert_{\dT{}^*\mathbb{P}^*} \simeq
\she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}
\eneqn
Hence,
\eqn
\oim{\chi}\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}} \simeq \she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}
\eneqn \end{proof}
\subsection{Projective duality for microfunctions } \label{presentation_main_theorem}
In the following, we will denote by $K$ the object $\mathbb{C}_{\mathbb{S}}[n-1]$. In order to prove Theorem \ref{microfunction_iso_theorem}, we will need to compute $\Phi_{K}(\mathbb{C}_{P}(1))$, which is done in \cite{DS96}:
\begin{lemma}[{\cite{DS96}}]\label{lem:constant_sheaf_correspondance}
We have
\eqn
\Phi_K(\mathbb{C}_{P}(1))\simeq\left\{
\begin{array}{ll}
\mathbb{C}_{P^{*}}(1)\text{ , for $n$ odd}\\
\mathbb{C}_{\mathbb{P}^{*}\setminus P^{*}}[1]\text{ , for $n$ even}\\
\end{array}
\right.
\eneqn
and
\eqn
H^j(\Phi_{K}(\mathbb{C}_{P}(0)))\simeq\left\{
\begin{array}{ll}
\mathbb{C}_{\mathbb{P}^*}\text{ , for $j=n-1$}\\
\mathbb{C}_{\mathbb{P}^*\setminus P^*}\text{, for $j=-1$ and $n$ odd}\\
\mathbb{C}_{P^*}(1)\text{, for $j=0$ and $n$ even}\\
0\text{ in any other case}\\
\end{array}
\right.
\eneqn \end{lemma}
We are now in a position to prove:
\begin{theorem}\label{microfunction_iso_theorem}
Assume $-n-1<k<0$. Recall that any section $s\in \sect(\mathbb{P}\times\mathbb{P}^*;\shb_{\mathbb{S}}^{(n,0)}(-k,k^*))$ defines a morphism in $\Derb(\C_{\dT{}^*\mathbb{P}})$
\eq\label{projective_twisted_microfunction_iso}
\chi_{*}\mathscr{C}_{P}(\varepsilon,k)\vert_{\dT{}^*_P\mathbb{P}}\to\mathscr{C}_{P^{*}}(\varepsilon^{*},k^{*})\vert_{\dT{}^*_{P^*}\mathbb{P}^*}
\eneq
Assume $s$ is non-degenerate on $\dT{}^*_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$. Then (\ref{projective_twisted_microfunction_iso}) is an isomorphism. Moreover, there exists such a non-degenerate section. \end{theorem}
\begin{remark}
\bnum
\item This is a refinement of a general theorem of \cite{SKK73} and is a microlocal version of Theorem 5.17 in \cite{DS96}.
\item
The classical Radon transform deals with the case where $k=-n$, $k^{*}=-1$.
\enum
\end{remark}
\begin{proof}
We will deal with the case $\varepsilon=1$ and $n$ even, the other cases being proved in the same way. Let us apply Theorem \ref{main_theorem_contact_muhom} in the following particular case:
- $U=\dT{}^{*}\mathbb{P}$, $V=\dT{}^{*}\mathbb{P}^*$, $\Lambda=\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^{*})$.
- $K$ is $\mathbb{C}_{\mathbb{S}}[n-1]$.
- $F_{1}=\mathbb{C}_{P}(1)$ and $F_{2}=\mathcal{O}_{\mathbb{P}}(k)$.
$K$ satisfies conditions (i), (ii), (iii) of Theorem \ref{main_theorem_contact_muhom}:
(i) is fulfilled as the constant sheaf on a closed submanifold of a manifold is cohomologically constructible.
(ii) is fulfilled since $SS(\mathbb{C}_{\mathbb{S}})$ is nothing but $T^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^{*})$.
(iii) $\mathbb{C}_{T^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^{*})}\longrightarrow \mu\mathpzc{hom}(\mathbb{C}_{\mathbb{S}},\mathbb{C}_{\mathbb{S}})$ is an isomorphism on $T^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^{*})$ (this follows from the fact that for a closed submanifold $Z$ of a manifold $X$, $\mu_{Z}(\mathbb{C}_Z)\isoto\mathbb{C}_{T^*_ZX}$, see \cite[Prop.~4.4.3]{KS90}).
By a fundamental result of~\cite[Th.~5.17]{DS96}, we know that for $-n-1<k<0$, a section $s\in \sect(\mathbb{P}\times\mathbb{P}^*;\shb_{\mathbb{S}}^{(n,0)}(-k,k^*))$, non-degenerate on $\dT{}^*_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$, induces an isomorphism
\eqn
\Phi_K(\mathcal{O}_{\mathbb{P}}(k))\simeq \mathcal{O}_{\mathbb{P}^{*}}(k^{*})
\eneqn
Formula (\ref{def:projective_non_degenerate_section}) provides an example of such a non-degenerate section. Hence, applying Lemma \ref{lem:constant_sheaf_correspondance}, Theorem \ref{main_theorem_contact_muhom} gives:
\eqn
\begin{split}
\oim{\chi}\mu\mathpzc{hom}(\mathbb{C}_{P}(1),\mathcal{O}_{\mathbb{P}}(k))\vert_{\dT{}^*\mathbb{P}} & \simeq\mu\mathpzc{hom}(\mathbb{C}_{\mathbb{P}^{*}\setminus P^{*}}[1],\mathcal{O}_{\mathbb{P}^{*}}(k^{*}))\vert_{\dT{}^*\mathbb{P}^{*}}\\
\end{split}
\eneqn
We have the exact sequence:
\eq\label{eq:support_exact_sequence_dual}
0 \longrightarrow \mathbb{C}_{\mathbb{P}^{*}\setminus P^*} \longrightarrow \mathbb{C}_{\mathbb{P}^{*}} \longrightarrow \mathbb{C}_{P^*} \longrightarrow 0
\eneq
Now, for any $F\in\Derb(\mathbb{C}_{\mathbb{P}^*})$, we have
\eqn
\supp(\muhom(\mathbb{C}_{\mathbb{P}^*},F)\vert_{\dT{}^*\mathbb{P}^{*}})\subset (SS(\mathbb{C}_{\mathbb{P}^*}) \cap \dT{}^*\mathbb{P}^{*}) \cap SS(F) = \emptyset
\eneqn
and hence,
\eqn
\muhom(\mathbb{C}_{\mathbb{P}^*},F)\vert_{\dT{}^*\mathbb{P}^{*}}\simeq 0
\eneqn
Applying the functor $\muhom(\cdot\,,F)$ to the exact sequence above, we get
\eqn
\muhom(\mathbb{C}_{\mathbb{P}^{*}\setminus P^{*}},F)\vert_{\dT{}^*\mathbb{P}^{*}}[-1]\simeq \muhom(\mathbb{C}_{P^{*}},F)\vert_{\dT{}^*\mathbb{P}^{*}}
\eneqn
Hence, we have proved in particular that
\eqn
\begin{split}
\oim{\chi}\mu\mathpzc{hom}(\mathbb{C}_{P}(1),\mathcal{O}_{\mathbb{P}}(k))\vert_{\dT{}^*_P\mathbb{P}} & \simeq \mu\mathpzc{hom}(\mathbb{C}_{P^{*}},\mathcal{O}_{\mathbb{P}^{*}}(k^{*}))\vert_{\dT{}^*_{P^*}\mathbb{P}^{*}}\\
\end{split}
\eneqn
\end{proof}
\subsection{Main results}
We follow the notations of Section~\ref{sec:notation_from_results} and Section~\ref{sec:reminder_algebraic_analysis}.
Let us consider the situation (\ref{dia:projective_duality_iso}), where we denoted by $\chi$ the homogeneous symplectic isomorphism between $\dT{}^*\mathbb{P}$ and $\dT{}^*\mathbb{P}^*$ through $\dT{}^*_\mathbb{S}(\mathbb{P}\times\mathbb{P}^*)$. We set \eqn L:=\mathbb{C}_{\mathbb{S}}[-1] \eneqn Then $L$ is a perverse sheaf satisfying \eq\label{eq:2} &&(\opb{p_1}(\dT{}^*\mathbb{P})\cup\opb{{p_2^a}}(\dT{}^*\mathbb{P}^*))\cap\SSi(L)=\dT{}^*_\mathbb{S}(\mathbb{P}\times\mathbb{P}^*) \eneq
Recall Theorem \ref{th:projective_non_degenerate_section}, and let $s$ be a section of $\muhom(\mathbb{C}_{\mathbb{S}}[-1],\Omega_{\mathbb{P}\times \mathbb{P}^*/\mathbb{P}^*}(-k,k^*))$, non-degenerate on $\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$. We are in situation to apply Theorem \ref{th:qct_main_theorem}. \begin{theorem}\label{th:projective_main_theorem}
Let $G\in\Derb(\C_{\mathbb{P}^*})$, $k$ an integer. Assume $-n-1<k<0$. Then, we have an isomorphism in $\Derb(\C_{\dT{}^*\mathbb{P}})$:
\eq\label{eq:qct_projective_main_theorem}
&&\opb{\chi}\muhom(G,\sho_{\mathbb{P}^*}(k^*))\isoto\muhom(\Phi_{\mathbb{C}_{\mathbb{S}}[n-1]}(G),\sho_{\mathbb{P}}(k))
\eneq
This isomorphism is compatible with the action of $\she_{\mathbb{P}^*}(-k^*,k^*)$ and $\she_{\mathbb{P}}(-k,k)$ on the left and right side of (\ref{eq:qct_projective_main_theorem}) respectively. \end{theorem} \begin{proof}
The isomorphism is directly provided by Theorem \ref{th:qct_main_theorem} in the situation where, using the notation therein, $U=\dT{}^*\mathbb{P}$, $V=\dT{}^*\mathbb{P}^*$ and $\Lambda=\dT{}^*_\mathbb{S}(\mathbb{P}\times\mathbb{P}^*)$, and where we twist by homogeneous line bundles on $\mathbb{P}$, $\mathbb{P}^*$ as explained below.
Let us adapt (\ref{eq:qct_main_theorem}) by taking into account the twist by homogeneous line bundles. We follow exactly the same reasoning as in Sections \ref{sec:kernels_complex_manifolds} and \ref{sec:qct_main_theorem}.
We have the natural morphism
\eqn
&&\Omega_{\mathbb{P}^*\times \mathbb{P}/\mathbb{P}}(-k^*,k)\conv\sho_{\mathbb{P}^*}(k^*)\,[n]\to\sho_{\mathbb{P}}(k).
\eneqn
Indeed, we have
\eqn
\Omega_{\mathbb{P}^*\times \mathbb{P}/\mathbb{P}}(-k^*,k)\conv\sho_{\mathbb{P}^*}(k^*)\,[n]&=&
\reim{q_1}(\mathcal{O}_{\mathbb{P}^*\times \mathbb{P}}(-k^*,k)\tens_{\opb{q_2}\mathcal{O}_{\mathbb{P}^*}}\opb{q_2}\Omega_{\mathbb{P}^*}\tens\opb{q_2}\sho_{\mathbb{P}^*}(k^*)[n])\\
&\to&\reim{q_1}(\mathcal{O}_{\mathbb{P}^*\times \mathbb{P}}(k,0)\tens_{\opb{q_2}\mathcal{O}_{\mathbb{P}^*}}\opb{q_2}\Omega_{\mathbb{P}^*})[n]\to[\int]\sho_{\mathbb{P}}(k)
\eneqn
Given this morphism and considering $\shl\eqdot\muhom(\mathbb{C}_{\mathbb{S}}[-1],\Omega_{\mathbb{P}^*\times\mathbb{P}/\mathbb{P}}(-k^*,k))$, we mimic the proof of Theorem \ref{th:KS14b} so that for a section $s$ of $\shl$ on $\dT{}^*\mathbb{P}\times\dT{}^*\mathbb{P}^{*a}$ and for $P\in\sect(\dT{}^*\mathbb{P};\she_{\mathbb{P}}(-k,k))$ and $Q\in\sect(\dT{}^*\mathbb{P}^{*};\she_{\mathbb{P}^*}(-k^*,k^*))$ satisfying $P\cdot s=s\cdot Q$, the diagram below commutes:
\eq\label{diag:Ps=sQ}
&&\xymatrix{
\mathbb{C}_{\mathbb{S}}\circ\muhom(G,\sho_{\mathbb{P}^*}(k^*))\vert_{\dT{}^*_{P^*}\mathbb{P}^*}\ar[rr]_-{\alpha(s)}\ar[d]_-{\Phi_{\mathbb{C}_{\mathbb{S}}}(\alpha(Q))}&&\muhom(\mathbb{C}_{\mathbb{S}}[n-1]\conv G,\sho_{\mathbb{P}}(k))\vert_{\dT{}^*_P\mathbb{P}}\ar[d]_-{\alpha(P)}\\
\mathbb{C}_{\mathbb{S}}\circ\muhom(G,\sho_{\mathbb{P}^*}(k^*))\vert_{\dT{}^*_{P^*}\mathbb{P}^*}\ar[rr]_-{\alpha(s)}&&\muhom(\mathbb{C}_{\mathbb{S}}[n-1]\conv G,\sho_{\mathbb{P}}(k))\vert_{\dT{}^*_P\mathbb{P}}.
}
\eneq
From there, given a non-degenerate section of $\shl$ on $\dT{}^{*}_{\mathbb{S}}(\mathbb{P}\times\mathbb{P}^*)$, Theorem \ref{th:qct_main_theorem} gives the compatible action of microdifferential operators on each side of the isomorphism (\ref{eq:qct_projective_main_theorem})
\eqn
&&\opb{\chi}\muhom(G,\sho_{\mathbb{P}^*}(k^*))\vert_{\dT{}^*_{P^*}\mathbb{P}^*}\isoto\muhom(\Phi_{\mathbb{C}_{\mathbb{S}}[n-1]}(G),\sho_{\mathbb{P}}(k))\vert_{\dT{}^*_P\mathbb{P}}
\eneqn
It remains to exhibit a non-degenerate section so that, for $P\in\sect(\dT{}^*\mathbb{P};\she_{\mathbb{P}}(-k,k))$, there is $Q\in\sect(\dT{}^*\mathbb{P}^{*};\she_{\mathbb{P}^*}(-k^*,k^*))$ such that $P\cdot s=s\cdot Q$. Precisely, $s$ is given by Theorem \ref{th:projective_non_degenerate_section}. \end{proof}
Specializing the above theorem, we get \begin{corollary}\label{coro:microfunction_duality_iso}
Let $\varepsilon\in\mathbb{Z}/2\mathbb{Z}$. In the situation of Theorem \ref{th:projective_main_theorem}, we have the following isomorphism, compatible with the respective actions of $\opb{p_{1}}\she_{\mathbb{P}}(-k,k)$ and $\opb{{p_{2}^a}}\she_{\mathbb{P}^*}(-k^*,k^*)$:
\eqn
\chi_{*}\mathscr{C}_{P}(\varepsilon,k)\vert_{\dT{}^*_P\mathbb{P}}\simeq\mathscr{C}_{P^{*}}(\varepsilon^{*},k^{*})\vert_{\dT{}^*_{P^*}\mathbb{P}^*}
\eneqn
\end{corollary} \begin{proof}
This is an immediate consequence of Theorem \ref{th:projective_main_theorem}, where we consider the special case $G=\mathbb{C}_{P^*}(\varepsilon^*)$. Indeed, we have, from Lemma \ref{lem:constant_sheaf_correspondance}, the isomorphism in $\Derb(\mathbb{C}_{\mathbb{P}^*};\dT{}^*\mathbb{P}^*)$
\eqn
\mathbb{C}_{\mathbb{S}}[n-1]\conv \mathbb{C}_{P^*}(\varepsilon^*)\simeq\mathbb{C}_{P}(\varepsilon)
\eneqn \end{proof}
We can state now
\begin{corollary}
Let $k$ be an integer. Let $\mathcal{N}$ be a coherent $\she_{\mathbb{P}}(-k,k)$-module and $F\in\Derb(\C_\mathbb{P})$. Assume $-n-1<k<0$. Then, we have an isomorphism in $\Derb(\C_{\dT{}^*\mathbb{P}})$
\eqn
&&\oim{\chi}\rhom{_{\she_{\mathbb{P}}(-k,k)}}(\mathcal{N},\muhom(F,\mathcal{O}_\mathbb{P}(k)))\\
&&\hspace{3em}\simeq\rhom{_{\she_{\mathbb{P}^*}(-k^*,k^*)}}(\underline{\Phi}_{\mathbb{S}}^{\mu}(\mathcal{N}),\muhom(\Phi_{\mathbb{C}_{\mathbb{S}}[n-1]}F,\mathcal{O}_{\mathbb{P}^*}(k^*)))
\eneqn
\eneqn \end{corollary} \begin{proof}
It suffices to prove this statement for finite free $\she_{\mathbb{P}}(-k,k)$-modules, which in turn can be reduced to the case where $\mathcal{N}=\she_{\mathbb{P}}(-k,k)$.
By Theorem \ref{th:projective_operator_main_theorem}, we have
\eqn
\underline{\Phi}_{\mathbb{S}}^{\mu}(\she_{\mathbb{P}}(-k,k)\vert_{\dT{}^*\mathbb{P}}) \simeq \she_{\mathbb{P}^*}(-k^*,k^*)\vert_{\dT{}^*\mathbb{P}^*}
\eneqn
Then, by applying Proposition \ref{th:projective_main_theorem}, we have
\eqn
\oim{\chi}\muhom(F,\mathcal{O}_\mathbb{P}(k))\vert_{\dT{}^*\mathbb{P}^*} \simeq
\rhom{_{\she_{\mathbb{P}^*}(-k^*,k^*)}}(\she_{\mathbb{P}^*}(-k^*,k^*),\muhom(\Phi_{\mathbb{C}_{\mathbb{S}}[n-1]}F,\mathcal{O}_{\mathbb{P}^*}(k^*)))\vert_{\dT{}^*\mathbb{P}^*}
\eneqn
which proves the corollary. \end{proof}
\printbibliography[heading=bibintoc,title={References}]
\end{document}
\begin{document}
\title{Asymptotic strong Feller property and local weak irreducibility via generalized couplings} \renewcommand{**}{*} \footnotetext{Email: \texttt{[email protected]}. Supported in part by DFG Research Unit FOR 2402 and ERC grant 683164.}
\renewcommand{**}{**} \footnotetext{Email: \texttt{[email protected]}. }
\begin{abstract} In this short note we show how the asymptotic strong Feller property (ASF) and local weak irreducibility can be established via generalized couplings. We also prove that a stronger form of ASF together with local weak irreducibility implies uniqueness of an invariant measure. The latter result is optimal in a certain sense and complements some of the corresponding results of Hairer, Mattingly (2008). \end{abstract}
\section{Introduction} In this short note we show how the asymptotic strong Feller property (ASF) and local weak irreducibility can be established via generalized couplings. We also prove that a stronger form of ASF together with local weak irreducibility implies uniqueness of an invariant measure. The latter result is optimal in a certain sense and complements some of the corresponding results of \cite[Section~2]{HM11}, \cite[Section~2.1]{HM08}, and extends some of the ideas of \cite[Section 2]{BKS}.
A central question in the theory of Markov processes can be posed as follows: given a Markov semigroup determine whether it has a unique invariant measure. If the semigroup is strong Feller (this is typically the case for finite--dimensional Markov processes), then this problem is relatively easy to solve. For example, one way to get unique ergodicity for a strong Feller semigroup is just to verify a certain accessibility condition.
The problem becomes much more difficult if the Markov semigroup is only Feller and not strong Feller (this might happen for infinite--dimensional Markov processes, e.g. stochastic delay equations or SPDEs). A breakthrough was achieved in a series of works by Hairer and Mattingly \cite{HM}, \cite{HM08}, \cite{HM11}, where the notion of an \textit{asymptotically strong Feller} (ASF) Markov process was introduced. It turned out that if a Markov process has the ASF property, then any two of its ergodic invariant measures have disjoint support. This, in turn, implies that ASF together with a certain irreducibility condition yields unique ergodicity. However, verifying the ASF condition in practice can be quite involved and usually relies on Malliavin calculus techniques.
This short article has two goals. First, we provide an alternative way of establishing the ASF property based on the generalized couplings technique. We also hope that it might be useful in obtaining certain gradient-type bounds for SPDEs. Second, we show that a stronger version of the ASF property together with local weak irreducibility (a weaker condition than the one used in \cite{HM}) is sufficient for unique ergodicity. We also provide a way to establish this local weak irreducibility via generalized couplings.
\noindent \textbf{Acknowledgments}. This article is based on the master thesis of FW written at TU~Berlin under the supervision of OB. The authors are grateful to Michael Scheutzow for useful discussions. OB has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.~683164).
\section{Main results}
First, let us introduce some basic notation. Let $(E, d)$ be a Polish space equipped with the Borel $\sigma$-field $\mathcal{E}=\mathcal{B}(E)$. For $x\in E$, $R>0$ let $B_R(x):=\{z \in E\colon d(z,x)< R\}$ be the open ball of radius $R$ around $x$. Denote by $\mathcal{P}(E)$ the set of all probability measures on $(E, \mathcal{E})$. For $\mu, \nu \in \mathcal{P}(E)$ let $\mathscr{C}(\mu, \nu)$ be the set of all \textit{couplings} between $\mu$ and $\nu$, i.e. probability measures on $(E \times E, \mathcal{E} \otimes \mathcal{E})$ with marginals $\mu$ and $\nu$. For $\mu, \nu \in \mathcal{P}(E)$ and a measurable function $\rho: E\times E\to \mathbb{R}_+$ we put \begin{equation} W_\rho(\mu, \nu) \; := \; \inf_{\gamma\, \in\, \mathscr{C}(\mu, \nu)} \; \int_{E \times E} \, \rho(x,y)\; \gamma(\diff x, \diff y), \qquad \mu, \nu \in \mathcal{P}(E). \label{2.1} \end{equation} Clearly, for $\rho=d$, the function $W_d$ is just the standard Wasserstein--$1$ (or Kantorovich) distance. If $\rho(x,y)=\I(x\neq y)$, then $W_\rho$ coincides with the \textit{total variation} distance $d_{TV}$. The latter can also be defined as follows: $$
d_{TV}(\mu, \nu) \; := \; \sup_{A \, \in \, \mathcal{E}} \; |\mu(A) - \nu(A)|. $$
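For instance, for Dirac measures $\delta_x$, $\delta_y$ the set $\mathscr{C}(\delta_x, \delta_y)$ contains only the product measure $\delta_x\otimes\delta_y=\delta_{(x,y)}$, so the infimum in \eqref{2.1} is attained trivially and
$$
W_\rho(\delta_x, \delta_y) \; = \; \rho(x,y);
$$
in particular, $W_d(\delta_x, \delta_y)=d(x,y)$ and $d_{TV}(\delta_x, \delta_y)=\I(x\neq y)$.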
A mapping $\rho: E \times E \to \mathbb{R}_+$ is called a \textit{pseudo-metric} if it satisfies all the properties of a metric except possibly the property that $\rho(x,y)=0$ implies $x=y$. For a function $f\colon D\to\mathbb{R}$, where $D$ is an arbitrary domain, we will denote $\|f\|_{\infty}:=\sup_{x\in D}|f(x)|$.
For the convenience of the reader, all of our results will be stated in the continuous time framework. Yet, they also hold for the discrete time setup. Consider a Markov transition function $\{P_t(x, A), x\in E, A\in\mathcal{E}\}_{t\in\mathbb{R}_+}$. Recall the following concepts introduced in \cite{HM}.
\begin{Definition}[{\cite[Definition 3.1]{HM}}] An increasing sequence $(d_n)_{n\in\mathbb{Z}_+}$ of bounded, continuous pseudo-metrics on $E$ is called \textit{totally separating} if for every $x,y\in E$, $x\neq y$ one has $\lim\limits_{n \to \infty} d_n(x,y) = 1$. \end{Definition}
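A canonical example of such a sequence is obtained by truncating multiples of the metric $d$: putting
$$
d_n(x,y) \; := \; 1 \wedge \bigl( n\, d(x,y) \bigr), \qquad x,y \in E, \; n\in\mathbb{Z}_+,
$$
one readily checks that each $d_n$ is a bounded continuous pseudo-metric, that the sequence $(d_n)_{n\in\mathbb{Z}_+}$ is increasing, and that $d_n(x,y)\to 1$ as $n\to\infty$ whenever $x\neq y$; hence $(d_n)_{n\in\mathbb{Z}_+}$ is totally separating.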
\begin{Definition}[{\cite[Definition 3.8]{HM}}] We say that a Markov semigroup $(P_t)_{t\in \mathbb{R}_+}$ satisfies the \textit{Asymptotic Strong Feller Property} (ASF) if it is Feller and for every $x \in E$ there exist a sequence of positive real numbers $(t_n)_{n\in\mathbb{Z}_+}$ and a totally separating sequence $(d_n)_{n\in\mathbb{Z}_+}$ of pseudo--metrics such that $$ \inf_{U\, \in \, \mathcal{U}_x} \; \limsup\limits_{n \to \infty} \; \sup_{y\, \in \, U} \; W_{d_n}\left( P_{t_n}(x,\cdot) , P_{t_n}(y, \cdot)\right) \; = \; 0, $$ where $\mathcal{U}_x:=\{ U \subseteq E\colon \, x \in U \text{ and } U \text{ open}\}$ denotes the collection of all open neighborhoods of $x$. \end{Definition}
We refer to \cite{HM}, \cite{HM08}, \cite{HM11} for further discussions of this notion. In particular, it was shown in \cite[Corollary 3.17]{HM} that ASF, together with a certain irreducibility assumption, implies uniqueness of the invariant probability measure. It was also shown there (\cite[Proposition 3.12]{HM}) that ASF follows from the following stronger property.
\begin{Definition}[see also {\cite[Proposition 3.12]{HM}}] \label{Def:ASF+} We say that a Markov semigroup $(P_t)_{t\in \mathbb{R}_+}$ satisfies the \textit{Asymptotic Strong Feller Plus Property} (ASF+) if it is Feller and there exist $x_0 \in E$, a non-decreasing sequence $(t_n)_{n\in\mathbb{Z}_+}$, a positive sequence $(\delta_n)_{n\in\mathbb{Z}_+}$ with $\delta_n \searrow 0$ as $n \to \infty$ and a non-decreasing function $F: [0,\infty) \to [0,\infty)$ such that for every $d$-Lipschitz continuous function $\varphi: E \to \mathbb{R}$ with Lipschitz constant $K>0$, we have \begin{equation} \label{eq:ASF+}
\left| P_{t_n} \varphi (x) - P_{t_n} \varphi (y) \right| \; \leqslant \; d(x,y) \; F(d(x,x_0) \vee d(y,x_0) ) \; ( \, \norm{\varphi}_\infty + \delta_n K) \end{equation} for every $n \in \mathbb{N}$ and $x,y \in E$. \end{Definition}
\begin{Remark} \label{rem:2.4} It was shown in \cite[Remark 3.10]{HM} that ASF is an extension of the strong Feller property (i.e. any strong Feller semigroup satisfies ASF). Furthermore, \cite[Proposition 3.12]{HM} proved that ASF follows from ASF+. However, according to Example~\ref{ex:2} below, ASF+ is \textbf{not} an extension of the strong Feller property, and one can construct a strong Feller semigroup that does not satisfy ASF+. Nevertheless, the ASF+ property is quite useful, since it holds for many interesting Feller but not strong Feller semigroups. \end{Remark}
As mentioned before, thanks to the results of \cite{HM}, to show unique ergodicity it is enough to establish ASF or ASF+ (together with some irreducibility-type conditions). However, verifying these criteria in practice is usually rather tedious (see, e.g., \cite[Proposition 4.2]{CGHV}) and involves Malliavin calculus techniques. Our first main result suggests a different strategy of verifying ASF+. This strategy is based on the generalized coupling method and develops some ideas of \cite[Section 2.2]{BKS}.
Consider the following assumption which is related to \cite[Assumption A]{BKS}.
\begin{Assumption}{\textbf{A1}}\label{A:1} There exist non-decreasing functions $F_1, F_2\colon \mathbb{R}_+ \to \mathbb{R}_+$, a non-increasing function $r\colon\mathbb{R}_+\to\mathbb{R}_+$ with $\lim_{t\to\infty} r(t)=0$ and $x_0 \in E$ such that for every $x,y \in E$ and $t \in \mathbb{R}_+$, there exist $E$-valued random variables $Y^{x,y}_t$ and $Z^{x,y}_t$ on a common probability space with the following properties \begin{enumerate} \item $\Law(Y^{x,y}_t)=P_t(y,\cdot)$ and $$ d_{TV} \left(\Law(Z^{x,y}_t), P_t(x,\cdot)\right) \; \leqslant \; F_1(d(x,x_0) \vee d(y,x_0))\; d(x,y), \quad t\ge0. $$ \item $\hskip.15ex\mathsf{E}\hskip.10ex d(Z^{x,y}_t, Y^{x,y}_t) \; \leqslant \; F_2(d(x,x_0) \vee d(y,x_0)) \; r(t) \; d(x,y), \quad t\ge0$. \end{enumerate} \end{Assumption}
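Let us briefly indicate how the random variables in Assumption~\ref{A:1} typically arise in applications. The following is only a schematic sketch in the spirit of the generalized coupling constructions of \cite[Section 2]{BKS}; the damping rate $\lambda>0$ and the finite-dimensional projection $\pi_N$ below are model-dependent. For an SPDE of the form $\diff X_t=(AX_t+f(X_t))\,\diff t+\sigma\,\diff W_t$ one takes $Y^{x,y}$ to be the solution started at $y$, and lets $Z^{x,y}$ solve the controlled equation
$$
\diff Z_t \; = \; \bigl(A Z_t + f(Z_t)\bigr)\,\diff t \; + \; \lambda\, \pi_N\bigl(Y^{x,y}_t - Z_t\bigr)\,\diff t \; + \; \sigma\,\diff W_t, \qquad Z_0 = x,
$$
driven by the same noise $W$. If the added drift is strong enough, $d(Z_t, Y^{x,y}_t)$ contracts, which yields part 2 of Assumption~\ref{A:1}, while the Girsanov theorem bounds $d_{TV}(\Law(Z^{x,y}_t), P_t(x,\cdot))$ in terms of the cost of this drift, which yields part 1.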
\begin{Theorem}\label{Thm:2.2} If $(P_t)_{t\in \mathbb{R}_+}$ satisfies Assumption \ref{A:1}, then it also satisfies ASF+ with $F:=2F_1+F_2$ (and hence $(P_t)_{t\in \mathbb{R}_+}$ is asymptotically strong Feller). \end{Theorem}
It is interesting to compare \cite[Assumption A]{BKS} and Assumption~\ref{A:1}. Both of them are similar generalized-coupling-type assumptions which yield certain mixing properties. However, the former is \textbf{global} in space, whilst the latter is \textbf{local} in space; this difference is crucial for studying ergodic properties of certain SPDE models, see \cite[Section 5]{BKS}. Therefore we believe that Assumption~\ref{A:1} is more suited for establishing exponential ergodicity than \cite[Assumption A]{BKS}.
As a possible application of this result let us mention that it was shown in \cite{BKS} that the fractionally dissipative Euler model admits a generalized coupling satisfying Assumption~\ref{A:1}. Thus, by Theorem~\ref{Thm:2.2}, the gradient-type bound from \cite[Proposition 4.2]{CGHV} holds.
Another property that is important for unique ergodicity is \textit{local weak irreducibility}. The following definition is inspired by \cite[Assumptions 3 and 6]{HM08}.
\begin{Definition} We say that a semigroup $(P_t)_{t\in \mathbb{R}_+}$ is \textit{locally weak irreducible} if there exists $x_0 \in E$ such that for any $R>0$ and $\varepsilon>0$ there exists $T:=T(R,\varepsilon)>0$ such that for any $t\geqslant T$ one has \begin{equation}\label{LWI} \inf_{x,y \, \in \, B_R(x_0)} \; \; \sup_{\Gamma \in \, \mathscr{C}(P_t\delta_x, P_t\delta_y)} \; \Gamma \left(\{(x',y') \in E \times E \; \colon \; d(x',y')\leqslant \varepsilon \}\right) \; > \; 0. \end{equation} \end{Definition}
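Condition \eqref{LWI} can be conveniently rephrased in terms of the distances \eqref{2.1}: writing $\rho_\varepsilon(x',y'):=\I(d(x',y')>\varepsilon)$, one has
$$
\sup_{\Gamma \in \, \mathscr{C}(P_t\delta_x, P_t\delta_y)} \; \Gamma \left(\{(x',y') \in E \times E \; \colon \; d(x',y')\leqslant \varepsilon \}\right) \; = \; 1 - W_{\rho_\varepsilon}\bigl(P_t\delta_x, P_t\delta_y\bigr),
$$
so that local weak irreducibility amounts to $\sup_{x,y\in B_R(x_0)} W_{\rho_\varepsilon}(P_t\delta_x, P_t\delta_y) < 1$ for all $t\geqslant T(R,\varepsilon)$.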
Our second main result provides a sufficient condition for local weak irreducibility in terms of generalized couplings. Consider the following assumption, which is the same as \cite[Assumption B2]{BKS}.
\begin{Assumption}{\textbf{A2}}\label{A:2} There exist a set $B\subseteq E$, a function $R:\mathbb{R}_+\to\mathbb{R}_+$ with \mbox{$\lim_{t\to\infty} R(t)=0$}, and $\varepsilon>0$ such that for any $x,y \in B$ and $t\ge0$, there exist $E$-valued random variables $Y^{x,y}_t$ and $Z^{x,y}_t$ on a common probability space with the following properties \begin{enumerate}
\item $\Law(Y^{x,y}_t)=P_t(y,\cdot)$ and \begin{equation*} d_{TV}(\Law(Z^{x,y}_t), P_t(x,\cdot)) \; \leqslant \; 1-\varepsilon,\quad t\ge0. \end{equation*} \item $\hskip.15ex\mathsf{E}\hskip.10ex d(Y^{x,y}_t, Z^{x,y}_t)\; \leqslant\; R(t), \quad t\ge0$.
\end{enumerate} \end{Assumption}
\begin{Theorem}\label{Thm:2.3} If there exists $x_0 \in E$ such that for all $M>0$ the semigroup $(P_t)_{t\in \mathbb{R}_+}$ satisfies Assumption~\ref{A:2} for the set $B:=B_M(x_0)$ (with some $\varepsilon=\varepsilon(M)>0$), then $(P_t)_{t\in \mathbb{R}_+}$ is locally weak irreducible. \end{Theorem}
Finally, the following theorem illustrates the use of these notions.
\begin{Theorem}\label{Thm:2.4}
If $(P_t)_{t\in \mathbb{R}_+}$ is locally weak irreducible and satisfies ASF+ with a non-decreasing function $F:\mathbb{R}_+ \to [0,\infty)$ such that $\|F\|_\infty <\infty$, then $(P_t)_{t\in \mathbb{R}_+}$ possesses at most one invariant probability measure. \end{Theorem}
Recall that it was shown in \cite[Theorem 2.5]{HM08} that \textbf{global} weak irreducibility and ASF+ with $\norm{F}_\infty <\infty$ additionally imply existence of an invariant measure and exponential ergodicity. Theorem~\ref{Thm:2.4} shows that if the semigroup satisfies \textbf{local} rather than \textbf{global} weak irreducibility, then uniqueness of invariant measure is guaranteed. This result is optimal in the following sense: the given assumptions do not guarantee existence of an invariant probability measure (see Example \ref{Ex:2}); furthermore the requirement $\norm{F}_\infty <\infty$ cannot be dropped (see Example \ref{Ex:1}).
In addition, we note that Theorem~\ref{Thm:2.4} complements \cite[Corollary 3.17]{HM}. The latter shows unique ergodicity provided that the semigroup satisfies a stronger condition than local weak irreducibility and a weaker condition than ASF+ with finite \,$\norm{F}_\infty$.
\begin{Example} \label{Ex:1} This example shows that local weak irreducibility together with ASF+ without the requirement $\norm{F}_\infty <\infty$ does not guarantee unique ergodicity (in particular, ASF together with local weak irreducibility is insufficient for unique ergodicity).
Fix $\xi \in (0,2^{-1})$ and consider the state space $E:=\mathbb{N} \cup \{n+ \frac{\xi}{n}\mid n \in \mathbb{N}\} $ equipped with the standard Euclidean distance. Consider the Markov transition function $(P_t)_{t \in \mathbb{Z}_+}$ induced by \begin{align*} &P_1(x,A)=
\begin{cases} \frac{1}{2} \delta_{1}(A) + \frac{1}{2} \delta_{x+1}(A),& \quad \text{ if } x \in \mathbb{N} \\[1ex]
\frac{1}{2} \delta_{1+ \xi }(A) + \frac{1}{2} \delta_{\lfloor x \rfloor+1+ \frac{\xi}{\lfloor x \rfloor+1}}(A),& \quad \text{ if } x \in \{n+
\frac{\xi}{n} \mid n \in \mathbb{N}\},
\end{cases} \end{align*} where $A \in 2^E= \mathcal{B}(E)$. In other words, from any positive integer the process goes to the next integer with probability $\frac{1}{2}$ and to $1$ with probability $\frac{1}{2}$. Similarly, from any shifted positive integer, it moves to the next shifted integer with probability $\frac{1}{2}$ and to $1+ \xi$ with probability $\frac{1}{2}$.
Clearly, this dynamic has two invariant probability measures: one sits on the integers and the other sits on the shifted integers: $$ \mu_1 := \sum_{i=1}^\infty \frac{1}{2^i} \delta_i \qquad \text{ and } \qquad \mu_2 := \sum_{i=1}^\infty \frac{1}{2^i} \delta_{i+ \frac{\xi}{i}}. $$
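Invariance can be verified directly; for instance, for $\mu_1$ one has
$$
\mu_1 P_1 \; = \; \sum_{i=1}^\infty \frac{1}{2^i} \Bigl( \frac{1}{2} \delta_1 + \frac{1}{2} \delta_{i+1} \Bigr) \; = \; \frac{1}{2} \delta_1 + \sum_{i=2}^\infty \frac{1}{2^i}\, \delta_i \; = \; \mu_1,
$$
and the computation for $\mu_2$ is analogous.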
To see that the semigroup has the local weak irreducibility property \eqref{LWI}, fix arbitrary $\varepsilon>0$, $x,y \in E$. Choose $K \in \mathbb{N}$ such that $\xi/K< \varepsilon$ and define $A_K:= \{K, K+\xi/K\}$. Then, obviously $A_K\times A_K \subseteq \{(x',y') \in E \times E\colon |x'-y'| \leqslant \varepsilon\}$ and we have \begin{align*} \sup_{\Gamma \, \in \, \mathscr{C}(P_{K+1}\delta_x, P_{K+1}\delta_y)} \Gamma \left(\{(x',y') \in E \times E\, \colon \,d(x',y')\leqslant \varepsilon \}\right) \; &\geqslant \; \sup_{\Gamma \, \in \, \mathscr{C}(P_{K+1}\delta_x, P_{K+1}\delta_y)} \; \Gamma \left(A_K \times A_K \right) \\ &\geqslant \; P_{K+1}(x, A_K) \; P_{K+1}(y, A_K) \\ &\geqslant \; \frac{1}{2^{2K+2}} \; > \; 0. \end{align*} Hence, bound \eqref{LWI} holds for arbitrary $x,y \in E$, and thus this semigroup is locally weak irreducible.
Finally, let us show now that $(P_t)_{t\in \mathbb{Z}_+}$ satisfies ASF+. Obviously the semigroup is strong Feller, however by Remark \ref{rem:2.4} this only implies that $(P_t)_{t\in \mathbb{Z}_+}$ is ASF rather than ASF+; therefore we have to check the ASF+ property directly. Choose $t_n:=1$ and $\delta_n:=0$, $n\in\mathbb{N}$ and $x_0:=1$. Take any function $\varphi: E \to \mathbb{R}$ which is Lipschitz with constant $K$. Then, clearly, \begin{equation} \label{step1asfex}
|P_1\varphi(x) - P_1\varphi(y) | \; \leqslant \; 2\norm{\varphi}_{\infty}, \quad x,y\in E. \end{equation} It is easy to see that if $x\neq y$, then \begin{equation*}
|x-y| \; \geqslant \; \frac{\xi}{x\vee y} \; = \; \frac{\xi}{((x-1)\vee (y-1))+1}. \end{equation*} Combining this with \eqref{step1asfex}, we get $$
| P_1\varphi(x) - P_1\varphi(y) | \; \leqslant \; 2\norm{\varphi}_{\infty} \; \leqslant \; 2 |x-y| \; \frac{(|x-1|\vee |y-1|)+1}{\xi} \; \norm{\varphi}_{\infty}. $$ Thus, inequality \eqref{eq:ASF+} holds with $F(u):=\frac{2(u+1)}{\xi}$, $u\ge0$. Therefore, this process is locally weak irreducible and satisfies ASF+ (and is even strong Feller), yet it has two invariant probability measures. \end{Example}
\begin{Example} \label{Ex:2} This example shows that the assumptions of Theorem~\ref{Thm:2.4} do not guarantee existence of an invariant probability measure.
Let $E=\mathbb{R}$ and consider a Markov semigroup corresponding to the standard Brownian motion $(W_t)_{t \in \mathbb{R}_+}$ on some probability space $(\Omega, \mathcal{F}, \mathsf{P})$, i.e. $P_t(x,\cdot)=\operatorname{Law}(x+W_t)$ for any $x \in \mathbb{R}$, $t\geqslant 0$. This semigroup has the local weak irreducibility property as well as ASF+ with bounded $F$.
To establish ASF+ with bounded $F$, set $t_n=1$, $\delta_n=0$, $n \in \mathbb{N}$ and without loss of generality let $\varphi: \mathbb{R} \to \mathbb{R}$ be bounded. Let $p_t$ be the density of a Gaussian random variable with mean $0$ and variance $t>0$. Then, applying \cite[Bound (2.4e)]{Ros87}, we deduce that there exist positive constants $M,\alpha$ such that for any $x,y\in\mathbb{R}$ one has \begin{align*}
|P_1\varphi(x) - P_1\varphi(y) |& \; \leqslant \; \norm{\varphi}_{\infty} \; \int_\mathbb{R}\, \left| p_1(z-x) -p_1(z-y) \right|\; \diff z\\
& \; \leqslant \; M \; |x-y|\; \norm{\varphi}_{\infty} \; \int_\mathbb{R} \, (e^{- \alpha(z-x)^2} + e^{- \alpha (z-y)^2})\;\diff z\\
& \; = \; 2 M \; \sqrt{\frac{\pi}{\alpha}}\; |x-y| \; \norm{\varphi}_{\infty}. \end{align*} Hence, the semigroup satisfies ASF+ with the constant function $F:= 2M \sqrt{\frac{\pi}{\alpha}}$.
Now let us show local weak irreducibility. Let $x_0=0$, $T=1$. Choose any $R>0$, $\varepsilon>0$, $t\ge1$. Let $\Gamma^{x,y}:= P_t\delta_x \otimes P_t\delta_y$ be the independent coupling of $P_t\delta_x$ and $P_t\delta_y$ for any $x,y \in B_R(0)$. Then, \begin{align*}
\Gamma^{x,y}\left( \left\{ x',y' \in \mathbb{R}\colon |x'-y'| \; \leqslant \; \varepsilon \right\}\right) \; &\geqslant \;
P_t\left( x , \left[ -\varepsilon/2, \varepsilon/2 \right] \right) P_t\left( y , \left[ -\varepsilon/2, \varepsilon/2 \right] \right) \\ & = \; \int_{-\varepsilon/2}^{\varepsilon/2} p_t(x-z)\,\diff z \; \int_{-\varepsilon/2}^{\varepsilon/2} p_t(y-z)\, \diff z \end{align*} for every $x,y \in B_{R}(0)$. However, for every $x \in B_{R}(0)$, $z \in [-\varepsilon/2, \varepsilon/2]$, we obviously have $$ p_t(x-z) \; \geqslant \; \frac{1}{\sqrt{2\pi t}} \, \exp \Bigl\{ - \frac{( R+\varepsilon/2)^2}{2t} \Bigr\} =: \lambda \; > \; 0 $$ and thus, \begin{equation*}
\Gamma^{x,y}\left( \left\{ x',y' \in \mathbb{R}\colon \; |x'-y'|\leqslant \varepsilon \right\}\right) \; \geqslant \; \lambda^2 \varepsilon^2 \; > \; 0 \end{equation*} for every $x,y \in B_{R}(0)$, which implies local weak irreducibility. Hence, the assumptions of Theorem \ref{Thm:2.4} are satisfied. On the other hand, as it is well-known, the semigroup $(P_t)_{t \in \mathbb{R}_+}$ has no invariant probability measure. \end{Example}
Finally, let us present an example showing that the strong Feller property does not imply ASF+.
\begin{Example}\label{ex:2} Let $E:=[0,3]$ be endowed with the Euclidean distance and let $\zeta$ be a random variable uniformly distributed on $[0,1/3]$. Define a Markov transition function $(P_n)_{n\in\mathbb{Z}_+}$ on $([0,3], \mathcal{B}([0,3]))$ by $$ P_1(x, \cdot) := \begin{cases} \Law\left( 2- \sqrt{x} + \zeta \right), \qquad \quad x \in [0,1]; \\ \Law\left( \frac{2}{3} + \frac{x}{3} + \zeta \right), \qquad \quad \; \; x \in [1,3]. \end{cases} $$
We claim that this semigroup is strong Feller but does not satisfy the ASF+ property.
We start by showing the strong Feller property. Let $\varphi: [0,3] \to \mathbb{R}$ be a bounded measurable function. Our goal is to show that the function $P_1\varphi$ is continuous. To this end, let $x \in [0,3]$ and $(x_n)_{n\in\mathbb{Z}_+} \subseteq [0,3]$ be a sequence converging to $x$. We need to show \begin{equation}\label{contbound}
| P_1\varphi(x) - P_1\varphi(x_n) | \; \to \; 0, \quad\text{as $n\to\infty$}. \end{equation}
Note that for any $y_1, y_2 \in [0,1]$ with $y_1 \leqslant y_2$, we have \begin{align*}
d_{TV}( P_1 \delta_{y_1} , P_1 \delta_{y_2} ) \; &= \; \frac32 \int_{\mathbb{R}} \, \bigl| \I_{[2-\sqrt{y_1}, 2-\sqrt{y_1}+\frac{1}{3}]}(z) - \I_{[2-\sqrt{y_2} , 2-\sqrt{y_2}+\frac{1}{3}]}(z) \bigr|\;\diff z\\[1ex] &\leqslant \; 3(\sqrt{y_2} - \sqrt{y_1}). \end{align*} Similarly, in the case where $y_1, y_2 \in [1,3]$, one gets $d_{TV}( P_1 \delta_{y_1} , P_1 \delta_{y_2}) \leqslant y_2-y_1$. This implies, that \begin{equation*} \lim_{n\to\infty} \; d_{TV}(P_1(x,\cdot),P_1(x_n,\cdot))\;=\;0, \end{equation*} which, in turn, yields \eqref{contbound} and establishes the strong Feller property for this semigroup.
Now let us show that ASF+ fails for this process. Assume for the sake of a contradiction that ASF+ holds, i.e., that there exist a non-decreasing sequence $(t_n)_{n\in\mathbb{Z}_+} \subseteq \mathbb{Z}_+$, a sequence $(\delta_n)_{n\in \mathbb{Z}_+} \subseteq \mathbb{R}_+$ with $\delta_n \searrow 0$ as $n\to \infty$, and a non-decreasing function $F:[0, \infty) \to [0, \infty)$ such that for every Lipschitz continuous $\varphi: [0,3] \to \mathbb{R}$ with Lipschitz constant $K>0$ we have $$
\left| P_{t_n} \varphi (x) - P_{t_n} \varphi (y) \right| \; \leqslant \; |x-y| \; F(|x| \vee |y|) \; (\, \norm{\varphi}_\infty + \delta_n K),\quad x,y \in [0,3],\, n\in \mathbb{N}. $$ Taking in the above inequality $n=1$, $x=0$ and $\varphi(z)= z$, we obtain in particular \begin{equation}\label{eq:11}
\left|P_{t_1}\varphi(0) - P_{t_1}\varphi(y)\right| \; \leqslant \; y\, F(1) \, (3 + \delta_1), \quad y \in [0,1]. \end{equation} By definition of the semigroup $(P_n)_{n\in\mathbb{N}}$, we have for every $n\in\mathbb{Z}_+$ $$ P_n\varphi(u) = \frac{5}{4}(1 - 3^{-n}) + 3^{-n}u,\quad u\in[1,3]. $$ Thus we derive for every $z \in [0,1]$
$$ P_{n} \varphi (z) \; = \; P_1(P_{n-1} \varphi)(z) \; = \; \hskip.15ex\mathsf{E}\hskip.10ex [P_{n-1}\varphi(2- \sqrt{z}+ \zeta) ] \; = \; \frac54(1- 3^{-n+1}) + \frac{2- \sqrt{z} + 1/6}{3^{n-1}}, $$ where we took into account that for $z \in [0,1]$ we have $2-\sqrt{z} + \zeta \in (1,3]$ almost surely.
Therefore, we finally get $$
\left| P_{n} \varphi (0) - P_{n} \varphi (y) \right| \; = \; 3^{-n+1}\sqrt{y} $$ for every $y\in [0,1]$ and $n\in \mathbb{N}$. Combining this with \eqref{eq:11}, we derive that the following should hold for every $y \in [0,1]$: $$ 3^{-t_1+1}\sqrt{y} \; \leqslant \; y \, F(1) \, (3+ \delta_1). $$ However, this is impossible: dividing both sides by $\sqrt{y}$, the left-hand side stays constant while the right-hand side tends to $0$ as $y \to 0$. Therefore this process does not satisfy the ASF+ condition. \end{Example}
\section{Proofs}
\begin{proof}[Proof of Theorem \ref{Thm:2.2}] We begin with the following calculation. Let $t\ge0$, $x,y\in E$. Let $\varphi\colon E \to \mathbb{R}$ be a $d$-Lipschitz continuous function with constant $K>0$. Denote $P_t^Z:= \Law(Z^{x,y}_t)$. Then
\begin{align}\label{boundth25}
&|P_{t}\varphi(x) - P_{t}\varphi(y)| \nonumber\\
&\qquad \leqslant \; \left| \int_E\, \varphi(z) \; P_{t}(x,\diff z) -\int_E\, \varphi(z) \; P_t^Z(\diff z)\right| \; + \; \left| \int_E \, \varphi(z)\; P_{t}^Z(\diff z) - \int_E \, \varphi(z) \; P_{t}(y,\diff z)\right| \nonumber\\ &\qquad \leqslant \; 2\norm{\varphi}_{\infty} \; d_{TV} \left(\Law(Z^{x,y}_{t}) , P_{t}(x,\cdot)\right)\; +\;
\int_\Omega \, |\varphi(Z^{x,y}_{t}) - \varphi(Y^{x,y}_{t})| \; \diff \mathsf{P}. \end{align} Using part 1 of Assumption \ref{A:1}, we bound the first term in the right--hand side of~\eqref{boundth25}: \begin{equation}\label{step1bound} 2\norm{\varphi}_\infty \; d_{TV} \left(\Law(Z^{x,y}_{t}) , P_{t}(x,\cdot)\right) \; \leqslant \; 2F_1(d(x,x_0) \vee d(y,x_0)) \; \norm{\varphi}_\infty \; d(x,y). \end{equation} Applying part 2 of Assumption \ref{A:1}, we obtain the following bound on the second term in the right--hand side of \eqref{boundth25}: \begin{equation*}
\int_\Omega \, |\varphi(Z^{x,y}_{t}) - \varphi(Y^{x,y}_{t})| \; \diff \mathsf{P} \; \leqslant \; K \, \hskip.15ex\mathsf{E}\hskip.10ex d(Z^{x,y}_{t},Y^{x,y}_{t} ) \; \leqslant \; F_2( d(x,x_0) \vee d(y,x_0))\; r(t) \, K \, d(x,y). \end{equation*} Hence, combining this with \eqref{step1bound} and \eqref{boundth25}, we get \begin{equation}\label{eq:3.1}
| P_{t}\varphi(x) - P_{t}\varphi(y)| \; \leqslant \; F_3( d(x,x_0) \vee d(y,x_0)) \; d(x,y) \; ( \, \norm{\varphi}_\infty + r(t)K), \end{equation} where we denoted $F_3(z):=2F_1(z)+F_2(z)$, $z\ge0$.
Now let us show that $(P_t)_{t\in \mathbb{R}_+}$ satisfies the ASF+ property. First, let us show that this semigroup is Feller. Fix $t>0$, $x\in E$. Let $(x_n)_{n\in\mathbb{Z}_+}$ be a sequence of elements in $E$ converging to $x$. Then, using \eqref{eq:3.1}, we obtain for any bounded $d$-Lipschitz function $\varphi\colon E \to \mathbb{R}$ with Lipschitz constant $K>0$ \begin{equation*}
| P_t \varphi(x) - P_t\varphi(x_n) | \; \leqslant \; F_3( d(x,x_0) \vee d(x_n,x_0)) \; d(x,x_n) \; ( \, \norm{\varphi}_\infty + r(t)K). \end{equation*}
Therefore $| P_t \varphi(x) - P_t\varphi(x_n) |\to 0$ as $n\to\infty$. This, together with the Portmanteau theorem (see, e.g., \cite[Lemma~3.7.2]{Shi16}), implies that $P_t(x_n,\cdot)$ converges weakly to $P_t(x,\cdot)$ as $n\to\infty$. Thus, the semigroup $(P_t)_{t\in\mathbb{R}_+}$ is Feller.
Second, let us verify bound \eqref{eq:ASF+}. For $n\in\mathbb{N}$ put $t_n:=n$, $\delta_n:=r(n)$. We now claim that \eqref{eq:ASF+} holds for the sequences $(t_n)$, $(\delta_n)$, and the function $F:=F_3$ defined above. Indeed, let $\varphi: E \to \mathbb{R}$ be a $d$-Lipschitz continuous function with constant $K>0$ and $x,y\in E$. Then, applying \eqref{eq:3.1}, we derive for any $n\in\mathbb{N}$ \begin{align*}
| P_{t_n}\varphi(x) - P_{t_n}\varphi(y)|\; &= \; | P_{n}\varphi(x) - P_{n}\varphi(y)|\\[1ex] &\leqslant \; F_3( d(x,x_0) \vee d(y,x_0)) \; d(x,y)\; ( \, \norm{\varphi}_\infty + \delta_n K), \end{align*} which is \eqref{eq:ASF+}. Therefore all the conditions of Definition~\ref{Def:ASF+} are met and thus the semigroup $(P_t)$ satisfies the ASF+ property. \end{proof}
As a helpful tool in the sequel, we would like to recall the following \textit{Gluing Lemma}. \begin{Proposition}[{\cite[p. 23-24]{V08}}] \ \label{Gluing Lemma} Let $\mu_i$, $i=1,2,3$, be probability measures on a Polish space $E$. If $(X_1, X_2)$ is a coupling of $\mu_1, \mu_2$ and $(Y_2, Y_3)$ is a coupling of $\mu_2, \mu_3$, then there exists a triple of random variables $(V_1, V_2, V_3)$ such that $(V_1,V_2)$ has the same law as $(X_1, X_2)$ and $(V_2, V_3)$ has the same law as $(Y_2, Y_3)$. \end{Proposition}
\begin{proof}[Proof of Theorem \ref{Thm:2.3}] Fix $M>0$, $\delta>0$ and, using that $\lim_{t\to\infty}R(t)=0$, choose $T\!:=T(\delta)\geqslant 0$ such that \mbox{$\delta^{-1} R(t) < \frac{\varepsilon}{2}$} for all $t\geqslant T$. Let $t\geqslant T$. Then, using the Markov inequality and part 2 of Assumption~\ref{A:2}, we get for any $x,y\in B_M(x_0)$ $$ \mathsf{P}\left( d(Z_t^{x,y} ,Y_t^{x,y}) > \delta \right) \; \leqslant \; \frac{\hskip.15ex\mathsf{E}\hskip.10ex d(Z_t^{x,y} ,Y_t^{x,y})}{\delta} \; \leqslant \; \frac{R(t)}{\delta} \; < \; \frac{\varepsilon}{2}, $$ which is equivalent to \begin{equation}\label{eq:14} \mathsf{P}\left( d(Z_t^{x,y} ,Y_t^{x,y}) \leqslant \delta \right) \; \geqslant \; 1- \frac{\varepsilon}{2}. \end{equation} By part 1 of Assumption \ref{A:2} and the definition of the total variation distance, there exist random variables $\widetilde{X}^{x,y}_t, \widetilde{Z}_t^{x,y}$ such that $$ d_{TV} \left( P_t(x,\cdot) , \Law(Z^{x,y}_t)\right) \; = \; \mathsf{P}\left(\widetilde{X}^{x,y}_t \neq \widetilde{Z}_t^{x,y} \right) \; \leqslant \; 1- \varepsilon $$ and therefore \begin{equation}\label{eq:15}
\mathsf{P}\left(\widetilde{X}^{x,y}_t = \widetilde{Z}_t^{x,y} \right) \; \geqslant \; \varepsilon. \end{equation} Now, according to Proposition~\ref{Gluing Lemma}, there exist random variables $V^{X}_t,V^{Z}_t, V^{Y}_t$ on a probability space $(\Omega, \mathcal{F}, \mathsf{P})$ such that $(V^{X}_t, V^{Z}_t)$ has the same law as $(\widetilde{X}_t^{x,y}, \widetilde{Z}_t^{x,y})$ and $(V^{Z}_t, V^Y_t)$ has the same law as $(Z^{x,y}_t, Y_t^{x,y})$. Therefore, using also the fact that for any measurable sets $A, B \in \mathcal{F}$ one has $\mathsf{P}(A\cap B) \geqslant \mathsf{P}(A)+\mathsf{P}(B)-1$, we deduce \begin{align*} \mathsf{P}\left(d(V^X_t ,V^Y_t) \leqslant \delta \right) \; &\geqslant \; \mathsf{P}\left( \{V_t^X =V_t^Z\} \cap \{d(V^Z_t ,V^Y_t) \leqslant \delta\}\right)\\ &\geqslant\; \mathsf{P}(V_t^X =V_t^Z)\; +\; \mathsf{P}(d(V^Z_t ,V^Y_t) \leqslant \delta)-1\\ &= \; \mathsf{P}(\widetilde{X}_t^{x,y} =\widetilde{Z}_t^{x,y})\; + \mathsf{P}(d(Z_t^{x,y} ,Y_t^{x,y}) \leqslant \delta)-1\\ &\geqslant\; \varepsilon/2, \end{align*} where the last inequality follows from \eqref{eq:14} and \eqref{eq:15}.
Since $(V_t^X, V_t^Y)$ is a coupling of $P_t\delta_x$ and $P_t\delta_y$, we finally obtain $$ \frac{\varepsilon}{2} \; \leqslant \; \mathsf{P}\left( d(V^X_t ,V^Y_t) \leqslant \delta \right) \; \leqslant \; \sup_{\Gamma \, \in \, \mathscr{C}(P_t\delta_x, P_t\delta_y)} \Gamma \left( \left\{ (x', y') \in E \times E \colon d(x',y') \leqslant \delta \right\} \right). $$ Therefore bound \eqref{LWI} holds and the semigroup $(P_t)$ is locally weak irreducible. \end{proof}
The next proposition, established in \cite[Theorem 2.1]{HM11}, will help us to derive the uniqueness result from local weak irreducibility and ASF+ with bounded $F$. In \cite{HM11} the result is stated in a Hilbert space setting; however, the proof carries over by simply replacing norms with the metric distance to our reference point $x_0$.
\begin{Proposition}[{\cite[Theorem 2.1]{HM11}}] \label{Lem} Let $(P_t)_{t \in \mathbb{R}_+}$ be a Markov semigroup satisfying ASF+. Let $\mu_1, \mu_2$ be two distinct ergodic invariant probability measures for $(P_t)_{t \in \mathbb{R}_+}$. Then, for any pair of points $(w_1, w_2) \in \operatorname{supp}(\mu_1) \times \operatorname{supp}(\mu_2)$ one has $$ d(w_1,w_2)\; \geqslant \; \frac{1}{F(d(w_1,x_0) \vee d(w_2,x_0))}. $$ \end{Proposition}
Note that the above proposition tells us that not only are the supports of two different ergodic invariant measures under ASF+ disjoint (which is already the case under ASF -- see \cite[Theorem 3.16]{HM}) but they are even separated by a distance depending on the function $F$.
To present the proof of Theorem~\ref{Thm:2.4}, we need a couple of auxiliary statements. \begin{Lemma} \label{Lem:1} Let $(P_t)_{t\in \mathbb{R}_+}$ be a Feller semigroup and $x \in E$. Then for every $t >0$ and $A\subseteq E$ open such that $P_t(x,A)>0$, there exists an open set $B:=B(x,t,A) \subseteq E$ containing $x$ such that \begin{equation*} \inf_{z \in B} \; P_t(z, A) \; > \; 0. \end{equation*} \end{Lemma} \begin{proof} Assume for the sake of a contradiction that there exists $(z_n)_{n\in\mathbb{N}} \subseteq E$ such that $z_n \to x$ as $n\to \infty$ and $\lim_{n\to\infty}P_t(z_n,A)=0$. Since $(P_t)_{t\in \mathbb{R}_+}$ is Feller, $P_t(z_n,\cdot)$ converges weakly to $P_t(x,\cdot)$. According to Portmanteau's theorem \cite[Theorem III.1.1.III]{Shi16} this implies $$ 0 \; = \; \lim\limits_{n \to \infty} P_t(z_n, A) \; \geqslant \; P_t(x,A) \; > \; 0, $$ which yields a contradiction. \end{proof}
\begin{Lemma}\label{Lem:2} Let $\mu \in \mathcal{P}(E)$ and $A \in \mathcal{E}$ such that $\mu(A)>0$. Then, $\overline{A}\, \cap \, \operatorname{supp}(\mu) \neq \emptyset$. \end{Lemma} \begin{proof} Since $\mu(A)>0$ and the space $E$ is separable, we see that there exists $z_1 \in A$ such that \mbox{$\mu(A \cap B_{2^{-1}}(z_1)) >0$}. Now, let us inductively choose a sequence $(z_n)_{n\in\mathbb{N}} \subseteq A$ as follows. Due to separability of the space and $\mu(A \cap B_{2^{-n}}(z_n)) >0$, choose $z_{n+1} \in A \cap B_{2^{-n}}(z_n)$ such that $\mu(A \cap B_{2^{-n-1}}(z_{n+1})) >0$. Thus, for every $m\geqslant n$ we have $$ d(z_m, z_n) \; \leqslant \; \sum_{k=n}^{\infty} 2^{-k} \; = \; 2^{-n+1}, $$ implying that $(z_n)_{n\in\mathbb{N}}$ is a Cauchy sequence. By the completeness of the space, there exists $z \in \overline{A}$ such that $z_n \to z$ as $n\to \infty$. Furthermore, $z \in \operatorname{supp}(\mu)$, since for every $\varepsilon >0$ there exists $n \in \mathbb{N}$ such that $B_{2^{-n}}(z_n) \subseteq B_\varepsilon(z)$ and by monotonicity, $$ \mu(B_\varepsilon(z)) \; \geqslant \; \mu(B_{2^{-n}}(z_n)) \; \geqslant \; \mu(A \cap B_{2^{-n}}(z_n))\; > \; 0. $$ Hence, $z \in \overline{A} \cap \operatorname{supp}(\mu)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:2.4}] This proof is inspired by some ideas from \cite[proof of Corollary~1.4]{HM11}. Assume for the sake of a contradiction that there exist two distinct invariant probability measures $\mu_1, \mu_2 \in \mathcal{P}(E)$. Since every invariant probability measure can be written as a convex combination of ergodic measures (see, e.g., \cite[p. 670]{HM11}), without loss of generality, we may assume $\mu_1, \mu_2$ to be ergodic.
Denote $M:=\|F\|_{\infty}<\infty$. For $i=1,2$, choose $u_i \in E$ such that $u_i \in \operatorname{supp}(\mu_i)$. Set $R:= d(u_1,x_0) \vee d(u_2,x_0)$, where $x_0$ is defined in \eqref{LWI}. Furthermore, let $\varepsilon>0$ be such that $M^{-1}>6\varepsilon$ and choose $T:=T(R, \varepsilon)$ as in (\ref{LWI}). Then, by the local weak irreducibility, there exists a coupling $\Gamma$ of $P_T\delta_{u_1}$, $P_T\delta_{u_2}$ with $$ \Gamma \left( \Delta_\varepsilon \right) \; > \; 0, $$ where $\Delta_\varepsilon:=\{(x,y) \in E\times E \colon \; d(x,y) \leqslant \varepsilon\}$. Denote by $\Delta:=\{(x,x) \in E\times E \colon \; x \in E\}$ the diagonal on the product space.
Due to the fact that $E$ is separable, there exists a countable set $Q \subseteq \Delta$ which lies dense in $\Delta$. Hence, $$ \bigcup_{q \in Q} B^{E\times E}_{2\varepsilon}(q) \; \supseteq \; \Delta_\varepsilon, $$ where $B^{E\times E}_\alpha((q_1,q_2)):=\{(x,y) \in E\times E\colon \; d(x,q_1)+d(y,q_2) < \alpha \}$ denotes the open ball in $E \times E$ of radius $\alpha$ around $q$ with respect to the sum of the distances in each coordinate. Therefore, there exists $\tilde{v}_\varepsilon=(v_\varepsilon,v_\varepsilon) \in Q\subseteq \Delta$ such that $\Gamma(B^{E \times E}_{2\varepsilon}(\tilde{v}_\varepsilon))>0$. Hence $$ P_T(u_i,B_{2\varepsilon}(v_\varepsilon)) \; = \; P_T\delta_{u_i}(B_{2\varepsilon}(v_\varepsilon)) \; \geqslant \; \Gamma(B^{E \times E}_{2\varepsilon}(\tilde{v}_\varepsilon)) \; > \; 0 $$ for every $i=1,2$. Thus, according to Lemma \ref{Lem:1}, there exists $\delta_\varepsilon>0$ such that \begin{equation}\label{eq:3.6} \inf_{z\, \in\, B_{\delta_\varepsilon}(u_i)} \; P_T(z,B_{2\varepsilon}(v_\varepsilon)) \; > \; 0 \end{equation} for every $i=1,2$. Additionally, since $u_i \in \operatorname{supp}(\mu_i)$, we have \begin{equation} \label{eq:3.7} \mu_i \left( B_{\delta_\varepsilon}(u_i) \right) \; > \; 0 \end{equation} for every $i=1,2$. Combining (\ref{eq:3.6}) and (\ref{eq:3.7}) and using the fact that $\mu_1, \mu_2$ are invariant for $(P_t)_{t\in \mathbb{R}_+}$, we obtain \begin{align*} \mu_i \left( B_{2 \varepsilon}(v_\varepsilon)\right) \; = \; \int_E \, P_T(z, B_{2\varepsilon}(v_\varepsilon)) \;\mu_i(\diff z) &\geqslant \; \int_{B_{\delta_\varepsilon}(u_i)} \, P_T(z, B_{2\varepsilon}(v_\varepsilon))\; \mu_i(\diff z)\\ & \geqslant\; \mu_i\left( B_{\delta_\varepsilon}(u_i)\right) \; \inf_{z \, \in \, B_{\delta_\varepsilon}(u_i)} \, P_T(z,B_{2\varepsilon}(v_\varepsilon)) \; > \; 0.
\end{align*} Since, by monotonicity, $\mu_i \left( B_{3 \varepsilon}(v_\varepsilon)\right)>0$, Lemma~\ref{Lem:2} directly implies that $$ B_{3\varepsilon}(v_\varepsilon) \, \cap \, \operatorname{supp}(\mu_i) \; \neq \; \emptyset $$ for $i=1,2$. This means that there exists $(w_1, w_2) \in \operatorname{supp}(\mu_1)\times \operatorname{supp}(\mu_2)$ such that $w_1, w_2 \in B_{3\varepsilon}(v_\varepsilon)$. Thus, we finally get $$ \frac{1}{M} \; \leqslant \; d(w_1 ,w_2) \; \leqslant \; 6\varepsilon \; < \; \frac{1}{M}, $$ where the first inequality follows from Lemma \ref{Lem}, yielding the desired contradiction. \end{proof}
\end{document}
\begin{document}
\begin{flushleft}
{\Large\bf Density of the spectrum of Jacobi matrices}
\\[1.5mm]
{\Large\bf with power asymptotics}
\\[4mm]
\textsc{Raphael Pruckner \, \
\hspace*{-19pt}
\renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}}
\footnote{[email protected]}
\setcounter{footnote}{0}
}
\\
{\footnotesize
\vspace*{4mm}
Institute for Analysis and Scientific Computing, Vienna University of Technology\\
Wiedner Hauptstra{\ss}e\ 8--10/101, 1040 Wien, AUSTRIA
}
\end{flushleft}
\vspace*{0mm}
{\small \noindent
\textbf{Abstract:}
We consider Jacobi matrices $J$
whose parameters have the power asymptotics
$\rho_n=n^{\beta_1} \left( x_0 + \frac{x_1}{n} + {\rm O}(n^{-1-\epsilon})\right)$
and
$q_n=n^{\beta_2} \left( y_0 + \frac{y_1}{n} + {\rm O}(n^{-1-\epsilon})\right)$ for the off-diagonal and
diagonal, respectively.
We show that for $\beta_1 > \beta_2$,
or $\beta_1=\beta_2$ and $2x_0 > |y_0|$,
the matrix $J$ is in the limit circle case and
the convergence exponent of its spectrum is $1/\beta_1$.
Moreover, we obtain upper and lower bounds for the upper density of the spectrum.
When the parameters of the matrix $J$ have
a power asymptotic with one more term,
we characterise the occurrence of the limit circle case completely
(including the exceptional case $\lim_{n\to \infty} |q_n|\big/ \rho_n = 2$)
and determine the convergence exponent in almost all cases.
\\[3mm]
{\bf Keywords:} Jacobi matrix, Spectral analysis, Difference equation, growth of entire function, canonical system, Berezanski{\u\i}'s theorem
}
\\[1mm]
{\bf AMS MSC 2010:} 47B36, 34L20, 30D15
% Preliminary version Thu 27 Sep 2018
\pagenumbering{arabic} \setcounter{page}{1}
\section{Introduction}
A Jacobi matrix $J$ is a tridiagonal semi-infinite matrix \begin{equation*}
J=
\begin{pmatrix}
q_0 & \rho_0 && \\
\rho_0 & q_1 & \rho_1 & \\
& \rho_1 & q_2 & \smash{\ddots} & \\
&& \ddots & \ddots
\end{pmatrix} \end{equation*} with real $q_n$ and positive $\rho_n$. A Jacobi matrix induces a closed symmetric operator $T_J$ on $\ell^2(\bb N)$, namely as the closure of the natural action of $J$ on the subspace of finitely supported sequences, see, e.g.,\ \cite[Chapter~4.1]{akhiezer:1961}. There is the following alternative: \begin{itemize}
\item $T_J$ is selfadjoint; one speaks of the limit point case (lpc), or, in the language of \cite{akhiezer:1961}, type D.
\item $T_J$ has defect index $(1,1)$ and is entire in the sense of M.G.Kre{\u\i}n; one speaks of the limit circle case (lcc), or,
synonymously, type C. \end{itemize}
In the lpc, the spectrum of $T_J$ may be discrete, continuous, or be composed of different types.
If $J$ is in the lcc, then the spectrum of every canonical selfadjoint extension of $T_J$ is discrete, and each two spectra are interlacing. In this case, we fix one such extension and denote its spectrum by $\sigma(J)$.
In general it is difficult to decide from the parameters $\rho_n,q_n$ whether $J$ is in the lcc or lpc. Two classical necessary conditions for the occurrence of the lcc are known: Carleman's condition, which says that $\sum_{n=0}^\infty\rho_n^{-1}=\infty$ implies the lpc, cf.\ \cite{carleman:1926}, and Wouk's theorem, which says that a dominating diagonal in the sense that $\sup_{n\geq 0}(\rho_n+\rho_{n-1}-q_n)<\infty$ or $\sup_{n\geq 0}(\rho_n+\rho_{n-1}+q_n)<\infty$ implies the lpc, cf.\ \cite{wouk:1953}.
A more subtle result, which gives a sufficient condition for lcc, is due to Yu.M.Berezanski{\u\i}, cf.\ \cite[Theorem~4.3]{berezanskii:1956}, \cite[VII,Theorem~1.5]{berezanskii:1968} or \cite[Addenda~5.,p.26]{akhiezer:1961}: Assume that $\sum_{n=0}^\infty\rho_n^{-1}<\infty$, that the sequence $(q_n)_{n=0}^\infty$ of diagonal parameters is bounded,
and that the sequence $(\rho_n)_{n=0}^\infty$ of off-diagonal parameters behaves regularly in the sense that $\rho_n^2\geq\rho_{n+1}\rho_{n-1}$ (log-concavity).
Then $J$ is in the lcc. An extension and modern formulation of this result can be found in \cite[Theorem~4.2]{berg.szwarc:2014}. In particular, instead of $(q_n)_{n=0}^\infty$ being bounded it is enough to require $\sum_{n=0}^\infty |q_n|\big/\rho_n<\infty$.
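For instance (a purely illustrative example, not taken from the cited works), all three hypotheses are satisfied for $\rho_n=(n+1)^{\beta}$ with some $\beta>1$ and a bounded diagonal $(q_n)_{n=0}^\infty$: indeed, $\sum_{n=0}^\infty(n+1)^{-\beta}<\infty$, and \[ \frac{\rho_n^2}{\rho_{n+1}\rho_{n-1}} \; = \; \Big(\frac{(n+1)^2}{n(n+2)}\Big)^{\beta} \; \geq \; 1 , \] so such a matrix is in the lcc.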
There is a vast literature dealing with Jacobi matrices in the lpc, whose aim is to establish discreteness of the spectrum and investigate spectral asymptotics, e.g.,\ \cite{boutet.zielinski:2012,deift:1999,janas.malejki:2007,janas.naboko:2004,tur:2003}. Contrasting this, if $J$ is in the lcc, not much is known about the asymptotic behaviour of the spectrum.
Probably the first result in this direction is due to M.Riesz\ \cite{riesz:1923a}; it states that the spectrum of a Jacobi matrix in the lcc case is sparse compared to the integers, in the sense that ($\lambda_n^\pm$ denote the sequences of positive or negative points in $\sigma(J)$, arranged according to increasing modulus) \[
\lim_{n\to\infty}\frac n{\lambda_n^\pm}=0
. \] A deeper result holds in the context of
the already mentioned work of Berezanski{\u\i}. Under the mentioned assumptions, C.Berg and R.Szwarc showed that the convergence exponent $\rho(\sigma(J))$ of the spectrum coincides with the convergence exponent of the sequence $(\rho_n)_{n=0}^\infty$ of off-diagonal parameters of $J$, cf.\ \cite[Theorem~4.11]{berg.szwarc:2014}. Recall that the convergence exponent of a sequence $(x_n)_{n=0}^\infty$, which we denote by $\rho((x_n)_{n=0}^\infty)$, is defined as the greatest lower bound of all $\alpha>0$ such that $\sum_{n=0}^\infty x_n^{-\alpha} < \infty$.
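To illustrate this notion with a computation that is used repeatedly below: for a sequence with the power behaviour $x_n=(c+{\rm o}(1))\,n^{\beta}$, where $c,\beta>0$, one has \[ \sum_{n=1}^\infty x_n^{-\alpha} \; < \; \infty \quad\Longleftrightarrow\quad \alpha\beta>1 , \] and hence $\rho((x_n)_{n=0}^\infty)=1/\beta$.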
In this paper we contribute to the study of the spectrum of Jacobi matrices in the lcc. We investigate Jacobi matrices $J$ whose parameters have, for some $\epsilon>0$, power asymptotics \begin{equation} \label{P32}
\rho_n = n^{\beta_1} \Big( x_0 + \frac{x_1}{n} + {\rm O}\Big(\frac{1}{n^{1+\epsilon}}\Big)\Big)
, \quad
q_n = n^{\beta_2} \Big( y_0 + \frac{y_1}{n} + {\rm O}\Big(\frac{1}{n^{1+\epsilon}}\Big)\Big)
, \end{equation} with $x_0>0$, $y_0\neq 0$.
In our first theorem,
we show that, apart from the exceptional case that $\lim_{n\to \infty} |q_n|\big/\rho_n =2$, a characterisation of the lcc is possible.
Moreover, we give bounds for the upper density of the spectrum, in particular, determine the convergence exponent.
Our second theorem treats the exceptional case that $\lim_{n\to \infty} |q_n|/\rho_n =2$, or equivalently $\beta_1=\beta_2$ and $2x_0 = |y_0|$. Under a stronger assumption, we fully characterise the occurrence of the lcc and determine the convergence exponent of the spectrum in almost all cases.
\begin{theorem} \thlab{P1} Let $J$ be a Jacobi matrix with off-diagonal $(\rho_n)_{n=0}^\infty$ and diagonal $(q_n)_{n=0}^\infty$ which have the power asymptotics \eqref{P32}
with some $\epsilon>0$, $\beta_1,\beta_2 \in \bb R$, $x_0>0$, $y_0\neq 0$ and $x_1,y_1 \in \bb R$.
Consider the following two cases. \begin{enumerate}[$(i)$]
\item $\beta_1 \leq \beta_2$, and $2x_0<|y_0|$ if $\beta_1=\beta_2$.
In this case, $J$ is in the lpc.
\item $\beta_1 \geq \beta_2$, and $2x_0>|y_0|$ if $\beta_1=\beta_2$.
In this case, $J$ is in the lcc if and only if $\beta_1$ is greater than $1$. \end{enumerate} In the lcc, the convergence exponent of the spectrum is $1/\beta_1$. Moreover, we have the following bounds for the upper density of the spectrum, \begin{equation*}
\frac{\beta_1 - 1}{\beta_1}
\left(\frac{1}{x_0}\right)^{1/\beta_1}
\leq
\limsup_{r\to \infty} \frac{n_{\sigma} (r)}{r^{1/{\beta_1}}}
\leq
e\frac{\beta_1 }{\beta_1 - 1} \left( \frac{a}{x_0}\right)^{1/\beta_1}
, \end{equation*} where $n_\sigma(r):=\# (\sigma(J) \cap [-r,r])$ denotes the counting function, and \[
a:=\begin{cases}
1 &, \beta_1>\beta_2, \\
\Big(1-\frac{y_0^2}{4 x_0^2}\Big)^{-1/2} \ \ &, \beta_1=\beta_2
.
\end{cases} \] \end{theorem}
\begin{remark}
Note that case $(i)$ in \thref{P1} is equivalent to $\lim_{n\to \infty} |q_n|/\rho_n >2$, whereas case $(ii)$ corresponds to $\lim_{n\to \infty} |q_n|/\rho_n <2$. \end{remark}
\begin{remark} Having \eqref{P32} implies that $(\rho_n)_{n=0}^\infty$ is log-concave. Hence, if $\beta_1>\beta_2+1$, the above discussed extension of Berezanski{\u\i}'s theorem applies and yields $\rho(\sigma(J))=\rho((\rho_n)_{n=0}^\infty)$. Note that the convergence exponent of a sequence $(\rho_n)_{n=0}^\infty$ with \eqref{P32} is $1/\beta_1$.
For $\beta_1 > \beta_2+1$, \thref{P1} refines this result by providing explicit estimates for the upper density of the spectrum. However, the main significance is that the statement remains valid for $\beta_1>\beta_2$, and even in some cases where $\beta_1=\beta_2$, i.e., where diagonal and off-diagonal parameters are comparable. \end{remark}
\begin{center}
{ \footnotesize
\begin{tikzpicture}[x={(20pt,0pt)},y={(0pt,20pt)},scale=0.5]
\draw[-triangle 60,thick] (2,0.1) -- (2,12);
\draw[-triangle 60,thick] (0.4,2) -- (18.5,2);
\draw (1.5,1.5) node {$0$};
\draw (6.5,1.5) node {$1$};
\draw (18.7,1.2) node {$\beta_1$};
\draw (1.2,11) node {$\beta_2$};
\draw (7,0) -- (7,12);
\draw (2,2) -- (7,7);
\draw[dashed,ultra thick] (7,7) -- (12,12);
\draw[dashed] (7,2) -- (17,12);
\fill[pattern=dots,opacity=0.5]
(7,0) -- (7,7) -- (12,12) -- (18,12) -- (18,0);
\draw (12.1,12.45) node {$\beta_1=\beta_2$};
\draw (17.5,12.45) node {$\beta_1=\beta_2+1$};
\draw (12,7) node[fill=white, rounded corners=4pt] {\emph{lcc}};
\end{tikzpicture}
}
\end{center}
In order to handle the exceptional case, we require the stronger assumption that the parameters of the Jacobi matrix $J$ have, for some $\epsilon>0$, power asymptotics of the form \begin{equation} \label{P32E}
\rho_n = n^{\beta_1} \Big( x_0 + \frac{x_1}{n} + \frac{x_2}{n^2}+ {\rm O}\Big(\frac{1}{n^{2+\epsilon}}\Big)\Big)
,\quad
q_n = n^{\beta_2} \Big( y_0 + \frac{y_1}{n} + \frac{y_2}{n^2} + {\rm O}\Big(\frac{1}{n^{2+\epsilon}}\Big)\Big)
, \end{equation} with $x_0>0$ and $y_0\neq 0$.
\begin{theorem} \thlab{ADD1} Let $J$ be a Jacobi matrix with off-diagonal $(\rho_n)_{n=0}^\infty$ and diagonal $(q_n)_{n=0}^\infty$ which have the power asymptotics \eqref{P32E}
with some $\epsilon>0$, $\beta_1,\beta_2 \in \bb R$, $x_0>0$, $y_0\neq 0$ and $x_1,y_1,x_2,y_2 \in \bb R$.
Assume that $\beta_1=\beta_2=:\beta$ and $2x_0 = |y_0|$. Then, exactly one of the following cases takes place. \begin{enumerate}[$(i)$]
\item ${\beta\in \big(-\infty,\frac 3 2\big] \cup \big(\frac{2x_1}{x_0}-\frac{2y_1}{y_0}, \infty\big)}$.
In this case, $J$ is in the lpc.
\item $\beta\in \big(\tfrac 3 2, \frac{2x_1}{x_0}-\frac{2y_1}{y_0}\big)$.
In this case, $J$ is in the lcc. Regarding the convergence exponent of the spectrum, we have
\begin{equation} \label{P27}
\rho(\sigma(J))
\begin{cases}
\in [\frac{1}{\beta},\frac{1}{2(\beta-1)}] &, \frac{3}{2}< \beta < 2, \\
= \frac{1}{\beta} &, \beta \geq 2.
\end{cases}
\end{equation}
\item $\beta = \frac{2x_1}{x_0}-\frac{2y_1}{y_0}>\tfrac 3 2$.
In this case, $J$ is in the lcc if and only if
$2 < \beta < \frac{3}{2} + \frac{2z_2}{x_0}$,
where
\[
z_2=x_0 \left(
\frac{2x_2}{x_0}-\frac{2y_2}{y_0}+\frac{y_1}{y_0} -\frac{2x_1 y_1}{x_0 y_0} + \frac{2 y_1^2}{y_0^2}
\right)
.
\]
In the lcc, the convergence exponent of the spectrum is $1/\beta$.
\end{enumerate} When $J$ is in the lcc, the following lower estimate of the density of the spectrum holds,
\begin{equation*}
\frac{\beta - 1}{\beta}
\left(\frac{1}{x_0}\right)^{1/\beta}
\leq \limsup_{r\to \infty} \frac{n_{\sigma} (r)}{r^{1/{\beta}}}
. \end{equation*} \end{theorem}
\begin{remark} We strongly believe that the convergence exponent of the spectrum is equal to $1/\beta$ whenever $J$ is in the lcc, even for $\beta\in (\tfrac 3 2, 2)$ in the case $(ii)$.
\end{remark}
\begin{remark} In case $(iii)$ of \thref{ADD1}, the parameter $\beta$ is already given by $x_0,y_0,x_1,y_1$. Hence, the condition $2 < \beta < \frac{3}{2} + \frac{2z_2}{x_0}$ can equivalently be written as the two conditions
\[
1 < \frac{x_1}{x_0}-\frac{y_1}{y_0}, \quad \quad \frac{x_1}{|y_0|} + \frac{y_1}{y_0}\left( \left(\frac{x_1}{x_0}-\frac{y_1}{y_0}\right) -1 \right) <\frac{3}{8} + \frac{x_2}{x_0} - \frac{y_2}{y_0}
. \]
Moreover, the notation $z_2$, which is introduced in that case, relates to Wouk's theorem, cf.\ \eqref{P2}. \end{remark}
In the proofs of these theorems we first establish that the power asymptotics of the Jacobi parameters, i.e.\ \eqref{P32} or \eqref{P32E}, give rise to the asymptotic behaviour of a fundamental solution of the finite difference equation \eqref{P3} corresponding to $J$. This is achieved by applying theorems of R.Kooman. In the proof of \thref{P1}, we use \cite[Corollary~1.6]{kooman:1998}, which is a generalisation of the classical Poincar\'e--Perron theorem to the case that the zeros of the characteristic equation may have the same modulus but are distinct. In the exceptional case, the characteristic equation has a double zero and the more involved theorem \cite[Theorem~1]{kooman:2007} is needed. In any case, the asymptotic behaviour of solutions directly leads to a characterisation of the lcc.
The crucial step is to estimate the upper density of the spectrum and determine the convergence exponent in the lcc. Here we use the fact that the growth of the counting function $n_\sigma$ relates to the growth of the canonical product having the spectrum as its zero-set. The upper density of the spectrum is in our setting always bounded from below by a result of C.Berg and R.Szwarc, i.e.\ \cite[Proposition~7.1]{berg.szwarc:2014}. In the proof of \thref{P1}, we obtain an upper bound of the upper density by estimating the canonical product by hand. In particular, this determines the convergence exponent of the spectrum. If one is only interested in the convergence exponent, it is enough to apply \cite[Theorem~1.2]{berg.szwarc:2014}. In the situation of \thref{ADD1}, both approaches fail and a better estimate of the canonical product is needed, cf.\ \thref{P28}. This is achieved by writing the Jacobi matrix as a Hamburger Hamiltonian of a canonical system and applying \cite[Theorem~2.7]{pruckner.romanov.woracek:2016}, which goes back to a theorem of R.Romanov, cf.\ \cite[Theorem~1]{romanov:2017}.
\section{Proof of Theorem~1}
Let $J$ be a Jacobi matrix with parameters $\rho_n$ and $q_n$ having the power asymptotics \eqref{P32} with some $\epsilon>0$, $\beta_1,\beta_2 \in \bb R$, $x_0>0$, $y_0\neq 0$ and $x_1,y_1 \in \bb R$.
Recall Wouk's theorem, which is formulated in the Introduction. Since $q_n$ does not change its sign for $n$ large enough, this theorem states that if
$\sup_{n\geq 1} (\rho_n + \rho_{n-1} - |q_n|)<\infty$, then $J$ is in the lpc.
In case $(i)$, we have $\lim_{n\to \infty} |q_n|/\rho_n >2$, which implies \[
\lim_{n\to \infty} \frac{\rho_n + \rho_{n-1} - |q_n|}{\rho_n} = 2-\lim_{n\to \infty} \frac{|q_n|}{\rho_n} < 0
. \] Hence, $J$ is in the lpc by Wouk's theorem. It remains to treat case $(ii)$.
Thus, assume $\delta:=\beta_1-\beta_2\geq 0$, and $2x_0 > |y_0|$ if $\delta=0$.
\subsubsection*{\underline{Step 1:} Growth of solutions} We start with studying asymptotics of solutions of the difference equation \begin{equation} \label{P3} \rho_{n+1} u_{n+2} + q_{n+1} u_{n+1} + \rho_n u_n = 0 . \end{equation} Dividing by $\rho_{n+1}$ yields \[ u_{n+2} + C_1(n) u_{n+1} + C_0(n) u_n = 0 , \] with \begin{align*}
&C_1(n)=\frac{q_{n+1}}{\rho_{n+1}} = n^{-\delta}
\Big( \frac{y_0}{x_0} + \frac{1}{n}\Big(\frac{y_1-\delta y_0}{x_0} - \frac{y_0 x_1}{x_0^2}\Big) + {\rm O}\Big(\frac{1}{n^{1+\epsilon}}\Big) \Big)
,\\
&C_0(n)=\frac{\rho_n}{\rho_{n+1}} = 1 - \frac{\beta_1}{n} + {\rm O}\Big(\frac{1}{n^{1+\epsilon}}\Big)
. \end{align*} We denote by $\alpha_1(n), \alpha_2(n)$ the zeros of the characteristic polynomials, i.e. \[
x^2+C_1(n)x+C_0(n)=0
. \] Note that the limit \[ \lim_{n\to \infty} \frac{C_1(n)^2}{4} - C_0(n) = \begin{cases} -1 \ &, \delta >0,\\ \frac{y_0^2}{4x_0^2}-1 &, \delta=0, \end{cases} \] is negative by assumption. It follows that $\alpha_1(n)$ and $\alpha_2(n)$ are, for $n$ large enough, complex conjugate numbers, which converge to distinct numbers, i.e. \begin{equation} \label{P12}
\lim_{n\to \infty} \alpha_{1,2}(n) = \begin{cases}
\pm i &, \delta >0,\\
\frac{-y_0}{2 x_0} \pm i \sqrt{1-\frac{y_0^2}{4 x_0^2}} &, \delta=0.
\end{cases} \end{equation} Moreover, $(C_i(n)-C_i(n+1))$ is summable for $i=0,1$, due to \begin{align*}
|C_1(n)-C_1(n+1)| &\leq
C \big| n^{-\delta} - (n+1)^{-\delta}\big| + {\rm O}(n^{-1-\epsilon})
\\
&\leq C \delta n^{-\delta -1} + {\rm O}(n^{-1-\epsilon}), \\
|C_0(n)-C_0(n+1)| &= \Big|-\frac{\beta_1}{n} + \frac{\beta_1}{n+1} \Big| + {\rm O}(n^{-1-\epsilon}) = {\rm O}(n^{-1-\epsilon})
. \end{align*} Now \cite[Corollary~1.6]{kooman:1998} yields two linearly independent, complex conjugate solutions $(v_n^{(1)})_{n=1}^\infty$, $(v_n^{(2)})_{n=1}^\infty$ of \eqref{P3} with \begin{equation} \label{P19}
v_n^{(i)}= (1+{\rm o}(1)) \prod_{k=0}^{n-1} \alpha_i(k),
\quad n\in \bb N
. \end{equation} By using \cite[Lemma~4]{kooman:2007}, adding a summable perturbation, we get \begin{equation} \label{P15}
\Big|\prod_{k=0}^{n-1} \alpha_i(k) \Big|^2
=\prod_{k=0}^{n-1} C_0(k)
=\prod_{k=0}^{n-1} \Big( 1 - \frac{\beta_1}{k} + {\rm O}\Big(\frac{1}{k^{1+\epsilon}}\Big)\Big)
= (d^2+{\rm o}(1)) n^{-\beta_1}
, \end{equation} for a constant $d>0$.
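Let us sketch why the last step in \eqref{P15} holds: for $n_0$ large enough that all factors are positive, taking logarithms gives \[ \log \prod_{k=n_0}^{n-1} \Big( 1 - \frac{\beta_1}{k} + {\rm O}\Big(\frac{1}{k^{1+\epsilon}}\Big)\Big) \; = \; -\beta_1 \sum_{k=n_0}^{n-1} \frac 1k \, + \, \sum_{k=n_0}^{n-1} {\rm O}\Big(\frac{1}{k^{\min(2,1+\epsilon)}}\Big) \; = \; -\beta_1 \log n + {\rm O}(1) , \] where the second sum converges absolutely; since the ${\rm O}(1)$-terms actually converge to a constant, exponentiating yields $(d^2+{\rm o}(1))\, n^{-\beta_1}$.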
Hence, the normalized solutions $u_n^{(j)}:=v_n^{(j)}/d$ for $j=1,2$ satisfy $|u_n^{(1)}|=|u_n^{(2)}|= (1+{\rm o}(1)) n^{-\beta_1/2}$. In particular, they are square-summable and $J$ is in the lcc if and only if $\beta_1>1$.
\subsubsection*{\underline{Step 2:} The lower bound in the lcc}
From the first step we know that the corresponding moment problem is in the lcc. Thus, the Nevanlinna matrix $\big(\begin{smallmatrix} A(z) & B(z) \\ C(z) & D(z) \end{smallmatrix} \big)$ which parametrizes all solutions of the moment problem is available, cf.~\cite{nevanlinna:1922, akhiezer:1961}. These four entries are canonical products and have the same growth, i.e.\ the same type w.r.t.\ any growth function, cf.~\cite[Proposition~2.3]{baranov.woracek:smsub}. In particular, they have the same order and type.
The zeros of $B$ interlace with the spectrum of any canonical selfadjoint extension of $T_J$. Thus, the counting function of the zeros of $B$, which we denote by $n_{B}(r)$, differs from $n_{\sigma} (r)$ by at most $1$. Hence, knowledge about the growth of any entry of the Nevanlinna matrix can be used to derive knowledge about the distribution of the spectrum. In particular, the order of $B$ coincides with the convergence exponent $\rho(\sigma(J))$ of the spectrum, and the type of $B$ is comparable to the upper density of the spectrum with explicit constants.
The order and type of the entries of the Nevanlinna matrix are, by \cite[Proposition~7.1~(\textit{ii}),(\textit{iii})]{berg.szwarc:2014}, bounded from below by the order and type of the entire function \[
H(z)= \sum_{n=0}^\infty b_{n,n}z^n
, \] where $b_{n,n}=\left( \rho_1 \rho_2 \cdot \ldots \cdot \rho_{n-1} \right)^{-1}$ denotes the leading coefficient of the $n$-th orthogonal polynomial of the first kind, denoted by $P_n(z)$. The power asymptotic of $\rho_n$ yields \[
b_{n,n} =
(C+{\rm o}(1))\big[n!\big]^{-\beta_1} n^{\beta_1 - \frac{x_1}{x_0}} x_0^{-n+1}
, \] for a constant $C>0$. By the standard formula for the order and type of a power series, cf.\ \cite[Theorem~2]{levin:1980}, we get that the order of $H(z)$ is $1/{\beta_1}$, and the type w.r.t.\ this order is equal to $\beta_1 x_0^{-1/\beta_1}$.
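To spell out this routine computation: by \cite[Theorem~2]{levin:1980}, the order $\rho$ and type $\tau$ of $H$ are determined by \[ \rho \; = \; \limsup_{n\to\infty} \frac{n \log n}{\log (1/b_{n,n})} , \qquad (e\rho\tau)^{1/\rho} \; = \; \limsup_{n\to\infty} \, n^{1/\rho}\, b_{n,n}^{1/n} . \] Here $\log(1/b_{n,n}) = \beta_1 \log n! + {\rm O}(n) = (1+{\rm o}(1))\,\beta_1 n \log n$, so $\rho=1/\beta_1$. Moreover, by Stirling's formula $(n!)^{1/n}=(1+{\rm o}(1))\, n/e$, whence $n^{\beta_1}\, b_{n,n}^{1/n} \to e^{\beta_1}/x_0$ and thus $\tau = \frac{1}{e\rho}\big(e^{\beta_1}/x_0\big)^{\rho} = \beta_1 x_0^{-1/\beta_1}$.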
Thus, we get $\rho(\sigma(J))\geq 1/\beta_1$ and $\tau(B) \geq \beta_1 x_0^{-1/\beta_1}$, where $\tau(B)$ denotes the type of $B$ w.r.t.\ the order $1/\beta_1$. The inequality between the type of a canonical product and the upper density of its zeros, cf.~\cite[eq.\ (1.25)]{levin:1980}, gives \[ \limsup_{r\to \infty} \frac{n_{\sigma} (r)}{r^{1/{\beta_1}}} = \limsup_{r\to \infty} \frac{n_{B} (r)}{r^{1/{\beta_1}}}
\geq \frac{\beta_1 - 1}{\beta_1^2} \, \tau(B) \geq \frac{\beta_1 - 1}{\beta_1} \Big(\frac{1}{x_0}\Big)^{1/\beta_1}
. \]
\subsubsection*{\underline{Step 3:} The upper bound in the lcc}
In the first step we have seen that the difference equation \eqref{P3} has a fundamental system of solutions with $|u_n^{(j)}| = (1+{\rm o}(1)) n^{-\beta_1/2}$ for $j=1,2$.
The orthogonal polynomials of the first and second kind associated with the matrix $J$, denoted by $P_n(0)$ and $Q_n(0)$ respectively, are also linearly independent solutions of \eqref{P3}. Therefore, both $\big(P_n^2(0)\big)_{n=1}^\infty$ and $\big(Q_n^2(0)\big)_{n=1}^\infty$ are in $\ell^p$ for $p>1/\beta_1$. By \cite[Theorem~1.2]{berg.szwarc:2014} the order of $B$ is at most, and hence equal to, $1/\beta_1$.
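Indeed, the membership in $\ell^p$ follows from Step 1: since $P_n(0)$ and $Q_n(0)$ are linear combinations of $u_n^{(1)}$ and $u_n^{(2)}$, we have \[ P_n^2(0) \; = \; {\rm O}\big(n^{-\beta_1}\big) , \qquad Q_n^2(0) \; = \; {\rm O}\big(n^{-\beta_1}\big) , \] and $\sum_{n=1}^\infty \big(n^{-\beta_1}\big)^p<\infty$ precisely when $p>1/\beta_1$.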
We are going to estimate the density of the spectrum from above by analysing the growth of $B$ more precisely. To this end, we write the Nevanlinna matrix as a product, i.e. \[
\begin{pmatrix} A_{n+1}(z) & B_{n+1}(z) \\ C_{n+1}(z) & D_{n+1}(z) \end{pmatrix}
=
\big( I + z
R_n
\big)
\begin{pmatrix} A_{n}(z) & B_{n}(z) \\ C_{n}(z) & D_{n}(z) \end{pmatrix}
, \] with \[
R_n:=\begin{pmatrix}-P_n(0)Q_n(0) & Q_n^2(0) \\ -P_n^2(0) & P_n(0)Q_n(0) \end{pmatrix}
= \begin{pmatrix}
Q_n(0) \\ P_n(0)
\end{pmatrix} \begin{pmatrix}
Q_n(0) \\ P_n(0)
\end{pmatrix}^T
\begin{pmatrix}0 & 1 \\ -1 & 0 \end{pmatrix}
. \] Here, $A_n, B_n, C_n$ and $D_n$ are polynomials, which converge to the corresponding entry of the Nevanlinna matrix, cf.\ \cite[p.~14/54]{akhiezer:1961} and \cite[eq.\ (39)]{berg.szwarc:2014}. Hence, the spectral norm of the Nevanlinna matrix can be written as \begin{equation}\label{P33}
\left\|\begin{pmatrix} A(z) & B(z) \\ C(z) & D(z) \end{pmatrix}\right\|
= \bigg\| \prod_{n=0}^{\infty} (I + z R_n) \bigg\|
. \end{equation} Let $T$ denote the regular $2\times 2$ matrix such that \[
\begin{pmatrix}
Q_n(0) \\ P_n(0)
\end{pmatrix} = T \bigg(\begin{matrix}
u_n^{(1)} \\ u_n^{(2)}
\end{matrix} \bigg)
. \] Before we use the submultiplicativity of the norm on the right-hand side of \eqref{P33}, we rewrite the factors as follows. \begin{align*} I+z R_n &= I + z T \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix} \bigg) \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix}\bigg)^{\! T}
T^T \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix} \\ &=T \bigg[ T^{-1} \begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix} (T^T)^{-1} + z \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix} \bigg) \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix}\bigg)^{\! T}
\bigg]T^T \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix}\\ &=T \bigg[ \frac{1}{\det T} \begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix} + z \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix} \bigg) \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix}\bigg)^{\! T}
\bigg]T^T \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix}\\ &= \frac{T}{\det T}\bigg[ \begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix} + z \det T \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix} \bigg) \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix}\bigg)^{\!T}
\bigg]T^T \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix} \end{align*} When taking the product, the terms outside of the brackets give \[ (\det T)^{-1} T^T \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix} T = \begin{pmatrix} 0 & 1 \\ - 1& 0 \end{pmatrix} , \] which is a unitary matrix, whose spectral norm is $1$. Also note that
\[
\left\| \bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix} \bigg)
\bigg(\begin{matrix} u_n^{(1)} \\ u_n^{(2)} \end{matrix}\bigg)^{\! T} \right\| = \big| u_n^{(1)} \big|^2 + \big| u_n^{(2)}\big|^2
=2 \big| u_n^{(1)} \big|^2
. \] Hence, pulling the norm into the product in \eqref{P33} yields, with the notation \[
F(z):=\prod_{n=0}^\infty (1+ z f_n n^{-\beta_1}), \quad f_n:=2 |\det T| | u_n^{(1)}|^2 n^{\beta_1}
, \] nothing but \begin{equation} \label{P34}
\left\|\begin{pmatrix} A(z) & B(z) \\ C(z) & D(z) \end{pmatrix}\right\|
\leq c \, F(|z|) , \end{equation} for a constant $c>0$ which depends on $T$ only. Therefore, order and type of the entries of the Nevanlinna matrix do not exceed the order and type of $F$.
Due to the first step, we have $f_n=2 |\det T| + {\rm o}(1)$. Next, we compute the determinant of $T$ by considering the relation \[
\begin{pmatrix}
Q_{n+1}(0) & Q_{n}(0) \\ P_{n+1}(0) & P_{n}(0)
\end{pmatrix} = T \bigg(\begin{matrix}
u_{n+1}^{(1)} & u_{n}^{(1)} \\ u_{n+1}^{(2)} & u_{n}^{(2)}
\end{matrix} \bigg)
. \] Taking the determinants on both sides and multiplying by $n^{\beta_1}$ gives, due to $Q_{n+1}(0) P_{n}(0) - P_{n+1}(0) Q_{n}(0)=\rho_{n}^{-1}$, \[
\frac{n^{\beta_1}}{\rho_{n}}
= n^{\beta_1} \big(u_{n+1}^{(1)} u_{n}^{(2)} - u_{n+1}^{(2)}u_{n}^{(1)}\big)\det T
. \] The left-hand side converges to $1/x_0$ by assumption. By introducing the notation $h_n:= n^{\beta_1} \big(u_{n+1}^{(1)} u_{n}^{(2)} - u_{n+1}^{(2)}u_{n}^{(1)}\big)$, we have $\det T = 1/(x_0 \lim_{n\to \infty} h_n)$. Recall that $u_n^{(2)}$ is the complex conjugate of $u_n^{(1)}$, which gives \begin{align} \nonumber h_n &=
n^{\beta_1} \Big(u_{n+1}^{(1)} \overline{u_{n}^{(1)}} - \overline{u_{n+1}^{(1)}} u_{n}^{(1)}\Big)
= 2 i n^{\beta_1} \IM\Big(u_{n+1}^{(1)} \overline{u_{n}^{(1)}} \Big)
\label{P35} \end{align} By \eqref{P19} and \eqref{P15} we have \begin{align*}
\IM\Big(u_{n+1}^{(1)} \overline{u_{n}^{(1)}}\Big)&=
\IM\Big( (1/d^2+{\rm o }(1)) \alpha_1(n) \prod_{k=0}^{n-1} | \alpha_1(k)|^2\Big)
\\&=(1+{\rm o }(1))n^{-\beta_1} \big(\IM(\alpha_1(n)) + {\rm o }(1) \big)
, \end{align*} which gives, due to \eqref{P12}, \[
\lim_{n\to \infty} h_n= 2 i \lim_{n\to \infty} \IM(\alpha_1(n)) =
2i /a
, \] where $a$ is defined in the formulation of this theorem. Hence, we get $\det T = a/(2 x_0 i)$ and, thus, \[
f_n= 2 |\det T| + {\rm o}(1)
= \frac{a}{x_0} + {\rm o}(1)
. \] The zeros of $F$ ordered by increasing modulus behave like $(-x_0/a + {\rm o}(1)) n^{\beta_1}$. Thus, the convergence exponent of the zeros as well as the order of $F$ is equal to $1/\beta_1$. A straightforward calculation shows that the upper density of the zeros of $F$ is equal to $(a/x_0)^{1/\beta_1}$. By \eqref{P34} and \cite[eq.\ (1.25)]{levin:1980}, we get the following upper bound for the type of $B$, \[ \tau_B\leq \tau_F \leq \frac{\beta_1^2 }{\beta_1 - 1} \left( \frac{a}{x_0}\right)^{1/\beta_1} . \] The fact that the type of a canonical product is, up to a constant, not lower than the upper density of its zeros, cf.\ \cite[Theorem~2.5.13]{boas:1954}, yields \[ \limsup_{r\to \infty} \frac{n_{\sigma} (r)}{r^{1/{\beta_1}}} = \limsup_{r\to \infty} \frac{n_{B} (r)}{r^{1/{\beta_1}}} \leq \frac{e}{\beta_1} \tau_B \leq
\frac{e \beta_1 }{\beta_1 - 1} \left( \frac{a}{x_0}\right)^{1/\beta_1}
. \]
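For the reader's convenience, we sketch the straightforward calculation of the upper density used above. The zeros of $F$ are $z_n=-n^{\beta_1}/f_n$, so $|z_n|=(x_0/a+{\rm o}(1))\, n^{\beta_1}$, and the counting function of the zeros of $F$ satisfies \[ n_F(r) \; = \; \#\big\{n \in \bb N \colon\, |z_n|\leq r\big\} \; = \; \Big(\Big(\frac{a}{x_0}\Big)^{1/\beta_1}+{\rm o}(1)\Big)\, r^{1/\beta_1} , \quad r\to\infty , \] from which both the convergence exponent $1/\beta_1$ and the upper density $(a/x_0)^{1/\beta_1}$ of the zeros of $F$ can be read off.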
\vspace*{-4mm} \qed \vspace*{-3mm}
\section{Proof of Theorem 2}
Let $J$ be a Jacobi matrix with parameters $\rho_n$ and $q_n$ having the power asymptotics \eqref{P32E}
with some $\epsilon>0$, $\beta_1,\beta_2 \in \bb R$, $x_0>0$, $y_0\neq 0$ and $x_1,y_1,x_2,y_2 \in \bb R$. Assume that $\beta_1=\beta_2=:\beta$ and $2x_0 = |y_0|$.
A calculation shows that the expression in Wouk's theorem has the following power asymptotic (see also the beginning of the proof of \thref{P1}), \begin{equation} \label{P2}
\rho_n + \rho_{n-1} - |q_n| = n^\beta \Big(\frac{z_1}{n} + \frac{z_2}{n^2}+ {\rm O}\Big(\frac{1}{n^{2+\epsilon}}\Big) \Big), \end{equation} with \begin{equation} \label{P26}
z_1=x_0\left(\frac{2x_1}{x_0} - \frac{2y_1}{y_0} - \beta \right) , \ \ \:
z_2=x_0 \left(
\frac{2x_2}{x_0} - \frac{2y_2}{y_0} + \frac{\beta-1}{2} \left( \beta- \frac{2x_1}{x_0} \right) \right) . \end{equation}
As before, the proof is divided into steps. In Step 1 we make a case distinction regarding the sign of $z_1$, and characterise the occurrence of the lcc in each case. The lower and upper bounds for the convergence exponent in the lcc are settled in Step 2 and Step 3, respectively. In the last step, we finish the proof by showing how this relates to the actual statement of the theorem.
\subsubsection*{\underline{Step 1:} Growth of solutions} We start with the difference equation, \begin{equation} \label{PP3} \rho_{n+1} u_{n+2} + q_{n+1} u_{n+1} + \rho_n u_n = 0 . \end{equation} Proceeding as in the proof of \thref{P1} is not possible here, since we are in the case that the characteristic polynomial has a double zero. Instead, set $r_i:=\frac{- q_i}{2 \rho_i }$ and divide \eqref{PP3} by $\rho_{n+1} \prod_{i=1}^{n+1} r_i$ to get \begin{equation*}
\frac{u_{n+2} }{\prod_{i=1}^{n+1} r_i} - 2 \frac{u_{n+1}}{\prod_{i=1}^{n} r_i} +
\frac{\rho_n}{\rho_{n+1} r_n r_{n+1}}\frac{u_{n} }{\prod_{i=1}^{n-1} r_i} =0
. \end{equation*} Introducing the new variable $v_n:= u_n \big/ \prod_{i=1}^{n-1} r_i$ and setting $C_n:=1-\frac{\rho_n}{\rho_{n+1} r_n r_{n+1}}$ gives \begin{equation} \label{P4}
v_{n+2} - 2 v_{n+1} + (1-C_n) v_{n} =0
. \end{equation} A computation shows \begin{equation} \label{P6}
C_n=\frac{-z_1}{x_0 n} + \frac{d}{n^2} + {\rm O}\Big(\frac{1}{n^{2+\epsilon}}\Big)
, \end{equation} with some constant $d\in \bb R$.
\paragraph*{Case 1: $z_1<0$.} In this case, $J$ is in the lpc by Wouk's theorem, cf.\ \eqref{P2}. \paragraph*{Case 2: $z_1>0.$} Here, $\lim_{n\to \infty} n C_n = -z_1/x_0$ is negative and \cite[Theorem~1,1.]{kooman:2007} gives two linearly independent solutions of \eqref{P4}, denoted by $(v_n^{(j)})_{n=1}^\infty$ for $j=1,2$, such that \begin{equation*} v_n^{(1)}=\overline{v_n^{(2)}} = (1+\mathrm{o}(1)) n^{1/4} \prod_{k=1}^{n-1} \big(1 + i \sqrt{-C_k} \big).
\end{equation*} The square of the absolute value of each factor is equal to \begin{equation*}
\left| 1 + i \sqrt{-C_k} \right|^2 = 1 - C_k
= 1 + \frac{z_1}{x_0 k} + {\rm O}\Big(\frac{1}{k^{2}}\Big)
, \end{equation*} which leads to \begin{equation*}
\left| \prod_{k=1}^{n-1} \big(1 + i \sqrt{-C_k}\big) \right|
= (c_1+o(1)) n^{\frac{z_1}{2x_0}}
= (c_1+o(1)) n^{ \frac{ x_1}{x_0} - \frac{ y_1}{y_0}- \frac{\beta}{2}}
,
\end{equation*} for some $c_1>0$, due to \cite[Lemma~4]{kooman:2007} adding a summable perturbation. Thus, we get \begin{equation}
\big| v_n^{(1)} \big|=\big| v_n^{(2)} \big|
=(c_1+o(1)) n^{ \frac{1}{4} + \frac{ x_1}{x_0} - \frac{ y_1}{y_0}- \frac{\beta}{2}} \label{PP9}
. \end{equation}
Substituting back via $u_n = v_n \prod_{i=1}^{n-1} r_i$ produces two solutions of \eqref{PP3}, denoted by $(u_n^{(j)})_{n=1}^\infty$ for $j=1,2$. Again by \cite[Lemma~4]{kooman:2007}, we have \begin{equation}\label{PP10}
\bigg| \prod_{k=1}^{n-1} r_k \bigg| =
\prod_{k=1}^{n-1} \Big(1 + \frac 1 k \Big(\frac{y_1}{y_0} - \frac{x_1}{x_0}\Big) + {\rm O}\Big(\frac{1}{k^{2}}\Big) \Big)
=(c_2+o(1)) n^{\frac{y_1}{y_0} - \frac{x_1}{x_0}}
, \end{equation} for some $c_2> 0$. Together with \eqref{PP9} this results in the asymptotic behaviour \begin{equation*}
\big| u_n^{(1)} \big| = \big| u_n^{(2)} \big| =
\big| v_n^{(1)} \big| \bigg| \prod_{k=1}^{n-1} r_k \bigg| = (c_3+o(1))n^{\frac{1}{4}-\frac{\beta}{2}}
, \end{equation*} where $c_3=c_1 c_2 >0$. In particular, $J$ is in the lcc if and only if $\beta>\frac{3}{2}$.
\paragraph*{Case 3: $z_1=0.$} In that case, we have $\lim_{n\to \infty} n^2 C_n = d$, cf.\ \eqref{P6}. A calculation shows \[
d=\frac{2y_2}{y_0} - \frac{2x_2}{x_0} + (\beta-1)\frac{y_1}{y_0} + \frac{\beta ( \beta - 2)}{4}
= \frac{-z_2}{x_0} + \frac{\beta ( \beta - 2)}{4}
, \] where $z_2$ is defined in \eqref{P26}. Note that $\beta\leq 2$ already implies that $J$ is in the lpc by Wouk's theorem since $z_1=0$, cf.\ \eqref{P2}. We denote by $\alpha_1$ and $\alpha_2$ the zeros of the equation $X^2-X-d=0$, i.e., \[
\alpha_{1,2}
= \big(1 \pm \sqrt{1+4d}\,\big)\big/2
. \] For $\alpha_1 \neq \alpha_2$, there are two linearly independent solutions of \eqref{P4} such that \[
v_n^{(1)} = (1+{\rm o}(1)) n^{\alpha_1}
, \quad
v_n^{(2)} = (1+{\rm o}(1)) n^{\alpha_2}
. \] This follows from either \cite[Theorem~10.1,(1)]{kooman:1998}, or \cite[Theorem~1,2.]{kooman:2007}. Actually, the case $d=0$ is already treated in \cite[Theorem~10.3]{coffman:1964}.
In the case of a double zero $\alpha_1=\alpha_2=1/2$, we get two solutions of \eqref{P4} with \[
v_n^{(1)} = (1+{\rm o}(1)) n^{1/2}, \quad
v_n^{(2)} = (1+{\rm o}(1)) \log(n) n^{1/2}. \] To transform these solutions back to solutions $u_n^{(j)}$ of \eqref{PP3}, note that \[
\bigg| \prod_{k=1}^{n-1} r_k \bigg| = (c_3+{\rm o}(1)) n^{-\beta/2} , \] by \eqref{PP10} together with $z_1=0$. \begin{enumerate}[\text{Case} 3a:]
\item $d<-1/4.$ \ \
In this case, $\alpha_1$ and $\alpha_2$
are two distinct complex conjugate numbers with $\RE \alpha_i = 1/2$, and we get
two solutions of \eqref{PP3} with
\[
|u_n^{(1)}| = |u_n^{(2)}|=(1+{\rm o}(1)) n^{(1-\beta)/2}
.
\]
Thus, $J$ is in the lcc if and only if $\beta>2$.
\item $d=-1/4. $ \ \
Here, $\alpha_1=\alpha_2=1/2$ is a double zero, and we get
\[
|u_n^{(1)}| = (1+{\rm o}(1)) n^{(1-\beta)/2}\log n, \quad
|u_n^{(2)}| = (1+{\rm o}(1)) n^{(1-\beta)/2}
.
\]
As before, $J$ is in the lcc if and only if $\beta>2$.
\item $d>-1/4.$ \ \
In that case, $\alpha_1$ and $\alpha_2$ are two distinct real zeros, and we get two solutions of \eqref{PP3} such that
\[
|u_n^{(1)}| = (1+{\rm o}(1)) n^{(1 + \sqrt{1+4d} - \beta)/2}, \quad
|u_n^{(2)}| = (1+{\rm o}(1)) n^{(1 - \sqrt{1+4d} - \beta)/2}.
\]
Here, $J$ is in the lcc if and only if the dominating solution is square-summable, i.e.,
\[
\sqrt{1+4d} < \beta -2
.
\]
For $\beta\leq 2$ this inequality is obviously false, i.e., $J$ is in the lpc.
If $\beta>2$, then the above condition is further equivalent to
$\beta < \frac{3}{2} + \frac{2z_2}{x_0}$. \end{enumerate}
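For $\beta>2$, this last equivalence is seen by inserting $d=-\tfrac{z_2}{x_0} + \tfrac{\beta(\beta-2)}{4}$, which gives $1+4d=(\beta-1)^2 - \tfrac{4z_2}{x_0}$, and squaring:
\[
\sqrt{1+4d} < \beta - 2
\iff (\beta-1)^2 - \tfrac{4z_2}{x_0} < (\beta-2)^2
\iff 2\beta - 3 < \tfrac{4z_2}{x_0}
\iff \beta < \tfrac{3}{2} + \tfrac{2z_2}{x_0}
. \]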
\subsubsection*{\underline{Step 2:} The lower bound in the lcc}
This step can be done exactly as in \thref{P1}. When $J$ is in the lcc, we get as before $\rho(\sigma(J))\geq 1 /\beta$, as well as \[
\limsup_{r\to \infty} \frac{n_{\sigma} (r)}{r^{1/{\beta}}}
\geq \frac{\beta - 1}{\beta}
\Big(\frac{1}{x_0}\Big)^{1/\beta}
. \]
\subsubsection*{\underline{Step 3:} The upper bound in the lcc}
In the first step we have seen that the difference equation \eqref{PP3} has a fundamental solution $u_n^{(1)},u_n^{(2)}$ such that the dominating solution satisfies
$|u_n^{(1)}| \asymp \lambda(n)$ where either $\lambda(n):=n^\gamma$ or $\lambda(n):=n^\gamma \log n$ for some $\gamma \in \bb R$.
Recall that the orthogonal polynomials of the first and second kind, denoted by $P_n(0)$ and $Q_n(0)$, respectively, are linearly independent solutions of \eqref{P3}.
The quotient $(\left|P_n(0)\right| + \left|Q_n(0)\right|)/{\lambda(n)}$ is bounded from above since $P_n(0)$ and $Q_n(0)$ can be written as linear combinations of $u_n^{(1)}$ and $u_n^{(2)}$. It is also bounded away from zero, since $u_n^{(1)}$ is a linear combination of $P_n(0)$ and $Q_n(0)$ and
$ | u_n^{(1)}| /{\lambda(n)}$ is bounded away from zero. Thus, we obtain $P_n(0)^2 + Q_n(0)^2 \asymp \lambda(n)^2$.
Now we write the Jacobi matrix as a Hamburger Hamiltonian of a canonical system, cf.\ \cite{pruckner.romanov.woracek:2016} or \cite{kac:1999} for details about this reformulation. We denote by $(l_n)_{n=1}^\infty$ and $(\phi_n)_{n=1}^\infty$ the sequences of lengths and angles of the corresponding Hamburger Hamiltonian. By \cite[(1.5),(1.6)]{pruckner.romanov.woracek:2016}, we have that \begin{align*}
&\, l_n= P_n(0)^2 + Q_n(0)^2 \asymp \lambda(n)^2
, \\
&\big|\sin(\phi_{n+1}- \phi_n ) \big| =
1\big/\big( \rho_n \sqrt{l_n l_{n+1}}\big)
\asymp n^{-\beta} \lambda(n)^{-2}
. \end{align*} With the notation from \cite{pruckner.romanov.woracek:2016}, the lengths and angle-differences are regularly distributed. Moreover, we have $\Delta_l=-2\gamma$ and $\Delta_\phi=\beta+2\gamma$, both expressions exist as a limit and $\Delta_l + \Delta_\phi = \beta$.
\paragraph*{Case 2: $z_1>0.$} In this case we have $\gamma = (1 -2\beta)/4$. For $3/2 < \beta < 2$, we have $\Delta_l + \Delta_\phi = \beta <2$. By \cite[Theorem~2.7]{pruckner.romanov.woracek:2016} the order of $B$, i.e.\ $\rho(\sigma(J))$, does not exceed \[
\frac{1-\Delta_\phi - \frac{\Lambda}{2}}{\Delta_l - \Delta_\phi + \Lambda} =\frac{1}{2(\beta - 1)}
. \] For $\beta\geq 2$, \cite[Theorem~2.22,(i)]{pruckner.romanov.woracek:2016} is applicable and gives $\rho(\sigma(J))=1/\beta$.
\paragraph*{Case 3: $z_1=0.$} In this case, $\beta>2$ is necessary for occurrence of the lcc by Wouk's theorem. Due to $\Delta_l + \Delta_\phi = \beta >2$, \cite[Theorem~2.22,(i)]{pruckner.romanov.woracek:2016} is applicable and gives $\rho(\sigma(J))=1/\beta$.
\subsubsection*{\underline{Step 4:} Conclusion}
First consider $\beta >\frac{2x_1}{x_0} -\frac{2y_1}{y_0}$, which is equivalent to $z_1<0$. Hence, we are in case 1, and $J$ is in the lpc by the first step.
Similarly, $\beta <\frac{2x_1}{x_0} -\frac{2y_1}{y_0}$ is equivalent to $z_1>0$, which is case 2. By the first step, $J$ is in the lcc if and only if $\beta > 3/2$. Regarding the convergence exponent, \eqref{P27} holds by the third step.
\noindent Therefore, case $(ii)$ in the formulation of the theorem is settled, as well as case $(i)$ with the possible exception of $\beta=\frac{2x_1}{x_0} -\frac{2y_1}{y_0}\leq \frac 3 2$. In this case we have $z_1=0$, i.e.\ we are in case 3. Due to $\beta \leq \frac 3 2 $, $J$ is in the lpc by Wouk's theorem.
Thus, it remains to treat case $(iii)$, i.e. $\beta=\frac{2x_1}{x_0} -\frac{2y_1}{y_0}> \frac 3 2$. Once more we have $z_1=0$ and, thus, fall into case 3. Recall from the first step that $J$ is in the lcc if and only if
\begin{equation*}
d\leq -\tfrac{1}{4} \,\text{ and }\, \beta > 2,
\quad \text{ or, } \quad
2<\beta< \tfrac{3}{2} + \tfrac{2 z_2}{x_0}
. \end{equation*} Next, we show that $d \leq -1/4$ and $\beta>2$ already implies $\beta< \frac{3}{2} + \frac{2 z_2}{x_0}$. By solving a quadratic equation one can show that $d = -\tfrac{z_2}{x_0} + \tfrac{\beta (\beta-2)}{4} \leq -\frac{1}{4}$ implies
\[
\beta \leq 1 + 2 \big(\tfrac{z_2}{x_0} \big)^{1/2}
. \]
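Indeed, completing the square in $\beta$ makes this step explicit:
\[
d \leq -\tfrac{1}{4}
\iff \tfrac{\beta(\beta-2)}{4} + \tfrac{1}{4} \leq \tfrac{z_2}{x_0}
\iff \tfrac{(\beta-1)^2}{4} \leq \tfrac{z_2}{x_0}
\implies \beta \leq 1 + 2 \big(\tfrac{z_2}{x_0}\big)^{1/2}
. \]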
For $z_2/x_0 = 1/4$ this would give $\beta \leq 2$, which would contradict $\beta >2$. Thus, we have $z_2/x_0 \neq 1/4$ and obtain, again by solving a quadratic equation, the estimate \[
\beta\leq 1 + 2 \big(\tfrac{z_2}{x_0} \big)^{1/2} < \tfrac{3}{2} + \tfrac{2 z_2}{x_0}
. \] Hence, occurrence of the lcc in case 3 is equivalent to $2< \beta < \tfrac{3}{2} + \tfrac{2 z_2}{x_0}$. In the lcc, we have $\rho(\sigma(J))=1/\beta$ by the third step. \qed
\begin{remark} \thlab{P28} The techniques used in the third step of the proof of \thref{P1} do not seem to be suitable in the situation of \thref{ADD1}.
To demonstrate this, consider the case 2, i.e.\ $z_1>0$. By the first step, we know $P_n^2,Q_n^2 = {\rm O }(n^{(1-2\beta)/2})$, and \cite[Theorem~1.2]{berg.szwarc:2014} gives $\rho(\sigma(J))\leq 1/(\beta-\frac{1}{2})$. Together with the lower estimate, we get $\rho(\sigma(J))\in [1/\beta, 1/(\beta-\frac{1}{2})]$. Hence, contrary to the situation in \thref{P1}, this only shows that the convergence exponent is contained in an interval. Also the estimate of the Nevanlinna matrix, as performed in the proof of \thref{P1}, does not improve the size of this interval.
Using \cite[Theorem~2.7]{pruckner.romanov.woracek:2016} improves our result drastically: For $\beta<2$ the size of the interval shrinks, and for $\beta \geq 2$ this determines the convergence exponent. \end{remark}
\end{document}
2019 NAIPC Practice Contest 7
2019-02-09 09:00 AKST
Problem K
Your factory has $N$ junctions (numbered from $1$ to $N$) connected by $M$ conveyor belts. Each conveyor belt transports any product automatically from one junction to another junction in exactly one minute. Note that each conveyor belt only works in one direction. There can be more than one conveyor belt connecting two junctions, and there can be a conveyor belt connecting a junction to itself.
There are $K$ producers (machines which produce the products) located at the first $K$ junctions, i.e. junctions $1, 2, \ldots , K$. The producer at junction $j$ produces a product at each minute $(x \cdot K + j)$ for all integers $x \ge 0$ and $j = 1, 2, \ldots , K$. All products are transported immediately via the conveyor belts to the warehouse at junction $N$, except for those produced at junction $N$ (if any). Items produced at junction $N$ are directly delivered to the warehouse (there is no need to use the conveyor belts).
At each junction, there is a robot deciding which conveyor belts the incoming product should go to in a negligible time (instantly). The robots can be programmed such that all products produced by a producer are always delivered to the warehouse via the same route. Once the robots are programmed, the routing can no longer be changed. Items from different producers may have the same or different routes.
A prudent potential investor comes and wants to inspect the factory before making any decision. You want to show to the potential investor that your factory employs a good risk management policy. Thus, you want to make sure that each conveyor belt only transports at most one product at any time; i.e. two products cannot be on the same conveyor belt at the same time. On the other hand, there is no limit on the number of products at the junctions (the robots have a lot of arms!). To achieve this, you may turn off zero or more producers, but you still want to maximize the production, hence, this problem.
Find the maximum number of producers that can be left running such that all the produced products can be delivered to the warehouse and each conveyor belt transports at most $1$ product at any time.
The first line contains three integers $N$, $K$, and $M$ ($1 \le K \le N \le 300$; $0 \le M \le 1\, 000$) representing the number of junctions, the number of producers, and the number of conveyor belts, respectively.
The next $M$ lines, each contains two integers $a$ and $b$ ($1 \le a, b \le N$) representing a conveyor belt connecting junction $a$ and junction $b$ with the direction from $a$ to $b$.
The output contains an integer denoting the maximum number of producers which can be left running such that all the produced products can be delivered to the warehouse and each conveyor belt transports at most one product at any time.
In Sample Input $1$, $N = 4$, $K = 2$, $M = 3$, and the directed edges are $\{ (1,3)$, $(2,3)$, $(3,4)\} $. There is only one possible delivery route for each producer, i.e. $1 \rightarrow 3 \rightarrow 4$ for producer $1$, and $2 \rightarrow 3 \rightarrow 4$ for producer $2$. Both producers are using conveyor belt $(3,4)$, however, the products from producer $1$ are on the conveyor belt $(3,4)$ on minutes $2, 4, 6, \dots $ (even minutes), while the products from producer $2$ are on the conveyor belt $(3,4)$ on minutes $3, 5, 7, \dots $ (odd minutes). Therefore, both producers can be left running.
In Sample Input $2$, $N = 5$, $K = 2$, $M = 4$, and the directed edges are $\{ (1,3)$, $(3,4)$, $(2,4)$, $(4,5)\} $. Similar to the previous example, there is only one possible delivery route for each product produced by each producer. In this example, only one producer can be left running as products from both producers ($1$ and $2$) are on the conveyor belt $(4,5)$ at the same time if both are running.
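The timing argument in the two examples can be checked with a small Python sketch (the helper name is ours, purely illustrative). In these examples, producer $j$'s products all reach a given belt on its route at minutes sharing one residue mod $K$, so two producers conflict on a shared belt exactly when those residues coincide:

```python
def occupancy_residue(j, belts_before, K):
    # Producer j emits a product at every minute x*K + j; after crossing
    # `belts_before` belts it enters the next belt at minute x*K + j + belts_before,
    # so all its occupancy minutes on that belt share one residue mod K.
    return (j + belts_before) % K

K = 2
# Sample 1: both routes reach belt (3,4) after crossing one earlier belt.
r1 = occupancy_residue(1, 1, K)  # 0 -> even minutes
r2 = occupancy_residue(2, 1, K)  # 1 -> odd minutes; no conflict, both can run

# Sample 2: belt (4,5) is reached after 2 belts (producer 1) and 1 belt (producer 2).
s1 = occupancy_residue(1, 2, K)  # 1
s2 = occupancy_residue(2, 1, K)  # 1 -> same residue; conflict, only one can run
```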
Sample Input 1
4 2 3
1 3
2 3
3 4

Sample Output 1
2
Problem ID: conveyorbelts
CPU Time limit: 2 seconds
Memory limit: 1024 MB
Author: Kyle See
Source: 2018 ICPC Asia Singapore Regional
Positive Lyapunov exponent for a class of quasi-periodic cocycles
Jinhao Liang
Department of Mathematics, Southeast University, Nanjing 211189, China
Received January 2019 Revised October 2019 Published December 2019
Young [17] proved the positivity of the Lyapunov exponent on a large set of energies for some quasi-periodic cocycles. Zhang [18] showed that her result also applies to some quasi-periodic Schrödinger cocycles. However, her result cannot be applied to the Schrödinger cocycles with the potential $ v = \cos(4\pi x)+w( x) $, where $ w\in C^2(\mathbb R/\mathbb Z,\mathbb R) $ is a small perturbation. In this paper, we will improve her result so that it can be applied to more cocycles.
Keywords: Quasi-periodic cocycle, Schrödinger cocycle, Lyapunov exponent.
Mathematics Subject Classification: Primary: 37A30.
Citation: Jinhao Liang. Positive Lyapunov exponent for a class of quasi-periodic cocycles. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1361-1387. doi: 10.3934/dcds.2020080
A. Avila, Global theory of one-frequency Schrödinger operators, Acta Math., 215 (2015), 1-54. doi: 10.1007/s11511-015-0128-7. Google Scholar
M. Benedicks and L. Carleson, The dynamics of the Hénon map, Ann. of Math. (2), 133 (1991), 73-169. doi: 10.2307/2944326. Google Scholar
K. Bjerklöv, The dynamics of a class of quasi-periodic Schrödinger cocycles, Ann. Henri Poincaré, 16 (2015), 961-1031. doi: 10.1007/s00023-014-0330-8. Google Scholar
J. Bourgain, Positivity and continuity of the Lyapounov exponent for shifts on $\mathbb T^d$ with arbitrary frequency vector and real analytic potential, J. Anal. Math., 96 (2005), 313-355. doi: 10.1007/BF02787834. Google Scholar
J. Bourgain and M. Goldstein, On nonperturbative localization with quasi-periodic potential, Ann. of Math. (2), 152 (2000), 835-879. doi: 10.2307/2661356. Google Scholar
J. Chan, Method of variations of potential of quasi-periodic Schrödinger equations, Geom. Funct. Anal., 17 (2008), 1416-1478. doi: 10.1007/s00039-007-0633-8. Google Scholar
L. H. Eliasson, Discrete one-dimensional quasi-periodic Schrödinger operators with pure point spectrum, Acta Math., 179 (1997), 153-196. doi: 10.1007/BF02392742. Google Scholar
J. Fröhlich, T. Spencer and P. Wittwer, Localization for a class of one-dimensional quasi-periodic Schrödinger operators, Comm. Math. Phys., 132 (1990), 5-25. doi: 10.1007/BF02277997. Google Scholar
M. Herman, Une méthode pour minorer les exposants de Lyapounov et quelques exemples montrant le caractère local d'un théorème d'Arnold et de Moser sur le tore de dimension 2, Comment. Math. Helv., 58 (1983), 453-502. Google Scholar
K. Ishii, Localization of eigenstates and transport phenomena in one-dimensional disordered systems, Progr. Theoret. Phys. Suppl., 53 (1973), 77-138. doi: 10.1143/PTPS.53.77. Google Scholar
S. Klein, Anderson localization for the discrete one-dimensional quasi-periodic Schrödinger operator with potential defined by a Gevrey-class function, J. Funct. Anal., 218 (2005), 255-292. doi: 10.1016/j.jfa.2004.04.009. Google Scholar
J. Liang and P. Kung, Uniform positivity of Lyapunov exponent for a class of smooth Schrödinger cocycles with weak Liouville frequencies, Front. Math. China, 12 (2017), 607-639. doi: 10.1007/s11464-017-0619-2. Google Scholar
L. Pastur, Spectral properties of disordered systems in the one-body approximation, Comm. Math. Phys., 75 (1980), 179-196. doi: 10.1007/BF01222516. Google Scholar
Ya. G. Sinai, Anderson localization for one-dimensional difference Schrödinger operator with quasiperiodic potential, J. Statist. Phys., 46 (1987), 861-909. doi: 10.1007/BF01011146. Google Scholar
E. Sorets and T. Spencer, Positive Lyapunov exponents for Schrödinger operators with quasi-periodic potentials, Comm. Math. Phys., 142 (1991), 543-566. doi: 10.1007/BF02099100. Google Scholar
Y. Wang and Z. Zhang, Uniform positivity and continuity of Lyapunov exponents for a class of $C^2$ quasiperiodic Schrödinger cocycles, J. Funct. Anal., 268 (2015), 2525-2585. doi: 10.1016/j.jfa.2015.01.003. Google Scholar
L. Young, Lyapunov exponents for some quasi-periodic cocycles, Ergodic Theory Dynam. Systems, 17 (1997), 483-504. doi: 10.1017/S0143385797079170. Google Scholar
Z. Zhang, Positive Lyapunov exponents for quasiperiodic Szegő cocycles, Nonlinearity, 25 (2012), 1771-1797. doi: 10.1088/0951-7715/25/6/1771. Google Scholar
Figure 1. Graph of the function in $ \mathcal F $
Figure 2. Graphs of the angle functions
Figure 3. Bifurcation of Type Ⅲ functions with $ f'_1(c_1)f'_2(c_2)<0 $
Corporate Finance & Accounting
Corporate Finance & Accounting Financial Ratios
Variable Cost Ratio Definition
By Will Kenton
Updated Apr 15, 2019
What Is the Variable Cost Ratio?
The variable cost ratio is used in cost accounting to express a company's variable production costs as a percentage of net sales, calculated as variable costs divided by net revenues (total sales, minus returns, allowances, and discounts).
The ratio compares costs that vary with levels of production to the amount of revenues generated by that production. It excludes fixed costs that remain constant regardless of production levels, such as a building lease.
The Formula for the Variable Cost Ratio Is
Variable Cost Ratio = Variable Costs / Net Sales
What Does the Variable Cost Ratio Tell You?
The variable cost ratio, which can alternatively be calculated as 1 - contribution margin, is one factor in determining profitability. It indicates whether a company is achieving, or maintaining, the desirable balance where revenues are rising faster than expenses.
The variable cost ratio quantifies the relationship between a company's sales and the specific costs of production associated with those revenues. It is a useful evaluation metric for a company's management in determining necessary break-even or minimum profit margins, making profit projections and in identifying the optimal sales price for its products.
If a company has high variable costs in relation to net sales, it likely doesn't have many fixed costs to cover each month, and can stay profitable with a relatively low amount of sales. Conversely, companies with high fixed costs will have a lower ratio result, meaning they have to earn a good amount of revenue just to cover fixed costs and stay in business, before seeing any profits from sales.
The variable cost calculation can be done on a per-unit basis, such as a $10 variable cost for one unit with a sales price of $100 giving a variable cost ratio of 0.1, or 10 percent, or by using totals over a given time period, such as total monthly variable costs of $1,000 with total monthly revenues of $10,000 also rendering a variable cost ratio of 0.1, or 10 percent.
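The per-unit and monthly figures above can be reproduced with a short Python sketch (the helper function is illustrative, not from any accounting library):

```python
def variable_cost_ratio(variable_costs, net_sales):
    """Variable cost ratio = variable costs / net sales."""
    return variable_costs / net_sales

# Per-unit basis: $10 variable cost on a $100 sales price.
per_unit = variable_cost_ratio(10, 100)        # 0.1, i.e. 10 percent

# Totals basis: $1,000 monthly variable costs on $10,000 monthly revenue.
monthly = variable_cost_ratio(1_000, 10_000)   # 0.1, i.e. 10 percent

# The contribution margin is the complement of the variable cost ratio.
contribution_margin = 1 - monthly              # 0.9, i.e. 90 percent
```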
The variable cost ratio shows the total variable expenses a firm incurs in percent terms, as a proportion of its net sales.
A high ratio result shows that a company can make profits on relatively low sales since it doesn't have many fixed costs to cover.
A low ratio reveals that a company has high fixed costs to cover and must hit a high break-even sales level before it makes any profits.
The Difference Between Variable Costs and Fixed Costs
The variable cost ratio and its usefulness are easily understood once the basic concepts of variable costs, fixed expenses, and their relationship to revenues and general profitability are understood.
The two expenses that must be known to calculate total production costs and determine profit margin are variable costs and fixed costs, the latter also referred to as fixed expenses.
Variable costs are variable in the sense they fluctuate in relation to the level of production, or output. Examples of variable costs include the costs of raw material and packaging. These costs increase as production increases and decline when production declines. It should also be noted that increases or decreases in variable costs occur without any direct intervention or action on the part of management. Variable costs commonly increase at a fairly constant rate in proportion to increases in expenditures on raw materials and/or labor.
Fixed expenses are general overhead or operational costs that are "fixed" in the sense they remain relatively unchanged regardless of levels of production. Examples of fixed expenses include facility rental or mortgage costs and executive salaries. Fixed expenses only change significantly as a result of decisions and actions by management.
The contribution margin is the difference, expressed as a percentage, between total sales revenue and total variable costs. The name refers to the fact that this figure delineates what amount of revenue is left over to "contribute" toward fixed costs and potential profit.
Understanding Cost-Volume-Profit – CVP Analysis
Cost-volume-profit (CVP) analysis looks at the impact that varying levels of sales and product costs have on operating profit. Also commonly known as break-even analysis, CVP analysis looks to determine the break-even point for different sales volumes and cost structures.
Understanding Variable Cost
A variable cost is a corporate expense that changes in proportion to production output. Variable costs increase or decrease depending on a company's production volume; they rise as production increases and fall as production decreases.
Break-Even Analysis
Break-even analysis calculates a margin of safety where an asset price, or a firm's revenues, can fall and still stay above the break-even point.
Breakeven Point (BEP)
In accounting, the breakeven point is the production level at which total revenues equal total expenses. Businesses also have a breakeven point, when they aren't making or losing money.
Cost Accounting Definition
Cost accounting is a form of managerial accounting that aims to capture a company's total cost of production by assessing its variable and fixed costs.
How Operating Leverage Works
Operating leverage shows how a company's costs and profit relate to each other and changes can affect profits without impacting sales, contribution margin or selling price.
\begin{document}
\title{Demonstrating NISQ Era Challenges in Algorithm Design on IBM's 20 Qubit Quantum Computer}
\author{Daniel Koch$^{1}$, Brett Martin$^{2}$, Saahil Patel$^{1}$, Laura Wessing$^{1}$, Paul M. Alsing$^{1}$}
\affiliation{$^{1}$Air Force Research Lab, Information Directorate, Rome, NY}
\affiliation{$^{2}$Air Force Academy, Colorado Springs, Co}
\begin{abstract}
As superconducting qubits continue to advance technologically, the realization of quantum algorithms from theoretical abstraction to physical implementation requires knowledge of both quantum circuit construction as well as hardware limitations. In this study we present results from experiments run on IBM's 20-qubit `Poughkeepsie' architecture, with the goal of demonstrating various qubit qualities and challenges that arise in designing quantum algorithms. These include experimentally measuring $T_1$ and $T_2$ coherence times, gate fidelities, sequential CNOT gates, techniques for handling ancilla qubits, and finally CCNOT and QFT$^{\dagger}$ circuits implemented on several different qubit geometries. Our results demonstrate various techniques for improving quantum circuits which must compensate for limited connectivity, either through the use of SWAP gates or additional ancilla qubits.
\end{abstract}
\maketitle
\section{Introduction}
For as long as technology remains in the NISQ (Noisy Intermediate Scale Quantum) Era \cite{nisq} of quantum computers, quantum algorithm design will need to compensate for noisy qubits. While algorithms such as Shor's \cite{shor} and Grover's \cite{grover} have proven mathematical speedups over the best known classical algorithms, they place critical demands on quantum computers, such as qubit coherence times and gate fidelities, which are to date unmet. For superconducting qubits specifically, these various sources of noise \cite{noise1,noise2,noise3,noise4} inhibit the success of quantum algorithms, which in turn diminish or completely negate their potential for speedups. Even still, technological strides and new techniques for minimizing noise continue to develop \cite{err1,err2,err3}, with the hope that someday soon we will reach full error-correcting \cite{err_corr1,err_corr2,err_corr3} quantum computers.
Due to the complex technological nature of quantum computers, the current standard model by which interested users can work with these machines is through remote access with various vendors \cite{ibmq, google, rigetti, microsoft}. Analogous to high-level classical programming languages, these vendors offer quantum programming languages which allow users to execute quantum circuits without necessarily knowing the full technical extent of how they are implemented via superconducting qubits. Consequently, this allows for an important separation of quantum software from hardware, opening up more opportunities for research efforts in the field of quantum algorithms \cite{tutorials1,tutorials2}. In the spirit of this new dawn of quantum programming, the findings in this study reflect the capabilities and limitations of this current model for quantum computer access, aiming to test the 20-qubit Poughkeepsie architecture through various experiments.
Each experiment in this study is motivated by different components which are critical to the success of larger, more complex quantum algorithms. These include $T_1$ and $T_2$ coherence times \cite{T2_1,T2_2, T1_T2}, single and 2-qubit gate fidelities, and qubit connectivity. After testing these properties individually, we then study their combined effects through implementations of CCNOT and QFT$^{\dagger}$ circuits \cite{toffoli,qft}. Throughout these various experiments, we make a concentrated effort to distinguish between results which are simply technological benchmarks (coherence times, gate fidelity, etc.) and those which are more fundamental to algorithm design. Our findings demonstrate several challenges which must be factored into NISQ Era algorithm design, adding to the growing population of studies which aim to benchmark IBM's qubits \cite{ibmq1,ibmq2,ibmq3}, as well as test the limits of various algorithm implementations \cite{ibm_exp1,ibm_exp2,ibm_exp3,ibm_exp4,ibm_exp5}.
\subsection{Layout}
The layout of this paper is as follows: In section 2 we investigate the $T_1$ and $T_2$ coherence times of various qubits. In section 3 we demonstrate CNOT gate fidelities across all 20 of IBM's Poughkeepsie qubits, showing the extent to which a single CNOT operation can be reliably performed between distant qubits. Section 4 contains no experimental results, but lays the framework and motivation for the remainder of the study. In sections 5 and 6 we experimentally implement CCNOT and QFT$^{\dagger}$ circuits on various qubit geometries. And lastly, section 7 summarizes the main results of the paper and their implications for future algorithm design.
\section{Coherence Times}
In quantum computing, a qubit is a two-level system that can simultaneously occupy both the $|0\rangle$ (ground) and $|1\rangle$ (excited) states through superposition, and is also sensitive to the relative phase between the two states. In practice however, one must always be mindful of the potential for noise to cause qubits to deviate from their intended states. Contrary to their classical counterparts, current qubits have short timescales for which their quantum states are usable in any sort of calculation. These time frames are referred to as coherence times, quantified by the metrics $T_1$ and $T_2$, representing timescales after which a qubit has likely lost its computational utility. Physically, these metrics correspond to a qubit's interactions with a noisy environment, tracking the probability that a qubit's excited state ($T_1$) or superposition state ($T_2$) is preserved, which both decay exponentially in time. Equation \ref{Eqn:Exp_Decay} below shows the probability of a qubit resisting a decoherence collapse after an interval of time $\Delta t$.
\begin{eqnarray}
\textrm{P}_{i}(\Delta t) = e^{\frac{- \Delta t}{T_i}}
\label{Eqn:Exp_Decay}
\end{eqnarray}
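As an illustrative sketch (not part of the experiments that follow), the survival probability of equation \ref{Eqn:Exp_Decay} can be tabulated directly; the $T_1$ value used here is hypothetical, chosen from within the range quoted below.

```python
import math

def survival_probability(dt_us, t_i_us):
    """Probability that a qubit resists a decoherence collapse after a
    delay of dt_us microseconds, P_i(dt) = exp(-dt / T_i)."""
    return math.exp(-dt_us / t_i_us)

# Hypothetical T1 of 80 microseconds for illustration only.
t1 = 80.0
for dt in (10, 40, 80, 160):
    print(f"dt = {dt:3d} us -> P(|1>) = {survival_probability(dt, t1):.3f}")
```

At $\Delta t = T_1$ the survival probability has fallen to $1/e \approx 0.37$, which is why delays comparable to the coherence time render a qubit computationally unreliable.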
While working on IBM's Poughkeepsie architecture, coherence times for both $T_1$ and $T_2$ ranged from lows of $30$ - $40$ $\mu s$ to highs of $100$ - $120$ $\mu s$. In this section we present experimental results aimed to verify these coherence times through direct observations of decoherence over varying timescales. However, when working on these shared devices remotely, one must be mindful of other users, which can cause certain times during the day to be more competitive for machine usage than others. This in turn can be problematic for experiments which require data collection from numerous trials, such as the coherence experiments to come, as we found that qubit coherence times can fluctuate throughout a 24-hour day. Thus, the results shown in the coming two subsections represent some of the best data obtained as a remote user, overcoming the challenge of shared device usage and ultimately demonstrating the coherence times of IBM's qubits.
\subsection{T$_1$ Energy Relaxation}
The $T_1$ metric corresponds to a spontaneous decay from the excited state ($\hspace{.05cm}|1\rangle \hspace{.01cm}$) to the ground state ($\hspace{.05cm}|0\rangle \hspace{.01cm}$). Just as classical computing is reliant on the long shelf life of bits, a critical ingredient for quantum circuits is how long a qubit can maintain the $|1\rangle$ state. Two-qubit gates such as CNOT and control-$R_{\phi}$, which make up the backbone of several critical quantum subroutines, rely on `control' qubits whereby the action of the 2-qubit gate is only performed if the control qubit is in the $|1\rangle$ state. Thus, the impact of a spontaneous energy relaxation in the middle of an algorithm can vary depending on when, and on which qubit, the error occurs. If a key qubit to a circuit's success were to unintentionally undergo a $T_1$ collapse, it could spell the end of the algorithm. Conversely, as demonstrated in some of the later experiments, certain algorithms can still yield successful results despite one or more qubits undergoing spontaneous decays, provided the collapse happens after a qubit has served its purpose.
In order to experimentally demonstrate the decay shown in equation \ref{Eqn:Exp_Decay}, figure \ref{Fig:T1_Circuit} below shows the circuit used to verify the underlying $T_1$ nature of IBM's qubits. The circuit is designed such that the qubit is initially brought into the $|1\rangle$ excited state via an $X$ gate, followed by a desired amount of time $\Delta t$ whereby we anticipate an energy relaxation according to the exponential probability distribution.
\begin{figure}
\caption{Quantum circuit for studying $T_1$ coherence times. The qubit is excited into the $|1\rangle$ state via the $X$ gate, followed by various amounts of time $\Delta t$ where the qubit may spontaneously undergo a $T_1$ collapse.}
\label{Fig:T1_Circuit}
\end{figure}
In performing the experiment in figure \ref{Fig:T1_Circuit}, various $\Delta t$ times were tested in order to reveal the full exponential decaying nature of the qubits. For each value of $\Delta t$ the circuit was run 8000 times, from which the results were then used to compute an average percentage probability of decay. Once all of the experiments for a given qubit were completed, exponential regression fits were then performed to the data. The $T_1$ values from these best fits are displayed alongside the plots in figure \ref{Plt:T1_Fit}, as well as the reported $T_1$ times by IBM for each qubit.
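The extraction of $T_1$ from such data can be sketched as follows. This is not the authors' analysis code: it generates synthetic shot counts from an assumed true $T_1$ and then fits $\ln \textrm{P} = -\Delta t / T_1$, a line through the origin, by least squares.

```python
import math
import random

def fit_t1(delays_us, survivals):
    """Least-squares fit of ln(P) = -dt/T1 (a line through the origin);
    returns the extracted T1 in microseconds."""
    num = sum(dt * math.log(p) for dt, p in zip(delays_us, survivals))
    den = sum(dt * dt for dt in delays_us)
    slope = num / den          # slope = -1/T1
    return -1.0 / slope

# Synthetic experiment: 8000 shots per delay at an assumed true T1 of 75 us.
random.seed(7)
true_t1, shots = 75.0, 8000
delays = [10, 20, 40, 80, 120, 160]
survivals = []
for dt in delays:
    p = math.exp(-dt / true_t1)
    counts = sum(1 for _ in range(shots) if random.random() < p)
    survivals.append(counts / shots)

print(f"fitted T1 = {fit_t1(delays, survivals):.1f} us")
```

With 8000 shots per delay, the binomial shot noise per point is well under a percent, so the fitted value lands close to the true $T_1$.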
\begin{figure}
\caption{ (scatter plots) Data collected running the circuit shown in figure \ref{Fig:T1_Circuit} for various qubits on IBM's Poughkeepsie architecture. Accompanying each set of data are exponential regression best fits (dashed lines) used to extract experimental $T_1$ values (black circles), along with their associated correlation coefficients R$^2$. These $T_1$ values are shown in the accompanying table, as well as the reported times from IBM.}
\label{Plt:T1_Fit}
\end{figure}
\subsection{T$_2$ Transverse Relaxation}
By comparison to $T_1$, which has a single well defined physical description, $T_2$ coherence for qubits can take on several potential definitions. In this study, we present results based on two experiments, commonly referred to as `$T_2$ Ramsey' and `$T_2$ Echo'. The quantum circuits for each experiment are shown below in figure \ref{Fig: T2 Circuits}. In both experiments the qubit is initially brought into the 50-50 superposition state $|+\rangle$ via a Hadamard gate, followed by various amounts of time $\Delta t$, and finally a second Hadamard just before the measurement. During the time between Hadamard gates, the qubit is subject to spontaneous energy relaxation ($T_1$) as well as dephasing and frequency drifting, for a combined effect referred to as transverse relaxation \cite{noise3}.
\begin{figure}
\caption{Quantum circuits for demonstrating the $T_2$ nature of IBM's qubits. The difference between the two experiments can be seen in the extra $X$ gate which splits the time $\Delta t$, which is used to counteract the drifting superposition state.}
\label{Fig: T2 Circuits}
\end{figure}
Beginning with the Ramsey experiment, the theoretical final state after two sequential Hadamard gates ($\Delta t = 0$) should return the qubit back to the ground state. However, when time is introduced in between these two $H$ gates, the qubit becomes susceptible to $T_2$ transverse relaxation. Illustrated in figure \ref{Fig: T2 Bloch}, the frequency drifting component of $T_2$ relaxation causes the state of the qubit to precess around the equatorial plane of the Bloch Sphere.
\begin{eqnarray}
|\Psi (t) \rangle \hspace{.15cm} = \hspace{.15cm} \frac{|0\rangle \hspace{.05cm}+\hspace{.05cm}e^{i \omega t}|1\rangle }{\sqrt{2}}
\label{Eqn: T2 Drift}
\end{eqnarray}
Experimentally, this effect causes the second Hadamard gate to transform the qubit to a new final state based on the elapsed time, one which oscillates between $|0\rangle$ and $|1\rangle$ with the frequency of the drift. Secondly, the qubit is also subject to pure dephasing over time, represented by the growing shaded area in figure \ref{Fig: T2 Bloch}, whereby one gradually loses knowledge of the exact state of the qubit \cite{book1}.
\begin{figure}
\caption{Bloch Sphere representation of the state of the qubit after being initialized by a Hadamard gate. The orange shaded areas in the equatorial plane represent the growing uncertainty of the state of the qubit as it drifts, reaching a fully decoherent state after enough time.}
\label{Fig: T2 Bloch}
\end{figure}
The frequency of the precessing state shown above in equation \ref{Eqn: T2 Drift} can be determined through the difference in energies of the two-level qubit system, $\omega = (E_1 - E_0)/\hbar$. As a result, the probability of measuring the $|0\rangle$ state in the Ramsey experiment goes as cos$^2(\hspace{.02cm} \omega t / 2 \hspace{.02cm})$ as a function of time, which can be seen in figure \ref{Plt: T2 Ramsey} below.
\begin{eqnarray}
\textrm{P}( \hspace{.04cm} |0\rangle \hspace{.03cm} ) \hspace{.1cm}=\hspace{.1cm} |\hspace{.03cm}\langle 0 | \textrm{H} | \Psi(t) \rangle \hspace{.03cm} | ^2 \hspace{.1cm} = \hspace{.1cm} \textrm{cos}^2\Big{(} \hspace{.02cm} \frac{\omega t}{2} \hspace{.02cm} \Big{)}
\label{Eqn: T2 |0> Probability}
\end{eqnarray}
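As a numerical sketch, the damped Ramsey fringes can be modeled by multiplying the cos$^2$ oscillation of equation \ref{Eqn: T2 |0> Probability} by an exponential dephasing envelope that relaxes the probability toward the fully mixed value of $1/2$; the drift frequency and $T_2$ below are hypothetical, chosen only for illustration.

```python
import math

def ramsey_p0(t_us, omega_rad_per_us, t2_us):
    """Probability of measuring |0> in the Ramsey experiment: the
    cos^2(omega*t/2) oscillation, damped toward 1/2 by dephasing
    on a T2 timescale."""
    envelope = math.exp(-t_us / t2_us)
    return 0.5 + envelope * (math.cos(omega_rad_per_us * t_us / 2.0) ** 2 - 0.5)

# Hypothetical drift frequency (rad/us) and T2 (us) for illustration only.
omega, t2 = 2.0, 60.0
for t in (0.0, math.pi / omega, 30.0, 300.0):
    print(f"t = {t:6.2f} us -> P(|0>) = {ramsey_p0(t, omega, t2):.3f}")
```

At $t = 0$ the qubit returns deterministically to $|0\rangle$, while for $t \gg T_2$ the oscillation washes out and both outcomes become equally likely, matching the dampened fringes in figure \ref{Plt: T2 Ramsey}.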
In order to obtain data similar to that of the $T_1$ experiment, suitable for an exponential best fit, one can counteract the drifting process of the qubit using the $T_2$ Echo circuit. By splitting each time evolution ($\Delta t$) into two equal halves and using an additional $X$ gate, one can refocus the qubit back to the $|+\rangle$ state just prior to the second Hadamard, guaranteeing a theoretical measurement of $|0\rangle$. Visually, one can picture this process as the state of the qubit drifting along for some time $\Delta t$ (figure \ref{Fig: T2 Bloch}), becoming $|\Psi (\Delta t) \rangle$ (equation \ref{Eqn: T2 Drift}), undergoing a reflection as a result of the $X$ gate, and finally drifting once again for an equal time back to its starting state.
Plotted below are the results obtained running the Ramsey and Echo experiments on various qubits on the Poughkeepsie architecture. Figure \ref{Plt: T2 Ramsey} shows a typical Ramsey experiment, whereby the probability of the final state oscillates between $|0\rangle$ and $|1\rangle$ as a function of time, while simultaneously dampening into a fully decohered state as a result of dephasing. Figure \ref{Plt: T2 Echo} illustrates the Echo technique described earlier, showing the effect of using an $X$ gate to let the quantum system naturally refocus the state of the qubit. Exponential best fits for the $T_2$ Echo experiment are given, along with reported values from IBM.
\begin{figure}
\caption{Data collected running the $T_2$ Ramsey circuit \ref{Fig: T2 Circuits} for two different time scales, both on the same qubit.}
\label{Plt: T2 Ramsey}
\end{figure}
\begin{figure}
\caption{(green circles) Data collected running the $T_2$ Echo circuit for two different qubits. (dashed line) Exponential best fits used to extract experimental values for $T_2$ (black circle) to compare with values reported by IBM.}
\label{Plt: T2 Echo}
\end{figure}
\section{Chaining CNOT Gates}
When comparing quantum algorithms to classical competitors, claims of speedups often assume full connectivity between all qubits in the quantum system. By connectivity, we refer to the ability for two qubits to perform a 2-qubit operation. If one looks to the foreseeable future of qubit technologies however, it is possible that a fully connected superconducting 20 or 50+ qubit device may be upwards of a decade or more away. Thus, in order to compensate for lacking connectivity, quantum algorithms will need to be adapted to fit the various existing architectures.
In this section we investigate the effectiveness of using CNOT gates (control-$X$ gates) as a means of compensating for limited qubit connectivity. We study the reliability with which one can use a series of CNOT gates to invoke a control operation between distant qubits, which do not directly share a connection. Figure \ref{Fig:CNOT Chain} shows an example of a length-3 chain (two intermediate qubits separating the control and target), achieving a CNOT operation between qubits $A$ and $B$.
\begin{figure}
\caption{An example of a CNOT gate implementation between distant qubits $A$ and $B$, which lack a direct connection. Qubits $1$ and $2$ serve as ancilla, acting as intermediate control qubits in order to pass along the desired effect from qubit $A$. }
\label{Fig:CNOT Chain}
\end{figure}
In figure \ref{Fig:CNOT Chain}, qubits such as $1$ and $2$, which serve only as an intermediate means of connecting $A$ and $B$, are often referred to as ancilla qubits. Such qubits play a pivotal role in delivering the control operation between the distant computational qubits ($A$ and $B$), and in principle are meant to have no direct impact on the success of the algorithm. In practice however, this last point can be difficult to control, as merely their presence in the quantum system can introduce new sources of error. Additionally, this problem can compound, as certain algorithms may require the use of the same ancilla qubits several times, requiring proper handling of these qubits after each use.
\subsection{Delivering Desired States}
In testing the effectiveness of IBM's 20-qubit device for implementing CNOT chains, the overall goal of each experiment is to measure both the starting and final (control and target) qubits in the $|1\rangle$ state. This is done by exciting the initial control qubit into the $|1\rangle$ state with an $X$ gate, followed by a series of CNOT gates along some path of ancilla qubits. For each path studied we examine three circuits (figure \ref{Fig:Chain Circuits}), two such that the desired final state of the system contains all of the ancilla qubits in the $|0\rangle$ state, and one for $|1\rangle$. The motivation for studying multiple variations of each chain circuit stems from whether or not a certain algorithm requires the ancilla qubits to be reset for future use, oftentimes determined by whether or not the control qubit contains superposition.
\begin{figure}
\caption{(left) CNOT chain circuit where the desired final state of the system leaves all of the ancilla qubits in the $|1\rangle$ state. (center) Modified version of the left circuit, where the presence of additional $X$ gates now result in the desired final state of each ancilla qubit to be $|0\rangle$. (right) The most general form for a CNOT chain, using additional CNOT gates to properly reset each ancilla qubit when the starting control qubit is in a superposition state.}
\label{Fig:Chain Circuits}
\end{figure}
The left circuit in figure \ref{Fig:Chain Circuits} is the simplest form of a CNOT chain, where the ideal final state of the system leaves each ancilla qubit in the $|1\rangle$ state. More generally, this circuit represents the case where the ancilla qubit states are inconsequential. That is to say, the only desired effect is such that the distant control and target qubits are both in the $|1\rangle$ state, completing the effect of the computational CNOT gate between them with no intended future use of the ancilla qubits. Conversely, circuits of the center and right type are designed such that the desired final state of the system leaves each ancilla qubit back in the $|0\rangle$ state, representing the case where one anticipates future use from these ancilla. For instances where one knows that the initial control qubit is purely in the $|1\rangle$ state, the central circuit would be optimal due to the use of only single qubit gates for resetting the ancilla. However, for more general cases in which the control qubit may contain superposition, the additional CNOT gates are necessary for resetting.
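The ideal (noiseless) behavior of the three circuit types can be sketched with classical bit propagation, since every qubit here starts in a definite computational-basis state. This toy model, an illustration rather than a quantum simulation, tracks the bits [control, ancilla$_1$..ancilla$_n$, target] through each circuit; note that only the CNOT-based reset remains valid when the control carries superposition, a case this classical sketch cannot capture.

```python
def chain_no_reset(n_ancilla):
    """Left circuit: forward CNOT chain only; ancilla end in 1.
    Bits: [control, a1..an, target], control prepared in 1 by an X gate."""
    bits = [1] + [0] * n_ancilla + [0]
    for i in range(len(bits) - 1):        # forward CNOT chain
        bits[i + 1] ^= bits[i]
    return bits

def chain_x_reset(n_ancilla):
    """Center circuit: each ancilla flipped back to 0 by an X gate
    (valid only when the control is definitely in |1>)."""
    bits = chain_no_reset(n_ancilla)
    for i in range(1, n_ancilla + 1):
        bits[i] ^= 1                      # X gate on each ancilla
    return bits

def chain_cnot_reset(n_ancilla):
    """Right circuit: run the interior CNOT chain in reverse to
    uncompute the ancilla, leaving only control and target in 1."""
    bits = chain_no_reset(n_ancilla)
    for i in range(n_ancilla, 0, -1):     # reverse chain, ancilla only
        bits[i] ^= bits[i - 1]
    return bits

print(chain_no_reset(3))    # [1, 1, 1, 1, 1]: ancilla left in 1
print(chain_x_reset(3))     # [1, 0, 0, 0, 1]: ancilla reset by X gates
print(chain_cnot_reset(3))  # [1, 0, 0, 0, 1]: ancilla uncomputed by CNOTs
```

All three circuits deliver the control to the target; they differ only in the final ancilla states and, as the experiments below show, in their noise exposure.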
\subsection{Experimental Design}
In designing CNOT chain paths for IBM's 20-qubit Poughkeepsie architecture, figure \ref{Fig:Chain Path} below illustrates the general layout for each experiment, showcasing the longest path of ancilla qubits tested for a single CNOT chain (touching all 20 qubits on the device). In addition to the maximum, all intermediate lengths were tested as well, keeping the control qubit fixed and moving the final target qubit along the path shown.
\begin{figure}
\caption{Example of a full CNOT chain path on the Poughkeepsie architecture. One experimental run studies the success rates for implementing a CNOT operation between the starting control qubit (C) and the final target qubit (T), tested for all chain lengths from $1$ to $19$.}
\label{Fig:Chain Path}
\end{figure}
The path shown in figure \ref{Fig:Chain Path} is one of four orientations tested experimentally. In order to achieve the best average result for CNOT chain success, as well as potentially identify any trends for certain qubits, three additional orientations were also tested, shown in figure \ref{Fig:Chain Path Alts}.
\begin{figure}
\caption{Additional orientations for the full 20-qubit CNOT chain experiment. Individual results for each orientation can be seen in figure \ref{Plt: Paths Compared}, as well as the combined average fidelity rates in figure \ref{Plt: Chain Circuits Avg}.}
\label{Fig:Chain Path Alts}
\end{figure}
\subsection{Experimental Results}
Presented here are the results experimentally gathered for the various CNOT chain experiments, testing each of the circuit types illustrated in figure \ref{Fig:Chain Circuits} across all four path orientations. In all of the coming results, we distinguish the measurement outcomes from each experiment into three categories based on the state of the control, target, and ancilla qubits. Firstly, any measurement outcome yielding either the control or target qubit in the $|0\rangle$ state is considered a failure. Next, we separate cases whereby the CNOT chain successfully yields both the control and target qubits in the $|1\rangle$ state into two groupings based on the final state of the ancilla qubits. The more lenient of the two metrics, which we call $f_1$, tracks the final state of only the control and target qubits, regardless of the ancilla qubits. Conversely, the second state fidelity metric, $f_2$, tracks the percentage of trials where \textit{all} qubits in the system are found to be in their theoretically desired state of either $|0\rangle$ or $|1\rangle$.
\begin{eqnarray}
f_1 \hspace{.15cm} &\equiv& \hspace{.15cm} \big{|}\hspace{.05cm} \langle \hspace{.02cm}11\hspace{.02cm}|\hspace{.02cm}\textrm{CT}\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \\
f_2 \hspace{.15cm} &\equiv& \hspace{.15cm} \big{|}\hspace{.05cm} \langle \hspace{.02cm}11\hspace{.02cm}|\hspace{.02cm}\textrm{CT}\hspace{.02cm} \rangle \otimes \langle \hspace{.02cm}\textrm{A}'\hspace{.02cm}|\hspace{.02cm}\textrm{A}\hspace{.02cm}\rangle^{\otimes N} \hspace{.05cm} \big{|}^2
\label{Eqn: CNOT Chain Fidelities}
\end{eqnarray}
In the equations shown above, the states $|\hspace{.02cm}\textrm{CT}\hspace{.02cm}\rangle$ and $|\hspace{.02cm}\textrm{A}\hspace{.02cm}\rangle$ represent the final measured states of the control, target, and intermediate ancilla qubits respectively in the computational basis. The state $|\hspace{.02cm}\textrm{A}'\hspace{.02cm}\rangle$ represents the desired final state for the ancilla qubits, either $|0\rangle$ or $|1\rangle$ according to the circuit types laid out in figure \ref{Fig:Chain Circuits}. Using the metrics $f_1$ and $f_2$, we present the first of two experimental findings regarding CNOT chains in figure \ref{Plt: Paths Compared}, demonstrating differences in fidelities as a consequence of the four path orientations shown in figures \ref{Fig:Chain Path} and \ref{Fig:Chain Path Alts}.
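As a sketch of how these two metrics can be computed from raw measurement counts, the function below tallies $f_1$ and $f_2$ over a dictionary of bitstrings; the bitstring convention (control first, then ancilla, then target) and the toy counts are assumptions for illustration, not the experimental data.

```python
def chain_fidelities(counts, desired_ancilla):
    """Compute f1 and f2 from measured bitstring -> shot-count pairs.
    Assumed bit order for this sketch: control, ancilla..., target."""
    shots = sum(counts.values())
    f1 = f2 = 0
    for bits, n in counts.items():
        control, ancilla, target = bits[0], bits[1:-1], bits[-1]
        if control == "1" and target == "1":
            f1 += n                       # control/target success only
            if all(a == desired_ancilla for a in ancilla):
                f2 += n                   # every qubit in its desired state
    return f1 / shots, f2 / shots

# Toy counts for a chain with two ancilla, desired ancilla state "0".
counts = {"1001": 700, "1011": 150, "0000": 100, "1101": 50}
f1, f2 = chain_fidelities(counts, "0")
print(f"f1 = {f1:.2f}, f2 = {f2:.2f}")   # f1 = 0.90, f2 = 0.70
```

By construction $f_2 \leq f_1$ for every experiment, since the stricter metric additionally demands that all ancilla qubits be found in their intended states.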
\begin{figure}
\caption{Comparison of the four CNOT chain path orientations shown in figures \ref{Fig:Chain Path} and \ref{Fig:Chain Path Alts}, for the case where the desired state of each ancilla qubits is $|0\rangle$ using $X$ gates (middle circuit in figure \ref{Fig:Chain Circuits}). }
\label{Plt: Paths Compared}
\end{figure}
When comparing the plots in figure \ref{Plt: Paths Compared}, it is clear that there are some noticeable differences in fidelity rates between the various paths, some as large as 10 - 20$\%$ after length-10 chains. Interestingly, a closer look at the data reveals distinct drops at certain chain lengths, particularly for paths $2$ (circles), $3$ (diamonds), and $4$ (squares). Upon further investigation into the location of each path's drop in fidelity, we find that they all occur at qubit 7 (see figure \ref{Fig:Chain Path}), which was later confirmed to have the lowest $T_1$ at the time of the experiments. Thus, these results demonstrate how the location of a single noisy qubit can lead to varying algorithmic success rates based on circuit configuration, a result that will be further demonstrated in later experiments. Next, we present CNOT chain results which showcase each of the circuit techniques in figure \ref{Fig:Chain Circuits} and their effectiveness at controlling the final state of the ancilla.
\begin{figure}
\caption{A comparison of fidelities for the three circuit types laid out in figure \ref{Fig:Chain Circuits}, averaged across all four path orientations. The top plots demonstrate each circuit type's ability to successfully deliver the distant CNOT operation, while the bottom plots show how reliably each circuit handles the final state of the ancilla qubits.}
\label{Plt: Chain Circuits Avg}
\end{figure}
The results shown in figure \ref{Plt: Chain Circuits Avg} represent average fidelities obtained from the four path orientations. Beginning with $f_1$, the data shows that the presence of additional $X$ gates for resetting ancilla qubits has no impact when compared to using no gates, demonstrated by the overlap of the circle and triangle plots. This agrees with what one would expect theoretically, given that chronologically the chain of CNOTs in both circuits is the same, and is therefore uninfluenced by any later gate operations on each ancilla qubit. Additionally, the total lengths of time for each circuit are nearly identical, only differing by a single $X$ gate. Conversely, because of the way in which the additional CNOT gates are staggered in the third circuit type (squares), we see a decrease in $f_1$ fidelity resulting from the fact that these extra gates ultimately delay the final measurement, effectively creating more time for decoherence errors on the control and target qubits.
Now turning to the results for each circuit's $f_2$ rates, the bottom plot in figure \ref{Plt: Chain Circuits Avg} demonstrates each circuit type's ability to produce reliable ancilla states. Using the case of no gates as a baseline, the data shows that the usage of $X$ gates can lead to a drastic improvement, while the use of CNOT gates has the opposite effect. When comparing the plots for $X$ gates versus no gates, the data shows a widening gap as a function of chain length. We can understand this trend as a result of $T_1$ decays, which become more problematic as the number of ancilla increases (more opportunities for collapse) as well as circuit length (more time that each qubit must maintain its excited state). Conversely, the usage of additional $X$ gates immediately after each CNOT remedies this problem by effectively minimizing the time each ancilla qubit is subject to $T_1$ relaxation, regardless of chain length.
Lastly, the results for the circuit type utilizing CNOT gates show the worst $f_2$ rates of the three. Conceptually, resetting the ancilla qubits in this way is plagued with several issues, the worst of which is increased circuit length. By requiring an entire second CNOT chain for resetting, ancilla qubits closer to the control must maintain their excited state for almost double that of the other two circuits. Additionally, the success of each ancilla being properly reset is conditioned on the one prior, which means that a single intermediate $T_1$ relaxation is enough to disrupt the entire resetting process. Although this circuit type has been shown to be the worst in both $f_1$ and $f_2$ fidelities, it is important to note that amongst the three circuit types it is the only one which can properly reset ancilla when the control qubit is in a superposition. Thus, despite the advantage in ancilla control provided from using $X$ gates, oftentimes the requirement of superposition in algorithms forces the use of additional CNOT gates.
\section{Qubit Geometry \& Algorithm Design}
Having now seen the extent to which a single CNOT operation can be reliably transmitted across ancilla qubits, the next question is how useful could such chains be for constructing larger circuits? With limited connectivity on future NISQ devices being the expected standard, near term quantum circuits will need to critically rely on ancilla qubits and various techniques for algorithm implementation. In this section we present several qubit geometries and circuit implementations for which we later present experimental results (see sections V and VI).
\subsection{3 Qubit Geometry}
Despite lacking any set of three directly interconnected qubits, the Poughkeepsie architecture possesses numerous combinations of three linearly connected qubits, as shown in figure \ref{Fig: 3 Geometry}. Using qubits in this way to implement 3-qubit algorithms has the advantage of avoiding the need for any additional ancilla qubits, but becomes problematic when 2-qubit gate operations are required between the outer two qubits. Compensating for this lacking connection requires additional 2-qubit gates through the central qubit, typically SWAP gates, which consequently increase circuit length and noise susceptibility.
\begin{figure}
\caption{Three linearly connected qubits.}
\label{Fig: 3 Geometry}
\end{figure}
When implementing 2-qubit gate operations between the two unconnected qubits in figure \ref{Fig: 3 Geometry}, the challenge lies in protecting the central qubit from additional noise while simultaneously ensuring that its quantum state is unaltered in the end. The standard approach for implementing a 2-qubit gate between qubits $1$ and $3$ in figure \ref{Fig: 3 Geometry} would be to use SWAP gates, which effectively interchange the quantum state held on two qubits. Through the use of SWAP gates, the quantum states held on distant qubits can be swapped onto qubits which possess a direct connection, allowing for the 2-qubit gate operation to occur.
\begin{figure}
\caption{(top left) Circuit implementation for a CNOT gate between qubits $1$ and $3$, which are both connected to qubit $2$, but do not share a direct connection between themselves (see figure \ref{Fig: 3 Geometry}). (top right) Gate construction for a SWAP gate, consisting of three alternating CNOT gates. (bottom) Circuit implementation for a control-$R_{\phi}$ gate between qubits $1$ and $3$ for the same geometry.}
\label{Fig: 3G 2Qubit Gates}
\end{figure}
The circuits shown in figure \ref{Fig: 3G 2Qubit Gates} are those tested in the coming experiments. While not always optimal in circuit depth and gate count, each circuit is guaranteed to accomplish two things: 1) the successful implementation of the 2-qubit operation between the distant qubits, and 2) the central qubit is always returned back to its initial quantum state. This second point comes at the cost of the second SWAP gate in each circuit, which is notably where one typically looks to optimize when possible.
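The SWAP decomposition in figure \ref{Fig: 3G 2Qubit Gates} (three alternating CNOT gates) can be verified directly with small matrices. The sketch below, written with NumPy purely as an illustration, composes the three CNOTs in the two-qubit computational basis and compares the result against SWAP.

```python
import numpy as np

# Two-qubit gates in the basis |q1 q2> = |00>, |01>, |10>, |11>.
CNOT_12 = np.array([[1, 0, 0, 0],   # control q1, target q2
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_21 = np.array([[1, 0, 0, 0],   # control q2, target q1
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Three alternating CNOTs compose to a SWAP (top right of the figure).
composed = CNOT_12 @ CNOT_21 @ CNOT_12
print(np.array_equal(composed, SWAP))  # True
```

This identity is why each SWAP in the 3-qubit geometry costs three native CNOT gates, making the second (state-restoring) SWAP a natural target for optimization.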
\subsection{4 Qubit Geometry}
If one wishes to avoid interchanging any computational qubits in order to compensate for missing connections, such as with the 3-qubit geometry, then one is forced to increase the size of the quantum system through the introduction of ancilla qubits. Figure \ref{Fig: 4 Geometry} below illustrates one such solution, using a single central ancilla qubit to supply all 2-qubit gate operations between the surrounding three computational qubits.
\begin{figure}
\caption{Four qubit geometry used for implementing 3-qubit algorithms throughout this study. The central ancilla qubit allows for the implementation of 2-qubit gates between any two computational qubits, without interfering with the state of the third qubit.}
\label{Fig: 4 Geometry}
\end{figure}
\begin{figure}
\caption{The six possible locations one can construct the 4-qubit configuration shown in figure \ref{Fig: 4 Geometry}}
\label{Fig: 4 Geometry Locations}
\end{figure}
The 4-qubit geometry shown above represents an alternative to the problem of the 3-qubit geometry discussed earlier, using a single ancilla in order to avoid any additional SWAP gates on the computational qubits. However, using a single ancilla for all 2-qubit gates requires that the state of the ancilla be properly reset after each usage in order to ensure the success of future gate operations. Figure \ref{Fig: 4G 2Qubit Gate} below shows two examples for resetting the ancilla qubit when implementing a CNOT gate between computational qubits, where the choice for resetting depends on the presence of superposition on the control qubit.
\begin{figure}
\caption{Example implementations of a CNOT gate between two computational qubits for the 4-qubit geometry (see figure \ref{Fig: 4 Geometry}). Depending on whether or not the control computational qubit possesses superposition determines whether an $X$ or additional CNOT gate is necessary for resetting the state of the ancilla.}
\label{Fig: 4G 2Qubit Gate}
\end{figure}
In comparing the 3 and 4-qubit geometries, the question becomes whether or not the additional ancilla qubit aids or hinders algorithmic success. By requiring that all 2-qubit gate operations traverse a single qubit, a heavy burden is placed on the 4-qubit geometry's ancilla in terms of proper state manipulation and minimizing noise. A similar burden is placed on the central qubit in the 3-qubit geometry, leading one to expect that a majority of algorithmic success is tied to these central qubits' coherence properties and gate fidelities.
\subsection{6 Qubit Geometries}
As the final qubit geometry size tested in this study, we present here two 6-qubit geometries, shown in figure \ref{Fig: 6 Geometry}, which are motivated by the respective strengths and weaknesses anticipated of the 3 and 4-qubit geometries already mentioned. Specifically, the left geometry in figure \ref{Fig: 6 Geometry} (`3 Chain') contains the same connectivity between the three computational qubits as the 3-qubit geometry, sharing the advantage of having two-out-of-three direct connections. Conversely, the right geometry (`1 Chains') uses one ancilla for all 2-qubit gate operations just as with the 4-qubit geometry, but avoids the issue of repeated single-ancilla use by placing a different ancilla between each pair of computational qubits. While not expected to outperform the other two configurations, the two geometries put forth in figure \ref{Fig: 6 Geometry} are designed to provide additional insight into the role of ancilla qubits in implementing algorithms on sparser architectures.
\begin{figure}
\caption{Six qubit geometries used for implementing 3-qubit algorithms throughout this study. (left) Qubit geometry which uses three ancilla qubits to supplement the lack of connection between qubits Q$_1$ and Q$_3$. (right) Qubit geometry whereby each computational qubit is separated by a single ancilla.}
\label{Fig: 6 Geometry}
\end{figure}
\begin{figure}
\caption{The two 6-qubit geometries experimentally tested on the Poughkeepsie architecture.}
\label{Fig: 6Q Loops}
\end{figure}
\section{CCNOT Experimental Results}
For the progression of quantum computing, the importance of the CCNOT (Control-Control-$X$ gate) operation, also known as a Toffoli gate, cannot be overstated, as numerous quantum algorithms critically rely on its usage. This includes algorithms which rely on oracles such as Grover's \cite{grover}, modular multiplication like Shor's \cite{shor2}, higher order control operators for Quantum Phase Estimation \cite{qpe}, and creating entanglement through mixing operators like in QAOA (Quantum Approximate Optimization Algorithm) \cite{qaoa}, just to name a few. Unlike the previous operations studied up to this point, however, the CCNOT gate is better thought of as a quantum circuit, consisting of several single and 2-qubit gates, shown in figure \ref{Fig: CCNOT Circuit}.
\begin{figure}
\caption{Quantum circuit for implementing a CCNOT gate operation \cite{toffoli2}. $Q_1$ and $Q_2$ serve as the control qubits, while $Q_3$ is the target, receiving the equivalent of an X gate if both $Q_1$ and $Q_2$ are in the $|1\rangle$ state.}
\label{Fig: CCNOT Circuit}
\end{figure}
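While the circuit of figure \ref{Fig: CCNOT Circuit} is given only as a figure, the standard textbook decomposition of the Toffoli gate into CNOT, $H$, and $T$ gates can be verified numerically. The following NumPy sketch (illustrative, not the authors' code) builds that standard decomposition, which may differ in detail from the figure, and checks it against the ideal CCNOT unitary:

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Tdg = T.conj().T
I2 = np.eye(2)

def on(q, g):
    """Embed single-qubit gate g on qubit q (0 = most significant of 3)."""
    ops = [I2, I2, I2]
    ops[q] = g
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(c, t):
    """8x8 CNOT with control c and target t (qubit indices 0..2)."""
    U = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> (2 - k)) & 1 for k in range(3)]
        if bits[c]:
            bits[t] ^= 1
        U[(bits[0] << 2) | (bits[1] << 1) | bits[2], i] = 1
    return U

# Textbook Toffoli decomposition, listed in time order (q0, q1 controls; q2 target)
gates = [on(2, H), cnot(1, 2), on(2, Tdg), cnot(0, 2), on(2, T),
         cnot(1, 2), on(2, Tdg), cnot(0, 2), on(1, T) @ on(2, T),
         on(2, H), cnot(0, 1), on(0, T) @ on(1, Tdg), cnot(0, 1)]

U = np.eye(8, dtype=complex)
for g in gates:
    U = g @ U  # later gates act after earlier ones

toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]  # flips the target iff both controls are |1>

print(np.allclose(U, toffoli))  # True
```

The count of six CNOT gates in this decomposition is what makes the limited connectivity of the hardware relevant: each CNOT must connect physically adjacent qubits.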
In order to implement the quantum circuit outlined in figure \ref{Fig: CCNOT Circuit}, one necessary condition is that the three qubits be interconnected, requiring CNOT gates between the target and control qubits, and the two controls as well. As already mentioned however, such connectivity does not exist on IBM's 20-qubit Poughkeepsie architecture, which means the circuit must be adapted to fit the various qubit geometries laid out in the previous section.
\subsection{3 Qubit Results}
In testing the CCNOT circuit using three linearly connected qubits, there are two unique configurations in which one can arrange the control and target qubits. Specifically, the target qubit can be either an outer (referred to as `CCT') or central (`CTC') qubit. Due to the design of the CCNOT circuit, which requires exactly two CNOT gates between all three qubits, both configurations result in the same circuit depth and gate count when using SWAP gates to supplement the missing outer connection. In total, the Poughkeepsie architecture possesses 32 possible 3-qubit combinations, all of which were tested using both configurations, and the results are shown in figure \ref{Fig: CCNOT 3 Qubits}. In each trial, the control and target qubits are prepared in the $|1\rangle$ and $|0\rangle$ states respectively before passing through the CCNOT circuit. Just as with the CNOT chain experiments, we are interested in the fidelity of the final measured state, whereby the target and both control qubits are all found to be in the $|1\rangle$ state.
\begin{eqnarray}
f_1 \hspace{.15cm} &\equiv& \hspace{.15cm} \big{|}\hspace{.05cm} \langle \hspace{.02cm}111\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2
\label{Eqn: CCNOT f1}
\end{eqnarray}
\begin{figure}
\caption{Highest fidelities found for the 32 possible 3-qubit combinations on the Poughkeepsie architecture, implementing the CCNOT circuit shown in figure \ref{Fig: CCNOT Circuit}. The orientation which produced the higher fidelity is indicated by color, blue for the case where the central qubit was the target (CTC) and red for when it was an outer qubit (CCT).}
\label{Fig: CCNOT 3 Qubits}
\end{figure}
Figure \ref{Fig: CCNOT 3 Qubits} shows the fidelity rates found across the 96 tested CCNOT circuit implementations (three unique locations for the target qubit per each of the 32 total combinations). As illustrated by the colors and numerical values, it is clear that no single combination of qubits or orientation is dominant in producing the best CCNOT fidelity. While certain combinations produced worse fidelities as a result of noisier qubits, the data suggests that on average one can expect a successful CCNOT gate implementation on the order of 50-60\%, with only a select few noisy qubits reducing these values to around 25-40\%.
\subsection{4 \& 6 Qubit Results}
When analyzing the results for the 4 and 6-qubit geometry implementations of the CCNOT circuit, the addition of ancilla qubits requires the tracking of the $f_2$ metric in addition to $f_1$. The interest is once again in how well each qubit configuration can reliably reset their respective ancilla qubits back to the $|0\rangle$ state through the use of either $X$ or CNOT gates.
\begin{eqnarray}
\textrm{4-Qubit:} \hspace{.3cm} f_2 \hspace{.1cm} &\equiv& \hspace{.1cm} \big{|}\hspace{.05cm} \langle \hspace{.02cm}111\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \otimes \langle \hspace{.02cm}0\hspace{.02cm}|\hspace{.02cm}\textrm{A}\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \\
\textrm{6-Qubit:} \hspace{.3cm} f_2 \hspace{.1cm} &\equiv& \hspace{.1cm} \big{|}\hspace{.05cm} \langle \hspace{.02cm}111\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \otimes \langle \hspace{.02cm}000\hspace{.02cm}|\hspace{.02cm}\textrm{A}_1 \textrm{A}_2 \textrm{A}_3\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \hspace{.5cm}
\label{Eqn: CCNOT f2 }
\end{eqnarray}
Beginning with the 4-qubit geometry, two cases for handling the resetting of the ancilla qubit were tested, corresponding to the two implementations shown in figure \ref{Fig: 4G 2Qubit Gate}. For each implementation, all six possible 4-qubit combinations on the Poughkeepsie architecture (see figure \ref{Fig: 4 Geometry Locations}) were experimentally tested, setting each of the three outer qubits as the target. The average results for each qubit combination are shown below in figure \ref{Fig: CCNOT 4 Qubits}.
\begin{figure}
\caption{$f_1$ (green) and $f_2$ (red) rates for the 4-qubit implementations of the CCNOT gate. The two sets of data correspond to using $X$ gates (left) for resetting the ancilla qubit, versus using additional CNOT gates (right).}
\label{Fig: CCNOT 4 Qubits}
\end{figure}
When looking at the results in figure \ref{Fig: CCNOT 4 Qubits}, it is clear that the use of $X$ gates for resetting the central ancilla qubit produces noticeably higher fidelities, both for $f_1$ and $f_2$. This is in agreement with the CNOT chain results from earlier (figure \ref{Plt: Chain Circuits Avg}), once again highlighting the cost in ancilla control when forced to implement CNOT gates to account for superposition. If we now compare these results to those of figure \ref{Fig: CCNOT 3 Qubits}, we find that the fidelities between the 3 and 4-qubit geometries using $X$ gates are comparable, with a slight edge going to the 3-qubit geometry. The closeness in the two results suggests that the use of an ancilla qubit, versus SWAP gates through a computational qubit, is a viable approach for CCNOT algorithm design. However, this viability is lost when superpositions must be accounted for, which are handled automatically by SWAP gates for the 3-qubit geometry, but require CNOT gates for the 4-qubit geometry.
Proceeding now to the 6-qubit geometries, figure \ref{Fig: CCNOT 6 Qubits} below shows the full results for the two implementations illustrated in figure \ref{Fig: 6 Geometry}, once again separated into the metrics $f_1$ and $f_2$. Overall, the data shows that the `3 Chain' circuit implementation yields comparable fidelity rates to the 4-qubit configurations using $X$ gates. However, the use of a single ancilla between each computational qubit shows a dramatic decrease in fidelity.
\begin{figure}
\caption{Fidelities $f_1$ and $f_2$ for the `3 Chain' (left) and `1 Chains' (right) circuits. Each bar represents one of the twelve possible qubit configurations, denoted by the qubit in each 6-qubit ring acting as the target qubit for the CCNOT action (see figures \ref{Fig: 6 Geometry} and \ref{Fig: 6Q Loops}).}
\label{Fig: CCNOT 6 Qubits}
\end{figure}
When comparing the two circuit designs, which only differ in the manner in which the ancilla qubits are distributed, it is important to note that both circuit implementations use the same total number of CNOT and $X$ gates, producing near identical circuit depths. Additionally, in both cases each ancilla qubit is called upon exactly twice, followed immediately by $X$ or CNOT gates for resetting back to the $|0\rangle$ state. These consistencies suggest that the results shown in figure \ref{Fig: CCNOT 6 Qubits} are then attributable to the \textit{way} in which the CNOT gates are distributed throughout the circuits.
More specifically, if we return to the results of the CNOT Chain experiment, and focus on the fidelities found for chains of $1$ and $3$ ancilla qubits, we can approximate the average $f_1$ fidelities to be $0.9$ and $0.85$ respectively. If we now compare the way in which each circuit requires CNOT chains in order to supplement missing connections, we find that the `3 Chain' configuration only needs two successful chains, while the `1 Chains' configuration is reliant on six. Using our example approximate fidelities, this means that we would expect the probability of success for each circuit to be $(0.9)^6$ and $(0.85)^2$ respectively, heavily favoring the `3 Chain' circuit design. Thus, in determining how best to arrange computational and ancilla qubits for algorithm design, the results shown above suggest that grouping computational qubits closer together, in favor of fewer but longer ancilla chains, will lead to better algorithmic success.
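The back-of-the-envelope comparison above can be reproduced directly; the chain fidelities $0.9$ and $0.85$ are the approximate values quoted from the CNOT chain experiment:

```python
# Approximate f1 fidelities from the CNOT chain experiment (values quoted in the text)
f_1ancilla = 0.90   # chain through 1 ancilla
f_3ancilla = 0.85   # chain through 3 ancillas

# '1 Chains': six 1-ancilla chains must all succeed; '3 Chain': two 3-ancilla chains
p_1chains = f_1ancilla ** 6
p_3chain = f_3ancilla ** 2

print(f"{p_1chains:.3f} vs {p_3chain:.3f}")  # 0.531 vs 0.722
```

Even though each individual 1-ancilla chain is more reliable, requiring six of them to succeed compounds the error faster than two longer chains.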
\section{Quantum Fourier Transformation}
Having just seen the varying degrees to which the Poughkeepsie architecture can handle CCNOT circuits, we now turn to another critical subroutine for quantum computing, the Quantum Fourier Transformation (QFT). In this section, we present experimental results which demonstrate the reliability with which one can successfully perform a 3-qubit QFT using the various qubit geometries outlined earlier. At its core, the QFT is the quantum equivalent to the Discrete Fourier Transformation, applied to a quantum state. The power of the QFT lies in its ability to apply up to $2^{N}$ unique phases across the various components of a quantum state, where $N$ is the number of qubits. The core element necessary to any successful QFT is the control-$R_{\phi}$ gate, which applies an arbitrary phase to a target qubit's $|1\rangle$ state, conditional on a control qubit.
\begin{eqnarray}
R_{\phi} \hspace{.02cm} |1\rangle \otimes ( \hspace{.04cm} \alpha \hspace{.02cm}|0\rangle + \beta \hspace{.02cm} |1\rangle \hspace{.04cm} ) \hspace{.1cm} = \hspace{.1cm} |1\rangle \otimes ( \hspace{.04cm} \alpha \hspace{.02cm}|0\rangle + e^{i\phi} \hspace{.02cm}\beta \hspace{.02cm}|1\rangle \hspace{.04cm} )
\label{Eqn: Control Phase Gate}
\end{eqnarray}
Just like the CCNOT circuit, the QFT requires full connectivity between all qubits. The standard quantum circuit for a 3-qubit QFT is shown below in figure \ref{Fig: QFT Circuit} (technically a QFT$^{\dagger}$ circuit, which we discuss in the coming section), which we adapt accordingly for the various qubit geometries. When physically implementing these 3-qubit QFT's, note that the true gate count for each control-$R_{\phi}$ gate includes additional CNOT and $R_{\phi}$ gates. As a result, the circuit depth and total gate count for the 3-qubit QFT turns out to be comparable to that of the CCNOT circuit, which in turn will provide some insight when comparing the success rates between the two.
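As a concrete check of this gate-count claim, one standard compilation of the control-$R_{\phi}$ gate uses two CNOTs and three single-qubit phase gates (a common identity; the hardware compiler may differ in detail):

```python
import numpy as np

def P(phi):
    """Single-qubit phase gate R_phi."""
    return np.diag([1, np.exp(1j * phi)])

CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

def crphi(phi):
    """Ideal controlled-R_phi on (control, target)."""
    return np.diag([1, 1, 1, np.exp(1j * phi)])

phi = np.pi / 3  # arbitrary test angle
# P(phi/2) on both qubits, then CNOT, P(-phi/2) on target, CNOT
built = CX @ np.kron(np.eye(2), P(-phi / 2)) @ CX @ np.kron(P(phi / 2), P(phi / 2))
print(np.allclose(built, crphi(phi)))  # True
```

Each control-$R_{\phi}$ thus costs two CNOTs on hardware, which is why the compiled QFT$^{\dagger}$ depth ends up comparable to that of the CCNOT circuit.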
\begin{figure}
\caption{QFT$^{\dagger}$ circuit for three qubits. The QFT$^{\dagger}$ shown above is the circuit tested on the IBM 20-qubit architecture, identical to the QFT circuit in both total gate count and circuit depth, differing only in gate order and phase values.}
\label{Fig: QFT Circuit}
\end{figure}
\subsection{Testing QFT$^{\dagger}$ With Phase Estimation}
In order to isolate and benchmark the success of the QFT$^{\dagger}$ circuit in figure \ref{Fig: QFT Circuit} in a way similar to the previous sections, one ideally needs an experiment whereby the effect of the QFT$^{\dagger}$ produces a single desirable final state. However, unlike the CCNOT operation whose effect is directly observable by means of the target qubit, the QFT$^{\dagger}$ is a more versatile quantum operation whose effect ranges widely based on the state of the qubits it is applied to. To this end, we quantify the fidelity of our QFT$^{\dagger}$ implementations in a manner analogous to the Quantum Phase Estimation Algorithm (QPE) \cite{qpe,book2}, whereby the effect of the final QFT$^{\dagger}$ leaves all of the qubits in a final state containing no superposition. By creating very particular superposition states just prior to the QFT$^{\dagger}$ operation, we are guaranteed to have theoretical $|0\rangle$ and $|1\rangle$ final states for the computational qubits, which we can then use to compute fidelities $f_1$ and $f_2$.
Quantum Phase Estimation is a quantum algorithm which uses a control-U operation, along with one of its eigenstates $|\mu \rangle$, in order to detect some unknown eigenphase $e^{i \theta}$. An example QPE is shown below in the top circuit of figure \ref{Fig: QPE Circuit}. Creating such a circuit is typically very challenging, as both the implementation of arbitrary control-U operators and their eigenstates require clever circuit design. For the purpose of our QFT benchmarking however, we apply the core idea of the QPE in a much simpler form, effectively achieving the states resulting from the control-U operations acting on $|\mu \rangle$ with only $R_{\phi}$ gates, illustrated by the bottom circuit in figure \ref{Fig: QPE Circuit}.
\begin{figure}
\caption{(top) Quantum circuit for a 3-qubit Quantum Phase Estimation Algorithm. (bottom) A circuit which mimics the effect of the control-U operations through the use of single qubit rotation gates.}
\label{Fig: QPE Circuit}
\end{figure}
By using single qubit rotation gates to initialize the computational qubits, we are able to prepare quantum states just prior to the QFT$^{\dagger}$ with high fidelities, minimizing any additional noise not caused by the QFT$^{\dagger}$ circuit. Additionally, the use of $R_{\phi}$ gates allows for the creation of a wider range of states than typically achievable through the use of control-U gates, which we in turn use for further insight into the viability of the QFT$^{\dagger}$ circuit in later experiments.
\subsection{Perfect Phase Detection}
For the case of a 3-qubit QFT$^{\dagger}$, there are exactly eight choices for $\phi$ such that the bottom circuit in figure \ref{Fig: QPE Circuit} will result in a final state containing no superposition. These eight values of $\phi$ span an even distribution from $0$ to $\frac{7\pi}{4}$, corresponding to the eight unique quantum states from $|000\rangle$ to $|111\rangle$. These eight states will serve as the desired final measurements for determining fidelities:
\begin{eqnarray}
f_1 &\equiv& \big{|}\hspace{.05cm} \langle \hspace{.02cm}\textrm{Q}_1' \textrm{Q}_2' \textrm{Q}_3'\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \\
\textrm{4-Q:} \hspace{.1cm} f_2 &\equiv& \big{|}\hspace{.05cm} \langle \hspace{.02cm}\textrm{Q}_1' \textrm{Q}_2' \textrm{Q}_3'\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \otimes \langle \hspace{.02cm}0\hspace{.02cm}|\hspace{.02cm}\textrm{A}\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \\
\textrm{6-Q:} \hspace{.1cm} f_2 &\equiv& \big{|}\hspace{.05cm} \langle \hspace{.02cm}\textrm{Q}_1' \textrm{Q}_2' \textrm{Q}_3'\hspace{.02cm}|\hspace{.02cm}\textrm{Q}_1 \textrm{Q}_2 \textrm{Q}_3\hspace{.02cm}\rangle \otimes \langle \hspace{.02cm}000\hspace{.02cm}|\hspace{.02cm}\textrm{A}_1 \textrm{A}_2 \textrm{A}_3\hspace{.02cm}\rangle \hspace{.05cm} \big{|}^2 \hspace{.7cm}
\label{Eqn: QFT Fidelities}
\end{eqnarray}
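That each of the eight evenly spaced phases collapses to a definite basis state under an ideal QFT$^{\dagger}$ can be checked with a short noiseless simulation (a sketch, not the hardware experiment): the state prepared by the bottom circuit of figure \ref{Fig: QPE Circuit} is $\frac{1}{\sqrt{8}}\sum_k e^{ik\phi}|k\rangle$, and applying the inverse Fourier matrix recovers $|m\rangle$ exactly when $\phi = \frac{m\pi}{4}$.

```python
import numpy as np

dim = 8  # 2^3 basis states for three qubits
# QFT matrix: F[j, k] = e^{2*pi*i*j*k/8} / sqrt(8); QFT^dagger is its conjugate transpose
j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
F = np.exp(2j * np.pi * j * k / dim) / np.sqrt(dim)

for m in range(dim):
    phi = 2 * np.pi * m / dim  # the 'perfect' phases 0, pi/4, ..., 7*pi/4
    state = np.exp(1j * phi * np.arange(dim)) / np.sqrt(dim)  # state before QFT^dagger
    out = F.conj().T @ state
    assert np.isclose(abs(out[m]), 1.0)  # QFT^dagger sends it exactly to |m>
print("all eight perfect phases detected")
```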
In the QFT$^{\dagger }$ fidelity results to come, the highest fidelity rates from the experiments of figures \ref{Fig: CCNOT 3 Qubits} - \ref{Fig: CCNOT 6 Qubits} were used to determine which qubits to experimentally test. Specifically, the three qubit combinations which yielded the highest fidelities for each geometry were tested and then averaged together. For the 3-qubit geometries, the top three qubit combinations for both control-target orientations were tested. For the 6-qubit geometries, only the `3 Chain' orientation was tested (preliminary results showed once again a significant decrease in fidelity rates for the `1 Chains' orientation). And finally, because the QFT$^{\dagger}$ circuit is always acting on a superposition state of the computational qubits, both the 4 and 6-qubit geometries require CNOT gates for resetting ancilla qubits.
\begin{figure}
\caption{(solid fill) $f_1$ fidelities found for each qubit geometry, demonstrating each geometry's ability to produce and measure the eight desired final states resulting from the QPE circuit (bottom circuit in figure \ref{Fig: QPE Circuit}). For the 4 and 6-qubit results, $f_2$ (dashed fill) fidelity rates are also shown, highlighting each respective geometry's ability to reliably reset ancilla qubits. }
\label{Fig: QFT Perfects}
\end{figure}
Beginning with $f_1$, the results shown in figure \ref{Fig: QFT Perfects} reveal that the 4-qubit geometry led to the overall highest QFT$^{\dagger}$ fidelities across all eight phases. In addition to the high $f_1$ rates, the accompanying high $f_2$ values suggest that one could also reliably perform further gate operations after the QFT$^{\dagger}$. Behind the 4-qubit geometry, we find that the 3 and 6-qubit geometries produced fidelities on the order of 55-65\% and 30-40\% respectively.
Based on the results from the CCNOT experiments, the higher fidelity rates of the 4-qubit geometry may come as a surprise at first glance. However, in analyzing the two top performing geometries and their circuit implementations, the key to the 4-qubit geometry's success lies in the ordering of the control phase gates. Specifically, the $-\frac{\pi}{2}$ and $-\frac{\pi}{4}$ control-$R_{\phi}$ gates which happen in succession, originating from the same computational qubit, allow for a slight optimization in the 4-qubit circuit.
\begin{figure}
\caption{Circuit implementations of a 3-qubit QFT$^{\dagger}$, subject to the connectivity restraints outlined in figures \ref{Fig: 3 Geometry} and \ref{Fig: 4 Geometry}. The blue and red underscores to each circuit highlight the gates which are the same (blue) and different (red) between the two circuits. In total, the 4-qubit geometry achieves the QFT$^{\dagger}$ operation with two fewer CNOT gates.}
\label{Fig: QFT 3 vs 4}
\end{figure}
As illustrated in figure \ref{Fig: QFT 3 vs 4}, the difference in implementation between the 3 and 4-qubit geometries boils down to the extra CNOT gates necessary to compensate for each configuration's limited connectivity. While the 3-qubit geometry requires two SWAP gates, for a combined total of six additional CNOT gates, the 4-qubit geometry only requires four. Typically each control gate would require two CNOTs for resetting, but since two of them originate from the same computational qubit with no gates in between, there is no need to reset the ancilla qubit back down to the $|0\rangle$ state after the first $-\frac{\pi}{2}$ gate. Thus, the results of figure \ref{Fig: QFT 3 vs 4} demonstrate that an ancilla qubit can potentially be used to optimize circuit depth and consequently improve algorithm success.
In addition to the qubit geometry discrepancies, a second interesting result emerging from the data reveals an alternating pattern in fidelities between phases. This pattern is present across all three qubit geometries, suggesting that this phenomenon is inherently linked to QPE itself. One possible explanation for this trend could be the complexity of the quantum state just prior to the QFT$^{\dagger}$ circuit. Specifically, the superposition states created from the even integer multiples of $\frac{\pi}{4}$ contain at most four unique amplitudes: $\pm\frac{1}{\sqrt{8}}$ and $\pm\frac{i}{\sqrt{8}}$, across the eight computational basis states. Conversely, the odd integer multiples contain four additional amplitudes: $\frac{\pm 1 \pm i}{4}$, producing superposition states where each basis state has a unique phase. Since these quantum states have more relative phases between the eight computational basis states, it is possible that they are more sensitive to noise and errors, leading to lower fidelity rates after the QFT$^{\dagger}$ circuit.
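The phase-count argument above can be made concrete by counting the distinct values of $e^{ik\phi}$ across the eight basis amplitudes of the pre-QFT$^{\dagger}$ state (a quick numerical check, not part of the original analysis):

```python
import numpy as np

def distinct_phases(phi, dim=8):
    """Number of distinct phase factors e^{i*k*phi} in the pre-QFT state."""
    return len(set(np.round(np.exp(1j * phi * np.arange(dim)), 8)))

counts = {m: distinct_phases(m * np.pi / 4) for m in range(8)}
print(counts)  # at most 4 distinct phases for even m, exactly 8 for odd m
```

The odd multiples of $\frac{\pi}{4}$ generate all eight eighth roots of unity, while the even multiples cycle through at most four values.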
\subsection{Continuous Phase Detection}
Following from the data trends revealed in the previous section, we now present experimental results which are motivated by a more realistic usage of the QPE Algorithm. Specifically, we present results which extend the data shown in figure \ref{Fig: QFT Perfects}, testing for intermediate values of $\phi$, ultimately attempting to detect phases which do not match up perfectly with one of the $2^N$ basis states created from the number of qubits being used. Detecting these `non-perfect' phases comes with an inherent probability of failure, even for a noiseless quantum computer, as the resulting final states from the QFT$^{\dagger}$ now contain superposition. Consequently, one expects lower fidelities in regions of $\phi$ between the $2^N$ perfect phases, with the lowest points being exactly halfway between each perfect phase (approximately 40\% for a noiseless 3-qubit QPE). Figure \ref{Fig: QFT Continuous} confirms this trend, illustrating fidelity swings of nearly 50\% for changes in $\phi$ as little as $\frac{\pi}{16}$.
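The quoted noiseless floor of roughly 40\% follows from the ideal QPE distribution: the probability of measuring the basis state nearest to $\phi$ is $\big|\frac{1}{8}\sum_{k=0}^{7} e^{ik(\phi - 2\pi m/8)}\big|^2$, where $m$ indexes the nearest perfect phase. A small sketch under the same noiseless assumption:

```python
import numpy as np

def nearest_state_prob(phi, n_qubits=3):
    """Noiseless QPE probability of measuring the basis state nearest to phi."""
    dim = 2 ** n_qubits
    m = int(np.rint(phi * dim / (2 * np.pi))) % dim  # nearest 'perfect' phase index
    amp = np.mean(np.exp(1j * np.arange(dim) * (phi - 2 * np.pi * m / dim)))
    return abs(amp) ** 2

print(round(nearest_state_prob(0.0), 3))        # 1.0 at a perfect phase
print(round(nearest_state_prob(np.pi / 8), 3))  # ~0.41 halfway between perfect phases
```

Halfway between two perfect phases the amplitude leaks into neighboring basis states, which is what produces the large fidelity swings seen in the data.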
\begin{figure}
\caption{ QPE fidelity rates for the 3 (green circles) and 4-qubit (blue triangles) geometries as a function of phase $\phi$. Each data point represents the measured percentage of states corresponding to the nearest `perfect phase' value for $\phi$ (see figure \ref{Fig: QFT Perfects}).}
\label{Fig: QFT Continuous}
\end{figure}
The $f_1$ rates illustrated in figure \ref{Fig: QFT Continuous} are in agreement with those found in the previous experiment, showing fidelities on the order of 60 to 75\% around the eight perfect phases. Additionally, we once again see the alternating peaks in success between the odd and even integers of $\frac{\pi}{4}$, now extended to the nearby regions of $\phi$ as well.
When comparing the data from figure \ref{Fig: QFT Continuous} to what one would expect from a noiseless quantum computer, the quantity of interest here is the way in which noise affects the full range of phase values. Specifically, since the anticipated results have less than $100\%$ theoretical fidelities, noise might impact these fidelities in one of two ways. Supposing one finds a $75\%$ fidelity for the perfect phases, would the presence of noise cause intermediate values of $\phi$ to similarly yield $75\%$ of their theoretical maximums, or simply result in a flat $25\%$ reduction across the board (down to lows of fully decohered states)? The results from figure \ref{Fig: QFT Continuous} confirm the impact of noise to be of the former case, showing that on average the measured $f_1$ rates for both geometries across the full range of $\phi$ are around 60 to 75\% of their theoretical values. In terms of NISQ Era algorithm design, this means that quantum algorithms which may rely on low probabilities are still viable, whereas noise of the latter case would be far more detrimental.
\section{Conclusions}
The experimental results in this paper have showcased various qualities of IBM's 20-qubit chip Poughkeepsie. In analyzing these results, it is important to keep in context the steadily improving technology of quantum computers, specifically superconducting qubits in this case. In the coming years, it is reasonable to expect quantities such as $T_1$ \& $T_2$ coherence times and gate fidelities to continually improve. In anticipation of better qubits however, the results from this study demonstrate inherent properties in algorithm design which go beyond qubit quality.
In testing the CNOT chains across all 20 qubits, the difference in fidelity rates between using $X$ gates versus CNOT gates for resetting ancilla qubits was very pronounced. This in turn demonstrates the potential for improving algorithm design when working with qubit geometries of limited connectivity, where knowledge of where and when qubits contain superposition in a circuit can be used to optimize ancilla qubit resetting. Additionally, the study of the various qubit geometries and their performance in implementing the CCNOT and QFT$^{\dagger}$ operations further showcased the challenges of circuit design with limited connectivity. When determining the best geometry of computational and ancilla qubits for implementing an algorithm, our results demonstrated pros and cons of various configurations, ultimately showing that for certain circuits the use of ancilla qubits can be used to potentially reduce total gate counts.
\section*{Acknowledgments}
We gratefully acknowledge support from the National Research Council Associateship Program. Special thanks to Dan Campbell for his numerous insightful talks throughout this project. We would also like to thank the IBM Quantum Experience team and all of their support. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of AFRL.
\subsection{Data Availability}
The data that support the findings of this study are available from the corresponding author ([email protected]) upon reasonable request.
\pagebreak
\end{document}
\begin{document}
\title{Measures and slaloms} \author[Piotr Borodulin--Nadzieja]{Piotr Borodulin-Nadzieja} \address{Instytut Matematyczny, Uniwersytet Wroc\l awski} \email{[email protected]} \thanks{The first author was partially supported by Polish National Science Center grant 2013/11/B/ST1/03596 (2014-2017).} \author[Tanmay Inamdar]{Tanmay Inamdar} \address{School of Mathematics, University of East Anglia} \email{[email protected]}
\date{7 April 2016}
\subjclass[2010]{03E35,03E17,03E75,28A60} \keywords{Suslin Hypothesis, Radon measure, measures on Boolean algebras, Martin's Axiom, cardinal coefficients, Suslinean spaces, random model, complemented subspaces}
\begin{abstract}
We examine measure-theoretic properties of spaces constructed using the technique of Todor\v{c}evi\'{c} from \cite[Theorem 8.4]{Todorcevic}. We show that the existence of strictly positive measures on such spaces depends on combinatorial properties of certain families of slaloms. As a corollary
we get that if $\mathrm{add}(\mathcal{N}) = \mathrm{non}(\mathcal{M})$ then there is a non-separable space which supports a measure and which cannot be mapped continuously onto $[0,1]^{\omega_1}$. Also, without any additional axioms we prove that
there is a non-separable growth of $\omega$ supporting a measure and that there is a compactification $L$ of $\omega$ with growth of such properties and such that the natural copy of $c_0$ is complemented in $C(L)$. Finally, we discuss examples of spaces not
supporting measures but satisfying quite strong chain conditions. Our main tool is a characterization due to Kamburelis (\cite{Kamburelis}) of Boolean algebras supporting measures in terms of their chain conditions in generic extensions by a measure algebra.\end{abstract}
\maketitle
\section{Introduction}
The study of the interplay between the countable chain condition and separability has been a constant source of many important results since the formulation of Suslin's Hypothesis. In general, it is hard to separate these properties assuming $\mathsf{MA}_{\omega_1}$ if we deal with compact spaces which are in some sense topologically small. For example, under $\mathsf{MA}_{\omega_1}$ neither linearly ordered nor first-countable spaces can be ccc and non-separable.
For many years the status of the following statement was unclear: \emph{every ccc compact space which cannot be mapped continuously onto $[0,1]^{\omega_1}$ is separable}. In \cite[Remark 8.7]{Todorcevic}, this is referred to as `the ultimate form' of Suslin's Hypothesis. Quite unexpectedly, in \cite{Todorcevic} Todor\v{c}evi\'{c} proved that it is inconsistent, i.e., there is a $\mathsf{ZFC}$ example of a ccc non-separable space which cannot be mapped continuously onto $[0,1]^{\omega_1}$.
In \cite{Pbn-Plebanek} the authors consider a weakening of the above assertion: \emph{every compact space supporting a measure which cannot be mapped continuously onto $[0,1]^{\omega_1}$ is separable.} A space $K$ supports a measure if there is a measure $\mu$ on $K$ such that $\mu(U)>0$ for every nonempty open set $U\subseteq K$. This is clearly a stronger condition than ccc and still weaker than separability. The results from \cite{Kunen-vanMill} imply that it does not hold if $\mathrm{cov}(\mathcal{N}_{\omega_1})=\omega_1$ and \cite{Pbn-Plebanek} contains a counterexample under $\mathsf{MA}$. It is still not known if this statement is consistent with $\mathsf{ZFC}$.
In this context a natural question is whether the space from \cite{Todorcevic} mentioned above supports a measure. Consistently, it does not, for example if $\mathrm{add}(\mathcal{N}) = \omega_1 <\mathrm{cov}(\mathcal{N}_{\omega_1})$ (see Section \ref{meas}). However, examining Todor\v{c}evi\'{c}'s space more carefully, we found that its measure-theoretic properties depend on the way it is constructed (so it makes more sense to speak about \emph{Todor\v{c}evi\'{c} spaces}). We prove that under $\mathrm{add}(\mathcal{N})=\mathrm{non}(\mathcal{M})$ we can modify Todor\v{c}evi\'{c}'s construction in such a way that it supports a measure (Theorem \ref{small}). It improves the result from \cite{Pbn-Plebanek} mentioned above.
Moreover, using similar techniques, we construct a $\mathsf{ZFC}$ example of a Boolean algebra supporting a measure which is not $\sigma$-centered and which can be embedded in $\mathcal{P}(\omega)/{\rm Fin}$ (Theorem \ref{growth}). Earlier, only consistent examples of such spaces were known (see \cite{Drygier-Plebanek15}). Recall that if a Boolean algebra is $\sigma$-centered, then it can be embedded in $P(\omega)/{\rm Fin}$, and by Parovi\v{c}enko's theorem (see \cite{Parovichenko}) the measure algebra (and in fact any Boolean algebra of the size of the continuum) can be embedded into $\mathcal{P}(\omega)/{\rm Fin}$ under $\mathsf{CH}$. On the other hand, a result of Dow and Hart (see \cite{Dow-Hart}) shows that under $\mathsf{OCA}$ the measure algebra cannot be embedded into $\mathcal{P}(\omega)/{\rm Fin}$, and in fact Selim has observed (see \cite{Selim}) that the same holds for any atomless Maharam algebra. Our result shows that a non-trivial piece of the measure algebra can be embedded in $\mathcal{P}(\omega)/{\rm Fin}$ in $\mathsf{ZFC}$.
Recently, several authors (\cite{Castillo}, \cite{Drygier-Plebanek}) considered the problem of when $C(K)$, where $K$ is a compactification of $\omega$, contains a copy of $c_0$ which is complemented in $C(K)$. If the natural copy of $c_0$ is complemented in $C(K)$, then $K\setminus \omega$ supports a measure, an observation which in \cite{Drygier-Plebanek} is attributed to Kubi{\' s}. In \cite{Drygier-Plebanek} the authors prove that under $\mathsf{CH}$ there is a compactification $K$ of $\omega$ such that $K\setminus \omega$ is non-separable, supports a measure, and the natural copy of $c_0$ in $C(K)$ is complemented. We prove that such a space exists in $\mathsf{ZFC}$ (Corollary \ref{c_0}).
Finally, we show in $\mathsf{ZFC}$ that Todor\v{c}evi\'{c}'s space can be constructed in such a way that it does not support a measure but satisfies quite strong chain conditions: $\sigma$-n-linkedness for every $n\in \omega$ and Fremlin's property (*). Considering the completion of the Boolean algebra of clopen subsets of this space, we obtain a complete Boolean algebra which possesses these properties and which does not support a measure. This raises the natural, though perhaps na\"{\i}ve, question of whether this algebra can provide another example of a non-measurable Maharam algebra. We show that forcing with this Boolean algebra adds a Cohen real and so it is not weakly distributive and thus cannot be a Maharam algebra.
Our main tool is Kamburelis' characterization of Boolean algebras supporting measures as Boolean algebras which can be made $\sigma$-centered by adding random reals (\cite{Kamburelis}). Thanks to this result and the nature of Todor\v{c}evi\'{c}'s construction, to check whether a Boolean algebra obtained in this way supports a measure, it is enough to investigate the destructibility of certain families of slaloms by random forcing. Some of the theorems mentioned above can be proved directly by using Kelley's characterization of Boolean algebras supporting measures (\cite{Kelley}). However, we decided to use the forcing language because this is how the results were obtained. Also, perhaps the facts concerning the destructibility of families of slaloms may be of independent interest.
\section{Notation and basic facts}
We use standard set theoretic notation. Let $\kappa$ be a cardinal number. Then by $\lambda_\kappa$ we denote the standard measure on $[0,1]^\kappa$ and by $\mathbb{M}_\kappa$ the \emph{measure algebra of type $\kappa$}, that is, the Boolean algebra $\mathrm{Bor}[0,1]^\kappa/_{\lambda_\kappa=0}$. We write $\mathbb{M}$ instead of $\mathbb{M}_1$, which we simply call the \emph{measure algebra}.
By a \emph{real number} we will mean an element of Baire space, $\omega^\omega$, or an element of some $\prod_{n\in \omega} S_n$, where $S_n \subseteq \omega$, the exact choice of which shall be clear from the context. If $S \subseteq \omega \times \omega$, then $S(n)$ will denote the horizontal section $\{m \colon (n,m) \in S\}$.
Most of the spaces which will appear in this article will be constructed as Stone spaces of some Boolean algebras. We will treat Boolean algebras as algebras of sets, and so we will use ``$\cup$'' to denote the join, ``$\emptyset$'' to denote the zero element, ``$\subseteq$'' to denote the Boolean order, and so on. If $\mathfrak{A}$ is a Boolean algebra, then $\mathfrak{A}^+ = \mathfrak{A}\setminus \{\emptyset\}$. A family $\mathcal{P}$ is a \emph{$\pi$-base} of a Boolean algebra $\mathfrak{A}$ if $\mathcal{P}\subseteq \mathfrak{A}^+$ and for each $A\in \mathfrak{A}^+$ there is $P\in \mathcal{P}$ such that $P\subseteq A$. For a family $\mathcal{G}$ by ${\rm alg}(\mathcal{G})$ we denote the Boolean algebra generated by $\mathcal{G}$.
Recall that a Boolean algebra $\mathfrak{A}$ is $\sigma$-centered if $\mathfrak{A}^+=\bigcup_{n<\omega} \mathcal{C}_n$, where each $\mathcal{C}_n$ is centered (that is, each finite subset of $\mathcal{C}_n$ has non-empty intersection). A family $\mathcal{A}\subseteq \mathfrak{A}$ is independent if for every pair of finite disjoint subfamilies $\mathcal{A}_0$, $\mathcal{A}_1 \subseteq \mathcal{A}$ we have \[ \bigcap_{A\in \mathcal{A}_0} A \cap \bigcap_{A\in \mathcal{A}_1} A^c \ne \emptyset. \] A Boolean algebra is $\sigma$-centered if and only if its Stone space is separable, and it contains an uncountable independent family if and only if its Stone space maps continuously onto $[0,1]^{\omega_1}$.
Recall that a Boolean algebra $\mathfrak{A}$ has the countable chain condition, abbreviated to `ccc', if any collection of pairwise disjoint elements from $\mathfrak{A}^+$ is at most countable.
If $\mathcal{I}$ is an ideal of subsets of $K$, then
\[ \mathrm{add}(\mathcal{I}) = \min\{|\mathcal{A}|\colon \mathcal{A}\subseteq \mathcal{I}, \ \bigcup \mathcal{A}\notin \mathcal{I}\}, \]
\[ \mathrm{non}(\mathcal{I}) = \min\{|X|\colon X\subseteq K, \ X\notin \mathcal{I}\}, \]
\[ \mathrm{cov}(\mathcal{I}) = \min\{|\mathcal{A}|\colon \mathcal{A}\subseteq \mathcal{I}, \ \bigcup \mathcal{A} = K\}. \] By $\mathcal{N}$ we will mean the $\sigma$-ideal of Lebesgue null sets, by $\mathcal{M}$, the $\sigma$-ideal of meager sets, and by $\mathcal{N}_{\omega_1}$ the $\sigma$-ideal of $\lambda_{\omega_1}$-null sets. By ${\rm Fin}$ we will denote the ideal of finite subsets (of a set which should be clear from the context). We shall also need the standard fact that $\mathrm{add}(\mathcal{N})$ is an uncountable regular cardinal.
The \emph{bounding number} is defined by
\[ \mathfrak{b} = \min\{|\mathcal{F}| \colon \mathcal{F} \subseteq \omega^\omega, \ \forall g\in \omega^\omega \ \exists f\in \mathcal{F} \ f\nleq^* g\}. \] Here $f \leq^* g$ means $f(n)\leq g(n)$ for all but finitely many $n$'s. Similarly, $A\subseteq^* B$ will denote the fact that $A\setminus B$ is finite. We shall need the standard fact that $\mathrm{add}(\mathcal{N}) \leq \mathfrak{b}$.
By a measure on a Boolean algebra we understand a \emph{finitely-additive} measure. Note that every such measure can be uniquely extended to a $\sigma$-additive Radon measure on the Stone space.
Recall that a space $K$ has \emph{countable $\pi$-character} if each $x\in K$ has a local $\pi$-base (i.e., a family $\mathcal{U}_x$ of nonempty open sets such that each neighbourhood of $x$ contains an element of $\mathcal{U}_x$) which is countable. A space $K$ is \emph{scatteredly-fibered} if there is a continuous function $f\colon K\to M$, where $M$ is a metric space, such that each fiber $f^{-1}[x]$ is a scattered space (i.e. it cannot be mapped continuously onto $[0,1]^\omega$). Note that no scatteredly-fibered space can be mapped continuously onto $[0,1]^{\omega_1}$. Otherwise, one of the fibers could be mapped continuously onto $[0,1]^{\omega_1}$ (by Tkachenko's theorem, see \cite{Tkachenko}). Similarly, one can define \emph{linearly-fibered} spaces.
A \emph{compactification} of a space $X$ is a compact space $K\supseteq X$ such that $X$ is dense in $K$. The space $K\setminus X$ is called a \emph{growth} of $X$. We will consider compactifications of $\omega$ (with the discrete topology). If $\mathfrak{A}$ is a subalgebra of $\mathcal{P}(\omega)$ containing ${\rm Fin}$, then its Stone space is a compactification of $\omega$. Similarly, Stone spaces of subalgebras of $\mathcal{P}(\omega)/{\rm Fin}$ are growths of $\omega$.
We are going to abuse notation in many different ways. In particular, we will not always distinguish between Borel sets and the elements of $\mathbb{M}$. Also, we will not distinguish in notation between elements of a Boolean algebra and the clopen subsets of its Stone space, or between measures on Boolean algebras and their extensions to the Stone spaces. We hope this will not cause any confusion.
For proofs of the standard facts of Stone duality and Boolean algebras, see \cite{BAhandbook}; for set theory, see \cite{Jech}; for set theory of the reals, see \cite{Bartoszynski}; for Banach space theory, see \cite{Kalton06}.
\section{Todor\v{c}evi\'{c}'s construction}
In this short section we will explain some details of the construction from \cite[Theorem 8.4]{Todorcevic}.
For $g \in \omega^ \omega$ let $\mathcal{S}_g$ be the set of $g$-\emph{slaloms}, i.e.,
\[ \mathcal{S}_g = \{S\subseteq \omega\times\omega \colon |S(n)|< g(n)\mbox{ for each }n\}. \] Let $h\in \omega^\omega$ be given by $h(n)=2^n$. We write $\mathcal{S}$ for $\mathcal{S}_h$ (any increasing function $g$ such that $\sum_n \frac{1}{g(n)}$ converges would be equally good). Similarly, \emph{a slalom} will mean an $h$-slalom.
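To fix intuitions, here is a simple example (ours, not part of the construction): the set
\[ S = \{(n,0)\colon n\geq 1\} \]
is an $h$-slalom, since $|S(0)|=0<1$ and $|S(n)|=1<2^n$ for $n\geq 1$; on the other hand, any $S$ with $S(m) = 2^m$ for some $m$ fails the defining inequality at $m$, and so is not an $h$-slalom.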
Let $\Omega = \{(S,n)\colon n\in\omega, \ S\in \mathcal{S}, \ S\subseteq (n\times 2^n)\}$. For each $A\subseteq \omega\times\omega$ define \[ T_A = \{(T,n)\in \Omega \colon A\cap (n\times 2^n) \subseteq T\}. \] For $(S,n)\in \Omega$ let \[ T_{(S,n)} = \{(T,m)\in \Omega \colon m\geq n, T\cap (n\times 2^n) = S\}. \]
It will be convenient to collect some simple observations concerning $T_A$'s.
\begin{fact} For all $A$, $B\subseteq \omega\times\omega$ such that $A(n)$, $B(n)\subseteq 2^n$ for each $n$ we have
\begin{enumerate}
\item $A\in \mathcal{S}$ if and only if $T_A$ is infinite,
\item $T_{(A\cup B)} = T_A \cap T_B$,
\item if $A\subseteq B$, then $T_B \subseteq T_A$.
\end{enumerate} \end{fact}
If $\mathcal{A}\subseteq \mathcal{P}(\omega\times \omega)$, then let $\mathfrak{T}_\mathcal{A}$ be the subalgebra of $\mathcal{P}(\Omega)$ generated by \[\{T_A \colon A\in \mathcal{A}\} \cup \{T_{(S,n)}\colon (S,n)\in\Omega\}.\]
Finally, let $K_\mathcal{A}$ be the Stone space of $\mathfrak{T}_\mathcal{A}/{\rm Fin}$.
We say that a family $\mathcal{F}\subseteq \omega^\omega$ is \emph{localized} by $\mathcal{S}_g$ if there is $S\in \mathcal{S}_g$ such that $f\subseteq^* S$ (that is, for all but finitely many $n \in\omega$ we have that $f(n) \in S(n)$) for every $f\in \mathcal{F}$. Similarly, a family $\mathcal{A} \subseteq \mathcal{S}_g$ is \emph{$\subseteq^*$-bounded}, or simply, \emph{bounded}, if there is an $S \in \mathcal{S}_g$ such that $A \subseteq^* S$ (that is, for all but finitely many $n \in\omega$ we have that $A(n) \subseteq S(n)$) for every $A \in \mathcal{A}$.
\begin{thm}\cite[Theorem 2.3.9]{Bartoszynski} \label{Bartoszynski}
Let $g\in \omega^\omega$ be such that $\lim_n g(n)=\infty$. Then
\[ \mathrm{add}(\mathcal{N}) = \min\{|\mathcal{F}|\colon \mathcal{F}\subseteq \omega^\omega, \mathcal{F}\mbox{ is not localized by }\mathcal{S}_g\}. \] \end{thm}
Let
\[ \mathcal{Z} = \{S\subseteq \omega\times \omega\colon S\in \mathcal{S}\mbox{ and } \lim_n \frac{1}{2^n}|S(n)| = 0 \}. \]
In \cite{Kunen-Fremlin} a subfamily of $\omega^\omega$ which cannot be localized by $\mathcal{S}$ was used to construct a family of elements of $\mathcal{Z}$ which is not $\subseteq^*$-bounded in $\mathcal{S}$. Note that in \cite{Kunen-Fremlin} and in \cite{Todorcevic} the authors considered $\mathcal{S}_g$ for $g(n)=n$ instead of $\mathcal{S}$ but it does not make any difference for their results.
\begin{thm}\cite[Theorem 4]{Kunen-Fremlin}\label{kunen-fremlin}
There is a $\subseteq^*$-chain $\{A_\alpha\colon \alpha<\mathrm{add}(\mathcal{N})\} \subseteq \mathcal{Z}$ such that for every $S\in\mathcal{S}$ there is $\alpha<\mathrm{add}(\mathcal{N})$ such that $A_\alpha \nsubseteq^* S$. \end{thm}
Let $\{A_\alpha\colon \alpha<\mathrm{add}(\mathcal{N})\}$ be a family given by Theorem \ref{kunen-fremlin}. Denote \[ \mathcal{A} = \{A\in \mathcal{S}\colon A=^* A_\alpha\mbox{ for some }\alpha<\mathrm{add}(\mathcal{N})\}.\] \begin{thm}\label{todor} \cite[Theorem 8.4]{Todorcevic} $K_\mathcal{A}$ has the following properties:
\begin{enumerate}
\item it is homeomorphic to a growth of $\omega$,
\item it is non-separable,
\item it is ccc,
\item it is linearly-fibered and scatteredly-fibered,
\item it has countable $\pi$-character.
\end{enumerate}
\end{thm}
\begin{proof}
For the proof see \cite[Theorem 8.4]{Todorcevic}. We will only present a slightly different proof that $\mathfrak{T}_\mathcal{A}/{\rm Fin}$ is not $\sigma$-centered (i.e. that $K_\mathcal{A}$ is not separable).
Suppose towards a contradiction that $\mathfrak{T}_\mathcal{A}/{\rm Fin} = \bigcup_{n<\omega} \mathcal{C}_n$ and each $\mathcal{C}_n$ is centered. Then, since $\mathrm{add}(\mathcal{N})$ is a regular uncountable cardinal, there is an $n$ such that \[\{\alpha\colon \exists A\in \mathcal{C}_n, A=^* A_\alpha\}\mbox{ is cofinal in }\mathrm{add}(\mathcal{N}).\] For
simplicity we will just assume that \[ \{T_{A_\alpha}\colon \alpha<\mathrm{add}(\mathcal{N})\}/{\rm Fin} \subseteq \mathcal{C}_n.\]
Of course $\bigcup_\alpha A_\alpha \notin \mathcal{S}$ and so there is $m\in \omega$ such that $|(\bigcup_\alpha A_\alpha)(m)|\geq 2^m$. Pick distinct $k_0, \dots, k_{2^m-1} \in (\bigcup_\alpha A_\alpha)(m)$ and for each $i<2^m$ let $\alpha_i$ be such that $k_i \in A_{\alpha_i}(m)$. Then
\[ \bigcap_{i< 2^m} T_{A_{\alpha_i}} = \{(T,n) \in \Omega \colon (\bigcup_{i < 2^m}A_{\alpha_i}) \cap (n \times 2^n) \subseteq T\} \]
does not contain any $(T,n)$ such that $n >m$ (such a $T$ would satisfy $|T(m)|\geq 2^m$, contradicting $T\in \mathcal{S}$), and hence is finite, a contradiction. \end{proof}
It will be convenient to record the following observation.
\begin{remark}\label{T^*}
Denote by $\mathfrak{T}^*_\mathcal{A}$ the Boolean subalgebra of $\mathfrak{T}_\mathcal{A}$ generated only by $\{T_A\colon A\in \mathcal{A}\}$. The above proof shows that $\mathfrak{T}^*_\mathcal{A}/{\rm Fin}$ is not $\sigma$-centered. \end{remark}
\section{Random destructible families of slaloms}\label{destruct}
The main ingredient of the construction from Theorem \ref{todor} is a family of slaloms. In this section we will investigate combinatorial properties of certain families of slaloms which in Section \ref{applications} will be translated to properties of resulting spaces.
Let \[ \mathcal{I} = \{S\subseteq \omega \times \omega\colon S(n)\subseteq 2^n \mbox{ for each }n\mbox{ and } \sum_n \frac{1}{2^n}|S(n)| < \infty \}. \] Notice that if $f\colon \{(n,i)\colon i< 2^n, n\in\omega\} \to \omega$ is the natural enumeration function (sending $\{n\}\times 2^n$ to $[2^n, 2^{n+1})$ for each $n$), then $I\in \mathcal{I}$ if and only if $f[I]\in \mathcal{I}_{1/n}$, where
\[ \mathcal{I}_{1/n} = \{A\subseteq \omega\colon \sum_n \frac{|A\cap [2^n, 2^{n+1})|}{2^n} < \infty\}, \] i.e., $\mathcal{I}_{1/n}$ is the classical summable ideal on $\omega$ (see for example \cite{Farah}). Let \[ \mathcal{W} = \mathcal{I} \cap \mathcal{S}. \] In other words, $\mathcal{W}$ consists of elements of $\mathcal{I}$ which miss at least one point of $\{n\}\times 2^n$ for each $n$.
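For a concrete illustration (our examples, not used later): if $W(0)=\emptyset$ and $W(n)=\{0\}$ for every $n\geq 1$, then
\[ \sum_n \frac{1}{2^n}|W(n)| = \sum_{n\geq 1}\frac{1}{2^n} = 1 < \infty \]
and $|W(n)|<2^n$ for every $n$, so $W\in \mathcal{I}\cap \mathcal{S} = \mathcal{W}$. On the other hand, the set $S$ given by $S(0)=\emptyset$ and $S(n)=\{0,\dots,2^{n-1}-1\}$ for $n\geq 1$ belongs to $\mathcal{S}$ but not to $\mathcal{I}$, since every term of the corresponding series equals $\frac{1}{2}$.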
For $g\in \omega^\omega$ equip the space $\mathcal{X}_g = \prod g(n)$ with the product topology (so that $\mathcal{X}_g$ is homeomorphic to the Cantor set). Let $\lambda$ be the standard measure on $\mathcal{X}_g$, so in particular\[ \lambda(\{f\in \mathcal{X}_g\colon f(n)=i\}) = \frac{1}{g(n)}\] if $i< g(n)$. Recall that $h \in \omega^\omega$ is given by $h(n)= 2^n$. Let $\mathcal{X} = \mathcal{X}_h$ and let $A_g = \{f\in \mathcal{X}\colon \exists^\infty n \ f(n)=g(n)\}$.
Before we proceed, we make the simple observation that if $f \in \mathcal{X}$, then \[ \{(n,f(n))\colon n\in \omega, n >0\}\in \mathcal{W},\] and also, if $S \in \mathcal{S}$ is such that $S(n) \subseteq 2^n$ for every $n$, then there is an $f \in \mathcal{X}$ and a $T \in \mathcal{S}$ such that $S \subseteq T$ and $T(n)= 2^n \setminus \{f(n)\}$ for every $n$. We shall use these observations several times in what follows.
\begin{prop}[folklore] \label{non}
There is a family $\mathcal{F}\subseteq \mathcal{X}$ of size $\mathrm{non}(\mathcal{M})$ which is not localized by $\mathcal{S}$. \end{prop}
\begin{proof}
First, notice that for each $g\in \mathcal{X}$ the set $A_g$ is comeager. Indeed, let $A^n_g = \{f\in \mathcal{X}\colon f(n)=g(n)\}$ for $g\in \mathcal{X}$. Of course, each $A^n_g$ is open and $\bigcup_{n>m} A^n_g$ is dense for each $m\in \omega$. But
\[ A_g = \bigcap_m \bigcup_{n>m} A^n_g. \] Let $\{f_\alpha\colon \alpha<\mathrm{non}(\mathcal{M})\} \subseteq \mathcal{X}$ be a family witnessing $\mathrm{non}(\mathcal{M})$. Then for each $g\in \mathcal{X}$ there is an $\alpha$ such that $f_\alpha \in A_g$ and so $f_\alpha(n)=g(n)$ for infinitely many $n$.
The family $\{f_\alpha\colon \alpha<\mathrm{non}(\mathcal{M})\}$ is not localized by $\mathcal{S}$, because for every $S\in \mathcal{S}$ there is $g_S\in \mathcal{X}$ such that $g_S(n)\notin S(n)$ for each $n$. Hence, there is an $\alpha$ such that $f_\alpha(n) = g_S(n)$ for infinitely many $n$. So, for each $S\in \mathcal{S}$ there is an $\alpha<\mathrm{non}(\mathcal{M})$ such that $\{n\colon f_\alpha(n)\notin S(n)\}$ is infinite. \end{proof}
Now, as in \cite[Theorem 4]{Kunen-Fremlin}, we will use a set of reals as above to find a $\subseteq^*$-chain in $\mathcal{W}$ which is not $\subseteq^*$-bounded in $\mathcal{S}$. The proof is essentially the same as there, with some minor modifications, but we include it here for the sake of completeness.
\begin{thm}\label{kunen-fremlin2} Assume $\mathrm{add}(\mathcal{N}) = \mathrm{non}(\mathcal{M})$. There is a $\subseteq^*$-chain $\{A_\alpha\colon \alpha<\mathrm{add}(\mathcal{N})\} \subseteq \mathcal{W}$ such that for every
$S\in\mathcal{S}$ there
is $\alpha<\mathrm{add}(\mathcal{N})$ such that $A_\alpha \nsubseteq^* S$. \end{thm}
\begin{proof}
Let $\mathcal{F} = \{f_\alpha\colon \alpha<\mathrm{add}(\mathcal{N})\}$ be as in Proposition \ref{non}.
Let $A_0 = f_0\cap([1,\infty)\times \omega)$ and assume that we have constructed $A_\alpha$'s for $\alpha<\beta$.
For each $\alpha<\beta$ fix a function $g_\alpha \colon \omega\to \omega$ such that
\[ \sum_{i\geq g_\alpha(n)} \frac{1}{2^i}|A_\alpha(i)| < \frac{1}{2^n}. \]
As $\beta<\mathrm{add}(\mathcal{N})\leq \mathfrak{b}$, there is a function $g\colon \omega\to \omega$ which is strictly increasing and which $\leq^*$-dominates
$\{g_\alpha \colon \alpha<\beta\}$. For each $\alpha< \beta$, fix $m_\alpha$ such that $g(n)\geq g_\alpha(n)$ for each $n \geq m_\alpha$.
Define $F_\alpha\colon \omega \to [\omega\times \omega]^{<\omega}$ such that $$ F_\alpha(n) = \begin{cases} A_\alpha \cap ([g(n), g(n+1))\times \omega) \mbox{ if }n\geq m_\alpha, \\
\emptyset \mbox{ otherwise.} \end{cases} $$ Now, since $[\omega\times\omega]^{<\omega}$ is countable and $\beta<\mathrm{add}(\mathcal{N})$, by Theorem \ref{Bartoszynski} applied to the space $([\omega\times\omega]^{<\omega})^\omega$ we see that there is an $f$-slalom $\Phi \subseteq \omega \times [\omega\times\omega]^{<\omega}$, for $f\in \omega^\omega$ given by $f(n) =n+1$, which localizes all of the $F_\alpha$'s. That is, \begin{enumerate}
\item $\{n\colon F_\alpha(n) \notin \Phi(n)\}$ is finite for every $\alpha<\beta$,
\item $|\{I\colon (n,I)\in \Phi\}|\leq n$ for each $n$. \end{enumerate} Additionally, throwing out some elements if needed, we can assume that
\begin{enumerate}
\item[(3)] for each $(n,I)\in \Phi$ there is $\alpha<\beta$ such that $F_\alpha(n) = I$.
\end{enumerate}
The last condition implies that whenever $(n,I)\in \Phi$, then $I\subseteq [g(n),g(n+1))\times \omega$ and $\sum_{i\geq g(n)} \frac{1}{2^i}|I(i)| < \frac{1}{2^n}$. Also, if $I$ is such that $(n,I)\in \Phi$ and $(k,l)\in I$, then $(k,l)\in A_\alpha$ for some $\alpha<\beta$, and therefore, as we will see at the end of the proof, there is a $\gamma \leq \alpha$ such that $l= f_\gamma(k)$.
Let \[ A = \bigcup\{I\colon \exists n \ (n,I)\in \Phi\}. \] Notice that \[ \sum_{g(n)\leq i < g(n+1)} \frac{1}{2^i}|A(i)| < \frac{n}{2^n} \] and since $\sum_n \frac{n}{2^n} = 2$, we have that $A\in \mathcal{W}$. Moreover, for each $\alpha<\beta$ there is $m\geq m_\alpha$ such that $(n,F_\alpha(n))\in \Phi$ for every $n\geq m$. So, $A_\alpha \subseteq A\cup ([0,g(m)]\times \omega)$ and it follows that $A_\alpha \subseteq^* A$.
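For the record, the elementary computation behind $\sum_n \frac{n}{2^n} = 2$: differentiating the geometric series $\sum_{n\geq 0} x^n = \frac{1}{1-x}$ and multiplying by $x$ gives
\[ \sum_{n\geq 0} n x^n = \frac{x}{(1-x)^2}, \]
which at $x=\frac{1}{2}$ evaluates to $2$.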
Now, it is easy to see that there is a $k< \omega$ such that $(A\cup f_\beta) \cap ([k,\infty)\times \omega) \in \mathcal{W}$. Put
\[ A_\beta = (A\cup f_\beta) \cap ([k,\infty)\times \omega). \]
We have now finished the construction. To see that $\{A_\alpha\colon \alpha<\mathrm{add}(\mathcal{N})\}$ is not $\subseteq^*$-bounded by any slalom in $\mathcal{S}$, notice that the family $\mathcal{F}$ was chosen so as not to be localized by any slalom in $\mathcal{S}$, and every function from $\mathcal{F}$ is $\subseteq^*$-contained in some $A_\alpha$ (indeed, $f_\alpha \subseteq^* A_\alpha$ for every $\alpha< \mathrm{add}(\mathcal{N})$); hence the chain inherits this property.
Clearly \[\bigcup_{\alpha<\mathrm{add}(\mathcal{N})} A_\alpha \subseteq
\bigcup_{\alpha<\mathrm{add}(\mathcal{N})} f_\alpha,\]
so $A_\alpha(n) \subseteq 2^n$ for each $n$ and $\alpha<\mathrm{add}(\mathcal{N})$.
\end{proof}
\begin{remark}\label{non-cov} Let
\[ \kappa = \min\{|\mathcal{D}|\colon \mathcal{D}\subseteq \mathcal{W}, \neg \exists S\in \mathcal{S} \ \forall D\in \mathcal{D} \ D\subseteq^* S\}. \] The reader may notice that Proposition~\ref{non} amounts to a proof that $\kappa \leq \mathrm{non}(\mathcal{M})$, and that Theorem~\ref{kunen-fremlin2} can actually be proved from the assumption that $\mathrm{add}(\mathcal{N}) = \kappa$.
In fact, there is a better upper bound for $\kappa$ (than $\mathrm{non}(\mathcal{M})$). Recall that if $\mathcal{I}$ is an ideal on $\omega$, then $\mathrm{cov}^*(\mathcal{I})$ is the minimal size of a subfamily of $\mathcal{I}$ such that for every infinite $X\subseteq \omega$ there is an element of the family intersecting $X$ on an infinite set (see e.g. \cite{Hrusak07}). In this setting $\kappa$ is the minimal size of a subfamily of $\mathcal{I}_{1/n}$ such that for each subset of $\omega$ intersecting each interval $[2^n,2^{n+1})$ at least once, there is an element of this family intersecting it infinitely many times. Clearly then, $\kappa\leq \mathrm{cov}^*(\mathcal{I}_{1/n})$.
It is also not hard to see that $\mathrm{cov}(\mathcal{N}) \leq \kappa$. Indeed, for $W\in \mathcal{W}$ let $A_W = \{f\in \mathcal{X}\colon \exists^\infty n \ f(n)\in W(n)\}$. By the Borel-Cantelli Lemma, $\lambda(A_W)=0$ for each $W\in \mathcal{W}$. If $\mathcal{F}\subseteq \mathcal{W}$ is not bounded by any slalom, then each $f\in \mathcal{X}$ is in some $A_F$, $F\in \mathcal{F}$ (since we can in particular consider the slalom which on every $n$ is exactly $2^n\setminus \{f(n)\}$). Hence, if $\mathcal{F}$ witnesses $\kappa$, then the family $\{A_F\colon F\in \mathcal{F}\}$ covers $\mathcal{X}$, and hence has size at least $\mathrm{cov}(\mathcal{N})$.
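Spelled out, the Borel--Cantelli estimate used above reads as follows: for $W\in \mathcal{W}$,
\[ \lambda\big(\{f\in \mathcal{X}\colon f(n)\in W(n)\}\big) = \frac{|W(n)|}{2^n} \quad\mbox{and}\quad \sum_n \frac{|W(n)|}{2^n} < \infty, \]
so the set $A_W$ of points belonging to infinitely many of these sets is $\lambda$-null.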
In fact, if $\mathrm{cov}(\mathcal{N})<\mathfrak{b}$, then $\kappa = \mathrm{cov}(\mathcal{N})$ (see \cite[Theorem 2.2]{Bartoszynski88}). It seems likely that consistently $\mathrm{cov}(\mathcal{N})<\kappa$, but we were not able to prove it. \end{remark}
Assume that $\mathcal{F}\subseteq \omega^\omega$ is not localized by $\mathcal{S}_g$. We say that $\mathcal{F}$ is $g$-\emph{destructible} by a forcing poset $\mathbb{P}$ if \[ \Vdash_\mathbb{P} ``\check{\mathcal{F}} \mbox{ is localized by }\dot{\mathcal{S}_g}". \] Similarly, if $\mathcal{A}$ is a family not bounded by any element of $\mathcal{S}_g$, then we say that it is $g$-\emph{destructible} by $\mathbb{P}$ if \[ \Vdash_\mathbb{P} ``\check{\mathcal{A}} \mbox{ is bounded by }\dot{\mathcal{S}_g}". \] As before \emph{destructible} means $h$-destructible ($h(n)=2^n$).
We will fix some notation for the rest of the article. For $n>0$, $k< 2^n$ let $I^n_k = \{f\in\mathcal{X}\colon f(n)=k\}$. We will consider the measure algebra $\mathbb{M}$ in the following incarnation: $\mathbb{M} = \mathrm{Bor}(\mathcal{X})/_{\lambda=0}$. Define an $\mathbb{M}$-name $\dot{S}$ for a subset of $\omega\times \omega$ in the following way: \[ \llbracket k\in \dot{S}(n) \rrbracket = \mathcal{X}\setminus I^n_k. \] Clearly, $\dot{S}$ is an $\mathbb{M}$-name for a slalom. In fact, \[ \Vdash_\mathbb{M} ``\exists \dot{f}\in \dot{\mathcal{X}} \ \dot{S}(n) = 2^n \setminus \{\dot{f}(n)\}", \] and $\dot{f}$ is a name for a random real.
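A small sanity check (ours): $\dot{S}$ is indeed forced to be a slalom with $\dot{S}(n)\subseteq 2^n$. For each $n>0$ the sets $I^n_k$, $k<2^n$, partition $\mathcal{X}$, so
\[ \bigcap_{k<2^n} \llbracket k\in \dot{S}(n) \rrbracket = \bigcap_{k<2^n} \big(\mathcal{X}\setminus I^n_k\big) = \emptyset, \]
and hence every condition forces $\dot{S}(n)\ne 2^n$, i.e. $|\dot{S}(n)|<2^n$.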
We will prove that the family $\mathcal{W}$ is destructible by $\mathbb{M}$.
\begin{prop} \label{E-destructible} For every $W\in \mathcal{W}$ \[ \Vdash_\mathbb{M} ``\check{W} \subseteq^* \dot{S}" \] \end{prop} \begin{proof}
Fix a $W\in \mathcal{W}$ and a $p\in
\mathbb{M}$ of positive measure, and let $\varepsilon>0$ be such that $\lambda(p)>\varepsilon$. Take $n$ such that $\sum_{i>n} \frac{1}{2^i} |W(i)| < \varepsilon$. Clearly,
\[ \sum_{i>n} \lambda( \bigcup_{k \in W(i)} I^i_k ) <\varepsilon \] and so if \[ q = \bigcup_{i>n} \bigcup_{k\in W(i)} I^i_k, \] then $\lambda(q)<\varepsilon<\lambda(p)$. So we finish by noticing that \[ \emptyset \ne p \setminus q \Vdash ``\forall i>n \ \check{W}(i) \subseteq \dot{S}(i)". \] \end{proof}
\section{Applications}\label{applications}
\subsection{Non-separable growths of $\omega$ supporting a measure}\label{meas}
First, we are going to apply the results from the previous section to construct some non-separable ccc compact spaces. We will use the following theorem due to Kamburelis:
\begin{thm} \cite[Proposition 3.7]{Kamburelis}\label{kamburelis}
A Boolean algebra $\mathfrak{A}$ supports a measure if and only if there is a cardinal $\kappa$ such that $\Vdash_{\mathbb{M}_\kappa}$ ``$\check{\mathfrak{A}}$ is $\sigma$-centered''. \end{thm}
We will need the following fact.
\begin{prop}\label{pi-base} Assume that $\mathcal{B} \subseteq \mathcal{S}$ is closed under finite unions (as long as they belong to $\mathcal{S}$). Then the family $\{T_B\cap T_{(T,n)}\colon B\in \mathcal{B}, (T,n)\in \Omega\}/{\rm Fin} \setminus
\{\emptyset\}$ forms a $\pi$-base of $\mathfrak{T}_\mathcal{B}/{\rm Fin}$. \end{prop} \begin{proof}
See Claim 1 of \cite[Theorem 8.4]{Todorcevic}. \end{proof}
\begin{thm} \label{main}
If $\mathcal{B}$ is closed under finite unions (as long as they belong to $\mathcal{S}$), and $\mathcal{B}$ is destructible by some $\mathbb{M}_\kappa$, then the Boolean algebra $\mathfrak{T}_\mathcal{B}/{\rm Fin}$ supports a measure. \end{thm} \begin{proof}
Assume that $\mathcal{B}$ is destructible by $\mathbb{M}_\kappa$. Let $V$ be the ground model and let $G$ be $V$-generic for $\mathbb{M}_\kappa$. We shall show that $\mathfrak{T}_\mathcal{B}/{\rm Fin}$ is $\sigma$-centered in $V[G]$, which, by Theorem~\ref{kamburelis}, will let us finish. But in fact we can get away with even less: if some $\pi$-base of $\mathfrak{T}_\mathcal{B}/{\rm Fin}$ is $\sigma$-centered, then so is the whole algebra (given a countable partition of the $\pi$-base into centered sets, assign each element of $(\mathfrak{T}_\mathcal{B}/{\rm Fin})^+$ to the least $n$ such that it contains a member of the $n$-th centered set; each resulting class is centered, since finite intersections are witnessed below in the $\pi$-base). And naturally the $\pi$-base that we have in mind is the one furnished by Proposition~\ref{pi-base}.
We work in $V[G]$. We know that there is some $S\in \mathcal{S}$ (note that here we are applying the \emph{formula} for $\mathcal{S}$, so this might be strictly larger than $\mathcal{S}^V$) such that for every $B \in \mathcal{B}$ (here on the other hand we are considering $\mathcal{B}$ as a \emph{set} from the ground model) we have $B \subseteq^* S$.
Let $\mathcal{D} = \{D\in \mathcal{S}\colon S=^*D\}$.
For $D\in \mathcal{D}$ and $(T,n)\in \Omega$ such that $T_D \cap T_{(T,n)}$ is infinite, let
\[ \mathcal{C}^D_{(T,n)} = \{T_B \cap T_{(T,n)}\in \mathfrak{T}_{\mathcal{B}}\colon B \in \mathcal{B}, B\subseteq D\},\]
where we point out that if $B \subseteq D$, then $T_B \supseteq T_D$, so if $T_D \cap T_{(T,n)}$ is infinite, then since this set is contained in $T_B\cap T_{(T,n)}$, the latter set is infinite too.
Clearly, for each such $D$ and $(T,n)$, the family $\mathcal{C}^D_{(T,n)}/{\rm Fin}$ is centered, and since we have that for every $B \in \mathcal{B}$ there is a $D \in \mathcal{D}$ such that $B \subseteq D$, we see that the collection of such $\mathcal{C}^D_{(T,n)}/{\rm Fin}$ is a countable covering of the $\pi$-base given to us by Proposition~\ref{pi-base} into centered sets, so we are done. \end{proof}
\begin{thm}\label{small} Assume $\mathrm{add}(\mathcal{N})=\mathrm{non}(\mathcal{M})$. There is a non-separable growth of $\omega$ supporting a measure, which has countable $\pi$-character, and which is scatteredly- and linearly-fibered. In particular, it does not map continuously onto $[0,1]^{\omega_1}$. \end{thm}
\begin{proof}
Let $\mathcal{A}$ be the closure under finite modifications (as long as they belong to $\mathcal{S}$) of a family $(A_\alpha)_{\alpha<\mathrm{add}(\mathcal{N})}\subseteq \mathcal{W}$ as given to us by Theorem \ref{kunen-fremlin2}. It is easy to see that since $\mathcal{A}$ is obtained from a $\subseteq^*$-chain by taking finite modifications (so long as they belong to $\mathcal{S}$), we have that $\mathcal{A}$ is closed under taking finite unions (so long as they belong to $\mathcal{S}$). By Proposition \ref{E-destructible} the family $\mathcal{A}$ is
destructible by $\mathbb{M}$, so by Theorem \ref{main} the space $K_\mathcal{A}$ supports a measure.
That $K_\mathcal{A}$ satisfies the other properties listed in
the statement can be seen in the same way as in Theorem \ref{todor} (i.e., as in \cite[Theorem 8.4]{Todorcevic}), using in particular the fact that $(A_\alpha)_{\alpha<\mathrm{add}(\mathcal{N})}$ is not bounded by any element of $\mathcal{S}$. \end{proof}
The axiom $\mathrm{add}(\mathcal{N})=\mathrm{non}(\mathcal{M})$ above can be replaced by a milder assumption, see Remark \ref{non-cov}.
Note that if $\mathrm{add}(\mathcal{N}) = \omega_1$ and $\mathrm{cov}(\mathcal{N}_{\omega_1})>\omega_1$ (or more generally, $\mathrm{add}(\mathcal{N}) = \kappa < \mathrm{cov}(\mathcal{N}_\kappa)$), then a space like in the above theorem cannot be constructed using Theorem \ref{kunen-fremlin2}. The reason is that $\mathrm{cov}(\mathcal{N}_{\omega_1})>\omega_1$ implies that all Boolean algebras of size $\omega_1$ and supporting a measure are $\sigma$-centered (see \cite[Lemma 3.6]{Kamburelis}). We do not know the answer to the following question.
\begin{prob}
Is it consistent that there is no non-separable space supporting a measure which cannot be mapped continuously onto $[0,1]^{\omega_1}$? \end{prob}
If the answer is negative, then perhaps one can construct a $\mathsf{ZFC}$ example using the techniques presented in this article for some appropriate family $\mathcal{F}\subseteq \mathcal{W}$ which is $\subseteq^*$-unbounded in $\mathcal{S}$. The most difficult part is to find a reason why the resulting space does not map continuously onto $[0,1]^{\omega_1}$. In Todor\v{c}evi\'{c}'s construction (and our take on it), the $\subseteq^*$-linearity of $\mathcal{A}$ makes the resulting spaces linearly fibered, and then Tkachenko's theorem yields this property. Also, note that if $\mathrm{non}(\mathcal{N})=\omega_1$ and $\mathrm{cov}(\mathcal{N}_{\omega_1})>\omega_1$, then such a space cannot have countable $\pi$-character (see \cite[Theorem 5.5]{Pbn-Plebanek}).
\begin{remark}\label{ms} The space of Theorem \ref{small} satisfies a slightly stronger property than the lack of a continuous mapping onto $[0,1]^{\omega_1}$. Namely, it only carries separable measures. Recall that a measure $\mu$ is \emph{separable} if there is a countable family $\mathcal{A}$ of measurable sets such that for every measurable $E$ \[ \inf\{\mu(E \triangle A)\colon A\in \mathcal{A}\} = 0. \] Every space which can be mapped continuously onto $[0,1]^{\omega_1}$ carries a non-separable measure, and the reverse implication does not hold in general (see \cite{Fremlin}). The fact that the space of Theorem \ref{small} does not carry a non-separable measure follows directly from \cite[Theorem 3.1]{Drygier} which says that scatteredly-fibered spaces only carry separable measures. \end{remark}
Denote by $\mathcal{E}$ the $\sigma$-ideal on $2^\omega$ generated by the closed sets of measure zero. In \cite{Drygier-Plebanek} the authors proved that if $\mathrm{cf}(\mathrm{cov}(\mathcal{E})^\omega)< \mathfrak{b}$, then there is a non-separable growth of $\omega$ supporting a measure. Using the above technology we can prove that such a space exists in $\mathsf{ZFC}$. Moreover, in contrast to the space of Theorem \ref{small}, we can demand that the space is quite big in the combinatorial sense, that is, that it maps continuously onto $[0,1]^\mathfrak{c}$.
\begin{thm}\label{growth} There is a non-separable growth $K$ of $\omega$ which supports a measure and which can be mapped continuously onto $[0,1]^\mathfrak{c}$. \end{thm}
\begin{proof}
Consider $\mathcal{W}$ as in the previous section. It is clear from the definition that $\mathcal{W}$ is closed under finite unions (so long as they belong to $\mathcal{S}$). As before, it follows that $K_\mathcal{W}$ supports a measure. We will show that it is not separable. Towards a contradiction, assume that it is, so in particular we get a countable covering $\{T_W\colon W\in \mathcal{W}\} = \bigcup_n \mathcal{C}_n$ where each $\mathcal{C}_n$ is centered.
Therefore, for each $n$ the set $C_n = \bigcup \{W\colon T_W\in \mathcal{C}_n\}$ belongs to $\mathcal{S}$ (see the proof of Theorem \ref{todor}), and clearly $W \subseteq C_n$ for every $W$ such that $T_W\in \mathcal{C}_n$. Fix a surjection $j\colon \omega\to\omega$ with infinite preimages and let $f\in \mathcal{X}$ be such that $f(k)\not\in C_{j(k)}(k)$ for each $k$ (which is possible, since $|C_{j(k)}(k)|<2^k$). Clearly, $f\cap([1,\infty)\times \omega)\in \mathcal{W}$, but for every $n$ there is $k\geq 1$ with $f(k)\notin C_n(k)$, so $f\cap([1,\infty)\times \omega)$ is not contained in $C_n$ for any $n$, a contradiction.
We will finish the proof by showing that $\mathfrak{T}_\mathcal{W}/{\rm Fin}$ contains an independent family of size $\mathfrak{c}$, which clearly suffices.
Let $\{X_\alpha \colon \alpha < \mathfrak c\}$ be a family of subsets of $\omega$ which are representatives of an independent family in $\mathcal{P}(\omega)/{\rm Fin}$ (see \cite{Kantorovich}, \cite{Geschke}). Note that this in particular implies that all of them are infinite. Let $S \in \mathcal{W}$ be such that for every $n>1$, $|S(n)| \geq 2$ (and $S(0)=S(1)=\emptyset$). Also, for each $n>1$, let $Z^n_0, Z^n_1$ be non-empty disjoint subsets of $S(n)$. For each $\alpha < \mathfrak c$, we shall define $S_\alpha \in \mathcal{W}$ as follows:
$$ S_\alpha(n) = \begin{cases} Z^n_1 \mbox{ if }n\in X_\alpha, \\ Z^n_0 \mbox{ otherwise.} \end{cases} $$
It is clear that each $S_\alpha$ is contained in $S$ (and hence is in $\mathcal{W}$) and is infinite. The following claim will then complete the proof of the theorem.
\begin{claim} The family $\{ T_{S_\alpha} \colon \alpha < \mathfrak c\}/{\rm Fin}$ is an independent family of $\mathfrak{T}_\mathcal{W}/{\rm Fin}$. \end{claim} \begin{why} Let $\alpha_1, \ldots, \alpha_m, \beta_1,\ldots, \beta_n$ be pairwise distinct ordinals less than $\mathfrak c$. We need to show that
\[|\bigcap_{1\leq i \leq m}T_{S_{\alpha_i}}\cap \bigcap_{1\leq j\leq n}(T_{S_{\beta_j}})^c| =\aleph_0.\] Now, since $\{X_\alpha \colon \alpha < \mathfrak c\}/{\rm Fin}$ is an independent family, we can find an infinite $Y\subseteq \omega$ such that \[Y \subseteq \bigcap_{1\leq i \leq m}X_{\alpha_i}\cap \bigcap_{1\leq j\leq n}(X_{\beta_j})^c.\] Then, let $T\subseteq \omega \times \omega$ be defined as follows:
$$ T(n) = \begin{cases} Z^n_1 \mbox{ if }n\in Y, \\ S(n) \mbox{ otherwise.} \end{cases} $$
Notice that $T$ is infinite, and also, since $T \subseteq S$, that $T \in \mathcal{W}$. Also, for $1 \leq i \leq m$, $S_{\alpha_i} \subseteq T$, since the only $n < \omega$ for which $T(n) \ne S(n)$ are the $n \in Y$, in which case $T(n) = Z^n_1 = S_{\alpha_i}(n)$ since $Y \subseteq X_{\alpha_i}$. But also, since $Y \subseteq (X_{\beta_j})^c$ for each $1 \leq j \leq n$, we have that for $n \in Y$, $T(n) \cap S_{\beta_j}(n) = \emptyset$, with the latter set being non-empty.
It follows that if $l< \omega$ is the least element of $Y$, then for every $k > l$, \[(T \cap (k\times 2^k), k) \in \bigcap_{1\leq i \leq m}T_{S_{\alpha_i}}\cap \bigcap_{1\leq j\leq n}(T_{S_{\beta_j}})^c,\] thus yielding that the latter set is infinite and finishing the proof of the claim. \end{why} \end{proof}
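The case analysis behind the Claim can be checked mechanically on a finite fragment. The following Python sketch (all concrete sets are toy illustrations, not part of the construction) verifies that with $Z^n_0, Z^n_1$ disjoint, the slalom $T$ contains each $S_{\alpha_i}$ and is disjoint from each $S_{\beta_j}$ on levels in $Y$:

```python
# Finite sanity check of the case-split defining T: with Z^n_0, Z^n_1
# disjoint, T contains S_alpha and avoids S_beta on the levels in Y.
levels = range(2, 8)
Z0 = {n: {0} for n in levels}           # Z^n_0
Z1 = {n: {1} for n in levels}           # Z^n_1, disjoint from Z^n_0
S = {n: Z0[n] | Z1[n] for n in levels}  # the ambient slalom S

def slalom(X):                          # S_alpha built from X_alpha
    return {n: (Z1[n] if n in X else Z0[n]) for n in levels}

X_alpha, X_beta = {2, 3, 5, 7}, {4, 6}  # toy independent representatives
Y = {3, 5}                              # Y inside X_alpha minus X_beta
T = {n: (Z1[n] if n in Y else S[n]) for n in levels}

S_alpha, S_beta = slalom(X_alpha), slalom(X_beta)
contains_alpha = all(S_alpha[n] <= T[n] for n in levels)
avoids_beta = all(not (T[n] & S_beta[n]) for n in Y)
print(contains_alpha, avoids_beta)
```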
\begin{remark}\label{T^*2} Actually, the above proof shows that the algebra $\mathfrak{T}^*_\mathcal{W}/{\rm Fin} = {\rm alg}(\{T_W\colon W\in \mathcal{W}\})/{\rm Fin}$ is not $\sigma$-centered (cf. Remark \ref{T^*}). \end{remark}
\begin{remark} After we carried out the above construction Tomasz \.{Z}uchowski presented another example of a Boolean algebra $\mathfrak{A}$ which supports a
measure $\mu$, which is not $\sigma$-centered and
such that there is an embedding $\varphi\colon \mathfrak{A} \to \mathcal{P}(\omega)/{\rm Fin}$ (see \cite{Zuchowski}). His construction is quite different from ours and has the additional property that $\varphi$ transfers $\mu$ to the density $d$ (i.e., $\mu(A) = d(\varphi(A))$ for each $A\in
\mathfrak{A}$). \end{remark}
\subsection{$c_0$-complementedness}
A closed subspace $Y$ of a Banach space $X$ is \emph{complemented in} $X$ if there is a projection $p\colon X\to X$ such that $p[X]=Y$. Since many Banach spaces have a copy of $c_0$ as a subspace, the question of which of these copies are complemented is a natural one and was considered by many authors. One of the most important results in this topic is that of Sobczyk: if $X$ is separable, then each copy of $c_0$ in $X$ is complemented (\cite{Sobczyk}). On the other hand, a Banach space $X$ is Grothendieck (i.e., $X^*$ does not contain weakly$^*$ convergent sequences which are not weakly convergent) if and only if no copy of $c_0$ in $X$ is complemented (see \cite{Cembranos84}).
If $K$ is a compactification of $\omega$, then the space \[ \{f\in C(K)\colon f(x)=0 \mbox{ for }x\in K\setminus \omega\} \] forms a copy of $c_0$ in $C(K)$. We will call it the \emph{natural copy of} $c_0$ in $C(K)$. In \cite{Drygier-Plebanek} the authors discuss when this natural copy is complemented in $C(K)$. If $K$ is metrizable, then $C(K)$ is separable and so, by Sobczyk's theorem, every copy of $c_0$ in $C(K)$ is complemented. The following result implies that there are many compactifications $K$ of $\omega$ such that the natural copy of $c_0$ is not complemented in $C(K)$.
\begin{lem}\cite[Lemma 3.1]{Drygier-Plebanek}\label{drygier}
Let $\mathfrak{A}$ be a subalgebra of $\mathcal{P}(\omega)$ containing all finite subsets and let $K$ be its Stone space. The following conditions are equivalent:
\begin{enumerate}
\item the natural copy of $c_0$ is complemented in $C(K)$;
\item there is a sequence of measures $(\nu_n)_n$ on $\mathfrak{A}$ such that each $\nu_n$ vanishes on finite subsets and
\[ \lim_{n\to\infty} \big(\nu_n(A) - \delta_n(A)\big) = 0 \]
for every $A\in\mathfrak{A}$.
\end{enumerate} \end{lem} Lemma \ref{drygier} implies, in particular, the following fact, which is attributed to Kubi{\'s} in \cite{Drygier-Plebanek}.
\begin{cor}\label{sp-measure} If $K$ is a compactification of $\omega$ and the natural copy of $c_0$ is complemented in $C(K)$, then $K\setminus \omega$ supports a measure. \end{cor} \begin{proof} If $\mathfrak{A}\subseteq P(\omega)$
is such that $K$ is its Stone space and $(\nu_n)_n$ is a sequence as in Lemma \ref{drygier}, then $\nu$ given by \[ \nu(A) = \sum_n \frac{1}{2^{n+1}}\nu_n(A), \] for $A\in \mathfrak{A}$, is a measure on $\mathfrak{A}$ which vanishes exactly on the finite sets. Therefore, $\nu$ induces a strictly positive measure on $\mathfrak{A}/{\rm Fin}$ which can be extended further to a (strictly positive) measure on $K\setminus \omega$. \end{proof}
In \cite[Theorem 5.1]{Drygier-Plebanek} the authors show that under $\mathsf{CH}$ there is a non-separable space $K$, a compactification of $\omega$ with a growth supporting a measure, such that the natural copy of $c_0$ in $C(K)$ is complemented. We will show that such a space exists in $\mathsf{ZFC}$.
The space will be similar to $K_\mathcal{W}$ but this time the generators of the form $T_{(S,n)}$ will be a little bit cumbersome. So, we will consider the algebra $\mathfrak{T}^*_\mathcal{W}$, instead of $\mathfrak{T}_\mathcal{W}$. We make the simple observation that this algebra contains every finite subset of $\Omega$. We also note that if $W \in \mathcal{W}$, then $T_W$ is infinite, and that if $W \not = W'$ are in $\mathcal{W}$, then $T_W \triangle T_{W'}$ is infinite.
Let $\dot{U}$ be an $\mathbb{M}$-name for a slalom. Define a function $f_{\dot{U}}\colon \{T_W\colon W\in \mathcal{W}\}/{\rm Fin} \to \mathbb{M}$ in the following way: \[ f_{\dot{U}}([T_W]) = \llbracket \check{W} \subseteq \dot{U} \rrbracket. \] Here $[T_W] = \{A\subseteq \Omega \colon A=^* T_W\}$. We will show that it can be extended to a homomorphism.
\begin{prop}\label{homo}
For each $\mathbb{M}$-name $\dot{U}$ for a slalom the function $f_{\dot{U}}$ can be extended to a homomorphism $\varphi_{\dot{U}} \colon \mathfrak{T}^*_\mathcal{W}/{\rm Fin} \to \mathbb{M}$. \end{prop}
\begin{proof}
Since $\{T_W \colon W\in \mathcal{W}\}/{\rm Fin}$ generates $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$, by Sikorski's theorem it is enough to check that if $A_0, \dots , A_k, B_0, \dots, B_l \in \mathcal{W}$ and \[ C =
T_{A_0}\cap \dots \cap T_{A_k} \cap T_{B_0}^c \cap \dots \cap T_{B_l}^c\] is finite, then \[ C' = f_{\dot{U}}([T_{A_0}]) \cap \dots \cap f_{\dot{U}}([T_{A_k}]) \cap (f_{\dot{U}}([T_{B_0}]))^c \cap \dots \cap (f_{\dot{U}}([T_{B_l}]))^c = \emptyset.\]
First, we will look at two particular cases:
\begin{enumerate}
\item There is $n$ such that $|\bigcup_{i\leq k} A_i(n)|=2^n$. Then \[ C' \subseteq \bigcap_{i\leq k} \llbracket \check{A}_i(n) \subseteq \dot{U}(n)\rrbracket = \llbracket \bigcup_{i\leq k} \check{A}_i(n) \subseteq
\dot{U}(n)\rrbracket = \emptyset,\]
since $\dot{U}$ is a name for a slalom.
\item There is $j\leq l$ such that $B_j \subseteq \bigcup_{i\leq k} A_i$. Then \[ C' \subseteq \llbracket \bigcup_{i\leq k} \check{A}_i \subseteq \dot{U}\rrbracket \cap \llbracket \check{B}_j \nsubseteq \dot{U}\rrbracket = \emptyset.\]
\end{enumerate}
Assume now that neither of the above is satisfied. In this case $(\bigcup_{i\leq k} A_i \cap (n\times 2^n), n)\in \Omega$ for each $n$ and
\[ C \supseteq \{(\bigcup_{i\leq k} A_i \cap (n\times 2^n), n)\colon n\in \omega\}. \]
The latter set is clearly infinite and we are done. \end{proof}
Thanks to Proposition \ref{homo} we can induce measures by names for slaloms. Note that usually these measures need not be positive on all infinite elements of $\mathfrak{T}^*_\mathcal{W}$.
\begin{cor}\label{measure} Let $\dot{U}$ be an $\mathbb{M}$-name for a slalom. The following formula uniquely defines a measure on $\mathfrak{T}^*_\mathcal{W}$: \[ \nu(T_W) = \lambda(\llbracket \check{W}\subseteq \dot{U} \rrbracket). \] This measure vanishes on finite sets. \end{cor}
We are going to show that there is a sequence of measures defined on $\mathfrak{T}^*_{\mathcal{W}}$ as in Lemma \ref{drygier}. We will use the name $\dot{S}$ constructed in the previous section. For $(T,m)\in \Omega$ define an $\mathbb{M}$-name $\dot{S}_{(T,m)}$ in the following way: $$ \llbracket k\in \dot{S}_{(T,m)}(n) \rrbracket = \begin{cases}
\llbracket k\in \dot{S}(n)\rrbracket \mbox{ if } n\geq m \\
\mathcal{X} \mbox{ if } n< m \mbox{ and } k\in T(n) \\
\emptyset \mbox{ if } n < m \mbox{ and } k\notin T(n). \end{cases} $$
Then $\dot{S}_{(T,m)}$ is a name for a slalom for each $(T,m)\in \Omega$ since $T$ is a slalom and $\dot{S}$ is a name for a slalom.
For $W\in \mathcal{W}$ and $(T,m)\in\Omega$, Corollary \ref{measure} allows us to define $\nu_{(T,m)}$ on $\mathfrak{T}^*_\mathcal{W}$ by setting \[ \nu_{(T,m)}(T_W) = \lambda(\llbracket \check{W} \subseteq \dot{S}_{(T,m)} \rrbracket). \] Of course, each $\nu_{(T,m)}$ vanishes on finite sets.
\begin{prop}\label{seq} For every $A\in \mathfrak{T}^*_\mathcal{W}$
\[ \lim_{(T,m)\in\Omega} \nu_{(T,m)}(A)-\delta_{(T,m)}(A) = 0. \] \end{prop} \begin{proof}
Let $W\in \mathcal{W}$.
\begin{claim}$\lim_{(T,m)\in T_W}\nu_{(T,m)}(T_W) = 1$.\end{claim}
\begin{why}Let $\varepsilon>0$. There is $m$ such that $\lambda(\llbracket \forall n>m \ \check{W}(n)\subseteq \dot{S}(n)\rrbracket)>1-\varepsilon$. So, for each $n>m$ and $(T,n)\in T_W$ \[ \nu_{(T,n)} (T_W) = \lambda(\llbracket \check{W} \subseteq \dot{S}_{(T,n)}\rrbracket) \geq \lambda\left(\llbracket \check{W}\cap (n\times 2^n+1) \subseteq \check{T} \rrbracket \cap \llbracket \forall i \geq n \ \check{W}(i)\subseteq \dot{S}(i)\rrbracket\right) = \]\[ = \lambda(\llbracket \forall i \geq n \ \check{W}(i)\subseteq \dot{S}(i)\rrbracket)> 1-\varepsilon. \] \end{why}
\begin{claim}$\lim_{(T,m)\notin T_W}\nu_{(T,m)}(T_W) = 0$.\end{claim}
\begin{why}In fact, if $(T,m)\notin T_W$, then \[ \nu_{(T,m)}(T_W) = \lambda(\llbracket \check{W} \subseteq \dot{S}_{(T,m)}\rrbracket) \leq \lambda(\llbracket \check{W} \cap (m\times 2^m+1) \subseteq \check{T}\rrbracket) = 0. \] \end{why}
In this way we have proved that $\lim_{(T,n)} \nu_{(T,n)}(T_W) - \delta_{(T,n)}(T_W) = 0$ for each $W \in \mathcal{W}$. Each element of $\mathfrak{T}^*_\mathcal{W}$ is a finite Boolean combination of elements of this form, so the convergence for arbitrary elements of the algebra easily follows. \end{proof}
\begin{cor} \label{c_0} There is a compactification $L$ of $\omega$ such that
\begin{enumerate}
\item $L\setminus \omega$ is non-separable and supports a measure,
\item the natural copy of $c_0$ is complemented in $C(L)$.
\end{enumerate}
If $\mathrm{add}(\mathcal{N})=\mathrm{non}(\mathcal{M})$, then we can additionally require that $L\setminus \omega$ does not map continuously onto $[0,1]^{\omega_1}$, and that every isomorphic copy of $c_0$ in $C(L)$ contains a further copy of $c_0$ which is complemented. That is, $C(L)$ is hereditarily separably Sobczyk. \end{cor} \begin{proof}
Let $L$ be the Stone space of $\mathfrak{T}^*_\mathcal{W}$. Since $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$ is a subalgebra of $\mathfrak{T}_\mathcal{W}/{\rm Fin}$, it supports a measure. Also, $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$ is not
$\sigma$-centered (see Remark \ref{T^*2}). So, the space $L\setminus \omega$ supports a measure and is not separable.
Proposition \ref{seq} and Lemma \ref{drygier} imply (2).
If
$\mathrm{add}(\mathcal{N})=\mathrm{non}(\mathcal{M})$, then instead of $\mathfrak{T}^*_\mathcal{W}$ we can use $\mathfrak{T}^*_\mathcal{A}$, where $\mathcal{A}$ is defined as in Theorem \ref{todor}. Again, since $\mathfrak{T}^*_\mathcal{A}$ is a
subalgebra of $\mathfrak{T}_\mathcal{A}$, the algebra $\mathfrak{T}^*_\mathcal{A}/Fin$ supports a measure and does not contain an uncountable independent sequence. It is not
$\sigma$-centered by Remark
\ref{T^*}. Also, since the Stone space of $\mathfrak{T}_\mathcal{A}$ only carries separable measures and $\mathfrak{T}^*_\mathcal{A}$ is a subalgebra of $\mathfrak{T}_\mathcal{A}$, the Stone space of $\mathfrak{T}^*_\mathcal{A}$ also does not carry a non-separable measure, and so by \cite[Theorem 8.4]{Drygier-Plebanek}, we have that $C(L)$ is hereditarily separably Sobczyk. \end{proof}
In Section \ref{meas} we proved that the algebra $\mathfrak{T}_\mathcal{W}/{\rm Fin}$ supports a measure without exhibiting any such measure explicitly. Proposition \ref{seq} allows us to define a strictly positive measure on $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$ explicitly (see the proof of Corollary \ref{sp-measure}). For each $(S,n)\in \Omega$ let $\mu_{(S,n)}$ be the measure on $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$ induced by $\nu_{(S,n)}$. Fix an enumeration $\phi\colon \Omega \to \omega$.
\begin{cor} The measure $\mu$ given by \[ \mu(A) = \sum_{(S,n)\in \Omega} \frac{1}{{2^{\phi(S,n)+1}}} \mu_{(S,n)} (A) \] is strictly positive on $\mathfrak{T}^*_\mathcal{W}/{\rm Fin}$. \end{cor}
\subsection{Small $\sigma$-$n$-linked spaces supporting no measure.}
In \cite{Dzamonja-Plebanek} the authors claimed to prove that the space constructed by Todor\v{c}evi\'{c} does not support a measure. They were mainly interested in the corollary saying that there is a Boolean algebra of size $\mathrm{add}(\mathcal{N})$ which does not support a measure. However, the argument in their proof was inaccurate (and the proof of Theorem \ref{small} indicates that the theorem is not true in general). Namely, it is assumed there that the Todor\v{c}evi\'{c} space is the Stone space of a Boolean algebra generated by a family of elements any two of which are either disjoint or ordered by inclusion. This is not the case, and in fact one can show that under Suslin's Hypothesis every Boolean algebra with this property is $\sigma$-centered.
However, if we require some additional properties of the family $\mathcal{A}$ used for the construction, we can prevent the resulting space from supporting a measure (and so the corollary of \cite[Theorem 3.1]{Dzamonja-Plebanek} remains true).
The following is in principle \cite[Theorem 3.7]{Judah-Shelah}. Recall that $h \in \omega^\omega$ is given by $h(n)=2^n$.
\begin{prop}
Let $g\in \omega^\omega$ be such that $\lim_n \frac{g(n)}{2^n} = \infty$. If $\mathcal{F}$ is not localized by $\mathcal{S}_g$, then it is not ($h$-)destructible by any $\mathbb{M}_\kappa$. \end{prop}
\begin{proof}
Let $\dot{S}$ be an $\mathbb{M}_\kappa$-name for an ($h$-)slalom. Define $A\subseteq \omega\times\omega$ by
\[ A(n) = \{k\colon \lambda_\kappa(\llbracket k\in \dot{S}(n) \rrbracket) > \frac{2^n}{g(n)}\}. \]
Then $|A(n)|< \frac{g(n)}{2^n}\cdot 2^n = g(n)$. So, there is an $f\in\mathcal{F}$ such that $f\nsubseteq^* A$.
Let $p\in \mathbb{M}_\kappa$ and $N\in \omega$ be such that
\[ p \Vdash ``\forall n>N \ \check{f}(n) \in \dot{S}(n)". \]
There is $m>N$ such that $\lambda_\kappa(p)>\frac{2^m}{g(m)}$ and $f(m)\notin A(m)$. Then $\lambda_\kappa(\llbracket \check{f}(m) \in \dot{S}(m) \rrbracket)\leq \frac{2^m}{g(m)}$ and so
\[ \emptyset \ne p \setminus \llbracket \check{f}(m) \in \dot{S}(m) \rrbracket \Vdash ``\check{f}(m) \notin \dot{S}(m)", \]
a contradiction. \end{proof}
It follows that we have, in $\mathsf{ZFC}$, families $\mathcal{F} \subseteq \omega^\omega$ of size $\mathrm{add}(\mathcal{N})$ which are not ($h$-)destructible by any $\mathbb{M}_\kappa$: simply consider a family $\mathcal{F}$ as in Theorem~\ref{Bartoszynski} which is not localized by $\mathcal{S}_g$, where $g \in \omega^\omega$ grows fast enough as above.
\begin{thm} \label{nomeasure} There is a space $K$ satisfying the properties of the space of Theorem \ref{todor} and such that $K$ does not support a measure. Moreover, $K$ is $\sigma$-$n$-linked for every $n$. \end{thm}
\begin{proof} Assume $\mathcal{F}$ is not $h$-destructible by any $\mathbb{M}_\kappa$. Then we can repeat the construction of Theorem \ref{kunen-fremlin} obtaining a $\subseteq^*$-increasing family $\mathcal{A}\subseteq \mathcal{Z}$ which is not
destructible by any $\mathbb{M}_\kappa$. Then we can repeat the argument from the proof of Theorem \ref{todor} to show that $\mathfrak{T}_\mathcal{A}/{\rm Fin}$ (here and for the rest of this proof we are actually assuming for notational convenience that $\mathcal{A}$ is the closure under finite modifications of what it was in the last sentence) cannot be $\sigma$-centered in a forcing extension by $\mathbb{M}_\kappa$ for any $\kappa$. By Theorem \ref{kamburelis} the algebra $\mathfrak{T}_\mathcal{A}/{\rm Fin}$ does not support a measure.
We now show that $\mathfrak{T}_\mathcal{A}/{\rm Fin}$ is $\sigma$-$n$-linked for every $n$. As before, it suffices to show that the $\pi$-base for this algebra given to us by Proposition~\ref{pi-base} is $\sigma$-$n$-linked for every $n$ (note that we have implicitly used this proposition in this proof already, and it is easy to check that its antecedent is satisfied by $\mathcal{A}$). In fact, it can easily be seen that this itself would follow if we can show for any $n$ that there is a countable covering $\mathcal{A} = \bigcup_m \mathcal{C}_m$ where for every $m$ and $A_1, A_2, \ldots, A_n$ from $\mathcal{C}_m$, there is some $S \in \mathcal{S}$ such that $S \supseteq \bigcup \{A_1, A_2, \ldots, A_n\}$.
But this is easily accomplished. Let $n$ be fixed. With each $A \in \mathcal{A}$ we associate a pair $(k_A, T_A)$ such that for every $m \geq k_A$, $\frac{1}{2^m}|A(m)| < \frac{1}{n}$, and $T_A = A \cap ([0, k_A) \times \omega)$. Since there are only countably many pairs and every $A \in \mathcal{A}$ can be associated with some such pair, it is clear that these pairs give rise to a countable partitioning of $\mathcal{A}$. Let $A_1, A_2, \ldots, A_n$ be associated with the same pair, say $(k, T)$. Let $S = \bigcup \{A_1, A_2, \ldots, A_n\}$. Now, it is clear that $S \in \mathcal{S}$: if $m < k$, then $S(m) = T(m)$, from which it follows that $|S(m)| < 2^m$; on the other hand, if $m \geq k$, then $S(m)= \bigcup \{A_1(m), A_2(m),\ldots, A_n(m)\}$, so $|S(m)| \leq |A_1(m)|+|A_2(m)|+\ldots+|A_n(m)| < \frac{2^m}{n}\cdot n = 2^m$. \end{proof}
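The counting step above can be illustrated on a finite fragment. The following Python sketch (the concrete slaloms are toy data chosen by us) builds $n$ slaloms sharing the same initial part below $k$ and with levels of relative size below $1/n$ from $k$ on, and confirms that their union still has fewer than $2^m$ elements at every level:

```python
# Toy instance of the counting step: n slaloms with a common initial
# part T below k and |A_i(m)|/2^m < 1/n for m >= k; their union then
# stays a slalom (fewer than 2^m elements at every level m).
n, k, M = 3, 4, 12
T = {m: set(range(2**m - 1)) for m in range(k)}   # shared initial part
A = []
for i in range(n):
    Ai = dict(T)
    for m in range(k, M):
        # size 2^m // n - 1 is strictly below 2^m / n
        Ai[m] = set(range(i, i + 2**m // n - 1))
    A.append(Ai)
union = {m: set().union(*(Ai[m] for Ai in A)) for m in range(M)}
slalom_ok = all(len(union[m]) < 2**m for m in range(M))
print(slalom_ok)
```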
We will show that using this technique we can find a space supporting no measure and satisfying Fremlin's property (*), a chain condition very close to separability.
\begin{defi}[see \cite{Fremlin-Maharam}]
A Boolean algebra $\mathfrak{A}$ has \emph{property (*)} if $\mathfrak{A}^+ = \bigcup \mathcal{C}_n$, where for each $n$ and for each infinite $\mathcal{C}\subseteq \mathcal{C}_n$, the family $\mathcal{C}$ can be further refined to an infinite
centered family. \end{defi}
Define now the following family
\[ \mathcal{J} = \{S\subseteq \omega \times \omega\colon S(n)\subseteq 2^n \mbox{ for each }n\mbox{ and } \lim_n \frac{1}{2^n}|S(n)| = 0 \}. \] Recall that the density zero ideal $\mathcal{D}$ (see for example \cite{Farah}) is defined by
\[ \mathcal{D} = \{A\subseteq \omega\colon \lim_n \frac{|A\cap [2^n, 2^{n+1})|}{2^n} = 0 \}. \] Let \[ \mathcal{V} = \mathcal{J} \cap \mathcal{S}. \] Clearly, $\mathcal{V}$ is to the density zero ideal what $\mathcal{W}$ is to the summable ideal, and the same natural enumeration function witnesses this correspondence.
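For intuition, membership in the density zero ideal can be illustrated numerically over a finite horizon; in the following Python sketch (the sets and the horizon are illustrative choices of ours) the squares have vanishing relative size on the dyadic blocks $[2^n,2^{n+1})$, while the even numbers do not:

```python
# Relative size of A on the block [2^n, 2^(n+1)), normalised by 2^n;
# membership in the density zero ideal D means these values tend to 0.
def block_densities(A, N):
    return [len([a for a in A if 2**n <= a < 2**(n + 1)]) / 2**n
            for n in range(N)]

squares = {i * i for i in range(2**10)}   # covers all blocks below 2^17
evens = set(range(0, 2**18, 2))
d_sq = block_densities(squares, 17)
d_ev = block_densities(evens, 17)
print(d_sq[-1], d_ev[-1])   # the squares' block density decays; the evens' stays at 1/2
```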
\begin{thm}[Brendle-Yatabe, see \cite{Brendle-Yatabe} and \cite{Elekes}] The density zero ideal is random-indestructible, i.e., \[ \Vdash_\mathbb{M} ``\mbox{There is no co-infinite }\dot{X}\mbox{ such that }\check{D}\subseteq^* \dot{X}\mbox{ for each }\check{D}\in \check{\mathcal{D}}." \] \end{thm}
\begin{prop}\label{nomeasure2}
The Boolean algebra $\mathfrak{T}_\mathcal{V}/Fin$ does not support a measure. \end{prop} \begin{proof} First, notice that the density zero ideal is not destructible by $\mathbb{M}_\kappa$ for any cardinal $\kappa$. Indeed, for any new subset of $\omega$ in the generic extension, we can consider a nice name for this set, and because $\mathbb{M}_\kappa$ has the ccc, this name is decided by countably many conditions, so each new subset (potentially destroying the ideal) added by $\mathbb{M}_\kappa$ can be added by a single random real. See also \cite[Remark 3.4]{Elekes}.
As in Theorem~\ref{nomeasure}, it suffices to show that $\mathcal{V}$ is indestructible by forcing with any $\mathbb{M}_\kappa$. So, let $G$ be generic for $\mathbb{M}_\kappa$ over $V$. We will work in $V[G]$. Let $\dot{S} \in \dot{\mathcal{S}}$ (here we are referring to the interpretations of these names in the generic extension, but retaining the checks and dots so as to avoid confusion). Let $\dot{f} \in \dot{\mathcal{X}}$ be such that $\dot{S}(n) \subseteq 2^n\setminus \{\dot{f}(n)\}$ for every $n$. Then, corresponding to $\dot{f}$ there is an infinite and co-infinite subset $\dot{X}$ of $\omega$ such that for every $n$, $|\dot{X}\cap [2^n, 2^{n+1})| = 1$. By the indestructibility of $\check{\mathcal{D}}$, it follows that there
is an infinite $\check{Y}\subseteq \dot{X}$ such that $\check{Y} \in \check{\mathcal{D}}$. Corresponding to $\check{Y}$, there is some infinite $\check{T} \in \check{\mathcal{V}}$ such that $\check{T}(n)\subseteq \{\dot{f}(n)\}$ for every $n$. Then
$\check{T}\nsubseteq^* \dot{S}$ and so it follows that $\dot{S}$ cannot localise $\check{\mathcal{V}}$.
Altogether, $\mathcal{V}$ is indestructible by forcing with any $\mathbb{M}_\kappa$, and so $\mathfrak{T}_\mathcal{V}/Fin$ does not support a measure.
\iffalse So, towards a contradiction, let $\dot{S}$ be an $\mathbb{M}_\kappa$-name such that
\[\Vdash_{\mathbb{M}_\kappa} ``\dot{S}\in \dot{\mathcal{S}} \mbox{ and }\check{V}\subseteq^* \dot{S}\mbox{ for each }\check{V}\in \check{\mathcal{V}}".\]
Clearly, we can assume that there is some $\dot{f}$ a name for an element of $\dot{\mathcal{X}}$ such that
\[\Vdash_{\mathbb{M}_\kappa} ``\forall n\in \omega \dot{S}(n) = 2^n\setminus \{\dot{f}(n)\}".\]
Moreover, using the fact that $\mathcal{D}$ and $\mathcal{J}$ are isomorphic, the above remark implies that for each $\kappa$ \[ \Vdash_{\mathbb{M}_\kappa} \mbox{ there is no }\dot{S}\in \dot{\mathcal{S}} \mbox{ such that }\check{J}\subseteq^* \dot{S}\mbox{ for each }\check{J}\in \check{\mathcal{J}}. \] And, since for each $J\in \mathcal{J}$ there is $V\in \mathcal{V}$ such that $J =^* V$, for each $\kappa$ \[ \Vdash_{\mathbb{M}_\kappa} \mbox{ there is no }\dot{S}\in \dot{\mathcal{S}} \mbox{ such that }\check{V}\subseteq^* \dot{S}\mbox{ for each }\check{V}\in \check{\mathcal{V}}. \] According to Theorem \ref{kamburelis}, we are done. \fi \end{proof}
\begin{thm} The Boolean algebra $\mathfrak{T}_\mathcal{V}/Fin$ has property (*). \end{thm}
\begin{proof} As before, it suffices to find a countable covering $\mathcal{V} = \bigcup_n \mathcal{C}_n$ such that for every $n$ and every infinite $\mathcal{C} \subseteq \mathcal{C}_n$, there is an $S \in \mathcal{S}$ such that for infinitely many $T \in \mathcal{C}$, $T \subseteq S$.
So, first we get the sets $\mathcal{C}_n$. For each $A \in \mathcal{V}$ we fix a pair $(k_A, U_A)$, where $k_A\in \omega\setminus\{0\}$ is such that for every $m \geq k_A$ we have $\frac{1}{2^m}|A(m)| < \frac{1}{9}$, and $U_A = A \cap ([0, k_A)\times \omega)$. As there are only countably many admissible pairs, and each element of $\mathcal{V}$ is associated with some such pair, we get a countable partitioning $\mathcal{V} = \bigcup_n \mathcal{C}_n$ such that any two elements of $\mathcal{V}$ in the same piece of the partition share the same pair. We claim that this countable partitioning witnesses property (*).
Fix some $n'$, and let $(k,U)$ be the pair associated with $\mathcal{C}_{n'}$. Let $\mathcal{C} \subseteq \mathcal{C}_{n'}$ be a countably infinite subset, and fix an enumeration $\mathcal{C}=\{S_0, S_1, \ldots\}$ of its elements.
We shall need the following observation.
\begin{claim}
Let $Q\subseteq \omega$ be infinite, let $I\subseteq \omega$ be finite with $\min I \geq k$, and let $n\in Q$. Then there is $T\in \mathcal{V}$,
$T\subseteq (I \times \omega)$ and an infinite $Q'\subseteq Q$ including $n$ such that
\[ \forall m\in Q' \ \forall j\in I \ S_m(j)\subseteq T(j), \]
and for each $j \in I$, $\frac{1}{2^j}|T(j)| < \frac{1}{3}$. \end{claim}
\begin{why}
Note that the set of $T\in \mathcal{V}$, such that \begin{enumerate}
\item $T\subseteq I\times \omega$,
\item $T(j)\subseteq 2^j$ for each $j\in I$,
\item for every $j \in I$, $\frac{1}{2^j}|T(j)| < \frac{1}{3}$,
\item $S_n \cap (I\times \omega) \subseteq T$ \end{enumerate}
is finite. But for every $m \in Q$, there is some such $T$ so that $S_m\cap (I\times \omega) \subseteq T$ (because $\frac{1}{2^j}|S_m(j)|<1/9$ for each $j\geq k$). So one of these $T$ must work for infinitely many $m \in Q$, including $n$. \end{why}
Now, we would like to find an infinite $N \subseteq \omega$ such that $\bigcup \{S_n \colon n \in N\} \in \mathcal{S}$. We shall build $N$ inductively, and we shall need to keep track of some extra information during this inductive construction. We will construct $(n_i,k_i,T_i,Q_i)_{i\in\omega}$ such that for every $i$ \begin{enumerate}
\item $k_i\in\omega$ and $k_{i+1}>k_i$,
\item $T_i \subseteq [k_i,k_{i+1})\times \omega$, $\frac{1}{2^j} |T_i(j)| < \frac{1}{3}$ for every $j\in [k_i,k_{i+1})$,\label{T_n}
\item $Q_i\in [\omega]^\omega$, $Q_{i+1}\subseteq Q_i$,
\item $n_{i}\in Q_{i+1}$, $n_{i+1}>n_i$,\label{i_n}
\item $S_l \cap ([k_i,k_{i+1})\times \omega) \subseteq T_i$ for each $l\in Q_{i+1}$,\label{m>n}
\item $\frac{1}{2^j}|S_{n_i}(j)| < \frac{1}{3^{i+1}}$ for each $j>k_{i+1}$.\label{k_n} \end{enumerate}
Let $n_0 = 0$, $k_0=k$ and $Q_0 = \omega$ and suppose that we have constructed $n_i$, $k_i$ and $Q_i$. Then use the fact that $S_{n_{i}}\in \mathcal{V}$ to pick an appropriate $k_{i+1}>k_i$ to satisfy condition (\ref{k_n}). Then apply the claim to $Q=Q_i$, $I = [k_i,
k_{i+1})$ and $n=n_i$ to obtain $T_i = T$ and $Q_{i+1} = Q'$. Now, take $n_{i+1}\in Q_{i+1}$ such that $n_{i+1}>n_i$ and proceed.
Now, let $N = \{n_i\colon i\in\omega\}$ and
\[V = \bigcup \{S_{n} \colon n\in N\}. \]
We are done once we show that $V\in \mathcal{S}$. For this, we only need to check that for every $j \in \omega$, $|V(j)| < 2^j$. This is clear if $j \in [0,k)$, and otherwise there is some $i$ such that $j \in [k_i, k_{i+1})$. Of course,
\[ V(j) = \bigcup_{m < i} S_{n_m}(j) \cup \bigcup_{m\geq i} S_{n_m}(j). \]
By (\ref{m>n}) and (\ref{i_n}) $S_{n_m}(j) \subseteq T_i(j)$ for every $m\geq i$. So, by (\ref{T_n}), $|\bigcup_{m\geq i} S_{n_m}(j)|<2^j \cdot \frac{1}{3}$.
By (\ref{k_n})
\[ |\bigcup_{m<i} S_{n_m}(j)| < 2^j \cdot \sum_{m=0}^{i-1} \frac{1}{3^{m+1}} < 2^j \cdot \frac{2}{3}, \]
and we are done. \end{proof}
The reason for our interest in property (*) is that by \cite[Lemma 2D]{Fremlin-Maharam}, every \emph{Maharam algebra} (i.e., complete Boolean algebra with a strictly positive exhaustive submeasure) satisfies this chain condition. Maharam's Problem, asking if there is a Maharam algebra which does not support a measure, was a longstanding open problem (see \cite{Maharam}) in measure theory. In \cite{Talagrand} Talagrand gave an example of such an algebra. Given the complexity of Talagrand's algebra, it is natural to search for simpler examples, for example one not appealing to the existence of a non-principal ultrafilter on $\omega$.
Of course the algebra $\mathfrak{T}_\mathcal{V}/Fin$ is not complete but instead of $\mathfrak{T}_\mathcal{V}/Fin$ we can consider its completion $\mathfrak{A}$. The algebra $\mathfrak{A}$ does not support a measure (otherwise $\mathfrak{T}_\mathcal{V}/Fin$ would support a measure). Moreover, since $\mathfrak{T}_\mathcal{V}/Fin$ is a $\pi$-base of $\mathfrak{A}$, the algebra $\mathfrak{A}$ is $\sigma$-$n$-linked for every $n$ and satisfies property (*). By \cite[Theorem 1]{Todorcevic04} every complete Boolean algebra which is $\sigma$-$n$-linked and weakly distributive is a Maharam algebra. In \cite{Dobrinen04} Dobrinen proved that several complete $\sigma$-linked algebras which do not support a measure are not weakly distributive by showing that they add a Cohen real. Unfortunately, the algebra $\mathfrak{A}$ also adds a Cohen real, so it cannot be weakly distributive.
\begin{thm} Forcing with $\mathfrak{T}_\mathcal{V}/Fin$ adds a Cohen real.\label{Cohen} \end{thm}
\begin{proof} Clearly, if we can prove that forcing with the dense subposet formed by a $\pi$-base of $\mathfrak{T}_\mathcal{V}/Fin$ adds a Cohen real, then the result follows. The $\pi$-base we choose is $\{T_A \cap T_{(T,n)}\colon A \in \mathcal{V}, (T,n) \in \Omega\}/Fin\setminus\{[\emptyset]\}$. We consider this set with the inclusion relation.
It shall, however, be convenient to consider some canonical representatives of $[T_A\cap T_{(S,n)}]$, where $T_A\cap T_{(S,n)}$ is infinite. Given such $T_A\cap T_{(S,n)}$, let $m \geq n$ be the least natural number such that $|A(m)|<2^m-1$. Notice that $[T_A\cap T_{(S,n)}] =[T_A\cap T_{(T,m)}]$ where $T= A \cap (m\times 2^m)$. Furthermore, it is easy to verify that if $B \in \mathcal{V}$ and $(U,l) \in \Omega$ are such that $|B(l)|<2^l-1$ and $[T_A \cap T_{(T,m)}] = [T_B \cap T_{(U,l)}]$, then $A=B$, $m=l$ and $T=U$. It follows that the poset
\[ \mathbb{Q} = \{T_A \cap T_{(T,n)}\colon T_A \cap T_{(T,n)}\not \in \mathrm{Fin}, A\in \mathcal{V}, (T,n)\in \Omega, |A(n)| < 2^n-1\},\] with the order given by $\subseteq^*$ is isomorphic to our $\pi$-base. The dense subset of $\mathbb{Q}$ we shall consider is the following: \[\mathbb P = \{T_A \cap T_{(T,n)}\in \mathbb{Q}\colon n >1\}.\]
For technical reasons, the particular incarnation, $\mathbb C$, of Cohen forcing that we will work with is the poset which consists of finite partial functions from $\omega\setminus 2$ to $2$, with the order being reverse inclusion.
Now, in order to prove that forcing with $\mathbb P$ adds a Cohen real, it suffices (see \cite{Abraham-Aronszajn}) to define a projection from $\mathbb P$ to $\mathbb C$, that is, a function $ \Phi :\mathbb P \to \mathbb C$ such that \begin{enumerate}
\item $\Phi[\mathbb P]$ is a dense subset of $\mathbb C$,
\item $\Phi$ is order-preserving,
\item If $p \in \mathbb P$ and $\tau \in \mathbb C$ is such that $\tau\leq \Phi(p)$, then there is a $q \leq p$ in $\mathbb P$ such that $\Phi(q) = \tau$. \end{enumerate}
Let $G$ be the generic filter of $\mathbb P$. Note that we can define from $G$ an element of $\mathcal{S}$ by noting that in $V[G]$ the set \[\bigcup\{A(n) \colon A \in \mathcal{V}, \exists (S,m) \in \Omega, m > n, T_A\cap T_{(S,m)} \in G\}\] has size less than $2^n$ for every $n \in \omega$. Let $\dot{H}$ be a name for this `generic slalom'.
We shall now define for each natural number $n>1$ a function $d_n\colon \mathcal{P}(2^n) \to \{0,1\}$. For $n\in \omega \setminus 2$ let $$ d_n(F)= \begin{cases}
1 \mbox{ if } 2^{n-1} \subseteq F\\
0 \mbox{ otherwise.} \end{cases} $$ Note that if $A \subseteq 2^n$ has size less than $2^{n-1}$, then there are $B, C \in [2^n]^{2^{n}-1}$ such that $A \subseteq B \cap C$ and $d_n(B) = 1- d_n(C)$.
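The extension property of $d_n$ stated above can be verified by brute force for a small value of $n$; the following Python sketch (all names are ours) checks it for $n = 3$:

```python
# Brute-force check, for n = 3, that every A included in 2^n with
# |A| < 2^(n-1) extends to sets B, C of size 2^n - 1 with
# d_n(B) = 1 and d_n(C) = 0, i.e. d_n(B) = 1 - d_n(C).
from itertools import combinations

n = 3
full = set(range(2**n))
half = set(range(2**(n - 1)))

def d(F):                    # d_n(F) = 1 iff F contains 2^(n-1)
    return 1 if half <= F else 0

ok = True
for size in range(2**(n - 1)):           # all A with |A| < 2^(n-1)
    for A in map(set, combinations(full, size)):
        exts = [set(B) for B in combinations(full, 2**n - 1) if A <= set(B)]
        ok = ok and (any(d(B) == 1 for B in exts)
                     and any(d(B) == 0 for B in exts))
print(ok)
```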
The crucial property of the functions $\{d_n\}_{n >1}$ is the following: given any $T_A\cap T_{(S,n)}\in \mathbb P$, since $A \in \mathcal{V}$ the set \[F = \{m >1\colon\exists i \ T_A\cap T_{(S,n)}\Vdash``d_m(\dot{H}(m)) = i"\}\]
is finite. To see this notice that if $M \geq n$ is such that for every $m\geq M$, $\frac{1}{2^m}|A(m)| < \frac{1}{2}$, then $F \subseteq M$. Indeed, for any $m \geq M$ in $\omega\setminus 2$ and $i \in 2$, we can find $B_i\subseteq 2^m$ containing $A(m)$ such that $d_m(B_i) = i$. For $i\in 2$ consider the slalom $A_i \in \mathcal{V}$ such that $A_i(m') = A(m')$ for every $m' \neq m$, and $A_i(m) = B_i$. Then $p_i = T_{A_i} \cap T_{(S,n)} \subseteq^* T_A \cap T_{(S,n)}$ and $p_i \Vdash ``d_m(\dot{H}(m))=i"$ for each $i\in 2$ (here we have not shown that the $p_i$ are in $\mathbb P$, so what we mean is that any extension of the $p_i$ in $\mathbb P$ forces the respective statement).
To finish, consider the function $\Phi\colon \mathbb P \to \mathbb C$ given by \[\Phi(T_A \cap T_{(S,n)}) = \{(m,i)\colon T_A \cap T_{(S,n)}\Vdash ``d_m(\dot{H}(m)) = i"\}.\]
We will check that $\Phi$ satisfies the desired properties. First, let $(S,2)\in \Omega$ and $A \in \mathcal{V}$ be such that $T_A \cap T_{(S,2)}$ is infinite and $|A(m)|\frac{1}{2^m} < \frac{1}{2}$ for every $m \in \omega$. Then $T_A\cap T_{(S,2)}\in \mathbb P$ and $\Phi(T_A\cap T_{(S,2)}) = \emptyset$.
Second, let $p = T_A \cap T_{(S,n)} \in \mathbb P$ be such that $\Phi(p)= \sigma$, and let $\tau \leq \sigma$. Notice that if $k \in [2, n)$, then $k \in \mathrm{dom}(\sigma)$, so if $k \in \mathrm{dom}(\tau)\setminus \mathrm{dom}(\sigma)$, then $k \geq n$.
Now, let $B\in \mathcal{V}$ be given by $B(k)=A(k)$ for every $k \not \in \mathrm{dom}(\tau)\setminus\mathrm{dom}(\sigma)$, and otherwise $B(k) \supseteq A(k)$ is such that $d_k(B(k)) = \tau(k)$. Let $m$ be the least natural number such that $m \geq n$ and $m \not \in \mathrm{dom}(\tau)$. Note that this implies that $m \not \in \mathrm{dom}(\sigma)$, $B(m)= A(m)$, and also that $|A(m)| < 2^m-1$. Let $T$ be arbitrary such that $T \cap (n \times 2^n) = S$, $(T,m)\in \Omega$, and $T_B \cap T_{(T,m)}$ is infinite.
Then $T_B \cap T_{(T,m)} \in \mathbb P$ is below $T_A \cap T_{(S,n)}$, and $\Phi(T_B \cap T_{(T,m)})= \tau$. Note that, as there is $q\in \mathbb{P}$ such that $\Phi(q)=\emptyset$ $(=1_{\mathbb{C}})$, this also gives us that $\Phi$ is a surjection.
That $\Phi$ is order-preserving is clear. \end{proof}
\begin{remark}
In \cite[Theorem 6A]{Fremlin-Maharam} it is shown (and attributed there to Todor\v{c}evi\'{c}) that the \emph{Gaifman algebra} of \cite{Gaifman} is an example of a $\sigma$-linked Boolean algebra with property (*) which does not support a
measure (and Dobrinen has also shown in \cite{Dobrinen04} that this algebra adds a Cohen real). Like
$\mathfrak{T}_\mathcal{V}/Fin$, it contains an uncountable independent family. Using techniques from Section \ref{destruct} we can show that, consistently, such an example can be produced without a large independent family.
Recall that
\[ {\rm cof}^*(\mathcal{I}) = \min \{|\mathcal{A}|\colon \mathcal{A}\subseteq \mathcal{I}, \ \forall I\in \mathcal{I} \ \exists A\in \mathcal{A} \ I\subseteq^* A\}. \]
Notice that assuming $\mathrm{add}(\mathcal{N})=\mathrm{cof}^*(\mathcal{D})$ and using the technique of the proof of Theorem \ref{kunen-fremlin2} we can find an $\subseteq^*$-chain $\mathcal{A}\subseteq \mathcal{V}$ such that each element of $\mathcal{V}$
is almost contained in an element of $\mathcal{A}$. Then we can argue as in Proposition \ref{nomeasure2} to show that the algebra $\mathfrak{T}_\mathcal{A}/Fin$ does not support a measure.
Finally, in this way we can construct a Boolean algebra which does not contain an uncountable independent family, which does not support a measure, but which is $\sigma$-$n$-linked for each $n$ and has property (*). \end{remark}
Recall that for an ideal $\mathcal{I}$, we denote its dual filter by $\mathcal{I}^*$. That is, $\mathcal{I}^* = \{I^c\colon I\in \mathcal{I}\}$.
\begin{remark}
If we are interested in the completion of $\mathfrak{T}_\mathcal{V}/Fin$ rather than in $\mathfrak{T}_\mathcal{V}/Fin$ itself, then we can present it in a slightly different language. Recall that the Mathias forcing, $\mathbb{M}(\mathcal{F})$, for a filter $\mathcal{F}$ on $\omega$ consists of all pairs $(s,F)$ such that $s\subseteq \omega$ is a finite set, $F\subseteq (\max s, \infty)$ and $F\in \mathcal{F}$. The ordering is given by $(s',F') \leq (s,F)$ if
\begin{enumerate}
\item $s\subseteq s'$,
\item $F'\subseteq F$,
\item $s'\setminus s \subseteq F$.
\end{enumerate}
The forcing $\mathbb{M}(\mathcal{F})$ is $\sigma$-centered and it generically adds a pseudointersection of $\mathcal{F}$.
We will consider a sub-poset $\mathbb{P}(\mathcal{F})$ of $\mathbb{M}(\mathcal{F})$ by imposing one more restriction on the conditions: \begin{enumerate}
\item[(4)] for every $n \in \omega$, $(s\cup F) \cap [2^n,2^{n+1}) \ne \emptyset$. \end{enumerate}
We will show that $\mathrm{RO}(\mathbb{P}(\mathcal{D}^*))$ is isomorphic to the completion of $\mathfrak{T}_\mathcal{V}/Fin$. The difference is simply one of viewpoint: whereas a generic for $\mathfrak{T}_\mathcal{V}/Fin$ defines a member of $\mathcal{S}$, a generic for $\mathbb{P}(\mathcal{D}^*)$ will define a subset of $\mathcal{X}$ whose complement is a member of $\mathcal{S}$.
It is enough to find a poset isomorphic to a $\pi$-base of $\mathfrak{T}_\mathcal{V}/Fin$ which is order-isomorphic to a dense subset of $\mathbb P(\mathcal{D}^*)$.
By the same argument as in Theorem~\ref{Cohen}, the set
\[ \mathbb{Q} = \{T_A \cap T_{(T,n)}\colon T_A \cap T_{(T,n)}\not \in \mathrm{Fin}, A\in \mathcal{V}, (T,n)\in \Omega, |A(n)| < 2^n-1\},\] under the order given by reverse almost-inclusion, is isomorphic to a $\pi$-base of $\mathfrak{T}_\mathcal{V}/Fin$. The advantage, again, is that if $A\in \mathcal{V}$ and $(S,n)\in \Omega$ witness that $T_A \cap T_{(S,n)} \in \mathbb{Q}$, then they are uniquely determined by $[T_A \cap T_{(S,n)}]$.
The order-isomorphism $\varphi\colon \mathbb{Q} \to \mathbb{P}(\mathcal{D}^*)$ is induced by the natural enumeration $f$ of $\{(n,i)\colon i< 2^n\}$ sending $\{n\}\times 2^n$ to $[2^n,2^{n+1})$ for each $n$ in the following way: for $A\in \mathcal{V}$ and $(S,n)\in \Omega$ witnessing that $T_A \cap T_{(S,n)} \in \mathbb{Q}$, let
\[ \varphi(T_A\cap T_{(S,n)}) = (2^n \setminus f[S],f[A]^c \setminus 2^n). \]
To see that $\varphi$ is an order embedding, that the range of the embedding is
\[\{(s,F) \in \mathbb{P}(\mathcal{D}^*) \colon \exists n \in \omega, s\subseteq 2^n, F \subseteq [2^n, \infty), |F \cap [2^n, 2^{n+1})| >1\},\] and that this range is actually a dense subset of $\mathbb{P}(\mathcal{D}^*)$, we use that if $T_A \cap T_{(S,n)} \subseteq^* T_B \cap T_{(T,m)}$ are distinct elements of $\mathbb{Q}$, then \begin{enumerate}
\item $n \geq m$,
\item $A \supseteq B$,
\item $S \cap (m \times 2^m) = T$,
\item $S \cap ([m,n)\times 2^n) \supseteq A \cap ([m,n)\times 2^n) \supseteq B \cap ([m,n)\times 2^n)$. \end{enumerate}
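For concreteness, the enumeration $f$ above may be taken to be $f(n,i)=2^n+i$ (this explicit formula is our assumption; any enumeration sending $\{n\}\times 2^n$ onto $[2^n,2^{n+1})$ works equally well). A minimal sketch:

```python
def f(n, i):
    # Assumed concrete enumeration of {(n, i) : i < 2^n}:
    # the block {n} x 2^n is sent bijectively onto the interval [2^n, 2^(n+1)).
    assert 0 <= i < 2 ** n
    return 2 ** n + i
```

For instance, the pairs $(3,0),\dots,(3,7)$ are sent bijectively onto $\{8,\dots,15\}$.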
So, we have that $\mathbb{P}(\mathcal{D}^*)$ is an example of a complete Boolean algebra adding a Cohen real which is not $\sigma$-centered and which does not support a measure, but is $\sigma$-$n$-linked for each $n$ and satisfies property (*). Analogously, $\mathbb{P}(\mathcal{I}_{1/n}^*)$ is a complete Boolean algebra supporting a measure which is not $\sigma$-centered and which adds a Cohen real (the last statement can be proved in the same way as Theorem \ref{Cohen}).
Perhaps using other ideals one can construct in this way other interesting Boolean algebras. \end{remark}
\end{document}
\begin{document}
\title{Toward a theory of monomial preorders}
\author{Gregor Kemper} \address{Technische Universit\"at M\"unchen, Zentrum Mathematik - M11, Boltzmannstr. 3, 85748 Garching, Germany} \email{[email protected]}
\author{Ngo Viet Trung} \address{Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, 10307 Hanoi, Vietnam} \email{[email protected]}
\author{Nguyen Thi Van Anh} \address{University of Osnabr\"uck, Institut f\"ur Mathematik, Albrechtstr. 28 A, 49076 Osnabr\"uck, Germany} \email{[email protected]}
\thanks{The second author is supported by Vietnam National Foundation for Science and Technology Development under grant number 101.04-2017.19. A large part of the paper was completed during a long term visit of the second author to Vietnam Institute of Advanced Study on Mathematics.}
\keywords{monomial order, monomial preorder, weight order, leading ideal, standard basis, flat deformation, dimension, descent of properties, regular locus, normal locus, Cohen-Macaulay locus, graded invariants, toric ring} \subjclass{Primary 13P10; Secondary 14Qxx, 13H10}
\begin{abstract} In this paper we develop a theory of monomial preorders, which differ from the classical notion of monomial orders in that they allow ties between monomials. Since for monomial preorders, the leading ideal is less degenerate than for monomial orders, our results can be used to study problems where monomial orders fail to give a solution. Some of our results are new even in the classical case of monomial
orders and in the special case in which the leading ideal defines the
tangent cone. \end{abstract}
\maketitle
\section*{Introduction}
A monomial order or a monomial ordering is a total order on the monomials of a polynomial ring which is compatible with the product operation \cite{GP}. Gr\"obner basis theory is based on monomial orders with the additional condition that 1 is less than all other monomials. Using such a monomial order, one can associate to every ideal a leading ideal that has a simple structure and that can be used to get information on the given ideal. This concept has been extended to an arbitrary monomial order in order to deal with the local case by Mora, Greuel and Pfister \cite{GP',GP,Mo}. One may ask whether there is a similar theory for partial orders on the monomials of a polynomial ring.
For a partial order, the leading ideal is no longer a monomial ideal and, therefore, harder to study. On the other hand, it is closer to the given ideal in the sense that it is less degenerate than the leading ideal for a monomial order. An instance is the initial ideal generated by the homogeneous components of lowest degree of the polynomials of the given ideal, which corresponds to the notion of the tangent cone at the origin of an affine variety. Being closer to the original ideal, such a leading ideal may help to solve a problem that cannot be solved by any monomial order. A concrete example is Cavaglia's proof~\cite{Cav} of a conjecture of Sturmfels on the Koszul property of the pinched Veronese. The aim of this paper is to establish an effective theory of partial monomial orders and to show that it has potential applications in the study of polynomial ideals. \par
Let $k[\x] = k[x_1,...,x_n]$ be a polynomial ring over a field $k$. For any integral vector $a = ({\alpha}_1,...,{\alpha}_n) \in {\mathbb N}^n$ we write $x^a$ for the monomial $x_1^{{\alpha}_1}\cdots x_n^{{\alpha}_n}$. Let $<$ be an arbitrary partial order on the monomials of $k[\x]$. For every polynomial $f = \sum c_ax^a$ one defines the {\em leading part} of $f$ as $$L_<(f) := \sum_{x^a \in \max_<(f)} c_ax^a,$$ where $\max_<(f)$ denotes the set of all monomials $x^a$ of $f$ such that there is no monomial $x^b$ of $f$ with $x^a <x^b$. \par
The first problem that we have to address is for which partial orders the leading parts of polynomials behave well under the operations of $k[\x]$. Obviously, such a partial order should be a weak order, i.e. it satisfies the additional condition that incomparability is an equivalence relation. Moreover, it should be compatible and cancellative with the product operation, i.e. if $x^a, x^b$ are monomials with $x^a < x^b$, then $x^ax^c < x^bx^c$ for any monomial $x^c$, and if $x^ax^c < x^bx^c$ for some $x^c$, then $x^a < x^b$. If a partial order $<$ satisfies these conditions, we call it a {\em monomial preorder}. A natural instance is the {\em weight order} associated to a weight vector $w \in {\mathbb R}^n$, defined by $x^a < x^b$ if $w \cdot a < w\cdot b$. \par
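For a weight order $<_w$, two monomials are incomparable exactly when they have the same weight, so $L_{<_w}(f)$ collects the terms of maximal weight. A minimal illustrative sketch (the dictionary representation of a polynomial by exponent vectors is our own choice):

```python
def leading_part(poly, w):
    # poly: dict {exponent tuple a: coefficient c_a}, representing f = sum c_a x^a.
    # w: weight vector defining the weight order x^a <_w x^b iff w.a < w.b.
    # Under <_w, monomials are incomparable iff they have equal weight, so the
    # leading part L_<(f) consists of all terms of maximal weight.
    weight = lambda a: sum(wi * ai for wi, ai in zip(w, a))
    top = max(weight(a) for a in poly)
    return {a: c for a, c in poly.items() if weight(a) == top}
```

For example, for $f = x^2 + xy + y^2$ the leading part with respect to $w = (1,1)$ is all of $f$, while with respect to $w = (2,1)$ it is $x^2$.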
We shall see that a binary relation $<$ on the monomials of $k[\x]$ is a monomial preorder if and only if there exists a real $m \times n$ matrix $M$ for some $m \ge 1$ such that $x^a < x^b$ if and only if $M \cdot a <_{\operatorname{lex}} M \cdot b$ for any monomials $x^a, x^b$, where $<_{\operatorname{lex}}$ denotes the lexicographic order. This means that monomial preorders are precisely products of weight orders. This characterization is a natural extension of a result of Robbiano \cite{Ro}, who showed that every monomial order can be defined as above by a real matrix with additional properties. It can be also deduced from a subsequent result of Ewald and Ishida in \cite{EI}, where similar preorders on the lattice ${\mathbb Z}^n$ were studied from the viewpoint of algebraic geometry (see also Gonzalez Perez and Teissier \cite{GT}). They call the set of all such preorders the Zariski-Riemann space of the lattice, and use this result to prove the quasi-compactness of that space. \par
As one can see from the above characterization by real matrices, monomial preorders give rise to graded structures on $k[\x]$. For graded structures, Robbiano \cite{Ro2} developed a framework for dealing with leading ideals. See also the papers of Mora~\cite{Mo3} and Mosteig and Sweedler~\cite{MS} for related results. Especially, non-negative gradings defined by matrices of integers were studied thoroughly by Kreuzer and Robbiano in \cite[Section 4.2]{KR}. They remarked in \cite[p. 15]{KR}: ``For actual computations, arbitrary gradings by matrices are too general". Nevertheless, we can develop an effective theory of leading ideals for monomial preorders despite various obstacles compared to the theory of monomial orders. \par
Let $<$ be an arbitrary monomial preorder of $k[\x]$. Following Greuel and Pfister \cite{GP}, we will work in the localization $k[\x]_< := S_<^{-1}k[\x]$, where $S_< := \{u \in k[\x] \mid L_<(u) = 1\}$. Note that $k[\x]_< = k[\x]$ if and only if $1 < x_i$ or~$1$ and~$x_i$ are incomparable for all~$i$, and $k[\x]_< = k[\x]_{(X)}$ if and only if $x_i < 1$ for all~$i$. In these cases, we call $<$ a {\em global monomial preorder} or {\em local monomial preorder}, respectively. For every element $f \in k[\x]_<$, we can choose $u \in S_<$ such that $uf \in k[\x]$, and define $L_<(f) := L_<(uf)$. The {\em leading ideal} of a set $G \subseteq k[\x]_<$ is the ideal in $k[\x]$ generated by the polynomials $L_<(f)$, $f \in G$, denoted by $L_<(G)$. \par
Let $I$ be an ideal in $k[\x]_<$. For monomial orders, there is a division algorithm and a notion of s-polynomials, which are used to devise an algorithm for the computation of a standard basis of $I$, i.e. a finite set $G$ of elements of $I$ such that $L_<(G) = L_<(I)$. For monomial preorders, there is no such algorithm. However, we can overcome this obstacle by refining the given monomial preorder $<$ to a monomial order. We shall see that $I$ and $L_<(I)$ share the same leading ideal with respect to such a refinement of the preorder $<$. Using this fact, we show that a standard basis of $I$ with respect to the refinement is also a standard basis of $I$ with respect to the original monomial preorder. Therefore, we can compute a standard basis with respect to a monomial preorder by using the standard basis algorithm for monomial orders. Moreover, we can show that if $J \subseteq I$ are ideals in $k[\x]_<$ with $L_<(J) = L_<(I)$, then $J=I$. \par
An important feature of the leading ideal with respect to a monomial order is that it is a flat deformation of the given ideal \cite{GP}. This can be also shown for a monomial preorder. For that we need to approximate a monomial preorder by an integral weight order
which yields the same leading ideal. Compared to the case of a monomial order, the approximation for a monomial preorder is more complicated because of the existence of incomparable monomials, which must be given the same weight. \par
Using the approximation by an integral weight order we can relate properties of $I$ and $L_<(I)$ with each other. The main obstacle here is that $L_<(I)$ and $I$ may have different dimensions. However, we always have $\dim k[\x]/L_<(I) = \dim k[\x]/I^*$, where $I^* = I \cap k[\x]$. From this it follows that $\operatorname{ht} L_<(I) = \operatorname{ht} I$ and $\dim k[\x]/L_<(I) \ge \dim k[\x]_</I$ with equality if $<$ is a global or local preorder. Inspired by a conjecture of Kredel and Weispfening \cite{KW} on equidimensionality in Gr\"obner basis theory and its solution by Kalkbrenner and Sturmfels \cite{KS}, we also show that if $k[\x]/I^*$ is equidimensional, then $k[\x]/L_<(I)$ is equidimensional. This has the interesting consequence that if an affine variety is equidimensional at the origin, then so is its tangent cone. \par
Despite the fact that $L_<(I)$ and $I$ may have different dimensions, many properties descend from $L_<(I)$ to $I$. Let ${\mathbb P}$ be a property which an arbitrary local ring may have or not have. We denote by $\operatorname{Spec}_{\mathbb P}(A)$ the ${\mathbb P}$-locus of a noetherian ring $A$. If ${\mathbb P}$ is one of the properties regular, complete intersection, Gorenstein, Cohen-Macaulay, Serre's condition $S_r$, normal, integral, and reduced, we can show that $$\dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}(k[\x]_</I) \le \dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\bigl(k[\x]/L_<(I)\bigr),$$ where ${\operatorname{N{\mathbb P}}}$ denotes the negation of ${\mathbb P}$. As far as we know, this inequality is new even for global monomial orders and for the tangent cone. From this it follows that if ${\mathbb P}$ holds at all primes of $k[\x]/L_<(I)$, then it also holds at all primes of $k[\x]_</I$. For a large class of monomial preorders, containing all monomial orders, it suffices to test ${\mathbb P}$ for the maximal ideal in $k[\x]/L_<(I)$ corresponding to the origin. Moreover, we can show that if $k[\x]/L_<(I)$ is an integral domain, then so is $k[\x]_</I$. For a positive integral weight order, Bruns and Conca \cite{BC} showed that the above properties descend from $k[\x]/L_<(I)$ to $k[\x]/I$. However, their method could not be used for monomial preorders.
\par
If $I$ is a homogeneous ideal of $k[\x]$, we can replace a monomial preorder $<$ by a global monomial preorder, which can be approximated by a positive integral weight order. So we can use results on such weight orders \cite{Cav, Sb, Tr} to compare important graded invariants of $I$ and $L_<(I)$. We can show that the graded Betti numbers of $L_<(I)$ are upper bounds for the graded Betti numbers of $I$. From this it follows that the depth and the Castelnuovo-Mumford regularity of $I$ are bounded by those of $L_<(I)$: \begin{align*} \operatorname{depth} k[\x] /I & \ge \operatorname{depth} k[\x]/L_<(I),\\ \operatorname{reg} k[\x] /I & \le \operatorname{reg} k[\x]/L_<(I). \end{align*} We can also show that the dimensions of the graded components of the local cohomology modules of $L_<(I)$ are upper bounds for those of $I$ and that the reduction number of $k[\x]/I$ is bounded above by the reduction number of $k[\x]/L_<(I)$. \par
The above results demonstrate that one can use the leading ideal with respect to a monomial preorder to study properties of the given ideal. For some cases, where the preorder is not a total order, the leading ideal still has a structure like a monomial ideal in a polynomial ring. For instance, if $I$ is an ideal which contains the defining ideal $\Im$ of a toric ring $R$, one can construct a monomial preorder $<$ such that $L_<(I)$ contains $\Im$ and $L_<(I)/\Im$ is isomorphic to a monomial ideal of $R$. This construction was used by Gasharov, Horwitz and Peeva \cite{GHP} to show that if $R$ is a projective toric ring and if $Q$ is an arbitrary homogeneous ideal of $R$, there exists a monomial ideal $Q^*$ in $R$ such that $R/Q$ and $R/Q^*$ have the same Hilbert function. Their result is just a consequence of the general fact that $k[\x]/L_<(I)$ and $k[\x]/I$ have the same Hilbert function for any homogeneous ideal $I$ and for any monomial preorder $<$. This case shows that monomial preorders can be used to study subvarieties of a toric variety. \par
We would like to mention that in a recent paper \cite{KT}, the first two authors have used global monomial preorders in a polynomial ring over a commutative ring $R$ to characterize the Krull dimension of $R$. Global monomial preorders have been also used recently by Sumi, Miyazaki, and Sakata \cite{SMS} to study ideals of minors.
\par
The paper is organized as follows. In Section 1 we characterize monomial preorders as products of weight orders, which are given by real matrices. In Section 2 we investigate basic properties of leading ideals. In Section 3 we approximate a monomial preorder by an integral weight order. Then we use this result to study the dimension of the leading ideal. In the final Section 4 we prove the descent of properties and invariants from the leading ideal to the given ideal for an arbitrary monomial preorder. \par
We refer to the books \cite{Ei} and \cite{GP} for unexplained notions in Commutative Algebra. \par
The authors would like to thank G.-M. Greuel, J. Herzog, J. Majadas, G. Pfister, L. Robbiano, T. R\"omer, F.-O. Schreyer, and B. Teissier for stimulating discussions on the subjects of this paper. We also thank the anonymous referees for their comments.
\section{Monomial preorders}
Recall that a (strict) {\em partial order} on a set $S$ is a binary relation $<$ on $S$ which is irreflexive, asymmetric, and transitive, i.e., for all $a, b, c \in S$, \begin{itemize} \item not $a < a$; \item if $a < b$ then not $b < a$; \item if $a < b$ and $b < c$ then $a < c$. \end{itemize} The elements $a,b$ are said to be {\em comparable} if $a < b$ or $b < a$. One calls $<$ a {\em weak order} if the incomparability is an equivalence relation on $S$. Notice that this is equivalent to saying that the negation~$\not<$ of~$<$ is transitive. A partial order under which every pair of elements is comparable is called a {\em total order}. \par
Let $k[\x] = k[x_1,...,x_n]$ be a polynomial ring in~$n$ indeterminates over a field $k$. First, we want to see for which (strict) partial order $<$ on the monomials of $k[\x]$ one can define a meaningful notion of leading polynomials. \par
It is natural that $<$ should be a weak order. Moreover, $<$ should be compatible and cancellative with the multiplication, meaning that $x^a < x^b$ implies $x^a x^c < x^bx^c$ and $x^a x^c < x^bx^c$ implies $x^a < x^b$ for $a,b,c \in {\mathbb N}^n$. We call a weak order $<$ on the monomials of $k[\x]$ a {\em monomial preorder} if the above properties are satisfied. Note that this definition is weaker than the definition of a monomial preorder in \cite{KT}, where it is required that $1 < x^a$ for all $x^a \neq 1$. If a monomial preorder is a total order, we call it a {\em monomial
order}. So a monomial order is precisely what Greuel and Pfister~\cite[Definition~1.2.1]{GP} call a monomial ordering.
\begin{Remark} \label{cancellative} {\rm For a total order, the cancellative property can be deduced from the compatibility with the multiplication. That is no longer the case for a weak order. For example, define $x^a < x^b$ if $\deg x^a < \deg x^b$ or $\deg x^a = \deg x^b > 1$ and $x^a <_{\operatorname{lex}} x^b$. This weak order is compatible with the product operation but not cancellative because $x_1x_2 < x_1^2$ but $x_2 \not< x_1$.} \end{Remark}
Monomial preorders are abundant. Given an arbitrary real vector $w \in {\mathbb R}^n$, we define $x^a <_w x^b$ if $w \cdot a < w \cdot b$, with the dot signifying the standard scalar product. Obviously, $<_w$ is a monomial preorder. One calls $<_w$ the {\em weight order} associated with $w$ \cite{Ei}. For example, the {\em degree order}, defined by $x^a < x^b$ if $\deg x^a < \deg x^b$, is the weight order of the vector $(1,...,1)$, and the {\em reverse degree order}, defined by $x^a < x^b$ if $\deg x^a > \deg x^b$, is the weight order of $(-1,...,-1)$. More generally, we can associate with every real $m \times n$ matrix $M$ a monomial preorder $<$ by defining $x^a < x^b$ if $M \cdot a <_{\operatorname{lex}} M\cdot b$, where $<_{\operatorname{lex}}$ denotes the lexicographic order on ${\mathbb R}^m$.\par
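A matrix preorder of this kind is easy to evaluate mechanically; a minimal sketch (Python's built-in list comparison is exactly the lexicographic comparison used here):

```python
def compare(M, a, b):
    # Preorder given by a real m x n matrix M: x^a < x^b iff M.a <_lex M.b.
    # Returns -1 if x^a < x^b, 1 if x^b < x^a, and 0 if the monomials are
    # incomparable (i.e. M.a = M.b).
    Ma = [sum(row[i] * a[i] for i in range(len(a))) for row in M]
    Mb = [sum(row[i] * b[i] for i in range(len(b))) for row in M]
    if Ma == Mb:
        return 0
    return -1 if Ma < Mb else 1
```

With the single-row matrix $M = (1\ 1)$ (the degree order) the monomials $x_1^2$ and $x_1x_2$ are incomparable; adding a second row, say $(1\ 0)$, refines the preorder and separates them.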
Given two monomial preorders $<$ and $<'$, we can define a new monomial preorder $<^*$ by $x^a <^* x^b$ if $x^a < x^b$ or if $x^a, x^b$ are incomparable with respect to $<$ and $x^a <' x^b$. We call $<^*$ the {\em product} of $<$ and $<'$. Note that this product is not commutative. The monomial preorder associated with a real matrix $M$ is just the product of the weight orders associated with the row vectors of $M$. \par
The following result shows that every monomial preorder of $k[\x]$ arises in such a way.
\begin{Theorem} \label{Robbiano} For every monomial preorder $<$ of $k[\x]$, there is a real $m \times n$ matrix $M$ for some $m > 0$ such that $x^a < x^b$ if and only if $M \cdot a <_{\operatorname{lex}} M \cdot b$. \end{Theorem}
Theorem \ref{Robbiano} is actually about partial orders on ${\mathbb N}^n$. For total orders on ${\mathbb Q}^n$, it was first shown by Robbiano \cite[Theorem 4]{Ro} (see also \cite[Theorem 2.4]{Ro2}). For partial orders on ${\mathbb Z}^n$, it was shown by Ewald and Ishida \cite[Theorem 2.4]{EI} from the viewpoint of algebraic geometry. Actually, Ewald and Ishida reduced the proof to the case of total orders on ${\mathbb Q}^n$. However, they were unaware of the much earlier result of Robbiano. We will deduce Theorem \ref{Robbiano} from Robbiano's result by using the following simple observations. These observations also explain why we have to define a monomial preorder as above. Moreover, they will be used later in the course of this paper. \par
Let $S$ be a cancellative abelian monoid with the operation $+$. We call a partial order $<$ on $S$ a {\em partial order of the monoid} $S$ if it is {\em compatible} and {\em cancellative} with $+$, meaning that $a < b$ implies $a + c < b + c$ and $a + c < b + c$ implies $a < b$ for all $a,b,c \in S$. \par
Similarly, if $E$ is a vector space over ${\mathbb Q}$, a partial order $<$ on $E$ is called a {\em partial order of the vector space} $E$ if it is a partial order of $E$ as a monoid and $a < b$ implies $\lambda a < \lambda b$ for all $\lambda \in {\mathbb Q}_+$ and $a,b \in E$, where ${\mathbb Q}_+$ denotes the set of the positive rational numbers.
\begin{Lemma}\label{extension} Every partial order of the additive monoid ${\mathbb N}^n$ can be uniquely extended to a partial order of the vector space ${\mathbb Q}^n$. \end{Lemma}
\begin{proof} Let $<$ be a partial order of ${\mathbb N}^n$. For every $a \in {\mathbb Z}^n$, there are two unique vectors $a_+, a_- \in {\mathbb N}^n$ having disjoint supports such that $a = a_+ - a_-$. For arbitrary $a, b \in {\mathbb Z}^n$ we define $a < b$ if $a_+ + b_- < a_- + b_+$. One can easily show that $<$ is a partial order of ${\mathbb Z}^n$ extending the partial order $<$ of ${\mathbb N}^n$. Now, for arbitrary $a, b \in {\mathbb Q}^n$, we can always find a positive integer $p$ such that $pa, pb \in {\mathbb Z}^n$. We define $a < b$ if $pa < pb$. It is easy to see that $<$ is a well-defined partial order of the vector space ${\mathbb Q}^n$. \end{proof}
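The extension step from ${\mathbb N}^n$ to ${\mathbb Z}^n$ in the proof is entirely mechanical; a minimal sketch, with the base order on ${\mathbb N}^n$ passed in as a predicate (this functional representation is our own device):

```python
def pos_neg(a):
    # Decompose a in Z^n as a = a_+ - a_- with a_+, a_- in N^n of disjoint support.
    return tuple(max(x, 0) for x in a), tuple(max(-x, 0) for x in a)

def less_Z(base_less, a, b):
    # Extension of a partial order on N^n to Z^n: a < b iff a_+ + b_- < a_- + b_+.
    ap, am = pos_neg(a)
    bp, bm = pos_neg(b)
    lhs = tuple(x + y for x, y in zip(ap, bm))
    rhs = tuple(x + y for x, y in zip(am, bp))
    return base_less(lhs, rhs)
```

For example, with the weight order of $w = (1,2)$ on ${\mathbb N}^2$ as base order, $(1,-1) < 0$ in ${\mathbb Z}^2$, which matches the extended weight $1 - 2 < 0$.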
It is clear from the above proof that the cancellative property of $<$ on ${\mathbb N}^n$ is necessary for the extension of $<$ to ${\mathbb Q}^n$. In fact, any partial order on an abelian group which is compatible with the group operation is also cancellative. \par
If $<$ is a weak order of ${\mathbb N}^n$, one can easily verify that the extended partial order $<$ on ${\mathbb Q}^n$ is also a weak order.
\begin{Lemma}\label{kernel} Let $<$ be a weak order of the vector space ${\mathbb Q}^n$. Let $E$ denote the set of the elements which are incomparable to $0$. Then $E$ is a linear subspace of ${\mathbb Q}^n$ and, if we define $a + E < b+E$ if $a < b$ for arbitrary $a, b \in {\mathbb Q}^n$, then $<$ is a total order of the vector space ${\mathbb Q}^n/E$. \end{Lemma}
\begin{proof} It is clear that two elements $a, b \in {\mathbb Q}^n$ are incomparable if and only if $a - b \not < 0$ and $0 \not < a-b$, which means $a - b \in E$. Since the incomparability is an equivalence relation, $a, b \in E$ implies $a, b$ are incomparable and, therefore,
$a-b \in E$. As a consequence, $a \in E$ implies $pa \in E$ for any $p \in {\mathbb N}$. From this it follows that $(p/q)a = pa/q \in E$ for any $q \in {\mathbb Z}$, $q \neq 0$. Therefore, $E$ is a linear subspace of ${\mathbb Q}^n$ and $a + E$ is the set of the elements which are incomparable to $a$. Now, it is easy to see that the induced relation $<$ on ${\mathbb Q}^n/E$ is a total order of the vector space ${\mathbb Q}^n/E$. \end{proof}
Lemma \ref{kernel} does not hold if $<$ is a partial order that is not a weak order.
\begin{Example} {\rm Consider the partial order of the vector space ${\mathbb Q}^n$, $n \ge 2$, defined by the condition $a < b$ if and only if $a - b = \lambda (e_1-e_2)$ for some $\lambda \in {\mathbb Q}_+$, where $e_i$ denote the standard basis vectors. Then $<$ is not a weak order because $e_1,0$ and $e_2, 0$ are pairs of incomparable elements, whereas $e_1 < e_2$. Clearly, $E$ is not a linear subspace of ${\mathbb Q}^n$ because $e_1,e_2 \in E$ but $e_1 - e_2 \not\in E$.} \end{Example}
Now we will use Lemma \ref{extension} and Lemma \ref{kernel} to prove Theorem \ref{Robbiano}.
\noindent{\em Proof of Theorem \ref{Robbiano}.} Let $<$ denote the weak order of the additive monoid ${\mathbb N}^n$ induced by the monomial preorder $<$ in $k[\x]$. By Lemma \ref{extension}, $<$ can be extended to a weak order of ${\mathbb Q}^n$. Let $E$ be the set of the incomparable elements to $0$ in ${\mathbb Q}^n$. By Lemma \ref{kernel}, $E$ is a linear subspace of ${\mathbb Q}^n$ and $<$ induces a total order $<$ of ${\mathbb Q}^n/E$. By \cite[Theorem 4]{Ro}, there is an injective linear map $\phi$ from ${\mathbb Q}^n/E$ to ${\mathbb R}^m$ (as a vector space over ${\mathbb Q}$) such that $a + E < b+E$ if and only if $\phi(a +E) <_{\operatorname{lex}} \phi(b+E)$ for all $a, b \in {\mathbb Q}^n$. The composition of the natural map from ${\mathbb Q}^n$ to ${\mathbb Q}^n/E$ with $\phi$ is a linear map $\psi$ from ${\mathbb Q}^n$ to ${\mathbb R}^m$ such that $a < b$ if and only if $\psi(a) <_{\operatorname{lex}} \psi(b)$. Since $\psi$ is a linear map, we can find a real $m \times n$ matrix $M$ such that $\psi(a) = M\cdot a$ for all $a \in {\mathbb Q}^n$. Therefore, $x^a < x^b$ if and only if $M\cdot a <_{\operatorname{lex}} M\cdot b$. \qed
We shall see in the following remark that a monomial preorder gives rise to a grading
on $k[\x]$, which may be useful for the study of leading ideals.
\begin{Remark} \label{graded} {\rm Let $<$ be an arbitrary monomial preorder in $k[\x]$. Let $S$ denote the quotient set of the monomials with respect to the equivalence relation of incomparability. Since $<$ is compatible and cancellative with the product of monomials, we can define the product of two equivalence classes to make $S$ a totally ordered abelian monoid. For every $a \in {\mathbb N}^n$ we denote by $[a]$ the equivalence class of the monomials incomparable to $x^a$ and by $k[\x]_{[a]}$ the vector space generated by the monomials of $[a]$. Then $k[\x] = \oplus_{[a] \in S} k[\x]_{[a]}$ has the structure of an $S$-graded ring. For instance, if $<$ is the weight order associated with a vector $w$, this grading is given by the weighted degree $\deg x^a = w\cdot a$. We call a polynomial or a polynomial ideal {\em $<$-homogeneous} if it is graded with respect to this grading. It is clear that the leading part of any polynomial is $<$-homogeneous. Therefore, the leading ideal of any set in $k[\x]$ is $<$-homogeneous. As a consequence, the leading ideal has a primary decomposition with $<$-homogeneous primary ideals and $<$-homogeneous associated primes. See e.g. \cite[Exercise~3.5]{Ei} for more information on rings graded by an abelian monoid and \cite{Ro2} for algebraic structures over rings graded by a totally ordered abelian group.} \end{Remark}
We can use the leading ideal of monomial preorders to study different subjects in algebra and geometry. For instance, if $<$ is the degree order, i.e. $x^a < x^b$ if $\deg x^a < \deg x^b$, then $L_<(f)$ is the homogeneous component of the highest degree of a polynomial $f$. In this case, the leading ideal $L_<(I)$ of a polynomial ideal $I$ describes the part at infinity of the affine variety $V(I)$ (see e.g. \cite[Definition 4.14]{GP}). If $<$ is the reverse degree order, i.e. $x^a < x^b$ if $\deg x^a > \deg x^b$, then $L_<(f)$ is just the homogeneous component of the lowest degree of $f$. In this case, $k[\x]/L_<(I)$ is the associated graded ring of $k[\x]/I$ with respect to the maximal homogeneous ideal, which corresponds to the concept of the tangent cone (see e.g. \cite[Section 5.4]{Ei}). \par
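To illustrate the reverse degree order, consider the nodal cubic $f = y^2 - x^2 - x^3$ in $k[x,y]$. Then $L_<(f) = y^2 - x^2$ and, since the ideal $(f)$ is principal, $L_<((f)) = (y^2 - x^2)$. The ring $k[x,y]/(y^2 - x^2)$ is the coordinate ring of the tangent cone of the curve $V(f)$ at the origin, which consists of the two lines $y = \pm x$ if $\operatorname{char} k \neq 2$. \par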
In the following we will present a class of useful monomial preorders which arise naturally in the study of ideals of toric rings. Recall that a {\em toric ring} is an algebra $R$ which is generated by a set of monomials $t^{c_1},...,t^{c_n}$, $c_1,...,c_n \in {\mathbb N}^m$, in a polynomial ring $k[t_1,...,t_m]$. We call an ideal of $R$ a {\em monomial ideal} if it is generated by monomials of $k[t_1,...,t_m]$. Monomial ideals of $R$ have a simple structure and can be studied using combinatorial tools. \par
Let $\phi: k[\x] \to R$ denote the map which sends $x_i$ to $t^{c_i}$, $i = 1,...,n$, and $\Im = \operatorname{ker} \phi$. Then $R = k[\x]/\Im$. One calls $\Im$ the toric ideal of $R$. Every ideal of $R$ corresponds to an ideal of $k[\x]$ containing $\Im$. Let $M$ be the matrix of the column vectors $c_1,...,c_n$. We call the monomial preorder on $k[\x]$ associated to $M$ the {\em toric preorder} associated to $R$. This order can be used to deform every ideal of $R$ to a monomial ideal.
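For instance, let $R = k[t^2,t^3] \subseteq k[t]$. Then $M = (2 \ 3)$ and the toric preorder is the weight order associated with the vector $(2,3)$. The monomials $x_1^3$ and $x_2^2$ have the same weight $6$ and are therefore incomparable; note that $x_1^3 - x_2^2$ generates the toric ideal of $R$. \par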
\begin{Proposition} \label{toric} Let $R$ be a toric ring and $\Im$ the toric ideal of $R$ in $k[\x]$. Let $<$ be the toric preorder of $k[\x]$ with respect to $R$. Let $I$ be an arbitrary ideal of $k[\x]$ which contains $\Im$. Then $L_<(I) \supseteq \Im$ and $L_<(I)/\Im$ is isomorphic to a quotient ring of $R$ by a monomial ideal. \end{Proposition}
\begin{proof} It is known that $\Im$ is generated by binomials of the form $x^{a_+} - x^{a_-}$, where $a_+, a_- \in {\mathbb N}^n$ are two vectors having disjoint supports such that $a = a_+ - a_-$ is a solution of the equation $M \cdot a = 0$ \cite{He}. Since $M \cdot a_+ = M \cdot a_-$, $x^{a_+}$ and $x^{a_-}$ are incomparable with respect to $<$. Hence, $L_<(x^{a_+} - x^{a_-}) = x^{a_+} - x^{a_-}$. Thus, $L_<(\Im) = \Im$. Since $I \supseteq \Im$, this implies $L_<(I) \supseteq \Im$. \par
Since $L_<(I)/\Im \cong \phi(L_<(I))$, it remains to show that $\phi(L_<(I))$ is a monomial ideal of $R$. This follows from the general fact that for any polynomial $f \in k[\x]$, $\phi(L_<(f))$ is a scalar multiple of a monomial of $k[t_1,...,t_m]$, which we shall show below. \par
If $f$ is a monomial, then $L_<(f) = f$ and $\phi(f)$ is clearly a monomial of $k[t_1,...,t_m]$. If $f$ is not a monomial, $L_<(f)$ is a linear combination of incomparable monomials. Therefore, it suffices to show that if $x^a, x^b$ are two incomparable monomials, then $\phi(x^a) = \phi(x^b)$. Let $M$ be the matrix defined as above. Since $<$ is the monomial preorder associated to $M$, $M \cdot a = M \cdot b$. Hence, $\phi(x^a) = t^{M \cdot a} = t^{M \cdot b} = \phi(x^{b}).$ \end{proof}
Proposition \ref{toric} extends a technique used by Gasharov, Horwitz and Peeva to show that if $R$ is a projective toric ring and if $Q$ is a homogeneous ideal in $R$, then there exists a monomial ideal $Q^*$ such that $R/Q$ and $R/Q^*$ have the same Hilbert function \cite[Theorem 2.5(i)]{GHP}.
In this case, we have $R/Q \cong k[X]/I$ and $R/Q^* \cong k[X]/L_<(I)$ for some homogeneous ideal $I$. In the next section we will prove the more general result that $k[\x]/I$ and $k[\x]/L_<(I)$ have the same Hilbert function for any homogeneous ideal $I$ of $k[\x]$ and any monomial preorder $<$.
\section{Computation of leading ideals}
Let~$<$ be an arbitrary monomial preorder on $k[\x]$. Since $<$ is compatible with the product operation, we have $L_<(f g) = L_<(f)L_<(g)$ for $f,g \in k[\x]$. It follows that the set $S_< := \{u \in k[\x] \mid L_<(u) = 1\}$ is closed under multiplication, so we can form the localization $k[\x]_< := S_<^{-1} k[\x].$ \par
It is easy to see that $S_< = \{1\}$ if and only if $1 < x_i$ or~$1$ and~$x_i$ are incomparable for all~$i$ and that $S_< = k[\x] \setminus (X)$ if and only if $x_i < 1$ for all~$i$. That means $k[\x]_< = k[\x]$ or $k[\x]_< = k[\x]_{(X)}$, explaining why we call $<$ in these cases a {\em global monomial preorder} or {\em local monomial preorder}. For monomial orders, these notions coincide with those introduced by Greuel and Pfister \cite{GP}. \par
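For instance, the weight order $<_w$ is global if all components of $w$ are positive, since then $1 <_w x_i$ for all $i$, and local if all components of $w$ are negative, since then $x_i <_w 1$ for all $i$. For $w = (1,-1)$ on $k[x,y]$ we obtain neither case: here $1 + xy^2 \in S_{<_w}$ because $xy^2 <_w 1$, and one can check that $k[x,y]_{<_w}$ lies strictly between $k[x,y]$ and $k[x,y]_{(x,y)}$. \par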
For every element $f \in k[\x]_<$, there exists $u \in S_<$ such that $uf \in k[\x]$. If there is another $v \in S_<$ such that $vf \in k[\x]$, then $L_<(vf) = L_<(uvf) = L_<(uf)$ because $L_<(u) = L_<(v) = 1$. Therefore, we can define $L_<(f) := L_<(uf)$. Recall that for a subset $G \subseteq k[\x]_<$, the {\em leading ideal} $L_<(G)$ of $G$ is generated by the elements $L_<(f)$, $f \in G$, in $k[\x]$. \par
The above notion of leading ideal allows us to work in both rings $k[X]$ and $k[\x]_<$. Actually, we can move from one ring to the other via the following relationship.
\begin{Lemma} \label{leading} Let $Q$ be an ideal in $k[\x]$ and $I$ an ideal in $k[\x]_<$. Then\par {\rm (a)} $L_<(Qk[\x]_<) = L_<(Q),$ \par {\rm (b)} $L_<(I \cap k[\x]) = L_<(I)$. \end{Lemma}
\begin{proof} For every $f \in Qk[\x]_<$, there exists $u \in S_<$ such that $uf \in Q$. Therefore, $L_<(f) = L_<(uf) \in L_<(Q)$. This means $L_<(Qk[\x]_<) \subseteq L_<(Q)$. Since $Q \subseteq Qk[\x]_<$, this implies $L_<(Q k[\x]_<) = L_<(Q)$. Now let $Q = I \cap k[\x]$. Then $Qk[\x]_< = I$. As we have seen above, $L_<(Q) = L_<(I)$. \end{proof}
By Lemma \ref{leading}(a), two different ideals in $k[\x]$ have the same leading ideal if they have the same extensions in $k[\x]_<$. This explains why we have to work with ideals in $k[\x]_<$. \par
For a monomial order, there is the division algorithm, which gives a remainder $h$ (or a weak normal form in the language of \cite{GP}) of the division of an element $f \in k[\x]_<$ by the elements of a finite set $G$ such that if $h \neq 0$, then $L_<(h) \not\in L_<(G)$. This algorithm is at the heart of the computations with ideals by monomial orders \cite{GP}. In general, we do not have a division algorithm for monomial preorders. For instance, if $<$ is the monomial preorder without comparable monomials, then $L_<(f) = f$ for all $f \in k[\x]$. In this case, there is no way to construct such an algorithm. However, we can overcome this obstacle by refining the monomial preorder $<$. \par
We say that a monomial preorder $<^*$ in $k[\x]$ is a {\em refinement} of $<$ if $x^a < x^b$ implies $x^a <^* x^b$. Notice that this implies $S_< \subseteq S_{<^*}$, so $k[\x]_< \subseteq k[\x]_{<^*}$. The product of $<$ with another monomial preorder $<'$ is a refinement of $<$. Conversely, every refinement $<^*$ of $<$ is the product of $<$ with $<^*$.
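For instance, the product of the weight order $<_w$ with the lexicographic order is the monomial order defined by the matrix whose first row is $w$ and whose remaining rows form the identity matrix: it compares two monomials first by their $w$-weights and breaks ties lexicographically.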
\begin{Lemma} \label{finer}
Let $<^*$ be the product of $<$ with a monomial preorder $<'$. Then
\begin{enumerate}
\renewcommand{\alph{enumi}}{\alph{enumi}}
\item \label{lRefineA} $L_{<^*}(G) \subseteq
L_{<'}\bigl(L_<(G)\bigr)$ for every subset $G \subseteq k[\x]_<$,
\item \label{lRefineB} $L_{<^*}(I) = L_{<'}\bigl(L_<(I)\bigr)$ for
every ideal $I \subseteq k[\x]_<$,
\item \label{lRefineC} if~$<'$ is global, then $k[\x]_{<^*} = k[\x]_<$.
\end{enumerate} \end{Lemma}
\begin{proof}
To show part~\eqref{lRefineA}, let $f \in G$ and choose $u \in S_<$ with $u f \in k[\x]$. Then
\[
L_{<^*}(f) = L_{<^*}(u f) = L_{<'}\bigl(L_<(u f)\bigr) =
L_{<'}\bigl(L_<(f)\bigr) \in L_{<'}\bigl(L_<(G)\bigr).
\]
To show part~\eqref{lRefineB}, we only need to show that $L_{<'}\bigl(L_<(I)\bigr) \subseteq L_{<^*}(I)$. Let $g \in L_<(I)$. Then $g = \sum_{i=1}^m h_iL_<(f_i)$ with $h_i \in k[\x]$ and $f_i \in I$. We may assume that the~$h_i$ are monomials, so $h_i L_<(f_i) = L_<(h_i f_i)$ for all $i$.
Replacing the $f_i$ by suitable $u_i f_i$ with $u_i \in S_<$, we may assume $f_i \in I \cap k[\x]$.
Let us first consider the case that $g$ is $<$-homogeneous. Then we may further assume that the monomials of all $L_<(h_i f_i)$ are equivalent to the monomials of $g$. Therefore, if we set $f = \sum_{i=1}^m h_if_i$, then $g = L_<(f)$. Since $f \in I$, we get $$L_{<'}(g) = L_{<'}(L_<(f)) = L_{<^*}(f) \in L_{<^*}(I).$$
Now we drop the assumption that $g$ is $<$-homogeneous. Since $L_<(I)$ is $<$-homogeneous, all $<$-homogeneous components of $g$ belong to $L_<(I)$. As we have seen above, their leading parts with respect to $<'$ belong to $L_{<^*}(I)$. Let $g_1,...,g_r$ be those $<$-homogeneous components of $g$ that contribute terms to
$L_{<'}(g)$. Since each term of $L_{<'}(g)$ occurs in precisely one $<$-homogeneous component of $g$,
$$L_{<'}(g) = \sum_{j=1}^r L_{<'}(g_j) \in L_{<^*}(I).$$
Therefore, we can conclude that $L_{<'}(L_<(I)) \subseteq L_{<^*}(I)$. \par
To prove part~\eqref{lRefineC} we show that $S_{<^*} = S_<$. Since $S_< \subseteq S_{<^*}$, we only need to show that $S_{<^*} \subseteq S_<$. Let $f \in S_{<^*}$. Then $L_{<'}(L_<(f)) = L_{<^*}(f) = 1$. Since $<'$ is a global monomial preorder, $1 <' x^a$ or~$1$ and $x^a$ are incomparable for all $x^a \neq 1$. Therefore, we must have $L_<(f) = 1$, which means $f \in S_<$. \end{proof}
The following example shows that the inclusion in Lemma \ref{finer}\eqref{lRefineA} may be strict.
\begin{Example} {\rm Let $<$ be the monomial preorder without any
comparable monomials. Then $L_<(f) = f$ for every polynomial $f$.
Let $<^*$ be the degree reverse lexicographic order. Then $<^*$ is
the product of $<$ with~$<^*$. For $G = \{x_1,x_1 + x_2\}$, we
have \[ L_{<^*}(L_<(G)) = L_{<^*}((x_1,x_1 + x_2)) = (x_1,x_2)
\supsetneqq (x_1) = L_{<^*}(G).
\]} \end{Example}
By Lemma \ref{finer}\eqref{lRefineB}, $I$ and $L_<(I)$ share the same leading ideal with respect to $<^*$. If we choose $<'$ to be a monomial order, then $<^*$ is also a monomial order. Therefore, we can use results on the relationship between ideals and their leading ideals in the case of monomial orders to study this relationship in the case of monomial preorders. \par
First, we have the following criterion for the equality of ideals by means of their leading ideals.
\begin{Theorem} \label{equality}
Let $J \subseteq I$ be ideals of $k[\x]_<$. If $L_<(J) = L_<(I)$,
then $J = I$. \end{Theorem}
\begin{proof}
Let $<^*$ be the product of $<$ with a global monomial order $<'$.
Using Lemma \ref{finer}\eqref{lRefineB}, we have
\[
L_{<^*}(J) = L_{<'}\bigl(L_<(J)\bigr) = L_{<'}\bigl(L_<(I)\bigr) =
L_{<^*}(I).
\]
Moreover, $k[\x]_< = k[\x]_{<^*}$ by Lemma~\ref{finer}\eqref{lRefineC}.
Since $<^*$ is a monomial order, these facts imply $J = I$ \cite[Lemma~1.6.7(2)]{GP}. \end{proof}
Let $I$ be an ideal of $k[\x]_<$. We call a finite set $G$ of elements of $I$ a {\em standard basis} of $I$ with respect to $<$ if $L_<(G) = L_<(I)$. This means that $L_<(I)$ is generated by the elements $L_<(f)$, $f \in G$. For monomial orders, our definition coincides with \cite[Definition~1.6.1]{GP}. If $<$ is a global monomial order, then $k[\x]_< = k[\x]$ and a standard basis is just a Gr\"obner basis.
\begin{Corollary} \label{generating}
Let $G$ be a standard basis of $I$. Then $G$ is a generating set of $I$. \end{Corollary}
\begin{proof}
Let $J := (G)$. Then $J \subseteq I$ and $L_<(I) = L_<(G)
\subseteq L_<(J) \subseteq L_<(I)$. So $L_<(J) = L_<(I)$. Hence $J = I$ by Theorem \ref{equality}. \end{proof}
The above results do not hold for ideals in $k[\x]$. This can be seen from the following observation. For every ideal $Q$ of $k[X]$ we define $$Q^* := Qk[X]_< \cap k[X].$$ Then $Q \subseteq Q^*$. By Lemma \ref{leading}, $L_<(Q) = L_<(Q^*)$. Therefore, a standard basis of $Q$ is also a standard basis of $Q^*$. One can easily construct ideals $Q$ such that $Q^* \neq Q$. For instance, if $Q = (uf)$ with $1 \neq u \in S_<$ and $0 \neq f \in k[X]$, then $f \in Q^* \setminus Q$. \par
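As a concrete instance, let $<$ be the weight order on $k[x]$ with weight $w = -1$, so that $x < 1$, and let $Q = (x - x^2) = ((1-x)x)$. Since $L_<(1-x) = 1$, we have $1-x \in S_<$, hence $x \in Q^*$ and in fact $Q^* = (x) \supsetneqq Q$. Accordingly, $\{x - x^2\}$ is a standard basis of both $Q$ and $Q^*$, and $L_<(Q) = L_<(Q^*) = (x)$. \par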
To compute the leading ideal $L_<(I)$ we only need to compute a standard basis $G$ of $I$ and then extract the elements $L_<(f)$, $f \in G$, which generate $L_<(I)$. The following result shows that the computation of the leading ideal can be passed to the case of a monomial order. Note that the product of a monomial preorder with a monomial order is always a monomial order.
\begin{Theorem} \label{standard}
Let $<^*$ be the product of $<$ with a global monomial order. Let $I$ be an ideal in $k[\x]_<$ (which by Lemma~\ref{finer}\eqref{lRefineC} equals $k[\x]_{<^*}$). Then every standard basis $G$ of $I$ with respect to $<^*$ is also a standard basis of $I$ with respect to $<$. \end{Theorem}
\begin{proof}
Let $<^*$ be the product of $<$ with a global monomial order
$<'$. Let $G$ be a standard basis of $I$ with respect to $<^*$. By Lemma~\ref{finer}\eqref{lRefineA} and \eqref{lRefineB}, we have \[
L_{<'}\bigl(L_<(I)\bigr) = L_{<^*}(I) = L_{<^*}(G) \subseteq
L_{<'}\bigl(L_<(G)\bigr) \subseteq L_{<'}\bigl(L_<(I)\bigr).
\] This implies $L_{<'}(L_<(G)) = L_{<'}(L_<(I))$. Therefore, applying Theorem \ref{equality} to $<'$, we obtain $L_<(G) = L_<(I)$. \end{proof}
If~$<$ is a monomial order, there is an effective algorithm that computes a standard basis of a given ideal $I \subseteq k[\x]_<$ with respect to~$<$ (see \cite[Algorithm~1.7.8]{GP}). Since monomial orders are monomial preorders, we cannot get a more effective algorithm. For this reason we will not address computational issues like membership test and complexity for monomial preorders. \par
For global monomial preorders defined by matrices of integers, Corollary \ref{generating} and Theorem \ref{standard} were already proved by Kreuzer and Robbiano \cite[Propositions 4.2.14 and 4.2.15]{KR}. Note that they use the term Macaulay basis instead of standard basis. \par
For an ideal $I \subseteq k[\x]$, we also speak of a standard basis of $I$ with respect to a monomial preorder~$<$, meaning a standard basis $G \subseteq I$ of $I k[\x]_<$.
\begin{Theorem}
Let $I \subseteq k[\x]$ be a polynomial ideal. Then the set of all leading ideals of $I$ with respect to
monomial preorders is finite.
Hence, there exists a {\em universal standard basis} for $I$, i.e., a finite subset $G \subseteq
I$ that is a standard basis with respect to all monomial
preorders. \end{Theorem}
\begin{proof}
For monomial orders, this result was proved by Mora and Robbiano \cite[Proposition 4.1]{MR}.
It can also be deduced from a more recent result of Sikora \cite{Sikora:04} on the compactness of the space of all monomial orders.
By Theorem \ref{standard}, for each monomial preorder $<$, there exists a monomial order $<^*$
such that every standard basis of $I$ with respect to $<^*$ is also a standard basis of $I$ with respect to $<$.
Therefore, the set of all leading ideals of $I$ with respect to monomial preorders is finite. Taking the union of standard bases of $I$ with respect to finitely many monomial preorders realizing all these leading ideals, we obtain a universal standard basis.
\end{proof}
In the remainder of this paper, we will investigate the problem whether the leading ideal with respect to a monomial preorder $<$ can be used to study properties of the given ideal. \par
First, we will study the case of homogeneous ideals. Here and in what follows, the term ``homogeneous'' alone is used in the usual sense. In this case we can always replace a monomial preorder $<$ by a global monomial preorder.
\begin{Lemma} \label{homogeneous}
Let $I$ be a homogeneous ideal in $k[\x]$. Let $<^*$ be the product
of the degree order with $<$. Then~$1 <^* x_i$ for all $i$ and $L_{<^*}(I) = L_<(I)$.
\end{Lemma}
\begin{proof} Let $<'$ denote the degree order. Then $1 <' x_i$ for all $i$. Since $<^*$ is a refinement of $<'$, we also have $1 <^* x_i$ for all $i$. For every polynomial $f$, $L_{<'}(f)$ is a homogeneous component of $f$. In particular, $L_{<'}(f) = f$ if $f$ is homogeneous. Since $I$ is a homogeneous ideal, every homogeneous component of every polynomial of $I$ belongs to $I$. Therefore, $L_{<'}(I) = I$. By Lemma \ref{finer}\eqref{lRefineB}, this implies $L_{<^*}(I) = L_<(L_{<'}(I)) = L_<(I).$ \end{proof}
\begin{Corollary} Let $I$ be a homogeneous ideal in $k[\x]$. Then $L_<(I)$ is a homogeneous ideal. \end{Corollary}
\begin{proof} By Lemma \ref{homogeneous}, $L_<(I) = L_{<^*}(I)$. Since $<^*$ is a refinement of the degree order, $L_{<^*}(I)$ is a homogeneous ideal. \end{proof}
Let $HP_R(z)$ denote the Hilbert-Poincare series of a standard graded algebra $R$ over $k$, i.e. $$HP_R(z) := \sum_{t \ge 0} (\dim_k R_t)z^t,$$ where $R_t$ is the vector space of the homogeneous elements of degree $t$ of $R$ and $z$ is a variable. Note that $\dim_k R_t$ is the Hilbert function of $R$.
\begin{Theorem} \label{Hilbert} Let $I$ be a homogeneous ideal in $k[\x]$. Then $$HP_{k[\x]/I}(z) = HP_{k[\x]/L_<(I)}(z).$$ \end{Theorem}
\begin{proof} Let $<^*$ be the product of $<$ with a monomial order $<'$. Since $<^*$ is a monomial order, we can apply \cite[Theorem 5.2.6]{GP} to get $$HP_{k[\x]/I}(z) = HP_{k[\x]/L_{<^*}(I)}(z).$$ Since $L_{<^*}(I) = L_{<'}(L_<(I))$ by Lemma \ref{finer}\eqref{lRefineB}, we can also apply \cite[Theorem 5.2.6]{GP} to $<'$ and obtain $$HP_{k[\x]/L_<(I)}(z) = HP_{k[\x]/L_{<^*}(I)}(z).$$ Comparing the above formulas we obtain the assertion. \end{proof}
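For instance, let $I = (x^2 - y^2)$ in $k[x,y]$ and let $<$ be the weight order associated with $w = (1,0)$. Since $I$ is principal, $L_<(I) = (x^2)$, and indeed $$HP_{k[x,y]/I}(z) = HP_{k[x,y]/(x^2)}(z) = \frac{1-z^2}{(1-z)^2} = \frac{1+z}{1-z}.$$ \par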
\begin{Corollary} \label{homogen dim} Let $I$ be a homogeneous ideal in $k[\x]$. Then $$\dim k[\x]/I = \dim k[\x]/L_<(I).$$ \end{Corollary}
\begin{proof} By Theorem \ref{Hilbert}, $k[\x]/I$ and $k[\x]/L_<(I)$ share the same Hilbert function. As a consequence, they share the same Hilbert polynomial. Since the dimension of a standard graded algebra is the degree of its Hilbert polynomial, they have the same dimension. \end{proof}
We shall see in the next section that Corollary \ref{homogen dim} does not hold for arbitrary ideals in $k[\x]$ and $k[\x]_<$.
\section{Approximation by integral weight orders} \label{sWeight}
In the following we call a weight order $<_w$ {\em integral} if $w \in {\mathbb Z}^n$. The following result shows that on a finite set of monomials, any monomial preorder $<$ can be approximated by an integral weight order. This result is known for monomial orders \cite[Lemma 1.2.11]{GP}.
For monomial preorders, the approximation is more delicate since we have to deal with incomparable monomials, which must have the same weight. A complicated proof for global monomial preorders was given by the first two authors in \cite[Lemma 3.3]{KT}.
\begin{Lemma} \label{approx 1} For any finite set $S$ of monomials in $k[\x]$ we can find $w \in {\mathbb Z}^n$ such that $x^a < x^b$ if and only if $x^a <_w x^b$ for all $x^a, x^b \in S$. \end{Lemma}
\begin{proof} Let $<$ denote the weak order of ${\mathbb N}^n$ induced by the monomial preorder $<$ in $k[\x]$. By Lemma \ref{extension}, $<$ can be extended to a weak order of ${\mathbb Q}^n$. By Lemma \ref{kernel}, the set $E$ of the elements incomparable to $0$ is a linear subspace of ${\mathbb Q}^n$.
Let $s = \dim {\mathbb Q}^n/E$. Let $\phi: {\mathbb Q}^n \to {\mathbb Q}^s$ be a surjective linear map such that $\operatorname{ker} \phi = E$. \par Set $S' = \{\phi(a) - \phi(b)|\ a, b \in S, a < b\}$. If $\phi(a) - \phi(b) = - (\phi(a') - \phi(b'))$ for $a,b,a',b' \in S$, $a < b$, $a' <b'$, then $\phi(a+ a') = \phi(b + b')$. By Lemma \ref{kernel}, this implies that $a+a'$ and $b+b'$ are incomparable, which is a contradiction to the fact that $a + a' < b+b'$. Thus, if $c \in S'$, then $-c \not\in S'$.\par Now, we can find an integral vector $v \in {\mathbb Z}^s$ such that $v\cdot c < 0$ for all $c \in S'$. Thus, $a < b$ if and only if $v \cdot \phi(a) < v \cdot \phi(b)$ for all $a, b \in S$. We can extend $v$ to an integral vector $w \in {\mathbb Z}^n$ such that $w \cdot a = v \cdot \phi(a)$ for all $a \in {\mathbb Q}^n$. From this it follows that $a < b$ if and only if $w \cdot a < w \cdot b$ for all $a, b \in S$. Hence $x^a < x^b$ if and only if $x^a <_w x^b$. \end{proof}
Using Lemma \ref{approx 1} we can show that on a finite set of ideals, any monomial preorder $<$ can be replaced by an integral weight order. The case of several ideals will be needed in the sequel.
\begin{Theorem} \label{approx 2}
Let $I_1,\ldots,I_r$ be ideals in $k[\x]$. Then there exists an integral vector $w = (w_1,...,w_n) \in {\mathbb Z}^n$ such that $L_<(I_i) = L_{<_w}(I_i)$ for $i = 1,...,r$.
\end{Theorem}
\begin{proof}
Let $<^*$ be the product of $<$ with a global monomial order $<'$. Then $k[\x]_{<^*} = k[\x]_<$ by Lemma~\ref{finer}\eqref{lRefineC}.
For each~$i$, let $G_i \subset k[\x]$ be a standard basis of $I_ik[X]_<$ with respect to $<^*$. Then $L_<(I_i) = L_<(I_ik[\x]_<) = L_<(G_i)$ by Lemma \ref{leading}(a) and Theorem~\ref{standard}. Since $<^*$ is a monomial order, there exists a finite set $S_i$ of monomials such
that $G_i$ is a standard basis of $I_i$ with respect to any monomial
order coinciding with $<^*$ on $S_i$ \cite[Corollary 1.7.9]{GP}. \par
Let $S$ be the union of the set of all monomials of the polynomials in the
$G_i$ with $\cup_{i=1}^rS_i$. By Lemma \ref{approx 1}, there is an
integral vector $w \in {\mathbb Z}^n$ such that $L_{<_w}(f) = L_<(f)$ for
all $f \in S$. This implies $L_<(G_i) = L_{<_w}(G_i)$ for $i = 1,...,r$. Let $<_w^*$ be the product of $<_w$ with $<'$. For all $f \in S$,
it follows from the definition of the product of monomial orders
that
\[
L_{<_w^*}(f) = L_{<'}(L_{<_w}(f)) = L_{<'}(L_<(f)) = L_{<^*}(f).
\]
So $<_w^*$ coincides with $<^*$ on $S_i$. Therefore, every $G_i$ is a
standard basis of $I_i$ with respect to $<_w^*$. By
Theorem~\ref{standard}, this implies $L_{<_w}(G_i) = L_{<_w}(I_i)$.
Summing up we get $L_<(I_i) = L_<(G_i) = L_{<_w}(G_i) =
L_{<_w}(I_i)$. \end{proof}
Working with an integral weight order has the advantage that we can link an ideal to its leading ideal via the homogenization with respect to the weighted degree. \par
Let $w$ be an arbitrary vector in ${\mathbb Z}^n$. For every polynomial $f = \sum c_ax^a \in k[X]$ we set $\deg_wf := \max\{w\cdot a|\ c_{a} \neq 0\}$ and define $$f^{\operatorname{hom}}:= t^{\deg_wf}f\big(t^{-w_1}x_1,...,t^{-w_n}x_n\big),$$ where $t$ is a new indeterminate and $w_1,...,w_n$ are the components of $w$. Then $f^{\operatorname{hom}}$ is a weighted homogeneous polynomial in $R := k[X,t]$ with respect to the weighted degree $\deg x_i = w_i$ and $\deg t = 1$. We may view $f^{\operatorname{hom}}$ as the {\em homogenization} of $f$ with respect to $w$ (see e.g. Kreuzer and Robbiano \cite[Section 4.3]{KR}). If we write $f^{\operatorname{hom}}$ as a polynomial in $t$, then $L_{<_w}(f)$ is just the constant coefficient of $f^{\operatorname{hom}}$. \par
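For instance, let $w = (1,-1)$ on $k[x,y]$ and $f = x^2y + 1$. Then $\deg_w f = \max\{2-1, 0\} = 1$ and $$f^{\operatorname{hom}} = t\big((t^{-1}x)^2(ty) + 1\big) = x^2y + t,$$ which is weighted homogeneous of degree $1$ with respect to $\deg x = 1$, $\deg y = -1$, $\deg t = 1$, and whose constant coefficient in $t$ is $L_{<_w}(f) = x^2y$. \par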
For an ideal $I$ in $k[X]$, we denote by $I^{\operatorname{hom}}$ the ideal in $k[X,t]$ generated by the elements $f^{\operatorname{hom}}$, $f \in I$. We call $I^{\operatorname{hom}}$ the {\em homogenization of $I$} with respect to $w$. Note that $t$ is a non-zerodivisor in $R/I^{\operatorname{hom}}$ \cite[Proposition~4.3.5(e)]{KR}. It is clear that $$L_{<_w}(I) = (I^{\operatorname{hom}},t)/(t).$$ On the other hand, the map $x_i \to t^{-w_i}x_i$, $i = 1,...,n$, induces an automorphism of $R[t^{-1}]$. Let $\Phi_w$ denote this automorphism. Then $\Phi_w(f) = t^{-\deg_wf}f^{\operatorname{hom}}$. Therefore, $$\Phi_w(IR[t^{-1}]) = I^{\operatorname{hom}}R[t^{-1}].$$ From these observations we immediately obtain the following isomorphisms.
\begin{Lemma} \label{iso} With the above notations we have \par {\rm (a)} $R/(I^{\operatorname{hom}},t) \cong k[X]/L_{<_w}(I),$ \par {\rm (b)} $(R/I^{\operatorname{hom}})[t^{-1}] \cong (k[X]/I)[t,t^{-1}].$ \end{Lemma}
The above isomorphisms together with the following result show that there is a flat family of rings over $k[t]$ whose fiber over $0$ is $k[X]/L_{<_w}(I)$ and whose fiber over $t-\lambda$ is $k[X]/I$ for all $\lambda \in k \setminus \{0\}$.
\begin{Proposition} \label{flat} $R/I^{\operatorname{hom}}$ is a flat extension of $k[t]$. \end{Proposition}
This result was already stated for an arbitrary integral order $<_w$ by Eisenbud \cite[Theorem~15.17]{Ei}. However, the proof there required that all $w_i$ are positive. This case was also proved by Kreuzer and Robbiano in \cite[Theorem 4.3.22]{KR}. For the case that $w_i \neq 0$ for all $i$, it was proved by Greuel and Pfister \cite[Exercise~7.3.19 and Theorem~7.5.1]{GP}.
\begin{proof}
It is known that a module over a principal ideal domain is flat if and only if it is
torsion-free (see Eisenbud~\cite[Corollary~6.3]{Ei}). Therefore, we only need to
show that $k[X,t]/I^{\operatorname{hom}}$ is torsion-free. Let $g \in k[t] \setminus \{0\}$ and $F \in k[X,t] \setminus I^{\operatorname{hom}}$. Then we have to show that $g F \notin I^{\operatorname{hom}}$. Assume that $gF \in I^{\operatorname{hom}}$. Since $I^{\operatorname{hom}}$ is weighted homogeneous, we may assume that $g$ and $F$ are weighted homogeneous polynomials. Then $g = \lambda t^d$ for some $\lambda \in k$, $\lambda \neq 0$, and $d \ge 0$. Since $t$ is a non-zerodivisor in $R/I^{\operatorname{hom}}$, the assumption $gF \in I^{\operatorname{hom}}$ implies $F \in I^{\operatorname{hom}}$, a contradiction.
\end{proof}
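For instance, for $w = (1,-1)$ on $k[x,y]$ and $I = (x^2y + 1)$ we have $I^{\operatorname{hom}} = (x^2y + t)$. The fiber of $R/I^{\operatorname{hom}}$ over $0$ is $k[x,y]/(x^2y) = k[x,y]/L_{<_w}(I)$, while for $\lambda \neq 0$ the fiber over $t - \lambda$ is $k[x,y]/(x^2y + \lambda)$, which is isomorphic to $k[x,y]/I$ via the substitution $x \to \lambda x$, $y \to \lambda^{-1} y$. \par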
Now we will use the above construction to study the relationship between the dimension of $I$ and $L_<(I)$. We will first investigate the case where $I$ is a prime ideal.
\begin{Lemma} \label{prime} Let $P$ be a prime ideal of $k[X]$ such that $L_<(P) \neq k[\x]$. Let $Q$ be an arbitrary minimal prime of $L_<(P)$. Then $$\dim k[\x]/Q = \dim k[\x]/P. $$ \end{Lemma}
\begin{proof} By Theorem \ref{approx 2} we may assume that $<$ is an integral weight order $<_w$. Let $P^{\operatorname{hom}}$ denote the homogenization of $P$ with respect to $w$. Then $P^{\operatorname{hom}}$ is a prime ideal \cite[Proposition~4.3.10(d)]{KR}. By Lemma \ref{iso}(a), there is a minimal prime $Q'$ of $(P^{\operatorname{hom}},t)$ such that $Q \cong Q'/(t)$. Since $t$ is a non-zerodivisor in $R/P^{\operatorname{hom}}$, $\operatorname{ht} Q' = \operatorname{ht} P^{\operatorname{hom}}+1$ by Krull's principal ideal theorem. By the automorphism $\Phi_w$, $\operatorname{ht} P^{\operatorname{hom}} = \operatorname{ht} P^{\operatorname{hom}}R[t^{-1}] = \operatorname{ht} PR[t^{-1}] = \operatorname{ht} P$. Therefore, $$\operatorname{ht} Q = \operatorname{ht} Q' - 1 = \operatorname{ht} P^{\operatorname{hom}} = \operatorname{ht} P.$$ Hence, $\dim k[\x]/Q = n - \operatorname{ht} Q = n- \operatorname{ht} P = \dim k[\x]/P.$ \end{proof}
It was conjectured by Kredel and Weispfenning \cite{KW} that if $<$ is a global monomial order, then $k[\x]/L_<(P)$ is equidimensional, i.e. $\dim k[\x]/Q = \dim k[\x]/L_<(P)$ for every minimal prime $Q$ of $L_<(P)$. This conjecture was settled by Kalkbrener and Sturmfels \cite[Theorem 1]{KS} if $k$ is an algebraically closed field (see also \cite[Theorem 6.7]{HT}). Lemma \ref{prime} extends their result to any monomial preorder.
\begin{Theorem} \label{dim} Let $I$ be an ideal of $k[X]$ and $I^* := Ik[X]_< \cap k[X]$. Then \par {\rm (a)} $\dim k[X]/L_<(I) = \dim k[X]/I^* \le \dim k[\x]/I$. \par {\rm (b)} If $k[\x]/I^*$ is equidimensional, then so is $k[X]/L_<(I)$. \end{Theorem}
\begin{proof} It is clear that $I^* = k[\x]$ if and only if $1 \in Ik[\x]_< $ if and only if $L_<(I) = k[\x]$. Therefore, we may assume that $I^* \neq k[\x]$. \par
Let $P$ be a minimal prime of $I^*$. Then $P \cap S_< = \emptyset$ because $P$ is the contraction of a minimal prime of $Ik[\x]_<$. This means $L_<(P) \neq k[\x]$. By Lemma \ref{prime}, $\dim k[\x]/L_<(P) = \dim k[\x]/P.$ Choose $P$ such that $\dim k[\x]/P = \dim k[\x]/I^*$. Since $L_<(I) \subseteq L_<(P)$, we have $$\dim k[\x]/L_<(I) \ge \dim k[\x]/L_<(P) = \dim k[\x]/I^*.$$ \par
To prove the converse inequality we use Theorem \ref{approx 2} to choose an integral weight order $<_w$ such that $L_<(I) = L_{<_w}(I)$ and $L_<(P) = L_{<_w}(P)$ for all minimal primes $P$ of $I$. Then $L_<(I) \cong (I^{\operatorname{hom}},t)/(t)$ and $L_<(P) \cong (P^{\operatorname{hom}},t)/(t)$. \par
Let $Q$ be an arbitrary minimal prime of $L_<(I)$. Then there is a minimal prime $Q'$ of $(I^{\operatorname{hom}},t)$ such that $Q \cong Q'/(t)$. Let $P'$ be a minimal prime of $I^{\operatorname{hom}}$ contained in $Q'$. Then $Q'$ is also a minimal prime of $(P',t)$. By \cite[Proposition 4.3.10]{KR}, $P' = P^{\operatorname{hom}}$ for some minimal prime $P$ of $I$. Hence, $L_<(P) \cong (P',t)/(t).$ Therefore, $Q$ is a minimal prime of $L_<(P)$. By Lemma~\ref{prime}, $$\dim k[\x]/Q = \dim k[\x]/P.$$ Since $(P',t) \subseteq Q'$, $L_<(P) \subseteq Q \neq k[\x]$. This implies $P \cap S_< = \emptyset$. Hence, $P$ is a minimal prime of $I^*$. Therefore, $$\dim k[\x]/P \le \dim k[\x]/I^*.$$ Since there exists $Q$ such that $\dim k[\x]/Q = \dim k[\x]/L_<(I)$, we obtain $$\dim k[\x]/L_<(I) \le \dim k[\x]/I^*.$$ So we can conclude that $\dim k[X]/L_<(I) = \dim k[X]/I^* \le \dim k[\x]/I.$ \par
If $k[\x]/I^*$ is equidimensional, $\dim k[\x]/P = \dim k[\x]/I^*$ for all minimal primes $P$ of $I^*$. As we have seen above, for every minimal prime $Q$ of $L_<(I)$, there is a minimal prime $P$ of $I^*$ such that $\dim k[\x]/Q = \dim k[\x]/P$. Therefore, $\dim k[\x]/Q = \dim k[\x]/I^*$. From this it follows that $k[\x]/L_<(I)$ is equidimensional. \end{proof}
\begin{Corollary} \label{global} Let $I$ be an ideal of $k[X]$. Let $<$ be a global monomial preorder. Then\par {\rm (a)} $\dim k[\x]/L_<(I) = \dim k[\x]/I$. \par {\rm (b)} If $k[\x]/I$ is equidimensional, then so is $k[\x]/L_<(I)$. \end{Corollary}
\begin{proof} For a global monomial preorder $<$, we have $I^* = I$ because $k[\x]_< = k[\x]$. Therefore, the statements follow from Theorem \ref{dim}. \end{proof}
\begin{Remark} {\rm If $n \ge 2$ and $<$ is not a global monomial preorder, we can always find an ideal $I$ of $k[\x]$ such that $$\dim k[\x]/L_<(I) < \dim k[\x]/I.$$ To see this choose a variable $x_i < 1$. Let $I = (x_i-1) \cap (X)$. Then $I^* = (X)$. By Theorem \ref{dim}(a), $\dim k[\x]/L_<(I) = \dim k[\x]/I^* = 0$, whereas $\dim k[\x]/I = n-1 > 0$.} \end{Remark}
Now we turn our attention to ideals in the ring $k[\x]_<$. First, we observe that \linebreak $\dim k[\x]_< = n$ because $X$ generates a maximal ideal of $k[\x]_<$ which has height $n$. However, other maximal ideals of $k[\x]_<$ may have height less than $n$. The following result shows that such maximal ideals are closely related to the set $$X_- := \{x_i \mid x_i <1\}.$$
\begin{Lemma} \label{maximal} Let $Q$ be a maximal ideal of $k[X]_<$. Then $\operatorname{ht} Q = n$ if and only if $X_- \subseteq Q$. \end{Lemma}
\begin{proof} Assume that $\operatorname{ht} Q = n$. Let $Q' = Q \cap k[\x]$. Then $\operatorname{ht} Q' = \operatorname{ht} Q = n$. Hence $Q'$ is a maximal ideal of $k[\x]$. This implies $Q' \cap k[x_i] \neq 0$ for all $i$. Since $Q' \cap k[x_i]$ is a prime ideal, there is a monic irreducible polynomial $f_i$ generating $Q' \cap k[x_i]$. For $x_i < 1$, we must have $f_i = x_i$ because otherwise $L_<(f_i)$ would be the constant coefficient of $f_i$, which would imply $Q' \cap S_< \neq \emptyset$, a contradiction. Therefore, $X_- \subseteq Q' \subseteq Q$.\par
Conversely, assume that $X_- \subseteq Q$. Then $Q/(X_-)$ is a maximal ideal of the ring $k[\x]_</(X_-)$, which is isomorphic to the polynomial ring $A := k[X \setminus X_-]$ because every element of $S_<$ is congruent to $1$ modulo $(X_-)$. Therefore, $\operatorname{ht} Q/(X_-) = \dim A = n - \operatorname{ht} (X_-)$. Hence $$\operatorname{ht} Q = \operatorname{ht} Q/(X_-) + \operatorname{ht} (X_-) = n.$$ \end{proof}
\begin{Theorem} \label{height}
Let $I$ be an ideal of $k[X]_<$. Then
\begin{enumerate}
\renewcommand{\alph{enumi}}{\alph{enumi}}
\item $\operatorname{ht} L_<(I) = \operatorname{ht} I$,
\item $\dim k[\x]/L_<(I) \ge \dim k[\x]_</I$,
\item $\dim k[\x]/L_<(I) = \dim k[\x]_</I$ if and only if
$1 \not\in (P,X_-)$ for at least one prime $P$ of $I$ with
$\operatorname{ht} P = \operatorname{ht} I$.
\end{enumerate} \end{Theorem}
\begin{proof} Let $J = I \cap k[\x]$. By Lemma \ref{leading}(b), $L_<(I) = L_<(J)$. Since $I = Jk[X]_<$, we have $J^* = J$. By Theorem \ref{dim}(a), this implies $\dim k[X]/L_<(J) = \dim k[X]/J$. Hence $\operatorname{ht} L_<(J) = \operatorname{ht} J$. By the correspondence between ideals in a localization and their contractions, $\operatorname{ht} J = \operatorname{ht} I$. So we can conclude that $\operatorname{ht} L_<(I) = \operatorname{ht} I$. \par
From this it follows that $$\dim k[X]/L_<(I) = n - \operatorname{ht} L_<(I) = \dim k[\x]_< - \operatorname{ht} I \ge \dim k[\x]_</I.$$
The above formula also shows that $\dim k[\x]/L_<(I) = \dim k[\x]_</I$ if and only if $n - \operatorname{ht} I = \dim k[\x]_</I.$ Being a localization of $k[\x]$, $k[\x]_<$ is a catenary ring. Therefore, the latter condition is satisfied if and only if there exists a prime $P$ of $I$ with $\operatorname{ht} P = \operatorname{ht} I$ such that $P$ is contained in a maximal ideal of height $n$. \par
Assume that a prime ideal $P$ is contained in a maximal ideal $Q$ of height $n$. Then $X_- \subset Q$ by Lemma \ref{maximal}. Hence, $1 \not\in (P,X_-)$ because $(P,X_-) \subseteq Q$. Conversely, assume that $1 \not\in (P,X_-)$. Then, any maximal ideal containing $(P,X_-)$ has height $n$ by Lemma \ref{maximal}. \end{proof}
We would like to point out the phenomenon that if $I$ is an ideal of $k[\x]$, then \linebreak $\dim k[\x]/L_<(I) \le \dim k[\x]/I$ by Theorem \ref{dim}(a), whereas if $I$ is an ideal of $k[\x]_<$, then $\dim k[\x]/L_<(I) \ge \dim k[\x]_</I$ by Theorem \ref{height}(b).\par
\begin{Remark} {\rm It is claimed in \cite[Corollary 7.5.5]{GP} that $$\dim k[X]_</I = \dim k[X]/L_<(I)$$ for any monomial order $<$. This is not true. For instance, let $<$ be the weight order on $k[x, y]$ with weight $(1,-1)$, refined, if desired, to a monomial order. Consider the irreducible polynomial $f = x^2y + 1$ and the ideal $I = (f)$ in $k[x, y]_<$. Since $L_<(f) = x^2y$, $I$ is a proper ideal and since $f$ is irreducible, $I$ is a prime ideal. Since $1 \in (I,y)$, we have $\dim k[x,y]_</I < \dim k[x,y]/L_<(I)$ by Theorem \ref{height}(c). Actually, $I$ is a maximal ideal of $k[x,y]_<$ because any strictly bigger prime $Q$ has height 2 and must therefore contain $y$ by Lemma \ref{maximal}. This implies $1 \in Q$, a contradiction.} \end{Remark}
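To make the counterexample concrete, here is a small computational sketch (assuming SymPy; the helper `leading_form` is ours, not from the paper) that extracts the maximal-weight part of a polynomial under a weight order, confirming $L_<(f) = x^2y$ for $f = x^2y + 1$ and weight $(1,-1)$:

```python
# Hypothetical helper (not from the paper): under a weight order <_w, the
# leading form of f is the sum of its terms of maximal w-weight.
from sympy import Mul, Poly, symbols

def leading_form(f, gens, w):
    """Sum of the terms of f whose w-weight is maximal."""
    terms = Poly(f, *gens).terms()  # [(exponent tuple, coefficient), ...]
    wt = lambda exp: sum(wi * ei for wi, ei in zip(w, exp))
    m = max(wt(exp) for exp, _ in terms)
    return sum(c * Mul(*[g**e for g, e in zip(gens, exp)])
               for exp, c in terms if wt(exp) == m)

x, y = symbols('x y')
f = x**2 * y + 1
print(leading_form(f, (x, y), (1, -1)))  # x**2*y
```

Since the weight of $x^2y$ is $2 - 1 = 1$ while the constant term has weight $0$, the leading form is $x^2y$, so $I = (f)$ is a proper ideal of $k[x,y]_<$ as claimed.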
The following result characterizes the monomial preorders for which the equality in Theorem~\ref{height}(c) always holds.
\begin{Proposition} \label{cDim}
The implications
\[
(a) \ \Longrightarrow \ (b) \ \Longleftrightarrow \ (c) \
\Longleftrightarrow \ (d) \ \Longleftrightarrow \ (e) \
\Longrightarrow (f)
\]
hold for the following conditions on the monomial preorder~$<$:
\begin{enumerate}
\renewcommand{\alph{enumi}}{\alph{enumi}}
\item The monomial preorder~$<$ is global or local.
\item The monomial preorder can be defined, in the sense of
Theorem~\ref{Robbiano}, by a
real matrix $\left(\begin{smallmatrix} A \\
B \end{smallmatrix}\right)$ composed of an upper part $A$
whose entries are all nonpositive, and a lower part $B$ whose
entries are all nonnegative.
\item If $x_i < 1$ then $t < 1$ for every monomial~$t$ that is
divisible by~$x_i$.
\item Every maximal ideal of $k[\x]_<$ has height~$n$.
\item For every ideal $I \subseteq k[\x]_<$, the equality $\dim
k[\x]/L_<(I) = \dim k[\x]_</I$ holds.
\item If $I \subseteq k[\x]_<$ is an ideal such that $k[\x]_</I$ is
equidimensional, then also \linebreak $k[\x]/L_<(I)$ is
equidimensional.
\end{enumerate} \end{Proposition}
\begin{proof}
It is clear that~(a) implies~(b) and~(b) implies~(c). One can
deduce~(b) from~(c) by using that in a matrix defining~$<$ one can
add a multiple of any row to a lower row. Moreover, (c) holds if and only if
$L_<(1 + g) = 1$ for every $g \in (X_-)$, which is equivalent to the
condition that for all $g \in (X_-)$, $1 + g$ is not contained in
any maximal ideal of $k[\x]_<$, or, equivalently, that $X_-$ is
contained in all maximal ideals. By Lemma~\ref{maximal}, this means
that the condition~(d) holds.
By Theorem~\ref{height}(c), the condition~(e) holds if and only if
$1 \notin (P,X_-)$ for all primes $P \in \operatorname{Spec}(k[\x]_<)$, which is
equivalent to $X_- \subseteq Q$ for all maximal ideals $Q \subset
k[\x]_<$. By Lemma~\ref{maximal}, this means that the condition~(d)
holds.
We finish the proof by showing that~(d) implies~(f). If~(d) holds,
then all primes $P \subset k[\x]_<$ satisfy $\operatorname{ht} P = n - \dim
k[\x]_</P$. So if $I$ is an ideal with $k[\x]_</I$ equidimensional, then
all minimal primes of $I$ have the same height. Therefore the same
is true for all minimal primes of $J := k[\x] \cap I$. So $J$ is
equidimensional, and since $J = J^*$, Theorem~\ref{dim}(b) tells us
that $k[\x]/L_<(J)$ is equidimensional. But $L_<(I) = L_<(J)$, and we are
done. \end{proof}
For a moment let $I$ be the defining ideal of an affine variety $V$. If $<$ is the degree order, then $<$ is a global monomial preorder. In this case, $L_<(I)$ describes the part at infinity of $V$. If $<$ is the reverse degree order, then $<$ is a local monomial preorder. In this case, $k[\x]/L_<(I)$ corresponds to the tangent cone of $V$ at the origin. Therefore, the implication (a) $\Longrightarrow$ (f) of Proposition~\ref{cDim} has the following interesting consequences.
\begin{Corollary}
Let $V$ be an affine variety.
\begin{enumerate}
\renewcommand{\alph{enumi}}{\alph{enumi}}
\item If $V$ is equidimensional, then so is its part at infinity.
\item If $V$ is equidimensional at the origin, then so is its
tangent cone.
\end{enumerate} \end{Corollary}
In this context, the question of connectedness is also interesting. A far-reaching result was obtained by Varbaro~\cite{Var}, whose Theorem~2.5, expressed in the language of this paper, says the following: If $I \subseteq k[\x]$ is an ideal such that $\operatorname{Spec}(k[\x]/I)$ is connected in dimension $k \ge 0$ (i.e., its dimension is bigger than~$k$ and removing a closed subset of dimension less than~$k$ does not disconnect it), then for any global monomial preorder~$<$, also $\operatorname{Spec}\left(k[\x]/L_<(I)\right)$ is connected in dimension~$k$. The following examples give a negative answer to the question whether this result carries over to general or local monomial preorders. We thank F.-O. Schreyer for the second example.
\begin{Example} \label{Schreyer}
{\rm (1) Let~$<$ be the weight order on $k[x_1,x_2]$ given by $w =
(1,-1)$. For the prime ideal $I \subseteq k[x_1,x_2]_<$
generated by $(x_1^2 + 1) x_2 + x_1$, the leading ideal is
$L_<(I) = \bigl(x_1(x_1 x_2 + 1)\bigr)$. By
Theorem~\ref{height}, $k[x_1,x_2]_</I$ has dimension~$1$, so its
spectrum is connected in dimension~$0$. But
$\operatorname{Spec}\bigl(k[x_1,x_2]/L_<(I)\bigr)$ is not connected.\par
\noindent (2) In $k[x_0,\ldots,x_4]$ consider the polynomials
\begin{align*}
f_1 & = x_0 + x_2 x_3 + x_1 x_4 - x_0 x_4 - x_0^2, \\
f_2 & = x_3 - x_3 x_4 - x_1 x_3 + x_1 x_2 - x_0 x_3 + x_0 x_2, \\
f_3 & = x_4 - x_3^2 + x_2 x_3 - x_1^2 - x_0 x_4 + x_0 x_1.
\end{align*} The tangent cone at the origin is given by the
ideal $(x_0,x_3,x_4)$ and, as a short computation shows, at the
point $(1,0,0,0,0)$ it is given by $(x_0 + x_4,x_1,x_2)$. The
projection $\pi$: $\mathbb{A}^5 \to \mathbb{A}^4$ ignoring the
first coordinate merges these two points, so applying it to the
variety $X$ given by the $f_i$ will produce a new variety $Y$
whose tangent cone at the origin is the union of two planes
meeting at one point. This can be easily verified, at least in
characteristic~$0$, by using a computer algebra system such as
MAGMA~\cite{Mag}.
Being regular at the origin, $X$ is locally
integral at the origin, and so the same is true of $Y$. So
replacing $Y$ by its (only) irreducible component passing through
the origin, we obtain a surface that is connected in
dimension~$1$, but its tangent cone at the origin is not.
We produced this example by starting with the equations for the
component of $Y$ through the origin, which were provided to us
by F.-O. Schreyer.
} \end{Example}
\section{Descent of properties and invariants} \label{sLoci}
Let $<$ be an arbitrary monomial preorder in $k[\x]$. In this section, we will again relate properties of an ideal and its leading ideal. Our results follow the philosophy that the leading ideal never behaves better than the ideal itself, so the passage to the leading ideal is a ``degeneration.'' \par
First, we will concentrate on the loci of local properties. Let ${\mathbb P}$ denote a property which an arbitrary local ring may have or not have. For a noetherian ring $A$ we let $\operatorname{Spec}_{\mathbb P}(A)$ denote the ${\mathbb P}$-locus of $A$, i.e. the set of the primes $P$ such that the local ring $A_P$ satisfies ${\mathbb P}$. \par
We say that ${\mathbb P}$ is an {\em open property} if for any finitely generated algebra $A$ over a field, $\operatorname{Spec}_{\mathbb P}(A)$ is a Zariski-open subset of $\operatorname{Spec} (A)$, i.e. $\operatorname{Spec}_{\operatorname{N{\mathbb P}}}(A) = V(Q)$ for some ideal $Q$ of $A$, where ${\operatorname{N{\mathbb P}}}$ is the negation of ${\mathbb P}$ and $$V(Q) := \{P \in \operatorname{Spec}(A) \mid Q \subseteq P\}.$$ We say that ${\mathbb P}$ is a {\em faithful property} if for every noetherian local ring $(A,{\mathfrak m})$, the following conditions are satisfied: \par (F1) If $A[t]_{{\mathfrak m} A[t]}$ has ${\mathbb P}$, where $t$ is an indeterminate, then $A$ has ${\mathbb P}$. \par (F2) If $A/tA$ has ${\mathbb P}$ for some non-zerodivisor $t \in {\mathfrak m}$, then $A$ has ${\mathbb P}$.
\begin{Proposition} \label{open} ${\mathbb P}$ is open and faithful if ${\mathbb P}$ is one of the following properties:\par {\rm (a)} regular, \par {\rm (b)} complete intersection, \par {\rm (c)} Gorenstein, \par {\rm (d)} Cohen-Macaulay, \par {\rm (e)} $S_r$ ($r \ge 1$), \par {\rm (f)} normal, \par {\rm (g)} integral (domain),\par {\rm (h)} reduced. \end{Proposition}
\begin{proof} It is known that any finitely generated algebra over a field is excellent \cite[Proposition 7.8.3(ii)]{EGA42}. If a ring $A$ is excellent, then $\operatorname{Spec}_{\mathbb P}(A)$ is open when ${\mathbb P}$ is (a), (d), (e), (f) \cite[Proposition 7.8.3(iv)]{EGA42}, (b), (c) \cite[Corollary 3.3 and Corollary~1.5]{GM}. If ${\mathbb P}$ is (g) or (h), ${\mathbb P}$ is obviously open. \par
The faithfulness of (a)-(d) is more or less straightforward. Since the map $A \to A[t]_{{\mathfrak m} A[t]}$ is faithfully flat, we have (F1) for (e) and $R_{r-1}$ by \cite[Proposition 6.4.1 and Proposition 6.5.3]{EGA42}. Since a local ring is reduced or normal if it satisfies $S_1$ and $R_0$ or $S_2$ and $R_1$ \cite[Proposition 5.4.5 or Theorem 5.8.6]{EGA42}, this also proves (F1) for (f) and (h). For (e), (f) and (h) we have (F2) by \cite[Proposition 2.2 and Corollary 2.4]{CN} for the trivial grading. For (g), (F1) is clear and (F2) follows from \cite[Proposition 3.4.5]{EGA42}. \end{proof}
The following theorem is the main result of this section.
\begin{Theorem} \label{dim locus} Let ${\mathbb P}$ be an open and faithful property. Let $I$ be an ideal of $k[\x]_<$. Then $$\dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big) \le \dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big).$$ \end{Theorem}
As we will see, Theorem \ref{dim locus} follows from the following stronger result, which relates the ${\operatorname{N{\mathbb P}}}$-loci of $k[\x]_</I$ and $k[\x]/L_<(I)$.
\begin{Theorem} \label{locus} Let ${\mathbb P}$ be an open and faithful property. Let $I \subseteq J$ be ideals in $k[\x]_<$ such that $V(J/I) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big)$. Then $$V\big(L_<(J)/L_<(I)\big) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big).$$ \end{Theorem}
\begin{proof} Set $I^* = I \cap k[\x]$ and $J^* = J \cap k[\x]$. Then $I^* \subseteq J^*$. By Lemma \ref{leading}, $L_<(I) = L_<(I^*)$ and $L_<(J) = L_<(J^*)$. Let $P$ be an arbitrary minimal prime of $J^*$ and $\wp$ the corresponding minimal prime of $J$. Then $(k[\x]/I^*)_P = (k[\x]_</I)_\wp$. Since $V(J/I) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big)$, $(k[\x]_</I)_\wp$ does not have ${\mathbb P}$. Hence, $(k[\x]/I^*)_P$ does not have ${\mathbb P}$. This shows that $V(J^*/I^*) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/I^*\big)$. \par
Now, replacing $I$ and $J$ by $I^*$ and $J^*$ we may assume that $I \subseteq J$ are ideals in $k[\x]$ such that $V(J/I) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/I\big)$. By Theorem \ref{approx 2} we may assume that $<$ is an integral weight order $<_w$. Suppose that $V\big(L_<(J)/L_<(I)\big) \not\subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big).$ Then there exists a minimal prime $P$ of $L_<(J)$ such that $\big(k[X]/L_<(I)\big)_P$ has ${\mathbb P}$. Let $R = k[X,t]$ and $I^{\operatorname{hom}}, J^{\operatorname{hom}}$ be the homogenizations of $I,J$ in $R$ with respect to $w$. By Lemma \ref{iso}, we have \begin{align*} R/(I^{\operatorname{hom}},t) & \cong k[X]/L_<(I),\\ R/(J^{\operatorname{hom}},t) & \cong k[X]/L_<(J). \end{align*} Therefore, there exists a minimal prime $P'$ of $(J^{\operatorname{hom}},t)$ such that $$\big(R/(I^{\operatorname{hom}},t)\big)_{P'} \cong \big(k[X]/L_<(I)\big)_P.$$ Since $t$ is a non-zerodivisor in $R/I^{\operatorname{hom}}$, using the faithfulness of ${\mathbb P}$ we can deduce that $\big(R/I^{\operatorname{hom}}\big)_{P'}$ also has ${\mathbb P}$. \par
Let $Q'$ be a minimal prime of $J^{\operatorname{hom}}$ such that $Q' \subseteq P'$. Since ${\mathbb P}$ is an open property, $\big(R/I^{\operatorname{hom}}\big)_{Q'}$ also has ${\mathbb P}$. Since $t$ is a non-zerodivisor in $R/J^{\operatorname{hom}}$, $t \not\in Q'$. Therefore, $Q'R[t^{-1}]$ is a prime ideal and $$\big(R/I^{\operatorname{hom}}\big)_{Q'} = (R/I^{\operatorname{hom}})[t^{-1}]_{Q'R[t^{-1}]}.$$ Let $\Phi_w$ be the automorphism of $R[t^{-1}]$ introduced before Lemma \ref{iso}. We know that $\Phi_w(I^{\operatorname{hom}}R[t^{-1}]) = IR[t^{-1}]$ and $\Phi_w(J^{\operatorname{hom}}R[t^{-1}])= JR[t^{-1}]$. Thus, $\Phi_w(Q'R[t^{-1}]) = QR[t^{-1}]$ for some minimal prime $Q$ of $J$ and $$(R/I^{\operatorname{hom}})[t^{-1}]_{Q'R[t^{-1}]} \cong (R/IR)[t^{-1}]_{QR[t^{-1}]}.$$ It is easy to see that $$(R/IR)[t^{-1}]_{QR[t^{-1}]} = (k[X]/I)[t]_{QR}.$$ Therefore, $(k[X]/I)[t]_{QR} \cong \big(R/I^{\operatorname{hom}}\big)_{Q'}$ has ${\mathbb P}$. Since ${\mathbb P}$ is faithful, $(k[X]/I)_Q$ also has ${\mathbb P}$. So we obtain a contradiction to the assumption that $V(J/I) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/I\big)$. \end{proof}
Now, we are ready to prove Theorem~\ref{dim locus}.
\begin{proof}[Proof of Theorem~\ref{dim locus}]
Let $J$ be the defining ideal of the ${\operatorname{N{\mathbb P}}}$-locus of $k[\x]_</I$, i.e., \linebreak $V(J/I) = \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big)$.
Then $\dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big) = \dim k[\x]_</J$.
By Theorem~\ref{height}(b), $\dim k[\x]_</J \le \dim k[\x]/L_<(J)$.
By Theorem~\ref{locus}, $V\big(L_<(J)/L_<(I)\big) \subseteq \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big).$
Hence, $\dim k[\x]/L_<(J) \le \dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big).$
So we can conclude that $\dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]_</I\big) \le \dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big)$. \end{proof}
\begin{Remark} {\rm Theorem \ref{locus} still holds if we replace the assumption on the openness of ${\mathbb P}$ by the weaker condition that if $A_P$ has ${\mathbb P}$, then so does $A_Q$ for all primes $Q \subset P$. This condition is actually used in the proof of Theorem \ref{locus}. The openness of ${\mathbb P}$ is only needed to have the dimension of the ${\operatorname{N{\mathbb P}}}$-loci in Theorem \ref{dim locus}. Moreover, one can also replace property (F2) by the weaker but more complicated condition that $A$ has ${\mathbb P}$ if $A/tA$ has ${\mathbb P}$ for some non-zerodivisor $t$ of $A$ such that $A$ is flat over $k[t]$, where $A$ is assumed to be a local ring essentially of finite type over $k$. In fact, we have used (F2) for a local ring which is of this type by Proposition \ref{flat}. This shows that Theorems~\ref{dim locus} and~\ref{locus} extend to the case that ${\mathbb P}$ is one of the following properties: the Cohen-Macaulay defect or the complete intersection defect is at most~$r$, where $r$ is a fixed integer.} \end{Remark}
The proof of Theorem \ref{locus} shows that it also holds for ideals in $k[\x]$. However, the following example shows that Theorem \ref{dim locus} does not hold if $I$ is an ideal of $k[\x]$.
\begin{Example} {\rm Consider an affine variety that has the origin as a regular point
but has singularities elsewhere, such as the curve given by $I =
\bigl(y^2 - (x - 1)^2x\bigr) \subseteq k[x,y]$ with char$(k) \ne
2$. In such an example, if ${\mathbb P}$ is the property {\em regular} and $<$ is a local monomial preorder, we
have $\dim \operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/I\big) \ge 0$ but $\dim
\operatorname{Spec}_{\operatorname{N{\mathbb P}}}\big(k[\x]/L_<(I)\big) < 0$.
} \end{Example}
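The claims in this example are easy to check computationally. The following sketch (assuming SymPy) applies the Jacobian criterion to the curve $y^2 - (x-1)^2x = 0$: it is singular exactly at $(1,0)$ and regular at the origin.

```python
# Sketch (assumes SymPy): locate the singular points of the curve
# f = y^2 - (x-1)^2 x via the Jacobian criterion (f = f_x = f_y = 0).
from sympy import diff, solve, symbols

x, y = symbols('x y')
f = y**2 - (x - 1)**2 * x

singular = solve([f, diff(f, x), diff(f, y)], [x, y], dict=True)
print(singular)  # the only singular point is (1, 0)
print(f.subs({x: 0, y: 0}),
      diff(f, x).subs({x: 0, y: 0}))  # 0 -1: the origin lies on the curve
                                      # and is a regular point
```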
Theorem \ref{locus} shows that if $\operatorname{Spec}_{\operatorname{N{\mathbb P}}}(k[\x]/L_<(I)) = \emptyset$, then $\operatorname{Spec}_{\operatorname{N{\mathbb P}}}(k[\x]_</I) = \emptyset$. Hence, we have the following consequence.
\begin{Corollary} \label{all} Let ${\mathbb P}$ be an open and faithful property. Let $I$ be an ideal in $k[\x]_<$. If ${\mathbb P}$ holds at all primes of $k[\x]/L_<(I)$, then it also holds at all primes of $k[\x]_</I$. \end{Corollary}
For a positive integral weight order $<_w$, Bruns and Conca \cite[Theorem 3.1]{BC} show that the properties Gorenstein, Cohen-Macaulay, normal, integral, reduced are passed from $k[\x]/L_{<_w}(I)$ to $k[\x]/I$. Their proof is based on the positively graded structure of $k[\x]$ induced by $w$, which is not available for an arbitrary integral weight order. \par
The following corollary gives a reason why it is often easier to work with $L_<(I)$ instead of $I$.
\begin{Corollary} \label{MthenAll}
Let ${\mathbb P}$ be an open and faithful property, and assume that the
monomial preorder~$<$ is such that $1$ is comparable to all other
monomials. (This assumption is satisfied if $x_i > 1$ for all~$i$ or
if $<$ is local or if $<$ is a monomial order.) Let $I$ be a proper
ideal in $k[\x]_<$. If ${\mathbb P}$ holds at the maximal ideal
$\mathfrak{m} = (X)/L_<(I)$ of $k[\x]/L_<(I)$, then it also holds at
all primes of $k[\x]_</I$. \end{Corollary}
\begin{proof}
Assume that $\operatorname{Spec}_{\operatorname{N{\mathbb P}}}(k[\x]_</I) \ne \emptyset$. Then the ideal $J$ in
Theorem~\ref{locus} can be chosen to be proper. Therefore $L_<(J)$
is also a proper ideal, and from the hypothesis on $<$ and the fact
that $L_<(J)$ is $<$-homogeneous it follows that $L_<(J) \subseteq
(X)$. By Theorem~\ref{locus} this implies that ${\mathbb P}$ does not hold at
$\mathfrak{m}$. \end{proof}
Moreover, we can also prove the descent of primality.
\begin{Theorem} \label{tIntegral} Let $I$ be an ideal of $k[\x]_<$ such that $L_<(I)$ is a prime ideal. Then $I$ is a prime ideal. \end{Theorem}
\begin{proof}
Choose a global monomial order~$<'$ and let $<^*$ be the product of
$<$ with~$<'$. Then $<^*$ is a monomial order, and $k[\x]_{<^*} = k[\x]_<$ by
Lemma \ref{finer}(c).
Let $G$ be a standard basis of $I$ with respect to~$<^*$. We have to show that if $f,g \in k[\x]_< \setminus I$, then $f g \not\in I$. Without restriction we may replace $f, g$ by their weak normal forms with respect to $G$ (see \cite[Definition~1.6.5]{GP}). Then $L_{<^*}(f) \notin L_{<^*}(I)$ and $L_{<^*}(g) \notin L_{<^*}(I)$. Using Lemma~\ref{finer} we obtain
\[
L_{<'}\bigl(L_<(f)\bigr) = L_{<^*}(f) \notin L_{<^*}(I) =
L_{<'}\bigl(L_<(I)\bigr),
\]
so $L_<(f) \notin L_<(I)$. Similarly, $L_<(g) \notin L_<(I)$. By our
hypothesis, this implies $L_<(f g) = L_<(f)L_<(g) \notin L_<(I)$,
so $f g \not\in I$ as desired. \end{proof}
According to our philosophy that the leading ideal with respect to a monomial preorder is a deformation that is ``closer'' to the original ideal than the leading ideal with respect to a monomial order, it would be interesting to see an example where $k[\x]/L_<(I)$ is Cohen-Macaulay but $k[\x]/L_{<^*}(I)$ is not. If~$<$ is a monomial preorder satisfying the hypothesis of Corollary~\ref{MthenAll}, then the benefit arising from this is that the Cohen-Macaulay property of $k[\x]_</I$ can be verified by testing only the maximal ideal $\mathfrak{m} := (X)/L_<(I)$ of $k[\x]/L_<(I)$. The following is such an example.
\begin{Example} \label{exCM} {\em Consider the ideal
\[
I = \bigl(x_1^2,x_2^2,x_3^2, x_1 x_2, x_1 x_3, x_1 x_4 - x_2 x_3 +
x_1\bigr) \subseteq k[x_1,x_2,x_3,x_4].
\]
Let~$< = <_{\bold w}$ be the weight order with weight ${\bold w} = (1,1,1,1)$,
and let~$<^*$ be the product of~$<$ and the lexicographic order
with $x_1 < x_2 < x_3 < x_4$. So~$<^*$ is the graded lexicographic
order, and it is easy to see by forming and reducing s-polynomials
that the given basis of $I$ is a Gr\"obner basis with respect
to~$<^*$. So by Theorem~\ref{standard}, it is also a standard
basis with respect to~$<$. So
\[
L_<(I) = \bigl(x_1^2,x_2^2,x_3^2, x_1 x_2, x_1 x_3, x_1 x_4 - x_2
x_3\bigr).
\]
From the leading ideal $L_{<^*}(I) = \bigl(x_1^2,x_2^2,x_3^2, x_1
x_2, x_1 x_3, x_1 x_4\bigr)$ we see that the following elements
form a vector space basis of $A := k[\x]/L_<(I)$:
\[
\overline{x_4^i},\ \overline{x_2 x_4^i},\ \overline{x_3 x_4^i},\
\overline{x_2 x_3 x_4^i} \quad (i \ge 0), \quad \text{and} \quad
\overline{x_1}.
\]
Here the bars indicate the class in $A$ of a polynomial. Because
$\overline{x_2 x_3 x_4^i} = \overline{x_1 x_4^{i+1}}$ this implies
\[
A = k[\overline{x_4}] \oplus k[\overline{x_4}] \cdot
\overline{x_1} \oplus k[\overline{x_4}] \cdot \overline{x_2}
\oplus k[\overline{x_4}] \cdot \overline{x_3},
\]
and $\overline{x_4}$ is transcendental. It follows that $A =
k[\x]/L_<(I)$ is Cohen-Macaulay, and so the same is true for
$k[\x]/I$.
Now we turn to $A^* := k[\x]/L_{<^*}(I)$. A vector space basis of
$A^*$ is given as above, but now the bars indicate classes in
$A^*$. So~$\overline{x_4}$ forms a homogeneous system of
parameters, but it is not regular since $\overline{x_1}
\overline{x_4} = 0$. Therefore $A^* = k[\x]/L_{<^*}(I)$ is not
Cohen-Macaulay. } \end{Example}
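The Gröbner basis claim in this example can be verified computationally. The sketch below assumes SymPy, whose `grlex` order on the generator tuple $(x_4, x_3, x_2, x_1)$ is the graded lexicographic order with $x_1 < x_2 < x_3 < x_4$; note that we take the generator $x_3^2$, the exponent consistent with the vector space basis of $A$ listed in the example:

```python
# Sketch (assumes SymPy): check that the given generators form a Groebner
# basis and read off the leading ideal L_{<*}(I).
from sympy import LM, groebner, symbols

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
F = [x1**2, x2**2, x3**2, x1*x2, x1*x3, x1*x4 - x2*x3 + x1]

# grlex on (x4, x3, x2, x1) = graded lex with x1 < x2 < x3 < x4
G = groebner(F, x4, x3, x2, x1, order='grlex')
lead = {LM(g, x4, x3, x2, x1, order='grlex') for g in G.exprs}
print(lead == {x1**2, x2**2, x3**2, x1*x2, x1*x3, x1*x4})  # True
```

Since the reduced Gröbner basis has the same leading monomials as the given generators, they are a Gröbner basis, as asserted in the example.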
In the following we will compare graded invariants of homogeneous ideals with those of their leading ideals. The following result is essentially due to Caviglia's proof of Sturmfels' conjecture on the Koszul property of the pinched Veronese \cite{Cav}.
\begin{Proposition} \label{Tor} Let $I,J,Q$ be homogeneous ideals in $k[\x]$. Then $$\dim_k\operatorname{Tor}_i^{k[\x]/I}(k[\x]/J,k[\x]/Q)_j \le \dim_k\operatorname{Tor}_i^{k[\x]/L_<(I)}(k[\x]/L_<(J),k[\x]/L_<(Q))_j$$ for all $i \in {\mathbb N}$, $j \in {\mathbb Z}$. \end{Proposition}
\begin{proof} By Lemma \ref{homogeneous} we may assume that~$<$ is a monomial preorder with $1 < x_i$ for all $i$. Applying Theorem \ref{approx 2} to $I, J, Q$ we can find $w \in {\mathbb Z}^n$ with $w_i > 0$ for all $i$ such that $L_<(I) = L_{<_w}(I)$, $L_<(J) = L_{<_w}(J)$, and $L_<(Q) = L_{<_w}(Q)$. For a positive weight vector $w$, Caviglia \cite[Lemma 2.1]{Cav} already showed that $$\dim_k\operatorname{Tor}_i^{k[\x]/I}(k[\x]/J,k[\x]/Q)_j \le \dim_k\operatorname{Tor}_i^{k[\x]/L_{<_w}(I)}(k[\x]/L_{<_w}(J),k[\x]/L_{<_w}(Q))_j$$ for all $i \in {\mathbb N}$, $j \in {\mathbb Z}$. \end{proof}
Recall that a $k$-algebra $R$ is called Koszul if $k$ has a linear free resolution as an $R$-module or, equivalently, if $\operatorname{Tor}_i^R(k,k)_j = 0$ for all $j \neq i$.
\begin{Corollary} Let $I$ be a homogeneous ideal in $k[\x]$. If $k[X]/L_<(I)$ is a Koszul algebra, then so is $k[X]/I$. \end{Corollary}
\begin{proof} We apply Proposition \ref{Tor} to the case $J = Q = (X)$. From this it follows that if $\operatorname{Tor}_i^{k[\x]/L_<(I)}(k,k)_j = 0$ for all $j \neq i$, then $\operatorname{Tor}_i^{k[\x]/I}(k,k)_j = 0$ for all $j \neq i$. \end{proof}
For any finitely generated graded $k[\x]$-module $E$, let $\beta_{i,j}(E)$ denote the number of copies of the graded free module $k[\x](-j)$ appearing in the $i$-th module of a minimal graded free resolution of $E$. These numbers are called the {\em graded Betti numbers} of $E$. In some sense, these invariants determine the graded structure of $E$. It is well known that $\beta_{i,j}(E) = \dim_k \operatorname{Tor}_i^{k[\x]}(E,k)_j$ for all $i \in {\mathbb N}$, $j \in {\mathbb Z}$.
\begin{Proposition} \label{Betti} Let $I$ be a homogeneous ideal in $k[\x]$. Then $\beta_{i,j}(k[\x]/I) \le \linebreak \beta_{i,j}(k[\x]/L_<(I))$ for all $i \in {\mathbb N}$, $j \in {\mathbb Z}$. \end{Proposition}
\begin{proof} We apply Proposition \ref{Tor} to the case $I = 0$, $Q = (X)$ and replace $J$ by $I$. Then $$\dim_k\operatorname{Tor}_i^{k[\x]}(k[\x]/I,k)_j \le \dim_k\operatorname{Tor}_i^{k[\x]}(k[\x]/L_<(I),k)_j$$ which implies $\beta_{i,j}(k[\x]/I) \le \beta_{i,j}(k[\x]/L_<(I))$ for all $i \in {\mathbb N}$, $j \in {\mathbb Z}$. \end{proof}
Using the graded Betti numbers of $E$ one can describe other important invariants of $E$ such as the depth and the Castelnuovo-Mumford regularity: \begin{align*}
\operatorname{depth} E & = n- \max\{i|\ \beta_{i,j} \neq 0 \text{ for some } j\},\\
\operatorname{reg} E & = \max\{j-i|\ \beta_{i,j} \neq 0\}. \end{align*} By this definition, we immediately obtain from Proposition \ref{Betti} the following relationship between the depth and the regularity of $k[\x]/I$ and $k[\x]/L_<(I)$.
\begin{Corollary} \label{depth} Let $I$ be a homogeneous ideal in $k[\x]$. Then \begin{align*} \operatorname{depth}(k[\x]/I) & \ge \operatorname{depth} (k[\x]/L_<(I)),\\ \operatorname{reg}(k[\x]/I) & \le \operatorname{reg} (k[\x]/L_<(I)). \end{align*} \end{Corollary}
Let ${\mathfrak m}$ denote the maximal homogeneous ideal of $k[\x]$. For any finitely generated graded $k[\x]$-module $E$, we denote by $H_{\mathfrak m}^i(E)$ the $i$-th local cohomology module of $E$ with respect to ${\mathfrak m}$ for all $i \in {\mathbb N}$. Note that $H_{\mathfrak m}^i(E)$ is a ${\mathbb Z}$-graded module. As usual, we denote by $H_{\mathfrak m}^i(E)_j$ the $j$-th component of $H_{\mathfrak m}^i(E)$ for all $j \in {\mathbb Z}$. It is known that the vanishing of $H_{\mathfrak m}^i(E)$ gives important information on the structure of $E$.
\begin{Proposition} \label{Sbarra} Let $I$ be a homogeneous ideal in $k[\x]$. Then $$\dim_k H_{\mathfrak m}^i(k[\x]/I)_j \le \dim_k H_{\mathfrak m}^i(k[\x]/L_<(I))_j$$ for all $i \in {\mathbb N}, j \in {\mathbb Z}$. \end{Proposition}
\begin{proof} Sbarra \cite[Theorem 2.4]{Sb} already proved the above inequality for an arbitrary global monomial order. Actually, his proof shows that for an arbitrary integral weight order $<_w$, $$\dim_k H_{\mathfrak m}^i(k[\x]/I)_j \le \dim_k H_{\mathfrak m}^i(k[\x]/L_{<_w}(I))_j$$ for all $i \in {\mathbb N}, j \in {\mathbb Z}$. By Theorem \ref{approx 2}, there exists $w \in {\mathbb Z}^n$ such that $L_<(I) = L_{<_w}(I)$. Therefore, Sbarra's result implies the conclusion. \end{proof}
Let $R$ be a standard graded algebra over an infinite field $k$ with $d = \dim R$. An ideal $Q$ of $R$ is called a {\em minimal reduction} of $R$ if $Q$ is generated by a system of linear forms $z_1,\ldots,z_d$ such that $k[z_1,\ldots,z_d] \hookrightarrow R$ is a Noether normalization. Let $r_Q(R)$ denote the maximum degree of the generators of $R$ as a graded $k[z_1,\ldots,z_d]$-module. One calls the invariant
$$ r(R) := \min\{r_Q(R)|\ \text{$Q$ is a minimal reduction of $R$}\}$$ the {\it reduction number} of $R$ \cite{Va}. \par
The following result on the reduction number of the leading ideal was a conjecture of Vasconcelos for global monomial orders \cite[Conjecture 7.2]{Va}. This conjecture has been confirmed independently by Conca \cite[Theorem 1.1]{Co} and the second author \cite[Corollary 3.4]{Tr}. Now we can prove it for monomial preorders.
\begin{Proposition} \label{reduction} Let $I$ be an arbitrary homogeneous ideal in $k[\x]$. Then $$r(k[\x]/I) \le r(k[\x]/L_<(I)).$$ \end{Proposition}
\begin{proof} By Theorem \ref{approx 2}, there exists $w \in {\mathbb Z}^n$ such that $L_<(I) = L_{<_w}(I)$. By \cite[Theorem 3.3]{Tr}, we know that $r(k[\x]/I) \le r(k[\x]/L_{<_w}(I))$ for an arbitrary weight order $<_w$. \end{proof}
\end{document}
Random Walk on Cube
A particle performs a random walk on the eight corners of a unit cube. At each step it can either remain where it is with probability $1/2$ or it can move to one of its three nearest neighbors, each with probability $1/6$. Let $u$ and $v$ be opposite corners of the cube (so $|u-v|=\sqrt3$), and suppose the walk starts at $u$. Find (a) the expected number of steps until its first return to $u$, and (b) the expected number of steps until its first visit to $v$.
Ian Dumais
1.) This is a Markov process, so we can use the theory of Markov chains to help solve the problem. First, let's simplify the process: we don't really care about most of the details of the geometry of the cube, so we'd like to find a simpler process with fewer states where we can do our computations. Recall that the unit cube has vertices $\sum_{i = 1}^3 \delta_i e_i$ where $\delta_i \in \{0, 1\}$; this gives a bijection with the set of binary sequences $(\delta_1, \delta_2, \delta_3)$. Now note that two vertices only have an edge between them if their 3-digit binary sequences differ by a single digit, so we can make a simpler process. Define $f(\delta_1, \delta_2, \delta_3) = \sum_i \delta_i$; pushing forward this Markov process to $\{0, 1, 2, 3\}$ we get a simpler process with only $4$ states with transition matrix:
$$T = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{2}{6} & 0 \\\\ 0 & \frac{2}{6} & \frac{1}{2} & \frac{1}{2} \\\\ 0 & 0 & \frac{1}{6} & \frac{1}{2} \end{pmatrix} $$
Because the states $\{0, 3\}$ correspond to unique states on the unit cube, all the data we are interested in can be computed equivalently with this simpler process. One easily observes that this matrix $T$ has a $1$-eigenvalue at the vector $(1, 3, 3, 1) = \pi$, so $\frac{1}{8} \pi$ gives the stationary distribution of $T$. By the fundamental theorem of Markov processes the component of the stationary distribution corresponding to the state $i$ gives the reciprocal of the expected return time of $i$. Thus we see that the reciprocal of the first component of $\frac{1}{8} \pi$ gives our answer, and this is $8$.
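This computation is easy to check numerically; the sketch below assumes NumPy and uses the column-stochastic matrix $T$ from above:

```python
# Numerical check: pi = (1, 3, 3, 1)/8 is stationary for T, so the expected
# return time to state 0 (the starting corner u) is 1/pi[0] = 8.
import numpy as np

T = np.array([
    [1/2, 1/6, 0,   0  ],
    [1/2, 1/2, 2/6, 0  ],
    [0,   2/6, 1/2, 1/2],
    [0,   0,   1/6, 1/2],
])

pi = np.array([1, 3, 3, 1]) / 8
print(np.allclose(T @ pi, pi))  # True
print(1 / pi[0])                # 8.0
```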
2.) For the second part of this question it is once again important to simplify the question. Replace $T$ with the matrix $$Q = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{2}{6} & 0 \\\\ 0 & \frac{2}{6} & \frac{1}{2} & 0 \\\\ 0 & 0 & \frac{1}{6} & 1 \end{pmatrix}$$ This is essentially the same Markov process, except that once a random walk reaches the state $3$ corresponding to the vertex $(1, 1, 1)$ on the cube, it will stay there forever. For the purpose of computing when a process first reaches $(1, 1, 1)$ this process is simpler and equally useful. Because paths can never return from the state $3$, we can use the matrix in the upper left corner to compute the behavior of walks before they reach $3$: $$Q' = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & 0 \\\\ \frac{1}{2} & \frac{1}{2} & \frac{2}{6} \\\\ 0 & \frac{2}{6} & \frac{1}{2} \end{pmatrix}.$$ We call $N = (Id - Q')^{-1} = \sum_{n = 0}^\infty (Q')^n$ the fundamental matrix of the absorbing process $Q$. Note that $$e_j^T \cdot N \cdot e_i = \sum_{n = 0}^\infty e_j^T \cdot (Q')^n \cdot e_i,$$ where $$e_j^T \cdot (Q')^n \cdot e_i = P(\text{a random walk starting at state } i \text{ is at state } j \text{ at the } n\text{th step}),$$ so $e_j^T \cdot N \cdot e_i$ computes the expected number of times a random walk starting at $i$ will land on state $j$ before being absorbed. Adding up these counts, we see that $\sum_j N_{j, 0}$ gives the expected number of steps a random walk starting at $0$ spends among the transient states $\{0, 1, 2\}$ before being absorbed by state $3$, which is exactly what we want to compute. Now we compute: $$ N = \begin{pmatrix} 5 & 3 & 2 \\\\ 9 & 9 & 6 \\\\ 6 & 6 & 6 \end{pmatrix},$$ and summing the first column gives us 20, the expected number of steps a random walk started at $0$ will take before hitting state $3$.
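The fundamental-matrix computation can likewise be checked numerically (a sketch assuming NumPy):

```python
# Numerical check: N = (I - Q')^{-1} for the transient block Q', and the first
# column of N sums to the expected hitting time of the opposite corner.
import numpy as np

Qp = np.array([
    [1/2, 1/6, 0  ],
    [1/2, 1/2, 2/6],
    [0,   2/6, 1/2],
])

N = np.linalg.inv(np.eye(3) - Qp)
print(np.allclose(N, [[5, 3, 2], [9, 9, 6], [6, 6, 6]]))  # True
print(round(float(N[:, 0].sum())))                        # 20
```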
\begin{document}
\twocolumn[
\begin{@twocolumnfalse}
\title{ \huge An illustration of canonical \\ quantum-classical dynamics }
\begin{abstract}
Using the example of the harmonic oscillator, we illustrate the use of hybrid dynamical brackets in analyzing quantum-classical interaction. We only assume that a hybrid dynamical bracket exists, is bilinear, and reduces to the pure quantum/classical bracket when acting on pure quantum/classical variables. Any hybrid bracket obeying these natural requirements will produce the same dynamics for pure classical or quantum variables, given a hybrid Hamiltonian. Backreaction is manifested in the evolution of a nonvanishing commutator between classical variables. The more massive the classical system is, the less it is affected by backreaction. Interestingly, we show that while pure variables evolve to violate the pure canonical relations, they always obey the hybrid canonical relations. The dynamics of hybrid variables, on the other hand, is shown to require a fully specified and consistent hybrid bracket, otherwise evolution cannot be defined.
\end{abstract}
\end{@twocolumnfalse} ]
\saythanks
\section{Introduction} \label{intro}
Quantum-classical systems appear in numerous fields, from chemical and condensed matter physics to cosmology and black hole physics. In practical applications, a full quantum treatment is often computationally intractable. Treating parts of the system as classical simplifies calculations and provides easier models with which to conceptualize and approach such problems. We constantly deal with quantum electrons around classical nuclei, quantum matter on classical spacetimes, or quantum systems interacting with external classical fields. Less practically, foundational research on the nature of gravity or the measurement problem in quantum mechanics may also benefit from the study of quantum-classical systems.
Familiar approaches to modelling quantum-classical interactions usually fall into two categories. The first treats the classical part of the system as omnipotent; it acts on, but is not acted back on by, the quantum part. Examples of this type include textbook quantum mechanics~\cite{maddox_classical_1995}, or quantum field theories on curved spacetime.
The second approach is to use averages of quantum observables $\langle x_Q \rangle$ in classical equations $$\text{Eq.}_\text{Classical} \big(\langle x_Q \rangle \, , y_C \big)~,$$ and then feed the classical observables $y_C$ back into quantum equations $$\text{Eq.}_\text{Quantum} \big(x_Q \, , y_C \big)~.$$ While this approach captures a form of quantum backreaction on the classical system, it suffers from two problems: 1.~it averages out quantum fluctuations so they do not manifest on the classical side, and 2.~it introduces nonlinearity to the quantum Schrödinger equation thus negating the superposition principle. Even if one ignores the loss of superposition as a side effect of a simplifying approximation, nonlinear equations can be very difficult to handle in practical applications.
In this paper, we are concerned with a third approach: directly coupling quantum and classical variables. This requires defining a consistent framework for the dynamics of such hybrid quantum-classical variables. Namely, we are interested in a canonical dynamical structure where the equations of motion are defined in terms of a hybrid dynamical bracket and canonical relations between conjugate variables. Many proposals for a hybrid quantum-classical bracket can be found in the literature~\cite{aleksandrov_statistical_1981,gerasimenko_dynamical_1982,boucher_semiclassical_1988,anderson_quantum_1995,prezhdo_mixing_1997,prezhdo_quantum-classical_2006,elze_linear_2012,bondar_koopman_2019}. See also~\cite{barcelo_hybrid_2012,elze_four_2012,elze_quantum-classical_2013}.
However, impediments to a consistent hybrid dynamics have been found~\cite{salcedo_absence_1996,caro_impediments_1999,sahoo_mixing_2004,salcedo_statistical_2012,gil_canonical_2017}. Specifically, it was shown that hybrid dynamical brackets do not satisfy the Jacobi identity or the Leibniz rule. These are crucial properties for dynamical consistency. The Jacobi identity guarantees that the fundamental canonical relations are preserved with dynamical evolution. The Leibniz rule is essential for defining dynamical evolution and derivatives, as shall be demonstrated in Sec.~\ref{sec:4}.
The present authors, however, have shown in~\cite{amin_quantum-classical_2020} that starting from a full quantum system and taking the classical limit of a part, one can uncover a general class of hybrid brackets. Different brackets arise from different quantization schemes on the classical sector prior to taking the classical limit. In particular, the bracket proposed by Aleksandrov~\cite{aleksandrov_statistical_1981}, Gerasimenko~\cite{gerasimenko_dynamical_1982}, and Boucher and Traschen~\cite{boucher_semiclassical_1988} arises as a special case of the general bracket when Wigner-Weyl quantization is used. The derivation of the general bracket makes use of the phase space formulation of quantum mechanics showing a connection between phase space distributions (like that of Wigner, Husimi, etc.) and hybrid dynamical brackets.
To address the aforementioned no-go theorems, the authors introduced a \textit{hybrid composition product}~\cite{amin_quantum-classical_2020}. The general hybrid bracket is then the commutator of that product. It follows that the Jacobi identity and Leibniz rule are automatically satisfied provided that the hybrid composition product is associative.
Then, for a restricted set of hybrid variables that form an associative subalgebra with the hybrid composition product, dynamics is consistent. Since quantum-classical interactions are represented by hybrid terms in the Hamiltonian, the consistency of quantum-classical dynamics in our scenario implies that only certain interactions between quantum and classical systems are allowed.
Here we present a working example of canonical hybrid dynamics that provides a blueprint for applying the framework to possible problems of interest. The harmonic oscillator is ubiquitous in physical systems. We use it to illustrate the features of hybrid dynamics and the cautionary pitfalls of dealing with systems that are not clearly classical or quantum.
Notably, hybrid evolution of pure variables does not depend on any particular definition of a hybrid bracket as long as it is bilinear and reduces to a pure bracket when one of its arguments is pure. Evolution of hybrid variables, on the other hand, requires specification of the hybrid bracket used. We emphasize that the methods presented in this paper are applicable to more general problems.
We introduce basic concepts and general properties of hybrid brackets in Sec.~\ref{sec:2}. In Sec.~\ref{sec:3}, we calculate the equations of motion of pure classical and quantum variables in the quantum-classical harmonic oscillator. Adopting Anderson's argument~\cite{anderson_quantum_1995}, we find the ``backreaction'' of the quantum on the classical and show the blurring of the line between quantum and classical variables due to interaction.
Finally, we describe an example of time evolution of hybrid variables in Sec.~\ref{sec:4}. In that example, the need for a consistent hybrid bracket is illustrated. Different brackets, corresponding to different quantization schemes (and, by extension, different phase space distributions) are shown to produce different time evolutions for hybrid variables.
\section{Hybrid dynamics} \label{sec:2}
There are multiple approaches to the problem of combining quantum and classical systems~\cite{barcelo_hybrid_2012,elze_four_2012,elze_quantum-classical_2013}. In this paper, we are concerned with the canonical approach, in which the problem is that of defining a hybrid dynamical bracket. The hybrid bracket $\{\![\cdot\,,\cdot]\!\}$ then plays, for hybrid variables, the role of the commutator $[\cdot\,,\cdot]/i\hbar$ for quantum variables or the Poisson bracket $\{\cdot\,,\cdot\}$ for classical ones.
Hybrid brackets have been proposed in~\cite{aleksandrov_statistical_1981,gerasimenko_dynamical_1982,boucher_semiclassical_1988,anderson_quantum_1995,prezhdo_mixing_1997,prezhdo_quantum-classical_2006,elze_linear_2012,bondar_koopman_2019}. The authors have derived a general class of hybrid brackets in~\cite{amin_quantum-classical_2020} by applying a partial classical limit to quantum mechanics. Different arguments may produce different brackets, but a general feature of all of them is their reduction property. A hybrid bracket should reduce to a pure (quantum or classical) bracket when one of its arguments is pure: \begin{align}\label{eq:reduction} \begin{split}
\{\![\cdot\,,\,A_Q]\!\} &= \frac{1}{i\hbar} [\cdot\,,\,A_Q]~, \\
\{\![\cdot\,,\,A_C]\!\} &= \{\cdot\,,\,A_C\}~. \end{split} \end{align} The subscripts $Q$ and $C$ signify quantum and classical variables, respectively. Another property shared by hybrid brackets is bilinearity: \begin{align}\label{eq:lin} \begin{split}
\{\![A, B + C]\!\} = \{\![A, B]\!\} + \{\![A, C]\!\}~, \\
\{\![A + B, C]\!\} = \{\![A, C]\!\} + \{\![B, C]\!\}~. \end{split} \end{align}
A general Hamiltonian is given by \begin{align}\label{eq:hamiltonian}
H = H_Q + H_C + V~. \end{align} While the pure $H_Q$ and $H_C$ encode internal dynamics of the quantum and classical subsystems, $V$ is a hybrid interaction term that couples quantum and classical variables. Without knowing the specific definition of the hybrid bracket, one can find the equation of motion for pure variables using only the reduction requirement~\eqref{eq:reduction} \begin{align}\label{eq:eom-pure}
\frac{d}{dt}A_\text{Pure} = \{\![A_\text{Pure},H]\!\} + \frac{\partial}{\partial t} A_\text{Pure}~. \end{align} The equation is given in the Heisenberg picture, where time evolution resides in the dynamical variables instead of the state.
The reduction requirement~\eqref{eq:reduction} can go a long way in defining hybrid dynamics as shown by an explicit example in Sec.~\ref{sec:3}. Specifically, when one is interested in calculating the time evolution of pure variables, as in Eq.~\eqref{eq:eom-pure}, one can rely on the well-defined quantum or classical brackets to perform calculations.
The dynamics of hybrid variables is more complicated, however. It was shown in~\cite{salcedo_absence_1996,caro_impediments_1999,sahoo_mixing_2004,salcedo_statistical_2012,gil_canonical_2017} that hybrid brackets cannot define a consistent dynamical framework for general hybrid variables.
To deal with the problem of evolving hybrid variables or, equivalently, having both arguments of the dynamical bracket be hybrid, we need a specific definition of the bracket. The authors have derived a general hybrid bracket in~\cite{amin_quantum-classical_2020} along with a definite consistency condition for hybrid brackets. Namely, only a certain subset of hybrid variables is allowed into the theory for which hybrid dynamics is consistently defined. An immediate consequence of such restriction is that the consistency of hybrid dynamics dictates what kind of interaction could exist between quantum and classical systems. The derivation of the general hybrid bracket in~\cite{amin_quantum-classical_2020}, as outlined in Sec.~\ref{sec:4}, relies on the methods of phase space quantum mechanics.
\section{Backreaction in the harmonic oscillator} \label{sec:3}
One of the main motivations for developing hybrid dynamics is to study the effect of quantum systems on classical ones. It is to be expected that initially deterministic classical variables will inherit some uncertainty by interacting with a quantum system. In~\cite{anderson_quantum_1995}, Anderson argues that classical variables will evolve to become ``quasiclassical'': variables that exhibit only \textit{secondary fluctuations}. That is, the uncertainty in a quasiclassical variable exists only due to interacting with a quantum system; in the absence of such interaction, quasiclassical variables evolve in purely classical fashion. This is in contrast to fully quantum variables, which exhibit primary fluctuations; those are fundamentally uncertain.
In this section, following Anderson's pioneering work, we provide an example of the evolution of (quasi)classical variables. Quantum backreaction on classical variables is found in terms of a nonvanishing commutator between classical canonical conjugates. We emphasize that here we only use the reduction property (Eq.~\eqref{eq:reduction}) and not a specific definition of the hybrid bracket.
It should be stressed that the following example only serves as an explicit illustration of the more general possibilities of hybrid dynamics. The same logic applies to any Hamiltonian~\eqref{eq:hamiltonian}, even if more involved equations of motion result. The example presented here demonstrates the basic concepts.
Consider a quantum-classical harmonic oscillator Hamiltonian \begin{align}\label{eq:qc-ho}
H = \frac{p_C^2}{2m_C} + \frac{p_Q^2}{2m_Q} + \frac{1}{2} k (x_C - x_Q)^2~. \end{align} The mass, position and momentum of the classical and quantum particles are given by $(m_C,x_C,p_C)$ and $(m_Q,x_Q,p_Q)$ respectively, and $k$ is the coupling strength. Using the center-of-mass and relative separation coordinates \begin{align} \begin{split}
X &= \frac{m_C \, x_C + m_Q \, x_Q}{m_C + m_Q}~,\quad
P = p_C + p_Q \\
x &= x_C - x_Q~,\qquad\qquad
p = \frac{m_Q \, p_C - m_C \, p_Q}{m_C + m_Q}~, \end{split} \end{align} and the total and reduced masses \begin{align}
M = m_C + m_Q~,\qquad\quad
m = \frac{m_C \, m_Q}{m_C+m_Q}~, \end{align} the Hamiltonian can be written as \begin{align} H = \frac{P^2}{2M} + \frac{p^2}{2m} + \frac{1}{2} k x^2~. \end{align} Notice that $(X,P,x,p)$ are hybrid variables. However, they are \textit{additive} hybrids that do not directly couple quantum and classical variables.
Formally, the new variables obey the canonical relations in terms of the hybrid bracket. This can be shown using only the general reduction property~\eqref{eq:reduction} and bilinearity~\eqref{eq:lin}~: \begin{align}
\{\![X,P]\!\} &= \frac{1}{M} \{\![m_C x_C + m_Q x_Q, p_C + p_Q]\!\} \\
&= \frac{m_C}{M} \{x_C, p_C\} + \frac{1}{i\hbar} \frac{m_Q}{M} [x_Q, p_Q] = 1~, \end{align} and similarly for the rest of the relations. Using the hybrid fundamental relations, we can show that $\dot{X} = \{\![X,H]\!\}$ by explicit calculation \begin{align}
\dot{X} &= \frac{m_C}{M} \dot{x}_C + \frac{m_Q}{M} \dot{x}_Q \\
&= \frac{m_C}{M} \{x_C,H\} + \frac{1}{i\hbar} \frac{m_Q}{M} [x_Q,H] \\
&= \{\![X,H]\!\}~. \end{align} The reduction and bilinearity properties were again used to obtain the last equality.
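Since $(X,P,x,p)$ are linear in the canonical variables, both pure brackets reduce to the same derivative formula on their respective sectors, so the hybrid canonical relations can be checked symbolically. A sympy sketch (not part of the formalism above, just a sanity check):

```python
import sympy as sp

mC, mQ = sp.symbols('m_C m_Q', positive=True)
xC, pC, xQ, pQ = sp.symbols('x_C p_C x_Q p_Q')
M = mC + mQ

# For variables linear in the canonical pairs, the Poisson bracket and the
# commutator/(i*hbar) both reduce to the same derivative formula, so the
# hybrid bracket of additive hybrids can be computed sector by sector:
def hybrid_bracket(A, B):
    return sum(sp.diff(A, x) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, x)
               for x, p in [(xC, pC), (xQ, pQ)])

X = (mC * xC + mQ * xQ) / M
P = pC + pQ
x = xC - xQ
p = (mQ * pC - mC * pQ) / M

assert sp.simplify(hybrid_bracket(X, P)) == 1
assert sp.simplify(hybrid_bracket(x, p)) == 1
assert sp.simplify(hybrid_bracket(X, p)) == 0
assert sp.simplify(hybrid_bracket(x, P)) == 0
```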
Now the system can be solved using standard methods. The center-of-mass position and momentum evolve according to \begin{align}
P = P(0)~, \qquad
X = X(0) + \frac{P(0)}{M} t~. \end{align} The relative separation position and momentum equations of motion are \begin{align}
x &= x(0) \cos(\omega\,t) + \frac{p(0)}{m\omega} \sin(\omega\,t)~, \\
p &= p(0) \cos(\omega\,t) - m\omega\, x(0) \sin(\omega\,t)~, \end{align} where $\omega = \sqrt{k/m}$~. Finally, the equations of motion for the classical position and momentum are found to be \begin{align}\label{eq:xc}
\begin{split}
x_C =&~ \frac{1}{M} \big[m_C + m_Q \cos(\omega\,t) \big] x_C(0) \\
&+ \frac{1}{m_CM\omega} \big[ m_C \omega\,t + m_Q \sin(\omega\,t) \big] p_C(0) \\
&+ \frac{m_Q}{M} \big[ 1 - \cos(\omega\,t) \big] x_Q(0)\\
&+ \frac{1}{M\omega} \big[ \omega\,t - \sin(\omega\,t) \big] p_Q(0)
\end{split} \end{align} and \begin{align}\label{eq:pc}
\begin{split}
p_C =&~ \frac{1}{M} \big[m_C + m_Q \cos(\omega\,t) \big] p_C(0) - m\omega \sin(\omega\,t) x_C(0) \\
&+ \frac{m_C}{M} \big[ 1 - \cos(\omega\,t) \big] p_Q(0) + m\omega \sin(\omega\,t) x_Q(0)~.
\end{split} \end{align} The equations of motion for $x_Q$ and $p_Q$ are identical and can be found by swapping subscripts $Q$ and $C$ in~\eqref{eq:xc} and~\eqref{eq:pc}.
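The solutions above can be checked against the hybrid equations of motion $\dot{x}_C = p_C/m_C$ and $\dot{p}_C = -k(x_C - x_Q)$. A sympy sketch (the helper functions and their "own/other" parameterization are ours, introduced only for this check):

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
mC, mQ = sp.symbols('m_C m_Q', positive=True)
xC0, pC0, xQ0, pQ0 = sp.symbols('x_C0 p_C0 x_Q0 p_Q0')

M = mC + mQ
m = mC * mQ / M
k = m * w**2                          # since omega = sqrt(k/m)
c, s = sp.cos(w * t), sp.sin(w * t)

def x_sol(ma, mb, xa0, pa0, xb0, pb0):
    # Eq. (xc), with subscripts a = "own" particle, b = "other" particle.
    return ((ma + mb * c) * xa0 / M
            + (ma * w * t + mb * s) * pa0 / (ma * M * w)
            + mb * (1 - c) * xb0 / M
            + (w * t - s) * pb0 / (M * w))

def p_sol(ma, mb, xa0, pa0, xb0, pb0):
    # Eq. (pc), same parameterization.
    return ((ma + mb * c) * pa0 / M - m * w * s * xa0
            + ma * (1 - c) * pb0 / M + m * w * s * xb0)

xC = x_sol(mC, mQ, xC0, pC0, xQ0, pQ0)
pC = p_sol(mC, mQ, xC0, pC0, xQ0, pQ0)
xQ = x_sol(mQ, mC, xQ0, pQ0, xC0, pC0)     # swap subscripts Q <-> C

# Hamilton's equations for the classical sector:
assert sp.simplify(sp.diff(xC, t) - pC / mC) == 0
assert sp.simplify(sp.diff(pC, t) + k * (xC - xQ)) == 0
```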
We see from the dependence of $x_C$ and $p_C$ on $x_Q(0)$ and $p_Q(0)$ that a form of backreaction is present on the classical variables. While the commutator $[x_C,p_C]$ vanishes initially, it evolves to be non-zero due to interaction: \begin{align}\label{eq:c-comm}
\frac{1}{i\hbar}[x_C,p_C] = \frac{m}{M} \big( 2 - 2 \cos(\omega\,t) - \omega\,t \sin(\omega\,t) \big) ~. \end{align} If a nonvanishing commutator indicates uncertainty, then the above equation shows how quantum backreaction is manifested in a quantum-classical harmonic oscillator. Fig.~\ref{fig:1} depicts the evolution of the commutator of $x_C$ and $p_C$. Initially, $x_C(0)$ and $p_C(0)$ were known with complete certainty, perhaps through measurement. Prediction or retrodiction at times other than $t=0$ is uncertain. We see that as the classical mass $m_C$ becomes large compared to the quantum mass $m_Q$, the commutator $[x_C,p_C]$ approaches zero and classical behaviour dominates $x_C$ and $p_C$. \begin{figure}
\caption{Qualitative evolution of $[x_C,p_C]/i\hbar$ through Eq.~\eqref{eq:c-comm}, with $m_Q$ and $k$ set to unity. At $t=0$, classical variables are known with certainty (vanishing commutator). Before and after this moment of certainty, the commutator deviates from zero implying uncertain prediction/retrodiction. As the classical mass $m_C$ becomes larger, the curve flattens approaching zero around the point of certainty at $t=0$.}
\label{fig:1}
\end{figure}
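Eq.~\eqref{eq:c-comm} follows from Eqs.~\eqref{eq:xc} and~\eqref{eq:pc} because, among the initial variables, only the pair $x_Q(0)$, $p_Q(0)$ fails to commute. A sympy sketch verifying this (the coefficient names are ours):

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
mC, mQ = sp.symbols('m_C m_Q', positive=True)
M = mC + mQ
m = mC * mQ / M
c, s = sp.cos(w * t), sp.sin(w * t)

# Coefficients of the quantum initial data x_Q(0), p_Q(0) in x_C(t) and
# p_C(t), read off Eqs. (xc) and (pc):
a_x, a_p = mQ * (1 - c) / M, (w * t - s) / (M * w)   # in x_C(t)
b_x, b_p = m * w * s,        mC * (1 - c) / M        # in p_C(t)

# Only [x_Q(0), p_Q(0)] = i*hbar contributes, so
# [x_C, p_C] / (i*hbar) = a_x * b_p - a_p * b_x:
comm = sp.simplify(a_x * b_p - a_p * b_x)
claimed = m * (2 - 2 * c - w * t * s) / M            # Eq. (c-comm)
assert sp.simplify(comm - claimed) == 0
```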
Further, non-classical behaviour is shown in the change of the value of the Poisson bracket. While $\{x_C,p_C\}$ initially obeys the classical canonical relation, it evolves to deviate from unity \begin{align}\label{eq:c-poisson} \begin{split}
\{x_C,p_C\} &= \frac{1}{M^2} \big(m_C + m_Q \cos(\omega\,t) \big)^2 \\
&\quad+ \frac{m}{m_C M} \sin(\omega\,t) \big(m_C \omega\,t + m_Q \sin(\omega\,t) \big)~. \end{split} \end{align} That deviation from classicality is, again, due to quantum backreaction.
The hybrid nature of $x_C$ and $p_C$ is now shown elegantly in terms of the hybrid bracket. While they do not obey either the quantum or the classical canonical relations (Eqs.~\eqref{eq:c-comm} and~\eqref{eq:c-poisson}), they do obey the \textit{hybrid} fundamental canonical relation \begin{align}
\{\![x_C,p_C]\!\} = 1~. \end{align} Since $x_C$ and $p_C$ at $t \neq 0$ are additive hybrids, this can be shown explicitly using the reduction~\eqref{eq:reduction} and bilinearity~\eqref{eq:lin} properties applied to Eqs.~\eqref{eq:xc} and~\eqref{eq:pc} for $x_C$ and $p_C$. Any hybrid bracket possessing these properties will produce the same results. A similar calculation can be done for $x_Q$ and $p_Q$.
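This can be verified symbolically: by reduction and bilinearity, the hybrid bracket of the additive hybrids $x_C(t)$ and $p_C(t)$ splits into the classical Poisson part~\eqref{eq:c-poisson} plus the commutator part~\eqref{eq:c-comm}, and the two sum to unity. A sympy sketch (the coefficient lists are ours, read off Eqs.~\eqref{eq:xc} and~\eqref{eq:pc}):

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
mC, mQ = sp.symbols('m_C m_Q', positive=True)
M = mC + mQ
m = mC * mQ / M
c, s = sp.cos(w * t), sp.sin(w * t)

# Coefficients of [x_C(0), p_C(0), x_Q(0), p_Q(0)] in x_C(t) and p_C(t):
xc = [(mC + mQ * c) / M, (mC * w * t + mQ * s) / (mC * M * w),
      mQ * (1 - c) / M, (w * t - s) / (M * w)]
pc = [-m * w * s, (mC + mQ * c) / M,
      m * w * s, mC * (1 - c) / M]

poisson = xc[0] * pc[1] - xc[1] * pc[0]   # classical sector, Eq. (c-poisson)
comm    = xc[2] * pc[3] - xc[3] * pc[2]   # quantum sector / (i*hbar), Eq. (c-comm)

# The hybrid bracket of the additive hybrids is the sum of the two parts:
assert sp.simplify(poisson + comm - 1) == 0
```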
An important note is in order. Thanks to the reduction property~\eqref{eq:reduction}, hybrid evolution of pure variables is consistent. That is, if the hybrid bracket has one of its two arguments pure (or a sum of pure variables), it reduces to a consistent pure bracket. This will be the case \textit{if only one hybrid variable is used in the problem}. This covers a wide class of problems. Usually the one interesting hybrid variable is the interaction Hamiltonian, while all other variables of interest are either pure classical or pure quantum such as positions, momenta, fields etc. In the next section we will demonstrate the use of the full hybrid bracket if one desires to study the evolution of general hybrid variables.
\section{Evolution of hybrid variables} \label{sec:4}
As demonstrated in the previous section, the evolution of pure variables and, by extension, additive hybrid variables is largely bracket-agnostic. For nontrivial hybrids, however, a specific definition of the hybrid bracket is necessary. Consider, for example, a hybrid variable $\eta$ that is a product of quantum and classical variables \begin{align}
\eta = \eta_C \eta_Q~. \end{align} On one hand, taking the time derivative of such a variable is unclear, since the Leibniz rule for derivation now runs into operator ordering ambiguities. As seen in the previous section, classical variables will evolve into quasiclassical variables that do not commute with quantum ones. Thus, while $\eta = \eta_C \eta_Q = \eta_Q \eta_C$, one does not know whether $\dot{\eta} = \dot{\eta}_C \eta_Q + \eta_C \dot{\eta}_Q$ or $\dot{\eta} = \eta_Q \dot{\eta}_C + \eta_C \dot{\eta}_Q$, for example. On the other hand, the reduction property~\eqref{eq:reduction} is clearly insufficient to calculate the hybrid bracket when both variables are nontrivial hybrids so that $\{\![\eta,H]\!\}$ isn't defined either. A consistent Leibniz rule for the hybrid bracket is needed.
The problem intensifies as various no-go theorems assert that the Leibniz rule and the Jacobi identity are not generally satisfied by hybrid brackets, making them inconsistent~\cite{salcedo_absence_1996,caro_impediments_1999,sahoo_mixing_2004,salcedo_statistical_2012,gil_canonical_2017}. However, the authors have proposed a possible reinterpretation of these no-go theorems in light of the work presented in~\cite{amin_quantum-classical_2020}. A general hybrid bracket is derived there, of the form \begin{align}\label{eq:hybrid1}
\{\![A,B]\!\} = \frac{1}{i\hbar} \big( A \circledast B - B \circledast A \big)~, \end{align} where $\circledast$ is the \textit{hybrid composition product}. Now the consistency of the bracket is seen in light of this new composition product. If $\circledast$ is associative, the bracket will obey the Jacobi identity, and the Leibniz rule will take the form \begin{align}
\{\![A, B \circledast C]\!\} = \{\![A, B]\!\} \circledast C + B \circledast \{\![A, C]\!\}~. \end{align}
Now that the problem is cast in definite terms, we propose a possible circumvention. If we restrict hybrid variables to be only those forming an associative subalgebra with the hybrid composition product $\circledast$, then the dynamics of these variables is consistent. This implies that only certain quantum-classical interactions are allowed, as dictated by the consistency of the framework.
Since associativity with the $\circledast$-product is the central condition, a concrete definition of $\circledast$ is needed. In~\cite{amin_quantum-classical_2020}, it is shown that a general hybrid bracket is derived through the application of a partial classical limit to a full quantum theory. A quantum system is subdivided into $Q$(uantum) and $C$(lassical) sectors. Then a classical limit is applied on the $C$ sector using the phase space formulation of quantum mechanics. The form of $\circledast$ depends on the choice of the quantization scheme (e.g., the ordering recipe) on the $C$ sector.
The $\circledast$-product~\cite{amin_quantum-classical_2020} \begin{align}\label{eq:ast}
A \circledast B = AB + \frac{i\hbar}{2} \big[ \{A, B\} + \sigma(A, B) \big] \end{align} acts only on the classical part of hybrid variables. The symmetric binary operation $\sigma$ reflects the quantization scheme on the $C$ sector prior to taking the classical limit. Using~\eqref{eq:ast} in~\eqref{eq:hybrid1}, we obtain the bracket in a more familiar form \begin{align}\label{eq:hybrid2} \begin{split}
\{\![A,B]\!\} = \frac{1}{i\hbar} [A,B] &+ \frac{1}{2} \big[ \{A,B\} - \{B,A\} \big] \\
&+ \frac{1}{2} \big[ \sigma(A,B) - \sigma(B,A) \big]~. \end{split} \end{align} For $\sigma = 0$, the bracket reduces to that proposed by Aleksandrov~\cite{aleksandrov_statistical_1981}, Gerasimenko~\cite{gerasimenko_dynamical_1982}, and Boucher and Traschen~\cite{boucher_semiclassical_1988}.
A general formula for $\sigma$ resulting from familiar quantization schemes on the $C$ sector is given by \begin{align}\label{eq:sigma} \begin{split}
\sigma({A},{B}) &= a ~ \frac{\partial A}{\partial x_C}\,\frac{\partial B}{\partial x_C}
+ b ~ \frac{\partial A}{\partial p_C}\,\frac{\partial B}{\partial p_C} \\
&\qquad + c \left( \frac{\partial A}{\partial x_C}\,\frac{\partial B}{\partial p_C} + \frac{\partial A}{\partial p_C}\,\frac{\partial B}{\partial x_C} \right)~, \end{split} \end{align} where the constants $(a,b,c)$ reflect the choice of quantization. For example, $(0,0,0)$ reflects the Weyl ordering associated with the Wigner phase space distribution, while $(1,1,0)$ reflects a certain parameterization of the anti-normal ordering associated with the Husimi distribution.
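A minimal sympy sketch of Eq.~\eqref{eq:sigma} (the function name \texttt{sigma} and the test expressions are ours) illustrating both the symmetry of $\sigma$ and the value $\sigma(x_C,p_C) = c$ that appears below in Eq.~\eqref{eq:eta-dot-c}:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
xC, pC = sp.symbols('x_C p_C')

def sigma(A, B):
    # Eq. (sigma): a symmetric bilinear form in the classical derivatives.
    return (a * sp.diff(A, xC) * sp.diff(B, xC)
            + b * sp.diff(A, pC) * sp.diff(B, pC)
            + c * (sp.diff(A, xC) * sp.diff(B, pC)
                   + sp.diff(A, pC) * sp.diff(B, xC)))

# sigma is symmetric by construction...
A, B = xC**2 * pC, xC + pC**2
assert sp.simplify(sigma(A, B) - sigma(B, A)) == 0

# ...and evaluates to the quantization-scheme constant c on the
# canonical pair, as used in Eq. (eta-dot-c):
assert sp.simplify(sigma(xC, pC) - c) == 0
```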
Now a hybrid equation of motion for hybrid variables in the Heisenberg picture can be defined as \begin{align}
\frac{dA}{dt} = \{\![A,H]\!\} + \frac{\partial A}{\partial t}~. \end{align} Once a choice for $\sigma$ (or the quantization scheme prior to the classical limit) is specified, dynamical evolution is unambiguously defined. Different $\sigma$'s will produce different evolutions.
As an illustrative example, take \begin{align}
\eta = p_C\, p_Q \end{align} to be a hybrid variable of interest. Notice that the time derivative of the classical part of $\eta$ does not commute with quantum variables. Given the harmonic oscillator Hamiltonian~\eqref{eq:qc-ho}, we have \begin{align}
\dot{\eta}_C = \dot{p}_C = \{p_C,H\} = -k (x_C - x_Q)~, \end{align} and thus the ordering of $\frac{d}{dt} \left( \eta_Q \eta_C \right)$ is ambiguous. To see the problem, let us naively calculate $\dot{\eta}$ in two different orderings \begin{subequations}\label{eq:naive} \begin{align} \begin{split}
\left( \dot{\eta} \right)_1 &= \dot{p}_C\, p_Q + p_C\, \dot{p}_Q \\
&= k \left( x_Q \, p_Q - x_C \, p_Q - x_Q \, p_C + x_C \, p_C \right)~, \end{split} \end{align} \begin{align} \begin{split}
\left( \dot{\eta} \right)_2 &= p_Q\, \dot{p}_C + p_C\, \dot{p}_Q \\
&= k \left( p_Q \, x_Q - x_C \, p_Q - x_Q \, p_C + x_C \, p_C \right)~. \end{split} \end{align} \end{subequations} One could suggest symmetrizing the product of $x_Q$ and $p_Q$ to get rid of the ambiguity. However, such symmetrization would be an ad hoc choice of operator ordering on the $Q$ sector. A consistent hybrid bracket is necessary.
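The discrepancy between the two naive orderings is exactly a commutator term. A sympy sketch with noncommutative symbols (a toy check, not part of the formalism):

```python
import sympy as sp

k, xC, pC, hbar = sp.symbols('k x_C p_C hbar')
xQ, pQ = sp.symbols('x_Q p_Q', commutative=False)   # quantum operators

# The two naive orderings of Eq. (naive):
eta_dot_1 = k * (xQ * pQ - xC * pQ - xQ * pC + xC * pC)
eta_dot_2 = k * (pQ * xQ - xC * pQ - xQ * pC + xC * pC)

# Their difference is the operator k*[x_Q, p_Q], which equals i*hbar*k
# once the canonical commutator is imposed -- hence the ambiguity:
difference = sp.expand(eta_dot_1 - eta_dot_2)
assert sp.expand(difference - (k * xQ * pQ - k * pQ * xQ)) == 0
```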
In terms of the hybrid bracket~\eqref{eq:hybrid2}, time evolution of $\eta$ is given by \begin{align}\label{eq:eta-dot} \begin{split}
\dot{\eta} &= \{\![\eta,H]\!\} \\
&= k \left( \frac{x_Q \, p_Q + p_Q \, x_Q}{2} - x_C \, p_Q - x_Q \, p_C + x_C \, p_C \right. \\
&\qquad \qquad \qquad \qquad \qquad \qquad + \left. \frac{i\hbar}{2} \sigma(x_C,p_C) \right)~. \end{split} \end{align} Compare~\eqref{eq:eta-dot} to~\eqref{eq:naive}. While a symmetrization of~\eqref{eq:naive} gets rid of the ordering ambiguity (in an ad-hoc manner), it completely misses out on the $\sigma$ term present in~\eqref{eq:eta-dot}. To reiterate, $\sigma$ is connected to the operator ordering on the $C$ sector, not the $Q$ sector, before taking the classical limit.
As promised, different forms of $\sigma$ will have different evolutions. In the simple example of $\eta=p_C\,p_Q$, using the definition~\eqref{eq:sigma} of $\sigma$, $\dot{\eta}$ becomes \begin{align}\label{eq:eta-dot-c}
k \left( \frac{x_Q \, p_Q + p_Q \, x_Q}{2} - x_C \, p_Q - x_Q \, p_C + x_C \, p_C + \frac{i\hbar}{2}c \right)~. \end{align} Here, $c$ is a constant determined by the original quantization scheme on the $C$ sector before the classical limit, as discussed below Eq.~\eqref{eq:sigma}. Finally, the time dependence of $\eta$ can be found by plugging in the solutions for $(x_C,p_C,x_Q,p_Q)$. The classical variables $x_C$ and $p_C$ have been explicitly calculated in Eqs.~\eqref{eq:xc} and~\eqref{eq:pc}. The equations for $(x_Q,p_Q)$ can be found similarly.
The argument presented in this section is not specific to the harmonic oscillator, of course. The procedure is general and applicable to any Hamiltonian and hybrid variable provided they belong to an associative subalgebra with the hybrid composition product $\circledast$.
\section{Conclusion} \label{sec:conc}
The example of the quantum-classical harmonic oscillator presented here exhibits basic features of hybrid systems. The treatment used is applicable to general Hamiltonians.
Quantum backreaction in interacting quantum-classical systems can, in principle, be described through the framework of Hamiltonian hybrid dynamics. The example of the quantum-classical harmonic oscillator studied here demonstrates this backreaction: classical variables grow to be uncertain by interacting with quantum ones. The uncertainty in classical variables is manifested in the nonvanishing commutator of classical variables.
Along with Anderson~\cite{anderson_quantum_1995}, we note that despite the fuzziness of classical variables through interaction, they are still classical in the sense that their uncertainty is entirely due to the effect on them of truly quantum variables. That effect is depicted in Fig.~\ref{fig:1}: a commutator of classical variables vanishes initially, but evolves to be nonzero at later times. The figure also illustrates that the more massive the classical system is, the longer it remains certain.
An important property made plain by the analysis presented here is that the details of a hybrid bracket are irrelevant for the dynamics of pure variables. All brackets obeying the reduction and bilinearity properties (Eqs.~\eqref{eq:reduction} and~\eqref{eq:lin}) are equivalent as far as pure variables are concerned. This covers a wide class of problems where the only nontrivial hybrid variable is the interaction Hamiltonian.
Dynamics of hybrid variables requires the use of a fully-defined hybrid bracket. Without such a bracket, we run into ordering ambiguities and miss extra effects resulting from the hybrid nature of the system. As shown in~\cite{amin_quantum-classical_2020}, these extra effects arise from the specific quantization scheme (and its associated phase space distribution) used in deriving the bracket.
\end{document} | arXiv |
\begin{document}
\twocolumn[ \icmltitle{Value-at-Risk Optimization with Gaussian Processes}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist} \icmlauthor{Quoc Phong Nguyen}{to} \icmlauthor{Zhongxiang Dai}{to} \icmlauthor{Bryan Kian Hsiang Low}{to} \icmlauthor{Patrick Jaillet}{goo} \end{icmlauthorlist}
\icmlaffiliation{to}{Department of Computer Science, National University of Singapore, Republic of Singapore} \icmlaffiliation{goo}{Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA}
\icmlcorrespondingauthor{Quoc Phong Nguyen}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in ]
\printAffiliationsAndNotice{}
\begin{abstract} \emph{Value-at-risk} ({\textsc{VaR}}) is an established measure to assess risks in critical real-world applications with random environmental factors. This paper presents a novel \emph{{\textsc{VaR}} upper confidence bound} (V-UCB) algorithm for maximizing the {\textsc{VaR}} of a black-box objective function with the first no-regret guarantee. To realize this, we first derive a confidence bound of {\textsc{VaR}} and then prove the existence of values of the environmental random variable (to be selected to achieve no regret) such that the confidence bound of {\textsc{VaR}} lies within that of the objective function evaluated at such values. Our V-UCB algorithm empirically demonstrates state-of-the-art performance in optimizing synthetic benchmark functions, a portfolio optimization problem, and a simulated robot task.
\end{abstract}
\section{Introduction}
Consider the problem of maximizing an expensive-to-compute black-box objective function $f$ that depends on an \emph{optimization variable} $\mbf{x}$ and an \emph{environmental random variable} $\mbf{Z}$.
Due to the randomness in $\mbf{Z}$, the function evaluation $f(\mbf{x},\mbf{Z})$ of $f$ at $\mbf{x}$ is a random variable. Though for such an objective function $f$, \emph{Bayesian optimization} (BO) can be naturally applied to maximize its expectation $\mbb{E}_{\mbf{Z}}[f(\mbf{x},\mbf{Z})]$ over $\mbf{Z}$ \cite{toscano2018bayesian}, this maximization objective overlooks the \emph{risks} of potentially undesirable function evaluations. These risks can arise from either (a) the realization of an unknown distribution of $\mbf{Z}$ or (b) the realization of the random $\mbf{Z}$ given that the distribution of $f(\mbf{x},\mbf{Z})$ can be estimated well or that of $\mbf{Z}$ is known.
The issue (a) has been tackled by distributionally robust BO \cite{kirschner2020distributionally,nguyen2020distributionally} which maximizes $\mbb{E}_{\mbf{Z}}[f(\mbf{x},\mbf{Z})]$ under the worst-case realization of the distribution of $\mbf{Z}$.
To resolve the issue (b), the risk from the uncertainty in $\mbf{Z}$ can be controlled via the mean-variance optimization framework \cite{iwazaki2020mean}, \emph{value-at-risk} ({\textsc{VaR}}), or \emph{conditional value-at-risk} ({\textsc{CVaR}}) \cite{borisk20,torossian2020bayesian}. The work of \citet{bogunovic2018adversarially} has considered \emph{adversarially robust BO}, where $\mbf{z}$ is controlled by an adversary deterministically.\footnote{We use upper-case letter $\mbf{Z}$ to denote the environmental random variable and lower-case letter $\mbf{z}$ to denote its realization or a (non-random) variable.}
In this case, the objective is to find $\mbf{x}$ that maximizes the function under the worst-case realization of $\mbf{z}$, i.e., $\text{arg}\!\max_{\mbf{x}} \min_{\mbf{z}} f(\mbf{x},\mbf{z})$.
In this paper, we focus on case (b) where the distribution of
$\mbf{Z}$ is known (or well-estimated).
For example, in agriculture, although farmers cannot control the temperature of an outdoor farm,
its distribution can be estimated from historical data and controlled in an indoor environment for optimizing the plant yield. Given the distribution of $\mbf{Z}$, the objective is to control the risk
that the function evaluation $f(\mbf{x},\mbf{z})$, for a $\mbf{z}$ sampled from $\mbf{Z}$,
is small. One popular framework is to control the trade-off between the mean (viewed as reward) and the variance (viewed as risk) of the function evaluation with respect to $\mbf{Z}$ \cite{iwazaki2020mean}. However, quantifying the risk
using variance implies indifference between positive and negative deviations from the mean, while people often have asymmetric risk attitudes \cite{goh2012portfolio}. In our problem of maximizing the objective function, it is reasonable to assume that people are risk-averse towards only the negative deviations
from the mean, i.e., the risk of getting lower function evaluations.
Thus, it is more appropriate to adopt risk measures
with this asymmetric property, such as \emph{value-at-risk} ({\textsc{VaR}}) which is a widely adopted risk measure in real-world applications (e.g., banking \cite{basel06}). Intuitively, the risk
that the random $f(\mbf{x},\mbf{Z})$ is less than {\textsc{VaR}} at level $\alpha \in (0,1)$ does not exceed $\alpha$, e.g., by specifying a small value of $\alpha$ as $0.1$, this risk is controlled to be at most $10\%$. Therefore, to maximize the function $f$ while controlling the risk of undesirable (i.e., small) function evaluations, we aim to maximize {\textsc{VaR}} of the random function $f(\mbf{x},\mbf{Z})$ over $\mbf{x}$.
The recent work of \citet{borisk20} has used BO to maximize {\textsc{VaR}} and has achieved state-of-the-art empirical performances. They have assumed that we are able to select both $\mbf{x}$ and $\mbf{z}$ to query during BO,
which is motivated by the fact that physical experiments can usually be studied via simulation \cite{williams2000sequential}.
In the example on agriculture given above, we can control the temperature, light and water ($\mbf{z}$) in a small indoor environment to optimize the amount of fertilizer ($\mbf{x}$), which can then be used in an outdoor environment with random weather factors.
\citet{borisk20} have exploited the ability to select $\mbf{z}$ to model the function $f(\mbf{x},\mbf{z})$ as a GP, which allows them to retain the appealing closed-form posterior belief of the objective function.
To select the queries $\mbf{x}$ and $\mbf{z}$, they have designed a one-step lookahead approach based on the well-known \emph{knowledge gradient} (KG) acquisition function \cite{scott2011correlated}.
However, the one-step lookahead incurs a nested optimization procedure, which is computationally expensive and hence requires approximations.
Besides, the acquisition function can only be approximated using samples of the objective function $f$ from the GP posterior and of the environmental random variable $\mbf{Z}$. While they have shown that their gradient estimators are asymptotically unbiased and consistent, it is challenging to obtain a guarantee for the convergence of their algorithm.
Another recent work \cite{torossian2020bayesian} has also applied BO to maximize {\textsc{VaR}} using an asymmetric Laplace likelihood function and variational approximation of the posterior belief. However, in contrast to \citet{borisk20} and our work, they have focused on a different setting where the realizations of $\mbf{Z}$ are not observed.
In this paper, we adopt the setting of \citet{borisk20} which allows us to choose both $\mbf{x}$ and $\mbf{z}$ to query, and assume that the distribution of $\mbf{Z}$ is known or well-estimated. Our contributions include:
\textbf{Firstly}, we propose a novel BO algorithm named \emph{Value-at-risk Upper Confidence Bound} ({V-UCB}) in Section~\ref{sec:vy}. Unlike the work of \citet{borisk20}, {V-UCB} is equipped with a no-regret convergence guarantee and is more computationally efficient.
To guide its query selection and facilitate its proof of the no-regret guarantee, the classical GP-UCB algorithm~\cite{srinivas10ucb} constructs a \emph{confidence bound} of the objective function. Similarly, to maximize the {\textsc{VaR}} of a random function, we, for the first time to the best of our knowledge, construct a confidence bound of {\textsc{VaR}} (Lemma~\ref{lemma:confbound}). The resulting confidence bound of {\textsc{VaR}} naturally gives rise to a strategy to select $\mbf{x}$. However, it remains a major challenge to select $\mbf{z}$ to preserve the no-regret convergence of GP-UCB. To this end, we firstly prove that our algorithm is no-regret as long as we ensure that at the selected $\mbf{z}$, the confidence bound of {\textsc{VaR}} \emph{lies within} the confidence bound of the objective function. Next, we also prove that this query selection strategy is \emph{feasible}, i.e., such values of $\mbf{z}$, referred to as \emph{lacing values} (LV), exist.
\textbf{Secondly}, although our theoretical no-regret property allows the selection of \emph{any} LV, we design a heuristic to select an LV such that it improves our empirical performance over random selection of LV (Section~\ref{subsec:improvedselectz}). We also discuss the implications when $\mbf{z}$ cannot be selected by BO and is instead randomly sampled by the environment during BO (Remark~\ref{rmk:unknownZ}).
\textbf{Thirdly}, we show that adversarially robust BO \cite{bogunovic2018adversarially} can be cast as a special case of our V-UCB
when the risk level $\alpha$ of {\textsc{VaR}} approaches $0$ from the right and the domain of $\mbf{z}$ is the support of $\mbf{Z}$. In this case,
adversarially robust BO \cite{bogunovic2018adversarially} selects the same input queries as those selected by V-UCB since the set of LVs collapses into the set of minimizers of the lower bound of the objective function (Section~\ref{subsec:vy-stableopt}).
\textbf{Lastly}, we provide practical techniques for
implementing {V-UCB} with continuous random variable $\mbf{Z}$ (Section~\ref{sec:continuousz}): we (a) introduce \emph{local neural surrogate optimization} with the \emph{pinball loss} to optimize {\textsc{VaR}}, and (b) construct an objective function to search for an LV in the continuous support of $\mbf{Z}$.
The performance of our proposed algorithm is empirically demonstrated in optimizing several synthetic benchmark functions, a portfolio optimization problem, and a simulated robot task in Section~\ref{sec:experiments}.
\section{Problem Statement and Background}
Let the objective function be defined as $f: \mcl{D}_{\mbf{x}} \times \mcl{D}_{\mbf{z}} \rightarrow \mbb{R}$ where $\mcl{D}_{\mbf{x}} \subset \mbb{R}^{d_x}$ and $\mcl{D}_{\mbf{z}}\subset \mbb{R}^{d_z}$ are the bounded domain of the optimization variable $\mbf{x}$ and the support of the environmental random variable $\mbf{Z}$, respectively; $d_x$ and $d_z$ are the dimensions of $\mbf{x}$ and $\mbf{z}$, respectively. The support of $\mbf{Z}$ is defined as the smallest closed subset $\mcl{D}_{\mbf{z}}$ of $\mbb{R}^{d_z}$ such that $P(\mbf{Z} \in \mcl{D}_{\mbf{z}}) = 1$. Let $\mbf{z} \in \mcl{D}_{\mbf{z}}$ denote a realization of the random variable $\mbf{Z}$.
Let $f(\mbf{x},\mbf{Z})$ denote a random variable whose randomness comes from $\mbf{Z}$.
The {\textsc{VaR}} of $f(\mbf{x},\mbf{Z})$ at \emph{risk level} $\alpha \in (0,1)$ is defined as:
\begin{equation} V_{\alpha}(f(\mbf{x},\mbf{Z})) \triangleq \inf \{\omega: P(f(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\} \label{eq:var} \end{equation}
which implies the risk that $f(\mbf{x},\mbf{Z})$ is less than its {\textsc{VaR}} at level $\alpha$ does not exceed $\alpha$.
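As a concrete illustration of the definition \eqref{eq:var} (a minimal sketch, not part of the paper's algorithm), the empirical {\textsc{VaR}} of a finite sample can be computed by sorting; the function name here is illustrative:

```python
import numpy as np

def empirical_var(samples, alpha):
    """Empirical value-at-risk at level alpha: the smallest w with
    P(X <= w) >= alpha, estimated from i.i.d. samples."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    idx = int(np.ceil(alpha * n)) - 1  # smallest i with (i + 1) / n >= alpha
    return s[max(idx, 0)]

# With ten equally likely outcomes 1..10 and alpha = 0.1,
# P(X <= 1) = 0.1 already, so the VaR is 1.
print(empirical_var(range(1, 11), 0.1))  # -> 1.0
```

Note that, consistent with the interpretation in the text, the probability that a sample falls strictly below the returned value is at most $\alpha$.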
Our objective is to search for $\mbf{x} \in \mcl{D}_{\mbf{x}}$ that maximizes $V_\alpha(f(\mbf{x},\mbf{Z}))$ at a user-specified risk level $\alpha \in (0,1)$. Intuitively, the goal is to find $\mbf{x}$ where the evaluations of the objective function are as large as possible under most realizations of the environmental random variable $\mbf{Z}$, i.e., with probability at least $1-\alpha$.
The unknown objective function $f(\mbf{x},\mbf{z})$ is modeled with a GP. That is, every finite subset of $\{f(\mbf{x},\mbf{z})\}_{(\mbf{x}, \mbf{z}) \in \mcl{D}_{\mbf{x}} \times \mcl{D}_{\mbf{z}}}$ follows a multivariate Gaussian distribution \cite{rasmussen06}. The GP is fully specified by its \emph{prior} mean and covariance function $k_{(\mbf{x},\mbf{z}), (\mbf{x}',\mbf{z}')} \triangleq \text{cov}[f(\mbf{x},\mbf{z}), f(\mbf{x}',\mbf{z}')]$ for all $\mbf{x}, \mbf{x}'$ in $\mcl{D}_{\mbf{x}}$ and $\mbf{z}, \mbf{z}'$ in $\mcl{D}_{\mbf{z}}$. For notational simplicity (and w.l.o.g.), the former is assumed to be zero, while we use the \emph{squared exponential} (SE) kernel as its bounded maximum information gain can be used for later analysis \cite{srinivas10ucb}.
To identify the optimal $\mbf{x}_* \triangleq \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_{\alpha}(f(\mbf{x},\mbf{Z}))$, the BO algorithm selects an input query $(\mbf{x}_t,\mbf{z}_t)$ in the $t$-th iteration to obtain a noisy function evaluation $y_{(\mbf{x}_t,\mbf{z}_t)} \triangleq f(\mbf{x}_t,\mbf{z}_t) + \epsilon_t$ where the $\epsilon_t \sim \mcl{N}(0,\sigma_n^2)$ are i.i.d. Gaussian noise terms with variance $\sigma_n^2$. Given noisy observations $\mbf{y}_{\mcl{D}_t} \triangleq (y_{(\mbf{x},\mbf{z})})_{(\mbf{x},\mbf{z}) \in \mcl{D}_t}^\top$ at observed inputs $\mcl{D}_t \triangleq \mcl{D}_{t-1} \cup \{(\mbf{x}_t, \mbf{z}_t)\}$ (and $\mcl{D}_0$ is the set of initial observed inputs), the GP posterior belief of the function evaluation at any input $(\mbf{x},\mbf{z})$ is a Gaussian $p(f(\mbf{x},\mbf{z})|\mbf{y}_{\mcl{D}_t}) \triangleq \mcl{N}( f(\mbf{x},\mbf{z})|\mu_t(\mbf{x},\mbf{z}), \sigma_t^2(\mbf{x},\mbf{z}))$:
\begin{equation} \begin{array}{r@{}l} \mu_t(\mbf{x},\mbf{z}) &\triangleq \mbf{K}_{(\mbf{x},\mbf{z}),\mcl{D}_t} \bm{\Lambda}_{\mcl{D}_t\mcl{D}_t} \mbf{y}_{\mcl{D}_t}\ ,\\ \sigma_t^2(\mbf{x},\mbf{z}) &\triangleq k_{(\mbf{x},\mbf{z}),(\mbf{x},\mbf{z})} - \mbf{K}_{(\mbf{x},\mbf{z}),\mcl{D}_t} \bm{\Lambda}_{\mcl{D}_t\mcl{D}_t} \mbf{K}_{\mcl{D}_t,(\mbf{x},\mbf{z})} \label{eq:gppost} \end{array} \end{equation}
where $\bm{\Lambda}_{\mcl{D}_t\mcl{D}_t} \triangleq \left(
\mbf{K}_{\mcl{D}_t\mcl{D}_t} + \sigma_n^2 \mbf{I} \right)^{-1}$, $\mbf{K}_{(\mbf{x},\mbf{z}),\mcl{D}_t} \triangleq (k_{(\mbf{x},\mbf{z}),(\mbf{x}',\mbf{z}')})_{(\mbf{x}',\mbf{z}') \in \mcl{D}_t}$, $\mbf{K}_{\mcl{D}_t,(\mbf{x},\mbf{z})} \triangleq \mbf{K}_{(\mbf{x},\mbf{z}),\mcl{D}_t}^\top$, $\mbf{K}_{\mcl{D}_t\mcl{D}_t} \triangleq (k_{(\mbf{x}',\mbf{z}'), (\mbf{x}'',\mbf{z}'')})_{(\mbf{x}',\mbf{z}'), (\mbf{x}'',\mbf{z}'') \in \mcl{D}_t}$, $\mbf{I}$ is the identity matrix.
\section{BO of {\textsc{VaR}}} \label{sec:vy}
Following the seminal work \cite{srinivas10ucb}, we use the \emph{cumulative regret} as the performance metric to quantify the performance of our BO algorithm. It is defined as $R_T \triangleq \sum_{t=1}^T r(\mbf{x}_t)$ where $r(\mbf{x}_t) \triangleq V_{\alpha}(f(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(f(\mbf{x}_t,\mbf{Z}))$ is the \emph{instantaneous regret} and $\mbf{x}_* \triangleq \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_{\alpha}(f(\mbf{x},\mbf{Z}))$. We would like to design a query selection strategy that incurs \emph{no regret}, i.e., $\lim_{T\rightarrow \infty} R_T / T = 0$.
Furthermore, we have that $\min_{t \le T} r(\mbf{x}_t) \le R_T / T$, equivalently, $\max_{t \le T} V_{\alpha}(f(\mbf{x}_t,\mbf{Z})) \ge V_{\alpha}(f(\mbf{x}_*,\mbf{Z})) - R_T / T$. Thus, $\lim_{T \rightarrow \infty} \max_{t \le T} V_{\alpha}(f(\mbf{x}_t,\mbf{Z})) = V_{\alpha}(f(\mbf{x}_*,\mbf{Z}))$ for a no-regret algorithm.
The proof of the upper bound on the cumulative regret of GP-UCB is based on confidence bounds of the objective function \cite{srinivas10ucb}. Similarly, in the next section, we start by constructing a confidence bound of $V_{\alpha}(f(\mbf{x},\mbf{Z}))$, which naturally leads to a query selection strategy for $\mbf{x}_t$.
\subsection{A Confidence Bound of $V_\alpha(f(\mbf{x},\mbf{Z}))$ and the Query Selection Strategy for $\mbf{x}_t$} \label{subsec:confboundf}
Firstly, we adopt a confidence bound of the function $f(\mbf{x},\mbf{z})$ from \citet{chowdhury2017kernelized}, which assumes that $f$ belongs to a \emph{reproducing kernel Hilbert space} $\mcl{F}_k(B)$ such that its RKHS norm is bounded $\Vert f\Vert_k \le B$.
\begin{lemma}[\citet{chowdhury2017kernelized}] Pick $\delta \in (0,1)$ and set $\beta_t = (B + \sigma_n \sqrt{2(\gamma_{t-1} + 1 + \log(1/\delta))})^2$.
Then, $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$
$\forall \mbf{x} \in \mcl{D}_{\mbf{x}}, \mbf{z} \in \mcl{D}_{\mbf{z}}, t \ge 1 $ holds with probability $\ge 1 - \delta$ where
\begin{equation} \begin{array}{r@{}l} l_{t-1}(\mbf{x},\mbf{z})\ &\triangleq \mu_{t-1}(\mbf{x},\mbf{z}) - \beta_t^{1/2} \sigma_{t-1}(\mbf{x},\mbf{z})\\ u_{t-1}(\mbf{x},\mbf{z})\ &\triangleq \mu_{t-1}(\mbf{x},\mbf{z}) + \beta_t^{1/2} \sigma_{t-1}(\mbf{x},\mbf{z})\ . \end{array}
\label{eq:fbound} \end{equation}
\label{lemma:ucb51} \end{lemma}
As the above lemma holds for both finite and continuous $\mcl{D}_x$ and $\mcl{D}_z$, it is
used to analyse the regret in both cases. On the other hand, the confidence bound can be adapted to the Bayesian setting by changing only $\beta_t$ following the work of \citet{srinivas10ucb}, as noted by \citet{bogunovic2018adversarially}.
Then, we exploit this confidence bound on the function evaluations (Lemma \ref{lemma:ucb51}) to formulate a confidence bound of $V_{\alpha}(f(\mbf{x},\mbf{Z}))$ as follows.
\begin{lemma} \label{lemma:confbound} Similar to the definition of $f(\mbf{x},\mbf{Z})$, let $l_{t-1}(\mbf{x}, \mbf{Z})$ and $u_{t-1}(\mbf{x},\mbf{Z})$ denote the random functions over $\mbf{x}$ whose randomness comes from the random variable $\mbf{Z}$; $l_{t-1}$ and $u_{t-1}$ are defined in \eqref{eq:fbound}.
Then, $\forall \mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$,
\[ \begin{array}{r@{}l} \displaystyle V_\alpha(f(\mbf{x},\mbf{Z})) \displaystyle &\displaystyle \in I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))]\\
&\displaystyle \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))] \end{array} \] holds with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}, where $V_\alpha(l_{t-1}(\mbf{x},\mbf{Z}))$ and $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ are defined as \eqref{eq:var}. \end{lemma}
The proof is in Appendix~\ref{app:proofvconfbound}. Given the confidence bound $I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))] \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))]$ in Lemma~\ref{lemma:confbound}, we follow the well-known
``optimism in the face of uncertainty'' principle to select $\mbf{x}_t = \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$. This query selection strategy for $\mbf{x}_t$ leads to an upper bound of $r(\mbf{x}_t)$:
\begin{align} r(\mbf{x}_t) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))\ \forall t \ge 1
\label{eq:iregretbound1} \end{align}
which holds with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}, and is proved in Appendix~\ref{app:iregretbound1}.
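For a finite $\mcl{D}_{\mbf{z}}$ with known probability masses, the optimistic selection of $\mbf{x}_t$ can be sketched as a grid search (a hypothetical implementation for illustration, not the paper's code):

```python
import numpy as np

def discrete_var(vals, p, alpha):
    """VaR at level alpha of a discrete random variable taking value
    vals[i] with probability p[i]: smallest w with P(X <= w) >= alpha."""
    order = np.argsort(vals)
    cum = np.cumsum(np.asarray(p)[order])
    return np.asarray(vals)[order][np.argmax(cum >= alpha)]

def select_x(u_xz, p_z, alpha):
    """Optimistic selection: x_t = argmax_x V_alpha(u_{t-1}(x, Z)),
    where u_xz[i, j] = u_{t-1}(x_i, z_j) on finite grids."""
    scores = [discrete_var(row, p_z, alpha) for row in u_xz]
    return int(np.argmax(scores))

u = np.array([[1.0, 2.0, 3.0],
              [0.0, 5.0, 5.0]])
p = np.array([1 / 3, 1 / 3, 1 / 3])
print(select_x(u, p, alpha=0.4))  # -> 1 (row 1 has the larger VaR, 5.0)
```

Sorting the upper bound over $\mcl{D}_{\mbf{z}}$ and accumulating probability mass is exactly the $\mcl{O}(|\mcl{D}_{\mbf{z}}|\log|\mcl{D}_{\mbf{z}}|)$ step discussed in the complexity analysis later in this section.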
As our goal is $\lim_{T\rightarrow \infty} R_T/T = 0$, given the selected query $\mbf{x}_t$, a reasonable query selection strategy of $\mbf{z}_t$ should gather informative observations at $(\mbf{x}_t, \mbf{z}_t)$ that improves the confidence bound $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$ (i.e., $I_{t}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$ is a proper subset of $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$ if $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))] \neq \emptyset$) which can be viewed as the uncertainty of $V_{\alpha}(f(\mbf{x}_t,\mbf{Z}))$.
Assume that there exists $\mbf{z}_l \in \mcl{D}_{\mbf{z}}$ such that $l_{t-1}(\mbf{x}_t,\mbf{z}_l) = V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))$ and $\mbf{z}_u \in \mcl{D}_{\mbf{z}}$ such that $u_{t-1}(\mbf{x}_t,\mbf{z}_u) = V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z}))$.
Lemma~\ref{lemma:confbound} implies that $V_\alpha(f(\mbf{x}_t,\mbf{Z})) \in I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))] = [l_{t-1}(\mbf{x}_t,\mbf{z}_l), u_{t-1}(\mbf{x}_t,\mbf{z}_u)]$ with high probability. Hence, we may
na\"ively want to query for observations at $(\mbf{x}_t, \mbf{z}_l)$ and $(\mbf{x}_t,\mbf{z}_u)$ to reduce $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$. However, these observations may not always reduce $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$. The insight is that $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$ changes (i.e., shrinks) when either of its boundary values (i.e., $l_{t-1}(\mbf{x}_t,\mbf{z}_l)$ or $u_{t-1}(\mbf{x}_t,\mbf{z}_u)$) changes.
Consider $u_{t-1}(\mbf{x}_t,\mbf{z}_u)$ and finite $\mcl{D}_{\mbf{z}}$ as an example, since $u_{t-1}(\mbf{x}_t,\mbf{z}_u) = V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z}))$, a natural cause for the change in $u_{t-1}(\mbf{x}_t,\mbf{z}_u)$ is when $\mbf{z}_u$ changes.
This happens if there exists $\mbf{z}' \neq \mbf{z}_u$ such that the \emph{ordering} of $u_{t-1}(\mbf{x}_t,\mbf{z}')$ relative to $u_{t-1}(\mbf{x}_t,\mbf{z}_u)$ changes given more observations. Thus, observations that are capable of reducing $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$ should be able to \emph{change the relative ordering} in this case. We construct the following counterexample where observations at $\mbf{z}_u$ (and $\mbf{z}_l$) are not able to change the relative ordering, so they do not reduce $I_{t-1}[V_\alpha (f(\mbf{x}_t,\mbf{Z}))]$.
\begin{example} \label{example:counter} This example is described by Fig.~\ref{fig:orderinguncertainty}. We reduce notational clutter by removing $\mbf{x}_t$ and $t$ since they are fixed in this example, i.e., we use $f(\mbf{z})$, $f(\mbf{Z})$, and $l(\mbf{z})$ to denote $f(\mbf{x}_t, \mbf{z})$, $f(\mbf{x}_t, \mbf{Z})$, and $l_{t-1}(\mbf{x}_t, \mbf{z})$ respectively. We condition on the event $f(\mbf{z}) \in I[f(\mbf{z})] \triangleq [l(\mbf{z}), u(\mbf{z})]$ for all $\mbf{z} \in \mcl{D}_{\mbf{z}}$ which occurs with probability $\ge 1 - \delta$ in Lemma~\ref{lemma:ucb51}. In this example, $\mbf{z}_l = \mbf{z}_1$ and $l(\mbf{z}_1) = u(\mbf{z}_1)$, so there is no uncertainty in $f(\mbf{z}_l) = f(\mbf{z}_1)$. Similarly, there is no uncertainty in $f(\mbf{z}_u) = f(\mbf{z}_2)$. Thus, new observations at $\mbf{z}_l$ and $\mbf{z}_u$ change neither $l(\mbf{z}_l)$ nor $u(\mbf{z}_u)$, so these observations do not reduce the confidence bound $I[V_{\alpha=0.4}(f(\mbf{Z}))] = [l(\mbf{z}_l), u(\mbf{z}_u)]$ (plotted as the double-headed arrow in Fig.~\ref{fig:orderinguncertainty}b).
In fact, to reduce $I[V_{\alpha=0.4}(f(\mbf{Z}))]$, we should gather new observations at $\mbf{z}_0$ which potentially change the ordering of $u(\mbf{z}_0)$ relative to $u(\mbf{z}_2)$ (which is $u(\mbf{z}_u)$ without new observations). For example, after
getting new observations at $\mbf{z}_0$, if $u(\mbf{z}_0)$ is improved to be in the white region between A and B ($u(\mbf{z}_0) > u(\mbf{z}_2)$ in Fig.~\ref{fig:orderinguncertainty}b changes to $u(\mbf{z}_0) < u(\mbf{z}_2)$ in Fig.~\ref{fig:orderinguncertainty}c), then $I[V_{\alpha=0.4}(f(\mbf{Z}))]$ is reduced to $[l(\mbf{z}_1), u(\mbf{z}_0)]$ because now $\mbf{z}_u=\mbf{z}_0$. Thus, as the confidence bound $I[f(\mbf{z}_0)]$ is shortened with more and more observations at $\mbf{z}_0$, the confidence bound $I[V_{\alpha=0.4}(f(\mbf{Z}))]$ reduces (the white region in Fig.~\ref{fig:orderinguncertainty} is `laced up'). \end{example}
\begin{figure*}\label{fig:orderinguncertainty}
\end{figure*}
In the next section, we define a property of $\mbf{z}_0$ in Example~\ref{example:counter}
and prove the existence of $\mbf{z}$'s with this property. Then, we prove that along with the optimistic selection of $\mbf{x}_t$, the selection of $\mbf{z}_t$ such that it satisfies this property leads to a no-regret algorithm.
\subsection{Lacing Value (LV) and the Query Selection Strategy for $\mbf{z}_t$} \label{subsec:lv}
We note that in Example~\ref{example:counter}, as long as the confidence bound of the function evaluation at $\mbf{z}_0$ contains
the confidence bound of {\textsc{VaR}}, observations at $\mbf{z}_0$ can
reduce the confidence bound of {\textsc{VaR}}. We name the values of $\mbf{z}$ satisfying this property as \emph{lacing values} (LV):
\begin{definition}[Lacing values] \label{definition:lv} \emph{Lacing values} (LV) with respect to $\mbf{x} \in \mcl{D}_{\mbf{x}}$ and $t \ge 1$ are values $\mbf{z}_{\text{LV}} \in \mcl{D}_{\mbf{z}}$ that satisfy $l_{t-1}(\mbf{x},\mbf{z}_{\text{LV}}) \le V_{\alpha}(l_{t-1}(\mbf{x}, \mbf{Z})) \le V_{\alpha}(u_{t-1}(\mbf{x}, \mbf{Z})) \le u_{t-1}(\mbf{x},\mbf{z}_{\text{LV}})$, equivalently, $I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))] \subset [l_{t-1}(\mbf{x},\mbf{z}_{\text{LV}}), u_{t-1}(\mbf{x},\mbf{z}_{\text{LV}})]\ .$ \end{definition}
Recall that the support $\mcl{D}_{\mbf{z}}$ of $\mbf{Z}$ is defined as the smallest closed subset $\mcl{D}_{\mbf{z}}$ of $\mbb{R}^{d_z}$ such that $P(\mbf{Z} \in \mcl{D}_{\mbf{z}}) = 1$ (e.g., $\mcl{D}_{\mbf{z}}$ is a finite subset of $\mbb{R}^{d_z}$ or $\mcl{D}_{\mbf{z}} = \mbb{R}^{d_z}$). The following theorem guarantees the existence of lacing values and is proved in Appendix~\ref{app:prooflv}.
\begin{theorem}[Existence of lacing values] \label{theorem:lv} $\forall \alpha \in (0,1)$, $\forall \mbf{x} \in \mcl{D}_{\mbf{x}}$, $\forall t \ge 1$, there exists a lacing value in $\mcl{D}_{\mbf{z}}$ with respect to $\mbf{x}$ and $t$. \end{theorem}
\begin{corollary} \label{corollary:loclv} Lacing values with respect to $\mbf{x} \in \mcl{D}_{\mbf{x}}$ and $t \ge 1$ are in $\mcl{Z}_l^{\le} \cap \mcl{Z}_u^{\ge}$ where $\mcl{Z}_l^{\le} \triangleq \{\mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) \le V_{\alpha}(l_{t-1}(\mbf{x}, \mbf{Z}))\}$ and $\mcl{Z}_u^{\ge} \triangleq \{\mbf{z} \in \mcl{D}_{\mbf{z}}: u_{t-1}(\mbf{x},\mbf{z}) \ge V_{\alpha}(u_{t-1}(\mbf{x}, \mbf{Z}))\}$. \end{corollary}
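On finite grids, this characterization suggests a direct search for LVs by intersecting the two sets; a sketch under the assumption of a finite $\mcl{D}_{\mbf{z}}$ with probability masses \texttt{p\_z} (names are illustrative):

```python
import numpy as np

def discrete_var(vals, p, alpha):
    """Smallest w with P(X <= w) >= alpha for a discrete distribution."""
    order = np.argsort(vals)
    cum = np.cumsum(np.asarray(p)[order])
    return np.asarray(vals)[order][np.argmax(cum >= alpha)]

def lacing_values(l_z, u_z, p_z, alpha):
    """Indices j with l(x, z_j) <= V_alpha(l(x, Z)) and
    u(x, z_j) >= V_alpha(u(x, Z)), i.e., the intersection of the two
    sets in the corollary; the existence theorem guarantees it is
    non-empty."""
    v_l = discrete_var(l_z, p_z, alpha)
    v_u = discrete_var(u_z, p_z, alpha)
    return np.flatnonzero((l_z <= v_l) & (u_z >= v_u))

l = np.array([0.0, 1.0, 2.0])
u = np.array([3.0, 2.0, 4.0])
p = np.array([1 / 3, 1 / 3, 1 / 3])
print(lacing_values(l, u, p, alpha=0.4))  # -> [0]
```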
As a special case, when $\mbf{z}_l = \mbf{z}_u$, $I_{t-1}[V_\alpha(f(\mbf{x},\mbf{Z}))] = I_{t-1}[f(\mbf{x},\mbf{z}_l)]$ which means $\mbf{z}_l = \mbf{z}_u$ is an LV.
Based on Theorem~\ref{theorem:lv}, we can always select $\mbf{z}_t$ as an LV w.r.t $\mbf{x}_t$ defined in Definition~\ref{definition:lv}. This strategy for the selection of $\mbf{z}_t$, together with the selection of $\mbf{x}_t = \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ (Section \ref{subsec:confboundf}), completes our algorithm: \emph{{\textsc{VaR}} Upper Confidence Bound} ({V-UCB}) (Algorithm~\ref{alg:v-ucb}).
\textbf{Upper Bound on Regret.} As a result of the selection strategies for $\mbf{x}_t$ and $\mbf{z}_t$, our V-UCB algorithm enjoys the following upper bound on its instantaneous regret (proven in Appendix~\ref{app:iregretbound2}): \begin{lemma} \label{lemma:iregretbound2} By selecting $\mbf{x}_t$ as a maximizer of $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ and selecting $\mbf{z}_t$ as an LV w.r.t $\mbf{x}_t$, the instantaneous regret is upper-bounded by: \begin{equation*} r(\mbf{x}_t) \le 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t, \mbf{z}_t)\ \forall t \ge 1 \end{equation*} with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}. \end{lemma}
Lemma~\ref{lemma:iregretbound2}, together with Lemma 5.4 from \citet{srinivas10ucb}, implies that the cumulative regret of our algorithm is bounded (Appendix~\ref{app:rtbound}): $R_T \le \sqrt{C_1 T \beta_T \gamma_T}$
where $C_1 \triangleq 8/\log(1 + \sigma_n^{-2})$, and $\gamma_T$ is the maximum information gain about $f$ that can be obtained from any set of $T$ observations. \citet{srinivas10ucb} have analyzed $\gamma_T$ for several commonly used kernels such as SE and Mat\'ern kernels, and have shown that for these kernels, the upper bound on $R_T$ grows sub-linearly. This implies that our algorithm is \emph{no-regret} because $\lim_{T\rightarrow \infty} R_T / T = 0$.
\begin{algorithm}[tb]
\caption{The {V-UCB} Algorithm} \begin{algorithmic}[1]
\STATE {\bfseries Input:} $\mcl{D}_{\mbf{x}}$, $\mcl{D}_{\mbf{z}}$, prior $\mu_0=0, \sigma_0, k$
\FOR{$t=1,2,\dots$}
\STATE Select $\mbf{x}_t = \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$
\STATE Select $\mbf{z}_t$ as a \emph{lacing value} w.r.t. $\mbf{x}_t$ (Definition~\ref{definition:lv})
\STATE Obtain observation $y_t \triangleq f(\mbf{x}_t,\mbf{z}_t) + \epsilon_t$
\STATE Update the GP posterior belief to obtain $\mu_t$ and $\sigma_t$
\ENDFOR \end{algorithmic} \label{alg:v-ucb} \end{algorithm}
Inspired by \citet{bogunovic2018adversarially}, at the {$T$-th} iteration of {V-UCB}, we can recommend $\mbf{x}_{t_*(T)}$ as an estimate of the maximizer $\mbf{x}_*$ of $V_\alpha(f(\mbf{x},\mbf{Z}))$, where $t_*(T) \triangleq \text{arg}\!\max_{t \in \{1,\dots,T\}} V_{\alpha}(l_{t-1}(\mbf{x}_t,\mbf{Z}))$. Then, the instantaneous regret $r(\mbf{x}_{t_*(T)})$ is upper-bounded by $\sqrt{C_1 \beta_T \gamma_T / T}$ with high probability as we show in Appendix~\ref{app:recommendxbound}.
In our experiments in Section~\ref{sec:experiments},
we recommend $\text{arg}\!\max_{\mbf{x} \in \mcl{D}_T} V_{\alpha}(\mu_{t-1}(\mbf{x}, \mbf{Z}))$ (where $\mu_{t-1}(\mbf{x}, \mbf{Z})$ is a random function defined in the same manner as $f(\mbf{x},\mbf{Z})$) as an estimate of $\mbf{x}_*$ due to its empirical convergence.
\textbf{Computational Complexity.} To compare our computational complexity with that of the {$\rho \text{KG}$} algorithm from \citet{borisk20}, we exclude the common part of training the GP model (line 6) and assume that $\mcl{D}_{\mbf{z}}$ is finite.
Then, the time complexity of {V-UCB} is dominated by that of the selection of $\mbf{x}_t$ (line 3) which includes the time complexity $\mcl{O}(|\mcl{D}_{\mbf{z}}| |\mcl{D}_{t-1}|^2)$ for the GP prediction at $\{\mbf{x}\} \times \mcl{D}_{\mbf{z}}$, and $\mcl{O}(|\mcl{D}_{\mbf{z}}|\log|\mcl{D}_{\mbf{z}}|)$ for the sorting of $u_{t-1}(\mbf{x},\mcl{D}_{\mbf{z}})$ and searching of {\textsc{VaR}}. Hence, our overall complexity is $\mcl{O}(n |\mcl{D}_{\mbf{z}}|\ (|\mcl{D}_{t-1}|^2 + \log |\mcl{D}_{\mbf{z}}|))$, where $n$ is the number of iterations to maximize $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ (line 3).
Therefore, our V-UCB is more computationally efficient than {$\rho \text{KG}$} and its variant with approximation {$\rho \text{KG}^{apx}$}, whose complexities are $\mcl{O}(n_{\text{out}} n_{\text{in}} K |\mcl{D}_{\mbf{z}}|\ (|\mcl{D}_{t-1}|^2 + |\mcl{D}_{\mbf{z}}| |\mcl{D}_{t-1}| + |\mcl{D}_{\mbf{z}}|^2 + M |\mcl{D}_{\mbf{z}}|))$ of {$\rho \text{KG}$} and $\mcl{O}(n_{\text{out}} |\mcl{D}_{t-1}| K |\mcl{D}_{\mbf{z}}|\ (|\mcl{D}_{t-1}|^2 + |\mcl{D}_{\mbf{z}}| |\mcl{D}_{t-1}| + |\mcl{D}_{\mbf{z}}|^2 + M |\mcl{D}_{\mbf{z}}|))$, respectively.\footnote{$n_{\text{out}}$ and $n_{\text{in}}$ are the numbers of iterations for the outer and inner optimization respectively, $K$ is the number of fantasy GP models required for their one-step lookahead, and $M$ is the number of functions sampled from the GP posterior \cite{borisk20}.}
\subsection{On the Selection of $\mbf{z}_t$} \label{subsec:improvedselectz}
Although Algorithm~\ref{alg:v-ucb} is guaranteed to be no-regret with any choice of LV as $\mbf{z}_t$, we would like to select the LV that can reduce a large amount of the uncertainty of $V_{\alpha}(f(\mbf{x}_t,\mbf{Z}))$. However, relying on the information gain measure or the knowledge gradient method often incurs the expensive one-step lookahead. Therefore, we use a simple heuristic by choosing the LV $\mbf{z}_{\text{LV}}$ with the maximum probability mass (or probability density if $\mbf{Z}$ is continuous) of $\mbf{z}_{\text{LV}}$. We motivate this heuristic using an example with $\alpha = 0.2$, i.e., $V_{\alpha=0.2}(f(\mbf{x}_t,\mbf{Z})) = \inf \{\omega: P(f(\mbf{x}_t,\mbf{Z}) \le \omega) \ge 0.2\}$. Suppose $\mcl{D}_{\mbf{z}}$ is finite and there are $2$ LV's $\mbf{z}_0$ and $\mbf{z}_1$ with $P(\mbf{z}_0) \ge 0.2$ and $P(\mbf{z}_1) = 0.01$. Then, because $P(f(\mbf{x}_t,\mbf{Z}) \le f(\mbf{x}_t,\mbf{z}_0)) \ge P(\mbf{z}_0) \ge 0.2$, it follows that $V_{\alpha=0.2}(f(\mbf{x}_t,\mbf{Z})) \le f(\mbf{x}_t,\mbf{z}_0)$, i.e., querying $\mbf{z}_0$ at $\mbf{x}_t$ gives us information about an explicit upper bound on $V_{\alpha=0.2}(f(\mbf{x}_t,\mbf{Z}))$ to reduce its uncertainty. In contrast, this cannot be achieved by querying $\mbf{z}_1$ due to its low probability mass.
This simple heuristic can also be implemented when $\mbf{Z}$ is a continuous random variable which we will introduce in Section~\ref{sec:continuousz}.
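A sketch of this max-mass heuristic for a finite $\mcl{D}_{\mbf{z}}$, combining the LV search with the probability-mass tie-break (function names are illustrative; any returned LV would preserve the no-regret guarantee):

```python
import numpy as np

def discrete_var(vals, p, alpha):
    """Smallest w with P(X <= w) >= alpha for a discrete distribution."""
    order = np.argsort(vals)
    cum = np.cumsum(np.asarray(p)[order])
    return np.asarray(vals)[order][np.argmax(cum >= alpha)]

def select_z(l_z, u_z, p_z, alpha):
    """Among all lacing values, return the index with the largest
    probability mass (the heuristic of this section)."""
    v_l = discrete_var(l_z, p_z, alpha)
    v_u = discrete_var(u_z, p_z, alpha)
    lv = np.flatnonzero((l_z <= v_l) & (u_z >= v_u))
    return int(lv[np.argmax(p_z[lv])])

l = np.array([0.0, 0.0, 1.0])
u = np.array([2.0, 3.0, 1.0])
p = np.array([0.1, 0.3, 0.6])
print(select_z(l, u, p, alpha=0.3))  # -> 1 (both z_0 and z_1 are LVs;
                                     #       z_1 has the larger mass)
```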
\begin{remark} \label{rmk:unknownZ} Although we assume that we can select both $\mbf{x}_t$ and $\mbf{z}_t$ during our
algorithm, Corollary~\ref{corollary:loclv} also gives us some insights about the scenario where we cannot select $\mbf{z}_t$.
In this case, in each iteration $t$, we select $\mbf{x}_t$ while $\mbf{z}_t$ is randomly sampled by the environment following the distribution of $\mbf{Z}$. Next, we observe both $\mbf{z}_t$ and $y_{(\mbf{x}_t,\mbf{z}_t)}$ and then update the GP posterior belief of $f$. Of note, Corollary~\ref{corollary:loclv} has shown that all LV lie in the set $\mcl{Z}_l^{\le} \cap \mcl{Z}_u^{\ge}$. However, the probability of this set is usually small, because $P(\mbf{Z} \in \mcl{Z}_l^{\le} \cap \mcl{Z}_u^{\ge}) \le P(\mbf{Z} \in \mcl{Z}_l^{\le}) \le \alpha$ and small values of $\alpha$ are often used by real-world applications to manage risks. Thus, the probability that the sampled $\mbf{z}_t$ is an LV is small. As a result, we suggest sampling a large number of $\mbf{z}_t$'s from the environment to increase the chance that an LV is sampled.
On the other hand, the small probability of sampling an LV
motivates the need for us to select $\mbf{z}_t$.
\end{remark}
\subsection{{V-UCB} Approaches {\textsc{StableOpt}} as $\alpha \rightarrow 0^+$}
\label{subsec:vy-stableopt}
Recall that the objective of adversarially robust BO is to find $\mbf{x} \in \mcl{D}_{\mbf{x}}$ that maximizes $\min_{\mbf{z} \in \mcl{D}_{\mbf{z}}} f(\mbf{x}, \mbf{z})$ \cite{bogunovic2018adversarially} by iteratively specifying input query $(\mbf{x}_t, \mbf{z}_t)$ to collect noisy observations $y_{\mbf{x}_t,\mbf{z}_t}$. It is different from BO of {\textsc{VaR}} since its $\mbf{z}$ is not random but selected by an adversary who aims to minimize the function evaluation. The work of \citet{bogunovic2018adversarially} has proposed a no-regret algorithm for this setting named {\textsc{StableOpt}}, which selects
\begin{equation} \begin{array}{r@{}l} \mbf{x}_t &= \text{arg}\!\max_{\mbf{x} \in \mcl{D}_x} \min_{\mbf{z} \in \mcl{D}_z} u_{t-1}(\mbf{x},\mbf{z})\ ,\\ \mbf{z}_t &= \text{arg}\!\min_{\mbf{z} \in \mcl{D}_z} l_{t-1}(\mbf{x}_t,\mbf{z}) \end{array} \label{eq:stableopt} \end{equation}
where $u_{t-1}$ and $l_{t-1}$ are defined in \eqref{eq:fbound}.
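On a finite grid of candidates, the two-step selection rule above reduces to simple array reductions. The sketch below is illustrative only: \texttt{u} and \texttt{l} are assumed to hold precomputed values of $u_{t-1}$ and $l_{t-1}$ on the grid, and the function name is ours, not an official implementation.

```python
import numpy as np

def stableopt_select(u, l):
    """StableOpt's two-step selection on a finite grid.

    u, l: arrays of shape (n_x, n_z) holding the upper/lower confidence
    bounds u_{t-1}(x, z) and l_{t-1}(x, z).  Returns grid indices (i_x, i_z).
    """
    i_x = int(np.argmax(u.min(axis=1)))   # x_t = argmax_x min_z u(x, z)
    i_z = int(np.argmin(l[i_x]))          # z_t = argmin_z l(x_t, z)
    return i_x, i_z
```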
At first glance, BO of {\textsc{VaR}} and adversarially robust BO are seemingly different problems because $\mbf{Z}$ is a random variable in the former but not in the latter. However, based on our key observation on the connection between the minimum value of a continuous function $h(\mbf{w})$ and the {\textsc{VaR}} of the random variable $h(\mbf{W})$ in the following theorem, these two problems and their solutions are connected, as shown in Corollaries~\ref{corollary:alpha0lv} and~\ref{corollary:stableopt}.
\begin{theorem} \label{theorem:0plus} Let $\mbf{W}$ be a random variable with the support $\mcl{D}_w \subset \mbb{R}^{d_w}$ and dimension $d_w$.
Let $h$ be a continuous function mapping from $\mbf{w} \in \mcl{D}_w$ to $\mbb{R}$. Then, $h(\mbf{W})$ denotes the random variable whose realization is the function $h$ evaluation at a realization $\mbf{w}$ of $\mbf{W}$. Suppose $h(\mbf{w})$ has a minimizer $\mbf{w}_{\min} \in \mcl{D}_w$, then $\lim_{\alpha \rightarrow 0^+} V_{\alpha}(h(\mbf{W})) = h(\mbf{w}_{\min})\ .$ \end{theorem}
\begin{corollary} \label{corollary:alpha0lv} Adversarially robust BO which finds $\text{arg}\!\max_{\mbf{x}} \min_{\mbf{z}} f(\mbf{x},\mbf{z})$ can be cast as BO of {\textsc{VaR}} by letting (a) $\alpha$ approach $0$ from the right and (b) $\mcl{D}_{\mbf{z}}$ be the support of the environmental random variable $\mbf{Z}$, i.e., $\text{arg}\!\max_{\mbf{x}} \lim_{\alpha \rightarrow 0^+} V_\alpha(f(\mbf{x},\mbf{Z}))$. \end{corollary}
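The limit in Theorem~\ref{theorem:0plus} can be illustrated with a Monte Carlo sanity check. The snippet below is a sketch under assumed choices ($h(w) = \cos(w)$ and $W \sim \mathrm{Uniform}[-1,1]$, so $h(\mbf{w}_{\min}) = \cos(1) \approx 0.5403$); \texttt{var\_alpha} is a simple empirical lower quantile, not the paper's estimator.

```python
import numpy as np

def var_alpha(samples, alpha):
    """Empirical lower alpha-quantile (VaR) of a sample array."""
    s = np.sort(samples)
    k = max(int(np.ceil(alpha * len(s))) - 1, 0)
    return s[k]

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=200_000)  # W ~ Uniform[-1, 1]
h = np.cos(w)                             # h continuous; min over the support is cos(1)

for a in (0.5, 0.1, 0.01, 0.001):
    # the estimates decrease towards cos(1) ~ 0.5403 as a -> 0+
    print(a, var_alpha(h, a))
```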
Interestingly, from Theorem~\ref{theorem:0plus}, we observe that $\mcl{Z}_l^\le$ in Corollary~\ref{corollary:loclv} approaches
the set of minimizers $\text{arg}\!\min_{\mbf{z} \in \mcl{D}_{\mbf{z}}} l_{t-1}(\mbf{x}_t,\mbf{z})$ as $\alpha \rightarrow 0^+$.
Corollary \ref{corollary:stableopt} below shows that an LV w.r.t $\mbf{x}_t$ becomes a minimizer of $l_{t-1}(\mbf{x}_t,\mbf{z})$, which is the same as the $\mbf{z}_t$ selected by {\textsc{StableOpt}} in \eqref{eq:stableopt}.
\begin{corollary} \label{corollary:stableopt} The {\textsc{StableOpt}} solution to adversarially robust BO selects the same input query as that selected by the {V-UCB} solution to the corresponding BO of {\textsc{VaR}} in Corollary~\ref{corollary:alpha0lv}. \end{corollary}
The proofs of Theorem~\ref{theorem:0plus} and its two corollaries are given in Appendix~\ref{app:0plus}. We note that {V-UCB} is also applicable to the optimization of $V_{\alpha}(f(\mbf{x},\mbf{Z}))$ where the distribution of $\mbf{Z}$ is a conditional distribution given $\mbf{x}$. For example, in robotics, if there exists noise/perturbation in the control, an optimization problem of interest is $V_{\alpha}(f(\mbf{x} + \bm{\xi}(\mbf{x})))$ where $\bm{\xi}(\mbf{x})$ is the random perturbation that depends on $\mbf{x}$.
\subsection{Implementation of V-UCB with Continuous Random Variable $\mbf{Z}$} \label{sec:continuousz}
The V-UCB algorithm involves two steps: selecting $\mbf{x}_t = \text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ and selecting $\mbf{z}_t$ as the LV $\mbf{z}_{\text{LV}}$ with the largest probability mass (or probability density). When $|\mcl{D}_{\mbf{z}}|$ is finite, given $\mbf{x}$, $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ can be computed exactly. The gradient of $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ with respect to $\mbf{x}$ can be obtained
easily (e.g., using automatic differentiation provided in the Tensorflow library \cite{tensorflow2015-whitepaper}) to aid the selection of $\mbf{x}_t$.
In this case, the latter step can also be performed by constructing the set of all LV (checking the condition in Definition~\ref{definition:lv} for all $\mbf{z} \in \mcl{D}_{\mbf{z}}$) and choosing the LV $\mbf{z}_{\text{LV}}$ with the largest probability mass.
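For finite $\mcl{D}_{\mbf{z}}$, both steps admit a direct implementation: the exact {\textsc{VaR}} is the smallest value whose cumulative probability reaches $\alpha$, and the LV with the largest probability mass is found by checking the LV condition at every $\mbf{z}$. The sketch below uses our own function names and plain arrays; with the GP bounds $u_{t-1}, l_{t-1}$, Theorem~\ref{theorem:lv} guarantees a non-empty candidate set, whereas arbitrary arrays need not satisfy this.

```python
import numpy as np

def var_finite(values, probs, alpha):
    """Exact VaR_alpha = inf{w : P(V <= w) >= alpha} of a random variable
    taking `values` with probabilities `probs` (finite support)."""
    values, probs = np.asarray(values), np.asarray(probs)
    order = np.argsort(values)
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, alpha))  # first index with cum >= alpha
    return values[order][k]

def choose_lv(u_vals, l_vals, probs, alpha):
    """Index of the LV with the largest probability mass: z is an LV iff
    l_{t-1}(x, z) <= VaR(l_{t-1}) and u_{t-1}(x, z) >= VaR(u_{t-1})."""
    vu = var_finite(u_vals, probs, alpha)
    vl = var_finite(l_vals, probs, alpha)
    cand = np.flatnonzero((np.asarray(l_vals) <= vl) & (np.asarray(u_vals) >= vu))
    return int(cand[np.argmax(np.asarray(probs)[cand])])
```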
{\bf{Estimation of {\textsc{VaR}}.}} When $\mbf{Z}$ is a continuous random variable, estimating {\textsc{VaR}} by an ordered set of samples (e.g., in \citet{borisk20}) may require a prohibitively large number of samples, especially for small values of $\alpha$. Thus, we employ the following popular pinball (or tilted absolute value) function in quantile regression \cite{koenker1978regression} to estimate {\textsc{VaR}} as a lower $\alpha$-quantile:
\begin{equation*} \rho_{\alpha}(w) \triangleq \begin{cases}
\alpha w &\text{if } w \ge 0\ ,\\
(\alpha - 1) w &\text{if } w < 0 \end{cases} \end{equation*}
where $w \in \mbb{R}$. In particular, to estimate $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ as $\nu \in \mbb{R}$, we find $\nu$ that minimizes:
\begin{equation}
\mbb{E}_{\mbf{z} \sim p(\mbf{Z})}[\rho_{\alpha}(u_{t-1}(\mbf{x},\mbf{z}) - \nu)]\ .
\label{eq:lossfindvar} \end{equation}
A well-known special case is $\alpha = 0.5$: $\rho_{\alpha}$ is then proportional to the absolute value function, and the optimal $\nu$ is the median.
The loss in \eqref{eq:lossfindvar} can be optimized using stochastic gradient descent with a random batch of samples of $\mbf{Z}$ at each optimization iteration.
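A minimal numpy sketch of this estimator follows (no automatic differentiation; the subgradient of $\rho_\alpha$ is coded by hand, and \texttt{sample\_u} is an assumed stand-in for drawing values of $u_{t-1}(\mbf{x},\mbf{z})$ with $\mbf{z} \sim p(\mbf{Z})$):

```python
import numpy as np

def var_by_pinball(sample_u, alpha, lr=0.05, n_iters=2000, batch=256, seed=0):
    """Estimate VaR_alpha of u_{t-1}(x, Z) by minimizing the pinball loss
    E[rho_alpha(u - nu)] over nu with stochastic (sub)gradient descent.
    `sample_u(rng, n)` draws n values of u_{t-1}(x, z) with z ~ p(Z)."""
    rng = np.random.default_rng(seed)
    nu = 0.0
    for _ in range(n_iters):
        w = sample_u(rng, batch) - nu
        # d rho_alpha(w) / d nu is -alpha where w >= 0 and (1 - alpha) where w < 0
        nu -= lr * np.where(w >= 0, -alpha, 1.0 - alpha).mean()
    return nu
```

For instance, with standard normal samples and $\alpha = 0.1$, the estimate should settle near the $0.1$-quantile of $\mcl{N}(0,1)$, about $-1.28$.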
{\bf{Maximization of $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$.}} Unfortunately, to maximize $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ over $\mbf{x} \in \mcl{D}_{\mbf{x}}$, there is no gradient of $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ with respect to $\mbf{x}$ under the above approach. This situation resembles BO where there is no gradient information, but only noisy observations at input queries. Unlike BO, the observation (samples of $u_{t-1}(\mbf{x},\mbf{Z})$ at $\mbf{x}$) is not costly. Therefore, we propose the \emph{local neural surrogate optimization} (LNSO) algorithm to find $\text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ which is visualized in Fig.~\ref{fig:lnso}. Suppose the optimization is initialized at $\mbf{x} = \mbf{x}^{(0)}$, instead of maximizing $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ (whose gradient w.r.t. $\mbf{x}$ is unknown), we maximize a surrogate function $g(\mbf{x},\bm{\theta}^{(0)})$ (modeled by a neural network) that approximates $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ well in a local region of $\mbf{x}^{(0)}$, e.g., a ball $\mcl{B}(\mbf{x}^{(0)}, r)$ of radius $r$ in Fig.~\ref{fig:lnso}. We obtain such parameters $\bm{\theta}^{(0)}$ by minimizing the following loss function:
\begin{equation} \begin{array}{l} \mcl{L}_g(\bm{\theta},\mbf{x}^{(0)})\\ \ \displaystyle\triangleq \mbb{E}_{\mbf{x} \in \mcl{B}(\mbf{x}^{(0)}, r)} \mbb{E}_{\mbf{z} \sim p(\mbf{Z})} \left[ \rho_\alpha(u_{t-1}(\mbf{x},\mbf{z}) - g(\mbf{x};\bm{\theta}))\right] \end{array} \label{eq:losslnso} \end{equation}
where the expectation $\mbb{E}_{\mbf{x} \in \mcl{B}(\mbf{x}^{(0)}, r)}$ is taken over a uniformly distributed $\mbf{X}$ in $\mcl{B}(\mbf{x}^{(0)}, r)$. Minimizing \eqref{eq:losslnso} can be performed with stochastic gradient descent. If maximizing $g(\mbf{x},\bm{\theta}^{(0)})$ leads to a value $\mbf{x}^{(i)} \notin \mcl{B}(\mbf{x}^{(0)}, r)$ (Fig.~\ref{fig:lnso}), we update the local region to be centered at $\mbf{x}^{(i)}$ ($\mcl{B}(\mbf{x}^{(i)}, r)$) and find $\bm{\theta}^{(i)} = \text{arg}\!\min_{\bm{\theta}} \mcl{L}_g(\bm{\theta}, \mbf{x}^{(i)})$ such that $g(\mbf{x},\bm{\theta}^{(i)})$ approximates $V_{\alpha}(u_{t-1}(\mbf{x},\mbf{Z}))$ well $\forall \mbf{x} \in \mcl{B}(\mbf{x}^{(i)},r)$. Then, $\mbf{x}^{(i)}$ is updated by maximizing $g(\mbf{x},\bm{\theta}^{(i)})$ for $\mbf{x} \in \mcl{B}(\mbf{x}^{(i)},r)$. The complete algorithm is described in Appendix~\ref{app:lnso}.
We prefer a small value of $r$ so that the ball $\mcl{B}$ is small. In such case, $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ for $\mbf{x} \in \mcl{B}$ can be estimated well with a small neural network $g(\mbf{x},\bm{\theta})$ whose training requires a small number of iterations.
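The loop structure of LNSO can be sketched as follows for $1$-dimensional $\mbf{x}$. To keep the sketch short, the neural surrogate $g(\mbf{x},\bm{\theta})$ is replaced by a local quadratic fitted by least squares to empirical lower-quantile targets (rather than by SGD on the pinball loss); the re-centering of the ball $\mcl{B}(\cdot, r)$ is as described above. \texttt{u} and \texttt{sample\_z} are assumed stand-ins for $u_{t-1}$ and sampling $\mbf{z} \sim p(\mbf{Z})$.

```python
import numpy as np

def lnso_maximize(u, sample_z, alpha, x0, r=0.1, bounds=(0.0, 1.0),
                  n_x=25, n_z=2000, outer_steps=30, seed=0):
    """Simplified LNSO loop for 1-D x: fit a local surrogate of
    VaR_alpha(u(x, Z)) on the ball B(c, r), maximize it, and re-center
    the ball whenever the maximizer moves."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for _ in range(outer_steps):
        c = x                                     # center of the ball B(c, r)
        lo, hi = max(bounds[0], c - r), min(bounds[1], c + r)
        xs = np.linspace(lo, hi, n_x)
        zs = sample_z(rng, n_z)
        # empirical lower alpha-quantile of u(x, Z) at each x in the ball
        targets = np.quantile(u(xs[:, None], zs[None, :]), alpha, axis=1)
        coef = np.polyfit(xs, targets, deg=2)     # local quadratic surrogate
        # maximize the surrogate over [lo, hi]: interior vertex or an endpoint
        cands = [lo, hi]
        if coef[0] < 0:
            cands.append(float(np.clip(-coef[1] / (2.0 * coef[0]), lo, hi)))
        x = max(cands, key=lambda v: np.polyval(coef, v))
        if abs(x - c) < 1e-6:                     # settled inside the ball
            break
    return float(x)
```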
\begin{figure}
\caption{Plot of a hypothetical optimization path (as arrows) of LNSO initialized at $\mbf{x}^{(0)}$. Input $\mbf{x}$ is $2$-dimensional. The boundary of a ball $\mcl{B}$ of radius $r$ is plotted as a dotted circle. When the updated $\mbf{x}$ moves out of $\mcl{B}$, the center of $\mcl{B}$ and $\bm{\theta}$ are updated.}
\label{fig:lnso}
\end{figure}
{\bf{Search of Lacing Values.}} Given a continuous random variable $\mbf{Z}$, to find an LV w.r.t $\mbf{x}_t$ in line 4 of Algorithm~\ref{alg:v-ucb}, i.e., to find a $\mbf{z}$ satisfying $d_u(\mbf{z}) \triangleq u_{t-1}(\mbf{x}_t, \mbf{z}) - V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) \ge 0$ and $d_l(\mbf{z}) \triangleq V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z})) - l_{t-1}(\mbf{x}_t, \mbf{z}) \ge 0$, we choose a $\mbf{z}$ that minimizes
\begin{equation} \mcl{L}_{\text{LV}}(\mbf{z}) \triangleq \text{ReLU}(-d_u(\mbf{z})) + \text{ReLU}(-d_l(\mbf{z})) \label{eq:llv} \end{equation}
where $\text{ReLU}(\omega) = \max(\omega,0)$ is the rectified linear unit function ($\omega \in \mbb{R}$). To include the heuristic in Section~\ref{subsec:improvedselectz} which selects the LV with the highest probability density, we find $\mbf{z}$ that minimizes
\begin{equation*} \mcl{L}_{\text{LV-P}}(\mbf{z}) \triangleq \mcl{L}_{\text{LV}}(\mbf{z}) - \mbb{I}_{d_u(\mbf{z}) \ge 0} \mbb{I}_{ d_l(\mbf{z}) \ge 0}\ p(\mbf{z}) \end{equation*}
where $\mcl{L}_{\text{LV}}(\mbf{z})$ is defined in \eqref{eq:llv}; $p(\mbf{z})$ is the probability density; $\mbb{I}_{d_u(\mbf{z}) \ge 0}$ and $\mbb{I}_{ d_l(\mbf{z}) \ge 0}$ are indicator functions.
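These two losses can be sketched directly; the callables \texttt{du}, \texttt{dl}, and \texttt{pdf} below are assumptions standing in for the GP-based margins $d_u$, $d_l$ and the density of $\mbf{Z}$:

```python
import numpy as np

def relu(w):
    return np.maximum(w, 0.0)

def lv_losses(z, du, dl, pdf):
    """Return (L_LV, L_LV-P) at z.  du(z) and dl(z) are the margins
    u_{t-1}(x_t, z) - VaR(u_{t-1}(x_t, Z)) and
    VaR(l_{t-1}(x_t, Z)) - l_{t-1}(x_t, z); pdf is the density of Z."""
    l_lv = relu(-du(z)) + relu(-dl(z))         # zero iff z is an LV
    feasible = (du(z) >= 0) & (dl(z) >= 0)     # product of indicators
    return l_lv, l_lv - feasible * pdf(z)      # density-weighted variant
```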
\section{Experiments} \label{sec:experiments}
In this section, we empirically evaluate the performance of {V-UCB}. The work of \citet{borisk20} has motivated the use of the approximated variant of their algorithm {$\rho \text{KG}^{apx}$} over its original version {$\rho \text{KG}$} by showing that {$\rho \text{KG}^{apx}$} achieves comparable empirical performances to {$\rho \text{KG}$} while incurring much less computational cost. Furthermore, {$\rho \text{KG}^{apx}$} has been shown to significantly outperform other competing algorithms \cite{borisk20}.
Therefore, we choose {$\rho \text{KG}^{apx}$} as the main baseline to empirically compare with {V-UCB}. The experiments using {$\rho \text{KG}^{apx}$} are performed by
adding new objective functions to the existing implementation of \citet{borisk20} at \texttt{https://github.com/saitcakmak/BoRisk}.
Regarding {V-UCB}, when $\mcl{D}_{\mbf{z}}$ is finite and the distribution of $\mbf{Z}$ is not uniform, we perform {V-UCB} by selecting $\mbf{z}_t$ as an LV at random, labeled as \emph{{V-UCB} Unif}, and by selecting $\mbf{z}_t$ as the LV with the maximum probability mass, labeled as \emph{{V-UCB} Prob}.
The performance metric is defined as $V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_\alpha(f(\tilde{\mbf{x}},\mbf{Z}))$ where $\tilde{\mbf{x}}$ is the recommended input. The evaluation of {\textsc{VaR}} is described in Section~\ref{sec:continuousz}. The recommended input is $\text{arg}\!\max_{\mbf{x} \in \mcl{D}_T} V_{\alpha}(\mu_{t-1}(\mbf{x}, \mbf{Z}))$ for {V-UCB}, and $\text{arg}\!\min_{\mbf{x} \in \mcl{D}_{\mbf{x}}} \mbb{E}_{t-1}[V_{\alpha}(f(\mbf{x}, \mbf{Z}))]$ for {$\rho \text{KG}^{apx}$} \cite{borisk20}, where $\mbb{E}_{t-1}$ is the conditional expectation over the unknown $f$ given the observations $\mcl{D}_{t-1}$ (approximated by a finite set of functions sampled from the GP posterior belief).\footnote{While the work of \citet{borisk20} considers a minimization problem of {\textsc{VaR}}, our work considers a maximization problem of {\textsc{VaR}}. Therefore, the objective functions for {$\rho \text{KG}^{apx}$} are the negation of those for {V-UCB}. For {V-UCB} at risk level $\alpha$,
the risk level for {$\rho \text{KG}^{apx}$} is $1-\alpha$.}
We repeat each experiment $10$ times with different random initial observations $\mbf{y}_{\mcl{D}_0}$ and plot both the mean (as lines) and the $70\%$ confidence interval (as shaded areas) of the $\log_{10}$ of the performance metric. The detailed descriptions of experiments are deferred to Appendix~\ref{app:experiment}.
\subsection{Synthetic Benchmark Functions} \label{subsec:syn}
We use $3$ synthetic benchmark functions, namely Branin-Hoo, Goldstein-Price, and Hartmann-3D, to construct $4$ optimization problems: Branin-Hoo-$(1,1)$, Goldstein-Price-$(1,1)$, Hartmann-$(1,2)$, and Hartmann-$(2,1)$. The tuples represent $(d_x, d_z)$ corresponding to the dimensions of $\mbf{x}$ and $\mbf{z}$.
The noise variance $\sigma_n^2$ is set to $0.01$. The risk level $\alpha$ is $0.1$.
There are $2$ different settings: finite $\mcl{D}_{\mbf{z}}$ ($|\mcl{D}_{\mbf{z}}| = 64$ for Hartmann-$(1,2)$ and $|\mcl{D}_{\mbf{z}}|=100$ for the others) and continuous $\mcl{D}_{\mbf{z}}$. In the latter setting, $r=0.1$ and the surrogate function is a neural network of $2$ hidden layers with $30$ hidden neurons each, and sigmoid activation functions.
The results are shown in Fig.~\ref{fig:synfinite} and Fig.~\ref{fig:syncont} for the settings of discrete $\mcl{D}_{\mbf{z}}$ and continuous $\mcl{D}_{\mbf{z}}$, respectively. When $\mcl{D}_{\mbf{z}}$ is discrete (Fig.~\ref{fig:synfinite}), {V-UCB} Unif is on par with {$\rho \text{KG}^{apx}$} in optimizing Branin-Hoo-$(1,1)$ and Goldstein-Price-$(1,1)$, and outperforms {$\rho \text{KG}^{apx}$} in optimizing Hartmann-$(1,2)$ and Hartmann-$(2,1)$. {V-UCB} Prob is also able to exploit the probability distribution of $\mbf{Z}$ to outperform {V-UCB} Unif. When $\mcl{D}_{\mbf{z}}$ is continuous (Fig.~\ref{fig:syncont}), {V-UCB} Prob outperforms {$\rho \text{KG}^{apx}$}. The unsatisfactory performance of {$\rho \text{KG}^{apx}$} in some experiments may be attributed to its approximation of the inner optimization problem in the acquisition function \cite{borisk20}, and the approximation of {\textsc{VaR}} using samples of $\mbf{Z}$ and the GP posterior belief.
\newcommand\figheight{0.201}
\begin{figure}\label{fig:synfinite}
\end{figure}
\begin{figure}\label{fig:syncont}
\end{figure}
\subsection{Simulated Optimization Problems} \label{subsec:simulated}
The first problem is portfolio optimization adopted from \citet{borisk20}.
There are $d_x = 3$ optimization variables (risk and trade aversion parameters, and the holding cost multiplier) and $d_z = 2$ environmental random variables (bid-ask spread and borrow cost). The variable $\mbf{Z}$ follows a discrete uniform distribution with $|\mcl{D}_{\mbf{z}}| = 100$. Hence, there is no difference between {V-UCB} Unif and {V-UCB} Prob, so we only report the results of the latter. The objective function is the posterior mean of a trained GP on the dataset in \citet{borisk20} of size $3000$ generated from CVXPortfolio. The noise variance $\sigma_n^2$ is set to $0.01$. The risk level $\alpha$ is set to $0.2$.
The second problem is a simulated robot pushing task for which we use the implementation from the work of \citet{wang17mes}.
The simulation is viewed as a $3$-dimensional function $\mbf{h}(r_x, r_y, t_p)$ returning the 2-D location of the pushed object, where $r_x,r_y \in [-5,5]$ are the robot location and $t_p \in [1,30]$ is the pushing duration.
The objective is to minimize the distance to a fixed goal location $\mbf{g}=(g_x,g_y)^\top$, i.e., the objective function of the maximization problem is $f_0(r_x, r_y, t_p) = -\Vert \mbf{h}(r_x, r_y, t_p) - \mbf{g}\Vert$. We assume that there are perturbations in specifying the robot location $W_x, W_y$ whose support $\mcl{D}_{\mbf{z}}$ includes $64$ equi-distant points in $[-1,1]^2$ and whose probability mass is proportional to $\exp(-(W_x^2 + W_y^2) / 0.4^2)$. It leads to a random objective function $f(r_x, r_y, t_p, W_x, W_y) \triangleq f_0(r_x + W_x, r_y + W_y, t_p)$. We aim to maximize the {\textsc{VaR}} of $f$ which is more difficult than maximizing that of $f_0$. Moreover, a random noise following $\mcl{N}(0,0.01)$ is added to the evaluation of $f$.
The risk level $\alpha$ is set to $0.1$.
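The perturbation distribution described above can be constructed as follows (a sketch of the experimental setup; the variable names are our own):

```python
import numpy as np

# 8 x 8 = 64 equi-distant perturbation points in [-1, 1]^2, with probability
# mass proportional to exp(-(Wx^2 + Wy^2) / 0.4^2), as in the setup above.
grid = np.linspace(-1.0, 1.0, 8)
wx, wy = np.meshgrid(grid, grid)
support = np.stack([wx.ravel(), wy.ravel()], axis=1)   # D_z, |D_z| = 64
mass = np.exp(-(support[:, 0]**2 + support[:, 1]**2) / 0.4**2)
probs = mass / mass.sum()                              # normalize to a pmf

def f(f0, rx, ry, tp, w):
    """Random objective f(rx, ry, tp, Wx, Wy) = f0(rx + Wx, ry + Wy, tp)."""
    return f0(rx + w[0], ry + w[1], tp)
```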
The results are shown in Fig.~\ref{fig:portrobot}. We observe that {V-UCB} outperforms {$\rho \text{KG}^{apx}$} in both problems. Furthermore, in comparison to our synthetic experiments, the difference between {V-UCB} Unif and {V-UCB} Prob is not significant in the robot pushing experiment. This is because the chance that a uniform sample of LV has a large probability mass is higher in the robot pushing experiment due to a larger region of $\mcl{D}_{\mbf{z}}$ having high probabilities.
\begin{figure}
\caption{Simulated experiments.}
\label{fig:portrobot}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
To tackle the BO of {\textsc{VaR}} problem, we construct a no-regret algorithm, namely \emph{{\textsc{VaR}} upper confidence bound} (V-UCB), through the design of a confidence bound of {\textsc{VaR}} and a set of \emph{lacing values} (LV) that is guaranteed to exist. Besides, we introduce a heuristic to select an LV that improves the empirical performance of {V-UCB} over random selection of LV. We also draw
an elegant connection between BO of {\textsc{VaR}} and adversarially robust BO in terms of both problem formulation and solutions. Lastly, we provide practical techniques for implementing {V-UCB} with continuous $\mbf{Z}$. While {V-UCB} is more computationally efficient than the state-of-the-art {$\rho \text{KG}^{apx}$} algorithm for BO of {\textsc{VaR}}, it also demonstrates competitive empirical performances in our experiments.
\appendix
\section{Proof of Lemma~\ref{lemma:confbound}} \label{app:proofvconfbound}
{\bf{Lemma~\ref{lemma:confbound}.}} Similar to the definition of $f(\mbf{x},\mbf{Z})$, let $l_{t-1}(\mbf{x}, \mbf{Z})$ and $u_{t-1}(\mbf{x},\mbf{Z})$ denote the random functions over $\mbf{x}$ where the randomness comes from the random variable $\mbf{Z}$; $l_{t-1}$ and $u_{t-1}$ are defined in \eqref{eq:fbound}. Then, $\forall \mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$, $\alpha \in (0,1)$,
\[ \begin{array}{r@{}l} \displaystyle V_\alpha(f(\mbf{x},\mbf{Z})) \displaystyle &\displaystyle \in I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))]\\
&\displaystyle \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))] \end{array} \] holds with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}, where $V_\alpha(l_{t-1}(\mbf{x},\mbf{Z}))$ and $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ are defined as \eqref{eq:var}.
\begin{proof} Conditioned on the event $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$ for all $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$ which occurs with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}, we will prove that $V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) \le V_\alpha(f(\mbf{x},\mbf{Z}))$. The proof of $V_\alpha(f(\mbf{x},\mbf{Z})) \le V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ can be done in a similar manner.
From $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$ for all $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$ we have $\forall \mbf{x} \in \mcl{D}_{\mbf{x}}, \mbf{z} \in \mcl{D}_{\mbf{z}}, t \ge 1,$
\begin{align*} f(\mbf{x},\mbf{z}) &\ge l_{t-1}(\mbf{x},\mbf{z})\ . \end{align*}
Therefore, for all $\omega \in \mbb{R}$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$,
\begin{align*} f(\mbf{x},\mbf{z}) \le \omega &\Rightarrow l_{t-1}(\mbf{x},\mbf{z}) \le \omega\ ,\\ \text{and thus}\quad P(f(\mbf{x},\mbf{Z}) \le \omega) &\le P(l_{t-1}(\mbf{x},\mbf{Z}) \le \omega)\ . \end{align*}
So, for all $\omega \in \mbb{R}$, $\alpha \in (0,1)$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$,
\begin{align*} P(f(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha \Rightarrow P(l_{t-1}(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\ . \end{align*}
Hence, the set $\{\omega: P(f(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\}$ is a subset of $\{\omega: P(l_{t-1}(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\}$ for all $\alpha \in (0,1)$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$, which implies that $\inf\{\omega: P(f(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\} \ge \inf\{\omega: P(l_{t-1}(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\}$, i.e., $V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) \le V_\alpha(f(\mbf{x},\mbf{Z}))$ for all $\alpha \in (0,1)$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$. \end{proof}
\section{Proof of \eqref{eq:iregretbound1}} \label{app:iregretbound1}
We prove that \begin{align*} r(\mbf{x}_t) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))\ \forall t \ge 1 \end{align*}
which holds with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}.
\begin{proof} Conditioned on the event $V_\alpha(f(\mbf{x},\mbf{Z})) \in I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))] \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))]$ for all $\alpha \in (0,1)$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $t \ge 1$, which occurs with probability $\ge 1 - \delta$ in Lemma~\ref{lemma:confbound},
\begin{align*} V_\alpha(f(\mbf{x}_*,\mbf{Z})) &\le V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z}))\\ V_\alpha(f(\mbf{x}_t,\mbf{Z})) &\ge V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))\ . \end{align*}
Hence,
\begin{align} r(\mbf{x}_t) &\triangleq V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_\alpha(f(\mbf{x}_t,\mbf{Z}))\nonumber\\
&\le V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))\ .\label{eq:inter1irb1} \end{align}
Since $\mbf{x}_t$ is selected as $\text{arg}\!\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$,
\begin{align*} V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z})) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z}))\ , \end{align*}
equivalently, $V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z})) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))$. Hence, from \eqref{eq:inter1irb1}, $r(\mbf{x}_t) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))$ for all $\alpha \in (0,1)$ and $t \ge 1$. \end{proof}
\section{Proof of Theorem~\ref{theorem:lv}} \label{app:prooflv}
{\bf{Theorem~\ref{theorem:lv}}.} $\forall \alpha \in (0,1)$, $\forall \mbf{x} \in \mcl{D}_{\mbf{x}}$, $\forall t \ge 1$, there exists a lacing value in $\mcl{D}_{\mbf{z}}$ with respect to $\mbf{x}$ and $t$.
\begin{proof} Recall that
\begin{align*} \mcl{Z}_l^{\le}&\triangleq \{\mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) \le V_{\alpha}(l_{t-1}(\mbf{x}, \mbf{Z}))\}\ . \end{align*}
From the definitions of $\mcl{Z}_l^\le$ and $V_{\alpha}(l_{t-1}(\mbf{x},\mbf{Z}))$, we have
\begin{align} P(\mbf{Z} \in \mcl{Z}_l^\le) \ge \alpha\ . \label{eq:pzl} \end{align}
Since $\alpha \in (0,1)$, $\mcl{Z}_l^\le \neq \emptyset$. We prove the existence of LV by contradiction: (a) assuming that $\exists \mbf{x} \in \mcl{D}_{\mbf{x}}, \exists t \ge 1, \forall \mbf{z} \in \mcl{Z}_l^{\le}, u_{t-1}(\mbf{x}, \mbf{z}) < V_{\alpha}(u_{t-1}(\mbf{x}, \mbf{Z}))$ and then, (b) proving that $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ is not a lower bound of $\{\omega: P(u_{t-1}(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\}$ which is a contradiction.
Since the GP posterior mean $\mu_{t-1}$ and posterior standard deviation $\sigma_{t-1}$ are continuous functions in $\mcl{D}_{\mbf{x}} \times \mcl{D}_{\mbf{z}}$, $l_{t-1}$ and $u_{t-1}$ are continuous functions in the closed $\mcl{D}_{\mbf{z}} \subset \mbb{R}^{d_z}$ ($\mbf{x}$ and $t$ are given and remain fixed in this proof). We will prove that $\mcl{Z}_l^\le$ is closed in $\mbb{R}^{d_z}$ by contradiction.
If $\mcl{Z}_l^\le$ is not closed in $\mbb{R}^{d_z}$, there exists a limit point $\mbf{z}_p$ of $\mcl{Z}_l^\le$ such that $\mbf{z}_p \notin \mcl{Z}_l^\le$. Since $\mcl{Z}_l^\le \subset \mcl{D}_{\mbf{z}}$ and $\mcl{D}_{\mbf{z}}$ is closed in $\mbb{R}^{d_z}$, $\mbf{z}_p \in \mcl{D}_{\mbf{z}}$. Thus, since $\mbf{z}_p \notin \mcl{Z}_l^\le$, $l_{t-1}(\mbf{x},\mbf{z}_p) > V_\alpha(l_{t-1}(\mbf{x},\mbf{Z}))$ (from the definition of $\mcl{Z}_l^\le$). Then, there exists $\epsilon_0 > 0$ such that $l_{t-1}(\mbf{x},\mbf{z}_p) > V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) + \epsilon_0$. The pre-image of the open interval $I_0 = (l_{t-1}(\mbf{x},\mbf{z}_p) - \epsilon_0/2, l_{t-1}(\mbf{x},\mbf{z}_p) + \epsilon_0/2)$ under $l_{t-1}$ is also an open set $\mcl{V}$ containing $\mbf{z}_p$ (because $l_{t-1}$ is a continuous function). Since $\mbf{z}_p$ is a limit point of $\mcl{Z}_l^\le$, there exists a $\mbf{z}' \in \mcl{Z}_l^\le \cap \mcl{V}$. Then, $l_{t-1}(\mbf{x},\mbf{z}') \in I_0$, so $l_{t-1}(\mbf{x},\mbf{z}') > l_{t-1}(\mbf{x},\mbf{z}_p) - \epsilon_0/2 > V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) + \epsilon_0 - \epsilon_0 /2 = V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) + \epsilon_0/2$, which contradicts $\mbf{z}' \in \mcl{Z}_l^\le$.
Therefore, $\mcl{Z}_l^\le$ is a closed set in $\mbb{R}^{d_z}$. Besides, since $\{u_{t-1}(\mbf{x},\mbf{z}): \mbf{z} \in \mcl{Z}_l^\le\}$ is upper bounded by $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ (due to our assumption), $\sup\{ u_{t-1}(\mbf{x},\mbf{z}): \mbf{z} \in \mcl{Z}_l^\le \}$ exists. Let $\mbf{z}_l^+$ be such that $u_{t-1}(\mbf{x},\mbf{z}_l^+) = \sup\{ u_{t-1}(\mbf{x},\mbf{z}): \mbf{z} \in \mcl{Z}_l^\le \}$; such $\mbf{z}_l^+$ belongs to $\mcl{Z}_l^\le$ because $\mcl{Z}_l^\le$ is closed and $u_{t-1}$ is continuous.
Moreover, from our assumption that $\forall \mbf{z} \in \mcl{Z}_l^{\le}, u_{t-1}(\mbf{x}, \mbf{z}) < V_{\alpha}(u_{t-1}(\mbf{x}, \mbf{Z}))$, we have $u_{t-1}(\mbf{x},\mbf{z}_l^+) < V_{\alpha}(u_{t-1}(\mbf{x}, \mbf{Z}))$. Furthermore,
\begin{align*}
P(u_{t-1}(\mbf{x}, \mbf{Z}) \le u_{t-1}(\mbf{x}, \mbf{z}_l^+)) \ge P(\mbf{Z} \in \mcl{Z}_l^{\le}) \ge \alpha \end{align*}
where the first inequality is because $u_{t-1}(\mbf{x},\mbf{z}_l^+) = \sup\{ u_{t-1}(\mbf{x},\mbf{z}): \mbf{z} \in \mcl{Z}_l^\le \}$ and the last inequality is from \eqref{eq:pzl}. Hence, $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ is not a lower bound of $\{\omega: P(u_{t-1}(\mbf{x},\mbf{Z}) \le \omega) \ge \alpha\}$, which contradicts its definition as the infimum of this set. \end{proof}
\section{Proof of Lemma~\ref{lemma:iregretbound2}} \label{app:iregretbound2}
{\bf{Lemma~\ref{lemma:iregretbound2}.}} By selecting $\mbf{x}_t$ as a maximizer of $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))$ and selecting $\mbf{z}_t$ as an LV w.r.t $\mbf{x}_t$, the instantaneous regret is upper-bounded by: \begin{equation*} r(\mbf{x}_t) \le 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t, \mbf{z}_t)\ \forall t \ge 1 \end{equation*} with probability $\ge 1 - \delta$ for $\delta$ in Lemma~\ref{lemma:ucb51}.
\begin{proof} Conditioned on the event $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$ for all $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$ which occurs with probability $\ge 1 - \delta$ in Lemma~\ref{lemma:ucb51}, it follows that $V_\alpha(f(\mbf{x},\mbf{Z})) \in I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))] \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))]$ for all $\alpha \in (0,1)$, $\mbf{x} \in \mcl{D}_{\mbf{x}}$, and $t \ge 1$ in Lemma~\ref{lemma:confbound}.
From \eqref{eq:iregretbound1}, by selecting $\mbf{z}_t$ as an LV, for all $t \ge 1$,
\begin{align*} r(\mbf{x}_t) &\le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z}))\\
&\le u_{t-1}(\mbf{x}_t, \mbf{z}_t) - l_{t-1}(\mbf{x}_t, \mbf{z}_t) \text{ (since } \mbf{z}_t \text{ is an LV)}\\
&\le \mu_{t-1}(\mbf{x}_t,\mbf{z}_t) + \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t) \\
&\quad- \mu_{t-1}(\mbf{x}_t,\mbf{z}_t) + \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t)\\
&= 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t)\ . \end{align*} \end{proof}
\section{Bound on the Average Cumulative Regret} \label{app:rtbound}
Conditioned on the event $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$ for all $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$ which occurs with probability $\ge 1 - \delta$ in Lemma~\ref{lemma:ucb51}, it follows that $r(\mbf{x}_t) \le 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t, \mbf{z}_t)\ \forall t \ge 1$ in Lemma~\ref{lemma:iregretbound2}. Therefore,
\begin{align*} R_T &\triangleq \sum_{t=1}^T r(\mbf{x}_t)
\le \sum_{t=1}^T 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t)\\
&\le 2 \beta_T^{1/2} \sum_{t=1}^T \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t) \end{align*}
since $\beta_t$ is a non-decreasing sequence.
From Lemma 5.4 and the Cauchy-Schwarz inequality in \cite{srinivas10ucb}, we have
\begin{align} 2 \sum_{t=1}^T \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t) \le \sqrt{C_1 T \gamma_T} \label{eq:frombogu} \end{align}
where $C_1 = 8 / \log(1 + \sigma_n^{-2})$.
Hence,
\begin{align*} R_T \le \sqrt{C_1 T \beta_T \gamma_T}\ . \end{align*}
Equivalently, $R_T / T \le \sqrt{C_1 \beta_T \gamma_T/ T} \ .$ Since $\gamma_T$ is shown to be bounded for several common kernels in \cite{srinivas10ucb}, the above implies that $\lim_{T \rightarrow \infty} R_T / T = 0$, i.e., the algorithm is no-regret.
\section{Bound on $r(\mbf{x}_{t_*(T)})$} \label{app:recommendxbound}
Conditioned on the event $f(\mbf{x},\mbf{z}) \in I_{t-1}[f(\mbf{x},\mbf{z})] \triangleq [l_{t-1}(\mbf{x},\mbf{z}), u_{t-1}(\mbf{x},\mbf{z})]$ for all $\mbf{x} \in \mcl{D}_{\mbf{x}}$, $\mbf{z} \in \mcl{D}_{\mbf{z}}$, $t \ge 1$, which occurs with probability $\ge 1 - \delta$ in Lemma~\ref{lemma:ucb51}, it follows that $V_\alpha(f(\mbf{x},\mbf{Z})) \in I_{t-1}[V_{\alpha}(f(\mbf{x},\mbf{Z}))] \triangleq [V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})), V_\alpha(u_{t-1}(\mbf{x},\mbf{Z}))]$ for all $\alpha \in (0,1), \mbf{x} \in \mcl{D}_{\mbf{x}}, t \ge 1$ in Lemma~\ref{lemma:confbound}. Furthermore, we select $\mbf{z}_t$ as an LV, so $l_{t-1}(\mbf{x}_t, \mbf{z}_t) \le V_\alpha(l_{t-1}(\mbf{x}_t,\mbf{Z})) \le V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) \le u_{t-1}(\mbf{x}_t, \mbf{z}_t)$ according to Definition~\ref{definition:lv}.
At $T$-th iteration, by recommending $\mbf{x}_{t_*(T)}$ as an estimate of $\mbf{x}_*$ where $t_*(T) \triangleq \text{arg}\!\max_{t \in \{1,\dots,T\}} V_{\alpha}(l_{t-1}(\mbf{x}_t,\mbf{Z}))$, we have
\begin{align*} V_{\alpha}(l_{t_*(T)-1}(\mbf{x}_{t_*(T)}, \mbf{Z})) &= \max_{t \in \{1,\dots,T\}} V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z}))\\ &\ge \frac{1}{T} \sum_{t=1}^T V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z}))\ . \end{align*}
Therefore,
\begin{align*} r(\mbf{x}_{t_*(T)}) &= V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(f(\mbf{x}_{t_*(T)}, \mbf{Z}))\\
&\le V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(l_{t_*(T)-1}(\mbf{x}_{t_*(T)}, \mbf{Z}))\\
&\le \frac{1}{T} \sum_{t=1}^T \left(V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z})) \right)\ . \end{align*}
Furthermore, $V_\alpha(f(\mbf{x}_*,\mbf{Z})) \le V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z}))$ from our condition, so
\begin{align*} &r(\mbf{x}_{t_*(T)}) \le \frac{1}{T} \sum_{t=1}^T \left(V_\alpha(f(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z})) \right)\\
&\le \frac{1}{T} \sum_{t=1}^T \left(V_\alpha(u_{t-1}(\mbf{x}_*,\mbf{Z})) - V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z})) \right)\\
&\le \frac{1}{T} \sum_{t=1}^T \left(V_\alpha(u_{t-1}(\mbf{x}_t,\mbf{Z})) - V_{\alpha}(l_{t-1}(\mbf{x}_t, \mbf{Z})) \right)\\
&\le \frac{1}{T} \sum_{t=1}^T \left(u_{t-1}(\mbf{x}_t,\mbf{z}_t) - l_{t-1}(\mbf{x}_t, \mbf{z}_t) \right) \text{ (since } \mbf{z}_t \text{ is an LV)}\\
&\le \frac{1}{T} \sum_{t=1}^T 2 \beta_t^{1/2} \sigma_{t-1}(\mbf{x}_t,\mbf{z}_t)\\
&\le \sqrt{\frac{C_1 \beta_T \gamma_T}{T}} \text{ (from Appendix~\ref{app:rtbound})}\ . \end{align*}
Since $\gamma_T$ is shown to be bounded for several common kernels in \cite{srinivas10ucb}, the above implies that $\lim_{T \rightarrow \infty} r(\mbf{x}_{t_*(T)}) = 0$.
\section{Proof of Theorem~\ref{theorem:0plus} and Its Corollaries} \label{app:0plus}
\subsection{Proof of Theorem~\ref{theorem:0plus}}
{\bf{Theorem~\ref{theorem:0plus}.}}
Let $\mbf{W}$ be a random variable with the support $\mcl{D}_w \subset \mbb{R}^{d_w}$ and dimension $d_w$.
Let $h$ be a continuous function mapping from $\mbf{w} \in \mcl{D}_w$ to $\mbb{R}$. Then, $h(\mbf{W})$ denotes the random variable whose realization is the function $h$ evaluation at a realization $\mbf{w}$ of $\mbf{W}$. Suppose $h(\mbf{w})$ has a minimizer $\mbf{w}_{\min} \in \mcl{D}_w$, then $\lim_{\alpha \rightarrow 0^+} V_{\alpha}(h(\mbf{W})) = h(\mbf{w}_{\min})\ .$
Recall that the support $\mcl{D}_w$ of $\mbf{W}$ is defined as the smallest closed subset of $\mbb{R}^{d_w}$ such that $P(\mbf{W} \in \mcl{D}_w) = 1$, and $\mbf{w}_{\min} \in \mcl{D}_w$ minimizes $h(\mbf{w})$.
\begin{lemma} \label{lemma:prenondecreaseV} $V_\alpha(h(\mbf{W}))$ is nondecreasing in $\alpha \in (0,1)$, i.e., \begin{align*} \forall\ 1 > \alpha > \alpha' > 0,\ V_\alpha(h(\mbf{W})) \ge V_{\alpha'}(h(\mbf{W}))\ . \end{align*} \end{lemma}
\begin{proof} Since $\alpha > \alpha'$, for all $\omega \in \mbb{R}$,
\begin{align*} P(h(\mbf{W}) \le \omega) \ge \alpha \Rightarrow P(h(\mbf{W}) \le \omega) \ge \alpha'\ . \end{align*}
Therefore, $\{\omega: P(h(\mbf{W}) \le \omega) \ge \alpha\}$ is a subset of $\{\omega: P(h(\mbf{W}) \le \omega) \ge \alpha'\}$. Thus,
\begin{align*} &\inf \{\omega: P(h(\mbf{W}) \le \omega) \ge \alpha\} \\ &\ge \inf \{\omega: P(h(\mbf{W}) \le \omega) \ge \alpha'\} \end{align*}
i.e., $V_\alpha(h(\mbf{W})) \ge V_{\alpha'}(h(\mbf{W}))\ .$ \end{proof}
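This monotonicity is easy to check numerically. The sketch below is an illustration only: it uses the standard empirical-quantile estimator of {\textsc{VaR}} and a Gaussian sample, neither of which comes from the paper.

```python
import math
import random

def empirical_var(samples, alpha):
    """Empirical VaR: the smallest observed w with P(h(W) <= w) >= alpha,
    i.e. the ceil(alpha * n)-th smallest sample."""
    xs = sorted(samples)
    k = max(1, math.ceil(alpha * len(xs)))
    return xs[k - 1]

random.seed(0)
h_samples = [random.gauss(0.0, 1.0) for _ in range(10000)]

alphas = [0.01, 0.05, 0.1, 0.5, 0.9]
vals = [empirical_var(h_samples, a) for a in alphas]
# Nondecreasing in alpha, as the lemma states.
assert all(a <= b for a, b in zip(vals, vals[1:]))
```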
Let \begin{align} \omega_{0^+} \triangleq \lim_{\alpha \rightarrow 0^+} V_\alpha(h(\mbf{W}))\ . \label{eq:omega0plus} \end{align}
Then, from Lemma~\ref{lemma:prenondecreaseV}, the following lemma follows.
\begin{lemma} \label{lemma:nondecreaseV} For all $\alpha \in (0,1)$, and $\omega_{0^+}$ defined in \eqref{eq:omega0plus} \begin{align*} \omega_{0^+} \le V_\alpha(h(\mbf{W}))\ . \end{align*} \end{lemma}
We use Lemma~\ref{lemma:nondecreaseV} to prove the following lemma.
\begin{lemma} \label{lemma:omegalehw} For all $\mbf{w} \in \mcl{D}_w$, and $\omega_{0^+}$ defined in \eqref{eq:omega0plus} \begin{align*} \omega_{0^+} \le h(\mbf{w}) \end{align*}
which implies that
\begin{align*} \omega_{0^+} \le h(\mbf{w}_{\min})\ . \end{align*} \end{lemma}
\begin{proof} By contradiction, we assume that there exists $\mbf{w}' \in \mcl{D}_w$ such that $\omega_{0^+} > h(\mbf{w}')$. Then, there exists $\epsilon_1 > 0$ such that $\omega_{0^+} > h(\mbf{w}') + \epsilon_1$. Consider the pre-image $\mcl{V}$ under $h$ of the open interval $I_h = (h(\mbf{w}') - \epsilon_1/2, h(\mbf{w}') + \epsilon_1/2)$. Since $h$ is a continuous function, $\mcl{V}$ is an open set and it contains $\mbf{w}'$ (as $I_h$ contains $h(\mbf{w}')$). Then, considering the set $\mcl{V} \cap \mcl{D}_w \supset \{\mbf{w}'\} \neq \emptyset$, we prove $P(\mbf{W} \in \mcl{V} \cap \mcl{D}_w) > 0$ by contradiction as follows.
If $P(\mbf{W} \in \mcl{V} \cap \mcl{D}_w) = 0$, then $\mcl{D}_w \setminus \mcl{V}$ is a closed set that is strictly smaller than $\mcl{D}_w$ (since $\mcl{V}$ is an open set, $\mcl{D}_w$ is a closed set, and $\mcl{V} \cap \mcl{D}_w$ is not empty) and satisfies $P(\mbf{W} \in \mcl{D}_w \setminus \mcl{V}) = 1$, which contradicts the definition of $\mcl{D}_w$ as the smallest such closed set. Thus, $P(\mbf{W} \in \mcl{V} \cap \mcl{D}_w) > 0$.
Therefore, $P(h(\mbf{W}) \in I_h) > 0$. So,
\begin{align*} &P(h(\mbf{W}) \le \omega_{0^+})\\ &\ge P(h(\mbf{W}) \le h(\mbf{w}') + \epsilon_1 / 2)\\ &\ge P(h(\mbf{W}) \in I_h)\\ &> 0\ . \end{align*}
Let us consider $\alpha_0 = P(h(\mbf{W}) \le h(\mbf{w}') + \epsilon_1/2) > 0$, the {\textsc{VaR}} at $\alpha_0$ is
\begin{align*}
V_{\alpha_0}(h(\mbf{W})) &\triangleq \inf\{\omega: P(h(\mbf{W}) \le \omega) \ge \alpha_0\}\\
&\le h(\mbf{w}') + \epsilon_1/2\\
&< \omega_{0^+} \end{align*}
which is a contradiction to Lemma~\ref{lemma:nondecreaseV}. \end{proof}
\begin{lemma} \label{lemma:omegagemin} For $\omega_{0^+}$ defined in \eqref{eq:omega0plus} \begin{align} \omega_{0^+} \ge h(\mbf{w}_{\min})\ . \end{align} \end{lemma}
\begin{proof} By contradiction, we assume that $\omega_{0^+} < h(\mbf{w}_{\min})$. Then there exists $\epsilon_2 > 0$ such that $\omega_{0^+} + \epsilon_2 < h(\mbf{w}_{\min})$. Since $\omega_{0^+} \triangleq \lim_{\alpha \rightarrow 0^+} V_{\alpha}(h(\mbf{W}))$, there exists $\alpha_0 > 0$ such that $V_{\alpha_0}(h(\mbf{W})) \in [\omega_{0^+}, \omega_{0^+} + \epsilon_2)$. However,
\begin{align*} &P(h(\mbf{W}) \le V_{\alpha_0}(h(\mbf{W}))) \\ &\le P(h(\mbf{W}) \le \omega_{0^+} + \epsilon_2)\\ &= 0 \end{align*}
where the last equality holds because $\omega_{0^+} + \epsilon_2 < h(\mbf{w}_{\min}) \le h(\mbf{w})$ for all $\mbf{w} \in \mcl{D}_w$,
which contradicts $P(h(\mbf{W}) \le V_{\alpha_0}(h(\mbf{W}))) \ge \alpha_0 > 0$. Therefore, $\omega_{0^+} \ge h(\mbf{w}_{\min})$. \end{proof}
From \eqref{eq:omega0plus}, Lemma~\ref{lemma:omegalehw} and Lemma~\ref{lemma:omegagemin},
\begin{align*} \lim_{\alpha \rightarrow 0^+} V_\alpha(h(\mbf{W})) = h(\mbf{w}_{\min}) \end{align*}
which directly leads to the result in Corollary~\ref{corollary:alpha0lv} for a function $f(\mbf{x},\mbf{z})$ that is continuous in $\mbf{z} \in \mcl{D}_{\mbf{z}}$. While $\mbf{Z}$ can follow any probability distribution defined on the support $\mcl{D}_{\mbf{z}}$, we can simply choose the distribution of $\mbf{Z}$ to be uniform over $\mcl{D}_{\mbf{z}}$.
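As a numerical sanity check of the limit (again with the empirical-quantile estimator of {\textsc{VaR}}; the particular $h$ and the uniform $\mbf{W}$ are illustrative choices, not from the paper):

```python
import math
import random

def empirical_var(samples, alpha):
    """Empirical VaR: the ceil(alpha * n)-th smallest sample."""
    xs = sorted(samples)
    k = max(1, math.ceil(alpha * len(xs)))
    return xs[k - 1]

# Illustrative choices: h(w) = (w - 0.3)^2 with W uniform on [0, 1],
# so h(w_min) = 0 at w_min = 0.3.
h = lambda w: (w - 0.3) ** 2
random.seed(1)
hw = [h(random.uniform(0.0, 1.0)) for _ in range(200000)]

v = [empirical_var(hw, a) for a in (0.1, 0.01, 0.001)]
assert v[0] >= v[1] >= v[2] >= 0.0  # nondecreasing in alpha, bounded below by min h
assert v[2] < 1e-3                  # close to h(w_min) = 0 for small alpha
```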
\subsection{Proof of Corollary~\ref{corollary:stableopt}}
Since $\mcl{D}_{\mbf{z}}$ is a closed subset of $\mbb{R}^{d_z}$ and $u_{t-1}(\mbf{x},\mbf{z})$, $l_{t-1}(\mbf{x},\mbf{z})$ are continuous in $\mbf{z} \in \mcl{D}_{\mbf{z}}$, Theorem~\ref{theorem:0plus} gives $V_\alpha(u_{t-1}(\mbf{x},\mbf{Z})) \rightarrow \min_{\mbf{z} \in \mcl{D}_{\mbf{z}}} u_{t-1}(\mbf{x},\mbf{z})$ as $\alpha \rightarrow 0^+$. Hence, the $\mbf{x}_t$ selected by {\textsc{StableOpt}} (in \eqref{eq:stableopt}) and by {V-UCB} are the same. Furthermore,
\begin{align*} \mcl{Z}_l^\le &\triangleq \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) \le V_\alpha(l_{t-1}(\mbf{x},\mbf{Z})) \}\\
&= \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) \le \min_{\mbf{z}' \in \mcl{D}_{\mbf{z}}}l_{t-1}(\mbf{x},\mbf{z}') \}\\
&= \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) = \min_{\mbf{z}' \in \mcl{D}_{\mbf{z}}}l_{t-1}(\mbf{x},\mbf{z}') \}\ ,\\ \mcl{Z}_u^\ge &\triangleq \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: u_{t-1}(\mbf{x},\mbf{z}) \ge V_\alpha(u_{t-1}(\mbf{x},\mbf{Z})) \}\\
&= \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: u_{t-1}(\mbf{x},\mbf{z}) \ge \min_{\mbf{z}' \in \mcl{D}_{\mbf{z}}} u_{t-1}(\mbf{x},\mbf{z}') \}\\
&= \mcl{D}_{\mbf{z}}\ . \end{align*}
Therefore, the set of lacing values is $\mcl{Z}_l^\le \cap \mcl{Z}_u^\ge = \mcl{Z}_l^\le = \{ \mbf{z} \in \mcl{D}_{\mbf{z}}: l_{t-1}(\mbf{x},\mbf{z}) = \min_{\mbf{z}' \in \mcl{D}_{\mbf{z}}}l_{t-1}(\mbf{x},\mbf{z}') \}$, any element of which is also the $\mbf{z}_t$ selected in \eqref{eq:stableopt} by {\textsc{StableOpt}}. Thus, the $\mbf{z}_t$ selected by {\textsc{StableOpt}} and by {V-UCB} are the same.
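On a discrete z-grid, the identity above can be traced directly. The lower/upper bound values below are made-up numbers for illustration (any $l \le u$ pointwise would do):

```python
# Illustrative (made-up) lower/upper confidence bounds on a 4-point z-grid.
l = [0.5, 0.2, 0.2, 0.9]   # l_{t-1}(x, z)
u = [0.8, 0.6, 1.1, 1.2]   # u_{t-1}(x, z), with l <= u pointwise

# As alpha -> 0+, the VaR of l and of u reduce to their minima over the grid.
var0_l, var0_u = min(l), min(u)

Z_l_le = {i for i, v in enumerate(l) if v <= var0_l}  # minimizers of l
Z_u_ge = {i for i, v in enumerate(u) if v >= var0_u}  # the whole grid
lacing_values = Z_l_le & Z_u_ge

assert Z_u_ge == {0, 1, 2, 3}
assert lacing_values == Z_l_le == {1, 2}  # = argmin_z l, StableOpt's choice
```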
\section{Local Neural Surrogate Optimization} \label{app:lnso}
The \emph{local neural surrogate optimization} (LNSO) algorithm for maximizing a {\textsc{VaR}} $V_\alpha(h(\mbf{x},\mbf{Z}))$ is described in Algorithm~\ref{alg:lnso}. It can be summarized as follows:
\begin{itemize} \item Whenever the current iterate is outside $\mcl{B}(\mbf{x}_c,r)$ (line 4), the center $\mbf{x}_c$ of the ball $\mcl{B}$ is updated to the current iterate (line 6) and the surrogate function $g(\mbf{x};\bm{\theta})$ is re-trained (lines 7-12).
\item The surrogate function $g(\mbf{x},\bm{\theta})$ is (re-)trained to estimate $V_\alpha(h(\mbf{x},\mbf{Z}))$ well for all $\mbf{x} \in \mcl{B}(\mbf{x}_c, r)$ (lines 7-12) with stochastic gradient descent by minimizing the following loss function given random mini-batches $ \mcl{Z}$ of $\mbf{Z}$ (line 8) and $\mcl{X}$ of $\mbf{x} \in \mcl{B}(\mbf{x}_c,r)$ (line 9):
\begin{align}
\mcl{L}_g(\mcl{X},\mcl{Z}) \triangleq \frac{1}{|\mcl{X}| |\mcl{Z}|} \sum_{\mbf{x} \in \mcl{X},\mbf{z} \in \mcl{Z}} [\rho_\alpha(h(\mbf{x},\mbf{z}) - g(\mbf{x};\bm{\theta}))] \label{eq:lossg} \end{align}
where $\rho_{\alpha}$ is the pinball function in Sec.~\ref{sec:continuousz}.
\item Instead of directly maximizing $V_\alpha(h(\mbf{x},\mbf{Z}))$, whose gradient w.r.t.\ $\mbf{x}$ is unavailable, we find $\mbf{x}$ that maximizes the surrogate function $g(\mbf{x};\bm{\theta}_s)$ (line 14) where $\bm{\theta}_s$ are the parameters trained in lines 7-12.
\end{itemize}
\begin{algorithm}[tb]
\caption{LNSO of $V_\alpha(h(\mbf{x},\mbf{Z}))$} \begin{algorithmic}[1]
\STATE {\bfseries Input:} target function $h$; domain $\mcl{D}_{\mbf{x}}$; initializer $\mbf{x}^{(0)}$; $\alpha$; a generator of $\mbf{Z}$ samples \texttt{gen\_Z}; radius $r$; no. of training iterations $t_v$, $t_g$; optimization stepsizes $\gamma_x$, $\gamma_g$
\STATE Randomly initialize $\bm{\theta}_s$
\FOR{$i=1,2,\dots, t_v$}
\IF{$i=1$ or $\Vert \mbf{x}^{(i-1)} - \mbf{x}_c \Vert \ge r$}
\STATE Initialize $\bm{\theta}^{(0)} = \bm{\theta}_s$
\STATE Update the center of $\mcl{B}$: $\mbf{x}_c = \mbf{x}^{(i-1)}$
\FOR{$j=1,2,\dots, t_g$}
\STATE Draw $n_z$ samples of $\mbf{Z}$: $\mcl{Z} =\texttt{gen\_Z}(n_z)$.
\STATE Draw a set $\mcl{X}$ of $n_x$ uniformly distributed samples in $\mcl{B}(\mbf{x}_c,r)$.
\STATE Update
$\bm{\theta}^{(j)} = \bm{\theta}^{(j-1)} - \gamma_g \frac{\text{d} \mcl{L}_g(\mcl{X},\mcl{Z})}{\text{d} \bm{\theta}}\Big|_{\bm{\theta} = \bm{\theta}^{(j-1)}}$ where $\mcl{L}_g(\mcl{X},\mcl{Z})$ is defined in \eqref{eq:lossg}.
\ENDFOR
\STATE $\bm{\theta}_s = \bm{\theta}^{(t_g)}$
\ENDIF
\STATE Update $\mbf{x}^{(i)} = \mbf{x}^{(i-1)} + \gamma_x \frac{\text{d}g(\mbf{x};\bm{\theta}_s)}{ \text{d}\mbf{x}}\big|_{\mbf{x} = \mbf{x}^{(i-1)}}$.
\STATE Project $\mbf{x}^{(i)}$ into $\mcl{D}_{\mbf{x}}$.
\ENDFOR
\STATE Return $\mbf{x}^{(t_v)}$ \end{algorithmic} \label{alg:lnso} \end{algorithm}
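To make the role of the pinball loss in \eqref{eq:lossg} concrete, the following self-contained sketch replaces the neural surrogate by a single constant parameter $g$ (an illustrative simplification, not the paper's implementation). Subgradient descent on the mean pinball loss then recovers the empirical $\alpha$-quantile, i.e., a {\textsc{VaR}} estimate:

```python
import math
import random

def pinball(alpha, r):
    """Pinball loss rho_alpha(r) for a residual r = h - g."""
    return alpha * r if r >= 0 else (alpha - 1.0) * r

def fit_constant_quantile(ys, alpha, iters=500):
    """Minimize the mean pinball loss over a single constant g by subgradient
    descent with a decaying step size; the minimizer is the empirical
    alpha-quantile of ys (LNSO performs the analogous SGD over theta)."""
    g = 0.0
    n = len(ys)
    for t in range(iters):
        frac_below = sum(1 for y in ys if y < g) / n
        # Subgradient of mean rho_alpha(y - g) w.r.t. g.
        grad = (1.0 - alpha) * frac_below - alpha * (1.0 - frac_below)
        g -= (0.5 / (1.0 + 0.05 * t)) * grad
    return g

random.seed(0)
ys = [random.gauss(0.0, 1.0) for _ in range(5000)]
alpha = 0.1
g_hat = fit_constant_quantile(ys, alpha)
emp_quantile = sorted(ys)[int(alpha * len(ys))]
assert abs(g_hat - emp_quantile) < 0.15
```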
\section{Experimental Details} \label{app:experiment}
Regarding the construction of $\mcl{D}_{\mbf{z}}$ in optimizing the synthetic benchmark functions, the discrete $\mcl{D}_{\mbf{z}}$ is selected as equidistant points (e.g., by dividing $[0,1]^{d_z}$ into a grid). The probability mass of $\mbf{Z}$ is defined as $P(\mbf{Z} = \mbf{z}) \propto \exp(-(\mbf{z} - 0.5)^2 / 0.1^2)$ (the subtraction $\mbf{z} - 0.5$ is elementwise). The continuous $\mbf{Z}$ follows a $2$-standard-deviation truncated independent Gaussian distribution with mean $0.5$ and standard deviation $0.125$. It is noted that when $\mcl{D}_{\mbf{z}}$ is discrete, there is a large region of $\mbf{Z}$ with low probability $P(\mbf{Z})$ in the experiments with synthetic benchmark functions. This is to highlight the advantage of {V-UCB} Prob in exploiting $P(\mbf{Z})$ compared with {V-UCB} Unif. In the robot pushing experiment, the region of $\mbf{Z}$ with low probability is smaller than that in the experiments with synthetic benchmark functions (e.g., Hartmann-$(1,2)$), as illustrated in Fig.~\ref{fig:zprob}. Therefore, the gap in performance between {V-UCB} Unif and {V-UCB} Prob is smaller in the robot pushing experiment (Fig.~\ref{fig:portrobot}b) than in the experiment with Hartmann-$(1,2)$ (Fig.~\ref{fig:synfinite}c).
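For concreteness, the discrete construction can be sketched as follows; the grid resolution and $d_z = 2$ are illustrative choices here, not the exact experimental values:

```python
import math
from itertools import product

# Equidistant grid on [0, 1]^{d_z}; resolution and d_z are illustrative.
d_z, n_per_dim = 2, 11
ticks = [i / (n_per_dim - 1) for i in range(n_per_dim)]
grid = list(product(ticks, repeat=d_z))

def unnorm_p(z):
    # P(Z = z) proportional to exp(-||z - 0.5||^2 / 0.1^2)
    return math.exp(-sum((zi - 0.5) ** 2 for zi in z) / 0.1 ** 2)

w = [unnorm_p(z) for z in grid]
total = sum(w)
pmf = [wi / total for wi in w]

assert abs(sum(pmf) - 1.0) < 1e-12
# The mode is the grid point at (0.5, ..., 0.5); most of the grid has low mass.
assert grid[max(range(len(pmf)), key=pmf.__getitem__)] == (0.5, 0.5)
```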
\begin{figure}
\caption{Plots of the log values of the un-normalized probabilities of the discrete $\mbf{Z}$ for the Hartmann-$(1,2)$ in the left plot and Robot pushing $(3,2)$ in the right plot. The orange dots show the realizations of the discrete $\mbf{Z}$.}
\label{fig:zprob}
\end{figure}
When the closed-form expression of the objective function is known (e.g., synthetic benchmark functions), the maximum value $\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_\alpha(f(\mbf{x},\mbf{Z}))$ in the performance metric can be evaluated accurately. On the other hand, when the closed-form expression of the objective function is unknown even for evaluating the performance metric (e.g., the simulated robot pushing experiment), the maximum value $\max_{\mbf{x} \in \mcl{D}_{\mbf{x}}} V_\alpha(f(\mbf{x},\mbf{Z}))$ is estimated by $\max_{\mbf{x} \in \mcl{D}_T} V_\alpha(f(\mbf{x},\mbf{Z})) + 0.01$ where $\mcl{D}_T$ is the set of input queries in the experiments with both {V-UCB} and {$\rho \text{KG}^{apx}$}. The addition of $0.01$ avoids $-\infty$ values in the plots of the log values of the performance metric.
The sizes of the initial observations $\mcl{D}_0$ are $3$ for the Branin-Hoo and Goldstein-Price functions; $10$ for the Hartmann-3D function; $20$ for the portfolio optimization problem; and $30$ for the simulated robot pushing task. The initial observations are randomly sampled for different random repetitions of the experiments, but they are the same between the same iterations in {V-UCB} and {$\rho \text{KG}^{apx}$}.
The hyperparameters of GP (i.e., the length-scales and signal variance of the SE kernel) and the noise variance $\sigma_n^2$ are estimated by maximum likelihood estimation \cite{rasmussen06} every $3$ iterations of BO. We set a lower bound of $0.0001$ for the noise variance $\sigma_n^2$ to avoid numerical errors.
To show the advantage of LNSO, we set the number of samples of $\mbf{W}$ to be $10$ for both {V-UCB} and {$\rho \text{KG}^{apx}$}. The number of samples of $\mbf{x}$, i.e., $|\mcl{X}|$, in LNSO (line 9 of Algorithm~\ref{alg:lnso}) is $50$. The radius $r$ of the local region $\mcl{B}$ is set to be a small value of $0.1$ such that a small neural network works well: $2$ hidden layers with $30$ hidden neurons at each layer; the activation functions of the hidden layers and the output layer are sigmoid and linear functions, respectively.
Since the theoretical value of $\beta_t$ is often considered excessively conservative \cite{bogunovic16,srinivas10ucb,bogunovic2018adversarially}, we set $\beta_t = 2\log(t^2 \pi^2/0.6)$ in our experiments, while $\beta_t$ can be tuned to achieve a better exploration-exploitation trade-off \cite{srinivas10ucb} or multiple values of $\beta_t$ can be used in a batch mode \cite{torossian2020bayesian}.
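This schedule is cheap to compute and grows only logarithmically in $t$; for instance:

```python
import math

def beta_t(t):
    """beta_t = 2 log(t^2 pi^2 / 0.6), the schedule used in the experiments."""
    return 2.0 * math.log(t ** 2 * math.pi ** 2 / 0.6)

# Doubling t only adds the constant 4 log 2 to beta_t.
assert abs((beta_t(2) - beta_t(1)) - 4.0 * math.log(2.0)) < 1e-12
assert beta_t(100) > beta_t(10) > beta_t(1) > 0.0
```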
\end{document}
Publication Info.
BMB Reports
Korean Society for Biochemistry and Molecular Biology (KSBMB)
1976-670X(eISSN)
Life Science > Developmental/Neuronal Biology
BMB Reports is an international journal devoted to the very rapid dissemination of timely and significant results in diverse fields of biochemistry and molecular biology. For speedy publication of novel knowledge, we aim to offer a first decision to the authors in less than 3 weeks from the submission date. BMB Reports is an open access, online-only journal. The journal publishes Original Articles and Contributed Mini Reviews, peer-reviewed by two or more reviewers.
http://submit.bmbreports.org/ (indexed in KSCI, KCI, SCOPUS)
Volume 53 Issue 11
Identification and Characterization of the Interaction between Heat-Shock Protein 90 and Phospholipase C-γ1
Kim, Su-Jeong;Kim, Myung-Jong;Kim, Yong;Si, Fu Chun;Ryu, Sung-Ho;Suh, Pann-Chill 97
Phosphoinositide-specific phospholipase C-${\gamma}1$ (PLC-${\gamma}1$) is a pivotal mediator in the signal transduction cascades induced by many growth factors. Using a yeast two-hybrid system, heat-shock protein 90 (Hsp90) was identified as a PLC-${\gamma}1$-binding protein. A co-immunoprecipitation experiment, using anti-PLC-${\gamma}1$ antibody, demonstrated an in vivo interaction between Hsp90 and PLC-${\gamma}1$ in the NIH-3T3 cells. The interaction in NIH-3T3 was unaffected by the PDGF treatment, inducing phosphorylation and activation of PLC-${\gamma}1$. Direct interaction between Hsp90 and PLC-${\gamma}1$ was confirmed by in vitro binding experiments using purified Hsp90 and PLC-${\gamma}1$. Furthermore, Hsp90 increased the $PIP_2$-hydrolyzing activity of PLC-${\gamma}1$ up to 2-fold at $0.1{\mu}M$ in vitro. Taken together, we show, for the first time, the interaction of PLC-${\gamma}1$ with Hsp90, both in vivo and in vitro. We suggest that Hsp90 may play a role in PLC-${\gamma}1$-mediated signal transduction.
Phosphorylation of Elongation Factor-2 And Activity Of Ca2+/Calmodulin-Dependent Protein Kinase III During The Cell Cycle
Suh, Kyong-Hoon 103
Phosphorylation of the eukaryotic elongation factor 2 (eEF-2) blocks the elongation step of translation and stops overall protein synthesis. Although the overall rate of protein synthesis in mitosis reduces to 20% of that in S phase, it is unclear how the protein translation procedure is regulated during the cell cycle, especially in the stage of peptide elongation. To delineate the regulation of the elongation step through eEF-2 function, the changes in phosphorylation of eEF-2, and in activity of corresponding $Ca^{2+}$/calmodulin (CaM)-dependent protein kinase III (CaMK-III) during the cell cycle of NIH 3T3 cells, were determined. The in vivo level of phosphorylated eEF-2 showed an 80% and 40% increase in the cells arrested at G1 and M, respectively. The activity of CaMK-III also changed in a similar pattern, more than a 2-fold increase when arrested at G1 and M. The activity change of the kinase during one turn of the cell cycle also demonstrated the activation at G1 and M phases. The activity change of cAMP-dependent protein kinase (PKA) was reciprocal to that of CaMK-III. These results indicated: (1) the activity of CaMK-III was cell cycle-dependent and (2) the level of eEF-2 phosphorylation followed the kinase activity change. Therefore, the elongation step of protein synthesis might be cell cycle dependently regulated.
Purification and Characterization of Serine Protease Inhibitors from Dolichos lablab Seeds; Prevention Effects on Pseudomonal Elastase-Induced Septic Hypotension
Koo, Sun-Hyang;Choi, Yun-Lim;Choi, Su-Kyung;Shin, Young-Hee;Kim, Byeong-Gee;Lee, Bok-Luel 112
Three kinds of serine protease inhibitors, members of the Bowman-Birk trypsin inhibitor, were purified from Dolichos lablab seeds and named Dolichos protease inhibitor 1, 2 and 3 (DI-1, DI-2 and DI-3), respectively. Each inhibitor showed a single band with gel mobility at around 15.9, 12.1 and 14.6 kDa on 20% SDS-PAGE under reducing conditions. To characterize inhibitory specificity, the inhibition constant (Ki) for these inhibitors was measured against several known serine proteases. All three Dolichos protease inhibitors (DI-1, DI-2 and DI-3) inhibited the activity of trypsin and plasmin, but had no effect on thrombin and kallikrein (either for human plasma kallikrein or for porcine pancreas kallikrein). DI-1 inhibited chymotrypsin most effectively (Ki = $3.6{\times}10^{-9}\;M$), while DI-2 displayed inhibitory activity for porcine pancreatic elastase (Ki = $6.2{\times}10^{-8}\;M$). Pre-treatment of the 33 mg/kg of DI-mixture (active fractions from $C_{18}$ open column chromatography that included DI-1, DI-2 and DI-3) inhibited the induction of pseudomonal elastase-induced septic hypotension and prevented an increase in bradykinin generation in pseudomonal elastase-treated guinea pig plasma. Also, the increase of kallikrein activity, by injection of pseudomonal elastase, was inhibited by the pretreatment of the DI-mixture in a guinea pig. Since the DI-mixture had no inhibitory effect on kallikrein activity when Z-Phe-Arg-MCA was used as a substrate in vitro, its inhibitory activity in the pseudomonal elastase-induced septic hypotension model might not be due to a direct inhibition of plasma kallikrein in the activation cascade of the Hageman factor and prekallikrein system. These results suggest that the Dolichos DI-mixture might be used as an inhibitor in pathogenic bacterial protease-induced septic shock.
Solution Structure of an Active Mini-Proinsulin, M2PI: Inter-chain Flexibility is Crucial for Insulin Activity
Cho, Yoon-Sang;Chang, Seung-Gu;Choi, Ki-Doo;Shin, Hang-Cheol;Ahn, Byung-Yoon;Kim, Key-Sun 120
M2PI is an active single chain mini-proinsulin with a 9-residue linker containing the turn-forming sequence 'YPGDV' between the B- and A-chains, but which retains about 50% of native insulin receptor binding activity. The refolding efficiency of M2PI is higher than proinsulin by 20-40% at alkaline pH, and native insulin is generated by the enzymatic conversion of M2PI. The solution structure of M2PI was determined by NMR spectroscopy. The global structure of M2PI is similar to that of native insulin, but the flexible linker between the B- and A-chains perturbed the N-terminal A-chain and C-terminal B-chain. The helix in the N-terminal A-chain is partly perturbed and the ${\beta}$-turn in the B-chain is disrupted in M2PI. However, the linker between the two chains was completely disordered indicating that the designed turn was not formed under the experimental conditions (20% acetic acid). Considering the fact that an insulin analogue, directly cross-linked between the C-terminus of the B-chain and the N-terminus of the A-chain, has negligible binding activity, a flexible linker between the two chains is sufficient to keep binding activity of M2PI, but the perturbed secondary structures are detrimental to receptor binding.
Nucleotide Insertion Fidelity of Human Hepatitis B Viral Polymerase
Kim, Youn-Hee;Hong, Young-Bin;Suh, Se-Won;Jung, Gu-Hung 126
The hepadnaviruses replicate their nucleic acid through a reverse transcription step. The MBP-fused HBV polymerase was expressed in E. coli and purified by using amylase affinity column chromatography. The purified protein exhibited DNA-dependent DNA polymerase activity. In this report, the MBP-HBV polymerase was shown to lack 3'$\rightarrow$5' exonuclease activity, like other retroviral RTs. The ratio of the insertion efficiency for the wrong versus right base pairs indicates the misinsertion frequency (f). The nucleotide insertion fidelity (1/f), observed with the MBP-HBV polymerase and HIV-1 RT, was between 60 and 54,000, and between 50 and 73,000, respectively, showing that they are in close range. A relatively efficient nucleotide incorporation by the MBP-HBV polymerase was observed with a specificity of three groups: (1) A : T, T : A>C : G, G : C (matched pairs), (2) A : C, C : A>G : T, T : G (purine-pyrimidine and pyrimidine-purine mispairs), and (3) C : C, A : A, G : G, T : T>T : C, C : T>A : G, G : A (purine-purine or pyrimidine-pyrimidine mispairs), and their order is (1)>(2)>(3). The data from the nucleotide insertion fidelity by the MBP-HBV polymerase suggest that the HBV polymerase may be as error-prone as HIV-1 RT.
Effect of γ-Irradiation on the Molecular Properties of Bovine Serum Albumin and β-Lactoglobulin
Cho, Yong-Sik;Song, Kyung-Bin 133
To elucidate the effect of oxygen radicals on the molecular properties of proteins, the secondary and tertiary structure and molecular weight size of BSA and ${\beta}$-lactoglobulin were examined after irradiation of proteins at various doses. Gamma-irradiation of protein solutions caused the disruption of the ordered structure of protein molecules as well as degradation, cross-linking, and aggregation of the polypeptide chains. As a model system, BSA and ${\beta}$-lactoglobulin were used as a typical ${\alpha}$-helical and a ${\beta}$-sheet structure protein, respectively. A circular dichroism study showed that the increase of radiation decreased the ordered structure of proteins with a concurrent increase of aperiodic structure content. Fluorescence spectroscopy indicated that irradiation quenched the emission intensity excited at 280 nm. SDS-PAGE and a gel permeation chromatography study indicated that radiation caused initial fragmentation of proteins resulting in a subsequent aggregation due to cross-linking of protein molecules.
Expression and cDNA Cloning of klp-12 Gene Encoding an Ortholog of the Chicken Chromokinesin, Mediating Chromosome Segregation in Caenorhabditis elegans
Ali, M. Yusuf;Khan, M.L.A.;Shakir, M.A.;Kobayashi, K. Fukami;Nishikawa, Ken;Siddiqui, Shahid S. 138
In eukaryotes, chromosomes undergo a series of complex and coordinated movements during cell division. The kinesin motor proteins, such as the chicken Chromokinesin, are known to bind DNA and transport chromosomes on spindle microtubules. We previously cloned a family of retrograde C-terminus kinesins in Caenorhabditis elegans that mediate chromosomal movement during embryonic development. Here we report the cloning of a C. elegans klp-12 cDNA, encoding an ortholog of chicken Chromokinesin and mouse KIF4. The KLP-12 protein contains 1609 amino acids and harbors two leucine zipper motifs. The in situ RNA hybridization in embryonic stages shows that the klp-12 gene is expressed during the entire embryonic development. The RNA interference assay reveals that, similar to the role of Chromokinesin, klp-12 functions in chromosome segregation. These results support the notion that during mitosis both types, the anterograde N-terminus kinesins such as KLP-12 and the retrograde C-terminus kinesins, such as KLP-3, KLP-15, KLP-16, and KLP-17, may coordinate chromosome assembly at the metaphase plate and chromosomal segregation towards the spindle poles in C. elegans.
Lipoprotein Lipase-Mediated Uptake of Glycated LDL
Koo, Bon-Sun;Lee, Duk-Soo;Yang, Jeong-Yeh;Kang, Mi-Kyung;Sohn, Hee-Sook;Park, Jin-Woo 148
The glycation process plays an important role in accelerated atherosclerosis in diabetes, and the uptake of atherogenic lipoproteins by macrophage in the intima of the vessel wall leads to foam cell formation, an early sign of atherosclerosis. Besides the lipolytic action on the plasma triglyceride component, lipoprotein lipase (LPL) has been reported to enhance the cholesterol uptake by arterial wall cells. In this study, some properties of LPL-mediated low-density lipoprotein (LDL) uptake and the effect of LDL glycation were investigated in RAW 264.7 cell, a murine macrophage cell line. In the presence of LPL, $^{125}I$-LDL binding to RAW 264.7 cells was increased in a dose-dependent manner. At concentrations greater than $20\;{\mu}g/ml$ of LPL, LPL-mediated LDL binding was increased about 17-fold, achieving saturation. Without LPL, both very low-density lipoprotein (VLDL) and high-density lipoprotein (HDL) were ineffective in blocking the binding of $^{125}I$-LDL to Cells. However, LPL-enhanced LDL binding was inhibited about 50% by the presence of VLDL, while no significant effect was observed with HDL. Heat inactivation of LPL caused a 30% decrease of LDL binding. In the presence of LPL, the cells took up 40% of cell-bound native LDL. No significant difference was observed in cell binding between native and glycated LDL. However, the uptake of glycated LDL was significantly greater than that of native LDL, reaching to 70% of the total cell bound glycated LDL. These results indicate that LPL can cause the significant enhancement of LDL uptake by RAW 264.7 cells and the enhanced uptake of glycated LDL in the presence of LPL might play an important role in the accelerated atherogenesis in diabetic patients.
In vitro Evidence that Purified Yeast Rad27 and Dna2 are not Stably Associated with Each Other Suggests that an Additional Protein(s) is Required for a Complex Formation
Bae, Sung-Ho;Seo, Yeon-Soo 155
The saccharomyces cerevisiae Rad27, a structure-specific endonuclease for the okazaski fragment maturation has been known to interact genetically and biochemically with Dna2, an essential enzyme for DNA replication. In an attempt to define the significance of the interaction between the two enzymes, we expressed and purified both Dna2 and Rad27 proteins. In this report, Rad27 could not form a complex with Dna2 in the three different analyses. The analyses included glycerol gradient sedimentation, protein-column chromatography, and coinfection of baculoviruses followed by affinity purification. This is in striking contrast to the previous results that used crude extracts. These results suggest that the interaction between the two proteins is not sufficiently stable or indirect, and thus requires an additional protein(s) in order for Rad27 and Dna2 to form a stable physical complex. This result is consistent with our genetic findings that Schizosaccharomyces pombe Dna2 is capable of interacting with several proteins that include two subunits of polymerase $\delta$, DNA ligase I, as well as Fen-1. In addition, we found that the N-terminal modification of Rad27 abolished its enzymatic activity. Thus, as suspected, we found that on the basis of the structure determination, N-terminal methionine indeed plays an important role in the nucleolytic cleavage reaction.
'Restriction-PCR' - a Superior Replacement for Restriction Endonucleases in DNA Cloning Applications
Klimkait, Thomas 162
Polymerase chain reaction (PCR) is well established as an indispensable tool of molecular biology; and yet a limitation for cloning applications continues to be that products often require subsequent restriction digests, blunt-end ligation, or the use of special linear vectors. Here a rapid, PCR-based system is described for the simple, restriction enzyme-free generation of synthetic, 'restriction-like' DNA fragments with staggered ends. Any 3'- or 5'-protruding terminus, but also non-palindromic overhangs with an unrestricted single strand length, are specifically created. With longer overhangs, "Restriction-PCR" does not even require a ligation step prior to transformation. Thereby the technique presents a powerful tool, e.g., for a successive, authentic reconstitution of sub-fragments of long genes with no need to manipulate the sequence or to introduce restriction sites. Since it is restriction enzyme-free and thereby devoid of the limitations of partial DNA digests, "Restriction-PCR" allows a straight one-step generation and cloning of difficult DNA fragments that internally carry additional sites. Specific sequence insertions or deletions can be precisely engineered into genes of interest. With these properties "Restriction-PCR" has the potential to add significant speed and versatility to a wide variety of DNA cloning applications.
Temperature-Dependent Expression of Escherichia coli Thioredoxin Gene
Lee, Jin-Joo;Park, Eun-Hee;Ahn, Ki-Sup;Lim, Chang-Jin 166
Thioredoxin is a multifunctional protein that is ubiquitous in microorganisms, animals and plants. Previously, the expression of the Escherichia coli thioredoxin gene (trxA) was found to be negatively regulated by cAMP. In the present study, the effect of temperature on the expression of the E. coli trxA gene was investigated. In order to examine the temperature effect, the fusion plasmid pCL70 that harbors the E. coli trxA P1P2 promoter was used. The other two fusion plasmids, pJH3 and pMH521 that were constructed in different vectors which harbor the E. coli trxA P2 promoter, were also used. When the E. coli strain MC1061/pCL70 was grown in a rich medium at $25^{\circ}C$, $34^{\circ}C$ and $42^{\circ}C$, the cells grown at $42^{\circ}C$ gave the highest $\beta$-galactosidase activity. The E. coli MC1061/pJH3 and MC1061/pMG521 cells showed increased $\beta$-galactosidase activity after the shift of the culture temperature to $42^{\circ}C$. The wild-type trxA gene of the E. coli MC1061 cells produced much higher thioredoxin activity at the higher temperature. These results support the conclusion that the E. coli trxA gene is regulated in a temperature-dependent manner. Especially the expression from its P2 promoter appeared to be sensitive to temperature.
Chemical Modification of 5-Lipoxygenase from the Korean Red Potato
Kim, Kyoung-Ja 172
The lipoxygenase was purified 35 fold to homogeneity from the Korean red potato by an ammonium sulfate precipitation and DEAE-cellulose column chromatography. The simple purification method is useful for the preparation of pure lipoxygenase. The molecular weight of the enzyme was estimated to be 38,000 by SDS-polyacrylamide gel electrophoresis and Sepharose 6B column chromatography. The purified enzyme with 2 M $(NH_4)_2SO_4$ in a potassium phosphate buffer, pH 7.0, was very stable for 5 months at $-20^{\circ}C$. Because the purified lipoxygenase is very stable, it could be useful for the screening of a lipoxygenase inhibitor. The optimal pH and temperature for lipoxygenase purified from the red potato were found to be pH 9.0 and $30^{\circ}C$, respectively. The Km and Vmax values for linoleic acid of the lipoxygenase purified from the red potato were $48\;{\mu}M$ and $0.03\;{\mu}M$ per minute per milligram of protein, respectively. The enzyme was insensitive to the metal chelating agents tested (2 mM KCN, 1 and 10mM EDTA, and 1 mM $NaN_3$), but was inhibited by several divalent cations, such as $Cu^{++}$, $Co^{++}$ and $Ni^{++}$. The essential amino acids that were involved in the catalytic mechanism of the 5-lipoxygenase from the Korean red potato were determined by chemical modification studies. The catalytic activity of lipoxygenase from the red potato was seriously reduced after treatment with a diethylpyrocarbonate (DEPC) modifying histidine residue and Woodward's reagent (WRK) modifying aspartic/glutamic acid. The inactivation reaction by DEPC (WRK) proceeded with pseudo-first-order kinetics. The double-logarithmic plot of the observed pseudo-first-order rate constant against the modifier concentration yielded a reaction order of 2, indicating that two histidine residues (carboxylic acids) were essential for the lipoxygenase activity from the red potato.
Linoleic acid protected the enzyme against inactivation by DEPC (WRK), revealing that histidine and carboxylic amino acid residues are present at the substrate-binding site of the enzyme molecule.
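The reaction-order analysis described above (slope of a double-logarithmic plot of the observed pseudo-first-order rate constant versus modifier concentration) can be illustrated with a short sketch. The numbers below are made up for illustration — they are not the paper's data — and are simply generated to be consistent with an order of 2 in modifier concentration.

```python
import math

# Hypothetical k_obs values following k_obs = k * C^2 (order 2 in modifier).
# conc in mM, k_obs in 1/min; synthetic data, not measurements.
conc = [0.5, 1.0, 2.0, 4.0]
k_obs = [0.02, 0.08, 0.32, 1.28]  # exactly 0.08 * C**2

# Least-squares slope of log(k_obs) vs log(conc) gives the apparent
# reaction order, i.e. the number of essential residues modified.
xs = [math.log(c) for c in conc]
ys = [math.log(k) for k in k_obs]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
print(round(slope, 3))  # slope of the double-log plot, ~2 for this data
```

A slope near 2, as in the abstract, is read as two essential residues per enzyme being modified during inactivation.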
Glutathione S-Transferase Activities of S-Type and L-Type Thioltransferases from Arabidopsis thaliana
Cho, Young-Wook;Park, Eun-Hee;Lim, Chang-Jin 179
The glutathione S-transferase (GST) activities of S-type and L-type thioltransferases (TTases), which were purified from the seeds and leaves of Arabidopsis thaliana, respectively, were identified and compared. The S-type and L-type TTases showed $K_m$ values of 9.72 mM and 3.18 mM on 1-chloro-2,4-dinitrobenzene (CDNB), respectively, indicating that the L-type TTase has a higher affinity for CDNB. The GST activity of the L-type TTase was rapidly inactivated after being heated at $70^{\circ}C$ or higher. The GST activity of the S-type TTase remained active in a range of $30-90^{\circ}C$. $Hg^{2+}$ inhibited the GST activity of the S-type TTase, whereas $Ca^{2+}$ and $Cd^{2+}$ inhibited the GST activity of the L-type TTase. Our results suggest that the GST activities of the two TTases of Arabidopsis thaliana may have different catalytic mechanisms. The importance of the co-existence of TTase and GST activities in one protein remains to be elucidated.
Identification of a Novel PGE2-Regulated Gene in SNU1 Gastric Cancer Cells
Park, Min-Seon;Kim, Hong-Tae;Min, Byung-Re;Kimm, Ku-Chan;Nam, Myeong-Jin 184
Prostaglandin $E_2$ ($PGE_2$) plays an important role in the regulation of various gastric functions, and its growth-inhibitory activities on tumor cells have been studied in vitro and in vivo. Although these mechanisms have attracted many researchers in the past decade, the molecular mechanisms of cell cycle arrest or induction of apoptosis by $PGE_2$ remain unclear. We investigated the effects of $PGE_2$ on the growth of the human gastric carcinoma cell line SNU1, and isolated genes that are regulated by $PGE_2$ using differential display RT-PCR (DD RT-PCR). FACS analysis suggested that SNU1 cells were arrested at the G1 phase by $PGE_2$ treatment. This growth-inhibitory effect was time- and dose-dependent. Treatment of SNU1 cells with $10\;{\mu}g/ml$ $PGE_2$, followed by DD RT-PCR analysis, revealed band patterns differentially expressed from the control. Among the differentially expressed clones, we found an unidentified cDNA clone (HGP-27) overexpressed in $PGE_2$-treated cells. The full-length cDNA of HGP-27 was isolated using RACE; it consisted of a 30-nt 5'-noncoding region, an 891-nt ORF encoding a 296-amino-acid protein, and a 738-nt 3'-noncoding region including a poly(A) signal. This gene was localized on the short arm of chromosome 11. Using the Motif Finder program, a myb DNA-binding repeat signature was detected in the ORF region. The COOH-terminal half was shown to have similarity with the $NH_2$-terminal domain of thioredoxin (Trx). This relation between HGP-27 and Trx implied a potential role for HGP-27 in modulating the DNA-binding function of the transcription factor myb.
A New Purification Method for the Fab and F(ab)2 Fragments of 145-2C11, a Hamster Anti-mouse CD3ε Antibody
Kwack, Kyu-Bum 188
Recombinant protein G has been utilized in the purification of antibodies from various mammalian species based on the interaction of antibodies with protein G. The interaction between immunoglobulin and protein G may not be restricted to the Fc portion of antibodies, as many different $F(ab)_2$ or Fab fragments can also bind to protein G. I found that both the Fab and $F(ab)_2$ fragments of 145-2C11, a hamster anti-mouse $CD3{\varepsilon}$ antibody, bound to protein G-sepharose. Interestingly, the Fab and $F(ab)_2$ of 145-2C11 did not bind to protein A-sepharose. The binding of the Fab and $F(ab)_2$ of 145-2C11 to protein G provided a useful method to remove proteases, chopped fragments of the Fc region, and other contaminating proteins. The remaining intact antibody in the protease reaction mixture can be removed using protein A-sepharose, because the Fab and $F(ab)_2$ portions of 145-2C11 did not bind to protein A-sepharose. The specific binding of the Fab and $F(ab)_2$ portions of 145-2C11 to protein G-sepharose (though not to protein A-sepharose), and the binding of intact 145-2C11 to both protein A- and G-sepharose, will be useful in developing an effective purification protocol for the Fab and $F(ab)_2$ portions of 145-2C11.
Equation: "integral from 0 to 6 of -(x^2)+36"
So I know the answer, I just don't understand what my teacher wants. I know there's an "n(n+1)(2n+1)" that's supposed to be thrown in there.
What the teacher wants is for you to divide the interval [0, 6] into a lot of evenly spaced subintervals and set up the Riemann sum over those subintervals. Evaluate the sum, then let the number of subintervals become infinite and see that the sum converges to the integral.
Thanks for the information, but I have no idea how to do that. He's never gone over a single problem like that and it's due for homework tomorrow. Do you know any good places to read up on this method of solving?
With $n$ equal subintervals, $\Delta x = 6/n$ and the right endpoints are $x_i = 6i/n$, so the Riemann sum is $\sum_{i=1}^{n}\left(36 - \frac{36i^2}{n^2}\right)\frac{6}{n} = 216 - \frac{216}{n^3}\cdot\frac{n(n+1)(2n+1)}{6} = 216 - \frac{36(n+1)(2n+1)}{n^2}$, which tends to $216 - 36(2) = 144$ as $n \to \infty$.
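If it helps, you can check the limit of 144 numerically. This is just a quick sketch (my own helper names) of the right-endpoint Riemann sum for $f(x) = 36 - x^2$ on $[0, 6]$, alongside the closed form you get after using the $n(n+1)(2n+1)/6$ formula:

```python
# Right-endpoint Riemann sum for f(x) = 36 - x^2 on [0, 6].
def riemann_sum(n):
    dx = 6 / n
    return sum((36 - (i * dx) ** 2) * dx for i in range(1, n + 1))

# Closed form after applying sum_{i=1}^{n} i^2 = n(n+1)(2n+1)/6.
def closed_form(n):
    return 216 - 36 * (n + 1) * (2 * n + 1) / n**2

for n in (10, 100, 1000):
    print(n, riemann_sum(n), closed_form(n))
```

Both columns agree for every n, and as n grows the values approach the exact integral, 144 (from below, since right endpoints underestimate a decreasing function).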
Thanks for your help. I have no idea what you did, but I guess I'll learn soon!