Conjugation of isometries in Euclidean space
In a group, the conjugate by g of h is $ghg^{-1}$.
Translation
If h is a translation, then its conjugation by an isometry can be described as applying the isometry to the translation:
• the conjugation of a translation by a translation is the original translation
• the conjugation of a translation by a rotation is a translation by a rotated translation vector
• the conjugation of a translation by a reflection is a translation by a reflected translation vector
Thus the conjugacy class within the Euclidean group E(n) of a translation is the set of all translations by the same distance.
The smallest subgroup of the Euclidean group containing all translations by a given distance is the set of all translations. So, this is the conjugate closure of a singleton containing a translation.
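These conjugation rules can be checked directly with matrices. The sketch below (helper names are ours, not from the article) represents plane isometries as 3×3 homogeneous matrices and verifies two of the bullet points above.

```python
import numpy as np

def translation(v):
    """Homogeneous 3x3 matrix for the translation x -> x + v in the plane."""
    M = np.eye(3)
    M[:2, 2] = v
    return M

def rotation(theta):
    """Homogeneous 3x3 matrix for rotation about the origin by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def conjugate(g, h):
    """The conjugate g h g^-1."""
    return g @ h @ np.linalg.inv(g)

v = np.array([2.0, 0.0])
t = translation(v)
g = rotation(np.pi / 2)            # rotation by 90 degrees

# Conjugating a translation by a rotation yields the translation
# by the rotated vector: here (2, 0) rotates to (0, 2).
assert np.allclose(conjugate(g, t), translation(g[:2, :2] @ v))

# Conjugating a translation by another translation leaves it unchanged.
u = translation(np.array([5.0, -3.0]))
assert np.allclose(conjugate(u, t), t)
```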
Thus E(n) is a semidirect product of the orthogonal group O(n) and the subgroup of translations T, and O(n) is isomorphic with the quotient group of E(n) by T:
O(n) $\cong $ E(n) / T
Thus there is a partition of the Euclidean group: each subset consists of one isometry that keeps the origin fixed, combined with all translations.
Each isometry is given by an orthogonal matrix A in O(n) and a vector b:
$x\mapsto Ax+b$
and each subset in the quotient group is given by the matrix A only.
Similarly, for the special orthogonal group SO(n) we have
SO(n) $\cong $ E+(n) / T
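A small sketch of this structure (the notation is ours): each isometry is stored as a pair (A, b) acting as x ↦ Ax + b, and the composition rule shows that the map (A, b) ↦ A is a homomorphism onto the matrix part, whose kernel is exactly the translation subgroup T.

```python
import numpy as np

def compose(iso1, iso2):
    """(A1, b1) after (A2, b2): x -> A1(A2 x + b2) + b1 = (A1 A2) x + (A1 b2 + b1)."""
    A1, b1 = iso1
    A2, b2 = iso2
    return (A1 @ A2, A1 @ b2 + b1)

A1 = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
A2 = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection in the x-axis
b1, b2 = np.array([1.0, 2.0]), np.array([3.0, 0.0])

A12, b12 = compose((A1, b1), (A2, b2))
# The matrix part of a composite depends only on the matrix parts,
# so (A, b) -> A is a homomorphism E(2) -> O(2).
assert np.allclose(A12, A1 @ A2)

# Its kernel (A = I) is the translation subgroup: translations just add.
I = np.eye(2)
_, b = compose((I, b1), (I, b2))
assert np.allclose(b, b1 + b2)
```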
Inversion
The conjugate of the inversion in a point by a translation is the inversion in the translated point, etc.
Thus the conjugacy class within the Euclidean group E(n) of inversion in a point is the set of inversions in all points.
Since a combination of two inversions is a translation, the conjugate closure of a singleton containing inversion in a point is the set of all translations and the inversions in all points. This is the generalized dihedral group dih (Rn).
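The fact that two point inversions compose to a translation is easy to verify. In the sketch below (notation assumed, not from the article), inversion in a point p acts as x ↦ 2p − x.

```python
import numpy as np

def invert_in(p):
    """Point inversion in p: x -> 2p - x."""
    return lambda x: 2 * p - x

p, q = np.array([1.0, 4.0]), np.array([-2.0, 0.5])
x = np.array([3.0, 7.0])

# Inversion in q followed by inversion in p:
# 2p - (2q - x) = x + 2(p - q), a translation by 2(p - q).
composite = invert_in(p)(invert_in(q)(x))
assert np.allclose(composite, x + 2 * (p - q))
```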
Similarly { I, −I } is a normal subgroup of O(n), and we have:
E(n) / dih (Rn) $\cong $ O(n) / { I, −I }
For odd n we also have:
O(n) $\cong $ SO(n) × { I, −I }
and hence not only
O(n) / SO(n) $\cong $ { I, −I }
but also:
O(n) / { I, −I } $\cong $ SO(n)
For even n we have:
E+(n) / dih (Rn) $\cong $ SO(n) / { I, −I }
Rotation
In 3D, the conjugate by a translation of a rotation about an axis is the corresponding rotation about the translated axis. Such a conjugation produces the screw displacement known to express an arbitrary Euclidean motion according to Chasles' theorem.
The conjugacy class within the Euclidean group E(3) of a rotation about an axis is the set of all rotations by the same angle, about any axis.
The conjugate closure of a singleton containing a rotation in 3D is E+(3).
In 2D it is different in the case of a k-fold rotation: the conjugate closure contains k rotations (including the identity) combined with all translations.
E(2) has quotient group O(2) / Ck and E+(2) has quotient group SO(2) / Ck . For k = 2 this was already covered above.
Reflection
The conjugates of a reflection are reflections with a translated, rotated, and reflected mirror plane. The conjugate closure of a singleton containing a reflection is the whole E(n).
Rotoreflection
The left coset, and also the right coset, of a reflection in a plane combined with a rotation by a given angle about a perpendicular axis is the set of all combinations of a reflection in the same or a parallel plane, combined with a rotation by the same angle about the same or a parallel axis, preserving orientation.
Isometry groups
Two isometry groups are said to be equal up to conjugacy with respect to affine transformations if there is an affine transformation such that all elements of one group are obtained by taking the conjugates, by that affine transformation, of all elements of the other group. This applies, for example, to the symmetry groups of two patterns which are both of a particular wallpaper group type. If we considered only conjugacy with respect to isometries, we would not allow for scaling, nor, in the case of a parallelogrammatic lattice, for a change of shape of the parallelogram. Note, however, that the conjugate with respect to an affine transformation of an isometry is in general not an isometry, although volume (in 2D: area) and orientation are preserved.
Cyclic groups
Cyclic groups are Abelian, so conjugation acts trivially: the conjugate of any element by any other element is the element itself.
Zmn / Zm $\cong $ Zn.
Zmn is the direct product of Zm and Zn if and only if m and n are coprime. Thus e.g. Z12 is the direct product of Z3 and Z4, but not of Z6 and Z2.
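The coprimality criterion can be tested numerically: the map x ↦ (x mod m, x mod n) from Zmn to Zm × Zn is an isomorphism exactly when m and n are coprime. A minimal check (helper name is ours):

```python
from math import gcd

def is_direct_product(m, n):
    """True iff x -> (x mod m, x mod n) is a bijection Z_{mn} -> Z_m x Z_n."""
    images = {(x % m, x % n) for x in range(m * n)}
    return len(images) == m * n   # bijective iff every pair is hit

# Z12 is the direct product of Z3 and Z4 (gcd = 1)...
assert is_direct_product(3, 4) == (gcd(3, 4) == 1)
# ...but not of Z6 and Z2 (gcd = 2).
assert is_direct_product(6, 2) == (gcd(6, 2) == 1)
```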
Dihedral groups
Consider the 2D isometry point group Dn. The conjugates of a rotation are that rotation and its inverse. The conjugates of a reflection are the reflections rotated by any multiple of the full rotation unit. For odd n these are all reflections; for even n, half of them.
This group, and more generally the abstract group Dihn, has the normal subgroup Zm for all divisors m of n, including n itself.
Additionally, Dih2n has two normal subgroups isomorphic with Dihn. They both contain the same group elements forming the group Zn, but each has additionally one of the two conjugacy classes of Dih2n \ Z2n.
In fact:
Dihmn / Zm $\cong $ Dihn
Dih2n / Dihn $\cong $ Z2
Dih4n+2 $\cong $ Dih2n+1 × Z2
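The statement above about the conjugates of reflections (all of them for odd n, half of them for even n) can be verified numerically by representing Dn as 2×2 orthogonal matrices (helper names are ours):

```python
import numpy as np

def dihedral(n):
    """All 2n elements of D_n as 2x2 orthogonal matrices."""
    els = []
    for k in range(n):
        a = 2 * np.pi * k / n
        c, s = np.cos(a), np.sin(a)
        els.append(np.array([[c, -s], [s, c]]))   # rotation by a
        els.append(np.array([[c, s], [s, -c]]))   # reflection, mirror at angle a/2
    return els

def reflection_classes(n):
    """Number of conjugacy classes of reflections in D_n."""
    els = dihedral(n)
    refls = [M for M in els if np.linalg.det(M) < 0]
    classes = []
    for M in refls:
        # g M g^-1 = g M g^T since g is orthogonal; round to dedupe floats
        cls = {tuple(np.round((g @ M @ g.T).ravel(), 6)) for g in els}
        if cls not in classes:
            classes.append(cls)
    return len(classes)

assert reflection_classes(5) == 1   # odd n: all reflections are conjugate
assert reflection_classes(6) == 2   # even n: two classes of n/2 reflections
```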
Categorical and analytic invariants in Algebraic geometry 1
(September 14–18, 2015, Steklov Mathematical Institute, Moscow)
The aim of the conference is to bring together Japanese and Russian experts actively working in the area of algebraic and analytic geometry, homological algebra and string theory, in order to gain insight into the structure of complex varieties and certain interrelated invariants thereof, such as derived categories, semi-infinite Hodge structures, topological correlators and quantum motives, which reflect the properties of these varieties relevant to mirror symmetry.
Poster
Getting from the airport
Please use the high-speed railway called Aeroexpress (with moderately priced tickets, about 450 rubles, approx. 6 Euro). The Aeroexpress connects all international airports in Moscow with the subway stations: "Belorusskaja" for the Sheremetyevo airport, "Paveletskaja" for the Domodedovo airport, and "Kievskaja" for the Vnukovo airport. Follow the signs for Aeroexpress or Trains inside the airport. You may have to walk for up to 15 minutes inside the airport building, depending on the terminal of your arrival. The timetables of the Aeroexpress and other details are available on the official site http://www.aeroexpress.ru/en.
The map of Moscow Metro (subway) can be found here. You should go to the orange (Kaluzhsko-Rizhskaya) line, South. If you go from the green line, you change at Novokuznetskaya-Tretyakovskaya, if you go by the brown line, you change at Oktyabrskaya.
The participants of the conference are housed in the building of the Faculty of Mathematics of the Higher School of Economics, Vavilova, 7.
Steklov Mathematical Institute
The Steklov Mathematical Institute is located within a walking distance from the metro station "Akademicheskaya" ("Академическая"), orange line, South. It takes about 10–15 minutes to get from the station to the institute. First one should take Dmitriya Ulyanova street (Дмитрия Ульянова) and then Vavilova street (Вавилова), see the map.
The hotel is on the first floor. One should show a passport to the guard at the entrance of the building. Then one goes to the right till the very end and takes a stairway. The guard will help.
Vavilova, 7
The HSE Faculty of Mathematics is located within a walking distance from the metro station "Leninskiy prospect" ("Ленинский проспект"), orange line, South. First one should take an unnamed street and then Vavilova street (Вавилова), see the map. Directions, pictures and other useful information are available here. One should show documents at the reception and then take the elevator. A person at the reception will help. To get to the Steklov Institute you can walk 20 minutes straight along Vavilova street away from the center of the city, or you can take tram 14 or 39 from the station "Metro Leninskiy prospect (Yuzhnii Vyhod)" ("Метро Ленинский проспект (южный выход)"), which is just in front of the HSE building, to the station "Ulitsa Gubkina" ("Улица Губкина"), 4 stops along Vavilova street.
The conference sessions will take place at the Steklov Institute, conference hall, 9th floor. Turn right after entering the building. Don't use elevators that are in front of the entrance – they don't go to the 9th floor. The elevators are located in the middle of the long corridor on the ground floor.
WiFi connection
In the Steklov Institute there is an open wireless network connection on the first floor (hotel) and on the 9th floor (conference hall); the network name is MIAN-FREE. Besides, there is a network MIAN with the login "mianconf" and the same password. However, some settings on the computer are required, see more details here.
The cafeteria of the Steklov Institute is located on the ground floor of the building (first floor in Russian). After the entrance one should go to the right till the very end.
Here are some restaurants nearby the Steklov Institute and Vavilova, 7. Some of the chains in Moscow: Mu-Mu (Му-Му), Elki-Palki (Елки-Палки), Shesh-Besh (Шеш-Беш).
Public transportation in Moscow
One ride costs 50 RUR; see more details here.
Bus, tram, and trolleybus
The driver sells tickets at a small surcharge. One inserts the ticket, blank side up, into the machine at the entrance of the vehicle. Keep the ticket till the end of the ride – ticket inspectors are possible.
Metro (subway)
Ticket offices are located near the turnstiles at the entrance to the metro.
ATM machines
There is an ATM machine in the Steklov Institute (it accepts Visa, MasterCard, but does not accept Amex). It is located in the middle of the long corridor on the ground floor nearby the elevators. There is also an ATM machine close to the Steklov Institute at Vavilova, 23C1, see the map.
Bondal Alexey Igorevich
Przyjalkowski Victor Vladimirovich
Roslyi Aleksei Andreevich
Saito Kyoji
Local organizers
Grishina Olga Valentinovna
Komarov Stanislav Igorevich
Kuznetsova Vera Vitalievna
Abuaf Roland
Brav Christopher
Efimov Alexander Ivanovich
Galkin Sergey
Hosono Shinobu
Ikeda Akishi
Ishii Akira
Karzhemanov Il'ya Vyacheslavovich
Katzarkov Ludmil
Kawamata Yujiro
Kuznetsov Alexander Gennad'evich
Logvinenko Timothy
Losev Andrei Semenovich
Milanov Todor
Ouchi Genki
Prokhorov Yuri Gennadievich
Shiraishi Yuuki
Takahashi Atsushi
Toda Yukinobu
Ueda Kazushi
Uehara Hokuto
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow
Laboratory of algebraic geometry and its applications, National Research University "Higher School of Economics" (HSE), Moscow
Institute of Fundamental Science, Moscow
Kavli Institute for the Physics and Mathematics of the Universe
Categorical and analytic invariants in Algebraic geometry 1, Moscow, September 14–18, 2015
September 14, 2015 (Mon)
1. Multipointed NC deformations and CY3folds
Y. Kawamata
September 14, 2015 10:30–11:30, Moscow, Steklov Mathematical Institute
2. On categorical joins
A. G. Kuznetsov
3. Non-commutative virtual structure sheaves
Yu. Toda
4. Moduli of relations of quivers
K. Ueda
5. $P$-functors
T. Logvinenko
September 15, 2015 (Tue)
6. Homological invariants of DG algebras and generalized degeneration
A. I. Efimov
7. Lagrangian embeddings of cubic fourfolds containing a plane
G. Ouchi
8. Calabi–Yau structures on dg categories and shifted symplectic structures on moduli
Ch. Brav
September 16, 2015 (Wed)
9. Looking geometry from the moduli spaces of CICYs
Sh. Hosono
10. From Riemann to Feynman geometry in Feynman approach to QFT
A. S. Losev
11. The Calabi–Yau completion for a formal parameter
A. Ikeda
September 17, 2015 (Thu)
12. Vertex algebras and Gromov–Witten invariants
T. Milanov
13. On the Frobenius manifold from the Gromov–Witten theory for an orbifold projective line with $r$ orbifold points
Yu. Shiraishi
14. Categorical Kaehler Metrics
L. Katzarkov
15. Calabi–Yau dg categories to Frobenius manifolds via primitive forms
A. Takahashi
16. Joins and Hadamard products
S. Galkin
September 18, 2015 (Fri)
17. Explicit Dolgachev surfaces and exceptional collections
I. V. Karzhemanov
18. Exceptional sheaves on the Hirzebruch surface $\mathbb{F}_2$
H. Uehara
19. Degenerations of del Pezzo surfaces in terminal Q-Gorenstein families
Yu. G. Prokhorov
20. On the special McKay correspondence
A. Ishii
21. Skew-growth function for dual Artin monoid
21. Skew-growth function for dual Artin monoid
K. Saito
\begin{document}
\title{Optimal Spin Squeezed Steady State Induced by the Dynamics of Non-Hermitian Hamiltonians.} \vskip2cm \author{Ram\'\i rez R. $^{a)}$} \author{Reboiro M. $^{b)}$ \footnote{e-mail: [email protected]}} \affiliation{{\small\it $^{a)}$Department of Mathematics, University of La Plata} {\small \it La Plata,Argentina}} \affiliation{{\small\it $^{b)}$IFLP, CONICET-Department of Physics, University of La Plata} {\small \it La Plata, Argentina}} \date{\today}
\begin{abstract} In this work, we study the time evolution of a coherent spin state under the action of a non-Hermitian Hamiltonian. The Hamiltonian is modeled by a one-axis twisting term plus a Lipkin-type interaction. We show that, when the Lipkin interaction is switched on, depending on the relative values of the coupling constants, the initial state evolves into a steady squeezed state which minimizes the uncertainty relation, i.e. an Intelligent Spin State. We apply this result to the generation of a steady intelligent spin state from an ensemble of nitrogen-vacancy colour centers in diamond coupled to a mechanical resonator. \end{abstract}
\pacs{02.20.-a, 03.67.Bg, 03.67.Mn, 32.80.Uv,42.50.Ex}
\maketitle
key words: non-Hermitian dynamics, optimal spin squeezing, one-axis twisting and Lipkin-type interactions.
\section{Introduction}
The one-axis-twisting (OAT) and the two-axis-twisting (TAT) mechanisms have been introduced by Kitagawa and Ueda \cite{kitagawa} to establish the concept of spin squeezing states and the fundamentals for their generation. From the theoretical point of view, squeezing is closely related to the analysis of Heisenberg Uncertainty Relations. It means that given a physical system, one may be interested in the minimization of the fluctuation of an observable at the expense of the increment of the fluctuation of the conjugate variable.
Since the pioneering work of Kitagawa and Ueda \cite{kitagawa}, many authors have contributed to the understanding \cite{nori,nemoto,sorensen} and to the experimental achievement of spin squeezing in atomic systems \cite{bec-exp-1,bec-exp-2,bec-exp-3}. Recently, the interest in the study of these mechanisms has been renewed \cite{bec-0,bec-1,bec-2,tat-0,tat-2,tat-4,tat-1,tat-3,oat-1,oat-5,lipkin1,oat-4,oat-2,oat-3}. The characterization of spin squeezing is relevant in the analysis of potential candidates to be used in the architecture of quantum computing devices \cite{arxiv}. In a series of works, the generation of steady squeezed states in dissipative spin systems has been reported \cite{oat-3,torre,disi-0,disibis,disi-1,disi-4,disi-5,disi-3,disi-2,disi-6,nosaphys}. As an example, we can mention the analysis of phase coherence and spin squeezing of the collective spin in systems governed by an OAT Hamiltonian with decay \cite{tat-1,oat-1,oat-5,oat-4,oat-2,oat-3,sorensen} or in systems governed by the non-Hermitian Lipkin-Meshkov-Glick (LMG) Hamiltonian \cite{tat-0,disi-0,disibis}. Similar results were found in the study of the behavior of dissipative hybrid systems \cite{disi-3,disi-1,nosaphys,zhu,zhubis,marco,nv-qb-1,qb-nv,photons-coupling,ma-3}. The reported works can be taken as an indication that non-Hermitian dynamics can be used to improve the achievement of squeezing in different spin systems.
Moreover, the search for spin squeezed states with minimum uncertainty relations has given rise to the notion of Intelligent Spin State (ISS) \cite{iss1}. The first reference in the literature to intelligent states is the paper of C. Aragone and co-workers \cite{iss1}. A considerable amount of work has been devoted both to the study of the properties of intelligent spin states \cite{iss2} and to the construction of such states \cite{iss3,iss4,iss5,iss6,nosiss}. In this work, we analyse the generation of a steady ISS in a system of spins interacting through a non-Hermitian OAT Hamiltonian plus an LMG interaction. As a physical application, we propose to search for steady ISS in diamond nanostructures \cite{ma-1,ma-2,ma-20,mab,phonon1,coupling00}.
Among other proposals, nitrogen-vacancy (NV) centers in diamond may be useful in solid-state quantum information processing due to their long coherence time and to the high feasibility of their manipulation \cite{zhu,zhubis,marco,nv-int-2,nv-1,nv-int-1,nv-int-new,hybrid-10,hybrid-11}. The generation of entanglement among NV centers in diamond has been achieved by different mechanisms. The coupling of pairs of NV centers has been obtained directly by dipole-dipole interaction \cite{wrachtrup1,wrachtrup2}. The coherent coupling of an ensemble of NV centers to a superconducting resonator has been reported in \cite{kubo}. Also, the coupling of two separated NV electron spin ensembles in a cavity quantum electrodynamics system has been observed recently \cite{prl118}. Another novel mechanism to generate long-range spin-spin interactions in NV centers in diamond has been proposed in \cite{phonon1}. In this scheme the interaction among NV centers is mediated by their coupling, via strain, to the vibrational mode of a diamond mechanical nanoresonator. The Authors of \cite{phonon1} have proved that these phonon-mediated effective spin-spin interactions can be used to generate squeezed states of the spin ensemble. In the same direction, the Authors of \cite{ma-1,ma-2} have shown that, under the action of an effective phonon-induced spin-spin interaction for the ensemble of NV color centers in diamond, the initial state evolves into a steady state that behaves as a squeezed state. In this work, we model the interaction of an ensemble of NV centers in diamond coupled to a mechanical resonator by an effective OAT plus LMG Hamiltonian for the NV centers. We investigate the possibility of generating a steady ISS from the time evolution of an initially prepared coherent state under the action of this effective Hamiltonian.
The work is organized as follows. The details of the general formalism are presented in Section \ref{formalism}. The results of the calculations are presented and discussed in Section \ref{results}. In Section \ref{numbers}, we present the numerical results that we have obtained from the exact diagonalization of the proposed Hamiltonian. In Sections \ref{otwist} and \ref{su11} we study some analytical results, so as to better understand the mechanism of generation of a steady ISS. In Section \ref{otwist}, the time evolution and the asymptotic behavior of an initial coherent state under the action of a non-Hermitian OAT Hamiltonian are discussed. In Section \ref{su11}, we study the behaviour of the system, when the LMG interaction is taken into account, by performing a boson mapping and keeping terms to dominant order in the number of spins. In doing so, we explore the dependence of the steady state on the different parameters of the model. In Section \ref{application} we propose a scheme to couple an ensemble of NV centers to a mechanical resonator, so that the system can be modeled by an effective phonon-mediated interaction, which consists of an OAT plus an LMG interaction. We discuss the generation of a steady ISS for this effective model. Our conclusions are drawn in Section \ref{conclusions}.
\section{Formalism}\label{formalism}
Let us consider a general collective system consisting of $2 S$ elementary $1/2$-pseudo-spins \cite{zhu,zhubis,marco,qb-nv}. The collective pseudo-spin of the system, ${\bf S}= \left( ~S_x,~S_y,~S_z \right)$, is governed by the cyclic commutation relations $\left[~S_i,~S_j\right]=~{\rm \bf i}~\epsilon_{ijk}~S_k$, where the suffixes $i,j,k$ stand for the components of the spin in three orthogonal directions and $\epsilon_{ijk}$ is the Levi-Civita symbol. We shall assume that the physical properties of the system can be modeled by a Hamiltonian of the form
\begin{eqnarray} H & = & H_{OAT}+H_{LMG}+H_{\gamma}, \nonumber \\ H_{OAT}&=& \chi \, S_z^2, \nonumber \\ H_{LMG} & = & V (S_x^2-S_y^2),\nonumber \\ H_{\gamma} & = & \left( \epsilon-{\rm \bf i} \gamma \right) \left(S_z+ S \right). \label{hota} \end{eqnarray} The term $H_{OAT}$ of the Hamiltonian of Eq. (\ref{hota}) is a one-axis twisting mechanism with coupling constant $\chi$, while the term $H_{LMG}$ stands for a Lipkin-type interaction \cite{lipkin,newlmg1}. In addition, we shall assume that the particles of the system have a finite lifetime, which is given by the line-width $\gamma$. This effect can be modeled by the non-Hermitian term $H_{\gamma}$ \cite{disi-0}.
From the theoretical point of view, different physical systems can be modeled by Hamiltonians closely related to the one proposed in Eq.~(\ref{hota}), e.g. a system of two-component atomic condensates \cite{oat-1,tat-4,bec-1,bec-2}, or an ensemble of NV centers coupled via a mechanical resonator \cite{ma-1,ma-2,ma-20,phonon1,coupling00}.
The Hamiltonian of Eq. (\ref{hota}) can be diagonalized exactly in the basis of states ${\mathcal A}_k=\{ |k \rangle \}$, with
\begin{eqnarray}
|k \rangle=| S,~-S+ k \rangle = \left[ \frac {(2~S-k)!}{(2~S)! k!}\right]^{1/2} S_+^k ~ |S, ~-S \rangle. \label{base} \end{eqnarray} In this basis
\begin{eqnarray}
{\bf S}^2 |k \rangle =S(S+1)~|k \rangle,~~~S_z| k \rangle=(-S+k) | k \rangle. \end{eqnarray}
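For readers who wish to experiment, the matrices of the collective spin operators in the basis ${\mathcal A}_k$ and the Hamiltonian of Eq. (\ref{hota}) can be built explicitly. The sketch below (function names are ours, not from the paper) is a minimal illustration, not the code used for the results reported here; it checks the su(2) algebra and that $H$ is non-Hermitian once $\gamma \neq 0$.

```python
import numpy as np

def spin_matrices(S):
    """Sx, Sy, Sz in the basis |k> = |S, -S + k>, k = 0, ..., 2S."""
    dim = int(2 * S) + 1
    m = -S + np.arange(dim)                    # eigenvalues of Sz
    Sz = np.diag(m.astype(complex))
    # <k+1| S+ |k> = sqrt(S(S+1) - m(m+1))
    amp = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1))
    Sp = np.diag(amp, -1)                      # raising operator
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / (2j)
    return Sx, Sy, Sz

def hamiltonian(S, chi, V, eps, gamma):
    """H = chi Sz^2 + V (Sx^2 - Sy^2) + (eps - i gamma)(Sz + S)."""
    Sx, Sy, Sz = spin_matrices(S)
    I = np.eye(int(2 * S) + 1)
    return (chi * Sz @ Sz + V * (Sx @ Sx - Sy @ Sy)
            + (eps - 1j * gamma) * (Sz + S * I))

S = 1.5
Sx, Sy, Sz = spin_matrices(S)
# su(2) algebra: [Sx, Sy] = i Sz
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)
# Casimir: S^2 = S(S+1) I
assert np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, S * (S + 1) * np.eye(4))
# A finite line-width gamma makes H non-Hermitian
H = hamiltonian(S, chi=1.0, V=0.3, eps=0.0, gamma=0.1)
assert not np.allclose(H, H.conj().T)
```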
\subsection{Time Evolution.}\label{tevol}
In writing the Hamiltonian of Eq. (\ref{hota}), we have followed the projection operator formalism of Feshbach \cite{feschbach} to introduce the non-Hermitian dynamics of the system.
As the Hamiltonian of Eq. (\ref{hota}) is non-hermitian, we have
\begin{eqnarray} H |\widetilde{\phi}_\alpha \rangle= E_\alpha |\widetilde{\phi}_\alpha \rangle, \end{eqnarray} and
\begin{eqnarray} H^\dagger | \overline{\psi}_\alpha \rangle = {\overline{E}}_\alpha | \overline{\psi}_\alpha \rangle.
\end{eqnarray} Both sets of eigenstates, ${ \mathcal A }_H= \{ | {\widetilde \phi}_\alpha \rangle \}$ and ${\mathcal A}_{H^\dagger}= \{ |{\overline \psi}_\alpha \rangle \}$, are non-orthonormal bases of the Hilbert space, ${\mathcal H}$. It is straightforward to prove \cite{faisal,rotter1,rotter2} that \begin{eqnarray} {\overline{E}}_\alpha={\widetilde{E}}^*_\alpha,
\end{eqnarray} and that the set $\{ |\overline{\psi}_\alpha \rangle, |\widetilde{\phi}_\beta \rangle \}$ forms a bi-orthonormal basis of ${\mathcal{H}}$, with
\begin{equation}
\langle {\overline \psi}_\alpha | {\widetilde \phi}_\beta \rangle = \delta_{\alpha \beta}. \label{deltaBiON} \end{equation}
Clearly, the spectrum of the Hamiltonian of Eq. (\ref{hota}) depends on the values of the coupling constants \cite{arxiv1}. If $\epsilon=0$, the Hamiltonian $H$ of Eq. (\ref{hota}) is a quasi-Hermitian operator, and its spectrum consists of complex-conjugate pairs of eigenvalues. It means that $H$ is iso-spectral to $H^\dagger$. Otherwise, the spectrum of $H$ contains complex (non-pair-conjugate) eigenvalues, and the eigenvalues of $H^\dagger$ are the complex conjugates of the eigenvalues of $H$.
In the basis ${\mathcal A}_k$, a general initial state can be written as
\begin{eqnarray}
| I \rangle= \sum_k ~ c_k ~ | k \rangle. \label{ini00} \end{eqnarray} In terms of the basis formed by the eigenvectors of $H$ the initial state is given by \begin{eqnarray}
| I \rangle & = & \sum_\alpha ~ \widetilde{c}_\alpha ~ |\widetilde{\phi}_ \alpha \rangle, \nonumber \\ \widetilde{c}_\alpha & = & \sum_k~ (\Upsilon^{-1})_{\alpha k}~ c_k, \label{ini0}
\end{eqnarray} with $\Upsilon$ the transformation matrix from basis ${\mathcal A}_k$ to basis ${\mathcal{A}}_H $. We shall assume that the initial state is normalized, that is $\langle {I} | {I} \rangle=1$. The initial state of Eq.(\ref{ini0}) evolves in time as
\begin{eqnarray}
| I(t) \rangle & = & {\rm e}^{- i H t} | I \rangle, \nonumber \\
& = & \sum_\alpha ~ {{\widetilde c}_\alpha(t)} ~ |\widetilde{\phi}_\alpha \rangle.
\label{init} \end{eqnarray} If $H$ can be diagonalized, $\widetilde{c}_\alpha(t)$ is given by $\widetilde{c}_\alpha(t)={\rm e}^{- i \widetilde{E}_\alpha t}~\widetilde{c}_\alpha$.
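Numerically, the evolution of Eq. (\ref{init}) can be sketched by diagonalizing $H$ and damping each eigencomponent. The toy $2\times 2$ matrix below is ours, not the spin Hamiltonian of the paper; the loss of norm illustrates the effect of the non-Hermitian term.

```python
import numpy as np

def evolve(H, psi0, t):
    """|I(t)> = exp(-i H t) |I> via the (assumed diagonalizable) spectrum of H."""
    E, W = np.linalg.eig(H)           # right eigenvectors of H (columns of W)
    c = np.linalg.solve(W, psi0)      # coefficients in the eigenbasis
    return W @ (np.exp(-1j * E * t) * c)

# Toy non-Hermitian example: a two-level system with a decaying level.
gamma = 0.2
H = np.array([[0.0, 1.0], [1.0, -2j * gamma]])
psi0 = np.array([1.0 + 0j, 0.0])

psi = evolve(H, psi0, t=50.0)
# The norm is not conserved under non-Hermitian evolution:
assert abs(np.vdot(psi, psi)) < 1.0
```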
In order to work with the basis formed by the eigenstates of $H$, $\mathcal{A}_{H}$, to calculate the expectation value of a given observable, $\widehat{o}$, we have to equip the linear vector space with a scalar product; the reader is kindly referred to \cite{arxiv1} and references therein. That is, we look for a metric operator $\mathcal{S}$, i.e. an operator which is self-adjoint and positive definite. The Hilbert space ${\mathcal H}$ equipped with the scalar product $\langle {\bf f} | {\bf g} \rangle_{\mathcal S}=\langle {\bf f} | {\mathcal S}{\bf g} \rangle$ is the new physical linear space ${\mathcal H}_{\mathcal S}=(\mathcal{H}, \langle .| . \rangle_{\mathcal S})$. In terms of the eigenvectors of the symmetry operator $\mathcal{S}$, the initial state reads
\begin{eqnarray}
|I(t) \rangle & = & \sum_\beta~
{\overset{\approx}{c}}_\beta(t)~|{\overset{\approx}{\phi}}_\beta \rangle,\nonumber \\ {\overset{\approx}{c}}_\beta(t)& = & \sum_{\alpha}~ (\Upsilon'^{-1})_{\beta \alpha}~ \widetilde{c}_\alpha(t), \label{inits} \end{eqnarray} with $\Upsilon'$ being the transformation matrix from the basis ${\mathcal A}_H$ to the basis ${\mathcal A}_S$. We are now in a position to evaluate the mean value of an operator $\widehat{o}$ as a function of time as \begin{eqnarray}
\langle \widehat{o}(t) \rangle & = & {\langle I (t)| \widehat{o}|I(t) \rangle}_{\mathcal{S}} \nonumber \\ & = & \sum_{\alpha \beta}~ {\overset{\approx}{c}}_\alpha(t) {\overset{\approx}{c}}^*_{\beta}(t)~
\langle {\overset{\approx}{\phi}}_\beta \mid \widehat{o} \mid {\overset{\approx}{\phi}}_\alpha \rangle_{\mathcal{S}}. \end{eqnarray}
As reported in \cite{arxiv1}, the form of the metric operator depends on the spectrum of $H$. It can be summarized as follows.
If the spectrum of $H$ contains complex-conjugate pairs of eigenvalues, there exists a self-adjoint symmetry operator such that ${\mathcal S_K} H = H^\dagger {\mathcal S_K}$. It reads
\begin{eqnarray} {\mathcal S_K} & = & \sum_{j \le i}^{N_{max}}
~ \delta( \overline{E}_j-\bar{E}^*_i) ~ \left( \alpha_{j}| \bar{\psi}_{j} \rangle \langle \bar{\psi}_{i}|+
~ \alpha^*_{j}| \bar{\psi}_{i} \rangle \langle \bar{\psi}_{j}| \right ). \nonumber \\ \end{eqnarray} This operator is not positive definite, so we make use of the formalism of Krein spaces. After the diagonalization of ${\mathcal S_K}$, we have ${\mathcal S_K}=R D R^{-1}=R D_+ R^{-1}+ R D_- R^{-1}=S_{K+}+S_{K-}$, with $D_+$ the diagonal matrix with positive elements and $D_-$ the diagonal matrix with negative entries. Finally, the metric operator is given by ${\mathcal S}=S_{K+}-S_{K-}$.
If the non-hermitian Hamiltonian $H$ has real eigenvalues or some eigenvalues are complex (non-pair-conjugate), the metric operator is given by
\begin{eqnarray}
{\mathcal S} = \sum_{j=1}^{N_{max}} ~ | \overline {\psi}_{j} \rangle \langle \overline{\psi}_{j}|. \label{opSAg} \end{eqnarray}
\subsection{Spin-Squeezing Parameter and Intelligent Spin States.}\label{squeezing} Spin-squeezed states are quantum-correlated states with reduced fluctuations in one of the components of the total spin. Following the work of Ueda and Kitagawa \cite{kitagawa}, we shall define a set of orthogonal axes $\{ {\bf n_{x'}}, {\bf n_{y'}}, {\bf n_{z'}} \}$, such that ${\bf n_{z'}}$ is the unit vector pointing along the direction of the total spin $<{\bf S }>$. We shall fix the direction ${\bf n_{x'}}$ by looking for the minimum value of $\Delta^2 S_{x'}$. The Heisenberg Uncertainty Relation reads
\begin{eqnarray}
\Delta^2 S_{y'} ~\Delta^2 S_{x'}~& \ge & ~\frac 14 |<{\bf S}>|^2. \label{hur} \end{eqnarray} We define the squeezing parameters \cite{kitagawa} as
\begin{eqnarray}
\zeta^2_{x'} = \frac {2 \Delta^2 S_{x'} }{|<\bf{S} >|},~
\zeta^2_{y'} = \frac {2 \Delta^2 S_{y'} }{|<\bf{S} >|}. \label{sqx} \end{eqnarray} The state is squeezed in the $x'$-direction if $\zeta^2_{x'}<1$ and $\zeta^2_{y'}>1$. So defined, the parameters of Eq. (\ref{sqx}) are su(2) invariant \cite{luis}.
When the minimum value of the Heisenberg Uncertainty Relation, Eq. (\ref{hur}), is achieved and $\zeta^2_{x'}<1$, the state is called Intelligent Spin State \cite{iss1,iss2,iss3,iss4,iss5,iss6}.
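As a consistency check of these definitions, the squeezing parameters of Eq. (\ref{sqx}) can be evaluated on a coherent spin state, for which no squeezing is expected: $\zeta^2_{x'}=\zeta^2_{y'}=1$, and the bound of Eq. (\ref{hur}) is saturated. The sketch below (helper names are ours) uses the state $|S,-S\rangle$, whose mean spin points along $-z$.

```python
import numpy as np

def spin_matrices(S):
    """Sx, Sy, Sz in the basis |k> = |S, -S + k>."""
    dim = int(2 * S) + 1
    m = -S + np.arange(dim)
    Sz = np.diag(m.astype(complex))
    amp = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1))
    Sp = np.diag(amp, -1)
    return (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j), Sz

def variance(op, psi):
    """Delta^2 op = <op^2> - <op>^2 in the state psi."""
    mean = np.vdot(psi, op @ psi)
    return np.vdot(psi, op @ op @ psi).real - abs(mean) ** 2

S = 10.0
Sx, Sy, Sz = spin_matrices(S)
psi = np.zeros(int(2 * S) + 1, complex)
psi[0] = 1.0                       # |S, -S>: coherent state along -z

mean_spin = abs(np.vdot(psi, Sz @ psi))          # |<S>| = S here
zeta2_x = 2 * variance(Sx, psi) / mean_spin
zeta2_y = 2 * variance(Sy, psi) / mean_spin
assert np.isclose(zeta2_x, 1.0) and np.isclose(zeta2_y, 1.0)
# The Heisenberg bound is saturated: this coherent state is "intelligent"
assert np.isclose(variance(Sx, psi) * variance(Sy, psi), (mean_spin / 2) ** 2)
```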
\section{Results and discussions} \label{results}
Let us first present and discuss general results obtained for the time evolution of a Coherent Spin State (CSS) \cite{hecht} through the action of the Hamiltonian of Eq.(\ref{hota}). The initial state has the form
\begin{eqnarray}
|I (\theta_0,\phi_0)\rangle= {\cal N} \sum_{k=0}^{2 S}~ z(\theta_0,\phi_0)^k \left (\begin{array}{c} 2 S \\ k \end{array} \right)^{1/2} |k\rangle , \label{istate} \end{eqnarray} with $ z(\theta_0,\phi_0) ={\rm e}^{-i \phi_0} \tan(\theta_0/2)$. The angles $(\theta_0,\phi_0)$ define the direction $\vec{n}_{0}=(\sin{\theta_0} \cos{\phi_0},\sin{\theta_0}
\sin{\phi_0},\cos{\theta_0})$, such that $\vec{S} \cdot \vec{n}_0 |I\rangle=-S | I\rangle$ \cite{hecht}.
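The state of Eq. (\ref{istate}) can be constructed numerically; the sketch below (helper names are ours) builds its coefficients and checks that the mean spin has the maximal length $S$, as expected for a coherent state, with $<S_z>=-S\cos\theta_0$.

```python
import numpy as np
from math import comb, tan, cos

def spin_matrices(S):
    """Sx, Sy, Sz in the basis |k> = |S, -S + k>."""
    dim = int(2 * S) + 1
    m = -S + np.arange(dim)
    Sz = np.diag(m.astype(complex))
    amp = np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1))
    Sp = np.diag(amp, -1)
    return (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j), Sz

def css(S, theta, phi):
    """Coefficients c_k = N z^k sqrt(C(2S, k)), z = exp(-i phi) tan(theta/2)."""
    n = int(2 * S)
    k = np.arange(n + 1)
    z = np.exp(-1j * phi) * tan(theta / 2)
    c = z ** k * np.sqrt(np.array([comb(n, int(j)) for j in k], float))
    return c / np.linalg.norm(c)

S = 8.0
theta, phi = np.pi / 4, 0.0
psi = css(S, theta, phi)
Sx, Sy, Sz = spin_matrices(S)
mean = np.array([np.vdot(psi, op @ psi) for op in (Sx, Sy, Sz)]).real

assert np.isclose(np.linalg.norm(mean), S)     # maximal-length mean spin
assert np.isclose(mean[2], -S * cos(theta))    # <Sz> = -S cos(theta)
```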
We shall begin with the analysis, in Section \ref{numbers}, of the numerical results obtained from the exact diagonalization of the Hamiltonian of Eq.(\ref{hota}). We shall complement these results with the analytical ones of Sections \ref{otwist} and \ref{su11}. Finally, we shall investigate the possibility of generating a steady Intelligent Spin State in diamond nanostructures, Section \ref{application}.
\begin{figure}
\caption{Squeezing parameters, $\zeta^2_{x'}$ and $\zeta^2_{y'}$, as a function of time, for the system modeled by the Hamiltonian of Eq.(\ref{hota}), in units of [dB]. The system consists of N=45 spins. The parameters of the model have been fixed to the values $\eta=0.6$, $\gamma=2 \times 10^{-5}$ [GHz]. In Insets (a) and (b) are displayed the results obtained when the initial coherent state is prepared with $\left( \theta_0, \phi_0 \right)=(\pi/4,0)$ and $(\pi/8,0)$, respectively. }
\label{fig:fig1}
\end{figure}
\begin{figure}\label{fig:fig2}
\end{figure}
\begin{figure}
\caption{Contribution of the $k-$th state of the basis ${\mathcal A}_k$ to the state $|I(t) \rangle$ of Eq.(\ref{ini00}), as a function of time. The parameters are those of Figures 1 and 2. In Insets (a) and (b) are displayed the results obtained when the initial coherent state is prepared with $\left( \theta_0, \phi_0 \right)=(\pi/4,0)$ and $(\pi/8,0)$, respectively. }
\label{fig:fig3}
\end{figure}
\begin{figure}
\caption{Dependence, as a function of the relative coupling constant $\eta$, of the Squeezing Parameters of the steady state ($t \gg T_C$, $t=120$ [$\mu$ sec]), in units of [dB]. In Insets (a), (b) and (c) we plot the results obtained for ensembles with $N=5$, $N=45$ and $N=101$ spins, respectively. We have fixed $\gamma=2 \times 10^{-5}$ [GHz]. Solid lines are used to show the results which we have obtained for the squeezing parameters from the exact diagonalization of the Hamiltonian of Eq.(\ref{hota}), $\zeta^2_{x'}$ and $\zeta^2_{y'}$ of Eq.(\ref{sqx}), for an initial coherent state with $(\theta_0,~\phi_0)=(\pi/4,~0)$, Eq. (\ref{istate}). Dashed lines correspond to the results which we have obtained by applying the boson approximation of Section \ref{su11}, $Q(x,p)$ and $Q(p,x)$ of Eq.(\ref{sqbos}). In this case, the initial state of Eq. (\ref{inibos}) consists of $5$ particles in mean value for Inset (a), and of $45$ and $101$ particles in mean value for Insets (b) and (c), respectively. With a dotted line and with a dashed-dotted line we present the results for the product of the squeezing parameters in the exact and in the approximate case, respectively. }
\label{fig:fig4}
\end{figure}
\subsection{Exact Numerical Results.}\label{numbers}
As stated above, we present results for the time evolution of the initial coherent state of Eq.(\ref{istate}) under the action of the Hamiltonian of Eq. (\ref{hota}), obtained by performing the exact diagonalization of the Hamiltonian in the basis of Eq. (\ref{base}). In doing so, we shall describe the behavior of the system in terms of the relative coupling constants
\begin{eqnarray}
\eta & = & \frac{ |2 S V| }{ |\epsilon -2 S \chi| }, \nonumber \\
\Gamma & = & \frac{ \gamma }{ |\epsilon -2 S \chi| }, \label{ceta} \end{eqnarray} and of the parameter \begin{eqnarray} \Xi^2 &= & \eta^2+\Gamma^2. \label{xi} \end{eqnarray}
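For concreteness, the dimensionless couplings above are easily evaluated numerically. The following sketch computes $\eta$, $\Gamma$ and $\Xi^2$ from $S$, $\epsilon$, $\chi$, $V$ and $\gamma$; the parameter values in the example are illustrative placeholders, not the ones used in the figures.

```python
import numpy as np

# Relative coupling constants of Eq. (ceta) and the parameter Xi^2 of Eq. (xi).
def relative_couplings(S, epsilon, chi, V, gamma):
    denom = abs(epsilon - 2.0 * S * chi)
    eta = abs(2.0 * S * V) / denom
    Gamma = gamma / denom
    Xi2 = eta**2 + Gamma**2
    return eta, Gamma, Xi2

# Example: N = 45 spins (S = N/2) with an arbitrary parameter choice.
S = 45 / 2
eta, Gamma, Xi2 = relative_couplings(S, epsilon=1.0, chi=0.01, V=0.02, gamma=2e-5)
```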
In Figures 1, 2 and 3 we present numerical results for a system consisting of $45$ spins. We assume that the system has a characteristic coherence time of the order of $T_C \approx 100$ [$\mu$ sec] \cite{zhu,nv-ct1,nv-ct2,nv-ct3}, which is consistent with a value of the line-width of the states of $\gamma=2 \times 10^{-5}$ [GHz], relative to the coupling constant $\chi$.
In Figure 1, we show the behaviour of the squeezing parameters of Eq. (\ref{sqx}), $\zeta^2_{x'}$ and $\zeta^2_{y'}$, as a function of time. We have fixed the relative coupling constant $\eta$ to the value $\eta=0.6$. In Insets (a) and (b) we have displayed the results obtained when the initial coherent state is prepared with $\left( \theta_0, \phi_0 \right)=(\pi/4,0)$ and with $(\pi/8,0)$, respectively. At intermediate times, the pattern of squeezing depends on the value of $\theta_0$. Initial states with $\theta_0$ smaller than $\pi/4$ favor the appearance of squeezing as a function of time. However, independently of the preparation of the initial state, it evolves to an asymptotic steady state which behaves as an ISS, i.e. $\zeta^2_{x'}=-\zeta^2_{y'}$ [dB]. To understand the nature of this asymptotic steady ISS, we have studied the dependence, as a function of time, of the polar angle of the unit vector along the direction of the mean value of the quasi-spin operator $\langle {\bf S} \rangle$. The corresponding results are shown in Figure 2. The parameters are the same as those of Figure 1. In Insets (a) and (b) we have displayed the results obtained when the initial coherent state is prepared with $\left( \theta_0, \phi_0 \right)=(\pi/4,0)$, and with $(\pi/8,0)$, respectively. The system evolves to a state with $\langle {\bf S} \rangle$ pointing in the $z$-direction, with $\langle S_z \rangle=-S$, independently of the choice of the initial coherent state.
In Figure 3, we show the contribution of the $k$-th state of the basis ${\mathcal A}_k$ to the state $|I(t) \rangle$ of Eq.(\ref{ini00}), as a function of time,
$w(k)=|\langle k| I(t) \rangle|^2$. We have adopted the same parameters as those of Figures 1 and 2. In Insets (a) and (b) are displayed the results obtained when the initial coherent state is prepared with $\left( \theta_0, \phi_0 \right)=(\pi/4,0)$ and $(\pi/8,0)$, respectively. From the analysis of Figure 3, it can be concluded that, as the state evolves in time, the dominant contributions to the state come from the channels with low values of $k$. This fact is in correspondence with the results of Figure 2.
In Figure 4, we show the dependence, as a function of the relative coupling constant $\eta$, of the squeezing parameters of the steady state ($t>> T_c$, $t=120$ [$\mu$ sec]), in units of [dB]. In Insets (a), (b) and (c) we study systems with $N=5$, $N=45$ and $N=101$ spins, respectively. With solid lines we show the results which we have obtained from the exact diagonalization of the Hamiltonian of Eq.(\ref{hota}), for an initial coherent state with $(\theta_0,~\phi_0)=(\pi/4,~0)$, Eq. (\ref{istate}). The dotted-line is used to show the behaviour of $\zeta^2_{x'} \times \zeta^2_{y'}$ in units of [dB]. The results presented support the idea of the existence of two regions with different squeezing properties. The initial coherent state evolves into a steady ISS for $\eta < 1 $, and loses the squeezing properties if $\eta > 1 $. In the next sections, we shall present some analytical results to understand this property, and we shall discuss the rest of the curves of the Figure.
Next, we shall study the persistence of a steady ISS as the number of spins is increased. Figure 5 shows the behavior of the squeezing parameters of the steady state, $\zeta^2_{x'}$ and $\zeta^2_{y'}$, as a function of the number of spins of the system, in units of [dB]. The curves have been computed at the instant $t=120$ [$\mu$ sec], with $t>> T_C$. In Insets (a), (b), (c) and (d) we show the results that we have obtained when the relative coupling constant $\eta$ takes the value $\eta=0.25$, $\eta=0.50$, $\eta=0.75$ and $\eta=0.95$, respectively. We have chosen an initial coherent state with $(\theta_0,~\phi_0)=(\pi/4,~0)$. The rest of the parameters are those of Figure 1. We have plotted with circles the value of the product $\zeta^2_{x'}~\zeta^2_{y'}$ in units of [dB]. The line at constant value $0$ is just plotted as a guide. As can be observed from the Figure, except for systems with a small number of spins at large values of $\eta$, the steady state behaves as an ISS. Also, it can be observed that the amount of squeezing achieved in the steady state increases as the relative coupling constant approaches $\eta \rightarrow 1$.
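The exact numerical procedure can be sketched in a few lines. The fragment below is a minimal illustration, not the production code behind the figures: it builds the collective spin matrices for a small system ($S=5$), evolves an initial CSS with the non-hermitian Hamiltonian of Eq.(\ref{hota}) by spectral decomposition, and checks that the long-time state points along $-z$. The parameter values are arbitrary choices that fix $\eta=0.6$, and, for simplicity, the state is normalized with the ordinary norm rather than with the metric-operator prescription used in the paper.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(S):
    """Collective spin matrices in the basis |S,m>, m = -S, ..., S."""
    m = np.arange(-S, S + 1)
    Sz = np.diag(m).astype(complex)
    # <m+1| S+ |m> = sqrt(S(S+1) - m(m+1))
    Sp = np.diag(np.sqrt(S * (S + 1) - m[:-1] * (m[:-1] + 1)), -1).astype(complex)
    Sm = Sp.conj().T
    return (Sp + Sm) / 2, (Sp - Sm) / 2j, Sz

def coherent_state(S, theta0, phi0):
    """CSS with <S> = -S n(theta0, phi0), obtained by rotating |S,-S>."""
    Sx, Sy, _ = spin_ops(S)
    g = np.zeros(int(round(2 * S)) + 1, complex)
    g[0] = 1.0                                   # |S,-S>
    return expm(-1j * theta0 * (np.cos(phi0) * Sy - np.sin(phi0) * Sx)) @ g

# Illustrative parameters (not those of the paper): S = 5, eta = 0.6.
S, chi, eps, gamma = 5.0, 1.0, 0.0, 0.1
V = 0.6 * abs(eps - 2 * S * chi) / (2 * S)       # fixes eta = 0.6
Sx, Sy, Sz = spin_ops(S)
one = np.eye(int(round(2 * S)) + 1)
H = chi * Sz @ Sz + (eps - 1j * gamma) * (Sz + S * one) + V * (Sx @ Sx - Sy @ Sy)

psi0 = coherent_state(S, np.pi / 4, 0.0)
# Non-unitary evolution via the eigendecomposition of the non-hermitian H.
w, P = np.linalg.eig(H)
psi = P @ (np.exp(-1j * w * 200.0) * np.linalg.solve(P, psi0))
psi /= np.linalg.norm(psi)   # ordinary norm (simplification; the paper uses a metric operator)
Sz_mean = (psi.conj() @ Sz @ psi).real
```

At $t=0$ the state has $\langle S_z\rangle=-S\cos\theta_0$; at long times $\langle S_z\rangle$ settles close to $-S$, in line with the steady state described above.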
\begin{figure}
\caption{Behavior of the Squeezing Parameters of the steady state, $\zeta^2_{x'}$ and $\zeta^2_{y'}$, as a function of the number of spins, $N$ ($t>> T_c$, $t=120$ [$\mu$ sec]), in units of [dB]. In Insets (a), (b), (c) and (d) we show the results obtained when the relative coupling constant $\eta$ is fixed to the value $\eta=0.25$, $\eta=0.50$, $\eta=0.75$ and $\eta=0.95$, respectively. The rest of the parameters are those of Figure 1. We have plotted with circles the value of the product $\zeta^2_{x'}~\zeta^2_{y'}$ in [dB]. The line at constant value $0$, is just to guide the eye.}
\label{fig:fig5}
\end{figure}
In what follows we shall present some analytical results in order to understand the behaviour of the steady state of the system as an ISS.
\subsection{Non-hermitian OAT model.}\label{otwist} Let us first consider the time evolution of the initial state proposed in Eq.(\ref{istate}), under the Hamiltonian \begin{eqnarray} H_0 & = & \chi \, S_z^2 + \left( \epsilon-{\rm \bf i} \gamma \right) \left(S_z+ S \right), \label{hota0} \end{eqnarray} that is, in the absence of the LMG interaction. The mean values of the spin components can be calculated straightforwardly, and they read
\begin{eqnarray}
\langle S_z \rangle & = & - S \frac{ 1- |\widetilde{z}|^2 } { 1+ |\widetilde{z}|^2}, \nonumber \\
\langle S_z^2 \rangle & = & S^2- \frac{ 2 S (2 S-1) |\widetilde{z}|^2 }{ (1+ |\widetilde{z}|^2)^2}, \nonumber \\ \langle \{ S_+,S_- \} \rangle & = & 2 S +
\frac{4 S(2 S-1) |\widetilde{z}|^2 }{ \left(1+ |\widetilde{z}|^2\right)^2} , \nonumber\\ \langle S_+ \rangle & = & 2 S \widetilde{z}^* e^{i \epsilon t }
\frac{ \left(e^{-i t \chi } + |\widetilde{z}|^2 e^{i t \chi }\right)^{2 S-1}}
{\left(1+ |\widetilde{z}|^2 \right)^{2 S}}, \nonumber \\ \langle S_+^2\rangle & = & 2 S (2 S-1) \widetilde{z}^{* 2} e^{i 2 \epsilon t } \nonumber \\ & & ~~~~~~~~~~~~~
\frac {\left( e^{-2 i \chi t} +|\widetilde{z}|^2 e^{2 i \chi t }\right)^{2 (S-1)}}
{\left( 1 +|\widetilde{z}|^2 \right)^{2 S}}, \nonumber \\ \end{eqnarray} where $\widetilde{z}=z(\theta_0,\phi_0) e^{-\gamma t}$.
Clearly, $\widetilde{z}\rightarrow 0$ when $t\rightarrow\infty$. In this limit we find
\begin{eqnarray} \langle S_z\rangle & \rightarrow & - S , \nonumber \\ \langle S_x\rangle = Re(\langle S_+\rangle)& \rightarrow & 0 , \nonumber\\ \langle S_x^2\rangle = \frac 12 Re(\langle S_+^2\rangle)+ \frac 14 \langle \{ S_+,S_- \} \rangle & \rightarrow & \frac S 2,\nonumber \\ \langle S_y\rangle= Im(\langle S_+\rangle)& \rightarrow & 0, \nonumber \\ \langle S_y^2\rangle= -\frac 12 Re(\langle S_+^2\rangle)+ \frac 14 \langle \{ S_+,S_- \} \rangle & \rightarrow & \frac S 2. \nonumber \\ \end{eqnarray} Consequently, $\langle{\bf S}\rangle\rightarrow - S \breve{e_z}$, with
\begin{eqnarray} \Delta^2 S_x \rightarrow \frac S 2, ~~~~~ \Delta^2 S_y \rightarrow \frac S 2.
\end{eqnarray} This result indicates that, as reported in the previous Section, the initial coherent spin state, $|I (\theta_0,\phi_0) \rangle$, evolves asymptotically to the state $|I(\pi,0)\rangle=|S,-S \rangle$, independently of the orientation of the state at $t=0$.
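A quick numerical sanity check of the closed-form mean values: at $t=0$ the expression for $\langle S_z\rangle$ must reduce to the coherent-state value $-S\cos\theta_0$, and for $t\to\infty$ (where $\widetilde{z}\to 0$) it must tend to $-S$. The sketch below assumes the usual CSS label $z(\theta_0,\phi_0)=\tan(\theta_0/2)\,e^{-{\mathbf i}\phi_0}$, which is an assumed convention.

```python
import numpy as np

# Closed-form <Sz> of the non-hermitian OAT model, with the CSS label
# z(theta0, phi0) = tan(theta0/2) exp(-i phi0) (assumed convention).
S, theta0, gamma = 22.5, np.pi / 4, 2e-5      # N = 45 spins

def z_tilde(t, theta0=theta0, phi0=0.0):
    return np.tan(theta0 / 2) * np.exp(-1j * phi0) * np.exp(-gamma * t)

def mean_Sz(z):
    return -S * (1 - abs(z)**2) / (1 + abs(z)**2)

Sz_initial = mean_Sz(z_tilde(0.0))     # should equal -S cos(theta0)
Sz_steady = mean_Sz(z_tilde(1e9))      # z~ -> 0  =>  <Sz> -> -S
```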
\subsection{Non-hermitian LMG model.}\label{su11}
The purpose of this section is to provide an analytical Hamiltonian which accounts for the behaviour of the system in the stationary regime, when the Lipkin interaction is included.
We shall perform a Holstein-Primakoff boson mapping \cite{marshalleck,ring,nosbos} of the Hamiltonian of Eq.(\ref{hota}). The generators of the $su(2)$ algebra, in terms of the boson creation operator, $b^\dagger$, and of the boson annihilation operator, $b$, read
\begin{eqnarray} S_+ & = & b^\dagger ~ \sqrt{2 S - b^\dagger b} ~ \approx ~\sqrt{2 S} ~ b^\dagger , \nonumber \\ S_- & = & \sqrt{2 S - b^\dagger b} ~b ~ \approx ~\sqrt{2 S} ~ b, \nonumber \\ S_z & = & b^\dagger b- S. \label{sps} \end{eqnarray} The nonlinearity introduced by the square-root term in Eq. (\ref{sps}) ensures that no two excitations can take place at the same spin. If we consider delocalized spin waves involving a large number of spins compared to the number of excitations, the probability that a given spin is excited is inversely proportional to the number of spins N. Therefore, as long as only a few delocalized spin excitations are considered, it is reasonable to neglect the square-root term in Eq. (\ref{sps})\cite{molmer}.
The assumption we have made in Eq. (\ref{sps}) is valid after the system has reached the stationary regime, and is consistent with the results we have presented in Figure 3.
In this approximation, the Hamiltonian of Eq.(\ref{hota}) can be written as
\begin{eqnarray} H_{B} & = & h_0+ 2 \alpha K_{0}+ 2 S V (K_{+}+ K_{-}), \nonumber \\ \label{hbos} \end{eqnarray} with \begin{eqnarray} K_+ & = & \frac 1 2 {b^\dagger}^2, ~ K_-=K_+^\dagger \nonumber \\
K_0 & = & \frac 1 2 b^\dagger b + \frac 1 4, \label{opsu11} \end{eqnarray} and
\begin{eqnarray} h_0 & =& \chi S^2-\frac 12 \alpha, \nonumber \\ \alpha & = & \left( \epsilon- 2 S \chi-{\rm \bf i} \gamma \right). \end{eqnarray}
The set of operators $\{K_+,~K_-,~K_0 \}$ spans the algebra of $su(1, 1)$, that is
\begin{eqnarray} \left[ K_{-}, K_{+}\right] & = & 2 K_{0}, \nonumber \\ \left[ K_{0}, K_{\pm}\right] & = & \pm K_{\pm}. \label{algsu11} \end{eqnarray}
The time evolution operator of the system, $U(t)=e^{- {\mathbf{i}} t H_{B}}$, can be easily computed if the exponential is written in a normally ordered form \cite{gilmore,romina}. Making use of the faithful matrix representation of the $su(1,1)$ algebra, it reads (see Appendix)
\begin{eqnarray} U(t) & = & e^{- {\mathbf{i}} t H_{B}} \nonumber \\
& = & e^{- {\mathbf{i}} t h_0}e^{b_{+}K_{+}}e^{\ln(b_{0})K_{0}}e^{b_{+}K_{-}}, \label{tevsu11} \end{eqnarray}
with
\begin{eqnarray} b_{0}& = & \left( \cos( t \beta) \left( 1 + \frac{ \alpha}{ \beta} {\rm tanh}( {\mathbf{i}} t \beta) \right) \right)^{-2} \nonumber \\ b_{+}& = & {\rm e}^{ {\mathbf{i}} (\phi_V+\pi)}\frac{ 2 S |V|}{ \beta }
\frac{{\rm tanh}\left ( {\mathbf{i}} t \beta \right)}{1+\frac{ \alpha}{\beta} {\rm tanh}\left( {\mathbf{i}} t \beta \right)} \nonumber \\ \label{defi} \end{eqnarray} where $\phi_V=0$ if $V>0$ and $\phi_V=\pi$ if $V<0$. We have defined the complex parameter $\beta=\sqrt{\alpha^{2}-(2 S V)^{2}}$.
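The coefficients $b_0(t)$ and $b_+(t)$ are easily evaluated numerically. The sketch below uses illustrative parameter values chosen so that $\eta=0.6$ (they are not the ones of the figures), and shows that $|b_+|$ saturates below unity at long times.

```python
import numpy as np

# b0(t) and b+(t) of Eq. (defi); parameter values are illustrative (eta = 0.6).
def su11_params(t, S, eps, chi, V, gamma):
    alpha = eps - 2 * S * chi - 1j * gamma
    beta = np.sqrt(alpha**2 - (2 * S * V)**2 + 0j)
    th = np.tanh(1j * t * beta)
    b0 = (np.cos(t * beta) * (1 + (alpha / beta) * th))**(-2.0)
    # prefactor e^{i(phi_V + pi)} |V| equals -V for either sign of V
    bp = -(2 * S * V / beta) * th / (1 + (alpha / beta) * th)
    return b0, bp

S = 22.5                                  # N = 45 spins
b0, bp = su11_params(100.0, S, eps=0.0, chi=1.0, V=-0.6, gamma=0.1)
rho = abs(bp)                             # stays below 1 at long times
```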
As $|b_+|<1$ (see Appendix), we can introduce the squeezing parameter $\zeta= r {\rm e}^{ {\mathbf{i}} (\phi+\phi_V+\pi)}$, such that
\begin{eqnarray} b_+=(\zeta/|\zeta|)\tanh |\zeta|. \end{eqnarray}
In what follows, we shall study the evolution of the state \begin{eqnarray}
|\psi \rangle= {\cal N} \sum_{n=0}^{2 S}\frac{(\sqrt{2 S})^{n}}{\sqrt{n!}}|n \rangle=D(\sqrt{2 S})|0 \rangle,
\label{inibos} \end{eqnarray} where $D(\eta)={\rm e}^{(\eta b^{\dagger}- \overline{\eta}b)}$ is the displacement operator. The proposed initial state of Eq.(\ref{inibos}) is, to leading order in the number of spins, the limit of the coherent state of Eq.(\ref{istate}). This state evolves in time as (see Appendix)
\begin{eqnarray} U|\psi \rangle & = &\mathcal{N} {\rm e}^{- {\mathbf{i}} t h_0}{R_{0}}^{(1/4)} {\rm e}^{S (|R_{0}|+R_{-}-1)}S_q(\zeta)D(\sqrt{2 S R_{0}})|0\rangle, \nonumber \\
\mathcal{N}^{-2}&=& \langle \psi| U^{\dagger} U | \psi \rangle = {\rm e}^{\gamma t } {\rm e}^{2 S (|R_{0}|+ {\rm Re}(R_{-})-1)}\sqrt{|R_{0}|}. \label{fit} \end{eqnarray} The parameters $R_0$ and $R_-$ are given by
\begin{eqnarray} R_{0} & = & \frac{b_{0}}{1-|b_+|^2}, \nonumber \\ R_{-} & = & \overline{b_+}~R_0 -b_+, \end{eqnarray} and $S_{q}(\zeta)$ stands for the squeezing operator, ${S_{q}(\zeta)= {\rm e}^{\overline{\zeta} K_- - \zeta K_{+}}}$.
We are now in a position to compute the uncertainty relations of the operators
\begin{eqnarray} x & = & \frac {1}{\sqrt{2}} \left( b^{\dagger}+b \right), \nonumber \\ p & = & {\mathbf{i}} \frac {1}{\sqrt{2}} \left( b^{\dagger}-b \right), \nonumber \\ \label{xp} \end{eqnarray} on the state of Eq.(\ref{fit}). After some cumbersome algebra (see Appendix) it can be proved that
\begin{eqnarray} \Delta^{2}x & = & \frac{1}{2} \left(- \cos ( \phi + \phi_V ) \frac{2 \rho}{1-\rho^2}+\frac{1+\rho^2}{1-\rho^2}\right), \nonumber \\ \Delta^{2}p & = & \frac{1}{2} \left(+ \cos ( \phi + \phi_V ) \frac{2 \rho}{1-\rho^2}+ \frac{1+\rho^2}{1-\rho^2} \right), \label{urs}
\end{eqnarray} with $\rho=|b_+|=\tanh |\zeta|$. Consequently, we can define the associated squeezing parameters $Q(x,p)$ and $Q(p,x)$ as \begin{eqnarray} Q(x,p) = 2 \Delta^2 x, ~~~ Q(p,x) = 2 \Delta^2 p. \label{sqbos} \end{eqnarray} The system is squeezed in $x$ ($p$) when $Q(x,p)<1$ ($Q(p,x)<1$).
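The squeezing parameters of Eq.(\ref{sqbos}) follow directly from $\rho$ and the total phase $\phi+\phi_V$ of Eq.(\ref{urs}), as printed. The minimal evaluation below (with an arbitrary value $\rho=1/3$) shows that for phase $0$ the pair is a minimum-uncertainty pair (one variance squeezed, the other anti-squeezed; which of $x$, $p$ is squeezed depends on $\phi_V$), while for phase $\pi/2$ neither quadrature is squeezed.

```python
import numpy as np

# Q(x,p) and Q(p,x) of Eq. (sqbos), as functions of rho = tanh|zeta| and the
# total phase phi + phi_V of Eq. (urs), as printed. rho = 1/3 is arbitrary.
def Q_pair(rho, phase):
    common = (1 + rho**2) / (1 - rho**2)
    cross = 2 * rho * np.cos(phase) / (1 - rho**2)
    return common - cross, common + cross        # (Q(x,p), Q(p,x))

Qx_I, Qp_I = Q_pair(1 / 3, 0.0)          # Region I: phase ~ 0
Qx_II, Qp_II = Q_pair(1 / 3, np.pi / 2)  # deep Region II: phase ~ pi/2
```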
Our objective is to study the behaviour of the system after a long interval of time ($t \rightarrow \infty$).
Due to decoherence, it is straightforward to show that
\begin{eqnarray} \lim_{ t \rightarrow \infty} b_+ = \frac{ {\rm e}^{ {\mathbf{i}} (\phi_V)} \eta} {\sqrt{ (1- {\mathbf{i}} \sigma \Gamma)^2-\eta^2}-(\sigma + {\mathbf{i}} \Gamma )}={\rm e}^{ {\mathbf{i}} (\phi_V+\phi)} \rho_L. \nonumber \\ \label{bplusasymp} \end{eqnarray} In the previous expression, $\sigma$ stands for the sign function of ${(\epsilon -2 S \chi)}$, and
\begin{eqnarray} \phi & = & -\arctan \left (\frac{\beta_{-}-\Gamma}{\beta_{+}-\sigma}\right)\nonumber \\ \rho_L & = & \frac{\eta}{\sqrt{(\beta_{+}-\sigma)^{2}+(\beta_{-}-\Gamma)^{2}}}.\nonumber \\ \label{phi} \end{eqnarray} where
\begin{eqnarray} \beta_{\pm}^2 & = & \frac 1 2 \left( \sqrt{(1-\eta^{2}-\Gamma^{2})^{2}+ 4 \Gamma^{2}} \pm (1-\eta^{2}-\Gamma^{2}) \right). \nonumber \\ \end{eqnarray} To leading order in $\Gamma$, the phase factor $\phi$ can be written as
\begin{eqnarray} \phi \approx \left \{ \begin{array}{ll} \arctan \left ( \frac {\Gamma}{\sqrt{1-\eta^2}-\sigma} \right ),& 0<\Xi< 1\nonumber \\ \arctan \left (\sigma (\sqrt{\eta^2-1}-\Gamma)\right),& \Xi>1. \nonumber \\ \end{array} \right. \label{faseaprox} \end{eqnarray} We can identify two regions, in the space of coupling constants $\eta$ and $\Gamma$, with different squeezing properties for the steady state of the system. Region I corresponds to values of $\eta$ and $\Gamma$ that satisfy the condition $\Xi^2<1 $, and Region II for values of $\eta$ and $\Gamma$ that satisfy $\Xi^2>1$.
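The asymptotic quantities can be cross-checked numerically: the sketch below evaluates the direct limit of Eq.(\ref{bplusasymp}) and the $(\rho_L,\phi)$ parameterization of Eq.(\ref{phi}) for a Region-I point ($\eta=0.6$, small $\Gamma$, $\sigma=-1$; we set $\phi_V=0$), and verifies that they agree.

```python
import numpy as np

# Asymptotic b+ computed two ways: the direct limit of Eq. (bplusasymp) and
# the (rho_L, phi) form of Eq. (phi). phi_V is set to 0; sigma is the sign
# of (epsilon - 2 S chi).
def b_plus_limit(eta, Gamma, sigma):
    return eta / (np.sqrt((1 - 1j * sigma * Gamma)**2 - eta**2 + 0j)
                  - (sigma + 1j * Gamma))

def rho_phi(eta, Gamma, sigma):
    w = 1 - eta**2 - Gamma**2
    mod = np.sqrt(w**2 + 4 * Gamma**2)
    beta_p = np.sqrt(0.5 * (mod + w))    # beta_+
    beta_m = np.sqrt(0.5 * (mod - w))    # beta_-
    rho_L = eta / np.sqrt((beta_p - sigma)**2 + (beta_m - Gamma)**2)
    phi = -np.arctan((beta_m - Gamma) / (beta_p - sigma))
    return rho_L, phi

eta, Gamma, sigma = 0.6, 2.2e-3, -1      # a Region-I point (Xi^2 < 1)
rho_L, phi = rho_phi(eta, Gamma, sigma)
bp_direct = b_plus_limit(eta, Gamma, sigma)
```

For this point $\rho_L \approx 1/3$ and $\phi$ is nearly null, which is the Region-I behaviour described below.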
In Region I, for small values of $\Gamma$, the phase $\phi$ of $b_+$ becomes approximately null, $\phi \ll 1$, so that $ b_+ \cong {\rm e}^{ {\mathbf{i}} (\phi_V)} \rho_{L} $. Then, the uncertainty relations of the operators $x$ and $p$, of Eq.(\ref{urs}), for $\phi_V=0$, take the form
\begin{eqnarray} Q(x,p) & \rightarrow & \frac {1+\rho_L}{1-\rho_L}={\rm e}^{ 2|\zeta|}, \nonumber \\ Q(p,x) & \rightarrow & \frac {1-\rho_L}{1+\rho_L}={\rm e}^{-2|\zeta|} , \label{urs0} \end{eqnarray} and
\begin{eqnarray} Q(x,p) Q(p,x) & \rightarrow & 1. \end{eqnarray} Thus, the steady state of the system behaves as an ISS. Similar expressions hold for $\phi_V=\pi$, but with the exchange of the roles of $x$ and $p$.
In Region II, the behaviour of the system is completely different. The phase $\phi$ of $b_+$ is no longer null, $\phi \ne 0$; moreover, for sufficiently large values of $\eta$, $\phi \rightarrow \pm \pi/2$, depending on $\sigma$. In this case, the uncertainty relations of the operators $x$ and $p$, of Eq.(\ref{urs}), take the form
\begin{eqnarray} Q(x,p) & \rightarrow & \frac {1+\rho_L^{2}}{1-\rho_L^{2}}, \nonumber \\ Q(p,x) & \rightarrow & \frac {1+\rho_L^{2}}{1-\rho_L^{2}}. \label{urs2} \end{eqnarray} Thus, in Region II, the asymptotic steady state is not a squeezed state.
Let us compare these analytical results with the ones discussed in Section \ref{numbers}.
In view of Eq.(\ref{sps}) and of Eq. (\ref{xp}), to leading order in the number of spins
\begin{eqnarray} \frac {S_x} {\sqrt{S}} & = & \frac {1}{2\sqrt{S}} (S_+ + S_-) \rightarrow x, \nonumber \\ \frac {S_y} {\sqrt{S}} & = &-\frac { {\mathbf{i}}}{2\sqrt{S}} (S_+ - S_-) \rightarrow - p. \nonumber \\ \end{eqnarray} So that when, under the action of the Hamiltonian of Eq. (1), the initial state of Eq.(\ref{istate}) evolves to a steady state which points in the $z$-direction, the squeezing parameters $\{ \zeta^2_{x'}, \zeta^2_{y'} \}$ should give the same information as $\{ Q(x,p), Q(p,x)\}$.
\begin{figure}
\caption{ Behaviour of the phase $\phi$ of Eq. (\ref{phi}) as a function of the scaled coupling constant $\eta$. The values of the different parameters are those of Figure 5.}
\label{fig:fig6}
\end{figure}
This can be seen from Figure 4, where we present, by using dashed-lines, the results obtained for $Q(x,p)$ and $Q(p,x)$ of Eq.(\ref{sqbos}), for the coherent state of Eq. (\ref{inibos}) with $N$ particles in mean value. With dashed-dotted-line we present the results for the product of the squeezing parameters in units of [dB]. Clearly, for systems with more than 9 spins, the initial coherent state evolves into a steady ISS for $\Xi^2< 1 $, and loses the squeezing properties if $\Xi^2 > 1$.
We complete our analytical results by analysing the behavior of the phase $\phi$ of Eq. (\ref{phi}). The results are presented in Figure 6, for the same parameters as those of Figure 5. The numerical results are in agreement with the analytical estimations of Subsection \ref{su11}. That is, in Region I the phase $\phi$ is null, $\phi=0$, and consequently the steady state is an ISS, while in Region II $\phi \rightarrow -\pi/2$ for increasing values of the coupling constant $\eta$, and the steady state is no longer an ISS.
From the presented results it can be inferred that dissipative mechanisms can be used to improve the achievement of squeezing in different spin systems \cite{disi-0,disi-1}.
\subsection{Application to phonon-induced spin-spin interactions in diamond nanostructures.}\label{application}
Let us consider the spin-spin interaction, among NV centers in diamond, mediated through the coupling of the spins to a magnetic nano-resonator \cite{phonon1,ma-1,ma-2}.
An NV center has a ground state with spin $1$ and a zero-field splitting $D = 2.88$ GHz between the $|1,0\rangle$ and
$|1,\pm 1\rangle$ states \cite{nv-1}. If an external magnetic field ${\bf B}_0$, along the crystalline axis of the NV center, is applied, an additional Zeeman splitting between the $|1, \pm 1\rangle$ sub-levels occurs. Then, it is possible to isolate the subsystem spanned by $|1,0 \rangle$ and $|1,-1 \rangle$ \cite{marco,disi-1,ma-1,ma-2}.
The mechanical resonator is described by the Hamiltonian $H_r= ~ \omega_r~b^\dagger b$, with $\omega_r$ the frequency of the fundamental vibrational mode of the resonator, and $b$ ($b^\dagger$) the corresponding annihilation (creation) operator. We shall choose $\omega_r$ almost in resonance with the splitting of the states $|1,0\rangle$ and $|1,-1 \rangle$, so that the NV center can be modeled by a two-level system. The motion of the magnetic mechanical resonator produces a magnetic gradient field on the NV centers, so that within this two-level subspace the Hamiltonian of the system can be modeled as
\begin{eqnarray} H_{NV}= ~ \omega_r~b^\dagger b +~ \delta ~\sigma_z &+&~ g_1 ~ (\sigma_+ b^\dagger+ b \sigma_-)+ \nonumber \\
& & ~ g_2 ~ (\sigma_+ b+ b^\dagger \sigma_-), \label{hnv} \end{eqnarray}
where $\delta=D-\gamma_e B_0$ is the energy gap between the ground state $|1,0 \rangle$ and the state $|1,-1 \rangle$, $\gamma_e$ being the electron gyromagnetic ratio. We have assumed an asymmetric interaction between the NV centers and the single-mode mechanical resonator, which is modeled by the effective coupling constants $g_1$ and $g_2$. The operators $\{\sigma_x,\sigma_y, \sigma_z\}$ are collective spin operators for the ensemble of NV centers in diamond, $\sigma_\alpha=\sum_i~\sigma_{\alpha~i}$, which satisfy the usual angular momentum commutation relations. We shall consider that the intensity of the external magnetic field is fixed in order to have a detuning $\delta \approx 0$.
A unitary transformation of the form $$U={\rm e}^{-(g_1/\omega_r) (\sigma_+ b^\dagger-b \sigma_-)~-(g_2/\omega_r) (\sigma_+ b-b^\dagger \sigma_-)}$$ can be applied to the Hamiltonian of Eq. (\ref{hnv}), $H_{eff}=U H U^{-1}$. To leading order in $g_1/\omega_r$ and $g_2/\omega_r$, together with the assumption that $\delta \approx 0$, the effective Hamiltonian takes the form
\begin{eqnarray} H_{eff} \approx ~H_0 + \omega_r~b^\dagger b +2\frac{ g_1^2-g_2^2}{\omega_r}(1+2 b^\dagger b)\sigma_z + \nonumber \\
2\frac{ g_1^2+g_2^2}{\omega_r}~\sigma_z^2-4\frac{ g_1 g_2}{\omega_r}~ \left({\sigma_x}^2-{\sigma_y}^2 \right), \nonumber \label{heff} \end{eqnarray} with $H_0=-2\frac{g_1^2+g_2^2}{\omega_r} S (S+1)$. We shall account for dissipation by introducing the mean-life of the NV centers through the additional term
\begin{eqnarray} H_\gamma= -{\rm \bf i} \gamma \left(\sigma_z+ S \right). \end{eqnarray} The characteristic coherence time of this system is of the order of $T_C=100$ [$\mu$ sec] \cite{zhu,nv-ct1,nv-ct2,nv-ct3}, which is consistent with a value of the line-width of the states of $\gamma=2 \times 10^{-5}$ [GHz]. Thus, the Hamiltonian of the NV ensemble reads
\begin{eqnarray} H_{NVE-ph}=H_{eff}+H_\gamma. \end{eqnarray}
In order to generate a steady ISS, we initialize the ensemble of NV centers in a coherent state (CSS) $| CSS \rangle $ along the direction
$\breve{n}_0=(\sin (\theta_0)\cos(\phi_0),\sin (\theta_0)\sin(\phi_0),\cos (\theta_0))$ of the collective Bloch sphere. As is well known, the CSS satisfies the condition ${\bf \sigma}\cdot\breve{n}_0 | CSS \rangle= -S | CSS \rangle$, and it has equal transverse variances, $S/2$. This state can be prepared by using optical pumping and microwave spin manipulation applied to the ensemble \cite{phonon1,natphys4}.
The Hamiltonian of Eq. (\ref{heff}) includes a term which couples the phonon number $\hat{n}=b^\dagger b$ to $\sigma_z$. We shall consider an initial phonon state with $\langle \hat{n} \rangle=n_{ph}$, which we shall model as a coherent state of the form
$$ |n_{ph} \rangle= {\rm e}^{-|z_{ph}|^2/2} \sum_{n=0}^{\infty} \frac{z_{ph}^n}{\sqrt{n!}} |n \rangle,$$
where $|n \rangle$ represents the state with $n$ phonons, and $|z_{ph}|^2=n_{ph}$. An initial state of the form
$|I\rangle= |n_{ph} \rangle |CSS \rangle $, will evolve as
\begin{eqnarray}
|I(t)\rangle = {\rm e}^{-|z_{ph}|^2/2} \sum_n ~ \frac{z_{ph}^n}{\sqrt{n!}}|n\rangle ~|I_{NVE} (t,n) \rangle, \end{eqnarray} with \begin{eqnarray}
|I_{NVE} (t,n) \rangle = {\rm e}^{-{\bf i} H_{NVE}(n)t} |CSS \rangle, \end{eqnarray} and \begin{eqnarray} H_{NVE} (n) & = & \epsilon \sigma_z + \nonumber \\
& & \chi~\sigma_z^2 + V~ \left({\sigma_x}^2-{\sigma_y}^2 \right)+H_{\gamma}, \nonumber\\
\epsilon & = & 2\frac{ g_1^2-g_2^2}{\omega_r}(1+2 n ),\nonumber \\
\chi& = & 2\frac{ g_1^2+g_2^2}{\omega_r}, \nonumber \\
V & = & -4\frac{ g_1 g_2}{\omega_r}. \label{heffnv} \end{eqnarray} Following the formalism presented in Section \ref{tevol}, the mean value of a physical operator associated to the NV centers, $\hat{o}_{NV}$, will be computed as
\begin{eqnarray} \langle \hat{o}_{NV} (t)\rangle
& = &{\rm e}^{-|z_{ph}|^2} \sum_{n=0}^{\infty} \frac{|z_{ph}|^{2 n}}{n!} ~\langle I_{NVE} (t,n) |\hat{o}_{NV}|I_{NVE} (t,n) \rangle_{\mathcal S},\nonumber \\ \label{teph} \end{eqnarray} where $\mathcal{S}$ is the corresponding metric operator \cite{arxiv}.
In the previous section we have concluded that, for a large number of NV centers, the value of $\Xi^2=\eta^2+\Gamma^2$ (Eq.(\ref{xi})) can be used to characterize the appearance of a steady ISS: if $\Xi^2<1$, the initial state evolves into a steady ISS. In terms of $g_1,~g_2,~\omega_r,~\gamma$, of the number of NV centers, $N=2 S$, and of the number of phonons, $n$, the quantity $\Xi^2$ reads
\begin{eqnarray} \Xi^2& =&\frac{2 \frac{g_1}{g_2}+ \frac{\gamma}{4 S~(g_2^2/\omega_r)}} {
\left| \left( \frac{g_1} {g_2} \right )^2 \left(1-\frac{1+2 n}{2 S} \right) + \left(1+\frac{1+2 n}{2 S} \right)\right|}.
\nonumber \\ \label{nvph} \end{eqnarray} If $\gamma/(4 S)$ is small, the quantity $\Xi^2$ depends on the relative coupling constant $g_1/g_2$ and on the ratio of the phonon number to the number of spins, $(1+ 2n)/(2 S)$.
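Eq.(\ref{nvph}) can be evaluated directly. In the sketch below, $\omega_r$, $g_2$ and $\gamma$ follow the values quoted in the text for the contour plot, while the particular choices of $g_1$ and $n_{ph}$ are illustrative.

```python
import numpy as np

# Xi^2 of Eq. (nvph), as printed. omega_r, g2, gamma follow the values quoted
# for the contour plot; the choices of g1 and n_ph are illustrative.
def xi2_nv(g1, g2, omega_r, gamma, n_ph, S):
    r = g1 / g2
    f = (1 + 2 * n_ph) / (2 * S)
    num = 2 * r + gamma / (4 * S * g2**2 / omega_r)
    den = abs(r**2 * (1 - f) + (1 + f))
    return num / den

S = 1001 / 2                                  # N = 1001 NV centers
omega_r, g2, gamma = 1.0, 0.5, 2e-2           # [MHz]
xi2_a = xi2_nv(0.4, g2, omega_r, gamma, 6, S)     # g1/g2 < 1: Xi^2 < 1
xi2_b = xi2_nv(1.0, g2, omega_r, gamma, 500, S)   # many phonons: Xi^2 > 1
```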
In Figure 7, we present a contour plot of $\Xi^2$ as a function of the ratios $g_1/g_2$ and $(1+ 2n)/(2S)$. We have considered a system of $N=1001$ NV-centers. We have taken values of $\omega_r=1$ [MHz], $g_2=0.5$ [MHz] and $\gamma=2 \times 10^{-2}$ [MHz] \cite{phonon1,ma-1,ma-2}. From the Figure it can be seen that $\Xi^2 <1$ for values of $g_1/g_2<1$, or for $(1+ 2n)/(2 S) \lesssim 0.5$ if $g_1/g_2 > 1$. Similar results are obtained for systems with different values of the number of NV centers, $N=2 S$, and of the number of phonons, provided that $(1+ 2n)/(2S)$ varies over the same range of values.
\begin{figure}
\caption{Contour plot of the quantity $\Xi^2$, as a function of the ratios $g_1/g_2$ and $(1+ 2n)/(2S)$. We have considered a system of $N=2 S=1001$ NV-centers. We have taken values of $\omega_r=1$ [MHz], $g_2=0.5$ [MHz] and $\gamma=2 \times 10^{-2}$ [MHz]. }
\label{fig:fig7}
\end{figure}
In Figure 8, we show the results obtained for the squeezing parameters of the steady state, as a function of the ratio $g_1/g_2$. We have computed the mean values of the physical operators following Eq. (\ref{teph}). We have chosen an initial coherent state for the NV centers, with $\theta_0=\pi/4$ and $\phi_0=0$. We have considered a system with $N= 2 S=1001$ NV color centers in diamond. The values of $g_2$, $\omega_r$ and $\gamma$ are those of Figure 7. We have evaluated the squeezing parameters at $t=300$ [$\mu$ s] $>> T_C$. In Insets (a), (b) and (c) we show the results obtained when the mean value of phonons in the initial state is $n_{ph}=~6,~100$ and $250$, respectively. When the mean value of phonons is increased, the contribution from states with large $n$ becomes important, so that, at a fixed number of NV-centers, the parameter $\Xi^2$ can exceed $1$ depending on the ratio $g_1/g_2$. We have verified that the values of the squeezing parameters in the steady state are independent of the initial state adopted \cite{ma-2}.
\begin{figure}
\caption{Values of the Squeezing Parameters in the steady state, as a function of the relative coupling constant $g_1/g_2$, in units of [dB]. We have considered a system with $N= 2 S=1001$ NV color centers in diamond. The values of $g_2$, $\omega_r$ and $\gamma$ are those of Figure 7. We have evaluated the Squeezing parameters at $t=300$ [$\mu$ s] $>> T_C$. In Insets (a), (b) and (c) we show the results obtained when the mean value of phonons is $n_{ph}=~6,~100$ and $250$, respectively.}
\label{fig:fig8}
\end{figure}
\section{Conclusions}\label{conclusions}
In this work we have studied the behavior of a system of spins interacting through a non-hermitian one-axis twisting Hamiltonian plus a Lipkin-type interaction. We have analysed the time evolution of a coherent initial spin state. We have shown, by performing the exact numerical diagonalization of the Hamiltonian, that under the action of the one-axis twisting dissipative Hamiltonian the initial state evolves into a steady coherent state pointing in the $z$-direction. This fact has been proved analytically in Section \ref{otwist}. In addition, in Section \ref{numbers} we have shown, by performing an exact diagonalization of the Hamiltonian of Eq. (\ref{hota}), that when the Lipkin interaction is turned on, a coherent initial state evolves into a steady Intelligent Spin State for a definite range of values of the relative coupling constant $\eta$. To get a deeper understanding of the results we have obtained in the stationary regime, we have performed a boson mapping of the $su(2)$ Hamiltonian of Eq.(\ref{hota}). To leading order in the number of spins, the Hamiltonian was written in terms of the operators of the $su(1,1)$ algebra, and the time evolution of the system was obtained analytically. In the asymptotic limit, that is, after long intervals of time compared to the characteristic coherence time of the system, the numerical results that we have presented support the idea that the behaviour of the steady state governed by the $su(2)$ Hamiltonian of Eq. (\ref{hota}) can be understood in terms of the behavior of the steady state governed by the $su(1,1)$ Hamiltonian of Eq. (\ref{hbos}). Both from analytical and numerical results, it is observed that two well-defined regions can be identified, depending on the relative values of the coupling constants ($\eta,~ \Gamma$), with different behaviour of the asymptotic steady state. For systems with more than $N \approx 9$ spins, the initial state evolves into a steady Intelligent Spin State when $\eta<1$, Eq. (\ref{xi}); otherwise the asymptotic state does not behave as a squeezed state. The previously reported results indicate that the generation of a steady Intelligent Spin State, for a certain range of values of ($\eta$, $\Gamma$), is a consequence of the dissipative character of the interaction. Similar results have been recently advanced in \cite{disi-1}. As a potential physical application, we have investigated the possibility of searching for a steady Intelligent Spin State in diamond nano-structures. We have presented an effective spin-spin interaction among NV color centers in diamond, mediated through the interaction of the NV centers with a magnetic nano-resonator. We have investigated the regime of coupling constants for which, under the action of this effective interaction, a coherent initial state evolves in time into a steady Intelligent Spin State.
\begin{acknowledgments} This work was partially supported by the National Research Council of Argentina (PIP 282, CONICET) and by the Agencia Nacional de Promocion Cientifica (PICT 001103, ANPCYT) of Argentina. \end{acknowledgments}
\section*{Appendix}\label{appendix}
Let us consider the Lie algebra $su(1, 1)$ \cite{romina}, which is spanned by the operators $\{ K_1,~K_2,~K_3 \}$. They satisfy the well known commutation relations $$[K_1,K_2] = - {\mathbf{i}} K_3, \ \ [K_2,K_3] = {\mathbf{i}} K_1, \ \ [K_3,K_1] = {\mathbf{i}} K_2.$$ The complex linear combinations of these operators span the algebra $su^{c}(1, 1)$, which is isomorphic to $sl(2,C)$.
The time evolution operator, $U(t)=e^{- {\mathbf{i}} t H_{B}}$, is an exponential form of the elements of the $su^c(1,1)$ Lie algebra ( Eqs. (\ref{opsu11}) and (\ref{algsu11})). Thus, $U(t)$ belongs to the $SU(1,1)$ Lie group. Consequently, $U(t)$ can be represented by a matrix $G$. The matrix $G$ is parameterized by two complex numbers $w_{1}$ and $w_{2}$ as $$G=\left(
\begin{array}{cc}
w_{1} & w_{2} \\
\overline{w}_{2} & \overline{w}_{1} \\
\end{array}
\right), $$
moreover, the parameters $w_1$ and $w_2$ fulfill the condition $|w_{1}|^{2}-|w_{2}|^{2}=1$.
Let us determine $w_1$ and $w_2$. In doing so, we shall write $U(t)$ in normal order as
\begin{eqnarray*} U(t) & = & e^{- {\mathbf{i}} t h_0} e^{- {\mathbf{i}} t (2 \alpha K_{0}+ 2 S V (K_{+}+ K_{-}))},\\
& = & e^{- {\mathbf{i}} t h_0} e^{b_{+}K_{+}}e^{\ln(b_{0})K_{0}}e^{b_{+}K_{-}}, \label{rightut} \end{eqnarray*} with $ K_{\pm} = K_{1} \pm {\mathbf{i}} K_{2}$ and $K_0=K_3$.
Following the prescriptions of \cite{gilmore}, it is possible to carry out all calculations, in either the algebra or the group, by using the faithful matrix representation of the operator algebra. It reads \begin{eqnarray*} K_{+} & = & \left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right), \nonumber \\ K_{-} & = & \left(
\begin{array}{cc}
0 & 0 \\
-1 & 0 \\
\end{array}
\right), \nonumber \\ K_{0} &= & \frac{1}{2}\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right). \end{eqnarray*} Writing (\ref{rightut}) in terms of the faithful matrix representation, we obtain
\begin{eqnarray*} \left(
\begin{array}{ll}
c+ \frac{a_{0}s}{2 d} & \frac{a s }{d} \\
- \frac{a s }{d} & c- \frac{a_{0}s}{2 d} \\
\end{array}
\right) = \left(
\begin{array}{ll}
\sqrt{b_0}-\frac {b_+ b_-}{\sqrt{b_0}} & \frac {b_+}{\sqrt{b_0}} \\
-\frac { b_-}{\sqrt{b_0}}& \frac {1}{\sqrt{b_0}} \\
\end{array}
\right), \end{eqnarray*} where
\begin{eqnarray*} c & = & \cosh( {\mathbf{i}} t \beta ), \\ s & = & \sinh( {\mathbf{i}} t \beta), \\ a_{0} & =& -2 {\mathbf{i}} t \alpha, \\ a & =& - {\mathbf{i}} t \gamma. \end{eqnarray*} Then, it results \begin{eqnarray*}
b_{0}& = & \left( \cos( t \beta) \left( 1 + \frac{ \alpha}{ \beta} {\rm tanh}( {\mathbf{i}} t \beta) \right) \right)^{-2}, \nonumber \\ b_{+}& = & {\rm e}^{ {\mathbf{i}} (\phi_V+\pi)}\frac{ 2 S |V|}{ \beta }
\frac{{\rm tanh}\left ( {\mathbf{i}} t \beta \right)}{1+\frac{ \alpha}{\beta} {\rm tanh}\left( {\mathbf{i}} t \beta \right)}, \nonumber \\ b_{-}& = & b_+, \label{defi} \end{eqnarray*} where, $\phi_V=0$ if $V>0$ and $\phi_V=\pi$ if $V<0$. We have defined $\beta = \sqrt{\alpha^{2}-(2 S V)^{2}}$. Clearly, we can identify
\begin{eqnarray*} \begin{array}{l l} w_{1} = \sqrt{b_{0}}-\frac{b_{+}^{2}}{\sqrt{b_{0}}} &, \overline{w}_{1}=\frac{1}{\sqrt{b_{0}}},\\ w_{2} = \frac{b_{+}}{\sqrt{b_{0}}} &, \overline{w}_{2}=-\frac{b_{+}}{\sqrt{b_{0}}}\\ \end{array} \end{eqnarray*}
As $|w_{1}|^{2}-|w_{2}|^{2}=1$, there exist $\zeta \in \mathbb{C}$ and $\{\theta_{1},~ \theta_{2}\}~ \in \mathbb{R}$ so that
$$w_{1}=\cosh |\zeta| {\rm e}^{i \theta_{1}} \ \ \ \ w_{2}=\sinh |\zeta| {\rm e}^{i \theta_{2}}.$$ Consequently
$$ \frac{w_{2}}{\overline{w}_{1}} =b_{+}= {\rm e}^{ {\mathbf{i}} (\theta_1+\theta_2)}\tanh|\zeta|,$$
verifying that $|b_{+}|<1.$
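As a numerical cross-check (a sketch with illustrative parameter values, not those used in the paper), the faithful $2\times 2$ representation quoted above can be used to confirm that $U(t)$ indeed takes the SU(1,1) matrix form $G$ with $|w_{1}|^{2}-|w_{2}|^{2}=1$. Since the generator $2\alpha K_{0}+2SV(K_{+}+K_{-})$ is traceless and squares to $\beta^{2}$ times the identity, its exponential can be evaluated in closed form:

```python
import numpy as np

# Faithful 2x2 representation of the su(1,1) generators (as in the text).
K_plus = np.array([[0, 1], [0, 0]], dtype=complex)
K_minus = np.array([[0, 0], [-1, 0]], dtype=complex)
K_0 = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

# Illustrative (hypothetical) coupling constants, chosen so that beta is real.
alpha, S, V, t = 1.3, 0.5, 0.4, 2.0

H = 2 * alpha * K_0 + 2 * S * V * (K_plus + K_minus)  # traceless generator
beta = np.sqrt(alpha**2 - (2 * S * V) ** 2)           # beta as defined in the text

# H^2 = beta^2 * identity, hence exp(-i t H) = cos(t beta) I - i sin(t beta) H / beta.
G = np.cos(t * beta) * np.eye(2) - 1j * np.sin(t * beta) / beta * H

w1, w2 = G[0, 0], G[0, 1]
print(abs(w1) ** 2 - abs(w2) ** 2)  # ~1.0 up to rounding
```

The lower row of $G$ also reproduces the conjugate structure of the parameterization, i.e., $G_{21}=\overline{w}_{2}$ and $G_{22}=\overline{w}_{1}$, consistent with $\det G=1$.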
It is convenient to introduce the operator of squeezing $S_q(\zeta)=e^{\overline{\zeta}K_{-}-\zeta K_{+}}$, with $\zeta= r {\rm e}^{ {\mathbf{i}} \tau}$ and $\tau=\phi+\phi_V+\pi$. In terms of the complex parameter $\zeta$, $b_+$ is written as $b_+=\frac {\zeta} {|\zeta|}\tanh |\zeta|$. It is straightforward to show, by using the faithful matrix representation, that
$$
S_q(\zeta)=e^{\overline{\zeta}K_{-}-\zeta K_{+}}=e^{b_{+}K_{+}}e^{\ln(1-\tanh^2|\zeta|)K_{0}}e^{-\overline{b_{+}} K_{-}}, $$ and
\begin{eqnarray*} U(t)& = &
e^{- i t h_0}e^{b_{+}K_{+}}e^{\ln(1-\tanh^2|\zeta|)K_{0}}e^{-\overline{b_{+}}K_{-}} \nonumber \\
& & ~~~~~~~~~e^{\overline{b_{+}}K_{-}}e^{-\ln(1-\tanh^2|\zeta|)K_{0}}e^{\ln(b_{0})K_{0}}e^{b_{+}K_{-}} \nonumber \\ &=&{\rm e}^{- {\mathbf{i}} t h_0}S_q(\zeta){\rm e}^{\ln(R_{0})K_{0}}{\rm e}^{R_{-}K_{-}},\nonumber \\ \end{eqnarray*}
where we have defined $R_{0} = \frac{b_{0}}{1-\tanh ^2|\zeta|} $, and \\
$R_{-} = \left( \frac{\overline{\zeta}}{|\zeta|} \frac{b_{0}}{1-\tanh ^2|\zeta|} -\frac{\zeta}{|\zeta|} \right) \tanh |\zeta|$.
We shall now consider the time evolution of the coherent state of Eq.(\ref{inibos})
\begin{eqnarray*}
|\psi \rangle= e^{-S}\sum_{n=0}^{\infty}\frac{(\sqrt{2S})^{n}}{\sqrt{n!}}|n \rangle=D(\sqrt{2S})|0 \rangle, \end{eqnarray*}
with $D(\sqrt{2S})=\exp(\sqrt{2S} a^{\dagger}- \sqrt{2S}a)$. It is easy to prove that $K_{-}|\psi \rangle= S|\psi \rangle$. Then
\begin{eqnarray*}
U|\psi \rangle & = & \mathcal{N} {\rm e}^{- {\mathbf{i}} t h_0}S_q(\zeta){\rm e}^{\ln(R_{0})K_{0}}{\rm e}^{R_{-}K_{-}}D(\sqrt{2S})|0 \rangle, \nonumber \\
& = & \mathcal{N} {\rm e}^{- {\mathbf{i}} t h_0}S_q(\zeta){\rm e}^{\ln(R_{0})K_{0}}{\rm e}^{S R_{-}}D(\sqrt{2S})|0 \rangle, \nonumber \\
& = & \mathcal{N} {\rm e}^{- {\mathbf{i}} t h_0}R_0^{1/4}{\rm e}^{S R_{-}-S+ |R_0|} S_q(\zeta)D(\sqrt{2S R_0})|0 \rangle, \nonumber \\ \end{eqnarray*} and the normalization factor results
\begin{eqnarray*}
\mathcal{N}^{-2}&=& e^{\gamma t} {\rm e}^{2 S (|R_{0}|+{\rm Re}(R_{-})-1)}\sqrt{|R_{0}|} \times \nonumber \\
& & ~~~~~~~~ \langle 0|D^{\dagger}(\sqrt{2 S R_{0}})S_q^{\dagger}(\zeta)S_q(\zeta)D(\sqrt{2 S R_{0}})|0\rangle \nonumber \\
&=&e^{\gamma t }e^{2 S (|R_{0}|+{\rm Re}(R_{-})-1)}\sqrt{|R_{0}|}. \end{eqnarray*}
Let us evaluate the fluctuations of the operators $x$ and $p$. In doing so, we shall make use of well-known relations for the squeezing operator $S_q(\zeta)$:
\begin{eqnarray*} S_q^{\dagger}(\zeta) x S_q(\zeta)& = & x~(\cosh r-\cos \tau \sinh r)-p~ \sin \tau \sinh r,\nonumber \\ S_q^{\dagger}(\zeta) p S_q(\zeta)& = & p~(\cosh r+\cos \tau \sinh r)+x~ \sin \tau \sinh r,\nonumber \\ S_q^{\dagger}(\zeta) x^2 S_q(\zeta) & = & x^2~(\cosh r- \cos \tau \sinh r )^{2} + \nonumber \\
& & p^2~ \sin^2 \tau \sinh^2 r- \nonumber \\
& & \{x, p\}\sin \tau \sinh r (\cosh r - \cos \tau \sinh r ),\nonumber \\ S_q^{\dagger}(\zeta) p^2 S_q(\zeta) & = & x^2~\sin^2 \tau \sinh^2 r + \nonumber \\
& & p^2~ (\cosh r+ \cos \tau \sinh r )^{2}+ \nonumber \\
& & \{x, p\}\sin \tau \sinh r (\cosh r + \cos \tau \sinh r ),\nonumber \\ \end{eqnarray*} and of \begin{eqnarray*}
\langle 0 |D^{\dagger}(\sqrt{2 S R_{0} })x D(\sqrt{2 S R_{0} })|0 \rangle &=& 2 \sqrt{S} ~ {\rm Re}\sqrt{R_{0}},\nonumber \\
\langle 0 |D^{\dagger}(\sqrt{2 S R_{0} })p D(\sqrt{2 S R_{0} })|0 \rangle &=& 2 \sqrt{S} ~ {\rm Im}\sqrt{R_{0}},\nonumber \\
\langle 0 |D^{\dagger}(\sqrt{2 S R_{0} })x^2 D(\sqrt{2 S R_{0} })|0 \rangle &=& \frac{1}{2}+ 4 S~ ({\rm Re}\sqrt{R_{0}})^{2},\nonumber \\
\langle 0 |D^{\dagger}(\sqrt{2 S R_{0} })p^2 D(\sqrt{2 S R_{0} })|0 \rangle &=& \frac{1}{2}+ 4 S~ ({\rm Im}\sqrt{R_{0}})^{2}.\nonumber \\ \end{eqnarray*} We can proceed to calculate
\begin{eqnarray*}
\Delta^{2}p & = & \langle \psi | U^{\dagger} p^{2}U | \psi \rangle - \langle \psi | U^{\dagger} p U |\psi \rangle ^{2}, \nonumber \\
&=& \frac{1}{2} \left(\cos (\tau ) \sinh (2 r)+\sinh ^2(r)+\cosh ^2(r)\right), \end{eqnarray*} and \begin{eqnarray*}
\Delta^{2}x & = & \langle \psi | U^{\dagger} x^{2}U | \psi \rangle - \langle \psi | U^{\dagger} x U |\psi \rangle ^{2}, \nonumber \\
&=& \frac{1}{2} \left(-\cos(\tau ) \sinh (2 r)+\sinh ^2(r)+\cosh ^2(r)\right). \end{eqnarray*}
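These two closed forms can be spot-checked numerically (a sketch over a grid of illustrative $(r,\tau)$ values, not model parameters). The check assumes, consistently with the derivation, vacuum variances of $1/2$ for both quadratures and a vanishing symmetrized cross-correlation $\langle\{x,p\}\rangle=0$:

```python
import numpy as np

# Grid of illustrative squeezing parameters.
r = np.linspace(0.1, 2.0, 25)[:, None]
tau = np.linspace(0.0, 2.0 * np.pi, 25)[None, :]

A = np.cosh(r) - np.cos(tau) * np.sinh(r)  # x-coefficient of S_q^dag x S_q
B = np.sin(tau) * np.sinh(r)               # cross coefficient
C = np.cosh(r) + np.cos(tau) * np.sinh(r)  # p-coefficient of S_q^dag p S_q

# Vacuum variances 1/2 with <{x,p}> = 0 give the squeezed-state variances.
dp2 = 0.5 * (C**2 + B**2)
dx2 = 0.5 * (A**2 + B**2)

# Closed forms quoted in the text.
dp2_text = 0.5 * (np.cos(tau) * np.sinh(2 * r) + np.sinh(r) ** 2 + np.cosh(r) ** 2)
dx2_text = 0.5 * (-np.cos(tau) * np.sinh(2 * r) + np.sinh(r) ** 2 + np.cosh(r) ** 2)

print(np.allclose(dp2, dp2_text), np.allclose(dx2, dx2_text))  # True True
```

The product $\Delta^{2}x\,\Delta^{2}p$ obtained this way never drops below $1/4$, as required by the uncertainty relation, with equality on the unsqueezed axis.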
\end{document}
Taking stock of national climate policies to evaluate implementation of the Paris Agreement
Mark Roelfsema, Heleen L. van Soest, Mathijs Harmsen, Detlef P. van Vuuren, Christoph Bertram, Michel den Elzen, Niklas Höhne, Gabriela Iacobuta, Volker Krey, Elmar Kriegler, Gunnar Luderer, Keywan Riahi, Falko Ueckerdt, Jacques Després, Laurent Drouet, Johannes Emmerling, Stefan Frank, Oliver Fricko, Matthew Gidden, Florian Humpenöder, Daniel Huppmann, Shinichiro Fujimori, Kostas Fragkiadakis, Keii Gi, Kimon Keramidas, Alexandre C. Köberle, Lara Aleluia Reis, Pedro Rochedo, Roberto Schaeffer, Ken Oshiro, Zoi Vrontisi, Wenying Chen, Gokul C. Iyer, Jae Edmonds, Maria Kannavou, Kejun Jiang, Ritu Mathur, George Safonov & Saritha Sudharmma Vishwanathan
Subjects: Climate-change mitigation, Climate-change policy
Many countries have implemented national climate policies to accomplish pledged Nationally Determined Contributions and to contribute to the temperature objectives of the Paris Agreement on climate change. In 2023, the global stocktake will assess the combined effort of countries. Here, based on a public policy database and a multi-model scenario analysis, we show that implementation of current policies leaves a median emission gap of 22.4 to 28.2 GtCO2eq by 2030 with the optimal pathways that implement the well below 2 °C and 1.5 °C Paris goals. If Nationally Determined Contributions would be fully implemented, this gap would be reduced by a third. Interestingly, the countries evaluated were found either to not achieve their pledged contributions with implemented policies (implementation gap) or to have an ambition gap with optimal pathways towards well below 2 °C. This shows that all countries would need to accelerate the implementation of policies for renewable technologies, while efficiency improvements are especially important in emerging countries and fossil-fuel-dependent countries.
The objective of the Paris Climate Agreement is to hold average global warming to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C1. While this objective is formulated at the global level, the success of the agreement critically depends on the implementation of climate policies at the national level. This is organised in the agreement through the requirement that countries submit nationally determined contributions (NDCs). Countries are expected to update their NDCs in 2020. While NDCs should be submitted by every country and updated every five years, their policies and targets are not legally binding. Previous studies have highlighted that, taken together, the NDCs and national policies fall significantly short of the overall ambition of the Paris Agreement2,3,4. To achieve the targets of the NDCs, countries are implementing policies at the national level. The Paris Agreement facilitates a global stocktake in 2023, which is expected to take stock of the collective efforts and to inform the preparation of more ambitious NDCs. For this, clear insights are needed into the impact of currently implemented national policies of individual countries. At the moment, no peer-reviewed literature exists that assesses the global and country-level impact of national climate policies on the basis of a comprehensive policy inventory using a suite of integrated assessment models, and that uses this assessment to guide additional policy implementation. Such a multi-model approach, using a range of model types (simulation/optimisation, general or partial equilibrium), adds to the robustness of the assessment.
The aim of this article is to fill this knowledge gap and to provide insights into the impact of national policies in comparison to emission pathways consistent with the NDCs and the overall goals of the Paris Agreement. Consequently, we divide the total emissions gap between national policies and well below 2 °C pathways into an implementation gap, referring to the difference between the impact of national policies and the NDCs, and an ambition gap, referring to the difference between the impact of the NDCs and well below 2 °C emission pathways. The results are presented for seven large economies and the world. The analysis was done by first establishing a list of high-impact policies5 for each G20 economy, selected from a detailed open-access policy database6, and translating these into input parameters for integrated assessment models. Subsequently, the model results allowed us to assess the direct impact of these policies, as well as their interactions. The results are also presented in terms of the Kaya identity, allowing us to indicate how to close the implementation and ambition gaps7,8. The nine integrated assessment models (see Methods) used in this study have all submitted data for the 1.5 °C scenarios to the IPCC 1.5 °C report9. To evaluate the coherence of the national pathways, we compared the aggregated results of the integrated assessment models with similar runs of national models for the same countries.
Model-based scenarios have played a major role in supporting international climate policy for a few decades already. The focus of model analyses, however, has mostly been on exploring cost-optimal response strategies required to meet the climate temperature goals, with simplified representations of national policies, typically incorporating them as overall emission-reduction targets implemented via carbon prices10,11,12. The new phase of climate policy after Paris requires new information on the long-term contribution of specific policies. While some assessments have accounted for more explicit climate policy formulations in different parts of the world, these are typically single-model exercises or focus only on the NDCs11,13,14,15,16. As such, the current work adds to the literature.
Owing to the aggregation level of most IAMs, our analysis is limited to the national policies and NDCs of G20 economies, which represent 75% of total 2010 greenhouse gas emissions. It is estimated that the countries with high-impact policies that are not included in our assessment represent around 5% of global 2010 emissions (see Supplementary Table 1). The collected policies have been made available in an open-access database6 and cover implemented and planned national policies up to 2017. As the introduction of new policies mostly occurs simultaneously with key international accords17, this inventory contains most of the relevant policies that were introduced around the Paris Agreement. A selection from this database was made consisting of around ten policies for each G20 country that were expected to have a high impact on greenhouse gas emissions based on literature or national expert opinion, that were adopted by national governments through legislation or executive orders, and for which no evidence exists of large barriers to implementation. The results are presented at the global level and for the seven large emitting economies for which national models were available, i.e., Brazil, China, the European Union, India, Japan, the Russian Federation and the United States, together representing around 65% of global 2010 greenhouse gas emissions18.
The results show that if no additional action is taken beyond currently implemented national climate policies, greenhouse gas emissions are projected to increase substantially between 2015 and 2030, although to a level 5.3% lower than in the hypothetical situation in which these policies had not been implemented. Current national policies together leave a median global total emissions gap by 2030 of 22.4 gigatonnes of CO2 equivalent (GtCO2eq) with a cost-optimal 2 °C emission pathway, and 28.2 GtCO2eq with a 1.5 °C pathway. The 2 °C global emissions gap can be reduced by a third if conditional NDCs were fully implemented, which would close the global implementation gap but would still leave a significant ambition gap. For seven large individual countries (China, the United States, India, the European Union, Japan, Brazil and the Russian Federation), policy implementation is expected to reduce emissions at the national level by 0 to 9% (median estimates) compared to the hypothetical situation in which no policies would be implemented. This leaves a small implementation gap for China, India, Japan and the Russian Federation, as they are close to achieving their NDCs; this is not the case for the European Union, the United States and Brazil, but their ambition gap is smaller, as their NDCs are close to the cost-optimal 2 °C pathways.
Global implementation and total emissions gap
In total, five scenarios were evaluated (see Table 1 and Supplementary Note 1). The starting point of all scenarios is the SSP2 scenario19,20, a middle-of-the-road scenario that assumes business-as-usual development, with no new climate policy implementation after 2010 (No new policies scenario). The National policies scenario represents the impact of policies implemented domestically to fulfil the NDC promises that are included in the NDC scenario. The 2 °C and 1.5 °C scenarios look into cost-optimal implementation of the overall goals of the Paris Agreement. To provide guidance on enhancing policy implementation, the impact of policies is decomposed by computing a set of indicators based on the Kaya identity (see Supplementary Note 2). Besides greenhouse gas emissions, the share of low-carbon technologies (no fossil fuels without carbon capture and storage) and energy efficiency are also presented.
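The Kaya-identity decomposition behind these indicators factors emissions as CO2 = population × (GDP/capita) × (energy/GDP) × (CO2/energy). A minimal sketch (all numbers below are hypothetical, chosen only for roughly 2015-like orders of magnitude, and are not model output):

```python
def kaya_co2(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """Kaya identity: CO2 = P * (GDP/P) * (E/GDP) * (CO2/E)."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Hypothetical global inputs (people, USD2010/person, GJ/USD2010, tCO2/GJ).
base = dict(population=7.3e9, gdp_per_capita=1.0e4,
            energy_intensity=5.5e-3, carbon_intensity=0.06)

# A policy package that only improves energy intensity by 20.5%
# reduces emissions by exactly the same fraction, all else equal.
policy = dict(base, energy_intensity=base["energy_intensity"] * (1 - 0.205))
reduction = 1.0 - kaya_co2(**policy) / kaya_co2(**base)
print(round(reduction, 3))  # 0.205
```

In the multi-factor case the individual contributions interact multiplicatively, which is why the text reports the energy-mix and efficiency gaps separately rather than summing them.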
Table 1 Main assumptions on climate policy implementation per scenario.
Under the No new policies scenario, the models project an increase in global greenhouse gas emissions to 63.9 GtCO2eq (61.0–69.1; median and 10th to 90th percentile range over all model results) by 2030. This is mostly driven by an increase in emissions related to transport, industry and power production in developing countries, though still at lower per-capita levels than in developed countries. Implementation of national policies is not projected to reverse the increase of global emissions by 2030, and would result in emission levels of 59.3 GtCO2eq (58.4–63.7) (Fig. 1), a 5.3% (3.8%–7.9%) reduction relative to the No new policies scenario (see Table 2). It does, however, cover 15.4% (10.8%–19.0%) of the emissions gap between No new policies and the 2 °C pathway by 2030, and 11% (7.6%–15.9%) of the gap with the 1.5 °C pathway.
Fig. 1: Greenhouse gas emissions on a global level and seven large countries under different scenarios.
a Global greenhouse gas emissions for total greenhouse gases (in GtCO2eq) and nine integrated assessment models between 2010 and 2030. b Average greenhouse gas emissions (in MtCO2eq) of all models by 2010, 2015 and 2030 for CO2 emissions per sector and total non-CO2 emissions (blue), including the 10th–90th percentile ranges for total greenhouse gas emissions of the multi-model ensemble (error bars). CO2 emissions have been separated into those related to energy supply (red), transport (dark orange), buildings (light orange), industry (yellow) and AFOLU (agriculture, afforestation, forestry and land-use change) (green). National models are China-TIMES and IPAC for China, GCAM-USA for the United States, PRIMES for the EU, AIM India and India MARKAL for India, RU-TIMES for the Russian Federation, BLUES for Brazil and AIM/Enduse and DNE21 + for Japan. For both panels, CO2 equivalent greenhouse gases have been calculated using the 100-year Global Warming Potential from the IPCC Fourth Assessment Report. The data is available in the source data.
Table 2 Absolute (GtCO2eq) and percentage impact of policy implementation relative to no new policies scenario, and implementation gap with NDC scenario for the world, China, United States, India, EU, Japan, Brazil and Russian Federation (median value and 10–90% in brackets).
Although the global low-carbon share of final energy under the National policies scenario increases by 1 percentage point (1 pp) to 14.3% (9.3%–19.8%) by 2030, and the energy intensity improves by 20.5% (16.1%–24.7%) between 2015 and 2030, final energy use still increases (see Fig. 2). Most emission reductions under the National policies scenario are induced by high-impact policies that target CO2 emissions (Fig. 1). Furthermore, 45% (30–70%) of the emission reductions are projected to come from countries that are members of the Organisation for Economic Co-operation and Development (OECD).
Fig. 2: Final energy and the low-carbon share of final energy on the global level and seven large countries under different scenarios.
Average total final energy for 2010, 2015 and 2030 of nine global integrated assessment models is subdivided into sectors: transport, buildings, industry and other. Total final energy includes the 10th to 90th percentile ranges for total final energy (error bars). The black dots/triangles indicate final energy based on national model estimates (China-TIMES and IPAC for China, GCAM-USA for the United States, PRIMES for the European Union, AIM India and India MARKAL for India, RU-TIMES for the Russian Federation, BLUES for Brazil and AIM/Enduse and DNE21 + for Japan). The data is available in the source data.
For achieving conditional NDCs, deeper reductions are necessary than those achieved by national policies alone. The implementation of conditional NDCs (NDC scenario) is projected to result in 51.9 (50.4–57.4) GtCO2eq greenhouse gas emissions by 2030, a low-carbon share of final energy of 16.8% (12.6%–25.2%), and a 23.5% (17.9%–30.0%) energy-intensity improvement between 2015 and 2030. This means that national policies together leave a significant global implementation gap with respect to the NDC targets by 2030, which is 7.7 (5.3–9.7) GtCO2eq for emissions (see Table 3). This gap by 2030 can be closed by increasing the low-carbon share by 2.8 pp (1.5–4.7 pp) and decreasing energy intensity by 12.7% (9.1%–16.1%). Final energy reductions under the NDC scenario compared with the National policies scenario occur especially in the transport and buildings sectors (see Fig. 2).
Table 3 Absolute (GtCO2eq) and percentage emissions gaps by 2030, on the global level and for China, the United States, the European Union, India, Japan, the Russian Federation and Brazil.
Uncertainty range
The different integrated assessment models provide a range of outcomes for changes in greenhouse gas emissions due to policy implementation between 2015 and 2030. This range is a result of differences in historical emissions21, different assumptions about socio-economic growth rates, differences in the impact of policy implementation across models, and, finally, genuine uncertainty resulting from structural model differences (see Methods). The differences in historical emissions are in line with estimates of uncertainty in historical emission inventories (10% in total greenhouse gas emissions)22, but they clearly contribute to the uncertainty for 2030. In addition, an estimate of the contribution of socio-economic factors can be obtained by comparing the 2015 and 2030 emission ranges under the No new policies scenario. This shows a 2030 range that is 50% larger than the 2015 range. The differing impact of policies implemented in the models has been estimated by considering the impact of all policies implemented in the models and estimating those that were not included based on the IMAGE model results (see Methods). Based on this analysis, it can be concluded that assumptions on socio-economic factors explain the largest part of the ranges in the results for 2030, while the differences in policy impact explain about one-third.
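The reported spreads (median with 10th–90th percentile range) across the nine-model ensemble can be reproduced with a small helper; the example values below are made up for illustration and are not the actual model submissions:

```python
import numpy as np

def ensemble_summary(values):
    """Median and 10th/90th percentiles across a model ensemble."""
    v = np.asarray(values, dtype=float)
    return np.median(v), np.percentile(v, 10), np.percentile(v, 90)

# Hypothetical 2030 emissions (GtCO2eq) from nine models under one scenario.
models_2030 = [58.4, 58.9, 59.0, 59.2, 59.3, 59.6, 60.8, 62.5, 63.7]
median, p10, p90 = ensemble_summary(models_2030)
print(median, round(p10, 2), round(p90, 2))  # 59.3 58.8 62.74
```

With nine members, the 10th–90th range interpolates between the outermost models, so a single outlying model can widen the reported range noticeably.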
Impact of national policies for seven large G20 economies
The scenarios allow for evaluation of climate policy at the national level (although obviously limited by model detail). Policy implementation is estimated to result in reductions ranging from 0% (0–2%) for the Russian Federation to 10% (4–12%) for the United States, relative to the No new policies scenario (see Table 2). The largest absolute emission reductions under the National policies scenario occur in the CO2 energy supply and transport sectors in all countries, except for Brazil, where reductions also occur in the AFOLU sector (although AFOLU emission estimates are inherently uncertain, already for historical estimates23). The largest percentage reductions are projected in the transport sector for the United States and India, the industrial sector for the EU, and the energy supply sector for China and Japan. In the Russian Federation, the National policies scenario hardly triggers any emission reductions compared to the No new policies scenario.
Implementation of national policies still leaves an implementation gap with NDCs, ranging from 3% (3–7%) for the Russian Federation to 28% (22–37%) for the United States (see Table 2). With national policies up to the 2017 cut-off date, China, India, Japan and the Russian Federation are projected to come close to achieving their NDC targets by 2030. In Brazil, the European Union and the United States, the median estimate of the National policies scenario is further removed from the NDC level. Note that very recent policy updates since 2017, or planned policies in the pipeline to be implemented, were not included. We have also compared the results of the global models to the outcomes of the same scenarios from national models for each individual country. These results confirm the above trends, although the absolute levels differ in a few cases (Figs. 1 and 2).
Global emissions gap and for seven large G20 countries
In order to implement the objectives of the Paris Agreement, all national policies together should reduce emissions enough to keep global warming below the 2 °C and 1.5 °C temperature limits. We evaluate this by comparing the results of the policy scenarios with cost-optimal scenarios for these temperature targets. This shows a total emissions gap between the National policies scenario and the cost-optimal scenarios in 2030 of 22.4 GtCO2eq (13.6–29.6) for the 2 °C limit (high probability), and 28.2 GtCO2eq (19.8–42.2) for the 1.5 °C limit (see Table 3 and Fig. 3). This is respectively a global reduction of 36% (23–49%) and 45% (33–65%) by 2030 relative to the national policies scenario.
Fig. 3: Indicators derived from Kaya identity and costs per GDP between 2010 and 2030 on a global level and for seven large countries under different scenarios.
The median (lines) and 10th–90th percentile ranges (areas) from nine integrated global assessment models on emissions, energy mix and efficiency gaps and mitigation costs per GDP. These gaps are represented by total greenhouse gas emissions (MtCO2eq), low-carbon share of final energy (%), final energy intensity in GDP (TJ/USD2010) and total mitigation costs per GDP (%) between national policies and well below 2 °C scenarios. The data is available in the source data.
The Kaya identity allows breaking this up into an energy mix gap (share of low-carbon technologies in final energy), an efficiency gap (final energy-intensity improvement relative to the results of the implementation of national policies), and a carbon-intensity gap (see Supplementary Figs. 1 and 2). To close the gap with the National policies scenario by 2030, the non-fossil share would need to increase by 6.9 pp (4.0–12.3 pp) (energy mix gap), and the energy intensity needs to improve by 9.6% (4.8%–24.7%) (efficiency improvement gap). These numbers are 13.0% (7.2%–24.0%) and 17.5% (12.5%–26.8%) for the 1.5 °C case (see Fig. 3). Global annual mitigation costs per GDP by 2030 under the National policies scenario are small, and increase to 0.9% (0.3%–2.2%) under the 2 °C scenario and to 1.3% (1.0%–4.0%) under the 1.5 °C scenario (see Fig. 3). The global emissions gap with the 2 °C scenario can be reduced by a third if conditional NDCs would be fully implemented, leaving a median ambition gap of 16.5 GtCO2eq (6.4–21.0) with 2 °C pathways and 21.2 GtCO2eq (12.2–31.6) with 1.5 °C pathways.
For the seven individual G20 countries, greenhouse gas emissions by 2030 would need to decrease compared to the national policies scenario by 25 to 41% (median) to stay on track to keep temperature below 2 °C, while this is 33 to 54% (median) under the 1.5 °C scenario (see Table 2 and Fig. 3). These gaps can be closed by strongly increasing the low-carbon share of final energy by 5.4 pp for the European Union to 8.5 pp for China to stay below 2 °C, and between 5.4 pp in the European Union to 20.2 pp in China for the 1.5 °C case. Projections for final energy intensity give a different picture, where the difference between the National policies scenario and the 2 °C scenarios are small for the European Union, Japan and the United States, somewhat larger (and more uncertain) for Brazil, and largest for China, India and the Russian Federation (See Fig. 3). Closing the gap between national policies and 2 °C or 1.5 °C pathways by 2030 would result in additional median mitigation costs per GDP of between 0.5% for the European Union to 2.8% for the Russian Federation for the 2 °C case, while this is 0.6% to 3.4% for the 1.5 °C case (see Fig. 3).
Mid-century impact of national policies
To give an indication of the short-term impact of national policies in the context of the long-term global targets, we present an indicator defined as the cumulative emissions over the 2011–2050 period divided by the 2010 emissions, and in addition assume that countries pursue the same national efforts between 2030 and 2050 under the National policies scenario by keeping the total percentage emission reductions relative to the No new policies scenario constant. The indicator allows for comparing countries with different absolute emission levels, and gives the number of years an economy can emit at 2010 emission levels while staying below the total cumulative emissions of the next 40 years. A value of 40 indicates that, on average, the emission level will remain constant. As for the shorter period until 2030, comparison of the results with the trajectories for the 2 °C and 1.5 °C maximum temperature increases shows a large gap (Fig. 4). Interestingly, the NDC projections by 2050 for the European Union, Brazil and the United States are relatively close to the 2 °C scenario, suggesting that these regions would mostly need to ensure that their national policies lead more closely to the NDC target (which may possibly already be achieved through very recent policy updates). It should, however, be noted that cost-optimal implementation (equal marginal costs in all regions) leads to higher costs, as a percentage of GDP, in low-income regions and, therefore, is a fair way to implement the Paris Agreement (see Supplementary Note 3) only if complemented by financial transfers. Effort-sharing approaches based on equity considerations tend to suggest larger reduction targets for high-income regions24.
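The cumulative-emission indicator defined above can be sketched as follows (illustrative emission paths, not model output); a path that stays flat at the 2010 level scores exactly 40, matching the interpretation given above:

```python
import numpy as np

def emission_years(path_2011_2050, emissions_2010):
    """Cumulative 2011-2050 emissions expressed as years of 2010-level emissions."""
    return float(np.sum(path_2011_2050)) / emissions_2010

e2010 = 50.0                              # GtCO2eq, hypothetical base-year level
flat = np.full(40, e2010)                 # constant emissions over 2011-2050
phase_out = np.linspace(e2010, 0.0, 40)   # linear decline to zero by 2050

print(emission_years(flat, e2010))                 # 40.0
print(round(emission_years(phase_out, e2010), 6))  # 20.0
```

Because the indicator is normalised by each economy's own 2010 emissions, it compares relative ambition across countries of very different sizes, as done in Fig. 4.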
Fig. 4: Cumulative CO2 emissions in the period 2011–2050 period relative to 2010 emissions on the global level and for seven large countries under different scenarios.
The box plots indicate the median, 25th to 75th percentile range, while the black data points show the full global model range. The brown coloured markers indicate the results from the national models. The data is available in the source data.
The 2 °C and 1.5 °C model ranges for Brazil are large as a result of the uncertainty in land-use-related emissions. In terms of cost-optimal mitigation, large reductions in each G20 economy are necessary to stay within the 400 Gt carbon budget. The median estimate for cumulative emissions relative to 2010 under this scenario is at a similar level, between 20 and 25, except for Brazil and India, indicating that, given the estimated cumulative emissions in the National policies scenario, strong efforts by almost all countries are essential.
The results show that for all countries there is either a significant implementation gap or a significant ambition gap. Unless governments increase ambition, the collective effort of current national policies falls significantly short of the objectives of the Paris Agreement and even fails to meet the joint ambition secured in the NDCs. The results have strong implications beyond 2030. Previous literature has shown that inadequate near-term reduction efforts imply a substantially higher rate of transformation to comply with the 2 °C limit11, stranded assets25, substantially higher mitigation costs in the long term, and reduced techno-economic mitigation potential due to carbon lock-in26.
The 2 °C and 1.5 °C pathways in this study are calculated assuming cost-optimal implementation, but this might not be the most realistic approach to deriving national reduction targets, as it would typically lead to relatively high costs in low-income countries. In contrast, effort-sharing approaches based on equity principles would lead to lower allowances of cumulative emissions for the EU, Japan, the Russian Federation and the United States, and to higher allowances for India (see Supplementary Fig. 3), with an opposite impact on the gap between national policies and these allowances. If cost-effective climate policy were adopted, emission trading or transnational climate financing could still ensure a cost-optimal implementation. If less cooperation between countries is assumed, a different allocation would increase the total costs of implementation.
One crucial question that arises from this analysis is how to speed up implementation to achieve the NDCs, and how to increase ambition to stay on track for the well-below-2 °C goals. Current policy implementation is weak and shows significant gaps (e.g., in industry and freight transport policies). Moreover, it is often fragmented in terms of the policy instruments used and the coverage of sectors and countries. A redesign of current policy mixes towards more coherent policies, including for instance the use of economy-wide financial instruments27, may respond to the current call for strengthened policies. In practical terms, lessons can be drawn from the policy mixes used in our analysis, for instance by identifying the most successful mitigation measures. In identifying such good practices, it is important to evaluate measures not only in terms of cost-effectiveness but also in terms of reducing public policy constraints, such as the distribution of costs28, the ability to address uncertainty29, and the political feasibility of intervening in the economy30. A careful redesign, in combination with international cooperation, could avoid carbon leakage to other sectors and countries, avoid stranded assets31, and increase the regulatory power of governments.
In 2020, countries are expected to submit updated NDCs under the Paris Agreement. However, the global stocktake discussed in this article shows that large enhancements are necessary to keep open the window to limiting temperature increase to well below 2 °C, or even to pursue efforts to limit it to 1.5 °C. To do so, all countries would need to accelerate the implementation of renewable technologies, while efficiency improvements are especially important in emerging economies (China, India, Brazil) and fossil-fuel-dependent countries (Russian Federation). From this we conclude that the global stocktake in the Paris Agreement's process would need to go beyond presenting emission gaps and also provide insights and guidance on how to close them. Integrated assessment models can support this policy process. First, the national policies scenario used in this analysis could be assessed in more detail to give insights into the impact of individual policies. In addition, the models are well suited to presenting effective mitigation options to countries for policy enhancement, by showing the trade-offs between the impacts and costs of different policy packages in the context of global efforts. Other effectiveness criteria could be captured with different scenarios. Finally, as new policy questions require more detailed information, model development could move in the direction of including more countries, sectors and actors, or of linking to bottom-up energy and land-use models.
Model exercise
The assessment of the impact of national climate policies on greenhouse gas emissions is based on the model exercise carried out as part of the CD-LINKS project, for which guidelines were described in the global and national model protocols32,33. This project aimed, among other things, to develop low-carbon development pathways at the global level and for G20 economies, including an explicit representation of near-term policy trends. For this paper, we selected seven large G20 economies in terms of greenhouse gas emissions (Brazil, China, the EU, India, Japan, the Russian Federation and the United States), for which national climate and energy models were also available in the project.
Integrated assessment models
Integrated assessment models (IAMs) describe key processes in the interaction between human development and the natural environment, and are designed to assess the implications of achieving climate objectives2,34. The model exercise that assessed the impact of climate policies was carried out by nine IAMs with global coverage and ten national models that each represent a specific G20 economy (see Table 4). A more detailed description of model structures and policy implementation can be found in Supplementary Note 4 and, for some models, at the IAMC wiki35. These models differ in their country and sector aggregation levels, and also in the way they mimic decisions on climate policy. All models include dynamic pricing, and local climate policy will therefore result in lower implementation in other regions with fewer policies; however, only the economic models explicitly account for carbon leakage. In addition, as most models assume one central planner, the behaviour and decisions of different actors and the role of institutions are often not explicitly taken into account. This implies that most models (especially those with a simple representation of the economy) have only a limited ability to reflect the specific social and economic dynamics of developing and transition economies36. Some phenomena, such as the green paradox, can only be represented by most models through an explicit scenario design. However, the models with less economic detail often have a more detailed representation of technologies in different sectors, enabling them to take technological learning into account.
Table 4 Participating integrated assessment models in the model exercise to assess the impact of climate policies.
Selection and model implementation of policies
Climate policy on the national level is defined, in this research, as the result of climate policy formulation and climate policy implementation, encompassing aspirational goals not secured by legislation, national targets that are secured by legislation, and policy instruments designed to implement these targets. Only implemented policies were included in this analysis, defined as policies adopted by the government through legislation or executive orders, and non-binding targets backed by effective policy instruments.
First, climate policies were collected with the help of national experts and a literature study (see Supplementary Table 2), and were stored in an open-access database6. With the help of national experts, a selection of high-impact policies was made and translated into model input indicators5. This inventory includes climate and energy policies for the G20 economies, and details the instruments, targets and sectors (see Table 5 and Source Data). It was evaluated with, and expanded by, national experts in two rounds. The cut-off date for the selection of policies was 31 December 2016, and it should be noted that the policy environment is constantly changing. Two policy changes with a possibly high impact have occurred since this date: the United States is not likely to implement the 2025 standards for light-duty vehicles, although current standards are implemented until 2021 (the Clean Power Plan was already not included in the list of high-impact policies), and the European Union adopted a comprehensive set of climate actions that goes beyond the policies included in our analysis. In addition, although the United States announced its withdrawal from the Paris Agreement, this would only enter into effect by November 2020.
Table 5 Number of high-impact policies selected for implementation in the IAMs, per sector and country (details in Supplementary Table 3).
Policy instruments were represented in the integrated assessment models as explicitly as possible, but simplification was sometimes necessary; replicating the impact on greenhouse gas emissions and energy was considered most important. In practice, policy instruments are implemented to achieve national, often aspirational, goals (not secured by legislation or executive orders). These aspirational goals are documented in national policy documents (e.g., National Communications, strategy documents). In some cases, we could directly implement policy instruments in IAMs, such as carbon taxes or regulations (e.g., vehicle fuel-efficiency standards). In other cases, we included aspirational policy targets to represent currently implemented policies, but only if they were backed by effective policy instruments; this was, for example, the case for feed-in tariffs and renewable auctions. If a policy instrument would end before the policy target year, we assumed continuation of this instrument until the target year of the aspirational goal. Where a G20 country is part of a larger model region, the policy (indicator) is aggregated by assuming business-as-usual for countries without policies, and implementation of the policy for countries with policies32. In some cases, models with less sectoral detail used policy indicators (such as CO2 or final-energy reduction) based on the impact of policies in more detailed models or on literature. See Supplementary Note 4 and Supplementary Table 4 for information on how policies were implemented in each global model.
The starting point for the scenario design was the ADVANCE project10. The National policies scenario corresponds to the inventory of energy and climate policies implemented in G20 economies5. Between 42 and 94% of the high-impact policies from the seven G20 economies were implemented in the nine IAMs considered in this paper, estimated to represent 50 to 100% of possible greenhouse gas reductions (see Supplementary Note 2 and Supplementary Table 5). Note that global results also include G20 policies for Argentina, Australia, Canada, Indonesia, Mexico, the Republic of Korea, Saudi Arabia and South Africa, which were not individually addressed in this paper. The national policies were implemented for the period from 2010 to 2030, and equivalent effort was assumed after 2030, defined as a constant percentage reduction relative to the No new policies scenario or similar forms of continued ambition. The NDC scenario was based on information from the NDCs on greenhouse gas reduction targets and energy and land-use policies, and on additional information from Kitous et al.37, den Elzen et al.38 and Grassi et al.39 (land-use estimates), as well as information from the UNFCCC (see Supplementary Tables 6–8 for details). NDC targets can be divided into absolute emission reduction targets, reductions relative to business as usual, emission-intensity reductions, and pledges without greenhouse gas emission targets40; all G20 countries' NDCs are of the first three types. In general, NDC targets for G20 economies are defined for the year 2030, but the US NDC target is defined for the year 2025. The NDCs for China and India are represented by greenhouse gas intensity targets, renewable targets and forestry measures, which could not be translated into one specific absolute greenhouse gas emission level.
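The post-2030 extension rule, a constant percentage reduction relative to the No new policies scenario, can be sketched as follows (a minimal illustration with hypothetical emission levels, not the project's actual code):

```python
def extend_national_policies(nat_2030, nonew_2030, nonew_path):
    """Extend the National policies scenario beyond 2030 by keeping the
    percentage reduction relative to the No new policies scenario
    constant at its 2030 value."""
    reduction_share = 1.0 - nat_2030 / nonew_2030
    return [(1.0 - reduction_share) * e for e in nonew_path]

# Illustrative numbers: a 20% reduction in 2030 (8 vs 10 GtCO2) is
# carried forward over a hypothetical No new policies path for 2035, 2040.
extended = extend_national_policies(8.0, 10.0, [11.0, 12.0])
print(extended)  # 20% below 11.0 and 12.0, i.e. about 8.8 and 9.6
```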
The 2 °C scenario assumes implementation of national policies until 2020 and cost-optimal mitigation measures after 2020, staying within a carbon budget of 1000 GtCO2 between 2011 and 2100. This is in line with carbon budgets of 590 to 1240 GtCO2 from 2015 onwards that would limit global warming by 2100 to below 2 °C, relative to pre-industrial levels, with at least 66% probability. The 1.5 °C scenario starts with cost-optimal deep mitigation measures after 2020 and explores the efforts necessary to keep global warming below 1.5 °C by 2100 with about 66% probability, keeping cumulative carbon emissions within 400 GtCO2 between 2011 and 2100. Both budget assumptions are based on the ADVANCE project10, and are in line with the 66% probability estimates in Table 2.2 of the IPCC AR5 Synthesis Report41.
Indicators to track progress
To give insights into policy impact, we used a variant of the framework of tracking indicators related to the Paris Agreement7,8 (see Formulas 1.1–1.3). CO2 per GDP can be decomposed into energy intensity (final energy/GDP), the low-carbon share of final energy (%), and the utilisation rate (CO2/fossil energy). The most pronounced differences between countries and scenarios are visible for the low-carbon share of final energy and the energy intensity (results are shown in Supplementary Figs. 1 and 2), and are discussed in the article. We analysed not only the impact of policies on CO2 emissions, but also on total greenhouse gas emissions and individual greenhouse gases (energy CO2, industrial-process CO2, AFOLU CO2, non-CO2). In addition, we added mitigation costs per GDP to assess the affordability of climate policy implementation. Partial-equilibrium models such as IMAGE and POLES report these costs as the area under the marginal abatement cost (MAC) curve (i.e., direct mitigation costs), while equilibrium models such as MESSAGE, REMIND and WITCH report consumption losses. MAC-based cost measures tend to exclude existing distortions in the economy42, but as GDP is an exogenous variable in partial-equilibrium models, consumption loss is not available there.
The Kaya decomposition is
$${\mathrm{CO}}_2 = {\mathrm{POP}} \times \frac{\mathrm{GDP}}{\mathrm{POP}} \times \frac{{\mathrm{CO}}_2}{\mathrm{GDP}}$$

$$\frac{{\mathrm{CO}}_2}{\mathrm{GDP}} = \frac{\mathrm{TPES}}{\mathrm{GDP}} \times \frac{{\mathrm{CO}}_2}{\mathrm{TPES}} = \frac{\mathrm{TPES}}{\mathrm{GDP}} \times \frac{\mathrm{FE}}{\mathrm{TPES}} \times \frac{{\mathrm{CO}}_2}{\mathrm{FE}} = \frac{\mathrm{FE}}{\mathrm{GDP}} \times \frac{{\mathrm{CO}}_2}{\mathrm{FE}}$$

$$\frac{{\mathrm{CO}}_2}{\mathrm{GDP}} = \frac{\mathrm{FE}}{\mathrm{GDP}} \times \frac{{\mathrm{FE}}_{\mathrm{fossil}}}{\mathrm{FE}} \times \frac{{\mathrm{CO}}_2}{{\mathrm{FE}}_{\mathrm{fossil}}} = \frac{\mathrm{FE}}{\mathrm{GDP}} \times \left(1 - \frac{{\mathrm{FE}}_{\mathrm{non\hbox{-}fossil}}}{\mathrm{FE}}\right) \times \frac{{\mathrm{CO}}_2}{{\mathrm{FE}}_{\mathrm{fossil}}}$$
POP population; GDP gross domestic product; TPES total primary energy supply; FE final energy.
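The identity in the decomposition can be checked numerically. A minimal sketch with purely illustrative values (units and numbers are hypothetical, not from the study):

```python
def kaya_co2_intensity(fe, gdp, fe_nonfossil, co2, tol=1e-9):
    """Decompose CO2/GDP into energy intensity (FE/GDP), fossil share
    (1 - FE_nonfossil/FE) and utilisation rate (CO2/FE_fossil)."""
    energy_intensity = fe / gdp
    fossil_share = 1.0 - fe_nonfossil / fe
    utilisation = co2 / (fe - fe_nonfossil)
    # The product of the three factors must equal CO2/GDP by construction.
    assert abs(energy_intensity * fossil_share * utilisation - co2 / gdp) < tol
    return energy_intensity, fossil_share, utilisation

# Illustrative: FE = 400 EJ, GDP = 100 (trillion $), non-fossil FE = 80 EJ,
# CO2 = 35 GtCO2 -> 4.0 * 0.8 * 0.109375 = 0.35 = CO2/GDP.
print(kaya_co2_intensity(400.0, 100.0, 80.0, 35.0))
```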
The results are presented (unless stated otherwise) as the median estimate of all model results, together with the 10th and 90th percentiles of these ranges. Differences in greenhouse gas emissions between scenarios (e.g., the implementation gap and the total emissions gap) are calculated by first taking the difference per model and then determining the median and percentiles of the range of differences.
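This per-model-first aggregation rule can be sketched as follows (a hedged illustration with hypothetical model results, not the study's code):

```python
import numpy as np

def gap_statistics(scenario_a, scenario_b):
    """Gap between two scenarios: take the difference per model first,
    then the median and the 10th/90th percentiles of those differences."""
    diffs = np.asarray(scenario_a) - np.asarray(scenario_b)
    return (np.median(diffs),
            np.percentile(diffs, 10),
            np.percentile(diffs, 90))

# Hypothetical 2030 emissions (GtCO2e) from five models under two scenarios.
national_policies = [60.0, 58.0, 61.0, 59.0, 62.0]
ndc               = [56.0, 55.0, 57.0, 56.0, 57.0]
print(gap_statistics(national_policies, ndc))  # median gap is 4.0
```

Taking differences per model before aggregating avoids mixing ranges from models with different baselines.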
The results from this study show that, for national policies, greenhouse gas emissions by 2030 would be somewhat higher, and for well-below-2 °C scenarios lower, than earlier studies indicated2,4,42 (which were based on only one model or had less detail on national policy implementation) (see Supplementary Figs. 4–7).
Emission growth under the National policies scenario by 2030 (see Fig. 1) can be decomposed into four drivers that together represent the total impact (blue bar in Fig. 5). First, historical calibration, calculated as the difference between 2015 model emissions and the PRIMAP26 (version 1.2) data set. Second, socio-economic growth assumptions, calculated as the emission growth between 2015 and 2030 under the No new policies scenario. Third, the policy impact on greenhouse gas emissions, calculated as the difference between 2030 emissions under the No new policies scenario and the National policies scenario, including an estimate of the emission reductions for those policies (see Supplementary Tables 5 and 9) that could not be implemented in certain models (see Supplementary Note 2). Fourth, real uncertainty, represented by model form and heterogeneity.
Fig. 5: Decomposition of the total median emission growth between 2015 and 2030 under the National policies scenario; error bars indicate the 10th to 90th percentile range.
The data is available in the source data.
This shows that the impact of historical calibration on the projected global growth in emissions between 2015 and 2030 is small; this growth depends much more on socio-economic factors such as GDP and population growth. Of the total impact, the policy impact accounts for around one third, and a somewhat larger part is real uncertainty.
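The decomposition described above amounts to simple differences between scenario levels. A minimal sketch, with hypothetical emission levels and a residual term standing in for the remaining (model-heterogeneity) component:

```python
def decompose_growth(model_2015, primap_2015, nopol_2015, nopol_2030, natpol_2030):
    """Split 2015-2030 emission growth under national policies into the
    drivers described in the text; the residual is what remains after
    the three explicit terms."""
    calibration    = model_2015 - primap_2015    # historical calibration
    socio_economic = nopol_2030 - nopol_2015     # growth without new policies
    policy_impact  = nopol_2030 - natpol_2030    # reduction from policies
    total          = natpol_2030 - primap_2015   # total observed growth term
    residual = total - (calibration + socio_economic - policy_impact)
    return calibration, socio_economic, policy_impact, residual

# Illustrative GtCO2e levels; if the No (new) policies path starts from the
# calibrated 2015 model level, the residual is zero by construction.
print(decompose_growth(50.0, 49.0, 50.0, 60.0, 55.0))  # -> (1.0, 10.0, 5.0, 0.0)
```

In the multi-model setting of Fig. 5, the residual across models is what the text calls real uncertainty.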
Effort-sharing
The 2 °C and 1.5 °C scenarios assume cost-optimal implementation of the reduction measures after 2020, with the lowest overall mitigation costs. The result is implementation of measures in the countries where this is cheapest, but this does not imply that the implementing country would need to bear all the costs: these costs can be shared, and thus financed by other countries. The financial flows could be calculated if emission allowances per country were based on so-called effort-sharing approaches representing different equity principles24,43,44. These studies, for example, categorise the effort-sharing approaches in the literature based on four basic equity principles (responsibility, equality, capability and cost-effectiveness), and present the regional greenhouse gas emission allowances in 2020, 2030 and 2050 for these categories. The equity principles were also applied to the carbon budgets (cumulative emissions) for both the 2011–2050 and the 2011–2100 period24, based on calculations from the FAIR model45; see Supplementary Fig. 3 for a comparison with the results from our study.
Model result adjustments
Some model results were adjusted due to missing data on sectors and sub-sectors, different accounting approaches or too-broad regional definitions. The DNE21+ model (at country level) does not include the Agriculture, Forestry and Other Land Use (AFOLU) sector; its results were therefore supplemented with average estimates from the other global models. Although the POLES model does include AFOLU CO2 emissions, based on estimates from national communications, these were harmonised with those from FAOSTAT46, as the accounting approaches of the individual countries were not consistent with the other IAMs. The COPPE-COFFEE model does not include F-gas emissions, which were supplemented with average estimates from the other global models. Some national models cover only energy CO2 emissions (China TIMES, China IPAC-AIM V1.0, AIM India, MARKAL India, PRIMES, RU-TIMES); their industrial CO2 emissions and non-CO2 emissions were supplemented with average estimates from the global models.
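Filling a missing sector for one model with the cross-model average can be sketched as below; the data frame layout and numbers are purely illustrative, not the project's actual data format:

```python
import pandas as pd

# Hypothetical 2030 AFOLU CO2 emissions (MtCO2) per global model; the
# missing DNE21+ value mimics the gap described in the text.
afolu = pd.DataFrame(
    {"model": ["IMAGE", "MESSAGE", "REMIND", "DNE21+"],
     "afolu_co2": [120.0, 150.0, 135.0, None]})

# Replace the missing value with the average of the reporting models.
afolu["afolu_co2"] = afolu["afolu_co2"].fillna(afolu["afolu_co2"].mean())
print(afolu)  # DNE21+ receives the mean of the other three, 135.0
```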
Data reported in Figs. 1–5 and the selection of policies implemented in IAMs can be found in the Source Data. The source data files are also available at https://doi.org/10.17632/2j7sksfh2h.1. The list of policies is based on the open-source Climate Policy Database. The scenario protocol and the selection of high-impact policies included in the protocol are found under Work Package 2 of the deliverables & publications page of the CD-LINKS project. Model results can be found in the open-access CD-LINKS database. Policy-relevant data are available in the Global Stocktake tool. CD-LINKS inventory: http://www.climatepolicydatabase.org/index.php/CDlinks_policy_inventory; Climate Policy Database: http://climatepolicydatabase.org/index.php/Climate_Policy_Database; Deliverables & publications: http://www.cd-links.org/?page_id=620; CD-LINKS database: https://db1.ene.iiasa.ac.at/CDLINKSDB/dsd?Action=htmlpage&page=30; Global Stocktake tool: https://themasites.pbl.nl/global-stocktake-indicators/.
The code from the 20 integrated assessment models is not available in a publicly shareable version, although several modelling teams have published open-source code, visualisation tools or detailed documentation (see Supplementary Table 10 for details). A model description (see Supplementary Note 4) and a description of how national climate policies were implemented (see Supplementary Table 4) are available.
UNFCCC. Paris Agreement, Decision 1/CP.21 (UNFCCC, 2015).
Rogelj, J. et al. Paris Agreement climate proposals need a boost to keep warming well below 2 degrees C. Nature 534, 631–639 (2016).
Vrontisi, Z. et al. Enhancing global climate policy ambition towards a 1.5 °C stabilization: a short-term multi-model assessment. Environ. Res. Lett. 13, 44039 (2018).
Vandyck, T., Keramidas, K., Saveyn, B., Kitous, A. & Vrontisi, Z. A global stocktake of the Paris pledges: Implications for energy systems and economy. Glob. Environ. Change 41, 46–63 (2016).
CD-LINKS. High Impact Policies, http://www.cd-links.org/wp-content/uploads/2016/06/Input-IAM-protocol_CD_LINKS_update_July-2018.xlsx. (2017).
NewClimate Institute, Wageningen University, PBL. CD-LINKS Climate Policy Inventory. http://www.climatepolicydatabase.org/index.php?title=CDlinks_policy_inventory (2016).
Peters, G. P. et al. Key indicators to track current progress and future ambition of the Paris Agreement. Nat. Clim. Change 7, 118–122 (2017).
Le Quéré, C. et al. Drivers of declining CO2 emissions in 18 developed economies. Nat. Clim. Change 9, 213–217 (2019).
IPCC. Chapter 2: Mitigation Pathways Compatible with 1.5 °C in the Context of Sustainable Development (IPCC, 2018).
Luderer, G. et al. Residual fossil CO2 emissions in 1.5–2 °C pathways. Nat. Clim. Change 8, 626–633 (2018).
Riahi, K. et al. Locked into Copenhagen pledges—implications of short-term emission targets for the cost and feasibility of long-term climate goals. Technol. Forecast. Soc. Change 90, 8–23 (2015).
Clarke, L. et al. In Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, 2014)
den Elzen, M. et al. Are the G20 economies making enough progress to meet their NDC targets? Energy Policy 126, 238–250 (2019).
Tavoni, M. et al. Post-2020 climate agreements in the major economies assessed in the light of global models. Nat. Clim. Change 5, 119–126 (2015).
Kriegler, E. et al. A new scenario framework for climate change research: the concept of shared climate policy assumptions. Clim. Change 122, 401–414 (2014).
Kriegler, E. et al. Making or breaking climate targets: the AMPERE study on staged accession scenarios for climate policy. Technol. Forecast. Soc. Change 90, 24–44 (2015).
Iacobuta, G., Dubash, N. K., Upadhyaya, P., Deribe, M. & Höhne, N. National climate change mitigation legislation, strategy and targets: a global update. Clim. Policy 18, 1114–1132 (2018).
Gütschow, J. et al. The PRIMAP-hist national historical emissions time series. Earth Syst. Sci. Data 8, 571–603 (2016).
Fricko, O. et al. The marker quantification of the Shared Socioeconomic Pathway 2: A middle-of-the-road scenario for the 21st century. Glob. Environ. Change 42, 251–267 (2017).
van Vuuren, D. P. et al. The shared socio-economic pathways: trajectories for human development and global environmental change. Glob. Environ. Change 42, 148–152 (2017).
Rogelj, J. et al. Understanding the origin of Paris Agreement emission uncertainties. Nat. Commun. 8, 15748 (2017).
Blanco, G. et al. In Climate Change 2014: Mitigation of Climate Change (Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2014).
Houghton, R. A. et al. Carbon emissions from land use and land-cover change. Biogeosciences 9, 5125–5142 (2012).
van den Berg, N. J. et al. Implications of various effort-sharing approaches for national carbon budgets and emission pathways. Clim. Change February (2019)
Mercure, J. F. et al. Macroeconomic impact of stranded fossil fuel assets. Nat. Clim. Change 8, 588–593 (2018).
Davis, S. J., Caldeira, K. & Matthews, H. D. Future CO2 emissions and climate change from existing energy infrastructure. Science 329, 1330–1333 (2010).
Bertram, C. et al. Complementing carbon prices with technology policies to keep climate targets within reach. Nat. Clim. Change 5, 235–239 (2015).
Stiglitz, J. E. Addressing climate change through price and non-price interventions. Eur. Econ. Rev. 119, 594–612 (2019).
Goulder, L. H. & Parry, I. W. H. Instrument choice in environmental policy. Rev. Environ. Econ. Policy 2, 152–174 (2008).
Jewell, J. & Cherp, A. On the political feasibility of climate change mitigation pathways: Is it too late to keep warming below 1.5°C? WIREs Clim. Change 11, e621 (2020).
Rozenberg, J., Vogt-Schilb, A. & Hallegatte, S. Instrument choice and stranded assets in the transition to clean capital. J. Environ. Econ. Manage. 100, 102183 (2018).
CD-LINKS. Protocol for WP3.2 Global Low-carbon Development Pathways, http://www.cd-links.org/wp-content/uploads/2016/06/CD-LINKS-global-exercise-protocol_secondround_for-website.pdf (2017).
CD-LINKS. Protocol for WP3.3 National Model Scenario Runs, http://www.cd-links.org/?page_id=620 (2017).
UNEP. The Emissions Gap report 2016 (UNEP, 2016).
IAMC. IAMC wiki. https://www.iamcdocumentation.eu/index.php/IAMC_wiki (2017).
Weyant, J. Integrated assessment of climate change: an overview and comparison of approaches and results. https://www.ipcc.ch/site/assets/uploads/2018/06/2nd-assessment-en.pdf (1995).
Kitous, A., Keramidas, K., Vandyck, T. & Saveyn, B. GECO 2016. Global Energy and Climate Outlook. Road from Paris. Impact of Climate Policies on Global Energy Markets in the Context of the UNFCCC Paris Agreement. JRC Science for policy report. https://doi.org/10.2791/662470 (2016).
den Elzen, M. et al. Contribution of the G20 economies to the global impact of the Paris agreement climate proposals. Clim. Change 137, 655–665 (2016).
Grassi, G. et al. The key role of forests in meeting climate targets requires science for credible mitigation. Nat. Clim. Change 7, 220 (2017).
King, L. C. & van den Bergh, J. C. J. M. Normalisation of Paris agreement NDCs to enhance transparency and ambition. Environ. Res. Lett. 14, 84008 (2019).
IPCC. Climate Change 2014: Synthesis report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. (IPCC, 2014).
van Soest, H. L. et al. Early action on Paris Agreement allows for more time to change energy systems. Clim. Change 144, 165–179 (2017).
Raupach, M. R. et al. Sharing a quota on cumulative carbon emissions. Nat. Clim. Change 4, 873–879 (2014).
Höhne, N., den Elzen, M. & Escalante, D. Regional GHG reduction targets based on effort sharing: a comparison of studies. Clim. Policy 14, 122–147 (2014).
Stehfest, E., Van Vuuren, D. P., Bouwman, L. & Kram, T. Integrated Assessment of Global Environmental Change with Model Description and Policy Applications IMAGE 3.0. (PBL Netherlands Environmental Assessment Agency, The Hague, The Netherlands, 2014).
FAOSTAT. Food and Agriculture Data. http://faostat3.fao.org/home/E (FAOSTAT, 2017).
We would like to thank the following people for reviewing the CD-LINKS climate policy database: Chenmin He from Energy Research Institute of the National Development and Research Commission, China (NDRC-ERI), Zbigniew Klimont, Nicklas Forsell, Jessica Jewell and Olga Turkovska from International Institute for Applied Systems Analysis (IIASA), Amit Garg from the Public Systems Group at the Indian Institute of Management, India (IIM), Roberta Pierfederici from Institute for Sustainable Development and International Relations (IDDRI), Ucok WR Siagian from Institut Teknologi Bandung, Indonesia (ITB), Jiyong Eom and Cheolhung Cho from Korea Advanced Institute of Science and Technology, Republic of Korea (KAIST), Takeshi Kuramochi from NewClimate Institute (NCI), Junichiro Oda from Research Institute of Innovative Technology for the Earth, Japan (RITE), Aayushi Awasthy and Swapnil Shekhar from The Energy and Resources Institute, India (TERI), Hongjun Zhang from Tsinghua University, China (TU), Nick Macaluso from Environment and Climate Change Canada (EC), Michael Boulle, Hilton Trollipp from Energy Research Centre, South Africa (ERC) and Daniel Buira (Mexico), Vladimir Potachnikov from National Research University Higher School of Economics (Russian Federation). This work is part of a project funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 642147 (CD-LINKS), and is supported by European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 821471 (ENGAGE) and European Union's DG CLIMA and EuropeAid under grant agreement No. 21020701/2017/770447/SER/CLIMA.C.1 EuropeAid/138417/DH/SER/MulitOC (COMMIT). S. F., K. O.: supported by the Environment Research and Technology Development Fund (2-1908 and 2-1702) of the Environmental Restoration and Conservation Agency. J. D., K. 
K.: the views expressed are purely those of the writers and may not in any circumstances be regarded as stating an official position of the European Commission.
Copernicus Institute of Sustainable Development, Utrecht University, Princetonlaan 8a, 3584 CB, Utrecht, The Netherlands
Mark Roelfsema, Heleen L. van Soest, Mathijs Harmsen & Detlef P. van Vuuren
PBL Netherlands Environmental Assessment Agency, PO Box 30314, 2500 GH, The Hague, The Netherlands
Heleen L. van Soest, Mathijs Harmsen, Detlef P. van Vuuren & Michel den Elzen
Potsdam Institute for Climate Impact Research (PIK), Member of the Leibniz Association, PO Box 601203, 14412, Potsdam, Germany
Christoph Bertram, Elmar Kriegler, Gunnar Luderer, Falko Ueckerdt & Florian Humpenöder
Environmental Systems Analysis Group, Wageningen University & Research, PO Box 47, 6700 AA, Wageningen, The Netherlands
Niklas Höhne & Gabriela Iacobuta
NewClimate Institute, Clever Strasse 13–15, 50668, Cologne, Germany
Niklas Höhne
International Institute for Applied Systems Analysis (IIASA), Schlossplatz 1, 2361, Laxenburg, Austria
Volker Krey, Keywan Riahi, Stefan Frank, Oliver Fricko, Matthew Gidden & Daniel Huppmann
Chair of Global Energy Systems, Technische Universität Berlin, Straße des 17. Juni 135, 10623, Berlin, Germany
Gunnar Luderer
European Commission, Joint Research Centre, Edificio Expo, C/Inca Garcilaso, 3, 41092, Seville, Spain
Jacques Després & Kimon Keramidas
RFF-CMCC European Institute on Economics and the Environment (EIEE), Centro Euro-Mediterraneo sui Cambiamenti Climatici, Via Bergognone, 34, 20144, Milan, Italy
Laurent Drouet, Johannes Emmerling & Lara Aleluia Reis
Climate Analytics, Ritterstrasse 3, Berlin, Germany
Matthew Gidden
Kyoto University, C1-3, Kyotodaigaku-Katsura, Nishikyo-ku, Kyoto, Japan
Shinichiro Fujimori & Ken Oshiro
E3M-Lab, Institute of Communication and Computer Systems, National Technical University of Athens, Iroon Politechniou Street, 15 773 Zografou Campus, Athens, Greece
Kostas Fragkiadakis, Zoi Vrontisi & Maria Kannavou
Research Institute of Innovative Technology for the Earth, Kyoto, 619-0292, Japan
Keii Gi
COPPE, Universidade Federal do Rio de Janeiro, PO Box 68565, 21941-914, Rio de Janeiro, RJ, Brazil
Alexandre C. Köberle, Pedro Rochedo & Roberto Schaeffer
Grantham Institute, Imperial College London, Exhibition Road, London, SW7 2AZ, UK
Alexandre C. Köberle
Institute of Energy, Environment and Economy, Tsinghua University, 100084, Beijing, China
Wenying Chen
Joint Global Change Research Institute, Pacific Northwest National Laboratory, 5825 University Research Court, Suite 3500, College Park, MD, 20740, USA
Gokul C. Iyer & Jae Edmonds
Energy Research Institute, National Development and Reform Commission, B1505, Guohong Building, Jia.No.11, Muxidibeili, 100038, Beijing, Xicheng District, China
Kejun Jiang
The Energy & Resources Institute (TERI), India Habitat Center, Lodhi Road, New Delhi-3, India
Ritu Mathur
National Research University Higher School of Economics (HSE), 20 Myasnitskaya street, Moscow, Russian Federation, 101000
George Safonov
Indian Institute of Management-Ahmedabad, Public Systems Group, Vastrapur, Ahmedabad, Gujarat, India
Saritha Sudharmma Vishwanathan
National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki, Japan
M.R., D.V. wrote the paper, and all authors contributed to the analysis and article review. Figures were created by H.v.S. and M.R.. M.R., H.v.S., D.v.V., M.d.E. and F.U. coordinated the analysis for this paper. The policy inventory and database was created by N.H., G.I., M.R., H.v.S. and D.v.V. The CD-LINKS project was supervised by K.R. and V.K., and advised by J.E. M.H., E.K., G.L., K.R., M.R., H.v.S. and D.v.V. coordinated the global modelling exercise, and C.B., D.H., V.K., E.K., G.L., K.R., R.S., H.v.S., F.U. and D.v.V. coordinated the national modelling exercise. D.v.V. and N.H. supervised the collection of policies, and D.v.V. and M.R. the protocol for model runs. The scenario database was coordinated by D.H. and V.K. Global model runs (incl. documentation) were accomplished by M.H., M.R., H.v.S. (IMAGE), C.B., F.H., E.K., G.L., F.U. (REMIND), S.F., O.F., M.G., V.K. (MESSAGE), L.D., J.E., L.A.R. (WITCH), Z.V., K.F. (GEM-E3), J.D., K.K. (POLES), R.S., P.R. (COPPE-COFFEE), A.K. (BLUES), S.F., K.O. (AIM/CGE, AIM Enduse Japan), K.G. (DNE21+), W.C. (China TIMES), G.I. (GCAM-USA), M.K. (PRIMES), G.S. (RU-TIMES), S.S.V. (AIM India), J.K. (IPAC China), R.M. (MARKAL India).
Correspondence to Mark Roelfsema.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Roelfsema, M., van Soest, H.L., Harmsen, M. et al. Taking stock of national climate policies to evaluate implementation of the Paris Agreement. Nat Commun 11, 2096 (2020). https://doi.org/10.1038/s41467-020-15414-6
If $2x+7$ is a factor of $6x^3+19x^2+cx+35$, find $c$.
Since $2x+7$ is a factor, we should get a remainder of $0$ when we divide $6x^3+19x^2+cx+35$ by $2x+7$.
\[
\begin{array}{c|cccc}
\multicolumn{2}{r}{3x^2} & -x&+5 \\
\cline{2-5}
2x+7 & 6x^3&+19x^2&+cx&+35 \\
\multicolumn{2}{r}{-6x^3} & -21x^2 \\
\cline{2-3}
\multicolumn{2}{r}{0} & -2x^2 & +cx \\
\multicolumn{2}{r}{} & +2x^2 & +7x \\
\cline{3-4}
\multicolumn{2}{r}{} & 0 & (c+7)x & + 35 \\
\multicolumn{2}{r}{} & & -10x & -35 \\
\cline{4-5}
\multicolumn{2}{r}{} & & (c+7-10)x & 0 \\
\end{array}
\]The remainder is $0$ if $c+7-10=0$, so $c=\boxed{3}$. | Math Dataset |
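The result can be double-checked numerically; a minimal sketch using NumPy's polynomial division (our own verification, not part of the original solution):

```python
import numpy as np

# Divide 6x^3 + 19x^2 + cx + 35 by 2x + 7 with the claimed c = 3.
dividend = [6, 19, 3, 35]   # coefficients, highest degree first
divisor = [2, 7]            # 2x + 7

quotient, remainder = np.polydiv(dividend, divisor)

print(quotient)    # [ 3. -1.  5.]  -> quotient 3x^2 - x + 5
print(remainder)   # ~0, so 2x + 7 divides evenly when c = 3
```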
EURASIP Journal on Advances in Signal Processing
Cramer-Rao bounds in the estimation of time of arrival in fading channels
René Játiva ORCID: orcid.org/0000-0001-8743-09591 &
Josep Vidal2
EURASIP Journal on Advances in Signal Processing volume 2018, Article number: 19 (2018) Cite this article
This paper computes the Cramer-Rao bounds for the time of arrival estimation in a multipath Rice and Rayleigh fading scenario, conditioned on the previous estimation of a set of propagation channels, since these channel estimates (correlation between the received signal and the pilot sequence) are sufficient statistics in the estimation of delays. Furthermore, channel estimation is a constitutive block in receivers, so we can take advantage of this information to improve timing estimation by using time and space diversity. The received signal is modeled as coming from a scattering environment that disperses the signal both in space and time. Spatial scattering is modeled with a Gaussian distribution and temporal dispersion as an exponential random variable. The impact of the sampling rate, the roll-off factor, the spatial and temporal correlation among channel estimates, the number of channel estimates, and the use of multiple sensors in the antenna at the receiver is studied and related to the mobile subscriber positioning issue. To our knowledge, this model is the only one of its kind to relate space-time diversity to the accuracy of the timing estimation.
Positioning of a mobile subscriber is a complex task that has the capability of adding value to services and applications such as navigational aids, and patient and personnel monitoring [1]. It is also useful when performing driving tests [2] and helps to enhance mobile network resource allocation, handover decisions, etc. [3]. Research in this area continues with increasing complexity [4, 5] so as to sustain the adaptation of these principles to new emerging technologies [6, 7].
Network-based positioning is performed through the estimation of signal parameters involved in the communication process. These parameters may include time of arrival (TOA), direction of arrival (DOA), observed time differences of arrival (OTDOA), signal strength (SS), etc. Estimators based on time are preferred over those based on bearing due to their better resolution, but hybrid techniques may also be implemented to reduce positioning variance error [8]. Furthermore, SS measurements may be added to TOA- or OTDOA-based methods to increase positioning accuracy in line of sight (LOS) environments, such as indoors, using ultra-wide band radios (UWB) or wireless sensor networks [9–12]. The position may be estimated from these parameters using propagation relations or pattern recognition techniques [9, 13].
Permanent efforts have been made to characterize wireless channels [14–17], and practical estimators have been derived. For instance, Bengtsson [18], Besson [19], and Valaee [20] have described several techniques based on signal subspace to estimate DOA and angular spread for wireless dispersed signal, whereas Raleigh [21] and Wax [22], among others [23] have studied the problem of Joint Space-Time Estimation in a multipath environment.
However, since the final performance of a specific positioning technique depends on the way signal parameters are estimated, a general comparison of the different techniques is difficult. For this reason, we study the problem of TOA estimation in both Rice and Rayleigh propagation conditions from a Cramer-Rao perspective, since the lower bound of an unbiased estimator determines the best possible behavior in the estimation of a particular parameter of interest. In this way, the limiting variances for timing can be used to gain insight into the positioning accuracy.
Other bounds besides the Cramer-Rao bound (CRB) exist, such as the Barankin bound (BB) [24] or the Ziv-Zakai bound (ZZB) [25]. The BB claims to be the greatest lower bound on mean square error (MSE) for a uniformly unbiased estimator, but it is generally incomputable analytically [24]; the ZZB is useful in environments such as GNSS (Global Navigation Satellite System), where the signal-to-noise ratio (SNR) is very low and the CRB cannot be used. However, we prefer the use of the CRB since it is adequate for modeling Gaussian processes [25, 26]. In addition, it is useful for identifying whether a particular estimator is the minimum variance unbiased (MVU) estimator and whether an MVU estimator really exists. Furthermore, in the case that such an MVU estimator does not exist, it can still predict the performance of maximum likelihood estimates in an approximate sense under conditions of high SNR or when a large number of observations is available [26].
In addition to the deterministic CRB, which models some parameters as unknown deterministic variables as in our case, the Bayesian CRB (BCRB) models some unknown parameters as random. However, it has been reported that in certain cases the results predicted by CRBs or BCRBs are too optimistic, and some modifications to the classical CRBs have been proposed lately. These postpone the expectation operator required for the Fisher information matrix (FIM) computation: the matrix inversion is performed first, and the expectation operator is then applied to compute the modified CRB (MCRB) [27]. These latter variations of the CRBs are out of the scope of this paper.
Although other approaches exist in the computation of the CRB for TOA, to the best of our knowledge, our model is in fact the most complete of its kind in the literature, since it incorporates a way to take into account spatial and temporal correlation among channel estimates, and the impact of the roll-off factor, in addition to the number of sensors and the number of estimates that are typical from other approaches [28–32]. Our model also assumes an exponential dispersion for delays, which is characteristic of mobile channels, instead of just a few paths [28–31, 33, 34]. Furthermore, we provide asymptotic expressions for the general case, suitable for high levels of SNR and a large number of channel vector estimates [35, 36].
Finally, it is important to point out that our model assumes no biased measurements; in other words, we assume that the first arrival, although weak due to the shadowing (non LOS condition), is in fact related to the LOS component. In fact, the non LOS (NLOS) condition is an important issue for the location problem and therefore its identification and mitigation are still a current research topic [37–46]. For example, it is conceptually interesting to consider the use of Bayesian mechanisms which take advantage of system dynamics and add any previous knowledge available, in order to smartly select, among a set of measurements, those with the capacity to lead to a more confident estimation. Some of these strategies use variations of the Kalman filter (KF) to incorporate this intelligence into the Positioning Computing Function [42–44] and employ some lateral information such as the signal quality indicator associated with LOS/NLOS in [42] or prior knowledge to adjust NLOS data toward the corresponding LOS values [40].
The structure of the paper is as follows. Section 2 first introduces the assumptions on the signal model, presents a brief discussion of signal dispersion and the coherence time for delays, which is required prior to introducing the channel model, and lastly gives the procedures to compute the true CRBs as well as the asymptotic expressions for the timing. Section 3 presents the CRB characteristics for LOS and NLOS models and contrasts these results with those provided by a practical timing estimator. Section 4 summarizes the main observations, conclusions and recommendations.
Signal model
Model assumptions
The following assumptions are taken into consideration:
AS 1. Channel introduces multipath propagation; therefore, the signal is dispersed in space and time from the LOS component. Statistical independence for angular and temporal dispersion processes is assumed. Independence is a reasonable assumption because each path is affected in a different and unpredictable way by the propagation environment. The first TOA is the parameter of interest for the problem, while the angular parameters are nuisance parameters required for the characterization of the CRB.
AS 2. Despite the channel having a coherence time for the tap amplitudes [47, 48], delay and even angle information may remain within tolerable limits much longer due to the large ratio between the speed of light and the mobile speed, and the relatively large distances between transmitter and receiver. Therefore, many channel estimates can be collected in time so as to improve the accuracy of the timing and angle estimates [49].
AS 3. The first arrival is analyzed as the one bearing timing for position information. Measures for TOA are computed from channel estimates available at the receiver through a correlation function. A full maximum likelihood (ML) estimation of all propagation parameters (delays and angles) is considered an approach that is too expensive in a dispersive channel, where the number of parameters could be too large, and might lead to inconsistent estimates if the available number of channels is low.
AS 4. Noise present in the channel estimates is white and Gaussian, which is a reasonable assumption after the matched filter [50]. Our analysis does not strictly consider a multi-user environment, but this assumption is reasonable even in this case since all other users have been at least partially canceled.
AS 5. The power angular spectrum (PAS) is symmetric and dispersive and exhibits just a single mode with the mean value associated to the right angular position of the transmitter (the UE if the PAS is computed at the base station (BS)). Gaussian and Laplacian [15, 51–54] models usually describe the marginal probability P(θ) in (1):
$$\begin{array}{@{}rcl@{}} P(\theta) = \int {P(\theta,\tau)d\tau } \end{array} $$
A single modality is a reasonable assumption as long as the channel bandwidth is large, and therefore the channel estimated at chip time includes only rays impinging from a narrow solid angle. Furthermore, some experimental evidence shows that the probability of having more than a cluster in a typical urban environment reaches 13% and it reduces to 8% in suburban areas [15].
AS 6. A continuous power spectrum is used in delay for the marginal function P(τ) in (2):
$$\begin{array}{@{}rcl@{}} P(\tau) = \int {P(\theta,\tau)d\theta } \end{array} $$
It is assumed to fit an exponential shaping [47], estimated at a fraction of the chip time. For the extraction of timing information, the same angular distribution for all delays is assumed. This may not be very realistic, but it allows for reducing the number of parameters in the model and keeps the problem tractable [34, 54].
Note that our basic model may be used to study some NLOS scenarios since each tap of the channel impulse response is a zero mean random variable. In contrast, an LOS situation implies a non-zero mean, where the first arrival is considered the one conveying the unbiased location of the UE. Therefore, and in order to achieve more general results, this basic model has been enhanced to introduce the LOS condition as a symmetric kernel for the angular distribution with a peak discontinuity at the true angular position of the source, as described in Section 2.4.2.
AS 7. A first-order autoregressive (AR) Markov process is assumed for the evolution of the channel over time due to Doppler [55].
This model is very convenient for the purpose of location, since only a few parameters of interest are going to be computed, rather than all delays and angles which are usually nuisance parameters. First arrival is the desired parameter, since in most cases, positioning accuracy just slightly improves with the use of the whole multipath coming from the LOS nodes in comparison with the use of their first components only [33, 56]. Furthermore, in a practical positioning system deployment, the transmission of these parameters to a remote device is required [57], so a lower number of parameters reduces the signaling channel bandwidth.
Coherence time and delay dispersion
Channel estimation is limited by mobility. Coherence time corresponds to the interval in which the channel is essentially considered time invariant, and it is related to the inverse of the Doppler variation. It is easy to perceive that timing estimation will be affected at least by an error related to the displacement of the MS (mobile station) in the observation interval. Hence, this coherence time for the first arriving signal may be related to the maximum allowed delay uncertainty (η) introduced by the movement and the radial component of the speed vector (v r ) of the mobile, as discussed in [35]. When v r is very small, errors due to displacements are also small and the number of available channel estimates required to keep a specified uncertainty grows; hence, a practical limit has to be imposed on the observation time T acq (associated with the latency experienced by the user in the availability of the position) in order to deliver the timing to the Position Computing Function (PCF). Therefore, the number of channel estimates reaches a finite limit.
Figure 1 exhibits a set of characteristics related to the mean number of channel vector estimates (K), the mobile speed and the expected accuracy in TOA estimation for typical parameters of a Wideband Code Division Multiple Access (WCDMA) system, a chip rate of 3.84 Mcps and a timeslot of 666.66 μs [35, 36]. Note that the maximum number of observations is limited by the acquisition time and the mobile speed. A faster MS will have less time for channel acquisition, and a compromise will be required to achieve the best timing accuracy.
Expected number of channel estimates available to achieve timing in terms of the subscriber speed. Results are exhibited for a maximum acquisition time of 1.5 s and several different allowed timing errors ε, given as fractions of chip time
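The trade-off illustrated in Fig. 1 can be sketched as follows. This is our own illustrative computation, assuming the WCDMA chip rate and slot duration quoted in the text and a simple linear-displacement error model; the function name and default values are ours:

```python
# Number of channel estimates K available before the MS displacement
# exceeds a timing error of eps chip times, capped by the acquisition time.
C_LIGHT = 3e8        # propagation speed [m/s]
CHIP_RATE = 3.84e6   # WCDMA chip rate [chips/s]
T_SLOT = 666.66e-6   # time between consecutive channel estimates [s]

def num_channel_estimates(v_r, eps, t_acq=1.5):
    """K for radial speed v_r [m/s], allowed error eps [chip times], t_acq [s]."""
    d_max = eps * C_LIGHT / CHIP_RATE            # allowed displacement [m]
    t_move = d_max / v_r if v_r > 0 else float("inf")
    return max(1, int(min(t_acq, t_move) / T_SLOT))

# A slow pedestrian is limited by the acquisition time; a fast vehicle
# is limited by its own displacement.
print(num_channel_estimates(1.0, 0.25))   # pedestrian
print(num_channel_estimates(30.0, 0.1))   # vehicle
```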
From a systemic viewpoint, mobile location may be improved when system dynamics [49] and previous knowledge from the statistics of the measures are used [42] and also from the use of a heterogeneous set of measurements [12, 44, 49].
Channel model
The observed signal is a set of K channel vector estimates collected over a set of N s sensors. Each channel vector estimate z is of length N, and it is estimated from the correlation of a known sequence with the received signal. Notations of frequent use within this paper are summarized in Table 1.
Table 1 Notations
The signal received by j-th sensor is noted as y(j)(t) and is expressed in (3) as the summation of multipath components and noise n(j)(t). Each replica of the transmitted signal x(t) arriving at delay τ i is affected by (i) a time-varying unit-power steering coefficient b ij (t), associated with the path impinging angle in relation to the antenna array geometry; (ii) the path attenuation factor γ i (t); and (iii) a time invariant (over time intervals of length KT s ) Doppler frequency f i , where T s is the time between two consecutive channel estimates:
$$ {y^{\left(j \right)}}\left(t \right) = \sum\limits_{i = 1}^{{N_{\text{paths}}}} {{b_{ij}}\left(t \right){\gamma_{i}}\left(t \right)x\left({t - {\tau_{i}}} \right){e^{j2\pi {f_{i}}t}}} + {n^{\left(j \right)}}\left(t \right) $$
The i-th index discriminates the component within the multipath, and Npaths is the number of impinging paths at the receiver. The transmitted signal x(t) corresponds to the convolution of a pseudo-noise sequence p(n) with the symmetric pulse shape g(t):
$$ x\left(t \right) = \sum\limits_{n}^{} {g\left({t - nT} \right)} p(n) $$
where T is the symbol time. A correlator estimates the channel from the received signal y(j)(t) at each sensor j, and temporal lag s, with the help of the pseudo-noise sequence p(n) of N p symbols,
$$ z_{s}^{(j)}\left(t \right) = \frac{1}{{{N_{p}}}}\sum\limits_{n} {{y^{\left(j \right)}}\left({t + {\tau_{s}} + nT} \right){p^{*}}\left(n \right)} $$
where \(z_{s}^{(j)}\left (t \right)\) corresponds to the estimated channel coefficient at j-th sensor and s-th lag, as a function of time. Replacing (3) in (5), assuming zero mean noise, using the fact that the sequence p(n) has unit power and is temporally uncorrelated, and the assumption that within N p symbols the steering coefficient and the path attenuation factor remain constant, and by discarding some cumbersome algebraic details, (5) becomes (6):
$$ z_{s}^{(j)}\left(t \right) = \sum\limits_{i = 1}^{{N_{\text{paths}}}} {{b_{ij}}\left(t \right){\gamma_{i}}\left(t \right){e^{j2\pi {f_{i}}t}}g\left({t - {\tau_{i}} + {\tau_{s}}} \right)} + w_{s}^{\left(j \right)}\left(t \right) $$
Equation (6) above shows that the estimated channel is obtained synchronously with the transmission time; therefore, we can set t=kT s , so that the temporal variation of \(z_{s}^{(j)}\) in (7) depends on k and the Doppler frequencies are re-scaled:
$$ z_{s}^{(j)}\left(k \right) = \sum\limits_{i = 1}^{{N_{\text{paths}}}} {{b_{ij}}\left(k \right){\gamma_{i}}\left(k \right){e^{j2\pi {f_{i}}{T_{s}}k}}g\left({{\tau_{s}} - {\tau_{i}}} \right)} + w_{s}^{\left(j \right)}\left(k \right) $$
The residual noise component \(w_{s}^{(j)}\left(k \right)\) is given by (8) and may be modeled as a zero mean complex white Gaussian random process (AS 4).
$$ w_{s}^{\left(j \right)}\left(k \right) = \frac{1}{{{N_{p}}}}\sum\limits_{n} {{n^{\left(j \right)}}\left({k{T_{s}} + {\tau_{s}} + nT} \right){p^{*}}\left(n \right)} $$
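A minimal numerical sketch of the correlator in (5) for a single flat-fading tap; all values below (pilot length, tap gain, noise level) are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

Np = 256                                  # pilot length (assumed)
p = rng.choice([-1.0, 1.0], size=Np)      # unit-power pseudo-noise pilot

h_true = 0.8 - 0.3j                       # single channel tap (assumed)
noise = 0.1 * (rng.standard_normal(Np) + 1j * rng.standard_normal(Np)) / np.sqrt(2)
y = h_true * p + noise                    # received samples at one lag

z = np.sum(y * np.conj(p)) / Np           # correlator output, Eq. (5)
# z estimates h_true; the residual noise variance shrinks as 1/Np,
# consistent with the white Gaussian noise model of AS 4 and Eq. (8).
print(abs(z - h_true))
```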
Furthermore, taking into consideration that the multipath signal in our model has an exponential distribution and that the observation window is large enough to capture most of the energy from this scattered signal, and also considering just one arrival per lag, Npaths in (7) has been set equal to the number of lags N in the observation window. Therefore, stacking the channel coefficients at the s-th lag described by (7) yields the channel vector estimate at sensor j and slot k, z(j)(k). This vector may be expressed in terms of the shaping pulse and the noise estimation vector w(j)(k) as in (9),
$$ \mathbf{z}_{}^{\left(j \right)}\left(k \right) = {\mathbf{G}_{s}}\mathbf{b}_{}^{\left(j \right)}\left(k \right) + \mathbf{w}_{}^{\left(j \right)}\left(k \right) $$
where the i-th element of vector b(j)(k) contains b(ij)(k)·γ i (k)·exp(j2πf i T s k) and the i-th column of the NxN square matrix G s contains the shaping pulse delayed by τ i samples:
$$\begin{array}{@{}rcl@{}} {\mathbf{G}_{s}}\left(\beta \right) = \frac{1}{{\sqrt {{T_{s}}\left({1 - {\beta / 4}} \right)} }}\left[ { \begin{array}{llll} {{\mathbf{g}_{{\mathbf{s}_{\mathbf{1}}}}}}&{{\mathbf{g}_{{\mathbf{s}_{\mathbf{2}}}}}}& \ldots &{{\mathbf{g}_{{\mathbf{s}_{\mathbf{N}}}}}} \end{array}} \right] \end{array} $$
Observe that each of the pulse shape vectors, g si , in (10) may be modeled as in (11), where its elements g s−i refer to the shaping pulse sampled at g(τ s −τ i ) as described by (12):
$$\begin{array}{@{}rcl@{}} \mathbf{g}_{{\mathbf{s}_{i}}}^{T} = \left[ {\begin{array}{*{20}{c}} {{g_{1 - i}}}& \cdots &{\underbrace{1}_{i\text{-th element}}}& \cdots &{{g_{N - i}}} \end{array}} \right] \end{array} $$
$$\begin{array}{@{}rcl@{}} g_{k} = \frac{{sinc\left({\frac{k}{{{N_{spc}}}}} \right)\cos \left({\frac{{\pi \beta k}}{{{N_{spc}}}}} \right)}}{{1 - {{\left({{2\beta k} / {{N_{spc}}}} \right)}^{2}}}} \end{array} $$
N spc corresponds to the number of acquired samples per chip time. The length N of vectors in (9) and (11) is the number of lags in channel estimates.
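A sketch of (11)-(12) in NumPy; this is our code, the overall 1/√(T s (1−β/4)) normalization of (10) is omitted for clarity, and the value at the removable singularity |2βk/N spc | = 1 is filled in by L'Hôpital's rule:

```python
import numpy as np

def raised_cosine(k, beta, n_spc):
    """Shaping-pulse samples g_k of Eq. (12); k may be an integer array."""
    x = np.asarray(k, dtype=float) / n_spc
    denom = 1.0 - (2.0 * beta * x) ** 2
    singular = np.isclose(denom, 0.0)
    g = np.sinc(x) * np.cos(np.pi * beta * x) / np.where(singular, 1.0, denom)
    # limit value at the removable singularity |2*beta*k/n_spc| = 1
    limit = (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)) if beta > 0 else 1.0
    return np.where(singular, limit, g)

def pulse_matrix(N, beta, n_spc):
    """N x N matrix G_s of Eqs. (10)-(11): element (s, i) is g(tau_s - tau_i),
    without the normalization factor of Eq. (10)."""
    s = np.arange(N)
    return raised_cosine(s[:, None] - s[None, :], beta, n_spc)

# At symbol-rate sampling (n_spc = 1) the sinc zeros make G_s the identity,
# matching the remark after Eq. (22).
G = pulse_matrix(8, 0.22, 1)
```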
From (9), we can compute correlation matrix for two channel estimates obtained from slots k and m, and sensors j, j′, as in (13):
$$ {\begin{aligned} E\left\{ {{\mathbf{z}^{\left(j \right)}}\left(k \right){\mathbf{z}^{\left({j'} \right)H}}\left(m \right)} \right\} &= {\mathbf{G}_{s}}E\left\{ {{\mathbf{b}^{\left(j \right)}}\left(k \right){\mathbf{b}^{\left({j'} \right)}}^{H}\left(m \right)} \right\}\mathbf{G}_{s}^{T} + \sigma_{w}^{2}{\mathbf{I}_{N}}\\ &= {\rho_{jj'}}{\alpha^{k - m}}{\mathbf{G}_{s}}\left(\beta \right){\boldsymbol{\Lambda }_{\tau} }\mathbf{G}_{s}^{T}\left(\beta \right) + \sigma_{w}^{2}{\mathbf{I}_{N}} \end{aligned}} $$
where β is the roll-off factor shaping the transmission pulse, and Λ τ is a diagonal matrix that models signal temporal dispersion and its exponential power contribution. The last factorization is possible under the assumption of statistical independence for angular and temporal dispersion processes (AS 1) and also for multipath propagation and Doppler shift mechanisms. In fact, the i,l element of the signal correlation matrix
$$ \begin{aligned} E{\left\{ {{\mathbf{b}^{\left(j \right)}}\left(k \right){\mathbf{b}^{\left({j'} \right)}}^{H}\left(m \right)} \right\}_{i,l}} &= E\left\{ {{b_{ij}}\left(k \right){\gamma_{i}}\left(k \right){e^{j2\pi {f_{i}}{T_{s}}k}} \cdot b_{lj'}^{*}\left(m \right){\gamma_{l}}\left(m \right){e^{- j2\pi {f_{l}}{T_{s}}m}}} \right\}\\ &= {\rho_{jj'}}{r_{il}}{\alpha^{k - m}}{\delta_{il}} \end{aligned} $$
adopts the definitions:
$$\begin{array}{@{}rcl@{}} \begin{aligned} E\left\{ {{b_{ij}}\left(k \right)b_{lj^{\prime}}^{*}\left(m \right)} \right\} &= {\rho_{jj^{\prime}}};\quad \\ E\left\{ {{e^{j2\pi {T_{s}}\left({{f_{i}}k - {f_{l}}m} \right)}}} \right\} &= {\alpha^{k - m}};\\ E\left\{ {{\gamma_{i}}\left(k \right){\gamma_{l}}\left(m \right)} \right\} &= {r_{il}}{\delta_{il}}\\ {r_{ii}} = E\left\{ {{\gamma_{i}}\left(k \right){\gamma_{i}}\left(m \right)} \right\} &= {P_{s}}{e^{- {\lambda_{n}}\left({i - {k_{0}}} \right)}}u\left({i - {k_{0}}} \right) \end{aligned} \end{array} $$
where \(\phantom {\dot {i}\!}\rho _{jj^{\prime }}\) refers to the correlation between signatures at sensors j and j′; α refers to the temporal correlation between channel estimates in two consecutive slots when the temporal variation has been modeled as a first-order AR Markov process (AS 7); r il refers to the correlation between the path gains at lags i and l; and k0 refers to the TOA of the first path.
In particular, r il is zero for paths at different lags since they fade independently and are assumed to be uncorrelated. Furthermore, the form of r ii in (15) responds to the assumption of having an exponential power delay profile (AS 6) with parameter λ n , and it is very suitable for a NLOS condition.
Additionally, if vectors are arranged as
$$ \begin{aligned} \mathbf{w} &= {\left[ {\begin{array}{llll} {{\mathbf{w}^{(1)}}{{\left(1 \right)}^{T}}}& \ldots &{{\mathbf{w}^{(1)}}{{\left(K \right)}^{T}}}&{ \cdots {\mathbf{w}^{({N_{s}})}}{{\left(K \right)}^{T}}} \end{array}} \right]^{T}}\\ \mathbf{z} &= {\left[ {\begin{array}{*{20}{c}} {{\mathbf{z}^{(1)}}{{\left(1 \right)}^{T}}}& \ldots &{{\mathbf{z}^{(1)}}{{\left(K \right)}^{T}}}&{ \cdots {\mathbf{z}^{({N_{s}})}}{{\left(K \right)}^{T}}} \end{array}} \right]^{T}} \end{aligned} $$
both signal and noise components may be described as temporally stationary, complex Gaussian random processes with certain means and correlation matrices. Noise is zero mean, temporally uncorrelated and independent of the propagation channel vectors and of variance \(\sigma _{w}^{2}\).
When estimates in z are achieved under an NLOS condition, channel angular spread will tend to increase [15, 38]. Such is the case, for instance, of a receiver at a mobile station (MS) in an urban environment. In this case, propagation is Rayleigh [35], and z may also be modeled as zero mean with correlation matrix R z . The general case for the model, however, corresponds to have LOS and Rice propagation [58]. In this case, the mean vector, μ z , is not null. It could be the case for a receiver at the base station (BS) in a suburban environment. Therefore, we can model noise and signal as in (17):
$$ \mathbf{w} \sim CN\left({{\mathbf{0}},\sigma_{w}^{2}\mathbf{I}} \right)\;,\quad \mathbf{z} \sim CN\left({{\mathbf{\mu }_{\mathbf{z}}},{\mathbf{R}_{z}}} \right) $$
The correlation matrix for channel estimates,
$$ {\mathbf{R}_{\mathbf{z}}} = E\left\{ {\mathbf{z}{\mathbf{z}^{H}}} \right\} $$
is related to channel estimates at different slots, sensors and lags.
The correlation matrix, R z , may be written in the form
$$ {\mathbf{R}_{\mathbf{z}}} = {\mathbf{R}_{\phi} }(\mathbf{\rho }) \otimes \mathbf{T}(\alpha) \otimes {P_{s}}{\mathbf{G}_{s}}\left(\beta \right){\boldsymbol{\Lambda }_{{\tau }}}({\lambda_{n}}){\mathbf{G}_{s}}^{H}\left(\beta \right) + \sigma_{w}^{2}\mathbf{I} $$
in terms of their temporal and spatial components [36, 58]. In this expression, the dispersed signal power factor, P s , refers to the variance of the received estimated path-power for first arrival from temporally dispersed signal in the case of Rayleigh propagation. Additionally, the temporal correlation matrix, T(α), takes into consideration the temporal variation for the channel, and it is assumed to be equal for all delays; the spatial correlation matrix, R ϕ (ρ), contains the correlation coefficients for signatures between sensors; and ⊗ denotes the Kronecker product [59].
The exponential model used for delays is usually proposed in channel models, and it is given by
$$ {\left\{ {{\boldsymbol{\Lambda}_{\tau} }} \right\}_{i,i}} = \exp \left[ { - \left({i - {k_{0}}} \right){\lambda_{n}}} \right]u\left({i - {k_{0}}} \right) $$
in terms of both, the first arrival position k0, and the dimensionless parameter λ n . This latter is inversely related to delay spread normalized by the symbol time, and therefore it is closely related to channel coherence bandwidth [47, 48]. In the following, λ n will be called the normalized coherence bandwidth.
The spatial correlation matrix, R ϕ (ρ) is modeled as
$$\begin{array}{@{}rcl@{}} {\mathbf{R}_{\phi} }\left({\rho} \right) = \left[{\begin{array}{cccc} 1&{{\rho_{12}}}& \cdots &{{\rho_{1{N_{s}}}}}\\ {\rho_{12}^{*}}&1& \cdots &{{\rho_{2{N_{s}}}}}\\ \vdots & \vdots & \ddots & \vdots \\ {\rho_{1{N_{s}}}^{*}}&{\rho_{2{N_{s}}}^{*}}& \cdots &1 \end{array}} \right] \end{array} $$
where the dependence on the source mean bearing and its angular spread enters through the correlation vector ρ, as will be explained later in (27)
T(α) is modeled as a first-order AR Markov process,
$$\begin{array}{@{}rcl@{}} \mathbf{T}(\alpha) = \left[ {\begin{array}{ccccc} 1&{{\alpha_{}}}&{\alpha_{}^{2}}& \cdots &{\alpha_{}^{K - 1}}\\ {{\alpha_{}}}&1&{}& \ddots &{\alpha_{}^{K - 2}}\\ {\alpha_{}^{2}}&{}& \ddots &{}&{}\\ \vdots & \ddots &{}&{}&{}\\ {\alpha_{}^{K - 1}}&{}&{}&{}&1 \end{array}} \right] \end{array} $$
α is the temporal correlation coefficient between two consecutive vector samples, and ρ ij is the spatial correlation coefficient between sensors i and j. Note also that G s is proportional to the identity matrix when sampling at the symbol rate.
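The structure of (19)-(22) can be assembled directly with Kronecker products. A sketch with illustrative values (the dimensions, correlation coefficients, and power levels below are our own assumptions):

```python
import numpy as np

def exp_delay_profile(N, lam_n, k0):
    """Diagonal matrix of Eq. (20): exponential power delay profile from lag k0."""
    i = np.arange(N)
    return np.diag(np.where(i >= k0, np.exp(-(i - k0) * lam_n), 0.0))

def ar1_matrix(K, alpha):
    """Eq. (22): first-order AR Markov temporal correlation matrix T(alpha)."""
    k = np.arange(K)
    return alpha ** np.abs(k[:, None] - k[None, :])

Ns, K, N = 2, 4, 8                     # sensors, slots, lags (assumed)
R_phi = np.array([[1.0, 0.5],
                  [0.5, 1.0]])         # Eq. (21), a real rho_12 for simplicity
T = ar1_matrix(K, 0.9)                 # PCD source; alpha=1 is FCD, alpha=0 ICD
Lam = exp_delay_profile(N, 0.5, 2)
Gs = np.eye(N)                         # symbol-rate sampling (see text)
Ps, sigma2 = 1.0, 0.1                  # signal power, noise variance (assumed)

R_z = (np.kron(R_phi, np.kron(T, Ps * Gs @ Lam @ Gs.T))
       + sigma2 * np.eye(Ns * K * N))  # Eq. (19)
```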
Regarding the temporal correlation between consecutive estimates, the channel vector correlation matrix may be modeled as a Fully Coherent Dispersed (FCD) Source, a Partially Coherent Dispersed (PCD) Source, or an Incoherent Dispersed (ICD) Source. The general case corresponds to PCD, FCD being a particular case where estimates are completely correlated (α=1), and ICD the case where estimates are uncorrelated (α=0) [35, 36].
In the case of Rice propagation, the first arrival has a non-null mean and disturbs the exponential distribution for delays. Expression (9) turns into (23), where f0 is the Doppler frequency for the LOS component, and \(\phantom {\dot {i}\!}\mathbf {g}^{(k_{0})}\) identifies the pulse shape vector for this arrival:
$$ \begin{aligned} \mathbf{z}^{\left(j \right)}\left(k \right) &= {b_{0j}}\left(k \right){\gamma_{0}}\left(k \right){e^{j2 \pi {f_{0}}{T_{s}}k}}{\mathbf{g}^{\left({{k_{0}}}\right)}}\\ & \quad + {\mathbf{G}_{s}}\mathbf{b}^{\left(j \right)}\left(k \right) + \mathbf{w}^{\left(j \right)}\left(k \right) \end{aligned} $$
Note that the remaining terms on the right-hand side of (23) correspond to the dispersed NLOS signal and the noise, both with null expected value. Moreover, since delay dispersion and Doppler are assumed independent, the mean channel gain is computed as in (24), where the time dependency of the steering vector has been discarded since its value is expected to remain unchanged for the LOS path along the position acquisition (AS 5).
$$\begin{array}{@{}rcl@{}} \begin{aligned} E\left\{{\mathbf{z}_{}^{\left(j \right)}\left(k \right)} \right\} &= E\left\{ {{b_{0j}}{\gamma_{0}}\left(k \right){e^{j2\pi {f_{0}}{T_{s}}k}}{\mathbf{g}^{\left({{k_{0}}} \right)}}} \right\}\\ \quad \quad \quad \quad \quad \;\; &= {A_{0}}E\left\{ {{b_{0j}}} \right\}E\left\{ {{e^{j2\pi {f_{0}}{T_{s}}k}}} \right\}{\mathbf{g}^{\left({{k_{0}}} \right)}} \end{aligned} \end{array} $$
where A0 corresponds to the mean signal level for the LOS component. If μ z denotes the mean channel vector, arranged as z and w in (16), it can be expressed in terms of the spatial signature for the LOS component b ϕ , the expected Doppler vector α t , and the pulse shape vector for the first arrival g(k0), as
$$ {\mathbf{\mu}_{\mathbf{z}}} = E\left\{ \mathbf{b} \right\} \otimes E\left\{ {{e^{j2\pi {f_{0}}{T_{s}}k}}} \right\} \otimes {A_{0}}{\mathbf{g}^{\left({{k_{0}}} \right)}} $$
The spatial signature of the LOS component when a uniform linear array (ULA) is used may be computed geometrically from the signal angular distribution [60], as in
$$ {\begin{aligned} {\mathbf{b}_{\phi}} &= E\left\{ \mathbf{b} \right\}\\ {\text{with}}{\left[ {E\left\{ \mathbf{b} \right\}} \right]_{n}} &= E\left\{ {b_{n}^{}} \right\} \\ &= \frac{1}{{\sqrt {2\pi} {\Delta_{\phi} }}}\int\limits_{- \pi }^{\pi} {{e^{- \frac{{{{\left({\phi - {\phi_{0}}} \right)}^{2}}}}{{2\Delta_{\phi}^{2}}}}}{e^{- jn\pi \sin \left(\phi \right)}}d\phi } \end{aligned}} $$
In this case, the spatial distribution is modeled as Gaussian (AS 5), centered at ϕ0, with angular spread Δ ϕ , and the subscript n corresponds to the sensor position for n∈[0,Ns−1]. The angular spread corresponds to the standard deviation of the directions of arrival of the multipath components at the receiver when a normalized version of the PAS is used as the weighting function. Some works report that a Laplacian distribution can provide a good match to this angular distribution [15, 53, 61]; however, when it is used in (27) instead of the Gaussian distribution, only negligible differences are observed.
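The integral in (26) has no simple closed form, but it is straightforward to evaluate numerically. The sketch below (function name and grid size are hypothetical) computes the expected ULA signature for a Gaussian PAS by a Riemann sum over [−π, π]:

```python
import numpy as np

def ula_mean_signature(n_sensors, phi0, delta_phi, n_grid=8001):
    """Numerically evaluate (26): expected ULA spatial signature under a
    Gaussian power azimuth spectrum centered at phi0 with spread delta_phi."""
    phi = np.linspace(-np.pi, np.pi, n_grid)
    step = phi[1] - phi[0]
    pas = np.exp(-(phi - phi0) ** 2 / (2 * delta_phi ** 2)) / (np.sqrt(2 * np.pi) * delta_phi)
    n = np.arange(n_sensors)[:, None]
    steering = np.exp(-1j * n * np.pi * np.sin(phi))  # half-wavelength ULA phases
    return np.sum(pas * steering, axis=1) * step      # Riemann sum over [-pi, pi]

b = ula_mean_signature(4, phi0=0.0, delta_phi=np.deg2rad(5))
```

For a narrow spread at broadside, b[0] integrates to nearly one while the magnitudes of the remaining elements shrink with sensor index, reflecting the loss of coherence across the aperture.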
Recalling that the correlation matrix R ϕ in (27) is related to (26), the expected spatial signature in (28) results.
$$ {\begin{aligned} {\left[ {{\mathbf{R}_{\phi} }} \right]_{{n_{1}},{n_{2}}}} &= E\left\{ {b_{{n_{1}}}^{}b_{{n_{2}}}^{*}} \right\}\\ {\text{with }}E\left\{ {b_{{n_{1}}}^{}b_{{n_{2}}}^{*}} \right\} &= \frac{1}{{\sqrt {2\pi} {\Delta_{\phi} }}}\int\limits_{- \pi }^{\pi} {{e^{- \frac{{{{\left({\phi - {\phi_{o}}} \right)}^{2}}}}{{2\Delta_{\phi}^{2}}}}}{e^{- j\left({{n_{1}} - {n_{2}}} \right)\pi \sin \left(\phi \right)}}d\phi } \end{aligned}} $$
$$ {\mathbf{b}_{\phi} }\left(\mathbf{\rho} \right) = {\left[{\begin{array}{ccccc} 1&{{\rho_{12}}}&{{\rho_{13}}}& \ldots &{{\rho_{1{N_{s}}}}} \end{array}} \right]^{H}} $$
Note from (25) that, since the temporal variation due to Doppler may again be modeled as a first-order AR Markov process (AS 7), the temporal vector α t (α) is a function of α and takes the form
$$ {\mathbf{\alpha}_{t}}(\alpha) = {\left[ {\begin{array}{ccccc} 1&{\alpha_{}^{}}&{\alpha_{}^{2}}& \ldots &{\alpha_{}^{K - 1}} \end{array}} \right]^{T}} $$
Therefore (25) becomes:
$$ {\mathbf{\mu }_{\mathbf{z}}} = {\mathbf{b}_{\phi} }(\mathbf{\rho }) \otimes {\mathbf{\alpha }_{t}}({\alpha_{}}) \otimes {A_{0}}{\mathbf{g}^{\left({{k_{0}}} \right)}}\left(\beta \right) $$
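The Kronecker structure of (30) can be sketched numerically; the dimensions and values below are hypothetical toy choices, not taken from the paper's experiments:

```python
import numpy as np

# toy dimensions: Ns = 2 sensors, K = 3 estimates, N = 4 delay taps
b_phi = np.array([1.0, 0.8 - 0.1j])       # expected spatial signature, as in (28)
alpha_t = 0.9 ** np.arange(3)             # AR(1) Doppler vector, as in (29)
g_k0 = np.array([0.0, 1.0, 0.5, 0.1])     # pulse-shape vector for the first arrival
A0 = 1.5                                  # mean LOS signal level

# mean channel vector (30): mu_z = b_phi (x) alpha_t (x) A0 * g_k0
mu_z = np.kron(b_phi, np.kron(alpha_t, A0 * g_k0))
```

The resulting vector has length Ns·K·N and stacks the delay profile fastest, then time, then space, matching the arrangement of z in (16).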
Computing the Cramer-Rao bounds for delay estimates
The Cramer-Rao bound is important not only as a means to quantify the errors of a set of parameters to be estimated, but also as a modeling tool, since it allows evaluating the impact of the various parameters on the estimation error. We therefore continue with the derivation of this bound for our model, introduced mainly in (17), (19), and (30). The following parameter vector is defined in (31), where k0 is the time of arrival normalized by the chip time, λ n is the normalized coherence bandwidth, ρ is a vector containing the real and imaginary parts of the complex correlation coefficients among sensors, and the remaining parameters have been defined previously. All of them except k0 are nuisance parameters.
$$ \begin{aligned} {\boldsymbol{\Psi}} &= {\left[ {{k_{0}},{\lambda_{n}},\beta,{P_{s}},\sigma_{w}^{2},\alpha,{\mathbf{\rho }^{T}},{A_{0}}} \right]^{T}}\\ \; \mathbf{\rho} &= {\left[ {{\rho_{1,{\text{Re}}}},{\rho_{2,{\text{Re}}}}, \ldots,{\rho_{{N_{c}},{\text{Re}}}},{\rho_{1,{\text{Im}}}},{\rho_{2,{\text{Im}}}}, \ldots,{\rho_{{N_{c}},{\text{Im}}}}} \right]^{T}} \end{aligned} $$
Note that in the case of a Rayleigh fading channel there is no dominant LOS path; therefore, A0 is zero and may be discarded, reducing the parameter vector to (32).
$$ \begin{aligned} {\boldsymbol{\Psi}} &= {\left[ {{k_{0}},{\lambda_{n}},\beta,{P_{s}},\sigma_{w}^{2},\alpha,{\mathbf{\rho }^{T}}} \right]^{T}}\\ \; \mathbf{\rho} &= {\left[ {{\rho_{1,{\text{Re}}}},{\rho_{2,{\text{Re}}}}, \ldots,{\rho_{{N_{c}},{\text{Re}}}},{\rho_{1,{\text{Im}}}},{\rho_{2,{\text{Im}}}}, \ldots,{\rho_{{N_{c}},{\text{Im}}}}} \right]^{T}} \end{aligned} $$
Since the channel vector estimates stacked in z are assumed complex Gaussian distributed, the probability density function of z is expressed as in (33), and the Cramer-Rao bounds for the parameters in (31) correspond to the diagonal elements of the inverse of the FIM. The FIM elements for the Rice LOS model may be expressed as in (34) [26], and for Rayleigh fading as in (35). Also note that R z corresponds to the covariance matrix for the general Rice case in (34) and equals the correlation matrix for the Rayleigh case in (35), since the mean is null in the latter.
$$ p\left(\mathbf{z} \right) = \frac{1}{{{\pi^{K.{N_{s}}.N}}\det \left({{\mathbf{R}_{z}}} \right)}}\exp \left[ { - {{\left({\mathbf{z} - {\mathbf{\mu }_{z}}} \right)}^{H}}\mathbf{R}_{z}^{- 1}\left({\mathbf{z} - {\mathbf{\mu }_{z}}} \right)} \right] $$
$$ \begin{aligned} {\left[ {\mathbf{F}_{\boldsymbol{\Psi }}^{LOS}} \right]_{pq}} &= - E\left[ {\frac{{{\partial^{2}}\ln \left\{ {p\left({\mathbf{z};\boldsymbol{\Psi }} \right)} \right\}}}{{\partial {\boldsymbol{\Psi }_{p}}\partial {\boldsymbol{\Psi }_{q}}}}} \right]\\ &= tr\left({\mathbf{R}_{\mathbf{z}}^{- 1}\frac{{\partial \mathbf{R}_{\mathbf{z}}}}{{\partial \boldsymbol{\Psi }_{p}}}\mathbf{R}_{\mathbf{z}}^{- 1}\frac{{\partial \mathbf{R}_{\mathbf{z}}}}{{\partial \boldsymbol{\Psi }_{q}}}} \right)\\ & \quad + 2{\text{Re}} \left({\frac{{\partial \mathbf{\mu }_{\mathbf{z}}^{H}}}{{\partial {\boldsymbol{\Psi }_{p}}}}\mathbf{R}_{\mathbf{z}}^{- 1}\frac{{\partial {\mathbf{\mu }_{\mathbf{z}}}}}{{\partial {\boldsymbol{\Psi }_{q}}}}} \right) \end{aligned} $$
$$ \begin{aligned} {\left[ {\mathbf{F}_{\boldsymbol{\Psi }}} \right]_{pq}} &= - E\left[{\frac{{{\partial^{2}}\ln \left\{ {p\left({\mathbf{z};\boldsymbol{\Psi}} \right)} \right\}}}{{\partial {\boldsymbol{\Psi }_{p}}\partial {\boldsymbol{\Psi }_{q}}}}} \right]\\ &= tr\left({\mathbf{R}_{\mathbf{z}}^{- 1}\frac{{\partial \mathbf{R}_{\mathbf{z}}}}{{\partial \boldsymbol{\Psi }_{p}}}\mathbf{R}_{\mathbf{z}}^{- 1}\frac{{\partial \mathbf{R}_{\mathbf{z}}}}{{\partial \boldsymbol{\Psi }_{q}}}}\right) \end{aligned} $$
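The zero-mean form in (35) is the Slepian-Bangs expression, and it is easy to check numerically on a toy model. The sketch below (the two-parameter covariance model and function name are hypothetical, chosen only so the answer can be verified analytically) forms the FIM with central-difference derivatives of R z :

```python
import numpy as np

def fim_zero_mean(R_of, psi, eps=1e-6):
    """Slepian-Bangs FIM as in (35) for a zero-mean complex Gaussian model:
    [F]_pq = tr(R^-1 dR/dpsi_p R^-1 dR/dpsi_q), with dR/dpsi_p obtained by
    central differences (convenient for checking closed-form results)."""
    P = len(psi)
    Rinv = np.linalg.inv(R_of(psi))
    dR = []
    for p in range(P):
        step = np.zeros(P)
        step[p] = eps
        dR.append((R_of(psi + step) - R_of(psi - step)) / (2 * eps))
    F = np.empty((P, P))
    for p in range(P):
        for q in range(P):
            F[p, q] = np.real(np.trace(Rinv @ dR[p] @ Rinv @ dR[q]))
    return F

# toy model: R(psi) = Ps * S + sigma_w^2 * I, with psi = [Ps, sigma_w^2]
S = np.diag([2.0, 1.0, 0.5])
R_of = lambda psi: psi[0] * S + psi[1] * np.eye(3)
F = fim_zero_mean(R_of, np.array([1.0, 0.1]))
```

For this diagonal toy model the FIM entries reduce to sums over the eigenvalues of R, which gives a direct consistency check.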
Cramer-Rao bounds for the NLOS Rayleigh fading model
It will be shown below that (35) becomes (36) when \(\mathbf {R}_{z}^{-1}\) and its partial derivatives are computed and substituted into the expression above.
$$ \begin{aligned} {\mathbf{F}_{\boldsymbol{\Psi}}} &= \sum\limits_{k = 1}^{Ns} {\sum\limits_{{k_{1}} = 1}^{K} {{\mathbf{G}_{k,{k_{1}}}}{\mathbf{J}_{\boldsymbol{\Psi}^{\prime}}}\mathbf{G}_{k,{k_{1}}}^{T}} } + {C_{1}}\mathbf{e}_{Np}^{(6)}{{\mathbf{e}_{Np}^{(6)}}^{T}} \\ & \quad + \sum\limits_{{q_{1}} = 1}^{2N} {\sum\limits_{{q_{2}} = 1}^{2N} {C_{2}^{({q_{1}},{q_{2}})}\mathbf{e}_{Np}^{(6 + {q_{1}})}{{\mathbf{e}_{Np}^{(6+ {q_{2}})}}^{T}}} } \end{aligned} $$
Expression (36) illustrates how the required FIM (F Ψ ) gains information from the contribution of each available channel estimate through the eigenvalues of both the temporal and spatial correlation matrices. In fact, the matrix Gk,k1 and the coefficients C1 and \(C_{2}^{(q1,q2)}\) allow this update in a computationally efficient manner. Gk,k1 has a global impact since it weighs the partial FIMs (\(\phantom {\dot {i}\!}\mathbf {J}_{\Psi ^{\prime }}\)) computed at each new iteration by taking advantage of the structure of the power delay profile in Λ τ . On the other hand, C1 refers to the diagonal term for the parameter α, and \(C_{2}^{\left (q_{1},q_{2}\right)}\) to the cross terms related to the correlation coefficients of the spatial correlation matrix R ϕ . The terms in (36) are defined in expressions (37)–(43). In particular, it is worth noting that a singular value decomposition has been performed over the temporal correlation matrix T and over the spatial correlation matrix R ϕ , as shown in (37), where \(\lambda _{t}^{(k)}\) and \(\lambda _{\phi }^{(k_{1})}\) are the eigenvalues of T and R ϕ , respectively. Similarly, \(\mathbf {u}_{t}^{(k)}\) and \(\mathbf {u}_{\phi }^{(k_{1})}\) correspond to the eigenvectors of these correlation matrices. N c in (37) is the number of parameters associated with the spatial correlation matrix and therefore depends on the array size N s , with N p being the total number of parameters in our model and K the number of channel vector estimates.
$$ \begin{aligned} \mathbf{T} &= {\mathbf{U}_{t}}{\boldsymbol{\Lambda }_{t}}\mathbf{U}_{t}^{\mathbf{H}},{\mathbf{U}_{t}} = \left[ {\mathbf{u}_{t}^{(1)},\mathbf{u}_{t}^{(2)}, \ldots,\mathbf{u}_{t}^{(K)}} \right]\\ {\boldsymbol{\Lambda }_{t}} &= diag\left[ {\lambda_{t}^{(1)},\lambda_{t}^{(2)}, \ldots,\lambda_{t}^{(K)}} \right]\\ {\mathbf{R}_{\phi}} &= {\mathbf{U}_{\phi} }{\boldsymbol{\Lambda }_{\phi} }\mathbf{U}_{\phi}^{\mathbf{H}},{\mathbf{U}_{\phi}} = \left[ {\mathbf{u}_{\phi}^{(1)},\mathbf{u}_{\phi}^{(2)}, \ldots,\mathbf{u}_{\phi}^{(Ns)}} \right]\\ {\boldsymbol{\Lambda }_{\phi}} &= diag\left[ {\lambda_{\phi}^{(1)},\lambda_{\phi}^{(2)}, \ldots,\lambda_{\phi}^{(Ns)}} \right]\\ \mathbf{e}_{v}^{(q)} &= \left[ {0, \ldots, 0, \; 1,0, \ldots,0} \right]_{v}^{T}\\ &\quad \quad \quad \quad \quad {\underset{qth}{\uparrow} {\underset{element}{\!~\!}}} \\ {N_{p}} &= 6 + 2{N_{c}};\quad {N_{c}} = {N_{s}}\left({{N_{s}} - 1} \right)/2 \end{aligned} $$
Note in (38) that Ψ′ differs for each new k and k1, since the parameter γk,k1 in (40) refers to the signal power weighted by the respective spatial and temporal eigenvalues. Gk,k1 also depends on the partial derivatives of these eigenvalues with respect to the temporal correlation factor α and to the spatial correlation coefficients in ρ.
$$ {\begin{aligned} \boldsymbol{\Psi }' = {\left[ {{k_{0}},{\lambda_{n}},\beta,{\gamma_{k,{k_{1}}}},\sigma_{w}^{2}} \right]^{T}} {\mathbf{G}_{k,{k_{1}}}} = {\left[ {\begin{array}{ccccc} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&{\lambda_{\phi}^{(k)}\lambda_{t}^{({k_{1}})}}&0\\ 0&0&0&0&1\\ 0&0&0&{{P_{s}}\lambda_{\phi}^{(k)}\frac{{\partial \lambda_{t}^{({k_{1}})}}}{{\partial \alpha }}}&0\\ {\mathbf{0}}&{\mathbf{0}}&{\mathbf{0}}&{{P_{s}}\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial \mathbf{\rho }}}\lambda_{t}^{\left({{k_{1}}} \right)}}&{\mathbf{0}} \end{array}} \right]_{\left({6 + 2{N_{c}}} \right)x5}} \end{aligned}} $$
These derivatives are described in (39) and (40).
$$ {\begin{aligned} \frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial \mathbf{\rho }}} = {\left[ {\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{1,{\text{Re}}}}}},\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{2,{\text{Re}}}}}}, \ldots,\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{{N_{c}},{\text{Re}}}}}},\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{1,{\text{Im}}}}}},\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{2,{\text{Im}}}}}}, \ldots,\frac{{\partial \lambda_{\phi}^{(k)}}}{{\partial {\rho_{{N_{c}},{\text{Im}}}}}}} \right]^{T}} \end{aligned}} $$
$$ \begin{aligned} {\gamma_{k,{k_{1}}}} &= \lambda_{\phi}^{(k)}\lambda_{t}^{({k_{1}})}{P_{s}}\\ \dot \lambda_{t}^{({k_{1}})} &= \frac{{d\lambda_{t}^{({k_{1}})}}}{{d\alpha }} \quad \quad \quad \quad \quad \quad \quad \quad {\dot{\mathbf{u}}}_{t}^{({k_{1}})} = \frac{{d\mathbf{u}_{t}^{({k_{1}})}}}{{d\alpha }} \end{aligned} $$
The partial FIMs required in (36) are described as in (41), where the partial correlation matrix Rk,k1 takes the form in (42).
$$ {\left\{ {\mathbf{J}_{\boldsymbol{\Psi }'}} \right\}_{pq}} = tr\left({\mathbf{R}_{k,{k_{1}}}^{- 1}\frac{{\partial \mathbf{R}_{k,{k_{1}}}}}{{\partial \boldsymbol{\Psi }'_{p}}}\mathbf{R}_{k,{k_{1}}}^{- 1}\frac{{\partial \mathbf{R}_{k,{k_{1}}}}}{{\partial \boldsymbol{\Psi }{'_{q}}}}} \right) $$
$${\kern10.5pt} \mathbf{R}_{k,{k_{1}}} = {P_{s}}\lambda_{\phi}^{(k)}\lambda_{t}^{({k_{1}})}{\mathbf{G}_{s}}\boldsymbol{\Lambda }_{\tau}^{}\mathbf{G}_{s}^{T} + \sigma_{w}^{2}\mathbf{I}_{N} $$
C1 and \(C_{2}^{(q_{1},q_{2})}\) coefficients are described in (43).
$$ {\begin{aligned} {C_{1}} &= - {P_{s}}^{2}\sum\limits_{k = 1}^{Ns} {\sum\limits_{{k_{1}} = 1}^{K} {\sum\limits_{{l_{1}} = 1}^{K} {\left[ {\lambda_{\phi}^{{{\left(k \right)}^{2}}}{{\left({\lambda_{t}^{({k_{1}})} - \lambda_{t}^{({l_{1}})}} \right)}^{2}}{{\mathbf{u}_{t}^{({k_{1}})}}^{H}}{\dot{\mathbf{u}}}_{t}^{({l_{1}})}{{\mathbf{u}_{t}^{({l_{1}})}}^{H}}{\dot {\mathbf{u}}}_{t}^{({k_{1}})}} \right.}.}} \\ & \qquad \qquad \qquad \quad \left. {.tr\left\{ {\mathbf{R}_{k,{k_{1}}}^{- 1}{\mathbf{G}_{s}}{\boldsymbol{\Lambda }_{\tau} }\mathbf{G}_{s}^{T}\mathbf{R}_{k,{l_{1}}}^{- 1}{\mathbf{G}_{s}}{\boldsymbol{\Lambda }_{\tau} }\mathbf{G}_{s}^{T}} \right\}} {\vphantom{{\lambda_{\phi}^{{{\left(k \right)}^{2}}}{{\left({\lambda_{t}^{({k_{1}})} - \lambda_{t}^{({l_{1}})}} \right)}^{2}}\mathbf{u}{{_{t}^{({k_{1}})}}^{H}}{\dot{\mathbf{u}}}_{t}^{({l_{1}})}\mathbf{u}{{_{t}^{({l_{1}})}}^{H}}{\dot {\mathbf{u}}}_{t}^{({k_{1}})}} }}\right]\\ C_{2}^{({q_{1}},{q_{2}})} &= - {P_{s}}^{2}\sum\limits_{k = 1}^{Ns} {\sum\limits_{l = 1}^{Ns} {\sum\limits_{{k_{1}} = 1}^{K} {\left[ {\lambda_{t}^{{{\left({{k_{1}}} \right)}^{2}}}{{\left({\lambda_{\phi}^{(k)} - \lambda_{\phi}^{(l)}} \right)}^{2}}\left({{{\mathbf{u}_{\phi}^{(k)}}^{T}}\frac{{\partial \mathbf{u}_{\phi}^{(l)}}}{{\partial {\rho_{{q_{1}}}}}}} \right)} \right.}} } \\ & \quad \times\left({{{\mathbf{u}_{\phi}^{(l)}}^{T}}\frac{{\partial \mathbf{u}_{\phi}^{(k)}}}{{\partial {\rho_{{q_{2}}}}}}} \right)\left. 
{.tr\left\{ {\mathbf{R}_{k,{k_{1}}}^{- 1}{\mathbf{G}_{s}}{\boldsymbol{\Lambda }_{\tau} }\mathbf{G}_{s}^{T}\mathbf{R}_{l,{k_{1}}}^{- 1}{\mathbf{G}_{s}}{\boldsymbol{\Lambda }_{\tau} }\mathbf{G}_{s}^{T}} \right\}} {\vphantom{{\lambda_{t}^{{{\left({{k_{1}}} \right)}^{2}}}{{\left({\lambda_{\phi}^{(k)} - \lambda_{\phi}^{(l)}} \right)}^{2}}\left({\mathbf{u}{{_{\phi}^{(k)}}^{T}}\frac{{\partial \mathbf{u}_{\phi}^{(l)}}}{{\partial {\rho_{{q_{1}}}}}}} \right)\left({\mathbf{u}{{_{\phi}^{(l)}}^{T}}\frac{{\partial \mathbf{u}_{\phi}^{(k)}}}{{\partial {\rho_{{q_{2}}}}}}} \right)}}}\right] \end{aligned}} $$
The derivations are algebraically extensive; their main steps are briefly outlined below.
First of all, \(\mathbf {R}_{z}^{-1}\) is expressed as in (44) by using the Kronecker product properties [59], with Rk,k1 defined as in (42):
$$ {\begin{aligned} \mathbf{R}_{\mathbf{z}}^{-1} = \sum\limits_{k = 1}^{N_{s}} {\mathbf{u}_{\phi}^{(k)}} \mathbf{u}{_{\phi}^{(k)H} \otimes \sum\limits_{{k_{1}} = 1}^{K} \mathbf{u}_{k}^{({k_{1}})}\mathbf{u}{_{k}^{(k_{1})H}} \otimes \sum\limits_{{k_{2}} = 1}^{N} \mathbf{e}_{N}^{({k_{2}})}\mathbf{e}{_{N}^{(k_{2})H}\mathbf{R}_{k,k_{1}}^{- 1}}} \end{aligned}} $$
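The decomposition behind (44) can be checked numerically for toy dimensions (all values below are hypothetical): rebuilding the inverse from the eigenvectors of R ϕ and T and the small N×N blocks R k,k1 of (42) reproduces the direct inverse of the full Ns·K·N covariance matrix, which is the computational advantage being exploited.

```python
import numpy as np

Ns, K, N = 2, 3, 4
Ps, sw2 = 1.0, 0.1

Rphi = np.array([[1.0, 0.5], [0.5, 1.0]])                         # spatial correlation
T = 0.8 ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))  # AR(1) temporal
Gs = np.eye(N)                                                    # chip-rate sampling
Lam = np.diag(np.exp(-0.5 * np.arange(N)))                        # exponential PDP

M = Gs @ Lam @ Gs.T
Rz = Ps * np.kron(Rphi, np.kron(T, M)) + sw2 * np.eye(Ns * K * N)

# eigen-based inverse in the spirit of (44), with R_{k,k1} from (42)
lam_phi, U_phi = np.linalg.eigh(Rphi)
lam_t, U_t = np.linalg.eigh(T)
Rz_inv = np.zeros_like(Rz)
for k in range(Ns):
    for k1 in range(K):
        Rkk1 = Ps * lam_phi[k] * lam_t[k1] * M + sw2 * np.eye(N)
        proj_phi = np.outer(U_phi[:, k], U_phi[:, k])   # rank-1 spatial projector
        proj_t = np.outer(U_t[:, k1], U_t[:, k1])       # rank-1 temporal projector
        Rz_inv += np.kron(proj_phi, np.kron(proj_t, np.linalg.inv(Rkk1)))
```

Only Ns·K inversions of N×N matrices are needed, instead of one inversion of size Ns·K·N.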
The derivatives required in (35) must also be computed, and it is easy to show that they take the form described in Eq. (45), where A, B, and C are the matrices in Table 2.
$$ \frac{{\partial {\mathbf{R}_{\mathbf{z}}}}}{{\partial {\boldsymbol{\Psi }_{p}}}} = \mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C} $$
Table 2 Elements required in (45) to assemble the FIM in (35) for a Rayleigh fading channel
For example, when a derivative with respect to k0 is required, the corresponding value of p in the table is "1," and the result is assembled as in (46).
$$ \partial {\mathbf{R}_{\mathbf{z}}}/\partial {\boldsymbol{\Psi }_{p = 1}} = \partial {\mathbf{R}_{\mathbf{z}}}/\partial {k_{0}} = {\mathbf{R}_{\phi}} \otimes {\mathbf{T}_{k}} \otimes {P_{s}}\partial {\boldsymbol{\Lambda}_{\tau} }/\partial {k_{0}} $$
Inserting Eqs. (44) and (45) into (35) and simplifying yields (47). Finally, by replacing the values from Table 2, as in the example above, and rearranging terms, expression (36) is reached.
$$ {\begin{aligned} {\left[ {\mathbf{F}_{\boldsymbol{\Psi}}} \right]_{pq}} &= \sum\limits_{k = 1}^{Ns} {\sum\limits_{l = 1}^{Ns} {{{\mathbf{u}_{\phi}^{(k)}}^{{H}}}{\mathbf{A}_{p}}\mathbf{u}_{\phi}^{(l)}{{\mathbf{u}_{\phi}^{(l)}}^{{H}}}\mathbf{A}_{q}\mathbf{u}_{\phi}^{(k)} \cdot}} \\ & \quad \cdot \sum\limits_{{k_{1}} = 1}^{K} {\sum\limits_{{l_{1}} = 1}^{K} {{{\mathbf{u}_{K}^{({k_{1}})}}^{{H}}}\mathbf{B}_{p}\mathbf{u}_{K}^{({l_{1}})}{{\mathbf{u}_{K}^{({l_{1}})}}^{{H}}}\mathbf{B}_{q}\mathbf{u}_{K}^{({k_{1}})}}} \\ & \quad \times tr\left({\mathbf{R}_{k,{k_{1}}}^{- 1}\mathbf{C}_{p}\mathbf{R}_{l,{l_{1}}}^{- 1}\mathbf{C}_{q}} \right) \end{aligned}} $$
Furthermore, the expression in (35) allows further simplifications when sampling is performed at the chip rate. In this case, G s in (19) becomes the identity matrix I, and the roll-off factor may be discarded, reducing the number of parameters required to compute the Fisher matrix [36]. More details about these simplifications may be found in Section AF1.1 of Additional file 1.
Cramer-Rao bounds for the LOS Rice fading model
In the case of an LOS condition, fading is Rice and therefore the mean channel vector estimate μ z is not null; it is described in (30) in terms of the expected spatial signature b ϕ , the Doppler vector α t , the pulse shaping vector for the first arrival \(\phantom {\dot {i}\!}\mathbf {g}^{(k_{0})}\), and the mean signal level for the LOS component A0. All these components were described in (21)–(29).
Since the mean channel vector is different from zero in the LOS case, the parameter vector is described in (31), and the computation of the FIM in (34) adds some derivatives that must also be computed. It is easy to show that these take the form
$$ \frac{{\partial {\mathbf{\mu }_{\mathbf{z}}}}}{{\partial {\boldsymbol{\Psi }_{p}}}} = \mathbf{D} \otimes \mathbf{E} \otimes \mathbf{F} $$
where D, E, and F are the vectors in Table 3. For instance, when a derivative with respect to k0 is required, the corresponding value of p in the table is one, and the derivative in (48) becomes as in (49):
$$\begin{array}{@{}rcl@{}} \partial {\mathbf{\mu }_{\mathbf{z}}}/\partial {\boldsymbol{\Psi }_{p = 1}} = \partial {\mathbf{\mu }_{\mathbf{z}}}/\partial {k_{0}} = {\mathbf{b}_{\phi}} \otimes {\mathbf{\alpha }_{t}} \otimes {A_{o}}\partial {\mathbf{g}^{\left({{k_{0}}} \right)}}/\partial {k_{0}} \end{array} $$
Table 3 Definition of elements in (48) required for additional derivatives in (34) when computing the FIM
It may be shown that (34) becomes (50) when \(\mathbf {R}_{z}^{-1}\) in (44) and the partial derivatives in (48) are substituted into (34).
$$ {\begin{aligned} \mathbf{F}_{\boldsymbol{\Psi}}^{LOS} = {\mathbf{F}_{\boldsymbol{\Psi}}} + 2{\text{Re}} \left\{ \begin{array}{l} {\mathbf{1}}_{{N_{s}}}^{T}\sum\limits_{k = 1}^{{N_{s}}} {\mathbf{u}_{\phi}^{\left(\mathbf{k} \right)}{{\mathbf{u}_{\phi}^{\left(\mathbf{k} \right)}}^{{H}}}} \odot \\ \odot \frac{{\partial {\mathbf{b}_{\mathbf{z}}}}}{{\partial {\boldsymbol{\Psi}_{p}}}}\frac{{\partial \mathbf{b}_{\mathbf{z}}^{H}}}{{\partial {\boldsymbol{\Psi}_{q}}}}{\mathbf{1}}_{{N_{s}}}.{\mathbf{1}}_{K}^{T}\sum\limits_{{k_{1}} = 1}^{K} {\mathbf{u}_{\mathbf{T}}^{\left({{\mathbf{k}_{\mathbf{1}}}} \right)}{{\mathbf{u}_{\mathbf{T}}^{\left({{\mathbf{k}_{\mathbf{1}}}} \right)}}^{{H}}}} \odot \\ \odot \frac{{\partial {\mathbf{\alpha}_{t}}}}{{\partial {\boldsymbol{\Psi}_{p}}}}\frac{{\partial \mathbf{\alpha }_{t}^{T}}}{{\partial {\boldsymbol{\Psi}_{q}}}}{\mathbf{1}}_{K}.\frac{{\partial {\mathbf{g}^{{{\left({{k_{0}}} \right)}^{T}}}}}}{{\partial {\boldsymbol{\Psi}_{p}}}}\mathbf{R}_{k,{k_{1}}}^{- 1}\frac{{\partial {\mathbf{g}^{\left({{k_{0}}} \right)}}}}{{\partial {\boldsymbol{\Psi}_{q}}}} \end{array} \right\} \end{aligned}} $$
In the expression above, F Ψ corresponds to the FIM for the NLOS model in (36), but since the parameter A0 was added for the LOS model, Gk,k1 in (38) must be replaced by \(\mathbf {G}^{LOS}_{k,k1}\) in (51). Furthermore, ⊙ denotes the Hadamard product.
$$\begin{array}{@{}rcl@{}} \mathbf{G}_{k,{k_{1}}}^{LOS} = {\left[ \begin{array}{l} \mathbf{G}_{k,{k_{1}}}^{}\\ {\mathbf{0}} \end{array} \right]_{\left({7 + 2{N_{c}}} \right)x5}} \end{array} $$
Computing the CRBs from the previous equations may be computationally expensive, especially when the number of available channel vector estimates K is large. Furthermore, expressions for FCD sources require a different, more suitable factorization. Therefore, asymptotic expressions for large K and adequate expressions for FCD sources have been derived in [35, 36]. See Sections AF1.2 and AF1.3 of Additional file 1.
CRBs in timing estimation and the extent of positioning errors
In order to relate the CRB results to the potential errors introduced in terms of distance range, (52) will be used, where e corresponds to the range error estimate, c to the speed of light, and T c to the system chip time.
$$ e = c\,\sqrt {\text{CRB}\left({{k_{0}}} \right)}\, {T_{c}} $$
Consequently, an estimation error standard deviation of one chip time results in a range error on the order of 240 m for IS-95 and around 80 m for WCDMA, since the chip period of IS-95 is a little more than three times that of WCDMA. In the sequel, a WCDMA system will be assumed by default.
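The conversion in (52) is easy to check with the standard chip rates of the two systems (3.84 Mcps for WCDMA and 1.2288 Mcps for IS-95); the sketch below is a minimal illustration, not part of the paper's code:

```python
C_LIGHT = 299_792_458.0          # speed of light, m/s
TC_WCDMA = 1.0 / 3.84e6          # WCDMA chip time (3.84 Mcps)
TC_IS95 = 1.0 / 1.2288e6         # IS-95 chip time (1.2288 Mcps)

def range_error(crb_k0, chip_time):
    """Range error (52): e = c * sqrt(CRB(k0)) * Tc, with CRB(k0) in chips^2."""
    return C_LIGHT * crb_k0 ** 0.5 * chip_time

e_wcdma = range_error(1.0, TC_WCDMA)   # one-chip timing std -> ~78 m
e_is95 = range_error(1.0, TC_IS95)     # one-chip timing std -> ~244 m
```

The ratio of the two errors is exactly the chip-rate ratio 3.84/1.2288 = 3.125, matching the "a little more than three times" observation above.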
Coherence bandwidth and CRBs
As mentioned previously, the channel coherence bandwidth B c is proportional to the normalized coherence bandwidth λ n and to the chip rate R c . The exact proportionality factor depends on the application but is lower than 1/(2π) [47, 48]; here it will be set to 1/10, as shown in (53), and the estimation error of this bandwidth may be related to the CRB for λ n as:
$$ {B_{c}} \approx \frac{1}{{10}}{\lambda_{n}}{R_{c}} $$
$$ {e_{{B_{c}}}} \approx \frac{1}{{10}}{R_{c}}\sqrt {\text{CRB}\left({{\lambda_{n}}} \right)} $$
From (54), it is easy to see that the estimation error for the coherence bandwidth is close to 1% of the chip rate when the square root of the CRB is close to 1/10. This corresponds, for example, to an uncertainty of around 38 kHz for a WCDMA system and around 12 kHz for IS-95.
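As a quick numerical check of (54) with the same standard chip rates (function name hypothetical):

```python
def coherence_bw_error(crb_lam_n, chip_rate):
    """Coherence-bandwidth uncertainty (54): e_Bc ~ (1/10) * Rc * sqrt(CRB(lambda_n))."""
    return 0.1 * chip_rate * crb_lam_n ** 0.5

e_bc_wcdma = coherence_bw_error(0.01, 3.84e6)    # sqrt(CRB) = 1/10 -> 38.4 kHz
e_bc_is95 = coherence_bw_error(0.01, 1.2288e6)   # sqrt(CRB) = 1/10 -> 12.3 kHz
```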
Timing estimation: the minimum variance method
This section introduces the minimum variance (MV) TOA estimator, a practical method available in the literature [49], in order to compare its behavior with that described by our CRBs.
Recall that our data is a collection of K channel vector estimates corrupted by noise, recorded in a time interval of duration KT s , as follows:
$$ {\begin{aligned} y(\tau ;k) &= \sum\limits_{i = 1}^{L} {{a_{i}}(k)g(\tau - {\tau_{i}})}\\ & \quad {+ v(\tau ;k)} {{;}}\quad {a_{i}}(k),v(\tau ;k) \in C\quad \forall k = 1,...,K \end{aligned}} $$
where τ i and a i (k) refer respectively to the delays and the time-varying amplitudes of the L propagation paths, g(τ) to the pulse shape, and v(τ;k) to the noise, which is assumed temporally uncorrelated among successive slots.
When the discrete Fourier transform (DFT) is computed from channel vector estimates, (56) results:
$$ y(w;n) = \sum\limits_{i = 1}^{L}{{a_{i}}\left(n \right)g\left(w \right)\exp \left({ - jw{\tau_{i}}} \right) + v\left({w;n} \right)} $$
The delay estimation problem then turns into the estimation of the positions of spectral lines. Stacking the samples of the transformed domain in a single vector, (56) may be rewritten as (57):
$$ \begin{aligned} \mathbf{y}(n) &= \left[ {\begin{array}{c} {y\left({{\omega_{o}};n} \right)}\\ {y\left({{\omega_{1}};n} \right)}\\ \vdots \\ {y\left({{\omega_{M - 1}};n} \right)} \end{array}} \right] = \sum\limits_{i = 1}^{L} {{a_{i}}(n)\mathbf{G}{\mathbf{e}_{{\tau_{i}}}}}\\ & \quad {+ \mathbf{v}(n)} = \mathbf{G}{\mathbf{E}_{\tau} }\mathbf{a}(n) + \mathbf{v}(n) \end{aligned} $$
where G is a diagonal matrix containing the DFT of the raised cosine pulse shaping filter and E τ is defined below:
$$ \begin{aligned} {\mathbf{E}_{\tau}} &= \left[ {\begin{array}{ccc} {{\mathbf{e}_{{\tau_{1}}}}}& \cdots &{{\mathbf{e}_{{\tau_{L}}}}} \end{array}} \right] \qquad \quad {\mathbf{e}_{{\tau_{i}}}}\\ &= {\left[ {{e^{- j{w_{o}}{\tau_{i}}}}{e^{- j{w_{1}}{\tau_{i}}}} \ldots {e^{- j{w_{p}}{\tau_{i}}}}} \right]^{T}} \end{aligned} $$
The MV solution performs signal separation through the filter w, as shown below, where the noise term \(\widetilde {\mathbf {v}}(n)\) also accounts for the paths not of interest:
$$ z(n) = {\mathbf{w}^{H}}\mathbf{y}(n) = {a_{j}}(n){\mathbf{w}^{H}}\mathbf{G}{\mathbf{e}_{{\tau_{j}}}} + {\mathbf{w}^{H}}{\tilde{\mathbf{v}}}(n) $$
The filter satisfies wHGe τ j =1, and improved performance is achieved when w is chosen so as to maximize the output SNR or, equivalently, to minimize the noise output power:
$$ \begin{aligned} \mathbf{w} &= {\underset{\mathbf{w}^{*}}{\arg\min {\mathbf{w}^{H}}E}} \left\{ {\mathbf{y}(n)\mathbf{y}{{(n)}^{H}}} \right\}\mathbf{w} \\ &\quad {\text{ subject to }} \quad \quad \quad {\mathbf{w}^{H}}\mathbf{G}{\mathbf{e}_{{\tau_{j}}}} = 1 \end{aligned} $$
This minimization is performed using Lagrange multipliers, with J being the cost function in Eq. (61):
$$ J = {\mathbf{w}^{H}}{\mathbf{R}_{\mathbf{y}}}\mathbf{w} + \lambda \left({{\mathbf{w}^{H}}\mathbf{G}{\mathbf{e}_{{\tau_{j}}}} - 1} \right) $$
The resulting MV solutions for both the filter w and the spectral representation of the delays are as follows [62]:
$$\begin{array}{@{}rcl@{}} \mathbf{w}(\tau) = \frac{{\mathbf{R}_{\mathbf{y}}^{- 1}\mathbf{G}{\mathbf{e}_{\tau} }}}{{\mathbf{e}_{\tau}^{H}{\mathbf{G}^{H}}\mathbf{R}_{\mathbf{y}}^{- 1}\mathbf{G}{\mathbf{e}_{\tau} }}} \quad \quad P(\tau) = \frac{1}{{\mathbf{e}_{\tau}^{H}{\mathbf{G}^{H}}\mathbf{R}_{\mathbf{y}}^{- 1}\mathbf{G}{\mathbf{e}_{\tau} }}} \end{array} $$
Note that one filter is found for each delay and that the final power delay spectrum does not include the explicit expression of the filter. Determining the timing of the first arrival from P(τ) requires the use of a threshold to avoid confusing the noise or the first side lobe with the true arrival [62].
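The MV spectrum in (62) can be sketched end to end on a synthetic two-path channel. The scenario below (flat pulse so G = I, delays, snapshot count, and noise level) is entirely hypothetical and only illustrates the shape of the estimator, including a small diagonal loading of the sample covariance for numerical robustness:

```python
import numpy as np

def mv_delay_spectrum(Y, G, tau_grid, omegas):
    """MV power delay spectrum (62): P(tau) = 1 / (e_tau^H G^H Ry^-1 G e_tau),
    with Ry estimated from the K snapshots stacked as columns of Y."""
    M, K = Y.shape
    Ry = Y @ Y.conj().T / K
    Ry += 1e-6 * np.real(np.trace(Ry)) / M * np.eye(M)   # diagonal loading
    Ry_inv = np.linalg.inv(Ry)
    P = np.empty(len(tau_grid))
    for i, tau in enumerate(tau_grid):
        v = G @ np.exp(-1j * omegas * tau)               # G e_tau, as in (58)
        P[i] = 1.0 / np.real(v.conj() @ Ry_inv @ v)
    return P

# toy two-path channel on a flat pulse (G = I); delays at 1.0 and 2.5 chips
rng = np.random.default_rng(1)
M, K = 16, 200
omegas = 2 * np.pi * np.arange(M) / M
Y = np.zeros((M, K), dtype=complex)
for t in (1.0, 2.5):
    amps = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # fading amplitudes
    Y += np.exp(-1j * omegas * t)[:, None] * amps[None, :]
Y += 0.05 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))

tau_grid = np.linspace(0.0, 4.0, 401)
P = mv_delay_spectrum(Y, np.eye(M), tau_grid, omegas)
```

The spectrum exhibits peaks near the two true delays and a low floor elsewhere; picking the first peak above a threshold gives the first-arrival timing.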
In the following section, several results will be shown for the CRBs for TOA in the case of Rayleigh and Rice fading channels, and also for the practical MV estimator in (62).
Performance of asymptotic expressions
Figure 2 compares the behavior of the CRBs for the estimated timing k0 as a function of the temporal correlation, for various numbers of observed channel estimates K, for the NLOS Rayleigh fading model in (35). Results were obtained using both the exact expressions in (36) and the expressions obtained with the asymptotic eigenvalues of the matrix T in (37), as described in Section AF1.2 of Additional file 1. As shown in Fig. 2, the asymptotic expressions for the first arrival timing k0 fit the exact ones very closely for as few as 10 observations at this high SNR of 20 dB. However, higher values of K were required when the SNR was poor. For instance, when the SNR was set to 0 dB, 50 observations instead of 10 were required for similar performance over the whole range of the temporal correlation. The CRBs of the normalized coherence bandwidth λ n are more sensitive to the temporal correlation coefficient than their analog expressions for timing, and 50 observations were required to achieve a good fit for an SNR of 20 dB [35, 36]. For both parameters, the largest differences occurred for a temporal correlation coefficient very close to one, where the expected accuracies degrade rapidly.
CRB of the first arriving path k0 in terms of the temporal correlation coefficient α. Results for the square root of the CRB are displayed for different sizes K of the record containing the channel vector estimates, for both the asymptotic (dashed) and non-asymptotic (solid) expressions. The SNR has been set to 20 dB, the delay spread to 5 T c , and two sensors are used in the antenna array
Since timing is the parameter of interest for location purposes and, based on the analysis in Fig. 1, the average value of K is expected to be larger than 100, very accurate results may be obtained from these asymptotic expressions, with the advantage of a reduced computational burden, since the derivatives related to the temporal correlation factor α were computed explicitly instead of numerically.
CRBs for timing and normalized coherence bandwidth for the NLOS Rayleigh fading model
Figure 3 shows the behavior of the CRBs for the normalized coherence bandwidth λ n within Λ τ in (20), in terms of the SNR, for several values of α and different antenna array configurations. Specifically, the graphics at the top of Fig. 3 refer to the case of two sensors in the antenna array, while those at the bottom refer to the case of four. Note that the error bounds for this parameter are reduced for smaller temporal correlation coefficients. In fact, an improvement was registered when the correlation shifted from 1 to 0. Recall that this situation corresponds to subscribers changing from low to high speed, respectively, or to channel estimates obtained from more separated slots. The CRB always diminished for higher SNRs, with some limiting floor value, which was significantly higher in the high temporal correlation case. This behavior also appears in the CRBs for timing, as shown in Figs. 4, 5, and 6. Note for example that the value of this error bound for the coherence bandwidth in Fig. 3 degraded from slightly better than 3% for the PCD source case (α=0.9) to around 9% for the FCD case when the SNR was set to 10 dB, two sensors were used, and the source angular spread was 5°. Furthermore, an improvement in the estimation of less than 25% was achieved when changing from two sensors to four. In addition, a slight reduction in the error bound was observed when the angular spread increased from 5° to 18°.
CRB of the normalized coherence bandwidth λ n in terms of the SNR. Results are provided for different values of the temporal correlation α, different values of the angular spread, and different numbers of sensors. The number of channel vector estimates K has been set to 50, the delay spread to 5 T c , and the roll-off factor to one. a Top left: angular spread set to 5° and two sensors. b Top right: angular spread set to 18° and two sensors. c Bottom left: angular spread set to 5° and four sensors. d Bottom right: angular spread set to 18° and four sensors
CRB of the first arriving path k0 for different values of the SNR. Results are provided for different values of the temporal correlation α and different angular spreads for the received signal. The number of sensors is 1 (solid) and 4 (solid bullet). Delay spread set to 5 T c , for 50 channel vector estimates. a Left: angular spread set to 5°. b Right: angular spread set to 10°
CRB of the first arriving path k0 for different values of the SNR. Results are provided for different values of the temporal correlation α, delay spread set to 2 T c , and two different roll-off factors. One sensor and 50 channel vector estimates are available. The sampling rate set to twice the chip rate. a Left: β set to 0.5. b Right: β set to 1.0
CRB of the first arriving path k0 for different values of the SNR. Results are provided for different values of the temporal correlation α, different values of delay spread, and different values of angular spread. Four sensors and 50 channel vector estimates are available. The sampling rate set to twice the chip rate, and a roll-off factor of 0.5. a Top left: delay spread set to 5 T c , angular spread set to 18°. b Top right: delay spread set to 5 T c , angular spread set to 5°. c Bottom left: delay spread set to 2 T c , angular spread set to 18°. d Bottom right: delay spread set to 2 T c , angular spread set to 5°
In addition, Fig. 4 shows the behavior of the CRB for the first arrival when observations from signals received at multiple sensors were available. This figure compares the case of a single sensor with that of four sensors spaced λ/2 apart, in two environments with angular spreads of 5° and 10°, which could be the case for UL measurements. These results showed that adding multiple antennas improved the accuracy of the estimates significantly, but that the angular spread did not influence delay estimation as strongly. The improvement due to a higher angular spread was, however, more important for lower delay spreads [35], and the best situation corresponded precisely to completely uncorrelated sensors. In fact, the CRBs degraded as the angular spread decreased. Differences were not really significant in relative terms, since computed errors were between 0.095 and 0.125 of the chip time, but they were more visible in range terms. For instance, range errors for a SNR of 15 dB were between 7.5 and 10 m for WCDMA and between 23 and 30 m for IS-95.
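The range figures above follow directly from the chip durations of the two systems. A minimal sketch of the conversion, assuming the standard chip rates of 3.84 Mcps for WCDMA and 1.2288 Mcps for IS-95:

```python
# Convert a timing-error bound expressed in chip times (Tc) into a range
# error in metres: range_error = err_chips * (c / chip_rate).
C = 3.0e8  # speed of light, m/s

CHIP_RATES = {
    "WCDMA": 3.84e6,   # chips per second
    "IS-95": 1.2288e6,
}

def range_error_m(err_chips: float, system: str) -> float:
    """Range error in metres for a timing error given in chip times."""
    return err_chips * C / CHIP_RATES[system]

if __name__ == "__main__":
    for err in (0.095, 0.125):
        for sys_name in ("WCDMA", "IS-95"):
            print(f"{err} Tc in {sys_name}: {range_error_m(err, sys_name):.1f} m")
```

With these chip rates, 0.095–0.125 Tc maps to roughly 7.4–9.8 m for WCDMA and 23.2–30.5 m for IS-95, matching the ranges quoted above.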
Furthermore, the inclusion of multiple sensors provided similar gains in timing accuracy, from moderate to high SNR, regardless of the value of the temporal coefficient. A gain factor of around two was achieved. In range terms, this means that error decreased from 16 to 8 m for ICD sources in a WCDMA system when a four sensor array was used instead of a single sensor.
Results in Figs. 1, 2, 3, and 4 above were achieved using sampling at the chip rate, while Figs. 5 and 6 exhibit results for the timing error bounds when sampling is faster than the chip rate. In particular, Fig. 5 shows that only a marginal improvement in the timing error bound was obtained when the roll-off factor β was changed from 0.5 to 1.0. For example, there was an improvement of just around 10% for a SNR of 15 dB for ICD sources, which accounts for less than a meter in range terms. However, because the sampling was twice as fast, a gain of two was achieved over the whole observed SNR range, independently of the temporal correlation factor α. For example, the timing error for a SNR of 40 dB was reduced to around half (0.08 T c ) when the sampling rate was doubled, as can be seen by comparing Fig. 5 with the results in [35].
Furthermore, Fig. 6 shows similar results for the timing error, independently of the angular and delay spread of the source. Again, the major improvements were associated with a lower degree of correlation among the measures; improvements related to the angular and delay spreads amounted to less than a few meters in range terms. These improvements were obtained for wider angular spreads, especially when the SNR was low. This gain shrank at higher SNRs, where errors tended to a minimum floor of 0.04 T c , corresponding to a range error of around 3 m in WCDMA.
Finally, Fig. 7 exhibits the timing error bound in terms of the number of estimates for different configurations of the antenna array and several temporal correlation factors among the observed estimates. This figure shows that the error bound for first-arrival timing estimation improves as the number of channel vector estimates K increases. In the case of highly temporally correlated channel estimates, the results showed how difficult it is to reduce the error bounds, even for a large number of observations (high values of K) or when multiple antennas were used. As an example, note in Fig. 7 (left) that as K increased from 10 to 100, the error was reduced from 4.8 to 1.2 m in range terms for α=0.9, but only halved, from 9.6 to 4.8 m, for α=0.999. The influence of the angular spread is another factor to be considered: spatially uncorrelated sensors allowed a better estimation of the timing, and the improvement in range terms was more important for highly temporally correlated estimates, since in that case it accounted for around 8 m at K=100, against just around 1.5 m when the channel estimates were temporally uncorrelated.
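The diminishing returns from larger K under high temporal correlation can be illustrated with a toy calculation, unrelated to the paper's exact CRB expressions: the variance of the sample mean of K unit-variance estimates whose correlation decays as α^|i−j| (an AR(1)-type correlation model, assumed here purely for illustration):

```python
import numpy as np

def var_of_mean_ar1(K: int, alpha: float) -> float:
    """Variance of the sample mean of K unit-variance samples with
    correlation alpha**|i-j| (closed form of the double sum / K**2)."""
    k = np.arange(1, K)
    return (1.0 + 2.0 * np.sum((1.0 - k / K) * alpha**k)) / K

for alpha in (0.0, 0.9, 0.999):
    v10, v100 = var_of_mean_ar1(10, alpha), var_of_mean_ar1(100, alpha)
    print(f"alpha={alpha}: var(K=10)={v10:.4f}, var(K=100)={v100:.4f}, "
          f"gain={v10 / v100:.1f}x")
```

For uncorrelated estimates the variance falls as 1/K (a 10x gain from K=10 to K=100), while for alpha close to one the variance barely moves, mirroring the saturation seen in Fig. 7.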
CRB of k0 as a function of the number of channel vector estimates K. Results are provided for different values of the temporal correlation coefficient α, SNR =10 dB, and a delay spread of 5 T c . a Left: one sensor. b Right: four sensors and two possible scenarios: uncorrelated sensors case and a low angular spread case (angular spread of 5°)
Interpretation of these results requires care when the number of observations is analyzed together with the time correlation factor α: channel estimates obtained from two consecutive slots exhibit a higher temporal correlation than those obtained from more widely separated slots. Moreover, the coherence time of the delays shrinks at higher mobile speeds, which limits the availability of new estimates.
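As a rough illustration of this trade-off (not part of the paper's model), Clarke's classical fading model gives a temporal correlation of J0(2π f d Δt) between estimates separated by Δt. The sketch below evaluates this over one WCDMA slot (0.667 ms), assuming a 2 GHz carrier frequency:

```python
import numpy as np

C = 3.0e8        # speed of light, m/s
FC = 2.0e9       # assumed carrier frequency, Hz
SLOT = 0.667e-3  # WCDMA slot duration, s

def bessel_j0(x: float) -> float:
    """J0(x) via its integral form, (1/pi) * int_0^pi cos(x sin(theta)) dtheta,
    approximated as the mean of the integrand over a uniform grid."""
    theta = np.linspace(0.0, np.pi, 4001)
    return float(np.mean(np.cos(x * np.sin(theta))))

def clarke_correlation(speed_kmh: float, dt: float = SLOT) -> float:
    """Temporal fading correlation J0(2*pi*fd*dt) under Clarke's model."""
    fd = (speed_kmh / 3.6) * FC / C   # maximum Doppler shift, Hz
    return bessel_j0(2.0 * np.pi * fd * dt)

for v in (3, 30, 100):
    print(f"{v:3d} km/h -> correlation over one slot: {clarke_correlation(v):.4f}")
```

At pedestrian speed the slot-to-slot correlation is essentially one, while at vehicular speed it drops noticeably, consistent with the low-speed/high-α association made in the text.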
On the other hand, from the perspective of the computation of the CRBs, the use of uncorrelated sensors implies a reduction in complexity, since the spatial correlation in (31) disappears as a nuisance parameter and the computation therefore becomes simpler and faster.
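The simplification can be seen in how a CRB is extracted from the Fisher information matrix: with coupled nuisance parameters the full FIM must be inverted, whereas a decoupled (block-diagonal) FIM reduces the bound for the parameter of interest to the reciprocal of a single entry. A toy numeric sketch (the FIM values below are invented for illustration; this is not the paper's FIM):

```python
import numpy as np

def crb_first_param(fim: np.ndarray) -> float:
    """CRB of the first parameter: the (0,0) entry of the inverse FIM."""
    return np.linalg.inv(fim)[0, 0]

# Toy 3x3 FIM: parameter of interest coupled with two nuisance parameters.
F_coupled = np.array([[10.0, 2.0, 1.0],
                      [ 2.0, 5.0, 0.5],
                      [ 1.0, 0.5, 4.0]])

# Same diagonal information, but with the nuisance parameters decoupled:
F_decoupled = np.diag(np.diag(F_coupled))

crb_c = crb_first_param(F_coupled)     # full matrix inversion needed
crb_d = crb_first_param(F_decoupled)   # reduces to 1/F[0,0]
print(f"coupled CRB = {crb_c:.4f}, decoupled CRB = {crb_d:.4f}")
```

The decoupled bound equals 1/F[0,0], and for a positive-definite FIM the coupled bound can never be smaller, so dropping a nuisance parameter both simplifies the computation and tightens (or leaves unchanged) the bound.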
CRBs for timing and normalized coherence bandwidth for the LOS Rice fading model
This section shows the behavior of the error bounds for the timing k0 and the normalized coherence bandwidth λ n for the LOS Rice model in (50). In particular, Figs. 8 and 9 exhibit these bounds as a function of the dispersed SNR for two different values of the average LOS power: 3 dB above and 3 dB below the dispersed signal. First of all, note that the timing error was reduced in more highly temporally correlated environments when a LOS component was present. This behavior is precisely the opposite of what was registered under the NLOS condition. Furthermore, the timing bounds computed for this LOS model were lower than those expected for the NLOS condition, and they decreased as the LOS power increased. The improvement achieved for higher temporal correlation is almost negligible for temporal correlation factors above 0.99. For example, note that the timing error for a SNR of 15 dB and an ICD source (α=0), with a LOS power 3 dB below the dispersed signal power and two sensors, corresponded to 5.5×10−2T c . This bound was reduced to 4.8×10−2T c for a PCD source (α=0.9), and to 3.0×10−2T c for a FCD source (α=0.99999). When the LOS power increased to 3 dB above the dispersed signal power, the bound went from 3.2×10−2T c to 2.5×10−2T c and finally to 1.5×10−2T c for the ICD (α=0), PCD (α=0.9), and FCD (α=0.99999) cases, respectively. In range terms, this means that the distance error went from around 4.4 m (ICD) to 2.4 m (FCD) in the first case and from 2.6 to 1.2 m in the latter. Another interesting observation is that the timing error decreased without bound as the SNR increased for this LOS model. This indicates that timing accuracy would theoretically be limited only by the SNR in cases of a dominant LOS condition.
CRB of the first arriving path k0 as a function of dispersed SNR. Results are provided for different values of the temporal correlation coefficient α and for different values of the LOS component power. Two sensors and 50 channel vector estimates are available. The sampling rate set to twice the chip rate and a roll-off factor of 0.5. Delay spread set to 2 T c and angular spread set to 5°. Bearing direction is the broadside. a Left: LOS power set 3 dB lower than the dispersed signal power. b Right: LOS power set 3 dB higher than the dispersed signal power
CRB of the normalized coherence bandwidth λ n in terms of the dispersed SNR. Results are provided for different values of the temporal correlation coefficient α, and for different values of the LOS component power. Two sensors and 50 channel vector estimates are available. The sampling rate set to twice the chip rate and a roll-off factor of 0.5. Delay spread set to 2 T c and angular spread set to 5°. Bearing direction is the broadside. a Left: LOS power set 3 dB lower than the dispersed signal power. b Right: LOS power set 3 dB higher than the dispersed signal power
These results may seem too optimistic, but they are consistent with the model structure, which supposes the LOS signal to be perfectly characterized. In fact, if the signal we are looking for is practically deterministic, which is especially true in high SNR conditions, it is possible to estimate the timing with very high accuracy.
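This unbounded improvement with SNR is consistent with the classical CRB for the delay of a known deterministic waveform in white noise (a standard result in estimation theory; see, e.g., Kay's text in the reference list), which decreases as 1/SNR with no floor:

$$\operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{8\pi^{2}\,B_{\mathrm{rms}}^{2}\,\mathrm{SNR}},$$

where B rms denotes the RMS (Gabor) bandwidth of the pulse and SNR = E/N 0 . The symbol B rms is used here to avoid confusion with the roll-off factor β used elsewhere in this paper.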
On the other hand, the bounds for the normalized coherence bandwidth λ n seemed not to be disturbed by a change in the LOS power level, and their dependence on the temporal correlation coefficient remained the same as in the NLOS model. For example, note from Fig. 9 that the error bounds were somewhat higher than those expected from the NLOS model, but they were also floor-limited as the SNR increased; in this case, the minimum achievable error bound was around 1.2%. This slight degradation in the CRB for the LOS model derives from the fact that the vector of unknown parameters includes a new parameter to estimate [26], and from the fact that this new LOS parameter does not disturb the temporal dispersion statistics of the model. However, it is important to point out that a LOS condition is associated with a less dispersed signal, both temporally and spatially [15, 17, 42], and this fact has to be considered in the analysis to extract proper conclusions from these results. Furthermore, from a positioning viewpoint, timing is the most relevant parameter, and the coherence bandwidth can be considered a nuisance parameter. Nevertheless, from a system perspective this parameter could provide some additional information about the quality of the measure [42].
Figures 10 and 11 relate space-time diversity to the timing error bounds for our Rice fading model. In particular, Fig. 10 shows the behavior of this bound with the mean direction of arrival of the received signal for different power values of the LOS component when the dispersed SNR was set to 10 dB. The impact of temporal correlation is also assessed by comparing results for ICD sources on the left of the figure with those for PCD sources on the right side. Furthermore, the gain introduced by the use of a larger number of sensors is also exhibited by comparing the graphics at the top (two sensors) with those at the bottom (four sensors).
CRB of the first arriving path k0 as a function of the signal bearing. Results are provided for different values of the power level of the LOS component and various values of the temporal correlation coefficient α. 50 channel vector estimates are available. The sampling rate set to twice the chip rate, and a roll-off factor of 0.5. Delay spread set to 2 T c , angular spread set to 5° and the dispersed SNR set to 10 dB. a Top left: two sensors—ICD source (α=0). b Top right: two sensors—highly temporally correlated source (α=0.99). c Bottom left: four sensors—ICD source (α=0). d Bottom right: four sensors—highly temporally correlated source (α=0.99)
CRB of k0 as a function of the roll-off factor of the shaping pulse. Results are provided for different values of the dispersed SNR when the LOS power level is 0 dB higher than the dispersed signal and an ICD source (α=0). 50 channel vector estimates are available. The sampling rate set to twice the chip rate, delay spread set to 2 T c , angular spread set to 5°, and four sensors. a Left: bearing =0°. b Right: bearing =45°
First of all, it is interesting to note that the timing error reached a minimum for bearings close to 30° and that this improvement became more important in relative terms for higher levels of the LOS component and for more temporally correlated signals, since in these cases the signal became almost deterministic and was easier to discriminate from noise. Furthermore, for this LOS model, a larger gain was obtained from the introduction of new sensors in the case of PCD sources. For example, a gain factor of around 1.35 was achieved when passing from two sensors to four for ICD sources, and this factor increased to around two for PCD sources (α=0.99). Bearing also impacted the timing error performance. A higher gain was found when the mean signal bearing was around 35°, and the improvement region widened around this bearing when more sensors were added and a higher LOS power was available. This gain decayed as the LOS path weakened and the Rice propagation turned into Rayleigh. Gains associated with bearing reduced the timing error by more than half for high-power LOS signals, and these errors were reduced by around 45% when the LOS power changed from −3 to 3 dB over the dispersed component and four sensors were used.
On the other hand, Fig. 11 relates the CRB for the timing error to the roll-off factor of the shaping pulse, the SNR, and the signal bearing. The results in this figure demonstrate that the timing error bound improved for a higher roll-off factor when the signal arrived directly from the broadside (Fig. 11, left), especially for high signal-to-noise ratios. This enhancement is possibly related to the sharper form of the first arrival caused by the increase in bandwidth. However, the gain with the roll-off factor was negligible when the bearing changed to 45° (Fig. 11, right), due to the better array performance at this bearing.
For example, note from the graphics that the timing error was reduced from 3×10−2T c for a roll-off factor of 0.5 to around 2×10−2T c for a roll-off factor of 1.0 when four sensors were used, the dispersed SNR was set to 20 dB, and the signal arrived directly from the broadside (Fig. 11, left). On the other hand, when the direction of arrival changed to 45°, the timing error remained very close to 0.85×10−2T c , independently of the roll-off factor, for the same signal conditions (Fig. 11, right). Of course, lower errors were achieved when a larger number of sensors was used. The behavior described by these results is very reasonable, since shaping the pulse with a higher roll-off implies a larger bandwidth and therefore reduced side lobes, which helps to lower the probability of missing the first arrival during the estimation stage. Furthermore, the array geometry responds to bearing, which can help to discriminate the LOS component from the dispersed signal.
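The side-lobe argument can be checked numerically with the standard raised-cosine pulse, used here as a generic stand-in for the paper's shaping pulse (the exact pulse is not specified in this section):

```python
import numpy as np

def raised_cosine(t, beta):
    """Raised-cosine pulse p(t) for symbol period T = 1 (standard closed form)."""
    num = np.cos(np.pi * beta * t)
    den = 1.0 - (2.0 * beta * t) ** 2
    return np.sinc(t) * num / den   # np.sinc is the normalized sinc

# Grid chosen so it never lands exactly on the removable singularity |t| = 1/(2*beta).
t = np.linspace(0.005, 8.0, 4000)
peak_sidelobe = {}
for beta in (0.5, 1.0):
    p = np.abs(raised_cosine(t, beta))
    peak_sidelobe[beta] = p[t > 1.5].max()   # largest tail peak beyond the main lobe
    print(f"beta={beta}: peak side lobe = {20 * np.log10(peak_sidelobe[beta]):.1f} dB")
```

The tail side lobes for β=1.0 come out more than an order of magnitude below those for β=0.5, illustrating why a wider roll-off reduces the risk of locking onto a side lobe instead of the true first arrival.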
Practical estimators and CRBs
Figure 12 exhibits the TOA-MV power spectrum computed using (62) for the NLOS signal model described by (17) and (19), for an ICD source, with SNR =10 dB, K =75 channel estimates, and N s =4 sensors, with sampling at twice the chip rate. The threshold has been set properly [62] above the noise floor to avoid an early detection due to the first side lobe. On the other hand, Fig. 13 shows the root mean square error (RMSE) of the first timing as a function of the SNR and the temporal correlation coefficient, computed over 2,500 realizations for two different configurations using the MV approach. Both groups of results were computed for K=50 estimates, N=20 chip times, an angular spread of 5°, and a delay spread of 2 T c , and they are compared with the corresponding CRBs. Results on the left correspond to a configuration with N s =1 sensor and sampling at twice the chip rate, while results on the right correspond to N s =4 sensors and sampling at the chip rate. The estimated errors for the case of an ICD source are slightly higher than the corresponding bounds, especially when just one sensor is available. However, higher errors were measured as the temporal correlation increased, especially at low SNR. As the SNR increases, the error decays to the minimum, as expected. When N s =4, errors also diminish as before, but although the error approaches the CRB at high values of SNR, the minimum error remains higher than that predicted by the CRBs. These results provide evidence of the strengths and weaknesses of both the CRBs and the practical estimator at hand. In essence, the proper behavior of the MV timing estimator has been verified against the bounds for the best possible scenario: an ICD source.
However, the inability of this method to attain the bounds in more aggressive scenarios reveals that the current formulation of the algorithm possibly does not take advantage of all the information provided by the temporal and spatial diversity, and therefore it is not the MVU estimator. On the other hand, these discrepancies remind us that the CRB is an optimistic model for any unbiased estimator, one that alerts us to the inherent difficulties of the estimation. In this case, the bounds are a warning that the temporal correlation of the estimates must be addressed in order to get the best results out of the method: highly correlated estimates result in ill-conditioned matrices that degrade the behavior of this practical estimator.
Minimum variance spectrum for timing estimation. This result is provided for a SNR of 10 dB, an ICD source (α=0), 75 channel vector estimates available; the sampling rate set to twice the chip rate, delay spread set to 2 T c , angular spread set to 5°, and four sensors
Root mean square error of the first arriving path k0 for different values of the SNR. Results are provided for different values of the temporal correlation α, and different configurations of the antenna array, a delay spread of 2 T c , 50 channel vector estimates available, and an angular spread of 5°. The roll-off factor set to 0.5. a Left: one sensor and the sampling rate set to twice the chip rate. b Right: four sensors and the sampling rate set to the chip rate
This paper describes the use of the CRBs to study the impact of various factors involved in signal TOA estimation for a mobile scenario modeled by a space-time dispersive channel for both Rayleigh and Rice fading propagation situations, and therefore, it explores the difficulties and opportunities associated with timing estimation in LOS and NLOS environments.
In particular, our model makes a contribution by taking into account the spatial and temporal correlation among channel estimates and the impact of the roll-off factor of the shaping pulse, in addition to the number of sensors and the number of estimates that are typical of other approaches. It also includes an exponential dispersion for the delays, which is characteristic of mobile channels, instead of just a few paths as in prior approaches. Furthermore, this paper includes some asymptotic expressions for interesting cases related to high-speed and low-speed subscribers.
Due to the close relationship between timing and positioning, this model contributes insight not only into TOA estimation but also into its impact on positioning.
From results in Section 3, the following conclusions can also be derived:
Estimation errors for the timing and the normalized coherence bandwidth decrease when the SNR increases, but this improvement depends strongly on the propagation scenario and the type of source. In the case of NLOS Rayleigh propagation, these estimation errors degrade rapidly when passing from PCD sources to FCD sources, reaching a limiting floor at high SNRs, so a higher SNR does not guarantee a lower error. On the other hand, in the case of LOS Rice propagation, the larger improvement is achieved when passing from an ICD source to a PCD source, and timing accuracy improves practically without bound for higher SNRs.
The bounds for the normalized coherence bandwidth λ n seem not to be disturbed by a change in the LOS power level; results remain as in the NLOS model.
Estimation errors for the timing and the normalized coherence bandwidth also decrease when the number of observations increases, but again this reduction is strongly conditioned by the propagation scenario and the kind of source: ICD, PCD, or FCD. In the NLOS case, a larger record of observations is required to maintain accuracy under higher temporal correlation among channel estimates; in the LOS case, however, an uncorrelated dispersed signal component acts as a random perturbation that degrades the accuracy on the signal of interest.
The use of multiple antennas introduces not just new observations but also diversity, and therefore it helps to improve accuracy. However, the impact of these improvements is associated with the temporal and spatial coherence of the scattered signal. For the NLOS condition, the inclusion of multiple sensors provides similar gains in timing accuracy, from moderate to high SNR, regardless of the value of the temporal coefficient; a gain factor of around two is achieved when passing from one sensor to four, confirming the observations in [31]. Under the LOS condition, however, this gain almost doubles in the case of highly PCD sources, and the bearing also impacts the timing error performance. This gain decays when the LOS path weakens and the Rice propagation turns into Rayleigh. Improvements are always obtained when an antenna array is used instead of a single sensor, and improvements of around 20% are also achieved when passing from a narrow-spread source to a spatially well-scattered signal.
Under a non-line-of-sight (NLOS) condition, the roll-off factor has a negligible effect on the error bounds, while under a LOS condition, a higher roll-off factor helps to improve the timing error bound, possibly due to the sharper form of the first arrival in this case, related to the increase in bandwidth.
CRBs provide a useful and optimistic insight about the estimation problem that may help to achieve practical and efficient estimators.
Finally, the following recommendations should be taken into account:
Although our CRB model provides valuable information about the timing estimation error, care is required in extrapolating these results to the mobile subscriber positioning problem, due to the different nature of the Rayleigh and Rice propagation models. For example, in obstructed environments the shadowing may lead to large delay spreads, while in the LOS condition low delays are expected. In addition, some obstructed scenarios may lead to signal clustering, and in that case, even with the first arrival accurately estimated, the positioning could be biased. Fortunately, there are methods to identify these scenarios [33, 40, 63] and to reduce the harmful effects of the NLOS condition.
Errors in the measures translate directly into range errors for positioning based on TOA, and these certainly degrade the subscriber's positioning. However, it would be inappropriate to regard the range errors as final positioning errors. Positioning is a more complex procedure that involves the acquisition of signals transmitted and received from different parts of the network, and it therefore also depends on the problem geometry. However, the use of larger data records dramatically reduces the positioning error, so it is very important to determine the coherence time of the delays and angles in order to take advantage of this situation. In fact, positioning accuracy is very sensitive to subscriber mobility, with the highest error associated with static equipment in the NLOS condition, due to the impossibility of taking advantage of temporal diversity.
BB:
Barankin bound
BCRB:
Bayesian CRB
BS:
Base station
CRB:
Cramer-Rao bound
DOA:
Direction of arrival
FCD:
Fully coherent dispersed
FIM:
Fisher information matrix
GNSS:
Global navigation satellite system
ICD:
Incoherent dispersed
KF:
Kalman filter
LOS:
Line of sight
MCRB:
Modified CRB
ML:
Maximum likelihood
MS:
Mobile station
MSE:
Mean squares error
MV:
Minimum variance
MVU:
Minimum variance unbiased
MZZB:
Modified ZZB
NLOS:
Non line of sight
OTDOA:
Observed time differences of arrival
PAS:
Power angular spectrum
PCD:
Partially coherent dispersed
PCF:
Position computing function
RMSE:
Root mean square error
SNR:
SS:
Signal strength
TDOA:
Time differences of arrival
TOA:
Time of arrival
UWB:
Ultra wide band
WCDMA:
Wideband code division multiple access
ZZB:
Ziv-Zakai bound
Technical Specification Group Services and System Aspects; Location Services (LCS); Service Description; Stage 1 (Release 9) – 3GPP TS 22.071 V9.1.0 (2010-09). (ETSI 3RD Generation Partnership Project, 2010). http://www.qtc.jp/3GPP/Specs/22071-910.pdf. Accessed 9 Jan 2016.
J Johansson, WA Hapsari, S Kelley, G Bodog, Minimization of Drive Tests in 3GPP Release 11. IEEE Commun. Mag. 50(11), 36–43 (2012). https://doi.org/10.1109/MCOM.2012.6353680.
M Abo-Zahhad, SM Ahmed, M Mourad, Future location prediction of mobile subscriber over mobile network using Intra Cell Movement pattern algorithm. IEEE 1st Int. Conf. Commun. Signal Proc. Appl., 1–6 (2013). https://doi.org/10.1109/ICCSPA.2013.6487272.
R Barnes, B Rosen, 911 for the 21st century. IEEE Spectr. 51(4), 58–64 (2014). https://doi.org/10.1109/MSPEC.2014.6776307.
C-M Huang, S-C Lu, DEH: A ubiquitous Heritage Exploring System using the LBS Mechanism. IEEE Int. Conf. Netw.-Based Inf. Syst.310–317 (2011). https://doi.org/10.1109/NBiS.2011.54.
M Driusso, M Comisso, F Babich, C Marshall, Performance analysis of time of arrival estimation on OFDM signals. IEEE Signal Process. Lett. 22(7), 983–987 (2015). https://doi.org/10.1109/LSP.2014.2378994.
J Huang, P Wang, Q Wan, CRLBs for WSNs localization in NLOS environment. EURASIP J. Wirel. Commun. Netw. 2011:, 16 (2011). https://doi.org/10.1186/1687-1499-2011-16.
L Cong, W Zhuang, Hybrid TDOA/AOA mobile user location for wideband CDMA cellular systems. IEEE Trans. Wirel. Commun. 1(3), 439–447 (2002). https://doi.org/10.1109/TWC.2002.800542.
K Pahlavan, X Li, Indoor Geolocation Science and Technology. IEEE Commun. Mag. 40(2), 112–118 (2002). https://doi.org/10.1109/35.983917.
A Catovic, Z Sahinoglu, The Cramer-Rao bounds of hybrid TOA/RSS and TDOA/RSS location estimation schemes. IEEE Commun. Lett. 8(10), 626–628 (2004). https://doi.org/10.1109/LCOMM.2004.835319.
Y Wang, G Leus, Reference-free time-based localization for an asynchronous target. EURASIP J. Adv. Signal Process. 2012:, 19 (2012). https://doi.org/10.1186/1687-6180-2012-19.
Y Wang, Linear least squares localization in sensor networks. EURASIP J. Wirel. Commun. Netw. 2015:, 51 (2015). https://doi.org/10.1186/s13638-015-0298-1.
S Ahonen, P Eskelinen, Mobile terminal location for UMTS. IEEE Aerosp. Electron. Syst. Mag. 18(2), 23–27 (2003). https://doi.org/10.1109/MAES.2003.1183866.
G Fuks, J Goldberg, H Messer, Bearing estimation in a Ricean channel—part I: inherent accuracy limitations. IEEE Trans. Signal Process. 49(5), 925–937 (2001). https://doi.org/10.1109/78.917797.
H Asplund, AA Glazunov, AF Molisch, KI Pedersen, M Steinbauer, The COST 259 directional channel model—part II: macrocells. IEEE Trans. Wirel. Commun. 5(12), 3434–3450 (2006). https://doi.org/10.1109/TWC.2006.256967.
P Lusina, F Kohandani, SM Ali, Antenna parameter effects on spatial channel models. Inst. Eng. Technol. Commun. 3(9), 1463–1472 (2009). https://doi.org/10.1049/iet-com.2008.0414.
C Gentile, S Martínez, A Kik, A comprehensive spatial-temporal channel propagation model for the Ultrawideband Spectrum 2-8 GHz. IEEE Trans. Antennas Propag. 58(6), 2069–2077 (2010). https://doi.org/10.1109/TAP.2010.2046834.
M Bengtsson, B Ottersten, Low-complexity estimators for distributed sources. IEEE Trans. Signal Process. 48(8), 2185–2194 (2000). https://doi.org/10.1109/78.851999.
O Besson, P Stoica, Decoupled estimation of DOA and angular spread for a spatially distributed source. IEEE Trans. Signal Process. 48(7), 1872–1882 (2000). https://doi.org/10.1109/78.847774.
S Valaee, B Champagne, P Kabal, Parametric localization of distributed sources. IEEE Trans. Signal Process. 43(9), 2144–2153 (1995). https://doi.org/10.1109/78.414777.
GC Raleigh, T Boros, Joint space-time parameter estimation for wireless communication channels. IEEE Trans. Signal Process. 46(5), 1333–1343 (1998). https://doi.org/10.1109/78.668795.
M Wax, A Leshem, Joint estimation of time-delays and direction of arrival of multiple reflections of a known signal. IEEE Trans. Signal Process. 45(10), 2477–2484 (1997). https://doi.org/10.1109/78.640713.
J Lee, CH Lee, J Chun, JH Lee, Joint estimation of space-time distributed signal parameters. IEEE Conf. Veh. Technol. 2:, 822–828 (2000). https://doi.org/10.1109/VETECF.2000.887118.
T Menni, E Chaumette, P Larzabal, Reparameterization and constraints for CRB: duality and a major inequality for system analysis and design in the asymptotic region. IEEE Int. Conf. Acoust. Speech Signal Process.3545–3548 (2012). https://doi.org/10.1109/ICASSP.2012.6288682.
A Emmanuele, M Luise, Fundamental limits in signal time-of-arrival estimation in AWGN and multipath scenarios with application to next-generation GNSS. IEEE ESA Workshop Satell. Navig. Technol. Eur. Workshop GNSS Signals Signal Process, 1–7 (2010). https://doi.org/10.1109/NAVITEC.2010.5708049.
SM Kay, Fundamentals of statistical signal processing—estimation theory, 16th edn (Prentice Hall, New Jersey, 1993).
S-M Omar, D Slock, O Bazzi, Recent insights in the Bayesian and deterministic CRB for blind SIMO channel estimation. IEEE Int. Conf. Acoust. Speech Signal Process.3549–3552 (2012). https://doi.org/10.1109/ICASSP.2012.6288683.
GN Tavares, LM Tavares, The true Cramer-Rao lower bound for data-aided carrier-phase-independent time-delay estimation from linearly modulated waveforms. IEEE Trans. Commun. 54(1), 128–140 (2006). https://doi.org/10.1109/TCOMM.2005.861655.
S Buzzi, HV Poor, On parameter estimation in long-code DS/CDMA systems: Cramer-Rao bounds and least-squares algorithms. IEEE Trans. Signal Process. 51(2), 545–559 (2002). https://doi.org/10.1109/TSP.2002.806987.
C Botteron, A Host-Madsen, M Fattouche, Cramer-Rao bound for location estimation of a mobile in asynchronous DS-CDMA systems. IEEE Int. Conf. Acoust. Speech Signal Process. 4:, 2221–2224 (2001). https://doi.org/10.1109/ICASSP.2001.940439.
C Botteron, A Host-Madsen, M Fattouche, Effects of system and environment parameters on the performance of network-based mobile station position estimators. IEEE Trans. Veh. Technol. 53(1), 163–180 (2004). https://doi.org/10.1109/TVT.2003.822029.
K Schmeink, R Adam, PA Hoeher, Performance limits of channel parameter estimation for joint communication and positioning. EURASIP J. Adv. Signal Process. 2012:, 178 (2012). https://doi.org/10.1186/1687-6180-2012-178.
S Gezici, Z Tian, GB Giannakis, H Kobayashi, AF Molisch, HV Poor, Z Sahinoglu, Localization via ultra wideband radios: a look at positioning aspects for future sensor networks. IEEE Signal Process. Mag. 22(4), 70–84 (2005). https://doi.org/10.1109/MSP.2005.1458289.
A Mallat, J Louveaux, L Vandendorpe, UWB based positioning in multipath channels: CRBs for AOA and for hybrid TOA-AOA based methods. IEEE Int. Conf. Commun.5775–5780 (2007). https://doi.org/10.1109/ICC.2007.957.
R Játiva, J Vidal, Estimacióndel Tiempo, de Llegada en un canal Rayleigh desde una perspectiva de la Cota Inferior de Cramer-Rao. Rev. Av. Cien. Ingenierías. 1(1), 5–10 (2009).
R Játiva, J Vidal, M Cabrera, Cramer Rao bounds in time of arrival estimation for a distributed source. IST Mob. Commun. Summit.236–244 (2001). http://spcom.upc.edu/documents/jativa_ISTSummit2001.pdf.
B Denis, N Daniele, NLOS Ranging error mitigation in a distributed positioning algorithm for indoor UWB Ad-Hoc Networks. Int. Workshop Wirel. Ad-Hoc Netw.356–360 (2004). https://doi.org/10.1109/IWWAN.2004.1525602.
J Riba, A Urruela, A non-line-of-sight mitigation technique based on ML-detection. IEEE Int. Conf. Acoust. Speech Signal Process. 2:, 153–156 (2004). https://doi.org/10.1109/ICASSP.2004.1326217.
L Cong, W Zhuang, Nonline-of-sight error mitigation in mobile location. IEEE Trans. Wireless Commun. 4(2), 560–573 (2005). https://doi.org/10.1109/TWC.2004.843040.
O Yihong, H Kobayashi, H Suda, Analysis of wireless geolocation in a non-line-of-sight environment. IEEE Trans. Wirel. Commun. 5(3), 672–681 (2006). https://doi.org/10.1109/TWC.2006.1611097.
K Yu, YJ Guo, NLOS Error Mitigation for mobile location estimation in wireless networks. IEEE Veh. Technol. Conf, 1071–1075 (2007). https://doi.org/10.1109/VETECS.2007.228.
JM Huerta, J Vidal, A Giremus, JY Tourneret, Joint particle filter and UKF position tracking in severe non-line-of-sight situations. IEEE J. Sel. Top. Signal Process. 3(5), 874–888 (2009). https://doi.org/10.1109/JSTSP.2009.2027804.
L Chen, R Piché, H Kuusniemi, R Chen, Adaptive mobile tracking in unknown non-line-of-sight conditions with application to digital TV networks. EURASIP J. Adv. Signal Process. 2014:, 22 (2014). https://doi.org/10.1186/1687-6180-2014-22.
C-T Chiang, P-H Tseng, K-T Feng, Hybrid TOA/TDOA based unified Kalman tracking algorithm for wireless networks. IEEE Int. Symp. Pers. Indoor Mob. Radio Commun, 1707–1712 (2010). https://doi.org/10.1109/PIMRC.2010.5671921.
H Li, Z Deng, Y Yu, Investigation on a NLOS Error Mitigation algorithm for TDOA Mobile Location. IET Int. Conf. Commun. Technol. Appl, 839–843 (2011). https://doi.org/10.1049/cp.2011.0787.
Y Long, J Huang, Y Pan, J Du, Novel TOA Location algorithms based on MLE and MMSEE for NLOS environments. IEEE Int. Conf. Commun. Circ. Syst. 2:, 46–49 (2013). https://doi.org/10.1109/ICCCAS.2013.6765283.
WC Jakes, (ed.), Microwave mobile communications, 10th edn (IEEE Press, New York, 1994).
TS Rappaport, Wireless Communications—principles and practice, 10th edn (Prentice Hall, New Jersey, 1996).
J Vidal, M Najar, M Cabrera, R Játiva, Positioning accuracy when tracking UMTS mobiles in delay and angular dispersive channels. IEEE Veh. Technol. Conf. 4:, 2575–2579 (2001). https://doi.org/10.1109/VETECS.2001.944066.
A Artés, F Pérez, J Cid, R López, C Mosquera, F Pérez, Comunicaciones Digitales, 1st edn (Pearson Educación S.A., Madrid, 2007).
P Laspougeas, P Pajusco, J-C Bic, Spatial radio channel for UMTS in urban small cells area. IEEE Conf. Veh. Technol. 2(885-892). https://doi.org/10.1109/VETECF.2000.887128.
KI Pedersen, PE Mogensen, BH Fleury, A stochastic model of the temporal and azimuthal dispersion seen at the base station in outdoor propagation environments. IEEE Trans. Veh. Technol. 49(2), 437–447 (2000). https://doi.org/10.1109/25.832975.
LJ Greenstein, V Erceg, YS Yeh, MV Clark, A new path-gain/delay-spread propagation model for digital cellular channels. IEEE Trans. Veh. Technol. 46(2), 477–485 (1997). https://doi.org/10.1109/25.580786.
M Nilsson, B Völcker, B Ottersten, A cluster approach to spatio-temporal channel estimation. IEEE Int. Conf. Acoust. Speech Signal Process. 5:, 2757–2760 (2000). https://doi.org/10.1109/ICASSP.2000.861069.
CD Lai, First order autoregressive Markov processes. Stoch. Process. Appl. 7(1), 65–72 (1978).
Y Qi, H Suda, H Kobayashi, On time-of-arrival positioning in a multipath environment. IEEE Veh. Technol. Conf. 5:, 3540–3544 (2004). https://doi.org/10.1109/VETECF.2004.1404723.
Karimi HA, (ed.), Advanced location-based technologies and services, 1st edn (CRC Press, Boca Raton, 2013).
R Jativa, J Vidal, Cota Inferior de Crámer-Rao en la Estimación del Tiempo de Llegada en un canal Rice. Rev. Av. Cien. Ingenierías. 4,1, C14–C21 (2012).
JR Magnus, H Neudecker, Matrix differential calculus with applications in statistics and econometrics (Wiley, New York, 1999).
R Raich, J Goldberg, H Messer, Bearing estimation for a distributed source: modeling, inherent accuracy limitations and algorithms. IEEE Trans. Signal Process. 48(2), 429–441 (2000). https://doi.org/10.1109/78.823970.
C-C Chong, C-M Tan, DI Laurenson, S McLaughlin, MA Beach, AR Nix, A new statistical wideband spatio-temporal channel model for 5-GHz band WLAN systems. IEEE J. Sel. Areas Commun. 21(2), 139–150 (2003). https://doi.org/10.1109/JSAC.2002.807347.
J Vidal, M Nájar, R Játiva, High resolution time-of-arrival detection for wireless positioning systems. IEEE Veh. Technol. Conf. VTC-2002-Fall. 4:, 2283–2287 (2002). https://doi.org/10.1109/VETECF.2002.1040627.
X Wang, Z Wang, B O'Dea, A TOA-based location algorithm reducing the errors due to non-line-of-sight (NLOS) propagation. IEEE Trans. Veh. Technol. 52(1), 112–116 (2003). https://doi.org/10.1109/TVT.2002.807158.
The authors thank Andy Espinosa Gutiérrez for his help in the Latex edition of this paper, David Barmettler for his kind review of the English text, and finally the anonymous reviewers whose comments helped to improve the quality of this document.
This work was carried out in the framework of the EC-funded project Saturn IST-1999-10322 and FUNDACYT subvention 980349 from Ecuador.
Universidad San Francisco de Quito, Diego de Robles y Pampite, Quito, Ecuador
René Játiva
Universitat Politècnica de Catalunya, Campus Nord, c/ Jordi Girona 1-3, Barcelona, Spain
Josep Vidal
Dr. JV contributed the original idea and provided continuous support and feedback throughout the development of this work. The mathematical developments, algorithm implementation, refinement of the results, and writing of the article were carried out by the first author. Both authors read and approved the final manuscript.
Correspondence to René Játiva.
AF1.1 CRBs for the NLOS Rayleigh Fading Model when sampling is performed at the chip rate. AF1.2 Asymptotic Expressions for Delay Estimates and a PCD Source. AF1.3 CRBs for Delay Estimates in the case of Fully Coherent Dispersed Sources. (PDF 211 kb)
Játiva, R., Vidal, J. Cramer-Rao bounds in the estimation of time of arrival in fading channels. EURASIP J. Adv. Signal Process. 2018, 19 (2018). https://doi.org/10.1186/s13634-018-0540-1
Accepted: 21 February 2018
Cramer-Rao bounds
Mobile subscriber location
Let $A=(0,1),$ $B=(2,5),$ $C=(5,2),$ and $D=(7,0).$ A figure is created by connecting $A$ to $B,$ $B$ to $C,$ $C$ to $D,$ and $D$ to $A.$ The perimeter of $ABCD$ can be expressed in the form $a\sqrt2+b\sqrt{5}$ with $a$ and $b$ integers. What is the sum of $a$ and $b$?
We use the distance formula to find the length of each side.
The distance from $(0, 1)$ to $(2, 5)$ is $\sqrt{(2 - 0)^2 + (5 - 1)^2} = 2\sqrt{5}$.
The distance from $(2, 5)$ to $(5, 2)$ is $\sqrt{(5 - 2)^2 + (2 - 5)^2} = 3\sqrt{2}$.
The distance from $(5, 2)$ to $(7, 0)$ is $\sqrt{(7 - 5)^2 + (0 - 2)^2} = 2\sqrt{2}$.
The distance from $(7, 0)$ to $(0, 1)$ is $\sqrt{(0 - 7)^2 + (1 - 0)^2} = 5\sqrt{2}$.
Adding all of these side lengths, we find that the perimeter is $10\sqrt{2} + 2\sqrt{5}$. Thus, our final answer is $10 + 2 = \boxed{12}$.
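As a quick numeric sanity check (not part of the original solution), the four side lengths can be recomputed from the coordinates and compared against the closed form $10\sqrt{2} + 2\sqrt{5}$:

```python
import math

# Vertices of the figure, in the order they are connected.
points = [(0, 1), (2, 5), (5, 2), (7, 0)]

def dist(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Sum the four side lengths A-B, B-C, C-D, D-A.
perimeter = sum(dist(points[i], points[(i + 1) % 4]) for i in range(4))
closed_form = 10 * math.sqrt(2) + 2 * math.sqrt(5)
assert abs(perimeter - closed_form) < 1e-12
```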
Measuring quality of life in opioid-induced constipation: mapping EQ-5D-3L and PAC-QOL
Anthony James Hatswell1, 2 and
Stefan Vegter3
Health Economics Review 2016, 6:14
© Hatswell and Vegter. 2016
Accepted: 1 April 2016
In health economic evaluations, quality of life should be measured with preference-based utilities, such as the EuroQol 5 Dimension 3-level (EQ-5D-3L). Non-preference-based instruments (often disease-specific questionnaires) are commonly mapped to utilities. We investigated whether the relationship observed between the Patient Assessment of Constipation Quality of Life (PAC-QOL) and the EQ-5D-3L in patients with chronic idiopathic constipation (CIC) also applies in opioid-induced constipation (OIC).
EQ-5D-3L patient-level data from a clinical study of lubiprostone in OIC (n = 439) were scored using the UK tariff. A published mapping between the PAC-QOL and the EQ-5D-3L was tested using these data. New mapping formulas were analysed, including PAC-QOL total and subscale scores. The root mean square error (RMSE), the adjusted R2 and predicted/observed plots were used to test the fit.
The utility measured with the EQ-5D-3L was 0.450 ± 0.329, with a distinctly bimodal distribution. This significantly improved if patients responded to treatment (defined as an increase of three spontaneous bowel movements per week, with no rescue medication taken). The published mapping in CIC performed poorly in this OIC population, and the PAC-QOL could not be reliably mapped onto the EQ-5D-3L even when re-estimating coefficients. This was shown in our two mappings (using the PAC-QOL total score, and subscale scores) by a high RMSE (0.317 and 0.314) and a low R2 (0.068 and 0.080), with high utilities underestimated and low utilities overestimated.
Patients with OIC have a low quality of life, which does improve with the resolution of symptoms. However, the PAC-QOL cannot be used to estimate the EQ-5D-3L utility – potentially because the PAC-QOL does not capture all the relevant aspects of the patients' quality of life (for example, the condition underlying the opioid use).
Root Mean Square Error
Health Economic Evaluation
Lubiprostone
Chronic Idiopathic Constipation
Although there are many potential causes of constipation, one of the most frequently reported is opioid usage: opioid-induced constipation (OIC). The condition is caused by opioids inhibiting the secretion of intestinal fluids and suppressing the peristaltic propulsion of the gastrointestinal tract, thereby slowing gastrointestinal motility [1]. This opioid effect causes a range of symptoms, from difficulty evacuating faeces to straining, hard stools, abdominal discomfort and bloating [2, 3].
Patient-reported quality of life in this disease area is low – a poster by Iyer et al. showed OIC patients with chronic non-cancer pain to have a baseline quality of life of approximately 0.45 using the EuroQol 5 Dimension 3-level (EQ-5D-3 L) [4]. Similar values were reported for a Dutch study, which estimated a median EQ-5D-3 L of 0.41 for constipated patients [5], although the cause of the constipation is not stated. Dunlop et al. reported that, using Short-Form 36 (SF-36) scores mapped to the EQ-5D-3 L in an OIC population with chronic non-cancer pain, patients had a utility of approximately 0.48 at baseline [6]. The existing literature suggests that the low utilities observed may arise from the comorbid conditions that necessitate long-term opioid therapy. This may also explain why a time trade-off exercise conducted with members of the UK general population (n = 308) showed a higher utility for OIC itself, rating the condition as having a utility of 0.74 [7].
In cost–utility analyses, when no preference-based instruments (such as the EQ-5D-3 L or Health Utilities Index) are available, mapping is a popular technique for predicting health state utilities. In mapping, the relationship between a non-preference-based instrument (often a disease-specific questionnaire containing aspects on quality of life) and a generic measure is estimated [8]. The Patient Assessment of Constipation Quality of Life (PAC-QOL) is a commonly used disease-specific questionnaire, which contains questions on worries and concerns, physical discomfort, psychosocial discomfort, and satisfaction [9]. Searching the Oxford Mapping Database, a study by Parker et al. reported a mapping between the PAC-QOL and the EQ-5D-3 L utility in chronic idiopathic constipation (CIC), but no report on mapping in OIC was found [10]. As CIC patients experience the same symptoms (with the same endpoints and scales used in clinical trials), our expectation was that a similar relationship would exist between the PAC-QOL and EQ-5D-3 L in OIC and CIC.
We investigated techniques for mapping PAC-QOL to the EQ-5D-3 L utilities for patients with OIC, including an exploration of the previously published mapping by Parker et al. [10].
Description of study 1033
The analyses presented in this article are based on data from Study 1033, which was a 12-week, double-blind, randomised study of lubiprostone (n = 219) compared to placebo (n = 220) [11]. Patients were enrolled with a confirmed diagnosis of non-methadone OIC for chronic non-cancer-related pain, who were having fewer than three spontaneous bowel movements (SBMs) per week and experiencing symptoms of constipation. Patients had a mean age of 52, weight of 86 kg and 1.4 SBMs per week. Both PAC-QOL and EQ-5D-3 L data were collected. All patients had at least one medical diagnosis that led to their opioid use. In general, these diagnoses were musculoskeletal in origin, as shown in Table 1.
Summary of medical diagnoses in study 1033
Diagnosis group
Total (n = 439)
57 (13 %)
47 (10.7 %)
228 (51.9 %)
Intervertebral disc degeneration
27 (6.2 %)
Spinal column stenosis
439 (100 %)
Note: Numbers sum to more than 100 % as patients may have more than one condition
EQ-5D-3L – a generic measure of health status
The EQ-5D-3L is widely used in health care and in clinical research. As a preference-based instrument, the EQ-5D-3L is a recommended measure for use in health economic evaluations. It takes the form of a descriptive profile evaluation on five dimensions (mobility, self-care, usual activities, pain/discomfort and anxiety/depression). Each dimension is scored by the patient, with 1 indicating no problems, 2 indicating some problems, and 3 indicating extreme problems. The total profile is valued with validated tariffs, resulting in preference-based utility scores that can be used in economic evaluations – the UK tariff was used in this study [12]. The final component of the EQ-5D-3L is the Visual Analogue Scale (VAS), a fixed height bar on which participants are asked to mark their self-rated health on a scale from 0 ('worst imaginable health state') to 100 ('best imaginable health state'). The VAS, whilst collected in Study 1033, is not widely used in the UK and was therefore not used in our analysis [13].
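The profile-to-utility valuation step can be illustrated with a short sketch. The coefficients below are those commonly quoted for the Dolan (1997) UK time trade-off tariff; they are reproduced here only as an illustration and should be verified against the original publication before any real use.

```python
# Illustrative EQ-5D-3L UK (Dolan 1997) tariff sketch.
# Coefficients are as commonly quoted; verify against the original source.
DECREMENTS = {
    "mobility":  {2: 0.069, 3: 0.314},
    "self_care": {2: 0.104, 3: 0.214},
    "usual":     {2: 0.036, 3: 0.094},
    "pain":      {2: 0.123, 3: 0.386},
    "anxiety":   {2: 0.071, 3: 0.236},
}
CONSTANT = 0.081  # applied once if any dimension is worse than level 1
N3 = 0.269        # applied once if any dimension is at level 3

def uk_utility(profile):
    """profile: dict mapping each dimension to its level (1, 2 or 3)."""
    u = 1.0
    if any(level > 1 for level in profile.values()):
        u -= CONSTANT
    if any(level == 3 for level in profile.values()):
        u -= N3
    for dim, level in profile.items():
        if level > 1:
            u -= DECREMENTS[dim][level]
    return u

full_health = dict.fromkeys(DECREMENTS, 1)
worst_state = dict.fromkeys(DECREMENTS, 3)
print(uk_utility(full_health))            # 1.0
print(round(uk_utility(worst_state), 3))  # -0.594
```

Under this tariff, states worse than death (utility below 0) are possible, which is consistent with the negative minimum utility (-0.239) observed in Study 1033.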
The PAC-QOL – a disease-specific instrument
In contrast with generic health instruments such as the EQ-5D-3 L, the PAC-QOL is a disease-specific instrument for patients with constipation developed by Marquis et al. [9]. The PAC-QOL questionnaire provides a standardised and validated assessment of the burden of constipation on patients' everyday functioning and well-being.
The questionnaire includes 27 questions, which cover 12 symptoms (identified from patient responses). These 12 symptoms are then divided into four subscales (worries and concerns, physical discomfort, psychosocial discomfort, and satisfaction). Participants rate the applicability of each question over the previous 2 weeks by selecting one of 5 boxes (broadly ranging from 'Not at all' to 'All of the time'). The scores for each question are recoded as 0–4, with lower scores indicating fewer problems. Symptom scores are then calculated as averages of the relevant questions, subscale scores as averages of the relevant symptoms, and the overall score as the average across the 12 symptoms.
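The scoring scheme just described can be sketched in code. The actual assignment of the 27 questions to the 12 symptoms and four subscales is part of the instrument and is not given in this paper, so the grouping below is purely hypothetical and only illustrates the averaging logic:

```python
from statistics import mean

# Hypothetical grouping of questions into symptoms and subscales;
# the real PAC-QOL assignments differ from this illustration.
SYMPTOM_QUESTIONS = {f"symptom_{i}": [2 * i, 2 * i + 1] for i in range(12)}
SUBSCALE_SYMPTOMS = {
    "worries":      [f"symptom_{i}" for i in range(0, 3)],
    "physical":     [f"symptom_{i}" for i in range(3, 6)],
    "psychosocial": [f"symptom_{i}" for i in range(6, 9)],
    "satisfaction": [f"symptom_{i}" for i in range(9, 12)],
}

def pac_qol_scores(answers):
    """answers: list of per-question scores, each already recoded to 0-4."""
    symptoms = {s: mean(answers[q] for q in qs)
                for s, qs in SYMPTOM_QUESTIONS.items()}
    subscales = {name: mean(symptoms[s] for s in syms)
                 for name, syms in SUBSCALE_SYMPTOMS.items()}
    overall = mean(symptoms.values())
    return symptoms, subscales, overall
```

For example, a respondent answering 2 ('moderately') to every question obtains subscale and overall scores of exactly 2 under this scheme.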
Mapping between PAC-QOL and EQ-5D-3 L – the approach by Parker et al
Parker et al. estimated the relationship between the generic EQ-5D-3L and the disease-specific PAC-QOL score in a severe CIC population [10]. The EQ-5D-3L was not directly measured in the study; instead the values were mapped from a different instrument (the SF-36) using the algorithm from Rowen et al. [14]. Three mapping formulas were presented: one formula using only the summary PAC-QOL score as an independent variable, and two formulas using the PAC-QOL score and the PAC-SYM score (a different questionnaire, the Patient Assessment of Constipation Symptoms) as independent variables. The statistic used to test the fit of the mapping formulas was the root mean square error (RMSE). We tested only the mapping between the EQ-5D-3L and the PAC-QOL, as the PAC-SYM was not collected in Study 1033.
Novel mapping formula
In addition to testing the validity of the mapping published by Parker et al., we attempted to re-estimate the parameters observed in the mapping using patient level data from Study 1033. Two mapping formulas were analysed: the relationship between the EQ-5D-3 L and the PAC-QOL total score (as in Parker et al. [10]); and the relationship between the EQ-5D-3 L and PAC-QOL subscale scores.
The statistics used to test the fit of the mapping formulas were the RMSE and the adjusted R2 as well as predicted versus observed plots. Mean utility and PAC-QOL scores are presented as mean ± standard deviation. All analyses were performed using the statistical package R.
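As a minimal illustration (the paper's analyses were carried out in R; Python is used here purely as a sketch), the two fit statistics can be computed as follows:

```python
def rmse(observed, predicted):
    """Root mean square error between observed and predicted utilities."""
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

def adjusted_r2(observed, predicted, n_predictors):
    """R-squared adjusted for the number of predictors in the model."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)
```

A perfect mapping would give an RMSE of 0 and an adjusted R2 of 1; values far from these indicate a poor fit.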
Utility scores from study 1033
A total of 439 patients with OIC were included in Study 1033, with all except one patient completing the EQ-5D-3 L (n = 438, 99.8 %). Figure 1 shows the distribution of EQ-5D-3 L scores, which were distinctly bimodal. The mean utility was 0.450 ± 0.329, with a median of 0.620 and a range from -0.239 to 1. Analysis of the dimension scores showed that severe problems were primarily encountered by patients in the pain/discomfort dimension of the EQ-5D-3 L (Fig. 2).
Histogram of measured EQ-5D-3 L utilities in Study 1033
Percentage of patients reporting different levels in the EQ-5D-3 L dimensions
PAC-QOL scores were measured in all patients in Study 1033, and showed an approximately normal distribution with a mean overall score of 2.462 ± 0.651, a median overall score of 2.495 and a range from 0.739 to 3.938. The most severely impaired PAC-QOL subscore was 'satisfaction', while the fewest problems were found on the psychosocial subscore.
The primary efficacy endpoint in Study 1033 was the overall SBM response rate, defined as having three or more SBMs for at least 9 of 12 weeks, and at least one additional SBM over mean baseline SBM during every treatment week. At the end of treatment, patients with three or more SBMs and not using rescue medication in the previous week showed a higher utility than patients with fewer than three SBMs (0.46 ± 0.40 versus 0.34 ± 0.36, p = 0.012, Table 2). Similarly, overall PAC-QOL scores were lower, indicating better health, in patients with three or more SBMs compared to those with fewer than three (1.21 ± 0.81 versus 2.09 ± 0.78, p < 0.001, Table 2).
EQ-5D-3L utility and PAC-QOL by spontaneous bowel movements per week
SBMs at EOT visit (a)
EQ-5D-3L utility (mean ± sd)
PAC-QOL
0.463 ± 0.356
Note: (a) Patients using rescue medication in the previous week were classified as having fewer than three SBMs
Key: EOT, end of treatment; SBMs, spontaneous bowel movements; sd, standard deviation
Testing published mapping between EQ-5D-3 L and PAC-QOL, and re-estimating parameters
The mapping formula given by Parker et al. [10] is:
Formula 1:
$$ \text{EQ-5D-3L} = 0.977 - 0.098 \times \text{PAC-QOL} $$
When applying this formula, the predicted EQ-5D-3 L compared poorly with the measured EQ-5D-3 L in Study 1033, shown in Fig. 3. In particular, low utilities were severely overestimated by the formula, and the mean predicted utility was 0.74, much higher than the measured utility of 0.45. The RMSE was 0.428, while the RMSE reported by Parker et al. was 0.146.
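The size of the mismatch can be reproduced from the reported summary statistics alone: evaluating the Parker et al. formula at the mean PAC-QOL score of Study 1033 gives the stated mean prediction of roughly 0.74.

```python
mean_pac_qol = 2.462  # mean overall PAC-QOL score in Study 1033
predicted_utility = 0.977 - 0.098 * mean_pac_qol
print(round(predicted_utility, 2))  # 0.74, versus a measured mean utility of 0.450
```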
Observed EQ-5D-3 L utility in Study 1033 compared to EQ-5D-3 L utility predicted by mapping from Parker et al. [10]
We attempted to re-estimate the equation using the patient-level data from Study 1033, fitting a generalised linear model in R that regresses the EQ-5D-3L utility on the PAC-QOL overall score. A second re-estimation was then attempted using the same regression method, but with the PAC-QOL subscale scores as independent variables, to which interaction terms between all subscales were subsequently added. The results of these analyses showed that there was a negative but highly variable correlation between the PAC-QOL and the EQ-5D-3L, with all models showing a poor fit to the data. Estimating the EQ-5D-3L using the PAC-QOL score as the only independent variable resulted in the following formula:
$$ \text{EQ-5D-3L} = 0.780 - 0.134 \times \text{PAC-QOL} $$
The RMSE was 0.317 and the adjusted R2 was 0.068, indicating a weak association between the PAC-QOL and the EQ-5D-3 L. The mapping showed a poor fit to the data; the high utilities were underestimated, and the low utilities were overestimated (Fig. 4). Attempting a mapping using the PAC-QOL subscales as independent variables yielded:
Predicted and observed EQ-5D-3 L utility from Study 1033
$$ \text{EQ-5D-3L} = 0.716 + 0.023 \times \text{Satisfaction} - 0.091 \times \text{Physical} + 0.013 \times \text{Psychosocial} - 0.062 \times \text{Worries} $$
The RMSE was 0.314 and the adjusted R2 was 0.080, which, although an improved fit compared to the mapping using the PAC-QOL total score, still indicates a weak association between the PAC-QOL and the EQ-5D-3 L. The correlation plot was similar in appearance to Fig. 4, where the high utilities were underestimated and the low utilities were overestimated. Furthermore, mapping did not significantly improve when interaction terms were added between the subscale scores or when alternative mappings were estimated in ranges of the PAC-QOL score.
The first notable finding of the analysis of EQ-5D-3 L at baseline was the level of patient utility. This low utility is consistent with the existing literature, where patients report similar utilities [4–6].
The mean utility of 0.450 is exceptionally low; it is lower than in patients with comparable symptoms in CIC [10] and even lower than the majority of EQ-5D-3 L estimates in patients with advanced cancer, a condition that we would have expected to be much more severe [15]. The main source of the low utility was the pain dimension, shown by the high scores on the pain/discomfort dimension of the EQ-5D-3 L scored by patients in Study 1033 (Fig. 2). This could be explained by the comorbid conditions that resulted in long-term opioid therapy – in Study 1033, the majority of patients were suffering from several different diagnoses which all cause rheumatic pain or pain of the musculoskeletal system (Table 1).
Although there was an association between high PAC-QOL scores and lower EQ-5D-3L utilities, it was not possible to reliably map the PAC-QOL onto the EQ-5D-3L; the mapping formula provided by Parker et al. proved unreliable in our study population [10]. A possible explanation for the failure of the mapping exercise is that although the outcomes are the same (utility and PAC-QOL scores), the trials were conducted in different populations; the study by Parker et al. was in CIC, while Study 1033 was in OIC with chronic non-cancer pain. Although patients in both studies had a similar level of constipation severity (as measured by the number of spontaneous bowel movements), all patients in the group receiving opioids had comorbid conditions leading to the opioid use. This may have led to the poor scores on the EQ-5D-3L pain dimension. A second possible explanation may be that Parker et al. did not directly measure the EQ-5D-3L in their study, but instead measured the SF-36, which was in turn mapped to the EQ-5D-3L. This two-step approach may have introduced a different relationship between the PAC-QOL and the EQ-5D-3L, although the differences between patient populations would have remained.
The mapping formulas directly estimated in this study performed slightly better than the formula provided by Parker et al. However, these formulas still performed poorly compared to other published mapping studies [8, 16], as shown by the high RMSE and low R2 scores. Therefore, we would not recommend their use; the mapping algorithms we used consistently underestimated high utilities and overestimated low utilities, despite the multiple methods attempted to obtain a better fit. As such, the most likely explanation is that, in this population, other factors, which are not captured by the PAC-QOL, are the determinants of quality of life (as measured by the EQ-5D-3 L utility). Similar conclusions have been drawn for mapping studies with comparable instruments such as the Over-Active Bladder Questionnaire [8]. Finally, while the mapping formula of Parker et al. appeared to perform better in their population of CIC (as demonstrated by a lower RMSE statistic), other important fit criteria such as the R2 and predicted versus observed plots were not presented. Therefore, under/over-prediction cannot be assessed.
Mapping is a commonly used technique in the field of health economics to derive generic utilities when only disease-specific measures are available. In this analysis, we applied an existing mapping between two instruments to a related disease area. As a result, we showed that the original mapping was a poor fit, and re-estimation proved unsuccessful. In the absence of directly measured patient utilities, caution should be exercised with regard to the generalisability of mapping instruments in this area. While the mapping by Parker et al. in CIC appears to demonstrate a good fit between the PAC-QOL and the EQ-5D-3 L in CIC [10], we found no such relationship in OIC.
However, our analysis shows that OIC patients with chronic non-cancer pain exhibit a very low level of utility, as consistently seen across the literature. It is likely that the observed values relate not only to the condition under investigation (OIC) but also to the underlying health issues for which opioids are used. Regardless of origin, the low quality of life of patients should be acknowledged.
Further research on the validity of mapping algorithms is recommended, both in different datasets within the same disease area (as validation) and in related disease areas where the same instruments are used (as has been done with the EORTC QLQ-C30 and EQ-5D-3L [17]). Such work would ensure that published algorithms are reproducible and give reliable results for use in health economic evaluations.
The authors would like to thank Peter Lichtlen for clinical input to the study.
Access to data and funding was provided by Sucampo Pharma Europe. SV disclosed that he became an employee of GlaxoSmithKline after completion of this study. AJH is an employee of BresMed, which acted as a consultant to Sucampo Pharma Europe for this study.
The study was led by AJH. Statistical analysis was performed by SV and AJH. Interpretation was provided by AJH and SV. The manuscript was written by AJH and SV. Both authors read and approved the final manuscript.
BresMed, 84 Queen Street, S1 2DW Sheffield, UK
University College London, London, UK
University of Groningen, Groningen, the Netherlands
Wood JD, Galligan JJ. Function of opioids in the enteric nervous system. Neurogastroenterol Motil. 2004;16 Suppl 2:17–28. doi:10.1111/j.1743-3150.2004.00554.x.
Panchal SJ, Muller-Schwefe P, Wurzelmann JI. Opioid-induced bowel dysfunction: prevalence, pathophysiology and burden. Int J Clin Pract. 2007;61(7):1181–7. doi:10.1111/j.1742-1241.2007.01415.x.
Sharma A, Jamal MM. Opioid induced bowel disease: a twenty-first century physicians' dilemma. Considering pathophysiology and treatment strategies. Curr Gastroenterol Rep. 2013;15(7):334. doi:10.1007/s11894-013-0334-4.
Iyer S, Randazzo B, Tzanis E, Schulman S, Zhang H, Wang W, et al. PG118 Effect of subcutaneous (sc) methylnaltrexone on generic health related quality of life using the EQ-5D index scores in patients with chronic non-malignant pain and opioid-induced constipation. Value Health. 2009;12:A348–9.
Earnshaw SR, Klok RM, Iyer S, McDade C. Methylnaltrexone bromide for the treatment of opioid-induced constipation in patients with advanced illness – a cost-effectiveness analysis. Aliment Pharmacol Ther. 2010;31(8):911–21. doi:10.1111/j.1365-2036.2010.04244.x.
Dunlop W, Uhl R, Khan I, Taylor A, Barton G. Quality of life benefits and cost impact of prolonged release oxycodone/naloxone versus prolonged release oxycodone in patients with moderate-to-severe non-malignant pain and opioid-induced constipation: a UK cost-utility analysis. J Med Econ. 2012;15(3):564–75. doi:10.3111/13696998.2012.665279.
Guest JF, Clegg JP, Helter MT. Cost-effectiveness of macrogol 4000 compared to lactulose in the treatment of chronic functional constipation in the UK. Curr Med Res Opin. 2008;24(7):1841–52. doi:10.1185/03007990802102349.
Brazier JE, Yang Y, Tsuchiya A, Rowen DL. A review of studies mapping (or cross walking) non-preference based measures of health to generic preference-based measures. Eur J Health Econ. 2010;11(2):215–25. doi:10.1007/s10198-009-0168-z.
Marquis P, De La Loge C, Dubois D, McDermott A, Chassany O. Development and validation of the Patient Assessment of Constipation Quality of Life questionnaire. Scand J Gastroenterol. 2005;40(5):540–51. doi:10.1080/00365520510012208.
Parker M, Haycox A, Graves J. Estimating the relationship between preference-based generic utility instruments and disease-specific quality-of-life measures in severe chronic constipation: challenges in practice. Pharmacoeconomics. 2011;29(8):719–30. doi:10.2165/11588360-000000000-00000.
Jamal MM, Adams AB, Jansen JP, Webster LR. A randomized, placebo-controlled trial of lubiprostone for opioid-induced constipation in chronic noncancer pain. Am J Gastroenterol. 2015;110(5):725–32. doi:10.1038/ajg.2015.106.
Brooks R. EuroQol: the current state of play. Health Policy. 1996;37(1):53–72.
Szende A, Oppe M, Devlin N. EQ-5D value sets: inventory, comparative review and user guide. New York: Springer; 2006.
Rowen D, Brazier J, Roberts J. Mapping SF-36 onto the EQ-5D index: how reliable is the relationship? Health Qual Life Outcomes. 2009;7:27. doi:10.1186/1477-7525-7-27.
Pickard AS, Wilke CT, Lin HW, Lloyd A. Health utilities using the EQ-5D in studies of cancer. Pharmacoeconomics. 2007;25(5):365–84.
Dakin H. Review of studies mapping from quality of life or clinical measures to EQ-5D: an online database. Health Qual Life Outcomes. 2013;11:151. doi:10.1186/1477-7525-11-151.
Doble B, Lorgelly P. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms. Qual Life Res. 2015. doi:10.1007/s11136-015-1116-2.
Aline Gouget
Aline Gouget Morin (born 1977)[1] is a French mathematician and cryptographer whose works include contributions to the design of the SOSEMANUK stream cipher[2] and Shabal hash algorithm,[3] and methods for anonymized digital currency.[4] She is a researcher for Gemalto, an international digital security company.[5]
Aline Gouget Morin
Born: 1977
Nationality: French
Alma mater: University of Caen Normandy
Occupation(s): Mathematician and cryptographer
Known for: Irène Joliot-Curie Prize, 2017
Education
Gouget completed a PhD in 2004 at the University of Caen Normandy. Her dissertation, Etude de propriétés cryptographiques des fonctions booléennes et algorithme de confusion pour le chiffrement symétrique, was advised by Claude Carlet.[6]
Recognition
In 2017, Gouget was the winner of the Irène Joliot-Curie Prize in the category for women in business and technology.[7]
References
1. Birth year from IdRef authority control record, accessed 2020-04-12
2. "SOSEMANUK (Portfolio Profile 1)", The eSTREAM Project - eSTREAM Phase 3, ECRYPT-EU research project, archived from the original on 2019-10-16, retrieved 2020-04-12
3. "Status Report on the Second Round of the SHA-3 Cryptographic Hash Algorithm Competition" (PDF), NIST Interagency Report 7764, NIST, February 2011, retrieved 2020-04-12
4. Baldimtsi, Foteini; Chase, Melissa; Fuchsbauer, Georg; Kohlweiss, Markulf (2015), "Anonymous transferable e-cash", in Katz, Jonathan (ed.), 18th IACR International Conference on Practice and Theory in Public-Key Cryptography (PKC 2015), Gaithersburg, MD, USA, March 30 – April 1, 2015, Proceedings, Lecture Notes in Computer Science, vol. 9020, Springer, pp. 101–124, doi:10.1007/978-3-662-46447-2_5, In 2008 Canard and Gouget gave the first formal treatment of anonymity properties for transferable e-cash
5. "A cryptographic inspiration", /review, Gemalto, 8 March 2018, retrieved 2020-04-12
6. Ph.d. thesis abstract, archived from the original on 2020-10-22, retrieved 2020-04-12
7. Lauréates 2017 du prix Irène Joliot-Curie : Nathalie Palanque-Delabrouille, Hélène Morlon, et Aline Gouget, French Academy of Sciences, retrieved 2020-04-12
External links
• Home page Archived 2020-12-01 at the Wayback Machine
\begin{document}
\title[Grounding Operators: Transitivity and Trees, Logicality and Balance]{Grounding Operators: Transitivity and Trees, Logicality and Balance}
\author*{\fnm{Francesco A.} \sur{Genco}}\email{[email protected]}
\abstract{We formally investigate immediate and mediate grounding operators from an inferential perspective. We discuss the differences in behaviour displayed by several grounding operators and consider a general distinction between grounding and logical operators. Without fixing a particular notion of grounding or grounding relation, we present inferential rules that define, once a base grounding calculus has been fixed, three grounding operators: an operator for immediate grounding, one for mediate grounding---corresponding to the transitive closure of the immediate grounding one---and a grounding tree operator, which enables us to internalise chains of immediate grounding claims without losing any information about them. We then present an in-depth proof-theoretical study of the introduced rules by focusing, in particular, on the question whether grounding operators can be considered as logical operators and whether balanced rules for grounding operators can be defined.}
\keywords{grounding, transitivity, normalisation, logicality, hyperintensionality.}
\pacs[MSC Classification]{03A05, 03F05}
\maketitle
\section{Introduction} \label{sec:introdution}
The notion of grounding is usually conceived as an objective and explanatory relation that connects two relata---the {\it ground} and the {\it consequence}---if the first one determines or explains the second one. In the contemporary philosophical literature, much effort has been devoted to analysing the formal aspects of grounding by logical systems, see for instance \cite{sch11, fin12, fin12b, cs12, cor14, pog16, pog18, korb18I, korb18II}, and these analyses often rely on characterisations of grounding by inferential calculi, see for instance \cite{sch11, fin12, cs12, cor14, pog16, pog18}. In most calculi, grounding is formalised by an operator acting on formulae. While much work has been devoted to the analysis of specific notions of grounding and the study of specific grounding operators, no systematic study exists of the general formal features that grounding operators share. In this work, we undertake a first investigation of the formal behaviour of different grounding operators from an inferential perspective, focusing in particular on the nature of grounding operators in general and on the relations entertained by immediate grounding and different formalisations of mediate grounding. Without fixing a particular notion of grounding or grounding relation, we study the proof-theoretical features of a generic immediate grounding operator and proof-theoretically investigate two different ways to generalise it to a mediate grounding operator.
In order to do so, we introduce three sets of inferential rules that, assuming that a grounding calculus has been fixed, define the behaviour of three grounding operators: an operator for immediate grounding, one for mediate grounding---corresponding to the transitive closure of the immediate grounding one---and a grounding tree operator, which enables us to internalise chains of immediate grounding claims without losing any information about them and their relations. Intuitively, immediate grounding connects a ground and a consequence when the ground is directly linked in an explanatory way to the consequence. The immediateness of this kind of grounding connection can be differently spelled out depending on the particular grounding notion considered. It might, for instance, depend on the fact that the ground is one point simpler than the consequence with respect to the adopted complexity measure, or on the fact that the ground is more fundamental than the consequence by exactly one level according to a fixed hierarchy. According to most logical grounding notions, for instance, $A$ and $B$, taken together, are supposed to constitute an immediate ground of $A\wedge B$. Different notions of immediate grounding are discussed, for instance, in \cite{sch11, fin12, cs12, pog16, pog18}. Mediate grounding, on the other hand, relates a ground and a consequence when they are linked by a chain of several immediate grounding steps. We have a simple example of mediate logical grounding, according to certain grounding notions, if we say that $A$, $C$ and $D$ constitute a ground of $(A\vee B)\wedge (C\wedge D)$. Indeed, if $A$ is an immediate ground of $A\vee B$, and $C$ and $D$ constitute an immediate ground of $C\wedge D$, then we can conclude that $A$, $C$ and $D$ constitute a mediate ground of $(A\vee B)\wedge (C\wedge D)$.
Intuitively, we have the grounding connections displayed in the following tree-shaped diagram: \[\deduce{(A\vee B)\wedge (C\wedge D)}{\deduce{\mid}{\deduce{A\vee B}{\deduce{\mid}{A}}}&\deduce{\mid }{\deduce{C\wedge D}{\deduce{\mid}{C} & \deduce{\mid}{D}}}}\]
While immediate grounding accounts for the direct links between a formula and the formulae immediately above it, the particular mediate grounding statement discussed above accounts for the explanatory relation between the root of the tree and its leaves. Different mediate grounding notions are discussed, for instance, in \cite{sch11, fin12, cor14, korb18I, pog20b}. Grounding trees, finally, are supposed to encode in a single sentential object all information expressed by a tree-shaped diagram such as the one shown above, see also \cite[§ 220]{bol14} for a similar diagrammatic representation of grounding trees. Instead of enabling us to express the mediate connection between a statement and any collection of statements explanatorily linked to it, as mediate grounding does, grounding trees enable us to express entire chains of explanatory steps leading from a collection of sentences to the sentence that we wish to explain.
Technically, immediate grounding will be formalised by the $\blacktriangleright$ operator, which can only be introduced immediately after an immediate grounding rule has been applied. This grounding rule application will guarantee that the immediate grounding relation holds between the considered ground---corresponding to the premisses of the grounding rule---and the considered consequence---corresponding to the conclusion of the grounding rule. Mediate grounding will be formalised by the $\gg$ operator, which internalises in the object language the transitive closure of the immediate grounding operator $\blacktriangleright$. As we will show when characterising the mediate grounding operator, $\gg$ enables us to select all, and only, the formulae that lie on a bar of a grounding derivation\footnote{As we will explain later, if we consider a grounding derivation as a progressive decomposition of its conclusion, then a bar of a grounding derivation can be seen as a complete description of one stage of this decomposition.} and to use them in a mediate grounding statement for the conclusion of the grounding derivation. Finally, grounding trees will be formalised by nesting occurrences of the $\triangleright $ operator inside an occurrence of the $\blacktriangleright $ operator. Thus we will be able to construct formulae that exactly correspond to grounding derivations built by nesting several consecutive immediate grounding claims.
The rules that we will adopt for all grounding operators are fully modular and do not depend on a particular choice of background grounding calculus. As a consequence, the presented work does not rely on the particular features of the considered grounding relation and applies to several of the grounding relations that have been formally introduced in the literature. In particular, for any notion of grounding that can be formalised by grounding rules of the form $\vcenter{\infer{B}{A_1 & \ldots &A_n}}\,$---where $A_1 , \ldots , A_n$ are supposed to form a ground of $B$---we can define a grounding calculus and extend it by our rules in order to define the behaviour of a grounding operator that exactly characterises the considered notion of grounding. Examples of grounding notions that can be formalised in this way are {\it full} and {\it partial} grounding as defined in \cite{sch11, fin12, cs12, cor14}, and {\it complete} grounding as defined in \cite{pog16, pog18}.
After having introduced and characterised our rules for the three grounding operators, we present an in-depth proof-theoretical investigation focusing on the question whether these rules can be considered as well-behaved definitions of the respective operators, and on establishing what we can learn about the operators themselves by studying the presented inferential rules. In order to do so, we will, first, consider the question whether a grounding operator can be considered as a logical operator, and then---after having negatively answered this question---try to establish whether, nevertheless, the presented rules for grounding operators display a form of balance that enables us to conclude that they suitably define the operators. In order to do so, we will adopt methods coming from the structuralist proof-theoretical approach to the characterisation of the notion of logical constant---see for instance \cite{dos80, dos89}---which dates back to the work of Koslow \cite{kos05} and Popper, see \cite{sh05}. We will consider in particular two traditional criteria of logicality, and show that while one is satisfied by the rules for grounding operators, the other one is not, unless a weaker version of it is considered and certain assumptions about the underlying immediate grounding relation hold.
The first criterion that we will consider corresponds to the criterion for sequent calculus rules called {\it deducibility of identicals} \cite{hac79} and presented in \cite{pra71} for natural deduction rules under the name of {\it immediate expansion}, see \cite{np15} for a study of this criterion and of a similar one discussed in \cite{bel62}. We will show, in particular, that a strict version of this criterion---the one originally employed in order to characterise logical operators---is not satisfied by any of the considered rules for grounding operators. After discussing the conceptual meaning of this failure and some connections of interest with certain essential features of the grounding relation, we argue that while this failure implies that our grounding operators do not comply with a standard construal of logical operators, it does not imply that these operators cannot undergo a meaningful analysis aimed at understanding whether their rules suitably define them as non-logical sentential operators. Such an analysis can indeed be conducted by finding a suitable way to loosen the criteria usually employed in the literature on inferential semantics.
The second considered criterion, which we call {\it detour eliminability}\footnote{We use this name in order to keep the terminology specific and not to employ words which are already overloaded with meanings.}, requires that by deductively using a sentence constructed by applying the operator one does not obtain more information than that required to conclude that such a sentence is true. This precisely corresponds to the possibility of eliminating from any derivation any {\it detour} directly concerning the considered operator---that is, an inferential step introducing the operator immediately followed by one eliminating it. Detour eliminability is not only key to several criteria for the logicality of operators, it is also an essential requirement of normalisation results. Moreover, it can be identified with the notion of proof-theoretic {\it harmony} presented in \cite{pra77} and is a central component of---or, at least, an effective means to test---most of the other notions of {\it harmony}, see for instance \cite{bel62, dum91, rea00, ten07} and \cite{pog10} for a survey of the literature on the issue. If the rules for an operator enjoy both detour eliminability and deducibility of identicals, moreover, then they can be considered as an exact definition of the meaning of the operator in the sense that there is a perfect balance between the rules for deductively using the operator and the rules that determine when sentences constructed by applying the operator are true. As we will show, the rules for the immediate grounding and grounding tree operators $\blacktriangleright$ and $\triangleright$ admit the definition of Prawitz-style detour reductions which can be used to generalise the normalisation result in \cite{gen21}.
On the other hand, while detour reductions can be defined also for the mediate grounding operator $\gg$---which implies that a local form of detour eliminability holds for $\gg$ as well---problems arise if we want to argue that a normalisation procedure extended by detour reductions for $\gg$ terminates. We will argue that these problems crucially obstruct the standard arguments that would be required to show that proper---or global---normalisation results for calculi containing the rules for $\gg $ hold.
Finally, motivated by the fact that $\blacktriangleright $ and $\triangleright $ are clearly well-behaved with respect to normalisation and, thus, detour eliminability, we propose and analyse a weaker version of the deducibility of identicals criterion which suits the hyperintensional nature of grounding better than the traditional one does.\footnote{An operator is hyperintensional if the truth of sentences built by using it is not necessarily preserved under the substitution of some of its arguments by logically equivalent arguments.} In doing this, we aim at showing that, even though grounding operators would not pass a logicality test, balanced sets of rules that suitably define their meaning can, in some cases, be found. The weaker version of deducibility of identicals constitutes, moreover, a criterion that does not trivialise the analysis of non-logical operators, and hence enables us to study the differences between different grounding operators. We will show, indeed, that while the immediate grounding and grounding tree operators $\blacktriangleright$ and $\triangleright$ meet, under certain assumptions, the weaker criterion, the mediate grounding operator $\gg$ does not. We conclude by discussing the differences in behaviour between the first two operators and the third one from a conceptual perspective, stressing the parallel between the technical features of the operators and their intended interpretation.
\noindent The rest of the article is structured as follows. In Section \ref{sec:language}, we introduce the language that we will employ and discuss the meaning of the introduced grounding operators. In Section \ref{sec:operators}, we present introduction and elimination rules for the grounding operators. In particular, in Section \ref{sec:transitive}, we present those for the mediate grounding operator and, in Section \ref{sec:trees}, those for the grounding tree operator. In Section \ref{sec:balance} we investigate the proof-theoretical balance of the introduced rules for grounding operators: Section \ref{sec:intro-elim} will be devoted to detour eliminability and Section \ref{sec:elim-intro} to deducibility of identicals. In this section, we also discuss the conceptual reasons and implications of the relation between the grounding operators and the considered proof-theoretical criteria. Finally, in Section \ref{sec:conclusions}, we present some concluding remarks and a discussion of possible ways to further develop the presented analysis.
\section{The language}\label{sec:language}
We begin by presenting the logical language that we will adopt, which includes the usual logical connectives, the operator $\blacktriangleright$ for immediate grounding, the operator $\gg$ for mediate grounding, and the operator $\triangleright$ for constructing grounding trees.
\begin{definition}[Formulae of the language]\label{def:lang} {\small \begin{align*} \varphi \quad ::= \quad & \xi \; \mid \; \bot \; \mid \; \neg \varphi \; \mid \; \varphi \wedge \varphi \; \mid \; \varphi \vee \varphi \; \mid \; \varphi \rightarrow \varphi \;\mid \\ &
(\Psi \blacktriangleright \varphi) \;\mid \; (\Psi [ \Psi ] \blacktriangleright \varphi)\;\mid \\ &
(\Phi \gg \varphi) \;\mid \; (\Phi [\Phi] \gg \varphi) \;\mid \\ \psi \quad ::= \quad & \varphi \; \mid \; (\Psi )\triangleright \varphi \;\mid \; (\Psi [ \Psi ])\triangleright \varphi \\ \xi \quad ::= \quad & p \; \mid \; q \; \mid \; r \; \mid \ldots \end{align*}}where $\Phi$ is a list of the form $\varphi , \ldots , \varphi$, $\Psi$ is a list of the form $\psi , \ldots , \psi$, and $p, q, r, \ldots $ are all propositional variables of the language. \end{definition}
The grammar in Definition \ref{def:lang} enables us to construct any formula of the language of classical logic by using the non-terminal symbols $\varphi$ and $\xi$. Immediate grounding statements, on the other hand, can be constructed by employing $ (\Psi \blacktriangleright \varphi ) $ and $( \Psi
[\Psi] \blacktriangleright \varphi )$, and then by expanding $\Psi$ and $\varphi $ with formulae in the language of classical logic. For instance, if we indicate by $\rightsquigarrow$ the expansion of one or more non-terminal symbols, we can have the following sequence of instantiations: $\varphi \rightsquigarrow ( \psi, \psi [\psi] \blacktriangleright \varphi )\rightsquigarrow ( \varphi , \varphi [\varphi ] \blacktriangleright \varphi ) \rightsquigarrow ( p, q
[r] \blacktriangleright \varphi \vee \varphi) \rightsquigarrow ( p, q
[r] \blacktriangleright \varphi \vee (\varphi \wedge \varphi)) \rightsquigarrow ( p, q
[r] \blacktriangleright r \vee (p \wedge q))$. Nevertheless, by using the non-terminal symbol $\psi$, we can also nest an occurrence of $\triangleright$ to the left of an occurrence of $\blacktriangleright$ and thus construct a grounding tree. Once we have constructed a subformula of the form $ ( \Psi )\triangleright \varphi $ or $ ( \Psi
[\Psi] )\triangleright \varphi$, we can either keep extending the grounding tree by using the list $\Psi$ of non-terminal symbols $\psi$ to nest another operator $\triangleright$ to the left of the previous one, or we can stop nesting $\triangleright$ and expand a non-terminal symbol $\psi $ in the list $\Psi$ as $\varphi$. An example of the latter possibility is the following sequence of instantiations: $\varphi \rightsquigarrow ( \psi \blacktriangleright \varphi ) \rightsquigarrow ( (\psi)\triangleright \varphi \blacktriangleright \varphi ) \rightsquigarrow ( (\varphi )\triangleright p \blacktriangleright q ) \rightsquigarrow ( (r)\triangleright p \blacktriangleright q )$. Notice that the subformulae with $\triangleright$ as outermost operator are parenthesised in a different way with respect to those with $ \blacktriangleright $ or $\gg$ as outermost operator. We explain in the examples below why this is so. The construction of mediate grounding statements by $(\Phi \gg \varphi) $ and $ (\Phi [\Phi] \gg \varphi) $ is identical to that of immediate ones, except that the lists $\Phi $ of non-terminal symbols $\varphi$ do not enable us to nest $\triangleright$ to the left of $\gg$.
In the following, we will employ capital Latin letters as metavariables for formulae, capital Greek letters as metavariables for multisets of formulae, and we will omit parentheses when no ambiguity arises. Moreover, with a slight abuse of notation, when we will write formulae of the form $\Gamma [\Delta] \blacktriangleright A $, $\Gamma [\Delta] \gg A $ and $(\Gamma [\Delta] )\triangleright A $, we will admit the possibility that they denote formulae of the form $\Gamma \blacktriangleright A $, $\Gamma \gg A $ and $(\Gamma )\triangleright A $, respectively. We do so by simply admitting the possibility that $\Delta $ is the empty list.
Let us now discuss the intended meaning of the grounding formulae that can be constructed by employing our grammar. A formula of the form $\Gamma [\Delta] \blacktriangleright A $ where neither $\Gamma $ nor $ \Delta $ contain any formula of the form $(\Theta [\Sigma] )\triangleright B $ corresponds to an immediate grounding claim and expresses that $\Gamma $ constitutes an {\it immediate ground} of the {\it consequence} $A$ under the {\it condition} $\Delta$. A formula of the form $\Gamma [ \Delta] \gg A $ corresponds to a mediate grounding claim and expresses that $\Gamma $ constitutes a {\it mediate ground} of the {\it consequence} $A$ under the {\it condition} $\Delta$. For instance, supposing that\[p,q\blacktriangleright p\wedge q \qquad p\wedge q \blacktriangleright \neg \neg ( p\wedge q)\]are legitimate grounding claims in our system, then we have that\[p,q \gg \neg \neg ( p\wedge q)\]is a legitimate grounding claim too. A formula of the form $\Gamma [\Delta] \blacktriangleright A $ where $\Gamma $ or $ \Delta $ or both contain formulae of the form $(\Theta [\Sigma] )\triangleright B $ corresponds to a {\it grounding tree}. Such a grounding tree constitutes a complex account of the truth of $A$ by an orderly display of the grounding instances that we can construct from $A$ to reach simpler and simpler grounds. Intuitively, a grounding tree can be seen as a mediate grounding claim in which we also include the information concerning the way in which the grounded statement is related to its mediate grounds. Or, in other words, in a grounding tree we keep track of all immediate grounding steps that justify a mediate grounding claim. 
For instance, supposing that\[r,s\blacktriangleright r \wedge s \qquad r\wedge s\blacktriangleright \neg \neg (r \wedge s) \qquad \neg \neg (r\wedge s) [ \neg t] \blacktriangleright \neg \neg (r\wedge s) \vee t \]are legitimate immediate grounding claims in our system, then we have that\[ ((r,s)\triangleright r\wedge s )\triangleright \neg \neg (r\wedge s) [\neg t] \blacktriangleright \neg \neg(r\wedge s) \vee t \]is a legitimate grounding claim too. This grounding tree intuitively corresponds to the following tree in which each edge represents the connection due to an immediate grounding relation instance: \[\deduce{\neg\neg(r\wedge s)\vee t}{\deduce{\mid}{\deduce{\neg\neg(r\wedge s)}{\deduce{\mid}{\deduce{r\wedge s}{\deduce{\mid }{r} & \deduce{\mid }{s} }}}}&\deduce{\mid }{[\neg t]}}\]
Notice that the parentheses around $\Gamma$ in an expression of the form $(\Gamma ) \triangleright A$ are meant to stress that only $A$---and not the whole expression $(\Gamma ) \triangleright A$---is a part of the immediate ground in which the expression $(\Gamma ) \triangleright A$ occurs. For instance, by writing $(r,s)\triangleright r\wedge s$ inside the formula $ ((r,s)\triangleright r\wedge s )\triangleright \neg \neg (r\wedge s) [\neg t] \blacktriangleright \neg \neg(r\wedge s) \vee t$ above, we stress that only $r\wedge s$ is the immediate ground of $\neg \neg (r\wedge s)$. Similarly, by writing $ ((r,s)\triangleright r\wedge s )\triangleright \neg \neg (r\wedge s) $ we stress that, among the formulae occurring in this expression, only $ \neg \neg (r\wedge s) $ is a part of the immediate ground of $ \neg \neg(r\wedge s) \vee t$.
For another example of a grounding tree, suppose that\[r [\neg s]\blacktriangleright r\vee s \qquad r\vee s\blacktriangleright \neg \neg (r \vee s) \] are legitimate immediate grounding claims in our system, then we have that \[ (r[ \neg s])\triangleright r\vee s \blacktriangleright \neg\neg (r\vee s) \]is a legitimate grounding claim too. This grounding tree corresponds to the following tree of immediate grounding instances: \[\deduce{\neg\neg(r\vee s)}{\deduce{\mid}{\deduce{r\vee s}{ \deduce{\mid}{r} & \deduce{\mid}{[\neg s]} }}}\]
\section{Rules for the grounding operators} \label{sec:operators}
The introduction rule for the operator $\blacktriangleright$ is presented in Table \ref{tab:imm-rules}.\begin{table}[h] \hrule
If \begin{itemize} \item $ \quad \vcenter{\infer={B}{A_1 & \ldots & A_n & [ C_1 & \ldots & C_m] }} \quad $ is a grounding rule application such that $A_1 , \ldots , A_n$ form the ground of $B$ under the possibly empty list of conditions $ C_1, \ldots , C_m$
\item $\delta _1 , \ldots , \delta _n , \delta' _1 , \ldots , \delta' _m $ are derivations of $A_1 , \ldots , A_n, C_1, \ldots , C_m$, respectively \end{itemize} then
\begin{center} $ \quad \vcenter{ \infer{A_1 , \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B}{\infer={B}{\deduce{A_1}{\delta _1} & \ldots & \deduce{A_n}{\delta _n} & [ \deduce{C_1}{\delta' _1} & \ldots & \deduce{C_m}{\delta' _m}]}}} \quad $ is a derivation \end{center}
\hrule \caption{Introduction Rules for the Immediate Grounding Operator $\blacktriangleright$}\label{tab:imm-rules} \end{table}This rule
reflects the idea that a sentence with $\blacktriangleright$ as outermost operator and no nested occurrences of $\triangleright$---that is, an immediate grounding claim---can only be introduced on the basis of a legitimate grounding rule application---in order to have an immediate visual distinction, we use a double inference line when representing grounding rule applications. Technically, we can introduce $\blacktriangleright$ only immediately below a grounding rule application. For instance, if we consider the grounding calculus in \cite{gpr21}, the following are legitimate grounding rule applications:\footnote{We slightly adapt the notation here and use square brackets instead of the bar between the premisses of the disjunction rule.}\[\infer={p\wedge q}{p&q}\qquad\qquad \infer={p\vee q}{p & [\neg q]}\]Hence, by using our rules for the grounding operator, we can introduce $\blacktriangleright$ as follows:\[\infer{p,q\blacktriangleright p\wedge q}{\infer={p\wedge q}{p&q}}\qquad\qquad \infer {p[\neg q]\blacktriangleright p\vee q}{\infer={p\vee q}{p&[\neg q]}}\]Thus we derive the grounding claim $p,q\blacktriangleright p\wedge q$ under the hypotheses that $p$ and $q$ are true, and we derive the grounding claim $p[\neg q]\blacktriangleright p\vee q$ under the hypotheses that $p$ and $\neg q$ are true.\footnote{Notice that, more in general, the conclusion of the $\blacktriangleright$ introduction rule could be of the form $h(A_1) , \ldots , h(A_n) [ h(C_1), \ldots , h(C_m)]\blacktriangleright B$ where $h$ is a function from formulae to formulae that depends on the particular system in which the considered grounding rule is defined and on the derivations of the premisses of its application. This is due to the fact that in certain proof systems the premisses of a grounding rule application cannot always be directly interpreted as the grounds of its conclusion. 
Since this detail is irrelevant for the present work and can be easily handled given a specific proof system, we omit the function $h$ in the rule definition.}
The elimination rules for $\blacktriangleright$ are presented in Table \ref{tab:gro-el-rules}. The first three rules in this table correspond to what is called the {\it factivity} of grounding; that is, the feature of grounding according to which all elements of a ground and the corresponding consequence are supposed to be true in case the grounding claim connecting them is true. The last rule in Table \ref{tab:gro-el-rules} enables us to derive the negation of those grounding claims that cannot be derived by the grounding rules in the chosen grounding calculus. This rule enables us to reason about the falsity of certain grounding claims if we exclude the possibility of having true grounding claims that cannot be derived in the calculus. Certain grounding calculi, nevertheless, are supposed only to provide a minimal framework for grounding which can be extended by further assumptions about true grounding claims or further grounding rules---see, for instance, \cite{fin12}. If this is the case, it is possible to omit the last rule in Table \ref{tab:gro-el-rules} in order to obtain a more flexible calculus which admits extensions of this kind.
\begin{table}[h] \centering \hrule
\[\infer{B}{\Gamma [ \Delta] \blacktriangleright B}\qquad \infer{A}{\Gamma _1 , A , \Gamma _2 [ \Delta] \blacktriangleright B} \qquad\infer{C}{\Gamma [ \Delta _1 , C , \Delta _2] \blacktriangleright B}\]where the outermost operator of $A$ and $C$ is not $\triangleright$
If there is no grounding rule application $ \quad \vcenter{\infer={B}{A_1 & \ldots & A_n & [ C_1 & \ldots & C_m] }} \quad $ then $ \quad \vcenter{ \infer{\bot}{A_1 , \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B}} \quad $ is a rule application
\hrule \caption{Elimination Rules for the Immediate Grounding Operator $\blacktriangleright$}\label{tab:gro-el-rules} \end{table}
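For a concrete illustration of these rules, consider again the grounding claim $p,q\blacktriangleright p\wedge q$ derived above. The factivity rules enable us to recover its consequence and each element of its ground:\[\infer{p\wedge q}{p,q\blacktriangleright p\wedge q}\qquad\qquad \infer{q}{p,q\blacktriangleright p\wedge q}\]Moreover, assuming---as is the case, for instance, in the calculus of \cite{gpr21}---that no grounding rule application has conclusion $p\wedge q$ and $p$ as its only premiss, the last rule in Table \ref{tab:gro-el-rules} enables us to derive\[\infer{\bot}{p\blacktriangleright p\wedge q}\]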
\subsection{Mediate grounding operator} \label{sec:transitive}
In Table \ref{tab:tra-intro}, we present the rules to introduce the mediate grounding operator $\gg$ on the basis of the immediate grounding operator $\blacktriangleright$.
\begin{table}[h] \centering \hrule
\[\infer{\Gamma [ \Delta ]\gg A }{\Gamma [ \Delta] \blacktriangleright A} \qquad \qquad \infer{\Gamma _1 , \Gamma , \Gamma _2 [ \Delta , \Delta _1] \gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 , A , \Gamma _2 [ \Delta _1] \gg B} \]\[\infer{\Gamma _1 [ \Delta _1 , \Gamma , \Delta , \Delta _2] \gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 [ \Delta _1 , A , \Delta _2] \gg B} \]\hrule
\caption{Introduction Rules for the Mediate Grounding Operator $\gg$}\label{tab:tra-intro} \end{table}
The introduction rules for $\gg$ implement the obvious inductive definition of the transitive closure of $\blacktriangleright$. In particular, the first rule of the table corresponds to the base case, according to which any immediate grounding claim $\Gamma [\Delta]\blacktriangleright A $ is also a mediate grounding claim $\Gamma [\Delta]\gg A$. The second rule in the table, on the other hand, enables us to compose two mediate grounding claims by transitivity. In particular, if $\Gamma [\Delta]$ constitutes a mediate ground of $A$ and $A$ is contained in a mediate ground of $B$, then we can insert $\Gamma $ instead of $A$ in the ground of $B$ and add $\Delta$ to the conditions on the ground of $B$. The third rule is analogous, but enables us to compose through a formula occurring among the conditions: if $\Gamma [\Delta]$ constitutes a mediate ground of $A$ and $A$ occurs in the conditions of a mediate ground of $B$, then we can replace $A$ by $\Gamma , \Delta$ among those conditions.
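As a concrete example, supposing that the immediate grounding claims $p,q\blacktriangleright p\wedge q$ and $p\wedge q \blacktriangleright \neg\neg (p\wedge q)$ considered in Section \ref{sec:language} are derivable, the mediate grounding claim $p,q\gg\neg\neg(p\wedge q)$ can be derived by two applications of the base rule followed by one application of the second rule in Table \ref{tab:tra-intro}:\[\infer{p,q\gg \neg\neg (p\wedge q)}{\infer{p,q\gg p\wedge q}{p,q\blacktriangleright p\wedge q} &&& \infer{p\wedge q\gg \neg\neg (p\wedge q)}{p\wedge q\blacktriangleright \neg\neg (p\wedge q)}}\]Here the second rule is applied with $\Gamma = p,q$ and $A = p\wedge q$, while $\Gamma _1$, $\Gamma _2$, $\Delta$ and $\Delta _1$ are all empty.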
In Table \ref{tab:tra-elim}, we present the elimination rules for $\gg$. \begin{table}[h] \centering \hrule
\[\vcenter{\infer{B}{\Gamma [ \Delta] \gg B}}\qquad \vcenter{ \infer{A}{\Gamma _1 , A , \Gamma _2 [ \Delta] \gg B}}\qquad \vcenter{ \infer{C}{\Gamma [ \Delta _1 , C , \Delta _2 ] \gg B}} \]\hrule
\caption{Elimination Rules for the Mediate Grounding Operator $\gg$}\label{tab:tra-elim} \end{table}These rules to eliminate $\gg$ simply implement the factivity of mediate grounding. Indeed, given a mediate grounding claim, they enable us to derive the consequence, any part of the ground, and any part of the condition.
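For instance, given a (hypothetical) mediate grounding claim $p , q [ r] \gg p\wedge q$, the three elimination rules respectively yield its consequence, a part of its ground, and its condition:

```latex
\[\vcenter{\infer{p\wedge q}{p , q [ r] \gg p\wedge q}}\qquad
  \vcenter{\infer{q}{p , q [ r] \gg p\wedge q}}\qquad
  \vcenter{\infer{r}{p , q [ r] \gg p\wedge q}}\]
```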
\subsubsection{Completeness of the mediate grounding rules} \label{sec:completeness-tra}
We show now that mediate grounding rules always enable us to internalise mediate grounding claims that are supposed to hold with respect to the chosen grounding calculus. We show in particular that, if a grounding derivation of a formula $A$ can be constructed in our grounding calculus, then we can derive a claim that expresses that certain collections of formulae that have been used to construct the grounding derivation of $A$ are mediate grounds of $A$. We only talk of {\it completeness} for the mediate grounding rules, and not of {\it characterisation}, since the converse result---i.e., that a certain mediate grounding claim is derivable only if a suitable grounding derivation exists---cannot be proved here. Indeed, the latter result essentially depends on the particular features of the considered grounding calculus.\footnote{In order to prove that a mediate grounding claim is derivable only if a suitable grounding derivation exists, one could, for instance, show that a normalisation result holds for the considered calculus and that, as a consequence, the calculus enjoys the canonical proof property for $\gg$---that is, if a mediate grounding claim is provable, the last rule applied in its normal proofs is a $\gg $ introduction rule. It might be the case, though, that not all normal proofs of mediate grounding claims are canonical in the specific calculus under consideration.} In order to discriminate the collections of formulae that constitute a suitable mediate ground of a formula $A$, we employ the notion of {\it bar of a grounding derivation}. Intuitively, if we consider a grounding derivation as a progressive decomposition of its conclusion, then a bar of a grounding derivation can be seen as a complete description of one stage of this decomposition. 
Consider, for instance, the following derivation and suppose that it constitutes a legitimate grounding derivation according to a given notion of grounding: \[\infer={(p\wedge(q\vee r) ) \vee (s\wedge t )}{\infer={p\wedge (q\vee r )}{p&\infer={q\vee r }{q&r} } & \infer={s\wedge t }{t&s}}\] One of the bars of this derivation contains exactly $p,q,r,t $ and $s$, that is, all leaves of the derivation. This bar represents the final stage of the decomposition: the stage at which we have actually decomposed all complex subformulae obtained by progressively decomposing the original formula $(p\wedge(q\vee r)) \vee (s\wedge t )$. The formulae $p, q\vee r$ and $s\wedge t $ also constitute a bar of our derivation. This second bar represents the stage of the decomposition at which we have already decomposed $(p\wedge(q\vee r)) \vee ( s\wedge t )$ into $ p\wedge(q\vee r) $ and $s\wedge t $, and then we have decomposed $ p\wedge(q\vee r) $ into $ p$ and $ q\vee r$, but we have decomposed neither $q\vee r$ nor $s\wedge t $ yet. In more general terms, a bar represents a stage of a decomposition of this kind in the sense that it contains all formula occurrences that we have obtained so far during the decomposition, but none of the formula occurrences that we have decomposed to obtain them.
We fix the obvious notion of grounding derivation and then formally define what its bars are.
\begin{definition}[Grounding derivation] A grounding derivation is a derivation which is constructed by exclusively applying grounding rules to a set of consistent hypotheses and which contains at least one rule application. \end{definition}
\begin{definition}[Bar of a derivation] Given any derivation $\delta$ composed of inferential steps such that each step has one or more formulae as premisses and one formula as conclusion, the derivation-tree $t(\delta)$ of $\delta$ is a tree such that there is a one-to-one correspondence between the nodes of $t(\delta)$ and the formula occurrences in $\delta$ that verifies the following conditions: \begin{itemize} \item the root of $t(\delta)$ corresponds to the conclusion of $\delta$, and \item if a node $n$ of $t(\delta)$ corresponds to a formula occurrence $o$ of $\delta$ and $o$ is the conclusion of an inferential step $i$, then each child of $n$ corresponds to a distinct premiss of $i$ and each premiss of $i$ corresponds to a distinct child of $n$.\end{itemize}
A bar of a derivation $\delta$ is a set of formula occurrences in $\delta$ that does not contain the conclusion of $\delta$ and such that the corresponding nodes form a set that shares exactly one element with each path (that is, set of consecutive nodes) connecting the root of $t(\delta)$ with one of its leaves. \end{definition}
We are now ready to prove that any bar of any grounding derivation of a formula $A$ corresponds to a derivable mediate grounding claim for $A$. \begin{proposition}If $ \Gamma $ contains all grounds and $\Delta$ all conditions occurring in a bar of a grounding derivation of $A$ in a fixed calculus $\kappa$, then the grounding claim $\Gamma [\Delta ]\gg A$ is derivable in any calculus that contains the rules of $\kappa$, the rules for $\blacktriangleright$ and the rules for $\gg$. \end{proposition} \begin{proof} The proof is by induction on the number of rule applications occurring in the grounding derivation of $A$. In the base case, only one grounding rule is applied to derive $A$---by definition of grounding derivation, if no rule is applied in a derivation, then it is not a grounding derivation. Hence, $\Gamma [\Delta ]\blacktriangleright A$ is directly derivable and $\Gamma [\Delta ]\gg A$ is derivable from it. Moreover, $\Gamma \cup \Delta $ is the only bar of the derivation. Suppose now that if $ \Gamma '$ contains all grounds and $\Delta '$ all conditions occurring on the nodes of a bar of a grounding derivation of $A'$ containing less than $n$ grounding rule applications, then $\Gamma ' [\Delta ' ]\gg A '$ is derivable. We show that this holds also for grounding derivations containing $n$ grounding rule applications. Suppose that there is a grounding derivation $\delta $ of $A$ containing $n$ rule applications and that $ \Gamma $ contains all grounds and $\Delta $ all conditions occurring in a bar of that derivation. Consider then the bottommost rule application in $\delta$ and let us call it $r$. We can then list the elements $ B _1 ,\ldots , B _k $ of the bar $ \Gamma \cup \Delta$ in such a way that $ B _1 ,\ldots , B _m $---for $m\leq k$---are premisses of $r$, and $ B _{m+1} ,\ldots , B _k $ belong to bars of distinct grounding derivations of those premisses of $r$ which are not listed in $ B _1 ,\ldots , B _m $. 
Let us call $ C _1 ,\ldots , C _p $ the premisses of $r$ which do not belong to the list $ B _1 ,\ldots , B _m $. Hence---by neglecting for a moment the difference between grounds and conditions---we can picture our grounding derivation as follows: \[\infer=[r]{A}{ \infer*{B _1}{} & \ldots & \infer*{B _m}{} & \infer*{C _1}{\Theta _1} & \ldots & \infer*{C _p}{\Theta _p} }\]where each $\Theta _i $ is the bar of the derivation of $C_i$ that contains some of the elements of $B _{m+1} ,\ldots , B _k$ and $\Theta _1 \cup \ldots \cup \Theta _p$ is the multiset $\{B _{m+1} ,\ldots , B _k\}$. By inductive hypothesis, we have that each grounding claim $\Theta _i \gg C _i$---where some elements of $\Theta _i $ might be between square brackets---is derivable. Moreover, also the grounding claim $B _1 , \ldots , B _m, C _1, \ldots, C _p\gg A$---where some formulae among $B _1 , \ldots , B _m, C _1, \ldots, C _p$ might be between square brackets---is directly derivable from the conclusion of the grounding rule application $r$ by a $\blacktriangleright $ introduction immediately followed by a $\gg $ introduction. Hence, we can derive the grounding claim $\Gamma [\Delta ]\gg A= B _1 , \ldots , B _m, \Theta _1, \ldots, \Theta _{p}\gg A$ as follows: \[\infer{B _1 , \ldots , B _m, \Theta _1, \ldots, \Theta _{p}\gg A}{\infer*{\Theta _p \gg C _p}{} & \infer*{B _1 , \ldots , B _m, \Theta _1, \ldots, \Theta _{p-1}, C _p\gg A}{\infer{}{ \infer*{\Theta _2 \gg C _2}{} & \infer{B _1 , \ldots , B _m, \Theta _1, \ldots, C _p\gg A }{\infer*{\Theta _1 \gg C _1}{} & \infer{B _1 , \ldots , B _m, C _1, \ldots, C _p\gg A}{\infer{B _1 , \ldots , B _m, C _1, \ldots, C _p\blacktriangleright A}{\infer=[r]{A}{\infer*{B _1}{} & \ldots & \infer*{B _m}{} & \infer*{C _1}{\Theta _1} & \ldots & \infer*{C _p}{\Theta _p}}}} }}}}\]\end{proof}
\subsection{Grounding trees} \label{sec:trees}
We present now the rules for constructing grounding trees. As discussed above, a grounding tree represents a concatenation of several immediate grounding steps, and we use occurrences of the operator $\triangleright$ nested inside an occurrence of the operator $\blacktriangleright$ in order to encode a grounding tree as a formula. For instance, if the grounding claims $\Gamma , A \blacktriangleright B$ and $\Delta \blacktriangleright A$ hold, then the symbol $\triangleright $ enables us to compose the two grounding claims in one as follows: $ \Gamma ,( \Delta )\triangleright A \blacktriangleright B$. This formula means that $B$ is grounded by $\Gamma , A$ and that the component $A$ of the ground of $B$ is in turn grounded by $\Delta$.
We adopt a different notation---i.e., $\triangleright$ and the associated parenthesising---for the occurrences of $\blacktriangleright$ nested inside grounding claims in order to avoid any ambiguity in the interpretation of formulae. Indeed, if we used $\blacktriangleright $ both for the outermost grounding claim of a grounding tree and for the nested ones, in the formula $ \Gamma ,( \Delta \blacktriangleright A ) \blacktriangleright B$ we could either have that the formula $\Delta \blacktriangleright A$ itself is part of the ground of $B$,\footnote{For instance, this could happen according to certain notions of grounding if $\Gamma = \{p\}$ and $B = p\wedge (\Delta \blacktriangleright A)$.} or we could have that $A$ is part of the ground of $B$ and $\Delta \blacktriangleright A$ is a nested grounding claim by which we specify that $\Delta$ is an immediate ground of $A$. In order to solve this ambiguity, we distinguish nested grounding claims by using $\triangleright$ for them instead of $\blacktriangleright$. Hence, while the formula $ \Gamma ,( \Delta \blacktriangleright A ) \blacktriangleright B$ means that the ground of $B$ contains the formula $\Delta \blacktriangleright A$, the formula $ \Gamma ,( \Delta )\triangleright A \blacktriangleright B$ means that the ground of $B$ contains the formula $A$, which in turn has ground $\Delta$. Intuitively, if we represent grounding claims as trees in which the children of a node stand for the grounds of the formula occurring as that node, $ \Gamma ,( \Delta \blacktriangleright A ) \blacktriangleright B$ corresponds to the tree below on the left, while $ \Gamma ,( \Delta )\triangleright A \blacktriangleright B$ corresponds to the tree below on the right:\begin{center} \[\infer {B}{\deduce{\mid}{\Gamma} & \deduce{\mid}{\Delta\blacktriangleright A} } \qquad \qquad \infer {B}{\deduce{\mid}{\Gamma} & \deduce{\mid}{\deduce{A}{\deduce {\mid }{\Delta}}} } \]
\end{center}No formula, therefore, has the form $(\Delta )\triangleright A$: we only use $\triangleright$ to distinguish the subformulae of $\Sigma \blacktriangleright B$ that are used as parts of a ground and the subformulae of $\Sigma \blacktriangleright B$ that are immediate grounds---or conditions of immediate grounds---of $B$.
The symbol $\triangleright$ can be indefinitely nested to construct complex grounding trees.
For instance, we can write\[((Z )\triangleright A , B )\triangleright C \;,\;\; (D [E]) \triangleright I\;\; [ (E [F] )\triangleright G ]\;\; \blacktriangleright \;\; H \]to mean that $H$ is grounded by $C$ and $I$ under the condition $G$; that $G$ is in turn grounded by $E$ under the condition $F$; that $I$ is grounded by $D$ under the condition $E$; and that $C$ is grounded by $A$ and $B$; and, finally, that the ground $A$ of $C$ is in turn grounded by $Z$. What is expressed by such a formula could be represented by the following tree: \[\infer{H}{\deduce{\mid}{\infer{C}{\deduce{\mid}{\deduce{A}{\deduce{\mid}{Z}}} & \deduce{\mid}{B}}} & \deduce{\mid}{\infer{I}{\deduce{\mid}{D} & \deduce{\mid}{[E]}}} & \deduce{\mid}{\infer{[G]}{\deduce{\mid}{[E} & \deduce{\mid}{[F]]}}} }\]
where, as in the trees above, the children of a node stand for the elements of the ground of the formula occurring as that node, and conditions are between square brackets.\footnote{We will not use this as a formal notation but just as a visual device to have a clearer grasp of the structure of grounding trees constructed by $\blacktriangleright$ and $\triangleright$. Notice that, in the subtree rooted at $[G]$, both the ground $E$ and the condition $[F]$ are enclosed together between square brackets. We adopt this convention for the children of a node corresponding to a condition---such as $[G]$---in order to have a visual indication that the grounds of a condition should not be counted among the mediate grounds of the consequence.}
Nesting $\triangleright$ inside $\blacktriangleright$, thus, does not correspond to using a grounding claim as part of the ground of some formula, but serves the purpose of constructing chains of grounding claims. As already mentioned, a chain of grounding claims is similar to a mediate grounding claim but with the essential difference that, while a mediate grounding claim does not contain any information about the immediate grounding claims that justify it, a grounding tree contains all the information about the immediate grounding claims on which it is based.
\begin{table}[h] \centering \hrule
\[ \infer{\Gamma _1 ,( \Delta [ \Theta] )\triangleright A , \Gamma _2 [ \Xi] \blacktriangleright B}{\Delta [ \Theta] \blacktriangleright A &&& \Gamma _1 , A , \Gamma _2 [ \Xi] \blacktriangleright B} \qquad \qquad \infer{\Gamma [ \Xi _1 , (\Delta [ \Theta] )\triangleright C , \Xi_2] \blacktriangleright B}{\Delta [ \Theta] \blacktriangleright C &&& \Gamma [ \Xi _1 , C , \Xi_2] \blacktriangleright B} \] \hrule
\caption{Introduction Rules for the Grounding Tree Operator $\triangleright$}\label{tab:tree-intro} \end{table}
The introduction rules for $\triangleright$ are presented in Table \ref{tab:tree-intro}. Intuitively, these rules enable us to plug grounding sentences inside other grounding sentences in a coherent way. For instance, if a formula $D$ can be grounded by the complex ground $A, C$ and thus we can derive $A, C \blacktriangleright D$ and, moreover, $C$ can be grounded by $B$ and thus we can derive $B\blacktriangleright C$, then we can plug the second grounding claim into the first one and obtain: $A, (B )\triangleright C \blacktriangleright D$. Notice that, as explained above and stressed by the parentheses enclosing $B$, according to this notation, only $A$ and $C$ are part of the immediate ground of $D$; $B$ occurs in the formula as an immediate ground of $C$ and thus only as a {\it mediate} ground of $D$. Coherently, the formula $A, (B )\triangleright C\blacktriangleright D$ can be read as follows: ``$D$ is immediately grounded by $A$ and $C$, and in turn $C$ is immediately grounded by $B$''.
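This example can be spelled out as an application of the first introduction rule of Table \ref{tab:tree-intro}, instantiated with empty condition multisets:

```latex
\[\infer{A , ( B )\triangleright C \blacktriangleright D}{B \blacktriangleright C &&& A , C \blacktriangleright D}\]
```

The premisses $B \blacktriangleright C$ and $A , C \blacktriangleright D$ are, of course, only available if the grounding calculus at hand licenses them.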
\begin{table}[h] \centering \hrule
\[\infer{\Delta [ \Theta] \blacktriangleright A}{\Gamma _1 , (\Delta [ \Theta] )\triangleright A , \Gamma _2 [ \Xi] \blacktriangleright B} \qquad\qquad \infer{\Delta [ \Theta] \blacktriangleright C}{\Gamma [ \Xi_1 , (\Delta [ \Theta] )\triangleright C, \Xi_2] \blacktriangleright B}\]
\[\infer{\Gamma _1 , A , \Gamma _2 [ \Xi] \blacktriangleright B}{\Gamma _1 , (\Delta [ \Theta] )\triangleright A , \Gamma _2 [ \Xi] \blacktriangleright B} \qquad\qquad \infer{\Gamma [ \Xi_1 , C, \Xi_2] \blacktriangleright B}{\Gamma [ \Xi_1 , (\Delta [ \Theta] )\triangleright C, \Xi_2] \blacktriangleright B}\]
\hrule
\caption{Elimination Rules for the Grounding Tree Operator $\triangleright$}\label{tab:tree-el-rules} \end{table}
The elimination rules for $\triangleright $ are presented in Table \ref{tab:tree-el-rules}. These rules enable us to simplify a grounding tree by eliminating one of its sub-grounding-trees. By applying them several times, for instance, we can extract all immediate grounding claims on which the grounding tree is based.
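For instance, starting from the grounding tree $A , (B )\triangleright C \blacktriangleright D$ discussed above, the elimination rules of Table \ref{tab:tree-el-rules} recover the two immediate grounding claims on which it is based:

```latex
\[\infer{B \blacktriangleright C}{A , ( B )\triangleright C \blacktriangleright D}
  \qquad\qquad
  \infer{A , C \blacktriangleright D}{A , ( B )\triangleright C \blacktriangleright D}\]
```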
By an inductive definition, we make precise the idea that an occurrence of $\blacktriangleright$ in a formula $F$ might form a grounding tree along with some occurrences of $\triangleright$ in $F$. This will be useful later. In particular, we formally define the transitive relation that holds between an occurrence of $\blacktriangleright$ or $\triangleright$ and all occurrences of $\triangleright$ that correspond to the nodes of the same grounding tree. \begin{definition} We say that an occurrence $\star$ of $\blacktriangleright$ or of $\triangleright$ {\it holds} an occurrence $\star '$ of $\triangleright$ if, and only if, \begin{itemize} \item either $\star $ is the outermost operator of a formula or subformula of the form \[ A_1, \ldots , A_n [ C_1, \ldots , C_m] \star B\] and $\star '$ is the outermost operator of one of the subformulae $A_1, \ldots , A_n , C_1, \ldots , C_m$; \item or $\star $ holds an occurrence of $\triangleright$ that holds $\star '$. \end{itemize} \end{definition}
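To illustrate the definition, consider again the grounding tree $((Z )\triangleright A , B )\triangleright C \,,\; (D [E]) \triangleright I\; [ (E [F] )\triangleright G ]\; \blacktriangleright H$ from the previous example:

```latex
\begin{itemize}
\item By the first clause, the outermost occurrence of $\blacktriangleright$ holds
  the occurrences of $\triangleright$ that are outermost in the ground elements
  $((Z )\triangleright A , B )\triangleright C$ and $(D [E]) \triangleright I$,
  and in the condition element $(E [F] )\triangleright G$.
\item By the first clause again, the outermost occurrence of $\triangleright$ in
  $((Z )\triangleright A , B )\triangleright C$ holds the occurrence in
  $(Z )\triangleright A$; hence, by the second clause, the outermost
  $\blacktriangleright$ holds the latter as well.
\end{itemize}
```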
\subsubsection{Characterisation of the grounding tree rules} \label{sec:characterisation-tree}
We show now that the rules for the grounding tree operator enable us to internalise in the object language any legitimate grounding derivation as a grounding tree. We will moreover show that, if a grounding tree is derivable from a consistent set of hypotheses, then we can construct a legitimate grounding derivation with exactly the same structure as the grounding tree.
In order to do so, we formally specify the intuitive correspondence between grounding trees and grounding derivations in an arbitrary grounding calculus. \begin{definition}[Grounding tree correspondence] A grounding tree \[G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] \blacktriangleright A ,\] or subformula $( G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] )\triangleright A $ of a grounding tree, corresponds to a grounding derivation $\delta$ if, and only if, \begin{itemize}\item the root of $\delta $ is $A$, \item the last rule application $r$ of $\delta $ has $n$ premisses (among which, say, the first $m$ are not between square brackets and the rest are), and \item for each $G_i$, where $1\leq i \leq n$ and where we write $G_i$ for $C_i$ when $m< i\leq n$, one of the following holds:\begin{itemize} \item $G_i$ {\it does not} have as outermost operator an occurrence of $\triangleright$ held by the outermost occurrence of $ \blacktriangleright $ in $G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] \blacktriangleright A $---respectively of $\triangleright$ in $(G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] )\triangleright A$---and the $i$th premiss of $r$ is the formula $G_i$; \item $G_i$ {\it has} as outermost operator an occurrence of $\triangleright$ held by the outermost occurrence of $ \blacktriangleright $ in $G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] \blacktriangleright A $---or of $\triangleright $ in $(G_1 , \ldots , G_m [C_{m+1} , \ldots , C_n ] )\triangleright A$---and the grounding derivation of the $i$th premiss of $r$ corresponds to $G_i$. \end{itemize} \end{itemize} \end{definition}It is easy to see that, if we suppose that the premisses of a rule do not commute, each grounding derivation corresponds to exactly one grounding tree and vice versa. If we wish to allow for the commutation of rule premisses, we can still keep the one-to-one correspondence by considering commutative grounds and conditions.
We can prove now that the existence of a grounding derivation implies that the corresponding grounding tree can be derived by grounding rules and the rules for the grounding operators $\blacktriangleright$ and $\triangleright$. \begin{proposition}\label{prop:compl-tree}If the grounding tree $\Gamma [\Delta ]\blacktriangleright A$ corresponds to a legitimate grounding derivation in a fixed grounding calculus $\kappa$, then $\Gamma [\Delta ]\blacktriangleright A$ is derivable from the hypotheses $\Gamma , \Delta $ in any calculus that contains the rules of $\kappa$, the rules for the immediate grounding operator, and the rules for the grounding tree operator. \end{proposition} \begin{proof} The proof is by induction on the number of rule applications occurring in the grounding derivation of $A$. In the base case, only one grounding rule is applied to derive $A$---indeed, if no rule is applied in the derivation, it is not a grounding derivation, but a logical one. Hence, $\Gamma [\Delta ]\blacktriangleright A$ is derivable by our rules for the immediate grounding operator. Suppose now that if a grounding tree $\Gamma ' [\Delta ' ]\blacktriangleright A'$ corresponds to a legitimate grounding derivation in $\kappa$ that contains less than $n$ rule applications, then $\Gamma ' [\Delta ' ]\blacktriangleright A'$ is derivable in $\kappa$ extended by our rules for immediate grounding and grounding trees. We show that this holds also for grounding derivations containing $n$ grounding rule applications. Suppose that the grounding derivation $\delta $ of $A$ containing $n$ rule applications corresponds to $\Gamma [\Delta ]\blacktriangleright A$. 
Then we can consider the last rule applied in $\delta$ and we have that $\delta$ has one of the two following forms:\[\infer=[s]{A}{\infer*{\Pi _1}{} & \infer=[r]{B}{\infer*{\Sigma}{} &[\infer*{\Theta}{} ]}&\infer*{\Pi _2}{}&[\infer*{\Xi}{}]}\qquad \qquad \infer=[s]{A}{\infer*{\Pi}{} & [\infer*{\Xi_1}{} & \infer=[r]{B}{\infer*{\Sigma}{} &[\infer*{\Theta}{}]}& \infer*{\Xi _2}{}]}\]If $\delta$ is of the form displayed above on the left, then, by inductive hypothesis, there is a grounding tree $\Sigma ^{\star} [ \Theta ^{\star} ]\blacktriangleright B$ derivable from the hypotheses $\Sigma ^{\star} , \Theta ^{\star}$ which corresponds to the grounding derivation of $B$ and one grounding tree $\Pi _1^{\star}, B, \Pi _2^{\star} [\Xi^{\star} ]\blacktriangleright A$ derivable from the hypotheses $\Pi _1^{\star}, B, \Pi _2^{\star} , \Xi^{\star} $ which corresponds to our grounding derivation of $A$ in which we assume $B$ as a hypothesis rather than deriving it by $r$. Hence, by\[\infer{\Pi _1^{\star} , (\Sigma^{\star} [ \Theta^{\star} ])\triangleright B , \Pi _2^{\star} [ \Xi^{\star} ] \blacktriangleright A}{\Sigma ^{\star} [ \Theta ^{\star} ]\blacktriangleright B &&& \Pi _1^{\star}, B, \Pi _2^{\star} [\Xi^{\star} ]\blacktriangleright A}\] we can derive $\Pi _1^{\star} , (\Sigma^{\star} [ \Theta^{\star} ])\triangleright B , \Pi _2^{\star} [ \Xi^{\star} ] \blacktriangleright A$ from the hypotheses $\Sigma ^{\star} , \Theta ^{\star}, \Pi _1^{\star}, B, \Pi _2^{\star} , \Xi^{\star} $. 
But since $B$ can already be derived from the hypotheses $\Sigma ^{\star} , \Theta ^{\star}$, and, moreover, $\Gamma [\Delta ] \blacktriangleright A $ is supposed to correspond to $\delta$, we have that each element of $\Gamma [\Delta ]$ is suitably associated either to the corresponding premiss of $s$ or to its grounding derivation, which in turn, by induction hypothesis, corresponds to the matching element of $ \Pi _1^{\star}, (\Sigma ^{\star} [ \Theta ^{\star} ])\triangleright B, \Pi _2^{\star} [\Xi^{\star} ]$. Hence, by the fact that the correspondence between grounding trees and grounding derivations is one-to-one, we can conclude that $ \Gamma [\Delta ] \blacktriangleright A = \Pi _1^{\star} , (\Sigma^{\star} [ \Theta^{\star} ])\triangleright B , \Pi _2^{\star} [ \Xi^{\star} ] \blacktriangleright A $ and thus that we have a derivation of $\Gamma [\Delta ] \blacktriangleright A$ from the hypotheses $\Gamma , \Delta$.
If $\delta$ is of the form displayed above on the right, a similar argument leads to the conclusion that $\Gamma [\Delta ] \blacktriangleright A$ is derivable from the hypotheses $\Gamma , \Delta$.\end{proof}
Finally, we show that, once we have fixed a grounding calculus, we can reconstruct the grounding derivation corresponding to any grounding tree which is derivable from a consistent set of hypotheses.\footnote{Notice that we do not prove here the obvious statement relying on the assumption that the grounding tree is provable; we prove a stronger statement relying only on the derivability of the grounding tree from a consistent set of hypotheses. This is required if we want to give a good picture of the behaviour of grounding trees since, due to the factivity of grounding, grounding claims are usually supposed to depend on hypotheses concerning the truth of their constituents, and thus not to be provable but only derivable from consistent sets of hypotheses.}
\begin{proposition}For any consistent calculus $\kappa^+$ defined by extending a grounding calculus $\kappa$ with the rules for immediate grounding and grounding tree operators, if the grounding tree $\Gamma [\Delta ]\blacktriangleright A $ is derivable in $\kappa^+$ from a consistent set of hypotheses, then there is a legitimate grounding derivation in $\kappa$ from a consistent set of hypotheses which exactly corresponds to $\Gamma [\Delta ]\blacktriangleright A $. \end{proposition} \begin{proof} Let us assume that $\delta $ is a $ \kappa^+$ derivation of $\Gamma [\Delta ]\blacktriangleright A $ from a consistent set of hypotheses $\Pi$. Since $\Pi $ is consistent, $\kappa^+$ is consistent, and $\Gamma [\Delta ]\blacktriangleright A $ is derivable from $\Pi$ in $\kappa^+$, $\bot $ cannot be derived from $\Gamma [\Delta ]\blacktriangleright A $ in $\kappa^+$. In particular, if we consider the elements of the set $\{\Sigma _i [\Theta _i ]\blacktriangleright B_i\}_{1\leq i \leq n}$ of immediate grounding claims which can be derived from $\Gamma [\Delta ]\blacktriangleright A $ by $\triangleright$ elimination rules, we have that $(i)$ $\Sigma _i [\Theta _i ]\blacktriangleright B_i$ corresponds to a legitimate grounding rule application\[\infer={B_i}{\Sigma _i [\Theta_i ]}\]and $(ii)$ the set of formulae $\bigcup_{i=1} ^n (\Sigma _i \cup \Theta _i )$ is consistent. Otherwise we could derive $ \Gamma [\Delta ]\blacktriangleright A $ from $\Pi $ and then $\bot$ from $\Gamma [\Delta ]\blacktriangleright A $ by $\triangleright$ elimination rules and $\blacktriangleright$ elimination rules, which contradicts the assumption about the consistency of $\Pi$.
But if $(i)$ and $(ii)$ hold, we can construct a grounding derivation in $\kappa$ from hypotheses $\bigcup_{i=1} ^n (\Sigma _i \cup \Theta _i )$---or possibly from a subset of these hypotheses---by exclusively using the grounding rule applications\[\infer={B_i}{\Sigma _i [\Theta_i ]}\] The resulting derivation will exactly correspond to $ \Gamma [\Delta ]\blacktriangleright A $. We prove this by induction on the number of occurrences of $\triangleright $ in $\Gamma [\Delta ]\blacktriangleright A $. If $\Gamma [\Delta ]\blacktriangleright A $ does not contain any occurrence of $\triangleright$, it corresponds to a grounding rule application of the form\[\infer={A}{\Gamma [\Delta ]}\]which is exactly a grounding derivation in $\kappa$ which corresponds to $\Gamma [\Delta ]\blacktriangleright A $ and only uses hypotheses among $\Gamma \cup \Delta$. We suppose then that for any grounding tree $\Gamma ' [\Delta ' ] \blacktriangleright A'$ which contains less than $m>0$ occurrences of $\triangleright$, we can construct a grounding derivation $\delta '$ in $\kappa$ from a consistent set of hypotheses which corresponds to $\Gamma ' [\Delta ' ] \blacktriangleright A'$. We prove that this holds also for any grounding tree $\Gamma [\Delta ] \blacktriangleright A$ which contains $m$ occurrences of $\triangleright$. Since $\Gamma [\Delta ] \blacktriangleright A$ contains $m>0$ occurrences of $\triangleright$, there must be, for $1\leq j\leq p$, some elements $(\Gamma _j '' [\Delta _j '' ] )\triangleright A _j ''$ of $\Gamma , \Delta$ which clearly contain less than $m$ occurrences of $\triangleright$. 
If we consider all grounding trees $\Gamma _j '' [\Delta _j '' ] \blacktriangleright A _j ''$---that is, the grounding trees which are identical to $(\Gamma _j '' [\Delta _j '' ] )\triangleright A _j ''$ except for the outermost operator---we have that, by inductive hypothesis, there are grounding derivations $\delta _j ''$ in $\kappa$ which correspond to $\Gamma _j '' [\Delta _j '' ] \blacktriangleright A _j ''$ and which only depend on hypotheses which can be derived from immediate grounding claims which can, in turn, be derived from each $(\Gamma _j '' [\Delta _j '' ] )\triangleright A _j ''$ by $\triangleright $ eliminations. Now, from $\Gamma [\Delta ] \blacktriangleright A$ we can derive an immediate grounding claim of the form $\Gamma ^{\star} [\Delta ^{\star} ] \blacktriangleright A$ where $\Gamma ^{\star} , \Delta ^{\star}$ contain all elements of $\Gamma, \Delta$ which do not have as outermost operator $\triangleright $ and all formulae $A _j ''$ occurring in the elements $(\Gamma _j '' [\Delta _j '' ] )\triangleright A _j ''$ of $\Gamma, \Delta$ which do have as outermost operator $\triangleright $. This immediate grounding claim corresponds to a grounding rule of the form\[\infer={A}{\Gamma ^{\star} [\Delta ^{\star}]}\]and by composing this rule application to the conclusions $A _j ''$ of our grounding derivations $\delta _j ''$, we have our grounding derivation $\gamma $ in $\kappa$ which corresponds to $\Gamma [\Delta ] \blacktriangleright A$ and only depends on hypotheses which can be derived from immediate grounding claims which can, in turn, be derived from $\Gamma [\Delta ] \blacktriangleright A$ by $\triangleright $ eliminations. 
Indeed, as for the hypotheses of $\gamma$, they clearly have the required property since what can be derived from each $(\Gamma _j '' [\Delta _j '' ] )\triangleright A _j ''$ by $\blacktriangleright$ and $\triangleright$ eliminations can also be derived from $\Gamma [\Delta ] \blacktriangleright A$ by $\blacktriangleright$ and $\triangleright$ eliminations. As for the correspondence between $\gamma$ and $\Gamma [\Delta ] \blacktriangleright A$, we have that the root of $\gamma$ is exactly $A$; the last rule applied in $\gamma$ has the correct number of premisses without square brackets and within square brackets---by the definition of $ \Gamma ^{\star}$ and $ [\Delta ^{\star}]$; all premisses of the last rule applied in $\gamma$ without $\triangleright$ as outermost operator are identical to the elements of $\Gamma , \Delta $ without $\triangleright$ as outermost operator---again, by the definition of $ \Gamma ^{\star}$ and $ [\Delta ^{\star}]$; and, finally, all premisses of the last rule applied in $\gamma$ with $\triangleright$ as outermost operator are derived by grounding derivations which correspond to the elements of $\Gamma , \Delta $ with $\triangleright$ as outermost operator---by the definition of $\gamma $, of the derivations $\delta _j''$, and of $ \Gamma ^{\star}$ and $ [\Delta ^{\star}]$. \end{proof}
We conclude this section by stressing that in fully characterising the grounding tree operator, we also fully characterised the immediate grounding operator; indeed, in the base case, a grounding tree is an immediate grounding claim.
\section{Logicality and balance} \label{sec:balance}
We will investigate now the proof-theoretical behaviour of the introduction and elimination rules that we presented for our three grounding operators and try to establish whether these rules induce a well-behaved definition---in inferential terms---of the operators, and what this definition can tell us about the operators themselves. We will begin with a most general demarcation problem that arises in the study of inferential definitions of sentential operators: the logicality issue. In other words, we will address the question whether our grounding operators can be considered as logical operators. In order to do so, we will adopt methods coming from the structuralist proof-theoretical approach to the characterisation of the notion of logical constant---see for instance \cite{dos80, dos89}---which dates back to the work of Koslow \cite{kos05} and Popper, see \cite{sh05}. We will consider, in particular, two traditional criteria of logicality and show that, while one is not met by our grounding operator rules, the other is. We will then weaken the first criterion in a manner that suits, as we will argue, the nature of the considered grounding operators, and show that the weakened version is met by some of them, but not all. We will draw some conclusions concerning the operators and the relations that they formalise.
We now consider our first condition, often employed as a criterion of the logicality of operators: {\it deducibility of identicals} \cite{hac79}---also discussed in \cite{pra71} under the name of {\it immediate expansion}.\footnote{A closely related condition employed as a criterion of the logicality of operators is {\it uniqueness}; see \cite{np15} for a detailed study of the two criteria and of the relations between them. We decided not to consider {\it uniqueness} here for the simple reason that it trivially fails both for the mediate grounding operator and for the grounding tree operator. The reason for the failure is simply that the introduction rules for these operators explicitly refer to occurrences of the operators themselves. While this failure might be of some interest with respect to the investigation of the proof-theoretical features of inductively defined operators in general---indeed, inductive definitions essentially rely on reference to other occurrences of the defined operator---it does not tell us very much about the differences in proof-theoretical behaviour between the grounding operators that we set out to study here.}
Let us first state the traditional, strict version of the criterion.
\noindent {\bf Deducibility of identicals} $\;$ An operator $\circ (\;\; , \ldots ,\;\; ) $ satisfies {\it deducibility of identicals} if, and only if, for any list of formulae $A_1 , \ldots ,A_n$, we can construct a derivation of $\circ (A_1 , \ldots ,A_n)$ from $\circ (A_1 , \ldots ,A_n)$
by applying an elimination rule for $\circ $ at least once, and by exclusively employing introduction and elimination rules for $\circ$.
In order to provide a positive example of employment of the condition, let us briefly exemplify how it can be shown to hold for the traditional natural deduction conjunction rules:\[\infer[\wedge i]{A\wedge B}{A&B}\qquad\qquad\qquad \infer[\wedge e]{A}{A\wedge B}\qquad \infer[\wedge e]{B}{A\wedge B}\]Deducibility of identicals can be easily shown to hold for these rules by the following derivation:\[\infer[\wedge i]{A\wedge B}{\infer[\wedge e]{A}{A\wedge B}&\infer[\wedge e]{B}{A\wedge B}}\]
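The same kind of expansion works for other standard connectives. For instance---as an additional illustration, not specific to the grounding systems studied here---the usual natural deduction rules for implication also satisfy the criterion, by an expansion in which the assumption $A$ is discharged by the concluding $\to$ introduction:

```latex
% Deducibility of identicals for implication: the standard expansion.
% The assumption [A] is discharged by the final -> introduction.
\[\infer[\to i]{A\to B}{\infer[\to e]{B}{A\to B & [A]}}\]
```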
The condition, as can be seen from the previous example, requires the elimination rules for an operator to provide all the information which is necessary in order to reintroduce the operator itself. Notice moreover that it is essential that the rules for the operator under investigation alone are enough to show that the information obtained by eliminating it is sufficient to reintroduce it. In more general terms, this condition requires an {\it immediate schematic conformity} between the formulae that can be obtained by the elimination rules---which determine the ways we can use the operator---and the premisses of the introduction rules---which determine the truth conditions of the sentences constructed by applying the operator.
That this strict version of the deducibility of identicals criterion is not met by the immediate grounding operator is a rather obvious fact. There is, indeed, no way to introduce this operator without employing rules which are not rules for the operator itself: the introduction of the immediate grounding operator requires a grounding rule application. In other terms, there is no {\it immediate conformity} between the conclusions of the elimination rules and the premisses of the introduction rules.
The failure of deducibility of identicals for the grounding operator is not due to the form of the particular rules that we adopt for the operator. Indeed, even if we consider more direct rules to introduce the grounding operator, deducibility of identicals still fails. Consider, for instance, the following grounding rule for conjunction:\[\infer={A\wedge B }{A&B}\]In our system, the relative grounding claim $A,B\blacktriangleright A\wedge B $ would be introduced as follows:\[\infer{A,B\blacktriangleright A\wedge B }{\infer={A\wedge B }{A&B}}\]But we could also define the following, more direct, rule:\[\infer{A,B\blacktriangleright A\wedge B }{A & B & A\wedge B}\] Nevertheless, rules of this kind for introducing the grounding operator would also fail the deducibility of identicals test, because what we obtain by eliminating a generic instance of the grounding operator is not enough, in general, to infer a grounding claim. Indeed, the syntactic form of the formulae $G_1 , \ldots , G _n, C$ is unknown, and the following derivation could only be used to infer a grounding claim for certain specific choices of the formula $C$:\[\infer[?]{}{\infer{G_1 }{G_1 , \ldots , G _n \blacktriangleright C }&\ldots & \infer{G _n }{G_1 , \ldots , G _n \blacktriangleright C}& \infer{C}{G_1 , \ldots , G _n \blacktriangleright C} }\]In this case, then, there is no {\it schematic conformity} between the conclusions of the elimination rules and the premisses of the introduction rules.
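To see concretely why the eliminated information only suffices for specific choices of $C$, consider---as a purely illustrative instance---the case $n=2$ and $C = G_1\wedge G_2$. Since the syntactic form of $C$ is known here, the three premisses of the direct introduction rule displayed above can all be obtained by eliminating the grounding claim:

```latex
% Illustrative instance: n = 2 and C = G1 /\ G2. All three premisses of the
% direct introduction rule are obtained by eliminating the grounding claim.
\[\infer{G_1 , G_2 \blacktriangleright G_1 \wedge G_2 }{\infer{G_1 }{G_1 , G_2 \blacktriangleright G_1 \wedge G_2 } & \infer{G_2 }{G_1 , G_2 \blacktriangleright G_1 \wedge G_2 } & \infer{G_1 \wedge G_2 }{G_1 , G_2 \blacktriangleright G_1 \wedge G_2 }}\]
```

For a generic $C$, of course, no such final introduction step is available.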
The strict version of deducibility of identicals is not met by the mediate grounding operator either. Indeed, in order to derive a claim of the form $\Gamma [\Delta ]\gg A$ by $\gg$ introductions, we need to derive either an immediate grounding claim $\Gamma [\Delta ]\blacktriangleright A$ or two mediate grounding claims that yield $\Gamma [\Delta ]\gg A$ by transitivity. And we certainly cannot derive any of these claims from the hypothesis $\Gamma [\Delta ]\gg A$ by exclusively employing $\gg$ elimination rules---or $\gg$ introduction rules, for that matter. Finally, not even the grounding tree operator enjoys deducibility of identicals. The only reason why this is the case, though, is that in the base case a grounding tree is an immediate grounding claim. And, as we have argued above, the immediate grounding operator does not meet the strict version of the criterion.
The failure of deducibility of identicals is not particularly surprising. Grounding operators, indeed, are not supposed to be purely logical operators---in fact, not even logical grounding operators are---because, in order to introduce them, the logical information that the premisses of the rule are derivable is not enough.\footnote{Here, by {\it logical information} we mean information exclusively concerning whether a formula is derivable from a certain set of hypotheses; as opposed, for instance, to information concerning the syntactic form of a formula, the particular way a formula is derivable from a set of hypotheses, or the semantical interpretation of the constituents of a formula.} This is an essential feature of these operators, because they do not only concern truth and deducibility but also a good dose of non-logical information. In the case of logical grounding, for instance, the syntactical complexity of formulae is an essential component of the conditions under which a grounding relation holds; and other notions of complexity, or of fundamentality, play a similar and essential role with respect to other notions of grounding. In general, while not all the constraints required to introduce grounding operators can be explicitly expressed in the logical language---and hence encoded in the conclusion of the elimination rules---for different notions of grounding, we can guarantee that such constraints are met by proof-theoretical conditions on the derivations of the premisses of the introduction rules. This feature of the rules for grounding operators directly reflects the hyperintensional nature of grounding relations. Before discussing the generality of this connection between hyperintensionality and non-logicality, let us briefly clarify what we mean by hyperintensionality.
A relation is hyperintensional if its terms cannot, in general, be substituted {\it salva veritate} by logically equivalent ones. In other words, if $A$ is related to $B$ by a hyperintensional relation, the logical equivalence of $A$ and $A'$ is not enough to conclude that $A'$ is also related to $B$ by the same relation. We cannot claim here that hyperintensionality always implies non-logicality, since no general account of hyperintensionality in proof theory exists yet. Nevertheless, there seem to be good grounds to argue that hyperintensionality does indeed imply a failure of the deducibility of identicals criterion, because the non-logical requirements of an operator that make it hyperintensional cannot be expressed by the purely logical information that can be conveyed through the conclusion of a rule. It is in any case indubitable that the particular reasons why the operators under consideration are non-logical are the same reasons that account for their hyperintensionality.
If we attribute to hyperintensionality the failure of grounding operators to meet the logicality criteria, it is natural to wonder whether the grounding operator rules really are unbalanced---as the failure of deducibility of identicals suggests---or simply present a weaker form of balance that makes them well-behaved rules as far as rules for hyperintensional operators are concerned. The property of having balanced sets of introduction and elimination rules, indeed, need not be a prerogative of logical connectives. It is desirable for sentential operators in general to have balanced rules, because having balanced rules simply means having rules that {\it exactly} characterise the behaviour of the operator. It means, that is, that the rules for using the operator---i.e., its elimination rules---enable us to use it exactly as specified by the rules that determine when it is true---i.e., its introduction rules. What distinguishes logical operators from other types of sentential operators is not the balance of their rules in itself, but the fact that this balance holds relative to the {\it kind} of information that we consider legitimate for introducing them and that we expect them to yield when eliminated. In other terms, it is not the balance of its introduction and elimination rules alone that makes an operator logical; it is the fact that we can show them balanced by exclusively considering logical information, that is, information about the derivability of formulae\footnote{As mentioned above, by {\it logical information} we mean the information {\it that} certain formulae are derivable from certain sets of hypotheses. But not, for instance, information concerning {\it the way} certain formulae are derivable from certain sets of hypotheses.}. This particular kind of balance is exactly the one enforced by the {\it immediate schematic conformity} requirement implicit in the strict deducibility of identicals criterion, as already discussed.
In the following sections, we will precisely address the issue concerning the balance of the presented rules for grounding operators by investigating whether they admit detour reductions that allow for normalisation results, and whether they comply with a weaker version of the deducibility of identicals requirement.
\subsection{Detour eliminability} \label{sec:intro-elim}
We will now study whether the rules for $\blacktriangleright$, $\gg$ and $\triangleright$ meet a second condition, which we call {\it detour eliminability}. By {\it detour eliminability} we mean here that the application of the rules that govern the use of the operators does not yield more information than that required to apply the rules determining when sentences constructed by applying the operators are true. We thus have to show that the elimination rules for the grounding operators do not enable us to infer from grounding claims more than what is required to introduce them. In our case, this boils down to proving suitable normalisation results for the calculi containing our rules.
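As a familiar benchmark for this condition, consider the standard detour reductions for conjunction: an introduction immediately followed by an elimination is eliminable, since the eliminated formula was already available as a premiss of the introduction:

```latex
% Standard conjunction detour reductions: /\i immediately followed by /\e.
\[\vcenter{\infer[\wedge e]{A}{\infer[\wedge i]{A\wedge B}{A & B}}} \; \mapsto \; A \qquad\qquad \vcenter{\infer[\wedge e]{B}{\infer[\wedge i]{A\wedge B}{A & B}}} \; \mapsto \; B\]
```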
Even though this condition is often considered as an essential requirement for several criteria of logicality of operators---see, for instance, \cite{pra77, dum91, rea00, ten07}---the existence of a set of rules for an operator that admits normalisation results is not always regarded as a sufficient condition for concluding that the operator is a logical one; see \cite{pog10} for a survey of the main existing accounts of logicality. Moreover, as we argued at the end of the previous section, the fact that a sentential operator is not logical does not imply that its rules must be ill-behaved in general, nor that it is of no interest to understand whether an exact definition of its meaning can be given by inferential means. We will therefore undertake an analysis of the behaviour of our grounding operator rules with respect to detour eliminability.
In order to show that our operators enjoy detour eliminability, we will define detour reductions similar in spirit to those employed in \cite{pra06} and we will show that the presented reductions for $\blacktriangleright$ and $\triangleright$ can be employed to generalise the normalisation result presented in \cite{gen21} for a grounding calculus based on the notion of logical grounding introduced in \cite{pog16}. We believe that the interest of this particular normalisation result is not limited to the notion of grounding on which the calculus presented in \cite{gen21} is based. Indeed, the normalisation strategy employed for proving it is a rather common and general one, and could very well apply to a variety of grounding calculi with similar proof-theoretical features. As far as the reductions for $\gg$ are concerned, on the other hand, we will discuss the problems that they pose with respect to normalisation, both from a technical and conceptual perspective.
The reduction rules for $\blacktriangleright$, $\gg$ and $\triangleright$ are presented in Table \ref{tab:reductions-gro}, Table \ref{tab:reductions-tra}, and Tables \ref{tab:reductions-tree1} and \ref{tab:reductions-tree2}, respectively.
We first show that, if we extend the calculus presented in \cite{gen21} by our rules for $\blacktriangleright$ and $\triangleright$, then the normalisation result for it generalises. Afterwards, we discuss the problems arising with the rules for $\gg$.
\begin{definition} Let us call $\mathrm{G}$ the grounding calculus defined in \cite{gen21} and $\mathrm{G} +$ the calculus defined by extending $\mathrm{G}$ with all our rules for the immediate grounding and grounding tree operators. \end{definition}
\begin{table}[h] \centering \hrule
\[\vcenter{\infer{A_i}{\infer{A_1 , \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1 & \ldots & C_m] }}}} \; \mapsto \; A_i \qquad \vcenter{\infer{C_i}{\infer{A_1, \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1 & \ldots & C_m]}}}} \; \mapsto \; C_i \]
\[\vcenter{\infer{B}{\infer{A_1 , \ldots , A_n [C_1 , \ldots , C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [ C_1 & \ldots & C_m] }}}} \; \mapsto \; \vcenter{\infer={B}{A_1 & \ldots & A_n & [ C_1 & \ldots & C_m] }}
\]
If $\quad \vcenter{ \infer{\bot}{A_1 , \ldots , A_n [ C_1 , \ldots ,C_m] \blacktriangleright B}}\quad$ is an elimination of $\blacktriangleright$, then $\blacktriangleright$ cannot have been introduced since $ \quad \vcenter{\infer={B}{A_1 & \ldots & A_n & [ C_1 & \ldots & C_m] }} \quad $ is not a rule application
\hrule
\caption{Detour Reductions for $\blacktriangleright$}\label{tab:reductions-gro} \end{table}
\begin{table}[h] \centering
\hrule
\[
\vcenter{\infer{\gamma }{\infer{\Gamma _1 , \Gamma , \Gamma _2 [ \Delta _1 , \Delta ]\gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 , A , \Gamma _2 [ \Delta _1] \gg B}}} \; \mapsto \; \vcenter{\infer{\gamma }{ \Gamma _1 , A , \Gamma _2 [ \Delta _1] \gg B}}\]where $\gamma \in \Gamma _1 , A , \Gamma _2 , \Delta _1, B$
\[
\vcenter{\infer{\gamma }{\infer{\Gamma _1 , \Gamma , \Gamma _2 [ \Delta , \Delta _1 ]\gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 , A , \Gamma _2 [ \Delta _1] \gg B}}} \; \mapsto \; \vcenter{\infer{\gamma }{ \Gamma [ \Delta] \gg A}}\]where $\gamma \in \Gamma , \Delta , A$
\[
\vcenter{\infer{\gamma }{\infer{\Gamma _1 [ \Delta _1 , \Gamma , \Delta , \Delta _2]\gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 [ \Delta _1 , A, \Delta _2] \gg B}}} \; \mapsto \; \vcenter{\infer{\gamma }{ \Gamma _1 [ \Delta _1 , A, \Delta _2] \gg B}}\]where $\gamma \in \Gamma _1, \Delta _1 , A, \Delta _2 , B$
\[
\vcenter{\infer{\gamma }{\infer{\Gamma _1 [ \Delta _1 , \Gamma , \Delta , \Delta _2]\gg B }{\Gamma [ \Delta] \gg A &&& \Gamma _1 [ \Delta _1 , A, \Delta _2] \gg B}}} \; \mapsto \; \vcenter{\infer{\gamma }{ \Gamma [ \Delta] \gg A}}\]where $\gamma \in \Gamma , \Delta , A$
\[
\vcenter{\infer{\gamma}{\infer{\Gamma [ \Delta ]\gg A }{\Gamma [ \Delta] \blacktriangleright A} }} \; \mapsto \; \vcenter{\infer{\gamma }{ \Gamma [ \Delta] \blacktriangleright A}}\]where $\gamma \in \Gamma , \Delta , A $
\hrule
\caption{Detour Reductions for $\gg$}\label{tab:reductions-tra} \end{table}
\begin{table}[h] \centering \hrule
\[
\vcenter{\infer{\Gamma [\Delta ] \blacktriangleright A_i }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma ] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; \Gamma [\Delta ] \blacktriangleright A_i \]
\[\vcenter{\infer{\Gamma [\Delta ] \blacktriangleright C_i }{ \infer{\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B}{ \Gamma [\Delta ] \blacktriangleright C_i &&& \Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }}} \; \mapsto \; \Gamma [\Delta ] \blacktriangleright C_i\]
\[\vcenter{\infer{A_1 , \ldots , A_i , \ldots , A_n [ \Sigma ] \blacktriangleright B }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma ] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; A_1 , \ldots , A_i , \ldots , A_n [ \Sigma ] \blacktriangleright B \]
\[\vcenter{\infer{\Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }{ \infer{\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B}{ \Gamma [\Delta ] \blacktriangleright C_i &&& \Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }}} \; \mapsto \; \Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B\]
\hrule
\caption{Detour Reductions for $\triangleright$, part 1}\label{tab:reductions-tree1} \end{table}
\begin{table}[h] \centering
\hrule
{\small
\[
\vcenter{\infer{\Xi [\Theta ] \blacktriangleright D }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma ] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; \vcenter{\infer{\Xi [\Theta ] \blacktriangleright D}{\Gamma [\Delta ] \blacktriangleright A_i}} \quad \text { where } (\Xi [\Theta ] \blacktriangleright D ) \in \Gamma \cup \Delta\]
\[
\vcenter{\infer{\Xi [\Theta ] \blacktriangleright D }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma ] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; \vcenter{\infer{\Xi [\Theta ] \blacktriangleright D}{A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}} \]where $ (\Xi [\Theta ] \blacktriangleright D ) \in \{A_1 , \ldots , A_n\} \cup \Sigma$
\[\vcenter{\infer{\Xi [\Theta ] \blacktriangleright D }{ \infer{\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B}{ \Gamma [\Delta ] \blacktriangleright C_i &&& \Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }}} \; \mapsto \; \vcenter{\infer{\Xi [\Theta ] \blacktriangleright D}{\Gamma [\Delta ] \blacktriangleright C_i}} \quad \text { where } (\Xi [\Theta ] \blacktriangleright D ) \in \Gamma \cup \Delta\]
\[\vcenter{\infer{\Xi [\Theta ] \blacktriangleright D }{ \infer{\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B}{ \Gamma [\Delta ] \blacktriangleright C_i &&& \Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }}} \; \mapsto \; \vcenter{\infer{\Xi [\Theta ] \blacktriangleright D}{\Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B}} \]where $ (\Xi [\Theta ] \blacktriangleright D ) \in \Sigma\cup \{C_1 , \ldots , C_n\}$}
\hrule
\caption{Detour Reductions for $\triangleright$, part 2}\label{tab:reductions-tree2} \end{table}
We recall the definition of reduction of a derivation and some related terminology.
\begin{definition}[Reductions, Redexes and Critical Rules] For any four derivations $s, s', d$ and $d'$, if $s\mapsto s'$, $d$ contains $s$ as a subderivation, and $d'$ can be obtained by replacing $s$ with $s'$ in $d$, then the relation $d\mapsto d'$ holds and we say that $d$ reduces to $d'$.
We denote by $\mapsto ^* $ the reflexive and transitive closure of $\mapsto$.
As usual, if the bottommost rule of a derivation---or subderivation---$d$ and one of the rules applied immediately above it form a configuration shown to the left of $\mapsto$ in Tables \ref{tab:reductions-gro}, \ref{tab:reductions-tra}, \ref{tab:reductions-tree1} and \ref{tab:reductions-tree2}, then we say that $d$ is \emph{a redex}. We call the last two rule applications of a redex the \emph{critical rules} of the redex. \end{definition}
We provide some simple and usual definitions that will be used in the normalisation proof. \begin{definition}[Logical Complexity] The logical complexity of a formula is defined as usual by counting the number of symbols in the formula. \end{definition}
\begin{definition}[Redex Complexity] The complexity of a redex $r$ is defined as the logical complexity of the formula introduced by the uppermost critical rule of $r$. \end{definition}
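For instance---taking the first reduction of Table \ref{tab:reductions-gro}---the uppermost critical rule of the redex below is the $\blacktriangleright$ introduction, so the complexity of the redex is the logical complexity of the introduced formula $A_1 , \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B$:

```latex
% The uppermost critical rule introduces the grounding claim; by definition,
% the logical complexity of that claim is the complexity of this redex.
\[\infer{A_i}{\infer{A_1 , \ldots , A_n [ C_1 , \ldots , C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1 & \ldots & C_m] }}}\]
```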
\begin{definition}[Normal Form] We say that a $\mathrm{G} +$ derivation $d$ is normal, or in normal form, if there is no derivation $d'$ such that $d \mapsto d'$ holds. \end{definition} Clearly, being normal and not containing any redex are equivalent properties.
The normalisation will follow the ideas employed in~\cite{ts96}. The main intuition behind the proof is that, generally, by applying a reduction rule, we eliminate a redex of a certain complexity and possibly generate new redexes of smaller complexity. For most reductions this is all that matters. If we reduce a suitable redex in the derivation, we either have a decrease of the maximal complexity of redexes, or a decrease of the number of redexes with maximal complexity. If all our reduction rules allowed for such an argument, we could prove the termination of the normalisation procedure by induction on a pair of values corresponding to the maximal complexity of the redexes in the derivation and the number of redexes with maximal complexity occurring in the derivation. Nevertheless, not all reduction rules are this well-behaved: some of them implement permutations between rules, and a permutation does not change the complexity of redexes. Hence, we need a method to keep track of permutations and to account for them in our complexity measure. In order to do so, we adapt the notion of \emph{segment} defined in \cite{ts96}. A segment is a path inside the derivation tree which connects two rule applications and satisfies the following two conditions: first, it connects two rule applications that would form a redex if they occurred one immediately after the other---and the redex must be different from a permutation redex; second, it can be shortened by using permutations in order to eventually obtain the redex formed by the two rule applications.
It is easy to see, by inspection of the proof in \cite{gen21}, that all definitions and permutations generalise if we treat the introduction rules for $\blacktriangleright$ and $\triangleright$ as any other introduction rule, and the elimination rules for $\blacktriangleright$ and $\triangleright$ as any other elimination rule. Intuitively, $\blacktriangleright$ and $\triangleright$ elimination rules and redexes behave very similarly to $\wedge$ elimination rules and redexes, and the differences in their introduction rules do not generate any particular problem.
We recall the definitions introduced in \cite{gen21}.
\begin{definition}[Segment (adapted from Def.~6.1.1.~in~\cite{ts96}) and Segment Complexity] For any $\mathrm{G} +$ derivation $d$, a segment of length $n$ in $d$ is a sequence $A_1, \ldots , A_n$ of formula occurrences in $d$ such that the following holds. \begin{enumerate} \item For $1 \leq i < n$, one of the following holds: \begin{itemize} \item $A_i$ is a minor premiss of an application of $\vee $ elimination in $d$ with conclusion $A_{i+1}=A_i$, \item $A_i$ is the premiss of a non-logical rule such that its conclusion $A_{i+1}$ has the same logical complexity as $A_i$ (for the calculus in \cite{gen21}, these rules are all $\varepsilon$ rules and those converse rules that do not induce a change of logical complexity from premiss to conclusion). \end{itemize}
\item $A_n$ is neither the minor premiss of a $\vee $ elimination, nor the premiss of a non-logical rule the conclusion of which has the same logical complexity as $A_n$ (for the calculus in \cite{gen21}, these rules are all $\varepsilon$ rules and those converse rules that do not induce a change of logical complexity from premiss to conclusion).
\item $A_1$ is neither the conclusion of a $\vee $ elimination, nor the conclusion of a non-logical rule the premiss of which has the same logical complexity as $A_1$ (for the calculus in \cite{gen21}, these rules are all $\varepsilon$ rules and those converse rules that do not induce a change of logical complexity from premiss to conclusion). \end{enumerate} For any segment, if \begin{itemize} \item $n>1$ or \item $A_1$ is the conclusion of an introduction rule and $A_n$ is the major premiss of an elimination rule \end{itemize}then the complexity of the segment is the logical complexity of $A_1$. Otherwise, the complexity of the segment is $0$. \end{definition} Notice that all formulae in a segment have the same logical complexity. This is obvious for the case of $\vee$ eliminations and it holds by assumption for the other cases.
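As a schematic illustration involving the standard $\vee$ elimination rule: in the configuration below, each displayed minor premiss occurrence of $C$ followed by the conclusion occurrence of $C$ satisfies clause 1, and thus yields a segment of length $2$ whenever the boundary clauses 2 and 3 are also met:

```latex
% Each minor premiss C of the \/ elimination together with the conclusion C
% below it forms (part of) a segment; all its formulae have C's complexity.
\[\infer[\vee e]{C}{A\vee B & \deduce{C}{\vdots} & \deduce{C}{\vdots}}\]
```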
We introduce some terminology to describe the relative position of two segments in a derivation and prove a simple fact about the arrangement of segments in a derivation which will be used in the normalisation proof. \begin{definition}[Terminology for Segments] If a segment contains only one formula occurrence, by \emph{reducing the segment} we mean reducing---if possible---the non-permutation redex the critical rules of which are applied immediately above and immediately below the formula; otherwise, we mean reducing the permutation redex of the $\vee$ elimination which has the bottommost formula of the segment as conclusion.
A segment $r$ \emph{occurs above} a segment $s$ if the bottommost formula of $r$ occurs above the bottommost formula of $s$.
A segment $r$ \emph{occurs to the right} of a segment $s$ if there are derivations $\rho$ and $\sigma $ such that some formula of $r$ occurs in $\rho$, some formula of $s$ occurs in $\sigma$, the root of $\rho$ and the root of $\sigma $ are premisses of the same rule application, and the root of $\rho$ occurs to the right of the root of $\sigma $ with respect to such rule application. \end{definition}
\begin{proposition}\label{prop:segment-selection} For any two distinct segments in a derivation $d$, if neither is to the right of the other, then one is above the other. \end{proposition}
\begin{proof} See \cite{gen21}.
\end{proof}
We prove normalisation for the calculus presented in \cite{gen21} extended by the rules for $\blacktriangleright$ and $\triangleright$. \begin{theorem} \label{thm:norm} For any derivation $d$, there is a derivation $d'$ such that $d$ can be reduced to $d'$ in a finite number of reductions and $d'$ is normal. \end{theorem} \begin{proof} We employ the following reduction strategy. We reduce any rightmost segment of maximal complexity that does not occur below any other segment of maximal complexity. By Proposition~\ref{prop:segment-selection}, we can always find such a segment.
We prove that this reduction strategy always produces a series of reductions which is of finite length and which results in a normal form.
We define the complexity of a derivation $d$ to be the triple of natural numbers $(m,n,u)$, where $m$ is the complexity of the segments in $d$ with maximal segment complexity, $n$ is the sum of the lengths of the segments in $d$ with segment complexity $m$, and $u$ is the number of rule applications in $d$. We then fix a generic derivation $d$ and reason by induction on the lexicographic order on triples of natural numbers.
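To fix intuitions about this lexicographic order---the numbers below are, of course, purely illustrative---a reduction that lowers the maximal segment complexity decreases the first component regardless of the other two, while a permutation leaves the first component fixed but shortens a maximal segment, so the second component decreases even if the number of rule applications grows:

```latex
% Illustrative lexicographic comparisons on complexity triples (m, n, u).
\[(3, 1, 20) >_{\mathrm{lex}} (2, 6, 35) \qquad\qquad (3, 2, 21) >_{\mathrm{lex}} (3, 1, 23)\]
```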
If the complexity of $d$ is $(0,0, u)$ then $d$ is normal and the claim holds.
Suppose now that the complexity of $d$ is $(m,n,u)$, that $m+n>0$, and that for each derivation simpler than $d$ the claim holds. Since $m+n>0$, there must be at least one maximal segment in $d$ that does not occur below any other maximal segment. We reduce one such segment and reason by cases on the shape of the reduction. We only consider some exemplary cases; the other cases can be found in \cite{gen21}. \begin{itemize} \item \[\vcenter{\infer{A_i}{\infer{A_1 , \ldots , A_n [ C_1,\ldots ,C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1&\ldots &C_m]}}}} \; \mapsto \; A_i\] By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain any of the displayed occurrences of $A_i$ and $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$, $(ii)$~ the segment contains some of the displayed occurrences of $A_i$, $(iii)$~ the segment contains the displayed occurrence of $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. If $(ii)$~ the reduction might join the segment with another one for which $(ii)$~ holds, but the resulting segment is still less complex than the reduced one since $A_i$ is less complex than $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. We just eliminated the only segment for which $(iii)$~ holds.
\item \[ \vcenter{\infer{C_i}{\infer{A_1, \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1&\ldots &C_m]}}}} \; \mapsto \; C_i \] By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain any of the displayed occurrences of $C_i$ and $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$, $(ii)$~ the segment contains some of the displayed occurrences of $C_i$, $(iii)$~ the segment contains the displayed occurrence of $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. If $(ii)$~ the reduction might join the segment with another one for which $(ii)$~ holds, but the resulting segment is still less complex than the reduced one since $C_i$ is less complex than $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. We just eliminated the only segment for which $(iii)$~ holds.
\item \[\vcenter{\infer{B}{\infer{A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B}{\infer={B}{A_1 & \ldots & A_n & [C_1&\ldots &C_m] }}}} \; \mapsto \; \vcenter{\infer={B}{A_1 & \ldots & A_n & [C_1&\ldots &C_m] }}
\]By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain any of the displayed occurrences of $B$ and $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$, $(ii)$~ the segment contains some of the displayed occurrences of $B$, $(iii)$~ the segment contains the displayed occurrence of $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. If $(ii)$~ the reduction might join the segment with another one for which $(ii)$~ holds, but the resulting segment is still less complex than the reduced one since $B$ is less complex than $A_1 , \ldots , A_n [C_1,\ldots ,C_m] \blacktriangleright B$. We just eliminated the only segment for which $(iii)$~ holds.
\item {\small \[ \vcenter{\infer{\Gamma [\Delta ] \blacktriangleright A_i }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; \Gamma [\Delta ] \blacktriangleright A_i \] }By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain any of the displayed occurrences of $\Gamma [\Delta ] \blacktriangleright A_i$ and the displayed occurrence of $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma] \blacktriangleright B$, $(ii)$~ the segment contains some of the displayed occurrences of $\Gamma [\Delta ] \blacktriangleright A_i$, $(iii)$~ the segment contains the displayed occurrence of $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [\Sigma ] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. If $(ii)$~ the reduction might join the segment with another one for which $(ii)$~ holds, but the resulting segment is still less complex than the reduced one since $\Gamma [\Delta ] \blacktriangleright A_i$ is less complex than $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [\Sigma] \blacktriangleright B$. We just eliminated the only segment for which $(iii)$~ holds.
\item {\small \[\vcenter{\infer{\Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }{ \infer{\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B}{ \Gamma [\Delta ] \blacktriangleright C_i &&& \Sigma [ C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B }}} \; \mapsto \; \Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B\]}By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain any of the displayed occurrences of $\Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B$ and the displayed occurrence of $\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B$, $(ii)$~ the segment contains some of the displayed occurrences of $\Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B$, $(iii)$~ the segment contains the displayed occurrence of $\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. If $(ii)$~ the reduction might join the segment with another one for which $(ii)$~ holds, but the resulting segment is still less complex than the reduced one since $\Sigma [C_1 , \ldots , C_i , \ldots , C_n] \blacktriangleright B$ is less complex than $\Sigma [ C_1 , \ldots , (\Gamma [\Delta ] \blacktriangleright C_i ) , \ldots , C_n] \blacktriangleright B$. We just eliminated the only segment for which $(iii)$~ holds.
\item {\small \[
\vcenter{\infer{\Xi [\Theta ] \blacktriangleright D }{ \infer{A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma] \blacktriangleright B}{\Gamma [\Delta ] \blacktriangleright A_i &&& A_1 , \ldots , A_i , \ldots , A_n [\Sigma ] \blacktriangleright B}}} \; \mapsto \; \vcenter{\infer{\Xi [\Theta ] \blacktriangleright D}{\Gamma [\Delta ] \blacktriangleright A_i}} \]}where $ (\Xi [\Theta ] \blacktriangleright D ) \in \Gamma \cup \Delta$. By the reduction we eliminate one maximal segment. We show now that no segment of maximal complexity has been duplicated, the length of no segment of maximal complexity has been increased, and no segment has become as complex as the reduced one; and hence that the complexity of $d'$ is $(m',n',u')<(m,n,u)$ since we either reduced the maximal complexity of the segments or the sum of the lengths of the segments with maximal complexity. For each segment in $d$ exactly one of the following holds: $(i)$~ the segment does not contain the displayed occurrence of $\Gamma [\Delta ] \blacktriangleright A_i$, the displayed occurrence of $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [ \Sigma] \blacktriangleright B$ and the displayed occurrence of $\Xi [\Theta ] \blacktriangleright D $; $(ii)$~ the segment contains the displayed occurrence of $\Gamma [\Delta ] \blacktriangleright A_i$; $(iii)$~ the segment contains the displayed occurrence of $\Xi [\Theta ] \blacktriangleright D$; $(iv)$~ the segment contains the displayed occurrence of $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [\Sigma ] \blacktriangleright B$. If $(i)$~ the segment has neither been modified nor been duplicated by the reduction. 
If $(ii)$~ or $(iii)$~ the reduction might join the segment with another one for which $(ii)$~ or $(iii)$~ holds, but the resulting segments are less complex than the reduced one since both $\Gamma [\Delta ] \blacktriangleright A_i$ and $\Xi [\Theta ] \blacktriangleright D$---which is a subformula of $\Gamma [\Delta ] \blacktriangleright A_i$---are less complex than $A_1 , \ldots ,( \Gamma [\Delta ] \blacktriangleright A_i ) , \ldots , A_n [\Sigma] \blacktriangleright B$. We just eliminated the only segment for which $(iv)$~ holds.\end{itemize} \end{proof}
\subsubsection{Mediate grounding and global detour eliminability}
Now that we have shown that the detour reductions for $\blacktriangleright$ and $\triangleright$ enable us to generalise the normalisation result in \cite{gen21},
we consider the reductions of the detours generated by the rules for $\gg$. Since each individual detour generated by rules for $\blacktriangleright,\triangleright$ and $\gg$ can be suitably reduced---this is obvious if we consider the reduction rules in Tables \ref{tab:reductions-gro}, \ref{tab:reductions-tra}, \ref{tab:reductions-tree1} and \ref{tab:reductions-tree2}---we can state that all three operators enjoy a local form of detour eliminability: the information that we can obtain from the elimination of a grounding operator $\circ$ occurring in a grounding claim $\Gamma [\Delta ]\circ A$ does not exceed the information required to derive the grounding claim $\Gamma [\Delta ]\circ A$ by introducing the operator $\circ$. But while the detour reductions for $\blacktriangleright$ and $\triangleright$ reduce the logical complexity of the formulae occurring in the considered derivation in a rather usual way---and it is hence possible to show global detour eliminability results for $\blacktriangleright$ and $\triangleright$ by using standard techniques---the detour reductions for $\gg$ do not, in general, only generate detours of smaller logical complexity. This is due to the very peculiar fact that the introduction rules for the $\gg $ operator---as opposed to the introduction rules for $\blacktriangleright$ and $\triangleright$---might not have premisses which are simpler than their conclusion. This can happen with derivable grounding claims if our underlying grounding calculus captures a non-logical notion of grounding. For instance, if $p\gg q $ and $q\gg r$ are not contradictory grounding claims according to our notion of grounding, then\[\infer{p\gg r}{p\gg q &q\gg r}\]is a perfectly legitimate derivation by $\gg$ introduction of a true mediate grounding claim under the assumption that $p\gg q $ and $q\gg r$ are true grounding claims. And here nothing tells us, from a proof-theoretical perspective, that $p\gg r$ is more complex than $p\gg q $ and $q\gg r$. 
Similar problems, nevertheless, can occur even if our underlying grounding calculus captures a notion of logical grounding. Indeed, we cannot exclude in general the possibility that contradictory grounding claims occur in derivations---otherwise it would be impossible, for example, to show that certain grounding claims are false or contradictory. Hence, for instance, the following configuration can certainly occur in a logical grounding derivation:\[\infer{p, t \gg u}{p \gg q\vee r\vee s & q\vee r\vee s ,t \gg u}\]As mentioned above, the decrease of logical complexity possibly induced by $\gg $ introduction rules implies, in turn, that some detour reductions generate detours of greater logical complexity. As we can see here: {\small \[\vcenter{\infer{p}{\infer{p, t \gg u}{\infer{p \gg q\vee r\vee s}{p\gg v\wedge z&v\wedge z\gg q\vee r\vee s } & q\vee r\vee s ,t \gg u}}}\quad \mapsto \quad\vcenter{\infer{p}{\infer{p \gg q\vee r\vee s}{p\gg v\wedge z&v\wedge z\gg q\vee r\vee s }}}\]}where we eliminate a detour the complexity of which is the complexity of the formula $p,t\gg u$ and generate a detour the complexity of which is the complexity of $p\gg q\vee r\vee s$. Clearly, the decrease in logical complexity that might be induced by a $\gg$ introduction is closely related to the fact that combining two grounding claims by transitivity often involves a loss of information.
From a technical perspective, in conclusion, a general termination argument for the normalisation of calculi containing $\gg$ rules based on their schematic form seems very problematic. This means, in turn, that there is no clear way to show, through a normalisation termination argument, that $\gg$ enjoys global detour eliminability with respect to a generic grounding calculus.
Different methods to show global detour eliminability results---by a termination proof for the normalisation procedure---for $\gg$ seem, nevertheless, possible if we consider as legitimate the option of employing the intended meaning of a mediate grounding statement and, in particular, by exploiting the correspondence between each statement of this kind and a grounding tree. Indeed, even though the conclusion of a $\gg $ introduction rule is not necessarily more complex than its premisses---and this is the reason why the reduction of detours does not decrease the complexity of a derivation in a standard sense---the conclusion of an application of the $\gg$ introduction rule always corresponds to a larger grounding tree than the premisses. This is the case because connecting two grounding claims by transitivity exactly corresponds to replacing a leaf of a grounding tree by another grounding tree. It seems therefore possible to use a complexity measure based on this correspondence to prove that also derivations containing $\gg$ detours can be normalised. It is not clear though whether such a complexity measure interacts well with the logical complexity used to show that the other sequences of detour reductions terminate. Notice moreover that such a complexity measure would not be a syntactic one, but a semantical one. Indeed, if a premiss of the $\gg$ introduction rule is an incorrect grounding statement that violates the complexity constraints of grounding---such as, for instance, $p\wedge q \wedge r \wedge s \gg p$ with respect to most logical grounding notions---then the complexity measure based on the corresponding grounding tree is undefined, since there is no corresponding grounding tree. 
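To sketch how such a tree-based measure would behave---assuming, purely for illustration, that $p\gg q$ and $q\gg r$ are correct immediate grounding claims---consider the introduction application\[\infer{p\gg r}{p\gg q & q\gg r}\]On the side of grounding trees, this application corresponds to replacing the leaf $q$ of the tree associated with $q\gg r$ by the tree associated with $p\gg q$, yielding the grounding tree $(p)\triangleright q \blacktriangleright r$. While $p\gg r$ need not be syntactically more complex than either premiss, the grounding tree associated with the conclusion is strictly larger than the trees associated with the premisses, and it is precisely this increase that a termination argument based on the correspondence could exploit.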
A further requirement for the successful application of this technique in a termination proof of the reductions of $\gg$ detours is the possibility of determining, for each mediate grounding claim, the corresponding grounding tree by simple inspection of the mediate grounding claim itself. And while this is feasible for most logical grounding notions---it is indeed easy to reconstruct the grounding tree corresponding to any legitimate mediate logical grounding statement---for more complex notions of grounding, the inspection of a mediate grounding statement might not be enough to determine the tree structure it refers to---in particular if the transitive closure of the underlying immediate grounding relation is not decidable. Definitive technical results in this direction, though, must obviously be left to investigations concerning individual calculi that capture specific notions of grounding.
A further discussion of the connections between decidability, or undecidability, of grounding relations and the behaviour of the relative notion of mediate grounding is postponed to Section \ref{sec:elim-intro} since these connections will play a key role also in that section.
\subsection{Weak deducibility of identicals} \label{sec:elim-intro}
At the beginning of Section \ref{sec:balance}, we have shown that deducibility of identicals does not hold for our grounding operators and we have put this failure in relation with the hyperintensionality of grounding and with the fact that grounding operators are not, strictly speaking, logical operators. Afterwards, we have shown that, even though our rules for grounding operators would not pass a logicality test, they still suitably define both the immediate grounding operator and the grounding tree operator insofar as they enjoy detour eliminability. One might wonder then whether our rules for grounding operators enjoy some kind of complete balance even though they cannot be taken to define logical operators. We therefore endeavour to define a weaker version of the deducibility of identicals criterion, one which determines in what sense our introduction and elimination rules for grounding operators are balanced, and which might prove of use with respect to the rules for hyperintensional operators in general.
A further reason to define a subtler balance criterion for grounding operators exists, and it is related to the fact that deducibility of identicals fails in different ways for the three grounding operators. The failure of the deducibility of identicals condition for the grounding tree operator, indeed, essentially depends on the failure of this criterion for immediate grounding. The failure of the condition for the mediate grounding operator, on the other hand, is complete and independent of the proof-theoretical features of the immediate grounding operator. This suggests that a subtler criterion might enable us to better understand where the problem lies and possibly to distinguish between a partial failure of the deducibility of identicals criterion---that relative to immediate grounding and grounding trees---and a more severe failure---that relative to mediate grounding. This might, moreover, further illuminate the reasons for the differences in the behaviour of the three operators that we have encountered in studying detour eliminability.
We define, hence, a {\it weak deducibility of identicals} criterion by taking into account that the introduction rules of our immediate grounding operator essentially refer to other rules---that is, the grounding rules of the chosen underlying calculus. This means, in some sense, that in defining the criterion we attribute the due importance to the fact that our operators are not, strictly speaking, logical operators, because they are hyperintensional. Indeed, the hyperintensionality of grounding is essentially related to the fact that valid grounding claims depend on a non-logical hierarchy---for instance, the hierarchy induced by syntactic complexity for logical grounding, or metaphysical fundamentality for metaphysical grounding. This hierarchy can be internalised in a grounding calculus by restricting the form of its grounding rule schemata. The weak deducibility of identicals criterion that we will introduce, then, could be seen as a relativisation of the strict version of this criterion to the non-logical hierarchy on which grounding is based via the relativisation of the criterion to the set of grounding rules contained in the considered grounding calculus.
The criterion that we will present tells us that the information which we can obtain by eliminating an occurrence of an operator contains all the information required to reintroduce the occurrence of the operator. This is exactly what the strict deducibility of identicals criterion tells us about an operator; the only, yet essential, difference between the two is that the weak version of the criterion enables us to take into consideration and use---in order to show that this balance between introduction and elimination rules for the considered operator holds---the rules of our background calculus, to which the introduction rules of the operator refer. This reflects the idea that the introduction rules under study essentially rely---as they are supposed to do---on non-logical information encoded in the rules of the underlying grounding calculus itself. This information is conveyed, in particular, by the specific rules that have been applied to derive the premisses of our introduction rule. This information is, therefore, not only about {\it the fact that} the premisses have been derived, but also about {\it the way} they have been derived. Since the additional information that we consider in weakening the criterion is non-logical, the criterion does not fare well as a logicality criterion. Nevertheless, the weak criterion still constitutes an indication that a balance between introduction and elimination rules for the operator exists, even though the operator is not, strictly speaking, a logical one and hence the balance is not based on {\it immediate schematic conformity} relations between these rules.
\noindent {\bf Weak deducibility of identicals} $\; $ An operator $\circ (\;\; , \ldots ,\;\; ) $ satisfies {\it weak deducibility of identicals} with respect to a calculus $\kappa$ if, and only if, for any list of formulae $A_1 , \ldots ,A_n $ such that $\circ ( A_1 , \ldots , A_n)$ can be the conclusion of a $\circ $ introduction application, we can construct a derivation from $\circ ( A_1 , \ldots , A_n)$ to $\circ (A_1 , \ldots , A_n)$ by applying an elimination rule for $\circ$ at least once, and by exclusively employing rules for $\circ$ or rules which are explicitly mentioned in the applicability conditions on the $\circ$ introduction rule.
The criterion, in other words, requires that, if the outermost occurrence of $\circ$ in $\circ ( A_1 , \ldots , A_n)$ can be introduced at all, then the logical information provided by eliminating it and the non-logical information contained in the definition of the $\circ$ introduction rules is sufficient to reintroduce this occurrence of $\circ$. The requirement that there must exist a $\circ $ introduction application with conclusion $\circ ( A_1 , \ldots , A_n)$ is needed here because the hyperintensional nature of grounding operators requires us to impose particularly strict conditions on their introduction rules, and thus there can exist a formula $A$ with a grounding operator as outermost connective such that no introduction rule application can have $A$ as conclusion.\footnote{If we compare grounding operators with extensional operators---such as conjunction and disjunction in both classical and intuitionistic logic---or intensional operators---such as intuitionistic implication and the necessity operator of most modal logics---we will see that it is possible to define introduction rules for the extensional connectives that require no conditions on the derivations of their premisses, and introduction rules for the intensional ones that only require conditions on the hypotheses employed to derive their premisses. Most introduction rules for extensional operators, then, can always be applied, regardless of how their premisses are derived. And while it might be the case that an introduction rule for an intensional operator $\circ$ cannot be applied to formulae derived from certain hypotheses, it is usually the case that, for any formula $\circ (A_1 , \ldots , A_n)$, it is possible to find a set of hypotheses that enable us to derive $\circ (A_1 , \ldots , A_n)$ by $\circ $ introduction. If we consider the introduction rules for grounding operators, this is not always possible. 
Indeed, due to the conditions on these rules, certain grounding claims will never be the conclusion of a legitimate introduction rule application.} This is a byproduct of the fact that, in order to define correct rules for the grounding operators, we also need to impose conditions on {\it the way} the premisses are derived, and not only on their derivability.
We show now that our immediate grounding operator $\blacktriangleright$ and our operator for grounding trees meet the weak deducibility of identicals criterion.
\begin{proposition}\label{prop:wdoi-gro} The immediate grounding operator $\blacktriangleright$ enjoys weak deducibility of identicals. \end{proposition} \begin{proof} Consider any formula of the form $A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}$ where the displayed occurrence of $\blacktriangleright$ does not hold any occurrence of $\triangleright$ and suppose that there exists a $\blacktriangleright$ introduction rule application with $A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}$ as conclusion. We first argue that, if this is the case, then \[\infer=[r]{A_{m+1}}{A_1 & \ldots & A_n [A_{n+1} & \ldots & A_m] }\] must be a legitimate grounding rule application. Indeed, if $A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}$ can be the conclusion of a $\blacktriangleright$ introduction rule application, then $r$ must have been applied immediately above this rule application.
Hence, a derivation from $A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}$ to $A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}$ which contains only applications of $\blacktriangleright$ rules and applications of rules explicitly mentioned in the applicability conditions of the $\blacktriangleright $ introduction rule, and which contains at least one application of the elimination rule for $\blacktriangleright$ is {\footnotesize\[\infer{A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}}{\infer={A_{m+1}}{\infer{A_1}{A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}} & \ldots & \infer{[A_m]}{A_1 , \ldots , A_n [A_{n+1} , \ldots , A_m] \blacktriangleright A_{m+1}}}} \]} \end{proof}
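For concreteness---assuming, purely as a hypothetical example, an underlying grounding calculus in which\[\infer=[r]{p\wedge q}{p & q}\]is a legitimate grounding rule application---the derivation schema above instantiates as follows for the immediate grounding claim $p, q \blacktriangleright p\wedge q$:\[\infer{p, q \blacktriangleright p\wedge q}{\infer={p\wedge q}{\infer{p}{p, q \blacktriangleright p\wedge q} & \infer{q}{p, q \blacktriangleright p\wedge q}}}\]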
\begin{proposition}\label{prop:wdoi-tree} The grounding tree operator enjoys weak deducibility of identicals. \end{proposition} \begin{proof} Consider any formula of the form $\Gamma [ \Xi] \blacktriangleright B$ such that the displayed occurrence of $\blacktriangleright$ holds at least one occurrence of $\triangleright$ which is the outermost operator of a subformula of the form $(\Delta [ \Theta] )\triangleright A $ which either occurs in $\Gamma$ or in $\Xi$.
If $(\Delta [ \Theta] )\triangleright A $ occurs in $\Gamma$, then $\Gamma [ \Xi] \blacktriangleright B = \Gamma ' , (\Delta [ \Theta] )\triangleright A , \Gamma '' [ \Xi] \blacktriangleright B $ and the derivation of $\Gamma [ \Xi] \blacktriangleright B$ from $\Gamma [ \Xi] \blacktriangleright B$ which only uses rules for $\triangleright$ is the following:\[\infer{\Gamma ' , (\Delta [ \Theta] )\triangleright A , \Gamma '' [ \Xi] \blacktriangleright B}{\infer{\Delta [ \Theta] \blacktriangleright A}{\Gamma ' , (\Delta [ \Theta] )\triangleright A, \Gamma '' [ \Xi] \blacktriangleright B}&\infer{\Gamma ' , A, \Gamma '' [ \Xi] \blacktriangleright B}{\Gamma ' , (\Delta [ \Theta] )\triangleright A , \Gamma '' [ \Xi] \blacktriangleright B}}\]
If $(\Delta [ \Theta] )\triangleright A $ occurs in $\Xi$, then $\Gamma [ \Xi] \blacktriangleright B = \Gamma [ \Xi ', (\Delta [ \Theta] )\triangleright A , \Xi ''] \blacktriangleright B $ and the derivation of $\Gamma [ \Xi] \blacktriangleright B$ from $\Gamma [ \Xi] \blacktriangleright B$ which only uses rules for $\triangleright$ is the following:\[\infer{\Gamma [ \Xi ', (\Delta [ \Theta] )\triangleright A , \Xi ''] \blacktriangleright B}{\infer{\Delta [ \Theta] \blacktriangleright A}{\Gamma [ \Xi ', (\Delta [ \Theta] )\triangleright A , \Xi ''] \blacktriangleright B}&\infer{\Gamma [ \Xi ' , A, \Xi ''] \blacktriangleright B}{\Gamma [ \Xi ', (\Delta [ \Theta] )\triangleright A , \Xi ''] \blacktriangleright B}}\]
\end{proof} Three remarks are in order with respect to the proof of Proposition \ref{prop:wdoi-tree}. First, we must not be fooled by the fact that the derivation used in the proof only contains rules for $\triangleright$: this proof does not show that the strict version of deducibility of identicals holds for the grounding tree operator and hence that $\triangleright$ is a logical operator. Indeed, in the base case, a grounding tree is an immediate grounding claim. Hence, strictly speaking, in order to show that the grounding tree operator enjoys weak deducibility of identicals, we need both the proof of Proposition \ref{prop:wdoi-tree} and that of Proposition \ref{prop:wdoi-gro}.
The second remark concerns the actual constructibility of the derivation used to prove Proposition \ref{prop:wdoi-tree}. This derivation, indeed, can be constructed for any element of $\Gamma $ and $\Xi$ with $\triangleright$ as outermost operator. This is important since it tells us that no choice is required and that we can start from any outermost occurrence of $\triangleright $ to eliminate all the relevant occurrences of $\triangleright$ which are held by the considered occurrence of $\blacktriangleright$.
The third remark concerns a possible ambiguity of the expression {\it eliminating a grounding tree}. Indeed, technically, a grounding tree is not represented by one occurrence of an operator but by one occurrence of $\blacktriangleright $ together with all occurrences of $\triangleright $ that this occurrence of $\blacktriangleright$ holds. Coherently, an expression of the form $(\Delta [ \Theta])\triangleright A $ can occur as a subformula of a formula---as, for instance, in $\Gamma , (\Delta [ \Theta] )\triangleright A [ \Xi] \blacktriangleright B$---but is never itself a formula. Hence, when we have a formula of the form $\Gamma , (\Delta [ \Theta] )\triangleright A [ \Xi] \blacktriangleright B$ we must consider neither the occurrence of $\blacktriangleright$ alone nor the occurrence of $\triangleright $ alone as one instance of application of the grounding tree operator. Morally, one instance of application of the grounding tree operator would include both of them, together with all other occurrences of $\triangleright $ consecutively nested inside them. One could, hence, argue that, in order to eliminate one instance of a grounding tree, all occurrences of $\triangleright $ that are held by the outermost occurrence of $\blacktriangleright $ must be eliminated. 
For instance, in order to eliminate the grounding tree $(p, q )\triangleright p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s) $ and to obtain all the immediate grounding claims composing it, we must construct the following three derivations:\[\delta_1 \quad =\quad \vcenter{\infer{p, q \blacktriangleright p\wedge q }{(p, q )\triangleright p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}}\]\[ \delta _2 \quad =\quad \vcenter{\infer{r, \neg s \blacktriangleright r\vee s}{(p, q )\triangleright p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}}\]and\[\delta_3 \quad =\quad \vcenter{\infer{p\wedge q , r\vee s\blacktriangleright (p\wedge q )\wedge(r\vee s)}{\infer{p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}{(p, q )\triangleright p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}}}\] Well, even if we adopt this notion of elimination of a grounding tree operator occurrence, weak deducibility of identicals holds for the grounding tree operator. Indeed, clearly, a decomposition similar to the one shown in the previous example can be conducted for any occurrence of the grounding tree operator, and the derived immediate grounding claims can be used to entirely reintroduce the original grounding tree. For instance, for the formula considered in the previous example, the result is the following: \[\infer{(p, q )\triangleright p\wedge q, (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}{ \deduce {r, \neg s \blacktriangleright r\vee s}{\delta _2} && \infer{(p, q )\triangleright p\wedge q , r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s)}{ \deduce {p, q \blacktriangleright p\wedge q }{\delta _1 } && \deduce{p\wedge q , r\vee s\blacktriangleright (p\wedge q )\wedge(r\vee s)}{\delta _3}}}\]
We formally prove that it is always possible to completely decompose any non-trivial grounding tree by $\triangleright$ elimination rules---for trivial grounding trees, that is, immediate grounding claims, the proof of Proposition \ref{prop:wdoi-gro} is already enough---and recompose it by $\triangleright$ introduction rules. In order to do so, let us first define the formal notion of {\it size} of a grounding tree. Intuitively, the size of a grounding tree $\Gamma [\Delta ]\blacktriangleright A$ is the number of occurrences of $\triangleright$ held by the outermost occurrence of $\blacktriangleright$ in $\Gamma [\Delta ]\blacktriangleright A$ plus $1$.
\begin{definition}[Size $\mid\;\;\mid $ of a grounding tree] The size $ \mid \Gamma [\Delta ]\blacktriangleright A \mid $ of an immediate grounding claim $ \Gamma [\Delta ]\blacktriangleright A $ is $1$. The size $\mid \Gamma [\Delta ]\blacktriangleright A \mid $ of a grounding tree of the form $\Gamma [\Delta ]\blacktriangleright A$ where all elements of $\Gamma $ and $\Delta $ with $\triangleright $ as outermost operator are $(\Gamma _1 [\Delta _1] )\triangleright A_1, \ldots , (\Gamma _n [\Delta _n] )\triangleright A_n $ is $\mid (\Gamma _1 [\Delta _1] \blacktriangleright A_1)\mid + \ldots +\mid (\Gamma _n [\Delta _n] \blacktriangleright A_n) \mid +1 $.\end{definition}
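For instance, applying this definition to the grounding tree of the example above, we obtain\[\mid (p, q )\triangleright p\wedge q , (r, \neg s )\triangleright r\vee s \blacktriangleright (p\wedge q )\wedge(r\vee s) \mid \; = \; \mid p, q \blacktriangleright p\wedge q \mid + \mid r, \neg s \blacktriangleright r\vee s \mid + 1 \; = \; 1 + 1 + 1 \; = \; 3\]since both nested grounding trees are immediate grounding claims and thus have size $1$.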
\begin{proposition} For any grounding tree $ \Gamma [\Delta ]\blacktriangleright A$ of size greater than $1$, it is possible to derive from it all immediate grounding claims that compose it by $\triangleright $ elimination rules, and derive it from these immediate grounding claims by $\triangleright $ introduction rules.\end{proposition} \begin{proof}The proof is by induction on the size of $ \Gamma [\Delta ]\blacktriangleright A$. If $\mid \Gamma [\Delta ]\blacktriangleright A\mid = 1 $, then the statement trivially holds. Suppose now that the statement holds for all grounding trees of size smaller than $n$; we show that it also holds for all grounding trees of size $n$. Consider any grounding tree $ \Gamma [\Delta ]\blacktriangleright A$ of size $n>1$. Since $\mid \Gamma [\Delta ]\blacktriangleright A \mid >1$ there must be at least one subformula $(\Sigma [\Theta ])\triangleright B$ of $ \Gamma [\Delta ]\blacktriangleright A$ such that either $ \Gamma [\Delta ]\blacktriangleright A = \Gamma ' , (\Sigma [\Theta ])\triangleright B , \Gamma '' [\Delta ]\blacktriangleright A $ or $ \Gamma [\Delta ]\blacktriangleright A = \Gamma [\Delta ' , (\Sigma [\Theta ])\triangleright B , \Delta '' ]\blacktriangleright A $.
If $ \Gamma [\Delta ]\blacktriangleright A = \Gamma ' , (\Sigma [\Theta ])\triangleright B , \Gamma '' [\Delta ]\blacktriangleright A $, then, clearly, $\mid \Sigma [\Theta ]\blacktriangleright B \mid < n >\mid \Gamma ' , B , \Gamma '' [\Delta ]\blacktriangleright A\mid $. Hence, by inductive hypothesis, there are derivations $\gamma $ of $ \Sigma [\Theta ]\blacktriangleright B $ and $\delta $ of $ \Gamma ' , B , \Gamma '' [\Delta ]\blacktriangleright A $ from the immediate grounding claims that compose them only containing introduction and elimination rules for $\triangleright$. We can therefore construct the following derivation:\[\infer{ \Gamma ' , (\Sigma [\Theta ])\triangleright B , \Gamma '' [\Delta ]\blacktriangleright A}{\deduce{ \Sigma [\Theta ]\blacktriangleright B}{\gamma}&&\deduce{ \Gamma ' , B , \Gamma '' [\Delta ]\blacktriangleright A}{\delta}}\]which verifies the statement also for $ \Gamma [\Delta ]\blacktriangleright A$.
If, on the other hand, $ \Gamma [\Delta ]\blacktriangleright A = \Gamma [\Delta ' , (\Sigma [\Theta ])\triangleright B , \Delta '' ]\blacktriangleright A $, then $\mid \Sigma [\Theta ]\blacktriangleright B \mid < n >\mid \Gamma [\Delta ' , B , \Delta '' ]\blacktriangleright A \mid $. Hence, by inductive hypothesis, there are derivations $\gamma $ of $ \Sigma [\Theta ]\blacktriangleright B $ and $\delta $ of $ \Gamma[\Delta ', B, \Delta '' ]\blacktriangleright A $ from the immediate grounding claims that compose them only containing introduction and elimination rules for $\triangleright$. We can therefore construct the following derivation:\[\infer{\Gamma [\Delta ' , (\Sigma [\Theta ])\triangleright B , \Delta '' ]\blacktriangleright A}{\deduce{ \Sigma [\Theta ]\blacktriangleright B}{\gamma}&&\deduce{ \Gamma[\Delta ', B, \Delta '' ]\blacktriangleright A}{\delta}}\]which verifies the statement of the present proposition also for $ \Gamma [\Delta ]\blacktriangleright A$.\end{proof}
Let us now consider mediate grounding. First of all, it is obvious that, given a mediate grounding claim $\Gamma [\Delta ] \gg A$, there is no general strategy to construct a non-trivial derivation of $\Gamma [\Delta ] \gg A$ from $\Gamma [\Delta ] \gg A$ by using $\gg$ rules, $\blacktriangleright $ rules and, possibly, grounding rules. Indeed, for instance, there is no general way to know what grounding claims can be used to introduce the outermost occurrence of $\gg$ in $\Gamma [\Delta ] \gg A$. And even if we suppose that the underlying relation of immediate grounding is decidable and that our grounding claim $\Gamma [\Delta ] \gg A$ is derivable, there might not be any mechanical method to find a derivation for it. For instance, if we consider any finitely axiomatisable but non-decidable formal theory,
then we would have that the immediate grounding operator meets the deducibility of identicals requirement because one-step derivability from certain premisses to a certain conclusion is decidable; but the mediate grounding operator, on the other hand, would not meet the deducibility of identicals requirement because the derivability relation is not a decidable one. This is clearly and essentially tied to the loss of information that transitivity implies. The mediate grounding operator, indeed, internalises a relation between a consequence and one of its mediate grounds which can be explicitly spelled out in terms of immediate grounding or can be implicitly associated to the notion of derivability in a calculus characterising the immediate grounding relation---possibly, through the notion of bar of a derivation. While an immediate grounding operator that can be characterised by a finite calculus expresses the existence of a rule application, a mediate grounding operator expresses the existence of a complex derivation with a certain structure. In order to account for the derivability of an immediate grounding claim of this kind, hence, it is enough to check whether there is a rule, from a finite collection of rules, that can be applied to the formulae occurring inside the grounding claim. In order to account for the derivability of a mediate grounding claim, on the other hand, a specific complex derivation of unknown size must be reconstructed, and no information concerning the structure of this derivation is provided by the mediate grounding claim. The case of grounding trees is similar to that of mediate grounding, but with an essential difference: a grounding tree expresses the existence of a complex derivation with a certain structure---which is specified by the claim---and containing certain formulae---which, again, are specified by the claim. 
Since a grounding tree explicitly provides all the information required to reconstruct the complex derivation that justifies the derivability of the claim itself, it is easy to reconstruct such a derivation and to reduce the derivability of grounding trees to that of immediate grounding claims. This difference between mediate grounding, on the one side, and immediate grounding and grounding trees, on the other side, is clearly related to the fact that obtaining a mediate grounding claim on the basis of a set of immediate grounding claims by transitivity implies a considerable loss of information with respect to the original set of immediate grounding claims. The problem concerning the loss of information implied by taking the transitive closure of immediate grounding in order to define mediate grounding is also of philosophical interest, as the discussion on the matter that can be found in \cite{sch12, lit13, rav13} witnesses.
\section{Conclusions} \label{sec:conclusions}
We have introduced three sets of inferential rules that can be used to define the behaviour of grounding operators of three different kinds on the basis of a generic grounding calculus: an operator for immediate grounding, an operator for mediate grounding---corresponding to the transitive closure of the immediate grounding one---and a grounding tree operator---that is, an operator that enables us to internalise chains of immediate grounding claims without losing any information about them. We have characterised the behaviour of these operators and studied their proof-theoretical properties.
In particular, we have shown that all three operators enjoy local detour eliminability since detour reductions for all of them can be defined. Nevertheless, we have also shown that while the schematic behaviour of the rules for the immediate grounding operator $\blacktriangleright$ and for the grounding tree operator $\triangleright$ enables us to generalise existing normalisation results for grounding calculi---as the generalisation of the normalisation result in \cite{gen21} shows---and hence to show global detour eliminability with respect to grounding calculi as well, the rules for the mediate grounding operator $\gg$ pose serious technical problems with respect to global detour eliminability results which are, as we argued, related to the conceptual features of mediate grounding which the $\gg$ operator is meant to formalise.
We have also considered the deducibility of identicals criterion, which, along with the detour eliminability criterion, has been proposed as a test for logicality. We have shown that all three operators fail this test and therefore argued that there is strong technical evidence against the claim that grounding operators are logical operators. The philosophical reasons for this failure have been discussed, along with a connection between the hyperintensionality of grounding and the non-logicality of grounding operators.
In an attempt to distinguish between the logicality of the considered operators and the balanced interplay between their introduction and elimination rules, we have then defined a weaker version of the deducibility of identicals criterion that takes into consideration the hyperintensional nature of grounding.
Using this weaker criterion, we have shown that, while the rules for the immediate grounding and grounding tree operators display the balance between introductions and eliminations required to meet the weak deducibility of identicals criterion, the rules for the mediate grounding operator do not. We discussed the ill behaviour of the mediate grounding operator, both with respect to global detour eliminability and with respect to weak deducibility of identicals, in light of the fact that defining mediate grounding as the transitive closure of immediate grounding implies a possibly considerable loss of information. A possible parallel with the philosophical problems posed by the transitivity of grounding has been proposed.
The presented work raises two general questions. The first concerns the suitability of mediate grounding as a notion of grounding. While the presented results are not meant to constitute conclusive evidence about specific features of particular grounding relations, but only to highlight the characteristics shared by a very general class of formal grounding operators, the technical shortcomings of the mediate grounding operator studied here seem to point at very specific philosophical issues that also concern informal notions of mediate grounding. The technical results presented here also indicate very clearly, though, that these shortcomings are essentially tied to specific features of the underlying immediate grounding relation, and thus do not necessarily bear relevance to all notions of grounding. Hence, a more specific investigation of the relations between transitivity and decidability of grounding relations would be of great interest. The second question concerns the possibility of an argument of general validity establishing the exact connections between hyperintensionality and logicality criteria. While such an argument is at the moment impossible, since it requires a general proof-theoretical characterisation of hyperintensionality, the philosophical attention that hyperintensional notions are receiving lately certainly makes the development of suitable formal methods an endeavour of great interest.
\bmhead{Acknowledgments}
\section*{Declarations}
\end{document} | arXiv |
Lipids in Health and Disease
Effect of hyperlipidemia on the incidence of cardio-cerebrovascular events in patients with type 2 diabetes
Dabei Fan, Li Li, Zhizhen Li, Ying Zhang, Xiaojun Ma, Lina Wu & Guijun Qin (ORCID: orcid.org/0000-0001-6699-3998)

Lipids in Health and Disease, volume 17, Article number: 102 (2018)
Abstract

This study aimed to explore the effect of hyperlipidemia on the incidence of cardio-cerebrovascular diseases in patients with type 2 diabetes.
Three hundred ninety-five patients with type 2 diabetes in our hospital from January 2012 to January 2016 were followed up for an average of 3.8 years. The incidence of cardio-cerebrovascular diseases was compared between the diabetes combined with hyperlipidemia group (195 patients) and the diabetes group (200 patients). A multivariable Cox proportional hazards regression model was used to analyze the effect of hyperlipidemia on the incidence of cardio-cerebrovascular diseases in patients with type 2 diabetes.
Diastolic blood pressure, systolic blood pressure, high-density lipoprotein, low-density lipoprotein, body mass index and hyper-sensitive C-reactive protein were higher in the diabetes combined with hyperlipidemia group than in the diabetes group (P < 0.05). At the end of the follow-up period, all-cause mortality, cardio-cerebrovascular disease mortality, and the incidence of myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardiovascular events were significantly higher in the diabetes combined with hyperlipidemia group than in the diabetes group (P < 0.05). Multivariable Cox proportional hazards regression analysis showed that the risks of myocardial infarction and total cardiovascular events in the diabetes combined with hyperlipidemia group were 1.54 times (95% CI 1.13–2.07) and 1.68 times (95% CI 1.23–2.24) those in the diabetes group, respectively. The population attributable risk percents of hyperlipidemia for all-cause mortality and total cardiovascular events in patients with type 2 diabetes were 9.6% and 26.8%, respectively.
Hyperlipidemia may promote vascular endothelial injury, increasing the risk of cardio-cerebrovascular diseases in patients with type 2 diabetes. Medical staff should pay attention to the control of blood lipids in patients with type 2 diabetes to delay the occurrence of cardio-cerebrovascular diseases.
Background

The latest data from the International Diabetes Federation revealed that there were 387 million diabetics worldwide in 2014. In high-income countries, type 2 diabetes accounted for 85%–95% of diabetes cases, and this proportion might be even higher in middle- and low-income countries. By 2035 the number of diabetic patients is expected to increase by 55%, to 600 million. The burden of diabetes is growing more severe as a result of the increasing number of deaths from diabetes and rising medical costs [1]. Diabetic patients often have concomitant metabolic disorders such as hypertension and hyperlipidemia, which readily lead to cardio-cerebrovascular diseases such as coronary heart disease, a major cause of death [2]. Among diabetes complications, cardio-cerebrovascular diseases are a common cause of patient death; according to statistics, more than 75% of diabetic patients die from cardio-cerebrovascular diseases every year [3], and some 30%–40% of diabetic patients in China have hyperlipidemia [4]. Hyperlipidemia and diabetes are independent risk factors for cardio-cerebrovascular diseases [5, 6], and their coexistence can increase the risk of cardio-cerebrovascular diseases [7, 8]. An epidemiological survey showed that the incidence of acute stroke in patients with a high hyper-sensitive C-reactive protein (hs-CRP) level was two times that of healthy people, and that of myocardial infarction was three times higher [9]. We followed up 395 patients with type 2 diabetes in our hospital from January 2012 to January 2016 and report the analysis as follows.
Three hundred ninety-five patients with type 2 diabetes in our hospital from January 2012 to January 2016 were divided into a diabetes combined with hyperlipidemia group (195 patients) and a diabetes group (200 patients). Patients were asked whether they had a family history of diabetes. Inclusion criteria: ① fasting plasma glucose (FPG) ≥ 7.0 mmol/L; ② FPG < 7.0 mmol/L but previously diagnosed with diabetes and using hypoglycemic agents [10]. Exclusion criteria: ① previous history of myocardial infarction; ② previous history of stroke; ③ refusal to sign an informed consent; ④ incomplete baseline data.
Diagnostic criteria
Diabetes was diagnosed based on the 2010 Chinese guidelines for the prevention of diabetes, and hypertension based on the 2011 Chinese guidelines for the prevention of hypertension [11]: systolic blood pressure (SBP) ≥ 140 mmHg and/or diastolic blood pressure (DBP) ≥ 90 mmHg (1 mmHg = 0.133 kPa), or normal blood pressure while taking anti-hypertensive drugs. Definition of obesity [11]: body mass index (BMI) ≥ 28 kg/m2. Definition of hyperlipidemia [12]: total cholesterol ≥ 5.72 mmol/L. Definition of smoking history: one or more cigarettes every day for over 1 year. Definition and diagnostic criteria of cardiovascular disease: cardio-cerebrovascular events include fatal and nonfatal cardio-cerebrovascular events. Cardiac events include acute myocardial infarction and sudden cardiac death. Acute myocardial infarction was diagnosed according to the diagnostic criteria developed by the Cardiovascular Disease Branch of the Chinese Medical Association [13]. Sudden cardiac death was diagnosed according to the 2006 diagnostic criteria of the American College of Cardiology/American Heart Association/European Society of Cardiology Committee [14]. Cerebrovascular events, including cerebral infarction and cerebral hemorrhage, were diagnosed according to the diagnostic criteria of the Cerebrovascular Disease Classification (1995) developed by the Fourth National Conference on Cerebrovascular Disease [15]. Cardiovascular events include heart failure, myocardial infarction, and sudden death. Total cardio-cerebrovascular events include myocardial infarction, cerebral infarction and cerebral hemorrhage. When total cardio-cerebrovascular events were counted, an event occurring two or more times was recorded only once, with follow-up ending at the time of the first endpoint event.
Follow-up method
The follow-up period ran from January 2012 to January 2016. New-onset cardio-cerebrovascular events were collected every three months. First, professional clinicians collected participants' records of major cardio-cerebrovascular events through the Zhengzhou City Medical Insurance Management Center. Subsequently, cardio-cerebrovascular physicians reviewed patients' medical records to confirm the occurrence time and type of cardio-cerebrovascular events and analyzed changes in hs-CRP during follow-up.
Statistical analysis

SPSS 13.0 software was used to analyze the data. Measurement data are shown as mean ± standard deviation (x̄ ± s). Comparisons between groups were made by t test, and enumeration data were analyzed by the Chi-square test; P < 0.05 was considered statistically significant. The person-time morbidity and mortality in the diabetes group and the diabetes combined with hyperlipidemia group were calculated, and the differences in the incidence of cardio-cerebrovascular events between the two groups were compared. A multivariable Cox proportional hazards regression model was used to analyze the factors affecting cardio-cerebrovascular events and to calculate the hazard ratio (HR) in each group. The population attributable risk percent (PAR%) was calculated according to the formula [16] and was used to analyze the effect of hyperlipidemia on cardio-cerebrovascular events in patients with diabetes.
PAR% = Morbidity × (HR − 1) / [Morbidity × (HR − 1) + 1] × 100%
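As an illustration of the PAR% formula, a minimal Python sketch (the exposure rate and HR values used in the example call are made up, not taken from the study's tables):

```python
def par_percent(exposure_rate: float, hr: float) -> float:
    """Population attributable risk percent, following the paper's formula:
    PAR% = rate * (HR - 1) / [rate * (HR - 1) + 1] * 100,
    where `exposure_rate` is the proportion of the population exposed
    (here, the proportion with hyperlipidemia) and `hr` the hazard ratio."""
    excess = exposure_rate * (hr - 1)
    return excess / (excess + 1) * 100

# Illustrative values only (not the study's data):
print(round(par_percent(0.5, 2.0), 1))  # 33.3
```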
Results

Three hundred ninety-five patients with type 2 diabetes in our hospital were followed up from January 2012 to January 2016, with an average follow-up period of 3.8 years. There were 195 patients in the diabetes combined with hyperlipidemia group and 200 patients in the diabetes group. There were 28 patients below 45 years old, 102 patients aged 45–54 years, 158 patients aged 55–64 years, 77 patients aged 65–74 years, and 36 patients over 75 years old, with an average age of 58.9 years (range 29–88). Age, SBP, DBP, low-density lipoprotein (LDL), high-density lipoprotein (HDL), BMI, total cholesterol, smoking ratio and hs-CRP were all higher in the diabetes combined with hyperlipidemia group than in the diabetes group, while FPG was lower (P < 0.05) (Table 1).
Table 1 Baseline data (x̄ ± s)
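Group comparisons such as those in Table 1 use t tests for measurements and Chi-square tests for counts; a minimal sketch of the 2×2 Pearson chi-square statistic (the counts in the example call are illustrative, not the study's data):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction):
    chi2 = N * (ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Illustrative counts, not the study's data:
print(round(chi_square_2x2(10, 20, 30, 40), 3))  # 0.794
```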
All-cause mortality and cardio-cerebrovascular events mortality
All-cause mortality and cardio-cerebrovascular events mortality in diabetes combined with hyperlipidemia group were higher than those in diabetes group during the follow-up. Patients below 65 years old in diabetes combined with hyperlipidemia group had higher all-cause mortality and cardio-cerebrovascular events mortality than those in diabetes group (P < 0.05). There was no difference in all-cause mortality and cardio-cerebrovascular events mortality of patients over 65 years old between the two groups. All-cause mortality and cardio-cerebrovascular events mortality of males and females in diabetes combined with hyperlipidemia group were all higher than those in diabetes group (P < 0.05). All-cause mortality and cardio-cerebrovascular events mortality of patients with family history of diabetes in diabetes combined with hyperlipidemia group were all higher than those in diabetes group (P < 0.05), and for patients without family history of diabetes there was no difference between the two groups (Table 2).
Table 2 All-cause mortality and cardio-cerebrovascular events mortality (/1000 persons/year)
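The mortality figures in Table 2 are person-time rates, i.e. events divided by accumulated person-years; a minimal sketch of the computation (the counts below are hypothetical, not taken from the study):

```python
def rate_per_1000_person_years(events: int, person_years: float) -> float:
    """Incidence or mortality per 1000 person-years:
    events / accumulated person-time * 1000."""
    return events / person_years * 1000

# Hypothetical cohort: 200 patients followed 3.8 years on average,
# with 19 events observed during follow-up.
person_years = 200 * 3.8
print(round(rate_per_1000_person_years(19, person_years), 1))  # 25.0
```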
Incidence of cardio-cerebrovascular events
Fifty-seven of the 395 patients suffered cardio-cerebrovascular events: 31 patients with myocardial infarction, 18 with cerebral infarction and 8 with cerebral hemorrhage. The incidence of myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardio-cerebrovascular events in the diabetes combined with hyperlipidemia group were all higher than those in the diabetes group (P < 0.05). Patients below 65 years old in the diabetes combined with hyperlipidemia group had a higher incidence of myocardial infarction and total cardio-cerebrovascular events than those in the diabetes group (P < 0.05). Among females, the incidence of myocardial infarction and total cardio-cerebrovascular events was higher in the diabetes combined with hyperlipidemia group than in the diabetes group; among males, the incidence of myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardio-cerebrovascular events was higher in the diabetes combined with hyperlipidemia group than in the diabetes group (P < 0.05) (Table 3).
Table 3 Incidence of cardio-cerebrovascular events (/1000 persons/year) (95%CI)
Multivariable analysis of the effect of hyperlipidemia on cardio-cerebrovascular events in patients with type 2 diabetes
In this research, the incidence of hyperlipidemia was 44.6%, and the incidence of diabetes combined with hyperlipidemia was 49.4%. Among patients with all-cause death, cardio-cerebrovascular events death, myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardio-cerebrovascular events, the incidence of hyperlipidemia was 67.5%, 73.2%, 69.8%, 70.4%, 65.8% and 61.3%, respectively. All-cause death, cardio-cerebrovascular events death, myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardio-cerebrovascular events were used as the dependent variables. Hypertension, age, gender, FPG, smoking, obesity, high cholesterol, low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C) were used as the independent variables, and the differences in age, gender, obesity, hypertension and smoking were corrected for. A multivariable Cox proportional hazards regression model was used to analyze these factors. The results showed that, after covariate correction, the risks of myocardial infarction and total cardio-cerebrovascular events in the diabetes combined with hyperlipidemia group were 1.54 times (95% CI (confidence interval) 1.13–2.07) and 1.68 times (95% CI 1.23–2.24) those in the diabetes group (P < 0.05). The PAR% of hyperlipidemia for all-cause death and total cardio-cerebrovascular events in patients with type 2 diabetes were 9.6% and 26.8%, respectively (Table 4). At the end of the follow-up, the survival rate in the diabetes group was significantly higher than that in the diabetes combined with hyperlipidemia group (Fig. 1).
Table 4 Multivariable analysis of the effect of hyperlipidemia on cardio-cerebrovascular events in patients with type 2 diabetes
Fig. 1 Survival rate in the diabetes group and the diabetes combined with hyperlipidemia group at the end of the follow-up
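Survival curves such as the one in Fig. 1 are typically Kaplan-Meier estimates; a minimal standard-library sketch (the follow-up data below are synthetic, not the study's):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times:  follow-up time for each patient
    events: 1 = death observed, 0 = censored at that time
    Returns a list of (t, S(t)) pairs at each distinct event time."""
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    survival, curve = 1.0, []
    for t in sorted(deaths):
        # Patients still under follow-up just before time t:
        at_risk = sum(1 for u in times if u >= t)
        survival *= 1 - deaths[t] / at_risk
        curve.append((t, survival))
    return curve

# Synthetic follow-up data (years, event indicator):
print(kaplan_meier([2, 3, 5, 6], [1, 1, 0, 1]))
```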
Discussion

Diabetes is a disease with multiple etiologies, including heredity, social factors, lifestyle and environment [17]. The incidence of diabetes is believed to be related to age, family history of diabetes, obesity levels and types, and insulin resistance. However, recent studies have found that cardio-cerebrovascular disease is a major threat to the safety of patients with type 2 diabetes, especially the elderly. In addition to heredity, the pathogenesis of cardio-cerebrovascular disease is closely related to patients' lifestyle and dietary patterns. It is reported that about 75% of diabetes patients die from cardio-cerebrovascular disease every year [18]. Studies have found that, owing to dysfunction of the biological regulation of insulin, diabetic patients are commonly accompanied by lipid metabolism disorders and complicated by hyperlipidemia [19]. Diabetes combined with hyperlipidemia can accelerate the progression of atherosclerosis and increase the incidence of cardiovascular disease [20]. Hyperlipidemia combined with abnormal hs-CRP may also be a key factor promoting vascular endothelial injury in patients with hypertension and the incidence of cardiovascular disease. There is growing evidence that even low-grade elevation of CRP is closely related to cardiovascular risk factors such as hypertension and hyperlipidemia, and that elevated CRP levels can increase the incidence of heart disease and stroke in patients with hypertension. CRP is therefore a proinflammatory factor involved in the occurrence and development of atherosclerosis. During the formation of atherosclerotic plaque, CRP, complement complexes and foam cells are deposited on the arterial wall.
CRP can combine with lipoprotein to activate the complement system, producing large numbers of inflammatory mediators, releasing oxygen free radicals, causing endangium injury, vasospasm and unstable plaque rupture, aggravating the luminal stenosis caused by atherosclerosis and promoting the occurrence of myocardial infarction [21]. In our research, DBP, SBP, HDL, LDL, BMI and hs-CRP in the diabetes combined with hyperlipidemia group were all higher than those in the diabetes group. All-cause mortality, cardio-cerebrovascular disease mortality, and the incidence of myocardial infarction, cerebral infarction, cerebral hemorrhage and total cardiovascular events were significantly higher in the diabetes combined with hyperlipidemia group than in the diabetes group (P < 0.05). The results of the multivariable Cox proportional hazards regression model showed that the risks of myocardial infarction and total cardio-cerebrovascular events in the diabetes combined with hyperlipidemia group were 1.54 times (95% CI 1.13–2.07) and 1.68 times (95% CI 1.23–2.24) those in the diabetes group. This result might be caused by lipid metabolism disorder. Lipid metabolism disorder, a common complication of type 2 diabetes, easily causes arteriosclerosis, inducing cardio-cerebrovascular diseases such as coronary heart disease and cerebral infarction. An excessively high LDL-C level in blood can lead to the accumulation of LDL-C in the coronary arteries, which promotes the formation of atheromatous plaque that obstructs the lumen, causing ischemia and hypoxia of the myocardium [22, 23]. Hyperlipidemia can damage vascular endothelial cells, increasing the permeability of the vascular wall. Plasma lipoproteins penetrating the intima then induce the infiltration of macrophages, the proliferation of smooth muscle cells and atherosclerosis, even promoting the formation of atheromatous plaque and angiostenosis.
Therefore, the reduction of blood lipid levels plays an effective role in preventing cardio-cerebrovascular disease [24]. Studies have also shown that lipid metabolism disorder is positively correlated with the incidence of ischemic cardiovascular disease [25]. Hypercholesterolemia is one of the most important risk factors for atherosclerotic cardiovascular disease (coronary heart disease and ischemic stroke), and coronary heart disease is one of the leading causes of death in diabetics [26, 27]. Some studies have revealed that the risks of hypertension and diabetes grow at different rates, increasing after 25 months and 27 months of follow-up, respectively. Hypertension is a common risk factor for cardiovascular disease, and rational treatment and scientific management can reduce its incidence [28,29,30]. Some studies found that the increased risk of coronary artery disease in diabetics might be partly attributable to diabetes-related lipoprotein abnormalities. Several secondary prevention trials including diabetic patients have demonstrated the effectiveness of lowering LDL-C in preventing coronary artery disease deaths. In patients with type 2 diabetes, although blood lipid values improved, lipid metabolism disorder persisted even when blood glucose was controlled within the optimal range [31,32,33]. In this research, all-cause mortality and cardio-cerebrovascular events mortality of patients with a family history of diabetes in the diabetes combined with hyperlipidemia group were all higher than those in the diabetes group, while for patients without a family history of diabetes there was no difference between the two groups. The results regarding family history of diabetes showed that the risk of cardiovascular disease increased even though these patients did not have symptoms of early diabetes or diabetes. The symptoms of early diabetes are a further manifestation of early atherosclerosis.
The synergistic effect of systemic inflammation and high blood glucose concentrations can damage the function of vascular endothelial cells and induce atherosclerotic lesions. Therefore, effective blood lipid control in early diabetes, or in patients with a family history of diabetes, could reduce the incidence of cardiovascular disease [34].
Conclusions

Hyperlipidemia increases the risk of cardio-cerebrovascular disease in patients with type 2 diabetes, and hyperlipidemia combined with elevated hs-CRP may confer an even higher risk. Medical staff should take preventive measures according to each patient's individual situation to reduce or delay the onset of cardiovascular disease in patients with diabetes.
Abbreviations

DBP: Diastolic blood pressure
FPG: Fasting plasma glucose
HDL: High-density lipoprotein
HDL-C: High-density lipoprotein cholesterol
HR: Hazard ratio
hs-CRP: Hyper-sensitive C-reactive protein
LDL: Low-density lipoprotein
LDL-C: Low-density lipoprotein cholesterol
PAR%: Population attributable risk percent
SBP: Systolic blood pressure
References

International Diabetes Federation. IDF Diabetes Atlas Sixth edition poster update 2014. 2014. http://www.doc88.com/p-7394581233404.html.
Sun B, Cheng X, Lc M, Tian H, Cl L. Relationship between metabolic diseases and all-cause and cardiovascular disease death in elderly male diabetics during a 10-year follow-up. Nat Med J China. 2014;94:591–5.
Chang CH, Shau WY, Jiang YD, Li HY, Chang TJ, Sheu WH, et al. Type 2 diabetes prevalence and incidence among adults in Taiwan during 1999-2004: a national health insurance data set study. Diabet Med. 2010;27:636–43.
Sh L, Jh T, Zhang W, Ly L, Qz X. Observation on the treatment of mixed hyperlipidemia with simvastatin and Fenofibrate together. Med Inf. 2010;23:3116–7.
Lin PJ, Kent DM, Winn A, Cohen JT, Neumann PJ. Multiple chronic conditions in type 2 diabetes mellitus: prevalence and consequences. Am J Manag Care. 2015;21:e23–34.
Crawford AG, Cote C, Couto J, Daskiran M, Gunnarsson C, Haas K, et al. Prevalence of obesity, type II diabetes mellitus, hyperlipidemia, and hypertension in the United States: findings from the GE centricity electronic medical record database. Popul Health Manag. 2010;13:151–61.
Tadic M, Cuspidi C. Type 2 diabetes mellitus and atrial fibrillation: from mechanisms to clinical practice. Arch Cardiovasc Dis. 2015;108:269–76.
Ritzenthaler T, Derex L, Davenas C, Bnouhanna W, Farghali A, Mechtouff L, et al. Safety of early initiation of rivaroxaban or dabigatran after thrombolysis in acute ischemic stroke. Rev Neurol (Paris). 2015;171:613.
Wang CH, Zhou J, Wang LX, Wang L, Li CL, Li HJ. Relationships between risk factors of cardiovascular disease and hyper-sensitive C-reactive protein in elderly patients. People's Mil Surg. 2016;2:161–3.
American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care. 2012;35(Suppl 1):S64–71. https://doi.org/10.2337/dc12-s064.
Revision Committee of Hypertension Prevention and Cure Guideline of China. Hypertension prevention and cure guideline of China (2010). Chinese J Hypertens. 2011;19:701–43.
Revision Committee of Dyslipidemia Prevention and Cure Guideline in Chinese Adults. Dyslipidemia prevention and cure guideline in Chinese adults. Chinese J Cardiol. 2007;35:390–409.
Chinese Society of Cardiology of Chinese Medical Association, Editorial Board of Chinese Journal of Cardiology, Editorial Board of Chinese Circulation Journal. Diagnosis and management of acute myocardial infarction. Chinese J Cardiol. 2010;38:675–88.
European Heart Rhythm Association, Heart Rhythm Society, Fuster V, Rydén LE, Cannom DS, Crijns HJ, et al. ACC/AHA/ESC 2006 guidelines for the management of patients with atrial fibrillation--executive summary: a report of the American College of Cardiology/American Heart Association task force on practice guidelines and the European Society of Cardiology Committee for practice guidelines (writing committee to revise the 2001 guidelines for the Management of Patients with Atrial Fibrillation). J Am Coll Cardiol. 2006;48:854–906.
RAO ML. Cerebrovascular disease prevention and cure guideline of China. Beijing: People's Medical Publishing House; 2007.
Pasala SK, Rao AA, Sridhar GR. Built environment and diabetes. Int J Diabetes Dev Ctries. 2010;30:63–8.
Lin CC, Li CI, Hsiao CY, Liu CS, Yang SY, Lee CC, et al. Time trend analysis of the prevalence and incidence of diagnosed type 2 diabetes among adults in Taiwan from 2000 to 2007: a population-based study. BMC Public Health. 2013;9:318.
CHEN XT. Study on the effect of hypertension and diabetes for elderly with cardiovascular disease. Mod Prev Med. 2011;38:3253–4.
Xin W. The effect of arterial stiffness index for Xuzhikang capsule in patients with type 2 diabetes. Chinese J Mod Drug Appl. 2015;9:127–8.
Barzilay JI, Spiekerman CF, Kuller LH, Burke GL, Bittner V, Gottdiener JS, et al. Prevalence of clinical and isolated subclinical cardiovascular disease in older adults with glucose disorders: the cardiovascular health study. Diabetes Care. 2001;24:1233–9. https://doi.org/10.2337/diacare.24.7.1233.
Ajmal MR, Yaccha M, Malik MA, Rabbani MU, Ahmad I, Isalm N, et al. Prevalence of nonalcoholic fatty liver disease (NAFLD) in patients of cardiovascular diseases and its association with hs-CRP and TNF-α. Indian Heart J. 2014;66:574–9.
HU DY. Chinese expert consensus on clinical application of selective cholesterol absorption inhibitors (2015). Chinese J Cardiol. 2015;5:394–8.
Chen GY, Li L, Dai F, Li XJ, Xu XX, Fan JG. Prevalence of and risk factors for type 2 diabetes mellitus in hyperlipidemia in China. Med Sci Monit. 2015;21:2476–84. https://doi.org/10.12659/MSM.894246.
Kabinejadian F, Cui F, Zhang Z, Ho P, Leo HL. A novel carotid covered stent design: in vitro evaluation of performance and influence on the blood flow regime at the carotid artery bifurcation. Ann Biomed Eng. 2013;41:1990–2002.
Ma RC, Lin X, Jia W. Causes of type 2 diabetes in China. Lancet Diabetes Endocrinol. 2014;2:980–91.
Matsunaga M, Yatsuya H, Iso H, Yamashita K, Li Y, Yamagishi K, et al. Similarities and differences between coronary heart disease and stroke in the associations with cardiovascular risk factors: the Japan collaborative cohort study. Atherosclerosis. 2017;216:124–30.
Onat A, Dönmez I, Karadeniz Y, Cakır H, Kaya A. Type-2 diabetes and coronary heart disease: common physiopathology, viewed from autoimmunity. Expert Rev Cardiovasc Ther. 2014;12:667.
Wei YH, Jiang JJ, Hu HP. Research on the relationship between microalbuminuria and carotid intima-media thickness in patients with type 2 diabetes mellitus. China J Mod Med. 2007; issue 19. http://en.cnki.com.cn/Article_en/CJFDTotal-ZXDY200719036.htm.
HE YM. Insulin intensive therapy on type 2 diabetes combined with hypertension. China Mod Doctor. 2009;47:43–4.
Wang L. Effect of health education of hypertension on chronic disease prevention in community. Mod Diagn Treat. 2014;25:5477–8.
Babu A, Kannan C, Mazzone T. Hyperlipidemia and diabetes mellitus. Zhonghua Yu Fang Yi Xue Za Zhi. 2002; https://doi.org/10.1201/NOE1842141151.ch17.
Kujiraoka T, Iwasaki T, Ishihara M, Ito M, Nagano M, Kawaguchi A, et al. Altered distribution of plasma PAF-AH between HDLs and other lipoproteins in hyperlipidemia and diabetes mellitus. J Lipid Res. 2003;44:2006–14.
Hu XF, Han XR, Yang ZY, Hu YH, Jl T. The impact of broadened diagnostic criteria on the prevalence of hypertension, hyperlipidemia and diabetes mellitus in China. Zhonghua Yu Fang Yi Xue Za Zhi. 2017;51:369–77.
Ciccone MM. Endothelial function in pre-diabetes, diabetes and diabetic cardiomyopathy: a review. J Diabetes Metab. 2014;05 https://doi.org/10.4172/2155-6156.1000364.
This research was supported by the National Natural Science Foundation of China (grant number: 81570746).
All data generated or analyzed during this study are included in this published article.
Division of Endocrinology Department of Internal Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
Dabei Fan, Zhizhen Li, Ying Zhang, Xiaojun Ma, Lina Wu & Guijun Qin
Ophthalmologic Center, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
Li Li
DF contributed to the design and concept of this study, and revised the manuscript critically for important intellectual content. ZL analyzed and interpreted the patient data, and revised the manuscript critically for important intellectual content. YZ analyzed and interpreted the patient data, and drafted the manuscript. XM acquired the data, and drafted the manuscript. LW acquired the data, and drafted the manuscript. GQ contributed to the design and concept of this study, and revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
Correspondence to Dabei Fan.
This study has been approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and all participants of the study signed the informed consent.
Fan, D., Li, L., Li, Z. et al. Effect of hyperlipidemia on the incidence of cardio-cerebrovascular events in patients with type 2 diabetes. Lipids Health Dis 17, 102 (2018). https://doi.org/10.1186/s12944-018-0676-x
Cardio-cerebrovascular diseases
How many unordered pairs of prime numbers have a sum of 40?
We must check whether or not the difference between 40 and each of the prime numbers less than 20 (2, 3, 5, 7, 11, 13, 17, 19) is prime. We find that only $40-3=37$, $40-11=29$, and $40-17=23$ are prime. Thus, $\boxed{3}$ pairs of prime numbers have a sum of 40.
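The same enumeration can be verified with a short script (an illustrative sketch; the `is_prime` helper is ours, not part of the original solution):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Unordered pairs (p, q) of primes with p <= q and p + q = 40:
# it suffices to test primes p <= 20 and check whether 40 - p is prime.
pairs = [(p, 40 - p) for p in range(2, 21) if is_prime(p) and is_prime(40 - p)]
print(pairs)       # [(3, 37), (11, 29), (17, 23)]
print(len(pairs))  # 3
```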
Nuclear Oncology, pp 1-24
Radionuclide Therapy of Tumors of the Liver and Biliary Tract
Giuseppe Boni
Federica Guidoccio
Duccio Volterrani
Giuliano Mariani
Living reference work entry
The liver is a frequent site for both primary cancer and metastatic disease. In these circumstances, liver-directed therapies such as cytoreduction via surgery or in situ ablative techniques may influence the natural history of disease progression and improve clinical outcomes.
Radioembolization (RE) is a selective internal radiation therapy technique in which 131I-lipiodol or 90Y-microspheres are infused through the hepatic arteries. It exploits the fact that primary and secondary hepatic tumors are vascularized mostly by arterial blood flow, whereas normal liver perfusion derives mostly from the portal network. This enables high radiation doses to be delivered while sparing the surrounding non-malignant liver parenchyma.
Although there is some clinical evidence that RE may play an important role in the management of intermediate- or advanced-stage hepatocellular carcinoma, as well as in liver-dominant metastatic colorectal cancer and metastatic neuroendocrine tumors, further randomized clinical trials are needed to better assess the potential beneficial and harmful outcomes of trans-arterial radioembolization, either as monotherapy or in combination with other systemic or locoregional therapies.
In this chapter we discuss technical aspects, patient selection, current clinical evidence, and future directions of radioembolization for primary and secondary liver cancer.
Hepatocellular carcinoma · Selective internal radiation therapy · Radioembolization · Liver · Neoplasm · Metastasis
[18F]FDG: 2-Deoxy-2-[18F]fluoro-d-glucose
5-FU: 5-Fluorouracil, a chemotherapy agent
68Ga-DOTANOC: 68Ga-DOTA-1-Nal3-octreotide
99mTc-HSA: 99mTc-human serum albumin
99mTc-MAA: 99mTc-macroaggregated albumin
99mTcO4−: 99mTc-pertechnetate
AFP: Alpha-fetoprotein, a circulating serum marker of hepatocellular carcinoma (and of testicular germ-cell cancer as well)
Bq: Becquerel unit
BSA: Body surface area
CA 19-9: Carbohydrate antigen 19-9, a tumor-associated serum marker
ce-CT: Contrast-enhanced x-ray computed tomography
CR: Complete response
CT: X-ray computed tomography
DEB: Drug-eluting bead
EASL: European Association for the Study of the Liver
ECOG: Eastern Cooperative Oncology Group
eV: Electron volt
GBq: Giga-Becquerel (10⁹ Becquerel)
Gy: Gray unit (ionizing radiation dose in the International System of Units, corresponding to the absorption of one joule of radiation energy per kilogram of matter)
HCC: Hepatocellular carcinoma
HDD: 4-Hexadecyl 2,2,9,9-tetramethyl-4,7-diaza-1,10-decanethiol, a chelating agent
ICC: Intrahepatic cholangiocarcinoma
IRE: Irreversible electroporation
keV: Kiloelectron volt (10³ eV)
LSF: Lung shunt fraction
MBq: Mega-Becquerel (10⁶ Becquerel)
MeV: Megaelectron volt (10⁶ eV)
MIP: Maximum intensity projection
MIRD: Medical Internal Radiation Dose
NET: Neuroendocrine tumor
PET/CT: Positron emission tomography/computed tomography
PFS: Progression-free survival
PR: Partial response
PVT: Portal vein thrombosis
RE: Radioembolization
RECIST: Response Evaluation Criteria in Solid Tumors
RILD: Radiation-induced liver disease
SIRT: Selective internal radiation therapy
SPECT/CT: Single-photon emission computed tomography/computed tomography
SUV: Standardized uptake value
SUVmax: Standardized uptake value at point of maximum
TACE: Transcatheter arterial chemoembolization
TARE: Transarterial radioembolization
VIPoma: Neuroendocrine tumor producing vasoactive intestinal peptide
Both primary tumors and metastatic malignancies can arise in the liver. Hepatocellular carcinoma (HCC) is the fifth most common malignancy worldwide, and its incidence is rising [1]. In addition, the liver is one of the most common sites for hematogenous metastases from solid tumors arising primarily in other tissues/organs, most importantly colorectal cancer (CRC). About 15–25% of all CRCs present with synchronous hepatic metastases or develop metachronous liver metastases during the course of the disease [2].
Although significant survival benefit can be achieved with curative resection or liver transplantation in selected cases [3], less than 15% of patients with newly diagnosed HCC are candidates for surgical procedures with curative intent. Although various treatments have been proposed for the remaining patients, definite agreement has not been reached on which option offers the greatest survival benefit with the least toxicity.
External beam radiation therapy has a limited role in the treatment of HCC due to the relatively high radiosensitivity of normal hepatic tissue [4]. In fact, exposure of the liver to radiation doses greater than 40 Gy may result in a clinical syndrome called "radiation-induced liver disease " (RILD) or radiation hepatitis. This syndrome, which occurs weeks to months following therapy, includes elevated liver enzymes, anicteric hepatomegaly, and ascites [4, 5].
Minimally invasive, percutaneous ablative treatments include radiofrequency ablation (RFA), microwave ablation, cryoablation, and irreversible electroporation (IRE) that have become widely accepted as potentially curative therapies for either HCC or metastatic liver disease [6]. In particular, these ablative techniques are useful for treating patients who do not meet the criteria for surgery but in whom curative treatment is desired.
Transcatheter arterial chemoembolization (TACE) is the mainstay of catheter-based locoregional therapies for unresectable primary liver cancer; its use is expanding and includes liver metastatic disease from other malignancies [7]. Conventional TACE typically involves the injection of chemotherapeutic agents mixed with lipiodol and embolic particles into the branch of the hepatic artery that feeds the tumor [8]. TACE with drug-eluting beads (DEB) involves the injection of DEBs into the tumor-feeding artery, offering simultaneous delivery of chemotherapy and embolization with sustained and controlled drug release over time [9]. Both TACE and DEB-TACE are effective as palliative treatments for primary and metastatic liver cancers [10, 11].
There are substantial differences in the treatment strategies for primary and metastatic malignancies of the liver. While locoregional treatments are a mainstay in primary liver cancers [12], transcatheter techniques such as TACE or DEB-TACE are not commonly used for patients with metastatic liver disease.
The recent successful development of trans-arterial radioembolization (TARE) with 90Y-labeled particles has revived interest in an approach to locoregional treatment of liver tumors with radionuclides that had been introduced earlier with the use of lipiodol containing 131I but had shown little benefit for long-term survival (see further below). This technique is also defined as "selective internal radiation therapy " (SIRT); therefore, the acronyms TARE and SIRT can be employed in an interchangeable manner.
Rationale of Radioembolization Therapy for Liver Tumors
In order to deliver the highest possible therapeutic doses to the tumor while sparing normal liver parenchyma, techniques based on minimally invasive intra-arterial administration of therapeutic radionuclides have been developed, an approach that takes advantage of the dual blood supply to the liver. Normal hepatic tissue derives greater than 70% of its blood supply from the portal system, whereas blood to malignant tissue is preferentially supplied by the arterial system. High tumoricidal doses can thus be selectively delivered to the tumor lesions (more than 70 Gy, usually 200–300 Gy) through locoregional administration of agents emitting β− particles (labeled with either 131I, 90Y, 188Re, or 166Ho), with low levels of associated damage for the non-affected liver and therefore a minimal risk of inducing RILD [13, 14, 15, 16, 17].
Radioactive Lipiodol
Upon injection into the hepatic artery of patients with HCC, the iodinated oil 131I-lipiodol (a suspension of lipidic particles) follows the preferential blood flow toward the tumor; the radiolabeled micelles are then retained by pinocytosis both in the tumor cells and in the endothelial cells of the arteries feeding the tumor [18]. Following this route of administration (which is performed under angiographic monitoring), more than 75% of the injected 131I-lipiodol remains in the liver, while the remainder distributes mainly to the lungs. Release of free radioiodine results in some accumulation of radioactivity in the thyroid gland; the dose to the thyroid gland can be minimized by pretreatment with sodium iodide. Tumor/non-tumor uptake ratios in the liver are generally higher than 5, and more than 10% of the injected radioactivity remains within the tumor with an effective half-life greater than 6 days, longer than in the normal liver tissue [14, 15, 19]. Administered activity can be either a fixed amount of 2.4 GBq (65 mCi) or defined on the basis of patient-specific dosimetric estimates. Due to the long half-life of 131I-lipiodol in the tumor, current legislation in some countries requires hospitalization for about 1 week, for the purpose of radioprotection of the general population.
Treatment is in general well tolerated, and serious adverse effects are very rare, while generic asthenia is commonly reported; hematologic toxicity is exceptionally rare, although blood cell counts may be altered due to the cirrhosis-related hypersplenism often present in these patients. Interstitial pneumopathy due to trapping and retention of the radiolabeled particle suspension is reported as the main risk of this treatment [20].
Both retrospective studies [21, 22, 23] and prospective trials [21, 24, 25] have demonstrated the safety of 131I-lipiodol therapy, while the objective response rate is reported in the 40–50% range. In particular, a randomized controlled trial comparing 131I-lipiodol therapy versus best supportive care in a group of 27 HCC patients with portal thrombosis demonstrated that survival at 3 months was 71% in the treatment arm versus 10% in the best supportive care arm (with median overall survivals of 26 weeks and 10 weeks, respectively) [24].
In the adjuvant setting, the efficacy of 131I-lipiodol therapy was tested in a phase II study involving 28 patients [26]. Median time to recurrence was 28 months (range 12–62 months) in the 16 patients who were apparently disease-free at follow-up; overall survival in the responding patients was 86% at 3 years and 65% at 5 years. In a recently published phase III study, 21 patients without evidence of residual disease after potentially curative resection for HCC received 1,850 MBq of 131I-lipiodol intra-arterially as adjuvant therapy, while 22 patients treated with surgery alone served as the control group [27]. The recurrence rate in the patients treated with surgery alone was 63.6%, while it was 47.6% in those receiving adjuvant radioembolization. The 5-year, 7-year, and 10-year disease-free survivals were 61.9%, 52.4%, and 47.6%, respectively, in the adjuvant therapy group, significantly higher than the values in the corresponding control group (31.8% with P = 0.0397, 31.8% with P = 0.0224, and 27.3% with P = 0.0892, respectively). Also, overall survivals in the treated group (66.7% at both 5 and 7 years, 52.4% at 10 years) were higher than in the control group (36.4% with P = 0.0433 at 5 years, 31.8% with P = 0.0243 at 7 years, and 27.3% with P = 0.0905 at 10 years, respectively). When compared with chemoembolization, embolization with 131I-lipiodol has yielded similar results in terms of efficacy but better tolerance [23].
More recently lipiodol labeled with 188Re using 4-hexadecyl 2,2,9,9-tetramethyl-4,7-diaza-1,10-decanethiol (HDD) as the chelating agent [28] was shown to be a promising agent for radioembolization in patients with inoperable large and/or multifocal HCCs. 188Re has potentially favorable physical characteristics, such as a shorter half-life than 131I (16.9 h versus 8 days), a β− emission of high energy with ensuing good tumoricidal effect (E max = 2.1 MeV), and a 155-keV γ emission favorable for gamma-camera imaging (for the purpose of dosimetric estimates). Furthermore, the relatively short-lived 188Re can be obtained through a generator system based on its parent radionuclide (188W) that has a physical half-life of 69 days, suitable for distribution logistics.
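The practical impact of the shorter half-life of 188Re can be illustrated with a simple physical-decay calculation (a sketch using only the half-lives quoted above; actual retention in vivo also depends on biological clearance):

```python
import math

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the initial activity remaining after time t (pure physical decay)."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

# Physical half-lives quoted in the text: 188Re = 16.9 h, 131I = 8 days.
for label, t_half in [("188Re", 16.9), ("131I", 8 * 24)]:
    # After 48 h, only ~14% of the 188Re activity remains,
    # versus ~84% of the 131I activity.
    print(label, round(fraction_remaining(48, t_half), 3))
```

This difference in residual activity is one reason why 188Re-based therapy can shorten the radioprotection-mandated hospitalization required after 131I-lipiodol.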
Therapy of HCC with 188Re-HDD-lipiodol results in higher tumor-killing efficacy than 131I-lipiodol, yet combined with lower toxicity. The 188Re-labeled agent represents therefore an excellent alternative to the 131I-labeled agent [29].
Promising results of both safety and clinical response following therapy with 188Re-HDD-lipiodol were obtained in a multicenter study performed in 93 patients with inoperable HCC [30]. Treatment was well tolerated, and an objective response (including either tumor regression of some degree or stabilization of disease that was in progression prior to therapy) was observed in 66/93 patients (71%); out of these 66 objective responses, there were 5 cases with complete ablation of the tumor mass, 17 cases of partial response, and 23 cases with stabilization of disease.
90Y-Microspheres
Intra-arterial radioembolization with 90Y-labeled particles was approved in 2002 for the treatment of liver tumors, both primary malignancies and metastatic lesions originated by other tumors [31]. 90Y is a pure β− emitter that decays to stable 90Zr and has a physical half-life of 64.2 h. The average energy of β− emission is 0.936 MeV, with a mean tissue penetration of 2.5 mm and a maximum tissue range of 10 mm. The physical properties of 90Y allow the delivery of high-radiation doses to hepatic malignancies, when administered with this technique, while minimally affecting the non-affected surrounding liver parenchyma.
Two types of microspheres labeled with 90Y are currently available, made, respectively, of glass (TheraSphere®, MDS Nordion, Ottawa, Ontario, Canada) and of resin (SIR-Spheres®, Sirtex Medical, Sydney, Australia). These two preparations differ in some important respects [32]. TheraSphere® consists of particles 20–30 μm in diameter, each one carrying 2,500 Bq of 90Y (high specific activity); about 1.2 million microspheres is injected intra-arterially for a single treatment, corresponding to a total administered activity of about 3 GBq (81 mCi). SIR-Spheres® consists instead of particles 20–60 μm in diameter, each one carrying 50 Bq of 90Y (low specific activity); 40–80 million microspheres are injected for a single treatment to achieve a similar total administered activity of 3 GBq [32].
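The per-sphere activities and sphere counts quoted above imply the same total administered activity for the two products; a quick sanity check (values taken from the text; the function name is ours):

```python
# Per-sphere activities (Bq) quoted in the text.
GLASS_BQ_PER_SPHERE = 2500  # TheraSphere, high specific activity
RESIN_BQ_PER_SPHERE = 50    # SIR-Spheres, low specific activity

def total_activity_gbq(n_spheres: float, bq_per_sphere: float) -> float:
    """Total administered activity in GBq for a given number of microspheres."""
    return n_spheres * bq_per_sphere / 1e9

print(total_activity_gbq(1.2e6, GLASS_BQ_PER_SPHERE))  # ~1.2 million glass spheres -> 3.0 GBq
print(total_activity_gbq(60e6, RESIN_BQ_PER_SPHERE))   # ~60 million resin spheres -> 3.0 GBq
```

The 50-fold difference in specific activity means that, for the same total activity, resin treatments inject far more particles, which contributes a greater embolic effect.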
After the diagnosis of inoperable liver tumor has been made on the basis of proper imaging and biopsy, the pretreatment functional status of the liver is evaluated by routine blood chemistry workup. Patients with an ECOG performance status of greater than two are not considered ideal candidates for this treatment.
An inadequate liver function reserve with total bilirubin >2.0 mg/dL and serum albumin <3 g/dL is considered a contraindication to treatment with 90Y-microspheres. In case of concomitant renal failure, care must be taken to avoid or minimize the use of iodinated contrast medium during angiography (see further below).
Pretreatment evaluation for TARE/SIRT with 90Y-microspheres is based on cross-sectional imaging and arteriograms in the individual patient, with the fundamental prerequisite that the patient has liver-dominant unresectable disease. The workup should include CT or MR imaging of the liver for assessment of tumoral and non-tumoral volume, portal vein patency, and extent of extrahepatic disease. Distribution of the tumor disease is typically characterized as unilobar or bilobar; however, the correlation of tumor lesions with hepatic arterial supply is variable and can only be ascertained through arteriography. Ascites indicates poor hepatic reserve or peritoneal metastasis, both of which bear poor prognosis.
The final decision-making about treatment with 90Y-microsphere TARE/SIRT for each individual patient should be achieved after careful consideration of all functional and anatomic parameters within a multidisciplinary team involving interventional radiologists, surgical oncologists, medical oncologists, nuclear medicine physicians, radiation oncologists, medical physicists, and radiation safety experts.
Assessment of Arterial Anatomy
Pretreatment angiography (an essential requisite for the therapeutic procedure) is performed to assess the vascular anatomy of the liver, patency of the portal vein, and presence of artero-portal shunting and/or shunting to extrahepatic territories, the most important of which is the liver-to-lung shunt (see further below). Abnormal blood flow spreading the radiolabeled microspheres outside the liver vasculature is prevented by prophylactic embolization of some vessels identified during angiography, such as the gastroduodenal artery and the right gastric artery [5]. This is a safe and effective procedure to minimize the risks of hepato-enteric flow. Mesenteric angiography is necessary to ensure that blood supply to the lesions has been adequately identified, as incomplete/inaccurate definition of the pattern of blood supply to the tumor may lead to incomplete/ineffective targeting of the tumor lesion.
Pretreatment Imaging with 99mTc-MAA
During pretreatment angiography, 99mTc-macroaggregated serum albumin (99mTc-MAA), or alternatively 99mTc-HSA-microspheres, is injected into the hepatic artery to confirm that the radiolabeled particles home in the tumor lesion(s), as well as to assess for the presence of shunting to the splanchnic and/or pulmonary vascular bed. To this purpose, scintigraphy of the lung and upper abdomen by planar and/or SPECT/CT imaging (the latter being the optimal imaging technique) is routinely performed (Figs. 1, 2, and 3) [33]. Images can be acquired within 4 h of the administration of 99mTc-HSA-microspheres, while the optimal time window for imaging after administration of 99mTc-MAA is within 1 h post-administration. In fact, albumin macroaggregates undergo relatively fast intrahepatic degradation, with possible redistribution of radioactivity (constituted by either smaller fragments and/or free 99mTcO4−) from the capillary bed of the liver to the capillary bed of the lung. As a consequence, the liver-to-lung shunt fraction can be overestimated at later time points after administration [34], or radioactivity accumulation at other sites (e.g., free 99mTcO4− accumulating in the stomach) can be misinterpreted as shunting to extrahepatic sites. In order to avoid such occurrences, sodium perchlorate is administered orally about 30 min before 99mTc-MAA injection [35].
Angiographic and scintigraphic pretreatment evaluation of a 67-year-old patient with a large, infiltrating hepatocellular carcinoma (HCC) candidate to radioembolization therapy with 90Y-microspheres. (a) Left panel shows contrast-enhanced CT demonstrating massive infiltration of segment 1 by a large HCC, with extension to the adjacent liver segments. Right panel shows an early phase of digital subtraction angiography through the hepatic artery (catheter indicated by black arrow); there is intense diffuse enhancement of the left branches of the portal vein, with a wide-lumen artero-portal fistula (indicated by white arrows). (b) Left panel shows planar scintigraphy acquired after trans-arterial injection of 99mTc-MAA (anterior view) demonstrating widespread diffusion of the injected particles to virtually the entire liver. Right panel shows the fused axial SPECT/CT image, which better defines the intrahepatic distribution of 99mTc-MAA, mostly to both the right and the left lobes of the liver, while there is minimal perfusion of the main site of the tumor. On the basis of the angiographic and scintigraphic characterization, this patient was excluded from trans-arterial radioembolization with 90Y-microspheres, because this treatment would have exposed the non-tumor liver parenchyma to excessive, unjustified radiation doses without an expected therapeutic benefit
Angiographic and scintigraphic pretreatment evaluation of a 75-year-old man with metastasis from an urothelial carcinoma in both lobes of the liver, with posttreatment PET/CT acquisition based on internal pair production during decay of 90Y. Selective angiography was performed by injecting contrast medium separately into the right, the median, and the left branches of the hepatic artery. (a) Left panel shows digital subtraction angiography obtained upon injection into the median hepatic artery, revealing a thin branch (indicated by the white arrow) identified as a patent falciform artery. Right panel shows the corresponding contrast-enhanced CT phase (falciform artery indicated by white arrow). (b) Scintigraphy acquired after trans-arterial 99mTc-MAA injection clearly demonstrate the abnormal arterial branch (indicated by arrows) in the planar anterior view (upper left panel) as well as in the fused SPECT/CT images (coronal in upper right, sagittal in lower left, axial in lower right panels, respectively). During the subsequent trans-arterial treatment phase, an ice pack was positioned on the cutaneous projection of the falciform artery before injecting the 90Y-microspheres, in order to induce vasoconstriction and thus reduce as much as possible inadvertent deposition of the radiolabeled particles in the periumbilical region, which would have caused delivery of a high-radiation dose to this region. The procedure was uneventful, and the posttreatment PET/CT acquisition based on internal pair production of 90Y (c) showed excellent tumor targeting without visualization of the area fed by the patent falciform artery (fused coronal image in left panel, fused sagittal image in right panel)
Scintigraphic pretreatment evaluation (fused axial SPECT/CT images at various levels) of a 72-year-old patient with a tumor lesion between segments 5 and 6 of the liver. Trans-arterial 99mTc-MAA injection results in satisfactory distribution to the tumor lesion but also in scintigraphic visualization of the gallbladder wall, due to flow of the radiolabeled particles through the cholecystic artery. Therapy with 90Y-microspheres was not performed, because of the fear of causing necrotizing radiation-induced cholecystitis. In patients with vascular patterns similar to this (as those regarding the mesenteric artery feeding the gastric wall), any further attempt at 90Y-microsphere radioembolization therapy must be preceded by coil embolization of the arteries feeding extrahepatic territories
It is crucial to correlate the 99mTc-MAA scintigraphic images to the angiography pattern, as topographic proximity of the duodenum and stomach to the liver may decrease the ability to identify extrahepatic shunting by scintigraphy alone, especially if based solely on planar imaging. Based on ROI technique, the lung shunt fraction (LSF) is calculated and employed to estimate the radiation dose delivered to the lungs for any given amount of radioactivity, so that appropriate adjustments in the administered activity can be made to minimize the risk of radiation pneumonitis (see further below).
Radiodosimetric Aspects
The choice of the most appropriate 90Y activity to be delivered into the tumor target and to the normal liver parenchyma requires adequate knowledge of many factors, mainly the liver function and reserve, that are frequently influenced by concomitant pathologies (i.e., cirrhosis) or by prior chemotherapy and/or external beam radiation therapy.
The main complications possibly linked to 90Y-microsphere radioembolization are caused by excessive irradiation of nontarget tissue. In this regard, the key limiting factor is the lower tolerance to radiation of normal liver parenchyma relative to the dose required to destroy the tumor target. The maximum external beam acceptable cumulative dose to the whole liver is 35 Gy (based on prior experience with external beam radiation therapy) [36], while the estimated dose to destroy a solid tumor is more than 70 Gy. Above the 35 Gy threshold radiation dose to the liver parenchyma, the risk of liver failure rises sharply.
The other absolute and relative contraindications for the procedure are related to the possible flow/reflux of part of the 90Y-microspheres to arteries feeding the wall of the gastrointestinal tract and to excessive lung radiation due to a high hepato-pulmonary shunt, frequently observed in HCC as well as in metastatic disease with a large tumor burden. In this case, the administration of therapeutic amounts of 90Y-microspheres increases the risk of clinically relevant radiation pneumonitis.
To warrant the safety of the procedure, it is crucial to quantify the lung shunt fraction (LSF) as detected in the pretreatment evaluation with the 99mTc-MAA scan, in order to calculate the expected radiation dose to the lungs. Previous data extrapolated from the large body of experience accumulated with external beam radiation therapy indicate that the highest tolerable dose to the lungs is 30 Gy for a single administration and less than 50 Gy as the cumulative dose for multiple treatments [36].
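The expected lung dose for a planned activity can be sketched from the LSF as follows. This is an illustrative calculation only, not clinical software: the total-absorption dose factor for 90Y of roughly 49.7 Gy·kg/GBq and the lung mass of 1 kg are assumptions introduced here, not values stated in the text.

```python
# Illustrative sketch (not clinical software): expected lung dose from the
# 99mTc-MAA-derived LSF. The dose factor (~49.7 Gy*kg/GBq, i.e., ~50 J per GBq
# of 90Y fully absorbed) and the 1 kg lung mass are assumptions for this example.

DOSE_FACTOR_GY_KG_PER_GBQ = 49.7  # assumed total-absorption factor for 90Y

def lung_dose_gy(activity_gbq: float, lsf: float, lung_mass_kg: float = 1.0) -> float:
    """Estimated absorbed lung dose (Gy) for a given administered activity,
    where lsf is the lung shunt fraction expressed as a fraction (0-1)."""
    return DOSE_FACTOR_GY_KG_PER_GBQ * activity_gbq * lsf / lung_mass_kg

def within_lung_limits(single_dose_gy: float, cumulative_dose_gy: float) -> bool:
    """Apply the limits from the text: 30 Gy for a single administration,
    less than 50 Gy cumulative over multiple treatments."""
    return single_dose_gy <= 30.0 and cumulative_dose_gy < 50.0
```

For example, 2 GBq with a 10% shunt would deliver roughly 10 Gy to the lungs under these assumptions, well within the single-administration limit.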
The LSF value is routinely calculated by ROI analysis of the planar scans acquired after 99mTc-MAA administration in the pretreatment phase, using the geometric mean of the lung and liver counts, respectively, in the anterior and posterior views, according to the following equation:
$$ \mathrm{LSF}\left(\%\right)=\frac{\mathrm{Lung}\ \mathrm{Counts}}{\mathrm{Lung}\ \mathrm{Counts}+\mathrm{Liver}\ \mathrm{Counts}}\times 100 $$
Radioembolization with 90Y-microspheres has no restrictions for any LSF value <10%, whereas an LSF value >20% constitutes per se an absolute contraindication to treatment. Activity adjustments can be adopted for LSF values between 10% and 20%, that is, a 20% reduction in administered activity for LSF values between 10% and 15% and a 40% reduction for LSF values between 15% and 20% (see Fig. 4).
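The LSF calculation from the geometric means of the anterior/posterior ROI counts, and the activity-reduction rules just described, can be sketched as follows (function names are illustrative, not part of any clinical software):

```python
import math

def lsf_percent(lung_ant: float, lung_post: float,
                liver_ant: float, liver_post: float) -> float:
    """LSF (%) from the geometric means of anterior/posterior ROI counts,
    per the equation in the text."""
    lung = math.sqrt(lung_ant * lung_post)
    liver = math.sqrt(liver_ant * liver_post)
    return lung / (lung + liver) * 100.0

def adjusted_activity(planned_gbq: float, lsf_pct: float):
    """Apply the activity-reduction rules from the text.
    Returns None when LSF > 20% (absolute contraindication)."""
    if lsf_pct < 10:
        return planned_gbq           # no restriction: full planned activity
    if lsf_pct <= 15:
        return planned_gbq * 0.80    # 20% reduction for LSF 10-15%
    if lsf_pct <= 20:
        return planned_gbq * 0.60    # 40% reduction for LSF 15-20%
    return None                      # LSF > 20%: treatment contraindicated
```

For instance, a patient with an LSF of 12% would receive 80% of the planned activity, matching the center-panel example in Fig. 4.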
Representative examples in three different patients of estimation of the lung shunt fraction (LSF) as derived from planar gamma-camera imaging after trans-arterial injection of 99mTc-MAA. For each patient, the anterior and posterior views are displayed, with delineation of regions of interest (ROI) for the liver and for the lung fields, respectively; each ROI is first manually drawn on the anterior view and then mirrored to match the posterior view. The geometric means of the ROI counts from the two orthogonal views are used to calculate the LSF value according to the equation described in the text. The LSF value calculated for the patient in the left panel was 6.4%; the 90Y-microsphere activity injected in this patient was therefore the full amount planned on the basis of the dosimetric estimate. A 20% reduction of the administered 90Y-microsphere activity was instead applied to the patient in the center panel (whose LSF was 12%), while the patient shown in the right panel was excluded from treatment because of an LSF exceeding 20%
Different models have been proposed and can be employed to calculate the 90Y activity to be administered that would allow delivery of the highest dose to the tumor while sparing normal liver tissue. The so-called partition model (based on the MIRD approach) takes into account three different compartments for radiation dose estimates, i.e., the liver tumor, the non-tumoral liver, and the lungs [37]. The basic assumption is that the LSF and the relative distribution of 99mTc-MAA in the tumor and non-tumor liver compartments (expressed as the T/N ratio) reliably predict the distribution of the 90Y-microspheres, which are administered a few days later during a second interventional radiology procedure in which the same radiologist attempts to replicate exactly the intra-arterial catheter position used during 99mTc-MAA administration. The activity of 90Y-microspheres to be administered can then be estimated using the LSF derived from the 99mTc-MAA scintigraphic images. This approach is adopted routinely when administering the glass 90Y-microspheres and has been shown to yield safe and reproducible estimates regarding expected toxicity and clinical outcomes.
When using the resin 90Y-microspheres, two additional methods can be employed to estimate the activity to be administered, i.e., the empiric method and the body surface area (BSA) method. The empiric method is based on the volume of the liver occupied by the tumor tissue expressed as a fraction of the overall liver volume, as estimated by cross-sectional imaging (either CT or MRI). A 90Y-microsphere activity of 2 GBq is recommended for a tumor/liver fraction <0.25, increasing to 2.5 GBq for tumor fractions between 0.25 and 0.5 and increasing further to 3 GBq for tumor fractions >0.5 (keeping in mind that a value above 0.7 constitutes per se an absolute contraindication to treatment).
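The empiric method's stepwise activity selection can be written down directly from the thresholds above (a sketch for illustration only):

```python
def empiric_activity_gbq(tumor_volume: float, liver_volume: float):
    """Empiric-method 90Y activity (GBq) from the tumor/liver volume fraction,
    per the thresholds in the text. Returns None when the fraction exceeds
    0.7 (absolute contraindication to treatment)."""
    fraction = tumor_volume / liver_volume
    if fraction > 0.7:
        return None   # tumor fraction > 0.7: treatment contraindicated
    if fraction < 0.25:
        return 2.0    # small tumor burden
    if fraction <= 0.5:
        return 2.5    # intermediate tumor burden
    return 3.0        # tumor fraction between 0.5 and 0.7
```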
The body surface area (BSA, expressed in square meters) method uses the following equation to calculate the 90Y-microsphere activity (A) to be administered:
$$ A\ (\mathrm{GBq})=\left(\mathrm{BSA}-0.2\right)+\frac{\mathrm{tumor}\ \mathrm{volume}}{\mathrm{total}\ \mathrm{liver}\ \mathrm{volume}} $$
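The BSA equation above translates directly into code. The Du Bois formula used below to compute BSA from height and weight is a common convention but an assumption here, since the text does not specify how BSA is obtained:

```python
def du_bois_bsa_m2(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the Du Bois formula; an assumed choice,
    as the text does not specify how BSA is derived."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def bsa_activity_gbq(bsa_m2: float, tumor_volume: float,
                     total_liver_volume: float) -> float:
    """90Y-microsphere activity (GBq) by the BSA method,
    per the equation in the text (volumes in consistent units)."""
    return (bsa_m2 - 0.2) + tumor_volume / total_liver_volume
```

For example, a patient with a BSA of 1.8 m² and a tumor occupying 20% of the liver volume would receive (1.8 − 0.2) + 0.2 = 1.8 GBq.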
Although these two methods, based on clinically derived data for intraoperative activity calculation, are routinely used in some centers, they are not optimal in certain situations, in particular when the target is well identified and the total volume of the three compartments is accurately known. Moreover, it has been demonstrated that both the BSA method and the empiric method frequently overestimate the activity to be administered to the patient [38, 39]. Therefore, the use of the MIRD partition model should be recommended when administering the resin 90Y-microspheres.
Patient-specific dosimetry requires accurate evaluation of the liver and tumor mass (usually derived from anatomic imaging such as CT) and of 99mTc-MAA biodistribution based on scintigraphic imaging. However, the predictive value of 99mTc-MAA scintigraphy as to the distribution of 90Y-microspheres in the liver is still a matter of debate [40, 41, 42]. Parameters that may induce some discordance between the distribution of 99mTc-MAA and that of the 90Y-microspheres include differences in catheter position during injection on the two separate occasions, physiologic changes in hepatic blood flow, tumor histopathology and tumor load, and differences in size and morphology between the 99mTc-MAA particles and the 90Y-microspheres [42]. These factors may all limit the concordance between 99mTc-MAA distribution as assessed in the pretreatment procedure and actual distribution of the 90Y-microspheres administered for therapy [41].
Although the ability of 99mTc-MAA to predict radiation dosimetry expected from 90Y-microsphere administration is far from ideal, most of the retrospective studies based on 99mTc-MAA scintigraphy for these estimates have shown a definite dose-response correlation with a threshold value from 120 to 205 Gy [43, 44] and even up to 500 Gy [45]. However, no prospective studies have confirmed these observations, and no single cutoff value that ensures tumor response has been identified as yet.
The dosimetry-based methods utilized to calculate the activity to be administered during radioembolization are described in detail in Chap. 13 of this book ("Radiobiology and Radiation Dosimetry in Nuclear Medicine").
90Y-Microsphere Administration
During the radioembolization session, the vessel perfusing the tumor is reached under fluoroscopic guidance, and the 90Y-microsphere suspension is injected into the artery feeding the target lesion. A delivery system that allows the administration in a step-by-step manner is useful to avoid early full embolization of the vasculature that prevents infusion of the total estimated activity. The infusion is usually done with alternating injections of iodinated contrast medium and sterile water/glucose solution when using the resin 90Y-microspheres or of saline solution during infusion of the glass 90Y-microspheres. Continuous fluoroscopy monitoring ensures that no stasis occurs during infusion and also serves to confirm that the flow of microspheres is similar to that observed during the prior angiographic workup.
Depending on the topography of the tumor, treatment can be either selective (i.e., directed to one liver lobe) or super-selective (directed to one liver segment).
Posttreatment Scan
Early posttreatment assessment of the pattern of 90Y-microsphere deposition by high-quality imaging is necessary to exclude radioactivity accumulation in the gastrointestinal tract and to evaluate the radiation-absorbed dose delivered to the tumor.
A post-therapy planar and SPECT/CT scan based on the bremsstrahlung emission generated by the high-energy β− particles of 90Y helps to confirm correct deposition of the radiolabeled microspheres in the tumor lesions [46]. However, the low-resolution and poor-quality imaging obtained from the bremsstrahlung emission does not allow an accurate quantification of microsphere distribution, especially when dealing with small lesions.
More recently, imaging with 90Y PET has been used to assess the distribution of the microspheres [47] (Fig. 5). PET imaging is made possible by the fact that, despite the commonly held notion that 90Y is a pure β− emitter, a certain fraction of 90Y decays (even if extremely small, i.e., 32 per million) actually occur through internal pair production, which generates 511 keV annihilation photons. The annihilation radiation generated by these emissions can be imaged by PET. In the case of radioembolization of liver tumors with 90Y-microspheres, the therapeutic agent remains concentrated in a relatively small volume at the administration site; therefore, even the extremely small fraction of 90Y decays occurring through internal pair production is sufficient to acquire clinically useful PET images for validation and dosimetric purposes [48, 49, 50, 51]. The better-resolution images provided by PET may allow easy detection of extrahepatic distribution of 90Y-microspheres and assessment of the absorbed dose delivered during the radioembolization procedure [47, 48].
Good correspondence in the patterns of intrahepatic distribution of radiolabeled particles injected into the right hepatic artery between pretreatment 99mTc-MAA SPECT/CT (left) and posttreatment 90Y-PET/CT (right). For SPECT/CT, the MIP image is displayed in upper right panel, the axial CT image in upper left panel, the fused axial SPECT/CT image in lower left panel, and the 3D surface volumetric rendering in lower right panel. Similar displays for PET/CT: MIP image in upper right panel, axial CT image in upper left panel, fused PET/CT image in lower left panel, and 3D surface volumetric rendering in lower right panel
Radioembolization with 90Y-microspheres can be performed as an outpatient procedure, and the patient can be discharged from the hospital on the same day of treatment or on the following day.
Patient Follow-Up and Assessment of Response to Therapy
Tumor response to radioembolization is monitored both clinically and radiologically. Routine follow-up includes blood chemistry to monitor possible toxicity due to treatment; in the case of HCC, measurement of the serum levels of the tumor-associated marker AFP serves to assess evolution of the malignant disease. Contrast CT is performed at 1 month posttreatment, while additional CT scans are performed every 3 months to assess response to treatment or progression of disease. Although tumor response to therapy can be assessed by classical RECIST criteria, specific criteria set by the World Health Organization (WHO) and by the European Association for the Study of the Liver (EASL) based on size parameters and on necrosis parameters, respectively, are employed to assess response in the target lesions [13, 52].
Metabolic assessment with [18F]FDG-PET provides useful prognostic information on the response of either primary or metastatic liver tumors to trans-arterial radioembolization with 90Y-microspheres [40, 53, 54, 55]. The superiority of functional metabolic imaging with PET versus the conventional morphology-based criteria such as RECIST for early assessment of tumor response to radioembolization with 90Y-microspheres has been demonstrated in different clinical settings such as HCC [56, 57], intrahepatic cholangiocellular carcinoma [55] (Fig. 6), metastatic CRC [58, 59] (Figs. 7 and 8), liver metastases from breast cancer [60], and metastatic neuroendocrine malignancies (the latter using 68Ga-DOTANOC as the PET tracer) [61] (Fig. 9).
Left panel: response to trans-arterial therapy with 90Y-microspheres in a patient with intrahepatic cholangiocarcinoma. (a) Axial fused pre-therapeutic [18F]FDG-PET/CT image; (b) corresponding slice of diagnostic CT; (c) axial fused post-therapeutic [18F]FDG-PET/CT image; (d) corresponding slice of diagnostic CT. The SUVmax declined by 70% 3 months after radioembolization, and the serum levels of CA 19-9 fell from 85.2 to 49.2 U/mL; the patient was still alive 12 months after radioembolization without evidence of progression within the liver. Right panel: Kaplan-Meier survival curves as a function of ΔSUV2SD. Responders (blue line) had a significantly (P < 0.05) longer survival than nonresponders (green line) (Modified and reproduced with permission from: Haug et al. [55])
Left panel: coronal PET (left), axial fused [18F]FDG-PET/CT images (a, c), and axial contrast-enhanced CT (ce-CT) images (b, d) in a patient with metastatic colorectal cancer before and 6 weeks after radioembolization with 90Y-microspheres. (a) Baseline PET/CT shows increased [18F]FDG uptake in metastases in segments I, II, IVa, VII, and VIII. (b) The ce-CT image before radioembolization shows some of the metabolically active metastases as low-attenuation lesions, but several of them are isointense compared to the liver parenchyma and are difficult to delineate, such as the lesions in segment VIII (arrow). (c) PET/CT after radioembolization shows an excellent partial response (PR) with marked reduction in the intensity and extent of uptake in the metastatic lesions. (d) Post-radioembolization ce-CT shows multiple new low-attenuation lesions, which are more apparent as they have become necrotic, such as the metastasis in segment VIII (arrow). This ce-CT image was incorrectly reported as showing disease progression. Right panel: Kaplan-Meier plots of progression-free survival (PFS) in relation to responses seen on [18F]FDG-PET/CT. Patients who had a PR or stable disease on [18F]FDG-PET/CT had median PFS of 12 and 5 months, respectively (Modified and reproduced with permission from: Zerizer et al. [58])
Left panel: axial fused [18F]FDG-PET/CT images before (a) and 4 weeks after (b) trans-arterial radioembolization with 90Y-microspheres of a patient with metastatic colorectal cancer; the marked metabolic response was associated with a survival of 12 months after treatment. Center panel: Axial fused [18F]FDG-PET/CT images before (a) and 4 weeks after (b) trans-arterial radioembolization with 90Y-microspheres of a patient with metastatic colorectal cancer; this metabolic nonresponder survived 5 months after treatment (Modified and reproduced with permission from: Sabet et al. [59])
Left upper panel: (a) 68Ga-DOTANOC PET axial slice acquired before trans-arterial radioembolization with 90Y-microspheres shows intense tracer accumulation in a patient with a large metastasis from a neuroendocrine neoplasm in the right hepatic lobe (arrow). (b) Fused unenhanced CT slice and 90Y PET axial slice acquired after administration of 90Y-microspheres shows radioactivity accumulation in the tumor mass with a necrotic core surrounded by a hot circular region. (c) 68Ga-DOTANOC PET axial slice acquired 6 weeks after radioembolization shows significant reduction of tracer uptake in the tumor uptake (arrow), consistent with a molecular response (ΔT/S was −73.4%); overall survival was 34 months. Left lower panel: (a) 68Ga-DOTANOC PET axial slice acquired before trans-arterial radioembolization with 90Y-microspheres shows intense tracer accumulation in a patient with a metastasis from neuroendocrine neoplasm in hepatic segment IV (arrow). (b) Fused unenhanced CT slice and 90Y-PET slice acquired after administration of 90Y-microspheres shows radioactivity accumulation in the tumor mass. (c) 68Ga-DOTANOC PET/CT axial slice acquired 6 weeks after treatment shows substantially unchanged tumor uptake (arrow), consistent with no response (ΔT/S was −24.6%). Overall survival was 23 months. Right panel: Kaplan-Meier survival analysis in relation to ΔT/S measured 6 weeks after radioembolization. Patients with ΔT/S less than −50% (dashed line) had significantly lower (P < 0.001) survival than those with ΔT/S more than −50% (solid line) (Modified and reproduced with permission from: Filippi et al. [61])
90Y-Microsphere Radioembolization Combined with Other Therapies
Radioembolization with 90Y-microspheres is a valid therapeutic option per se, preferably in patients with early stage inoperable liver-predominant malignancies. Nevertheless, it can be used also in combination with systemic molecular/chemotherapies in patients presenting with an intermediate or even advanced stage of HCC [62] or with unresectable liver-predominant metastases from colorectal cancer. Moreover, thanks to its ability to downsize/downgrade the disease, SIRT/TARE may be used as neoadjuvant therapy also in patients with HCC not meeting the criteria for resection, percutaneous ablation, or transplantation.
The clinical situations in which radioembolization can be combined with other therapies in patients with nonresectable/non-ablatable HCC can be summarized as follows:
Before planned resection or transplantation [63], radioembolization may be considered as a neoadjuvant treatment to reduce the tumor burden and simplify surgery or to stop/slow down tumor progression in order to keep patients on the transplant waiting list or to improve the long-term outcome, even achieving in some cases downstaging of the disease [64]. In selected clinical situations where the non-tumoral liver is too small in terms of functional reserve (thus hindering any type of resection), lobar or segmental selective radioembolization may lead to ipsilateral lobar or segmental parenchymal hypotrophy and contralateral lobar hypertrophy (ranging from 21% to 35%), allowing subsequent surgery [65].
As an alternative option when other ablative treatments cannot be applied. 90Y-microsphere radioembolization, as well as trans-arterial chemoembolization, may improve survival in poor-prognosis patients with portal vein occlusion, who are often excluded from targeted therapy [66].
In combination with systemic therapies. Although preliminary data suggest a synergistic beneficial effect of 90Y-microsphere radioembolization when combined with sorafenib (with associated tolerable toxicity) [67, 68, 69, 70], large-scale multicenter trials are still ongoing to confirm these data and to define the safety profile of this combination regimen and its impact on survival.
In patients with unresectable liver-predominant metastases from colorectal cancer, preliminary results of the SIRFLOX study indicate that radioembolization combined with systemic therapy as first-line treatment can lead to downstaging of the disease in a significant proportion of cases and to improved PFS for the liver lesions (but not overall PFS, while the follow-up phase is ongoing for overall survival) [71].
As a second-line treatment and in the salvage setting for liver-predominant metastatic colorectal cancer, there is clinical evidence that the combination of radioembolization and systemic chemotherapy using radiosensitizing drugs (i.e., oxaliplatin, 5-FU, and irinotecan) is safe, with preliminary results indicating a 79% response rate [72].
Radioembolization with 90Y-microspheres has been shown to be an effective procedure with a significant impact on survival of patients with either primary or secondary liver malignancies.
Primary Liver Tumors
Radioembolization as a treatment option has been extensively investigated for the most common primary liver malignancy, i.e., HCC, especially after Geschwind and coworkers published their landmark comprehensive analysis on the use of 90Y-microspheres for HCC. Besides demonstrating overall clinical safety and benefit of this therapy, the study also indicated better survival as treatment was employed earlier in the course of disease, i.e., in Okuda stage I (63% for 1-year survival and 628-day median survival) rather than in Okuda stage II patients (51% for 1-year survival and 384-day median survival, with P = 0.02) [73].
The response rates to radioembolization with 90Y-microspheres may vary widely not only because of variable tumor biology but also because of differences in evaluation times or in treatment intensity. Tumor shrinkage after therapy may take months to occur, with a median time to response of approximately 6 months according to WHO criteria [74]. Nevertheless, response in tumor size is not an adequate parameter to define all the antitumor effects of therapy. When combining different response criteria, such as tumor size and arterial contrast enhancement with the EASL response criteria, the overall tumor response rates vary between 40% and 90%, with a disease control rate in the targeted lesions of 80–100%. As assessed by changes in vascular enhancement, response in the treated tumor lesions occurs earlier than response in tumor size, at around 2 months after 90Y-microsphere radioembolization [74].
Surgery is the optimal standard of care with curative intent for patients with HCC; however, resection is possible only for lesions limited in size and number and if the liver function is well preserved; the latter is not a frequent condition considering that most HCCs originate in patients with liver cirrhosis. On the other hand, patients with a single lesion <5 cm in diameter, or with ≤3 lesions, all <3 cm, without extrahepatic metastases or portal vein thrombosis (PVT), are eligible for liver transplantation. Nevertheless, orthotopic liver transplantation has had a limited role in the management of patients with HCC due both to limited availability of donor organs and to dropout of patients because of tumor progression. Since radioembolization has been shown to slow the progression of HCC, this procedure may allow patients more time to wait for donor organs and thus increase their chance of undergoing liver transplant [75, 76, 77].
Patients whose disease is too advanced to meet transplant criteria, but who do not have malignant PVT or metastatic HCC, are good candidates for radioembolization. In fact, radioembolization has been shown to downstage the disease so that 56% of the patients who were initially stratified as non-eligible for transplant according to the Milan criteria became eligible for transplant after therapy; 8 out of 34 downstaged patients actually underwent subsequent liver transplantation, with overall survival of 84%, 54%, and 27% at 1, 2, and 3 years, respectively. In addition to downstaging, radioembolization also prolongs overall survival of these patients [75, 76, 77].
Survival benefit following radioembolization has been observed also in patients with malignant vascular involvement, with a 70% response rate according to EASL criteria [78]. The presence of distant metastases contraindicates treatment, as a survival benefit has not been demonstrated for this subset of patients.
Resection has a modest survival benefit in patients who have resectable intrahepatic cholangiocarcinoma (ICC). Some improvement in survival has been observed after trans-arterial chemoembolization, but toxicity associated with this treatment remains high. On the other hand, while radioembolization has been shown to be an effective treatment for HCC, its role in the management of ICC patients has not been extensively investigated. Nevertheless, a pilot study in 24 patients with biopsy-proven ICC has shown favorable tumor response and favorable survival outcomes following therapy with 90Y-microspheres, especially for patients with better ECOG performance status [79].
A recent systematic review on the use of TARE in the treatment of ICC identified 12 studies including a total of 73 patients. PR and SD at 3 months were reported in 28% and 54% of patients, respectively. In a pooled analysis, the overall weighted median survival was 15.5 months, and downstaging to surgery was achieved in seven patients [80]. The combination of TARE and chemotherapy as a strategy for downstaging ICC to achieve resectability has recently been proposed, with encouraging initial data [81]. However, when comparing different locoregional treatments for ICC, TARE may not be the most effective approach. In a comparative analysis, TARE was second to intra-arterial chemotherapy in terms of tumor response but was more effective than TACE or DEB-TACE both in terms of tumor response and in terms of overall survival. In fact, overall survival was 22.8 months for intra-arterial chemotherapy, 13.9 months for TARE, 12.4 months for TACE, and 12.3 months for DEB-TACE. Nevertheless, intra-arterial chemotherapy had the highest toxicity [82]. Despite the lack of randomized controlled trials, locoregional treatments appear to be somewhat more effective than the current standard chemotherapy regimens with oxaliplatin and gemcitabine [83]. For patients with unresectable ICC, trans-arterial radioembolization with 90Y-microspheres seems best suited for those who are not eligible for intra-arterial chemotherapy.
[18F]FDG-PET is the best independent predictor for patient outcome after radioembolization treatment, based on reduction in the metabolically active tumor volume at 3 months after therapy [55].
Metastatic Liver Tumors
Extrahepatic metastasis and comorbidities limit the role of surgical resection in patients with secondary liver tumors [84]. The use of radioembolization in these patients has been extensively investigated, either as a single treatment or in adjunct to systemic chemotherapy.
Metastatic Colorectal Carcinoma
Liver metastases from colorectal carcinoma (CRC) are resectable in less than 10% of the patients [84]. Radioembolization has been shown to be effective in the treatment of metastatic CRC to the liver. Candidates for radioembolization are those patients who have unresectable liver metastases and are on systemic chemotherapy or have failed to respond to first- or second-line chemotherapy. In these patients, [18F]FDG-PET has been shown to be more sensitive than CT for assessing tumor response to radioembolization [54, 85]. Furthermore, reduction of the hepatic metastatic load can be assessed quantitatively by [18F]FDG-PET, by evaluating the percent change of total liver SUV after treatment [53, 54].
Most of the studies on 90Y-microsphere TARE reported so far have been conducted in patients with chemorefractory liver-predominant metastatic CRC. A systematic review of 20 studies including a total of 979 patients treated with resin 90Y-microspheres has demonstrated the overall safety and efficacy of TARE for unresectable, chemorefractory metastatic CRC, with a median time to intrahepatic progression of 9 months and overall survival of 12 months [86].
In a randomized controlled clinical trial involving 74 patients, the combination of systemic therapy with radioembolization resulted in significantly better tumor response (44% objective response versus 17.6% in the control group), longer time to progression, and longer survival than systemic chemotherapy alone [87]; furthermore, the safety profile of the combined regimen was acceptable [87], and dose escalation studies have shown improved tumor response with increasing doses [88].
Several prospective trials have investigated the efficacy of TARE in combination with systemic chemotherapy versus systemic chemotherapy alone. In an early study, TARE combined with systemic 5-fluorouracil (5-FU) induced better objective response rates than 5-FU alone: 73% versus 0%, with time to progression of 18.6 months versus 3.6 months and overall survival of 29.4 months versus 12.8 months [89]. More recent prospective studies have evaluated chemotherapy regimens more up-to-date than 5-FU. In a first-line setting, TARE combined with FOLFOX4 achieved a 90% PR rate [90], while TARE with irinotecan in a second-line setting after failure of previous chemotherapy achieved an overall 87% response rate, with 48% PR and 39% SD [91]. In the SIRFLOX study, a randomized clinical trial including 530 patients, mFOLFOX6 with or without bevacizumab was compared with TARE + mFOLFOX6 with or without bevacizumab. While there was no difference in overall progression-free survival, there was a significant difference in progression-free survival in the liver, favoring the combination with TARE (20.5 months) over chemotherapy alone (12.6 months, with P = 0.002). Objective response rates were somewhat better with the combination therapy than with chemotherapy alone (76.4% versus 68.1%), but without reaching statistical significance (P = 0.113) [92].
Also the recently published results obtained by Hong and coworkers show that radioembolization is safe and effective as a salvage therapy in the management of metastatic CRC when compared with chemoembolization [93].
A recent systematic review on TARE in unresectable, chemorefractory metastatic CRC includes 20 studies for a total of 979 patients enrolled after failure of two to five lines of chemotherapy. TARE achieved CR, PR, and SD in 0% (range 0–6%), 31% (range 0–73%), and 40.5% (range 17–76%) of patients, respectively. The median time to intrahepatic progression was 9 months (range 6–16), and median overall survival was 12 months (range 8.3–36) [94]. In a large multicenter trial, overall survival was strongly dependent on previous treatments; in particular, median survival after 90Y-microsphere radioembolization was 13 months (95% CI 10.5–14.6) when TARE was performed as second-line treatment, 9 months (95% CI 7.8–11.0) for third-line treatment, and 8.1 months (95% CI 6.4–9.3) for fourth-line treatment and over, respectively, with P < 0.001 [95].
Metastatic Neuroendocrine Tumors
Several neuroendocrine tumors, such as carcinoids, VIPomas, gastrinomas, and somatostatinomas, metastasize to the liver. These lesions are often well arterialized and represent a target for trans-arterial therapies, similarly to HCC. The goals of treatment in these patients are both control of symptoms and survival. Systemic chemotherapy and ablative procedures have all been shown to produce modest benefit in these patients. Patients with unresectable disease are considered candidates for radioembolization. The results obtained in a multicenter study including 148 patients have shown that radioembolization of metastatic neuroendocrine tumors to the liver is safe and effective, with very high response rates: any response in 95.1% of patients and progressive disease in only 4.9%. Responses were also durable, lasting even longer than 2 years, especially in non-pancreatic NETs [96, 97].
Absolute and Relative Contraindications
Two absolute contraindications exist for intra-arterial therapy with 90Y-microspheres. The main contraindication is represented by a pretreatment 99mTc-MAA scan demonstrating significant hepato-pulmonary shunting. This occurrence would in fact result in the delivery of a radiation dose to the lungs greater than 30 Gy with a single infusion or as much as 50 Gy for multiple infusions. The second contraindication is the inability to prevent deposition of the radiolabeled microspheres in the gastrointestinal tract. Relative contraindications include reduced pulmonary function, inadequate functional liver reserve, serum creatinine >2.0 mg/dL, and platelet count <75 × 109/L. When such relative contraindications exist, clinical judgment should be exercised to determine whether a patient is an appropriate candidate for the procedure, taking into consideration either 90Y-microspheres or 131I-lipiodol [98].
Early and Late Toxicities
The most common clinical toxicity observed with 90Y-microsphere therapy is a mild post-embolic syndrome. As with other embolic treatments such as trans-arterial chemoembolization, this syndrome includes fatigue, vague abdominal discomfort, pain, and fever [99, 100]. Other toxicities that can occur as a result of nontarget radiation (and should therefore be avoided by adequate pre-therapy procedures and accurate treatment planning) include cholecystitis, gastric ulceration, gastroduodenitis, pancreatitis, radiation pneumonitis, and RILD [75, 78, 79, 101]. With meticulous planning, careful patient selection, and proper technique, most of these toxicities can be mitigated. Finally, a common hematologic toxicity in the immediate post-radioembolization period is lymphopenia, not an unexpected finding given the radiosensitivity of lymphocytes. Despite this possible occurrence, no infectious complications have been reported [13, 33, 52, 53, 54, 99, 100, 101, 102].
Perspectives on Radioembolization for Primary and Metastatic Liver Tumors
A growing body of evidence supports the use of TARE with 90Y-microspheres as an effective monotherapy in patients with HCC. Future avenues to be explored concern combination therapies with systemic and locoregional agents, specifically sorafenib and TARE, in the adjuvant or neoadjuvant setting. Although the mechanisms of action of the two approaches can, in principle, be considered pathophysiologically complementary, there are currently scarce data confirming an actual clinical benefit of regimens based on this combination.
In the only prospective study of combination therapy with TARE and sorafenib reported so far, the objective response rate was 25%, somewhat disappointing relative to expectations based on the underlying rationale. Moreover, 39% of the patients could not complete the prescribed dose of sorafenib because of important side effects [67]. On the other hand, initial safety results in the first 40 patients enrolled in a randomized controlled trial comparing TARE with resin 90Y-microspheres followed by sorafenib versus sorafenib alone indicate similar tolerability in the two treatment arms [69]. Long-term outcome data from ongoing randomized controlled trials such as SORAMIC, SARAH, and SIRveNIB (based on resin 90Y-microspheres) or STOP-HCC (using glass 90Y-microspheres) have not yet been published.
Regarding treatment of metastatic disease, randomized controlled trials comparing TARE with up-to-date chemotherapy regimens are needed, since the few data available so far come from studies comparing resin 90Y-microspheres with chemotherapeutic regimens that are now obsolete, or from studies lacking survival data. An ongoing randomized controlled phase III trial is evaluating glass 90Y-microspheres plus second-line chemotherapy after failure of first-line chemotherapy, versus second-line chemotherapy alone, for metastatic CRC; 360 patients have been enrolled, and the first results of the analysis are expected to be released soon.
FOXFIRE Global is an international phase III study assessing the value of adding resin 90Y-microspheres to first-line treatment with FOLFOX6m (NCT01721954). Enrollment of 530 patients has been completed, and the preliminary results are encouraging: by competing-risk analysis, median PFS in the liver was significantly prolonged from 12.6 months in control patients to 20.5 months (P = 0.002) in patients receiving resin 90Y-microspheres, a 31% reduction in the risk of disease progression in the liver. Pooling the data of studies with similar designs (SIRFLOX, FOXFIRE, and FOXFIRE Global), comprising over 1,100 patients, will provide sufficient statistical power to assess the survival benefit of adding resin 90Y-microspheres to current chemotherapy regimens. Survival data from the three combined studies are expected to be released in 2017.
Further ongoing studies evaluate the role of TARE in uveal melanoma with liver metastasis (SIRUM trial, NCT01473004) or the combination of TARE and pasireotide and everolimus in liver metastatic neuroendocrine tumors (NCT01469572) [103].
References
1. El-Serag HB. Epidemiology of viral hepatitis and hepatocellular carcinoma. Gastroenterology. 2012;142:1264–73.
2. van der Pool AE, Damhuis RA, Ijzermans JN, de Wilt JH, Eggermont AM, Kranse R, Verhoef C. Trends in incidence, treatment and survival of patients with stage IV colorectal cancer: a population-based series. Colorectal Dis. 2012;14:56–61.
3. Mazzaferro V, Regalia E, Doci R, et al. Liver transplantation for the treatment of small hepatocellular carcinomas in patients with cirrhosis. N Engl J Med. 1996;334:693–9.
4. Ingold JA, Reed GB, Kaplan HS, Bagshaw MA. Radiation hepatitis. Am J Roentgenol Radium Ther Nucl Med. 1965;93:200–8.
5. Lawrence TS, Robertson JM, Anscher MS, Jirtle RL, Ensminger WD, Fajardo LF. Hepatic toxicity resulting from cancer treatment. Int J Radiat Oncol Biol Phys. 1995;31:1237–48.
6. Yu H, Burke CT. Comparison of percutaneous ablation technologies in the treatment of malignant liver tumors. Semin Interv Radiol. 2014;31:129–37.
7. Stuart K. Chemoembolization in the management of liver tumors. Oncologist. 2003;8:425–37.
8. Geschwind JFH. Chemoembolization for hepatocellular carcinoma: where does the truth lie? J Vasc Interv Radiol. 2002;13:991–4.
9. Varela M, Real MI, Burrel M, et al. Chemoembolization of hepatocellular carcinoma with drug eluting beads: efficacy and doxorubicin pharmacokinetics. J Hepatol. 2007;46:474–81.
10. Llovet JM, Real MI, Montana X, et al. Arterial embolisation or chemoembolisation versus symptomatic treatment in patients with unresectable hepatocellular carcinoma: a randomised controlled trial. Lancet. 2002;359(9319):1734–9.
11. Kettenbach J, Stadler A, Katzler IV, et al. Drug-loaded microspheres for the treatment of liver cancer: review of current results. Cardiovasc Intervent Radiol. 2008;31:468–76.
12. European Association for the Study of the Liver, European Organisation for Research and Treatment of Cancer. EASL–EORTC clinical practice guidelines: management of hepatocellular carcinoma. J Hepatol. 2012;56:908–43.
13. Salem R, Lewandowski RJ, Atassi B, et al. Treatment of unresectable hepatocellular carcinoma with use of 90Y microspheres (TheraSphere): safety, tumor response, and survival. J Vasc Interv Radiol. 2005;16:1627–39.
14. Raoul JL, Bourguet P, Bretagne JF, et al. Hepatic artery injection of I-131-labelled lipiodol. I. Biodistribution study results in patients with hepatocellular carcinoma. Radiology. 1988;168:541–5.
15. Nakajo M, Kobayashi H, Shimabukuro K, et al. Biodistribution and in vivo kinetics of iodine-131 lipiodol infused via the hepatic artery of patients with hepatic cancers. J Nucl Med. 1988;29:1066–77.
16. Smits ML, Nijsen JF, van den Bosch MA, Lam MG, Vente MA, Huijbregts JE, et al. Holmium-166 radioembolization for the treatment of patients with liver metastases: design of the phase I HEPAR trial. J Exp Clin Cancer Res. 2010;29:70.
17. Nowicki ML, Cwikla JB, Sankowski AJ, Shcherbinin S, Grimmes J, Celler A, et al. Initial study of radiological and clinical efficacy radioembolization using 188Re-human serum albumin (HSA) microspheres in patients with progressive, unresectable primary or secondary liver cancers. Med Sci Monit. 2014;20:1353–62.
18. Bhattacharya S, Dhillon AP, Winslet MC, et al. Human liver cancer cells and endothelial cells incorporate iodised oil. Br J Cancer. 1996;73:877–81.
19. Madsen MT, Park CH, Thakur ML. Dosimetry of iodine-131 ethiodol in the treatment of hepatoma. J Nucl Med. 1988;29:1038–44.
20. Monsieurs MA, Bacher K, Brans B, et al. Patient dosimetry for 131I-lipiodol therapy. Eur J Nucl Med Mol Imaging. 2003;30:554–61.
21. Bhattacharya S, Novell JR, Dusheiko GM, Hilson AJ, Dick R, Hobbs KE. Epirubicin-lipiodol chemotherapy versus 131iodine-lipiodol radiotherapy in the treatment of unresectable hepatocellular carcinoma. Cancer. 1995;76:2202–10.
22. Yoo HS, Park CH, Lee JT, et al. Small hepatocellular carcinoma: high dose internal radiation therapy with superselective intra-arterial injection of I-131-labeled Lipiodol. Cancer Chemother Pharmacol. 1994;33:S128–33.
23. Leung WT, Lau WY, Ho S, et al. Selective internal radiation therapy with intra-arterial iodine-131-lipiodol in inoperable hepatocellular carcinoma. J Nucl Med. 1994;35:1313–8.
24. Raoul JL, Guyader D, Bretagne JF, et al. Randomized controlled trial for hepatocellular carcinoma with portal vein thrombosis: intra-arterial iodine-131-iodized oil versus medical support. J Nucl Med. 1994;35:1782–7.
25. Boucher E, Garin E, Guillygomac'h A, Olivie D, Boudjema K, Raoul JL. Intra-arterial injection of iodine-131-labeled lipiodol for treatment of hepatocellular carcinoma. Radiother Oncol. 2007;82:76–82.
26. Partensky C, Sassolas G, Henry L, Paliard P, Maddern GJ. Intra-arterial iodine 131-labeled lipiodol as adjuvant therapy after curative liver resection for hepatocellular carcinoma: a phase 2 clinical study. Arch Surg. 2000;135:1298–300.
27. Lau WY, Lai EC, Leung TW, Yu SC. Adjuvant intra-arterial iodine-131-labeled lipiodol for resectable hepatocellular carcinoma: a prospective randomized trial-update on 5-year and 10-year survival. Ann Surg. 2008;247:43–8.
28. Lee YS, Jeong JM, Kim YJ, et al. Synthesis of 188Re-labelled long chain alkyl diaminedithiol for therapy of liver cancer. Nucl Med Commun. 2002;23:237–42.
29. De Ruyck K, Lambert B, Bacher K, et al. Biologic dosimetry of 188Re-HDD/lipiodol versus 131I-lipiodol therapy in patients with hepatocellular carcinoma. J Nucl Med. 2004;45:612–8.
30. Kumar A, Srivastava DN, Chau TT, et al. Inoperable hepatocellular carcinoma: transarterial 188Re HDD-labeled iodized oil for treatment. Prospective multicenter clinical trial. Radiology. 2007;243:509–19.
31. Sato K, Lewandowski RJ, Bui JT, et al. Treatment of unresectable primary and metastatic liver cancer with yttrium-90 microspheres (TheraSphere): assessment of hepatic arterial embolization. Cardiovasc Intervent Radiol. 2006;29:522–9.
32. Salem R, Thurston KG. Radioembolization with 90Yttrium microspheres: a state-of-the-art brachytherapy treatment for primary and secondary liver malignancies. Part 1: technical and methodologic considerations. J Vasc Interv Radiol. 2006;17:1251–78.
33. Hamami ME, Poeppel TD, Müller S, Heusner T, Bockisch A, Hilgard P, Antoch G. SPECT/CT with 99mTc-MAA in radioembolization with 90Y microspheres in patients with hepatocellular cancer. J Nucl Med. 2009;50:688–92.
34. Grosser OS, Rufi J, Kupitz D, Pethe A, Ulrich G, Genseke P, et al. Pharmacokinetics of 99mTc-MAA- and 99mTc-HSA microspheres used in preradioembolization dosimetry: influence on the liver–lung shunt. J Nucl Med. 2016;57:925–7.
35. Sabet A, Ahmadzadehfar H, Muckle M, Haslerud T, Wilhelm K, Biersack HJ, et al. Significance of oral administration of sodium perchlorate in planning liver-directed radioembolisation. J Nucl Med. 2011;52:1063–7.
36. Dale RG. Dose-rate effects in targeted radiotherapy. Phys Med Biol. 1996;41:1871–84.
37. Ho S, Lau WY, Leung TW, Chan M, Chan KW, Lee WY, et al. Partition model for estimating radiation doses from yttrium-90 microspheres in treating hepatic tumors. Eur J Nucl Med. 1996;23:947–52.
38. Kennedy AS, Dezarn WA, McNeillie P, Overton C, England M, Sailer SL. Fractionation, dose selection, and response of hepatic metastases of neuroendocrine tumors after 90Y-microsphere brachytherapy. Brachytherapy. 2006;5:103–4.
39. Kennedy AS, Dezarn WA, McNeillie P, Overton C, England M, Sailer SL. Dose selection of resin 90Y-microspheres for liver brachytherapy: a single center review. Brachytherapy. 2006;5:104.
40. Flamen P, Vanderlinden B, Delatte P, Ghanem G, Ameye L, Van Den Eyden M, et al. Multimodality imaging can predict the metabolic response of unresectable colorectal liver metastases to radioembolisation therapy with yttrium-90 labeled resin microspheres. Phys Med Biol. 2008;53:6591–693.
41. Jiang M, Fischman A, Nowakowski FS, et al. Segmental perfusion differences on paired Tc-99m macroaggregated albumin (MAA) hepatic perfusion imaging and yttrium-90 (Y-90) bremsstrahlung imaging studies in SIR-sphere radioembolization: associations with angiography. J Nucl Med Radiat Ther. 2012;3:122.
42. Wondergem M, Smits MLJ, Elschot M, de Jong HWAM, Verkooijen HM, van den Bosch MAAJ, Nijsen JFW, Lam MGEH. 99mTc-Macroaggregated albumin poorly predicts the intrahepatic distribution of 90Y resin microspheres in hepatic radioembolization. J Nucl Med. 2013;54:1294–301.
43. Lau WY, Sangro B, Chen PJ, Cheng SQ, Chow P, Lee RC, et al. Treatment for hepatocellular carcinoma with portal vein tumor thrombosis: the emerging role for radioembolization using yttrium-90. Oncology. 2013;84:311–8.
44. Garin E, Lenoir L, Rolland Y, Edeline J, Mesbah H, Laffont S, et al. Dosimetry based on 99mTc-macroaggregated albumin SPECT/CT accurately predicts tumor response and survival in hepatocellular carcinoma patients treated with 90Y-loaded glass microspheres: preliminary results. J Nucl Med. 2012;53:255–63.
45. Mazzaferro V, Sposito C, Bhoori S, Romito R, Chiesa C, Morosi C, et al. Yttrium-90 radioembolization for intermediate-advanced hepatocellular carcinoma: a phase 2 study. Hepatology. 2013;57:1826–37.
46. Ahmadzadehfar H, Muckle M, Sabet A, Wilhelm K, Kuhl C, Biermann K, et al. The significance of bremsstrahlung SPECT/CT after yttrium-90 radioembolisation treatment in the prediction of extrahepatic side effects. Eur J Nucl Med Mol Imaging. 2011;39:309–15.
47. Lhommel R, Goffette P, del Eynde V, Jamar F, Pauwels S, Bilbao JI, et al. Yttrium-90 TOF PET scan demonstrates high-resolution biodistribution after liver SIRT. Eur J Nucl Med Mol Imaging. 2009;36:1696.
48. Lhommel R, van Elmbt L, Goffette P, et al. Feasibility of 90Y TOF PET-based dosimetry in liver metastasis therapy using SIR-spheres. Eur J Nucl Med Mol Imaging. 2010;37:1654–62.
49. Kao YH, Tan EH, Lim KY, Eng CE, Goh SW. Yttrium-90 internal pair production imaging using first generation PET/CT provides high resolution images for qualitative diagnostic purposes. Br J Radiol. 2012;85:1018–9.
50. Wright CL, Zhang J, Tweedle MF, Knopp MV, Hall NC. Theranostic imaging of yttrium-90. Biomed Res Int. 2015;2015:481279.
51. Gnesin S, Canetti L, Adib S, Cherbuin N, Silva-Monteiro M, Bize P, et al. Partition model based 99mTc-MAA SPECT/CT predictive dosimetry compared to 90Y TOF PET/CT post treatment dosimetry in radioembolisation of hepatocellular carcinoma: a quantitative agreement comparison. J Nucl Med. 2016 Jun 15 [Epub ahead of print].
52. Riaz A, Memon K, Miller FH, et al. Role of the EASL, RECIST, and WHO response guidelines alone or in combination for hepatocellular carcinoma: radiologic-pathologic correlation. J Hepatol. 2011;54:695–704.
53. Wong CY, Qing F, Savin M, et al. Reduction of metastatic load to liver after intraarterial hepatic yttrium-90 radioembolization as evaluated by [18F]fluorodeoxyglucose positron emission tomographic imaging. J Vasc Interv Radiol. 2005;16:1101–6.
54. Miller FH, Keppke AL, Reddy D, Huang J, Jin J, Mulcahy MF, Salem R. Response of liver metastases after treatment with yttrium-90 microspheres: role of size, necrosis, and PET. AJR Am J Roentgenol. 2007;188:776–83.
55. Haug AR, Heinemann V, Bruns CJ, Hoffmann R, Jakobs T, Bartenstein P, Hacker M. 18F-FDG PET independently predicts survival in patients with cholangiocellular carcinoma treated with 90Y microspheres. Eur J Nucl Med Mol Imaging. 2011;38:1037–45.
56. Sabet A, Ahmadzadehfar H, Bruhman J, Sabet A, Meyer C, Wasmuth JC, et al. Survival in patients with hepatocellular carcinoma treated with 90Y-microsphere radioembolization. Prediction by 18F-FDG PET. Nuklearmedizin. 2014;53:39–45.
57. Hartenbach M, Weber S, Albert NL, Hartenbach S, Hirtl A, Zacherl MJ, et al. Evaluating treatment response of radioembolization in intermediate-stage hepatocellular carcinoma patients using 18F-fluoroethylcholine PET/CT. J Nucl Med. 2015;56:1661–6.
58. Zerizer I, Al-Nahhas A, Towey D, Tait P, Ariff B, Wasan H, et al. The role of early 18F-FDG PET/CT in prediction of progression-free survival after 90Y radioembolization: comparison with RECIST and tumour density criteria. Eur J Nucl Med Mol Imaging. 2012;39:1391–9.
59. Sabet A, Meyer C, Aouf A, Sabet A, Ghamari S, Pieper CC, et al. Early post-treatment FDG PET predicts survival after 90Y microsphere radioembolization in liver-dominant metastatic colorectal cancer. Eur J Nucl Med Mol Imaging. 2015;42:370–6.
60. Haug AR, Tiega Donfack BP, Trumm C, Zech CJ, Michl M, Laubender RP, et al. 18F-FDG PET/CT predicts survival after radioembolization of hepatic metastases from breast cancer. J Nucl Med. 2012;53:371–7.
61. Filippi L, Scopinaro F, Pelle G, Cianni R, Salvatori R, Schillaci O, et al. Molecular response assessed by 68Ga-DOTANOC and survival after 90Y microsphere therapy in patients with liver metastases from neuroendocrine tumours. Eur J Nucl Med Mol Imaging. 2016;43:432–40.
62. Fidelman N, Kerlan Jr RK. Transarterial chemoembolization and 90Y radioembolization for hepatocellular carcinoma: review of current applications beyond intermediate-stage disease. AJR Am J Roentgenol. 2015;205:742–52.
63. Braat AJ, Huijbregts JE, Molenaar IQ, Borel Rinkes IH, van den Bosch MA, Lam MG. Hepatic radioembolization as a bridge to liver surgery. Front Oncol. 2014;4:199.
64. Lewandowski RJ, Donahue L, Chokechanachaisakul A, Kulik L, Mouli S, Caicedo J, et al. 90Y radiation lobectomy: outcomes following surgical resection in patients with hepatic tumors and small future liver remnant volumes. J Surg Oncol. 2016;114:99–105.
65. Teo JY, Allen Jr JC, Ng DC, Choo SP, Tai DW, Chang JP, et al. A systematic review of contralateral liver lobe hypertrophy after unilobar selective internal radiation therapy with Y90. HPB (Oxford). 2016;18:7–12.
66. Johnson GE, Monsky WL, Valji K, Hippe DS, Padia SA. Yttrium-90 radioembolization as a salvage treatment following chemoembolization for hepatocellular carcinoma. J Vasc Interv Radiol. 2016;27:1123–9.
67. Chow PK, Poon DY, Khin MW, Singh H, Han HS, Goh AS, Asia-Pacific Hepatocellular Carcinoma Trials Group, et al. Multicenter phase II study of sequential radioembolization-sorafenib therapy for inoperable hepatocellular carcinoma. PLoS One. 2014;9(3):e90909.
68. Kulik L, Vouche M, Koppe S, Lewandowski RJ, Mulcahy MF, Ganger D, et al. Prospective randomized pilot study of Y90 +/− sorafenib as bridge to transplantation in hepatocellular carcinoma. J Hepatol. 2014;61:309–17.
69. Ricke J, Bulla K, Kolligs F, Peck-Radosavljevic M, Reimer P, Sangro B, et al. Safety and toxicity of radioembolization plus sorafenib in advanced hepatocellular carcinoma: analysis of the European multicentre trial SORAMIC. Liver Int. 2015;35:620–6.
70. Lorenzin D, Pravisani R, Leo CA, Bugiantella W, Soardo G, Carnelutti A, et al. Complete remission of unresectable hepatocellular carcinoma after combined sorafenib and adjuvant yttrium-90 radioembolization. Cancer Biother Radiopharm. 2016;31:65–9.
71. Sangha BS, Nimeiri H, Hickey R, Salem R, Lewandowski RJ. Radioembolization as a treatment strategy for metastatic colorectal cancer to the liver: what can we learn from the SIRFLOX trial? Curr Treat Options Oncol. 2016;17:26.
72. Dutton SJ, Kenealy N, Love SB, Wasan HS, Sharma RA, FOXFIRE Protocol Development Group and the NCRI Colorectal Clinical Study Group. FOXFIRE protocol: an open-label, randomised, phase III trial of 5-fluorouracil, oxaliplatin and folinic acid (OxMdG) with or without interventional selective internal radiation therapy (SIRT) as first-line treatment for patients with unresectable liver-only or liver-dominant metastatic colorectal cancer. BMC Cancer. 2014;14:497.
73. Geschwind JF, Salem R, Carr BI, et al. Yttrium-90 microspheres for the treatment of hepatocellular carcinoma. Gastroenterology. 2004;127:S194–205.
74. Salem R, Lewandowski RJ, Mulcahy MF, et al. Radioembolisation for hepatocellular carcinoma using Yttrium-90 microspheres: a comprehensive report of long-term outcomes. Gastroenterology. 2010;138:52–64.
75. Kulik LM, Atassi B, van Holsbeeck L, et al. Yttrium-90 microspheres (TheraSphere®) treatment of unresectable hepatocellular carcinoma: downstaging to resection, RFA and bridge to transplantation. J Surg Oncol. 2006;94:572–86.
76. Tohme S, Sukato D, Chen HW, Amesur N, Zajko AB, Humar A, et al. Yttrium-90 radioembolization as a bridge to liver transplantation: a single-institution experience. J Vasc Interv Radiol. 2013;24:1632–8.
77. Abdelfattah MR, Al-Sebayel M, Broering D, Alsuhaibani H. Radioembolization using yttrium-90 microspheres as bridging and downstaging treatment for unresectable hepatocellular carcinoma before liver transplantation: initial single-center experience. Transplant Proc. 2015;47:408–11.
78. Kulik LM, Carr BI, Mulcahy MF, et al. Safety and efficacy of 90Y radiotherapy for hepatocellular carcinoma with and without portal vein thrombosis. Hepatology. 2007;41:71–81.
79. Ibrahim SM, Mulcahy MF, Lewandowski RJ, et al. Treatment of unresectable cholangiocarcinoma using yttrium-90 microspheres: results from a pilot study. Cancer. 2008;113:2119–28.
80. Al-Adra DP, Gill RS, Axford SJ, Shi X, Kneteman N, Liau SS. Treatment of unresectable intrahepatic cholangiocarcinoma with yttrium-90 radioembolization: a systematic review and pooled analysis. Eur J Surg Oncol. 2015;41:120–7.
81. Rayar M, Sulpice L, Edeline J, Garin E, Levi Sandri GB, Meunier B, et al. Intra-arterial yttrium-90 radioembolization combined with systemic chemotherapy is a promising method for downstaging unresectable huge intrahepatic cholangiocarcinoma to surgical treatment. Ann Surg Oncol. 2015;22:3102–8.
82. Boehm LM, Jayakrishnan TT, Miura JT, Zacharias AJ, Johnston FM, Turaga KK, et al. Comparative effectiveness of hepatic artery based therapies for unresectable intrahepatic cholangiocarcinoma. J Surg Oncol. 2015;111:213–20.
83. Valle J, Wasan H, Palmer DH, Cunningham D, Anthoney A, Maraveyas A, et al. Cisplatin plus gemcitabine versus gemcitabine for biliary tract cancer. N Engl J Med. 2010;362:1273–81.
84. Welsh JS, Kennedy AS, Thomadsen B. Selective internal radiation therapy (SIRT) for liver metastases secondary to colorectal adenocarcinoma. Int J Radiat Oncol Biol Phys. 2006;66:S62–73.
85. Wong CY, Salem R, Raman S, Gates VL, Dworkin HJ. Evaluating 90Y-glass microsphere treatment response of unresectable colorectal liver metastases by [18F]FDG PET: a comparison with CT or MRI. Eur J Nucl Med Mol Imaging. 2002;29:815–20.
86. Van den Eynde M, Flamen P, El Nakadi I, Liberale G, Delatte P, Larsimont D, Hendlisz A. Inducing resectability of chemotherapy refractory colorectal liver metastasis by radioembolization with yttrium-90 microspheres. Clin Nucl Med. 2008;33:697–9.
87. Gray B, Van Hazel G, Hope M, et al. Randomised trial of SIR-spheres plus chemotherapy vs. chemotherapy alone for treating patients with liver metastases from primary large bowel cancer. Ann Oncol. 2001;12:1711–20.
88. Goin JE, Dancey JE, Hermann GA, Sickles CJ, Roberts CA, MacDonald JS. Treatment of unresectable metastatic colorectal carcinoma to the liver with intrahepatic Y-90 microspheres: a dose-ranging study. World J Nucl Med. 2003;2:216–25.
89. Van Hazel G, Blackwell A, Anderson J, Price D, Moroz P, Bower G, Cardaci G, Gray B. Randomised phase 2 trial of SIR-Spheres plus fluorouracil/leucovorin chemotherapy versus fluorouracil/leucovorin chemotherapy alone in advanced colorectal cancer. J Surg Oncol. 2004;88:78–85.
90. Sharma RA, Van Hazel GA, Morgan B, Berry DP, Blanshard K, Price D, Bower G, et al. Radioembolization of liver metastases from colorectal cancer using yttrium-90 microspheres with concomitant systemic oxaliplatin, fluorouracil, and leucovorin chemotherapy. J Clin Oncol. 2007;25:1099–106.
91. van Hazel GA, Pavlakis N, Goldstein D, Olver IN, Tapner MJ, Price D, et al. Treatment of fluorouracil-refractory patients with liver metastases from colorectal cancer by using yttrium-90 resin microspheres plus concomitant systemic irinotecan chemotherapy. J Clin Oncol. 2009;27:4089–95.
92. Gibbs P, Heinemann V, Sharma NK, Findlay MPN, Ricke J, Gebski V, SIRFLOX Study Group, et al. SIRFLOX: randomized phase III trial comparing first-line mFOLFOX6 ± bevacizumab (bev) versus mFOLFOX6 + selective internal radiation therapy (SIRT) ± bev in patients (pts) with metastatic colorectal cancer (mCRC). J Clin Oncol. 2015;33(Suppl):3502.
93. Hong K, McBride JD, Georgiades CS, et al. Salvage therapy for liver-dominant colorectal metastatic adenocarcinoma: comparison between transcatheter arterial chemoembolization versus yttrium-90 radioembolization. J Vasc Interv Radiol. 2009;20:360–7.
94. Saxena A, Bester L, Shan L, Perera M, Gibbs P, Meteling B, et al. A systematic review on the safety and efficacy of yttrium-90 radioembolization for unresectable, chemorefractory colorectal cancer liver metastases. J Cancer Res Clin Oncol. 2014;140:537–47.
95. Kennedy AS, Ball D, Cohen SJ, Cohn M, Coldwell DM, Drooz A, et al. Multicenter evaluation of the safety and efficacy of radioembolization in patients with unresectable colorectal liver metastases selected as candidates for 90Y resin microspheres. J Gastrointest Oncol. 2015;6:134–42.
96. Rhee TK, Lewandowski RJ, Liu DM, et al. 90Y radioembolization for metastatic neuroendocrine liver tumors: preliminary results from a multi-institutional experience. Ann Surg. 2008;247:1029–35.
97. Kennedy AS, Dezarn WA, McNeillie P, et al. Radioembolization for unresectable neuroendocrine hepatic metastases using resin 90Y-microspheres: early results in patients. Am J Clin Oncol. 2008;31:271–9.
98. Giammarile F, Bodei L, Chiesa C, Flux G, Forrer F, Kraeber-Bodere F, et al. EANM procedure guideline for the treatment of liver cancer and liver metastases with intra-arterial radioactive compounds. Eur J Nucl Med Mol Imaging. 2011;38:1393–406.
99. Shepherd FA, Rotstein LE, Houle S, Yip TC, Paul K, Sniderman KW. A phase I dose escalation trial of yttrium-90 microspheres in the treatment of primary hepatocellular carcinoma. Cancer. 1992;70:2250–4.
100. Yan ZP, Lin G, Zhao HY, Dong YH. An experimental study and clinical pilot trials on yttrium-90 glass microspheres through the hepatic artery for treatment of primary liver cancer. Cancer. 1993;72:3210–5.
101. Lau WY, Leung WT, Ho S, Cotton LA, Ensminger WD, Shapiro B. Treatment of inoperable hepatocellular carcinoma with intrahepatic arterial yttrium-90 microspheres: a phase I and II study. Br J Cancer. 1994;70:994–9.
102. Andrews JC, Walker SC, Ackermann RJ, Cotton LA, Ensminger WD, Shapiro B. Hepatic radioembolization with yttrium-90 containing glass microspheres: preliminary results and clinical follow-up. J Nucl Med. 1994;35:1637–44.
103. Mahnken AH. Current status of transarterial radioembolization. World J Radiol. 2016;8:449–59.
© Springer International Publishing AG 2016
Regional Center of Nuclear Medicine, University of Pisa, Pisa, Italy
Cite this entry as:
Boni G., Guidoccio F., Volterrani D., Mariani G. (2016) Radionuclide Therapy of Tumors of the Liver and Biliary Tract. In: Strauss H., Mariani G., Volterrani D., Larson S. (eds) Nuclear Oncology. Springer, Cham
Accepted 16 August 2016
DOI https://doi.org/10.1007/978-3-319-26067-9_51-1
Barber paradox
The barber paradox is a puzzle derived from Russell's paradox. It was used by Bertrand Russell as an illustration of the paradox, though he attributes it to an unnamed person who suggested it to him.[1] The puzzle shows that an apparently plausible scenario is logically impossible. Specifically, it describes a barber who is defined such that he both shaves himself and does not shave himself, which implies that no such barber exists.[2][3]
This article is about a paradox of self-reference. For an unrelated paradox in the theory of logical conditionals with a similar name, introduced by Lewis Carroll, see Barbershop paradox.
Paradox
The barber is the "one who shaves all those, and those only, who do not shave themselves". The question is, does the barber shave himself?[1]
Any answer to this question results in a contradiction: The barber cannot shave himself, as he only shaves those who do not shave themselves. Thus, if he shaves himself he ceases to be the barber specified. Conversely, if the barber does not shave himself, then he fits into the group of people who would be shaved by the specified barber, and thus, as that barber, he must shave himself.
In its original form, this paradox has no solution, as no such barber can exist. The question is a loaded question in that it assumes the existence of a barber who could not exist, which is a vacuous proposition, and hence false. There are other non-paradoxical variations, but those are different.[3]
History
This paradox is often incorrectly attributed to Bertrand Russell (e.g., by Martin Gardner in Aha!). It was suggested to Russell as an alternative form of Russell's paradox,[1] which Russell had devised to show that set theory as it was used by Georg Cantor and Gottlob Frege contained contradictions. However, Russell denied that the Barber's paradox was an instance of his own:
That contradiction [Russell's paradox] is extremely interesting. You can modify its form; some forms of modification are valid and some are not. I once had a form suggested to me which was not valid, namely the question whether the barber shaves himself or not. You can define the barber as "one who shaves all those, and those only, who do not shave themselves". The question is, does the barber shave himself? In this form the contradiction is not very difficult to solve. But in our previous form I think it is clear that you can only get around it by observing that the whole question whether a class is or is not a member of itself is nonsense, i.e. that no class either is or is not a member of itself, and that it is not even true to say that, because the whole form of words is just noise without meaning.
— Bertrand Russell, The Philosophy of Logical Atomism[1]
This point is elaborated further under Applied versions of Russell's paradox.
In first-order logic
$(\exists x)({\text{person}}(x)\wedge (\forall y)({\text{person}}(y)\implies ({\text{shaves}}(x,y)\iff \neg {\text{shaves}}(y,y))))$
This sentence says that a barber x exists. Its truth value is false, as the existential clause is unsatisfiable (a contradiction) because of the universal quantifier $(\forall )$. The universally quantified y will include every single element in the domain, including our infamous barber x. So when the value x is assigned to y, the sentence in the universal quantifier can be rewritten to ${\text{shaves}}(x,x)\iff \neg {\text{shaves}}(x,x)$, which is an instance of the contradiction $a\iff \neg a$. Since the sentence is false for that particular value, the entire universal clause is false. Since the existential clause is a conjunction with one operand that is false, the entire sentence is false. Another way to show this is to negate the entire sentence and arrive at a tautology. Nobody is a barber, so there is no solution to the paradox.[2][3]
$(\exists x)({\text{person}}(x)\wedge \bot )$
$(\exists x)(\bot )$
$\bot $
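This reasoning can also be checked mechanically. The following sketch (an illustrative brute-force search, not part of the original article) enumerates every possible "shaves" relation on a small finite domain and confirms that none of them contains a barber satisfying the definition:

```python
from itertools import product

def barber_exists(n):
    """Return True if any 'shaves' relation on n people contains a barber x
    who shaves exactly those y who do not shave themselves."""
    people = range(n)
    pairs = list(product(people, repeat=2))
    # Enumerate every subset of person-pairs as a candidate 'shaves' relation.
    for bits in product([False, True], repeat=len(pairs)):
        shaves = {p for p, keep in zip(pairs, bits) if keep}
        for x in people:
            # Barber condition: shaves(x, y) <-> not shaves(y, y) for every y,
            # including y = x, which forces the contradiction in the text.
            if all(((x, y) in shaves) == ((y, y) not in shaves) for y in people):
                return True
    return False

print(barber_exists(2))  # False: no relation contains such a barber
```

The search always fails because the instance y = x reduces to shaves(x, x) ⟺ ¬shaves(x, x), exactly as derived above.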
See also
• Cantor's theorem
• Gödel's incompleteness theorems
• Halting problem
• List of paradoxes
• Double bind
References
1. The Philosophy of Logical Atomism, reprinted in The Collected Papers of Bertrand Russell, 1914-19, Vol 8., p. 228
2. "TheBarber.HTML".
3. "Barber paradox".
External links
• Proposition of the Barber's Paradox
• Joyce, Helen. "Mathematical mysteries: The Barber's Paradox". Plus, May 2002.
• Edsger Dijkstra's take on the problem
• Russell, Bertrand (1919). "The Philosophy of Logical Atomism". The Monist. 29 (3): 345–380. doi:10.5840/monist19192937. JSTOR 27900748.
• Russell's (Barber) paradox explanation in Python
Workshop on Geometric Methods in Physics
The Workshop on Geometric Methods in Physics (WGMP) is a conference on mathematical physics focusing on geometric methods in physics. It has been organized each year since 1982 in the village of Białowieża, Poland, by the Chair of Mathematical Physics of the Faculty of Mathematics, University of Białystok. Its founder and main organizer is Anatol Odzijewicz.[1]
Workshop on Geometric Methods in Physics
Status: Active
Genre: Mathematics conference
Frequency: Annual
Location(s): Białowieża
Country: Poland
Years active: 1982–present
Inaugurated: 1982
Founder: Anatol Odzijewicz
Most recent: 19 June – 25 June 2022
Next event: June – July 2023
Activity: Active
Organised by: University of Białystok
Website: wgmp.uwb.edu.pl
WGMP takes place at its home venue in the heart of the Białowieża National Park. A number of social events, including a campfire, an excursion into the Białowieża forest, and a banquet, are usually organized during the week.
Notable participants
In the past, Workshops were attended by scientists including Roy Glauber, Francesco Calogero, Ludvig Faddeev, Martin Kruskal, Bogdan Mielnik, Emma Previato, Stanisław Lech Woronowicz, Vladimir E. Zakharov, Dmitry Anosov, Gérard Emch, George Mackey, Moshé Flato, Daniel Sternheimer, Tudor Ratiu, Simon Gindikin, Boris Fedosov, Iwo Białynicki-Birula, Jędrzej Śniatycki, Askolʹd Perelomov, Alexander Belavin, Yvette Kosmann-Schwarzbach, Krzysztof Maurin, Mikhail Shubin, and Kirill Mackenzie.[2]
Special sessions
Special sessions have often been scheduled within the programme of the Workshop. In 2016 there was a session "Integrability and Geometry", financed by the National Science Foundation.[3][4] In 2017 a session was dedicated to the memory and scientific achievements of S. Twareque Ali, a long-time participant and co-organizer of the Workshop. In 2018 a session was dedicated to the scientific achievements of Prof. Daniel Sternheimer on the occasion of his 80th birthday. In previous years, sessions were dedicated to other prominent mathematicians and physicists, such as S. L. Woronowicz, G. Emch, B. Mielnik, and F. Berezin.[5]
School on Geometry and Physics
Since 2012 the Workshop has been accompanied by a School on Geometry and Physics, targeted at young researchers and graduate students. During the School, several courses are given by leading experts in mathematical physics.
Proceedings
Since 1992, a volume of proceedings has been published after each Workshop. In recent years it has appeared in the series Trends in Mathematics, published by Birkhäuser.[6] In 2005 a commemorative volume, Twenty Years of Bialowieza: A Mathematical Anthology. Aspects of Differential Geometric Methods in Physics, was published by World Scientific.[7]
References
1. Webpage of Faculty of Mathematics, University of Białystok
2. Voronov, Theodore; Ali, Syed Twareque; Goliński, Tomasz (March 2010). "The Białowieża Meetings on Geometric Methods in Physics: Thirty Years of Success and Inspiration" (PDF). European Mathematical Society Newsletter. No. 75.
3. NSF Award Abstract
4. Integrability and Geometry at WGMP 2016. Post-conference materials.
5. Berceanu, Stefan (August 2013). "Berezin la Białowieża XXX – o perspectivă personală" (PDF). Curierul de Fizica. No. 75.
6. List of proceedings volumes, WGMP webpage
7. Ali, Syed Twareque; Emch, Gerard G.; Odzijewicz, Anatol; Sclichenmaier, Martin; Woronowicz, Stanisław Lech, eds. (2005). Twenty Years of Bialowieza: A Mathematical Anthology. Aspects of Differential Geometric Methods in Physics. World Scientific Monograph Series in Mathematics. Vol. 8. World Scientific. doi:10.1142/5744. ISBN 978-981-256-146-6.
Further reading
• Voronov, Theodore; Ali, Syed Twareque; Goliński, Tomasz (March 2010). "The Białowieża Meetings on Geometric Methods in Physics: Thirty Years of Success and Inspiration" (PDF). European Mathematical Society Newsletter. No. 75.
External links
• Conference webpage
• Workshop on Geometric Methods in Physics on Facebook
Purchasing power parity

GDP per capita by countries in 2013, calculated using PPP exchange rates, based on CIA Factbook data.[1]
Purchasing power parity (PPP) is a component of some economic theories and is a technique used to determine the relative value of different currencies.
Theories that invoke purchasing power parity assume that in some circumstances (for example, as a long-run tendency) it would cost exactly the same number of, say, US dollars to buy euros and then to use the proceeds to buy a market basket of goods as it would cost to use those dollars directly in purchasing the market basket of goods.
The concept of purchasing power parity allows one to estimate what the exchange rate between two currencies would have to be in order for the exchange to be at par with the purchasing power of the two countries' currencies. Using that PPP rate for hypothetical currency conversions, a given amount of one currency thus has the same purchasing power whether used directly to purchase a market basket of goods or used to convert at the PPP rate to the other currency and then purchase the market basket using that currency. Observed deviations of the exchange rate from purchasing power parity are measured by deviations of the real exchange rate from its PPP value of 1.
PPP exchange rates help to minimize misleading international comparisons that can arise with the use of market exchange rates. For example, suppose that two countries produce the same physical amounts of goods as each other in each of two different years. Since market exchange rates fluctuate substantially, when the GDP of one country measured in its own currency is converted to the other country's currency using market exchange rates, one country might be inferred to have higher real GDP than the other country in one year but lower in the other; both of these inferences would fail to reflect the reality of their relative levels of production. But if one country's GDP is converted into the other country's currency using PPP exchange rates instead of observed market exchange rates, the false inference will not occur.
The idea originated with the School of Salamanca in the 16th century and was developed in its modern form by Gustav Cassel in 1918.[2][3] The concept is based on the law of one price, where in the absence of transaction costs and official trade barriers, identical goods will have the same price in different markets when the prices are expressed in the same currency.[4]
Another interpretation is that the difference in the rate of change in prices at home and abroad—the difference in the inflation rates—is equal to the percentage depreciation or appreciation of the exchange rate.
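This "relative PPP" relation is simple arithmetic. The sketch below is illustrative (the function name and sample rates are mine, not from the article):

```python
def implied_depreciation(inflation_home, inflation_foreign):
    """Relative PPP: the home currency's expected depreciation rate against
    the foreign currency roughly equals the inflation differential.
    Rates are decimals, e.g. 0.05 for 5%."""
    return inflation_home - inflation_foreign

# With 5% inflation at home and 2% abroad, relative PPP predicts roughly
# 3% depreciation of the home currency.
print(round(implied_depreciation(0.05, 0.02), 4))  # 0.03
```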
Deviations from parity imply differences in purchasing power of a "basket of goods" across countries, which means that for the purposes of many international comparisons, countries' GDPs or other national income statistics need to be "PPP-adjusted" and converted into common units. The best-known purchasing power adjustment is the Geary–Khamis dollar (the "international dollar"). The real exchange rate is then equal to the nominal exchange rate, adjusted for differences in price levels. If purchasing power parity held exactly, then the real exchange rate would always equal one. However, in practice the real exchange rates exhibit both short run and long run deviations from this value, for example due to reasons illuminated in the Balassa–Samuelson theorem.
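A minimal sketch of that last point (the function and sample values are illustrative, and the quotation convention for the nominal rate is an assumption): the real exchange rate is the nominal rate adjusted for relative price levels, and it equals 1 exactly when absolute PPP holds.

```python
def real_exchange_rate(nominal_rate, price_level_foreign, price_level_home):
    """Real exchange rate q = e * P_foreign / P_home, where e is the nominal
    rate in home-currency units per unit of foreign currency (an assumed
    convention). Under absolute PPP, e = P_home / P_foreign, so q == 1."""
    return nominal_rate * price_level_foreign / price_level_home

# If home prices are 100, foreign prices are 50, and the nominal rate is
# 2 home units per foreign unit, absolute PPP holds exactly:
print(real_exchange_rate(2.0, 50.0, 100.0))  # 1.0
```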
There can be marked differences between purchasing power adjusted incomes and those converted via market exchange rates.[5] For example, the World Bank's World Development Indicators 2005 estimated that in 2003, one Geary-Khamis dollar was equivalent to about 1.8 Chinese yuan by purchasing power parity[6]—considerably different from the nominal exchange rate. This discrepancy has large implications; for instance, when converted via the nominal exchange rates GDP per capita in India is about US$1,704[7] while on a PPP basis it is about US$3,608.[8] At the other extreme, Denmark's nominal GDP per capita is around US$62,100, but its PPP figure is US$37,304.
The purchasing power parity exchange rate serves two main functions. PPP exchange rates can be useful for making comparisons between countries because they stay fairly constant from day to day or week to week and only change modestly, if at all, from year to year. Second, over a period of years, exchange rates do tend to move in the general direction of the PPP exchange rate and there is some value to knowing in which direction the exchange rate is more likely to shift over the long run.
The PPP exchange-rate calculation is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries.
Estimation of purchasing power parity is complicated by the fact that countries do not simply differ in a uniform price level; rather, the difference in food prices may be greater than the difference in housing prices, while also less than the difference in entertainment prices. People in different countries typically consume different baskets of goods. It is necessary to compare the cost of baskets of goods and services using a price index. This is a difficult task because purchasing patterns and even the goods available to purchase differ across countries.
Thus, it is necessary to make adjustments for differences in the quality of goods and services. Furthermore, the basket of goods representative of one economy will vary from that of another: Americans eat more bread; Chinese more rice. Hence a PPP calculated using the US consumption as a base will differ from that calculated using China as a base. Additional statistical difficulties arise with multilateral comparisons when (as is usually the case) more than two countries are to be compared.
Various ways of averaging bilateral PPPs can provide a more stable multilateral comparison, but at the cost of distorting bilateral ones. These are all general issues of indexing; as with other price indices there is no way to reduce complexity to a single number that is equally satisfying for all purposes. Nevertheless, PPPs are typically robust in the face of the many problems that arise in using market exchange rates to make comparisons.
For example, in 2005 the price of a gallon of gasoline in Saudi Arabia was USD 0.91, while in Norway it was USD 6.27.[9] A price gap this large for a single good would not, on its own, yield an accurate PPP analysis, whatever the variables behind it; many more price comparisons have to be made and used as variables in the overall formulation of the PPP.
When PPP comparisons are to be made over some interval of time, proper account needs to be made of inflationary effects.
Although it may seem as if PPPs and the law of one price are the same, there is a difference: the law of one price applies to individual commodities, whereas PPP applies to the general price level. If the law of one price is true for all commodities, then PPP is also true; however, when discussing the validity of PPP, some argue that the law of one price does not need to hold exactly for PPP to be valid: if the law of one price fails for a certain commodity, the aggregate price levels will not deviate enough from the level predicted by PPP to invalidate it.[4]
The purchasing power parity theory states that the exchange rate between one currency and another is in equilibrium when their domestic purchasing powers at that rate of exchange are equivalent.
Big Mac hamburgers, like this one from Japan, are similar worldwide.
Another example of one measure of the law of one price, which underlies purchasing power parity, is the Big Mac Index, popularized by The Economist, which compares the prices of a Big Mac burger in McDonald's restaurants in different countries. The Big Mac Index is presumably useful because although it is based on a single consumer product that may not be typical, it is a relatively standardized product that includes input costs from a wide range of sectors in the local economy, such as agricultural commodities (beef, bread, lettuce, cheese), labor (blue and white collar), advertising, rent and real estate costs, transportation, etc.
In theory, the law of one price would hold that if, to take an example, the Canadian dollar were to be significantly overvalued relative to the U.S. dollar according to the Big Mac Index, that gap should be unsustainable because Canadians would import their Big Macs from or travel to the U.S. to consume them, thus putting upward demand pressure on the U.S. dollar by virtue of Canadians buying the U.S. dollars needed to purchase the U.S.-made Big Macs and simultaneously placing downward supply pressure on the Canadian dollar by virtue of Canadians selling their currency in order to buy those same U.S. dollars.
The alternative to this exchange rate adjustment would be an adjustment in prices, with Canadian McDonald's stores compelled to lower prices to remain competitive. Either way, the valuation difference should be reduced assuming perfect competition and a perfectly tradable good. In practice, of course, the Big Mac is not a perfectly tradable good and there may also be capital flows that sustain relative demand for the Canadian dollar. The difference in price may have its origins in a variety of factors besides direct input costs such as government regulations and product differentiation.[4]
However, in some emerging economies, Western fast food represents an expensive niche product priced well above the price of traditional staples; i.e., the Big Mac is not a mainstream 'cheap' meal as it is in the West, but a luxury import. This relates back to the idea of product differentiation: the fact that few substitutes for the Big Mac are available confers market power on McDonald's. For example, in India, the cost of local fast food like vada pav is comparable to what the Big Mac signifies in the U.S.[10] Additionally, in countries like Argentina that have abundant beef resources, consumer prices in general may not be as cheap as implied by the price of a Big Mac.
The following table, based on data from The Economist's January 2013 calculations, shows the under (−) and over (+) valuation of the local currency against the U.S. dollar in %, according to the Big Mac index. To take an example calculation, the local price of a Big Mac in Hong Kong when converted to U.S. dollars at the market exchange rate was $2.19, or 50% of the local price for a Big Mac in the U.S. of $4.37. Hence the Hong Kong dollar was deemed to be 50% undervalued relative to the U.S. dollar on a PPP basis.
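The valuation in the table can be reproduced with simple arithmetic. A sketch (the helper name is mine; the Hong Kong figures are the ones quoted above):

```python
def big_mac_valuation(local_price_in_usd, us_price_in_usd):
    """Percent over (+) / under (-) valuation of a currency against the US
    dollar: compare the dollar cost of a local Big Mac with the US price."""
    return round(100 * (local_price_in_usd / us_price_in_usd - 1))

# Hong Kong, January 2013: a local Big Mac cost USD 2.19 at market rates,
# against USD 4.37 in the United States.
print(big_mac_valuation(2.19, 4.37))  # -50, i.e. ~50% undervalued
```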
Local currency valuation against the US dollar, % under (−) / over (+)[11]
India -59
South Africa -54
Hong Kong -50
Ukraine -47
Egypt -45
Russia -45
Taiwan -42
China -41
Malaysia -41
Sri Lanka -37
Indonesia -35
Mexico -34
Philippines -33
Poland -33
Bangladesh -32
Saudi Arabia -33
Thailand -33
Pakistan -32
Lithuania -30
Latvia -25
UAE -25
South Korea -22
Japan -20
Singapore -17
Estonia -16
Czech Republic -15
Argentina -13
Hungary -13
Peru -11
Israel -8
Portugal -8
United Kingdom -3
New Zealand -1
Euro area 12
Uruguay 25
Venezuela 108
iPad Index
Like the Big Mac Index, the iPad index (elaborated by ComSec) compares an item's price in various locations. Unlike the Big Mac, however, each iPad is produced in the same place (except for the model sold in Brazil) and all iPads (within the same model) have identical performance characteristics. Price differences are therefore a function of transportation costs, taxes, and to a lesser extent, the prices that may be realized in individual markets. It is worth noting that an iPad will cost about twice as much in Argentina as in the United States.
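A sketch using figures from the table below (the helper function and the choice of the US no-tax price as the base are mine):

```python
def ipad_premium(local_price_usd, base_price_usd=499.00):
    """Percent premium of a local iPad price over the US (no-tax) price."""
    return round(100 * (local_price_usd / base_price_usd - 1), 1)

print(ipad_premium(1094.11))  # Argentina: 119.3 (about twice the US price)
```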
Price (US Dollars)[12][13][14][15]
Argentina $1,094.11
Australia $506.66
Austria $674.96
Belgium $618.34
Brazil $791.40
Brunei $525.52
Canada (Montréal) $557.18
Canada (no tax) $467.36
Chile $602.13
China $602.52
Czech Republic $676.69
Denmark $725.32
Finland $695.25
France $688.49
Germany $618.34
Greece $715.54
Hong Kong $501.52
Hungary $679.64
India $512.61
Ireland $630.73
Italy $674.96
Japan $501.56
Luxembourg $641.50
Malaysia $473.77
Mexico $591.62
Netherlands $683.08
New Zealand $610.45
Norway $655.92
Philippines $556.42
Poland $704.51
Portugal $688.49
Russia $596.08
Singapore $525.98
Slovakia $674.96
Slovenia $674.96
South Africa $559.38
South Korea $576.20
Spain $674.96
Sweden $706.87
Switzerland $617.58
Taiwan $538.34
Thailand $530.72
Turkey $656.96
UAE $544.32
United Kingdom $638.81
US (California) $546.91
US (no tax) $499.00
Vietnam $554.08
OECD comparative price levels
Each month, the Organisation for Economic Co-operation and Development (OECD) measures differences in price levels between its member countries by calculating the ratios of PPPs for private final consumption expenditure to exchange rates. The OECD table below indicates the number of US dollars needed, as of April 2014, in each of the countries listed to buy the same representative basket of consumer goods and services that would cost 100 USD in the United States.
According to the table, an American living or travelling in Switzerland on an income denominated in US dollars would find that country (in April 2014) to be the most expensive of the group, having to spend 72% more US dollars to maintain a standard of living comparable to the USA in terms of consumption.
Price level (USA = 100)[16]
Korea 87
Slovakia 78
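Reading the table is a one-line computation (a sketch; the function name is mine): the comparative price level is simply the number of dollars needed per 100 spent in the US.

```python
def dollars_needed(price_level, baseline_usd=100.0):
    """US dollars needed abroad to buy what baseline_usd buys in the US,
    given an OECD comparative price level (USA = 100)."""
    return baseline_usd * price_level / 100.0

print(dollars_needed(172))  # Switzerland, April 2014: 172.0 (72% more)
```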
Measurement issues
In addition to methodological issues presented by the selection of a basket of goods, PPP estimates can also vary based on the statistical capacity of participating countries. The International Comparison Program, on which PPP estimates are based, requires the disaggregation of national accounts into production, expenditure or (in some cases) income, and not all participating countries routinely disaggregate their data into such categories.
Some aspects of PPP comparison are theoretically impossible or unclear. For example, there is no basis for comparison between the Ethiopian laborer who lives on teff with the Thai laborer who lives on rice, because teff is not commercially available in Thailand and rice is not in Ethiopia, so the price of rice in Ethiopia or teff in Thailand cannot be determined. As a general rule, the more similar the price structure between countries, the more valid the PPP comparison. PPP levels will also vary based on the formula used to calculate price matrices. Different possible formulas include GEKS-Fisher, Geary-Khamis, IDB, and the superlative method. Each has advantages and disadvantages. Linking regions presents another methodological difficulty. In the 2005 ICP round, regions were compared by using a list of some 1,000 identical items for which a price could be found for 18 countries, selected so that at least two countries would be in each region. While this was superior to earlier "bridging" methods, which do not fully take into account differing quality between goods, it may serve to overstate the PPP basis of poorer countries, because the price indexing on which PPP is based will assign to poorer countries the greater weight of goods consumed in greater shares in richer countries.
Need for adjustments to GDP
The exchange rate reflects transaction values for traded goods between countries in contrast to non-traded goods, that is, goods produced for home-country use. Also, currencies are traded for purposes other than trade in goods and services, e.g., to buy capital assets whose prices vary more than those of physical goods. Also, different interest rates, speculation, hedging or interventions by central banks can influence the foreign-exchange market.
The PPP method is used as an alternative to correct for possible statistical bias. The Penn World Table is a widely cited source of PPP adjustments, and the so-called Penn effect reflects such a systematic bias in using exchange rates to compare outputs among countries.
For example, if the value of the Mexican peso falls by half compared to the US dollar, the Mexican Gross Domestic Product measured in dollars will also halve. However, this exchange rate results from international trade and financial markets. It does not necessarily mean that Mexicans are poorer by a half; if incomes and prices measured in pesos stay the same, they will be no worse off assuming that imported goods are not essential to the quality of life of individuals. Measuring income in different countries using PPP exchange rates helps to avoid this problem. PPP exchange rates are especially useful when official exchange rates are artificially manipulated by governments. Countries with strong government control of the economy sometimes enforce official exchange rates that make their own currency artificially strong. By contrast, the currency's black market exchange rate is artificially weak. In such cases, a PPP exchange rate is likely the most realistic basis for economic comparison.
Extrapolating PPP rates
Since global PPP estimates, such as those provided by the ICP, are not calculated annually but for a single year, PPP exchange rates for years other than the benchmark year need to be extrapolated.[17] One way of doing this is by using the country's GDP deflator. To calculate a country's PPP exchange rate in Geary–Khamis dollars for a particular year, the calculation proceeds in the following manner:

$\textrm{PPPrate}_{X,i}=\frac{\textrm{PPPrate}_{X,b}\cdot \frac{\textrm{GDPdef}_{X,i}}{\textrm{GDPdef}_{X,b}}}{\textrm{PPPrate}_{U,b}\cdot \frac{\textrm{GDPdef}_{U,i}}{\textrm{GDPdef}_{U,b}}}$

where $\textrm{PPPrate}_{X,i}$ is the PPP exchange rate of country X for year i, $\textrm{PPPrate}_{X,b}$ is the PPP exchange rate of country X for the benchmark year, $\textrm{PPPrate}_{U,b}$ is the PPP exchange rate of the United States (US) for the benchmark year (equal to 1), $\textrm{GDPdef}_{X,i}$ is the GDP deflator of country X for year i, $\textrm{GDPdef}_{X,b}$ is the GDP deflator of country X for the benchmark year, $\textrm{GDPdef}_{U,i}$ is the GDP deflator of the US for year i, and $\textrm{GDPdef}_{U,b}$ is the GDP deflator of the US for the benchmark year.
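The extrapolation translates directly into code. This sketch uses the 2003 Geary–Khamis benchmark for China quoted earlier (about 1.8 yuan per international dollar) together with purely hypothetical deflator values:

```python
def extrapolate_ppp(ppp_x_bench, gdpdef_x_i, gdpdef_x_bench,
                    gdpdef_us_i, gdpdef_us_bench, ppp_us_bench=1.0):
    """Extrapolate country X's PPP exchange rate from a benchmark year to
    year i using GDP deflators (the US benchmark-year PPP rate equals 1)."""
    numerator = ppp_x_bench * gdpdef_x_i / gdpdef_x_bench
    denominator = ppp_us_bench * gdpdef_us_i / gdpdef_us_bench
    return numerator / denominator

# Hypothetical: X's deflator rises 20% since the benchmark, the US's 10%.
rate = extrapolate_ppp(1.8, 120.0, 100.0, 110.0, 100.0)
print(round(rate, 3))  # 1.964
```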
There are a number of reasons that different measures do not perfectly reflect standards of living.
Range and quality of goods
The goods that the currency has the "power" to purchase are a basket of goods of different types:

1. Local, non-tradable goods and services (like electric power) that are produced and sold domestically.
2. Tradable goods such as non-perishable commodities that can be sold on the international market (like diamonds).

The more a product falls into category 1, the further its price will be from the currency exchange rate, moving towards the PPP exchange rate. Conversely, category 2 products tend to trade close to the currency exchange rate. (See also Penn effect).
More processed and expensive products are likely to be tradable, falling into the second category, and drifting from the PPP exchange rate to the currency exchange rate. Even if the PPP "value" of the Ethiopian currency is three times stronger than the currency exchange rate, it won't buy three times as much of internationally traded goods like steel, cars and microchips, but it will for non-traded goods like housing, services ("haircuts"), and domestically produced crops. The relative price differential between tradables and non-tradables from high-income to low-income countries is a consequence of the Balassa–Samuelson effect and gives a big cost advantage to labour-intensive production of tradable goods in low income countries (like Ethiopia), as against high income countries (like Switzerland).
The corporate cost advantage is nothing more sophisticated than access to cheaper workers, but because the pay of those workers goes farther in low-income countries than high, the relative pay differentials (inter-country) can be sustained for longer than would be the case otherwise. (This is another way of saying that the wage rate is based on average local productivity and that this is below the per capita productivity that factories selling tradable goods to international markets can achieve.) An equivalent cost benefit comes from non-traded goods that can be sourced locally (nearer the PPP-exchange rate than the nominal exchange rate in which receipts are paid). These act as a cheaper factor of production than is available to factories in richer countries.
The Bhagwati–Kravis–Lipsey view provides a somewhat different explanation from the Balassa–Samuelson theory. This view states that price levels for nontradables are lower in poorer countries because of differences in endowment of labor and capital, not because of lower levels of productivity. Poor countries have more labor relative to capital, so marginal productivity of labor is greater in rich countries than in poor countries. Nontradables tend to be labor-intensive; therefore, because labor is less expensive in poor countries and is used mostly for nontradables, nontradables are cheaper in poor countries. Wages are high in rich countries, so nontradables are relatively more expensive.[4]
PPP calculations tend to overemphasise the primary sectoral contribution, and underemphasise the industrial and service sectoral contributions to the economy of a nation.
Trade barriers and nontradables
The law of one price, the underlying mechanism behind PPP, is weakened by transport costs and governmental trade restrictions, which make it expensive to move goods between markets located in different countries. Transport costs sever the link between exchange rates and the prices of goods implied by the law of one price. The higher the transport costs, the larger the range within which the exchange rate can fluctuate. The same is true for official trade restrictions, because customs fees affect importers' profits in the same way as shipping fees. According to Krugman and Obstfeld, "Either type of trade impediment weakens the basis of PPP by allowing the purchasing power of a given currency to differ more widely from country to country."[4] They cite the example that a dollar in London should purchase the same goods as a dollar in Chicago, which is certainly not the case.
Nontradables are primarily services and the output of the construction industry. Nontradables also lead to deviations in PPP because the prices of nontradables are not linked internationally. The prices are determined by domestic supply and demand, and shifts in those curves lead to changes in the market basket of some goods relative to the foreign price of the same basket. If the prices of nontradables rise, the purchasing power of any given currency will fall in that country.[4]
Departures from free competition
Linkages between national price levels are also weakened when trade barriers and imperfectly competitive market structures occur together. Pricing to market occurs when a firm sells the same product for different prices in different markets. This is a reflection of inter-country differences in conditions on both the demand side (e.g., virtually no demand for pork or alcohol in Islamic states) and the supply side (e.g., whether the existing market for a prospective entrant's product features few suppliers or instead is already near-saturated). According to Krugman and Obstfeld, this occurrence of product differentiation and segmented markets results in violations of the law of one price and absolute PPP. Over time, shifts in market structure and demand will occur, which may invalidate relative PPP.[4]
Differences in price level measurement
Measurement of price levels differs from country to country. Inflation data from different countries are based on different commodity baskets; therefore, exchange rate changes do not offset official measures of inflation differences. Because it makes predictions about price changes rather than price levels, relative PPP is still a useful concept. However, changes in the relative prices of basket components can cause relative PPP to fail tests that are based on official price indexes.[4]
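Relative PPP predicts that the exchange rate moves to offset the inflation differential between two countries. A small numeric sketch of that prediction (all figures below are invented for illustration):

```python
# Relative PPP: the expected change in the exchange rate offsets the
# inflation differential:  E_t / E_0 = (1 + pi_domestic) / (1 + pi_foreign)

def relative_ppp_rate(e0, pi_domestic, pi_foreign):
    """Exchange rate (domestic units per foreign unit) implied by relative PPP."""
    return e0 * (1 + pi_domestic) / (1 + pi_foreign)

e0 = 10.0        # initial rate: 10 domestic units per foreign unit (hypothetical)
pi_dom = 0.08    # 8% domestic inflation (hypothetical)
pi_for = 0.02    # 2% foreign inflation (hypothetical)

e1 = relative_ppp_rate(e0, pi_dom, pi_for)
print(round(e1, 4))  # the higher-inflation currency depreciates: 10.5882
```

Note that the prediction concerns the *change* in the rate, which is why differing national commodity baskets do not invalidate it as directly as they do absolute PPP.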
Global poverty line
The global poverty line underlies a worldwide count of people who live below an international poverty line, referred to as the dollar-a-day line. This line represents an average of the national poverty lines of the world's poorest countries, expressed in international dollars. These national poverty lines are converted to international currency, and the global line is converted back to local currency, using the PPP exchange rates from the ICP.
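In miniature, the conversion works as follows (both the national poverty line and the PPP conversion factor below are hypothetical):

```python
# Converting a national poverty line to international dollars using a PPP
# conversion factor (local currency units per international dollar), and back.
# All figures are hypothetical, for illustration only.

def to_international_dollars(local_amount, ppp_factor):
    return local_amount / ppp_factor

def to_local_currency(intl_amount, ppp_factor):
    return intl_amount * ppp_factor

national_line_per_day = 25.0   # hypothetical local currency units per day
ppp_factor = 12.5              # hypothetical LCU per international dollar

intl_line = to_international_dollars(national_line_per_day, ppp_factor)
print(intl_line)                                  # 2.0 international dollars a day
print(to_local_currency(intl_line, ppp_factor))   # 25.0, back in local currency
```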
International dollar
List of countries by GDP (PPP)
List of countries by GDP (PPP) per capita
Measures of national income and output
Relative purchasing power parity
International Comparison Program
Geary–Khamis dollar
Karl Gustav Cassel
Penn effect
^ Based on figures from CIA world factbook
^ Cassel, Gustav (December 1918). "Abnormal Deviations in International Exchanges". The Economic Journal 28 (112): 413–415.
^ Cheung, Yin-Wong (2009). "purchasing power parity". In Reinert, Kenneth A.; Rajan, Ramkishen S.; Glass, Amy Jocelyn; et al. (eds.). The Princeton Encyclopedia of the World Economy, Vol. I. Princeton: Princeton University Press. p. 942.
^ a b c d e f g h Krugman and Obstfeld (2009). International Economics. Pearson Education, Inc.
^ Daneshkhu, Scheherazade (18 December 2007). "China, India economies '40% smaller'". FT.com / World.
^ 2005 World Development Indicators: Table 5.7 | Relative prices and exchange rates
^ List of countries by past and future GDP (nominal)
^ List of countries by future GDP (PPP) per capita estimates
^ "Global Gas Prices," (March 2005) CNN Money. Accessed June 2011.
^ "The case study: Goli Vada Pav". Financial Times. 3 September 2012. Retrieved 30 April 2014.
^ [1] The Economist January 2013
^ [2] The Age 23 September 2013
^ [3] CommSec Economic Insight: CommSec iPad Index, 23 September 2013
^ [4] Commonwealth Securities 23 September 2013
^ [5] How much an iPad costs 21 December 2013
^ [6] OECD April 23, 2014
^ http://www.oecd.org/std/prices-ppp/2078177.pdf
Explanations from the U. of British Columbia (also provides daily updated PPP charts)
OECD Purchasing Power Parity estimates updated annually by the Organization for Economic Co-Operation and Development (OECD)
Purchasing power parities as example of international statistical cooperation from Eurostat - Statistics Explained
World Bank International Comparison Project provides PPP estimates for a large number of countries
UBS's "Prices and Earnings" Report 2006 Good report on purchasing power containing a Big Mac index as well as for staples such as bread and rice for 71 world cities.
"Understanding PPPs and PPP based national accounts" provides an overview of methodological issues in calculating PPP and in designing the ICP under which the main PPP tables (Maddison, Penn World Tables, and World Bank WDI) are based.
Calculus (3rd Edition)
by Rogawski, Jon; Adams, Colin
Chapter 12 - Parametric Equations, Polar Coordinates, and Conic Sections - 12.5 Conic Sections - Exercises - Page 636: 16
$$ \left(\frac{x}{3}\right)^{2}-\left(\frac{y}{3/2}\right)^{2}=1. $$
Since the vertices are $(\pm 3,0)$ and the asymptotes are $y=\pm \frac{1}{2}x$, then we have $a=3, \frac{b}{a}=\frac{b}{3}=\frac{1}{2} $, and hence $b=\frac{3}{2} $. Hence, the equation of the hyperbola is given by $$ \left(\frac{x}{3}\right)^{2}-\left(\frac{y}{3/2}\right)^{2}=1. $$
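A quick numerical sanity check of this answer (not part of the textbook solution):

```python
# Check the hyperbola (x/3)^2 - (y/(3/2))^2 = 1 with a = 3, b = 3/2.
a, b = 3.0, 1.5

def lhs(x, y):
    return (x / a) ** 2 - (y / b) ** 2

# The vertices (±3, 0) lie on the curve:
print(lhs(3, 0), lhs(-3, 0))   # 1.0 1.0

# The asymptote slope b/a matches y = ±x/2:
print(b / a)                   # 0.5

# Far from the origin, the upper branch approaches the asymptote y = x/2:
x = 1e6
y = b * ((x / a) ** 2 - 1) ** 0.5
print(abs(y - x / 2) < 1e-3)   # True
```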
\begin{document}
\title[$q$\textbf{-Hardy-Littlewood-type maximal operator with weight}]{ \textbf{On the} $q$\textbf{-Hardy-Littlewood-type maximal operator with weight related to fermionic }$p$\textbf{-adic }$q$\textbf{-integral on }$
\mathbb{Z}
_{p}$} \author{\textbf{Serkan Araci}} \address{\textbf{University of Gaziantep, Faculty of Science and Arts, Department of Mathematics, 27310 Gaziantep, TURKEY}} \email{\textbf{[email protected]}} \author{\textbf{Mehmet Acikgoz}} \address{\textbf{University of Gaziantep, Faculty of Science and Arts, Department of Mathematics, 27310 Gaziantep, TURKEY}} \email{\textbf{[email protected]}} \subjclass[2000]{\textbf{Primary 05A10, 11B65; Secondary 11B68, 11B73}.} \keywords{\textbf{fermionic }$p$\textbf{-adic }$q$\textbf{-integral on }$
\mathbb{Z}
_{p}$\textbf{, Hardy-Littlewood theorem, }$p$\textbf{-adic analysis, }$q$ \textbf{-analysis}}
\begin{abstract} The fundamental aim of this paper is to define the weighted $q$ -Hardy-Littlewood-type maximal operator by means of the fermionic $p$-adic $q$ -invariant distribution on $
\mathbb{Z}
_{p}$. Also, we derive some interesting properties concerning this type of maximal operator. \end{abstract}
\maketitle
\section{\textbf{Introduction and Notations}}
The $p$-adic numbers play a vital role in mathematics. They were invented by the German mathematician Kurt Hensel \cite{Hensel} around the end of the nineteenth century. In spite of being more than one hundred years old, these numbers are still today enveloped in an aura of mystery within the scientific community.
The fermionic $p$-adic $q$-integral was originally constructed by Kim \cite{Kim 4}. Kim also introduced the Lebesgue-Radon-Nikodym theorem with respect to the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$. The fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$ is used in mathematical physics, for example in the functional equation of the $q$-Zeta function, the $q$-Stirling numbers and the $q$-Mahler theory of integration with respect to the ring $
\mathbb{Z}
_{p}$ together with Iwasawa's $p$-adic $q$-$L$ function.
In \cite{Jang}, Jang also defined $q$-extension of Hardy-Littlewood-type maximal operator by means of $q$-Volkenborn integral on $
\mathbb{Z}
_{p}$. Next, in a previous paper \cite{Araci}, Araci and Acikgoz added a weight to Jang's $q$-Hardy-Littlewood-type maximal operator and derived some interesting properties by means of Kim's $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$. Here, we consider the weighted $q$-Hardy-Littlewood-type maximal operator with respect to the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$. Moreover, we shall analyse the $q$-Hardy-Littlewood-type maximal operator via the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$.
Throughout this paper, let $p$ be an odd prime number. Let $\mathcal{
\mathbb{Q}
}_{p}$ be the field of $p$-adic rational numbers and let $\mathcal{
\mathbb{C}
}_{p}$ be the completion of algebraic closure of $\mathcal{
\mathbb{Q}
}_{p}$.
Thus, \begin{equation*} \mathcal{
\mathbb{Q}
}_{p}=\left\{ x=\sum_{n=-k}^{\infty }a_{n}p^{n}:0\leq a_{n}<p\right\} . \end{equation*}
Then $
\mathbb{Z}
_{p}$ is an integral domain, which is defined by \begin{equation*} \mathcal{
\mathbb{Z}
}_{p}=\left\{ x=\sum_{n=0}^{\infty }a_{n}p^{n}:0\leq a_{n}\leq p-1\right\} , \end{equation*}
or \begin{equation*} \mathcal{
\mathbb{Z}
}_{p}=\left\{ x\in
\mathbb{Q}
_{p}:\left\vert x\right\vert _{p}\leq 1\right\} . \end{equation*}
In this paper, we assume that $q\in
\mathbb{C}
_{p}$ with $\left\vert 1-q\right\vert _{p}<1$ as an indeterminate.
The $p$-adic absolute value $\left\vert \cdot \right\vert _{p}$ is normally defined by \begin{equation*} \left\vert x\right\vert _{p}=\frac{1}{p^{r}}\text{,} \end{equation*}
where $x=p^{r}\frac{s}{t}$ with $\left( p,s\right) =\left( p,t\right) =\left( s,t\right) =1$ and $r\in \mathcal{
\mathbb{Q}
}$.
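For example, $\left\vert 12\right\vert _{2}=1/4$ since $12=2^{2}\cdot 3$. A short sketch (illustrative code, not part of the paper) computes $v_{p}$ and $\left\vert \cdot \right\vert _{p}$ for rationals:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation r of a nonzero rational x = p^r * (s/t) with p ∤ s, p ∤ t."""
    x = Fraction(x)
    num, den = x.numerator, x.denominator
    r = 0
    while num % p == 0:
        num //= p
        r += 1
    while den % p == 0:
        den //= p
        r -= 1
    return r

def p_adic_abs(x, p):
    """|x|_p = p^(-v_p(x)), computed exactly as a Fraction."""
    return Fraction(1, p) ** vp(x, p)

print(vp(12, 2), p_adic_abs(12, 2))   # 2 1/4
print(vp(Fraction(5, 9), 3))          # -2  (so |5/9|_3 = 9)
```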
A $p$-adic Banach space $B$ is a $
\mathbb{Q}
_{p}$-vector space with a lattice $B^{0}$ ($\mathcal{
\mathbb{Z}
}_{p}$-module) separated and complete for $p$-adic topology, ie., \begin{equation*} B^{0}\simeq \lim_{\overleftarrow{n\in
\mathbb{N}
}}B^{0}/p^{n}B^{0}\text{.} \end{equation*}
For all $x\in B$, there exists $n\in \mathcal{
\mathbb{Z}
}$, such that $x\in p^{n}B^{0}$. Define \begin{equation*} v_{B}\left( x\right) =\sup_{n\in
\mathbb{N}
\cup \left\{ +\infty \right\} }\left\{ n:x\in p^{n}B^{0}\right\} \text{.} \end{equation*}
It satisfies the following properties: \begin{eqnarray*} v_{B}\left( x+y\right) &\geq &\min \left( v_{B}\left( x\right) ,v_{B}\left( y\right) \right) \text{,} \\ v_{B}\left( \beta x\right) &=&v_{p}\left( \beta \right) +v_{B}\left( x\right) \text{, if }\beta \in
\mathbb{Q}
_{p}\text{.} \end{eqnarray*}
Then, $\left\Vert x\right\Vert _{B}=p^{-v_{B}\left( x\right) }$ defines a norm on $B,$ such that $B$ is complete for $\left\Vert .\right\Vert _{B}$ and $B^{0}$ is the unit ball.
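The two valuation properties above mirror the ultrametric inequality satisfied by the $p$-adic valuation on $\mathbb{Q}$, which can be checked directly (illustrative sketch, not from the paper):

```python
# Ultrametric inequality v(x+y) >= min(v(x), v(y)) for the p-adic
# valuation on nonzero integers (illustrative sketch).
def val(n, p):
    """p-adic valuation of a nonzero integer n."""
    r = 0
    while n % p == 0:
        n //= p
        r += 1
    return r

p = 3
for x, y in [(6, 3), (9, 18), (5, 4), (27, 54)]:
    assert val(x + y, p) >= min(val(x, p), val(y, p))

# Strict inequality can occur when the two valuations are equal:
print(val(6 + 3, p), min(val(6, p), val(3, p)))   # 2 1
```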
A measure on $\mathcal{
\mathbb{Z}
}_{p}$ with values in a $p$-adic Banach space $B$ is a continuous linear map \begin{equation*} f\mapsto \int f\left( x\right) \mu =\int_{
\mathbb{Z}
_{p}}f\left( x\right) \mu \left( x\right) \end{equation*}
from $C^{0}\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, (continuous function on $\mathcal{
\mathbb{Z}
}_{p}$) to $B$. We know that the set of locally constant functions from $ \mathcal{
\mathbb{Z}
}_{p}$ to $\mathcal{
\mathbb{Q}
}_{p}$ is dense in $C^{0}\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, so any such measure is determined by its values on locally constant functions.
Explicitly, for all $f\in C^{0}\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, the locally constant functions \begin{equation*} f_{n}=\sum_{i=0}^{p^{n}-1}f\left( i\right) 1_{i+p^{n}
\mathbb{Z}
_{p}}\rightarrow \text{ }f\text{ in }C^{0}\text{.} \end{equation*}
Now if ~$\mu \in \mathcal{D}_{0}\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{Q}
}_{p}\right) $, set $\mu \left( i+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) =\int_{
\mathbb{Z}
_{p}}1_{i+p^{n}\mathcal{
\mathbb{Z}
}_{p}}\mu $. Then $\int_{\mathcal{
\mathbb{Z}
}_{p}}f\mu $ is given by the following \textquotedblleft Riemann sums\textquotedblright \begin{equation*} \int_{
\mathbb{Z}
_{p}}f\mu =\lim_{n\rightarrow \infty }\sum_{i=0}^{p^{n}-1}f\left( i\right) \mu \left( i+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) \text{.} \end{equation*}
T. Kim defined $\mu _{-q}$ as follows: \begin{equation*} \mu _{-q}\left( \xi +dp^{n}\mathcal{
\mathbb{Z}
}_{p}\right) =\frac{\left( -q\right) ^{\xi }}{\left[ dp^{n}\right] _{-q}} \end{equation*}
and this can be extended to a distribution on $\mathcal{
\mathbb{Z}
}_{p}$. This distribution yields an integral in the case $d=1$.
So, the fermionic $p$-adic $q$-integral was defined by T. Kim as follows: \begin{equation} I_{-q}\left( f\right) =\int_{\mathcal{
\mathbb{Z}
_{p}}f\left( \xi \right) d\mu _{-q}\left( \xi \right) =\lim_{n\rightarrow \infty }\frac{1}{\left[ p^{n}\right] _{-q}}\sum_{\xi =0}^{p^{n}-1}\left( -1\right) ^{\xi }f\left( \xi \right) q^{\xi }\text{,} \label{equation 6} \end{equation}
where $\left[ x\right] _{q}$ is the $q$-extension of $x$, defined by \begin{equation*} \left[ x\right] _{q}=\frac{1-q^{x}}{1-q}\text{,} \end{equation*}
note that $\lim_{q\rightarrow 1}\left[ x\right] _{q}=x$ cf. \cite{Kim 2}, \cite{Kim 3}, \cite{Kim 4}, \cite{Kim 5}, \cite{Jang}.
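The $q$-numbers and the finite sums defining $I_{-q}\left( f\right) $ can be experimented with over the rationals (an illustrative sketch, not part of the paper; the genuine limit is $p$-adic, so only identities that hold exactly at every finite level are checked here):

```python
from fractions import Fraction

def q_number(x, q):
    """[x]_q = (1 - q^x) / (1 - q) for integer x and rational q != 1."""
    q = Fraction(q)
    return (1 - q ** x) / (1 - q)

# [3]_q = 1 + q + q^2, and [x]_q -> x as q -> 1:
q = Fraction(1, 2)
assert q_number(3, q) == 1 + q + q ** 2
print(float(q_number(5, Fraction(999, 1000))))   # close to 5

# Level-m Riemann sum of the fermionic integral:
# S_m(f) = (1/[p^m]_{-q}) * sum_{x=0}^{p^m - 1} f(x) * (-q)^x
def fermionic_sum(f, p, q, m):
    q = Fraction(q)
    total = sum(Fraction(f(x)) * (-q) ** x for x in range(p ** m))
    return total / q_number(p ** m, -q)   # divide by [p^m]_{-q}

# For f = 1 the numerator equals [p^m]_{-q}, so the sum is exactly 1
# at every level m (p odd):
p = 3
for m in (1, 2, 3):
    assert fermionic_sum(lambda x: 1, p, Fraction(1, 2), m) == 1
print("S_m(1) = 1 at every finite level")
```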
Let $d$ be a fixed positive integer with $\left( p,d\right) =1$. We now set \begin{eqnarray*} X &=&X_{d}=\lim_{\overleftarrow{n}}\mathcal{
\mathbb{Z}
}/dp^{n}\mathcal{
\mathbb{Z}
}, \\ X_{1} &=&
\mathbb{Z}
_{p}, \\ X^{\ast } &=&\underset{\underset{\left( a,p\right) =1}{0<a<dp}}{\cup }a+dp \mathcal{
\mathbb{Z}
}_{p}, \\ a+dp^{n}\mathcal{
\mathbb{Z}
}_{p} &=&\left\{ x\in X\mid x\equiv a\left( \func{mod}p^{n}\right) \right\} , \end{eqnarray*}
where $a\in \mathcal{
\mathbb{Z}
}$ satisfies the condition $0\leq a<dp^{n}$. For $f\in UD\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, \begin{equation*} \int_{
\mathbb{Z}
_{p}}f\left( x\right) d\mu _{-q}\left( x\right) =\int_{X}f\left( x\right) d\mu _{-q}\left( x\right) , \end{equation*}
(for details, see \cite{Kim 8}).
By the meaning of the $q$-Volkenborn integral, we consider below the strongly $p$-adic $q$-invariant distribution $\mu _{-q}$ on $
\mathbb{Z}
_{p}$ satisfying \begin{equation*} \left\vert \left[ p^{n}\right] _{-q}\mu _{-q}\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) -\left[ p^{n+1}\right] _{-q}\mu _{-q}\left( a+p^{n+1}\mathcal{
\mathbb{Z}
}_{p}\right) \right\vert <\delta _{n}, \end{equation*}
where $\delta _{n}\rightarrow 0$ as $n\rightarrow \infty $ and $\delta _{n}$ is independent of $a$. Let $f\in UD\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, for any $a\in \mathcal{
\mathbb{Z}
}_{p}$, we assume that the weight function $\omega \left( x\right) $ is defined by $\omega \left( x\right) =\omega ^{x}$ where $\omega \in
\mathbb{C}
_{p}$ with $\left\vert 1-\omega \right\vert _{p}<1$. We define the weighted measure on $\mathcal{
\mathbb{Z}
}_{p}$ as follows: \begin{equation} \mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) =\int_{a+p^{n}\mathcal{
\mathbb{Z}
}_{p}}\omega ^{\xi }f\left( \xi \right) d\mu _{-q}\left( \xi \right) \label{equation 2} \end{equation}
where the integral is the fermionic $p$-adic $q$-integral. By (\ref{equation 2}), we easily note that $\mu _{f,-q}^{\left( \omega \right) }$ is a strongly weighted measure on $
\mathbb{Z}
_{p}$. Namely, \begin{eqnarray*} &&\left\vert \left[ p^{n}\right] _{-q}\mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) -\left[ p^{n+1}\right] _{-q}\mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n+1}\mathcal{
\mathbb{Z}
}_{p}\right) \right\vert _{p} \\ &=&\left\vert \sum_{x=0}^{p^{n}-1}\left( -1\right) ^{x}\omega ^{x}f\left( x\right) q^{x}-\sum_{x=0}^{p^{n}}\left( -1\right) ^{x}\omega ^{x}f\left( x\right) q^{x}\right\vert _{p} \\ &\leq &\left\vert \frac{f\left( p^{n}\right) \left( -1\right) ^{p^{n}}\omega ^{p^{n}}q^{p^{n}}}{p^{n}}\right\vert _{p}\left\vert p^{n}\right\vert _{p} \\ &\leq &Cp^{-n} \end{eqnarray*}
Thus, we get the following proposition.
\begin{proposition} For $f,g\in UD\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $, then, we have \begin{equation*} \mu _{\alpha f+\beta g,-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) =\alpha \mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n} \mathcal{
\mathbb{Z}
}_{p}\right) +\beta \mu _{g,-q}^{\left( \omega \right) }\left( a+p^{n} \mathcal{
\mathbb{Z}
}_{p}\right) \text{.} \end{equation*} where $\alpha ,\beta $ are positive constants. Also, we have \begin{equation*} \left\vert \left[ p^{n}\right] _{-q}\mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) -\left[ p^{n+1}\right] _{-q}\mu _{f,-q}^{\left( \omega \right) }\left( a+p^{n+1}\mathcal{
\mathbb{Z}
_{p}\right) \right\vert \leq Cp^{-n} \end{equation*} where $C$ is a positive constant. \end{proposition}
Let $\mathcal{P}_{q}\left( x\right) \in
\mathbb{C}
_{p}\left[ \left[ x\right] _{q}\right] $ be an arbitrary $q$-polynomial. Now also, we indicate that $\mu _{\mathcal{P},-q}^{\left( \omega \right) }$ is a strongly weighted fermionic $p$-adic $q$-invariant measure on $
\mathbb{Z}
_{p}$. Without loss of generality, it suffices to prove the statement for $\mathcal{P}\left( x\right) =\left[ x\right] _{q}^{k}$. \begin{equation} \mu _{\mathcal{P},-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) =\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m}\right] _{-q}} \sum_{i=0}^{p^{m-n}-1}w^{a+ip^{n}}\left[ a+ip^{n}\right] _{q}^{k}\left( -q\right) ^{a+ip^{n}}\text{.} \label{equation 5} \end{equation}
where \begin{eqnarray} \left[ a+ip^{n}\right] _{q}^{k} &=&\sum_{j=0}^{k}\binom{k}{j}\left[ a\right] _{q}^{k-j}q^{aj}\left[ p^{n}\right] _{q}^{j}\left[ i\right] _{q^{p^{n}}}^{j} \label{equation 7} \\ &=&\left[ a\right] _{q}^{k}+k\left[ a\right] _{q}^{k-1}q^{a}\left[ p^{n} \right] _{q}\left[ i\right] _{q^{p^{n}}}+...+q^{ak}\left[ p^{n}\right] _{q}^{k}\left[ i\right] _{q^{p^{n}}}^{k}\text{.} \notag \end{eqnarray}
and \begin{equation} w^{a+ip^{n}}=w^{a}\sum_{l=0}^{ip^{n}}\binom{ip^{n}}{l}\left( w-1\right) ^{l}\equiv w^{a}\left( \func{mod}p^{n}\right) \text{.} \label{equation 8} \end{equation}
Similarly, \begin{equation} \left( -q\right) ^{a+ip^{n}}=\left( -q\right) ^{a}\sum_{l=0}^{ip^{n}}\binom{ ip^{n}}{l}\left( -1\right) ^{l}\left( q+1\right) ^{l}\equiv \left( -q\right) ^{a}\left( \func{mod}p^{n}\right) \text{.} \label{equation 9} \end{equation}
By (\ref{equation 5}), (\ref{equation 7}), (\ref{equation 8}) and (\ref {equation 9}), we have the following \begin{eqnarray*} \mu _{\mathcal{P},-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) &\equiv &\left( -1\right) ^{a}\omega ^{a}q^{a}\left[ a\right] _{q}^{k}\left( \func{mod}p^{n}\right) \\ &\equiv &\left( -1\right) ^{a}\omega ^{a}q^{a}\mathcal{P}\left( a\right) \left( \func{mod}p^{n}\right) \text{.} \end{eqnarray*}
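For an integer weight $w\equiv 1\ \left( \func{mod}p\right) $, the integer analogue of the condition $\left\vert 1-w\right\vert _{p}<1$, the congruence $w^{a+ip^{n}}\equiv w^{a}\ \left( \func{mod}p^{n}\right) $ used above can be verified directly (illustrative sketch, not part of the paper):

```python
# Check w^(a + i*p^n) ≡ w^a (mod p^n) for integer w ≡ 1 (mod p),
# the integer analogue of |1 - w|_p < 1.  Here w^(p^n) ≡ 1 (mod p^n)
# by the binomial expansion of (1 + (w-1))^(p^n).
p, n = 5, 3
modulus = p ** n
for w in (6, 11, 26):            # all ≡ 1 (mod 5)
    for a in (0, 1, 7):
        for i in (1, 2, 4):
            assert pow(w, a + i * p ** n, modulus) == pow(w, a, modulus)
print("w^(a + i*p^n) ≡ w^a (mod p^n) verified")
```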
For $x\in \mathcal{
\mathbb{Z}
}_{p}$, let $x\equiv x_{n}\left( \func{mod}p^{n}\right) $ and $x\equiv x_{n+1}\left( \func{mod}p^{n+1}\right) $, where $x_{n}$, $x_{n+1}\in \mathcal{
\mathbb{Z}
}$ with $0\leq x_{n}<p^{n}$ and $0\leq x_{n+1}<p^{n+1}$.
Then, we procure the following \begin{equation*} \left\vert \left[ p^{n}\right] _{-q}\mu _{\mathcal{P},-q}^{\left( \omega \right) }\left( a+p^{n}\mathcal{
\mathbb{Z}
}_{p}\right) -\left[ p^{n+1}\right] _{-q}\mu _{\mathcal{P},-q}^{\left( \omega \right) }\left( a+p^{n+1}\mathcal{
\mathbb{Z}
}_{p}\right) \right\vert \leq Cp^{-n}\text{,} \end{equation*}
where $C$ is a positive constant and $n\gg 0$.
Let $UD\left( \mathcal{
\mathbb{Z}
}_{p},\mathcal{
\mathbb{C}
}_{p}\right) $ be the space of uniformly differentiable functions on $ \mathcal{
\mathbb{Z}
}_{p}$ with supnorm \begin{equation*} \left\Vert f\right\Vert _{\infty }=\underset{x\in
\mathbb{Z}
_{p}}{\sup }\left\vert f\left( x\right) \right\vert _{p}. \end{equation*}
The difference quotient $\Delta _{1}f$ of $f$ is the function of two variables given by \begin{equation*} \Delta _{1}f\left( m,x\right) =\frac{f\left( x+m\right) -f\left( x\right) }{m },\text{ for all }x\text{, }m\in
\mathbb{Z}
_{p}\text{, }m\neq 0\text{.} \end{equation*}
A function $f:
\mathbb{Z}
_{p}\rightarrow
\mathbb{C}
_{p}$ is said to be a Lipschitz function if there exists a constant $M>0$ $ \left( \text{the Lipschitz constant of }f\right) $ such that \begin{equation*} \left\vert \Delta _{1}f\left( m,x\right) \right\vert \leq M\text{ for all } m\in
\mathbb{Z}
_{p}\backslash \left\{ 0\right\} \text{ and }x\in
\mathbb{Z}
_{p}. \end{equation*}
The $
\mathbb{C}
_{p}$ linear space consisting of all Lipschitz function is denoted by $ Lip\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $. This space is a Banach space with respect to the norm $\left\Vert f\right\Vert _{1}=\left\Vert f\right\Vert _{\infty }\vee \left\Vert \Delta _{1}f\right\Vert _{\infty }$ (for more information, see \cite{Kim 1}, \cite{Kim 2}, \cite{Kim 3}, \cite{Kim 4}, \cite{Kim 5}, \cite{Kim 6}, \cite{Jang}). The objective of this paper is to introduce the weighted $q$-Hardy-Littlewood-type maximal operator related to the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$. Also, we show the boundedness of the weighted $q$-Hardy-Littlewood-type maximal operator on the $p$-adic integer ring.
\section{\textbf{The weighted }$q$\textbf{-Hardy-Littlewood-type maximal operator}}
In view of (\ref{equation 2}) and the definition of the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$, we now consider the following theorem.
\begin{theorem} Let $\mu _{-q}^{\left( \omega \right) }$ be a strongly fermionic $p$-adic $q$-invariant measure on $
\mathbb{Z}
_{p}$ and $f\in UD\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $. Then for any $n\in
\mathbb{Z}
$ and any $\xi \in
\mathbb{Z}
_{p}$, we have \end{theorem}
$(1)$ $\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( \xi \right) \left( -q\right) ^{-\xi }d\mu _{-q}\left( \xi \right) =\frac{\left( -1\right) ^{a}\omega ^{a}}{\left[ p^{n} \right] _{-q}}\int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) $,
$(2)$ $\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }d\mu _{-q}\left( \xi \right) =\frac{\omega ^{a}\left( -q\right) ^{a}}{\left[ p^{n}\right] _{-q}}\frac{2}{1+\omega ^{p^{n}}q^{p^{n}} }$.
\begin{proof} (1) By using (\ref{equation 6}) and (\ref{equation 2}), we obtain the following: \begin{eqnarray*} &&\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( \xi \right) \left( -q\right) ^{-\xi }d\mu _{-q}\left( \xi \right) \\ &=&\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m+n}\right] _{-q}}\sum_{\xi =0}^{p^{m}-1}\omega ^{a+p^{n}\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-\left( a+p^{n}\xi \right) }q^{a+p^{n}\xi }\left( -1\right) ^{a+p^{n}\xi } \\ &=&\left( -1\right) ^{a}\omega ^{a}\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m}\right] _{-q^{p^{n}}}\left[ p^{n}\right] _{-q}}\sum_{\xi =0}^{p^{m}-1}\omega ^{\xi }\left( -q\right) ^{-p^{n}\xi }f\left( a+p^{n}\xi \right) \left( -q^{p^{n}}\right) ^{\xi } \\ &=&\frac{\left( -1\right) ^{a}\omega ^{a}}{\left[ p^{n}\right] _{-q}}\int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) . \end{eqnarray*}
(2) By the same method of (1), then, we easily derive the following \begin{eqnarray*} &&\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }d\mu _{-q}\left( \xi \right) \\ &=&\lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m+n}\right] _{-q}}\sum_{\xi =0}^{p^{m}-1}\omega ^{a+\xi p^{n}}\left( -q\right) ^{a+\xi p^{n}} \\ &=&\frac{\omega ^{a}\left( -q\right) ^{a}}{\left[ p^{n}\right] _{-q}} \lim_{m\rightarrow \infty }\frac{1}{\left[ p^{m}\right] _{-q^{p^{n}}}} \sum_{\xi =0}^{p^{m}-1}\left( \omega ^{p^{n}}\right) ^{\xi }\left( -q^{p^{n}}\right) ^{\xi } \\ &=&\frac{\omega ^{a}\left( -q\right) ^{a}}{\left[ p^{n}\right] _{-q}} \lim_{m\rightarrow \infty }\frac{1+\left( \omega ^{p^{n}}q^{p^{n}}\right) ^{p^{m}}}{1+\omega ^{p^{n}}q^{p^{n}}} \\ &=&\frac{\omega ^{a}\left( -q\right) ^{a}}{\left[ p^{n}\right] _{-q}}\frac{2 }{1+\omega ^{p^{n}}q^{p^{n}}} \end{eqnarray*}
Since $\underset{m\rightarrow \infty }{\lim }q^{p^{m}}=1$ for $\left\vert 1-q\right\vert _{p}<1,$ our assertion follows. \end{proof}
We are now ready to introduce the definition of the weighted $q$-Hardy-Littlewood-type maximal operator related to the fermionic $p$-adic $q$-integral on $
\mathbb{Z}
_{p}$ with a strong fermionic $p$-adic $q$-invariant distribution $\mu _{-q}$ in the $p$-adic integer ring.
\begin{definition} Let $\mu _{-q}^{\left( \omega \right) }$ be a strongly fermionic $p$-adic $q$ -invariant distribution on $
\mathbb{Z}
_{p}$ and $f\in UD\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $. Then, the weighted $q$-Hardy-Littlewood-type maximal operator related to the fermionic $p$-adic $q$-integral on $a+p^{n}
\mathbb{Z}
_{p}$ is defined by the following \begin{equation*} \mathcal{M}_{p,q}^{\left( \omega \right) }f\left( a\right) =\underset{n\in
\mathbb{Z}
}{\sup }\frac{1}{\mu _{1,-q}^{\left( \omega \right) }\left( a+p^{n}
\mathbb{Z}
_{p}\right) }\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }\left( -q\right) ^{-\xi }f\left( \xi \right) d\mu _{-q}\left( \xi \right) \end{equation*} for all $a\in
\mathbb{Z}
_{p}$. \end{definition}
We recall the famous Hardy-Littlewood maximal operator $\mathcal{M}_{\mu }$, which is defined by \begin{equation} \mathcal{M}_{\mu }f\left( a\right) =\underset{a\in Q}{\sup }\frac{1}{\mu \left( Q\right) }\int_{Q}\left\vert f\left( x\right) \right\vert d\mu \left( x\right) \text{,} \label{equation 3} \end{equation}
where $f:
\mathbb{R}
^{k}\rightarrow
\mathbb{R}
$ is a locally bounded Lebesgue measurable function, $\mu $ is the Lebesgue measure on $\mathbb{R}^{k}$, and the supremum is taken over all cubes $Q$ parallel to the coordinate axes. Note that the boundedness of the Hardy-Littlewood maximal operator serves as one of the most important tools in the investigation of the properties of variable exponent spaces (see \cite{Jang}). The essential aim of Theorem 2 is to deal with the weighted $q$-extension of the classical Hardy-Littlewood maximal operator in the space of $p$-adic Lipschitz functions on $
\mathbb{Z}
_{p}$ and to establish its boundedness. By the meaning of Definition 1, we now state the following theorem.
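For orientation, the discrete analogue of the classical operator in (\ref{equation 3}) on a finite array can be computed directly (an illustrative sketch, not from the paper; the function name below is ours):

```python
# Discrete uncentered Hardy-Littlewood maximal function on a finite array:
# M f[i] = max over intervals [l, r] containing i of the average of |f|.
def maximal_function(f):
    n = len(f)
    out = []
    for i in range(n):
        best = 0.0
        for l in range(i + 1):
            for r in range(i, n):
                avg = sum(abs(x) for x in f[l:r + 1]) / (r - l + 1)
                best = max(best, avg)
        out.append(best)
    return out

f = [0.0, 4.0, 0.0, 0.0]
# The spike at index 1 dominates every average containing it:
print(maximal_function(f))   # [2.0, 4.0, 2.0, 1.3333333333333333]
```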
\begin{theorem} Let $f\in UD\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $ and $x\in
\mathbb{Z}
_{p}$, we get \end{theorem}
\textit{(1)} $\mathcal{M}_{p,q}^{\left( \omega \right) }f\left( a\right) = \frac{\left( -1\right) ^{a}}{2q^{a}}\underset{n\in
\mathbb{Z}
}{\sup }\left( 1+\omega ^{p^{n}}q^{p^{n}}\right) \int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) $,
\textit{(2)} $\left\vert \mathcal{M}_{p,q}^{\left( \omega \right) }f\left( a\right) \right\vert _{p}\leq \left\vert \frac{\left( -1\right) ^{a}}{2q^{a}} \right\vert _{p}\underset{n\in
\mathbb{Z}
}{\sup }\left\vert 1+\omega ^{p^{n}}q^{p^{n}}\right\vert _{p}\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}$,
\textit{where} $\left\Vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}=\int_{
\mathbb{Z}
_{p}}\left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) $.
\begin{proof} (1) Because of Theorem 1 and Definition 1, we see \begin{eqnarray*} M_{p,q}^{\left( \omega \right) }f\left( a\right) &=&\underset{n\in
\mathbb{Z}
}{\sup }\frac{1}{\mu _{1,-q}^{\left( \omega \right) }\left( a+p^{n}
\mathbb{Z}
_{p}\right) }\int_{a+p^{n}
\mathbb{Z}
_{p}}\omega ^{\xi }\left( -q\right) ^{-\xi }f\left( \xi \right) d\mu _{-q}\left( \xi \right) \\ &=&\frac{\left( -1\right) ^{a}}{2q^{a}}\underset{n\in
\mathbb{Z}
}{\sup }\left( 1+\omega ^{p^{n}}q^{p^{n}}\right) \int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) \text{.} \end{eqnarray*}
(2) On account of (1), we can derive the following: \begin{eqnarray*} \left\vert \mathcal{M}_{p,q}^{\left( \omega \right) }f\left( a\right) \right\vert _{p} &=&\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}}\underset{n\in
\mathbb{Z}
}{\sup }\left( 1+\omega ^{p^{n}}q^{p^{n}}\right) \int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) \right\vert _{p} \\ &\leq &\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}}\right\vert _{p} \underset{n\in
\mathbb{Z}
}{\sup }\left\vert \left( 1+\omega ^{p^{n}}q^{p^{n}}\right) \int_{
\mathbb{Z}
_{p}}\omega ^{\xi }f\left( a+p^{n}\xi \right) \left( -q\right) ^{-p^{n}\xi }d\mu _{-q^{p^{n}}}\left( \xi \right) \right\vert _{p} \\ &\leq &\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}}\right\vert _{p} \underset{n\in
\mathbb{Z}
}{\sup }\left\vert 1+\omega ^{p^{n}}q^{p^{n}}\right\vert _{p}\int_{
\mathbb{Z}
_{p}}\left\vert f\left( a+p^{n}\xi \right) \right\vert _{p}\left\vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\xi }\right\vert _{p}d\mu _{-q^{p^{n}}}\left( \xi \right) \\ &\leq &\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}}\right\vert _{p} \underset{n\in
\mathbb{Z}
}{\sup }\left\vert 1+\omega ^{p^{n}}q^{p^{n}}\right\vert _{p}\left\Vert f\right\Vert _{1}\int_{
\mathbb{Z}
_{p}}\left\vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\xi }\right\vert _{p}d\mu _{-q^{p^{n}}}\left( \xi \right) \\ &=&\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}}\right\vert _{p}\underset{ n\in
\mathbb{Z}
}{\sup }\left\vert 1+\omega ^{p^{n}}q^{p^{n}}\right\vert _{p}\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}}\text{.} \end{eqnarray*}
This completes the proof of the theorem. \end{proof}
We note that Theorem 2 (2) gives a supnorm inequality for the weighted $q$-Hardy-Littlewood-type maximal operator on $
\mathbb{Z}
_{p}$; in other words, it yields the following inequality \begin{equation} \left\Vert \mathcal{M}_{p,q}^{\left( \omega \right) }f\right\Vert _{\infty }= \underset{x\in
\mathbb{Z}
_{p}}{\sup }\left\vert \mathcal{M}_{p,q}^{\left( \omega \right) }f\left( x\right) \right\vert _{p}\leq \mathcal{K}\left\Vert f\right\Vert _{1}\left\Vert \left( \frac{-q^{p^{n}}}{\omega }\right) ^{-\left( .\right) }\right\Vert _{L^{1}} \label{equation 4} \end{equation}
where $\mathcal{K}=\left\vert \frac{\left( -1\right) ^{a}}{2q^{a}} \right\vert _{p}\underset{n\in
\mathbb{Z}
}{\sup }\left\vert 1+\omega ^{p^{n}}q^{p^{n}}\right\vert _{p}$. By equation (\ref{equation 4}), we obtain the following corollary, which gives the boundedness of the weighted $q$-Hardy-Littlewood-type maximal operator on $
\mathbb{Z}
_{p}$.
\begin{corollary} $\mathcal{M}_{p,q}^{\left( \omega \right) }$ is a bounded operator from $ UD\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $ into $L^{\infty }\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $, where $L^{\infty }\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $ is the space of all $p$-adic supnorm-bounded functions with the norm \begin{equation*} \left\Vert f\right\Vert _{\infty }=\underset{x\in
\mathbb{Z}
_{p}}{\sup }\left\vert f\left( x\right) \right\vert _{p}\text{,} \end{equation*} for all $f\in L^{\infty }\left(
\mathbb{Z}
_{p},
\mathbb{C}
_{p}\right) $. \end{corollary}
\end{document}
EURASIP Journal on Image and Video Processing
Applying cheating identifiable secret sharing scheme in multimedia security
Zheng Ma1,2, Yan Ma1, Xiaohong Huang1, Manjun Zhang2 & Yanxiao Liu3
EURASIP Journal on Image and Video Processing volume 2020, Article number: 42 (2020)
In a (k,n) secret sharing scheme, one secret is encrypted into n shares in such a way that only k or more shares can decrypt the secret. Secret sharing can be extended into the field of multimedia, where it provides an efficient way to protect confidential information carried by multimedia data. Secret image sharing is the most important such extension, safely guarding the secrecy of images among multiple participants. On the other hand, cheating detection is an important issue in traditional secret sharing schemes that has been discussed for many years. However, cheating detection in secret image sharing has not been discussed sufficiently. In this paper, we consider the cheating problem in the application of secret image sharing schemes and construct a (k,n) secret image sharing scheme with the ability of cheating detection and identification. Our scheme is capable of identifying cheaters when k participants are involved in reconstruction. Both the cheating identification ability and the shadow size of the proposed scheme are improved over the previous cheating identifiable secret image sharing scheme.
The (k,n) secret sharing (SS) scheme was first proposed by Shamir [1] in 1979 to safeguard secret information among a group of participants. In Shamir's scheme, a secret s is divided into n shares v1,v2,...,vn using a k−1 degree polynomial in such a way that any k−1 or fewer shares give no information about the secret s, while any k or more shares can reconstruct the secret s efficiently. In [2], the researchers designed reliable and secure devices that realize Shamir's SS [1]. In 2002, Thien and Lin combined Shamir's SS scheme with images and proposed a secret image sharing (SIS) scheme [3] that protects the information of a secret image among multiple users. After years of research, many SIS schemes have been constructed, and the existing SIS schemes can be divided into two main categories: polynomial-based SIS schemes [4–6] and visual cryptography (VC)-based schemes [7–9]. Polynomial-based SIS schemes can reconstruct a lossless image with reduced shadow size; image reconstruction in VC-based SIS schemes can be accomplished simply by the human visual system without any computation, but the reconstructed image is lossy and the shadow size is expanded from the original image.
The cheating problem in SS schemes was first introduced by Tompa and Woll [10] in 1989. They considered the scenario where some dishonest participants (cheaters) pool fake shares when reconstructing the secret. In this way, the cheaters obtain the valid secret exclusively, while the other honest participants can only decode a forged secret. Many works have focused on solving the cheating problem in SS schemes. Some of them [11–13] were interested in detecting the cheating behavior, and others [14–16] focused on not only detecting the cheating but also identifying the cheaters. Cheating identifiable schemes have a stronger capability to resist cheating; as a result, their shares are larger and the schemes are more complicated than cheating detectable schemes.
Naturally, the cheating problem is also an important issue in the field of SIS schemes. However, this issue has not been discussed sufficiently in SIS so far. In the works [17–19], some SIS schemes with steganography and authentication were capable of detecting or identifying cheating behavior. However, those SIS schemes were not based on Shamir's scheme, and their capabilities of cheating detection or identification were not strong enough to prevent cheating. In [20], Liu et al. proposed a SIS scheme with the capability of cheating detection, but it does not support identification of cheaters. In [21], Yang et al. proposed a SIS scheme that can identify cheaters during reconstruction. In their scheme, shadows are generated from a bivariate polynomial and each shadow carries extra bits used for authentication. The cheating identification is based on the symmetry of the bivariate polynomial; however, the cheating identification capability of [21] is limited.
In this paper, we focus on the cheating problem in the fundamental polynomial-based SIS [3]. Since a cheating identifiable scheme has much stronger power to prevent cheating behavior, we construct a (k,n) SIS scheme capable of identifying up to \(\left \lfloor \frac {k-2}{2}\right \rfloor \) cheaters. The rest of this paper is organized as follows. In Section 2, we introduce related works, including Shamir's (k,n) SS scheme, the polynomial-based SIS scheme, and the model of cheating identification in SS schemes. In Section 3, we construct a (k,n) SIS scheme capable of cheating identification and provide its theoretical analysis. In Section 4, we use an example to illustrate cheating identification in the proposed scheme and give a comparison between the scheme in [21] and the proposed scheme. Section 5 concludes this paper.
Shamir's (k,n) SS scheme
A (k,n) SS scheme is an approach where a secret is divided into n shares in such a way that any k or more shares can reconstruct the secret and fewer than k shares reveal nothing about the secret. More formally, in a secret sharing scheme, there exist n participants \(\mathcal {P}=\{P_{1},P_{2},...,P_{n}\}\) and a dealer \(\mathcal {D}\). A (k,n) secret sharing scheme consists of two phases:
Sharing phase: During this phase, the dealer \(\mathcal {D}\) divides the secret s into n shares v1,v2,...,vn and sends each share vi to a participant Pi.
Reconstruction phase: During this phase, a group of at least k participants submit their shares to reconstruct the secret.
In the sharing phase, the dealer \(\mathcal {D}\) computes n shares in such a way that satisfies the following conditions:
Correctness: Any set of at least k shares can reconstruct the valid secret.
Secrecy: Any fewer than k shares have no information about the secret.
Shamir's (k,n) SS scheme is shown in the following Scheme 1.
Scheme 1: Shamir's (k,n) SS scheme
Sharing phase:
The dealer \(\mathcal {D}\) chooses a k−1 degree polynomial ψ(x)∈GF(q)[X] which satisfies s=ψ(0)∈GF(q).
The dealer \(\mathcal {D}\) computes n shares vi=ψ(i),i=1,2...,n, and sends each share vi to a participant Pi.
Reconstruction phase:
m(≥k) participants (say P1,P2...,Pm) submit their shares v1,v2...,vm together.
Computing the interpolated polynomial ψ(x) on v1,v2...,vm by the equation: \(\psi (x)={\sum \nolimits }^{m}_{i=1}\left (v_{i}\prod \nolimits _{u\neq i}\frac {x-u}{i-u}\right)\). Then the secret s=ψ(0).
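The two phases of Scheme 1 can be sketched in a few lines of Python. This is our own illustrative rendering, not the authors' code; the prime 251 (also the pixel field used later in the paper) and the helper names are our choices.

```python
import random

Q = 251  # a prime modulus; GF(251) is also the field used for pixels later

def make_shares(secret, k, n, q=Q):
    """Sharing phase: sample a random degree-(k-1) polynomial psi with psi(0) = secret."""
    coeffs = [secret] + [random.randrange(q) for _ in range(k - 1)]
    psi = lambda x: sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q
    return [(i, psi(i)) for i in range(1, n + 1)]  # share v_i = psi(i)

def reconstruct(shares, q=Q):
    """Reconstruction phase: Lagrange interpolation of psi at x = 0."""
    secret = 0
    for i, v_i in shares:
        num, den = 1, 1
        for u, _ in shares:
            if u != i:
                num = num * (-u) % q
                den = den * (i - u) % q
        secret = (secret + v_i * num * pow(den, q - 2, q)) % q  # Fermat inverse
    return secret
```

Any k of the n shares recover the same secret, which is exactly the correctness condition above.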
Cheating identification in SS scheme
Tompa and Woll [10] first introduced the cheating problem in secret sharing schemes, for instance, some cheaters submit fake shares during the reconstruction phase, which makes the honest participants reconstruct a forged secret and the cheaters can get the real secret exclusively. Cheating identification is a strong strategy to resist such cheating. The model of cheating identifiable secret sharing scheme is shown as follows:
Sharing phase: During this phase, the dealer \(\mathcal {D}\) divides the secret s into n shares v1,v2...,vn and sends each share vi to a user Pi.
Reconstruction phase: During this phase, a group of m users (m≥k) submit their shares to reconstruct the secret.
A public cheating identification algorithm is applied on these m shares to identify cheaters.
Let L be the set of users who are identified to be cheaters using cheating identification algorithm.
If (m−|L|)≥k, reconstruct the secret s from those shares of users who are not in L, and output (s,L);
If (m−|L|)<k, output L.
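The reconstruction-phase logic of this model can be summarized mechanically. The function below is only a schematic rendering of the control flow, with hypothetical `identify` and `reconstruct` callbacks standing in for a concrete scheme's algorithms.

```python
def reconstruct_with_ci(shares, k, identify, reconstruct):
    """shares: list of (participant_id, share) pairs; `identify` returns the
    set L of cheater ids; `reconstruct` maps >= k honest shares to the secret."""
    cheaters = identify(shares)                       # public CI algorithm
    honest = [s for s in shares if s[0] not in cheaters]
    if len(honest) >= k:
        return reconstruct(honest), cheaters          # output (s, L)
    return None, cheaters                             # output only L
```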
Polynomial-based SIS
In [3], Thien and Lin proposed a remarkable (k,n) SIS scheme based on Shamir's SS scheme. An image O is made up of multiple pixels, and the gray value of each pixel is in GF(251). In fact, the range of gray values is [0,255]; each pixel value larger than 250 is replaced by the value 250. Therefore, the reconstructed image suffers a slight quality distortion from the original image. However, in most cases this distortion can be neglected given the large number of pixels in an image. If all the pixels in an image are treated as secrets, a polynomial-based SIS can be derived from Shamir's SS. Thien-Lin's SIS scheme consists of two phases: a shadow generation phase and an image reconstruction phase. In the shadow generation phase, a dealer takes a secret image O as input and outputs n shadows S1,S2,...,Sn; in the image reconstruction phase, any set of m shadows (k≤m≤n) reconstructs the secret image O.
Scheme 2: Thien-Lin's (k,n) SIS
Shadow Generation phase:
Input secret image O, output n shadows S1,S2...,Sn
The dealer divides O into l-non-overlapping k-pixel blocks, B1,B2...,Bl.
For k pixels aj,0,aj,1...,aj,k−1∈GF(251) in each block Bj,j∈[1,l], the dealer generates a k−1 degree polynomial ψj(x)∈GF(251)[X], namely, ψj(x)=aj,0+aj,1x+aj,2x2+...,+aj,k−1xk−1, and computes n pixel-shares vj,1=ψj(1),vj,2=ψj(2)...,vj,n=ψj(n),j∈[1,l] as in Shamir's secret sharing scheme.
Outputs n shadows Si=v1,i∥v2,i∥,...,∥vl,i,i=1,2...,n, the symbol ∥ is the combination of pixel-shares.
Image reconstruction phase:
Input m shadows S1,S2,...,Sm (m≥k).
Extract the pixel-shares v1,j,v2,j...,vm,j,j∈[1,l] from S1,S2...,Sm.
Using the approach of Shamir's scheme, reconstruct the polynomial ψj(x)=aj,0+aj,1x+aj,2x2+...,+aj,k−1xk−1 from v1,j,v2,j...,vm,j,j∈[1,l]. The block Bj=aj,0∥aj,1∥...,∥aj,k−1.
Outputs O=B1∥B2∥,...,∥Bl.
It is obvious that Scheme 2 satisfies the k-threshold property: k or more shadows can reconstruct entire image; less than k shadows get nothing about secret image. The size of each shadow in Scheme 2 is \(\frac {1}{k}\) times of the original image.
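Scheme 2 is a coefficient-encoding variant of Shamir's scheme, which is easy to sketch. The code below is a minimal illustration under our own naming, assuming pixel values are already clamped to GF(251); recovering a block means recovering the full coefficient vector, so the interpolation returns coefficients rather than a single evaluation.

```python
Q = 251  # pixel values are assumed pre-clamped to GF(251), as in the paper

def eval_poly(coeffs, x, q=Q):
    return sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q

def interp_coeffs(xs, ys, q=Q):
    """Coefficients (lowest degree first) of the Lagrange interpolant through (xs, ys)."""
    k = len(xs)
    coeffs = [0] * k
    for i in range(k):
        basis, den = [1], 1
        for u in range(k):
            if u == i:
                continue
            # multiply the basis polynomial by (x - xs[u])
            nxt = [0] * (len(basis) + 1)
            for j, c in enumerate(basis):
                nxt[j + 1] = (nxt[j + 1] + c) % q
                nxt[j] = (nxt[j] - xs[u] * c) % q
            basis = nxt
            den = den * (xs[i] - xs[u]) % q
        scale = ys[i] * pow(den, q - 2, q) % q
        coeffs = [(c + scale * b) % q for c, b in zip(coeffs, basis)]
    return coeffs

def generate_shadows(pixels, k, n, q=Q):
    """Each k-pixel block becomes the coefficients of a degree-(k-1) polynomial."""
    blocks = [pixels[j:j + k] for j in range(0, len(pixels), k)]
    return [[eval_poly(b, i, q) for b in blocks] for i in range(1, n + 1)]

def reconstruct_image(ids, shadows, q=Q):
    """Recover each block's k coefficients from k shadows held by `ids`."""
    pixels = []
    for pix_shares in zip(*shadows):
        pixels.extend(interp_coeffs(ids, list(pix_shares), q))
    return pixels
```

Each shadow holds one pixel-share per block, giving the \(\frac{1}{k}\) shadow-size ratio noted above.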
In this section, we consider the cheating problem in Scheme 2 and then proposed a cheating identifiable SIS that has the ability of identifying cheaters; then, the theoretical analysis is discussed to prove the correctness of the proposed work.
The proposed scheme
Suppose that during the image reconstruction phase cheaters submit forged shadows. As a result, the honest participants can only obtain a fake secret image, while the cheaters may even reconstruct the secret image exclusively. In order to prevent this problem, we construct a (k,n) SIS scheme with cheating identification under the model in Section 2.2. Our scheme is based on Thien-Lin's fundamental scheme and can also be extended to other polynomial-based SIS schemes. Our scheme is shown in the following Scheme 3.
Scheme 3: (k,n) SIS scheme with cheating identification
Shadow Generation Phase: Input a secret image O, output n shadows S1,S2,...,Sn.
The dealer divides O into l-non-overlapping \(k+\left \lfloor \frac {k-2}{2}\right \rfloor \)-pixel blocks, B1,B2,...,Bl. (Let \(\omega =\left \lfloor \frac {k-2}{2}\right \rfloor \) in the rest of this paper)
For each block Bi,i∈[1,l], there are k+ω secret pixels ai,0,ai,1,...,ai,k−1 and bi,0,bi,1,...,bi,ω−1∈GF(251). The dealer generates a k−1 degree polynomial ψi(x)=ai,0+ai,1x+...,+ai,k−1xk−1∈GF(251)[X].
The dealer chooses a random integer γi, and computes k−ω pixels bi,ω,bi,ω+1,...,bi,k−1 which satisfy: ai,ω+γibi,ω=0,ai,ω+1+γibi,ω+1=0,...,ai,k−1+γibi,k−1=0 over GF(251). Then the dealer generates another k−1 degree polynomial φi(x)=bi,0+bi,1x+...,+bi,k−1xk−1. It also implies that ηi(x)=ψi(x)+γiφi(x) is of degree ω−1.
For each block Bi,i∈[1,l], the dealer computes pixel-shares vi,j={mi,j,di,j},mi,j=ψi(j),di,j=φi(j),j=1,2...,n for each participant Pj. The shadow Sj for Pj is Sj=v1,j∥v2,j∥,...,∥vl,j.
Image Reconstruction Phase: Input k shadows, without loss of generality (S1,S2,...,Sk)
Extract the pixel-shares vi,j=(mi,j,di,j),i=1,2...,l,j=1,2...,k from S1,S2...,Sk.
For each group of vi,1,vi,2,...,vi,k,i∈[1,l], using Lagrange interpolation to reconstruct ψi(x) and φi(x) from mi,1,mi,2,...,mi,k and di,1,di,2,...,di,k respectively.
If there exist a degree-(ω−1) polynomial ηi(x) and an integer γi such that ηi(x)=ψi(x)+γiφi(x),i∈[1,l], recover the block Bi=(ai,0,ai,1,...,ai,k−1,bi,0,bi,1,...,bi,ω−1),i=1,2,...,l. The image O is reconstructed as O=B1∥B2,...,∥Bl.
Otherwise, if there exists no integer γj,j∈[1,l] such that ψj(x)+γjφj(x) has degree ω−1, use the following Algorithm 1 to identify cheaters.
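The shadow generation step of Scheme 3, in particular the completion of φ so that η=ψ+γφ drops to degree ω−1, can be sketched as follows. This is our illustrative Python rendering (not the authors' code), and the test values come from the numerical (6,n) example given later in Section 4.

```python
Q = 251  # pixel field GF(251)

def eval_poly(coeffs, x, q=Q):
    return sum(c * pow(x, t, q) for t, c in enumerate(coeffs)) % q

def build_phi(a, b_prefix, gamma, q=Q):
    """Complete phi's coefficients so that eta = psi + gamma*phi has degree
    omega-1: enforce a[j] + gamma*b[j] = 0 over GF(q) for j = omega..k-1."""
    k, omega = len(a), len(b_prefix)
    inv_gamma = pow(gamma, q - 2, q)  # gamma is chosen nonzero
    return list(b_prefix) + [(-a[j]) * inv_gamma % q for j in range(omega, k)]

def pixel_shares(a, b, n, q=Q):
    """Pixel-share v_j = (psi(j), phi(j)) for participants P_1..P_n."""
    return [(eval_poly(a, j, q), eval_poly(b, j, q)) for j in range(1, n + 1)]
```

Only the first ω coefficients of φ carry secret pixels; the rest exist purely to cancel the high-order part of ψ.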
The cheating identification process is described in Algorithm 1. For simplicity, it takes k pixel-shares vi=(mi,di),i=1,2,...,k as input and outputs the set of cheaters.
Algorithm 1: Cheating identification: input vi=(mi,di),i=1,2,...,k; output the set \(\mathcal {X}\) of cheaters.
Generating \(C^{\omega +1}_{k}\) subsets \(\varepsilon _{1},\varepsilon _{2},...,\varepsilon _{C^{\omega +1}_{k}}\) on the set of k pixel-shares {v1,v2,...,vk}.
For each subset \(\varepsilon _{i},i\in \left [1,C^{\omega +1}_{k}\right ]\), computing its corresponding checking polynomial \(\eta ^{\prime }_{i}(x)\). For example, for ε1={v1,v2,...,vω,vω+1}, compute two degree-ω interpolated polynomials \(\psi ^{\prime }_{1}(x)\) and \(\varphi ^{\prime }_{1}(x)\) on m1,m2,...,mω+1 and d1,d2,...,dω,dω+1 respectively. Figure out an integer \(\gamma ^{\prime }_{1}\) such that \(\eta ^{\prime }_{1}(x)=\psi ^{\prime }_{1}(x)+\gamma ^{\prime }_{1}\varphi ^{\prime }_{1}(x)\) is of degree ω−1. Then \(\eta ^{\prime }_{1}(x)\) is the checking polynomial on the subset ε1.
Figure out the majority polynomial ηm(x) among all the \(C^{\omega +1}_{k}\) checking polynomials. Suppose ε1,ε2,...,εw are all the w subsets whose checking polynomial equals to the majority polynomial ηm(x), then the set of cheaters is presented by \(\mathcal {X}=\left \{P_{1},P_{2},...,P_{k}\right \}-\left (\varepsilon _{1}\bigcup \varepsilon _{2},...,\bigcup \varepsilon _{w}\right)\).
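Algorithm 1 can be sketched as below; this is our own Python rendering under stated assumptions, not the authors' implementation. The γ′ of a subset is computed by forcing the x^ω coefficient of ψ′+γ′φ′ to vanish, and the majority vote is a tally of identical (γ′, η′) pairs over all (ω+1)-subsets.

```python
from itertools import combinations

Q = 251

def interp(xs, ys, q=Q):
    """Lagrange interpolation; returns coefficients, lowest degree first."""
    k = len(xs)
    out = [0] * k
    for i in range(k):
        basis, den = [1], 1
        for u in range(k):
            if u == i:
                continue
            nxt = [0] * (len(basis) + 1)
            for j, c in enumerate(basis):
                nxt[j + 1] = (nxt[j + 1] + c) % q   # times x
                nxt[j] = (nxt[j] - xs[u] * c) % q   # times -xs[u]
            basis, den = nxt, den * (xs[i] - xs[u]) % q
        s = ys[i] * pow(den, q - 2, q) % q
        out = [(c + s * b) % q for c, b in zip(out, basis)]
    return out

def checking_key(ids, shares, q=Q):
    """Checking polynomial of an (omega+1)-subset: interpolate degree-omega
    psi' and phi', then choose gamma' so that the x^omega coefficient of
    eta' = psi' + gamma'*phi' vanishes."""
    psi = interp(ids, [m for m, _ in shares], q)
    phi = interp(ids, [d for _, d in shares], q)
    if phi[-1] == 0:
        return None  # gamma' is undefined for this subset
    gamma = (-psi[-1]) * pow(phi[-1], q - 2, q) % q
    return gamma, tuple((a + gamma * b) % q for a, b in zip(psi, phi))

def identify_cheaters(ids, shares, omega, q=Q):
    """Algorithm 1: majority vote over all (omega+1)-subsets of the k shares."""
    tallies = {}  # checking key -> list of index subsets producing it
    for sub in combinations(range(len(ids)), omega + 1):
        key = checking_key([ids[t] for t in sub], [shares[t] for t in sub], q)
        if key is not None:
            tallies.setdefault(key, []).append(sub)
    majority = max(tallies, key=lambda key: len(tallies[key]))
    honest = set().union(*tallies[majority])
    return {ids[t] for t in range(len(ids))} - {ids[t] for t in honest}
```

Running this on the Section 4 example (two forged shares among k=6) returns exactly the two cheaters.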
In Thien-Lin's scheme, the size of each shadow is \(\frac {1}{k}\) times that of the secret image. In our scheme, the pixel-shares vi,j=(mi,j,di,j) are generated from each (k+ω)-pixel block; therefore, the size of each shadow in our scheme is \(\frac {2}{k+\omega }\) times that of the secret image O. The most complicated operation of cheating identification in our scheme is computing \(C^{\omega +1}_{k}\) polynomials of degree ω−1; thus, the time complexity is \(O\left (C^{\omega +1}_{k}\ast \omega ^{2}\right)\).
Observe that in the proposed scheme, each block of the secret image is shared using Shamir's (k,n) secret sharing scheme. Therefore, the proposed scheme is a perfect (k,n) threshold scheme: k or more shadows can reconstruct the image, while k−1 or fewer shadows get no information about the image.
Theoretical analysis
The capability of cheating identification of the proposed scheme is summarized in the following lemma and theorem. Since in our scheme the secret image is divided into multiple blocks and each block is encrypted into shares using the same approach, we use one block of k+ω pixels instead of the entire image to analyze the cheating identification ability.
Lemma 1
Sharing a (k+ω)-pixel block B=(a0,a1,...,ak−1,b0,b1,...,bω−1) as in Scheme 3, any ω+1 participants can obtain γ and the degree-(ω−1) polynomial η(x) whose parameters are decided by the dealer D (η(x)=γφ(x)+ψ(x); any ω+1 participants can obtain η(x) and γ without knowledge of ψ(x) and φ(x)), but ω participants are unable to get any information about γ and η(x).
Suppose the ω+1 participants are P1,P2,...,Pω+1, and they possess ω+1 pixel-shares vi={mi,di},i=1,2,...,ω+1. The ω+1 points (1,m1),(2,m2),...,(ω+1,mω+1) determine an interpolated polynomial ψ′(x), and another interpolated polynomial φ′(x) is determined by the ω+1 points (1,d1),(2,d2),...,(ω+1,dω+1). The ω+1 points (1,m1),(2,m2),...,(ω+1,mω+1) cannot lie on a polynomial of degree less than ω; otherwise, the interpolated polynomial on (1,m1),(2,m2),...,(k,mk) would have degree less than k−1, whereas the k points (1,m1),(2,m2),...,(k,mk) determine the interpolated polynomial ψ(x) of degree k−1 and k>ω+1. So ψ′(x) and φ′(x) are both degree-ω interpolated polynomials.
Now, we have η(x)=γφ(x)+ψ(x) and η′(x)=ψ′(x)+γ′φ′(x). Let R(x)=η(x)−η′(x). Therefore, we get:
$$ R(x)=\psi(x)-\psi^{\prime}(x)+\gamma\varphi(x)-\gamma^{\prime}\varphi^{\prime}(x). $$
We get ψ(i)=ψ′(i),i=1,2,...,ω+1, since ψ(x) and ψ′(x) both pass through (i,mi),i=1,2,...,ω+1. Similarly, we get φ(i)=φ′(i),i=1,2,...,ω+1. Together with Eq. (1), we get R(i)=(γ−γ′)φ′(i),i=1,2,...,ω+1, which means that R(x) agrees with (γ−γ′)φ′(x) at ω+1 points, while both R(x) and (γ−γ′)φ′(x) are of degree no more than ω.
Thus, we get a conclusion that:
$$ R(x)=\left(\gamma-\gamma^{\prime}\right)\varphi^{\prime}(x). $$
Observe that R(x)=η(x)−η′(x) is an interpolated polynomial of degree no more than ω−1, whereas φ′(x) is of degree exactly ω. Therefore, γ=γ′ and η′(x)=η(x); otherwise, Eq. (2) would be contradicted.
Next, we prove that γ and η(x) cannot be obtained by ω shareholders. By construction, φ(x) is correlated with ψ(x): there exist a degree-(ω−1) polynomial η(x) and a value γ satisfying η(x)=ψ(x)+γφ(x). If we treat the ω coefficients of η(x) and the value γ as ω+1 unknowns, then each participant Pi can build a linear equation η(i)=mi+γ·di on these ω+1 unknowns using the share (mi,di). As a result, ω participants can build only ω linear equations on these ω+1 unknowns, which cannot determine the unknowns according to the theory of linear equations. Alternatively, using their ω shares, ω participants can only obtain two degree-(ω−1) interpolated polynomials ψ′′(x) and φ′′(x), and η(x) and γ can be denoted as
$$ \eta(x)=\gamma\varphi^{\prime\prime}(x)+\psi^{\prime\prime}(x). $$
However, η(x), ψ′′(x), and φ′′(x) are all degree-(ω−1) interpolated polynomials. According to Eq. (3), each element e in GF(p) could be γ with probability \(\frac {1}{p}\). Therefore, γ and η(x) cannot be obtained by ω shareholders. End proof. □
If the number t of cheaters satisfies \(t\leq \omega =\left \lfloor \frac {k-2}{2}\right \rfloor \), these cheaters can be identified in the proposed scheme.
According to Lemma 1, if there were ω+1 cheaters, they could obtain γ and η(x) from their valid pixel-shares. Among them, a critical cheater Pj could even forge a pixel-share \(v^{\prime }_{j}=\left (m^{\prime }_{j},d^{\prime }_{j}\right)\) with \(m^{\prime }_{j}\neq m_{j}\) satisfying \(m^{\prime }_{j}+\gamma \cdotp d^{\prime }_{j}=\eta (j)\). Every combination of ω+1 submitted pixel-shares including \(v^{\prime }_{j}\) would then deduce the identical checking polynomial η(x), so the cheater Pj would succeed in cheating during secret reconstruction and cheater identification.
As shown in Lemma 1, when \(t\leq \omega =\left \lfloor \frac {k-2}{2}\right \rfloor \), ω cheaters can get no information about γ. Thus, forged shares that avoid identification cannot be made by any ω or fewer cheaters. By Lemma 1, a checking polynomial can be generated by any ω+1 participants, and since \(t\leq \omega =\left \lfloor \frac {k-2}{2}\right \rfloor \), at least ω+2 valid shares are among the k submitted shares. Hence \(C^{\omega +1}_{\omega +2}=\omega +2\) valid checking polynomials can be generated in cheating identification. Without loss of generality, suppose P1 is a critical cheater who releases a forged pixel-share \(v^{\prime }_{1}\). P1 succeeds if and only if there is a set of ω+2 submitted pixel-shares including \(v^{\prime }_{1}\) such that every ω+1 of them deduce the same checking polynomial η1(x).
Suppose the ω+2 submitted pixel-shares are \(v^{\prime }_{1},v^{\prime }_{2},...,v^{\prime }_{t},v_{t+1},...,v_{\omega +2}\), where P1,P2,...,Pt are the t cheaters and P1 is a critical cheater who knows \(v^{\prime }_{2},v^{\prime }_{3},...,v^{\prime }_{t}\). If η1(x) and the value γ1 are made from \(v^{\prime }_{1},v^{\prime }_{2},...,v^{\prime }_{t},v_{t+1},...,v_{\omega +1}\), then vω+2=(mω+2,dω+2) has to satisfy
$$ m_{\omega+2}+\gamma_{1}d_{\omega+2}=\eta_{1}(\omega+2). $$
Note that the t cheaters can get no information about γ1, η1(x), and vω+2=(mω+2,dω+2), so Eq. (4) holds with probability \(\frac {1}{p}\). In other words, the probability that P1 cheats successfully is \(\frac {1}{p}\). End proof. □
In this part, we show the experimental results and give a comparison between our scheme and other cheating detectable SIS schemes. In this example, let the threshold be (k,n)=(6,n), and the secret image O is divided into l blocks where each block includes \(k+\left \lfloor \frac {k-2}{2}\right \rfloor =8\) secret pixels. Assume one block B consists of the following 8 pixels: (a0,...,a5,b0,b1)=(57,68,90,231,42,89,124,186). The dealer selects an integer γ=10, then generates two k−1=5 degree polynomials: ψ(x)=57+68x+90x2+231x3+42x4+89x5 and φ(x)=124+186x+242x2+2x3+46x4+217x5, where ai+γ·bi=0,i=2,3,4,5. Supposing P1,P2,...,P6 participate in image reconstruction, the pixel-shares are v1=(75,64),v2=(148,124),v3=(209,135),v4=(220,151),v5=(59,134),v6=(160,141).
If all these 6 participants are honest, they submit real pixel-shares in image reconstruction, and two polynomials ψ(x)=57+68x+90x2+231x3+42x4+89x5 and φ(x)=124+186x+242x2+2x3+46x4+217x5 can be reconstructed, respectively. They can also find γ=10, such that η(x)=ψ(x)+γ·φ(x)=42+171x is of degree \(\left \lfloor \frac {k-2}{2}\right \rfloor -1\). It means that there is no cheating behavior, and the pixel-block B=(57,68,90,231,42,89,124,186) is reconstructed.
Now we assume P1,P2 are two cheaters \(\left (t=2\leq \left \lfloor \frac {k-2}{2}\right \rfloor \right)\) who submit fake pixel-shares \(v_{1}^{\prime }=(98,109),v_{2}^{\prime }=(215,81)\) in image reconstruction. The cheating behavior can easily be detected using our scheme. In the cheating identification algorithm, all 4 subsets that contain 3 honest participants compute the same checking polynomial η(x)=42+171x. For example, (P3,P4,P5) can get two interpolated polynomials ψ∗(x)=148+111x+165x2,φ∗(x)=140+6x+109x2. Then, they can figure out a unique integer γ=10 such that η(x)=ψ∗(x)+γ·φ∗(x)=42+171x. For another subset of 3 honest participants (P3,P4,P6), they can reconstruct two interpolated polynomials ψ∗(x)=12+23x+70x2,φ∗(x)=3+65x+244x2 from their pixel-shares. Then, they can also figure out the integer γ=10 such that η(x)=ψ∗(x)+γ·φ∗(x)=42+171x. On the other side, each subset of three participants that contains P1 or P2 deduces a different checking polynomial. Therefore, η(x)=42+171x is regarded as the majority polynomial, and the cheaters can be identified accordingly.
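The arithmetic of this example is easy to re-check mechanically. The following sketch (our own code, with names chosen for illustration) reproduces the six pixel-shares from the two polynomials and shows how a forged share fails the η-consistency check m+γ·d≡η(j) mod 251.

```python
Q = 251
ev = lambda cs, x: sum(c * pow(x, t, Q) for t, c in enumerate(cs)) % Q

psi = [57, 68, 90, 231, 42, 89]
phi = [124, 186, 242, 2, 46, 217]
gamma, eta = 10, [42, 171]  # eta(x) = psi(x) + gamma*phi(x) = 42 + 171x

# honest pixel-shares v_j = (psi(j), phi(j)) for P_1..P_6
shares = [(ev(psi, j), ev(phi, j)) for j in range(1, 7)]

def consistent(j, share, q=Q):
    """An honest pixel-share (m, d) must satisfy m + gamma*d = eta(j) mod q."""
    m, d = share
    return (m + gamma * d) % q == ev(eta, j)
```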
In [21], Yang et al. proposed an authentication approach in secret image sharing that is also capable of identifying cheaters during the secret reconstruction phase. The scheme in [21] is also based on Thien-Lin's scheme [3], but uses a symmetric bivariate polynomial to generate shadows. It encrypts each \(\frac {k(k+1)}{2}\) secret pixels into k pixel-shares, and the size of each shadow is \(\frac {2}{k+1}\) times that of the secret image. The shadow size in our scheme is \(\frac {2}{k+\omega }\), which is smaller than in Yang et al.'s scheme when \(\omega =\left \lfloor \frac {k-2}{2}\right \rfloor \geq 1\). In their cheating identification, not only the k participants but also the other n−k participants work together to vote for the k participants using the symmetry of the bivariate polynomial. The participants who get fewer than \(\left \lfloor \frac {n-1}{2}\right \rfloor \) votes are identified as cheaters. However, in most cheating identifiable secret sharing schemes, cheating identification is carried out only by the participants in secret reconstruction, and it is not practical to involve the other n−k participants in cheating identification. In fact, if only k participants work together to identify cheaters in Yang et al.'s scheme, the cheaters cannot be identified since a cheater can always get more votes than honest participants. The comparison between Yang et al.'s scheme and the proposed scheme is shown in the following Table 1. The symbol CI in Table 1 means the capability of cheating identification.
Table 1 Comparison between the proposed scheme and Yang et al.'s scheme
We can also use the 512×512 Lena image (Fig. 1) as the secret image O to generate shadows using our (4,7) SIS scheme with cheating identification. The n=7 shadows are shown in Fig. 2, where each shadow is \(\frac {2}{k+\left \lfloor \frac {k-2}{2}\right \rfloor }=\frac {2}{5}\) times the size of the secret image. Any 4 participants can reconstruct the image and identify up to \(\left \lfloor \frac {k-2}{2}\right \rfloor =1\) cheater.
512×512 secret image
Seven shadows on the secret image
In this paper, we consider the well-known cheating problem in polynomial-based (k,n) SIS, in which a group of malicious participants submit fake shadows during image reconstruction. In order to prevent such cheating behavior, we construct a (k,n) SIS scheme with cheating identification under the model of cheating identifiable SS schemes. Our scheme is capable of identifying up to \(\left \lfloor \frac {k-2}{2}\right \rfloor \) cheaters when k participants are involved in image reconstruction. In addition, the proposed scheme is based on the landmark Thien-Lin polynomial-based SIS scheme and can easily be extended to other polynomial-based SIS schemes. Both the shadow size and the capability of cheating identification are improved over previous SIS schemes with cheating identification.
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
SS:
Secret sharing
SIS:
Secret image sharing
VC:
Visual cryptography
A. Shamir, How to share a secret. Commun. ACM. 22(11), 612–613 (1979).
Z. Wang, M. Karpovsky, L. Bu, Design of reliable and secure devices realizing Shamir's secret sharing. IEEE Trans. Comput.65(8), 2443–2455 (2016).
C. C. Thien, J. C. Lin, Secret image sharing. Comput. Graph.26(5), 765–770 (2002).
Y. X. Liu, C. N. Yang, Q. D. Sun, Y. C. Chen, (k,n) scalable secret image sharing with multiple decoding options. J. Intell. Fuzzy Syst.38(1), 219–228 (2020).
Y. X. Liu, C. N. Yang, Q. D. Sun, Thresholds based image extraction schemes in big data environment in intelligent traffic management. IEEE Trans. Intell. Transp. Syst. (2020). https://doi.org/10.1109/TITS.2020.2994386.
Y. X. Liu, C. N. Yang, C. M. Wu, Q. D. Sun, W. Bi, Threshold changeable secret image sharing scheme based on interpolation polynomial. Multimed. Tools Appl.78(13), 18653–18667 (2019).
R. Z. Wang, Region incrementing visual cryptography. IEEE Sig. Process. Lett.16(8), 659–662 (2009).
C. N. Yang, H. W. Shih, C. C. Wu, L. Harn, k out of n region incrementing scheme in visual cryptography. IEEE Trans. Circ. Syst. Video Technol.22(5), 799–809 (2012).
C. N. Yang, Y. C. Lin, C. C. Wu, Region in region incrementing visual cryptography scheme. Proc. IWDW 2012, LNCS 7809, 449–463 (2013).
M. Tompa, H. Woll, How to share a secret with cheaters. J. Cryptol.1(3), 133–138 (1989).
P. Y. Lin, C. C. Chang, Cheating resistance and reversibility-oriented secret sharing mechanism. IET Inf. Secur.5(2), 81–92 (2011).
S. Obana, T. Araki, in Proceedings of ASIACRYPT, LNCS 4284. Almost optimum secret sharing schemes secure against cheating for arbitrary secret distribution (SpringerHeidelberg, 2006), pp. 364–379.
W. Ogata, K. Kurosawa, D. R. Stinson, Optimum secret sharing scheme secure against cheating. SIAM J. Discret. Math.20(1), 79–95 (2006).
K. Kurosawa, S. Obana, W. Ogata, in Proceedings of CRYPTO, LNCS 563. t-cheater identifiable (k,n) secret sharing schemes (SpringerHeidelberg, 1995), pp. 410–423.
S. Obana, in Proceedings of EUROCRYPT, LNCS 6632. Almost optimum t-cheater identifiable secret sharing schemes (SpringerHeidelberg, 2011), pp. 284–302.
L. Harn, C. L. Lin, Detection and identification in (t,n) secret sharing scheme. Des. Code Crypt.52(1), 15–24 (2009).
C. C. Lin, W. H. Tsai, Secret image sharing with steganography and authentication. J. Syst. Softw.73:, 405–414 (2004).
C. N. Yang, T. S. Chen, K. H. Yu, C. C. Wang, Improvements of image sharing with steganography and authentication. J. Syst. Softw.80:, 1070–1076 (2007).
C. C. Chang, Y. P. Hsieh, C. H. Lin, Sharing secrets in stego images with authentication. Pattern Recog.41:, 3130–3137 (2008).
Y. X. Liu, Q. D. Sun, C. N. Yang, (k,n) secret image sharing scheme capable of cheating detection. EURASIP J. Wirel. Commun. Netw.2018:, 72 (2018).
C. N. Yang, J. F. Quyang, L. Harn, Steganography and authentication in image sharing without party bits. Opt. Commun.285:, 1725–1735 (2012).
We want to thank Professor Lein Harn from the University of Missouri-Kansas City for his help in improving the English.
The research presented in this paper is supported by the National Key R&D Program of China under No. 2018YFB1800100.
Beijing University of Posts and Telecommunications, Beijing, China
Zheng Ma, Yan Ma & Xiaohong Huang
China Unicom Network Technology Research Institute, Beijing, China
Zheng Ma & Manjun Zhang
Xi'an University of Technology, 710048, Xi'an, China
Yanxiao Liu
Zheng Ma provided the main concept, Yan Ma and Xiaohong Huang designed the algorithms, Manjun Zhang performed the experiments, and Yanxiao Liu made the comparisons. All authors read and approved the final manuscript.
Correspondence to Yanxiao Liu.
Ma, Z., Ma, Y., Huang, X. et al. Applying cheating identifiable secret sharing scheme in multimedia security. J Image Video Proc. 2020, 42 (2020). https://doi.org/10.1186/s13640-020-00529-z
Received: 15 April 2020
\begin{document}
\title[Maps preserving numerical range of Lie product]{Non-linear maps on self-adjoint operators preserving numerical radius and numerical range of Lie product}
\author{Jinchuan Hou} \address[Jinchuan Hou]{College of
Mathematics, Taiyuan University of Technology, Taiyuan,
030024, P. R. China} \email[J. Hou]{[email protected]}
\author{Kan He} \address[Kan He]{College of Mathematics, Taiyuan University of Technology, Taiyuan,
030024, P. R. China} \email[K. He]{[email protected]}
\thanks{{\it 2010 Mathematics Subject Classification.} 47H20, 47B49, 47A12} \thanks{{\it Key words and phrases.} Numerical range, numerical radius, Lie product of operators, general preservers} \thanks{This work is supported by the National Natural Science Foundation of China (11171249, 11201329, 11271217) and the Program for the Outstanding Innovative Teams of Higher Learning Institutions of Shanxi.}
\maketitle \begin{abstract}
Let $H$ be a complex separable Hilbert space of dimension $\geq 2$, ${\mathcal B}_s(H)$ the space of all self-adjoint operators on $H$. We give a complete classification of non-linear surjective maps on $\mathcal B_s(H)$ preserving respectively numerical radius and numerical range of Lie product.
\end{abstract}
\section{Introduction}
Let $A$ be a bounded linear operator acting on a complex Hilbert space $H$. Recall that the
numerical range of $A$ is the set $W(A)=\{\langle Ax,x\rangle \,|\, x\in H, \|x\|=1 \},$
and the numerical radius of $A$ is $w(A)=\sup\{|\lambda| \,|\, \lambda \in
W(A)\}.$ The problem of characterizing linear maps on matrices or operators that preserve numerical range or numerical radius has been studied by many authors, see for example \cite{Chan4,Chan5, CH2, LT2} and the references therein. In recent years, interest in characterizing general (non-linear) preservers of numerical ranges or numerical radius has been growing (\cite{BH8,BHX2,CLS7,CH1,CH3, DHHK,HwL,HHZ,HHZ2,HD,HLQ,LS7, LPS}).
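As a simple illustration of these notions (a standard example, recorded here only for the reader's convenience), consider the nilpotent matrix $N=\left(\begin{matrix} 0 & 2\\ 0 & 0 \end{matrix}\right)$ acting on ${\mathbb C}^2$. For a unit vector $x=(x_1,x_2)^T$ we have $$\langle Nx,x\rangle=2x_2\bar{x}_1,\qquad |x_1|^2+|x_2|^2=1,$$ and $|2x_2\bar{x}_1|\leq |x_1|^2+|x_2|^2=1$ with equality when $|x_1|=|x_2|=\frac{\sqrt{2}}{2}$; as the phases of $x_1,x_2$ are arbitrary, $W(N)$ is the closed unit disc and $w(N)=1$, while $\|N\|=2$.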
Let ${\mathcal B}_s(H)$ and ${\mathcal B}(H)$ denote the space of all self-adjoint operators and the algebra of all bounded linear operators on a complex Hilbert space $H$, respectively.
Suppose that $\mathcal A=\mathcal B(H)$ or ${\mathcal B}_s(H)$, and $F$ is the numerical range $W$ or numerical radius $w$. Let $A\circ B$ denote any product of a pair of $A,B\in \mathcal A$ such as operator product $AB$, Jordan product $AB+BA$, Jordan semi-triple product $ABA$ and Lie product $AB-BA$. A map $\Phi: \mathcal A\rightarrow \mathcal A$ preserves numerical range (or numerical radius) of product $\circ$ if $F=W$ (or $F=w$) and $\Phi$ satisfies \begin{equation} F(A\circ B)=F(\Phi(A) \circ \Phi(B)) \end{equation} for all $A, B\in \mathcal A$.
Assume that $\Phi:{\mathcal A}\to{\mathcal A}$ satisfies Eq.(1.1). For the case $F=W$ and $\Phi$ is surjective, it was shown in \cite{HD} that if $A\circ B=AB$ and $\mathcal A=\mathcal B(H)$, then there exists a unitary operator $U$ such that $\Phi(A)=\epsilon UAU^*$ for all $A\in {\mathcal A}$, where $\epsilon\in\{-1,1\}$; if $A\circ B=ABA$ and $\mathcal A=\mathcal B(H)$, then $\Phi$ is the multiple of a C$^*$-isomorphism (by a cubic root of unity); if $A\circ B=AB$ and $\mathcal A=\mathcal B_s(H)$, then there exists a unitary operator $U$ such that $\Phi(A)=\epsilon UAU^*$ for all $A\in {\mathcal A}$, where $\epsilon\in\{-1,1\} $. For the case $F=w$, $A\circ B=AB$, $\mathcal A=\mathcal B(H)$ and $\Phi$ is surjective, it was proved in \cite{CH1} that there exist a unitary or anti-unitary operator $U$ and a unit-modular functional $f: \mathcal A\rightarrow \mathbb C$ such that $\Phi(A)=f(A)UAU^*$ for all $A\in {\mathcal A}$. The case of $F=w$, $A\circ B=ABA$ and $\mathcal A=\mathcal B(H)$ was dealt with in \cite{DKLLP}. For the case when $F=w$, $A\circ B=AB$ or $ABA$ and $\mathcal A=\mathcal B_s(H)$, the results obtained in \cite{HHZ} reveal that there are a unitary or conjugate unitary operator $U$ on $H$ and a sign function $h: {\mathcal B}_s(H)\rightarrow \{1,-1\}$ such that $\Phi(T)=h(T)UTU^{*}$ for any $T\in {\mathcal B}_s(H)$. Maps preserving numerical range of Jordan product are characterized in \cite{DHHK,HwL, LS7}.
Recent interest is focused on characterizing non-linear maps preserving numerical range or numerical radius of Lie product. When $3\leq \dim H=n<\infty$, Li, Poon and Sze \cite{LPS} proved that a surjective map $\Phi: \mathcal B(H)\rightarrow \mathcal B(H)$ satisfies $w(\Phi(A)\Phi(B)-\Phi(B)\Phi(A))=w(AB-BA)$ for all $A,B\in \mathcal B(H)$ if and only if there exists a unitary matrix $U$ such that
$$\Phi(A)=\mu_A UA^\dag U^*+\nu_AI$$
for all $A\in \mathcal B(H)$, where $\mu_A,\nu_A\in {\mathbb C}$ depend on
$A$ with $|\mu_A|=1$, and $(\cdot)^\dag$ stands for one of the following four maps: $A\mapsto A,A\mapsto \bar{A}, A\mapsto A^t$ and $A\mapsto A^*$. For a space $H$ of arbitrary dimension (including the infinite- and two-dimensional cases), and without any surjectivity assumption, Hou, Li and Qi \cite{HLQ} gave a characterization of maps on $\mathcal B(H)$ preserving numerical range of Lie product.
{\bf Theorem HLQ.} {\it Let $H,K$ be complex Hilbert spaces of dimension $\geq 2$ and $\Phi:{\mathcal B}(H)\to{\mathcal B}(K)$ be a map whose range contains all operators of rank $\leq 2$. Then the following statements are equivalent.}
(1) {\it $\Phi$ satisfies that $ {W}([\Phi(A),\Phi(B)])= {W}([A,B])$ for any $A,B\in{\mathcal B}(H)$. }
(2) {\it $\dim H=\dim K$, and there exist $\varepsilon \in \{1,-1\}$, a functional $h: \mathcal B(H) \rightarrow \Bbb C$, a unitary operator $U\in{\mathcal B}(H,K)$, and a set ${\mathcal S}$ of operators in ${\mathcal B}(H)$, which consists of operators of the form $aP + bI$ for an orthogonal projection $P$ on $H$ if $\dim H \ge 3$, such that either $$\Phi(A)=\begin{cases} \ \varepsilon UAU^*+h(A)I & $if$ \ A \in{\mathcal B}(H)\setminus \mathcal S,\cr -\varepsilon UAU^* + h(A)I & $if$ \ A \in \mathcal S,\cr\end{cases}$$ or $$\Phi(A) = \begin{cases} \ i\varepsilon UA^tU^*+h(A)I & $if$ \ A\in{\mathcal B}(H) \setminus \mathcal S,\cr -i\varepsilon UA^tU^*+h(A)I & $if$ \ A\in\mathcal S, \cr \end{cases}$$ where $A^t$ is the transpose of $A$ with respect to an orthonormal basis of $H$.}\\
An interesting open question is how to characterize non-linear maps on self-adjoint operators preserving numerical radius or numerical range of Lie product. In this paper, we solve this question for the case when the underlying space $H$ is separable.
Let $H$ be a complex separable Hilbert space and $\Phi:{\mathcal B}_s(H)\to{\mathcal B}_s(H)$ a surjective map. Assume further that $\dim H\geq 3$. We show that:
(a) $\Phi$ satisfies $w(AB- BA)=w(\Phi(A)\Phi(B)-\Phi(B)\Phi(A) ) $ for any $A,B\in {\mathcal B}_s(H)$ if and only if there exist a unitary operator $U$ on $H$, a sign function $h: {\mathcal B}_s(H)\rightarrow \{1,-1\}$ and a functional $f: {\mathcal B}_s(H)\rightarrow {\Bbb R}$ such that $\Phi(T)=h(T)UTU^{*}+f(T)I$ for all $T\in {\mathcal B}_s(H)$ or $\Phi(T)=h(T)UT^tU^{*}+f(T)I$ for all $T\in {\mathcal B}_s(H)$ (See Theorem 2.1);
(b) $\Phi$ satisfies $W(AB- BA)=W(\Phi(A)\Phi(B)-\Phi(B)\Phi(A) )$ for all $A,B\in {\mathcal B}_s(H) $ if and only if there exist a unitary operator $U$ on $H$, a scalar $\varepsilon \in\{1,-1\}$, a subset ${\mathcal S}\subseteq {\mathcal D}(H)$, and a functional $f: {\mathcal B}_s(H)\rightarrow {\Bbb R}$ such that $\Phi(A)=
\varepsilon UAU^*+f(A)I$ if $A \in{\mathcal B}_s(H)\setminus {\mathcal S}$, $\Phi(A)=-\varepsilon UAU^* + f(A)I$ if $ A \in {\mathcal S}$, where ${\mathcal D}(H)$ is the set of all real linear combinations of a projection and the identity $I$ on $H$ (See Theorem 3.1).
When $\dim H=2$, unlike the maps on ${\mathcal B}(H)$ (see Theorem HLQ above), the maps $\Phi:{\mathcal B}_s(H)\to{\mathcal B}_s(H)$ which preserve the numerical range (radius) of Lie product may have some other forms. Note that in the case $\dim H=2$ we have ${\mathcal D}(H)={\mathcal B}_s(H)$. Identify ${\mathcal B}_s(H)$ with the space ${\bf H}_2$ of all $2\times 2$ Hermitian matrices, and define $\Psi$ on ${\bf H}_2$ by $$ \left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right)\mapsto\left(\begin{array}{cc} a & -c+id \\ -c-id &b \end{array}\right).$$ It is easily checked that $\Psi$ preserves both the numerical range and the numerical radius of Lie product. However, no further kinds of maps arise, as our result reveals. In addition, the surjectivity assumption is not needed in the following result.
(c) A map $\Phi: {\bf H}_2\to{\bf H}_2$ preserves the numerical radius of Lie product if and only if it preserves the numerical range of Lie product, and in turn, if and only if there exist a unitary matrix $U\in M_2$, a sign function $h:{\bf H}_2\to \{1,-1\}$ and a functional $f: {\bf H}_2\rightarrow {\Bbb R}$ such that either $\Phi(A)= h(A) UA^\dag U^*+f(A)I$ for all $A \in{\bf H}_2$; or $\Phi(A)=h(A)U\Psi(A)^\dag U^* + f(A)I$ for all $ A \in {\bf H}_2$, where $(\cdot)^\dag$ is one of the identity map and the transpose map (See Theorem 4.1).
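The claim above that $\Psi$ preserves the numerical range of Lie product can be verified by a direct computation (included here for completeness): with $D=\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)$ one checks that $\Psi(A)=DA^tD$ for every $A\in{\bf H}_2$, and hence, for $A,B\in{\bf H}_2$, $$[\Psi(A),\Psi(B)]=D(A^tB^t-B^tA^t)D=D([B,A])^tD=-D([A,B])^tD.$$ Since $[A,B]$ is a trace-zero skew-Hermitian $2\times 2$ matrix, $W([A,B])=i[-t,t]$ for some $t\geq 0$; this segment is invariant under transposition, unitary conjugation and multiplication by $-1$, so $W([\Psi(A),\Psi(B)])=W([A,B])$ and, in particular, $w([\Psi(A),\Psi(B)])=w([A,B])$.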
The paper is organized as follows. We characterize the maps preserving the numerical radius of Lie product for the case $\dim H\geq 3$ in Section 2 and the maps preserving the numerical range of Lie product for the case $\dim H\geq 3$ in Section 3. The last section is devoted to the case when $\dim H=2$.
\section{Preservers for numerical radius of Lie product}
In this section, we characterize surjective maps on self-adjoint operators preserving the numerical radius of Lie product for the case $\dim H\geq 3$. The following is the main result.
\begin{thm}\label{thm:1} Let $H$ be a separable complex Hilbert space of dimension at least three. A surjective map $\Phi\colon {\mathcal B}_s(H)\rightarrow {\mathcal B}_s(H)$ satisfies $$w(AB- BA)=w(\Phi(A)\Phi(B)-\Phi(B)\Phi(A) ) $$ for all $A,B\in {\mathcal B}_s(H)$ if and only if there exist a unitary operator $U$ on $H$, a sign function $h: {\mathcal B}_s(H)\rightarrow \{1,-1\}$ and a functional $f: {\mathcal B}_s(H)\rightarrow {\Bbb R}$ such that either $$\Phi(T)=h(T)UTU^{*}+f(T)I$$ for all $T\in {\mathcal B}_s(H)$; or $$\Phi(T)=h(T)UT^tU^{*}+f(T)I$$ for all $T\in {\mathcal B}_s(H)$. Here $T^t$ is the transpose of $T$ with respect to an arbitrarily given orthonormal basis of $H$.\end{thm}
Before starting the proof of Theorem 2.1, we need a lemma.
{\bf Lemma 2.2.} {\it Let $H$ be a complex Hilbert space of dimension $\geq 2$ and $A,B$ be self-adjoint operators acting on $H$. Then the following statements are equivalent.}
(1) {\it $w(AC-CA)=w(BC-C B)$ for every $C\in{\mathcal B}_s(H)$.}
(2) {\it $w(AP-P A)=w(BP-P B)$ for every rank-1 projection $P$.}
(3) {\it $A+B$ or $A-B$ is a scalar multiple of $I$.}
{\it Proof.} (3)$\Rightarrow$(1)$\Rightarrow$(2) are obvious. Let us check (2)$\Rightarrow$(3).
Assume (2). For any rank-1 projection $P=x\otimes x$, write $Ax=\alpha x+\beta y$, where normalized $y$ is orthogonal to $x$. Since $A$ is self-adjoint we have $\alpha=\langle Ax,x\rangle\in\mathbb R$. Moreover, by self-adjointness of $A$, $Ax\otimes x-x\otimes x A=Ax\otimes x-x\otimes (Ax)$. So relative to decomposition $H=[x,y]\oplus H_1$, the rank-2 operator $Ax\otimes x-x\otimes x A$ is represented by a matrix $$\left(\begin{matrix} 0 & -\bar{\beta} \\ \beta& 0 \end{matrix}\right)\oplus 0,$$
and hence $W(Ax\otimes x-x\otimes x A)=i[-|\beta|,|\beta|]$ and
$w(Ax\otimes x-x\otimes x A)=|\beta|$.
Decomposing likewise $Bx=\alpha' x+\beta' z$ we obtain $W(Bx\otimes x-x\otimes xB) = i[-|\beta'|,|\beta'|]$ and the numerical radius of
$[B, x\otimes x]$ is $|\beta'|$. Hence by (2) we obtain
$|\beta|=|\beta'|$.
Since $A$ is self-adjoint and $\alpha=\langle Ax,x\rangle \in
\mathbb R$, it follows that $|\beta|^2=\|(Ax-\langle Ax,x\rangle\, x)\|^2=\langle(Ax-\langle Ax,x\rangle\, x)\,,\,(Ax-\langle Ax,x\rangle\, x)\rangle
=\langle A^2x,x\rangle- \langle Ax,x\rangle ^2$. Similarly, for
$B$ we obtain $|\beta'|^2 =\langle B^2x,x\rangle- \langle Bx,x\rangle ^2$. It follows from $|\beta|^2=|\beta'|^2$ that \begin{equation}\label{betabeta} \langle A^2x,x\rangle-\langle B^2x,x\rangle= \langle Ax,x\rangle
^2- \langle Bx,x\rangle ^2 \end{equation} for every normalized vector $x$. Let $y,z$ be two orthogonal normalized vectors. Then $x=\frac{\sqrt{2}}{2} (e^{i\xi}y+z) $ is also normalized for every $\xi\in[-\pi,\pi]$. After inserting $x$ in Eq.(2.1) we obtain \begin{equation}\label{Fourier1} \begin{aligned}
0 =&2\langle A^2(e^{i\xi}y + z),e^{i\xi}y + z\rangle - 2\langle B^2(e^{i\xi}y + z),e^{i\xi}y + z\rangle \\
& - \bigl(\langle A(e^{i\xi}y + z), e^{i\xi}y + z\rangle \bigr)^2+\bigl(\langle B(e^{i\xi}y + z),e^{i\xi}y + z\rangle \bigr)^2. \end{aligned} \end{equation} Taking only the coefficient at $e^{2 i\xi}$ in the expansion of Eq.(2.2) in a Fourier series, Eq.(2.2) reduces to \begin{equation}\label{eq:langle..rangle^2} \langle By,z\rangle^2=\langle A y,z\rangle^2 \end{equation} for every pair of orthonormal $y,z$. So, for any $x\in H$ and $f\in[Ax,x]^\perp$, we have $\langle Bx,f\rangle=0$. This entails that $Bx\in[Ax,x]$. Thus, for any $x\in H$, there exist $\alpha_x,\beta_x\in\mathbb{C}$ such that $Bx=\alpha_xAx+\beta_xx$. By Eq.(2.3), we have $\langle Ax,f\rangle^2=\langle\alpha_xAx,f\rangle^2 =\alpha_x^2\langle Ax,f\rangle^2$ for all $f\in[x]^\perp$, which implies that
$\alpha_x=\pm1$. It follows from
$|\beta_x|\|x\|\leq\|Bx\|+\|\alpha_xAx\|\leq(\|B\|+\|A\|)\|x\|$ that
$|\beta_x|\leq\|B\|+\|A\|$. Therefore, $B$ is a regular local linear combination of $A$ and $I$, and then, by \cite{Hou1}, $B$ is a linear combination of $A$ and $I$. So $B=\alpha A+\beta I$ with $\alpha\in\{-1,1\}$ and $\beta\in{\mathbb R}$, as desired.
$\Box$
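{\bf Remark.} For the reader's convenience we record the computation behind the passage from Eq.(2.2) to Eq.(2.3) in the proof above. Expanding $$\langle A(e^{i\xi}y+z),e^{i\xi}y+z\rangle=\langle Ay,y\rangle+\langle Az,z\rangle+e^{i\xi}\langle Ay,z\rangle+e^{-i\xi}\langle Az,y\rangle,$$ one sees that the coefficient of $e^{2i\xi}$ in $\bigl(\langle A(e^{i\xi}y+z),e^{i\xi}y+z\rangle\bigr)^2$ is $\langle Ay,z\rangle^2$, while the term $\langle A^2(e^{i\xi}y+z),e^{i\xi}y+z\rangle$ contains no $e^{2i\xi}$ harmonic at all; the same applies with $B$ in place of $A$. Comparing the coefficients of $e^{2i\xi}$ on both sides of Eq.(2.2) therefore yields exactly Eq.(2.3).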
\begin{proof}[Proof of Theorem 2.1] The ``if'' part is obvious; we check the ``only if'' part.
Assume first that $\Phi$ is injective; then, $\Phi$ is bijective.
Clearly $\Phi$ preserves zeros of Lie product. So, by \cite{Mol-Sem}, there exists a unitary or conjugate unitary operator $U$ such that, for any
rank-1 positive operator $P=x\otimes x$ with unit vector $x\in H$, we have
$$\Phi(P)=U(\lambda_P P +\mu_P I)U^* $$ for some $ \lambda_P, \mu_P \in \Bbb R$. Without loss of generality we can assume in the sequel that $U=I$.
Take any unit vectors $x,y$ that are orthogonal to each other, and let $Q=y\otimes y$ and $Z=(x+y)\otimes (x+y)$. It easily follows that, in the orthogonal decomposition $H=[x,y]\oplus H_1$, where $[x,y]$ stands for the subspace spanned by $\{x,y\}$, we have $PZ-ZP= \left(\begin{matrix} 0 & 1\\ -1 &0 \end{matrix}\right)\oplus 0$, whose numerical range is $[-i,i]$ and so numerical radius is 1. The same conclusion holds for the numerical range of $QZ-ZQ$. Comparing the numerical radius of $PZ-ZP$ and $\Phi(P)\Phi(Z)-\Phi(Z)\Phi(P)$, we obtain
$$1=w(\Phi(P)\Phi(Z)-\Phi(Z)\Phi(P))=|\lambda_P\lambda_Z| w(PZ-ZP)=|\lambda_P\lambda_Z|.$$ This is possible only if $\lambda_P\lambda_Z=\pm1$ since $\lambda_P, \lambda_Z\in \mathbb R$. Similarly we have $\lambda_Q\lambda_Z=\pm 1$. Hence $\lambda_P=\pm\lambda_Q$ for orthogonal $P, Q$. Now, given any rank-one self-adjoint operator $R$, there exists a rank-one self-adjoint operator $T$ which is orthogonal to $R$ and $P$. Similar to the above discussion, we have $\lambda_T=\pm \lambda_R$ and $\lambda_T=\pm \lambda_P$, so $\lambda_P=\pm\lambda_R$ for any $P, R$. It follows that $\lambda_P=\pm 1$.
Now, for arbitrary self-adjoint $A$, \begin{equation}\label{eq:rank-one}
w(Ax\otimes x-x\otimes x A)=|\lambda_P| w(\Phi(A)x\otimes x-x\otimes x\Phi(A))=w(\Phi(A)x\otimes x-x\otimes x\Phi(A)) \end{equation} holds for every rank-1 projection $P=x\otimes x$. By Lemma 2.2, $\Phi(A)=\lambda_A A+\delta_A I$ for some scalar $\lambda_A\in\{-1,1\}$ and some scalar $\delta_A$.
Finally we show that one only needs the surjective assumption. Here we borrow an idea from \cite{LPS}. If $\Phi(A)=\Phi(B)$, then $$\begin{array}{rl} w(AC-CA)=& w(\Phi(A)\Phi(C)-\Phi(C)\Phi(A))\\ =& w(\Phi(B)\Phi(C)-\Phi(C)\Phi(B))=w(BC-CB) \end{array}$$ for all $C\in{\mathcal B}_s(H)$. By Lemma 2.2 we get $B=\alpha A+\beta I$ for some $\alpha\in\{-1,1\}$ and $\beta\in{\mathbb R}$. On the other hand, for any $A$, there is some $D$ such that $\Phi(D)=-\Phi(A)$, which gives $w(DC-CD)=w(\Phi(D)\Phi(C)-\Phi(C)\Phi(D))=w(\Phi(A)\Phi(C)-\Phi(C)\Phi(A))=w(AC-CA)$ for all $C$. Again by Lemma 2.2, we get $D=\lambda A+\gamma I$ for some $\lambda\in\{-1,1\}$ and $\gamma\in{\mathbb R}$. For any $A,B\in{\mathcal B}_s(H)$, we say $A\sim B$ if $w(AC-CA)=w(BC-CB)$ for all $C\in{\mathcal B}_s(H)$. By Lemma 2.2, $\sim$ is an equivalence relation and $A\sim B$ if and only if $B=\alpha A+\beta I$ for some $\alpha\in\{-1,1\}$ and $\beta\in{\mathbb R}$. Let ${\mathcal E}_A=\{B\in{\mathcal B}_s(H): B\sim A\}$. For each equivalence class ${\mathcal E}_A$ pick a representative, for example $A$, and write ${\mathcal A}$ for the set of these representatives. Since $\Phi$ is surjective, for each $A\in{\mathcal A}$, ${\mathcal E}_A$ and $\Phi^{-1}({\mathcal E}_A)$ have the same cardinality $c$. Thus there exists a map $\Psi:{\mathcal B}_s(H)\to {\mathcal B}_s(H)$ which maps $\Phi^{-1}({\mathcal E}_A)$ bijectively onto ${\mathcal E}_A$ for each $A\in{\mathcal A}$. Obviously, $\Psi$ is bijective and $\Psi(A)\sim\Phi(A)$ for all $A\in{\mathcal B}_s(H)$. Then $$w(\Psi(A)\Psi(B)-\Psi(B)\Psi(A))=w(\Phi(A)\Phi(B)-\Phi(B)\Phi(A))=w(AB-BA)$$ for all $A,B\in{\mathcal B}_s(H)$. By the first part of the proof, carried out under the bijectivity assumption, $\Psi$ has the desired form, and hence $\Phi$ has the desired form as $\Phi(A)\sim \Psi(A)$. So Theorem 2.1 holds true,
completing the proof. \end{proof}
\section{Preservers for numerical range of Lie product }
This section is devoted to characterizing maps that preserve the numerical range of Lie product of self-adjoint operators. Our main result is Theorem 3.1, which is not a direct corollary of Theorem 2.1 for numerical radius preservers, since considerably more effort is needed to determine the structure of the sign function $h: {\mathcal B}_s(H)\rightarrow \{1,-1\}$.
Denote by $\mathcal D$ the set of all real linear combinations of a projection and the identity $I$, that is, $ {\mathcal D} = \{ \alpha P + \delta I: P \mbox{ is a projection in} \ {\mathcal B}_s(H), \alpha, \delta \in \mathbb R \} \subset {\mathcal B}_s(H)$. Clearly, $\mathcal D$ consists precisely of the self-adjoint operators that are quadratic algebraic operators: $A=\alpha P+\delta I$ satisfies $(A-\delta I)(A-(\alpha+\delta)I)=0$, while, conversely, a self-adjoint operator annihilated by a real quadratic polynomial has at most two spectral points and hence has this form.
\begin{thm}\label{thm:2} Let $H$ be a complex separable Hilbert space of dimension at least 3. A surjection $\Phi\colon {\mathcal B}_s(H)\rightarrow {\mathcal B}_s(H)$ satisfies $$W(AB- BA)=W(\Phi(A)\Phi(B)-\Phi(B)\Phi(A) )$$ for all $A,B\in {\mathcal B}_s(H) $ if and only if there exist a unitary operator $U$ on $H$, a scalar $\varepsilon \in\{1,-1\}$, a set ${\mathcal S}\subseteq \mathcal D$, and a functional $f: {\mathcal B}_s(H)\rightarrow {\Bbb R}$ such that $$\Phi(A)=\left\{ \begin{array}{lll} \ \varepsilon UAU^*+f(A)I & {\rm if} & A \in{\mathcal B}_s(H)\setminus {\mathcal S},\\ -\varepsilon UAU^* + f(A)I & {\rm if} & A \in {\mathcal S}.\end{array}\right.$$
\end{thm}
To prove the above result we need a lemma, which gives a characterization of the quadratic algebraic self-adjoint operators, that is, the operators in $\mathcal D$, in terms of the numerical range of Lie product.
{\bf Lemma 3.2.} {\it Let $H$ be a complex Hilbert space with $\dim H\geq 3$ and $A\in{\mathcal B}_s(H)$. Then the following statements are equivalent.}
(1) $A\in{\mathcal D}$.
(2) {\it $W(AB-BA)=-W(AB-BA)$ for all $B\in{\mathcal B}_s(H)$.}
(3) {\it $W(AB-BA)=-W(AB-BA)$ for all $B\in{\mathcal B}_s(H)$ of rank $\leq 2$.}
{\it Proof.} (1)$\Rightarrow$(2). Assume $A\in{\mathcal D}$; then $A=\alpha P+\gamma I$ for some projection $P$ and some scalars $\alpha, \gamma\in{\mathbb R}$. As the case $A=\alpha I$ is obvious, we may assume that there exists a space decomposition $H=H_1\oplus H_2$ such that $A=\left(\begin{array}{cc} \alpha I_{H_1} & 0\\ 0 & \beta I_{H_2} \end{array}\right)$ with $\dim H_i>0$, $i=1,2$, and $\alpha\not=\beta$. For any $B=\left(\begin{array}{cc} B_{11} & B_{12} \\ B_{12}^* & B_{22} \end{array}\right)\in {\mathcal B}_s(H_1\oplus H_2)$, $AB-BA=(\alpha-\beta)\left(\begin{array}{cc} 0 & B_{12} \\ -B_{12}^* & 0 \end{array}\right)$. Let $U=\left(\begin{array}{cc} I_{H_1} & 0 \\ 0 & -I_{H_2} \end{array}\right)$; then $U$ is unitary and $U(AB-BA)U^*=-(AB-BA)$. So, we always have $W(AB-BA)=-W(AB-BA)$, that is, (2) is true.
(2)$\Rightarrow$(3) is obvious.
(3)$\Rightarrow$(1). Note that $A\in{\mathcal B}_s(H) \setminus \mathcal D$ if and only if the spectrum $\sigma(A)$ has at least three points, and in turn, if and only if there exists a vector $x$ such that $\{x, Ax, A^2 x\}$ is linearly independent. For such $x$, take an orthonormal basis $\{e_1,e_2,e_3\}$ of $[x,Ax,A^2x]$ with $e_1\in [x]$ and $e_2\in[x,Ax]$. Then, with respect to the space decomposition $H=[e_1]\oplus [e_2]\oplus [e_3]\oplus \{e_1,e_2,e_3\}^\perp$, $A$ has the matrix representation of the form $$A=\left(\begin{array}{cccc} a_{11} & a_{21} & 0 & 0 \\ a_{21} & a_{22} & a_{32} & 0 \\ 0 & a_{32} & a_{33} & A_{34}\\ 0 & 0 & A_{34}^* &A_{44} \end{array}\right) $$ with $a_{11}, a_{22},a_{33}$ real numbers, $a_{21}>0$, $a_{32}>0$ and $A_{44}=A_{44}^*$. Let $$B=\left(\begin{array}{cccc} 1 & \beta & 0 & 0 \\ \bar{\beta} &0 & 0 & 0 \\ 0 & 0 &0 & 0\\ 0 & 0 & 0 &0 \end{array}\right) $$ with ${\rm Im}\beta=\frac{1}{2i}(\beta-\bar{\beta})\not=0$. Then, $B$ is of rank two and $$AB-BA=\left(\begin{array}{cccc} -2i({\rm Im}\beta)a_{21} & -a_{21}+\beta(a_{11}-a_{22}) & -\beta a_{32} & 0 \\ a_{21}-\bar{\beta}(a_{11}-a_{22}) & 2i({\rm Im}\beta)a_{21} & 0 & 0 \\ \bar{\beta}a_{32} & 0 & 0& 0\\ 0 & 0 & 0 &0 \end{array}\right), $$ which is a rank-3 skew self-adjoint operator with zero trace. Its nonzero eigenvalues are therefore $it_1,it_2,it_3$ with each $t_j\not=0$ and $t_1+t_2+t_3=0$, so $W(AB-BA)=i[\min_j t_j,\max_j t_j]$ is not symmetric with respect to the origin; that is, $W(AB-BA)\not=-W(AB-BA)$. Hence (3) implies (1).
$\Box$
{\bf Proof of Theorem 3.1.}
Assume $\dim H\geq 3$. Then $\Phi$ satisfies the assumption of Theorem 2.1, and hence there exist a unitary operator or conjugate unitary operator $U$ on $H$, a sign function $h: {\mathcal B}_s(H)\rightarrow \{1,-1\}$ and a functional $f: {\mathcal B}_s(H)\rightarrow {\Bbb R}$ such that $\Phi(T)=h(T)UTU^{*}+f(T)I$ for all $T\in {\mathcal B}_s(H)$.
We assert that the case where $U$ is a conjugate unitary operator cannot occur. Assume on the contrary that $\Phi(T)=h(T)UTU^{*}+f(T)I$ for any $T\in {\mathcal B}_s(H)$, where $U$ is conjugate unitary. Taking an arbitrary orthonormal basis of $H$, one sees that there exists a unitary operator $V$ such that $\Phi(T)=h(T)VT^tV^{*}+f(T)I$ for any $T\in {\mathcal B}_s(H)$, where $T^t$ is the transpose of $T$ with respect to the given basis. Thus we have \begin{equation}\label{betabeta} \begin{aligned} W(AB-BA)&=W(\Phi(A)\Phi(B)-\Phi(B)\Phi(A)) \\ &=h(A)h(B)W(VA^tB^tV^*-VB^tA^tV^*)\\ &=h(A)h(B)W((BA-AB)^t)\\ &=-h(A)h(B)W(AB-BA). \end{aligned} \end{equation}
Let $\{x,y,z\}$ be an orthonormal set of $H$ and consider the space decomposition $H=[x,y,z]\oplus [x,y,z]^\perp$. For any scalars $\alpha,\beta,\gamma$ with $\alpha\beta\bar{\gamma}-\bar{\alpha}\bar{\beta}\gamma\not=0$, and any real numbers $b_{11},b_{22}, b_{33}$, let \begin{equation}\label{betabeta} \begin{aligned} B=\left(\begin{array}{ccc} b_{11} &\alpha &\gamma \\ \bar{\alpha} & b_{22} & \beta \\ \bar{\gamma} & \bar{\beta} & b_{33} \end{array}\right)\oplus 0\in {\mathcal B}_s(H). \end{aligned} \end{equation}
Then for any self-adjoint operator of the form \begin{equation}\label{betabeta} \begin{aligned} A=\left(\begin{array}{ccc} a_{1} &0 &0 \\ 0 & a_{2} & 0 \\ 0 & 0 & a_{3} \end{array}\right)\oplus A_2 \end{aligned} \end{equation}
with distinct $a_1,a_2,a_3$, we have $AB-BA=C_1\oplus 0$, where $$C_1=\left(\begin{array}{ccc} 0 &(a_1-a_2)\alpha &(a_1-a_3)\gamma \\ (a_2-a_1)\bar{\alpha} & 0 & (a_2-a_3)\beta \\ (a_3-a_1)\bar{\gamma} & (a_3-a_2)\bar{\beta} & 0 \end{array}\right).$$ As $\det (C_1)=(a_1-a_2)(a_2-a_3)(a_3-a_1)(\alpha\beta\bar{\gamma}-\bar{\alpha}\bar{\beta}\gamma)\not=0$, we have $\sigma(C_1)=\{it_1,it_2,it_3\}$ with $t_i\not=0$, $i=1,2,3$, $t_1\leq t_2\leq t_3$ and $t_1+t_2+t_3=0$. So $W(AB-BA)=i[t_1,t_3]$, and $t_1\not=-t_3$ since $t_2\not=0$. By Eq.(3.1) we obtain that $$-h(A)h(B)[it_1,it_3]=[it_1,it_3]$$ and this forces $-h(A)h(B)=1$. If $h(B)=-1$, then $h(A)=1$ for all $A$ of the form in Eq.(3.3), and consequently $h(B')=-1$ for all $B'$ of the form in Eq.(3.2). Consequently, $h(B)h(B')=1$ for any $B,B'$ of the form in Eq.(3.2).
Now take the self-adjoint operators $$B=\left(\begin{array}{ccc} 0 & i & 1 \\ -i & 0 & 2 \\ 1 & 2 & 0 \end{array}\right)\oplus 0 \quad {\rm and}\quad B'=\left(\begin{array}{ccc} 0 &1+ i & 1 \\ 1-i & 0 & 2i \\ 1 & -2i & 0 \end{array}\right)\oplus 0.$$ It is clear that $i(BB'-B'B)$ is a rank-three self-adjoint operator, and hence $W(BB'-B'B)\not=-W(BB'-B'B)=-h(B)h(B')W(BB'-B'B)$, contradicting Eq.(3.1).
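That $i(BB'-B'B)$ has rank three can be confirmed mechanically. The Python sketch below (with ad hoc \verb"matmul" and \verb"det3" helpers, introduced only for this check) verifies that the commutator of the two displayed $3\times 3$ blocks is skew-Hermitian with determinant $34i\not=0$.

```python
# Sanity check: C = BB' - B'B for the displayed 3x3 blocks is
# skew-Hermitian with det(C) = 34i != 0, so i*C is a rank-three
# self-adjoint matrix.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

B  = [[0, 1j, 1], [-1j, 0, 2], [1, 2, 0]]
Bp = [[0, 1 + 1j, 1], [1 - 1j, 0, 2j], [1, -2j, 0]]

P, Q = matmul(B, Bp), matmul(Bp, B)
C = [[P[i][j] - Q[i][j] for j in range(3)] for i in range(3)]

# skew-Hermitian: C[j][i] == -conjugate(C[i][j])
assert all(C[j][i] == -C[i][j].conjugate() for i in range(3) for j in range(3))
assert det3(C) == 34j  # nonzero, hence C has rank three
```
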
So, $$\Phi(A)=h(A)UAU^*+f(A) I$$ for all $A\in{\mathcal B}_s(H)$.
It is clear by Lemma 3.2 that $h(A)$ can take either of the values $-1$ and $1$ if $A\in{\mathcal D}$. So, to complete the proof, we have to show that $h \colon {\mathcal B}_s(H) \to \{-1, 1\}$ is constant on ${\mathcal B}_s(H) \setminus \mathcal D$.

By Lemma 3.2, for any $A\in{\mathcal B}_s(H)\setminus {\mathcal D}$, there exists a rank-two $B\in{\mathcal B}_s(H)\setminus {\mathcal D}$ such that $W(AB-BA)\not=-W(AB-BA)$. So we need only show that $h(A)=h(B)$ holds for any rank-two $A,B\in{\mathcal B}_s(H)\setminus{\mathcal D}$.
{\bf Claim 1.} For any orthonormal set $\{x,y,z\}$ and any nonzero real numbers $a,b,c,d,e,f$ with $a\not=b,c\not=d$ and $e\not=f$, we have $h(ax\otimes x+by\otimes y)=h(cx\otimes x+dz\otimes z)=h(ey\otimes y+fz\otimes z)$.
Assume $A$ is a rank-two self-adjoint operator not in ${\mathcal D}$. Then there exist orthonormal $x,y\in H$ and nonzero distinct real numbers $a,b$ such that $A=\left(\begin{array}{cc} a &0 \\ 0 & b \end{array}\right)\oplus 0$ with respect to the space decomposition $H=[x,y]\oplus [x,y]^\perp$. Take arbitrarily two unit vectors $z,z'\in[x,y]^\perp$ and nonzero complex numbers $\alpha,\beta, \gamma, \alpha',\beta', \gamma'$ so that ${\rm Re}(\alpha\beta\bar{\gamma}) =0$ and ${\rm Re}(\alpha'\beta'\bar{\gamma}') =0$, and let $B=B(x,y,z;\alpha,\beta,\gamma)={\rm Re} (x\otimes (\alpha y+\gamma z)+\beta y\otimes z)$, $B'=B(x,y,z'; \alpha',\beta',\gamma')={\rm Re} (x\otimes (\alpha' y+\gamma' z')+\beta' y\otimes z')$. Then $A$ has the form in Eq.(3.3) and $B,B'$ have the form in Eq.(3.2). By what was proved above, we see that both $B$ and $B'$ are of rank two and $h(B)=h(A)=h(B')$, as $W(AB-BA)\not=-W(AB-BA)$ and $W(AB'-B'A)\not=-W(AB'-B'A)$. It is also clear that $$h(ax\otimes x+by\otimes y)=h(B(x,y,z;\alpha ,\beta ,\gamma ))=h(B( \pi(x,y,z) ;\alpha_1,\beta_1,\gamma_1))$$ holds for any permutation $\pi(x,y,z)$ of $(x,y,z)$ and any nonzero numbers $\alpha_1,\beta_1,\gamma_1$ with ${\rm Re}(\alpha_1\beta_1\bar{\gamma}_1)=0$. For example, $$h(ax\otimes x+by\otimes y)=h(B(x,y,z;\alpha,\beta,\gamma))=h(B(z,x,y;\alpha_1,\beta_1,\gamma_1)).$$ It follows that \begin{equation}h(ax\otimes x+by\otimes y)=h(cx\otimes x+dz\otimes z)=h(ey\otimes y+fz\otimes z)\end{equation} holds for any orthonormal set $\{x,y,z\}$ and any nonzero real numbers $a,b,c,d,e,f$ with $a\not=b$, $c\not=d$ and $e\not=f$. So Claim 1 is true.
{\bf Claim 2.} If $\dim H\geq 4$, then $h(A)=h(B)$ holds for any rank-2 $A,B\in{\mathcal B}_s(H)\setminus{\mathcal D}$.
Let $A=ax\otimes x+by\otimes y$ and $B=cu\otimes u+dv\otimes v$ be any two rank-two self-adjoint operators that are not in $\mathcal D$, where $x\perp y$ and $u\perp v$. Since $\dim H\geq 4$, we have $[x,y,u]\not=H$. Take a unit vector $y'\in[x,y,u]^\perp$. By Claim 1 we have $$h(ax\otimes x+by\otimes y)=h(by\otimes y+c'y'\otimes y')$$ for any nonzero real $c'\not=b$. So, replacing $A$ by $by\otimes y+c'y'\otimes y'$ if necessary, we may assume that $y\perp u$ in the sequel.
If $[x,y,u,v]\not=H$, one can pick a unit vector $z\in[x,y,u,v]^\perp$. Then, by Claim 1 or Eq.(3.4), $$\begin{array}{rl} h(A)=& h(ax\otimes x+by\otimes y)=h(ay\otimes y+bz\otimes z)\\ =& h(cu\otimes u+bz\otimes z)=h(cu\otimes u+dv\otimes v)=h(B).\end{array}$$
If $ [x,y,u,v]= H$, then $\dim H= 4$. Take unit vectors $z \in[x,y,u ]^\perp$ and $z'\in[y,u,v]^\perp$. Applying Claim 1 again, we see that $$\begin{array}{rl}h(A)=& h(ax\otimes x+by\otimes y)=h(ay\otimes y+bz\otimes z)\\= &h(ay\otimes y+bu\otimes u) =h(cu\otimes u +dz'\otimes z')\\ = & h(cu\otimes u +dv\otimes v)=h(B).\end{array}$$
Finally, let us consider the case $\dim H=3$.
{\bf Claim 3.} If $\dim H=3$, then $h(A)=h(B)$ holds for any rank-2 $A,B\in{\mathcal B}_s(H)\setminus{\mathcal D}$.
Assume that $\dim H=3$ and write $A=ax\otimes x+by\otimes y$ and $B=cu\otimes u+dv\otimes v$, where $x\perp y$ and $u\perp v$. If $[x,y,u,v]\not=H$, then $[x,y]=[u,v]$. It is obvious that $h(A)=h(B)$ whenever $u$ is linearly dependent on $x$ or $y$. So we may assume that $u,v\not\in[x]\cup [y]$. Pick a unit vector $z\in [x,y]^\perp$. By Claim 1 we see that $h(A)=h(ax\otimes x+by\otimes y)=h(az\otimes z+by\otimes y)$ and $h(B)=h(cu\otimes u+dv\otimes v)=h(cz\otimes z+dv\otimes v)$. This reduces the problem to the operators $A'= az\otimes z+by\otimes y$ and $B'=cz\otimes z+dv\otimes v$, and note that $[z,y,v]=H$. So we may always assume that $ [x,y,u,v]= H$.
Take a unit vector $z\in[x,y]^\perp$; then $A$ and $B$ have matrix representations $$A=\left(\begin{array}{ccc} a &0 &0 \\ 0 &b & 0\\ 0&0&0\end{array}\right)\quad\mbox{\rm and}\quad B=\left(\begin{array}{ccc} \xi_{1} &\alpha &\gamma \\ \bar{\alpha} &\xi_{2}& \beta\\ \bar{\gamma}&\bar{\beta}&\xi_{3}\end{array}\right),$$ where $a,b,0$ are pairwise distinct, $B$ has three distinct eigenvalues and $(\gamma,\beta,\xi_{3})\not=(0,0,0)$. If $\alpha=\beta=\gamma=0$ or ${\rm Im} (\alpha\beta\bar{\gamma})\not=0$, clearly we already have $h(B)=h(A)$ (see the argument after Eqs.(3.3)-(3.4)).
In the sequel assume that $(\alpha,\beta,\gamma)\not=(0,0,0)$ but ${\rm Im} (\alpha\beta\bar{\gamma})=0$.
{\bf Subcase 1.} Two of $\alpha,\beta,\gamma$ are 0.
Without loss of generality, say $\beta=\gamma=0$. Then $$B=\left(\begin{array}{ccc} \frac{\bar{\alpha}}{k} &\alpha &0 \\ \bar{\alpha} &k\alpha & 0\\ 0&0&\xi_3\end{array}\right)$$ for some $k\not=0$ as rank$B=2$ and $\xi_3\not=0$. Let $$C_{t,s}=\left(\begin{array}{ccc} 0 &t &i \\ t &0 & s\\ -i&s&0\end{array}\right)$$ for nonzero $t,s\in{\mathbb R}$. By the previous discussion we have $h(A)=h(C_{t,s})$. Consider $$BC_{t,s}-C_{t,s}B=\left(\begin{array}{ccc} t(\alpha-\bar{\alpha}) &t(\frac{\bar{\alpha}}{k}-k\alpha) &\frac{i\bar{\alpha}}{k}+s\alpha-i\xi_3 \\
-t(\frac{\bar{\alpha}}{k}-k\alpha) &- t(\alpha-\bar{\alpha}) & i\bar{\alpha}+sk\alpha-s\xi_3\\ \frac{i\bar{\alpha}}{k}-s\bar{\alpha}-i\xi_3 & i\alpha-sk{\alpha}+s\xi_3&0\end{array}\right).$$ It is clear that $\det(BC_{t,s}-C_{t,s}B)\not=0$ for some $t,s$ whenever $\alpha\not\in{\mathbb R}$ or $k\alpha\not=\frac{\bar{\alpha}}{k}$ or $k\alpha\not=\xi_3$ or $\xi_3\not=\frac{\bar{\alpha}}{k}$, and in this case we have $h(B)=h(C_{t,s})=h(A)$. If $\alpha$ is real and $k\alpha=\frac{\bar{\alpha}}{k}=\xi_3$, then, up to a real scalar multiple, $B$ has the form $$B=\left(\begin{array}{ccc} 1 &1 &0 \\ 1 &1 & 0\\ 0&0&1\end{array}\right).$$ Let $$C=\left(\begin{array}{ccc} 1 &1 &1+i \\ 1 &2 & 1-i\\ 1-i&1+i&0\end{array}\right).$$ Then ${\rm Im}(1\cdot (1-i)\overline{(1+i)})=-2\not=0$ and hence $h(C)=h(A)$. Since $\det(BC-CB)=-4i\not=0$, we also have $h(B)=h(C)$. So, again we get $h(B)=h(A)$, as desired.
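The determinant claim $\det(BC-CB)=-4i$ for the two matrices just displayed can be verified mechanically; the Python sketch below does so, using the same ad hoc \verb"matmul" and \verb"det3" helpers as before (introduced only for this check).

```python
# Sanity check: det(BC - CB) = -4i for the displayed matrices B and C.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

B = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
C = [[1, 1, 1 + 1j], [1, 2, 1 - 1j], [1 - 1j, 1 + 1j, 0]]

P, Q = matmul(B, C), matmul(C, B)
comm = [[P[i][j] - Q[i][j] for j in range(3)] for i in range(3)]
assert det3(comm) == -4j  # nonzero, so BC - CB has rank three
```
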
{\bf Subcase 2.} One of $\alpha,\beta,\gamma$ is 0.
Without loss of generality, say $\beta=0$. Then, as rank $B=2$,
$\det B=\xi_1\xi_2\xi_3-|\gamma|^2\xi_2-|\alpha|^2\xi_3=0$. Thus there are scalars $c,d$ with $d\not=0$ such that $c\xi_1=\bar{\gamma}-d\bar{\alpha}$, $\xi_2=-\frac{c}{d}\alpha$ and $\xi_3=c\gamma$.
Clearly, $\xi_2=0\Leftrightarrow\xi_3=0\Leftrightarrow c=0$, and in this case we have $$B=\left(\begin{array}{ccc} \xi_{1} &\alpha &\gamma \\ \bar{\alpha} &0& 0\\ \bar{\gamma}&0&0\end{array}\right).$$ Let \begin{equation} C_{t,s,p}=\left(\begin{array}{ccc} 0 &t & ip \\ t &0 & s\\ -ip&s&0\end{array}\right)\end{equation} for nonzero real numbers $t,s,p$; then $h(A)=h(C_{t,s,p})$. Now $$BC_{t,s,p}-C_{t,s,p}B=\left(\begin{array}{ccc} t(\alpha-\bar{\alpha})-ip(\gamma+\bar{\gamma}) &\xi_1t+\gamma s &i\xi_1p+\alpha s \\ -\xi_1t-\bar{\gamma}s &-t(\alpha-\bar{\alpha})& i\bar{\alpha}p-\gamma t\\ i\xi_1p-\bar{\alpha}s&t\bar{\gamma}+i\alpha p&ip(\gamma+\bar{\gamma})\end{array}\right).$$ If $\xi_1\not=0$ (in this case the coefficients of $sp^2$ and $t^2s$ of $\det(BC_{t,s,p}-C_{t,s,p}B)$ are nonzero), or if $\xi_1=0$ but one of $\alpha-\bar{\alpha}$ and $\gamma+\bar{\gamma}$ is nonzero (in this case the coefficient of $t^3$ or $p^3$ is nonzero), it is sure that $BC_{t,s,p}-C_{t,s,p}B$ is of rank three for some $t,s,p$ and hence $h(B)=h(C_{t,s,p})=h(A)$. If $$B=\left(\begin{array}{ccc} 0 &\alpha &i\delta \\ {\alpha} &0& 0\\ -i\delta &0&0\end{array}\right) $$ for some nonzero real numbers $\alpha,\delta$, let \begin{equation} D_{t,s,p}=\left(\begin{array}{ccc} 0 &it & p \\ -it &0 & s\\ p&s&0\end{array}\right)\end{equation} for nonzero real numbers $t,s,p$. Then $$BD_{t,s,p}-D_{t,s,p}B=\left(\begin{array}{ccc}2i(\delta p-\alpha t) &i\delta s & \alpha s \\ i\delta s &2i\alpha t & \alpha p-\delta t\\ -\alpha s&\delta t-\alpha p&-2i\delta p\end{array}\right),$$ which is of rank three for some suitable choice of $t,s,p$ as the coefficients of $t^3$ and $p^3$ of $\det(BD_{t,s,p}-D_{t,s,p}B)$ are nonzero. Therefore, we have $h(B)=h(D_{t,s,p})=h(A)$.
Assume that $c\not=0$; then $\xi_1=\frac{1}{c}(\bar{\gamma}-d\bar{\alpha}), \xi_2=-\frac{c}{d}\alpha, \xi_3=c\gamma$ are real, $$ B=\left(\begin{array}{ccc} \frac{1}{c}(\bar{\gamma}-d\bar{\alpha}) &\alpha &\gamma \\ \bar{\alpha} &-\frac{c}{d}\alpha & 0\\ \bar{\gamma}&0&c\gamma\end{array}\right)$$ and, for $C_{t,s,p}$ in Eq.(3.5), we have $$\begin{array}{rl}&BC_{t,s,p}-C_{t,s,p}B\\=& \left(\begin{array}{ccc} t(\alpha-\bar{\alpha})-ip(\gamma+\bar{\gamma}) &(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}+\frac{c\alpha}{d})t+\gamma s &i(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}-c\gamma)p+\alpha s \\ -(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}+\frac{c\alpha}{d})t-\bar{\gamma}s &-t(\alpha-\bar{\alpha})& i\bar{\alpha}p-\gamma t-c(\frac{\alpha}{d}+\gamma)s\\ i(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}-c\gamma)p-\bar{\alpha}s&i\alpha p+t\bar{\gamma}+c(\frac{\alpha}{d}+\gamma)s&ip(\gamma+\bar{\gamma})\end{array}\right)\end{array}.$$ Note that the coefficients of $t^3,s^3$ and $p^3$ in $\det(BC_{t,s,p}-C_{t,s,p}B)$ are respectively
$|\gamma|^2(\alpha-\bar{\alpha}), c(\frac{\alpha}{d}+\gamma)(\bar{\alpha}\gamma-\alpha\bar{\gamma})$
and $-i|\alpha|^2(\gamma+\bar{\gamma})$.
It is clear that if $\alpha$ or $i\gamma$ is not real, or if both $\alpha$ and $i\gamma$ are real but $\xi_2\not=\xi_3$, then $BC_{t,s,p}-C_{t,s,p}B$ has rank three for a suitable choice of real numbers $t,s,p$, and hence $h(B)=h(C_{t,s,p})=h(A)$.
If $\alpha,i\gamma$ are real and $\xi_2=\xi_3$ but $\xi_1\not=\xi_2$, then $$\begin{array}{rl}&BC_{t,s,p}-C_{t,s,p}B\\=& \left(\begin{array}{ccc} 0 &(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}+\frac{c\alpha}{d})t+\gamma s &i(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}-c\gamma)p+\alpha s \\ -(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}+\frac{c\alpha}{d})t-\bar{\gamma}s &0& i\bar{\alpha}p-\gamma t \\ i(\frac{\bar{\gamma}}{c}-\frac{d\bar{\alpha}}{c}-c\gamma)p-\bar{\alpha}s&i\alpha p+t\bar{\gamma}&0\end{array}\right)\end{array}.$$ As the coefficient of $t^2p$ in $\det(BC_{t,s,p}-C_{t,s,p}B)$ is $-i(\xi_1-\xi_2)^2\bar{\gamma}\not=0$, we still have $h(B)=h(A)$.
If $\alpha,i\gamma$ are real and $\xi_1=\xi_2=\xi_3$, then $B$ has the form $$ B=\left(\begin{array}{ccc} \pm\sqrt{\alpha^2+\delta^2} &\alpha &i\delta \\ {\alpha} &\pm\sqrt{\alpha^2+\delta^2}& 0\\ -i\delta&0&\pm\sqrt{\alpha^2+\delta^2}\end{array}\right)$$ with nonzero $\alpha, \delta\in{\mathbb R}$. Then, for $D_{t,s,p}$ in Eq.(3.6), consider $$BD_{t,s,p}-D_{t,s,p}B=\left(\begin{array}{ccc}2i(\delta p-\alpha t) &i\delta s & \alpha s \\ i\delta s &2i\alpha t & \alpha p-\delta t\\ -\alpha s&\delta t-\alpha p&-2i\delta p\end{array}\right)$$ for nonzero real numbers $t,s,p$. As the coefficient of $t^3$ in $\det(BD_{t,s,p}-D_{t,s,p}B)$ is $-2i\alpha\delta^2\not=0$, one gets $h(B)=h(A)$ again.
{\bf Subcase 3.} All $\alpha,\beta,\gamma$ are nonzero.
Since $\det(B)=0$, there are scalars $c,d$ such that $(\bar{\gamma}, \bar{\beta}, \xi_3)=(c\xi_1+d\bar{\alpha}, c\alpha+d\xi_2,c\gamma+d\beta)$. It follows that \begin{equation}\left\{\begin{array}{l} \gamma=\bar{c}\xi_1+\bar{d}\alpha ,\\ \beta=\bar{c}\bar{\alpha}+\bar{d}\xi_2, \\
\xi_3=|c|^2\xi_1+c\bar{d}\alpha+\bar{c}d\bar{\alpha}+|d|^2\xi_2.\end{array}\right. \end{equation} As $\alpha\beta\bar{\gamma}\in{\mathbb R}$, we get
$$(c\bar{d}\alpha-\bar{c}d\bar{\alpha})(|\alpha|^2-\xi_1\xi_2)=0.$$
However, $|\alpha|^2-\xi_1\xi_2=0$ implies that
$\xi_1=\frac{\bar{\alpha}}{k}$, $\xi_2=k\alpha$ for some scalar $k$, which entails that $\beta=k\gamma$ and hence that $B$ has rank one, a contradiction. So $|\alpha|^2-\xi_1\xi_2\not=0$, and then we must have $c\bar{d}\alpha-\bar{c}d\bar{\alpha}=0$. Arguing similarly, we get \begin{equation} \left\{ \begin{array}{l}
|\alpha|^2-\xi_1\xi_2\not=0,\\ |\beta|^2-\xi_2\xi_3\not=0,\\
|\gamma|^2-\xi_1\xi_3\not=0. \end{array} \right. \end{equation}
Let $C_{t,s,p}$ be as in Eq.(3.5). As $$\begin{array}{rl} &BC_{t,s,p}-C_{t,s,p}B\\ =& \left(\begin{array}{ccc} (\alpha-\bar{\alpha})t-i(\gamma+\bar{\gamma})p & (\xi_1-\xi_2)t+\gamma s-i\bar{\beta}p & i(\xi_1-\xi_3)p+\alpha s-\beta t \\ -(\xi_1-\xi_2)t-\bar{\gamma}s-i\beta p & -(\alpha-\bar{\alpha})t+(\beta-\bar{\beta})s & (\xi_2-\xi_3)s+i\bar{\alpha}p-\gamma t \\ i(\xi_1-\xi_3)p+\bar{\beta}t-\bar{\alpha}s & -(\xi_2-\xi_3)s+\bar{\gamma}t+i\alpha p & i(\gamma+\bar{\gamma})p-(\beta-\bar{\beta})s \end{array}\right),\end{array}$$ we see that the coefficients of $t^3, s^3, p^3$ in $\det(BC_{t,s,p}-C_{t,s,p}B)$ are respectively \begin{equation}\left\{ \begin{array}{l}
c_t=(\xi_1-\xi_2)(\beta\bar{\gamma}-\bar{\beta}\gamma)+(\alpha-\bar{\alpha})(|\gamma|^2-|\beta|^2),\\
c_s=(\xi_2-\xi_3)(\alpha\bar{\gamma}-\bar{\alpha}\gamma)+(\beta-\bar{\beta})(|\alpha|^2-|\gamma|^2),\\
c_p=i(\xi_1-\xi_3)(\bar{\alpha}\bar{\beta}+\alpha\beta)+i(\bar{\gamma}+\gamma)(|\beta|^2-|\alpha|^2). \end{array}\right. \end{equation}
If one of $c_t, c_s,c_p$ is nonzero, then $\det(BC_{t,s,p}-C_{t,s,p}B)\not=0$ for some choice of $t,s,p$, which implies that $h(B)=h(C_{t,s,p})=h(A)$. Assume $$c_t=c_s=c_p=0.$$ Considering the coefficients $d_t,d_s$ and $d_p$ of $t^3,s^3$ and $p^3$ in $\det(BD_{t,s,p}-D_{t,s,p}B)$ with $D_{t,s,p}$ as in Eq.(3.6) one gets \begin{equation} \left\{ \begin{array}{l}
d_t=i(\xi_1-\xi_2)(\beta \bar{\gamma}+\bar{\beta}{\gamma})+i(\alpha+\bar{\alpha})(|\gamma|^2-|\beta|^2),\\
d_s=(\xi_2-\xi_3)(\alpha\bar{\gamma}-\bar{\alpha}\gamma)+(\beta-\bar{\beta})(|\alpha|^2-|\gamma|^2)=0,\\ d_p=(\xi_1-\xi_3)(\bar{\alpha}\bar{\beta}-
\alpha{\beta})+(\bar{\gamma}-{\gamma})(|\beta|^2-|\alpha|^2). \end{array}\right. \end{equation} If one of $d_t,d_p$ is nonzero, then $h(B)=h(A)$. Assume that $$d_t=d_s=d_p=0.$$ Let \begin{equation} E_{t,s,p}=\left(\begin{array}{ccc} 0 &t & p \\ t &0 & is\\ p&-is&0\end{array}\right)\end{equation} for nonzero real numbers $t,s,p$. The coefficients $e_t,e_s$ and $e_p$ of $t^3,s^3$ and $p^3$ in $\det(BE_{t,s,p}-E_{t,s,p}B)$ are \begin{equation} \left\{ \begin{array}{l}
e_t= (\xi_1-\xi_2)(\beta \bar{\gamma}-\bar{\beta}{\gamma})+ (\alpha-\bar{\alpha})(|\gamma|^2-|\beta|^2)=0,\\
e_s=i(\xi_2-\xi_3)(\alpha\bar{\gamma}+\bar{\alpha}\gamma)+i(\beta+\bar{\beta})(|\alpha|^2-|\gamma|^2),\\ e_p=(\xi_1-\xi_3)(\bar{\alpha}\bar{\beta}-
\alpha{\beta})+(\bar{\gamma}-{\gamma})(|\beta|^2-|\alpha|^2)=0. \end{array}\right. \end{equation} If $e_s\not=0$, then we get $h(B)=h(A)$. Assume $$e_s=0.$$ Then, by Eqs.(3.9)-(3.10), and Eq.(3.12), it is easily checked that \begin{equation} \left\{ \begin{array}{l}
(\xi_1-\xi_2)\beta \bar{\gamma}+\alpha(|\gamma|^2-|\beta|^2)=0,\\
(\xi_2-\xi_3)\alpha\bar{\gamma}+\beta(|\alpha|^2-|\gamma|^2)=0,\\
(\xi_1-\xi_3) \alpha{\beta}+{\gamma}(|\beta|^2-|\alpha|^2)=0. \end{array}\right. \end{equation}
As $\alpha\beta\bar{\gamma}$ is real, we see from Eq.(3.13) that both $\alpha^2$ and $\beta^2$ are real, and hence $\alpha\in{\mathbb R}$ or $\alpha\in i{\mathbb R}$ (and likewise $\beta\in{\mathbb R}$ or $\beta\in i{\mathbb R}$). It follows that four cases may occur, namely \begin{equation}\left\{\begin{array}{ll} 1^\circ & \alpha,\beta,\gamma\in{\mathbb R}.\\ 2^\circ & \alpha,\beta\in i{\mathbb R},\gamma\in{\mathbb R}.\\ 3^\circ & \beta,\gamma\in i{\mathbb R}, \alpha\in {\mathbb R}.\\ 4^\circ & \alpha,\gamma\in i{\mathbb R},\beta\in{\mathbb R}. \end{array}\right.\end{equation}
If $\xi_1=\xi_2=\xi_3=\xi$, then $|\alpha|=|\beta|=|\gamma|$ by Eq.(3.13). On the other hand, by Eq.(3.7),
$\xi(1-|c|^2-|d|^2)=2c\bar{d}\alpha$. Thus, if $\xi=0$ or
$1-|c|^2-|d|^2=0$, then $c\bar{d}\alpha=0$, which implies that $c=0$ or $d=0$. Without loss of generality, say $c=0$; then $d\not=0$ and $\beta=\bar{d}\xi$, so $\xi\not=0$ and $\xi=|d|^2\xi$, which gives $|d|=1$, say $d=e^{i\theta}$, and $|\xi|=|d\beta|=|\alpha|$, contradicting the fact that
$|\alpha|^2\not=\xi_1\xi_2=\xi^2=|\xi|^2$ (see Eq.(3.8)).
So we have $\xi(1-|c|^2-|d|^2)=2c\bar{d}\alpha\not=0$,
$$\xi=\frac{\gamma-\bar{d}\alpha}{\bar{c}}=\frac{\beta-\bar{c}\bar{\alpha}}{\bar{d}}=\frac{2c\bar{d}\alpha}{1-|c|^2-|d|^2}=\frac{2\bar{c}d\bar{\alpha}}{1-|c|^2-|d|^2}$$ and
$$B=\left(\begin{array}{ccc} \frac{2c\bar{d}\alpha}{1-|c|^2-|d|^2} &\alpha &(2|c|^2+1)\bar{d}\alpha \\ \bar{\alpha} &\frac{2c\bar{d}\alpha}{1-|c|^2-|d|^2}& (2|d|^2+1)\bar{c}\bar{\alpha}\\
(2|c|^2+1)d\bar{\alpha}&(2|d|^2+1){c}{\alpha}&\frac{2c\bar{d}\alpha}{1-|c|^2-|d|^2}\end{array}\right).$$
It follows that $(2|c|^2+1)|d|=(2|d|^2+1)|c|=1$ as
$|\alpha|=|\beta|=|\gamma|$. Thus
$2|c|+\frac{1}{|c|}=2|d|+\frac{1}{|d|}=\frac{1}{|cd|}$. Note that
$|c|=\frac{1}{2|d|^2+1}$ and $|d|=\frac{1}{2|c|^2+1}$. So one gets
$(|c|+|d|)(|c|-|d|)=|c|-|d|$, which gives further that either
$|c|=|d|$ or $|c|+|d|=1$. If $|c|\not=|d|$, we must have $|c|+|d|=1$
and hence $0<1-|c|=|d|=\frac{1}{2|c|^2+1}$. Then we obtain
$|c|(2|c|^2-2|c|+1)=0$. As we always have $2|c|^2-2|c|+1>0$, one sees that $c=0$, a contradiction. Therefore, we have $|c|=|d|=k$. Since $(2k^2+1)k=1$, we see that $k\approx 0.5898$. Write
$\alpha=|\alpha|e^{i\theta_1}$, $c=ke^{i\theta_2}$ and $d=ke^{i\theta_3}$. Now $c\bar{d}\alpha$ is real implies that $\theta_1+\theta_2-\theta_3$ is 0 or $\pi$. Replacing $B$ by $-B$ if necessary we may assume that $\theta_1+\theta_2-\theta_3=0$ and thus $d=ke^{i(\theta_1+\theta_2)}$. Without loss of generality, let
$|\alpha|=1$. Notice that $(2k^2+1)k=1$. Then $B$ becomes to $$B=\left(\begin{array}{ccc} \frac{2k^2}{1-2k^2} &e^{i\theta_1} & e^{-i\theta_2}\\ e^{-i\theta_1} &\frac{2k^2}{1-2k^2}& e^{-i(\theta_1+\theta_2)}\\
e^{i\theta_2}& e^{i(\theta_1+\theta_2)}&\frac{2k^2}{1-2k^2}\end{array}\right)$$
with $\frac{2k^2}{1-2k^2}\approx 2.2854$. But then
$0=\det(B)=(\frac{2k^2}{1-2k^2})^3-3(\frac{2k^2}{1-2k^2})+2\approx
7.0802>0$, a contradiction. Therefore $\xi_1,\xi_2, \xi_3$ are not all the
same. Keeping this in mind, we show below that $h(B)=h(A)$ holds.
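The numerical values used in this step can be reproduced in a few lines of Python (a sanity check, not part of the proof): the sketch below solves $(2k^2+1)k=1$ by bisection and evaluates $\xi=\frac{2k^2}{1-2k^2}$ and $\xi^3-3\xi+2=(\xi-1)^2(\xi+2)$, confirming in particular that the latter is strictly positive.

```python
# Solve (2k^2 + 1)k = 1, i.e. 2k^3 + k - 1 = 0, by bisection on [0, 1].

def f(k):
    return 2 * k ** 3 + k - 1

lo, hi = 0.0, 1.0          # f(0) = -1 < 0 and f(1) = 2 > 0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
k = (lo + hi) / 2

xi = 2 * k ** 2 / (1 - 2 * k ** 2)   # the common diagonal entry of B
detB = xi ** 3 - 3 * xi + 2          # equals (xi - 1)**2 * (xi + 2)

assert abs(k - 0.58975) < 1e-4
assert 2.28 < xi < 2.29
assert detB > 7                      # in particular det(B) != 0, the contradiction
```
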
For example, consider the case where $\alpha\in{\mathbb R}$ and $\beta,\gamma \in i{\mathbb R}$.
In this case, for $t,s,p\in{\mathbb C}$, let $$F_{t,s,p}=\left(\begin{array}{ccc} 0 &t & p \\ \bar{t} & 0 & s \\ \bar{p} & \bar{s} & 0 \end{array}\right).$$
Then $$\small\begin{array}{rl} &BF_{t,s,p}-F_{t,s,p}B \\ = &\left(\begin{array}{ccc} \alpha(\bar{t}-t)+i\gamma(p+\bar{p}) & (\xi_1-\xi_2)t+i\gamma\bar{s}+i\beta p & (\xi_1-\xi_3)p-\alpha s-i\beta t \\ (\xi_2-\xi_1)\bar{t}+i\beta\bar{p}+i\gamma s & -\alpha(\bar{t}-t)+i\beta (s+\bar{s})& (\xi_2-\xi_3)s+\alpha p-i\gamma\bar{t} \\ (\xi_3-\xi_1)\bar{p}-i\beta\bar{t}-\alpha\bar{s} & (\xi_3-\xi_2)\bar{s}-i\gamma t-\alpha\bar{p} & -i\gamma (p+\bar{p})-i\beta(s+\bar{s})
\end{array}\right).\end{array}$$ Consider the term of $\det([B,F_{t,s,p}])$ that contains only $t$, which is $$ ((\xi_1-\xi_2)\beta\gamma +\alpha(\beta^2-\gamma^2))(t^2\bar{t}-\bar{t}^2t)=2(\xi_1-\xi_2)\beta\gamma(t^2\bar{t}-\bar{t}^2t) $$ as
$(\xi_1-\xi_2)\beta\gamma+\alpha(\gamma^2-\beta^2)=(\xi_1-\xi_2)(i\beta)\overline{i\gamma}+\alpha(|i\gamma|^2-|i\beta|^2)=0$ by Eq.(3.13). If $\xi_1\not=\xi_2$, then $(\xi_1-\xi_2)\beta\gamma\not=0$ and it is clear that we can choose $t,s,p$ with $ts\bar{p}\not\in{\mathbb R}$ so that $\det([B,F_{t,s,p}])\not=0$. Thus we get $h(B)=h(F_{t,s,p})=h(A)$. If $\xi_1=\xi_2$, then we must have $\xi_2\not=\xi_3$. Now consider the term of $\det([B,F_{t,s,p}])$ that contains only $s$, which is $$ i(\xi_2-\xi_3)\alpha\gamma (s^2\bar{s}-\bar{s}^2s)+i\beta(\alpha^2-\gamma^2)(s^2\bar{s}+\bar{s}^2s)=2i(\xi_2-\xi_3)\alpha\gamma s^2\bar{s} $$ since
$(\xi_2-\xi_3)\alpha\bar{i\gamma}+(i\beta)(|\alpha|^2-|i\gamma|^2)=0$ by Eq.(3.13). Clearly $(\xi_2-\xi_3)\alpha\gamma\not=0$ implies that there are $t,s,p$ with $ts\bar{p}\not\in{\mathbb R}$ so that $\det([B,F_{t,s,p}])\not=0$. It follows that $h(B)=h(F_{t,s,p})=h(A)$.
The remaining cases in Eq.(3.14) are dealt with similarly. This completes the proof of Claim 3.

{\bf Claim 4.} For any $A, B\in{\mathcal B}_s(H)\setminus{\mathcal D}$, we have $h(A)=h(B)$.

By Lemma 3.2, there exist $E,F\in {\mathcal B}_s(H)\setminus{\mathcal D}$ of rank not greater than 2 such that $W(AE-EA)\not=-W(AE-EA)$ and $W(BF-FB)\not=-W(BF-FB)$. Thus we get $h(A)=h(E)$ and $h(B)=h(F)$. However, by Claims 2 and 3, we always have $h(E)=h(F)$. Hence $h(A)=h(B)$.
Finally, let ${\mathcal S}=\{S\in{\mathcal D}: h(S)\not=h(A) \ {\rm for}\ A\not\in{\mathcal D}\}$. Then it is clear that the theorem holds.
$\Box$
\section{The case when $\dim H=2$}
In this last section we consider the problem for the case when $\dim H=2$. As we will see, the situation in the two-dimensional case is quite different from that in dimensions $\geq 3$.
As $\dim H=2$, we can identify ${\mathcal B}_s(H)$ as ${\bf H}_2={\bf H}_2({\mathbb C})$, the set of all $2\times 2$ Hermitian matrices over $\mathbb C$.
The following is our result; note that the surjectivity assumption on $\Phi$ is not needed here.
{\bf Theorem 4.1.} {\it Let $\Phi: {\bf H}_2({\mathbb C})\to{\bf H}_2({\mathbb C})$ be a map. The following statements are equivalent.}
(1) {\it $\sigma([\Phi(A),\Phi(B)])=\sigma([A,B])$ for any $A,B\in{\bf H}_2({\mathbb C})$.}
(2) {\it $W([\Phi(A),\Phi(B)])=W([A,B])$ for any $A,B\in{\bf H}_2({\mathbb C})$.}
(3) {\it $w([\Phi(A),\Phi(B)])=w([A,B])$ for any $A,B\in{\bf H}_2({\mathbb C})$.}
(4) {\it There exist a unitary matrix $U\in M_2({\mathbb C})$, a sign function $h:{\bf H}_2\to \{-1,1\}$ and a functional $f:{\bf H}_2({\mathbb C})\to {\mathbb R}$ such that one of the following holds:}
\hspace{2mm} (1$^\circ$) {\it $\Phi(A)=h(A)UAU^*+f(A)I$ for all $A\in{\bf H}_2$;}
\hspace{2mm} (2$^\circ$) {\it $\Phi(A)=h(A)UA^tU^*+f(A)I$ for all $A\in{\bf H}_2$;}
\hspace{2mm} (3$^\circ$) {\it $\Phi(A)=h(A)U\Psi(A)U^*+f(A)I$ for all $A\in{\bf H}_2$;}
\hspace{2mm} (4$^\circ$) {\it $\Phi(A)=h(A)U\Psi(A)^tU^*+f(A)I$ for all $A\in{\bf H}_2$.}\\ Here, with $A= \left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right)$, $\Psi(A)=\left(\begin{array}{cc} a & -c+id \\ -c-id &b \end{array}\right)$.
{\bf Proof.} It is clear that
(4)$\Rightarrow$(1)$\Leftrightarrow$(2)$\Leftrightarrow$(3).
(3)$\Rightarrow$(4). Assume that $\Phi:{\bf H}_2\to{\bf H}_2$ preserves
the numerical radius of Lie products.
We may modify the functional $f(A)$ in the map $\Phi$ so that $\Phi(A)$ has trace 0 for all $A \in {\bf H}_2({\mathbb C})$. Then we can focus on the set ${\bf H}_2^0$ of trace zero matrices in ${\bf H}_2({\mathbb C})$.
Consider the Hermitian matrices \begin{equation} X = \frac{1}{\sqrt 2} \left(\begin{array}{cc} 0 & 1 \cr 1 & 0 \cr \end{array}\right), \qquad Y = \frac{1}{\sqrt 2}\left(\begin{array}{cc} 0 & -i \cr i & 0 \cr \end{array}\right), \qquad Z = \frac{1}{\sqrt 2}\left(\begin{array}{cc} 1 & 0 \cr 0 & -1 \cr \end{array}\right). \end{equation}
Then the following holds:
(1) $\{X, Y, Z\}$ is an orthonormal basis for $M_2^0$ using the inner product $\langle A, B \rangle = {\rm tr}( AB^*)$, where $M_2^0$ is the set of trace zero $2\times 2$ matrices.
(2) $A = a_1 X + a_2 Y + a_3 Z\in{\bf H}_2^0$ if and only if $(a_1, a_2, a_3)^t\in{\mathbb R}^3$.
(3) $XY = \frac{i}{\sqrt 2}Z = -YX, \quad YZ = \frac{i}{\sqrt 2}X = -ZY, \quad ZX = \frac{i}{\sqrt 2}Y = -XZ$.
(4) $W([X,Y]) = W([Y,Z]) = W([Z,X]) = i[-1,1]$.
(5) If $A = a_1 X + a_2 Y + a_3 Z$ and $B = b_1 X + b_2 Y + b_3 Z$ in $M_2^0$, then $$[A,B] = \sqrt{2}i(c_1 X + c_2 Y + c_3 Z),$$ where $$c_1 = a_2 b_3 - a_3b_2, \quad c_2 = -(a_1 b_3 - a_3 b_1), \quad c_3 = a_1 b_2 - a_2 b_1.$$ In other words, $(c_1, c_2, c_3)^t = (a_1, a_2, a_3)^t \times (b_1, b_2, b_3)^t$, the cross product in ${\mathbb C}^3$.
(6) Every unitary similarity map $a_1X + a_2Y + a_3Z = A \mapsto UAU^* = b_1X + b_2 Y + b_3Z$ on $M_2^0$ corresponds to a real special orthogonal transformation $T \in M_3({\mathbb R})$ such that $T(a_1, a_2, a_3)^t = (b_1, b_2, b_3)^t$.
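Statements (3)-(5) are direct computations. As an illustration, the following Python sketch checks the multiplication rule $XY=\frac{i}{\sqrt 2}Z$ and the cross-product formula in (5) for one sample pair of real coefficient vectors (the helpers \verb"mul", \verb"add" and \verb"close" are ad hoc $2\times 2$ matrix utilities, and the coefficient vectors are arbitrary test values).

```python
import math

s2 = math.sqrt(2)
X = [[0, 1 / s2], [1 / s2, 0]]
Y = [[0, -1j / s2], [1j / s2, 0]]
Z = [[1 / s2, 0], [0, -1 / s2]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b, ca=1, cb=1):
    # linear combination ca*a + cb*b
    return [[ca * a[i][j] + cb * b[i][j] for j in range(2)] for i in range(2)]

def close(a, b):
    return all(abs(a[i][j] - b[i][j]) < 1e-12 for i in range(2) for j in range(2))

# (3): XY = (i / sqrt 2) Z.
assert close(mul(X, Y), [[1j / s2 * Z[i][j] for j in range(2)] for i in range(2)])

# (5): [A, B] = sqrt(2) i (c1 X + c2 Y + c3 Z) where c = a x b (cross product).
a, b = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)
A = add(add(X, Y, a[0], a[1]), Z, 1, a[2])
B = add(add(X, Y, b[0], b[1]), Z, 1, b[2])
c = (a[1] * b[2] - a[2] * b[1],
     a[2] * b[0] - a[0] * b[2],
     a[0] * b[1] - a[1] * b[0])
lhs = add(mul(A, B), mul(B, A), 1, -1)           # AB - BA
rhs = add(add(X, Y, c[0], c[1]), Z, 1, c[2])
rhs = [[s2 * 1j * rhs[i][j] for j in range(2)] for i in range(2)]
assert close(lhs, rhs)
```
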
{\bf Claim 1.} There exists a unitary $U\in M_2({\mathbb C})$ such that $$\Phi(A) = \varepsilon_A UAU^*$$ for all $A \in \{X, Y, Z\},$ where $\varepsilon_A\in\{-1,1\}$.
Assume that the images of $X, Y, Z$ are respectively $$X_1 = a_{11} X + a_{21} Y + a_{31} Z, \ Y_1 = a_{12} X + a_{22} Y + a_{32} Z, \ Z_1 = a_{13} X + a_{23} Y + a_{33} Z.$$ Then the $a_{pq}$ are real numbers. Let $T = (a_{pq}) \in M_3({\mathbb R})$. We will show that $T$ is a real orthogonal matrix; by property (6) above, $\Phi$ then has the form asserted in the claim.
Note that the hypothesis and conclusion will not be affected by changing $T$ to $PTQ$ for any real orthogonal matrices $P, Q \in M_3({\mathbb R})$. It just corresponds to changing $\Phi$ to a map of the form $$A \mapsto \varepsilon_P U_P\Phi(\varepsilon_Q U_Q A U_Q^*)U_P^*$$ for some unitary $U_P, U_Q \in M_2({\mathbb C})$ and $\varepsilon_P, \varepsilon_Q \in \{1,-1\}$ depending on $P$ and $Q$.
By the singular value decomposition of real matrices, let $P, Q$ be real orthogonal such that $PTQ = {\rm diag}(s_1, s_2, s_3)$ with $s_1 \ge s_2 \ge s_3 \ge 0$. Now, replace $T$ by $PTQ$ so that $T = {\rm diag}(s_1, s_2, s_3)$. Thus there exists a unitary matrix $U\in M_2({\mathbb C})$ such that $$ \Phi(X)= s_1UXU^*,\ \Phi(Y) =s_2UYU^*,\ \Phi(Z) =s_3UZU^*.$$ It follows that
$$ 1=w(XY-YX)=w( \Phi(X)\Phi(Y)- \Phi(Y)\Phi(X) )=|s_1s_2|w(XY-YX)=|s_1s_2|.$$
Similarly, one gets $|s_1s_3|=|s_2s_3|=1$ and hence $s_1,s_2,s_3 \in\{-1,1\}$. Thus the Claim is true.
Without loss of generality, in the sequel we assume $U=I_2$. Note that, for any sign function $h: {\bf H}_2\to\{-1,1\}$, the map $\Phi'$ defined by $\Phi' (A)=h(A)\Phi(A)$ still preserves the numerical radius of Lie products. So, multiplying by a suitable sign function if necessary, we may assume that $$\Phi(C)=C$$ for every $C\in\{X,Y,Z\}$.
{\bf Claim 2.} There are sign functions $\varepsilon_1,\varepsilon_2,\varepsilon_3:{\bf H}_2\to\{-1,1\}$ and a functional $f:{\bf H}_2\to{\mathbb R}$ such that, for any $A \in {\bf H}_2$ with $A=\left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right)$, we have $$\Phi(A)=\left(\begin{array}{cc} \varepsilon_1(A)a & \varepsilon_2(A)c+i\varepsilon_3(A)d \\ \varepsilon_2(A)c-i\varepsilon_3(A)d & \varepsilon_1(A)b \end{array}\right)+f(A)I_2.$$
Write $A=\left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right)$ and $\Phi(A)=\left(\begin{array}{cc} x & w+iv \\ w-iv &y \end{array}\right)$, where $a,b,c,d,x,y,w,v$ are real numbers. Note that, for any $E,F\in {\bf H}_2$, $w(EF-FE)=\delta$ if and only if $\sigma(EF-FE)=\{-i\delta,i\delta\}$. Thus $w(AC-CA)=w(BC-CB)$ if and only if $\sigma(AC-CA)=\sigma(BC-CB)$. As $$\begin{array}{ll} \sqrt{2}(AX-XA)= \left(\begin{array}{cc} i2d & a-b \\ b-a &-i2d \end{array}\right), & \sqrt{2}(\Phi(A)X-X\Phi(A))= \left(\begin{array}{cc} 2iv & x-y \\ y-x &-2iv \end{array}\right);\\ \sqrt{2}(AY-YA)=i\left(\begin{array}{cc} 2c & b-a \\ b-a &-2c \end{array}\right), & \sqrt{2}(\Phi(A)Y-Y\Phi(A))=i\left(\begin{array}{cc} 2w & y-x \\ y-x&-2w \end{array}\right); \\ \sqrt{2}(AZ-ZA)=2\left(\begin{array}{cc} 0 & -c-id \\c-id &0 \end{array}\right),& \sqrt{2}(\Phi(A)Z-Z\Phi(A))=2\left(\begin{array}{cc} 0 & -w-iv \\w-iv &0 \end{array}\right),\end{array}$$ we must have \begin{equation} \left\{\begin{array}{l} 4v^2+(x-y)^2=4d^2+(a-b)^2, \\ 4w^2+(x-y)^2=4c^2+(a-b)^2, \\w^2+v^2=c^2+d^2.\end{array}\right. \end{equation}
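The displayed commutator formulas can be checked mechanically. As an illustration, the Python sketch below verifies the first identity, $\sqrt 2(AX-XA)=\left(\begin{array}{cc} 2id & a-b\\ b-a & -2id\end{array}\right)$, for sample real parameters (the values of $a,b,c,d$ are arbitrary choices for the test, not taken from the text).

```python
import math

s2 = math.sqrt(2)
a, b, c, d = 1.5, -0.5, 2.0, 3.0   # arbitrary sample real parameters

A = [[a, c + d * 1j], [c - d * 1j, b]]
X = [[0, 1 / s2], [1 / s2, 0]]

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# sqrt(2) * (AX - XA)
comm = [[s2 * (mul(A, X)[i][j] - mul(X, A)[i][j]) for j in range(2)]
        for i in range(2)]
expected = [[2j * d, a - b], [b - a, -2j * d]]
assert all(abs(comm[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The $Y$- and $Z$-commutator formulas can be tested the same way.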
It follows that, the map $\Phi$ sends $\left(\begin{array}{cc} a & 0 \\ 0 &b \end{array}\right)=\left(\begin{array}{cc} \frac{a-b}{2} & 0 \\ 0 &-\frac{a-b}{2} \end{array}\right)+\frac{a+b}{2}I_2$ to $\left(\begin{array}{cc} \varepsilon_1 \frac{a-b}{2} & 0\\ 0 & -\varepsilon_1 \frac{a-b}{2} \end{array}\right)+\lambda' I_2=\left(\begin{array}{cc} \varepsilon_1 a & 0\\ 0 & \varepsilon_1 b \end{array}\right)+\lambda I_2$, and sends $\left(\begin{array}{cc} 0 & c+id \\ c-id &0 \end{array}\right)$ to $\left(\begin{array}{cc} 0 & \varepsilon_2c+i\varepsilon_3d \\ \varepsilon_2c-i\varepsilon_3d &0 \end{array}\right)+\lambda_2I_2$ for some scalars $\varepsilon_1,\varepsilon_2,\varepsilon_3\in\{-1,1\}$.
To sum up, $$\Phi(\left(\begin{array}{cc} a & 0 \\ 0 &b \end{array}\right)+{\mathbb R}I_2)\subseteq\varepsilon_1\left(\begin{array}{cc} a & 0 \\ 0 &b \end{array}\right)+{\mathbb R}I_2,$$ and $$\Phi(\left(\begin{array}{cc} 0 & c+id \\ c-id &0 \end{array}\right)+{\mathbb R}I_2)\subseteq \left(\begin{array}{cc} 0 & \varepsilon_2c+i\varepsilon_3d \\ \varepsilon_2c-i\varepsilon_3d &0 \end{array}\right)+{\mathbb R}I_2,$$ where $\varepsilon_1,\varepsilon_2,\varepsilon_3\in\{-1,1\}$ depending on $a,c,d$.
To consider a general $A=\left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right)$, for any unit vector $x\in{\mathbb C}^2$, take a unit vector $y\perp x$. Then, with respect to the orthonormal basis $\{x,y\}$, one can take $$X'=\frac{1}{\sqrt{2}}(x\otimes y+y\otimes x),\ Y'=\frac{1}{\sqrt{2}}i(-x\otimes y+y\otimes x),\ Z'=\frac{1}{\sqrt{2}}(x\otimes x-y\otimes y).$$ Repeating the argument of Claim 1 and the above, one sees that there exists a unitary matrix $U_x$ such that \begin{equation} \Phi(ax\otimes x+by\otimes y+{\mathbb R}I_2)\subseteq \varepsilon_1(x,a,b)(aU_xx\otimes U_x x +bU_xy\otimes U_xy ) +{\mathbb R}I_2 \end{equation}
for any $a,b\in{\mathbb R}$ and \begin{equation} \begin{array}{rl} &\Phi((c+id)x\otimes y+(c-id)y\otimes x+{\mathbb R}I_2) \\ \subseteq & (\varepsilon_2(x,c,d)c+i\varepsilon_3(x,c,d)d)U_xx\otimes U_xy \\ &+(\varepsilon_2(x,c,d)c-i\varepsilon_3(x,c,d)d)U_xy\otimes U_xx+{\mathbb R}I_2, \end{array} \end{equation} where $\varepsilon_1(x,a,b),\varepsilon_2(x,c,d),\varepsilon_3(x,c,d)\in\{-1,1\}$. In particular, by Eq.(4.3), without loss of generality we may assume that \begin{equation}\sigma(\Phi(A))=\sigma(A) \end{equation} for all $A\in {\bf H}_2$. It follows that, if $b=-a$, that is, if $A\in{\bf H}_2^0$, then we have \begin{equation} x^2+w^2+v^2=a^2+c^2+d^2, \end{equation} which, together with Eq.(4.2), gives $$ x^2=a^2,\ w^2=c^2,\ v^2=d^2.$$ Therefore, we still have $$x=\varepsilon_1 a,\ w=\varepsilon_2c, \ v=\varepsilon_3 d$$ for some $\varepsilon_1, \varepsilon_2, \varepsilon_3\in\{-1,1\}$. Now, it is easily checked that \begin{equation} \Phi(\left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right))\in \left(\begin{array}{cc} \varepsilon_1a & \varepsilon_2c+i\varepsilon_3d \\\varepsilon_2c-i\varepsilon_3d & \varepsilon_1b \end{array}\right)+{\mathbb R}I_2 \end{equation} for some $\varepsilon_1,\varepsilon_2,\varepsilon_3\in\{-1,1\}$, and Claim 2 is true.
Replacing $\Phi$ by $\varepsilon_3(\Phi-f)$ if necessary, by Claim 2, we may assume that $\varepsilon_3\equiv 1$ and \begin{equation} \Phi(A)=\Phi(\left(\begin{array}{cc} a & c+id \\ c-id &b \end{array}\right))=\left(\begin{array}{cc} \varepsilon_1(A)a & \varepsilon_2(A)c+i d \\ \varepsilon_2(A)c-id & \varepsilon_1(A)b \end{array}\right)\end{equation}
for every $A\in {\bf H}_2$.
To determine the sign functions $\varepsilon_1,\varepsilon_2$ it is enough to consider their behaviors on ${\bf H}_2^0$.
Let ${\mathcal M}=\{A\in{\bf H}_2^0: \varepsilon_1(A)=\varepsilon_2(A)\}$ and ${\mathcal N}=\{B\in{\bf H}_2^0 : \varepsilon_1(B)\not=\varepsilon_2(B)\}$.
{\bf Claim 2.} Either ${\mathcal M}={\bf H}_2^0$ or ${\mathcal N}={\bf H}_2^0$.
For any $A=\left(\begin{array}{cc} a & c+id \\ c-id &-a \end{array}\right),B=\left(\begin{array}{cc} b &e+if \\ e-if & -b\end{array}\right)\in{\bf H}_2^0$, writing $\varepsilon_j=\varepsilon_j(A)$ and $\eta_j=\varepsilon_j(B)$, a simple computation shows that $$AB-BA=2\left(\begin{array}{cc} i(de-cf) & ae-bc+i(af-bd) \\ -ae+bc+i(af-bd) &-i(de-cf) \end{array}\right)$$ and $$\begin{array}{rl} & \Phi(A)\Phi(B)-\Phi(B)\Phi(A)\\ =&2\left(\begin{array}{cc} i(\eta_2de-\varepsilon_2cf) & \varepsilon_1\eta_2ae-\varepsilon_2\eta_1bc+i(\varepsilon_1af-\eta_1bd) \\ -\varepsilon_1\eta_2ae+\varepsilon_2\eta_1bc+i(\varepsilon_1af-\eta_1bd) &-i(\eta_2de-\varepsilon_2cf) \end{array}\right). \end{array}$$ Since $w(\Phi(A)\Phi(B)-\Phi(B)\Phi(A))=w(AB-BA)$, one gets $$\begin{array}{rl} &(\eta_2de-\varepsilon_2cf)^2+(\varepsilon_1\eta_2ae-\varepsilon_2\eta_1bc)^2+(\varepsilon_1af-\eta_1bd)^2\\ =& (de-cf)^2+ (ae-bc)^2+ (af-bd)^2,\end{array} $$ that is, $$\begin{array}{rl} & d^2e^2+c^2f^2+a^2f^2+b^2d^2-2df(\varepsilon_2\eta_2ce+\varepsilon_1\eta_1 ab)-2\varepsilon_1\varepsilon_2\eta_1\eta_2 abce \\ = & d^2e^2+c^2f^2+a^2f^2+b^2d^2-2df(ce+ab)-2abce, \end{array} $$ which gives \begin{equation} df(\varepsilon_2\eta_2ce+\varepsilon_1\eta_1 ab)+ \varepsilon_1\varepsilon_2\eta_1\eta_2 abce = df(ce+ab)+abce. \end{equation}
Assume that neither ${\mathcal M}$ nor ${\mathcal N}$ is empty. Obviously, we can require $\varepsilon_1(A)=\varepsilon_2(A)$ if $ac=0$. So, ${\mathcal Q}=\{ A=\left(\begin{array}{cc} a & c+id \\ c-id &-a \end{array}\right)\in{\bf H}_2^0: ac=0\}\subseteq {\mathcal M}\cap{\mathcal N}$. If one of ${\mathcal M}$ and ${\mathcal N}$ is a subset of ${\mathcal Q}$, then the claim is true. Assume that none of ${\mathcal M}$ and ${\mathcal N}$ is a subset of ${\mathcal Q}$. We show that this leads to a contradiction.
Let ${\mathcal N}_1={\mathcal N}\setminus{\mathcal Q}$. Then ${\mathcal N}_1$ is nonempty, and $B=\left(\begin{array}{cc} b &e+if \\ e-if & -b\end{array}\right)\in{\mathcal N}_1$ implies that $be\not=0$. For any $A=\left(\begin{array}{cc} a & c+id \\ c-id &-a \end{array}\right)\in{\mathcal M}$, $B=\left(\begin{array}{cc} b &e+if \\ e-if & -b\end{array}\right)\in{\mathcal N}$, since $\varepsilon_2=\varepsilon_1=\varepsilon\in\{-1,1\}$ and $\eta_2=-\eta_1=\eta\in\{-1,1\}$, Eq.(4.9) gives $$ df\varepsilon\eta( ab-ce)-abce=df(ab+ce)+abce. $$ Thus,
if $\varepsilon\eta=1$, one gets $dfce=-abce$, that is, $dfc=-abc$ as $be\not=0$;
if $\varepsilon\eta=-1$, one gets $dfab=-abce$, that is, $adf=-ace$ as $be\not=0$.
Assume that $f=0$ for some $\left(\begin{array}{cc} b &e+if \\ e-if & -b\end{array}\right)\in{\mathcal N}_1 $; then we must have $ac=0$ for all $\left(\begin{array}{cc} a & c+id \\ c-id &-a \end{array}\right)\in{\mathcal M}$, which is a contradiction. Thus, for all $B=\left(\begin{array}{cc} b &e+if \\ e-if & -b\end{array}\right)\in{\mathcal N}_1$, we have $bef\not=0$. Hence, for any $ A\in{\mathcal M}, B\in{\mathcal N}_1$, $$\varepsilon(A)\varepsilon_1(B)=1\ {\rm and}\ c\not=0\Rightarrow df=-ab;$$ $$\varepsilon(A)\varepsilon_1(B)=-1 \ {\rm and}\ a\not=0\Rightarrow df=-ce.$$ Fix some $A,B$ as above. Take $D=\left(\begin{array}{cc} x &y+iz \\ y-iz & -x\end{array}\right)\in{\bf H}_2^0$ so that $xyz\not=0$, $\frac{z}{x}\not\in\{\frac{d}{a},\frac{f}{b}\}$ and $\frac{z}{y}\not\in\{ \frac{d}{c}, \frac{f}{e}\}$. Then it is easily checked that $D\notin{\mathcal M}\cup{\mathcal N}={\bf H}_2^0$, a contradiction. So, we must have ${\mathcal M}={\bf H}_2^0$ or ${\mathcal N}={\bf H}_2^0$.
{\bf Claim 3.} If ${\mathcal M}={\bf H}_2^0$, then $\Phi $ has the form ($1^\circ$) or (2$^\circ$).
Let ${\mathcal M}_+=\{B\in{\mathcal M}: \varepsilon_1(B)=1\}$ and ${\mathcal M}_-=\{B\in{\mathcal M}: \varepsilon_1(B)=-1\}$. Then
${\bf H}_2^0={\mathcal M}={\mathcal M}_+\cup {\mathcal M}_-$ and ${\mathcal M}_+\cap {\mathcal M}_-=\{\left(\begin{array}{cc} 0 & if \\ -if & 0\end{array}\right): f\in{\mathbb R}\}$. It is clear that $\Phi(A)=A$ if $A\in {\mathcal M}_+$ and $\Phi(A)=-A^t$ if $A\in{\mathcal M}_-$.
For any $A=\left(\begin{array}{cc} a & c+id \\ c-id & -a\end{array}\right)\in{\mathcal M}_+$ and $B=\left(\begin{array}{cc} b & e+ if \\ e-if & -b\end{array}\right)\in{\mathcal M}_-$, by Eq.(4.9) we have $$df(ab+ce)=0.$$
Assume $df=0$; then the above equation is always true. If $f=0$, then $B$ is a real matrix and $\Phi(B)=-B^t=-B$. Letting $h(B)$ absorb a $-1$ we may require that $B\in{\mathcal M}_+$. Similarly, if $d=0$, we may rearrange if necessary so that $A\in{\mathcal M}_-$. Hence we may require that one of ${\mathcal M}_\pm$ contains no real matrices.
If one of ${\mathcal M}_\pm$ consists of real matrices, we have already proved that $\Phi$ has the form ($1^\circ$) or ($2^\circ$).
Assume that ${\mathcal M}_+$ and ${\mathcal M}_-$ contain respectively non-real matrices $A$ and $B$; then $df\not=0$. It follows that $$ab+ce=0.$$ If $abce\not=0$, we get $$\frac{e}{b}=-\frac{a}{c}.$$ Take $D=\left(\begin{array}{cc} x &y+iz \\ y-iz & -x\end{array}\right)\in{\bf H}_2^0$ with $xyz\not=0$, $\frac{y}{x}\not\in\{\frac{c}{a},\frac{e}{b}\}$. Then either $D\in{\mathcal M}_+$ or $D\in{\mathcal M}_-$. However, $D\in{\mathcal M}_+$ implies that $\frac{y}{x}=-\frac{b}{e}=\frac{c}{a}$ and $D\in{\mathcal M}_-$ implies that $\frac{y}{x}=-\frac{a}{c}=\frac{e}{b}$, contradicting the choice of $D$. Hence we always have $abce=0$, that is, at least one of $a,b,c,e$ is zero. Without loss of generality, assume that $ac\not=0$; then $be=0$. In fact we have $b=e=0$ since $ab+ce=0$. This forces that ${\mathcal M}_-=\{\left(\begin{array}{cc} 0& if \\ -if & 0\end{array}\right) : f\in{\mathbb R}\}$, and therefore ${\mathcal M}_+={\bf H}_2^0$. In this case we have $\Phi(A)=A$ for all $A\in{\bf H}_2^0$ and $\Phi$ has the form ($1^\circ$). If $be\not=0$ and $ac=0$, one gets $a=c=0$ and thus $${\mathcal M}_+\subseteq {\mathcal R}=\{\left(\begin{array}{cc} u & w+iv \\ w-iv & -u\end{array}\right) : v=0 \ \mbox{\rm or } u=w=0\}.$$ So we may require that ${\mathcal M}_-={\bf H}_2^0$ and $\Phi(A)=-A^t$ for every $A\in{\bf H}_2^0$, which implies that $\Phi$ has the form ($2^\circ$). If $ac=be=0$ for any $A,B$ with $df\not=0$, then we get the contradiction that $D=\left(\begin{array}{cc} x &y+iz \\ y-iz & -x\end{array}\right)\in{\bf H}_2^0$ with $xyz\not=0$ does not lie in ${\mathcal M}_+\cup{\mathcal M}_-={\bf H}_2^0$. This completes the proof of Claim 3.
{\bf Claim 4.} If ${\mathcal N}={\bf H}_2^0$, then $\Phi $ has the form ($3^\circ$) or ($4^\circ$).
Let ${\mathcal N}_+=\{B\in{\mathcal N}: \varepsilon_1(B)=1\}$ and ${\mathcal N}_-=\{B\in{\mathcal N}: \varepsilon_1(B)=-1\}$. Then
${\bf H}_2^0={\mathcal N}={\mathcal N}_+\cup {\mathcal N}_-$ and still, ${\mathcal N}_+\cap {\mathcal N}_-=\{\left(\begin{array}{cc} 0 & if \\ -if & 0\end{array}\right): f\in{\mathbb R} \}$. Clearly, $\Phi(A)=\Psi(A)$ if $A\in{\mathcal N}_+$ and $\Phi(A)=-\Psi(A)^t$ if $A\in{\mathcal N}_-$.
Note that, for any $B_1,B_2\in{\mathcal N}_+$ or $B_1,B_2\in{\mathcal N}_-$ we have $w([B_1,B_2])=w([\Phi(B_1),\Phi(B_2)])$ by Eq.(4.9). Also, if $B$ is real, then $\Phi(B)=-B$. Thus, with no loss of generality we may assume that all real matrices are contained in ${\mathcal N}_+$.
For any $A=\left(\begin{array}{cc} a & c+id \\ c-id & -a\end{array}\right)\in{\mathcal N}_+$ and $B=\left(\begin{array}{cc} b & e+ if \\ e-if & -b\end{array}\right)\in{\mathcal N}_-$, by Eq.(4.9) we still have $$df(ab+ce)=0.$$ If for any $A\in{\mathcal N}_+$ and $B\in{\mathcal N}_-$ we always have $df=0$ whenever $(a,c)\not=(0,0)$, then we must have $d=0$ for any $A\in{\mathcal N}_+$, which means that ${\mathcal N}_+\subseteq {\mathcal R} $. It is easily checked in this case that $\Phi$ has the form ($3^\circ$). So, we may assume that $df\not=0$ for some $A$ with $(a,c)\not=(0,0)$ and $B$. It follows that $ab+ce=0$. The same reason as that in Claim 3 reveals that $abce\not=0$ will lead to a contradiction. Thus we must have $abce=0$. Since there exists $A\in{\mathcal N}_+$ with $acd\not=0$ or $B\in{\mathcal N}_-$ with $bef\not=0$, a similar argument as that in Claim 3 shows that the former case implies that ${\mathcal N}_-=\{\left(\begin{array}{cc} 0 & if \\ -if & 0\end{array}\right): f\in{\mathbb R} \}$ and hence $\Phi$ has the form ($3^\circ$); the latter case implies that ${\mathcal N}_+={\mathcal R}$ and hence $\Phi$ has the form ($4^\circ$).
$\Box$
\end{document}
# Logarithmic proportionality and its applications
Let's start by looking at the basic properties of logarithms. The logarithm of a number is the exponent to which a base must be raised to obtain the original number. For example, the logarithm base 2 of 8 is 3 because 2^3 = 8.
Consider the following equation:
$$
\log_2(8) = 3
$$
This equation states that the logarithm of 8 to the base 2 is 3.
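The defining property can be checked numerically with Python's standard `math` module (the numbers are simply those of the example above):

```python
import math

# log_2(8): the exponent to which 2 must be raised to obtain 8.
x = math.log(8, 2)
print(x)  # approximately 3.0

# Check the defining property: raising the base to the logarithm
# recovers the original number (up to floating-point rounding).
print(2 ** x)  # approximately 8.0
```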
Logarithmic proportionality is often used to simplify complex mathematical expressions and solve equations. By exploiting the properties of logarithms, we can transform difficult problems into manageable ones.
For example, let's consider the equation:
$$
a^x = b^y
$$
By taking the logarithm of both sides with respect to the base $b$, and using the fact that $\log_b(b) = 1$, we can rewrite this equation as:

$$
x \cdot \log_b(a) = y
$$
This transformation simplifies the equation and makes it easier to solve.
## Exercise
Solve the following equation using logarithmic proportionality:
$$
2^x = 3^y
$$
Instructions:
1. Take the logarithm of both sides with respect to the base 3.
2. Simplify the equation.
3. Solve for x.
### Solution
1. Taking the logarithm of both sides with respect to the base 3, we get:

$$
x \cdot \log_3(2) = y \cdot \log_3(3)
$$

2. Since $\log_3(3) = 1$, the equation simplifies to:

$$
x \cdot \log_3(2) = y
$$

3. Solving for x, we get:

$$
x = \frac{y}{\log_3(2)}
$$
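The closed-form answer can be verified numerically. A minimal sketch (plain Python with the `math` module; the value of `y` is an arbitrary choice for illustration) substitutes the solution back into the original equation:

```python
import math

def solve_x(y):
    # From 2**x == 3**y, taking logarithms base 3 gives x * log_3(2) = y.
    return y / math.log(2, 3)

y = 1.5
x = solve_x(y)
# Both sides of the original equation should agree.
assert abs(2 ** x - 3 ** y) < 1e-9
```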
# Integration and its relationship with proportionality
Let's start by looking at the basic properties of integration. The integral of a function is the area under the curve defined by the function. For example, the integral of the function f(x) = x^2 over the interval [0, 1] is:
$$
\int_0^1 x^2 dx = \frac{1}{3}
$$
This equation states that the area under the curve defined by the function f(x) = x^2 over the interval [0, 1] is equal to 1/3.
Integration is often used to solve complex problems in various fields. By exploiting the properties of integration, we can transform difficult problems into manageable ones.
For example, let's consider the problem of finding the area under the curve defined by the function f(x) = x^2 over the interval [0, a]. By integrating the function, we can rewrite this problem as:
$$
\int_0^a x^2 dx
$$
This transformation simplifies the problem and makes it easier to solve.
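The definite integral can also be approximated numerically and compared against the closed-form antiderivative. A minimal sketch (plain Python; the midpoint rule and the step count `n` are arbitrary choices for illustration):

```python
def integrate(f, a, b, n=100_000):
    # Composite midpoint rule: sum f at the midpoints of n equal subintervals.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a = 2.0
approx = integrate(lambda x: x * x, 0.0, a)
exact = a ** 3 / 3  # antiderivative x**3 / 3 evaluated from 0 to a
assert abs(approx - exact) < 1e-6
```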
## Exercise
Find the area under the curve defined by the function f(x) = x^2 over the interval [0, 2].
Instructions:
1. Integrate the function f(x) = x^2 over the interval [0, 2].
2. Solve the resulting equation.
### Solution
1. Integrating the function f(x) = x^2 over the interval [0, 2], we get:
$$
\int_0^2 x^2 dx = \frac{8}{3}
$$
2. The area under the curve defined by the function f(x) = x^2 over the interval [0, 2] is equal to 8/3.
# Proportionality in statistics and data analysis
Let's start by looking at the basic properties of proportionality in statistics. Proportionality is often expressed as a ratio or a percentage. For example, the proportion of students who passed an exam can be expressed as the ratio of the number of students who passed to the total number of students.
Proportionality is often used to analyze and interpret complex datasets. By exploiting the properties of proportionality, we can transform difficult problems into manageable ones.
For example, let's consider the problem of analyzing the performance of a company's products. We can use proportionality to calculate the proportion of products that meet a certain quality standard. This proportion can then be compared to the proportion of products produced by competing companies to make informed decisions about marketing strategies and product improvements.
## Exercise
Analyze the following dataset:
- Company A produced 1000 products, of which 800 met the quality standard.
- Company B produced 1500 products, of which 1200 met the quality standard.
Instructions:
1. Calculate the proportion of products that met the quality standard for both companies.
2. Compare the proportions to determine which company has a higher percentage of products that meet the quality standard.
### Solution
1. Calculating the proportion of products that met the quality standard for both companies, we get:
- Company A: 800 / 1000 = 80%
- Company B: 1200 / 1500 = 80%
2. Both companies have the same proportion of products that meet the quality standard, which is 80%.
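The comparison above reduces to a one-line computation per company. A small sketch (plain Python; the figures are the ones from the exercise):

```python
def proportion(meeting_standard, total):
    # Proportion of products meeting the quality standard.
    return meeting_standard / total

company_a = proportion(800, 1000)
company_b = proportion(1200, 1500)
print(f"Company A: {company_a:.0%}, Company B: {company_b:.0%}")
# Both companies reach the same proportion.
assert company_a == company_b == 0.8
```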
# Real-world examples and case studies
Let's start by looking at the application of logarithmic proportionality in physics. In the field of radioactivity, the logarithm of the ratio of the activity of a sample to the activity of a standard sample is used to determine the concentration of radioactive isotopes. This concept is based on the properties of logarithmic proportionality and allows scientists to make accurate measurements and predictions.
Another example of the application of logarithmic proportionality in economics is the calculation of the inflation rate. The inflation rate is defined as the percentage increase in the general price level over a period of time. By taking the logarithm of both the initial price level and the final price level, economists can calculate the inflation rate using logarithmic proportionality.
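One common way to make this concrete is the continuously compounded inflation rate, defined as the difference of the logarithms of the two price levels. The sketch below uses that standard definition; the price levels are hypothetical:

```python
import math

def log_inflation_rate(initial_level, final_level):
    # Continuously compounded inflation: log(final) - log(initial),
    # i.e. the logarithm of the price-level ratio.
    return math.log(final_level / initial_level)

r = log_inflation_rate(100.0, 105.0)
# expm1(r) converts the log rate back to the ordinary percentage change.
assert abs(math.expm1(r) - 0.05) < 1e-12
```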
In data analysis, the concept of proportionality is widely used to model relationships between variables and make predictions based on observed data. For example, in the field of marketing, the concept of customer lifetime value is used to model the total revenue a company can expect to receive from a customer over their entire lifetime. This concept is based on the properties of proportionality and allows marketers to make informed decisions about customer acquisition and retention strategies.
## Exercise
Choose one of the examples discussed in this section and analyze its practical applications. Write a short paragraph that describes how the concept of logarithmic proportionality or integration is used in this example to solve a complex problem and make an informed decision.
Instructions:
1. Choose one of the examples discussed in this section.
2. Write a short paragraph that describes how the concept of logarithmic proportionality or integration is used in this example to solve a complex problem and make an informed decision.
### Solution
In the field of radioactivity, the logarithm of the ratio of the activity of a sample to the activity of a standard sample is used to determine the concentration of radioactive isotopes. This concept is based on the properties of logarithmic proportionality and allows scientists to make accurate measurements and predictions. By exploiting the properties of logarithmic proportionality, scientists can transform complex problems into manageable ones and make informed decisions about the analysis and interpretation of radioactivity data.
January 2021, 14(1): 373-393. doi: 10.3934/dcdss.2020324
Perturbed minimizing movements of families of functionals
Andrea Braides and Antonio Tribuzio
Department of Mathematics, University of Rome Tor Vergata, via della Ricerca Scientifica, 00133 Rome, Italy
* Corresponding author: Andrea Braides
Dedicated to Alexander Mielke on the occasion of his 60th birthday
Received: March 2019. Revised: October 2019. Early access: April 2020. Published: January 2021.
We consider the well-known minimizing-movement approach to the definition of a solution of gradient-flow type equations by means of an implicit Euler scheme depending on an energy and a dissipation term. We perturb the energy by considering a ($ \Gamma $-converging) sequence and the dissipation by varying multiplicative terms. The scheme depends on two small parameters $ \varepsilon $ and $ \tau $, governing energy and time scales, respectively. We characterize the extreme cases when $ \varepsilon/\tau $ and $ \tau/ \varepsilon $ converge to $ 0 $ sufficiently fast, and exhibit a sufficient condition that guarantees that the limit is indeed independent of $ \varepsilon $ and $ \tau $. We give examples showing that this is not the case in general, and apply this approach to study some discrete approximations, the homogenization of wiggly energies and geometric crystalline flows obtained as limits of ferromagnetic energies.
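For readers unfamiliar with the scheme, a minimizing movement discretizes a gradient flow by solving, at each time step, a minimization problem combining the energy with a dissipation penalty. The following one-dimensional sketch is not from the paper: the quadratic energy F(x) = x²/2 is chosen so that the minimizer of each implicit Euler step has the closed form x_{k+1} = x_k/(1+τ):

```python
import math

def minimizing_movement(x0, tau, steps):
    # Implicit Euler scheme for the gradient flow of F(x) = x**2 / 2:
    # each step minimizes F(x) + (x - x_prev)**2 / (2 * tau),
    # whose unique minimizer is x_prev / (1 + tau).
    x = x0
    for _ in range(steps):
        x = x / (1 + tau)
    return x

# As tau -> 0 the discrete trajectory approaches the exact flow
# x(t) = x0 * exp(-t); here total time T = steps * tau = 1.
x = minimizing_movement(1.0, tau=1e-3, steps=1000)
assert abs(x - math.exp(-1)) < 1e-3
```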
Keywords: Gradient flows, variational evolution, $ \Gamma $-convergence, homogenization, perturbations.
Mathematics Subject Classification: Primary: 47J30, 35K90, 49J45; Secondary: 47J35, 35B27.
Citation: Andrea Braides, Antonio Tribuzio. Perturbed minimizing movements of families of functionals. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 373-393. doi: 10.3934/dcdss.2020324
Figure 1. The dark line represents the graph of $\gamma\mapsto1/a^\gamma$, the light line is the constant $1/a^*$. On the left $\alpha>\beta/2$ so the sup is reached in $\gamma_1^\beta$, on the right $\alpha < \beta/2$ and the sup is reached in $\gamma_1^\alpha$
December 2017, Volume 7, Issue 8, pp 4219–4236
Hydrogeochemical investigation of groundwater in shallow coastal aquifer of Khulna District, Bangladesh
S. M. Didar-Ul Islam
Mohammad Amir Hossain Bhuiyan
Tanjena Rume
Gausul Azam
First Online: 11 February 2017
Groundwater acts as a lifeline in coastal regions, meeting domestic, drinking, irrigational and industrial needs. To investigate the hydrogeochemical characteristics of groundwater and its suitability, twenty samples were collected from shallow tubewells of the study area with screen depths of 21–54 m. The water quality assessment was carried out by evaluating physicochemical parameters such as temperature, pH, EC, TDS and major ions, i.e., Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻, SO₄²⁻, NO₃⁻ and HCO₃⁻. Results show that the water is slightly alkaline and brackish in nature. The abundance trends of cations and anions are Na⁺ > Ca²⁺ > Mg²⁺ > K⁺ and Cl⁻ > HCO₃⁻ > SO₄²⁻ > NO₃⁻, respectively, and Na–Cl–HCO₃ is the dominant groundwater type. The analyzed samples were also characterized with different indices, diagrams and permissible limits, i.e., electric conductivity (EC), total dissolved solids (TDS), chloride content (Cl), soluble sodium percentage (SSP), sodium adsorption ratio (SAR), residual sodium carbonate (RSC), magnesium adsorption ratio (MAR), Kelley's ratio (KR), the Wilcox diagram and the USSL diagram, and the results show that the groundwater is not suitable for drinking and irrigational use. The factors responsible for the geochemical characterization were also examined using standard plots, and it was found that mixing of seawater with entrapped water plays a significant role in the study area.
Groundwater quality · Electric conductivity · Salinity intrusion · Hydrogeochemical processes · Coastal region
Groundwater is the most important source of domestic, industrial and agricultural water supply in the world. It is estimated that approximately one third of the world's population uses groundwater for drinking purposes (Nickson et al. 2005). It is found in aquifers that have the capacity for both storing and transmitting water in significant quantities (Todd 1980). Generally, groundwater quality depends on the quality of recharged water, atmospheric precipitation, inland surface water and subsurface geochemical processes (Twarakavi and Kaluarachchi 2006; Kumar et al. 2014). In coastal regions groundwater quality patterns are complex because of the input from different water sources including precipitation, seawater, ascending deep groundwater and anthropogenic sources (Steinich et al. 1998). Problems in coastal areas are typically connected to contamination of fresh water resources by saline water and include well field salinization, crop damage, and surface water quality deterioration (Karro et al. 2004).
Bangladesh lies in the northeastern part of South Asia, has a 710-km coastline, and its coastal area covers about 32% of the country (MoWR 2005). Although coastal aquifers serve as major sources of freshwater supply, groundwater in coastal regions is relatively vulnerable to contamination by seawater intrusion, which makes it unsuitable for use (Kim et al. 2006; Jorgensen et al. 2008). Natural processes and anthropogenic activities such as over-extraction, urbanization and agricultural activities are the main reasons for seawater intrusion and water quality deterioration in coastal aquifers (Mondal et al. 2011; Selvam et al. 2013). Nowadays, almost 53% of the coastal areas of Bangladesh are affected by salinity (Hoque et al. 2003; Woobaidullah et al. 2006; Islam 2014). Salinity has become a major problem in the south-western coastal region of Bangladesh, where irrigation water quality is affected by high levels of salinity (Shammi et al. 2016a); such irrigation salinity mainly results from rises in the groundwater table due to excessive irrigation and the lack of adequate drainage for leaching and removal of salts (Corwin et al. 2007). The total area under irrigation in Bangladesh is 5,049,785 ha, and 78.9% of this area is covered by groundwater sources, including 3,197,184 ha with 1,304,973 shallow tubewells and 785,680 ha with 31,302 deep tubewells (DPHE and JICA 2010). However, most crop lands in the coastal areas of Bangladesh remain fallow in the dry season because surface water resources are saline and unsuitable for irrigation, while groundwater is not intensively utilized because of the fear of seawater intrusion into aquifers (Mondal et al. 2008). Seawater intrusion is a major threat in the coastal aquifers of Bangladesh, especially in the southwestern region (Bahar and Reza 2010; Islam et al. 2015, 2016b; Islam and Bhuiyan 2016).
The over-dependence on groundwater for the drinking, agricultural and industrial sectors, together with various climatic and natural phenomena, causes coastal groundwater contamination (Srinivas et al. 2015). Besides, the geochemical processes governing the chemical characteristics of groundwater are well documented in many parts of the world by many authors, e.g., Montety et al. (2008), Jalali (2009), Manjusree et al. (2009), Thilagavathi et al. (2012), Sivasubramanian et al. (2013), Nagaraju et al. (2014), Kumar et al. (2015), Islam et al. (2016a, b) and Balaji et al. (2016). Geochemical studies of groundwater provide a better understanding of water quality and possible changes (Kumar et al. 2014). However, the coastal groundwater system is fragile, and its evaluation will help in proper planning and sustainable management (Sefie et al. 2015). Therefore, detailed investigation of the groundwater hydrogeochemistry and water quality of the shallow aquifer is imperative. The present study thus aims to investigate the groundwater, determine its utility and identify the major geochemical processes in the study area. It is also intended to delineate the spatial distribution of hydrogeochemical constituents for proper understanding and future management.
Location and hydrological setting
Geographically, the study area is located between 22º28′ and 22º56′ N latitudes and between 89º12′ and 89º40′ E longitudes (Fig. 1). The investigated area falls within the western part of Faridpur Trough of Bengal Foredeep (Alam 1990) and is located on a natural levee of the Rupsha and Bhairab rivers and characterized by Ganges tidal floodplains with low relief, criss-crossed by rivers and water channels, and surrounded by tidal marshes and swamps. The surface lithology of the area is of deltaic deposits which are composed of tidal deltaic deposits, deltaic silt deposits, and mangrove swamp deposits (Alam 1990). The aquifers in and around the study area are generally multi-layered varying from unconfined to leaky-confined in the shallow alluvial deposits and confined in the deeper alluvial deposits (Uddin and Lundberg 1998). The aquifer systems of the study area can be classified into two major classes: the shallow aquifers ranging from depth ~10 to 150 m and deep aquifers generally >180 m depth are shown in Fig. 2. The water of this aquifer is generally brackish or saline with few isolated fresh water pockets (DPHE 2006).
Location of sampling sites in the study area
N–S hydrogeological cross section of the study area. Cross-sectional lines N–S is shown in Fig. 1 (DPHE 2006)
Climate is one of the most important factors for the occurrence and movement of groundwater (CGW Board 2009; Islam et al. 2016b). The study area falls in the south-central, south-western and south-eastern zones of the climatic sub-division of Bangladesh (Fig. 3), with the bulk of the rainfall occurring between June and October, high temperatures and excessive humidity (BMD 2014). The area has three major climatic seasons: a hot summer (March–May), followed by the monsoon or rainy season (June–October) and a moderate winter (November–February). Analysis of rainfall data from 1993 to 2012 shows that maximum rainfall occurs during the rainy season from May to October, peaking in July, while there is almost no rainfall during the dry period (Iftakher et al. 2015). The mean annual rainfall of Khulna district is approximately 1816 mm and the mean temperature is 34 °C (BMD 2014). In addition, natural phenomena such as storm surges, tidal floods and salinity are very common in this area (Ahmed 2006; Islam and Uddin 2015; Islam et al. 2015).
Map showing the climatic zones of Bangladesh (Rashid 1991)
Field sampling and water analysis
A total of 20 groundwater samples were collected from shallow tubewells at different locations in the study area (Fig. 1). Most of the sampled wells were fitted with a standard Bangladesh number-6 hand pump. Prior to sampling, each well was pumped for a few minutes until approximately twice the well volume had been purged, or until steady-state chemical conditions (pH, EC and temperature) were obtained. The pH of the water samples was measured on the spot using a pH meter (EcoScan Ion-6, Singapore); total dissolved solids (TDS) were measured with a portable meter (HANNA HI8734, Romania). Electrical conductivity (EC) and salinity were measured with a portable EC meter (HANNA HI8033, Romania), and temperature was measured simultaneously with the TDS meter. The geographical location of each well was determined with a GARMIN handheld global positioning system (GPS), and the approximate depth of each well was noted from the owner's records. Samples for major ion (Na+, K+, Ca2+, Mg2+, Cl−, SO4 2−, NO3 − and HCO3 −) analysis were collected in 500 mL polyethylene bottles. Each bottle was rinsed with distilled water before the sample was poured in; the bottles were then labeled and sealed airtight. Two sets of samples were collected from each location and filtered through 0.45 μm cellulose nitrate hydrophilic syringe filters. One set was acidified with concentrated HNO3 to a pH <2 to prevent adsorption and chemical precipitation. A Gallenkamp Flame Analyzer was used for Na+ and K+ analysis, and an ICS-5000 DIONEX SP ion chromatograph (IC) for Ca2+, Mg2+, Cl−, SO4 2− and NO3 −. Samples were diluted several times, and the relative standard deviation of the measured major ions was within ±3%. Alkalinity (HCO3 −) was measured by titration with a Digital Titrator (16900, HACH International, Colorado, USA) and a 1.6 N H2SO4 cartridge.
Methods for hydrogeochemical and water quality evaluation
To assess water quality and geochemical processes, the following parameters were calculated.
The total hardness (TH) in ppm (Todd 1980; Ragunath 1987; Hem 1991) was determined by the following equation:
$$ \text{TH} = 2.497\,\text{Ca}^{2+} + 4.115\,\text{Mg}^{2+}. $$
The soluble sodium percentage (SSP), or Na %, was used to evaluate the sodium hazard and is defined by Todd (1980) as:
$$ \text{SSP (Na\,\%)} = \frac{(\text{Na}^{+} + \text{K}^{+}) \times 100}{\text{Ca}^{2+} + \text{Mg}^{2+} + \text{Na}^{+} + \text{K}^{+}}. $$
To evaluate water quality for irrigation purposes, the sodium or alkali hazard expressed by the sodium adsorption ratio (SAR) is widely used (Bhuiyan et al. 2015; Islam et al. 2016a, b). If a water sample is high in Na+ and low in Ca2+, the ion exchange complex may become saturated with Na+, which destroys the soil structure (Todd 1980). The SAR value of irrigation water quantifies the relative proportion of Na+ to Ca2+ and Mg2+ (Alrajhi et al. 2015), and is computed as:
$$ \text{SAR} = \frac{\text{Na}^{+}}{\sqrt{(\text{Ca}^{2+} + \text{Mg}^{2+})/2}}, $$
where Na+, Ca2+ and Mg2+ are the concentrations of the respective ions in water (Ayers and Westcot 1985).
The residual sodium carbonate (RSC) is computed from the alkaline earths and weak acids as follows (Ragunath 1987; Rao et al. 2012):
$$ \text{RSC} = (\text{CO}_{3}^{2-} + \text{HCO}_{3}^{-}) - (\text{Ca}^{2+} + \text{Mg}^{2+}). $$
The magnesium adsorption ratio (MAR) (Ragunath 1987), also known as the magnesium hazard (MH), was calculated as:
$$ \text{MAR} = \frac{\text{Mg}^{2+} \times 100}{\text{Ca}^{2+} + \text{Mg}^{2+}}. $$
Lastly, Kelley's ratio (KR) (Kelley 1963) is defined as:
$$ \text{KR} = \frac{\text{Na}^{+}}{\text{Ca}^{2+} + \text{Mg}^{2+}}. $$
All ionic concentrations are in milliequivalents per liter (meq/L). These parameters, together with the individual chemical parameters, were compared with national and international standards to assess groundwater suitability.
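The calculations above are straightforward to script. The following is a minimal Python sketch (ours, not the authors' code); note that the TH coefficients 2.497 and 4.115 take Ca2+ and Mg2+ in mg/L, while the remaining indices take meq/L as stated above, and the example concentrations are hypothetical:

```python
# Illustrative sketch (not the authors' code): water-quality indices
# from the equations above. TH takes mg/L; the rest take meq/L.

def total_hardness(ca_mg_l, mg_mg_l):
    """Total hardness as mg/L CaCO3 (Todd 1980); inputs in mg/L."""
    return 2.497 * ca_mg_l + 4.115 * mg_mg_l

def indices(na, k, ca, mg, hco3, co3=0.0):
    """All inputs in meq/L; returns SSP, SAR, RSC, MAR and KR."""
    return {
        "SSP": (na + k) * 100 / (ca + mg + na + k),  # soluble sodium %
        "SAR": na / ((ca + mg) / 2) ** 0.5,          # sodium adsorption ratio
        "RSC": (co3 + hco3) - (ca + mg),             # residual sodium carbonate
        "MAR": mg * 100 / (ca + mg),                 # magnesium hazard
        "KR": na / (ca + mg),                        # Kelley's ratio
    }

# Hypothetical sample: Na 4, K 0, Ca 1, Mg 1, HCO3 3 (all meq/L)
res = indices(na=4, k=0, ca=1, mg=1, hco3=3)
```

With these hypothetical inputs the SAR is 4.0 and the KR is 2.0, i.e., the sample would be flagged as unsuitable under the thresholds discussed later in the text.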
Moreover, water types were identified from the major ion compositions using AquaChem (version 3.7). SPSS (version 16.0) was used for the statistical correlation among the anions and cations of the groundwater samples, and the spatial analyses were carried out using ArcGIS (version 10.1).
General hydrochemistry
The results of the various hydrochemical parameters of the groundwater samples are presented in Table 1. The depths of the sampled wells varied from 21 to 54 m. The pH of the water ranges from 6.5 to 7.9 with a mean value of 7.2, i.e., near-neutral to slightly alkaline. The pH indicates the capacity of the water to react with acidic or alkaline material, which is controlled by the CO2, CO3 2− and HCO3 − concentrations (Hem 1991). The mean temperature of the groundwater samples was 26.7 °C, ranging from 26 to 27.3 °C. The electrical conductivity (EC) of groundwater depends on temperature, ionic concentration and the types of ions present. The maximum permissible limit of EC in groundwater is 1500 µS/cm (WHO 2011), whereas the EC of the study area ranges from 498 to 5910 μS/cm with a mean value of 3018.65 μS/cm. The total dissolved solids (TDS) values range from 237 to 3112 mg/L with a mean of 1556.05 mg/L. Fetter (2001) classified groundwater with TDS in the range of 1000–10,000 mg/L as brackish water, and most of the groundwater samples in the study area fall in this group.
Physicochemical and calculated parameters of the groundwater samples from study area
Columns: depth (m), temperature (°C), EC (μS/cm), Na+, K+, Mg2+, Ca2+, Cl−, SO4 2−, NO3 − and HCO3 − (all in mg/L), calculated parameters, and water type (Na–Cl–HCO3, Ca–Mg–Cl, Na–Cl or Na–Mg–Cl)
nd not detected
Concentrations of Na+ span an extremely wide range, from 13.18 to 1212.61 mg/L, with a mean of 647.20 mg/L, constituting 77% of the total cations (Fig. 4a). Ca2+ is the second most dominant cation, constituting 18% with a mean value of 101.5 mg/L. The average Mg2+ concentration is 78.28 mg/L, constituting 9% of the total cations, while K+ has the lowest concentration in all the observed groundwaters, forming 2% with a mean of 17.05 mg/L. The order of the major cation concentrations is Na+ > Ca2+ > Mg2+ > K+.
a Major cation and b anion proportion in groundwater samples
The groundwater is Cl− dominated, with concentrations ranging from 32.07 to 6270.8 mg/L. The mean chloride concentration is 1776.74 mg/L, constituting 77% of the total anionic composition of the collected groundwater samples (Fig. 4b), whereas the WHO limit for chloride in groundwater is <250 mg/L (WHO 2004). Strikingly, 19 out of 20 samples exceed this limit. HCO3 − concentrations range from 261 to 808 mg/L with a mean value of 510.05 mg/L, making up 22% of the total anions, while the SO4 2− (mean 4.97 mg/L) and NO3 − (mean 2.61 mg/L) concentrations are very low compared with the other parameters (Fig. 4b). The anionic order is Cl− > HCO3 − > SO4 2− > NO3 −.
The water quality results from the study area are compared with previous studies in other coastal areas of Bangladesh and with standard permissible limits in Table 2. All the parameters are much higher than in other studies of coastal areas of Bangladesh, including deep aquifer water, indicating that the shallow aquifer of the coastal area is more vulnerable. Most of the water quality parameters also exceed the standard permissible limits for drinking and irrigation use (Table 2).
Comparison of the groundwater quality with other studies in coastal areas of Bangladesh and with standard permissible limits
Datasets compared: this study; other coastal areas — Khulna (Shammi et al. 2016a), Gupalganj (Shammi et al. 2016b), Satkhira (Rahman et al. 2011), Bagerhat (IWM 2009), Patuakhali (Islam et al. 2016b), Barguna (Islam et al. 2016a), Noakhali (Ahmed et al. 2011) and Lakshimpur (Bhuiyan et al. 2016); standard permissible limits — FAO (1985), UCCC (1974), BWPCB (1976) and WHO (2011); aquifer depth is also listed
The Pearson correlation matrix of the hydrochemical parameters (Table 3) shows that EC and TDS are negatively correlated with pH but strongly correlated with Na+ and Cl−, and that EC and TDS are closely related to each other. Na+ shows a positive correlation with all variables and is strongly correlated with Cl−, Ca2+ and Mg2+. K+ and Mg2+ are correlated with each other but show a negative correlation with NO3 −; both have strong correlations with Ca2+ and Cl−. Except for pH, Ca2+ shows a positive correlation with every variable and is strongly related to Cl−. Cl− has strong correlations with EC, TDS, Na+ and Mg2+, and SO4 2− has a strong correlation with pH, which indicates that they originate from the same source.
Pearson correlation matrix of the hydrochemical parameters of groundwater samples
Variables include Na+, Mg2+, Cl−, SO4 2−, NO3 − and HCO3 −
aCorrelation is significant at the 0.05 level
bCorrelation is significant at the 0.01 level
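The Pearson matrix of Table 3 was produced with SPSS, but the same computation can be sketched with NumPy; the variable values below are synthetic placeholders, not the measured data:

```python
import numpy as np

# Illustrative sketch with synthetic values (not the measured dataset):
# a Pearson correlation matrix like Table 3 computed with numpy.corrcoef.
def pearson_matrix(variables):
    """variables: dict mapping name -> equal-length sequence of values."""
    names = list(variables)
    data = np.array([variables[n] for n in names], dtype=float)
    return names, np.corrcoef(data)  # rows are variables

names, r = pearson_matrix({
    "EC":  [498, 1200, 3000, 4500, 5910],
    "TDS": [237, 610, 1510, 2280, 3112],
    "Cl":  [32, 450, 1800, 3900, 6271],
})
# r[i][j] is the Pearson coefficient between names[i] and names[j];
# EC and TDS track each other almost perfectly in this synthetic example.
```

The squared coefficients from the same call also give the r² values used for the bivariate plots discussed later (Fig. 6).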
Hydrogeochemical classification of groundwater
Hydrochemical facies and water type
The values obtained from the groundwater samples were plotted on a Piper (1953) trilinear diagram (Fig. 5) to recognize the hydrochemical facies, which provide clues to how groundwater quality changes within and between aquifers (Sivasubramanian et al. 2013). This diagram is also used to classify water types (Wen et al. 2005), i.e., distinct zones in which cation and anion concentrations fall within defined composition categories. The Piper trilinear diagram (Fig. 5) reveals four water types: Na–Cl (35%), Na–Cl–HCO3 (55%), Na–Mg–Cl (5%) and Ca–Mg–Cl (5%), with Na–Cl–HCO3 the predominant facies (Table 4). This indicates the dominance of Na+ among the cations, the interplay of HCO3 − and Cl− among the anions, and the influence of marine water in the study area.
Piper (1953) diagram for the groundwater samples of the study area
Hydrogeochemical classification of groundwater in the study area
Classification schemes summarized in the table: Na % (Wilcox 1955; Eaton 1950), RSC (Richards 1954), MAR (Kacmaz and Nakoman 2010: safe/unsafe), SAR (Vasanthavigar et al. 2010), KR (Kelley 1963), TDS in mg/L (WHO 2004), EC in μS/cm (Wilcox 1955; WHO 2004: low to extensively high salinity, with upper classes 6001–10,000 and >10,000), chloride (Stuyfzand 1989: extremely fresh, very fresh, fresh-brackish, brackish, brackish-salt and hyperhaline, with upper class boundaries 282.1–564.3 and >564.3), total hardness in mg/L (Sawyer and McCarthy 1967: moderately hard, hard, very hard) and hydrochemical facies (Ca–Mg–Cl, Na–Mg–Cl, Na–Cl–HCO3, Na–Cl)
TDS, EC and Cl− content in relation to groundwater salinity
Salinity is the dissolved salt content of a body of water. It is used to describe the levels of different salts such as sodium chloride, magnesium and calcium sulfates, and bicarbonates. The chloride content is directly proportional to salinity and originates from the dissociation of salts, such as sodium chloride or calcium chloride, in water:
$$ \text{NaCl} \rightarrow \text{Na}^{+}(\text{aq}) + \text{Cl}^{-}(\text{aq}), $$
$$ \text{CaCl}_{2} \rightarrow \text{Ca}^{2+}(\text{aq}) + 2\text{Cl}^{-}(\text{aq}). $$
These salts and the resulting chloride ions originate from natural minerals and from the mixing of seawater with fresh water (Stuyfzand 1999). Although small quantities of other ions (K+, Mg2+, SO4 2−, NO3 −) are present, Na+ and Cl− represent about 91% of all seawater ions. Sodium and total dissolved solids (TDS) are other important parameters for observing the influence of the major components on groundwater salinity. The groundwater concentrations of Na+ and Cl− were plotted against TDS; both are positively correlated with TDS (r 2 = 0.75 and 0.76, respectively) (Fig. 6a, b). According to the WHO (2004) classification of groundwater based on TDS, 60% of the samples fall in the unacceptable category, 35% in the poor category and only 5% in the excellent category; the spatial distribution of TDS is shown in Fig. 7. All the other components, i.e., Na+, Ca2+, Mg2+ and K+, are also well correlated with Cl−, with r 2 values of 0.82, 0.79, 0.78 and 0.58, respectively (Fig. 6c–f), which indicates that they originate from the same sources.
Bivariate plots of a Na+ versus TDS, b Cl− versus TDS, c Na+ versus Cl−, d Ca2+ versus Cl−, e Mg2+ versus Cl−, f K+ versus Cl−
Spatial distribution of TDS (mg/L) of groundwater in the study area
According to the chloride classification of Stuyfzand (1989), 60% of the groundwater samples fall in the brackish-salt category, 35% in the brackish category and the remaining 5% in the fresh category (Table 4). The spatial distribution of chloride shows that the eastern and southern parts of the study area are more saline-prone than the northwestern part (Fig. 8). EC is another important parameter related to groundwater salinity: for diagnosis and classification, the total concentration of soluble salts (salinity hazard) in water can be expressed in terms of specific conductance (Ravikumar et al. 2011). Based on EC, WHO (2004) classifies the salinity hazard into four groups: low, medium, high and very high. On this basis, 5% of the samples pose a medium, 15% a high, 70% a very high and the remaining 10% an extremely high salinity hazard. Wilcox (1955) also classified EC into excellent, good, permissible, doubtful and poor categories; 5% of the samples fall in the excellent, 15% in the good, 70% in the doubtful and the remaining 10% in the poor category. The spatial distribution shows high EC values, ranging from 2300 to 5910 μS/cm, in the eastern and southern parts of the study area along the bank of the Rupsha river, e.g., Boitaghata and Rupsha upazilas (Fig. 9), possibly due to infiltration and saline water intrusion from the river.
Spatial distribution of Cl− conc. (mg/L) of groundwater in the study area
Spatial distribution of EC (μS/cm) of groundwater in the study area
Total hardness (TH)
Hardness is an important criterion for determining the suitability of groundwater for domestic, agricultural and industrial uses (Vandenbohede et al. 2010). The hardness of water relates to its reaction with soap and to the scale incrustation accumulating in containers or conduits where water is heated or transported, since soap is precipitated by Ca2+ and Mg2+ ions. It is defined as the sum of the concentrations of these ions expressed as mg/L of CaCO3. The groundwater of the study area was classified on the basis of hardness (Sawyer and McCarthy 1967), as presented in Table 4: 9 samples (45%) fall in the hard category and 11 samples (55%) in the very hard category.
Soluble sodium percentage (SSP) or Na%
Sodium is an important cation which, in excess, deteriorates the soil structure and reduces crop yield (Srinivasamoorthy et al. 2005). The proportion of sodium and potassium in the sum of cations is an important factor in evaluating water for agricultural use. The sodium concentration of irrigation water is of prime importance and plays a significant role in determining the permeability of soil: Na+ adsorbed on clay surfaces as a substitute for Ca2+ and Mg2+ may damage the soil structure, making it compact and impervious (Singh et al. 2008). The percentage of Na+ is a parameter for assessing suitability for agricultural purposes (Wilcox 1948), as sodium combining with CO3 2− can lead to the formation of alkaline soils and sodium combining with Cl− forms saline soils; neither soil type supports plant growth. According to Wilcox (1955), a maximum of 15% Na+ in groundwater is allowed for agricultural purposes; 45% of the samples fall in the doubtful region and 40% in the unsuitable category (Table 4). The Eaton (1950) classification gives the same result. The plot of Na % against EC on the Wilcox (1955) diagram shows the suitability of the groundwater samples (Fig. 10).
Wilcox (1955) diagram for the study area
Sodium adsorption ratio (SAR)
The sodium adsorption ratio (SAR) estimates the extent to which the sodium ions present in the water would be adsorbed by the soil. The higher the SAR value, the greater the risk of a sodium hazard to plant growth. Irrigation with high-SAR water may require soil amendments to prevent long-term damage, because the sodium in the water can displace the calcium and magnesium in the soil. This decreases the ability of the soil to form stable aggregates, causes loss of soil structure and reduces the infiltration and permeability of the soil to water, leading to problems with crop production (Chandrasekar et al. 2013). SAR values in the study area range from 0.35 to 15.78 (Table 1). Values greater than 2.0 indicate that groundwater is unsuitable for irrigation purposes (Vasanthavigar et al. 2010; Ayuba et al. 2013; Islam et al. 2016b); all samples except one fall in the unsuitable category (Table 4). Salinity and SAR together determine the utility of groundwater. Salinity in groundwater originates from the weathering of rocks and leaching from topsoil, together with anthropogenic sources and a minor influence of climate (Prasanna et al. 2011). The levels of Na+ and HCO3 − in irrigation groundwater affect the permeability of soil and the drainage of the area (Tijani 1994). The US Salinity Laboratory (USSL) diagram proposed by Richards (1954) shows that most of the samples fall in the medium to very high salinity hazard classes (Fig. 11). The distribution of SAR values (Fig. 12) shows that samples with low SAR are mainly located in the north-eastern part of the area, while high SAR dominates the southern and western parts.
Sample water classification for irrigation according to US Salinity Laboratory's (USSL) diagram (Richards 1954)
Spatial distribution of SAR values in the study area
Residual sodium carbonate (RSC)
The relation of the alkaline earths to the weak acids is expressed in terms of RSC for assessing the quality of water for irrigation (Richards 1954). When the weak acids exceed the alkaline earths, precipitation of the alkaline earths occurs in soils, which damages soil permeability (Rao et al. 2012). Water with an excess of carbonate and bicarbonate over the alkaline earths, mainly Ca2+ and Mg2+, beyond the allowable limits affects agriculture unfavorably (Richards 1954). RSC was classified following Richards (1954) into good, medium and bad categories: 65% of the groundwater samples fall in the good category, 20% in the medium category and the remaining 15% in the bad category (Table 4). The spatial analysis shows no significant variation in the RSC distribution; the lowest RSC value was found in the northeastern part of the study area (Fig. 13).
Spatial distribution of RSC in the study area
Magnesium adsorption ratio (MAR)
The magnesium adsorption ratio (MAR) describes the relationship between the magnesium and calcium concentrations in groundwater (Ragunath 1987; Ayuba et al. 2013). Excess Mg2+ affects the quality of soil, resulting in poor agricultural returns (Islam et al. 2016a, b), and soil containing high levels of exchangeable Mg2+ causes infiltration problems (Ayers and Westcot 1985). A MAR greater than 50 is considered harmful and unsuitable for irrigation purposes (Kacmaz and Nakoman 2010; Islam et al. 2016b). About 70% of the studied waters fall in this category; the remaining 30% are suitable with respect to the magnesium hazard (Table 4).
Kelley's ratio (KR)
The level of Na+ measured against Ca2+ and Mg2+ is known as Kelley's ratio, on the basis of which irrigation water can be rated (Kelley 1963). The concentration of Na+ in irrigation water is considered excessive, making the water unsuitable, if Kelley's ratio is >1; water with a ratio <1 is suitable for irrigation. Almost 95% of the water in the study area is unsuitable by this criterion (Table 4). From the above investigations, it is evident that the groundwater of the study area is not suitable for drinking or irrigation purposes.
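Taken together, the suitability screens in this section reduce to a few threshold checks quoted in the text. The following is a hedged sketch (our function, not the authors' procedure); the RSC cutoff of 2.5 meq/L is the standard Richards (1954) boundary for the "bad" class, assumed here since the text gives only the category names:

```python
# Sketch (ours, not the authors' code): flag irrigation-suitability
# problems for one sample using thresholds quoted in this section.
def irrigation_flags(na_pct, sar, rsc, mar, kr):
    """Return a list of suitability concerns for one sample's indices."""
    flags = []
    if na_pct > 15:
        flags.append("Na% above 15")    # Wilcox (1955) limit
    if sar > 2.0:
        flags.append("SAR above 2.0")   # Vasanthavigar et al. (2010)
    if rsc > 2.5:
        flags.append("RSC above 2.5")   # Richards (1954), assumed cutoff
    if mar > 50:
        flags.append("MAR above 50")    # Kacmaz and Nakoman (2010)
    if kr > 1:
        flags.append("KR above 1")      # Kelley (1963)
    return flags or ["no flags raised"]
```

For a saline sample like those dominating this study (high Na %, SAR and KR), every check fires; a dilute recharge-type sample raises no flags.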
Hydrogeochemical process evaluation
The hydrogeochemical diagram proposed by Chadha (1999) was applied in this study to identify the hydrochemical processes. The same procedure was successfully applied in coastal aquifers by Vandenbohede et al. (2010) and Islam et al. (2016b) to determine the evolution of different hydrogeochemical processes within a freshwater lens. For this, the data were converted to percentage reaction values (milliequivalent percentages) and expressed as the difference between the alkaline earths (Ca2+ + Mg2+) and the alkali metals (Na+ + K+) for the cations, and the difference between the weak acidic anions (HCO3 − + CO3 2−) and the strong acidic anions (Cl− + SO4 2−). The hydrochemical processes suggested by Chadha (1999) are indicated in each of the four quadrants of the graph. These are broadly summarized as:
Field 1 Ca-HCO3 type recharging water.
Field 2 Ca–Mg–Cl type reverse ion exchange water.
Field 3 Na–Cl type end member waters (seawater).
Field 4 Na-HCO3 type base ion exchange water.
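The milliequivalent-percentage coordinates and the quadrant assignment described above can be sketched as follows; this is our illustrative implementation, not code from the study:

```python
# Sketch of the Chadha (1999) coordinates described above; the field
# labels follow the four quadrants listed in the text. All inputs in meq/L.
def chadha_field(na, k, ca, mg, cl, so4, hco3, co3=0.0):
    """Return (x, y, field) for one sample, x and y in meq %."""
    cations = na + k + ca + mg
    anions = cl + so4 + hco3 + co3
    x = ((ca + mg) - (na + k)) / cations * 100    # alkaline earths - alkali metals
    y = ((hco3 + co3) - (cl + so4)) / anions * 100  # weak - strong acidic anions
    if x > 0 and y > 0:
        field = "Field 1: Ca-HCO3 recharging water"
    elif x > 0:
        field = "Field 2: Ca-Mg-Cl reverse ion exchange water"
    elif y < 0:
        field = "Field 3: Na-Cl end member water (seawater)"
    else:
        field = "Field 4: Na-HCO3 base ion exchange water"
    return x, y, field
```

A seawater-like composition (Na+ and Cl− dominant) lands in Field 3, while a dilute Ca–HCO3 composition lands in Field 1, mirroring the pattern reported below.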
The resulting diagram is shown in Fig. 14. In Field 1 (recharging water), water entering the ground from the surface carries dissolved carbonate in the form of HCO3 − and the geochemically mobile Ca2+; only one sample, representing low-salinity water, falls in this field. Field 2 (reverse ion exchange) represents groundwater in which Ca2+ + Mg2+ is in excess of Na+ + K+, either due to the preferential release of Ca2+ and Mg2+ from mineral weathering of exposed bedrock, or possibly due to reverse base cation exchange reactions releasing Ca2+ + Mg2+ into solution with subsequent adsorption of Na+ onto mineral surfaces; no sample falls in this field. Most of the samples fall in Field 3 (Na–Cl waters), which is typical of seawater mixing, while Field 4 (Na–HCO3 waters) represents base-exchange reactions, and surprisingly no sample falls in this field. It is therefore clear that the coastal groundwater contains high Na+ and Cl− with typical seawater mixing (Field 3), and the absence of samples in Fields 2 and 4 indicates the absence of ion exchange.
Chadha's plot of process evaluation
However, when seawater intrudes into a fresh coastal aquifer, CaCl2- or MgCl2-type water may be found (Appelo and Postma 1999). In this case, the Na+ of seawater is replaced by either Ca2+ or Mg2+ from the clay minerals, whereby Na+ is adsorbed onto the clay mineral surfaces according to Eqs. (9) and (10) (Islam et al. 2016b).
$$ (\text{Ca}^{2+} + \text{Mg}^{2+}) + 4\text{HCO}_{3}^{-} + 2\text{NaX} = \text{CaX}_{2} + \text{Mg}(\text{HCO}_{3})_{2} + 2\text{NaHCO}_{3}, $$
$$ 2\text{NaCl} + \text{MgX}_{2} = 2\text{NaX} + \text{MgCl}_{2}, $$
where X signifies the exchanger. The absence of these water types suggests that active seawater intrusion is not the cause of salinization in the study area. Seawater diluted with freshwater creates distinctive geochemical characteristics (Metcafe and Eddy 2000). Modification of the geochemical characteristics of these saline waters is caused by water–rock interaction, in which three possible mechanisms may be involved: (1) base exchange reactions with clay minerals (Vengosh et al. 1994); (2) adsorption onto clay minerals; and (3) carbonate dissolution–precipitation (Vengosh et al. 1994; Ghabayen et al. 2006).
The sea level in the Bengal Basin has changed over time (CEGIS 2006). During the Holocene, the last sea-level highstand occurred about 6 ka ago (DPHE 2006), and the earliest Ganges delta development phase took place about 5–2.5 ka ago (Allison et al. 2003). Majumder (2008) observed that the ages of the deep groundwater falling along the seawater line range from ~6 to 25 ka. It therefore seems that brackish water originating from the sea is trapped within the aquifer, and the salinity observed in the present study originates from this saline aquifer pocket together with some recent intrusion. Similar observations were previously made by Sikdar et al. (2001), Rahman et al. (2011) and Islam et al. (2016a, b). In the absence of isotopic investigations it is difficult to delineate the exact origin of the groundwater salinity in the study area, but it is clear that shallow aquifer salinity is enhanced by tidal surges and cyclones, waterlogging, reduced upstream flow, the backwater effect, shrimp culture and excessive withdrawal (Islam 2014; Islam and Bhuiyan 2016).
The study reveals that the shallow groundwater aquifers of the study area are strongly affected by salinity. The EC and TDS classifications place the majority of the samples in the "doubtful" to "unsuitable" groups, with minor representation in the permissible category. In the SSP or Na % classification of groundwater for irrigation purposes, the majority of the samples fall in the unsafe zone, with a minor representation in the safe zone. The plot of Na % against EC (Wilcox diagram) likewise shows that most samples are doubtful to unsuitable for irrigation. According to the chloride classification, the majority of the samples fall in the brackish and brackish-salt categories, indicating the unsuitability of this water for agricultural activity. The groundwater of this region shows chiefly seawater character, with only a few samples representing recharge. The spatial distributions of chloride, TDS, EC, SAR and RSC show that the northwestern part of the study area is in better condition than the eastern and southern parts, which lie nearer to the river channel and the coast.
The authors are thankful to the anonymous reviewers, whose valuable suggestions helped shape the paper.
Ahmed AU (2006) Bangladesh climate change impacts and vulnerability. Comprehensive Disaster Management Programme (CDMP), Government of the People's Republic of Bangladesh
Ahmed MJ, Haque MR, Rahman MM (2011) Physicochemical assessment of surface and groundwater resources of Noakhali region of Bangladesh. Int J Chem Sci Technol 1(1):1–10
Alam M (1990) Bangladesh in world regional geology. Columbia University Press, New York
Allison MA, Goodbred SL Jr, Kuehl SA, Khan SR (2003) Stratigraphic evolution of the late Holocene Ganges–Brahmaputra lower delta plain. Sediment Geol 155:317–342
Alrajhi A, Beecham S, Bolan NS, Hassanli A (2015) Evaluation of soil chemical properties irrigated with recycled wastewater under partial root-zone drying irrigation for sustainable tomato production. Agric Water Manag 161:127–135. doi:10.1016/j.agwat.2015.07.013
Appelo CAJ, Postma D (1999) Chemical analysis of groundwater, geochemistry, groundwater and pollution. Balkema, Rotterdam
Ayers RS, Westcot DW (1985) Water quality for agriculture. FAO irrigation and drainage paper 29, Rev. 1. UN Food and Agriculture Organization, Rome
Ayuba R, Omonona OV, Onwuka OS (2013) Assessment of groundwater quality of Lokoja Basement Area, North-Central Nigeria. J Geol Soc India 82:413–420
Bahar MM, Reza MS (2010) Hydrochemical characteristics and quality assessment of shallow groundwater in a coastal area of Southwest Bangladesh. Environ Earth Sci 61(5):1065–1073. doi:10.1007/s12665-009-0427-4
Balaji E, Nagaraju A, Sreedhar Y, Thejaswi A, Sharifi Z (2016) Hydrochemical characterization of groundwater in and around Tirupati area, Chittoor District, Andhra Pradesh, South India. Appl Water Sci. doi:10.1007/s13201-016-0448-6
Bangladesh Meteorological Department (BMD) (2014) Government of the People's Republic of Bangladesh, Dhaka
Bhuiyan MAH, Ganyaglo S, Suzuki S (2015) Reconnaissance on the suitability of the available water resources for irrigation in Thakurgaon District of northwestern Bangladesh. Appl Water Sci 5(3):229–239. doi:10.1007/s13201-014-0184-8
Bhuiyan MAH, Bodrud-Doza M, Islam ARMT, Rakib MA, Rahman MS, Ramanathan AL (2016) Assessment of groundwater quality of Lakshimpur district of Bangladesh using water quality indices, geostatistical methods, and multivariate analysis. Environ Earth Sci. doi:10.1007/s12665-016-5823-y
BWPCB (1976) Bangladesh drinking water standard. Bangladesh Water Pollution Control Board, Government of the People's Republic of Bangladesh, Dhaka
CEGIS (2006) Final report of impact of sea level rise on land use suitability and adaptation options in southwest region of Bangladesh. Center for Environmental and Geographic Information Services (CEGIS), Dhaka
Central Ground Water (CGW) Board (2009) Report: south eastern coastal region, Chennai, India
Chadha DK (1999) A proposed new diagram for geochemical classification of natural waters and interpretation of chemical data. Hydrol J 7(5):431–439
Chandrasekar N, Selvakumar S, Srinivas Y, John Wilson JS, Simon Peter T, Magesh NS (2013) Hydrogeochemical assessment of groundwater quality along the coastal aquifers of southern Tamil Nadu, India. J Environ Earth Sci 71(11):4739–4750. doi:10.1007/s12665-013-2864-3
Corwin DL, Rhoades JD, Šimůnek J (2007) Leaching requirement for soil salinity control: steady-state versus transient models. Agric Water Manag 90(3):165–180. doi:10.1016/j.agwat.2007.02.007
DPHE (2006) Final report on development of deep aquifer database and preliminary deep aquifer map (first phase). Department of Public Health Engineering, Local Government Division, Ministry of LGRD and Co-operatives, Government of the People's Republic of Bangladesh
DPHE (Department of Public Health Engineering)/JICA (Japan International Cooperation Agency) (2010) Situation analysis of arsenic mitigation 2009. Department of Public Health Engineering, Dhaka, p 29
Eaton EM (1950) Significance of carbonate in irrigation water. Soil Sci 69:123–133
FAO (1985) Water quality for agriculture. Food and Agriculture Organization. http://www.fao.org/docrep/003/t0234e/T0234E01.htm#ch1.4. Accessed 21 Dec 2013
Fetter CW (2001) Applied hydrogeology, 4th edn. Prentice Hall Inc., New Jersey, p 598
Ghabayen MS, McKee M, Kemblowski M (2006) Ionic and isotopic ratios for identification of salinity sources and missing data in the Gaza aquifer. J Hydrol 318:360–373
Hem JD (1991) Study and interpretation of the chemical characteristics of natural waters, 3rd edn. Book 2254. Scientific Publishers, Jodhpur
Hoque M, Hasan MK, Ravenscroft P (2003) Investigation of groundwater salinity and gas problems in southeast Bangladesh. In: Rahman AA, Ravenscroft P (eds) Groundwater resources and development in Bangladesh. Bangladesh Centre for Advanced Studies (BCAS), University Press Ltd, Dhaka
Iftakher A, Saiful IM, Jahangir AM (2015) Probable origin of salinity in the shallow aquifers of Khulna district, southwestern Bangladesh. Austin J Earth Sci 2(2):1–8
Islam SMD (2014) Geoelectrical and hydrogeochemical studies for delineating seawater intrusion in coastal aquifers of Kalapara upazila, Patuakhali, Bangladesh. Unpublished Master's thesis, Department of Environmental Sciences, Jahangirnagar University, Dhaka
Islam SMD, Bhuiyan MAH (2016) Impact scenarios of shrimp farming in coastal region of Bangladesh: an approach of an ecological model for sustainable management. Aquacult Int 24(4):1163–1190. doi:10.1007/s10499-016-9978-z
Islam SMD, Uddin MJ (2015) Impacts, vulnerability and coping with cyclone hazard in coastal region of Bangladesh: a case study on Kalapara upazila of Patuakhali district. Jahangirnagar Univ Environ Bull 4:11–30
Islam SMD, Bhuiyan MAH, Ramanathan AL (2015) Climate change impacts and vulnerability assessment in coastal region of Bangladesh: a case study on Shyamnagar upazila of Satkhira district. J Climate Change 1(1–2):37–45. doi:10.3233/JCC-150003
Islam MA, Zahid A, Rahman MM, Rahman MS, Islam MJ, Akter Y, Shammi M, Bodrud-Doza M, Roy B (2016a) Investigation of groundwater quality and its suitability for drinking and agricultural use in the south central part of the coastal region in Bangladesh. Expo Health. doi:10.1007/s12403-016-0220-z
Islam SMD, Majumder RK, Uddin MJ, Khalil MI, Alam MF (2016b) Hydrochemical characteristics and quality assessment of groundwater in patuakhali district, southern coastal region of Bangladesh. Expo Health. doi: 10.1007/s12403-016-0221-y Google Scholar
IWM (2009) Final report: hydro-geological study and mathematical modelling to identify sites for installation of observation well nests, selection of model boundary, supervision of pumping test, slug test, assessment of different hydro-geological parameters collection and conduct chemical analysis of surface water and groundwater. Dhaka, BangladeshGoogle Scholar
Jalali M (2009) Geochemistry characterization of groundwater in an agricultural area of Razan, Hamadan, Iran. Environ Geol 56:1479–1488CrossRefGoogle Scholar
Jorgensen NO, Andersen MS, Engesgaard P (2008) Investigation of a dynamic seawater intrusion event using strontium isotopes (87Sr/86Sr). J Hydrol 348:257–269CrossRefGoogle Scholar
Kacmaz H, Nakoman ME (2010) Hydrochemical characteristics of shallow groundwater aquifer containing Uranyl phosphate minerals in the Koprubasi (Manisa) area, Turkey. Environ Earth Sci 59:449–457CrossRefGoogle Scholar
Karro E, Marandi A, Vaikm R (2004) The origin of increased salinity in the Cambrian–Vendian aquifer system on the Kopl Peninsula, northern Estonia. Hydrogeol J 12:424–435CrossRefGoogle Scholar
Kelley WP (1963) Use of saline irrigation water. Soil Sci 95:355–391CrossRefGoogle Scholar
Kim RH, Kim JH, Ryu JS, Chang HW (2006) Salinization properties of a shallow groundwater in a coastal reclaimed area, Yeonggwang, Korea. Envion Geol 49:1180–1194CrossRefGoogle Scholar
Kumar SK, Bharani R, Magesh NS, Godson PS, Chandrasekar N (2014) Hydrogeochemistry and groundwater quality appraisal of part of south Chennai coastal aquifers, Tamil Nadu, India using WQI and fuzzy logic method. Appl Water Sci 4:341–350. doi: 10.1007/s13201-013-0148-4 CrossRefGoogle Scholar
Kumar SK, Logeshkumaran A, Magesh NS, Godson PS, Chandrasekar N (2015) Hydro-geochemistry and application of water quality index (WQI) for groundwater quality assessment, Anna Nagar, part of Chennai City, Tamil Nadu, India. Appl Water Sci 5:335–343. doi: 10.1007/s13201-014-0196-4 CrossRefGoogle Scholar
Majumder RK (2008) Groundwater flow system studies in Bengal Delta, Bangladesh revealed by environmental isotopes and hydrochemistry. In: Proceedings of 36th IAH Congress, October 2008, Toyama, JapanGoogle Scholar
Manjusree TM, Joseph S, Thomas J (2009) Hydrogeochemistry and groundwater quality in the coastal sandy clay aquifers of alappuzha district, kerala. J Geol Soc India 74:459–468CrossRefGoogle Scholar
Metcafe, Eddy (2000) Integrated aquifer management plan: final report. Gaza Coastal Aquifer Management Program, USAID Contract No. 294-C-00-99-00038-00Google Scholar
Mondal MK, Tuong TP and Sattar MA (2008) Quality and groundwater level dynamics at two coastal sites of Bangladesh: implications for irrigation development, 2nd International Program on Water and Food, Addis Ababa, Ethiopia, November 10–14. https://cgspace.cgiar.org/bitstream/handle/10568/3707/IFWF2_proceedings_Volume%20II.pdf?sequence=1
Mondal NC, Singh VP, Singh VS (2011) Hydrochemical characteristic of coastal aquifer from Tuticorin, Tamilnadu, India. Environ Monit Assess 175:531–550CrossRefGoogle Scholar
Montety VD, Radakovitch O, Vallet-Coulomb C, Blavoux B, Hermitte D, Valles V (2008) Origin of groundwater salinity and hydrogeochemical processes in a confined coastal aquifer: case of the Rhône delta (Southern France). Appl Geochem 23:2337–2349CrossRefGoogle Scholar
MoWR (2005) Coastal Zone Policy (CZPo), Ministry of Water Resources (MoWR), Government of the People's Republic of Bangladesh, DhakaGoogle Scholar
Nagaraju A, Sunil Kumar K, Thejaswi A (2014) Assessment of groundwater quality for irrigation: a case study from Bandalamottu lead mining area, Guntur District, Andhra Pradesh, South India. Appl Water Sci 4:385–396. doi: 10.1007/s13201-014-0154-1 CrossRefGoogle Scholar
Nickson RT, McArthur JM, Shresthn B, Kyaw- Nyint TO, Lowry D (2005) Arsenic and other drinking water quality issues, Muzaffargarh District, Pakistan. Appl Geochem 20(1):55–66CrossRefGoogle Scholar
Piper AM (1953) A graphic procedure I the geo-chemical interpretation of water analysis, USGS Groundwater Note no, 12Google Scholar
Prasanna MV, Chidambaram S, Gireesh TV, Jabir Ali TV (2011) A study on hydrochemical characteristics of surface and subsurface water in and around Perumal Lake, Cuddalore District, Tamil Nadu, South India. Environ Earth Sci 64(5):1419–1431CrossRefGoogle Scholar
Ragunath HM (1987) Groundwater. Wiley Eastern, New Delhi, p 563Google Scholar
Rahman ATMT, Majumder RK, Rahman SH, Halim MA (2011) Sources of deep groundwater salinity in the southwestern zone of Bangladesh. Environ Earth Sci 63:363–373. doi: 10.1007/s12665-010-0707-z CrossRefGoogle Scholar
Rao NS, Subrahmanyam A, Kumar SR, Srinivasulu N, Rao GB, Rao PS, Reddy GV (2012) Geochemistry and quality of groundwater of Gummanampadu sub-basin, Guntur District, Andhra Pradesh, India. Environ Earth Sci 67(5):1451–1471CrossRefGoogle Scholar
Rashid H (1991) Geography of Bangladesh, 2nd edn. University Press, DhakaGoogle Scholar
Ravikumar P, Somashekar RK, Angami M (2011) Hydrochemistry and evaluation of groundwater suitability for irrigation and drinking purposes in the Markandeya River basin, Belgaum District, Karnataka State, India. Environ Monit Assess 173(1–4):459–487CrossRefGoogle Scholar
Richards LA (1954) Diagnosis and improvement of saline and alkali soils, vol 60. US Department of Agricultural Handbook, Washington D.C., p 160Google Scholar
Sawyer GN, McCarthy DL (1967) Chemistry of sanitary engineers, 2nd edn. McGraw Hill, New York, p 518Google Scholar
Sefie A, Aris AZ, Shamsuddin MKN, Tawnie I, Suratman S, Idris AN, Saadudin SB, Ahmed WKW (2015) Hydrogeochemistry of groundwater from different aquifer in Lower Kelantan Basin, Kelantan, Malaysia. International Conference on Environmental Forensics 2015. Procedia Environ Sci 30:151–156Google Scholar
Selvam S, Manimaran G, Sivasubramanian P (2013) Hydrochemical characteristics and GIS-based assessment of groundwater quality in the coastal aquifers of Tuticorin corporation, Tamilnadu, India. Appl Water Sci 3:145–159CrossRefGoogle Scholar
Shammi M, Karmakar B, Rahman MM, Islam MS, Rahman R, Uddin MK (2016a) Assessment of salinity hazard of irrigation water quality in monsoon season of Batiaghata Upazila, Khulna District, Bangladesh and adaptation strategies. Pollution 2(2):183–197Google Scholar
Shammi M, Rahman R, Rahman MM, Moniruzzaman M, Bodrud-Doza M, Karmakar B, Uddin MK (2016b) Assessment of salinity hazard in existing water resources for irrigation and potentiality of conjunctive uses: a case report from Gopalganj District, Bangladesh. Sustain Water Resour Manag. doi: 10.1007/s40899-016-0064-5
Sikdar PK, Sarkar SS, Palchoudhury S (2001) Geochemical evolution of groundwater in the quaternary aquifer of Calcutta and Howrah, India. J Asian Earth Sci 19:579–594CrossRefGoogle Scholar
Singh AK, Mondal GC, Kumaar S, Sinngh TB, Sinha A (2008) Major ion chemistry, weathering processes and water quality assessment in upper catchment of Damodar River basin, India. Environ Geol 54:745–758 CrossRefGoogle Scholar
Sivasubramanian P, Balasubramanian N, Soundranayagam JP, Chandrasekar N (2013) Hydrochemical characteristics of coastal aquifers of Kadaladi, Ramanathapuram District, Tamilnadu, India. Appl Water Sci 3:603–612CrossRefGoogle Scholar
Srinivas Y, Aghil TB, Oliver DH, Nair CN, Chandrasekar N (2015) Hydrochemical characteristics and quality assessment of groundwater along the Manavalakurichi coast, Tamil Nadu. Appl Water Sci. doi: 10.1007/s13201-015-0325-8 Google Scholar
Srinivasamoorthy K, Chidambaram S, Anandhan P, Vasudevan S (2005) Application of statistical analysis of the hydrogeochemical study of groundwater in hard rock terrain, Salem District, Tamilnadu. J Geochem 20:181–190Google Scholar
Steinich B, Escolero O, Marín LE (1998) Salt-water intrusion and nitrate contamination in the Valley of Hermosillo and El Sahuaral coastal aquifers, Sonora, Mexico. Hydrogeol J 6(4):518–526CrossRefGoogle Scholar
Stuyfzand PJ (1989) Nonpoint sources of trace elements in potable groundwaters in the Netherlands. Proceedings 18th TWSA Water Workings. Testing and Research Institute KlWAGoogle Scholar
Stuyfzand PJ (1999) Patterns in groundwater chemistry resulting from groundwater flow. Hydrogeol J 7(1):15–27CrossRefGoogle Scholar
Thilagavathi R, Chidambaram S, Prasanna MV, Singaraja C (2012) A study on groundwater geochemistry and water quality in layered aquifers system of Pondicherry region, southeast India. Appl Water Sci 2:253–269. doi: 10.1007/s13201-012-0045-2 CrossRefGoogle Scholar
Tijani J (1994) Hydrocemical assessment of groundwater in Moro area, Kwara state, Nigeria. Environ Geol 24:194–202CrossRefGoogle Scholar
Todd DK (1980) Groundwater hydrology. Wiley, New York, pp 10–138Google Scholar
Twarakavi NKC, Kaluarachchi JJ (2006) Sustainability of groundwater quality considering land use changes and public health risks. J Environ Manag 81:405–419CrossRefGoogle Scholar
UCCC (1974) Guidelines for interpretations of water quality for irrigation. University of California Committee of Consultants, CaliforniaGoogle Scholar
Uddin A, Lundberg N (1998) Cenozoic history of the Himalayan–Bengal system: sand composition in the Bengal Basin, Bangladesh. Geol Soci Am Bull 110:497–511CrossRefGoogle Scholar
Vandenbohede A, Courtens C, William de Breuck L (2010) Fresh-salt water distribution in the central Belgian coastal plain: an update. Geol Belg 11(3):163–172Google Scholar
Vasanthavigar M, Srinivasamoorthy K, Vijayaravan K, Rajiv-Ganthi R, Chidambaram S, Anandhan P, Manivannan R, Vasudevan S (2010) Application of water quality index for groundwater quality assessment: Thirumanimuttar sub-basin, Tamil Nadu, India. Environ Monit Assess 171(1–4):595–609. doi: 10.1007/s10661-009-1302-1 CrossRefGoogle Scholar
Vengosh A, Heumann KG, Juraski S, Kasher R (1994) Boron isotope application for tracing sources of contamination in groundwater. Environ Sci Technol 28(11):1968–1974CrossRefGoogle Scholar
Wen X, Wu Y, Su J, Zhang Y, Liu F (2005) Hydrochemical characteristics and salinity of groundwater in the Ejina Basin, Northwestern China. Environ Geol 48:665–675. doi: 10.1007/s00254-005-0001-7 CrossRefGoogle Scholar
WHO (2004) WHO guidelines for drinking water quality, Geneva. 1&2Google Scholar
WHO (2011) WHO guidelines for drinking-water quality, 4th edn. World Health Organization, GenevaGoogle Scholar
Wilcox LV (1948) The quality of water for irrigation, use. US Department of Agriculture, Washington, DC. Tech Bull 1962:19Google Scholar
Wilcox LV (1955) Classification and use of irrigation water. US Department of Agriculture, Circular No. 969, Washington D.C. USA, p 19Google Scholar
Woobaidullah ASM, Hasan MA, Reza MH, Noor A, Amin MK (2006) Ground water potentiality-a review of the hydrogeological data available in the coastal belt of Khulna and Satkhira districts. Dhaka Univ J Sci 42:229–233Google Scholar
Eleonora Di Nezza
Eleonora Di Nezza is an Italian mathematician, a CNRS researcher at the Centre de mathématiques Laurent-Schwartz and a professor of mathematics at Ecole Polytechnique,[1] in Palaiseau, France. Her research lies at the intersection of several branches of mathematics[2] including complex and differential geometry,[3] and focuses on Kähler geometry.[4]
Eleonora Di Nezza
Alma mater: Sapienza
Scientific career
Institutions: Ecole Polytechnique, CNRS, IHES, UC Berkeley, Imperial College
Website: Personal website – Eleonora Di Nezza
Education and career
Di Nezza earned her Master's degree in Mathematics from the Sapienza University of Rome,[5] and did her doctoral research jointly at the University of Rome Tor Vergata and Paul Sabatier University in Toulouse, France, during which she unified results on fractional Sobolev spaces.[6][7] Her dissertation was on the Geometry of complex Monge-Ampère equations on compact Kähler manifolds.[8]
After receiving her PhD, she became a postdoctoral fellow at Imperial College in London, UK under a Marie Curie Fellowship, during which she joined the Mathematical Sciences Research Institute in Berkeley, United States. In 2017 she moved to France to join the Institute of Advanced Scientific Studies before becoming a lecturer at Sorbonne University and a professor of mathematics at Ecole Polytechnique. She was awarded the CNRS bronze medal in 2021.[9][10]
References
1. CNRS bronze medal for Eleonora Di Nezza – Ecole Polytechnique
2. Eleonora Di Nezza: geometria complessa… e yoga – MaddMAths
3. Eleonora Di Nezza – Math in France
4. Entretien avec Eleonora Di Nezza – IHÉS
5. CV Eleonora Di Nezza – CNRS
6. Eleonora Di Nezza: geometria complessa… e yoga – MaddMAths
7. Di Nezza, Eleonora; Palatucci, Giampiero; Valdinoci, Enrico (2012). "Hitchhikerʼs guide to the fractional Sobolev spaces". Bulletin des Sciences Mathématiques. 136 (5): 521–573. doi:10.1016/j.bulsci.2011.12.004. S2CID 55443959.
8. "Geometry of complex Monge-Ampère equations on compact Kähler manifolds," PhD dissertation, Eleonora Di Nezza – HAL
9. "École Polytechnique - Accueil site de l'Ecole Polytechnique".
10. Les travaux d’Eleonora Di Nezza, médaille de bronze 2021 – CNRS
External links
• Eleonora Di Nezza publications indexed by Google Scholar
• https://www.ihes.fr/entretien-avec-eleonora-di-nezza/
Beyond Infinity (mathematics book)
Beyond Infinity: An Expedition to the Outer Limits of Mathematics is a popular mathematics book by Eugenia Cheng centered on concepts of infinity. It was published by Basic Books and (with a slightly different title) by Profile Books in 2017,[1][2][3] and in a paperback edition in 2018.[4] It was shortlisted for the 2017 Royal Society Insight Investment Science Book Prize.[5]
Topics
The book is divided into two parts, with the first exploring notions leading to concepts of actual infinity, concrete but infinite mathematical values. After an exploration of number systems, this part discusses set theory, cardinal numbers, and ordinal numbers, transfinite arithmetic, and the existence of different infinite sizes of sets. Topics used to illustrate these concepts include Hilbert's paradox of the Grand Hotel, Cantor's diagonal argument,[4] and the unprovability of the continuum hypothesis.[2]
The second part concerns mathematics related to the idea of potential infinity, the assignment of finite values to the results of infinite processes including growth rates, limits, and infinite series.[4][2] This part also discusses Zeno's paradoxes, Dedekind cuts,[2] the dimensions of spaces, and the possibility of spaces of infinite dimensions, with a mention of higher category theory,[4] Cheng's research specialty.[1][2]
The mathematics is frequently lightened and made accessible with personal experiences and stories,[3][6][7] involving such subjects as the Loch Ness Monster, puff pastry, boating, dance contests, shoes,[3] "Legos, the iPod Shuffle, snorkeling, Battenberg cakes and Winnie-the-Pooh".[6]
Audience and reception
The Royal Society judges called Beyond Infinity "a very engaging introduction to a forbidding subject".[5] Similarly, reviewer Anne Haworth calls it "engaging and readable",[3] and Wall Street Journal reviewer Sam Kean writes that its "chatty tone keeps things fresh".[6] It is aimed at a popular audience, not assumed to have a significant background in mathematics, including "the young or those brimming with curiosity"[1] as well as college or secondary-school students,[4][2] although it may be "too elementary for mathematicians or mathematics students".[2]
As similar reading material, reviewer Andrew James Simoson suggests placing this book alongside The Book of Numbers by John Horton Conway and Richard K. Guy (1996), One Two Three... Infinity by George Gamow (1947), and Really Big Numbers by Richard Schwartz (2014).[1]
References
1. Simoson, Andrew James, Review of Beyond Infinity, MR 3617029
2. Bultheel, Adhemar (April 2017), "Review of Beyond Infinity", EMS Reviews, European Mathematical Society
3. Haworth, Anne (June 2021), "Review of Beyond Infinity", The Mathematical Gazette, 105 (563): 381–382, doi:10.1017/mag.2021.100
4. Guadarrama, Zdeňka (April 2019), "Review of Beyond Infinity", MAA Reviews, Mathematical Association of America
5. "Beyond Infinity: An Expedition to the Outer Limits of the Mathematical by Eugenia Cheng", 2017 Royal Society Insight Investment Science Book Prize, Royal Society, retrieved 2021-08-29
6. Kean, Sam (5 April 2017), "The Neverending Story (review of Beyond Infinity)", The Wall Street Journal
7. "Review of Beyond Infinity", Publishers Weekly
\begin{document}
\preprint{LA-UR-21-20902}
\title{ \textbf{Quantum Theory of Measurement} }
\author{Alan K. Harrison} \email{[email protected]}
\affiliation{Los Alamos National Laboratory\\ MS T086, P O Box 1663\\ Los Alamos, New Mexico 87545\\ }
\date{\today}
\begin{abstract}
We describe a measurement in quantum mechanics as a variational principle including a simple interaction between the system under measurement and the measurement apparatus. Augmenting the action with a nonlocal term (a double integration over the duration of the measurement interaction) results in a theory capable of describing both the measurement process (agreement between system state and the pointer state of the measurement apparatus) and the collapse of both systems into a single eigenstate (or superposition of degenerate eigenstates) of the operator corresponding to the measured variable. In the absence of the measurement interaction, a superposition of states is stable, and the theory agrees with the predictions of standard quantum theory. Because the theory is nonlocal, the resulting wave equation is an integrodifferential equation (IDE). We demonstrate these ideas using a simple Lagrangian for both systems, as proof of principle. The variational principle is time--symmetric and retrocausal, so the solution for the measurement process is determined by boundary conditions at both initial and final times; the initial condition is determined by the experimental preparation and the final condition is the natural boundary condition of variational calculus. We hypothesize that one or more hidden variables (not ruled out by Bell's Theorem, due both to the retrocausality and the nonlocality of the theory) influence the outcome of the measurement, and that distributions of the hidden variables that arise plausibly in a typical ensemble of experimental realizations give rise to outcome frequencies consistent with Born's rule. We outline steps in a theoretical validation of the hypothesis. We discuss the role of both initial and final conditions to determine a solution at intermediate times, the mechanism by which a system responds to measurement, time symmetry of the new theory, causality concerns, and issues surrounding solution of the IDE. \end{abstract}
\maketitle
\section{Introduction}
\subsection{Motivation and philosophical stance}
Quantum theory in general, and its description of measurement in particular, seems to violate several reasonable expectations about the characteristics of a correct physical theory. Ordinarily, to be accepted as correct and complete, a theory must predict future phenomena, given a complete set of the relevant initial conditions. Quantum theory fails to do this in the case of a measurement; in fact, it is understood that the mathematical description (wave equation) describing system evolution in the absence of a measurement \emph{does not apply} to a measurement. In effect, two different theories are required, for the measurement and non--measurement cases. While it may be acceptable for a theory to treat different cases in different ways, quantum theory lacks an unambiguous definition of a measurement, with the result that measurement and non--measurement configurations may be arbitrarily similar physically, and the bipartite theoretical description is implausible.
In addition, the theory of quantum measurement (as distinguished from the wave equation) as usually interpreted (e.g. by the Copenhagen interpretation) has multiple features that are unknown in any other generally accepted fundamental theory. One is intrinsic randomness, the idea that nature samples from a random distribution, and no prediction can be made about the result of sampling that goes beyond a description of the distribution function. Another is temporal asymmetry;\footnote{ We are aware of course that thermodynamics seems to have a preferred direction of time, but point out that the fundamental dynamic laws that give rise to it are time--symmetric. } after the measurement, but not before, the system is understood to be ``collapsed'' into an eigenstate or set of degenerate eigenstates of the operator corresponding to the measured quantity.
A third feature unique to the quantum measurement process is dependence on the eigenstate structure of the problem. The observed behavior that a measurement always finds the system in a single eigenstate (or a superposition of degenerate eigenstates) of the operator requires a nonlocal theory. As we will discuss in subsection \ref{Necessity_nonlocality}, the information (e.g., potential $V(x_0)$) available at a single point $x_0$ is insufficient to determine whether a particular solution at that point (values of the wavefunction $\psi(x_0)$ and its derivative(s) at $x_0$) is consistent with a \emph{single eigenfunction} $\psi$ (the function defined for all allowed values of $x$). Nature cannot reliably make that determination at $x_0$ without using information at points $x \neq x_0$.
In addition, we call attention to quantum phenomena that seem to violate causality. One is correlations between spacelike separated measurements in ways that violate special--relativity--based expectations (``EPR correlations,'' for short) \cite{EPR} and Bell's inequality \cite{Bell_1964} but have been verified in a long sequence of increasingly more sophisticated experiments \cite{CHSH, *Freedman_Clauser, *Aspect_Grangier_Roger, *Aspect_Dalibard_Roger}. Another is delayed--choice experiments \cite{Wheeler_1979}, in which the path of a particle (through one or two slits, for example) has been observed to be apparently determined by a choice made after the particle is committed to a particular path.
In this paper, we propose that a quantum theory can be constructed so as to either avoid or explain most of these objectionable or unique features. To be specific, we will exhibit a wave equation that applies even when a measurement is being done, in which case it describes evolution (``collapse,'' although not instantaneous) of the wavefunction to a state or states with a single eigenvalue. The theory is time--symmetric. Instead of relying on intrinsic randomness to explain differing results of identically prepared measurements, it proposes that some hidden variable(s), presumably uncontrolled or overlooked by the experimenter, determine(s) the outcome. The Born's--rule distribution of outcomes \cite{Born} attributed to randomness by standard quantum theory presumably appears instead as a result of a naturally--arising distribution of values of the hidden variable(s)---although the complete proof of that result must await further investigation.
On the other hand, the theory we describe relies on some unusual assumptions; we do not expect to replace conventional quantum theory with one that completely resembles other physical theories. One such assumption is retrocausality, roughly speaking, the idea that effects may precede their causes in time. (To be more precise, in a retrocausal theory the solution at $t$ is found as a function of variables at $t'>t$.) The theory is also nonlocal, as needed to produce the needed dependence on eigenstate structure; for this reason the wave equation is a integrodifferential equation (IDE). Finally, as mentioned earlier, we posit the existence of hidden variables. Bell's Theorem and its experimental tests are generally understood to rule out local hidden--variable theories, but that does not restrict our nonlocal theory. In addition, it has been pointed out \cite{Argaman_2010} that the proof of Bell's theorem relies on an assumption violated by retrocausality, so for that reason also, hidden variables are not off limits in this case.
We point out here that because retrocausality can allow information to propagate backward in time, it trivially explains EPR correlations and delayed--choice experiments \cite{Sutherland_2017}. Since those two issues are already disposed of, we will focus our attention on the remaining ones.
\subsection{Elements of the theory}
We consider that a legitimate measurement is understood to require a duration $T$ limited by the time-energy uncertainty relation
\begin{equation} \label{t-E_uncertainty} T \, \Delta E \ge \hbar/2 \end{equation}
where $\Delta E$ is the smallest energy difference between states that must be distinguished by the measurement. Typical experiments are designed with $T \, \Delta E \gg \hbar/2$. In our analysis, we will suppose that the system is prepared and the experiment begun at time $t_i$, and the measurement is determined or read at $t_f = t_i +T$.
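As a numerical sanity check of the scale set by Eq.~(\ref{t-E_uncertainty}), the short sketch below (in Python; the 1~$\mu$eV level spacing is an arbitrary illustrative choice, not a value taken from any experiment discussed here) evaluates the minimum duration $T$:

```python
# Minimum measurement duration implied by T * dE >= hbar/2, evaluated for
# an illustrative (hypothetical) level spacing of 1 micro-eV.
hbar = 1.054571817e-34      # reduced Planck constant, J*s (CODATA)
eV = 1.602176634e-19        # electron-volt in joules
dE = 1e-6 * eV              # smallest energy difference to be resolved
T_min = hbar / (2 * dE)
print(T_min)                # ~3.3e-10 s
```

A measurement designed with $T \, \Delta E \gg \hbar/2$ would then use a duration several orders of magnitude longer than this bound.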
We desire the theory to be time--symmetric, and it is appealing to do so by couching it as a variational principle. In this case the state $\psi$ of the system is found to be a critical point of the action
\begin{equation} I[\psi] \equiv \int_{t_i}^{t_f} L[\psi,\dot{\psi},t] \, dt \end{equation}
where $L$ is the system Lagrangian, typically the spatial integral of a Lagrangian density. Critical points are choices of the function $\psi$ where $I$ is stationary with respect to infinitesimal variations of $\psi$. For functionals that depend smoothly on their arguments, maxima and minima are critical points, so the search for critical points is often described as finding extrema. Schwinger \cite{Schwinger_1951} developed quantum field theory as a variational principle based on the action. For our purposes, we note that the action has the same value regardless of the direction of time, so the resulting theory is time--reversal invariant.
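To make the notion of a critical point concrete, the following sketch discretizes the action for a toy harmonic oscillator (with $m=\omega=1$; this stand-in Lagrangian is chosen only for illustration and is not the Lagrangian of any system treated in this paper) and solves the stationarity conditions $\partial S/\partial x_k = 0$ with both endpoints fixed, recovering the classical trajectory $\cos t$:

```python
import numpy as np

# Discretized action for a unit-mass, unit-frequency harmonic oscillator:
#     S = sum_k [ (x_{k+1}-x_k)^2 / (2 dt) - dt * x_k^2 / 2 ]
# Setting dS/dx_k = 0 at each interior point gives the tridiagonal system
#     (2 x_k - x_{k-1} - x_{k+1}) / dt - dt * x_k = 0,
# the discrete equation of motion.  We fix both endpoints to cos(t) values.
N = 1000
t = np.linspace(0.0, 1.0, N + 1)
dt = t[1] - t[0]

A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for k in range(N - 1):
    A[k, k] = 2.0 / dt - dt
    if k > 0:
        A[k, k - 1] = -1.0 / dt
    if k < N - 2:
        A[k, k + 1] = -1.0 / dt
b[0] += (1.0 / dt) * np.cos(t[0])     # fixed endpoint x(0) = cos(0)
b[-1] += (1.0 / dt) * np.cos(t[-1])   # fixed endpoint x(1) = cos(1)
x = np.linalg.solve(A, b)

# The stationary path matches cos(t) up to O(dt^2) discretization error.
err = np.max(np.abs(x - np.cos(t[1:-1])))
print(err)
```

The point of the exercise is that stationarity alone, with no time-stepping, determines the whole trajectory at once; this "all times simultaneously" character carries over to the measurement problem below.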
Our exposition will be nonrelativistic, but the variational principle is inherently compatible with special relativity \cite{Schwinger_1951}, and we expect that it can readily be expressed in a relativistically covariant formulation. Relativistic Lagrangians routinely appear in quantum field theory,\cite{Weinberg_1995} and the four-dimensional integration of the Lagrangian density to produce the action is of course a relativistically appropriate operation, invariant under change of reference frames.
In its simplest form, a variational principle leads to a differential equation, the Euler equation \cite{Courant}. In order to introduce nonlocality, we will employ a more complex form (a double rather than a single integral in time) that will result in an integrodifferential equation (IDE); see Appendix \ref{Two_time_variant} for the mathematical details. The IDE involves an integral from $t_i$ to $t_f$, so nonlocality in time is evident. Note that the conventionally--understood unitary evolution of the system (as described by the wave equation) would be predicted by the conventionally--derived action, without our modification. We expect that the modified action will predict, via the variational principle, the combination of the non--measurement evolution of the system and the effect of the measurement.
This mathematical form apparently requires solving for $\psi$ simultaneously for all times in $[t_i, t_f]$. This contrasts with a typical physical theory in which variables and their time derivatives at $t$ depend on other variables and derivatives at $t$, or in some cases on $t$ and its past. Wharton \cite{Wharton_2012} has designated these two approaches as Lagrangian and Newtonian respectively, and argued persuasively that the former may be appropriate for physical theories. Note that this picture is definitely retrocausal, because the wavefunction at time $t$ may depend on conditions or the wavefunction at times $>t$, and in particular at $t_f$.
The most obvious way to solve such a mathematical problem is with specified initial and final conditions $\psi(t_i)$ and $\psi(t_f)$. Consider a typical measurement problem in which the system to be measured is prepared at $t_i$ in a given quantum state, defined as an eigenstate or a specified superposition of eigenstates of a given operator. Then a measurement concluding (``read out'') at $t_f$ determines in which of the eigenstates of that operator the system is found at that time. In this case the initial condition is fixed by the specified experimental preparation, but the final condition appears to be missing. Calculus of variations \cite{Courant} supplies the missing constraint, namely, a ``natural boundary condition'' (NBC) that inevitably applies at a boundary where the value of the unknown function is not specified by the problem definition.
As a familiar example, consider a vibrating string of length $L$. It is described by a simple wave equation (a differential equation) for the displacement $y(x,t)$, but we could equally well cast the problem as a variational principle and deduce the wave equation from the system Lagrangian. Now if the string is fixed at both ends, the variational principle (and hence the wave equation) must satisfy ordinary BCs at both ends: $y(0,t)=y(L,t)=0$. But if at $x=0$ the string is not fixed but rather free to slide frictionlessly along a rod perpendicular to the string, the BC at that point is the NBC $\partial y/\partial x \rvert_{x=0} = 0$. Note that that condition is caused not by the rod, which does not constrain the string's position or slope, but by the Lagrangian, by which the condition follows from the requirement of stationarity of the action.
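The free-end string makes a convenient numerical test that the NBC emerges from stationarity alone. The sketch below minimizes a discretized one-dimensional functional (a static analogue with a uniform load $f$, chosen purely for simplicity) with the left end unconstrained; the computed slope at that end vanishes as the mesh is refined, even though no slope condition is imposed anywhere:

```python
import numpy as np

# Minimize J[y] = \int_0^1 ( y'(x)^2 / 2 - f*y(x) ) dx with y(1) = 0 and
# y(0) free.  The Euler equation is -y'' = f, and the natural boundary
# condition y'(0) = 0 follows from stationarity, not from any constraint.
# For f = 1 the exact minimizer is y(x) = (1 - x^2)/2.
f = 1.0
N = 200
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]

# Stationarity of the discretized J gives a linear system for y_0..y_{N-1}
# (y_N = 0 is the fixed end).  The free-end row comes out of dJ/dy_0 = 0.
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0 / dx              # free-end row: (y_0 - y_1)/dx = f*dx/2
A[0, 1] = -1.0 / dx
b[0] = f * dx / 2
for k in range(1, N):
    A[k, k] = 2.0 / dx
    A[k, k - 1] = -1.0 / dx
    if k + 1 < N:
        A[k, k + 1] = -1.0 / dx
    b[k] = f * dx
y = np.linalg.solve(A, b)

slope_at_free_end = (y[1] - y[0]) / dx    # -> 0 as dx -> 0: the NBC
err = np.max(np.abs(y - (1 - x[:N] ** 2) / 2))
print(slope_at_free_end, err)
```

Nothing in the setup pins the slope at $x=0$; the vanishing slope is produced by the minimization itself, just as the NBC at $t_f$ in our theory is produced by stationarity of the action.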
We conclude that the solution of the IDE must be constrained by the specified preparation at $t_i$ (an initial condition) and the NBC at $t_f$ (a final condition). However, the empirical fact that different outcomes may result from identically--prepared repetitions of the same measurement proves that those two conditions underconstrain the problem. At $t_i$, the system is prepared in a given quantum state or superposition of states (e.g., the ground state of a square--well potential), but that description falls short of a specification of every possible variable (including e.g. both position and momentum), as it must by quantum complementarity \cite{Wharton_2010, *Wharton_2010a}. The full specification of the initial (ontological) state consists of the given quantum state, plus additional ``hidden variables'' unknown to or uncontrolled by the experimenter. Similarly, the measurement of a quantum state at $t_f$ does not determine the ontological state at that moment; in fact, the measurement readout at $t_f$ is a weaker constraint than the preparation at $t_i$, because it determines only the variable (operator) measured but not its value (eigenvalue).\footnote{The asymmetry between $t_i$ and $t_f$ arises from the measurement process we have described; the general theory is still time--symmetric.} This indeterminacy provides the opportunity for hidden variables to participate in determining the result of the measurement.
We will see that our theory predicts the collapse (not necessarily instantaneous; perhaps a better term is decay) of the wavefunction to a single eigenvalue at $t_f$. We expect that the BCs together with the hidden variable(s) determine which final state results from the collapse. Ultimately, the frequencies of the different outcomes possible from a single experimental definition must reflect the distribution of hidden variable values in a large number of realizations of the experiment. The observed fact that those frequencies may be described by a simple law (Born's rule) presumably reflects an approximately universal distribution of the hidden variable values in experiments that are likely to be conducted (without knowledge or control of the hidden variable(s)). For instance, suppose the experimental result depends on a high--frequency sinusoidal function of some experimental time. If in an ensemble of experimental realizations that time is naturally distributed over a range large compared to the period of oscillation, it is an excellent approximation to say that that time has a uniform distribution over a single period. In this way, it is reasonable to expect that naturally--occurring ensembles of experiments may be found reliably to give outcome frequencies satisfying Born's rule.
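A minimal numerical illustration of this uniformity argument (ours, with arbitrary parameter choices, not a calculation within the theory): if a hidden time is spread over many periods of a fast oscillation, its value modulo one period is very nearly uniform.

```python
import numpy as np

# Illustrative sketch (all parameters arbitrary): sample a hidden time t
# from a distribution whose width (~10 periods) is large compared to the
# oscillation period T; the reduced phase t mod T is then nearly uniform
# on [0, T), so any outcome frequency that depends only on the phase
# becomes effectively universal across realizations.
rng = np.random.default_rng(0)
T = 1.0
t = rng.normal(50.0, 10.0, size=200_000)   # spread over tens of periods
phase = t % T

# Empirical phase density vs. the uniform density 1/T:
density, _ = np.histogram(phase, bins=20, range=(0.0, T), density=True)
assert np.allclose(density, 1.0 / T, atol=0.05)
```

(The Gaussian could be replaced by any distribution wide compared to $T$; the residual non-uniformity falls off rapidly with the width-to-period ratio.)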
\subsection{Model of the measurement problem}
As the principal issues motivating our theoretical development have to do with quantum measurement, we will consider an idealized model of such a measurement. Suppose that the system is prepared in a known superposition $\sum_j C_j \ket{\psi_j}$ of eigenstates of the operator $\sigma_{op}$ at time $t_i$; that is, the eigenstates are well--defined and the coefficients $C_j(t_i)$ are known. This superposition is known to be initially stable; $\dot{C}_j(t_i)=0$ for all $j$. The eigenstates themselves must be stable, so $\sigma_{op}$ commutes with the Hamiltonian. Finally, the stability of an (unperturbed, unmeasured) superposition implies that the system is linear when in isolation, that is, it satisfies a wave equation linear in the wavefunction. This consideration will be seen to constrain the form of possible Lagrangians for the system. We will develop our ideas using a particular simple form, as proof of principle.
During all or part of the interval $[t_i,t_f]$, a measurement apparatus (which we will call system 2) interacts with the measured system (system 1). A requirement for generality of the theory---validity of the properties of ``quantum measurements'' across all types of measurements---excludes all but the most general description of the measurement apparatus and its interaction with the measured system. We therefore use a minimal description, that the apparatus has a ``pointer state'' variable $\sigma^2$, and that it is coupled to the measured variable $\sigma^1$ of the system. Without loss of generality, we define $\sigma^2$ so that its value in a successful measurement equals the value of $\sigma^1$. Then the composite (system $+$ apparatus) Lagrangian must include an interaction term that depends on both measured and pointer state variables, and attains an extreme (or stationary) value when they are equal. The simplest such term is quadratic in the corresponding operators, that is, proportional to $(\sigma_{op}^1 - \sigma_{op}^2)^2$.
Note that good experimental design dictates that the combined system (1 and 2) be well isolated in spacetime. Spatial isolation is accomplished by physical isolation or other control of the boundaries of the domain, and temporal isolation by system preparation at $t_i$ and measurement readout at $t_f$. This blocks influences from outside the spacetime region, which is important so that the spacetime integrals in this nonlocal theory can legitimately be limited to the experimental domain.
What is known experimentally is that if there is no measurement, system 1 remains indefinitely in the same superposition of states in which it was prepared. If there is a measurement, it is found (measured) to be in a single eigenstate. (Actually, this may be a superposition of degenerate eigenstates---states with a single eigenvalue.) Finally, in an ensemble of identically prepared measurements, measured eigenstates occur in proportion to their weight $\lvert C_j(t_i) \rvert^2$ in the initial superposition (Born's rule). We seek a theory that predicts these empirical facts.
This description of the ``measurement problem'' is understood to be very well established by a large body of experimental evidence. On the other hand, that body of evidence is silent on the outcomes of measurements violating (\ref{t-E_uncertainty}), because such experiments would be understood to be ineligible to invalidate any of the above points. In other words, we may consider Born's rule to be a summary of observations about experiments conforming to (\ref{t-E_uncertainty}), since nonconforming experiments would not have been considered proper measurements.
\subsection{Outline}
In the next section, we will develop the theory based on a variational principle, generalized so as to result in a nonlocal equation. The subsequent section will discuss the predictions of that equation and compare them to the properties that we have argued must appear in a successful theory. In some cases the agreement will be clear, although it will remain for the future to describe the details of approach to the solution, and to prove that the solution is unique. For Born's rule, we will show how hidden variables may arise and the way in which the expected output frequencies may follow from their distribution; however, analytic proof or numerical demonstration that our theory yields frequencies consistent with Born's rule remains to be done. In the last section we will summarize what we have done, discuss new perspectives required by retrocausality and nonlocality, and list some of the next steps to be taken to continue developing these ideas. A mathematical appendix derives an extension of variational calculus used in our analysis of the nonlocal variational principle.
\section{Theoretical development} \label{Theoretical_development}
\subsection{Variational approach} \label{Variational_approach}
An isolated system (system 1 or system 2, in our case, when they are not interacting), is described by a Lagrangian $L[\psi,\dot{\psi},t]$---a functional of the wavefunction $\psi(t)$---which is typically the spatial integral of a Lagrangian density $\mathcal{L}[\psi(t,\boldsymbol{x})]$. The variational principle (specifically, Hamilton's principle) says that the action $S \equiv \int \mathrm{d} t L$ is stationary with respect to variations of $\psi$, a condition that may be denoted $\delta S = 0$. (This is the principle that was employed by Schwinger\cite{Schwinger_1951} as the foundation for quantum field theory.) A choice of $\psi$ for which $S$ is stationary is said to be a critical point of $S$. For functionals that depend smoothly on their arguments, maxima and minima are critical points, so the search for critical points is often described as finding extrema.
A necessary condition is given by the Euler equation
\begin{equation} \label{Euler} 0 = \frac{\partial L}{\partial \psi}
- \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L}{\partial \dot{\psi}} \end{equation}
Evidently the requirement that (\ref{Euler}) yield a linear wave equation implies that the Lagrangian must be quadratic in $\psi$ and its time derivative.
\subsection{Normal--mode expansion---Single system}
Since the point of the measurement problem is to describe the evolution of a superposition of eigenstates of a given operator to a single eigenstate, it will simplify matters to define a basis set of such eigenstates. This expansion will be specific to a given inertial reference frame---the frame in which the measurement is performed and described by the above characteristics---because that will simplify the analysis and its comparison to those points. However, as explained above, we expect that the general theory (the form of the action, without dependence on the normal--mode expansion we will use here) will be relativistically appropriate and can be expressed in covariant form.
We will describe each system $\ell$ (=1 or 2) by a wavefunction $\psi^\ell(t,\boldsymbol{x})$, normalized in the usual way in terms of the spatial integral or otherwise--defined inner product \begin{equation} \label{normalization_psi} \braket{\psi^\ell}{\psi^\ell} = 1 \end{equation}
At any given time $t$, let $\ket{\psi^\ell_j}$ be for system $\ell = 1$ or 2 an eigenstate of a Hermitian operator $\sigma_{op}^\ell$,
\begin{equation} \label{eigenvalue} \sigma_{op}^\ell \ket{\psi^\ell_j(t)} = \sigma_j^\ell \ket{\psi^\ell_j(t)} \end{equation}
\noindent satisfying the applicable spatial BCs, and let those eigenstates form an orthonormal basis for states of system $\ell$:
\begin{equation} \label{orthonormality} \braket{\psi^\ell_j(t)}{\psi^\ell_k(t)} = \delta_{jk} \qquad (\ell=1,2; \forall t) \end{equation}
Since external fields acting on the system may change during the course of the measurement (perhaps due to the measurement process itself), the eigenvalues and eigenstates are in general functions of time. In many interesting cases they are slowly varying functions of time, and for simplicity we will confine ourselves to the case in which the eigenvalues $\sigma_j^\ell$ are constant. We expect that the analysis presented below can be readily generalized to the time--dependent case, for sufficiently slow variation.
We will also require each normal mode $\ket{\psi^\ell_j}$ to satisfy the variational principle based on its single-system Lagrangian $L^\ell$. This is possible because as stated above, the operator corresponding to the measured variable commutes with the Hamiltonian. The basis states will be taken to be simultaneous eigenstates of both operators, and eigenstates of the Hamiltonian satisfy the variational principle. Since a basis vector $\ket{\psi_j^\ell(t)}$ was defined to be an eigenstate of the Hamiltonian, it has an energy $E_j^\ell$ and a time derivative \begin{equation} \label{energy} \frac{\mathrm{d}}{\mathrm{d} t} \ket{\psi_j^\ell(t)} = -\frac{\mathrm{i}}{\hbar}E_j^\ell \ket{\psi_j^\ell(t)} \end{equation}
(Schr\"{o}dinger picture). We will also take the energies $E_j^\ell$ to be constant; then it follows that
\begin{equation} \label{modal_inner_product} \braket{\psi^\ell_j(t_1)}{\psi^\ell_k(t_2)} = \delta_{jk} \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} E_j^\ell(t_2-t_1)} \end{equation}
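For completeness, (\ref{modal_inner_product}) follows in one line: integrating (\ref{energy}) with constant $E_k^\ell$ and applying the orthonormality (\ref{orthonormality}) at time $t_1$,

```latex
\ket{\psi^\ell_k(t_2)}
  = \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} E_k^\ell (t_2 - t_1)} \ket{\psi^\ell_k(t_1)}
\quad\Longrightarrow\quad
\braket{\psi^\ell_j(t_1)}{\psi^\ell_k(t_2)}
  = \delta_{jk} \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} E_j^\ell (t_2 - t_1)}
```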
Now if system $\ell = 1$ (measured system) or $2$ (measurement apparatus) is isolated, its wavefunction can be expanded \begin{equation} \label{C_j_expansion} \ket{\psi^\ell(t)} = \sum_j C^\ell_j(t) \ket{\psi^\ell_j(t)} \end{equation}
and the normalization condition (\ref{normalization_psi}) implies \begin{equation} \label{normalization_Cj} \sum_j \lvert C_j(t) \rvert^2 = 1 \end{equation}
At present we expect this condition to hold for any $t$, but in subsection \ref{Alternative_normalization} we will argue for removing this constraint.
The action is
\begin{eqnarray} \label{action_single} S^\ell &\equiv& \int_{t_i}^{t_f} \mathrm{d} t \, L^\ell(t) \nonumber\\ &=& \int_{t_i}^{t_f} \mathrm{d} t \bra{\psi^\ell(t)} L_{op}^\ell \ket{\psi^\ell(t)} \nonumber\\ &=& \sum_{j,k} \int_{t_i}^{t_f} \mathrm{d} t \bra{\psi^\ell_j(t)} \, C^{\ell*}_j(t) \, L_{op}^\ell \, C^\ell_k(t) \ket{\psi^\ell_k(t)} \end{eqnarray}
Since the wavefunction $\psi^\ell$ is completely determined by the set of coefficients $C_j^\ell(t)$, the condition of stationarity of the action reduces to the problem of finding those coefficients, which must satisfy
\begin{equation} \label{E-L_Cjk} 0 = \frac{\partial L^\ell}{\partial C^\ell_j}
- \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L^\ell}{\partial \dot{C}^\ell_j}
\quad \forall j \end{equation} This formulation of the problem replaces (\ref{Euler}).
It is traditional in quantum field theory to perform the variational calculus analysis by varying (differentiating with respect to) the physically significant canonical fields and momenta, and that approach is extremely useful in producing intuitively appealing and useful evolution equations.\cite{Schwinger_1951, Weinberg_1995} However, the stationarity of the action is a \emph{mathematical} condition, and as long as our formulation spans the space of its allowed variations, the mathematics does not dictate our choice of the functions in terms of which those variations are expressed. Because we are interested in the eigenstate content of the wavefunction, the corresponding coefficients are particularly useful to us, and we use them to analyze the variational principle.
\subsection{Combined systems}
Now we can use the normalization condition (\ref{normalization_psi}) to write from (\ref{action_single})
\begin{equation} \label{combined_action_no_interaction_alt} S^1 + S^2 = \int_{t_i}^{t_f} \mathrm{d} t \bra{\psi^1(t)} \bra{\psi^2(t)} (L_{op}^1 + L_{op}^2) \ket{\psi^1(t)} \ket{\psi^2(t)} \end{equation}
if there is no interaction or entanglement between the two systems, that is, the combined state factors as $\ket{\psi} \equiv \ket{\psi^1}\ket{\psi^2}$.
To allow the two subsystems to be entangled, we replace the product of single-system states $\ket{\psi^1}$ and $\ket{\psi^2}$ by the joint state
\begin{equation} \label{C_jk_expansion_alt} \ket{\psi(t)} = \sum_{j,k} C_{jk}(t) \ket{\psi^1_j(t)} \ket{\psi^2_k(t)} \end{equation}
whereupon the normalization condition becomes
\begin{equation} \label{normalization_Cjk} \sum_{j,k} \, \lvert C_{jk}(t) \rvert^2 = 1 \quad \forall t \end{equation}
Then
\begin{equation} \label{action_single_system_terms} S^1 + S^2 = \sum_{j,k,\ell,m} \int_{t_i}^{t_f} \mathrm{d} t \,\, \bra{\psi_j^1(t)} \, \bra{\psi_k^2(t)} \, C^*_{jk}(t) (L_{op}^1 + L_{op}^2) \, C_{\ell m}(t) \, \ket{\psi_\ell^1(t)} \, \ket{\psi_m^2(t)} \end{equation}
To simplify the single-system terms, suppose $L^1_{op}$ and $L^2_{op}$ are of the form
\begin{eqnarray} \label{L_op_form} L^\ell_{op} &=& A^\ell - B^\ell \, \frac{\mathrm{d}^2}{\mathrm{d} t^2} \nonumber\\ &=& A^\ell + \overleftarrow{\frac{\mathrm{d}}{\mathrm{d} t}} \, B^\ell \, \frac{\mathrm{d}}{\mathrm{d} t} \end{eqnarray}
so $L^1$ and $L^2$ take the form
\begin{equation} \label{Ll_form} L^\ell \equiv \bra{\psi^\ell} L^\ell_{op} \ket{\psi^\ell} = A^\ell \braket{\psi^\ell}{\psi^\ell} + B^\ell \braket{\dot{\psi}^\ell}{\dot{\psi}^\ell} \end{equation}
with real constants $A^\ell$ and $B^\ell$. Then the fact that $\ket{\psi_j^\ell}$ is an eigenstate of the Hamiltonian means that it satisfies the Euler equation (\ref{Euler}), which we can write as \begin{eqnarray} \label{E-L_psi} 0 &=& \frac{\partial L^\ell}{\partial \bra{\psi_j^\ell}}
- \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L^\ell}{\partial \bra{\dot{\psi}_j^\ell}}
\nonumber\\ &=& A^\ell \ket{\psi_j^\ell} - B^\ell \ket{\ddot{\psi}_j^\ell} \end{eqnarray}
At this point we observe that the functional in question is a physical action and therefore real, so it is unchanged if we drop any imaginary part of the integrand. This has a simplifying advantage. When we use variational calculus to find a stationary state with respect to variations of a complex quantity ($\ket{\psi_j^\ell}$ or $C_{jk}$), we may treat the real and imaginary parts of that quantity independently, with an Euler equation for each of them. Alternatively, we may treat the quantity and its complex conjugate ($\bra{\psi_j^\ell}$ or $C^*_{jk}$) as the two functions to be varied. In our case, with a real integrand, doing so has the convenient feature that the two resulting Euler equations are complex conjugates of each other and we only need to solve one of them. Here in (\ref{E-L_psi}) we choose to vary the bra vector.
Substituting (\ref{L_op_form}) into (\ref{action_single_system_terms}) and using property (\ref{E-L_psi}) of the eigenvectors, we find (introducing the shorthand notations $B \equiv B^1 + B^2$ and $E_{jk} \equiv E_j^1 + E_k^2$) that
\begin{eqnarray} S^1 + S^2 &=& \sum_{j,k,\ell,m} \int_{t_i}^{t_f} \mathrm{d} t \,\, \bra{\psi_j^1(t)} \, \bra{\psi_k^2(t)} \, C^*_{jk}(t) \left[(L_{op}^1 + L_{op}^2), \, C_{\ell m}(t) \right] \, \ket{\psi_\ell^1(t)} \, \ket{\psi_m^2(t)} \nonumber\\ &=& -B \sum_{j,k,\ell,m} \int_{t_i}^{t_f} \mathrm{d} t \,\, \bra{\psi_j^1} \, \bra{\psi_k^2} \, C^*_{jk} \left( \ddot{C}_{\ell m} + 2 \, \dot{C}_{\ell m} \frac{\mathrm{d}}{\mathrm{d} t} \right) \, \ket{\psi_\ell^1} \, \ket{\psi_m^2} \end{eqnarray}
Then
\begin{eqnarray} S^1 + S^2 &=& \, -B \! \sum_{j,k,\ell,m} \int_{t_i}^{t_f} \mathrm{d} t \,\, \bra{\psi_j^1} \, \bra{\psi_k^2} \, C^*_{jk} \left( \ddot{C}_{\ell m} - \frac{2\mathrm{i}}{\hbar} E_{\ell m} \, \dot{C}_{\ell m} \right) \, \ket{\psi_\ell^1} \, \ket{\psi_m^2} \nonumber\\ &=& \, -B \sum_{j,k} \int_{t_i}^{t_f} \mathrm{d} t \,\, C^*_{jk} \, \left( \ddot{C}_{jk} - \frac{2\mathrm{i}}{\hbar} E_{jk} \, \dot{C}_{jk} \right) \nonumber\\ &=& \, B \sum_{j,k} \int_{t_i}^{t_f} \mathrm{d} t \, \left( \lvert \dot{C}_{jk} \rvert^2 + \frac{2\mathrm{i}}{\hbar} E_{jk} \, C^*_{jk} \,\dot{C}_{jk} \right) \end{eqnarray}
where in the last step we rely on the hypothesis that $\dot{C}_{jk}$ vanishes at $t_i$, as a condition imposed by the experimental preparation, and at $t_f$, as implied by the NBC. Finally, as intended, we discard the imaginary part of the integrand:
\begin{equation} \label{action_single_system_terms_simplified} S^1 + S^2 = \, B \sum_{j,k} \int_{t_i}^{t_f} \mathrm{d} t \, \left( \lvert \dot{C}_{jk} \rvert^2 + \mathrm{Re} \left \{\frac{2\mathrm{i}}{\hbar} E_{jk} \, C^*_{jk} \,\dot{C}_{jk} \right\} \right) \end{equation}
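The integration by parts invoked in the last step is, explicitly (the boundary term vanishing because $\dot{C}_{jk}(t_i) = \dot{C}_{jk}(t_f) = 0$):

```latex
\int_{t_i}^{t_f} \mathrm{d} t \, C^*_{jk} \, \ddot{C}_{jk}
  = \Big[ C^*_{jk} \, \dot{C}_{jk} \Big]_{t_i}^{t_f}
  - \int_{t_i}^{t_f} \mathrm{d} t \, \lvert \dot{C}_{jk} \rvert^2
  = - \int_{t_i}^{t_f} \mathrm{d} t \, \lvert \dot{C}_{jk} \rvert^2
```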
\subsection{Interaction term}
As argued above, we must account for interaction by including in the action a term proportional to $(\sigma_{op}^1 - \sigma_{op}^2)^2$.
A simple form for such an interaction term is
\begin{equation} \label{simple_interaction} S^I = \mu \int^{t_f}_{t_i}\mathrm{d} t \, \bra{\psi(t)} (\sigma_{op}^1 - \sigma_{op}^2)^2 \ket{\psi(t)} \end{equation}
for some constant $\mu$. Then, defining another shorthand notation $\Delta_{jk} \equiv \sigma_j^1 - \sigma_k^2$,
\begin{eqnarray} \label{interaction_local} S^I &=& \, \mu \sum_{j,k,\ell,m} \int^{t_f}_{t_i} \mathrm{d} t \, \bra{\psi_j^1(t)} \, \bra{\psi_k^2(t)} \, C^*_{jk}(t) \, (\sigma_{op}^1 - \sigma_{op}^2)^2 \, C_{\ell m}(t) \, \ket{\psi_\ell^1(t)} \, \ket{\psi_m^2(t)} \nonumber\\ &=& \, \mu \sum_{j,k} \int^{t_f}_{t_i} \mathrm{d} t \, \, \Delta_{jk}^2 \, \lvert C_{jk}(t) \rvert ^2 \end{eqnarray}
Then we might expect the complete action to be \begin{equation} S = S^1 + S^2 + S^I \end{equation}
\subsection{Necessity of a nonlocal theory} \label{Necessity_nonlocality}
However, as we indicated earlier, to reproduce the observed behavior that a measurement always finds the system in a single eigenstate (or a superposition of degenerate eigenstates) of the operator corresponding to the measured quantity, the theory must be nonlocal. For a simple example of this, consider a system described by the one-dimensional Schr\"{o}dinger equation
\begin{equation} \label{Schrodinger} \left[ -\frac{\hbar^2}{2m} \frac{\mathrm{d}^2}{\mathrm{d} x^2} + V(x) \right] \psi(x) = E \, \psi(x) \end{equation}
with potential function $V(x)$ and boundary conditions at positions $x_1, x_2$ (which may be $\pm\infty$). It is customary to require the solution to be normalized according to
\begin{equation} \label{simple_normalization} \int \lvert\psi\rvert^2 \mathrm{d} x = 1 \end{equation}
although for our purposes it suffices to require that integral to be finite. Consider the case in which the potential is attractive and the spectrum of energy eigenvalues $E$ is discrete, with (for simplicity) no degenerate eigenstates.
Now suppose that values $\psi(x_0)$ and $\psi'(x_0)$ are proposed for a solution at some point $x=x_0$, and we ask whether they belong to a solution that is a single eigenstate. In a conventional interpretation of quantum mechanics, this is the question Nature must answer when those values have developed through the operation of the wave equation and a measurement is then made, requiring a single eigenvalue as its result. (We are here dealing with the case in which the measured quantity is energy, but that case is enough to prove our point.) Nature must decide whether to accept the proposed values of $\psi(x_0)$ and $\psi'(x_0)$ as given, or ``collapse'' to different values consistent with a single eigenstate.
In a local theory, that question must be answered on the basis of local information alone, that is, $V(x_0)$. That information is insufficient. With nonlocal information, namely, the entire function $V(x)$, it would be possible, given $E$, to find the solution $\psi(x)$ of (\ref{Schrodinger}) by integrating the differential equation twice. However, for most values of $E$, either the integrated solution violates the boundary conditions or the normalization integral diverges (or both). We conclude that determining whether $\psi(x_0)$ and $\psi'(x_0)$ are consistent with a single eigenstate requires the use of information $V(x)$ at all $x$ to integrate the solution and test boundary conditions and normalizability. A local theory cannot make that determination.
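The role of global information can be seen concretely in a standard ``shooting'' calculation (our illustration, in dimensionless units, for the harmonic-oscillator potential with eigenvalues $E = n + 1/2$): whether a trial energy is an eigenvalue shows up only in the behavior of the integrated solution across the whole domain.

```python
import numpy as np

# Illustration (ours): shooting integration of the dimensionless
# harmonic-oscillator equation  psi'' = (x^2 - 2E) psi,  whose bound
# states have E = n + 1/2.  Locally nothing distinguishes an eigen-E
# from a generic E; only integrating across the whole domain reveals
# that a generic E gives a divergent (non-normalizable) tail, whose
# sign flips as E crosses an eigenvalue.
def tail(E, x_max=6.0, n_steps=24000):
    xs = np.linspace(-x_max, x_max, n_steps)
    h = xs[1] - xs[0]
    psi, dpsi = 0.0, 1e-6        # decaying behavior deep in the left tail
    for x in xs:                 # forward Euler integration of the ODE
        psi, dpsi = psi + h * dpsi, dpsi + h * (x * x - 2.0 * E) * psi
    return psi                   # signed value at the right edge

# The divergent tail changes sign across the ground state E = 1/2:
assert tail(0.4) > 0.0 > tail(0.6)
```

Detecting the sign change (and hence locating the eigenvalue) requires the potential over the entire domain, exactly as argued in the text.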
Therefore, since a measurement always finds an eigenstate of the relevant operator, we conclude that its complete mathematical description must be nonlocal in space. But a description that is nonlocal in space in one inertial reference frame is nonlocal in both space and time in any other frame, so in general the description of a measurement must be nonlocal in time as well.
A differential equation (aside from the specification of BCs) is local, depending on a function and its derivatives at a single point. By contrast, a nonlocal relationship is naturally expressed as an integral equation. Calculus of variations shows us that the stationary points of an integral expression like $\int \mathrm{d} u \, F(g(u),\dot{g}(u),u)$ satisfy a differential equation (the Euler equation) for $g(u)$, so such a description corresponds to a strictly local process. In order to obtain an integral equation as the simplest description of a measurement process, we need the action to involve at least \emph{two} integrations of some function of the quantum state.
\subsection{Nonlocal interaction term}
Since the phenomenon that requires nonlocality (measurement--induced collapse of the wavefunction) is due to the interaction between systems 1 and 2, we suppose that it is the interaction term $S^I$ that must be made nonlocal. We propose to add to it a nonlocal piece involving two integrations on time. We start with an expression resembling $S^I$ in (\ref{simple_interaction}) but with two integrations on time:
\begin{eqnarray} && \nu \left[ \int^{t_f}_{t_i}\mathrm{d} t \, \bra{\psi(t)} (\sigma_{op}^1- \sigma_{op}^2) \ket{\psi(t)} \right]^2 \nonumber\\ && = \nu \int^{t_f}_{t_i}\mathrm{d} t_1 \, \int^{t_f}_{t_i}\mathrm{d} t_2 \, \bra{\psi(t_1)} \bra{\psi(t_2)}' (\sigma_{op}^1- \sigma_{op}^2) \, (\sigma_{op}^{1'}- \sigma_{op}^{2'}) \ket{\psi(t_1)} \ket{\psi(t_2)}' \end{eqnarray}
Here $\nu$ is a real constant, and the primed $\sigma^\ell_{op}$ operators combine with the primed bra and ket vectors in an inner product, as do the unprimed operators and bra and ket vectors.
Now we make changes so as to couple the $t_1$ and the $t_2$ integrals. We move one of the primes in the operator kernel, changing it from $(\sigma_{op}^1- \sigma_{op}^2) \, (\sigma_{op}^{1'}- \sigma_{op}^{2'})$ to $(\sigma_{op}^1- \sigma_{op}^{2'}) \, (\sigma_{op}^{1'}- \sigma_{op}^2)$. We also move the prime from one ket vector to the other. Finally, we observe that in this form the interaction between the state at $t_1$ and that at $t_2$ is independent of the time difference. It may be that this interaction weakens with temporal separation, so a dimensionless non-negative real function $f(t_1 - t_2)$ should be included in the integrand. By symmetry, $f$ must be an even function, and we expect it to be a monotonically decreasing function of the absolute value of its argument. For later convenience, let us suppose that there is a real constant $\tau$ such that $f(t_1 - t_2)=0$ whenever $\lvert t_1 - t_2 \rvert \ge \tau$. These changes result in the term
\begin{equation} \label{first_nonlinear_interaction} R^I \equiv \nu \int^{t_f}_{t_i}\mathrm{d} t_1 \, \int^{t_f}_{t_i}\mathrm{d} t_2 \, f(t_1 - t_2) \bra{\psi(t_1)} \bra{\psi(t_2)}' (\sigma_{op}^1- \sigma_{op}^{2'}) \, (\sigma_{op}^{1'}- \sigma_{op}^2) \ket{\psi(t_1)}' \ket{\psi(t_2)} \end{equation}
Physically this expresses an interaction or ``auto-entanglement'' between the state $\ket{\psi}$ at time $t_1$ and the same state at $t_2$; this is an expression of retrocausality in the sense that the state at the later time interacts with its earlier value. A more speculative interpretation, based on the time symmetry of the variational principle, is that this term describes interaction between ``forwards'' and ``backwards'' histories. This sounds very much like the ``transaction'' in Cramer's transactional interpretation,\cite{Cramer_1980, *Cramer_1986} but it is not quite the same; Cramer proposed a two--way interaction between lightlike separated events, whereas our form allows for the possibility of timelike, lightlike and spacelike interactions. (We may of course restrict those options as we gain future understanding.)
We point out that for the extreme choice of $f$
\begin{equation} f(t_1 - t_2) = \delta(t_1 - t_2) \end{equation}
the integrand takes a more intuitive form in terms of quantum expectation values $\left\langle \mathcal{O} \right\rangle \equiv \bra{\psi} \mathcal{O} \ket{\psi}$:
\begin{eqnarray} \bra{\psi(t)} \bra{\psi(t)}' (\sigma_{op}^1- \sigma_{op}^{2'}) \, (\sigma_{op}^{1'}- \sigma_{op}^2) \ket{\psi(t)} \ket{\psi(t)}' &=& \left\langle \sigma^1 \right\rangle ^2 -2 \left\langle \sigma^1 \sigma^2 \right\rangle + \left\langle \sigma^2 \right\rangle ^2 \nonumber\\ &=& \left\langle (\sigma^1 - \sigma^2)^2 \right\rangle - \left\langle (\Delta\sigma^1)^2 \right\rangle - \left\langle (\Delta\sigma^2)^2 \right\rangle \qquad \end{eqnarray}
in which
\begin{equation} \Delta\sigma^\ell \equiv \sigma^\ell - \left\langle \sigma^\ell \right\rangle \quad \ell=1,2 \end{equation}
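The second equality above is elementary to verify: expanding $\left\langle (\sigma^1 - \sigma^2)^2 \right\rangle$ and the two variances,

```latex
\left\langle (\sigma^1 - \sigma^2)^2 \right\rangle
  - \left\langle (\Delta\sigma^1)^2 \right\rangle
  - \left\langle (\Delta\sigma^2)^2 \right\rangle
= \left( \left\langle (\sigma^1)^2 \right\rangle
       - 2 \left\langle \sigma^1 \sigma^2 \right\rangle
       + \left\langle (\sigma^2)^2 \right\rangle \right)
  - \left( \left\langle (\sigma^1)^2 \right\rangle - \left\langle \sigma^1 \right\rangle^2 \right)
  - \left( \left\langle (\sigma^2)^2 \right\rangle - \left\langle \sigma^2 \right\rangle^2 \right)
= \left\langle \sigma^1 \right\rangle^2
  - 2 \left\langle \sigma^1 \sigma^2 \right\rangle
  + \left\langle \sigma^2 \right\rangle^2
```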
This suggests that minimizing the term $\left\langle (\sigma^1 - \sigma^2)^2 \right\rangle$ drives the action of measurement (system and apparatus evolve to states with the same eigenvalue) and the other two terms drive wavefunction collapse (until each system ultimately has only a single eigenvalue $\sigma^\ell = \left\langle \sigma^\ell \right\rangle$). However, we will find that the $\delta$-function form of $f$ is unsuitable for our objectives, so the physical interpretation of $R^I$ is more subtle.
Next we expand in normal modes according to (\ref{C_jk_expansion_alt}) and use the eigenvalue relation (\ref{eigenvalue}):
\begin{eqnarray} R^I &=& \nu \int^{t_f}_{t_i} \mathrm{d} t_1 \, \int^{t_f}_{t_i} \mathrm{d} t_2 \, f(t_1 - t_2) \sum_{\substack{j,k,\ell,m,\\n,p,q,r}} \bra{\psi_j^1(t_1)} \bra{\psi_k^2(t_1)} C^*_{jk}(t_1) \bra{\psi_\ell^1(t_2)}' \bra{\psi_m^2(t_2)}' \nonumber\\ && C^*_{\ell m}(t_2) \, \Delta_{qp} \, \Delta_{nr} \, C_{np}(t_1) \ket{\psi_n^1(t_1)}' \ket{\psi_p^2(t_1)}' C_{qr}(t_2) \ket{\psi_q^1(t_2)} \ket{\psi_r^2(t_2)} \end{eqnarray}
Then, using (\ref{modal_inner_product}) and defining $E_{jk} \equiv E_j^1 + E_k^2$,
\begin{eqnarray} \label{nonlinear_interaction_C} R^I &=& \int^{t_f}_{t_i} \mathrm{d} t_1 \, \int^{t_f}_{t_i} \mathrm{d} t_2 \, r^I(t_1,t_2) \nonumber\\ &=& \, \frac{1}{2} \int^{t_f}_{t_i} \mathrm{d} t_1 \, \int^{t_f}_{t_i} \mathrm{d} t_2 \, \left[ r^I(t_1,t_2) + r^I(t_2,t_1) \right] \end{eqnarray}
where
\begin{equation} r^I(t_1,t_2) \equiv \, \nu f(t_1 - t_2) \sum_{j,k,\ell,m} \Delta_{jm} \, \Delta_{\ell k} \, C^*_{jk}(t_1) \, C^*_{\ell m}(t_2) \, C_{\ell m}(t_1) \, C_{jk}(t_2) \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t_2-t_1)} \end{equation}
In the second line of (\ref{nonlinear_interaction_C}) we have replaced the integrand by its real part, for the reasons discussed above, utilizing the property
\begin{equation} \left[ r^I(t_1,t_2) \right] ^* = r^I(t_2,t_1) \end{equation}
\subsection{Complete action and variational analysis} Then the full action is
\begin{eqnarray} \label{full_action} S &=& S^1 + S^2 + S^I + R^I \nonumber\\ &=& \int^{t_f}_{t_i} \mathrm{d} t \left[ s^{12}(t) + s^I(t) \right] + \frac{1}{2} \int^{t_f}_{t_i} \mathrm{d} t_1 \int^{t_f}_{t_i} \mathrm{d} t_2 \left[ r^I(t_1,t_2) + r^I(t_2,t_1) \right] \nonumber\\ &=& \int^{t_f}_{t_i} \mathrm{d} t_1 \int^{t_f}_{t_i} \mathrm{d} t_2 \left\{ \frac{1}{2T} \left[ s^{12}(t_1) + s^I(t_1) + s^{12}(t_2) + s^I(t_2) \right] + \frac{1}{2} \left[ r^I(t_1,t_2) + r^I(t_2,t_1) \right] \right\} \qquad \end{eqnarray}
where $T \equiv t_f - t_i$, and $s^{12}$ and $s^I$ are the integrands (including prefactors) in $(S^1 + S^2)$ and $S^I$, as given in (\ref{action_single_system_terms_simplified}) and (\ref{interaction_local}):
\begin{equation} s^{12} = \, B \sum_{j,k} \left[ \lvert \dot{C}_{jk} \rvert^2 + \frac{\mathrm{i}}{\hbar} E_{jk} \left( C^*_{jk} \,\dot{C}_{jk} - \dot{C}^*_{jk} \, C_{jk} \right) \right] \end{equation}
\begin{equation} s^I = \, \mu \sum_{j,k} \, \Delta_{jk}^2 \, \lvert C_{jk} \rvert ^2 \end{equation}
We observe that in this form, the integrand of $S$ is real and symmetric in $t_1$ and $t_2$. It depends on the coefficients $\{C_{pq}\}$ at two times. We need to find a critical point of the action subject to the constraint (\ref{normalization_Cjk}). In the Appendix we outline the analysis of such a problem, including the use of a Lagrange multiplier $\lambda(t)$ to enforce the constraint, leading to integral equation (\ref{necessary_1_Lagrange}). Varying $C_{jk}^*$ by that procedure and defining the operator \begin{equation} W \equiv \frac{\partial}{\partial C_{jk}^*(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial}{\partial \dot{C}_{jk}^*(t_1)} \end{equation} we find
\begin{eqnarray} 0 &=& \frac{1}{2} \, W \, s^{12}(t_1) + \frac{1}{2} \, W \, s^I(t_1) + \frac{1}{2} \int^{t_f}_{t_i} \mathrm{d} t_2 \, W \left[ r^I(t_1,t_2) + r^I(t_2,t_1) \right] \nonumber\\ && + \, T \, \lambda(t_1) \, \frac{\partial}{\partial C_{jk}^*(t_1)} \left[ \sum_{j,k} \, \lvert C_{jk}(t_1) \rvert^2 - 1\right] \end{eqnarray}
This becomes
\begin{equation} \label{Euler_equation} \ddot{C}_{jk}(t) = \frac{2\mathrm{i}}{\hbar} E_{jk} \, \dot{C}_{jk}(t) + \frac{1}{B} \left[ \mu \, \Delta_{jk}^2 + 2T \, \lambda(t) \right] C_{jk}(t) + \frac{\nu}{B} \, \tilde{C}_{jk}(t) \end{equation}
in which we define the function
\begin{equation} \label{C_jk_tilde} \tilde{C}_{jk}(t) \equiv \sum_{\ell,m} \Delta_{jm} \, \Delta_{\ell k} \, C_{\ell m}(t) \, \int^{t_f}_{t_i} \mathrm{d} t' \, C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \end{equation}
It can be seen by varying the action with respect to $C_{jk}$ instead of $C_{jk}^*$ that $\lambda(t)$ must be real. To find it, we note that the second derivative of the normalization condition (\ref{normalization_Cjk}) is \begin{equation} \label{2nd_derivative_normalization} 2 \sum_{j,k} \left( \lvert \dot{C}_{jk} \rvert^2 + \mathrm{Re} \left\{ C_{jk} ^* \, \ddot{C}_{jk} \right \}\right) = 0 \end{equation}
We eliminate $\ddot{C}_{jk}$ between (\ref{Euler_equation}) and (\ref{2nd_derivative_normalization}) and then solve for (a constant times) $\lambda(t)$:
\begin{equation} \label{lambda} \frac{2T \lambda}{B} = - \sum_{j,k} \left( \lvert \dot{C}_{jk} \rvert^2 + \frac{\mu}{B} \, \Delta_{jk}^2 \, \lvert C_{jk} \rvert^2 + \mathrm{Re} \left\{ \frac{2 \mathrm{i}}{\hbar} E_{jk} \, C_{jk}^* \, \dot{C}_{jk} + \frac{\nu}{B} \, C_{jk}^* \, \tilde{C}_{jk} \right\} \right) \end{equation}
Substituting that expression into (\ref{Euler_equation}),
\begin{eqnarray} \label{Cjk_dotdot_eqn} \ddot{C}_{jk} &=& \frac{2 \mathrm{i}}{\hbar} E_{jk} \, \dot{C}_{jk} + \frac{\mu}{B} \, C_{jk} \left( \Delta_{jk}^2 - \sum_{\ell,m} \Delta_{\ell m}^2 \lvert C_{\ell m} \rvert ^2 \right) \nonumber\\ && + \frac{\nu}{B} \left( \tilde{C}_{jk} - C_{jk} \, \mathrm{Re} \left\{\sum_{\ell,m} C_{\ell m}^* \tilde{C}_{\ell m} \right\} \right) - C_{jk} \sum_{\ell,m} \left( \lvert \dot{C}_{\ell m} \rvert ^2 + \mathrm{Re} \left \{ \frac{2 \mathrm{i}}{\hbar} E_{\ell m} \, C_{\ell m}^* \, \dot{C}_{\ell m} \right \} \right) \qquad \end{eqnarray}
This is the equation that we expect to govern the evolution of the complete system (that is, system $+$ apparatus), as described by the coefficients $\{C_{jk}(t)\}$ in the normal--mode expansion (\ref{C_j_expansion}).
The BCs to be applied with (\ref{Cjk_dotdot_eqn}) are specified values of $\{C_{jk}(t_i)\}$ from the initial preparation, and the NBC at $t_f$, which takes the form
\begin{equation} 0 = \int_{t_i}^{t_f} \mathrm{d} t_2 \left. \left( \frac{\partial L}{\partial \dot{C}^*_{jk}(t_1)} \right) \right\rvert_{t_1=t_f} \end{equation}
in which $L$ is the integrand in the full action on the last line of \eqref{full_action}. This form of the NBC is derived in the appendix.
\subsection{Alternative treatment of the normalization condition} \label{Alternative_normalization}
Comparison of \eqref{Euler_equation} with \eqref{Cjk_dotdot_eqn} shows that rigorous enforcement of the normalization condition \eqref{normalization_psi} or \eqref{normalization_Cjk} has complicated the mathematics. Since we hope to show that experimental results of great simplicity and generality (e.g. Born's rule) follow from this theory, we are suspicious of the additional complexity and wonder whether it is absolutely necessary to satisfy the stated normalization condition at every instant $t$.
Our skepticism about that requirement is also based on a thought experiment described by Renninger\cite{Renninger_Gedanken_Ger, *Renninger_Gedanken_Eng}, which is equivalent to the following description. An excited atom at the origin is known to emit a photon at $t=0$, but the direction is unknown, so the photon's wavefunction satisfies $\lvert \psi \rvert ^2 = \delta(r-ct)/(4\pi r^2)$. A perfectly collecting hemispherical detector screen occupies the upper half of the sphere $r=1$ light-second. Therefore, if the photon's emission direction lies within the region $\theta < \pi/2$, it is collected and extinguished at $t=1$ second. Otherwise, it is not registered by the detector screen, and its wavefunction changes to satisfy $\lvert \psi \rvert ^2 = \delta(r-ct) H(\theta-\pi/2)/(2\pi r^2)$, where $H$ is the Heaviside function. The instantaneous change in the denominator from $4\pi r^2$ to $2\pi r^2$ at $t=1$ is not due to any measurement, for there is none, nor to any physical change in the photon; it arises entirely from the normalization requirement. This seems unphysical, and our suspicion deepens when we consider that this description depends on the choice of reference frame; for instance, in any other frame the detector screen would not be (hemi)spherical but spheroidal, and so the resulting change in magnitude of the uncollected wavefunction would happen over a nonzero interval of time.
A more physically sound description would be that a photon intercepted by the detector screen does not simply vanish; it interacts with (a) particle(s) of the screen to produce some physical effect, for instance dislodging a photoelectron. A more complete description of the experiment would include that effect. Since half of the outgoing spherical photon wavefunction participates in that effect, it is unreasonable for the uncollected half to double its weight to satisfy a normalization condition. We argue instead that the outgoing uncollected photon wavefunction after $t=1$ should be normalized to integrate to $1/2$, and with that change we see that a discontinuous and unphysical change is no longer needed in that uncollected part at $t=1$.
Armed with our reasoning that the normalization condition \eqref{normalization_psi} is not absolute, we propose to relax it for the experiment that is the subject of this paper. Although for many experiments we do not expect to lose any of the wavefunction weight in mid--experiment, we point out that the total weight of the wavefunction (unity, meaning one particle of whatever type is being described) is known only at $t_i$ and $t_f$. There is not, nor can there be, any experimental evidence for a unity (or any other) value of the weight at intermediate times. Therefore we propose that \eqref{normalization_psi} is a constraint only at $t_i$ and $t_f$. This is easily handled mathematically; we simply stipulate that \eqref{normalization_Cjk} is part of the initial and final conditions.\footnote{Note of course that if the experiment being described is the Renninger experiment, or some other experiment with sources or sinks of the wavefunction (the quantum field), then the normalization values at $t_i$ and $t_f$ will be modified in the ways just described, or in more complicated ways. For instance, if the Renninger experiment were augmented with a lower--hemisphere detector at $r=2$, then there would be one final condition at $t=1$ and $r=1, \, \theta \le \pi/2$ and another at $t=2$ and $r=2, \, \theta \ge \pi/2$, with a normalization value of $1$ applied to the union of both collector surfaces at their respective collecting times.} Then we can dispense with the Lagrange multiplier altogether, so the IDE to be satisfied is
\begin{equation} \label{Euler_equation_unconstrained} \ddot{C}_{jk}(t) = \frac{2\mathrm{i}}{\hbar} E_{jk} \, \dot{C}_{jk}(t) + \frac{\mu}{B} \Delta_{jk}^2 \, C_{jk}(t) + \frac{\nu}{B} \, \tilde{C}_{jk}(t) \end{equation}
\noindent Since we regard the simplicity of this equation in comparison to \eqref{Cjk_dotdot_eqn} as an argument for its plausibility, we will adopt it rather than the latter in the remaining sections of the paper; nevertheless, much of the following reasoning can be applied to \eqref{Cjk_dotdot_eqn} as well at the cost of more algebra.
\section{Comparison to desired properties}
\subsection{Stability of a superposition in the absence of a measurement}
We observe at this point that \eqref{Euler_equation_unconstrained} predicts the stability of an unperturbed superposition, as it should. When there is no interaction between the system and the measurement apparatus, $\mu = \nu = 0$. The resulting equation
\begin{equation} \label{Cjk_dotdot_eqn_no_interaction} \ddot{C}_{jk} = \frac{2 \mathrm{i}}{\hbar} E_{jk} \, \dot{C}_{jk} \end{equation}
has the solution $\dot{C}_{jk} =0 \; \forall j, \! k$, that is, stability of the superposition. Furthermore, since (for each subsystem $\ell=1$ or $2$) the modes in the expansion (\ref{C_jk_expansion_alt}) were defined as solutions of the no--measurement wave equation, the stable solution resulting from our analysis here agrees with the solution of the ordinary wave equation for each isolated system.
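For completeness, we note that (\ref{Cjk_dotdot_eqn_no_interaction}) can be integrated directly: its general solution is
\begin{equation}
C_{jk}(t) = a_{jk} + b_{jk} \, \mathrm{e}^{\frac{2 \mathrm{i}}{\hbar} E_{jk} t}
\end{equation}
with constants $a_{jk}$ and $b_{jk}$ fixed by the boundary data, so the stable solution $\dot{C}_{jk} = 0$ is the member of this two--parameter family with $b_{jk} = 0$.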
\subsection{Collapse to a single eigenstate with $\sigma_j^1 = \sigma_k^2$} \label{Collapse_single_eigenstate}
This heading comprises three conditions we expect a measurement to satisfy: system 1 must collapse to a single eigenstate of $\sigma_{op}^1$, or a superposition of eigenstates with the same eigenvalue; system 2 must similarly collapse; and the eigenvalues of the two systems must agree. The third condition (measurement) requires that for any $j,k$, \begin{equation} \label{measurement_condition} \Delta_{jk}=0 \quad \text{or} \quad C_{jk}=0 \end{equation}
Although we will not analyze the differential equation \eqref{Euler_equation_unconstrained} to describe the approach to these three conditions, we will show that it is consistent with their satisfaction in the steady state, when all time derivatives of $\{C_{pq}\}$ vanish. Thus it is plausible for the combined system to reach such a state, and having done so, to remain in that state.
We see that condition (\ref{measurement_condition}) together with the steady--state condition causes every term in (\ref{Euler_equation_unconstrained}) to vanish except possibly the last. To understand those terms, consider that after the system attains a steady state, we can replace all the factors $C_{pq}$ or $C^*_{pq}$ on the RHS of (\ref{C_jk_tilde}) by their final values, which satisfy (\ref{measurement_condition}). Then at times $t$ greater than $\tau$ after the full system reaches its steady state, any nonzero terms $\ell,m$ on the RHS must have
\begin{equation} \label{Deltas_0} \Delta_{jk} = \Delta_{\ell m} = 0 \end{equation}
If either of systems 1 and 2 has collapsed to a single state (or a set of states with a single eigenvalue), then by (\ref{measurement_condition}) the other system has also collapsed, and it is easy to see that (\ref{Deltas_0}) implies that $\Delta_{jm} = \Delta_{\ell k} = 0$, so the only possible nonzero term in $\tilde{C}_{jk}$ is zero after all. Therefore the last term in \eqref{Euler_equation_unconstrained} vanishes, so the equation is consistent with the supposed late--time steady state. On the other hand, if systems 1 and 2 have not collapsed, there are terms in (\ref{C_jk_tilde}) that do not trivially vanish. We conclude that the evolution equation predicts that a late--time steady state is possible only if the measurement condition is satisfied (the apparatus state corresponds to the state of the system being measured) and both systems have collapsed to a single eigenvalue.
We would prefer to have a more rigorous analysis, both disposing of the possibility that the combined system never reaches a steady state and describing the approach to the steady state. This analysis must await future work, possibly including numerical studies. Our objective in this paper is to show the possibility that a variational principle of the type we have developed can explain the measurement problem.
\subsection{Consistency with Born's rule} \label{Borns_rule}
The well--known experimental observation is that in an ensemble of identically--prepared measurements of some property (eigenvalue), beginning with a system in a superposition of modes with different values of the eigenvalue, the expected proportion of outcomes equal to a particular value will be the weight of that value in the superposition. (At this point we take it as given that the system will collapse to a single value of the eigenvalue.) In our case, where the system being measured is denoted $\ell=1$, the weight corresponding to eigenvalue $\sigma^1_j$ is
\begin{equation} \label{P_j} P_j \equiv \sum_k \lvert C_{jk}(t_i) \rvert^2 \end{equation}
(More generally, it is $\sum_{j,k} \lvert C_{jk}(t_i) \rvert^2$, where the sum on $j$ is over all modes with a single value of the eigenvalue. For simplicity, we will consider only the non--degenerate case, but the extension to the more general case should be straightforward.)
It will be convenient to denote averages over an ensemble of identically prepared experimental realizations by an overbar. Then, if it is taken as given that the collapse to a single eigenvalue is complete by $t_f$, we can see that the relation
\begin{equation} \label{equality_of_averaged_probabilities} \overline{P_j(t_i)} = \overline{P_j(t_f)} \quad \forall j \end{equation}
is equivalent to Born's rule. This equivalence holds because at the initial time $t_i$, by the requirement of identical preparation, every member of the ensemble contributes the same value $P_j(t_i)$ to the ensemble average. At $t_f$, $P_j = 1$ in a fraction $P_j(t_i)$ of the realizations in the ensemble, and 0 in the others. So (\ref{equality_of_averaged_probabilities}) is the relation that should be predicted by a successful theory.
We would like to be able to prove that Born's rule (\ref{equality_of_averaged_probabilities}) follows from our nonlocal wave equation (\ref{Euler_equation_unconstrained}). The theoretical proof has eluded us so far; we may ultimately have to rely on numerical studies. However, we sketch out here some of the ideas that may contribute to the theoretical analysis.
By differentiating (\ref{P_j}) twice, we see that (supposing that by the system preparation $\dot{P}_j(t_i)=0$)
\begin{eqnarray} \label{P_j_dot} \dot{P}_j(t) &=& 2 \int_{t_i}^t \mathrm{d} t' \, \sum_k \left( \lvert \dot{C}_{jk} \rvert^2 + \mathrm{Re} \left\{ C_{jk} ^* \, \ddot{C}_{jk} \right \}\right) \nonumber\\ &=& 2 \int_{t_i}^t \mathrm{d} t' \, \sum_k \left[ \lvert \dot{C}_{jk} \rvert^2 + \frac{\mu}{B} \, \Delta_{jk}^2 \lvert C_{jk} \rvert^2 + \mathrm{Re} \left\{ \frac{2 \mathrm{i}}{\hbar} E_{jk} \, C_{jk}^* \, \dot{C}_{jk} + \frac{\nu}{B} \, C_{jk}^* \, \tilde{C}_{jk} \right\} \right] \end{eqnarray}
In the term in the integrand involving $E_{jk}$, let $C_{jk} = X \mathrm{e}^{\mathrm{i} \phi}$ for real $X$ and $\phi$. Then \begin{eqnarray} \mathrm{Re} \, \left\{\frac{2 \mathrm{i}}{\hbar} E_{jk} \, C_{jk}^* \, \dot{C}_{jk} \right\} &=& \mathrm{Re} \, \left\{\frac{2 \mathrm{i}}{\hbar} E_{jk} X (\dot{X} + \mathrm{i} X \dot{\phi}) \right\} \nonumber\\ &=& -\frac{2}{\hbar} E_{jk} X^2 \dot{\phi} \end{eqnarray}
so
\begin{equation} \overline{ \mathrm{Re} \, \left\{\frac{2 \mathrm{i}}{\hbar} E_{jk} \, C_{jk}^* \, \dot{C}_{jk} \right\} }= 0 \end{equation}
by symmetry, since the phase $\phi$ is equally likely to increase or decrease.
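As a toy illustration (not part of the derivation), the vanishing of this ensemble average is easy to check numerically; the symmetric distribution of $\dot{\phi}$ below is an assumption made for illustration, and all names and parameters are ours:

```python
import numpy as np

# Toy check: with C_jk = X e^{i phi},
#   Re{(2i/hbar) E_jk C*_jk Cdot_jk} = -(2/hbar) E_jk X^2 phidot,
# so if phidot is symmetrically distributed about zero over the ensemble,
# the realization average vanishes.  Units with hbar = E_jk = 1.
rng = np.random.default_rng(0)
n = 200_000
X = rng.uniform(0.5, 1.0, n)       # amplitudes (arbitrary, positive)
phidot = rng.normal(0.0, 1.0, n)   # symmetric distribution of phase rates
term = -2.0 * X**2 * phidot
avg, scale = term.mean(), np.abs(term).mean()
print(abs(avg) / scale)            # small: the average cancels by symmetry
```

The residual ratio shrinks as $1/\sqrt{n}$, the usual Monte Carlo behavior for a mean that is exactly zero in the limit.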
To deal with the term $\mathrm{Re} \, \{\frac{\nu}{B} \, C^*_{jk} \, \tilde{C}_{jk} \}$, we note that
\begin{eqnarray} \label{Cjkstar_Cjktilde} C_{jk}^* (t) \, \tilde{C}_{jk} (t) &=& \, \Delta_{jk}^2 \, \langle \langle C_{jk}^2(t) \rangle \rangle \, \lvert C_{jk}(t) \rvert ^2 \nonumber\\ &+& C_{jk}^* (t) \, \sideset{}{'}\sum_{\ell,m} \Delta_{jm} \, \Delta_{\ell k} \, C_{\ell m}(t) \int^{t_f}_{t_i} \mathrm{d} t' \, C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \end{eqnarray}
in which we define the ``moving average'' \begin{equation} \langle \langle C_{jk}^2(t) \rangle \rangle \equiv \int^{t_f}_{t_i} \mathrm{d} t' \, f(t - t') \,\lvert C_{jk}(t') \rvert ^2 \end{equation} and the primed sum denotes the sum over all $\ell, m$ except the single term $\ell=j, m=k$.
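For later numerical work, the moving average is straightforward to discretize on a uniform grid. The sketch below assumes, purely for illustration, a Gaussian form of width $\tau$ for the kernel $f$; the actual form of $f$ is left open in the text:

```python
import numpy as np

# Discretization of the moving average
#   <<C^2>>(t) = \int_{ti}^{tf} dt' f(t - t') |C(t')|^2
# on a uniform time grid, with a Gaussian kernel of width tau used as an
# illustrative stand-in for the unspecified memory function f.
def moving_average(C, t, tau):
    dt = t[1] - t[0]
    K = np.exp(-((t[:, None] - t[None, :]) / tau) ** 2)   # f(t - t')
    return (K * (np.abs(C) ** 2)[None, :]).sum(axis=1) * dt
```

As a sanity check, for $\lvert C \rvert^2 \equiv 1$ and $\tau$ much longer than the interval, the result reduces to the interval length, as the definition requires.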
We have hypothesized that the solution $\{C_{jk}(t) \,\, \forall j,k,t \}$ of the variational principle is constrained by the initial (preparation) condition at $t_i$ and the final (NBC) condition at $t_f$. We now venture a little further and suppose that the desired solution, in order to extremize the action, uses the entire interval from $t_i$ to $t_f$ to evolve from initial to final values of $\{C_{jk}\}$; this is plausible due to the term in the action [$\braket{\dot{\psi}^\ell}{\dot{\psi}^\ell}$ in (\ref{Ll_form}) or $\int \mathrm{d} t \lvert \dot{C}_{jk} \rvert^2$ in (\ref{action_single_system_terms_simplified})] that penalizes rapid transitions. Therefore $\lvert \dot{C}_{jk} \rvert \sim 1/T$. But a measurement adequate to resolve two states $j,k$ and $\ell,m$ with $E_{jk} \neq E_{\ell m}$ is conventionally understood to require a duration
\begin{equation} T \gg \frac{\hbar}{\lvert E_{jk} - E_{\ell m} \rvert} \end{equation}
We conclude therefore that there is an $\epsilon$ such that \begin{equation} \frac{\hbar \lvert \dot{C}_{pq} \rvert}{\lvert E_{jk} - E_{\ell m} \rvert}
< \epsilon \ll 1 \end{equation} for any $p,q$ and for any choice of $j,k,\ell,m$ for which $E_{jk} \neq E_{\ell m}$. We may also require the function $f$ to be slowly varying in the sense that
\begin{equation} \frac{\hbar \lvert \dot{f} \rvert}{\lvert E_{jk} - E_{\ell m} \rvert f_{max}}
< \epsilon \ll 1 \end{equation}
where $f_{max}$ is the maximum value taken by $f$. Consequently, with the additional assumption that $E_{jk} = E_{\ell m}$ only if $j=\ell$ and $k=m$, the integral in the second term of (\ref{Cjkstar_Cjktilde}) can be integrated by parts twice:
\begin{eqnarray} \label{integral_from_Cjk_tilde} \int^{t_f}_{t_i} & \mathrm{d} t' \, & C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \nonumber\\ &=& \, \frac{\mathrm{i}\hbar}{E_{jk}-E_{\ell m}} \left[ C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \right]_{t'=t_i}^{t_f} \nonumber\\ &&+ \, \frac{\hbar^2}{(E_{jk}-E_{\ell m})^2} \left\{ \frac{\mathrm{d}}{\mathrm{d} t'} \left[ C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \right] \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \right\}_{t'=t_i}^{t_f} \nonumber\\ &&+ \, \frac{\hbar^2}{(E_{jk}-E_{\ell m})^2} \int^{t_f}_{t_i} \mathrm{d} t' \, \frac{\mathrm{d}^2}{\mathrm{d} t'\,^2} \left[ C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \right] \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \nonumber\\ &=& \, \frac{\mathrm{i}\hbar}{E_{jk}-E_{\ell m}} \left[ C^*_{\ell m}(t') \, C_{jk}(t') \, f(t - t') \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t'-t)} \right]_{t'=t_i}^{t_f} [1 + O(\epsilon)] \end{eqnarray}
Our hypothesis to explain the apparent randomness of quantum mechanical measurements is that some ``hidden variable'' is not sufficiently well controlled in typical practice to determine a single outcome. Here the hidden variable appears to be the stop time $t_f$ or equivalently the duration $T$ of the experiment. If the uncertainty in $t_f$ is $\gg \hbar/\Delta E$ for the smallest energy difference $\Delta E$, the realization average of the complex exponential factor $\exp[-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t_f-t)]$ is negligibly small. We would like to infer from that, neglecting $O(\epsilon)$, that the realization average of (\ref{integral_from_Cjk_tilde}) vanishes, but there are two problems. We cannot factor the realization average
\begin{equation} \overline{ C^*_{\ell m}(t_f) \, C_{jk}(t_f) \, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t_f-t)} } \ne \overline{ C^*_{\ell m}(t_f) \, C_{jk}(t_f) } \,\, \overline{ \mathrm{e}^{-\frac{\mathrm{i}}{\hbar} (E_{jk}-E_{\ell m}) (t_f-t)} } \end{equation}
because the final values of the coefficients $C^*_{\ell m}$ and $C_{jk}$ are correlated with the complex exponential factor. Also, the $t'=t_i$ term in (\ref{integral_from_Cjk_tilde}) will not average to zero; since the initial conditions are imposed at the start time, uncertainty in $t_i$ is presumably not a source of variation in the outcome.
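The dephasing step invoked here, that a spread in $t_f$ much larger than $\hbar/\Delta E$ averages the complex exponential to nearly zero, is easy to confirm numerically. In this sketch the spread of $t_f$ is taken to be Gaussian, an assumption made only for illustration, with $\hbar = 1$:

```python
import numpy as np

# Average of exp[-i (E_jk - E_lm)(t_f - t)/hbar] over an ensemble of stop
# times t_f with Gaussian spread sigma (assumed distribution; hbar = 1).
rng = np.random.default_rng(1)
dE, t = 1.0, 0.0                      # energy difference and reference time
mags = {}
for sigma in (0.1, 1.0, 10.0):        # spread of t_f in units of hbar/dE
    tf = rng.normal(100.0, sigma, 100_000)
    mags[sigma] = abs(np.exp(-1j * dE * (tf - t)).mean())
print(mags)                           # magnitude falls as sigma grows
```

For the Gaussian case the magnitude of the ensemble average is $\exp(-\sigma^2 \Delta E^2 / 2\hbar^2)$, so the average is negligible once $\sigma \gg \hbar/\Delta E$, consistent with the argument above.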
From the surviving terms in the realization average of (\ref{P_j_dot}) we see that \begin{eqnarray} \overline{P_j(t_f)} - \overline{P_j(t_i)} &=& \int_{t_i}^{t_f} \mathrm{d} t' \, \overline{\dot{P}_j(t')} \nonumber\\ &=& 2 \int_{t_i}^{t_f} \mathrm{d} t' \int_{t_i}^{t'} \mathrm{d} t'' \, p(t'') \nonumber\\ &=& 2 \int_{t_i}^{t_f} \mathrm{d} t'' \, (t_f - t'') \, p(t'') \end{eqnarray}
and therefore \begin{equation} \label{probability_difference} \left\lvert \overline{P_j(t_f)} - \overline{P_j(t_i)} \right\rvert < 2 \, T \int_{t_i}^{t_f} \mathrm{d} t'' \, \left\lvert p(t'') \right\rvert \end{equation}
with
\begin{equation} p \equiv
\sum_k \overline{ \lvert \dot{C}_{jk} \rvert ^2 + \frac{\mu}{B} \, \Delta_{jk}^2 \lvert C_{jk} \rvert^2 + \frac{\nu}{B} \, \Delta_{jk}^2 \, \langle \langle C_{jk}^2 \rangle \rangle \, \lvert C_{jk} \rvert ^2 } \end{equation}
If the previously identified issues in the proof of Born's rule are resolved, it remains to show that the LHS of (\ref{probability_difference}) vanishes, at least in the limit as $T\rightarrow \infty$. (As noted earlier, experimental results at variance with Born's rule are likely to be rejected as invalid if $T$ is too small.) To do that, we must show that $p(t)$ decays fast enough that the integral in (\ref{probability_difference}) decreases faster than $1/T$.
\section{Discussion}
\subsection{Sensitivity of the system evolution to a measurement}
Traditional discussions of quantum mechanics maintain that making a measurement changes the evolution of a quantum system from its unitary evolution, as described by the wave equation, to a collapsed state, as described by the measurement side of the bipartite theory. Thus the unitary evolution cannot be observed without interrupting it. This remarkable sensitivity to observation is not explained except as the inevitable corollary of the special treatment of measurement in the theory.
We also find this sensitivity to observation in our picture, but can give more of an explanation for it. The act of measuring a system involves causing it to physically interact with a measurement apparatus, and the variational principle describes the evolution of the combined system. The readout of the measurement at $t_f$ defines the end of the domain of integration of the variational principle. Of course, the theory continues to apply after $t_f$, but the observation at $t_f$, like its preparation at $t_i$ and its spatial boundary conditions, imposes a leakproof barrier to influences from outside the problem domain, so that a solution may be found within that domain without reference to the rest of the universe.
Now if the measurement apparatus were read at some intermediate time $t_m$, the structure of the problem would be different. Instead of applying between $t_i$ and $t_f$, the variational principle would apply twice, from $t_i$ to $t_m$ and from $t_m$ to $t_f$. The appearance of a constraint at $t_m$ as a final condition on the first interval and an initial condition on the second would make this a different problem than the original one from $t_i$ to $t_f$. (As we have explained, the intervention at $t_m$ results in the appearance of an NBC on the solution between $t_i$ and $t_m$, even though it does not dictate the result of the reading at $t_m$.) Consequently, the act of observing the system at $t_m$ changes it, just as in conventional interpretations.
The reader may object that we have not removed the mystery but moved it to a different concept. Instead of declaring by fiat that a measurement changes the system, we have declared that the domain of integration of the variational principle must end at the time (and place) at which the measurement apparatus is read. We haven't explained what is special about the events at $t_f$ that allow us to end the domain there.
The criticism is valid, but we point out that we have pushed back the mystery, or made it less mysterious, by relating it to considerations of BCs. Certainly the description of a measurement in terms of an action integral bounded at $t_i$ and $t_f$ must be an approximation to a more complete theory that includes a greater time interval before and after $[t_i,t_f]$ and a fuller description of the measurement process. On the other hand, the empirical fact that broad statements of great generality apply to measurements, regardless of the system under study or the mechanism of the process, strongly suggests that a simple description is possible, particularly regarding a time before the measurement ($t_i$) and a time after its completion ($t_f$). The validity of the simple description is not necessarily a surprise; it may be that the interactions that can be so described have been adopted as measurement procedures precisely because of their ability to give repeatable quantitative results.
If the simple description proposed in this paper turns out to be successful in description and prediction at some level of approximation, that will be evidence of its usefulness, without denying the possibility of a more complete theory. Eventually such an improved theory may show that collapse/decay to a single eigenvalue occurs at $t_f$ in a physically justifiable way, based on the role of the apparatus in the action, and so it is appropriate to simplify the problem as we have done by terminating the integral at $t_f$ and accepting the NBC there.
An extended analysis of that type would also be appropriate to explore another aspect of the new theory. We have argued that we can solve the variational principle between $t_i$ and $t_f$, which would presumably enable a prediction of the experimental outcome at $t_f$ (based on (a) fixed value(s) of hidden variable(s), of course). We have asserted that the final condition at $t_f$ provides a leakproof barrier to influences from outside that problem domain. But the theory must apply under reversal of the direction of time, so it should also be possible to apply an experimental preparation (initial condition) at $t_{f2} \equiv t_f+T$ and a measurement readout (NBC as a final condition) at $t_f$ to predict an outcome at $t_f$ based on physics between $t_f$ and $t_{f2}$. We suspect that the theory retains sufficient flexibility to allow the two solutions (for $t_i \le t \le t_f$ and $t_f \le t \le t_{f2}$) to agree at $t_f$. It probably helps that we expect (in both cases) to apply \emph{natural} BCs at $t_f$, so we are not actually constraining the value of the measured variable. Also, continuity constraints on fields, wavefunctions and derivatives appearing in the action may help to avoid contradictions. Since these two predictions must agree, the barrier at $t_f$ is not completely leakproof. It is rather a partially permeable membrane, as suggested by the applicability of an NBC that constrains some but not all properties of the system at $t_f$. This type of study may give insight into the nature of the constraint imposed by the measurement readout.
\subsection{Causality and time--ordering issues} Retrocausality---the dependence of phenomena at a given time on phenomena in their future---conflicts with the usual notion of causality---the concept that causes precede their effects in time. However, multiple authors\cite{Cramer_1986, Price_1996, Schulman_1997} have pointed out that such a notion of causality is not necessary to avoid contradictions. If event $A\Rightarrow B$, then $B \Rightarrow \, \sim \! A$ would produce a contradiction. But if we are somehow prevented from declaring that $B \Rightarrow \, \sim \! A$ (or an equivalent combination of statements), then in principle $A\Rightarrow B$ is possible \emph{even if $B$ occurs earlier than $A$}.
To apply this to our use of retrocausality in the variational principle, we are asserting that the NBC at $t_f$ (which applies because a measurement is made at that time, even though the result of the measurement is unconstrained) is an event $A$ that constrains the solution between $t_i$ and $t_f$, so that solution at some intermediate time $t_m$ can be considered as event $B$. But the event $B$ thus chosen is by definition consistent with $A$, since it is a point along the solution based on $A$. It is not possible to claim that $B \Rightarrow \, \sim \! A$, so no contradiction is possible.
Of course, the usual objection to this is that one could intervene at $t_m$ to change the trajectory of events and produce $\sim \! A$ at $t_f$ (going back in time and shooting one's grandparent, in the usual clich\'{e}). But doing this changes the problem, as described above; now the variational principle applies from $t_i$ to $t_m$ and from $t_m$ to $t_f$, with the intervention imposing new BCs at $t_m$. Since this is a different problem than the original one, the original solution does not apply and no claim of a contradiction can be made.
\subsection{Choice of the function $f$}
We have relied on a supposed interaction between wavefunctions at $t_1$ and $t_2$, as expressed in the nonlocal action term (\ref{first_nonlinear_interaction}). The interaction is a \emph{physical} process with a temporal range described by the function $f$. It will be important to determine the form of $f$; this may be explored numerically, but additional physical insight could be very useful.
Our earlier hypotheses that $f$ is a decreasing function of the absolute value of its argument and that it has a finite range $\tau$ are intuitively appealing, but they are not the only possibility. In fact, we cannot rule out the opposite extreme, that $f(t) \equiv 1$. This would mean that the nonlocal interaction has infinite range, but in practice for a given measurement it would be limited to the interval $[t_i,t_f]$. (Without the finite--range limit $\tau$, our analysis in section \ref{Collapse_single_eigenstate} would have to be revisited.)
\subsection{Solving the integrodifferential equation}
As mentioned above, it will be important to solve, or otherwise study, the IDE (\ref{Euler_equation_unconstrained}). That effort may be made theoretically, or numerically if need be. We would like to understand under what conditions the system reaches the collapsed state described in section \ref{Collapse_single_eigenstate}, how fast that late--time state is approached, and which of the possible collapsed states is reached, as a function of the hidden variable(s). It will also be important to test whether the equation produces outcome frequencies consistent with Born's rule, possibly following ideas in section \ref{Borns_rule}.
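One plausible numerical strategy, sketched below for a toy single--mode analogue of \eqref{Euler_equation_unconstrained} (the parameters, the exponential kernel, and the single--mode reduction are all our illustrative assumptions, not the equation of the text), is Picard iteration: freeze the nonlocal term using the previous iterate, integrate the resulting ODE, and repeat until self--consistent:

```python
import numpy as np

# Picard-iteration sketch for an IDE of the schematic form
#   Cddot = (2i/hbar) E Cdot + (mu/B) D2 C + (nu/B) Ctilde[C],
# where Ctilde couples C(t) to its values at all other times.
# Toy single-mode version with illustrative parameters; it only shows the
# "freeze the nonlocal term, integrate, repeat" structure.
hbar, E, mu, nu, B, D2 = 1.0, 1.0, 1.0, 0.05, 1.0, 1.0
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
f = np.exp(-np.abs(t[:, None] - t[None, :]))      # assumed memory kernel

def ctilde(C):
    # schematic nonlocal term: C(t) * \int dt' f(t - t') |C(t')|^2
    return C * (f * (np.abs(C) ** 2)[None, :]).sum(axis=1) * dt

C = np.full(t.shape, 0.7 + 0j)                    # initial guess
for _ in range(30):
    src = (nu / B) * ctilde(C)                    # nonlocal term, frozen
    Cn = np.empty_like(C)
    Cn[0], v = 0.7 + 0j, 0.0 + 0j                 # initial conditions
    for k in range(len(t) - 1):                   # explicit Euler in time
        a = (2j / hbar) * E * v + (mu / B) * D2 * Cn[k] + src[k]
        v = v + dt * a
        Cn[k + 1] = Cn[k] + dt * v
    delta = np.max(np.abs(Cn - C))
    C = Cn
    if delta < 1e-8:                              # self-consistent solution
        break
```

A production treatment would of course need the full mode sum, the NBC at $t_f$ (making this a two--point boundary--value problem rather than the forward integration used here), and a better integrator than explicit Euler.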
One question is whether, given a choice of initial conditions and hidden variable(s), the solution to the IDE is unique (and even whether a solution exists). If there is always a unique solution, the theory may be completely deterministic (although it remains to be seen what that means for a retrocausal theory), so we may be able to dispense completely with the idea that quantum mechanical processes depend on \emph{intrinsically random} variables. Such a discovery might have far--reaching ramifications in quantum information technologies that rely on (supposed) randomness.
If this understanding enables us to make predictions based on the theory, we will look for experimentally testable predictions. Although we have argued that the new theory will agree with many features of conventional theory, it is certainly possible that it could differ in some ways.\footnote{We expect that it will differ in the normalization factor applied to the wavefunction in experiments like that of Renninger, as discussed above, but that is a difference in how a physical state is described mathematically, not a difference in the state itself, and so not experimentally testable.} One possibility is that results that have historically been seen to vary, supposedly due to intrinsic randomness, may vary less or not at all if a hidden (that is, historically uncontrolled) variable is controlled in new experiments (guided by new predictions about how well or to what values it must be controlled).
Of course, it is possible that the particular choice of action we have made, and the IDE resulting from it, do not correspond to nature. Even in that case, our exposition here shows that a variational principle of this type, including our assumptions of retrocausality, nonlocality, and one or more hidden variables, can lead to a plausible theory that avoids, resolves or explains problematic features of conventional quantum theory. If the theory presented here is not borne out, a similarly--constructed theory with a different form of the action may be more successful.
\section{Acknowledgments}
The author appreciates the support of the National Nuclear Security Administration's Advanced Scientific Computing (ASC) program, and useful discussions with Kenneth Wharton and Daniel Sheehan. Most importantly, the ideas were developed and discussed over a long period of time with Dale W. Harrison, without whom this work would not have been possible.
\appendix* \section{Calculus of Variations: Two--time Variant} \label{Two_time_variant}
A basic problem in the calculus of variations \cite{Courant} is to find the function $\phi(t)$ for which the integral
\begin{equation} \label{variational_basic} S[\phi] = \int_{a}^{b} \mathrm{d} t \, F(t,\phi(t),\dot{\phi}(t)) \end{equation}
is stationary with respect to infinitesimal changes in the function $\phi$. Here $F$ is a given function with continuous first partial derivatives and piecewise continuous second derivatives. The function $\phi(t)$ is required to be continuous with piecewise continuous first derivative, and must satisfy
\begin{equation} \label{fixed_BCs} \phi(a)=A \quad \phi(b)=B \end{equation}
for given $A$ and $B$. Under these conditions a necessary condition for the stationarity of (\ref{variational_basic}) is the Euler equation
\begin{equation} 0 = \frac{\partial F}{\partial \phi}
- \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial F}{\partial \dot{\phi}} \end{equation}
\subsection{Two--time variant}
In our case the integrated function $F$ depends on the unknown function $\phi$ at two times, both of which are integrated over:
\begin{equation} \label{two_time_functional} S[\phi] = \int_{a}^{b} \mathrm{d} t_1 \, \int_{a}^{b} \mathrm{d} t_2 \, F(t_1,t_2,\phi(t_1),\dot{\phi}(t_1),\phi(t_2),\dot{\phi}(t_2)) \end{equation}
As in the standard derivation, we find a necessary condition by defining \begin{equation} \label{necessary_epsilon} \theta(t,\epsilon) = \phi(t) + \epsilon \, \eta(t) \end{equation}
and requiring that
\begin{equation}
\left. \frac{\mathrm{d} S[\theta]}{\mathrm{d} \epsilon} \right|_{\epsilon=0} = 0 \end{equation}
for any continuous function $\eta(t)$ with piecewise continuous derivative and
\begin{equation} \label{eta_zeroAB} \eta(a)=\eta(b)=0 \end{equation}
The stationarity condition then becomes
\begin{eqnarray} \label{eta_integral} 0 &=& \int_{a}^{b} \mathrm{d} t_1 \, \int_{a}^{b} \mathrm{d} t_2 \, \left[ \eta(t_1) \frac{\partial F}{\partial \phi(t_1)} + \dot{\eta}(t_1) \frac{\partial F}{\partial \dot{\phi}(t_1)} + \eta(t_2) \frac{\partial F}{\partial \phi(t_2)} + \dot{\eta}(t_2) \frac{\partial F}{\partial \dot{\phi}(t_2)} \right] \nonumber\\ &=& \int_{a}^{b} \mathrm{d} t_1 \, \int_{a}^{b} \mathrm{d} t_2 \, \left[ \eta(t_1) \left( \frac{\partial F}{\partial \phi(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) + \eta(t_2) \left( \frac{\partial F}{\partial \phi(t_2)}
- \left. \frac{\partial}{\partial t_2} \right |_{t_1}\frac{\partial F}{\partial \dot{\phi}(t_2)} \right) \right] \qquad \end{eqnarray}
Since $\eta(t)$ is arbitrary (subject to the restrictions already stated), this requires that \begin{equation} \label{necessary_1} 0 = \int_{a}^{b} \mathrm{d} t_2 \, \left( \frac{\partial F}{\partial \phi(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \end{equation}
and
\begin{equation} \label{necessary_2} 0 = \int_{a}^{b} \mathrm{d} t_1 \, \left( \frac{\partial F}{\partial \phi(t_2)}
- \left. \frac{\partial}{\partial t_2} \right |_{t_1}\frac{\partial F}{\partial \dot{\phi}(t_2)} \right) \end{equation}
as necessary conditions for the stationarity of $S[\phi]$.
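For a concrete illustration of these conditions, consider the symmetric choice
\begin{equation} F = \tfrac{1}{2}\, h(t_2)\, \dot{\phi}(t_1)^2 + \tfrac{1}{2}\, h(t_1)\, \dot{\phi}(t_2)^2 \end{equation}
with $h$ a given positive function. Then $\partial F / \partial \phi(t_1) = 0$ and $\partial F / \partial \dot{\phi}(t_1) = h(t_2)\, \dot{\phi}(t_1)$, so condition (\ref{necessary_1}) reduces to
\begin{equation} 0 = - \ddot{\phi}(t_1) \int_{a}^{b} \mathrm{d} t_2 \, h(t_2), \end{equation}
and, since the integral is positive, to $\ddot{\phi} = 0$: the stationary paths are straight lines in $t$.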
\subsection{Special cases}
A special case of interest is when $F$ factors into $t_1$--dependent and $t_2$--dependent factors: \begin{equation} \label{F_factors} F(t_1,t_2,\phi(t_1),\dot{\phi}(t_1),\phi(t_2),\dot{\phi}(t_2)) = G(t_1,\phi(t_1),\dot{\phi}(t_1)) H(t_2,\phi(t_2),\dot{\phi}(t_2)) \end{equation} so that
\begin{equation} \label{factored_integral} S[\phi] = \int_{a}^{b} \mathrm{d} t_1 \, G(t_1,\phi(t_1),\dot{\phi}(t_1)) \int_{a}^{b} \mathrm{d} t_2 \, H(t_2,\phi(t_2),\dot{\phi}(t_2)) \end{equation}
and the necessary conditions (\ref{necessary_1}) and (\ref{necessary_2}) become
\begin{equation} \label{local-stationarity_1} 0 = \frac{\partial G}{\partial \phi(t_1)} - \frac{\mathrm{d}}{\mathrm{d} t_1} \frac{\partial G}{\partial \dot{\phi}(t_1)} \end{equation}
and
\begin{equation} \label{local-stationarity_2} 0 = \frac{\partial H}{\partial \phi(t_2)} - \frac{\mathrm{d}}{\mathrm{d} t_2} \frac{\partial H}{\partial \dot{\phi}(t_2)} \end{equation}
if we exclude the possibility that either of the integrals in (\ref{factored_integral}) vanishes. These relations are of course the stationarity conditions for those two integrals if they were considered independently. We observe that the special case in which $F$ factors as in (\ref{F_factors}) is significantly different from the general case, in that the stationarity conditions for the former can be expressed as differential equations but the latter requires integro--differential equations.
In this paper our concern is limited to functions $F$ that are symmetric in $t_1$ and $t_2$, that is, invariant under their interchange. For this special case, equations (\ref{necessary_1}) and (\ref{necessary_2}) are equivalent, as are (\ref{local-stationarity_1}) and (\ref{local-stationarity_2}).
\subsection{Natural boundary condition}
Consider the case in which the boundary conditions (\ref{fixed_BCs}) are replaced by
\begin{equation} \label{one_fixed_BC} \phi(a)=A \end{equation}
that is, the solution is not constrained at $t=b$ except, as will be shown, by the natural boundary condition (NBC). Then condition (\ref{eta_zeroAB}) is replaced by
\begin{equation} \label{eta_zeroA} \eta(a)=0 \end{equation}
(no constraint on $\eta(b)$) and so the second line of (\ref{eta_integral}) becomes
\begin{eqnarray} 0 &=& \int_{a}^{b} \mathrm{d} t_1 \, \int_{a}^{b} \mathrm{d} t_2 \, \left[ \eta(t_1) \left( \frac{\partial F}{\partial \phi(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) + \eta(t_2) \left( \frac{\partial F}{\partial \phi(t_2)}
- \left. \frac{\partial}{\partial t_2} \right |_{t_1}\frac{\partial F}{\partial \dot{\phi}(t_2)} \right) \right] \nonumber\\ &&+ \int_{a}^{b} \mathrm{d} t_2 \,\, \eta(t_1) \left. \left( \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \right\rvert_{t_1=a}^b \end{eqnarray}
But since the functions $\eta(t)$ satisfying (\ref{eta_zeroAB}) are among the set of functions allowed by (\ref{eta_zeroA}), $\phi$ must still satisfy (\ref{necessary_1}) and (\ref{necessary_2}), so the last equation becomes simply
\begin{eqnarray} 0 &=& \int_{a}^{b} \mathrm{d} t_2 \,\, \left. \left(\eta(t_1) \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \right\rvert_{t_1=a}^b \nonumber\\ &=& \eta(b) \int_{a}^{b} \mathrm{d} t_2 \, \left. \left( \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \right\rvert_{t_1=b} \end{eqnarray}
so we find that the NBC is
\begin{equation} 0 = \int_{a}^{b} \mathrm{d} t_2 \,\, \left. \left( \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \right\rvert_{t_1=b} \end{equation}
and of course by symmetry
\begin{equation} 0 = \int_{a}^{b} \mathrm{d} t_1 \,\, \left. \left( \frac{\partial F}{\partial \dot{\phi}(t_2)} \right) \right\rvert_{t_2=b} \end{equation}
\subsection{Lagrange multipliers}
A related problem is to find a stationary point of $S[\phi]$, as given by (\ref{two_time_functional}), subject to a constraint \begin{equation} \label{constraint} K(t,\phi(t)) = 0 \qquad \forall t \end{equation}
This can be addressed by the method of Lagrange multipliers, in a straightforward extension of the derivation given in reference \cite{Courant}. For the special case of symmetric $F$, that analysis shows that we can introduce a Lagrange multiplier $\lambda(t)$ and replace condition (\ref{necessary_1}) by
\begin{eqnarray} \label{necessary_1_Lagrange} 0 &=& \int_{a}^{b} \mathrm{d} t_2 \, \left( \frac{\partial F}{\partial \phi(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial F}{\partial \dot{\phi}(t_1)}
+ \lambda(t_1) \left. \frac{\partial K}{\partial \phi} \right|_{t=t_1} \right) \nonumber\\
&=& (b-a) \, \lambda(t_1) \left. \frac{\partial K}{\partial \phi} \right|_{t=t_1} + \int_{a}^{b} \mathrm{d} t_2 \, \left( \frac{\partial F}{\partial \phi(t_1)}
- \left. \frac{\partial}{\partial t_1} \right |_{t_2} \frac{\partial F}{\partial \dot{\phi}(t_1)} \right) \end{eqnarray}
The solution of this integro--differential equation is $\phi$ as a function of $t$ and the entire function $\lambda$. Finally, $\lambda(t)$ is determined by requiring the satisfaction of (\ref{constraint}).
\end{document} | arXiv |
Sheaf cohomology with compact supports (and Verdier duality?)
Consider a manifold and a complex where cochains are sections of vector bundles and coboundary maps are differential operators, which are locally exact except in lowest degree (think de Rham complex). I'd like to know the relationship between the cohomology of this complex and the cohomology of the formal adjoint complex with compact supports (for the de Rham complex, this is again the de Rham complex, but with compact supports, and the relationship is given by Poincaré duality).
Update: Just added a bounty to raise the question's profile. The biggest obstacle, as came out of the discussion on an unsuccessful previous answer, to a straightforward application of Verdier duality is that it's hard to see how to connect the dual sheaf $\mathcal{V}^\vee$ with the sections of the dual density vector bundles $\Gamma(\tilde{V}^{\bullet*})$. The basic construction of $\mathcal{V}^\vee$ requires, for an open $U\subset M$, the assignment $U\mapsto \mathrm{Hom}_\mathbb{Z}(\Gamma(U,V^\bullet),\mathbb{Z})$, where $\mathrm{Hom}_\mathbb{Z}$ is taken in the category of abelian groups, which is MUCH bigger than $\Gamma(U,V^{\bullet*})$ itself.
Let me be more explicit, which unfortunately requires some notation. Let $M$ be the manifold, $V^i\to M$ be the vector bundles (non-zero for only finitely many $i$) and $d^i \colon \Gamma(V^i) \to \Gamma(V^{i+1})$ be the coboundary maps. Then $H^i(\Gamma(V^\bullet),d^\bullet) = \ker d^i/\operatorname{im} d_{i-1}$. By local exactness I mean that for every point $x\in M$ there exists an open neighborhood $U_x$ such that $H^i(\Gamma(V^\bullet|_{U_x}), d^\bullet) = 0$ for all except the smallest non-trivial $i$. Now, for each vector bundle $V\to M$, I can define a densitized dual bundle $\tilde{V}^* = V^*\otimes_M \Lambda^{\dim M} T^*M$, which is just the dual bundle $V^*$ tensored with the bundle of volume forms (aka densities). For any differential operator $d\colon \Gamma(V) \to \Gamma(W)$ between vector bundles $V$ and $W$ over $M$, I can define its formal adjoint $d^*\colon \Gamma(\tilde{W}^*) \to \Gamma(\tilde{V}^*)$, locally, by using integration by parts in local coordinates or, globally, by requiring that there exist a bidifferential operator $g$ such that $w\cdot d[v] - d^*[w]\cdot v = \mathrm{d} g[w,v]$. Thus, the formal adjoint complex is defined by the coboundary maps $d^{i*}\colon \Gamma(\tilde{V}^{(i+1)*}) \to \Gamma(\tilde{V}^{i*})$.
There is a natural, non-degenerate, bilinear pairing $\langle u, v \rangle = \int_M u\cdot v$ for $v\in \Gamma(V)$ and $u\in \Gamma_c(\tilde{V}^*)$, where the subscript $c$ refers to compactly supported sections. Because $\langle u^{i+1}, d^i v^i \rangle = \langle d^{i*} u^{i+1}, v^i \rangle$ this pairing descends to a bilinear pairing in cohomology $$ \langle-,-\rangle\colon H^i(\Gamma_c(\tilde{V}^{\bullet*}),d^{(\bullet-1)*}) \times H^i(\Gamma(V^\bullet),d^i) \to \mathbb{R} . $$
Finally, my question can be boiled down to the following: is this pairing non-degenerate (and if not what is its rank)?
As I mentioned in my first paragraph, the case $V^i = \Lambda^i T^*M$ with $d^i$ the de Rham differential is well known. Its formal adjoint complex is isomorphic to the de Rham complex itself. Essentially, Poincaré duality states that the natural pairing in cohomology is non-degenerate. I am hoping that a more general result can be deduced from Verdier duality applied to the sheaf $\mathcal{V}$ resolved by the complex $(\Gamma(V^\bullet),d^\bullet)$. I know that the sheaf cohomology $H^i(M,\mathcal{V})$ can be identified with $H^i(\Gamma(V^\bullet),d^\bullet)$. I also know that the abstract form of the duality states that the algebraic dual $H^i(M,\mathcal{V})^*$ is given by the sheaf cohomology with compact supports $H^i_c(M,\mathcal{V}^\vee)$ with coefficients in the "dualizing sheaf" $\mathcal{V}^\vee$. Unfortunately, I'm having trouble extracting the relationship between $\mathcal{V}^\vee$ and my formal adjoint complex $(\Gamma_c(\tilde{V}^{\bullet*}), d^{(\bullet-1)*})$ from standard references (e.g., the books of Iversen or Kashiwara and Schapira).
dg.differential-geometry homological-algebra sheaf-cohomology
Igor Khavkine
$\begingroup$ The de Rham complex, critically, is not locally exact in degree 0 - consider the constant functions! Do you wish your complexes to fail to be locally exact in some degree? Otherwise, the complex is $0$ in the derived category of sheaves, and the cohomology vanishes. I think you mean to consider complexes that fail to be locally exact. The failure of local exactness is measured by $\operatorname{ker} d^i/\operatorname{im} d^{i+1}$ in the category of sheaves. I think your best hope is to use Verdier duality for that sheaf, not the whole complex. $\endgroup$ – Will Sawin Sep 20 '13 at 3:55
$\begingroup$ Will, yes, that was a bit sloppy. Like in the de Rham case, my complexes are expected to fail to be locally exact in the lowest non-trivial degree. That is, they provide a fine resolution of some sheaf (like locally constant functions in the de Rham case). I'm curious about your remark. Could you expand on how to apply Verdier duality to the resolved sheaf, and then resolve the Verdier dual sheaf itself using vector bundles? $\endgroup$ – Igor Khavkine Sep 20 '13 at 8:01
$\begingroup$ If I had a complete solution, I would have posted it as an answer. But I'll think about it and see if I can get it to work. $\endgroup$ – Will Sawin Sep 20 '13 at 15:11
You will run into some issues with differential equations with singularities.
Consider the differential operator $x\frac{d}{dx} -t $ from the trivial rank $1$ vector bundle on $\mathbb R$ to itself, for some constant $t$. The adjoint map is $-xd/dx -1 - t$, which is another operator in the same class.
If $t$ is not a nonnegative integer, then this complex is locally exact. We have to check that the differential equation $xdy/dx - ty =f(x)$ has solutions for smooth $f$. We will check this for $t<0$, but I think it is also true when $t$ is not a nonnegative integer. A solution is:
$$y = \frac{ f(0) }{t} + x^t \int_{0}^x \left(f(z)-f(0)\right) z^{-1-t} dz $$
This gives a global solution also, so it is globally exact. Moreover, since any solution to the differential equation is a multiple of $x^{t}$, if $t$ is not a nonnegative integer, there are no nonzero solutions, so the regular cohomology is trivial.
If $t$ is a negative integer, then the dual complex will have nontrivial $H^0$ due to the nonzero solutions, otherwise, say for $-1<t<0$, we can easily show that the dual complex, with $t$ also in that range, will have nontrivial compactly supported $H^1$. Indeed, since there are no solutions to the homogeneous version of the differential equation, our solution of the inhomogeneous version is unique, and we can easily find a compactly supported $f(x)$ where the unique solution $y$ is not compactly supported.
On the other hand, suppose we have a smoothness condition - specifically, that the kernel of the first map is a locally constant sheaf. In other words, a local solution to the differential equations that define the first map can be extended uniquely along any path, with possible monodromy.
Given a differential equation, a common trick is to add enough extra functions to make the equation first order. We can just as easily do this with a complex of vector bundles with differential equation operators - add variables to each map in the reverse order. This process is a homotopy equivalence of complexes, as is its dual.
Take the locally constant kernel sheaf, view it as a vector bundle with flat connection, and tensor it with the de Rham complex. We will build a map from this complex to the original one. This is plausible because they are both injective resolutions of the same thing, but we need to check it can be done with vector bundle maps. This is trivial in degree $0$. If we have built maps for the first $n$ degrees, we compose the $n$th map with $d$ and get a first-order function on $\Omega^{n-1}$ that vanishes, locally, on the image of anything from $\Omega^{n-2}$. Such a function, by the linear algebra of differential forms at a single point, depends only on $d$ of the form on $\Omega_{n}$.
This bundle map is a quasi-isomorphism of sheaf complexes. If we can check that its dual is also a quasi-isomorphism, we win - duality in an arbitrary locally free complex can be reduced to duality for the de Rham complex. By using mapping cones, it is sufficient to check that if a first-order complex is locally exact, its dual is also locally exact.
Let $V_0 \to V_1 \to \dots V_n$ be a locally exact first-order complex. We will actually be able to find a homotopy to $0$. $d: V_0 \to V_1$ is a first-order differential equation with no local solutions. If it has no solutions, it must have a formal reason. Specifically, if $f_1,\dots,f_k$ are local coordinates for $V_0$, then by taking linear combinations of the differential equations, their derivatives, and the commutation relations, we must be able to obtain $f_1,\dots,f_k$. Otherwise we could solve it along curves and extend consistently to the whole space.
But this linear combination just gives an operator $k: V_1 \to V_0$ such that $kd$ is the identity. Now we have a differential equation $k\oplus d$ on $V_1$, still linear, that has no solutions. Repeating the process, we eventually get a homotopy between the bundle and $0$. Applying this homotopy to the dual, it will be locally exact as well.
Will Sawin
$\begingroup$ Are you saying that such examples prevent the non-degenerate pairing between sections of the vector bundles of the original complex and its dual from descending, in general, to a non-degenerate pairing on the cohomologies? I'd still like to know how to prove that the pairing is non-degenerate on the cohomologies at least in a few simple cases where it is true. $\endgroup$ – Igor Khavkine Sep 21 '13 at 1:57
$\begingroup$ Yes - because it can prevent the cohomology groups from having the same rank. I have an idea for one case that I will try to write up soon. $\endgroup$ – Will Sawin Sep 21 '13 at 2:02
How do urban mobility (geo)graph's topological properties fill a map?
Leonardo Bacelar Lima Santos1,
Luiz Max Carvalho2,
Wilson Seron3,
Flávio C. Coelho4,
Elbert E. Macau3,5,
Marcos G. Quiles3 &
Antônio M. V. Monteiro5
Urban mobility data are important to areas ranging from traffic engineering to the analysis of outbreaks and disasters. In this paper, we study mobility data from a major Brazilian city from a geographical viewpoint using a Complex Network approach. The case study is based on intra-urban mobility data from the Metropolitan area of Rio de Janeiro (Brazil), comprising more than 480 spatial network nodes. While for the mobility flow data a log-normal distribution outperformed the power law, we also found moderate evidence for scale-free and small-world effects in the flow network's degree distribution. We employ a novel open-source GIS tool to display the (geo)graph's topological properties on maps and observe a strong traffic-topology association, as well as a consistent localization of hubs across different flow-threshold networks: in the central commercial area for lower thresholds and in high-population residential areas for higher thresholds. This set of results, combining statistical, topological and geographical analyses, may represent an important tool for policymakers and stakeholders in urban planning, especially through the identification of zones with few but strong links in a real data-driven mobility network.
Urban mobility data are important to several areas, from traffic engineering to the analysis of outbreaks and disasters. Many studies explore patterns, applicability, and limitations of urban mobility (Gonzalez et al. 2008; Song et al. 2010; Simini et al. 2012; Guo et al. 2012; Wang et al. 2012; Louail et al. 2015). A common thread among these studies is the importance of spatial structure. In this work, the spatial structure of an actual data-based mobility complex network is explored.
There are several classical approaches to the analysis of urban mobility data, from mechanical models to statistical ones (Costa et al. 2017; Barbosa et al. 2018). According to Barat and Cattuto (2013), in many cases, urban mobility information finds a convenient representation in terms of complex networks. The complex network approach emerges as a natural mechanism to handle mobility data, taking areas as nodes and movements between origins and destinations as edges. However, there are difficulties in incorporating human mobility into models from both technical and ethical perspectives (Balcan et al. 2009). At the intra-urban scale, this difficulty is magnified due to the complex structure of the urban territory. Thus, a general approach for handling geographical data is needed.
As presented in Barthélemy (2011), a review of spatial networks, many complex systems are organized as networks whose elements (nodes and edges) are embedded in (geographical) space, and topology alone does not contain all the information needed to understand processes and to propose scientific and technological developments. A geographical approach to complex systems analysis is especially important for mobility phenomena (Barthélemy 2011).
Santos et al. (2017) proposed the (geo)graphs approach, in which a (geo)graph is defined as a graph in which the nodes have a known geographical location, and the edges have spatial dependence. (Geo)graphs provide a simple tool to manage, represent and analyze geographical complex network.
In this paper, we explore the spatial structure of an actual data-based mobility complex network.
In particular, we applied a set of procedures to use origin-destination (OD) data, originally from traffic engineering, to recover useful information about mobility. OD data represent daily travels between zones in a region and are especially interesting at the intra-urban scale. According to Estrada (2012), social proximity refers to actors that belong to the same space of social relations; OD data can thus be seen as a "social" relation between origin and destination areas.
The central question in this paper is: how to recover useful information from urban mobility data considering its intrinsic spatial properties?
In several previous works (Song et al. 2010; De Montis et al. 2007; Chowell et al. 2003; Soh et al. 2010; Brockmann et al. 2006), the power-law behaviour of human mobility models and data was explored.
In this work, as a first step, we extend the distribution-fitting method of Clauset et al. (2009) by employing a Bayesian approach and computing Bayes factors. We then use the mobility data to construct mobility networks and calculate their topological properties. Finally, we return to the geographical domain by representing and analyzing the topological measures on maps.
The traffic-topology analysis is a traditional object of research in the mobility network literature (Chowell et al. 2003; Soh et al. 2010; De Montis et al. 2007). However, there is an open question: how is a hub's strength distributed among its links? In this work, we propose a complementary traffic-topology analysis with an explicit spatial meaning.

The network connection criterion applied in this work is similar to those in other studies in the literature (Chowell et al. 2003; De Montis et al. 2007; Soh et al. 2010), which also investigated flow weight distributions and traffic-topology correlations. However, in contrast to those studies, we test different distributions and explore the spatial aspect of the results.
This paper is organized as follows: "Material and methods" section contains the data and methods of this investigation, in particular the fitting of power law distributions ("Power law analysis" section), power law regression ("Power law regression" section) and geographs ("(Geo)graph tools" section). Results and discussion are presented in "Results and discussion" section. Finally, some concluding remarks and perspective of future work are drawn in "Conclusions and perspectives" section.
The Metropolitan Region of Rio de Janeiro (MRRJ) encompasses 20 cities, for a total of 10,894,756 inhabitants. It is the second largest metropolitan area in Brazil, the third in South America and the 20th in the world.
To facilitate mobility studies, the region is divided into a set of traffic zones (TZ). For this specific work, we consider a set of 485 traffic zones, in which each TZ has appeared at least once as an origin or a destination in the set of travels of more than 99 thousand interviewed people in an Origin-Destination Survey (Companhia estadual de engenharia de transporte e logistica et al. 2010). From a network perspective, each TZ is represented by a node.
The original data (Companhia estadual de engenharia de transporte e logistica et al. 2010) consists of a list of travels, each one with an origin TZ and a destination TZ. This dataset is summarized into a flow matrix, in which each element f(i,j) records the number of travels between TZs i and j, in both directions (i.e. the matrix is symmetric).
The driving-mode data used here take into account car, bus and motorcycle trips, representing more than 56% of the total number of travels.
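The summarization of the travel list into a symmetric flow matrix can be sketched as follows (the travel records and zone count below are toy values, not the actual survey data):

```python
import numpy as np

def flow_matrix(travels, n_zones):
    """Symmetric flow matrix: f[i, j] counts travels between zones i and j,
    in both directions; the diagonal counts internal travels."""
    f = np.zeros((n_zones, n_zones), dtype=int)
    for origin, destination in travels:
        f[origin, destination] += 1
        if origin != destination:
            f[destination, origin] += 1
    return f

travels = [(0, 1), (1, 0), (0, 2), (2, 2)]   # (origin TZ, destination TZ)
f = flow_matrix(travels, n_zones=3)

internal = np.trace(f)            # travels with origin and destination in the same TZ
assert (f == f.T).all()           # symmetry holds by construction
```

The fraction of internal travels (12% in the actual data) is then `internal / len(travels)` on this representation.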
Power law analysis
We extend the approach of Clauset et al. (2009) by employing a Bayesian approach to fitting and comparing distributions for the data presented in this article. Clauset et al. (2009) proposed to select the lower threshold xmin after which the data follows a power law regime by minimizing the Kolmogorov-Smirnov (KS) goodness-of-fit statistic. We adopt their procedure to estimate xmin and then proceed by assuming xmin is fixed and known. Parameter estimation and model selection for various distributions are made assuming the same xmin for all scenarios.
Let x={x1,x2,…,xN} denote the observed data. The power law distribution has probability density function (p.d.f.):
$$f(x_{i} | \alpha, x_{\min}) = \frac{\alpha - 1}{x_{\min}} \left(\frac{x_{i}}{ x_{\min} }\right)^{-\alpha}. $$
We complete the model specification with a prior distribution π(α)∝1/(α−1), which leads to a proper posterior distribution p(α|x,xmin).
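A minimal frequentist sketch of this fitting step may help fix ideas. The analysis above uses HMC in Stan; here the standard maximum-likelihood estimator α̂ = 1 + n / Σ ln(xi/xmin) and a KS scan over candidate xmin values, in the spirit of Clauset et al. (2009), stand in for the Bayesian machinery, and the data are synthetic:

```python
import numpy as np

def alpha_mle(x, xmin):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks_stat(x, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical and fitted tail CDFs."""
    x = np.sort(x[x >= xmin])
    emp = np.arange(1, len(x) + 1) / len(x)
    fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.max(np.abs(emp - fit))

def fit_power_law(x, candidates):
    """Clauset-style fit: pick the candidate xmin minimizing the KS statistic."""
    best = min(candidates, key=lambda xm: ks_stat(x, xm, alpha_mle(x, xm)))
    return best, alpha_mle(x, best)

rng = np.random.default_rng(42)
# inverse-CDF sampling from a pure power law with alpha = 2.5, xmin = 1
x = (1.0 - rng.uniform(size=20000)) ** (-1.0 / (2.5 - 1.0))
xmin_hat, alpha_hat = fit_power_law(x, candidates=[1.0, 2.0, 5.0])
```

Because the synthetic data follow a pure power law, any candidate xmin yields a compatible tail fit and α̂ lands close to the true exponent 2.5.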
The second distribution we analyze is the stretched exponential (Weibull, see Clauset et al. Table 1), with p.d.f
$$f(x_{i} | \lambda, \beta, x_{\min}) = \lambda\beta\exp\left(\lambda x_{\min}^{\beta}\right) x_{i}^{\beta-1} \exp\left(-\lambda x_{i}^{\beta} \right). $$
We employ Gamma priors on the parameters β and λ:
$$\begin{array}{*{20}l} \pi_{\beta}(\beta | a_{1}, b_{1}) = \frac{b_{1}^{a_{1}}}{\Gamma(a_{1})} \beta^{a_{1} - 1} \exp(-b_{1} \beta),\\ \pi_{\lambda}(\lambda | a_{2}, b_{2}) = \frac{b_{2}^{a_{2}}}{\Gamma(a_{2})} \lambda^{a_{2} - 1} \exp(-b_{2} \lambda). \end{array} $$
The third and final distribution we consider in this study is the lower-truncated log-normal distribution:
$$f(x_{i} | \mu, \sigma, x_{\min}) = \sqrt{\frac{2}{\pi\sigma^{2}}} \frac{1}{x_{i}}\frac{\exp\left(-\frac{(\ln x_{i} - \mu)^{2}}{2\sigma^{2}}\right)}{ \text{erfc}\left(\frac{\ln x_{\min} -\mu}{\sqrt{2}\sigma}\right)}. $$
For the analysis of this distribution we choose a normal (Gaussian) prior for μ with mean 1 and standard deviation 5 and a Gamma prior for σ with a=b=1. Notice we parametrize the normal (and log-normal) distribution in terms of mean and standard deviation.
We estimated the parameters of these distributions using the dynamic Hamiltonian Monte Carlo (HMC) algorithm implemented in the Stan probabilistic programming language (Carpenter et al. 2017) through the rstan package (Stan Development Team 2018) of the R programming language (R Core Team 2018), version 3.5.1. We ran four chains of 2000 iterations and checked convergence by making sure the split-Rhat statistic was below 1.01. Monte Carlo standard errors (mcse) were below 1% of the posterior standard deviations for all estimates reported in this paper.
To compare the fit of the distributions considered here to data we employ Bayes factors (Jeffreys 1935; Kass and Raftery 1995). Let \(\mathcal {M}_{0}\) and \(\mathcal {M}_{1}\) be two models or hypotheses one wants to test after observing data Y. The Bayes factor is defined as
$$\text{BF}_{10} = \frac{p(Y | \mathcal{M}_{1}) }{p(Y | \mathcal{M}_{0})}, $$
which quantifies the amount of support in favor of \(\mathcal {M}_{1}\) compared to \(\mathcal {M}_{0}\). For reasons of numerical stability, one usually computes lnBF10. We employed the routines implemented in the bridgesampling R package (Gronau et al. 2017) to compute log-marginal likelihoods \(\ln p(Y| \mathcal {M}_{i})\) which were then used to compute log Bayes factors.
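The bridge-sampling computation of marginal likelihoods is involved; as a rough large-sample stand-in (not the method used above), one can compare maximized log-likelihoods or BIC values of the candidate tail models. The sketch below checks this on synthetic power-law draws, where the power law is expected to win decisively; the plug-in log-normal fit is a crude simplification:

```python
import numpy as np
from scipy import stats

def loglik_power_law(x, xmin):
    """Maximized log-likelihood of the tail power law (1 free parameter)."""
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    return np.sum(np.log((alpha - 1.0) / xmin) - alpha * np.log(x / xmin))

def loglik_lognormal(x, xmin):
    """Plug-in log-likelihood of a log-normal truncated below at xmin
    (2 free parameters, crudely estimated from the truncated sample)."""
    mu, sigma = np.mean(np.log(x)), np.std(np.log(x))
    logpdf = stats.norm.logpdf(np.log(x), mu, sigma) - np.log(x)
    log_tail = stats.norm.logsf(np.log(xmin), mu, sigma)   # ln P(X > xmin)
    return np.sum(logpdf - log_tail)

# Synthetic pure power-law tail: alpha = 2.5, xmin = 1
rng = np.random.default_rng(0)
xmin = 1.0
x = xmin * (1.0 - rng.uniform(size=5000)) ** (-1.0 / 1.5)

ll_pl, ll_ln = loglik_power_law(x, xmin), loglik_lognormal(x, xmin)
# BIC difference as a rough stand-in for -2 ln BF (lower BIC = preferred model)
bic_pl = 1 * np.log(len(x)) - 2.0 * ll_pl
bic_ln = 2 * np.log(len(x)) - 2.0 * ll_ln
```

On the actual flow data the comparison above goes the other way, with the log-normal outperforming the power law.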
Power law regression
A goal distinct from fitting a power law distribution is assessing whether two variables of interest are related according to a power law. Let wi be the weight of node i and let di be its degree. We say that w and d are related by a power law if the relationship w ∝ d^β holds for some β>0.
One routinely employed option to determine the exponent β from data is to fit the model:
$$w_{i} \sim \text{Normal}(\mu_{i} = K d_{i}^{\beta}, \sigma) $$
by least squares – see e.g. De Montis et al. (2007). While this approach can often lead to good estimates, it is poorly suited to strictly positive data because it allows negative predictions.
A better model for strictly positive data like that analyzed here is the Gamma regression model with an identity link function:
$$\begin{array}{*{20}l} w_{i} &\sim \text{Gamma}(\mu_{i} = K d_{i}^{\beta}, \kappa), \end{array} $$
where we parametrize the Gamma distribution in terms of a mean μ and a shape κ, with p.d.f.
$$f(w_{i} | \mu_{i}, \kappa) = \frac{(\kappa/\mu_{i})^{\kappa}}{\Gamma(\kappa)} w_{i}^{\kappa - 1} \exp\left(- \frac{\kappa w_{i}}{\mu_{i}}\right). $$
This model allows for a strictly positive response variable whilst retaining directly interpretable parameters.
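The Gamma model above was fitted with HMC via brms; a simpler maximum-likelihood sketch of the same mean/shape parametrization, on synthetic degree-weight data with hypothetical true values, can be written as:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, d, w):
    """Negative log-likelihood of w_i ~ Gamma(mean = K * d_i**beta, shape = kappa),
    matching the mean/shape Gamma density used in the text."""
    logK, beta, logkappa = params
    K, kappa = np.exp(logK), np.exp(logkappa)
    mu = K * d ** beta
    ll = (kappa * np.log(kappa / mu) - gammaln(kappa)
          + (kappa - 1.0) * np.log(w) - kappa * w / mu)
    return -np.sum(ll)

rng = np.random.default_rng(1)
d = rng.integers(1, 50, size=2000).astype(float)       # synthetic node degrees
K_true, beta_true, kappa_true = 3.0, 1.4, 5.0
w = rng.gamma(shape=kappa_true, scale=K_true * d ** beta_true / kappa_true)

res = minimize(neg_loglik, x0=[0.0, 1.0, 0.0], args=(d, w),
               method="Nelder-Mead", options={"maxiter": 5000, "fatol": 1e-9})
K_hat, beta_hat = np.exp(res.x[0]), res.x[1]
```

Note that because K d^β = exp(ln K + β ln d), the same mean function also arises from a Gamma GLM with a log link, which is how such models are often fitted in standard GLM software.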
Finally, in the interest of completeness, we consider a log-normal model. While incorporating the positivity constraint of the data, it does not retain direct interpretability of the parameters, as estimates of the coefficients K and β pertain to the log scale. We note that an exponential transformation of the estimated parameters brings them to a scale comparable with the calculations from the two previous models.

To complete the specification of our Bayesian model, we place a Gamma(1, 1) prior on β and a Gamma(0.1, 0.01) prior on K. We fitted the three regression models (Gaussian, Gamma and log-normal) for each weight threshold: 1, 1000 and 5000. To study their predictive performance, we employed leave-one-out (LOO) cross-validation (Vehtari et al. 2017); it is important to note that here we leave out individual graph nodes. In addition, we investigated model fit using the techniques described in Gabry et al. (2017) (see Supplementary figures). For these analyses we employed the brms package (Bürkner et al. 2017), using the same computational settings considered for the power law analysis above (four independent chains, checking split R-hat < 1.01).
(Geo)graph tools
Among the important tools in the geoinformatics literature are MovingPandas (Graser 2019) and OSMnx (Boeing 2017). MovingPandas (Graser 2019) is a recent library for handling movement data, providing the user with several functions and interfaces to geographical database management systems and geographical information systems. OSMnx (Boeing 2017) is a tool for creating and analyzing street networks under a simple, consistent and automatable paradigm. Even more specific for handling geographic network data is the (geo)graph package (GG) (Santos et al. 2017), applied in this work.
In the (geo)graph approach, a (geo)graph is defined as a graph in which the nodes have a known geographical location and the edges have spatial dependence. The GG package thus allows the user to work with the set of nodes as a point-type shapefile and the set of edges as a line-type shapefile, both very common file structures in geoinformatics.
In order to convert spatial networks to the GIS environment we propose the following workflow (Santos et al. 2017):
To create a point-type shapefile for the nodes using any GIS software. The shapefile must have a mandatory integer column named id, representing the ids of the nodes. All the characteristics of the polygons/points will be associated with their respective points as attributes, including the geographic locations of the nodes.
To create an adjacency matrix (0s and 1s) representing the connections between these nodes.
Then, a line-type shapefile representing the edges of the network is produced as the output of our application. The point-type and line-type shapefiles carry the topological attributes of the nodes and edges, respectively.
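The data model behind this workflow can be sketched in plain Python (a hedged illustration only — this is not the actual GG package API, and shapefile I/O via GIS libraries is omitted; ids and coordinates are made up):

```python
# Hedged sketch of the (geo)graph data model: a point layer (node id ->
# coordinates) plus a 0/1 adjacency matrix yields a line layer whose
# records are the edges with their endpoint geometries.
ids = [1, 2, 3]
coords = {1: (-43.2, -22.9), 2: (-43.1, -22.8), 3: (-43.0, -22.95)}
A = [[0, 1, 0],      # adjacency matrix of 0s and 1s
     [1, 0, 1],
     [0, 1, 0]]

edges = []
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):   # undirected: upper triangle only
        if A[a][b]:
            edges.append((ids[a], ids[b], coords[ids[a]], coords[ids[b]]))

print(len(edges))  # 2 edges: 1-2 and 2-3
```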
Distribution of node weights
The flow values range between 0 and wmax=21,228 people from one origin to one destination, with an average value of wave=33, which is considerably lower than wmax. This indicates a high level of heterogeneity in the flow distribution. It is important to highlight that 12% of the trips have both origin and destination in the same node (TZ), i.e., they are internal trips.
We fit three distributions to the mobility flow data (Fig. 1) and find that a truncated log-normal distribution with xmin=452 fits the data better than both a power law (log BF = 15) and a stretched exponential (log BF = 2). We report all marginal likelihoods and associated standard errors in the supplementary information. The estimated parameters of the log-normal are μ=1.24 with 95% credibility interval (−1.70,3.03) and σ=2.04(1.71,2.49). Interestingly, had alternative models not been considered, a power law would have been wrongly accepted as the distribution of the data, since it yields an exponent firmly in the critical range 2<α<3, with estimated α=2.46 (2.42,2.48).
Distributions fitted to flow data. We show the complementary cumulative distribution function (CCDF) for the flow data (points). The green line shows the best-fitting log-normal model, whilst red and pink lines depict the power law and stretched exponential (Weibull) distributions, respectively
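The exponent estimation step can be sketched with the continuous power-law maximum-likelihood estimator of Clauset et al. (2009) (hedged: the paper works within a Bayesian treatment; the synthetic data and values below are illustrative only):

```python
# Hedged sketch: continuous power-law MLE (Clauset et al. 2009) for data
# above a fixed xmin, on synthetic Pareto-tail data.
import numpy as np

rng = np.random.default_rng(7)
xmin, alpha_true = 452.0, 2.46
# inverse-CDF sampling from a Pareto tail with exponent alpha_true
u = rng.uniform(size=20000)
tail = xmin * u ** (-1.0 / (alpha_true - 1.0))

alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
print(round(alpha_hat, 2))   # close to alpha_true
```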
Furthermore, while not fitting the data better than the log-normal, the stretched exponential distribution provides a better fit than the power law (log BF = 13). The posterior means of the parameters were β=0.17(0.15,0.18) and λ=3.09(1.52,5.45). Overall these results show that (i) the flow data analyzed here do not follow a power law distribution, with a log-normal providing a better fit, and (ii) it is important to take alternative distributions into consideration when analyzing real data.
The power-law behaviour of mobility data has been explored in several previous works (Song et al. 2010; De Montis et al. 2007; Chowell et al. 2003; Soh et al. 2010; Brockmann et al. 2006). In this work, for this case study, we have shown that a log-normal distribution outperforms the power law for the mobility flow data.
Traffic-topology correlation
We also studied the traffic-topology correlation, i.e., the relationship between a node's strength (total weight), wi, and its degree, di. In Table 1 we present the parameter estimates for each weight (flow) threshold.
Table 1 Power law regression results
We found β values between those previously reported in the literature: 0.94 in Chowell et al. (2003) and 1.8 in De Montis et al. (2007).
The results show that there is a significant correlation between the weights and degrees, especially for higher thresholds. Also, the weights of the nodes grow slightly faster than their degrees (Barrat et al. 2004;De Montis et al. 2007). For the non-zero threshold (flow >1), however, we find that the posterior distribution for β includes the "null" hypothesis β=1. See Fig. 2 for the fitted regression lines, along with prediction intervals using the Gamma error structure.
Fitted regression lines and 95% prediction intervals for power law regression of weight versus degree. We present results for each threshold. a) Threshold = 1 (4.7E-3 percent of the maximum flow), b) Threshold = 1000 (4.7 percent of the maximum flow), c) Threshold = 5000 (24 percent of the maximum flow)
From a statistical perspective, we find that a Gamma regression model provides better predictive ability (better LOO scores) than the usually employed Gaussian regression model. For example, for flow threshold 1 the difference in LOOIC scores was 735 with a standard error of 99, while for threshold 1000 the difference was 53 with a standard error of 21. While in some settings the log-normal model yielded a better fit, the differences in predictive ability are small enough to justify employing the Gamma model, for which estimates of K and β can be obtained directly with no need for transformation.
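The identity-link Gamma power-law regression can be sketched by direct maximum likelihood (hedged: the paper samples the posterior with brms/Stan; the optimizer, starting values, and simulated data below are illustrative stand-ins):

```python
# Hedged sketch: maximum-likelihood fit of the Gamma regression with
# identity link, mu_i = K * d_i**beta, using the density given earlier:
# log f = kappa*log(kappa/mu) - gammaln(kappa) + (kappa-1)*log(w) - kappa*w/mu
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(3)
d = rng.uniform(1, 200, size=300)
mu = 1.5 * d**1.1                       # "true" K = 1.5, beta = 1.1
w = rng.gamma(shape=4.0, scale=mu / 4.0)

def negloglik(theta):
    logK, beta, logkappa = theta        # log-parametrize positive params
    m = np.exp(logK) * d**beta
    k = np.exp(logkappa)
    return -np.sum(k * np.log(k / m) - special.gammaln(k)
                   + (k - 1) * np.log(w) - k * w / m)

res = optimize.minimize(negloglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead",
                        options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
K_hat, beta_hat = np.exp(res.x[0]), res.x[1]
print(round(beta_hat, 2))   # near the simulated exponent 1.1
```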
Network properties
In Fig. 3 we present several properties (statistical indices) for flow networks considering distinct connection threshold values, i.e. minimum flow value for connecting a pair of zones.
Topological properties for different connection thresholds. Log-log plot, base 10
The index <k> represents the expected number of connections of a node for a specific connection threshold value, and can be viewed as the network's average connectivity. The minimum connection threshold value is 0, in which case we have a fully connected network (complete graph). For small non-null connection threshold values almost all zones are connected to the others, while for high values just a few pairs of zones remain. The index <c> is associated with the network's transitivity: it measures the average (over all nodes) probability that two nodes connected to a common node are also connected to each other. The index <l> represents the average (over all node pairs) number of edges in the shortest path between a pair of nodes, while the index D is the greatest shortest-path length. A detailed description of complex network indices can be found in da F. Costa et al. (2007).
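The indices above can be computed on a small toy graph with plain-Python BFS (a hedged illustration only — the paper's values come from the full mobility network; the toy graph is made up):

```python
# Hedged sketch: compute <k>, <c>, <l> and D on a 5-node toy graph.
from collections import deque
from itertools import combinations

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}

def bfs_dist(s):
    # shortest-path lengths from s by breadth-first search
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def clustering(u):
    # fraction of a node's neighbour pairs that are themselves linked
    nb = adj[u]
    if len(nb) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
    return links / (len(nb) * (len(nb) - 1) / 2)

k_avg = sum(len(adj[u]) for u in adj) / len(adj)              # <k>
c_avg = sum(clustering(u) for u in adj) / len(adj)            # <c>
all_d = [bfs_dist(u)[v] for u in adj for v in adj if u != v]
l_avg = sum(all_d) / len(all_d)                               # <l>
D = max(all_d)                                                # diameter
print(k_avg, round(c_avg, 2), round(l_avg, 2), D)             # 2.0 0.47 1.7 3
```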
The minimum connection threshold value that yields a non-complete connected network is 4.7E−5: we connect every pair of nodes (origin-destination) with at least one person going from the origin to the destination. In this case, there is only one connected component, with 485 nodes and 14,155 edges.
We then looked for the connection threshold associated with the greatest network diameter, in order to balance the weak and the strong edges at a single connection threshold: at this threshold the largest connected component has not yet been disconnected. We call this specific value the Critical Connection Threshold (CCT). This critical connection threshold (CCT) is 0.06, and at the CCT there are 133 nodes in the largest connected component. Beyond this value, the network's diameter decreases as we increase the connection threshold. The CCT is between the thresholds 4.7E-5 (Fig. 4) and 0.06 (Fig. 5).
Degree distribution for threshold 1 (4.7E-3%). We show the fitted complementary c.d.f.s for a power law, a log-normal and stretched exponential (Weibull) distributions. We estimated xmin=72 using the method of Clauset et al. (2009). For this threshold, the log-normal provides the best fit to data (see text)
Degree distribution for threshold 1273 (6%). We show the fitted complementary c.d.f.s for a power law, a log-normal and stretched exponential (Weibull) distributions. We estimated xmin=4 using the method of Clauset et al. (2009). For this threshold, the power law provides the best fit to data (see text)
In Table 2, we present the network's properties for three selected connection thresholds: the smallest non-null connection threshold, the CCT and one that delivers a fragmented (disconnected) network.
Table 2 Statistical properties of Rio de Janeiro's mobility network
For the CCT, we highlight that for a random network with the same number of nodes and edges, the expected values are <crand>=0.01 and <lrand>=20.5. Thus, there is a statistical small-world effect in this network: <c> is greater than <crand> and <l> is smaller than <lrand> (Watts and Strogatz 1998).
This means that it is often possible to find connections between pairs of zones already connected to a common zone (creating triangles in the network structure), and even for pairs of nodes between which the direct flow is not high there are some "shortcuts" in the mobility network, bringing these zones closer (from a topological point of view).
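The random-network baselines can be sketched with the standard Erdős–Rényi approximations (hedged: <c_rand> ≈ <k>/n and <l_rand> ≈ ln(n)/ln(<k>), as in Watts and Strogatz 1998; the n and <k> values below are illustrative, not the paper's):

```python
# Hedged sketch of the small-world check: expected clustering and path
# length for a random graph with the same n and average degree <k>.
import math

n, k_avg = 133, 2.05                    # illustrative values only
c_rand = k_avg / n                      # expected clustering
l_rand = math.log(n) / math.log(k_avg)  # expected average path length
print(round(c_rand, 3), round(l_rand, 1))   # 0.015 6.8
```

A network is then flagged as small-world when its measured <c> greatly exceeds c_rand while its <l> stays at or below l_rand.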
Considering both the smallest non-null and the critical connection threshold, we analyzed the degree distribution (Figs. 4 and 5). For the non-null threshold (flow of at least one travel), the distribution presented in Fig. 4 is not well approximated by any of the tested distributions, with the power law providing a marginally better fit (log Bayes factor against log-normal: 0.42). A bootstrap test of the adequacy of the power law yielded a p-value of 0.08, which indicates the power law is not suitable to describe the degree distribution for this threshold.
On the other hand, for the critical connection threshold (1273 trips between locations) shown in Fig. 5, the power law provides adequate fit (bootstrap p-value: 0.745), although the support for the power law against a log-normal is weak (log Bayes factor against log-normal: 0.25).
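The bootstrap adequacy test can be sketched as follows (hedged: a compact version of the semi-parametric bootstrap of Clauset et al. (2009), with synthetic data and small sample/replicate counts to keep it fast; a small p-value rejects the power law):

```python
# Hedged sketch of the bootstrap goodness-of-fit test: compare the KS
# distance of the fitted power law on the data against KS distances on
# synthetic samples drawn from that fitted power law.
import numpy as np

rng = np.random.default_rng(11)
xmin = 4.0

def fit_alpha(x):
    # continuous power-law MLE above xmin
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks(x, alpha):
    # Kolmogorov-Smirnov distance between empirical and model CDFs
    x = np.sort(x)
    emp = np.arange(1, len(x) + 1) / len(x)
    model = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.max(np.abs(emp - model))

def draw(n, alpha):
    # inverse-CDF sampling from the power law above xmin
    return xmin * rng.uniform(size=n) ** (-1.0 / (alpha - 1.0))

data = draw(300, 2.5)                # synthetic "degrees" above xmin
alpha_hat = fit_alpha(data)
d_obs = ks(data, alpha_hat)
boot = [ks(s, fit_alpha(s)) for s in (draw(len(data), alpha_hat)
                                      for _ in range(200))]
p_value = np.mean(np.array(boot) >= d_obs)
```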
These findings are in agreement with those of Alessandretti et al. (2017), who find that when the whole distribution of displacements is considered, i.e., when both small and large values are used in the fitting procedure, the log-normal outperforms the (Pareto) power law. Conversely, when restricting attention to large values, the power law provides better fit.
The connection threshold also plays a central role in (geographical) space. For small connection thresholds the network's hubs are in the central region of Rio de Janeiro and Niterói (the two most important cities in this metropolitan region) - Fig. 6, while for the critical connection threshold the hubs are located in zones belonging to Nova Iguaçu and Duque de Caxias (two of the most populous cities, with a large number of commuters) - Fig. 7.
Urban mobility geographical graph, (geo)graph, for the Metropolitan Region of Rio de Janeiro (highlighted area). Each node represents a traffic zone, and each edge corresponds to a flow intensity equal to or greater than the connection threshold (4.7E-5). The elements' colors and sizes are proportional to the topological degree (for the nodes) and topological strength (for the edges)
Urban mobility geographical graph, (geo)graph, for the Metropolitan Region of Rio de Janeiro (highlighted area). Each node represents a traffic zone, and each edge corresponds to a flow intensity equal to or greater than the connection threshold (0.06). The elements' colors and sizes are proportional to the topological degree (for the nodes) and topological strength (for the edges)
When we compare our topological findings with previous similar works, especially Chowell et al. (2003) and De Montis et al. (2007), it is possible to note that RJ's mobility network is bigger (in terms of number of edges) and presents a higher diameter.
These are expected results, as the study area is a metropolitan region in which several municipalities combine, daily, intense inter-city travel (mainly for work and study) with intra-city travel (for general purposes). According to the Origin-Destination Survey for the Metropolitan Region of Rio de Janeiro, 21% of the trips were motivated by work, 16% by study activities, and 50% start or end at the traveler's residence (Companhia estadual de engenharia de transporte e logistica et al. 2010).
On the other hand, a non-intuitive and important result relates to how a hub's strength is distributed among its links. In this case study, the central area of the MRRJ is connected with many other areas, but most of its connections are weak - associated with small flow values. Even though Nova Iguaçu and Duque de Caxias do not connect to as many zones as the central region does, these zones have a few pairs of nodes with high flow - a few but strong links.
Conclusions and perspectives
In this paper we address some aspects of the urban mobility phenomenon from a geographical point of view using a complex network approach. We applied the (geo)graph approach and tools in order to recover useful information from urban mobility data, considering its intrinsic spatial properties. We found a high level of heterogeneity in the Metropolitan Region of Rio de Janeiro's flow distribution. Through the complex network analysis, we showed a statistical small-world effect around the Critical Connection Threshold - the set of connection thresholds that provide a network with maximum diameter.
An important point in the traffic-topology analysis is how a hub's strength is distributed among its links. We have shown that, in our case study, the central area of the region is connected with many other areas, but most of its connections are weak - associated with small flow values. On the other hand, some zones do not connect to as many other zones as the central region does, yet have a few but strong (high-flow) links.
From a methodological point of view, we also extended the distribution-analysis method of Clauset et al. (2009) by employing a Bayesian approach and computing Bayes factors. In addition, we showed that a Gamma model for power law regression analysis leads to a better fit, provides directly interpretable parameters and respects the non-negativity constraints in the data.
Among our perspectives is replicating our analysis on other datasets (from different cities) and comparing the results, in an attempt to capture regional behaviour.
Alessandretti, L, Sapiezynski P, Lehmann S, Baronchelli A (2017) Multi-scale spatio-temporal analysis of human mobility. PloS ONE 12(2):0171686.
Balcan, D, Colizza V, Gonçalves B, Hu H, Ramasco JJ, Vespignani A (2009) Multiscale mobility networks and the spatial spreading of infectious diseases. PNAS 106(51):21487.
Barrat, A, Cattuto C (2013) Empirical temporal networks of face-to-face human interactions. Eur Phys J Spec Top 222:1295–1309.
Barrat, A, Barthelemy M, Pastor-Satorras R, Vespignani A (2004) The architecture of complex weighted networks. Proc Natl Acad Sci 101(11):3747–3752.
Barbosa, H, Barthelemy M, Ghoshal G, James CR, Lenormand M, Louail T, Menezes R, Ramasco JJ, Simini F, Tomasini M (2018) Human mobility: Models and applications. Phys Rep 734:1–74. https://doi.org/10.1016/j.physrep.2018.01.001.
Barthélemy, M (2011) Spatial networks. Phys Rep 499(1):1–101.
Boeing, G (2017) Osmnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks. Comput Environ Urban Syst 65:126–139.
Brockmann, D, Hufnagel L, Geisel T (2006) The scaling laws of human travel. Nature 439:462–465.
Bürkner, P-C, et al (2017) brms: An r package for bayesian multilevel models using stan. J Stat Softw 80(1):1–28.
Carpenter, B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J, Li P, Riddell A (2017) Stan: A probabilistic programming language. J Stat Softw 76(1).
Chowell, G, Hyman JM, Eubank S, Castillo-Chavez C (2003) Scaling laws for the movement of people between locations in a large city. Phys Rev E 68(6):066102.
Companhia estadual de engenharia de transporte e logistica, Secretaria de estado de transporte, Governo do Estado do Rio de Janeiro (2010) Resultado da pesquisa origem/destino. http://setrerj.org.br/wp-content/uploads/2017/07/175_pdtu.pdf. Accessed 20 Oct.
Costa, PB, Neto GCM, Bertolde AI (2017) Urban mobility indexes: A brief review of the literature. Transp Res Procedia 25:3645–3655. https://doi.org/10.1016/j.trpro.2017.05.330. World Conference on Transport Research - WCTR 2016 Shanghai. 10-15 July 2016.
da F. Costa, L, Rodrigues F, Travieso G, Villas Boas P (2007) Characterization of complex networks: A survey of measurements. Adv Phys 56:167–242. https://doi.org/10.1080/00018730601170527.
De Montis, A, Barthélemy M, Chessa A, Vespignani A (2007) The structure of interurban traffic: a weighted network analysis. Environ Plan B Plan Des 34(5):905–924.
Estrada, E (2012) Epidemic spreading induced by diversity of agents mobility. Phys Rev E 84:036110.
Gabry, J, Simpson D, Vehtari A, Betancourt M, Gelman A (2017) Visualization in bayesian workflow. arXiv preprint arXiv:1709.01449. https://arxiv.org/abs/1709.01449.
Gonzalez, MC, Hidalgo CA, Barabasi A-L (2008) Understanding individual human mobility patterns. Nature 453(7196):779.
Guo, D, Zhu X, Jin H, Gao P, Andris C (2012) Discovering spatial patterns in origin-destination mobility data. Trans GIS 16(3):411–429.
Graser, A (2019) Movingpandas: Efficient structures for movement data in python. GIForum 1:54–68.
Gronau, QF, Singmann H, Wagenmakers E-J (2017) Bridgesampling: an r package for estimating normalizing constants. arXiv preprint arXiv:1710.08162. https://arxiv.org/abs/1710.08162.
Jeffreys, H (1935) Some tests of significance, treated by the theory of probability In: Mathematical Proceedings of the Cambridge Philosophical Society, 203–222.. Cambridge University Press.
Kass, RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90(430):773–795.
Louail, T, Lenormand M, Picornell M, Cantú OG, Herranz R, Frias-Martinez E, Ramasco JJ, Barthelemy M (2015) Uncovering the spatial structure of mobility networks. Nat Commun 6:6007.
R Core Team (2018) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
Santos, LB, Jorge AA, Rossato M, Santos JD, Candido OA, Seron W, de Santana CN (2017) (geo)graphs - complex networks as a shapefile of nodes and a shapefile of edges for different applications. arXiv preprint arXiv:1711.05879. https://arxiv.org/abs/1711.05879.
Simini, F, González MC, Maritan A, Barabási A-L (2012) A universal model for mobility and migration patterns. Nature 484(7392):96.
Soh, H, Lim S, Zhang T, Fu X, Lee GKK, Hung TGG, Di P, Prakasam S, Wong L (2010) Weighted complex network analysis of travel routes on the singapore public transportation system. Phys A Stat Mech Appl 389(24):5852–5863.
Song, C, Koren T, Wang P, Barabasi AL (2010) Modelling the scaling properties of human mobility. Nat Phys 6:818–823.
Stan Development Team (2018) RStan: the R interface to Stan. R package version 2.18.2. http://mc-stan.org/. Accessed 20 Oct.
Vehtari, A, Gelman A, Gabry J (2017) Practical bayesian model evaluation using leave-one-out cross-validation and waic. Stat Comput 27(5):1413–1432.
Wang, P, Hunter T, Bayen AM, Schechtner K, González MC (2012) Understanding road usage patterns in urban areas. Sci Rep 2:1001.
Watts, DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393(6684):409–10.
The authors would like to thank Dr. Flavio Ianelli and Dr. Igor Sokolov for helpful discussions on mobility networks.
Funding: São Paulo Research Foundation (FAPESP), Grant Number 2015/50122-0 and DFG-IRTG Grant Number 1740/2; FAPESP Grant Number 2018/06205-7; CNPq Grant Number 420338/2018-7; LMC is supported by a CAPES Postdoctoral Scholarship.
Centro Nacional de Monitoramento e Alertas de Desastres Naturais (Cemaden), São José dos Campos, Brazil
Leonardo Bacelar Lima Santos
Programa de Computação Científica (PROCC), Fundação Oswaldo Cruz, Rio de Janeiro, Brazil
Luiz Max Carvalho
Universidade Federal de São Paulo (UNIFESP), São José dos Campos, Brazil
Wilson Seron, Elbert E. Macau & Marcos G. Quiles
Fundação Getúlio Vargas, Rio de Janeiro, Brazil
Flávio C. Coelho
Instituto Nacional de Pesquisas Espaciais (INPE), São José dos Campos, Brazil
Elbert E. Macau & Antônio M. V. Monteiro
These authors contributed equally to this work. All authors read and approved the final manuscript.
Correspondence to Leonardo Bacelar Lima Santos.
Lima Santos, L.B., Carvalho, L.M., Seron, W. et al. How do urban mobility (geo)graph's topological properties fill a map?. Appl Netw Sci 4, 91 (2019). https://doi.org/10.1007/s41109-019-0211-7
Complex networks
(geo)graphs
Medical Image Computing and Computer Assisted Interventions Conference
Multiscale Vessel Enhancement Filtering
Frangi, Alejandro F. and Niessen, Wiro J. and Vincken, Koen L. and Viergever, Max A.
Medical Image Computing and Computer Assisted Interventions Conference - 1998 via Local Bibsonomy
Summary by Anmol Sharma
Delineation of vessel structures in human vasculature is a precursor to a number of clinical applications. Typically, the delineation is performed using both 2D (DSA) and 3D techniques (CT, MR, X-ray angiography); however, decisions are still made using a maximum intensity projection (MIP) of the data. This is problematic since the MIP is also affected by other tissues of high intensity, and low-intensity vasculature may never be fully visible in the MIP compared to other tissues. This calls for a type of vessel enhancement that can be applied prior to MIP, ensuring the projection gives significant representation to low-intensity vessels. Such enhancement can also facilitate volumetric views of vasculature and enable quantitative measurements.
To this end, Frangi et al. propose a vessel enhancement method which defines a "vesselness measure" by using eigenvalues of the Hessian matrix as indicators. The eigenvalue analysis of the Hessian provides the direction of the smallest curvature (along the tubular vessel structure). The eigenvalue decomposition of a Hessian on a spherical neighbourhood around a point $x_0$ maps an ellipsoid with axes represented by the eigenvectors and their magnitudes by the corresponding eigenvalues. The method provides a framework with three eigenvalues $|\lambda_1| <= |\lambda_2| <= |\lambda_3|$ with heuristic rules about their absolute magnitude in the scenario where a vessel is present. In particular, in order to derive a well-formed "vessel measure" as a function of these eigenvalues, it is assumed that for a vessel structure, $\lambda_1$ will be very small (or zero). The authors also add prior information about the vessel in the sense that vessels appear as bright tubes on a dark background in most images. Hence they indicate that a vessel structure of this sort must have the following configuration of $\lambda$ values: $|\lambda_1| \approx 0$, $|\lambda_1| << |\lambda_2|$, $|\lambda_2| \approx |\lambda_3|$. Using a combination of these $\lambda$ values, as well as a Hessian-based function, the authors propose the following vessel measure:
$$\mathcal{V}_0(s) = \begin{cases}
0 & \text{if} \quad \lambda_2 > 0 \quad \text{or} \quad \lambda_3 > 0,\\
\left(1 - \exp\left(-\dfrac{\mathcal{R}_A^2}{2\alpha^2}\right)\right)\exp\left(-\dfrac{\mathcal{R}_B^2}{2\beta^2}\right)\left(1 - \exp\left(-\dfrac{S^2}{2c^2}\right)\right) & \text{otherwise.}
\end{cases}$$
The three terms that make up the measure are $\mathcal{R}_A$, $\mathcal{R}_B$, and $S$. The first term $\mathcal{R}_A$ refers to the largest area cross section of the ellipsoid represented by the eigenvalue decomposition. It distinguishes between plate-like and line-like structures. The second term $\mathcal{R}_B$ accounts for the deviation from a blob-like structure, but cannot distinguish between a line-like and a plate-like pattern. The third term $S$ is simply the Frobenius norm of the Hessian matrix, which accounts for lack of structure in the background and will be high when there is high contrast compared to the background. The vesselness measure is then analyzed at different scales to ensure that vessels of all sizes get detected.
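The measure can be sketched directly from the eigenvalues (hedged: a minimal per-voxel version for the bright-vessel-on-dark-background case; the parameter values `alpha`, `beta_p`, `c` and the eigenvalue triples are illustrative, and $\mathcal{R}_A = |\lambda_2|/|\lambda_3|$, $\mathcal{R}_B = |\lambda_1|/\sqrt{|\lambda_2\lambda_3|}$ follow the standard Frangi formulation):

```python
# Hedged sketch of the 3D vesselness measure from Hessian eigenvalues,
# assuming |l1| <= |l2| <= |l3| and l3 != 0.
import math

def vesselness(l1, l2, l3, alpha=0.5, beta_p=0.5, c=100.0):
    if l2 > 0 or l3 > 0:                      # bright tubes need l2, l3 < 0
        return 0.0
    Ra = abs(l2) / abs(l3)                    # plate-like vs line-like
    Rb = abs(l1) / math.sqrt(abs(l2 * l3))    # deviation from blob
    S = math.sqrt(l1**2 + l2**2 + l3**2)      # Frobenius norm of Hessian
    return ((1 - math.exp(-Ra**2 / (2 * alpha**2)))
            * math.exp(-Rb**2 / (2 * beta_p**2))
            * (1 - math.exp(-S**2 / (2 * c**2))))

# a tube-like eigenvalue configuration scores higher than a blob-like one
print(vesselness(0.1, -40, -45) > vesselness(-40, -42, -45))   # True
```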
The method was applied on 2D DSA images which are obtained from X-ray projection before and after contrast agent is injected. The method was also applied to 3D MRA images. The results showed promising background suppression when vessel enhancement filtering was applied before performing MIP.
\begin{document}
\begin{spacing}{1.1}
\noindent {\Large \bf Geometric invariant theory and projective toric varieties}
\\ {\bf Nicholas Proudfoot}\footnote{Supported by the Clay Mathematics Institute Liftoff Program and the National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship.}\\ Department of Mathematics, University of Texas, Austin, TX 78712
{\small \begin{quote} \noindent {\em Abstract.} We define projective GIT quotients, and introduce toric varieties from this perspective. We illustrate the definitions by exploring the relationship between toric varieties and polyhedra. \end{quote} }
\noindent Geometric invariant theory (GIT) is a theory of quotients in the category of algebraic varieties. Let $X$ be a projective variety with ample line bundle $\mathcal{L}$, and $G$ an algebraic group acting on $X$, along with a lift of the action to $\mathcal{L}$. The GIT quotient of $X$ by $G$ is again a projective variety, along with a given choice of ample line bundle. With no extra work, we can consider varieties which are projective over affine, that is, varieties that can be written in the form $\Proj R$ for a reasonable graded ring $R$. The purpose of this note is to give two equivalent definitions of projective GIT quotients, one algebraic in terms of the homogeneous coordinate ring $R$, and one more geometric, and to illustrate these definitions with toric varieties.
A toric variety may be defined abstractly to be a normal variety that admits a torus action with a dense orbit. One way to construct such a variety is to take a GIT quotient of affine space by a linear torus action, and it turns out that every toric variety which is projective over affine arises in this manner. Given the data of a torus action on $\mathbb{C}^n$ along with a lift to the trivial line bundle, we define a polyhedron, which will be bounded (a polytope) if and only if the corresponding toric variety is projective. We then use this polyhedron to give two combinatorial descriptions of the toric variety, one in the language of algebra and the other in the language of geometry.
Much has been written about toric varieties, from many different perspectives. The standard text on the subject by Fulton \cite{Fu} focuses on the relationship between toric varieties and fans. The main difference between this approach and the one that we adopt here is that a fan corresponds to an abstract toric variety, while a polyhedron corresponds to a toric variety along with a choice of ample line bundle. In particular, there exist toric varieties which are not projective over affine, and which therefore do not come from polyhedra. Since the primary purpose of this note is to introduce projective GIT quotients, we will avoid fans altogether. For an account of the development of the GIT perspective, including a detailed explanation of its relationship to the fan construction, one may consult the excellent survey paper by Cox \cite{Co}. Toric varieties are used to illustrate a categorical perspective on invariant theory in Dolgachev's book \cite[\S 12]{Do}, and they are studied from the vantage point of multigraded commutative algebra in \cite[\S 10]{MS}. Each of these three references covers most of the material that is included in this paper and a lot more; what we lack in depth and generality we hope to make up for with brevity and concreteness.
\paragraph{\bf Acknowledgments.} I would like to express my gratitude to Herb Clemens, Rob Lazarsfeld, and Ravi Vakil for organizing the conference at Snowbird out of which this paper grew. Also to the referee and David Cox, both of whom provided valuable feedback on various drafts.
\begin{section}{Geometric invariant theory}\label{git} Consider a graded noetherian algebra $$R = \bigoplus_{m=0}^{\infty}R_m$$ which is finitely generated as an algebra over $\mathbb{C}$.
The variety $X = \Proj R$ is projective over the affine variety $\Spec R_0$, and comes equipped with an ample line bundle $\mathcal{L} = \mathcal{O}_X(1)$.
Furthermore, by \cite[Ex. 5.14(a)]{Ha}, the integral closure (or normalization) of $R$ is isomorphic to the ring $$R' = \bigoplus_{m=0}^{\infty}\Gamma(X,\mathcal{L}^{\otimes m}).$$ The most important example for our applications will be the following.
\begin{example}\label{affine} Let $R = \mathbb{C}[x_1,\ldots,x_n,y_0,\ldots,y_k]$, with $\deg x_i = 0$ and $\deg y_j = 1$. Then $\Proj R \cong \mathbb{C}^n\times\mathbb{C} P^k$, and $\mathcal{L}$ restricts to the antitautological bundle $\mathcal O(1)$ on $\{z\}\times\mathbb{C} P^k$ for all $z\in\mathbb{C}^n$. \end{example}
Let $G$ be a reductive algebraic group. A good reference for general reductive groups is \cite{FH}, however the only groups that we will need for our applications in Section \ref{toric} are subgroups of the algebraic torus $(\C^\times)^n$. Suppose that we are given an action of $G$ on $R$ that preserves the grading; such an action induces
an action of $G$ on $X = \Proj R$ along with a lift of this action to the line bundle $\mathcal{L}$. This lift is sometimes referred to as a {\em linearization} of the action of $G$ on $X$.
\begin{definition}\label{def} The variety $X/\!\!/ G := \Proj(R^G)$ is called the GIT quotient of $X$ by $G$, where $R^G$ is the subring of $R$ fixed by $G$. \end{definition}
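The following standard example (included here for concreteness; it is not used elsewhere in this note) illustrates Definition \ref{def}.

\begin{example} Let $R = \mathbb{C}[y_0,y_1]$ with $\deg y_j = 1$, so that $X = \Proj R \cong \mathbb{C} P^1$, and let the group $G = \mathbb{Z}/2$ act by exchanging $y_0$ and $y_1$. The invariant ring $R^G = \mathbb{C}[e_1,e_2]$ is a polynomial ring on the elementary symmetric functions $e_1 = y_0+y_1$ and $e_2 = y_0y_1$, which have degrees $1$ and $2$, respectively. Hence $X/\!\!/ G = \Proj \mathbb{C}[e_1,e_2]$ is the weighted projective space $\mathbb{C} P(1,2)$, which is abstractly isomorphic to $\mathbb{C} P^1$. \end{example}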
The reader is warned that, while the action of $G$ on $\mathcal{L}$ is not incorporated into the notation $X/\!\!/ G$, it is an essential part of the data. In particular, we will see in Example \ref{lins} that it is possible to change the linearization of an action and obtain a vastly different quotient.
Definition \ref{def} is very easy to state, but not so easy to understand geometrically. Our next goal will be to give a description of $X/\!\!/ G$ which depends more transparently on the structure of the $G$-orbits in $X$. Let $\mathcal{L}^*$ denote the line bundle dual to $\mathcal{L}$.
\begin{definition}\label{ss} A closed point $x\in X$ is called {\em semistable} if, for all nonzero covectors $\ell\in\mathcal{L}^*_x$ over $x$, the closure of the $G$-orbit $G\cdot (x,\ell)$ in the total space of $\mathcal{L}^*$ is disjoint from the zero section. A point which is not semistable is called {\em unstable}. The locus of semistable points will be denoted $X^{ss}$. \end{definition}
\begin{theorem}\label{equiv} There is a surjective map $\pi:X^{ss}\to X/\!\!/ G$, with $\pi(x) = \pi(y)$ if and only if the closures of the orbits $G\cdot x$ and $G\cdot y$ intersect in $X^{ss}$. \end{theorem}
\begin{proof} We will provide only a sketch of the proof of Theorem \ref{equiv}; for a more thorough argument, see \cite[Prop 8.1]{Do}. The projection from $R$ onto $R_0$ induces an inclusion of $\Spec R_0$ into $\Spec R$, and the complement $\Spec R \smallsetminus \Spec R_0$ fibers over $\Proj R$ with fiber $\C^\times$. The inclusion of $R^G$ into $R$ induces a surjection $\tilde\pi:\Spec R\to\Spec R^G$. Let $x$ be an element of $X$, and let $\tilde x$ be a lift of $x$ to $\Spec R\smallsetminus\Spec R_0$. Then \begin{eqnarray*} x\in X^{ss} &\iff& \exists\hspace{3pt}\text{ a $G$-invariant section of $\mathcal{L}^{\otimes m}$ not vanishing at $x$ for some $m>0$}\\ &\iff& \exists \hspace{6pt}f\in R^G_m\text{ not vanishing at $\tilde x$ for some $m>0$}\\ &\iff& \tilde\pi(\tilde x)\notin\Spec R^G_0\subseteq\Spec R^G, \end{eqnarray*} hence $\tilde\pi(\tilde x)$ descends to an element of $\Proj R^G$. Since the inclusion of $R^G$ into $R$ respects the gradings on the two rings, this element does not depend on the choice of lift $\tilde x$, hence $\tilde\pi$ induces a surjection $\pi:X^{ss}\to\Proj R^G = X/\!\!/ G$. Two points $x,y\in X^{ss}$ with lifts $\tilde x,\tilde y\in\Spec R$ lie in different fibers of $\pi$ if and only if there exists a $G$-invariant function $f\in R^G_{>0}$ that vanishes at $\tilde x$ but not at $\tilde y$, which is the case if and only if the closures of the $G$-orbits through $x$ and $y$ in $X^{ss}$ are disjoint. \end{proof}
Our proof of Theorem \ref{equiv} suggests that the variety $\Proj R$ may {\em itself} be interpreted as a GIT quotient of $\Spec R$ by the group $\C^\times$. Indeed, the grading on $R$ defines an action of $\C^\times$ on $R$ by the formula $\lambda\cdot f = \lambda^mf$ for all $f\in R_m$, and this induces an action of $\C^\times$ on $\Spec R$. Consider the lift of this action to the trivial line bundle $\Spec R \times \mathbb{C}$ given by letting $\C^\times$ act on the second factor by scalar multiplication. The unstable locus for this linearized action is exactly the subvariety $\Spec R_0\subseteq\Spec R$, and $\Proj R$ is the quotient of $\Spec R\smallsetminus\Spec R_0$ by $\C^\times$. This provides a geometric explanation of the irrelevance of the irrelevant ideal in the standard algebraic definition of $\Proj$.
We conclude the section with an example that illustrates the dependence of a GIT quotient on the choice of linearization of the $G$ action on $X$.
\begin{example}\label{lins} As in Example \ref{affine}, let $R = \mathbb{C}[x_1,\ldots,x_n,t]$, with $\deg x_i = 0$ for all $i$ and $\deg t = 1$. Then $X\cong \mathbb{C}^n$, and $\mathcal{L}$ is trivial. Let $G = \C^\times$ act on $R$ by the equations $$\lambda\cdot x_i = \lambda x_i\hspace{8pt}\text{and}\hspace{8pt} \lambda\cdot t = \lambda^{\alpha} t$$ for some $\alpha\in\mathbb{Z}$. Geometrically, $G$ acts by scalar multiplication, and $\alpha$ defines the linearization. This action is not to be confused with the action of $\C^\times$ on $R$ given by the grading.
\noindent{\em Case 1: $\alpha \geq 1$.} In this case, $R^G = \mathbb{C}$, and $X/\!\!/ G = \Proj R^G$ is empty. For every element $(x,\ell)\in\mathcal{L}^*$, we have $\displaystyle\lim_{\lambda\to 0}\lambda\cdot(x,\ell) = (0,0)$, hence every $x\in X$ is unstable.
\noindent{\em Case 2: $\alpha = 0$.} With the trivial linearization of the $G$ action on $X$, we have $R^G = \mathbb{C}[t]$, hence $X/\!\!/ G = \Proj R^G$ is a point. The $G$ orbits in $\mathcal{L}^*$ are all horizontal, hence {\em every} point is semistable. Since every $G$ orbit in $X$ contains the origin of $\C^n$ in its closure, Theorem \ref{equiv} confirms that the quotient is a single point.
\noindent{\em Case 3: $\alpha = -1$.} In this case, $R^G = \mathbb{C}[x_1t,\ldots,x_nt]$ is a polynomial ring generated in degree $1$, hence $X/\!\!/ G = \Proj R^G \cong \mathbb{C} P^{n-1}$. We have $\lambda\cdot(x,\ell) = (\lambda x,\lambda^{-1}\ell)$, which limits to an element of the zero section of $\mathcal{L}^*$ if and only if $x=0$. Thus $X^{ss} = \C^n\smallsetminus\{0\}$, and all $G$ orbits in $X^{ss}$ are closed, hence the GIT quotient is isomorphic to the quotient of $X^{ss}$ by $G$ in the ordinary topological sense.
\noindent{\em Case 4: $\alpha < -1$.} In this case we still get $X/\!\!/ G \cong \mathbb{C} P^{n-1}$, but we now obtain $\mathbb{C} P^{n-1}$ in its $(-\alpha)$-uple Veronese embedding. \end{example}
Note that in Example \ref{lins}, multiplying $\alpha$ by a positive integer $m$ corresponds to replacing the $G$-equivariant line bundle $\mathcal{L}$ on $X = \C^n$ with its $m^{\text{th}}$ tensor power. In general, this operation will have the effect of replacing the resulting ample line bundle on the GIT quotient $X/\!\!/ G$ by {\em its} $m^{\text{th}}$ tensor power, as well (as we saw in Case 4). \end{section}
\begin{section}{Toric varieties}\label{toric} In this section we introduce and analyze toric varieties, which we will think of as generalizations of Example \ref{lins} to higher dimensional tori. As in the previous section, we let $$X = \C^n = \Proj\mathbb{C}[x_1,\ldots,x_n,t],$$ with $\deg x_i=0$ and $\deg t = 1$. Fix an $n$-tuple $\alpha = (\alpha_1,\ldots,\alpha_n)$ of integers, and let $T^n = (\C^\times)^n$ act on $R = \mathbb{C}[x_1,\ldots,x_n,t]$ by the equations $$\lambda\cdot x_i = \lambda_ix_i\hspace{8pt}\text{and}\hspace{8pt} \lambda\cdot t = \lambda_1^{\alpha_1}\ldots\lambda_n^{\alpha_n}\hspace{2pt}t$$ for $\lambda = (\lambda_1,\ldots,\lambda_n)\in T^n$. Thus we have the standard coordinate action of $T^n$ on $\C^n$, with a linearization to the trivial bundle given by $\alpha$.
\begin{definition}\label{toricdef} A toric variety is a GIT quotient of $X$ by an algebraic subgroup $G\subseteq T^n$ for some $n$. \end{definition}
A toric variety $X/\!\!/ G$ admits an action of the torus $T = T^n/G$ with a single dense orbit. A more standard approach to toric geometry is to define a toric variety to be a normal variety along with a torus that acts with a dense orbit, and then to prove that every such variety which is projective over affine arises from the construction of Definition \ref{toricdef}. For the strictly projective case, see \cite[\S 3.4]{Fu}.
Consider the exact sequence $$1\to G\to T^n\to T\to 1.$$ Differentiating at the identity, we obtain an exact sequence of complex Lie algebras $$0\to\mathfrak{g}\to\mathfrak{t}^n\to\mathfrak{t}\to 0.$$ Let $\{e_1,\ldots,e_n\}$ be the coordinate vectors in $\mathfrak{t}^n$, and let $a_i$ be the image of $e_i$ in $\mathfrak{t}$. The vector space $\mathfrak{t}$ is equipped with an integer lattice $\mathfrak{t}_{\mathbb{Z}} = \ker\big(\operatorname{exp}:\mathfrak{t}\to T\big)$. Its dual $\mathfrak{t}^*$ therefore inherits a dual lattice, as well as a canonical real part $\mathfrak{t}^*_{\R} = \mathfrak{t}^*_{\Z}\otimes_{\mathbb{Z}}\mathbb{R}$. We now define the polyhedron $$\Delta = \left\{\hspace{2pt}p\in\mathfrak{t}^*_{\R}\mid p\cdot a_i\geq\alpha_i \text{ for all }i\hspace{2pt}\right\},$$ a subset of the real vector space $\mathfrak{t}^*_{\R}$.
There is a deep and extensive interaction between the toric variety $X/\!\!/ G$ and the polyhedron $\Delta$. The $T$ orbits on $X$, for example, are classified by the faces of $\Delta$, with faces of real dimension $i$ corresponding to orbits of complex dimension $i$. If $\Delta$ is simple (exactly $\dim_\mathbb{R} \Delta$ facets meet at each vertex), then $X/\!\!/ G$ is an orbifold \cite{LT}, and the Betti numbers of $X/\!\!/ G$ are determined by the equation \begin{equation}\label{betti} \sum_{i=0}^{d} b_{2i}(X/\!\!/ G) \hspace{2pt}q^i = \sum_{i=0}^{d} f_i(\Delta) (q-1)^i, \end{equation} where $d = \dim_\mathbb{C} X/\!\!/ G = \dim_\mathbb{R}\Delta$, and $f_i(\Delta)$ is the number of faces of dimension $i$. This fact has been famously used by Stanley to characterize the possible face vectors of simple polytopes \cite{St}, and can be proven in many ways. One beautiful (though unnecessarily technical) proof uses the Weil conjectures; it amounts simply to observing that the right hand side of Equation \eqref{betti} may be interpreted as the number of points on an $\mathbb{F}_q$ model of $X/\!\!/ G$. For a more detailed discussion of Betti numbers, the Weil conjectures, and Stanley's theorem, see \cite[\S 4.5 and 5.6]{Fu}. In this note, we will content ourselves with using $\Delta$ to describe the invariant ring $R^G$ and the semistable locus $X^{ss}$.
Let $C_\Delta$ be the cone over $\Delta$, that is $$C_\Delta := \left\{(p,r)\in\mathfrak{t}^*_{\R}\times\mathbb{R}\mid r\geq 0\text{ and }p\in r\cdot\Delta\right\}^{cl},$$ where ${cl}$ denotes closure inside of $\mathfrak{t}^*_{\R}\times\mathbb{R}$. The following figure illustrates the cone over an interval and the cone over a half line. Note that the closure is necessary to include the positive $x$-axis in the cone over the half line.
Let $S_\Delta := C_\Delta\hspace{2pt}\cap\hspace{2pt}\big(\mathfrak{t}^*_{\Z}\times\mathbb{Z}\big)$ be the semigroup consisting of all of the lattice points in $C_\Delta$. We may then define the semigroup ring $\mathbb{C}[S_\Delta]$, an algebra over $\mathbb{C}$ with additive basis indexed by the elements of $S_\Delta$, and multiplication given by the semigroup law. This ring has a non-negative integer grading given on basis elements by the final coordinate of the corresponding lattice points. The following theorem provides a combinatorial interpretation of the homogeneous coordinate ring $R^G$ of $X/\!\!/ G$.
\begin{theorem}\label{invt} $R^G\cong \mathbb{C}[S_\Delta]$. \end{theorem}
\begin{proof} Suppose that we are given an element $(p,r)\in S_\Delta$, and let $r_i = p\cdot a_i - r\alpha_i \in \mathbb{Z}_{\geq 0}$ for all $i$. To this element, there corresponds a $G$-invariant monomial $m_{(p,r)} = x_1^{r_1}\ldots x_n^{r_n}t^r\in R$. This correspondence defines a bijection from $S_\Delta$ to the monomials of $R^G$, which extends to a graded ring isomorphism $\mathbb{C}[S_\Delta]\cong R^G$. \end{proof}
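To make the correspondence in the proof concrete, take $\Delta = [0,1]\subset\mathbb{R}$, presented (as an illustrative choice of presentation) by $a_1 = 1$, $\alpha_1 = 0$ and $a_2 = -1$, $\alpha_2 = -1$:

```latex
% Degree-one lattice points of $C_\Delta$ and their invariant monomials:
%   (p,r) = (0,1):  r_1 = p\,a_1 - r\alpha_1 = 0,   r_2 = -p - r\alpha_2 = 1,
%                   so  m_{(0,1)} = x_2\,t;
%   (p,r) = (1,1):  r_1 = 1,  r_2 = 0,  so  m_{(1,1)} = x_1\,t.
% Hence $R^G_1$ has basis $\{x_1 t,\ x_2 t\}$, and $R^G \cong \mathbb{C}[S_\Delta]$
% is a polynomial ring on two degree-one generators, so that
% $\Proj R^G \cong \mathbb{C}P^1$ (compare Example \ref{interval}).
```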
For all $i\in\{1,\ldots,n\}$, let $F_i = \{p\in\Delta\mid p\cdot a_i = \alpha_i\}$. The set $F_i$ is the locus of points on $\Delta$ on which the $i^\text{th}$ defining linear form is minimized, and therefore it is a face of $\Delta$. If $\alpha$ is chosen generically, then $F_i$ will either be a facet or it will be empty. In general, however, $F_i$ may be a face of any dimension. The following theorem provides a combinatorial interpretation of the semistable locus $X^{ss}$.
\begin{theorem}\label{xss} For any point $x\in X$, let $A = \{i\mid x_i=0\}$ be the set of coordinates at which $x$ vanishes. Then $x$ is semistable if and only if $\displaystyle\bigcap_{i\in A}F_i\neq\emptyset$. \end{theorem}
\begin{proof} In the proof of Theorem \ref{equiv}, we argued that $x$ is semistable if and only if there is a positive degree element of $R^G$ that does not vanish at $x$. This will be the case if and only if there is a $G$-invariant monomial of positive degree which is supported on the complement of $A$. By Theorem \ref{invt}, $G$-invariant monomials of degree $r$ correspond to lattice points in $r\cdot\Delta$, and those that are supported on the complement of $A$ correspond to those that lie on $r\cdot F_i$ for all $i\in A$. Such a monomial exists if and only if $\displaystyle\bigcap_{i\in A}F_i\neq\emptyset$. \end{proof}
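As a sanity check of the criterion, consider $\Delta = [0,1]$ with the presentation $a_1 = 1$, $\alpha_1 = 0$ and $a_2 = -1$, $\alpha_2 = -1$ (an illustrative choice), so that $X = \mathbb{C}^2$:

```latex
% Faces:  F_1 = \{\,p \in [0,1] \mid p = 0\,\} = \{0\}  and  F_2 = \{1\}.
% For $x = (x_1, x_2)$ with vanishing set $A = \{i \mid x_i = 0\}$:
%   A = \emptyset, \{1\}, or \{2\}:  \bigcap_{i\in A} F_i \neq \emptyset,
%       so $x$ is semistable;
%   A = \{1,2\} (the origin):  F_1 \cap F_2 = \emptyset, so $x$ is unstable.
% Thus $X^{ss} = \mathbb{C}^2 \smallsetminus \{0\}$, recovering the unstable
% locus of Example \ref{interval}.
```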
\begin{remark} Given a polyhedron $\Delta\subseteq\mathbb{R}^d$, there are infinitely many different ways to present it as the set of solutions to a finite set of affine linear inequalities. Indeed, even the number of such inequalities is not determined; it is only bounded below by the number of facets of $\Delta$. Theorems \ref{invt} and \ref{xss} are valid regardless of the choice of presentation of $\Delta$, and the presentation is irrelevant to the application of Theorem \ref{invt}. On the other hand, if we want to apply Theorem \ref{xss} in an example, it is essential to be given $\Delta$ along with its presentation. We can then set $n$ to be the number of defining inequalities, and reconstruct the group $G\subseteq T^n$ by reversing the construction that we gave for $\Delta$. \end{remark}
We conclude the section by using Theorems \ref{invt} and \ref{xss} to compute the toric varieties associated to an assortment of polytopes. An integer vector $a\in\mathfrak{t}^*_{\Z}$ is called {\em primitive} if it cannot be expressed as a multiple of another element $a'\in\mathfrak{t}^*_{\Z}$ by an integer greater than $1$. In each of the following examples we will implicitly assume that the given polytope is cut out by the minimum possible number of linear forms $\{a_1,\ldots,a_n\}$ in the dual vector space, and that each of these forms is a primitive integer vector.
\begin{example}\label{ray} Let $\Delta = \mathbb{R}^+\subset\mathbb{R}$ be the set of non-negative real numbers. Then $C_\Delta = (\mathbb{R}^+)^2$ and $\mathbb{C}[S_\Delta]\cong\mathbb{C}[x,t]$, with $\deg x = 0$ and $\deg t = 1$. This tells us that the associated toric variety is $\Proj \mathbb{C}[x,t] = \mathbb{C}$. Geometrically, we have $T^n = T = \C^\times$, and $G$ is the trivial group, hence we are building $\mathbb{C}$ as a trivial GIT quotient of $\mathbb{C}$ itself. More generally, the toric variety associated to the positive orthant in $\mathbb{R}^d$ is $\mathbb{C}^d$, equipped with the trivial line bundle. \end{example}
\begin{example}\label{interval} Let $\Delta = [0,1]$ be the unit interval in $\mathbb{R}$. Then $\mathbb{C}[S_\Delta] \cong\mathbb{C}[x,y]$ is a polynomial ring in two variables of degree $1$, and the associated toric variety is $\mathbb{C} P^1$. Geometrically, $\C^\times$ acts by scalars on $\mathbb{C}^2$, and the origin is the unique unstable point. More generally, the toric variety associated to the standard $d$-simplex in $\mathbb{R}^d$ is $\mathbb{C} P^d$ with its antitautological line bundle. \end{example}
\begin{example}\label{biginterval} Let $\Delta = [0,m]\subset\mathbb{R}$ for some positive integer $m$. The action of $\C^\times$ on $\mathbb{C}^2$ and the semistable locus are unchanged from Example \ref{interval}, hence the associated toric variety is again $\mathbb{C} P^1$. Its homogeneous coordinate ring $\mathbb{C}[S_\Delta]$, however, is isomorphic to the subring of $\mathbb{C}[S_{[0,1]}]$ spanned by homogeneous polynomials in degrees which are multiples of $m$. Hence the line bundle that we obtain on $\mathbb{C} P^1$ is the $m^{\text{th}}$ tensor power of the antitautological line bundle. More generally, dilating $\Delta$ by a positive integer $m$ corresponds to taking the $m^{\text{th}}$ tensor power of the ample line bundle on the toric variety. (See the end of Section \ref{git}.) \end{example}
\begin{example}\label{square} Let $\Delta = [0,1]\times [0,1]\subset \mathbb{R}^2$. This corresponds to an action of $(\C^\times)^2$ on $\mathbb{C}^4$, given in coordinates by $$(\lambda,\mu)\cdot(z_1,z_2,z_3,z_4) =(\lambda z_1,\lambda z_2,\mu z_3,\mu z_4).$$ The unstable locus consists of the points where either $z_1=z_2=0$ or $z_3=z_4=0$, and the quotient of the semistable points by $(\C^\times)^2$ is isomorphic to $\mathbb{C} P^1\times\mathbb{C} P^1$. On the algebraic side, we have $$\mathbb{C}[S_\Delta]\cong\mathbb{C}[x,y,z,w]/\langle xz-yw\rangle,$$ where $x,y,z,$ and $w$ are generators in degree $1$ corresponding to a cyclic ordering of the vertices of $\Delta$. In general, the toric variety corresponding to the product of two polytopes is isomorphic to the product of the corresponding toric varieties, in the Segre embedding. \end{example} \end{section}
\footnotesize{
} \end{spacing}
\end{document}
The coefficients of the polynomial
\[a_{10} x^{10} + a_9 x^9 + a_8 x^8 + \dots + a_2 x^2 + a_1 x + a_0 = 0\]are all integers, and its roots $r_1,$ $r_2,$ $\dots,$ $r_{10}$ are all integers. Furthermore, the roots of the polynomial
\[a_0 x^{10} + a_1 x^9 + a_2 x^8 + \dots + a_8 x^2 + a_9 x + a_{10} = 0\]are also $r_1,$ $r_2,$ $\dots,$ $r_{10}.$ Find the number of possible multisets $S = \{r_1, r_2, \dots, r_{10}\}.$
(A multiset, unlike a set, can contain multiple elements. For example, $\{-2, -2, 5, 5, 5\}$ and $\{5, -2, 5, 5, -2\}$ are the same multiset, but both are different from $\{-2, 5, 5, 5\}.$ And as usual, $a_{10} \neq 0$ and $a_0 \neq 0.$)
Let $r$ be an integer root of the first polynomial $p(x) = a_{10} x^{10} + a_9 x^9 + a_8 x^8 + \dots + a_2 x^2 + a_1 x + a_0 = 0,$ so
\[a_{10} r^{10} + a_9 r^9 + \dots + a_1 r + a_0 = 0.\]Since $a_0$ is not equal to 0, $r$ cannot be equal to 0. Hence, we can divide both sides by $r^{10},$ to get
\[a_{10} + a_9 \cdot \frac{1}{r} + \dots + a_1 \cdot \frac{1}{r^9} + a_0 \cdot \frac{1}{r^{10}} = 0.\]Thus, $\frac{1}{r}$ is a root of the second polynomial $q(x) = a_0 x^{10} + a_1 x^9 + a_2 x^8 + \dots + a_8 x^2 + a_9 x + a_{10} = 0.$ This means that $\frac{1}{r}$ must also be an integer.
The only integers $r$ for which $\frac{1}{r}$ is also an integer are $r = 1$ and $r = -1.$ Furthermore, $r = \frac{1}{r}$ for these values, so if the only roots of $p(x)$ are 1 and $-1,$ then the multiset of roots of $q(x)$ is automatically the same as the multiset of roots of $p(x).$ Therefore, the possible multisets are the ones that contain $k$ values of 1 and $10 - k$ values of $-1,$ for $0 \le k \le 10.$ There are 11 possible values of $k,$ so there are $\boxed{11}$ possible multisets.
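As a sanity check, the count can be verified by brute force. The sketch below (illustrative code, not part of the solution) expands $p(x) = (x - 1)^k (x + 1)^{10 - k}$ for each $k,$ reverses its coefficient list to obtain the second polynomial, and confirms that the reversal only rescales $p(x)$ by $(-1)^k,$ so the root multiset is unchanged.

```python
def poly_from_roots(roots):
    # Coefficients of prod (x - r), from the leading term down; a_10 = 1.
    coeffs = [1]
    for r in roots:
        coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

count = 0
for k in range(11):
    c = poly_from_roots([1] * k + [-1] * (10 - k))
    # Reversed coefficients give the second polynomial; it equals
    # (-1)^k * p(x), so its root multiset matches that of p(x).
    assert c[::-1] == [(-1) ** k * a for a in c]
    count += 1
print(count)  # 11
```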
Changjian Zhou* and Jinge Xing*
Improved Deep Residual Network for Apple Leaf Disease Identification
Abstract: Plant disease is one of the most irritating problems for agriculture growers. Timely detection of plant diseases is therefore of high practical value, since corresponding measures can be taken at the early stage of infection. Numerous researchers have made unremitting efforts in plant disease identification, but the problem was not solved effectively until the development of artificial intelligence and big data technologies, especially the wide application of deep learning models in different fields. Since the symptoms of plant diseases mainly appear visually on leaves, computer vision and machine learning technologies offer effective and rapid methods for identifying various kinds of plant diseases. As one of the fruits with the highest nutritional value, apple production directly affects the quality of life, and preventing disease intrusion in advance is important for both yield and taste. In this study, an improved deep residual network is proposed for apple leaf disease identification: a global residual connection is added to the original residual network, and the local residual connection architecture is optimized. On 1,977 apple leaf disease images in three categories collected in this study, experimental results show that the proposed method achieves 98.74% top-1 accuracy on the test set, outperforming existing state-of-the-art models on apple leaf disease identification tasks and proving its effectiveness.
Keywords: Agricultural Artificial Intelligence , Computer Vision , Deep Learning
According to the Food and Agriculture Organization (FAO) reports, plant diseases cause an approximately US$220 billion loss in the global economy every year [1]. As one of the most common and popular fruits, apples can be made into delicious foods, which are welcomed by people worldwide. However, apple production struggles with various disease intrusions [2], which is not good news. To improve apple production, agricultural experts have made long-term efforts to fight against apple diseases. Most apple disease identification methods still require manual inspection, which not only requires considerable manpower and material resources, but also depends strongly on experience. Since the emergence of machine learning technologies, an increasing number of researchers have begun to make use of them to prevent plants from being tainted. To date, apple leaf disease identification has been clearly divided into two main directions: one uses traditional machine learning methods, in which apple disease features are extracted manually based on experience before classification, while the other uses deep learning models that learn features automatically.
Chakraborty et al. [3] proposed an image segmentation and multiclass support vector machine (SVM) based apple leaf disease identification method. The authors first adopted image processing methods to segment the infected regions of apple leaves, and an SVM classifier was then adopted to classify them. The experimental results showed satisfactory performance. Ayyub and Manjramkar [4] proposed a multi-feature combined method for apple disease identification. They first extracted apple disease features such as color, texture, and shape, and then input them into the proposed multiclass SVM for classification. The proposed method achieved up to 96% accuracy. James and Sujatha [5] proposed a hybrid neural clustering classifier for various apple fruit disease classifications. Two stages were adopted in this method: the k-means algorithm was used to cluster the vectors first, and a backpropagation (BP) neural network was then adopted for classification. The proposed method achieved 98% accuracy. Pandiyan et al. [6] proposed a heterogeneous Internet of Things (HIoT) based apple leaf disease classification method; the identification accuracy was as high as 97.35%.
The methods mentioned above achieved excellent performance by extracting features manually for identification. However, the comprehensiveness of manually extracted features is limited, especially in large-scale planting environments. The emergence of deep learning technologies overcomes this issue: deep learning models are good at processing large-scale data and learning feature representations automatically from the given data [7]. In the field of apple leaf disease identification, many studies have been published. Jiang et al. [8] proposed an improved deep convolutional neural network (CNN) based real-time apple leaf disease detection approach. A deep CNN model was built from rainbow concatenation and the Inception structure, and it achieved 78.80% mAP on the apple leaf disease dataset (ALDD). Yu and Son [9] proposed an attention mechanism based deep learning method for leaf spot detection tasks. The authors designed two subnetworks for apple leaf spot disease identification: one subnetwork was used to separate the background from the whole leaf, and the other was used to classify the result. This novel method improved accuracy by modeling leaf spot attention and outperformed conventional deep learning models. Li and Rai [10] combined ResNet-18, SVM, and VGG models for classification and comparison, and the ResNet model obtained a better classification effect than the others. Nagaraju et al. [11] analyzed apple leaf diseases such as black measles, black rot, and leaf blight, and then proposed an improved VGG-16 model for identification, which achieved excellent performance. Agarwal et al. [12] proposed an FCNN-LDA method for apple leaf disease identification, which achieved higher accuracy than existing models. Tahir et al. [2] retrained an Inception-v3 model via transfer learning for apple disease classification, and the method reached 97% accuracy.
Gargade and Khandekar [13] summarized the factors that affect leaf disease identification and concluded that machine learning algorithms can identify features that cannot be found by the naked eye, which is why machine learning can outperform manual inspection in identifying plant diseases. In addition to detecting apple leaf diseases, deep learning models have also achieved praiseworthy results in identifying diseases of other plants [14-21].
All of the literature mentioned above has made outstanding contributions to identifying apple and other plant diseases. However, there is still room for improvement in apple disease identification. In this article, a novel deep learning model named the improved deep residual network is proposed for apple disease identification by adjusting the parameters and weights. Additionally, a global residual connection is added to the original network, and the local residual connection architecture is optimized. The improved deep residual network achieved 99% validation accuracy and 98.74% top-1 test accuracy for apple disease identification, which is clearly better than the accuracies of existing methods in apple leaf disease identification. The main contributions of this work are as follows:
· An improved deep residual network is proposed to identify apple leaf diseases.
· A global residual connection is adopted in the proposed network.
· The local residual connection architecture is optimized in this work.
This paper is organized as follows: Section 2 describes related works on apple leaf disease identification, and analyzes the popular deep learning models, as well as their merits and demerits. Section 3 details the proposed improved deep residual network. Section 4 analyzes and discusses the experimental results. The conclusion is presented in Section 5.
2. Related Works
Machine learning algorithms have accomplished meaningful achievements in various fields, and deep learning technologies have achieved great success in image processing and apple leaf disease identification tasks. In the ImageNet competition [22], different models were presented and new records were set by popular deep learning models such as AlexNet, the Inception networks, and deep residual networks (ResNet).
2.1 AlexNet
As one of the most popular deep CNN models, AlexNet [23] has won many vision competitions, such as the ImageNet and COCO competitions [24]. On the ImageNet dataset, AlexNet achieved more than an 80% improvement over conventional machine learning methods such as k-means and support vector machines. This framework consists of eight weighted layers, and a normalization function was introduced to avoid the overfitting problem. Although it had unparalleled historical achievements, there were still shortcomings that could not be ignored: gradient degradation might occur when a convolution layer was removed, which caused the gradient to disappear quickly, and the model easily overfit under limited training data.
2.2 Inception Network
The deep CNNs mentioned above convolve from one layer to the next: the outputs of the previous layer are fed as input into the next layer. Unlike those models, the Inception network defined a novel architecture that adopts different methods within its convolution layers, and it achieved wonderful results on ImageNet. The Inception network contains multiple kernels in its convolution layers [25], which increases the width of the network and improves classification accuracy. However, the model needs more computational resources, requiring increased investment in hardware equipment to obtain excellent performance.
2.3 Residual Network
As one of the most important and popular deep learning models, ResNet achieved amazing performance on the COCO dataset for object detection. The ResNet models won the ImageNet recognition and detection competitions as well as the COCO detection and segmentation competitions [26]. As shown in Fig. 1, the residual connection was introduced in the original structure of the residual network.
The original structure of the residual network.
3. Proposed Method
3.1 Structure of the Improved Deep Residual Network
In this study, an improved deep residual network is proposed. In contrast to previous network models, the residual connection in the residual blocks can be defined as follows (1):
[TeX:] $$x_{l+1}=x_{l}+f\left(x_{l}, w_{l}\right)$$
where [TeX:] $$x_{l+1}$$ denotes the output of the residual block of the [TeX:] $$(l+1)^{t h}$$ layer, [TeX:] $$x_{l}$$ denotes the output of the residual block of the [TeX:] $$l^{t h}$$ layer, and [TeX:] $$f\left(x_{l}, w_{l}\right)$$ denotes the residual mapping of the block. If the shapes of the feature maps of [TeX:] $$x_{l+1} \text { and } x_{l}$$ differ in the network, a dimension-matching operation is needed. The residual connection block can then be defined as follows (2):
[TeX:] $$x_{l+1}=h\left(x_{l}\right)+f\left(x_{l}, w_{l}\right)$$
where [TeX:] $$h\left(x_{l}\right)=w_{l}^{\prime} x_{l}, \text { and } w_{l}^{\prime}$$ denotes the [TeX:] $$1 \times 1$$ convolution operation used to match dimensions. Let L denote a deeper layer and l a shallower layer; the relationship between L and l is as follows:
[TeX:] $$x_{L}=x_{l}+\sum_{i=l}^{L-1} f\left(x_{i}, w_{i}\right)$$
where the feature of the deeper layer L equals the feature of the shallower layer l plus the sum of the residual mappings of the intermediate layers.
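Equation (3) can be checked numerically. The sketch below uses toy linear residual branches $f(x, w) = wx$; all names and sizes are illustrative assumptions, not the paper's implementation. It builds a chain of residual blocks and verifies that the deep feature equals the shallow feature plus the accumulated residual mappings.

```python
import numpy as np

rng = np.random.default_rng(0)
depth = 5
W = [0.1 * rng.standard_normal((4, 4)) for _ in range(depth)]

def f(x, w):
    # toy residual branch: a single linear map standing in for conv layers
    return w @ x

xs = [rng.standard_normal(4)]
for w in W:                               # x_{l+1} = x_l + f(x_l, w_l)
    xs.append(xs[-1] + f(xs[-1], w))

l, L = 2, 5                               # any shallower / deeper pair
lhs = xs[L]
rhs = xs[l] + sum(f(xs[i], W[i]) for i in range(l, L))
assert np.allclose(lhs, rhs)              # x_L = x_l + sum_{i=l}^{L-1} f(x_i, w_i)
```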
According to the chain rule of derivatives used in back propagation, the gradient of loss function [TeX:] $$\varepsilon$$ with respect to [TeX:] $$x_{l}$$ can be expressed as follows:
[TeX:] $$\frac{\partial \varepsilon}{\partial x_{l}}=\frac{\partial \varepsilon}{\partial x_{L}} \frac{\partial x_{L}}{\partial x_{l}}=\frac{\partial \varepsilon}{\partial x_{L}}\left(1+\frac{\partial}{\partial x_{l}} \sum_{i=l}^{L-1} f\left(x_{i}, w_{i}\right)\right)=\frac{\partial \varepsilon}{\partial x_{L}}+\frac{\partial \varepsilon}{\partial x_{L}} \frac{\partial}{\partial x_{l}} \sum_{i=l}^{L-1} f\left(x_{i}, w_{i}\right)$$
It can be seen from (4) that, throughout training, [TeX:] $$\frac{\partial}{\partial x_{l}} \sum_{i=l}^{L-1} f\left(x_{i}, w_{i}\right)$$ can hardly always take the value -1, so the vanishing-gradient phenomenon does not appear in the residual network: the term [TeX:] $$\frac{\partial \varepsilon}{\partial x_{L}}$$ propagates the gradient directly from layer L to any shallower layer l.
In this study, the residual connections are divided into global residual connections and local residual connections. In the global residual connection, the residual is connected between the input layer and the dense layer to prevent gradient disappearance. In the local residual connections, different from the original residual network, the proposed method makes a local-global connection: the input of one layer comes from the output of the layers that have been concatenated. The proposed model structure is shown in Fig. 2.
Structure of the improved deep residual network.
3.2 Global Residual Connection
To maintain the ability of identity mapping, the global residual connection method is adopted in this study, which contains a convolution layer and three maxpooling operations. The convolution layer has 256 filters with a kernel size of 256 and a stride of (1, 1). The three maxpooling operations are used to concatenate to the dense layer for classification, with the purpose of improving the feature representation ability.
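The dimension-reducing maxpooling applied on the skip path can be illustrated as follows; this is a naive single-channel NumPy sketch, and the window and stride values here are illustrative assumptions rather than the network's exact settings.

```python
import numpy as np

def maxpool2d(x, pool=2, stride=2):
    # naive max pooling over a single-channel H x W feature map
    H, W = x.shape
    out_h = (H - pool) // stride + 1
    out_w = (W - pool) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * stride:i * stride + pool,
                          j * stride:j * stride + pool].max()
    return out

x = np.arange(16.0).reshape(4, 4)
maxpool2d(x)   # a 4 x 4 map shrinks to 2 x 2 before concatenation
```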
3.3 Local Residual Connection
In the local residual connection block, the maxpooling operation is first used with a pooling size of 3 and a stride of (2, 2), which aims to reduce the dimension of the input data. There is a maxpooling operation and a convolution layer between each residual block for feature extraction. The purpose of local residual connections is to keep the tensor output from previous layers activated to maintain the network's excellent training performance.
4. Experiment and Analysis
4.1 Experimental Environment and Data Acquisition
The experimental operating system is Windows 10 with 2 × i7-9700 @3.00 GHz CPUs, 16 GB memory, and an NVIDIA GeForce GTX 1650 GPU. The programming language is Python, and the TensorFlow framework is used with CUDA 10.1. In this study, apple leaf disease images were collected from the AI Challenger 2018 dataset [21], which contains 1,977 images in three categories. To compare with this model equitably, all models are trained and tested on the same dataset, which is shown in Table 1. Some apple leaf images are detailed in Fig. 3.
Table 1. Details of dataset

Category          Number of images
Apple_Healthy     1,354
Apple_CedarRust   208
Apple_Scab        415
Fig. 3. Dataset of apple leaf diseases.
4.2 Training Details
In this study, the improved ResNet-50 is trained on an NVIDIA GeForce GTX 1650 GPU. Before training, the dataset is split into three parts by the train_test_split function: 60% for training, 20% for validation, and 20% for testing. In the optimization layer, Adadelta and cross-entropy are adopted as the optimizer and loss function, respectively, and the initial learning rate is set to 0.0001. The batch size is set to 32 to feed the proposed model for 200 epochs. The hyperparameters of the proposed method in the training process are detailed in Table 2.
Table 2. The hyperparameters

Hyperparameter          Value
Input_size              $64 \times 64 \times 3$
Batch_size              32
Initial learning rate   0.0001
Optimizer function      Adadelta
Activation function     ReLU, LeakyReLU
Epochs                  200
Loss function           Cross-entropy
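The 60%/20%/20% split described above is typically done with two calls to scikit-learn's train_test_split; the plain-Python sketch below (the function name and seed are illustrative) produces the same partition sizes for the 1,977-image dataset of Table 1.

```python
import random

def split_60_20_20(items, seed=42):
    """Shuffle and split into 60% train / 20% validation / 20% test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_60_20_20(range(1977))   # dataset size from Table 1
print(len(train), len(val), len(test))           # 1186 395 396
```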
4.3 Results and Analysis
4.3.1 Evaluation metrics
The top-1 accuracy is adopted as the evaluation metric, defined as follows:

$$ACC = \frac{C}{N}$$

where $N$ denotes the total number of samples and $C$ denotes the number of correctly predicted samples.
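A direct implementation of this metric (a trivial sketch, not taken from the paper's code):

```python
def top1_accuracy(predictions, labels):
    """ACC = C / N: the fraction of samples whose top-1 predicted
    class matches the ground-truth label."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical labels: 3 of 4 predictions are correct.
print(top1_accuracy([0, 2, 1, 1], [0, 2, 2, 1]))  # 0.75
```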
To ensure fairness across algorithms, the same dataset and experimental environment are used in this study. Several state-of-the-art models are selected for comparison, such as SVM [28,29], k-means [30], AlexNet [23], Inception-ResNet-v2 [25], and ResNet-50 [26], as well as recent models such as those of Ayyub and Manjramkar [4], Jiang et al. [8], and Tahir et al. [2].
Fig. 5 shows the loss of these models. AlexNet and Inception-ResNet-v2 have unstable loss curves. With the same loss function, ResNet-50 and the improved ResNet-50 converge stably, and the improved ResNet-50 has the better convergence behavior. On the test dataset, an SVM with a Gaussian kernel was adopted for nonlinear classification, but because the data are scattered it is difficult to achieve the desired effect. As shown in Fig. 6, k-means could not perform well on the scattered data either. Fig. 7 shows the identification accuracy of the classic and recent state-of-the-art models. The improved ResNet-50 achieves the highest top-1 identification accuracy, which demonstrates the ability of the proposed method.
Fig. 4. Accuracy of deep learning models with training and validation: (a) AlexNet, (b) Inception-ResNet-v2, (c) ResNet-50, and (d) improved ResNet-50.
Fig. 5. The loss of deep learning models: (a) AlexNet, (b) Inception-ResNet-v2, (c) ResNet-50, and (d) improved ResNet-50.
Fig. 6. K-means algorithm on test data.
Fig. 7. Accuracy of different algorithms on the test dataset.
4.3.3 Results analysis
From the experimental results, it can be seen that the improved deep residual network achieves better performance than the classic models on the apple leaf disease identification task, which shows that the proposed global and local combined residual connection architecture has interpretable advantages. The main reason is that the global residual connection with three maxpooling operations maintains the identity-mapping ability, while the local residual connections preserve the gradient during training. Therefore, the proposed method achieves satisfactory identification accuracy. However, the added residual connections increase the complexity of the network, which raises the computing-resource requirements.
Food security is one of the most urgent problems in the world, and every year farmers struggle with plant diseases that reduce grain production. The apple is one of the most popular fruits worldwide and needs to be protected from disease. Apple diseases usually first appear visually on the leaves, which makes leaf disease identification particularly important for pathological diagnosis. This paper analyzes the literature on apple disease identification and proposes an improved deep residual network with a combined local and global residual connection method. A global residual connection is added to the classic residual network, and the local residual connection architecture is optimized. Apple leaf disease images from the AI Challenger 2018 dataset, comprising 1,977 images in three categories (Cedar Rust, Scab, and Healthy), are adopted for training, testing, and comparison. The proposed interpretable method achieves 98.74% accuracy on the test set, outperforming the existing models and proving its effectiveness. It is hoped that further models with better performance will be designed for apple and other plant disease identification, making a contribution to food security.
This work was supported by 2021 project of the 14th Five Year Plan of Educational Science in Heilongjiang Province (No. GJB1421224 and GJB1421226), and the 2021 smart campus project of agricultural college branch of CAET (No. C21ZD02).
Changjian Zhou
He received his M.S. degree from the School of Computer Science and Technology, Harbin Engineering University, in 2012. Since March 2012, he has been with the High-Performance Computing and Artificial Intelligence Research Center of Northeast Agricultural University as a teacher. His current research interests include agricultural artificial intelligence and computer vision.
Jinge Xing
He received his M.S. degree in computer science from Northeast Agricultural University in 1996. He is currently a senior engineer in the Department of Modern Educational Technology, Northeast Agricultural University. His current research interests include machine learning and cyberspace security.
1 Food and Agriculture Organization of the United Nations, 2019 (Online). Available: https://www.fao.org/news/story/en/item/1187738/icode
2 M. B. Tahir, M. A. Khan, K. Javed, S. Kadry, Y. D. Zhang, T. Akram, and M. Nazir, "Recognition of apple leaf diseases using deep learning and variances-controlled features reduction," Microprocessors and Microsystems, article no. 104027, 2021.
3 S. Chakraborty, S. Paul, and M. Rahat-uz-Zaman, "Prediction of apple leaf diseases using multiclass support vector machine," in Proceedings of 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 2021, pp. 147-151.
4 S. R. N. M. Ayyub and A. Manjramkar, "Fruit disease classification and identification using image processing," in Proceedings of 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2019, pp. 754-758.
5 G. M. James and S. Sujatha, "Categorising apple fruit diseases employing hybrid neural clustering classifier," Materials Today: Proceedings, 2021.
6 S. Pandiyan, M. Ashwin, R. Manikandan, K. M. Karthick Raghunath, and G. R. Anantha Raman, "Heterogeneous Internet of Things organization predictive analysis platform for apple leaf diseases recognition," Computer Communications, vol. 154, pp. 99-110, 2020.
7 C. Zhou, J. Song, S. Zhou, Z. Zhang, and J. Xing, "COVID-19 detection based on image regrouping and ResNet-SVM using chest X-ray images," IEEE Access, vol. 9, pp. 81902-81912, 2021.
8 P. Jiang, Y. Chen, B. Liu, D. He, and C. Liang, "Real-time detection of apple leaf diseases using deep learning approach based on improved convolutional neural networks," IEEE Access, vol. 7, pp. 59069-59080, 2019.
9 H. J. Yu and C. H. Son, "Leaf spot attention network for apple leaf disease identification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, 2020, pp. 52-53.
10 X. Li and L. Rai, "Apple leaf disease identification and classification using ResNet models," in Proceedings of 2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT), Shenzhen, China, 2020, pp. 738-742.
11 Y. Nagaraju, S. Swetha, and S. Stalin, "Apple and grape leaf diseases classification using transfer learning via fine-tuned classifier," in Proceedings of 2020 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), Hyderabad, India, 2020, pp. 1-6.
12 M. Agarwal, R. K. Kaliyar, G. Singal, and S. K. Gupta, "FCNN-LDA: a faster convolution neural network model for leaf disease identification on Apple's leaf dataset," in Proceedings of 2019 12th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 2019, pp. 246-251.
13 A. Gargade and S. A. Khandekar, "A review: custard apple leaf parameter analysis and leaf disease detection using digital image processing," in Proceedings of 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2019, pp. 267-271.
14 C. Zhou, S. Zhou, J. Xing, and J. Song, "Tomato leaf disease identification by restructured deep residual dense network," IEEE Access, vol. 9, pp. 28822-28831, 2021.
15 N. R. Bhimte and V. R. Thool, "Diseases detection of cotton leaf spot using image processing and SVM classifier," in Proceedings of 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 2018, pp. 340-344.
16 M. K. Maid and R. R. Deshmukh, "Statistical analysis of WLR (wheat leaf rust) disease using ASD FieldSpec4 spectroradiometer," in Proceedings of 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 2018, pp. 1398-1402.
17 F. Liu and Z. Xiao, "Disease spots identification of potato leaves in hyperspectral based on locally adaptive 1D-CNN," in Proceedings of 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 2020, pp. 355-358.
18 B. Liu, C. Tan, S. Li, J. He, and H. Wang, "A data augmentation method based on generative adversarial networks for grape leaf disease identification," IEEE Access, vol. 8, pp. 102188-102198, 2020.
19 H. Sabrol and S. Kumar, in Advances in Computer Vision. Cham, Switzerland: Springer, 2019, pp. 434-443.
20 C. Zhou, Z. Zhang, S. Zhou, J. Xing, Q. Wu, and J. Song, "Grape leaf spot identification under limited samples by fine grained-GAN," IEEE Access, vol. 9, pp. 100480-100489, 2021.
21 Q. Zeng, X. Ma, B. Cheng, E. Zhou, and W. Pang, "GANs-based data augmentation for citrus disease severity detection using deep learning," IEEE Access, vol. 8, pp. 172882-172891, 2020.
22 J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and F. F. Li, "ImageNet: a large-scale hierarchical image database," in Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 248-255.
23 A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012.
24 T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, in Computer Vision – ECCV 2014. Cham, Switzerland: Springer, 2014, pp. 740-755.
25 C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, 2015, pp. 1-9.
26 K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, 2016, pp. 770-778.
27 AI_Challenger_2018 (Online). Available: https://github.com/AIChallenger/AI_Challenger_2018
28 V. N. Vapnik, Statistical Learning Theory. New York, NY: John Wiley & Sons, 1998.
29 A. C. Enache and V. Sgarciu, "Anomaly intrusions detection based on support vector machines with an improved bat algorithm," in Proceedings of 2015 20th International Conference on Control Systems and Computer Science, Bucharest, Romania, 2015, pp. 317-321.
30 M. Erisoglu, N. Calis, and S. Sakallioglu, "A new algorithm for initial cluster centers in k-means algorithm," Pattern Recognition Letters, vol. 32, no. 14, pp. 1701-1705, 2011.
Received: March 19, 2021
Revision received: July 9, 2021
Accepted: August 1, 2021
Corresponding Author: Jinge Xing*, [email protected]
Changjian Zhou*, Dept. of Modern Educational Technology, Northeast Agricultural University, Harbin, China, [email protected]
Jinge Xing*, Dept. of Modern Educational Technology, Northeast Agricultural University, Harbin, China, [email protected]
March 2013, 33(3): 1177-1199. doi: 10.3934/dcds.2013.33.1177
Reversibility and branching of periodic orbits
Ana Cristina Mereu 1, and Marco Antonio Teixeira 2,
Departamento de Física, Química e Matemática, Universidade Federal de São Carlos, 18052-780, S.P., Brazil
Departamento de Matemática, Universidade Estadual de Campinas, Caixa Postal 6065, 13083-970, Campinas, S.P., Brazil
Received April 2011 Revised April 2012 Published October 2012
We study the dynamics near an equilibrium point of a $2$-parameter family of reversible systems in $\mathbb{R}^6$. In particular, we exhibit conditions for the existence of periodic orbits near the equilibrium of systems of the form $x^{(vi)}+ \lambda_1 x^{(iv)} + \lambda_2 x'' +x = f(x,x',x'',x''',x^{(iv)},x^{(v)})$. The techniques used are the Belitskii normal form combined with Lyapunov-Schmidt reduction.
Keywords: normal form, Lyapunov center theorem, resonance, reversible systems, periodic orbits.
Mathematics Subject Classification: Primary: 34C29, 34C25; Secondary: 47H1.
Citation: Ana Cristina Mereu, Marco Antonio Teixeira. Reversibility and branching of periodic orbits. Discrete & Continuous Dynamical Systems, 2013, 33 (3) : 1177-1199. doi: 10.3934/dcds.2013.33.1177
A. R. Champneys, Homoclinic orbits in reversible systems and their applications in mechanics, fluids and optics, Physica D, 112 (1998), 158-186. doi: 10.1016/S0167-2789(97)00209-1. Google Scholar
J. V. Chaparova, L. A. Peletier and S. A. Tersian, Existence and nonexistence of nontrivial solutions of semilinear fourth- and sixth-order differential equations, Advances in Differential Equations, 8 (2003), 1237-1258. Google Scholar
R. L. Devaney, Reversible diffeomorphisms and flows, Trans. Am. Math. Soc., 218 (1976), 89-113. doi: 10.1090/S0002-9947-1976-0402815-3. Google Scholar
J. Hale, "Ordinary Differential Equations," $1^{st}$ edition, New York, Wiley-Interscience, 1969. Google Scholar
G. Iooss and M. Adelmeyer, "Topics in Bifurcation Theory and Applications," Adv. Ser. Nonlinear Dynamics, 3, World Scientific Publishing Co., Inc., River Edge, NJ, 1992. Google Scholar
A. Jacquemard, M. F. S. Lima and M. Teixeira, Degenerate resonances and branching of periodic orbits, Annali di Matematica Pura ed Applicata, 187 (2008), 105-117. Google Scholar
J. S. W. Lamb and J. A. G. Roberts, Time-reversal symmetry in dynamical systems: a survey, Phys. D, 112 (1998), 1-39. Google Scholar
M. F. S. Lima and M. Teixeira, Families of periodic orbits in resonant reversible systems, Bull. Braz. Math. Soc., 40 (2009), 521-547. doi: 10.1007/s00574-009-0025-9. Google Scholar
C. W. Shih, Bifurcations of Symmetric Periodic Orbits near Equilibrium in Reversible Systems, Int. J. Bifurcation and Chaos, 7 (1997), 569-584. doi: 10.1142/S0218127497000406. Google Scholar
T. Wagenknecht, "An Analytical Study of a Two Degrees of Freedom Hamiltonian System Associated with the Reversible Hyperbolic Umbilic," Ph.D. thesis, University Ilmenau, Germany, 1999. Google Scholar
May 2017, 37(5): 2717-2743. doi: 10.3934/dcds.2017117
Diagonal stationary points of the Bethe functional
Grzegorz Siudem 1, and Grzegorz Świątek 2,,
Faculty of Physics, Warsaw University of Technology, Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, PL-00-662 Warsaw, Poland
Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, PL-00-662 Warsaw, Poland
* Corresponding author: [email protected]
Received March 2016 Revised December 2016 Published February 2017
Fund Project: Both authors supported in part by Narodowe Centrum Nauki - grant 2015/17/B/ST1/00091.
We investigate stationary points of the Bethe functional for the Ising model on a $2$-dimensional lattice. Such stationary points are also fixed points of message passing algorithms. In the absence of an external field, by symmetry reasons one expects the fixed points to have constant means at all sites. This is shown not to be the case. There is a critical value of the coupling parameter which is equal to the phase transition parameter on the computation tree, see [13], above which fixed points appear with means that are variable though constant on diagonals of the lattice and hence the term "diagonal stationary points". A rigorous analytic proof of their existence is presented. Furthermore, computer-obtained examples of diagonal stationary points which are local maxima of the Bethe functional and hence stable equilibria for message passing are shown. The smallest such example was found on the $100 \times 100$ lattice.
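For readers unfamiliar with the setting, the constant-mean fixed points mentioned in the abstract can be caricatured by a standard scalar reduction of the Ising belief-propagation update on a degree-4 tree (each outgoing message depends on 3 incoming ones): the message log-ratio $\eta$ satisfies $\eta = 3\,\mathrm{artanh}(\tanh(J)\tanh(\eta))$, and a symmetry-broken fixed point appears above the tree transition $J_c = \mathrm{artanh}(1/3)$. This is a textbook caricature under those assumptions, not the computation from the paper.

```python
import math

def bp_fixed_point(J, iters=200, eta0=0.5):
    """Iterate the scalar BP update for the zero-field Ising model on a
    degree-4 tree (3 incoming messages per outgoing one):
        eta <- 3 * artanh(tanh(J) * tanh(eta))
    """
    eta = eta0
    for _ in range(iters):
        eta = 3.0 * math.atanh(math.tanh(J) * math.tanh(eta))
    return eta

J_c = math.atanh(1.0 / 3.0)           # tree phase-transition coupling
print(bp_fixed_point(0.9 * J_c))      # below J_c: only the symmetric fixed point (~0)
print(bp_fixed_point(1.5 * J_c))      # above J_c: a nonzero, symmetry-broken fixed point
```

Below $J_c$ the update is a contraction toward the symmetric solution; above $J_c$ the iteration settles on a nonzero message, the one-dimensional analogue of the non-constant stationary points studied in the paper.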
Keywords: Bethe approximation, Ising model, message passing algorithms.
Mathematics Subject Classification: Primary: 37N40, 90C26; Secondary: 37A60.
Citation: Grzegorz Siudem, Grzegorz Świątek. Diagonal stationary points of the Bethe functional. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2717-2743. doi: 10.3934/dcds.2017117
R. J. Baxter, Exactly solved models in statistical mechanics, Integrable Systems in Statistical Mechanics, 1 (1985), 5-63. doi: 10.1142/9789814415255_0002. Google Scholar
H. A. Bethe, Statistical theory of superlattices, Selected Works of Hans A Bethe, 18 (1997), 245-270. doi: 10.1142/9789812795755_0010. Google Scholar
S. Dorogovtsev, A. Goltsev and J. Mendes, Critical phenomena in complex networks, Rev. Mod. Phys., 80 (2008), 1275-1335. doi: 10.1103/RevModPhys.80.1275. Google Scholar
C. Fortuin, P. Kasteleyn and J. Ginibre, Correlation inequalities on some partially ordered sets, Commun. Math. Phys., 22 (1971), 89-103. doi: 10.1007/BF01651330. Google Scholar
T. Heskes, Stable fixed points of loopy belief propagation are local minima of the Bethe free energy, in S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, MIT Press, Cambridge, MA, (2003), 343-350. Google Scholar
J. M. Mooij and H. J. Kappen, On the properties of the Bethe approximation and loopy belief propagation on binary networks, J. Stat. Mech. Theor. Exp., 11 (2005), P11012. doi: 10.1088/1742-5468/2005/11/P11012. Google Scholar
J. M. Mooij and H. J. Kappen, Sufficient conditions for convergence of the sum-product algorithm, IEEE Transactions on Information Theory, 53 (2007), 4422-4437. doi: 10.1109/TIT.2007.909166. Google Scholar
S. Newhouse, Diffeomorphisms with infinitely many sinks, Topology, 13 (1974), 9-18. doi: 10.1016/0040-9383(74)90034-2. Google Scholar
J. Pearl, Reverend Bayes on inference engines: A distributed hierarchical approach, Proceedings of the Second National Conference on Artificial Intelligence, (1982), 133-136. Google Scholar
T. G. Roosta and M. J. Wainwright snd S. S. Sastry, Convergence analysis of reweighted sum-product algorithms, IEEE Transactions on Signal Processing, 56 (2008), 4293-4305. doi: 10.1109/ICASSP.2007.366292. Google Scholar
J. Shin, The complexity of approximating a Bethe equilibrium, IEEE Transactions on Information Theory, 60 (2014), 3959-3969. doi: 10.1109/TIT.2014.2317487. Google Scholar
G. Siudem and G. Świątek, Dynamics of the belief propagation for the ising model, Acta Physica Polonica A, 127 (2015), 3A145-3A149. doi: 10.12693/APhysPolA.127.A-145. Google Scholar
S. Tatikonda and M. Jordan, Loopy belief propagation and Gibbs measures, in Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, (2002), 493-500. Google Scholar
M. J. Wainwright and M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning, 1 (2008), 1-305. doi: 10.1561/2200000001. Google Scholar
A. Weller and T. Jebara, Bethe bounds and approximating the global optimum, Journal of Machine Learning Research W& CP, 31 (2013), 618-631. Google Scholar
M. Welling and Y. -W. Teh, Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation in Proc. 17th Conference on Uncertainty in Artificial Intelligence (UAI), 2001. Google Scholar
J. Yedidia, W. Freeman and Y. Weiss, Constructing free-energy approximations and generalized belief propagation algorithms, IEEE Trans. on Information Theory, 51 (2005), 2282-2312. doi: 10.1109/TIT.2005.850085. Google Scholar
Figure 1. Illustration of the diagonal matrix which can be obtained from means given by Eq. (11).
Figure 2. Numerical evidence that the means (from Eq. (11), visualized on Fig. 1) in fact define a stationary point of the Bethe functional. The dots on the graph show values of the negative Bethe functional computed for the means given by vector $B_{\eta}$ given by formula (12) with $\eta$ shown on the horizontal axis and various randomly chosen $(X_{\ell})$.
Figure 3. Values of the negative Bethe functional for the diagonal stationary point $\mathcal{B}_0$ perturbed in the direction of $P$ according to formula (12).
Figure 5. The stability test algorithm.
Figure 4. Stable fixed point given by Eq. (13).
Spaghetti plot
A spaghetti plot (also known as a spaghetti chart, spaghetti diagram, or spaghetti model) is a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term.[1] The method was first used to track routing through factories; visualizing flow in this manner can reveal and reduce inefficiency within a system. For animal populations and weather buoys drifting through the ocean, spaghetti plots are drawn to study distribution and migration patterns. Within meteorology, these diagrams can help determine confidence in a specific weather forecast, as well as the positions and intensities of high- and low-pressure systems; they are composed of deterministic forecasts from atmospheric models or their various ensemble members. Within medicine, they can illustrate the effects of drugs on patients during drug trials.
Applications
Biology
Spaghetti diagrams have been used to study why butterflies are found where they are, and to see how topographic features (such as mountain ranges) limit their migration and range.[2] Within mammal distributions across central North America, these plots have correlated their edges to regions which were glaciated within the previous ice age, as well as certain types of vegetation.[3]
Meteorology
Within meteorology, spaghetti diagrams are normally drawn from ensemble forecasts. A meteorological variable e.g. pressure, temperature, or precipitation amount is drawn on a chart for a number of slightly different model runs from an ensemble. The model can then be stepped forward in time and the results compared and be used to gauge the amount of uncertainty in the forecast. If there is good agreement and the contours follow a recognizable pattern through the sequence, then the confidence in the forecast can be high. Conversely, if the pattern is chaotic, i.e., resembling a plate of spaghetti, then confidence will be low. Ensemble members will generally diverge over time and spaghetti plots are a quick way to see when this happens.
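The divergence of ensemble members can be illustrated with a toy calculation (the logistic map here is only a stand-in for atmospheric dynamics, not an actual forecast model): nearly identical initial states are stepped forward, and the member-to-member spread — exactly what a spaghetti plot makes visible — grows with lead time.

```python
# Toy "spaghetti" ensemble: five nearly identical initial states are
# stepped through a chaotic map. The growing spread between members is
# why forecast confidence drops with lead time.
def step(x):
    return 3.9 * x * (1.0 - x)   # logistic map in its chaotic regime

members = [0.4 + 1e-4 * k for k in range(5)]   # tiny initial differences
spreads = []
for _ in range(40):
    spreads.append(max(members) - min(members))
    members = [step(x) for x in members]

print(spreads[0], max(spreads))   # spread starts tiny and becomes large
```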
Spaghetti plots can be a more favorable choice compared to the mean-spread ensemble in determining the intensity of a coming cyclone, anticyclone, or upper-level ridge or trough. Because ensemble forecasts naturally diverge as the days progress, the projected locations of meteorological features will spread further apart. A mean-spread diagram will take a mean of the calculated pressure from each spot on the map as calculated by each permutation in the ensemble, thus effectively smoothing out the projected low and making it appear broader in size but weaker in intensity than the ensemble's permutations had actually indicated. It can also depict two features instead of one if the ensemble clustering is around two different solutions.[4]
Various forecast models within tropical cyclone track forecasting can be plotted on a spaghetti diagram to show confidence in five-day track forecasts.[5] When track models diverge late in the forecast period, the plot takes on the shape of a squashed spider, and can be referred to as such in National Hurricane Center discussions.[6] Within the field of climatology and paleotempestology, spaghetti plots have been used to correlate ground temperature information derived from boreholes across central and eastern Canada.[7] As in other disciplines, spaghetti diagrams can be used to show the motion of objects, such as drifting weather buoys over time.[8]
Business
Spaghetti diagrams were first used to track routing through a factory.[9] Spaghetti plots are a simple tool to visualize movement and transportation.[10] Analyzing flows through systems can determine where time and energy are wasted and identify where streamlining would be beneficial.[1] This is true not only for physical travel through a physical place, but also for more abstract processes, such as the handling of a mortgage loan application.[11]
Medicine
Spaghetti plots can be used to track the results of drug trials amongst a number of patients on one individual graph to determine their benefit.[12] They have also been used to correlate progesterone levels to early pregnancy loss.[13] The half-life of drugs within people's blood plasma, as well as discriminating effects between different populations, can be diagnosed quickly via these diagrams.[14]
References
1. Theodore T. Allen (2010). Introduction to Engineering Statistics and Lean Sigma: Statistical Quality Control and Design of Experiments and Systems. Springer. p. 128. ISBN 978-1-84882-999-2.
2. James A. Scott (1992). The Butterflies of North America: A Natural History and Field Guide. Stanford University Press. p. 103. ISBN 978-0-8047-2013-7.
3. J. Knox Jones; Elmer C. Birney (1988). Handbook of mammals of the north-central states. University of Minnesota Press. pp. 52–55. ISBN 978-0-8166-1420-2.
4. Environmental Modeling Center (2003-08-21). "NCEP Medium-Range Ensemble Forecast (MREF) System Spaghetti Diagrams". National Oceanic and Atmospheric Administration. Retrieved 2011-02-17.
5. Ivor Van Heerden; Mike Bryan (2007). The storm: what went wrong and why during hurricane Katrina : the inside story from one Louisiana scientist. Penguin. ISBN 978-0-14-311213-6.
6. John L. Beven, III (2007-05-30). "Tropical Depression Two-E Discussion Number 3". National Hurricane Center. Retrieved 2011-02-17.
7. Louise Bodri; Vladimír Čermák (2007). Borehole climatology: a new method on how to reconstruct climate. Elsevier. p. 76. ISBN 978-0-08-045320-0.
8. S. A. Thorpe (2005). The turbulent ocean. Cambridge University Press. p. 341. ISBN 978-0-521-83543-5.
9. William A. Levinson (2007). Beyond the theory of constraints: how to eliminate variation and maximize capacity. Productivity Press. p. 97. ISBN 978-1-56327-370-4.
10. Lonnie Wilson (2009). How to Implement Lean Manufacturing. McGraw Hill Professional. p. 127. ISBN 978-0-07-162507-4.
11. Rangaraj (2009). Supply Chain Management For Competitive Advantage. Tata McGraw-Hill. p. 130. ISBN 978-0-07-022163-5.
12. Donald R. Hedeker; Robert D. Gibbons (2006). Longitudinal data analysis. John Wiley and Sons. pp. 52–54. ISBN 978-0-471-42027-9.
13. Hulin Wu; Jin-Ting Zhang (2006). Nonparametric regression methods for longitudinal data analysis. John Wiley and Sons. pp. 2–4. ISBN 978-0-471-48350-2.
14. Johan Gabrielsson; Daniel Weiner (2001). Pharmacokinetic/pharmacodynamic data analysis: concepts and applications, Volume 1. Taylor & Francis. pp. 263–264. ISBN 978-91-86274-92-4.
External links
• TIGGE Project at NCAR
Higman's lemma
In mathematics, Higman's lemma states that the set of finite sequences over a finite alphabet, as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if $w_{1},w_{2},\ldots $ is an infinite sequence of words over some fixed finite alphabet, then there exist indices $i<j$ such that $w_{i}$ can be obtained from $w_{j}$ by deleting some (possibly none) symbols. More generally this remains true when the alphabet is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation allows the replacement of symbols by earlier symbols in the well-quasi-ordering of labels. This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952.
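The subsequence relation in the statement can be made concrete in code. The following Python sketch (function names are illustrative) tests whether one word can be obtained from another by deleting symbols, and searches a finite list of words for a pair of indices i < j with the i-th word a subword of the j-th; Higman's lemma guarantees such a pair in any infinite sequence, though a particular finite list need not contain one.

```python
def is_subword(u, v):
    """True if u can be obtained from v by deleting (possibly no) symbols,
    i.e., u precedes v in the subsequence order of Higman's lemma."""
    it = iter(v)
    return all(ch in it for ch in u)  # greedy subsequence match

def higman_pair(words):
    """Return the first (i, j) with i < j and words[i] a subword of
    words[j], or None if the finite list has no such pair."""
    for j in range(len(words)):
        for i in range(j):
            if is_subword(words[i], words[j]):
                return (i, j)
    return None
```

For example, in the list ["b", "aca", "ba"], the word "b" is a subword of "ba", so the search returns the pair (0, 2).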
Reverse-mathematical calibration
Higman's lemma has been reverse mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to $ACA_{0}$ over the base theory $RCA_{0}$.[1]
References
1. J. van der Meeren, M. Rathjen, A. Weiermann, An order-theoretic characterization of the Howard-Bachmann-hierarchy (2015, p.41). Accessed 03 November 2022.
• Higman, Graham (1952), "Ordering by divisibility in abstract algebras", Proceedings of the London Mathematical Society, (3), 2 (7): 326–336, doi:10.1112/plms/s3-2.1.326
\begin{document}
\title{\LARGE \bf Performance Bounds for the $k$-Batch Greedy Strategy \\in Optimization Problems with Curvature} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract} The $k$-batch greedy strategy is an approximate algorithm to solve optimization problems where the optimal solution is hard to obtain. Starting with the empty set, the $k$-batch greedy strategy adds a batch of $k$ elements to the current solution set with the largest gain in the objective function while satisfying the constraints. In this paper, we bound the performance of the $k$-batch greedy strategy with respect to the optimal strategy by defining the total curvature $\alpha_k$. We show that when the objective function is nondecreasing and submodular, the $k$-batch greedy strategy satisfies a harmonic bound $1/(1+\alpha_k)$ for a general matroid constraint and an exponential bound $\left(1-(1-{\alpha}_k/{t})^t\right)/{\alpha}_k$ for a uniform matroid constraint, where $k$ divides the cardinality of the maximal set in the general matroid, $t=K/k$ is an integer, and $K$ is the rank of the uniform matroid. We also compare the performance of the $k$-batch greedy strategy with that of the $k_1$-batch greedy strategy when $k_1$ divides $k$. Specifically, we prove that when the objective function is nondecreasing and submodular, the $k$-batch greedy strategy has better harmonic and exponential bounds in terms of the total curvature. Finally, we illustrate our results by considering a task-assignment problem. \end{abstract} \section{Introduction}
A variety of combinatorial optimization problems such as generalized assignment (see, e.g., \cite{streeter2008online} and \cite{{FeigeVondrak}}), max $k$-cover (see, e.g., \cite{K-cover1998} and \cite{Feige1998}), maximum coverage location (see, e.g., \cite{Fisher1977} and \cite{Location}), and sensor placement (see, e.g., \cite{LiC12} and \cite{SensorPlacement}) can be formulated in the following way:
\begin{align}\label{eqn:1} \begin{array}{l} \text{maximize} \ \ f(M) \\ \text{subject to} \ \ M\in \mathcal{I} \end{array} \end{align}
where $\mathcal{I}$ is a non-empty collection of subsets of a finite set $X$, and $f$ is a real-valued set function defined on the power set $2^X$ of $X$. The set function $f$ is said to be \emph{submodular} if it has the diminishing-return property \cite{Edmonds}. The pair $(X,\mathcal{I})$ is called a \emph{matroid} if the collection $\mathcal{I}$ is hereditary and has the augmentation property \cite{Tutte}. When $\mathcal{I}=\{S\subseteq X: |S|\leq K\}$ for a given $K$, the pair $(X,\mathcal{I})$ is said to be a \emph{uniform matroid} of rank $K$, where $|S|$ denotes the cardinality of the set $S$. These definitions will be discussed in more detail in Section~II.
Finding the optimal solution to problem (\ref{eqn:1}) in general is NP-hard. The $1$-batch greedy strategy provides a computationally feasible solution, which starts with the empty set, and then adds one element to the current solution set with the largest gain in the objective function while satisfying the constraints. This scheme is a special case of the \emph{$k$-batch greedy strategy} (with $k\geq 1$), which starts with the empty set but adds to the current solution set $k$ elements with the largest gain in the objective function under the constraints. The performance of the $1$-batch greedy strategy in optimization problems has been extensively investigated, while the performance of the $k$-batch greedy strategy for general $k$ has received little attention, notable exceptions being Nemhauser et al. \cite{nemhauser19781} and Hausmann et al. \cite{hausmann1980}, which we will review in the following subsection.
\subsection{Review of Previous Work}
Nemhauser et al. \cite{nemhauser19781}, \cite{nemhauser1978} proved that when $f$ is a nondecreasing submodular set function satisfying $f(\emptyset)=0$, the $1$-batch greedy strategy yields at least a $1/2$-approximation for a general matroid and a $(1-1/e)$-approximation for a uniform matroid. By introducing the total curvature $\alpha$, Conforti and Cornu{\'e}jols \cite{conforti1984submodular} showed that when $f$ is a nondecreasing submodular set function, the $1$-batch greedy strategy achieves at least a $1/(1+\alpha)$-approximation for a general matroid and a $(1-e^{-\alpha})/{\alpha}$-approximation for a uniform matroid, where the total curvature $\alpha$ is defined as $$\alpha=\max\limits_{j\in X^*}\left\{1-\frac{f(X)-f(X\setminus\{j\})}{f(\{j\})-f(\emptyset)}\right\}$$ and $X^*=\{j\in X: f(\{j\})>0\}$. For a nondecreasing submodular set function $f$, the total curvature $\alpha$ takes values on the interval $ [0,1]$. In this case, we have $1/(1+\alpha)\geq1/2$ and $(1-e^{-\alpha})/\alpha\geq (1-1/e)$, which implies the bounds $1/(1+\alpha)$ and $(1-e^{-\alpha})/\alpha$ are stronger than the bounds $1/2$ and $(1-1/e)$ in \cite{nemhauser1978} and \cite{nemhauser19781}, respectively. Vondr{\'a}k \cite{vondrak2010submodularity} proved that when $f$ is a nondecreasing submodular set function, the continuous $1$-batch greedy strategy gives at least a $(1-e^{-\alpha})/\alpha$-approximation for any matroid.
Nemhauser et al. \cite{nemhauser19781} proved that when $(X,\mathcal{I})$ is a uniform matroid and $K=ks-p$ ($s$ and $p$ are integers and $0\leq p\leq k-1$), the $k$-batch greedy strategy achieves at least a $(1-(1-\lambda/s)(1-1/s)^{s-1})$-approximation, where $\lambda=1-p/k$. Hausmann et al. \cite{hausmann1980} showed that when $(X,\mathcal{I})$ is an independence system, the $k$-batch greedy strategy achieves at least a $q(X,\mathcal{I})$-approximation, where $q(X,\mathcal{I})$ is the rank quotient defined in \cite{hausmann1980}.
Although Nemhauser et al. \cite{nemhauser19781} and Hausmann et al. \cite{hausmann1980} investigated the performance of the $k$-batch greedy strategy, they only considered uniform matroid constraints and independence system constraints, respectively. This prompts us to investigate the performance of the $k$-batch greedy strategy more comprehensively.
\subsection{Main Results and Contribution}
In this paper, by defining the total curvature $\alpha_k$ of the objective function, we derive bounds for the performance of the $k$-batch greedy strategy for a general matroid and a uniform matroid, respectively. By comparing the values of $\alpha_k$ for different $k$ and investigating the monotonicity of the bounds, we can compare the performance of different $k$-batch greedy strategies.
The remainder of the paper is organized as follows. In Section~II, we review the harmonic and exponential bounds in terms of the total curvature $\alpha$ from \cite{conforti1984submodular} for a general matroid and a uniform matroid, respectively. In Section~III, we introduce the total curvature $\alpha_k$, and prove that when $f$ is a nondecreasing submodular set function, the $k$-batch greedy strategy achieves a $1/(1+\alpha_k)$-approximation for a general matroid constraint and a $\left(1-(1-{\alpha}_k/{t})^t\right)/{\alpha}_k$-approximation for a uniform matroid constraint, where $k$ divides the cardinality of the maximal set in the general matroid, $t=K/k$ is an integer, and $K$ is the rank of the uniform matroid. We also prove that $\alpha_{k}\leq \alpha_{k_1}$ when $f$ is a nondecreasing submodular set function and $k_1$ divides $k$, which implies that the $k$-batch greedy strategy provides tighter harmonic and exponential bounds compared to the $k_1$-batch greedy strategy. In Section~IV, we present an application to demonstrate our conclusions. In Section~V, we provide a summary of our work and main contribution.
\section{Preliminaries}\label{sc:II}
In this section, we first introduce some definitions related to sets and curvature. We then review the harmonic and exponential bounds in terms of the total curvature $\alpha$ from \cite{conforti1984submodular}. \subsection{Sets and Curvature}
Let $X$ be a finite set, and $\mathcal{I}$ be a non-empty collection of subsets of $X$. The pair $(X,\mathcal{I})$ is called a \emph{matroid} if \begin{itemize} \item [i.] For all $B\in\mathcal{I}$, any set $A\subseteq B$ is also in $\mathcal{I}$. \item [ii.] For any $A,B\in \mathcal{I}$, if the cardinality of $B$ is greater than that of $A$, then there exists $j\in B\setminus A$ such that $A\cup\{j\}\in\mathcal{I}$. \end{itemize}
The collection $\mathcal{I}$ is said to be \emph{hereditary} if it satisfies property~i, and to have the \emph{augmentation} property if it satisfies property~ii. The pair $(X,\mathcal{I})$ is called a \emph{uniform matroid} when $\mathcal{I}=\{S\subseteq X: |S|\leq K\}$ for a given $K$, called the \emph{rank}.
Let $2^X$ denote the power set of $X$, and define the set function $f$: $2^X\rightarrow \mathbb{R}^+$. The set function $f$ is said to be \emph{nondecreasing} and \emph{submodular} if it satisfies properties~1 and 2 below, respectively: \begin{itemize} \item [1.] For any $A\subseteq B\subseteq X$, $f(A)\leq f(B)$. \item [2.] For any $A\subseteq B\subseteq X$ and $j\in X\setminus B$, $f(A\cup\{j\})-f(A)\geq f(B\cup\{j\})-f(B)$. \end{itemize}
Property~2 means that the additional value accruing from an extra action decreases as the size of the input set increases, and is also called the \emph{diminishing-return} property in economics. Property~2 implies that for any $A\subseteq B\subseteq X$ and $T\subseteq X\setminus B$, \begin{equation} \label{eqn:submodularimplies} f(A\cup T)-f(A)\geq f(B\cup T)-f(B). \end{equation} For convenience, we denote the incremental value of adding set $T$ to the set $A\subseteq X$ as $\varrho_T(A)=f(A\cup T)-f(A)$ (following the notation of \cite{conforti1984submodular}).
The \emph{total curvature} of a set function $f$ is defined as \cite{conforti1984submodular} $$\alpha=\max_{j\in X^*}\left\{1-\frac{\varrho_j({X\setminus\{j\}})}{\varrho_j(\emptyset)}\right\}$$ where $X^*=\{j\in X: \varrho_j(\emptyset)>0\}$. Note that $0\leq \alpha\leq 1$ when $f$ is nondecreasing and submodular, and $\alpha=0$ if and only if $f$ is additive, i.e., $f(X)=f(X\setminus\{j\})+f(\{j\})$ for all $j\in X^*$.
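For concreteness, the total curvature can be computed directly from its definition on a small example. The following Python sketch assumes a toy weighted-coverage objective (a standard nondecreasing submodular function); all names are illustrative, not from the paper.

```python
def make_coverage(cover, weights):
    """f(A) = total weight of ground elements covered by the sets
    indexed by A; this is nondecreasing and submodular."""
    def f(A):
        covered = set()
        for j in A:
            covered |= cover[j]
        return sum(weights[e] for e in covered)
    return f

def total_curvature(f, X):
    """alpha = max over j with rho_j(empty) > 0 of
    1 - rho_j(X \\ {j}) / rho_j(empty)."""
    vals = []
    for j in X:
        gain_empty = f({j}) - f(set())
        if gain_empty > 0:
            gain_full = f(set(X)) - f(set(X) - {j})
            vals.append(1 - gain_full / gain_empty)
    return max(vals)

# Two sets overlapping in element "b": each singleton gains 2 from the
# empty set but only 1 on top of the other, so alpha = 1 - 1/2 = 0.5.
cover = {0: {"a", "b"}, 1: {"b", "c"}}
weights = {"a": 1.0, "b": 1.0, "c": 1.0}
f = make_coverage(cover, weights)
alpha = total_curvature(f, [0, 1])
```

For an additive $f$ (disjoint covering sets) the same computation returns $\alpha=0$, matching the remark above.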
\subsection{Harmonic and Exponential Bounds in Terms of the Total Curvature} In this section, we review the theorems from \cite{conforti1984submodular} bounding the performance of the $1$-batch greedy strategy using the total curvature $\alpha$ for general matroid constraints and uniform matroid constraints. \begin{Theorem} \label{Theorem2.1} Assume that $(X,\mathcal{I})$ is a matroid and $f$ is a nondecreasing submodular set function with $f(\emptyset)=0$ and total curvature $\alpha$. Then the $1$-batch greedy solution $G$ satisfies $$f(G)\geq \frac{1}{1+\alpha}f(O),$$ where $O$ is the optimal solution of problem (\ref{eqn:1}). \end{Theorem}
When $f$ is a nondecreasing submodular set function, we have $\alpha\in[0,1]$, so $1/(1+\alpha)\in[1/2,1]$. Theorem \ref{Theorem2.1} applies to any matroid, which means the bound ${1}/(1+\alpha)$ holds for a uniform matroid too. Theorem \ref{Theorem2.2} will present a tighter bound when $(X,\mathcal{I})$ is a uniform matroid.
\begin{Theorem} \label{Theorem2.2}
Assume that $(X,\mathcal{I})$ is a uniform matroid and $f$ is a nondecreasing submodular set function with $f(\emptyset)=0$ and total curvature $\alpha$. Then the $1$-batch greedy solution $G_K$ satisfies \begin{align*} f(G_K)&\geq\frac{1}{\alpha}\left(1-(1-{\alpha}/{K})^K\right)f(O_K)\\ &\geq \frac{1}{\alpha}(1-e^{-\alpha})f(O_K). \end{align*} \end{Theorem}
The function $(1-e^{-\alpha})/\alpha$ is a nonincreasing function of $\alpha$, so $(1-e^{-\alpha})/\alpha\in[1-e^{-1},1]$ when $f$ is a nondecreasing submodular set function. Also it is easy to check $(1-e^{-\alpha})/{\alpha}\geq 1/(1+\alpha)$ for $\alpha\in[0,1]$, which implies that the bound $(1-e^{-\alpha})/{\alpha}$ is stronger than the bound $1/(1+\alpha)$ in Theorem \ref{Theorem2.1}.
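The last claim can be checked numerically. The following sketch verifies $(1-e^{-\alpha})/\alpha\geq 1/(1+\alpha)$ on a grid over $(0,1]$; analytically it follows from $e^{\alpha}\geq 1+\alpha$.

```python
import math

# Verify (1 - e^{-a})/a >= 1/(1 + a) on a grid of a in (0, 1].
# Equivalent to 1 - (1 + a) e^{-a} >= 0, i.e., e^{a} >= 1 + a.
grid = [i / 1000 for i in range(1, 1001)]
dominates = all((1 - math.exp(-a)) / a >= 1 / (1 + a) for a in grid)
```

Both bounds tend to $1$ as $\alpha\to 0$ and the exponential bound equals $1-e^{-1}$ at $\alpha=1$, where the harmonic bound is $1/2$.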
\section{Main Results}\label{sc:III}
In this section, first we define the $k$-batch greedy strategy and the corresponding curvatures that will be used for deriving the harmonic and exponential bounds. Then we derive the performance bounds of the $k$-batch greedy strategy in terms of $\alpha_k$ for general matroid constraints and uniform matroid constraints, respectively. Moreover, we compare the performance bounds for different $k$-batch greedy strategies. \subsection{Strategy Formulation and Curvatures} When $(X,\mathcal{I})$ is a general matroid, assume that the cardinality $K$ of the maximal set in $\mathcal{I}$ is such that $k$ divides $K$. The $k$-batch greedy strategy is as follows:
Step 1: Let $S^0=\emptyset$ and $t=0$.
Step 2: Select $J_{t+1}\subseteq X\setminus S^t$ for which $|J_{t+1}|=k$, $S^t\cup J_{t+1}\in\mathcal{I}$, and \begin{align*}
f(S^t\cup J_{t+1})=\max\limits_{J\subseteq X\setminus S^t\ \text{and}\ |J|=k }f(S^t\cup J), \end{align*}
then set $S^{t+1}=S^t\cup J_{t+1}$.
Step 3: If $f(S^{t+1})-f(S^t)>0$, set $t=t+1$, repeat step~2; otherwise, stop.
When $(X,\mathcal{I})$ is a uniform matroid with rank $K$, without loss of generality, assume that $k$ divides $K$. Then the $k$-batch greedy strategy is as follows:
Step 1: Let $S^0=\emptyset$ and $t=0$.
Step 2: Select $J_{t+1}\subseteq X\setminus S^t$ for which $|J_{t+1}|=k$, and \begin{align*}
f(S^t\cup J_{t+1})=\max\limits_{J\subseteq X\setminus S^t\ \text{and}\ |J|=k }f(S^t\cup J), \end{align*}
then set $S^{t+1}=S^t\cup J_{t+1}$.
Step 3: If $t+1<K/k$, set $t=t+1$ and repeat step~2; otherwise, stop.
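The uniform-matroid version of the strategy above can be implemented directly. The following Python sketch uses a toy coverage objective as the set function and breaks ties lexicographically (an implementation choice not specified in the strategy); it is a brute-force illustration for tiny ground sets, not an optimized implementation.

```python
from itertools import combinations

def k_batch_greedy(f, X, K, k):
    """k-batch greedy for a uniform matroid of rank K (k divides K):
    at each of the K/k stages, add the k-element batch with the
    largest gain in f.  Ties broken by lexicographic order."""
    S = set()
    for _ in range(K // k):
        candidates = combinations(sorted(set(X) - S), k)
        best = max(candidates, key=lambda J: f(S | set(J)))
        S |= set(best)
    return S

# Toy coverage objective: f(A) = number of ground elements covered.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
def f(A):
    covered = set()
    for j in A:
        covered |= cover[j]
    return len(covered)

S1 = k_batch_greedy(f, [0, 1, 2], K=2, k=1)  # two 1-batches
S2 = k_batch_greedy(f, [0, 1, 2], K=2, k=2)  # one 2-batch
```

On this example both batch sizes reach the optimal value $f(\{0,2\})=4$; in general the two strategies can return different sets and values.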
Similar to the definition of the total curvature $\alpha$ in \cite{conforti1984submodular}, we define the total curvature $\alpha_k$ for a given $k$ as
$$\alpha_k=\max\limits_{J\in \hat{X}}\left\{1-\frac{\varrho_J(X\setminus J)}{\varrho_J(\emptyset)}\right\}$$ where $\hat{X}=\{J\subseteq X: f(J)>0 \ \text{and}\ |J|=k\}$.
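The total curvature $\alpha_k$ can be computed by enumerating all $k$-element batches. The following Python sketch uses a toy coverage objective (illustrative only; brute-force enumeration, so suitable only for tiny ground sets), and also shows numerically that $\alpha_2\leq\alpha_1$ on this example, consistent with Theorem \ref{Theorem3.5} below.

```python
from itertools import combinations

def batch_curvature(f, X, k):
    """alpha_k = max over |J| = k with rho_J(empty) > 0 of
    1 - rho_J(X \\ J) / rho_J(empty)."""
    vals = []
    for J in combinations(X, k):
        J = set(J)
        gain_empty = f(J) - f(set())
        if gain_empty > 0:
            gain_full = f(set(X)) - f(set(X) - J)
            vals.append(1 - gain_full / gain_empty)
    return max(vals)

# Toy coverage objective: f(A) = number of ground elements covered.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
def f(A):
    covered = set()
    for j in A:
        covered |= cover[j]
    return len(covered)

alpha_1 = batch_curvature(f, [0, 1, 2], 1)  # the usual alpha
alpha_2 = batch_curvature(f, [0, 1, 2], 2)
```

Here set $1$ is fully redundant given the other two, so $\alpha_1=1$, while the best-case pair gives $\alpha_2=0.5$.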
Consider a set $T\subseteq X$ and an ordered set $S=\bigcup_{i=1}^tJ_i\subseteq X$, where $J_i\subseteq X$ and $|J_i|=k$. We define $S^0=\emptyset$, $S^i=\bigcup_{l=1}^iJ_l$ for $1\leq i\leq t$, and the curvature
\[\bar{\alpha}_k=\max\limits_{i:J_i\subseteq S^*}\left\{\frac{\varrho_{J_i}(S^{i-1})-\varrho_{J_i}(S^{i-1}\cup T)}{\varrho_{J_i}(S^{i-1})}\right\},\]
where $S^*=\{J_i\subseteq S-T: |J_i|=k \ \text{and} \ \varrho_{J_i}(S^{i-1})>0\}.$ It is easy to check that $f(S)=\sum_{i=1}^t\varrho_{J_i}(S^{i-1})$ and $\bar{\alpha}_k\leq \alpha_k$.
For a uniform matroid with rank $K$, we use $S_K=\bigcup_{i=1}^tJ_i$ to denote the $k$-batch greedy solution, where $J_i$ is the set selected by the $k$-batch greedy strategy at stage $i$. Assume that $O_K$ is the optimal solution to problem (\ref{eqn:1}). We define the curvature $\hat{\alpha}_k$ with respect to the optimal solution as \[\hat{\alpha}_k=\max\limits_{1\leq j \leq t}\left\{1-\frac{\varrho_{S^j}(O_K)}{\varrho_{S^j}(\emptyset)}\right\}.\] It is easy to prove that $\hat{\alpha}_k\leq \alpha_k$ when $f$ is a nondecreasing submodular set function.
\subsection{Harmonic Bound and Exponential Bound in Terms of the Total Curvature}
The following proposition will be applied to derive the performance bounds for both general matroid constraints and uniform matroid constraints. \begin{Proposition} \label{Pro1} If $f$ is a nondecreasing submodular set function on $X$, $S$ and $T$ are subsets of $X$, and $\{T_1,\ldots, T_r\}$ is a partition of $T\setminus S$, then \begin{equation} \label{eqn:Prop1} f(T\cup S)\leq f(S)+\sum\limits_{i:T_i\subseteq T\setminus S}\varrho_{T_i}(S). \end{equation} \end{Proposition}
\begin{proof} By the assumption that $\{T_1,\ldots, T_r\}$ is a partition of $T\setminus S$ and inequality (\ref{eqn:submodularimplies}), we have
\begin{align*}
f(T\cup S)-f(S)&=f(S\cup \bigcup_{l=1}^r T_l)-f(S)\\
&=\sum\limits_{j=1}^r \varrho_{T_j}(S\cup\bigcup_{l=1}^{j-1}T_l)\\
&\leq \sum\limits_{j:T_j\subseteq T\setminus S}\varrho_{T_j}(S).
\end{align*} \end{proof}
The following proposition will be applied to derive the performance bound for general matroid constraints.
\begin{Proposition}
\label{Pro2}
Assume that $f$ is a nondecreasing submodular set function on $X$ with $f(\emptyset)=0$. Given a set $T\subseteq X$, a partition $\{T_1,\ldots, T_r\}$ of $T\setminus S$, and an ordered set $S=\bigcup_{i=1}^tJ_i\subseteq X$ with $|J_i|=k$, we have
\begin{align}
\label{ineq:prop2}
f(T)\leq \bar{\alpha}_k\sum\limits_{i:J_i\subseteq S\setminus T}&\varrho_{J_i}(S^{i-1})+\sum\limits_{i:J_i\subseteq T\cap S}\varrho_{J_i}(S^{i-1})\nonumber\\
&+\sum\limits_{i:T_i\subseteq T\setminus S}\varrho_{T_i}(S).
\end{align}
\end{Proposition}
\begin{proof} By the definition of the curvature $\bar{\alpha}_k$, we have
\begin{align*}
f(T\cup S)-f(T)&=\sum\limits_{i=1}^t\varrho_{J_i}(T\cup S^{i-1})\\
&=\sum\limits_{i:J_i\subseteq S\setminus T}\varrho_{J_i}(T\cup S^{i-1})\\
&\geq (1-\bar{\alpha}_k)\sum\limits_{i:J_i\subseteq S\setminus T}\varrho_{J_i}( S^{i-1}).
\end{align*}
By Proposition \ref{Pro1}, we have
\[f(T\cup S)\leq f(S)+\sum\limits_{i:T_i\subseteq T\setminus S}\varrho_{T_i}(S).\]
Combining the inequalities above and using the identity \[f(S)=\sum\limits_{i:J_i\subseteq S\setminus T}\varrho_{J_i}( S^{i-1})+\sum\limits_{i:J_i\subseteq T\cap S}\varrho_{J_i}(S^{i-1}),\] we get the inequality (\ref{ineq:prop2}).
\end{proof}
Recall that when $(X,\mathcal{I})$ is a general matroid, we assume that $k$ divides the cardinality $K$ of the maximal set in $\mathcal{I}$. By the augmentation property of a general matroid, both the greedy solution and the optimal solution can be augmented to sets of cardinality $K$. Let $S=\bigcup_{i=1}^tJ_i$ be the $k$-batch greedy solution, where $J_i$ is the set selected by the $k$-batch greedy strategy at the $i$th step for $1\leq i\leq t$. Let $O=\{o_1,\ldots, o_K\}$ be the optimal solution. We prove that the following lemma holds.
\begin{Lemma} \label{lemma1}
The optimal solution $O=\{o_1,\ldots, o_K\}$ can be ordered as $O=\bigcup_{i=1}^tJ_i'$ such that $\varrho_{J_i'}(S^{i-1})\leq \varrho_{J_i}(S^{i-1})$, where ${J_1',\ldots,J_t'}$ is a partition of $O$ and $|J_i'|=k$ for $1\leq i\leq t$. Furthermore, if $J_i'\subseteq O\cap S$, then $J_i'=J_i$. \end{Lemma} \begin{proof}
Similar to the proof in \cite{nemhauser19781}, we will prove this lemma by backward induction on $i$ for $i=t, t-1,\ldots, 1$. Assume that $J_l'$ satisfies the inequality $\varrho_{J_l'}(S^{l-1})\leq \varrho_{J_l}(S^{l-1})$ for $l>i$, and let $O^i=O\setminus \bigcup_{l>i} J_l'$. Consider the sets $S^{i-1}$ and $O^i$. By definition, $|S^{i-1}|=(i-1) k$ and $|O^i|=i k$. Using the augmentation property of a general matroid, we have that there exists one element $o_{i_1}\in O^i\setminus S^{i-1}$ such that $S^{i-1}\cup\{o_{i_1}\}\in\mathcal{I}$. Next consider $S^{i-1}\cup\{o_{i_1}\}$ and $O^i$. Using the augmentation property again, there exists one element $o_{i_2}\in O^i\setminus S^{i-1}\setminus\{o_{i_1}\}$ such that $S^{i-1}\cup\{o_{i_1}, o_{i_2}\}\in\mathcal{I}$. Similar to the process above, using the augmentation property $(k-2)$ more times, finally we have that there exists $J_i'=\{o_{i_1},\ldots,o_{i_k}\}\subseteq O^i\setminus S^{i-1}$ such that $S^{i-1}\cup J_i'\in \mathcal{I}$. By the $k$-batch greedy strategy, we have that $\varrho_{J_i'}(S^{i-1})\leq \varrho_{J_i}(S^{i-1})$. Furthermore, if $J_i\subseteq O^i$, we can set $J_i'=J_i$. \end{proof}
The following two theorems present our performance bounds in terms of the total curvature $\alpha_k$ for the $k$-batch greedy strategy under a general matroid constraint and a uniform matroid, respectively. \begin{Theorem} \label{Theorem3.3} Assume that $f$ is a nondecreasing submodular set function with $f(\emptyset)=0$, the pair $(X,\mathcal{I})$ is a general matroid, and $k$ divides the cardinality $K$ of the maximal set in $\mathcal{I}$. Then the $k$-batch greedy strategy $S=\bigcup_{i=1}^tJ_i$ satisfies \begin{equation} \label{ineq:generalbound} f(S)\geq \frac{1}{1+\alpha_k}f(O). \end{equation} \end{Theorem}
\begin{proof}
By Lemma~\ref{lemma1}, we have that the optimal solution $O$ can be ordered as $O=\bigcup_{i=1}^tJ_i'$ such that $\varrho_{J_i'}(S^{i-1})\leq \varrho_{J_i}(S^{i-1})$, where $\{J_l'\}_{l=1}^t$ is a partition of $O$ and $|J_l'|=k$ for $1\leq l\leq t$.
By Proposition \ref{Pro2}, we have \begin{align*} f(O)\leq \bar{\alpha}_k\sum\limits_{i:J_i\subseteq S\setminus O}&\varrho_{J_i}(S^{i-1})+\sum\limits_{i:J_i\subseteq O\cap S}\varrho_{J_i}(S^{i-1})\\
&+\sum\limits_{i:J_i'\subseteq O\setminus S}\varrho_{J_i'}(S). \end{align*}
By inequality (\ref{eqn:submodularimplies}), we have \[\varrho_{J_i'}(S)\leq \varrho_{J_i'}(S^{i-1})\leq \varrho_{J_i}(S^{i-1}).\] Then \begin{align*} f(O)&\leq \bar{\alpha}_k\sum\limits_{i:J_i\subseteq S\setminus O}\varrho_{J_i}(S^{i-1})+\sum\limits_{i:J_i\subseteq O\cap S}\varrho_{J_i}(S^{i-1})\\ &\quad\quad\quad+\sum\limits_{i:J_i'\subseteq O\setminus S}\varrho_{J_i}(S^{i-1})\\ &\leq {\alpha}_kf(S)+f(S), \end{align*} which implies that $f(S)\geq \frac{1}{1+\alpha_k}f(O)$. \end{proof}
\emph{Remarks} \begin{itemize} \item The harmonic bound $1/(1+\alpha_k)$ for the $k$-batch greedy strategy holds for \emph{any} matroid. However, for uniform matroids, a better bound is given in Theorem \ref{Theorem3.4}. \item The function $g(x)=1/(1+x)$ is nonincreasing in $x$ on the interval $[0,1]$. \end{itemize}
\begin{Theorem} \label{Theorem3.4} Assume that $f$ is a nondecreasing submodular set function with $f(\emptyset)=0$, the pair $(X,\mathcal{I})$ is a uniform matroid with rank $K$, and $k$ divides $K$. Then the $k$-batch greedy solution $S_K=\bigcup_{i=1}^tJ_i$ satisfies \begin{align} \label{k-batchuniformbound} f(S_K)&\geq \frac{1}{{\alpha}_k}\left(1-(1-\frac{{\alpha}_k}{t})^t\right)f(O_K)\nonumber\\ &\geq\frac{1}{{\alpha}_k}(1-e^{-\alpha_k})f(O_K). \end{align}
\end{Theorem}
\begin{proof} Taking $T$ to be the optimal solution $O_K$ and $S$ to be the set $S^j$ generated by the $k$-batch greedy strategy over the first $j$ stages in Proposition~\ref{Pro1} results in \[f(O_K\cup S^j)\leq f(S^j)+\sum\limits_{i:T_i\subseteq O_K\setminus S^j}\varrho_{T_i}(S^j),\]
where $|T_i|=k$.
By the $k$-batch greedy strategy, we have that for $T_i\subseteq O_K\setminus S^j$, $$\varrho_{T_i}(S^j)\leq \varrho_{J_{j+1}}(S^j),$$ which implies that \begin{equation} \label{ineq:relation} f(O_K\cup S^j)\leq f(S^j)+t\varrho_{J_{j+1}}(S^j). \end{equation} By the definition of $\hat{\alpha}_k$, we have
\[f(O_K)+(1-\hat{\alpha}_k)f(S^j)\leq f(O_K\cup S^j).\] Combining the inequality above and (\ref{ineq:relation}), we have \begin{equation} \label{ineq:iteration} f(S^{j+1})\geq \frac{1}{t}f(O_K)+(1-\frac{\hat{\alpha}_k}{t})f(S^j). \end{equation} Taking $j=0,1,\ldots, t-1$ in (\ref{ineq:iteration}), we have \begin{align*} f(S_K)=f(S^t)&\geq \frac{1}{t}f(O_K)+(1-\frac{\hat{\alpha}_k}{t})f(S^{t-1})\\ &\geq \frac{1}{t}f(O_K)\sum\limits_{l=0}^{t-1}(1-\frac{\hat{\alpha}_k}{t})^l\\ &=\frac{1}{\hat{\alpha}_k}\left(1-(1-\frac{\hat{\alpha}_k}{t})^t\right)f(O_K), \end{align*} which, since $\hat{\alpha}_k\leq {\alpha}_k$ and the function $x\mapsto\left(1-(1-x/t)^t\right)/x$ is nonincreasing in $x$, implies \begin{align*} f(S_K)&\geq \frac{1}{{\alpha}_k}\left(1-(1-\frac{{\alpha}_k}{t})^t\right)f(O_K)\\ &\geq\frac{1}{{\alpha}_k}(1-e^{-\alpha_k})f(O_K). \end{align*} \end{proof}
\textit{Remarks}
\begin{itemize} \item When $\alpha_k=1$, the bound $(1-(1-\alpha_k/t)^t)/\alpha_k$ becomes $1-(1-1/t)^t$, which is the bound in \cite{nemhauser19781} when $p=0$. \item Let $h(x,y)=\left(1-(1-{x}/{y})^y\right)/{x}$. The function $h(x,y)$ is nonincreasing in $x$ on the interval $[0,1]$ for any positive integer $y$. Also, $h(x,y)$ is nonincreasing in $y$ when $x$ is held constant on the interval $[0,1]$. \item The function $l(x)=(1-e^{-x})/{x}$ is nonincreasing in $x$, so $(1-e^{-\alpha_k})/{\alpha_k}\in[1-e^{-1}, 1]$. \item The monotonicity of $g(x)$ and $h(x,y)$ implies that the $k$-batch greedy strategy has better harmonic and exponential bounds than the $1$-batch greedy strategy if $\alpha_k\leq \alpha$. \end{itemize}
The following theorem establishes that indeed $\alpha_k\leq \alpha$.
\begin{Theorem} \label{Theorem3.5} Assume that $f$ is a nondecreasing submodular set function satisfying $f(\emptyset)=0$. Then $\alpha_k\leq \alpha.$ \end{Theorem} \begin{proof} By the definition of $\alpha_k$, we have \begin{align*} \alpha_k&=\max_{J_k\in \hat{X}}\left\{1-\frac{\varrho_{J_k}({X\setminus J_k})}{\varrho_{J_k}(\emptyset)}\right\}\\ &=1-\min_{J_k\in \hat{X}}\left\{\frac{\sum\limits_{l=1}^k\varrho_{j_l}(X\setminus J_l)}{\sum\limits_{l=1}^k\varrho_{j_l}(J_{l-1})}\right\}, \end{align*} where $J_l=\{j_1,\ldots, j_l\}$ for $1\leq l\leq k$.
By the assumption that $f$ is a submodular set function, we have, for $1\leq l\leq k$, $$\varrho_{j_l}(X\setminus J_l)\geq \varrho_{j_l}(X\setminus\{j_l\}) \ \text{and}\ \varrho_{j_l}(J_{l-1})\leq \varrho_{j_l}(\emptyset),$$ which imply that \[\frac{\sum\limits_{l=1}^k\varrho_{j_l}(X\setminus J_l)}{\sum\limits_{l=1}^k\varrho_{j_l}(J_{l-1})}\geq\frac{\sum\limits_{l=1}^k\varrho_{j_l}(X\setminus\{j_l\})}{\sum\limits_{l=1}^k\varrho_{j_l}(\emptyset)}. \] Then, we have \begin{equation} \label{Inequality2} \alpha_k\leq 1-\min_{{j_1,\ldots,j_k}\in \hat{X}}\left\{\frac{\sum\limits_{l=1}^k\varrho_{j_l}(X\setminus\{j_l\})}{\sum\limits_{l=1}^k\varrho_{j_l}(\emptyset)}\right\}. \end{equation}
By the definition of $\alpha$, we have for $1\leq l\leq k$, \[\varrho_{j_l}(X\setminus\{j_l\})\geq (1-\alpha)\varrho_{j_l}(\emptyset).\]
Combining the inequality above and (\ref{Inequality2}), we have \[\alpha_k\leq 1-(1-\alpha)=\alpha.\]
\end{proof}
The following theorem states that if $k_1$ divides $k$, then the total curvature $\alpha_{k}$ for the $k$-batch greedy strategy is no larger than the total curvature $\alpha_{k_1}$ for the $k_1$-batch greedy strategy.
\begin{Theorem} \label{Theorem3.6} Assume that $f$ is a submodular set function satisfying $f(\emptyset)=0$. Then $\alpha_{k}\leq \alpha_{k_1}$ when $k_1$ divides $k$. \end{Theorem} \begin{proof} Suppose that $k=k_1k_2$ ($k_1$ and $k_2$ are integers). Write \begin{align*} \varrho_{J_k}&(X\setminus J_k)=\sum\limits_{l=1}^{k_2}\varrho_{J_{lk_1}\setminus J_{(l-1)k_1}}(X\setminus J_{l {k_1}}) \end{align*} and $$\varrho_{J_k}(\emptyset)= \sum\limits_{l=1}^{k_2}\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(J_{(l-1) k_1}).$$
By inequality (\ref{eqn:submodularimplies}), we have for $1\leq l\leq k_2$, \begin{align*} &\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(X\setminus J_{l k_1})\geq\\ &\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(X\setminus (J_{l k_1}\setminus J_{(l-1) k_1})) \end{align*} and\[\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(J_{(l-1) k_1})
\leq \varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(\emptyset).\]
From the inequalities above and by the definition of $\alpha_k$, we have
\begin{align*} \alpha_k&=\max_{J_k\subseteq \hat{X}}\left\{1-\frac{\varrho_{J_k}({X\setminus J_k})}{\varrho_{J_k}(\emptyset)}\right\}\\ &=1-\min_{J_k\subseteq \hat{X}}\left\{\frac{\sum\limits_{l=1}^{k_2}\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(X\setminus J_{l k_1})}{\sum\limits_{l=1}^{k_2}\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(J_{(l-1) k_1})}\right\}\\ &\leq 1-\min_{J_k\subseteq\hat{X}}\\ &\tiny{\left\{\frac{\sum\limits_{l=1}^{k_2}\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(X\setminus (J_{l k_1}\setminus J_{(l-1) k_1}))}{\sum\limits_{l=1}^{k_2}\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(\emptyset)} \right\}}. \end{align*}
By the definition of $\alpha_{k_1}$, we have for $1\leq l\leq k_2$, \begin{align*} \varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(X\setminus (J_{l k_1}\setminus J_{(l-1) k_1}))\\ \geq (1-\alpha_{k_1})\varrho_{J_{l k_1}\setminus J_{(l-1) k_1}}(\emptyset). \end{align*}
Using the inequalities above, we have $$\alpha_k\leq 1-(1-\alpha_{k_1})=\alpha_{k_1}.$$ \end{proof}
One would also expect the following generalization of Theorem~\ref{Theorem3.6} to hold: if $k_1\leq k$, then $\alpha_k\leq\alpha_{k_1}$, leading to better bounds for the $k$-batch greedy strategy than for the $k_1$-batch greedy strategy. We have a proof for this claim using Lemmas~1.1 and 1.2 in \cite{vondrak2010}, but the proof is more involved and is omitted for the sake of brevity. We will illustrate the validity of this claim in Section~IV.
\section{Application: Task Assignment}
In this section, we consider a task assignment problem to demonstrate that the $k$-batch greedy strategy has better performance than the $k_1$-batch greedy strategy when $f$ is a nondecreasing submodular set function.
As a canonical example for problem (\ref{eqn:1}), we consider the task assignment problem posed in \cite{streeter2008online}, which was also analyzed in \cite{ZhC13J} and \cite{YJ2015}. In this problem, there are $n$ subtasks and a set $X$ of $N$ agents $a_j$ $(j=1,\ldots, N).$ At each stage, a subtask $i$ is assigned to an agent $a_j$, who accomplishes the task with probability $p_i(a_j)$. Let $X_i({a_1,a_2,\ldots, a_k})$ denote the random variable that describes whether or not subtask $i$ has been accomplished after performing the sequence of actions ${a_1,a_2,\ldots, a_k}$ over $k$ stages. Then $\frac{1}{n}\sum_{i=1}^n X_i(a_1,a_2,\ldots,a_k)$ is the fraction of subtasks accomplished after $k$ stages by employing agents $a_1,a_2,\ldots, a_k$. The objective function $f$ for this problem is the expected value of this fraction, which can be written as
$$f(\{a_1,\ldots,a_k\})=\frac{1}{n}\sum_{i=1}^n\left(1-\prod_{j=1}^k(1-p_i(a_j))\right).$$
Assume that $p_i(a)>0$ for any $a\in X$. Then it is easy to check that $f$ is nondecreasing. Therefore, when $\mathcal{I}=\{S\subseteq X: |S|\leq K\}$, an optimal solution has cardinality $K$. It is also easy to check that $f$ has the diminishing-return property.
For convenience, we only consider the special case $n=1$; our analysis can be generalized to any $n\geq 2$. For $n=1$, we have $$f(\{a_1,\ldots,a_k\})=1-\prod_{j=1}^k(1-p(a_j))$$ where $p(\cdot)=p_1(\cdot)$.
Assume that $0<p(a_1)\leq p(a_2)\leq \cdots\leq p(a_N)\leq 1$. Then by the definition of the total curvature $\alpha_k$, we have \begin{align*} \alpha_k&=\max\limits_{j_1,\ldots,j_k\in {X}}\left\{1-\frac{f(X)-f(X\setminus\{j_1,\ldots,j_k\})}{f(\{j_1,\ldots, j_k\})-f(\emptyset)}\right\}\\ &=1-\prod_{l=k+1}^N(1-p(a_l)). \end{align*}
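For this example the total curvature can also be computed by brute force directly from its definition. The sketch below (with a hypothetical probability vector, and the closed-form product taken up to $N$, the number of agents) compares the two and checks that $\alpha_k$ is nonincreasing in $k$:

```python
import math
from itertools import combinations

p = [0.1, 0.3, 0.4, 0.6, 0.8]   # hypothetical success probabilities, sorted increasingly
N = len(p)
X = set(range(N))

def f(S):
    # expected fraction of the (single) subtask accomplished by the agents in S
    return 1.0 - math.prod(1.0 - p[j] for j in S)

def alpha(k):
    # total curvature of the k-batch greedy strategy, straight from the definition:
    # max over k-subsets J of 1 - (f(X) - f(X \ J)) / (f(J) - f(empty))
    return max(1.0 - (f(X) - f(X - set(J))) / f(set(J))
               for J in combinations(range(N), k))

for k in range(1, N):
    # closed form: alpha_k = 1 - prod_{l=k+1}^{N} (1 - p(a_l))
    closed = 1.0 - math.prod(1.0 - p[l] for l in range(k, N))
    assert abs(alpha(k) - closed) < 1e-12
alphas = [alpha(k) for k in range(1, N)]
# alpha_k is nonincreasing in k, even when k_1 does not divide k
assert all(a >= b - 1e-12 for a, b in zip(alphas, alphas[1:]))
print([round(a, 6) for a in alphas])
```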
From the form of $\alpha_k$, we have $\alpha_k\in[0,1]$, which is consistent with our conclusion that when $f$ is a nondecreasing submodular set function, then $\alpha_k \in[0,1]$. Also we have $\alpha_k\leq \alpha_{k_1}$ when $k_1$ divides $k$. Even if $k_1$ does not divide $k$, we still have $\alpha_k\leq \alpha_{k_1}$ in this example, which is consistent with our claim. \section{Conclusion} In this paper, we derived performance bounds for the $k$-batch greedy strategy, $k\geq 1$, in terms of a total curvature $\alpha_k$. We showed that when the objective function is nondecreasing and submodular, the $k$-batch greedy strategy satisfies a harmonic bound $1/(1+\alpha_k)$ for a general matroid and an exponential bound $(1-e^{-\alpha_k})/\alpha_k$ for a uniform matroid, where $k$ divides the cardinality of the maximal set in the general matroid and the rank of the uniform matroid, respectively. We proved that, for a submodular objective function, $\alpha_k\leq \alpha_{k_1}$ when $k_1$ divides $k$. Consequently, for a nondecreasing submodular objective function, the $k$-batch greedy strategy has better performance bounds than the $k_1$-batch greedy strategy in such a case. This remains true even when $k_1\leq k$ does not divide $k$, although the proof is more involved and has been omitted. We demonstrated our results by considering a task-assignment problem, which also corroborated our claim that if $k_1\leq k$, then $\alpha_{k}\leq \alpha_{k_1}$ even if $k_1$ does not divide $k$.
\end{document}
\begin{definition}[Definition:Mersenne Prime/Index]
The '''index''' of the '''Mersenne prime''' $M_p = 2^p - 1$ is the (prime) number $p$.
\end{definition}
Definition:Inverse Secant/Real
Let $x \in \R$ be a real number such that $x \le -1$ or $x \ge 1$.
The inverse secant of $x$ is the multifunction defined as:
$\sec^{-1} \left({x}\right) := \left\{{y \in \R: \sec \left({y}\right) = x}\right\}$
where $\sec \left({y}\right)$ is the secant of $y$.
Arcsecant
From Shape of Secant Function, we have that $\sec x$ is continuous and strictly increasing on the intervals $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$ and $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$.
From the same source, we also have that:
$\sec x \to + \infty$ as $x \to \dfrac \pi 2^-$
$\sec x \to - \infty$ as $x \to \dfrac \pi 2^+$
Let $g: \left[{0 \,.\,.\, \dfrac \pi 2}\right) \to \left[{1 \,.\,.\, \infty}\right)$ be the restriction of $\sec x$ to $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$.
Let $h: \left({\dfrac \pi 2 \,.\,.\, \pi}\right] \to \left({-\infty \,.\,.\, -1}\right]$ be the restriction of $\sec x$ to $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$.
Let $f: \left[{0 \,.\,.\, \pi}\right] \setminus \dfrac \pi 2 \to \R \setminus \left({-1 \,.\,.\, 1}\right)$:
$f\left({x}\right) = \begin{cases} g\left({x}\right) & : 0 \le x < \dfrac \pi 2 \\ h\left({x}\right) & : \dfrac \pi 2 < x \le \pi \end{cases}$
From Inverse of Strictly Monotone Function, $g \left({x}\right)$ admits an inverse function, which will be continuous and strictly increasing on $\left[{1 \,.\,.\, \infty}\right)$.
From Inverse of Strictly Monotone Function, $h \left({x}\right)$ admits an inverse function, which will be continuous and strictly increasing on $\left({-\infty \,.\,.\, -1}\right]$.
As the domains of $g$ and $h$ are disjoint, and likewise their ranges, it follows that:
$f^{-1}\left({x}\right) = \begin{cases} g^{-1}\left({x}\right) & : x \ge 1 \\ h^{-1}\left({x}\right) & : x \le -1 \end{cases}$
This function $f^{-1} \left({x}\right)$ is called arcsecant of $x$ and is written $\operatorname{arcsec} x$.
The domain of $\operatorname{arcsec} x$ is $\R \setminus \left({-1 \,.\,.\, 1}\right)$
The image of $\operatorname{arcsec} x$ is $\left[{0 \,.\,.\, \pi}\right] \setminus \dfrac \pi 2$.
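As a computational aside (not part of the source page), the arcsecant defined above can be evaluated as $\operatorname{arcsec} x = \arccos \left({1/x}\right)$: $\arccos$ maps $1/x$ into $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$ when $x \ge 1$ and into $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$ when $x \le -1$, exactly matching the branches $g^{-1}$ and $h^{-1}$. A short Python check:

```python
import math

def arcsec(x):
    # arcsec(x) = acos(1/x): lands in [0, pi/2) for x >= 1
    # and in (pi/2, pi] for x <= -1, matching the two branches
    if abs(x) < 1.0:
        raise ValueError("arcsec is undefined on (-1, 1)")
    return math.acos(1.0 / x)

assert math.isclose(arcsec(1.0), 0.0, abs_tol=1e-12)
assert math.isclose(arcsec(2.0), math.pi / 3)
assert math.isclose(arcsec(-1.0), math.pi)
for x in (-5.0, -1.5, 1.5, 5.0):
    y = arcsec(x)
    # round trip: sec(arcsec(x)) == x on both branches
    assert math.isclose(1.0 / math.cos(y), x)
    # branch check
    assert (0 <= y < math.pi / 2) if x >= 1 else (math.pi / 2 < y <= math.pi)
print("arcsec checks passed")
```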
Retrieved from "https://proofwiki.org/w/index.php?title=Definition:Inverse_Secant/Real&oldid=182249"
This page was last modified on 6 April 2014, at 05:23.
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{algol}{Algorithm} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition}
\newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark}
\newcommand{\comm}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule
#1\par
\hrule}}
\def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}}
\def\mathbb{C}{\mathbb{C}} \def\mathbb{F}{\mathbb{F}} \def\mathbb{K}{\mathbb{K}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{N}{\mathbb{N}} \def\textsf{M}{\textsf{M}}
\def\({\left(} \def\){\right)} \def\[{\left[} \def\right]{\right]} \def\langle{\langle} \def\rangle{\rangle}
\def\e{e}
\def\e_q{e_q} \def{\mathfrak S}{{\mathfrak S}}
\def{\mathrm{lcm}}\,{{\mathrm{lcm}}\,}
\def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil} \def\qquad\mbox{and}\qquad{\qquad\mbox{and}\qquad}
\def\tilde\jmath{\tilde\jmath} \def\ell_{\rm max}{\ell_{\rm max}} \def\log\log{\log\log}
\def\overline{\Q}{\overline{\mathbb{Q}}} \def{\rm GL}{{\rm GL}} \def{\rm Aut}{{\rm Aut}} \def{\rm End}{{\rm End}} \def{\rm Gal}{{\rm Gal}}
\title
[Congruences with intervals and subgroups] {\bf Congruences with intervals and subgroups modulo a prime} \author{Marc Munsch} \address{CRM, Universit\'e de Montr\'eal, 5357 Montr\'eal, Qu\'ebec } \email{[email protected]} \author{Igor E.~Shparlinski} \address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \email{[email protected]}
\date{\today}
\subjclass{11G07, 11L40, 11Y16} \keywords{Character sum, large sieve}
\begin{abstract} We obtain new results about the representation of almost all residues modulo a prime $p$ by a product of a small integer and also an element of small multiplicative subgroup of $(\mathbb{Z}/p\mathbb{Z})^*$. These results are based on some ideas, and their modifications, of a recent work of J.~Cilleruelo and M.~Z.~Garaev (2014). \end{abstract}
\maketitle
\section{Introduction}
It is well known that the progress on many classical and modern number theoretic questions depends on the existence of asymptotic formulas and good upper and lower bounds on the number of solutions to congruences of the form \begin{equation} \label{eq:cong m} au \equiv x \pmod m \end{equation} where $u$ runs through a multiplicative subgroup ${\mathcal G}$ of the group of units $\mathbb{Z}_m^*$ of the residue ring $\mathbb{Z}_m$ modulo an integer $m\ge 2$ and $x$ runs through a set $\{A+1, \ldots, A+H\}$ of $H$ consecutive integers, see~\cite{KoSh1} for an outline of such questions. In the special case when $m=p$ is a prime number and ${\mathcal G}$ is a group of squares, this is a celebrated question about the distribution of quadratic residues.
Recently, various modifications of the congruence~\eqref{eq:cong m} have been studied, such as congruences with elements from more general sets than subgroups on the left hand side and also with products and ratios of variables from short intervals on the right hand side, see~\cite{BGKS1,BGKS2, BKS1,BKS2,CillGar1,CillGar2,Gar1,Gar2,GarKar1,HarmShp,KoSh2} and references therein. New applications of such congruences have also been found as well and include questions about \begin{itemize} \item nonvanishing of Fermat quotients~\cite{BFKS}; \item estimating fixed points of the discrete logarithm~\cite{BGKS1,BGKS2}; \item distribution of pseudopowers~\cite{BKPS}; \item distribution of digits in reciprocals of primes~\cite{ShpSte}. \end{itemize}
Here we consider the congruence~\eqref{eq:cong m} in the special case when $m=p$ is prime. Furthermore, we are mostly interested in the solvability of~\eqref{eq:cong m} for rather small intervals and subgroups.
Since we consider congruences modulo primes, it is convenient to use the language of finite fields.
For a prime $p$ we use $\mathbb{F}_p$ to denote the finite field of $p$ elements, which we assume to be represented by the set $\{0, 1, \ldots, p-1\}$. We say that a set ${\mathcal I} \subseteq \mathbb{F}_p$ is an interval of length $H$ if it contains $H$ consecutive elements of $\mathbb{F}_p$, assuming that $p-1$ is followed by $0$. Furthermore, we say that ${\mathcal I}$ is an initial interval if ${\mathcal I} = \{1, \ldots, H\}$ (we note that it is convenient to exclude $0$ from initial intervals).
Furthermore, instead of subgroups we consider a more general class of sets, which also contains sets of $N$ consecutive powers $\{g, \ldots, g^{N}\}$ of a fixed element $g \in \mathbb{F}_p^*$.
Namely, as usual for a set ${\mathcal U} \subseteq \mathbb{F}_p$ we use ${\mathcal U}^{(m)} $ to denote its $m$-fold product set $$ {\mathcal U}^{(m)} = \{u_1\ldots u_m~:~ u_1,\ldots, u_m \in {\mathcal U}\}. $$
We say that ${\mathcal U} \subseteq \mathbb{F}_p^*$ is an {\it approximate subgroup\/} of $ \mathbb{F}_p^*$ if $$ \# {\mathcal U}^{(2)} \le (\# {\mathcal U})^{1+o(1)}, $$ as $\# {\mathcal U} \to \infty$.
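As a quick illustration of this definition (a sketch with an assumed prime $p=101$, not taken from the source), a genuine multiplicative subgroup satisfies $\# {\mathcal U}^{(2)}=\# {\mathcal U}$ exactly, while a random set of the same size typically has a much larger product set:

```python
import random

p = 101          # an assumed prime for the illustration
g = 2            # 2 is a primitive root modulo 101
h = pow(g, 5, p) # g^5 generates the subgroup of order (p-1)/5 = 20
U = {pow(h, i, p) for i in range(20)}
assert len(U) == 20

def product_set(A, B, p):
    # the product set {ab mod p : a in A, b in B}
    return {a * b % p for a in A for b in B}

# a subgroup is closed under multiplication, so U^{(2)} = U
assert product_set(U, U, p) == U

# a random 20-element subset of F_p^* typically has a much larger product set
random.seed(1)
R = set(random.sample(range(1, p), 20))
print(len(product_set(R, R, p)))
```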
Consequently, here we study the solvability of equations over $\mathbb{F}_p$ of the type \begin{equation} \label{eq:eq p} au = x, \qquad u \in {\mathcal U}, \ x \in {\mathcal I}, \end{equation} where ${\mathcal U} \subseteq \mathbb{F}_p^*$ is an approximate subgroup of $ \mathbb{F}_p$ and ${\mathcal I} \subseteq \mathbb{F}_p^*$ is an interval.
It has been shown by Cilleruelo and Garaev~\cite{CillGar2} that for any $\varepsilon > 0$ there is $\delta > 0$ such that if ${\mathcal U} = {\mathcal G}$ is a subgroup of order $\# {\mathcal U} \ge p^{3/8}$ and ${\mathcal I}$ is an initial interval of length $\# {\mathcal I} \ge p^{5/8 + \varepsilon}$, then~\eqref{eq:eq p} has a solution for all but at most $O(p^{1-\delta})$ values of $a \in \mathbb{F}_p$.
Here we show that the ideas of Cilleruelo and Garaev~\cite{CillGar2}, combined with the approach of Garaev~\cite{Gar2} to estimating character sums for almost all primes, allow us to obtain similar results for a wider range of sizes $\#{\mathcal U}$ and $\# {\mathcal I}$ (and also for approximate subgroups ${\mathcal U}$). Furthermore, we use some tools from additive combinatorics to establish a certain new result about subsets of approximate subgroups, which may be of independent interest.
Throughout the paper, the implied constants in the symbols $O$ and $\ll$ are absolute. We recall that the assertions $U=O(V)$ and $U\ll V$ are both equivalent to the
inequality $|U|\le cV$ with some constant $c$.
\section{Background on exponential and character sums}
Let ${\mathcal X}_q$ denote the set of all $\varphi(q)$ multiplicative characters modulo an integer $q\ge 2$ and let ${\mathcal X}_q^*$ be the set of primitive characters $\chi\in {\mathcal X}_q$, where $\varphi(q)$ denotes the Euler function of $q$; we refer to~\cite{IwKow} for a background on characters.
Let ${\mathcal A}=(a_n)_{n\in\mathbb{N}}$ be an arbitrary sequence of complex numbers. For an integer $h$ and a character $\chi \in {\mathcal X}_q$ we consider the weighted character sums $$ S_q(\chi;h;{\mathcal A}) = \sum_{n=1}^h a_n\chi(n) . $$ If $a_n=1$ for all $n$, we simply use the notation $$ S_q(\chi;h) = \sum_{n=1}^h \chi(n) . $$
First we recall that by the P{\'o}lya-Vinogradov (for $\nu =1$) and Burgess (for $\nu\ge2$) bounds, see~\cite[Theorems~12.5 and~12.6]{IwKow}, for arbitrary integers $q \ge h\ge 1$, the bound \begin{equation} \label{eq:PVB} \max_{\chi \in {\mathcal X}_q \backslash \{\chi_0\}}
\left|S_q(\chi;h) \right| \le h^{1 -1/\nu} q^{(\nu+1)/4\nu^2 + o(1)} \end{equation} holds with $\nu = 1,2,3$ for any $q$ and with an arbitrary positive integer $\nu$ if $q$ is cube-free.
It is well-known that assuming the Generalized Riemann Hypothesis (GRH), we derive a ``square-root cancellation'' bound \begin{equation} \label{eq:SQRC} \max_{\chi \in {\mathcal X}_q \backslash \{\chi_0\}}
\left|S_q(\chi;h) \right| \le h^{1/2} q^{o(1)}, \end{equation}
which is quoted, in particular, in~\cite[Bound~(13.2)]{Mont}. Despite this, it seems difficult to find a complete proof of this bound in the literature; however, one can easily derive it from~\cite[Theorem~2]{GrSo}.
Furthermore, we use the following well-known property of the Gauss sums $$ \tau_q(\chi) = \sum_{v=1}^q \chi(v) e(v/q), \qquad \chi\in {\mathcal X}_q, $$ see, for example,~\cite[Equation~(3.12)]{IwKow}.
\begin{lemma} \label{lem:tau chi} For any primitive multiplicative character $\chi \in {\mathcal X}_q^*$ and an integer $b$ with $\gcd(b,q) = 1$, we have $$ \chi(b) \tau_q( \overline\chi) = \sum_{\substack{v=1\\ \gcd(v,q) =1}}^{q} \overline\chi(v) e(bv/q), $$ where $\overline\chi$ is the complex conjugate character to $\chi$. \end{lemma}
By~\cite[Lemma~3.1]{IwKow} we also have:
\begin{lemma} \label{lem:tau size} For any $\chi\in {\mathcal X}_q^*$ we have $$
|\tau_q(\chi)| = q^{1/2}. $$ \end{lemma}
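Both Gauss-sum facts can be verified numerically for a small modulus. The sketch below builds the multiplicative characters modulo $q=7$ from a primitive root (the choice $g=3$ is an assumption of the example) and checks Lemma~\ref{lem:tau chi} together with $|\tau_q(\chi)| = q^{1/2}$:

```python
import cmath

q = 7
g = 3  # 3 is a primitive root modulo 7 (assumption of the example)
dlog = {pow(g, k, q): k for k in range(q - 1)}   # discrete logarithm table

def chi(j, n):
    # the j-th multiplicative character mod q: chi_j(g^k) = e(jk/(q-1))
    return cmath.exp(2j * cmath.pi * j * dlog[n % q] / (q - 1))

def e_q(x):
    # additive character e(x/q)
    return cmath.exp(2j * cmath.pi * x / q)

def tau_bar(j):
    # Gauss sum of the conjugate character, tau_q(conj(chi_j))
    return sum(chi(j, v).conjugate() * e_q(v) for v in range(1, q))

for j in range(1, q - 1):   # every non-principal character mod a prime is primitive
    assert abs(abs(tau_bar(j)) - q ** 0.5) < 1e-9   # |tau_q(chi)| = q^{1/2}
    for b in range(1, q):
        # Lemma: chi(b) * tau_q(conj(chi)) = sum_v conj(chi)(v) e(bv/q)
        lhs = chi(j, b) * tau_bar(j)
        rhs = sum(chi(j, v).conjugate() * e_q(b * v) for v in range(1, q))
        assert abs(lhs - rhs) < 1e-9
print("Gauss sum identities verified for q =", q)
```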
We also recall the classical large sieve inequality, see~\cite[Theorem~7.11]{IwKow}:
\begin{lemma} \label{lem:Large Sieve} Let $a_1, \ldots, a_T$ be an arbitrary sequence of complex numbers and let $$
A = \sum_{n=1}^T |a_n|^2 \qquad\mbox{and}\qquad T(u) = \sum_{n=1}^T a_n \exp(2 \pi i n u). $$ Then, for an arbitrary integer $Q\ge 1$, we have $$ \sum_{q=1}^Q \sum_{\substack{v=1\\ \gcd(v,q) =1}}^{q}
\left|T(v/q)\right|^2 \ll \(Q^2+ T\) A. $$ \end{lemma}
\section{Bounds of character sums for almost all moduli} \label{sec:bound char}
Garaev~\cite{Gar1} has obtained a series of improvements of the bound~\eqref{eq:PVB} which hold for almost all integer moduli $q\ge 1$. Namely, by~\cite[Theorem~10]{Gar1}, for any $\delta < 1/4$, if $h$ and $Q$ tend to infinity in such a way that $$ \frac{\log h}{\sqrt{\log Q}}\to \infty $$ then the bound
$$
\max_{\chi\in {\mathcal X}_q^*} \left|\sum_{n=1}^h \chi(n) \right|\le h^{1-\delta} $$
holds for all but at most $Q^{4\delta} h^{(1-2\delta) \gamma +o(1)}$ moduli $q \le Q$, where $\gamma$ is the fractional part \begin{equation} \label{eq:gamma h} \gamma = \left\{\frac{2\log Q}{\log h}\right\}. \end{equation}
Here we give some modifications of the bounds from~\cite{Gar1} which are more convenient for our applications. In particular, the size of the exceptional set of moduli $q \le Q$ in~\cite[Theorem~10]{Gar1} depends on the fractional part $\gamma$.
One can simply estimate $\gamma \le 1$ and still derive a nontrivial bound $O(Q^{4\delta} h^{1-2\delta})$ from~\cite[Theorem~10]{Gar1}. However, here we show that one can modify the argument of Garaev~\cite{Gar1} and obtain a bound stronger than the one corresponding to replacing $\gamma$
by 1. We also show that the argument of~\cite{Gar1}, augmented by some standard techniques, can be used to estimate the largest values of the sums $|S_q(\chi;h)|$ uniformly over all integers $h \le H$ and $\chi\in {\mathcal X}_q^*$, which is important for some applications.
We now define $\gamma$ by the analogue of~\eqref{eq:gamma h} but with $H$ instead of $h$, that is, \begin{equation} \label{eq:gamma H} \gamma = \left\{\frac{2\log Q}{\log H}\right\}. \end{equation}
\begin{lemma} \label{lem:Almost all qH} Let $H$ and $Q$ be sufficiently large positive integers with $Q \ge H\ge Q^\varepsilon$ for some fixed $\varepsilon > 0$
and let ${\mathcal A}=(a_n)_{n\in\mathbb{N}}$ be an arbitrary sequence of complex numbers with $|a_n|=1$. Then for any $\delta< 1/4$ the bound $$
\max_{\chi\in {\mathcal X}_q^*} \max_{h \le H} \left|S_q(\chi;h;{\mathcal A}) \right|\le H^{1-\delta} $$ holds true for all but at most $Q^{4\delta} H^{\vartheta+ o(1)}$ moduli $q \le Q$, where $\gamma$ is given by~\eqref{eq:gamma H} and $\vartheta = \min\{ (1-2 \delta) \gamma, 2 \delta(1-\gamma)\}$. \end{lemma}
\begin{proof} As we have mentioned, we follow the ideas of Garaev~\cite[Theorem~3]{Gar1}.
Without loss of generality we may assume that $H=2M+1$ is an odd integer. We also define the function $e(z) = \exp(2 \pi i z)$. We recall, that for any integer $z$, we have the orthogonality relation \begin{equation} \label{eq:Orth} \sum_{b=-M}^M e(bz/H) = \left\{\begin{array}{ll} H,&\quad\text{if $z\equiv 0 \pmod H$,}\\ 0,&\quad\text{if $z\not\equiv 0 \pmod H$,} \end{array} \right. \end{equation} see~\cite[Section~3.1]{IwKow}. We also need the bound \begin{equation} \label{eq:Incompl}
\sum_{n=u+1}^{u+h} e(bn/H) \ll \frac{H}{|b|+1}, \end{equation}
which holds for any integers $b$, $u$ and $H\ge h\ge 1$ with $|b| \le H/2$, see~\cite[Bound~(8.6)]{IwKow}.
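Both~\eqref{eq:Orth} and~\eqref{eq:Incompl} are easy to confirm numerically for a small odd $H$ (a sanity check; for~\eqref{eq:Incompl} the implied constant is taken as $1$, which suffices in the range $1\le|b|\le H/2$):

```python
import cmath

def e(z):
    # e(z) = exp(2 pi i z)
    return cmath.exp(2j * cmath.pi * z)

H = 11                    # odd: H = 2M + 1
M = (H - 1) // 2
for z in range(-2 * H, 2 * H + 1):
    s = sum(e(b * z / H) for b in range(-M, M + 1))
    # orthogonality: the sum is H when H | z and 0 otherwise
    assert abs(s - (H if z % H == 0 else 0)) < 1e-9
# incomplete sum bound with implied constant 1: valid for 1 <= |b| <= H/2
for b in range(1, M + 1):
    for u in (0, 3):
        for h in (1, 4, H - 1):
            s = abs(sum(e(b * n / H) for n in range(u + 1, u + h + 1)))
            assert s <= H / (b + 1) + 1e-9
print("orthogonality and incomplete-sum checks passed")
```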
Now for each $q \le Q$ we fix $\chi_q \in {\mathcal X}_q^*$ and $h_q\le H$ with $$
\left|S_q(\chi_q;h_q;{\mathcal A}) \right|
= \max_{\chi\in {\mathcal X}_q^*} \max_{h \le H} \left|S_q(\chi;h;{\mathcal A})\right|. $$ Then using~\eqref{eq:Orth}, we write \begin{eqnarray*} S_q(\chi_q;h_q;{\mathcal A}) &=&
\sum_{r=1}^H a_r \chi_q(r)\frac{1}{H} \sum_{n=1}^{h_q} \sum_{b=-M}^{M} e(b(r-n)/H) \\ &=& \frac{1}{H} \sum_{b=-M}^{M} \sum_{n=1}^{h_q} e(-bn/H)
\sum_{r=1}^H a_r \chi_q(r) e(br/H). \end{eqnarray*} Recalling~\eqref{eq:Incompl}, we see that $$ S_q(\chi_q;h_q;{\mathcal A}) \ll
\sum_{b=-M}^{M} \frac{1}{|b|+1}
\left|\sum_{r=1}^H a_r \chi_q(r) e(br/H)\right| . $$ Writing $$
|b|+1 = \(|b|+1\)^{(2\nu-1)/2\nu} \(|b|+1\)^{1/2\nu}, $$ and using the H{\"o}lder inequality, we derive
\begin{equation} \label{eq:Ub}
\sum_{q \le Q} \left|S_q(\chi_q;h_q;{\mathcal A})\right|^{2\nu} \ll
(\log Q)^{2\nu-1} \sum_{b=-M}^{M} \frac{1}{|b|+1} U_b, \end{equation} where $$
U_b = \sum_{q \le Q} \left|\sum_{r=1}^H a_r \chi_q(r) e(br/H)\right|^{2\nu}. $$ We now note that $$ \(\sum_{r=1}^H a_r \chi_q(r) e(br/H)\)^\nu = \sum_{n=1}^{T} \rho_{b}(n) \chi_q(n) , $$
where $T = H^\nu$ and $$ \rho_{b}(n) = \sum_{\substack{r_1,\ldots, r_{\nu}=1\\ r_1\ldots r_{\nu} = n}}^H a_{r_1} \ldots a_{r_\nu} e(b( r_1+\ldots+ r_\nu)/H). $$ Using Lemma~\ref{lem:tau chi}, we write \begin{align*} \(\sum_{r=1}^H a_r \chi_q(r) e(br/H)\)^\nu &= \sum_{n=1}^{T} \rho_{b}(n)
\frac{1}{\tau_{q}( \overline \chi_q)}\sum_{\substack{v=1\\ \gcd(v,q) =1}}^{q}
\overline \chi_q(v) e(nv/q)\\ &= \sum_{\substack{v=1\\ \gcd(v,q)=1}}^q \frac {\overline \chi_q(v)}{\tau_{q}(\overline \chi_q)} \sum_{n=1}^T \rho_b(n) e(nv/q). \end{align*} Changing the order of summation, by Lemma~\ref{lem:tau size} and the Cauchy inequality, we obtain, $$
\left|\sum_{r=1}^H \chi_q(r) e(br/H)\right|^{2\nu} \le
\sum_{\substack{v=1\\ \gcd(v,q) =1}}^{q} \left| \sum_{n=1}^{T} \rho_{b}(n) e(nv/q)\right|^2. $$ Therefore, $$
U_b \le \sum_{q \le Q} \sum_{\substack{v=1\\ \gcd(v,q) =1}}^{q} \left| \sum_{n=1}^{T} \rho_{b}(n) e(nv/q)\right|^2. $$ Recalling the well-known upper bound on the divisor function $d(n)$, see~\cite[Bound~(1.81)]{IwKow}, we conclude that $$
|\rho_{b}(n)| \le \sum_{\substack{r_1, \ldots, r_\nu =1\\r_1 \ldots r_\nu=n}}^H 1 \le (d(n))^\nu = n^{o(1)} $$ as $n \to \infty$. Thus $$
\sum_{n=1}^T |\rho_b(n)|^2 \le T^{o(1)} \sum_{n=1}^T |\rho_b(n)| \le T^{o(1)} H^\nu = H^{\nu(1+o(1))}. $$ Hence, we now derive from Lemma~\ref{lem:Large Sieve} $$
U_b\le \(Q^2+ T \) \sum_{n=1}^T |\rho_b(n)|^2 \le \(Q^2+ H^\nu \) H^{\nu(1+o(1))}, $$ which after substitution in~\eqref{eq:Ub} implies \begin{equation} \label{eq:bound nu}
\sum_{q \le Q} \max_{\chi\in {\mathcal X}_q^*} \max_{h \le H} \left|S_q(\chi;h;{\mathcal A})\right|^{2\nu} \le \(Q^2+ H^\nu \) H^{\nu(1+o(1))}. \end{equation}
We now define
the integer $k$ by
$$ k = \fl{\frac{2\log Q}{\log H}}. $$ Note that $$ Q^2 = H^{k+\gamma}. $$ Using~\eqref{eq:bound nu} with $\nu=k$ (so $\nu<2/\varepsilon$ in particular) we see that $$
\sum_{q \le Q} \max_{\chi\in {\mathcal X}_q^*} \max_{h \le H} \left|S_q(\chi;h;{\mathcal A})\right|^{2k} \le Q^{2} H^{k+o(1)}. $$ Hence the desired bound holds for all but at most \begin{equation} \label{eq:small gamma} \begin{split} Q^{2} H^{k+o(1)} H^{-2k(1-\delta)}& = Q^{2} H^{-k(1-2\delta)+o(1)} = H^{2k\delta + \gamma+o(1)}\\ & = Q^{4\delta } H^{(1-2 \delta) \gamma+o(1)} \end{split} \end{equation} moduli $q \le Q$ (which is essentially a bound of the same strength as that of~\cite[Theorem~10]{Gar1}).
Furthermore, using~\eqref{eq:bound nu} with $\nu=k+1$ we see that $$
\sum_{q \le Q} \max_{\chi\in {\mathcal X}_q^*} \max_{h \le H} \left|S_q(\chi;h;{\mathcal A})\right|^{2(k+1)} \le H^{2(k+1)+o(1)} . $$ Hence the desired bound holds for all but at most \begin{equation} \label{eq:big gamma} H^{2(k+1)+o(1)} H^{-2(k+1)(1-\delta)} = H^{2(k+1)\delta +o(1)}\\
= Q^{4\delta} H^{2 \delta(1-\gamma)+ o(1)} \end{equation} moduli $q \le Q$.
The bounds~\eqref{eq:small gamma} and~\eqref{eq:big gamma} yield the result. \end{proof}
Covering the interval $[1,H]$ by $O(\log H)$ dyadic intervals of the form $[H_0/2, H_0]$, and using that $$ \min\{ (1-2 \delta) \gamma, 2 \delta(1-\gamma)\} \le 2\delta(1-2\delta), $$ we obtain:
\begin{cor} \label{cor:Almost all qhH} Let $H$ and $Q$ be sufficiently large positive integers with $Q \ge H\ge Q^\varepsilon$ for some fixed $\varepsilon > 0$
and let ${\mathcal A}=(a_n)_{n\in\mathbb{N}}$ be an arbitrary sequence of complex numbers with $|a_n|=1$. Then for any $\delta< 1/4$ the bound $$
\max_{\chi\in {\mathcal X}_q^*} \left|S_q(\chi;h;{\mathcal A}) \right|\le h^{1-\delta} $$ holds true for all $h \le H$ and for all but at most $Q^{4\delta} H^{2\delta(1-2\delta)+ o(1)}$ moduli $q \le Q$. \end{cor}
For the traditional character sums, that is, if $a_n=1$, we also have the following result.
\begin{cor} \label{cor:Almost all qh} Let $Q$ be a sufficiently large positive integer. For any fixed $\varepsilon > 0$ and $3/14 > \delta > 0$, there is some $\xi> 0$ such that the bound $$
\max_{\chi\in {\mathcal X}_q^*} \left|S_q(\chi;h) \right|\le h^{1-\delta} $$ holds true for all $h \in [Q^\varepsilon, Q]$ and for all but at most $Q^{1-\xi}$ moduli $q \le Q$. \end{cor}
\begin{proof} Clearly, it is enough to consider only $q\in [Q/2, Q]$.
Let us fix some positive $\delta$ with $(3-\sqrt{7})/2 < \delta < 3/14$. Simple calculus shows that there is some $\alpha> 1/2$ such that $$ 4\delta +2\alpha \delta(1-2\delta) < 1 \qquad\mbox{and}\qquad 4\delta +(2-3\alpha) (1-2\delta) < 1. $$ We now note that with the above parameters, Corollary~\ref{cor:Almost all qhH}, used with $H = \rf{Q^\alpha}$, implies that it remains to establish the results only for the values of $h \in [Q^\alpha, Q]$.
Furthermore, by the P{\'o}lya-Vinogradov bound (that is, by~\eqref{eq:PVB} taken with $\nu =1$) we have $$
\max_{\chi\in {\mathcal X}_q^*} \left|S_q(\chi;h) \right|\le h^{1-\delta} $$ holds for any $h \ge Q^{1/2(1-\delta)}$ and $q \le Q$.
Therefore, we only need to consider the values of $h$ in the interval $[Q^\alpha, Q^{1/2(1-\delta)}]$, which we can cover by $O(\log Q)$ dyadic intervals $[H/2, H]$. Now, for $H \in [Q^\alpha, Q^{1/2(1-\delta)}]$ we have $$ 3 < 4(1 -\delta) \le \frac{2\log Q}{\log H} \le 2 \alpha^{-1} < 4. $$ Hence, writing $H = Q^\beta$, for the parameter $\gamma$, that is given by~\eqref{eq:gamma H}, we have $$ \gamma =2\beta^{-1}-3. $$ Recalling Lemma~\ref{lem:Almost all qH}, we see that it remains to check that $$ 4\delta + \beta \min\{(2\beta^{-1}-3)(1-2 \delta), 2 (4-2\beta^{-1})\delta\}<1 $$ for every $\beta\in [\alpha, 1/2(1-\delta)]$. We now have the following elementary estimates \begin{equation*} \begin{split}
4\delta + \beta &\min\{(2\beta^{-1}-3)(1-2 \delta), 2 (4-2\beta^{-1})\delta\}\\ & = 4\delta + \beta (2\beta^{-1}-3)(1-2 \delta) = 4\delta + (2-3\beta)(1-2 \delta)\\ & \le 4\delta + (2-3\alpha)(1-2 \delta) < 1 \end{split} \end{equation*} and the result follows. \end{proof}
\section{Background from Additive Combinatorics}
We use standard notation of additive combinatorics, including sumsets ${\mathcal A}+{\mathcal B} = \{a+b~:~a \in {\mathcal A},\ b \in {\mathcal B}\}$ and $k$-folded sumsets $k{\mathcal A} = \{a_1+\ldots+a_k~:~a_1,\ldots,a_k\in {\mathcal A}\}$, assuming that ${\mathcal A}$ and ${\mathcal B}$ are subsets of some abelian group ${\mathcal G}$.
We first recall the {\it Pl{\"u}nnecke inequality\/}, see~\cite[Corollary~6.29]{TaoVu}.
\begin{lemma} \label{lem:PlunIneq} Suppose that ${\mathcal A}$ and ${\mathcal B}$ are subsets of some abelian group ${\mathcal G}$, and that $\#({\mathcal A} +{\mathcal B}) \le K\#{\mathcal A}$ for some $K \ge 1$. Then for any nonnegative integers $k$ and $m$ we have $$ \#(k{\mathcal B} - m{\mathcal B}) \le K^{k+m}\#{\mathcal A}. $$ \end{lemma}
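A concrete instance (an illustration, not from the source): for the arithmetic progression ${\mathcal A}={\mathcal B}=\{0,\ldots,9\}$ in $\mathbb{Z}$ one has $\#({\mathcal A}+{\mathcal B})=19$, so $K=1.9$, and the Pl{\"u}nnecke bound can be checked directly for small $k$ and $m$:

```python
def fold_sumset(sets):
    # iterated sumset {s_1 + ... + s_n : s_i in sets[i]}; empty input gives {0}
    out = {0}
    for S in sets:
        out = {x + s for x in out for s in S}
    return out

A = set(range(10))                     # {0, ..., 9}
B = set(range(10))
K = len(fold_sumset([A, B])) / len(A)  # #(A + B) = 19, so K = 1.9
negB = {-b for b in B}
for k in range(4):
    for m in range(4):
        if k + m == 0:
            continue
        size = len(fold_sumset([B] * k + [negB] * m))
        # Pluennecke: #(kB - mB) <= K^{k+m} * #A
        assert size <= K ** (k + m) * len(A) + 1e-9
print("checked all k, m <= 3")
```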
We now record the following obvious consequence of Lemma~\ref{lem:PlunIneq}. \begin{cor} \label{cor:PowerApproxSubgr} For any fixed integer $m\ge 1$ and approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ we have $$ \# {\mathcal U}^{(m)} \le (\# {\mathcal U})^{1+o(1)}. $$ \end{cor}
Suppose that ${\mathcal A} \subseteq {\mathcal G}$ and ${\mathcal B} \subseteq {\mathcal H}$ are subsets of abelian groups ${\mathcal G}$ and ${\mathcal H}$, respectively. A map $\psi: {\mathcal A} \to {\mathcal B}$ is called {\it Freiman $k$-homomorphism\/} if whenever $$ a_1+\ldots+a_k = a_{k+1} + \ldots + a_{2k} $$ for some $a_1,\ldots,a_{2k}$ then we also have $$ \psi(a_1)+\ldots+\psi(a_k) = \psi(a_{k+1}) + \ldots + \psi(a_{2k}). $$ If $\psi$ has an inverse which is also a Freiman $k$-homomorphism then we say that $\psi$ is a {\it Freiman $k$-isomorphism\/} and also that ${\mathcal A}$ and ${\mathcal B}$ are {\it Freiman $k$-isomorphic\/}.
We note that if ${\mathcal G}$ is a torsion-free group, then considering $a_1=\ldots=a_k = a$ and $a_{k+1}=\ldots =a_{2k}=b$ for some $a,b \in{\mathcal A}$ we derive that any Freiman $k$-isomorphism is an injection.
We need the following result of Ruzsa~\cite[Theorem~2.3.5]{Ruz2}, which is known as the {\it Modelling Lemma\/} (see also~\cite[Theorem~2]{Ruz1} for the case ${\mathcal G} = \mathbb{Z}$, which is fully sufficient for our purposes).
\begin{lemma} \label{lem:RuzsaModel} Suppose that ${\mathcal A} \subseteq {\mathcal G}$ is a finite nonempty subset of a torsion-free Abelian group ${\mathcal G}$.
Then for all integers $k \ge 2$ and $q \ge \#(k{\mathcal A} - k{\mathcal A})$ there is a set ${\mathcal B} \subseteq {\mathcal A}$ with $\# {\mathcal B} \ge \# {\mathcal A}/k$ such that ${\mathcal B}$ is Freiman $k$-isomorphic to a subset of $\mathbb{Z}/q\mathbb{Z}$. \end{lemma}
We now use Lemma~\ref{lem:RuzsaModel} to show that sets with a small doubling contain subsets of any given cardinality that also have small doubling. We present it in a more general and explicit form than we need for applications, as we think it may be of independent interest.
\begin{lemma} \label{lem:SmallDouble} Suppose that ${\mathcal A} \subseteq {\mathcal G}$ is a finite nonempty subset of a torsion-free Abelian group ${\mathcal G}$ of cardinality $N =\#{\mathcal A}$ such that for some $L\ge 1$ we have $\#(2{\mathcal A}) \le LN$. Then for any positive integer $M\le N$ there is a set ${\mathcal C} \subseteq {\mathcal A}$ with $$ \# {\mathcal C} = M \qquad\mbox{and}\qquad \#(2{\mathcal C}) \le 10L^4M. $$ \end{lemma}
\begin{proof} If $M \ge N/2$ we simply take ${\mathcal C}$ to be any subset of ${\mathcal A}$ of cardinality $M$. Then $$ \#(2{\mathcal C}) \le \#(2{\mathcal A}) \le LN \le 2LM. $$
Now assume that $M \le N/2$. First we note that applying Lemma~\ref{lem:PlunIneq}, we derive $\#(2{\mathcal A} - 2{\mathcal A}) \le K N$, where $K = L^4$.
Let $${\mathcal B} \subseteq {\mathcal A} \qquad\mbox{and}\qquad KN \le q \le 2KN $$ be as in Lemma~\ref{lem:RuzsaModel} (applied with $k=2$) and let $\psi$ be the corresponding Freiman $2$-isomorphism. We consider the set ${\mathcal X} = \psi({\mathcal B}) \subseteq \mathbb{Z}/q\mathbb{Z}$. As noted above, $\psi$ is an injection, so $$ \# {\mathcal X} = \# {\mathcal B}\ge N/2 \ge M. $$ By a simple averaging argument, for any integer $R\ge 1$ there is a subset ${\mathcal Y} \subseteq \mathbb{Z}/q\mathbb{Z}$ of $R$ consecutive residue classes modulo $q$, that is, of $\{r, \ldots, r+R-1\}$ for some $r \in \mathbb{Z}$, such that $$ \#\({\mathcal X} \cap {\mathcal Y}\) \ge \frac{\#{\mathcal X} \cdot \#{\mathcal Y}}{q} = \frac{R}{q}\#{\mathcal X}. $$ We now take $$ R = \rf{\frac{qM}{\#{\mathcal X}}} $$ to guarantee $\#\({\mathcal X} \cap {\mathcal Y}\)\ge M$.
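The averaging step can be made concrete in a small assumed example: every point of ${\mathcal X}$ lies in exactly $R$ of the $q$ cyclic windows of length $R$, so the window counts sum to $R\,\#{\mathcal X}$ and the best window contains at least the average $(R/q)\#{\mathcal X}$ of points:

```python
# Pigeonhole over cyclic windows of R consecutive residues modulo q (toy
# illustration of the averaging argument, not part of the proof).
def best_window(X, q, R):
    # counts[r] = number of points of X in the window {r, ..., r+R-1} mod q
    counts = [sum(1 for x in X if (x - r) % q < R) for r in range(q)]
    r = max(range(q), key=counts.__getitem__)
    return r, counts[r]

q, R = 30, 7
X = {0, 1, 2, 13, 14, 25, 26, 27, 28}
r, c = best_window(X, q, R)
assert c >= R * len(X) / q        # guaranteed by the averaging argument
print(r, c)                       # -> 25 6 (the best window wraps around 0)
```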
We now collect arbitrary $M$ elements of ${\mathcal X} \cap {\mathcal Y}$ in one set ${\mathcal Z}$ and define $$ {\mathcal C} = \psi^{-1}({\mathcal Z}). $$ We clearly have $\# {\mathcal C} = \# {\mathcal Z} = M$ and also by the property of Freiman $2$-isomorphisms $$
\#(2{\mathcal C}) = \#(2{\mathcal Z}) \le \#(2{\mathcal Y}) \le 2\#{\mathcal Y} = 2R $$ (since ${\mathcal Y}$ consists of consecutive residue classes). Furthermore, we have $$ R \le \rf{\frac{qM}{\#{\mathcal X}}} \le
\rf{2qM/N} \le \rf{4KM} = \rf{4L^4M}
\le 5L^4M $$ which concludes the proof. \end{proof}
We now see that Lemma~\ref{lem:SmallDouble} implies that an approximate subgroup of $\mathbb{F}_p^*$ contains subsets of any size that behave as approximate subgroups.
\begin{lemma} \label{lem:Subset AprSubgr} For any approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ and any integer $M \le \# {\mathcal U}$ one can find a subset ${\mathcal V} \subseteq {\mathcal U}$ such that $\# {\mathcal V}=M$ and $$ \# {\mathcal V}^{(2)} \le \# {\mathcal V} (\# {\mathcal U})^{o(1)}. $$ \end{lemma}
\begin{proof}We fix a primitive root $g$ of $\mathbb{F}_p^*$ and define the set $$ {\mathcal A} = \{a\in \{0, \ldots, p-2\}~:~ g^a \in {\mathcal U}\}. $$ We consider ${\mathcal A}$ as a set of integers; since $0 \le a+b \le 2p-4$, at most two elements from $2{\mathcal A}$ correspond to the same element in ${\mathcal U}^{(2)}$. So, we conclude that $$ \#(2{\mathcal A}) \le 2 \# ({\mathcal U}^{(2)}). $$ The result now follows immediately from Lemma~\ref{lem:SmallDouble}. \end{proof}
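A toy version of this transfer (small prime, the subgroup of squares; all parameters assumed) can be checked directly: products in ${\mathcal U}^{(2)}$ correspond to sums in $2{\mathcal A}$ reduced modulo $p-1$, so at most two sums collapse onto one product:

```python
# Passing from a multiplicative set U in F_p^* to its discrete logarithms
# (toy parameters: p = 23, primitive root g = 5, U = squares mod 23).
p, g = 23, 5
dlog = {pow(g, a, p): a for a in range(p - 1)}
U = {(x * x) % p for x in range(1, p)}      # subgroup of quadratic residues
A = {dlog[u] for u in U}                    # the even exponents {0, 2, ..., 20}
U2 = {(u * v) % p for u in U for v in U}    # U^(2); a subgroup, so U2 = U
A2 = {a + b for a in A for b in A}          # 2A, taken in Z (range [0, 2p-4])
assert len(A2) <= 2 * len(U2)
print(len(A2), len(U2))                     # -> 21 11
```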
We note that in our applications of Lemma~\ref{lem:Subset AprSubgr} the sets ${\mathcal U}$ and ${\mathcal V}$ are of comparable cardinalities, so $(\# {\mathcal U})^{o(1)} = (\# {\mathcal V})^{o(1)}$ and hence ${\mathcal V}$ is also an approximate subgroup.
\section{Some Equations over $\mathbb{F}_p$ with Variables from Intervals and Subgroups}
One easily verifies that Corollary~\ref{cor:PowerApproxSubgr} allows us to obtain the following slight variation of~\cite[Theorem~1]{CillGar2} where instead of the sets ${\mathcal U} \subseteq \mathbb{F}_p$ with $\# {\mathcal U}^{(2)} \le 10\# {\mathcal U}$ we use approximate subgroups. The proof then goes through without any changes.
\begin{lemma} \label{lem:eq2var} Let an initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ of length $H$ and an approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ of size $N$ satisfy $$H^{k}N < p \qquad\mbox{and}\qquad N \le p^{k/(2k+1)} $$ for some fixed integer $k\ge 1$. Then the number $J$ of solutions of the equation over $\mathbb{F}_p$ $$ x_1 = x_2u, \qquad u\in {\mathcal U},\ x_1, x_2 \in {\mathcal I}, $$ satisfies $$ J \le H N^{o(1)} . $$ \end{lemma}
Accordingly, we also have the following version of~\cite[Corollary~1]{CillGar2}:
\begin{cor} \label{cor:eq2var} Let an initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ of length $H$ and an approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ of size $N$ satisfy $$H^{k}N < p \qquad\mbox{and}\qquad N \le p^{k/(2k+1)} $$ for some fixed integer $k\ge 1$. Then the number $K$ of solutions of the equation over $\mathbb{F}_p$ $$ x_1u_1 = x_2u_2, \qquad u_1,u_2\in {\mathcal U},\ x_1, x_2 \in {\mathcal I}, $$ satisfies $$ K \le H N^{1+o(1)} . $$ \end{cor}
We now prove the following direct extension of~\cite[Lemma~7]{CillGar2}:
\begin{lemma}\label{lem: eq3var} Let an initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ of length $H$ and an approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ of size $N$ satisfy $$H \le N/2, \qquad H^kN < p, \qquad N \le p^{k/(2k+1)} $$ for some fixed integer $k\ge 1$ and let ${\mathcal Q}$ be the set of primes $q \in [N/2, N]$. Then the number $S$ of solutions of the equation over $\mathbb{F}_p$ $$ q_1u_1 x_1 =q_2u_2 x_2, \qquad q_i \in {\mathcal Q}, \ u_i \in {\mathcal U},\ x_i \in {\mathcal I}, \quad i=1,2, $$ satisfies $$ S \le H N^{2+o(1)}. $$ \end{lemma}
\begin{proof} We have $S=S_1+S_2$ where $S_1$ is the number of solutions with the additional condition $q_1=q_2$, and $S_2$ is the number of solutions with $q_1\ne q_2$. We observe that for $q_1 = q_2$ the equation reduces to $u_1x_1 = u_2x_2$ and there are at most $\#{\mathcal Q} \le N$ possibilities for the common value of $q_1$ and $q_2$.
Hence, we can apply Corollary~\ref{cor:eq2var} and derive \begin{equation} \label{eq:S1} S_1 \le H N^{2+o(1)}. \end{equation}
It remains to estimate $S_2$. We fix $x_2,u_1,u_2$ for which, setting $\lambda = u_2x_2/u_1$, the quantity $T_2$ below is largest, so that \begin{equation} \label{eq:S2 T2} S_2\leq H N^2 T_2, \end{equation} where $T_2$ is the number of solutions of the equation $$\frac{q_1x_1}{ q_2} = \lambda, \qquad q_1,q_2 \in {\mathcal Q}, \ q_1 \ne q_2, \ x_1 \in {\mathcal I}. $$ From $H<N/2$, we deduce that $\gcd(q_1x_1,q_2)=1$. Since $N^2 H<p$, from~\cite[Lemma~3]{CillGar2} we derive that $q_1x_1$ and $q_2$ are uniquely determined. Since $x_1<q_1$, the value $q_1x_1$ uniquely determines $x_1$ and $q_1$. Hence, $T_2\leq 1$, which together with~\eqref{eq:S2 T2} implies \begin{equation} \label{eq:S2} S_2 \le H N^{2}. \end{equation} Combining~\eqref{eq:S1} and~\eqref{eq:S2}, we conclude the proof. \end{proof}
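The uniqueness fact borrowed from~\cite[Lemma~3]{CillGar2} can be verified by brute force in a tiny assumed example: a residue class modulo $p$ admits at most one representation as a reduced fraction $x/y$ with bounded numerator and denominator, provided $XY < p$:

```python
# Each lambda in F_p has at most one representation x * y^{-1} (mod p) with
# gcd(x, y) = 1, 1 <= x <= X, 1 <= y <= Y, provided X * Y < p (toy check;
# pow(y, -1, p) needs Python >= 3.8).
from math import gcd

p, X, Y = 101, 10, 10                 # X * Y = 100 < 101
reps = {}
for x in range(1, X + 1):
    for y in range(1, Y + 1):
        if gcd(x, y) == 1:
            lam = (x * pow(y, -1, p)) % p
            reps.setdefault(lam, set()).add((x, y))
assert all(len(v) == 1 for v in reps.values())
print(len(reps))                      # -> 63 coprime pairs, all distinct mod p
```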
\section{Products of Intervals and Subgroups}
Following the standard notation we use $$ {\mathcal A}\cdot {\mathcal B} = \{ab~:~a\in {\mathcal A}, \ b \in {\mathcal B}\} $$ to denote the product set of two sets ${\mathcal A}, {\mathcal B} \subseteq \mathbb{F}_p$.
We say that a certain property holds for almost all primes $p$ if it fails for $o(Q/\log Q)$ primes $p\le Q$ as $Q \to \infty$.
Here we are interested in the cardinality of the set ${\mathcal I} \cdot {\mathcal U}$ for
an initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ and an approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$. In particular, for almost all primes $p$, we extend~\cite[Theorem~3]{CillGar2} to a wider range of $\#{\mathcal I}$ and $\# {\mathcal U}$.
\begin{theorem} \label{thm:IU=p} For any fixed $\alpha$ with $1/3 \le \alpha < 1/2$ and $\kappa> 0$, for almost all primes $p$, for any initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ of length $H$ and approximate subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ of size $N$ that satisfy $$ H >p^{1- \alpha +\kappa} \qquad\mbox{and}\qquad N \ge p^{\alpha}, $$ we have $$\#\({\mathcal I} \cdot {\mathcal U}\) = p + O(p^{1-\eta}), $$ where $$ \eta = \frac{3\kappa}{7(1 + \kappa)}. $$ \end{theorem}
\begin{proof} Let $Q$ be a sufficiently large positive integer. It is clear that it is enough to establish the desired result for all but $o(Q/\log Q)$ primes $p$ in the dyadic interval $p \in [Q/2, Q]$. Using Corollary~\ref{cor:Almost all qh} with some fixed positive $\varepsilon < 1-2 \alpha$ and $\delta < 3/14$
we see that we can remove $o(Q/\log Q)$ primes $p \in [Q/2, Q]$ such that for the remaining primes $p$ we have \begin{equation} \label{eq:bound p}
\max_{\chi\in {\mathcal X}_p^*} \left|\sum_{n=1}^h \chi(n) \right|\le h^{1-\delta} \end{equation}
for every integer \begin{equation} \label{eq:h Interv} h \in [p^{\varepsilon}, p], \end{equation} provided that $Q$ is large enough.
We now always assume that $p$ is such that~\eqref{eq:bound p} holds.
We now set
$$ m = \rf{\kappa^{-1}}, \quad \ell=\fl{p^{1/m}}, \quad M = \fl{p^{\alpha}}, \quad h = \fl{0.4 p^{1-2 \alpha}} . $$
By Lemma~\ref{lem:Subset AprSubgr}, we can choose a subset ${\mathcal V} \subseteq {\mathcal U}$ such that $$ \#{\mathcal V} = M \qquad\mbox{and}\qquad \# {\mathcal V}^{(2)} \le \# {\mathcal V} p^{o(1)} = \(\# {\mathcal V}\)^{1+ o(1)} . $$
Let ${\mathcal Q}$ be the set of primes $q \in [M/2, M]$.
One verifies that $$ h \ell M \le 0.4 p^{1-2 \alpha} \times p^{1/m} \times p^{\alpha} = 0.4 p^{1- \alpha+ 1/m} \le H $$
since $1/m \le \kappa$. Hence it suffices to prove that for some $\rho >0$ that depends only on $\alpha$, $\kappa$ and $\varepsilon$, there are at most $O(p^{1-\rho})$ values of $\lambda \in \mathbb{F}_p^*$ for which the equation over $\mathbb{F}_p$ \begin{equation} \label{eq:qvxz} qvxz= \lambda \end{equation} has no solution in $q \in {\mathcal Q}$, $v \in {\mathcal V}$ and positive integers $x \le h$, $z\le \ell$.
Let $\Lambda \subset \mathbb{F}_p^*$ be the set of these elements $\lambda$ and let $L = \# \Lambda$.
We use the orthogonality of characters $\chi\in {\mathcal X}_p$ to express the number of solutions to~\eqref{eq:qvxz} for $\lambda\in \Lambda$ via the following character sums: $$ \frac{1}{p-1} \sum_{\lambda \in \Lambda} \sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}} \sum_{x\leq h}\sum_{z\leq \ell} \sum_{\chi \in {\mathcal X}_p} \chi(qvxz\lambda^{-1}) = 0. $$
We now clear the denominator, change the order of summations and separate the term corresponding to the principal character $\chi=\chi_0$. This leads us to the equation $$ h\ell L M \#{\mathcal Q} + \sum_{\chi \in {\mathcal X}_p^*} \sum_{x\leq h}\sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}}\chi(qvx) \sum_{z\leq \ell }\chi(z) \sum_{\lambda\in\Lambda}\chi(\lambda) = 0. $$ Therefore \begin{equation} \label{eq:sepchar} h\ell LM \#{\mathcal Q} \le W, \end{equation} where $$ W = \sum_{\chi \in {\mathcal X}_p^*}
\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}}\chi(xqv)\right|
\left|\sum_{z\leq \ell }\chi(z)\right| \left|\sum_{\lambda\in\Lambda}\chi(\lambda)\right|. $$ Because $\varepsilon < 1-2 \alpha$, if $Q$ is sufficiently large, the condition~\eqref{eq:h Interv} is satisfied for the above choice of $h$. Therefore, the bound~\eqref{eq:bound p} holds and we write $$
\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}}\chi(xqv)\right| \le \(h^{1-\delta}M\#{\mathcal Q}\)^{1/m}
\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}}\chi(xqv)\right|^{(m-1)/m}. $$
Using the fact that $$ \frac{m-1}{2m}+\frac{1}{2m}+\frac{1}{2}=1, $$ and extending the summation over all $\chi \in {\mathcal X}_p$ we obtain \begin{equation} \begin{split} \label{eq:bound W} W&\le \(h^{1-\delta}M\#{\mathcal Q}\)^{1/m} \(\sum_{\chi \in {\mathcal X}_p}
\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}\sum_{v \in {\mathcal V}}\chi(xqv)\right|^{2}\)^{(m-1)/2m}\\
&\qquad \qquad \quad \(\sum_{\chi \in {\mathcal X}_p}\left|\sum_{z\leq \ell}\chi(z)\right|^{2m}\)^{1/2m}
\(\sum_{\chi \in {\mathcal X}_p}\left|\sum_{\lambda \in \Lambda}\chi(\lambda)\right|^{2}\)^{1/2}. \end{split} \end{equation}
First, using the orthogonality of characters, we obtain \begin{equation} \label{eq:bound 1}
\sum_{\chi \in {\mathcal X}_p}\left|\sum_{\lambda \in \Lambda}\chi(\lambda)\right|^{2}=(p-1)L \end{equation} and $$
\sum_{\chi \in {\mathcal X}_p}\left|\sum_{z\leq \ell}\chi(z)\right|^{2m}=(p-1)S, $$ where $S$ is the number of solutions of the following equation over $\mathbb{F}_p$: $$ z_1\cdots z_m = z_{m+1}\cdots z_{2m}, \qquad 1\le z_j\le \ell, \ j =1, \ldots, 2m. $$ Since $\ell^{m}\le p$, this is in fact an equation over $\mathbb{Z}$, and from the well-known bounds on the divisor function we obtain $ S \le \ell^{m+o(1)}$. Hence, we have \begin{equation} \label{eq:bound 2}
\sum_{\chi \in {\mathcal X}_p}\left|\sum_{z\leq \ell}\chi(z)\right|^{2m} \le p \ell^{m+o(1)}. \end{equation}
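For $m=2$ the quantity $S$ is the multiplicative energy of the interval $[1,\ell]$, and the divisor-bound phenomenon is already visible numerically (a toy computation, not part of the proof):

```python
# Brute-force count of solutions of z1*z2 = z3*z4 with 1 <= z_j <= L:
# S = sum_n r(n)^2 with r(n) the number of ordered factorizations n = z1*z2,
# which stays close to L^2 (times L^{o(1)}) rather than the trivial L^3.
from collections import Counter
from itertools import product

def mult_energy(L, m=2):
    r = Counter()
    for t in product(range(1, L + 1), repeat=m):
        n = 1
        for z in t:
            n *= z
        r[n] += 1
    return sum(c * c for c in r.values())

print(mult_energy(4))                # -> 32, against the trivial bound 4^3 = 64
for L in (10, 20, 40):
    print(L, mult_energy(L), L ** 3)
```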
Furthermore, the same orthogonality property implies that \begin{equation} \label{eq:sum T}
\sum_{\chi \in {\mathcal X}_p}\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}
\sum_{v \in {\mathcal V}}\chi(qvx)\right|^{2}=(p-1)T, \end{equation} where $T$ is the number of solutions of the following equation over $\mathbb{F}_p$ \begin{equation} \label{eq:qvx} q_1v_1 x_1 =q_2v_2 x_2, \qquad q_i \in {\mathcal Q}, \ v_i \in {\mathcal V},\ 1\le x_i \le h, \quad i=1,2. \end{equation} Using $\alpha \ge 1/3$, one verifies that for a sufficiently large $Q$ we have $$ h \le M/2. $$ Furthermore, if we define an integer $k\ge 1$ by the inequalities $$ \frac{k-1}{2k-1} \le \alpha < \frac{k}{2k+1}, $$ then we have $$ M \le p^{\alpha} \le p^{k/(2k+1)} $$ and $$ h^kM < p^{k(1-2\alpha) + \alpha} = p^{k-(2k-1)\alpha} < p. $$ Hence, due to the choice of ${\mathcal V}$, we see that Lemma~\ref{lem: eq3var} applies to the equation~\eqref{eq:qvx} and implies $T \le h M^{2 + o(1)}$, which together with~\eqref{eq:sum T} yields \begin{equation} \label{eq:bound 3}
\sum_{\chi \in {\mathcal X}_p}\left|\sum_{x\leq h}\sum_{q \in {\mathcal Q}}
\sum_{v \in {\mathcal V}}\chi(qvx)\right|^{2} \le p h M^{2 + o(1)}. \end{equation}
Substituting~\eqref{eq:bound 1}, \eqref{eq:bound 2} and~\eqref{eq:bound 3} in~\eqref{eq:bound W} and recalling~\eqref{eq:sepchar}, we obtain $$h\ell LM \#{\mathcal Q} \le \(h^{1-\delta}M\#{\mathcal Q}\)^{1/m} (p\ell^{m})^{1/2m}(pL)^{1/2} (p h M^{2 + o(1)})^{(m-1)/2m}.$$ Since $\# {\mathcal Q}= M^{1+o(1)}$, we obtain $$ h \ell LM^2 \le h^{(m+1)/2m -\delta/m} \ell^{1/2} p L^{1/2} M^{(m+1)/m+ o(1)} $$ or $$ L \le h^{-2\delta/m} \ell^{-1} p^{2} (hM^2)^{-1+1/m}. $$ Finally, since $$ hM^2 = p^{1+o(1)} $$ we derive $$ L \le h^{-2\delta/m} \ell^{-1} p^{1+1/m + o(1)} = h^{-2\delta/m} p^{1 + o(1)}. $$ Recalling the choice of $m$ and $\delta$, we conclude the proof. \end{proof}
In the case when ${\mathcal U}$ is a subgroup of $\mathbb{F}_p^*$, we prove a more general and stronger result under the GRH,
which is nontrivial for any $H$ and $N$ as long as
$HN>p^{1+\kappa}$ for some fixed $\kappa>0$.
\begin{theorem} \label{thm:IU=p-GRH} Fix $\kappa>0$. Assuming the GRH, for any prime $p$, for any initial interval ${\mathcal I} \subseteq \mathbb{F}_p$ of length $H$ and subgroup ${\mathcal U} \subseteq \mathbb{F}_p^*$ of size $N$ such that $HN>p^{1+\kappa}$, we have $$\#\({\mathcal I} \cdot {\mathcal U}\) = p + O(p^{1-\kappa + o(1)}). $$ \end{theorem}
\begin{proof} It suffices to prove that for some $\rho >0$ that depends only on $\kappa$, there are at most $O(p^{1-\rho})$ values of $\lambda \in \mathbb{F}_p^*$ for which the equation over the field $\mathbb{F}_p$ \begin{equation} \label{eq:ux} ux= \lambda \end{equation} has no solution in $u \in {\mathcal U}$ and positive integers $x \le H$.
Let $\Lambda \subset \mathbb{F}_p^*$ be the set of these elements $\lambda$ and let $L = \# \Lambda$.
We use the orthogonality of characters $\chi\in {\mathcal X}_p$ to express the number of solutions to~\eqref{eq:ux} for $\lambda\in \Lambda$ via the following character sums: $$ \frac{1}{p-1} \sum_{\lambda \in \Lambda} \sum_{u \in {\mathcal U}} \sum_{x\leq H}\sum_{\chi \in {\mathcal X}_p} \chi(ux\lambda^{-1}) = 0. $$
As in the proof of Theorem~\ref{thm:IU=p} this leads us to the equation $$ H L N + \sum_{\chi \in {\mathcal X}_p^*} \sum_{x\leq H}\sum_{u \in {\mathcal U}}\chi(ux) \sum_{\lambda\in\Lambda}\chi(\lambda) = 0. $$ Therefore \begin{equation} \label{eq:sepcharRH} H LN \le W, \end{equation} where $$ W = \sum_{\chi \in {\mathcal X}_p^*}
\left|\sum_{x\leq H}\sum_{u \in {\mathcal U}}\chi(xu)\right|
\left|\sum_{\lambda\in\Lambda}\chi(\lambda)\right|. $$
Using the Cauchy inequality and extending the summation over all $\chi \in {\mathcal X}_p$ we obtain \begin{equation} \begin{split} \label{eq:bound WRH} W&\le \(\sum_{\chi \in {\mathcal X}_p^*}
\left|\sum_{x\leq H}\sum_{u \in {\mathcal U}}\chi(xu)\right|^{2}\)^{1/2}\(\sum_{\chi \in {\mathcal X}_p}\left|\sum_{\lambda \in \Lambda}\chi(\lambda)\right|^{2}\)^{1/2}. \end{split} \end{equation}
Now we use the fact that $$ \sum_{u \in {\mathcal U}}\chi(u)=0 $$ if $\chi$ is nontrivial on the subgroup ${\mathcal U}$. Hence there are at most $(p-1)/N$ characters for which the above sum does not vanish, in which case it is equal to $N$.
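This orthogonality-on-subgroups fact is easy to check numerically for a small assumed prime, building the multiplicative characters from a primitive root:

```python
# For chi_j(g^a) = exp(2*pi*i*j*a/(p-1)) and a subgroup U of F_p^* of order N,
# sum_{u in U} chi_j(u) equals N when chi_j is trivial on U and 0 otherwise;
# exactly (p-1)/N characters are trivial on U (toy example: p = 13, g = 2).
import cmath

p, g = 13, 2                                   # 2 is a primitive root mod 13
dlog = {pow(g, a, p): a for a in range(p - 1)}
U = {pow(x, 4, p) for x in range(1, p)}        # 4th powers: subgroup of order 3
N = len(U)
nonvanishing = 0
for j in range(p - 1):                         # chi_j(g^a) = e^{2 pi i j a/(p-1)}
    s = sum(cmath.exp(2 * cmath.pi * 1j * j * dlog[u] / (p - 1)) for u in U)
    assert abs(s) < 1e-9 or abs(s - N) < 1e-9  # each subgroup sum is 0 or N
    if abs(s - N) < 1e-9:
        nonvanishing += 1
assert nonvanishing == (p - 1) // N            # exactly (p-1)/N survive
print(N, nonvanishing)                         # -> 3 4
```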
Therefore, proceeding as in the proof of Theorem~\ref{thm:IU=p} and using the bound~\eqref{eq:SQRC} we obtain $$W\le \(\frac{p}{N}(NH^{1/2}p^{o(1)})^{2}\)^{1/2}(pL)^{1/2}. $$ Substituting this into~\eqref{eq:sepcharRH} yields $$H LN \le (pN)^{1/2} H^{1/2} (pL)^{1/2} p^{o(1)},$$ which gives the bound $$L \le \frac{p^{2+o(1)}}{NH}$$ and concludes the proof. \end{proof}
\section{Comments}
Our proof of Corollary~\ref{cor:Almost all qh} uses~\eqref{eq:PVB} (with $\nu =1$) and thus does not extend to more general weighted sums $S_q(\chi;h;{\mathcal A})$. However, for some interesting sequences ${\mathcal A}$, that admit a version of~\eqref{eq:PVB} one can obtain such a result. For example, combining our argument with a bound of Karatsuba~\cite{Kar}, one can derive a version of Corollary~\ref{cor:Almost all qh} for the sequence of shifted primes, that is, for the sequence $a_n = 1$ if $n= \ell+a$ for a prime $\ell$ and $a_n = 0$ otherwise (where $a\ne 0$ is a fixed integer).
We note that we have slightly modified the scheme of the proof of~\cite[Theorem~3]{CillGar2}, which has allowed us to extract the optimal saving $\eta$ from the preliminary bounds used in the proof of Theorem~\ref{thm:IU=p}. In particular, instead of separating the sum $W$ into contributions from ``good'' and ``bad'' characters and balancing them, we have used a more direct approach via the H{\"o}lder inequality, which makes optimal use of the bounds on the moments of the character sums involved (including the ``$\infty$-moment'', that is, the bound on the maximum value of some of these sums).
It is easy to see that if for some $p$ instead of~\eqref{eq:SQRC} we have a weaker bound $$ \max_{\chi \in {\mathcal X}_p \backslash \{\chi_0\}}
\left|S_p(\chi;h) \right| \le h^{1-\delta} p^{o(1)}, $$ with some fixed $\delta \le 1/2$, the method of proof of Theorem~\ref{thm:IU=p-GRH} still applies and, in the case when ${\mathcal U}$ is a subgroup of $\mathbb{F}_p^*$, leads to a nontrivial bound under the condition $H^{2 \delta} N>p^{1+\kappa}$. For example, this observation can be combined with Corollary~\ref{cor:Almost all qh} to obtain a nontrivial bound under the condition $H^{3/7} N>p^{1+\kappa}$ for almost all $p$. On the other hand, using the bound~\eqref{eq:SQRC}, which is conditional on the GRH, in the proof of Theorem~\ref{thm:IU=p} one can get the same result for all primes and also with a larger $\eta = \kappa/(1+\kappa)$.
The question about the set of elements missing from the set product ${\mathcal I} \cdot {\mathcal U}$, which is considered in Theorems~\ref{thm:IU=p} and~\ref{thm:IU=p-GRH} is a multiplicative version of the question of~\cite{ShpSte}
about the set of elements missing from the set difference ${\mathcal I} - {\mathcal U}$ (only in the case when ${\mathcal U}$ is a subgroup of $\mathbb{F}_p^*$). The argument of~\cite{ShpSte} also works for the set sum ${\mathcal I} + {\mathcal U}$ without any changes. However in~\cite{ShpSte} mostly the case of large subgroups of size $\# {\mathcal U} > p^{1/2}$ is of interest and so the technique used is different.
Finally, clearly, slightly changing the value of $\eta$ one can also include the value $\alpha = 1/2$ in the range of Theorem~\ref{thm:IU=p} (for example, one can apply it with $\alpha =1/2 - \kappa/2$ instead of $1/2$ and $\kappa/2$ instead of $\kappa$).
\end{document}
Systematic and functional identification of small non-coding RNAs associated with exogenous biofuel stress in cyanobacterium Synechocystis sp. PCC 6803
Guangsheng Pei, Tao Sun, Shuo Chen, Lei Chen & Weiwen Zhang
The unicellular model cyanobacterium Synechocystis sp. PCC 6803 is considered a promising microbial chassis for biofuel production. However, its low tolerance to biofuel toxicity limits its potential application. Although recent studies showed that bacterial small RNAs (sRNAs) play important roles in regulating cellular processes in response to various stresses, the role of sRNAs in resisting exogenous biofuels is yet to be determined.
Based on genome-wide sRNA sequencing combined with systematic analysis of previous transcriptomic and proteomic data under the same biofuel or environmental perturbations, we report the identification of 133 trans-encoded sRNA transcripts with high-resolution mapping of sRNAs in Synechocystis, including 23 novel sRNAs identified for the first time. In addition, according to quantitative expression analysis and sRNA regulatory network prediction, sRNAs potentially involved in biofuel tolerance were identified and functionally confirmed by constructing sRNA overexpression or suppression strains of Synechocystis. Notably, overexpression of sRNA Nc117 revealed an improved tolerance to ethanol and butanol, while suppression of Nc117 led to increased sensitivity.
The study provided the first comprehensive responses to exogenous biofuels at the sRNA level in Synechocystis and opens an avenue to engineering sRNA regulatory elements for improved biofuel tolerance in the cyanobacterium Synechocystis.
The production of biofuels using solar energy and CO2 in metabolically engineered photosynthetic cyanobacteria holds promise for replacing fossil fuel and generating sustainable energy [1, 2]. However, current biofuel productivity in the cyanobacterial system is still several orders of magnitude lower than that of native producing microbes [3, 4]. This may be due to multiple reasons, such as low expression of foreign metabolic pathways and efficiency to direct metabolic flux to end-products in the cyanobacterial chassis, as well as high end-product toxicity to cyanobacterial hosts [5, 6]. Therefore, in addition to further optimizing expression and functionality of foreign pathways, there is urgency to systematically understand the tolerance mechanism of the cyanobacterial chassis to biofuels, as well as various resistance mechanisms for surviving adverse environmental perturbations during fermentation.
Recent studies showed that small RNAs (sRNAs) between 50 and 300 nucleotides play key regulatory roles in prokaryotic cells at the post-transcriptional level [7]. These RNAs interfere with ribosome binding sites and block translation initiation by base-pairing or affecting mRNA secondary structure and consequently altering mRNA stability, or they interact with proteins directly to modulate their activity [8]. RNA sequencing (RNA-seq) is a powerful analytical tool that provides insight into changes in gene expression and leads to the discovery of novel small and regulatory RNAs. RNA-seq has recently been applied in research of sRNAs in the model cyanobacterium Synechocystis sp. PCC 6803 (hereafter Synechocystis) [9–11], as well as in various groups of cyanobacteria, such as Prochlorococcus and Anabaena [12, 13]. Many identified sRNAs were shown to be involved in cellular responses to a variety of environmental stresses and stimuli [9, 12, 13]. For example, both cis-acting antisense sRNA (e.g., IsrR, Asl_flv4) and trans-encoded sRNAs (e.g., PsrR1, NsiR4) in Synechocystis participate in iron depletion, inorganic carbon supply, or photosynthetic and nitrogen assimilation control metabolism [14–17]. However, sRNAs involved in regulating or improving biofuel tolerance have not yet been described in cyanobacteria.
Engineering of cyanobacteria for improved biofuel tolerance would require a level of understanding of the mode of action of sRNAs. As experimental investigation of multiple potential targets is often slow, numerous bioinformatics tools have been developed to predict gene targets of sRNAs at a genomic scale [18, 19]. These tools are based on the phylogenetic conservation of either sRNA or target sequences. After an initial interaction energy-dependent target prediction within the individual whole genome, the CopraRNA algorithm utilizes functional enrichment to achieve a list of potential target genes and performs further regulatory network analysis with the aim of uncovering a function for sRNA [18]. However, sRNA regulatory mechanisms in bacteria are not limited to base-pairing regulation [7, 20]. In particular, some sRNAs interact with proteins that have regulatory roles in a pathway or biological process [8]. Therefore, identification and engineering of master regulatory sRNAs, particularly highly abundant and stable sRNAs that have predictable secondary structures, will lead to a novel strategy for Synechocystis to be adapted to a growth condition with higher biofuel concentrations.
With an ultimate goal to construct a robust and product-tolerant photosynthetic chassis for synthesizing various renewable biofuels, we have previously applied various omics analytical tools to determine cellular responses of Synechocystis cells under exogenous biofuel stress. The results showed that the cells tended to employ a combination of multiple resistance mechanisms in dealing with various stresses [21–27], which has created challenges to improve tolerance by conventional sequential multi-gene modification approaches [3, 28]. To address the issue, approaches have been proposed to analyze regulatory systems for tolerance improvement [29]. For example, Song et al. [30] and Chen et al. [28] used quantitative iTRAQ LC–MS/MS proteomics to discover the two regulators Sll0794 and Slr1037, which participate in Synechocystis biofuel tolerance [28, 30]. Kaczmarzyk et al. [31] overexpressed sigB to increase both temperature and butanol tolerance in Synechocystis. Therefore, "transcriptional engineering" for tolerance improvement [29, 31], which includes systematic analysis and engineering of master regulatory sRNAs could be an applicable approach [32]. The sRNA engineering approach could have many advantages, such as rapid response, flexible and precise control, ready restoration, and low metabolic burden [32]. When the study was initiated, it was shown that increased sRNA expression in E. coli resulted in superior tolerance to acid and provided protection against oxidative stress [33]. In addition, a comprehensive RNA-seq study of all mRNAs and sRNAs under ten different growth or environmental stress conditions was also recently reported for Synechocystis [10]. Here, we conducted a deep-sequencing analysis of sRNAs in Synechocystis under various exogenous biofuel stresses including ethanol, butanol, and hexane, and proposed a multi-step approach for the identification of sRNAs in Synechocystis. 
Because most current sRNA target prediction algorithms may overlook structured sRNAs that function without a short seed base-pairing sequence, the potential secondary structures of the most abundant sRNAs in our list were also selected for analysis. To identify sRNAs specifically related to biofuel tolerance from a large number of candidates, we further applied multivariate statistical analysis and sRNA regulatory network construction approaches via extensive target prediction [18], correlation analysis between sRNA and paired transcriptomic data [34], and functional enrichment analysis [35]. These efforts led to the identification of several sRNAs related to biofuel tolerance, among which a trans-encoded sRNA Nc117 was shown to improve cell tolerance against both ethanol and butanol when overexpressed in Synechocystis. In contrast, overexpression of three other sRNAs, whose possible targets were enriched in porphyrin and chlorophyll metabolism or photosynthesis, rendered the Synechocystis cells more sensitive to ethanol and butanol.
sRNA deep-sequencing and identification
To ensure that the sRNA sequence data obtained in this study were compatible with our previous transcriptomic and proteomic data, Synechocystis was grown under the same concentration treatments as in several previous studies [i.e., ethanol 1.5% (v/v), butanol 0.2% (v/v), hexane 0.8% (v/v), salt 4% (w/v), and nitrogen starvation] [21–27], which led to an approximate 50% growth decrease at 48 h (Fig. 1). To identify the sRNAs of Synechocystis that potentially participated in responses to various stresses, sRNA isolation was performed with cultures prepared at 24, 48, and 72 h for further sRNA sequencing. This resulted in a total of 294.6 million raw sequencing reads from 18 samples, with an average of 16.37 million reads per sample. After trimming the 3′ and 5′ adapter sequences and low-quality bases, all samples showed a mapping ratio greater than 50%, except for hexane-treated samples (Additional file 1: Table S1). Based on the read mapping and coverage statistics (Additional file 2: Figure S1), a total of 3378 and 131 small-size RNA genes were identified in the chromosomal DNA and four plasmids of Synechocystis, respectively, excluding 43 tRNA genes. Based on their location, these candidates were further classified into three categories: (1) 133 trans-encoded sRNAs located in intergenic regions, which are the subjects of this study and referred to as sRNAs below, (2) 1824 cis-antisense sRNAs (asRNAs) inversely oriented within an annotated gene and (3) 1421 sRNAs located in mRNA untranslated regions (UTRs) (100 nt upstream or 50 nt downstream of annotated genes) on the chromosome of Synechocystis (Additional file 3: Table S2).
Effects of ethanol, butanol, hexane, salt, and nitrogen starvation on growth of Synechocystis. Growth curves of wild-type Synechocystis in BG11 medium control (WT), medium with biofuel at the indicated concentration (v/v), medium with 4% NaCl (w/v), or BG11 medium without nitrogen sources (N starvation). Error bars represent the calculated standard deviation of the measurements of three biological replicates
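A hypothetical sketch (not the authors' actual pipeline) of the location-based classification described above, using the 100 nt upstream / 50 nt downstream UTR windows, might look as follows; all coordinates and the gene list are made up for illustration:

```python
# Classify a candidate sRNA (start, end, strand) against annotated genes:
# antisense (asRNA) if it overlaps a gene on the opposite strand, UTR if it
# falls in the 100 nt upstream / 50 nt downstream window of a same-strand
# gene, and trans-encoded (intergenic) otherwise.
def classify(srna, genes, up=100, down=50):
    s_start, s_end, s_strand = srna
    for g_start, g_end, g_strand in genes:
        overlaps = s_start <= g_end and s_end >= g_start
        if overlaps and s_strand != g_strand:
            return "asRNA"
        if s_strand == g_strand:
            if g_strand == "+":
                lo, hi = g_start - up, g_end + down
            else:
                lo, hi = g_start - down, g_end + up
            if s_start <= hi and s_end >= lo:
                return "UTR"
    return "trans-encoded"

genes = [(1000, 2000, "+")]                 # a single hypothetical gene
print(classify((2010, 2090, "+"), genes))   # -> UTR (within 50 nt downstream)
print(classify((1500, 1600, "-"), genes))   # -> asRNA (antisense overlap)
print(classify((5000, 5120, "+"), genes))   # -> trans-encoded (intergenic)
```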
Various strategies have been utilized for systematic genome-wide searching for sRNAs in Synechocystis in the past. However, the results obtained by various experimental (i.e., sequencing platform, library construction) and bioinformatics approaches vary widely [9–11]. Therefore, a comparison was conducted for regulatory sRNAs identified in this study with those identified previously in Synechocystis [9–11] (Additional file 4: Table S3). The Venn diagram plots showed that only 11 sRNAs were identified by all four independent studies of Synechocystis sRNAs, while a majority of trans-encoded sRNAs were only identified by one or two approaches (Fig. 2a) [9–11]. In addition, the 11 sRNAs identified in all studies were defined with slightly different boundaries, indicating that sRNAs in Synechocystis have not been exhaustively investigated and well characterized. However, a few experimentally validated sRNAs indeed appeared on our list, such as the cis-antisense sRNAs IsrR [14] (As717) and As1_flv4 [15] (As59), and the trans-encoded sRNAs PsrR1 [16] (Nc57) and NsiR4 [17] (Nc42). Moreover, the expression level of PsrR1 and NsiR4 was up-regulated ninefold and threefold, respectively, under nitrogen starvation conditions, as revealed by the deep-sequencing data, consistent with the previous experimental report [10]. Furthermore, the 4.5S RNA (Nc64), 6S RNA (Nc70) [36], and tmRNA (Nc121) were also identified in this study in high abundance [37] (Additional file 5: Figure S2). Notably, the comparative analysis showed that 23 new trans-encoded sRNAs were identified by this study (Fig. 2a), some of which were found with relatively high read coverage depth, such as Nc7, Nc123, Nc66, Nc56, and Nc33. It is likely that expression of some sRNAs was only activated under specific stress-treated samples, consistent with previous reports that any single experimental condition could not accommodate identification of all sRNAs in a species [9–11].
The trans-encoded sRNA gene distribution in Synechocystis. a Venn diagram of trans-encoded sRNA inventories identified in this study and previous studies—Study A [9], Study B [10], and Study C [11]—using different technical platforms. b Pie chart showing the number of trans-encoded sRNAs identified in this study belonging to different categories: Nr, potential sRNAs identified by BLAST search against the non-redundant protein sequence database with E value <1e−10; ORF, potential sRNAs with an open reading frame; Repeat, IS: potential sRNAs located in genome-interspersed repeat regions or identified as insertion sequences; ncRNA (Rfam), sRNAs matching conserved non-coding RNA families in the Rfam database
Previous transcriptomic sequencing showed that reads mapping to multiple locations due to close paralogs can lead to inaccurate estimation of expression levels for genes residing in repetitive regions [38]. Although we did not discard reads that mapped to multiple locations, it was necessary to classify sRNAs located in repetitive regions as potential false positives [39]. For example, nine trans-encoded sRNAs were identified from the pSYSA plasmid of Synechocystis, among which six were located in three CRISPR sequence regions of the plasmid [8, 40]. Although no CRISPR sequences were found in the Synechocystis chromosome, a number of interspersed repeat sequences (IRSs) are widely distributed in it. One type of IRS, the retrotransposon, is a major class of transposable elements that can duplicate through RNA intermediates and can interfere with sRNA library construction [41]. In addition, some Synechocystis sRNAs may contain uORFs and thus be in fact short mRNAs or dual-function RNAs [42]. Therefore, all identified sRNA sequences were subjected to BLAST searches against a non-redundant protein sequence database (Nr) as well as open reading frame (ORF) and ribosomal binding site (RBS) prediction for further confirmation (Fig. 2b). The results showed that 14 trans-encoded sRNAs were located in repetitive regions of the genome, four of which were identified as insertion sequence (IS) elements. Approximately 16 trans-encoded sRNAs matched sequences encoding hypothetical proteins in the Nr database with an E value less than 1e−10, and 39 trans-encoded sRNAs potentially encoded ORFs. Interestingly, beyond trans-encoded sRNA genes, eight sRNAs identified here had records in the Rfam database [43], such as 6S RNA and tmRNA. Finally, 56 of the remaining trans-encoded sRNAs identified in this study could potentially be authentic small non-coding RNAs (small ncRNAs) in Synechocystis.
Notably, 16 trans-encoded sRNAs were located near Rho-independent transcription terminators, a feature reported for numerous bacterial sRNAs [44]. Detailed classification and annotation results for the trans-encoded sRNAs are provided in Additional file 4: Table S3.
Stress response analysis for top abundant sRNAs in Synechocystis
Great advances have been made in the computational prediction of sRNA targets; current target prediction algorithms typically start with single short seed base-pairing sequences, which assumes a regulatory mechanism involving sRNA–mRNA interaction. However, such sequence-based prediction can be problematic, especially for sRNAs with complex secondary and tertiary structures that confer the potential to interact with other biomolecules such as proteins (e.g., CsrB/RsmZ) [7]. Owing to these structures and their potential protein binding, such sRNAs should be relatively stable throughout cultivation and become a constitutive component of cell physiology when Synechocystis adapts to biofuels. One way to identify these sRNAs is to examine the abundance of sRNA candidates [42]. The abundance of the 133 sRNAs listed in Additional file 3: Table S2 was thus determined. Although the most abundant sRNAs in the small RNA sequencing (sRNA-seq) data for biofuel-treated cells differed somewhat from the highly abundant sRNAs identified in other studies [42], commonly known sRNAs such as 4.5S RNA (Nc64), tmRNA (Nc121) [37], and 6S RNA (Nc70) [36] were highly ranked in the list (1st, 4th, and 9th, respectively) (Fig. 3a, details in legend), suggesting that sRNA sequencing is consistent with conventional RNA blotting methods.
Circular representation and correspondence analysis of expression of trans-encoded sRNAs in Synechocystis under control and five stress conditions. a From outside to inside: (1) Whole chromosome and four plasmids (pSYSM, pSYSA, pSYSG, and pSYSX) of Synechocystis with color order orange, yellow, green, blue, and purple, respectively. Numbers in blue labeled outside the circle reflect the scale, and each unit corresponds to 0.01 Mb of the genome; (2) Location of sRNAs in the Synechocystis genome. Several key sRNAs were labeled in the outer circle: biofuel-responsive sRNAs are in black; names of the top abundant sRNAs are marked in red; (3) Circular boxplot presentation for the range of sRNA absolute expression levels under various conditions (recorded as the log10-transformed maximum, upper quartile, median, lower quartile, and minimum expression values of each sRNA in 18 different samples). Higher sRNA expression corresponds to a boxplot positioned further outside; (4) Circular heatmap presentation for sRNA log2-transformed expression changes under five stress conditions at 24, 48, and 72 h. From outside to inside, the order is E24, E48, E72, B24, B48, B72, H24, H48, H72, S24, S48, S72, N24, N48, and N72. The color scale is in the top-left corner of the figure. Several biofuel-responsive sRNAs or stress-specific responsive sRNAs are marked by black framed lines; b Correspondence analysis of sRNA expression under 18 experimental conditions. Samples in control, ethanol, butanol, hexane, salt, or nitrogen starvation conditions are labeled black, red, orange, yellow, green, and blue, respectively. Each gray number represents one of the 133 identified trans-encoded sRNAs. The x-axis and y-axis represent the first and second dimensions, respectively, of the correspondence analysis
Two-step RT-PCR analysis was also performed to estimate the abundance and determine the transcriptional orientation of the 12 most abundant sRNAs selected from the sRNA-seq data (Additional file 6: Figure S3). The results showed that the expression of almost all sRNAs could be confirmed, and the abundance of several sRNAs, such as Nc18, Nc41, Nc110, and Nc122, was nearly as high as that of the three positive-control sRNAs. The transcriptional orientation relative to the adjacent genes was also determined for all sRNAs except Nc81, which is located in a genomic repeat region with a high GC content (60%); the results were in good agreement with either the Rfam annotation or previous reports [9–11]. Many of these sRNAs are located in genetically less characterized regions of the genome. Interestingly, the genes proximal to Nc7 and Nc122 (ncr1650) seem to be related to photosynthesis. Furthermore, analysis using the RNAfold program [45] showed that these highly abundant sRNAs could fold into complex secondary structures, implying their stable nature and possibly important physiological roles (Additional file 7: Figure S4). These sRNAs were found to be Synechocystis-specific, and the abundance of most of them appeared unaffected by the biofuels (except Nc81), as revealed by the sRNA-seq data. However, more data are still needed to define possible functions of these abundant, structurally stable Synechocystis sRNAs.
Quantification analysis for stress responsive trans-encoded sRNAs
To investigate sRNAs potentially involved in stress responses, a systematic presentation of the related information for the identified trans-encoded sRNAs is shown in Fig. 3a, including genomic location, absolute expression abundance, and expression changes in response to different stresses. In addition, correspondence analysis (CA), a multivariate statistical approach that focuses on the relationships between samples and sRNA expression, was applied to determine possible biofuel-specific sRNAs in Synechocystis. The CA score plot in Fig. 3b shows that: (i) samples under the same treatment condition were clearly clustered together, suggesting that the stress condition is a significant factor determining the expression of sRNAs; (ii) significantly different responses were observed between the wild type (WT) and samples treated with environmental perturbations (i.e., salt and nitrogen starvation), whereas the biofuel-stressed samples tended to resemble the WT and each other, indicating that the responses to biofuel stress are less pronounced at the sRNA level than those to high salt and nitrogen starvation; (iii) notably, the sRNA Nc72 was found to be specifically expressed under high salt conditions and was nearest to the high-salt-treated sample on the CA score plot. Similarly, several sRNAs, including Nc11, Nc57 (PsrR1), Nc86, Nc107, Nc130, and Nc132, were identified as sRNAs whose expression specifically responded to nitrogen starvation conditions. In contrast, there was no significant separation in the CA score plot between the WT and biofuel-treated samples, suggesting that no sRNA was uniquely regulated by any specific type of exogenous biofuel.
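The kind of correspondence analysis used for Fig. 3b can be sketched as below. This is a generic implementation of classical CA via singular value decomposition (the study performed CA in R); the small count table in the usage example is purely illustrative.

```python
import numpy as np

def correspondence_analysis(counts, n_dims=2):
    """Classical correspondence analysis of a samples x sRNAs count table.

    Returns row (sample) and column (sRNA) principal coordinates for the
    first n_dims dimensions: samples that cluster together share similar
    sRNA expression profiles.
    """
    X = np.asarray(counts, dtype=float)
    P = X / X.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates: singular vectors scaled by singular values
    rows = (U[:, :n_dims] * sv[:n_dims]) / np.sqrt(r)[:, None]
    cols = (Vt.T[:, :n_dims] * sv[:n_dims]) / np.sqrt(c)[:, None]
    return rows, cols

# Illustrative toy table: two samples with identical profiles land on the
# same point of the CA plot, a third with a different profile separates.
toy_counts = [[10, 0, 5], [10, 0, 5], [0, 10, 5]]
row_coords, col_coords = correspondence_analysis(toy_counts)
```

Plotting the first two dimensions of `row_coords` reproduces the sample clustering logic described above.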
To further investigate biofuel-responsive sRNAs, differential expression profiling analysis between the WT and stress-treated conditions was performed using DESeq software [46] (Additional file 3: Table S2). Using criteria of fold change greater than 1.5 and adjusted p values less than 0.05, only six, nine, and ten sRNAs were found to be differentially expressed under the ethanol, butanol, and hexane stress conditions, respectively (Fig. 4). This number is far lower than under the high salt (30) and nitrogen starvation (57) conditions, consistent with the overall trends revealed by the CA analysis. To capture early responses at the transcriptional level, we also performed differential expression profiling analysis for sRNAs at earlier stages (i.e., 24 and 48 h) (details in Additional file 3: Table S2), with results similar to those from the CA analysis.
Number of differentially regulated sRNAs in the genome of Synechocystis under five conditions
Identification of biofuel-related sRNAs by construction of an sRNA regulatory network and experimental confirmation
Recently, a network approach combined with global omics datasets has been proposed as a useful tool to identify responses under multiple growth conditions [47]. A functional sRNA regulatory network in Synechocystis was constructed with the aid of the CopraRNA tool [18] integrated with paired transcriptomic analysis (Additional file 8: Figure S5). The results showed that the potential targets of Nc57 (PsrR1) were enriched in photosynthesis targets, consistent with the results of recently published verification experiments [16]. This example demonstrated that the network approach implemented in this study could provide reliable identification of sRNAs involved in regulating key biological processes that might be associated with biofuel tolerance of Synechocystis.
According to our previous weighted gene co-expression network analysis (WGCNA) of the Synechocystis proteomic data, photosynthesis antenna proteins, porphyrin and chlorophyll metabolism, and photosynthesis were identified as the top three significant biofuel-specific responsive pathways after cells were treated with exogenous biofuels [48]. Therefore, considering the association of responsive sRNAs with these three pathways (Fig. 5) and the fold change of responsive sRNAs, a total of 20 sRNAs were chosen for quantitative real-time polymerase chain reaction (qRT-PCR) validation. To quantitatively confirm the results of the sRNA analysis, all samples were collected in the same manner as in previous studies [21–27] and used for qRT-PCR analysis (sRNA and primer sequences in Additional file 9: Table S4). The qRT-PCR results confirmed that a majority of sRNAs were significantly down- or up-regulated under specific stress conditions. Overall, comparative qRT-PCR and deep-sequencing analysis of sRNAs showed a positive correlation, with Pearson correlation coefficients of 0.57–0.81 (Additional file 10: Figure S6), demonstrating the high reliability of the sRNA-seq analysis.
Representation of the top three biofuel-related pathways in the trans-encoded sRNA regulatory network of Synechocystis. A pink node represents an sRNA and a turquoise rectangle represents a metabolic pathway. An arrow between an sRNA and a metabolic pathway represents the predicted target gene for an sRNA on the positive strand, while a dotted line between an sRNA and a metabolic pathway represents the predicted target gene for an sRNA on the negative strand
Validation of putative sRNAs involved in biofuels tolerance
Based on the bioinformatics analysis combined with qRT-PCR validation, a total of 18 sRNAs were selected for constructing overexpression and suppression strains for further functional and phenotypic confirmation. As the sRNA-seq in this study provided no information on the orientation of the sRNA gene candidates, strains with a selected sRNA expressed in either the positive (named WT/pJA2-sRNA+) or the negative (named WT/pJA2-sRNA−) strand direction were constructed, corresponding to overexpression and suppression strains, respectively (sRNA and primer sequences provided in Additional file 11: Table S5). All mutants and wild-type Synechocystis were monitored for growth under the biofuel stress conditions in shake flasks. The results showed that only four of the 18 constructs had visible differential growth phenotypes under biofuel conditions: pJA2-nc33−, pJA2-nc65+, pJA2-nc85+, and pJA2-nc117+. In addition, the constructed strains carrying the four sRNAs showed no change in growth phenotype under the control growth condition. To validate that these sRNAs were indeed stably overexpressed, RT-PCR verification was conducted on the wild-type and pJA2-ncRNA mutant strains (Additional file 12: Figure S7). Moreover, a two-step RT-PCR procedure that can differentially amplify the target sRNA transcript from one direction in comparison with potential transcription from the opposite direction was applied to determine the transcriptional direction of the four sRNAs (Fig. 6a). Reproducible results showed that Nc33 is encoded on the negative strand, while the other three sRNAs are on the positive strand.
Experimental determination of transcriptional orientation by two-step RT-PCR and verification of biofuel-responsive sRNAs. a Orientations of the four biofuel-responsive sRNAs determined by two-step RT-PCR. "+" denotes orientation of the sRNA on the positive genome strand; "−" denotes orientation on the negative genome strand. Bold and underlined names indicate the determined orientation of an sRNA. b Validation of ethanol- and butanol-responsive expression of Nc117 using two-step RT-PCR. The upper part shows the Nc117 validation; for the primer-free and RNA-free background controls, no primers or no reverse transcriptase, respectively, were added during the first-step reverse transcription. The lower part shows 16S rRNA as an internal control; all samples were reverse transcribed with random primers, except for lane 8 (RNA-free, without reverse transcriptase). All samples were collected in the logarithmic growth phase
Characterization of the overexpression strain of Nc117 sRNA
The WT/pJA2-nc117+ strain overexpressing nc117, which is located between slr0550 and slr0551, grew faster than the WT under 1.5–2.0% (v/v) ethanol and 0.20–0.25% (v/v) butanol conditions. No difference was observed when both strains grew in normal BG11 medium (Fig. 7a, b). In addition, the cell aggregation commonly seen in the WT under biofuel stress was significantly alleviated in the pJA2-nc117+ strain (data not shown). However, no visible growth difference was observed when the strain grew under hexane (0.6–0.8%), high salt (3–4%), or nitrogen starvation conditions (data not shown). These results were consistent with the up-regulation of Nc117 only under ethanol and butanol conditions, as revealed by sRNA deep-sequencing and qRT-PCR analysis (Additional file 9: Table S4), and suggested that nc117 overexpression could specifically confer resistance to both ethanol and butanol in Synechocystis. Two-step RT-PCR analysis of Nc117 under ethanol and butanol conditions further confirmed these results (Fig. 6b). For additional confirmation, the WT/pJA2-nc117− (Nc117 suppression) strain was constructed and validated by two-step RT-PCR (Additional file 13: Figure S8). In accordance with our expectation, pJA2-nc117− showed the reverse phenotype (Fig. 7a, b), being more sensitive to ethanol and butanol stress, demonstrating conclusively that Nc117 plays an important role in biofuel tolerance in Synechocystis.
Growth curves of WT and overexpression strains under control and biofuel stress conditions. a WT, pJA2-nc33−, pJA2-nc65+, pJA2-nc85+, pJA2-nc117+, and pJA2-nc117− in normal BG11 medium with or without 2.0% (v/v) ethanol (E 2.0%). b WT, pJA2-nc33−, pJA2-nc65+, pJA2-nc85+, pJA2-nc117+, and pJA2-nc117− in normal BG11 medium with or without 0.25% (v/v) butanol (B 0.25%). Error bars represent the calculated standard deviation of the measurements of three biological replicates
The sRNA homologs from six strains of Synechocystis were subjected to CopraRNA for prediction of potential target genes, as no homolog of nc117 was found in other cyanobacterial species. Based on the integrated analysis with transcriptomic datasets, the potential target genes of Nc117 were predicted and are listed in Additional file 14: Table S6. The results showed that although most potential targets were annotated as hypothetical proteins, some targets were involved in pathways related to the biofuel response, including transporter proteins and cell wall/membrane modifications [6]. In addition, functional categorization of the predicted target genes of Nc117 showed that the peptidoglycan biosynthesis pathway was significantly enriched. As previous analyses reported, changes in membrane composition or membrane-associated proteins could improve the viability of E. coli and cyanobacteria under fatty acid and alcohol stress conditions [49, 50]. An increased degree of cross-linking between constituents of the cell wall and modifications of cell wall hydrophobicity protected cells against the toxic effects of lipophilic compounds [6, 51, 52]. Peptidoglycan, an important component of the cell wall in Synechocystis, could thus be modified at the cell surface as a mechanism for improving tolerance to biofuels [6]. Moreover, Nc117 was also found to be significantly up-regulated under a cold stress condition (15 °C for 30 min) (named "Ncr1600" in that study) [10]. It is widely accepted that low temperature affects the fluidity and function of cellular membranes, suggesting possible roles for Nc117 targets in fatty acid or membrane modification and metabolism. Further experimental validation of Nc117 sRNA target genes, including the generation of a deletion mutant, is still needed.
Characterization of overexpression strains of photosynthesis-related sRNAs
Growth analysis showed that three sRNA overexpression strains, pJA2-nc33−, pJA2-nc65+, and pJA2-nc85+, grew poorly in BG11 medium supplemented with ethanol or butanol, suggesting that overexpression of these sRNAs increased sensitivity to biofuels and that they could be negatively involved in biofuel tolerance (Fig. 7a, b). The genetic locations of the nc33, nc65, and nc85 sRNA genes are provided in Table 1. Interestingly, target enrichment analysis showed that the targets of Nc65 and Nc85 were enriched in porphyrin and chlorophyll metabolism and those of Nc33 in photosynthesis; both metabolic pathways were identified as biofuel-specific responsive pathways in our previous study [48]. This was consistent with a previous study showing that proteins related to photosynthesis and chlorophyll concentration were up-regulated upon ethanol exposure in Synechocystis cells [23], indicating that the responses of sRNAs and proteins to biofuels could point to similar mechanisms. However, suppression of the three sRNAs (i.e., pJA2-nc33+, pJA2-nc65−, and pJA2-nc85−) did not improve Synechocystis biofuel tolerance (data not shown), suggesting that indirect mechanisms may be involved.
Table 1 Selected Synechocystis sRNAs for two-step RT-PCR analysis and relative abundance (ranked by sRNA abundance)
To evaluate whether the biofuel tolerance of Synechocystis is related to photosynthesis and chlorophyll content, cells of WT, pJA2-nc33−, pJA2-nc65+, pJA2-nc85+, and pJA2-nc117+ grown under normal BG11, 2.0% (v/v) ethanol, and 0.25% (v/v) butanol conditions were collected at the exponential growth phase for flow cytometric analysis. The results showed that the cell morphology and chlorophyll content of the five tested strains were similar under normal conditions (Additional file 15: Figure S9A, B). In addition, the main FL3-H peak, which indicates chlorophyll intensity per active cell, did not change in the WT or the mutants under ethanol or butanol stress conditions (Additional file 15: Figure S9C, D).
Recent progress in metabolic engineering and synthetic biology has demonstrated great potential in the use of photosynthetic cyanobacteria for biofuel production. However, biofuel production in renewable cyanobacterial systems still lags far behind that in yeast or other native (heterotrophic) systems [53]. Previous studies suggested that low tolerance to biofuel toxicity could be one reason for the low productivity of the cyanobacterial chassis [3], which prompted us to initiate the rational construction of a highly tolerant chassis. Among various promising approaches, sRNAs, especially those with global regulatory effects, have been proposed as powerful tools for chassis engineering [29]. However, very limited information is currently available on sRNAs related to biofuel tolerance in cyanobacteria.
To identify sRNAs involved in the adaptation of the model cyanobacterium Synechocystis sp. PCC 6803 to biofuel growth conditions, samples from five growth conditions were collected for sRNA sequencing. Bioinformatics analysis and experimental validation led to the identification of the first sRNA (Nc117) that could improve the tolerance of Synechocystis to both exogenous ethanol and butanol. In contrast, overexpression of three other sRNAs with predicted functions related to photosynthesis made cells more sensitive to ethanol and butanol. A few highly abundant and structurally stable sRNAs of Synechocystis, which may function by interacting with other biomolecules to enable cell fitness, were also studied. Although the individual functions of these sRNAs at the molecular level remain to be elucidated, our results provide important knowledge of potential sRNA targets and demonstrate a new strategy of engineering adaptive sRNAs to improve biofuel tolerance in Synechocystis.
Finally, the potential limitations of developing a tolerant chassis alone should be acknowledged. For example, some studies have shown that simply increasing tolerance does not necessarily correlate with increased yield [6], and tolerance mechanisms identified under exogenous biofuel stress may not be identical to those induced by biofuels synthesized internally [54]. Future advances may therefore target other important aspects, such as improving metabolic flux and enhancing reducing power in the cyanobacterial chassis. This research should eventually lead to economically feasible cyanobacterial cell factories.
Strains, culture, and stress conditions for sRNA samples
Synechocystis sp. PCC 6803 was grown in BG11 medium (pH 7.5) under a light intensity of approximately 50 μmol photons m−2 s−1 in an illuminating incubator (HNY-211B Illuminating Shaker, Honour, China) at 130 rpm and 30 °C with a starting cell density of OD730 = 0.1 [21, 23–27]. Cell density was measured with a UV-1750 spectrophotometer (Shimadzu, Japan). For growth and stress treatment, 10 mL of fresh culture at an OD730 of approximately 0.5 was collected by centrifugation and inoculated into 50 mL of BG11 liquid medium in a 250-mL flask. Ethanol (1.5%, v/v), butanol (0.2%, v/v), hexane (0.8%, v/v), and NaCl (4%, w/v) were added at the beginning of cultivation. The nitrogen starvation condition was established by excluding NaNO3 from the BG11 medium. Two milliliters of culture were sampled and the OD730 measured every 12 h. Finally, a total of 18 culture samples covering six conditions at three time points (i.e., 24, 48, and 72 h) were collected for RNA preparation.
RNA preparation and cDNA synthesis
Approximately 10 mg of cell pellet was frozen in liquid nitrogen immediately after centrifugation at 8000×g for 10 min at 4 °C, and cell walls were broken by grinding in liquid nitrogen with a mortar. Cell pellets were re-suspended in TRIzol reagent (Ambion, Austin, TX) and mixed well by vortexing. Total RNA was extracted using a miRNeasy Mini Kit (Qiagen, Valencia, CA). Contaminating DNA in the RNA samples was removed with DNase I according to the instructions for the miRNeasy Mini Kit (Qiagen, Valencia, CA). RNA quality and quantity were determined using an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA) before complementary DNA (cDNA) synthesis. The RNA integrity number (RIN) of every RNA sample used for sequencing was more than 7.0. To enrich small RNAs for the sRNA-seq analysis, the pool of total RNA was size-selected, and transcripts smaller than 250 nucleotides (nt) were used to prepare cDNA libraries. For each sample, 500 ng of size-fractionated sRNAs was subjected to cDNA synthesis using a NuGEN Ovation Prokaryotic sRNA-Seq System according to the manufacturer's protocol (NuGEN, San Carlos, CA). The resulting double-stranded cDNA was purified using the MinElute Reaction Cleanup Kit (Qiagen, Valencia, CA).
Library preparation for sRNA and sequencing
The double-stranded cDNA was subjected to library preparation using the Illumina TruSeq™ RNA Sample Preparation Kit (Illumina, San Diego, CA), through a four-step protocol including end repair, 3′-end adenylation, adapter ligation, and cDNA template enrichment. The amplification program was: 98 °C for 30 s; 15 cycles of 98 °C for 10 s, 60 °C for 30 s, and 72 °C for 30 s; 72 °C for 5 min; and hold at 4 °C. To determine the quality of the libraries, a Qubit 2.0 Fluorometer with the Qubit dsDNA HS assay (Invitrogen, Grand Island, NY) was first used to determine the DNA concentration of the libraries, and a FlashGel DNA Cassette (Lonza, USA) or an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA) was used to determine the product size of the libraries, with good libraries typically 250–300 bp. The product was used directly for cluster generation on an Illumina Solexa Sequencer according to the manufacturer's instructions.
RNA 2 × 100 bp paired-end sequencing was performed using the standard protocol for the Illumina Genome Analyzer IIx. The cDNA library for each sample was loaded onto a single lane of an Illumina flow cell. The image deconvolution and calculation of quality values were performed using a Goat module (Firecrest v.1.4.0 and Bustard v.1.4.0 programs) with Illumina pipeline v.1.4. Sequenced reads were generated by base calling using the Illumina standard pipeline.
sRNA data analysis
Genome sequence and annotation information for Synechocystis were downloaded from NCBI (ftp://ftp.ncbi.nlm.nih.gov/genomes). The sRNA sequence reads were pre-processed using the NGS QC Toolkit (v. 2.3) to remove low-quality bases and adapter sequences [55]. Reads passing QC were aligned to the Synechocystis genome using the Burrows-Wheeler Aligner (BWA, v. 0.7.10) [56] with perfect-match parameters. As the length of some small RNAs might be shorter than 100 nucleotides, we applied a strategy of up to 50 cycles of read trimming and re-mapping to detect bacterial sRNAs between 50 and 100 bp [7]. Briefly, we re-extracted the reads that did not match the Synechocystis genome from the aligned SAM files and trimmed one low-quality base from the 3′ or 5′ end of these reads (as observed with FastQC software). We then re-mapped the trimmed reads to the Synechocystis genome; if reads still did not match, the process was repeated until they did. Reads that became shorter than 50 bp were discarded even if they could align. Finally, we used Samtools [57] (v. 0.1.19) to merge all the SAM files from the same sample.
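The trim-and-remap strategy can be illustrated with the following sketch. For brevity, an exact substring search stands in for the BWA perfect-match alignment, and trimming is simplified to the 3′ end only; the reads and genome in the usage example are made up.

```python
def iterative_trim_map(reads, genome, min_len=50):
    """Sketch of the trim-and-remap strategy: a read that fails to align
    is trimmed by one base from the 3' end and re-mapped, repeatedly,
    until it aligns or becomes shorter than min_len (then discarded).
    Exact substring search replaces the real perfect-match aligner."""
    mapped, discarded = [], []
    for read in reads:
        r = read
        while len(r) >= min_len and r not in genome:
            r = r[:-1]                 # trim one base from the 3' end
        if len(r) >= min_len:
            mapped.append(r)           # aligned at >= min_len bp
        else:
            discarded.append(read)     # trimmed below min_len, discard
    return mapped, discarded

# Illustrative usage with a toy 160-bp "genome"
genome = "ACGT" * 40
reads = [genome[10:80],          # maps directly
         genome[10:80] + "NNN",  # maps after three trimming cycles
         "T" * 60]               # never maps, eventually discarded
mapped, discarded = iterative_trim_map(reads, genome)
```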
After the sRNA paired-end reads were mapped to the Synechocystis genome, Bedtools (v. 2.20.1) [58] was used to calculate read-mapping statistics for each sample from BAM files generated with Samtools. The coverage of each nucleotide was calculated by counting the number of reads mapped at the corresponding nucleotide position in the genome. To normalize sRNA expression levels across samples, we removed the nucleotide coverage in the rRNA operon and tRNA gene regions, summed the nucleotide coverage in the remaining genome regions as the total mapped bases, and normalized each sample to 100,000,000 bases. This provided a common read base for all samples, corresponding to approximately 25× sequencing depth for the whole Synechocystis genome.
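The normalization step can be sketched as follows; this is a simplified illustration of the scaling described above, with hypothetical region coordinates rather than the actual rRNA/tRNA annotation.

```python
import numpy as np

def normalize_coverage(coverage, mask_regions, target=100_000_000):
    """Sketch of the per-sample normalization: per-nucleotide coverage
    inside masked (rRNA/tRNA) regions is zeroed, the remaining mapped
    bases are summed, and all depths are scaled so the sample totals
    `target` bases."""
    cov = np.asarray(coverage, dtype=float).copy()
    for start, end in mask_regions:
        cov[start:end] = 0.0           # drop rRNA/tRNA coverage
    total = cov.sum()                  # total mapped bases kept
    return cov * (target / total)      # scale sample to the common base

# Illustrative usage: uniform 10x coverage, first half masked
scaled = normalize_coverage([10] * 100, [(0, 50)])
```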
The sRNAs were identified using a multi-tiered approach. We first searched for enriched regions of sRNA expression and then estimated their 5′ and 3′ positions with manual correction. For highly transcribed sRNAs, we defined s_i as the coverage depth at nucleotide i in the Synechocystis genome and required an expression level of at least 50× sequencing depth at each nucleotide of a given sRNA. We looked for the first position with s_i > 50, representing the start site of an sRNA, and then determined whether s_{i+1} > 50, and so on, until reaching s_{i+j} < 50, the end site of the sRNA. For sRNAs transcribed at low levels, we used previously reported sRNA gene candidates as references and only retained sRNAs with an obvious reduction in read coverage 50 bp upstream or downstream of the adjacent region. To obtain a robust sRNA mapping with a low false-positive rate, especially for induced sRNAs, we retained the sRNAs that were repeatedly observed in at least three of the 18 samples. Finally, manual correction of sRNA boundaries was conducted to identify the point of most rapid coverage decline, considered the end of the sRNA. We used an R script based on the core code from Kopf [10] to produce a PDF file displaying the distribution of sRNA reads in the Synechocystis genome (Additional file 2: Figure S1) and the four plasmids pSYSM, pSYSA, pSYSG, and pSYSX (Additional file 16: Figure S10, Additional file 17: Figure S11, Additional file 18: Figure S12, and Additional file 19: Figure S13, respectively). Two adjacent sRNAs located on the same strand, separated by less than 50 bp, and showing the same expression trend were merged into a single sRNA; sRNAs shorter than 50 bp were discarded. By determining the 5′ and 3′ ends and inspecting the locations, these potential sRNA genes were classified into trans-encoded sRNAs, cis-antisense sRNAs, and UTRs of mRNAs, and were annotated as ncRNA (nc), asRNA (as), and UTR (U), respectively.
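The coverage-threshold calling of highly transcribed sRNAs described above can be sketched as follows. This simplified illustration applies the 50× depth threshold, the 50-bp merge rule, and the 50-bp minimum length, but omits the manual boundary correction and the same-strand/same-trend checks.

```python
import numpy as np

def call_srnas(coverage, threshold=50, min_len=50, merge_gap=50):
    """Sketch of coverage-based sRNA calling: contiguous runs of
    per-nucleotide depth above `threshold` become candidates; neighbours
    separated by less than `merge_gap` bp are merged; candidates shorter
    than `min_len` bp are discarded. Returns half-open (start, end)."""
    above = np.asarray(coverage) > threshold
    regions, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                          # run begins: s_i > threshold
        elif not flag and start is not None:
            regions.append((start, i))         # run ends: s_{i+j} <= threshold
            start = None
    if start is not None:
        regions.append((start, len(above)))
    merged = []
    for s, e in regions:
        if merged and s - merged[-1][1] < merge_gap:
            merged[-1] = (merged[-1][0], e)    # merge close neighbours
        else:
            merged.append((s, e))
    return [(s, e) for s, e in merged if e - s >= min_len]
```

For example, two high-coverage runs separated by a 30-bp dip are merged into one candidate, while an isolated 20-bp spike is discarded.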
To address the challenge of falsely identified sRNAs caused by reads matching multiple locations [59], repetitive regions in the Synechocystis genome were identified with BLAST. Fragments with identity >80%, E value <1e−5, and length >50 bp were considered repeat regions. IS elements were predicted using ISfinder [60]. The sRNA sequences were also compared against the Nr and Rfam databases with a 1e−10 cut-off. Candidate ORFs and RBSs were predicted by Glimmer [61] and RBSfinder [62], respectively. Rho-independent terminators in Synechocystis were searched for using TransTermHP [63] with standard settings. RNA secondary structure and ΔG analyses of sRNAs were performed using RNAfold [45].
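The repeat-region criteria can be expressed directly as a filter over parsed BLAST hits. The hit fields below (`identity`, `evalue`, `length`) are assumptions about a parsed tabular BLAST output, not a specific library's API.

```python
def filter_repeat_hits(hits):
    """Apply the stated repeat-region criteria to parsed BLAST hits:
    keep fragments with identity > 80%, E value < 1e-5, and length > 50 bp.
    Each hit is a dict with 'identity' (%), 'evalue', and 'length' (bp)."""
    return [h for h in hits
            if h["identity"] > 80 and h["evalue"] < 1e-5 and h["length"] > 50]

# Illustrative usage with made-up hits: only the first passes all three cuts
hits = [
    {"identity": 95.0, "evalue": 1e-20, "length": 120},  # kept
    {"identity": 70.0, "evalue": 1e-20, "length": 120},  # identity too low
    {"identity": 95.0, "evalue": 1e-3,  "length": 120},  # E value too high
    {"identity": 95.0, "evalue": 1e-20, "length": 40},   # too short
]
repeat_regions = filter_repeat_hits(hits)
```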
Quantification and statistical analysis of sRNAs
For all comparisons, the aligned total base counts were normalized to obtain relative expression levels. We defined \(s_{i}\) as the coverage depth at nucleotide \(i\) within an sRNA; the expression level of an sRNA spanning nucleotides \(i\) to \(i+j\) was then calculated as its coverage depth summed over all nucleotides and divided by its length: \(\sum_{k=i}^{i+j} s_{k}/j\). Differentially expressed sRNAs were identified with the DESeq R package [46], using the identified sRNAs as input. For each comparison, the resulting p values were adjusted with Benjamini and Hochberg's approach to control the false discovery rate. sRNAs with fold change >1.5 and adjusted p values <0.05 were considered significantly differentially expressed. Correspondence analysis was conducted in R, and the annular heatmap was drawn with the OmicCircos package [64] in R.
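The quantification and multiple-testing steps can be sketched as below; this is a minimal stand-in for the DESeq workflow, assuming per-nucleotide depths and raw p values are already in hand (the helper names are ours):

```python
def expression_level(depths):
    """Mean per-nucleotide coverage of an sRNA, i.e. sum(s_k) / length."""
    return sum(depths) / len(depths)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (step-up procedure)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest rank down, enforcing monotonicity
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

def is_differential(fold_change, adj_p, fc_cut=1.5, p_cut=0.05):
    """Thresholds from the text: FC > 1.5 and adjusted p < 0.05."""
    return fold_change > fc_cut and adj_p < p_cut
```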
Construction of sRNA regulatory networks
To construct the sRNA regulatory networks, target prediction for each trans-encoded sRNA was first performed with CopraRNA [65], using sRNA homologs from six Synechocystis sp. PCC 6803 genomes (NC_000911, NC_017038, NC_017039, NC_017052, NC_017277, and NC_020286) as input. The top 100 CopraRNA predictions with a free-energy cut-off of −10 kcal/mol were retained to remove potential false positive targets [34]. A Pearson correlation analysis on paired RNA samples from previous studies was then used to further improve the accuracy of target prediction. Only pairs with correlation <−0.4 or >0.4 and two-sided p values <0.05 (Fisher's exact test) were kept to reduce false positive target predictions. This approach modifies previous reports, in which only anti-correlated relationships were retained for sRNA regulatory network construction [66], because positive regulation by sRNAs has also been found in Synechocystis [67]. To assess sRNA-regulated metabolic pathways, we performed functional enrichment analyses for each trans-encoded sRNA (see "Functional enrichment analysis" section). Only pathways containing at least two target genes with hypergeometric test p values <0.05 were considered enriched metabolic pathways potentially regulated by sRNAs (Additional file 20: Table S7). Finally, we assembled all significantly enriched target results into an sRNA regulatory network, which was visualized with Cytoscape [68].
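The correlation-based target filter can be sketched as follows; note that this toy version keeps both positive and negative correlations with |r| > 0.4, as described above, and omits the significance test:

```python
from math import sqrt

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def keep_target(srna_profile, target_profile, r_cut=0.4):
    """Retain a predicted sRNA-target pair only if |r| > r_cut, so that both
    anti-correlated (negative regulation) and correlated (positive regulation)
    pairs survive; p-value filtering is omitted in this sketch."""
    return abs(pearson_r(srna_profile, target_profile)) > r_cut
```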
Functional enrichment analysis
The metabolic pathway enrichment analysis of genes was conducted according to the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway database using the following hypergeometric test formula:
$$P = 1 - \sum_{i = 0}^{m - 1} \frac{\binom{M}{i}\binom{N - M}{n - i}}{\binom{N}{n}}$$
N is the total number of genes with KEGG pathway annotation information, M is the number of genes with a given KEGG pathway annotation, n is the number of sRNA target genes with any KEGG pathway annotation, and m is the number of sRNA target genes with the given KEGG pathway annotation. KEGG pathways with p values less than 0.05 were considered enriched response pathways.
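The enrichment test can be implemented directly from the formula above with stdlib binomial coefficients (the function name is ours):

```python
from math import comb

def kegg_enrichment_p(N, M, n, m):
    """P = 1 - sum_{i=0}^{m-1} C(M,i) * C(N-M, n-i) / C(N,n),
    i.e. the hypergeometric probability of drawing at least m
    pathway-annotated genes among n target genes."""
    total = comb(N, n)
    return 1 - sum(comb(M, i) * comb(N - M, n - i) for i in range(m)) / total
```

For example, with N = 10 annotated genes, M = 4 in the pathway, n = 3 targets and m = 2 hits, the formula gives P = 1/3; a pathway would be called enriched only if P < 0.05 and m ≥ 2, per the criteria above.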
qRT-PCR analysis and two-step RT-PCR analysis
The RNA samples used for sRNA sequencing and qRT-PCR were prepared from identical cultures, and qRT-PCR analysis was performed as previously described [21]. sRNA expression was quantified by a standard qRT-PCR procedure in which serial dilutions of chromosomal DNA of known concentration served as templates for a standard curve. A total of 20 sRNAs were selected for verification, with 16S rRNA as an internal control. Three technical replicates and three biological replicates were analyzed for each sRNA. Data analysis was carried out using the StepOnePlus analytical software (Applied Biosystems, Foster City, CA). Briefly, the relative transcript amount was normalized to that of 16S rRNA in each sample, and the data presented are the ratios of normalized transcript amounts between stress-treated and normal conditions.
In the two-step RT-PCR analysis, a specific primer pair located within the boundaries of each sRNA was designed (Additional file 9: Table S4). As shown in Additional file 21: Figure S14, two opposite RT primers located inside the sRNA region were separately added to approximately 200 ng of total RNA for first-strand cDNA synthesis at 40 °C for 1 h using the RevertAid™ First Strand Synthesis Kit (Fermentas, USA). The nested primers of each sRNA were then added to 1 μL of each of the two separate first-strand cDNA products for amplification with Dream Taq Green PCR Master Mix (Thermo Scientific). PCR cycling was as follows: 95 °C for 2 min; 35 cycles of 95 °C for 30 s, 55 °C or 57 °C for 30 s, and 72 °C for 15 s; and a final 5-min extension at 72 °C. The PCR products were separated on 3.0% agarose gels.
Construction and analysis of sRNA mutant strains
The gene expression vector pJA2, kindly provided by Prof. Paul Hudson (KTH Royal Institute of Technology, Sweden), was used to overexpress the sRNAs. All sRNAs were cloned under the control of the psbA2 promoter. Briefly, the pJA2 backbone was amplified by PCR, treated with DpnI, and digested with BamHI and XbaI to create cohesive ends. Each sRNA sequence was PCR-amplified using primers pJA2-sRNA-F and pJA2-sRNA-R and cloned into the BamHI/XbaI sites of pJA2, yielding the recombinant plasmid pJA2-sRNA (details in Additional file 22: Figure S15). The plasmid was introduced into the WT by electro-transformation as previously described [69]. Positive clones were selected on BG-11 agar plates containing 10 μg/mL kanamycin and confirmed by colony PCR. The sRNA overexpression strains were designated WT/pJA2-sRNA.
Phenotypic analysis
Synechocystis and the sRNA mutant strains were grown under identical culture conditions, with sampling starting at a cell density of OD730 = 0.1. For biofuel treatment, 2.0–2.5% (v/v) ethanol, 0.20–0.25% (v/v) butanol, or 0.8–1.0% (v/v) hexane was added at the beginning of cultivation. For salt treatment, BG11 with 5% (w/v) NaCl was prepared and sterilized; BG11 with 3–4% (w/v) NaCl was then obtained by mixing normal BG11 and 5% (w/v) NaCl BG11 in the appropriate ratio at the beginning of cultivation. For nitrogen starvation, fresh cells at the same logarithmic phase were collected by centrifugation at 1500×g at 4 °C, washed twice with nitrogen-depleted BG11 medium, and inoculated into nitrogen-depleted BG11 liquid medium in flasks. Cell density was measured at OD730 on a UV-1750 spectrophotometer (Shimadzu, Japan). Growth experiments were repeated at least five times to confirm the growth patterns.
Flow cytometric analysis
Flow cytometric analysis was performed using a fluorescence-activated cell sorting (FACS) Calibur cytometer (Becton–Dickinson) equipped with a 488-nm blue laser with the following settings: forward scatter (FSC), E00 log; side scatter, 400 V. Control and stress-treated cells were harvested at 48 h, washed twice with phosphate buffer (pH 7.2), and re-suspended in the same buffer to a final OD580 of 0.3 (approximately 1.5 × 10⁷ cells/mL). A total of 5 × 10⁴ cells were used for each analysis according to the published method [70]. Chlorophyll fluorescence was detected by the FL3 channel with a 670/LP filter. The data analysis was conducted using CellQuest software, version 3.1 (Becton–Dickinson).
BLAST: basic local alignment search tool
CA: correspondence analysis
cDNA: complementary DNA
FSC: forward scatter
IRS: interspersed repeat sequence
IS: insert sequence
KEGG: Kyoto Encyclopedia of Genes and Genomes
Nr: non-redundant protein sequence database
ORF: open reading frame
qRT-PCR: quantitative real time polymerase chain reaction
RBS: ribosome binding site
RIN: RNA integrity number
RNA-seq: RNA sequencing
small ncRNA: small non-coding RNA
sRNA: small RNA
sRNA-seq: small RNA sequencing
UTR: untranslated region
WGCNA: weighted gene co-expression network analysis
WT: wild type
Georgianna DR, Mayfield SP. Exploiting diversity and synthetic biology for the production of algal biofuels. Nature. 2012;488:329–35.
Peralta-Yahya PP, Zhang F, del Cardayre SB, Keasling JD. Microbial engineering for the production of advanced biofuels. Nature. 2012;488:320–8.
Jin H, Chen L, Wang J, Zhang W. Engineering biofuel tolerance in non-native producing microorganisms. Biotechnol Adv. 2014;32:541–8.
Oliver JW, Atsumi S. Metabolic design for cyanobacterial chemical synthesis. Photosynth Res. 2014;120:249–61.
Nicolaou SA, Gaida SM, Papoutsakis ET. A comparative view of metabolite and substrate stress and tolerance in microbial bioprocessing: from biofuels and chemicals, to biocatalysis and bioremediation. Metab Eng. 2010;12:307–31.
Dunlop MJ. Engineering microbes for tolerance to next-generation biofuels. Biotechnol Biofuel. 2011;4:32.
Storz G, Vogel J, Wassarman KM. Regulation by small RNAs in bacteria: expanding frontiers. Mol Cell. 2011;43:880–91.
Waters LS, Storz G. Regulatory RNAs in bacteria. Cell. 2009;136:615–28.
Mitschke J, Georg J, Scholz I, Sharma CM, Dienst D, Bantscheff J, Voß B. An experimentally anchored map of transcriptional start sites in the model cyanobacterium Synechocystis sp. PCC6803. Proc Natl Acad Sci USA. 2011;108:2124–9.
Kopf M, Klahn S, Scholz I, Matthiessen JK, Hess WR, Voss B. Comparative analysis of the primary transcriptome of Synechocystis sp. PCC 6803. DNA Res. 2014;21:1–13.
Xu W, Chen H, He CL, Wang Q. Deep sequencing-based identification of small regulatory RNAs in Synechocystis sp. PCC 6803. PLoS ONE. 2014;9:e92711.
Steglich C, Futschik ME, Lindell D, Voss B, Chisholm SW, Hess WR. The challenge of regulation in a minimal photoautotroph: non-coding RNAs in Prochlorococcus. PLoS Genet. 2008;4:e1000173.
Mitschke J, Vioque A, Haas F, Hess WR, Muro-Pastor AM. Dynamics of transcriptional start site selection during nitrogen stress-induced cell differentiation in Anabaena sp. PCC7120. Proc Natl Acad Sci USA. 2011;108:20130–5.
Duhring U, Axmann IM, Hess WR, Wilde A. An internal antisense RNA regulates expression of the photosynthesis gene isiA. Proc Natl Acad Sci USA. 2006;103:7054–8.
Eisenhut M, Georg J, Klahn S, Sakurai I, Mustila H, Zhang P, Hess WR, Aro EM. The antisense RNA As1_flv4 in the Cyanobacterium Synechocystis sp. PCC 6803 prevents premature expression of the flv4-2 operon upon shift in inorganic carbon supply. J Biol Chem. 2012;287:33153–62.
Georg J, Dienst D, Schurgers N, Wallner T, Kopp D, Stazic D, Kuchmina E, Klahn S, Lokstein H, Hess WR, et al. The small regulatory RNA SyR1/PsrR1 controls photosynthetic functions in cyanobacteria. Plant Cell. 2014;26:3661–79.
Klahn S, Schaal C, Georg J, Baumgartner D, Knippen G, Hagemann M, Muro-Pastor AM, Hess WR. The sRNA NsiR4 is involved in nitrogen assimilation control in cyanobacteria by targeting glutamine synthetase inactivating factor IF7. Proc Natl Acad Sci USA. 2015;112:E6243–52.
Wright PR, Richter AS, Papenfort K, Mann M, Vogel J, Hess WR, Backofen R, Georg J. Comparative genomics boosts target prediction for bacterial small RNAs. Proc Natl Acad Sci USA. 2013;110:E3487–96.
Busch A, Richter AS, Backofen R. IntaRNA: efficient prediction of bacterial sRNA targets incorporating target site accessibility and seed regions. Bioinformatics. 2008;24:2849–56.
Jorgensen MG, Thomason MK, Havelund J, Valentin-Hansen P, Storz G. Dual function of the McaS small RNA in controlling biofilm formation. Genes Dev. 2013;27:1132–45.
Wang J, Chen L, Huang S, Liu J, Ren X, Tian X, Qiao J, Zhang W. RNA-seq based identification and mutant validation of gene targets related to ethanol resistance in cyanobacterial Synechocystis sp. PCC 6803. Biotechnol Biofuel. 2012;5:89.
Zhu H, Ren X, Wang J, Song Z, Shi M, Qiao J, Tian X, Liu J, Chen L, Zhang W. Integrated OMICS guided engineering of biofuel butanol-tolerance in photosynthetic Synechocystis sp. PCC 6803. Biotechnol Biofuel. 2013;6:106.
Qiao J, Wang J, Chen L, Tian X, Huang S, Ren X, Zhang W. Quantitative iTRAQ LC–MS/MS proteomics reveals metabolic responses to biofuel ethanol in cyanobacterial Synechocystis sp. PCC 6803. J Proteome Res. 2012;11:5286–300.
Tian X, Chen L, Wang J, Qiao J, Zhang W. Quantitative proteomics reveals dynamic responses of Synechocystis sp. PCC 6803 to next-generation biofuel butanol. J Proteom. 2013;78:326–45.
Liu J, Chen L, Wang J, Qiao J, Zhang W. Proteomic analysis reveals resistance mechanism against biofuel hexane in Synechocystis sp. PCC 6803. Biotechnol Biofuel. 2012;5:68.
Qiao J, Huang S, Te R, Wang J, Chen L, Zhang W. Integrated proteomic and transcriptomic analysis reveals novel genes and regulatory mechanisms involved in salt stress responses in Synechocystis sp. PCC 6803. Appl Microbiol Biotechnol. 2013;97:8253–64.
Huang S, Chen L, Te R, Qiao J, Wang J, Zhang W. Complementary iTRAQ proteomics and RNA-seq transcriptomics reveal multiple levels of regulation in response to nitrogen starvation in Synechocystis sp. PCC 6803. Mol Biosyst. 2013;9:2565–74.
Chen L, Wu L, Wang J, Zhang W. Butanol tolerance regulated by a two-component response regulator Slr 1037 in photosynthetic Synechocystis sp. PCC 6803. Biotechnol Biofuel. 2014;7:89.
Alper H, Moxley J, Nevoigt E, Fink GR, Stephanopoulos G. Engineering yeast transcription machinery for improved ethanol tolerance and production. Science. 2006;314:1565.
Song Z, Chen L, Wang J, Lu Y, Jiang W, Zhang W. A transcriptional regulator Sll0794 regulates tolerance to biofuel ethanol in photosynthetic Synechocystis sp. PCC 6803. Mol Cell Proteom. 2014;13:3519–32.
Kaczmarzyk D, Anfelt J, Sarnegrim A, Hudson EP. Overexpression of sigma factor SigB improves temperature and butanol tolerance of Synechocystis sp. PCC6803. J Biotechnol. 2014;182–183:54–60.
Kang Z, Zhang C, Zhang J, Jin P, Du G, Chen J. Small RNA regulators in bacteria: powerful tools for metabolic engineering and synthetic biology. Appl Microbiol Biotechnol. 2014;98:3413–24.
Gaida SM, Al-Hinai MA, Indurthi DC, Nicolaou SA, Papoutsakis ET. Synthetic tolerance: three noncoding small RNAs, DsrA, ArcZ and RprA, acting supra-additively against acid stress. Nucleic Acids Res. 2013;41:8726–37.
Hernandez-Prieto MA, Schon V, Georg J, Barreira L, Varela J, Hess WR, Futschik ME. Iron deprivation in Synechocystis: inference of pathways, non-coding RNAs, and regulatory elements from comprehensive expression profiling. G3. 2012;2:1475–95.
Xu J, Li CX, Li YS, Lv JY, Ma Y, Shao TT, Xu LD, Wang YY, Du L, Zhang YP, et al. MiRNA-miRNA synergistic network: construction via co-regulating functional modules and disease miRNA topological features. Nucleic Acids Res. 2011;39:825–36.
Rediger A, Geissen R, Steuten B, Heilmann B, Wagner R, Axmann IM. 6S RNA—an old issue became blue-green. Microbiology. 2012;158:2480–91.
Tous C, Vega-Palas MA, Vioque A. Conditional expression of RNase P in the cyanobacterium Synechocystis sp. PCC6803 allows detection of precursor RNAs. Insight in the in vivo maturation pathway of transfer and other stable RNAs. J Biol Chem. 2001;276:29059–66.
Li B, Ruotti V, Stewart RM, Thomson JA, Dewey CN. RNA-Seq gene expression estimation with read mapping uncertainty. Bioinformatics. 2010;26:493–500.
Voss B, Georg J, Schon V, Ude S, Hess WR. Biocomputational prediction of non-coding RNAs in model cyanobacteria. BMC Genom. 2009;10:123.
Marraffini LA, Sontheimer EJ. CRISPR interference limits horizontal gene transfer in Staphylococci by targeting DNA. Science. 2008;322:1843–5.
DeFraia C, Slotkin RK. Analysis of retrotransposon activity in plants. Methods Mol Biol. 2014;1112:195–210.
Kopf M, Hess WR. Regulatory RNAs in photosynthetic cyanobacteria. FEMS Microbiol Rev. 2015;39:301–15.
Burge SW, Daub J, Eberhardt R, Tate J, Barquist L, Nawrocki EP, Eddy SR, Gardner PP, Bateman A. Rfam 11.0: 10 years of RNA families. Nucleic Acids Res. 2013;41:D226–32.
Livny J, Teonadi H, Livny M, Waldor MK. High-throughput, kingdom-wide prediction and annotation of bacterial non-coding RNAs. PLoS ONE. 2008;3:e3197.
Lorenz R, Bernhart SH, Siederdissen CH, Tafer H, Flamm C, Stadler PF, Hofacker IL. ViennaRNA package 2.0. Algorithms Mol Biol. 2011;6:26.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11:R106.
McDermott JE, Diamond DL, Corley C, Rasmussen AL, Katze MG, Waters KM. Topological analysis of protein co-abundance networks identifies novel host targets important for HCV infection and pathogenesis. BMC Syst Biol. 2012;6:28.
Pei G, Chen L, Wang J, Qiao J, Zhang W. Protein network signatures associated with exogenous biofuels treatments in cyanobacterium Synechocystis sp. PCC 6803. Front Bioeng Biotechnol. 2014;2(48):48.
Lennen RM, Kruziki MA, Kumar K, Zinkel RA, Burnum KE, Lipton MS, Hoover SW, Ranatunga DR, Wittkopp TM, Marner WD. Membrane stresses induced by overproduction of free fatty acids in Escherichia coli. Appl Environ Microbiol. 2011;77:8114–28.
Anfelt J, Hallström B, Nielsen J, Uhlén M, Hudson EP. Using transcriptomics to improve butanol tolerance of Synechocystis sp. strain PCC 6803. Appl Environ Microbiol. 2013;79:7419–27.
Isken S, de Bont JA. Bacteria tolerant to organic solvents. Extremophiles. 1998;2:229–38.
Sikkema J, de Bont JA, Poolman B. Mechanisms of membrane toxicity of hydrocarbons. Microbiol Rev. 1995;59:201–22.
Antoni D, Zverlov VV, Schwarz WH. Biofuels from microbes. Appl Microbiol Biotechnol. 2007;77:23–35.
Wang Y, Chen L, Zhang W. Proteomic and metabolomic analyses reveal metabolic responses to 3-hydroxypropionic acid synthesized internally in cyanobacterium Synechocystis sp. PCC 6803. Biotechnol Biofuel. 2016;9:209.
Patel RK, Jain M. NGS QC toolkit: a toolkit for quality control of next generation sequencing data. PLoS ONE. 2012;7:e30619.
Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9.
Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010;26:841–2.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009;10:57–63.
Siguier P, Perochon J, Lestrade L, Mahillon J, Chandler M. ISfinder: the reference centre for bacterial insertion sequences. Nucleic Acids Res. 2006;34:D32–6.
Delcher AL, Bratke KA, Powers EC, Salzberg SL. Identifying bacterial genes and endosymbiont DNA with Glimmer. Bioinformatics. 2007;23:673–9.
Suzek BE, Ermolaeva MD, Schreiber M, Salzberg SL. A probabilistic method for identifying start codons in bacterial genomes. Bioinformatics. 2001;17:1123–30.
Kingsford CL, Ayanbule K, Salzberg SL. Rapid, accurate, computational discovery of Rho-independent transcription terminators illuminates their relationship to DNA uptake. Genome Biol. 2007;8:R22.
Hu Y, Yan C, Hsu CH, Chen QR, Niu K, Komatsoulis GA, Meerzaman D. OmicCircos: a simple-to-use R package for the circular visualization of multidimensional omics data. Cancer Inform. 2014;13:13–20.
Wright PR, Georg J, Mann M, Sorescu DA, Richter AS, Lott S, Kleinkauf R, Hess WR, Backofen R. CopraRNA and IntaRNA: predicting small RNA targets, networks and interaction domains. Nucleic Acids Res. 2014;42:W119–23.
Li Y, Xu J, Chen H, Bai J, Li S, Zhao Z, Shao T, Jiang T, Ren H, Kang C, et al. Comprehensive analysis of the functional microRNA-mRNA regulatory network identifies miRNA signatures associated with glioma malignant progression. Nucleic Acids Res. 2013;41:e203.
Sakurai I, Stazic D, Eisenhut M, Vuorio E, Steglich C, Hess WR, Aro EM. Positive regulation of psbA gene expression by cis-encoded antisense RNAs in Synechocystis sp. PCC 6803. Plant Physiol. 2012;160:1000–10.
Shannon P, Markiel A, Ozier O, Baliga N, Wang J, Ramage D, Amin N, Schwikowski B, Ideker T. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13:2498–504.
Wang Y, Sun T, Gao X, Shi M, Wu L, Chen L, Zhang W. Biosynthesis of platform chemical 3-hydroxypropionic acid (3-HP) directly from CO2 in cyanobacterium Synechocystis sp. PCC 6803. Metab Eng. 2016;34:60–70.
Marbouty M, Mazouni K, Saguez C, Cassier-Chauvat C, Chauvat F. Characterization of the Synechocystis strain PCC 6803 penicillin-binding proteins and cytokinetic proteins FtsQ and FtsW and their network of interactions with ZipN. J Bacteriol. 2009;191:5123–33.
GP, LC, and WZ conceived the study. GP carried out the sRNA bioinformatics, RT-PCR, qRT-PCR, sRNA overexpression strain construction, flow cytometry, and phenotypic analysis. TS helped with sRNA overexpression strain construction. GP, SC, LC, and WZ drafted the manuscript. All authors read and approved the final manuscript.
We would like to thank Dr. Matthias Kopf of the University of Freiburg for his core R script for sRNA data display and Mr. Zixi Chen in Tianjin University for help with flow cytometric analysis.
The raw small RNA sequence data of Synechocystis are deposited in the SRA database of NCBI under accession number SRP073279. All relevant supporting data are enclosed as additional supporting files.
This study does not contain any studies with human participants performed by any of the authors.
This research was supported by grants from the National Science Foundation of China (No. 31470217 and No. 21621004), the National Basic Research Program of China ("973" program, Project No. 2014CB745101), and the National High-Tech R&D Program ("863" program, Project No. 2012AA02A707).
Laboratory of Synthetic Microbiology, School of Chemical Engineering & Technology, Tianjin University, Tianjin, 300072, People's Republic of China
Guangsheng Pei, Tao Sun, Lei Chen & Weiwen Zhang
Key Laboratory of Systems Bioengineering, Ministry of Education of China, Tianjin, 300072, People's Republic of China
Collaborative Innovation Center of Chemical Science and Engineering, Tianjin, People's Republic of China
Guangsheng Pei, Tao Sun, Shuo Chen, Lei Chen & Weiwen Zhang
Center for Biosafety Research and Strategy, Tianjin University, Tianjin, People's Republic of China
Weiwen Zhang
Guangsheng Pei
Tao Sun
Shuo Chen
Lei Chen
Correspondence to Weiwen Zhang.
Additional file 1: Table S1. sRNA mapping statistics of different samples.
Additional file 2: Figure S1. Genome-wide visualization of all sRNA mapping data in the chromosome of Synechocystis sp. PCC 6803. The figure is divided into two regions; the upper region is further divided into three parts corresponding to samples collected at 24, 48 and 72 h. Each part provides detailed visualization of sRNA mapping depth for each nucleotide, and each sample is distinguished by a different color. The lower region is further divided into ten parts, each showing detailed sRNA and gene information. From top to bottom: (1) all identified sRNAs in this study; (2, 3) all identified sRNAs on the positive and negative strands in Study A—Mitschke et al. [9]; (4, 5) all identified sRNAs on the positive and negative strands in Study B—Kopf et al. [10]; (6, 7) all identified sRNAs on the positive and negative strands in Study C—Xu et al. [11]; (8) gene names and annotations in the NCBI database for Synechocystis sp. PCC 6803; (9) open reading frame predictions for Synechocystis sp. PCC 6803; (10) characteristic sequences identified in the Synechocystis sp. PCC 6803 genome, including genome repeat regions, insert sequences, ribosomal binding sites and Rho-independent transcription terminators.
Additional file 3: Table S2. Identification, quantitative and differential expression analysis of sRNAs in Synechocystis.
Additional file 4: Table S3. Detailed comparison and classification of trans-encoded sRNAs in Synechocystis.
Additional file 5: Figure S2. Detailed visualization of sRNA mapping data of 133 trans-encoded sRNAs in Synechocystis. Detailed description is the same as Additional file 2: Figure S1.
Additional file 6: Figure S3. Experimental verification of the top abundant sRNAs and determination of transcriptional orientation by two-step RT-PCR. "+" denotes the orientation of the sRNA on the positive genome strand, "−" denotes the orientation of the sRNA on the negative genome strand. The bold and underlined names indicate the determined orientation of an sRNA, and "rc" denotes the reverse complement direction of the known sRNAs.
Additional file 7: Figure S4. Secondary structures predicted for abundant sRNAs. Color bar represents base-pair probabilities within sRNAs.
Additional file 8: Figure S5. Visualization of trans-encoded sRNAs regulatory network in Synechocystis. Detailed description is the same as Fig. 5.
Additional file 9: Table S4. sRNA-related primers for qRT-PCR analysis and two-step RT-PCR combined with qRT-PCR results.
Additional file 10: Figure S6. Correlation between qRT-PCR and sRNA-seq analyses for selected genes. For sRNA-seq (horizontal coordinate), values represent log2 fold change of sRNA under stress conditions compared to WT. For qRT-PCR (vertical coordinate), values represent the mean log2 fold changes in sRNA of three technical and three biological replicates under stress conditions compared to WT. The error bar represents the standard deviation of three replicates. Sample names and Pearson correlation coefficients are indicated at the right lower corner of each plot.
Additional file 11: Table S5. sRNA-related primers used for mutant construction.
Additional file 12: Figure S7. RT-PCR verification for biofuel responsive sRNA mutant construction. The upper portion is the validation for the target sRNAs, while the lower part is an internal control of 16S rRNA.
Additional file 13: Figure S8. Two-step RT-PCR validation for Nc117 overexpression and suppression strains. The upper portion is the target sRNA validation; the lower portion is 16S rRNA used as an internal control.
Additional file 14: Table S6. Target information for Nc117 based on CopraRNA and integrated transcriptomic analysis.
Additional file 15: Figure S9. Flow-cytometric analysis of WT, pJA2-nc33−, pJA2-nc65+, pJA2-nc85+ and pJA2-nc117+ strains. Forward scatter (FSC) is related to cell size, and the FL3 channel with the 670/LP filter is related to chlorophyll fluorescence. The y-axis was normalized according to pixel count. (A) FSC histogram of cells under BG11; (B) FL3 histogram of cells under BG11; (C) FL3 histogram of cells under BG11 with 2.0% (v/v) ethanol; (D) FL3 histogram of cells under BG11 with 0.25% (v/v) butanol.
Additional file 16: Figure S10. Genome-wide visualization of all sRNA mapping data in pSYSM of Synechocystis. Detailed description is the same as Additional file 2: Figure S1.
Additional file 17: Figure S11. Genome-wide visualization of all sRNA mapping data in pSYSA of Synechocystis. Detailed description is the same as Additional file 2: Figure S1.
Additional file 18: Figure S12. Genome-wide visualization of all sRNA mapping data in pSYSG of Synechocystis. Detailed description is the same as Additional file 2: Figure S1.
Additional file 19: Figure S13. Genome-wide visualization of all sRNA mapping data in pSYSX of Synechocystis. Detailed description is the same as Additional file 2: Figure S1.
Additional file 20: Table S7. Functional enrichment analysis of the predicted gene targets of trans-encoded sRNAs.
Additional file 21: Figure S14. Schematic diagram of two-step RT-PCR for sRNA orienting.
Additional file 22: Figure S15. Schematic diagram for construction of sRNA overexpression and suppression strains.
Pei, G., Sun, T., Chen, S. et al. Systematic and functional identification of small non-coding RNAs associated with exogenous biofuel stress in cyanobacterium Synechocystis sp. PCC 6803. Biotechnol Biofuels 10, 57 (2017). https://doi.org/10.1186/s13068-017-0743-y
Gödel's incompleteness theorems
Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's Entscheidungsproblem is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem.
Formal systems: completeness, consistency, and effective axiomatization
The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. Particularly in the context of first-order logic, formal systems are also called formal theories. In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability within these systems, rather than about "provability" in an informal sense.
There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties.
Effective axiomatization
A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is recursively enumerable (Franzén 2005, p. 112).
This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC).
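To make "recursively enumerable set of theorems" concrete, here is a toy effectively axiomatized system — Hofstadter's MIU string-rewriting system, our illustrative choice rather than anything from the text — together with a program that enumerates its theorems breadth-first:

```python
from collections import deque

# The MIU system: single axiom "MI" and four rewriting rules. A breadth-first
# search enumerates every theorem eventually, which is exactly what it means
# for the set of theorems to be recursively enumerable.

def miu_successors(s):
    """All strings derivable from s by one rule application."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: xUUy -> xy
    return out

def enumerate_theorems(limit):
    """Enumerate the first `limit` theorems of MIU in breadth-first order."""
    seen, queue, theorems = {"MI"}, deque(["MI"]), []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        for t in sorted(miu_successors(s)):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems
```

Note that such an enumerator only lists theorems; deciding that a given string (famously "MU") is *not* a theorem requires a separate argument, which hints at the gap between enumerating theorems and deciding them.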
The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems.
Completeness
A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24). This is the notion relevant for Gödel's first incompleteness theorem. It is not to be confused with semantic completeness, which means that the set of axioms proves all the semantic tautologies of the given language. In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is semantically complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone.
Thinkers such as Hilbert believed that it was just a matter of time until an axiomatization of mathematics was found that would allow one to either prove or disprove (by proving its negation) each and every mathematical formula.
A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) can not be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue.
The theory of first-order Peano arithmetic seems to be consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus, by the first incompleteness theorem, Peano arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete.
Consistency
A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and inconsistent otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction.
Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then Vκ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model.
If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent.
Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory.
Systems which contain arithmetic
The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems.
The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory.
The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication.
Dan Willard (2001) has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories).
Conflicting goals
In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers (Smith 2007, p. 2). In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems.
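The principle of explosion can be made concrete with a short derivation: from any pair of contradictory premises, an arbitrary statement Q follows.

```latex
\begin{align*}
1.\;& P          && \text{premise} \\
2.\;& \neg P     && \text{premise} \\
3.\;& P \lor Q   && \text{from 1 by disjunction introduction} \\
4.\;& Q          && \text{from 2 and 3 by disjunctive syllogism}
\end{align*}
```

Since Q here is arbitrary, an inconsistent system proves every sentence of its language.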
The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here, ZFC + "there exists an inaccessible cardinal" cannot be proved consistent from within itself. It is also not complete, as illustrated by the continuum hypothesis, which is unresolvable[1] in ZFC + "there exists an inaccessible cardinal".
The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized.
First incompleteness theorem
See also: Proof sketch for Gödel's first incompleteness theorem
Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by J. Barkley Rosser (1936) using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated.
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2020)
The unprovable statement GF referred to by the theorem is often referred to as "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence.
Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F' that contains the whole of F plus GF as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F', and thus F' also cannot be complete. In this case, GF is indeed a theorem in F', because it is an axiom. Because GF states only that it is not provable in F, no contradiction is presented by its provability within F'. However, because the incompleteness theorem applies to F', there will be a new Gödel statement GF' for F', showing that F' is also incomplete. GF' will differ from GF in that GF' will refer to F', rather than F.
Syntactic form of the Gödel sentence
The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be GF itself. In this way, the Gödel sentence GF indirectly states its own unprovability within F (Smith 2007, p. 135).
To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete.
Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation (Smith 2007, p. 141). As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level $\Pi _{1}^{0}$ of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables (Franzén 2005, p. 71).
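Schematically, the two forms described in this paragraph can be written as follows, where P stands for the primitive recursive property and p for the polynomial, neither of which is constructed here:

```latex
G_F \;\equiv\; \forall x \, \neg P(x)
\qquad\text{and, via the MRDP theorem,}\qquad
G_F \;\leftrightarrow\; \forall x_1 \cdots \forall x_k \; p(x_1, \ldots, x_k) \neq 0
```

Both forms have only universal quantifiers in front of a decidable matrix, which is what places the Gödel sentence at level $\Pi _{1}^{0}$ of the arithmetical hierarchy.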
Truth of the Gödel sentence
The first incompleteness theorem shows that the Gödel sentence GF of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true (Smoryński 1977, p. 825; also see Franzén 2005, pp. 28–33). For this reason, the sentence GF is often said to be "true but unprovable." (Raatikainen 2020). However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication Con(F)→GF, where Con(F) is a canonical sentence asserting the consistency of F (Smoryński 1977, p. 840, Kikuchi & Tanaka 1994, p. 403).
Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem (Franzén 2005, p. 135). That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number (Raatikainen 2020, Franzén 2005, p. 135).
Relationship with the liar paradox
Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.
It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski.
Extensions of Gödel's original result
Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions.
Gödel demonstrated the incompleteness of the system of Principia Mathematica, a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.
Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but ω-consistent. A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ~P(m), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser (1936) strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.
Second incompleteness theorem
For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F."
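One common way of writing the canonical consistency statement, using the article's notation #( ) for Gödel numbers, is the following, where ProofF(x, y) is the arithmetized relation "x codes a formal derivation in F of the formula with Gödel number y":

```latex
\operatorname{Cons}(F) \;\equiv\; \neg \exists x \, \operatorname{Proof}_F\bigl(x,\, \#(0{=}1)\bigr)
```

The particular choice of "0=1" as the target contradiction is conventional; any fixed refutable sentence yields an equivalent formula.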
Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized.
Second Incompleteness Theorem: "For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself." (Raatikainen 2020)
This theorem can also be written as follows:
"Assume F is a consistent formalized system which contains elementary arithmetic. Then $F\not \vdash {\text{Cons}}(F)$." (Raatikainen 2020) (Then F does not prove consistency of F)
This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself.
Expressing consistency
There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency.
Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.)
The Hilbert–Bernays conditions
The standard proof of the second incompleteness theorem assumes that the provability predicate ProvA(P) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the provability conditions say:
1. If F proves P, then F proves ProvA(#(P)).
2. F proves 1.; that is, F proves ProvA(#(P)) → ProvA(#(ProvA(#(P)))).
3. F proves ProvA(#(P → Q)) ∧ ProvA(#(P)) → ProvA(#(Q)) (analogue of modus ponens).
There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic.
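Granting the diagonal lemma, which supplies a sentence G with F ⊢ G ↔ ¬ProvA(#(G)), the standard route from the three conditions to the second incompleteness theorem can be sketched as follows (here Cons(F) abbreviates ¬ProvA(#(0=1))):

```latex
\begin{align*}
&F \vdash \mathrm{Prov}_A(\#(G)) \to \mathrm{Prov}_A(\#(\mathrm{Prov}_A(\#(G))))
  && \text{condition 2} \\
&F \vdash \mathrm{Prov}_A(\#(G)) \to \mathrm{Prov}_A(\#(\neg \mathrm{Prov}_A(\#(G))))
  && \text{diagonal lemma with conditions 1 and 3} \\
&F \vdash \mathrm{Prov}_A(\#(G)) \to \mathrm{Prov}_A(\#(0{=}1))
  && \text{conditions 1 and 3, from the two lines above} \\
&F \vdash \neg \mathrm{Prov}_A(\#(0{=}1)) \to \neg \mathrm{Prov}_A(\#(G))
  && \text{contraposition} \\
&F \vdash \mathrm{Cons}(F) \to G
  && \text{diagonal lemma again}
\end{align*}
```

If F proved Cons(F), the last line would make G a theorem of F, contradicting the first incompleteness theorem; hence F does not prove Cons(F).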
Implications for consistency proofs
Gödel's second incompleteness theorem also implies that a system F1 satisfying the technical conditions outlined above cannot prove the consistency of any system F2 that proves the consistency of F1. This is because such a system F1 can prove that if F2 proves the consistency of F1, then F1 is in fact consistent. For the claim that F1 is consistent has form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in F1". If F1 were in fact inconsistent, then F2 would prove for some n that n is the code of a contradiction in F1. But if F2 also proved that F1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in F1 to show that if F2 is consistent, then F1 is consistent. Since, by the second incompleteness theorem, F1 does not prove its consistency, it cannot prove the consistency of F2 either.
This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out (Franzén 2005, p. 106).
The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would actually provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F really is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F' that is in some sense less doubtful than F itself, for example weaker than F. For many naturally occurring theories F and F', such as F = Zermelo–Fraenkel set theory and F' = primitive recursive arithmetic, the consistency of F' is provable in F, and thus F' cannot prove the consistency of F by the above corollary of the second incompleteness theorem.
The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of some theory T, only doing so in a theory that T itself can prove to be consistent. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called ε0 is wellfounded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory.
Examples of undecidable statements
See also: List of statements independent of ZFC
There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem).
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense.
Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics.
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC.
In 1973, Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.[2]
Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
Undecidable statements provable in larger systems
These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano arithmetic.
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic.
Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system ATR0 codifying the principles acceptable based on a philosophy of mathematics called predicativism.[3] The related but more general graph minor theorem (2003) has consequences for computational complexity theory.
Relationship with computability
See also: Halting problem § Gödel's incompleteness theorems
The incompleteness theorem is closely related to several results about undecidable sets in recursion theory.
Stephen Cole Kleene (1943) presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: there is no computer program that can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by Shoenfield (1967, p. 132); Charlesworth (1981); and Hopcroft & Ullman (1979).
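Kleene's reduction can be sketched in code. The helpers below are toy stand-ins, not real implementations: halts_sentence merely formats a string, and the theorem stream is supplied by the caller, whereas in the real argument it would be the infinite effective enumeration of the system's theorems.

```python
def halts_sentence(p, x):
    # Hypothetical encoding of the arithmetic sentence "program p halts
    # on input x"; a real system would express this via quantifiers
    # over coded computation histories, not as a string.
    return f"Halts({p},{x})"

def decide_halting(theorems, p, x):
    """Kleene's reduction in miniature: if a sound, complete, effectively
    axiomatized arithmetic existed, searching its theorem enumeration
    would decide the halting problem -- which is impossible."""
    target = halts_sentence(p, x)
    negation = "~" + target
    for t in theorems:       # the enumeration never misses a theorem
        if t == target:
            return True      # the system proves "p halts on x"
        if t == negation:
            return False     # the system proves "p does not halt on x"
    # Completeness would guarantee that one verdict is always found;
    # only a finite demo stream can run out instead.
    raise RuntimeError("theorem stream exhausted")

# Demo with a fake, finite theorem stream:
print(decide_halting(["0=0", "~Halts(P7,3)"], "P7", 3))
```

Since the halting problem is undecidable, no system can have all three properties assumed by the search loop: this is the contradiction at the heart of Kleene's proof.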
Franzén (2005, p. 73) explains how Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof to Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p(x1, x2,...,xk) with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, if the system T is ω-consistent, then it will never prove that a particular polynomial equation has a solution when in fact there is no solution in the integers. Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T (Davis 2006, p. 416; Jones 1980).
Smoryński (1977, p. 842) shows how the existence of recursively inseparable sets can be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable (see Kleene 1967, p. 274).
Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include statements that are false in the standard model; these theories are known as ω-inconsistent.
Proof sketch for the first theorem
Main article: Proof sketch for Gödel's first incompleteness theorem
The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria:
1. Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements—such as their truth and falsehood—will be equivalent to determining whether their Gödel numbers have certain properties, and that properties of the statements can therefore be demonstrated by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system).
2. In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument).
3. Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be both complete and ω-consistent. Hence the original assumption that the proposed system met the criteria is false.
Arithmetization of syntax
The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.
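The regress-free self-reference that Gödel's technique achieves is the same trick behind a quine, a program whose output is its own source code: the program consists of a template together with an instruction to apply that template to a quotation of itself. The two executable lines below are a minimal Python example; the printed text reproduces exactly those two lines.

```python
# A quine: the template plays the role of Gödel's "statement form",
# and applying it to a quotation of itself is the diagonalization step.
template = 'template = %r\nprint(template %% template)'
print(template % template)
```

No line of the program literally contains the whole program; self-reference is achieved by substitution into a template, just as the Gödel sentence refers to itself only via its Gödel number.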
In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number:
• The word hello is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111.
• The logical statement x=y => y=x is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120.
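The ASCII scheme in the two examples above can be sketched directly. This concatenation encoding is only illustrative (Gödel's own numbering used products of prime powers), but it already has the one property that matters: the conversion is mechanical in both directions.

```python
def godel_number(s):
    """Concatenate each character's 3-digit ASCII code and read the
    result as a single natural number, as in the examples above."""
    return int("".join(f"{ord(c):03d}" for c in s))

def decode(n):
    """Mechanically invert the encoding: restore any dropped leading
    zeros, then split the digit string into 3-digit character codes."""
    digits = str(n)
    digits = "0" * (-len(digits) % 3) + digits
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

print(godel_number("hello"))          # 104101108108111
print(decode(godel_number("hello")))  # hello
```

Any injective, mechanically invertible encoding would serve equally well for the arithmetization of syntax.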
In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or does not have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers that represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements.
Construction of a statement about "provability"
Having shown that in principle the system can indirectly make statements about provability by analyzing properties of the numbers representing statements, it is now possible to show how to create a statement that actually does this.
A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, $F(n)$ is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2 × 3 = 6".
Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).
The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists:
Bew(y) = ∃ x (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).
The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language.
An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
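The existential character of Bew can be illustrated with a toy proof relation. In the sketch below the names `bew` and `is_proof_of` are invented for illustration, and the relation "x proves y iff y = 2x" is a deliberately trivial stand-in for the real arithmetical relation; the point is only that Bew(y) is a search for a witness x:

```python
from itertools import count

def bew(y, is_proof_of, bound=None):
    """Search for an x such that is_proof_of(x, y) holds.

    Like the real Bew, this is an existential statement: the search halts
    as soon as a witness is found, but without a bound it may run forever
    when no proof exists (provability is only semi-decidable).
    """
    candidates = range(bound) if bound is not None else count()
    return any(is_proof_of(x, y) for x in candidates)

# Toy proof relation: x "proves" y exactly when y == 2 * x.
double = lambda x, y: 2 * x == y
```

This also shows why a proof of p yields a proof of Bew(G(p)): exhibiting the Gödel number of a concrete proof is exactly exhibiting a witness for the existential quantifier.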
Diagonalization
The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves
p ↔ F(G(p)).
By letting F be the negation of Bew(x), we obtain the theorem
p ↔ ~Bew(G(p))
and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula.
The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English:
", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.
This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
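The self-reproducing structure of the English sentence can be checked mechanically. In this purely illustrative sketch, `transform` performs the transformation the sentence describes, and applying it to the fragment quoted inside the sentence returns the sentence itself:

```python
fragment = ', when preceded by itself in quotes, is unprovable.'

def transform(f):
    # "Precede f by itself in quotes": quote f, then append f.
    return '"' + f + '"' + f

sentence = transform(fragment)

# The fragment the sentence quotes is the text between its first two quotes.
quoted = sentence[1:sentence.index('"', 1)]

# Performing the stated transformation on the quoted fragment
# reproduces the whole sentence: indirect self-reference, with no
# circular definition anywhere.
assert transform(quoted) == sentence
```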
Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section.
If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.
If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.
Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system.
In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system:
• If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable.
• If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent.
If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as an axiom. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(x) is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.
Proof via Berry's paradox
George Boolos (1989) sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke (Boolos 1998, p. 383). Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic (Boolos 1998, p. 388).
Computer verified proofs
See also: Automated theorem proving
The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers.
Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm (Shankar 1994), by Russell O'Connor in 2003 using Coq (O'Connor 2005) and by John Harrison in 2009 using HOL Light (Harrison 2009). A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle (Paulson 2014).
Proof sketch for the second theorem
See also: Hilbert–Bernays provability conditions
The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system S using a formal predicate P for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system S itself.
Let p stand for the undecidable sentence constructed above, and assume, for the purpose of obtaining a contradiction, that the consistency of the system S can be proved from within S itself. This amounts to proving the statement "System S is consistent". Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c itself can be proved in S.
Observe that if we can prove that the system S is consistent (i.e., the hypothesis of c), then we have proved that p is not provable. But this is a contradiction, since by the first incompleteness theorem this sentence (i.e., the consequent of c, "p is not provable") is exactly what was constructed to be unprovable. Note that this is why the first incompleteness theorem must be formalized in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the first theorem holds within S. So we cannot prove that the system S is consistent, and the statement of the second incompleteness theorem follows.
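Writing Con(S) for the statement "System S is consistent", the argument can be summarized in a few lines (a sketch only, suppressing the details of the formalization):

```latex
\begin{align*}
S &\vdash \mathrm{Con}(S) \rightarrow \lnot\mathrm{Bew}(G(p))
   && \text{(the formalized first incompleteness theorem, i.e. } c\text{)}\\
S &\vdash p \leftrightarrow \lnot\mathrm{Bew}(G(p))
   && \text{(diagonal lemma)}\\
S &\vdash \mathrm{Con}(S)
   && \text{(assumption for contradiction)}\\
S &\vdash p
   && \text{(modus ponens, then the equivalence above)}
\end{align*}
```

But the first incompleteness theorem says that a consistent S does not prove p, so the assumption that S proves Con(S) must fail.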
Discussion and implications
The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles.
Consequences for logicism and Hilbert's second problem
The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic (Hellman 1981, pp. 451–468). Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first order logic have this problem.
Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem").
Minds and machines
Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.
Hilary Putnam (1960) suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, it may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine.
Avi Wigderson (2010) has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us."
Douglas Hofstadter, in his books Gödel, Escher, Bach and I Am a Strange Loop, cites Gödel's theorems as an example of what he calls a strange loop, a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure which gives rise to consciousness, the sense of "I", in the human mind. While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its own unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from the way in which the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modelling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following:
"Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false." (I Am a Strange Loop.)[4]
In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power.
"There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside." (I Am a Strange Loop.)[4]
Paraconsistent logic
Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest (1984, 2006) argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system (Priest 2006, p. 47). Stewart Shapiro (2002) gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism.
Appeals to the incompleteness theorems in other fields
Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén (2005); Panu Raatikainen (2005); Alan Sokal and Jean Bricmont (1999); and Ophelia Benson and Jeremy Stangroom (2006). Sokal & Bricmont (1999), and Stangroom & Benson (2006, p. 10), for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal & Bricmont (1999, p. 187) criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.).
History
After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem (Dawson 1997, p. 63). At the time, theories of the natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of the natural numbers alone were known as "arithmetic".
Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound (Zach 2007, p. 418; Zach 2003, p. 33).
In the course of his research, Gödel discovered that although a sentence which asserts its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's indefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week.
Announcement
The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively (Dawson 1996, p. 69). The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying,
For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. ... The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers: We must know. We shall know!
This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face (Dawson 1996, p. 72).
Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930 (Dawson 1996, p. 70). Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930.
Gödel's paper was published in the Monatshefte in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the Monatshefte; the prompt acceptance of the first paper was one reason he changed his plans (van Heijenoort 1967, page 328, footnote 68a).
Generalization and acceptance
Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency, if the Gödel sentence was changed in an appropriate way. These developments left the incompleteness theorems in essentially their modern form.
Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent.
The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.
Finsler
Paul Finsler (1926) used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability, and had only a superficial resemblance to Gödel's work (van Heijenoort 1967, p. 328). Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization (Dawson 1996, p. 89). Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career.
Zermelo
In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument (Dawson 1996, p. 76). In October, Gödel replied with a 10-page letter (Dawson 1996, p. 76; Grattan-Guinness 2005, pp. 512–513), where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system (which is not true in general by Tarski's undefinability theorem). But Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor" (Grattan-Guinness 2005, p. 513). Gödel decided that to pursue the matter further was pointless, and Carnap agreed (Dawson 1996, p. 77). Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories.
Wittgenstein
Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1953 Remarks on the Foundations of Mathematics, in particular one section sometimes called the "notorious paragraph" where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas.
Multiple commentators have read Wittgenstein as misunderstanding Gödel (Rodych 2003), although Juliet Floyd and Hilary Putnam (2000), as well as Graham Priest (2004) have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews on Wittgenstein's remarks, all of which were extremely negative (Berto 2009, p. 208). The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? He intentionally utters trivially nonsensical statements" (Wang 1996, p. 179), and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems writing:
It is clear from the passages you cite that Wittgenstein did not understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact it is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics). (Wang 1996, p. 179)
Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Floyd & Putnam (2000) argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as actually saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. Rodych (2003) argues that their interpretation of Wittgenstein is not historically justified. Berto (2009) explores the relationship between Wittgenstein's writing and theories of paraconsistent logic.
See also
• Chaitin's incompleteness theorem
• Gödel, Escher, Bach
• Gödel machine
• Gödel's completeness theorem
• Gödel's speed-up theorem
• Löb's Theorem
• Minds, Machines and Gödel
• Non-standard model of arithmetic
• Proof theory
• Provability logic
• Quining
• Tarski's undefinability theorem
• Theory of everything#Gödel's incompleteness theorem
• Typographical Number Theory
References
Citations
1. in technical terms: independent; see Continuum hypothesis#Independence from ZFC
2. Shelah, Saharon (1974). "Infinite Abelian groups, Whitehead problem and some constructions". Israel Journal of Mathematics. 18 (3): 243–256. doi:10.1007/BF02757281. MR 0357114.
3. S. G. Simpson, Subsystems of Second-Order Arithmetic (2009). Perspectives in Logic, ISBN 9780521884396.
4. Hofstadter, Douglas R. (2007) [2003]. "Chapter 12. On Downward Causality". I Am a Strange Loop. Basic Books. ISBN 978-0-465-03078-1.
Articles by Gödel
• Kurt Gödel, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", Monatshefte für Mathematik und Physik, v. 38 n. 1, pp. 173–198. doi:10.1007/BF01700692
• —, 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I", in Solomon Feferman, ed., 1986. Kurt Gödel Collected works, Vol. I. Oxford University Press, pp. 144–195. ISBN 978-0195147209. The original German with a facing English translation, preceded by an introductory note by Stephen Cole Kleene.
• —, 1951, "Some basic theorems on the foundations of mathematics and their implications", in Solomon Feferman, ed., 1995. Kurt Gödel Collected works, Vol. III, Oxford University Press, pp. 304–323. ISBN 978-0195147223.
Translations, during his lifetime, of Gödel's paper into English
None of the following agree in all translated words and in typography. The typography is a serious matter, because Gödel expressly wished to emphasize "those metamathematical notions that had been defined in their usual sense before . . ." (van Heijenoort 1967, p. 595). Three translations exist. Of the first, John Dawson states that "The Meltzer translation was seriously deficient and received a devastating review in the Journal of Symbolic Logic"; Gödel also complained about Braithwaite's commentary (Dawson 1997, p. 216). "Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology The Undecidable . . . he found the translation 'not quite so good' as he had expected . . . [but because of time constraints he] agreed to its publication" (ibid). (In a footnote Dawson states that "he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints" (ibid).) Dawson states that "The translation that Gödel favored was that by Jean van Heijenoort" (ibid). For the serious student another version exists as a set of lecture notes recorded by Stephen Kleene and J. B. Rosser "during lectures given by Gödel at the Institute for Advanced Study during the spring of 1934" (cf. commentary by Davis 1965, p. 39 and beginning on p. 41); this version is titled "On Undecidable Propositions of Formal Mathematical Systems". In their order of publication:
• B. Meltzer (translation) and R. B. Braithwaite (Introduction), 1962. On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Dover Publications, New York (Dover edition 1992), ISBN 0-486-66980-7 (pbk.) This contains a useful translation of Gödel's German abbreviations on pp. 33–34. As noted above, the typography, translation and commentary are suspect. Unfortunately, this translation was reprinted with all its suspect content by
• Stephen Hawking editor, 2005. God Created the Integers: The Mathematical Breakthroughs That Changed History, Running Press, Philadelphia, ISBN 0-7624-1922-9. Gödel's paper appears starting on p. 1097, with Hawking's commentary starting on p. 1089.
• Martin Davis editor, 1965. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable problems and Computable Functions, Raven Press, New York, no ISBN. Gödel's paper begins on page 5, preceded by one page of commentary.
• Jean van Heijenoort editor, 1967, 3rd edition 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge Mass., ISBN 0-674-32449-8 (pbk). van Heijenoort did the translation. He states that "Professor Gödel approved the translation, which in many places was accommodated to his wishes." (p. 595). Gödel's paper begins on p. 595; van Heijenoort's commentary begins on p. 592.
• Martin Davis editor, 1965, ibid. "On Undecidable Propositions of Formal Mathematical Systems." A copy with Gödel's corrections of errata and Gödel's added notes begins on page 41, preceded by two pages of Davis's commentary. Until Davis included this in his volume this lecture existed only as mimeographed notes.
Articles by others
• George Boolos, 1989, "A New Proof of the Gödel Incompleteness Theorem", Notices of the American Mathematical Society, v. 36, pp. 388–390 and p. 676, reprinted in Boolos, George (1998). Logic, logic, and logic. Harvard University Press. ISBN 0-674-53766-1.
• Bernd Buldt, 2014, "The Scope of Gödel's First Incompleteness Theorem Archived 2016-03-06 at the Wayback Machine", Logica Universalis, v. 8, pp. 499–552. doi:10.1007/s11787-014-0107-3
• Charlesworth, Arthur (1981). "A Proof of Godel's Theorem in Terms of Computer Programs". Mathematics Magazine. 54 (3): 109–121. doi:10.2307/2689794. JSTOR 2689794.
• Davis, Martin (1965). The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Raven Press. ISBN 978-0-911216-01-1.
• Davis, Martin (2006). "The Incompleteness Theorem" (PDF). Notices of the AMS. 53 (4): 414.
• Grattan-Guinness, Ivor, ed. (2005). Landmark Writings in Western Mathematics 1640-1940. Elsevier. ISBN 9780444508713.
• van Heijenoort, Jean (1967). "Gödel's Theorem". In Edwards, Paul (ed.). Encyclopedia of Philosophy. Vol. 3. Macmillan. pp. 348–357.
• Hellman, Geoffrey (1981). "How to Gödel a Frege-Russell: Gödel's Incompleteness Theorems and Logicism". Noûs. 15 (4 - Special Issue on Philosophy of Mathematics): 451–468. doi:10.2307/2214847. ISSN 0029-4624. JSTOR 2214847.
• David Hilbert, 1900, "Mathematical Problems." English translation of a lecture delivered before the International Congress of Mathematicians at Paris, containing Hilbert's statement of his Second Problem.
• Martin Hirzel, 2000, "On formally undecidable propositions of Principia Mathematica and related systems I.." An English translation of Gödel's paper. Archived from the original. Sept. 16, 2004.
• Kikuchi, Makoto; Tanaka, Kazuyuki (July 1994). "On Formalization of Model-Theoretic Proofs of Gödel's Theorems". Notre Dame Journal of Formal Logic. 35 (3): 403–412. doi:10.1305/ndjfl/1040511346. MR 1326122.
• Stephen Cole Kleene, 1943, "Recursive predicates and quantifiers", reprinted from Transactions of the American Mathematical Society, v. 53 n. 1, pp. 41–73 in Martin Davis 1965, The Undecidable (loc. cit.) pp. 255–287.
• Raatikainen, Panu (2020). "Gödel's Incompleteness Theorems". Stanford Encyclopedia of Philosophy. Retrieved November 7, 2022.
• Raatikainen, Panu (2005). "On the philosophical relevance of Gödel's incompleteness theorems". Revue Internationale de Philosophie. 59 (4): 513–534. doi:10.3917/rip.234.0513. S2CID 52083793.
• John Barkley Rosser, 1936, "Extensions of some theorems of Gödel and Church", reprinted from the Journal of Symbolic Logic, v. 1 (1936) pp. 87–91, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 230–235.
• —, 1939, "An Informal Exposition of proofs of Gödel's Theorem and Church's Theorem", Reprinted from the Journal of Symbolic Logic, v. 4 (1939) pp. 53–60, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 223–230
• Smoryński, C. (1977). "The incompleteness theorems". In Jon Barwise (ed.). Handbook of mathematical logic. Amsterdam: North-Holland Pub. Co. pp. 821–866. ISBN 978-0-444-86388-1.
• Dan E. Willard, 2001, "Self-Verifying Axiom Systems, the Incompleteness Theorem and Related Reflection Principles", Journal of Symbolic Logic, v. 66 n. 2, pp. 536–596. doi:10.2307/2695030 JSTOR 2695030
• Zach, Richard (2003). "The Practice of Finitism: Epsilon Calculus and Consistency Proofs in Hilbert's Program" (PDF). Synthese. Springer Science and Business Media LLC. 137 (1): 211–259. arXiv:math/0102189. doi:10.1023/a:1026247421383. ISSN 0039-7857. S2CID 16657040.
• Zach, Richard (2005). "Kurt Gödel, paper on the incompleteness theorems (1931)". In Grattan-Guinness, Ivor (ed.). Landmark Writings in Western Mathematics 1640-1940. Elsevier. pp. 917–925. doi:10.1016/b978-044450871-3/50152-2. ISBN 9780444508713.
Books about the theorems
• Francesco Berto. There's Something about Gödel: The Complete Guide to the Incompleteness Theorem John Wiley and Sons. 2010.
• Norbert Domeisen, 1990. Logik der Antinomien. Bern: Peter Lang. 142 S. 1990. ISBN 3-261-04214-1. Zbl 0724.03003.
• Franzén, Torkel (2005). Gödel's theorem : an incomplete guide to its use and abuse. Wellesley, MA: A K Peters. ISBN 1-56881-238-8. MR 2146326.
• Douglas Hofstadter, 1979. Gödel, Escher, Bach: An Eternal Golden Braid. Vintage Books. ISBN 0-465-02685-0. 1999 reprint: ISBN 0-465-02656-7. MR530196
• —, 2007. I Am a Strange Loop. Basic Books. ISBN 978-0-465-03078-1. ISBN 0-465-03078-5. MR2360307
• Stanley Jaki, OSB, 2005. The drama of the quantities. Real View Books.
• Per Lindström, 1997. Aspects of Incompleteness, Lecture Notes in Logic v. 10.
• J.R. Lucas, FBA, 1970. The Freedom of the Will. Clarendon Press, Oxford, 1970.
• Adrian William Moore, 2022. Gödel´s Theorem: A Very Short Introduction. Oxford University Press, Oxford, 2022.
• Ernest Nagel, James Roy Newman, Douglas Hofstadter, 2002 (1958). Gödel's Proof, revised ed. ISBN 0-8147-5816-9. MR1871678
• Rudy Rucker, 1995 (1982). Infinity and the Mind: The Science and Philosophy of the Infinite. Princeton Univ. Press. MR658492
• Smith, Peter (2007). An introduction to Gödel's Theorems. Cambridge, U.K.: Cambridge University Press. ISBN 978-0-521-67453-9. MR 2384958.
• Shankar, N. (1994). Metamathematics, machines, and Gödel's proof. Cambridge tracts in theoretical computer science. Vol. 38. Cambridge: Cambridge University Press. ISBN 0-521-58533-3.
• Raymond Smullyan, 1987. Forever Undecided ISBN 0192801414 - puzzles based on undecidability in formal systems
• —, 1991. Godel's Incompleteness Theorems. Oxford Univ. Press.
• —, 1994. Diagonalization and Self-Reference. Oxford Univ. Press. MR1318913
• —, 2013. The Godelian Puzzle Book: Puzzles, Paradoxes and Proofs. Courier Corporation. ISBN 978-0-486-49705-1.
• Wang, Hao (1996). A Logical Journey: From Gödel to Philosophy. MIT Press. ISBN 0-262-23189-1. MR1433803
Miscellaneous references
• Berto, Francesco (2009). "The Gödel Paradox and Wittgenstein's Reasons". Philosophia Mathematica. III (17).
• Dawson, John W. Jr. (1996). Logical dilemmas: The life and work of Kurt Gödel. Taylor & Francis. ISBN 978-1-56881-025-6.
• Dawson, John W. Jr. (1997). Logical dilemmas: The life and work of Kurt Gödel. Wellesley, Massachusetts: A. K. Peters. ISBN 978-1-56881-256-4. OCLC 36104240.
• Rebecca Goldstein, 2005, Incompleteness: the Proof and Paradox of Kurt Gödel, W. W. Norton & Company. ISBN 0-393-05169-2
• Floyd, Juliet; Putnam, Hilary (2000). "A Note on Wittgenstein's "Notorious Paragraph" about the Godel Theorem". The Journal of Philosophy. JSTOR. 97 (11): 624–632. doi:10.2307/2678455. ISSN 0022-362X. JSTOR 2678455.
• Harrison, J. (2009). Handbook of practical logic and automated reasoning. Cambridge: Cambridge University Press. ISBN 978-0521899574.
• David Hilbert and Paul Bernays, Grundlagen der Mathematik, Springer-Verlag.
• Hopcroft, John E.; Ullman, Jeffrey (1979). Introduction to Automata Theory, Languages, and Computation. Reading, Mass.: Addison-Wesley. ISBN 0-201-02988-X.
• Jones, James P. (1980). "Undecidable Diophantine Equations" (PDF). Bulletin of the American Mathematical Society. 3 (2): 859–862. doi:10.1090/S0273-0979-1980-14832-6.
• Kleene, Stephen Cole (1967). Mathematical Logic. Reprinted by Dover, 2002. ISBN 0-486-42533-9
• O'Connor, Russell (2005). "Essential Incompleteness of Arithmetic Verified by Coq". Theorem Proving in Higher Order Logics. Lecture Notes in Computer Science. Vol. 3603. pp. 245–260. arXiv:cs/0505034. doi:10.1007/11541868_16. ISBN 978-3-540-28372-0. S2CID 15610367.
• Paulson, Lawrence (2014). "A machine-assisted proof of Gödel's incompleteness theorems for the theory of hereditarily finite sets". Review of Symbolic Logic. 7 (3): 484–498. doi:10.1017/S1755020314000112. S2CID 13913592.
• Graham Priest, 1984, "Logic of Paradox Revisited", Journal of Philosophical Logic, v. 13,` n. 2, pp. 153–179.
• —, 2004, Wittgenstein's Remarks on Gödel's Theorem in Max Kölbel, ed., Wittgenstein's lasting significance, Psychology Press, pp. 207–227.
• Priest, Graham (2006). In Contradiction: A Study of the Transconsistent. Oxford University Press. ISBN 0-19-926329-9.
• Hilary Putnam, 1960, Minds and Machines in Sidney Hook, ed., Dimensions of Mind: A Symposium. New York University Press. Reprinted in Anderson, A. R., ed., 1964. Minds and Machines. Prentice-Hall: 77.
• Wolfgang Rautenberg, 2010, A Concise Introduction to Mathematical Logic, 3rd. ed., Springer, ISBN 978-1-4419-1220-6
• Rodych, Victor (2003). "Misunderstanding Gödel: New Arguments about Wittgenstein and New Remarks by Wittgenstein". Dialectica. 57 (3): 279–313. doi:10.1111/j.1746-8361.2003.tb00272.x. doi:10.1111/j.1746-8361.2003.tb00272.x
• Stewart Shapiro, 2002, "Incompleteness and Inconsistency", Mind, v. 111, pp 817–32. doi:10.1093/mind/111.444.817
• Sokal, Alan; Bricmont, Jean (1999). Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science. Picador. ISBN 0-312-20407-8.
• Shoenfield, Joseph R. (1967). Mathematical logic. Natick, Mass.: Association for Symbolic Logic (published 2001). ISBN 978-1-56881-135-2.
• Stangroom, Jeremy; Benson, Ophelia (2006). Why Truth Matters. Continuum. ISBN 0-8264-9528-1.
• George Tourlakis, Lectures in Logic and Set Theory, Volume 1, Mathematical Logic, Cambridge University Press, 2003. ISBN 978-0-521-75373-9
• Avi Wigderson, 2010, "The Gödel Phenomena in Mathematics: A Modern View", in Kurt Gödel and the Foundations of Mathematics: Horizons of Truth, Cambridge University Press.
• Hao Wang, 1996, A Logical Journey: From Gödel to Philosophy, The MIT Press, Cambridge MA, ISBN 0-262-23189-1.
• Zach, Richard (2007). "Hilbert's Program Then and Now". In Jacquette, Dale (ed.). Philosophy of logic. Handbook of the Philosophy of Science. Vol. 5. Amsterdam: Elsevier. pp. 411–447. arXiv:math/0508572. doi:10.1016/b978-044451541-4/50014-2. ISBN 978-0-444-51541-4. OCLC 162131413. S2CID 291599.
External links
• Godel's Incompleteness Theorems on In Our Time at the BBC
• "Kurt Gödel" entry by Juliette Kennedy in the Stanford Encyclopedia of Philosophy, July 5, 2011.
• "Gödel's Incompleteness Theorems" entry by Panu Raatikainen in the Stanford Encyclopedia of Philosophy, November 11, 2013.
• Paraconsistent Logic § Arithmetic and Gödel's Theorem entry in the Stanford Encyclopedia of Philosophy.
• MacTutor biographies:
• Kurt Gödel. Archived 2005-10-13 at the Wayback Machine
• Gerhard Gentzen.
• What is Mathematics:Gödel's Theorem and Around by Karlis Podnieks. An online free book.
• World's shortest explanation of Gödel's theorem using a printing machine as an example.
• October 2011 RadioLab episode about/including Gödel's Incompleteness theorem
• "Gödel incompleteness theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• How Gödel's Proof Works by Natalie Wolchover, Quanta Magazine, July 14, 2020.
• and Gödel's incompleteness theorems formalised in Isabelle/HOL
`
Metalogic and metamathematics
• Cantor's theorem
• Entscheidungsproblem
• Church–Turing thesis
• Consistency
• Effective method
• Foundations of mathematics
• of geometry
• Gödel's completeness theorem
• Gödel's incompleteness theorems
• Soundness
• Completeness
• Decidability
• Interpretation
• Löwenheim–Skolem theorem
• Metatheorem
• Satisfiability
• Independence
• Type–token distinction
• Use–mention distinction
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Authority control: National
• France
• BnF data
• Germany
• Israel
• United States
• Latvia
• Czech Republic
| Wikipedia |
Classical and Quantum Gravity
ISSN / EISSN : 0264-9381 / 1361-6382
Published by: IOP Publishing (10.1088)
Total articles ≅ 14,035
Latest articles in this journal
Hawking radiation of scalar particles and fermions from squashed Kaluza-Klein black holes based on a generalized uncertainty principle
Ken Matsuno
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac4c05
We study the Hawking radiation from the five-dimensional charged static squashed Kaluza-Klein black hole by the tunneling of charged scalar particles and charged fermions. In contrast to the previous studies of Hawking radiation from squashed Kaluza-Klein black holes, we consider the phenomenological quantum gravity effects predicted by the generalized uncertainty principle with the minimal measurable length. We derive corrections of the Hawking temperature to general relativity, which are related to the energy of the emitted particle, the size of the compact extra dimension, the charge of the black hole and the existence of the minimal length in the squashed Kaluza-Klein geometry. We obtain some known Hawking temperatures in five and four-dimensional black hole spacetimes by taking limits in the modified temperature. We show that the generalized uncertainty principle may slow down the increase of the Hawking temperature due to the radiation, which may lead to a thermodynamically stable remnant of the order of the Planck mass after the evaporation of the squashed Kaluza-Klein black hole. We also find that the sparsity of the Hawking radiation modified by the generalized uncertainty principle may become infinite when the mass of the squashed Kaluza-Klein black hole approaches its remnant mass.
MICROSCOPE Mission scenario, ground segment and data processing
Manuel Rodrigues, Gilles Metris, Judicael Bedouet, Joel Bergé, Patrice Carle, Ratana Chhun, Bruno Christophe, Bernard Foulon, Pierre-Yves Guidotti, Stephanie Lala, et al.
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac4b9a
Testing the Weak Equivalence Principle (WEP) to a precision of 10⁻¹⁵ requires a quantity of data that gives enough confidence in the final result: ideally, the longer the measurement the better the rejection of the statistical noise. The science sessions had a duration of 120 orbits maximum and were regularly repeated and spaced out to accommodate operational constraints, but also in order to repeat the experiment in different conditions and to allow time to calibrate the instrument. Several science sessions were performed over the 2.5-year duration of the experiment. This paper aims to describe how the data have been produced on the basis of a mission scenario and a data flow process, driven by a tradeoff between the science objectives and the operational constraints. The mission was led by the Centre National d'Études Spatiales (CNES), which provided the satellite, the launch and the ground operations. The ground segment was distributed between CNES and the Office National d'Études et de Recherches Aérospatiales (ONERA). CNES provided the raw data through the Centre d'Expertise de Compensation de Traînée (CECT: drag-free expertise centre). The science was led by the Observatoire de la Côte d'Azur (OCA) and ONERA was in charge of the data process. The latter also provided the instrument and the Science Mission Centre of MICROSCOPE (CMSM).
MICROSCOPE: systematic errors
, Pierre Touboul, Gilles Metris, Alain Robert, Oceane Dhuicque,
, Yves Andre, Damien Boulanger, Ratana Chhun, Bruno Christophe, et al.
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac49f6
The MICROSCOPE mission aims to test the Weak Equivalence Principle (WEP) in orbit with an unprecedented precision of 10⁻¹⁵ on the Eötvös parameter thanks to electrostatic accelerometers on board a drag-free microsatellite. The precision of the test is determined by statistical errors, due to the environment and instrument noises, and by systematic errors, to which this paper is devoted. Systematic error sources can be divided into three categories: external perturbations, such as the residual atmospheric drag or the gravity gradient at the satellite altitude; perturbations linked to the satellite design, such as thermal or magnetic perturbations; and perturbations from the instrument internal sources. Each systematic error is evaluated or bounded in order to set a reliable upper bound on the WEP parameter estimation uncertainty.
Asymptotic quasinormal frequencies of different spin fields in d-dimensional spherically-symmetric black holes
Chun-Hung Chen, Hing Tong Cho, Anna Chrysostomou, Alan Cornell
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac4955
While Hod's conjecture is demonstrably restrictive, the link he observed between black hole (BH) area quantisation and the large overtone (n) limit of quasinormal frequencies (QNFs) motivated intense scrutiny of the regime, from which an improved understanding of asymptotic quasinormal frequencies (aQNFs) emerged. A further outcome was the development of the "monodromy technique", which exploits an anti-Stokes line analysis to extract physical solutions from the complex plane. Here, we use the monodromy technique to validate extant aQNF expressions for perturbations of integer spin, and provide new results for the aQNFs of half-integer spins within higher-dimensional Schwarzschild, Reissner–Nordström, and Schwarzschild (anti-)de Sitter BH spacetimes. Bar the Schwarzschild anti-de Sitter case, the spin-1/2 aQNFs are purely imaginary; the spin-3/2 aQNFs resemble spin-1/2 aQNFs in Schwarzschild and Schwarzschild de Sitter BHs, but match the gravitational perturbations for most others. Particularly for Schwarzschild, extremal Reissner–Nordström, and several Schwarzschild de Sitter cases, the application of n → ∞ generally fixes Re(ω) and allows for the unbounded growth of Im(ω) in fixed quantities.
Influence of cosmological expansion in local experiments
Felix Spengler, Alessio Belenchia, Dennis Rätzel, Daniel Braun
Whether the cosmological expansion can influence the local dynamics, below the galaxy clusters scale, has been the subject of intense investigations in the past three decades. In this work, we consider McVittie and Kottler spacetimes, embedding a spherical object in a FLRW spacetime. We calculate the influence of the cosmological expansion on the frequency shift of a resonator and estimate its effect on the exchange of light signals between local observers. In passing, we also clarify some of the statements made in the literature.
Failure of the split property in gravity and the information paradox
Suvrat Raju
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac482b
In an ordinary quantum field theory, the "split property" implies that the state of the system can be specified independently on a bounded subregion of a Cauchy slice and its complement. This property does not hold for theories of gravity, where observables near the boundary of the Cauchy slice uniquely fix the state on the entire slice. The original formulation of the information paradox explicitly assumed the split property and we follow this assumption to isolate the precise error in Hawking's argument. A similar assumption also underpins the monogamy paradox of Mathur and AMPS. Finally, the same assumption is used to support the common idea that the entanglement entropy of the region outside a black hole should follow a Page curve. It is for this reason that computations of the Page curve have been performed only in nonstandard theories of gravity, which include a nongravitational bath and massive gravitons. The fine-grained entropy at I⁺ does not obey a Page curve for an evaporating black hole in standard theories of gravity, but we discuss possibilities for coarse graining that might lead to a Page curve in such cases.
Assessing the compact-binary merger candidates reported by the MBTA pipeline in the LIGO-Virgo O3 run: probability of astrophysical origin, classification, and associated uncertainties
Dimitri Estevez, Nicolas Andres, Maria Assiduo, Florian Aubin, Roberto Chierici, Francesca Faedi, Elisa Nitoglia, Gianluca Maria Guidi, Vincent Juste, Frederique Marion, et al.
Classical and Quantum Gravity; https://doi.org/10.1088/1361-6382/ac482a
We describe the method used by the Multi-Band Template Analysis (MBTA) pipeline to compute the probability of astrophysical origin, p_astro, of compact binary coalescence candidates in LIGO-Virgo data from the third observing run (O3). The calculation is performed as part of the offline analysis and is used to characterize candidate events, along with their source classification. The technical details and the implementation are described, as well as the results from the first half of the third observing run (O3a) published in GWTC-2.1. The performance of the method is assessed on injections of simulated gravitational-wave signals in O3a data using a parameterization of p_astro as a function of the MBTA combined ranking statistic. Possible sources of statistical and systematic uncertainties are discussed, and their effect on p_astro quantified.
Angular correlations of causally-coherent primordial quantum perturbations
Craig Hogan, Stephan Meyer
We consider the hypothesis that nonlocal, omnidirectional, causally-coherent quantum entanglement of inflationary horizons may account for some well-known measured anomalies of Cosmic Microwave Background (CMB) anisotropy on large angular scales. It is shown that causal coherence can lead to less cosmic variance in the large-angle power spectrum C_ℓ of primordial curvature perturbations on spherical horizons than predicted by the standard model of locality in effective field theory, and to new symmetries of the angular correlation function C(Θ). Causal considerations are used to construct an approximate analytic model for C(Θ) on angular scales larger than a few degrees. Allowing for uncertainties from the unmeasured intrinsic dipole and from Galactic foreground subtraction, causally-coherent constraints are shown to be consistent with measured CMB correlations on large angular scales. Reduced cosmic variance will enable powerful tests of the hypothesis with better foreground subtraction and higher fidelity measurements on large angular scales.
Effective potential of scalar-tensor gravity with quartic self-interaction of scalar field
Boris N Latosh, Andrej B Arbuzov, Andrej Nikitenko
One-loop effective potential of scalar-tensor gravity with a quartic scalar field self-interaction is evaluated up to first post-Minkowskian order. The potential develops an instability in the strong field regime which is expected from an effective theory. Depending on model parameters the instability region can be exponentially far in a strong field region. Possible applications of the model for inflationary scenarios are highlighted. It is shown that the model can enter the slow-roll regime with a certain set of parameters.
Analogue gravitational field from nonlinear fluid dynamics
Satadal Datta, Uwe R Fischer
The dynamics of sound in a fluid is intrinsically nonlinear. We derive the consequences of this fact for the analogue gravitational field experienced by sound waves, by first describing generally how the nonlinearity of the equation for phase fluctuations back-reacts on the definition of the background providing the effective space-time metric. Subsequently, we use the analytical tool of Riemann invariants in one-dimensional motion to derive source terms of the effective gravitational field stemming from nonlinearity. Finally, we show that the consequences of nonlinearity we derive can be observed with Bose-Einstein condensates in the ultracold gas laboratory. | CommonCrawl |
Model-based extended quaternion Kalman filter to inertial orientation tracking of arbitrary kinematic chains
Agnieszka Szczęsna & Przemysław Pruszowski
Inertial orientation tracking is still an area of active research, especially in the context of outdoor, real-time, human motion capture. Existing systems either propose loosely coupled tracking approaches, where each segment is considered independently and the resulting drawbacks are accepted, or tightly coupled solutions that are limited to a fixed chain with few segments. Such solutions have no flexibility to change the skeleton structure, are dedicated to a specific set of joints, and have high computational complexity. This paper describes the proposal of a new model-based extended quaternion Kalman filter that allows for estimation of orientation based on outputs from the inertial measurement unit sensors. The filter considers interdependencies resulting from the construction of the kinematic chain, so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine the degrees of freedom at the connections between segments of the model. For validation, the motion of a three-segment single-link pendulum captured by an optical motion capture system is used. The next step in the research will be to use this method for inertial motion capture with a human skeleton model.
Inertial motion capture systems are based on a body sensor network, where inertial measurement unit (IMU) sensors are attached to each major segment that should be tracked (Kulbacki et al. 2015; Roetenberg et al. 2009). The model (skeleton) of the tracked object is built from rigid-body segments (defined as bones) connected by joints. The mapping of IMU orientations to specific segments of the body model allows for motion capture of the subject. The orientations of the sensors are typically estimated by fusing gyroscope rate (\(\omega\)), linear acceleration (a), and magnetic field (m) measurements with respect to the global reference frame (usually aligned with Earth gravity and local magnetic north). With knowledge of all the orientations of the segments over time, the overall pose can be tracked. In the literature, many methods for orientation estimation based on the output signals of a single IMU can be found, e.g., Kalman filters (Sabatini 2011; Madgwick et al. 2011) and complementary filters (Mahony et al. 2008). However, this loosely coupled approach, where each segment is treated independently, has numerous drawbacks. Joint constraints, such as those found in the human anatomy, cannot be included easily into the tracking. The correlations between the segments are lost during estimation (Miezal et al. 2013). Furthermore, tightly coupled systems, where all parameters and measurements are considered jointly in one estimation problem, have previously been shown to provide better performance (Young 2010).
In Young et al. (2010), propagation of linear accelerations through the segment hierarchy was used to improve the identification of the gravity components under high acceleration motions. That solution is based on a very simple complementary filter (Szczęsna et al. 2016).
Next, we can find solutions based on the Kalman filter for a specific set of segments, obtained by predetermining the degrees of freedom (DOF) at the connections between the segments of the model. Such solutions are based on the Denavit–Hartenberg convention and use Euler angles as their orientation representation. Examples are the extended Kalman filter for lower body parts (hip, knee, ankle) (Lin and Kulić 2012) and the unscented Kalman filter with a similar process and measurement model for shoulder and elbow joint angle tracking (El-Gohary and McNames 2012).
In Šlajpah et al. (2014), the authors propose extended Kalman filters for each segment using 18-element state vectors. This algorithm uses a quaternion representation of orientation. The solution is limited to human walking only.
A different concept is presented in Vikas and Crane (2016) where the joint angle is estimated based on more than one sensor, placed on the segment. The system is based on vestibular dynamic inclination measurements and estimates only 2 Euler angles.
Multibody systems based on IMU sensors can also estimate and track other parameters like positions, velocities, and accelerations (Torres-Moreno et al. 2016).
The errors reported in the aforementioned publications (Young et al. 2010; Lin and Kulić 2012; El-Gohary and McNames 2012; Šlajpah et al. 2014) are not directly comparable, because the experiments had different conditions, concerned various movements, and computed errors in inconsistent ways; nevertheless, all of the average angle errors were about 4°–7°. The reference values were obtained from different sources: simulated, mechanical, or optical systems, or values calculated based on a depth camera.
This paper proposes a new model-based extended quaternion Kalman filter (MBEQKF) that allows estimation of orientation on the basis of outputs from the IMU sensors. This filter reflects interdependencies from the construction of the kinematic chain so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine any degree of freedom (DOF) at the connections between the segments of the model. Our aspiration for future work is to use our novel method for inertial motion capture.
Model-based extended quaternion Kalman filter (MBEQKF)
The aim was to simplify the structure of the filter while maintaining corrections resulting from the kinematic relationships in the model; another important element was versatility. The proposed solution does not predetermine any DOF at the connections between the segments of the model, as solutions based on the Denavit–Hartenberg convention do (El-Gohary and McNames 2012; Lin and Kulić 2012).
As the basis of the implementation, a quaternion extended Kalman filter with a direct state was used. The unit quaternion \(q = (q_0, [q_1, q_2, q_3])^T\,\epsilon\,\mathbb {H}\) represents the body orientation, where \(\mathbb {H}\) is the four-dimensional non-commutative division algebra over the real numbers. The orientation quaternion is the MBEQKF filter state vector \(x = q\). The angular velocity is considered to be a control input (as in Angelo 2006). Because the angular velocity is not part of the state vector, no dynamic model, e.g., a model of human limb motion in terms of a first-order Gauss–Markov stochastic process (Yun and Bachmann 2006), is needed in the development of the filter equations. The state vector has a smaller dimension, and it is not necessary to include the first and second derivatives of the angular velocity in the state vector to obtain an optimal model (Sabatini 2011; Foxlin 1996), as in the filter described in Šlajpah et al. (2014), where the state vector has 18 elements and the measurement vector has 12. The non-linear measurement equations are defined by rotating the reference vectors (the Earth's magnetic field and gravity) using the estimated orientation quaternion. The Newton–Euler kinematic equations of motion are also used to model the acceleration measurements in the kinematic chain.
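As a rough illustration of the nonlinear measurement model described above, the sketch below rotates the gravity and magnetic reference vectors into the body frame using the estimated orientation. This is our own minimal sketch, not the paper's implementation: the reference values `G_REF` and `M_REF` are illustrative, and it ignores the Newton–Euler acceleration terms propagated along the chain (i.e., it assumes a static segment).

```python
import numpy as np

# Illustrative reference vectors in the global frame N (assumed values, not from the paper)
G_REF = np.array([0.0, 0.0, 9.81])   # gravity
M_REF = np.array([0.5, 0.0, 0.87])   # local magnetic field direction

def quat_mult(p, q):
    # Hamilton product p ⊗ q, quaternions stored as (q0, q1, q2, q3) = (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def quat_conj(q):
    # Conjugate quaternion (q0, -q1, -q2, -q3)
    return np.array([q[0], -q[1], -q[2], -q[3]])

def measurement_model(q):
    # h(x): rotate each global reference vector into the body frame with q,
    # giving the predicted accelerometer and magnetometer readings.
    def to_body(v):
        r = quat_mult(quat_mult(quat_conj(q), np.array([0.0, *v])), q)
        return r[1:]
    return np.concatenate([to_body(G_REF), to_body(M_REF)])
```

For the identity orientation, the predicted readings are just the reference vectors themselves; the EKF innovation is the difference between the actual sensor readings and this prediction.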
The MBEQKF filter process model is kinematic Eq. (1) (Chou 1992), which describes the relation between temporal derivatives of an orientation represented by unit quaternion q and angular velocity of the body frame (\(^{B}{\omega }\)) relative to the global frame N:
$$\begin{aligned} \frac{d}{dt}q(t)=\frac{1}{2}q(t)\otimes (0, ^{B}{\omega })^T \end{aligned}$$
where \(\otimes\) stands for quaternion multiplication. The left superscript indicates the coordinate frame in which a vector is expressed (measured): N for the Earth-fixed coordinate system, B for the frame attached to the moving body.
Multiplying two unit quaternions gives a unit quaternion representing the composition of the two rotations. Hence, we can describe the orientation now and at the next moment in time, assuming a constant angular velocity:
$$\begin{aligned} q(0) = q_0,\quad q(1) = q_k \otimes q_o,\quad q(t) = (q_k)^t \otimes q_o \end{aligned}$$
By using the Euler formula for quaternions we can write the quaternion as \(q_k = \exp(\frac{\theta }{2}n)\), where \(\theta\) is the rotation angle and n is the unit-vector axis of rotation; the product \(\theta n\) corresponds to the instantaneous angular velocity integrated over one time step.
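The integration step above can be sketched numerically. The following is a minimal Python illustration, not the authors' implementation (which was in Matlab); the function names are ours, and body-frame rates are composed by right multiplication, matching the \(M_R\)-based discretization used later in the paper.

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_exp_step(q, omega, dt):
    # One step of dq/dt = 0.5 * q (x) (0, omega): the rotation over dt is
    # exp((theta/2) n) with theta = |omega| dt and n the unit rotation axis.
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return q.copy()
    n = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * n))
    return quat_mult(q, dq)   # body-frame increment enters on the right
```

Because the increment dq is itself a unit quaternion, a single step preserves the norm up to rounding error.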
The quaternion produced by the process must be normalized. We used the brute-force approach and normalize the quaternion after the measurement-update stage, which leads to a suboptimal algorithm (Sabatini 2011).
By using the orientation quaternion, every vector can be transformed from the global frame N to the body coordinate system B:
$$\begin{aligned} ^{B}{v}= q^{*}\otimes (0, ^{N}{v})^T\otimes q \end{aligned}$$
and from body to global
$$\begin{aligned} ^{N}{v}= q\otimes (0, ^{B}{v})^T\otimes q^{*} \end{aligned}$$
where \(q^*\) is a conjugate quaternion:
$$\begin{aligned} q^*=(q_0, [-q_1, -q_2, -q_3])^T \end{aligned}$$
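The two frame transformations and the conjugate can be sketched directly; the following is a minimal Python illustration of Eqs. (3) and (4) with our own function names (the vector part of the triple product is the rotated vector).

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product, quaternions as [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    # q* = (q0, [-q1, -q2, -q3])
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate_to_body(q, v_n):
    # ^B v = q* (x) (0, ^N v) (x) q
    p = quat_mult(quat_mult(quat_conj(q), np.concatenate(([0.0], v_n))), q)
    return p[1:]

def rotate_to_global(q, v_b):
    # ^N v = q (x) (0, ^B v) (x) q*
    p = quat_mult(quat_mult(q, np.concatenate(([0.0], v_b))), quat_conj(q))
    return p[1:]
```

The two functions are inverses of each other, so a global-to-body-to-global round trip recovers the original vector.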
In practical implementations, orientation estimation is realised on digital systems. The discrete time index is denoted by the subscript k. The discretized a priori state-estimation equation of the orientation kinematics process corresponding to Eqs. (1) and (2) is as follows:
$$\begin{aligned} x^{-}_{k+1}=\varPhi _{k}x_{k}+n_{k}=\exp \left[ \frac{1}{2}M_{R}(^B{\omega }_k)\varDelta t\right] x_{k}+n_{k}. \end{aligned}$$
In this equation, \(x^{-}_{k}\) is the a priori state estimate at discrete time k, \(x^{-}_{k}=q_{k}\); \(M_{R}(^B{\omega }_k)\) denotes the matrix representation of right quaternion multiplication by the pure quaternion \((0, ^B{\omega }_k)^T\); and \(\varPhi\) is the state transition matrix. The components of the state vector are modelled as a random walk, where n is a zero-mean white-noise process with covariance matrix \(\sigma _{g}^{2}I\). The quaternion time evolution is a first-order approximation of the exact process (1).
As the gyroscope data are external inputs to the filter rather than measurements, gyroscope measurement noise enters the filter as process noise through a quaternion-dependent linear transformation (Sabatini 2011). The process noise covariance matrix \(Q_{k}\) is:
$$\begin{aligned} Q_{k}=(\varDelta t/2)^{2}\varXi _{k}(\sigma _{g}^{2}I_{3\times 3})\varXi _{k}^{T} \end{aligned}$$
where, for \(q_k=(a, [b,c,d])\), \(\varXi _k\) is defined as follows:
$$\begin{aligned} \varXi _k=\left[ \begin{array}{ccc} a&\quad-d&\quad c\\ d&\quad a&\quad -b\\ -c&\quad b&\quad a\\ -b&\quad -c&\quad -d\\ \end{array} \right] \end{aligned}$$
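The propagation of gyroscope noise into quaternion space can be sketched as follows; this is a minimal Python illustration with our own function names. Note that \(\varXi _k\) is a 4×3 matrix, so the identity inside the product acts on the three gyroscope axes.

```python
import numpy as np

def xi_matrix(q):
    # The 4x3 matrix Xi_k for q_k = (a, [b, c, d]).
    a, b, c, d = q
    return np.array([
        [ a, -d,  c],
        [ d,  a, -b],
        [-c,  b,  a],
        [-b, -c, -d],
    ])

def process_noise(q, sigma_g2, dt):
    # Q_k = (dt/2)^2 * Xi_k (sigma_g^2 I) Xi_k^T
    Xi = xi_matrix(q)
    return (dt / 2.0) ** 2 * sigma_g2 * (Xi @ Xi.T)
```

For a unit quaternion, \(\varXi _k \varXi _k^T\) has rank 3, which reflects the fact that gyroscope noise cannot perturb the quaternion norm direction at first order.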
The model of the tracked object is built from rigid-body segments connected by joints. Sensors are attached to segments with a constant offset vector from the centre of segment rotation. The defined model is a skeleton of the object. In our experiments we use a model of a 3-segment pendulum. Each segment (with an IMU) and joint has a local coordinate frame related to the coordinate frame of the sensor. Joints form a hierarchy, with the position of a child joint given by an offset from the parent joint centre. Resulting orientations are calculated in the world coordinate frame based on two reference vectors, the Earth's gravity g and magnetic north mg. Quantities marked with superscript j refer to the corresponding segment j.
The Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body and can be the base of the measurement model of acceleration in a kinematic chain (skeleton model). The modelled linear acceleration of the sensor is treated as the case of a rigid body rotating about a point fixed at the origin with angular velocity \(\omega\). Every point on this body has a radial linear acceleration:
$$\begin{aligned} l_r = (\omega \cdot o)\omega - o \left\| \omega \right\| ^2 \end{aligned}$$
where o is the offset of the point from the centre of rotation.
Every point on the rigid body also has a tangential acceleration:
$$\begin{aligned} l_t = \alpha \times o \end{aligned}$$
where \(\alpha\) is the angular acceleration calculated from the angular velocity as:
$$\begin{aligned} \alpha = \frac{\omega _{k+1} - \omega _{k-1}}{2\varDelta t} \end{aligned}$$
The angular acceleration is the derivative of the angular velocity and can be calculated, for example, by the first central difference approximation based on angular velocity samples.
The whole body is in a rotating frame with a linear acceleration \(l_f\); this is the linear acceleration inherited from the parent segment in the skeleton model. The resulting linear acceleration of a point under that assumption is therefore \(l = l_f + l_r + l_t\).
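The three contributions combine as a simple vector sum; a minimal Python sketch (function name ours, all quantities expressed in one common frame):

```python
import numpy as np

def point_acceleration(l_f, omega, alpha, o):
    # l = l_f + l_r + l_t for a point at offset o on a rigid body with
    # angular velocity omega, angular acceleration alpha, and a parent
    # (frame) linear acceleration l_f.
    l_r = np.dot(omega, o) * omega - o * np.dot(omega, omega)  # radial term
    l_t = np.cross(alpha, o)                                   # tangential term
    return l_f + l_r + l_t
```

For a pure rotation at constant rate, only the radial (centripetal) term survives and points from the offset towards the centre of rotation.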
In the model, every segment has a linear acceleration of the sensor \(a^{S,j}\) (10) and a linear acceleration of the joint \(a^{J,j}\) (11), which is passed to the next segment \(j+1\) as the linear acceleration of the parent. All linear accelerations passed between segments are expressed in the global coordinate frame, \(^{N}{a}^{J,j}_k\).
The model sensor acceleration is:
$$\begin{aligned} ^{B}{a}^{S,j}_{k} = {}^{B}{a}^{J, j-1}_{k} + {}^B{\omega }_{k}\times ({}^B{\omega }_{k} \times {}^B{o}^{S,j}_{k}) + {}^{B}{\alpha }_k\times {}^B{o}^{S,j}_k + {}^B{g}_{k} \end{aligned}$$
The model joint acceleration is:
$$\begin{aligned} ^B{a}_k^{J,j} = {}^{B}{a}^{J, j-1}_k + {}^B{\omega }_{k} \times ({}^B{\omega }_{k} \times {}^B{o}^{J,j}_k) + {}^{B}{\alpha }_k \times {}^B{o}^{J,j}_{k} \end{aligned}$$
$$\begin{aligned} ^{B}{a}^{J, j-1}_k &= q_k^{*}\otimes (0, ^{N}{a}^{J,j-1}_k)^T\otimes q_k \end{aligned}$$
$$\begin{aligned} ^B{g}_k& = q_k^{*}\otimes (0, ^{N}{g})^T\otimes q_k \end{aligned}$$
$$\begin{aligned} ^{N}{a}^{J, j}_k& = q_k\otimes (0, ^{B}{a}^{J,j}_k)^T\otimes q_k^{*} \end{aligned}$$
The offset vectors must also be transformed into the body coordinate frame:
$$\begin{aligned} ^{B}{o}_k = q_k^{*}\otimes (0, ^{N}{o}_k)^T\otimes q_k \end{aligned}$$
The MBEQKF filtering algorithm uses the process model (5) for predicting the behaviour of the system and a model of the sensor measurements (16) in order to produce the most accurate estimate of the system state. The resulting measurement model, based on the a priori estimate of the state vector, is of the form:
$$\begin{aligned} f(x^{-}_{k} = q_k)= \begin{bmatrix}^B{a}^{S,j}_k\\ q_k^{*}\otimes (0, ^{N}{mg})^T\otimes q_k \end{bmatrix}+\begin{bmatrix}n_{k}^{a}\\ n_{k}^{m} \end{bmatrix} \end{aligned}$$
where \(n_{k}^{a}\) and \(n_{k}^{m}\) are the accelerometer and magnetometer measurement noise with covariance matrices \(\sigma _{a}^{2}I\) and \(\sigma _{m}^{2}I\). The measurement noise covariance matrix V represents the level of confidence placed in the accuracy of the measurements:
$$\begin{aligned} V = \begin{bmatrix}\sigma _{a}^{2}I&\quad 0\\ 0&\quad\sigma _{m}^{2}I \end{bmatrix} \end{aligned}$$
Since the above output is non-linear, it is linearized by computing the Jacobian matrix,
$$\begin{aligned} H_{k}=\left. \frac{d}{dx_{k}}f({x_k})\right| _{x_{k}=x^{-}_{k}}. \end{aligned}$$
With the notation introduced above, our MBEQKF filter equations are summarised as follows:
the a priori state estimate is:
$$\begin{aligned} x^{-}_{k+1}=\varPhi _{k}x_{k} \end{aligned}$$
the a priori error covariance matrix is:
$$\begin{aligned} P^{-}_{k+1}=\varPhi _{k}P_{k}\varPhi _{k}^{T}+Q_{k} \end{aligned}$$
the Kalman gain is:
$$\begin{aligned} K_{k+1}=P^{-}_{k+1}H_{k+1}^{T}(H_{k+1}P^{-}_{k+1}H_{k+1}^{T}+V_{k+1})^{-1} \end{aligned}$$
the a posteriori state estimate is:
$$\begin{aligned} x_{k+1}=x^{-}_{k+1}+K_{k+1}[z_{k+1}-f(x^{-}_{k+1})] \end{aligned}$$
The proposed filter is an additive filter: it relaxes the quaternion normalization condition, treats the four components of the quaternion as independent parameters, and uses the addition operation. The resulting quaternion is then normalized.
the a posteriori error covariance matrix is:
$$\begin{aligned} P_{k+1}=P^{-}_{k+1}-K_{k+1}H_{k+1}P^{-}_{k+1}. \end{aligned}$$
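One full cycle of the equations above can be sketched as follows. This is a minimal Python illustration, not the authors' Matlab implementation: `f_pred` stands for the evaluated measurement model \(f(x^{-}_{k+1})\), `H` for the Jacobian (18), and the function name is ours.

```python
import numpy as np

def ekf_step(q, P, Phi, Q, H, V, z, f_pred):
    # One filter cycle: a priori prediction, Kalman gain, additive
    # quaternion correction, brute-force normalization, covariance update.
    q_pred = Phi @ q                           # a priori state estimate
    P_pred = Phi @ P @ Phi.T + Q               # a priori error covariance
    S = H @ P_pred @ H.T + V                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    q_new = q_pred + K @ (z - f_pred)          # additive update
    q_new = q_new / np.linalg.norm(q_new)      # re-normalize the quaternion
    P_new = P_pred - K @ H @ P_pred            # a posteriori covariance
    return q_new, P_new
```

The normalization after the additive update corresponds to the brute-force approach discussed earlier and is what makes the algorithm suboptimal.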
Experimental set-up
To test the proposed universal MBEQKF, a 3-segment single-link pendulum was built. As reference, data from an optical motion capture system (Vicon) were used. The experiments demonstrate that using body-model kinematic dependences in the orientation filter can improve the accuracy of an inertial motion capture system. Through a simple calibration procedure and permanently mounted sensors, we remove the effect of poor calibration factors on the orientation estimation.
The pendulum was built with three segments connected by movable single-link joints. An IMU sensor built at the Silesian University of Technology, Department of Automatic Control and Robotics, was fixed to each segment. The signal-to-noise coefficients of these IMU sensors, published in Jedrasiak et al. (2013), are as follows: accelerometer 43.2, magnetometer 767.9 and gyroscope 254.5. The IMU sensors are marked as IMU1, IMU2 and IMU3 (Figs. 1, 2). Markers, marked as R1, R2, W1, W2, W3, W4, W5, and W6, were also attached to the pendulum. The motion of the pendulum was mainly about one axis, but motion about the other axes was also measured (shaking and swinging the pendulum from side to side). This had no effect on the estimation results because the axes of motion are not aligned with the sensor axes.
The 3-segment pendulum with 3 IMU sensors marked as IMU1, IMU2 and IMU3 and markers for optical system marked as R1, R2, W1, W2, W3, W4, W5, and W6
The model of the pendulum, made up of 3 segments (S1, S2, S3) connected by single links, with the extorsion angles applied to segments S1, S2 and S3 during tests
The data were recorded via a USB connection between the sensors and a PC, using an application that captures the raw signals from the IMU sensors. The data were then processed by filters implemented in Matlab.
Recordings were captured for seven different scenarios (each repeated 3 times) using the Vicon system at a frequency of 100 Hz; the IMU sensors also worked at this frequency. The recordings had lengths from 9600 to 16228 samples. The recorded movement is characterized by different acceleration amplitudes: 4–15 \(\rm{m /s^2}\) for the low-acceleration dataset and 15–23 \(\rm{m /s^2}\) for the high-acceleration dataset. The optical system also enabled calibration of the sensors and calculation of the necessary distances. The scenarios relied on forcing motion (low or high swing) of a particular segment (S1—up, S2—middle, and S3—down) of the pendulum (Table 1). The initial extorsion angles of each segment are presented in Fig. 2. The scenario marked as Dynamic relied on repeatedly forcing the swing and recording until total damping of the pendulum. The data are available in our RepoIMU repositoryFootnote 1 (Szczęsna et al. 2016).
Table 1 Description of experiments data
Data synchronization and error calculation
Each experiment was recorded using the Vicon Nexus system with a sampling frequency of 100 Hz. To provide an informative comparison of orientation data streams with different reference frames, measured by separate timers at the same frequency, the data must be normalized. This procedure can be divided into two steps: normalization in the time domain (time synchronization) and transformation of the orientations to the same reference frame.
Transforming an orientation data stream from one reference frame to another is a simple geometric operation, a rotation. Only knowledge of the relationship between the two world reference frames, navigation and body, is required. As the reference body frame, the first body frame in the time domain was chosen.
Signals from the Vicon system and the IMU sensors were captured at the same frequency, so to synchronize the time domain we only needed to find the time offset \((\varDelta {t})\) between the two signals. A time window \(<-\varDelta t^{Max} , \varDelta t^{Max}>\) was chosen, where \(\varDelta t^{Max}\) is the maximal offset we expected \((-\varDelta t^{Max}< \varDelta t < \varDelta t^{Max})\). The distance between the two signals is calculated for each time offset in the window. Synchronization is performed on the \(^B\omega _{IMU}\) signal. The Vicon system does not measure the angular velocity of the body directly, so it must be calculated by the equation:
$$\begin{aligned} \omega _{Vicon}= 2 * {q}_{Vicon}^{-1} \otimes \frac{dq_{Vicon}}{dt} \end{aligned}$$
where \(q^{-1}\) is the inverse of q.
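Computing the angular velocity from a sampled quaternion stream can be sketched as follows; a minimal Python illustration (function name ours), with the quaternion derivative taken by central differences and the inverse replaced by the conjugate, which is valid for unit quaternions.

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product, quaternions as [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def angular_velocity_from_quats(qs, dt):
    # omega = 2 q^{-1} (x) dq/dt, evaluated at the interior samples.
    conj = np.array([1.0, -1.0, -1.0, -1.0])
    out = []
    for k in range(1, len(qs) - 1):
        dq = (qs[k + 1] - qs[k - 1]) / (2.0 * dt)   # central difference
        w = 2.0 * quat_mult(qs[k] * conj, dq)
        out.append(w[1:])   # vector part: body angular velocity
    return np.array(out)
```

For a smooth rotation the scalar part of the product is close to zero and the vector part approximates the body angular velocity with second-order accuracy in dt.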
Evaluation of the performance of the presented filter was done on the basis of the average deviation between the true and estimated orientations of the body (Gramkow 2001). Here we used the deviation index DI, corresponding to the geodesic distance on the hypersphere \(S^3\) between two quaternions: the filter estimate \(\hat{q}\) and the true rotation q from the Vicon system:
$$\begin{aligned} DI=2\arccos (| \hat{q} \cdot q |) \end{aligned}$$
All evaluations and comparisons of the orientation estimation algorithms are based on the deviation index averaged over the experiment time horizon.
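The deviation index is a one-liner once the product is read as the quaternion inner product; a minimal Python sketch under that interpretation (function name ours):

```python
import numpy as np

def deviation_index(q_est, q_ref):
    # DI = 2 * arccos(|<q_est, q_ref>|): geodesic distance on S^3.
    # The absolute value makes the index insensitive to the q / -q
    # sign ambiguity, since both represent the same rotation.
    d = abs(float(np.dot(q_est, q_ref)))
    return 2.0 * np.arccos(min(d, 1.0))   # clamp guards against rounding
```

Identical orientations give DI = 0, and a 90-degree difference about any axis gives DI = pi/2.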
Filter parameters
The filter parameters are as follows:
reference vectors: \(^{N}g=[0, 0, -9.81]^{T}\) and \(^{N}m=[\cos (\varphi ^{L}), 0, -\sin (\varphi ^{L})]^{T},\) where \(\varphi ^{L}\) is the geographical latitude angle. For the geographical position of the laboratory where the measurements were taken, \(\varphi ^{L}=66^{\circ}=1.1519\) rad;
parameters of noise: \(\sigma _g^2 = 0.0001\), \(\sigma _a^2 = 0.001\) and \(\sigma _m^2 = 0.000001\);
the initial state \(x_0\) (starting orientation quaternion) is computed by the QUEST algorithm (Shuster and Oh 1981) from the acceleration and magnetic field vectors of the first sample;
sampling interval \(\varDelta t= 0.01\);
state covariance matrix \(P_0 = I_{4x4}\).
We performed tests of the MBEQKF with kinematic dependences and of the same filter but without the kinematic equations in the measurement model (EQKF) (16). In each test the MBEQKF filter obtained better results than the EQKF (Fig. 3). The results for highly dynamic motion were always worse than those for the corresponding low-dynamic tests. It is well known that a factor that strongly influences the orientation measurement is the existence and magnitude of the external acceleration of the IMU sensor. One way to manage this is to level the influence of external linear acceleration by, for example, an adaptation mechanism (Pruszowski et al. 2015). The highest EQKF error in each test is always for the third segment, where the accelerations are highest. The MBEQKF filter, using the kinematic chain dependences, can overcome this factor. This can be seen in Fig. 4, where the maximum error is similar, but most of the time the MBEQKF error is near zero and does not grow over time, as it does for the EQKF. Larger errors are still caused by higher acceleration values but are lower than in filters without this mechanism (see Figs. 4, 5 for the same capture and segment).
The average error angle of the EQKF and MBEQKF filters, showing that the MBEQKF filter obtained better results for experiments with high and low accelerations
Error angle of the EQKF and MBEQKF filters in segment 3 (S3) for the Middle_Low capture. The MBEQKF filter better eliminates error growth over time by using kinematic dependences
Acceleration magnitude in segment 3 (S3) for Middle_Low capture
In Table 2, the average angle errors of all pendulum tests are presented. The average error of the MBEQKF filter is about 6°–7°, which is comparable with other solutions described in the literature. Fig. 6 presents the MBEQKF estimation converted to Euler angles, compared with the angles computed from the optical motion capture system (Vicon).
Table 2 Average error angle (rad) in each segment of pendulum
Result MBEQKF estimation of Euler angles (Roll, Pitch, Yaw) compared to angles captured by optical motion capture system (Vicon). Presented results are for segment S2 in Dynamic capture
The convergence of the Kalman filter can be measured by examining the trace of the error covariance matrix \(P_k\), which should be minimized. This condition is fulfilled by the proposed filter (Fig. 7).
Trace of the covariance matrix (S1 for Middle_Low capture)
Figure 8 presents a comparison of the average errors with other filters that have a dynamic mechanism for levelling the influence of external acceleration on orientation estimation. Filter AEQKF is an extended Kalman filter with an adaptive mechanism in which the measurement noise covariance matrix is adapted at run time to guard against the effects of body motion; the implementation is based on Angelo (2006). Filter NCF_L is implemented based on Young (2010), where a simple complementary filter was used, passing the acceleration estimate through the skeleton model (Szczęsna et al. 2016). The proposed solution (filter MBEQKF) has the lowest error. The results are similar to those of the NCF_L filter because both use a similar mechanism to transfer the modelled acceleration in the kinematic chain, but better results are achieved by combining this with the extended Kalman filter technique. Filter AEQKF is a Kalman filter but levels the influence of high acceleration by the adaptation mechanism, without using the kinematic chain dependences; in the described experiment this led to higher average errors.
Average error angle of MBEQKF, AEQKF, EQKF and NCF_L filter
The article presents an evaluation of opportunities to improve orientation estimation by using dependences in the kinematic chain. The results are shown for two extended quaternion Kalman filters: one based on a single segment (EQKF) and one with kinematic dependences (MBEQKF). The results, based on experiments with the 3-segment single-link pendulum, show the superiority of the solution based on the estimation of acceleration in the body model (skeleton), especially for child segments. The filter is universal, has a small state vector, and gives an average angle error of about 6–7 degrees, comparable with other, more complex solutions presented in the literature.
http://zgwisk.aei.polsl.pl/index.php/en/research/projects/61-repoimu.
Chou JCK (1992) Quaternion kinematic and dynamic differential equations. IEEE Trans Robot Autom 8(1):53–64
El-Gohary M, McNames J (2012) Shoulder and elbow joint angle tracking with inertial sensors. IEEE Trans Biomed Eng 59(9):2635–2641
Foxlin E (1996) Inertial head-tracker sensor fusion by a complementary separate-bias kalman filter. In: Proceedings of the IEEE 1996 on virtual reality annual international symposium, pp 185–194
Gramkow C (2001) On averaging rotations. J Math Imaging Vis 15(1–2):7–16
Jędrasiak K, Daniec K, Nawrat A (2013) The low cost micro inertial measurement unit. In: 8th IEEE conference on industrial electronics and applications, pp 403–408
Kulbacki M, Koteras R, Szczęsna A, Daniec K, Bieda R, Słupik J, Segen J, Nawrat A, Polański A, Wojciechowski K (2015) Scalable, wearable, unobtrusive sensor network for multimodal human monitoring with distributed control. In: 6th European conference of the international federation for medical and biological engineering. Springer International Publishing, pp 914–917
Lin JFS, Kulić D (2012) Human pose recovery using wireless inertial measurement units. Physiol Meas 33(12):2099
Madgwick Sebastian OH, Harrison Andrew JL, Vaidyanathan R (2011) Estimation of imu and marg orientation using a gradient descent algorithm. In: 2011 IEEE international conference on rehabilitation robotics, pp 1–7
Mahony R, Hamel T, Pflimlin J-M (2008) Nonlinear complementary filters on the special orthogonal group. IEEE Trans Autom Control 53(5):1203–1218
Miezal M, Bleser G, Schmitz N, Stricker D (2013) A generic approach to inertial tracking of arbitrary kinematic chains. In: Proceedings of the 8th international conference on body area networks, pp 189–192. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering)
Pruszowski P, Szczęsna A, Polański A, Słupik J, Wojciechowski K (2015) Adaptation mechanism of feedback in quaternion kalman filtering for orientation estimation. In: Artificial intelligence and soft computing. Springer International Publishing, pp 739–748
Roetenberg D, Luinge H, Slycke P (2009) Xsens mvn: full 6dof human motion tracking using miniature inertial sensors. Tech. Rep, Xsens Motion Technologies BV
Sabatini AM (2006) Quaternion-based extended kalman filter for determining orientation by inertial and magnetic sensing. IEEE Trans Biomed Eng 53(7):1346–1356
Sabatini AM (2011) Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing. Sensors 11(2):1489–1525
Sabatini AM (2011) Kalman-filter-based orientation determination using inertial/magnetic sensors: observability analysis and performance evaluation. Sensors 11(10):9182–9206
Shuster MD, Oh SD (1981) Three-axis attitude determination from vector observations. J Guid Control Dyn 4(1):70–77
Šlajpah S, Kamnik R, Munih M (2014) Kinematics based sensory fusion for wearable motion assessment in human walking. Comput Methods Prog Biomed 116(2):131–144
Szczęsna A, Pruszowski P, Słupik J, Pęszor D, Polański A (2016) Evaluation of improvement in orientation estimation through the use of the linear acceleration estimation in the body model. In: Man–machine interactions, vol 4. Springer International Publishing, pp 377–387
Szczęsna A, Skurowski P, Pruszowski P, Pęszor D, Paszkuta M, Wojciechowski K (2016) Reference data set for accuracy evaluation of orientation estimation algorithms for inertial motion capture systems. In: International conference on computer vision and graphics. Springer International Publishing, pp 509–520
Torres-Moreno JL, Blanco-Claraco JL, Giménez-Fernández A, Sanjurjo E, Naya MÁ (2016) Online kinematic and dynamic-state estimation for constrained multibody systems based on imus. Sensors 16(3):333
Vikas V, Crane CD (2016) Joint angle measurement using strategically placed accelerometers and gyroscope. J Mech Robot 8(2):021003
Young AD, Ling Martin J, Arvind DK (2010) Distributed estimation of linear acceleration for improved accuracy in wireless inertial motion capture. In: Proceedings of the 9th ACM/IEEE international conference on information processing in sensor networks, pp 256–267
Young AD (2010) Use of body model constraints to improve accuracy of inertial motion capture. In: IEEE 2010 international conference on body sensor networks, pp 180–186
Yun X, Bachmann ER (2006) Design, implementation, and experimental results of a quaternion-based kalman filter for human body motion tracking. IEEE Trans Robot 22(6):1216–1227
AS conceived and designed the study and wrote the article. PP performed the implementation in Matlab and reviewed the paper. Both authors read and approved the final manuscript
This work was supported by a statute project of the Silesian University of Technology, Institute of Informatics (BK-263/RAU-2/2015). This work was partly performed using the infrastructure supported by POIG.02.03.01-24-099/13 Grant: "GCONiI-Upper-Silesian Center for Scientific Computation". Data were captured in the Human Motion Laboratory of Polish-Japanese Academy of Information Technology (http://bytom.pja.edu.pl/, http://hm.pjwstk.edu.pl/en/).
Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100, Gliwice, Poland
Agnieszka Szczęsna & Przemysław Pruszowski
Agnieszka Szczęsna
Przemysław Pruszowski
Correspondence to Agnieszka Szczęsna.
Szczęsna, A., Pruszowski, P. Model-based extended quaternion Kalman filter to inertial orientation tracking of arbitrary kinematic chains. SpringerPlus 5, 1965 (2016). https://doi.org/10.1186/s40064-016-3653-8
Inertial motion capture
Orientation estimation
Kalman filter
Kinematic chain | CommonCrawl |
Uspekhi Mat. Nauk, 1979, Volume 34, Issue 5(209), Pages 13–63 (Mi umn4115)
This article is cited in 299 scientific papers (total in 300 papers)
The quantum method of the inverse problem and the Heisenberg $XYZ$ model
L. A. Takhtadzhyan, L. D. Faddeev
Russian Mathematical Surveys, 1979, 34:5, 11–68
UDC: 519.9
MSC: 82B10, 82B20, 82B05, 82B23, 45Q05, 33E05
Received: 01.06.1979
Citation: L. A. Takhtadzhyan, L. D. Faddeev, "The quantum method of the inverse problem and the Heisenberg $XYZ$ model", Uspekhi Mat. Nauk, 34:5(209) (1979), 13–63; Russian Math. Surveys, 34:5 (1979), 11–68
This publication is cited in the following articles:
R. Z. Bariev, "Two-dimensional ice-type vertex model with two types of staggered sites", Theoret. and Math. Phys., 49:2 (1981), 1021–1028
I. V. Cherednik, "Relativistically invariant quasiclassical limits of integrable two-dimensional quantum models", Theoret. and Math. Phys., 47:2 (1981), 422–425
V. V. Anshelevich, E. V. Gusev, "First integrals of the one-dimensional quantum ising model with transverse magnetic field", Theoret. and Math. Phys., 47:2 (1981), 426–434
I. M. Krichever, "Baxter's equations and algebraic geometry", Funct. Anal. Appl., 15:2 (1981), 92–103
R. Shankar, "On the solution of some vertex models using factorizable S matrices", J Statist Phys, 29:4 (1982), 649
L. Dolan, Michael Grady, "Conserved charges from self-duality", Phys Rev D, 25:6 (1982), 1587
H. R. Jauslin, T. Schneider, "Solitons and the excitation spectrum of classical ferromagnetic chains with axial anisotropy", Phys Rev B, 26:9 (1982), 5153
T. Schneider, E. Stoll, U. Glaus, "Excitation spectrum of planar spin-½ Heisenberg xxz chains", Phys Rev B, 26:3 (1982), 1321
T. Schneider, E. Stoll, "Magnetic field effects in the spin dynamics of ferro- and antiferromagnetic Ising-type chains with s=1/2", Phys Rev B, 26:7 (1982), 3846
N. N. Bogolyubov (Jr.), A. K. Prikarpatskii, V. G. Samoilenko, "Discrete periodic problem for the modified nonlinear Korteweg–de Vries equation", Theoret. and Math. Phys., 50:1 (1982), 75–81
Ertuḡrul Berkcan, "Order-disorder variables and complete integrability; implications for", Physics Letters B, 110:2 (1982), 134
O. Babelon, H.J. de Vega, C-M. Viallet, "Exact solution of the Zn+1 × Zn+1 symmetric generalization of the XXZ model", Nuclear Physics B, 200:2 (1982), 266
C. Jayaprakash, A. Sinha, "Commuting transfer matrix solution of the asymmetric six-vertex model", Nuclear Physics B, 210:1 (1982), 93
R N Onody, M Karowski, J Phys A Math Gen, 16:1 (1983), L31
M T Jaekel, J M Maillard, J Phys A Math Gen, 16:13 (1983), 3105
V. I. Vichirko, N. Yu. Reshetikhin, "Excitation spectrum of the anisotropic generalization of an $SU_3$ magnet", Theoret. and Math. Phys., 56:2 (1983), 805–812
V. O. Tarasov, L. A. Takhtadzhyan, L. D. Faddeev, "Local Hamiltonians for integrable quantum models on a lattice", Theoret. and Math. Phys., 57:2 (1983), 1059–1073
I. M. Gel'fand, I. V. Cherednik, "The abstract Hamiltonian formalism for the classical Yang–Baxter bundles", Russian Math. Surveys, 38:3 (1983), 1–22
J.H.H. Perk, C.L. Schultz, "Diagonalization of the transfer matrix of a nonintersecting string model", Physica A: Statistical Mechanics and its Applications, 122:1-2 (1983), 50
C.L. Schultz, "Eigenvectors of the multi-component generalization of the six-vertex model", Physica A: Statistical Mechanics and its Applications, 122:1-2 (1983), 71
M.F. Weiss, K.D. Schotte, "Lattice approach to the spectrum of the massive Thirring model", Nuclear Physics B, 225:2 (1983), 247
T.T. Truong, K.D. Schotte, "Quantum inverse scattering method and the diagonal-to-diagonal transfer matrix of vertex models", Nuclear Physics B, 220:1 (1983), 77
U. Glaus, T. Schneider, "Critical properties of the spin-1 Heisenberg chain with uniaxial anisotropy", Phys Rev B, 30:1 (1984), 215
T. Schneider, U. Glaus, E. P. Stoll, "Critical properties of xy spin-one chains with uniaxial single-ion anisotropy", J Appl Phys, 55:6 (1984), 2401
R. Z. Bariev, "Two-dimensional ice-type vertex model with two types of staggered sites II. A system of two interacting modified KDP models", Theoret. and Math. Phys., 58:2 (1984), 207–210
I. V. Cherednik, "Factorizing particles on a half-line and root systems", Theoret. and Math. Phys., 61:1 (1984), 977–983
V. O. Tarasov, "Structure of quantum L operators for the R matrix of the XXZ model", Theoret. and Math. Phys., 61:2 (1984), 1065–1072
L. Dolan, "Kac-Moody algebras and exact solvability in hadronic physics", Physics Reports, 109:1 (1984), 1
Takayuki Abe, "The duality transformation for the IRF models", Physics Letters A, 102:8 (1984), 343
A. A. Hutsalyuk, A. Liashyk, S. Z. Pakulyak, E. Ragoucy, N. A. Slavnov, "Current presentation for the super-Yangian double $DY(\mathfrak{gl}(m|n))$ and Bethe vectors", Russian Math. Surveys, 72:1 (2017), 33–99
Jan Fuksa, "Bethe Vectors for Composite Models with $\mathfrak{gl}(2|1)$ and $\mathfrak{gl}(1|2)$ Supersymmetry", SIGMA, 13 (2017), 015, 17 pp.
N. A. Slavnov, "Algebraicheskii anzats Bete", Lekts. kursy NOTs, 27, MIAN, M., 2017, 3–189
Qinxiu Sun, Fang Li, "A generalization of Lie $H$-pseudo-bialgebras", Theoret. and Math. Phys., 192:1 (2017), 939–957
A. V. Zabrodin, A. V. Zotov, A. N. Liashyk, D. S. Rudneva, "Asymmetric six-vertex model and the classical Ruijsenaars–Schneider system of particles", Theoret. and Math. Phys., 192:2 (2017), 1141–1153
L. A. Takhtajan, A. Yu. Alekseev, I. Ya. Aref'eva, M. A. Semenov-Tian-Shansky, E. K. Sklyanin, F. A. Smirnov, S. L. Shatashvili, "Scientific heritage of L. D. Faddeev. Survey of papers", Russian Math. Surveys, 72:6 (2017), 977–1081
Nicolas Crampe, "Algebraic Bethe Ansatz for the XXZ Gaudin Models with Generic Boundary", SIGMA, 13 (2017), 094, 13 pp.
Hutsalyuk A., Liashyk A., Pakuliak S.Z., Ragoucy E., Slavnov N.A., "Scalar Products of Bethe Vectors in the Models With Gl(M|N) Symmetry", Nucl. Phys. B, 923 (2017), 277–311
Fuksa J., "On the Structure of Bethe Vectors", Phys. Part. Nuclei Lett., 14:4 (2017), 624–630
Derkachov S.E. Manashov A.N. Valinevich P.A., "Gustafson Integrals For Sl(2, C) Spin Magnet", J. Phys. A-Math. Theor., 50:29 (2017), 294007
Fuksa J. Slavnov N.A., "Form Factors of Local Operators in Supersymmetric Quantum Integrable Models", J. Stat. Mech.-Theory Exp., 2017, 043106
Hutsalyuk A. Liashyk A. Pakuliak S.Z. Ragoucy E. Slavnov N.A., "Norm of Bethe Vectors in Models With Gl(M Vertical Bar N) Symmetry", Nucl. Phys. B, 926 (2018), 256–278
Ilin A. Rybnikov L., "Degeneration of Bethe Subalgebras in the Yangian of Gl(N)", Lett. Math. Phys., 108:4 (2018), 1083–1107
Hutsalyuk A. Liashyk A. Pakuliak S.Z. Ragoucy E. Slavnov N.A., "Scalar Products and Norm of Bethe Vectors For Integrable Models Based on U-Q ((Gl)Over-Cap(M))", SciPost Phys., 4:1 (2018), 006
Liashyk A., Slavnov N.A., "On Bethe vectors in $\mathfrak{gl}_3$ -invariant integrable models", J. High Energy Phys., 2018, no. 6, 018, 31 pp.
Tarasov V., "Completeness of the Bethe Ansatz For the Periodic Isotropic Heisenberg Model", Rev. Math. Phys., 30:8, SI (2018), 1840018
Pittelli A., "Yangian Symmetry of String Theory on Ads(3) X S-3 X S-3 X S-1 With Mixed 3-Form Flux", Nucl. Phys. B, 935 (2018), 271–289
Kamil Yu. Magadov, Vyacheslav P. Spiridonov, "Matrix Bailey Lemma and the Star-Triangle Relation", SIGMA, 14 (2018), 121, 13 pp.
Gahramanov I. Jafarzade Sh., "Integrable Lattice Spin Models From Supersymmetric Dualities", Phys. Part. Nuclei Lett., 15:6 (2018), 650–667
Maillet J.M., Niccoli G., "On Quantum Separation of Variables", J. Math. Phys., 59:9, SI (2018), 091417
V. Popkov, D. Karevski, G. M. Schütz, "Exact results for the isotropic spin-$1/2$ Heisenberg chain with dissipative boundary driving", Theoret. and Math. Phys., 198:2 (2019), 296–315
Gerrard A., MacKay N., Regelskis V., "Nested Algebraic Bethe Ansatz For Open Spin Chains With Even Twisted Yangian Symmetry", Ann. Henri Poincare, 20:2 (2019), 339–392
Liashyk A., Pakuliak S.Z., Ragoucy E., Slavnov N.A., "New Symmetries of Gl(N)-Invariant Bethe Vectors", J. Stat. Mech.-Theory Exp., 2019, 044001
Yao Sh.-K., Liu P., Jia X.-Yu., "On Super Yangian Covariance of the Triple Product System", Adv. Appl. Clifford Algebr., 29:1 (2019), UNSP 15
Number of views:
This page: 2366
Full text: 893
References: 108
What is a QR-code?
Terms of Use Registration Logotypes © Steklov Mathematical Institute RAS, 2019 | CommonCrawl |
Polycentric vs monocentric urban structure contribution to national development
Ashraf Sami Mahmoud Abozeid & Tarek Abdellatif AboElatta
The debate between polycentricity and subordinacy has long been a critical topic that planners, economists, and sociologists have argued over. The choice between concentration and decentralization affects all metabolic activities of urban life, and urban structure has long been regarded as the key factor that significantly shapes urban metabolism. After the COVID-19 pandemic, however, planning strategies changed dramatically. The main purpose of this paper is to investigate the urbanization approach that achieves the best development results. The research methodology is to define and measure fabric independency as a proxy for self-sufficiency, i.e., the ability of an urban area to withstand pandemic challenges under different circumstances. The paper uses a fabric diversity index as a sensitive indicator of the independency and polycentricity of the urban structure. The main conclusion of this paper is that strongly linked, independent polycentric urban agglomerations achieve much better development results than subordinate cities that depend on a main core city. The data used for the analysis are extracted from the Urban Atlas developed by the European Environment Agency, in addition to the UN-Habitat annual report. All calculations, analyses, and deductions were carried out exclusively by the authors.
The research in brief
Development has always been the critical topic that economists, sociologists, and planners have debated. Many trials, theories, and proposals have been developed since the early ages to achieve the best development results. Political arguments and decisions have also varied according to development desires and economic growth. By the end of the nineteenth century, however, theorists had converged on two main trends of civilization: the first is a highly concentric strategy that relies on centralized decision-making to guarantee control of resources and actions, and the second is a decentralization strategy that relies on the democratic contribution of all parties to governance.
Recently, major old cities have become overcrowded and saturated owing to the massive accumulation of urbanization and evolutionary development that began long ago. The growing demand for housing and major activities, driven by population growth across different eras, has pushed planners to continue urban expansion. Urban expansion, however, has long been a debatable issue among urban planners, sociologists, and economists.
More precisely, many urban experts claim that the philosophy, strategy, and even the process of urbanization may influence development rates to a great extent and can have an impact on economic growth. From this perspective, this paper investigates the most effective urbanization approach for achieving the best development results on a major scale. Regionally, two main trends of the urbanization process shape city planning: monocentric urbanization and polycentric urbanization.
These two trends were reflected in urbanization approaches as competing methods of achieving the highest development score, each seeking to prove its validity at the expense of the other. Consequently, two main planning approaches have emerged as urbanization strategies. Monocentric planning focuses on developing one main urban pole with related suburban areas in the vicinity of the main core.

The other is polycentric planning, which focuses on developing multiple urban poles that share nearly the same level of equity in most life aspects, achieving what is known as urban equilibrium [1]. However, after the COVID-19 pandemic, different opinions and thoughts about city planning have arisen. Hamidi et al. (2020) [2] did not find a strong positive correlation between density and COVID-19 infection and mortality rates. The long-term economic shutdowns due to the pandemic have had very negative impacts on the urban economy, with complicated consequences occurring in different ways and on a wide range of scales (Krzysztofik et al. 2020) [3]. Traditional planning theories need to be revised and adapted to the new challenges that have appeared recently. In other words, a new meaning of development should be defined for the new normal, one that does not rely on economic growth alone but compromises between a uniform production rate and sustaining residents' and workers' health across the urban fabric.
Research argument
The main argument concerns the planning strategy that can achieve the best development scores. Surprisingly, Hamidi et al. observed slightly lower virus-related mortality rates in high-density locations than in sprawling areas. Much research and many comparisons have been carried out to settle this debate.
In most cases, however, the gap in the literature was that the measurement approach was inaccurate or biased, leading to contradictory results. This research tries to add value to the literature on polycentricity by presenting a fair measurement methodology that takes all factors into account. The main motivation for carrying out this research is to seek an unbiased measurement and calibration method for polycentricity using reliable, concrete data that remain stable from one urban area to another, so that the comparison is ultimately a fair one.
Research aim
The main aim of this research is to define a new approach for evaluating polycentricity so as to judge its impact upon development.
The main objective of this paper is to explore whether a monocentric urban structure (represented by the core-with-subordinate-cities planning model) or a polycentric urban structure (represented by the autonomous connected cities model) is the more appropriate and effective planning approach for contributing to development after the pandemic.
Research hypothesis
The hypothesis formulated for this research is that independent autonomous urban agglomerations, which represent the multicenter planning approach, contribute more to development than satellite cities, which depend on a main core and represent the monocentric planning approach.
Monocentric and polycentric urban structure
The first monocentric city model was generated by Alonso [4] and later developed by Mills and Muth to include transportation, production, and housing. Fujita then unified the previous models in one framework, and Ogawa and Fujita developed two-sector monocentric models of a one-dimensional city [5]. Fujita argued that the concentration of firms in one place enlarges the agglomeration zone; consequently, the average commuting distance for workers increases, and so do wages [6–8]. Land rent around the agglomeration also rises. The increase in the cost of labor and land then discourages further firm agglomeration and encourages the opposite phenomenon, urban sprawl at the peripheries [9].
Polycentricity is a multiscalar concept that operates at the local, regional, and national levels [10]. The concept has been tackled as a counter to the core-periphery concept that used to be the major trend of urbanization. Some planners have stated that there is no single definition of polycentricity [11]. Two major categories of definitions can be identified in the literature. The first is morphological, focusing on population size, employment rate, land use combinations, and so on: an area can be called a polycentric fabric if it contains two or more centers and neither population nor employment is concentrated in a single center. The other definitions are oriented towards the functional approach [12], which mainly emphasizes the activity exchange and metabolism of the fabric [13]. Kloosterman and Musterd stated that "polycentricity can, in principle, refer to any clustering of human activity." They summarized the characteristics of any polycentric urban area in two main features as follows:
A group of connected distinct cities
No obvious leading city
The first polycentric model was developed by Fujita and Ogawa [5–7]. The main hypothesis was that the benefit of cooperation between two firms is inversely proportional to the distance between them; thus, when commuting costs are relatively high, multiple business cores form, achieving what is known as "multi-equilibria." It is worth noting that differences in production or transport costs lead to variation in the size of agglomerations and, consequently, in the spacing between industries. According to Fujita and Mori, the presence of multiple industries leads to the formation of a hierarchical city system.
Application of polycentricity and monocentricity can be reflected through the following two main urban structures that this paper discusses:
First, independent linked urban agglomerations
Second, connected satellite urban cities [14]
To understand the meaning of autonomous independent urban agglomeration, the definition of the term "urban agglomeration" should be clearly outlined. Table 1 shows the different definitions that have been associated with the urban agglomeration expression.
Table 1 Urban agglomeration definitions summary
According to the above table, urban agglomeration has clearly been defined through various approaches. The previous definitions can be briefly summarized into four main meanings. The first defines urban agglomeration as an urban area or cluster, while the second states that it is an aggregate or concentrated urban area. The third describes it as an urban region with a diverse economic base and products, and the fourth portrays it as an urban area that forms a metropolitan or megalopolis zone. From the evidence available, it is possible to deduce that not every urban agglomeration is a candidate to be an autonomous zone, but some forms of urban areas can be genuinely independent. This paper investigates whether independency or subordinacy is the better strategy for urban expansion if it is to achieve the best development results. To be more focused, any urban expansion can take one of the following two urban forms:
Autonomous linked urban agglomerations is an approach that counts on intimately connected independent urban areas.
Subordinate satellite-connected cities is an approach that relies on developing connected urban areas dependent on the main center.
Satellite cities
The idea of satellite cities was influenced by the principles of the Garden City introduced by Ebenezer Howard. Oxford Dictionary of Architecture defines satellite towns as follows:
Towns that are self-contained and limited in size, built in the vicinity of a large town or city to house and employ those who would otherwise create a demand for expansion of the existing settlement, but dependent on the parent-city for population and major services.
Howard developed the idea of building garden cities that were planned, limited in size, and surrounded by a permanent belt of open space. The main goal of developing satellite cities was to alleviate overpopulation in the capital city without resulting in sprawl.
It is worth mentioning that satellite cities have been the most common trend throughout the last decades. Several urban planners and sociologists have argued that satellite towns and cities constitute a new implementation of the core-periphery concept [15]. On one hand, satellite cities have proved to be a rapid strategy for developing integrated urban areas where inhabitants can find a better quality of life. On the other hand, they maintain the connection between the new urban agglomeration and the major city. This allows satellite cities to rely on the core city for many facilities and products requiring large investments in infrastructure, which can be costly and time-consuming. At the same time, they ease a serious housing problem by providing new dwelling units at lower prices and with more facilities compared to the limited ones existing in the capital city. It could be claimed that they contribute somewhat to providing job opportunities in the service sector, though not in the production sector. Some planners argue that job opportunities need not always be in the production sector to contribute to economic growth; they reason that the service sector acts as a wallet for gaining excess revenues from residents, and these revenues can then be embedded in the production process. The speculative hypothesis is based on developing a well-integrated loop that securely carries economic externalities from distribution centers to production centers and back.
In other words, a satellite city can be a potential to develop a new community with diverse amenities without being overburdened by the cost of creating unaffordable land use categories.
On the other hand, adverse criticism has been directed at satellite cities. The main claim is that although these new urban communities, once established, played an important role in shaping new urban settlements, they proved to have no significant impact on national income. The reason is that inhabitants rely on the mother core city for their job opportunities and for the production of their life needs. In other words, the major core city is where the economic base, whether industrial or agricultural, is found. It acts as a supplier and feeder to the associated satellite cities, which in turn act as channels for distributing these products and commodities to users and residents. Contemporary planners also argue that even relying on services as an economic portfolio is not effective; in most cases, large investments in infrastructure and transportation are required to transfer capital smoothly and quickly. Satellite cities are also criticized for not being true urban communities: usually, these cities do not contain a wide cross-section of society in terms of either dwelling type or job categories, because inhabitants depend on commuting between the mother city and the subordinate ones to satisfy certain life needs.
It should be honestly stated that some of these satellite cities have proven to be quite self-sufficient, yet none of them has reached the level of autonomy or complete independency [16]. This is because the concept upon which these cities were planned assumes dependence on the mother city. Consequently, they are not expected to add value to urban income; the goal they were developed for was to solve a housing problem. It is therefore illogical to blame these cities for not contributing effectively to economic growth or for failing to boost development.
The ESPON research program was developed to achieve a better understanding of spatial trends, problems, and opportunities on a European scale. Several versions were developed, starting with the ESPON 1.1.1 project, which focused on the role and potential of urban areas as nodes in polycentric development. The ESPON 1.4.3 project aimed to refine Project 1.1.1 through the delimitation of functional urban areas (FUAs) and an analysis of polycentricity based on this approach. Both ESPON 1.1.1 and 1.4.3 used the following sub-indexes: a size index, a location index, and a connectivity index. However, both projects were criticized for the following reasons. First, their measurement approaches defined polycentricity as a morphological issue concerning only size and territorial distribution. In addition, the need for a new measurement method arises from the inaccuracy of the ESPON measurement method (Meijers 2008) [17]. Meijers stated that the results are based on too many FUAs; in larger countries, for example, the calculations for the flatness of the urban hierarchy and for primacy are strongly influenced by the smaller FUAs. A fixed size threshold, as used in ESPON 1.4.3, also has its disadvantages: a city ranked 10th in one country could be an important one in a smaller country, so such a measure would deviate the results and distort the picture. Meijers concluded that primacy should be measured relative to a small fixed number of FUAs, about n = 4 or 5 (Meijers, 2008) [17].
An example of the rank-size approach as a parameter of the polycentricity index developed by ESPON 1.4.3 is presented in Fig. 1, which shows the rank-size distributions of Germany and Greece. The figure shows that Germany is a very polycentric country, where cities are nearly equal in size. In contrast, Greece is a very monocentric country: its cities vary significantly in size and are composed of a core and subordinate cities.
Fig. 1 Polycentric and monocentric countries in ESPON 1.4.3
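To make the rank-size idea concrete, the primacy measure that Meijers recommends (computed over a small fixed number of FUAs, here n = 4) can be sketched as follows. The population figures below are hypothetical and for illustration only; they are not the actual Eurostat values.

```python
def primacy_index(fua_sizes, n=4):
    """Share of the largest FUA in the total of the top-n FUAs.

    Values near 1/n indicate a flat (polycentric) urban hierarchy;
    values near 1.0 indicate a dominant primate city (monocentric).
    """
    top = sorted(fua_sizes, reverse=True)[:n]
    return top[0] / sum(top)


# Hypothetical FUA populations (millions), illustration only.
germany_like = [3.6, 1.8, 1.5, 1.4]   # relatively flat hierarchy
greece_like = [3.8, 0.8, 0.3, 0.2]    # one dominant primate city

flat = primacy_index(germany_like)     # ~0.43, close to 1/4
primate = primacy_index(greece_like)   # ~0.75, far above 1/4
```

The flatter hierarchy yields a primacy score close to 1/n, while the primate-city profile scores much higher, mirroring the Germany/Greece contrast in Fig. 1.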
As agreed above, the objectives of this research are to measure the levels of independency and polycentricity of urban agglomerations and of subordinate ones, in order to know the extent to which they impact economic externality production and development. From this perspective, it is evident that a sensitive benchmark index should be defined so that it can serve as a genuine indicator of independency and polycentricity. On the other hand, development scores should be formulated from a reliable standard source that guarantees the accuracy of the results [18].
Independency is a two-dimensional characteristic. The morphological dimension ensures that, in terms of urban form, the urban area is an identifiable agglomeration; it also verifies that the area is detached rather than attached to another mass. The functional dimension guarantees that the urban area's metabolism works efficiently. Some scholars have used density as the intensification indicator for the morphological approach. Others have used the employment rate and dwelling occupancy as indicators of population productivity for the functional approach, justifying that a high employment rate indicates that the urban area performs well and is quite productive [19].
This paper contradicts this approach because density can, in certain cases, be an accumulation of a single class of inhabitants within few variations of repetitive land use. This redundancy in the categories of inhabitants and land use can cause either of two phenomena. The first is a spillover effect, where the surplus of one product causes a great loss in its price because supply exceeds demand. The second is the competition that arises between similar products, which may negatively affect the overall productivity rate.
It is also not preferable to use the employment rate or housing occupancy as indicators. Employment could consist of standardized, monotonic jobs that do not boost economic growth as desired but represent fictitious hours of work with no significant added value. Dwelling occupancy can be misleading, too: it might show that inhabitants always have a place to settle and live in, while in reality they may not enjoy a good quality of life. Even relying on top-notch housing records is not correct, as some people may possess multiple high-standard housing units even though they do not exceed 5% of the community.
Diversity, on the other hand, can be a reliable indicator. Land use diversity illustrates that an urban fabric has a non-monotonic urban metabolism. In addition, it offers a wide range of possibilities for different activities, employment categories, and even classes of inhabitants. Diversity is claimed to be a quite accurate indicator that considers the variability and differentiation between distinct categories. This helps to develop relations between the analyzed categories and to find the most appropriate combination among them as a whole.
In this paper, diversity index proportions are used as a trustworthy indicator of independency. The paper suggests that the independency of any urban agglomeration is a main indicator of self-sufficiency. Self-sufficiency can be measured using numerous methods, yet depending on activity rates and records would not be accurate, because the type, nature, and frequency of activities vary from one place to another. In addition, a standardized criterion should be carefully considered to neutralize third-party political interference or any other factors that might deviate the results. Comparing soft parameters, such as employment rate, illiteracy, and number of dwellings, is always misleading as well. Development records, on the other hand, are extracted directly from the UN-Habitat Sustainable Development Goals Report as a reliable standard source containing accurate numbers and scores for the main development aspects as defined by the United Nations.
Measurement of independency (polycentricity)
The fundamental concept upon which the research is based is the principle of primacy as a measurement approach for ranking urban agglomerations. On the national scale, a country's ranking can easily be determined by measuring the primacy of the largest agglomeration relative to the rest of the urban areas. The hypothesis is that if the dominant urban area shows high urban primacy with respect to the other urban areas, the country tends to be a core with dependent peripheral satellite cities (a monocentric planning approach). On the other hand, if the dominant urban area is of nearly the same rank-size as the other urban areas, the urban fabric is a network of autonomous independent urban agglomerations (a polycentric planning approach).
Polycentricity index calculation
The urban primacy and polycentricity of a country are calculated as follows:
Calculation of the diversity index Hurbn of the four major urban agglomerations within every country, based on the available land use data. (The four major urban agglomerations selected are the largest functional urban areas in size and population within every country according to the European Union statistical agency, Eurostat, 2016. As Meijers (2008) [17] has stated, when measuring polycentricity only the largest urban areas should be selected, to guarantee the neutrality of the size factor on the results. In addition, the location of cities is not taken into account, as the goal is to measure the polycentricity of cities, which differs from measuring the city-networking effect.)
Every country therefore has a dominant urban agglomeration, identified by the largest diversity index value HMajor_Urbn (usually the country capital), and three other major urban areas whose diversity indices are also calculated.
The standard deviation is calculated for the four diversity index values of the chosen cities for each country as σcntry.
The mean is calculated for the four diversity index values of the chosen cities for each country as μcntry.
The magnetic effect of mutual attraction between the different centers is either amplified or attenuated through the coefficient of variation. The coefficient of variation CVcntry of each country is calculated by dividing the country's standard deviation by its mean, as shown in the following equation:
$$ {\mathbf{CV}}_{\mathbf{cntry}}={\boldsymbol{\sigma}}_{\mathbf{cntry}}/{\boldsymbol{\mu}}_{\mathbf{cntry}} $$
High values show significant variations between the diversity indices, while low values show that the diversity indices of the cities are nearly the same.
The dominant urban area's diversity index HMajor_Urbn is multiplied by the inverse of the coefficient of variation CVcntry of each country to indicate the degree and strength of the country's polycentricity, P(I), as shown in Eq. 2.
$$ \boldsymbol{P}\left(\boldsymbol{I}\right)={\boldsymbol{H}}_{\mathbf{Major}\_\mathbf{Urbn}}\times \left(1/{\mathbf{CV}}_{\mathbf{cntry}}\right) $$
High values indicate nearly equal diversity indices among the different cities, and the country tends to be highly polycentric. Low values indicate a weak polycentric effect, and the country tends to be monocentric.
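A minimal sketch of Eqs. 1 and 2 in Python follows. Two assumptions are made here that the text does not fix: the population standard deviation is used for σcntry, and the four Shannon index values are hypothetical, chosen only to contrast a flat (polycentric) profile with a dominated (monocentric) one.

```python
import statistics


def polycentricity_index(h_values):
    """P(I) = H_Major_Urbn * (1 / CV), with CV = sigma / mu (Eqs. 1-2).

    h_values: Shannon diversity indices of a country's four largest
    functional urban areas. The population standard deviation (pstdev)
    is assumed, since the text does not specify sample vs population.
    """
    h_major = max(h_values)
    cv = statistics.pstdev(h_values) / statistics.mean(h_values)
    return h_major * (1.0 / cv)


# Hypothetical H values: near-equal indices (polycentric profile)
# versus one dominant index (monocentric profile).
p_poly = polycentricity_index([1.60, 1.55, 1.58, 1.52])
p_mono = polycentricity_index([1.60, 0.90, 0.70, 0.60])
# p_poly far exceeds p_mono: a flat diversity profile has a small CV,
# so its inverse inflates P(I), exactly as the text describes.
```

The design choice of using 1/CV means the index diverges as the four cities approach perfect equality, so in practice P(I) is best read comparatively across countries rather than on an absolute scale.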
Urban diversity calculation
As mentioned before, urban diversity is a sensitive indicator of polarity and independency. Shannon entropy was chosen as the method for measuring urban fabric diversity, using Eq. 3 [20]:
$$ \mathbf{H}=-\sum \limits_{i=1}^s{p}_{\mathbf{i}}\ \mathbf{\ln}\left({\boldsymbol{p}}_{\boldsymbol{i}}\right) $$
H = the Shannon diversity index value
Pi = the proportion of individuals found in the ith species
ln = the natural logarithm
s = the number of species in the community
Applying Eq. 3 to the case of urban areas, Pi is the proportion of the area of the ith land use to the total functional urban area (F.U.A.) of the city, and s is the number of land uses in the urban area. The land uses that shape the urban agglomeration diversity index were chosen according to the following conditions:
Land use should be a category of any urban development (rural land use categories are not taken into account).
Sprawled urbanization was excluded from the analysis. Since the research is oriented towards urbanization that influences development, it was decided to choose the main land uses that reflect the main flow of the labor force rather than any land use that might have arisen under special circumstances. In addition, low-density scattered urban areas cannot be accurately identified: they could either be luxurious areas with entertaining green open spaces or just poor informal buildings lying on the peripheries. In both cases, data about the nature of residents in these areas and their activities are usually missing or unreliable.
Diversity is claimed to be the most appropriate indicator of a balanced land use mix. High index values reflect fine coherent proportions of land use combination, while low values point out that the fabric is a coarse one. Accordingly, the following land uses were selected to represent a variety of urban societies:
Continuous urban fabric [C.F] (a fabric type where the urban surface is mostly covered by impermeable features, such as buildings, roads, and artificially surfaced areas)
Discontinuous high-density urban fabric [D.D.F] (a fabric where impermeable features, such as buildings, roads, and artificially surfaced areas, cover 50 to 80% of the land)
Discontinuous medium-density urban fabric [D.M.F] (a fabric where impermeable features, such as buildings, roads, and artificially surfaced areas, cover 30 to 50% of the land)
Industrial or commercial units and public facilities [I/C.F] (land units in industrial or commercial use or serving as public facilities)
Railway network [R.F].
Urban green areas [G.F].
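As a sketch of the calculation in Eq. 3, the Shannon diversity index can be computed directly from land use areas. The areas below are purely hypothetical, illustrative numbers for the six categories, not the paper's data:

```python
import math

def shannon_index(areas):
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over land-use shares."""
    total = sum(areas)
    return -sum((a / total) * math.log(a / total) for a in areas if a > 0)

# Hypothetical areas (km^2) for the six categories
# C.F, D.D.F, D.M.F, I/C.F, R.F, G.F -- illustrative only.
areas = [120.0, 80.0, 60.0, 40.0, 10.0, 30.0]
H = shannon_index(areas)
```

H is bounded above by ln(s), attained when all s land uses occupy equal shares; values close to that bound indicate a finely mixed fabric.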
The selected cities represent the major countries in Europe. Northern countries are excluded from the comparison as they have developed under different, prosperous circumstances. The countries were chosen to represent Western, Eastern, and Southern Europe. All land use areas for the thirteen European cities were calculated using GIS shapefiles from the "Urban Atlas" project, certified by the European Environment Agency, an agency of the European Union [21]. Table 2 highlights the above-defined land use values for thirteen countries in Europe. It also shows the calculations of the Shannon diversity index (H) for each city's urban agglomeration and, at the end, the polycentricity index (P) for each country. The criterion for selecting the countries was to compare countries with different backgrounds and situations: Western countries represent the wealthy and prosperous ones, Southern countries represent the culturally diverse Mediterranean ones, and Eastern European countries represent a lower standard of living compared to Western countries.
Table 2 Urban fabric land use areas for the 13 selected European countries
Development Index formulation
The second element in the comparison is development. In order to measure development accurately, the UN-Habitat defined a new comprehensive meaning of development that is not limited to economic growth but extends to take into account all elements and factors that sustain growth. The new development term is claimed to be achieved through certain goals declared as a measurement approach for development in general. In fact, much criticism was directed at the UN-Habitat definition of development, as it included many parameters and aspects, while many urban planners and economists define development only as "the economic growth that pursues positive change for the society." The new definition goes beyond the limited understanding of city development towards a comprehensive, developed, resilient one in the "new normal." Table 3 highlights the values of every goal for each of the selected thirteen countries using a color code. The code classifies the values into four main categories of goal fulfillment: green indicates goal achievement, yellow indicates that challenges remain, orange shows significant challenges, and red indicates major challenges [22]. The seventeen goals are as follows: no poverty; zero hunger; good health and well-being; quality education; gender equality; clean water and sanitation; affordable and clean energy; decent work and economic growth; industry, innovation and infrastructure; reduced inequality; sustainable cities and communities; responsible consumption and production; climate action; life below water; life on land; peace, justice and strong institutions; and partnerships to achieve the goals.
Table 3 Development index developed by UN-Habitat extracted from UN-Habitat SDGs annual report 2019
With the aid of the previously mentioned seventeen goals, a development index was computed as the mean of the seventeen scores, indicating the degree of development progress for each country, as shown in Table 3.
A relationship between the developed polycentricity/independency index and the development index calculated by the UN-Habitat was established, as shown in Table 4, in order to investigate to what extent independency or concentricity can influence the development process (Fig. 2). The development values were standardized using logarithmic normalization to protect the analysis from deviation or error.
Table 4 Polycentricity index values and normalized development index values for 13 European countries
The relationship between the polycentricity index and the development index for selected 13 European countries
It is clear from the chart shown in Fig. 1 that countries with a high polycentricity or independency index tend to show high values of the development index. The relationship is considerably uniform and directly proportional. Concentric-based countries show lower development index values than those based on the polycentric approach. The Pearson correlation coefficient R = 0.6599 indicates a fairly strong relationship between macro urban fabric polycentricity and achieving high values of development. The analysis shows that countries hosting independent urban agglomerations record higher values of development, whereas countries hosting a primary core city with dependent satellite cities show lower values of development.
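The Pearson coefficient reported here can be reproduced for any pair of index series. The sketch below uses hypothetical polycentricity and development values, not the paper's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical polycentricity (P) and normalized development index values
# for a handful of countries -- illustrative numbers only.
p_index = [0.81, 0.55, 0.72, 0.40, 0.66]
dev_index = [0.90, 0.60, 0.80, 0.50, 0.70]
r = pearson_r(p_index, dev_index)
```

Values of r near +1 correspond to the directly proportional relationship described above.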
The paper has demonstrated the two main approaches to urbanization: a concentric one depending on a core and peripheries, or a multicentered one with nodes that are linked and connected together. The paper investigates the best urban practice for city planning. A resilient urban center that can cope alone with different circumstances has been declared the best urban model. Consequently, independency has been chosen as the best measurement approach for center definition and elaboration. In other words, not every urban agglomeration can act as a long-lasting center; it should be self-sufficient first. In fact, the diversity index for land use, as an indicator of independency, is claimed to be a sensitive measure of macro urban structure polycentricity as well; i.e., diversity within unity reflects a fine combination of residential categories. A continuous dense fabric in the CBD, a discontinuous dense fabric at the outer ring, and a medium-dense fabric at the peripheries give the opportunity to integrate green areas and other land uses within the residential fabric, combating informal scattered sprawl in the end. It also provides a good mix of service and production employment through commercial and industrial land use [22]. This plays an important role in reducing commuting across the city, saving time and money. It should be stated that the independency generated from intra-fabric interactions is clearly reflected in the inter-fabric metabolism. It is also worth saying that diversity entails allocating each land use to its optimal location to maximize the benefits from the activities carried out within it; i.e., land uses located in inappropriate positions might yield minimal benefit or have a negative impact. For example, continuous urban fabric, which consists mainly of compact areas, has its greatest impact when centered in the city, as it acts as a CBD for the urban area.
Also, medium-dense fabric works efficiently at the peripheries, as it plays an important role in combating sprawl by creating a porous fabric at the outskirts instead of informal scattered areas. The variation in social distances across the fabric promotes new activities and experiences (Yunda and Jiao 2019; Abusaada and Elshater 2020) [23]. For instance, opportunities for meditation can be developed, alongside guiding people towards engaging in multiple areas of interest.
Epidemics and pandemics have played an important role in urban history. The creation of parks, promenades, and public squares in European cities, for example, was an early attempt to provide safer urban spaces. Perhaps the largest impact was the rise of the public health movement in the nineteenth and twentieth centuries. Public health urban initiatives were attempts to develop open spaces in cities as porous areas that absorb the emissions resulting from activities performed within the fabric [24]. This potential was not considered carefully before the pandemic; it was regarded as a general public health safety precaution. Yet it proved to be a crucial land use that combats the pandemic's implications and minimizes its effects.
Results have broadly demonstrated that polycentricity, when properly applied in its true and deepest sense, plays an important role in boosting economic growth and, ultimately, development. Some economists and planners have chiefly criticized the idea of decentralization and independency; they still advocate the traditional model of concentric planning and subordinate urban expansion as the most successful one. To judge whether polycentric urbanization achieves satisfactory outcomes in the development process, it should be explicitly stated that polycentricity works efficiently when the concept of synergy is applied. Synergy means that a combination is more effective than the simple aggregate of its parts. It also requires that every part be independent and self-sufficient, so that when the parts collaborate, the assembly yields new, complex, profitable added value. Satellite cities, on the other hand, are based on complementing needs. Complementarity entails the dependency of one part on the other; i.e., it could be just a simple exchange of raw materials to form an ordinary product. The reliance of one urban agglomeration upon another could be understood under traditional circumstances. However, in a pandemic such as COVID-19, the dependency of an urban area could be catastrophic. To be clearer, when an urban area is infected, most production and service activities stop as a result of the complete shutdown. Here comes the greatest benefit of polycentricity in combating the spread of the pandemic. As mentioned, polycentricity depends upon urban independency [24]. Therefore, when an urban area is infected, it can be isolated from the others until it recovers, without infecting any other areas. At the same time, being self-sufficient enables the infected zone to sustain itself and recover, thanks to the economic externalities it generated in the past, without major aid from other urban areas.
On the contrary, in the monocentric model, all urban areas depend upon the main core for their essential needs. If the core is infected, paralysis will affect all the urban areas dependent on it, and activity throughout the metropolitan area will stop. Overall, while the link between COVID-19 prevalence and urban design characteristics has generated much debate in the media and the public, the existing literature does not specify in much detail how different design measures such as connectivity, block size, land use mix, and polycentricity influence the infection and mortality rates of COVID-19 and the capacity of cities to respond to the pandemic. However, according to the early findings, planners are recommended to keep advocating compact forms of urban development rather than sprawling ones, because various other benefits of compact urban development are demonstrated in the literature [25] (Connolly et al., 2020b; Hamidi et al., 2020; Sharifi, 2019a, b).
The other foremost debate was that centralization leads to the accumulation of economic externalities, while decentralization leads to the dispersion and fragmentation of investments. However, this paper has concluded with a major extension of this argument. In a nutshell, it can be claimed that an independent urban agglomeration that can stand alone is the only candidate able to contribute to the polycentricity concept and achieve high development records. Conversely, any dependent subordinate agglomerations or satellite towns (whatever their sizes or populations) may distort the results and undermine the whole idea of polycentricity through an improper approach or measurement method [9].
To summarize, polarity is the key factor in achieving development through attracting investments. The attraction effect means finding positive relations between the different inputs, not just blind accumulation, which could have negative or repulsive effects. The concentric urban structure model is not beneficial, as it entails cooperation (a sort of dependent, neutral, horizontal complementarity). The polycentric urban structure model entails collaboration between different actors. At the beginning of polycentricity, a sort of independent vertical complementarity occurs to form a product (the mass production phase). Polycentricity reaches its full potential when a sort of synergic complementarity occurs between the independent actors, forming a new product each time they combine (the innovative phase). In a nutshell, collaboration, not cooperation, is always needed to achieve development [26], because collaboration is an independence-driven process while cooperation is a dependent one. Saturation is a critical case in polycentric planning because the synergic collaboration process is a complex one with a long-term effect reflected in the quality of life. In addition, if a center reaches the saturation phase, it automatically entails the monotonous accumulation of investments and actors; it then loses its polarity, and consequently its magnetic attraction between the different actors becomes weaker. On the other hand, vertical collaboration is a pillar for fast complementarity, new market openings, and economic boosts reflected in GDP per capita. In all cases, a balance between synergic and vertical collaboration is always needed to avoid market saturation and the formation of repulsive poles instead of attractive ones. The desired urban equilibrium can achieve the best development scores, as in the example of Germany.
It is also considered that, in the future, polycentric planning should take into account the study of the main land use compositions that achieve the best results for independency and productivity. In this study, the land uses were unified across the different countries to guarantee the accuracy of measurement. However, other compositions of uses might give more beneficial and accurate results than the ones used here. The land uses used are general ones; a more detailed land use classification could be a more precise indicator, better representing the metabolic interactions between the different land uses and, consequently, the impact on development.
Data generated or analyzed during this study are included in this published article [and its supplementary additional information files]. The datasets generated and/or analyzed during the current study are available in the following link: https://www.eea.europa.eu/data-and-maps/data/urban-atlas.
C.F:
Continuous urban fabric
D.D.F:
Discontinuous high-density urban fabric
D.M.F:
Discontinuous medium-density urban fabric
I/C.F:
Industrial or commercial units and public facilities
R.F:
Railway network
G.F:
Urban green areas
F.U.A:
Functional urban area
Coefficient of variation
Polycentricity index
Beckmann M (1976) Spatial equilibrium in the dispersed city in mathematical land use theory. Lexington Books, Lexington
Hamidi S, Sabouri S, Ewing R (2020) Does density aggravate the COVID-19 pandemic? Journal of the American Planning Association 86(4):495–509. https://doi.org/10.1080/01944363.2020.1777891
Krzysztofik R, Kantor-Pietraga I, Spórna T (2020) Spatial and functional dimensions of the COVID-19 epidemic in Poland. Eurasian Geography and Economics 61(4-5):573–586. https://doi.org/10.1080/15387216.2020.1783337
Alonso W (1964) Location and land use. Harvard University Press, Cambridge. https://doi.org/10.4159/harvard.9780674730854
Fujita M (1986) Urban land use theory in location theory. Harwood Academic Publishers, London
Fujita M (1989) Urban economic theory: land use and city size. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511625862
Fujita M, Krugman P (1995) When is the economy monocentric?: von Thünen and Chamberlin unified. Reg Sci Urban Econ 25(4):505–528
Fujita M, Thisse J-F (2002) Economics of agglomeration: cities, industrial location, and regional growth. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511805660
Rauhut D (2016) Polycentricity: a critical discussion
Burger M, Meijers E (2016) Agglomerations and the rise of urban network externalities. Pap Reg Sci 95:1–17
Faludi A (2005) Polycentric territorial cohesion policy. Town Plan Rev 76(1):107–118. https://doi.org/10.3828/tpr.76.1.9
Meijers E, Hoogerbrugge M, Cardoso R (2017) Beyond polycentricity: does stronger integration between cities in polycentric urban regions improve performance?
Brezzi M, Veneri P (2015) Assessing polycentric urban systems in the OECD: country, regional and metropolitan perspectives. Eur Plan Stud 23(6):1128–1145. https://doi.org/10.1080/09654313.2014.905005
Herschel T (2009) City regions, polycentricity and the construction of peripheralities through governance. Urban Res Pract 2(3):250
Yeh A, Yuan H-q (2008) Satellite town development in China: problems and prospects
Wang M (2020) Polycentric urban development and urban amenities: evidence from Chinese cities
Meijers E (2008) Measuring polycentricity and its promises. European Planning Studies 16(9):1313–1323. https://doi.org/10.1080/09654310802401805
Stephan M, Marshal GR (2019) An introduction to polycentricity and governance
McMillen DP (2001) Polycentric urban structure: the case of Milwaukee. Econ Perspect Fed Reserv Bank Chic 25(Q II):15–27
Gordon I (2010) Entropy, variety, economics, and spatial interaction. Geogr Anal 42(4):446–471. https://doi.org/10.1111/j.1538-4632.2010.00802.x
EEA (2020), Urban Atlas, https://www.eea.europa.eu/data-and-maps/data/urban-atlas. Accessed 18 Aug 2020.
UN-Habitat (2019) European Sustainable Development Goals annual report. United Nations press
Abusaada H, Elshater A (2020) COVID-19's challenges to urbanism: social distancing and the phenomenon of boredom in urban spaces. Journal of Urbanism: International Research on Placemaking and Urban Sustainability 1–3. https://doi.org/10.1080/17549175.2020.1842484
Martinez L, Rennie J (2021) The pandemic city: urban issues in the time of COVID-19
Sharifi A, Khavarian-Garmsir AR (2020) The COVID-19 pandemic: impacts on cities and major lessons for urban planning, design, and management. Science of The Total Environment 749:142391. https://doi.org/10.1016/j.scitotenv.2020.142391
Han S, Sun B, Zhang T (2020) Mono- and polycentric urban spatial structure and PM 2.5 concentrations: regarding the dependence on population density. Habitat Int 104:102257
I would like to express my special gratitude to Professor Tarek Abdellatif AboElatta for his continuous support and supervision. I would also like to thank Professor Ahmed Monir for providing me with professional materials.
All research costs were funded by the author, with no additional funding from any organization.
Department of Architecture, Faculty of Engineering, Cairo University, Giza, Egypt
Ashraf Sami Mahmoud Abozeid & Tarek Abdellatif AboElatta
Ashraf Sami Mahmoud Abozeid
Tarek Abdellatif AboElatta
A.S.A. carried out the entire research process, from the literature review through to the results and analysis phase. T.A.A. revised the whole manuscript carefully, checking the content of the research and approving the statistical calculations carried out in it. All authors have read and approved the final manuscript.
Correspondence to Ashraf Sami Mahmoud Abozeid.
Abozeid, A.S.M., AboElatta, T.A. Polycentric vs monocentric urban structure contribution to national development. J. Eng. Appl. Sci. 68, 11 (2021). https://doi.org/10.1186/s44147-021-00011-1
Independency
Urban structure
Urban agglomeration
Urban diversity
Polycentricity
Concentricity
Union (set theory)
In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection.[1] It is one of the fundamental operations through which sets can be combined and related to each other. A nullary union refers to a union of zero ($0$) sets and it is by definition equal to the empty set.
For explanation of the symbols used in this article, refer to the table of mathematical symbols.
Union of two sets
The union of two sets A and B is the set of elements which are in A, in B, or in both A and B.[2] In set-builder notation,
$A\cup B=\{x:x\in A{\text{ or }}x\in B\}$.[3]
For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:
A = {x : x is an even integer larger than 1}
B = {x : x is an odd integer larger than 1}
$A\cup B=\{2,3,4,5,6,\dots \}$
As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.
Sets cannot have duplicate elements,[3][4] so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.
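For instance, in Python (where the built-in `set` type discards duplicates automatically), the union of two sets can be formed with the `|` operator or `set.union`:

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}
U = A | B                  # equivalent to A.union(B): {1, 2, 3, 4, 5, 6, 7}

# Duplicate elements collapse: {1, 2, 3} and {2, 3, 4} share 2 and 3.
D = {1, 2, 3} | {2, 3, 4}  # {1, 2, 3, 4}
```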
Algebraic properties
See also: List of set identities and relations and Algebra of sets
Binary union is an associative operation; that is, for any sets $A,B,{\text{ and }}C,$
$A\cup (B\cup C)=(A\cup B)\cup C.$
Thus, the parentheses may be omitted without ambiguity: either of the above can be written as $A\cup B\cup C.$ Also, union is commutative, so the sets can be written in any order.[5] The empty set is an identity element for the operation of union. That is, $A\cup \varnothing =A,$ for any set $A.$ Also, the union operation is idempotent: $A\cup A=A.$ All these properties follow from analogous facts about logical disjunction.
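These four properties (associativity, commutativity, identity, idempotence) can be checked mechanically on concrete sets, e.g. in Python:

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}

assert A | (B | C) == (A | B) | C   # associativity
assert A | B == B | A               # commutativity
assert A | set() == A               # the empty set is the identity element
assert A | A == A                   # idempotence
```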
Intersection distributes over union
$A\cap (B\cup C)=(A\cap B)\cup (A\cap C)$
and union distributes over intersection[2]
$A\cup (B\cap C)=(A\cup B)\cap (A\cup C).$
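Both distributive laws can likewise be verified on sample sets (Python, illustrative values only):

```python
A, B, C = {1, 2, 5}, {2, 3}, {3, 4, 5}

assert A & (B | C) == (A & B) | (A & C)   # intersection distributes over union
assert A | (B & C) == (A | B) & (A | C)   # union distributes over intersection
```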
The power set of a set $U,$ together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula
$A\cup B=\left(A^{\text{c}}\cap B^{\text{c}}\right)^{\text{c}},$
where the superscript ${}^{\text{c}}$ denotes the complement in the universal set $U.$
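A small Python sketch makes the complementation formula concrete; the universal set U below is chosen arbitrarily for illustration:

```python
U = set(range(10))            # an arbitrary universal set for the example
A, B = {1, 2, 3}, {3, 4, 5}

def comp(S):
    """Complement of S relative to the universal set U."""
    return U - S

# Union expressed through intersection and complementation
assert A | B == comp(comp(A) & comp(B))
```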
Finite unions
One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C.
A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.[6][7]
Arbitrary unions
The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A.[8] In symbols:
$x\in \bigcup \mathbf {M} \iff \exists A\in \mathbf {M} ,\ x\in A.$
This idea subsumes the preceding sections—for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
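In Python, a union over an arbitrary finite collection M of sets can be written as `set().union(*M)`; note that the union over the empty collection is the empty set, matching the remark above:

```python
M = [{1, 2}, {2, 3}, {5}]
big_union = set().union(*M)     # elements belonging to at least one set in M

empty_union = set().union(*[])  # union over the empty collection
```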
Notations
The notation for the general concept can vary considerably. For a finite union of sets $S_{1},S_{2},S_{3},\dots ,S_{n}$ one often writes $S_{1}\cup S_{2}\cup S_{3}\cup \dots \cup S_{n}$ or $\bigcup _{i=1}^{n}S_{i}$. Various common notations for arbitrary unions include $\bigcup \mathbf {M} $, $\bigcup _{A\in \mathbf {M} }A$, and $\bigcup _{i\in I}A_{i}$. The last of these notations refers to the union of the collection $\left\{A_{i}:i\in I\right\}$, where I is an index set and $A_{i}$ is a set for every $i\in I$. In the case that the index set I is the set of natural numbers, one uses the notation $\bigcup _{i=1}^{\infty }A_{i}$, which is analogous to that of the infinite sums in series.[8]
When the symbol "∪" is placed before other symbols (instead of between them), it is usually rendered as a larger size.
Notation encoding
In Unicode, union is represented by the character U+222A ∪ UNION.[9] In TeX, $\cup $ is rendered from \cup and $\bigcup $ is rendered from \bigcup.
See also
• Algebra of sets – Identities and relationships involving sets
• Alternation (formal language theory) – the union of two sets of strings or patterns in formal language theory and pattern matching
• Axiom of union – Concept in axiomatic set theory
• Disjoint union – In mathematics, operation on sets
• Inclusion–exclusion principle – Counting technique in combinatorics
• Intersection (set theory) – Set of elements common to all of some sets
• Iterated binary operation – Repeated application of an operation to a sequence
• List of set identities and relations – Equalities for combinations of sets
• Naive set theory – Informal set theories
• Symmetric difference – Elements in exactly one of two sets
Notes
1. Weisstein, Eric W. "Union". Wolfram Mathworld. Archived from the original on 2009-02-07. Retrieved 2009-07-14.
2. "Set Operations | Union | Intersection | Complement | Difference | Mutually Exclusive | Partitions | De Morgan's Law | Distributive Law | Cartesian Product". Probability Course. Retrieved 2020-09-05.
3. Vereshchagin, Nikolai Konstantinovich; Shen, Alexander (2002-01-01). Basic Set Theory. American Mathematical Soc. ISBN 9780821827314.
4. deHaan, Lex; Koppelaars, Toon (2007-10-25). Applied Mathematics for Database Professionals. Apress. ISBN 9781430203483.
5. Halmos, P. R. (2013-11-27). Naive Set Theory. Springer Science & Business Media. ISBN 9781475716450.
6. Dasgupta, Abhijit (2013-12-11). Set Theory: With an Introduction to Real Point Sets. Springer Science & Business Media. ISBN 9781461488545.
7. "Finite Union of Finite Sets is Finite". ProofWiki. Archived from the original on 11 September 2014. Retrieved 29 April 2018.
8. Smith, Douglas; Eggen, Maurice; Andre, Richard St (2014-08-01). A Transition to Advanced Mathematics. Cengage Learning. ISBN 9781285463261.
9. "The Unicode Standard, Version 15.0 - Mathematical Operators - Range: 2200–22FF" (PDF). Unicode. p. 3.
External links
• "Union of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Infinite Union and Intersection at ProvenMath De Morgan's laws formally proven from the axioms of set theory.
\begin{document}
\title{Laplace transform in spaces of ultradistributions}
\author{Bojan Prangoski}
\date{} \maketitle
\begin{abstract} The Laplace transform in Komatsu ultradistributions is considered. Also, conditions are given under which an analytic function is a Laplace transformation of an ultradistribution. \end{abstract}
\noindent \textbf{Mathematics Subject Classification} 46F05; 46F12, 44A10\\ \textbf{Keywords} ultradistributions, Laplace transform
\section{Introduction}
The Laplace transform of distributions was defined and studied by Schwartz \cite{SchwartzK}. Later, Carmichael and Pilipovi\'c in \cite{CP} (see also \cite{PilipovicK}) considered the Laplace transform in the space $\Sigma'_{\alpha}$ of Beurling-Gevrey tempered ultradistributions and obtained some results concerning the so-called tempered convolution. In particular, they gave a characterization of the space of Laplace transforms of elements of $\Sigma'_{\alpha}$ supported by an acute closed cone in $\mathbb R^d$. Komatsu made a great contribution to the investigation of the Laplace transform in ultradistribution and hyperfunction spaces, considering them over appropriate domains; see \cite{kl} and the references therein (see also \cite{zh}). Michalik in \cite{Mic} and Lee and Kim in \cite{Kim} adapted the spaces of ultradistributions and Fourier hyperfunctions to the definition of the Laplace transform, following the ideas of Komatsu. Our approach is different: we develop the theory within the already constructed spaces of ultradistributions of Beurling and Roumieu type. The ideas in the proofs of the two main theorems (theorem \ref{t1} and theorem \ref{t2}) are similar to those in \cite{Vladimirov} for the case of Schwartz distributions. These theorems characterize ultradistributions defined on the whole of $\mathbb R^d$ through estimates of their Laplace transforms. This is the main point of our investigation, in contrast to other authors, who investigated generalized functions supported by cones. We consider a restricted class of ultradistributions, assuming conditions $(M.1), (M.2)$ and $(M.3)$ (for example, the cases $M_p=p!^s$, $s>1$), in order to obtain fine representations through the analysis of the corresponding class of subexponentially bounded entire functions. Under weaker conditions, $(M.3)'$ instead of $(M.3)$, or even in the quasianalytic case, one can obtain different, technically more complicated, structural representations.
\section{Preliminaries}
The sets of natural, integer, positive integer, real and complex numbers are denoted by $\mathbb N$, $\mathbb Z$, $\mathbb Z_+$, $\mathbb R$, $\mathbb C$. For $x\in \mathbb R^d$ we use the symbols $\langle x\rangle =(1+|x|^2)^{1/2} $, $D^{\alpha}= D_1^{\alpha_1}\ldots D_d^{\alpha_d}$, $D_j^{\alpha_j}={i^{-1}}\partial^{\alpha_j}/{\partial x_j}^{\alpha_j}$, $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_d)\in\mathbb N^d$. If $z\in\mathbb C^d$, by $z^2$ we will denote $z^2_1+...+z^2_d$. Note that, if $x\in\mathbb R^d$, $x^2=|x|^2$.\\ \indent Following \cite{Komatsu1}, we denote by $M_{p}$ a sequence of positive numbers with $M_0=1$ such that:\\ \indent $(M.1)$ $M_{p}^{2} \leq M_{p-1} M_{p+1}, \; \; p \in\mathbb Z_+$;\\ \indent $(M.2)$ $\displaystyle M_{p} \leq c_0H^{p} \min_{0\leq q\leq p} \{M_{p-q} M_{q}\}$, $p\in \mathbb N$, for some $c_0,H\geq1$;\\ \indent $(M.3)$ $\displaystyle\sum^{\infty}_{p=q+1} \frac{M_{p-1}}{M_{p}}\leq c_0q \frac{M_{q}}{M_{q+1}}$, $q\in \mathbb Z_+$,\\
although in some assertions we could assume the weaker conditions $(M.2)'$ and $(M.3)'$ (see \cite{Komatsu1}). For a multi-index $\alpha\in\mathbb N^d$, $M_{\alpha}$ will mean $M_{|\alpha|}$, $|\alpha|=\alpha_1+...+\alpha_d$. Recall that $m_p=M_p/M_{p-1}$, $p\in\mathbb Z_+$, and the associated function of the sequence $M_{p}$ is defined by \begin{eqnarray*} M(\rho)=\sup _{p\in\mathbb N}\log_+ \frac{\rho^{p}}{M_{p}} , \; \; \rho > 0. \end{eqnarray*} It is a non-negative, continuous, monotonically increasing function, which vanishes for sufficiently small $\rho>0$ and increases more rapidly than $(\ln \rho)^p$ as $\rho$ tends to infinity, for any $p\in\mathbb N$.\\
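For orientation (this computation is not used in the proofs below), in the Gevrey case $M_p=p!^s$, $s>1$, mentioned in the introduction, the growth of the associated function can be found to leading order by a back-of-the-envelope computation with Stirling's formula:

```latex
% Gevrey sequence M_p = p!^s, s>1: by Stirling's formula,
% \log(\rho^p/p!^s) \approx p\log\rho - s(p\log p - p); maximizing over p,
% the maximum is attained near p = \rho^{1/s}, which gives, to leading order,
M(\rho)=\sup_{p\in\mathbb N}\log_+\frac{\rho^{p}}{p!^{\,s}}
 \sim s\,\rho^{1/s}, \qquad \rho\to\infty .
% Hence e^{M(k|\eta|)} grows like e^{sk^{1/s}|\eta|^{1/s}}: this is the
% subexponential growth referred to in the introduction.
```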
\indent Let $U\subseteq\mathbb R^d$ be an open set and $K\subset\subset U$ (we will always use this notation for a compact subset of an open set). Then $\mathcal E^{\{M_p\},h}(K)$ is the space of all $\varphi\in \mathcal{C}^{\infty}(U)$ which satisfy $\displaystyle\sup_{\alpha\in\mathbb N^d}\sup_{x\in K}\frac{|D^{\alpha}\varphi(x)|}{h^{\alpha}M_{\alpha}}<\infty$ and $\mathcal D^{\{M_p\},h}_K$ is the space of all $\varphi\in \mathcal{C}^{\infty}\left(\mathbb R^d\right)$ with supports in $K$, which satisfy $\displaystyle\sup_{\alpha\in\mathbb N^d}\sup_{x\in K}\frac{|D^{\alpha}\varphi(x)|}{h^{\alpha}M_{\alpha}}<\infty$; $$ \mathcal E^{(M_p)}(U)=\lim_{\substack{\longleftarrow\\ K\subset\subset U}}\lim_{\substack{\longleftarrow\\ h\rightarrow 0}} \mathcal E^{\{M_p\},h}(K),\,\,\,\, \mathcal E^{\{M_p\}}(U)=\lim_{\substack{\longleftarrow\\ K\subset\subset U}} \lim_{\substack{\longrightarrow\\ h\rightarrow \infty}} \mathcal E^{\{M_p\},h}(K), $$ \begin{eqnarray*} \mathcal D^{(M_p)}_K=\lim_{\substack{\longleftarrow\\ h\rightarrow 0}} \mathcal D^{\{M_p\},h}_K,\,\,\,\, \mathcal D^{(M_p)}(U)=\lim_{\substack{\longrightarrow\\ K\subset\subset U}}\mathcal D^{(M_p)}_K,\\ \mathcal D^{\{M_p\}}_K=\lim_{\substack{\longrightarrow\\ h\rightarrow \infty}} \mathcal D^{\{M_p\},h}_K,\,\,\,\, \mathcal D^{\{M_p\}}(U)=\lim_{\substack{\longrightarrow\\ K\subset\subset U}}\mathcal D^{\{M_p\}}_K. \end{eqnarray*} The spaces of ultradistributions and ultradistributions with compact support of Beurling and Roumieu type are defined as the strong duals of $\mathcal D^{(M_p)}(U)$ and $\mathcal E^{(M_p)}(U)$, resp. $\mathcal D^{\{M_p\}}(U)$ and $\mathcal E^{\{M_p\}}(U)$. For the properties of these spaces, we refer to \cite{Komatsu1}, \cite{Komatsu2} and \cite{Komatsu3}. In the sequel we will not emphasize the set $U$ when $U=\mathbb R^d$. 
Also, the common notation for the symbols $(M_{p})$ and $\{M_{p}\}$ will be $*$.\\ \indent If $f\in L^{1} $, then its Fourier transform is defined by $ (\mathcal{F}f)(\xi ) = \hat{f} (\xi) = \int_{{\mathbb R^d}} e^{-ix\xi}f(x)dx, \; \; \xi \in {\mathbb R^d}. $
By $\mathfrak{R}$ we denote the set of positive sequences which monotonically increase to infinity. For $(r_p)\in\mathfrak{R}$, consider the sequence $N_0=1$, $N_p=M_p\prod_{j=1}^{p}r_j$, $p\in\mathbb Z_+$. One easily sees that this sequence satisfies $(M.1)$ and $(M.3)'$; its associated function will be denoted by $N_{r_p}(\rho)$, i.e. $\displaystyle N_{r_{p}}(\rho )=\sup_{p\in\mathbb N} \log_+ \frac{\rho^{p }}{M_p\prod_{j=1}^{p}r_j}$, $\rho > 0$. Note that, for given $(r_{p})$ and every $k > 0$, there is $\rho _{0} > 0$ such that $\displaystyle N_{r_{p}} (\rho ) \leq M(k \rho )$ for $\rho > \rho _{0}$.\\
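The last assertion admits a short proof sketch (the constants $j_0$ and $C$ are introduced only for this illustration): since $r_j\nearrow\infty$, given $k>0$ choose $j_0$ with $r_j\geq 2/k$ for $j>j_0$; then $\prod_{j=1}^{p}r_j\geq C(2/k)^{p}$ for all $p$, with some $0<C\leq1$ depending on $k$ and on finitely many $r_j$. Hence

```latex
% Sketch of the comparison N_{r_p}(\rho) \le M(k\rho) for large \rho:
N_{r_p}(\rho)=\sup_{p\in\mathbb N}\log_+\frac{\rho^{p}}{M_p\prod_{j=1}^{p}r_j}
 \leq \sup_{p\in\mathbb N}\log_+\frac{(k\rho/2)^{p}}{M_p}+\log\frac{1}{C}
 = M(k\rho/2)+\log\frac{1}{C},
% and M(k\rho)-M(k\rho/2)\to\infty as \rho\to\infty (the maximizing index p
% in the definition of M tends to infinity, contributing p\log 2), so the
% additive constant is absorbed: N_{r_p}(\rho)\le M(k\rho) for \rho>\rho_0.
```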
\indent It is said that $\displaystyle P(\xi ) =\sum _{\alpha \in \mathbb N^d}c_{\alpha } \xi^{\alpha}$, $\xi \in \mathbb R^d$, is an ultrapolynomial of the class $(M_{p})$, resp. $\{M_{p}\}$, whenever the coefficients $c_{\alpha }$ satisfy the estimate $|c_{\alpha }| \leq C L^{|\alpha| }/M_{\alpha}$, $\alpha \in \mathbb N^d$, for some $L > 0$ and $C>0$, resp. for every $L > 0 $ and some $C_{L} > 0$. The corresponding operator $P(D)=\sum_{\alpha} c_{\alpha}D^{\alpha}$ is an ultradifferential operator of the class $(M_{p})$, resp. $\{M_{p}\}$; such operators act continuously on $\mathcal E^{(M_p)}(U)$ and $\mathcal D^{(M_p)}(U)$, resp. $\mathcal E^{\{M_p\}}(U)$ and $\mathcal D^{\{M_p\}}(U)$, and on the corresponding spaces of ultradistributions.\\
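As a concrete illustration (a sketch; only the elementary lower bound is verified here, the coefficient estimates being in \cite{Komatsu1}): for $\ell>0$ and $d=1$, consider

```latex
P_{\ell}(\zeta)=\prod_{p=1}^{\infty}\left(1+\frac{\zeta^{2}}{\ell^{2}m_{p}^{2}}\right),
\qquad \zeta\in\mathbb C,
% which converges since (M.3) gives \sum_p 1/m_p < \infty. For real \xi,
% keeping only the first q factors and using m_1\cdots m_q = M_q,
|P_{\ell}(\xi)|\geq\sup_{q\in\mathbb N}\frac{(|\xi|/\ell)^{2q}}{M_{q}^{2}}
 = e^{2M(|\xi|/\ell)} .
% One can verify (see Komatsu \cite{Komatsu1}) that the Taylor coefficients
% of P_\ell satisfy the bounds above with L proportional to 1/\ell, so
% P_\ell(D) is an ultradifferential operator of class (M_p).
```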
\indent We denote by $\mathcal S^{M_{p},m}_{2} \left(\mathbb R^d\right)$, $m > 0$, the space of all smooth functions $\varphi$ which satisfy \begin{eqnarray}\label{75}
\sigma_{m,2}(\varphi ): = \left( \sum_{\alpha,\beta\in\mathbb N^d} \int_{\mathbb R^d} \left|\frac{m^{|\alpha|+|\beta|}\langle x\rangle^{|\alpha|}D^{\beta}\varphi(x)}{M_{\alpha}M_{\beta}}\right| ^{2} dx \right) ^{1/2}<\infty, \end{eqnarray} supplied with the topology induced by the norm $\sigma _{m,2}$. The spaces $\mathcal S'^{(M_{p})}$ and $\mathcal S'^{\{M_{p}\}}$ of tempered ultradistributions of Beurling and Roumieu type respectively, are defined as the strong duals of the spaces $\displaystyle\mathcal S^{(M_{p})}=\lim_{\substack{\longleftarrow\\ m\rightarrow\infty}}\mathcal S^{M_{p},m}_{2}\left(\mathbb R^d\right)$ and $\displaystyle\mathcal S^{\{M_{p}\}}=\lim_{\substack{\longrightarrow\\ m\rightarrow 0}}\mathcal S^{M_{p},m}_{2}\left(\mathbb R^d\right)$, respectively. All the good properties of $\mathcal S^*$ and its strong dual follow from the equivalence of the sequence of norms $\sigma_{m,2}$, $m > 0$, with each of the following sequences of norms (see \cite{PilipovicK}, \cite{PilipovicU}):\\ \indent $(a)$ $\sigma_{m,p}$, $m > 0$; $p \in [1, \infty ]$ is fixed;\\
\indent $(b)$ $s_{m,p}$, $m > 0$; $p \in [1,\infty ]$ is fixed, where $\displaystyle s_{m,p}(\varphi): =\sum_{\alpha ,\beta \in \mathbb N^d}\frac{m^{|\alpha| +|\beta| }\| |\cdot|^{\beta }D^{\alpha}\varphi(\cdot)\|_{L^p}}{M_{\alpha }M_{\beta }}$;\\
\indent $(c)$ $s_{m}$, $m > 0$, where $\displaystyle s_{m}(\varphi):=\sup_{\alpha\in \mathbb N^d}\frac{m^{|\alpha|}
\| D^{\alpha}\varphi(\cdot) e^{M(m|\cdot|)}\|_{L_{\infty}}}{M_{\alpha }}$.\\
If we denote by $\mathcal S^{M_p,m}_{\infty}\left(\mathbb R^d\right)$ the space of all infinitely differentiable functions on $\mathbb R^d$ for which the norm $\sigma_{m,\infty}$ is finite (obviously it is a Banach space), then $\displaystyle\mathcal S^{(M_p)}\left(\mathbb R^d\right)=\lim_{\substack{\longleftarrow\\ m\rightarrow\infty}} \mathcal S^{M_p,m}_{\infty}\left(\mathbb R^d\right)$ and $\displaystyle\mathcal S^{\{M_p\}}\left(\mathbb R^d\right)=\lim_{\substack{\longrightarrow\\ m\rightarrow 0}} \mathcal S^{M_p,m}_{\infty}\left(\mathbb R^d\right)$. Also, for $m_2>m_1$, the inclusion $\mathcal S^{M_p,m_2}_{\infty}\left(\mathbb R^d\right)\longrightarrow\mathcal S^{M_p,m_1}_{\infty}\left(\mathbb R^d\right)$ is a compact mapping. In \cite{PilipovicT} and \cite{PilipovicK} it is proved that $\displaystyle\mathcal S^{\{M_{p}\}} = \lim_{\substack{\longleftarrow\\ r_{i}, s_{j} \in \mathfrak{R}}}\mathcal S^{M_{p}}_{(r_{p}),(s_{q})}$, where $\displaystyle\mathcal S^{M_{p}}_{(r_{p}),(s_{q})}=\left\{\varphi \in \mathcal{C}^{\infty} \left(\mathbb R^d\right)|\gamma _{(r_{p}),(s_{q})}(\varphi)<\infty\right\}$ and \begin{eqnarray*}
\gamma_{(r_{p}),(s_{q})}(\varphi) =\sup_{\alpha,\beta\in \mathbb N^d}\frac{\left\|\langle x\rangle^{|\beta|}D^{\alpha}\varphi(x)\right\|_{L^{2}}} {\left(\prod^{|\alpha|}_{p=1}r_{p}\right)M_{\alpha}\left(\prod^{|\beta|}_{q=1}s_{q}\right)M_{\beta}}. \end{eqnarray*} Also, the Fourier transform is a topological automorphism of $\mathcal S^*$ and of $\mathcal S'^*$.
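The norms in $(c)$ make membership in $\mathcal S^{*}$ easy to check. As a sketch (the constants $C$, $h$, $c$ are hypothetical, introduced only for this illustration): suppose $|D^{\alpha}\varphi(x)|\leq C h^{|\alpha|}\alpha!\,e^{-c\langle x\rangle}$ on $\mathbb R^d$ for some $C,h,c>0$. Since $(M.3)$ gives $\sum_p 1/m_p<\infty$, one has $m_p/p\to\infty$, whence $k^{p}p!/M_p\to 0$ for every $k>0$ and $M(\rho)=o(\rho)$ (both facts are used repeatedly in the proofs below); therefore, for every $m>0$,

```latex
% Bounding s_m(\varphi) under the assumed derivative estimates:
s_{m}(\varphi)
 =\sup_{\alpha\in\mathbb N^d}
  \frac{m^{|\alpha|}\left\|D^{\alpha}\varphi(\cdot)\,e^{M(m|\cdot|)}\right\|_{L^{\infty}}}{M_{\alpha}}
 \leq C\left(\sup_{\alpha\in\mathbb N^d}\frac{(mh)^{|\alpha|}\alpha !}{M_{\alpha}}\right)
  \sup_{x\in\mathbb R^d}e^{M(m|x|)-c\langle x\rangle}<\infty,
% both suprema being finite by the two facts above; so \varphi\in\mathcal S^{(M_p)}.
```

Compare the estimate for the derivatives of $e^{-\varepsilon\sqrt{1+|x|^2}}$ obtained in the proof of theorem \ref{t1}, which is exactly of this form.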
\section{Laplace transform}
For a set $B\subseteq\mathbb R^d$ denote by $\mathrm{ch\,}B$ the convex hull of $B$.
\begin{theorem}\label{t1} Let $B$ be a connected open set in $\mathbb R^d_{\xi}$ and $T\in\mathcal D'^{*}(\mathbb R^d_x)$ be such that, for all $\xi\in B$, $e^{-x\xi}T(x)\in\mathcal S'^{*}(\mathbb R^d_x)$. Then the Fourier transform $\mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}T(x)\right)$ is an analytic function of $\zeta=\xi+i\eta$ for $\xi\in \mathrm{ch\,}B$, $\eta\in\mathbb R^d$. Furthermore, it satisfies the following estimates:\\ \indent for every $K\subset\subset\mathrm{ch\,}B$ there exist $k>0$ and $C>0$, resp. for every $k>0$ there exists $C>0$, such that \begin{eqnarray}\label{3}
|\mathcal{F}_{x\rightarrow\eta}(e^{-x\xi}T(x))(\xi+i\eta)|\leq Ce^{M(k|\eta|)},\, \forall \xi\in K, \forall\eta\in\mathbb R^d. \end{eqnarray} \end{theorem}
\begin{proof} Let $K$ be a fixed compact subset of $\mathrm{ch\,}B$. There exist $0<\varepsilon<1/4$ and $\xi^{(1)},...,\xi^{(l)}\in B$ such that the convex hull $\Pi$ of the set $\{\xi^{(1)},...,\xi^{(l)}\}$ contains the closed $4\varepsilon$-neighborhood of $K$ (obviously $\Pi\subset\subset \mathrm{ch\,}B$). We shall prove that the set \begin{eqnarray}\label{5}
\left\{S\in\mathcal D'^{*}|S(x)=T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}},\xi\in K\right\}
\end{eqnarray} is bounded in $\mathcal S'^{*}$. Note that, by the hypothesis of the theorem, $T(x)e^{-x\xi}\in\mathcal S'^{*}$, and $e^{\varepsilon\sqrt{1+|x|^2}}$ is the restriction to the real axis of the function $e^{\varepsilon\sqrt{1+z^2}}$, which is analytic and single valued on the strip $\mathbb R^d+i\{y\in\mathbb R^d||y|<1/4\}$; hence $e^{\varepsilon\sqrt{1+|x|^2}}$ is in $\mathcal E^{*}$. Note that \begin{eqnarray}\label{7}
T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}}=\sum_{k=1}^l e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)T(x)e^{-x\xi^{(k)}}, \end{eqnarray} where $\displaystyle a(x,\xi)=e^{-x\xi}\left(\sum_{k=1}^l e^{-x\xi^{(k)}}\right)^{-1}$. The function $a(x,\xi)$ satisfies the following conditions:\\ \indent $i)$ $0<a(x,\xi)\leq 1$, $(x,\xi)\in\mathbb R^d\times\Pi$;\\
\indent $ii)$ $e^{\varepsilon'\sqrt{1+|x|^2}}a(x,\xi)\leq e^{\varepsilon'}$, $(x,\xi)\in\mathbb R^d\times K$, and $\forall\varepsilon'\leq 4\varepsilon$;\\ \indent $iii)$ $a(x,\xi)\in \mathcal{C}^{\infty}\left(\mathbb R^{2d}\right)$.\\ Property $iii)$ is obvious. To prove $i)$, take $\xi\in\Pi$. Then there exist $t_1,...,t_l\geq0$ such that $\displaystyle\xi=\sum_{k=1}^l t_k\xi^{(k)}$ and $\displaystyle\sum_{k=1}^l t_k=1$. Then, by the weighted arithmetic mean-geometric mean inequality, we have \begin{eqnarray*} e^{-x\xi}=\prod_{k=1}^l e^{-xt_k\xi^{(k)}}\leq\sum_{k=1}^l t_ke^{-x\xi^{(k)}}\leq\sum_{k=1}^l e^{-x\xi^{(k)}}, \end{eqnarray*} which proves $i)$. For the proof of $ii)$, note that, for $(x,\xi)\in\mathbb R^d\times K$,
\begin{eqnarray*} e^{\varepsilon'\sqrt{1+|x|^2}}a(x,\xi)\leq e^{\varepsilon'+\varepsilon'|x|}a(x,\xi)=e^{\varepsilon'}\max_{|t|\leq\varepsilon'}e^{-tx}a(x,\xi)=
e^{\varepsilon'}\max_{|t|\leq\varepsilon'}a(x,\xi+t)\leq e^{\varepsilon'}, \end{eqnarray*} where the last inequality follows from $i)$.\\
\indent Now we will estimate the derivatives of $a(x,\xi)$. Let $\displaystyle s=\max_{\xi\in\Pi} |\xi|$. Then $a(z,\xi)$ is an analytic function of $z=x+iy$ on the strip $\mathbb R^d+i\{y\in\mathbb R^d||y|s<\pi/4\}$, for every fixed $\xi\in\Pi$, because \begin{eqnarray*}
\left|\sum_{k=1}^l e^{-z\xi^{(k)}}\right|^2=\left|\sum_{k=1}^l e^{-x\xi^{(k)}}e^{-iy\xi^{(k)}}\right|^2\geq \left(\sum_{k=1}^l e^{-x\xi^{(k)}}\cos y\xi^{(k)}\right)^2\geq\left(\sum_{k=1}^l e^{-x\xi^{(k)}}\frac{\sqrt{2}}{2}\right)^2, \end{eqnarray*} and hence \begin{eqnarray}\label{10}
\left|\sum_{k=1}^l e^{-z\xi^{(k)}}\right|\geq\frac{\sqrt{2}}{2}\sum_{k=1}^l e^{-x\xi^{(k)}}>0. \end{eqnarray} Take $0<r<1/\sqrt{d}$ so small that $rs\sqrt{d}<\pi/4$. Then, from the Cauchy integral formula, we have \begin{eqnarray*}
|\partial_z^{\alpha}a(x,\xi)|\leq \frac{\alpha !}{r^{|\alpha|}}
\sup_{|w_1-x_1|\leq r,...,|w_d-x_d|\leq r}\left|\frac{e^{-w\xi}}{\sum_{k=1}^l e^{-w\xi^{(k)}}}\right|. \end{eqnarray*} If we use the inequality (\ref{10}), we get (we put $w=u+iv$) \begin{eqnarray*}
\left|\frac{e^{-(u+iv)\xi}}{\sum_{k=1}^l e^{-(u+iv)\xi^{(k)}}}\right|&\leq& \frac{\sqrt{2}e^{-u\xi}}{\sum_{k=1}^l e^{-u\xi^{(k)}}} =\frac{\sqrt{2}e^{-x\xi}e^{-(u-x)\xi}}{\sum_{k=1}^l e^{-x\xi^{(k)}}e^{-(u-x)\xi^{(k)}}}\\
&\leq&\frac{\sqrt{2}e^{-x\xi}e^{|u-x||\xi|}}{\sum_{k=1}^l e^{-x\xi^{(k)}}e^{-|u-x|\left|\xi^{(k)}\right|}} \leq\frac{\sqrt{2}e^{-x\xi}e^{rs\sqrt{d}}} {\sum_{k=1}^l e^{-x\xi^{(k)}}e^{-rs\sqrt{d}}}=\sqrt{2}e^{2rs\sqrt{d}}a(x,\xi). \end{eqnarray*} So, we obtain the estimate \begin{eqnarray}\label{12}
\left|\partial_x^{\alpha}a(x,\xi)\right|\leq \sqrt{2}e^{2s}\frac{\alpha !}{r^{|\alpha|}}a(x,\xi).
\end{eqnarray} Note that, by the previous estimate and the property $ii)$ of $a(x,\xi)$, it follows that $a(x,\xi)\in \mathcal S^{*}$ for every $\xi\in K$ and the set $\{a(x,\xi)|\xi\in K\}$ is a bounded set in $\mathcal S^{*}$. We will estimate the derivatives of $e^{\varepsilon\sqrt{1+|x|^2}}$. The function $e^{\varepsilon\sqrt{1+z^2}}$ is analytic on the strip $\mathbb R^d+i\{y\in\mathbb R^d||y|<1/4\}$, where we take the principal branch of the square root which is single valued and analytic on $\mathbb C\backslash (-\infty,0]$. If we take $r<1/(8d)$, from the Cauchy integral formula, we get the estimate $\displaystyle\left|\partial_z^{\alpha} e^{\varepsilon\sqrt{1+|x|^2}}\right|\leq\frac{\alpha !}{r^{|\alpha|}}
\sup_{|w_1-x_1|\leq r,...,|w_d-x_d|\leq r}\left|e^{\varepsilon\sqrt{1+w^2}}\right|$. Put $w=u+iv$ and estimate as follows \begin{eqnarray*}
\left|e^{\varepsilon\sqrt{1+w^2}}\right|&=& e^{\mathrm{Re\,}\left(\varepsilon\sqrt{1+w^2}\right)}\leq e^{\left|\varepsilon\sqrt{1+w^2}\right|}\leq e^{\varepsilon{\sqrt[4]{(1+|u|^2-|v|^2)^2+4(uv)^2}}}\leq e^{\varepsilon{\sqrt{1+|u|^2-|v|^2+2|uv|}}}\\
&\leq& e^{\varepsilon{\sqrt{1+2|u|^2}}}\leq e^{\varepsilon{\sqrt{1+4|u-x|^2+4|x|^2}}}\leq e^{\varepsilon{\sqrt{1+1+4|x|^2}}}\leq e^{2\varepsilon{\sqrt{1+|x|^2}}}. \end{eqnarray*} Hence \begin{eqnarray}\label{13}
\left|\partial_x^{\alpha} e^{\varepsilon\sqrt{1+|x|^2}}\right|\leq\frac{\alpha !}{r^{|\alpha|}}
e^{2\varepsilon\sqrt{1+|x|^2}}.
\end{eqnarray} If we take $r$ small enough we can make the previous estimates for the derivatives of $a(x,\xi)$ and $e^{\varepsilon\sqrt{1+|x|^2}}$ to hold for the same $r$. Now we obtain \begin{eqnarray*}
\left|D^{\alpha}_x \left(e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)\right)\right|
&\leq& \sum_{\beta\leq\alpha} {\alpha\choose\beta}\frac{(\alpha-\beta)!}{r^{|\alpha-\beta|}}e^{2\varepsilon\sqrt{1+|x|^2}}\cdot
\sqrt{2}e^{2s}\frac{\beta !}{r^{|\beta|}}a(x,\xi)\\
&\leq& \sqrt{2}e^{2s}\frac{\alpha !}{r^{|\alpha|}}2^{|\alpha|}e^{2\varepsilon\sqrt{1+|x|^2}}a(x,\xi). \end{eqnarray*} Using the property $ii)$ of the function $a(x,\xi)$, we get \begin{eqnarray}\label{15}
\left|D^{\alpha}_x \left(e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)\right)\right|\leq \sqrt{2}e^{2s}
\frac{\alpha ! 2^{|\alpha|}}{r^{|\alpha|}}e^{2\varepsilon\sqrt{1+|x|^2}}a(x,\xi)\leq
\sqrt{2}e^{2s+2\varepsilon}\frac{\alpha ! 2^{|\alpha|}}{r^{|\alpha|}},\, \forall \xi\in K.
\end{eqnarray} By this estimate and Proposition 7 of \cite{PBD}, $e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)$ is a multiplier for $\mathcal S'^{*}$. Because of (\ref{7}), (\ref{5}) is a subset of $\mathcal S'^{*}$. It remains to prove that (\ref{5}) is bounded in $\mathcal S'^{*}$. We will give the proof only in the $\{M_p\}$ case; the $(M_p)$ case is similar. Let $\psi\in\mathcal S^{\{M_p\}}$. There exists $h>0$ such that $\psi\in\mathcal S^{M_p,h}_{\infty}$. Note that \begin{eqnarray*}
\left\langle e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)T(x)e^{-x\xi^{(k)}},\psi(x)\right\rangle=\left\langle T(x)e^{-x\xi^{(k)}},e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)\psi(x)\right\rangle,\, \forall k\in\{1,...,l\}, \forall \xi\in K. \end{eqnarray*} Choose $m\leq h/4$. By (\ref{15}), we have\\
$\displaystyle\frac{m^{|\alpha|+|\beta|}\langle x\rangle^{\beta}\left|D^{\alpha}\left(e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)\psi(x)\right)\right|}{M_{\alpha}M_{\beta}}$ \begin{eqnarray*}
&\leq&m^{|\alpha|+|\beta|}\langle x\rangle^{\beta}\sum_{\gamma\leq\alpha}{\alpha\choose\gamma}\frac{\sqrt{2}e^{2s+2\varepsilon}(\alpha-\gamma) ! 2^{|\alpha-\gamma|}|D^{\gamma}\psi(x)|}{r^{|\alpha-\gamma|}M_{\alpha}M_{\beta}}\\
&\leq&C_1\sigma_{h,\infty}(\psi)\sum_{\gamma\leq\alpha}{\alpha\choose\gamma}\frac{h^{|\alpha|+|\beta|}(\alpha-\gamma) ! 2^{|\alpha-\gamma|}}{4^{|\alpha|+|\beta|}r^{|\alpha-\gamma|}M_{\alpha-\gamma}h^{|\gamma|+|\beta|}}\leq C_1\sigma_{h,\infty}(\psi)\sum_{\gamma\leq\alpha}{\alpha\choose\gamma}\frac{h^{|\alpha|-|\gamma|}(\alpha-\gamma) ! } {2^{|\alpha|}r^{|\alpha-\gamma|}M_{\alpha-\gamma}}\\ &\leq& C\sigma_{h,\infty}(\psi),\, \forall \xi\in K.
\end{eqnarray*} Hence $e^{\varepsilon\sqrt{1+|x|^2}}a(x,\xi)T(x)e^{-x\xi^{(k)}}$, $\xi\in K$, is bounded in $\mathcal S'^{\{M_p\}}$. By (\ref{7}), the set (\ref{5}) is bounded in $\mathcal S'^{\{M_p\}}$.\\
\indent We will prove that $e^{-\varepsilon\sqrt{1+|x|^2}}\in \mathcal S^{*}$. In order to do that we will estimate the derivatives of $e^{-\varepsilon\sqrt{1+|x|^2}}$ using the Cauchy integral formula (as we did for $e^{\varepsilon\sqrt{1+|x|^2}}$). We obtain \begin{eqnarray*}
\left|\partial_z^{\alpha} e^{-\varepsilon\sqrt{1+|x|^2}}\right|\leq\frac{\alpha !}{r^{|\alpha|}}
\sup_{|w_1-x_1|\leq r,...,|w_d-x_d|\leq r}\left|e^{-\varepsilon\sqrt{1+w^2}}\right|,
\end{eqnarray*} where, $0<r<1/(8d)$. Let $w=u+iv$. Then, if we put $\displaystyle\rho=\sqrt{\left(1+|u|^2-|v|^2\right)^2+4(uv)^2}$, $\displaystyle\cos\theta= \frac{1+|u|^2-|v|^2}{\sqrt{\left(1+|u|^2-|v|^2\right)^2+4(uv)^2}}$, $\displaystyle\sin\theta= \frac{2uv}
{\sqrt{\left(1+|u|^2-|v|^2\right)^2+4(uv)^2}}$ (where $\theta\in(-\pi,\pi)$), we have that $\theta\in(-\pi/2,\pi/2)$ (because $\cos\theta>0$ and $\theta\in(-\pi,\pi)$) and \begin{eqnarray*}
\mathrm{Re\,}\sqrt{1+|u|^2-|v|^2+2iuv}&=&\mathrm{Re\,}\sqrt{\rho(\cos\theta+i\sin\theta)}= \mathrm{Re\,}\sqrt{\rho}\left(\cos\frac{\theta}{2}+i\sin\frac{\theta}{2}\right) =\sqrt{\rho}\cos\frac{\theta}{2}\geq\frac{\sqrt{\rho}}{2}, \end{eqnarray*} where the second equality holds because we take the principal branch of $\sqrt{z}$. Because $r<1/(8d)$, we get \begin{eqnarray*}
\left|e^{-\varepsilon\sqrt{1+w^2}}\right|&=& e^{\mathrm{Re\,}\left(-\varepsilon\sqrt{1+w^2}\right)}\leq e^{-\frac{\varepsilon}{2}\sqrt[4]{\left(1+|u|^2-|v|^2\right)^2+4(uv)^2}}\leq e^{-\frac{\varepsilon}{2}\sqrt{1+|u|^2-|v|^2}}\\
&\leq& e^{-\frac{\varepsilon}{2}\sqrt{1+\frac{|x|^2}{2}-|u-x|^2-|v|^2}}\leq e^{-\frac{\varepsilon}{4}\sqrt{1+|x|^2}}. \end{eqnarray*} Hence, we obtain \begin{eqnarray}\label{17}
\left|\partial_x^{\alpha} e^{-\varepsilon\sqrt{1+|x|^2}}\right|\leq\frac{\alpha !}{r^{|\alpha|}}
e^{-\frac{\varepsilon}{4}\sqrt{1+|x|^2}}.
\end{eqnarray} From this, it easily follows that $e^{-\varepsilon\sqrt{1+|x|^2}}\in\mathcal S^{*}$. So $e^{-x\xi}T(x)\in\mathcal S'^*\left(\mathbb R^d_x\right)$, for $\xi\in K$, because $e^{-x\xi}T(x)=T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}}e^{-\varepsilon\sqrt{1+|x|^2}}$ and we proved that $T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}}\in\mathcal S'^*\left(\mathbb R^d_x\right)$, for $\xi\in K$.\\ \indent Put $f(\xi+i\eta)=\mathcal{F}_{x\rightarrow\eta}(e^{-x\xi}T(x))$. We will prove that $f$ is an analytic function on $\mathrm{ch\,}B+i\mathbb R^d$. Let $U$ be an arbitrary bounded open subset of $\mathrm{ch\,}B$ such that $K=\overline{U}\subset\subset \mathrm{ch\,}B$. For $\psi\in\mathcal S^{*}$ and $\xi\in U$, we have \begin{eqnarray*} \langle f(\xi+i\eta),\psi(\eta)\rangle&=&\left\langle \mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}T(x)\right),\psi(\eta)\right\rangle=\left\langle e^{-x\xi}T(x),\mathcal{F}(\psi)(x)\right\rangle\\ &=&\left\langle e^{-x\xi}T(x),\int_{\mathbb R^d}e^{-ix\eta}\psi(\eta)d\eta\right\rangle
=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x), e^{-\varepsilon\sqrt{1+|x|^2}}\int_{\mathbb R^d}e^{-ix\eta}\psi(\eta)d\eta\right\rangle\\
&=&\left\langle \left(e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x)\right)\otimes 1_{\eta}, e^{-\varepsilon\sqrt{1+|x|^2}}e^{-ix\eta}\psi(\eta)\right\rangle\\
&=&\int_{\mathbb R^d}\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x)e^{-ix\eta},e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle\psi(\eta)d\eta. \end{eqnarray*} Hence \begin{eqnarray}\label{20}
f(\xi+i\eta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x)e^{-ix\eta},e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle.
\end{eqnarray} First we will prove that $f\in \mathcal{C}^{\infty}\left(U\times\mathbb R^d_{\eta}\right)$. We will prove the differentiability only in $\xi_1$ and in the $\{M_p\}$ case. The existence of the rest of the derivatives is proved in an analogous way, and the $(M_p)$ case is treated similarly. Let $\xi^{(0)}=\left(\xi^{(0)}_1,...,\xi^{(0)}_d\right)=\left(\xi^{(0)}_1,\xi'\right)\in U$, $\xi=\left(\xi^{(0)}_1+\xi_1,\xi^{(0)}_2,...,\xi^{(0)}_d\right)=\left(\xi^{(0)}_1+\xi_1,\xi'\right)$, $x=(x_1,...,x_d)=(x_1,x')$. Let $0<|\xi_1|<\delta<\varepsilon<1$ be such that the ball with radius $\delta$ and center $\xi^{(0)}$ is contained in $U$. Then, by using (\ref{7}) and (\ref{20}), we obtain\\
$\displaystyle\frac{f(\xi+i\eta)-f(\xi^{(0)}+i\eta)}{\xi_1}-\left\langle e^{\varepsilon\sqrt{1+|x|^2}}(-x_1)e^{-x\xi^{(0)}}T(x)e^{-ix\eta},e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle$ \begin{eqnarray*}
=\sum_{k=1}^l\left\langle e^{-ix\eta} e^{-x\xi^{(k)}}T(x)e^{\varepsilon\sqrt{1+|x|^2}}\left(\frac{a(x,\xi)-a\left(x,\xi^{(0)}\right)}{\xi_1}+x_1 a\left(x,\xi^{(0)}\right)\right),e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle. \end{eqnarray*} It is enough to prove that, for every $\psi\in\mathcal S^{\{M_p\}}$, \begin{eqnarray*}
\displaystyle e^{\varepsilon\sqrt{1+|x|^2}}\left(\frac{a(x,\xi)-a\left(x,\xi^{(0)}\right)}{\xi_1}+x_1 a\left(x,\xi^{(0)}\right)\right)\psi(x)\longrightarrow 0, \mbox{ when } \xi_1\longrightarrow 0, \mbox{ in } \mathcal S^{\{M_p\}}. \end{eqnarray*} First note that
\begin{eqnarray*} e^{\varepsilon\sqrt{1+|x|^2}}\left(\frac{a(x,\xi)-a\left(x,\xi^{(0)}\right)}{\xi_1}+x_1 a\left(x,\xi^{(0)}\right)\right)=
e^{\varepsilon\sqrt{1+|x|^2}}a\left(x,\xi^{(0)}\right)\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right). \end{eqnarray*} Now, we get \begin{eqnarray*} \frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1=\frac{1}{\xi_1}\sum_{n=1}^{\infty}\frac{(-1)^nx_1^n\xi_1^n}{n!}+x_1= \sum_{n=2}^{\infty}\frac{(-1)^nx_1^n\xi_1^{n-1}}{n!}.
\end{eqnarray*} So, for $j\in\mathbb N$, $j\geq2$ and $0<|\xi_1|<\delta<\varepsilon<1$, we have \begin{eqnarray*}
\left|D^j_{x_1}\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\right|&=&
\left|D^j_{x_1}\left(\sum_{n=2}^{\infty}\frac{(-1)^nx_1^n\xi_1^{n-1}}{n!}\right)\right|=
\left|\sum_{n=j}^{\infty}\frac{(-1)^n n!x_1^{n-j}\xi_1^{n-1}}{(n-j)!n!}\right|\\
&\leq& |\xi_1|\sum_{n=j}^{\infty}\frac{|x_1|^{n-j}|\xi_1|^{n-2}}{(n-j)!}\leq
|\xi_1|\sum_{n=j}^{\infty}\frac{|x_1|^{n-j}|\xi_1|^{n-j}}{(n-j)!}\leq\delta e^{|x_1|\delta}. \end{eqnarray*} Using a similar technique, we obtain the estimates \begin{eqnarray*}
\left|D_{x_1}\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\right|\leq\delta |x_1| e^{|x_1|\delta} \mbox{ and } \left|\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\right|\leq \delta|x_1|^2 e^{|x_1|\delta}.
\end{eqnarray*} So, in all cases, we have $\displaystyle\left|D^j_{x_1}\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\right|\leq \delta\langle x_1\rangle^2 e^{|x_1|\delta}.$ By using (\ref{15}), we get (for simpler notation we write $j$ for the $d$-tuple $(j,0,...,0)$)\\
$\displaystyle\left|D^{\alpha}\left(e^{\varepsilon\sqrt{1+|x|^2}}a\left(x,\xi^{(0)}\right)
\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\psi(x)\right)\right|$ \begin{eqnarray*}
&=&\left|\sum_{\beta\leq\alpha}\sum_{j\leq\beta}{\alpha\choose\beta}{\beta\choose j} D^{\beta-j}\left(e^{\varepsilon\sqrt{1+|x|^2}}a\left(x,\xi^{(0)}\right)\right)
D^j\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)D^{\alpha-\beta}\psi(x)\right|\\ &\leq&\sum_{\beta\leq\alpha}\sum_{j\leq\beta}{\alpha\choose\beta}{\beta\choose j} \sqrt{2}e^{2s}
\frac{(\beta-j) ! 2^{|\beta-j|}}{r^{|\beta-j|}}e^{2\varepsilon\sqrt{1+|x|^2}}a\left(x,\xi^{(0)}\right)
\delta\langle x_1\rangle^2 e^{|x_1|\delta}|D^{\alpha-\beta}\psi(x)|\\ &\leq&C\delta\langle x_1\rangle^2\sum_{\beta\leq\alpha}\sum_{j\leq\beta}{\alpha\choose\beta}{\beta\choose j}
\left(\frac{2}{r}\right)^{|\beta-j|}(\beta-j) !|D^{\alpha-\beta}\psi(x)|,
\end{eqnarray*} where we used the inequality $e^{2\varepsilon\sqrt{1+|x|^2}}a(x,\xi^{(0)})e^{|x_1|\delta}
\leq e^{3\varepsilon\sqrt{1+|x|^2}}a(x,\xi^{(0)})\leq e^{3\varepsilon}$, which follows from the property $ii)$ of $a(x,\xi)$. Because $\psi\in\mathcal S^{\{M_p\}}$, there exists $m>0$ such that $\psi\in\mathcal S^{M_p,m}_{\infty}$. Choose $h$ such that $h<m/4$, $h<1/4$ and $hH<m$. We get\\
$\displaystyle\frac{\displaystyle h^{|\alpha|+|\beta|}\langle x\rangle^{\beta}\left|D^{\alpha}\left(e^{\varepsilon\sqrt{1+|x|^2}}a\left(x,\xi^{(0)}\right)
\left(\frac{e^{-x_1\xi_1}-1}{\xi_1}+x_1\right)\psi(x)\right)\right|}{M_{\alpha}M_{\beta}}$ \begin{eqnarray*}
&\leq& C\delta\sum_{\gamma\leq\alpha}\sum_{j\leq\gamma}{\alpha\choose\gamma}{\gamma\choose j}\left(\frac{2}{r}\right)^{|\gamma-j|}(\gamma-j) !\frac{\langle x_1\rangle^2\langle x\rangle^{|\beta|}h^{|\alpha|+|\beta|}|D^{\alpha-\gamma}\psi(x)|}{M_{\alpha-\gamma}M_{\gamma-j}M_jM_{\beta}}\\
&\leq&C_1\delta\sum_{\gamma\leq\alpha}\sum_{j\leq\gamma}{\alpha\choose\gamma}{\gamma\choose j}\left(\frac{2}{r}\right)^{|\gamma-j|}(\gamma-j) !\frac{\langle x\rangle^{|\beta|+2}h^{|\alpha|+|\beta|} H^{|\beta|+2}|D^{\alpha-\gamma}\psi(x)|}{M_{\alpha-\gamma}M_{\gamma-j}M_jM_{\beta+2}}\\
&\leq&C_2\delta\sigma_{m,\infty}(\psi)\sum_{\gamma\leq\alpha}\sum_{j\leq\gamma}{\alpha\choose\gamma}{\gamma\choose j}\left(\frac{2}{r}\right)^{|\gamma-j|}(\gamma-j) !\frac{h^{|\alpha|+|\beta|} H^{|\beta|}}{m^{|\alpha|-|\gamma|}m^{|\beta|+2}M_{\gamma-j}M_j}\\
&\leq&C_3\delta\sigma_{m,\infty}(\psi)\sum_{\gamma\leq\alpha}\sum_{j\leq\gamma}{\alpha\choose\gamma}{\gamma\choose j}\left(\frac{2}{r}\right)^{|\gamma-j|}\left(\frac{h}{m}\right)^{|\alpha|-|\gamma|}
\left(\frac{hH}{m}\right)^{|\beta|}\frac{h^{|\gamma|}(\gamma-j) !}{M_{\gamma-j}M_j} \leq C_0\delta\sigma_{m,\infty}(\psi), \end{eqnarray*} where we use $(M.2)$ and the fact $\displaystyle\frac{k^p p!}{M_p}\rightarrow 0$, when $p\rightarrow\infty$. Now, from this it follows that
\begin{eqnarray*} e^{\varepsilon\sqrt{1+|x|^2}}\left(\frac{a(x,\xi)-a\left(x,\xi^{(0)}\right)}{\xi_1}+x_1 a\left(x,\xi^{(0)}\right)\right)\psi(x)\longrightarrow 0,\, \xi_1\longrightarrow 0
\end{eqnarray*} in $\mathcal S^{\{M_p\}}$ and by the above remarks, the differentiability of $f(\xi+i\eta)$ on $U\times \mathbb R^d_{\eta}$ follows. Also, from the previous, we can conclude that $\partial_{\xi}^{\alpha}f(\xi+i\eta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}(-x)^{\alpha}e^{-x\xi}T(x)e^{-ix\eta},e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle$ and similarly $\partial_{\eta}^{\alpha}f(\xi+i\eta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}(-ix)^{\alpha}e^{-x\xi}T(x)e^{-ix\eta},e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle$. From this and the arbitrariness of $U$, the analyticity of $f(\xi+i\eta)$ follows because it satisfies the Cauchy-Riemann equations. So, for $\zeta=\xi+i\eta$, we get \begin{eqnarray}\label{25}
f(\zeta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\zeta}T(x),e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle
\end{eqnarray} and $\partial_{\zeta}^{\alpha}f(\zeta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}(-x)^{\alpha}e^{-x\zeta}T(x),e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle$, for $\zeta\in U+i\mathbb R^d_{\eta}$, for each fixed $U$ ($\varepsilon$ depends on $U$).\\
\indent Now we will prove the estimates (\ref{3}) for $f(\xi+i\eta)$. Let $K\subset\subset \mathrm{ch\,}B$ be arbitrary but fixed. First we will consider the $(M_p)$ case. We know that $\mathcal S^{(M_p)}$ is an $(FS)$-space and $\displaystyle\mathcal S^{(M_p)}=\lim_{\substack{\longleftarrow\\ h\rightarrow\infty}}\mathcal S^{M_p,h}_{\infty}$. If we denote the closure of $\mathcal S^{(M_p)}$ in $\mathcal S^{M_p,h}_{\infty}$ by $\widetilde{\mathcal S}^{M_p,h}_{\infty}$ then $\displaystyle\mathcal S^{(M_p)}=\lim_{\substack{\longleftarrow\\ h\rightarrow\infty}}\widetilde{\mathcal S}^{M_p,h}_{\infty}$ and the projective limit is reduced. Then $\displaystyle\mathcal S'^{(M_p)}=\lim_{\substack{\longrightarrow\\ h\rightarrow\infty}}\widetilde{\mathcal S}'^{M_p,h}_{\infty}$, which is an injective inductive limit with compact maps (because the projective limit is with compact maps). Because we proved that the set $\left\{S\in\mathcal D'^{*}|S(x)=T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}},\xi\in K\right\}$ is bounded in $\mathcal S'^{(M_p)}$, it follows that there exists $h>0$ such that $\left\{S\in\mathcal D'^{*}|S(x)=T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}},\xi\in K\right\}\subseteq \widetilde{\mathcal S}'^{M_p,h}_{\infty}$ and it is bounded there. By (\ref{17}), we have the estimate \begin{eqnarray*}
\frac{h^{|\alpha|+|\beta|}\langle x\rangle^{\beta}\left|D^{\alpha}_x \left(e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right)\right|}{M_{\alpha}M_{\beta}} &\leq&\sum_{\gamma\leq\alpha}{\alpha\choose\gamma}
\frac{(2h)^{|\alpha|-|\gamma|}(2h)^{|\gamma|}h^{|\beta|}\langle x\rangle^{\beta}
|\eta|^{\gamma}(\alpha-\gamma) !e^{-\frac{\varepsilon}{4}\sqrt{1+|x|^2}}}
{2^{|\alpha|}r^{|\alpha-\gamma|}M_{\alpha-\gamma}M_{\gamma}M_{\beta}}\\
&\leq& C_1\frac{1}{2^{|\alpha|}}\sum_{\gamma\leq\alpha}{\alpha\choose\gamma}\left(\frac{2h}{r}\right)^{|\alpha|-|\gamma|}
\frac{(\alpha-\gamma) !e^{M(h\langle x\rangle)}e^{M(2h|\eta|)}e^{-\frac{\varepsilon}{4}\langle x\rangle}} {M_{\alpha-\gamma}}\\
&\leq& C' e^{M(2h|\eta|)}, \end{eqnarray*} where we use that $e^{M(h\langle x\rangle)}e^{-\frac{\varepsilon}{4}\langle x\rangle}$ is bounded and $\displaystyle\frac{k^p p!}{M_p}\rightarrow 0$ when $p\rightarrow\infty$. Then, for $\xi\in K$ and $\eta\in\mathbb R^d$, \begin{eqnarray*}
|f(\xi+i\eta)|=\left|\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x),e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle\right|\leq C\left\|e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right\|_{\widetilde{\mathcal S}^{M_p,h}_{\infty}}\leq \tilde{C}e^{M(2h|\eta|)}.
\end{eqnarray*} Now we will consider the $\{M_p\}$ case. $\mathcal S^{\{M_p\}}$ is a $(DFS)$ - space and $\displaystyle\mathcal S^{\{M_p\}}=\lim_{\substack{\longrightarrow\\ h\rightarrow 0}}\mathcal S^{M_p,h}_{\infty}$, where the inductive limit is injective with compact maps. Let $h>0$ be fixed. For shorter notation, denote by $F$ the set $\left\{S\in\mathcal D'^{*}|S(x)=T(x)e^{-x\xi+\varepsilon\sqrt{1+|x|^2}},\xi\in K\right\}$ and by $J$ the inclusion $\mathcal S^{M_p,h}_{\infty}\longrightarrow \mathcal S^{\{M_p\}}$. Because we already proved that $F$ is a bounded subset of $\mathcal S'^{\{M_p\}}$, its image under ${}^{t}J$ (the transposed mapping of $J$) is a bounded subset of $\mathcal S'^{M_p,h}_{\infty}$. By the above calculations we see that $e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}$ is in $\mathcal S^{M_p,m}_{\infty}$, for every $m>0$. Hence, for $\xi\in K$ and $\eta\in\mathbb R^d$, we have \begin{eqnarray*}
|f(\xi+i\eta)|&=&\left|\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x),e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle\right|
= \left|\left\langle {}^{t}J\left(e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\xi}T(x)\right), e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle\right|\\
&\leq& C'_h\left\|e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right\|_{\mathcal S^{M_p,h}_{\infty}}\leq C_h e^{M(2h|\eta|)},
\end{eqnarray*} where we used the above estimate for $\displaystyle\frac{h^{|\alpha|+|\beta|}\langle x\rangle^{\beta}\left|D^{\alpha}\left(e^{-ix\eta}e^{-\varepsilon\sqrt{1+|x|^2}}\right)\right|} {M_{\alpha}M_{\beta}}$. \end{proof}
\begin{remark} If, for $S\in\mathcal D'^{*}$, the conditions of the theorem are fulfilled, we call $\mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}S(x)\right)$ the Laplace transform of $S$ and denote it by $\mathcal{L}(S)$. Moreover, by (\ref{25}), \begin{eqnarray*}
\mathcal{L}(S)(\zeta)=\left\langle e^{\varepsilon\sqrt{1+|x|^2}}e^{-x\zeta}S(x),e^{-\varepsilon\sqrt{1+|x|^2}}\right\rangle, \mbox{ for } \zeta\in U+i\mathbb R^d_{\eta}, \end{eqnarray*} where $\overline{U}\subset\subset \mathrm{ch\,}B$ and $\varepsilon$ depends on $U$.\\ \indent Note that, if for $S\in\mathcal D'^{*}$ the conditions of the theorem are fulfilled for $B=\mathbb R^d$, then the choice of $\varepsilon$ can be made uniform for all $K\subset\subset\mathbb R^d$. \end{remark}
For the next theorem we need the following technical results. \begin{lemma}\label{psss} Let $(k_p)\in\mathfrak{R}$. There exists $(k'_p)\in\mathfrak{R}$ such that $k'_p\leq k_p$ and $\displaystyle\prod_{j=1}^{p+q}k'_j\leq 2^{p+q}\prod_{j=1}^{p}k'_j\cdot\prod_{j=1}^{q}k'_j$, for all $p,q\in\mathbb Z_+$. \end{lemma} \begin{proof} Define $k'_1=k_1$ and inductively $\displaystyle k'_j=\min\left\{k_j,\frac{j}{j-1}k'_{j-1}\right\}$, for $j\geq 2$, $j\in\mathbb N$. Obviously $k'_j\leq k_j$ and one easily checks that $(k'_j)$ is monotonically increasing. To prove that $k'_j$ tends to infinity, suppose the contrary. Then, because $(k'_j)$ is a monotonically increasing sequence of positive numbers, it follows that it is bounded by some $C>0$. Because $(k_j)\in\mathfrak{R}$, there exists $j_0$, such that, for all $j\geq j_0$, $j\in\mathbb N$, $k_j\geq 2C$. So, for all $j\geq j_0+1$, $\displaystyle k'_j=\frac{j}{j-1}k'_{j-1}$. We get that $\displaystyle k'_j=\frac{j}{j_0}k'_{j_0}\rightarrow \infty$, when $j\longrightarrow \infty$, which is a contradiction. Hence $(k'_j)\in\mathfrak{R}$. Note that, for all $p,j\in\mathbb Z_+$, we have $\displaystyle k'_{p+j}\leq \frac{p+j}{j}k'_{j}$. Hence $\displaystyle \prod_{j=1}^{p+q}k'_j=\prod_{j=1}^{p}k'_j\cdot\prod_{j=1}^{q}k'_{p+j}\leq \prod_{j=1}^{p}k'_j\cdot\prod_{j=1}^{q}\frac{p+j}{j}k'_{j}=\frac{(p+q)!}{p!q!}\prod_{j=1}^{p}k'_j\cdot \prod_{j=1}^{q}k'_{j}\leq 2^{p+q}\prod_{j=1}^{p}k'_j\cdot\prod_{j=1}^{q}k'_{j}$. \end{proof}
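The inductive construction in the proof of the lemma is concrete enough to check numerically. The following Python sketch (an illustration, not part of the paper; the sample sequence $k_p=\sqrt{p}\,\log(p+2)$ is an arbitrary choice of a sequence in $\mathfrak{R}$) implements $k'_1=k_1$, $k'_j=\min\{k_j,\frac{j}{j-1}k'_{j-1}\}$ and verifies monotonicity and the claimed product inequality.

```python
import math

def refine(k):
    # k'_1 = k_1,  k'_j = min(k_j, j/(j-1) * k'_{j-1})  -- the construction of the lemma
    kp = [k[0]]
    for j in range(2, len(k) + 1):
        kp.append(min(k[j - 1], j / (j - 1) * kp[-1]))
    return kp

def partial_prod(seq, n):
    # prod_{j=1}^{n} seq[j]
    out = 1.0
    for x in seq[:n]:
        out *= x
    return out

# a sample increasing sequence tending to infinity (arbitrary illustrative choice)
k = [math.sqrt(j) * math.log(j + 2) for j in range(1, 41)]
kp = refine(k)

assert all(a <= b for a, b in zip(kp, k))                   # k'_p <= k_p
assert all(kp[i] <= kp[i + 1] for i in range(len(kp) - 1))  # monotonically increasing
for p in range(1, 20):
    for q in range(1, 21 - p):
        lhs = partial_prod(kp, p + q)
        rhs = 2 ** (p + q) * partial_prod(kp, p) * partial_prod(kp, q)
        assert lhs <= rhs * (1 + 1e-9)                      # the product inequality
```

The margin in the last assertion is large in practice, since the proof actually bounds the ratio by the binomial coefficient $\binom{p+q}{p}\leq 2^{p+q}$.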
\indent We will construct a certain class of ultrapolynomials similar to those in \cite{Komatsu1} (see (10.9)' in \cite{Komatsu1}), which have the added benefit of not having zeroes in a strip containing the real axis.\\ \indent Let $c>0$ be fixed. Let $k>0$, $l>0$ and $(k_p)\in\mathfrak{R}$, $(l_p)\in\mathfrak{R}$ be arbitrary but fixed. Choose $q\in\mathbb Z_+$ such that $\displaystyle \frac{c\sqrt{d}}{l m_p}<\frac{1}{2}$, for all $p\in\mathbb N$, $p\geq q$ in the $(M_p)$ case and $\displaystyle \frac{c\sqrt{d}}{l_p m_p}<\frac{1}{2}$, for all $p\in\mathbb N$, $p\geq q$ in the $\{M_p\}$ case. Consider the entire functions \begin{eqnarray}\label{u1} P_l(w)=\prod_{j=q}^{\infty}\left(1+\frac{w^2}{l^2 m_j^2}\right),\,\, w\in\mathbb C^d \end{eqnarray} in the $(M_p)$ case, resp. \begin{eqnarray}\label{u2} P_{l_p}(w)=\prod_{j=q}^{\infty}\left(1+\frac{w^2}{l_j^2 m_j^2}\right),\,\, w\in\mathbb C^d
\end{eqnarray} in the $\{M_p\}$ case. It is easily checked that the entire function $P_l(w_1,0,...,0)$, resp. $P_{l_p}(w_1,0,...,0)$, of one variable satisfies the condition c) of proposition 4.6 of \cite{Komatsu1}. Hence, $P_l(w)$, resp. $P_{l_p}(w)$, satisfies the equivalent conditions a) and b) of proposition 4.5 of \cite{Komatsu1}. Hence, there exist $L>0$ and $C'>0$, resp. for every $L>0$ there exists $C'>0$, such that $|P_l(w)|\leq C'e^{M(L|w|)}$, resp. $|P_{l_p}(w)|\leq C'e^{M(L|w|)}$, for all $w\in\mathbb C^d$ and $P_l(D)$, resp. $P_{l_p}(D)$, are ultradifferential operators of $(M_p)$, resp. $\{M_p\}$, type. It is easy to check that $P_l(w)$ and $P_{l_p}(w)$ don't have zeroes in $W=\mathbb R^d+i\{v\in\mathbb R^d||v_j|\leq c,\,j=1,...,d\}$. For $w=u+iv\in W$, $|u|\geq 2c\sqrt{d}$, we have $\displaystyle \left|w^2\right|\geq \frac{|w|^2}{4}$ and $\displaystyle \left|1+\frac{w^2}{l_j^2 m_j^2}\right|\geq 1$, for $j\geq q$. We estimate as follows \begin{eqnarray*}
|P_{l_p}(w)|&=&\left|\prod_{j=q}^{\infty}\left(1+\frac{w^2}{l_j^2 m_j^2}\right)\right|=\sup_p\prod_{j=q}^{p}\left|1+\frac{w^2}{l_j^2 m_j^2}\right|\geq\sup_p\prod_{j=q}^{p}\frac{\left|w^2\right|}{l_j^2 m_j^2}\geq\sup_p\prod_{j=q}^{p}\frac{|w|^2}{4l_j^2 m_j^2}\\
&=&\frac{\prod_{j=1}^{q-1} 4l_j^2}{|w|^{2q-2}}\left(\sup_p\frac{|w|^pM_{q-1}}{M_p\prod_{j=1}^p 2l_j}\right)^2
=C'_0\left(\frac{M_{q-1}\prod_{j=1}^{q-1} k_j}{|w|^{q-1}}\right)^2 e^{2N_{2l_p}(|w|)}\geq C'_0\frac{e^{N_{2l_p}(|w|)}}{e^{2N_{k_p}(|w|)}}, \end{eqnarray*} where we put $\displaystyle C'_0=\prod_{j=1}^{q-1}\frac{4l_j^2}{k_j^2}$ and $l_p=l$ and $k_p=k$ in the $(M_p)$ case. For $w\in W$, because $P_l(w)$, resp. $P_{l_p}(w)$, doesn't have zeroes in $W$, we get that there exists $C_0>0$ such that \begin{eqnarray}\label{uu}
|P_l(w)|\geq C_0e^{-2M(|w|/k)}e^{M\left(|w|/(2l)\right)},\, \mbox{resp.}\, |P_{l_p}(w)|\geq C_0e^{-2N_{k_p}(|w|)}e^{N_{2l_p}(|w|)},\, w\in W.
\end{eqnarray} Now, by using the Cauchy integral formula, we can estimate the derivatives of $1/P_l(x)$, resp. $1/P_{l_p}(x)$. We will introduce some notations to make the calculations less cumbersome. For $r>0$, denote by $B_r(a)$ the polydisc with center at $a$ and radius $r$, i.e. $\{z\in\mathbb C^d||z_j-a_j|< r,\, j=1,2,...,d\}$ and by $T_r(a)$ the corresponding polytorus $\{z\in\mathbb C^d||z_j-a_j|= r,\, j=1,2,...,d\}$. We will do it for the $\{M_p\}$ case; the $(M_p)$ case is similar. We already know that $1/P_{l_p}(w)$ is an analytic function on $W$ ($P_{l_p}$ doesn't have zeroes in $W$). Hence \begin{eqnarray*}
\left|\partial^{\alpha}_w\frac{1}{P_{l_p}(x)}\right|\leq \frac{\alpha!}{r^{|\alpha|}}\cdot\left\|\frac{1}{P_{l_p}(z)}\right\|_{L^{\infty}(T_r(x))}
\leq \frac{\alpha!}{C_0r^{|\alpha|}}\cdot
\left\|\frac{e^{2N_{k_p}(|z|)}}{e^{N_{2l_p}(|z|)}}\right\|_{L^{\infty}(T_r(x))},
\end{eqnarray*} for arbitrary but fixed $r\leq c$ (so $\overline{B_{r}(x)}\subseteq W$). For $x\in\mathbb R^d\backslash B_{2r\sqrt{d}}(0)$, there exists $j\in\{1,...,d\}$ such that $|x_j|\geq 2r\sqrt{d}$. Then, on $T_r(x)$, $|z|\geq |x|-|z-x|=|x|-r\sqrt{d}\geq |x|/2$, i.e. $e^{N_{2l_p}(|z|)}\geq e^{N_{2l_p}(|x|/2)}=e^{N_{4l_p}(|x|)}$. Moreover, for such $x$, we have
\begin{eqnarray*} e^{2N_{k_p}(|z|)}\leq e^{2N_{k_p}(|x|+r\sqrt{d})}\leq 4e^{2N_{k_p}(2r\sqrt{d})}e^{2N_{k_p}(2|x|)}=C_1e^{2N_{k_p}(2|x|)},
\end{eqnarray*} where in the last inequality we used that $e^{M(\lambda+\nu)}\leq 2e^{M(2\lambda)}e^{M(2\nu)}$, for $\lambda\geq0$, $\nu\geq 0$. So, we obtain $\displaystyle \left|\partial^{\alpha}_w\frac{1}{P_{l_p}(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}\frac{e^{2N_{k_p}(2|x|)}}{e^{N_{4l_p}(|x|)}}$. For $x$ in $B_{2r\sqrt{d}}(0)$, $\displaystyle\left\|e^{2N_{k_p}(|z|)}e^{-N_{2l_p}(|z|)}\right\|_{L^{\infty}(T_r(x))}$ is bounded, so we can conclude that the above inequality holds, possibly with another constant $C$. Analogously, we can prove that, for the $(M_p)$ case, $\displaystyle \left|\partial^{\alpha}_w\frac{1}{P_l(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}\frac{e^{2M\left(2|x|/k\right)}}{e^{M\left(|x|/(4l)\right)}}$. This is important, because, if $k>0$ is fixed, resp. $(k_p)\in\mathfrak{R}$ is fixed, then we can find $l>0$, resp. $(l_p)\in\mathfrak{R}$, such that $\displaystyle e^{2M\left(2|x|/k\right)}e^{-M\left(|x|/(4l)\right)}\leq C''e^{-M\left(|x|/k\right)}$, resp. $e^{2N_{k_p}(2|x|)}e^{-N_{4l_p}(|x|)}\leq C''e^{-N_{k_p}(|x|)}$, for some $C''>0$. This inequality trivially follows from proposition 3.6 of \cite{Komatsu1} in the $(M_p)$ case. To prove the inequality in the $\{M_p\}$ case, first note that $e^{2N_{k_p}(2|x|)}e^{N_{k_p}(|x|)}\leq e^{3N_{k_p/2}(|x|)}$. By lemma \ref{psss}, there exists $(k'_p)\in \mathfrak{R}$ such that $k'_p\leq k_p/2$ and $\displaystyle\prod_{j=1}^{p+q}k'_j\leq 2^{p+q}\prod_{j=1}^{p}k'_j\cdot\prod_{j=1}^{q}k'_j$, for all $p,q\in\mathbb Z_+$. So $\displaystyle e^{3N_{k_p/2}(|x|)}\leq e^{3N_{k'_p}(|x|)}$. If we put $N_0=1$ and $\displaystyle N_p=M_p\prod_{j=1}^{p}k'_j$, for $p\in\mathbb Z_+$, then, by the properties of $(k'_p)$, it follows that $N_p$ satisfies $(M.1)$, $(M.2)$ and $(M.3)'$ where the constant $H$ in $(M.2)$ for this sequence is equal to $2H$. Moreover, note that $N(\lambda)=N_{k'_p}(\lambda)$, for all $\lambda\geq 0$. We can now use proposition 3.6 of \cite{Komatsu1} for $N(|x|)$ (i.e.
for $N_{k'_p}(|x|)$) and obtain $e^{3N_{k'_p}(|x|)}\leq c''e^{N_{k'_p}(4H^2|x|)}=c''e^{N_{k'_p/(4H^2)}(|x|)}$, for some $c''>0$. Now take $l_p$ such that $4l_p=k'_p/(4H^2)$, $p\in\mathbb Z_+$ and the desired inequality follows. So, we obtain \begin{eqnarray*}
\left|\partial^{\alpha}_x\frac{1}{P_l(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}e^{-M\left(|x|/k\right)},\, \mbox{resp.}\, \left|\partial^{\alpha}_x\frac{1}{P_{l_p}(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}e^{-N_{k_p}(|x|)},\, x\in\mathbb R^d,\alpha\in\mathbb N^d, \end{eqnarray*} where $C$ depends on $k$ and $l$, resp. $(k_p)$ and $(l_p)$, and $M_p$; $r\leq c$ arbitrary but fixed. Moreover, from the above observation and (\ref{uu}), we obtain \begin{eqnarray}\label{uu1}
|P_l(w)|\geq \tilde{C}e^{M(|w|/k)},\, \mbox{resp.}\, |P_{l_p}(w)|\geq \tilde{C}e^{N_{k_p}(|w|)},\, w\in W, \end{eqnarray} for some $\tilde{C}>0$.
\begin{lemma}\label{nnl} Let $g:[0,\infty)\longrightarrow[0,\infty)$ be an increasing function that satisfies the following estimate:\\ \indent for every $L>0$ there exists $C>0$ such that $g(\rho)\leq M(L\rho)+\ln C$.\\ Then there exists a subordinate function $\epsilon(\rho)$ such that $g(\rho)\leq M(\epsilon(\rho))+\ln C'$, for some constant $C'>1$. \end{lemma} For the definition of a subordinate function see \cite{Komatsu1}. \begin{proof} If $g(\rho)$ is bounded then the claim of the lemma is trivial (we can take $C'$ large enough such that the inequality will hold for an arbitrary subordinate function). Assume that $g$ is not bounded. We can easily find a continuous strictly increasing function $f:[0,\infty)\longrightarrow[0,\infty)$ which majorizes $g$ such that for every $L>0$ there exists $C>0$ such that $f(\rho)\leq M(L\rho)+\ln C$. Hence, there exists $\rho_1>0$ such that $f(\rho)>0$ for $\rho\geq\rho_1$. There exists $\rho_0>0$ such that $M(\rho)=0$ for $\rho\leq\rho_0$ and $M(\rho)>0$ for $\rho>\rho_0$. Because $M(\rho)$ is continuous and strictly increasing on the interval $[\rho_0,\infty)$ and $\displaystyle\lim_{\rho\rightarrow\infty}M(\rho)=\infty$, $M$ is a bijection from $[\rho_0,\infty)$ to $[0,\infty)$ with continuous and strictly increasing inverse $M^{-1}:[0,\infty)\longrightarrow[\rho_0,\infty)$. Define $\epsilon(\rho)$ on $[\rho_1,\infty)$ by $\epsilon(\rho)=M^{-1}(f(\rho))$ and define it linearly on $[0,\rho_1)$ so that it is continuous on $[0,\infty)$ and $\epsilon(0)=0$. Then $\epsilon(\rho)$ is strictly increasing and continuous on $[0,\infty)$. Moreover, for $\rho\in[\rho_1,\infty)$, it satisfies $f(\rho)=M(\epsilon(\rho))$. Hence, there exists $C'>1$ such that $f(\rho)\leq M(\epsilon(\rho))+\ln C'$, for $\rho\geq 0$. It remains to prove that $\epsilon(\rho)/\rho\longrightarrow 0$ when $\rho\longrightarrow\infty$. Assume the contrary.
Then, there exist $L>0$ and a strictly increasing sequence $\rho_j$ which tends to infinity when $j\longrightarrow\infty$, such that $\epsilon(\rho_j)\geq2L\rho_j$, i.e. $f(\rho_j)\geq M(2L\rho_j)$. For this $L$, by the condition for $f$, choose $C>1$ such that $f(\rho)\leq M(L\rho)+\ln C$. Then we have $M(2L\rho_j)\leq M(L\rho_j)+\ln C$, which contradicts the fact that $e^{M(\rho)}$ increases faster than $\rho^p$ for any $p$. One can obtain this contradiction by using equality (3.11) of \cite{Komatsu1}. \end{proof}
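The proof of the lemma is constructive: $\epsilon=M^{-1}\circ f$. As a purely numerical illustration (not part of the paper), take $M_p=p!$, so that $M(\rho)=\sup_p\ln(\rho^p/p!)$, and $g(\rho)=\sqrt{\rho}$, which satisfies the hypothesis and is already continuous and strictly increasing, so it can play the role of $f$. The sketch below builds $\epsilon$ by numerically inverting $M$ and checks $g(\rho)\leq M(\epsilon(\rho))$ and $\epsilon(\rho)/\rho\rightarrow 0$.

```python
import math

# Associated function M(rho) = sup_p log(rho^p / M_p) for the sequence M_p = p!
# (so e^{M(rho)} grows roughly like e^rho).  Illustrative choice, not from the paper.
def M(rho, pmax=200):
    if rho <= 0:
        return 0.0
    best, log_fact = 0.0, 0.0
    for p in range(1, pmax):
        log_fact += math.log(p)                  # log(p!)
        best = max(best, p * math.log(rho) - log_fact)
    return best

def M_inv(y, lo=0.0, hi=1e6):
    # numeric inverse of the increasing function M, by bisection
    for _ in range(100):
        mid = (lo + hi) / 2
        if M(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# g satisfies: for every L there is C with g(rho) <= M(L*rho) + ln C,
# since sqrt(rho) grows slower than every M(L*rho).
g = lambda rho: math.sqrt(rho)
eps = lambda rho: M_inv(g(rho))      # the subordinate function eps = M^{-1} o f

for rho in (10.0, 100.0, 1000.0):
    assert M(eps(rho)) >= g(rho) - 1e-3              # g(rho) <= M(eps(rho)) up to numerics
assert eps(10000.0) / 10000.0 < eps(100.0) / 100.0   # eps(rho)/rho tends to 0
```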
\begin{theorem}\label{t2} Let $B$ be a connected open set in $\mathbb R^d_{\xi}$ and $f$ an analytic function on $B+i\mathbb R^d_{\eta}$. Let $f$ satisfy the condition:\\ \indent for every compact subset $K$ of $B$ there exist $C>0$ and $k>0$, resp. for every $k>0$ there exists $C>0$, such that \begin{eqnarray}\label{n1}
|f(\xi+i\eta)|\leq C e^{M(k|\eta|)},\, \forall\xi\in K, \forall\eta\in\mathbb R^d. \end{eqnarray} Then, there exists $S\in\mathcal D'^{*}(\mathbb R^d_x)$ such that $e^{-x\xi}S(x)\in\mathcal S'^{*}(\mathbb R^d_x)$, for all $\xi\in B$ and \begin{eqnarray}\label{n4} \mathcal{L}(S)(\xi+i\eta)=\mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}S(x)\right)(\xi+i\eta)=f(\xi+i\eta),\,\, \xi\in B,\, \eta\in\mathbb R^d. \end{eqnarray} \end{theorem}
\begin{proof} Because of (\ref{n1}), for every fixed $\xi\in B$, $f_{\xi}=f(\xi+i\eta)\in\mathcal S'^{*}(\mathbb R^d_{\eta})$. Put $T_{\xi}(x)=\mathcal{F}^{-1}_{\eta\rightarrow x}\left(f_{\xi}(\eta)\right)(x)\in\mathcal S'^{*}(\mathbb R^d_x)$ and $S_{\xi}(x)=e^{x\xi}T_{\xi}(x)\in\mathcal D'^{*}(\mathbb R^d_x)$. We will show that $S_{\xi}$ does not depend on $\xi\in B$. Let $U$ be an arbitrary, but fixed, bounded connected open subset of $B$, such that $K=\overline{U}\subset\subset B$.\\
\indent Let $c>2$ be such that $|\xi_j|\leq c/2$, for $\xi=(\xi_1,...,\xi_d)\in K$. In the $(M_p)$ case, choose $s>0$ such that $\displaystyle\int_{\mathbb R^d}e^{M(k|\eta|)}e^{-M(\frac{s}{2}|\eta|)}d\eta<\infty$ and $e^{2M(k|\eta|)}\leq \tilde{c} e^{M(\frac{s}{2}|\eta|)}$, for some constant $\tilde{c}>0$. For the $\{M_p\}$ case, by the conditions in the theorem, for every $k>0$ there exists $C>0$, such that $\ln_+|f(\xi+i\eta)|\leq M(k|\eta|)+\ln C$ for all $\xi\in K$ and $\eta\in\mathbb R^d$. The same estimate holds for the nonnegative increasing function
\begin{eqnarray*} g(\rho)=\sup_{|\eta|\leq\rho}\sup_{\xi\in K}\ln_+|f(\xi+i\eta)|.
\end{eqnarray*} If we use lemma \ref{nnl} for this function, we get that there exist a subordinate function $\epsilon(\rho)$ and a constant $C>1$ such that $g(\rho)\leq M(\epsilon(\rho))+\ln C$. From this we have that $\ln_+|f(\xi+i\eta)|\leq g(|\eta|)\leq M(\epsilon(|\eta|))+\ln C$, i.e. \begin{eqnarray}\label{n2}
|f(\xi+i\eta)|\leq Ce^{M(\epsilon(|\eta|))},\, \forall\xi\in K, \forall\eta\in\mathbb R^d, \end{eqnarray} for some $C>1$. By lemma 3.12 of \cite{Komatsu1}, there exists another sequence $\tilde{N}_p$, which satisfies $(M.1)$, such that $\tilde{N}(\rho)\geq M(\epsilon(\rho))$ and $k'_p=\tilde{n}_p/m_p\longrightarrow\infty$ when $p\longrightarrow\infty$. Take $(k_p)\in\mathfrak{R}$ such that $k_p\leq k'_p$, $p\in\mathbb Z_+$. Then \begin{eqnarray*} e^{N_{k_p}(\rho)}=\sup_p\frac{\rho^p}{M_p\prod_{j=1}^p k_j}\geq \sup_p\frac{\rho^p}{M_p\prod_{j=1}^p k'_j}=e^{\tilde{N}(\rho)}\geq e^{M(\epsilon(\rho))}.
\end{eqnarray*} Hence, from (\ref{n2}), it follows that $|f(\xi+i\eta)|\leq Ce^{N_{k_p}(|\eta|)}$, for all $\xi\in K$ and $\eta\in\mathbb R^d$. Choose $(s_p)\in\mathfrak{R}$ such that $\displaystyle\int_{\mathbb R^d}e^{N_{k_p}(|\eta|)}e^{-N_{2s_p}(|\eta|)}d\eta<\infty$ and $e^{2N_{k_p}(|\eta|)}\leq \tilde{c} e^{N_{2s_p}(|\eta|)}$, for some $\tilde{c}>0$.\\
\indent Now, for the chosen $c$ and $s$, resp. $(s_p)$, by the discussion before the theorem, we can find $l>0$, resp. $(l_p)\in\mathfrak{R}$, and entire functions $P_l(w)$ as in (\ref{u1}), resp. $P_{l_p}(w)$ as in (\ref{u2}), such that they don't have zeroes in $W=\mathbb R^d+i\{v\in\mathbb R^d||v_j|\leq c,\,j=1,...,d\}$ and the following estimates hold \begin{eqnarray*}
\left|\partial^{\alpha}_x\frac{1}{P_l(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}e^{-M\left(s|x|\right)},\, \mbox{resp.}\, \left|\partial^{\alpha}_x\frac{1}{P_{l_p}(x)}\right|\leq C\cdot\frac{\alpha!}{r^{|\alpha|}}e^{-N_{s_p}(|x|)},\, x\in\mathbb R^d,\alpha\in\mathbb N^d,
\end{eqnarray*} where $C$ depends on $s$ and $l$, resp. $(s_p)$ and $(l_p)$, and $M_p$; $r\leq c$ is arbitrary but fixed. For shorter notation, we will denote $P_l(w)$ and $P_{l_p}(w)$ by $P(w)$ in both cases. Define the entire functions $\displaystyle P_{\xi}(w)=P(w-i\xi)=\prod_{j=q}^{\infty}\left(1+\frac{(w-i\xi)^2}{l^2 m_j^2}\right)$ in the $(M_p)$ case, resp. $\displaystyle P_{\xi}(w)=P(w-i\xi)=\prod_{j=q}^{\infty}\left(1+\frac{(w-i\xi)^2}{l_j^2 m_j^2}\right)$ in the $\{M_p\}$ case. As we noted in the construction of the entire functions $P(w)$ (the discussion before the theorem), $P(w)$ satisfies the equivalent conditions a) and b) of proposition 4.5 of \cite{Komatsu1}. Hence, there exist $L>0$ and $C'>0$, resp. for every $L>0$ there exists $C'>0$, such that $|P(w)|\leq C'e^{M(L|w|)}$, $w\in\mathbb C^d$ and $P(D)$ are ultradifferential operators of $(M_p)$, resp. $\{M_p\}$, type. So, we obtain \begin{eqnarray*}
|P_{\xi}(w)|=|P(w-i\xi)|\leq C'e^{M(L|w-i\xi|)}\leq C''e^{M(2L|w|)},\, w\in\mathbb C^d,
\end{eqnarray*} because $\xi=(\xi_1,...,\xi_d)$ is such that $|\xi_j|\leq c/2$, for $j=1,...,d$. Hence, by proposition 4.5 of \cite{Komatsu1}, $P_{\xi}(D)$ is an ultradifferential operator of class $(M_p)$, resp. of class $\{M_p\}$, for every $\xi=(\xi_1,...,\xi_d)$ such that $|\xi_j|\leq c/2$, $j=1,...,d$. Moreover, by the properties of $P(w)$, it follows that $P_{\xi}(w)$ is an entire function that doesn't have zeroes in $\mathbb R^d+i\{v\in\mathbb R^d||v_j|\leq c/2,\,j=1,...,d\}$ for all $\xi\in K$. So, by using the Cauchy integral formula to estimate the derivatives, one obtains that $P_{\xi}(\eta)$ and $1/P_{\xi}(\eta)$ are multipliers for $\mathcal S'^{*}(\mathbb R^d_{\eta})$. Also, by (\ref{uu1}), we have $|P_{\xi}(\eta)|=|P(\eta-i\xi)|\geq \tilde{C}e^{M(s|\eta-i\xi|)}\geq \tilde{C}'e^{M(\frac{s}{2}|\eta|)}$, for all $\xi\in K$ and $\eta\in\mathbb R^d$ in the $(M_p)$ case and similarly, $|P_{\xi}(\eta)|=|P(\eta-i\xi)|\geq \tilde{C}e^{N_{s_p}(|\eta-i\xi|)}\geq \tilde{C}'e^{N_{2s_p}(|\eta|)}$, for all $\xi\in K$ and $\eta\in\mathbb R^d$, in the $\{M_p\}$ case. For $\xi\in B$, put $f_{\xi}(\eta)=f(\xi+i\eta)$. Then $f_{\xi}(\eta)/P_{\xi}(\eta)\in L^1\left(\mathbb R^d_{\eta}\right)\cap \mathcal E^*\left(\mathbb R^d_{\eta}\right)$, for all $\xi\in K$. Observe that \begin{eqnarray*} e^{x\xi}\mathcal{F}^{-1}_{\eta\rightarrow x}\left(f_{\xi}(\eta)\right)(x)=e^{x\xi}\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\frac{f_{\xi}(\eta)P_{\xi}(\eta)}{P_{\xi}(\eta)}\right)(x)= e^{x\xi}P_{\xi}(D_x)\left(\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\frac{f_{\xi}(\eta)}{P_{\xi}(\eta)}\right)(x)\right), \end{eqnarray*} i.e. \begin{eqnarray}\label{n3} S_{\xi}(x)=e^{x\xi}P_{\xi}(D_x)\left(\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\frac{f_{\xi}(\eta)}{P_{\xi}(\eta)}\right)(x)\right). \end{eqnarray} Let $\displaystyle P(w)=\sum_{\alpha}c_{\alpha}w^{\alpha}$. 
For simpler notation, put $R(\eta)=f_{\xi}(\eta)/P_{\xi}(\eta)$ and calculate as follows \begin{eqnarray*} P(D_x)\left(e^{x\xi}\mathcal{F}^{-1}_{\eta\rightarrow x}(R)(x)\right)&=&\sum_{\alpha} c_{\alpha} \sum_{\beta\leq\alpha}{\alpha\choose\beta}(-i\xi)^{\beta}e^{x\xi}D^{\alpha-\beta}_x \mathcal{F}^{-1}_{\eta\rightarrow x}(R)(x)\\ &=&e^{x\xi}\sum_{\alpha} c_{\alpha} \sum_{\beta\leq\alpha}{\alpha\choose\beta}(-i\xi)^{\beta}D^{\alpha-\beta}_x \mathcal{F}^{-1}_{\eta\rightarrow x}(R)(x). \end{eqnarray*} Note that\\ $\displaystyle\sum_{\alpha} c_{\alpha} \sum_{\beta\leq\alpha}{\alpha\choose\beta}(-i\xi)^{\beta}D^{\alpha-\beta}_x \mathcal{F}^{-1}_{\eta\rightarrow x}(R)(x)$ \begin{eqnarray*} &=&\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\sum_{\alpha} c_{\alpha} \sum_{\beta\leq\alpha}{\alpha\choose\beta}(-i\xi)^{\beta}\eta^{\alpha-\beta}R(\eta)\right)(x) =\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\sum_{\alpha} c_{\alpha}(\eta-i\xi)^{\alpha}R(\eta)\right)(x)\\ &=&\mathcal{F}^{-1}_{\eta\rightarrow x}\left(P(\eta-i\xi)R(\eta)\right)(x)=\mathcal{F}^{-1}_{\eta\rightarrow x}\left(P_{\xi}(\eta)R(\eta)\right)(x)=P_{\xi}(D_x)\mathcal{F}^{-1}_{\eta\rightarrow x}(R)(x). \end{eqnarray*} From this and (\ref{n3}), we get $\displaystyle S_{\xi}(x)=P(D_x)\left(e^{x\xi}\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\frac{f_{\xi}(\eta)}{P_{\xi}(\eta)}\right)(x)\right)$. Now, for $w=\eta-i\xi$, we have \begin{eqnarray*} e^{x\xi}\mathcal{F}^{-1}_{\eta\rightarrow x}\left(\frac{f_{\xi}(\eta)}{P_{\xi}(\eta)}\right)(x) =\frac{1}{(2\pi)^d}\int_{\mathbb R^d}\frac{f(\xi+i\eta)e^{(\xi+i\eta)x}}{P(\eta-i\xi)}d\eta =\frac{1}{(2\pi)^d}\int_{\mathbb R^d-i\xi}\frac{f(iw)e^{iwx}}{P(w)}dw. \end{eqnarray*} The function $\displaystyle\frac{f(iw)e^{iwx}}{P(w)}$ is analytic for $iw\in U+i\mathbb R^d$, i.e. $w\in\mathbb R^d-iU$ (because $P(w)$ is analytic in the last set and doesn't have zeroes there). 
Using the growth estimates for $f$ and $P$, from the theorem of Cauchy-Poincar\'e, it follows that the last integral doesn't depend on $\xi\in U$. From this and the arbitrariness of $U$ it follows that $S_{\xi}(x)$ doesn't depend on $\xi\in B$. We will denote this by $S(x)$. Now, by the observations in the beginning, it follows that $\mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}S(x)\right)=f_{\xi}$ as ultradistributions in $\eta$ for every fixed $\xi\in B$. By theorem \ref{t1}, it follows that $\mathcal{F}_{x\rightarrow\eta}\left(e^{-x\xi}S(x)\right)$ is an analytic function for $\zeta=\xi+i\eta\in B+i\mathbb R^d$, hence the equality (\ref{n4}) holds pointwise. \end{proof}
\begin{remark} If $f$ is an analytic function on $O=B+i\mathbb R^d_{\eta}$ and satisfies the conditions of the previous theorem then, by this theorem and theorem \ref{t1}, it follows that $f$ is analytic on $\mathrm{ch\,}B+i\mathbb R^d_{\eta}$ and satisfies the estimates (\ref{3}) for every $K\subset\subset \mathrm{ch\,}B$. \end{remark}
\end{document}
Ivan Petrovsky
Ivan Georgievich Petrovsky (Russian: Ива́н Гео́ргиевич Петро́вский) (18 January 1901 – 15 January 1973) (the family name is also transliterated as Petrovskii or Petrowsky) was a Soviet mathematician working mainly in the field of partial differential equations. He greatly contributed to the solution of Hilbert's 19th and 16th problems, and discovered what are now called Petrovsky lacunas. He also worked on the theories of boundary value problems, probability, and on the topology of algebraic curves and surfaces.
Ivan G. Petrovsky
Ivan G. Petrovsky portrayed on a Soviet stamp
Born(1901-01-18)18 January 1901
Sevsk, Russian Empire
Died15 January 1973(1973-01-15) (aged 71)
Moscow, USSR
Alma materMoscow State University
Known forHyperbolic partial differential equations
Kolmogorov–Petrovsky–Piskunov equation
Petrovsky lacuna
Scientific career
InstitutionsMoscow State University
Steklov Institute of Mathematics
Doctoral advisorDmitri Egorov
Doctoral studentsOlga Ladyzhenskaya
Yevgeniy Landis
Olga Oleinik
Sergei Godunov
Aleksei Filippov
Biography
Petrovsky was a student of Dmitri Egorov. Among his students were Olga Ladyzhenskaya, Yevgeniy Landis, Olga Oleinik and Sergei Godunov.
Petrovsky taught at the Steklov Institute of Mathematics. He was a member of the Soviet Academy of Sciences from 1946 and was awarded the title Hero of Socialist Labor in 1969. He was the president of Moscow State University (1951–1973) and the head of the International Congress of Mathematicians (Moscow, 1966). He is buried in the cemetery of the Novodevichy Convent in Moscow.
Selected publications
• Petrovsky, I. G. (1937), "Über das Cauchysche Problem für Systeme von partiellen Differentialgleichungen", Recueil Mathématique (Matematicheskii Sbornik) (in German), 2 (44) (5): 815–870, JFM 63.0466.03, Zbl 0018.40503.
• Petrovsky, I. G. (1939), "Sur l'analyticité des solutions des systèmes d'équations différentielles", Recueil Mathématique (Matematicheskii Sbornik) (in French), 5 (47) (1): 3–70, JFM 65.0405.02, MR 0001425, Zbl 0022.22601.
• Petrovsky, I. G. (1945), "On the diffusion of waves and the lacunas for hyperbolic equations", Recueil Mathématique (Matematicheskii Sbornik), 17 (59) (3): 289–368, MR 0016861, Zbl 0061.21309.
• Petrovsky, I. G. (1953), Vorlesungen über die Theorie der Integralgleichungen, Würzburg: Physica Verlag[1]
• Petrovsky, I. G. (1954), Lectures on partial differential equations, New York: Interscience[2]
• Petrovsky, I. G. (1954), Vorlesungen über die gewöhnlichen Differentialgleichungen (in German), Teubner
• Petrovsky, I. G. (1957), Lectures on the theory of integral equations, Rochester: Graylock[3]
• Petrowsky, I. G. (1996), Oleinik, O. A. (ed.), Selected works. Part I: Systems of partial differential equations and algebraic geometry, Classics of Soviet Mathematics, vol. 5 (part 1), Amsterdam: Gordon and Breach Publishers, ISBN 978-2-88124-978-5, MR 1677652, Zbl 0948.01042.
• Petrowsky, I. G. (1996), Oleinik, O. A. (ed.), Selected works. Part II: Differential equations and probability theory, Classics of Soviet Mathematics, vol. 5 (part 2), Amsterdam: Gordon and Breach Publishers, ISBN 978-2-88124-979-2, MR 1677648, Zbl 0948.01043.
References
1. Pollard, Harry (1954). "Book Review: Vorlesungen über die Theorie der Integralgleichungen". Bulletin of the American Mathematical Society. 60 (3): 288–289. doi:10.1090/S0002-9904-1954-09817-8. ISSN 0002-9904.
2. Bellman, Richard (1955). "Book Review: Lectures on partial differential equations by I. G. Petrovsky". Bulletin of the American Mathematical Society. 61 (4): 367–370. doi:10.1090/S0002-9904-1955-09957-9.
3. Barrett, John H. (1961). "Book Review: Lectures on the theory of integral equations by I. G. Petrovskii". Bulletin of the American Mathematical Society. 67 (4): 333–335. doi:10.1090/S0002-9904-1961-10596-X.
• Aleksandrov, P. S.; Arnol'd, V. I.; Gel'fand, I. M.; Kolmogorov, A. N.; Novikov, S. P.; Oleinik, O. A. (2001), "Ivan Georgievich Petrovskii", in Osipov, Yu. S.; Sadovnichii, V. A. (eds.), Differential Equations and Related Topics - dedicated to the 100th Anniversary of I.G. Petrovskii, Moscow: Lomonosov State University and Steklov Mathematical Institute, pp. 1–18, retrieved 1 September 2009. A very ample paper describing Petrovsky's scientific research, authored by friends, collaborators and pupils.
• Aleksandrov, P. S.; Oleinik, O. A. (1981), "On the eightieth anniversary of the birth of Ivan Georgievich Petrovskii", Uspekhi Matematicheskikh Nauk (in Russian), 36 (1(217)): 3–10, Bibcode:1981RuMaS..36Q...1A, doi:10.1070/rm1981v036n01abeh002539, MR 0608939, S2CID 250833608, Zbl 0454.01022: the original paper translated in (Alexandrov & Oleinik 1996), and also in the Russian Mathematical Surveys, 1981, 36:1, 1–8.
• Alexandrov, P. S.; Oleinik, O. A. (1996), "Ivan Georgievich Petrowsky", in Oleinik, O. A. (ed.), Selected works. Part II: Differential equations and probability theory, Classics of Soviet Mathematics, vol. 5 (part 2), Amsterdam: Gordon and Breach Publishers, pp. 1–9, ISBN 978-2-88124-979-2, MR 1677648, Zbl 0948.01043: an English translation of the paper (Aleksandrov & Oleinik 1981).
• Kolmogorov, A. N. (1996), "Ivan Georgievich Petrowsky", in Oleinik, O. A. (ed.), Selected works. Part I: Systems of partial differential equations and algebraic geometry, Classics of Soviet Mathematics, vol. 5 (part 1), Amsterdam: Gordon and Breach Publishers, pp. 1–3, ISBN 978-2-88124-978-5, MR 1677652, Zbl 0948.01042
• Lui, S. H. (1997), "An Interview with Vladimir Arnol´d" (PDF), Notices of the American Mathematical Society, 44 (4): 432–438, Zbl 0913.01024. An interview with Vladimir Igorevich Arnol'd containing several important historical details about his teachers and other great mathematicians he knew when he was first studying and then working at the MSU Faculty of Mechanics and Mathematics, including Ivan Petrowsky.
• Oleinik, O. A. (1996), "I. G. Petrowsky and Modern Mathematics", in Oleinik, O. A. (ed.), Selected works. Part I: Systems of partial differential equations and algebraic geometry, Classics of Soviet Mathematics, vol. 5 (part 1), Amsterdam: Gordon and Breach Publishers, pp. 4–30, ISBN 978-2-88124-978-5, MR 1677652, Zbl 0948.01042
• Gårding, L. (1996), "Ivan Georgievich Petrowsky and Partial Differential Equations", in Oleinik, O. A. (ed.), Selected works. Part I: Systems of partial differential equations and algebraic geometry, Classics of Soviet Mathematics, vol. 5 (part 1), Amsterdam: Gordon and Breach Publishers, pp. 31–39, ISBN 978-2-88124-978-5, MR 1677652, Zbl 0948.01042.
• Vol'pert, A. I. (1996), "Propagation of Waves Described by Nonlinear Parabolic Equations (a commentary on article 6)", in Oleinik, O. A. (ed.), Selected works. Part II: Differential equations and probability theory, Classics of Soviet Mathematics, vol. 5 (part 2), Amsterdam: Gordon and Breach Publishers, pp. 364–399, ISBN 978-2-88124-979-2, MR 1677648, Zbl 0948.01043
External links
• Ivan Petrovsky at the Mathematics Genealogy Project
• O'Connor, John J.; Robertson, Edmund F., "Ivan Petrovsky", MacTutor History of Mathematics Archive, University of St Andrews
• Short Biography of Petrowsky – from the Moscow Mathematical Journal
Projective linear group
In mathematics, especially in the group theoretic area of algebra, the projective linear group (also known as the projective general linear group or PGL) is the induced action of the general linear group of a vector space V on the associated projective space P(V). Explicitly, the projective linear group is the quotient group
PGL(V) = GL(V)/Z(V)
where GL(V) is the general linear group of V and Z(V) is the subgroup of all nonzero scalar transformations of V; these are quotiented out because they act trivially on the projective space and they form the kernel of the action, and the notation "Z" reflects that the scalar transformations form the center of the general linear group.
The projective special linear group, PSL, is defined analogously, as the induced action of the special linear group on the associated projective space. Explicitly:
PSL(V) = SL(V)/SZ(V)
where SL(V) is the special linear group over V and SZ(V) is the subgroup of scalar transformations with unit determinant. Here SZ is the center of SL, and is naturally identified with the group of nth roots of unity in F (where n is the dimension of V and F is the base field).
PGL and PSL are among the fundamental groups of study, part of the so-called classical groups, and an element of PGL is called a projective linear transformation, projective transformation or homography. If V is the n-dimensional vector space over a field F, namely V = Fn, the alternate notations PGL(n, F) and PSL(n, F) are also used.
Note that PGL(n, F) and PSL(n, F) are isomorphic if and only if every element of F has an nth root in F. As an example, note that PGL(2, C) = PSL(2, C), but that PGL(2, R) > PSL(2, R);[1] this corresponds to the real projective line being orientable, and the projective special linear group only being the orientation-preserving transformations.
PGL and PSL can also be defined over a ring, with an important example being the modular group, PSL(2, Z).
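The nth-root criterion above is easy to check computationally for prime fields. The Python sketch below (the function name is ours, and q is assumed prime so that arithmetic mod q is field arithmetic) confirms that every nonzero element of Fq is an nth power exactly when gcd(n, q − 1) = 1:

```python
from math import gcd

def every_element_has_nth_root(q, n):
    # Set of nth powers of the nonzero elements of F_q (q assumed prime).
    powers = {pow(x, n, q) for x in range(1, q)}
    return powers == set(range(1, q))

for q in (2, 3, 5, 7, 11, 13):
    for n in (2, 3, 4):
        assert every_element_has_nth_root(q, n) == (gcd(n, q - 1) == 1)

# Not every element of F_5 is a square, so PGL(2, 5) > PSL(2, 5),
# while every element of F_3 is a cube, matching PGL(3, 3) = PSL(3, 3).
assert not every_element_has_nth_root(5, 2)
assert every_element_has_nth_root(3, 3)
```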
Name
The name comes from projective geometry, where the projective group acting on homogeneous coordinates (x0:x1: ... :xn) is the underlying group of the geometry.[note 1] Stated differently, the natural action of GL(V) on V descends to an action of PGL(V) on the projective space P(V).
The projective linear groups therefore generalise the case PGL(2, C) of Möbius transformations (sometimes called the Möbius group), which acts on the projective line.
Note that unlike the general linear group, which is generally defined axiomatically as "invertible functions preserving the linear (vector space) structure", the projective linear group is defined constructively, as a quotient of the general linear group of the associated vector space, rather than axiomatically as "invertible functions preserving the projective linear structure". This is reflected in the notation: PGL(n, F) is the group associated to GL(n, F), and is the projective linear group of (n−1)-dimensional projective space, not n-dimensional projective space.
Collineations
Main article: Collineation
A related group is the collineation group, which is defined axiomatically. A collineation is an invertible (or more generally one-to-one) map which sends collinear points to collinear points. One can define a projective space axiomatically in terms of an incidence structure (a set of points P, lines L, and an incidence relation I specifying which points lie on which lines) satisfying certain axioms – an automorphism of a projective space thus defined then being an automorphism f of the set of points and an automorphism g of the set of lines, preserving the incidence relation,[note 2] which is exactly a collineation of a space to itself. Projective linear transforms are collineations (planes in a vector space correspond to lines in the associated projective space, and linear transforms map planes to planes, so projective linear transforms map lines to lines), but in general not all collineations are projective linear transforms – PGL is in general a proper subgroup of the collineation group.
Specifically, for n = 2 (a projective line), all points are collinear, so the collineation group is exactly the symmetric group of the points of the projective line, and except for F2 and F3 (where PGL is the full symmetric group), PGL is a proper subgroup of the full symmetric group on these points.
For n ≥ 3, the collineation group is the projective semilinear group, PΓL – this is PGL, twisted by field automorphisms; formally, PΓL ≅ PGL ⋊ Gal(K/k), where k is the prime field for K; this is the fundamental theorem of projective geometry. Thus for K a prime field (Fp or Q), we have PGL = PΓL, but for K a field with non-trivial Galois automorphisms (such as $\mathbf {F} _{p^{n}}$ for n ≥ 2 or C), the projective linear group is a proper subgroup of the collineation group, which can be thought of as "transforms preserving a projective semi-linear structure". Correspondingly, the quotient group PΓL/PGL = Gal(K/k) corresponds to "choices of linear structure", with the identity (base point) being the existing linear structure.
One may also define collineation groups for axiomatically defined projective spaces, where there is no natural notion of a projective linear transform. However, with the exception of the non-Desarguesian planes, all projective spaces are the projectivization of a linear space over a division ring though, as noted above, there are multiple choices of linear structure, namely a torsor over Gal(K/k) (for n ≥ 3).
Elements
The elements of the projective linear group can be understood as "tilting the plane" along one of the axes, and then projecting to the original plane, and also have dimension n.
A more familiar geometric way to understand the projective transforms is via projective rotations (the elements of PSO(n+1)), which corresponds to the stereographic projection of rotations of the unit hypersphere, and has dimension $\textstyle {1+2+\cdots +n={\binom {n+1}{2}}}.$ Visually, this corresponds to standing at the origin (or placing a camera at the origin), and turning one's angle of view, then projecting onto a flat plane. Rotations in axes perpendicular to the hyperplane preserve the hyperplane and yield a rotation of the hyperplane (an element of SO(n), which has dimension $\textstyle {1+2+\cdots +(n-1)={\binom {n}{2}}}.$), while rotations in axes parallel to the hyperplane are proper projective maps, and accounts for the remaining n dimensions.
Properties
• PGL sends collinear points to collinear points (it preserves projective lines), but it is not the full collineation group, which is instead either PΓL (for n > 2) or the full symmetric group for n = 2 (the projective line).
• Every (biregular) algebraic automorphism of a projective space is projective linear. The birational automorphisms form a larger group, the Cremona group.
• PGL acts faithfully on projective space: non-identity elements act non-trivially.
Concretely, the kernel of the action of GL on projective space is exactly the scalar maps, which are quotiented out in PGL.
• PGL acts 2-transitively on projective space.
This is because 2 distinct points in projective space correspond to 2 vectors that do not lie on a single linear space, and hence are linearly independent, and GL acts transitively on k-element sets of linearly independent vectors.
• PGL(2, K) acts sharply 3-transitively on the projective line.
3 arbitrary points are conventionally mapped to [0, 1], [1, 1], [1, 0]; in alternative notation, 0, 1, ∞. In fractional linear transformation notation, the function ${\frac {x-a}{x-c}}\cdot {\frac {b-c}{b-a}}$ maps a ↦ 0, b ↦ 1, c ↦ ∞, and is the unique such map that does so. This is the cross-ratio (x, b; a, c) – see Cross-ratio § Transformational approach for details.
• For n ≥ 3, PGL(n, K) does not act 3-transitively, because it must send 3 collinear points to 3 other collinear points, not an arbitrary set. For n = 2 the space is the projective line, so all points are collinear and this is no restriction.
• PGL(2, K) does not act 4-transitively on the projective line (except for PGL(2, 3), as P1(3) has 3+1=4 points, so 3-transitive implies 4-transitive); the invariant that is preserved is the cross ratio, and this determines where every other point is sent: specifying where 3 points are mapped determines the map. Thus in particular it is not the full collineation group of the projective line (except for F2 and F3).
• PSL(2, q) and PGL(2, q) (for q > 2, and q odd for PSL) are two of the four families of Zassenhaus groups.
• PGL(n, K) is an algebraic group of dimension $n^{2}-1$ and an open subgroup of the projective space $\mathbf {P} ^{n^{2}-1}$. As defined, the functor PSL(n, K) does not define an algebraic group, or even an fppf sheaf, and its sheafification in the fppf topology is in fact PGL(n, K).
• PSL and PGL are centerless – this is because the diagonal matrices are not only the center, but also the hypercenter (the quotient of a group by its center is not necessarily centerless).[note 3]
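The sharply 3-transitive map in the list above can be made concrete. Below is a small Python sketch of the fractional linear transformation $x\mapsto {\frac {x-a}{x-c}}\cdot {\frac {b-c}{b-a}}$ sending a ↦ 0, b ↦ 1, c ↦ ∞, using exact rational arithmetic (the string 'inf' is our ad hoc stand-in for the point at infinity):

```python
from fractions import Fraction

def to_zero_one_inf(a, b, c):
    """Return the unique fractional linear map sending a -> 0, b -> 1,
    c -> infinity, namely x |-> ((x - a)/(x - c)) * ((b - c)/(b - a)).
    The input x = c maps to infinity, signalled by the string 'inf'."""
    def f(x):
        if x == c:
            return 'inf'
        return Fraction(x - a, x - c) * Fraction(b - c, b - a)
    return f

f = to_zero_one_inf(2, 5, 7)
print(f(2), f(5), f(7))  # 0 1 inf
```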
Fractional linear transformations
Further information: Möbius transformation § Projective matrix representations
As for Möbius transformations, the group PGL(2, K) can be interpreted as fractional linear transformations with coefficients in K. Points in the projective line over K correspond to pairs from $K^{2}$, with two pairs being equivalent when they are proportional. When the second coordinate is non-zero, a point can be represented by [z, 1]. Then when $ad-bc\neq 0$, the action of PGL(2, K) is by linear transformation:
$[z,\ 1]{\begin{pmatrix}a&c\\b&d\end{pmatrix}}\ =\ [az+b,\ cz+d]\ =\ \left[{\frac {az+b}{cz+d}},\ 1\right].$
In this way successive transformations can be written as right multiplication by such matrices, and matrix multiplication can be used for the group product in PGL(2, K).
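As a sanity check of the row-vector convention above, the following Python sketch (the matrix entries and helper names are ours) verifies that composing two fractional linear transformations agrees with acting by the single matrix product:

```python
from fractions import Fraction

def apply(M, z):
    """Act on z by M = ((a, c), (b, d)) in the row-vector convention
    of the text: [z, 1] M = [a z + b, c z + d] ~ (a z + b)/(c z + d)."""
    (a, c), (b, d) = M
    return (a * z + b) / (c * z + d)

def mul(M, N):
    """Ordinary 2x2 matrix product M N."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

M = ((2, 1), (1, 1))   # z |-> (2z + 1)/(z + 1)
N = ((1, 1), (3, 0))   # z |-> (z + 3)/z
z = Fraction(5)
# Applying M and then N agrees with acting by the single matrix M N.
assert apply(N, apply(M, z)) == apply(mul(M, N), z)
```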
Finite fields
The projective special linear groups PSL(n, Fq) for a finite field Fq are often written as PSL(n, q) or Ln(q). They are finite simple groups whenever n is at least 2, with two exceptions:[2] L2(2), which is isomorphic to S3, the symmetric group on 3 letters, and is solvable; and L2(3), which is isomorphic to A4, the alternating group on 4 letters, and is also solvable. These exceptional isomorphisms can be understood as arising from the action on the projective line.
The special linear groups SL(n, q) are thus quasisimple: perfect central extensions of a simple group (unless n = 2 and q = 2 or 3).
History
The groups PSL(2, p) were constructed by Évariste Galois in the 1830s, and were the second family of finite simple groups, after the alternating groups.[3] Galois constructed them as fractional linear transforms, and observed that they were simple except if p was 2 or 3; this is contained in his last letter to Chevalier.[4] In the same letter and attached manuscripts, Galois also constructed the general linear group over a prime field, GL(ν, p), in studying the Galois group of the general equation of degree pν.
The groups PSL(n, q) (general n, general finite field) were then constructed in the classic 1870 text by Camille Jordan, Traité des substitutions et des équations algébriques.
Order
The order of PGL(n, q) is
$(q^{n}-1)(q^{n}-q)(q^{n}-q^{2})\cdots (q^{n}-q^{n-1})/(q-1)=q^{n^{2}-1}-O(q^{n^{2}-3}),$
which corresponds to the order of GL(n, q), divided by q − 1 for projectivization; see q-analog for discussion of such formulas. Note that the degree is $n^{2}-1$, which agrees with the dimension as an algebraic group. The "O" is for big O notation, meaning "terms involving lower order". This also equals the order of SL(n, q); there dividing by q − 1 is due to the determinant.
The order of PSL(n, q) is the above, divided by $\gcd(n,q-1)$. This is equal to |SZ(n, q)|, the number of scalar matrices with determinant 1; to $|F^{\times }/(F^{\times })^{n}|$, the number of classes of elements that have no nth root; and it is also the number of nth roots of unity in $\mathbf {F} _{q}$.[note 4]
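These order formulas are straightforward to evaluate. A minimal Python sketch (function names are ours) reproduces several orders quoted later in the article:

```python
from math import gcd

def order_GL(n, q):
    # |GL(n, q)| = (q^n - 1)(q^n - q) ... (q^n - q^(n-1))
    o = 1
    for k in range(n):
        o *= q**n - q**k
    return o

def order_PGL(n, q):
    return order_GL(n, q) // (q - 1)

def order_PSL(n, q):
    return order_PGL(n, q) // gcd(n, q - 1)

assert order_PGL(2, 5) == 120    # PGL(2, 5) ≅ S5
assert order_PSL(2, 5) == 60     # PSL(2, 5) ≅ A5
assert order_PSL(2, 7) == 168    # L2(7), the Klein quartic group
assert order_PSL(4, 2) == 20160  # L4(2) ≅ A8, and |A8| = 8!/2
```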
Exceptional isomorphisms
In addition to the isomorphisms
L2(2) ≅ S3, L2(3) ≅ A4, and PGL(2, 3) ≅ S4,
there are other exceptional isomorphisms between projective special linear groups and alternating groups (these groups are all simple, as the alternating group over 5 or more letters is simple):
$L_{2}(4)\cong A_{5}$
$L_{2}(5)\cong A_{5}$ (see here for a proof)
$L_{2}(9)\cong A_{6}$
$L_{4}(2)\cong A_{8}.$[5]
The isomorphism L2(9) ≅ A6 allows one to see the exotic outer automorphism of A6 in terms of field automorphism and matrix operations. The isomorphism L4(2) ≅ A8 is of interest in the structure of the Mathieu group M24.
The associated extensions SL(n, q) → PSL(n, q) are covering groups of the alternating groups (universal perfect central extensions) for A4, A5, by uniqueness of the universal perfect central extension; for L2(9) ≅ A6, the associated extension is a perfect central extension, but not universal: there is a 3-fold covering group.
The groups over F5 have a number of exceptional isomorphisms:
PSL(2, 5) ≅ A5 ≅ I, the alternating group on five elements, or equivalently the icosahedral group;
PGL(2, 5) ≅ S5, the symmetric group on five elements;
SL(2, 5) ≅ 2 ⋅ A5 ≅ 2I the double cover of the alternating group A5, or equivalently the binary icosahedral group.
They can also be used to give a construction of an exotic map S5 → S6, as described below. Note however that GL(2, 5) is not a double cover of S5, but is rather a 4-fold cover.
A further isomorphism is:
L2(7) ≅ L3(2) is the simple group of order 168, the second-smallest non-abelian simple group, and is not an alternating group; see PSL(2,7).
The above exceptional isomorphisms involving the projective special linear groups are almost all of the exceptional isomorphisms between families of finite simple groups; the only other exceptional isomorphism is PSU(4, 2) ≃ PSp(4, 3), between a projective special unitary group and a projective symplectic group.[3]
Action on projective line
Some of the above maps can be seen directly in terms of the action of PSL and PGL on the associated projective line: PGL(n, q) acts on the projective space $\mathbf {P} ^{n-1}(q)$, which has $(q^{n}-1)/(q-1)$ points, and this yields a map from the projective linear group to the symmetric group on $(q^{n}-1)/(q-1)$ points. For n = 2, this is the projective line $\mathbf {P} ^{1}(q)$, which has $(q^{2}-1)/(q-1)=q+1$ points, so there is a map PGL(2, q) → $S_{q+1}$.
To understand these maps, it is useful to recall these facts:
• The order of PGL(2, q) is
$(q^{2}-1)(q^{2}-q)/(q-1)=q^{3}-q=(q-1)q(q+1);$
the order of PSL(2, q) either equals this (if the characteristic is 2), or is half this (if the characteristic is not 2).
• The action of the projective linear group on the projective line is sharply 3-transitive (faithful and 3-transitive), so the map is one-to-one and has image a 3-transitive subgroup.
Thus the image is a 3-transitive subgroup of known order, which allows it to be identified. This yields the following maps:
• PSL(2, 2) = PGL(2, 2) → S3, of order 6, which is an isomorphism.
• The inverse map (a projective representation of S3) can be realized by the anharmonic group, and more generally yields an embedding S3 → PGL(2, q) for all fields.
• PSL(2, 3) < PGL(2, 3) → S4, of orders 12 and 24, the latter of which is an isomorphism, with PSL(2, 3) being the alternating group.
• The anharmonic group gives a partial map in the opposite direction, mapping S3 → PGL(2, 3) as the stabilizer of the point −1.
• PSL(2, 4) = PGL(2, 4) → S5, of order 60, yielding the alternating group A5.
• PSL(2, 5) < PGL(2, 5) → S6, of orders 60 and 120, which yields an embedding of S5 (respectively, A5) as a transitive subgroup of S6 (respectively, A6). This is an example of an exotic map S5 → S6, and can be used to construct the exceptional outer automorphism of S6.[6] Note that the isomorphism PGL(2, 5) ≅ S5 is not transparent from this presentation: there is no particularly natural set of 5 elements on which PGL(2, 5) acts.
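The first maps in this list can be checked by brute force for small q. The Python sketch below (our own encoding of the projective line as canonical representatives) enumerates the action of the invertible 2×2 matrices over F3 on the 4 points of P1(3), recovering PGL(2, 3) ≅ S4 and PSL(2, 3) ≅ A4 by counting distinct permutations:

```python
from itertools import product

q = 3
# Points of the projective line P^1(F_3): [1:0] together with [x:1].
points = [(1, 0)] + [(x, 1) for x in range(q)]

def normalize(v):
    """Scale a nonzero vector in F_q^2 to its canonical representative."""
    x, y = v
    if y % q != 0:
        return ((x * pow(y, -1, q)) % q, 1)
    return (1, 0)

def act(M, p):
    a, b, c, d = M
    x, y = p
    return normalize(((a * x + b * y) % q, (c * x + d * y) % q))

matrices = [m for m in product(range(q), repeat=4)
            if (m[0] * m[3] - m[1] * m[2]) % q != 0]
perms = {tuple(act(m, p) for p in points) for m in matrices}
perms_psl = {tuple(act(m, p) for p in points) for m in matrices
             if (m[0] * m[3] - m[1] * m[2]) % q == 1}

assert len(perms) == 24      # all of S4: PGL(2, 3) ≅ S4
assert len(perms_psl) == 12  # the even permutations: PSL(2, 3) ≅ A4
```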
Action on p points
While PSL(n, q) naturally acts on $(q^{n}-1)/(q-1)=1+q+\cdots +q^{n-1}$ points, non-trivial actions on fewer points are rarer. Indeed, PSL(2, p) acts non-trivially on p points if and only if p = 2, 3, 5, 7, or 11; for 2 and 3 the group is not simple, while for 5, 7, and 11, the group is simple – further, it does not act non-trivially on fewer than p points.[note 5] This was first observed by Évariste Galois in his last letter to Chevalier, 1832.[7]
This can be analyzed as follows; note that for 2 and 3 the action is not faithful (it is a non-trivial quotient, and the PSL group is not simple), while for 5, 7, and 11 the action is faithful (as the group is simple and the action is non-trivial), and yields an embedding into Sp. In all but the last case, PSL(2, 11), it corresponds to an exceptional isomorphism, where the right-most group has an obvious action on p points:
• $L_{2}(2)\cong S_{3}\twoheadrightarrow S_{2}$ via the sign map;
• $L_{2}(3)\cong A_{4}\twoheadrightarrow A_{3}\cong C_{3}$ via the quotient by the Klein 4-group;
• $L_{2}(5)\cong A_{5}.$ To construct such an isomorphism, one needs to consider the group L2(5) as a Galois group of a Galois cover a5: X(5) → X(1) = P1, where X(N) is a modular curve of level N. This cover is ramified at 12 points. The modular curve X(5) has genus 0 and is isomorphic to a sphere over the field of complex numbers, and then the action of L2(5) on these 12 points becomes the symmetry group of an icosahedron. One then needs to consider the action of the symmetry group of icosahedron on the five associated tetrahedra.
• L2(7) ≅ L3(2) which acts on the 1+2+4 = 7 points of the Fano plane (projective plane over F2); this can also be seen as the action on order 2 biplane, which is the complementary Fano plane.
• L2(11) is subtler, and elaborated below; it acts on the order 3 biplane.[8]
Further, L2(7) and L2(11) have two inequivalent actions on p points; geometrically this is realized by the action on a biplane, which has p points and p blocks – the action on the points and the action on the blocks are both actions on p points, but not conjugate (they have different point stabilizers); they are instead related by an outer automorphism of the group.[9]
More recently, these last three exceptional actions have been interpreted as an example of the ADE classification:[10] these actions correspond to products (as sets, not as groups) of the groups as A4 × Z/5Z, S4 × Z/7Z, and A5 × Z/11Z, where the groups A4, S4 and A5 are the isometry groups of the Platonic solids, and correspond to E6, E7, and E8 under the McKay correspondence. These three exceptional cases are also realized as the geometries of polyhedra (equivalently, tilings of Riemann surfaces), respectively: the compound of five tetrahedra inside the icosahedron (sphere, genus 0), the order 2 biplane (complementary Fano plane) inside the Klein quartic (genus 3), and the order 3 biplane (Paley biplane) inside the buckyball surface (genus 70).[11][12]
The action of L2(11) can be seen algebraically as due to an exceptional inclusion $L_{2}(5)\hookrightarrow L_{2}(11)$ – there are two conjugacy classes of subgroups of L2(11) that are isomorphic to L2(5), each with 11 elements: the action of L2(11) by conjugation on these is an action on 11 points, and, further, the two conjugacy classes are related by an outer automorphism of L2(11). (The same is true for subgroups of L2(7) isomorphic to S4, and this also has a biplane geometry.)
Geometrically, this action can be understood via a biplane geometry, which is defined as follows. A biplane geometry is a symmetric design (a set of points and an equal number of "lines", or rather blocks) such that any set of two points is contained in two lines, while any two lines intersect in two points; this is similar to a finite projective plane, except that rather than two points determining one line (and two lines determining one point), they determine two lines (respectively, points). In this case (the Paley biplane, obtained from the Paley digraph of order 11), the points are the affine line (the finite field) F11, where the first line is defined to be the five non-zero quadratic residues (points which are squares: 1, 3, 4, 5, 9), and the other lines are the affine translates of this (add a constant to all the points). L2(11) is then isomorphic to the subgroup of S11 that preserve this geometry (sends lines to lines), giving a set of 11 points on which it acts – in fact two: the points or the lines, which corresponds to the outer automorphism – while L2(5) is the stabilizer of a given line, or dually of a given point.
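The Paley biplane described above is small enough to verify directly. In the Python sketch below, the lines are the affine translates of the quadratic residues mod 11, exactly as in the text, and the assertions check the biplane axioms:

```python
from itertools import combinations

p = 11
residues = {1, 3, 4, 5, 9}  # the five nonzero quadratic residues mod 11
lines = [frozenset((r + c) % p for r in residues) for c in range(p)]

# Biplane axioms: any two points lie on exactly two lines,
# and any two lines meet in exactly two points.
for x, y in combinations(range(p), 2):
    assert sum(1 for L in lines if x in L and y in L) == 2
for L, M in combinations(lines, 2):
    assert len(L & M) == 2
```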
More surprisingly, the coset space L2(11)/(Z/11Z), which has order 660/11 = 60 (and on which the icosahedral group acts), naturally has the structure of a buckyball, which is used in the construction of the buckyball surface.
Mathieu groups
Further information: Mathieu group
The group PSL(3, 4) can be used to construct the Mathieu group M24, one of the sporadic simple groups; in this context, one refers to PSL(3, 4) as M21, though it is not properly a Mathieu group itself. One begins with the projective plane over the field with four elements, which is a Steiner system of type S(2, 5, 21) – meaning that it has 21 points, each line ("block", in Steiner terminology) has 5 points, and any 2 points determine a line – and on which PSL(3, 4) acts. One calls this Steiner system W21 ("W" for Witt), and then expands it to a larger Steiner system W24, expanding the symmetry group along the way: to the projective general linear group PGL(3, 4), then to the projective semilinear group PΓL(3, 4), and finally to the Mathieu group M24.
M24 also contains copies of PSL(2, 11), which is maximal in M22, and PSL(2, 23), which is maximal in M24, and can be used to construct M24.[13]
Hurwitz surfaces
Further information: Hurwitz surface
PSL groups arise as Hurwitz groups (automorphism groups of Hurwitz surfaces – algebraic curves with maximal possible symmetry group). The Hurwitz surface of lowest genus, the Klein quartic (genus 3), has automorphism group isomorphic to PSL(2, 7) (equivalently GL(3, 2)), while the Hurwitz surface of second-lowest genus, the Macbeath surface (genus 7), has automorphism group isomorphic to PSL(2, 8).
In fact, many but not all simple groups arise as Hurwitz groups (including the monster group, though not all alternating groups or sporadic groups), though PSL is notable for including the smallest such groups.
Modular group
Main article: Modular group
The groups PSL(2, Z/nZ) arise in studying the modular group, PSL(2, Z), as quotients by reducing all elements mod n; the kernels are called the principal congruence subgroups.
A noteworthy subgroup of the projective general linear group PGL(2, Z) (and of the projective special linear group PSL(2, Z[i])) is the symmetries of the set {0, 1, ∞} ⊂ P1(C)[note 6] which is known as the anharmonic group, and arises as the symmetries of the six cross-ratios. The subgroup can be expressed as fractional linear transformations, or represented (non-uniquely) by matrices, as:
$x$ $1/(1-x)$ $(x-1)/x$
${\begin{pmatrix}1&0\\0&1\end{pmatrix}}$ ${\begin{pmatrix}0&1\\-1&1\end{pmatrix}}$ ${\begin{pmatrix}1&-1\\1&0\end{pmatrix}}$
$1/x$ $1-x$ $x/(x-1)$
${\begin{pmatrix}0&1\\1&0\end{pmatrix}}$ ${\begin{pmatrix}-1&1\\0&1\end{pmatrix}}$ ${\begin{pmatrix}1&0\\1&-1\end{pmatrix}}$
${\begin{pmatrix}0&i\\i&0\end{pmatrix}}$ ${\begin{pmatrix}-i&i\\0&i\end{pmatrix}}$ ${\begin{pmatrix}i&0\\i&-i\end{pmatrix}}$
Note that the top row is the identity and the two 3-cycles, and are orientation-preserving, forming a subgroup in PSL(2, Z), while the bottom row is the three 2-cycles, and are in PGL(2, Z) and PSL(2, Z[i]), but not in PSL(2, Z), hence realized either as matrices with determinant −1 and integer coefficients, or as matrices with determinant 1 and Gaussian integer coefficients.
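One can verify by machine that these six maps close up into a copy of S3. The Python sketch below represents each map as a matrix acting in the usual column convention x ↦ (ax + b)/(cx + d) (note this differs from the row-vector convention used earlier in the article), normalizes projectively, and checks closure and non-commutativity:

```python
from fractions import Fraction
from itertools import product

# The six anharmonic maps as matrices ((a, b), (c, d)) acting by
# x |-> (a x + b)/(c x + d).
maps = {
    'x':       ((1, 0), (0, 1)),
    '1/(1-x)': ((0, 1), (-1, 1)),
    '(x-1)/x': ((1, -1), (1, 0)),
    '1/x':     ((0, 1), (1, 0)),
    '1-x':     ((-1, 1), (0, 1)),
    'x/(x-1)': ((1, 0), (1, -1)),
}

def canon(M):
    """Normalize a 2x2 matrix projectively: scale so that the first
    nonzero entry (in reading order) equals 1."""
    flat = [M[0][0], M[0][1], M[1][0], M[1][1]]
    s = next(v for v in flat if v != 0)
    return tuple(tuple(Fraction(v) / s for v in row) for row in M)

def mul(M, N):
    # Matrix product, which corresponds to composing the maps: f_M o f_N.
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

elems = {canon(M) for M in maps.values()}
assert len(elems) == 6
# Closed under composition, hence a group of order 6; two of the maps
# fail to commute, so the group is non-abelian, i.e. isomorphic to S3.
assert all(canon(mul(M, N)) in elems for M, N in product(elems, repeat=2))
assert any(canon(mul(M, N)) != canon(mul(N, M))
           for M, N in product(elems, repeat=2))
```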
This maps to the symmetries of {0, 1, ∞} ⊂ P1(n) under reduction mod n. Notably, for n = 2, this subgroup maps isomorphically to PGL(2, Z/2Z) = PSL(2, Z/2Z) ≅ S3,[note 7] and thus provides a splitting $\operatorname {PGL} (2,\mathbf {Z} /2)\hookrightarrow \operatorname {PGL} (2,\mathbf {Z} )$ for the quotient map $\operatorname {PGL} (2,\mathbf {Z} )\twoheadrightarrow \operatorname {PGL} (2,\mathbf {Z} /2).$
The fixed points of both 3-cycles are the "most symmetric" cross-ratios, $e^{\pm i\pi /3}={\tfrac {1}{2}}\pm {\tfrac {\sqrt {3}}{2}}i$, the solutions of $x^{2}-x+1=0$ (the primitive sixth roots of unity). The 2-cycles interchange these, as they do any points other than their fixed points, which realizes the quotient map S3 → S2 by the group action on these two points. That is, the subgroup C3 < S3 consisting of the identity and the 3-cycles, {(), (0 1 ∞), (0 ∞ 1)}, fixes these two points, while the other elements interchange them.
The fixed points of the individual 2-cycles are, respectively, −1, 1/2, 2, and this set is also preserved and permuted by the 3-cycles. This corresponds to the action of S3 on the 2-cycles (its Sylow 2-subgroups) by conjugation and realizes the isomorphism with the group of inner automorphisms, $S_{3}{\overset {\sim }{\to }}\operatorname {Inn} (S_{3})\cong S_{3}.$
Geometrically, this can be visualized as the rotation group of the triangular bipyramid, which is isomorphic to the dihedral group of the triangle $D_{3}\cong S_{3}$; see anharmonic group.
Topology
Over the real and complex numbers, the topology of PGL and PSL can be determined from the fiber bundles that define them:
${\begin{matrix}\mathrm {Z} &\cong &K^{*}&\to &\mathrm {GL} &\to &\mathrm {PGL} \\\mathrm {SZ} &\cong &\mu _{n}&\to &\mathrm {SL} &\to &\mathrm {PSL} \end{matrix}}$
via the long exact sequence of a fibration.
For both the reals and complexes, SL is a covering space of PSL, with number of sheets equal to the number of nth roots in K; thus in particular all their higher homotopy groups agree. For the reals, SL is a 2-fold cover of PSL for n even, and is a 1-fold cover for n odd, i.e., an isomorphism:
{±1} → SL(2n, R) → PSL(2n, R)
$\operatorname {SL} (2n+1,\mathbf {R} ){\overset {\sim }{\to }}\operatorname {PSL} (2n+1,\mathbf {R} )$
For the complexes, SL is an n-fold cover of PSL.
For PGL, for the reals, the fiber is R* ≅ {±1}, so up to homotopy, GL → PGL is a 2-fold covering space, and all higher homotopy groups agree.
For PGL over the complexes, the fiber is C* ≅ S1, so up to homotopy, GL → PGL is a circle bundle. The higher homotopy groups of the circle vanish, so the homotopy groups of GL(n, C) and PGL(n, C) agree for n ≥ 3. In fact, π2 always vanishes for Lie groups, so the homotopy groups agree for n ≥ 2. For n = 1, we have that π1(GL(n, C)) = π1(S1) = Z. The fundamental group of PGL(2, C) is a finite cyclic group of order 2.
Covering groups
Over the real and complex numbers, the projective special linear groups are the minimal (centerless) Lie group realizations for the special linear Lie algebra ${\mathfrak {sl}}(n)\colon $ every connected Lie group whose Lie algebra is ${\mathfrak {sl}}(n)$ is a cover of PSL(n, F). Conversely, its universal covering group is the maximal (simply connected) element, and the intermediary realizations form a lattice of covering groups.
For example, SL(2, R) has center {±1} and fundamental group Z, and thus has universal cover $\widetilde {\operatorname {SL} }(2,\mathbf {R} )$ (the universal covering group of SL(2, R)) and covers the centerless PSL(2, R).
Representation theory
Main article: Projective representation
A group homomorphism G → PGL(V) from a group G to a projective linear group is called a projective representation of the group G, by analogy with a linear representation (a homomorphism G → GL(V)). These were studied by Issai Schur, who showed that projective representations of G can be classified in terms of linear representations of central extensions of G. This led to the Schur multiplier, which is used to address this question.
Low dimensions
The projective linear group is mostly studied for n ≥ 2, though it can be defined for low dimensions.
For n = 0 (or in fact n < 0) the projective space of K0 is empty, as there are no 1-dimensional subspaces of a 0-dimensional space. Thus, PGL(0, K) is the trivial group, consisting of the unique empty map from the empty set to itself. Further, the action of scalars on a 0-dimensional space is trivial, so the map K* → GL(0, K) is trivial, rather than an inclusion as it is in higher dimensions.
For n = 1, the projective space of K1 is a single point, as there is a single 1-dimensional subspace. Thus, PGL(1, K) is the trivial group, consisting of the unique map from a singleton set to itself. Further, the general linear group of a 1-dimensional space is exactly the scalars, so the map $K^{*}{\overset {\sim }{\to }}\operatorname {GL} (1,K)$ is an isomorphism, corresponding to PGL(1, K) := GL(1, K)/K* ≅ {1} being trivial.
For n = 2, PGL(2, K) is non-trivial, but is unusual in that it is 3-transitive, unlike higher dimensions when it is only 2-transitive.
Examples
• PSL(2,7)
• Modular group, PSL(2, Z)
• PSL(2,R)
• Möbius group, PGL(2, C) = PSL(2, C)
Subgroups
• Projective orthogonal group, PO – maximal compact subgroup of PGL
• Projective unitary group, PU
• Projective special orthogonal group, PSO – maximal compact subgroup of PSL
• Projective special unitary group, PSU
Larger groups
The projective linear group is contained within larger groups, notably:
• Projective semilinear group, PΓL, which allows field automorphisms.
• Cremona group, Cr(Pn(k)) of birational automorphisms; any biregular automorphism is linear, so PGL coincides with the group of biregular automorphisms.
See also
• Projective transformation
• Unit
Notes
1. This is therefore PGL(n + 1, F) for projective space of dimension n
2. "Preserving the incidence relation" means that if point p is on line l then f(p) is in g(l); formally, if (p, l) ∈ I then (f(p), g(l)) ∈ I.
3. For PSL (except PSL(2, 2) and PSL(2, 3)) this follows by Grün's lemma because SL is a perfect group (hence center equals hypercenter), but for PGL and the two exceptional PSLs this requires additional checking.
4. These are equal because they are the kernel and cokernel of the endomorphism $F^{\times }{\overset {x^{n}}{\to }}F^{\times };$ formally, |μn| ⋅ |(F×)n| = |F×|. More abstractly, the first realizes PSL as SL/SZ, while the second realizes PSL as the kernel of PGL → F×/(F×)n.
5. Since p divides the order of the group, the group does not embed in (or, since simple, map non-trivially to) Sk for k < p, as p does not divide the order of this latter group.
6. In projective coordinates, the points {0, 1, ∞} are given by [0:1], [1:1], and [1:0], which explains why their stabilizer is represented by integral matrices.
7. This isomorphism can be seen by removing the minus signs in matrices, which yields the matrices for PGL(2, 2)
References
1. Gareth A. Jones and David Singerman. (1987) Complex functions: an algebraic and geometric viewpoint. Cambridge UP. Discussion of PSL and PGL on page 20 in google books
2. Proof: Math 155r 2010, Handout #4, Noam Elkies
3. Wilson, Robert A. (2009), "Chapter 1: Introduction", The finite simple groups, Graduate Texts in Mathematics, vol. 251, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-84800-988-2, ISBN 978-1-84800-987-5, Zbl 1203.20012; 2007 preprint at www.maths.qmul.ac.uk/~raw/fsgs.html
4. Galois, Évariste (1846), "Lettre de Galois à M. Auguste Chevalier", Journal de Mathématiques Pures et Appliquées, XI: 408–415, retrieved 2009-02-04, PSL(2, p) and simplicity discussed on p. 411; exceptional action on 5, 7, or 11 points discussed on pp. 411–412; GL(ν, p) discussed on p. 410{{citation}}: CS1 maint: postscript (link)
5. Murray, John (December 1999), "The Alternating Group A8 and the General linear Group GL(4, 2)", Mathematical Proceedings of the Royal Irish Academy, 99A (2): 123–132, JSTOR 20459753
6. Carnahan, Scott (2007-10-27), "Small finite sets", Secret Blogging Seminar], notes on a talk by Jean-Pierre Serre.{{citation}}: CS1 maint: postscript (link)
7. Letter, pp. 411–412
8. Kostant, Bertram (1995), "The Graph of the Truncated Icosahedron and the Last Letter of Galois" (PDF), Notices Amer. Math. Soc., 42 (4): 959–968, see: The Embedding of PSl(2, 5) into PSl(2, 11) and Galois’ Letter to Chevalier.
9. Noam Elkies, Math 155r, Lecture notes for April 14, 2010
10. (Kostant 1995, p. 964)
11. Galois’ last letter Archived 2010-08-15 at the Wayback Machine, Never Ending Books
12. Martin, Pablo; Singerman, David (April 17, 2008), From Biplanes to the Klein quartic and the Buckyball (PDF)
13. Conway, Sloane, SPLAG
• Grove, Larry C. (2002), Classical groups and geometric algebra, Graduate Studies in Mathematics, vol. 39, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2019-3, MR 1859189
| Wikipedia |
September 2017, 22(7): 2669-2685. doi: 10.3934/dcdsb.2017130
Existence and stability of periodic oscillations of a rigid dumbbell satellite around its center of mass
Jifeng Chu 1,, , Zaitao Liang 2, , Pedro J. Torres 3, and Zhe Zhou 4,
Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
Department of Mathematics, College of Science, Hohai University, Nanjing 210098, China
Departamento de Matemática Aplicada, Universidad de Granada, 18071 Granada, Spain
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
* Corresponding author: Jifeng Chu
Received July 2016 Revised November 2016 Published April 2017
Fund Project: Jifeng Chu was supported by the National Natural Science Foundation of China (Grant No. 11171090 and No. 11671118). Zaitao Liang was supported by the Fundamental Research Funds for the Central Universities (Grant No. KYZZ15−0155). Pedro Torres was partially supported by Spanish MICINN Grant with FEDER funds MTM2014-52232-P. Zhe Zhou was supported by the National Natural Science Foundation of China (Grant No. 11301512) and the Key Lab of Random Complex Structures and Data Science, Chinese Academy of Sciences (Grant No. 2008DP173182).
We study the existence and stability of periodic solutions of a differential equation that models the planar oscillations of a satellite in an elliptic orbit around its center of mass. The proof is based on a suitable version of the Poincaré-Birkhoff theorem and the third order approximation method.
Keywords: Satellite equation, twist periodic solutions, unstable periodic solutions, Poincaré-Birkhoff theorem, third order approximation.
Mathematics Subject Classification: Primary:34C25.
Citation: Jifeng Chu, Zaitao Liang, Pedro J. Torres, Zhe Zhou. Existence and stability of periodic oscillations of a rigid dumbbell satellite around its center of mass. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2669-2685. doi: 10.3934/dcdsb.2017130
Figure 1. The region of stability $\Delta$
\begin{document}
\title{\textbf{$g$-noncommuting graph of a finite group relative to its subgroups}} \author{Monalisha Sharma and Rajat Kanti Nath\footnote{Corresponding author} } \date{} \maketitle \begin{center}\small{\it Department of Mathematical Sciences, Tezpur University,\\ Napaam-784028, Sonitpur, Assam, India.\\
Emails:\, [email protected] and [email protected]} \end{center} \begin{abstract} Let $H$ be a subgroup of a finite non-abelian group $G$ and $g \in G$. Let $Z(H, G) = \{x \in H : xy = yx, \forall y \in G\}$. We introduce the graph $\Delta_{H, G}^g$ whose vertex set is $G \setminus Z(H, G)$, in which two distinct vertices $x$ and $y$ are adjacent if $x \in H$ or $y \in H$ and $[x,y] \neq g, g^{-1}$, where $[x,y] = x^{-1}y^{-1}xy$. In this paper, we determine whether $\Delta_{H, G}^g$ is a tree, among other results. We also discuss its diameter and connectivity, with special attention to the dihedral groups. \end{abstract}
\noindent {\small{\textit{Key words:} finite group, $g$-noncommuting graph, connected graph.}}
\noindent \small{\textbf{\textit{2010 Mathematics Subject Classification:}} 05C25, 20P05}
\section{Introduction} Group theory and graph theory are closely related: several properties of groups can be described through properties of graphs, and vice versa. Characterizations of finite groups through various graphs defined on them have been an interesting topic of research over the last five decades. The non-commuting graph is one such graph, widely studied in the literature \cite{AAM06,AF15,AI12, Daraf09, DBBM10,dn-JLTA2018,ddn2018,JDSO2015, JDS2015,JMS2019,JSO2015,Mogh05,MSZZ05, T08,VK18} since its inception \cite{EN76}. In this paper we introduce a generalization of the non-commuting graph of a finite group. Let $H$ be a subgroup of a finite non-abelian group $G$ and $g \in G$. Let $Z(H, G) = \{x \in H : xy = yx, \forall y \in G\}$. We introduce the graph $\Delta_{H, G}^g$ whose vertex set is $G \setminus Z(H, G)$, in which two distinct vertices $x$ and $y$ are adjacent if $x \in H$ or $y \in H$ and $[x,y] \neq g, g^{-1}$, where $[x,y] = x^{-1}y^{-1}xy$. Clearly, $\Delta_{H, G}^g = \Delta_{H, G}^{g^{-1}}$. Also, $\Delta_{H, G}^g$ is the subgraph of $\Gamma_{H, G}^g$, studied by the authors in \cite{SN2020}, induced by $G \setminus Z(H,G)$.
If $H = G$ and $g = 1$ then $\Delta_{H, G}^g := \Gamma_G$, the non-commuting graph of $G$.
If $H = G$ then $\Delta_{H, G}^g := \Delta_{G}^g$, a generalization of $\Gamma_G$ called the induced $g$-noncommuting graph of $G$ on $G \setminus Z(G)$, studied extensively in \cite{NEGJ16,NEGJ17,NEM18} by Erfanian and his collaborators.
Here $K(H,G) := \{[x, y] : x \in H, y \in G\}$. If $g \notin K(H,G)$ then two distinct vertices $x$ and $y$ are trivially adjacent in $\Delta_{H, G}^g$ whenever $x \in H$ or $y \in H$. Therefore, we consider $g \in K(H,G)$. Also, if $H = Z(H,G)$ then $K(H,G) = \{1\}$ and so $g = 1$. Thus, throughout this paper, we shall assume $H \ne Z(H,G)$ and $g \in K(H,G)$. We determine whether $\Delta_{H, G}^g$ is a tree, among other results, and discuss its diameter and connectivity, with special attention to the dihedral groups. We conclude this section with the following examples of $\Delta_{H, G}^g$, where $G = A_4 = \langle a, b : a^2 = b^3 = (ab)^3 =1 \rangle$ and the subgroup $H$ is given by $H_1 = \{1, a\}$, $H_2=\{1, bab^2\}$ or $H_3 = \{1, b^2ab\}$.
\begin{center}
{\includegraphics[width=12cm]{graphs.jpg}} \end{center}
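To complement the pictures above, the following self-contained sketch (ours, not part of the paper) builds $\Delta_{H, G}^g$ for $G = A_4$ and $H = H_1 = \{1, a\}$ in the illustrative case $g = 1$, realizing $A_4$ as the even permutations of $\{0, 1, 2, 3\}$ with $a = (0\,1)(2\,3)$; it confirms that the two involutions outside $H$ are isolated vertices, so the graph is disconnected.

```python
# Sketch: Delta_{H,G}^g for G = A4, H = {1, a} with a = (0 1)(2 3), g = 1.
from itertools import permutations

def compose(p, q):                       # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def even(p):                             # even permutation test via inversions
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

G = [p for p in permutations(range(4)) if even(p)]       # A4, order 12
e, a = (0, 1, 2, 3), (1, 0, 3, 2)
H, g = [e, a], (0, 1, 2, 3)                               # the case g = 1

def comm(x, y):                          # [x, y] = x^{-1} y^{-1} x y
    return compose(compose(inverse(x), inverse(y)), compose(x, y))

Z_HG = [x for x in H if all(compose(x, y) == compose(y, x) for y in G)]
V = [x for x in G if x not in Z_HG]      # vertex set G \ Z(H, G)

def adjacent(x, y):
    return x != y and (x in H or y in H) and comm(x, y) not in (g, inverse(g))

deg = {x: sum(adjacent(x, y) for y in V) for x in V}
C_a = [y for y in G if compose(a, y) == compose(y, a)]   # the Klein four-group
assert deg[a] == len(G) - len(C_a) == 8                  # deg(a) = |G| - |C_G(a)|
assert sum(1 for x in V if deg[x] == 0) == 2             # two isolated vertices
```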
\section{Vertex degree and a consequence}
In this section we first determine $\deg(x)$, the degree of a vertex $x$ of the graph $\Delta_{H, G}^g$. After that we determine whether $\Delta_{H, G}^g$ is a tree. Corresponding to Theorem 2.1 and Theorem 2.2 of \cite{SN2020} we have the following two results for $\Delta_{H, G}^g$.
\begin{theorem}\label{deg_prop_1.1} Let $x \in H \setminus Z(H,G)$ be any vertex in $\Delta_{H, G}^g$. \begin{enumerate}
\item If $g=1$ then $\deg(x)= |G| - |C_G(x)|.$
\item If $g \neq 1$ and $g^2 \neq 1$ then
$\deg(x) = \begin{cases}
|G| - |Z(H,G)| - |C_G(x)| - 1, & \mbox{if $x$ is conjugate to}\\ & \mbox{$xg$ or $xg^{-1}$} \\
|G| - |Z(H,G)| - 2|C_G(x)| - 1,& \mbox{if $x$ is conjugate to}\\ & \mbox{$xg$ and $xg^{-1}$.} \end{cases}$
\item If $g \neq 1$ and $g^2 = 1$ then $\deg(x) = |G| - |Z(H,G)| - |C_G(x)| - 1$, whenever $x$ is conjugate to $xg$. \end{enumerate} \end{theorem}
\begin{proof} (a) Let $g = 1$. Then $\deg(x)$ is the number of $y \in G \setminus Z(H,G)$ such that $xy \ne yx$. Hence, \[
\deg(x) = |G|-|Z(H,G)|-(|C_G(x)|-|Z(H,G)|) = |G| - |C_G(x)|. \] \noindent Proceeding as in the proof of \cite[Theorem 2.1 (b), (c)]{SN2020}, parts (b) and (c) follow, noting that the vertex set of $\Delta_{H,G}^g$ is $G \setminus Z(H,G)$. \end{proof}
\begin{theorem}\label{deg_prop_2.1} Let $x \in G \setminus H$ be any vertex in $\Delta_{H, G}^g$. \begin{enumerate}
\item If $g=1$ then $\deg(x)= |H| - |C_H(x)|.$
\item If $g \neq 1$ and $g^2 \neq 1$ then
$\deg(x) = \begin{cases}
|H| - |Z(H,G)| - |C_H(x)|, &\!\!\!\!\mbox{if $x$ is conjugate to $xg$ or}\\ &\!\!\!\mbox{$xg^{-1}$ for some element in $H$}\\
|H| - |Z(H,G)| - 2|C_H(x)|, &\!\!\!\!\mbox{if $x$ is conjugate to $xg$ and}\\ &\!\!\!\mbox{$xg^{-1}$ for some element in $H$}. \end{cases}$
\item If $g \neq 1$ and $g^2 = 1$ then $\deg(x) = |H| - |Z(H,G)|- |C_H(x)|$, whenever $x$ is conjugate to $xg$, for some element in H. \end{enumerate} \end{theorem} \begin{proof} (a) Let $g = 1$. Then $\deg(x)$ is the number of $y \in H \setminus Z(H,G)$ such that $xy \ne yx$. Hence, \[
\deg(x) = |H|-|Z(H,G)|-(|C_H(x)|-|Z(H,G)|) = |H| - |C_H(x)|. \] \noindent Proceeding as in the proof of \cite[Theorem 2.2 (b), (c)]{SN2020}, parts (b) and (c) follow, noting that the vertex set of $\Delta_{H,G}^g$ is $G \setminus Z(H,G)$. \end{proof}
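The $g = 1$ degree formulas just proved can be verified directly on a small example. The sketch below (ours, not part of the paper; $G = D_6$ with $H$ the rotation subgroup is just an illustrative choice) checks $\deg(x) = |G| - |C_G(x)|$ for $x \in H \setminus Z(H,G)$ and $\deg(x) = |H| - |C_H(x)|$ for $x \in G \setminus H$.

```python
# Sketch: verify the g = 1 degree formulas in G = D6, H = {1, a, a^2}.
# Elements (k, f) stand for a^k b^f with a^3 = b^2 = 1, b a b^{-1} = a^{-1}.
N = 3

def mul(x, y):
    (k1, f1), (k2, f2) = x, y
    return ((k1 + (-1) ** f1 * k2) % N, (f1 + f2) % 2)

def inv(x):
    k, f = x
    return ((-k) % N, 0) if f == 0 else x    # reflections are involutions

def comm(x, y):                              # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

G = [(k, f) for k in range(N) for f in range(2)]
H = [(k, 0) for k in range(N)]               # rotation subgroup of order 3
e = (0, 0)
g = e                                        # the case g = 1

Z_HG = [x for x in H if all(mul(x, y) == mul(y, x) for y in G)]
V = [x for x in G if x not in Z_HG]          # vertex set G \ Z(H, G)

def deg(x):
    return sum(1 for y in V if y != x and (x in H or y in H)
               and comm(x, y) not in (g, inv(g)))

for x in V:
    C_G = [y for y in G if mul(x, y) == mul(y, x)]
    if x in H:                               # deg(x) = |G| - |C_G(x)|
        assert deg(x) == len(G) - len(C_G)
    else:                                    # deg(x) = |H| - |C_H(x)|
        assert deg(x) == len(H) - len([y for y in C_G if y in H])
```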
As a consequence of above results we have the following. \begin{theorem}\label{delta-not-tree}
If $|H| \ne 2, 3, 4, 6$ then $\Delta_{H, G}^g$ is not a tree. \end{theorem} \begin{proof} Suppose that $\Delta_{H, G}^g$ is a tree. Then there exists a vertex $x \in G \setminus Z(H,G)$ such that $\deg(x) = 1$. If $x \in H \setminus Z(H,G)$ then we have the following cases.
\noindent \textbf{Case 1:} If $g = 1$, then by Theorem \ref{deg_prop_1.1}(a), we have $\deg(x) = |G|-|C_G(x)| = 1$. Therefore, $|C_G(x)| = |G| - 1$, which is a contradiction since $|C_G(x)|$ divides $|G|$ and $|C_G(x)| \geq 2$.
\noindent \textbf{Case 2:} If $g \neq 1$ and $g^2 = 1$, then by Theorem \ref{deg_prop_1.1}(c), we have $\deg(x) = |G|-|Z(H,G)|-|C_G(x)|-1 = 1$. That is, \begin{equation}\label{induced-eq-1}
|G|-|Z(H,G)|-|C_G(x)| = 2. \end{equation}
Therefore, $|Z(H,G)| = 1$ or $2$. Thus \eqref{induced-eq-1} gives $|G| - |C_G(x)| = 3$ or $4$. Therefore, $|G| = 6$ or $8$. Since $|H| \ne 2, 3, 4, 6$ we must have $G \cong D_8$ or $Q_8$ and $H = G$ and hence, by {\rm\cite[Theorem 2.5]{TEJ14}}, we get a contradiction.
\noindent \textbf{Case 3:} If $g \neq 1$ and $g^2 \neq 1$, then by Theorem \ref{deg_prop_1.1}(b), we have $\deg(x) = |G|-|Z(H,G)|-|C_G(x)|-1 = 1$, which will lead to \eqref{induced-eq-1} (and eventually to a contradiction) or $\deg(x) = |G|-|Z(H,G)|-2|C_G(x)|-1 = 1$. That is, \begin{equation}\label{induced-eq-3}
|G|-|Z(H,G)|-2|C_G(x)| = 2. \end{equation}
Therefore, $|Z(H,G)| = 1$ or $2$. Thus if $|Z(H,G)| = 1$ then \eqref{induced-eq-3} gives $|G| = 9$, which is a contradiction since $G$ is non-abelian. Again if $|Z(H,G)| = 2$ then \eqref{induced-eq-3} gives $|C_G(x)| = 2$ or $4$. Therefore, $|G| = 8$ or $|G| = 12$. If $|G| = 8$ then we get a contradiction as shown in Case 2 above. If $|G| = 12$ then $G \cong D_{12}$ or $Q_{12}$, since $|Z(H,G)| = 2$. In both the cases we must have $H = G$ and hence, by {\rm\cite[Theorem 2.5]{TEJ14}}, we get a contradiction.
Now we assume that $x \in G \setminus H$ and consider the following cases.
\noindent \textbf{Case 1:} If $g = 1$, then by Theorem \ref{deg_prop_2.1}(a), we have $\deg(x) = |H|-|C_H(x)| = 1$. Therefore, $|H| = 2$, a contradiction.
\noindent \textbf{Case 2:} If $g \neq 1$ and $g^2 = 1$, then by Theorem \ref{deg_prop_2.1}(c), we have $\deg(x) = |H|-|Z(H,G)|-|C_H(x)| = 1$. That is, \begin{equation}\label{induced-eq-4}
|H|-|C_H(x)| = 2. \end{equation} Therefore,
$|H| = 3$ or $4$, a contradiction.
\noindent \textbf{Case 3:} If $g \neq 1$ and $g^2 \neq 1$, then by Theorem \ref{deg_prop_2.1}(b), we have $\deg(x) = |H|-|Z(H,G)|-|C_H(x)| = 1$, which leads to \eqref{induced-eq-4} or $\deg(x) = |H|-|Z(H,G)|-2|C_H(x)| = 1$. That is, \begin{equation}\label{induced-eq-5}
|H| -2|C_H(x)| = 2. \end{equation}
Therefore, $|C_H(x)| = 1$ or $2$. Thus if $|C_H(x)| = 1$ then \eqref{induced-eq-5} gives $|H| = 4$, a contradiction. If $|C_H(x)| = 2$ then \eqref{induced-eq-5} gives $|H| = 6$, a contradiction. \end{proof}
The following theorems also show that the condition on $|H|$ in Theorem \ref{delta-not-tree} cannot be removed completely. \begin{theorem}\label{not_tree}
If $G$ is a non-abelian group of order $\leq 12$ and $g = 1$ then $\Delta_{H, G}^g$ is a tree if and only if $G \cong D_6$ or $D_{10}$ and $|H| = 2$. \end{theorem}
\begin{proof} If $H$ is the trivial subgroup of $G$ then $\Delta_{H, G}^g$ is an empty graph. If $H=G$ then, by {\rm\cite[Theorem 2.5]{TEJ14}}, we have $\Delta_{H, G}^g$ is not a tree. So we examine only the proper subgroups of $G$, where $G \cong D_6, D_8, Q_8, D_{10}, D_{12}, Q_{12}$ or $A_4$. We consider the following cases.
\noindent \textbf{Case 1:} $G \cong D_6 = \langle a, b : a^3 = b^2 = 1 \text{ and } bab^{-1}=a^{-1}\rangle$. If $|H| = 2$ then $H = \langle x\rangle$, where $x = b, ab$ and $a^2b$.
We have $[x, y] \ne 1$ for all $y \in G \setminus Z(H, G)$. Therefore, $\Delta_{H, D_6}^g$ is a star graph and hence, a tree. If $|H| = 3$ then $H = \{1, a, a^2\}$. In this case, the vertices $a$, $ab$, $a^2$ and $b$ make a cycle since $[ab, a] = a^2 = [a^2, ab]$ and $[a, b] = a = [b, a^2]$.
\noindent \textbf{Case 2:} $G \cong D_8 = \langle a, b : a^4 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$. If $|H| = 2$ then $H = Z(D_8)$ or $\langle a^rb \rangle$, where $r = 1, 2, 3, 4$. Clearly $\Delta_{H,D_8}^g$ is an empty graph if $H = Z(D_8)$. If $H = \langle a^rb\rangle$ then, in each case, $a^2$ is an isolated vertex in $G \setminus H$ (since $[a^2, a^rb]=1$). Hence, $\Delta_{H,D_{8}}^g$ is disconnected. If $|H|=4$ then $H = \{1, a, a^2, a^3\}$, $\{1, a^2, b, a^2b\}$ or $\{1, a^2, ab, a^3b\}$. If $H = \{1, a, a^2, a^3\}$ then the vertices $ab$, $a$, $b$ and $a^3$ make a cycle; if $H=\{1, a^2, b, a^2b\}$ then the vertices $ab$, $b$, $a^3$ and $a^2b$ make a cycle; and if $H=\{1, a^2, ab, a^3b\}$ then the vertices $ab$, $a$, $a^3b$ and $b$ make a cycle (since $[a,b]=[a^3,b]=[a^3,ab]=[a^3,a^2b]=[ab,a]=[a^2b,ab]=[a^3b,a]=[b,ab]=[b,a^3b]=a^2 \ne 1$).
\noindent \textbf{Case 3:} $G \cong Q_8 = \langle a, b : a^4 = 1, b^2=a^2 \text{ and } bab^{-1} = a^{-1}\rangle$. If $|H|=2$ then $H = Z(Q_8)$ and so $\Delta_{H,Q_8}^g$ is an empty graph. If $|H| = 4$ then $H = \{1, a, a^2, a^3\}$, $\{1, a^2, b, a^2b\}$ and $\{1, a^2, ab, a^3b\}$. Again, if $H = \{1, a, a^2, a^3\}$ then the vertices $a$, $b$, $a^3$ and $ab$ make a cycle; if
$H = \{1, a^2, b, a^2b\}$ then the vertices $b$, $a^3b$, $a^2b$ and $a^3$ make a cycle; and if
$H = \{1, a^2, ab, a^3b\}$ then the vertices $ab$, $a$, $a^3b$ and $a^2b$ make a cycle (since $[a, b] = [b, a^3] = [a^3, ab] = [ab, a] = [b, a^3b] = [a^3b, a^2b] = [a^2b, a^3] = [a, a^3b] = [a^2b, ab] = a^2 \ne 1$).
\noindent \textbf{Case 4:} $G \cong D_{10} = \langle a, b : a^5 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$. If $|H| = 2$ then $H = \langle a^rb\rangle$, for every integer $r$ such that $1 \leq r \leq 5$. For each case of $H$, $\Delta_{H, D_{10}}^g$ is a star graph since $[a^rb, x] \ne g$ for all $x \in G \setminus H$. If $|H| = 5$ then $H = \{1, a, a^2, a^3, a^4\}$. In this case, the vertices $a$, $ab$, $a^3$ and $a^3b$ make a cycle in $\Delta_{H, D_{10}}^g$ since $[a, ab] = a^3 \ne 1$, $[ab, a^3] = a \ne 1$, $[a^3, a^3b] = a^4 \ne 1$ and $[a^3b, a] = a^2 \ne 1$.
\noindent \textbf{Case 5:} $G \cong D_{12} = \langle a, b : a^6 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$. If $|H| = 2$ then $H = Z(D_{12})$ or $\langle a^rb\rangle$, for every integer $r$ such that $1 \leq r \leq 6$.
If $|H| = 3$ then $H = \{1, a^2, a^4\}$. If $|H| = 4$ then $H = \{1, a^3, b, a^3b\}$, $\{1, a^3, ab, a^4b\}$ or $\{1,a^3,a^2b,a^5b\}$. If $|H|=6$ then $H = \{1, a, a^2, a^3, a^4, a^5\}$, $\{1, a^2, a^4, b, a^2b, a^4b\}$ or $\{1,a^2,a^4,ab,a^3b,a^5b\}$. Note that $\Delta_{H,D_{12}}^g$ is an empty graph if $H = Z(D_{12})$. If $H = \langle a^rb \rangle$ (for $1 \leq r \leq 6$), $\{1, a^2, a^4\}$, $\{1, a^2, a^4, b, a^2b, a^4b\}$ or $\{1, a^2, a^4, ab, a^3b, a^5b\}$ then in each case the vertex $a^3$ is an isolated vertex in $G \setminus H$ (since $a^3 \in Z(D_{12})$) and hence $\Delta_{H,D_{12}}^g$ is disconnected. We have $[a, b] = [b, a^5] = [a, ab] = [a^4, a^4b] = [a^5b, a^2] = [b,a^2] =[a^2b,a^5] = a^4 \ne 1$ and $[a^5, a^5b] = [a^5b, a] = [ab, a^4] = [a^4b, a] = [a^2, a^2b] = [a^2b, a] = a^2 \ne 1$. Therefore, if $H = \{1, a^3, b, a^3b\}$ then the vertices $a$, $b$, $a^5$ and $a^5b$ make a cycle; if $H=\{1,a^3,ab,a^4b\}$ then the vertices $a$, $ab$, $a^4$ and $a^4b$ make a cycle; if $H=\{1, a^3, a^2b, a^5b\}$ then the vertices $a^2$, $a^2b$, $a^5$ and $a^5b$ make a cycle; and if $H = \{1, a, a^2, a^3, a^4, a^5\}$ then the vertices $a$, $b$, $a^2$ and $a^2b$ make a cycle.
\noindent \textbf{Case 6:} $G \cong A_4 = \langle a, b : a^2 = b^3 = (ab)^3 = 1 \rangle$. If $|H| = 2$ then $H = \langle a\rangle$, $\langle bab^2 \rangle$ or $\langle b^2ab\rangle$. Since the elements $a, bab^2$ and $b^2ab$ commute among themselves, in each case the remaining two elements in $G \setminus H$ remain isolated and hence $\Delta_{H,A_{4}}^g$ is disconnected. If $|H| = 3$ then $H = \langle x \rangle$, where $x = b$, $ab$, $ba$, $aba$. In each case, the vertices $x$, $a$, $x^{-1}$ and $bab^2$ make a cycle. If $|H| = 4$ then $H = \{1, a, bab^2, b^2ab\}$. In this case, the vertices $a$, $b$, $bab^2$ and $ab$ make a cycle.
\noindent \textbf{Case 7:} $G \cong Q_{12} = \langle a, b : a^6 = 1, b^2 = a^3 \text{ and } bab^{-1} = a^{-1}\rangle$. If $|H| = 2$ then $H = Z(Q_{12})$ and so $\Delta_{H,Q_{12}}^g$ is an empty graph. If $|H|=3$ then $H = \{1, a^2, a^4\}$. In this case, $a^3$ is an isolated vertex in $G \setminus H$ (since $a^3 \in Z(D_{12})$) and so $\Delta_{H,Q_{12}}^g$ is disconnected. If $|H|= 4$ then $H = \{1, a^3, b, a^3b\}$, $\{1, a^3, ab, a^4b\}$ or $\{1,a^3,a^2b,a^5b\}$. If $|H| = 6$ then $H =\{1, a, a^2, a^3, a^4, a^5\}$. We have $[a, b] = [a, ab] = [a^4, a^4b] = [a^5b, a^2] = [b, a^2] = [b, a^5] =[a^2b,a^5] = a^4 \ne 1$ and $[a^5, a^5b] = [a^5b, a] = [ab, a^4] = [a^4b, a] = [a^2, a^2b] = [a^2b, a] = a^2 \ne 1$. Therefore, if $H = \{1, a^3, b, a^3b\}$ then the vertices $a$, $b$, $a^5$ and $a^5b$ make a cycle; if $H = \{1, a^3, ab, a^4b\}$ then the vertices $a$, $ab$, $a^4$ and $a^4b$ make a cycle; if $H = \{1, a^3, a^2b, a^5b\}$ then the vertices $a^2$, $a^2b$, $a^5$ and $a^5b$ make a cycle; and if $H = \{1, a, a^2, a^3, a^4, a^5\}$ then the vertices $a$, $b$, $a^2$ and $a^2b$ make a cycle. This completes the proof. \end{proof}
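Case 4 of the proof above ($G = D_{10}$, $|H| = 2$, $g = 1$) can also be checked computationally. This sketch (ours, not from the paper) encodes $a^k b^f$ as the pair $(k, f)$ and confirms that $\Delta_{H, D_{10}}^1$ with $H = \langle b \rangle$ is a star, hence a tree.

```python
# Sketch: Delta_{H, D10}^1 with H = <b> is a star graph (Case 4 above).
# Elements (k, f) stand for a^k b^f with a^5 = b^2 = 1, b a b^{-1} = a^{-1}.
N = 5

def mul(x, y):
    (k1, f1), (k2, f2) = x, y
    return ((k1 + (-1) ** f1 * k2) % N, (f1 + f2) % 2)

def inv(x):
    k, f = x
    return ((-k) % N, 0) if f == 0 else x    # reflections are involutions

def comm(x, y):                              # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

G = [(k, f) for k in range(N) for f in range(2)]
e, b = (0, 0), (0, 1)
H, g = [e, b], e                             # H = <b>, g = 1; here Z(H, G) = {1}
V = [x for x in G if x != e]                 # vertex set G \ Z(H, G)

def adjacent(x, y):
    return x != y and (x in H or y in H) and comm(x, y) not in (g, inv(g))

degrees = sorted(sum(adjacent(x, y) for y in V) for x in V)
assert degrees == [1] * 8 + [8]              # star centred at b, hence a tree
```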
\begin{theorem}\label{not_a_tree2}
If $G$ is a non-abelian group of order $\leq 12$ and $g \ne 1$ then $\Delta_{H, G}^g$ is a tree if and only if $g^2 = 1$, $G \cong A_4$ and $|H|=2$ such that $H = \langle g \rangle$. \end{theorem} \begin{proof} If $H$ is the trivial subgroup of $G$ then $\Delta_{H, G}^g$ is an empty graph. If $H=G$ then, by {\rm\cite[Theorem 2.5]{TEJ14}}, we have $\Delta_{H, G}^g$ is not a tree. So we examine only the proper subgroups of $G$, where $G \cong D_6, D_8, Q_8, D_{10}, D_{12}, Q_{12}$ or $A_4$. We consider the following two cases.
\noindent {\bf Case 1:} $g^2 = 1$
In this case $G \cong D_8$, $Q_8$ or $A_4$.
If $G \cong D_8=\langle a, b: a^4 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$ then $g = a^2$ and $|H|= 2, 4$. If $|H|=2$ then $H = Z(D_8)$ or $\langle a^rb \rangle$, for every integer $r$ such that $1 \leq r \leq 4$. For $H = Z(D_8)$, $\Delta_{H, D_8}^g$ is an empty graph. For $H = \langle a^rb \rangle$, in each case $a$ is an isolated vertex in $G \setminus H$ (since $[a, a^rb]=a^2$) and hence, $\Delta_{H, D_{8}}^g$ is disconnected. If $|H|=4$ then $H = \{1, a, a^2, a^3\}$, $\{1, a^2, b, a^2b\}$ or $\{1, a^2, ab, a^3b\}$. For $H = \{1, a, a^2, a^3\}$, $b$ is an isolated vertex in $G \setminus H$ (since $[a, b]=a^2=[a^3, b]$) and hence, $\Delta_{H, D_{8}}^g$ is disconnected. If $H = \{1, a^2, b, a^2b\}$ or $\{1, a^2, ab, a^3b\}$ then $a$ is an isolated vertex in $G \setminus H$ (since $[a,a^rb] = a^2$ for every integer $r$ such that $1 \leq r \leq 4$) and hence, $\Delta_{H, D_{8}}^g$ is disconnected.
If $G \cong Q_8 = \langle a, b : a^4 = 1, b^2 = a^2 \text{ and } bab^{-1} = a^{-1}\rangle$ then $g = a^2$ and $|H|= 2, 4$. If $|H| = 2$ then $H = Z(Q_8)$ and hence $\Delta_{H, Q_8}^g$ is an empty graph. If $|H| = 4$ then $H = \{1, a, a^2, a^3\}$, $\{1, a^2, b, a^2b\}$ or $\{1, a^2, ab, a^3b\}$. In each case, vertices of $H \setminus Z(H, G)$ commute with each other and commutator of these vertices and those of $G \setminus H$ equals $a^2$. Hence, the vertices in $G \setminus H$ remain isolated and so $\Delta_{H, Q_{8}}^g$ is disconnected.
If $G \cong A_4 = \langle a, b : a^2 = b^3 = (ab)^3 = 1\rangle$ then $g \in \{a, bab^2, b^2ab\}$ and $|H|= 2, 3, 4$.
If $|H|=2$ then $H = \langle a\rangle$, $\langle bab^2\rangle$ or $\langle b^2ab \rangle$.
If $H = \langle g \rangle$ then $\Delta_{H,A_{4}}^g$ is a star graph because $[g,x]\ne g$ for all $x \in G \setminus H$ and hence a tree; otherwise $\Delta_{H,A_{4}}^g$ is not a tree as shown in Figures 1.1--1.6.
If $|H| = 3$ then $H = \langle x\rangle$, where $x = b, ab, ba, aba$ or their inverses. We have $[x, x^{-1}] = 1$, $[x, g] \ne g$ and $[x^{-1}, g] \ne g$. Therefore, $x$, $x^{-1}$ and $g$ make a triangle for each such subgroup in the graph $\Delta_{H, A_{4}}^g$.
If $|H|=4$ then $H = \{1, a, bab^2, b^2ab\}$. Since $H$ is abelian, the vertices $a$, $bab^2$ and $b^2ab$ make a triangle in the graph $\Delta_{H, A_{4}}^g$.
\noindent {\bf Case 2:} $g^2 \ne 1$
In this case $G \cong D_6$, $D_{10}$, $D_{12}$ or $Q_{12}$.
If $G \cong D_6 = \langle a, b : a^3 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$ then $g \in \{a, a^2\}$ and $|H| = 2, 3$. We have $\Delta_{H,D_6}^a = \Delta_{H,D_6}^{a^2}$ since $a^{-1} = a^2$. If $|H| = 2$ then $H = \langle x\rangle$, where $x = b, ab$ and $a^2b$.
We have $[x, y] \in \{g, g^{-1}\}$ for all $y \in G \setminus H$ and so $\Delta_{H, D_{6}}^g$ is an empty graph. If $|H| = 3$ then $H = \{1, a, a^2\}$. In this case, the vertices of $G \setminus H$ remain isolated since for $y \in G \setminus H$ we have $[a, y], [a^2, y] \in \{g, g^{-1}\}$.
If $G \cong D_{10}=\langle a, b : a^5 = b^2 = 1 \text{ and } bab^{-1} = a^{-1}\rangle$ then $g \in \{a, a^2, a^3, a^4\}$ and $|H| = 2, 5$. We have $\Delta_{H,D_{10}}^a = \Delta_{H,D_{10}}^{a^4}$ and $\Delta_{H,D_{10}}^{a^2} = \Delta_{H,D_{10}}^{a^3}$ since $a^{-1}=a^4$ and $(a^2)^{-1}=a^3$.
Suppose that $|H| = 2$. Then $H = \langle a^rb \rangle$ for some integer $r$ such that $1 \leq r \leq 5$. If $g = a$ then for each such subgroup $H$, $a^2$ is an isolated vertex in $\Delta_{H,D_{10}}^g$ (since $[a^2,a^rb]=a^4$ for every integer $r$ such that $1 \leq r \leq 5$). If $g = a^2$ then for each such subgroup $H$, $a$ is an isolated vertex in $\Delta_{H,D_{10}}^g$ (since $[a,a^rb]=a^2$ for every integer $r$ such that $1 \leq r \leq 5$). Hence, $\Delta_{H,D_{10}}^g$ is disconnected for each $g$ and each subgroup $H$ of order $2$. Now suppose that $|H|=5$. Then $H=\{1,a,a^2,a^3,a^4\}$. In this case, the vertices $a$, $a^2$, $a^3$ and $a^4$ make a cycle in $\Delta_{H,D_{10}}^g$ for each $g$ as they commute among themselves.
If $G \cong D_{12}=\langle a,b~|~a^6=b^2=1 \text{ and } bab^{-1}=a^{-1}\rangle$ then $g \in \{a^2,a^4\}$ and $|H|=2, 3, 4, 6$. We have $\Delta_{H,D_{12}}^{a^2} = \Delta_{H,D_{12}}^{a^4}$ since $(a^2)^{-1}=a^4$. If $|H|=2$ then $H = Z(D_{12})$ or $\langle a^rb \rangle$ for some integer $r$ such that $1 \leq r \leq 6$. For $H = Z(D_{12})$, $\Delta_{H,D_{12}}^g$ is an empty graph. For $H = \langle a^rb \rangle$, in each case $a$ is an isolated vertex in $G \setminus H$ (since $[a,a^rb]=a^2$ for every integer $r$ such that $1 \leq r \leq 6$) and hence, $\Delta_{H,D_{12}}^g$ is disconnected. If $|H|=3$ then $H=\{1,a^2,a^4\}$. In this case, the vertices $a$, $a^2$ and $a^4$ make a triangle in $\Delta_{H,D_{12}}^g$ since they commute among themselves. If $|H|=4$ then $H = \{1,a^3,b,a^3b\}$, $\{1,a^3,ab,a^4b\}$ or $\{1,a^3,a^2b,a^5b\}$. For all these $H$, $a$ is an isolated vertex in $G \setminus H$ (since $[a,a^rb]=a^2$ for every integer $r$ such that $1 \leq r \leq 6$) and hence, $\Delta_{H,D_{12}}^g$ is disconnected. If $|H|=6$ then $H=\{1,a,a^2,a^3,a^4,a^5\}$, $\{1,a^2,a^4,b,a^2b,a^4b\}$ or $\{1,a^2,a^4,ab,a^3b,a^5b\}$. For all these $H$ the vertices $a$, $a^2$, $a^4$ and $a^5$ make a cycle in $\Delta_{H,D_{12}}^g$ since they commute among themselves.
If $G \cong Q_{12}=\langle a,b~|~a^6=1, b^2=a^3 \text{ and } bab^{-1}=a^{-1}\rangle$ then $g \in \{a^2,a^4\}$ and $|H|=2, 3, 4, 6$. We have $\Delta_{H,Q_{12}}^{a^2} = \Delta_{H,Q_{12}}^{a^4}$ since $(a^2)^{-1}=a^4$. If $|H|=2$ then $H = Z(Q_{12})$ and so $\Delta_{H,Q_{12}}^g$ is an empty graph. If $|H|=3$ then $H=\{1,a^2,a^4\}$. In this case, the vertices $a$, $a^2$ and $a^4$ make a triangle in $\Delta_{H,Q_{12}}^g$ since they commute among themselves. If $|H|=4$ then $H = \{1,a^3,b,a^3b\}$, $\{1,a^3,ab,a^4b\}$ or $\{1,a^3,a^2b,a^5b\}$. For all these $H$, $a$ is an isolated vertex in $G \setminus H$ (since $[a,a^rb]=a^2$ for every integer $r$ such that $1 \leq r \leq 6$) and hence, $\Delta_{H,Q_{12}}^g$ is disconnected. If $|H|=6$ then $H=\{1,a,a^2,a^3,a^4,a^5\}$. In this case, the vertices $a$, $a^2$, $a^4$ and $a^5$ make a cycle in $\Delta_{H,Q_{12}}^g$ since they commute among themselves. \end{proof}
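The commutator computations in this proof are easy to check by machine. The sketch below is illustrative only: it encodes $a^k b^f \in D_{12}$ as a pair $(k, f)$ and assumes (consistently with how the proofs above use $\Delta_{H,G}^g$) that the vertex set is $G \setminus Z(G)$ and that two vertices are adjacent exactly when at least one of them lies in $H$ and their commutator differs from both $g$ and $g^{-1}$.

```python
from itertools import combinations

def dihedral(n):
    # a^k b^f encoded as (k, f); relations a^n = b^2 = 1, b a b^{-1} = a^{-1}
    return [(k, f) for k in range(n) for f in (0, 1)]

def mul(x, y, n):
    (k, f), (j, e) = x, y
    return ((k + (j if f == 0 else -j)) % n, (f + e) % 2)

def inv(x, n):
    k, f = x
    return ((-k) % n, 0) if f == 0 else (k, 1)

def comm(x, y, n):
    # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x, n), inv(y, n), n), mul(x, y, n), n)

def graph(H, g, n):
    # assumed definition: vertices are G \ Z(G); u ~ v iff u or v lies in H
    # and [u, v] differs from g and g^{-1}
    G = dihedral(n)
    Z = [z for z in G if all(mul(z, x, n) == mul(x, z, n) for x in G)]
    E = {v: set() for v in G if v not in Z}
    for u, v in combinations(E, 2):
        if (u in H or v in H) and comm(u, v, n) not in (g, inv(g, n)):
            E[u].add(v); E[v].add(u)
    return E

n, g = 6, (2, 0)                                    # D_12 with g = a^2
E1 = graph([(0, 0), (2, 0), (4, 0)], g, n)          # H = <a^2>
tri = [(1, 0), (2, 0), (4, 0)]                      # a, a^2, a^4
assert all(v in E1[u] for u, v in combinations(tri, 2))   # they form a triangle
E2 = graph([(0, 0), (3, 0), (0, 1), (3, 1)], g, n)  # H = {1, a^3, b, a^3 b}
assert not E2[(1, 0)]                               # a is isolated: disconnected
```

Running the snippet silently confirms the $|H|=3$ triangle and the isolated vertex $a$ in the $|H|=4$ case of $D_{12}$ above.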
\section{Connectivity and diameter} Connectivity of $\Delta_{G}^g$ was studied in \cite{NEGJ16,NEGJ17,NEM18}, where it was conjectured that the diameter of $\Delta_{G}^g$ equals $2$ whenever $\Delta_{G}^g$ is connected. In this section we discuss the connectivity of $\Delta_{H, G}^g$; in general, $\Delta_{H, G}^g$ is not connected. For any two vertices $x$ and $y$, we write $x \sim y$ and $x \nsim y$ to mean that they are adjacent and non-adjacent, respectively. We write $d(x, y)$ and $\diam(\Delta_{H, G}^g)$ to denote the distance between the vertices $x$ and $y$ and the diameter of $\Delta_{H, G}^g$, respectively.
\begin{theorem}\label{Ind-diam-1} If $g$ is a non-central element of $G$ such that $g \in H$ and $g^2 = 1$ then $\diam(\Delta^g_{H,G}) = 2$. \end{theorem} \begin{proof} Let $x \ne g$ be any vertex of $\Delta^g_{H,G}$. Then $[x,g] \ne g$, which implies $[x,g] \ne g^{-1}$ since $g^2 = 1$. Since $g \in H$, it follows that $x \sim g$. Therefore, $d(x,y) \leq 2$ for any two vertices $x$ and $y$, and hence $\diam(\Delta^g_{H,G}) = 2$. \end{proof} \begin{lemma}\label{Ind-diam-2} Let $g \in H \setminus Z(H,G)$ be such that $g^2 \ne 1$ and $o(g) \neq 3$. If $x \in G \setminus Z(H,G)$ and $x \nsim g$ then $x \sim g^2$. \end{lemma} \begin{proof} Since $g \ne 1$ and $x \nsim g$, it follows that $[x, g] = g^{-1}$. We have \begin{equation}\label{Ind-eq-01} [x,g^2]= [x, g][x, g]^g = g^{-2} \ne g, g^{-1}. \end{equation}
If $g^2 \in Z(H,G)$ then, by \eqref{Ind-eq-01}, we have $g^{-2} = [x,g^2] = 1$; a contradiction. Therefore, $g^2 \in H\setminus Z(H,G)$. Hence, $x \sim g^2$. \end{proof} \begin{theorem} Let $g \in H \setminus Z(H,G)$ and $o(g) \ne 3$. Then $\diam(\Delta^g_{H,G}) \leq 3$. \end{theorem} \begin{proof} If $g^2 = 1$ then, by Theorem \ref{Ind-diam-1}, we have $\diam(\Delta^g_{H,G}) = 2$. Therefore, we assume that $g^2 \ne 1$. Let $x, y$ be any two vertices of $\Delta^g_{H,G}$ such that $x \nsim y$. Therefore, $[x, y] = g$ or $g^{-1}$. If $x \sim g$ and $y \sim g$ then $x \sim g \sim y$ and so $d(x, y) = 2$. If $x \nsim g$ and $y \nsim g$ then, by Lemma \ref{Ind-diam-2}, we have $x \sim g^2 \sim y$ and so $d(x, y) = 2$. Therefore, we shall not consider these two situations in the following cases.
\noindent \textbf{Case 1:} $x, y \in H$
Suppose that one of $x, y$ is adjacent to $g$ and the other is not. Without any loss we assume that $x \nsim g$ and $y \sim g$. Then $[x, g] = g^{-1}$ and $[y, g] \ne g, g^{-1}$. By Lemma \ref{Ind-diam-2}, we have $x \sim g^2$.
Consider the element $yg \in H$. If $yg \in Z(H,G)$ then $[y, g^2] = 1 \ne g, g^{-1}$. Therefore, $x \sim g^2 \sim y$ and so $d(x, y) = 2$.
If $yg \notin Z(H,G)$ then we have $[x,yg] = [x, g][x, y]^g = g^{-1}[x, y]^g \ne g, g^{-1}$.
Also, $[y, yg] = [y, g] \ne g, g^{-1}$. Hence, $x \sim yg \sim y$ and so $d(x, y) = 2$.
\noindent \textbf{Case 2:} One of $x, y$ belongs to $H$ and the other does not.
Without any loss assume that $x \in H$ and $y \notin H$. If $x \nsim g$ and $y \sim g$ then, by Lemma \ref{Ind-diam-2}, we have $x \sim g^2$. Also, $[g, g^2] = 1 \ne g, g^{-1}$ and so $g^2 \sim g$. Therefore, $x \sim g^2 \sim g \sim y$ and hence $d(x,y) \leq 3$. If $x \sim g$ and $y \nsim g$ then $[x, g] \ne g,g^{-1}$ and $[y, g] = g^{-1}$. By Lemma \ref{Ind-diam-2}, we have $y \sim g^2$. Consider the element $xg \in H$. If $xg \in Z(H,G)$ then $[x, g^2] = 1 \ne g, g^{-1}$. Therefore, $x \sim g^2$ and so $y \sim g^2 \sim x$. Thus $d(x, y) = 2$.
If $xg \notin Z(H,G)$ then we have $[y, xg] = [y, g][y, x]^g = g^{-1}[y, x]^g \ne g, g^{-1}$.
Also, $[x, xg] = [x, g] \ne g, g^{-1}$. Hence, $y \sim xg \sim x$ and so $d(x, y) = 2$.
\noindent \textbf{Case 3:} $x, y \notin H$.
Suppose that one of $x, y$ is adjacent to $g$ and the other is not. Without any loss we assume that $x \nsim g$ and $y \sim g$. Then, by Lemma \ref{Ind-diam-2}, we have $x \sim g^2$. Also, $[g, g^2] = 1 \ne g, g^{-1}$ and so $g^2 \sim g$. Therefore, $x \sim g^2 \sim g \sim y$ and hence $d(x,y) \leq 3$.
Thus $d(x,y) \leq 3$ for all $x, y \in G \setminus Z(H,G)$. Hence the result follows. \end{proof}
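For small dihedral groups the bound $\diam(\Delta^g_{H,G}) \leq 3$ can be confirmed by brute force. The sketch below is a numerical check only, under the same assumptions about $\Delta_{H,G}^g$ as in the snippet of the previous section (vertex set $G \setminus Z(G)$; adjacency requires an endpoint in $H$ and a commutator different from $g^{\pm 1}$); the group-arithmetic helpers are repeated so that the snippet is self-contained.

```python
from collections import deque
from itertools import combinations

def mul(x, y, n):
    (k, f), (j, e) = x, y
    return ((k + (j if f == 0 else -j)) % n, (f + e) % 2)

def inv(x, n):
    k, f = x
    return ((-k) % n, 0) if f == 0 else (k, 1)

def comm(x, y, n):
    return mul(mul(inv(x, n), inv(y, n), n), mul(x, y, n), n)

def graph(H, g, n):
    G = [(k, f) for k in range(n) for f in (0, 1)]
    Z = [z for z in G if all(mul(z, x, n) == mul(x, z, n) for x in G)]
    E = {v: set() for v in G if v not in Z}
    for u, v in combinations(E, 2):
        if (u in H or v in H) and comm(u, v, n) not in (g, inv(g, n)):
            E[u].add(v); E[v].add(u)
    return E

def diameter(E):
    # BFS from every vertex; None signals a disconnected graph
    best = 0
    for s in E:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in E[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if len(dist) < len(E):
            return None
        best = max(best, max(dist.values()))
    return best

n = 8                                            # D_16
rot = [(k, 0) for k in range(n)]                 # H = <a>
for g in [(2, 0), (6, 0)]:                       # g = a^2, a^6 (order 4, not 3)
    assert diameter(graph(rot, g, n)) == 2
H2 = [(k, f) for k in range(0, n, 2) for f in (0, 1)]   # H = <a^2, b>
d = diameter(graph(H2, (2, 0), n))
assert d is not None and d <= 3                  # within the bound diam <= 3
```

For $D_{16}$ this confirms both the bound of the theorem and the sharper value $\diam = 2$ for $H = \langle a \rangle$.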
The rest of this paper is devoted to the study of connectivity of $\Delta_{H, D_{2n}}^g$, where $D_{2n} = \langle a, b: a^n = b^2 = 1, bab^{-1} = a^{-1} \rangle$ is the dihedral group of order $2n$. It is well-known that $Z(D_{2n}) = \{1\}$ and the commutator subgroup $D_{2n}' = \langle a \rangle$ if $n$ is odd, while $Z(D_{2n}) = \{1,a^\frac{n}{2}\}$ and $D_{2n}' = \langle a^2 \rangle$ if $n$ is even. By \cite[Theorem 4]{NEGJ17}, it follows that $\Delta_{H, D_{2n}}^g$ is disconnected if $n = 3, 4, 6$. Therefore, in the following results we consider $n \geq 8$ when $n$ is even and $n \geq 5$ when $n$ is odd.
\begin{theorem} Consider the graph $\Delta_{H, D_{2n}}^g$, where $n \, (\geq 8)$ is even. \begin{enumerate} \item If $H = \langle a \rangle $ then $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\item Let $H = \langle a^\frac{n}{2}, a^rb \rangle $ for $0 \leq r < \frac{n}{2}$. Then $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$ if $g = 1$ and $\Delta_{H, D_{2n}}^g$ is not connected if $g \ne 1$.
\item If $H = \langle a^rb \rangle$ for $1 \leq r \leq n$ then $\Delta_{H, D_{2n}}^g$ is not connected. \end{enumerate} \end{theorem} \begin{proof} Since $n$ is even we have $g = a^{2i}$ for $1 \leq i \leq \frac{n}{2}$.
\noindent (a) \textbf{Case 1:} $g=1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is empty. So we need to examine the adjacency of these vertices with those in $D_{2n} \setminus H$. Suppose that $[a^rb, a^j]=1$ and $[b,a^j]=1$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$. Then $a^{2j} = a^0$ or $a^{n}$ and so $j=0$ or $j=\frac{n}{2}$. Therefore, every vertex in $H \setminus Z(H, D_{2n})$ is adjacent to all the vertices in $D_{2n} \setminus H$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent \textbf{Case 2:} $g \ne 1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is a complete graph. Therefore, it is sufficient to prove that no vertex in $D_{2n} \setminus H$ is isolated. If $g \neq g^{-1}$ then $g \neq a^{\frac{n}{2}}$. Suppose that $[a^rb, a^j]=g$ and $[b,a^j]=g$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$. Then $a^{2j} = a^{2i}$ and so $j=i$ or $j=\frac{n}{2}+i$. If $[a^rb, a^j]=g^{-1}$ and $[b,a^j]=g^{-1}$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$ then $a^{2j} = a^{n -2i}$ and so $j=n-i$ or $j=\frac{n}{2}-i$. Therefore, there exists an integer $j$ such that $1 \leq j \leq n-1$ and $j \neq i, \frac{n}{2}+i, n-i \text{ and } \frac{n}{2}-i$ for which $a^j$ is adjacent to all the vertices in $D_{2n} \setminus H$. If $g = g^{-1}$ then $g = a^{\frac{n}{2}}$. Suppose that $[a^rb, a^j]=g$ and $[b,a^j]=g$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$. Then $a^{2j} = a^{\frac{n}{2}}$ and so $j= \frac{n}{4}$ or $j=\frac{3n}{4}$. Therefore, there exists an integer $j$ such that $1 \leq j \leq n-1$ and $j \neq \frac{n}{4} \text{ and } \frac{3n}{4}$ for which $a^j$ is adjacent to all the vertices in $D_{2n} \setminus H$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent (b) \textbf{Case 1:} $g=1$
We have $[a^{\frac{n}{2}+r}b,a^rb] = 1$ for every integer $r$ such that $1 \leq r \leq n-1$. Therefore, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is empty. So we need to see the adjacency of these vertices with those in $D_{2n} \setminus H$. Suppose $[a^rb,a^i] = 1$ and $[a^{\frac{n}{2}+r}b,a^i] = 1$ for every integer $i$ such that $1 \leq i \leq n - 1$. Then $a^{2i} = a^{n}$ and so $i=\frac{n}{2}$. Therefore, for every integer $i$ such that $1 \leq i \leq n-1$ and $i \neq \frac{n}{2}$, $a^i$ is adjacent to both $a^rb$ and $a^{\frac{n}{2}+r}b$. Also we have $[a^sb,a^rb] = a^{2(s-r)}$ and $[a^{\frac{n}{2}+r}b,a^sb] = a^{2(\frac{n}{2}+r-s)}$ for every integer $s$ such that $1 \leq s \leq n - 1$. Suppose $[a^sb,a^rb] = 1$ and $[a^{\frac{n}{2}+r}b,a^sb] = 1$. Then $s=r$ or $s= \frac{n}{2}+r$. Therefore, for every integer $s$ such that $1 \leq s \leq n-1$ and $s \neq r, \frac{n}{2}+r$, $a^sb$ is adjacent to both $a^rb$ and $a^{\frac{n}{2}+r}b$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent \textbf{Case 2:} $g \ne 1$
If $H = \langle a^\frac{n}{2}, a^rb \rangle = \{1, a^\frac{n}{2}, a^rb,a^{\frac{n}{2}+r}b\}$ for $0 \leq r < \frac{n}{2}$ then $H \setminus Z(H, D_{2n}) = \{a^rb, a^{\frac{n}{2}+r}b\}$. We have $[a^rb,a^i] = a^{2i} = [a^{\frac{n}{2}+r}b,a^i]$ for every integer $i$ such that $1 \leq i \leq \frac{n}{2} - 1$. That is, $[a^rb,a^i] = g$ and $[a^{\frac{n}{2}+r}b,a^i] = g$ for every integer $i$ such that $1 \leq i \leq \frac{n}{2} - 1$. Thus $a^i$ is an isolated vertex in $D_{2n} \setminus H$. Hence, $\Delta_{H, D_{2n}}^g$ is not connected.
\noindent (c) \textbf{Case 1:} $g = 1$
We have $[a^{\frac{n}{2}+r}b,a^rb] = 1$ for every integer $r$ such that $1 \leq r \leq n - 1$. Thus $a^{\frac{n}{2}+r}b$ is an isolated vertex in $D_{2n} \setminus H$. Hence, $\Delta_{H, D_{2n}}^g$ is not connected.
\noindent \textbf{Case 2:} $g \ne 1$
If $H = \langle a^rb \rangle = \{1,a^rb\}$ for $1 \leq r \leq n$ then $H \setminus Z(H, D_{2n}) = \{a^rb\}$. We have $[a^rb,a^i] = a^{2i} = g$ for every integer $i$ such that $1 \leq i \leq \frac{n}{2} - 1$. Thus $a^i$ is an isolated vertex in $D_{2n} \setminus H$. Hence, $\Delta_{H, D_{2n}}^g$ is not connected. \end{proof}
\begin{theorem} Consider the graph $\Delta_{H, D_{2n}}^g$, where $n \, (\geq 8)$ and $\frac{n}{2}$ are even. \begin{enumerate} \item If $H = \langle a^2 \rangle$ then $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$ if and only if $g \notin \langle a^4 \rangle$.
\item If $H = \langle a^2, b \rangle$ or $ \langle a^2, ab \rangle$ then $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$ if $g = 1$ and $\diam(\Delta_{H, D_{2n}}^g) \leq 3$ if $g \ne 1$. \end{enumerate} \end{theorem} \begin{proof} Since $n$ is even, we have $g = a^{2i}$ for $1 \leq i \leq \frac{n}{2}$.
\noindent (a) \textbf{Case 1:} $g = 1$
We know that the vertices in $H$ commute with all the odd powers of $a$. That is, any vertex in $\Delta_{H, D_{2n}}^g$ of the form $a^i$, where $i$ is an odd integer and $1 \leq i \leq n-1$, is not adjacent to any vertex. Hence, $\Delta_{H, D_{2n}}^g$ is not connected.
\noindent \textbf{Case 2:} $g \ne 1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is a complete graph. Also, the vertices in $H$ commute with all the odd powers of $a$. That is, a vertex of the form $a^i$, where $i$ is an odd integer, in $\Delta_{H, D_{2n}}^g$ is adjacent to all the vertices in $H$. We have $[a^rb, a^{2i}] = a^{4i}$ and $[b,a^{2i}] = a^{4i}$ for all integers $r,i$ such that $1 \leq r \leq n-1$ and $1 \leq i \leq \frac{n}{2} - 1$. Thus, for $g \notin \langle a^4 \rangle$, every vertex of $H$ is adjacent to the vertices of the form $a^rb$, where $1 \leq r \leq n$. Therefore, $\Delta_{H, D_{2n}}^g$ is a complete graph. Hence, it is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$. Also, if $g = a^{4i}$ for some integer $i$, where $1 \leq i \leq \frac{n}{4} - 1$ (i.e., $g \in \langle a^4 \rangle$), then the vertices in $D_{2n} \setminus H$ remain isolated. Hence $\Delta_{H, D_{2n}}^g$ is disconnected in this case. This completes the proof of part (a).
\noindent (b) \textbf{Case 1:} $g = 1$
Suppose that $H = \langle a^2, b \rangle$. Then $a^{2i} \nsim a^j$ but $a^{2i} \sim a^rb$ for all $i,j,r$ such that $1\leq i\leq \frac{n}{2}-1$, $i \neq \frac{n}{4}$; $1\leq j\leq n-1$ is an odd number and $1\leq r\leq n$ because $[a^{2i},a^j]=1$ and $[a^{2i},a^rb]=a^{4i}$. We shall find a path to $a^j$, where $1\leq j\leq n-1$ is an odd number. We have $[a^j,b]= a^{2j} \neq 1$ and $a^j \in G \setminus H$ for all $j$ such that $1\leq j\leq n-1$ is an odd number. Therefore, $a^{2i} \sim b \sim a^j$. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
If $H = \langle a^2, ab \rangle$ then $a^{2i} \nsim a^j$ but $a^{2i} \sim a^rb$ for all $i,j,r$ such that $1\leq i\leq \frac{n}{2}-1$, $i \neq \frac{n}{4}$; $1\leq j\leq n-1$ is an odd number and $1\leq r\leq n$ because $[a^{2i},a^j]=1$ and $[a^{2i},a^rb]=a^{4i}$. We shall find a path to $a^j$, where $1\leq j\leq n-1$ is an odd number. We have $[a^j,ab]= a^{2j} \neq 1$ and $a^j \in G \setminus H$ for all $j$ such that $1\leq j\leq n-1$ is an odd number. Therefore, $a^{2i} \sim ab \sim a^j$. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent \textbf{Case 2:} $g \ne 1$
We have $\langle a^2\rangle \subset H$. Therefore, if $g \notin \langle a^4\rangle$ then every vertex in $\langle a^2\rangle$ is adjacent to all other vertices in both the cases (as discussed in part (a)). Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$. Suppose that $g = a^{4i}$ for some integer $i$, where $1 \leq i \leq \frac{n}{4} - 1$.
Suppose that $H = \langle a^2, b \rangle $. Then $a^{2i} \sim a^j$ but $a^{2i} \nsim a^rb$ for all $j,r$ such that $1\leq j\leq n-1$ is an odd number and $1\leq r\leq n$ because $[a^{2i},a^j]=1$ and $[a^{2i},a^rb]=a^{4i}$. We shall find a path between $a^{2i}$ and $a^rb$ for all $i,r$ such that $1\leq i\leq \frac{n}{2}-1$ and $1\leq r\leq n$. We have $[a^j,b]= a^{2j} \neq a^{4i}$ and $a^j \in G \setminus H$ for all $j$ such that $1\leq j\leq n-1$ is an odd number. Therefore, $a^{2i} \sim a^j \sim b$. Consider the vertices of the form $a^rb$ where $1\leq r\leq n-1$. We have $[a^rb,b]=a^{2r}$. Suppose $[a^rb,b]=g$ then it gives $a^{2r}=a^{4i}$ which implies $r = 2i$ or $r=\frac{n}{2}+2i$. Therefore, $b \sim a^rb$ if and only if $r \neq 2i$ and $r \neq \frac{n}{2} + 2i$. So we have $a^{2i} \sim a^j \sim b \sim a^rb$, where $1\leq r\leq n-1$ and $r \neq 2i$ and $r \neq \frac{n}{2} + 2i$. Again we know that $a^{\frac{n}{2}+2i}b,a^{2i}b \in H$ and $[a^{\frac{n}{2} + 2i}b,a^{2i}b] = 1$, so $a^{\frac{n}{2}+2i}b \sim a^{2i}b$. If we are able to find a path between $a^j$ and any one of $a^{\frac{n}{2}+2i}b$ and $a^{2i}b$ then we are done. Now $[a^{2i}b,a^j] \neq a^{4i}$ and $[a^{\frac{n}{2}+2i}b,a^j] \neq a^{4i}$ for any odd number $j$ such that $1 \leq j \leq n-1$ so we have $a^{\frac{n}{2}+2i}b \sim a^j \sim a^{2i}b$. Thus $a^{2i} \sim a^j \sim a^{2i}b$, $a^{2i} \sim a^j \sim a^{\frac{n}{2}+2i}b$, $a^rb \sim b \sim a^j \sim a^{2i}b$ and $a^rb \sim b \sim a^j \sim a^{\frac{n}{2}+2i}b$, where $1\leq r\leq n-1$ and $r \neq 2i$ and $r \neq \frac{n}{2} + 2i$. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) \leq 3$.
If $H = \langle a^2, ab \rangle$ then $a^{2i} \sim a^j$ but $a^{2i} \nsim a^rb$ for all $j,r$ such that $1\leq j\leq n-1$ is an odd number and $1\leq r\leq n$ because $[a^{2i},a^j]=1$ and $[a^{2i},a^rb]=a^{4i}$. We shall find a path between $a^{2i}$ and $a^rb$ for all $i,r$ such that $1\leq i\leq \frac{n}{2}-1$ and $1\leq r\leq n$. We have $[a^j,ab]= a^{2j} \neq a^{4i}$ and $a^j \in G \setminus H$ for all $j$ such that $1\leq j\leq n-1$ is an odd number. So we have $a^{2i} \sim a^j \sim ab$. Consider the vertices of the form $a^rb$, where $2\leq r\leq n$. We have $[a^rb,ab]=a^{2(r-1)}$. Suppose $[a^rb,ab]=g$ then it gives $a^{2(r-1)}=a^{4i}$ which implies $r = 2i+1$ or $r=\frac{n}{2}+2i+1$. Therefore, $ab \sim a^rb$ if and only if $r \neq 2i+1$ and $r \neq \frac{n}{2} + 2i+1$. So we have $a^{2i} \sim a^j \sim ab \sim a^rb$, where $2\leq r\leq n$ and $r \neq 2i+1$ and $r \neq \frac{n}{2} + 2i+1$. Again we know that $a^{\frac{n}{2}+2i+1}b,a^{2i+1}b \in H$ and $[a^{\frac{n}{2} + 2i+1}b,a^{2i+1}b] = 1$, so $a^{\frac{n}{2}+2i+1}b \sim a^{2i+1}b$. If we are able to find a path between $a^j$ and any one of $a^{\frac{n}{2}+2i+1}b$ and $a^{2i+1}b$ then we are done. Now $[a^{2i+1}b,a^j] \neq a^{4i}$ and $[a^{\frac{n}{2}+2i+1}b,a^j] \neq a^{4i}$ for any odd number $j$ such that $1 \leq j \leq n-1$ so we have $a^{\frac{n}{2}+2i+1}b \sim a^j \sim a^{2i+1}b$. Thus $a^{2i} \sim a^j \sim a^{2i+1}b$, $a^{2i} \sim a^j \sim a^{\frac{n}{2}+2i+1}b$, $a^rb \sim ab \sim a^j \sim a^{2i+1}b$ and $a^rb \sim ab \sim a^j \sim a^{\frac{n}{2}+2i+1}b$, where $2\leq r\leq n$ and $r \neq 2i+1$ and $r \neq \frac{n}{2} + 2i+1$. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) \leq 3$. \end{proof}
\begin{theorem} Consider the graph $\Delta_{H, D_{2n}}^g$, where $n \, (\geq 8)$ is even and $\frac{n}{2}$ is odd. \begin{enumerate} \item If $H = \langle a^2 \rangle$ then $\Delta_{H, D_{2n}}^g$ is not connected if $g = 1$ and $\Delta_{H, D_{2n}}^g$ is connected with $\diam(\Delta_{H, D_{2n}}^g) = 2$ if $g \ne 1$.
\item If $H = \langle a^2, b \rangle$ or $ \langle a^2, ab \rangle$ then $\Delta_{H, D_{2n}}^g$ is not connected if $g = 1$ and $\Delta_{H, D_{2n}}^g$ is connected with $\diam(\Delta_{H, D_{2n}}^g) = 1$ if $g \ne 1$. \end{enumerate} \end{theorem}
\begin{proof} Since $n$ is even, we have $g = a^{2i}$ for $1 \leq i \leq \frac{n}{2}$.
\noindent (a) \textbf{Case 1:} $g = 1$
We know that the vertices in $H$ commute with all the odd powers of $a$. That is, any vertex of the form $a^i \in D_{2n} \setminus H$, where $i$ is an odd integer, is not adjacent to any vertex in $\Delta_{H, D_{2n}}^g$. Hence, $\Delta_{H, D_{2n}}^g$ is not connected.
\noindent \textbf{Case 2:} $g \ne 1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is a complete graph. Also, the vertices in $H$ commute with all the odd powers of $a$. That is, a vertex of the form $a^i$, where $i$ is an odd integer, in $\Delta_{H, D_{2n}}^g$ is adjacent to all the vertices in $H$. We claim that at least one element of $H \setminus Z(H, D_{2n})$ is adjacent to all $a^rb$'s such that $1 \leq r \leq n$. Consider the following cases.
\noindent \textbf{Subcase 1:} $g^3 \neq 1$
If $[g, a^rb] = g$, i.e., $[a^{2i}, a^rb] = a^{2i}$ for all $1 \leq i \leq \frac{n}{2} - 1$ and $1 \leq r \leq n$ then we get $g = a^{2i} = 1$, a contradiction. If $[g, a^rb] = g^{-1}$, i.e., $[a^{2i}, a^rb] = a^{n-2i}$ for all $1 \leq i \leq \frac{n}{2} - 1$ and $1 \leq r \leq n$ then we get $g^3 = (a^{2i})^3 = a^{6i} = 1$, a contradiction. Therefore, $g$ is adjacent to all other vertices of the form $a^rb$ such that $1 \leq r \leq n$.
\noindent \textbf{Subcase 2:} $g^3 = 1$
If $[g, a^rb] = g^{-1}$, i.e., $[a^{2i}, a^rb] = a^{2i}$ then $[ga^{2}, a^rb] = g^{-1}a^{4}$ for all $1 \leq i \leq \frac{n}{2} - 1$ and $1 \leq r \leq n$. Now, if $g^{-1}a^{4} = g^{-1}$ then $a^4 = 1$, a contradiction since $a^n =1$ for $n \geq 8$. If $g^{-1}a^{4}= g$ then $a^{n-2i-4} = 1 $ for all $1 \leq i \leq \frac{n}{2} - 1$, which is a contradiction since $1 \leq i \leq \frac{n}{2} - 1$. Therefore, $ga^2$ is adjacent to all other vertices of the form $a^rb$ such that $1 \leq r \leq n$.
Thus there exists a vertex in $H \setminus Z(H, D_{2n})$ which is adjacent to all other vertices in $D_{2n}$. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent (b) \textbf{Case 1:} $g = 1$
We know that the vertices in $H$ commute with the vertex $a^\frac{n}{2}$. That is, the vertex $a^\frac{n}{2} \in D_{2n} \setminus H$ is not adjacent to any vertex in $\Delta_{H, D_{2n}}^g$. Hence, $\Delta_{H, D_{2n}}^g$ is not connected.
\noindent \textbf{Case 2:} $g \ne 1$
As shown in Case 2 of part (a), it can be seen that either $g$ or $ga^2$ is adjacent to all other vertices. Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 1$. \end{proof}
\begin{theorem} Consider the graph $\Delta_{H, D_{2n}}^g$, where $n (\geq 5)$ is odd. \begin{enumerate} \item If $H = \langle a \rangle$ then $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$. \item If $H = \langle a^rb \rangle$, where $1 \leq r \leq n$, then $\Delta_{H, D_{2n}}^g$ is connected with $\diam(\Delta_{H, D_{2n}}^g)$ $= 2$ if $g = 1$ and $\Delta_{H, D_{2n}}^g$ is not connected if $g \ne 1$. \end{enumerate} \end{theorem}
\begin{proof} Since $n$ is odd, we have $g = a^{i}$ for $1 \leq i \leq n - 1$.
\noindent (a) \textbf{Case 1:} $g = 1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is empty. Therefore, we need to examine the adjacency of these vertices with those in $D_{2n} \setminus H$. Suppose that $[a^rb, a^j]=1$ and $[b,a^j]=1$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$. Then $a^{2j} = a^{n}$ and so $j=\frac{n}{2}$, a contradiction. Therefore, for every integer $j$ such that $1 \leq j \leq n-1$, $a^j$ is adjacent to all the vertices in $D_{2n} \setminus H$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent \textbf{Case 2:} $g \ne 1$
Since $H$ is abelian, the induced subgraph of $\Delta_{H, D_{2n}}^g$ on $H \setminus Z(H, D_{2n})$ is a complete graph. Therefore, it is sufficient to prove that no vertex in $D_{2n} \setminus H$ is isolated. Since $n$ is odd we have $g \ne g^{-1}$. If $[a^rb, a^j]=g$ and $[b,a^j]=g$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$ then $j= \frac{i}{2}$ or $j= \frac{n+i}{2}$. If $[a^rb, a^j]=g^{-1}$ and $[b,a^j]=g^{-1}$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$ then $j= \frac{n-i}{2}$ or $j= n-\frac{i}{2}$. Therefore, there exists an integer $j$ such that $1 \leq j \leq n-1$ and $j \neq \frac{i}{2}, \frac{n+i}{2}, \frac{n-i}{2}\text{ and } n-\frac{i}{2}$ for which $a^j$ is adjacent to all other vertices in $D_{2n} \setminus H$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent (b) \textbf{Case 1:} $g = 1$
We have $[a^rb, a^j] \neq 1$ and $[b,a^j] \neq 1$ for all integers $r,j$ such that $1 \leq r,j \leq n-1$. So $a^rb$ is adjacent to $a^j$ for every integer $j$ such that $1 \leq j \leq n-1$. Also we have $[a^sb,a^rb] = a^{2(s-r)}$ for all integers $r,s$ such that $1 \leq r,s \leq n$. If $[a^sb,a^rb] = 1$ then $s=r$, as $s= \frac{n}{2}+r$ is not possible. Therefore, for all integers $r,s$ such that $1 \leq r,s \leq n$ and $s \neq r$, $a^sb$ is adjacent to $a^rb$. Thus $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
\noindent \textbf{Case 2:} $g \ne 1$
If $i$ is even then $[a^\frac{i}{2}, a^rb]=a^i=g$ and so the vertex $a^\frac{i}{2}$ remains isolated. If $i$ is odd then $n-i$ is even and we have $[a^\frac{n-i}{2}, a^rb]=a^{n-i}=g^{-1}$. Therefore, the vertex $a^\frac{n-i}{2}$ remains isolated. Hence, $\Delta_{H, D_{2n}}^g$ is not connected. \end{proof}
\begin{theorem} Consider the graph $\Delta_{H, D_{2n}}^g$, where $n (\geq 5)$ is odd. \begin{enumerate}
\item If $H = \langle a^d \rangle$, where $d|n$ and $o(a^d) = 3$, then $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$ if and only if $g = 1$.
\item If $H = \langle a^d, b\rangle$, $\langle a^d, ab\rangle$ or $\langle a^d, a^2b\rangle$, where $d|n$ and $o(a^d) = 3$, then $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$ if $g \neq 1, a^d, a^{2d}$.
\item If $H = \langle a^d, b\rangle$, where $d|n$ and $o(a^d) = 3$, then $\Delta_{H,D_{2n}}^g$ is connected and $\diam(\Delta_{H,D_{2n}}^g) = \begin{cases} 2, & \text{ if } g =1 \\ 3, & \text{ if } g = a^d \text{ or } a^{2d}. \end{cases}$ \end{enumerate} \end{theorem} \begin{proof} (a) Given $H = \{1, a^d, a^{2d}\}$. We have $[a^d, a^{2d}] = 1$, $[a^d, a^rb] = a^{2d}$ and $[a^{2d}, a^rb] = a^{4d} = a^d$ for all $r$ such that $1 \leq r \leq n$. Therefore, $g = 1, a^d$ or $a^{2d}$. If $g = a^d$ or $a^{2d}$ then $a^d \nsim a^rb$ and $a^{2d} \nsim a^rb$ for all $r$ such that $1 \leq r \leq n$. Thus $\Delta_{H, D_{2n}}^g$ is disconnected. If $g = 1$ then $a^d \sim a^rb$, $a^{2d} \sim a^rb$ and $a^d \sim a^rb \sim a^{2d}$ for all $r$ such that $1 \leq r \leq n$. Note that $a^d \nsim a^{2d}$. Hence, $\Delta_{H, D_{2n}}^g$ is connected with diameter $2$.
(b) If $g \neq 1, a^d, a^{2d}$ then $a^d$ is adjacent to all other vertices, as discussed in part (a). Hence, $\Delta_{H, D_{2n}}^g$ is connected and $\diam(\Delta_{H, D_{2n}}^g) = 2$.
(c) \textbf{Case 1:} $g =1$
Since $n$ is odd, we have $2i \ne n$ for all integers $i$ such that $1 \leq i \leq n-1$. Therefore, if $g = 1$ then $b$ is adjacent to all other vertices because $[a^i, b] = a^{2i}$ and $[a^rb, b] = a^{2r}$ for all integers $i, r$ such that $1 \leq i, r \leq n-1$. Hence, $\Delta_{H,D_{2n}}^g$ is connected and $\diam(\Delta_{H,D_{2n}}^g) =2$.
\noindent\textbf{Case 2:} $g = a^d$ or $a^{2d}$
Since $[a^d, a^{2d}]=1$ we have $a^d \sim a^{2d}$. Also, all the vertices of the form $a^i$ commute among themselves, where $1 \leq i \leq n-1$. Therefore, $a^d \sim a^i \sim a^{2d}$ for all $1 \leq i \leq n-1$ such that $i \ne d, 2d$. Again, $[a^i, a^rb] = a^{2i} = [a^i, b]$ for all $1 \leq i, r \leq n-1$. If $[a^i, a^rb] = a^d$ or $a^{2d}$ for all $1 \leq r \leq n$, then $i = 2d$ or $d$ respectively. Therefore, $a^d \sim a^i \sim b$, $ a^d \sim a^i \sim a^db$, $a^d \sim a^i \sim a^{2d}b$, $a^{2d} \sim a^i \sim b$, $ a^{2d} \sim a^i \sim a^db$ and $a^{2d} \sim a^i \sim a^{2d}b$ for all $1 \leq i \leq n-1$ such that $i \ne d, 2d$. If $[a^rb, b] = a^d$ or $a^{2d}$ for all $1 \leq r \leq n-1$, then $a^{2r} = a^d$ or $a^{2d}$; which gives $r = 2d$ or $d$ respectively. Therefore, $a^d \sim a^i \sim b \sim a^rb$, $ a^{2d} \sim a^i \sim b \sim a^rb$, $a^db \sim a^i \sim b \sim a^rb$ and $a^{2d}b \sim a^i \sim b \sim a^rb$ for all $1 \leq i, r \leq n-1$ such that $i, r \ne d, 2d$. Hence, $\Delta_{H,D_{2n}}^g$ is connected and $\diam(\Delta_{H,D_{2n}}^g) =3$. \end{proof}
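The $o(a^d)=3$ cases can likewise be checked mechanically. The sketch below is illustrative only: it again encodes $a^k b^f \in D_{18}$ as a pair $(k,f)$ and assumes, as in the earlier snippets, that the vertex set of $\Delta_{H,G}^g$ is $G \setminus Z(G)$ with adjacency requiring one endpoint in $H$ and a commutator different from $g$ and $g^{-1}$; it verifies the disconnectedness claim of part (a) and the connectivity claim of part (c) for $n = 9$, $d = 3$, $g = a^d$.

```python
from collections import deque
from itertools import combinations

def mul(x, y, n):
    (k, f), (j, e) = x, y
    return ((k + (j if f == 0 else -j)) % n, (f + e) % 2)

def inv(x, n):
    k, f = x
    return ((-k) % n, 0) if f == 0 else (k, 1)

def comm(x, y, n):
    return mul(mul(inv(x, n), inv(y, n), n), mul(x, y, n), n)

def graph(H, g, n):
    G = [(k, f) for k in range(n) for f in (0, 1)]
    Z = [z for z in G if all(mul(z, x, n) == mul(x, z, n) for x in G)]
    E = {v: set() for v in G if v not in Z}
    for u, v in combinations(E, 2):
        if (u in H or v in H) and comm(u, v, n) not in (g, inv(g, n)):
            E[u].add(v); E[v].add(u)
    return E

def connected(E):
    start = next(iter(E))
    seen, q = {start}, deque([start])
    while q:
        for w in E[q.popleft()]:
            if w not in seen:
                seen.add(w); q.append(w)
    return len(seen) == len(E)

n, d = 9, 3                                   # D_18, o(a^d) = 3
g = (d, 0)                                    # g = a^d
Ha = [(0, 0), (3, 0), (6, 0)]                 # H = <a^d>
assert not connected(graph(Ha, g, n))         # part (a): disconnected
Hb = Ha + [(0, 1), (3, 1), (6, 1)]            # H = <a^d, b>
assert connected(graph(Hb, g, n))             # part (c): connected
```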
\section*{Acknowledgment}
The first author would like to thank DST for the INSPIRE Fellowship.
\end{document} | arXiv |
The Journal of Real Estate Finance and Economics
November 2016, Volume 53, Issue 4, pp 419–449
Episodes of Exuberance in Housing Markets: In Search of the Smoking Gun
Efthymios Pavlidis
Alisa Yusupova
Ivan Paya
David Peel
Enrique Martínez-García
Adrienne Mack
Valerie Grossman
In this paper, we examine changes in the time series properties of three widely used housing market indicators (real house prices, price-to-income ratios, and price-to-rent ratios) for a large set of countries to detect episodes of explosive dynamics. Dating such episodes of exuberance in housing markets provides a timeline as well as empirical content to the narrative connecting housing exuberance to the global 2008–09 recession. For our empirical analysis, we employ two recursive univariate unit root tests recently developed by Phillips and Yu (International Economic Review 52(1):201–226, 2011) and Phillips et al. (2015). We also propose a novel extension of the test developed by Phillips et al. (2015) to a panel setting in order to exploit the large cross-sectional dimension of our international dataset. Statistically significant periods of exuberance are found in most countries. Moreover, we find strong evidence of the emergence of an unprecedented period of exuberance in the early 2000s that eventually collapsed around 2006–07, preceding the 2008–09 global recession. We examine whether macro and financial variables help to predict (in-sample) episodes of exuberance in housing markets. Long-term interest rates, credit growth and global economic conditions are found to be among the best predictors. We conclude that global factors (partly) explain the synchronization of exuberance episodes that we detect in the data in the 2000s.
Keywords: House prices · Mildly explosive time series · Sup ADF test · Generalized sup ADF test · Panel GSADF · Probit model
The International House Price Database can be accessed online at http://www.dallasfed.org/institute/houseprice/index.cfm. An earlier version of the paper circulated under the title "Monitoring Housing Markets for Periods of Exuberance. An Application of the Phillips et al. (2012, 2013) GSADF Test on the Dallas Fed International House Price Database."
JEL Classification: C22 · G12 · R30 · R31
We would like to thank María Teresa Martínez García and Itamar Caspi for providing helpful assistance and suggestions. We acknowledge the support of the Federal Reserve Bank of Dallas. All remaining errors are ours alone. The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Bank of Dallas or the Federal Reserve System.
Appendix A: Demand Equation for Rental Housing
Consider the maximization of the Stone-Geary utility function with housing units rented, $H_{t}$, and consumption of other goods, $C_{t}$, i.e.,
$$U\left( H_{t},C_{t}\right) =\left( H_{t}-\theta_{H}\right)^{\alpha} \left( C_{t}-\theta_{C}\right)^{1-\alpha}, 0<\alpha<1, $$
subject to the intratemporal budget constraint,
$$C_{t}+x_{t}H_{t}=Y_{t}, $$
where the price of the consumption good is normalized to 1. $X_{t}\equiv x_{t}H_{t}$ denotes the housing rents (rental expenditures) paid and $x_{t}$ the rental rate per unit rented, $Y_{t}$ refers to disposable income, and $\alpha$, $\theta_{H}$ and $\theta_{C}$ are preference parameters.
From first-order conditions, the Stone-Geary utility function subject to the standard intratemporal budget constraint gives a linear expenditure system where the demand for rental housing takes the following form:
$$ H_{t}=\theta_{H}+\frac{\alpha}{x_{t}}\left( Y_{t}-x_{t}\theta_{H}-\theta_{C}\right), $$
or in expenditure terms,
$$ X_{t}\equiv x_{t}H_{t}=\alpha Y_{t}+\left( 1-\alpha\right) \theta_{H} x_{t}-\alpha\theta_{C}. $$
Under the assumption that in equilibrium the units rented are constant (i.e., $H_t = H$) and normalized to 1, the demand equation that determines housing rents in Eq. 39 reduces to an affine transformation of disposable income ($Y_t$), i.e.,
$$ X_{t}=x_{t}=\theta_{F}+\delta Y_{t}, $$
where $\delta \equiv \frac{\alpha}{1-\left(1-\alpha\right)\theta_{H}}$ and $\theta_{F} \equiv -\frac{\alpha}{1-\left(1-\alpha\right)\theta_{H}}\theta_{C}$.
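The algebra in this appendix can be checked mechanically. The sketch below (symbol names are ours, not the paper's) uses sympy to derive the rental housing demand from the tangency and budget conditions and to confirm the affine rent equation:

```python
import sympy as sp

H, C, x, Y, alpha, tH, tC = sp.symbols('H C x Y alpha theta_H theta_C', positive=True)

# First-order conditions of maximizing (H - tH)^alpha (C - tC)^(1-alpha)
# subject to C + x H = Y reduce to the tangency condition plus the budget line.
tangency = sp.Eq(alpha * (C - tC), x * (1 - alpha) * (H - tH))
budget = sp.Eq(C + x * H, Y)

sol = sp.solve([tangency, budget], [H, C], dict=True)[0]
H_star = sp.simplify(sol[H])

# Claimed demand: H = theta_H + (alpha/x) (Y - x theta_H - theta_C)
claim = tH + (alpha / x) * (Y - x * tH - tC)
assert sp.simplify(H_star - claim) == 0

# With units rented normalized to 1, rents are affine in income:
# x = theta_F + delta Y with delta = alpha / (1 - (1 - alpha) theta_H)
x_star = sp.solve(sp.Eq(x, alpha * Y + (1 - alpha) * tH * x - alpha * tC), x)[0]
delta = alpha / (1 - (1 - alpha) * tH)
assert sp.simplify(x_star - (delta * Y - delta * tC)) == 0
```

The last check also confirms the stated definitions of $\delta$ and $\theta_F = -\delta\theta_C$.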
Appendix B: The Panel GSADF Test
The bootstrap procedure consists of the following steps:
1. For each country, impose the null hypothesis of a unit root and fit the restricted ADF regression equation,
$${\Delta} y_{i,t}=a_{i,r_{1},r_{2}}+ \sum\nolimits_{j=1}^{k} \psi_{i,r_{1},r_{2}}^{j}{\Delta} y_{i,t-j}+\epsilon_{i,t}, $$
to obtain coefficient estimates ($\widehat{a}_{i,r_{1},r_{2}}$ and $\widehat{\psi}_{i,r_{1},r_{2}}^{j}$ for $j=1,\dots,k$) and residuals ($\widehat{\epsilon}_{i,t}$).
2. Create a residual matrix with typical element $\widehat{\epsilon}_{t,i}$.
3. In order to preserve the covariance structure of the error term, generate bootstrap residuals, $\epsilon_{i,t}^{b}$, by sampling with replacement draws from the residual matrix.
4. Use the bootstrap residuals and the estimated coefficients to recursively generate bootstrap samples for first differences,
$${\Delta} y_{i,t}^{b}=\widehat{a}_{i,r_{1},r_{2}}+ \sum\nolimits_{j=1}^{k} \widehat{\psi}_{i,r_{1},r_{2}}^{j}{\Delta} y_{i,t-j}^{b}+\epsilon_{i,t}^{b}, $$
and for levels,
$$y_{i,t}^{b}= {\displaystyle\sum\nolimits_{p=1}^{t}} {\Delta} y_{i,p}^{b}. $$
5. Compute the sequence of panel BSADF statistics and the panel GSADF statistic for $y_{i,t}^{b}$.
6. Repeat steps (3) to (5) a large number of times to obtain the empirical distribution of the test statistics under the null of a unit root.
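Steps 3 and 4 are the heart of the procedure. A minimal numpy sketch, assuming a single lag (k = 1) and taking the estimated coefficients and residual matrix as given, might look as follows; it illustrates the resampling scheme and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_panel(a_hat, psi_hat, resid, n_boot_obs):
    """Steps 3-4: resample residual rows (preserving cross-country
    covariance) and recursively rebuild first differences and levels.

    a_hat, psi_hat : (N,) arrays of per-country intercepts and AR(1)
                     coefficients from the restricted ADF regression (k = 1)
    resid          : (T, N) matrix of fitted residuals
    """
    T, N = resid.shape
    # Step 3: sample whole time periods (rows) with replacement so each
    # period's residuals across countries stay together.
    rows = rng.integers(0, T, size=n_boot_obs)
    eps_b = resid[rows, :]

    # Step 4: recursively generate bootstrap first differences ...
    dy_b = np.zeros((n_boot_obs, N))
    for t in range(n_boot_obs):
        lag = dy_b[t - 1] if t > 0 else np.zeros(N)
        dy_b[t] = a_hat + psi_hat * lag + eps_b[t]
    # ... and cumulate to levels, imposing the unit-root null.
    y_b = np.cumsum(dy_b, axis=0)
    return y_b
```

Sampling whole rows of the residual matrix is what preserves the contemporaneous covariance structure mentioned in step 3; the panel BSADF/GSADF statistics of step 5 would then be computed on each `y_b` draw.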
Adams, Z., & Fuss, R. (2010). Macroeconomic determinants of international housing markets. Journal of Housing Economics, 19(1), 38–50.
Agnello, L., & Schuknecht, L. (2011). Booms and busts in housing markets: determinants and implications. Journal of Housing Economics, 20(3), 171–190.
André, C., Gil-Alana, L.A., & Gupta, R. (2014). Testing for persistence in housing price-to-income and price-to-rent ratios in 16 OECD countries. Applied Economics, 46(18), 2127–2138.
Berkovec, J., Chang, Y., & McManus, D.A. (2012). Alternative lending channels and the crisis in U.S. housing markets. Real Estate Economics, 40(s1), S8–S31.
Bernanke, B.S. (2005). The global saving glut and the U.S. current account deficit. At the Sandridge Lecture. Richmond, Virginia: Virginia Association of Economists. http://www.federalreserve.gov/boarddocs/speeches/2005/200503102/.
Bhargava, A. (1986). On the theory of testing for unit roots in observed time series. Review of Economic Studies, 53(3), 369–384.
Blanchard, O.J. (1979). Speculative bubbles, crashes and rational expectations. Economics Letters, 3(4), 387–389.
Blanchard, O.J., & Watson, M.W. (1982). Bubbles, rational expectations, and financial markets. In Wachtel, P. (Ed.) Crises in the Economic and Financial Structure (pp. 295–315). Lexington, MA: Lexington Books.
Busetti, F., & Taylor, A.M.R. (2004). Tests of stationarity against a change in persistence. Journal of Econometrics, 123, 33–66.
Campbell, J.Y., & Shiller, R.J. (1987). Cointegration and tests of present value models. Journal of Political Economy, 95(5), 1062–1088.
Campbell, J.Y., & Shiller, R.J. (1988). The dividend-price ratio and expectations of future dividends and discount factors. Review of Financial Studies, 1(3), 195–228.
Campbell, J.Y., Lo, A.W., & MacKinlay, A.C. (1997). The Econometrics of Financial Markets. Princeton, NJ: Princeton University Press.
Capozza, D.R., Hendershott, P.H., & Mack, C. (2004). An anatomy of price dynamics in illiquid markets: analysis and evidence from local housing markets. Real Estate Economics, 32(1), 1–32.
Case, K.E., & Shiller, R.J. (2003). Is there a bubble in the housing market? Brookings Papers on Economic Activity, 34(2), 299–362.
Chang, Y. (2004). Bootstrap unit root tests in panels with cross-sectional dependency. Journal of Econometrics, 120(2), 263–293.
Chen, S.-S. (2009). Predicting the bear stock market: macroeconomic variables as leading indicators. Journal of Banking & Finance, 33(2), 211–223.
Clayton, J. (1996). Rational expectations, market fundamentals and housing price volatility. Real Estate Economics, 24(4), 441–470.
Diba, B.T., & Grossman, H.I. (1988). Explosive rational bubbles in stock prices? American Economic Review, 78(3), 520–530.
Engsted, T., Pedersen, T.Q., & Tanggaard, C. (2012). The log-linear return approximation, bubbles, and predictability. Journal of Financial and Quantitative Analysis, 47(3), 643–665.
Evans, G.W. (1991). Pitfalls in testing for explosive bubbles in asset prices. American Economic Review, 81(4), 922–930.
Flood, R.P., & Hodrick, R.J. (1990). On testing for speculative bubbles. Journal of Economic Perspectives, 4(2), 85–101.
Girouard, N., Kennedy, M., Van den Noord, P., & André, C. (2006). Recent house price developments: the role of fundamentals. OECD Economics Department Working Papers, No. 475. Paris: OECD Publishing.
Gordon, M.J., & Shapiro, E. (1956). Capital equipment analysis: the required rate of profit. Management Science, 3(1), 102–110.
Grossman, V., Mack, A., & Martínez-García, E. (2014). A new database of global economic indicators. The Journal of Economic and Social Measurement, 39(3), 163–197.
Hiebert, P., & Sydow, M. (2011). What drives returns to euro area housing? Evidence from a dynamic dividend-discount model. Journal of Urban Economics, 70(2–3), 88–98.
Himmelberg, C., Mayer, C., & Sinai, T. (2005). Assessing high house prices: bubbles, fundamentals and misperceptions. Journal of Economic Perspectives, 19(4), 67–92.
Homm, U., & Breitung, J. (2012). Testing for speculative bubbles in stock markets: a comparison of alternative methods. Journal of Financial Econometrics, 10(1), 198–231.
Hott, C., & Monnin, P. (2008). Fundamental real estate prices: an empirical estimation with international data. Journal of Real Estate Finance and Economics, 36(4), 427–450.
Hwang, M., & Quigley, J.M. (2006). Economic fundamentals in local housing markets: evidence from U.S. metropolitan regions. Journal of Regional Science, 46(3), 425–453.
Im, K.S., Pesaran, M.H., & Shin, Y. (2003). Testing for unit roots in heterogeneous panels. Journal of Econometrics, 115(1), 53–74.
Kilian, L. (2009). Not all oil price shocks are alike: disentangling demand and supply shocks in the crude oil market. American Economic Review, 99(3), 1053–1069.
Kim, J.-Y. (2000). Detection of change in persistence of a linear time series. Journal of Econometrics, 95(1), 97–116.
Kim, J.-Y., Belaire-Franch, J., & Badillo Amador, R. (2002). Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics, 109(2), 389–392.
Lane, P.R., & Milesi-Ferretti, G.M. (2003). International financial integration. IMF Staff Papers, 50 (Special Issue), 82–113.
Lee, J.H., & Phillips, P.C.B. (2011). Asset pricing with financial bubble risk. Working paper, Yale University.
LeRoy, S.F. (1981). The present-value relation: tests based on implied variance bounds. Econometrica, 49, 555–577.
LeRoy, S.F. (2004). Rational exuberance. Journal of Economic Literature, 42(3), 783–804.
Mack, A., & Martínez-García, E. (2011). A cross-country quarterly database of real house prices: a methodological note. Globalization and Monetary Policy Institute Working Papers, No. 99. Federal Reserve Bank of Dallas.
Mack, A., & Martínez-García, E. (2012). Increased real house price volatility signals break from Great Moderation. Federal Reserve Bank of Dallas Economic Letter, 7(1).
Maddala, G.S., & Wu, S. (1999). A comparative study of unit root tests with panel data and a new simple test. Oxford Bulletin of Economics and Statistics, 61(S1), 631–652.
Magdalinos, T. (2012). Mildly explosive autoregression under weak and strong dependence. Journal of Econometrics, 169(2), 179–187.
Mayer, C. (2011). Housing bubbles: a survey. Annual Review of Economics, 3, 559–577.
Mian, A., & Sufi, A. (2009). The consequences of mortgage credit expansion: evidence from the U.S. mortgage default crisis. The Quarterly Journal of Economics, 124(4), 1449–1496.
Mikhed, V., & Zemcik, P. (2009a). Do house prices reflect fundamentals? Aggregate and panel data evidence. Journal of Housing Economics, 18(2), 140–149.
Mikhed, V., & Zemcik, P. (2009b). Testing for bubbles in housing markets: a panel data approach. Journal of Real Estate Finance and Economics, 38(4), 366–386.
Ng, S., & Perron, P. (1995). Unit root tests in ARMA models with data-dependent methods for the selection of the truncation lag. Journal of the American Statistical Association, 90(429), 268–281.
Ng, S., & Perron, P. (2001). Lag length selection and the construction of unit root tests with good size and power. Econometrica, 69(6), 1519–1554.
Nyberg, H. (2013). Predicting bear and bull stock markets with dynamic binary time series models. Journal of Banking & Finance, 37(9), 3351–3363.
Pavlov, A., & Wachter, S. (2011). Subprime lending and real estate prices. Real Estate Economics, 39(1), 1–17.
Phillips, P.C.B., & Magdalinos, T. (2007a). Limit theory for moderate deviations from a unit root. Journal of Econometrics, 136(1), 115–130.
Phillips, P.C.B., & Magdalinos, T. (2007b). Limit theory for moderate deviations from a unit root under weak dependence. In Phillips, G.D.A., & Tzavalis, E. (Eds.) The Refinement of Econometric Estimation and Test Procedures: Finite Sample and Asymptotic Analysis (pp. 123–162). Cambridge: Cambridge University Press.
Phillips, P.C.B., & Yu, J. (2011). Dating the timeline of financial bubbles during the subprime crisis. Quantitative Economics, 2(3), 455–491.
Phillips, P.C.B., Shi, S.-P., & Yu, J. (2012). Testing for multiple bubbles. Cowles Foundation Discussion Papers, No. 1843. Cowles Foundation for Research in Economics, Yale University.
Phillips, P.C.B., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s Nasdaq: when did exuberance escalate asset values? International Economic Review, 52(1), 201–226.
Phillips, P.C.B., Shi, S.-P., & Yu, J. (2015). Testing for multiple bubbles: historical episodes of exuberance and collapse in the S&P 500. International Economic Review, forthcoming.
Rousová, L., & Van den Noord, P. (2011). Predicting peaks and troughs in real house prices. OECD Economics Department Working Papers, No. 882. Paris: OECD Publishing.
Sargent, T.J. (1987). Macroeconomic Theory (2nd ed.). Boston: Academic Press.
Shiller, R.J. (1981). Do stock prices move too much to be justified by subsequent changes in dividends? American Economic Review, 71(3), 421–436.
Shiller, R.J. (2015). Irrational Exuberance (Revised and expanded 3rd ed.). Princeton, NJ: Princeton University Press.
West, K.D. (1987). A specification test for speculative bubbles. Quarterly Journal of Economics, 102(3), 553–580.
Yiu, M.S., Yu, J., & Jin, L. (2013). Detecting bubbles in Hong Kong residential property market. Journal of Asian Economics, 28, 115–124.
© Springer Science+Business Media New York 2015
1. Lancaster University Management School, Lancaster, UK
2. Federal Reserve Bank of Dallas, Dallas, USA
3. Mutual of Omaha, Omaha, USA
Pavlidis, E., Yusupova, A., Paya, I. et al. J Real Estate Finan Econ (2016) 53: 419. https://doi.org/10.1007/s11146-015-9531-2
Online ISSN 1573-045X
Differential topology versus differential geometry
I have just finished my undergraduate studies. During the last two semesters, I have taken two subjects dealing with manifolds:
Analysis on manifolds, containing: definition of manifold, tangent space (as derivations and classes of curves), vector fields, vector bundles, flows, Lie derivatives, integration on manifolds (Stokes theorem), forms, Hodge decomposition theorem
Introduction to differential geometry (for me it should be called introduction to Riemannian manifolds), containing: tensor calculus introduction, Riemannian manifold definition, connections, curvatures, geodesics, normal coordinates, geodesic completeness theorem, classification through curvature, Jacobi fields, harmonic maps.
Now, for me differential geometry was/is a theory about manifolds, so anything dealing with manifolds is a branch of differential geometry. On the other hand, I am preparing to take part in a local conference called "algebraic and differential topology". I have read: I - book references, II - book references, III - Wikipedia, yet I still don't really know if differential topology is a subtheory of differential geometry or a separate theory, and how it is located between differential geometry and algebraic topology. For example, I expect that studies of Riemannian manifolds are part of differential geometry, but would the problem of classifying manifolds up to diffeomorphism be a part of differential topology or geometry?
Request: I would be grateful for your characterisation of differential topology and differential geometry, possibly with examples of problems and theorems studied in each.
differential-geometry soft-question differential-topology
J.E.M.S
$\begingroup$ Differential topology deals with the study of differential manifolds without using tools related to a metric: curvature, affine connections, etc. Differential geometry is the study of these geometric objects in a manifold. The thing is that in order to study differential geometry you need to know the basics of differential topology. I don't know exactly where the line between them is drawn, but they clearly overlap without one being a subtheory of the other. $\endgroup$ – hjhjhj57 Jul 5 '15 at 22:39
$\begingroup$ A counterexample to "anything dealing with manifolds is a branch of differential geometry" is that there are topological manifolds that cannot be given the structure of a differentiable manifold, so differential geometry doesn't really apply to those manifolds. $\endgroup$ – Matt Samuel Jul 5 '15 at 22:58
$\begingroup$ Try reading the introduction of this book, which I used as a graduate student: amazon.com/Differential-Topology-Graduate-Texts-Mathematics/dp/… (The introduction pages are available in the preview.) $\endgroup$ – Simon S Jul 5 '15 at 23:14
$\begingroup$ look at the preliminary ideas at en.wikipedia.org/wiki/… $\endgroup$ – janmarqz Jul 5 '15 at 23:58
$\begingroup$ @janmarqz actually I've cited it in my post. $\endgroup$ – J.E.M.S Jul 6 '15 at 9:19
First of all, the concept of a "manifold" is certainly not exclusive to differential geometry. Manifolds are one of the basic objects of study in topology, and are also used extensively in complex analysis (in the form of Riemann surfaces and complex manifolds) and algebraic geometry (in the form of varieties).
Within topology, manifolds can be studied purely as topological spaces, but it is also common to consider manifolds with either a piecewise-linear or differentiable structure. The topological study of piecewise-linear manifolds is sometimes called piecewise-linear topology, and the topological study of differentiable manifolds is sometimes called differential topology.
I'm not sure I would necessarily describe these as distinct subfields of topology -- they are more like points of view towards geometric topology, and for the most part one can study the same geometric questions from each of the three main points of view. However, there are questions that only make sense from one of these points of view, e.g. the classification of exotic spheres, and there are certainly topology researchers who specialize in either piecewise-linear or differentiable methods. Differential topology can be found in position 57Rxx on the 2010 Math Subject Classification.
Differential geometry, on the other hand, is a major field of mathematics with many subfields. It is concerned primarily with additional structures that one can put on a smooth manifold, and the properties of such structures, as well as notions such as curvature, metric properties, and differential equations on manifolds. It corresponds to the heading 53-XX on the MSC 2010, and the MSC divides differential geometry into four large subfields:
Classical differential geometry, i.e. the study of the geometry of curves and surfaces in $\mathbb{R}^2$ and $\mathbb{R}^3$, and more generally submanifolds of $\mathbb{R}^n$.
Local differential geometry, which studies Riemannian manifolds (and manifolds with similar structures) from a local point of view.
Global differential geometry, which studies Riemannian manifolds (and manifolds with similar structures) from a global point of view.
Symplectic and contact geometry, which studies manifolds that have certain rich structures that are significantly different from a Riemannian structure.
As a general rule, anything that requires a Riemannian metric is part of differential geometry, while anything that can be done with just a differentiable structure is part of differential topology. For example, the classification of smooth manifolds up to diffeomorphism is part of differential topology, while anything that involves curvature would be part of differential geometry.
Jim Belk
$\begingroup$ I don't know if the MSC does this, but it'd probably also be fair to add a section for complex differential geometry, which tends to be both similar and differently flavored than both symplectic and Riemannian geometry. (Also, I'm not even sure your last sentence is completely fair! Is studying what manifolds support metrics of positive scalar curvature - which turns out to be equivalent in certain dimensions to the vanishing of a (modified) $\hat{A}$-genus - geometry, and not topology?) $\endgroup$ – user98602 Jul 6 '15 at 0:09
$\begingroup$ @MikeMiller The MSC has a few sub-headings under both local and global differential geometry regarding complex manifolds as well as a separate "Analytic Spaces" sub-heading under the separate "Several Complex Variables and Analytic Spaces". Regarding your latter comment, I would say that my last sentence is broadly true, though there are certainly exceptions. It's hard to be completely fair when making these sorts of sweeping statements! $\endgroup$ – Jim Belk Jul 6 '15 at 2:44
$\begingroup$ Sure! I broadly agree with your answer - I just wanted to mention that there's subtlety all the way down, so to speak. $\endgroup$ – user98602 Jul 6 '15 at 3:17
$\begingroup$ I find your answer very helpful and I am grateful for it! $\endgroup$ – J.E.M.S Jul 6 '15 at 9:28
The basic objects in differential geometry are manifolds endowed with a metric, which is essentially a way of measuring length of vectors. A metric gives rise to notions of distance, angle, area, volume, curvature, straightness and geodesics. It is the presence of a metric that distinguishes geometry from topology. However, another concept that might contest for the primacy of a metric in differential geometry is that of a connection. A connection in a vector bundle may be thought of as a way of differentiating sections of the vector bundle. A metric determines a unique connection called the Riemannian connection with certain desirable properties. While a connection is not as intuitive as a metric, it already gives rise to curvature and geodesics. With this, the connection can also lay claim to being a fundamental notion of differential geometry.
This is from the preface of Loring W. Tu's book titled "Differential Geometry: Connections, Curvature, and Characteristic Classes".
Praphulla Koushik
Is $(-1)^{1/8} + (-1)^{7/8}$ ever a value whose real component is $0$?
$$(-1)^{1/8} + (-1)^{7/8}$$
ever a value whose real component is $0$?
Is this ever true in modular arithmetic, hypercomplexes, and/or both?
modular-arithmetic intuition number-systems
Matt Groff
$\begingroup$ If a number has a "real component", that usually implies it also has an imaginary component. In modular arithmetic there's no such thing as a "real component". $\endgroup$ – Joe Z. Apr 16 '14 at 18:05
If I look in the complex plane at the eight numbers that when raised to the eighth power form $-1$, they are $\exp(\frac {2k+1}8\pi i)$ for $0 \le k \le 7, k$ integer. The seventh powers of these are the same set. I can certainly pick two that have a real part of their sum being $0$, for example $\exp(\frac {1}8\pi i)$ and $\exp(\frac {-1}8\pi i)=(\exp(\frac {9}8\pi i))^7$
Working modulo $2$, the eighth root of $-1$ is $1$. The seventh power of $1$ is $1$ and $1+1=0 \pmod 2$.
Ross Millikan
$\begingroup$ LOL at the mod-2 solution. $\endgroup$ – Joe Z. Apr 16 '14 at 17:52
$\begingroup$ @Joe Z: OP did ask about modular solutions. I tried some others first, but they didn't work. $\endgroup$ – Ross Millikan Apr 16 '14 at 18:21
$\begingroup$ That's an interesting problem, actually. Are there any other systems besides $\mathbb{Z}_2$ where there exists that root? $\endgroup$ – Joe Z. Apr 16 '14 at 18:29
I'm pretty sure that in just the plain old complex numbers, $Re((-1)^{1/8} + (-1)^{7/8})$ is $\cos(\frac 18 \pi) + \cos(\frac 78 \pi) = 0$.
In fact, in general, if $a+b=1$, $(−1)^a+(−1)^b$ will have a real component of $0$.
However, you seem to be focusing on modular arithmetic and number systems, so I'll try to give you an answer in modular arithmetic as well.
In $\mathbb{Z}_n$ (the numbers modulo $n$), $(-1)^{1/8}$ doesn't really mean anything the way we usually talk about it, so we need to redefine it from first principles.
When we talk about $n^{1/k}$ or $\sqrt[k]n$, really what we mean is some number $m$ so that $m^k = n$.
So we define $(-1)^{1/8}$ as some number $m$ such that $m^8 \equiv -1 \pmod n$. Note that this number might not always exist — for example, in $\mathbb{Z}_3$, neither $1$ nor $2$ has an eighth power of $-1$. And there might be more than one — say, in $\mathbb{Z}_{17}$, the numbers $3, 5, 6, 7, 10, 11, 12, 14$ are all possible values of $(-1)^{1/8}$.
So we have $(-1)^{1/8}$, now we need $(-1)^{7/8}$. Well, going by the rules of exponents, we can just do $(-1)^{7/8} = ((-1)^{1/8})^7$, so supposing you still have that $m$ from above, $m^7$ will be your $(-1)^{7/8}$.
To recap, this means that in $\mathbb{Z}_n$, we're looking for a number $m$ that satisfies both $m^8 = -1$ and $m + m^7 = 0$, which fits your expression of $(-1)^{1/8} + (-1)^{7/8} = 0$.
As Ross Millikan pointed out in his answer, $\mathbb{Z}_2$ is one system in which this works — the number $1$ has $1^8 = 1$ and $1 + 1^7 = 0$.
But unfortunately, in $\mathbb{Z}_{17}$, none of those eight numbers we listed above satisfy both these equations. ($4$ and $13$ actually satisfy the latter one, but they're not eighth roots of $-1$.)
Try other values of $n$ and see in which ones a suitable $m$ exists.
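Following the suggestion above, a brute-force search over small moduli is easy to write. The helper below (name and scope are ours) looks for an $m$ that is simultaneously an eighth root of $-1$ and a solution of $m + m^7 \equiv 0$:

```python
def special_moduli(limit):
    """Return {n: [m, ...]} for 2 <= n <= limit where some m satisfies
    both m**8 = -1 (mod n) and m + m**7 = 0 (mod n)."""
    hits = {}
    for n in range(2, limit + 1):
        ms = [m for m in range(n)
              if pow(m, 8, n) == n - 1 and (m + pow(m, 7, n)) % n == 0]
        if ms:
            hits[n] = ms
    return hits
```

The search finds only $n = 2$ up to any small limit, and a little algebra explains why: $m^8 \equiv -1$ makes $m$ invertible, so multiplying $m + m^7 \equiv 0$ by $m$ gives $m^2 + m^8 \equiv 0$, i.e. $m^2 \equiv 1$, hence $m^8 \equiv 1 \equiv -1$, which forces $n \mid 2$.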
Joe Z.
If we are assuming that $(-1)^{7/8}=\left[(-1)^{1/8}\right]^7$, then no matter which eighth root of $-1$ we take, we get $$ \mathrm{Re}\left[e^{i\pi(2k+1)/8}+e^{i\pi(14k+7)/8}\right]=0\tag{1} $$ The exponents sum to $i(2k+1)\pi$ which means the real part of the sum in $(1)$ is $0$ since $$ \cos(x)+\cos(\pi-x)=0\tag{2} $$ Thus, no matter which eighth root of $-1$ we take, we get that $$ \mathrm{Re}\left[(-1)^{1/8}+(-1)^{7/8}\right]=0\tag{3} $$
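A quick numerical check of this claim, iterating over all eight branches of $(-1)^{1/8}$:

```python
import cmath

# the eight eighth-roots of -1: exp(i pi (2k+1)/8), k = 0..7
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 8) for k in range(8)]

real_parts = []
for m in roots:
    assert abs(m ** 8 - (-1)) < 1e-9       # each really is an eighth root of -1
    real_parts.append((m + m ** 7).real)   # (-1)^{1/8} + (-1)^{7/8} on this branch
```

Every entry of `real_parts` is zero up to floating-point error, matching Eq. (3).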
robjohn♦
US10014975B2 - Channel carrying multiple digital subcarriers - Google Patents
Inventors: David James Krause, Han Sun, Yuejian Wu, John D. McNicol, Kuang-Tsan Wu
Infinera Corp
2012-09-28: Application filed by Infinera Corp
2014-07-22: Assigned to Infinera Corporation (assignment of assignors' interest; assignors: McNicol, John D.; Sun, Han H.; Krause, David James; Wu, Kuang-Tsan; Wu, Yuejian)
CPC classification:
- H04J: Multiplex communication
- H04J 14/00: Optical multiplex systems
- H04J 14/02: Wavelength-division multiplex systems
- H04J 14/0298: Wavelength-division multiplex systems with sub-carrier multiplexing [SCM]
An optical system includes a transmitter module and/or a receiver module. The transmitter module is configured to receive input data, map the input data to a set of subcarriers associated with an optical communication channel, independently apply spectral shaping to each of the subcarriers, generate input values based on the spectral shaping of each of the subcarriers, generate voltage signals based on the input values, modulate light based on the voltage signals to generate an output optical signal that includes the subcarriers, and output the output optical signal. The receiver module is configured to receive the output optical signal, convert the output optical signal to a set of voltage signals, generate digital samples based on the set of voltage signals, independently process the digital samples for each of the subcarriers, map the processed digital samples to produce output data, and output the output data.
Wavelength division multiplexed (WDM) optical communication systems (referred to as "WDM systems") are systems in which multiple optical signals, each having a different wavelength, are combined onto a single optical fiber using an optical multiplexer circuit (referred to as a "multiplexer"). Such systems may include a transmitter circuit, such as a transmitter (Tx) photonic integrated circuit (PIC) having a transmitter component to provide a laser associated with each wavelength, a modulator configured to modulate the output of the laser, and a multiplexer to combine each of the modulated outputs (e.g., to form a combined output or WDM signal).
A WDM system may also include a receiver circuit having a receiver (Rx) PIC and an optical demultiplexer circuit (referred to as a "demultiplexer") configured to receive the combined output and demultiplex the combined output into individual optical signals. Additionally, the receiver circuit may include receiver components to convert the optical signals into electrical signals, and output the data carried by those electrical signals.
A PIC is a device that integrates multiple photonic functions on a single integrated device. PICs may be fabricated in a manner similar to electronic integrated circuits but, depending on the type of PIC, may be fabricated using one or more of a variety of types of materials, including silica on silicon, silicon on insulator, or various polymers and semiconductor materials which are used to make semiconductor lasers, such as GaAs and InP.
The transmitter (Tx) and receiver (Rx) PICs, in an optical communication system, may support communications over a number of wavelength channels. For example, a pair of Tx/Rx PICs may support ten channels, each spaced by, for example, 200 GHz. The set of channels supported by the Tx and Rx PICs can be referred to as the channel "grid" for the PICs. Channel grids for Tx/Rx PICs may be aligned to standardized frequencies, such as those published by the Telecommunication Standardization Sector (ITU-T). The set of channels supported by the Tx and Rx PICs may be referred to as the ITU frequency grid for the Tx/Rx PICs. The spacing, between the channels, may be less than 200 GHz, in order to tightly pack the channels together to form a super channel.
According to some example implementations, an optical system may include a transmitter module. The transmitter module may include a processor, a digital-to-analog converter, a laser, and a modulator. The processor may receive input data, map the input data to a set of subcarriers associated with an optical communication channel, independently apply spectral shaping to each of the subcarriers, and generate input values based on the spectral shaping of each of the subcarriers. The digital-to-analog converter may receive the input values from the processor, and generate voltage signals based on the input values. The laser may output light. The modulator may receive the light from the laser and the voltage signals from the digital-to-analog converter, modulate the light based on the voltage signals to generate an output optical signal that includes the subcarriers, and output the output optical signal.
According to some example implementations, an optical system may include a receiver module. The receiver module may include a detector, an analog-to-digital converter, and a processor. The detector may receive a particular optical signal that includes a set of subcarriers associated with an optical communication channel, and convert the particular optical signal to a set of voltage signals. The analog-to-digital converter may receive the set of voltage signals from the detector, and generate digital samples based on the set of voltage signals. The processor may receive the digital samples from the analog-to-digital converter, independently process the digital samples for each of the subcarriers, map the processed digital samples to produce output data, and output the output data.
According to some example implementations, an optical system may include a receiver module. The receiver module may include a demultiplexer and a set of receiver components. The demultiplexer may receive a particular optical signal that includes a set of subcarriers associated with an optical communication channel, and separate the particular optical signal into a set of optical signals. Each of the set of optical signals corresponds to one or more of the set of subcarriers. One of the receiver components may include a detector, an analog-to-digital converter, and a processor. The detector may receive one of the optical signals, and convert the optical signal to a set of voltage signals. The analog-to-digital converter may receive the set of voltage signals from the detector, and generate digital samples based on the set of voltage signals. The processor may receive the digital samples from the analog-to-digital converter, independently process the digital samples for each of one or more of the set of subcarriers, map the processed digital samples to produce output data, and output the output data.
According to some example implementations, an optical system may include a transmitter module and a receiver module. The transmitter module may receive input data, map the input data to a set of subcarriers associated with an optical communication channel, independently apply spectral shaping to each of the subcarriers, generate input values based on the spectral shaping of each of the subcarriers, generate voltage signals based on the input values, modulate light based on the voltage signals to generate an output optical signal that includes the subcarriers, and output the output optical signal. The receiver module may receive the output optical signal, convert the output optical signal to a set of voltage signals, generate digital samples based on the set of voltage signals, independently process the digital samples for each of the subcarriers, map the processed digital samples to produce output data, and output the output data.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more implementations described herein and, together with the description, explain these implementations. In the drawings:
FIG. 1 is a diagram illustrating an overview of an example implementation described herein;
FIG. 2 is a diagram of an example network in which systems and/or methods, described herein, may be implemented;
FIG. 3A is a diagram illustrating an example of components of an optical transmitter shown in FIG. 2;
FIG. 3B is a diagram illustrating another example of components of an optical transmitter shown in FIG. 2;
FIG. 4 is a diagram illustrating example components of a transmitter digital signal processor (DSP) shown in FIG. 3A or 3B;
FIG. 5 is a diagram illustrating example functional components of a transmitter DSP shown in FIG. 3A or 3B;
FIG. 6A is a diagram illustrating an example of components of an optical receiver, shown in FIG. 2, according to some implementations;
FIG. 6B is a diagram illustrating another example of components of an optical receiver, shown in FIG. 2, according to some implementations;
FIG. 7 is a diagram illustrating example components of a receiver DSP shown in FIG. 6A or 6B;
FIG. 9 is a diagram illustrating example components of an optical receiver, shown in FIG. 2, according to some other implementations; and
FIG. 10 is a flowchart of an example process that may be performed by a transmitter module and a receiver module of FIG. 2.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the disclosure.
FIG. 1 is a diagram illustrating an overview of an example implementation described herein. In an optical communication system, a certain bandwidth, or spectrum, may be allocated to an optical communications channel. As shown in (A), the channel may include a single carrier. In the implementation of (A), data may be mapped to a pulse of a desired spectral shape. In the implementation of (A), the pulse may be designed to fill the entire spectrum.
A system and method, as described herein, may use digital-to-analog converters to generate multiple subcarriers. As shown in (B), rather than including a single carrier, the channel may include multiple subcarriers. The quantity of subcarriers may be a design decision that may be based on properties of the laser and/or other optical components being used. In the implementation of (B), data may be mapped to a respective one of the multiple subcarriers. As described in further detail below, each subcarrier may be independently generated and processed by the same transmitter.
The use of high speed digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) (e.g., 64 GSample/s and beyond) may reduce the computational complexity of both the transmitter and the receiver. The high speed DACs and ADCs may facilitate the tuning of the output signal given design characteristics of the lasers and the modulators, and the available power budget. According to some implementations, a transmitter may be designed with a DSP, DACs, and electro-optical conversion (e.g., a laser and a modulator), and a receiver may be designed with receiver optics (e.g., a hybrid mixer and a local oscillator), ADCs, and a DSP. Such a transmitter may generate one or more subcarriers, and such a receiver may detect the one or more subcarriers. For example, if 32 GHz of optical spectrum is available for a channel, then the transmitter might generate one subcarrier of 32 Gbaud, two subcarriers of 16 Gbaud, three subcarriers of 10.66 Gbaud, and so on. The subcarriers may be designed so that the subcarriers can be substantially encoded and decoded separately.
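The even split of channel spectrum among subcarriers described above can be sketched with simple arithmetic. The function name and structure below are illustrative, not part of any described implementation:

```python
# Illustrative sketch: dividing a fixed channel spectrum evenly among
# subcarriers. The 32 GHz figure matches the example in the text.

def subcarrier_baud_rates(channel_ghz, num_subcarriers):
    """Each subcarrier carries an equal share of the channel spectrum."""
    return [channel_ghz / num_subcarriers] * num_subcarriers

print(subcarrier_baud_rates(32, 1))  # one subcarrier of 32 Gbaud
print(subcarrier_baud_rates(32, 2))  # two subcarriers of 16 Gbaud
print(subcarrier_baud_rates(32, 3))  # three subcarriers of ~10.66 Gbaud
```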
The multiple subcarrier approach may have several advantages. For example, digital filters, for the multiple subcarrier approach, may include fewer taps than existing approaches. For equal dispersion, a higher baud rate requires more taps than a lower baud rate. For example, a 40 Gbaud system may need approximately 2800 taps, while a 10 Gbaud system may need approximately 180 taps for 200,000 picoseconds per nanometer (ps/nm) of dispersion. The multiple subcarrier approach may reduce the penalty due to the combination of a receiver laser linewidth and electronic dispersion compensation because of the flexibility in choosing the baud rate of the subcarriers. The multiple subcarrier approach may also permit bit error rate (BER) averaging over the subcarriers, which can lead to performance benefits. The multiple subcarrier approach may also reduce power consumption over existing approaches.
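The quoted tap counts are consistent with dispersion-compensation filter length growing roughly quadratically with baud rate. The quadratic scaling model and helper below are a back-of-the-envelope assumption, not taken from the description:

```python
# Hedged sketch: scale a reference tap count quadratically with baud
# rate. Anchored at ~180 taps for 10 Gbaud (per the text), this model
# predicts ~2880 taps at 40 Gbaud, close to the ~2800 quoted above.

def approx_taps(baud_gbaud, taps_ref=180, baud_ref=10):
    """Assumed quadratic scaling of dispersion-filter taps with baud rate."""
    return taps_ref * (baud_gbaud / baud_ref) ** 2

print(approx_taps(10))  # 180 taps (reference point)
print(approx_taps(40))  # 2880 taps, near the ~2800 figure in the text
```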
FIG. 2 is a diagram of an example network 200 in which systems and/or methods described herein may be implemented. As illustrated in FIG. 2, network 200 may include transmitter (Tx) module 210 (e.g., a Tx PIC), and/or receiver (Rx) module 220 (e.g., an Rx PIC). In some implementations, transmitter module 210 may be optically connected to receiver module 220 via link 230. Additionally, link 230 may include one or more optical amplifiers 240 that amplify an optical signal as the optical signal is transmitted over link 230.
Transmitter module 210 may include a number of optical transmitters 212-1 through 212-M (where M≥1), waveguides 214, and/or optical multiplexer 216. In some implementations, transmitter module 210 may include additional components, fewer components, different components, or differently arranged components.
Each optical transmitter 212 may receive data for a data channel (shown as TxCh1 through TxChM), create multiple subcarriers for the data channel, map data, for the data channel, to the multiple subcarriers, modulate the data with an optical signal to create a multiple subcarrier output optical signal, and transmit the multiple subcarrier output optical signal. In one implementation, transmitter module 210 may include 5, 10, 20, 50, 100, or some other quantity of optical transmitters 212. Each optical transmitter 212 may be tuned to use an optical carrier of a designated wavelength. It may be desirable that the grid of wavelengths emitted by optical transmitters 212 conform to a known standard, such as a standard published by the Telecommunication Standardization Sector (ITU-T). It may also be desirable that the grid of wavelengths be flexible and tightly packed to create a super channel.
In some implementations and as described above, each of optical transmitters 212 may include a TX DSP, a DAC, a laser, a modulator, and/or some other components. The laser and/or the modulator may be coupled with a tuning element that can be used to tune the wavelength of the optical signal channel.
Waveguides 214 may include an optical link or some other link to transmit output optical signals of optical transmitters 212. In some implementations, each optical transmitter 212 may include one waveguide 214, or multiple waveguides 214, to transmit output optical signals of optical transmitters 212 to optical multiplexer 216.
Optical multiplexer 216 may include an arrayed waveguide grating (AWG) or some other multiplexer device. In some implementations, optical multiplexer 216 may combine multiple output optical signals, associated with optical transmitters 212, into a single optical signal (e.g., a WDM signal). In some implementations, optical multiplexer 216 may combine multiple output optical signals, associated with optical transmitters 212, in such a way as to produce a polarization diverse signal (e.g., also referred to herein as a WDM signal). A corresponding waveguide may output the WDM signal on an optical fiber, such as link 230. For example, optical multiplexer 216 may include an input (e.g., a first slab to receive input optical signals supplied by optical transmitters 212) and an output (e.g., a second slab to supply a single WDM signal associated with the input optical signals). Optical multiplexer 216 may also include waveguides connected to the input and the output.
In some implementations, the first slab and the second slab may each act as an input and an output. For example, the first slab and the second slab may each receive multiple input optical signals (e.g., output optical signals supplied by optical transmitters 212). Additionally, the first slab may supply a single WDM signal corresponding to the input optical signals (e.g., output optical signals supplied by optical transmitters 212) received by the second slab. Further, the second slab may supply a single WDM signal corresponding to the input optical signals (e.g., output optical signals supplied by optical transmitters 212) received by the first slab. In some implementations, a corresponding waveguide may output the WDM signal on an optical fiber, such as link 230.
As shown in FIG. 2, optical multiplexer 216 may receive output optical signals outputted by optical transmitters 212, and output one or more WDM signals. Each WDM signal may include one or more optical signals, such that each optical signal includes one or more wavelengths. In some implementations, one WDM signal may have a first polarization (e.g., a transverse magnetic (TM) polarization), and another WDM signal may have a second, substantially orthogonal polarization (e.g., a transverse electric (TE) polarization). Alternatively, both WDM signals may have the same polarization.
Link 230 may include an optical fiber. Link 230 may transport one or more optical signals associated with multiple wavelengths. Amplifier 240 may include an amplification device, such as a doped fiber amplifier or a Raman amplifier. Amplifier 240 may amplify the optical signals as the optical signals are transmitted via link 230.
Receiver module 220 may include optical demultiplexer 222, waveguides 224, and/or optical receivers 226-1 through 226-N (where N≥1). In some implementations, receiver module 220 may include additional components, fewer components, different components, or differently arranged components.
Optical demultiplexer 222 may include an AWG or some other demultiplexer device. Optical demultiplexer 222 may supply multiple optical signals based on receiving one or more optical signals, such as WDM signals, or components associated with the one or more optical signals. For example, optical demultiplexer 222 may include an input (e.g., a first slab to receive a WDM signal and/or some other input signal), and an output (e.g., a second slab to supply multiple optical signals associated with the WDM signal). Additionally, optical demultiplexer 222 may include waveguides connected to the first slab and the second slab.
In some implementations, the first slab and the second slab may each act as an input and an output. For example, the first slab and the second slab may each receive an optical signal (e.g., a WDM signal supplied by optical multiplexer 216 and/or some other optical signal). Additionally, the first slab may supply output optical signals corresponding to the optical signal received by the second slab. Further, the second slab may supply output optical signals corresponding to the optical signal received by the first slab. As shown in FIG. 2, optical demultiplexer 222 may supply optical signals to optical receivers 226 via waveguides 224.
Waveguides 224 may include an optical link or some other link to transmit optical signals, output from optical demultiplexer 222, to optical receivers 226. In some implementations, each optical receiver 226 may receive optical signals via a single waveguide 224 or via multiple waveguides 224.
Optical receivers 226 may each include one or more photodetectors and related devices to receive respective input optical signals outputted by optical demultiplexer 222, detect the subcarriers associated with the input optical signals, convert data within the subcarriers to voltage signals, convert the voltage signals to digital samples, and process the digital samples to produce output data corresponding to the input optical signals. Optical receivers 226 may each operate to convert the input optical signal to an electrical signal that represents the transmitted data. In some implementations and as described above, each of optical receivers 226 may include a local oscillator, a hybrid mixer, a detector, an ADC, an RX DSP, and/or some other components.
While FIG. 2 shows network 200 as including a particular quantity and arrangement of components, in some implementations, network 200 may include additional components, fewer components, different components, or differently arranged components. Also, in some instances, one of the devices illustrated in FIG. 2 may perform a function described herein as being performed by another one of the devices illustrated in FIG. 2.
FIG. 3A is a diagram illustrating an example of components of an optical transmitter 212. As shown in FIG. 3A, optical transmitter 212 may include a TX DSP 310, a DAC 320, a laser 330, and a modulator 340. In some implementations, TX DSP 310 and DAC 320 may be implemented using an application specific integrated circuit (ASIC) and/or may be implemented on a single integrated circuit, such as a single PIC. In some implementations, laser 330 and modulator 340 may be implemented on a single integrated circuit, such as a single PIC. In some other implementations, TX DSP 310, DAC 320, laser 330, and/or modulator 340 may be implemented on one or more integrated circuits, such as one or more PICs. For example, in some example implementations, components of multiple optical transmitters 212 may be implemented on a single integrated circuit, such as a single PIC, to form a super-channel transmitter.
TX DSP 310 may include a digital signal processor. TX DSP 310 may receive input data from a data source, and determine the signal to apply to modulator 340 to generate multiple subcarriers. In some implementations, TX DSP 310 may receive streams of data, map the streams of data into each of the subcarriers, independently apply spectral shaping to each of the subcarriers, and obtain, based on the spectral shaping of each of the subcarriers, a sequence of assigned integers to supply to DAC 320. In some implementations, TX DSP 310 may generate the subcarriers using time domain filtering and frequency shifting by multiplication in the time domain.
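The frequency shifting by multiplication in the time domain mentioned above can be sketched as multiplying samples by a complex exponential. The following is an illustrative toy, not the actual TX DSP:

```python
import cmath

# Illustrative sketch: shift a baseband subcarrier to its frequency slot
# by multiplying its time-domain samples by exp(j*2*pi*f*t).

def frequency_shift(samples, f_shift, sample_rate):
    """Multiply each sample by a complex exponential at f_shift."""
    return [s * cmath.exp(2j * cmath.pi * f_shift * n / sample_rate)
            for n, s in enumerate(samples)]

# A constant (DC) signal shifted by fs/4 becomes a complex tone.
shifted = frequency_shift([1.0] * 8, f_shift=1.0, sample_rate=4.0)
print([complex(round(s.real, 3), round(s.imag, 3)) for s in shifted])
```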
DAC 320 may include a digital-to-analog converter. DAC 320 may receive the sequence of assigned integers and, based on the sequence of assigned integers, generate the voltage signals to apply to modulator 340.
Laser 330 may include a semiconductor laser, such as a distributed feedback (DFB) laser, or some other type of laser. Laser 330 may provide an output optical light beam to modulator 340.
Modulator 340 may include a Mach-Zehnder modulator (MZM), such as a nested MZM, or another type of modulator. Modulator 340 may receive the optical light beam from laser 330 and the voltage signals from DAC 320, and may modulate the optical light beam, based on the voltage signals, to generate a multiple subcarrier output signal.
While FIG. 3A shows optical transmitter 212 as including a particular quantity and arrangement of components, in some implementations, optical transmitter 212 may include additional components, fewer components, different components, or differently arranged components. The quantity of DACs 320, lasers 330, and/or modulators 340 may be selected to implement an optical transmitter 212 that is capable of generating polarization diverse signals for transmission on an optical fiber, such as link 230. In some instances, one of the components illustrated in FIG. 3A may perform a function described herein as being performed by another one of the components illustrated in FIG. 3A.
FIG. 3B is a diagram illustrating another example of components of an optical transmitter 212. As shown in FIG. 3B, optical transmitter 212 may include a TX DSP 310, DACs 320-1 and 320-2 (referred to generally as DACs 320 and individually as DAC 320), a laser 330, modulators 340-1 and 340-2 (referred to generally as modulators 340 and individually as modulator 340), and splitter 350. TX DSP 310, DACs 320, laser 330, and modulators 340 may correspond to like components described with regard to FIG. 3A.
Splitter 350 may include an optical splitter that receives the optical light beam from laser 330 and splits the optical light beam into two branches: one for the first polarization and one for the second polarization. In some implementations, the two optical light beams may have approximately equal power. Splitter 350 may output one optical light beam to modulator 340-1 and another optical light beam to modulator 340-2.
Modulator 340-1 may be used to modulate signals of the first polarization. Modulator 340-2 may be used to modulate signals of the second polarization. In some implementations, two DACs 320 may be associated with each polarization. In these implementations, two DACs 320-1 may supply voltage signals to modulator 340-1, and two DACs 320-2 may supply voltage signals to modulator 340-2. The outputs of modulators 340 may be combined back together using combiners (e.g., optical multiplexer 216) and polarization multiplexing.
While FIG. 3B shows optical transmitter 212 as including a particular quantity and arrangement of components, in some implementations, optical transmitter 212 may include additional components, fewer components, different components, or differently arranged components. The quantity of DACs 320, lasers 330, and/or modulators 340 may be selected to implement an optical transmitter 212 that is capable of generating polarization diverse signals for transmission on an optical fiber, such as link 230. In some instances, one of the components illustrated in FIG. 3B may perform a function described herein as being performed by another one of the components illustrated in FIG. 3B.
FIG. 4 is a diagram illustrating example components of TX DSP 310. As shown in FIG. 4, TX DSP 310 may include a demultiplexer (DE-MUX) 410, multiple transmitter components 420, and a multiplexer (MUX) 430.
Demultiplexer 410 may include a demultiplexer device. Demultiplexer 410 may receive a stream of data from a data source, and may demultiplex the data for presentation to transmitter components 420. In some implementations, demultiplexer 410 may separate the data for the multiple subcarriers.
Transmitter components 420 may correspond to a set of Z (Z≥1) transmitter components 420, which may correspond to Z subcarriers. In other words, each transmitter component 420 may process data for inclusion on a corresponding subcarrier. Each transmitter component 420 may apply pulse shaping (e.g., spectral shaping), channel correction (e.g., dispersion compensation), and the like. The pulse shaping may provide fast roll-off of the spectrum of the subcarriers, which in turn permits the subcarriers to be packed tightly.
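One common way to obtain such fast spectral roll-off is a raised-cosine shape with a small roll-off factor; the raised-cosine choice and parameter names below are our assumption for illustration, not stated in the description:

```python
import math

# Hedged sketch of spectral shaping: a raised-cosine magnitude response.
# A small roll-off factor keeps the spectrum compact so adjacent
# subcarriers can be packed tightly.

def raised_cosine(f, baud, rolloff):
    """Raised-cosine magnitude response at frequency f (Hz)."""
    f = abs(f)
    edge = baud * (1 - rolloff) / 2   # flat out to here
    stop = baud * (1 + rolloff) / 2   # zero beyond here
    if f <= edge:
        return 1.0
    if f >= stop:
        return 0.0
    return 0.5 * (1 + math.cos(math.pi / (baud * rolloff) * (f - edge)))

# With 10% roll-off, an 8 Gbaud subcarrier is flat to 3.6 GHz and
# fully rolled off past 4.4 GHz.
print(raised_cosine(0, 8e9, 0.1), raised_cosine(5e9, 8e9, 0.1))
```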
Error correction may be applied to the data. In some implementations, coding for error correction, such as forward error correction (FEC) coding, may be applied at the data source. In some implementations, coding for error correction, such as FEC coding, may be applied at transmitter components 420.
Multiplexer 430 may include a multiplexer device. Multiplexer 430 may receive data for each subcarrier from transmitter components 420, and may combine the data to form a sequence of integers for output to DAC 320 for production of the appropriate voltage signals.
In some implementations, demultiplexer 410, transmitter components 420, and/or multiplexer 430 may apply timing skew to each of the multiple subcarriers to correct for skew induced by link 230.
While FIG. 4 shows TX DSP 310 as including a particular quantity and arrangement of components, in some implementations, TX DSP 310 may include additional components, fewer components, different components, or differently arranged components. Also, in some instances, one of the components illustrated in FIG. 4 may perform a function described herein as being performed by another one of the components illustrated in FIG. 4.
FIG. 5 is a diagram illustrating example functional components of TX DSP 310. The particular functional components, which may be included in TX DSP 310, may vary based on desired performance characteristics and/or computational complexity. For the particular functional components shown in FIG. 5, assume that TX DSP 310 is connected to a 64 GSample/s DAC 320 and produces four subcarriers of eight Gbaud. In this case, TX DSP 310 may include four transmitter components 420—one for each subcarrier. TX DSP 310 may include different functional components or a different quantity of functional components in other situations.
As shown in FIG. 5, TX DSP 310 may include an FEC encoder 505, a de-mux component 510, an input bits component 520, a bits to symbol component 530, an overlap and save buffer 540, a fast Fourier transform (FFT) component 550, a replicator component 560, a pulse shape filter 570, a mux component 580, an inverse FFT (IFFT) component 590, and a take last 1024 component 595.
FEC encoder 505 may receive an input stream of bits and perform error correction coding, such as through the addition of parity bits. De-mux component 510 may receive the stream of bits of data and perform a demultiplexing operation on the stream of bits. In this example, de-mux component 510 may separate the stream of bits into groups of bits associated with the four subcarriers. In some implementations, the bits could be separately or jointly encoded for error correction in de-mux component 510, using forward error correction. De-mux component 510 may use the error correction encoding to separate the bits for the different subcarriers. De-mux component 510 may be designed to systematically interleave bits between the subcarriers. De-mux component 510 may be designed to generate timing skew between the subcarriers to correct for skew induced by link 230. De-mux component 510 may provide each group of bits to a corresponding input bits component 520. Input bits component 520 may process 128*X bits at a time, where X is an integer. For dual-polarization Quadrature Phase Shift Keying (QPSK), X would be four.
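A minimal sketch of one possible systematic interleave is a round-robin split; the round-robin pattern is our assumption, since the actual interleaving is a design choice:

```python
# Illustrative sketch of the de-mux step: round-robin an incoming bit
# stream into one group of bits per subcarrier.

def demux_bits(bits, num_subcarriers):
    """Distribute bits round-robin into num_subcarriers groups."""
    return [bits[i::num_subcarriers] for i in range(num_subcarriers)]

groups = demux_bits([1, 0, 1, 1, 0, 0, 1, 0], 4)
print(groups)  # [[1, 0], [0, 0], [1, 1], [1, 0]]
```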
Bits to symbol component 530 may map the bits to symbols on the complex plane. For example, bits to symbol component 530 may map four bits to a symbol in the dual-polarization QPSK constellation. Overlap and save buffer 540 may buffer 256 symbols. Overlap and save buffer 540 may receive 128 symbols at a time from bits to symbol component 530. Thus, overlap and save buffer 540 may combine 128 new symbols, from bits to symbol component 530, with the previous 128 symbols received from bits to symbol component 530.
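The mapping and buffering steps can be sketched as follows. The two-bits-per-symbol QPSK table and the tiny block size are illustrative simplifications of the 128-symbol, dual-polarization case described above:

```python
# Sketch of bits-to-symbol mapping and an overlap-and-save buffer.
# Two bits map to one QPSK symbol here; a dual-polarization mapper
# would consume four bits per symbol, as the text describes.

QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def bits_to_symbols(bits):
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

class OverlapSaveBuffer:
    """Keeps the previous block so each output holds 2*block symbols."""
    def __init__(self, block=128):
        self.block = block
        self.prev = [0j] * block
    def push(self, new_symbols):
        out = self.prev + list(new_symbols)  # old block + new block
        self.prev = list(new_symbols)        # saved for the next call
        return out

buf = OverlapSaveBuffer(block=2)
print(buf.push(bits_to_symbols([0, 0, 1, 1])))  # [0j, 0j, (1+1j), (-1-1j)]
```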
FFT component 550 may receive 256 symbols from overlap and save buffer 540 and convert the symbols to the frequency domain using, for example, a fast Fourier transform (FFT). FFT component 550 may form 256 frequency bins as a result of performing the FFT. Replicator component 560 may replicate the 256 frequency bins to form 512 frequency bins (e.g., for T/2 based filtering of the subcarrier). This replication may increase the sample rate.
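These steps can be illustrated at toy scale: a small direct DFT stands in for the 256-point FFT, and replication simply repeats the bins to double their count. The helper functions are ours:

```python
import cmath

# Toy-scale sketch of the FFT and replication steps.

def dft(x):
    """Direct DFT; stands in for the 256-point FFT in the text."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def replicate_bins(bins):
    """Repeat the spectrum once, doubling the number of frequency bins."""
    return bins + bins

bins = dft([1, 0, 0, 0])          # an impulse has a flat spectrum
print(len(replicate_bins(bins)))  # 8 bins from 4
```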
Pulse shape filter 570 may apply a pulse shaping filter to the 512 frequency bins. Pulse shape filter 570 calculates the transitions between the symbols and shapes the desired spectrum so that the subcarriers can be packed tightly together on the channel. Pulse shape filter 570 may also be used to introduce timing skew between the subcarriers to correct for timing skew induced by link 230. Mux component 580 may receive all four eight-Gbaud subcarriers (from the four pulse shape filters 570) and multiplex them together to form a 2048 element vector.
IFFT component 590 may receive the 2048 element vector and return the signal back to the time domain, which may now be at 64 GSample/s. IFFT component 590 may convert the signal to the time domain using, for example, an inverse fast Fourier transform (IFFT). Take last 1024 component 595 may select the last 1024 samples from IFFT component 590 and output the 1024 samples to DAC 320 at 64 GSample/s.
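The final stages can be sketched at toy scale: the per-subcarrier bins are concatenated into one long vector, a small inverse DFT (standing in for the 2048-point IFFT) returns to the time domain, and only the newer half of the samples is kept, mirroring the take-last-1024 step. All names are illustrative:

```python
import cmath

# Toy-scale sketch of the mux, IFFT, and take-last stages.

def idft(bins):
    """Direct inverse DFT; stands in for the 2048-point IFFT."""
    n = len(bins)
    return [sum(bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def mux_and_take_last(per_subcarrier_bins):
    # Concatenate the bins of every subcarrier (e.g., 4 * 512 = 2048).
    merged = [b for bins in per_subcarrier_bins for b in bins]
    time_samples = idft(merged)
    # The first half overlaps the previous block, so discard it.
    return time_samples[len(time_samples) // 2:]

out = mux_and_take_last([[1, 0], [0, 0], [0, 0], [0, 0]])
print(len(out))  # 4 samples kept out of 8
```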
While FIG. 5 shows TX DSP 310 as including a particular quantity and arrangement of functional components, in some implementations, TX DSP 310 may include additional functional components, fewer functional components, different functional components, or differently arranged functional components.
FIG. 6A is a diagram illustrating an example of components of an optical receiver 226 according to some implementations. As shown in FIG. 6A, optical receiver 226 may include a local oscillator 610, a hybrid mixer 620, a detector 630, an ADC 640, and an RX DSP 650. In some implementations, local oscillator 610, hybrid mixer 620, and detector 630 may be implemented on a single integrated circuit, such as a single PIC. In some implementations, ADC 640 and RX DSP 650 may be implemented using an application specific integrated circuit (ASIC) and/or may be implemented on a single integrated circuit, such as a single PIC. In some other implementations, local oscillator 610, hybrid mixer 620, detector 630, ADC 640, and/or RX DSP 650 may be implemented on one or more integrated circuits, such as one or more PICs. For example, in some example implementations, components of multiple optical receivers 226 may be implemented on a single integrated circuit, such as a single PIC, to form a super-channel receiver.
Local oscillator 610 may include a laser, a collection of lasers, or some other device. In some implementations, local oscillator 610 may include a laser to provide an optical signal to hybrid mixer 620. In some implementations, local oscillator 610 may include a single-sided laser to provide an optical signal to hybrid mixer 620. In some other implementations, local oscillator 610 may include a double-sided laser to provide multiple optical signals to multiple hybrid mixers 620.
Hybrid mixer 620 may include a combiner that receives an optical input signal (e.g., from optical demultiplexer 222) and an optical signal from local oscillator 610 and combines the optical signals to generate an output optical signal. In some implementations, hybrid mixer 620 may split the optical input signal into two, create two orthogonal signals (e.g., by adding the first optical input signal and the optical signal, from local oscillator 610, with zero phase, and by adding the second optical input signal and the optical signal, from local oscillator 610, with 90 degrees phase), and combine the two orthogonal signals for presentation to detector 630.
Detector 630 may include a photodetector, such as a photodiode, to receive the output optical signal, from hybrid mixer 620, and convert the output optical signal to corresponding voltage signals. In some implementations, detector 630 may detect the entire spectrum (e.g., containing all of the subcarriers).
ADC 640 may include an analog-to-digital converter that converts the voltage signals from detector 630 to digital samples. ADC 640 may provide the digital samples to RX DSP 650. RX DSP 650 may receive the digital samples from ADC 640, demultiplex the samples according to the subcarriers, independently process the samples for each of the subcarriers, map the processed samples to produce output data, and output the output data.
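One plausible way for the receiver to separate a subcarrier from the digitized spectrum is to shift that subcarrier to baseband and then low-pass filter. The sketch below uses simple block averaging as the filter and is an assumption for illustration, not the described implementation:

```python
import cmath

# Hedged sketch of subcarrier demultiplexing on the receive side:
# shift the subcarrier of interest to baseband, then low-pass by
# averaging (a real design would use a proper filter).

def extract_subcarrier(samples, f_center, sample_rate, avg=4):
    shifted = [s * cmath.exp(-2j * cmath.pi * f_center * n / sample_rate)
               for n, s in enumerate(samples)]
    return [sum(shifted[i:i + avg]) / avg
            for i in range(0, len(shifted), avg)]

# A tone at fs/4 becomes a constant after shifting to baseband.
tone = [cmath.exp(2j * cmath.pi * 0.25 * n) for n in range(8)]
base = extract_subcarrier(tone, f_center=0.25, sample_rate=1.0)
print([round(abs(b), 3) for b in base])  # [1.0, 1.0]
```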
While FIG. 6A shows optical receiver 226 as including a particular quantity and arrangement of components, in some implementations, optical receiver 226 may include additional components, fewer components, different components, or differently arranged components. The quantity of detectors 630 and/or ADCs 640 may be selected to implement an optical receiver 226 that is capable of receiving a polarization diverse signal. In some instances, one of the components illustrated in FIG. 6A may perform a function described herein as being performed by another one of the components illustrated in FIG. 6A.
In other implementations, optical receiver 226 may include intensity-based detectors 630 that operate using on/off keying intensity modulation for each of the subcarriers. In these other implementations, optical receiver 226 may not include a local oscillator 610 or a hybrid mixer 620. Rather, optical receiver 226 may include detector 630, ADC 640, and RX DSP 650, which may operate in a manner similar to that described above.
FIG. 6B is a diagram illustrating another example of components of an optical receiver 226 according to some implementations. As shown in FIG. 6B, optical receiver 226 may include a polarization splitter 605, a local oscillator 610, hybrid mixers 620-1 and 620-2 (referred to generally as hybrid mixers 620 and individually as hybrid mixer 620), detectors 630-1 and 630-2 (referred to generally as detectors 630 and individually as detector 630), ADCs 640-1 and 640-2 (referred to generally as ADCs 640 and individually as ADC 640), and an RX DSP 650. Local oscillator 610, hybrid mixers 620, detectors 630, ADCs 640, and RX DSP 650 may correspond to like components described with regard to FIG. 6A.
Polarization splitter 605 may include a polarization splitter that splits an input signal into two orthogonal polarizations, such as the first polarization and the second polarization. Hybrid mixers 620 may combine the polarization signals with optical signals from local oscillator 610. For example, hybrid mixer 620-1 may combine a first polarization signal with the optical signal from local oscillator 610, and hybrid mixer 620-2 may combine a second polarization signal with the optical signal from local oscillator 610.
Detectors 630 may detect the polarization signals to form corresponding voltage signals, and ADCs 640 may convert the voltage signals to digital samples. For example, two detectors 630-1 may detect the first polarization signals to form the corresponding voltage signals, and a corresponding two ADCs 640-1 may convert the voltage signals to digital samples for the first polarization signals. Similarly, two detectors 630-2 may detect the second polarization signals to form the corresponding voltage signals, and a corresponding two ADCs 640-2 may convert the voltage signals to digital samples for the second polarization signals. RX DSP 650 may process the digital samples for the first and second polarization signals to generate resultant data, which may be outputted as output data.
While FIG. 6B shows optical receiver 226 as including a particular quantity and arrangement of components, in some implementations, optical receiver 226 may include additional components, fewer components, different components, or differently arranged components. The quantity of detectors 630 and/or ADCs 640 may be selected to implement an optical receiver 226 that is capable of receiving a polarization diverse signal. In some instances, one of the components illustrated in FIG. 6B may perform a function described herein as being performed by another one of the components illustrated in FIG. 6B.
FIG. 7 is a diagram illustrating example components of RX DSP 650. As shown in FIG. 7, RX DSP 650 may include a demultiplexer (DE-MUX) 710, multiple receiver components 720, and a multiplexer (MUX) 730.
Demultiplexer 710 may include a demultiplexer device. Demultiplexer 710 may receive a stream of digital samples from ADC 640, and may demultiplex the digital samples for presentation to receiver components 720.
Receiver components 720 may correspond to a set of Z (Z≥1) receiver components 720, which may correspond to Z subcarriers. In other words, each receiver component 720 may process digital samples, corresponding to a respective subcarrier, to extract the data from the respective subcarrier. Each receiver component 720 may process the digital samples to correct for channel impairments, such as polarization mode dispersion, and to perform functions such as carrier recovery. In some implementations, receiver components 720 may de-skew the data to undo skew caused by link 230 or skew introduced by TX DSP 310.
Multiplexer 730 may include a multiplexer device. Multiplexer 730 may receive data for each subcarrier from receiver components 720, and may combine the data to form output data. Multiplexer 730 may de-interleave the data that was systematically interleaved in de-mux component 510 of TX DSP 310 (FIG. 5). Multiplexer 730 may de-skew the data to undo skew caused by link 230 or skew introduced by TX DSP 310.
Error correction may be applied to the data. In some implementations, coding for error correction may be applied at the data source. In some implementations, decoding for error correction, such as FEC decoding, may be applied at the outputs of receiver components 720. In some implementations, decoding for error correction, such as FEC decoding, may be applied at the output of multiplexer 730.
While FIG. 7 shows RX DSP 650 as including a particular quantity and arrangement of components, in some implementations, RX DSP 650 may include additional components, fewer components, different components, or differently arranged components. Also, in some instances, one of the components illustrated in FIG. 7 may perform a function described herein as being performed by another one of the components illustrated in FIG. 7.
FIG. 8 is a diagram illustrating example functional components of RX DSP 650. The particular functional components, which may be included in RX DSP 650, may vary based on desired performance characteristics and/or computational complexity. For the particular functional components shown in FIG. 8, assume that RX DSP 650 is connected to a 64 GSample/s ADC 640 and detects four subcarriers of eight Gbaud. In this case, RX DSP 650 may include four receiver components 720—one for each subcarrier. RX DSP 650 may include different functional components or a different quantity of functional components in other situations, such as a situation where RX DSP 650 receives signals from four ADCs 640, as shown in FIG. 6B.
As shown in FIG. 8, RX DSP 650 may include an overlap and save buffer 805, FFT component 810, de-mux component 815, fixed filter 820, PMD component 825, IFFT component 830, take last 128 component 835, carrier recovery component 840, symbols to bits component 845, output bits component 850, mux component 855, and FEC decoder 860.
Overlap and save buffer 805 may receive samples from ADC 640. ADC 640 may operate to output samples at 64 GSample/s. Overlap and save buffer 805 may receive 1024 samples and combine the current 1024 samples with the previous 1024 samples, received from ADC 640, to form a vector of 2048 elements. FFT component 810 may receive the 2048 vector elements from overlap and save buffer 805 and convert the vector elements to the frequency domain using, for example, a fast Fourier transform (FFT). FFT component 810 may convert the 2048 vector elements to 2048 frequency bins as a result of performing the FFT.
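The buffering and transform arithmetic described above can be sketched as follows. This is only an illustrative numpy sketch of the described data flow; the function name and use of `numpy.fft` are assumptions, not part of the disclosed implementation:

```python
import numpy as np

def overlap_and_save_fft(prev_block, curr_block):
    """Combine the previous 1024 samples with the current 1024 samples
    into a 2048-element vector (overlap and save buffer 805) and convert
    it to 2048 frequency bins (FFT component 810)."""
    assert len(prev_block) == 1024 and len(curr_block) == 1024
    vector = np.concatenate([prev_block, curr_block])  # 2048 elements
    return np.fft.fft(vector)                          # 2048 frequency bins

# Two consecutive 1024-sample blocks from the ADC stream.
samples = np.random.randn(2048)
bins = overlap_and_save_fft(samples[:1024], samples[1024:])
```

Each call reuses the second half of the previous call's input, which is what allows block-wise frequency-domain filtering without discontinuities at block boundaries.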
De-mux component 815 may receive the 2048 frequency bins from FFT component 810. De-mux component 815 may demultiplex the 2048 frequency bins to 512 element vectors for each of the eight Gbaud subcarriers. Fixed filter 820 may apply a filtering operation for, for example, dispersion compensation. Fixed filter 820 may compensate for the relatively slow varying parts of the channel. Fixed filter 820 may also compensate for skew across subcarriers introduced in link 230, or skew introduced intentionally in optical transmitter 212.
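The demultiplexing of 2048 frequency bins into per-subcarrier 512-element vectors can be sketched as below. Contiguous slicing is an illustrative assumption; the actual bin selection depends on where each subcarrier sits in the spectrum:

```python
import numpy as np

def demux_bins(freq_bins, num_subcarriers=4, bins_per_subcarrier=512):
    """Split 2048 frequency bins into a 512-element vector per subcarrier,
    mirroring de-mux component 815 for four eight-Gbaud subcarriers."""
    assert len(freq_bins) == num_subcarriers * bins_per_subcarrier
    return [freq_bins[i * bins_per_subcarrier:(i + 1) * bins_per_subcarrier]
            for i in range(num_subcarriers)]

subcarrier_vectors = demux_bins(np.zeros(2048, dtype=complex))
```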
PMD component 825 may apply polarization mode dispersion (PMD) equalization to compensate for PMD and polarization rotations. PMD component 825 may also receive and operate based upon feedback signals from take last 128 component 835 and/or carrier recovery component 840.
IFFT component 830 may convert the 512 element vector (after processing by fixed filter 820 and PMD component 825) back to the time domain as 512 samples. IFFT component 830 may convert the 512 element vector to the time domain using, for example, an inverse fast Fourier transform (IFFT). Take last 128 component 835 may select the last 128 samples from IFFT component 830 and output the 128 samples to carrier recovery component 840.
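The return to the time domain and the discard step can be sketched as follows (an illustrative numpy sketch; keeping the last 128 samples is the overlap-and-save discard that matches the buffering at the FFT input):

```python
import numpy as np

def ifft_take_last(vector_512, keep=128):
    """Convert a 512-element frequency-domain vector back to 512
    time-domain samples (IFFT component 830) and keep the last 128
    (take last 128 component 835)."""
    time_samples = np.fft.ifft(vector_512)  # 512 time-domain samples
    return time_samples[-keep:]             # last 128 samples
```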
Carrier recovery component 840 may apply carrier recovery to compensate for transmitter and receiver laser linewidths. In some implementations, carrier recovery component 840 may perform carrier recovery to compensate for frequency and/or phase differences between the transmit signal and the signal from local oscillator 610. After carrier recovery, the data may be represented as symbols in the QPSK constellation. In some implementations, as described above, the output of take last 128 component 835 and/or carrier recovery component 840 could be used to update PMD component 825.
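One common carrier-recovery technique for QPSK is the 4th-power (Viterbi-Viterbi) phase estimate, sketched below purely as an illustration; the patent does not specify which algorithm carrier recovery component 840 uses:

```python
import numpy as np

def viterbi_viterbi_phase(symbols):
    """Remove a common phase offset from QPSK symbols by raising them to
    the 4th power (which strips the modulation), averaging, and taking a
    quarter of the resulting angle."""
    est = np.angle(np.mean(symbols ** 4)) / 4.0
    return symbols * np.exp(-1j * est)

# A {1, j, -1, -j} constellation rotated by 0.1 rad is restored
# (up to the usual pi/2 ambiguity of the 4th-power method).
ideal = np.array([1, 1j, -1, -1j])
recovered = viterbi_viterbi_phase(ideal * np.exp(1j * 0.1))
```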
Symbols to bits component 845 may receive the symbols output from carrier recovery component 840 and map the symbols back to bits. For example, symbols to bits component 845 may map one symbol, in the QPSK constellation, to X bits, where X is an integer. For dual-polarization QPSK, X would be four. In some implementations, the bits could be decoded for error correction using, for example, FEC. Output bits component 850 may output 128*X bits at a time. For dual-polarization QPSK, output bits component 850 may output 512 bits at a time.
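A single-polarization slice of the symbol-to-bit mapping can be sketched as below. Gray mapping by the signs of the in-phase and quadrature components is an illustrative assumption; the actual bit mapping is implementation specific:

```python
import numpy as np

def qpsk_symbols_to_bits(symbols):
    """Map each QPSK symbol to 2 bits using the signs of its I and Q
    components (one polarization; dual-polarization QPSK doubles this)."""
    bits = np.empty(2 * len(symbols), dtype=int)
    bits[0::2] = (symbols.real < 0).astype(int)  # I component -> first bit
    bits[1::2] = (symbols.imag < 0).astype(int)  # Q component -> second bit
    return bits
```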
Mux component 855 may combine the subcarriers together and undo the systematic interleaving introduced in de-mux component 510 of TX DSP 310 (FIG. 5). FEC decoder 860 may process the output of mux component 855 to remove errors using forward error correction.
While FIG. 8 shows RX DSP 650 as including a particular quantity and arrangement of functional components, in some implementations, RX DSP 650 may include additional functional components, fewer functional components, different functional components, or differently arranged functional components.
FIG. 9 is a diagram illustrating example components of an optical receiver 226 according to some other implementations. In implementations described with regard to FIG. 6A or 6B, optical receiver 226 processes a whole spectrum of interest and uses digital signal processing to separate out the subcarriers. By contrast, implementations described with regard to FIG. 9 may optically filter the subcarriers and separately process the subcarriers.
As shown in FIG. 9, optical receiver 226 may include a demultiplexer (DE-MUX) 910 connected to a set of receiver components 912. In some implementations, each receiver component 912 may correspond to a respective one of the subcarriers.
Demultiplexer 910 may include an optical demultiplexer, such as an AWG. Demultiplexer 910 may receive an optical signal, having multiple subcarriers, and separate the optical signal based on the subcarriers. Demultiplexer 910 may provide each subcarrier to a corresponding receiver component 912. Demultiplexer 910 may also provide polarization diversity by separating the input signal into two substantially orthogonal polarizations, such as the first polarization and the second polarization, which may be processed in a manner similar to that described with regard to FIG. 6B.
As shown in FIG. 9, receiver components 912 may include local oscillators 920, hybrid mixers 930, detectors 940, ADCs 950, and RX DSPs 960. In some implementations, local oscillators 920, hybrid mixers 930, and detectors 940 may be implemented on a single integrated circuit, such as a single PIC. In some implementations, ADCs 950 and RX DSPs 960 may be implemented using one or more ASICs and/or may be implemented on one or more integrated circuits, such as one or more PICs. In some other implementations, local oscillators 920, hybrid mixers 930, detectors 940, ADCs 950, and/or RX DSPs 960 may be implemented on one or more integrated circuits, such as one or more PICs.
Detector 940 may include a photodetector, such as a photodiode, to receive the output optical signal, from hybrid mixer 930, and convert the output optical signal to corresponding voltage signals. In some implementations, detector 940 may detect the portion of the spectrum containing the respective subcarrier.
ADC 950 may include an analog-to-digital converter that converts the voltage signals from detector 940 to digital samples. ADC 950 may provide the digital samples to RX DSP 960. RX DSP 960 may receive the digital samples from ADC 950, demultiplex the samples, perform some processing on the samples, and output the resultant data, as output data.
While FIG. 9 shows optical receiver 226 as including a particular quantity and arrangement of components, in some implementations, optical receiver 226 may include additional components, fewer components, different components, or differently arranged components.
FIG. 10 is a flowchart of an example process 1000 that may be performed by transmitter module 210 and receiver module 220. As shown in FIG. 10, a portion of process 1000 may be performed by transmitter module 210 and a portion of process 1000 may be performed by receiver module 220. Process 1000 will be described with corresponding references to FIG. 3A (for operations performed by transmitter module 210) and FIG. 6A (for operations performed by receiver module 220).
Process 1000 may include receiving input data (block 1005). For example, TX DSP 310 may receive input data from a data source. The data source may output one or more streams of data, which may be processed by TX DSP 310.
Process 1000 may include mapping the input data to subcarriers (block 1010) and determining the integers to supply to the DAC (block 1015). For example, TX DSP 310 may determine the signals to apply to modulator 340 to generate multiple subcarriers. TX DSP 310 may receive streams of data, map the streams of data into respective ones of the subcarriers, independently apply spectral shaping to each of the subcarriers, and obtain, based on the spectral shaping of each of the subcarriers, a sequence of assigned integers to supply to DAC 320. TX DSP 310 may also apply forward error correction to the whole stream of data, or apply forward error correction to the subcarriers. TX DSP 310 may also introduce time skew for the subcarriers to compensate for time skew introduced in link 230.
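The mapping of the input data to subcarriers in block 1010 can be sketched as a simple round-robin distribution. This is only an illustrative stand-in; the patent describes systematic interleaving without specifying its form:

```python
def map_bits_to_subcarriers(bits, num_subcarriers=4):
    """Distribute an input bit stream across subcarriers round-robin,
    a simple stand-in for the interleaving applied by TX DSP 310.
    The receiver undoes this by reading the streams back in the same
    round-robin order."""
    return [bits[i::num_subcarriers] for i in range(num_subcarriers)]
```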
Process 1000 may include generating voltage signals based on the integers (block 1020) and applying the voltage signals to the modulator (block 1025). For example, DAC 320 may receive the sequence of assigned integers and, based on the sequence of assigned integers, generate the voltage signals to apply to modulator 340 using digital-to-analog conversion. DAC 320 may apply the voltage signals to modulator 340.
Process 1000 may include modulating an optical light beam from a laser to form a multiple subcarrier output signal (block 1030) and outputting the multiple subcarrier output signal (block 1035). For example, modulator 340 may receive an optical light beam from laser 330 and the voltage signals from DAC 320, and may modulate the optical light beam to generate a multiple subcarrier output signal. Modulator 340 may output the multiple subcarrier output signal for transmission on link 230.
Process 1000 may include receiving the multiple subcarrier output signal (block 1040) and mixing the multiple subcarrier output signal with a local oscillator signal to generate a resulting optical signal (block 1045). For example, hybrid mixer 620 may receive the multiple subcarrier output signal, which was originally transmitted by transmitter module 210, and an optical signal from local oscillator 610. Hybrid mixer 620 may combine the optical signals to generate a resulting optical signal.
Process 1000 may include converting the resulting optical signal to voltage signals (block 1050). For example, detector 630 may receive the resulting optical signal, from hybrid mixer 620, and convert the resulting optical signal to corresponding voltage signals.
Process 1000 may include converting the voltage signals to digital samples (block 1055). For example, ADC 640 may convert the voltage signals, from detector 630, to digital samples using analog-to-digital conversion.
Process 1000 may include processing the digital samples to produce output data (block 1060) and outputting the output data (block 1065). For example, RX DSP 650 may receive the digital samples from ADC 640, demultiplex the samples according to the subcarriers, independently process the samples for each of the subcarriers, map the processed samples to produce output data, and output the output data. RX DSP 650 may also remove time skew of the subcarriers to compensate for time skew introduced in TX DSP 310 or link 230. RX DSP 650 may de-interleave the data to undo the systematic interleaving performed in TX DSP 310. RX DSP 650 may apply forward error correction decoding either to the combined output bits from all subcarriers or to the output bits of each subcarrier individually.
While FIG. 10 shows process 1000 as including a particular quantity and arrangement of blocks, in some implementations, process 1000 may include fewer blocks, additional blocks, or a different arrangement of blocks. Additionally, or alternatively, some of the blocks may be performed in parallel.
The foregoing description provides illustration and description, but is not intended to be exhaustive or to limit the possible implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of the possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one other claim, the disclosure of the possible implementations includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used in the present application should be construed as critical or essential unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items and may be used interchangeably with "one or more." Where only one item is intended, the term "one" or similar language is used. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
1. A system, comprising:
a digital signal processor that receives an input stream of data, the digital signal processor including:
a forward error correction encoder that receives the input stream of data and outputs an encoded stream of data;
a demultiplexer component, the demultiplexer component receiving the encoded stream of data and outputting a plurality of groups of bits; and
a plurality of transmitters, each of which including: a respective one of a plurality of fast Fourier transform circuits, and a respective one of a plurality of filter circuits,
each of the plurality of fast Fourier transform circuits receiving a respective one of a plurality of input symbols, each of the plurality of fast Fourier transform circuits supplying a respective one of a plurality of frequency domain data, each of the plurality of input symbols being indicative of a respective one of the plurality of groups of bits,
each of the plurality of filter circuits receiving a corresponding one of a plurality of inputs, each of the plurality of inputs to the plurality of filter circuits being indicative of a respective one of the plurality of frequency domain data, each of the plurality of filter circuits supplying a corresponding one of a plurality of filter outputs, and
an inverse fast Fourier transform circuit that operates on data included in the plurality of filter outputs to provide a time domain output,
a digital-to-analog converter (DAC) circuit that receives a digital input indicative of the time domain output of the inverse fast Fourier transform circuit, the DAC circuit supplying analog outputs;
a laser that supplies light; and
a modulator circuit that receives the analog outputs and modulates the light based on the analog outputs to supply an output optical signal that includes a plurality of subcarriers, each of the plurality of subcarriers having a corresponding one of a plurality of frequency spectra, such that each of the plurality of frequency spectra does not overlap with one another in frequency, wherein the plurality of subcarriers is provided to an optical link, the digital signal processor introducing a timing skew among the plurality of subcarriers to correct skew induced by the optical link.
2. A system in accordance with claim 1, wherein the digital signal processor further includes a multiplexer component that receives the plurality of filter outputs and supplies the data to the inverse fast Fourier transform circuit, the timing skew being introduced by at least one of the multiplexer component, the demultiplexer component, and the plurality of transmitters.
3. A system in accordance with claim 1, wherein the digital signal processor and the DAC circuit are implemented as an application specific integrated circuit.
4. A system in accordance with claim 1, further including a photonic integrated circuit, the laser and the modulator circuit being included in the photonic integrated circuit.
5. A system in accordance with claim 1, wherein the digital signal processor further includes a multiplexer component that receives the plurality of filter outputs and supplies the data to the inverse fast Fourier transform circuit.
6. A system in accordance with claim 1, further comprising:
a receiver including:
a detector,
a local oscillator configured to generate a local oscillator signal,
a mixer configured to:
receive the local oscillator signal from the local oscillator and the output optical signal, and combine the local oscillator signal and the output optical signal to generate a resulting optical signal, and output the resulting optical signal to the detector, the detector providing a set of voltage signals based on the resulting optical signal,
an analog-to-digital converter configured to:
receive the set of voltage signals from the detector, and
generate digital samples based on the set of voltage signals; and
a receiver processor configured to:
receive the digital samples from the analog-to-digital converter, process the digital samples to produce output data, and output the output data.
7. A system in accordance with claim 1, wherein the plurality of filter circuits is configured to determine a corresponding one of a plurality of spectra, such that each of the plurality of subcarriers has a respective one of the plurality of spectra.
8. A system, comprising:
an analog to digital converter circuit that receives a voltage signal and outputs a plurality of samples, the voltage signal corresponding to an optical signal having a plurality of subcarriers, each of the plurality of subcarriers having a corresponding one of a plurality of frequency spectra, such that each of the plurality of frequency spectra does not overlap with one another in frequency; and
a digital signal processor that receives the plurality of samples, the digital signal processor including:
a fast Fourier transform (FFT) circuit that receives the plurality of samples and supplies a frequency domain FFT output,
a demultiplexer that receives the frequency domain FFT output and provides demultiplexer outputs,
a plurality of filter circuits, each of which receiving a corresponding one of a plurality of filter inputs, the plurality of filter inputs being based on the demultiplexer outputs, each of the plurality of filter circuits supplying a corresponding one of a plurality of filter outputs, and
a plurality of inverse fast Fourier transform (IFFT) circuits, each of which receiving a respective one of a plurality of IFFT inputs, each of the plurality of IFFT inputs being indicative of a corresponding one of the filter outputs, each of the plurality of IFFT circuits supplying a respective one of IFFT outputs, wherein the plurality of subcarriers propagate over an optical link, the digital signal processor introducing a de-skew that corrects skew induced by the optical link or from a digital signal processor provided in a transmitter that outputs the optical signal.
9. A system in accordance with claim 8, wherein a number of the plurality of inverse fast Fourier transform circuits is equal to a number of the plurality of subcarriers.
10. A system in accordance with claim 8, further including a symbol to bits component that receives symbols indicative of the IFFT outputs, the symbol to bits component outputting bits corresponding to symbols.
11. A system in accordance with claim 10, further including a multiplexer that receives inputs indicative of the bits.
12. A system in accordance with claim 11, further including a forward error correction decoder that receives an output of the multiplexer.
13. A system in accordance with claim 8, further including:
a local oscillator that supplies an optical output;
a hybrid mixer that receives and mixes the optical output of the local oscillator and the optical signal; and
a detector circuit that receives the optical signal and generates the voltage signal.
14. A system in accordance with claim 8, further including a polarization mode dispersion (PMD) equalization component that applies PMD equalization to one of the plurality of filter outputs.
15. A system in accordance with claim 14, wherein the PMD equalized plurality of filter outputs are supplied as the plurality of IFFT inputs.
16. A system in accordance with claim 8, further including carrier recovery circuits that receive carrier recovery inputs indicative of the IFFT outputs, the carrier recovery circuits compensating for phase and frequency differences between the optical output of the local oscillator and the optical signal including the plurality of subcarriers.
17. A system in accordance with claim 8, wherein a number of the plurality of fast Fourier transform circuits is equal to a number of the plurality of subcarriers.
18. A system in accordance with claim 8, wherein the digital signal processor includes:
a plurality of receivers, each of which including a respective one of the plurality of filter circuits and a respective one of the plurality of IFFT circuits; and
a multiplexer that receives a plurality of multiplexer inputs, each of which being indicative of a corresponding one of the plurality of IFFT outputs,
wherein the de-skew is introduced by at least one of (i) the plurality of receivers and (ii) the multiplexer.
19. A system, comprising:
a digital signal processor that receives an input stream of data, the digital signal processor including:
a demultiplexer component receiving the encoded stream of data and outputting a plurality of groups of bits, the demultiplexer component having a plurality of outputs;
a plurality of bits-to-symbol circuits, each of which being coupled to a respective one of the plurality of outputs of the demultiplexer component, each of the plurality of bits-to-symbol circuits providing a respective one of a plurality of symbols based on a corresponding one of the plurality of groups of bits;
a plurality of buffer circuits, each of which receiving a corresponding one of the plurality of symbols, each of the plurality of buffer circuits storing a respective one of the plurality of symbols;
a plurality of fast Fourier transform circuits, each of which receiving a respective one of a plurality of symbols output from a respective one of the plurality of buffer circuits, each of the plurality of fast Fourier transform circuits supplying frequency domain data;
a plurality of replicator circuits that receive and replicate the frequency domain data;
a plurality of filter circuits that receive the replicated frequency domain data, each of the plurality of filter circuits supplying a corresponding one of a plurality of filter outputs,
a multiplexer component that combines the plurality of filter outputs and supplies a multiplexer output; and
an inverse fast Fourier transform circuit that operates on the multiplexer output to supply a time domain output,
a modulator circuit that receives the analog outputs and modulates the light based on the analog outputs to supply an output optical signal that includes a plurality of subcarriers, each of the plurality of subcarriers having a corresponding one of a plurality of frequency spectra, such that each of the plurality of frequency spectra does not overlap with one another in frequency, wherein the plurality of subcarriers is provided to an optical link, the digital signal processor introducing a timing skew that corrects skew induced by the optical link.
20. A receiver, comprising:
an analog to digital converter circuit that receives a voltage signal and outputs a plurality of samples, the voltage signal corresponding to an optical signal having a plurality of subcarriers, each of the plurality of subcarriers having a corresponding one of a plurality of frequency spectra, such that each of the plurality of frequency spectra does not overlap with one another in frequency; and
a plurality of filter circuits, each of which receiving a corresponding one of a plurality of filter inputs, the plurality of filter inputs being based on the demultiplexer outputs, each of the plurality of filter circuits supplying a corresponding one of a plurality of filter outputs,
a plurality of polarization mode dispersion (PMD) equalization circuits, each of which applying PMD equalization to a corresponding one of the plurality of filter outputs;
a plurality of inverse fast Fourier transform (IFFT) circuits, each of which receiving a respective one of a plurality of IFFT inputs, each of the plurality of IFFT inputs being indicative of an output of a respective one of the plurality of PMD equalization circuits, each of the plurality of IFFT circuits supplying a respective one of IFFT outputs;
a take-last circuit that selects a subset of the IFFT outputs;
a plurality of carrier recovery circuits, each of which supplying a corresponding one of a plurality of symbols based on a corresponding IFFT output of the selected subset of IFFT outputs;
a plurality of symbols-to-bits circuits, each of which receiving a respective one of the plurality of symbols and supplying a corresponding one of a plurality of groups of bits;
a multiplexer that combines the plurality of groups of bits at an output; and
a forward error correction (FEC) decoder that receives the plurality of groups of bits output from the multiplexer and outputs decoded data, wherein the plurality of subcarriers propagate on an optical link, the digital signal processor introducing a de-skew that corrects skew induced by the optical link or from a digital signal processor provided in a transmitter that outputs the optical signal.
21. A transmitter, comprising:
a demultiplexer component, the demultiplexer component receiving the encoded stream of data and outputting a plurality of groups of bits;
a plurality of fast Fourier transform circuits, each of which receiving a respective one of a plurality of input symbols, each of the plurality of fast Fourier transform circuits supplying a respective one of a plurality of frequency domain data, each of the plurality of input symbols being indicative of a respective one of the plurality of groups of bits;
a plurality of filter circuits, each of which receiving a corresponding one of a plurality of inputs, each of the plurality of inputs to the plurality of filter circuits being indicative of a respective one of the plurality of frequency domain data, each of the plurality of filter circuits supplying a corresponding one of a plurality of filter outputs, and
a modulator circuit that receives the analog outputs and modulates the light based on the analog outputs to supply an output optical signal including a plurality of subcarriers, such that the output optical signal is provided to an optical fiber, each of the plurality of subcarriers output to the optical fiber having a corresponding one of a plurality of frequency spectra that do not overlap with one another in frequency.
22. A receiver, comprising:
an analog to digital converter circuit that receives a voltage signal and outputs a plurality of samples, the voltage signal corresponding to an optical signal that is input to an optical fiber, the optical signal including a plurality of subcarriers, such that the optical signal is transmitted on the optical fiber, each of the plurality of subcarriers input to the optical fiber having a corresponding one of a plurality of frequency spectra that do not overlap with one another in frequency; and
a plurality of inverse fast Fourier transform (IFFT) circuits, each of which receiving a respective one of a plurality of IFFT inputs, each of the plurality of IFFT inputs being indicative of a corresponding one of the filter outputs, each of the plurality of IFFT circuits supplying a respective one of IFFT outputs.
a forward error correction (FEC) decoder that receives the plurality of groups of bits output from the multiplexer and outputs decoded data.
US13/630,630 2012-09-28 2012-09-28 Channel carrying multiple digital subcarriers Active 2034-05-05 US10014975B2 (en)
US20080085125A1 (en) * 2006-10-06 2008-04-10 Ciena Corporation All-optical regenerator and optical network incorporating same
US20090092389A1 (en) * 2007-10-08 2009-04-09 Nec Laboratories America, Inc. Orthogonal Frequency Division Multiple Access Based Optical Ring Network
US20090154336A1 (en) * 2007-12-13 2009-06-18 Nokia Siemens Networks Oy Continuous phase modulation processing for wireless networks
US20090190929A1 (en) * 2007-02-27 2009-07-30 Celight, Inc. Optical orthogonal frequency division multiplexed communications with nonlinearity compensation
US20090214224A1 (en) * 2007-04-03 2009-08-27 Celight, Inc. Method and apparatus for coherent analog rf photonic transmission
US20090232234A1 (en) * 2006-07-05 2009-09-17 Koninklijke Philips Electronics N.V. Bandwidth asymmetric communication system
US20090257344A1 (en) * 2008-04-14 2009-10-15 Nec Laboratories America All optical ofdm with integrated coupler based ifft/fft and pulse interleaving
US20100021163A1 (en) * 2008-07-24 2010-01-28 The University Of Melbourne Method and system for polarization supported optical transmission
US20100086303A1 (en) * 2008-10-02 2010-04-08 Nec Laboratories America Inc High speed polmux-ofdm using dual-polmux carriers and direct detection
US20100178057A1 (en) * 2009-01-08 2010-07-15 The University Of Melbourne Signal method and apparatus
US20110135301A1 (en) * 2009-12-08 2011-06-09 Vello Systems, Inc. Wavelocker for Improving Laser Wavelength Accuracy in WDM Networks
US20110176813A1 (en) * 2010-01-20 2011-07-21 Inwoong Kim Method and System for Electrical Domain Optical Spectrum Shaping
US20110182577A1 (en) * 2010-01-25 2011-07-28 Infinera Corporation Method, system, and apparatus for filter implementation using hermitian conjugates
US20110249978A1 (en) * 2008-12-22 2011-10-13 Hitachi, Ltd. Optical Transmitter and Optical OFDM Communication System
US20110255870A1 (en) * 2010-01-21 2011-10-20 Grigoryan Vladimir S Optical transceivers for use in fiber optic communication networks
US20120002703A1 (en) * 2009-04-01 2012-01-05 Nippon Telegraph And Telephone Corporation Wireless transmission method, wireless transmission system, and transmission apparatus and reception apparatus of wireless transmission system
US20120033965A1 (en) * 2010-08-06 2012-02-09 Futurewei Technologies, Inc. Method and Apparatus for Broadband Carrier Frequency and Phase Recovery in Coherent Optical System
US20120093510A1 (en) * 2010-10-15 2012-04-19 Tyco Electronics Subsea Communications Llc Correlation -control qpsk transmitter
US20120141135A1 (en) * 2010-12-03 2012-06-07 Wuhan Research Institute Of Posts And Telecommunications Optical Communication System, Device and Method Employing Advanced Coding and High Modulation Order
US20120251121A1 (en) * 2011-04-01 2012-10-04 Mcnicol John D Periodic Superchannel Carrier Arrangement for Optical Communication Systems
US20130070786A1 (en) * 2011-09-16 2013-03-21 Xiang Liu Communication Through Phase-Conjugated Optical Variants
US20140010543A1 (en) * 2012-07-05 2014-01-09 Kun-Jing LEE Tunable coherent optical receiver and method
Bingham, "Multicarrier Modulation for Data Transmission: An Idea Whose Time Has Come", IEEE Communications Magazine, pp. 5-14, May 1990, 8 pages.
Greshishchev et al., "A 56GS/s 6b DAC in 65nm CMOS with 256x6b Memory", ISSCC 2011/Session 10/Nyquist-Rate Converters/10.8, 2011 IEEE International Solid-State Circuits Conference, 3 pages.
Rahn et al., "Real-Time PIC-based Super-Channel Transmission Over a Gridless 6000km Terrestrial Link", OFC/NFOEC Postdeadline Papers, Mar. 2012, 3 pages.
Sun et al., "Real-Time Measurements of a 40 Gb/s Coherent System", Jan. 21, 2008, vol. 16, No. 2, Optics Express, pp. 873-879.
Yan et al., "Experimental Comparison of No-Guard-Interval-OFDM and Nyquist-WDM Superchannels", OFC/NFOEC Technical Digest, Jan. 23, 2012, 4 pages.
Zhang et al., "3760km, 100G SSMF Transmission over Commercial Terrestrial DWDM ROADM Systems using SD-FEC", OFC/NFOEC Postdeadline Papers, Mar. 2012, 3 pages.
Zhuge et al., "Comparison of Intra-Channel Nonlinearity Tolerance Between Reduced-Guard-Interval CO-OFDM Systems and Nyquist Single Carrier Systems", OFC/NFOEC Technical Digest, Jan. 23, 2012, 4 pages.
Hillerkuss et al. 2010 Simple all-optical FFT scheme enabling Tbit/s real-time signal processing
US8218979B2 (en) 2012-07-10 System, method and apparatus for coherent optical OFDM
US8406638B2 (en) 2013-03-26 Coherent light receiving system
CN101064567B (en) 2011-09-14 Optical transmitter
US8437638B2 (en) 2013-05-07 Optical modulation circuit and optical transmission system
US7555216B2 (en) 2009-06-30 Optical communication system using optical frequency code, optical transmission device and optical reception device thereof, and reflection type optical communication device
EP2461498B1 (en) 2017-09-27 Optical transmitter and optical transmitter unit
CN101432998B (en) 2013-08-21 Partial dpsk (pdpsk) transmission systems
Dischler et al. 2009 Transmission of 1.2 Tb/s continuous waveband PDM-OFDM-FDM signal with spectral efficiency of 3.3 bit/s/Hz over 400 km of SSMF
JP5916611B2 (en) 2016-05-11 Digital coherent detection of the multi-carrier optical signal
US20050008369A1 (en) 2005-01-13 Optical device with tunable coherent receiver
Zhang et al. 2013 Multichannel 120-Gb/s Data Transmission Over 2×2 MIMO Fiber-Wireless Link at W-Band
US7076169B2 (en) 2006-07-11 System and method for orthogonal frequency division multiplexed optical communication
JP5476697B2 (en) 2014-04-23 Optical signal transmitter
US20100135656A1 (en) 2010-06-03 Optical orthogonal frequency division multiplexed communications with nonlinearity compensation
EP2178228B1 (en) 2011-05-04 Optical receiver and optical receiving method
Zhou et al. 2011 8× 450-Gb/s, 50-GHz-Spaced, PDM-32QAM transmission over 400km and one 50GHz-grid ROADM
Qian et al. 2012 High capacity/spectral efficiency 101.7-Tb/s WDM transmission using PDM-128QAM-OFDM over 165-km SSMF within C-and L-bands
KR101281960B1 (en) 2013-07-03 Multi-wavelength coherent receiver with a shared optical hybrid and a multi-wavelength local oscillator
EP2051414A1 (en) 2009-04-22 Optical receiver systems and methods for polarization demultiplexing, PMD compensation, and DXPSK demodulation
US8837951B2 (en) 2014-09-16 40, 50 and 100 Gb/s optical transceivers/transponders in 300pin and CFP MSA modules
Lee et al. 2008 All optical discrete Fourier transform processor for 100 Gbps OFDM transmission
US20100021166A1 (en) 2010-01-28 Spectrally Efficient Parallel Optical WDM Channels for Long-Haul MAN and WAN Optical Networks
WO2002045297A2 (en) 2002-06-06 Optical communications using multiplexed single sideband transmission and heterodyne detection
Pfau et al. 2008 Coherent optical communication: Towards realtime systems at 40 Gbit/s and beyond
Owner name: INFINERA CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRAUSE, DAVID JAMES;SUN, HAN H.;WU, YUEJIAN;AND OTHERS;SIGNING DATES FROM 20140716 TO 20140721;REEL/FRAME:033364/0328
Generalized Bol functional equation. V. D. Belousov and Pl. Kannappan; 259–265
Gelfand and Wallman-type compactifications. Charles M. Biles; 267–278
A generalization of martingales and two consequent convergence theorems. Louis H. Blake; 279–283
On $p$-spaces and $w\Delta$-spaces. Dennis K. Burke; 285–296
Almost smooth perturbations of self-adjoint operators. John B. Butler, Jr.; 297–306
Isomorphisms of $C_{0}(Y)$ onto $C(X)$. Michael Cambern; 307–312
A conditionally compact point set with noncompact closure. David E. Cook; 313–319
Countable Boolean algebras as subalgebras and homomorphs. T. E. Cramer; 321–326
A $v$-integral representation for linear operators on spaces of continuous functions with values in topological vector spaces. J. R. Edwards and S. G. Wayment; 327–330
Similarities involving normal operators on Hilbert space. Mary R. Embry; 331–336
Oscillation theorems for second order linear differential equations. Lynn Erbe; 337–343
Local behaviour of area functions of convex bodies. William J. Firey; 345–357
The primary decomposition theory for modules. Joe W. Fisher; 359–367
Generic splitting algebras for ${\rm Pic}$. Gerald Garfinkel; 369–380
Function space topologies. J. D. Hansard; 381–388
Quasifibration and adjunction. K. A. Hardie; 389–397
Coverings of pro-affine algebraic groups. G. Hochschild; 399–415
On nets of contractive maps in uniform spaces. G. L. Itzkowitz; 417–423
Groups with free nonabelian subgroups. Melven Krom and Myren Krom; 425–427
Upper and lower bounds for eigenvalues by finite differences. J. R. Kuttler; 429–440
A new approach to representation theory for convolution transforms. D. Leviatan; 441–449
Perfect subsets of definable sets of real numbers. Richard Mansfield; 451–457
A necessary and sufficient condition for the embedding of a Lindelöf space in a Hausdorff ${\cal K}\sigma$ space. Brenda Mac Gibbon; 459–465
Ritt's question on the Wronskian. B. D. McLemore and D. G. Mead; 467–472
Focal points in a control problem. E. Y. Mikami; 473–485
Characterizing the distributions of three independent $n$-dimensional random variables, $X_{1},\,X_{2},\,X_{3},$ having analytic characteristic functions by the joint distribution of $(X_{1}+X_{3},\,X_{2}+X_{3})$. Paul G. Miller; 487–491
On the Bergman integral operator for an elliptic partial differential equation with a singular coefficient. P. Rosenthal; 493–497
On the number of finitely generated $O$-groups. Douglas B. Smith; 499–502
Concerning the domains of generators of linear semigroups. J. W. Spellmann; 503–509
An approximation theorem for subalgebras of $H_{\infty}$. Arne Stray; 511–515
Self-adjoint differential operators. Arnold L. Villone; 517–531
linear algebra and its applications answers
The largest possible dimension of Linear algebra is relatively easy for students during the early stages of the course, when the material is presented in a familiar, concrete setting.
Linear Algebra and Its Applications | 4th Edition. Why buy extra books when you can get all the homework help you need in one place? How is Chegg Study better than a printed Linear Algebra And Its Applications 4th Edition student solution manual from the bookstore? Instructors seem to agree that certain concepts (such as linear independence, spanning, subspace, vector space, and linear transformations), are not easily understood, and require time to assimilate. -$2x_{1}+7x_{2}=5$ Unlock your Linear Algebra and Its Applications PDF (Profound Dynamic Fulfillment) today. No need to wait for office hours or assignments to be graded to find out where you took a wrong turn. Solutions Manuals are available for thousands of the most popular college and high school textbooks in subjects such as Math, Science (Physics, Chemistry, Biology), Engineering (Mechanical, Electrical, Civil), Business and more. 0 0 0 2 4 . 0 0 0 1 2 Since problems from 65 chapters in Linear Algebra and Its Applications have been answered, more than 34610 students have viewed full step-by-step answer. It does a great job in showing real life applications of the concepts presented throughout the book.
The $2x_{1}$ cancels out and you are left with $3x_{2}=9$ Since they are fundamental to the study of linear algebra, students' understanding of these concepts is vital to their mastery of the subject. Divide both sides by 3 and receive $x_{2}=3$ Textbook Authors: Lay, David C.; Lay, Steven R.; McDonald, Judi J. , ISBN-10: 0-32198-238-X, ISBN-13: 978-0-32198-238-4, Publisher: Pearson No need to wait for office hours or assignments to be graded to find out where you took a wrong turn. 0 0 1 0 5 The smallest possible dimension of Just post a question you need help with, and one of our experts will provide a custom solution. Objective is to find the largest possible dimension of. add 3 times the 4th row to the 3rd row Our interactive player makes it easy to find solutions to Linear Algebra And Its Applications 4th Edition problems you're working on - just go to the chapter for your book. 0 1 0 0 8 Shed the societal and cultural narratives holding you back and let step-by-step Linear Algebra and Its Applications textbook solutions reorient your old paradigms. Ask our subject experts for help answering any of your homework questions!
An editor this answer. Why is Chegg Study better than downloaded Linear Algebra And Its Applications 4th Edition PDF solution manuals? Linear Algebra and Its Applications (5th Edition) answers to Chapter 2 - Matrix Algebra - 2.1 Exercises - Page 102 1 including work step by step written by community members like you. to get access to your one-sheeter, Linear Algebra and Its Applications, 5th Edition, Linear Models in Business, Science, and Engineering, Cramer's Rule, Volume, and Linear Transformations, Null Spaces, Column Spaces, and Linear Transformations, Applications to Image Processing and Statistics. $-2x_{1}-7x_{2}=-5$ $2x_{1}+10x_{2}=14$ in the above formula. add 3 times the 3rd row to the 2nd row will review the submission and either publish your submission or provide feedback. is shown below: The largest possible dimension of Unlike static PDF Linear Algebra And Its Applications 4th Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. Now you can subtract one equation from the other to get a new equation with ONLY ONE TERM.
$2x_{1}+7x_{2}=5$ v + w = (VI + WI, ... , Vn + Wn ) = diagonal of parallelogram. View an educator-verified, detailed solution for Chapter 1, Problem 23 in Lay/Lay/McDonald's Linear Algebra and Its Applications (5th Edition). $16-21=-5$
3rd Edition, Companion Website for Linear Algebra and Its Applications with CD-ROM, Update, Linear Algebra and Its Applications (5th Edition), Thomas' Calculus and Linear Algebra and Its Applications Package for the Georgia Institute of Technology, 1/e, Student Study Guide for Linear Algebra and Its Applications, Linear Algebra and Its Applications, Books a la Carte Edition (5th Edition), Linear Algebra and Its Application - With MathXL, Linear Algebra & Its Applications 5th Ed Instructor's Edition, Linear Algebra and Its Applications; Student Study Guide for Linear Algebra and Its ApplicationsStudent Study Guide for Linear Algebra and Its Applications (5th Edition), Linear Algebra and Its Applications, Books a la Carte Edition Plus MyLab Math with Pearson eText -- Access Code Card (5th Edition), Linear Algebra and Its Applications plus New MyLab Math with Pearson eText -- Access Card Package (5th Edition) (Featured Titles for Linear Algebra (Introductory)), Linear Algebra Plus Mymathlab Getting Started Kit for Linear Algebra and Its Applications, Student Study Guide For Linear Algebra And Its Applications, Linear Algebra and Its Applications, 4th Edition, Linear Algebra And Its Applications, Books A La Carte Edition (4th Edition), Linear Algebra And Its Applications, Mymathlab, And Student Study Guide (4th Edition), Linear Algebra and Its Applications with Student Study Guide (4th Edition), Linear Algebra And Its Applications, Books A La Carte Edition Plus New Mymathlab With Pearson Etext -- Access Card Package (4th Edition), Linear Algebra And Its Applications, Custom Edition For Idaho State University, 2/e, Linear Algebra And Its Applications Package For University Of Arkansas Fort Smith, Linear Algebra And Its Applications (custom Edition For Byu), Instructor's Matlab Manual: Linear Algebra And Its Applications, College Algebra, Books a la Carte Edition Plus NEW MyMathLab -- Access Card Package (6th Edition), MyLab Math with Pearson eText -- Standalone Access 
Card -- for Algebra and Trigonometry (6th Edition), College Algebra, Books A La Carte Edition Plus MyLab Math with eText -- Access Card Package (7th Edition), MyLab Math with Pearson eText -- Standalone Access Card -- for College Algebra Essentials (5th Edition) (Cisco Top Score (NRP)), College Algebra with Modeling & Visualization, Books a la Carte Edition plus MyLab Math with Pearson eText -- Access Card Package (6th Edition), Aleks 360 Access Code (18 weeks) for College Algebra & Trigonometry, College Algebra Essentials, Books a la Carte Edition Plus Mymathlab with New Pearson Etext -- Access Card Package, Aleks 360 Access Card (18 Weeks) for College Algebra, ALEKS 360 Access Card 18 Weeks for Beginning and Intermediate Algebra, College Algebra: Graphs and Models, Books a la Carte Edition plus MyLab Math with Pearson eText -- Access Card Package (6th Edition).
\begin{definition}[Definition:Dominating Strategy]
Let $G$ be a game.
Let player $P$ have pure strategies $A_1$ and $A_2$ in $G$.
Then $A_1$ '''dominates''' $A_2$ {{iff}}:
:for any strategy of an opposing player, $A_1$ is at least as good as $A_2$
:for at least one strategy of an opposing player, $A_1$ is strictly better than $A_2$.
\end{definition}
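As a small illustrative example (ours, not part of the original definition), consider a game in which player $P$ chooses a row and a single opponent chooses a column, the entries being $P$'s payoffs:

```latex
\[
\begin{array}{c|cc}
      & B_1 & B_2 \\ \hline
A_1   & 3   & 5   \\
A_2   & 3   & 2
\end{array}
\]
```

Against $B_1$ the two strategies are equally good ($3 = 3$), and against $B_2$ strategy $A_1$ is strictly better ($5 > 2$), so $A_1$ dominates $A_2$. Note that both conditions are needed: if the $5$ were replaced by a $3$, the second condition would fail and $A_1$ would no longer dominate $A_2$.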
Crout matrix decomposition
In linear algebra, the Crout matrix decomposition is an LU decomposition which decomposes a matrix into a lower triangular matrix (L), an upper triangular matrix (U) and, although not always needed, a permutation matrix (P). It was developed by Prescott Durand Crout.[1]
The Crout matrix decomposition algorithm differs slightly from the Doolittle method. Doolittle's method returns a unit lower triangular matrix and an upper triangular matrix, while the Crout method returns a lower triangular matrix and a unit upper triangular matrix.
So, if a matrix decomposition of a matrix A is such that:
A = LDU
where L is a unit lower triangular matrix, D is a diagonal matrix, and U is a unit upper triangular matrix, Doolittle's method produces
A = L(DU)
and Crout's method produces
A = (LD)U.
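A small worked example (the matrix is chosen here purely for illustration) makes the two groupings concrete. Take

```latex
\[
A \;=\; \begin{pmatrix} 4 & 3 \\ 6 & 3 \end{pmatrix}
  \;=\;
  \underbrace{\begin{pmatrix} 1 & 0 \\ 3/2 & 1 \end{pmatrix}}_{L}
  \underbrace{\begin{pmatrix} 4 & 0 \\ 0 & -3/2 \end{pmatrix}}_{D}
  \underbrace{\begin{pmatrix} 1 & 3/4 \\ 0 & 1 \end{pmatrix}}_{U}.
\]
```

Doolittle's grouping L(DU) folds D into the upper factor, so the pivots 4 and -3/2 appear on the diagonal of the returned upper triangular matrix DU; Crout's grouping (LD)U folds D into the lower factor, so the same pivots appear on the diagonal of the returned lower triangular matrix LD. Both groupings multiply out to the same A.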
Implementations
C implementation:
#include <stdio.h>
#include <stdlib.h>

void crout(double const **A, double **L, double **U, int n) {
	int i, j, k;
	double sum = 0;

	/* U is unit upper triangular in Crout's method. */
	for (i = 0; i < n; i++) {
		U[i][i] = 1;
	}

	for (j = 0; j < n; j++) {
		/* Compute column j of L. */
		for (i = j; i < n; i++) {
			sum = 0;
			for (k = 0; k < j; k++) {
				sum = sum + L[i][k] * U[k][j];
			}
			L[i][j] = A[i][j] - sum;
		}

		/* Compute row j of U; the pivot L[j][j] must be nonzero. */
		for (i = j; i < n; i++) {
			sum = 0;
			for (k = 0; k < j; k++) {
				sum = sum + L[j][k] * U[k][i];
			}
			if (L[j][j] == 0) {
				fprintf(stderr, "Zero pivot L[%d][%d]: cannot divide by 0.\n", j, j);
				exit(EXIT_FAILURE);
			}
			U[j][i] = (A[j][i] - sum) / L[j][j];
		}
	}
}
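As a quick sanity check of the recurrences above, the following self-contained sketch re-implements the same method for a fixed 3×3 case (the names `crout3` and `crout3_residual` are ours, not part of any library) and verifies that L·U reproduces A. It reports a zero pivot through a return code instead of exiting, but the arithmetic is the same.

```c
#include <assert.h>
#include <math.h>

#define N 3

/* Crout decomposition of a fixed-size N x N matrix: A = L * U with L
 * lower triangular and U unit upper triangular.  Same recurrences as
 * the pointer-based version above; returns 0 on success and -1 on a
 * zero pivot (in which case no Crout LU exists without pivoting). */
int crout3(double A[N][N], double L[N][N], double U[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            L[i][j] = 0.0;
            U[i][j] = (i == j) ? 1.0 : 0.0;
        }
    for (int j = 0; j < N; j++) {
        /* Column j of L. */
        for (int i = j; i < N; i++) {
            double sum = 0.0;
            for (int k = 0; k < j; k++)
                sum += L[i][k] * U[k][j];
            L[i][j] = A[i][j] - sum;
        }
        if (L[j][j] == 0.0)
            return -1;
        /* Row j of U, strictly above the unit diagonal. */
        for (int i = j + 1; i < N; i++) {
            double sum = 0.0;
            for (int k = 0; k < j; k++)
                sum += L[j][k] * U[k][i];
            U[j][i] = (A[j][i] - sum) / L[j][j];
        }
    }
    return 0;
}

/* Largest absolute entry of L*U - A, for checking the factorization. */
double crout3_residual(double A[N][N], double L[N][N], double U[N][N]) {
    double worst = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += L[i][k] * U[k][j];
            if (fabs(s - A[i][j]) > worst)
                worst = fabs(s - A[i][j]);
        }
    return worst;
}
```

Because U is unit triangular, the first column of L is simply the first column of A, which gives an easy spot check in addition to the residual test.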
Octave/Matlab implementation:
function [L, U] = LUdecompCrout(A)
[R, C] = size(A);
for i = 1:R
L(i, 1) = A(i, 1);
U(i, i) = 1;
end
for j = 2:R
U(1, j) = A(1, j) / L(1, 1);
end
for i = 2:R
for j = 2:i
L(i, j) = A(i, j) - L(i, 1:j - 1) * U(1:j - 1, j);
end
for j = i + 1:R
U(i, j) = (A(i, j) - L(i, 1:i - 1) * U(1:i - 1, j)) / L(i, i);
end
end
end
References
1. Press, William H. (2007). Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press. pp. 50–52. ISBN 9780521880688.
• Implementation using functions in MATLAB
\begin{document}
\title{Stochastic Galerkin finite element method with local conductivity basis for electrical impedance tomography}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{ Aalto University, Department of Mathematics and Systems Analysis, P.O. Box 11100, FI-00076 Aalto, Finland ([email protected], [email protected]). This work was supported by the Academy of Finland (decision 267789) and the Finnish Doctoral Programme in Computational Sciences FICS. }
\begin{abstract} The objective of electrical impedance tomography is to deduce information about the conductivity inside a physical body from electrode measurements of current and voltage at the object boundary. In this work, the unknown conductivity is modeled as a random field parametrized by its values at a set of pixels. The uncertainty in the pixel values is propagated to the electrode measurements by numerically solving the forward problem of impedance tomography by a stochastic Galerkin finite element method in the framework of the complete electrode model. For a given set of electrode measurements, the stochastic forward solution is employed in approximately parametrizing the posterior probability density of the conductivity and contact resistances. Subsequently, the conductivity is reconstructed by computing the {\em maximum a posteriori} and {\em conditional mean} estimates as well as the posterior covariance. The functionality of this approach is demonstrated with experimental water tank data. \end{abstract}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{keywords} sGFEM, electrical impedance tomography, experimental data, complete electrode model, local random basis \end{keywords}
\begin{AMS} 65N21, 35R60, 60H15 \end{AMS}
\pagestyle{myheadings} \thispagestyle{plain} \markboth{N.~HYV\"ONEN AND M.~LEINONEN}{SGFEM WITH LOCAL CONDUCTIVITY BASIS FOR EIT}
\section{Introduction} The aim of {\em electrical impedance tomography} (EIT) is to retrieve useful information about the conductivity inside an examined physical body based on boundary measurements of current and voltage. In practice, the boundary data are gathered with a finite number of contact electrodes; the most accurate model for EIT is the {\em complete electrode model} (CEM) \cite{Cheng89,Somersalo92}, which takes into account the electrode shapes and the contact resistances at the electrode-object interfaces. EIT has potential applications in, e.g., medical imaging, monitoring of industrial processes, and nondestructive testing of materials; see the review articles \cite{Adler11,Borcea02,Cheney99,Lionheart03,Uhlmann09} and the references therein for more information on EIT and related mathematics.
This work considers EIT from the standpoint of uncertainty quantification. The to-be-reconstructed conductivity is modeled as a random field parametrized by uniformly distributed mutually independent random variables representing the conductivity levels at a set of pixels. The range of the pixel values is chosen based on prior information, while the number of pixels is mainly dictated by computational restrictions. The contact conductances, i.e., the reciprocals of the contact resistances, are also assigned uniform prior densities. For a given measurement configuration, the uncertainty in the conductivity field and the contact resistances is propagated to the electrode measurements by approximately solving the stochastic version of the CEM forward problem by a {\em stochastic Galerkin finite element method} (sGFEM) \cite{Ghanem03,Schwab11a}, which in our case corresponds to discretizing the spatial domain by piecewise linear FEM basis functions and the stochastic domain by a spectral Galerkin method with a Legendre polynomial basis (cf.~\cite{Xiu10}). These steps can be carried out off-line,~i.e.,~prior to the actual measurements, assuming the measurement geometry as well as the ranges for the conductivity and contact conductance values are known in advance.
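For concreteness, the pixelwise parametrization described above can be written out explicitly; the notation below (pixels $D_1,\dots,D_K$ and their characteristic functions $\chi_{D_k}$) is introduced here for illustration and is not part of the cited formulation:

```latex
\[
  \sigma(\omega, \textbf{x}) \, = \, \sum_{k=1}^{K} \sigma_k(\omega) \, \chi_{D_k}(\textbf{x}),
  \qquad
  \sigma_k \sim \mathcal{U}(\sigma_{\textrm{min}}, \sigma_{\textrm{max}})
  \ \ \text{mutually independently},
\]
```

where $D_1, \dots, D_K$ are disjoint pixels covering $D$. A field of this form automatically satisfies the uniform positivity and boundedness assumptions imposed on the conductivity below.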
After the electrode potentials corresponding to a set of applied current patterns have been measured, the stochastic forward solution can be used to explicitly write an approximate parametrization for the {\em posterior} density of the conductivity,~i.e.,~for the {\em posterior} of the pixelwise conductivity levels. At this stage, it is also possible to `update' the prior in case one has more specific information on the particular conductivity at hand. In this work, the information on the range of the pixelwise conductivity levels assumed in the forward solver is complemented by a Gaussian smoothness prior, but we want to emphasize that other forms of {\em a priori} information could as well be incorporated in the inverse solver. The actual conductivity reconstructions are obtained by computing {\em maximum a posteriori} (MAP) and {\em conditional mean} (CM) estimates,~i.e.,~the maximum point and the expected value of the approximate posterior density, respectively. In our setting, the computation of the former corresponds to minimizing a high-dimensional positive-valued polynomial, whereas the latter deals with high-dimensional integration with an explicitly known integrand. The reconstructions of the conductivity are complemented with visualizations of the posterior standard deviation.
The papers \cite{Leinonen14,Hakula14} introduced a reconstruction method for two-dimensional EIT by applying sGFEM to the CEM under the assumption that the conductivity is {\em a priori} known to be a lognormal random field. To be more precise, the conductivity was parametrized using its truncated exponential Karhunen--Lo\`eve expansion, and reconstructions were computed by estimating the random coefficients in the truncated expansion on the basis of (simulated) measurement data. Although the assumption of lognormality can be considered natural \cite{Leinonen14}, the major drawback of the approach in \cite{Hakula14} is that the spatial and stochastic components of the sGFEM solution cannot be decoupled, which results in relatively full system matrices (cf.~\cite[Section~6.1]{Hakula14}). This can easily be a deal-breaker in practical EIT since the accurate enough solution of the stochastic CEM forward model by sGFEM requires the use of a {\em high} number of degrees of freedom. The algorithm presented in this work can be considered a modified version of the one in \cite{Hakula14}, aiming at better computational feasibility: The pixelwise parametrization by uniformly distributed random variables results in a very sparse sGFEM system and it also allows trivial control over the positivity of the conductivity. Compared with \cite{Hakula14}, our new algorithm makes it possible to straightforwardly update the prior information on the conductivity in the on-line solution phase and to consider the estimation of a higher number of parameters from electrode measurements, resulting in improved reconstructions.
Compared with previous Bayesian techniques for tackling the inverse problem of practical EIT (see,~e.g.,~\cite{Darde13b,Heikkinen02,Kaipio00,Karhunen10} and the references therein), the main advantage of our approach is the following: Our method produces an (approximate) parametrization of the posterior density, i.e.,~of the idealized solution to the inverse problem in the Bayesian sense, which makes it possible to analyze the posterior without referring to the elliptic boundary value problem associated to the CEM. (In the `standard' Bayesian approach to EIT, each evaluation of the posterior density requires solving as many deterministic CEM forward problems as there are applied current patterns.) In particular, if the sGFEM solution of the CEM has been computed prior to the measurements, reconstructions and corresponding uncertainty estimates for the conductivity can be produced without ever returning to the CEM forward problem itself. This leads to obvious computational benefits because evaluating explicitly known functions is typically cheaper than solving several elliptic boundary value problems. The obvious disadvantage of the proposed method is the requirement of precomputing an accurate enough sGFEM forward solution for the CEM. However, the inevitable increase in computational resources and further development of stochastic finite element algorithms (see,~e.g.,~\cite{Bieri09a}) may well facilitate a satisfactory solution to this problem in the future.
The approach of this work is purely computational: based on experimental data from water tank experiments, we demonstrate that the introduced algorithm produces two-dimensional reconstructions that are arguably almost as good as the state-of-the-art Bayesian reconstructions from experimental data under a {\em smoothness prior} (cf.,~e.g.,~\cite{Darde13b,Karhunen10}). For information on the convergence of the sGFEM-parametrized posterior density in closely related settings, we refer to \cite{Schillings14,Schwab12} and the references therein. However, we are not aware of proper convergence analysis of sGFEM-based reconstruction algorithms for inverse elliptic {\em boundary} value problems. Moreover, to the best of our knowledge, this is the first time that any stochastic finite element method has been employed to compute EIT reconstructions from experimental data. See \cite{Dashti13,Dashti11,Hoang13,Schillings13,Schillings14,Simon14,Stuart10} for related approaches to solving inverse problems.
The rest of this paper is organized as follows. The {\em stochastic complete electrode model} (SCEM) is introduced in Section~\ref{sec:SCEM}, and solving the SCEM forward problem by sGFEM is considered in Section~\ref{sec:forward_solution}. We focus on the Bayesian inverse problem of EIT in Section~\ref{sec:inverse_solution}, and Section~\ref{sec:implementation} discusses the two-phase implementation of our reconstruction algorithm. The numerical examples are presented in Section~\ref{sec:numerical_examples}. We conclude with a few remarks in Section~\ref{sec:conclusions}.
\section{Stochastic complete electrode model} \label{sec:SCEM} In this section, we introduce the SCEM for modeling practical EIT measurements with a random conductivity and contact resistances. For the traditional deterministic formulation together with its physical and experimental justification, see \cite{Cheng89,Somersalo92}.
Let $D\subset\mathbb{R}^n$, $n=2$ or $3$, be a bounded domain with a smooth enough boundary and let $(\Omega,\Sigma,P)$ be a probability space. We interpret the internal conductivity of $D$ as a random field $\sigma(\cdot,\cdot): \Omega \times D \rightarrow \mathbb{R}$ which is assumed to be a uniformly strictly positive element of $L^\infty(\Omega \times D)$, i.e., \begin{align*} P\left(\omega \in \Omega \ : \ \sigma_{\textrm{min}} \leq \operatorname*{ess\,inf}_{\textbf{x} \in D} \sigma(\omega,\textbf{x}) \leq \operatorname*{ess\,sup}_{\textbf{x} \in D} \sigma(\omega,\textbf{x}) \leq \sigma_{\textrm{max}}\right) = 1 \end{align*} for some constants $\sigma_{\textrm{min}}, \sigma_{\textrm{max}} > 0$. The perfectly conducting electrodes $E_1, \dots, E_M$, $M \in \mathbb{N} \setminus \{1\}$, attached to $D$ are identified with the corresponding open, connected, and mutually disjoint subsets of $\partial D$. We denote $E = \cup_m E_m$, $I = [I_1, \dots , I_M]^{\mathsf{T}}$, and $U = [U_1, \dots, U_M]^{\mathsf{T}}$, where $I_m\in\mathbb{R}$ and $U_m: \Omega \to \mathbb{R}$ are the injected deterministic net current and the measured random voltage, respectively, on the $m$th electrode. The current pattern $I$ belongs to the mean-free subspace $\mathbb{R}^{M}_\diamond$ of $\mathbb{R}^{M}$ by virtue of the conservation of charge; the voltage vector $U$ is interpreted as a (random) element of $\mathbb{R}^{M}_\diamond$ by choosing the ground level of potential appropriately. The contact resistances representing the resistive layers between the electrodes and the domain $D$ are modeled by random variables $z_m:\Omega \rightarrow \mathbb{R}$, $m=1,\ldots,M$, which are assumed to be uniformly strictly positive and bounded: \begin{align*} P(\omega \in \Omega \ : \ z_{\textrm{min}} \leq z_m(\omega) \leq z_{\textrm{max}}) = 1,\qquad m=1,\ldots,M, \end{align*} for some $z_{\textrm{min}}, z_{\textrm{max}} > 0$.
Denote $\mathcal{H} := H^1(D) \oplus \mathbb{R}^M_\diamond$ and let us introduce the Bochner space \[
L_P^2(\Omega;\mathcal{H}) := \left\{ (u,U):\Omega\rightarrow \mathcal{H} \ \big| \ \int_\Omega \|(u(\omega),U(\omega))\|_{\mathcal{H}}^2 \, \textrm{d} P(\omega)<\infty \right\} \] that allows the decomposition $L_P^2(\Omega;\mathcal{H})\simeq L_P^2(\Omega)\otimes \mathcal{H}$, where $\otimes$ denotes the tensor product between Hilbert spaces (cf., e.g.,~\cite{Schwab11a}). The SCEM forward problem is as follows. For a given deterministic electrode current pattern $I \in \mathbb{R}^{M}_\diamond$, find a pair $(u,U) \in L_P^2(\Omega;\mathcal{H})$ that satisfies the following boundary value problem $P$-almost surely: \begin{align*} \begin{array}{ll} \nabla \cdot (\sigma \nabla u) = 0 \qquad &\text{in} \ D, \\[8pt] {\displaystyle \frac{\partial u}{\partial \nu}} = 0 \qquad &\text{on} \ \partial D\setminus\overline{E},\\[2mm] {\displaystyle u+z_m \sigma \frac{\partial u}{\partial \nu}}= U_m \qquad &\text{on} \ E_m, \quad m=1, \dots, M, \\[3mm] {\displaystyle \int_{E_m} \sigma \frac{\partial u}{\partial \nu} \, \textrm{d} S} = I_m, \qquad & m=1,\ldots,M, \end{array} \end{align*} where $\nu = \nu(x)$ is the exterior unit normal of $\partial D$. The corresponding variational formulation is to find $(u,U) \in L_P^2(\Omega;\mathcal{H})$ such that \begin{align} \label{equ:stokamuoto} \mathbb{E} \big[ B\big((u,U),(v,V)\big) \big] \, = \, I \cdot \mathbb{E}[V] \qquad \textrm{for} \ \textrm{all} \ (v,V) \in L_P^2(\Omega;\mathcal{H}), \end{align} where $\mathbb{E}[\, \cdot \,]$ denotes the expectation and the bilinear form $B: \mathcal{H} \times \mathcal{H} \to \mathbb{R}$ is defined via \[ B\big((u,U),(v,V)\big) \, = \, \int_{D} \sigma \nabla u \cdot \nabla v \,\textrm{d} \textbf{x} + \sum_{m=1}^{M} \frac{1}{z_m} \int_{E_m}( U_m -u ) (V_m - v )\, \textrm{d} S. \] The unique solvability of the SCEM forward problem can be proved by extending the deterministic argumentation in~\cite{Somersalo92}.
\subsection{Parametric deterministic SCEM} \label{sec:paramdetSCEM} In the rest of this work, the conductivity is assumed to be parametrized by its random values at a finite set of open pixels $D_1, \dots, D_L$, which constitute a partition of $D$, i.e., $\overline{D} = \cup \overline{D}_l$. More precisely, \begin{align} \label{equ:klexpansion0} \sigma(\omega,\textbf{x}) = \sigma_0 + \sum_{l=1}^L \sigma_l \mathbf{1}_{D_l}(\textbf{x}) Y_l(\omega), \qquad \omega \in \Omega, \ \textbf{x} \in D, \end{align}
where $\sigma_0 \in \mathbb{R}_+$, $\sigma_l \in \mathbb{R}_+ \cup \{ 0 \}$, and $ \sigma_l < \sigma_0$ for $l=1,\ldots,L$. Moreover, $\mathbf{1}_{D_l}$ is the indicator function of $D_l$, and each random variable $Y_1, \dots , Y_L$ is uniformly distributed on the interval $[-1,1]$. For every $m=1, \dots, M$, the contact resistance $z_m$ is assumed to follow the inverse uniform distribution on the interval $[b_m^{-1},a_m^{-1}]$, where $0 < a_m < b_m$. In consequence, the contact conductances $\zeta_1:= z_1^{-1}, \dots, \zeta_M:=z_M^{-1}$ can be presented as \begin{align*} \zeta_m(\omega) = \frac{1}{2}(a_m+b_m) + \frac{1}{2}(b_m-a_m)Y_{L+m}(\omega), \qquad m=1, \dots, M, \end{align*} where each $Y_{L+1}, \dots, Y_{L+M}$ obeys the uniform distribution on $[-1,1]$. It is assumed that $Y_1, \dots , Y_{L+M}$ are mutually independent.
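The pixelwise parametrization \eqref{equ:klexpansion0} is straightforward to evaluate once every spatial point has been mapped to its pixel. The following sketch illustrates this with hypothetical pixel indices and the conductivity bounds employed later in our numerical experiments; all names are illustrative, not part of the actual solver.

```python
import numpy as np

def conductivity(y_sigma, pixel_of, sigma0, sigma_l):
    """sigma(y, x) = sigma0 + sum_l sigma_l * 1_{D_l}(x) * y_l, evaluated at
    points whose (0-based) pixel indices are listed in `pixel_of`."""
    return sigma0 + sigma_l[pixel_of] * y_sigma[pixel_of]

# toy setup: L = 4 pixels, five sample points, Y_l ~ U[-1, 1]
sigma0, sigma_l = 1.1, 0.9 * np.ones(4)
rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, size=4)
pix = np.array([0, 0, 1, 2, 3])
vals = conductivity(y, pix, sigma0, sigma_l)
# with these bounds the field stays within [0.2, 2.0], hence uniformly positive
assert np.all((vals >= 0.2) & (vals <= 2.0))
```

Note how the requirement $\sigma_l < \sigma_0$ guarantees uniform positivity for any parameter vector in $[-1,1]^L$.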
To simplify the notation, we define $$ \mathbf{Y}_\sigma = (Y_1,\ldots,Y_L), \quad \mathbf{Y}_\zeta = (Y_{L+1},\ldots,Y_{L+M}), $$ and denote $\mathbf{Y} = (\mathbf{Y}_\sigma, \mathbf{Y}_\zeta)$. In particular, $\mathbf{Y}: \Omega \to \mathbb{R}^{L+M}$ has the probability density \begin{align} \label{equ:probden} \rho(\mathbf{y}) = \left\{ \begin{array}{ll} 2^{-(L+M)} &\qquad {\rm if} \ \mathbf{y} \in \Gamma, \\[1mm] 0 &\qquad {\rm otherwise}, \end{array} \right. \end{align} where $\Gamma = [-1,1]^{L+M}$.
Substituting the above choices in \eqref{equ:stokamuoto}, we arrive at our parametric deterministic variational formulation of the SCEM forward problem: find $(u,U) \in L^2(\Gamma;\mathcal{H})$ such that \begin{align} \label{equ:paramuoto} \int_{\Gamma} \! \Big[ \int_{D} \sigma(\textbf{y},\textbf{x}) \nabla u \cdot \nabla v \,\textrm{d} \textbf{x} + \! \sum_{m=1}^{M} \zeta_m(\textbf{y}) \int_{E_m}( U_m - u ) (V_m - v ) \, \textrm{d} S \Big] \textrm{d} \textbf{y} = I \cdot \! \int_\Gamma \! V(\textbf{y}) \textrm{d} \textbf{y} \end{align} for all $(v,V) \in L^2(\Gamma;\mathcal{H})$. Here, with a slight abuse of the notation, \begin{align} \label{equ:klexpansion} \sigma(\textbf{y},\textbf{x}) = \sigma_0 + \sum_{l=1}^L \sigma_l \mathbf{1}_{D_l}(\textbf{x}) \, y_l \end{align} and \begin{align} \label{sconduct} \zeta_m(\textbf{y}) = \frac{1}{2}(a_m+b_m) + \frac{1}{2}(b_m-a_m)y_{L+m}, \qquad m=1, \dots, M, \end{align} i.e., we have interpreted the conductivity and the contact conductances as functions of the parameter vector $\textbf{y} = (\textbf{y}_\sigma, \textbf{y}_\zeta) \in \Gamma \subset \mathbb{R}^{L+M}$.
\begin{remark} As the probability density \eqref{equ:probden} is piecewise constant, we have dropped the `weight' $\rho(\mathbf{y})$ from the integrals in \eqref{equ:paramuoto} and refrained from introducing weighted $L^2$-spaces. In general, this is not advisable; see, e.g., \cite{Schwab11a,Leinonen14}. \end{remark}
\section{Stochastic forward solution} \label{sec:forward_solution} To numerically solve \eqref{equ:paramuoto}, we need to discretize $L^2(\Gamma; \mathcal{H}) \simeq L^2(\Gamma) \otimes (H^1(D)\oplus \mathbb{R}^M_\diamond)$, which boils down to choosing finite-dimensional bases for (certain subspaces of) $L^2(\Gamma)$, $H^1(D)$, and $\mathbb{R}^M_\diamond$. The spaces $H^1(D)$ and $\mathbb{R}^M_\diamond$ are handled as in standard FEM, whereas for $L^2(\Gamma)$ we use the spectral Galerkin method with a multivariate Legendre polynomial basis. The latter choice is reasonable as \eqref{equ:paramuoto} includes no differentiation with respect to $\textbf{y}$.
For $H^1(D)$ we use the standard FEM with piecewise linear basis $\{\varphi_j \}_{j=1}^{N_D} \subset H^1(D)$, $N_D \in \mathbb{N}$, with respect to a suitable mesh. As the mean-free basis vectors for $\mathbb{R}^M_\diamond$, we employ \begin{equation} \label{meanfreebasis} \mathrm{v}_{i}= {\mathrm e}_1 - {\mathrm e}_{i+1}, \quad i = 1, \dots, M-1, \end{equation} with ${\mathrm e}_i$ denoting the $i$th Euclidean basis vector of $\mathbb{R}^M$. To introduce the discretization of $L^2(\Gamma)$, we first recall the definitions of the univariate and multivariate Legendre polynomials.
\begin{definition}[Legendre polynomials] Let $m\in\mathbb{N}_0:= \mathbb{N}\cup \{0 \} = \{0,1,2,\ldots\}$. The $m$th {\em univariate Legendre polynomial} is defined as \[ L_m(y) := \frac{\sqrt{2m+1}}{2^{m+1/2}\,m!}\frac{\textrm{d}^m}{\textrm{d} y^m}[(y^2-1)^m], \] where $y \in \mathbb{R}$. \end{definition}
Note that we have (nonstandardly) normalized the Legendre polynomials so that they are orthonormal with respect to the $L^2$ inner product over $[-1,1]$: \[ \int_{-1}^1 L_k(y) L_l(y) \, \textrm{d} y = \delta_{k,l}, \qquad k,l \in \mathbb{N}_0, \] where $\delta_{k,l}$ is the Kronecker delta.
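This normalization is easy to verify numerically, e.g., with NumPy's Legendre utilities; the following stand-alone check is an illustration only, not part of the forward solver.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def leg(m):
    """Orthonormal Legendre polynomial sqrt((2m+1)/2) * P_m as a callable."""
    return lambda y: np.sqrt((2 * m + 1) / 2) * Legendre.basis(m)(y)

# Gauss--Legendre quadrature with n nodes is exact up to degree 2n - 1
nodes, weights = leggauss(12)
for k in range(5):
    for l in range(5):
        val = np.sum(weights * leg(k)(nodes) * leg(l)(nodes))
        assert abs(val - (1.0 if k == l else 0.0)) < 1e-12
```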
\begin{definition}[Multivariate Legendre polynomials] \label{def:multivariatepolynomial} Let $P \in \mathbb{N}$ and $\mu \in \mathbb{N}_{0}^P$ be a multi-index. The {\em multivariate Legendre polynomial} $L_\mu$, also called {\em chaos polynomial}, is defined as \begin{align*} L_\mu(\mathbf{y}) := \prod_{k=1}^{P} L_{\mu_k}(y_k), \qquad \mathbf{y} \in \mathbb{R}^P, \end{align*} where $L_{\mu_k}$ is the $\mu_k$th univariate Legendre polynomial. \end{definition}
The set $\mathcal{P} :=\{ L_{\mu}~|~\mu\in \mathbb{N}_{0}^{L+M}\}$ is an orthonormal basis of $L^2(\Gamma)$ (cf.,~e.g.,~\cite{Schwab11a}), and thus any function $f \in L^2(\Gamma)$ admits a {\em polynomial chaos} representation, \begin{align} \label{equ:chaos_rep} f \, = \!\! \sum_{\mu \in \mathbb{N}_{0}^{L+M}}\!\! \big( f,L_\mu \big)_{L^2(\Gamma)} L_\mu \end{align} in the topology of $L^2(\Gamma)$. In practical computations the number of multi-indices considered in \eqref{equ:chaos_rep} must naturally be finite, and hence we must replace $\mathbb{N}_{0}^{L+M}$ with a finite subset of multi-indices $\Lambda \subset \mathbb{N}_{0}^{L+M}$.
The set $\Lambda$ is ideally chosen so that \begin{align*} f \, \approx \, \sum_{\mu \in \Lambda}\!\! \big( f,L_\mu \big)_{L^2(\Gamma)} L_\mu \end{align*} is as accurate as possible for the considered $f$ under a given constraint on the cardinality $\# \Lambda$. When solving \eqref{equ:paramuoto}, one would like to get good representations (for the FEM approximations) of $f = u(\, \cdot \, , \, \textbf{x} )$, $\textbf{x} \in D$. In practice, estimating {\em a priori} optimal index sets for the solutions of \eqref{equ:paramuoto} is highly nontrivial (but possible to a certain extent~\cite{Bieri09a}), and hence we resort in this work to generic index sets which are easy to generate and give equal weight to each dimension in $\Gamma$.
\begin{definition}[Isotropic total degree index set] Let $P, Q \in \mathbb{N}$. The $\isoTD$ index set is defined as \begin{align*}
\isoTD(P,Q) = \left\{\mu \in \mathbb{N}_{0}^{P}~\big|~\sum_{k=1}^{P} \mu_k \leq Q \right\}. \end{align*} \end{definition} It is easy to see that the cardinality of the $\isoTD(P,Q)$ index set is \begin{equation} \label{stoch_df} \# \isoTD(P,Q) \, = \, {P+Q \choose Q} \, . \end{equation} In what follows, we use $\Lambda = \isoTD(L+M,Q)$ for some $Q \in \mathbb{N}$ and denote $N_\Gamma = \# \Lambda$. See, e.g., \cite{Back11,Beck12,Bieri09a} and the references therein for information on other types of index sets.
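For small $P$, the index set can be enumerated by brute force, which also confirms the cardinality formula \eqref{stoch_df}. Note that the enumeration below scales exponentially in $P$ and is for illustration only; practical implementations generate the set recursively.

```python
import itertools
import math

def iso_td(P, Q):
    """All multi-indices mu in N_0^P with mu_1 + ... + mu_P <= Q."""
    return [mu for mu in itertools.product(range(Q + 1), repeat=P)
            if sum(mu) <= Q]

# cardinality matches the binomial formula (P + Q choose Q)
for P, Q in [(3, 2), (5, 3), (10, 2)]:
    assert len(iso_td(P, Q)) == math.comb(P + Q, Q)
```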
We look for an approximation $(\tilde{u},\tilde{U})$ of the parametric deterministic SCEM solution $(u,U)$ to \eqref{equ:paramuoto} in the form \begin{subequations} \begin{align} \label{sGFEM_u} u(\textbf{y},\textbf{x})\approx\tilde{u}(\textbf{y},\textbf{x}) &= \sum_{j=1}^{N_D} \sum_{\mu \in\Lambda} \alpha_{j,\mu}L_{\mu}(\textbf{y})\varphi_j(\textbf{x}),\\ \label{sGFEM_U} U(\textbf{y})\approx\tilde{U}(\textbf{y}) &= \sum_{i=1}^{M-1} \sum_{\mu\in\Lambda}\beta_{i,\mu} L_{\mu}(\textbf{y}) \mathrm{v}_{i}, \end{align} \end{subequations} where $\{\alpha_{j,\mu}\}\subset\mathbb{R}$ and $\{\beta_{i,\mu}\}\subset\mathbb{R}$ are the to-be-determined real coefficients. In particular, the approximation of the electrode potentials in \eqref{sGFEM_U} is an $M$-dimensional vector whose components are $Q$th order polynomials in $\textbf{y}$. We denote by $\alpha \in \mathbb{R}^{N_D N_\Gamma}$ and $\beta \in \mathbb{R}^{(M-1) N_\Gamma}$ the block vectors defined by $\{\alpha_{j}\}_{\mu} = \alpha_{j,\mu}$ and $\{\beta_{i}\}_{\mu} = \beta_{i,\mu}$, respectively.
The coefficient vector $(\alpha, \beta)$ is determined via the standard Galerkin projection: requiring that $(\tilde{u}, \tilde{U})$ satisfies~\eqref{equ:paramuoto} for all $(v,V)$ in the chosen finite-dimensional subspace of $L_P^2(\Gamma;\mathcal{H})\simeq L_P^2(\Gamma)\otimes \mathcal{H}$, i.e., for all $(v,V) = (L_{\mu'}\varphi_{j'}, L_{\mu'} \mathrm{v}_{i'})$, $\mu' \in \Lambda$, $j' = 1, \dots, N_D$, $i' = 1,\dots, M-1$, one ends up at the linear system of equations (cf.~\cite{Leinonen14,Vauhkonen97}) \begin{align} \label{equ:linearsystem} \left( \begin{array}{cc} \mathbf{\Delta} & \mathbf{\Upsilon} \\ \mathbf{\Upsilon}^\mathsf{T} & \mathbf{\Pi} \\ \end{array} \right) \left( \begin{array}{cc} \alpha\\ \beta \end{array} \right) = \left( \begin{array}{cc} \mathbf{0}\\ \mathbf{c} \end{array} \right) . \end{align} Here, $\mathbf{\Delta} \in \mathbb{R}^{N_D N_\Gamma \times N_D N_\Gamma}$ and $\mathbf{\Pi} \in \mathbb{R}^{(M-1) N_\Gamma \times (M-1) N_\Gamma}$ are symmetric sparse matrices, $\mathbf{\Upsilon} \in \mathbb{R}^{N_D N_\Gamma \times (M-1) N_\Gamma}$ is a sparse (non-square) matrix, $\mathbf{c} \in \mathbb{R}^{(M-1) N_\Gamma}$ is a block vector, and $\mathbf{0} \in \mathbb{R}^{N_D N_\Gamma}$~is a zero vector. Take note that~\eqref{equ:linearsystem} has in total $N_{\rm tot}:=(N_D+M-1)N_\Gamma$ degrees of freedom.
In order to give the precise definitions of the elements in the system~\eqref{equ:linearsystem}, let us first introduce some auxiliary block matrices. In the following definitions, $i,i' = 1,\ldots,M-1$, $j,j' = 1,\ldots,N_D$, $k = 1,\ldots,L+M$, $l = 1,\ldots,L$, $m = 1,\ldots,M$, and $\mu,\mu' \in \Lambda$, if not stated otherwise. The FEM matrices corresponding to the spatial discretization of $D$ are defined via \begin{align*} \{\mathbf{A}_0\}_{j,j'} &= \int_D \sigma_0\, \nabla \varphi_j(\textbf{x}) \cdot \nabla \varphi_{j'}(\textbf{x}) \, \textrm{d} \textbf{x} \, , \\[1mm] \{\mathbf{A}_l\}_{j,j'} &= \int_{D_l} \sigma_l\, \nabla \varphi_j(\textbf{x}) \cdot \nabla \varphi_{j'}(\textbf{x}) \, \textrm{d} \textbf{x} \, . \end{align*} Notice that $\mathbf{A}_0$ is sparse and $\{ \mathbf{A}_l \}_{j,j'}$ is nonzero only if the supports of both $\varphi_j$ and $\varphi_{j'}$ intersect $D_l$. The elements of the stochastic moment matrices are \begin{align*} \{\mathbf{G}_0\}_{\mu,\mu'} &= \int_\Gamma L_\mu(\textbf{y}) L_{\mu'}(\textbf{y}) \, \textrm{d} \textbf{y} \, = \, \delta_{\mu, \mu'},\\ \{\mathbf{G}_k\}_{\mu,\mu'} &= \int_\Gamma y_k L_\mu(\textbf{y}) L_{\mu'}(\textbf{y}) \, \textrm{d} \textbf{y}. \end{align*}
Since a univariate Legendre polynomial of a certain order is orthogonal to all lower order polynomials, it follows easily that $\{\mathbf{G}_k\}_{\mu,\mu'} \not= 0$ only if $|\mu_k - \mu'_k| = 1$ and $\mu_{k'} = \mu'_{k'}$ for $k' \not= k$, which makes $\mathbf{G}_k$ very sparse. Finally, the electrode mass matrices are defined through \begin{align*} \{\mathbf{S}_m\}_{j,j'} = \int_{E_m} \, \varphi_{j}(\textbf{x})\, \varphi_{j'}(\textbf{x})\, \textrm{d} S, \end{align*} and the contact conductance matrices through (cf.~\eqref{sconduct}) \begin{align*} \mathbf{Z}_m = \frac{1}{2}(a_m+b_m) \mathbf{G}_{0} + \frac{1}{2}(b_m-a_m) \mathbf{G}_{L+m}. \end{align*} Standard FEM techniques can be used to construct $\mathbf{A}_l$, $l=0, \dots, L$, and $\mathbf{S}_m$, $m=1, \dots, M$, and we refer to \cite{Bieri09a,Leinonen14c} for the efficient formation of $\mathbf{G}_k$, $k=0, \dots, L+M$. The contact conductance matrices $\mathbf{Z}_m$, $m=1,\dots, M$, are trivial to construct as soon as the stochastic moment matrices are available.
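Concretely, the three-term recurrence yields $\int_{-1}^1 y \, L_{m-1}(y) L_m(y) \, \textrm{d} y = m/\sqrt{4m^2-1}$ for the orthonormal univariate Legendre polynomials, so the univariate factor of $\mathbf{G}_k$ is tridiagonal with a vanishing diagonal. A numerical sanity check (illustration only):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def leg(m, y):
    """Orthonormal Legendre polynomial sqrt((2m+1)/2) * P_m evaluated at y."""
    return np.sqrt((2 * m + 1) / 2) * Legendre.basis(m)(y)

nodes, weights = leggauss(16)
N = 6
# univariate moment matrix: G[k, l] = int_{-1}^{1} y L_k(y) L_l(y) dy
G = np.array([[np.sum(weights * nodes * leg(k, nodes) * leg(l, nodes))
               for l in range(N)] for k in range(N)])
for k in range(N):
    for l in range(N):
        m = max(k, l)
        expected = m / np.sqrt(4 * m ** 2 - 1) if abs(k - l) == 1 else 0.0
        assert abs(G[k, l] - expected) < 1e-12
```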
Now, the matrix $\mathbf{\Delta}$ can be given as \begin{align*} \mathbf{\Delta} = \sum_{l=0}^L \mathbf{A}_l \otimes \mathbf{G}_l + \sum_{m=1}^M \mathbf{S}_m \otimes \mathbf{Z}_m, \end{align*} where $\otimes$ denotes the Kronecker product. Moreover, \begin{align*} \{\mathbf{\Upsilon}_{j,i'}\}_{\mu,\mu'} = \{\mathbf{Z}_{i'+1}\}_{\mu,\mu'} \int_{E_{i'+1}}\varphi_{j}(\textbf{x}) \, \textrm{d} S - \{\mathbf{Z}_{1}\}_{\mu,\mu'} \int_{E_1} \varphi_{j}(\textbf{x}) \, \textrm{d} S \end{align*} and \begin{align*} \{\mathbf{\Pi}_{i,i'}\}_{\mu,\mu'} & = \{\mathbf{Z}_{1}\}_{\mu,\mu'}
|E_1| + \delta_{i,i'} \, \{\mathbf{Z}_{i'+1}\}_{\mu,\mu'}
|E_{i+1}|, \end{align*} where
$|E_{i}|$ denotes the area/length of the $i$th electrode. Finally, the block vector $\mathbf{c}$ is defined elementwise by \begin{align*} \{\mathbf{c}_{i}\}_{\mu} = \, (I \cdot \mathrm{v}_{i}) \, \int_\Gamma L_{\mu}(\textbf{y}) \textrm{d} \textbf{y}
&= \begin{cases}
0, & \mu \neq \mathbf{0}, \\
I_1-I_{i+1}, & \mu = \mathbf{0},
\end{cases} \end{align*} where $I \in \mathbb{R}^M_{\diamond}$ is the applied current pattern and $\mathbf{0}$ is the zero multi-index.
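The Kronecker-product structure makes the assembly of $\mathbf{\Delta}$ mechanical once the component matrices are in hand. The following structural sketch uses random symmetric stand-ins for the FEM and moment matrices and assumes SciPy's sparse module; it is not the actual solver code.

```python
import numpy as np
from scipy.sparse import csr_matrix, kron

def assemble_delta(A_list, S_list, G_list, Z_list):
    """Delta = sum_l A_l (x) G_l + sum_m S_m (x) Z_m via sparse Kronecker
    products; a structural sketch with toy inputs."""
    delta = sum(kron(A, G) for A, G in zip(A_list, G_list))
    delta = delta + sum(kron(S, Z) for S, Z in zip(S_list, Z_list))
    return csr_matrix(delta)

rng = np.random.default_rng(1)

def random_sym(n):
    """Random sparse symmetric matrix standing in for the FEM/moment blocks."""
    B = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.3)
    return csr_matrix(np.triu(B) + np.triu(B, 1).T)

A_list = [random_sym(5) for _ in range(3)]   # A_0, ..., A_L with L = 2
G_list = [random_sym(4) for _ in range(3)]   # matching stochastic blocks
S_list = [random_sym(5) for _ in range(2)]   # electrode mass matrices
Z_list = [random_sym(4) for _ in range(2)]   # contact conductance matrices
Delta = assemble_delta(A_list, S_list, G_list, Z_list)
# symmetry of the blocks is inherited by the Kronecker sum
assert (abs(Delta - Delta.T) > 1e-12).nnz == 0
```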
\section{Inverse solution} \label{sec:inverse_solution}
The objective of EIT is to retrieve useful information about the conductivity inside the examined body based on measured noisy electrode current-potential pairs. In this section, we explain how the sGFEM approximation \eqref{sGFEM_U} for the second component of the solution to \eqref{equ:paramuoto} can be employed in numerically solving this problem in the Bayesian framework; see \cite{Kaipio04a} for more information on statistical inversion.
Let $I^1, \dots, I^{M-1} \in \mathbb{R}^M_\diamond$ be linearly independent current patterns that are driven in turns through the $M$ contact electrodes $E_1, \dots, E_M$, and suppose $V^1, \dots, V^{M-1} \in \mathbb{R}^M$ are the corresponding measured noisy electrode potential vectors. (Notice that there is no benefit in using more than $M-1 = {\rm dim}(\mathbb{R}^M_\diamond)$ current patterns because the solution of \eqref{equ:stokamuoto} depends linearly on $I$.) We define $$ \mathbf{v} = \Big[(V^1)^{\mathsf{T}}, \dots, (V^{M-1})^{\mathsf{T}}\Big]^{\mathsf{T}} \in \mathbb{R}^{M(M-1)} $$ and $$ \tilde{\mathcal{U}}(\mathbf{Y}) = \Big[\tilde{U}^1(\mathbf{Y})^{\mathsf{T}}, \dots, \tilde{U}^{M-1}(\mathbf{Y})^{\mathsf{T}}\Big]^{\mathsf{T}} \in \mathbb{R}^{M(M-1)} $$ with $\tilde{U}^m(\mathbf{Y}) \in \mathbb{R}^M_\diamond$ being the sGFEM solution \eqref{sGFEM_U} corresponding to the current pattern $I = I^m$ in \eqref{equ:paramuoto}. In other words, $\tilde U^i_j(\mathbf{Y}) \in \mathbb{R}$ is the $j$th component of the sGFEM solution \eqref{sGFEM_U} for the current pattern $I^i \in \mathbb{R}^M_\diamond$.
The electrode potentials $\mathbf{v}$ are assumed to be a realization of the random variable \begin{align} \label{equ:Bayesmodel} \mathbf{V} = \tilde{\mathcal{U}}(\mathbf{Y}) + \mathbf{E}, \end{align} where $\mathbf{E}$ is the noise process contaminating the measurements. Notice that the model \eqref{equ:Bayesmodel} cannot be exact as it does not take into account the unavoidable discretization errors in $\tilde{\mathcal{U}}(\mathbf{Y})$, but we choose to ignore this fact to simplify the analysis. Moreover, $\mathbf{E}: \Omega \to \mathbb{R}^{M(M-1)}$ is assumed to be independent of $\mathbf{Y}$, mean-free, and Gaussian with a known covariance matrix $\mathbf{L} \in \mathbb{R}^{M(M-1) \times M(M-1)}$. Combining \eqref{equ:Bayesmodel} with Bayes' formula results in the posterior density \begin{align} \label{Bayes1}
\pi(\textbf{y} \, | \, \mathbf{v}) \, &\propto \, \pi_{\rm noise}\big(\mathbf{v} - \tilde{\mathcal{U}}(\mathbf{y}) \big) \, \pi_{\rm pr}(\textbf{y}) \nonumber\\[1mm]
& = \frac{1}{\sqrt{(2\pi)^{M(M-1)}|\mathbf{L}|}}\exp\!\Big(\!-\frac{1}{2}(\mathbf{v} - \tilde{\mathcal{U}}(\mathbf{y}))^{\mathsf{T}}\mathbf{L}^{-1}(\mathbf{v} - \tilde{\mathcal{U}}(\mathbf{y}))\Big) \, \pi_{\rm pr}(\textbf{y}), \end{align}
where $|\mathbf{L}|$ is the determinant of the noise covariance matrix and the `constant' of proportionality is independent of $\textbf{y}$.
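Once the sGFEM coefficients have been computed, the unnormalized log-posterior is an explicitly known function of $\textbf{y}$ and can be evaluated without any further forward solves. A minimal sketch with a hypothetical two-dimensional surrogate forward map (all names are illustrative):

```python
import numpy as np

def log_posterior(y, v, U_tilde, L_cov, log_prior):
    """Unnormalized log of the posterior density: Gaussian log-likelihood of
    the residual v - U_tilde(y) plus the log-prior.  Here U_tilde stands for
    the precomputed polynomial forward map."""
    r = v - U_tilde(y)
    return -0.5 * r @ np.linalg.solve(L_cov, r) + log_prior(y)

# toy two-dimensional surrogate forward map and a flat prior on [-1, 1]^2
U_tilde = lambda y: np.array([y[0] + 0.1 * y[0] * y[1], y[1]])
log_prior = lambda y: 0.0 if np.all(np.abs(y) <= 1.0) else -np.inf
v = np.array([0.3, -0.2])
lp = log_posterior(np.zeros(2), v, U_tilde, np.eye(2), log_prior)
```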
The choice of the prior density $\pi_{\rm pr}$ in \eqref{Bayes1} should be based on {\em a priori} information about the pixel values of the conductivity and the contact conductances. Since the sGFEM forward solver of the previous section was already built under the assumption that the parameters $\textbf{y}$ belong to the hypercube $\Gamma = [-1,1]^{L+M}$, it is natural to choose \begin{equation} \label{prior} \pi_{\rm pr}(\textbf{y}) \, = \, \pi_\sigma(\textbf{y}_{\sigma}) \pi_\zeta(\textbf{y}_{\zeta}) \mathbf{1}_\Gamma(\textbf{y}), \end{equation} where $\mathbf{1}_\Gamma: \mathbb{R}^{L+M} \to \mathbb{R}$ is the indicator function of $\Gamma \subset \mathbb{R}^{L+M}$ and we have assumed that the parameters corresponding to the pixelwise conductivity values $\textbf{y}_{\sigma} \in \mathbb{R}^L$ and those associated with the contact conductances $\textbf{y}_\zeta \in \mathbb{R}^M$ are independent {\em a priori}. We assume no further prior information on the contact conductances, i.e., we employ $$ \pi_\zeta(\textbf{y}_\zeta) \, = \, 2^{-M}, \qquad \textbf{y}_\zeta \in [-1,1]^M, $$ whereas for the conductivity we choose a truncated multivariate normal prior density: \begin{align} \label{prior2} \pi_{\sigma}(\textbf{y}_\sigma) = \frac { \exp \Big( - {\displaystyle \frac{1}{2}} {\textbf{y}_\sigma^\mathsf{T}} \mathbf{M}^{-1} \textbf{y}_{\sigma} \Big) } { \displaystyle{\int_{\Gamma_\sigma}} \exp \Big( - \frac{1}{2} \tilde{\textbf{y}}_\sigma^\mathsf{T}\,\mathbf{M}^{-1}\,\tilde{\textbf{y}}_{\sigma} \Big) \textrm{d} \tilde{\textbf{y}}_\sigma }, \qquad \textbf{y}_\sigma \in [-1,1]^L, \end{align} where $\Gamma_\sigma = [-1,1]^L$ and $\mathbf{M}\in\mathbb{R}^{L \times L}$ is the covariance matrix of the underlying multivariate normal distribution $\mathcal{N}(\mathbf{0},\mathbf{M})$. In this work, the covariance matrix is assumed to be of the squared exponential type: \begin{align} \label{equ:priorcov}
\mathbf{M}_{l,l'} = \eta^2 \exp \left( \frac{-|\mathbf{r}_l-\mathbf{r}_{l'}|^2}{2s^2} \right), \end{align} where $\mathbf{r}_l$ is the center of the pixel $D_l$, $s>0$ is the correlation length, $\eta > 0$ is the standard deviation, and $l,l'=1,\ldots,L$.
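The covariance \eqref{equ:priorcov} can be assembled with a few lines of vectorized code; the pixel centers and the values of $\eta$ and $s$ below are placeholders, not the ones used in our experiments.

```python
import numpy as np

def sq_exp_covariance(centers, eta, s):
    """M_{l,l'} = eta^2 exp(-|r_l - r_{l'}|^2 / (2 s^2)) for pixel centers
    given as the rows of `centers`."""
    d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return eta ** 2 * np.exp(-d2 / (2 * s ** 2))

# placeholder pixel centers and hyperparameters
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
M = sq_exp_covariance(centers, eta=0.5, s=1.0)
# symmetric and (for distinct centers) positive definite
assert np.allclose(M, M.T) and np.all(np.linalg.eigvalsh(M) > 0)
```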
\begin{remark} The inclusion of $\mathbf{1}_\Gamma(\mathbf{y})$ in \eqref{prior} is only natural because there is {\em absolutely} no guarantee that $\tilde{\mathcal{U}}(\mathbf{y})$ is any kind of approximation of the electrode potentials corresponding to a conductivity of the form \eqref{equ:klexpansion} if $\mathbf{y} \notin \Gamma$. The `additional' priors $\pi_\sigma$ and $\pi_\zeta$ can, however, be selected as one wishes, bearing in mind that complicated choices may hamper the computation of the MAP and CM estimates for the posterior.
One could also utilize the prior information in $\pi_\sigma$ and $\pi_\zeta$ when building the sGFEM forward solver to maximize the accuracy of $\tilde{\mathcal{U}}(\mathbf{y})$ for those parameter vectors $\mathbf{y}$ that live in regions of high prior probability (cf.~\cite{Hakula14}). One way of achieving this is to replace the probability density \eqref{equ:probden} by an approximation of \eqref{prior} in the Legendre polynomial basis and use techniques in \cite{Leinonen14c} to construct the (more involved) stochastic moment matrices.
The reason for not taking such a path in this work is two-fold: (i) Changing $\pi_\sigma$ and $\pi_\zeta$ does not affect the sGFEM forward solver in our setting, which significantly reduces the computational cost for tuning/changing the prior. (ii) Using a more complicated random field model than \eqref{equ:klexpansion0} for the sGFEM forward solver leads easily to a less sparse system matrix \eqref{equ:linearsystem} that is more laborious to construct, and it potentially also makes controlling the positivity of the conductivity more involved.
\end{remark}
The MAP estimate $\textbf{y}_{\rm MAP}$ for $\mathbf{Y}$, i.e., the maximizer of the posterior density \eqref{Bayes1}, can be computed by solving the constrained minimization problem \begin{align} \label{equ:MAPproblem} \textbf{y}_{\rm MAP} \, := \, \operatorname*{arg\,min}_{\textbf{y}\in\Gamma} F(\textbf{y}), \end{align} where \begin{align*} F(\textbf{y}) := \big(\mathbf{v} - \tilde{\mathcal{U}}(\textbf{y})\big)^{\mathsf{T}} \mathbf{L}^{-1} \big(\mathbf{v} - \tilde{\mathcal{U}}(\textbf{y})\big) + \textbf{y}_\sigma^\mathsf{T}\,\mathbf{M}^{-1}\,\textbf{y}_{\sigma} \end{align*} is a positive-valued polynomial in $\textbf{y}$. Subsequently, the MAP estimate for the conductivity $\sigma_{\rm MAP}: D \to \mathbb{R}_+$ is obtained by evaluating \eqref{equ:klexpansion} at $\textbf{y} = \textbf{y}_{\rm MAP}$, and the MAP estimates for the contact conductances are deduced analogously via \eqref{sconduct}.
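Since $F$ is an explicitly known polynomial, \eqref{equ:MAPproblem} is a smooth bound-constrained optimization problem over the hypercube. Any bound-constrained optimizer can in principle be used; the sketch below employs a generic quasi-Newton method on a toy objective, an illustrative choice rather than the one used in our experiments.

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(F, dim):
    """Minimize the smooth objective F over the hypercube [-1, 1]^dim with a
    bound-constrained quasi-Newton method (generic choice, sketch only)."""
    res = minimize(F, x0=np.zeros(dim), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * dim)
    return res.x

# toy objective: the unconstrained minimizer (0.4, -2) is clipped to (0.4, -1)
F = lambda y: (y[0] - 0.4) ** 2 + (y[1] + 2.0) ** 2
y_map = map_estimate(F, 2)
assert np.allclose(y_map, [0.4, -1.0], atol=1e-4)
```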
The CM estimates of the conductivity and contact conductances are obtained by (numerically) evaluating the $(L+M)$-dimensional integrals \begin{align} \label{CMint1}
\sigma_{\rm CM}(\textbf{x}) = \int_{\Gamma} \sigma(\textbf{y},\textbf{x}) \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y}, \qquad \textbf{x} \in D, \end{align} and \begin{align} \label{CMint3}
(\zeta_m)_{\rm CM} = \int_{\Gamma} \zeta_m(\textbf{y}) \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y}, \qquad m = 1,\ldots,M, \end{align} respectively. To evaluate the reliability of the CM estimates, we also consider the conditional {\em standard deviations} (SD) \begin{align} \label{CMint5}
\sigma_{\rm SD}(\textbf{x}) = \sum_{l=1}^L \sigma_l \mathbf{1}_{D_l}(\textbf{x}) \left[ \int_\Gamma y_l^2\, \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y} - \left(\int_\Gamma y_l \, \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y}\right)^2 \right]^{\frac{1}{2}}, \qquad \textbf{x} \in D, \end{align} and \begin{align} \label{CMint6}
(\zeta_m)_{\rm SD} = \frac{1}{2}(b_m-a_m)\left[ \int_\Gamma y_{L+m}^2\, \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y} - \left(\int_\Gamma y_{L+m} \, \pi(\textbf{y} \, | \, \mathbf{v}) \textrm{d} \textbf{y}\right)^2 \right]^{\frac{1}{2}} \end{align} in the numerical experiments of Section~\ref{sec:numerical_examples}.
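These hypercube integrals must be evaluated numerically. One generic option, used here purely for illustration and not necessarily the quadrature employed in our experiments, is self-normalized importance sampling with the prior as the proposal:

```python
import numpy as np

def cm_sd(log_post, sample_prior, log_prior, n=20000, seed=0):
    """Self-normalized importance sampling for the posterior mean and standard
    deviation of the parameter vector y, using the prior as the proposal."""
    rng = np.random.default_rng(seed)
    ys = sample_prior(n, rng)                      # shape (n, dim)
    logw = np.array([log_post(y) - log_prior(y) for y in ys])
    w = np.exp(logw - logw.max())                  # stabilized weights
    w /= w.sum()
    mean = w @ ys
    sd = np.sqrt(w @ (ys - mean) ** 2)
    return mean, sd

# toy posterior: N(0, 0.2^2) in each coordinate, truncated to [-1, 1]^2
log_post = lambda y: -0.5 * np.sum((y / 0.2) ** 2)
sample_prior = lambda n, rng: rng.uniform(-1.0, 1.0, size=(n, 2))
log_prior = lambda y: 0.0
mean, sd = cm_sd(log_post, sample_prior, log_prior)
assert np.all(np.abs(mean) < 0.05) and np.all(np.abs(sd - 0.2) < 0.05)
```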
\section{Two-phase implementation} \label{sec:implementation} The implementation of the presented inversion algorithm consists of two phases: the pre-measurement and post-measurement processing. The former corresponds to computations that can be carried out before performing any measurements, assuming the object shape, the electrode positions, and the preliminary bounds for the conductivity and contact conductances are known. The latter phase consists of forming the posterior density and computing the desired estimates for the unknowns.
\subsection{Pre-measurement processing}
The pre-measurement phase consists of the following six steps: \begin{enumerate} \item Specify the computational domain, i.e., the object shape together with the electrode sizes and positions. \item Select a suitable partition of the domain into pixels. \item Specify bounds for the conductivity and contact conductance values, i.e., $\sigma_0, \sigma_1, \dots, \sigma_L$ in \eqref{equ:klexpansion} as well as $a_1, \dots , a_M$ and $b_1, \dots , b_M$ in \eqref{sconduct}. \item Construct a suitable FEM polynomial basis for $H^1(D)$. \item Select the index set $\Lambda$ for the polynomial chaos expansion. \item Compute the sGFEM solution \eqref{sGFEM_u}--\eqref{sGFEM_U}. \end{enumerate} We emphasize that all these steps can be performed without having the actual electrode measurements in hand. Moreover, the sGFEM solution can be reused for different data sets as long as the bounds for the conductivity and contact conductances or the measurement geometry are not altered.
The pre-measurement processing stage is clearly the more time-consuming of the two phases because the SCEM forward problem is discretized with over $10^7$ degrees of freedom in our two-dimensional numerical experiments. (In three dimensions, the number of degrees of freedom could easily exceed $10^9$.) Fortunately, if the measurement configuration is known well in advance, the pre-measurement processing can be carried out before the actual measurements.
\subsection{Post-measurement processing} After the electrode potential measurements $\mathbf{v} \in \mathbb{R}^{M(M-1)}$ are available, the post-measurement phase consists of the following four steps: \begin{enumerate} \item Specify the noise covariance matrix $\mathbf{L}$. \item Select the correlation length $s$ and the standard deviation $\eta$ for the prior covariance matrix $\mathbf{M}$ in \eqref{equ:priorcov}. \item Construct the posterior density \eqref{Bayes1}. \item Compute the desired estimates (MAP, CM, and SD) for the posterior distribution. \end{enumerate}
Notice that the accuracy of the spatial FEM discretization does not affect the computation time for the post-measurement phase since the approximate stochastic forward solution $\tilde{U}(\textbf{y})$ from \eqref{sGFEM_U} does not involve the spatial FEM basis functions. Hence, one should use as dense a spatial FEM mesh as is allowed by the {\em pre-measurement} time and memory constraints. On the other hand, the discretization of $L^2(\Gamma)$ affects the computation times of both phases.
\section{Numerical experiments} \label{sec:numerical_examples}
We apply the methodology introduced above to five sets of experimental data from a thorax-shaped water tank with vertically homogeneous embedded objects of steel and/or plastic extending from the bottom all the way through the water surface. The circumference of the tank is $106\,{\rm cm}$, and $M=16$ rectangular metallic electrodes of width $2\,{\rm cm}$ and height $5\,{\rm cm}$ are attached to the interior lateral surface of the tank. In all tests, the tank is filled with tap water up to the top of the electrodes. The measurement configuration without inclusions is presented in the left-hand image of Figure~\ref{fig:pixelgrid}. (All photographs shown below are cropped and spatially normalized versions of the original ones. We have also removed most of the reflections on the water surface to make the images easier to interpret.) The measurements were performed with low-frequency ($1\,{\rm kHz}$) alternating current using the {\em Kuopio impedance tomography} (KIT4) device~\cite{Kourunen09}. The phase information of the measurements is ignored, meaning that the amplitudes of electrode currents and potentials are interpreted as real numbers. The employed (real) current patterns are (cf.~\eqref{meanfreebasis}) $$ I^m = ({\mathrm e}_1 - {\mathrm e}_{m+1}) \, {\rm mA} , \qquad m = 1, \dots, M-1, $$ with ${\mathrm e}_m$ denoting the $m$th Euclidean basis vector of $\mathbb{R}^M$. This choice of current basis makes the first electrode special; it is marked in red in Figure~\ref{fig:pixelgrid}.
As the measurement setting is vertically homogeneous --- notice that no current flows through the bottom or the top of the water tank, which corresponds to homogeneous Neumann boundary conditions --- it can be modeled with a two-dimensional version of the SCEM (cf.~Section~\ref{sec:SCEM}). The conversion of conductivity (${\rm S/m}$) and contact conductances (${\rm S/m}^2$) into corresponding two-dimensional quantities is achieved by multiplying by the height of the electrodes. The same measurement setting was tackled in \cite{Darde13b}, where the conductivity of tap water was estimated to be around $0.2$ -- $0.25\,{\rm mS/cm}$, i.e., $1.0$ -- $1.25\,{\rm mS}$ in the two-dimensional units. This also matches the limits given for drinking water in the literature ($0.05$ -- $0.5\,{\rm mS/cm}$). Using \cite{Darde13b} as our reference, we choose $\sigma_0 = 1.1\, {\rm mS}$ and $\sigma_1, \dots , \sigma_L = 0.9\, {\rm mS}$ in \eqref{equ:klexpansion}, i.e., we let the pixelwise conductivities vary between $0.2\,{\rm mS}$ and $2.0\,{\rm mS}$ in the forward solver. As the examples consider inclusions that are either insulating (plastic) or highly conducting (steel), the interval $[0.2, 2.0] \,{\rm mS}$ for the conductivity values may seem a bit restrictive. However, according to our experience (cf.,~e.g.,~\cite{Darde13b,Harhanen15}), $0.2\,{\rm mS}$ is a sufficiently low value for modeling an insulating object accurately enough and, on the other hand, highly conducting objects exhibit some resistivity in EIT, probably due to the contact resistance at their boundaries (cf.~\cite{Heikkinen01}). A relatively large lower bound for the conductivity also ensures that the sGFEM system matrix stays well conditioned. Furthermore, we assume relatively bad contacts at the electrode-water interfaces and set $a_m = 10\, {\rm mS/cm}$ and $b_m = 10^3\, {\rm mS/cm}$, $m = 1,\ldots,M$, in \eqref{sconduct} (cf.~\cite{Heikkinen02}).
The right-hand image of Figure~\ref{fig:pixelgrid} shows the computational domain $D\subset \mathbb{R}^2$ corresponding to the water tank together with our choice for the partition of the domain into $L=76$ pixels $D_1, \dots , D_L$ (cf.~\eqref{equ:klexpansion}) that are intersections of certain hexagons and $D$. We employ a spatial FEM mesh (not shown) composed of $N_D=9383$ nodes with appropriate refinements at the edges of the electrodes (cf.~\cite{Darde13b}). As the stochastic index set in \eqref{sGFEM_u}--\eqref{sGFEM_U}, we use $\Lambda = \isoTD(L + M, 2)$, which results in $N_\Gamma = 4371$ stochastic degrees of freedom. In total, the discretized forward SCEM problem includes $N_{\rm tot} = (N_D+M-1)N_\Gamma \approx 4.1 \cdot 10^7$ unknowns, and the system matrix in \eqref{equ:linearsystem} has approximately $3\cdot 10^8$ nonzero elements, i.e., approximately seven nonzero elements per row. In all our numerical experiments, \eqref{equ:linearsystem} is solved by the standard direct linear solver of MATLAB, i.e., by the {\tt mldivide} command, for simplicity and to avoid any convergence and preconditioning issues related to iterative methods. Using the conjugate gradient method with an ILU0-based preconditioner \cite{Saad03}, we have been able to tackle denser FEM and pixel meshes, e.g., $L=145$ corresponding to $N_\Gamma = 13203$ and $N_{\rm tot} \approx 1.2 \cdot 10^8$, but this does not result in significantly better results than the ones presented in Sections~\ref{sec:empty}--\ref{sec:two} below.
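The problem sizes quoted above can be reproduced directly from the stated formulas; the following sketch (ours, for verification only) checks them:

```python
from math import comb

# Reproduce the degree-of-freedom counts quoted in the text: the stochastic
# dimension for Lambda = isoTD(L + M, 2) is a binomial coefficient, and the
# total unknown count is (N_D + M - 1) * N_Gamma.
L, M, N_D = 76, 16, 9383
N_Gamma = comb(2 + (L + M), 2)      # expected: 4371
N_tot = (N_D + M - 1) * N_Gamma     # expected: about 4.1e7
nonzeros = 3e8                      # quoted order of magnitude
per_row = nonzeros / N_tot          # roughly seven nonzeros per row
```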
\begin{figure}
\caption{ Left: thorax-shaped water tank with no inclusions. Right: the computational domain, its partition into $L=76$ hexagonal pixels, and the $M=16$ attached electrodes. The current-feeding electrode $E_1$ is red and the others are numbered in counterclockwise order. }
\label{fig:pixelgrid}
\end{figure}
To motivate the choice of the stochastic index set, we mention that for $\Lambda = \isoTD(L+M,1)$, the conductivity reconstructions contain more artifacts, the inclusions are not as well localized, and the background conductivity level is higher and not as smooth as with $\isoTD(L+M,2)$. We were not able to test the case $\Lambda = \isoTD(L+M,3)$ with any reasonable FEM and pixel meshes due to memory and time constraints. There is an obvious trade-off between the fineness of the FEM mesh and the number of the hexagonal pixels in the reconstruction grid; the values listed in the previous paragraph represent a compromise arrived at via trial and error. Employing a denser FEM mesh forces one to use a coarser pixel grid --- and vice versa --- in order to keep the system size reasonable. Take note that increasing the number of spatial degrees of freedom $N_D$ affects the size of the sGFEM system \eqref{equ:linearsystem} linearly, whereas increasing $L$ leads to a quadratic growth rate since $$ N_\Gamma = {2+(L+M) \choose 2} = \frac{(L+17)(L+18)}{2} $$ for $\Lambda = \isoTD(L+M,2)$ and $M=16$ electrodes (cf.~\eqref{stoch_df}). Recall also that increasing $N_D$ affects only the computation time of the pre-measurement stage while the number of pixels in the reconstruction grid has an effect on the time consumption in both pre- and post-measurement phases.
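The quadratic growth of $N_\Gamma$ in $L$ is easy to verify numerically; the following sketch (our illustration) checks the closed form above against the two pixel grids mentioned in the text:

```python
from math import comb

# For M = 16 electrodes and Lambda = isoTD(L + M, 2), the stochastic dof
# count is a binomial coefficient that grows quadratically in the number
# of pixels L; compare with the closed form (L+17)(L+18)/2.
def n_gamma(L, M=16):
    return comb(2 + (L + M), 2)

# the two pixel grids mentioned in the text
coarse, dense = n_gamma(76), n_gamma(145)
```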
The magnitude of the measurement noise on each electrode is assumed to be proportional to the difference of the smallest and largest electrode potential measurement, leading to the choice (cf.~\cite{Darde13b}) \begin{equation} \label{noisevar} \mathbf{L} = \xi^2 \mathbf{I},\, \qquad \xi = 0.01 \, (\max(\mathbf{v}) - \min(\mathbf{v})), \end{equation} for the noise covariance matrix. Here and in the following, $\mathbf{I}$ denotes an identity matrix of the appropriate size. Loosely speaking, \eqref{noisevar} corresponds to assuming one per cent of measurement noise. As the noise level of the measurement device is probably only a couple of per mille depending on the measurement channel \cite{Kourunen09}, the assumed high variance for the noise process is actually used partially to mask the unavoidable discretization errors in the sGFEM forward solution for \eqref{equ:stokamuoto}; see \cite[Remark~5.1]{Hakula14}. We use the correlation length $s = 5$ and the standard deviation $\eta = 10\xi$ in the prior covariance matrix $\mathbf{M}$ of \eqref{equ:priorcov}. The choice of $s$ reflects the prior assumption on the diameter of the embedded inhomogeneities, while the values of the other free parameters $\xi$ and $\eta$ were chosen by trial and error, guided by the last test case (cf.~Figure~\ref{fig:Example1_5}). The prior covariance matrix was constructed assuming that all pixels are hexagonal, and hence some center points of the pixels actually lie outside the computational domain.
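A minimal sketch of the noise model \eqref{noisevar} (our illustration; the potential values are hypothetical):

```python
# The assumed noise standard deviation is one per cent of the range of the
# measured electrode potentials; the covariance matrix is xi^2 times the
# identity, i.e., a constant diagonal.
def noise_std(v):
    return 0.01 * (max(v) - min(v))

v = [-12.0, 3.5, 8.0]               # hypothetical electrode potentials
xi = noise_std(v)
covariance_diag = [xi ** 2] * len(v)  # diagonal of L = xi^2 * I
```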
The MAP estimate $\textbf{y}_{\rm MAP}$ --- and subsequently $\sigma_{\rm MAP}$ --- is obtained by solving \eqref{equ:MAPproblem} as a nonlinear least-squares minimization problem by resorting to the {\tt lsqnonlin} function provided by the Optimization Toolbox of MATLAB. The CM estimates for the conductivity and the contact conductances as well as the related standard deviations are computed via Markov chain Monte Carlo (MCMC) simulations; the usage of a deterministic sparse quadrature rule such as the one of Smolyak \cite{Smolyak63,Schillings13} would be another possibility, but we have had more success with MCMC techniques in connection with EIT. The standard Metropolis--Hastings algorithm (see,~e.g.,~\cite{Kaipio04a}) is used to generate a sample of parameter vectors \begin{align*} \{ \textbf{y}^{(1)},\ldots,\textbf{y}^{(N)}\} \subset \mathbb{R}^{L+M} \end{align*}
that is distributed (approximately) according to the posterior $\pi(\textbf{y}~|~\mathbf{v})$ given by~\eqref{Bayes1}. Starting from the corresponding MAP estimate, we use a single random walk, with a burn-in period of $5 \cdot 10^4$ and a thinning of five, i.e., we only store every fifth element of the Markov chain, to generate $N=4 \cdot 10^5$ samples. The proposal density for the random walk is the truncated multivariate normal on $\Gamma$ centered at the previous sample with the covariance matrix $0.07^2\, \mathbf{I}$, resulting in an acceptance rate of approximately $30\%$. Subsequently, the integrals \eqref{CMint1} and \eqref{CMint3} are approximated as \begin{align*} \sigma_{\rm CM}(\textbf{x}) \approx \frac{1}{N} \sum_{i=1}^N \sigma(\textbf{y}^{(i)},\textbf{x}) \quad \textrm{ and } \quad (\zeta _m)_{\rm CM} \approx \frac{1}{N} \sum_{i=1}^N \zeta_m(\textbf{y}^{(i)}), \end{align*} respectively. Similarly, the standard deviations \eqref{CMint5} and \eqref{CMint6} are approximated as \begin{align*} \sigma_{\rm SD}(\textbf{x}) \approx \sum_{l=1}^L \sigma_l \mathbf{1}_{D_l}(\textbf{x}) \left[ \frac{1}{N} \sum_{i=1}^N (y^{(i)}_l)^2 - \left(\frac{1}{N} \sum_{i=1}^N y^{(i)}_l \right)^2 \right]^{\frac{1}{2}} \end{align*} and \begin{align*} (\zeta_m)_{\rm SD} \approx \frac{1}{2}(b_m-a_m) \left[ \frac{1}{N} \sum_{i=1}^N (y^{(i)}_{L+m})^2 - \left(\frac{1}{N} \sum_{i=1}^N y^{(i)}_{L+m} \right)^2 \right]^{\frac{1}{2}}, \end{align*} respectively. The number of samples was evaluated to be sufficient by visually examining the development of the CM estimates: in all numerical examples, the estimates seemed to stabilize after about $2 \cdot 10^5$ samples --- the final sample size was chosen to be twice as large.
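The sampling scheme above can be illustrated on a toy target. The sketch below (ours; the target is a standard normal density standing in for the EIT posterior, and all tuning constants are illustrative) runs a random-walk Metropolis--Hastings chain with a burn-in period and a thinning of five, then forms CM- and SD-type estimates from the stored samples:

```python
import math
import random

# Toy random-walk Metropolis-Hastings chain with burn-in and thinning,
# mirroring the scheme described in the text. Target: standard normal.
def log_target(y):
    return -0.5 * y * y

def mh_chain(n_keep, burn_in=500, thin=5, step=1.0, seed=1):
    rng = random.Random(seed)
    y, kept, accepted, steps = 0.0, [], 0, 0
    while len(kept) < n_keep:
        steps += 1
        proposal = y + rng.gauss(0.0, step)
        # accept with probability min(1, target(proposal)/target(y))
        if rng.random() < math.exp(min(0.0, log_target(proposal) - log_target(y))):
            y, accepted = proposal, accepted + 1
        if steps > burn_in and steps % thin == 0:  # store every fifth sample
            kept.append(y)
    return kept, accepted / steps

samples, acceptance_rate = mh_chain(4000)
cm = sum(samples) / len(samples)                              # CM-type estimate
sd = (sum(s * s for s in samples) / len(samples) - cm ** 2) ** 0.5  # SD-type estimate
```

For the standard normal target, the CM estimate should be near zero and the SD estimate near one, while a proposal step of order the posterior width keeps the acceptance rate at a reasonable level.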
The solution of the SCEM forward problem and most other computations were performed using the commercial software packages MATLAB\footnote{Version 8.2.0 (R2013b), The MathWorks Inc., Natick, Massachusetts, 2013.} and Mathematica\footnote{Version 9.0, Wolfram Research Inc., Champaign IL, 2012.}. MATLink~\cite{MATLink14} was employed for seamless two-way communication and data transfer between Mathematica and MATLAB, and the required FEM meshes were generated by the NETGEN mesh generator~\cite{NETGEN10}.
\begin{figure}
\caption{
Results of the first example.
Top left: the target without embedded inclusions.
Top right: the SD estimate $\sigma_{\rm SD}$.
Bottom left: the MAP estimate $\sigma_{\rm MAP}$.
Bottom right: the CM estimate $\sigma_{\rm CM}$.
The unit in all images is mS, the MAP and CM estimates use the same colormap, and the colorbar tick markers correspond to the contour lines in the images. }
\label{fig:Example1_0}
\end{figure}
\subsection{Experiment with empty tank} \label{sec:empty}
As a first simple example, we consider the setting in the top left image of Figure~\ref{fig:Example1_0}, i.e., the case of no embedded inclusions. The other images of Figure~\ref{fig:Example1_0} show interpolated versions of the pixelwise SD, MAP, and CM estimates for the conductivity. Both MAP and CM estimates produce tolerable and almost identical reconstructions of the empty tank. Take note that some of the small artifacts close to the object boundary are probably caused by mismodeled geometry: the shape of the water tank and the positions of the electrodes were estimated based on the photographs and previous experiments with the same measurement configuration (cf.~\cite{Darde13b}). As expected, the SD estimate reveals that the degree of uncertainty in the conductivity reconstruction is the highest in the central parts of the tank and the lowest by the object boundary, with the smallest values of $\sigma_{\rm SD}$ occurring close to the current-feeding (red) electrode. The SD estimates in the other four test cases follow this same intuitive pattern.
\begin{table}[t!] \footnotesize \begin{center}
\begin{tabular}{r cccccccc}
Electrode & \bf{1} & \bf{2} & \bf{3} & \bf{4} & \bf{5} & \bf{6} & \bf{7} & \bf{8} \\ \hline
MAP & 10 & 10 & 17 & 10 & 202 & 137 & 800 & 11 \\
CM & 448 & 546 & 567 & 564 & 340 & 497 & 446 & 453 \\
SD & 288 & 279 & 275 & 270 & 251 & 253 & 237 & 291 \\
~\\
Electrode & \bf{9} & \bf{10} & \bf{11} & \bf{12} & \bf{13} & \bf{14} & \bf{15} & \bf{16} \\ \hline
MAP & 382 & 681 & 11 & 13 & 10 & 27 & 10 & 762 \\
CM & 587 & 430 & 654 & 479 & 523 & 309 & 513 & 577 \\
SD & 241 & 250 & 268 & 265 & 283 & 232 & 289 & 282
\end{tabular} \end{center}
\caption{ The MAP, CM, and SD estimates for the contact conductances in the first experiment (mS/cm). The mean values of these MAP, CM, and SD estimates over the sixteen electrodes are 193, 496, and 266 mS/cm, respectively. } \label{tab:contact_conductances_ex1} \end{table}
\begin{figure}
\caption{
Results of the second example.
Top left: the target with one embedded insulating inclusion.
Top right: the SD estimate $\sigma_{\rm SD}$.
Bottom left: the MAP estimate $\sigma_{\rm MAP}$.
Bottom right: the CM estimate $\sigma_{\rm CM}$.
The unit in all images is mS, the MAP and CM estimates use the same colormap, and the colorbar tick markers correspond to the contour lines in the images. }
\label{fig:Example1_1}
\end{figure}
The contact conductance estimates for the first experiment are presented in Table~\ref{tab:contact_conductances_ex1}. For most electrodes, the MAP estimates of the contact conductances are close to the allowed minimum value, whereas the CM estimates stay at a higher level. One possible explanation for the low MAP estimates is the algorithm's attempt to explain the overall resistivity of the tank by introducing as high contact resistances as possible --- recall that we introduced no additional prior for the contact conductances in the post-measurement phase. Both the MAP and CM estimates give mean contact conductances that are below the center of the interval $[10, 10^3]\, {\rm mS/cm}$ assumed in the sGFEM forward solver; see Table~\ref{tab:contact_conductances_ex1}. We do not consider contact conductance estimates in the remaining examples as the general conclusions are the same as in this preliminary test --- and because the estimates for the contact conductances are not as interesting as the reconstructions of the conductivity.
\subsection{Experiments with one inclusion} \label{sec:one}
The top left image of Figure~\ref{fig:Example1_1} shows the target configuration of the second experiment: one insulating plastic cylinder embedded in the bottom right corner of the water tank. The other images in Figure~\ref{fig:Example1_1} are organized as in Figure~\ref{fig:Example1_0}, and they portray the MAP, CM, and SD estimates for the conductivity. Both the MAP and CM estimates are able to find the general location of the cylinder, with the MAP estimate providing a slightly better localization. In the third experiment, one hollow steel cylinder with rectangular cross-section is immersed in the water tank; see the top left image of Figure~\ref{fig:Example1_3}. The MAP and CM estimates presented in the bottom row of Figure~\ref{fig:Example1_3} provide reasonable reconstructions of the phantom also in this case, with the hump in the MAP estimate being once again slightly sharper than in the CM estimate. Notice that the minimal and maximal conductivity levels in the MAP and CM estimates of Figures~\ref{fig:Example1_1} and \ref{fig:Example1_3} do not lie close to the respective end points of the pixelwise interval $[0.2, 2.0]\, {\rm mS}$ used in the sGFEM forward solver: the Gaussian smoothness prior \eqref{prior2} employed in the post-measurement phase of the algorithm considerably restricts the spatial variations in the reconstructions of the conductivity.
\begin{figure}
\caption{
Results of the third example.
Top left: the target with one embedded highly conducting inclusion.
Top right: the SD estimate $\sigma_{\rm SD}$.
Bottom left: the MAP estimate $\sigma_{\rm MAP}$.
Bottom right: the CM estimate $\sigma_{\rm CM}$.
The unit in all images is mS, the MAP and CM estimates use the same colormap, and the colorbar tick markers correspond to the contour lines in the images. }
\label{fig:Example1_3}
\end{figure}
A comparison of the reconstructions in Figures~\ref{fig:Example1_1} and \ref{fig:Example1_3} reveals that inclusions close to the exterior boundary are better localized than those deep inside the domain, which is not surprising taking into account the general form of the SD estimates. This trend does not depend significantly on the type of the inclusion (insulating or highly conducting) or its location in relation to the current-feeding electrode. Notice that the correlation length $s=5\, {\rm cm}$ in the prior covariance matrix \eqref{equ:priorcov} is arguably somewhat conservative: we also tested smaller values such as $s = 3\, {\rm cm}$, which typically resulted in better resolution and contrast for the (target) inclusions, but in some cases small inclusion-like artifacts also appeared in the background,~i.e.,~at locations where there is only water inside the tank.
\begin{figure}
\caption{
Results of the fourth example.
Top left: the target with one insulating and one highly conducting inclusion.
Top right: the SD estimate $\sigma_{\rm SD}$.
Bottom left: the MAP estimate $\sigma_{\rm MAP}$.
Bottom right: the CM estimate $\sigma_{\rm CM}$.
The unit in all images is mS, the MAP and CM estimates use the same colormap, and the colorbar tick markers correspond to the contour lines in the images. }
\label{fig:Example1_4}
\end{figure}
\subsection{Experiments with two inclusions} \label{sec:two} We conclude with two experiments with a pair of embedded inclusions: one plastic and one metallic cylinder. The target configurations are shown in the top left images of Figures~\ref{fig:Example1_4} and \ref{fig:Example1_5}. The other images in Figures~\ref{fig:Example1_4} and \ref{fig:Example1_5} illustrate the corresponding MAP, CM, and SD estimates for the two measurement configurations. Even in this slightly more complicated setting, our algorithm produces reasonably good reconstructions: in both experiments, the positions of the two inhomogeneities can be identified accurately from the MAP and CM estimates. Indeed, the highest and lowest reconstructed conductivity levels are attained close to the center points of the metallic and plastic inclusions, respectively. However, the reconstructions are heavily blurred, which is not very surprising as the employed prior \eqref{prior2} prefers slow changes over sharp boundaries.
We have not tested the algorithm with a higher number of inhomogeneities, but we suspect that the parametrization of the conductivity by the $76$ pixels depicted in Figure~\ref{fig:pixelgrid} is insufficient for reconstructing much more complicated phantoms than the ones in Figures~\ref{fig:Example1_4} and \ref{fig:Example1_5}.
\begin{figure}
\caption{
Results of the fifth example.
Top left: the target with one insulating and one highly conducting inclusion.
Top right: the SD estimate $\sigma_{\rm SD}$.
Bottom left: the MAP estimate $\sigma_{\rm MAP}$.
Bottom right: the CM estimate $\sigma_{\rm CM}$.
The unit in all images is mS, the MAP and CM estimates use the same colormap, and the colorbar tick markers correspond to the contour lines in the images. }
\label{fig:Example1_5}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
We have studied the feasibility of solving the reconstruction problem of EIT by combining SCEM and sGFEM, with the unknown conductivity field parametrized by its values at a set of pixels. The functionality of the method was demonstrated by applying it to five data sets from water tank experiments. In all cases, the resulting MAP and CM estimates clearly provided useful information about the conductivity phantom.
Assuming that the measurement configuration and the preliminary bounds for the pixelwise conductivity values are known well in advance, the pre-measurement phase of the reconstruction algorithm can be performed off-line, and subsequently the (approximate) posterior distribution of the conductivity is obtained practically for free when the measurement data becomes available. Hence, the on-line solution phase of the algorithm consists solely of extracting the desired estimators from the explicitly parametrized posterior.
In the post-measurement phase of the algorithm, we resorted exclusively to a Gaussian prior with a covariance matrix of the type \eqref{equ:priorcov}, which resulted in blurred conductivity reconstructions. In principle, it should also be possible to use any other prior (e.g., total variation \cite{Vogel96}) for the conductivity in the post-processing phase. Such a modification would only affect the form of the target function in \eqref{equ:MAPproblem} and the integrands in \eqref{CMint1} and \eqref{CMint5}, but it could lead to, e.g., more accurate detection of inclusion boundaries. This line of research is left for future studies.
\section*{Acknowledgments} We would like to thank Professor Jari Kaipio's research group at the University of Eastern Finland (Kuopio) for granting us access to their EIT devices. We acknowledge CSC -- IT Center for Science Ltd. for the allocation of computational resources (project ay6302).
\end{document}
\begin{document}
\begin{abstract}
{\small This article introduces new algorithms for the uniform random generation of labelled planar graphs. Its principles rely on Boltzmann samplers, as recently developed by Duchon, Flajolet, Louchard, and Schaeffer. It combines the Boltzmann framework, a suitable use of rejection, a new combinatorial bijection found by Fusy, Poulalhon and Schaeffer, as well as a precise analytic description of the generating functions counting planar graphs, which was recently obtained by Gim\'enez and Noy. This gives rise to an extremely efficient algorithm for the random generation of planar graphs. There is a preprocessing step of some fixed small cost; and
the expected time complexity of generation is quadratic for exact-size uniform sampling and linear for approximate-size sampling. This greatly improves on the best previously known time complexity for exact-size uniform sampling of planar graphs with $n$ vertices, which was a little over $O(n^7)$.
\emph{This is the extended and revised journal version of a conference paper with the title ``Quadratic exact-size and linear approximate-size random generation of planar graphs'', which appeared in the Proceedings of the International Conference on Analysis of Algorithms (AofA'05), 6-10 June 2005, Barcelona.}} \end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
A graph is said to be planar if it can be embedded in the plane so that no two edges cross each other. In this article, we consider
planar graphs that are \emph{labelled}, i.e., the $n$ vertices bear distinct labels in $[1..n]$, and \emph{simple}, i.e., with no loops or multiple edges. Statistical properties of planar graphs have been intensively studied~\cite{BGHPS04,Ge,gimeneznoy}. Very recently, Gim\'enez and Noy~\cite{gimeneznoy} have solved \emph{exactly} the difficult problem of the asymptotic enumeration of labelled planar graphs. They also provide exact analytic expressions for the asymptotic probability distribution of parameters such as the number of edges and the number of connected components. However, many other statistics on random planar graphs remain analytically and combinatorially intractable. Thus, it is an important issue to design efficient random samplers in order to observe the (asymptotic) behaviour of such parameters on random planar graphs. Moreover, random generation is useful to test the correctness and efficiency of algorithms on planar graphs, such as planarity testing, embedding algorithms, procedures for finding geometric cuts, and so on.
Denise, Vasconcellos, and Welsh have proposed a first algorithm for the random generation of planar graphs~\cite{alain96random}, by defining a Markov chain on the set $\mathcal{G}_n$ of labelled planar graphs with $n$ vertices. At each step, two different vertices $v$ and $v'$ are chosen at random. If they are adjacent, the edge $(v,v')$ is deleted. If they are not adjacent and if the operation of adding $(v,v')$ does not break planarity, then the edge $(v,v')$ is added. By symmetry of the transition matrix of the Markov chain, the probability distribution converges to the uniform distribution on $\mathcal{G}_n$. This algorithm is very easy to describe but more difficult to implement, as there exists no simple linear-time planarity testing algorithm. More importantly, the rate of convergence to the uniform distribution is unknown.
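The transition rule of this Markov chain is simple to sketch. In the toy version below (our illustration) we restrict to $n \le 4$ vertices, where every graph is planar, so the planarity test is a trivial stub; a real implementation would substitute an actual planarity-testing routine:

```python
import random

# One step of the edge add/delete Markov chain: pick two distinct vertices
# at random; delete the edge if present, otherwise add it when planarity is
# preserved. The stub below is valid only because all graphs on at most
# four vertices are planar.
def is_planar(edges, n):
    return n <= 4  # placeholder for a genuine planarity test

def step(edges, n, rng):
    v, w = rng.sample(range(n), 2)
    e = (min(v, w), max(v, w))
    if e in edges:
        edges.remove(e)
    elif is_planar(edges | {e}, n):
        edges.add(e)

rng = random.Random(7)
n, edges = 4, set()
for _ in range(1000):
    step(edges, n, rng)
```

Since adding and deleting a given edge are mutually inverse moves with equal probability, the transition matrix is symmetric, which is what makes the stationary distribution uniform.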
A second approach for uniform random generation is the \emph{recursive method} introduced by Nijenhuis and Wilf~\cite{NiWi79} and formalised by Flajolet, Van Cutsem and Zimmermann~\cite{FlZiVa94}. The recursive method is a general framework for the random generation of combinatorial classes admitting a recursive decomposition. For such classes, producing an object of the class uniformly at random boils down to producing the \emph{decomposition tree} corresponding to its recursive decomposition. Then, the branching probabilities that produce the decomposition tree with suitable (uniform) probability are computed using the \emph{coefficients} counting the objects involved in the decomposition. As a consequence, this method requires a preprocessing step where large tables of large coefficients are calculated using the recursive relations they satisfy.
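The recursive method can be demonstrated on a toy class: plane binary trees with $n$ internal nodes, counted by the Catalan numbers. The sketch below (our illustration, far simpler than the planar-graph case) computes the branching probabilities from precomputed coefficients, exactly as described above:

```python
import random
from functools import lru_cache

# Recursive-method sampler for plane binary trees with n internal nodes.
# The coefficient table (here the Catalan numbers) is built from the
# recursive decomposition B = leaf + B x B.
@lru_cache(maxsize=None)
def catalan(n):
    if n == 0:
        return 1
    return sum(catalan(k) * catalan(n - 1 - k) for k in range(n))

def random_binary_tree(n, rng):
    if n == 0:
        return "leaf"
    # left subtree has size k with probability catalan(k)*catalan(n-1-k)/catalan(n)
    r = rng.randrange(catalan(n))
    for k in range(n):
        r -= catalan(k) * catalan(n - 1 - k)
        if r < 0:
            return (random_binary_tree(k, rng), random_binary_tree(n - 1 - k, rng))

def size(t):
    return 0 if t == "leaf" else 1 + size(t[0]) + size(t[1])
```

The output is exactly of the requested size and uniform over trees of that size, at the cost of precomputing (large) coefficient tables.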
\begin{figure}
\caption{Complexities of the random samplers of planar graphs ($O^{*}$ stands for a big $O$ taken up to logarithmic factors).}
\label{table:compar}
\end{figure}
Bodirsky \emph{et al.} have described in~\cite{bodirsky} the first polynomial-time random sampler for planar graphs. Their idea is to apply the recursive method of sampling to a well known combinatorial decomposition of planar graphs according to successive levels of connectivity, which has been formalised by Tutte~\cite{Tut}. Precisely, the decomposition yields some recurrences satisfied by the coefficients counting planar graphs as well as subfamilies (connected, 2-connected, 3-connected), which in turn yield an explicit recursive way to generate planar graphs uniformly at random. As the recurrences are rather involved, the complexity of the preprocessing step is large. Precisely, in order to draw planar graphs with $n$ vertices (and possibly also a fixed number $m$ of edges), the random generator described in~\cite{bodirsky} requires a preprocessing time of order $O\left( n^7 (\log n)^2(\log \log n ) \right) $ and an auxiliary memory of size $O( n^5 \log n)$. Once the tables have been computed, the complexity
of each generation is $O(n^3)$. A more recent optimisation of the
recursive method by Denise and
Zimmermann~\cite{denise99uniform} ---based on controlled real arithmetics--- should be applicable; it would improve the time complexity somewhat, but the storage complexity would still be large.
In this article, we introduce a new random generator for labelled planar graphs, which relies on the same decomposition of planar graphs as the algorithm of Bodirsky \emph{et al}. The main difference is that we translate this decomposition into a random generator using the framework of Boltzmann samplers, instead of the recursive method. Boltzmann samplers have been recently developed by Duchon, Flajolet, Louchard, and Schaeffer in~\cite{DuFlLoSc04} as a powerful framework for the random generation of decomposable combinatorial structures. The idea of Boltzmann sampling is to gain efficiency by
relaxing the constraint of exact-size sampling. As we will see, the gain is particularly significant in the case of planar graphs, where the decomposition is more involved than for classical classes, such as trees. Given a combinatorial class, a \emph{Boltzmann sampler} draws an object of size $n$ with probability proportional to $x^n$ (or proportional to $x^n/n!$ for labelled objects), where $x$ is a certain \emph{real} parameter that can be appropriately tuned. Accordingly, the probability distribution is spread over all the objects of the class, with the property that objects of the same size have the same probability of occurring. In particular, the probability distribution is uniform when restricted to a fixed size. Like the recursive method, Boltzmann samplers can be designed for any combinatorial class admitting a recursive decomposition, as there are explicit sampling rules associated with each classical construction (Sum, Product, Set, Substitution). The branching probabilities used to produce the decomposition tree of a random object are not based on the \emph{coefficients} as in the recursive method, but on the \emph{values} at $x$ of the generating functions of the classes intervening in the decomposition.
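For contrast with the recursive method, a Boltzmann sampler for the same toy class of plane binary trees can be sketched as follows (our illustration). The generating function satisfies $B(x) = 1 + xB(x)^2$, and the branching probability involves the \emph{value} $B(x)$ rather than the counting coefficients:

```python
import random

# Boltzmann sampler for plane binary trees (size = number of internal
# nodes): an object of size n is drawn with probability proportional to x^n.
def B(x):
    # closed form of the generating function below the singularity x = 1/4
    return (1.0 - (1.0 - 4.0 * x) ** 0.5) / (2.0 * x)

def boltzmann_tree(x, rng):
    # leaf with probability 1/B(x); internal node with probability x*B(x).
    # The two add up to one since B(x) = 1 + x*B(x)^2.
    if rng.random() < 1.0 / B(x):
        return "leaf"
    return (boltzmann_tree(x, rng), boltzmann_tree(x, rng))

def well_formed(t):
    return t == "leaf" or (isinstance(t, tuple) and len(t) == 2
                           and well_formed(t[0]) and well_formed(t[1]))
```

The output size is random, concentrated around a value controlled by the parameter $x$; tuning $x$ towards the singularity pushes the expected size up, which is the mechanism exploited for approximate-size sampling.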
In this article, we translate the decomposition of planar graphs into Boltzmann samplers and obtain
very efficient random generators that produce
planar graphs with a fixed number of vertices or with fixed numbers of vertices and edges uniformly at random. Furthermore, our samplers have an approximate-size version where a small tolerance, say a few percent, is allowed for the size of the output. For practical purposes, approximate-size random sampling often suffices. The approximate-size samplers we propose are very efficient as they have \emph{linear time complexity}.
\begin{theorem}[Samplers with respect to number of vertices] \label{theo:planarsamp1} Let $n\in \mathbf{N}$ be a target size. An \emph{exact-size} sampler $\frak{A}_n$ can be designed so as to generate labelled planar graphs with $n$ vertices uniformly at random. For any tolerance ratio $\epsilon>0$, an \emph{approximate-size} sampler $\frak{A}_{n,\epsilon}$ can be designed so as to generate planar graphs with their number of vertices in $[n(1-\epsilon),n(1+\epsilon)]$, and following the
uniform distribution for each size $k\in [n(1-\epsilon),n(1+\epsilon)]$.
Under a real-arithmetics complexity model, Algorithm $\frak{A}_n$ is of expected complexity $O(n^2)$, and Algorithm $\frak{A}_{n,\epsilon}$ is of expected complexity $O(n/\epsilon)$.
\end{theorem}
\begin{theorem}[Samplers with respect to the numbers of vertices and edges] \label{theo:planarsamp2} Let $n\in \mathbf{N}$ be a target size and $\mu\in(1,3)$ be a parameter describing the ratio edges-vertices. An \emph{exact-size} sampler $\overline{\frak{A}}_{n,\mu}$ can be designed so as to generate planar graphs with $n$ vertices and $\lfloor \mu n\rfloor$ edges uniformly at random. For any tolerance-ratio $\epsilon>0$, an \emph{approximate-size} sampler $\overline{\frak{A}}_{n,\mu,\epsilon}$ can be designed so as to generate planar graphs with their number of vertices in $[n(1-\epsilon),n(1+\epsilon)]$ and their ratio edges/vertices in $[\mu (1-\epsilon),\mu (1+\epsilon)]$, and following the
uniform distribution for each fixed pair (number of vertices, number of edges).
Under a real-arithmetics complexity model, for a fixed $\mu\in(1,3)$, Algorithm $\overline{\frak{A}}_{n,\mu}$ is of expected complexity $O_{\mu}(n^{5/2})$. For fixed constants $\mu\in(1,3)$ and $\epsilon>0$, Algorithm $\overline{\frak{A}}_{n,\mu,\epsilon}$ is of expected complexity $O_{\mu}(n/\epsilon)$ (the bounding constants depend on $\mu$).
\end{theorem} \noindent The samplers are completely described in Section~\ref{sec:sample_vertices} and Section~\ref{sec:sample_edges}. The expected complexities will be proved in Section~\ref{sec:complexity}. For the sake of simplicity, we give big $O$ bounds that might depend on $\mu$ and we do not care about quantifying the constant in the big $O$ in a precise way. However we strongly believe that a more careful analysis would allow us to have a uniform bounding constant (over $\mu\in(1,3)$) of reasonable magnitude. This means that not only the theoretical complexity is good but also the practical one. (As we review in Section~\ref{sec:implement}, we have implemented the algorithm, which easily draws graphs of sizes in the range of $10^5$.)
\emph{Complexity model.} Let us comment on the model we adopt to state the complexities of the random samplers. We assume here that we are given an \emph{oracle}, which provides at unit cost the exact evaluations of the generating functions intervening in the decomposition of planar graphs. (For planar graphs, these generating functions are those of families of planar graphs of different connectivity degrees and pointed in different ways.) This assumption, called the ``oracle assumption", is by now classical when analysing the complexity of Boltzmann samplers, see~\cite{DuFlLoSc04} for a more detailed discussion; it allows us to separate the \emph{combinatorial complexity} of the samplers from the complexity of \emph{evaluating} the generating functions, which belongs to the realm of computer algebra and is a research project on its own. Once the oracle assumption is made, the generation scenario of a Boltzmann sampler is typically similar to a branching process; the generation follows a sequence of \emph{random choices} ---typically coin flips biased by some generating function values--- that determine the shape of the object to be drawn. According to these choices, the object (in this article, a planar graph) is built effectively by a sequence of primitive operations such as vertex creation, edge creation, merging two graphs at a common vertex... The \emph{combinatorial complexity} is precisely defined as the sum of the number of coin flips and the number of primitive operations performed to build the object. The (combinatorial) complexity of our algorithm is compared to the complexities of the two preceding random samplers in Figure~\ref{table:compar}.
Let us now comment on the preprocessing complexity. The implementation of $\frak{A}_{n,\epsilon}$ and $\frak{A}_n$, as well as $\overline{\frak{A}}_{n,\mu,\epsilon}$ and $\overline{\frak{A}}_{n,\mu}$, requires the storage of a fixed number of real constants, which are special values of generating functions. The generating functions to be evaluated are those of several families of planar graphs (connected, 2-connected, 3-connected). A crucial result, recently established by Gim\'enez and Noy~\cite{gimeneznoy}, is that there exist exact analytic equations satisfied by these generating functions. Hence, their numerical evaluation can be performed efficiently with the help of a computer algebra system; the complexity we have observed in practice (doing the computations with Maple) is of low polynomial degree $k$ in the number of digits that need to be computed. (However, there is not yet a complete rigorous proof of the fact, as the Boltzmann parameter has to approach the singularity in order to draw planar graphs of large size.) To draw objects of size $n$, the precision needed to make the probability of failure small is typically of order $\log(n)$ digits\footnote{Notice that it is possible to
achieve perfect uniformity by calling adaptive precision routines in case of failure, see Denise and Zimmermann~\cite{denise99uniform} for a detailed discussion on similar problems.}. Thus the preprocessing step to evaluate the generating functions with a precision of $\log(n)$ digits
has a complexity of order $\log(n)^k$ (again, this is yet to be proved rigorously). The following informal statement summarizes the discussion; making a theorem of it is the subject of ongoing research (see the recent article~\cite{PiSaSo07}):
\noindent{\bf Fact.} \emph{With high probability, the auxiliary memory necessary to generate planar graphs of size $n$ is of order $O(\log(n))$ and the preprocessing time complexity is of order $O(\log(n)^k)$ for some low integer $k$.}
\emph{Implementation and experimental results.} We have completely implemented the random samplers stated in Theorem~\ref{theo:planarsamp1} and Theorem~\ref{theo:planarsamp2}.
Details are given in Section~\ref{sec:implement}, as well as experimental results. Precisely, the evaluations of the generating functions of planar graphs have been carried out with the computer algebra system Maple, based on the analytic expressions given by Gim\'enez and Noy~\cite{gimeneznoy}. Then, the random generator has been implemented in Java, with a precision of 64 bits for the values of generating functions (``double'' type). Using the approximate-size sampler, planar graphs of size of order 100,000 are generated in a few seconds on a machine clocked at 1GHz. In contrast, the recursive method of Bodirsky \emph{et al.} is currently limited to sizes of about 100.
Having the random generator implemented, we have performed some simulations in order to observe typical properties of random planar graphs. In particular we have observed a sharp concentration for the proportion of vertices of a given degree $k$ in a random planar graph of large size.
\section{Overview}
The algorithm we describe relies mainly on two ingredients. The first one is a recent correspondence, called the closure-mapping, between binary trees and (edge-rooted) 3-connected planar graphs~\cite{FuPoSc05}, which makes it possible to obtain a Boltzmann sampler for 3-connected planar graphs. The second one is a decomposition formalised by Tutte~\cite{Tut}, which ensures that any planar graph can be decomposed into 3-connected components, via connected and 2-connected components. Taking advantage of Tutte's decomposition, we explain in Section~\ref{sec:decomp} how to specify a Boltzmann sampler for planar graphs, denoted $\Gamma\mathcal{G}(x,y)$, from the Boltzmann sampler for 3-connected planar graphs. To do this, we have to extend the collection of constructions for Boltzmann samplers, as detailed in~\cite{DuFlLoSc04},
and develop new rejection techniques so as to suitably handle the rooting/unrooting operations that appear alongside Tutte's decomposition.
Even though the Boltzmann sampler $\Gamma\mathcal{G}(x,y)$ already yields a polynomial-time uniform random sampler for planar graphs, the expected time complexity to generate a graph of size $n$ ($n$ vertices) is unsatisfactory, due to the fact that the size distribution of $\Gamma \mathcal{G}(x,y)$ is too concentrated on objects of small size. To improve the size distribution, we \emph{point} the objects, in a way inspired by~\cite{DuFlLoSc04}, which corresponds to a \emph{derivation} (differentiation) of the associated generating function. The precise singularity analysis of the generating functions of planar graphs, recently carried out in~\cite{gimeneznoy}, indicates that we have to take the second derivative of the class of planar graphs in order to get a good size distribution. In Section~\ref{sec:efficient}, we explain how the derivation operator can be injected into the decomposition of planar graphs. This yields a Boltzmann sampler $\Gamma \mathcal{G}''(x,y)$ for ``bi-derived'' planar graphs. Our random generators for planar graphs are finally obtained as \emph{targetted samplers}, which call $\Gamma \mathcal{G}''(x,y)$ (with suitably tuned values of $x$ and $y$) until the generated graph has the desired size. The time complexity of the targetted samplers is analysed in Section~\ref{sec:complexity}. This eventually yields the complexity results stated in Theorems~\ref{theo:planarsamp1} and~\ref{theo:planarsamp2}. The general scheme of the planar graph generator is shown in Figure~\ref{fig:relations}.
\begin{figure}
\caption{The chain of constructions from binary trees to planar graphs.}
\label{fig:relations}
\end{figure}
\section{Boltzmann samplers} \label{sec:bolz} In this section, we define Boltzmann samplers and describe the main properties which we will need to handle planar graphs. In particular, we have to extend the framework to the case of \emph{mixed classes}, meaning that the objects have two types of atoms. Indeed the decomposition of planar graphs involves both (labelled) vertices and (unlabelled) edges. The constructions needed to formulate the decomposition of planar graphs are classical ones in combinatorics: Sum, Product, Set, Substitutions~\cite{BeLaLe,fla}. In Section~\ref{sec:rule}, for each of the constructions, we describe a \emph{sampling rule}, so that Boltzmann samplers can be assembled for any class that admits a decomposition in terms of these constructions. Moreover, the decomposition of planar graphs involves rooting/unrooting operations, which makes it necessary to develop new rejection techniques, as described in Section~\ref{sec:reject}.
\subsection{Definitions} \label{sec:bolzdef} A combinatorial class $\mathcal{C}$ is a family of labelled objects (structures), that is, each object is made of $n$ atoms that bear distinct labels in $[1..n]$. In addition, the number of objects of any fixed size $n$ is finite; and any structure obtained by relabelling a structure in $\mathcal{C}$ is also in $\mathcal{C}$. The \emph{exponential} generating function of $\mathcal{C}$ is defined as
$$C(x):=\sum_{\gamma\in\mathcal{C}}\frac{x^{|\gamma|}}{|\gamma|!},$$
where $|\gamma|$ is the size of an object $\gamma\in\mathcal{C}$ (e.g., the number of vertices of a graph). The radius of convergence of $C(x)$ is denoted by $\rho$. A positive value $x$ is called \emph{admissible} if $x\in(0,\rho)$ (hence the sum defining $C(x)$ converges if $x$ is admissible).
Boltzmann samplers, as introduced and developed by Duchon \emph{et al.} in~\cite{DuFlLoSc04}, constitute a general and efficient framework to produce a random generator for any \emph{decomposable} combinatorial class $\mathcal{C}$. Instead of fixing a particular size for the random generation, objects are drawn under a probability distribution spread over the whole class. Precisely, given an admissible value for $C(x)$,
the Boltzmann distribution assigns to each object of $\mathcal{C}$ the probability
$$\mathbf{P}_x(\gamma)=\frac{x^{|\gamma|}}{|\gamma|!C(x)}\, .$$ Notice that the distribution is uniform on each size, i.e., two objects of the same size have the same probability of being chosen. A \emph{Boltzmann sampler} for the labelled class $\mathcal{C}$ is a procedure $\Gamma \mathcal{C}(x)$ that, for each fixed admissible $x$, draws objects of $\mathcal{C}$ at random under the distribution $\mathbf{P}_x$.
The authors of~\cite{DuFlLoSc04} give sampling rules associated to classical combinatorial constructions, such as Sum, Product, and Set. (For the unlabelled setting, we refer to the more recent article~\cite{FlFuPi07}, and to~\cite{BoFuPi06} for the specific case of plane partitions.)
In order to translate the combinatorial decomposition of planar graphs into a Boltzmann sampler, we need to extend the framework
of Boltzmann samplers to the bivariate case of \emph{mixed} combinatorial classes. A mixed class $\mathcal{C}$ is a labelled combinatorial class where one takes into account a second type of atoms, which are unlabelled. Precisely, an object in $\mathcal{C}=\cup_{n,m}\mathcal{C}_{n,m}$ has $n$ ``labelled atoms'' and $m$ ``unlabelled atoms'', e.g., a graph has $n$ labelled vertices and $m$ unlabelled edges. The labelled atoms are shortly called L-atoms, and the unlabelled atoms are shortly called U-atoms. For $\gamma\in\mathcal{C}$, we write
$|\gamma|$ for the number of L-atoms of $\gamma$, called the \emph{L-size} of $\gamma$, and
$||\gamma||$ for the number of U-atoms of $\gamma$, called the \emph{U-size} of $\gamma$. The associated generating function $C(x,y)$ is defined as
$$C(x,y):=\sum_{\gamma\in\mathcal{C}}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}.$$ For a fixed real value $y>0$, we denote by $\rho_C(y)$ the radius of convergence of the function $x\mapsto C(x,y)$. A pair $(x,y)$ is said to be \emph{admissible} if $x\in (0,\rho_C(y))$, which
implies that $\sum_{\gamma\in\mathcal{C}}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$ converges and that $C(x,y)$ is well defined. Given an admissible pair $(x,y)$, the \emph{mixed Boltzmann distribution} is the probability distribution $\mathbf{P}_{x,y}$ assigning to each object $\gamma\in\mathcal{C}$ the probability
$$\mathbf{P}_{x,y}(\gamma)=\frac{1}{C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}.$$
An important property of this distribution is that two objects with the same size-parameters have the same probability of occurring.
A \emph{mixed Boltzmann sampler} at $(x,y)$ ---shortly called Boltzmann sampler hereafter--- is a procedure $\Gamma \mathcal{C}(x,y)$ that draws objects of $\mathcal{C}$ at random under the distribution $\mathbf{P}_{x,y}$. Notice that the specialization $y=1$ yields a classical Boltzmann sampler for $\mathcal{C}$.
\subsection{Basic classes and constructions} \label{sec:rule}
We describe here a collection of basic classes and constructions that are used thereafter to formulate a decomposition for the family of planar graphs.
The basic classes we consider are: \begin{itemize} \item The 1-class, made of a unique object of size 0 (both the L-size and the U-size are equal to 0), called the 0-atom. The corresponding mixed generating function is $C(x,y)=1$. \item The L-unit class, made of a unique object that is an L-atom; the corresponding mixed generating function is $C(x,y)=x$. \item The U-unit class, made of a unique object that is a U-atom; the corresponding mixed generating function is $C(x,y)=y$. \end{itemize}
Let us now describe the five constructions that are used to decompose planar graphs. In particular, we need two specific substitution constructions, one at labelled atoms that is called L-substitution, the other at unlabelled atoms that is called U-substitution.
\noindent{\bf Sum.} The sum $\mathcal{C}:=\mathcal{A}+\mathcal{B}$ of two classes is meant as a \emph{disjoint union}, i.e., it is the union of two distinct copies of $\mathcal{A}$ and $\mathcal{B}$. The generating function of $\mathcal{C}$ satisfies $$ C(x,y)=A(x,y)+B(x,y). $$
\noindent{\bf Product.} The partitional product of two classes $\mathcal{A}$ and $\mathcal{B}$ is the class
$\mathcal{C}:=\mathcal{A}\star\mathcal{B}$ of objects that are obtained by taking a pair $\gamma=(\gamma_1\in\mathcal{A},\gamma_2\in\mathcal{B})$ and relabelling the L-atoms so that $\gamma$ bears distinct labels in $[1..|\gamma|]$. The generating function of $\mathcal{C}$ satisfies $$ C(x,y)=A(x,y)\cdot B(x,y). $$
\noindent{$\mathbf{Set_{\geq d}}$.} For $d\geq 0$ and a class $\mathcal{B}$ having no object of size 0, any object $\gamma$ in $\mathcal{C}:=\Set_{\geq d}(\mathcal{B})$ is a finite set of at least $d$ objects of $\mathcal{B}$, relabelled so that the atoms of $\gamma$ bear distinct labels in $[1..|\gamma|]$. For $d=0$, this corresponds to the classical construction $\Set$. The generating function of $\mathcal{C}$ satisfies $$ C(x,y)=\exp_{\geq d}(B(x,y)),\ \ \ \mathrm{where}\ \exp_{\geq d}(z):=\sum_{k\geq d}\frac{z^k}{k!}. $$
\noindent{\bf L-substitution.} Given $\mathcal{A}$ and $\mathcal{B}$ two classes such that $\mathcal{B}$ has no object of size $0$, the class $\mathcal{C}=\mathcal{A}\circ_L\mathcal{B}$ is the class of objects that are obtained as follows: take an object $\rho\in\mathcal{A}$ called the \emph{core-object}, substitute each L-atom $v$ of $\rho$ by an object $\gamma_v\in\mathcal{B}$, and
relabel the L-atoms of $\cup_{v}\gamma_v$ with distinct labels from $1$ to $\sum_v |\gamma_v|$. The generating function of $\mathcal{C}$ satisfies $$ C(x,y)=A(B(x,y),y). $$
\noindent{\bf U-substitution.} Given $\mathcal{A}$ and $\mathcal{B}$ two classes such that $\mathcal{B}$ has no object of size $0$, the class $\mathcal{C}=\mathcal{A}\circ_U\mathcal{B}$ is the class of objects that are obtained as follows: take an object $\rho\in\mathcal{A}$ called the \emph{core-object}, substitute each U-atom $e$ of $\rho$ by an object $\gamma_e\in\mathcal{B}$, and relabel the L-atoms of $\rho\cup\left(\cup_{e}\gamma_e\right)$ with
distinct labels from $1$ to $|\rho|+\sum_e |\gamma_e|$. We assume here that the U-atoms of an object of $\mathcal{A}$ are \emph{distinguishable}. In particular, this property is satisfied if $\mathcal{A}$ is a family of labelled graphs with no multiple edges, since two different edges are distinguished by the labels of their extremities. The generating function of $\mathcal{C}$ satisfies $$ C(x,y)=A(x,B(x,y)). $$
\begin{figure}
\caption{The sampling rules associated with the basic classes and the constructions. For each rule involving partitional products, there is a relabelling step performed by an auxiliary procedure \textsc{DistributeLabels}. Given an object $\gamma$ with its
L-atoms ranked from $1$ to $|\gamma|$, \textsc{DistributeLabels}($\gamma$) draws a permutation
$\sigma$ of $[1..|\gamma|]$ uniformly at random and gives label $\sigma(i)$ to the atom of rank $i$. }
\label{table:rules}
\end{figure}
\subsection{Sampling rules} \label{sec:rules_boltzma} A nice feature of Boltzmann samplers is that the basic combinatorial constructions (Sum, Product, Set) give rise to simple rules for assembling the associated Boltzmann samplers. To describe these rules, we assume that the exact values of the
generating functions at a given admissible pair $(x,y)$ are known. We will also need two well-known probability distributions. \begin{itemize} \item A random variable follows a \emph{Bernoulli law} of parameter $p\in (0,1)$ if it is equal to 1 (or true) with probability $p$ and equal to 0 (or false) with probability $1-p$. \item Given $\lambda\in\mathbb{R}_{+}$ and $d\in\mathbb{Z}_{\geq 0}$, the \emph{conditioned Poisson law} $\Pois_{\geq d}(\lambda)$ is the probability distribution on $\mathbb{Z}_{\geq d}$ defined as follows: $$ \mathbb{P}(k)=\frac{1}{\exp_{\geq d}(\lambda)}\frac{\lambda^k}{k!},\ \mathrm{where}\ \exp_{\geq d}(z):=\sum_{k\geq d}\frac{z^k}{k!}. $$ For $d=0$, this corresponds to the classical Poisson law, abbreviated as $\Pois$. \end{itemize}
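As a concrete complement to these definitions (our own illustration, not part of the samplers described in this article), the conditioned Poisson law can be drawn by inversion: pick a uniform number in $[0,\exp_{\geq d}(\lambda))$ and subtract the successive terms $\lambda^k/k!$ for $k=d,d+1,\ldots$ until the budget is exhausted. A minimal Python sketch:

```python
import math
import random

def exp_geq(d, lam, terms=200):
    """exp_{>=d}(lam) = sum_{k>=d} lam^k / k!, summed via a stable recurrence."""
    term = lam ** d / math.factorial(d)
    total = 0.0
    for k in range(d, d + terms):
        total += term
        term *= lam / (k + 1)  # lam^(k+1)/(k+1)! from lam^k/k!
    return total

def poisson_geq(d, lam, rng=random):
    """Sample from the conditioned Poisson law Pois_{>=d}(lam) by inversion."""
    u = rng.random() * exp_geq(d, lam)
    k = d
    term = lam ** d / math.factorial(d)  # P(k), up to the normalising constant
    while u > term and term > 0.0:
        u -= term
        k += 1
        term *= lam / k
    return k
```

The incremental update `term *= lam / k` avoids recomputing powers and factorials at each step.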
Starting from combinatorial classes $\mathcal{A}$ and $\mathcal{B}$ endowed with Boltzmann samplers $\Gamma \mathcal{A}(x,y)$ and $\Gamma \mathcal{B}(x,y)$, Figure~\ref{table:rules} describes how to assemble a sampler for a class $\mathcal{C}$ obtained from $\mathcal{A}$ and $\mathcal{B}$ (or from $\mathcal{A}$ alone for the construction $\Set_{\geq d}$)
using the five constructions described in this section.
\begin{proposition} \label{prop:rules} Let $\mathcal{A}$ and $\mathcal{B}$ be two mixed combinatorial classes endowed with Boltzmann samplers $\Gamma \mathcal{A}(x,y)$ and $\Gamma \mathcal{B}(x,y)$.
For each of the five constructions $\{+$, $\star$, $\Set_{\geq d}$, L-subs, U-subs$\}$, the sampler $\Gamma \mathcal{C}(x,y)$, as specified in Figure~\ref{table:rules}, is a valid Boltzmann sampler
for the combinatorial class $\mathcal{C}$. \end{proposition} \begin{proof} 1) \emph{Sum:} $\mathcal{C}=\mathcal{A}+\mathcal{B}$. An object $\gamma$ of $\mathcal{A}$ has probability
$\frac{1}{A(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$ (by definition of $\Gamma \mathcal{A}(x,y)$) multiplied by $\frac{A(x,y)}{C(x,y)}$ (because of the Bernoulli choice) of being drawn by $\Gamma \mathcal{C}(x,y)$.
Hence, it has probability $\frac{1}{C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$ of being drawn. Similarly, an object of
$\mathcal{B}$ has probability $\frac{1}{C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$ of being drawn. Hence $\Gamma \mathcal{C}(x,y)$ is a valid Boltzmann sampler for $\mathcal{C}$.
\noindent 2) \emph{Product:} $\mathcal{C}=\mathcal{A}\star\mathcal{B}$. Define a \emph{generation scenario} as a pair $(\gamma_1\in\mathcal{A},\gamma_2\in\mathcal{B})$, together with a function $\sigma$ that assigns to each L-atom in $\gamma_1\cup\gamma_2$
a label $i\in[1..|\gamma_1|+|\gamma_2|]$ in a bijective way. By definition, $\Gamma \mathcal{C}(x,y)$ draws a generation scenario and returns the object $\gamma\in\mathcal{A}\star\mathcal{B}$ obtained by keeping the secondary labels (the ones given by \textsc{DistributeLabels}).
Each generation scenario has probability
$$\left(\frac{1}{A(x,y)}\frac{x^{|\gamma_1|}}{|\gamma_1|!}y^{||\gamma_1||}\right)\left(\frac{1}{B(x,y)}\frac{x^{|\gamma_2|}}{|\gamma_2|!}y^{||\gamma_2||}\right)\frac{1}{(|\gamma_1|+|\gamma_2|)!}$$ of being drawn, the three factors corresponding respectively to $\Gamma \mathcal{A}(x,y)$, $\Gamma \mathcal{B}(x,y)$, and \textsc{DistributeLabels}($\gamma$). Observe that this probability has the more compact form $$
\frac{1}{|\gamma_1|!|\gamma_2|!}\frac{1}{C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||} .$$ Given $\gamma\in\mathcal{A}\star\mathcal{B}$, let $\gamma_1$ be its first component (in $\mathcal{A}$) and $\gamma_2$ be its second component (in $\mathcal{B}$). Any relabelling of the labelled atoms of $\gamma_1$ from $1$
to $|\gamma_1|$ and of the labelled atoms of $\gamma_2$ from $1$ to
$|\gamma_2|$ induces a unique generation scenario producing
$\gamma$. Indeed, the two relabellings determine unambiguously the relabelling permutation $\sigma$ of the generation scenario. Hence, $\gamma$ is produced from $|\gamma_1|!|\gamma_2|!$ different scenarios, each having probability
$\frac{1}{|\gamma_1|!|\gamma_2|!C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. As a consequence, $\gamma$ is drawn under the Boltzmann distribution.
\noindent 3) \emph{Set}$_{\geq d}$: $\mathcal{C}=\Set_{\geq d}(\mathcal{B})$. In the case of the construction $\Set_{\geq d}$, a \emph{generation scenario} is defined as a sequence $(\gamma_1\in\mathcal{B},\ldots,\gamma_k\in\mathcal{B})$ with $k\geq d$, together with a function $\sigma$ that assigns to each L-atom in $\gamma_1\cup\cdots\cup\gamma_k$
a label $i\in[1..|\gamma_1|+\cdots+|\gamma_k|]$ in a bijective way. Such a generation scenario produces an object $\gamma\in\Set_{\geq d}(\mathcal{B})$. By definition of $\Gamma \mathcal{C}(x,y)$, each scenario has probability $$\left( \frac{1}{\exp_{\geq d}(B(x,y))}\frac{B(x,y)^k}{k!}\right)\left(\prod_{i=1}^k
\frac{x^{|\gamma_i|}y^{||\gamma_i||}}{B(x,y)|\gamma_i|!}\right)\frac{1}{(|\gamma_1|+\cdots+|\gamma_k|)!},$$ the three factors corresponding respectively to drawing $\Pois_{\geq d}(B(x,y))$, drawing the sequence, and the relabelling step. This probability has the simpler form
$$\frac{1}{k!C(x,y)}\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}\prod_{i=1}^k\frac{1}{|\gamma_i|!}.$$ For $k\geq d$, an object $\gamma\in\Set_{\geq d}(\mathcal{B})$ can be written as a sequence $\gamma_1,\ldots,\gamma_k$ in $k!$ different ways. In addition, by a similar argument as for the Product construction, a sequence
$\gamma_1,\ldots,\gamma_k$ is produced from $\prod_{i=1}^k|\gamma_i|!$ different scenarios. As a consequence, $\gamma$ is drawn under the Boltzmann distribution.
\noindent 4) \emph{L-substitution}: $\mathcal{C}=\mathcal{A}\circ_L\mathcal{B}$. For this construction, a \emph{generation scenario} is defined as a core-object
$\rho\in\mathcal{A}$, a sequence $\gamma_1,\ldots,\gamma_{|\rho|}$ of objects of $\mathcal{B}$ ($\gamma_i$
stands for the object of $\mathcal{B}$ substituted at the atom $i$ of $\rho$), together with a function $\sigma$ that assigns to each L-atom in $\gamma_1\cup\cdots\cup\gamma_{|\rho|}$
a label $i\in[1..|\gamma_1|+\cdots+|\gamma_{|\rho|}|]$ in a bijective way. This corresponds to the scenario of generation of an object $\gamma\in\mathcal{A}\circ_L\mathcal{B}$ by the algorithm $\Gamma \mathcal{C}(x,y)$, and this scenario has probability
$$\left(\frac{1}{A(B(x,y),y)}\frac{B(x,y)^{|\rho|}}{|\rho|!}y^{||\rho||}\right)\left(\prod_{i=1}^{|\rho|}\frac{x^{|\gamma_i|}y^{||\gamma_i||}}{B(x,y)|\gamma_i|!}\right)\frac{1}{(|\gamma_1|+\cdots+|\gamma_{|\rho|}|)!},$$ which has the simpler form
$$\frac{x^{|\gamma|}y^{||\gamma||}}{C(x,y)|\gamma|!}\frac{1}{|\rho|!}\prod_{i=1}^{|\rho|}\frac{1}{|\gamma_i|!}.$$ Given $\gamma\in\mathcal{A}\circ_L\mathcal{B}$, labelling
the core-object $\rho\in\mathcal{A}$ with distinct labels in $[1..|\rho|]$ and each component
$(\gamma_i)_{1\leq i\leq|\rho|}$ with distinct labels in $[1..|\gamma_i|]$
induces a unique generation scenario producing $\gamma$. As a consequence, $\gamma$ is produced from
$|\rho|!\prod_{i=1}^{|\rho|}|\gamma_i|!$ scenarios, each having probability
$\frac{x^{|\gamma|}y^{||\gamma||}}{C(x,y)|\gamma|!}\frac{1}{|\rho|!}\prod_{i=1}^{|\rho|}\frac{1}{|\gamma_i|!}$. Hence, $\gamma$ is drawn under the Boltzmann distribution.
\noindent 5) \emph{U-substitution}: $\mathcal{C}=\mathcal{A}\circ_U\mathcal{B}$. A \emph{generation scenario} is defined as a core-object
$\rho\in\mathcal{A}$, a sequence $\gamma_1,\ldots,\gamma_{||\rho||}$ of objects of $\mathcal{B}$ (upon giving a rank to each unlabelled atom of $\rho$, $\gamma_i$
stands for the object of $\mathcal{B}$ substituted at the U-atom of rank $i$ in $\rho$), and a function $\sigma$ that assigns to each L-atom in $\rho\cup\gamma_1\cup\cdots\cup\gamma_{||\rho||}$ a label $i\in[1..|\rho|+|\gamma_1|+\cdots+|\gamma_{||\rho||}|]$. This corresponds to the scenario of generation of an object $\gamma\in\mathcal{A}\circ_U\mathcal{B}$ by the algorithm $\Gamma \mathcal{C}(x,y)$; this scenario has probability
$$\left(\frac{1}{A(x,B(x,y))}\frac{x^{|\rho|}}{|\rho|!}B(x,y)^{||\rho||}\right)\left(\prod_{i=1}^{||\rho||}\frac{x^{|\gamma_i|}y^{||\gamma_i||}}{B(x,y)|\gamma_i|!}\right)\left(\frac{1}{(|\rho|+|\gamma_1|+\cdots+|\gamma_{||\rho||}|)!}\right).$$ This expression has the simpler form
$$\frac{x^{|\gamma|}y^{||\gamma||}}{C(x,y)|\gamma|!}\frac{1}{|\rho|!}\prod_{i=1}^{||\rho||}\frac{1}{|\gamma_i|!}.$$ Given $\gamma\in\mathcal{A}\circ_U\mathcal{B}$, labelling
the core-object $\rho\in\mathcal{A}$ with distinct labels in $[1..|\rho|]$ and each component
$(\gamma_i)_{1\leq i\leq||\rho||}$ with distinct labels in $[1..|\gamma_i|]$ induces a unique generation scenario producing $\gamma$. As a consequence, $\gamma$ is produced from
$|\rho|!\prod_{i=1}^{||\rho||}|\gamma_i|!$ scenarios, each having probability
$\frac{x^{|\gamma|}y^{||\gamma||}}{C(x,y)|\gamma|!}\frac{1}{|\rho|!}\prod_{i=1}^{||\rho||}\frac{1}{|\gamma_i|!}$. Hence, $\gamma$ is drawn under the Boltzmann distribution. \end{proof}
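To make the Product sampling rule concrete, here is a small Python sketch (our own illustration; the object representation and helper names are ours): the two components are drawn independently, and the relabelling step of \textsc{DistributeLabels} draws a uniform random permutation of $[1..|\gamma|]$.

```python
import random

def distribute_labels(atoms, rng=random):
    """Give the atoms (ranked by their position in the list) distinct labels
    via a uniform random permutation of [1..n], as DistributeLabels does."""
    n = len(atoms)
    sigma = list(range(1, n + 1))
    rng.shuffle(sigma)  # uniform random permutation
    return list(zip(atoms, sigma))  # atom of rank i gets label sigma(i)

def product_sampler(sample_A, sample_B, rng=random):
    """Boltzmann sampler for C = A * B: two independent calls, then relabelling."""
    gamma1 = sample_A()  # L-atoms of the first component
    gamma2 = sample_B()  # L-atoms of the second component
    return distribute_labels(gamma1 + gamma2, rng)
```

Here `sample_A` and `sample_B` stand for Boltzmann samplers returning the lists of L-atoms of the drawn objects; any richer object representation works the same way.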
\begin{example}\label{ex:binary} Consider the class $\mathcal{C}$ of rooted binary trees, where the (labelled) atoms are the inner nodes. The class $\mathcal{C}$ has the following decomposition grammar,
$$\mathcal{C}= \left( \mathcal{C}+ \mathbf{1}\right)\star \mathcal{Z}\star \left( \mathcal{C}+ \mathbf{1}\right).$$ Accordingly, the series $C(x)$ counting rooted binary trees satisfies $C(x)=x\left( 1+C(x)\right) ^2$. (Notice that $C(x)$ can be easily evaluated for a fixed real parameter $x<\rho_C=1/4$.)
Using the sampling rules for Sum and Product, we obtain the following Boltzmann sampler for binary trees, where $\{\bullet\}$ stands for a node:
\begin{tabular}{ll} $\Gamma \mathcal{C}(x):$& return $(\Gamma(1+\mathcal{C})(x),\{\bullet\},\Gamma(1+\mathcal{C})(x))$ \{independent calls\} \end{tabular}
\begin{tabular}{ll} $\Gamma(1+\mathcal{C})(x):$& if $\Bern\left(\frac{1}{1+C(x)}\right)$ return leaf\\ & else return $\Gamma \mathcal{C}(x)$ \end{tabular}
\noindent Distinct labels in $[1..|\gamma|]$ might then be distributed uniformly at random on the atoms of the resulting tree $\gamma$, so as to make it well-labelled (see Remark~\ref{rk:labels} below). Many more examples are given in~\cite{DuFlLoSc04} for labelled (and unlabelled) classes specified using the constructions $\{+,\star,\Set\}$. \end{example}
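Under the oracle assumption, the only constant needed by this sampler is the value $C(x)$. Since the map $c\mapsto x(1+c)^2$ is contracting near its small fixed point for $x<1/4$, this value can be computed by fixed-point iteration (alternatively, solving the quadratic gives $C(x)=(1-2x-\sqrt{1-4x})/(2x)$). The following Python sketch (our own illustration, not the article's Java implementation) puts the evaluation and the two mutually recursive samplers together:

```python
import math
import random

def eval_C(x, iters=2000):
    """Evaluate C(x) for x < 1/4 by iterating c <- x*(1 + c)^2."""
    c = 0.0
    for _ in range(iters):
        c = x * (1.0 + c) ** 2
    return c

def gamma_C(x, C, rng=random):
    """Boltzmann sampler for binary trees (atoms = inner nodes)."""
    return (gamma_1C(x, C, rng), "node", gamma_1C(x, C, rng))  # independent calls

def gamma_1C(x, C, rng=random):
    """Sampler for the class 1 + C: a leaf with probability 1/(1 + C(x))."""
    if rng.random() < 1.0 / (1.0 + C):
        return "leaf"
    return gamma_C(x, C, rng)

def size(t):
    """Number of inner nodes (L-atoms) of a tree."""
    return 0 if t == "leaf" else 1 + size(t[0]) + size(t[2])
```

Tuning $x$ towards $\rho_C=1/4$ shifts the Boltzmann distribution towards larger trees, at the price of higher variance.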
\begin{remark}\label{rk:labels} In the sampling rules (Figure~\ref{table:rules}), the procedure \textsc{DistributeLabels}($\gamma$) throws distinct labels uniformly at random on the L-atoms of $\gamma$. The fact that the relabelling permutation is always chosen uniformly at random ensures that the process of assigning the labels has no memory of the past, hence \textsc{DistributeLabels} needs to be called just once, at the end of the generation procedure. (A similar remark is given by Flajolet \emph{et al.} in~\cite[Sec. 3]{FlZiVa94} for the recursive method of sampling.)
In other words, when combining the sampling rules given in Figure~\ref{table:rules} in order to design a Boltzmann sampler, we can forget about the calls to \textsc{DistributeLabels}, see for instance the Boltzmann sampler for binary trees above. In fact, we have included the \textsc{DistributeLabels} steps in the definitions of the sampling rules only for the sake of writing the correctness proofs (Proposition~\ref{prop:rules}) in a proper way.
\end{remark}
\subsection{Additional techniques for Boltzmann sampling} As the decomposition of planar graphs we consider is a bit involved, we need a few techniques in order to properly translate this decomposition into a Boltzmann sampler. These techniques, which are described in more detail below, are: bijections, pointing, and rejection.
\subsubsection{Combinatorial isomorphisms} Two mixed classes $\mathcal{A}$ and $\mathcal{B}$ are said to be \emph{isomorphic}, shortly written as $\mathcal{A}\simeq\mathcal{B}$, if there exists a bijection $\Phi$ between $\mathcal{A}$ and $\mathcal{B}$ that preserves the size parameters, i.e., preserves the L-size and the U-size. (This is equivalent to the fact that the mixed generating functions of $\mathcal{A}$ and $\mathcal{B}$ are equal.) In that case, a Boltzmann sampler $\Gamma \mathcal{A}(x,y)$ for the class $\mathcal{A}$ yields a Boltzmann sampler for $\mathcal{B}$ via the isomorphism: $\Gamma \mathcal{B}(x,y): \gamma\leftarrow\Gamma \mathcal{A}(x,y);\ \mathrm{return}\ \Phi(\gamma)$.
\subsubsection{L-derivation, U-derivation, and edge-rooting.}\label{sec:derive} In order to describe our random sampler for planar graphs, we will make much use of \emph{derivative} operators. The L-derived class of a mixed class $\mathcal{C}=\cup_{n,m}\mathcal{C}_{n,m}$ (shortly called the derived class of $\mathcal{C}$) is the mixed class $\mathcal{C}'=\cup_{n,m}\mathcal{C}'_{n,m}$ of objects in $\mathcal{C}$ where the greatest label is taken out, i.e., the L-atom with greatest label is discarded from the set of L-atoms (see the book by Bergeron, Labelle, and Leroux~\cite{BeLaLe} for more details and examples). The class $\mathcal{C}'$ can be identified with the pointed class $\mathcal{C}^{\bullet}$ of $\mathcal{C}$, which is the class of objects of $\mathcal{C}$ with a distinguished L-atom. Indeed, the discarded atom in an object of $\mathcal{C}'$ plays the role of a pointed vertex. However, the important
difference between $\mathcal{C}'$ and $\mathcal{C}^{\bullet}$ is that the distinguished L-atom does not count in the L-size of an object in $\mathcal{C}'$. In other words, $\mathcal{C}^{\bullet}=\mathcal{Z}_L\star\mathcal{C}'$. Clearly, for any integers $n,m$, $\mathcal{C}'_{n-1,m}$ identifies to $\mathcal{C}_{n,m}$, so that the generating function $C'(x,y)$ of $\mathcal{C}'$ satisfies
\begin{equation}
C'(x,y)=\sum_{n,m} |\mathcal{C}_{n,m}|\frac{x^{n-1}}{(n-1)!}y^m=\partial_x C(x,y). \end{equation}
The U-derived class of $\mathcal{C}$ is the class $\underline{\mathcal{C}}$ of objects obtained from objects of $\mathcal{C}$ by discarding one U-atom from the set of U-atoms; in other words there is a distinguished U-atom that does not count in the U-size. As in the definition of the U-substitution, we assume that all the U-atoms are distinguishable, for instance the edges of a simple graph are distinguished by the labels of their extremities. In that case,
$|\underline{\mathcal{C}}_{n,m-1}|=m|\mathcal{C}_{n,m}|$, so that the generating function $\underline{C}(x,y)$ of $\underline{\mathcal{C}}$ satisfies \begin{equation}
\underline{C}(x,y)=\sum_{n,m} m|\mathcal{C}_{n,m}|\frac{x^{n}}{n!}y^{m-1}=\partial_y C(x,y). \end{equation}
For the particular case of planar graphs, we will also consider \emph{edge-rooted} objects (shortly called rooted objects), i.e., planar graphs where an edge is ``marked'' (distinguished) and directed. In addition, the root edge, shortly called the root, is not counted as an unlabelled atom, and the two extremities of the root do not count as labelled atoms (i.e., are not labelled). The edge-rooted class of $\mathcal{C}$ is denoted by $\overrightarrow{\mathcal{C}}$. Clearly we have $\mathcal{Z}_L^{\ 2}\star\overrightarrow{\mathcal{C}}\simeq 2\star \underline{\mathcal{C}}$. Hence, the generating function $\overrightarrow{C}(x,y)$ of $\overrightarrow{\mathcal{C}}$ satisfies \begin{equation} \overrightarrow{C}(x,y)=\frac{2}{x^2}\partial_y C(x,y). \end{equation}
\subsubsection{Rejection.}\label{sec:reject} Using rejection techniques offers great flexibility to design Boltzmann samplers, since it makes it possible to adjust the distributions of the samplers. \begin{lemma}[Rejection] \label{lemma:rej} Given a combinatorial class $\mathcal{C}$, let $W:\mathcal{C}\to\mathbb{R}^+$ and $p:\mathcal{C}\to [0,1]$ be two functions, called \emph{weight-function} and \emph{rejection-function}, respectively. Assume that $W$ is summable, i.e., $\sum_{\gamma\in\mathcal{C}}W(\gamma)$ is finite. Let $\frak{A}$ be a random generator for $\mathcal{C}$ that draws each object $\gamma\in\mathcal{C}$ with probability proportional to $W(\gamma)$. Then, the procedure $$ \frak{A}_{\mathrm{rej}}:\mathrm{repeat}\ \frak{A}\rightarrow\gamma\ \mathrm{until}\ \mathrm{Bern}(p(\gamma));\ \mathrm{return}\ \gamma $$ is a random generator on $\mathcal{C}$, which draws each object $\gamma\in\mathcal{C}$ with probability proportional to $W(\gamma)p(\gamma)$. \end{lemma} \begin{proof} Define $Z:=\sum_{\gamma\in\mathcal{C}}W(\gamma)$. By definition, $\frak{A}$ draws an object $\gamma\in\mathcal{C}$ with probability $P(\gamma):=W(\gamma)/Z$. Let $p_{\mathrm{rej}}$ be the probability of failure of $\frak{A}_{\mathrm{rej}}$ at each attempt. The probability $P_{\mathrm{rej}}(\gamma)$ that $\gamma$ is drawn by $\frak{A}_{\mathrm{rej}}$ satisfies $ P_{\mathrm{rej}}(\gamma)=P(\gamma)p(\gamma)+p_{\mathrm{rej}}P_{\mathrm{rej}}(\gamma),$ where the first (second) term is the probability that $\gamma$ is drawn at the first attempt (at a later
attempt, respectively). Hence, $P_{\mathrm{rej}}(\gamma)=P(\gamma)p(\gamma)/(1-p_{\mathrm{rej}})=W(\gamma)p(\gamma)/(Z\cdot(1-p_{\mathrm{rej}}))$, i.e., $P_{\mathrm{rej}}(\gamma)$ is proportional to $W(\gamma)p(\gamma)$. \end{proof}
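As an illustration of the rejection lemma (a sketch of ours on a toy finite class, not part of the planar-graph algorithm), the following Python code tilts a weighted generator by a rejection function. With weights $W(a)=1$, $W(b)=3$ and rejection probabilities $p(a)=1$, $p(b)=1/3$, the output distribution is proportional to $W(\gamma)p(\gamma)$, hence uniform:

```python
import random

def weighted_sampler(weights, rng=random):
    """Generator A: draws gamma with probability W(gamma) / sum(W)."""
    objs = list(weights)
    total = sum(weights.values())
    def sample():
        u = rng.random() * total
        for g in objs:
            u -= weights[g]
            if u <= 0:
                return g
        return objs[-1]  # guard against floating-point leftovers
    return sample

def rejection_sampler(sample, p, rng=random):
    """A_rej: repeat A until Bern(p(gamma)); draws with prob ~ W(gamma)*p(gamma)."""
    while True:
        g = sample()
        if rng.random() < p(g):
            return g
```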
Rejection techniques are very useful to change the way objects are rooted. Typically, they allow us to obtain a Boltzmann sampler for $\mathcal{A}'$ from a Boltzmann sampler for $\underline{\mathcal{A}}$ and vice versa. As we will use this trick many times, we formalise it here by giving two explicit procedures, one going from L-derived to U-derived objects, the other going from U-derived to L-derived objects.
\fbox{ \begin{minipage}{12cm} \LtoU\\
\phantom{1}\hspace{.5cm} INPUT: a mixed class $\mathcal{A}$ such that $\displaystyle\alpha_{U/L}:=\mathrm{sup}_{\gamma\in\mathcal{A}}\frac{||\gamma||}{|\gamma|}$ is finite,\\ \phantom{1}\hspace{2cm}a Boltzmann sampler $\Gamma \mathcal{A}'(x,y)$ for the L-derived class $\mathcal{A}'$\\[0.2cm] \phantom{1}\hspace{.5cm} OUTPUT: a Boltzmann sampler for the U-derived class $\underline{\mathcal{A}}$, defined as:\\[0.2cm] \begin{tabular}{ll} $\Gamma \underline{\mathcal{A}}(x,y)$:& repeat $\gamma\leftarrow\Gamma \mathcal{A}'(x,y)$ \{at this point $\gamma\in\mathcal{A}'$\}\\
& \phantom{1}\hspace{.2cm}give label $|\gamma|+1$ to the discarded L-atom of $\gamma$;\\
& \phantom{1}\hspace{.2cm}\{so $|\gamma|$ increases by $1$, and $\gamma\in\mathcal{A}$\}\\
& until $\displaystyle\mathrm{Bern}\left(\frac{1}{\alpha_{U/L}}\frac{||\gamma||}{|\gamma|}\right)$;\\ & choose a U-atom uniformly at random and discard it\\
& $\ \ $ from the set of U-atoms; \{so $||\gamma||$ decreases by $1$, and $\gamma\in\underline{\mathcal{A}}$\}\\ & return $\gamma$ \end{tabular}
\end{minipage}}
\begin{lemma}\label{lem:LtoU} The procedure \LtoU yields a Boltzmann sampler for the class $\underline{\mathcal{A}}$ from a Boltzmann sampler for the class $\mathcal{A}'$. \end{lemma} \begin{proof} First, observe that the sampler is well defined. Indeed, by definition of the parameter $\alpha_{U/L}$, the Bernoulli choice is always valid (i.e., its parameter is always in $[0,1]$). Notice that the sampler\\ \phantom{1}\hspace{.4cm}$\gamma\leftarrow\Gamma \mathcal{A}'(x,y)$;\\
\phantom{1}\hspace{.4cm}give label $|\gamma|+1$ to the discarded L-atom of $\gamma$;\\ \phantom{1}\hspace{.4cm}return $\gamma$\\
\noindent is a sampler for $\mathcal{A}$ that outputs each object $\gamma\in\mathcal{A}$ with probability $\frac{1}{A'(x,y)}\frac{x^{|\gamma|-1}}{(|\gamma|-1)!}y^{||\gamma||}$, because $\mathcal{A}_{n,m}$ identifies with $\mathcal{A}'_{n-1,m}$. In other words, this sampler draws each object $\gamma\in\mathcal{A}$
with probability proportional to $|\gamma|\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. Hence, according to Lemma~\ref{lemma:rej}, the repeat-until loop of the sampler $\Gamma \underline{\mathcal{A}}(x,y)$ yields a sampler for $\mathcal{A}$ such that each object has
probability proportional to $||\gamma||\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. As each U-atom has probability $1/||\gamma||$ of being discarded, the final sampler is such that each object $\gamma\in\underline{\mathcal{A}}$ has probability proportional to $\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. So $\Gamma\underline{\mathcal{A}}(x,y)$ is a Boltzmann sampler for $\underline{\mathcal{A}}$. \end{proof}
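The distribution argument in the proof of Lemma~\ref{lem:LtoU} can also be checked numerically on a synthetic mixed class. In the sketch below (all size pairs and parameter values are illustrative), an object is reduced to its size pair $(n,m)=(|\gamma|,||\gamma||)$, a mock sampler plays the role of $\Gamma \mathcal{A}'(x,y)$ followed by the relabelling step (drawing each object with probability proportional to $n\,x^n/n!\,y^m$), and the repeat-until loop of \LtoU\ is applied; the output should then be distributed proportionally to $m\,x^n/n!\,y^m$.

```python
import math
import random

x, y = 0.5, 0.7   # parameter values (illustrative)

# Synthetic objects: (n, m) = (|gamma|, ||gamma||); one object per pair here.
objects = [(1, 1), (2, 1), (2, 3), (3, 2), (4, 6)]
alpha = max(m / n for n, m in objects)          # alpha_{U/L}

def w_derived(n, m):
    # weight after the relabelling step: n * x^n/n! * y^m
    return n * x**n / math.factorial(n) * y**m

def mock_gamma_A_prime(rng):
    """Plays the role of Gamma A'(x,y) followed by the relabelling step."""
    wts = [w_derived(n, m) for n, m in objects]
    return rng.choices(objects, weights=wts)[0]

def l_to_u(rng):
    """The repeat-until loop of LtoU (the uniform U-atom discard is omitted,
    since it does not change which object is returned here)."""
    while True:
        n, m = mock_gamma_A_prime(rng)
        if rng.random() < (m / n) / alpha:      # Bern((1/alpha) ||g||/|g|)
            return (n, m)

rng = random.Random(1)
trials = 200_000
counts = {}
for _ in range(trials):
    g = l_to_u(rng)
    counts[g] = counts.get(g, 0) + 1

# Expected output distribution: proportional to m * x^n/n! * y^m.
tgt = {(n, m): m * x**n / math.factorial(n) * y**m for n, m in objects}
tot = sum(tgt.values())
for g in objects:
    assert abs(counts.get(g, 0) / trials - tgt[g] / tot) < 0.01
```

The symmetric check for \UtoL\ is obtained by swapping the roles of $n$ and $m$ in the weights.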
We define a similar procedure to go from a U-derived class to an L-derived class:
\fbox{ \begin{minipage}{12cm} \UtoL\\
\phantom{1}\hspace{.5cm} INPUT: a mixed class $\mathcal{A}$ such that $\displaystyle\alpha_{L/U}:=\mathrm{sup}_{\gamma\in\mathcal{A}}\frac{|\gamma|}{||\gamma||}$ is finite,\\ \phantom{1}\hspace{2cm}a Boltzmann sampler $\Gamma \underline{\mathcal{A}}(x,y)$ for the U-derived class $\underline{\mathcal{A}}$\\[0.2cm] \phantom{1}\hspace{.5cm} OUTPUT: a Boltzmann sampler for the L-derived class $\mathcal{A}'$, defined as:\\[0.2cm] \begin{tabular}{ll} $\Gamma \mathcal{A}'(x,y)$:& repeat $\gamma\leftarrow\Gamma \underline{\mathcal{A}}(x,y)$ \{at this point $\gamma\in\underline{\mathcal{A}}$\}\\ & \phantom{1}\hspace{.2cm}take the discarded U-atom of $\gamma$ back in the set of U-atoms;\\
& \phantom{1}\hspace{.2cm} \{so $||\gamma||$ increases by $1$, and $\gamma\in\mathcal{A}$\}\\
& until $\displaystyle\mathrm{Bern}\left(\frac{1}{\alpha_{L/U}}\frac{|\gamma|}{||\gamma||}\right)$;\\ & discard the L-atom with greatest label from the set of L-atoms;\\
& \{so $|\gamma|$ decreases by $1$, and $\gamma\in\mathcal{A}'$\}\\ & return $\gamma$ \end{tabular}
\end{minipage}}
\begin{lemma}\label{lem:UtoL} The procedure \UtoL yields a Boltzmann sampler for the class $\mathcal{A}'$ from a Boltzmann sampler for the class $\underline{\mathcal{A}}$. \end{lemma} \begin{proof} Similar to the proof of Lemma~\ref{lem:LtoU}. The sampler $\Gamma \mathcal{A}'(x,y)$ is well defined, as the Bernoulli choice is always valid (i.e., its parameter is always in $[0,1]$). Notice that the sampler\\ \phantom{1}\hspace{.4cm}$\gamma\leftarrow\Gamma \underline{\mathcal{A}}(x,y)$;\\ \phantom{1}\hspace{.4cm}take the discarded U-atom back to the set of U-atoms of $\gamma$;\\ \phantom{1}\hspace{.4cm}return $\gamma$\\
\noindent is a sampler for $\mathcal{A}$ that outputs each object $\gamma\in\mathcal{A}$ with probability $\frac{1}{\underline{A}(x,y)}||\gamma||\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||-1}$ (because an object $\gamma\in\mathcal{A}_{n,m}$
gives rise to $m$ objects in $\underline{\mathcal{A}}_{n,m-1}$), i.e., with probability proportional to $||\gamma||\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. Hence, according to Lemma~\ref{lemma:rej}, the repeat-until loop of the sampler $\Gamma \mathcal{A}'(x,y)$ yields a sampler for $\mathcal{A}$ such that each object $\gamma\in\mathcal{A}$ has
probability proportional to $|\gamma|\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$, i.e., proportional to $\frac{x^{|\gamma|-1}}{(|\gamma|-1)!}y^{||\gamma||}$. Hence, by discarding the greatest L-atom (i.e., $|\gamma|\leftarrow|\gamma|-1$),
we get a probability proportional to $\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$ for every object $\gamma\in\mathcal{A}'$, i.e., a Boltzmann sampler for $\mathcal{A}'$. \end{proof}
\begin{remark}\label{remark:greatest_delete} We have stated in Remark~\ref{rk:labels} that, during a generation process, it is more convenient in practice to manipulate the shapes of the objects without systematically assigning labels to them. However, in the definition of the sampler $\Gamma \mathcal{A}'(x,y)$, one step is to remove the greatest label, so it seems we need to look at the labels at that step. In fact, as we consider here classes that are stable under relabelling, it is equivalent in practice to draw uniformly at random one vertex to play the role of the discarded L-atom.
\end{remark}
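The equivalence stated in the remark can be checked by a short simulation (a toy illustration, not tied to any particular class): under a uniformly random labelling of $n$ vertices, the vertex carrying the greatest label is uniformly distributed among the $n$ positions, so removing it has the same effect as removing a uniformly chosen vertex.

```python
import random

rng = random.Random(4)
n, trials = 5, 100_000
counts_max = [0] * n      # which position holds the greatest label
for _ in range(trials):
    labels = rng.sample(range(1, n + 1), n)   # uniformly random labelling
    counts_max[labels.index(n)] += 1

# Removing the greatest label hits each position uniformly,
# exactly like drawing one vertex uniformly at random.
for c in counts_max:
    assert abs(c / trials - 1 / n) < 0.01
```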
\section{Decomposition of planar graphs and Boltzmann samplers} \label{sec:decomp}
Our algorithm starts with the generation of 3-connected planar graphs, which have the nice feature that they are combinatorially tractable. Indeed, according to a theorem of Whitney~\cite{Whitney33}, 3-connected planar graphs have a unique embedding (up to reflection), so they are equivalent to 3-connected planar maps. Following the general approach introduced by Schaeffer~\cite{S-these},
a bijection has been described by the author, Poulalhon, and Schaeffer~\cite{FuPoSc05}
to enumerate 3-connected maps from binary trees,
which yields an explicit Boltzmann sampler for (rooted) 3-connected maps, as described in Section~\ref{sec:bolz3conn}.
The next step is to generate 2-connected planar graphs from 3-connected ones. We take advantage of a decomposition of 2-connected planar graphs into 3-connected planar components, which has been formalised by Trakhtenbrot~\cite{trak} (and later used by Walsh~\cite{Wa} to count 2-connected planar graphs and by Bender, Gao, and Wormald to obtain asymptotic enumeration~\cite{BeGa}). Finally, connected planar graphs are generated from 2-connected ones by using the well-known decomposition into blocks, and planar graphs are generated from their connected components.
Let us mention that the decomposition of planar graphs into 3-connected components has been
completely formalised by Tutte~\cite{Tut} (though we rather use here formulations of this decomposition on \emph{rooted} graphs, as Trakhtenbrot did).
The complete scheme we follow is illustrated in Figure~\ref{fig:scheme_unrooted}.
\begin{figure}
\caption{The complete scheme to obtain a Boltzmann sampler for
planar graphs. The classes are to be defined all along Section~\ref{sec:decomp}.}
\label{fig:scheme_unrooted}
\end{figure}
\noindent\textbf{Notations.} Recall that a graph is $k$-connected if the removal of any set of $k-1$ vertices does not disconnect the graph. In the sequel, we consider the following classes of planar graphs:
\begin{tabular}{l} $\mathcal{G}$: the class of all planar graphs, including the empty graph,\\
$\mathcal{G}_1$: the class of connected planar graphs with at least one vertex,\\
$\mathcal{G}_2$: the class of 2-connected planar graphs with at least two vertices,\\ $\mathcal{G}_3$: the class of 3-connected planar graphs with at least four vertices. \end{tabular}
\begin{figure}
\caption{The connected planar graphs with at most four vertices (the 2-connected ones are surrounded). Below each graph is indicated the number of distinct labellings.}
\label{fig:firstTerms}
\end{figure}
All these classes are considered as mixed, with labelled vertices and unlabelled edges, i.e., the L-atoms are the vertices and the U-atoms are the edges. Let us give the first few terms of their mixed generating functions (see also Figure~\ref{fig:firstTerms}, which displays the first connected planar graphs): $$ \begin{array}{rcl} G(x,y)&=&1+x+\frac{x^2}{2!}(1+y)+\frac{x^3}{3!}(1+3y+3y^2+y^3)+\cdots\\[0.1cm] G_1(x,y)&=&x+\frac{x^2}{2!}y+\frac{x^3}{3!}(3y^2+y^3)+\frac{x^4}{4!}(16y^3+15y^4+6y^5+y^6)+\cdots\\[0.1cm] G_2(x,y)&=&\frac{x^2}{2!}y+\frac{x^3}{3!}y^3+\frac{x^4}{4!}(3y^4\!+\!6y^5\!+\!y^6)+\frac{x^5}{5!}(12y^5\!+\!70y^6\!+\!100y^7\!+\!15y^8\!+\!10y^9)+\cdots\\[0.1cm] G_3(x,y)&=&\frac{x^4}{4!}y^6+\frac{x^5}{5!}(15y^8+10y^9)+\frac{x^6}{6!}(60y^9+432y^{10}+540y^{11}+195y^{12})+\cdots \end{array} $$
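The coefficients of $G_1(x,y)$ displayed above can be verified by brute force, since every graph on at most four vertices is planar: the Python sketch below enumerates all labelled graphs on $n\leq 4$ vertices, keeps the connected ones, and tallies them by number of edges.

```python
from itertools import combinations

def connected_counts(n):
    """Number of connected labelled graphs on vertices 0..n-1, by edge count."""
    all_edges = list(combinations(range(n), 2))
    counts = {}
    for k in range(len(all_edges) + 1):
        for edges in combinations(all_edges, k):
            # union-find connectivity test
            parent = list(range(n))
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, v in edges:
                parent[find(u)] = find(v)
            if len({find(v) for v in range(n)}) == 1:
                counts[k] = counts.get(k, 0) + 1
    return counts

# Matches the displayed series for G_1(x,y):
assert connected_counts(1) == {0: 1}                      # x
assert connected_counts(2) == {1: 1}                      # x^2/2! * y
assert connected_counts(3) == {2: 3, 3: 1}                # x^3/3! (3y^2+y^3)
assert connected_counts(4) == {3: 16, 4: 15, 5: 6, 6: 1}  # x^4/4! (16y^3+...)
```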
Observe that, for a mixed class $\mathcal{A}$ of \emph{graphs}, the derived class $\mathcal{A}'$, as defined in Section~\ref{sec:derive},
is the class of graphs in $\mathcal{A}$ that have one vertex discarded from the set of L-atoms (this vertex plays the role of a distinguished vertex);
$\underline{\mathcal{A}}$ is the class of graphs in $\mathcal{A}$ with one edge discarded from the set of U-atoms
(this edge plays the role of a distinguished edge); and $\overrightarrow{\mathcal{A}}$ is the class of graphs in $\mathcal{A}$ with an ordered pair of adjacent vertices $(u,v)$ discarded from the set of L-atoms and the edge $(u,v)$ discarded from the set of U-atoms (such a graph can be considered as rooted at the directed edge $(u,v)$).
\subsection{Boltzmann sampler for 3-connected planar graphs} \label{sec:bolz3conn} In this section we develop a Boltzmann sampler for 3-connected planar graphs, more precisely for \emph{edge-rooted} ones, i.e., for the class $\overrightarrow{\mathcal{G}_3}$. Our sampler relies on two results. First, we recall the equivalence between 3-connected planar graphs and 3-connected maps, where the terminology of map refers to an explicit embedding. Second, we take advantage of a bijection linking the families of rooted 3-connected maps and the (very simple) family of binary trees, via intermediate objects that are certain quadrangular dissections of the hexagon. Using the bijection, a Boltzmann sampler for rooted binary trees is translated into a Boltzmann sampler for rooted 3-connected maps.
\subsubsection{Maps} A \emph{map on the sphere} (\emph{planar map}, resp.) is a connected planar graph embedded on the sphere (on the plane, resp.) up to continuous deformation of the surface, the embedded graph carrying distinct labels on its vertices (as usual, the labels range from $1$ to $n$, the number of vertices). A planar map is in fact equivalent to a map on the sphere with a distinguished face, which plays the role of the unbounded face. The unbounded face of a planar map is called the \emph{outer face}, and the other faces are called the \emph{inner faces}. The vertices and edges of a planar map are said to be \emph{outer} or \emph{inner} according to whether they are incident to the outer face or not. A map is said to be \emph{rooted} if the embedded graph is edge-rooted. The \emph{root vertex} is the origin of the root. Classically, rooted planar maps are always assumed to have the outer face
on the right of the root. With that convention, rooted planar maps are equivalent to rooted maps on the sphere (given a rooted map on the sphere, take the face on the right of the root as the outer face). See Figure~\ref{fig:primal}(c) for an example of rooted planar map, where the labels are forgotten\footnote{Classically, rooted maps are considered in the literature without labels on the vertices, as the root is enough to avoid symmetries. Nevertheless, it is convenient here to keep the framework of mixed classes for maps, as we do for graphs.}.
\subsubsection{Equivalence between 3-connected planar graphs and 3-connected maps}\label{sec:equiv} A well-known result due to Whitney~\cite{Whitney33} states that a labelled 3-connected planar graph has a unique embedding on the sphere up to continuous deformation and reflection (in general a planar graph can have many embeddings). Notice that any 3-connected map on the sphere with at least 4 vertices differs from its mirror-image, due to the labels on the vertices. Hence every 3-connected planar graph with at least 4 vertices gives rise to exactly two maps on the sphere. The class of 3-connected maps on the sphere with at least 4 vertices is denoted by $\mathcal{M}_3$. As usual, the class is mixed, the L-atoms being the vertices and the U-atoms being the edges. Whitney's theorem ensures that \begin{equation} \label{eq:M} \mathcal{M}_3\simeq 2\star\mathcal{G}_3. \end{equation}
Here we make use of the formulation of this isomorphism for \emph{edge-rooted} objects. The mixed class of rooted 3-connected planar maps with at least 4 vertices is denoted by $\overrightarrow{\mathcal{M}_3}$, where ---as for edge-rooted graphs--- the L-atoms are the vertices not incident to the root-edge and the U-atoms are the edges except the root. Equation~(\ref{eq:M}) becomes, for edge-rooted objects: \begin{equation} \overrightarrow{\mathcal{M}_3}\simeq2\star\overrightarrow{\mathcal{G}_3}. \end{equation}
Thanks to this isomorphism, finding a Boltzmann sampler $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$ for edge-rooted 3-connected planar graphs reduces to finding a Boltzmann sampler $\Gamma \overrightarrow{\mathcal{M}_3}(z,w)$ for rooted 3-connected maps, upon forgetting the embedding.
\subsubsection{3-connected maps and irreducible dissections}\label{sec:primal_map} We consider here some quadrangular dissections of the hexagon that are closely related to 3-connected planar maps. (We will see that these dissections can be efficiently generated at random, as they are in bijection with binary trees.)
Precisely, a \emph{quadrangulated map} is a planar map (with no loops or multiple edges) such that all faces except possibly the outer one have degree 4; it is called a quadrangulation if the outer face has degree 4. A quadrangulated map is called \emph{bicolored} if the vertices are colored black or white such that any edge connects two vertices of different colors. A rooted quadrangulated map (as usual with planar maps, the root has the outer face on its right) is always assumed to be endowed with the unique vertex bicoloration such that the root vertex is \emph{black} (such a bicoloration exists, as all inner faces have even degree). A quadrangulated map with an outer face of degree more than 4 is called \emph{irreducible} if each 4-cycle is the contour of a face. In particular, we define an \emph{irreducible dissection of the hexagon} ---shortly called irreducible dissection hereafter--- as an irreducible quadrangulated map with a hexagonal outer face, see Figure~\ref{fig:primal}(b) for an example. A quadrangulation is called irreducible if it has at least 2 inner vertices and if every 4-cycle, except the outer one, delimits a face.
Notice that the smallest irreducible dissection has one inner edge and no inner vertex
(see Figure~\ref{fig:asymmetric}), whereas the smallest irreducible quadrangulation is the embedded cube, which has 4 inner vertices and 5 inner faces. We consider irreducible dissections as objects of the mixed type, where the L-atoms are the black inner vertices and the U-atoms are the inner faces. It proves more convenient to consider here the irreducible dissections that are \emph{asymmetric}, meaning that there is no rotation fixing the dissection. The four non-asymmetric irreducible dissections are displayed in Figure~\ref{fig:asymmetric}(b); all the other ones are asymmetric, either due to an asymmetric shape or due to the labels on the black inner vertices. We denote by $\mathcal{I}$ the mixed class of \emph{asymmetric} bicolored irreducible dissections. We define also $\mathcal{J}$ as the class of asymmetric irreducible dissections that carry a root (outer edge directed so as to have a black origin and the outer face on its right), where this time the L-atoms are the black vertices except two of them (say, the origin of the root and the next black vertex in ccw order around the outer face) and the U-atoms are all the faces, including the outer one. Finally, we define $\mathcal{Q}$ as the mixed class of rooted irreducible quadrangulations, where the L-atoms are the black vertices except those two incident to the outer face, and the U-atoms are the inner faces.
Irreducible dissections are closely related to 3-connected maps, via a classical correspondence between planar maps and quadrangulations. Given a bicolored rooted quadrangulation $\kappa$, the \emph{primal map} of $\kappa$ is the rooted map $\mu$ whose vertex set is the set of black vertices of $\kappa$, each face $f$ of $\kappa$ giving rise to an edge of $\mu$ connecting the two (opposite) black vertices of $f$,
see Figure~\ref{fig:primal}(c)-(d). The map $\mu$ is naturally rooted so as to have the same root-vertex as $\kappa$.
\begin{theorem}[Mullin and Schellenberg~\cite{Mu}] The primal-map construction is a bijection between rooted irreducible quadrangulations with $n$ black vertices and $m$ faces, and rooted 3-connected maps with $n$ vertices and $m$ edges\footnote{More generally, the bijection holds between rooted quadrangulations and rooted 2-connected maps.}. In other words, the primal-map construction yields the combinatorial isomorphism \begin{equation} \mathcal{Q}\simeq\overrightarrow{\mathcal{M}_3}. \end{equation} In addition, the construction of a 3-connected map from an irreducible quadrangulation takes linear time. \end{theorem}
The link between $\mathcal{J}$ and $\overrightarrow{\mathcal{M}_3}$ is established via the family $\mathcal{Q}$, which is at the same time isomorphic to $\overrightarrow{\mathcal{M}_3}$ and closely related to $\mathcal{J}$. Let $\kappa$ be a rooted irreducible quadrangulation, and let $e$ be the edge following the root in cw order around the outer face. Then, deleting $e$ yields a rooted irreducible dissection $\delta$. In addition it is easily checked that $\delta$ is asymmetric, i.e., the four non-asymmetric irreducible dissections, which are shown in Figure~\ref{fig:asymmetric}(b), can not be obtained in this way. Hence the so-called \emph{root-deletion mapping} is injective from $\mathcal{Q}$ to $\mathcal{J}$. The inverse operation---called the \emph{root-addition mapping}---starts from a rooted irreducible dissection $\delta$, and adds an outer edge from the root-vertex of $\delta$ to the opposite outer vertex. Notice that the rooted quadrangulation obtained in this way might not be irreducible. Precisely, a non-separating 4-cycle appears iff $\delta$ has an internal path (i.e., a path using at least one inner edge) of length 3 connecting the root vertex to the opposite outer vertex. A rooted irreducible dissection $\delta$ is called \emph{admissible} iff it has no such path. The subclass of rooted irreducible dissections that are admissible is denoted by $\mathcal{J}_{\mathrm{a}}$. We obtain the following result, already given in~\cite{FuPoSc05}: \begin{lemma} The root-addition mapping is a bijection between admissible rooted irreducible dissections with $n$ black vertices and $m$ faces, and rooted irreducible quadrangulations with $n$ black vertices and $m$ inner faces. In other words, the root-addition mapping realises the combinatorial isomorphism \begin{equation} \mathcal{J}_{\mathrm{a}}\simeq\mathcal{Q}. \end{equation} \end{lemma}
To sum up, we have the following link between rooted irreducible dissections and rooted 3-connected maps: $$ \mathcal{J}\supset\ \mathcal{J}_{\mathrm{a}}\simeq\mathcal{Q}\simeq\overrightarrow{\mathcal{M}_3}. $$ Notice that we have a combinatorial isomorphism between $\mathcal{J}_{\mathrm{a}}$ and $\overrightarrow{\mathcal{M}_3}$:
the root-edge addition combined with the primal map construction. For $\delta\in\mathcal{J}_{\mathrm{a}}$, the rooted 3-connected map associated with $\delta$ is denoted $\mathrm{Primal}(\delta)$.
\begin{figure}
\caption{(a) A binary tree, (b) the associated irreducible dissection $\delta$ (rooted and
admissible), (c) the associated rooted irreducible quadrangulation $\kappa=\mathrm{Add}(\delta)$, (d) the associated rooted 3-connected map $\mu=\mathrm{Primal}(\delta)$.}
\label{fig:primal}
\end{figure}
As we see next, the class $\mathcal{I}$ (and also the associated rooted class $\mathcal{J}$) is combinatorially tractable, as it is in bijection with the simple class of binary trees; hence irreducible dissections are easily
generated at random.
\subsubsection{Bijection between binary trees and irreducible dissections} There exist by now several elegant bijections between families of planar maps and families of plane trees that satisfy simple context-free decomposition grammars. Such constructions have first been described by Schaeffer in his thesis~\cite{S-these}, and many other families of rooted maps have been counted in this way~\cite{Fusy06a,PS03a,PS03b,BoDiGu04}.
The advantage of bijective constructions over recursive methods for counting maps~\cite{Tu63}
is that the bijections
yield efficient ---linear-time--- generators for maps, as random sampling
of maps is reduced to the much easier task of random sampling of trees, see~\cite{Sc99}.
The method has been recently applied to the family
of 3-connected maps, which
is of interest here. Precisely, as described in~\cite{FuPoSc05}, there is a bijection between binary trees and irreducible dissections
of the hexagon,
which, as we have seen, are closely related
to 3-connected maps.
We define an \emph{unrooted binary tree}, shortly called a binary tree hereafter, as a plane tree (i.e., a planar map with a unique face) where the degree of each vertex is either 1 or 3.
The vertices of degree 1 (3) are called leaves (nodes, resp.).
A binary tree is said to be bicolored if its nodes are bicolored so that any two adjacent nodes
have different colors, see Figure~\ref{fig:primal}(a) for an example. In a bicolored binary tree the L-atoms are the black nodes and the U-atoms are the leaves. A bicolored
binary tree is called
\emph{asymmetric} if there is no rotation-symmetry fixing it. Figure~\ref{fig:asymmetric} displays the four non-asymmetric
bicolored binary trees; all the other bicolored binary trees are asymmetric, either due to the
shape being asymmetric, or due to the labels on the black nodes.
We denote by $\mathcal{K}$ the mixed class of \emph{asymmetric} bicolored binary trees (the requirement of asymmetry is necessary so that the leaves are distinguishable).
\begin{figure}
\caption{(a) The four non-asymmetric bicolored binary trees. (b) The four non-asymmetric bicolored irreducible
dissections.}
\label{fig:asymmetric}
\end{figure}
The terminology of binary tree refers to the fact that, upon rooting a binary tree
at an arbitrary leaf, the neighbours
in clockwise order around each node can be classified as a father (the neighbour closest to the root), a right son, and
a left son, which corresponds to the classical definition of rooted binary trees, as considered
in Example~\ref{ex:binary}.
\begin{proposition}[Fusy, Poulalhon, and Schaeffer~\cite{FuPoSc05}] \label{prop:bijbin3conn} For $n\geq 0$ and $m\geq 2$, there exists an explicit bijection, called the \emph{closure-mapping}, between bicolored binary trees with $n$ black nodes and $m$ leaves, and bicolored irreducible dissections with $n$ black inner nodes and $m$ inner faces; moreover the 4 non-asymmetric bicolored binary trees are mapped to the 4 non-asymmetric irreducible dissections. In other words, the closure-mapping realises the combinatorial isomorphism \begin{equation}\mathcal{K}\simeq \mathcal{I}.\end{equation} The construction of a dissection from a binary tree takes linear time. \end{proposition} Let us comment a bit on this bijective construction, which is described in detail in~\cite{FuPoSc05}. Starting from a binary tree, the closure-mapping builds the dissection face by face, each leaf of the tree giving rise to an inner face of the dissection. More precisely, at each step, a ``leg" (i.e., an edge incident to a leaf) is completed into an edge connecting two nodes, so as to ``close" a quadrangular face. At the end, an hexagon is created outside of the figure, and the leaves attached to the remaining non-completed legs are merged with vertices of the hexagon so as to form only quadrangular faces. For instance the dissection of Figure~\ref{fig:primal}(b) is obtained by ``closing'' the tree of Figure~\ref{fig:primal}(a).
\subsubsection{Boltzmann sampler for rooted bicolored binary trees} \label{sec:boltz_binary_trees} We define a rooted bicolored binary tree as a binary tree with a marked leaf discarded from the set of U-atoms. Notice that the class of rooted bicolored binary trees such that the underlying unrooted binary tree is asymmetric is the U-derived class $\underline{\mathcal{K}}$.
In order to write down a decomposition grammar for the class $\underline{\mathcal{K}}$---to be translated into a Boltzmann sampler---we define some refined classes of rooted bicolored binary trees (decomposing $\underline{\mathcal{K}}$ is a bit involved since we have to forbid the 4 non-asymmetric binary trees):
$\mathcal{R}_{\bullet}$ is the class of \emph{black-rooted} binary trees (the root leaf is connected to a black node) with at least one node,
and $\mathcal{R}_{\circ}$ is the class of \emph{white-rooted} binary trees (the root leaf is connected to a white node) with at least one node. We also define $\mathcal{R}_{\bullet}^{\mathrm{(as)}}$ ($\mathcal{R}_{\circ}^{\mathrm{(as)}}$) as the class of black-rooted (white-rooted, resp.) bicolored binary trees such that the underlying unrooted binary tree is asymmetric. Hence $\underline{\mathcal{K}}=\mathcal{R}_{\bullet}^{\mathrm{(as)}}+\mathcal{R}_{\circ}^{\mathrm{(as)}}$. We introduce two auxiliary classes: $\widehat{\mathcal{R}}_{\bullet}$ is the class of black-rooted binary trees except the (unique) one with one black node and two white nodes; and $\widehat{\mathcal{R}}_{\circ}$ is the class of white-rooted binary trees except the two obtained by rooting the (unique) bicolored binary tree with one black node and three white nodes (the 4th one in Figure~\ref{fig:asymmetric}(a)); in addition, the rooted bicolored binary tree with two leaves (the first one in Figure~\ref{fig:asymmetric}(a)) is also included in the class $\widehat{\mathcal{R}}_{\circ}$.
The decomposition of a bicolored binary tree at the root yields a complete decomposition grammar, given in Figure~\ref{fig:grammar}, for the class $\underline{\mathcal{K}}=\mathcal{R}_{\bullet}^{\mathrm{(as)}}+\mathcal{R}_{\circ}^{\mathrm{(as)}}$. This grammar translates to a decomposition grammar involving only the basic classes $\{\mathcal{Z}_L,\mathcal{Z}_U\}$ and the constructions $\{+,\star\}$ ($\mathcal{Z}_L$ stands for a black node and $\mathcal{Z}_U$ stands for a non-root leaf):
\begin{equation}\label{eq:grammar} \left\{ \begin{array}{rcl} \underline{\mathcal{K}}&=&\mathcal{R}_{\bullet}^{\mathrm{(as)}}+\mathcal{R}_{\circ}^{\mathrm{(as)}},\\ \mathcal{R}_{\bullet}^{\mathrm{(as)}}&=&\mathcal{R}_{\circ}\star\mathcal{Z}_L\star\mathcal{Z}_U+\mathcal{Z}_U\star\mathcal{Z}_L\star\mathcal{R}_{\circ}+\mathcal{Z}_L\star\mathcal{R}_{\circ}^2,\\ \mathcal{R}_{\circ}^{\mathrm{(as)}}&=&\widehat{\mathcal{R}}_{\bullet}\star\mathcal{Z}_U+\mathcal{Z}_U\star\widehat{\mathcal{R}}_{\bullet}+\mathcal{R}_{\bullet}^2,\\ \widehat{\mathcal{R}}_{\bullet}&=&\widehat{\mathcal{R}}_{\circ}\star\mathcal{Z}_L\star\mathcal{Z}_U^2+\mathcal{Z}_U^2\star\mathcal{Z}_L\star\widehat{\mathcal{R}}_{\circ}+\widehat{\mathcal{R}}_{\circ}\star\mathcal{Z}_L\star\widehat{\mathcal{R}}_{\circ},\\ \widehat{\mathcal{R}}_{\circ}&=&\mathcal{Z}_U+\mathcal{R}_{\bullet}\star\mathcal{Z}_U+\mathcal{Z}_U\star\mathcal{R}_{\bullet}+\mathcal{R}_{\bullet}^2,\\ \mathcal{R}_{\bullet}&=&(\mathcal{Z}_U+\mathcal{R}_{\circ})\star\mathcal{Z}_L\star(\mathcal{Z}_U+\mathcal{R}_{\circ}),\\ \mathcal{R}_{\circ}&=&(\mathcal{Z}_U+\mathcal{R}_{\bullet})\star(\mathcal{Z}_U+\mathcal{R}_{\bullet}). \end{array} \right. \end{equation}
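To illustrate how such a grammar is mechanically turned into a Boltzmann sampler, the Python sketch below implements only the last two (unrestricted) rules, for $\mathcal{R}_{\bullet}$ and $\mathcal{R}_{\circ}$, ignoring the asymmetry bookkeeping of the other rules. The generating-function values are obtained by fixed-point iteration of $R_{\bullet}=z(w+R_{\circ})^2$ and $R_{\circ}=(w+R_{\bullet})^2$; a product construction then generates each factor in turn, and a sum construction makes a Bernoulli choice weighted by the generating-function values. The parameter values are illustrative, chosen inside the convergence domain.

```python
import random

z, w = 0.03, 0.9   # illustrative parameters inside the convergence domain

# Evaluate R_bullet(z,w) and R_circ(z,w) by fixed-point iteration of
#   R_bullet = z*(w + R_circ)^2,   R_circ = (w + R_bullet)^2
# (only black nodes are L-atoms, hence only R_bullet carries a factor z).
Rb, Rc = 0.0, 0.0
for _ in range(200):
    Rb, Rc = z * (w + Rc) ** 2, (w + Rb) ** 2

def gen_black(rng):
    """Sampler for R_bullet: a black node with two (Z_U + R_circ) children.
    Returns (number of nodes, number of non-root leaves)."""
    nodes, leaves = 1, 0
    for _ in range(2):
        if rng.random() < w / (w + Rc):   # Bernoulli choice for Z_U + R_circ
            leaves += 1
        else:
            n2, l2 = gen_white(rng)
            nodes, leaves = nodes + n2, leaves + l2
    return nodes, leaves

def gen_white(rng):
    """Sampler for R_circ: a white node with two (Z_U + R_bullet) children."""
    nodes, leaves = 1, 0
    for _ in range(2):
        if rng.random() < w / (w + Rb):
            leaves += 1
        else:
            n2, l2 = gen_black(rng)
            nodes, leaves = nodes + n2, leaves + l2
    return nodes, leaves

# The computed values satisfy the grammar equations...
assert abs(Rb - z * (w + Rc) ** 2) < 1e-9
assert abs(Rc - (w + Rb) ** 2) < 1e-9

# ...and every sampled tree satisfies the structural invariant of binary
# trees: #leaves (excluding the root leg) = #nodes + 1.
rng = random.Random(2)
for _ in range(1000):
    n, l = gen_black(rng)
    assert l == n + 1
```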
\begin{figure}\label{fig:grammar}
\end{figure}
In turn, this grammar is translated into a Boltzmann sampler $\Gamma \underline{\mathcal{K}}(z,w)$ for the class $\underline{\mathcal{K}}$ using the sampling rules given in Figure~\ref{table:rules}, similarly as we have done for the (simpler) class of complete binary trees in Example~\ref{ex:binary}.
\subsubsection{Boltzmann sampler for bicolored binary trees}\label{sec:Ksamp} We describe in this section a Boltzmann sampler $\Gamma \mathcal{K}(z,w)$ for asymmetric bicolored binary trees, which is derived from the Boltzmann sampler $\Gamma\underline{\mathcal{K}}(z,w)$ described in the previous section. Observe that each \emph{asymmetric} binary tree in $\mathcal{K}_{n,m}$ gives rise to $m$ rooted binary trees in $\underline{\mathcal{K}}_{n,m-1}$, as each of the $m$ leaves, which are \emph{distinguishable}, might be chosen to be discarded from the set of U-atoms. Hence, each object of $\mathcal{K}_{n,m}$ has probability $\underline{K}(z,w)^{-1}\,m\,\frac{z^n}{n!}\,w^{m-1}$ to be chosen when calling $\Gamma\underline{\mathcal{K}}(z,w)$ and taking the distinguished atom back into the set of U-atoms. Thus, from the rejection lemma (Lemma~\ref{lemma:rej}), the sampler \begin{center} \begin{tabular}{l} repeat $\gamma\leftarrow\Gamma\underline{\mathcal{K}}(z,w)$;\\ \hspace{.2cm}take the distinguished U-atom back into the set of U-atoms;\\
\hspace{.2cm}\{so $||\gamma||$ increases by $1$ and now $\gamma\in\mathcal{K}$\}\\
until $\mathrm{Bern}\left(\frac{2}{||\gamma||}\right)$;\\ return $\gamma$ \end{tabular} \end{center} is a Boltzmann sampler for $\mathcal{K}$.
However, this sampler is not efficient enough, as it uses a massive amount of rejection to draw a tree of large size. Instead, we use an early-abort rejection algorithm, which allows us to ``simulate'' the rejection step all along the generation, thus making it possible to reject before the entire object is generated. We find it more convenient to use the number of nodes, instead of leaves, as the parameter for rejection (the subtle advantage is that the generation process $\Gamma\underline{\mathcal{K}}(z,w)$ builds the tree node by node). Notice that the number
of leaves in an unrooted binary tree $\gamma$ is equal to $2+N(\gamma)$, with $N(\gamma)$
the number of nodes of $\gamma$. Hence, the rejection step in the sampler above can be replaced by a Bernoulli choice with parameter $2/(N(\gamma)+2)$. We now give the early-abort algorithm, which repeats calling $\Gamma\underline{\mathcal{K}}(z,w)$ while using
a global counter $N$ that records the number of nodes of the tree under construction.
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{K}(z,w)$:$\!\!$& repeat \\ &\hspace{0.2cm}$N:=0$; \{counter for nodes\}\\ &\hspace{0.2cm}Call $\Gamma\underline{\mathcal{K}}(z,w)$\\ &\hspace{0.2cm}each time a node is built do\\ &\hspace{0.4cm}$N:=N+1$;\\ & \hspace{0.4cm}if $\mathrm{Bern}((N+1)/(N+2))$ continue;\\ &\hspace{0.4cm}otherwise reject and restart from the first line; od\\ &until the generation finishes;\\ &return the object generated by $\Gamma\underline{\mathcal{K}}(z,w)$\\ &(taking the distinguished leaf back into the set of U-atoms) \end{tabular} }
\begin{lemma}\label{lem:BoltzK} The algorithm $\Gamma \mathcal{K}(z,w)$ is a Boltzmann sampler for the class $\mathcal{K}$ of asymmetric bicolored binary trees. \end{lemma} \begin{proof} At each attempt, the call to $\Gamma\underline{\mathcal{K}}(z,w)$ would output a rooted binary tree $\gamma$ if there was no early interruption. Clearly, the probability that the generation of $\gamma$ finishes without interruption is $\prod_{i=1}^{N(\gamma)}(i+1)/(i+2)=2/(N(\gamma)+2)$. Hence, each attempt is equivalent to doing\\ \centerline{$\gamma\leftarrow\Gamma\underline{\mathcal{K}}(z,w)$; if $\mathrm{Bern}\left(\frac{2}{N(\gamma)+2}\right)$ return $\gamma$ else reject;}\\ Thus, the algorithm $\Gamma \mathcal{K}(z,w)$ is equivalent to the algorithm given in the discussion preceding Lemma~\ref{lem:BoltzK}, hence $\Gamma \mathcal{K}(z,w)$ is a Boltzmann sampler for the family $\mathcal{K}$. \end{proof}
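The early-abort mechanism can be sketched generically in Python: the underlying sampler is written as a generator that yields an event each time a node is built, and the wrapper interleaves a $\mathrm{Bern}((N+1)/(N+2))$ test after each event, restarting on failure. The tree builder below is a plain subcritical binary-tree generator standing in for $\Gamma\underline{\mathcal{K}}(z,w)$ (an illustrative stand-in, not the grammar of the text); the telescoping identity $\prod_{i=1}^{N}(i+1)/(i+2)=2/(N+2)$ used in the proof is checked as well.

```python
import random

def tree_builder(p, rng):
    """Generator standing in for Gamma underline-K(z,w): builds a binary tree
    node by node (each pending slot becomes an internal node with prob p),
    yielding after each node; the node count is the generator return value."""
    nodes = 0
    stack = [None]                      # one pending subtree slot
    while stack:
        stack.pop()
        if rng.random() < p:            # internal node: two pending children
            nodes += 1
            stack.extend([None, None])
            yield                       # event: "a node has been built"
    return nodes

def early_abort_sampler(p, rng):
    """repeat: run the builder, testing Bern((N+1)/(N+2)) after each node and
    restarting on failure; an N-node run is accepted with prob 2/(N+2)."""
    while True:
        gen = tree_builder(p, rng)
        n = 0
        try:
            while True:
                next(gen)
                n += 1
                if rng.random() >= (n + 1) / (n + 2):
                    break               # early abort: restart from scratch
        except StopIteration as stop:
            return stop.value           # generation finished uninterrupted

rng = random.Random(3)

# The telescoping product used in the proof: prod (i+1)/(i+2) = 2/(N+2).
for N in (1, 5, 40):
    prod = 1.0
    for i in range(1, N + 1):
        prod *= (i + 1) / (i + 2)
    assert abs(prod - 2 / (N + 2)) < 1e-12

# The wrapper terminates and returns node counts (subcritical p < 1/2).
sizes = [early_abort_sampler(0.3, rng) for _ in range(1000)]
assert all(s >= 0 for s in sizes)
```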
\subsubsection{Boltzmann sampler for irreducible dissections}\label{sec:sampI} As stated in Proposition~\ref{prop:bijbin3conn}, the closure-mapping realises a combinatorial isomorphism between asymmetric bicolored binary trees (class $\mathcal{K}$) and asymmetric bicolored irreducible dissections (class $\mathcal{I}$). Hence, the algorithm
\fbox{\begin{tabular}{ll} $\Gamma \mathcal{I}(z,w)$:$\!\!$& $\tau\leftarrow \Gamma \mathcal{K}(z,w)$;\\ & return $\mathrm{closure}(\tau)$ \end{tabular}}
\noindent is a Boltzmann sampler for $\mathcal{I}$. In turn this easily yields a Boltzmann sampler for the corresponding rooted class $\mathcal{J}$. Precisely, starting from an \emph{asymmetric} bicolored irreducible dissection,
each of the 3 outer black vertices, which are \emph{distinguishable}, may be chosen as the root-vertex in order to obtain a rooted irreducible dissection.
Moreover the sets of L-atoms and U-atoms are slightly different for the classes $\mathcal{I}$
and $\mathcal{J}$; indeed, a rooted dissection has
one more L-atom (the black vertex following the root-vertex in cw order around the outer face)
and one more U-atom (all faces are U-atoms in $\mathcal{J}$, whereas only the inner faces are U-atoms
in $\mathcal{I}$)\footnote{We have chosen to specify the sets of L-atoms and U-atoms in this way in order to state the isomorphisms $\mathcal{K}\simeq\mathcal{I}$ and $\mathcal{J}_{\mathrm{a}}\simeq\overrightarrow{\mathcal{M}_3}$.}. This yields the identity
\begin{equation}
\mathcal{J}= 3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I},
\end{equation}
which directly yields (by the sampling rules of Figure~\ref{table:rules}) a Boltzmann sampler $\Gamma\mathcal{J}(z,w)$
for $\mathcal{J}$ from the Boltzmann sampler $\Gamma \mathcal{I}(z,w)$.
Finally, we obtain a Boltzmann sampler for rooted admissible dissections by a simple rejection procedure
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{J}_{\mathrm{a}}(z,w)$:$\!\!$& repeat $\delta\leftarrow\Gamma \mathcal{J}(z,w)$ until $\delta\in\mathcal{J}_{\mathrm{a}}$;\\ & return $\delta$ \end{tabular} }
\subsubsection{Boltzmann sampler for rooted 3-connected maps} The Boltzmann sampler for rooted irreducible dissections and the primal-map construction yield the following sampler for rooted 3-connected maps:
\fbox{ \begin{tabular}{ll} $\Gamma \overrightarrow{\mathcal{M}_3}(z,w)$:$\!\!$& $\delta\leftarrow\Gamma \mathcal{J}_{\mathrm{a}}(z,w)$;\\ & return $\mathrm{Primal}(\delta)$ \end{tabular} }
\noindent where $\mathrm{Primal}(\delta)$ is
the rooted 3-connected map associated to $\delta$ (see Section~\ref{sec:primal_map}).
\subsubsection{Boltzmann sampler for edge-rooted 3-connected planar graphs}
To conclude,
the Boltzmann sampler $\Gamma \overrightarrow{\mathcal{M}_3}(z,w)$ yields a Boltzmann sampler
$\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$ for edge-rooted 3-connected planar graphs, according to the
isomorphism (Whitney's theorem) $\overrightarrow{\mathcal{M}_3}\simeq 2\star\overrightarrow{\mathcal{G}_3}$,
\fbox{ \begin{tabular}{ll} $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$:$\!\!$& return $\Gamma \overrightarrow{\mathcal{M}_3}(z,w)$ (forgetting the embedding) \end{tabular} }
\subsection{Boltzmann sampler for 2-connected planar graphs} \label{sec:2conn3conn} The next step is to realise a Boltzmann sampler for 2-connected planar graphs from the Boltzmann sampler for edge-rooted 3-connected planar graphs obtained in Section~\ref{sec:bolz3conn}. Precisely, we first describe a Boltzmann sampler for the class $\overrightarrow{\mathcal{G}_2}$ of edge-rooted 2-connected planar graphs, and subsequently obtain, by using rejection techniques, a Boltzmann sampler for the class $\mathcal{G}_2\ \!\!\!'$ of derived 2-connected planar graphs (having a Boltzmann sampler for $\mathcal{G}_2\ \!\!\!'$ allows us to go subsequently to connected planar graphs).
To generate edge-rooted 2-connected planar graphs, we use a well-known decomposition, due to Trakhtenbrot~\cite{trak}, which ensures that an edge-rooted 2-connected planar graph can be assembled from edge-rooted 3-connected planar components. This decomposition deals with so-called \emph{networks} (following the terminology of Walsh~\cite{Wa}), where a network is defined as a connected graph $N$ with two distinguished vertices $0$ and $\infty$ called \emph{poles}, such that the graph $N^*$ obtained by adding an edge between $0$ and $\infty$ is a 2-connected planar graph. Accordingly, we refer to Trakhtenbrot's decomposition as the \emph{network decomposition}. Notice that networks are closely related to edge-rooted 2-connected planar graphs, though not completely equivalent (see Equation~\eqref{eq:DB} below for the precise relation).
We rely on~\cite{Wa} for the description of the network decomposition.
A \emph{series-network} or $s$-network is a network made of at least 2 networks connected \emph{in chain} at their poles, the $\infty$-pole of a network coinciding with the $0$-pole of the following network in the chain. A \emph{parallel network} or $p$-network is a network made of at least 2 networks connected \emph{in parallel}, so that their respective $\infty$-poles and $0$-poles coincide. A \emph{pseudo-brick}
is a network $N$ whose poles are not adjacent and such that $N^*$ is a 3-connected planar graph with at least 4 vertices.
A \emph{polyhedral network} or $h$-network is a network obtained by taking a pseudo-brick and substituting each edge $e$ of the pseudo-brick by a network $N_e$ (polyhedral networks establish a link between 2-connected and 3-connected planar graphs).
\begin{proposition}[Trakhtenbrot] \label{prop:trak} Networks with at least 2 edges are partitioned into $s$-networks, $p$-networks and $h$-networks. \end{proposition}
Let us explain how to obtain a recursive decomposition involving
the different families of networks. (We simply adapt the decomposition
formalised by Walsh~\cite{Wa} so as to have only positive signs.)
Let $\mathcal{D}$, $\mathcal{S}$, $\mathcal{P}$, and $\mathcal{H}$ be respectively the classes of networks, $s$-networks, $p$-networks, and $h$-networks, where the L-atoms are the vertices except the two poles, and the U-atoms are the edges. In particular, $\mathcal{Z}_U$ stands here for the class containing the link-graph as only object, i.e., the graph with one edge connecting the two poles.
Proposition~\ref{prop:trak} ensures that $$ \mathcal{D}=\mathcal{Z}_U+\mathcal{S}+\mathcal{P}+\mathcal{H}. $$
An $s$-network can be uniquely decomposed into a non-$s$-network (the head of the chain) followed by a network (the trail of the chain), which yields
$$ \mathcal{S}=(\mathcal{Z}_U+\mathcal{P}+\mathcal{H})\star\mathcal{Z}_L\star\mathcal{D}. $$
A $p$-network has a unique \emph{maximal} parallel decomposition into a collection of at least two components that are not $p$-networks. Observe that we consider here graphs without multiple edges, so that at most one of these components is an edge. Distinguishing whether there is exactly one such edge-component or none yields
$$ \mathcal{P}=\mathcal{Z}_U\star\Set_{\geq 1}(\mathcal{S}+\mathcal{H})+\Set_{\geq 2}(\mathcal{S}+\mathcal{H}). $$
By definition, the class of $h$-networks corresponds to a U-substitution of networks in pseudo-bricks; and pseudo-bricks are exactly edge-rooted 3-connected planar graphs. As a consequence (recall that $\overrightarrow{\mathcal{G}_3}$ stands for the family of edge-rooted
3-connected planar graphs),
$$ \mathcal{H}=\overrightarrow{\mathcal{G}_3}\circ_U\mathcal{D}. $$
To sum up, we have the following grammar corresponding to the decomposition of networks into edge-rooted 3-connected planar graphs:
\includegraphics[width=10cm]{Figures/grammar_N}
Using the sampling rules (Figure~\ref{table:rules}), the decomposition grammar (N) is directly translated into a Boltzmann sampler $\Gamma \mathcal{D}(z,y)$ for networks, as given in Figure~\ref{fig:samp_networks}. A network generated by $\Gamma \mathcal{D}(z,y)$ is made of a series-parallel backbone $\beta$ (resulting from the branching structures of the calls to $\Gamma\mathcal{S}(z,y)$ and $\Gamma\mathcal{P}(z,y)$) and a collection of rooted 3-connected planar graphs that are attached at edges of $\beta$; clearly all these 3-connected components are obtained from independent calls to the Boltzmann sampler $\Gamma\overrightarrow{\cG_3}(z,w)$, with $w=D(z,y)$.
\begin{figure}
\caption{Boltzmann samplers for networks. All generating functions are assumed to be evaluated at $(z,y)$, i.e., $D:=D(z,y)$, $S:=S(z,y)$, $P:=P(z,y)$, and $H:=H(z,y)$.}
\label{fig:samp_networks}
\end{figure}
The only terminal nodes of the decomposition grammar are the classes $\mathcal{Z}_L$, $\mathcal{Z}_U$ (which are explicit), and the class $\overrightarrow{\mathcal{G}_3}$. Thus, the sampler $\Gamma \mathcal{D}(z,y)$ and the auxiliary samplers $\Gamma \mathcal{S}(z,y)$, $\Gamma \mathcal{P}(z,y)$, and $\Gamma \mathcal{H}(z,y)$ are recursively specified in terms of $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$, where $w$ and $z$ are linked by $w=D(z,y)$.
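As a small numerical illustration of this recursive structure, the sketch below solves the generating-function system of grammar (N) by fixed-point iteration at a given point $(z,y)$, under the simplifying (hypothetical) assumption $H=0$, i.e., restricting to series-parallel networks. The translation of $\Set_{\geq 1}$ and $\Set_{\geq 2}$ into $e^S-1$ and $e^S-1-S$ follows the usual exponential-generating-function dictionary.

```python
import math

def solve_network_system(z, y, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for the EGF system of grammar (N), with the
    simplifying assumption H = 0 (series-parallel networks only):
        D = y + S + P
        S = (y + P) * z * D
        P = y*(exp(S) - 1) + (exp(S) - 1 - S)
    Returns (D, S, P) evaluated at the point (z, y)."""
    D = S = P = 0.0
    for _ in range(max_iter):
        S_new = (y + P) * z * D
        P_new = y * math.expm1(S_new) + (math.expm1(S_new) - S_new)
        D_new = y + S_new + P_new
        if max(abs(D_new - D), abs(S_new - S), abs(P_new - P)) < tol:
            return D_new, S_new, P_new
        D, S, P = D_new, S_new, P_new
    raise RuntimeError("fixed-point iteration did not converge")
```

In the full grammar, $H$ would in addition require the series $\overrightarrow{G_3}(z,w)$ evaluated at $w=D(z,y)$, which is exactly the coupling used by the samplers above.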
Observe that each edge-rooted 2-connected planar graph different from the link-graph gives rise to two networks, obtained respectively by keeping or deleting the root-edge. This yields the identity \begin{equation} \label{eq:DB} (1+\mathcal{Z}_U)\star\overrightarrow{\mathcal{G}_2}=(1+\mathcal{D}). \end{equation} From that point, a Boltzmann sampler is easily obtained for the family $\overrightarrow{\mathcal{G}_2}$ of edge-rooted 2-connected planar graphs. Define a procedure \textsc{AddRootEdge} that adds an edge connecting the two poles $0$ and $\infty$ of a network if they are not already adjacent, and roots the obtained graph at the edge $(0,\infty)$ directed from $0$ to $\infty$. The following sampler for $\overrightarrow{\mathcal{G}_2}$ is the counterpart of Equation~(\ref{eq:DB}).
\fbox{ \begin{tabular}{rl} $\Gamma (1+\mathcal{D})(z,y)$: & $\!\!\!$ if $\mathrm{Bern}\left(\frac{1}{1+D(z,y)}\right)$ return the link-graph else return $\Gamma \mathcal{D}(z,y)$; \\ $\Gamma \overrightarrow{\mathcal{G}_2}(z,y)$: & $\!\!\!$ $\gamma \leftarrow \Gamma (1+\mathcal{D})(z,y)$; \textsc{AddRootEdge}($\gamma$); return $\gamma$ \end{tabular} }
\begin{lemma}\label{lem:netto2conn} The algorithm $\Gamma \overrightarrow{\mathcal{G}_2}(z,y)$ is a Boltzmann sampler for the class $\overrightarrow{\mathcal{G}_2}$ of edge-rooted 2-connected planar graphs. \end{lemma} \begin{proof} Firstly, observe that $\Gamma \overrightarrow{\mathcal{G}_2}(z,y)$ outputs the link-graph either if the initial Bernoulli choice $X$ is 1, or if $X=0$ and the sampler $\Gamma\mathcal{D}(z,y)$ picks up the link-graph. Hence the link-graph is returned with probability $(1+y)/(1+D(z,y))$, i.e., with probability $1/\overrightarrow{G_2}(z,y)$.
Apart from the link-graph, each graph $\gamma\in\overrightarrow{\mathcal{G}_2}$ appears twice in the class $\mathcal{E}:=1+\mathcal{D}$: once in
$\mathcal{E}_{|\gamma|,||\gamma||+1}$ (keeping the root-edge) and once in $\mathcal{E}_{|\gamma|,||\gamma||}$ (deleting the root-edge). Therefore, $\gamma$ has probability $E(z,y)^{-1}\frac{z^{|\gamma|}}{|\gamma|!}\left(y^{||\gamma||+1}+y^{||\gamma||}\right)$ of being drawn by $\Gamma \overrightarrow{\mathcal{G}_2}(z,y)$, where $E(z,y)=1+D(z,y)$ is the series of $\mathcal{E}$. This probability simplifies to $\frac{z^{|\gamma|}}{|\gamma|!}y^{||\gamma||}/\overrightarrow{G_2}(z,y)$. Hence, $\Gamma \overrightarrow{\mathcal{G}_2}(z,y)$ is a Boltzmann sampler for the class $\overrightarrow{\mathcal{G}_2}$. \end{proof}
The last step is to obtain a Boltzmann sampler for derived 2-connected planar graphs (i.e., with a distinguished vertex that is not labelled and does not count for the L-size) from the Boltzmann sampler for edge-rooted 2-connected planar graphs (as we will see in Section~\ref{sec:conn2conn}, derived 2-connected planar graphs constitute the blocks to construct connected planar graphs).
We proceed in two steps. Firstly, we obtain a Boltzmann sampler for the U-derived class $\underline{\mathcal{G}_2}$ (i.e., with a distinguished undirected edge that does not count in the U-size). Note that $\mathcal{F}:=2\star\underline{\mathcal{G}_2}$ satisfies $\mathcal{F}=\mathcal{Z}_L\ \!\!\!^2\star\overrightarrow{\mathcal{G}_2}$. Hence, $\Gamma\overrightarrow{\mathcal{G}_2}(z,y)$ directly yields a Boltzmann sampler $\Gamma\mathcal{F}(z,y)$ (see the sampling rules in Figure~\ref{table:rules}). Since $\mathcal{F}=2\star\underline{\mathcal{G}_2}$, a Boltzmann sampler for $\underline{\mathcal{G}_2}$ is obtained by calling $\Gamma\mathcal{F}(z,y)$ and then forgetting the direction
of the root.
Secondly, once we have a Boltzmann sampler $\Gamma\underline{\mathcal{G}_2}(z,y)$ for the U-derived class $\underline{\mathcal{G}_2}$, we just have to apply the procedure \UtoL (described in Section~\ref{sec:reject}) to the class $\mathcal{G}_2$ in order to obtain a Boltzmann sampler $\Gamma \mathcal{G}_2\ \!\!\!'(z,y)$ for the L-derived class $\mathcal{G}_2\ \!\!\!'$. The procedure \UtoL can be successfully applied, because the ratio vertices/edges is bounded. Indeed, each connected graph $\gamma$
satisfies $|\gamma|\leq ||\gamma||+1$, which easily yields $\alpha_{L/U}=2$ for the class $\mathcal{G}_2$ (attained by the link-graph).
\subsection{Boltzmann sampler for connected planar graphs} \label{sec:conn2conn} Another well-known graph decomposition, called the \emph{block-decomposition}, ensures that a connected graph can be decomposed into 2-connected components. We take advantage of this decomposition in order to specify a Boltzmann sampler for derived connected planar graphs from the Boltzmann sampler for derived 2-connected planar graphs obtained in the last section. Then, a further rejection step yields a Boltzmann sampler for connected planar graphs.
The \emph{block-decomposition} (see~\cite[p.10]{Ha} for a detailed description) ensures that each derived connected planar graph can be uniquely constructed in the following way: take a set of derived 2-connected planar graphs and attach them together, by merging their marked vertices into a unique marked vertex. Then, for each unmarked vertex $v$ of each 2-connected component, take a derived connected planar graph $\gamma_v$ and merge the marked vertex of $\gamma_v$ with $v$ (this operation corresponds to an L-substitution). The block-decomposition gives rise to the following identity relating the classes $\mathcal{G}_1\ \!\!\!'$ and $\mathcal{G}_2\ \!\!\!'$: \begin{equation} \label{eq:2conn} \mathcal{G}_1\ \!\!\!'=\Set\left(\mathcal{G}_2\ \!\!\!'\circ_L(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!')\right). \end{equation}
This is directly translated into the following Boltzmann sampler for $\mathcal{G}_1\ \!\!\!'$ using the sampling rules of Figure~\ref{table:rules}. (Notice that the 2-connected blocks of a connected graph are built independently, each block resulting from a call to the Boltzmann sampler $\Gamma \mathcal{G}_2\ \!\!\!'(z,y)$, where $z=xG_1\ \!\!\!'(x,y)$.)
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$:&$k\leftarrow \Pois (G_2\ \!\!\!'(z,y));\ \ [\mathrm{with}\ z=xG_1\ \!\!\!'(x,y)]$\\ & $\gamma\leftarrow (\Gamma \mathcal{G}_2\ \!\!\!'(z,y),\ldots,\Gamma \mathcal{G}_2\ \!\!\!'(z,y))$; \{$k$ independent calls\} \\ & merge the $k$ components of $\gamma$ at their marked vertices;\\ &for each unmarked vertex $v$ of $\gamma$ do\\ & $\ \ \ \ \ $ $\gamma_v\leftarrow \Gamma \mathcal{G}_1\ \!\!\!'(x,y)$;\\ & $\ \ \ \ \ $ merge the marked vertex of $\gamma_v$ with $v$\\ & od;\\ &return $\gamma$. \end{tabular} }
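The recursive Set/Poisson pattern of $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$ can be made concrete on a toy class: if every block is a single edge (and $y=1$), then $G_2\ \!\!\!'(z)=z$ and the grammar reduces to $\mathcal{G}_1\ \!\!\!'=\Set(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!')$, whose series satisfies $u=e^{xu}$, the EGF of derived rooted trees. The sketch below is this toy sampler, not the planar-graph one; its only ingredients are a fixed-point evaluation of the series and Poisson sampling by inversion of the distribution function.

```python
import math
import random

def g1p_value(x, tol=1e-12):
    """Solve u = exp(x*u) by iteration (EGF of derived rooted trees);
    converges for x below the radius of convergence 1/e."""
    u = 1.0
    for _ in range(10_000):
        u_new = math.exp(x * u)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("no convergence; is x < 1/e ?")

def poisson(lam, rng):
    """Poisson sampling by inversion of the cumulative distribution."""
    u = rng.random()
    k, p = 0, math.exp(-lam)
    cum = p
    while u >= cum:
        k += 1
        p *= lam / k
        cum += p
    return k

def sample_tree(x, rng, u=None):
    """Toy Boltzmann sampler for G1' = Set(Z_L * G1'):
    draw k ~ Pois(z) blocks (here: edges), with z = x*G1'(x), and
    recurse at each new vertex. Returns the number of vertices created."""
    if u is None:
        u = g1p_value(x)
    z = x * u                      # z = x * G1'(x), as in the sampler above
    size = 0
    for _ in range(poisson(z, rng)):
        size += 1                  # the unmarked vertex of the edge-block
        size += sample_tree(x, rng, u)
    return size
```

The structure is exactly that of $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$: a Poisson number of independent block samples, each of whose unmarked vertices receives an independent recursive call.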
Then, a Boltzmann sampler for connected planar graphs is simply obtained from $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$ by using a rejection step so as to adjust the probability distribution:
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{G}_1(x,y)$:& repeat $\gamma\leftarrow \Gamma \mathcal{G}_1\ \!\!\!'(x,y)$\\ & \phantom{1}\hspace{.2cm}take the marked vertex $v$ back to the set of L-atoms;\\
& \phantom{1}\hspace{.2cm}(if we consider the labels, $v$ receives label $|\gamma|+1$)\\
& \phantom{1}\hspace{.2cm}\{this makes $|\gamma|$ increase by $1$, and $\gamma\in\mathcal{G}_1$\}\\
& until $\displaystyle\mathrm{Bern}\left(\frac{1}{|\gamma|}\right)$;\\ & return $\gamma$ \end{tabular} }
\begin{lemma} \label{lemma:connconnpoint} The sampler $\Gamma \mathcal{G}_1(x,y)$ is a Boltzmann sampler for connected planar graphs. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma~\ref{lem:LtoU}. Due to the general property that $\mathcal{C}_{n,m}$ identifies with $\mathcal{C}'_{n-1,m}$, the sampler delimited inside the repeat/until loop draws each object $\gamma\in\mathcal{G}_1$
with probability $G_1\ \!\!\!'(x,y)^{-1}\frac{x^{|\gamma|-1}}{(|\gamma|-1)!}y^{||\gamma||}$, i.e., with probability proportional to $|\gamma|\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$. Hence, according to Lemma~\ref{lemma:rej}, the sampler $\Gamma \mathcal{G}_1(x,y)$
draws each object $\gamma\in\mathcal{G}_1$ with probability proportional to $\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}$, i.e., is a Boltzmann sampler for $\mathcal{G}_1$. \end{proof}
\subsection{Boltzmann sampler for planar graphs} \label{sec:planconn}
A planar graph is classically decomposed into the set of its connected components, yielding \begin{equation} \label{eq:CtoG} \mathcal{G}=\Set(\mathcal{G}_1), \end{equation} which translates to the following Boltzmann sampler for the class $\mathcal{G}$ of planar graphs (the Set construction gives rise to a Poisson law, see Figure~\ref{table:rules}):
\fbox{\begin{tabular}{ll} $\Gamma \mathcal{G}(x,y)$:& $k\leftarrow\Pois(G_1(x,y))$;\\ & return $(\Gamma \mathcal{G}_1(x,y),\ldots,\Gamma \mathcal{G}_1(x,y))$ \{$k$ independent calls\} \end{tabular}}
\begin{proposition} \label{lemma:planconn} The procedure $\Gamma \mathcal{G}(x,y)$ is a Boltzmann sampler for planar graphs. \end{proposition}
\section{Deriving an efficient sampler} \label{sec:efficient} We have completely described in Section~\ref{sec:decomp} a mixed Boltzmann sampler $\Gamma \mathcal{G}(x,y)$ for planar graphs. This sampler yields an exact-size uniform sampler and an approximate-size uniform sampler for planar graphs: to sample at size $n$, call the sampler $\Gamma \mathcal{G}(x,1)$ until the graph generated has size $n$; to sample in a range of sizes $[n(1-\epsilon),n(1+\epsilon)]$, call the sampler $\Gamma \mathcal{G}(x,1)$ until the graph generated has size in the range. These targetted samplers can be shown to have expected polynomial complexity, of order $n^{5/2}$ for approximate-size sampling and $n^{7/2}$ for exact-size sampling (we omit the proof since we will describe more efficient samplers in this section).
However, more is needed to achieve the complexity stated in Theorem~\ref{theo:planarsamp1}, i.e., $O(n/\epsilon)$ for approximate-size sampling and $O(n^2)$ for exact-size sampling. The main problem of the sampler $\Gamma \mathcal{G}(x,1)$ is that the typical size of a graph generated is small,
so that the number of attempts to reach a large target size is prohibitive.
In order to correct this effect, we design in this section a Boltzmann sampler for ``bi-derived" planar graphs, which are equivalent to bi-pointed planar graphs, i.e., with 2 distinguished vertices\footnote{In an earlier version of the article and in the conference version~\cite{Fu05a}, we derived 3 times---as prescribed by~\cite{DuFlLoSc04}---in order to get a singularity type $(1-x/\rho)^{-1/2}$ (efficient targetted samplers are obtained when taking $x=\rho(1-1/(2n))$). We have recently discovered that deriving 2 times (which yields a square-root singularity type $(1-x/\rho)^{1/2}$) and taking again $x=\rho(1-1/(2n))$ yields the same complexities for the targetted samplers, with the advantage that the description and analysis is significantly simpler (in the original article~\cite{DuFlLoSc04}, they prescribe to take $x=\rho$ and to use some early abort techniques for square-root singularity type, but it seems difficult to analyse the gain due to early abortion here, since the Boltzmann sampler for planar graphs makes use of rejection techniques).
}. The intuition is that a Boltzmann sampler for bi-pointed planar graphs gives more weight to large graphs, because a graph of size $n$ gives rise to $n(n-1)$ bi-pointed graphs. Hence, the probability of reaching a large size is better (upon choosing suitably the value of the Boltzmann parameter). The fact that the graphs have to be pointed 2 times is due to the specific asymptotic behaviour of the coefficients counting planar graphs, which has been recently analysed by Gim\'enez and Noy~\cite{gimeneznoy}.
\subsection{Targetted samplers for classes with square-root singularities.} As we describe here, a mixed class $\mathcal{C}$ with a certain type of singularities (square-root type) gives rise to efficient approximate-size and exact-size samplers, provided
$\mathcal{C}$ has a Boltzmann sampler such that the expected cost of generation is of the same order as the expected size of the object generated.
\begin{definition} Given a mixed class $\mathcal{C}$, we define a \emph{singular point} of $\mathcal{C}$ as a pair $x_0>0$, $y_0>0$ such that the function $x\mapsto C(x,y_0)$ has a dominant
singularity at $x_0$ (the radius of convergence is $x_0$). \end{definition}
\begin{definition}\label{def:alpha_sing} For $\alpha\in\mathbb{R}\backslash\mathbb{Z}_{\geq 0}$, a mixed class $\mathcal{C}$ is called $\alpha$-singular if, for each singular point $(x_0,y_0)$ of $\mathcal{C}$, the function $x\mapsto C(x,y_0)$ has a unique dominant singularity at $x_0$ (i.e.,
$x_0$ is the unique singularity on the circle $|z|=x_0$) and admits a singular expansion of the form $$ C(x,y_0)=P(x)+c_{\alpha}\cdot\left( x_0-x \right)^{\alpha}+o\left( (x_0-x )^{\alpha}\right), $$
where $c_{\alpha}$ is a constant, $P(x)$ is rational with no poles in the disk $|z|\leq x_0$, and where the expansion holds in a so-called $\Delta$-neighbourhood of $x_0$, see~\cite{fla,flaod}. In the special case $\alpha=1/2$, the class is said to have square-root singularities. \end{definition}
\begin{lemma}\label{lem:square} Let $\mathcal{C}$ be a mixed class with square-root singularities, and endowed with a Boltzmann sampler $\Gamma\mathcal{C}(x,y)$. Let $(x_0,y_0)$ be a singular point of $\mathcal{C}$. For any $n> 0$, define $$ x_n:=\big(1-\tfrac{1}{2n}\big)\cdot x_0. $$
Call $\pi_n$ ($\pi_{n,\epsilon}$, resp.) the probability that an object $\gamma$ generated by $\Gamma \mathcal{C}(x_n,y_0)$ satisfies $|\gamma|=n$
($|\gamma|\in I_{n,\epsilon}:=[n(1-\epsilon),n(1+\epsilon)]$, resp.); and call $\sigma_n$ the expected size of the output of $\Gamma \mathcal{C}(x_n,y_0)$.
Then $1/\pi_n$ is $O(n^{3/2})$, $1/\pi_{n,\epsilon}$ is $O(n^{1/2}/\epsilon)$, and $\sigma_n$ is $O(n^{1/2})$.
\end{lemma} \begin{proof} The so-called transfer theorems of singularity analysis~\cite{flaod} ensure that the coefficient $a_n:=[x^n]C(x,y_0)$ satisfies, as $n\to\infty$, $a_n\mathop{\sim}_{n\to\infty}c\ \!x_0^{-n}n^{-3/2}$, where $c$ is a positive constant. This easily yields the asymptotic bounds for $1/\pi_n$ and $1/\pi_{n,\epsilon}$, using the expressions $\pi_n=a_nx_n\ \!\!\!^n/C(x_n,y_0)$ and $\pi_{n,\epsilon}=\sum_{k\in I_{n,\epsilon}}a_kx_n\ \!\!\!^k/C(x_n,y_0)$.
It is also an easy exercise to find the asymptotics of $\sigma_n$, using the formula (given in~\cite{DuFlLoSc04}) $\sigma_n=x_n\cdot\partial_x C(x_n,y_0)/C(x_n,y_0)$. \end{proof} Lemma~\ref{lem:square} suggests the following simple heuristic to obtain efficient targetted samplers. For approximate-size sampling (exact-size sampling, resp.), repeat calling $\Gamma \mathcal{C}(x_n,1)$ until the size of the object is in $I_{n,\epsilon}$ (is exactly $n$, resp.). (The parameter $y$ is useful if a target U-size $m$ is also given, as we will see for planar graphs in Section~\ref{sec:sample_edges}.) The complexity of sampling will be good for a class $\mathcal{C}$ that has square-root singularities and that has an efficient Boltzmann sampler. Indeed, for approximate-size sampling, the number of attempts to reach the target-domain $I_{n,\epsilon}$ (i.e., $\pi_{n,\epsilon}^{-1}$) is of order $n^{1/2}$, and for exact-size sampling, the number of attempts to reach the size $n$ (i.e., $\pi_{n}^{-1}$) is of order $n^{3/2}$. If $\mathcal{C}$ is endowed with a Boltzmann sampler $\Gamma \mathcal{C}(x,y)$ such that the expected complexity of sampling at $(x_n,y_0)$ is of order $\sqrt{n}$
(same order as the expected size $\sigma_n$),
then
the expected complexity
is typically $O(n/\epsilon)$ for approximate-size sampling and $O(n^2)$ for exact-size sampling, as we will see for planar graphs.
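The quantities of Lemma~\ref{lem:square} are easy to evaluate on a concrete class with a square-root singularity: the Catalan generating function $C(x)=(1-\sqrt{1-4x})/2$, with dominant singularity at $x_0=1/4$ and coefficients $a_n=\mathrm{Cat}(n-1)$ for $n\geq 1$. The sketch below computes $\pi_n$ and $\sigma_n$ at $x_n=(1-\frac{1}{2n})x_0$ (in logarithmic form to avoid overflow), exhibiting the orders $n^{-3/2}$ and $n^{1/2}$ stated in the lemma.

```python
import math

def log_catalan(m):
    """log of the m-th Catalan number (2m)! / (m! (m+1)!)."""
    return math.lgamma(2 * m + 1) - math.lgamma(m + 1) - math.lgamma(m + 2)

def C(x):
    """Catalan GF C(x) = (1 - sqrt(1-4x))/2; square-root singularity at 1/4."""
    return (1.0 - math.sqrt(1.0 - 4.0 * x)) / 2.0

def pi_n(n):
    """Probability that a Boltzmann sampler for C at x_n = (1-1/(2n))/4
    outputs size exactly n:  pi_n = a_n x_n^n / C(x_n),  a_n = Cat(n-1)."""
    x_n = (1.0 - 1.0 / (2.0 * n)) / 4.0
    log_pi = log_catalan(n - 1) + n * math.log(x_n) - math.log(C(x_n))
    return math.exp(log_pi)

def sigma_n(n):
    """Expected output size x_n C'(x_n)/C(x_n), with C'(x) = 1/sqrt(1-4x)."""
    x_n = (1.0 - 1.0 / (2.0 * n)) / 4.0
    return x_n / (math.sqrt(1.0 - 4.0 * x_n) * C(x_n))
```

Numerically, $n^{3/2}\pi_n$ and $\sigma_n/\sqrt{n}$ stabilise as $n$ grows, in accordance with the bounds $1/\pi_n=O(n^{3/2})$ and $\sigma_n=O(n^{1/2})$ of the lemma.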
Let us mention that the original article~\cite{DuFlLoSc04} uses a different heuristic. The targetted samplers also repeat calling the Boltzmann sampler until the size of the object is in the target domain, but the parameter $x$ is chosen to be \emph{exactly} at the singularity $\rho$. The second difference is that, at each attempt, the generation is interrupted if the size of the object goes beyond the target domain. We prefer to use the simple heuristic discussed above, which does not require early interruption techniques. In this way the samplers are easier to describe and to analyse.
In order to apply these techniques to planar graphs, we have to derive two times the class of planar graphs, as indicated by the following two lemmas.
\begin{lemma}[\cite{fla}] If a class $\mathcal{C}$ is $\alpha$-singular, then the class $\mathcal{C}'$ is $(\alpha-1)$-singular (by the effect of derivation). \end{lemma}
\begin{lemma}[\cite{gimeneznoy}]\label{lem:bi_der} The class $\mathcal{G}$ of planar graphs is $5/2$-singular, hence the class $\mathcal{G}''$ of bi-derived planar graphs has square-root singularities. \end{lemma}
\subsection{Derivation rules for Boltzmann samplers} As suggested by Lemma~\ref{lem:square} and Lemma~\ref{lem:bi_der}, we will get good targetted samplers for planar graphs if we can describe
an efficient Boltzmann sampler for the class $\mathcal{G}''$ of bi-derived planar graphs (a graph in $\mathcal{G}''$ has two unlabelled vertices that are marked specifically, say the first one is marked $\ast$ and the second one is marked $\star$). Our Boltzmann sampler $\Gamma \mathcal{G}''(x,y)$ ---to be presented in this section--- makes use of the decomposition of planar graphs into 3-connected components which we have already successfully used to obtain a Boltzmann sampler for planar graphs in Section~\ref{sec:decomp}. This decomposition
can be formally translated into a decomposition grammar (with additional unpointing/pointing operations). To obtain a Boltzmann sampler for bi-derived planar graphs instead of planar graphs, the idea is simply to \emph{derive} this grammar 2 times.
As we explain here and as is well known in general, a decomposition grammar can be derived automatically. (In our framework, a decomposition grammar involves the 5 constructions $\{+,\star,\Set_{\geq d},\circ_{L},\circ_{U}\}$.)
\begin{proposition}[derivation rules]\label{prop:der_rules} The basic finite classes satisfy $$ (\mathbf{1})'=0,\ \ \ (\mathcal{Z}_L)'=1,\ \ \ (\mathcal{Z}_U)'=0. $$ The 5 constructions satisfy the following derivation rules: \begin{equation} \left\{ \begin{array}{rcl} (\mathcal{A}+\mathcal{B})'&=&\mathcal{A}'+\mathcal{B}',\\ (\mathcal{A}\star\mathcal{B})'&=&\mathcal{A}'\star\mathcal{B}+\mathcal{A}\star\mathcal{B}',\\ (\Set_{\geq d}(\mathcal{B}))'&=&\mathcal{B}'\star\Set_{\geq d-1}(\mathcal{B})\ \mathrm{for}\ d\geq 0,\ \ \ \mathrm{(with}\ \Set_{\geq -1}=\Set)\\ (\mathcal{A}\circ_L\mathcal{B})'&=&\mathcal{B}'\star(\mathcal{A}'\circ_L\mathcal{B}),\\ (\mathcal{A}\circ_U\mathcal{B})'&=&\mathcal{A}'\circ_U\mathcal{B}+\mathcal{B}'\star(\underline{\mathcal{A}}\circ_U\mathcal{B}). \end{array} \right. \end{equation} \end{proposition} \begin{proof} The derivation formulas for basic classes are trivial. The proof of the derivation rules for $\{+,\star,\circ_L\}$ are given in~\cite{BeLaLe}. Notice that the rule for $\Set_{\geq d}$ follows from the rule for $\circ_L$. (Indeed, $\Set_{\geq d}(\mathcal{B})=\mathcal{A}\circ_{L}\mathcal{B}$, where $\mathcal{A}=\Set_{\geq d}(\mathcal{Z}_L)$, which clearly satisfies $\mathcal{A}'=\Set_{\geq d-1}(\mathcal{Z}_L)$.) Finally, the proof of the rule for $\circ_U$ uses similar arguments as the proof of the rule for $\circ_L$. In an object of $(\mathcal{A}\circ_U\mathcal{B})'$, the distinguished atom is either on the core-structure (in $\mathcal{A}$), or is in a certain component (in $\mathcal{B}$) that is substituted at a certain U-atom of the core-structure. The first case yields the term $\mathcal{A}'\circ_U\mathcal{B}$, and the second case yields the term $\mathcal{B}'\star(\underline{\mathcal{A}}\circ_U\mathcal{B})$. \end{proof} According to Proposition~\ref{prop:der_rules}, it is completely automatic to find a decomposition grammar for a derived class $\mathcal{C}'$ if we are given a decomposition grammar for $\mathcal{C}$.
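The derivation rules are mechanical enough to implement directly on a small expression tree, as the following sketch illustrates for the constructions $\{+,\star,\Set_{\geq d}\}$ (the substitution rules for $\circ_L$ and $\circ_U$ are omitted to keep it short). An explicit empty class ``0'' encodes $(\mathbf{1})'=(\mathcal{Z}_U)'=0$, and $\Set_{\geq -1}$ is normalised to $\Set_{\geq 0}=\Set$ as in the proposition.

```python
def derive(expr):
    """Symbolic derivation of a decomposition grammar, following the rules
        (A+B)' = A' + B'
        (A*B)' = A'*B + A*B'
        (Set_{>=d}(B))' = B' * Set_{>=d-1}(B)   (with Set_{>=-1} = Set)
    Expressions are "0", "1", "ZL", "ZU", ("+", A, B), ("*", A, B),
    or ("Set", d, B)."""
    if expr in ("0", "1", "ZU"):
        return "0"                               # empty class
    if expr == "ZL":
        return "1"
    op = expr[0]
    if op == "+":
        _, a, b = expr
        return ("+", derive(a), derive(b))
    if op == "*":
        _, a, b = expr
        return ("+", ("*", derive(a), b), ("*", a, derive(b)))
    if op == "Set":
        _, d, b = expr
        return ("*", derive(b), ("Set", max(d - 1, 0), b))
    raise ValueError(f"unknown construction: {expr!r}")
```

A pass of algebraic simplification (dropping summands containing ``0'', erasing factors ``1'') would then put the derived grammar in the form actually used by the samplers.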
\subsection{Boltzmann sampler for bi-derived planar graphs} We present in this section our Boltzmann sampler $\Gamma \mathcal{G}''(x,y)$ for bi-derived planar graphs, with a quite similar approach to the one adopted in Section~\ref{sec:decomp}, and again a bottom-to-top presentation. At first the closure-mapping allows us to obtain Boltzmann samplers for 3-connected planar graphs marked in various ways. Then we go from 3-connected to bi-derived planar graphs via networks, bi-derived 2-connected, and bi-derived connected planar graphs.
The complete scheme is illustrated in Figure~\ref{fig:scheme_bi_derived}, which is the counterpart of Figure~\ref{fig:scheme_unrooted}.
\begin{figure}
\caption{The complete scheme to obtain a Boltzmann sampler for
bi-derived planar graphs.}
\label{fig:scheme_bi_derived}
\end{figure}
\subsubsection{Boltzmann samplers for derived binary trees.}\label{sec:sampKp} We have already obtained in Section~\ref{sec:boltz_binary_trees} a Boltzmann sampler for the class $\mathcal{K}$ of unrooted asymmetric binary trees. Our purpose here is to derive a Boltzmann sampler for the derived class $\mathcal{K}'$. Recall that we have also described in Section~\ref{sec:boltz_binary_trees} a Boltzmann sampler for the U-derived class $\underline{\mathcal{K}}$, which satisfies the completely recursive decomposition grammar~(\ref{eq:grammar}) (see also Figure~\ref{fig:grammar}). Hence, we have to apply the procedure \UtoL described in Section~\ref{sec:reject} to the class $\mathcal{K}$ in order to obtain a Boltzmann sampler $\Gamma \mathcal{K}'(z,w)$ from $\Gamma \underline{\mathcal{K}}(z,w)$. For this we have to check that $\alpha_{L/U}$ is finite for the class $\mathcal{K}$. It is easily proved that a bicolored binary tree with $m$ leaves has $m-2$ nodes, and that at most $\lfloor 2(m-3)/3\rfloor$ of the nodes are black. In addition, there exist trees with $3i+3$ leaves and $2i$ black nodes (those with all leaves incident to black nodes). Hence, for the class $\mathcal{K}$, the parameter $\alpha_{L/U}$ is equal to $2/3$. Therefore the procedure \UtoL can be applied to the class $\mathcal{K}$.
\subsubsection{Boltzmann samplers for derived rooted dissections and 3-connected maps}\label{sec:sampIp} Our next step is to obtain Boltzmann samplers for derived irreducible dissections, in order to go subsequently to 3-connected maps. As expected we take advantage of the closure-mapping. Recall that the closure-mapping realises the isomorphism $\mathcal{K}\simeq\mathcal{I}$ between the class $\mathcal{K}$ of asymmetric binary trees and the class $\mathcal{I}$ of asymmetric irreducible dissections. There is no problem in deriving an isomorphism, so the closure-mapping also realises the isomorphism $\mathcal{K}'\simeq\mathcal{I}'$. Accordingly we have the following Boltzmann sampler for the class $\mathcal{I}'$:
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{I}'(z,w)$:$\!\!$& $\tau\leftarrow\Gamma \mathcal{K}'(z,w)$;\\ & $\delta\leftarrow\mathrm{closure}(\tau)$;\\ & return $\delta$ \end{tabular} }
\noindent where the discarded L-atom is the same in $\tau$ and in $\delta$.
Then, we easily obtain a Boltzmann sampler for the corresponding \emph{rooted} class $\mathcal{J}'$. Indeed, the equation $\mathcal{J}=3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I}$ that relates $\mathcal{I}$ and $\mathcal{J}$ yields $\mathcal{J}'=3\star\mathcal{Z}_U\star\mathcal{I}+3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I}'$. Hence, using the sampling rules of Figure~\ref{table:rules}, we obtain a Boltzmann sampler $\Gamma \mathcal{J}'(z,w)$ from the Boltzmann samplers $\Gamma \mathcal{I}(z,w)$ and $\Gamma \mathcal{I}'(z,w)$.
From that point, we obtain a Boltzmann sampler for the derived rooted dissections that are admissible. As $\mathcal{J}_{\mathrm{a}}\subset\mathcal{J}$, we also have $\mathcal{J}_{\mathrm{a}}'\subset\mathcal{J}'$, which yields the following Boltzmann sampler for $\mathcal{J}_{\mathrm{a}}'$:
\fbox{ \begin{tabular}{ll} $\Gamma \mathcal{J}_{\mathrm{a}}'(z,w)$:$\!\!$& repeat $\delta\leftarrow\Gamma \mathcal{J}'(z,w)$\\ & until $\delta\in\mathcal{J}_{\mathrm{a}}'$;\\ & return $\delta$ \end{tabular} }
Finally, using the isomorphism $\mathcal{J}_{\mathrm{a}}\simeq\overrightarrow{\cM_3}$ (primal map construction, Section~\ref{sec:primal_map}), which yields $\mathcal{J}_{\mathrm{a}}'\simeq\overrightarrow{\cM_3}'$, we obtain a Boltzmann sampler for derived rooted 3-connected maps:
\fbox{ \begin{tabular}{ll} $\Gamma \overrightarrow{\cM_3}'(z,w)$:$\!\!$& $\delta\leftarrow\Gamma \mathcal{J}_{\mathrm{a}}'(z,w)$;\\ & return $\mathrm{Primal}(\delta)$ \end{tabular} }
\noindent where the returned rooted 3-connected map inherits the distinguished L-atom of $\delta$.
\subsubsection{Boltzmann samplers for derived rooted 3-connected planar graphs.} \label{sec:derived_3conn} As we have seen in Section~\ref{sec:equiv}, Whitney's theorem states that any 3-connected planar graph has two embeddings on the sphere (which differ by a reflection). Clearly the same property holds for 3-connected planar graphs that have additional marks. (We have already used this observation in Section~\ref{sec:equiv} for rooted graphs, $\overrightarrow{\cM_3}\simeq 2\star\overrightarrow{\cG_3}$, in order to obtain a Boltzmann sampler for $\overrightarrow{\cG_3}$.) Hence $\overrightarrow{\cM_3}'\simeq 2\star\overrightarrow{\cG_3}'$, which yields the following Boltzmann sampler for $\overrightarrow{\cG_3}'$:
\fbox{ \begin{tabular}{ll} $\Gamma \overrightarrow{\cG_3}'(z,w)$:$\!\!$& return $\Gamma \overrightarrow{\cM_3}'(z,w)$;\\ & (forgetting the embedding) \end{tabular} }
The next step (in Section~\ref{sec:derived_networks}) is to go to derived networks. This requires deriving the decomposition grammar for networks, which involves not only the classes $\overrightarrow{\cG_3}$, $\overrightarrow{\cG_3}'$, but also the U-derived class $\underline{\overrightarrow{\cG_3}}$. Hence, we also need a Boltzmann sampler for $\underline{\overrightarrow{\cG_3}}$.
To this aim we just have to apply the procedure \LtoU to the class $\overrightarrow{\cG_3}$. By the Euler relation, a 3-connected planar graph with $n$ vertices has at most $3n-6$ edges (equality holds for triangulations). Hence, the parameter $\alpha_{U/L}$ is equal to $3$ for the class $\overrightarrow{\cG_3}$, so \LtoU can be successfully applied to $\overrightarrow{\cG_3}$, yielding a Boltzmann sampler for $\underline{\overrightarrow{\cG_3}}$ from the Boltzmann sampler for $\overrightarrow{\cG_3}'$.
\subsection{Boltzmann samplers for derived networks.}\label{sec:derived_networks} Following the general scheme shown in Figure~\ref{fig:scheme_bi_derived}, our aim is now to obtain a Boltzmann sampler for the class $\mathcal{D}'$ of derived
networks. Recall that the decomposition grammar for $\mathcal{D}$ has allowed us to obtain a Boltzmann sampler for $\mathcal{D}$ from a Boltzmann sampler for $\overrightarrow{\cG_3}$. Using the derivation rules (Proposition~\ref{prop:der_rules}) injected in the grammar~(N), we obtain the following decomposition grammar for $\mathcal{D}'$:
\noindent\includegraphics[width=11.3cm]{Figures/grammars_D_p}
The only terminal classes in this grammar are $\overrightarrow{\cG_3}'$ and $\underline{\overrightarrow{\cG_3}}$. Hence, the sampling rules of Figure~\ref{table:rules} yield a Boltzmann sampler for $\mathcal{D}'$ from the Boltzmann samplers for $\overrightarrow{\cG_3}'$ and $\underline{\overrightarrow{\cG_3}}$, which we have obtained in Section~\ref{sec:derived_3conn}. The sampler $\Gamma\mathcal{D}'(z,y)$ is similar (though with more cases) to the sampler $\Gamma\mathcal{D}(z,y)$ given in Figure~\ref{fig:samp_networks}.
\subsection{Boltzmann samplers for bi-derived 2-connected planar graphs.}\label{sec:sampDp} The aim of this section is to obtain Boltzmann samplers for the class $\mathcal{G}_2\ \!\!\!''$ of bi-derived 2-connected planar graphs (after the Boltzmann sampler for $\mathcal{G}_2\ \!\!\!'$ obtained in Section~\ref{sec:2conn3conn}), in order to go subsequently to bi-derived connected planar graphs.
First, the Boltzmann sampler for $\mathcal{D}'$ yields a Boltzmann sampler for the class $\overrightarrow{\cG_2}'$. Indeed, the identity $(1+\mathcal{D})=(1+\mathcal{Z}_U)\star\overrightarrow{\mathcal{G}_2}$ is derived as $\mathcal{D}'=(1+\mathcal{Z}_U)\star\overrightarrow{\cG_2}'$, which yields the following sampler,
\fbox{ \begin{tabular}{ll} $\Gamma \overrightarrow{\mathcal{G}_2}'(z,y)$:$\!\!$& $\gamma\leftarrow\Gamma \mathcal{D}'(z,y)$;\\ & $\textsc{AddRootEdge}(\gamma)$;\\ & return $\gamma$ \end{tabular} }
\noindent where \textsc{AddRootEdge} has been defined in Section~\ref{sec:2conn3conn}. The proof that this is a Boltzmann sampler for $\overrightarrow{\cG_2}'$ is similar to the proof of Lemma~\ref{lem:netto2conn}.
Next we describe a Boltzmann sampler for the class $\underline{\mathcal{G}_2}'$. As we have seen in Section~\ref{sec:2conn3conn}, $\underline{\mathcal{G}_2}$ and $\overrightarrow{\cG_2}$ are related by the identity $2\star\underline{\mathcal{G}_2}=\mathcal{Z}_L\ \!\!\!^2\star\overrightarrow{\cG_2}$. Hence, if we define $\mathcal{F}:=2\star\underline{\mathcal{G}_2}$, we have $\mathcal{F}'=\mathcal{Z}_L\ \!\!\!^2\star\overrightarrow{\cG_2}'+2\star\mathcal{Z}_L\star\overrightarrow{\cG_2}$. Hence, the sampling rules of Figure~\ref{table:rules} yield a Boltzmann sampler $\Gamma \mathcal{F}'(z,y)$ for the class $\mathcal{F}'$. Clearly, as $\mathcal{F}'=2\star\underline{\mathcal{G}_2}'$, a Boltzmann sampler for $\underline{\mathcal{G}_2}'$ is obtained by calling $\Gamma \mathcal{F}'(z,y)$ and forgetting the direction of the root.
Finally, applying the procedure \UtoL to the class $\mathcal{G}_2\ \!\!\!'$ yields a Boltzmann sampler for $\mathcal{G}_2\ \!\!\!''$ from the Boltzmann sampler for $\underline{\mathcal{G}_2}'$. The procedure can be successfully applied, as the class $\mathcal{G}_2\ \!\!\!'$ satisfies $\alpha_{L/U}=1$ (attained by the link-graph).
\subsubsection{Boltzmann sampler for bi-derived connected planar graphs.}\label{sec:sampCp} The block-decomposition makes it easy to obtain a Boltzmann sampler for the class $\mathcal{G}_1\ \!\!\!''$ of bi-derived connected planar graphs (this decomposition has already allowed us to obtain a Boltzmann sampler for $\mathcal{G}_1\ \!\!\!'$ in Section~\ref{sec:conn2conn}). Recall that the block-decomposition yields the identity $$ \mathcal{G}_1\ \!\!\!'=\Set\left(\mathcal{G}_2\ \!\!\!'\circ_L(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!')\right), $$ which is derived as $$ \mathcal{G}_1\ \!\!\!''=(\mathcal{G}_1\ \!\!\!'+\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!'')\star\mathcal{G}_2\ \!\!\!''\circ_L(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!')\star\mathcal{G}_1\ \!\!\!'. $$
As we already have Boltzmann samplers for the classes $\mathcal{G}_2\ \!\!\!''$ and $\mathcal{G}_1\ \!\!\!'$, the sampling rules of Figure~\ref{table:rules} yield a Boltzmann sampler $\Gamma \mathcal{G}_1\ \!\!\!''(x,y)$ for the class $\mathcal{G}_1\ \!\!\!''$. Observe that the 2-connected blocks of a graph generated by $\Gamma \mathcal{G}_1\ \!\!\!''(x,y)$ are obtained as independent calls to $\Gamma \mathcal{G}_2\ \!\!\!'(z,y)$ and $\Gamma \mathcal{G}_2\ \!\!\!''(z,y)$, where $z$ and $x$ are related by the change of variable $z=xG_1\ \!\!\!'(x,y)$.
\subsubsection{Boltzmann samplers for bi-derived planar graphs}\label{sec:sampGp} We can now achieve our goal, i.e., obtain a Boltzmann sampler for the class $\mathcal{G}''$ of bi-derived planar graphs. For this purpose, we simply derive twice the identity $$ \mathcal{G}=\Set(\mathcal{G}_1), $$ which yields successively the identities $$ \mathcal{G}'=\mathcal{G}_1\ \!\!\!'\star\mathcal{G}, $$ and $$ \mathcal{G}''=\mathcal{G}_1\ \!\!\!''\star\mathcal{G}+\mathcal{G}_1\ \!\!\!'\star\mathcal{G}'. $$ From the first identity and $\Gamma \mathcal{G}(x,y)$, $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$, we get a Boltzmann sampler $\Gamma \mathcal{G}'(x,y)$ for the class $\mathcal{G}'$. Then, from the second identity and $\Gamma \mathcal{G}(x,y)$, $\Gamma \mathcal{G}'(x,y)$, $\Gamma \mathcal{G}_1\ \!\!\!'(x,y)$, $\Gamma \mathcal{G}_1\ \!\!\!''(x,y)$, we get a Boltzmann sampler $\Gamma \mathcal{G}''(x,y)$ for the class $\mathcal{G}''$.
\section{The targetted samplers for planar graphs}\label{sec:final_smap} The Boltzmann sampler $\Gamma\mathcal{G}''(x,y)$---when tuned as indicated in Lemma~\ref{lem:square}---yields efficient exact-size and approximate-size random samplers for planar graphs, with the complexities as stated in Theorem~\ref{theo:planarsamp1} and Theorem~\ref{theo:planarsamp2}. Define the algorithm:
\begin{tabular}{ll} $\textsc{SamplePlanar}(x,y)$:& $\gamma\leftarrow\Gamma\mathcal{G}''(x,y)$;\\
& give label $|\gamma|+1$ to the vertex marked $\star$\\
& and label $|\gamma|+2$ to the vertex marked $\ast$\\
&(thus $|\gamma|$ increases by $2$, and $\gamma\in\mathcal{G}$);\\ & return $\gamma$ \end{tabular}
\subsection{Samplers according to the number of vertices} \label{sec:sample_vertices}
Let $\rho_G$ be the radius of convergence of $x\mapsto G(x,1)$. Define $$x_n:=\big(1-\tfrac{1}{2n}\big)\cdot \rho_G.$$
\noindent For $n\geq 1$, the exact-size sampler is
$\frak{A}_n$: repeat $\gamma \leftarrow \textsc{SamplePlanar}(x_n,1)$ until
$|\gamma|=n$; return~$\gamma$.
\noindent For $n\geq 1$ and $\epsilon >0$, the approximate-size sampler is
$\frak{A}_{n,\epsilon}$: repeat $\gamma\leftarrow \textsc{SamplePlanar}(x_n,1)$ until $|\gamma|\in [n(1-\epsilon),n(1+\epsilon)]$; return $\gamma$.
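Both targetted samplers are plain rejection loops around \textsc{SamplePlanar}; a minimal sketch, assuming a callable `sample_planar` playing the role of $\textsc{SamplePlanar}(x_n,1)$ and an integer-valued `size` function:

```python
def exact_size_sampler(sample_planar, size, n):
    """Sampler A_n: retry until the drawn graph has exactly n vertices."""
    while True:
        gamma = sample_planar()
        if size(gamma) == n:
            return gamma

def approx_size_sampler(sample_planar, size, n, eps):
    """Sampler A_{n,eps}: retry until the size lies in [n(1-eps), n(1+eps)]."""
    while True:
        gamma = sample_planar()
        if n * (1 - eps) <= size(gamma) <= n * (1 + eps):
            return gamma
```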
\subsection{Samplers according to the numbers of vertices and edges} \label{sec:sample_edges}
For any $y>0$, we denote by $\rho_G(y)$ the radius of convergence of $x\mapsto G(x,y)$. Let $\mu(y)$ be the function defined as $$ \mu(y):=-y\frac{\mathrm{d}\rho_G}{\mathrm{d}y}(y)/\rho_G(y). $$ As proved in~\cite{gimeneznoy} (using the so-called quasi-power theorem), for a fixed $y>0$, a large graph drawn by the Boltzmann sampler $\Gamma \mathcal{G}''(x,y)$ has a ratio edges/vertices concentrated around the value $\mu(y)$ as $x$ approaches the radius of convergence of $x\mapsto G(x,y)$. This yields a relation between the secondary parameter $y$ and the ratio edges/vertices. If we want a ratio edges/vertices close to a target value $\mu$, we have to choose $y$ so that $\mu(y)=\mu$. It is shown in~\cite{gimeneznoy} that the function $\mu(y)$ is strictly increasing on $(0,+\infty)$, with $\lim \mu(y)=1$ as $y\to 0$ and $\lim \mu(y)=3$ as $y\to +\infty$. As a consequence, $\mu(y)$ has an inverse function $y(\mu)$ defined on $(1,3)$. (In addition, $\mu\mapsto y(\mu)$ can be evaluated with good precision from the analytic equation it satisfies.) We define $$x_n(\mu):=\big(1-\tfrac{1}{2n}\big)\cdot\rho_G(y(\mu)).$$ For $n\geq 1$ and $\mu\in(1,3)$, the exact-size sampler is
$\overline{\frak{A}}_{n,\mu}$:$\!$ repeat $\gamma\leftarrow \textsc{SamplePlanar}(x_n(\mu),y(\mu))$
until ($|\gamma|\!=\!n$ and $||\gamma||\!=\!\lfloor \mu n\rfloor)$; return~$\gamma$.
\noindent For $n\geq 1$, $\mu\in(1,3)$, and $\epsilon>0$, the approximate-size sampler is
\begin{tabular}{ll}
$\overline{\frak{A}}_{n,\mu,\epsilon}$:& repeat $\gamma\leftarrow \textsc{SamplePlanar}(x_n(\mu),y(\mu))$\\
& until ($|\gamma|\in [n(1-\epsilon),n(1+\epsilon)]$ and $\frac{||\gamma||}{|\gamma|}\in [\mu (1-\epsilon),\mu (1+\epsilon)]$); \\ & return $\gamma$. \end{tabular}
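Since $\mu(y)$ is strictly increasing from $1$ to $3$, the inverse $y(\mu)$ used by these samplers is easy to approximate numerically; a hedged sketch by plain bisection, assuming a callable `mu_of_y` that evaluates $\mu(y)$ (in practice via the analytic equations of~\cite{gimeneznoy}):

```python
def invert_mu(mu_of_y, target, y_lo=1e-9, y_hi=1e9, iters=200):
    """Bisection for y(mu): find y with mu_of_y(y) = target, relying on
    the strict monotonicity of mu on (0, +infinity)."""
    for _ in range(iters):
        y_mid = 0.5 * (y_lo + y_hi)
        if mu_of_y(y_mid) < target:
            y_lo = y_mid
        else:
            y_hi = y_mid
    return 0.5 * (y_lo + y_hi)
```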
\noindent The complexity of the samplers is analysed in Section~\ref{sec:complexity}.
\section{Implementation and experimental results}\label{sec:implement}
\subsection{Implementation} We have completely implemented the random samplers for planar graphs described in Section~\ref{sec:efficient}. First we evaluated with good precision---typically 20 digits---the generating functions of the families of planar graphs that appear in the decomposition (general, connected, 2-connected, 3-connected), derived up to twice. The calculations have been carried out in Maple using the analytic expressions of Gim\'enez and Noy for the generating functions~\cite{gimeneznoy}. We have performed the evaluations for values of the parameter $x$ associated with a set of reference target sizes in logarithmic scale, $n\in\{10^2, 10^3, 10^4,10^5,10^6\}$. From the evaluations of the generating functions, we have computed the vectors of real values that are associated with the random choices to be performed during the generation; e.g., a Poisson law vector with parameter $G_1(x)$ (the EGF of connected planar graphs) is used for drawing the number of connected components of the graph.
The second step was the implementation of the random sampler in Java. To build the graph all along the generation process, it proves more convenient to manipulate a data structure specific to planar maps rather than planar graphs. A further advantage is that the generated graph comes equipped with an explicit (arbitrary) planar embedding. Thus, if the graph is to be drawn in the plane, we do not need to call the rather involved algorithms for embedding a planar graph. Planar maps are suitably manipulated using the so-called \emph{half-edge structure}, where each half-edge occupies a memory block containing a pointer to the opposite half-edge along the same edge and to the next half-edge in ccw order around the incident vertex. Using the half-edge structure, it proves very easy to implement at cost $O(1)$ all primitives used for building the graph---typically, merging two components at a common vertex or edge. As a result, the actual complexity of the implementation matches the complexity of the random samplers as stated in Theorem~\ref{theo:planarsamp1} and Theorem~\ref{theo:planarsamp2}: linear for approximate-size sampling and quadratic for exact-size sampling. In practice, generating a graph of size of order $10^5$ takes a few seconds on a standard computer.
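A minimal sketch of the half-edge structure just described (Python for illustration only; our actual implementation is in Java): each half-edge stores its opposite half-edge and the ccw-next half-edge around its origin vertex, so local surgery such as merging two vertex cycles is $O(1)$.

```python
class HalfEdge:
    """One of the two directed sides of an edge in a planar map."""
    __slots__ = ("opposite", "next")  # next = ccw-next around the origin vertex

    def __init__(self):
        self.opposite = None
        self.next = self  # a new half-edge forms a trivial one-element cycle

def make_edge():
    """Create an isolated edge as a pair of opposite half-edges."""
    h1, h2 = HalfEdge(), HalfEdge()
    h1.opposite, h2.opposite = h2, h1
    return h1, h2

def splice(a, b):
    """Merge (or split) the ccw cycles of half-edges a and b around a
    common vertex, in O(1): swap their `next` pointers."""
    a.next, b.next = b.next, a.next
```

Calling `splice` on two half-edges of distinct vertices merges the two vertices; calling it again undoes the merge, which is the classical quad-edge-style invariant.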
\begin{figure}
\caption{Ratio edges/vertices observed on
a collection $\gamma_1,\ldots,\gamma_{80}$
of 80 random connected planar graphs of size at least $10^4$; each graph $\gamma_i$ yields
a point at coordinates $(i,\mathrm{Rat}(\gamma_i))$, where $\mathrm{Rat}(\gamma)$
is the ratio given by the number of edges divided by the number of vertices of $\gamma$.}
\label{fig:exp_ratio}
\end{figure}
\begin{figure}
\caption{The distribution of vertex degrees observed on
a collection $\gamma_1,\ldots,\gamma_{80}$
of 80 random connected planar graphs of size at least $10^4$.
Each graph $\gamma$ yields points at coordinates
$(1,Z^{(1)}(\gamma)), (2,Z^{(2)}(\gamma)),\ldots,(d,Z^{(d)}(\gamma))$,
where $d$ is the maximal degree of $\gamma$ and,
for $1\leq k\leq d$, $Z^{(k)}(\gamma)$
is the proportion of vertices of $\gamma$ that have degree $k$.}
\label{fig:exp_degree}
\end{figure}
\subsection{Experimentations.} The efficiency of our random samplers allows us to observe statistical properties of parameters on very large random planar graphs---in the range of sizes $10^5$---where the asymptotic regime is already visible. We focus here on
parameters that are known or expected to be concentrated around a limit value. Note that the experimentations are on connected planar graphs instead of general planar graphs. (It is slightly easier to restrict the implementation to
connected graphs, which are conveniently manipulated using the half-edge
data structure.) However, from the works of Gim\'enez and Noy~\cite{gimeneznoy} and previous work by McDiarmid et al.~\cite{McD05}, a random planar graph consists of a huge connected component, plus other components whose total expected size is $O(1)$. Thus, statistical properties like those stated in Conjecture~\ref{conj:planar} should be the same for random planar graphs as for random connected planar graphs.
\noindent\emph{Number of edges.} First we have checked that the random variable $X_n$ that counts the number of edges in a random connected planar graph with $n$ vertices is concentrated. Precisely, Gim\'enez and Noy have proved that $Y_n:=X_n/n$ converges in law to a constant $\mu\approx 2.213$ (they also show that the fluctuations are Gaussian of magnitude $1/\sqrt{n}$). Figure~\ref{fig:exp_ratio} shows in ordinate the ratio edges/vertices for a collection of 80 random connected planar graphs of size at least $10^4$ drawn by our sampler. As we can see, the ratios are concentrated around the horizontal line $y=\mu$, agreeing with the convergence result of Gim\'enez and Noy.
\noindent\emph{Degrees of vertices.} Another parameter of interest is the distribution of the degrees of vertices in a random planar graph. For a planar graph $\gamma$
with $n$ vertices, we denote by $N^{(k)}(\gamma)$ the number of vertices of $\gamma$ that have $k$ neighbours. Accordingly, $Z^{(k)}(\gamma):=N^{(k)}(\gamma)/n$ is the proportion of vertices of degree $k$ in $\gamma$. It is known from Gim\'enez and Noy that, for $k=1,2$, the random variable $Z^{(k)}$ converges in law to an explicit constant. Figure~\ref{fig:exp_degree} shows in abscissa the parameter $k$ and in ordinate the value of $Z^{(k)}$ for a collection of 80 random connected planar graphs of size at least $10^4$ drawn by our sampler. Hence, the vertical line at abscissa $k$ is occupied by 80 points whose ordinates correspond to the values taken by $Z^{(k)}$ for each of the graphs. As we can see, for $k$ small---typically $k\ll\log n$---the values of $Z^{(k)}$ are concentrated around a constant. This leads us to the following conjecture.
\begin{conjecture}\label{conj:planar} For every $k\geq 1$, let $Z^{(k)}_n$ be the random variable denoting the proportion of vertices of degree $k$ in a random planar graph with $n$ vertices taken uniformly at random. Then $Z_n^{(k)}$ converges in law to an explicit constant $\pi^{(k)}$ as $n\to\infty$; and $\sum_k\pi^{(k)}=1$. \end{conjecture}
Let us mention some progress on this conjecture. It has recently been proved in~\cite{DrGiNo07} that the expected values $\mathbb{E}(Z_n^{(k)})$ converge as $n\to\infty$ to constants $\pi^{(k)}$ that are computable and satisfy $\sum_k\pi^{(k)}=1$. Hence, what remains to be shown regarding the conjecture is the concentration property.
\section{Analysis of the time complexity}\label{sec:complexity} This whole section is dedicated to the proof of the complexities of the targetted random samplers. We show that the expected complexities of the targetted samplers $\frak{A}_n$, $\frak{A}_{n,\epsilon}$, $\overline{\frak{A}}_{n,\mu}$, and $\overline{\frak{A}}_{n,\mu,\epsilon}$, as described in Section~\ref{sec:final_smap}, are respectively $O(n^2)$, $O(n/\epsilon)$, $O_{\mu}(n^{5/2})$, and $O_{\mu}(n/\epsilon)$ (the dependency in $\mu$ is not analysed for the sake of simplicity).
Recall that the targetted samplers call $\Gamma\mathcal{G}''(x,y)$ (with suitable values of $x$ and $y$) until the size parameters are in the target domain. Accordingly, the complexity analysis is done in two steps. In the first step, we estimate the probability of hitting the target domain, which allows us to reduce the complexity analysis to the analysis of the expected complexity of the pure Boltzmann sampler $\Gamma\mathcal{G}''(x,y)$. We use a specific notation to denote such an expected complexity:
\begin{definition} Given a class $\mathcal{C}$ endowed with a Boltzmann sampler $\Gamma\mathcal{C}(x,y)$, we denote by $\Lambda\mathcal{C}(x,y)$ the expected combinatorial complexity\footnote{See the discussion on the complexity model after the statement of Theorem~\ref{theo:planarsamp2} in the introduction.} of a call to $\Gamma\mathcal{C}(x,y)$ (note that $\Lambda\mathcal{C}(x,y)$ depends not only on $\mathcal{C}$, but also on a specific Boltzmann sampler for $\mathcal{C}$). \end{definition}
Typically the values $(x,y)$ have to be close to a singular point of $\mathcal{G}$ in order to draw graphs of large size. Hence, in the second step, our aim is to bound $\Lambda\mathcal{G}''(x,y)$ when $(x,y)$ converges to a given singular point $(x_0,y_0)$ of $\mathcal{G}$. To analyse $\Lambda\mathcal{G}''(x,y)$, our approach again proceeds from bottom to top, following the description of the sampler in Section~\ref{sec:efficient} (see also the general scheme summarized in Figure~\ref{fig:scheme_bi_derived}). At each step we give asymptotic bounds for the expected complexities of the Boltzmann samplers when the parameters approach a singular point. This study requires the knowledge of the singular behaviours of all series involved in the decomposition of bi-derived planar graphs, which are recalled in Section~\ref{sec:sing_beh}.
\subsection{Complexity of rejection: the key lemma} The following simple lemma will be extensively used, firstly to reduce the complexity analysis of the targetted samplers to the one of pure Boltzmann samplers, secondly to estimate the effect of the rejection steps on the expected complexities of the Boltzmann samplers.
\begin{lemma}[rejection complexity] \label{lem:target} Let $\frak{A}$ be a random sampler on a combinatorial class $\mathcal{C}$ according to a probability distribution $\mathbb{P}$, and let $p: \mathcal{C}\to [0,1]$ be a function on $\mathcal{C}$, called the rejection function. Consider the rejection algorithm
$\frak{A}_{\mathrm{rej}}$: repeat $\gamma\leftarrow\frak{A}$ until $\Bern(p(\gamma))$; return $\gamma$.
Then the expected complexity $\mathbb{E}(\frak{A}_{\mathrm{rej}})$ of $\frak{A}_{\mathrm{rej}}$ and the expected complexity $\mathbb{E}(\frak{A})$ of $\frak{A}$ are related by \begin{equation} \mathbb{E}(\frak{A}_{\mathrm{rej}})=\frac{1}{p_{\mathrm{acc}}}\mathbb{E}(\frak{A}), \end{equation} where $p_{\mathrm{acc}}:=\sum_{\gamma\in\mathcal{C}}\mathbb{P}(\gamma)p(\gamma)$ is the probability of success of $\frak{A}_{\mathrm{rej}}$ at each attempt. \end{lemma} \begin{proof} The quantity $\mathbb{E}(\frak{A}_{\mathrm{rej}})$ satisfies the recursive equation $$ \mathbb{E}(\frak{A}_{\mathrm{rej}})=\mathbb{E}(\frak{A})+(1-p_{\mathrm{acc}})\mathbb{E}(\frak{A}_{\mathrm{rej}}). $$ Indeed, a first attempt, with expected complexity $\mathbb{E}(\frak{A})$, is always needed; and in case of rejection, occurring with probability $(1-p_{\mathrm{acc}})$, the sampler restarts in the same way as when it is launched. \end{proof}
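The identity of the lemma can be checked by direct simulation in the simplest setting, where every attempt has unit cost and is accepted with a fixed probability $p$, so that the expected total cost should be $1/p_{\mathrm{acc}}=1/p$ (a toy sketch, not part of the proof):

```python
import random

def rejection_cost(p, trials=100_000, seed=42):
    """Average number of attempts (unit cost each) of a rejection loop
    that accepts each attempt independently with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 1
        while rng.random() >= p:
            attempts += 1
        total += attempts
    return total / trials
```

For instance, `rejection_cost(0.25)` is close to $4$, in agreement with the formula $\mathbb{E}(\frak{A}_{\mathrm{rej}})=\mathbb{E}(\frak{A})/p_{\mathrm{acc}}$.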
As a corollary we obtain the following useful formulas to estimate the effect of rejection in Boltzmann samplers when going from L-derived (vertex-pointed) to U-derived (edge-pointed) graphs and vice-versa.
\begin{corollary}[Complexity of changing the root]\label{lem:change_root}
Let $\mathcal{A}$ be a mixed combinatorial class such that the constants $\alpha_{U/L}:=\mathrm{max}_{\gamma\in\mathcal{A}}\frac{||\gamma||}{|\gamma|}$ and $\alpha_{L/U}:=\mathrm{max}_{\gamma\in\mathcal{A}}\frac{|\gamma|}{||\gamma||}$ are finite. Define $c:=\alpha_{U/L}\cdot\alpha_{L/U}$. \begin{itemize} \item Assume $\mathcal{A}'$ is equipped with a Boltzmann sampler, and let $\Gamma \underline{\mathcal{A}}(x,y)$ be the Boltzmann sampler for $\underline{\mathcal{A}}$ obtained by applying $\LtoU$---as defined in Section~\ref{sec:reject}---to $\mathcal{A}$. Then $$ \Lambda \underline{\mathcal{A}}(x,y)\leq c\cdot \Lambda \mathcal{A}'(x,y). $$ \item Assume $\underline{\mathcal{A}}$ is equipped with a Boltzmann sampler, and let $\Gamma \mathcal{A}'(x,y)$ be the Boltzmann sampler for $\mathcal{A}'$ obtained by applying $\UtoL$---as defined in Section~\ref{sec:reject}---to $\mathcal{A}$. Then $$ \Lambda \mathcal{A}'(x,y)\leq c\cdot \Lambda \underline{\mathcal{A}}(x,y). $$ \end{itemize} \end{corollary} \begin{proof} Let us give the proof for \LtoU (the other case is proved in a similar way). By definition of \LtoU the probability of the Bernoulli choice at each attempt in $\Gamma\underline{\mathcal{A}}(x,y)$
is at least $\frac{1}{\alpha_{U/L}}\mathrm{min}_{\gamma\in\mathcal{A}}\frac{||\gamma||}{|\gamma|}$, i.e., at least $1/(\alpha_{U/L}\cdot\alpha_{L/U})$. Hence the probability $p_{\mathrm{acc}}$ of success at each attempt is at least $1/c$. Therefore, by Lemma~\ref{lem:target}, $\Lambda \underline{\mathcal{A}}(x,y)=\Lambda \mathcal{A}'(x,y)/p_{\mathrm{acc}}\leq c\cdot\Lambda\mathcal{A}'(x,y)$. \end{proof}
\subsection{Reduction to analysing the expected complexity of Boltzmann samplers}
We prove here that analysing the expected complexities of the targetted samplers reduces to analysing the expected complexity $\Lambda\mathcal{G}''(x,y)$ when $(x,y)$ approaches a singular point. (Recall that a singular point $(x_0,y_0)$ for a class $\mathcal{C}$ is such that the function $x\mapsto C(x,y_0)$ has a dominant singularity at $x_0$.)
\begin{claim} \label{claim:eq} Assume that for every singular point $(x_0,y_0)$ of $\mathcal{G}$, the expected complexity of the Boltzmann sampler for $\mathcal{G}''$ satisfies\footnote{In this article all convergence statements are meant ``from below'', i.e., $x\to x_0$ means that $x$ approaches $x_0$ while staying smaller than $x_0$.} \begin{equation}\label{eq:claim} \Lambda\mathcal{G}''(x,y_0)=O((x_0-x)^{-1/2})\ \ \mathrm{as}\ x\to x_0. \end{equation} Then the expected complexities of the targetted samplers $\frak{A}_n$, $\frak{A}_{n,\epsilon}$,
$\overline{\frak{A}}_{n,\mu}$, and $\overline{\frak{A}}_{n,\mu,\epsilon}$---as defined in Section~\ref{sec:final_smap}---are respectively
$O(n^2)$, $O(n/\epsilon)$, $O_{\mu}(n^{5/2})$, and $O_{\mu}(n/\epsilon)$.
In other words, proving~\eqref{eq:claim} is enough to prove the complexities of the random samplers for planar graphs, as stated in Theorem~\ref{theo:planarsamp1}
and Theorem~\ref{theo:planarsamp2}. \end{claim} \begin{proof} Assume that (\ref{eq:claim}) holds. Let $\pi_{n,\epsilon}$ ($\pi_n$, resp.) be the probability that the output of $\textsc{SamplePlanar}(x_n,1)$ ---with $x_n=(1-1/2n)\cdot\rho_{G}$--- has size in $I_{n,\epsilon}:=[n(1-\epsilon),n(1+\epsilon)]$ (has size $n$, resp.). According to Lemma~\ref{lem:target}, the expected complexities of the exact-size and approximate-size samplers with respect to vertices ---as described in Section~\ref{sec:sample_vertices}--- satisfy $$ \mathbb{E}(\frak{A_n})=\frac{\Lambda \mathcal{G}''(x_n,1)}{\pi_n},\ \ \ \ \ \ \ \mathbb{E}(\frak{A_{n,\epsilon}})=\frac{\Lambda \mathcal{G}''(x_n,1)}{\pi_{n,\epsilon}}. $$ Equation~(\ref{eq:claim}) ensures that, when $n\to\infty$, $\Lambda \mathcal{G}''(x_n,1)$ is $O(n^{1/2})$. In addition, according to Lemma~\ref{lem:bi_der}, $\mathcal{G}''$ is $1/2$-singular (square-root singularities). Hence, by Lemma~\ref{lem:square}, $1/\pi_n$ is $O(n^{3/2})$
and $1/\pi_{n,\epsilon}$ is $O(n^{1/2}/\epsilon)$. Thus, $\mathbb{E}(\frak{A_n})$ is $O(n^2)$ and $\mathbb{E}(\frak{A_{n,\epsilon}})$ is $O(n/\epsilon)$.
The proof for the samplers with respect to vertices and edges is a bit more technical. Consider a planar graph $\gamma$ drawn by the sampler $\textsc{SamplePlanar}(x_n(\mu),y(\mu))$. In view of the proof for the exact-size sampler, define
$$\ol{\pi}_{n\wedge\mu}:=\mathbb{P}(||\gamma||\!=\!
\lfloor \mu n\rfloor, |\gamma|=n),\ \ \ol{\pi}_{\mu|n}:=\mathbb{P}(||\gamma||\!\!=\!\!\lfloor \mu n\rfloor\ |\ |\gamma|\!\!=\!\!n),\ \ \pi_n:=\mathbb{P}(|\gamma|\!\!=\!\!n).$$
\noindent In view of the proof for the approximate-size sampler, define
$$\ol{\pi}_{n\wedge\mu,\epsilon}:=\mathbb{P}(|\gamma|\in[n(1-\epsilon),n(1+\epsilon)],\ ||\gamma||/|\gamma|\in[\mu(1-\epsilon),\mu(1+\epsilon)]),$$
$$\ol{\pi}_{\mu | n,\epsilon}:=\mathbb{P}(||\gamma||/|\gamma|\in[\mu(1-\epsilon),\mu(1+\epsilon)]\ |\ |\gamma|\in[n(1-\epsilon),n(1+\epsilon)]),$$ and
$$\pi_{n,\epsilon}:=\mathbb{P}(|\gamma|\in[n(1-\epsilon),n(1+\epsilon)]).$$
Notice that $\ol{\pi}_{n\wedge\mu}=\ol{\pi}_{\mu|n}\cdot\pi_n$ and $\ol{\pi}_{n\wedge\mu,\epsilon}=\ol{\pi}_{\mu | n,\epsilon}\cdot\pi_{n,\epsilon}$. Moreover, Lemma~\ref{lem:target} ensures that $$ \mathbb{E}(\overline{\frak{A}}_{n,\mu})=\frac{\Lambda \mathcal{G}''(x_n(\mu),y(\mu))}{\ol{\pi}_{n\wedge\mu}},\ \ \ \ \ \ \ \mathbb{E}(\overline{\frak{A}}_{n,\mu,\epsilon})=\frac{\Lambda \mathcal{G}''(x_n(\mu),y(\mu))}{\ol{\pi}_{n\wedge\mu,\epsilon}}. $$
It has been shown by Gim\'enez and Noy~\cite{gimeneznoy} (based on the quasi-power theorem) that, for a fixed $\mu\in(1,3)$,
$1/\ol{\pi}_{\mu|n}$ is $O_\mu(n^{1/2})$ as $n\to\infty$ (the dependency in $\mu$ is not discussed here for the sake of simplicity). Moreover, Lemma~\ref{lem:square} ensures that $1/\pi_n$ is $O_{\mu}(n^{3/2})$ as $n\to\infty$. Hence, $1/\ol{\pi}_{n\wedge\mu}$ is $O_{\mu}(n^{2})$. Finally, Equation~(\ref{eq:claim}) ensures that $\Lambda \mathcal{G}''(x_n(\mu),y(\mu))$ is $O_{\mu}(n^{1/2})$, therefore $\mathbb{E}(\overline{\frak{A}}_{n,\mu})$ is $O_{\mu}(n^{5/2})$.
For the approximate-size samplers, the results of Gim\'enez and Noy (central limit theorems) ensure that, when $\mu\in(1,3)$ and $\epsilon>0$ are fixed and $n\to \infty$,
$\ol{\pi}_{\mu | n,\epsilon}$ converges to 1. In addition, Lemma~\ref{lem:square} ensures that $1/\pi_{n,\epsilon}$ is $O_{\mu}(n^{1/2}/\epsilon)$. Hence, $1/\ol{\pi}_{n\wedge\mu,\epsilon}$ is $O_{\mu}(n^{1/2}/\epsilon)$. Equation~(\ref{eq:claim}) implies that $\Lambda \mathcal{G}''(x_n(\mu),y(\mu))$ is $O_{\mu}(n^{1/2})$, hence $\mathbb{E}(\overline{\frak{A}}_{n,\mu,\epsilon})$ is $O_{\mu}(n/\epsilon)$. \end{proof}
From now on, our aim is to prove that, for any singular point $(x_0,y_0)$ of $\mathcal{G}$, $\Lambda\mathcal{G}''(x,y_0)$ is $O((x_0-x)^{-1/2})$ as $x\to x_0$.
\subsection{Expected sizes of Boltzmann samplers} As with the expected complexities, it proves convenient to use specific notation for the expected sizes associated with Boltzmann samplers, and to state some of their basic properties.
\begin{definition}[expected sizes] Let $\mathcal{C}$ be a mixed combinatorial class, and let $(x,y)$ be admissible for $\mathcal{C}$ (i.e.,
$C(x,y)$ converges). Define respectively the expected L-size and the expected U-size at $(x,y)$ as the quantities $$
|\mathcal{C}|_{(x,y)}:=\frac{1}{C(x,y)}\sum_{\gamma\in\mathcal{C}}|\gamma|\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}=x\frac{\partial_x C(x,y)}{C(x,y)}, $$
$$||\mathcal{C}||_{(x,y)}:=\frac{1}{C(x,y)}\sum_{\gamma\in\mathcal{C}}||\gamma||\frac{x^{|\gamma|}}{|\gamma|!}y^{||\gamma||}=y\frac{\partial_yC(x,y)}{C(x,y)}. $$ \end{definition}
We will need the following two simple lemmas at some points of the analysis.
\begin{lemma}[monotonicity of expected sizes]\label{lem:monotonicity} Let $\mathcal{C}$ be a mixed class. \begin{itemize}
\item For each fixed $y_0>0$, the expected L-size $x\mapsto |\mathcal{C}|_{(x,y_0)}$ is increasing with $x$.
\item For each fixed $x_0>0$, the expected U-size $y\mapsto ||\mathcal{C}||_{(x_0,y)}$ is increasing with $y$. \end{itemize} \end{lemma} \begin{proof} As noticed in~\cite{DuFlLoSc04} (in the labelled framework), the derivative of the function
$f(x):=|\mathcal{C}|_{(x,y_0)}$ is equal to $1/x$ multiplied by the variance of the L-size of an object under the Boltzmann distribution at $(x,y_0)$. Hence $f'(x)\geq 0$ for $x>0$, so $f(x)$ is increasing with $x$. Similarly the derivative of
$g(y):=||\mathcal{C}||_{(x_0,y)}$ is equal to $1/y$ multiplied by the variance of the U-size of an object under the Boltzmann distribution at $(x_0,y)$, hence $g(y)$ is increasing with $y$ for $y>0$. \end{proof}
\begin{lemma}[divergence of expected sizes at singular points]\label{lem:exp_size} Let $\mathcal{C}$ be an $\alpha$-singular class and let $(x_0,y_0)$ be a singular point of $\mathcal{C}$. Then, as $x\to x_0$: \begin{itemize}
\item if $\alpha>1$, the expected size $x\mapsto |\mathcal{C}|_{(x,y_0)}$ converges to a positive constant,
\item if $0<\alpha<1$, the expected size $x\mapsto |\mathcal{C}|_{(x,y_0)}$ diverges and is of order $(x_0-x)^{\alpha-1}$. \end{itemize} \end{lemma} \begin{proof}
Recall that $|\mathcal{C}|_{(x,y_0)}=x\cdot C'(x,y_0)/C(x,y_0)$, and $\mathcal{C}'$ is $(\alpha-1)$-singular if $\mathcal{C}$ is $\alpha$-singular. Hence, if $\alpha>1$, both functions $C(x,y_0)$ and $C'(x,y_0)$ converge to positive
constants as $x\to x_0$, so that $|\mathcal{C}|_{(x,y_0)}$ also converges to a positive constant.
If $0<\alpha<1$, $C(x,y_0)$ still converges, but $C'(x,y_0)$ diverges, of order $(x_0-x)^{\alpha-1}$
as $x\to x_0$. Hence $|\mathcal{C}|_{(x,y_0)}$ is also of order $(x_0-x)^{\alpha-1}$. \end{proof}
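The divergent case $0<\alpha<1$ can be illustrated numerically on a toy series of ours (not a class from the paper): $C(x)=(1-\sqrt{1-4x})/2$ satisfies $C=x+C^2$, is $1/2$-singular at $x_0=1/4$, and its expected size, rescaled by $(x_0-x)^{1/2}$, approaches a constant:

```python
import math

# Our toy illustration of the case 0 < alpha < 1: C(x) = (1 - sqrt(1-4x))/2
# satisfies C = x + C^2 and is 1/2-singular at x0 = 1/4. The expected size
# x*C'(x)/C(x), with C'(x) = 1/sqrt(1-4x), diverges like (x0 - x)^(-1/2).
def C(x):
    return (1 - math.sqrt(1 - 4 * x)) / 2

def expected_size(x):
    return x / (math.sqrt(1 - 4 * x) * C(x))  # x * C'(x) / C(x)

for eps in (1e-4, 1e-6, 1e-8):
    # rescaled by (x0 - x)^(1/2), the expected size approaches the constant 1/4
    assert abs(math.sqrt(eps) * expected_size(0.25 - eps) - 0.25) < 0.01
```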
\subsection{Computation rules for the expected complexities of Boltzmann samplers}
Thanks to Claim~\ref{claim:eq}, the complexity analysis is now reduced to estimating the expected complexity $\Lambda\mathcal{G}''(x,y)$ when $(x,y)$ is close to a singular point of $\mathcal{G}$. For this purpose, we introduce explicit rules to compute $\Lambda\mathcal{C}(x,y)$ if $\mathcal{C}$ is specified from other classes by a decomposition grammar. These rules will be combined with
Lemma~\ref{lem:target} and Corollary~\ref{lem:change_root} (complexity due to the rejection steps) in order to get a precise asymptotic bound for $\Lambda\mathcal{G}''(x,y)$.
We can now formulate the computation rules for the expected complexities.
\begin{figure}
\caption{Computation rules for the expected complexities of Boltzmann samplers.}
\label{fig:comp_rules}
\end{figure}
\begin{lemma}[computation rules for expected complexities]\label{lem:comp_rules} Let $\mathcal{C}$ be a class obtained from simpler classes $\mathcal{A}$, $\mathcal{B}$ by means of one of the constructions $\{+,\star,\Set_{\geq d},\circ_L,\circ_U\}$.
If $\mathcal{A}$ and $\mathcal{B}$ are equipped with Boltzmann samplers, let $\Gamma\mathcal{C}(x,y)$ be the Boltzmann sampler for $\mathcal{C}$ obtained from the sampling rules of Figure~\ref{table:rules}. Then there are explicit rules, as given in Figure~\ref{fig:comp_rules}, to compute the expected complexity of $\Gamma\mathcal{C}(x,y)$ from
the expected complexities of $\Gamma\mathcal{A}(x,y)$ and $\Gamma\mathcal{B}(x,y)$. \end{lemma} \begin{proof} \noindent\emph{Disjoint union:} $\Gamma\mathcal{C}(x,y)$ first flips a coin, which (by convention) has
unit cost in the combinatorial complexity.
Then $\Gamma\mathcal{C}(x,y)$ either calls $\Gamma\mathcal{A}(x,y)$ or $\Gamma\mathcal{B}(x,y)$ with respective probabilities $A(x,y)/C(x,y)$ and $B(x,y)/C(x,y)$.
\noindent\emph{Product:} $\Gamma\mathcal{C}(x,y)$ calls $\Gamma\mathcal{A}(x,y)$ and then $\Gamma\mathcal{B}(x,y)$, which yields the formula.
\noindent\emph{L-substitution:} $\Gamma\mathcal{C}(x,y)$ calls $\gamma\leftarrow\Gamma\mathcal{A}(B(x,y),y)$ and then replaces each L-atom of $\gamma$ by an object generated by $\Gamma\mathcal{B}(x,y)$. Hence, on average, the first step takes time $\Lambda\mathcal{A}(B(x,y),y)$ and the second step takes time $|\mathcal{A}|_{(B(x,y),y)}\cdot\Lambda\mathcal{B}(x,y)$.
\noindent\emph{$\Set_{\geq d}$:} note that $\Set_{\geq d}(\mathcal{B})$ is equivalent to $\mathcal{A}\circ_L\mathcal{B}$, where $\mathcal{A}:=\Set_{\geq d}(\mathcal{Z}_L)$, which has generating function $\exp_{\geq d}(z):=\sum_{k\geq d}z^k/k!$. A Boltzmann sampler $\Gamma\mathcal{A}(z,y)$ simply consists in drawing an integer under a conditioned Poisson law $\Pois_{\geq d}(z)$, which is done by a simple iterative loop. As the number of iterations is equal to the value that is returned (see~\cite{DuFlLoSc04} for a more detailed discussion), the expected cost of generation for $\mathcal{A}$ is equal to the expected size, i.e., $$
\Lambda\mathcal{A}(z,y)=|\mathcal{A}|_{(z,y)}=z\frac{\exp_{\geq d}'(z)}{\exp_{\geq d}(z)}=z\frac{\exp_{\geq d-1}(z)}{\exp_{\geq d}(z)}. $$ Hence, from the formula for $\Lambda(\mathcal{A}\circ_L\mathcal{B})(x,y)$, we obtain the formula for $\Set_{\geq d}$.
\noindent\emph{U-substitution:} the formula for $\circ_U$ is proved similarly as the one for $\circ_L$. \end{proof}
\begin{remark}\label{rk:finite} When using the computation rules of Figure~\ref{fig:comp_rules} in a recursive way, we have to be careful to check beforehand that all the expected complexities that are involved are finite. Otherwise there is the danger of getting weird identities like ``$\sum_{k\geq 0}2^k=1+2\sum_{k\geq 0}2^k$, so $\sum_{k\geq 0}2^k=-1$.'' \end{remark}
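As a small consistency check of the rules for $+$ and $\star$, consider a toy grammar of ours (not the paper's): plane binary trees $\mathcal{B}=\mathcal{Z}+\mathcal{Z}\star\mathcal{B}\star\mathcal{B}$. Each call to $\Gamma\mathcal{B}$ flips one coin and, with probability $p=zB(z)^2/B(z)$, makes two recursive calls, so the rules give $\Lambda\mathcal{B}=1+2p\,\Lambda\mathcal{B}$; the fixed point must equal the expected number of atoms $zB'(z)/B(z)$, since building each atom costs one coin flip:

```python
import math

# Consistency check of the union/product rules on a toy grammar of ours:
# plane binary trees B = Z + Z*B*B. Each call to Gamma_B flips one coin,
# then with probability p = z*B(z)^2/B(z) makes two recursive calls.
z = 0.3
B = (1 - math.sqrt(1 - 4 * z * z)) / (2 * z)  # solves B = z + z*B^2
p = z * B * B / B                              # probability of the internal branch
assert 2 * p < 1                               # finiteness, in line with the remark above
lam = 1 / (1 - 2 * p)                          # fixed point of Lambda_B = 1 + 2*p*Lambda_B
expected_atoms = 1 / (1 - 2 * z * B)           # z*B'(z)/B(z), the expected size
assert abs(lam - expected_atoms) < 1e-12
```

The check `2 * p < 1` is exactly the finiteness condition required before trusting the fixed-point equation.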
\subsection{Analytic combinatorics of planar graphs}\label{sec:sing_beh}
Let $\mathcal{C}$ be an $\alpha$-singular class (see Definition~\ref{def:alpha_sing}).
A remark that will be used throughout the analysis of the expected complexities is the following: if $\alpha\geq 0$, the function $C(x,y_0)$ converges as $x\to x_0$, and the limit is a positive constant; whereas if $\alpha<0$, the function $C(x,y_0)$ diverges to $+\infty$ and is of order $(x_0-x)^{\alpha}$.
In this section, we review the degrees of singularities of the series of all classes (binary trees, dissections, 3-connected, 2-connected, connected, and general planar graphs) that are involved in the decomposition of planar graphs. We will use this information extensively to estimate the expected complexities of the Boltzmann samplers in Section~\ref{sec:as_bounds}.
\begin{lemma}[bicolored binary trees]\label{lem:sing_bin} Let $\mathcal{R}=\mathcal{R}_{\bullet}+\mathcal{R}_{\circ}$ be the class of rooted bicolored binary trees, which is specified by the system $$ \mathcal{R}_{\bullet}=\mathcal{Z}_L\star(\mathcal{Z}_U+\mathcal{R}_{\circ})^2,\ \ \ \mathcal{R}_{\circ}=(\mathcal{Z}_U+\mathcal{R}_{\bullet})^2. $$ Then the classes $\mathcal{R}_{\bullet}$, $\mathcal{R}_{\circ}$ are $1/2$-singular. The class $\underline{\mathcal{K}}$ ($\mathcal{K}$) of rooted (unrooted, resp.) asymmetric bicolored binary trees is $1/2$-singular ($3/2$-singular, resp.). In addition, these two classes have the same singular points as $\mathcal{R}$. \end{lemma} \begin{proof} The classes $\mathcal{R}_{\bullet}$ and $\mathcal{R}_{\circ}$ satisfy a decomposition grammar that has a strongly connected dependency graph. Hence, by a classical theorem of Drmota, Lalley, and Woods~\cite{fla}, the generating functions of these classes
have square-root singular type. Notice that, from the decomposition grammar~\eqref{eq:grammar},
the class $\underline{\mathcal{K}}$ can be expressed as a positive polynomial in $\mathcal{Z}_L$, $\mathcal{Z}_U$, $\mathcal{R}_{\bullet}$, and $\mathcal{R}_{\circ}$. Hence $\underline{\mathcal{K}}$ inherits the singular points and the square-root singular type from $\mathcal{R}_{\bullet}, \mathcal{R}_{\circ}$. Finally, the generating function of $\mathcal{K}$ is classically obtained as a subtraction (a tree has one more vertex than edges, so subtract the series counting the trees rooted at an edge from the series counting the trees rooted at a vertex). The leading square-root singular terms cancel out due to the subtraction, leaving a leading singular term of degree $3/2$. \end{proof}
\begin{lemma}[irreducible dissections, from~\cite{FuPoSc05}]\label{lem:sing_diss} The class $\mathcal{J}$ of rooted irreducible dissections is $3/2$-singular and has the same singularities as $\mathcal{K}$. \end{lemma} \begin{proof} The class $\mathcal{J}$ is equal to $3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I}$, which is isomorphic to $3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{K}$, so $\mathcal{J}$ has the same singular points and singularity type as $\mathcal{K}$. \end{proof}
\begin{lemma}[rooted 3-connected planar graphs~\cite{BeRi}]\label{lem:sing_3_conn} The class $\overrightarrow{\mathcal{G}_3}$ of edge-rooted 3-connected planar graphs is $3/2$-singular; and the class $\underline{\overrightarrow{\mathcal{G}_3}}$ of U-derived edge-rooted 3-connected planar graphs is $1/2$-singular. These classes have the same singular points as~$\mathcal{K}$. \end{lemma} \begin{proof} The series $\overrightarrow{G_3}(z,w)$ has been proved in~\cite{Mu} to have a rational expression in terms of the two series $R_{\bullet}(z,w)$ and $R_{\circ}(z,w)$ of rooted bicolored binary trees. This property is easily shown to be stable by taking derivatives, so the same property holds for the series $\underline{\overrightarrow{G_3}}(z,w)$. It is proved in~\cite{BeRi,BeGa} that the singular points of $\overrightarrow{\cG_3}$ are the same as those of $\mathcal{R}_{\bullet}$ and $\mathcal{R}_{\circ}$. Hence, the singular expansion of $\overrightarrow{G_3}(z,w)$ at any singular point is simply obtained from the ones of $R_{\bullet}(z,w)$ and $R_{\circ}(z,w)$; one finds that the square-root terms cancel out, leaving a leading singular term of degree $3/2$. The study of $\underline{\overrightarrow{\mathcal{G}_3}}$ is similar. First, the rooting operator does not change the singular points (as it multiplies a coefficient $(n,m)$ only by a factor $m$), hence, $\underline{\overrightarrow{\mathcal{G}_3}}$ has the same singular points as $\mathcal{R}_{\bullet},\mathcal{R}_{\circ}$, which ensures that the singular expansion of $\underline{\overrightarrow{G_3}}(z,w)$ can be obtained from those of $\mathcal{R}_{\bullet}$ and $\mathcal{R}_{\circ}$. One finds that the leading singular term is this time of the square-root type. \end{proof}
\begin{lemma}[networks, from~\cite{BeGa}]\label{lem:sing_networks} The classes $\mathcal{D}$, $\mathcal{S}$, $\mathcal{P}$, and $\mathcal{H}$ of networks are $3/2$-singular, and these classes have the same singular points. \end{lemma}
\begin{lemma}[2-connected, connected, and general planar graphs~\cite{gimeneznoy}]\label{lem:sing_planar} The classes $\mathcal{G}_2$, $\mathcal{G}_1$, $\mathcal{G}$ of 2-connected, connected, and general planar graphs are all $5/2$-singular. In addition, the singular points of $\mathcal{G}_2$ are the same as those of networks, and the singular points are the same in $\mathcal{G}_1$ as in $\mathcal{G}$. \end{lemma}
\subsection{Asymptotic bounds on the expected complexities of Boltzmann samplers} \label{sec:as_bounds} This section is dedicated to proving the asymptotic bound $\Lambda\mathcal{G}''(x,y_0)=O((x_0-x)^{-1/2})$. For this purpose we again adopt a bottom-up approach, following the scheme of Figure~\ref{fig:scheme_bi_derived}. For each class $\mathcal{C}$ appearing in this scheme, we provide an asymptotic bound for the expected complexity of the Boltzmann sampler in a neighbourhood of any fixed singular point of $\mathcal{C}$. In the end we arrive at the desired estimate of $\Lambda\mathcal{G}''(x,y_0)$.
\subsubsection{Complexity of the Boltzmann samplers for binary trees}\label{sec:comp_binary_trees}
\begin{lemma}[U-derived bicolored binary trees]\label{lem:comp_uK} Let $(z_0,w_0)$ be a singular point of $\mathcal{K}$. Then, the expected complexity of the Boltzmann sampler for $\underline{\mathcal{K}}$---given in Section~\ref{sec:boltz_binary_trees}---satisfies, $$ \Lambda \underline{\mathcal{K}}(z,w)=O\Big((z_0-z)^{-1/2}\Big)\ \mathrm{as}\ (z,w)\to (z_0,w_0). $$ \end{lemma} \begin{proof} The Boltzmann sampler $\Gamma \underline{\mathcal{K}}(z,w)$ is just obtained by translating a completely recursive decomposition grammar. Hence, the generation process consists in building the tree node by node following certain branching rules. Accordingly, the cost of generation is just equal to the number of nodes of the tree that is finally returned, assuming unit cost for building a node\footnote{ We could also use the computation rules for the expected complexities, but here there is the simpler argument that the expected complexity is equal to the expected size, as there is no rejection yet.}. As an unrooted binary tree has two more leaves than nodes, we have $$
\Lambda \underline{\mathcal{K}}(z,w)\leq ||\underline{\mathcal{K}}||_{(z,w)}\leq ||\underline{\mathcal{K}}||_{(z,w_0)}, $$ where the second inequality results from the monotonicity property of expected sizes (Lemma~\ref{lem:monotonicity}).
Notice that, for $\tau\in\underline{\mathcal{K}}$, the
number of nodes is not greater than $(3|\tau|+1)$, where $|\tau|$ is as usual the number of black nodes. Hence the number of nodes is at most $4|\tau|$. As a consequence, $$
\Lambda \underline{\mathcal{K}}(z,w)\leq 4\cdot |\underline{\mathcal{K}}|_{(z,w_0)}. $$
According to Lemma~\ref{lem:sing_bin}, the class $\underline{\mathcal{K}}$ is $1/2$-singular. Hence, by Lemma~\ref{lem:exp_size}, $|\underline{\mathcal{K}}|_{(z,w_0)}$ is $O((z_0-z)^{-1/2})$ as $z\to z_0$. So $\Lambda \underline{\mathcal{K}}(z,w)$ is also $O((z_0-z)^{-1/2})$. \end{proof}
\begin{lemma}[derived bicolored binary trees]\label{lem:comp_der_bin} Let $(z_0,w_0)$ be a singular point of $\mathcal{K}$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{K}'$---given in Section~\ref{sec:sampKp}---satisfies $$ \Lambda \mathcal{K}'(z,w)=O\left((z_0-z)^{-1/2}\right)\ \mathrm{as}\ (z,w)\to(z_0,w_0). $$ \end{lemma} \begin{proof} The sampler $\Gamma \mathcal{K}'(z,w)$ has been obtained from $\Gamma \underline{\mathcal{K}}(z,w)$ by applying the procedure \UtoL to the class $\mathcal{K}$. It is easily checked that the ratio of the number of black nodes to the number of leaves in a bicolored binary tree is bounded from above and from below (we have already used the lower bound in Lemma~\ref{lem:comp_uK}). Precisely, $3|\tau|+3\geq ||\tau||$ and
$|\tau|\leq 2||\tau||/3$, from which it is easily checked that $\alpha_{L/U}=2/3$ and $\alpha_{U/L}=6$ (attained by the tree with 1 black and 3 white nodes). Hence, according to Corollary~\ref{lem:change_root}, $\Lambda \mathcal{K}'(z,w)\leq 4\ \!\Lambda \underline{\mathcal{K}}(z,w)$, so $\Lambda \mathcal{K}'(z,w)$ is $O\left((z_0-z)^{-1/2}\right)$. \end{proof}
\begin{lemma}[bicolored binary trees]\label{lem:comp_bin} Let $(z_0,w_0)$ be a singular point of $\mathcal{K}$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{K}$---given in Section~\ref{sec:Ksamp}---satisfies $$ \Lambda \mathcal{K}(z,w)=O\left(1\right)\ \mathrm{as}\ (z,w)\to(z_0,w_0). $$ \end{lemma} \begin{proof} At each attempt in the generator $\Gamma \mathcal{K}(z,w)$, the first step is to call $\Gamma\underline{\mathcal{K}}(z,w)$ to generate a certain tree $\tau\in\underline{\mathcal{K}}$
(it is here convenient to assume that the object is ``chosen'' before the generation starts), with probability $$
\frac{1}{\underline{K}(z,w)}\frac{z^{|\tau|}}{|\tau|!}w^{||\tau||}; $$
and the probability that the generation finishes successfully is $2/(||\tau||+1)$. Hence, the total probability of success at each attempt in $\Gamma \mathcal{K}(z,w)$ satisfies $$
p_{\mathrm{acc}}=\sum_{\tau\in\underline{\mathcal{K}}}\frac{1}{\underline{K}(z,w)}\frac{z^{|\tau|}}{|\tau|!}w^{||\tau||}\cdot\frac{2}{||\tau||+1}. $$
As each object $\tau\in\mathcal{K}$ gives rise to $||\tau||$ objects in $\underline{\mathcal{K}}$ that all have L-size $|\tau|$ and U-size
$||\tau||-1$, we also have $$
p_{\mathrm{acc}}=\sum_{\tau\in\mathcal{K}}\frac{2}{\underline{K}(z,w)}\frac{z^{|\tau|}}{|\tau|!}w^{||\tau||-1}=\frac{2K(z,w)}{w\underline{K}(z,w)}. $$ As $\mathcal{K}$ is $3/2$-singular and $\underline{\mathcal{K}}$ is $1/2$-singular, $p_{\mathrm{acc}}$ converges to the positive constant $c_0:=2K(z_0,w_0)/(w_0\underline{K}(z_0,w_0))$ as $(z,w)\to(z_0,w_0)$.
Now call $\mathfrak{A}(z,w)$ the random generator for $\mathcal{K}$ delimited inside the repeat/until loop of $\Gamma \mathcal{K}(z,w)$, and let $\Lambda \frak{A}(z,w)$ be the expected complexity of $\mathfrak{A}(z,w)$. According to Lemma~\ref{lem:target}, $\Lambda \mathcal{K}(z,w)=\Lambda\frak{A}(z,w)/p_{\mathrm{acc}}$. In addition, when $(z,w)\to (z_0,w_0)$, $p_{\mathrm{acc}}$ converges to a positive constant, hence it remains to prove that $\Lambda \frak{A}(z,w)=O(1)$ in order to prove the lemma.
Let $\tau\in\underline{\mathcal{K}}$, and let $m:=||\tau||$. During a call to $\frak{A}(z,w)$, and knowing (again, in advance) that $\tau$ is under generation, the probability that at least $k\geq 1$ nodes of $\tau$ are built is $2/(k+1)$, due to the Bernoulli probabilities telescoping each other. Hence, for $k<m-1$, the probability $p_k$ that the generation aborts when exactly $k$ nodes are generated
satisfies $p_k=\frac{2}{k+1}-\frac{2}{k+2}=\frac{2}{(k+1)(k+2)}$. In addition, the probability that the whole tree is generated is $2/m$ (with a final rejection or not), in which case $(m-1)$ nodes are built. Measuring the complexity as the number of nodes that are built, we obtain the following expression for the expected complexity of $\frak{A}(z,w)$ knowing that $\tau$ is chosen: $$ \Lambda\frak{A}^{(\tau)}(z,w)=\sum_{k=1}^{m-2}k\cdot p_k+(m-1)\frac{2}{m}\leq 2\ \!H_m, $$ where $H_m:=\sum_{k=1}^m1/k$ is the $m$th harmonic number. Define $a_m(z):=[w^m]\underline{K}(z,w)$. We have $$ \Lambda\frak{A}(z,w)\leq \frac{2}{\underline{K}(z,w)}\sum_m H_ma_m(z)w^m\leq \frac{2}{\underline{K}(z,w)}\sum_m H_ma_m(z_0)w_0^m. $$ Hence, writing $c_1:=3/\underline{K}(z_0,w_0)$, we have $\Lambda\frak{A}(z,w)\leq c_1\sum_m H_ma_m(z_0)w_0^m$ for $(z,w)$ close to $(z_0,w_0)$. Using the Drmota-Lalley-Woods theorem (similarly as in Lemma~\ref{lem:sing_bin}), it is easily shown that the function $w\mapsto \underline{K}(z_0,w)$ has a square-root singularity at $w=w_0$. Hence, the transfer theorems of singularity analysis~\cite{fla,flaod} yield the asymptotic estimate $a_m(z_0)\sim c\ \! m^{-3/2}w_0^{-m}$ for some constant $c>0$, so that $a_m(z_0)\leq c'\ \! m^{-3/2}w_0^{-m}$ for some constant $c'>0$. Hence $\Lambda\frak{A}(z,w)$ is bounded by the converging series $c_1\ \!c'\sum_m H_m\ \!m^{-3/2}$ for $(z,w)$ close to $(z_0,w_0)$, which concludes the proof. \end{proof}
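The probability accounting in the proof above can be verified exactly, using rational arithmetic (a self-contained check of ours): the abort probabilities $p_k=\frac{2}{(k+1)(k+2)}$ together with the full-generation probability $2/m$ sum to $1$, and the conditional expected cost is at most $2H_m$.

```python
from fractions import Fraction

# Exact check (our own, in rational arithmetic) of the proof's accounting:
# for a tree of U-size m, the abort probabilities p_k = 2/((k+1)(k+2)),
# k = 1..m-2, plus the full-generation probability 2/m, sum to 1 by
# telescoping, and the conditional expected cost is at most 2*H_m.
def H(m):
    return sum(Fraction(1, k) for k in range(1, m + 1))

for m in range(2, 40):
    p = [Fraction(2, (k + 1) * (k + 2)) for k in range(1, m - 1)]
    assert sum(p) + Fraction(2, m) == 1  # proper probability distribution
    cost = sum(k * pk for k, pk in zip(range(1, m - 1), p)) + (m - 1) * Fraction(2, m)
    assert cost <= 2 * H(m)
```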
\subsubsection{Complexity of the Boltzmann samplers for irreducible dissections}
\begin{lemma}[irreducible dissections]\label{lem:comp_irr} Let $(z_0,w_0)$ be a singular point of $\mathcal{I}$. Then, the expected complexities of the Boltzmann samplers for $\mathcal{I}$ and $\mathcal{I}'$---described respectively in Section~\ref{sec:sampI} and~\ref{sec:sampIp}---satisfy, as $(z,w)\to (z_0,w_0)$: \begin{eqnarray*} \Lambda \mathcal{I}(z,w)&=&O\ (1),\\ \Lambda \mathcal{I}'(z,w)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof}
As stated in Proposition~\ref{prop:bijbin3conn} and proved in~\cite{FuPoSc05}, the closure-mapping has linear time complexity, i.e., there exists a constant $\lambda$ such that the cost of closing any binary tree $\kappa$ is at most $\lambda\cdot||\kappa||$. Recall that $\Gamma\mathcal{I}(z,w)$ calls $\Gamma \mathcal{K}(z,w)$ and closes the binary tree generated. Hence $$
\Lambda \mathcal{I}(z,w)\leq \Lambda \mathcal{K}(z,w)+\lambda\cdot ||\mathcal{K}||_{(z,w)}\leq \Lambda \mathcal{K}(z,w)+\lambda\cdot ||\mathcal{K}||_{(z,w_0)}, $$
where the second inequality results from the monotonicity property of expected sizes (Lemma~\ref{lem:monotonicity}). Again we use the fact that, for $\tau\in\mathcal{K}$, $||\tau||\leq 3|\tau|+1$, so $||\tau||\leq 4|\tau|$. Hence $$
\Lambda \mathcal{I}(z,w)\leq \Lambda \mathcal{K}(z,w)+4\lambda\cdot |\mathcal{K}|_{(z,w_0)}. $$
As the class $\mathcal{K}$ is $3/2$-singular, the expected size $|\mathcal{K}|_{(z,w_0)}$ is $O(1)$ when $z\to z_0$. In addition, according to Lemma~\ref{lem:comp_bin}, $\Lambda \mathcal{K}(z,w)$ is $O(1)$ when $(z,w)\to (z_0,w_0)$. Hence $\Lambda\mathcal{I}(z,w)$ is $O(1)$.
Similarly, for $\mathcal{I}'$, we have $$
\Lambda \mathcal{I}'(z,w)\leq \Lambda \mathcal{K}'(z,w)+\lambda\cdot ||\mathcal{K}'||_{(z,w)}\leq \Lambda \mathcal{K}'(z,w)+4\lambda\cdot |\mathcal{Z}_L\star\mathcal{K}'|_{(z,w_0)}. $$ As the class $\mathcal{K}'$ is $1/2$-singular (and so is $\mathcal{Z}_L\star\mathcal{K}'$),
the expected size $|\mathcal{Z}_L\star\mathcal{K}'|_{(z,w_0)}$ is $O((z_0-z)^{-1/2})$ when $z\to z_0$. In addition we have proved in Lemma~\ref{lem:comp_der_bin} that $\Lambda \mathcal{K}'(z,w)$ is $O((z_0-z)^{-1/2})$. Therefore $\Lambda\mathcal{I}'(z,w)$ is $O((z_0-z)^{-1/2})$. \end{proof}
\begin{lemma}[rooted irreducible dissections]\label{lem:comp_root_irr} Let $(z_0,w_0)$ be a singular point of $\mathcal{I}$. Then, the expected complexities of the Boltzmann samplers for $\mathcal{J}$ and $\mathcal{J}'$---described respectively in Section~\ref{sec:sampI} and~\ref{sec:sampIp}---satisfy, as $(z,w)\to (z_0,w_0)$: \begin{eqnarray*} \Lambda \mathcal{J}(z,w)&=&O\ (1),\\ \Lambda \mathcal{J}'(z,w)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} The sampler $\Gamma \mathcal{J}(z,w)$ is directly obtained from $\Gamma \mathcal{I}(z,w)$, according to the identity $\mathcal{J}=3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I}$, so $ \Lambda\mathcal{J}(z,w)=\Lambda\mathcal{I}(z,w)$, which is $O(1)$ as $(z,w)\to(z_0,w_0)$.
The sampler $\Gamma\mathcal{J}'(z,w)$ is obtained from $\Gamma \mathcal{I}(z,w)$ and $\Gamma \mathcal{I}'(z,w)$, according to the identity $\mathcal{J}'=3\star\mathcal{Z}_L\star\mathcal{Z}_U\star\mathcal{I}'+3\star\mathcal{Z}_U\star\mathcal{I}$. Hence, $\Lambda\mathcal{J}'(z,w)\leq 1+\Lambda\mathcal{I}(z,w)+\Lambda\mathcal{I}'(z,w)$. According to Lemma~\ref{lem:comp_irr}, $\Lambda\mathcal{I}(z,w)$ and $\Lambda\mathcal{I}'(z,w)$ are respectively $O(1)$ and $O((z_0-z)^{-1/2})$ when $(z,w)\to (z_0,w_0)$. Hence $\Lambda\mathcal{J}'(z,w)$ is $O((z_0-z)^{-1/2})$. \end{proof}
\begin{lemma}[admissible rooted irreducible dissections]\label{comp:Ia} Let $(z_0,w_0)$ be a singular point of $\mathcal{I}$. Then, the expected complexities of the Boltzmann samplers for $\mathcal{J}_{\mathrm{a}}$ and $\mathcal{J}_{\mathrm{a}}'$---described respectively in Section~\ref{sec:sampI} and~\ref{sec:sampIp}---satisfy, as $(z,w)\to (z_0,w_0)$: \begin{eqnarray*} \Lambda \mathcal{J}_{\mathrm{a}}(z,w)&=&O\ (1),\\ \Lambda \mathcal{J}_{\mathrm{a}}'(z,w)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} Call $\overline{\Gamma}{\mathcal{J}}(z,w)$ the sampler that calls $\Gamma\mathcal{J}(z,w)$ and checks if the dissection is admissible. By definition, $\Gamma\mathcal{J}_{\mathrm{a}}(z,w)$ repeats calling $\overline{\Gamma}{\mathcal{J}}(z,w)$ until the dissection generated is in $\mathcal{J}_{\mathrm{a}}$. Hence the probability of acceptance $p_{\mathrm{acc}}$ at each attempt is equal to $J_{\mathrm{a}}(z,w)/J(z,w)$, i.e., is equal to $\overrightarrow{M_3}(z,w)/J(z,w)$ (the isomorphism $\mathcal{J}_{\mathrm{a}}\simeq\overrightarrow{\mathcal{M}_3}$ yields $J_{\mathrm{a}}(z,w)=\overrightarrow{M_3}(z,w)$). Call $\overline{\Lambda}\mathcal{J}(z,w)$ the expected complexity of $\overline{\Gamma}\mathcal{J}(z,w)$. By Lemma~\ref{lem:comp_root_irr}, $$ \Lambda \mathcal{J}_{\mathrm{a}}(z,w)=\frac{1}{p_{\mathrm{acc}}}\overline{\Lambda} \mathcal{J}(z,w)=\frac{J(z,w)}{\overrightarrow{M_3}(z,w)}\overline{\Lambda} \mathcal{J}(z,w). $$ We recall from Section~\ref{sec:sing_beh} that the singular points are the same for rooted 3-connected planar graphs/maps, for bicolored binary trees, and for irreducible dissections. Hence $(z_0,w_0)$ is a singular point for $\overrightarrow{M_3}(z,w)$. The classes $\mathcal{J}$ and $\overrightarrow{\mathcal{M}_3}\simeq 2\star\overrightarrow{\mathcal{G}_3}$ are $3/2$-singular by Lemma~\ref{lem:sing_diss} and Lemma~\ref{lem:sing_3_conn}, respectively. 
Hence, when $(z,w)\to(z_0,w_0)$, the series $J(z,w)$ and $\overrightarrow{M_3}(z,w)$ are $\Theta(1)$; more precisely, they converge to positive constants (because these functions are rational in terms of the bivariate series of binary trees). Hence $p_{\mathrm{acc}}$ also converges to a positive constant, so it remains to prove that $\overline{\Lambda} \mathcal{J}(z,w)$ is $O(1)$. Testing admissibility (i.e., the existence of an internal path of length 3 connecting the root-vertex to the opposite outer vertex) clearly has linear time complexity. Hence, for some constant $\lambda$,
$$\overline{\Lambda} \mathcal{J}(z,w)\leq\Lambda\mathcal{J}(z,w)+\lambda\cdot||\mathcal{J}||_{(z,w)}\leq \Lambda\mathcal{J}(z,w)+\lambda\cdot||\mathcal{J}||_{(z,w_0)},$$
where the second inequality results from the monotonicity of the expected sizes (Lemma~\ref{lem:monotonicity}). Both $\Lambda\mathcal{J}(z,w)$ and $||\mathcal{J}||_{(z,w_0)}$ are $O(1)$ when $z\to z_0$ (by Lemma~\ref{lem:comp_root_irr} and because $\mathcal{J}$ is $3/2$-singular, respectively). Hence $\overline{\Lambda}\mathcal{J}(z,w)$ is also $O(1)$, so $\Lambda \mathcal{J}_{\mathrm{a}}(z,w)$ is also $O(1)$.
The proof for $\mathcal{J}_{\mathrm{a}}'$ is similar. First, we have $$ \Lambda \mathcal{J}_{\mathrm{a}}'(z,w)=\frac{J'(z,w)}{\overrightarrow{M_3}'(z,w)}\cdot\overline{\Lambda} \mathcal{J}'(z,w), $$ where $\overline{\Lambda} \mathcal{J}'(z,w)$ is the expected cost of a call to $\Gamma\mathcal{J}'(z,w)$ followed by an admissibility test. Both series $J'(z,w)$ and $\overrightarrow{M_3}'(z,w)$ are $1/2$-singular; more precisely, they converge to positive constants as $(z,w)\to(z_0,w_0)$ (again, because these functions are rational in terms of the bivariate series of binary trees). Hence, when $(z,w)\to(z_0,w_0)$, the quantity $J'(z,w)/\overrightarrow{M_3}'(z,w)$ converges to a positive constant. Moreover, according to the linear complexity of admissibility testing, we have
$\overline{\Lambda} \mathcal{J}'(z,w)\leq\Lambda \mathcal{J}'(z,w)+\lambda\cdot||\mathcal{J}'||_{(z,w_0)}$. Both quantities $\Lambda \mathcal{J}'(z,w)$ and $||\mathcal{J}'||_{(z,w_0)}$ are $O((z_0-z)^{-1/2})$. Hence $\Lambda \mathcal{J}_{\mathrm{a}}'(z,w)$ is also $O((z_0-z)^{-1/2})$. \end{proof}
\subsubsection{Complexity of the Boltzmann samplers for 3-connected maps}
\begin{lemma}[rooted 3-connected maps]\label{lem:comp_M3} Let $(z_0,w_0)$ be a singular point of $\mathcal{M}_3$. Then the expected complexities of the Boltzmann samplers for $\overrightarrow{\mathcal{M}_3}$ and $\overrightarrow{\mathcal{M}_3}'$ satisfy respectively, as $(z,w)\to (z_0,w_0)$: \begin{eqnarray*} \Lambda \overrightarrow{\mathcal{M}_3}(z,w)&=&O\ (1),\\ \Lambda \overrightarrow{\mathcal{M}_3}'(z,w)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} Recall that $\Gamma\overrightarrow{\mathcal{M}_3}(z,w)$ ($\Gamma\overrightarrow{\mathcal{M}_3}'(z,w)$, resp.) calls $\Gamma\mathcal{J}_{\mathrm{a}}(z,w)$ ($\Gamma\mathcal{J}_{\mathrm{a}}'(z,w)$, resp.) and returns the primal map of the dissection. The primal-map construction is in fact just a reinterpretation of the combinatorial encoding of rooted maps (in particular when dealing with the half-edge data structure). Hence $\Lambda \overrightarrow{\mathcal{M}_3}(z,w)=\Lambda \mathcal{J}_{\mathrm{a}}(z,w)$ and $\Lambda \overrightarrow{\mathcal{M}_3}'(z,w)=\Lambda \mathcal{J}_{\mathrm{a}}'(z,w)$. This concludes the proof, according to the estimates for $\Lambda \mathcal{J}_{\mathrm{a}}(z,w)$ and $\Lambda \mathcal{J}_{\mathrm{a}}'(z,w)$ given in Lemma~\ref{comp:Ia}. (A proof following the same lines as in Lemma~\ref{lem:comp_irr} would also be possible.) \end{proof}
\subsubsection{Complexity of the Boltzmann samplers for 3-connected planar graphs}
\begin{lemma}[rooted 3-connected planar graphs]\label{lem:comp_samp_Gt} Let $(z_0,w_0)$ be a singular point of $\mathcal{G}_3$. Then the expected complexities of the Boltzmann samplers for $\overrightarrow{\mathcal{G}_3}$, $\overrightarrow{\mathcal{G}_3}'$ and $\underline{\overrightarrow{\mathcal{G}_3}}$ satisfy respectively, as $(z,w)\to (z_0,w_0)$: \begin{eqnarray*} \Lambda \overrightarrow{\mathcal{G}_3}(z,w)&=&O\ (1),\\ \Lambda \overrightarrow{\mathcal{G}_3}'(z,w)&=&O\left((z_0-z)^{-1/2}\right),\\ \Lambda \underline{\overrightarrow{\mathcal{G}_3}}(z,w)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} The sampler $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$ ($\Gamma \overrightarrow{\mathcal{G}_3}'(z,w)$, resp.) is directly obtained from $\Gamma \overrightarrow{\mathcal{M}_3}(z,w)$ ($\Gamma \overrightarrow{\mathcal{M}_3}'(z,w)$, resp.) by forgetting the embedding. Hence $\Lambda \overrightarrow{\mathcal{G}_3}(z,w)=\Lambda \overrightarrow{\mathcal{M}_3}(z,w)$ and $\Lambda \overrightarrow{\mathcal{G}_3}'(z,w)=\Lambda \overrightarrow{\mathcal{M}_3}'(z,w)$, which are---by Lemma~\ref{lem:comp_M3}---respectively $O(1)$ and $O((z_0-z)^{-1/2})$ as $(z,w)\to (z_0,w_0)$.
Finally, the sampler $\Gamma \underline{\overrightarrow{\mathcal{G}_3}}(z,w)$ is obtained from $\Gamma \overrightarrow{\mathcal{G}_3}'(z,w)$ by applying the procedure \LtoU to the class $\overrightarrow{\mathcal{G}_3}$. By the Euler relation, $\alpha_{U/L}=3$ (given asymptotically by triangulations) and $\alpha_{L/U}=2/3$ (given asymptotically by cubic graphs). Thus, by Corollary~\ref{lem:change_root}, $\Lambda\underline{\overrightarrow{\mathcal{G}_3}}(z,w)\leq 2\cdot\Lambda\overrightarrow{\mathcal{G}_3}'(z,w)$, which ensures that $\Lambda\underline{\overrightarrow{\mathcal{G}_3}}(z,w)$ is $O((z_0-z)^{-1/2})$. \end{proof}
\subsubsection{Complexity of the Boltzmann samplers for networks} First we need to introduce the following notation. Let $\mathcal{C}$ be a class endowed with a Boltzmann sampler $\Gamma\mathcal{C}(x,y)$ and let $\gamma\in\mathcal{C}$. Then $\Lambda\mathcal{C}^{(\gamma)}(x,y)$ denotes the expected complexity of $\Gamma\mathcal{C}(x,y)$ conditioned on the fact that the object generated is $\gamma$. If $\Gamma\mathcal{C}(x,y)$ uses rejection, i.e., repeats building objects and rejecting them until finally an object is accepted, then $\Lambda \mathcal{C}^{\mathrm{rej}}(x,y)$ denotes the expected complexity of $\Gamma\mathcal{C}(x,y)$ without counting the last (successful) attempt.
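The rejection principle underlying these notations (expected total cost equals expected per-attempt cost divided by the acceptance probability, as in Lemma~\ref{lem:target}) can be illustrated by a small Monte Carlo sketch; the values of the acceptance probability and of the per-attempt cost below are made up for the illustration.

```python
import random

# Hedged Monte Carlo sketch (made-up parameters) of the rejection principle:
# if each attempt has expected cost c and is accepted with probability p,
# the expected total cost is c/p (a geometric number of attempts), the
# rejected attempts contributing c*(1-p)/p on average.
rng = random.Random(0)
p, c = 0.25, 3.0
trials = 200_000
total = 0.0
for _ in range(trials):
    while True:          # repeat attempts until acceptance
        total += c       # cost of one attempt
        if rng.random() < p:
            break
mean = total / trials
assert abs(mean - c / p) < 0.2   # c/p = 3/0.25 = 12
```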
\begin{lemma}[networks]\label{lem:comp_D} Let $(z_0,y_0)$ be a singular point of $\mathcal{D}$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{D}$---described in Section~\ref{sec:2conn3conn}---satisfies $$ \Lambda \mathcal{D}(z,y_0)=O\left(1\right)\ \mathrm{as}\ z\to z_0. $$ \end{lemma} \begin{proof} Trakhtenbrot's decomposition ensures that a network $\gamma\in\mathcal{D}$ is a collection of 3-connected components $\kappa_1,\ldots,\kappa_r$ (in $\overrightarrow{\cG_3}$) that are assembled together in a series-parallel backbone $\beta$ (due to the auxiliary classes $\mathcal{S}$ and $\mathcal{P}$). Moreover, if $\gamma$ is produced by the Boltzmann sampler $\Gamma \mathcal{D}(z,y_0)$, then each of the 3-connected components $\kappa_i$ results from a call to $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$, where $w:=D(z,y_0)$.
An important point, which is proved in~\cite{BeGa}, is that the composition scheme to go from rooted 3-connected planar graphs to networks is critical. This means that $w_0:=D(z_0,y_0)$ (change of variable from 3-connected planar graphs to networks) is such that $(z_0,w_0)$ is a singular point of $\overrightarrow{\cG_3}$.
As the series-parallel backbone is built edge by edge, the cost of generating $\beta$
is simply $||\beta||$ (the number of edges of $\beta$); and the expected cost of generating $\kappa_i$,
for $i\in [1..r]$, is $\Lambda \overrightarrow{\mathcal{G}_3}^{(\kappa_i)}(z,w)$. Hence
\begin{equation}
\Lambda
\mathcal{D}^{(\gamma)}(z,y_0)=||\beta||+\sum_{i=1}^r\Lambda\overrightarrow{\mathcal{G}_3}^{(\kappa_i)}(z,w).
\end{equation}
\begin{claim}
There exists a constant $c$ such that, for every $\kappa\in\overrightarrow{\mathcal{G}_3}$,
$$
\Lambda\overrightarrow{\mathcal{G}_3}^{(\kappa)}(z,w)\leq c||\kappa||\ \ \ \mathrm{as}\ \ (z,w)\to(z_0,w_0).
$$
\end{claim}
\noindent{\it Proof of the claim.}
The Boltzmann sampler $\Gamma \overrightarrow{\mathcal{G}_3}(z,w)$ is obtained by repeated attempts to
build binary trees until the tree is successfully generated (no early interruption) and gives
rise to a 3-connected planar graph (admissibility condition). For $\kappa\in\overrightarrow{\mathcal{G}_3}$, call $c^{(\kappa)}$ the cost of building $\kappa$ (i.e., generating the underlying binary tree and performing the closure). Then
$$
\Lambda \overrightarrow{\mathcal{G}_3}^{(\kappa)}(z,w)=\Lambda \overrightarrow{\mathcal{G}_3}^{\mathrm{rej}}(z,w)+c^{(\kappa)}.
$$
Notice that $\Lambda \overrightarrow{\mathcal{G}_3}^{\mathrm{rej}}(z,w)\leq \Lambda \overrightarrow{\mathcal{G}_3}(z,w)$,
which is $O(1)$ as $(z,w)\to(z_0,w_0)$. Moreover, the closure-mapping has linear time
complexity. Hence there exists a constant $c$ independent of $\kappa$ and of $z$ such that
$\Lambda \overrightarrow{\mathcal{G}_3}^{(\kappa)}(z,w)\leq c\
\!||\kappa||$ as $z\to z_0$.
$\triangle$
The claim ensures that, upon taking $c>1$, every $\gamma\in\mathcal{D}$ satisfies
$$
\Lambda \mathcal{D}^{(\gamma)}(z,y_0)\leq c(||\beta||+\sum_{i=1}^r||\kappa_i||)\ \ \ \mathrm{as}\ \ z\to z_0.
$$
Since each edge of $\gamma$ is represented at most once in $\beta\cup\kappa_1\cup\ldots\cup\kappa_r$, we also have $\Lambda \mathcal{D}^{(\gamma)}(z,y_0)\leq c||\gamma||$. Hence, when $z\to z_0$,
$\Lambda \mathcal{D}^{(\gamma)}(z,y_0)\leq 3c\cdot(|\gamma|+1)$ (by the Euler relation), which yields $$
\Lambda \mathcal{D}(z,y_0)\leq 3c\cdot |\mathcal{Z}_L\star\mathcal{D}|_{(z,y_0)}. $$
As the class $\mathcal{D}$ is $3/2$-singular (clearly, so is $\mathcal{Z}_L\star\mathcal{D}$), the expected size $|\mathcal{Z}_L\star\mathcal{D}|_{(z,y_0)}$ is $O(1)$ when $z\to z_0$. Hence $\Lambda \mathcal{D}(z,y_0)$ is $O(1)$. \end{proof}
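The linear-cost mechanism in the claim above can be illustrated on a toy class, independent of the actual samplers of this paper: a free Boltzmann sampler for plane binary trees ($\mathcal{B}=1+\mathcal{Z}\times\mathcal{B}^2$), generated top-down by Bernoulli switches. All names below are illustrative, not the paper's generators.

```python
import random

def boltzmann_binary_tree(z, rng=random.Random(0)):
    """Toy free Boltzmann sampler for B = 1 + Z*B*B (plane binary trees,
    size = number of internal nodes); illustrative only."""
    assert 0 < z < 0.25                      # the singularity of B(z) is at z = 1/4
    B = (1 - (1 - 4 * z) ** 0.5) / (2 * z)   # value of the generating function B(z)
    def gen():
        if rng.random() < 1.0 / B:           # Bernoulli switch: weight of the atom "1"
            return None                      # leaf
        return (gen(), gen())                # internal node: two recursive calls
    return gen()

def size(tree):
    """Number of internal nodes of a tree produced above."""
    return 0 if tree is None else 1 + size(tree[0]) + size(tree[1])
```

Each call performs one Bernoulli draw per node of the output, so its cost is proportional to the size of the returned tree, mirroring the bound $c^{(\kappa)}=O(||\kappa||)$ used in the claim.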
\begin{lemma}[derived networks] Let $(z_0,y_0)$ be a singular point of $\mathcal{D}$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{D}'$---described in Section~\ref{sec:sampDp}---satisfies $$ \Lambda \mathcal{D}'(z,y_0)=O\left((z_0-z)^{-1/2}\right)\ \mathrm{as}\ z\to z_0. $$ \end{lemma} \begin{proof} Let us fix $z\in(0,z_0)$. Define $X:=(\Lambda \mathcal{D}'(z,y_0),\Lambda \mathcal{S}'(z,y_0),\Lambda \mathcal{P}'(z,y_0),\Lambda \mathcal{H}'(z,y_0))$. Our strategy here is to use the computation rules (Figure~\ref{fig:comp_rules}) to obtain a recursive equation specifying the vector $X$.
By Remark~\ref{rk:finite}, we have to check that the components of $X$ are finite. \begin{claim}\label{claim:Dp_finite} For $z\in(0,z_0)$, the quantities $\Lambda \mathcal{D}'(z,y_0)$, $\Lambda \mathcal{S}'(z,y_0)$, $\Lambda \mathcal{P}'(z,y_0)$, and $\Lambda \mathcal{H}'(z,y_0)$ are finite. \end{claim}
\noindent\emph{Proof of the claim.} Consider $\Lambda \mathcal{D}'(z,y_0)$ (the verification is similar for $\Lambda \mathcal{S}'(z,y_0)$, $\Lambda \mathcal{P}'(z,y_0)$, and $\Lambda \mathcal{H}'(z,y_0)$). Let $\gamma\in\mathcal{D}'$, with $\beta$ the series-parallel backbone and $\kappa_1,\ldots,\kappa_r$ the 3-connected components of $\gamma$. Notice that each $\kappa_i$ is drawn either by $\Gamma\overrightarrow{\mathcal{G}_3}(z,w)$ or $\Gamma\underline{\overrightarrow{\mathcal{G}_3}}(z,w)$
or $\Gamma\overrightarrow{\mathcal{G}_3}'(z,w)$, where $w=D(z,y_0)$. Hence the expected cost of generating $\kappa_i$ is bounded by $M+c||\kappa_i||$, where $M:=\mathrm{Max}(\Lambda\overrightarrow{\mathcal{G}_3}(z,w),\Lambda\underline{\overrightarrow{\mathcal{G}_3}}(z,w),\Lambda\overrightarrow{\mathcal{G}_3}'(z,w))$ and $c||\kappa_i||$ represents the cost of building $\kappa_i$ using the closure-mapping. As a consequence, $$
\Lambda\mathcal{D}'^{(\gamma)}(z,y_0)\leq ||\beta||+\sum_{i=1}^r M+c||\kappa_i||\leq C||\gamma||,\ \mathrm{with}\ C:=M+c+1. $$ Hence $$
\Lambda\mathcal{D}'(z,y_0)\leq \frac{C}{D'(z,y_0)}\sum_{\gamma\in\mathcal{D}'}||\gamma||\frac{z^{|\gamma|}}{|\gamma|!}y_0^{||\gamma||}, $$ which is $O(1)$ since it converges to the constant $Cy_0\partial_yD'(z,y_0)/D'(z,y_0)$.
$\triangle$
Using the computation rules given in Figure~\ref{fig:comp_rules}, the decomposition grammar~(N') of derived networks---as given in Section~\ref{sec:sampDp}---is translated to a linear system $$ X=AX+L, $$ where $A$ is a $4\times 4$-matrix and $L$ is a 4-vector. Precisely, the components of $A$ are rational or exponential expressions in terms of series of networks and their derivatives: all these quantities converge as $z\to z_0$ because all the classes of networks are $3/2$-singular. Hence $A$ converges to a matrix $A_0$ as $z\to z_0$. In addition, $A$ is a substochastic matrix, i.e., a matrix with nonnegative coefficients and with row sums at most 1. Indeed, the entries in each of the 4 rows of $A$ correspond to probabilities of a Bernoulli switch when calling $\Gamma D'(z,y)$, $\Gamma S'(z,y)$, $\Gamma P'(z,y)$, and $\Gamma H'(z,y)$, respectively. Hence, the limit matrix $A_0$ is also substochastic. It is easily checked that $A_0$ is in fact strictly substochastic, i.e., at least one row has sum $<1$ (here, the first and third rows have sum 1, whereas the second and fourth rows have sum $<1$). In addition, $A_0$ is irreducible, i.e., the dependency graph induced by the nonzero coefficients of $A_0$ is strongly connected. A well-known result of Markov chain theory ensures that $(I-A_0)$ is invertible~\cite{Ke}. Hence, $(I-A)$ is invertible for $z$ close to $z_0$, and $(I-A)^{-1}$ converges to the matrix $(I-A_0)^{-1}$. Moreover, the components of $L$ are of the form $$L=\Big(a,b,c,d\cdot\Lambda \overrightarrow{\mathcal{G}_3}'(z,w)+e\cdot\Lambda \underline{\overrightarrow{\mathcal{G}_3}}(z,w)\Big),$$ where $w=D(z,y_0)$ and $a,b,c,d,e$ are expressions involving the series of networks, their derivatives, and the quantities $\Lambda D,\Lambda S, \Lambda P,\Lambda H$, which have already been shown to be bounded as $z\to z_0$. As a consequence, $a,b,c,d,e$ are $O(1)$ as $z\to z_0$. 
Moreover, it has been shown in~\cite{BeGa} that the value $w_0:=D(z_0,y_0)$ is such that $(z_0,w_0)$ is singular for $\mathcal{G}_3$, and $w_0-w\sim \lambda\cdot(z_0-z)$, with $\lambda:=D'(z_0,y_0)$. By Lemma~\ref{lem:comp_samp_Gt}, $\Lambda\overrightarrow{\mathcal{G}_3}'(z,w)$ and $\Lambda\underline{\overrightarrow{\mathcal{G}_3}}(z,w)$ are $O((w_0-w)^{-1/2})$ as $(z,w)\to(z_0,w_0)$; since $w_0-w\sim\lambda\cdot(z_0-z)$, these quantities are also $O((z_0-z)^{-1/2})$. We conclude that the components of $L$ are $O((z_0-z)^{-1/2})$, as well as the components of $X=(I-A)^{-1}L$. In particular, $\Lambda\mathcal{D}'(z,y_0)$ (the first component of $X$) is $O((z_0-z)^{-1/2})$. \end{proof}
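As a numerical sanity check of the linear-system argument (with a made-up matrix, not the actual $A_0$ arising from the grammar), one can verify that a strictly substochastic irreducible matrix has spectral radius below $1$, so that the fixed point $X=(I-A)^{-1}L$ is well defined:

```python
import numpy as np

# Hypothetical strictly substochastic matrix: nonnegative entries, every row
# sum <= 1, at least one row sum < 1, dependency graph strongly connected.
A0 = np.array([[0.0, 0.6, 0.4, 0.0],
               [0.3, 0.0, 0.0, 0.5],   # row sum 0.8 < 1
               [0.5, 0.0, 0.0, 0.5],
               [0.0, 0.7, 0.1, 0.0]])  # row sum 0.8 < 1
assert (A0 >= 0).all() and (A0.sum(axis=1) <= 1).all()

# For an irreducible strictly substochastic matrix the spectral radius is < 1,
# hence I - A0 is invertible and the fixed point X = A0 X + L is well defined.
rho = max(abs(np.linalg.eigvals(A0)))
L = np.array([1.0, 2.0, 1.5, 3.0])    # placeholder right-hand side
X = np.linalg.solve(np.eye(4) - A0, L)

assert rho < 1
assert np.allclose(A0 @ X + L, X)     # X solves the fixed-point equation X = A X + L
```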
\subsubsection{Complexity of the Boltzmann samplers for 2-connected planar graphs}
\begin{lemma}[rooted 2-connected planar graphs]\label{lem:comp_vecG2} Let $(z_0,y_0)$ be a singular point of $\mathcal{G}_2$. Then the expected complexities of the Boltzmann samplers for $\overrightarrow{\mathcal{G}_2}$ and $\overrightarrow{\mathcal{G}_2}'$ satisfy respectively, as $z\to z_0$: \begin{eqnarray*} \Lambda \overrightarrow{\mathcal{G}_2}(z,y_0)&=&O\ (1),\\ \Lambda \overrightarrow{\mathcal{G}_2}'(z,y_0)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} Recall that the Boltzmann sampler $\Gamma \overrightarrow{\mathcal{G}_2}(z,y_0)$ is directly obtained from $\Gamma \mathcal{D}(z,y_0)$, more precisely from $\Gamma (1+\mathcal{D})(z,y_0)$. According to Lemma~\ref{lem:comp_D}, $\Lambda \mathcal{D}(z,y_0)$ is $O(1)$ as $z\to z_0$, hence $\Lambda \overrightarrow{\mathcal{G}_2}(z,y_0)$ is also $O(1)$.
Similarly $\Gamma\overrightarrow{\mathcal{G}_2}'(z,y_0)$ is directly obtained from $\Gamma \mathcal{D}'(z,y_0)$, hence $\Lambda\overrightarrow{\mathcal{G}_2}'(z,y_0)=\Lambda\mathcal{D}'(z,y_0)$, which is $O((z_0-z)^{-1/2})$ as $z\to z_0$. \end{proof}
\begin{lemma}[U-derived 2-connected planar graphs]\label{lem:comp_UG2} Let $(z_0,y_0)$ be a singular point of $\mathcal{G}_2$. Then, the expected complexities of the Boltzmann samplers for $\underline{\mathcal{G}_2}$ and $\underline{\mathcal{G}_2}'$---described in Section~\ref{sec:sampDp}---satisfy, as $z\to z_0$: \begin{eqnarray*} \Lambda \underline{\mathcal{G}_2}(z,y_0)&=&O\ (1),\\ \Lambda \underline{\mathcal{G}_2}'(z,y_0)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} The Boltzmann sampler for $\underline{\mathcal{G}_2}$ is directly obtained from the one for $\overrightarrow{\mathcal{G}_2}$, according to the identity $2\star\underline{\mathcal{G}_2}=\mathcal{Z}_L\ \!\!\!^2\star\overrightarrow{\mathcal{G}_2}$. Hence $\Lambda\underline{\mathcal{G}_2}(z,y_0)=\Lambda\overrightarrow{\mathcal{G}_2}(z,y_0)$, which is $O(1)$ as $z\to z_0$, according to Lemma~\ref{lem:comp_vecG2}. Similarly, the Boltzmann sampler for $\underline{\mathcal{G}_2}'$ is directly obtained from the ones for the classes $\overrightarrow{\mathcal{G}_2}$ and $\overrightarrow{\mathcal{G}_2}'$, according to the identity $2\star\underline{\mathcal{G}_2}'=\mathcal{Z}_L\ \!\!\!^2\star\overrightarrow{\mathcal{G}_2}'+2\star\mathcal{Z}_L\star\overrightarrow{\mathcal{G}_2}$. Hence $\Lambda\underline{\mathcal{G}_2}'(z,y_0)\leq 1+\Lambda\overrightarrow{\mathcal{G}_2}'(z,y_0)+\Lambda\overrightarrow{\mathcal{G}_2}(z,y_0)$.
When $z\to z_0$, $\Lambda\overrightarrow{\mathcal{G}_2}(z,y_0)$ is $O(1)$ and $\Lambda\overrightarrow{\mathcal{G}_2}'(z,y_0)$ is $O((z_0-z)^{-1/2})$ according to Lemma~\ref{lem:comp_vecG2}. Hence, $\Lambda\underline{\mathcal{G}_2}'(z,y_0)$ is $O((z_0-z)^{-1/2})$. \end{proof}
\begin{lemma}[bi-derived 2-connected planar graphs]\label{lem:comp_LG2} Let $(z_0,y_0)$ be a singular point of $\mathcal{G}_2$. Then, the expected complexities of the Boltzmann samplers for $\mathcal{G}_2\ \!\!\!'$ and $\mathcal{G}_2\ \!\!\!''$---described in Section~\ref{sec:sampDp}---satisfy, as $z\to z_0$: \begin{eqnarray*} \Lambda \mathcal{G}_2\ \!\!\!'(z,y_0)&=&O\ (1),\\ \Lambda \mathcal{G}_2\ \!\!\!''(z,y_0)&=&O\left((z_0-z)^{-1/2}\right). \end{eqnarray*} \end{lemma} \begin{proof} Recall that the Boltzmann sampler $\Gamma\mathcal{G}_2\ \!\!\!'(z,y_0)$ is obtained from $\Gamma\underline{\mathcal{G}_2}(z,y_0)$ by applying the procedure \UtoL to the class $\mathcal{G}_2$. In addition, according to
the Euler relation, any simple connected planar graph $\gamma$ (with $|\gamma|$ the number of vertices and $||\gamma||$ the number of edges) satisfies $|\gamma|\leq
||\gamma||+1$ (trees) and $||\gamma||\leq 3|\gamma|-6$ (triangulations). It is then easily checked that, for the class $\mathcal{G}_2$, $\alpha_{U/L}=3$ (attained asymptotically by triangulations) and $\alpha_{L/U}=2$ (attained by the link-graph, which has 2 vertices and 1 edge). Hence, by Corollary~\ref{lem:change_root},
$\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)\leq 6\ \!\Lambda\underline{\mathcal{G}_2}(z,y_0)$. Thus, by Lemma~\ref{lem:comp_UG2}, $\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)$ is $O(1)$ as $z\to z_0$.
The proof for $\Lambda \mathcal{G}_2\ \!\!\!''(z,y_0)$ is similar, except that the procedure \UtoL is now applied to the derived class $\mathcal{G}_2\ \!\!\!'$, meaning that the L-size is now the number of vertices minus 1. We still have $\alpha_{U/L}=3$ (attained asymptotically by triangulations), and now $\alpha_{L/U}=1$ (attained by the link-graph). Corollary~\ref{lem:change_root} yields $\Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)\leq 3\ \!\Lambda\underline{\mathcal{G}_2}'(z,y_0)$. Hence, from Lemma~\ref{lem:comp_UG2}, $\Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)$ is $O((z_0-z)^{-1/2})$ as $z\to z_0$. \end{proof}
\subsubsection{Complexity of the Boltzmann samplers for connected planar graphs}
\begin{lemma}[derived connected planar graphs]\label{lem:comp_G1p} Let $(x_0,y_0)$ be a singular point of $\mathcal{G}_1$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{G}_1\ \!\!\!'$---described in Section~\ref{sec:conn2conn}---satisfies \begin{eqnarray*} \Lambda \mathcal{G}_1\ \!\!\!'(x,y_0)&=&O\ \!(1)\ \ \ \mathrm{as}\ x\to x_0.\\ \end{eqnarray*} \end{lemma} \begin{proof} Recall that the Boltzmann sampler for $\mathcal{G}_1\ \!\!\!'$ results from the identity (block decomposition, Equation~\eqref{eq:2conn}) $$ \mathcal{G}_1\ \!\!\!'=\Set\left(\mathcal{G}_2\ \!\!\!'\circ_L(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!') \right). $$ We want to use the computation rules (Figure~\ref{fig:comp_rules}) to obtain a recursive equation for $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$. Again, according to Remark~\ref{rk:finite}, we have to check that $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is finite. \begin{claim}\label{claim:Gcp_finite} For $0<x<x_0$, the quantity $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is finite. \end{claim} \noindent\textit{Proof of the claim.} Let $\gamma\in\mathcal{G}_1\ \!\!\!'$, with
$\kappa_1,\ldots,\kappa_r$ the 2-connected blocks of $\gamma$. We have $$
\Lambda\mathcal{G}_1\ \!\!\!'^{(\gamma)}(x,y_0)=2||\gamma||+\sum_{i=1}^r\Lambda\mathcal{G}_2\ \!\!\!'^{(\kappa_i)}(z,y_0),\ \ \mathrm{where}\ z=xG_1\ \!\!\!'(x,y_0). $$ (The first term stands for the cost of choosing the degrees using a generator for a Poisson law; note that the sum of the degrees over all the vertices of $\gamma$
is $2||\gamma||$.) It is easily shown that there exists a constant $M$ such that
$\Lambda\mathcal{G}_2\ \!\!\!'^{(\kappa)}(z,y_0)\leq M||\kappa||$ for any $\kappa\in\mathcal{G}_2\ \!\!\!'$ (using the fact that such a bound holds for $\Lambda\mathcal{D}^{(\kappa)}(z,y_0)$
and that $\Gamma\mathcal{G}_2\ \!\!\!'(z,y_0)$ is obtained from $\Gamma\mathcal{D}(z,y_0)$ via a simple rejection step). Therefore $\Lambda\mathcal{G}_1\ \!\!\!'^{(\gamma)}(x,y_0)\leq C||\gamma||$, with $C=2+M$. We conclude that $$
\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)\leq \frac{C}{G_1\ \!\!\!'(x,y_0)}\sum_{\gamma\in\mathcal{G}_1\ \!\!\!'}||\gamma||\frac{x^{|\gamma|}}{|\gamma|!}y_0^{||\gamma||}, $$ which is $O(1)$ since it converges to the constant $Cy_0\partial_yG_1\ \!\!\!'(x,y_0)/G_1\ \!\!\!'(x,y_0)$.
$\triangle$
The computation rules (Figure~\ref{fig:comp_rules}) yield $$
\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)=G_2\ \!\!\!'(z,y_0)\cdot\left(\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)+|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}\cdot\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0) \right)\ \ \mathrm{where}\ z=xG_1\ \!\!\!'(x,y_0), $$ so that $$
\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)=\frac{G_2\ \!\!\!'(z,y_0)\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)}{1-G_2\ \!\!\!'(z,y_0)\cdot|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}}. $$ Similarly as in the transition from 3-connected planar graphs to networks, we use the important point, proved in~\cite{gimeneznoy}, that the composition scheme to go from 2-connected to connected planar graphs is critical. This means that, when $x\to x_0$, the quantity $z=xG_1\ \!\!\!'(x,y_0)$ (which is the change of variable from 2-connected to connected) converges to a positive constant $z_0$ such that $(z_0,y_0)$ is a singular point of $\mathcal{G}_2$. Hence, according to Lemma~\ref{lem:comp_LG2}, $\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)$ is $O(1)$ as $x\to x_0$. Moreover, as the class $\mathcal{G}_2\ \!\!\!'$ is $3/2$-singular, the series $G_2\ \!\!\!'(z,y_0)$ and the expected size
$|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}$ converge to positive constants that are denoted respectively $G_2\ \!\!\!'(z_0,y_0)$ and $|\mathcal{G}_2\ \!\!\!'|_{(z_0,y_0)}$. We have shown that the numerator of $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$ and that the denominator converges as $x\to x_0$. To prove that $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$, it remains to check that the denominator does not converge to $0$, i.e., to prove that
$G_2\ \!\!\!'(z_0,y_0)\cdot |\mathcal{G}_2\ \!\!\!'|_{(z_0,y_0)}\neq 1$.
To show this, we use the simple trick that the expected complexity and expected size of Boltzmann samplers satisfy similar computation rules. Indeed, from Equation~\eqref{eq:2conn}, it is easy to derive the equation $$
|\mathcal{G}_1\ \!\!\!'|_{(x,y_0)}=G_2\ \!\!\!'(z,y_0)\cdot|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}\cdot\left(|\mathcal{G}_1\ \!\!\!'|_{(x,y_0)}+1\right)\ \ \mathrm{where}\ z=xG_1\ \!\!\!'(x,y_0), $$
either using the formula $|\mathcal{C}|_{(x,y)}=\partial_x C(x,y)/C(x,y)$, or simply by interpreting what happens during a call to $\Gamma\mathcal{G}_1\ \!\!\!'(x,y)$ (an average of $G_2\ \!\!\!'(z,y_0)$ blocks are attached at the root-vertex, each block has
average size $|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}$ and carries a connected component of average size $(|\mathcal{G}_1\ \!\!\!'|_{(x,y_0)}+1)$ at each non-root vertex). Hence $$
|\mathcal{G}_1\ \!\!\!'|_{(x,y_0)}=\frac{G_2\ \!\!\!'(z,y_0)\cdot |\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}}{1-G_2\ \!\!\!'(z,y_0)\cdot |\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}}. $$
Notice that this is the same expression as $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$, except for $|\mathcal{G}_2\ \!\!\!'|_{(z,y_0)}$ replacing $\Lambda\mathcal{G}_2\ \!\!\!'(z,y_0)$ in the numerator. The important point is that we already know that $|\mathcal{G}_1\ \!\!\!'|_{(x,y_0)}$
converges as $x\to x_0$, since the class $\mathcal{G}_1\ \!\!\!'$ is $3/2$-singular (see Lemma~\ref{lem:sing_planar}). Hence $G_2\ \!\!\!'(z_0,y_0)\cdot |\mathcal{G}_2\ \!\!\!'|_{(z_0,y_0)}$ has to be different from $1$ (more precisely, it is strictly less than $1$), which concludes the proof. \end{proof}
\begin{lemma}[bi-derived connected planar graphs]\label{lem:comp_G1pp} Let $(x_0,y_0)$ be a singular point of $\mathcal{G}_1$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{G}_1\ \!\!\!''$---described in Section~\ref{sec:sampCp}---satisfies \begin{eqnarray*} \Lambda \mathcal{G}_1\ \!\!\!''(x,y_0)&=&O\ \left((x_0-x)^{-1/2}\right)\ \ \mathrm{as}\ x\to x_0.\\ \end{eqnarray*} \end{lemma} \begin{proof} The proof for $\Lambda \mathcal{G}_1\ \!\!\!''(x,y_0)$ is easier than for $\Lambda \mathcal{G}_1\ \!\!\!'(x,y_0)$. Recall that $\Gamma \mathcal{G}_1\ \!\!\!''(x,y_0)$ is obtained from the identity $$ \mathcal{G}_1\ \!\!\!''=\left(\mathcal{G}_1\ \!\!\!'+\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!'' \right)\star\mathcal{G}_2\ \!\!\!''\circ_L(\mathcal{Z}_L\star\mathcal{G}_1\ \!\!\!')\star\mathcal{G}_1\ \!\!\!'. $$ At first one easily checks (using similar arguments as in Claim~\ref{claim:Gcp_finite}) that $\Lambda \mathcal{G}_1\ \!\!\!''(x,y_0)$ is finite.
Using the computation rules given in Figure~\ref{fig:comp_rules}, we obtain, writing as usual $z=xG_1\ \!\!\!'(x,y_0)$, \begin{eqnarray*} \Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)&\!\!\!=\!\!\!&1+\frac{G_1\ \!\!\!'(x,y_0)}{G_1\ \!\!\!'(x,y_0)\!+\!xG_1\ \!\!\!''(x,y_0)}\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\frac{xG_1\ \!\!\!''(x,y_0)}{G_1\ \!\!\!'(x,y_0)\!+\!xG_1\ \!\!\!''(x,y_0)}\Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)\\
&&+ \Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)+|\mathcal{G}_2\ \!\!\!''|_{(z,y_0)}\cdot\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0). \end{eqnarray*} Hence
$$\Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)=a(x,y_0)\cdot(1+b(x,y_0)\cdot \Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)+|\mathcal{G}_2\ \!\!\!''|_{(z,y_0)}\cdot\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)),$$ where
$$a(x,y_0)=\frac{G_1\ \!\!\!'(x,y_0)+xG_1\ \!\!\!''(x,y_0)}{G_1\ \!\!\!'(x,y_0)},\ \ \ b(x,y_0)=\frac{2G_1\ \!\!\!'(x,y_0)+xG_1\ \!\!\!''(x,y_0)}{G_1\ \!\!\!'(x,y_0)+xG_1\ \!\!\!''(x,y_0)}.$$ As the classes $\mathcal{G}_1\ \!\!\!'$ and $\mathcal{G}_1\ \!\!\!''$ are respectively $3/2$-singular and $1/2$-singular, the series $a(x,y_0)$ and $b(x,y_0)$ converge when $x\to x_0$. As $\mathcal{G}_2\ \!\!\!''$ is $1/2$-singular,
$|\mathcal{G}_2\ \!\!\!''|_{(z,y_0)}$ is $O((z_0-z)^{-1/2})$ when $z\to z_0$. Moreover, according to Lemma~\ref{lem:comp_LG2}, $\Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)$ is $O((z_0-z)^{-1/2})$. Next we use the fact that the change of variable from 2-connected to connected is critical.
Precisely, as proved in~\cite{BeGa}, when $x\to x_0$ and when $z$ and $x$ are related by $z=xG_1\ \!\!\!'(x,y_0)$, we have $z_0-z\sim \lambda\cdot (x_0-x)$, with $\lambda:=\lim \mathrm{d}z/\mathrm{d}x=x_0G_1\ \!\!\!''(x_0,y_0)+G_1\ \!\!\!'(x_0,y_0)$. Hence, $|\mathcal{G}_2\ \!\!\!''|_{(z,y_0)}$ and $\Lambda\mathcal{G}_2\ \!\!\!''(z,y_0)$ are $O((x_0-x)^{-1/2})$. In addition, we have proved in Lemma~\ref{lem:comp_G1p} that $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$. We conclude that $\Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)$ is $O((x_0-x)^{-1/2})$. \end{proof}
\begin{lemma}[connected planar graphs]\label{lem:comp_G1} Let $(x_0,y_0)$ be a singular point of $\mathcal{G}_1$. Then, the expected complexity of the Boltzmann sampler for $\mathcal{G}_1$---described in Section~\ref{sec:conn2conn}---satisfies $$ \Lambda \mathcal{G}_1(x,y_0)=O\ (1)\ \ \mathrm{as}\ x\to x_0. $$ \end{lemma} \begin{proof} As described in Section~\ref{sec:conn2conn}, the sampler $\Gamma\mathcal{G}_1(x,y)$ computes
$\gamma\leftarrow\Gamma\mathcal{G}_1\ \!\!\!'(x,y)$ and keeps $\gamma$ with probability $1/(|\gamma|+1)$. Hence the probability of success at each attempt is $$
p_{\mathrm{acc}}=\frac{1}{G_1\ \!\!\!'(x,y_0)}\sum_{\gamma\in\mathcal{G}_1\ \!\!\!'}\frac{1}{|\gamma|+1}\frac{x^{|\gamma|}}{|\gamma|!}y_0^{||\gamma||}=\frac{1}{G_1\ \!\!\!'(x,y_0)}\sum_{\gamma\in\mathcal{G}_1\ \!\!\!'}\frac{x^{|\gamma|}}{(|\gamma|+1)!}y_0^{||\gamma||}. $$ Recall that for any class $\mathcal{C}$, $\mathcal{C}'_{n,m}$ identifies to $\mathcal{C}_{n+1,m}$. Hence $$
p_{\mathrm{acc}}=\frac{1}{G_1\ \!\!\!'(x,y_0)}\sum_{\gamma\in\mathcal{G}_1}\frac{x^{|\gamma|-1}}{|\gamma|!}y_0^{||\gamma||}=\frac{G_1(x,y_0)}{xG_1\ \!\!\!'(x,y_0)}. $$ In addition, by Lemma~\ref{lem:target}, $\Lambda\mathcal{G}_1(x,y_0)=\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)/p_{\mathrm{acc}}$. As the classes $\mathcal{G}_1$ and $\mathcal{G}_1\ \!\!\!'$ are respectively $5/2$-singular and $3/2$-singular, both series $G_1(x,y_0)$ and $G_1\ \!\!\!'(x,y_0)$ converge to positive constants when $x\to x_0$. Hence $p_{\mathrm{acc}}$ converges to a positive constant as well. In addition, $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$ by Lemma~\ref{lem:comp_G1p}. Hence $\Lambda\mathcal{G}_1(x,y_0)$ is also $O(1)$. \end{proof}
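The rejection step used here can be sketched generically in a few lines; `derived_sampler` and `size` below are hypothetical placeholders for a sampler of a derived class and its L-size function, not this paper's actual generators.

```python
import random

def underive(derived_sampler, size, rng=random.Random(1)):
    """Turn a Boltzmann sampler for a derived class C' into one for C:
    keep a sample gamma with probability 1/(|gamma| + 1), otherwise retry.
    Generic sketch with placeholder arguments."""
    while True:
        gamma = derived_sampler()
        if rng.random() < 1.0 / (size(gamma) + 1):
            return gamma

# Toy derived sampler whose "objects" are just their sizes.
def toy_sampler(rng=random.Random(2)):
    return rng.choice([0, 1, 1, 2, 3])

obj = underive(toy_sampler, lambda n: n)
assert obj in (0, 1, 2, 3)
```

The expected number of attempts is $1/p_{\mathrm{acc}}$, so the expected cost of the resulting sampler is the expected cost of one attempt divided by $p_{\mathrm{acc}}$, matching the identity $\Lambda\mathcal{G}_1=\Lambda\mathcal{G}_1\ \!\!\!'/p_{\mathrm{acc}}$ above.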
\subsubsection{Complexity of the Boltzmann samplers for planar graphs}\label{sec:comp_planar}
\begin{lemma}[planar graphs] Let $(x_0,y_0)$ be a singular point of $\mathcal{G}$. Then, the expected complexities of the Boltzmann samplers for $\mathcal{G}$, $\mathcal{G}'$ and $\mathcal{G}''$---described in Section~\ref{sec:planconn} and~\ref{sec:sampGp}---satisfy, as $x\to x_0$: \begin{eqnarray*} \Lambda \mathcal{G}(x,y_0)&=&O\ (1),\\ \Lambda \mathcal{G}'(x,y_0)&=&O\ (1),\\ \Lambda \mathcal{G}''(x,y_0)&=&O\ ((x_0-x)^{-1/2}). \end{eqnarray*} \end{lemma} \begin{proof} Recall that $\Gamma\mathcal{G}(x,y)$ is obtained from $\Gamma\mathcal{G}_1(x,y)$ using the identity $$ \mathcal{G}=\Set(\mathcal{G}_1), $$ hence $\Lambda \mathcal{G}(x,y_0)=G_1(x,y_0)\cdot\Lambda\mathcal{G}_1(x,y_0)$. When $x\to x_0$, $G_1(x,y_0)$ converges (because $\mathcal{G}_1$ is $5/2$-singular) and $\Lambda\mathcal{G}_1(x,y_0)$ is $O(1)$ (by Lemma~\ref{lem:comp_G1}). Hence $\Lambda \mathcal{G}(x,y_0)$ is $O(1)$.
Then, $\Gamma\mathcal{G}'(x,y)$ is obtained from $\Gamma\mathcal{G}_1\ \!\!\!'(x,y)$ and $\Gamma\mathcal{G}(x,y)$ using the identity $$ \mathcal{G}'=\mathcal{G}_1\ \!\!\!'\star\mathcal{G}. $$ Hence $\Lambda\mathcal{G}'(x,y_0)=\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\Lambda\mathcal{G}(x,y_0)$. When $x\to x_0$, $\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$ (by Lemma~\ref{lem:comp_G1p}) and $\Lambda\mathcal{G}(x,y_0)$ is $O(1)$, as proved above. Hence $\Lambda \mathcal{G}'(x,y_0)$ is $O(1)$.
Finally, $\Gamma\mathcal{G}''(x,y)$ is obtained from $\Gamma\mathcal{G}_1\ \!\!\!''(x,y)$, $\Gamma\mathcal{G}_1\ \!\!\!'(x,y)$, $\Gamma\mathcal{G}'(x,y)$, and $\Gamma\mathcal{G}(x,y)$ using the identity $$ \mathcal{G}''=\mathcal{G}_1\ \!\!\!''\star\mathcal{G}+\mathcal{G}_1\ \!\!\!'\star\mathcal{G}'. $$ Hence $$ \Lambda\mathcal{G}''(x,y_0)=1+\frac{a}{a+b}\left(\Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)+\Lambda\mathcal{G}(x,y_0)\right)+\frac{b}{a+b}\left(\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\Lambda\mathcal{G}'(x,y_0)\right), $$ where $a=G_1\ \!\!\!''(x,y_0)G(x,y_0)$ and $b=G_1\ \!\!\!'(x,y_0)G'(x,y_0)$. Thus $$ \Lambda\mathcal{G}''(x,y_0)\leq 1+\Lambda\mathcal{G}_1\ \!\!\!''(x,y_0)+\Lambda\mathcal{G}(x,y_0)+\Lambda\mathcal{G}_1\ \!\!\!'(x,y_0)+\Lambda\mathcal{G}'(x,y_0). $$ When $x\to x_0$, $\Lambda \mathcal{G}_1\ \!\!\!''(x,y_0)$ is $O((x_0-x)^{-1/2})$ (by Lemma~\ref{lem:comp_G1pp}), $\Lambda \mathcal{G}_1\ \!\!\!'(x,y_0)$ is $O(1)$ (by Lemma~\ref{lem:comp_G1p}), and $\Lambda \mathcal{G}'(x,y_0)$ and $\Lambda \mathcal{G}(x,y_0)$ are $O(1)$, as proved above. Hence $\Lambda \mathcal{G}''(x,y_0)$ is $O((x_0-x)^{-1/2})$, which concludes the proof. \end{proof} This concludes the proof of the expected complexities of our random samplers. (Recall that, thanks to Claim~\ref{claim:eq}, the proof has been reduced to proving
the asymptotic estimate $\Lambda\mathcal{G}''(x,y_0)=O((x_0-x)^{-1/2})$.)
\noindent\emph{Acknowledgements.} I am very grateful to Philippe Flajolet for his encouragements and for several
corrections and suggestions that led to a significant improvement of the presentation of the results. I greatly thank the anonymous referee for an extremely detailed and insightful report, which led to a major revision of an earlier version of the article. I have also enjoyed fruitful discussions with Gilles Schaeffer, Omer Gim\'enez and Marc Noy, in particular regarding the implementation of the algorithm.
\end{document}
\begin{document}
\title{Exact relations and links for two-dimensional thermoelectric composites} \tableofcontents
\section{Nonintroduction} This is a report of the massive multi-year effort by the author and two graduate students Huilin Chen and Sarah Childs to compute all exact relations and links for two-dimensional thermoelectric composites. The size of this report is due to the inclusion of all technical details of calculations, which are customarily omitted in journal articles. At the moment I have no time to prepare a proper ``archival quality'' manuscript with a good introduction and references. However, I believe that the results, concisely summarized in the last three sections of this report, should be made available to the research community even in this unfinished form.
\section{Equations of thermoelectricity} Thermoelectric properties of a material are described by the relations between the gradient $\nabla\mu$ of an electrochemical potential, temperature gradient $\nabla T$, current density $\Bj_{E}$ and entropy flux $\Bj_{S}$. The total energy $U=U(S,N)$ is a function of entropy and the number of charge carriers $N$. Therefore, the energy flux $\dot{U}$ is given by \[ \dot{U}=T\dot{S}+\mu\dot{N},\quad T=\dif{U}{S},\quad\mu=\dif{U}{N}, \] where $T$ is the absolute temperature and $\mu$ is the electrochemical potential. Thus, in a general heterogeneous medium we have \[ \Bj_{U}=T\Bj_{S}+\mu\Bj_{E}, \] where $\Bj_{U}$ is the total energy flux, $\Bj_{S}$ is the entropy flux and $\Bj_{E}$ is the electric current (charge carrier flux). The conservation of charge and energy laws are expressed by the equations \begin{equation}
\label{conslaws}
\nabla \cdot\Bj_{E}=0,\qquad \nabla \cdot\Bj_{U}=0. \end{equation} In addition to conservation laws we also postulate linear constitutive laws that relate the electric current and the entropy flux to the nonuniformity of electrochemical potential and temperature. In a thermoelectric material these two driving forces are coupled: \begin{equation}
\label{constitlaw}
\begin{cases}
\Bj_{E}=\BGs\nabla(-\mu)+\BGs\BS\nabla(-T),\\
\Bj_{S}=\BS^{T}\BGs\nabla(-\mu)+\BGg\nabla(-T)/T,
\end{cases}\qquad\BGs^{T}=\BGs,\quad\BGg^{T}=\BGg. \end{equation} The Onsager reciprocity relation is incorporated in the above constitutive laws. The form of the cross-property coupling tensors is chosen in such a way as to make the thermoelectric coupling laws more transparent. We will now show how the general equations (\ref{conslaws}), (\ref{constitlaw}) relate to the well-known thermoelectric effects.
\subsection{Seebeck effect and the figure of merit} The electrochemical potential $\mu$ is a sum of the electrostatic potential and a chemical potential. The latter depends only on the temperature and is therefore constant when the temperature is constant. In this case $\BE=\nabla(-\mu)$ is the electric field and the first equation in (\ref{constitlaw}) reads $\Bj_{E}=\BGs\BE$. Therefore, $\BGs$ has the physical meaning of the isothermal conductivity tensor. As such it must be represented by a symmetric, positive definite $3\times 3$ matrix. In the absence of electrical current ($\Bj_{E}=\Bzr$) the gradient of $-\mu$ has the meaning of the electromotive force generated by a temperature gradient. This is called the \emph{Seebeck effect}. From the first equation in (\ref{constitlaw}) we obtain \[ \Be_{\rm emf}=-\nabla\mu=\BS\nabla T. \] The $3\times 3$ matrix $\BS$ is called the Seebeck coefficient (tensor). In the literature the Seebeck coefficient is often assumed to be a scalar. However, we will see that, in general, a composite made of such materials will have an anisotropic Seebeck coefficient. Another a priori assumption is that $\BS$ is symmetric (see e.g. [lusi18]). We will again see that symmetry of $\BS$ is not preserved under homogenization.
The heat flux at zero electric current is characterized by the heat conductivity tensor $\BGk$, via $\Bj_{U}=-\BGk\nabla T$. This gives a formula for the tensor $\BGg$ in the constitutive equations in terms of the symmetric, positive definite heat conductivity tensor $\BGk$: \[ \BGk=\BGg-T\BS^{T}\BGs\BS. \]
Thus, imposing a temperature gradient on a thermoelectic material creates stored electrical energy with density \[ e_{\rm el}=\BGs\Be_{\rm emf}\cdot\Be_{\rm emf}=(\BS^{T}\BGs\BS\nabla T)\cdot(\nabla T). \] This phenomenon can be used to make a ``Seebeck generator'', converting heat flux (temperature differences) directly into electrical energy. The efficiency of Seebeck generator is called the \emph{figure of merit}.
A body not in thermal equilibrium can be used to produce mechanical work. However, not all thermal energy can be used. One of the physical interpretations of entropy is that it is a measure of the inaccessible portion of the total internal energy per degree of temperature. Thus, the density of this non-extractable thermal energy is the product of temperature and the entropy production density: \[ e_{\rm th}=T\nabla \cdot\Bj_{S}=\nabla \cdot(T\Bj_{S})-\Bj_{S}\cdot\nabla T= \nabla \cdot\Bj_{U}-\frac{\Bj_{U}}{T}\cdot\nabla T=\nth{T}(\BGk\nabla T)\cdot\nabla T. \] In a thermoelectric device we want to maximize the stored electrical energy while minimizing unusable thermal energy. The ratio $e_{\rm el}/e_{\rm th}$ is therefore a measure of efficiency of the thermoelectric \emph{device}, since the values of the energies depend on specific boundary conditions. If we want a \emph{material property} that is independent of the boundary conditions, we may define the figure of merit as follows: \[ Z=\sup_{\Bh}\frac{\BS^{T}\BGs\BS\Bh\cdot\Bh}{\BGk\Bh\cdot\Bh}. \] Thus, $Z$ is the largest eigenvalue of $\BGk^{-1}\BS^{T}\BGs\BS$.
In the isotropic case, where $\BGs$, $\BS$, and $\BGk$ are all constant multiples of the identity, we have $Z=S^{2}\sigma/\kappa$.
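The spectral characterization of $Z$ can be checked numerically; the tensors below are arbitrary illustrative values, not data for any particular material.

```python
import numpy as np

def figure_of_merit(sigma, S, kappa):
    """Largest eigenvalue of kappa^{-1} S^T sigma S, i.e. the supremum of the
    Rayleigh quotient (S^T sigma S h . h) / (kappa h . h) over h != 0."""
    M = np.linalg.solve(kappa, S.T @ sigma @ S)
    return max(np.linalg.eigvals(M).real)

# Isotropic case: sigma = sg*I, S = s*I, kappa = k*I  gives  Z = s^2 * sg / k.
sg, s, k = 5.0, 0.3, 2.0
I3 = np.eye(3)
Z_iso = figure_of_merit(sg * I3, s * I3, k * I3)
assert np.isclose(Z_iso, s**2 * sg / k)

# A made-up anisotropic example with symmetric positive definite sigma, kappa.
sigma = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
kappa = np.array([[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.5]])
S     = np.array([[0.2, 0.1, 0.0], [0.0, 0.3, 0.0], [0.0, 0.0, 0.1]])
Z = figure_of_merit(sigma, S, kappa)
assert Z > 0
```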
In summary, our assumptions on the possible values of the tensors $\BGs$, $\BS$ and $\BGg$ are equivalent to the symmetry and positive definiteness of the $6\times 6$ matrix \begin{equation}
\label{physL}
\mathsf{L}'=\mat{\BGs}{\BGs\BS}{\BS^{T}\BGs}{\BGk/T+\BS^{T}\BGs\BS}, \end{equation} that describes constitutive relation (\ref{constitlaw}) \[ \vect{\Bj_{E}}{\Bj_{S}}=\mathsf{L}'\vect{\nabla(-\mu)}{\nabla(-T)}. \]
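As a sanity check, one can assemble the $6\times 6$ matrix $\mathsf{L}'$ from illustrative tensors and verify that it is symmetric and positive definite whenever $\BGs$ and $\BGk$ are symmetric positive definite and $T>0$ ($\BS$ need not be symmetric); the numerical values below are placeholders.

```python
import numpy as np

def thermoelectric_tensor(sigma, S, kappa, T):
    """Assemble L' = [[sigma, sigma S], [S^T sigma, kappa/T + S^T sigma S]]."""
    top    = np.hstack([sigma,        sigma @ S])
    bottom = np.hstack([S.T @ sigma,  kappa / T + S.T @ sigma @ S])
    return np.vstack([top, bottom])

sigma = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
kappa = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.5]])
S     = np.array([[0.2, 0.1, 0.0], [0.0, 0.3, 0.1], [0.0, 0.0, 0.1]])  # not symmetric
T     = 300.0

Lp = thermoelectric_tensor(sigma, S, kappa, T)
assert np.allclose(Lp, Lp.T)               # symmetry of L'
assert np.all(np.linalg.eigvalsh(Lp) > 0)  # positive definiteness of L'
```

Positive definiteness follows from the identity $(u+\BS v)^{T}\BGs(u+\BS v)+v^{T}\BGk v/T>0$ for the quadratic form of $\mathsf{L}'$, which the eigenvalue check confirms numerically.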
\subsection{The Thomson and Peltier effects} In physically relevant variables we can write equations of thermoelectricity in the form of the following system \begin{equation}
\label{physeq}
\begin{cases}
\Bj_{E}=\BGs\nabla(-\mu)+\BGs\BS\nabla(-T),\\
\Bj_{Q}=T\BS^{T}\Bj_{E}+\BGk\nabla(-T),\\
\Bj_{U}=\Bj_{Q}+\mu\Bj_{E},\\
\nabla \cdot\Bj_{E}=\nabla \cdot\Bj_{U}=0,
\end{cases} \end{equation} where $\Bj_{Q}=T\Bj_{S}$ is the heat flux. In this form it is immediately apparent that adding a constant $c$ to the electrochemical potential $\mu$ does not change the flux $\Bj_{E}$, while it adds the constant multiple $c\Bj_{E}$ of $\Bj_{E}$ to $\Bj_{U}$. Since $\nabla \cdot\Bj_{E}=0$, adding a constant to the electrochemical potential $\mu$ gives another solution of the balance equations (\ref{physeq}). This observation will be useful later.
Let us write the conservation of energy law: \[ 0=\nabla \cdot\Bj_{U}=\nabla \cdot(\BGk\nabla(-T))+\nabla \cdot(T\BS^{T}\Bj_{E})+\nabla\mu\cdot\Bj_{E}, \] where we have used the conservation of charge law $\nabla \cdot\Bj_{E}=0$. From the first equation in (\ref{physeq}) we have \[ \nabla\mu=-\BGs^{-1}\Bj_{E}+\BS\nabla(-T), \] so that the conservation of energy has the form \[ 0=\nabla \cdot(\BGk\nabla(-T))+\nabla \cdot(T\BS^{T}\Bj_{E})-(\BGs^{-1}\Bj_{E})\cdot\Bj_{E}-(\BS\nabla T)\cdot\Bj_{E}. \] We can rewrite it as \begin{equation}
\label{enerbal}
\nabla \cdot(\BGk\nabla(-T))=(\BGs^{-1}\Bj_{E})\cdot\Bj_{E}-T\nabla \cdot(\BS^{T}\Bj_{E}). \end{equation} On the left we have the heat production density. On the right we have two heat sources: Joule heating, represented by the first term, and thermoelectric heating or cooling. This second term represents those thermoelectric effects that occur when the current flows through the thermoelectric material. The commonly encountered description of these effects assumes that the Seebeck tensor is scalar: $\BS=S\BI_{3}$. In that case the conservation of charge law allows us to simplify the second term on the right-hand side\ of (\ref{enerbal}): \[ \dot{Q}_{\rm thel}=-T\nabla \cdot(\BS^{T}\Bj_{E})=-T(\nabla S)\cdot\Bj_{E}. \] The \emph{Thomson effect} is related to the dependence of the Seebeck coefficient $S$ on $T$. In this case the additional thermoelectric heat production density is \[ \dot{Q}_{\rm thel}=-T\nabla S\cdot\Bj_{E}=-TS'(T)\nabla T\cdot\Bj_{E}=-{\mathcal K}\nabla T\cdot\Bj_{E}. \] The coefficient ${\mathcal K}=TS'(T)$ is called the Thomson coefficient. The \emph{Peltier effect} occurs at an isothermal junction $\Sigma$ of two different materials with different Seebeck coefficients. At every point $\Bs\in\Sigma$ \[ \dot{Q}_{\rm thel}=-T\nabla S\cdot\Bj_{E}=-T\jump{S}(\Bj_{E}\cdot\Bn)\delta_{\Bs}(\Bx)=-\jump{\Pi}(\Bj_{E}\cdot\Bn)\delta_{\Bs}(\Bx), \] where $\Pi=TS$ is called the Peltier coefficient and the normal charge flux $\Bj_{E}\cdot\Bn$ is continuous across the junction $\Sigma$. In general, when $\BS$ is not scalar we can rewrite the thermoelectric heat production term $\dot{Q}_{\rm thel}$ as follows \begin{equation}
\label{anisiS} \dot{Q}_{\rm thel}=-T(\nabla \cdot\BS)\cdot\Bj_{E}-T\av{\dev{\BS},\nabla\Bj_{E}}, \end{equation} where \[ \dev{\BS}=\BS-\nth{3}(\mathrm{Tr}\,\BS)\BI_{3} \] is the deviatoric part of $\BS$; here we have used the conservation of charge law $\nabla \cdot\Bj_{E}=0$ to drop the trace part of $\BS$ from the second term. The second term in (\ref{anisiS}) represents the thermoelectric effects of anisotropy, of which there seems to be no evidence in the literature.
In conclusion, aside from the completely undocumented anisotropic effects from (\ref{anisiS}), the most commonly described effects are due to either inhomogeneity (Peltier effect) or essential nonlinearity (Thomson's effect due to the dependence of $\BS$ on $T$). In what follows we will focus on the case of small perturbations \[ \mu=\mu_{0}+\epsilon\Tld{\mu},\qquad T=T_{0}+\epsilon\Tld{T}. \] As $\epsilon\to 0$ the equations of thermoelectricity become linear with respect to $\Tld{\mu}$ and $\Tld{T}$, with temperature-dependent coefficients set to their values corresponding to $T=T_{0}$. In what follows we use the notation $\mu$ and $T$ instead of $\Tld{\mu}$ and $\Tld{T}$.
\subsection{The canonical form of equations of thermoelectricity} Many physical phenomena and processes are described by systems of linear PDE (partial differential equations). A very large class of these have a common structure that I would like to emphasise. These phenomena deal with various properties of solid bodies (materials). For example, we may be interested in how materials respond to electromagnetic fields, heat or mechanical forces. In each of these cases we identify a pair of vector fields, defined at each point inside the material and taking values in appropriate vector spaces (different in each physical context). The first field in the pair describes what is being done to a material: applied deformation, or an electric field, or a temperature distribution, etc. The second describes how the material responds to the applied field, such as forces (stress) that arise in response to a deformation or an electrical current that arises in response to an applied electric field, etc. These physical vector fields obey fundamental laws of classical physics, such as conservation of energy, for example. These laws can be expressed as a system of linear differential equations, which combined with the constitutive laws give a full quantitative description of the respective phenomena.
The constitutive law is a linear relation between the two fields in a pair. The linear operator effecting this relation describes the material properties in question. If one adds information about how the disturbance is applied to the body (usually through a particular action on the boundary of the body), then one obtains a unique solution. To summarize, we will be looking at \begin{itemize} \item A pair of vector fields (we will call them $\BE(\Bx)$ and $\BJ(\Bx)$)
defined at each point $\Bx$ inside the body $\Omega$, with values
in some finite dimensional vector space, equipped with a physically natural
inner product; \item Systems of constant coefficient PDEs obeyed by $\BE(\Bx)$ and $\BJ(\Bx)$; \item A linear relation between $\BE(\Bx)$ and $\BJ(\Bx)$, written in operator
form $\BJ(\Bx)=\mathsf{L}(\Bx)\BE(\Bx)$, where the linear operator $\mathsf{L}(\Bx)$
describes material properties (that can be different at different points
$\Bx\in\Omega$). This operator is almost always symmetric and positive
definite. \end{itemize} We do not include boundary conditions in the above list because the answers to the questions that we are interested in do not depend on boundary conditions. We will now show how equations of thermoelectricity (\ref{physeq}) can be rewritten as a linear relation between a pair of curl-free fields $(\Be_{1},\Be_{2})$ and a pair of divergence-free fields $(\Bj_{1},\Bj_{2})$ \begin{equation}
\label{constrel}
\begin{cases}
\Bj_{1}=\BL_{11}\Be_{1}+\BL_{12}\Be_{2},\\ \Bj_{2}=\BL_{12}^{T}\Be_{1}+\BL_{22}\Be_{2}. \end{cases} \end{equation} Following Callen's textbook we define new potentials \[ \psi_{1}=\frac{\mu}{T},\qquad\psi_{2}=\nth{T}, \] and, denoting \[ \Be_{1}=\nabla\psi_{1},\quad\Be_{2}=\nabla\psi_{2},\quad\Bj_{1}=-\Bj_{E},\quad\Bj_{2}=\Bj_{U}, \] we obtain the form (\ref{constrel}), where \begin{equation}
\label{mathL}
\BL_{11}=T\BGs,\quad\BL_{12}=-T(\mu\BGs+T\BGs\BS),\quad \BL_{22}=T[\mu^{2}\BGs+T\BGg+T\mu(\BGs\BS+\BS^{T}\BGs)]. \end{equation} In general the coefficients $\BL_{ij}$ depend on the values of $T$ and $\mu$, and we are considering situations where these quantities change little and equations (\ref{constrel}) represent the linearization around the fixed values $T_{0}$ and $\mu_{0}$. We observe that the new material tensor \[ \mathsf{L}=T\mat{\BGs}{-\BGs(\mu+T\BS)}{-(\mu+T\BS)^{T}\BGs} {T\BGk+(\mu+T\BS)^{T}\BGs(\mu+T\BS)} \] is symmetric and positive definite if and only if $\mathsf{L}'$, given by (\ref{physL}), is symmetric and positive definite, i.e. if and only if $\BGs$ and $\BGk$ are symmetric and positive definite $3\times 3$ matrices. In full generality the equations of thermoelectricity are very nonlinear, especially in view of the fact that all physical property tensors $\BGs$, $\BGk$ and $\BS$ depend on temperature $T$. We will be working with the linearized version of the equations, in which both $\mu$ and $T$ vary only a little. Mathematically, we look at the leading order asymptotics of solutions $(\mu,T)$ of the form $\mu=\mu_{0}+\epsilon\Tld{\mu}$ and $T=T_{0}+\epsilon\Tld{T}$. We have already observed that the full thermoelectric system is invariant with respect to addition of a constant to the electrochemical potential $\mu$. Thus, modifying the potential $\psi_{1}$ \[ \psi_{1}\mapsto\frac{\mu-\mu_{0}}{T}, \] we can set $\mu_{0}=0$ without loss of generality. Hence, for linearized problems we can write \begin{equation}
\label{Lcanon}
\mathsf{L}=T_{0}^{2}\mat{\BGs/T_{0}}{-\BGs\BS}{-\BS^{T}\BGs}{\BGk+T_{0}\BS^{T}\BGs\BS}, \end{equation} where all physical property tensors $\BGs$, $\BGk$ and $\BS$ are evaluated at $T=T_{0}$---the working temperature. We note (for no particular reason other than curiosity) that \begin{equation}
\label{Lcanoninv}
\mathsf{L}^{-1}=\nth{T_{0}}\mat{\BGs^{-1}+T_{0}\BS\BGk^{-1}\BS^{T}}{\BS\BGk^{-1}}{\BGk^{-1}\BS^{T}}{\BGk^{-1}/T_{0}}. \end{equation}
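Formula (\ref{Lcanoninv}) is a routine Schur-complement computation: the Schur complement of the 11-block of (\ref{Lcanon}) collapses to $T_{0}^{2}\BGk$, which is what makes the inverse so compact. A numerical sketch with randomly generated stand-in tensors, assuming \texttt{numpy}:

```python
import numpy as np

rng = np.random.default_rng(1)
T0 = 3.0
A = rng.standard_normal((3, 3)); sigma = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); kappa = B @ B.T + 3 * np.eye(3)
S = rng.standard_normal((3, 3))

# L assembled from (Lcanon) as a 6x6 block matrix.
L = T0**2 * np.block([[sigma / T0,   -sigma @ S],
                      [-S.T @ sigma, kappa + T0 * S.T @ sigma @ S]])

# Predicted block inverse: the Schur complement of the 11-block collapses
# to T0^2 * kappa.
si, ki = np.linalg.inv(sigma), np.linalg.inv(kappa)
Linv = (1 / T0) * np.block([[si + T0 * S @ ki @ S.T, S @ ki],
                            [ki @ S.T,               ki / T0]])
assert np.allclose(L @ Linv, np.eye(6))
```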
Now, the vector fields $\BE=(\Be_{1},\Be_{2})$ and $\BJ=(\Bj_{1},\Bj_{2})$ take their values in the $2d$-dimensional vector space ${\mathcal T}=\bb{R}^{d}\oplus\bb{R}^{d}$, $d=2$ or 3. (It will be 2 in this paper.) The natural inner product on ${\mathcal T}$ is defined by \[ (\BE,\BE')_{{\mathcal T}}=\Be_{1}\cdot\Be'_{1}+\Be_{2}\cdot\Be'_{2}. \] The differential equations satisfied by $\BE$ and $\BJ$ are \begin{equation}
\label{PDE}
\nabla \times\Be_{1}=\nabla \times\Be_{2}=0,\qquad\nabla \cdot\Bj_{1}=\nabla \cdot\Bj_{2}=0. \end{equation} The material properties tensor $\mathsf{L}(\Bx)$ can therefore be written as a $2\times 2$ block matrix \begin{equation}
\label{BML}
\mathsf{L}(\Bx)=\mat{\BL_{11}(\Bx)}{\BL_{12}(\Bx)}{\BL_{12}^{T}(\Bx)}{\BL_{22}(\Bx)}, \end{equation} where $\BL_{11}$ and $\BL_{22}$ are symmetric (and positive definite) $d\times d$ matrices. The constitutive relation (\ref{constrel}) can then be written compactly as $\BJ=\mathsf{L}\BE$. From the block-components of $\mathsf{L}$ we can recover the physical tensors: \begin{equation}
\label{L2phys}
\BGs=\beta_{0}\BL_{11},\quad\BS=-\beta_{0}\BL_{11}^{-1}\BL_{12},\quad \BGk=\beta_{0}^{2}(\BL_{22}-\BL_{12}^{T}\BL_{11}^{-1}\BL_{12}),\quad\beta_{0}=\nth{T_{0}}. \end{equation} With these formulas the figure of merit form is \begin{equation}
\label{ZTL}
ZT=\max_{\Bh}\frac{\BL_{12}^{T}\BL_{11}^{-1}\BL_{12}\Bh\cdot\Bh}{(\BL_{22}-\BL_{12}^{T}\BL_{11}^{-1}\BL_{12})\Bh\cdot\Bh}=\frac{\lambda}{1-\lambda}, \end{equation} where $\lambda\in(0,1)$ is the largest eigenvalue of $\BL_{22}^{-1}\BL_{12}^{T}\BL_{11}^{-1}\BL_{12}$.
For isotropic materials $\mathsf{L}=\BL\otimes\BI_{3}$ and their figure of merit is \[ ZT=\frac{L_{12}^{2}}{\det\BL},\qquad\BL=\mat{L_{11}}{L_{12}}{L_{12}}{L_{22}}. \]
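The recovery formulas (\ref{L2phys}) and the figure of merit formula (\ref{ZTL}) can be cross-checked against the definitions; a sketch with randomly generated stand-ins, assuming \texttt{numpy}:

```python
import numpy as np

rng = np.random.default_rng(2)
T0 = 2.0
b0 = 1 / T0  # beta_0
A = rng.standard_normal((3, 3)); sigma = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); kappa = B @ B.T + 3 * np.eye(3)
S = rng.standard_normal((3, 3))

# Blocks of L from (Lcanon).
L11 = T0 * sigma
L12 = -T0**2 * sigma @ S
L22 = T0**2 * (kappa + T0 * S.T @ sigma @ S)

# Recovery of the physical tensors, formula (L2phys).
assert np.allclose(b0 * L11, sigma)
assert np.allclose(-b0 * np.linalg.solve(L11, L12), S)
assert np.allclose(b0**2 * (L22 - L12.T @ np.linalg.solve(L11, L12)), kappa)

# ZT from (ZTL) agrees with T0 * Z, Z being the figure of merit.
P = L12.T @ np.linalg.solve(L11, L12)
lam = np.linalg.eigvals(np.linalg.solve(L22, P)).real.max()
Z = np.linalg.eigvals(np.linalg.solve(kappa, S.T @ sigma @ S)).real.max()
assert np.isclose(lam / (1 - lam), T0 * Z)
```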
\section{Periodic composites} Let $Q=[0,1]^{d}$. It is the unit square when $d=2$ and the unit cube when $d=3$. Let us suppose that $Q$ is divided into two complementary subsets $A$ and $B$. We place one thermoelectric material in $A$ and another in $B$. If the corresponding tensors of material properties are denoted by $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$, then the function \[ \mathsf{L}(\Bx)=\mathsf{L}_{A}\chi_{A}(\Bx)+\mathsf{L}_{B}\chi_{B}(\Bx) \] describes this situation mathematically, since $\mathsf{L}(\Bx)=\mathsf{L}_{A}$ if and only if $\Bx\in A$ and $\mathsf{L}(\Bx)=\mathsf{L}_{B}$ if and only if $\Bx\in B$. Here $\chi_{S}(\Bx)$ is the characteristic function of a subset $S$, taking the value 1 when $\Bx\in S$ and the value 0 otherwise.
Now we are going to tile the entire space $\bb{R}^{d}$ with copies of the ``period cell'' $Q$, generating a $Q$-periodic function $\mathsf{L}_{\rm per}(\Bx)$, $\Bx\in\bb{R}^{d}$. Specifically, in order to find the value of $\mathsf{L}_{\rm per}(\Bx)$ at a specific point $\Bx\in\bb{R}^{d}$ we first find a vector $\Bz$ with integer components, such that $\Bx-\Bz\in Q$, and then define $\mathsf{L}_{\rm per}(\Bx)=\mathsf{L}(\Bx-\Bz)$. By construction, $\mathsf{L}_{\rm per}(\Bx_{1})=\mathsf{L}_{\rm per}(\Bx_{2})$ whenever $\Bx_{1}-\Bx_{2}$ has integer components.
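The periodic extension amounts to reducing $\Bx$ modulo 1 in each coordinate. A minimal sketch (with scalar stand-ins for $\mathsf{L}_{A}$, $\mathsf{L}_{B}$ and a hypothetical checkerboard cell; assuming \texttt{numpy}):

```python
import numpy as np

LA, LB = 1.0, 2.0  # scalar stand-ins for the tensors L_A and L_B

def L_cell(x):
    """L(x) on the period cell Q = [0,1)^2 for a hypothetical checkerboard A."""
    in_A = (x[0] < 0.5) == (x[1] < 0.5)  # chi_A as a characteristic function
    return LA if in_A else LB

def L_per(x):
    """Q-periodic extension: x mod 1 is the representative x - z in Q."""
    return L_cell(np.mod(x, 1.0))

# L_per(x1) == L_per(x2) whenever x1 - x2 has integer components.
assert L_per(np.array([0.25, 3.25])) == L_per(np.array([-1.75, 0.25])) == LA
assert L_per(np.array([0.75, 2.25])) == LB
```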
A periodic composite material would have such a structure on a \emph{microscopic level}. Mathematically, we choose $\epsilon>0$, representing a microscopic length scale, and define $\mathsf{L}_{\epsilon}(\Bx)=\mathsf{L}_{\rm per}(\Bx/\epsilon)$, restricting $\Bx$ to lie in a subset $\Omega\subset\bb{R}^{d}$ occupied by our composite. On a macroscopic level, such a composite will look as though it is a homogeneous thermoelectric material. Its thermoelectric tensor $\mathsf{L}^{*}$, called the effective tensor of the composite, is a complicated function not only of the tensors $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$ of its constituents, but also of the set $A$ ($B=Q\setminus A$). Specifically, if we keep $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$ fixed and change only the shape of $A$, then the effective tensor $\mathsf{L}^{*}$ will change as well. Understanding how $\mathsf{L}^{*}$ depends on the shape of $A$ is an important (and difficult) problem that could help design thermoelectric composites with desired properties. Even though there is a mathematical description of $\mathsf{L}^{*}$ as a function of $A$, it is complicated and we will not be needing or using this description.
\section{Exact relations} Let us recall that the thermoelectric tensor $\mathsf{L}$ is a $2\times 2$ block-matrix \begin{equation}
\label{blckM}
\mathsf{L}=\mat{\BL_{11}}{\BL_{12}}{\BL_{12}^{T}}{\BL_{22}}, \end{equation} where $\BL_{11}$ and $\BL_{22}$ are symmetric $d\times d$ matrices. Therefore, we are going to think of each such tensor as a point in an $N$-dimensional vector space, where $N=2d^{2}+d$.
Now, let us imagine that we have fixed two such points, representing tensors $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$ and we are making periodic composites with all possible subsets $A\subset Q$. For each choice of the set $A$ we get a point $\mathsf{L}^{*}$ in our $N$-dimensional vector space. The set of all such points corresponding to all possible subsets $A\subset Q$ is called the G-closure of the two-point set $\{\mathsf{L}_{A},\mathsf{L}_{B}\}$. Generically, this G-closure set will have a non-empty interior in the $N$-dimensional vector space of material tensors. However, there are special cases, all of which we want to describe, where the G-closure set is a submanifold of positive codimension. Equations describing such a submanifold are called exact relations. In the language of composite materials, these relations will be satisfied by \emph{all} composites, as long as they are made of materials that satisfy these equations.
\section{Polycrystals} A general thermoelectric tensor $\mathsf{L}$ is \emph{anisotropic}, i.e. its $N$ components will change when we rotate the material. Nevertheless, there are \emph{isotropic} materials, whose tensors are given by \begin{equation}
\label{Liso3d}
\mathsf{L}=\mat{\lambda_{11}\BI_{d}}{\lambda_{12}\BI_{d}}{\lambda_{12}\BI_{d}}{\lambda_{22}\BI_{d}}
=\BGL\otimes\BI_{d},\qquad\BGL=\mat{\lambda_{11}}{\lambda_{12}}{\lambda_{12}}{\lambda_{22}}, \end{equation} when $d=3$. When $d=2$ there is an additional isotropic tensor \begin{equation}
\label{Liso2d}
\mathsf{L}=\BGL\otimes\BI_{2}+\nu\BR_{\perp}\otimes\BR_{\perp},\qquad\BR_{\perp}=\mat{0}{-1}{1}{0}. \end{equation} However, if we think of the 2D case as just a special case of 3D, where fields do not change in one of the directions, then the isotropy (\ref{Liso2d}) can be exhibited by anisotropic thermoelectrics that are, for example, only transversely isotropic.
Operators (\ref{Liso3d}) are positive definite if and only if $\lambda_{11}>0$ and $\det\BGL>0$, while operators (\ref{Liso2d}) are positive definite if and only if $\lambda_{11}>0$ and $\det\BGL>\nu^{2}$.
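The two-dimensional criterion can be tested by realizing (\ref{Liso2d}) as a $4\times 4$ matrix via Kronecker products and computing its eigenvalues directly; a sketch assuming \texttt{numpy}:

```python
import numpy as np

I2 = np.eye(2)
R = np.array([[0.0, -1.0], [1.0, 0.0]])  # R_perp

def L_iso_2d(Lam, nu):
    # Lambda (x) I_2 + nu R_perp (x) R_perp realized as a 4x4 matrix
    return np.kron(Lam, I2) + nu * np.kron(R, R)

rng = np.random.default_rng(3)
for _ in range(500):
    l11, l12, l22, nu = rng.uniform(-2, 2, size=4)
    Lam = np.array([[l11, l12], [l12, l22]])
    # skip samples too close to the boundary of the claimed criterion
    if abs(l11) < 1e-3 or abs(np.linalg.det(Lam) - nu**2) < 1e-3:
        continue
    pd = np.linalg.eigvalsh(L_iso_2d(Lam, nu)).min() > 0
    assert pd == ((l11 > 0) and (np.linalg.det(Lam) > nu**2))
```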
If tensors $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$ are anisotropic, it means that in the composite described above we have to use these materials in one fixed orientation. This is very often impractical, and we will restrict our attention to polycrystals, where we are permitted to use each anisotropic material in any orientation, so that at different points we may have different orientations of the same material. There are a lot fewer exact relations and links for polycrystals, and they will be easier (not easy) to find.
\section{Exact relations for thermoelectricity} Recall that in space dimension $d$ the space ${\mathcal T}=\bb{R}^{d}\oplus\bb{R}^{d}$ is $2d$-dimensional and the space $\mathrm{Sym}({\mathcal T})$ of all symmetric operators on ${\mathcal T}$ is $N=2d(2d+1)/2=2d^{2}+d$ dimensional. A positive definite operator $\mathsf{L}\in\mathrm{Sym}({\mathcal T})$ will be thought of as a description of thermoelectric properties of a material via (\ref{constrel}), (\ref{BML}) and will be referred to as a \emph{tensor of material properties} or a \emph{thermoelectric tensor}. The set of all thermoelectric tensors, i.e. the set of all positive definite symmetric operators on ${\mathcal T}$ will be denoted $\mathrm{Sym}^{+}({\mathcal T})$. Our first task is to identify all \emph{exact
relations}---submanifolds $\bb{M}$ (think surfaces or curves in space) in $\mathrm{Sym}^{+}({\mathcal T})$, such that the thermoelectric tensor of any composite made with materials from $\bb{M}$ must necessarily be in $\bb{M}$. To be precise, we are only interested in polycrystalline exact relations $\bb{M}$ that have the additional property that $\BR\cdot\mathsf{L}\in\bb{M}$ for any $\mathsf{L}\in\bb{M}$ and for any rotation $\BR\in SO(d)$. In fact, the complete list of them is known for $d=3$. Our first goal will be to compute all polycrystalline exact relations when $d=2$. This is done by applying the general theory of exact relations, which states that every exact relation $\bb{M}$ corresponds to a peculiar algebraic object called a \emph{Jordan multialgebra}. Jordan algebras are very well studied objects in algebra. The prefix ``multi'' comes from the fact that in our case each Jordan algebra carries several Jordan multiplications, parametrized by a particular subspace ${\mathcal A}\subset\mathrm{Sym}({\mathcal T})$. \begin{definition}
We say that a subspace $\Pi\subset\mathrm{Sym}({\mathcal T})$ is a Jordan ${\mathcal A}$-multialgebra
if \[ \mathsf{K}_{1}*_{\mathsf{A}}\mathsf{K}_{2}=\hf(\mathsf{K}_{1}\mathsf{A}\mathsf{K}_{2}+\mathsf{K}_{2}\mathsf{A}\mathsf{K}_{1})\in\Pi,\quad \forall\mathsf{K}_{1},\mathsf{K}_{2}\in\Pi,\ \mathsf{A}\in{\mathcal A}. \] \end{definition} The subspace ${\mathcal A}$ of Jordan multiplications is defined by the formula \begin{equation}
\label{Adef}
{\mathcal A}=\mathrm{Span}\{\BGG_{0}(\Bn)-\BGG_{0}(\Bn_{0}):|\Bn|=1\}, \end{equation} where $\BGG_{0}(\Bn)$ is associated to an isotropic tensor $\mathsf{L}_{0}$, through which the exact relations manifold $\bb{M}$ passes, and is determined by the differential equations (\ref{PDE}) satisfied by the fields $\BE$ and $\BJ$, written in Fourier space \begin{equation}
\label{FPDE}
\BGx\times\Hat{\Be_{1}}=\BGx\times\Hat{\Be_{2}}=0,\qquad \BGx\cdot\Hat{\Bj_{1}}=\BGx\cdot\Hat{\Bj_{2}}=0. \end{equation} We view these equations as definitions of two subspaces ${\mathcal E}_{\BGx}$ and ${\mathcal J}_{\BGx}$ of pairs $(\Hat{\Be_{1}},\Hat{\Be_{2}})$ and $(\Hat{\Bj_{1}},\Hat{\Bj_{2}})$, respectively, regarding the Fourier wave vector $\BGx$ as fixed. Specifically, \[ {\mathcal E}_{\BGx}=\{(\gamma_{1}\BGx,\gamma_{2}\BGx):\gamma_{1}\in\bb{R},\ \gamma_{2}\in\bb{R}\},\qquad {\mathcal J}_{\BGx}=\{(\Bv_{1},\Bv_{2})\in\bb{R}^{d}\oplus\bb{R}^{d}:\BGx\cdot\Bv_{1}=\BGx\cdot\Bv_{2}=0\}. \] We observe that the vectors $\BGx$ and $c\BGx$, where $c\in\bb{R}\setminus\{0\}$, produce the same subspaces ${\mathcal E}_{c\BGx}={\mathcal E}_{\BGx}$ and ${\mathcal J}_{c\BGx}={\mathcal J}_{\BGx}$. Therefore, we only need to refer to subspaces ${\mathcal E}_{\Bn}$ and ${\mathcal J}_{\Bn}$ for unit vectors $\Bn$.
Now let $\mathsf{L}_{0}\in\mathrm{Sym}({\mathcal T})$ be isotropic (and positive definite), then we define \begin{equation}
\label{G0def}
\BGG_{0}(\Bn)=\mathsf{L}_{0}^{-1}\BGG'(\Bn), \end{equation} where $\BGG'(\Bn)$ is the projection onto $\mathsf{L}_{0}{\mathcal E}_{\Bn}$ along ${\mathcal J}_{\Bn}$.
In order to compute $\BGG_{0}(\Bn)$ we take an arbitrary vector $(\Bu_{1},\Bu_{2})\in{\mathcal T}$ and decompose it into the sum \[ (\Bu_{1},\Bu_{2})=\mathsf{L}_{0}\BE+\BJ,\qquad\BE\in{\mathcal E}_{\Bn},\quad\BJ\in{\mathcal J}_{\Bn}. \] Then $\mathsf{L}_{0}\BE=\BGG'(\Bn)(\Bu_{1},\Bu_{2})$ and therefore, \[ \BE=\mathsf{L}_{0}^{-1}\BGG'(\Bn)(\Bu_{1},\Bu_{2})=\BGG_{0}(\Bn)(\Bu_{1},\Bu_{2}). \] The vector $\BE\in{\mathcal E}_{\Bn}$ is uniquely determined by two scalars $\gamma_{1}$, $\gamma_{2}$: $\BE=(\gamma_{1}\Bn,\gamma_{2}\Bn)$, while $\BJ=(\Bj_{1},\Bj_{2})$ must satisfy \begin{equation}
\label{Geq}
\Bj_{1}\cdot\Bn=0,\qquad\Bj_{2}\cdot\Bn=0. \end{equation} Finding expressions for $(\Bj_{1},\Bj_{2})$ from \[ \BJ=(\Bu_{1},\Bu_{2})-\mathsf{L}_{0}(\gamma_{1}\Bn,\gamma_{2}\Bn), \] where $\mathsf{L}_{0}$ is given by (\ref{Liso3d}) or (\ref{Liso2d}), and substituting into (\ref{Geq}) we will obtain two linear equations for the two unknowns $\gamma_{1}$, $\gamma_{2}$. Solving this linear system we will obtain the explicit expressions for $\gamma_{1}$, $\gamma_{2}$ in terms of $\Bu_{1}$, $\Bu_{2}$, $\Bn$ and $\mathsf{L}_{0}$. The obtained expressions will be linear in $\Bu_{1}$, $\Bu_{2}$, permitting us to write the desired operator $\BGG_{0}(\Bn)$ in block-matrix form (\ref{BML}). The result is \begin{equation}
\label{G0}
\BGG_{0}(\Bn)=\BGL^{-1}\otimes(\tns{\Bn}). \end{equation} Formula (\ref{G0}) is valid in both cases $d=2$ and $d=3$. We can now use formula (\ref{G0}) in (\ref{Adef}) and obtain the explicit formula for the subspace ${\mathcal A}$: \begin{equation}
\label{Athel}
{\mathcal A}=\{\BGL^{-1}\otimes\BA:\BA^{T}=\BA,\mathrm{Tr}\,\BA=0\}. \end{equation} Our first task is to identify (explicitly) all SO(d)-invariant Jordan ${\mathcal A}$-multialgebras $\Pi\subset\mathrm{Sym}({\mathcal T})$. Once this is done, the theory of exact relations gives an explicit formula for the corresponding exact relation $\bb{M}$ \begin{equation}
\label{Mdef}
\bb{M}=\{\mathsf{L}\in\mathrm{Sym}^{+}({\mathcal T}): W_{\Bn}(\mathsf{L})\in\Pi\} \end{equation} for some unit vector $\Bn$, where \[ W_{\Bn}(\mathsf{L})=[(\mathsf{L}-\mathsf{L}_{0})^{-1}+\BGG_{0}(\Bn)]^{-1}. \] We emphasize that even though the transformations $W_{\Bn}$ are all different for different $\Bn$, the submanifold $\bb{M}$ in (\ref{Mdef}) does not depend on the choice of $\Bn$. In fact, we can also compute $\bb{M}$ using the transformation \[ W_{\mathsf{M}}(\mathsf{L})=[(\mathsf{L}-\mathsf{L}_{0})^{-1}+\mathsf{M}]^{-1}, \] where the ``inversion key'' $\mathsf{M}$ is found as the ``simplest'' isotropic tensor satisfying \begin{equation}
\label{invkeydef}
\mathsf{K}_{1}*_{\BGG_{0}(\Bn)-\mathsf{M}}\mathsf{K}_{2}\in\Pi,\quad\forall\mathsf{K}_{1},\mathsf{K}_{2}\in\Pi. \end{equation}
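Returning to formula (\ref{G0}): in the case $\mathsf{L}_{0}=\BGL\otimes\BI_{3}$ one can confirm numerically that $\BGG_{0}(\Bn)=\BGL^{-1}\otimes(\Bn\otimes\Bn)$ produces exactly the decomposition $(\Bu_{1},\Bu_{2})=\mathsf{L}_{0}\BE+\BJ$ used in its derivation. A sketch assuming \texttt{numpy}:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
Lam = np.array([[2.0, 0.3], [0.3, 1.5]])   # a positive definite Lambda
L0 = np.kron(Lam, np.eye(d))               # isotropic L0 = Lambda (x) I_d
n = rng.standard_normal(d); n /= np.linalg.norm(n)

G0 = np.kron(np.linalg.inv(Lam), np.outer(n, n))  # claimed Gamma_0(n)

u = rng.standard_normal(2 * d)             # an arbitrary (u1, u2) in T
E = G0 @ u                                 # E = Gamma_0(n)(u1, u2)
J = u - L0 @ E                             # the remainder should lie in J_n

# E lies in E_n: both components are multiples of n ...
assert np.allclose(np.cross(E[:d], n), 0) and np.allclose(np.cross(E[d:], n), 0)
# ... and J lies in J_n: both components are orthogonal to n.
assert abs(J[:d] @ n) < 1e-9 and abs(J[d:] @ n) < 1e-9
```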
At this point we note that the subspace ${\mathcal A}$ is different for different isotropic reference tensors $\mathsf{L}_{0}$ through which the exact relations manifolds are passing. In many cases, and in ours in particular, this technical complication can be eliminated by means of the ``covariance transformations''. The idea is to observe that for any invertible operators $\mathsf{B}$ and $\mathsf{C}$ on ${\mathcal T}$ we have \[ \mathsf{B}(\mathsf{K}_{1}*_{\mathsf{A}}\mathsf{K}_{2})\mathsf{C}=(\mathsf{B}\mathsf{K}_{1}\mathsf{C})*_{\mathsf{C}^{-1}\mathsf{A}\mathsf{B}^{-1}}(\mathsf{B}\mathsf{K}_{2}\mathsf{C}). \] It means that if $\Pi$ is a Jordan ${\mathcal A}$-multialgebra, then $\mathsf{B}\Pi\mathsf{C}$ is a Jordan $\mathsf{C}^{-1}{\mathcal A}\mathsf{B}^{-1}$-multialgebra. In order to preserve symmetry of operators and $SO(d)$-invariance of subspaces we have to set $\mathsf{B}=\mathsf{C}^{T}$ and use only isotropic operators $\mathsf{C}$. In the case of ${\mathcal A}$, given by (\ref{Athel}), we can use $\mathsf{C}=\BGL^{-1/2}\otimes\BI_{d}$, so that \[ {\mathcal A}_{0}=\mathsf{C}^{-1}{\mathcal A}\mathsf{C}^{-1}=\{\BI_{2}\otimes\BA:\BA^{T}=\BA,\mathrm{Tr}\,\BA=0\} \] is independent of $\mathsf{L}_{0}$. Now, if $\Pi_{0}$ is a Jordan ${\mathcal A}_{0}$-multialgebra then we compute a corresponding inversion key $\mathsf{M}_{0}$, which must be an isotropic tensor satisfying \begin{equation}
\label{invkeyeq}
\mathsf{K}_{1}*_{\BI_{2}\otimes\BI_{d}-d\mathsf{M}_{0}}\mathsf{K}_{2}\in\Pi_{0},\quad\forall\mathsf{K}_{1},\mathsf{K}_{2}\in\Pi_{0}. \end{equation} In particular, the choice \begin{equation}
\label{M0univ}
\mathsf{M}_{0}=\nth{d}\BI_{2}\otimes\BI_{d}=\nth{d}{\mathcal I}_{{\mathcal T}} \end{equation} satisfies (\ref{invkeyeq}). When $d=2$ we will also try two other simpler choices for $\mathsf{M}$: $\mathsf{M}=0$ and $\mathsf{M}=\hf\BI_{2}\otimes(\tns{\Be_{1}})$. Once the inversion key $\mathsf{M}_{0}$ is determined, the corresponding exact relation $\bb{M}$ will be computed using \[ \bb{M}=\{\mathsf{C}^{-1}\mathsf{L}\mathsf{C}^{-1}:\mathsf{L}\in\bb{M}_{0}\},\qquad \bb{M}_{0}=\{\mathsf{L}\in\mathrm{Sym}^{+}({\mathcal T}): W_{0}(\mathsf{L})\in\Pi_{0}\}, \] where $\mathsf{C}=\BGL^{-1/2}\otimes\BI_{d}$ and \[ W_{0}(\mathsf{L})=[(\mathsf{L}-\mathsf{L}_{0}^{0})^{-1}+\mathsf{M}_{0}]^{-1}, \] where \[ \mathsf{L}_{0}^{0}=\mathsf{C}\mathsf{L}_{0}\mathsf{C}= \begin{cases}
\BI_{2}\otimes\BI_{3},&d=3,\\
\BI_{2}\otimes\BI_{2}+\frac{\nu}{\sqrt{\det\BGL}}\tns{\BR_{\perp}},&d=2. \end{cases} \] In summary, our first task is to solve a (very nontrivial) problem of identifying all SO(2)-invariant subspaces $\Pi_{0}\subset\mathrm{Sym}(\bb{R}^{2}\oplus\bb{R}^{2})$ that are Jordan ${\mathcal A}_{0}$-multialgebras. Very often a difficult problem can be made easier by identifying its symmetries. In our case a symmetry is an SO(2)-invariant linear operator $\Phi:\mathrm{Sym}({\mathcal T})\to\mathrm{Sym}({\mathcal T})$, such that \begin{equation}
\label{autodef}
\Phi(\mathsf{K}\mathsf{A}\mathsf{K})=\Phi(\mathsf{K})\mathsf{A}\Phi(\mathsf{K}),\quad\forall\mathsf{K}\in\mathrm{Sym}({\mathcal T}),\ \mathsf{A}\in{\mathcal A}_{0}. \end{equation} Such a transformation will be called a global SO(2)-invariant Jordan ${\mathcal A}_{0}$-multialgebra automorphism of $\mathrm{Sym}({\mathcal T})$.
\section{SO(2)-invariant subspaces of $\mathrm{Sym}({\mathcal T})$} Our task of finding SO(2)-invariant Jordan ${\mathcal A}_{0}$-multialgebras will be significantly simplified by first identifying all SO(2)-invariant subspaces of $\mathrm{Sym}({\mathcal T})$, a standard task in the representation theory of compact Lie groups, which is particularly easy for the commutative ``circle group'' $SO(2)$. It is well known that all irreducible representations of $SO(2)$ are of complex type. Therefore, it will be convenient to identify the physical space $\bb{R}^{2}$ with complex numbers, so that\footnote{The image in $\bb{C}$ of a vector in
$\bb{R}^{2}$, denoted by a bold letter, is represented by the same letter in
normal font.} $\Bx=(x_{1},x_{2})\mapsto x=x_{1}+ix_{2}\in\bb{C}$. Then \begin{equation}
\label{TCn}
{\mathcal T}=\bb{R}^{2}\oplus\bb{R}^{2}\cong\bb{C}\oplus\bb{C}\cong\bb{C}^{2}, \end{equation} with the corresponding identification \[ {\mathcal T}\ni\vect{\Bu}{\Bv}\mapsto(u,v)\in\bb{C}^{2},\quad u=u_{1}+iu_{2},\ v=v_{1}+iv_{2}. \]
The utility of this isomorphism of $4$-dimensional real vector spaces (${\mathcal T}$ and $\bb{C}^{2}$) comes from the fact that the set $\bb{C}^{2}$ also has a structure of a complex vector space. In order to characterize all rotationally invariant subspaces in $\mathrm{Sym}({\mathcal T})$ we observe that rotations $\BR_{\theta}$ of $\bb{R}^{2}$ through the angle $\theta$ counterclockwise act on vectors $\Bu\in{\mathcal T}\cong\bb{C}^{2}$ by $\BR_{\theta}\cdot\Bu=e^{i\theta}\Bu$. Every real operator $\mathsf{K}$ on ${\mathcal T}$ can be described by two complex $2\times 2$ matrices $X$ and $Y$ via \begin{equation}
\label{Ku}
\mathsf{K}\Bu=X\Bu+Y\bra{\Bu},\qquad\Bu\in\bb{C}^{2}, \end{equation} where $\Bu$ on the left-hand side\ is an element of ${\mathcal T}$, while $\Bu$ on the right-hand side\ is its $\bb{C}^{2}$ representation. Henceforth, we will write $K(X,Y)$ to indicate this parametrization of $\mathrm{End}({\mathcal T})$.
We compute \[ \left(\vect{\Bu_{1}}{\Bu_{2}},\vect{\Bv_{1}}{\Bv_{2}}\right)_{{\mathcal T}}= \Bu_{1}\cdot\Bv_{1}+\Bu_{2}\cdot\Bv_{2}=\Re\mathfrak{e}(u_{1}\bra{v_{1}})+\Re\mathfrak{e}(u_{2}\bra{v_{2}})= \Re\mathfrak{e}(\Bu,\Bv)_{\bb{C}^{2}}. \] Next we compute \[ (K(X,Y)\Bu,\Bv)_{\bb{C}^{2}}=(X\Bu+Y\bra{\Bu},\Bv)_{\bb{C}^{2}}= (\Bu,X^{H}\Bv)_{\bb{C}^{2}}+\bra{(\Bu,\bra{Y^{H}}\bra{\Bv})_{\bb{C}^{2}}}, \] where $X^{H}=\bra{X}^{T}$ denotes Hermitian conjugation\footnote{We do not use
the standard notation $X^{\ast}$ to avoid confusion with our notation for
the effective tensor.}. Hence \[ (K(X,Y)\Bu,\Bv)_{{\mathcal T}}=\Re\mathfrak{e}(\Bu,X^{H}\Bv)_{\bb{C}^{2}}+\Re\mathfrak{e}(\Bu,\bra{Y^{H}}\bra{\Bv})_{\bb{C}^{2}}= \Re\mathfrak{e}(\Bu,X^{H}\Bv+Y^{T}\bra{\Bv})_{\bb{C}^{2}}=(\Bu,\mathsf{K}^{T}\Bv)_{{\mathcal T}}. \] This shows that $K(X,Y)^{T}=K(X^{H},Y^{T})$. It follows that $K(X,Y)\in\mathrm{Sym}({\mathcal T})$ if and only if $X$ is a complex Hermitian $2\times 2$ matrix ($X^{H}=X$) and $Y$ is a complex symmetric $2\times 2$ matrix ($Y^{T}=Y$).
Let us find the characterization of positive definiteness of $K(X,Y)\in\mathrm{Sym}({\mathcal T})$ in terms of complex matrices $X$ and $Y$. The first observation is that \[ (K(X,Y)\Bu,\Bu)_{{\mathcal T}}=\hf\left(\Hat{K}(X,Y)\vect{\Bu}{\bra{\Bu}}, \vect{\Bu}{\bra{\Bu}}\right)_{\bb{C}^{4}},\qquad\Hat{K}(X,Y)=\mat{X}{Y}{\bra{Y}}{\bra{X}}. \] We see that $\Hat{K}(X,Y)\in\mathfrak{H}(\bb{C}^{4})$. We now view $\bb{C}^{4}=\bb{C}^{2}\oplus\bb{C}^{2}$ as a real (8-dimensional) vector space with the standard inner product $(\BGx,\BGn)=\Re\mathfrak{e}(\BGx,\BGn)_{\bb{C}^{4}}$. It is then easy to check that $\bb{C}^{4}$ can be split into the orthogonal sum of subspaces $\bb{C}^{4}=S_{+}\oplus S_{-}$, \[ S_{\pm}=\left\{\vect{\Bu}{\pm\bra{\Bu}}:\Bu\in\bb{C}^{2}\right\}. \] Moreover, both $S_{+}$ and $S_{-}$ are invariant subspaces for $\Hat{K}(X,Y)$. The final observation is that $S_{-}=iS_{+}$. Now, the positive definiteness of $K(X,Y)$ is equivalent to the positive definiteness of $\Hat{K}(X,Y)$ on $S_{+}$. But $i\BGx\in S_{+}$ for any $\BGx\in S_{-}$, and therefore, \[ (\Hat{K}(X,Y)\BGx,\BGx)_{\bb{C}^{4}}=(\Hat{K}(X,Y)i\BGx,i\BGx)_{\bb{C}^{4}}>0. \] This implies that the positive definiteness of $\Hat{K}(X,Y)$ on $S_{+}$ is equivalent to the positive definiteness of $\Hat{K}(X,Y)$ on $\bb{C}^{4}$. In turn, the positive definiteness of $\Hat{K}(X,Y)$ on $\bb{C}^{4}$ is equivalent to \[ X>0,\qquad S_{X}=X-Y\bra{X}^{-1}\bra{Y}>0. \] We will see later that \[ K(X,Y)^{-1}=K(S_{X}^{-1},-S_{X}^{-1}Y\bra{X}^{-1}). \] In other words, positive definiteness of $K(X,Y)$ is equivalent to positive definiteness of the ``X-components'' of both $K(X,Y)$ and $K(X,Y)^{-1}$.
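Both the transposition rule $K(X,Y)^{T}=K(X^{H},Y^{T})$ and the inversion formula above can be spot-checked numerically by representing $K(X,Y)$ through its action $\Bu\mapsto X\Bu+Y\bra{\Bu}$ on $\bb{C}^{2}$; a sketch assuming \texttt{numpy}:

```python
import numpy as np

rng = np.random.default_rng(5)
def rc():  # a random complex 2x2 matrix
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def K(X, Y, u):
    """The real-linear operator K(X,Y): u -> X u + Y conj(u) on C^2."""
    return X @ u + Y @ np.conj(u)

u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Transposition rule K(X,Y)^T = K(X^H, Y^T) w.r.t. (u,v)_T = Re (u,v)_{C^2}.
Xg, Yg = rc(), rc()  # general X, Y, not necessarily Hermitian/symmetric
lhs = np.real(np.vdot(v, K(Xg, Yg, u)))             # (K(X,Y)u, v)_T
rhs = np.real(np.vdot(K(Xg.conj().T, Yg.T, v), u))  # (u, K(X^H,Y^T)v)_T
assert np.isclose(lhs, rhs)

# Inversion: K(X,Y)^{-1} = K(S_X^{-1}, -S_X^{-1} Y conj(X)^{-1}).
A, B = rc(), rc()
X = A @ A.conj().T + 3 * np.eye(2)  # Hermitian positive definite
Y = 0.2 * (B + B.T)                 # complex symmetric, small enough that S_X > 0
Sxi = np.linalg.inv(X - Y @ np.linalg.inv(np.conj(X)) @ np.conj(Y))
w = K(Sxi, -Sxi @ Y @ np.linalg.inv(np.conj(X)), K(X, Y, u))
assert np.allclose(w, u)
```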
We easily compute the action of rotations $\BR_{\theta}$ on $K(X,Y)$ from the ``distributive law'' \[ \BR_{\theta}\cdot(K(X,Y)\Bu)=(\BR_{\theta}\cdot K(X,Y))(\BR_{\theta}\cdot\Bu) \] and the formula $\BR_{\theta}\cdot\Bu=e^{i\theta}\Bu$: \[ e^{i\theta}(X\Bu+Y\bra{\Bu})=(\BR_{\theta}\cdot K(X,Y))e^{i\theta}\Bu. \] Denoting $e^{i\theta}\Bu$ be $\Bv$ and substituting $\Bu=e^{-i\theta}\Bv$ we obtain \[ (\BR_{\theta}\cdot K(X,Y))\Bv=X\Bv+e^{2i\theta}Y\bra{\Bv}, \] which means that \begin{equation}
\label{Raction}
\BR_{\theta}\cdot K(X,Y)=K(X,e^{2i\theta}Y). \end{equation} Therefore, if $\Pi$ is an SO(2)-invariant subspace of $\mathrm{Sym}({\mathcal T})$ then \[ \Pi={\mathcal L}(V,W){\buildrel\rm def\over=}\{K(X,Y):X\in V\subset\mathfrak{H}(\bb{C}^{2}),\ Y\in W\subset\mathrm{Sym}(\bb{C}^{2})\}, \]
where $V$ can be any subspace of $\mathfrak{H}(\bb{C}^{2})$---the set of all complex Hermitian $2\times 2$ matrices, regarded as a real vector space, and $W$ can be any subspace of $\mathrm{Sym}(\bb{C}^{2})$---the set of all complex symmetric $2\times 2$ matrices, regarded as a complex vector space. In this notation the subspace ${\mathcal A}_{0}$ corresponds to $V=\{0\}$ and $W=\{z\BI_{2}:z\in\bb{C}\}$: \[ {\mathcal A}_{0}=\{K(0,z\BI_{2}):z\in\bb{C}\}. \]
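Formula (\ref{Raction}) is a one-line identity to check numerically (a sketch assuming \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
theta = 0.7

def K(X, Y, u):
    return X @ u + Y @ np.conj(u)

# (R_theta . K)(u) = R_theta K(R_{-theta} u), with R_theta acting as e^{i theta}
lhs = np.exp(1j * theta) * K(X, Y, np.exp(-1j * theta) * u)
rhs = K(X, np.exp(2j * theta) * Y, u)  # K(X, e^{2 i theta} Y)
assert np.allclose(lhs, rhs)
```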
\section{SO(2)-invariant Jordan ${\mathcal A}_{0}$-multialgebras} Using definition (\ref{Ku}) of the action of an operator $\mathsf{K}$ we compute \begin{equation}
\label{multrule}
K(X_{1},Y_{1})K(X_{2},Y_{2})=K(X_{1}X_{2}+Y_{1}\bra{Y_{2}},X_{1}Y_{2}+Y_{1}\bra{X_{2}}). \end{equation} Using this multiplication rule we compute \[ K(X,Y)K(0,z\BI_{2})K(X,Y)=K(zX\bra{Y}+\bar{z}YX,zX\bra{X}+\bar{z}Y^{2}). \] This formula implies that a subspace $\Pi={\mathcal L}(V,W)$ is a Jordan ${\mathcal A}_{0}$-multialgebra if and only if \begin{equation}
\label{2dJMA} Y^{2}+XX^{T}\in W,\quad YX+XY^{H}\in V\text{ for all }X\in V,\ Y\in W.
\end{equation}
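Both the multiplication rule (\ref{multrule}) and the triple product $K(X,Y)K(0,z\BI_{2})K(X,Y)$ are purely algebraic identities and can be spot-checked on random matrices (the helper \texttt{K\_apply} is ours; no Hermitian or symmetric structure is needed for these two identities):

```python
import numpy as np

rng = np.random.default_rng(5)

def K_apply(X, Y, u):
    """K(X, Y) u = X u + Y conj(u), a real-linear operator on C^2."""
    return X @ u + Y @ np.conj(u)

def rand_c():
    return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

X1, Y1, X2, Y2 = rand_c(), rand_c(), rand_c(), rand_c()
z = 0.7 - 1.3j
Z2 = np.zeros((2, 2))
I2 = np.eye(2)

for _ in range(50):
    u = rng.normal(size=2) + 1j * rng.normal(size=2)
    # multiplication rule (multrule)
    lhs = K_apply(X1, Y1, K_apply(X2, Y2, u))
    rhs = K_apply(X1 @ X2 + Y1 @ np.conj(Y2), X1 @ Y2 + Y1 @ np.conj(X2), u)
    assert np.allclose(lhs, rhs)
    # triple product K(X,Y) K(0, z I) K(X,Y)
    lhs3 = K_apply(X1, Y1, K_apply(Z2, z * I2, K_apply(X1, Y1, u)))
    rhs3 = K_apply(z * X1 @ np.conj(Y1) + np.conj(z) * Y1 @ X1,
                   z * X1 @ np.conj(X1) + np.conj(z) * Y1 @ Y1, u)
    assert np.allclose(lhs3, rhs3)
print("multiplication rule and triple-product formula hold")
```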
The goal is therefore to find all solutions ${\mathcal L}(V,W)$ of
(\ref{2dJMA}). Equations (\ref{2dJMA}) suggest an obvious strategy. We first
identify all 0, 1, 2 and 3-dimensional complex subspaces
$W\subset\mathrm{Sym}(\bb{C}^{2})$ satisfying $Y^{2}\in W$ for all $Y\in W$. Then,
for each such $W$ we will look for 0, 1, 2, 3 and 4-dimensional subspaces
$V\subset\mathfrak{H}(\bb{C}^{2})$, the space of complex Hermitian $2\times
2$ matrices. However, before we begin it will be helpful to identify all
symmetries of (\ref{2dJMA}), i.e. all global SO(2)-invariant Jordan
${\mathcal A}_{0}$-multialgebra automorphisms.
\section{Global SO(2)-invariant Jordan ${\mathcal A}_{0}$-multialgebra automorphisms} Let $\Phi:\mathrm{Sym}({\mathcal T})\to\mathrm{Sym}({\mathcal T})$ be SO(2)-invariant. Any such linear map must have the form \[ \Phi(K(X,Y))=K(\Phi_{11}(X)+\Phi_{12}(Y),\Phi_{21}(X)+\Phi_{22}(Y)), \] where $\Phi_{ij}$ are real linear maps between the appropriate spaces. Then the ``distributive law'' for rotations says \[ \BR_{\theta}\cdot\Phi(K(X,Y))=\Phi(\BR_{\theta}\cdot K(X,Y)). \] Using formula (\ref{Raction}) we obtain \[ K(\Phi_{11}(X)+\Phi_{12}(Y),e^{2i\theta}\Phi_{21}(X)+e^{2i\theta}\Phi_{22}(Y))= K(\Phi_{11}(X)+\Phi_{12}(e^{2i\theta}Y),\Phi_{21}(X)+\Phi_{22}(e^{2i\theta}Y)). \] It follows that \[ \Phi_{12}(Y)=\Phi_{12}(e^{2i\theta}Y),\quad e^{2i\theta}\Phi_{21}(X)=\Phi_{21}(X),\quad e^{2i\theta}\Phi_{22}(Y)=\Phi_{22}(e^{2i\theta}Y). \] The first two equations imply that $\Phi_{12}=0$ and $\Phi_{21}=0$, while the third equation implies that $\Phi_{22}$ is a complex-linear map on $\mathrm{Sym}(\bb{C}^{2})$. Thus, any SO(2)-invariant linear automorphism $\Phi$ of $\mathrm{Sym}({\mathcal T})$ can be written as \[ \Phi(K(X,Y))=K(\Phi_{0}(X),\Phi_{2}(Y)), \] where $\Phi_{0}$ is a real-linear automorphism of $\mathfrak{H}(\bb{C}^{2})$ and $\Phi_{2}$ is a complex-linear automorphism of $\mathrm{Sym}(\bb{C}^{2})$.
Let us now assume that $\Phi$ is also a Jordan ${\mathcal A}_{0}$-multialgebra automorphism. In that case the maps $\Phi_{0}$ and $\Phi_{2}$ must satisfy, additionally, \begin{equation}
\label{PhiXX} \Phi_{2}(XX^{T})=\Phi_{0}(X)\Phi_{0}(X)^{T},\qquad\Phi_{2}(Y^{2})=\Phi_{2}(Y)^{2}, \end{equation} \begin{equation}
\label{PhiXY} \Phi_{0}(X\bra{Y}+YX)=\Phi_{0}(X)\bra{\Phi_{2}(Y)}+\Phi_{2}(Y)\Phi_{0}(X). \end{equation} Our task is to determine all maps $\Phi_{0}$ and $\Phi_{2}$, satisfying (\ref{PhiXX}), (\ref{PhiXY}).
Observe that $\Phi_{2}(Y^{2})=\Phi_{2}(Y)^{2}$ means $\Phi_{2}$ maps projections (idempotents) into projections. Conversely, if $\Phi_{2}(Y)$ is a projection for some $Y\in\mathrm{Sym}(\bb{C}^{2})$, then $\Phi_{2}(Y^{2})=\Phi_{2}(Y)$, which implies that $Y^{2}=Y$, since $\Phi_{2}$ is a bijection. Every non-zero idempotent in $\mathrm{Sym}(\bb{C}^{2})$ is either $I_{2}$ or $\Ba\otimes\Ba$, where $\Ba\cdot\Ba=1$. Since $\Phi_{2}$ is a bijection it must map all idempotents of the form $\Ba\otimes\Ba$, except possibly one, into idempotents of the same form. Then the map \[ \Ba\mapsto\mathrm{Tr}\,\Phi_{2}(\Ba\otimes\Ba) \] is continuous and has constant value 1 on almost all $\Ba$. Hence, by continuity it must have value 1 on all $\Ba$. This implies that $\Phi_{2}(I_{2})=I_{2}$ and $\Phi_{2}(\Ba\otimes\Ba)=\tns{C\Ba}$ for some complex linear map $C$ that has the property $C\Ba\cdot C\Ba=\Ba\cdot\Ba=1$. Thus, $\Phi_{2}(Y)=CYC^{T}$, where $C\in O(2,\bb{C})=\{C\in\mathrm{End}_{\bb{C}}(\bb{C}^{2}):CC^{T}=I_{2}\}$.
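Two facts used here---that $\Ba\otimes\Ba$ is idempotent precisely when $\Ba\cdot\Ba=1$, and that such idempotents have trace 1---can be confirmed on the complex circle parametrized by $\Ba=(\cos t,\sin t)$, $t\in\bb{C}$ (a sketch; this parametrization covers almost all such $\Ba$):

```python
import numpy as np

rng = np.random.default_rng(3)

for _ in range(100):
    t = rng.normal() + 1j * rng.normal()
    a = np.array([np.cos(t), np.sin(t)])      # a . a = cos^2 t + sin^2 t = 1
    assert np.isclose(a @ a, 1.0)
    P = np.outer(a, a)                        # a (x) a: complex symmetric, rank one
    assert np.allclose(P, P.T)
    assert np.allclose(P @ P, P)              # idempotent, since P^2 = (a . a) P
    assert np.isclose(np.trace(P), 1.0)       # trace 1, as used in the argument
print("a (x) a with a . a = 1 is a trace-one idempotent in Sym(C^2)")
```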
Now we need to compute the map $\Phi_{0}$. We start by determining all possible values of $\Phi_{0}(I_{2})$. To this end we take $X=I_{2}$ in the first equation in (\ref{PhiXX}). Then $\Phi_{0}(I_{2})\Phi_{0}(I_{2})^{T}=I_{2}$. Hence, $\Phi_{0}(I_{2})\in O(2,\bb{C})\cap\mathfrak{H}(\bb{C}^{2})$. Let us take $X=I_{2}$, $Y=iS$, $S\in\mathrm{Sym}(\bb{R}^{2})$ in (\ref{PhiXY}). Then \[ 0=-i\Phi_{0}(I)\bra{C}S\bra{C}^{T}+iCSC^{T}\Phi_{0}(I). \] Using the fact that $C\in O(2,\bb{C})$ we obtain that $C^{T}\Phi_{0}(I)\bra{C}$ commutes with every $S\in\mathrm{Sym}(\bb{R}^{2})$. This quickly leads, via $\Phi_{0}(I)\Phi_{0}(I)^{T}=I$, to $\Phi_{0}(I)=\pm CC^{H}$. Notice that if $\Phi_{0}$ satisfies our equations, then so does $-\Phi_{0}$. Hence, without loss of generality\ we assume that $\Phi_{0}(I)=CC^{H}$.
Next we determine $\Phi_{0}$ on real symmetric matrices. Taking $X=I$, $Y=S\in\mathrm{Sym}(\bb{R}^{2})$ in (\ref{PhiXY}) we then obtain $\Phi_{0}(S)=CSC^{H}$. It remains to figure out the value of $\Phi_{0}$ on $i\BR_{\perp}$.
Now we take $X=S_{1}$, $Y=iS_{2}$ in (\ref{PhiXY}), where $\{S_{1},S_{2}\}\subset\mathrm{Sym}(\bb{R}^{2})$. Then \[ \Phi_{0}(i[S_{2},S_{1}])=-iCS_{1}C^{H}\bra{C}S_{2}C^{H}+iCS_{2}C^{T}CS_{1}C^{H}= iC[S_{2},S_{1}]C^{H}. \] Hence, $\Phi_{0}(X)=CXC^{H}$ for all $X\in\mathfrak{H}(\bb{C}^{2})$. Thus the set of all $SO(2)$ Jordan ${\mathcal A}_{0}$-multialgebra automorphisms is given by \begin{equation}
\label{Jautoex} \Phi(K(X,Y))=K(\pm CXC^{H},CYC^{T}),\qquad C\in O(2,\bb{C}). \end{equation} Finally every $C\in O(2,\bb{C})$ has a representation \[ C=C_{+}=\mat{\cos z}{\sin z}{-\sin z}{\cos z},\text{ or } C=C_{-}=\mat{\cos z}{\sin z}{\sin z}{-\cos z},\qquad z\in\bb{C}. \]
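A direct numerical check that maps of the form (\ref{Jautoex}) satisfy (\ref{PhiXX}) and (\ref{PhiXY}), using the parametrization $C_{+}$ with a complex angle (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def C_plus(z):
    """The matrices C_+ = [[cos z, sin z], [-sin z, cos z]] with complex z."""
    return np.array([[np.cos(z), np.sin(z)], [-np.sin(z), np.cos(z)]])

C = C_plus(0.4 + 0.9j)
assert np.allclose(C @ C.T, np.eye(2))            # C C^T = I, i.e. C in O(2, C)

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
X, Y = A + A.conj().T, B + B.T                     # Hermitian X, symmetric Y

def Phi0(M):
    return C @ M @ C.conj().T                      # Phi_0(X) = C X C^H

def Phi2(M):
    return C @ M @ C.T                             # Phi_2(Y) = C Y C^T

# equations (PhiXX)
assert np.allclose(Phi2(X @ X.T), Phi0(X) @ Phi0(X).T)
assert np.allclose(Phi2(Y @ Y), Phi2(Y) @ Phi2(Y))
# equation (PhiXY)
assert np.allclose(Phi0(X @ np.conj(Y) + Y @ X),
                   Phi0(X) @ np.conj(Phi2(Y)) + Phi2(Y) @ Phi0(X))
print("Phi(K(X,Y)) = K(C X C^H, C Y C^T) satisfies (PhiXX) and (PhiXY)")
```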
In fact, there is a general theorem that guarantees that the set of all $SO(d)$-invariant Jordan multialgebra automorphisms has the form $\Phi(\mathsf{X})=\mathsf{C}\mathsf{X}\mathsf{C}^{T}$ for any isotropic $\mathsf{C}$ preserving $\mathsf{A}$, $\mathsf{C}\Tld{\BGG}_{0}\mathsf{C}^{T}=\Tld{\BGG}_{0}$. In addition to these there could be additional automorphisms of the form $\Phi(\mathsf{X})=-\mathsf{C}\mathsf{X}\mathsf{C}^{T}$ for those $\mathsf{C}$ for which $\mathsf{C}\Tld{\BGG}_{0}\mathsf{C}^{T}=-\Tld{\BGG}_{0}$ (which can only happen when $\Tld{\BGG}_{0}$ has the same number of positive and negative eigenvalues). From this and (\ref{multrule}) it is easy to get the general form obtained above (using the fact that any isotropic tensor $\mathsf{C}$ has the form $K(C,0)$).
\section{Describing all Jordan ${\mathcal A}_{0}$-multialgebras} \label{sec:JMA} A complex subspace $W\subset\mathrm{Sym}(\bb{C}^{2})$ can have dimension 0, 1, 2 or 3. \begin{itemize} \item $\dim W=0$. Then $W=\{0\}$ and $V$ may contain only those $X$ for
which $XX^{T}=0$. If $X\not=0$, then $X$ is rank 1 and Hermitian. Thus,
$X=\Ba\otimes\bra{\Ba}$ for some $\Ba\in\bb{C}^{2}$, satisfying
$\Ba\cdot\Ba=0$, i.e. $a_{1}^{2}=-a_{2}^{2}$, which is equivalent to
$a_{2}=\pm ia_{1}$. Thus, all such $X$ must be real
multiples of one of the following two matrices \[ X_{1}=\mat{1}{i}{-i}{1},\qquad X_{2}=\bra{X_{1}}. \] Hence, either $V=\{0\}$ or $V=\bb{R}X_{j}$ for some $j=1,2$. These two are isomorphic by $C=\mat{0}{1}{1}{0}\in O(2,\bb{C})$, which maps $W$ into
itself and interchanges $X_{1}$ and $X_{2}$. \item $\dim W=1$. If $W$ contains an invertible matrix $Y$, then the
Cayley-Hamilton theorem implies that $W$ contains $\BI_{2}$, since \[ \BI_{2}=\frac{\mathrm{Tr}\,(Y)Y-Y^{2}}{\det(Y)}\in W. \] Thus we have two possibilities \begin{itemize} \item $W=\bb{C}\BI_{2}$. In that case $V$ must contain only
matrices $X$ such that $XX^{T}=\lambda\BI_{2}$ for some $\lambda\in\bb{C}$. We compute that all
such matrices $X$ must have one of two possible forms \[ F_{1}(\alpha,\beta)=\mat{\alpha}{i\beta}{-i\beta}{\alpha},\quad F_{2}(\alpha,\beta)=\mat{\alpha}{\beta}{\beta}{-\alpha},\qquad \{\alpha,\beta\}\subset\bb{R}. \] Then $V$ is either $\{0\}$ or $V_{1}=\{F_{1}(\alpha,\beta):\{\alpha,\beta\}\subset\bb{R}\}$, $V_{2}=\{F_{2}(\alpha,\beta):\{\alpha,\beta\}\subset\bb{R}\}$, or any 1D subspace of $V_{1}$ or $V_{2}$. \item $W=\bb{C}\tns{\Ba}$ for some $\Ba\in\bb{C}^{2}$. Then $V$ must contain only
matrices $X$ such that $XX^{T}=\lambda\tns{\Ba}$. In particular, $X$ must be
rank-1 (if it is non-zero). Thus, $V$ is either $\{0\}$ or
$\bb{R}\Ba\otimes\bra{\Ba}$. In the latter case $\Pi$ is the annihilator of
the 1D complex subspace $U=\bb{C}\Ba^{\perp}$ of $\bb{C}^{2}$,
where \[ \Ba^{\perp}=\BR_{\perp}\Ba=(-a_{2},a_{1}). \] \end{itemize} \item $\dim W=2$. Then $W$ must contain an invertible matrix. Indeed, the
set of non-zero complex singular $2\times 2$ matrices is a 2D complex
manifold and is not a subspace. Hence any 2D subspace cannot be contained in
it. By Cayley-Hamilton $\BI_{2}\in W$. Let $W=\mathrm{Span}_{\bb{C}}\{\BI_{2},A\}$ for some
$A\in\mathrm{Sym}(\bb{C}^{2})\setminus\{\lambda\BI_{2}\}$. Observe that without loss of generality, we may assume that
$A=\tns{\Ba}$ for some $\Ba\in\bb{C}^{2}$ (if $\lambda\in\bb{C}$ is an
eigenvalue of $A$, then $A-\lambda\BI_{2}\in W$ has rank 1). So, \[ W=W_{\Ba}=\mathrm{Span}\{\BI_{2},\tns{\Ba}\}, \] which obviously satisfies $Y^{2}\in W_{\Ba}$ for every $Y\in W_{\Ba}$. We can apply the global automorphism $\Phi$ to the Jordan multialgebra $\Pi$ and reduce $W$ to the algebra of complex $2\times 2$ diagonal matrices if $\Ba\cdot\Ba\not=0$, or to \[ W=\left\{\mat{\alpha-\beta}{\pm i\beta}{\pm i\beta}{\alpha+\beta}:\{\alpha,\beta\}\subset\bb{C}\right\}. \]
In order to compute all possible subspaces $V$, we need to consider the two cases above separately.
\begin{itemize}
\item $W$ is the algebra of complex $2\times 2$ diagonal matrices.
\begin{itemize}
\item $\dim V=0$, $V=\{0\}$
\item $\dim V=1$. Then $V=\bb{R}H_{0}$ for some
$H_{0}\in\mathfrak{H}(\bb{C}^{2})$. Condition $H_{0}H_{0}^{T}\in W$
results in 4 possibilities for $H_{0}$:
\begin{equation}
\label{Hsqdiag}
\mat{h_{1}}{0}{0}{h_{2}},\quad\mat{h}{i\alpha}{-i\alpha}{h},\quad \mat{h}{\alpha}{\alpha}{-h},\quad\mat{0}{a}{\bra{a}}{0}.
\end{equation} If both components $H_{11}$ and $H_{22}$ are nonzero, or $H_{12}\not=0$, then $\{YH_{0}+H_{0}\bra{Y}:Y\in W\}\subset V$ will be at least two-dimensional. Hence, there are two
possibilities: $H_{0}=\tns{\Be_{1}}$ and $H_{0}=\tns{\Be_{2}}$. These two
are isomorphic by $C=\mat{0}{1}{1}{0}\in O(2,\bb{C})$, which maps $W$ into
itself and maps $\tns{\Be_{1}}$ into $\tns{\Be_{2}}$.
\item $\dim V\ge 2$. We notice that the set of all
$H\in\mathfrak{H}(\bb{C}^{2})$, such that $HH^{T}$ is diagonal is the
union of 4 two-dimensional vector spaces (\ref{Hsqdiag}). Thus, there are
no solutions $V$ with dimension greater than 2, while solutions $V$ with
$\dim V=2$ must be one of the 4 spaces in (\ref{Hsqdiag}). We only need to
check which of the 4 subspaces $V$ in (\ref{Hsqdiag}) have the property
$\{YX+X\bra{Y}:Y\in W\}\subset V$ for any $X\in V$. It is easy to verify
that only the first and the fourth ones have that property. Hence, for $\dim V=2$ we have the following choices: \begin{itemize} \item[(a)] $V=\left\{\mat{\alpha}{0}{0}{\beta}:\{\alpha,\beta\}\subset\bb{R}\right\}$ \item[(b)] $V=\left\{\mat{0}{\bra{a}}{a}{0}:a\in\bb{C}\right\}$ \end{itemize} \end{itemize} \item $\displaystyle W=\left\{\mat{\alpha-\beta}{\pm i\beta}{\pm i\beta}{\alpha+\beta}:\{\alpha,\beta\}\subset\bb{C}\right\}. $\\ We note that using $C=\mat{1}{0}{0}{-1}\in O(2,\bb{C})$ we can transform the ``$-$'' sign above into the ``$+$'' sign. So, without loss of generality, \[ W=\left\{\mat{\alpha-\beta}{i\beta}{i\beta}{\alpha+\beta}:\{\alpha,\beta\}\subset\bb{C}\right\}. \]
In this case the Maple worksheet shows that
\begin{itemize}
\item $\dim V=0$, $V=\{0\}$ \item $\dim V=1$, $V=\bb{R}\mat{1}{i}{-i}{1}$ \item $\dim V=2$. There is a 1-parameter family of 2D subspaces permuted by the
automorphism $\Phi$: \[ V_{t}=\left\{\mat{x}{t(x-y)+i\frac{x+y}{2}}{t(x-y)-i\frac{x+y}{2}}{y}:\{x,y\}\subset\bb{R}\right\}, \] together with \[ V_{\infty}=\left\{\mat{y}{x+iy}{x-iy}{y}:\{x,y\}\subset\bb{R}\right\}, \] which we select to be the representative of the entire family. \item $\dim V=3$, \[ V=\left\{X\in\mathfrak{H}(\bb{C}^{2}): \mathrm{Tr}\, X=2\mathfrak{Im}(X_{12})\right\}.
\] is the only 3D solution.
\end{itemize} \end{itemize} \item $\dim W=3$. Then $W=\mathrm{Sym}(\bb{C}^{2})$.\\ Let us assume that there is a non-zero $X\in V$. Then for any $\{Y_{1},Y_{2}\}\subset W$ we have $X'=Y_{1}X+XY_{1}^{H}\in V$ and therefore, $Y_{2}X'+X'Y_{2}^{H}\in V$. We compute \[ Y_{2}X'+X'Y_{2}^{H}= Y_{2}Y_{1}X+X(Y_{2}Y_{1})^{H}+Y_{2}XY_{1}^{H}+Y_{1}XY_{2}^{H}\in V. \] Switching $Y_{1}$ and $Y_{2}$ we also obtain \[ Y_{1}Y_{2}X+X(Y_{1}Y_{2})^{H}+Y_{1}XY_{2}^{H}+Y_{2}XY_{1}^{H}\in V. \] Adding the two expressions we obtain \[ (Y_{1}Y_{2}+Y_{2}Y_{1})X+X(Y_{1}Y_{2}+Y_{2}Y_{1})^{H}+2Y_{1}XY_{2}^{H}+2Y_{2}XY_{1}^{H}\in V. \] But $Y_{1}Y_{2}+Y_{2}Y_{1}\in W$ for all $\{Y_{1},Y_{2}\}\subset W$ and therefore, \[ Y_{1}XY_{2}^{H}+Y_{2}XY_{1}^{H}\in V \] for every $\{Y_{1},Y_{2}\}\subset W$.
Now let $Z\in \mathfrak{H}(\bb{C}^{2})$ be orthogonal to $V$. Then for every $\{Y_{1},Y_{2}\}\subset W$ we must have $\av{Y_{1}XY_{2}^{H}+Y_{2}XY_{1}^{H},Z}=0$. We compute \[ 0=\av{Y_{1}XY_{2}^{H}+Y_{2}XY_{1}^{H},Z}=\av{Y_{1},ZY_{2}X}+\av{Y_{1}^{H},XY_{2}^{H}Z}. \] Restricting to $Y_{1}\in \mathrm{Sym}(\bb{R}^{2})$ we obtain \[ \av{Y_{1},ZY_{2}X+XY_{2}^{H}Z}=0. \] The matrix $M=ZY_{2}X+XY_{2}^{H}Z$ is self-adjoint and therefore has the form $M_{1}+iM_{2}$, where $M_{1}\in\mathrm{Sym}(\bb{R}^{2})$, $M_{2}\in\mathrm{Skew}(\bb{R}^{2})$. Thus, $\av{Y_{1},M_{1}}=0$ for every $Y_{1}\in \mathrm{Sym}(\bb{R}^{2})$. Hence, $M_{1}=0$ and we conclude that for every $Y_{2}\in \mathrm{Sym}(\bb{C}^{2})$ \[ ZY_{2}X+XY_{2}^{H}Z=\mat{0}{-i\beta}{i\beta}{0} \] for some $\beta=\beta(Y_{2})\in\bb{R}$. We repeat the same argument, now restricting $Y_{1}$ to be of the form $Y_{1}=iY_{0}$, $Y_{0}\in \mathrm{Sym}(\bb{R}^{2})$. In that case we obtain \[ \av{Y_{0},ZY_{2}X-XY_{2}^{H}Z}=0,\quad\forall Y_{0}\in \mathrm{Sym}(\bb{R}^{2}). \] In this case the matrix $M=ZY_{2}X-XY_{2}^{H}Z$ is skew-adjoint and therefore has the form $M=M_{1}+iM_{2}$, where $M_{1}\in\mathrm{Skew}(\bb{R}^{2})$, $M_{2}\in\mathrm{Sym}(\bb{R}^{2})$ resulting in $\av{Y_{0},M_{2}}=0$ for all $Y_{0}\in \mathrm{Sym}(\bb{R}^{2})$. It follows that $M_{2}=0$ and \[ ZY_{2}X-XY_{2}^{H}Z=\mat{0}{-\alpha}{\alpha}{0} \] for some $\alpha=\alpha(Y_{2})\in\bb{R}$. Adding the two results we obtain that for every $Y_{2}\in\mathrm{Sym}(\bb{C}^{2})$ there exists $b=b(Y_{2})\in\bb{C}$, such that \[ ZY_{2}X=\mat{0}{-b}{b}{0}. \] We now choose $Y_{2}=\tns{\Ba}$ obtaining \[ Z\Ba\otimes X^{T}\Ba=\mat{0}{-b}{b}{0}. \] The matrix on the left-hand side\ has rank at most 1, while the matrix on the right-hand side\ has rank 2, unless $b=0$. Therefore, $Z\Ba\otimes X^{T}\Ba=0$ for every $\Ba\in\bb{C}^{2}$. If $X\not=0$ then either $X^{T}\Be_{1}\not=0$ or $X^{T}\Be_{2}\not=0$. To fix ideas suppose that $X^{T}\Be_{1}\not=0$. 
But then, for $\Ba=\Be_{1}$ we must have $Z\Be_{1}=0$. If $Z\Be_{2}\not=0$ then, for $\Ba=\Be_{2}$, we must have that $X^{T}\Be_{2}=0$. Now writing $\Ba=a_{1}\Be_{1}+a_{2}\Be_{2}$ we obtain \[ 0=Z(a_{1}\Be_{1}+a_{2}\Be_{2})\otimes X^{T}(a_{1}\Be_{1}+a_{2}\Be_{2})=a_{1}a_{2}Z\Be_{2}\otimes X^{T}\Be_{1}. \] We can just choose $a_{1}=a_{2}=1$ and conclude, recalling that $X^{T}\Be_{1}\not=0$, that $Z\Be_{2}=0$ in contradiction to our assumption. Thus, we must have $Z=0$, implying that $V=\mathfrak{H}(\bb{C}^{2})$. \end{itemize}
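As a spot check of the classification, the closure conditions (\ref{2dJMA}) can be verified numerically for a representative solution, say $W={\mathcal D}$ (complex diagonal matrices) with $V$ the real diagonal matrices, i.e.\ the algebra $({\mathcal D},{\mathcal D})$. The membership test \texttt{in\_span\_real} is our own helper:

```python
import numpy as np

rng = np.random.default_rng(4)

def in_span_real(M, basis, tol=1e-9):
    """Check that M lies in the real span of `basis` (matrices flattened to R^8)."""
    cols = np.array([np.concatenate([B.real.ravel(), B.imag.ravel()])
                     for B in basis]).T
    v = np.concatenate([M.real.ravel(), M.imag.ravel()])
    coeff = np.linalg.lstsq(cols, v, rcond=None)[0]
    return np.linalg.norm(cols @ coeff - v) < tol

E11 = np.diag([1.0 + 0j, 0.0])
E22 = np.diag([0.0 + 0j, 1.0])
W_basis = [E11, E22, 1j * E11, 1j * E22]   # complex diagonal matrices, as a real space
V_basis = [E11, E22]                        # real diagonal (Hermitian) matrices

for _ in range(100):
    Y = np.diag(rng.normal(size=2) + 1j * rng.normal(size=2))  # Y in W
    X = np.diag(rng.normal(size=2)).astype(complex)            # X in V
    assert in_span_real(Y @ Y + X @ X.T, W_basis)              # Y^2 + X X^T in W
    assert in_span_real(Y @ X + X @ Y.conj().T, V_basis)       # Y X + X Y^H in V
print("the pair (D, D) satisfies the closure conditions (2dJMA)")
```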
\begin{center}
\textbf{Summary} \end{center} It will be convenient to introduce the ``square-free'' vector $\Bz_{0}=(1,-i)$, satisfying $\Bz_{0}\cdot\Bz_{0}=0$. We order the 23 solutions by dimension of $(W,V)$ in lexicographic order. We also give them short names for easy reference and identify families of equivalent solutions.
~
\begin{itemize}
\item $W=\{0\}$
\begin{itemize}
\item $V=\{0\}$ \textcolor{red}{$(0,0)$}
\item $V=\bb{R}\Bz_{0}\otimes\bra{\Bz_{0}}$ and the equivalent $V=\bb{R}\bra{\Bz_{0}}\otimes\Bz_{0}$ \textcolor{red}{$(0,\bb{R}\BZ_{0})$}$\sim(0,\bb{R}\bra{\BZ}_{0})$
\end{itemize} \item $W=\bb{C}\BI$.
\begin{itemize}
\item $V=\{0\}$ \textcolor{red}{$(\bb{C}\BI,0)$}
\item $V=V_{1}=\left\{\mat{\alpha}{i\beta}{-i\beta}{\alpha}:\{\alpha,\beta\}\subset\bb{R}\right\}$, \textcolor{red}{$(\bb{C}\BI,\BGF)$}
\item $V=V_{2}=\left\{\mat{\alpha}{\beta}{\beta}{-\alpha}:\{\alpha,\beta\}\subset\bb{R}\right\}$,
\textcolor{red}{$(\bb{C}\BI,\BGY)$} \item $V$ is any 1D subspace of $V_{1}$ or $V_{2}$, which can be ``rotated''
by $C\in O(2,\bb{C})$ into one of the following subspaces
\begin{itemize}
\item $V=\bb{R}\BI_{2}$ \textcolor{red}{$(\bb{C}\BI,\bb{R}\BI)$}$\sim(\bb{C}\BI,\bb{R}\phi_{t})$, $ \phi_{t}=\mat{\cosh t}{i\sinh t}{-i\sinh t}{\cosh t}, $
$t\in\bb{R}$
\item $V=\bb{R}\mat{0}{1}{1}{0}$ \textcolor{red}{$(\bb{C}\BI,\bb{R}\psi(i))$}$\sim(\bb{C}\BI,\bb{R}\psi(e^{i\alpha}))$, $ \psi(e^{i\alpha})=\mat{\cos\alpha}{\sin\alpha}{\sin\alpha}{-\cos\alpha}, $ $\alpha\in[0,\pi)$,
\item $V=\bb{R}\mat{0}{i}{-i}{0}$ \textcolor{red}{$(\bb{C}\BI,i\BR_{\perp})$}$\sim(\bb{C}\BI,\bb{R}\phi'_{t})$, $ \phi'_{t}=\mat{\sinh t}{i\cosh t}{-i\cosh t}{\sinh t}, $
$t\in\bb{R}$
\item $V=\bb{R}\Bz_{0}\otimes\bra{\Bz_{0}}$ \textcolor{red}{$(\bb{C}\BI,\bb{R}\BZ_{0})$}$\sim(\bb{C}\BI,\bb{R}\bra{\BZ}_{0})$
\end{itemize}
\end{itemize} \item $W=\bb{C}\tns{\Be_{1}}\sim\bb{C}\tns{\Ba}$, if $\Ba\cdot\Ba=1$
\begin{itemize}
\item $V=\{0\}$ \textcolor{red}{$(\tns{\Be_{1}},0)$}$\sim(\tns{\Ba},0)$,
$\Ba\cdot\Ba=1$, ($\pm\Ba$ defining the same subspaces)
\item $V=\bb{R}\tns{\Be_{1}}\sim\bb{R}\Ba\otimes\bra{\Ba}$ (any vector
$\Ba$ satisfying $\Ba\cdot\Ba=1$ can be rotated by $C\in O(2,\bb{C})$ into $\Be_{1}$). \textcolor{red}{${\rm Ann}(\bb{C}\Be_{2})$}$\sim{\rm
Ann}(\bb{C}\bra{\Ba}^{\perp})$, where $ \Ba^{\perp}=\BR_{\perp}\Ba=(-a_{2},a_{1}). $
\end{itemize} \item $W=\bb{C}\tns{\Bz_{0}}\sim\bb{C}\tns{\bra{\Bz}_{0}}$
\begin{itemize}
\item $V=\{0\}$ \textcolor{red}{$(\tns{\Bz_{0}},0)$}$\sim(\tns{\bra{\Bz}_{0}},0)$,
\item $V=\bb{R}\Bz_{0}\otimes\bra{\Bz}_{0}$
\textcolor{red}{${\rm Ann}(\bb{C}\bra{\Bz_{0}})$}$\sim{\rm Ann}(\bb{C}\Bz_{0})$
\end{itemize} \item $W=\mathrm{Span}_{\bb{C}}\{\tns{\Be_{1}},\tns{\Be_{2}}\}$, (representing an
infinite $O(2,\bb{C})$-orbit)
\begin{itemize}
\item $V=\{0\}$, \textcolor{red}{$({\mathcal D},0)$}$\sim(W_{\Ba},0)$, $W_{\Ba}=\{Y\in\mathrm{Sym}(\bb{C}^{2}):Y\Ba\cdot\Ba^{\perp}=0\}$.
\item $V=\bb{R}\Be_{1}\otimes\Be_{1}$, \textcolor{red}{$({\mathcal D},\Be_{1}\otimes\Be_{1})$}$\sim(W_{\Ba},\bb{R}\Ba\otimes\bra{\Ba})$ \item $V=\mathrm{Span}_{\bb{R}}\{\tns{\Be_{1}},\tns{\Be_{2}}\}$ \textcolor{red}{$({\mathcal D},{\mathcal D})$}$\sim(W_{\Ba},V_{\Ba})$, where we define \[ V_{\Ba}=\{X\in\mathfrak{H}(\bb{C}^{2}):(X\Ba,\Ba^{\perp})_{\bb{C}^{2}}=0\}. \]
\item $V=\left\{\mat{0}{\bra{c}}{c}{0}:c\in\bb{C}\right\}$ \textcolor{red}{$({\mathcal D},{\mathcal D}')$}$\sim(W_{\Ba},V'_{\Ba})$, where we define \[ V'_{\Ba}=\{X\in\mathfrak{H}(\bb{C}^{2}):(X\Ba,\Ba)_{\bb{C}^{2}}=(X\Ba^{\perp},\Ba^{\perp})_{\bb{C}^{2}}=0\}. \] In all cases above $\Ba\cdot\Ba=1$ (where $\pm\Ba$ define the same $\Pi$). In all cases, except $({\mathcal D},\Be_{1}\otimes\Be_{1})$, vectors $\pm\Ba^{\perp}$ also define the same subspace as $\Ba$. \end{itemize}
\textcolor{red}{$(W,\bb{R}\BZ_{0})$}$\sim(\bra{W},\bb{R}\bra{\BZ}_{0})$ \item $\dim V=2$. There is a 1-parameter family of 2D subspaces permuted by the
automorphism $\Phi$. This $\Phi$-orbit can be represented by \[ V=\left\{\mat{y}{x+iy}{x-iy}{y}:\{x,y\}\subset\bb{R}\right\} \] We denote this set by \textcolor{red}{$(W,V_{\infty})$}$\sim(W,V_{t})\sim(\bra{W},\bra{V_{t}})$, \[ V_{t}=\left\{\mat{x}{t(x-y)+i\frac{x+y}{2}}{t(x-y)-i\frac{x+y}{2}}{y}:\{x,y\}\subset\bb{R}\right\}, \] \item $\dim V=3$, $ V=\left\{X\in\mathfrak{H}(\bb{C}^{2}): \mathrm{Tr}\, X=2\mathfrak{Im}(X_{12})\right\} $ \textcolor{red}{$(W,V)$}$\sim(\bra{W},\bra{V})$ \end{itemize}
\item $W=\mathrm{Sym}(\bb{C}^{2})$
\begin{itemize}
\item $V=\{0\}$ \textcolor{red}{$(\mathrm{Sym}(\bb{C}^{2}),0)$}
\item $V=\mathfrak{H}(\bb{C}^{2})$ \textcolor{red}{$\mathrm{Sym}({\mathcal T})$}
\end{itemize} \end{itemize} The Summary table below also lists subalgebras, squares and ideals of each of the algebras $\Pi(V,W)$. They have been computed with the Maple computer algebra package by Huilin Chen.
For the purposes of writing Maple code we will refer to each Jordan multialgebra (labeled in red) by its item number in the list below. The orbit of each equivalence class is also indicated, but will not be used in Maple directly, unless explicitly stated.\\
\begin{tabular}[h]{|c|c|c|c|c|}
\hline item \# & representative & orbit & dimensions & subalgebras\\ \hline 1 & \textcolor{red}{$(0,0)$} & * & (0,0) & []\\ \hline 2 & \textcolor{red}{$(0,\bb{R}\BZ_{0})$} & $(0,\bb{R}\bra{\BZ}_{0})$ & (0,1) & [\textcolor{red}{1}]\\ \hline 3 & \textcolor{red}{$(\bb{C}\BI,0)$} & * & (1,0) & [\textcolor{blue}{1}]\\ \hline 4&\textcolor{red}{$(\bb{C}\BI,\bb{R}\BI)$}&$(\bb{C}\BI,\bb{R}\phi_{t})$, $t\in\bb{R}$& (1,1)&[\textcolor{blue}{1},3]\\ \hline 5 &\textcolor{red}{$(\bb{C}\BI,\bb{R}\psi(i))$}& $(\bb{C}\BI,\bb{R}\psi(e^{i\alpha}))$, $\alpha\in[0,\pi)$& (1,1)&[\textcolor{blue}{1},3]\\ \hline 6& \textcolor{red}{$(\bb{C}\BI,i\BR_{\perp})$}&$(\bb{C}\BI,\bb{R}\phi'_{t})$, $t\in\bb{R}$& (1,1)&[\textcolor{blue}{1},3]\\ \hline 7& \textcolor{red}{$(\bb{C}\BI,\bb{R}\BZ_{0})$}&$(\bb{C}\BI,\bb{R}\bra{\BZ}_{0})$& (1,1)&[\textcolor{blue}{1},\textcolor{blue}{2},3]\\ \hline 8& \textcolor{red}{$(\bb{C}\BI,\BGF)$}&*&(1,2)&[\textcolor{blue}{1},2,3,4,6,7]\\ \hline 9& \textcolor{red}{$(\bb{C}\BI,\BGY)$}&*&(1,2)&[\textcolor{blue}{1},3,5]\\ \hline 10&\textcolor{red}{$(\tns{\Be_{1}},0)$}&$(\tns{\Ba},0)$, $\Ba\sim-\Ba$& (1,0)&[\textcolor{blue}{1}]\\ \hline 11& \textcolor{red}{${\rm Ann}(\bb{C}\Be_{2})$}&${\rm Ann}(\bb{C}\bra{\Ba}^{\perp})$,
$\Ba\sim-\Ba$&(1,1)&[\textcolor{blue}{1},10]\\ \hline 12& \textcolor{red}{$(\tns{\Bz_{0}},0)$}&$(\tns{\bra{\Bz}_{0}},0)$&(1,0)&[\textcolor{red}{1}]\\ \hline 13& \textcolor{red}{${\rm Ann}(\bb{C}\bra{\Bz_{0}})$}&${\rm Ann}(\bb{C}\Bz_{0})$&(1,1)& [\textcolor{red}{1},\textcolor{blue}{2},\textcolor{blue}{12}]\\ \hline 14&\textcolor{red}{$({\mathcal D},0)$}&$(W_{\Ba},0)$, $\pm\Ba\sim\pm\Ba^{\perp}$& (2,0)&[\textcolor{blue}{1},3,\textcolor{blue}{10}]\\ \hline 15&\textcolor{red}{$({\mathcal D},\Be_{1}\otimes\Be_{1})$}&$(W_{\Ba},\bb{R}\Ba\otimes\bra{\Ba})$,
$\Ba\sim-\Ba$& (2,1)&[\textcolor{blue}{1},3,10,\textcolor{blue}{-10},\textcolor{blue}{11},14]\\ \hline 16&\textcolor{red}{$({\mathcal D},{\mathcal D})$}&$(W_{\Ba},V_{\Ba})$, $\pm\Ba\sim\pm\Ba^{\perp}$& (2,2)&[\textcolor{blue}{1},3,4,-5,10,\textcolor{blue}{11},14,15]\\ \hline 17& \textcolor{red}{$({\mathcal D},{\mathcal D}')$}&$(W_{\Ba},V'_{\Ba})$, $\pm\Ba\sim\pm\Ba^{\perp}$& (2,2)&[\textcolor{blue}{1},3,5,6,10,14]\\ \hline 18&\textcolor{red}{$(W,0)$}&$(\bra{W},0)$&(2,0)&[\textcolor{blue}{1},3,\textcolor{blue}{12}]\\ \hline 19&\textcolor{red}{$(W,\bb{R}\BZ_{0})$}&$(\bra{W},\bb{R}\bra{\BZ}_{0})$&(2,1)& [\textcolor{blue}{1},\textcolor{blue}{2},3,7,\textcolor{blue}{12},\textcolor{blue}{13},18]\\ \hline 20&\textcolor{red}{$(W,V_{\infty})$}&$(W,V_{t})\sim(\bra{W},\bra{V_{t}})$&(2,2)& [\textcolor{blue}{1},2,3,5,7,12,\textcolor{blue}{13},18,19]\\ \hline 21&\textcolor{red}{$(W,V)$}&$(\bra{W},\bra{V})$&(2,3)&[\textcolor{blue}{1},2,3,5,7,9,12,\textcolor{blue}{13},18,19,20]\\ \hline 22&\textcolor{red}{$(\mathrm{Sym}(\bb{C}^{2}),0)$}&*&(3,0)&[\textcolor{blue}{1},3,10,12,14,18]\\ \hline 23&\textcolor{red}{$\mathrm{Sym}({\mathcal T})$}&*&(3,4)&$[\textcolor{blue}{1},2,\ldots,21,22]$\\ \hline \end{tabular} \begin{itemize} \item Symbol * in the ``orbit'' column means that the orbit consists of a single
algebra listed in the ``representative'' column. \item Vectors $\Ba$ always lie on
the ``complex circle'' $ \bb{S}_{\bb{C}}^{1}=\{\Ba\in\bb{C}^{2}:\Ba\cdot\Ba=1\}. $ \item Algebra -10 in item 15, refers to an algebra from the orbit of item 10, corresponding
to $\Ba=\Be_{2}$: $(\tns{\Be_{2}},0)$. It is there because among all global
automorphisms mapping item 15 into itself none map 10 into -10. Therefore,
within item 15 algebras 10 and -10 are not equivalent. There are no other occurrences
of such a situation. \item Algebra -5 in item 16 refers to an algebra from the orbit of item 5, corresponding
to $\alpha=0$: $(\bb{C}\BI,\bb{R}\psi(1))$. \item If an algebra is not listed as a subalgebra of a particular algebra it
means that no algebra from its orbit is a subalgebra of that particular algebra. \item The subalgebras listed in red are squares, the subalgebras listed in
blue are ideals. \end{itemize}
\section{Theory of links} Another related and important part of the project is to discover all possible links. In order to describe a link, consider the opposite exercise. Instead of fixing tensors $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$ we fix the set $A$ and vary $\mathsf{L}_{A}$ and $\mathsf{L}_{B}$. If we know $\mathsf{L}^{*}$ for one pair $\mathsf{L}_{A}$, $\mathsf{L}_{B}$, does it give us any information about $\mathsf{L}^{*}$ for other pairs? If the answer is yes for any subset $A$, then we say that we have discovered a link. Links are much harder to characterize, since, in general, they contain much more information than exact relations.
In the framework of the theory, links are described by Jordan $\Hat{{\mathcal A}}$-multialgebras in $\mathrm{Sym}({\mathcal T})\oplus\mathrm{Sym}({\mathcal T})$, where \[ \Hat{{\mathcal A}}=\mathrm{Span}\left\{ \mat{\BGG_{0}^{(1)}(\Bn)-\BGG_{0}^{(1)}(\Bn_{0})}{0}{0}{\BGG_{0}^{(2)}(\Bn)-\BGG_{0}^{(2)}(\Bn_{0})}:
|\Bn|=1\right\}, \] where $\BGG_{0}^{(1)}(\Bn)$ and $\BGG_{0}^{(2)}(\Bn)$ are constructed using different reference media $\mathsf{L}_{0}^{(1)}$ and $\mathsf{L}_{0}^{(2)}$, respectively. In our case \[ \Hat{{\mathcal A}}=\mathrm{Span}\left\{ \mat{\BGL_{1}^{-1}\otimes\BA}{0}{0}{\BGL_{2}^{-1}\otimes\BA}:\BA^{T}=\BA,\mathrm{Tr}\,\BA=0\right\}. \] We say that $\Hat\Pi\subset\mathrm{Sym}({\mathcal T})\oplus\mathrm{Sym}({\mathcal T})$ describes a link if \[ \mat{\mathsf{K}_{1}}{0}{0}{\mathsf{K}_{2}}\mat{\mathsf{A}_{1}}{0}{0}{\mathsf{A}_{2}}\mat{\mathsf{K}_{1}}{0}{0}{\mathsf{K}_{2}} \in\Hat{\Pi},\quad\forall\mat{\mathsf{K}_{1}}{0}{0}{\mathsf{K}_{2}}\in\Hat{\Pi},\ \forall\mat{\mathsf{A}_{1}}{0}{0}{\mathsf{A}_{2}}\in\Hat{{\mathcal A}}. \] From now on we will be using a more compact notation \[ [\mathsf{K}_{1},\mathsf{K}_{2}]=\mat{\mathsf{K}_{1}}{0}{0}{\mathsf{K}_{2}},\qquad [\mathsf{A}_{1},\mathsf{A}_{2}]=\mat{\mathsf{A}_{1}}{0}{0}{\mathsf{A}_{2}}. \]
As in the case of Jordan ${\mathcal A}$-multialgebras we will first apply the covariance transformation \[ \Hat{{\mathcal A}}_{0}=\Hat{\mathsf{C}}\Hat{{\mathcal A}}\Hat{\mathsf{C}}^{T}, \] where $\Hat{\mathsf{C}}=[\BC_{1}\otimes\BI_{d},\BC_{2}\otimes\BI_{d}]$, and $\BC_{1}$, $\BC_{2}$ are as before: $\BC_{1}=\BGL_{1}^{1/2}$, $\BC_{2}=\BGL_{2}^{1/2}$, so that \[ \Hat{{\mathcal A}}_{0}=\{[\BA,\BA]:\BA\in{\mathcal A}_{0}\},\qquad{\mathcal A}_{0}=\{\BI_{2}\otimes\BA:\BA^{T}=\BA,\mathrm{Tr}\,\BA=0\}. \] All Jordan $\Hat{{\mathcal A}}$-multialgebras can be described entirely in terms of the algebraic structure of Jordan ${\mathcal A}$-multialgebras. In order to describe an $\Hat{{\mathcal A}}$-multialgebra $\Hat{\Pi}$ we need the following algebraic data: Jordan ${\mathcal A}$-ideals ${\mathcal I}_{1}\subset\Pi_{1}$, ${\mathcal I}_{2}\subset\Pi_{2}$, such that the factor-algebras $\Pi_{1}/{\mathcal I}_{1}$ and $\Pi_{2}/{\mathcal I}_{2}$ are isomorphic, together with a Jordan ${\mathcal A}$-factor-algebra isomorphism $\Phi:\Pi_{1}/{\mathcal I}_{1}\to\Pi_{2}/{\mathcal I}_{2}$. In that case \begin{equation}
\label{Pihatstr}
\Hat{\Pi}=\{[\mathsf{K}_{1},\mathsf{K}_{2}]\in\Pi_{1}\times\Pi_{2}:\Phi([\mathsf{K}_{1}])=[\mathsf{K}_{2}]\}, \end{equation} where $[\mathsf{K}_{j}]$ denotes the equivalence class of $\mathsf{K}_{j}$ in $\Pi_{j}/{\mathcal I}_{j}$, $j=1,2$.
The most common occurrence is the situation, where $\Pi_{1}=\Pi_{2}=\Pi$ and ${\mathcal I}_{1}={\mathcal I}_{2}=\{0\}$, in which case \begin{equation}
\label{Philink}
\Hat{\Pi}=\{[\mathsf{K},\Phi(\mathsf{K})]:\mathsf{K}\in\Pi\}. \end{equation} Another common occurrence happens when there exists a Jordan ${\mathcal A}$-multialgebra $\Pi'\subset\Pi$, such that $\Pi={\mathcal I}\oplus\Pi'$, where ${\mathcal I}$ is an ideal in $\Pi$. That means that every $\mathsf{K}\in\Pi$ can be written uniquely as $\mathsf{K}=\mathsf{K}'+\mathsf{J}$, where $\mathsf{K}'\in\Pi'$ and $\mathsf{J}\in{\mathcal I}$. The map $\Phi([\mathsf{K}])=\mathsf{K}'$ is obviously a factor-algebra isomorphism $\Phi:\Pi/{\mathcal I}\to\Pi'/\{0\}$. In that case \begin{equation}
\label{idlink}
\Hat{\Pi}=\{[\mathsf{K}'+\mathsf{J},\mathsf{K}']:\mathsf{K}'\in\Pi',\ \mathsf{J}\in{\mathcal I}\}. \end{equation} We will see later that in our case only links of the above two types are present.
\section{Factor algebra isomorphism classes} A Maple ideal checker has found 11 nontrivial ideals. In each case we have a situation where $\Pi=\Pi'\oplus J$, where $\Pi'$ is a subalgebra and $J$ is an ideal. In that case, every $\mathsf{K}\in\Pi$ can be written uniquely as $\mathsf{K}=\mathsf{K}'+\mathsf{J}$ and hence $\mathsf{K}'$ becomes a natural choice of the representative of the equivalence class of $\mathsf{K}$ in $\Pi/J$. This identification makes the natural projection $\pi:\Pi\to\Pi/J\cong\Pi'$ an algebra isomorphism: \[ [\mathsf{K}*_{\mathsf{A}}\mathsf{K}]=[\mathsf{K}'*_{\mathsf{A}}\mathsf{K}'+2\mathsf{K}'*_{\mathsf{A}}\mathsf{J}+\mathsf{J}*_{\mathsf{A}}\mathsf{J}]=\mathsf{K}'*_{\mathsf{A}}\mathsf{K}'. \] \begin{enumerate} \item $(0,\bb{R}\BZ_{0})\subset(\bb{C}\BI,\bb{R}\BZ_{0})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item $(\tns{\Be_{1}},0)\subset({\mathcal D},0)$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\tns{\Be_{2}},0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item $(\tns{\Be_{2}},0)\subset({\mathcal D},\Be_{1}\otimes\Be_{1})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'={\rm Ann}(\bb{C}\Be_{2})$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item ${\rm Ann}(\bb{C}\Be_{2})\subset({\mathcal D},\Be_{1}\otimes\Be_{1})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\tns{\Be_{2}},0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item ${\rm Ann}(\bb{C}\Be_{2})\subset({\mathcal D},{\mathcal D})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'={\rm Ann}(\bb{C}\Be_{1})$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. It remains to recall that the
algebras ${\rm Ann}(\bb{C}\Be_{1})$ and ${\rm Ann}(\bb{C}\Be_{2})$ are isomorphic by
means of the global isomorphism. Thus, $({\mathcal D},{\mathcal D})/{\rm
Ann}(\bb{C}\Be_{2})\cong{\rm Ann}(\bb{C}\Be_{2})$. \item $(\tns{\Bz_{0}},0)\subset(W,0)$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item $(0,\bb{R}\BZ_{0})\subset(W,\bb{R}\BZ_{0})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(W,0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item $(\tns{\Bz_{0}},0)\subset(W,\bb{R}\BZ_{0})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,\bb{R}\BZ_{0})$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item ${\rm Ann}(\bb{C}\bra{\Bz_{0}})\subset(W,\bb{R}\BZ_{0})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,0)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item ${\rm Ann}(\bb{C}\bra{\Bz_{0}})\subset(W,V_{\infty})$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,\bb{R}\psi(i))$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \item ${\rm Ann}(\bb{C}\bra{\Bz_{0}})\subset(W,V)$. In this case
$\Pi=\Pi'\oplus J$, where $\Pi'=(\bb{C}\BI,\BGY)$, and therefore, the factor
algebra is naturally isomorphic to $\Pi'$. \end{enumerate} Since there are no new factor-algebras in addition to the 23 algebras above, we only need to know the algebra-ideal pairs. This information can be kept in a more economical list, observing that if $J\subset\Pi$ is an ideal, then for any other algebra $\Pi'$ the intersection $J\cap\Pi'$ is an ideal in $\Pi\cap\Pi'$. In this way, we have a reduced set of algebra-ideal pairs. \begin{enumerate} \item $({\mathcal D},\Be_{1}\otimes\Be_{1})/(\tns{\Be_{2}},0)\cong{\rm Ann}(\bb{C}\Be_{2})$ \item $({\mathcal D},{\mathcal D})/{\rm Ann}(\bb{C}\Be_{2})\cong{\rm Ann}(\bb{C}\Be_{2})$ \item $(W,\bb{R}\BZ_{0})/(0,\bb{R}\BZ_{0})\cong(W,0)$ \item $(W,\bb{R}\BZ_{0})/(\tns{\Bz_{0}},0)\cong(\bb{C}\BI,\bb{R}\BZ_{0})$ \item $(W,V)/{\rm Ann}(\bb{C}\bra{\Bz_{0}})\cong(\bb{C}\BI,\BGY)$ \end{enumerate} We remark that in the list above algebra/ideal pairs 1 and 2 represent links in the absence of thermoelectric coupling. Item 1 corresponds to the KDM link for 2D conductivity, while item 2 corresponds to a pair of uncoupled conducting composites: the effective tensor of the pair is simply the pair of effective tensors of the two composites.
\section{SO(2)-invariant Jordan multialgebra automorphisms} We can partially determine all possible automorphisms $\Phi$ of each of the algebras $\Pi$ by describing all transformations $\Phi_{2}:W\to W$ satisfying (\ref{PhiXX})$_{2}$. There are only 7 possibilities for $W$: \begin{enumerate} \item $W=\{0\}$. Then the only choice is the ``identity map'' $\Phi_{2}(Y)=Y$. \item $W=\bb{C}\BI_{2}$. Then the only choice is the ``identity map''
$\Phi_{2}(Y)=Y$. \item $W=\bb{C}\tns{\Be_{1}}$. Then the only choice is the ``identity map''
$\Phi_{2}(Y)=Y$. \item $W=\bb{C}\tns{\Bz_{0}}$. In that case every nonzero linear map satisfies
(\ref{PhiXX})$_{2}$: $\Phi_{2}(Y)=aY$ for some $a\in\bb{C}\setminus\{0\}$. \item $W={\mathcal D}$. Then in addition to the ``identity map'' $\Phi_{2}(Y)=Y$ there
is one more possibility: \[ \Phi_{2}\left(\mat{x}{0}{0}{y}\right)=\mat{y}{0}{0}{x}. \] \item $W=\mathrm{Span}_{\bb{C}}\{\BI_{2},\tns{\Bz_{0}}\}$. In that case $\Phi_{2}$ is determined by its values on basis vectors: \[ \Phi_{2}(\BI_{2})=\BI_{2},\qquad\Phi_{2}(\tns{\Bz_{0}})=a\tns{\Bz_{0}},\qquad a\in\bb{C}\setminus\{0\}. \] \item $W=\mathrm{Sym}(\bb{C}^{2})$. This case has already been examined (in Section
10). The set of all maps $\Phi_{2}$ is described by \[ \Phi_{2}(Y)=CYC^{T},\qquad C\in O(2,\bb{C}). \] \end{enumerate} The problem of determination of $\Phi_{0}$ is trivial when $V=\{0\}$, which is true in 7 cases. It has also been solved for $\Pi=\mathrm{Sym}({\mathcal T})$. In another 9 cases $\dim V=1$. Then, in order to determine $\Phi_{0}$, we only need to find all real non-zero numbers $\alpha$ for which $\Phi_{0}(X)=\alpha X$. If $X_{0}\in V\setminus\{0\}$, then equations (\ref{PhiXX}), (\ref{PhiXY}) imply \[ \Phi_{2}(Y)X_{0}=YX_{0},\qquad\Phi_{2}(X_{0}X_{0}^{T})=\alpha^{2}X_{0}X_{0}^{T}. \] In particular, if $X_{0}X_{0}^{T}X_{0}\not=0$, then $\alpha=\pm 1$ are the only choices that can work. If $X_{0}=\BZ_{0}$, then any $\alpha\not=0$ works. Finally, we need to note that in the case $\Pi=({\mathcal D},\Be_{1}\otimes\Be_{1})$ the nontrivial map $\Phi_{2}$ is ruled out, since \[ \mat{y}{0}{0}{x}\Be_{1}\otimes\Be_{1}\not=\mat{x}{0}{0}{y}\Be_{1}\otimes\Be_{1}. \] Thus, for this algebra, the only nontrivial automorphism is defined by $\Phi_{0}(X)=-X$ and $\Phi_{2}(Y)=Y$. There are only 6 cases (besides $\Pi=\mathrm{Sym}({\mathcal T})$) where $\dim V>1$. In 2 of these 6 cases $W=\bb{C}\BI_{2}$ and therefore $\Phi_{2}(Y)=Y$, while $\Phi_{0}$ satisfies \[ \Phi_{0}(X)\Phi_{0}(X)^{T}=XX^{T},\qquad X\in V. \]
\begin{center}
List of all SO(2)-invariant Jordan multialgebra automorphisms \end{center}
\begin{tabular}[h]{|c|c|c|}
\hline item \# & representative & automorphisms\\ \hline 1 & \textcolor{red}{$(0,0)$} & *\\ \hline 2 & \textcolor{red}{$(0,\bb{R}\BZ_{0})$} & $\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$\\ \hline 3 & \textcolor{red}{$(\bb{C}\BI,0)$} & *\\ \hline 4&\textcolor{red}{$(\bb{C}\BI,\bb{R}\BI)$}&$\Phi_{0}(\BI)=-\BI$\\ \hline 5 &\textcolor{red}{$(\bb{C}\BI,\bb{R}\psi(i))$}&$\Phi_{0}(\psi(i))=-\psi(i)$\\ \hline 6& \textcolor{red}{$(\bb{C}\BI,i\BR_{\perp})$}&$\Phi_{0}(i\BR_{\perp})=-i\BR_{\perp}$\\ \hline 7& \textcolor{red}{$(\bb{C}\BI,\bb{R}\BZ_{0})$}&$\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$\\ \hline 8& \textcolor{red}{$(\bb{C}\BI,\BGF)$}&see below\\ \hline 9& \textcolor{red}{$(\bb{C}\BI,\BGY)$}&see below\\ \hline 10&\textcolor{red}{$(\tns{\Be_{1}},0)$}&*\\ \hline 11& \textcolor{red}{${\rm Ann}(\bb{C}\Be_{2})$}&$\Phi_{0}(\tns{\Be_{1}})=-\tns{\Be_{1}}$\\ \hline 12& \textcolor{red}{$(\tns{\Bz_{0}},0)$}&$\Phi(\mathsf{K})=a\mathsf{K}$\\ \hline 13& \textcolor{red}{${\rm Ann}(\bb{C}\bra{\Bz_{0}})$}&$\Phi_{2}(\tns{\Bz_{0}})=a\tns{\Bz_{0}}$, $\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$\\ \hline 14&\textcolor{red}{$({\mathcal D},0)$}&$\Phi_{2}(Y)=\psi(i)Y\psi(i)$\\ \hline 15&\textcolor{red}{$({\mathcal D},\Be_{1}\otimes\Be_{1})$}&$\Phi_{0}(X)=-X$\\ \hline 16&\textcolor{red}{$({\mathcal D},{\mathcal D})$}&$\Phi_{0}(X)=\pm\psi(i)X\psi(i)$, $\Phi_{2}(Y)=\psi(i)Y\psi(i)$\\ \hline 17& \textcolor{red}{$({\mathcal D},{\mathcal D}')$}&$\Phi_{0}(X)=-X$, $\Phi_{2}(Y)=Y$ or $\Phi_{0}(X)=\pm X^{T}$, $\Phi_{2}(Y)=\psi(i)Y\psi(i)$\\ \hline 18&\textcolor{red}{$(W,0)$}&$\Phi_{2}(\BI_{2})=\BI_{2}$, $\Phi_{2}(\tns{\Bz_{0}})=a\tns{\Bz_{0}}$\\ \hline 19&\textcolor{red}{$(W,\bb{R}\BZ_{0})$}&$\Phi_{2}(\BI_{2})=\BI_{2}$, $\Phi_{2}(\tns{\Bz_{0}})=a\tns{\Bz_{0}}$, $\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$\\ \hline 20&\textcolor{red}{$(W,V_{\infty})$}&see below\\ \hline 21&\textcolor{red}{$(W,V)$}&see below\\ \hline 22&\textcolor{red}{$(\mathrm{Sym}(\bb{C}^{2}),0)$}&$\Phi_{2}(Y)=CYC^{T}$, $C\in O(2,\bb{C})$\\ \hline 
23&\textcolor{red}{$\mathrm{Sym}({\mathcal T})$}&$\Phi(K(X,Y))=K(\pm CXC^{H},CYC^{T})$, $C\in O(2,\bb{C})$\\ \hline \end{tabular}
Item 8: $\Phi_{2}(\BI_{2})=\BI_{2}$ and $\Phi_{0}(\BGF(x,y))=\BGF(x',y')$, where \[ \BGF(x,y)=\mat{x}{iy}{-iy}{x},\qquad\vect{x'}{y'}=\BF\vect{x}{y},\qquad\BF^{T}\psi(1)\BF=\psi(1). \] Then, \[ \BF=\pm\mat{\cosh t}{\sinh t}{\sinh t}{\cosh t}\text{ or } \BF=\pm\mat{\cosh t}{\sinh t}{-\sinh t}{-\cosh t}. \] Item 9: $\Phi_{2}(\BI_{2})=\BI_{2}$ and $\Phi_{0}(\BGY(x,y))=\BGY(x',y')$, where \[ \BGY(x,y)=\mat{x}{y}{y}{-x},\qquad\vect{x'}{y'}=\BF\vect{x}{y},\qquad\BF^{T}\BF=\BI_{2}. \] Then, \[ \BF=\mat{\cos t}{\sin t}{-\sin t}{\cos t}\text{ or } \BF=\mat{\cos t}{\sin t}{\sin t}{-\cos t}. \] If we denote $z=x+iy$, then $\BGY(x,y)=\psi(z)$ and $\Phi_{0}(\psi(z))=\psi(e^{-it}z)$ or $\psi(e^{it}\bra{z})$.
Item 20: Every $X\in V_{\infty}$ has the general form $X=\xi\psi(i)+\eta\BZ_{0}$, $\{\xi,\eta\}\subset\bb{R}$, while every $Y\in W$ has the general form $Y=x\BI_{2}+y\tns{\Bz_{0}}$, $\{x,y\}\subset\bb{C}$. Then \[ \Phi_{0}(\xi\psi(i)+\eta\BZ_{0})=\pm(\xi\psi(i)+\alpha\eta\BZ_{0}),\qquad \Phi_{2}(x\BI_{2}+y\tns{\Bz_{0}})=x\BI_{2}+\alpha y\tns{\Bz_{0}},\qquad\alpha\in\bb{R}\setminus\{0\}. \] Item 21: Every $X\in V$ has the general form $X=\psi(z)+\eta\BZ_{0}$, $z\in\bb{C}$, $\eta\in\bb{R}$, while every $Y\in W$ has the general form $Y=x\BI_{2}+y\tns{\Bz_{0}}$, $\{x,y\}\subset\bb{C}$. Then \[ \Phi_{0}(\psi(z)+\eta\BZ_{0})=\pm(\psi(e^{i\theta}z)+\rho\eta\BZ_{0}),\quad \Phi_{2}(x\BI_{2}+y\tns{\Bz_{0}})=x\BI_{2}+\rho e^{i\theta}y\tns{\Bz_{0}},\quad \rho e^{i\theta}\in\bb{C}\setminus\{0\}. \]
Our next task is to determine which of these automorphisms are not restrictions of the global one to the multialgebra in question. For this purpose we define \begin{equation}
\label{Cpmdef}
\BC_{+}(c)=\mat{\cos c}{\sin c}{-\sin c}{\cos c},\qquad \BC_{-}(c)=\mat{\cos c}{\sin c}{\sin c}{-\cos c},\qquad c\in\bb{C}. \end{equation} We compute \[ \BC_{+}(c)\Bz_{0}=e^{-ic}\Bz_{0},\qquad\BC_{-}(c)\Bz_{0}=e^{-ic}\bra{\Bz}_{0}. \] Hence, \[ \BC_{+}(c)\BZ_{0}\BC_{+}(c)^{H}=e^{2\mathfrak{Im}(c)}\BZ_{0},\qquad\BC_{-}(c)\BZ_{0}\BC_{-}(c)^{H}=e^{2\mathfrak{Im}(c)}\bra{\BZ}_{0}. \] \[ \BC_{+}(c)\psi(z)\BC_{+}(c)^{H}=\psi(e^{-2i\mathfrak{Re}(c)}z),\qquad\BC_{-}(c)\psi(z)\BC_{-}(c)^{H}=\psi(e^{2i\mathfrak{Re}(c)}\bra{z}),\quad z\in\bb{C}. \] These formulas show that the automorphisms of algebras 20 and 21, as well as those of algebra 9, are all restrictions of the global automorphism.
In Item 19 there are new automorphisms. We can use the global one to set $a=1$. The remaining ones, $\Phi_{2}(Y)=Y$, $\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$, are all new (except when $\alpha=\pm 1$). The same remark holds for item 13; however, all of its automorphisms are restrictions of automorphisms of algebra 19.
In order to decide on item 8, we similarly denote $\BGF(x,y)$ by $\BGF(z)$, where $z=x+iy$. In that case the transformation $\Phi_{0}$ acts by \[ \Phi_{0}^{+}(\BGF(z))=\pm\BGF(z\cosh t+i\bra{z}\sinh t),\text{ or } \Phi_{0}^{-}(\BGF(z))=\pm\BGF(\bra{z}\cosh t-iz\sinh t). \] It remains to compute (via Maple) that \[ \BC_{+}(c)\BGF(z)\BC_{+}(c)^{H}=\Phi_{0}^{+}(\BGF(z)),\qquad \BC_{-}(c)\BGF(z)\BC_{-}(c)^{H}=\Phi_{0}^{-}(\BGF(z)), \] where $t=2\mathfrak{Im}(c)$. Hence, all automorphisms of algebra 8 are generated by the global ones.
The automorphisms of the remaining algebras are easily seen to come from the global ones. This leaves a single family of non-global automorphisms for algebra 19: \[ \Phi_{2}(Y)=Y,\qquad\Phi_{0}(\BZ_{0})=\alpha\BZ_{0},\quad Y\in W,\ \alpha\in\bb{R}\setminus\{0\}. \]
\section{Eliminating redundancies} We can now eliminate some of the Jordan multialgebras from our list either because they are physically trivial or because they can be obtained as intersections of other multialgebras. \begin{enumerate} \item is physically trivial \item $2=7\cap 13$ and $2^{2}=7^{2}\cap 13^{2}$; $\mat{\lambda\BI_{2}}{\pm(\lambda-1)\BR_{\perp}}{\mp(\lambda-1)\BR_{\perp}}{\lambda\BI_{2}}$, $\lambda>\hf$, $(\lambda^{*})^{-1}=\av{\lambda^{-1}}$. \item $3=4\cap 7$ \item $4=8\cap 16$; $\mat{\BGs}{0}{0}{\BGs}$, $\BGs>0$. \item
$5=9\cap 17$; $\mat{\BL}{t\BL}{t\BL}{\BL}$, $\det\BL=\nth{1-t^{2}}$, $|t|<1$, $\BL>0$;\\[2ex] $\BGs=(t+1)\BL$, $t^{*}=\frac{\det\BGs^{*}-1}{\det\BGs^{*}+1}$, $\BL^{*}=\dfrac{\det\BGs^{*}+1}{2}\cdot\dfrac{\BGs^{*}}{\det\BGs^{*}}$.\\[2ex] $5'=(\bb{C}\BI_{2},\bb{R}\psi(1))=9\cap 16$, $\mat{\BGs}{0}{0}{\frac{\BGs}{\det\BGs}}$, $\BGs>0$. \item $6=8\cap 17$; $\mat{\BL}{-t\BR_{\perp}}{t\BR_{\perp}}{\BL}$, $\det\BL=1+t^{2}$, $t\in\bb{R}$, $\BL>0$. \item $7=8\cap 19$; (see (\ref{ER7})) \item is essential (see (\ref{ER8})) \item $9=21\cap\bra{21}$ (see (\ref{ER9})) \item $10=11\cap 14$ \item is physically trivial because it is an ER in the absence of thermoelectric coupling \item $12=13\cap 18$ and $12^{2}=13^{2}\cap 18^{2}$ (see (\ref{Annz0}) and (\ref{FR})) \item is essential because of the volume fraction relation that accompanies it \item is physically trivial because it is an ER in the absence of thermoelectric coupling \item is physically trivial because it is an ER in the absence of thermoelectric coupling \item is physically trivial because it is an ER in the absence of thermoelectric coupling \item is essential (see (\ref{ER17}) or (\ref{ER17fin})) \item $18=19\cap 22$ (see (\ref{ER19fin}) and (\ref{ER18ad})) \item $19=20\cap 20_{t}$: $(W,\bb{R}\BZ_{0})=(W,V_{\infty})\cap(W,V_{t})$ for any $t\in\bb{R}$. (see (\ref{ER19fin})) \item is essential (see (\ref{ER20})--(\ref{ER20fin})) \item is essential (see (\ref{ER21}) or (\ref{ER21fin})) \item is essential (see (\ref{ER22}) or (\ref{ER22fin})) \item is physically trivial \end{enumerate}
\section{Verifying 3 and 4-chain properties} Recall that every exact relation corresponds to a Jordan multialgebra. However, theoretically, not every Jordan multialgebra may correspond to an exact relation. Validity of the 3 and 4-chain properties for a Jordan multialgebra ensures that it corresponds to an exact relation. Specifically, we need to verify \begin{align}
\label{3chain}
&\mathsf{K}_1\mathsf{A}_1\mathsf{K}_2\mathsf{A}_2\mathsf{K}_3+\mathsf{K}_3\mathsf{A}_2\mathsf{K}_2\mathsf{A}_1\mathsf{K}_1\in\Pi,\\ &\mathsf{K}_1\mathsf{A}_1\mathsf{K}_2\mathsf{A}_2\mathsf{K}_3\mathsf{A}_3\mathsf{K}_4+\mathsf{K}_4\mathsf{A}_3\mathsf{K}_3\mathsf{A}_2\mathsf{K}_2\mathsf{A}_1\mathsf{K}_1\in\Pi \label{4chain} \end{align} for every $\mathsf{K}_{j}\in\Pi$ and every $\mathsf{A}_{j}\in{\mathcal A}$. The algebraic meaning of the 3 and 4-chain properties is the existence of an associative ${\mathcal A}$-multialgebra $\Pi'$ (closed under the associative set of multiplications $\mathsf{K}_{1}\circ_{\mathsf{A}}\mathsf{K}_{2}=\mathsf{K}_{1}\mathsf{A}\mathsf{K}_{2}$), such that $\mathsf{K}\in\Pi'$ implies $\mathsf{K}^{T}\in\Pi'$ and $\Pi'\cap\mathrm{Sym}({\mathcal T})=\Pi$. If $\Pi$ is rotationally invariant, then $\Pi'$ must necessarily be rotationally invariant as well. In our setting an $SO(2)$-invariant associative ${\mathcal A}$-multialgebra $\Pi'$ is characterized by a real subspace $V'$ and a complex subspace $W'$ of $2\times 2$ complex matrices, such that \[ X\bra{Y},\ YX\in V',\quad X_{1}\bra{X}_{2},\ Y_{1}Y_{2}\in W' \] for all $X,X_{1},X_{2}\in V'$ and all $Y,Y_{1},Y_{2}\in W'$. If \[ X^{H}\in V',\quad Y^{T}\in W',\quad\forall X\in V',\ Y\in W' \] and \[ V=V'\cap\mathfrak{H}(\bb{C}^{2}),\qquad W=W'\cap\mathrm{Sym}(\bb{C}^{2}), \] then $\Pi(V,W)$ satisfies the 3 and 4-chain properties. For example, for algebra \#19 $(W,\bb{R}\BZ_{0})$ we can set $W'=W$ and $V'=\bb{C}\BZ_{0}$ and verify that all the relations above hold.
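For this example, the closure relations can be verified numerically once concrete representatives are fixed. The choices $\Bz_{0}=(1,-i)/\sqrt{2}$, $\BZ_{0}=\Bz_{0}\Bz_{0}^{H}$ and $\tns{\Bz_{0}}=\Bz_{0}\Bz_{0}^{T}$ in the sketch below are assumptions, not taken from the text; products involving $\BI_{2}$ are closed trivially, so the only nontrivial content is that the remaining products vanish.

```python
import sympy as sp

# Assumed concrete representatives (hypothetical normalizations):
#   z0 = (1, -i)/sqrt(2),  Z0 = z0 z0^H (Hermitian),  z0 (x) z0 = z0 z0^T (symmetric)
z0 = sp.Matrix([1, -sp.I]) / sp.sqrt(2)
Z0 = z0 * z0.H
K0 = z0 * z0.T

# Closure of (W', V') with W' = span_C{I, z0 (x) z0}, V' = C Z0
# reduces to these products all vanishing:
assert sp.simplify(K0 * K0) == sp.zeros(2, 2)              # Y1 Y2 in W'
assert sp.simplify(Z0 * Z0.conjugate()) == sp.zeros(2, 2)  # X1 X2-bar in W'
assert sp.simplify(K0 * Z0) == sp.zeros(2, 2)              # Y X in V'
assert sp.simplify(Z0 * K0.conjugate()) == sp.zeros(2, 2)  # X Y-bar in V'
```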
There is also a version of 3 and 4-chain properties for ideals and automorphisms. We say that an ideal ${\mathcal I}\subset\Pi$ satisfies the 3 and 4-chain properties if \begin{align}
\label{id3chain}
&\mathsf{J}\mathsf{A}_1\mathsf{K}_2\mathsf{A}_2\mathsf{K}_3+\mathsf{K}_3\mathsf{A}_2\mathsf{K}_2\mathsf{A}_1\mathsf{J}\in{\mathcal I},\\ &\mathsf{K}_1\mathsf{A}_1\mathsf{K}_2\mathsf{A}_2\mathsf{K}_3\mathsf{A}_3\mathsf{K}_4+\mathsf{K}_4\mathsf{A}_3\mathsf{K}_3\mathsf{A}_2\mathsf{K}_2\mathsf{A}_1\mathsf{K}_1\in{\mathcal I} \label{id4chain} \end{align} for every $\mathsf{K}_{j}\in\Pi$, every $\mathsf{A}_{j}\in{\mathcal A}$ and every $\mathsf{J}\in{\mathcal I}$. Equivalently, if we happen to know the associative ${\mathcal A}$-multialgebra $\Pi'$ that establishes the 3 and 4-chain properties of $\Pi$, we can look for an associative ideal ${\mathcal I}'\subset\Pi'$, such that ${\mathcal I}'\cap\mathrm{Sym}({\mathcal T})={\mathcal I}$. For example, two of the 3 nontrivial ideals belong to algebra \#19. It is easy to verify that $J'=(\bb{C}\tns{\Bz_{0}},0)$ and $J'=(0,\bb{C}\BZ_{0})$ are ideals in $(V',W')$, establishing the 3 and 4-chain properties for $J=(\bb{R}\tns{\Bz_{0}},0)$ and $J=(0,\bb{R}\BZ_{0})$.
The 3 and 4-chain properties for automorphisms are \[ \Phi(\mathsf{K}_{0}\mathsf{A}\mathsf{K}_{1}\mathsf{A}'\mathsf{K}_{2}+\mathsf{K}_{2}\mathsf{A}'\mathsf{K}_{1}\mathsf{A}\mathsf{K}_{0})= \Phi(\mathsf{K}_{0})\mathsf{A}\Phi(\mathsf{K}_{1})\mathsf{A}'\Phi(\mathsf{K}_{2})+ \Phi(\mathsf{K}_{2})\mathsf{A}'\Phi(\mathsf{K}_{1})\mathsf{A}\Phi(\mathsf{K}_{0}) \] \begin{multline*} \Phi(\mathsf{K}_{0}\mathsf{A}\mathsf{K}_{1}\mathsf{A}'\mathsf{K}_{2}\mathsf{A}''\mathsf{K}_{3}+ \mathsf{K}_{3}\mathsf{A}''\mathsf{K}_{2}\mathsf{A}'\mathsf{K}_{1}\mathsf{A}\mathsf{K}_{0})=\\ \Phi(\mathsf{K}_{0})\mathsf{A}\Phi(\mathsf{K}_{1})\mathsf{A}'\Phi(\mathsf{K}_{2}) \mathsf{A}''\Phi(\mathsf{K}_{3})+\Phi(\mathsf{K}_{3})\mathsf{A}''\Phi(\mathsf{K}_{2}) \mathsf{A}'\Phi(\mathsf{K}_{1})\mathsf{A}\Phi(\mathsf{K}_{0}) \end{multline*} for all $ \{\mathsf{K}_{0},\mathsf{K}_{1},\mathsf{K}_{2},\mathsf{K}_{3}\}\subset\Pi$, and all $ \{\mathsf{A},\mathsf{A}',\mathsf{A}''\}\subset{\mathcal A}. $ In our setting the automorphism $\Phi$ has the 3 and 4-chain properties if and only if it is a restriction to $(V,W)$ of an automorphism $\Phi'$ of $\Pi'$, generated by real and complex linear maps $\Phi'_{0}$ and $\Phi'_{2}$ on $V'$ and $W'$, respectively, satisfying \[ \Phi'_{0}(X\bra{Y})=\Phi'_{0}(X)\bra{\Phi'_{2}(Y)},\qquad \Phi'_{0}(YX)=\Phi'_{2}(Y)\Phi'_{0}(X), \] \[ \Phi'_{2}(X_{1}\bra{X}_{2})=\Phi'_{0}(X_{1})\bra{\Phi'_{0}(X_{2})},\qquad \Phi'_{2}(Y_{1}Y_{2})=\Phi'_{2}(Y_{1})\Phi'_{2}(Y_{2}). \] It is easy to verify that the map defined by $\Phi'_{2}(Y)=Y$ and $\Phi'_{0}(X)=\alpha X$ satisfies all the relations above. Hence, the ER corresponding to algebra \#19 $(W,\bb{R}\BZ_{0})$, the links corresponding to this family of automorphisms, and the links corresponding to the two ideals in this algebra all hold for all thermoelectric composites.
Finally, there is also a version of 3 and 4-chain properties for volume fraction relations corresponding to situations where $\Pi^{2}\not=\Pi$. In this case we need to verify conditions (\ref{3chain}) and (\ref{4chain}), except that the chains must belong to $\Pi^{2}$ instead of $\Pi$.
Thus, we only need to check the 3 and 4-chain relations for the remaining 7 essential Jordan multialgebras (8,9,13,17,20,21,22), for the volume fraction relation that accompanies algebra \#13, as well as for the remaining algebra/ideal pair $(W,V)/{\rm Ann}(\bb{C}\bra{\Bz_{0}})$. All these checks have been done with Maple by Huilin Chen and confirmed that the 3 and 4-chain relations were satisfied in all cases.
\section{Computing inversion keys} We have already mentioned the algorithm for computing the inversion keys for exact relations. Let us restate it in the $K(X,Y)$-language. The inversion key $\mathsf{M}_{0}$ is always sought in the form $\mathsf{M}_{0}=K(M_{0},0)$, where $M_{0}$ is one of the 4 choices: 0, $\tns{\Be_{1}}/2$, $\tns{\Be_{2}}/2$ or $\BI_{2}/2$. It is found from the rule that \[ K(X,Y)K\left(\hf\BI_{2}-M_{0},0\right)K(X,Y)\in\Pi,\quad\forall K(X,Y)\in\Pi. \] This property holds trivially for the choice $M_{0}=\BI_{2}/2$. Thus, there is something to verify only if we want to use one of the three remaining choices. Obviously, $M_{0}=0$ is the most desirable choice. We can use it only if the Jordan multialgebra $\Pi$ satisfies \begin{equation}
\label{Meq0}
K(X,Y)^{2}\in\Pi,\quad\forall K(X,Y)\in\Pi. \end{equation} If (\ref{Meq0}) fails we will try \begin{equation}
\label{Me1e1}
K(X,Y)K\left(\mat{0}{0}{0}{1},0\right)K(X,Y)\in\Pi,\quad\forall K(X,Y)\in\Pi. \end{equation} If (\ref{Me1e1}) holds, then we will be able to use $M_{0}=\tns{\Be_{1}}/2$. If this condition fails as well, then we will try \begin{equation}
\label{Me2e2}
K(X,Y)K\left(\mat{1}{0}{0}{0},0\right)K(X,Y)\in\Pi,\quad\forall K(X,Y)\in\Pi. \end{equation} If (\ref{Me2e2}) holds, then we will be able to use $M_{0}=\tns{\Be_{2}}/2$. If this condition also fails, then we choose $M_{0}=\BI_{2}/2$.
Now let us describe the algorithm for finding the inversion key for the links. In our case there are 5 of them: 3 are of type (\ref{idlink}) and 2 are of type (\ref{Philink}). Links $\Hat{\Pi}$ have two components, and each can use its own inversion key, so that $\Hat{M}_{0}=[M_{1},M_{2}]$. For simplicity of notation, we will denote \[ \Delta_{j}=\hf\BI_{2}-M_{j},\quad j=1,2. \] The inversion key $\Hat{M}_{0}$ for $\Hat{\Pi}$, given by the algebra-ideal pair $\Pi={\mathcal I}\oplus\Pi'$ via (\ref{idlink}), is identified by checking the following 3 properties: \begin{enumerate} \item $K(X',Y')K(\Delta_{2},0)K(X',Y')\in\Pi'$ for all $K(X',Y')\in\Pi'$ ($M_{2}$
must be an inversion key for $\Pi'$.) \item $K(J_{X},J_{Y})K(\Delta_{1},0)K(X,Y)+K(X,Y)K(\Delta_{1},0)K(J_{X},J_{Y})\in{\mathcal I}$
for all $K(X,Y)\in\Pi$ and $K(J_{X},J_{Y})\in{\mathcal I}$ ($M_{1}$
must be an inversion key for ${\mathcal I}$.) \item $K(X',Y')K(M_{1}-M_{2},0)K(X',Y')\in{\mathcal I}$ for all $K(X',Y')\in\Pi'$ \end{enumerate} In the case of $\Hat{\Pi}$ corresponding to an automorphism of $\Pi$ the inversion key $\Hat{M}_{0}$ is sought in the form $\Hat{M}_{0}=[M_{0},M_{0}]$, where $M_{0}$ is an inversion key for $\Pi$, satisfying additionally the relation \[ \Phi(K(X,Y)K(\Delta_{0},0)K(X,Y))=\Phi(K(X,Y))K(\Delta_{0},0)\Phi(K(X,Y)). \] Let us show that $\Hat{M}_{0}=[0,0]$ is \emph{not} an inversion key for the global automorphism. $\Hat{M}_{0}=[0,0]$ is equivalent to the property that all global automorphisms satisfy \begin{equation}
\label{glinvkey}
\Phi(\mathsf{K}^{2})=\Phi(\mathsf{K})^{2}\qquad\forall\mathsf{K}\in\mathrm{Sym}({\mathcal T}). \end{equation} Let us verify that this is not always the case. There are two branches of the global automorphisms: \[ \Phi_{+}(\mathsf{K})=\mathsf{C}\mathsf{K}\mathsf{C}^{T},\qquad\mathsf{C}=K(C,0),\quad C\in O(2,\bb{C}), \] and \[ \Phi_{-}(\mathsf{K})=-\mathsf{C}\mathsf{K}\mathsf{C}^{T},\qquad\mathsf{C}=K(iC,0),\quad C\in O(2,\bb{C}). \] For $\Phi_{\pm}$, equation (\ref{glinvkey}) is equivalent to $\mathsf{C}^{T}\mathsf{C}=\pm\mathsf{I}$. We compute for $C\in O(2,\bb{C})$ \[ K(C,0)^{T}K(C,0)=K(C^{H}C,0),\qquad K(iC,0)^{T}K(iC,0)=K(C^{H}C,0). \] We see that $\mathsf{C}^{T}\mathsf{C}=-\mathsf{I}$ is never satisfied, while $\mathsf{C}^{T}\mathsf{C}=\mathsf{I}$ holds if and only if $C\in O(2,\bb{R})$. In fact, the inversion key for the global automorphism must be \begin{equation}
\label{globinvkey}
\Hat{M}_{\rm glob}=\left[\hf\BI_{2},\hf\BI_{2}\right]. \end{equation} Nevertheless, we can write an arbitrary transformation $\Phi_{+}$ as a superposition of a transformation in $O(2,\bb{R})$ and a transformation corresponding to \begin{equation}
\label{nontriv}
\BC_{+}(t)=\mat{\cosh t}{i\sinh t}{-i\sinh t}{\cosh t},\quad t\in\bb{R}. \end{equation} To obtain transformations $\Phi_{-}$ we only need to compose a transformation $\Phi_{+}$ with $\Phi_{*}(K(X,Y))=K(-X,Y)$.
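As a quick sanity check (informal, alongside the Maple computations used elsewhere), the matrix (\ref{nontriv}) is indeed complex-orthogonal, and it coincides with $\BC_{+}(c)$ of (\ref{Cpmdef}) at $c=it$, since $\cos(it)=\cosh t$ and $\sin(it)=i\sinh t$:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# C_+(t) as in (nontriv)
C = sp.Matrix([[sp.cosh(t), sp.I * sp.sinh(t)],
               [-sp.I * sp.sinh(t), sp.cosh(t)]])

# It is complex-orthogonal: C C^T = I
assert sp.simplify(C * C.T - sp.eye(2)) == sp.zeros(2, 2)

# It coincides with C_+(c) of (Cpmdef) at c = i t
c = sp.I * t
Cp = sp.Matrix([[sp.cos(c), sp.sin(c)], [-sp.sin(c), sp.cos(c)]])
assert sp.simplify(Cp - C) == sp.zeros(2, 2)
```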
\section{Summary of non-redundant and nontrivial ERs and Links} \begin{center}
\textbf{Exact relations} \end{center} \begin{center}
\begin{tabular}[h]{|c|c|c|}
\hline item \# & algebra & inversion key\\ \hline 8& $(\bb{C}\BI,\BGF)$ &$M_{0}=0$\\ \hline 13& ${\rm Ann}(\bb{C}\bra{\Bz_{0}})$ &$M_{0}=0$\\ \hline 17& $({\mathcal D},{\mathcal D}')$ &$M_{0}=\BI_{2}/2$\\ \hline 20&$(W,V_{\infty})$ &$M_{0}=\BI_{2}/2$\\ \hline 21&$(W,V)$ &$M_{0}=\BI_{2}/2$\\ \hline 22&$(\mathrm{Sym}(\bb{C}^{2}),0)$ &$M_{0}=\BI_{2}/2$\\ \hline \end{tabular} \end{center} \begin{center}
\textbf{Links} \end{center}
\begin{tabular}[h]{|c|c|c|c|}
\hline item \# & algebra & link & inversion key\\ \hline 13& ${\rm Ann}(\bb{C}\bra{\Bz_{0}})$ &$\Pi^{2}=\{0\}$&$M_{0}=\BI_{2}/2$\\ \hline 19&$(W,\bb{R}\BZ_{0})$ &$\Phi_{2}(Y)=Y$, $\Phi_{0}(\BZ_{0})=\alpha\BZ_{0}$&$\Hat{M}_{0}=[\BI_{2}/2,\BI_{2}/2]$\\ \hline 19&$(W,\bb{R}\BZ_{0})$ & ${\mathcal I}=(0,\bb{R}\BZ_{0})$&$\Hat{M}_{0}=[\BI_{2}/2,\BI_{2}/2]$\\ \hline 19&$(W,\bb{R}\BZ_{0})$ & ${\mathcal I}=(\tns{\Bz_{0}},0)$&$\Hat{M}_{0}=[\BI_{2}/2,\BI_{2}/2]$\\ \hline 21&$(W,V)$ &${\mathcal I}={\rm Ann}(\bb{C}\bra{\Bz_{0}})$&$\Hat{M}_{0}=[\BI_{2}/2,\BI_{2}/2]$\\ \hline 23&$\mathrm{Sym}({\mathcal T})$ &$\Phi(K(X,Y))=K(\pm CXC^{H},CYC^{T})$, $C\in O(2,\bb{C})$& $\Hat{M}_{0}=[\BI_{2}/2,\BI_{2}/2]$\\ \hline \end{tabular}
\section{Global Link} The global link will actually consist of 3 families of links: \begin{enumerate} \item One comes from choosing $\BC\in O(2,\bb{R})$ (it is obtained using the $\Hat{M}_{0}=[0,0]$ inversion key); \item The second family comes from the single global automorphism $\Phi(K(X,Y))=K(-X,Y)$; \item The third family of links corresponds to the family (\ref{nontriv}) of
global automorphisms. \end{enumerate}
The last two families require using inversion key (\ref{globinvkey}).
\subsection{$O(2,\bb{R})$ family} Let us begin with the simplest case, where $\Hat{M}_{0}=[0,0]$, i.e. with finding links corresponding to global automorphisms defined by $\BC\in O(2,\bb{R})$. The simplified version is \[ \mathsf{L}_{2}=K(\BI_{2}+\beta_{2}i\BR_{\perp},0)-K(\BC,0)(\mathsf{L}_{1}-K(\BI_{2}+\beta_{1}i\BR_{\perp},0))K(\BC^{T},0). \] For $\BC\in O(2,\bb{R})$ we obtain \[ \mathsf{L}_{2}=(\beta_{2}\mp(\det\BC)\beta_{1})\tns{\BR_{\perp}}+(\BC\otimes\BI_{2})\mathsf{L}_{1}(\BC^{T}\otimes\BI_{2}). \] To obtain the general link we replace $\mathsf{L}_{j}$ in the above formula with $(\BGL_{j}^{-1/2}\otimes\BI_{2})\mathsf{L}_{j}(\BGL_{j}^{-1/2}\otimes\BI_{2})$. Solving for $\mathsf{L}_{2}$, we obtain \[ \mathsf{L}_{2}=(\beta_{2}\mp(\det\BC)\beta_{1})\sqrt{\det\BGL_{2}}\tns{\BR_{\perp}}+ (\BGL_{2}^{1/2}\BC\BGL_{1}^{-1/2}\otimes\BI_{2})\mathsf{L}_{1}(\BGL_{1}^{-1/2}\BC^{T}\BGL_{2}^{1/2}\otimes\BI_{2}). \] We observe that the polar decomposition implies that every non-singular $2\times 2$ real matrix can be written as $\BGL^{1/2}\BC$ for some symmetric positive definite matrix $\BGL$ and $\BC\in O(2,\bb{R})$. Therefore, we obtain the link \begin{equation}
\label{lingloblink}
\mathsf{L}_{2}=\beta_{0}\mathsf{T}+(\BB_{0}\otimes\BI_{2})\mathsf{L}_{1}(\BB_{0}^{T}\otimes\BI_{2}),\qquad\mathsf{T}=\tns{\BR_{\perp}}. \end{equation} where $\beta_{0}\in\bb{R}$ and $\BB_{0}\in GL(2,\bb{R})$ are parameters of the family of links, restricted by the requirement that $\mathsf{L}_{1,2}$ be positive definite. We note that by construction for any pair $\Hat{\mathsf{L}}_{0}=[\mathsf{L}_{0}^{(1)},\mathsf{L}_{0}^{(2)}]$ of isotropic materials, there is a link of the form (\ref{lingloblink}) passing through $\Hat{\mathsf{L}}_{0}$. That means that \emph{any} link $\mathsf{L}_{2}=\mathfrak{L}(\mathsf{L}_{1})$ passing through $\Hat{\mathsf{L}}_{0}$ can be obtained from a link passing through $\Hat{\mathsf{L}}_{0}^{0}=[\mathsf{I},\mathsf{I}]$. Indeed, let $\mathfrak{L}_{1}$ and $\mathfrak{L}_{2}$ be the links of the form (\ref{lingloblink}), passing through $[\mathsf{L}_{0}^{(1)},\mathsf{I}]$ and $[\mathsf{L}_{0}^{(2)},\mathsf{I}]$, respectively. Then, the function \[ \mathfrak{L}_{0}(\mathsf{L}_{1})=\mathfrak{L}_{2}(\mathfrak{L}(\mathfrak{L}_{1}^{-1}(\mathsf{L}_{1}))) \] is a link passing through $\Hat{\mathsf{L}}_{0}^{0}$. Hence, \[ \mathfrak{L}(\mathsf{L}_{1})=\mathfrak{L}_{2}^{-1}(\mathfrak{L}_{0}(\mathfrak{L}_{1}(\mathsf{L}_{1}))). \] We therefore have the option of deriving only the links passing through $\Hat{\mathsf{L}}_{0}^{0}=[\mathsf{I},\mathsf{I}]$, which, combined with (\ref{lingloblink}), will generate all global links. The same goes for exact relations: we only need to compute the ones passing through $\mathsf{L}_{0}=\mathsf{I}$.
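The polar-decomposition step invoked above can be illustrated numerically. The NumPy sketch below (with an arbitrary random matrix, not data from the text) builds the factorization $\BB=\BGL^{1/2}\BC$ via the symmetric positive definite square root of $\BB\BB^{T}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2))
while abs(np.linalg.det(B)) < 1e-6:  # make sure B is non-singular
    B = rng.standard_normal((2, 2))

# Left polar factorization B = Lambda^{1/2} C with Lambda = B B^T SPD
w, V = np.linalg.eigh(B @ B.T)
Lam_half = V @ np.diag(np.sqrt(w)) @ V.T  # SPD square root of B B^T
C = np.linalg.solve(Lam_half, B)          # C = Lambda^{-1/2} B

assert np.allclose(C @ C.T, np.eye(2))    # C is orthogonal
assert np.allclose(Lam_half @ C, B)       # the factorization reproduces B
```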
\subsection{$\Phi(K(X,Y))=K(-X,Y)$ family} Next, let us compute the link corresponding to the map $\Phi(\mathsf{K}(X,Y))=\mathsf{K}(-X,Y)$. This has the form \begin{equation}
\label{initlink0}
[(\mathsf{L}_{2}-\mathsf{I})^{-1}+\hf\mathsf{I}]^{-1}=K(i\BI_{2},0)[(\mathsf{L}_{1}-\mathsf{I})^{-1}+\hf\mathsf{I}]^{-1}K(i\BI_{2},0). \end{equation} The idea is to identify two fixed points of this transformation $\mathsf{F}_{+}$ and $\mathsf{F}_{-}$ and then rewrite (\ref{initlink0}) as \[ (\mathsf{L}_{2}-\mathsf{F}_{-})^{-1}(\mathsf{L}_{2}-\mathsf{F}_{+})=\mathsf{S}_{-}^{-1}(\mathsf{L}_{1}-\mathsf{F}_{-})^{-1}(\mathsf{L}_{1}-\mathsf{F}_{+})\mathsf{S}_{+}. \] We can solve (\ref{initlink0}) for $\mathsf{L}_{2}$ and then express $(\mathsf{L}_{2}-\mathsf{F}_{-})^{-1}(\mathsf{L}_{2}-\mathsf{F}_{+})$ as $\mathsf{S}_{-}^{-1}(\mathsf{L}_{1}-\mathsf{F}_{-})^{-1}(\mathsf{L}_{1}-\mathsf{F}_{+})\mathsf{S}_{+}$, which leads to the formulas for $\mathsf{S}_{\pm}$: \[ \mathsf{S}_{\pm}=K(i\BI_{2},0)\mathsf{F}_{\pm}. \] It is reasonable to look for fixed points in the form \begin{equation}
\label{fixedans}
\mathsf{F}=K\left(\mat{x}{y}{\bra{y}}{x},0\right), \end{equation} since all tensors in (\ref{initlink0}), except $\mathsf{L}_{1,2}$, have that form. We compute (using Maple) that there exists a pair of fixed points of the form $\mathsf{F}_{\pm}=\pm\mathsf{T}$. Therefore, $\mathsf{S}_{\pm}=\mp K(\BR_{\perp},0)$. So we have the link $\Hat{\bb{M}}_{0}$ of the form \begin{equation}
\label{gnonlin0} (\mathsf{L}_{2}+\mathsf{T})^{-1}(\mathsf{L}_{2}-\mathsf{T})=K(\BR_{\perp},0)(\mathsf{L}_{1}+\mathsf{T})^{-1}(\mathsf{L}_{1}-\mathsf{T})K(\BR_{\perp},0). \end{equation} We can rewrite (\ref{gnonlin0}) by observing that \begin{equation}
\label{convform}
(\mathsf{L}+\mathsf{T})^{-1}(\mathsf{L}-\mathsf{T})-\mathsf{I}=-2(\mathsf{L}+\mathsf{T})^{-1}\mathsf{T}. \end{equation} Thus, (\ref{gnonlin0}) becomes \[ (\mathsf{L}_{2}+\mathsf{T})^{-1}=\mathsf{T}-(\BR_{\perp}\otimes\BI_{2})(\mathsf{L}_{1}+\mathsf{T})^{-1}(\BR_{\perp}^{T}\otimes\BI_{2}). \] Applying the link (\ref{lingloblink}) ($\mathsf{L}_{2}'=\mathsf{L}_{2}+\mathsf{T}$ and $\mathsf{L}_{1}'=(\BR_{\perp}\otimes\BI_{2})\mathsf{L}_{1}(\BR_{\perp}^{T}\otimes\BI_{2})+\mathsf{T}$), we can simplify our link to \begin{equation}
\label{gnonlinalt} \mathsf{L}_{2}^{-1}=\mathsf{T}-\mathsf{L}_{1}^{-1}. \end{equation}
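Identity (\ref{convform}) is a one-line algebraic fact, $(\mathsf{L}+\mathsf{T})^{-1}(\mathsf{L}-\mathsf{T})-\mathsf{I}=(\mathsf{L}+\mathsf{T})^{-1}\bigl(\mathsf{L}-\mathsf{T}-(\mathsf{L}+\mathsf{T})\bigr)$, valid whenever $\mathsf{L}+\mathsf{T}$ is invertible. A quick numerical spot-check with arbitrary matrices (an informal check, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
L = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))
Minv = np.linalg.inv(L + T)

# (L+T)^{-1}(L-T) - I = -2 (L+T)^{-1} T, cf. (convform)
lhs = Minv @ (L - T) - np.eye(n)
rhs = -2 * Minv @ T
assert np.allclose(lhs, rhs)
```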
\subsection{(\ref{nontriv}) family} Finally, we need to compute the link corresponding to $\BC\in O(2,\bb{C})$ given by (\ref{nontriv}). We first compute the link $\Hat{\bb{M}}_{0}$ defined by \begin{equation}
\label{initlink}
[(\mathsf{L}_{2}-\mathsf{I})^{-1}+\hf\mathsf{I}]^{-1}=\mathsf{C}[(\mathsf{L}_{1}-\mathsf{I})^{-1}+\hf\mathsf{I}]^{-1}\mathsf{C}, \end{equation} where $\mathsf{C}=K(\BC,0)$ and $\BC$ is given by (\ref{nontriv}).
We rewrite (\ref{initlink}) in a symmetrical way with respect to $\mathsf{L}_{1}$ and $\mathsf{L}_{2}$ using the fixed points $\mathsf{F}_{\pm}$ of the form \begin{equation}
\label{fixedans0}
\mathsf{F}=K\left(\mat{0}{y}{\bra{y}}{0},0\right), \end{equation} as we discovered before. We also find that \[ \mathsf{S}_{\pm}=\hf(\mathsf{C}^{-1}-\mathsf{C})(\mathsf{I}-\mathsf{F}_{\pm})+\mathsf{C}. \]
Using Maple, we find that $\mathsf{F}_{\pm}=\pm\mathsf{T}$. Then $\mathsf{S}_{\pm}=e^{\mp t}\mathsf{I}$, resulting in the formula \[ (\mathsf{L}_{2}+\mathsf{T})^{-1}(\mathsf{L}_{2}-\mathsf{T})=e^{-2t}(\mathsf{L}_{1}+\mathsf{T})^{-1}(\mathsf{L}_{1}-\mathsf{T}). \] Equivalently, using (\ref{convform}), \begin{equation}
\label{gnonlin} (\mathsf{L}_{2}+\mathsf{T})^{-1}=e^{-2t}(\mathsf{L}_{1}+\mathsf{T})^{-1}+e^{-t}\sinh(t)\mathsf{T}. \end{equation} Applying the link (\ref{lingloblink}) we can simplify our link to \begin{equation}
\label{gnonlin1} \mathsf{L}_{2}^{-1}=\alpha_{0}\mathsf{T}+\mathsf{L}_{1}^{-1}, \end{equation} which forms a group of transformations $\mathfrak{L}_{\alpha_{0}}$, such that $\mathfrak{L}_{\alpha_{0}}\circ\mathfrak{L}_{\beta_{0}}=\mathfrak{L}_{\alpha_{0}+\beta_{0}}$.
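The passage from the symmetric form to (\ref{gnonlin}) can be checked in the scalar analogue ($\mathsf{T}\to 1$; the sketch below assumes, as the identity $e^{-t}\sinh t=(1-e^{-2t})/2$ suggests, nothing beyond scalar algebra, and is an informal check only). Substituting $u=e^{-t}$:

```python
import sympy as sp

u, l1 = sp.symbols('u l1', positive=True)  # u stands for exp(-t)
r = u**2 * (l1 - 1) / (l1 + 1)  # scalar analogue of e^{-2t}(L1+T)^{-1}(L1-T)
l2 = (1 + r) / (1 - r)          # solve (l2-1)/(l2+1) = r for l2
# scalar analogue of (gnonlin), using e^{-t} sinh t = (1 - e^{-2t})/2 = (1 - u^2)/2:
diff = 1 / (l2 + 1) - u**2 / (l1 + 1) - (1 - u**2) / 2
assert sp.simplify(diff) == 0
```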
\subsection{General form and properties of the global link} Combining the results obtained so far, we conclude that any global link can be obtained as a superposition of the following subgroups of links: \begin{equation}
\label{globlinkcomb}
\begin{cases} \mathsf{L}_{2}^{-1}=\mathsf{T}-\mathsf{L}_{1}^{-1},\\ \mathsf{L}_{2}^{-1}=\alpha_{0}\mathsf{T}+\mathsf{L}_{1}^{-1},\\ \mathsf{L}_{2}=\beta_{0}\mathsf{T}+\mathsf{L}_{1},\\ \mathsf{L}_{2}=(\BB_{0}\otimes\BI_{2})\mathsf{L}_{1}(\BB_{0}^{T}\otimes\BI_{2}).
\end{cases} \end{equation} The most general transformation that can be obtained by composing transformations (\ref{globlinkcomb}) with each other is \begin{equation}
\label{globauto}
\Psi(\mathsf{L})=(\BB_{0}\otimes\BI_{2})\mathsf{T}(\alpha_{1}\mathsf{L}+\beta_{1}\mathsf{T})^{-1}(\alpha_{0}\mathsf{L}+\beta_{0}\mathsf{T})(\BB_{0}^{T}\otimes\BI_{2}). \end{equation} We remark that \[
\Psi(\mathsf{L})=(\BB_{0}\otimes\BI_{2})(\alpha_{0}\mathsf{L}+\beta_{0}\mathsf{T})(\alpha_{1}\mathsf{L}+\beta_{1}\mathsf{T})^{-1}\mathsf{T}(\BB_{0}^{T}\otimes\BI_{2}). \] If $\Psi(\mathsf{L})$ is given by (\ref{globauto}) we will write $\Psi_{\BA_{0},\BB_{0}}(\mathsf{L})$ to refer to it, where \[ \BA_{0}=\mat{\alpha_{0}}{\beta_{0}}{\alpha_{1}}{\beta_{1}}. \] Different pairs of matrices $\{\BA,\BB\}\subset GL(2,\bb{R})$ can define the same transformation $\Psi_{\BA,\BB}$. Specifically, \begin{equation}
\label{projinv}
\Psi_{\lambda\BA,\BB}=\Psi_{\BA,\BB},\quad\Psi_{\BA,\lambda\BB}=\Psi_{\BA_{\lambda},\BB},\qquad\BA_{\lambda}=\mat{\lambda^{2}}{0}{0}{1}\BA \end{equation} for any nonzero real number $\lambda$. Thus, without loss of generality, we may assume that
$|\det\BA|=|\det\BB|=1$. Even with this assumption, we still have the symmetries \[ \Psi_{-\BA,\BB}=\Psi_{\BA,\BB},\qquad\Psi_{\BA,-\BB}=\Psi_{\BA,\BB}. \]
Let us derive the formula for superposition of two transformations (\ref{globauto}). We note that $\Psi_{\BA,\BI_{2}}(\mathsf{L})=M_{\BA}(\mathsf{L}\mathsf{T})\mathsf{T}$, where $M_{\BA}(z)$ is a fractional-linear M\"obius transformation with real matrix $\BA$. The composition law for M\"obius transformations then implies that \[ \Psi_{\BA_{1},\BI_{2}}\circ\Psi_{\BA_{2},\BI_{2}}=\Psi_{\BA_{1}\BA_{2},\BI_{2}}. \] The composition formula $\Psi_{\BI_{2},\BB_{1}}\circ\Psi_{\BA,\BB_{2}}=\Psi_{\BA,\BB_{1}\BB_{2}}$ is completely evident. Finally, a direct calculation shows that \[ \Psi_{\BA,\BI_{2}}\circ\Psi_{\BI_{2},\BB}=\Psi_{\BA^{\BB},\BB},\qquad \BA^{\BB}=\mat{\det\BB}{0}{0}{1}^{-1}\BA\mat{\det\BB}{0}{0}{1}, \] where we have used the projective invariance property (\ref{projinv}). These formulas allow us to derive the full composition formula \begin{multline*}
\Psi_{\BA_{1},\BB_{1}}\circ\Psi_{\BA_{2},\BB_{2}}=\Psi_{\BI_{2},\BB_{1}}\circ\Psi_{\BA_{1},\BI_{2}}\circ \Psi_{\BA_{2}^{\BB_{2}^{-1}},\BI_{2}}\circ\Psi_{\BI_{2},\BB_{2}}= \Psi_{\BI_{2},\BB_{1}}\circ\Psi_{\BA_{1}\BA_{2}^{\BB_{2}^{-1}},\BI_{2}}\circ\Psi_{\BI_{2},\BB_{2}}=\\ \Psi_{\BI_{2},\BB_{1}}\circ\Psi_{(\BA_{1}\BA_{2}^{\BB_{2}^{-1}})^{\BB_{2}},\BB_{2}}= \Psi_{\BI_{2},\BB_{1}}\circ\Psi_{\BA_{1}^{\BB_{2}}\BA_{2},\BB_{2}}=\Psi_{\BA_{1}^{\BB_{2}}\BA_{2},\BB_{1}\BB_{2}}. \end{multline*} We note that if $\det\BB_{2}=1$, then $\Psi_{\BA_{1},\BB_{1}}\circ\Psi_{\BA_{2},\BB_{2}}=\Psi_{\BA_{1}\BA_{2},\BB_{1}\BB_{2}}$. The most general transformation $\Psi_{\BA,\BB}$, such that $\Psi_{\BA,\BB}(\mathsf{I})=\mathsf{I}$ has the form \begin{equation}
\label{Idfixed}
\BA=\mat{\alpha_{0}}{\beta_{0}}{\beta_{0}}{\alpha_{0}},\quad\BB\in O(2,\bb{R}),\qquad
|\alpha_{0}^{2}-\beta_{0}^{2}|=1. \end{equation}
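The composition rule derived above turns the pairs $(\BA,\BB)$ into a group with product $(\BA_{1},\BB_{1})\cdot(\BA_{2},\BB_{2})=(\BA_{1}^{\BB_{2}}\BA_{2},\BB_{1}\BB_{2})$. A quick numerical sanity check of this rule is the following numpy sketch (an illustration, not part of the original Maple computations); it verifies that the product is associative, as it must be for a composition of maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def conj(A, B):
    # A^B = diag(det B, 1)^{-1} A diag(det B, 1)
    D = np.diag([np.linalg.det(B), 1.0])
    return np.linalg.inv(D) @ A @ D

def compose(p1, p2):
    # (A1, B1) * (A2, B2) = (A1^{B2} A2, B1 B2)
    (A1, B1), (A2, B2) = p1, p2
    return conj(A1, B2) @ A2, B1 @ B2

# random pairs; B is shifted so that det B stays away from zero
pairs = [(rng.standard_normal((2, 2)),
          rng.standard_normal((2, 2)) + 3*np.eye(2)) for _ in range(3)]
lhs = compose(compose(pairs[0], pairs[1]), pairs[2])
rhs = compose(pairs[0], compose(pairs[1], pairs[2]))
assoc_ok = all(np.allclose(x, y) for x, y in zip(lhs, rhs))
```

Associativity holds exactly because $\BA\mapsto\BA^{\BB}$ is a conjugation and $\BB\mapsto\mathrm{diag}(\det\BB,1)$ is multiplicative.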
\section{Formulas for computing ERs} The relation between $K(X,Y)$ and the $2\times 2$ block-matrix representation (\ref{blckM}) is \begin{equation}
\label{K2block}
K(X,Y)=\mat{\varphi(X_{11})+\psi(Y_{11})}{\varphi(X_{12})+\psi(Y_{12})}{\varphi(X_{21})+\psi(Y_{21})}{\varphi(X_{22})+\psi(Y_{22})}, \end{equation} where \[ \varphi(\alpha+i\beta)=\mat{\alpha}{-i\beta}{i\beta}{\alpha},\qquad\psi(\alpha+i\beta)=\mat{\alpha}{\beta}{\beta}{-\alpha}. \]
When $M_{0}=0$ we have \begin{equation}
\label{M0red}
\bb{M}_{0}=\{\mathsf{I}+\mathsf{K}\in\mathrm{Sym}^{+}({\mathcal T}):\mathsf{K}\in\Pi_{0}\}. \end{equation}
For $M_{0}=\BI_{2}/2$ we have \begin{equation}
\label{L2K}
\mathsf{K}=\left[(\mathsf{L}-\mathsf{I})^{-1}+\hf\mathsf{I}\right]^{-1}=2(\mathsf{L}+\mathsf{I})^{-1}(\mathsf{L}-\mathsf{I})= 2\mathsf{I}-4(\mathsf{L}+\mathsf{I})^{-1}. \end{equation} Equivalently, $4(\mathsf{L}+\mathsf{I})^{-1}=2\mathsf{I}-\mathsf{K}$. Solving for $\mathsf{L}$ we obtain \[ \mathsf{L}=\mathsf{I}+2(2\mathsf{I}-\mathsf{K})^{-1}\mathsf{K}=2(\mathsf{I}-\mathsf{K}/2)^{-1}-\mathsf{I}. \] Since our goal is to compute the image of the \emph{subspace} $\Pi_{0}$ under the above transformation, we may just as well use the formula \begin{equation}
\label{invform0}
\bb{M}_{0}=\{\mathsf{L}\in\mathrm{Sym}^{+}({\mathcal T}):\mathsf{L}=2(\mathsf{I}+\mathsf{K})^{-1}-\mathsf{I},\ \mathsf{K}\in\Pi_{0}\}. \end{equation} Sometimes it might be easier to characterize $\bb{M}_{0}$ by expressing $\mathsf{K}$ in terms of $\mathsf{L}$ and rewriting the equations defining $\Pi_{0}$ in terms of $\mathsf{L}$. Then \begin{equation}
\label{invform1}
\bb{M}_{0}=\left\{\mathsf{L}\in\mathrm{Sym}^{+}({\mathcal T}):(\mathsf{L}+\mathsf{I})^{-1}-\hf\mathsf{I}\in\Pi_{0}\right\}. \end{equation} Both formulas require inverting the $2\times 2$ block-matrices. One may choose to compute block-matrix inverse in two ways: in the $2\times 2$ block-matrix notation or in the $K(X,Y)$ notation. The $2\times 2$ block-matrix formalism is standard. Let us assume that $\BF_{11}$ in \[ \mathsf{F}=\mat{\BF_{11}}{\BF_{12}}{\BF_{21}}{\BF_{22}}, \] is invertible. Then, in order to invert $\mathsf{F}$ we need to solve the system of equations: \[ \begin{cases}
\BF_{11}\Bu_{1}+\BF_{12}\Bu_{2}=\Bv_{1},\\
\BF_{21}\Bu_{1}+\BF_{22}\Bu_{2}=\Bv_{2} \end{cases} \] We solve it using the method of elimination. We solve the first equation for $\Bu_{1}$: \[ \Bu_{1}=\BF_{11}^{-1}\Bv_{1}-\BF_{11}^{-1}\BF_{12}\Bu_{2}, \] and substitute the result into the second equation:
\[ \BF_{21}\BF_{11}^{-1}\Bv_{1}+(\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})\Bu_{2}=\Bv_{2}. \] We then solve this for $\Bu_{2}$: \[ \Bu_{2}=-(\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})^{-1}\BF_{21}\BF_{11}^{-1}\Bv_{1}+ (\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})^{-1}\Bv_{2}, \] and substitute this into the formula for $\Bu_{1}$: \[ \Bu_{1}=(\BF_{11}^{-1}+\BF_{11}^{-1}\BF_{12}(\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})^{-1}\BF_{21}\BF_{11}^{-1})\Bv_{1}- \BF_{11}^{-1}\BF_{12}(\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})^{-1}\Bv_{2}. \] A little matrix algebra shows that \[ \Bu_{1}=(\BF_{11}-\BF_{12}\BF_{22}^{-1}\BF_{21})^{-1}\Bv_{1}- \BF_{11}^{-1}\BF_{12}(\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12})^{-1}\Bv_{2}. \] This gives us a formula for $\mathsf{F}^{-1}$: \[ \mathsf{F}^{-1}=\mat{\BS_{11}^{-1}}{-\BF_{11}^{-1}\BF_{12}\BS_{22}^{-1}}{-\BS_{22}^{-1}\BF_{21}\BF_{11}^{-1}}{\BS_{22}^{-1}}, \] where the matrices \[ \BS_{11}=\BF_{11}-\BF_{12}\BF_{22}^{-1}\BF_{21},\qquad \BS_{22}=\BF_{22}-\BF_{21}\BF_{11}^{-1}\BF_{12} \] are called the Schur complements of $\BF_{22}$ and $\BF_{11}$, respectively. Of course, \[ \BF_{11}^{-1}\BF_{12}\BS_{22}^{-1}=\BS_{11}^{-1}\BF_{12}\BF_{22}^{-1},\qquad \BS_{22}^{-1}\BF_{21}\BF_{11}^{-1}=\BF_{22}^{-1}\BF_{21}\BS_{11}^{-1}. \] Therefore, we can write $\mathsf{F}^{-1}$ in two equivalent, more symmetric forms \begin{equation}
\label{Finv}
\mathsf{F}^{-1}=\mat{\BS_{11}^{-1}}{-\BS_{11}^{-1}\BF_{12}\BF_{22}^{-1}}{-\BS_{22}^{-1}\BF_{21}\BF_{11}^{-1}}{\BS_{22}^{-1}}=\mat{\BS_{11}^{-1}}{-\BF_{11}^{-1}\BF_{12}\BS_{22}^{-1}}{-\BF_{22}^{-1}\BF_{21}\BS_{11}^{-1}}{\BS_{22}^{-1}}. \end{equation} Equivalently, \[ \mathsf{F}^{-1}=\mat{\BS_{11}}{0}{0}{\BS_{22}}^{-1}\mat{\BF_{11}}{-\BF_{12}}{-\BF_{21}}{\BF_{22}} \mat{\BF_{11}}{0}{0}{\BF_{22}}^{-1} \] \[ \mathsf{F}^{-1}= \mat{\BF_{11}}{0}{0}{\BF_{22}}^{-1}\mat{\BF_{11}}{-\BF_{12}}{-\BF_{21}}{\BF_{22}} \mat{\BS_{11}}{0}{0}{\BS_{22}}^{-1} \]
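The block inversion formula (\ref{Finv}) is easy to spot-check numerically; the following numpy sketch (an illustration, not a substitute for the derivation) builds a random $4\times 4$ block matrix and verifies the Schur-complement form of its inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
inv = np.linalg.inv

# random blocks; diagonal blocks shifted to be safely invertible
F11 = rng.standard_normal((2, 2)) + 4*np.eye(2)
F22 = rng.standard_normal((2, 2)) + 4*np.eye(2)
F12 = rng.standard_normal((2, 2))
F21 = rng.standard_normal((2, 2))
F = np.block([[F11, F12], [F21, F22]])

S11 = F11 - F12 @ inv(F22) @ F21   # Schur complement of F22
S22 = F22 - F21 @ inv(F11) @ F12   # Schur complement of F11

Finv = np.block([[inv(S11), -inv(S11) @ F12 @ inv(F22)],
                 [-inv(S22) @ F21 @ inv(F11), inv(S22)]])
inverse_ok = np.allclose(F @ Finv, np.eye(4))
```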
A necessary and sufficient condition for $\mathsf{F}$ to be in $\mathrm{Sym}^{+}({\mathcal T})$ is $\BF_{11}>0$ and $\BS_{22}>0$ (or $\BF_{22}>0$ and $\BS_{11}>0$).
We can also derive formulas for $K(X,Y)^{-1}$. In this case we solve the equation \[ X\Bu+Y\bra{\Bu}=\Bv \] for $\Bu$ or $\bra{\Bu}$: \[ \Bu=X^{-1}\Bv-X^{-1}Y\bra{\Bu}, \] or \[ \bra{\Bu}=Y^{-1}\Bv-Y^{-1}X\Bu. \] Taking complex conjugates we get \[ \bra{\Bu}=\bra{X}^{-1}\bra{\Bv}-\bra{X}^{-1}\bra{Y}\Bu, \] or \[ \Bu=\bra{Y}^{-1}\bra{\Bv}-\bra{Y}^{-1}\bra{X}\bra{\Bu}. \] We then substitute this into the original equation: \[ (X-Y\bra{X}^{-1}\bra{Y})\Bu=\Bv-Y\bra{X}^{-1}\bra{\Bv}, \] or \[ (Y-X\bra{Y}^{-1}\bra{X})\bra{\Bu}=\Bv-X\bra{Y}^{-1}\bra{\Bv}. \] We now solve for $\Bu$ (or $\bra{\Bu}$): \[ \Bu=(X-Y\bra{X}^{-1}\bra{Y})^{-1}\Bv-(X-Y\bra{X}^{-1}\bra{Y})^{-1}Y\bra{X}^{-1}\bra{\Bv}, \] or \[ \bra{\Bu}=(Y-X\bra{Y}^{-1}\bra{X})^{-1}\Bv-(Y-X\bra{Y}^{-1}\bra{X})^{-1}X\bra{Y}^{-1}\bra{\Bv}. \] Hence, we obtain two formulas for $K(X,Y)^{-1}$: \begin{equation}
\label{Kinv}
K(X,Y)^{-1}=K(S_{X}^{-1},-S_{X}^{-1}Y\bra{X}^{-1})= K(-S_{Y}^{-1}\bra{X}Y^{-1},S_{Y}^{-1})=K(S_{X}^{-1},S_{Y}^{-1}) \end{equation} where \[ S_{X}=X-Y\bra{X}^{-1}\bra{Y},\qquad S_{Y}=\bra{Y}-\bra{X}Y^{-1}X \] play the role of Schur complements of $X$ and $Y$, respectively.
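Formula (\ref{Kinv}) can be tested by applying the real-linear map $\Bu\mapsto X\Bu+Y\bra{\Bu}$ and then undoing it; a small numpy sketch (illustration only, with randomly chosen $X$, $Y$):

```python
import numpy as np

rng = np.random.default_rng(2)
inv = np.linalg.inv

def cmat(scale=1.0, shift=0.0):
    # random complex 2x2 matrix, optionally shifted away from singularity
    Z = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
    return scale*Z + shift*np.eye(2)

X = cmat(shift=4.0)
Y = cmat(scale=0.5)
u = rng.standard_normal(2) + 1j*rng.standard_normal(2)

v = X @ u + Y @ u.conj()               # apply the map K(X, Y)

SX = X - Y @ inv(X.conj()) @ Y.conj()  # "Schur complement" of X
u_rec = inv(SX) @ v - inv(SX) @ Y @ inv(X.conj()) @ v.conj()
recovered = np.allclose(u_rec, u)
```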
\section{$\Pi_{0}=(\bb{C}\BI_{2},\BGF)$ and $\Pi_{0}={\rm Ann}(\bb{C}\bra{z_{0}})$} For $\Pi_{0}=(\bb{C}\BI_{2},\BGF)$ we have \[
\mathsf{K}=\mat{\psi(z)}{0}{0}{\psi(z)}+\mat{\varphi(\alpha)}{\varphi(-i\beta)}{\varphi(i\beta)}{\varphi(\alpha)}= \mat{\psi(z)+\varphi(\alpha)}{\varphi(-i\beta)}{\varphi(i\beta)}{\psi(z)+\varphi(\alpha)}. \] Let $\BK=\psi(z)+\varphi(\alpha)\in\mathrm{Sym}(\bb{R}^{2})$. This change of variables is a 1-1 correspondence between $\bb{C}\times\bb{R}$ and $\mathrm{Sym}(\bb{R}^{2})$. Thus, we obtain \[ \mathsf{K}=\mat{\BK}{-\beta\BR_{\perp}}{\beta\BR_{\perp}}{\BK}=\BI_{2}\otimes\BK+\beta\BR_{\perp}\otimes\BR_{\perp}. \] Recall that $\mathsf{L}_{0}=\tns{\BI_{2}}$. Then \[ \mathsf{L}_{0}+\mathsf{K}=\BI_{2}\otimes(\BK+\BI_{2})+\beta\BR_{\perp}\otimes\BR_{\perp}. \] We conclude that (denoting $\BL=\BK+\BI_{2}$) \[ \bb{M}_{0}=\{\BI_{2}\otimes\BL+\beta\BR_{\perp}\otimes\BR_{\perp}\in\mathrm{Sym}^{+}({\mathcal T}): \BL\in\mathrm{Sym}(\bb{R}^{2}),\ \beta\in\bb{R}\}. \] Finally (and this is optional, since the global link can map $\bb{M}_{0}$ into $\bb{M}$), $\bb{M}=\{\mathsf{C}_{0}\mathsf{L}\mathsf{C}_{0}:\mathsf{L}\in\bb{M}_{0}\}$, where $\mathsf{C}_{0}=\BGL_{0}^{1/2}\otimes\BI_{2}$. We compute \[ (\BGL_{0}^{1/2}\otimes\BI_{2})(\BI_{2}\otimes\BL+\beta\BR_{\perp}\otimes\BR_{\perp}) (\BGL_{0}^{1/2}\otimes\BI_{2})=\BGL_{0}\otimes\BL+\beta\sqrt{\det\BGL_{0}}\BR_{\perp}\otimes\BR_{\perp}. \] Hence, (introducing a new variable $t=\beta\sqrt{\det\BGL_{0}}$) \begin{equation}
\label{ER8} \bb{M}=\{\BGL_{0}\otimes\BL+t\BR_{\perp}\otimes\BR_{\perp}:
\BL\in\mathrm{Sym}^{+}(\bb{R}^{2}),\ |t|<\sqrt{\det(\BGL_{0}\BL)}\}.
\end{equation} This is a whole family of exact relation manifolds (one for each choice of $\BGL_{0}$) corresponding to $\Pi_{0}=(\bb{C}\BI_{2},\BGF)$. We have computed $\bb{M}$ for the sole reason that it is just as beautiful as $\bb{M}_{0}$. In our next example, this does not seem to be the case, so we leave the exact relations in the $\bb{M}_{0}$ form.
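The computation of $\mathsf{C}_{0}\mathsf{L}\mathsf{C}_{0}$ above rests on the identity $\BA\BR_{\perp}\BA^{T}=(\det\BA)\BR_{\perp}$, valid for any $2\times2$ matrix $\BA$; the following numpy sketch (an illustration) checks the resulting conjugation formula for a random SPD $\BGL_{0}$.

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])   # R_perp
rng = np.random.default_rng(3)

A = rng.standard_normal((2, 2))
G = A @ A.T + np.eye(2)                   # random SPD Lambda_0
L = A.T @ A + np.eye(2)                   # random SPD L-block
beta = 0.3

# symmetric square root of G via eigendecomposition
w, V = np.linalg.eigh(G)
G12 = V @ np.diag(np.sqrt(w)) @ V.T

C = np.kron(G12, np.eye(2))
lhs = C @ (np.kron(np.eye(2), L) + beta*np.kron(R, R)) @ C
rhs = np.kron(G, L) + beta*np.sqrt(np.linalg.det(G))*np.kron(R, R)
conjugation_ok = np.allclose(lhs, rhs)
```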
For $\Pi_{0}={\rm Ann}(\bb{C}\bra{z_{0}})$ we have \[
\mathsf{K}=\mat{\psi(z)}{\psi(-iz)}{\psi(-iz)}{-\psi(z)}+\mat{\varphi(\alpha)}{\varphi(i\alpha)}{\varphi(-i\alpha)}{\varphi(\alpha)}= \mat{\psi(z)+\varphi(\alpha)}{\psi(-iz)+\varphi(i\alpha)}{\psi(-iz)+\varphi(-i\alpha)}{-\psi(z)+\varphi(\alpha)}. \] Let $\BK=\psi(z)+\varphi(\alpha)\in\mathrm{Sym}(\bb{R}^{2})$. This is our change of variables. Then \[ \mathsf{K}=\mat{\BK}{\BK\BR_{\perp}}{-\BR_{\perp}\BK}{\mathrm{cof}(\BK)}. \] \[ \mathsf{L}=\mathsf{I}+\mathsf{K}=\mat{\BK+\BI_{2}}{\BK\BR_{\perp}} {-\BR_{\perp}\BK}{\mathrm{cof}(\BK+\BI_{2})}. \] Denoting $\BL=\BK+\BI_{2}$ we obtain \begin{equation}
\label{Annz0}
\mathsf{L}=\mat{\BL}{(\BL-\BI_{2})\BR_{\perp}}{\BR_{\perp}^{T}(\BL-\BI_{2})}{\mathrm{cof}(\BL)}= \mat{\BL}{\BL\BR_{\perp}}{\BR_{\perp}^{T}\BL}{\mathrm{cof}(\BL)}+\mathsf{T}. \end{equation} One can check that $\mathsf{L}>0$ if and only if $\BL>\BI_{2}/2$ in the sense of quadratic forms. The attempts to compute $\bb{M}$ have not led to a very beautiful representation of this exact relation, so we leave it in the $\bb{M}_{0}$ form. Application of the inversion formula with $M_{0}=\BI_{2}/2$ will lead to the volume fraction relation in the form \begin{equation}
\label{FR} \BL_{*}^{-1}=\av{\BL^{-1}}. \end{equation}
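The positivity criterion $\mathsf{L}>0\iff\BL>\BI_{2}/2$ stated before (\ref{FR}) can be illustrated numerically (a spot check on two diagonal examples, not a proof):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])

def cof(L):
    # cof(L) = tr(L) I - L for symmetric 2x2 L
    return np.trace(L)*np.eye(2) - L

def big_L(L):
    # the block matrix of the Ann(C conj(z_0)) exact relation
    I = np.eye(2)
    return np.block([[L, (L - I) @ R], [R.T @ (L - I), cof(L)]])

# both eigenvalues of L exceed 1/2 -> positive definite
min_eig_good = np.linalg.eigvalsh(big_L(np.diag([0.6, 5.0]))).min()
# one eigenvalue of L below 1/2 -> indefinite
min_eig_bad = np.linalg.eigvalsh(big_L(np.diag([0.4, 5.0]))).min()
```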
\section{$\Pi_{0}=(\mathrm{Sym}(\bb{C}^{2}),0)$ and $\Pi_{0}=({\mathcal D},{\mathcal D}')$} \[ \mathsf{K}=\mat{\psi(x)}{\psi(c)}{\psi(c)}{\psi(y)},\qquad \{x,y,c\}\subset\bb{C}. \] We can compute the ER using the complex inversion formula (\ref{Kinv}). According to this formula we have \[ \mathsf{L}+\mathsf{I}=\left(\mathsf{K}+\hf\mathsf{I}\right)^{-1}=K\left(\hf\BI,\BY\right)^{-1}=K\left(2(\BI-4\BY\bra{\BY})^{-1},4(4\bra{\BY}-\BY^{-1})^{-1}\right). \] Thus, if we write $\mathsf{L}+\mathsf{I}=K(\BU,\BZ)$, then \[ \BU=2(\BI-4\BY\bra{\BY})^{-1},\qquad\BZ=-4(\BI-4\BY\bra{\BY})^{-1}\BY \] Thus, $2\BY=-\BU^{-1}\BZ$. The symmetry of $\BY$ is equivalent to the equation $\BU^{-1}\BZ=\BZ\bra{\BU^{-1}}$, or equivalently, to \[ \BZ\bra{\BU}=\BU\BZ. \] Again, due to the symmetry of $\BY$ we have \[ 2\bra{\BY}=2\BY^{*}=-\bra{\BZ}\BU^{-1}. \] Using this in the formula for $\BU$ to eliminate $\BY$ and $\bra{\BY}$ we have \[ 2\BU^{-1}=\BI-\BU^{-1}\BZ\bra{\BZ}\BU^{-1}\Longleftrightarrow\BZ\bra{\BZ}=\BU^{2}-2\BU=(\BU-\BI)^{2}-\BI. \] Noting that $\mathsf{L}=K(\BU-\BI,\BZ)=K(\BV,\BZ)$ we obtain the description of this exact relation as the system of equations: \[ \BZ\bra{\BV}=\BV\BZ,\qquad\BV^{2}-\BZ\bra{\BZ}=\BI. \] This suggests that these equations can be rewritten in terms of the $4\times 4$ matrix multiplication. Let $\mathfrak{I}:\mathrm{Sym}({\mathcal T})\to\mathrm{Sym}({\mathcal T})$ be given by its action $\mathfrak{I}(K(X,Y))=K(X,-Y)$. Then it is easy to see that our exact relation says \[ \mathsf{L}\mathfrak{I}(\mathsf{L})=\mathsf{I}. \] We observe that \[ \mathfrak{I}(\mathsf{L})=(\BI\otimes\BR_{\perp})\mathsf{L}(\BI\otimes\BR_{\perp})^{T}. \] Hence, we have an alternative representation of the ER $(\mathrm{Sym}(\bb{C}^{2}),0)$: \begin{equation}
\label{ER22}
\bb{M}=\{\mathsf{L}>0:\mathsf{L}(\BI\otimes\BR_{\perp})\mathsf{L}(\BI\otimes\BR_{\perp})^{T}=\mathsf{I}\}= \{\mathsf{L}>0:\mathsf{L}(\BI\otimes\BR_{\perp})\mathsf{L}=\BI\otimes\BR_{\perp}\}. \end{equation} In block-components we can rewrite this as a system of equations \begin{equation}
\label{ER22comp}
\begin{cases}
\frac{\BL_{11}}{\det\BL_{11}}=\BL_{11}-\BL_{12}\BL_{22}^{-1}\BL_{12}^{T},\\
\det\BL_{11}+\det\BL_{12}=1,\\
\det\BL_{22}+\det\BL_{12}=1,
\end{cases} \end{equation} where the last equation is redundant and is added to the system for the sake of symmetry. For reference, \begin{equation}
\label{ER22fin}
\bb{M}=\left\{\mathsf{L}>0:\BL_{11}=-\frac{\BL_{12}\mathrm{cof}(\BL_{22})\BL_{12}^{T}}{\det\BL_{12}},\ \det\BL_{22}+\det\BL_{12}=1\right\}. \end{equation}
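The algebraic characterization (\ref{ER22}) can be checked directly: every $\mathsf{K}\in\Pi_{0}$ of the block form above anticommutes with $\BI\otimes\BR_{\perp}$, which forces $\mathsf{L}=2(\mathsf{I}+\mathsf{K})^{-1}-\mathsf{I}$ to satisfy the quadratic relation. A numpy sketch of this step (an illustration, not the Maple computation):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])

def psi(z):
    a, b = z.real, z.imag
    return np.array([[a, b], [b, -a]])

rng = np.random.default_rng(4)
# small parameters keep I + K invertible
x, c, y = (0.2*(rng.standard_normal() + 1j*rng.standard_normal())
           for _ in range(3))
K = np.block([[psi(x), psi(c)], [psi(c), psi(y)]])

A = np.kron(np.eye(2), R)                  # I (x) R_perp
anticommutes = np.allclose(K @ A + A @ K, 0)

I4 = np.eye(4)
L = 2*np.linalg.inv(I4 + K) - I4
relation_ok = np.allclose(L @ A @ L, A)
```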
The form (\ref{ER22}) of the ER corresponding to $\Pi_{0}=(\mathrm{Sym}(\bb{C}^{2}),0)$ suggests looking for other isotropic tensors $\mathsf{A}$, such that $\mathsf{L}\mathsf{A}\mathsf{L}=\mathsf{B}=$const is an ER. Hence, we are looking for an isotropic tensor $\mathsf{A}=K(\BA,0)$, such that $\Pi=\{\mathsf{K}:\mathsf{K} K(\BA,0)+K(\BA,0)\mathsf{K}=0\}$ is one of the algebras in our list. It is not hard to compute that the only other choice of $\BA$ besides $\mat{i}{0}{0}{i}$ is $\mat{i}{0}{0}{-i}$, which corresponds to $\Pi=({\mathcal D},{\mathcal D}')$ and \[ \mathsf{A}=K\left(\mat{i}{0}{0}{-i},0\right)=\mat{\BR_{\perp}}{0}{0}{-\BR_{\perp}}= \BJ\otimes\BR_{\perp},\quad\BJ=\psi(1)=\mat{1}{0}{0}{-1}. \] The Maple calculation confirms that \begin{equation}
\label{ER17}
\bb{M}=\{\mathsf{L}>0:\mathsf{L}(\BJ\otimes\BR_{\perp})\mathsf{L}=\BJ\otimes\BR_{\perp}\}, \end{equation} since the manifold has the same dimension as $\Pi_{0}$ and every matrix $\mathsf{L}$ of the form $\mathsf{L}=2(\mathsf{I}+\mathsf{K})^{-1}-\mathsf{I}$, $\mathsf{K}\in\Pi_{0}$, satisfies (\ref{ER17}). In block-components we can rewrite this as a system of equations: \begin{equation}
\label{ER17comp}
\begin{cases}
\frac{\BL_{11}}{\det\BL_{11}}=\BL_{11}-\BL_{12}\BL_{22}^{-1}\BL_{12}^{T},\\
\det\BL_{11}-\det\BL_{12}=1,\\
\det\BL_{22}-\det\BL_{12}=1,
\end{cases} \end{equation} where the last equation is redundant and is added to the system for the sake of symmetry. For reference, \begin{equation}
\label{ER17fin}
\bb{M}=\left\{\mathsf{L}>0:\BL_{11}=\frac{\BL_{12}\mathrm{cof}(\BL_{22})\BL_{12}^{T}}{\det\BL_{12}},\ \det\BL_{22}-\det\BL_{12}=1\right\}. \end{equation} The next idea comes from examining an application of the theory.
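The same mechanism verifies (\ref{ER17}): symmetric $\mathsf{K}$ anticommuting with $\mathsf{A}=\BJ\otimes\BR_{\perp}$ produce $\mathsf{L}$ satisfying $\mathsf{L}\mathsf{A}\mathsf{L}=\mathsf{A}$. A numpy sketch (the projection $\mathsf{K}\mapsto(\mathsf{K}+\mathsf{A}\mathsf{K}\mathsf{A})/2$ onto the anticommutant, which uses $\mathsf{A}^{2}=-\mathsf{I}$, is an assumption of this illustration):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])
J = np.diag([1.0, -1.0])
A = np.kron(J, R)              # A^2 = -I, A^T = -A

rng = np.random.default_rng(5)
S = rng.standard_normal((4, 4))
S = 0.1*(S + S.T)              # small random symmetric matrix
K = 0.5*(S + A @ S @ A)        # now K A + A K = 0 and K = K^T

I4 = np.eye(4)
L = 2*np.linalg.inv(I4 + K) - I4
relation_ok = np.allclose(L @ A @ L, A)
```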
\section{$\Pi_{0}=(W,V_{\infty})$} Figure~\ref{fig:bincomp} shows that $(W,V_{\infty})$, corresponding to $\BY\not=0$ and $\BF\in\bb{R}\BZ_{0}$, is a limiting case of the generic situation, as $\BF\to\BF_{0}\in\bb{R}\BZ_{0}$. The limiting position of a family of ERs is also an ER. In other words, the set of ERs is closed in the Grassmannian of $\mathrm{Sym}({\mathcal T})$. Our study of the ERs applicable to binary composites made of two isotropic phases shows that the $(W,V_{\infty})$ ER is the limiting position of images of both $({\mathcal D},{\mathcal D})$ and $({\mathcal D},{\mathcal D}')$ under the action of global automorphisms \[ K(X,Y)\mapsto K(C(c)XC(c)^{H},C(c)YC(c)^{T}),\qquad C(c)=\mat{\cos(c)}{\sin(c)}{-\sin(c)}{\cos(c)},\quad c\in\bb{C}. \] We have understood the action of the automorphism as follows: \begin{itemize} \item \textbf{Action on X}. We can decompose every
$X\in\mathfrak{H}(\bb{C}^{2})$ as \[ X=\psi(x)+\xi\BZ_{0}+\eta\bra{\BZ_{0}},\qquad\BZ_{0}=\Bz_{0}\otimes\bra{\Bz_{0}},\ \Bz_{0}=[1,-i],\ x\in\bb{C},\ \{\xi,\eta\}\subset\bb{R}. \] Then \[ C(c)XC(c)^{H}=\psi(e^{-2i\mathfrak{Re}(c)}x)+\xi e^{2\mathfrak{Im}(c)}\BZ_{0}+\eta e^{-2\mathfrak{Im}(c)}\bra{\BZ_{0}}. \] \item \textbf{Action on Y}. We can decompose every
$Y\in\mathrm{Sym}(\bb{C}^{2})$ as \[ Y=a\BI_{2}+y\tns{\Bz_{0}}+z\tns{\bra{\Bz_{0}}},\quad\{a,y,z\}\subset\bb{C}. \] Then, \[ C(c)YC(c)^{T}=a\BI_{2}+ye^{-2ic}\tns{\Bz_{0}}+ze^{2ic}\tns{\bra{\Bz_{0}}}. \] \end{itemize} We then describe the $(W,V_{\infty})$ ER in the appropriate basis: \[ W=\{a\BI+y\tns{\Bz_{0}}:\{a,y\}\subset\bb{C}\},\quad V_{\infty}=\{\psi(it)+\xi\BZ_{0}:\{t,\xi\}\subset\bb{R}\}. \] We also describe the $({\mathcal D},{\mathcal D}')$ ER in the same basis: \[ {\mathcal D}_{Y}=\{b\BI_{2}+w(\tns{\Bz_{0}}+\tns{\bra{\Bz_{0}}}):\{b,w\}\subset\bb{C}\},\quad {\mathcal D}'_{X}=\{\psi(is)+\eta(\BZ_{0}-\bra{\BZ_{0}}):\{s,\eta\}\subset\bb{R}\}. \] Now we set $c=iM$, where $M>0$ is large. We then reparametrize $({\mathcal D},{\mathcal D}')$ as follows: \[ b=a,\quad w=ye^{-2M},\quad s=t,\quad\eta=e^{-2M}\xi. \] Then \[ C(c)\cdot({\mathcal D},{\mathcal D}')=(a\BI+y\tns{\Bz_{0}}+ye^{-4M}\tns{\bra{\Bz_{0}}}, \psi(it)+\xi\BZ_{0}-\xi e^{-4M}\bra{\BZ_{0}}). \] This shows that \[ \lim_{M\to+\infty}C(iM)\cdot({\mathcal D},{\mathcal D}')=(W,V_{\infty}). \] We take as a starting point formula (\ref{ER17}) for $({\mathcal D},{\mathcal D}')$ and formula (\ref{gnonlin}) as the action of the subgroup $C(it)$ on material tensors. We then try to apply this transformation with $t=M$ to formula (\ref{ER17}), discarding exponentially small terms\footnote{Terms like
$e^{-2M}\mathsf{L}$ are not necessarily exponentially small, since some components
of $\mathsf{L}$ can be exponentially large.} along the way. At the end we obtain \begin{equation}
\label{ER20}
(\mathsf{L}-\mathsf{T})(\BJ\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0,\qquad\BJ=\psi(1)=\mat{1}{0}{0}{-1}. \end{equation} Maple verification confirms the correctness of (\ref{ER20}). We can also rewrite (\ref{ER20}) in terms of the block-components of $\mathsf{L}$ in the form of 4 independent equations. If we write \[ \mathsf{L}=\mat{\BL_{11}}{\BL_{12}}{\BL_{12}^{T}}{\BL_{22}}, \] then (\ref{ER20}) is equivalent to \begin{equation}
\label{ER20comp}
\begin{cases}
\BL_{11}=(\BL_{12}+\BR_{\perp})\BL_{22}^{-1}(\BL_{12}+\BR_{\perp})^{T},\\
\det\BL_{22}=\det(\BL_{12}+\BR_{\perp}).
\end{cases} \end{equation} Equivalently, \begin{equation}
\label{ER20compa}
\begin{cases}
\BL_{22}=(\BL_{12}+\BR_{\perp})^{T}\BL_{11}^{-1}(\BL_{12}+\BR_{\perp}),\\
\det\BL_{11}=\det(\BL_{12}+\BR_{\perp}).
\end{cases} \end{equation} We can also write this ER in terms of $\BM=\BL_{11}^{-1}(\BL_{12}+\BR_{\perp})$: \[ \BL_{12}=\BL_{11}\BM-\BR_{\perp},\quad\BL_{22}=\BM^{T}\BL_{11}\BM,\quad\det\BM=1. \] \begin{equation}
\label{ER20fin}
\bb{M}=\left\{\mat{\BL}{\BL\BM-\BR_{\perp}}{\BM^{T}\BL+\BR_{\perp}}{\BM^{T}\BL\BM}:\det\BM=1,\ \BL>0,\ \BL+2\BR_{\perp}\BM\det\BL<0\right\}. \end{equation} The choice of the parameter $t=\infty$ in the family $V_{t}$ was probably not the best. A better choice is $t=0$, giving \begin{equation}
\label{ER20t0}
(\mathsf{L}-\mathsf{T})(\BJ'\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0,\qquad\BJ'=\psi(i)=\mat{0}{1}{1}{0}. \end{equation} Equivalently (obtained by substituting (\ref{ER21fin}) into (\ref{ER20t0})), \begin{equation}
\label{ER20t0fin}
\bb{M}=\left\{\mat{\BL}{\BL\BM-\BR_{\perp}}{\BM^{T}\BL+\BR_{\perp}}{\BM^{T}\BL\BM}:\mathrm{Tr}\,\BM=0,\ \BL>0,\ \BL+2\BR_{\perp}\BM\det\BL<0\right\}. \end{equation}
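Both (\ref{ER20}) and (\ref{ER20t0}) can be spot-checked on the parametrization used in (\ref{ER20fin}) and (\ref{ER20t0fin}); the identities reduce to $\BM\BR_{\perp}\BM^{T}=(\det\BM)\BR_{\perp}$ and $\BM\BR_{\perp}+\BR_{\perp}\BM^{T}=(\mathrm{Tr}\,\BM)\BR_{\perp}$. A numpy sketch (illustration only):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = np.kron(R, R)
J  = np.diag([1.0, -1.0])                 # psi(1)
Jp = np.array([[0.0, 1.0], [1.0, 0.0]])   # psi(i)

def big_L(L, M):
    # the parametrization of the (W, V_infty) / (W, V_0) exact relations
    return np.block([[L, L @ M - R], [M.T @ L + R, M.T @ L @ M]])

rng = np.random.default_rng(6)
B = rng.standard_normal((2, 2))
L = B @ B.T + np.eye(2)                   # SPD block

a, b, c = 0.7, 1.3, -0.4
M_det1 = np.array([[a, b], [c, (1 + b*c)/a]])  # det M = 1
M_tr0  = np.array([[a, b], [c, -a]])           # Tr M = 0

L1 = big_L(L, M_det1)
L2 = big_L(L, M_tr0)
er20_ok   = np.allclose((L1 - T) @ np.kron(J,  R) @ (L1 - T), 0)
er20t0_ok = np.allclose((L2 - T) @ np.kron(Jp, R) @ (L2 - T), 0)
```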
\section{$\Pi_{0}=(W,V)$} Here are several different ways to describe $(W,V)$: \[ W=\{Y\in\mathrm{Sym}(\bb{C}^{2}):Y_{22}-Y_{11}+2iY_{12}=0\},\qquad V=\{X\in\mathfrak{H}(\bb{C}^{2}):2\mathfrak{Im}(X_{12})=\mathrm{Tr}\, X\}, \] or \[ \Pi_{0}=\left\{\mathsf{K}=\mat{\BK_{11}}{\BK_{12}}{\BK_{12}^{T}}{\BK_{22}}: \BK_{11}+\BR_{\perp}\BK_{22}\BR_{\perp}^{T}+\BK_{12}\BR_{\perp}-\BR_{\perp}\BK_{12}^{T}=0\right\}. \] $\dim\Pi_{0}=7$, co$\dim\Pi_{0}=3$.
We observe that $(W,V)$ contains $(W,V_{\infty})$ as a codimension 1 subspace. This means that $(W,V)$ requires one less equation for its description than $(W,V_{\infty})$. Formula (\ref{ER20comp}) describes $(W,V_{\infty})$ by one matrix equation and one scalar equation. It is natural to check whether eliminating the scalar equation results in the correct description of $(W,V)$. A Maple check confirms this hypothesis. So, the ER $(W,V)$ can be described as \begin{equation}
\label{ER21}
\BL_{11}=(\BL_{12}+\BR_{\perp})\BL_{22}^{-1}(\BL_{12}+\BR_{\perp})^{T}. \end{equation} The equation says that the Schur complement of $\BL_{22}$ in $\mathsf{L}-\mathsf{T}$ vanishes. Equivalently, \[
\BL_{22}=(\BL_{12}+\BR_{\perp})^{T}\BL_{11}^{-1}(\BL_{12}+\BR_{\perp}). \] Hence, the Schur complement of $\BL_{11}$ in $\mathsf{L}-\mathsf{T}$ also vanishes. \begin{equation}
\label{ER21fin}
\bb{M}=\left\{\mat{\BL}{\BL\BM-\BR_{\perp}}{\BM^{T}\BL+\BR_{\perp}}{\BM^{T}\BL\BM}:\BL>0,\ \BL+2\BR_{\perp}\BM\det\BL<0\right\}. \end{equation}
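The vanishing of the Schur complement in (\ref{ER21}) holds identically on the parametrization (\ref{ER21fin}) for \emph{any} invertible $\BM$, since $\BL_{12}+\BR_{\perp}=\BL\BM$; a short numpy check (illustration):

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])
rng = np.random.default_rng(7)
B = rng.standard_normal((2, 2))
L11 = B @ B.T + np.eye(2)                      # SPD
M = rng.standard_normal((2, 2)) + 2*np.eye(2)  # any invertible M

L12 = L11 @ M - R
L22 = M.T @ L11 @ M
schur_ok = np.allclose(L11, (L12 + R) @ np.linalg.inv(L22) @ (L12 + R).T)
```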
\section{Link $(W,V)=(\bb{C}\BI,\BGY)\oplus{\rm Ann}(\bb{C}\bra{\Bz_{0}})$} \subsection{$\Pi_{0}=(\bb{C}\BI,\BGY)$} In this case $\mathsf{K}\in\Pi_{0}$ has the form \[ \mathsf{K}=\mat{\psi(z)}{0}{0}{\psi(z)}+\mat{\varphi(\alpha)}{\varphi(\beta)}{\varphi(\beta)}{\varphi(-\alpha)}= \mat{\BK}{\beta\BI}{\beta\BI}{-\mathrm{cof}(\BK)},\qquad\BK\in\mathrm{Sym}(\bb{R}^{2}), \] \[ \mathsf{I}+\mathsf{K}=\mat{\BI_{2}+\BK}{\beta\BI}{\beta\BI}{\BI_{2}-\mathrm{cof}(\BK)}. \] Changing the parametrization to $\BK'=\BI_{2}+\BK$ and dropping primes, we see that we need to compute \[ \mathsf{L}=2\mat{\BK}{\beta\BI}{\beta\BI}{\mathrm{cof}(2\BI_{2}-\BK)}^{-1}-\mathsf{I}. \] Applying the block matrix inversion formula we conclude that \[ \mathsf{L}=\mat{\BL_{11}}{\lambda\BL_{11}}{\lambda\BL_{11}}{\eta(\lambda,\BL_{11})\BL_{11}}. \] Hence, one only needs to determine a scalar function $\eta(\lambda,\BL_{11})$. Applying the block matrix inversion formula to $(\mathsf{L}+\mathsf{I})^{-1}$ we find that the 12-block of $(\mathsf{L}+\mathsf{I})^{-1}$ is \[ \BL_{12}=-\lambda(\BL_{11}+\BI_{2})^{-1}\BL_{11}(\eta\BL_{11}+\BI_{2}-\lambda^{2}\BL_{11}(\BL_{11}+\BI_{2})^{-1}\BL_{11})^{-1}. \] It is a multiple of the identity if and only if \[ \eta\BL_{11}+\BL_{11}^{-1}-\lambda^{2}\BL_{11} \] is a multiple of the identity. In other words, we need to choose $\eta$ such that the two eigenvalues of the above matrix are the same. If $\sigma_{1}$ and $\sigma_{2}$ are the eigenvalues of $\BL_{11}$, then we must have \[ \eta\sigma_{1}+\nth{\sigma_{1}}-\lambda^{2}\sigma_{1}=\eta\sigma_{2}+\nth{\sigma_{2}}-\lambda^{2}\sigma_{2}, \] from which we find that \[ \eta=\lambda^{2}+\nth{\sigma_{1}\sigma_{2}}=\lambda^{2}+\nth{\det\BL_{11}}. \] Hence, \begin{equation}
\label{ER9} \bb{M}_{0}=\left\{\mat{1}{\lambda}{\lambda}{\lambda^{2}+\dfrac{1}{\det\BL}}\otimes\BL: \lambda\in\bb{R},\ \BL>0\right\}=\{\mathsf{L}=\BGL\otimes\BL>0: \det\BGL\det\BL=1\}. \end{equation} The exact relation $\bb{M}_{(\bb{C}\BI,\BGY)}$ says that if the Seebeck tensor is scalar and the heat conductivity $\BGk$ is a constant scalar multiple of $\BGs/\det\BGs$, then the effective tensor will also have the same form.
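The choice of $\eta$ above makes $\eta\BL_{11}+\BL_{11}^{-1}-\lambda^{2}\BL_{11}$ a multiple of the identity, namely $(\mathrm{Tr}\,\BL_{11}/\det\BL_{11})\BI_{2}$; a numpy sketch (illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((2, 2))
L = B @ B.T + np.eye(2)          # SPD L_11
lam = 0.8
eta = lam**2 + 1/np.linalg.det(L)

A = eta*L + np.linalg.inv(L) - lam**2*L
scalar_ok = np.allclose(A, (np.trace(L)/np.linalg.det(L))*np.eye(2))
```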
\subsection{Link calculation} The strategy is to use Maple to compute \begin{enumerate} \item $\mathsf{L}=\mathsf{L}(\BL_{22},\BL_{12})$, \item $\mathsf{K}=2(\mathsf{L}+\mathsf{I})^{-1}-\mathsf{I}$, \item the projection $\mathsf{P}$ of $\mathsf{K}$ onto $(\bb{C}\BI,\BGY)$, \item $\mathsf{L}'=2(\mathsf{P}+\mathsf{I})^{-1}-\mathsf{I}$, \item $\lambda$: $\lambda\BI_{2}=\BL'_{12}(\BL'_{11})^{-1}$, \item $\eta$: $\eta\BI_{2}=\BL'_{22}(\BL'_{11})^{-1}$, \item $\BL'_{11}$. \end{enumerate} The link can be written as the map from \[ \bb{M}_{(W,V)}=\{\mathsf{L}>0:\BL_{22}=(\BL_{12}+\BR_{\perp})^{T}\BL_{11}^{-1}(\BL_{12}+\BR_{\perp})\} \] to \[ \bb{M}_{(\bb{C}\BI,\BGY)}=\{\BGL\otimes\BL:\BL>0,\ \BGL>0,\ \det(\BL\BGL)=1\}. \] We obtain (setting $\Lambda_{11}=1$) \[ \BGL=\mat{1}{\lambda}{\lambda}{\eta},\quad\lambda=\frac{\mathrm{Tr}\,\BM}{2},\quad \eta=\det\BM,\quad\BM=\BL_{11}^{-1}(\BL_{12}+\BR_{\perp}),\quad \BL=\BR_{\perp}\frac{\lambda\BI_{2}-\BM}{\det\BGL}. \] This implies that $\BM^{*}=(\BL_{11}^{*})^{-1}(\BL^{*}_{12}+\BR_{\perp})$ depends only on $\BM(\Bx)=\BL_{11}(\Bx)^{-1}(\BL_{12}(\Bx)+\BR_{\perp})$, which can be computed from $(\BGL\otimes\BL)^{*}=\BGL^{*}\otimes\BL^{*}$ by expressing $\BM$ in terms of $\BL$ and $\BGL$.
\section{Links for $(W,\bb{R}\BZ_{0})$} \subsection{$\Pi_{0}=(W,\bb{R}\BZ_{0})$} \[ \Pi_{0}=\{\mathsf{K}:2\BK_{12}=\BR_{\perp}(\BK_{11}-\BK_{22}+(\mathrm{Tr}\,\BK_{22})\BI_{2}),\ \mathrm{Tr}\,(\BK_{11})=\mathrm{Tr}\,(\BK_{22})\}. \] $\dim\Pi_{0}=$co$\dim\Pi_{0}=5$. The idea is to compute the representation of $\Pi_{0}$ from the condition that the corresponding $\mathsf{L}$ satisfies \begin{equation}
\label{preER19}
(\mathsf{L}^{\BR}-\mathsf{T})(\BJ\otimes\BR_{\perp})(\mathsf{L}^{\BR}-\mathsf{T})=0,\quad \forall\BR\in SO(2), \end{equation} where \[ \BJ=\mat{1}{0}{0}{-1},\qquad \mathsf{L}^{\BR}=(\BR\otimes\BI_{2})\mathsf{L}(\BR^{T}\otimes\BI_{2}). \] Factoring $\BR\otimes\BI_{2}$ and $\BR^{T}\otimes\BI_{2}$ out and recalling that $\BR_{\perp}$ is isotropic, we obtain \[ (\mathsf{L}-\mathsf{T})(\BR\otimes\BI_{2})(\BJ\otimes\BR_{\perp})(\BR^{T}\otimes\BI_{2})(\mathsf{L}-\mathsf{T})=0. \] Thus, writing $\BJ=\psi(1)$ we obtain that $(W,\bb{R}\BZ_{0})$ can be described by the equation \[ (\mathsf{L}-\mathsf{T})(\psi(z)\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0\quad\forall z\in\bb{C}. \] In other words, \[ \begin{cases}
(\mathsf{L}-\mathsf{T})(\psi(1)\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0,\\
(\mathsf{L}-\mathsf{T})(\psi(i)\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0 \end{cases} \] The first equation is written as (\ref{ER20comp}), while the second equation adds one more scalar condition: \begin{equation}
\label{ER19add}
\mathrm{Tr}\,(\BL_{22}\mathrm{cof}(\BL_{12}))=0. \end{equation} Let us write a complete system of equations for reference purposes: \begin{equation}
\label{ER19}
\begin{cases}
\BL_{11}=(\BL_{12}+\BR_{\perp})\BL_{22}^{-1}(\BL_{12}+\BR_{\perp})^{T},\\
\det\BL_{22}=\det(\BL_{12}+\BR_{\perp}),\\
\mathrm{Tr}\,(\BL_{22}\mathrm{cof}(\BL_{12}))=0
\end{cases} \end{equation} The third equation can also be written as \[ \det(\BL_{22}+\BL_{12})=\det\BL_{22}+\det\BL_{12}. \] In order to find a parametrization of $\mathsf{L}$ it will be convenient to write the ER in terms of $\BM=\BL_{11}^{-1}(\BL_{12}+\BR_{\perp})$: \begin{equation}
\label{ER19M}
\BL_{12}=\BL_{11}\BM-\BR_{\perp},\quad\BL_{22}=\BM^{T}\BL_{11}\BM,\quad\det\BM=1,\quad\mathrm{Tr}\,\BM=0. \end{equation} For example we can write \begin{equation}
\label{Msqm1}
\BM=\mat{m_{11}}{m_{12}}{-\frac{m_{11}^{2}+1}{m_{12}}}{-m_{11}}. \end{equation} An equivalent formulation of the constraints satisfied by $\BM$ is $\BM^{2}=-\BI_{2}$, by the Cayley--Hamilton theorem. The tensor $\mathsf{L}=\mathsf{L}(\BL_{11},\BM)$ is positive definite if and only if \begin{equation}
\label{WRZpos} \BL_{11}>0,\quad \frac{\BL_{11}}{\det\BL_{11}}+2\BR_{\perp}\BM<0. \end{equation} Thus we can also write \begin{equation}
\label{ER19fin} \bb{M}=\left\{\mat{\BL}{\BL\BM-\BR_{\perp}}{\BM^{T}\BL+\BR_{\perp}}{\BM^{T}\BL\BM}:\BM^{2}=-\BI_{2}, \ \BL>0,\ \BL+2\BR_{\perp}\BM\det\BL<0\right\}. \end{equation} This equation is obtained immediately from the representations (\ref{ER20fin}) and (\ref{ER20t0fin}) for $(W,V_{\infty})$ and $(W,V_{0})$, respectively.
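The parametrization (\ref{Msqm1}) indeed satisfies $\BM^{2}=-\BI_{2}$, consistent with $\det\BM=1$, $\mathrm{Tr}\,\BM=0$; a numpy check (illustration):

```python
import numpy as np

m11, m12 = 0.9, -1.7
M = np.array([[m11, m12], [-(m11**2 + 1)/m12, -m11]])
square_ok = np.allclose(M @ M, -np.eye(2))
det_tr_ok = np.isclose(np.linalg.det(M), 1.0) and np.isclose(np.trace(M), 0.0)
```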
\subsection{Link $(W,\bb{R}\BZ_{0})$, $\Phi(K(X,Y))=K(\alpha X,Y)$} The strategy is to use Maple to compute \begin{enumerate} \item $\mathsf{L}=\mathsf{L}(\BL_{22},z)$, \item $\mathsf{K}=2(\mathsf{L}+\mathsf{I})^{-1}-\mathsf{I}$, \item $\mathsf{K}^{\alpha}=\Phi_{\alpha}(\mathsf{K})$, \item $\mathsf{L}^{\alpha}=2(\mathsf{K}^{\alpha}+\mathsf{I})^{-1}-\mathsf{I}$, \item $\BL_{22}^{\alpha}$ and $z^{\alpha}$. \end{enumerate} If we write $\mathsf{L}=\mathsf{L}(\BL_{11},\BM)$, where $\BM^{2}=-\BI_{2}$, then for any $\gamma_{0}\in\bb{R}$ for which the resulting $\mathsf{L}$ is positive definite we have \begin{equation}
\label{WRZalpha}
\mathsf{L}'=\mathsf{L}\left(\frac{\BQ\det\BL_{11}}{\det\BQ},\BM\right),\qquad \BQ=\gamma_{0}\BL_{22}+(1+\gamma_{0})\BL_{11}+2\gamma_{0}(\det\BL_{11})\BR_{\perp}\BM. \end{equation} The parameters $\alpha$ and $\gamma_{0}$ are related by the formula $\alpha=2\gamma_{0}+1$. The restrictions on $\gamma_{0}$ are \[ \BQ>0,\qquad\frac{\BQ}{\det\BL_{11}}+2\BR_{\perp}\BM<0. \] An even nicer form is obtained if instead of $\BQ$ we use $\BP=\mathrm{cof}(\BQ)/\det\BL_{11}$, so that \begin{equation}
\label{WRZalpha0}
\mathsf{L}'=\mathsf{L}\left(\BP^{-1},\BM\right),\qquad \BP=\gamma_{0}\BL^{-1}_{22}+(1+\gamma_{0})\BL^{-1}_{11}+2\gamma_{0}\BM\BR_{\perp}, \end{equation} where the parameter $\gamma_{0}$ is constrained by the inequalities \[ \BP>0,\qquad\BP+2\BM\BR_{\perp}<0, \] understood in the sense of quadratic forms. We note that the map $\Phi_{\gamma_{0}}$ fails to be bijective for $\gamma_{0}=-1/2$, but it remains a valid link between $\bb{M}_{19}$ and ER \#18, whose definition includes the additional relation \begin{equation}
\label{ER18ad}
\BM\BL^{-1}-\BL^{-1}\BM^{T}=2\BR_{\perp} \end{equation} between parameters $\BL$ and $\BM$.
\subsection{Link $(W,\bb{R}\BZ_{0})=(\bb{C}\BI,\bb{R}\BZ_{0})\oplus(\bb{C}(\tns{\Bz_{0}}),0)$} \subsubsection{$\Pi_{0}=(\bb{C}\BI,\bb{R}\BZ_{0})$} This ER is redundant, but $\Pi_{0}\oplus(\tns{\Bz_{0}},0)=(W,\bb{R}\BZ_{0})$, where $(\tns{\Bz_{0}},0)$ is an ideal in $(W,\bb{R}\BZ_{0})$. Both the ER $(W,\bb{R}\BZ_{0})$ and the link corresponding to the above decomposition are unresolved. The calculation is therefore useful for the purposes of computing the unresolved cases. We have \[ \Pi_{0}=\{K(\rho\BZ_{0},c\BI):\rho\in\bb{R},\ c\in\bb{C}\}. \] We will use the inversion formula \[ \mathsf{L}=2(\mathsf{K}+\mathsf{I})^{-1}-\mathsf{I}, \] where \[ \mathsf{K}+\mathsf{I}=K(\BX,\BY),\qquad \BX=\mat{\rho+1}{i\rho}{-i\rho}{\rho+1},\quad\BY=c\BI_{2}. \] We will use formula (\ref{Kinv}): \[ (\mathsf{K}+\mathsf{I})^{-1}=K(S_{X}^{-1},S_{Y}^{-1}),\quad S_{X}=\BX-\BY\bra{\BX}^{-1}\bra{\BY},\quad S_{Y}=\bra{\BY}-\bra{\BX}\BY^{-1}\BX. \]
S_{X}=\frac{2\rho+1-|c|^{2}}{2\rho+1}\BX,\quad S_{Y}=\frac{|c|^{2}-2\rho-1}{c}\BI_{2}. \] Hence, \[
S_{X}^{-1}=\frac{\bra{\BX}}{2\rho+1-|c|^{2}},\quad S_{Y}^{-1}=\frac{c}{|c|^{2}-2\rho-1}\BI_{2}. \] Writing $K(\bra{\BX},0)=(\rho+1)\mathsf{I}+\rho\mathsf{T}$ we compute \[
\mathsf{L}=\frac{(1+|c|^{2})\mathsf{I}+2\rho\mathsf{T}-K(0,2c\BI_{2})}{2\rho+1-|c|^{2}}. \] Converting to the block matrix form we get \[ \mathsf{L}=\mat{\BL}{-\theta\BR_{\perp}}{\theta\BR_{\perp}}{\BL},\quad
\BL=\frac{\varphi(1+|c|^{2})-\psi(2c)}{2\rho+1-|c|^{2}},\quad
\theta=\frac{2\rho}{2\rho+1-|c|^{2}}. \]
Now it is easy to see that $\det\BL=(1-\theta)^{2}$. Requiring that $\BL>0$ we get the restriction that $2\rho+1-|c|^{2}>0$. If this is satisfied then $\mathsf{L}>0$ holds if and only if $\theta^{2}<\det\BL$. The two constraints can be combined into one: \[
2|\rho|<1-|c|^{2}, \] giving the ER \begin{equation}
\label{ER7}
\bb{M}=\left\{\mat{\BL}{\theta\BR_{\perp}}{-\theta\BR_{\perp}}{\BL}: \det\BL=(1+\theta)^{2},\ \BL>0,\ \theta>-1/2\right\} \end{equation} Of course the isomorphic ER $(\bb{C}\BI,\bb{R}\bra{\BZ_{0}})$ is obtained from this by replacing $\theta$ with $-\theta$ in the matrix, while keeping all other constraints the same.
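The determinant identity $\det\BL=(1-\theta)^{2}$ used in the derivation preceding (\ref{ER7}) is easy to confirm numerically, in the variables $\rho$ and $c$ of that derivation (an illustration only):

```python
import numpy as np

def psi(z):
    return np.array([[z.real, z.imag], [z.imag, -z.real]])

rho, c = 0.12, 0.3 - 0.25j            # satisfies 2|rho| < 1 - |c|^2
d = 2*rho + 1 - abs(c)**2
L = ((1 + abs(c)**2)*np.eye(2) - psi(2*c)) / d   # varphi(1+|c|^2) = (1+|c|^2) I
theta = 2*rho / d

det_ok = np.isclose(np.linalg.det(L), (1 - theta)**2)
pos_ok = np.linalg.eigvalsh(L).min() > 0
```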
\subsubsection{Link calculation} The strategy is to use Maple to compute \begin{enumerate} \item $\mathsf{L}=\mathsf{L}(\BL_{11},\BM)$, where $\BM$ is given by (\ref{Msqm1}), \item $\mathsf{K}=2(\mathsf{L}+\mathsf{I})^{-1}-\mathsf{I}$, \item the projection $\mathsf{P}$ of $\mathsf{K}$ onto $(\bb{C}\BI,\bb{R}\BZ_{0})$, \item $\mathsf{L}'=2(\mathsf{P}+\mathsf{I})^{-1}-\mathsf{I}$, \item $\theta$: $\theta\BI_{2}=\BR_{\perp}^{T}\BL'_{12}$, \item $\BL'_{11}$. \end{enumerate} We obtain \[ \theta+1=-\frac{2\det\BL_{11}}{\mathrm{Tr}\,(\BL_{11}\BM\BR_{\perp})}= -\frac{2}{\mathrm{Tr}\,(\BL_{11}^{-1}\BR_{\perp}\BM)}, \] \[ \BL'_{11}=-(\theta+1)\BR_{\perp}\BM=\frac{2\BR_{\perp}\BM}{\mathrm{Tr}\,(\BL_{11}^{-1}\BR_{\perp}\BM)}. \] If we use the link $(W,\bb{R}\BZ_{0})/{\rm Ann}(\bb{C}\bra{\Bz_{0}})\cong(\bb{C}\BI,0)$, which is inherited from $(W,V)$, then we obtain that $\BGs^{*}=-\BR_{\perp}\BM^{*}$ is the effective conductivity of the 2D conducting composite with local conductivity $\BGs(\Bx)=-\BR_{\perp}\BM(\Bx)$, also satisfying $\det\BGs=1$. This link shows that the scalar $s^{*}=\av{\BL_{11}^{*}/\det\BL^{*}_{11},\BGs^{*}}$ depends only on $\BGs(\Bx)$ and $s(\Bx)$ via the effective thermoelectricity in the ER (\ref{ER7}).
\subsection{Link $(W,\bb{R}\BZ_{0})=(W,0)\oplus(0,\bb{R}\BZ_{0})$} \subsubsection{$(W,0)$} We need to verify that the one additional equation that must be added to the system (\ref{ER19M}) is \[ \BL_{11}\BM-\BM^{T}\BL_{11}=2\det\BL_{11}\BR_{\perp}. \] If we write $\BL_{12}=\BS+\beta\BR_{\perp}$, then this additional equation says that $\beta+1=\det\BL_{11}$. Recalling the condition of positivity (\ref{WRZpos}) of $\mathsf{L}$, we can also write the extra equation as \[ \BL_{22}=-(\BL_{11}+2\BR_{\perp}\BM\det\BL_{11})>0. \]
\subsection{Link calculation} The same strategy using Maple produces a link $\Phi(\mathsf{L}(\BL_{11},\BM))=\mathsf{L}(\BL'_{11},\BM)$, where $\BL'_{11}$ satisfies the additional relation \[ \BL'_{11}\BM-\BM^{T}\BL'_{11}=2\det\BL'_{11}\BR_{\perp}. \] We compute using Maple: \[ \mathsf{L}'=\mathsf{L}\left(\frac{\BQ\det\BL_{11}}{\det\BQ},\BM\right),\qquad \BQ=\frac{\BL_{11}-\BL_{22}}{2}-(\det\BL_{11})\BR_{\perp}\BM, \] which coincides with formula (\ref{WRZalpha}) when $\gamma_{0}=-1/2$. Indeed, $\gamma_{0}=-1/2$ corresponds to $\alpha=0$, so that the map $\Phi(K(X,Y))=K(\alpha X,Y)$ is no longer an automorphism, but the link $(W,\bb{R}\BZ_{0})=(W,0)\oplus(0,\bb{R}\BZ_{0})$ instead.
\section{Summary of the essential exact relations and links} Here we refer to various exact relations by their number in the list at the end of Section~\ref{sec:JMA}. In order to streamline our notation it will be convenient to introduce the function \begin{equation}
\label{LMpar}
\mathfrak{L}(\BL,\BM)=\mat{\BL}{\BL\BM}{\BM^{T}\BL}{\BM^{T}\BL\BM}+\mathsf{T}, \qquad\mathsf{T}=\BR_{\perp}\otimes\BR_{\perp}, \end{equation} since many of the exact relations below can be described in terms of $\mathfrak{L}(\BL,\BM)$. Here is the list.
\[
\bb{M}_{8}=\{\BI_{2}\otimes\BL+t\mathsf{T}:\BL\in\mathrm{Sym}^{+}(\bb{R}^{2}),\ \det\BL>t^{2}\},
\]
\[
\bb{M}_{13}=\left\{\mathfrak{L}(\BL,\BR_{\perp}):\BL>\hf\BI_{2} \right\},\qquad \BL_{*}=\av{\BL^{-1}}^{-1}. \]
\[
\bb{M}_{17}=\{\mathsf{L}>0:\mathsf{L}(\BJ\otimes\BR_{\perp})\mathsf{L}=\BJ\otimes\BR_{\perp}\},\quad\BJ=\mat{1}{0}{0}{-1}. \] In block-components we can rewrite this as \[
\bb{M}_{17}=\left\{\mathsf{L}>0:\BL_{11}=\frac{\BL_{12}\mathrm{cof}(\BL_{22})\BL_{12}^{T}}{\det\BL_{12}},\ \det\BL_{22}-\det\BL_{12}=1\right\}. \] Of course, there is symmetry between indices and we also have \[ \bb{M}_{17}=\left\{\mathsf{L}>0:\BL_{22}=\frac{\BL_{12}^{T}\mathrm{cof}(\BL_{11})\BL_{12}}{\det\BL_{12}},\ \det\BL_{11}-\det\BL_{12}=1\right\}. \]
\[ \bb{M}_{19}=\left\{\mathfrak{L}(\BL,\BM):\BM^{2}=-\BI_{2}, \ \BL>0,\ \BL^{-1}+2\BM\BR_{\perp}<0\right\}. \] This exact relation has two different links, which are not consequences of other relations or links listed here. \begin{enumerate} \item This is an infinite family of links that we describe in terms of the
function $\mathfrak{L}(\BL,\BM)$, given by (\ref{LMpar}). The family of links
are the maps $\Phi_{\gamma_{0}}:\bb{M}_{19}\to\bb{M}_{19}$, given by \[
\Phi_{\gamma_{0}}(\mathfrak{L}(\BL,\BM))=\mathfrak{L}\left(\BP_{\gamma_{0}}^{-1},\BM\right), \quad\BP_{\gamma_{0}}=\gamma_{0}\BM\BL^{-1}\BM^{T}+(1+\gamma_{0})\BL^{-1}+2\gamma_{0}\BM\BR_{\perp}, \] where the parameter $\gamma_{0}$ is constrained by the inequalities \[ \BP_{\gamma_{0}}>0,\qquad\BP_{\gamma_{0}}+2\BM\BR_{\perp}<0, \] understood in the sense of quadratic forms. We note that the map $\Phi_{\gamma_{0}}$ fails to be bijective for $\gamma_{0}=-1/2$, but it remains a valid link between $\bb{M}_{19}$ and \[ \bb{M}_{18}=\{\mathfrak{L}(\BL,\BM):\BM^{2}=-\BI_{2}, \ \BM\BL^{-1}-\BL^{-1}\BM^{T}=2\BR_{\perp},\ \BL>0\}. \] \item The second link is between $\bb{M}_{19}$ and
\[
\bb{M}_{7}=\{\mathfrak{L}(\mu\BGs,\BR_{\perp}\BGs):\det\BGs=1,\ \BGs>0,\ \mu>1/2\}.
\] The link is then given by the formulas \[
\BGs=-\BR_{\perp}\BM,\qquad\mu=\frac{2}{\mathrm{Tr}\,(\BL\BGs)}, \] so that $\BM^{*}=\BR_{\perp}\BGs^{*}$, where $\BGs^{*}$ is the effective conductivity of the 2D polycrystal with texture $\BGs(\Bx)=-\BR_{\perp}\BM(\Bx)$, as before, while additionally we have \[
\mathrm{Tr}\,(\BL^{*}\BGs^{*})=\frac{2}{\mu^{*}}. \] \end{enumerate}
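The parametrization entering the second link can be spot-checked directly: any real $2\times 2$ matrix $\BM$ with $\mathrm{Tr}\,\BM=0$ and $\det\BM=1$ satisfies $\BM^{2}=-\BI_{2}$ by Cayley--Hamilton, and then $\BGs=-\BR_{\perp}\BM$ is automatically symmetric with $\det\BGs=1$. A minimal pure-Python sketch (the test matrix is arbitrary):

```python
# Check: tr(M) = 0 and det(M) = 1 force M^2 = -I (Cayley-Hamilton), and then
# sigma = -Rp M is symmetric with det(sigma) = 1, as the link requires.
Rp = [[0.0, -1.0], [1.0, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

M = [[1.0, -2.0], [1.0, -1.0]]   # arbitrary: trace 0, determinant 1
M2 = mul(M, M)
assert all(abs(M2[i][j] + (i == j)) < 1e-12 for i in range(2) for j in range(2))

sigma = [[-x for x in row] for row in mul(Rp, M)]   # sigma = -Rp M
assert abs(sigma[0][1] - sigma[1][0]) < 1e-12       # symmetric
assert abs(det(sigma) - 1.0) < 1e-12                # det sigma = 1
```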
\[
\bb{M}_{20}=\left\{\mathsf{L}>0:(\mathsf{L}-\mathsf{T})(\BJ\otimes\BR_{\perp})(\mathsf{L}-\mathsf{T})=0\right\}. \] We can also write this ER in parametric form \[
\bb{M}_{20}=\left\{\mathfrak{L}(\BL,\BM):\det\BM=1,\ \BL>0,\ \BL^{-1}<-2\BM\BR_{\perp}\right\}, \] where the inequalities are understood in the sense of quadratic forms.
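The parametric and algebraic forms of this ER can be reconciled numerically: for $\mathsf{L}=\mathfrak{L}(\BL,\BM)$ from (\ref{LMpar}), the tensor $\mathsf{K}=\mathsf{L}-\mathsf{T}$ factors through $\BL$, and the identity $\BM\BR_{\perp}\BM^{T}=(\det\BM)\BR_{\perp}$ makes $\mathsf{K}(\BJ\otimes\BR_{\perp})\mathsf{K}$ vanish when $\det\BM=1$. A pure-Python sketch with arbitrary test matrices chosen so that $\det\BM=1$ exactly:

```python
# Check: K = L(L, M) - T with det(M) = 1 satisfies K (J x Rp) K = 0,
# the algebraic form of M_20; L(., .) is the parametrization (LMpar).
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def kron(A, B):  # Kronecker product of 2x2 matrices -> 4x4
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

Rp = [[0.0, -1.0], [1.0, 0.0]]
J = [[1.0, 0.0], [0.0, -1.0]]

L = [[2.0, 0.5], [0.5, 1.0]]   # arbitrary symmetric positive definite
M = [[1.0, 0.5], [0.0, 1.0]]   # arbitrary with det(M) = 1 exactly
Mt = [[M[j][i] for j in range(2)] for i in range(2)]
blocks = [[L, matmul(L, M)], [matmul(Mt, L), matmul(matmul(Mt, L), M)]]
K = [[blocks[i // 2][j // 2][i % 2][j % 2] for j in range(4)] for i in range(4)]

Z = matmul(matmul(K, kron(J, Rp)), K)
residual = max(abs(Z[i][j]) for i in range(4) for j in range(4))
assert residual < 1e-12
```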
\[
\bb{M}_{21}=\left\{\mathfrak{L}(\BL,\BM):\BL>0,\ \BL^{-1}<-2\BM\BR_{\perp}\right\}. \] There is also a link associated with this ER. It says that $\BM^{*}$ does not depend on $\BL(\Bx)$ in the parametrization (\ref{ER21fin}). The effective tensor $\BM^{*}$ can be computed from the exact relation described by \[
\bb{M}_{9}=\{\mathsf{L}=\BGL\otimes\BP>0: \det\mathsf{L}=\det\BGL\det\BP=1\}. \] Specifically, $\mathsf{L}=\BGL\otimes\BP\in\bb{M}_{9}$ is uniquely determined by a pair of symmetric, positive definite $2\times 2$ matrices $\BGL$ and $\BP$, satisfying $\det\BGL\det\BP=1$, provided we fix $\Lambda_{11}=1$. We will denote this parametrization by $\mathsf{L}=\mathsf{L}_{9}(\BGL,\BP)$. The fact that $\bb{M}_{9}$ is an exact relation means that $\mathsf{L}_{9}(\BGL,\BP)^{*}=\mathsf{L}_{9}(\BGL^{*},\BP^{*})$, for some $\BGL^{*}$ and $\BP^{*}$ that depend on the microstructure\ of the composite. The link between $\bb{M}_{21}$, given by (\ref{ER21fin}), and $\bb{M}_{9}$ is given by a bijective transformation $\BM\mapsto(\BGL(\BM),\BP(\BM))$: \[
\BGL(\BM)=\mat{1}{\mathrm{Tr}\,\BM/2}{\mathrm{Tr}\,\BM/2}{\det\BM},\qquad
\BP(\BM)=-\BR_{\perp}\frac{\BM-(\mathrm{Tr}\,\BM)\BI_{2}/2}{\det\BGL(\BM)}. \] The link says that $\BM^{*}$ is determined via the formula \[
\mathsf{L}_{9}(\BGL(\BM),\BP(\BM))^{*}=\mathsf{L}_{9}(\BGL(\BM^{*}),\BP(\BM^{*})), \] so that \[
\BM^{*}=\Lambda_{12}^{*}\BI_{2}+\BR_{\perp}\BP^{*}\det\BGL^{*}. \]
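The dictionary $\BM\mapsto(\BGL(\BM),\BP(\BM))$ can be spot-checked numerically: $\det\BGL(\BM)\det\BP(\BM)=1$ holds identically, $\BP(\BM)$ is symmetric, and the displayed formula recovers $\BM$ from $(\BGL,\BP)$. A sketch with an arbitrary test matrix $\BM$:

```python
# Check the M_21 <-> M_9 dictionary: det(Lam(M)) det(P(M)) = 1, P symmetric,
# and M is recovered as Lam_{12} I + Rp P det(Lam).
Rp = [[0.0, -1.0], [1.0, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

M = [[1.0, -2.0], [1.0, 1.0]]   # arbitrary test matrix (complex eigenvalues)
t = (M[0][0] + M[1][1]) / 2.0   # Tr(M)/2
Lam = [[1.0, t], [t, det(M)]]
dL = det(Lam)
Mdev = [[M[i][j] - t * (i == j) for j in range(2)] for i in range(2)]
P = [[-x / dL for x in row] for row in mul(Rp, Mdev)]

assert abs(P[0][1] - P[1][0]) < 1e-12               # P is symmetric
assert abs(det(Lam) * det(P) - 1.0) < 1e-12         # det Lam det P = 1
RpPdL = [[dL * x for x in row] for row in mul(Rp, P)]
Mrec = [[Lam[0][1] * (i == j) + RpPdL[i][j] for j in range(2)] for i in range(2)]
assert all(abs(Mrec[i][j] - M[i][j]) < 1e-12 for i in range(2) for j in range(2))
```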
\[
\bb{M}_{22}=\{\mathsf{L}>0:\mathsf{L}(\BI_{2}\otimes\BR_{\perp})\mathsf{L}=\BI_{2}\otimes\BR_{\perp}\}. \] In block-components we can rewrite this as \[
\bb{M}_{22}=\left\{\mathsf{L}>0:\BL_{11}=-\frac{\BL_{12}\mathrm{cof}(\BL_{22})\BL_{12}^{T}}{\det\BL_{12}},\ \det\BL_{22}+\det\BL_{12}=1\right\}. \] Of course, there is symmetry between indices and we also have \[ \bb{M}_{22}=\left\{\mathsf{L}>0:\BL_{22}=-\frac{\BL^{T}_{12}\mathrm{cof}(\BL_{11})\BL_{12}}{\det\BL_{12}},\ \det\BL_{11}+\det\BL_{12}=1\right\}. \]
\section{An application: Isotropic polycrystals} The subspace $\Pi_{0}=(\mathrm{Sym}(\bb{C}^{2}),0)$ contains the unique isotropic tensor $\mathsf{K}=0$. Therefore, the corresponding exact relation (\ref{ER22}) passes through the unique isotropic tensor $\mathsf{L}=\mathsf{I}$. The global automorphisms \[ \Psi_{\alpha,\BB}(\mathsf{L})=(\BB\otimes\BI_{2})(\mathsf{L}-\alpha\mathsf{T})(\BB\otimes\BI_{2}),\quad\BB^{T}=\BB, \] map $\mathsf{L}=\mathsf{I}$ into $\BB^{2}\otimes\BI_{2}-(\alpha\det\BB)\mathsf{T}$. Every isotropic tensor has the form $\BGL\otimes\BI_{2}+\beta\mathsf{T}$, where $\BGL$ is symmetric and positive definite (additionally we must also have $\det\BGL>\beta^{2}$). Thus, there exists a unique symmetric and positive definite matrix $\BB$, such that $\BB^{2}=\BGL$ and $\alpha=\beta/\det\BB$, so that $\Psi_{\alpha,\BB}(\mathsf{I})=\BGL\otimes\BI_{2}+\beta\mathsf{T}$. We conclude that for every isotropic, symmetric and positive definite tensor $\mathsf{L}$ there exists a unique symmetric and positive definite real $2\times 2$ matrix $\BB$ and a real number
$|\alpha|<1$, such that $\mathsf{L}=\Psi_{\alpha,\BB}(\mathsf{I})$. Each such transformation maps the exact relation (\ref{ER22}) into another exact relation; these images are all disjoint and therefore foliate an open neighborhood\ of the space of isotropic tensors. Suppose $\mathsf{L}_{0}$ is an anisotropic tensor. Then for sufficiently small $\epsilon>0$ the tensor $\epsilon\mathsf{L}_{0}$ will be in that neighborhood\ foliated by exact relations. Hence, there exists a unique exact relation $\bb{M}_{\epsilon}$ isomorphic to (\ref{ER22}) that passes through $\epsilon\mathsf{L}_{0}$. But then $\epsilon^{-1}\bb{M}_{\epsilon}$ is the exact relation isomorphic to (\ref{ER22}) that passes through $\mathsf{L}_{0}$. Thus, regardless of texture, the effective tensor of an isotropic polycrystal made of the single crystallite $\mathsf{L}_{0}$ will be uniquely determined by $\mathsf{L}_{0}$, just as in 2D conductivity, where $\sigma^{*}=\sqrt{\det\BGs_{0}}$. If its effective tensor is $\mathsf{L}^{*}$ and $\Psi_{\alpha,\BB}(\mathsf{L}^{*})=\mathsf{I}$, then $\mathsf{L}'=\Psi_{\alpha,\BB}(\mathsf{L}_{0})$ will belong to the exact relation (\ref{ER22}). Specifically, \[ 2(\mathsf{L}'+\mathsf{I})^{-1}-\mathsf{I}\in\Pi_{0}=(\mathrm{Sym}(\bb{C}^{2}),0). \] In order to find equations satisfied by $\BB$ and $\alpha$ we will write $\mathsf{L}_{0}=K(\BX,\BY)$. Then \[ \mathsf{L}'=\Psi_{\alpha,\BB}(\mathsf{L}_{0})=K(\BB(\BX-i\alpha\BR_{\perp})\BB,\BB\BY\BB). \] Thus, we need to find $\BB\in\mathrm{Sym}^{+}(\bb{R}^{2})$ and $\alpha\in(-1,1)$, such that \[ 2K(\BB(\BX-i\alpha\BR_{\perp})\BB+\BI_{2},\BB\BY\BB)^{-1}-K(\BI_{2},0)=K(\Bzr,\cdot). \] Using formula (\ref{Kinv}) we obtain $S_{X}=2\BI_{2}$, where \[ S_{X}=\BB(\BX-i\alpha\BR_{\perp})\BB+\BI_{2}-\BB\BY\BB(\BB(\bra{\BX}+i\alpha\BR_{\perp})\BB+\BI_{2})^{-1}\BB\bra{\BY}\BB. 
\] Hence, the equation $S_{X}=2\BI_{2}$ becomes \[ \BX-i\alpha\BR_{\perp}-\BY(\bra{\BX}+i\alpha\BR_{\perp}+\BB^{-2})^{-1}\bra{\BY}=\BB^{-2}. \] We now observe that $\mathsf{L}^{*}=K(\BB^{-2}+i\alpha\BR_{\perp},0)$. Thus, denoting \[ \BL^{*}=\BB^{-2}+i\alpha\BR_{\perp}\in\mathfrak{H}(\bb{C}^{2}), \] we obtain the equation for $\BL^{*}$: \begin{equation}
\label{isopoly}
\BX-\BL^{*}=\BY(\bra{\BX}+\BL^{*})^{-1}\bra{\BY},\quad\mathsf{L}_{0}=K(\BX,\BY),\quad \mathsf{L}^{*}=K(\BL^{*},0). \end{equation}
If we make a change of variables $\BZ=\bra{\BX}+\BL^{*}$ then $\BZ$ is still self-adjoint (and positive definite) and solves \begin{equation}
\label{Zeq}
\BZ+\BY\BZ^{-1}\BY^{H}=\BX+\bra{\BX}. \end{equation} We can first solve 4 real linear equations with 4 real unknowns: \begin{equation}
\label{Zeqlin}
\BZ+\theta\BY\mathrm{cof}(\BZ)^{T}\BY^{H}=\BX+\bra{\BX}, \end{equation} obtaining a solution $\Hat{\BZ}(\theta)$. We then find $\theta>0$ from the equation $\theta\det\Hat{\BZ}(\theta)=1$. Let us analyze the linear equation (\ref{Zeqlin}), assuming first that $\BY$ is invertible. Let $\mathfrak{B}_{\BY}\in\mathrm{End}_{\bb{R}}(\mathfrak{H}(\bb{C}^{2}))$ be defined by \[ \mathfrak{B}_{\BY}\BZ=\BY\mathrm{cof}(\BZ)^{T}\BY^{H}. \] Let $\lambda$ be an eigenvalue of $\mathfrak{B}_{\BY}$. Then, taking determinants in the equation $\mathfrak{B}_{\BY}\BZ=\lambda\BZ$, we obtain \[
\lambda^{2}\det\BZ=|\det\BY|^{2}\det\BZ. \]
Hence, either $\lambda=\pm|\det\BY|$ or $\det\BZ=0$. In the latter case, $\BZ$ is a real multiple of $\Ba\otimes\bra{\Ba}$ for some nonzero vector $\Ba\in\bb{C}^{2}$. Then \[ \BY\BR_{\perp}\bra{\Ba}\otimes\bra{\BY}\BR_{\perp}\Ba=\lambda\Ba\otimes\bra{\Ba}. \] Taking traces we obtain \[
\lambda=\frac{|\BY\BR_{\perp}\bra{\Ba}|^{2}}{|\Ba|^{2}}\ge 0. \] Thus, the only possible negative eigenvalue of $\mathfrak{B}_{\BY}$ is
$\lambda=-|\det\BY|$. Observing that $\mathfrak{B}_{e^{i\alpha}\BY}=\mathfrak{B}_{\BY}$ for any $\alpha\in\bb{R}$, we may assume, without loss of generality, that $\det\BY>0$. It is then easy to see that \[ \mathfrak{B}_{\BY}\Re\mathfrak{e}(\BY)=(\det\BY)\Re\mathfrak{e}(\BY),\qquad \mathfrak{B}_{\BY}\mathfrak{Im}(\BY)=-(\det\BY)\mathfrak{Im}(\BY). \] If $\BY$ is real and symmetric, then \[ \mathfrak{B}_{\BY}\BZ_{\pm}(c)=(\pm\det\BY)\BZ_{\pm}(c),\quad \BZ_{+}=\phi(c)\BY+\BY\phi(\bra{c}),\quad\BZ_{-}=\psi(c)\BY+\BY\psi(c) \] for any $c\in\bb{C}$ for which $\BZ_{\pm}(c)\not=0$. In fact, the characteristic polynomial of $\mathfrak{B}_{\BY}$ (as computed by Maple) is \[
p(x)=(x^{2}-|\det\BY|^{2})(x^{2}+|\det\BY|^{2}+x\av{\BY,\mathrm{cof}(\BY)}),\quad \av{\BA,\BB}=\mathrm{Tr}\,(\BA\BB^{H}). \]
One can check that the roots of $p(x)$, other than $\pm|\det\BY|$, are either both complex or both positive.
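The eigenvector identities for $\mathfrak{B}_{\BY}$ stated above are easy to spot-check numerically; the particular $\BY$ below is an arbitrary complex symmetric test matrix with $\det\BY=3>0$:

```python
# Check: B_Y(Re Y) = det(Y) Re(Y) and B_Y(Im Y) = -det(Y) Im(Y)
# for B_Y(Z) = Y cof(Z)^T Y^H, with Y complex symmetric and det(Y) > 0.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def adj(Z):  # transposed cofactor matrix of a 2x2 matrix
    return [[Z[1][1], -Z[0][1]], [-Z[1][0], Z[0][0]]]

def ctrans(A):  # conjugate transpose
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

Y = [[2 + 0j, 1j], [1j, 1 + 0j]]             # arbitrary complex symmetric test matrix
dY = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]   # det(Y) = 3

def B_Y(Z):
    return mul(mul(Y, adj(Z)), ctrans(Y))

ReY = [[z.real for z in row] for row in Y]
ImY = [[z.imag for z in row] for row in Y]
BRe, BIm = B_Y(ReY), B_Y(ImY)
assert all(abs(BRe[i][j] - dY * ReY[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(BIm[i][j] + dY * ImY[i][j]) < 1e-12 for i in range(2) for j in range(2))
```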
Hence, $\Hat{\BZ}(\theta)=(\mathsf{I}+\theta\mathfrak{B}_{\BY})^{-1}(\BX+\bra{\BX})$ and $\theta$ is found as a positive root of \begin{equation}
\label{rootheta}
\theta\det\left[(\mathsf{I}+\theta\mathfrak{B}_{\BY})^{-1}(\BX+\bra{\BX})\right]=1, \end{equation} such that $(\mathsf{I}+\theta\mathfrak{B}_{\BY})^{-1}(\BX+\bra{\BX})>\bra{\BX}$.
The case $\det\BY=0$ was not considered separately, but since it is a limiting case of the general one, the conclusion stays the same: $\theta$ is a positive root of (\ref{rootheta}), such that \begin{equation}
\label{Lst}
\BL^{*}=(\mathsf{I}+\theta\mathfrak{B}_{\BY})^{-1}(\BX+\bra{\BX})-\bra{\BX}>0. \end{equation} We conjecture that $\theta$ must be the smallest positive root of
(\ref{rootheta}). This is easily verified if $|\BY|$ is sufficiently small.
In the special case when $\BY$ is real (or purely imaginary) we can write reasonably compact equations: \[ \BZ=\frac{2\Re\mathfrak{e}(\BX)}{1-\theta\det\BY}-\frac{2\theta\mathrm{Tr}\,(\mathrm{cof}(\BY)\Re\mathfrak{e}(\BX))\BY}{1-\theta^{2}\det\BY^{2}} \] \[ \frac{\theta\det\Re\mathfrak{e}(\BX)}{(1-\theta\det\BY)^{2}}-\frac{\theta^{2}(\mathrm{Tr}\,(\mathrm{cof}(\BY)\Re\mathfrak{e}(\BX)))^{2}} {(1-\theta^{2}\det\BY^{2})^{2}}=\nth{4}. \] If we change variables $t=\theta\det\BY$, then we can write the equation for $t$ as \[ t(1+t)^{2}\det(\BY^{-1}\Re\mathfrak{e}(\BX))-t^{2}(\mathrm{Tr}\,(\BY^{-1}\Re\mathfrak{e}(\BX)))^{2}=\nth{4}(1-t^{2})^{2}. \] If $s_{1}$ and $s_{2}$ are the eigenvalues of $(\Re\mathfrak{e}\BX)^{1/2}\BY^{-1}(\Re\mathfrak{e}\BX)^{1/2}$, then (assuming, without loss of generality, that
$\det\BY>0$) we obtain $|s_{j}|>1$ as a consequence of positive definiteness of $\mathsf{L}_{0}$, while the equation for $t$ reads \[
p(t)=t(1+t)^{2}|s_{1}||s_{2}|-t^{2}(|s_{1}|+|s_{2}|)^{2}-\nth{4}(1-t^{2})^{2}=0. \] The case $s_{1}=s_{2}=s$ can be solved explicitly. The roots of $p(t)$ are $t=1$ of multiplicity 2 and $t_{\pm}(s)=2s^{2}-1\pm2s\sqrt{s^{2}-1}$. It is obvious that $p(t)<0$, when $t\le 0$, so all the real roots of $p(t)$ have to be positive. The product $t_{+}(s)t_{-}(s)=1$, so one of the roots is in $(0,1)$, while the other is in $(1,+\infty)$. The discriminant of $p(t)$ is \[ \Delta[p]=(s_{1}^{2}-1)^{2}(s_{2}^{2}-1)^{2}(s_{1}^{2}-s_{2}^{2})^{2}. \] Thus, for all $(s_{1},s_{2})\in D=\{(s_{1},s_{2}):s_{1}>s_{2}>1\}$ the number of real roots is the same. We also compute $p(0)=-1/2$ and $p(1)=-(s_{1}-s_{2})^{2}$, which show that the number of roots in $(0,1)$ and in $(1,+\infty)$ remains the same. It is easy to check that there are 4 real roots when $s_{1}=s_{2}+\epsilon$: two in $(0,1)$ and two in $(1,+\infty)$.
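The explicit solution for $s_{1}=s_{2}=s$ can be verified numerically: $p(1)=0$ and $p(t_{\pm}(s))=0$, with $t_{+}t_{-}=1$ placing one root on each side of $1$. A sketch with the hypothetical value $s=1.5$:

```python
# Check the s1 = s2 = s case: p(1) = 0 and p(t_pm) = 0, with t_plus * t_minus = 1.
import math

def p(t, s1, s2):
    return (t * (1 + t) ** 2 * abs(s1 * s2)
            - t ** 2 * (abs(s1) + abs(s2)) ** 2
            - 0.25 * (1 - t ** 2) ** 2)

s = 1.5                                   # hypothetical value with |s| > 1
tp = 2 * s ** 2 - 1 + 2 * s * math.sqrt(s ** 2 - 1)
tm = 2 * s ** 2 - 1 - 2 * s * math.sqrt(s ** 2 - 1)

assert abs(p(1.0, s, s)) < 1e-12
assert abs(p(tp, s, s)) < 1e-9 and abs(p(tm, s, s)) < 1e-9
assert abs(tp * tm - 1.0) < 1e-12
assert 0 < tm < 1 < tp                    # one root on each side of 1
```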
\section{An application: two-phase composites with isotropic phases} \label{sec:twophase} The results in this section constitute the Master's thesis of Sarah Childs (MS 2020, Temple University).
\subsection{Analysis} If we have two given isotropic tensors $\mathsf{L}_{1}$, $\mathsf{L}_{2}$ we first use the global link to map $\mathsf{L}_{1}$ to $\mathsf{I}$, while $\mathsf{L}_{2}$ will be mapped to some other isotropic tensor $\mathsf{L}_{0}$. Next we compute $\mathsf{K}_{0}=2(\mathsf{L}_{0}+\mathsf{I})^{-1}-\mathsf{I}$ and write it as $\mathsf{K}_{0}=K(\BX_{0},0)$ for some $\BX_{0}\in\mathfrak{H}(\bb{C}^{2})$. We then apply the global automorphism (\ref{Jautoex}) and map $\BX_{0}$ to $\BC\BX_{0}\BC^{H}$ using $\BC\in O(2,\bb{C})$. Hence, we need to understand the action of $G=O(2,\bb{C})$ on $\mathfrak{H}(\bb{C}^{2})$. The group $G$ has two connected components $G_{+}=SO(2,\bb{C})$ and $G_{-}=\psi(1)G_{+}$. Hence, it is enough to understand the action of the subgroup $G_{+}$ on $\mathfrak{H}(\bb{C}^{2})$. We do this by first identifying invariant subspaces of $G_{+}$. The calculations have already been done. These invariant subspaces are $\bb{R}\BZ_{0}$, $\bb{R}\bra{\BZ_{0}}$, and $\BGY$, where \[ \mathfrak{H}(\bb{C}^{2})=\bb{R}\BZ_{0}\oplus\bb{R}\bra{\BZ_{0}}\oplus\BGY. \] We recall that $\BGY=\{\psi(z):z\in\bb{C}\}$. We then have for $\BC_{+}(c)\in G_{+}$, given by (\ref{Cpmdef}), \[ \BC_{+}(c)\psi(z)\BC_{+}(c)^{H}=\psi(e^{-2i\Re\mathfrak{e}(c)}z), \] and \[ \BC_{+}(c)\BZ_{0}\BC_{+}(c)^{H}=e^{2\mathfrak{Im}(c)}\BZ_{0},\qquad \BC_{+}(c)\bra{\BZ_{0}}\BC_{+}(c)^{H}=e^{-2\mathfrak{Im}(c)}\bra{\BZ_{0}}. \] These formulas show that $G_{+}$ contains two 1-parameter subgroups \[ H_{+}=\{\BC_{+}(c): c\in\bb{R}\}=SO(2,\bb{R}),\qquad H_{-}=\{\BC_{+}(c): \Re\mathfrak{e}(c)=0\}. \] All points in the subspace $\BGY$ are fixed by $H_{-}$, while all points in the subspace $\BGF=\mathrm{Span}_{\bb{R}}\{\BZ_{0},\bra{\BZ_{0}}\}$ are fixed by $H_{+}$. At the same time $H_{+}$ acts by rotations on $\BGY$, while $H_{-}$ acts by hyperbolic rotations on $\BGF$. 
Thus, in order to understand how we can transform $\BX_{0}$ by $G_{+}$ we first split $\BX_{0}$ into its $\BGF$ and $\BGY$ components: \[ \BX_{0}=\BF+\BY,\qquad\BF\in\BGF,\quad \BY\in\BGY. \] We first apply $H_{+}$ to transform $\BY$ as desired, while $\BF$ is unchanged. We then apply $H_{-}$ to transform $\BF$ as desired, while the transformed matrix $\BY$ is unchanged. What can be accomplished is shown in Fig.~\ref{fig:bincomp}. \begin{figure}
\caption{Action of the global automorphism (\ref{Jautoex}) on $\BX_{0}=\BF+\BY$.}
\label{fig:bincomp}
\end{figure} It depends on whether we are in a generic situation, where $\BY\not=0$ and $\BF$ is neither a real multiple of $\BZ_{0}$ nor of $\bra{\BZ_{0}}$, or in one of the special ones. In the generic case we can rotate $\BF$ to a matrix
$f\BI_{2}\in{\mathcal D}\cap\BGF$ or $if\BR_{\perp}\in{\mathcal D}'\cap\BGF$, depending on whether $|\mathrm{Tr}\,\BF|$ is greater or smaller than $2|\mathfrak{Im}(F_{12})|$, respectively. Similarly, we can rotate $\BY$ to $\psi(y)\in{\mathcal D}\cap\BGY$ or to $\psi(iy)\in{\mathcal D}'\cap\BGY$, $y>0$, according to the space into which the component $\BF$ has been rotated. In that case either the physically trivial exact relation $({\mathcal D},{\mathcal D})$ can be used or an exact relation $({\mathcal D},{\mathcal D}')$ is applicable, unless $\det(\BX_{0})=0$, in which case the exact relation $\mathrm{Ann}(\bb{C}\Be_{2})\subset({\mathcal D},{\mathcal D})$ would apply. There are the following special cases: \begin{enumerate} \item $\BY\not=0$, $\BF\in\bb{R}\BZ_{0}$ or $\BF\in\bb{R}\bra{\BZ_{0}}$ and
$\BF\not=0$. Exact relation $(W,V_{\infty})\sim(\bra{W},\bra{V_{\infty}})$
is applicable. \item $\BY\not=0$, $\BF=0$. Exact relation
$(\bb{C}\BI,\bb{R}\BY)\sim(\bb{C}\BI,\bb{R}\psi(i))=(\bb{C}\BI,\Psi)\cap({\mathcal D},{\mathcal D}')$
is applicable. \item $\BY=0$, $\BF\not\in\bb{R}\BZ_{0}$,
$\BF\not\in\bb{R}\bra{\BZ_{0}}$. Exact relation $(\bb{C}\BI,\bb{R}\BF)$ is
applicable. If we are in a strongly coupled case then
$(\bb{C}\BI,\bb{R}\BF)\sim(\bb{C}\BI,i\BR_{\perp})=(\bb{C}\BI,\Psi)\cap({\mathcal D},{\mathcal D}')$,
otherwise,
$(\bb{C}\BI,\bb{R}\BF)\sim(\bb{C}\BI,\bb{R}\BI)=(\bb{C}\BI,\Psi)\cap({\mathcal D},{\mathcal D})$. \item $\BY=0$ and $\BF\in\bb{R}\BZ_{0}$ or $\BF\in\bb{R}\bra{\BZ_{0}}$ and $\BF\not=0$. Exact
relation $(0,\bb{R}\BZ_{0})\sim(0,\bb{R}\bra{\BZ_{0}})$ is applicable. \end{enumerate} Thus, if both components of $\BX_{0}$ are non-zero then there are 4 exact relations between components of $\mathsf{L}^{*}$. What is interesting is that the form of these relations depends very much on specific values of the components. If one of the components of $\BX_{0}$ happens to be 0, then there would be at least 7 exact relations, rising to 9 in the very special case $\BX_{0}=x_{0}\BZ_{0}$ or $\BX_{0}=x_{0}\bra{\BZ_{0}}$; in this very special case \emph{all} components of $\mathsf{L}^{*}$ will be uniquely determined if the volume fractions of the components are known.
If we write $\mathsf{L}_{j}=\BGs_{j}\otimes\BI_{2}+r_{j}\mathsf{T}$, then the transformation \begin{equation}
\label{Psi0}
\Psi(\mathsf{L})=(\BGs_{1}^{-1/2}\otimes\BI_{2})(\mathsf{L}-r_{1}\mathsf{T})(\BGs_{1}^{-1/2}\otimes\BI_{2}) \end{equation} maps $\mathsf{L}_{1}$ to $\mathsf{I}$, while \[ \mathsf{L}'=\Psi(\mathsf{L}_{2})=K\left(\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}+\frac{r_{2}-r_{1}}{\sqrt{\det{\BGs_{1}}}}i\BR_{\perp},0\right). \] Next we apply transformation $\Psi_{\BA,\BI_{2}}$, where \[
\BA=\mat{a_{0}}{1}{1}{a_{0}}. \] These transformations have the property that $\Psi_{\BA,\BI_{2}}(\mathsf{I})=\mathsf{I}$. Then, denoting \begin{equation}
\label{sigmarho}
\BGs=\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2},\qquad\rho=\frac{r_{2}-r_{1}}{\sqrt{\det{\BGs_{1}}}}, \end{equation} we obtain \[ \Psi_{\BA,\BI_{2}}(K(\BGs+i\rho\BR_{\perp},0))=K(\BL,0),\quad \BL=i\BR_{\perp}(\BGs+i(\rho+a_{0})\BR_{\perp})^{-1}(a_{0}\BGs+i(a_{0}\rho+1)\BR_{\perp}). \] Since all the matrices involved in the formula for $\BL$ are $2\times 2$ we have \[ (\BGs+i(\rho+a_{0})\BR_{\perp})^{-1}= \frac{\BR_{\perp}^{T}(\BGs-i(\rho+a_{0})\BR_{\perp})\BR_{\perp}}{\det\BGs-(\rho+a_{0})^{2}}. \] Thus, \begin{equation}
\label{Psi2L}
\BL=\frac{(1-a_{0}^{2})\BGs+i(a_{0}\det\BGs-(\rho+a_{0})(a_{0}\rho+1))\BR_{\perp}} {\det\BGs-(\rho+a_{0})^{2}}. \end{equation} Let us now choose $a_{0}$ so that the matrix $\BL$ is real and symmetric. This means that $a_{0}$ must be a root of \begin{equation}
\label{diaga0}
(a_{0}^{2}+1)\rho=a_{0}(\det\BGs-(\rho^{2}+1)). \end{equation} It is easy to check that the discriminant of the quadratic equation (\ref{diaga0}) is nonnegative if and only if \begin{equation}
\label{wkcpld}
|r_{1}-r_{2}|\le\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|. \end{equation} We will call this case ``weakly coupled'' because there is a choice of $a_{0}$ (a root of (\ref{diaga0})) that eliminates the thermoelectric coupling from both materials. The exceptional cases are $\det\BGs=(\rho\pm 1)^{2}$, in which case the inverse in the definition of $\BL$ does not exist. We note that there are several possibilities. \begin{enumerate} \item We could have mapped $\mathsf{L}_{2}$ to $\mathsf{I}$, instead of $\mathsf{L}_{1}$. That
means that instead of a pair of numbers $(\rho,\det\BGs)$ we will get a pair
of numbers \[ (\rho',\det\BGs')=\left(-\frac{\rho}{\sqrt{\det\BGs}},\nth{\det\BGs}\right). \] It is easy to check that the sign of the discriminant of (\ref{diaga0}) does not depend on the choice of which $\mathsf{L}_{j}$ gets mapped to $\mathsf{I}$. \item In each case there are two real roots $a_{0}$. If one root is $a_{0}$,
the other root is $1/a_{0}$. \end{enumerate} We want $\BL>0$. This means that in each case we must have the inequality \[ \frac{1-a_{0}^{2}}{\det\BGs-(\rho+a_{0})^{2}}>0. \] In the weakly coupled case (\ref{diaga0}) we have \[ \frac{1-a_{0}^{2}}{\det\BGs-(\rho+a_{0})^{2}}=\frac{a_{0}}{\rho+a_{0}}=\frac{a_{0}\rho+1}{\det\BGs}. \] If $a_{1}$ and $a_{2}$ are the two roots of (\ref{diaga0}), then \[ \frac{a_{1}}{\rho+a_{1}}\cdot\frac{a_{2}}{\rho+a_{2}}=\nth{\det\BGs}>0, \] \[ \frac{a_{1}}{\rho+a_{1}}+\frac{a_{2}}{\rho+a_{2}}=\frac{\det\BGs-\rho^{2}+1}{\det\BGs}. \] The inequality $\det\BGs-\rho^{2}+1>0$ always holds in the weakly coupled case.
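The equivalence between nonnegativity of the discriminant of (\ref{diaga0}) and the weak-coupling inequality (\ref{wkcpld}) can be tested by random sampling over the admissible region $r_{j}^{2}<\det\BGs_{j}$ (the sampling ranges below are arbitrary):

```python
# Random-sampling check: over the admissible region r_j^2 < det(sigma_j), the
# discriminant of (diaga0) is nonnegative exactly when (wkcpld) holds.
import math
import random

random.seed(0)
checked = 0
for _ in range(2000):
    d1 = random.uniform(0.5, 3.0)      # det(sigma_1), arbitrary sampling range
    d2 = random.uniform(0.5, 3.0)      # det(sigma_2)
    r1 = random.uniform(-0.99, 0.99) * math.sqrt(d1)   # positivity constraint
    r2 = random.uniform(-0.99, 0.99) * math.sqrt(d2)
    dsig = d2 / d1                     # det(sigma)
    rho = (r2 - r1) / math.sqrt(d1)
    disc = (dsig - rho ** 2 - 1) ** 2 - 4 * rho ** 2   # discriminant of (diaga0)
    weak = abs(r1 - r2) <= abs(math.sqrt(d1) - math.sqrt(d2))
    if abs(disc) > 1e-9:               # skip numerically borderline samples
        assert (disc > 0) == weak
        checked += 1
assert checked > 1900
```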
In the strongly coupled case, i.e. when inequality (\ref{wkcpld}) is violated, we first apply the transformation \[ \Psi_{0}(\mathsf{L})=(\BGs_{1}^{-1/2}\otimes\BI_{2})\mathsf{L}(\BGs_{1}^{-1/2}\otimes\BI_{2}) \] which maps $\BL_{1}$ into $\BL'_{1}=\BI_{2}+i\rho_{1}\BR_{\perp}$, and $\BL_{2}$ into $\BL'_{2}=\BGs+i\rho_{2}\BR_{\perp}$, where \[ \rho_{j}=\frac{r_{j}}{\sqrt{\det{\BGs_{1}}}}. \] In this case we look for a simpler transformation \[ \Psi_{3}(\mathsf{L})=a\mathsf{L}+b\mathsf{T}, \] which maps $\mathsf{L}'_{j}$ onto the ER $({\mathcal D},{\mathcal D}')$ if we work in the frame in which $\BGs_{11}=\BGs_{22}$. The coefficients $a>0$ and $b\in\bb{R}$ need to be chosen so that \[ \det(a\BI_{2}+i(a\rho_{1}+b)\BR_{\perp})=\det(a\BGs+i(a\rho_{2}+b)\BR_{\perp})=1. \] This means that \[ a^{2}=(a\rho_{1}+b)^{2}+1,\qquad a^{2}\det\BGs=(a\rho_{2}+b)^{2}+1. \] Subtracting the two equations we find \[ b=a\frac{\det\BGs-1+\rho_{1}^{2}-\rho_{2}^{2}}{2\rho}, \] where $\rho$ is the same as before. We then find that \[ a^{2}\frac{((\rho+1)^{2}-\det\BGs)(\det\BGs-(\rho-1)^{2})}{4\rho^{2}}=1. \]
We note that since $|\rho_{1}|<1$ and $|\rho_{2}|<\sqrt{\det\BGs}$, then \[
|\rho|=|\rho_{2}-\rho_{1}|\le|\rho_{2}|+|\rho_{1}|<\sqrt{\det\BGs}+1. \] This is equivalent to \[
|r_{2}-r_{1}|<\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}}. \] Using our original parameters we can write \[ a^{2}=\frac{4\Delta r^{2}\det\BGs_{1}}{ (\Delta r^{2}-(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}) ((\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}-\Delta r^{2})},\qquad\Delta r=r_{2}-r_{1} \] This shows that in the strongly coupled regime we can always choose $a>0$. We will therefore write \[
a=2a_{0}\sqrt{\det\BGs_{1}},\quad a_{0}=\frac{|\Delta r|}{\sqrt{(\Delta r^{2}-(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}) ((\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}-\Delta r^{2})}} \] \[ b=a_{0}\frac{\det\BGs_{2}+r_{1}^{2}-r_{2}^{2}}{r_{2}-r_{1}}. \]
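The normalization conditions $a^{2}=(a\rho_{1}+b)^{2}+1$ and $a^{2}\det\BGs=(a\rho_{2}+b)^{2}+1$ can be checked against the explicit formulas for $a$ and $b$ above; a sketch with hypothetical strongly coupled parameters:

```python
# Check: the choice of a and b makes det(a I + i(a rho1 + b) Rp) and
# det(a sigma + i(a rho2 + b) Rp) both equal to 1, i.e.
# a^2 = (a rho1 + b)^2 + 1 and a^2 det(sigma) = (a rho2 + b)^2 + 1.
import math

rho1, rho2, dsig = 0.2, 1.0, 1.5   # hypothetical strongly coupled parameters
rho = rho2 - rho1
assert (rho - 1) ** 2 < dsig < (rho + 1) ** 2   # strong-coupling window

a = 2 * abs(rho) / math.sqrt(((rho + 1) ** 2 - dsig) * (dsig - (rho - 1) ** 2))
b = a * (dsig - 1 + rho1 ** 2 - rho2 ** 2) / (2 * rho)

assert abs(a ** 2 - (a * rho1 + b) ** 2 - 1.0) < 1e-9
assert abs(a ** 2 * dsig - (a * rho2 + b) ** 2 - 1.0) < 1e-9
```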
Let us find the conditions for each case, including the special cases, explicitly in terms of $\BGs_{j}$, $r_{j}$, $j=1,2$. We compute \[ \BX_{0}=2\left(\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}+\BI_{2}+\frac{r_{2}-r_{1}}{\sqrt{\det{\BGs_{1}}}}i\BR_{\perp}\right)^{-1}-\BI_{2}. \] It is fairly easy to show that $\BY\not=0$ if and only if $\BGs_{1}$ and $\BGs_{2}$ are not scalar multiples of one another. It is also easy to show that $\BF=0$ if and only if $r_{1}=r_{2}$ and $\det\BGs_{1}=\det\BGs_{2}$. In order to split $\BX_{0}$ into the $\BGF$ and $\BGY$ parts we can use the formula \[
(\varphi(\alpha)+\psi(a)+ir\BR_{\perp})^{-1}=\frac{\varphi(\alpha)-\psi(a)-ir\BR_{\perp}}{\alpha^{2}-|a|^{2}-r^{2}}. \] So, if $\BF=\varphi(\alpha)+ir\BR_{\perp}$ and $\BY=\psi(a)$, then we have the equation \[
\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}+\BI_{2}+\frac{r_{2}-r_{1}}{\sqrt{\det{\BGs_{1}}}}i\BR_{\perp}=2\frac{\varphi(\alpha+1)-\psi(a)-ir\BR_{\perp}}{(\alpha+1)^{2}-|a|^{2}-r^{2}}. \] Hence we have the following system \[ \begin{cases}
\frac{r_{2}-r_{1}}{\sqrt{\det{\BGs_{1}}}}=-\frac{2r}{(\alpha+1)^{2}-|a|^{2}-r^{2}},\\
\mathrm{Tr}\,(\BGs_{2}\BGs_{1}^{-1})+2=\frac{4(\alpha+1)}{(\alpha+1)^{2}-|a|^{2}-r^{2}},\\
\frac{\det\BGs_{2}}{\det\BGs_{1}}=\det\left(2\frac{\varphi(\alpha+1)-\psi(a)}{(\alpha+1)^{2}-|a|^{2}-r^{2}}-\varphi(1)\right). \end{cases} \] The condition that $\BF\in\bb{R}\BZ_{0}$ or $\BF\in\bb{R}\bra{\BZ_{0}}$ is equivalent to $r^{2}=\alpha^{2}$. Solving the system above we obtain \[ \begin{cases}
|a|^{2}=\frac{\mathrm{Tr}\,(\BGs_{1}\mathrm{cof}(\BGs_{2}))^{2}-4\det(\BGs_{1}\BGs_{2})}{(\det(\BGs_{1}+\BGs_{2})-(r_{1}-r_{2})^{2})^{2}},\\ r^{2}=\frac{4(r_{1}-r_{2})^{2}\det(\BGs_{1})}{(\det(\BGs_{1}+\BGs_{2})-(r_{1}-r_{2})^{2})^{2}},\\ \alpha=\frac{(r_{1}-r_{2})^{2}+\det(\BGs_{1})-\det(\BGs_{2})}{\det(\BGs_{1}+\BGs_{2})-(r_{1}-r_{2})^{2}} \end{cases} \] The condition that $\BF\in\bb{R}\BZ_{0}$ or $\BF\in\bb{R}\bra{\BZ_{0}}$ is equivalent to \begin{equation}
\label{RZ0}
|r_{1}-r_{2}|=\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|. \end{equation} The composite made with the pair of isotropic materials $\mathsf{L}_{j}$ will be called weakly thermoelectrically heterogeneous if \begin{equation}
\label{weak}
|r_{1}-r_{2}|<\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|, \end{equation} and strongly thermoelectrically heterogeneous (strongly coupled) otherwise. (We note that the requirement of positive definiteness of $\mathsf{L}_{j}$ implies that $r_{j}^{2}<\det\BGs_{j}$.) The thermoelectric interactions in a weakly thermoelectrically heterogeneous composite can be decoupled. Such a decoupling is impossible in a strongly thermoelectrically heterogeneous composite. Here is the summary. \begin{enumerate} \item $\BGs_{1}\not=\theta\BGs_{2}$ (generic case)
\begin{enumerate}
\item $|r_{1}-r_{2}|<\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|$
(weakly coupled case)
\begin{enumerate}
\item $|r_{1}-r_{2}|^{2}\not=\det(\BGs_{1}-\BGs_{2})$: ER $({\mathcal D},{\mathcal D})$.
This implies that the 10 components of $\mathsf{L}^{*}$ depend only on 6
microstructure-dependent parameters. Moreover, the link
$({\mathcal D},{\mathcal D})/{\rm Ann}(\bb{C}\Be_{2})\cong{\rm Ann}(\bb{C}\Be_{2})$ that relates
$\mathsf{L}^{*}$ to the effective tensor of a 2D conducting composite applies.
\item $|r_{1}-r_{2}|^{2}=\det(\BGs_{1}-\BGs_{2})$: ER Ann$(\bb{C}\Be_{2})$.
This implies that the 10 components of $\mathsf{L}^{*}$ depend only on 3
microstructure-dependent parameters. Moreover, they can be expressed in
terms of the effective tensor of a 2D conducting composite.
\end{enumerate}
\item $|r_{1}-r_{2}|>\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|$:
ER $({\mathcal D},{\mathcal D}')$ (strongly coupled case)
This implies that the 10 components of $\mathsf{L}^{*}$ depend
only on 6 microstructure-dependent parameters.
\item $|r_{1}-r_{2}|=\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|$
(borderline strongly coupled case)
\begin{enumerate}
\item $r_{1}\not=r_{2}$: ER $(W,V_{\infty})$. This implies that the 10
components of $\mathsf{L}^{*}$ depend only on 6 microstructure-dependent
parameters. Moreover, the link $(W,V_{\infty})/{\rm
Ann}(\bb{C}\bra{\Bz_{0}})\cong(\bb{C}\BI,\bb{R}\psi(i))$ is applicable. This
means that there is a link between this case and the $r_{1}=r_{2}$ case,
since ERs $(\bb{C}\BI,\bb{R}\psi(i))$ and $(\bb{C}\BI,\bb{R}\psi(1))$
are isomorphic.
\item $r_{1}=r_{2}$: ER $(\bb{C}\BI,\bb{R}\psi(1))$. This implies that the
10 components of $\mathsf{L}^{*}$ depend only on 3 microstructure-dependent
parameters, expressible in terms of 2D conductivity.
\end{enumerate} \end{enumerate} \item $\BGs_{1}=\theta_{1}\BGs$, $\BGs_{2}=\theta_{2}\BGs$ (special nongeneric case)
\begin{enumerate}
\item $|r_{1}-r_{2}|<|\theta_{1}-\theta_{2}|\sqrt{\det\BGs}$:
ER $(\bb{C}\BI,\bb{R}\BI)$. This implies that the 10 components of $\mathsf{L}^{*}$ depend
only on 3 microstructure-dependent parameters, which are expressible in
terms of the effective tensor of a 2D conducting composite.
\item $|r_{1}-r_{2}|>|\theta_{1}-\theta_{2}|\sqrt{\det\BGs}$:
ER $(\bb{C}\BI,i\BR_{\perp})$. This implies that the 10 components of $\mathsf{L}^{*}$ depend
only on 3 microstructure-dependent parameters.
\item $|r_{1}-r_{2}|=|\theta_{1}-\theta_{2}|\sqrt{\det\BGs}$:
ER $(0,\bb{R}\BZ_{0})$. This implies $\mathsf{L}^{*}$ is completely determined, regardless
of microstructure, if the volume fractions are known.
\end{enumerate} \end{enumerate}
An example is a binary composite made with isotropic thermoelectric materials in which the Seebeck coefficient $\BS$ is a scalar. In this case we have $r_{1}=r_{2}=0$. Let $\Sigma^{*}(h)$ be the effective tensor of an isotropic conducting composite made with two isotropic materials, whose conductivities are 1 and $h$. Then $\Sigma^{*}$ can be applied to symmetric matrices according to the rule \[ \Sigma^{*}(\BS)=\BR\mat{\Sigma^{*}(s_{1})}{0}{0}{\Sigma^{*}(s_{2})}\BR^{T},\qquad \BS=\BR\mat{s_{1}}{0}{0}{s_{2}}\BR^{T}. \] Then the effective thermoelectric tensor of such a composite will be isotropic and also have scalar Seebeck coefficient $r^{*}=0$ and \[ \mathsf{L}^{*}=\BGs^{*}\otimes\BI_{2},\qquad \BGs^{*}=\BGs_{1}^{1/2}\Sigma^{*}(\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2})\BGs_{1}^{1/2}. \] \subsection{Results} In many cases the results will be formulated in terms of the microstructure-dependent function $\BGS(h)$, representing the effective conductivity of a two-phase composite with isotropic constituent conductivities $\BI_{2}$ and $h\BI_{2}$, replacing materials $\mathsf{L}_{1}$, $\mathsf{L}_{2}$ in the original composite. Another convenient notation will be the matrices $\BS_{1}$ and $\BS_{2}$ defined by \[ \BS_{1}=\frac{\BGs_{2}-\lambda_{1}\BGs_{1}}{\lambda_{2}-\lambda_{1}},\qquad \BS_{2}=\frac{\BGs_{2}-\lambda_{2}\BGs_{1}}{\lambda_{1}-\lambda_{2}}, \] where $\lambda_{1}$ and $\lambda_{2}$ are the two roots of the quadratic equation $\det(\BGs_{2}-\lambda\BGs_{1})=0$. The ordering of the roots is unimportant. This notation will not be used when $\BGs_{1}$ and $\BGs_{2}$ are scalar multiples of one another, since in this case $\lambda_{1}=\lambda_{2}$ and the matrices $\BS_{j}$ are undefined.
$\bullet$ (2(c)) $\BGs_{1}=\theta_{1}\BGs_{0}$, $\BGs_{2}=\theta_{2}\BGs_{0}$,
$|r_{1}-r_{2}|=|\theta_{1}-\theta_{2}|\sqrt{\det\BGs_{0}}$. We apply the global automorphism \[ \Psi(\mathsf{L})=\nth{\theta_{1}}(\BGs_{0}^{-1/2}\otimes\BI_{2})(\mathsf{L}-r_{1}\mathsf{T})(\BGs_{0}^{-1/2}\otimes\BI_{2}), \] which maps $\mathsf{L}_{1}$ into $\mathsf{I}$ and $\mathsf{L}_{2}$ into \[ \mathsf{L}_{0}=\mat{\frac{\theta_{2}}{\theta_{1}}\BI_{2}}{-\frac{r_{2}-r_{1}}{\theta_{1}\sqrt{\det\BGs_{0}}}\BR_{\perp}}{\frac{r_{2}-r_{1}}{\theta_{1}\sqrt{\det\BGs_{0}}}\BR_{\perp}}{\frac{\theta_{2}}{\theta_{1}}\BI_{2}}, \] corresponding to the ER $(0,\bb{R}\BZ_{0})$ or $(0,\bb{R}\bra{\BZ_{0}})$. We choose indexing in such a way that $\theta_{2}\ge \theta_{1}$, so that $\mathsf{L}_{0}>0$. If we denote \[ \lambda=\frac{\theta_{2}}{\theta_{1}}\ge 1, \] then either \[ \frac{r_{2}-r_{1}}{\theta_{1}\sqrt{\det\BGs_{0}}}=\lambda-1,\text{ or } \frac{r_{2}-r_{1}}{\theta_{1}\sqrt{\det\BGs_{0}}}=-(\lambda-1), \] depending on whether $r_{1}-r_{2}=(\theta_{1}-\theta_{2})\sqrt{\det\BGs_{0}}$ or $r_{1}-r_{2}=-(\theta_{1}-\theta_{2})\sqrt{\det\BGs_{0}}$, respectively. In either case the effective tensor of a composite made with $\mathsf{I}$ and $\mathsf{L}_{0}$ will have the form \[ \mathsf{L}^{*}_{0}=\mat{\lambda^{*}\BI_{2}}{\mp(\lambda^{*}-1)\BR_{\perp}}{\pm(\lambda^{*}-1)\BR_{\perp}}{\lambda^{*}\BI_{2}}, \qquad\nth{\lambda^{*}}=\av{\lambda(x)^{-1}}=f_{1}+\frac{f_{2}}{\lambda}. \] Applying $\Psi^{-1}$ transformation we obtain the formula for $\mathsf{L}^{*}=K(\BGs^{*}+ir^{*}\BR_{\perp},0)$: \[
\BGs^{*}=(f_{1}\BGs_{1}^{-1}+f_{2}\BGs_{2}^{-1})^{-1},\qquad r^{*}=r_{1}+\frac{(r_{2}-r_{1})f_{2}\theta_{1}}{f_{1}\theta_{2}+f_{2}\theta_{1}}= \frac{r_{1}f_{1}\theta_{1}^{-1}+r_{2}f_{2}\theta_{2}^{-1}}{f_{1}\theta_{1}^{-1}+f_{2}\theta_{2}^{-1}}. \] In other words, \begin{equation}
\label{2c} \BGs^{*}=\av{\BGs(\Bx)^{-1}}^{-1},\qquad r^{*}=\frac{\av{r(x)\theta(x)^{-1}}}{\av{\theta(x)^{-1}}}. \end{equation}
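As a quick sanity check of (\ref{2c}), with illustrative numbers of our own choosing: take $\BGs_{0}=\BI_{2}$, $\theta_{1}=1$, $\theta_{2}=2$, $r_{1}=0$, $r_{2}=1$, so that $|r_{1}-r_{2}|=|\theta_{1}-\theta_{2}|\sqrt{\det\BGs_{0}}$ holds, and volume fractions $f_{1}=f_{2}=\hf$. Then \[ \BGs^{*}=\left(\hf\cdot 1+\hf\cdot\hf\right)^{-1}\BI_{2}=\frac{4}{3}\BI_{2},\qquad r^{*}=\frac{\hf\cdot 0\cdot 1+\hf\cdot 1\cdot\hf}{\hf\cdot 1+\hf\cdot\hf}=\nth{3}, \] so $r^{*}$ lies between $r_{1}$ and $r_{2}$, as expected of a weighted average.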
\textbf{In every case we will work in the frame in which
$\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}$ is diagonal! This is not a
physical frame. Rather it is a mathematical one, where two different linear
combinations of the original curl-free and divergence-free fields are
chosen as a pair of intensity and flux fields.}
$\bullet$ (1(c)ii) $\BL_{1}=\BGs_{1}+ir_{0}\BR_{\perp}$,
$\BL_{2}=\BGs_{2}+ir_{0}\BR_{\perp}$, moreover, $\det\BGs_{1}=\det\BGs_{2}$. The global link (\ref{Psi0}) maps $\BL_{1}$ to $\BI_{2}$ and $\BL_{2}$ to $\BGs=\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}$, which is always assumed to be diagonal. Since in this case we have $\det\BGs=1$ we can write \[ \BGs=\mat{\lambda}{0}{0}{\lambda^{-1}}, \] where the eigenvalues $\lambda$ and $1/\lambda$ solve the quadratic equation \begin{equation}
\label{geneig12}
\det(\BGs_{2}-\lambda\BGs_{1})=0. \end{equation} The effective tensor of the resulting composite is \[ \mathsf{L}_{0}^{*}=\mat{\BGs^{*}}{0}{0}{\frac{\BGs^{*}}{\det\BGs^{*}}}= \mat{1}{0}{0}{\nth{\det\BGs^{*}}}\otimes \BGs^{*},\qquad\BGs^{*}=\BGS(\lambda), \] where $\BGs^{*}$ is the effective conductivity of the composite made with two isotropic materials of conductivities $1$ and $\lambda$ and the same microstructure\ as the original composite.
We conclude that \begin{equation}
\label{1cii}
\mathsf{L}^{*}=\BGs_{1}^{1/2}\mat{1}{0}{0}{\nth{\det\BGs^{*}}}\BGs_{1}^{1/2}
\otimes\BGs^{*}+r_{0}\mathsf{T}, \end{equation} where we have used the form in which terms on both sides of the $\otimes$ sign are invariant with respect to the $\lambda\mapsto\lambda^{-1}$ permutation. We can express the answer without using $\BGs_{1}^{1/2}$ by observing that \[ \BGs_{1}^{1/2}\BI_{2}\BGs_{1}^{1/2}=\BGs_{1},\qquad \BGs_{1}^{1/2}\BGs\BGs_{1}^{1/2}=\BGs_{2}. \] In the frame in which $\BGs$ is diagonal we obtain \[ \mathsf{L}^{*}=r_{0}\mathsf{T}+((\det\BGs^{*}-\lambda^{2})\BGs_{1}+\lambda(1-\det\BGs^{*})\BGs_{2}) \otimes\frac{\BGs^{*}}{(1-\lambda^{2})\det\BGs^{*}}, \] where $\lambda>0$ solves (\ref{geneig12}) and $\BGs^{*}=\Sigma_{\rm 2D cond}(1,\lambda)$, and the result is independent of the choice of the root in (\ref{geneig12}). We note that in the notation $\BA\otimes\BB$, the frame of the operator $\BB$ is the physical one, while the frame of $\BA$ is mathematical. In the formula for $\mathsf{L}^{*}$ above the first factor of the tensor product would look the same in the original frame, since the tensors $\BGs_{j}$ transform together with the mathematical frame. Using our notation $\BS_{1}$ and $\BS_{2}$ we can write the final answer as \[ \mathsf{L}^{*}=r_{0}\mathsf{T}+\left(\frac{\BS_{1}}{\det\BGs^{*}}+\BS_{2}\right)\otimes\BGs^{*}. \]
$\bullet$ (1(a)ii) $|r_{1}-r_{2}|^{2}=\det(\BGs_{1}-\BGs_{2})$. In this case the quadratic equation (\ref{diaga0}) has two solutions $(\lambda_{j}-1)/\rho$, $j=1,2$, where the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of $\BGs$ solve (\ref{geneig12}). If we choose $a_{0}=(\lambda_{1}-1)/\rho$, then \begin{equation}
\label{L0st1aii}
\BS=\mat{\frac{\lambda_{1}}{\lambda_{2}}}{0}{0}{1},\qquad \mathsf{L}_{0}^{*}=\mat{\BGs^{*}}{0}{0}{\BI_{2}},\qquad\BGs^{*}=\BGS\left(\frac{\lambda_{1}}{\lambda_{2}}\right). \end{equation}
Inverting our global link that mapped $\mathsf{L}_{1}$ to $\tns{\BI_{2}}$ and $\mathsf{L}_{2}$ to $\BS\otimes\BI_{2}$ we obtain \[ \mathsf{L}^{*}=r_{1}\mathsf{T}+(\BGs_{1}^{1/2}\otimes\BI_{2})(\mathsf{T}-a_{0}\mathsf{L}_{0}^{*})(\mathsf{L}_{0}^{*}-a_{0}\mathsf{T})^{-1}\mathsf{T}(\BGs_{1}^{1/2}\otimes\BI_{2}). \] Observe that \[ \mathsf{P}^{*}=(\mathsf{T}-a_{0}\mathsf{L}_{0}^{*})(\mathsf{L}_{0}^{*}-a_{0}\mathsf{T})^{-1}\mathsf{T}= (1-a_{0}^{2})\mathsf{T}(\mathsf{L}_{0}^{*}-a_{0}\mathsf{T})^{-1}\mathsf{T}-a_{0}\mathsf{T}. \] We then compute (verified by Maple) \[ \mathsf{P}^{*}_{0}=(1-a_{0}^{2})\mathsf{T}(\mathsf{L}_{0}^{*}-a_{0}\mathsf{T})^{-1}\mathsf{T}=\frac{1-a_{0}^{2}}{\det(\BGs^{*}-a_{0}^{2})} \mat{\det\BGs^{*}-a_{0}^{2}\BGs^{*}}{a_{0}\BR_{\perp}^{T}(\BGs^{*}-a_{0}^{2})} {a_{0}(\BGs^{*}-a_{0}^{2})\BR_{\perp}}{\BGs^{*}-a_{0}^{2}}. \] Thus, \begin{equation}
\label{Lst1aii}
\mathsf{L}^{*}=(r_{1}-a_{0}\sqrt{\det\BGs_{1}})\mathsf{T}+(\BGs_{1}^{1/2}\otimes\BI_{2})\mathsf{P}^{*}_{0} (\BGs_{1}^{1/2}\otimes\BI_{2}). \end{equation}
In order to write $\mathsf{L}^{*}$ without the explicit reference to $\BGs_{1}^{1/2}$ we write $\mathsf{P}_{0}^{*}$ as a sum of tensor products. In order to accomplish this we first write $\BGs^{*}=\phi(x^{*})+\psi(y^{*})$ and then observe that \[ \BR_{\perp}^{T}\psi(y^{*})=-\psi(iy^{*})=\psi(y^{*})\BR_{\perp}. \] Thus, we get \[ \mathsf{P}^{*}_{0}=\frac{1-a_{0}^{2}}{\det(\BGs^{*}-a_{0}^{2})}\left( \mat{\det\BGs^{*}}{0}{0}{-a_{0}^{2}}\otimes\BI_{2}+\mat{-a_{0}^{2}}{0}{0}{1}\otimes\BGs^{*} +a_{0}(x^{*}-a_{0}^{2})\mathsf{T}-a_{0}\psi(i)\otimes\psi(iy^{*}) \right) \] We will then be able to compute $\mathsf{L}^{*}$ in the covariant form, if we can express $\BGs_{1}^{1/2}\psi(i)\BGs_{1}^{1/2}$, in the covariant form. To do this we first write $\psi(i)=\phi(i)\psi(1)$, and then \[ \BGs_{1}^{1/2}\psi(i)\BGs_{1}^{1/2}= \BGs_{1}^{1/2}\phi(i)\BGs_{1}^{1/2}\BGs_{1}^{-1}\BGs_{1}^{1/2}\psi(1)\BGs_{1}^{1/2}= \sqrt{\det\BGs_{1}}\BR_{\perp}\BGs_{1}^{-1}(\BGs_{1}^{1/2}\psi(1)\BGs_{1}^{1/2}). \] Thus, we compute \[ \BGs_{1}^{1/2}\psi(1)\BGs_{1}^{1/2}=\frac{(\lambda_{1}+\lambda_{2})\BGs_{1}-2\BGs_{2}}{\lambda_{2}-\lambda_{1}} =\BS_{2}-\BS_{1}. \] Therefore, \[ \BGs_{1}^{1/2}\psi(i)\BGs_{1}^{1/2}=\sqrt{\det\BGs_{1}}\BR_{\perp}\BGs_{1}^{-1}(\BS_{2}-\BS_{1}), \] which is now in a frame-covariant form. Putting everything together we obtain \begin{multline}
\label{1aiimonster} \mathsf{L}^{*}=\left(r_{1}+a_{0}\left(\frac{(1-a_{0}^{2})(x^{*}-a_{0}^{2})} {\det(\BGs^{*}-a_{0}^{2})}-1\right)\sqrt{\det\BGs_{1}}\right)\mathsf{T}+ \frac{1-a_{0}^{2}}{\det(\BGs^{*}-a_{0}^{2})}\times\\ \{\BS_{1}\otimes(\BGs^{*}-a_{0}^{2})+\BS_{2}\otimes(\det\BGs^{*}-a_{0}^{2}\BGs^{*})+ a_{0}\sqrt{\det\BGs_{1}}\mathsf{T}[\BGs_{1}^{-1}(\BS_{1}-\BS_{2})\otimes(\BGs^{*}-x^{*})]\} \end{multline} This expression is invariant with respect to the choice of $\lambda_{1}$ and $\lambda_{2}$. This means that if we interchange $\lambda_{1}$ and $\lambda_{2}$, $\BS_{1}$ and $\BS_{2}$, replace $a_{0}$ with $1/a_{0}$, and $\BGs^{*}$ with $\BGs^{*}/\det\BGs^{*}$, the value of $\mathsf{L}^{*}$ will not change.
We also want to see if there is a symmetry with respect to the material index interchange ``1''$\mapsto$``2'' (the result should not depend on naming conventions). Hence we need to make the following replacements: \[ \det\BGs_{1}\mapsto\lambda_{1}\lambda_{2}\det\BGs_{1},\ a_{0}\mapsto a_{0}\sqrt{\frac{\lambda_{2}}{\lambda_{1}}},\ \BGs^{*}\mapsto\frac{\lambda_{2}}{\lambda_{1}}\BGs^{*},\ \lambda_{j}\mapsto\nth{\lambda_{j}},\ \BS_{j}\mapsto\lambda_{1}\lambda_{2}\frac{\BS_{j}}{\lambda_{j}}. \] We can verify that the second term is invariant. This is due to the observation that, since \[ a_{0}=\frac{\lambda_{1}-1}{\rho}=\frac{\rho}{\lambda_{2}-1}, \] we can also write \[ a_{0}^{2}=\frac{\lambda_{1}-1}{\lambda_{2}-1}\quad\Rightarrow\quad 1-a_{0}^{2}=\frac{\lambda_{2}-\lambda_{1}}{\lambda_{2}-1}. \] This shows that under the index-interchange we also have $1-a_{0}^{2}\mapsto\nth{\lambda_{1}}(1-a_{0}^{2})$. The first term was also verified to be invariant, since we can write \[ r_{1}=\frac{r_{1}+r_{2}}{2}-\hf\rho\sqrt{\det\BGs_{1}}=\frac{r_{1}+r_{2}}{2} -\hf a_{0}(\lambda_{2}-1)\sqrt{\det\BGs_{1}}. \]
If $\BGs^{*}=x^{*}\BI_{2}$, the result simplifies: \begin{equation}
\label{1aiiiso}
\mathsf{L}^{*}=\left(r_{1}+\frac{(1-x^{*})a_{0}} {x^{*}-a_{0}^{2}}\sqrt{\det\BGs_{1}}\right)\mathsf{T}+ \frac{1-a_{0}^{2}}{x^{*}-a_{0}^{2}}(\BS_{1}+x^{*}\BS_{2})\otimes\BI_{2}. \end{equation} In order to verify (\ref{1aiimonster}) with Maple we can parametrize this case by $\Bs=\BGs_{1}^{1/2}$, $\lambda_{1}$, $\rho$, $r_{1}$, so that \[ \BGs_{1}=\Bs^{2},\quad r_{2}=\rho\det\Bs+r_{1},\quad\lambda_{2}=\frac{\rho^{2}}{\lambda_{1}-1}+1,\quad \BGs_{2}=\Bs\mat{\lambda_{1}}{0}{0}{\lambda_{2}}\Bs,\quad a_{0}=\frac{\lambda_{1}-1}{\rho}. \] Substituting these and $\mathsf{L}_{0}^{*}$ given by (\ref{L0st1aii}) into (\ref{Lst1aii}) we obtain (\ref{1aiimonster}), as verified by Maple. Formula (\ref{1aiiiso}) has also been verified.
Finally we point out that the case $a_{0}=\infty$ corresponding to $r_{1}=r_{2}=r_{0}$ and $\det(\BGs_{1}-\BGs_{2})=0$ is also included by taking a limit as $a_{0}\to\infty$ in (\ref{1aiimonster}): \[ \mathsf{L}^{*}=r_{0}\mathsf{T}+\BS_{1}\otimes\BI_{2}+\BS_{2}\otimes\BGS^{*}(\lambda_{1}),\quad\lambda_{1}\not=1=\lambda_{2}. \]
$\bullet$ (1(c)i):
$|r_{1}-r_{2}|=\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|$. The relevant ER is $(W,V_{0})$, described as \begin{equation}
\label{WV0}
\left\{\mat{\BL}{\pm(\BL\BM-\BR_{\perp})}{\pm(\BM^{T}\BL+\BR_{\perp})}{\BM^{T}\BL\BM}:\mathrm{Tr}\,\BM=0,\ \BL>0,\ \BL+2\BR_{\perp}\BM\det\BL<0\right\}. \end{equation} The relevant link is as follows: $\BM^{*}=\BR_{\perp}\BGs^{*}$, where $\BGs^{*}$ is the 2D effective conductivity tensor of the composite with local conductivity $\Bc(\Bx)=-\BR_{\perp}\BM(\Bx)$, which is symmetric and positive definite for $\BM(\Bx)$ satisfying the constraints in (\ref{WV0}).
Any isotropic tensor $\mathsf{L}$ in this ER must have the form \[ \mathsf{L}=\mat{\lambda_{1}\BI_{2}}{\pm(\sqrt{\lambda_{1}\lambda_{2}}-1)\BR_{\perp}} {\mp(\sqrt{\lambda_{1}\lambda_{2}}-1)\BR_{\perp}}{\lambda_{2}\BI_{2}}, \qquad\lambda_{1}>0,\ \lambda_{1}\lambda_{2}>\nth{4}, \] corresponding to $\BM=\sqrt{\lambda_{2}/\lambda_{1}}\BR_{\perp}$.
Thus, \begin{equation}
\label{L01ci}
\mathsf{L}_{0}^{*}=\mat{\BL^{*}}{\pm\BR_{\perp}\mathrm{cof}(\BL^{*})\BGs^{*}}{\pm\BGs^{*}\mathrm{cof}(\BL^{*})\BR_{\perp}^{T}}{\BGs^{*}\mathrm{cof}(\BL^{*})\BGs^{*}}\pm\mathsf{T}, \end{equation} where $\BGs^{*}=\BGS^{*}(\sqrt{\lambda_{2}/\lambda_{1}})$, $\lambda_{1,2}$ being the two roots of (\ref{geneig12}), and $\BL^{*}$ is a microstructure-dependent tensor that depends on material moduli only through $\lambda_{1}$ and $\lambda_{2}$. It is not expressible in terms of the effective conductivity. To compute $\mathsf{L}^{*}$ we take $\Psi^{-1}$ and obtain, as in case 1(a)ii, \[ \mathsf{L}^{*}=\BS_{1}\otimes\BGs^{*}\mathrm{cof}(\BL^{*})\BGs^{*}+\BS_{2}\otimes\BL^{*} +\alpha\mathsf{T}\left[\BGs_{1}^{-1}(\BS_{1}-\BS_{2})\otimes\BA^{*}\right]+\beta\mathsf{T}, \] where \[ \BA^{*}=\mathrm{cof}(\BL^{*})\BGs^{*}-a^{*}\BI_{2},\quad \BGs^{*}=\BGS^{*}\left(\sqrt{\frac{\lambda_{2}}{\lambda_{1}}}\right),\quad a^{*}=\hf\mathrm{Tr}\,(\mathrm{cof}(\BL^{*})\BGs^{*}), \] \[ \alpha=\frac{\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}}{r_{1}-r_{2}}\sqrt{\det\BGs_{1}},\qquad \beta=r_{1}+\alpha(a^{*}-1). \] \noindent$\bullet$ (2a) $\BGs_{1}=\theta_{1}\BGs$, $\BGs_{2}=\theta_{2}\BGs$,
$|r_{1}-r_{2}|<|\theta_{1}-\theta_{2}|\sqrt{\det\BGs}$. The relevant ER is \[ \left\{\mat{\BL}{0}{0}{\BL}:\BL>0\right\}, \] and the relevant link is that $\BL^{*}$ is the effective conductivity tensor of the conducting composite with local conductivity $\BL(\Bx)$.
We first apply (\ref{Psi0}) mapping $\mathsf{L}_{1}$ to $\mathsf{I}$ and $\mathsf{L}_{2}$ to \[ \mathsf{L}'_{2}=\frac{\theta_{2}}{\theta_{1}}\tns{\BI_{2}}+\rho\mathsf{T}. \] Let $a_{0}$ be a root of (\ref{diaga0}), where $\BGs=(\theta_{2}/\theta_{1})\BI_{2}$. Then apply \begin{equation}
\label{Psi2} \Psi_{2}(\mathsf{L})=(a_{0}\mathsf{L}+\mathsf{T})(\mathsf{L}+a_{0}\mathsf{T})^{-1}\mathsf{T}= \mathsf{T}(\mathsf{L}+a_{0}\mathsf{T})^{-1}(a_{0}\mathsf{L}+\mathsf{T}). \end{equation} This gives $\Psi_{2}(\mathsf{I})=\mathsf{I}$ and \[ \Psi_{2}(\mathsf{L}'_{2})=\frac{(1-a_{0}^{2})\theta_{1}\theta_{2}}{\theta_{2}^{2}-(\theta_{1}\rho+\theta_{1}a_{0})^{2}}\mathsf{I}. \] Then defining \[ \BGs^{*}=\BGS^{*}\left(\frac{(1-a_{0}^{2})\theta_{1}\theta_{2}}{\theta_{2}^{2}-(\theta_{1}\rho+\theta_{1}a_{0})^{2}}\right)=\BGS^{*}\left(\frac{a_{0}}{\rho+a_{0}}\right)= \BGS^{*}\left(\frac{\theta_{1}}{\theta_{2}}(\rho a_{0}+1)\right), \] we obtain \[ \mathsf{L}^{*}=\Psi_{1}^{-1}(\Psi_{2}^{-1}(\BI_{2}\otimes\BGs^{*})). \] Recalling that $\Psi_{2}^{-1}$ is $\Psi_{2}$ with $a_{0}$ replaced by $-a_{0}$ and that $\mathfrak{A}:\BA\otimes\BB\mapsto\BB\otimes\BA$ is the algebra automorphism we can write the expression for $\mathsf{L}'_{*}=\Psi_{2}^{-1}(\BGs^{*}\otimes\BI_{2})$ immediately from (\ref{Psi2L}): \[ \mathsf{L}'_{*}=\frac{(1-a_{0}^{2})(\BI_{2}\otimes\BGs^{*})}{\det\BGs^{*}-a_{0}^{2}} +\frac{a_{0}(1-\det\BGs^{*})}{\det\BGs^{*}-a_{0}^{2}}\mathsf{T}. \] Thus, \[ \mathsf{L}^{*}=\frac{(1-a_{0}^{2})(\BGs_{1}\otimes\BGs^{*})}{\det\BGs^{*}-a_{0}^{2}} +\left(r_{1}+\frac{a_{0}(1-\det\BGs^{*})\sqrt{\det\BGs_{1}}}{\det\BGs^{*}-a_{0}^{2}}\right)\mathsf{T}. \]
\noindent$\bullet$ (1b) $|r_{1}-r_{2}|>\left|\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}\right|$. The relevant ER is \[ \bb{M}_{17}=\{\mathsf{L}>0:\mathsf{L}(\BJ\otimes\BR_{\perp})\mathsf{L}=\BJ\otimes\BR_{\perp}\},\quad\BJ=\mat{1}{0}{0}{-1} \] In this case \[ \mathsf{L}^{*}_{0}=a((\BGs_{1}^{-1/2}\otimes\BI_{2})\mathsf{L}^{*}(\BGs_{1}^{-1/2}\otimes\BI_{2}))+b\mathsf{T}. \] Then \[ (a\mathsf{L}^{*}+b\sqrt{\det\BGs_{1}}\mathsf{T})(\BGs_{1}^{-1/2}\BJ\BGs_{1}^{-1/2}\otimes\BR_{\perp}) (a\mathsf{L}^{*}+b\sqrt{\det\BGs_{1}}\mathsf{T})=\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2}\otimes\BR_{\perp} \] Observe that \[ \BGs_{1}^{-1/2}\BJ\BGs_{1}^{-1/2}=-\frac{\mathrm{cof}(\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2})}{\det\BGs_{1}} =-\frac{\mathrm{cof}(\BM)}{\det\BGs_{1}},\qquad\BM=\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2} \] Substituting the values of $a$ and $b$ we obtain \[ -(2\Delta r\mathsf{L}^{*}+b_{0}\mathsf{T})(\mathrm{cof}(\BM)\otimes\BR_{\perp})(2\Delta r\mathsf{L}^{*}+b_{0}\mathsf{T}) =c_{0}\BM\otimes\BR_{\perp}, \] where \[ b_{0}=\det\BGs_{2}-\det\BGs_{1}+r_{1}^{2}-r_{2}^{2},\quad c_{0}=(\Delta r^{2}-(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}) ((\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}-\Delta r^{2}) \]
In order to compute the matrix $\BM$ we recall that we now work in the frame where \[ \BGs=\mat{s_{1}}{s_{2}}{s_{2}}{s_{1}}. \] The numbers $s_{1}$ and $s_{2}$ are related to the eigenvalues $\lambda_{1}$, $\lambda_{2}$ of $\BGs$ via \[ \lambda_{1}=s_{1}+s_{2},\qquad \lambda_{2}=s_{1}-s_{2}. \] Thus, from \[ \BGs=\BGs_{1}^{-1/2}\BGs_{2}\BGs_{1}^{-1/2}=s_{1}\BI_{2}+s_{2}\psi(i) \] we obtain \[ \BGs_{1}^{1/2}\psi(i)\BGs_{1}^{1/2}=\frac{\BGs_{2}-s_{1}\BGs_{1}}{s_{2}}=\BS_{2}-\BS_{1}=\Delta\BS. \] Thus, \[ \BM=\BGs_{1}^{1/2}\psi(i)\phi(i)\BGs_{1}^{1/2}= (\BS_{2}-\BS_{1})\BGs_{1}^{-1}\sqrt{\det\BGs_{1}}\BR_{\perp}. \] Since $\BM$ is a symmetric matrix we obtain \[ \BM=s(\BGs_{2}\BGs_{1}^{-1}\BR_{\perp})_{\rm sym} \] for some scalar $s$. So, we have \begin{equation}
\label{2a} (\mathsf{L}^{*}+\beta_{0}\mathsf{T})(\mathrm{cof}(\BA(\BGs_{1},\BGs_{2}))\otimes\BR_{\perp})(\mathsf{L}^{*}+\beta_{0}\mathsf{T}) =-\gamma_{0}\BA(\BGs_{1},\BGs_{2})\otimes\BR_{\perp}, \end{equation} \[ \BA(\BGs_{1},\BGs_{2})=(\BGs_{2}\BGs_{1}^{-1}\BR_{\perp})_{\rm sym},\quad \beta_{0}=\frac{b_{0}}{2\Delta r},\quad\gamma_{0}=\frac{c_{0}}{4(\Delta r)^{2}}. \]
We have derived the equation for $\mathsf{L}^{*}$: \[ (a\mathsf{S}\mathsf{L}^{*}\mathsf{S}+b\mathsf{T})\mathsf{J}(a\mathsf{S}\mathsf{L}^{*}\mathsf{S}+b\mathsf{T})=\mathsf{J}, \] written to highlight the structure. Now factor out the $\mathsf{S}$s: \[ \mathsf{S}(a\mathsf{L}^{*}+b\mathsf{S}^{-1}\mathsf{T}\mathsf{S}^{-1})\mathsf{S}\mathsf{J}\mathsf{S}(a\mathsf{L}^{*}+b\mathsf{S}^{-1}\mathsf{T}\mathsf{S}^{-1})\mathsf{S}=\mathsf{J}, \] Now multiply by $\mathsf{S}^{-1}$ on both sides: \[ (a\mathsf{L}^{*}+b\mathsf{S}^{-1}\mathsf{T}\mathsf{S}^{-1})\mathsf{S}\mathsf{J}\mathsf{S}(a\mathsf{L}^{*}+b\mathsf{S}^{-1}\mathsf{T}\mathsf{S}^{-1})=\mathsf{S}^{-1}\mathsf{J}\mathsf{S}^{-1}. \] Observe now that all the tensors aside from $\mathsf{L}^{*}$ are nice tensor products. Multiply them: \[ \mathsf{S}^{-1}\mathsf{T}\mathsf{S}^{-1}=\sqrt{\det\BGs_{1}}\mathsf{T},\quad \mathsf{S}\mathsf{J}\mathsf{S}=\BGs_{1}^{-1/2}\BJ\BGs_{1}^{-1/2}\otimes\BR_{\perp},\quad \mathsf{S}^{-1}\mathsf{J}\mathsf{S}^{-1}=\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2}\otimes\BR_{\perp}. \] We also have (using $\BJ^{-1}=\BJ$) \[ \BGs_{1}^{-1/2}\BJ\BGs_{1}^{-1/2}=\frac{\mathrm{cof}(\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2})}{\det(\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2})}=-\frac{\mathrm{cof}(\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2})}{\det\BGs_{1}}. \] It remains to figure out $\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2}$. Before we do it, divide the equation by $a^{2}$ and use constants $A$ and $B$ from case 2b. \[ (\mathsf{L}^{*}+A\mathsf{T})\mathsf{S}\mathsf{J}\mathsf{S}(\mathsf{L}^{*}+A\mathsf{T})=B\frac{\mathsf{S}^{-1}\mathsf{J}\mathsf{S}^{-1}}{\det\BGs_{1}}. \] Denoting $\BZ=\BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2}$ we obtain \[ -(\mathsf{L}^{*}+A\mathsf{T})\mathsf{T}(\BZ\otimes\BR_{\perp})\mathsf{T}(\mathsf{L}^{*}+A\mathsf{T})=B\BZ\otimes\BR_{\perp}. \]
We work in the frame where \[ \BGs=\mat{\mu}{\nu}{\nu}{\mu}.\] It is easy to see that the eigenvalues of $\BGs$ are $\mu+\nu$ and $\mu-\nu$. These eigenvalues have been denoted $\lambda_{1}$ and $\lambda_{2}$ before, so \[ \lambda_{1}=\mu+\nu,\qquad\lambda_{2}=\mu-\nu. \] Solving for $\mu$ and $\nu$ we obtain \[ \mu=\frac{\lambda_{1}+\lambda_{2}}{2},\qquad\nu=\frac{\lambda_{1}-\lambda_{2}}{2}. \] Using relations $\BGs_{1}^{1/2}\BGs\BGs_{1}^{1/2}=\BGs_{2}$ and $\BGs_{1}^{1/2}\BI_{2}\BGs_{1}^{1/2}=\BGs_{1}$ we obtain \[ \BGs_{2}=\BGs_{1}^{1/2}\mat{\mu}{\nu}{\nu}{\mu}\BGs_{1}^{1/2}=\mu\BGs_{1}+\nu\BX, \] where $ \BX=\BGs_{1}^{1/2}\mat{0}{1}{1}{0}\BGs_{1}^{1/2} $. Solving for $\BX$ we obtain \[ \BX=\frac{\BGs_{2}-\mu\BGs_{1}}{\nu}= \frac{2\BGs_{2}-(\lambda_{1}+\lambda_{2})\BGs_{1}}{\lambda_{1}-\lambda_{2}}=\BS_{2}-\BS_{1}. \] Now using $\BJ=\mat{0}{1}{1}{0}\BR_{\perp}$ we get \begin{multline*} \BZ= \BGs_{1}^{1/2}\BJ\BGs_{1}^{1/2}=\BX\BGs_{1}^{-1}\BGs_{1}^{1/2}\BR_{\perp}\BGs_{1}^{1/2}= \BX\frac{\BR_{\perp}\BGs_{1}\BR_{\perp}^{T}}{\det\BGs_{1}}\sqrt{\det\BGs_{1}}\BR_{\perp}= \frac{\BX\BR_{\perp}\BGs_{1}}{\sqrt{\det\BGs_{1}}}=\\ \frac{(\BS_{2}-\BS_{1})\BR_{\perp}(\BS_{2}+\BS_{1})}{\sqrt{\det\BGs_{1}}}= \frac{\BS_{2}\BR_{\perp}\BS_{1}-\BS_{1}\BR_{\perp}\BS_{2}}{\sqrt{\det\BGs_{1}}} \end{multline*} The final answer is the equation \[ (\mathsf{L}^{*}+A\mathsf{T})\mathsf{T}(\BZ_{0}\otimes\BR_{\perp})\mathsf{T}(\mathsf{L}^{*}+A\mathsf{T})+B\BZ_{0}\otimes\BR_{\perp}=0, \qquad\BZ_{0}=\BS_{2}\BR_{\perp}\BS_{1}-\BS_{1}\BR_{\perp}\BS_{2}. \]
$\bullet$ (2b) $\BGs_{1}=\theta_{1}\BGs_{0}$, $\BGs_{2}=\theta_{2}\BGs_{0}$,
$|r_{1}-r_{2}|>|\theta_{1}-\theta_{2}|\sqrt{\det\BGs_{0}}$. The formula for the effective tensor is \[ \mathsf{L}^{*}=\BGs_{1}\otimes\BL^{*}+t^{*}\mathsf{T},\qquad \det\BGs_{1}\det\BL^{*}=\left(t^{*}+A\right)^{2}+B, \] where $A$ and $B$ are given by (\ref{Asc}) and (\ref{Bsc}), respectively.
We have derived two formulas \[ \det\BL_{0}^{*}=\left(\frac{a}{\sqrt{\det\BGs_{1}}}t^{*}+b\right)^{2}+1, \] and \[ \det\BL^{*}=\nth{a^{2}}\det\BL_{0}^{*}. \] Now combine the two formulas to eliminate $\BL_{0}^{*}$: \[ \det\BL^{*}=\nth{a^{2}}\left(\frac{a}{\sqrt{\det\BGs_{1}}}t^{*}+b\right)^{2}+\nth{a^{2}}. \] Use the formula $x^{2}y^{2}=(xy)^{2}$ in the first term and leave the second as it is: \[ \det\BL^{*}=\left(\frac{t^{*}}{\sqrt{\det\BGs_{1}}}+\frac{b}{a}\right)^{2}+\nth{a^{2}}. \] Now multiply both sides by $\det\BGs_{1}$ and use the same formula by writing $\det\BGs_{1}=(\sqrt{\det\BGs_{1}})^{2}$: \[ \det\BGs_{1}\det\BL^{*}=\left(t^{*}+\frac{b\sqrt{\det\BGs_{1}}}{a}\right)^{2}+\frac{\det\BGs_{1}}{a^{2}}. \] Now let \[ A=\frac{b\sqrt{\det\BGs_{1}}}{a},\qquad B=\frac{\det\BGs_{1}}{a^{2}}. \] Let us use the formulas for $a$ and $b$ to derive explicit formulas for $A$ and $B$: \[ b=a\frac{\det\BGs-1+\rho_{1}^{2}-\rho_{2}^{2}}{2\rho},\quad a^{2}=\frac{4\rho^{2}}{((\rho+1)^{2}-\det\BGs)(\det\BGs-(\rho-1)^{2})}. \] For compactness of notation let us denote $\Delta r=r_{2}-r_{1}$, so that \[ \rho=\frac{\Delta r}{\sqrt{\det\BGs_{1}}}. \] We obtain \[ A=\sqrt{\det\BGs_{1}}\frac{\frac{\det\BGs_{2}}{\det\BGs_{1}}-1+\frac{r_{1}^{2}}{\det\BGs_{1}}-\frac{r_{2}^{2}}{\det\BGs_{1}}}{2\frac{\Delta
r}{\sqrt{\det\BGs_{1}}}}. \] Converting to a normal fraction we obtain \[ A=\sqrt{\det\BGs_{1}}\frac{\sqrt{\det\BGs_{1}}}{\det\BGs_{1}}\frac{\det\BGs_{2}-\det\BGs_{1}+r_{1}^{2}-r_{2}^{2}}{2\Delta r}. \] The factor in front simplifies to 1 and we obtain \begin{equation}
\label{Asc}
A=\frac{\det\BGs_{2}-\det\BGs_{1}+r_{1}^{2}-r_{2}^{2}}{2\Delta r}. \end{equation} We perform the same simplification for $B$: \[ B=\frac{((\Delta r+\sqrt{\det\BGs_{1}})^{2}-\det\BGs_{2})(\det\BGs_{2}-(\Delta
r-\sqrt{\det\BGs_{1}})^{2})}{4(\Delta r)^{2}}. \] Let us make the numerator $nB$ of $B$ a bit more symmetric: \begin{multline*} nB=(\Delta r+\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}}) (\Delta r+\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})\times\\ (\sqrt{\det\BGs_{2}}+\Delta r-\sqrt{\det\BGs_{1}}) (\sqrt{\det\BGs_{2}}-\Delta r+\sqrt{\det\BGs_{1}}). \end{multline*} We now combine first and third terms together and also the other two together: \[ nB=((\Delta r)^{2}-(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}) ((\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}-(\Delta r)^{2}). \] This gives \begin{equation}
\label{Bsc}
B=\frac{((\Delta r)^{2}-(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}) ((\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}-(\Delta r)^{2})}{4(\Delta r)^{2}}. \end{equation}
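As an illustrative numerical check of (\ref{Asc}) and (\ref{Bsc}) (the numbers are our own): take $\det\BGs_{1}=1$, $\det\BGs_{2}=4$, $r_{1}=0$, $r_{2}=2$, so $\Delta r=2$ and $(\Delta r)^{2}=4$ lies strictly between $(\sqrt{\det\BGs_{1}}-\sqrt{\det\BGs_{2}})^{2}=1$ and $(\sqrt{\det\BGs_{1}}+\sqrt{\det\BGs_{2}})^{2}=9$. Then \[ A=\frac{4-1+0-4}{2\cdot 2}=-\nth{4},\qquad B=\frac{(4-1)(9-4)}{4\cdot 4}=\frac{15}{16}>0, \] and the constraint on the effective tensor reads $\det\BGs_{1}\det\BL^{*}=\left(t^{*}-\nth{4}\right)^{2}+\frac{15}{16}$.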
\end{document}
A Competition of Critics in Human Decision-Making
Enkhzaya Enkhtaivan,
Department of Mathematics, University of Wisconsin, Madison, WI, US
Joel Nishimura,
School of Mathematical and Natural Sciences, Arizona State University, Glendale, AZ, US
Cheng Ly,
Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA, US
Amy L. Cochran
Department of Mathematics, University of Wisconsin, Madison, WI; Department of Population Health Sciences, University of Wisconsin, Madison, WI, US
Recent experiments and theories of human decision-making suggest positive and negative errors are processed and encoded differently by serotonin and dopamine, with serotonin possibly serving to oppose dopamine and protect against risky decisions. We introduce a temporal difference (TD) model of human decision-making to account for these features. Our model involves two critics, an optimistic learning system and a pessimistic learning system, whose predictions are integrated in time to control how potential decisions compete to be selected. Our model predicts that human decision-making can be decomposed along two dimensions: the degree to which the individual is sensitive to (1) risk and (2) uncertainty. In addition, we demonstrate that the model can learn about the mean and standard deviation of rewards, and provide information about reaction time despite not modeling these variables directly. Lastly, we simulate a recent experiment to show how updates of the two learning systems could relate to dopamine and serotonin transients, thereby providing a mathematical formalism to serotonin's hypothesized role as an opponent to dopamine. This new model should be useful for future experiments on human decision-making.
Keywords: Decision-Making, Reward Learning, Computational Model, Serotonin, Reaction Time, Risk
How to Cite: Enkhtaivan, E., Nishimura, J., Ly, C., & Cochran, A. L. (2021). A Competition of Critics in Human Decision-Making. Computational Psychiatry, 5(1), 81–101. DOI: http://doi.org/10.5334/cpsy.64
Published on 12 Aug 2021
Submitted on 24 Mar 2021; Accepted on 19 Jul 2021
Temporal difference (TD) learning has enjoyed tremendous support as a conceptual framework for understanding how people make decisions and what might be computed in the brain. TD learning is also supported by studies suggesting that prediction errors derived from a TD model are encoded in dopamine transients (Cohen, Haesler, Vong, Lowell, & Uchida, 2012; Montague, Dayan, & Sejnowski, 1996; Pan, Schmidt, Wickens, & Hyland, 2005; Schultz, Apicella, & Ljungberg, 1993; Schultz, Dayan, & Montague, 1997; Zaghloul et al., 2009). Recent theories and experiments, however, suggest that TD models can oversimplify human decision-making in meaningful ways (Dabney et al., 2020; Daw, Kakade, & Dayan, 2002; Kishida et al., 2016; Moran et al., 2018). In particular, models that are sensitive to risk or track multiple errors are better able to predict what decisions a person selects (Cazé & van der Meer, 2013; Chambon et al., 2020; d'Acremont, Lu, Li, Van der Linden, & Bechara, 2009; Gershman, Monfils, Norman, & Niv, 2017; Hauser, Iannaccone, Walitza, Brandeis, & Brem, 2015; Jepma, Schaaf, Visser, & Huizenga, 2020; Lefebvre, Lebreton, Meyniel, Bourgeois-Gironde, & Palminteri, 2017; Li, Schiller, Schoenbaum, Phelps, & Daw, 2011; Niv, Edlund, Dayan, & O'Doherty, 2012; Preuschoff, Quartz, & Bossaerts, 2008; Redish, Jensen, Johnson, & Kurth-Nelson, 2007; Ross, Lenow, Kilts, & Cisler, 2018; Yu & Dayan, 2005), yet the brain structures involved are not completely known. Similarly, a single-neurotransmitter based circuit, where positive concentrations match prediction-error, would struggle to encode large negative updates (Niv et al., 2012). Indeed, recent evidence suggests that serotonin may play a complementary role (Cools, Nakamura, & Daw, 2011; d'Acremont et al., 2009; Daw et al., 2002; J. Deakin, 1983; J. W. Deakin & Graeff, 1991; Montague, Kishida, Moran, & Lohrenz, 2016; Moran et al., 2018; Preuschoff et al., 2008; Rogers, 2011), though this hypothesis is still being debated. 
Our goal was to develop and analyze a simple computational model that resolves and unites these observations. Our proposed model involves dual critics, composed of an optimistic dopamine-like TD learner and a pessimistic serotonin-like TD learner, who compete in time to determine decisions.
TD learning was designed to utilize simple mathematical updates to produce a system that learns how to make decisions (Sutton & Barto, 2018). Such models decompose decision-making into two processes: a learning process, which updates how one values a decision, and a decision process, which selects decisions according to how they are valued. These models, including the model of Rescorla and Wagner (Rescorla & Wagner, 1972), can learn about reward expectations through updates that are linear in a single prediction error, but are not sensitive to risk or track multidimensional errors.
One reason to expect risk-sensitivity is that there is asymmetry in how negative versus positive errors are updated. Dopamine transients, for example, have been found to respond more strongly to positive prediction errors than negative prediction errors (Bayer & Glimcher, 2005). From a biological perspective, this is not surprising. Dopamine neurons have low baseline activity, which imposes a physical limit on how much their firing rates can decrease because firing rates are non-negative (Niv, Duff, & Dayan, 2005). This limit suggests that dopamine neuron firing rates could not be decreased to encode negative prediction errors to the same degree as they can be increased to encode positive prediction errors. If this is true, then the outsized influence of positive prediction errors would inflate the valuation of decisions — colloquially referred to as "wearing rose-colored glasses."
Computational models capture risk-sensitivity by weighing positive prediction errors differently than negative prediction errors, usually accomplished with separate learning rates for positive and negative prediction errors. These models are referred to as risk-sensitive, because they result in decision-making that is sensitive to large gains (i.e., risk-seeking) or large losses (i.e., risk-averse). Taken to an extreme, risk-seeking involves pursuing the best possible outcomes, whereas risk-aversion involves avoiding the worst possible outcomes (Mihatsch & Neuneier, 2002). For comparison, traditional TD learning is considered risk-neutral because it focuses on maximizing average (long-term discounted) rewards, so that all rewards, regardless of size, are weighted equally. Risk-sensitive models are frequently found to fit data better than risk-neutral models (Chambon et al., 2020; Hauser et al., 2015; Lefebvre et al., 2017; Niv et al., 2012; Ross et al., 2018). Importantly, differences in risk-sensitivity, substantiated by a risk-sensitive learning model, are thought to underlie certain differences between individuals with and without psychiatric disorders (Korn, Sharot, Walter, Heekeren, & Dolan, 2014; Rouhani & Niv, 2019).
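As a minimal illustration of this idea, the following sketch (our own, with illustrative learning rates and a simple two-point reward gamble, not code from any of the cited studies) implements a Rescorla-Wagner-style update with separate learning rates for positive and negative prediction errors, and shows how the learned value drifts away from the true mean reward:

```python
import random

def rw_update(q, reward, lr_pos, lr_neg):
    """One Rescorla-Wagner-style update with separate learning rates
    for positive and negative prediction errors."""
    delta = reward - q                    # prediction error
    lr = lr_pos if delta > 0 else lr_neg  # asymmetric weighting
    return q + lr * delta

def long_run_value(lr_pos, lr_neg, n=20000, seed=0):
    """Time-averaged value estimate for a +/-1 coin-flip reward
    (true mean 0), averaged after a burn-in period."""
    rng = random.Random(seed)
    q, trace = 0.0, []
    for _ in range(n):
        q = rw_update(q, rng.choice([-1.0, 1.0]), lr_pos, lr_neg)
        trace.append(q)
    return sum(trace[n // 2:]) / (n // 2)

avg_optimistic = long_run_value(lr_pos=0.3, lr_neg=0.1)   # overweights gains
avg_pessimistic = long_run_value(lr_pos=0.1, lr_neg=0.3)  # overweights losses
avg_neutral = long_run_value(lr_pos=0.2, lr_neg=0.2)      # classic risk-neutral
print(avg_optimistic, avg_pessimistic, avg_neutral)
```

With these rates the optimistic learner's value settles well above the true mean of 0 (its fixed point for this symmetric gamble is (lr_pos - lr_neg)/(lr_pos + lr_neg) = 0.5), the pessimistic learner settles symmetrically below it, and the risk-neutral learner hovers near 0: the "rose-colored glasses" effect in miniature.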
The multidimensional aspect of TD-based human decision-making is supported by recent studies. Although there is no consensus about serotonin's role in decision-making, one theory is that serotonin also encodes prediction errors but acts as an opponent to dopamine (Daw et al., 2002; Moran et al., 2018). In Moran et al, for example, serotonin transients were found to respond to prediction errors in an opposite direction of dopamine transients (Moran et al., 2018). Their results were consistent with the hypothesis that serotonin protects against losses during decision-making (Moran et al., 2018) or more broadly, plays a role in avoidance behavior (Dayan & Huys, 2008, 2009; J. Deakin, 1983). Furthermore, a recent study even suggests dopamine is capable of capturing a distribution of prediction errors, the computational benefit of which is that the reward distribution can be learned rather than just its average and variance (Dabney et al., 2020). Other conceptual frameworks suggest individuals keep track of multiple prediction errors as a way to capture the standard deviation of rewards in addition to expected rewards (Gershman et al., 2017; Jepma et al., 2020; Li et al., 2011; Redish et al., 2007; Yu & Dayan, 2005).
In this paper, we introduce and analyze a new model of human decision-making, which we call the Competing-Critics model and which uses asymmetrical and multidimensional prediction errors. Based on a TD learning framework, the model decomposes decision-making into learning and decision processes. The learning process involves two competing critics, one optimistic and another pessimistic. The decision process integrates predictions from each system in time as decisions compete for selection. In what follows, we explore through simulation whether our model can capture ranges of risk-sensitive behavior from risk-averse to risk-seeking and can reflect reward mean and variance. Further, we use this model to make predictions about reaction times and about uncertainty-sensitivity in terms of the degree to which the standard deviation of rewards influences a person's consideration of multiple decisions. Lastly, we show how prediction errors in the Competing-Critics model might relate to dopamine and serotonin transients in the experiments of Kishida et al (Kishida et al., 2016) and Moran et al (Moran et al., 2018). Considering the simplicity of this model and its ability to synthesize several theories and experimental findings, this model should be useful as a framework for future human decision-making experiments, with potential to provide both predictive power and mechanistic insight.
We introduce a model of human decision-making that relies on two competing learning systems. Figure 1 provides a high-level view of the proposed model in a simple example in which an individual makes decisions between two choices. Here the individual learns to value their decision by weighing prior outcomes observed upon selecting each choice, denoted by Rt, in two different systems. The first learning system weighs better outcomes more heavily than worse outcomes, which effectively leads to a more optimistic valuation of outcomes, denoted by Q+. The second learning system does the opposite: weighs worse outcomes more heavily than better outcomes, leading to a more pessimistic valuation of outcomes, denoted by Q–. We remark that both values, Q+ and Q–, are assumed to be updated according to prediction errors δt+ and δt–, following common risk-sensitive temporal difference (TD) learning frameworks described below.
High-level view of proposed model in an example with two choices. For each choice, the distribution of rewards Rt (gray histograms) is learned by competing critics through the updates δt+ and δt–. One system is optimistic, upweighting large rewards, and another is pessimistic, downweighting large rewards (blue histograms). As a result, each choice is associated with multiple values Q– and Q+. To determine which choice is selected, a random variable Ut is drawn for each choice uniformly from (Q–,Q+) (teal histograms). The largest Ut determines which choice is selected and when the decision is made.
An individual who relied solely on the first learning system to make decisions would be considered risk-seeking due to the outsized influence of better outcomes. Similarly, an individual who relied solely on the second system to guide decisions would be considered risk-averse due to the outsized influence of worse outcomes. Our model, however, supposes both of these competing learning systems contribute to decision-making in the following way. For each choice, the risk-seeking learning system sends a go signal to the individual to signify that this choice is viable, with larger Q+ values corresponding to earlier signals. Afterwards, the risk-averse learning system sends a no-go signal to the individual to signify that this choice is no longer viable, with smaller Q– values associated with later signals. For simplicity, the individual is assumed to select the respective choice at any time between these two signals, provided no other choice has been selected or choice exploration has been pursued. Hence, both go and no-go signals determine how likely each choice is to be selected. For example, a choice whose go signal is initiated after the no-go signal of another choice will never be selected except through exploration. Put differently, any choice that, even when valued optimistically, is still worse than another choice valued pessimistically will not be selected except through exploration. We now proceed to formalize this conceptual framework.
Our model will describe psychological experiments that have the following decision-making scenario. The scenario starts at the initial state S0, on which the participant bases their action A0, which brings in a numerical reward R1. Consequently, the participant finds themselves in the next state S1 and selects another action A1, which brings in a numerical reward R2 and state S2. This process then repeats until the participant makes T decisions, yielding a sequence of observations collected for each participant of the form:
\[ S_0,\ A_0,\ R_1,\ S_1,\ A_1,\ R_2,\ S_2,\ \ldots,\ R_{T-1},\ S_{T-1},\ A_{T-1},\ R_T. \]
Above, observations fall into three types on a given trial t: the state that the participant visits, denoted by St; the action that the participant takes when visiting state St, denoted by At; and the subsequent reward, Rt+1, that the participant receives upon visiting state St and taking action At. For simplicity, let us assume that both the space of possible states $\mathcal{S}$ and the space of possible actions $\mathcal{A}$ are discrete. The space of possible rewards $\mathcal{R}$
can be any subset of the real line ℝ. Further, assume the experiment defines subsequent rewards and states as a function of the current state and action according to a Markov transition probability
\[ p(s', r \mid s, a) : \mathcal{S} \times \mathcal{R} \times \mathcal{S} \times \mathcal{A} \to [0, 1]. \]
An experiment described above constitutes a (discrete-time, discrete-state) Markov Decision Process (MDP).
Temporal difference (TD) learning
In the setting described above, human decision-making is often modeled using TD learning. One widely known algorithm for TD learning is called Q-learning, so named for its explicit use of a state-action value function denoted by Q. This algorithm supposes that the agent, i.e., the participant in a psychological experiment, tries to learn the "value" of their actions as a function of a given state in terms of future rewards. This notion gives rise to a state-action value function Q(s,a) mapping states s ∈ $\mathcal{S}$ and actions a ∈ $\mathcal{A}$
to a real number that reflects the value of this state-action pair. A Q-learner updates this state-action function according to their experiences:
\[ Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t) \right]. \]
Here, the learner has just taken action At in state St, receiving the immediate reward Rt+1 and transitioning to a new state St+1. A learning rate α accounts for the extent to which the new information, i.e. their reward and the new state-action value, overrides old information about their state-action value function. For instance, one can see that if α = 0, there is no overriding: the estimate stays the same. The discount parameter γ weighs the impact of future rewards. A discount parameter γ = 0 would mean the learner does not care about the future at all, while γ = 1 would mean the learner cares about the sum total of future rewards (which may even cause the algorithm to diverge).
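For concreteness, the Q-learning update above can be sketched in Python; the dictionary-based Q table, state labels, and reward values here are our own illustrative choices, not taken from the paper:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step: nudge Q(s, a) toward the TD target
    R + gamma * max_a' Q(s', a') by a fraction alpha."""
    td_target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q

# A Q table over hypothetical states/actions, initialized to zero.
Q = defaultdict(float)
q_update(Q, s=0, a="left", r=1.0, s_next=0, actions=["left", "right"])
```

With all values starting at zero, a single reward of 1.0 moves Q(0, left) halfway to the target, i.e. to 0.5, illustrating the role of α.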
Risk-sensitive TD learning
A variant of the Q-learner allows a learner to be particularly sensitive to smaller, or more negative, rewards, i.e. risky situations. In particular, a risk-sensitive Q-learner weighs the prediction error, which is given by
\[ \delta_t = R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t), \]
differently depending on whether the prediction error is positive or negative. This yields the following update:
\[ Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ (1 + k)\, 1_{\delta_t > 0} + (1 - k)\, 1_{\delta_t < 0} \right] \delta_t. \]
The parameter k controls the degree to which the learner is risk-sensitive. If k = 0, then the learner weighs positive and negative prediction errors equally, in which case the updates are the same as before and we say the learner is risk-neutral. If k < 0, then negative prediction errors are weighed more than positive prediction errors. In this case, smaller rewards have a stronger influence relative to larger rewards on the state-action value function Q, resulting in a learner who is considered risk-averse. Similarly, if k > 0, the reverse is true: larger rewards have a stronger influence relative to smaller rewards, and the learner is considered risk-seeking.
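This asymmetric weighting is a one-line change to the previous sketch; again, the table layout and example values are our own illustrative assumptions:

```python
from collections import defaultdict

def risk_sensitive_update(Q, s, a, r, s_next, actions,
                          alpha=0.5, gamma=0.9, k=0.0):
    """Risk-sensitive Q-learning step: positive prediction errors are
    scaled by (1 + k) and negative ones by (1 - k), so k < 0 yields a
    risk-averse learner and k > 0 a risk-seeking one."""
    delta = r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)]
    weight = (1 + k) if delta > 0 else (1 - k)
    Q[(s, a)] += alpha * weight * delta
    return Q

# A risk-averse learner (k = -0.5) discounts the same positive surprise:
Q = defaultdict(float)
risk_sensitive_update(Q, s=0, a="left", r=1.0, s_next=0,
                      actions=["left", "right"], k=-0.5)
```

The same reward of 1.0 that moved the risk-neutral estimate to 0.5 now moves it only to 0.25, since the positive prediction error is downweighted by (1 + k) = 0.5.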
A learning model with competing critics
With the introduction of risk-sensitive TD learning, we can consider a range of learning behaviors from risk-averse to risk-seeking, all modulated by the parameter k and reflected in the state-action value function Q. Researchers are often focused on how pessimism or risk-sensitivity, substantiated by k, might vary between individuals. In our model, however, we investigate how risk-sensitivity might vary within individuals. Specifically, we consider two learning systems, one pessimistic (risk-averse) and one optimistic (risk-seeking).
Our model captures two competing critics by keeping track of two state-action value functions, Q+ and Q–, updating each function according to:
\[ \begin{aligned} Q^+(S_t, A_t) &\leftarrow Q^+(S_t, A_t) + \alpha \left[ (1 + k^+)\, 1_{\delta_t^+ > 0} + (1 - k^+)\, 1_{\delta_t^+ < 0} \right] \delta_t^+ \\ Q^-(S_t, A_t) &\leftarrow Q^-(S_t, A_t) + \alpha \left[ (1 - k^-)\, 1_{\delta_t^- > 0} + (1 + k^-)\, 1_{\delta_t^- < 0} \right] \delta_t^- \end{aligned} \]
with prediction errors given by
\[ \begin{aligned} \delta_t^+ &= R_{t+1} + \gamma \max_a Q^+(S_{t+1}, a) - Q^+(S_t, A_t) \\ \delta_t^- &= R_{t+1} + \gamma \max_a Q^-(S_{t+1}, a) - Q^-(S_t, A_t). \end{aligned} \]
For simplicity, we initialize Q+ and Q– to zero. Parameters k+ and k– are assumed to lie in [0, 1]. The parameter k+ controls the degree to which the learner is risk-seeking, and k– controls the degree to which the learner is risk-averse. We are not the first to consider multiple risk-sensitive TD learning systems. This idea was recently put forth in (Dabney et al., 2020), where multiple risk-sensitive TD learning systems were thought to be encoded in multiple dopamine neurons. Nor are we the first to consider dual competing systems (Collins & Frank, 2014; Daw et al., 2002; Mikhael & Bogacz, 2016; Montague et al., 2016). In the opposing actor learning model of (Collins & Frank, 2014), for example, the prediction error from a single learning system controls the dynamics of G ("go") and N ("no-go") systems, which in turn are combined linearly to determine decisions. Since it may not be obvious why this model differs from our proposed model, we discuss in the Supplement how the update equations of the two models differ in important ways, resulting in significantly different behaviors and predictions. Similarly, the Supplement also explores differences between our proposed model and a SARSA version of the model as well as a risk-sensitive TD learning model.
A decision-making model with competing critics
Now that we have a model of learning, namely Q+ and Q–, it is sensible to consider how the agent makes decisions based on what they have just learned. This means that the individual has to choose from the available actions, having obtained pessimistic and optimistic estimates for each state-action pair.
A naive approach is the greedy method, in which the action with the highest value is chosen. This approach, however, does not account for actions with multiple values (e.g., optimistic and pessimistic values), nor does it allow the individual to do any exploration, during which they might discover a more optimal strategy. A way to incorporate exploration into decision-making is to act greedily 1 − ε of the time; for the remaining ε of the time, the individual explores non-greedy actions with equal probability. This method is referred to as ε-greedy and is used by our model.
To integrate multi-valued actions into an ε-greedy method, our model supposes that a random variable Ut(a) is drawn for each action a uniformly from the interval [Q–(St,a), Q+(St,a)] whenever an individual has to make a decision in state St. Then, whenever the individual acts greedily, they select the action At that maximizes Ut(a). These decision rules, along with the learning model, comprise the Competing-Critics model, which is summarized in Algorithm 1. While we use an ε-greedy method, exploration could also be achieved by applying a soft-max function to transform Ut(a) into a probability and selecting action a according to this probability.
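A minimal sketch of this decision rule in Python (the table layout and action names are illustrative assumptions):

```python
import random

def choose_action(Q_minus, Q_plus, s, actions, eps=0.3, rng=random):
    """Competing-Critics decision rule: draw U(a) uniformly from
    [Q-(s, a), Q+(s, a)] for each action, then act eps-greedily
    on the draws."""
    if rng.random() < eps:
        return rng.choice(actions)  # uniform exploration
    U = {a: rng.uniform(Q_minus[(s, a)], Q_plus[(s, a)]) for a in actions}
    return max(U, key=U.get)
```

Note that when the two critics agree (Q– = Q+), the intervals are degenerate and the greedy choice is deterministic; the wider the interval, the more often a nominally inferior action wins the draw.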
Algorithm 1: Competing-Critics.
Input: Learning rate α, parameters k+, k–, discount factor γ, and exploration parameter ε.
Initialize Q±(s, a) for all (s, a) ∈ $\mathcal{S}$ × $\mathcal{A}$
Initialize S
While not terminated do
  Sample U(a) ∼ Unif[Q–(S, a), Q+(S, a)] for each action a in state S
  Choose A using ε-greedy from the values U(a)
  Take action A, observe R, S′
  % Compute prediction errors
  δ± ← R + γ max_a Q±(S′, a) − Q±(S, A)
  % Update state-action value functions
  Q±(S, A) ← Q±(S, A) + α[(1 ± k±) 1_{δ± > 0} + (1 ∓ k±) 1_{δ± < 0}] δ±
  % Move to new state
  S ← S′
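Algorithm 1 can be rendered as a compact, runnable sketch. The single-state bandit environment, Gaussian rewards, and action names below are our own stand-ins for an actual experiment, not part of the paper:

```python
import random
from collections import defaultdict

def competing_critics(reward_fn, actions, n_trials=100,
                      alpha=0.5, eps=0.3, gamma=0.0,
                      k_plus=0.9, k_minus=0.9, rng=None):
    """Run the Competing-Critics loop on a one-state task and return
    the optimistic (Q+) and pessimistic (Q-) value tables."""
    rng = rng or random.Random(0)
    Qp = defaultdict(float)   # optimistic critic Q+
    Qm = defaultdict(float)   # pessimistic critic Q-
    for _ in range(n_trials):
        # Decision: uniform draw between the two critics, eps-greedy.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            U = {b: rng.uniform(Qm[b], Qp[b]) for b in actions}
            a = max(U, key=U.get)
        r = reward_fn(a, rng)
        # Prediction errors (one state, so the max runs over actions).
        dp = r + gamma * max(Qp[b] for b in actions) - Qp[a]
        dm = r + gamma * max(Qm[b] for b in actions) - Qm[a]
        # Asymmetric updates: Q+ upweights gains, Q- upweights losses.
        Qp[a] += alpha * ((1 + k_plus) if dp > 0 else (1 - k_plus)) * dp
        Qm[a] += alpha * ((1 - k_minus) if dm > 0 else (1 + k_minus)) * dm
    return Qp, Qm

# Two hypothetical arms with equal means but different spreads:
Qp, Qm = competing_critics(
    lambda a, rng: rng.gauss({"safe": 0.5, "risky": 0.5}[a],
                             {"safe": 0.1, "risky": 0.4}[a]),
    actions=["safe", "risky"])
```

With balanced k± = 0.9, the learned gap Q+ − Q– tracks the spread of each arm's rewards, mirroring the simulations reported below.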
Simulation Experiments
We used simulation to investigate how individuals would behave in several experiments were they to learn and make decisions according to our decision-making model. In particular, we wanted to identify possible vulnerabilities in behavior that arise from a shift in the balance between the internal optimist and pessimist, instantiated by changes in parameters k+ and k–. For simplicity, each simulation involves 30,000 replicates, and parameters are fixed at:
\[ (\alpha,\ \varepsilon,\ \gamma,\ k^+,\ k^-) = (0.5,\ 0.3,\ 0,\ 0.9,\ 0.9), \]
unless otherwise specified. In the Supplement, we also explore situations when parameters are randomly sampled to determine the degree to which any of our conclusions are sensitive to parameter choice. Further, a detailed description of the simulations can be found in the Supplement and at: https://github.com/eza0107/Opposite-Systems-for-Decision-Making.
Learning the shape of rewards
Let us first focus on learning behavior by considering the simple case of trivial state and action spaces: $\mathcal{S}$ = {1} and $\mathcal{A}$ = {1}. In this case, learning in the Competing-Critics model is determined completely by the distribution of rewards Rt. We considered what an individual would learn given four different Pearson distributions of Rt, with varying mean μ, standard deviation σ, and skew, while kurtosis was fixed at 2.5. For reference, we also consider the classic Q-learner described in Eq. (1).
Figure 2 illustrates what an individual with balanced parameters, (k+, k–) = (0.9, 0.9), learns over 100 trials. For comparison, we also simulated a traditional, risk-neutral Q learning model by setting k+= k– = 0. Solid dark lines denote state-action value function averaged over simulations and shaded regions represent associated interquartile ranges (IQRs) for each function. One can immediately notice several things. By design, the optimistic value function Q+ is on average larger than the neutral value function Q, which is larger than the average pessimistic value function Q–. In addition, the distribution of each value function appears to converge and can capture shifts in mean rewards μ and scaling of the standard deviation σ. Specifically, the long-term relationship between Q+,Q– and Q is preserved when μ is shifted from 0.5 to 0.25, whereby all value functions shift down by about 0.25. Further, the gap between Q+ and Q– is halved when σ is halved from 0.2 to 0.1; each IQR is also halved. Meanwhile, Q+ and Q– are roughly symmetric around the Q when the reward distribution is symmetric (i.e. zero skew), so that the average of Q+ and Q– is approximately Q. However, moving skew from 0 to 1 is reflected in both the gap between Q+ and Q, which lengthens, and the gap between Q– and Q, which shortens.
Comparison of mean and interquartile range of state-action value functions over 30,000 simulations. The state-action values Q+ and Q– reflect changes in the mean μ, standard deviation σ, and skew of the reward distribution. Notably, asymptotes of these values shift by 0.25 when μ decreases by 0.25, and their gap decreases by 1/2 when σ decreases by a factor of 1/2.
Remarkably, the relationship Q+ > Q > Q– is also present within a single simulation run (Figure 3). Intuitively, this makes sense because these functions capture the behaviors of risk-seeking, risk-neutral, and risk-averse agents, respectively, and it turns out that this ordering is preserved provided k± are neither too small nor too large. See the Supplement for a proof of this result. Furthermore, the last subplot also illustrates that introducing a positive skew to the reward distribution Rt causes the distributions of Q± and Q to have positive skew as well.
A single simulation run of state-action value functions Q± and Q. The state-action values preserve the ordering Q–< Q< Q+ through the entire run.
Value functions Q+ and Q– are modulated not only by the reward distribution but also by the parameters k±. Increasing k+ moves Q+ in a positive direction away from the risk-neutral value function Q, whereas increasing k– moves Q– in a negative direction away from it. With k+ pulling Q+ in one direction and k– pulling Q– in the opposite direction, the midpoint of Q+ and Q– is largely influenced by the gap between k+ and k– (Figure 4A). Meanwhile, the gap between Q+ and Q– is largely influenced by the midpoint of k+ and k–.
Impact of parameters k+ and k– on A) the midpoint of and gap between Q+ and Q– averaged over 30,000 simulations, and B) how an individual makes decisions. In particular, the model decomposes decision-making behavior along two axes, risk-sensitivity and uncertainty-sensitivity, which are rotated 45° from the k± axes. In the simulation, μ = 0.5, σ = 0.2, and skew = 0.
Thus, while k+ and k– are the two natural parameters of the learning process, the difference in how agents make choices is well described by a 45° rotation of these coordinates, yielding axes sr = k+ − k– and su = k+ + k–. As visualized in Figure 4B, we refer to the sr and su axes as the risk-sensitivity and uncertainty-sensitivity axes, respectively. These two axes provide orthogonal ways of interpreting and comparing different reward distributions, as in Figure 5. Namely, risk-sensitivity, which can vary from risk-averse to risk-seeking, captures a learner's bias either against losses or towards gains, and is instantiated in our model as the difference between ½(Q+ + Q–) and the expected reward. In contrast, uncertainty-sensitivity, which can vary from decisive to deliberative, captures a learner's consideration of actions with large standard deviations in rewards. In our model, this uncertainty-sensitivity is instantiated as the size of the interval between Q– and Q+: the larger that interval, the more likely two actions with similar values of ½(Q+ + Q–) are to be seen as competing, viable choices.
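The change of coordinates is just a sum and a difference; a minimal helper (the names s_r and s_u follow the text):

```python
def behavioral_axes(k_plus, k_minus):
    """Rotate the learning parameters (k+, k-) into the behavioral
    axes: risk-sensitivity s_r = k+ - k- and uncertainty-sensitivity
    s_u = k+ + k-."""
    return k_plus - k_minus, k_plus + k_minus

# A risk-seeking learner with moderate overall uncertainty-sensitivity:
s_r, s_u = behavioral_axes(0.9, 0.1)
```

Positive s_r marks a risk-seeking learner, negative s_r a risk-averse one, while larger s_u widens the Q–-to-Q+ interval and thus broadens which actions are treated as viable.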
Four different decision makers with different k+ and k– parameter values interpret the same reward distributions differently. Parameter values associated with risk-seeking are more likely to prefer the rewards drawn from the black distribution, while risk-averse parameter values prefer the red distribution. Meanwhile, deliberative parameter values are more likely to explore the two best competing choices, as those choices have overlap between their Q intervals, while decisive parameter values pick only their preferred distribution. Note that none of the four learners would select the blue distribution.
While increasing uncertainty-sensitivity can increase the variety of actions that a learner takes, it is distinct from the standard use of an exploration parameter ε. An exploration parameter ε forces the exploration of all possible actions and is included to ensure that no action is left unexplored. By contrast, uncertainty-sensitivity is a preference axis, and it only encourages the exploration of competitive actions whose intervals overlap with that of the action with the largest value of ½(Q+ + Q–). The preference aspect of uncertainty-sensitivity is especially clear in cases where many actions with high-variance rewards are considered against a single reliable action with a fixed outcome (no variance) and a slightly higher expected reward. In such a setting, a deliberative learner may often pick the high-variance actions even though they could correctly report that the fixed outcome had a better expected outcome (by contrast, a risk-seeking learner would report the high-variance actions as having better outcomes). Indeed, while both risk-sensitivity and uncertainty-sensitivity can describe why a learner might prefer a high-variance reward to a fixed reward with slightly higher expected return, both are required to explain why some learners might exclusively choose the high-variance action while others sample both the high-variance action and the fixed outcome. Similarly, the difference between uncertainty- and risk-sensitivity can affect choices when a fixed outcome would be preferred, as also illustrated in Figure 5.
In summary, parameters k± can capture a range of behavior from being too risky to not risky enough and from too decisive to too deliberative. We demonstrate these decision-making behaviors in the next two examples.
Capturing a penchant for gambling
To demonstrate how parameters k± drive decision-making behavior in our model, let us consider the Iowa Gambling Task (IGT), which asks participants to repeatedly choose between four decks labeled A to D. After each choice, they gain and/or lose monetary rewards. Hundreds of studies have used the IGT to evaluate human decision-making (Chiu, Huang, Duann, & Lin, 2018). Initial findings showed that healthy controls would learn to select "good" decks (Decks C and D), so called because, on average, they yielded a net gain (Bechara, Damasio, Damasio, & Anderson, 1994). By contrast, individuals with a damaged prefrontal cortex would continue to select "bad" decks (Decks A and B) despite their yielding net losses on average. Selecting bad decks was put forth as a marker of impaired decision-making, or more specifically, an insensitivity to future consequences. This interpretation, however, presumes that the participant's objective is indeed to make decisions that maximize expected rewards, as opposed to making decisions that seek large gains or avoid large losses. Risk-seeking behavior (i.e., a penchant for gambling), in particular, may encourage individuals to pursue bad decks, since they yield the largest one-time gains.
The IGT can be placed within our MDP framework with At ∈ {A,B,C,D} capturing the selected decks, St ∈ {1} capturing a trivial case with only one state, and Rt capturing the summed gain and loss per trial. In particular, we simulate Rt as independent draws from a distribution that depends on the selected deck and matches the characteristics described in the Supplement. For example, Rt is drawn uniformly from {$50, $0} when Deck C is selected.
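The per-deck reward draw can be sketched as follows. Only Deck C's schedule (uniform over {$50, $0}) comes from the text; the other decks' payoff schedules are hypothetical placeholders standing in for the characteristics detailed in the Supplement, chosen only to give A and B net losses and D a net gain on average:

```python
import random

def igt_reward(deck, rng=random):
    """Sample one trial's net payoff for the chosen IGT deck.
    Only Deck C's schedule is taken from the text; the rest are
    illustrative placeholders with the right qualitative profile."""
    if deck == "C":
        return rng.choice([50, 0])          # small, reliable gains
    if deck == "B":  # placeholder: big gains, rare catastrophic loss
        return -1150 if rng.random() < 0.1 else 100
    if deck == "A":  # placeholder: big gains, frequent medium losses
        return 100 if rng.random() < 0.5 else rng.choice([-150, -250, -350])
    return 50 if rng.random() < 0.9 else -200   # Deck D placeholder
```

Feeding such a sampler into the Competing-Critics loop as the reward function reproduces the deck-preference simulations described next.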
Indeed, balanced parameters (k+,k–) = (0.9,0.9), reflecting risk-neutral behavior, result in a preference for Deck C, i.e., one of the good decks that leads to average net gains (Fig 6A). By contrast, imbalanced parameters (k+,k–) = (0.9,0.1), reflecting risk-seeking behavior, result in a preference for Deck B, i.e., one of the bad decks that leads to average net losses. In each case, pessimistic state-action values Q– are larger for good decks (C and D), correctly signifying that these decks are the more risk-averse choices (Figure 6B). Meanwhile, optimistic state-action values Q+ are larger for bad decks (A and B), correctly signifying that these decks are the more risk-seeking choices. Imbalanced k± parameters, however, dramatically underplay the risk of Deck B compared to balanced parameters. Consequently, the chance of large gains encoded in Q+ is suitably enticing to encourage a Deck B preference. That is, Deck B preference, which is actually a well-known phenomenon in healthy participants (Chiu et al., 2018), can be interpreted as a penchant for gambling rather than an insensitivity to future consequences.
Risk-sensitivity of the Competing-Critics model during the Iowa Gambling Task aggregated over 100 trials and 30,000 simulations. A) The "risky" Deck B becomes the most popular choice rather than Deck C when parameter k– is decreased from 0.9 to 0.1. B) Deck selection is determined by the highest value of a random variable drawn uniformly from the interval Q– to Q+. Here, the interval from median Q– to median Q+ is plotted to help illustrate which decks are viable options. Deck B becomes more favorable because of a dramatic increase in the pessimistic value function Q–. C) Bad decks A and B are chosen at higher rates moving along the risk-sensitivity axis (i.e. the k+ = 1 – k– line).
As was done in Steingroever, Wetzels, and Wagenmakers (2013), we can also partition the parameter space {(k+, k–) | 0 ≤ k+, k– ≤ 1} by preference for good and bad decks (Fig 6C). This figure tells us that in the "blue" region of the parameter space, bad decks A and B are selected at greater frequency than good decks C and D. In the context of risk-seeking vs. risk-averse terminology, choosing k+ >> k– means that our learner, despite the fact that Deck B occasionally incurs very large losses, keeps sticking to it because Q+ is driving the choice. In other words, our agent is unable to learn the good decks in the IGT, thus mimicking the behavior of participants with prefrontal cortex damage as demonstrated in Lin, Chiu, Lee, and Hsieh (2007).
The ambiguity of deliberation
One of the main conceptual insights of having two orthogonal axes of risk- and uncertainty-sensitivity is that it can describe a greater variation in the types of decisions that people might make (or prefer to make) and thus allows for alternate interpretations of some experiments. To illustrate this, consider the 2-stage Markov task (Daw, Gershman, Seymour, Dayan, & Dolan, 2011), in which a participant repeatedly selects images over two stages; the experiment was explicitly designed to probe the difference between model-free and model-based learning.
In the 2-stage Markov task, participants are presented one of three pairs of images at a given stage. At the first stage, all participants are shown the first pair of images and have the option to choose either the left or right image. After choosing an image, participants are shown the second or third pair of images, with the pair selected randomly according to probabilities that depend on their first-stage selection. Participants then select an image at the second stage and receive monetary rewards. This task is used in experiments to determine the degree to which individuals are learning about the common (p = 0.7) versus rare (p = 0.3) transition associated with each action in stage 1. To mark this type of learning, the authors point to the probability of staying with the same first-stage decision (i.e. repeating the same first-stage decision on a subsequent trial) depending on the type of transition (common vs. rare) and whether or not the person was rewarded at the second stage. In particular, the authors predicted that stay percentages of a model-free learner would differ only based on reward, while a model-based learner's stay percentage would differ only based on whether the first transition was common or rare. In fact, the data showed that participants' stay percentage varied by both reward and transition type. Since neither model predicted this reward-transition interaction, the authors concluded that both model-free and model-based learning are occurring.
By contrast, we believe that the observed difference in stay percentages can be well captured by our model, and that the relevant difference between the common and rare stay percentages may be capturing uncertainty-sensitivity. We model the two-stage Markov task as follows. The task has actions At ∈ {left, right} representing selected images, states St ∈ {1,2,3} capturing presented image pairs, and rewards Rt capturing rewards after image selection, with rewards after the first stage set to zero. Here t counts the total number of actions. That is, t = 0 corresponds to the first time that a participant takes an action at the first stage, and t = 1 corresponds to the first time that a participant takes an action at the second stage. For our model-free model to capture reward-transition interactions, we do not distinguish between first- and second-stage decisions, using the same model update regardless of the decision stage. This approach effectively treats the switch from second to first stage as a state transition. To allow information to pass between stages, we use a discount factor γ of 0.9. By contrast, the model-free model in Daw et al. (2011) uses different updates for first- and second-stage decisions and does not treat the switch from second to first stage as a state transition. Rather, they view the second-stage decision as a transition to a dummy terminal state and subsequently rely on an eligibility trace to pass information from second- to first-stage decisions.
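As a concrete sketch of the first-stage transition structure described above (only the 0.7/0.3 split comes from the task; the mapping of left/right to a particular second-stage state is an assumption for illustration):

```python
import numpy as np

COMMON = 0.7  # probability of the common transition

def first_stage_transition(action, rng):
    """Sample the second-stage state (2 or 3) after a first-stage action.
    Hypothetical mapping: 'left' commonly leads to state 2, 'right' to 3."""
    common_state = 2 if action == "left" else 3
    rare_state = 5 - common_state  # the other member of {2, 3}
    return common_state if rng.random() < COMMON else rare_state

rng = np.random.default_rng(1)
states = [first_stage_transition("left", rng) for _ in range(20_000)]
frac_common = states.count(2) / len(states)
```

Over many trials the empirical fraction of common transitions concentrates near 0.7, which is what lets a learner who tracks transitions distinguish common from rare outcomes.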
The bar graphs in Figure 7 represent the probability, in our model of competing critics, of sticking with the current choice, categorized by whether it resulted in reward and whether the transition was common or rare. The plots on the right side of the figure show the difference between the probabilities of staying when the transition was common versus rare, given a rewarded or unrewarded trial.
Stay probabilities after a first-stage choice over a horizon of 80 decisions (40 first-stage decisions) and 30,000 simulations. The gap between stay probabilities for common vs. rare transitions increases along the uncertainty-sensitivity axis (i.e. the k+ = k– axis) as the learner increases their deliberation about multiple choices.
As displayed in Figure 7, the characteristic pattern observed in Daw et al. (2011), in which stay percentage depends both on reward and on the common/rare transition, is present with the same trends. Moreover, the degree to which there is a common/rare difference is determined by the parameters along the uncertainty-sensitivity axis (su = k– + k+). Namely, when su is large (k– = k+ = 0.9), the model's stay percentage is only slightly affected by reward and transition, reflecting a more deliberative sampling of actions and hence weaker correlations between actions in one trial and actions in the next. On the other hand, when su is small (k– = k+ = 0.1), the empirically observed dependence on reward and transition type is increased. Meanwhile, the risk-sensitivity axis does not appear correlated with the rare-common stay percentage difference.
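The two axes used here are a 45-degree rotation of the (k+, k–) parameter axes. A small helper makes the mapping explicit; the uncertainty coordinate su = k+ + k– is from the text, while labeling the risk coordinate as k+ – k– is our own shorthand (the text defines that axis only as the k+ = 1 – k– line):

```python
def risk_uncertainty(k_plus, k_minus):
    """Rotate learning weights (k+, k-) into risk/uncertainty coordinates."""
    s_risk = k_plus - k_minus         # > 0 risk-seeking, < 0 risk-averse (our labeling)
    s_uncertainty = k_plus + k_minus  # su in the text
    return s_risk, s_uncertainty

# Moving along k+ = k- changes only the uncertainty coordinate:
assert risk_uncertainty(0.9, 0.9) == (0.0, 1.8)
assert risk_uncertainty(0.1, 0.1) == (0.0, 0.2)
```

Because the rotation couples the two weights, changing a single weight (k+ alone or k– alone) necessarily moves both the risk and the uncertainty coordinate at once.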
While the characteristic pattern of stay percentages can be reproduced by varying parameters k± along the uncertainty-sensitivity axis, it can also be reproduced in other ways. Notably, the models used in Daw et al. (2011) show that the characteristic pattern can be reproduced by varying the degree to which their model-based model is used over their model-free model. In addition, a person's tendency to explore decisions, as reflected in the exploration parameter ε, could also increase or decrease stay probabilities in our model. In other words, it is difficult to disambiguate a change in how deliberative a person is with their decisions from their ability to learn transitions or their tendency to explore.
A possible connection to reaction time
Our conceptualization of the Competing-Critics model assumes that the translation of state-action value functions Q+ and Q– into decisions plays out in time, whereby Q+ and Q– determine not only which decisions are made, but also the time until the decision is made, i.e. the reaction time. For example, we hypothesize that Q+ signals the time at which an action is a viable option to a learner, so that decisions with larger Q+ are considered earlier. Meanwhile, Q– signals the time at which an action is no longer a viable option.
One way to explicitly connect our model to reaction time is to introduce a strictly decreasing function F, e.g., F(x) = exp(–bx), that transforms Ut(a), which is on the same scale as rewards Rt, to a temporal scale. On trials in which the learner behaves greedily (i.e. does not explore), the reaction time could be modeled as mina F(Ut(a)), with arg mina F(Ut(a)) determining which action is selected. The probability of selecting a is left unchanged, since F being strictly decreasing implies that
F(maxa Ut(a)) = mina F(Ut(a)),
arg maxa Ut(a) = arg mina F(Ut(a)).
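This invariance is easy to verify numerically; a minimal check (the decay rate b and the sampled values are illustrative):

```python
import numpy as np

def F(x, b=1.0):
    """Strictly decreasing map from the value scale to a hypothetical time scale."""
    return np.exp(-b * x)

rng = np.random.default_rng(2)
U = rng.normal(size=6)  # sampled action values U_t(a)

# The fastest (smallest) F(U) corresponds to the largest U, so the
# greedy choice is unchanged by the transform.
assert np.argmax(U) == np.argmin(F(U))
assert np.isclose(F(U.max()), F(U).min())
```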
With the introduction of F, there is a one-to-one relationship between reaction time and maxa Ut(a). Thus, we can learn about reaction times by simulating maxa Ut(a) for the learning example with μ = 0.5, σ = 0.2, and skew = 0; the IGT; and the two-stage Markov task (Figure 8).
Mean and standard deviation (SD) of maxa Ut(a) in the (A) learning example with μ = 0.5, σ = 0.2, and skew = 0; (B) Iowa Gambling Task; and (C) two-stage Markov task. Larger values of maxa Ut(a) are hypothesized to correspond to faster reaction times.
In all three examples, the mean of maxa Ut(a) varies primarily along the risk-sensitivity axis, with larger values found near (k+,k–) = (1,0) and smaller values found near (k+,k–) = (0,1). Thus, we would hypothesize that an individual who is risk-seeking would have faster reaction times than an individual who is risk-averse. The standard deviation of maxa Ut(a), however, does not show a consistent trend. When there is one option available, as in the learning example (Figure 8A), the standard deviation of maxa Ut(a) varies primarily along the uncertainty-sensitivity axis, with larger values found near (k+,k–) = (1,1) and smaller values found near (k+,k–) = (0,0). This makes sense since the interval (Q+(a),Q–(a)), from which Ut(a) is drawn, lengthens when (k+,k–) moves towards (1,1). Therefore, in this learning example, greater deliberation (i.e. consideration of multiple actions) would not correspond with longer reaction times as one might expect, but rather with greater variability in reaction times. This connection falls apart when there are multiple competing options, with the standard deviation of maxa Ut(a) varying primarily along the k– axis in the IGT and along the k+ axis in the two-stage Markov task (Figure 8B–C). Thus, we hypothesize that the type of learner who would experience greater variability in reaction times depends on the task.
Alternatively, our model can be modified to include sequential sampling models, which describe reaction times as first passage times of certain stochastic processes, such as drift-diffusion models, out of some specified region (Fontanesi, Gluth, Spektor, & Rieskamp, 2019; Kilpatrick, Holmes, Eissa, & Josić, 2019; Lefebvre, Summerfield, & Bogacz, 2020; Veliz-Cuba, Kilpatrick, & Josic, 2016). One possibility is to specify a sequential sampling model for each competing action a and select the action whose process has the fastest first passage time. If one wanted to keep reaction times equal to F(Ut(a)) and actions selected with the same probability as in our model, then this model would need to be constructed implicitly, so that first hitting times have the same distribution as F(Ut(a)) with F defined above. Otherwise, a preferred sequential sampling model could be specified and the state-action values Q± used to modulate its properties (e.g., drift rate). This is a common strategy when integrating TD learning with a sequential sampling model.
Neural encoding of updates
As we mentioned, the rough intuition behind the reinforcement learning update we chose for the state-action value functions Q+ and Q– is that they capture the behaviors of risk-seeking and risk-averse learners, respectively. Going further, we investigate the possibility that dopamine transients encode the update ΔQ+ associated with the risk-seeking system and serotonin transients encode the negative of the update ΔQ– associated with the risk-averse system. In support of this claim, we present one last study, which measured dopamine and serotonin during a decision-making task (Moran et al., 2018).
In this study, participants were asked to make investing decisions on a virtual stock market. In total, participants made 20 investment decisions in each of 6 markets for a total of 120 decisions. Each participant was allocated $100 at the start of each market and could allocate bets from 0% to 100% in increments of 10%. The participant would gain or lose money depending on their bet. Given a bet At on trial t and market value pt+1 after betting, the percent monetary gain (or loss) on trial t was
((pt+1 − pt)/pt) At.
To model this experiment, we use the simplifying assumption that bets are either low or high, At ∈ {25%, 75%}, and suppose rewards are
Rt := ((pt+1 − pt)/pt)(At − 50).
Actions are centered at 50% to account for the hypothesized role of counterfactuals in this experiment (Kishida et al., 2016). Hence, Rt is the percent monetary gain relative to the counterfactual gain had a neutral 50% bet been made. Following Moran et al. (2018), trials are split according to a reward prediction error (RPE): the percent monetary gain centered to the mean of its past values and inversely scaled by the standard deviation of its past values.
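The centered reward and the RPE split statistic above can be sketched as follows (function names and the population-standard-deviation choice are ours; the formulas follow the definitions in the text):

```python
import numpy as np

def reward(p_t, p_next, bet):
    """Counterfactual-centered reward: percent market change times (bet - 50)."""
    return (p_next - p_t) / p_t * (bet - 50.0)

def rpe(gains):
    """Latest percent gain centered to the mean of past gains and
    inversely scaled by their standard deviation."""
    past, latest = np.asarray(gains[:-1]), gains[-1]
    return (latest - past.mean()) / past.std()

# A low bet earns a positive reward when the market falls: the agent did
# better than the counterfactual neutral 50% bet.
r = reward(100.0, 90.0, 25)  # (-0.10) * (25 - 50) = 2.5
```

With a 10% market drop and a 25% bet, the reward is +2.5, a gain relative to the neutral bet, even though the market itself lost value.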
Let us consider the scenario where a decision made on trial t resulted in a negative RPE, meaning the agent has a lower monetary gain relative to expected past gains (Figure 9A). Without accounting for counterfactuals, a risk-neutral system would experience a negative update independent of bet level. The risk-seeking update ΔQ+, however, depends on bet level during a negative RPE: it is large for a low bet (25%) compared to a high bet (75%). The reverse is true for the negative of the risk-averse update ΔQ–: it is large for a high bet (75%) compared to a low bet (25%).
Mean updates as a function of bet levels and reward prediction error (RPE) over 30,000 simulations. (A) Mirroring dopamine transients in (Kishida et al., 2016), large mean ΔQ+ reinforces either a large bet for positive RPE or a small bet for negative RPE. Mirroring serotonin transients in (Moran et al., 2018), large mean –ΔQ– reinforces either a large bet for negative RPE or a small bet for positive RPE. (B–C) In addition, mean updates can predict the upcoming bet and are asymmetrical, respecting potential asymmetry in the degree to which dopamine and serotonin transients can increase vs. decrease.
These characteristics of ΔQ+ and –ΔQ– during negative RPE mirror, respectively, dopamine and serotonin transients in (Moran et al., 2018). The authors hypothesized that the large dopamine transient for a low bet encourages the rewarding decision of betting low, whereas the large serotonin transient for a high bet protects the individual from the risky decision of betting high. Betting low is only rewarding when compared to the counterfactual loss of betting a higher amount and losing. This hypothesis is consistent with the roles in the Competing-Critics model of a positive ΔQ+ in encouraging a rewarding decision and a negative ΔQ– in protecting against risky decisions.
When RPE is positive, which is when the agent has a higher monetary gain relative to expected past gains, the direction of the updates flips. The update ΔQ+ is now large for a high bet (75%) compared to a low bet (25%), and the negative of ΔQ– is large for a low bet (25%) compared to a high bet (75%). Again, these characteristics mirror dopamine and serotonin transients in (Kishida et al., 2016; Moran et al., 2018). In this case, it was hypothesized that the relatively large dopamine transient for a high bet encourages the rewarding decision of betting high, whereas the relatively large serotonin transient for a low bet protects the individual from the risky decision of betting low. As before, betting low is only considered risky when compared to the counterfactual loss of what could have been gained by betting higher.
As an aside, we point out that average updates ΔQ+ and -ΔQ– are generally more positive than they are negative. This asymmetry respects the fact that dopamine and serotonin transients have a biophysical constraint whereby positive transients are easily induced but negative transients are not.
Following (Moran et al., 2018), we consider how updates ΔQ+ and -ΔQ– influence how a person subsequently bets (Figure 9B–C). Trials are split further based on the subsequent decision made on the next trial. The negative of update ΔQ– is largest when switching from a high to low bet during negative RPE and from a low to high bet during positive RPE. These trends mirror serotonin transients in (Moran et al., 2018), where a relatively large serotonin transient preceded a lowering of a bet when RPE was negative and preceded a raising or holding of a bet when RPE was positive. These findings provided further support that serotonin transients protect an individual from actual and counterfactual losses.
Meanwhile, the update ΔQ+ is largest when keeping a bet low during negative RPE and when keeping a bet high during positive RPE. Since dopamine transients were not investigated as a function of subsequent bets in (Moran et al., 2018), we have the following hypothesis: a relatively large dopamine transient reinforces a low bet when RPE was negative and reinforces a high bet when RPE was positive.
We presented a computational model of human decision-making called the Competing-Critics model. The model conceptualizes decision-making as a competition between two critics, an optimist and a pessimist, which are modulated by parameters k+ and k–, respectively. We posit that information is integrated from each system over time while decisions compete. The optimist activates decisions ("go"); the pessimist inhibits decisions ("no-go"). We show how our model can illuminate behavior observed in experiments using the Iowa Gambling, two-stage Markov, and stock market tasks.
A key hypothesis of the Competing-Critics model is that the updates in the optimistic and pessimistic learning systems are directly encoded in dopamine and serotonin transients. This hypothesis arose from efforts to reproduce observations during the stock market task in Moran et al. (2018) and Kishida et al. (2016). While computational models such as TD learning have provided a useful framework to interpret experiments involving dopamine (Glimcher, 2011), serotonin has been more difficult to pin down (Cools et al., 2011). If serotonin can be understood as updates to a pessimistic learning system, then we would expect serotonin, like dopamine, to influence decision-making in important ways. It would oppose dopamine, protect a person from risky behavior, inhibit certain decisions, and change the value (and timing) of decisions. These functions agree with several leading theories (though not all) (Cools et al., 2011; Daw et al., 2002; J. Deakin, 1983; J. W. Deakin & Graeff, 1991; Montague et al., 2016; Moran et al., 2018; Rogers, 2011); yet, the mathematical form we propose for serotonin is new.
We are not the first to try to interpret observations of serotonin and dopamine through the lens of a computational model (Daw et al., 2002; Dayan & Huys, 2008; Montague et al., 2016; Priyadharsini, Ravindran, & Chakravarthy, 2012). Daw et al., for instance, describe how prediction error in a TD learning system could be transformed into tonic and phasic parts of dopamine and serotonin signals (Daw et al., 2002). Alternatively, Montague et al. argue that two prediction errors, derived from reward-predicting and aversive-predicting TD learning systems, could be transformed into serotonin and dopamine transients (Montague et al., 2016). While these models map prediction errors to dopamine and serotonin, the more useful task might be mapping dopamine and serotonin to learning; that is, understanding what particular dopamine and serotonin transients could mean for how a person learns and makes decisions. Our model provides a surprisingly simple answer: dopamine and serotonin transients are exactly the updates to two learning systems.
Critically, these learning systems can capture ranges of decision-making behavior. These learning systems (and hence, dopamine and serotonin) may oppose each other, but they are not perfect antipodes. Hence, the systems are not redundant and obey a principle about efficient coding of information (Montague et al., 2016). For instance, we show that the two learning systems in the Competing-Critics model can implicitly reflect at least two properties of rewards: the mean and standard deviation of rewards. Several other mathematical models of learning and decision-making suggest individuals track the standard deviation of rewards, but do so explicitly (Gershman et al., 2017; Jepma et al., 2020; Li et al., 2011; Redish et al., 2007; Yu & Dayan, 2005).
In addition, the Competing-Critics model reveals how risk-sensitivity and uncertainty-sensitivity represent two orthogonal dimensions of decision-making and how extreme values in either direction could pose unique impairments in decision-making. Sensitivity to risk and uncertainty is well documented in the psychological, economics, and reinforcement learning literature. For instance, risk-seeking (risk-aversion) can be beneficial when large rewards (small losses) are required to escape (avoid) bad scenarios. Platt and Huettel provide several examples of animals behaving in a risk-sensitive way, e.g., birds switching from risk-aversion to risk-seeking as a function of temperature (Platt & Huettel, 2008). Miscalibrated risk-sensitivity is thought to cause significant problems for people and to underlie a number of psychiatric conditions such as addiction or depression (Korn et al., 2014; Rouhani & Niv, 2019). Mathematically, risk-sensitivity is captured either explicitly through functions that reflect risk-sensitive objectives (Glimcher & Rustichini, 2004; Kahneman & Tversky, 2013) or implicitly through differential weighting of positive and negative prediction errors (Cazé & van der Meer, 2013; Chambon et al., 2020; Hauser et al., 2015; Lefebvre et al., 2017, 2020; Niv et al., 2012; Ross et al., 2018), as we do here. We recommend Mihatsch and Neuneier (2002) for a nice theoretical treatment of risk-sensitivity.
Meanwhile, uncertainty-sensitivity represents the degree to which uncertainty, both in the standard deviation of the reward distribution and in the learner's knowledge of this distribution, influences decisions. Like risk-sensitivity, miscalibrated uncertainty-sensitivity is thought to underlie psychiatric conditions such as anxiety (Grupe & Nitschke, 2013; Hirsh, Mar, & Peterson, 2012; Huang, Thompson, & Paulus, 2017; Luhmann, Ishida, & Hajcak, 2011). Huang et al., for example, describe this miscalibration in anxiety as a "failure to differentiate signal from noise" leading to a "sub-optimal" decision strategy (Huang et al., 2017). Conceptually, our model provides a different interpretation. Rather than being a failure or sub-optimal behavior, extreme uncertainty-sensitivity embodies a strategy that attempts to satisfy competing objectives, some of which are risk-averse and others risk-seeking. In experiments, this conflicted strategy will look similar to an exploration-exploitation trade-off, making it difficult to distinguish between the two.
Interestingly, any attempt to modify solely the optimistic and pessimistic learning system (or dopamine and serotonin transients) will affect both risk sensitivity and uncertainty-sensitivity. The reason is that risk-sensitivity and uncertainty-sensitivity axes are rotated 45 degrees from the axes of the parameters k+ and k– modulating the two learning systems. For instance, increasing k– in an attempt to reduce risk-seeking would have the unintended consequence of increasing the sensitivity to uncertainty. Under our interpretation, this would correspond to interventions on serotonin transients to reduce risk-seeking having the potential side-effect of a loss of decisiveness. Similarly, reducing k+, or intervening on dopamine transients, to reduce risk-seeking would decrease sensitivity to uncertainty. A similar tradeoff occurs when trying to decrease risk-aversion or sensitivity to uncertainty through manipulations of just k+ or just k–. Notably, many current pharmacological interventions (e.g., Lithium) act on both dopamine and serotonin neurons.
Another key hypothesis of our model is that the values placed on decisions by the two learning systems (i.e. Q±) determine the time to make a decision. Thus, the distribution of reaction times may provide additional data beyond choice selection with which to inform or falsify our model. This connection to reaction time might also help to make sense of the impact of serotonin and dopamine on how quickly decisions are made (e.g., impulsively) (Cools et al., 2011; Niv et al., 2005; Worbe, Savulich, Voon, Fernandez-Egea, & Robbins, 2014). Models for reaction time are often built with stochastic differential equations such as drift-diffusion models to reflect a process of evidence accumulation (cf. Fontanesi et al. (2019); Kilpatrick et al. (2019); Lefebvre et al. (2020); Veliz-Cuba et al. (2016) for an overview). For example, drift-diffusion models of reaction time can be integrated with a TD learning model by relating drift velocities to differences in value between two choices (Pedersen, Frank, & Biele, 2017). Reaction time in our model differs from this approach in that it can arise from any number of possible decisions, as opposed to just two, and is sensitive to risk and uncertainty, rather than a single value, for each decision. This additional flexibility may be useful for explaining experimental observations of reaction time.
There are several limitations of this work to consider. We hope it is clear that the modeling of learning in the updates of Q+ and Q– is largely independent of the modeling that maps these values to actions and reaction times. There are numerous ways that pairs of Q+ and Q– values can be mapped to a choice of actions and a time delay in making that choice. In addition, our model was built upon a Q-learning algorithm, but SARSA learning may prove equally suitable. It should also be clear that our model is over-simplified. One notable absence, for example, is that our model did not track average outcomes or map these outcomes, or other parts of our model, to tonic dopamine and serotonin, unlike the model of Daw et al. (2002). Relatedly, we directly incorporated counterfactuals into our rewards to reproduce findings from the stock market task (Kishida et al., 2016; Moran et al., 2018), but perhaps a separate process, such as tonic serotonin or dopamine, should be included to track counterfactuals. Another limitation of our model is that it relies on only two prediction errors. However, a recent study suggests dopamine is capable of capturing a distribution of prediction errors (Dabney et al., 2020), which has the advantage of being able to learn about the distribution of rewards.
Lastly, one of the key properties of our model, the ordering Qt+ > Qt–,
assumes that the parameter α is the same for Q+ and Q–. If the α parameters were not equal, then the relationship between Q+ and Q– could reverse. The possible effects of Q+ < Q– largely fall outside the specifics of the Competing-Critics model, but it is conceivable that such a situation could result in no-go signals arriving before go signals, leading to a decision process unwilling to even consider an option. A situation in which no options were even worth consideration may be similar to anhedonia.
In conclusion, this work establishes a new model of human decision-making to help illuminate, clarify, and extend current experiments and theories. Such a model could be utilized to quantify normative and pathological ranges of risk-sensitivity and uncertainty-sensitivity. Overall, this work moves us closer to a precise and mechanistic understanding of how humans make decisions.
Supplement for "A competition of critics in human decision-making". DOI: https://doi.org/10.5334/cpsy.64.s1
Bayer, H. M., & Glimcher, P. W. (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47(1), 129–141. DOI: https://doi.org/10.1016/j.neuron.2005.05.020
Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50, 1–3. DOI: https://doi.org/10.1016/0010-0277(94)90018-3
Cazé, R. D., & van der Meer, M. A. (2013). Adaptive properties of differential learning rates for positive and negative outcomes. Biological cybernetics, 107(6), 711–719. DOI: https://doi.org/10.1007/s00422-013-0571-5
Chambon, V., Théro, H., Vidal, M., Vandendriessche, H., Haggard, P., & Palminteri, S. (2020). Information about action outcomes differentially affects learning from self-determined versus imposed choices. Nature Human Behaviour, 4(10), 1067–1079. DOI: https://doi.org/10.1038/s41562-020-0919-5
Chiu, Y.-C., Huang, J.-T., Duann, J.-R., & Lin, C.-H. (2018). Twenty years after the Iowa gambling task: rationality, emotion, and decision-making. Frontiers in psychology, 8, 2353. DOI: https://doi.org/10.3389/fpsyg.2017.02353
Cohen, J. Y., Haesler, S., Vong, L., Lowell, B. B., & Uchida, N. (2012). Neuron-type-specific signals for reward and punishment in the ventral tegmental area. Nature, 482(7383), 85. DOI: https://doi.org/10.1038/nature10754
Collins, A. G., & Frank, M. J. (2014). Opponent actor learning (OpAL): Modeling interactive effects of striatal dopamine on reinforcement learning and choice incentive. Psychological review, 121(3), 337. DOI: https://doi.org/10.1037/a0037015
Cools, R., Nakamura, K., & Daw, N. D. (2011). Serotonin and dopamine: unifying affective, activational, and decision functions. Neuropsychopharmacology, 36(1), 98–113. DOI: https://doi.org/10.1038/npp.2010.121
Dabney, W., Kurth-Nelson, Z., Uchida, N., Starkweather, C. K., Hassabis, D., Munos, R., & Botvinick, M. (2020). A distributional code for value in dopamine-based reinforcement learning. Nature, 577(7792), 671–675. DOI: https://doi.org/10.1038/s41586-019-1924-6
d'Acremont, M., Lu, Z.-L., Li, X., Van der Linden, M., & Bechara, A. (2009). Neural correlates of risk prediction error during reinforcement learning in humans. Neuroimage, 47(4), 1929–1939. DOI: https://doi.org/10.1016/j.neuroimage.2009.04.096
Enkhtaivan, E., Nishimura, J., Ly, C. and Cochran, A.L., 2021. A Competition of Critics in Human Decision-Making. Computational Psychiatry, 5(1), pp.81–101. DOI: http://doi.org/10.5334/cpsy.64
January 2016, 21(1): 81-102. doi: 10.3934/dcdsb.2016.21.81
Global existence and boundedness in a parabolic-elliptic Keller-Segel system with general sensitivity
Kentarou Fujie and Takasi Senba
Department of Mathematics, Tokyo University of Science, Tokyo 162-8601
Department of Mathematics, Kyushu Institute of Technology, Sensuicho, Tobata, Kitakyushu 804-8550
Received June 2015 Revised July 2015 Published November 2015
This paper is concerned with the parabolic-elliptic Keller-Segel system with signal-dependent sensitivity $\chi(v)$, \begin{align*} \begin{cases} u_t=\Delta u - \nabla \cdot ( u \nabla \chi(v)) &\mathrm{in}\ \Omega\times(0,\infty), \\ 0=\Delta v -v+u &\mathrm{in}\ \Omega\times(0,\infty), \end{cases} \end{align*} under homogeneous Neumann boundary condition in a smoothly bounded domain $\Omega \subset \mathbb{R}^2$ with nonnegative initial data $u_0 \in C^{0}(\overline{\Omega})$, $\not\equiv 0$.
In the special case $\chi(v)=\chi_0 \log v\ (\chi_0>0)$, global existence and boundedness of the solution to the system were proved under some smallness condition on $\chi_0$ by Biler (1999) and Fujie, Winkler and Yokota (2015). In the present work, global existence and boundedness in the system will be established for general sensitivity $\chi$ satisfying $\chi'>0$ and $\chi'(s) \to 0$ as $s\to \infty$. In particular, this establishes global existence and boundedness in the case $\chi(v)=\chi_0\log v$ with large $\chi_0>0$. Moreover, although the methods in the previous results are effective only in a few specific cases, the present method can be applied to more general cases, requiring only the essential conditions. Indeed, our condition is necessary, since there are many radial blow-up solutions in the case $\inf_{s>0} \chi^\prime (s) >0$.
Keywords: chemotaxis, logarithmic sensitivity, $\varepsilon$-regularity, boundedness, global existence.
Mathematics Subject Classification: Primary: 35B45, 35K55; Secondary: 92C1.
Citation: Kentarou Fujie, Takasi Senba. Global existence and boundedness in a parabolic-elliptic Keller-Segel system with general sensitivity. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 81-102. doi: 10.3934/dcdsb.2016.21.81
P. Biler, Global solutions to some parabolic-elliptic systems of chemotaxis, Adv. Math. Sci. Appl., 9 (1999), 347.
H. Brézis and W. Strauss, Semi-linear second-order elliptic equations in $L^1$, J. Math. Soc. Japan, 25 (1973), 565. doi: 10.2969/jmsj/02540565.
S. Y. A. Chang and P. Yang, Conformal deformation of metrics on $S^2$, J. Differential Geom., 27 (1988), 259.
K. Fujie, Boundedness in a fully parabolic chemotaxis system with singular sensitivity, J. Math. Anal. Appl., 424 (2015), 675. doi: 10.1016/j.jmaa.2014.11.045.
K. Fujie, M. Winkler and T. Yokota, Boundedness of solutions to parabolic-elliptic Keller-Segel systems with signal-dependent sensitivity, Math. Methods Appl. Sci., 38 (2015), 1212. doi: 10.1002/mma.3149.
K. Fujie and T. Yokota, Boundedness in a fully parabolic chemotaxis system with strongly singular sensitivity, Appl. Math. Lett., 38 (2014), 140. doi: 10.1016/j.aml.2014.07.021.
M. A. Herrero and J. J. L. Velázquez, A blow-up mechanism for a chemotaxis model, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 24 (1997), 663.
T. Hillen and K. Painter, A user's guide to PDE models for chemotaxis, J. Math. Biol., 58 (2009), 183. doi: 10.1007/s00285-008-0201-3.
D. Horstmann, From 1970 until present: The Keller-Segel model in chemotaxis and its consequences. I, Jahresber. Deutsch. Math.-Verein., 105 (2003), 103.
W. Jäger and S. Luckhaus, On explosions of solutions to a system of partial differential equations modelling chemotaxis, Trans. Amer. Math. Soc., 329 (1992), 819. doi: 10.1090/S0002-9947-1992-1046835-6.
E. F. Keller and L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399. doi: 10.1016/0022-5193(70)90092-5.
E. F. Keller and L. A. Segel, Traveling bands of chemotactic bacteria: A theoretical analysis, J. Theor. Biol., 30 (1971), 235. doi: 10.1016/0022-5193(71)90051-8.
T. Nagai, Blow-up of radially symmetric solutions to a chemotaxis system, Adv. Math. Sci. Appl., 5 (1995), 581.
T. Nagai, Blow-up of nonradial solutions to parabolic-elliptic systems modeling chemotaxis in two dimensional domains, J. Inequal. Appl., 6 (2001), 37. doi: 10.1155/S1025583401000042.
T. Nagai and T. Senba, Global existence and blow-up of radial solutions to a parabolic-elliptic system of chemotaxis, Adv. Math. Sci. Appl., 8 (1998), 145.
T. Nagai, T. Senba and K. Yoshida, Application of the Trudinger-Moser inequality to a parabolic system of chemotaxis, Funkc. Ekvacioj, 40 (1997), 411.
V. Nanjundiah, Chemotaxis, signal relaying and aggregation morphology, J. Theor. Biol., 42 (1973), 63. doi: 10.1016/0022-5193(73)90149-5.
T. Senba and T. Suzuki, Chemotactic collapse in a parabolic-elliptic system of mathematical biology, Adv. Differential Equations, 6 (2001), 21.
Y. Sugiyama, On $\varepsilon$-regularity theorem and asymptotic behaviors of solutions for Keller-Segel systems, SIAM J. Math. Anal., 41 (2009), 1664. doi: 10.1137/080721078.
M. Winkler, Absence of collapse in a parabolic chemotaxis system with signal-dependent sensitivity, Math. Nachr., 283 (2010), 1664. doi: 10.1002/mana.200810838.
M. Winkler, Global solutions in a fully parabolic chemotaxis system with singular sensitivity, Math. Methods Appl. Sci., 34 (2011), 176. doi: 10.1002/mma.1346.
\begin{document}
\title{Maximal Positive Invariant Set Determination for Transient Stability Assessment in Power Systems}
\author{\IEEEauthorblockN{Antoine Oustry} \IEEEauthorblockA{ \textit{Ecole Polytechnique}\\ Palaiseau, France \\ [email protected]} \and \IEEEauthorblockN{Carmen Cardozo} \IEEEauthorblockN{Patrick Panciatici, \textit{IEEE fellow}} \IEEEauthorblockA{\textit{RTE R\&D} \\ Versailles, France \\ [email protected]} \and \IEEEauthorblockN{Didier Henrion} \IEEEauthorblockA{\textit{CNRS-LAAS} \\ Toulouse, France \\ \textit{and FEL-\v CVUT} \\ Prague, Czechia\\ [email protected]}}
\maketitle
\begin{abstract} This paper assesses the transient stability of a synchronous machine connected to an infinite bus through the notion of invariant sets. The problem of computing a conservative approximation of the maximal positive invariant set is formulated as a semidefinite program based on occupation measures and Lasserre's relaxation. An extension of the proposed method into a robust formulation allows us to handle Taylor approximation errors for non-polynomial systems. Results show the potential of this approach to limit the use of extensive time-domain simulations, provided that scalability issues are addressed. \end{abstract}
\begin{IEEEkeywords} transient stability, invariant sets, occupation measures, Lasserre's relaxation, moment-sum-of-squares hierarchy, convex optimization \end{IEEEkeywords}
\section{Introduction}
Although a classic definition of dynamic system stability does apply to power systems, this notion has been traditionally classified into different categories depending on the variables (rotor angle, voltage magnitude or frequency), the time scale (short or long term) of interest~\cite{kundur1994power}, as well as the size of the disturbance. In particular, transient stability refers to the ability of the power system to maintain synchronism when subjected to a severe disturbance, and it focuses on the evolution of generator rotor angles over the first seconds that follow.
Indeed, a short circuit at a synchronous generator's terminal reduces its output voltage and, with it, the power injected into the network. The mechanical power received is then stored in the rotor mass as kinetic energy, producing a speed increase. If the voltage is not restored within a certain time for the specific fault, known as the Critical Clearing Time (CCT), the unit loses synchronism, i.e. the rotor angle diverges.
Transmission System Operators (TSOs) are responsible for power system security and must prevent this from happening as a consequence of any plausible N-1 situation. Hence, TSOs constantly perform intensive nonlinear time-domain simulations and take actions if needed to ensure transient stability. Historically, the simulated scenarios could be limited to a manageable set of given initial conditions and predefined faults. However, with the changing operational environment of electrical power systems, these critical conditions become harder to identify. Renewable energy sources and new architectures of intraday and balancing markets add uncertainty and variability to the production plan, enlarging the set of possible initial conditions that TSOs have to consider.
Therefore, the search for new methods for assessing the transient stability of classical power systems has drawn the attention of academia and industry. The computation of Regions of Attraction (ROA) appeared as an interesting idea for this purpose. Indeed, a ROA provides the set of acceptable (post-fault) conditions of a dynamic system that are known to reach a given target set in a specified time. ROAs can be obtained through the construction of polynomial Lyapunov functions~\cite{anghel2013algorithmic,kalemba2018Lypunov}, as well as by using the notion of occupation measures and Lasserre's hierarchy~\cite{henrion2014convex,korda2013inner}.
As long as the dynamics of the system is polynomial, both formulations yield a moment-sum-of-squares (SOS) optimization program that can be efficiently solved by semi-definite programming (SDP), a particular class of efficient convex optimization tools.
Within the framework of occupation measures, we propose to assess the transient stability of a power system by computing Maximal Positively Invariant (MPI) sets, which simply exclude all diverging trajectories without fixing any arbitrary target set or reaching time. We consider a Single Machine Infinite Bus (SMIB) test system, based on different non-polynomial models and classical hypotheses of electromechanical analysis. The originality of this work lies in the reformulation of the problem presented in~\cite{MCIOUTER} to obtain inner approximations of the MPI set, and in its application to the transient stability study of a synchronous machine (SM).
The main contributions of this work are:
\begin{enumerate}
\item Formulation of the MPI set inner approximation problem for a polynomial dynamic system constrained on an algebraic set.
\item Its extension into a robust form that ensures the conservativeness of the MPI set for a non-polynomial system as long as its approximation error can be bounded.
\item Computation of CCT bounds without simulation of the post-fault system, but from the evaluation of the polynomial describing the inner/outer approximations of the MPI set along the trajectory of the faulted system. \end{enumerate}
Section~\ref{sec:Modelling} presents the polynomial reformulation of three different SMIB models. Section~\ref{sec:Inner} describes the MPI set inner approximation method while Section~\ref{sec:Robust} includes its robust form. Numerical results are analyzed in Section~\ref{sec:Results}. Conclusion and future work are discussed in Section~\ref{sec:Conclusions}.
\section{Polynomial reformulation of the SM Models} \label{sec:Modelling}
In this work we consider three different SM models. The second order model (2\textsuperscript{nd} OM) is described as follows: \begin{equation}\begin{small}
\left\{
\begin{array}{l}
\dot{\delta}(t) = \omega_n (\omega_s(t) -\omega_i) \\
2H \dot{\omega_s}(t) = C_m -\frac{1}{\omega_s(t)} \frac{V_s V_i}{X_l} \sin(\delta(t)) - D (\omega_s(t) -\omega_i) \\
\end{array}
\right. \label{eq:2ndOM} \end{small}\end{equation} where $\delta$ (radians) is the angle difference between the generator and the infinite bus, $H$ (MWs/MVA) the inertia constant, $\omega_n$ (radians/s) the nominal frequency, $\omega_s$ the generator speed, $\omega_i$ the infinite bus speed, $V_s$ the generator voltage, $V_i$ the infinite bus voltage, $C_m$ the mechanical torque, $X_l$ the line reactance and $D$ the damping factor, all in per unit (p.u.).
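As a time-domain baseline, the CCT introduced in the introduction can be estimated for the 2\textsuperscript{nd} OM by bisection over the fault clearing time. The sketch below is not from the paper: the parameter values (50 Hz system, $H=3.5$, $D=2$, $C_m=0.8$, $V_sV_i/X_l=1.2$ p.u.) are illustrative assumptions, and the fault is modelled as a bolted short circuit that drops the electrical torque to zero.

```python
import math

# Assumed illustrative parameters (not from the paper).
W_N, H, D, CM, K = 2 * math.pi * 50, 3.5, 2.0, 0.8, 1.2  # K = Vs*Vi/Xl

def rhs(delta, w, faulted):
    """Swing equation of the 2nd OM; w is the speed deviation w_s - w_i.
    During the fault the electrical torque is assumed to drop to zero."""
    pe = 0.0 if faulted else (1.0 / (1.0 + w)) * K * math.sin(delta)
    return W_N * w, (CM - pe - D * w) / (2 * H)

def simulate(t_clear, t_end=5.0, dt=1e-3):
    """True if the machine stays in step (|delta| < pi) after fault clearing."""
    delta, w, t = math.asin(CM / K), 0.0, 0.0   # pre-fault equilibrium
    while t < t_end:
        faulted = t < t_clear
        k1 = rhs(delta, w, faulted)
        k2 = rhs(delta + 0.5*dt*k1[0], w + 0.5*dt*k1[1], faulted)
        k3 = rhs(delta + 0.5*dt*k2[0], w + 0.5*dt*k2[1], faulted)
        k4 = rhs(delta + dt*k3[0], w + dt*k3[1], faulted)
        delta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        w     += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if abs(delta) > math.pi:        # pole slip: loss of synchronism
            return False
        t += dt
    return True

def cct(lo=0.0, hi=1.0, tol=1e-3):
    """Bisection on the clearing time between a stable and an unstable case."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if simulate(mid) else (lo, mid)
    return lo
```

With these assumed values, the bisection converges to a clearing time of roughly 0.16 s; this is exactly the kind of repeated simulation the invariant-set approach aims to avoid.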
The third order model (3\textsuperscript{rd} OM) takes into account the dynamics of the transient electromotive force ($e'_q$) considering a constant exciter output voltage ($E_{vf}$): \begin{equation} \begin{small}
\left\{
\begin{array}{l}
\dot{\delta}(t) = \omega_n (\omega_s(t) -\omega_i) \\
2H \dot{\omega_s}(t) = C_m -\frac{1}{\omega_s(t)} \frac{V_s V_i}{X_l} \sin(\delta(t)) - D (\omega_s(t) -\omega_i) \\
T^{'}_{d0} \dot{e'_q}(t) = E_{vf} - e'_q(t) + \frac{x_d-x^{'}_d}{x^{'}_d+X_l}(V_i \cos{\delta(t)} - e'_q(t)) \\
\end{array}
\right. \label{eq:3thOM} \end{small} \end{equation} where $x_d$ and $x'_d$ are the SM steady state and transient direct axis reactances, and $T^{'}_{d0}$ is the direct axis open-circuit transient time-constant. The fourth order model (4\textsuperscript{th} OM) includes a voltage controller: \begin{equation} \begin{small}
\left\{
\begin{array}{l}
\dot{\delta}(t) = \omega_n (\omega_s(t) -\omega_i) \\
2H \dot{\omega_s}(t) = C_m -\frac{1}{\omega_s(t)} \frac{V_s V_i}{X_l} \sin(\delta(t)) - D (\omega_s(t) -\omega_i) \\
T^{'}_{d0} \dot{e'_q}(t) = E_{vf}(t) - e'_q(t) + \frac{x_d-x^{'}_d}{x^{'}_d+X_l}(V_i \cos{\delta(t)} - e'_q(t)) \\
T_E \dot{E_{vf}}(t) = \kappa (V_{ref} - V_s(t)) - E_{vf}(t)\\
\end{array}
\right. \label{eq:4thOM} \end{small} \end{equation} where now the exciter output voltage $E_{vf}(t)$ is time varying, \[ V_s(t)=\sqrt{(\frac{x_q V_i \sin{\delta(t)}}{x_q+X_l})^2+(\frac{x_d V_i \cos{\delta(t)} + X_l e'_q(t)}{x^{'}_d+X_l})^2}, \] $x_q$ is the quadrature axis reactance, $V_{ref}$ is the SM reference voltage and $\kappa$ is the controller gain, all in p.u. These models include non-polynomial terms on $\delta$ (trigonometric function), $\omega$ (inverse function) and also $e'_q$ (square root). In the sequel we explain how to derive polynomial models by reformulations.
\subsection{Variable change for exact equivalent}
As demonstrated in~\cite{matteo2018Lypunov}, the trajectories and stability properties of the system are preserved when using the following endogenous transformation: \begin{equation}
\Phi := \left\{
\begin{array}{l}
]-\pi,\pi[ \times ]-\omega_{M},\omega_{M}[ \to \mathcal{C} \times ]-\omega_{M},\omega_{M}[ \\
(\delta,\omega) \mapsto (\cos(\delta),\sin(\delta),\omega)
\end{array}
\right. \label{eq:change_variables} \end{equation} where $\omega_M$ is an upper bound on $\omega$ and $\mathcal{C}=\{(x,y)\in \mathbb{R}^2, x^2+y^2=1, x>-1\}$. Then, the SM 2\textsuperscript{nd} OM becomes polynomial at the price of increasing the dimension of the state space and adding an algebraic constraint.
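A quick numerical sanity check confirms that the constraint $x^2+y^2=1$ is an exact invariant of the lifted polynomial model, since $\tfrac{d}{dt}(x^2+y^2)=2x\dot x+2y\dot y=-2\omega_n\omega xy+2\omega_n\omega xy=0$. The sketch below integrates the lifted 2\textsuperscript{nd} OM with assumed illustrative parameters ($H=3.5$, $D=2$, $C_m=0.8$, $V_sV_i/X_l=1.2$ p.u., 50 Hz).

```python
import math

# Assumed illustrative parameters (not from the paper).
W_N, H, D, CM, K = 2 * math.pi * 50, 3.5, 2.0, 0.8, 1.2

def rhs(s):
    # Lifted 2nd OM: (x, y) = (cos delta, sin delta), w = speed deviation,
    # with 1/(1+w) replaced by its second-order Taylor polynomial.
    x, y, w = s
    return (-W_N * w * y,
            W_N * w * x,
            (CM - (1.0 - w + w * w) * K * y - D * w) / (2 * H))

def step(s, dt):
    # One RK4 step.
    k1 = rhs(s)
    k2 = rhs([s[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = rhs([s[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = rhs([s[i] + dt * k3[i] for i in range(3)])
    return [s[i] + dt / 6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(3)]

d0 = 0.9                                  # initial rotor angle (rad), assumed
s = [math.cos(d0), math.sin(d0), 0.0]
drift = 0.0
for _ in range(10000):                    # 10 s with dt = 1e-3
    s = step(s, 1e-3)
    drift = max(drift, abs(s[0]**2 + s[1]**2 - 1.0))
```

The recorded drift stays at the level of the integration round-off, illustrating that the lifting only adds an algebraic constraint, not an approximation.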
\subsection{Taylor Approximation}
The polynomial reformulation of the inverse function and the square root, whose arguments have limited variations in the post-fault system, is achieved using a classic Taylor series expansion. Without loss of generality, we set $\omega(t)=\omega_s(t) - \omega_i$ as the speed deviation of the SM and $\omega_i=$ 1 p.u., such that: \begin{equation} \frac{1}{1+\omega} = 1 - \omega + \omega ^2 + o(\omega^2) \\ \label{eq:DLomega} \end{equation}
\begin{equation} V_s = V_s^{eq} (1+\frac{h}{2}-\frac{h^2}{8})+ o(h^2)\\
\label{eq:DLVs} \end{equation} where $V_s^{eq}$ is the terminal voltage at an equilibrium point and $h = [(\frac{x_q V_i y}{x_q+X_l})^2+(\frac{x_d V_i x + X_l e'_q}{x^{'}_d+X_l})^2] /(V_s^{eq})^2 -1 $.
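For the expansion of the inverse function, the remainder can be checked explicitly: $\tfrac{1}{1+\omega}-(1-\omega+\omega^2)=-\tfrac{\omega^3}{1+\omega}$, so on $|\omega|\le\omega_M$ the absolute error is at most $\omega_M^3/(1-\omega_M)$. The short check below uses an assumed speed-deviation bound $\omega_M=0.05$ p.u.

```python
# Numerical check of the second-order Taylor remainder of 1/(1+w) on
# |w| <= w_M.  The exact remainder is w^3/(1+w), hence bounded by
# w_M^3 / (1 - w_M).  w_M = 0.05 pu is an assumed bound, not from the paper.
w_M = 0.05
bound = w_M**3 / (1.0 - w_M)

max_err = 0.0
n = 2001
for i in range(n):
    w = -w_M + 2.0 * w_M * i / (n - 1)
    err = abs(1.0 / (1.0 + w) - (1.0 - w + w * w))
    max_err = max(max_err, err)
```

For $\omega_M=0.05$ the bound is about $1.3\times 10^{-4}$, which is the kind of certified error the robust formulation of Section~\ref{sec:Robust} must absorb.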
\subsection{Polynomial Model} The 4\textsuperscript{th} SM model is now expressed as a polynomial model: \begin{equation}\begin{small}
\left\{
\begin{array}{l}
\dot{x}(t) = - \omega_n \omega(t) y(t) \\
\dot{y}(t) = \omega_n \omega(t) x(t) \\
2H \dot{\omega}(t) = C_m -(1 - \omega(t) + \omega ^2 (t)) \frac{V_s V_i}{X_l} y(t) - D \omega (t) \\
      T^{'}_{d0} \dot{e'_q}(t) = E_{vf}(t) - e'_q(t) + \frac{x_d-x^{'}_d}{x^{'}_d+X_l}(V_i x(t) - e'_q(t)) \\
      T_E \dot{E_{vf}}(t) = \kappa (V_{ref} - V_s^{eq} (1+0.5h(t)-0.125h(t)^2)) - E_{vf}(t)\\
x(t)^2+y(t)^2 = 1, x(t) > -1, \omega(t) \in ]-\omega_{M},\omega_{M}[
\end{array}
\right. \label{eq:4thOMfinal} \end{small}\end{equation} and hence MPI sets can be computed according to the methodology presented in the next Section~\ref{sec:Inner}. However, the impact of the model approximation on the MPI set is unknown. Section~\ref{sec:Robust}
explains how to handle modelling errors.
\section{Inner Approximation of the MPI Set for Polynomial Systems} \label{sec:Inner}
Let $f$ be a polynomial vector field on $\mathbb{R}^n$. For $x_0 \in \mathbb{R}^n$ and $t\geq 0$, let $x(t|x_0)$ denote the solution of the ordinary differential equation $\dot{x}(t) =f(x(t))$ with initial condition $x(0|x_0)=x_0$. Let $X$ be a bounded and open semi-algebraic set of $\mathbb{R}^n$ and $\Bar{X}$ its closure. The Maximal Positively Invariant (MPI) set included in $X$ is defined as:
$$X_0 := \{x_0\in X : \forall t\geq 0,\: x(t|x_0) \in X\}.$$ In words, it is the set of all initial states generating trajectories staying in $X$ \textit{ad infinitum}.
In \cite{korda2013inner}, the authors propose to obtain ROA inner approximations by computing outer approximations of the complementary set (which is a ROA too) with the method presented in \cite{henrion2014convex}. Following the same idea, we approximate the complement of the MPI set from the outside in order to obtain inner approximations of the MPI set. The complement of the MPI set is:
$$ X \backslash X_0 = \{x_0 \in X : \exists t \geq 0, \:x(t|x_0) \in X_\partial\}$$ where $X_\partial$ denotes the boundary of $X$. Thus $X \backslash X_0$ is the infinite-time ROA of $X_\partial$.
This specificity, together with the presence of an algebraic constraint, means that the ROA calculation method cannot be applied directly as published in the literature. On the one hand, we handle the algebraic constraint by changing the reference measure from an $n$-dimensional volume (Lebesgue) to a uniform measure over a cylinder (Hausdorff).
On the other hand, ROA approaches usually consider a finite time for reaching the target set. To tackle this issue, we propose here to extend to continuous-time systems the work presented in~\cite{magron2017semidefinite}, where occupation measures are used to formulate the infinite-time reachable set computation problem for discrete-time polynomial systems.
For a given $a>0$, we define the following linear programming problem: \begin{equation} \begin{small} \begin{array}{rcl} p^a \:= & \sup & \mu_0(\Bar{X})\\ & \text{s.t.} & \text{div}(f\mu)+\mu_0=\mu_T \\ && \mu_0+\hat{\mu}_0 = \lambda \\ && \mu(\Bar{X}) \leq a \\ && \end{array}\tag{$P^a$} \label{eq:Pa} \end{small} \end{equation} where the supremum is with respect to measures $\mu_0 \in \mathcal{M}^+(\Bar{X})$, $\hat{\mu}_0\in \mathcal{M}^+(\Bar{X})$, $\mu_T \in \mathcal{M}^+(X_\partial)$ and $\mu \in \mathcal{M}^+(\Bar{X})$ with $\mathcal{M}^+(A)$ denoting the cone of non-negative Borel measures supported on the set $A$.
The first constraint $\text{div}(f\mu)+\mu_0=\mu_T$, a variant of the Liouville equation, encodes the dynamics of the system and ensures that $\mu_0$ (initial measure), $\mu$ (occupation measure) and $\mu_T$ (terminal measure) describe trajectories hitting $X_\partial$. This equation should be understood in the weak sense, i.e. $\forall v \in C^1(\Bar{X}), \int_{\Bar{X}} \text{grad}\: v \cdot f \: d\mu = \int_{X_\partial} v \: d\mu_T - \int_{\Bar{X}} v \: d\mu_0$.
The second constraint ensures that $\mu_0$ is dominated by the reference measure $\lambda$: we use a slack measure $\hat{\mu}_0$ and require that $\mu_0 + \hat{\mu}_0 = \lambda$. The third constraint ensures the compactness of the feasible set in the weak-star topology.
Program (\ref{eq:Pa}) then aims at maximizing the mass of the initial measure $\mu_0$ being dominated by the reference measure $\lambda$ and supported only on $\Bar{X}\backslash X_0$.
The dual problem of (\ref{eq:Pa}) is the following linear programming problem: \begin{equation} \begin{small} \begin{array}{rcl} d^a \:= & \inf & \displaystyle \int_{\Bar{X}} w(x) \ d\lambda(x)\:+\: u\: a\\ & \text{s.t.} & \text{grad}\: v \cdot f (x) \leq u, \:\forall x \in \Bar{X} \\ && w(x) \geq v(x)+1, \forall x \in \Bar{X} \\ && w(x) \geq 0, \forall x \in \Bar{X} \\ && v(x) \geq 0, \forall x \in X_\partial \end{array}\tag{$D^a$} \label{eq:da} \end{small} \end{equation} where the infimum is with respect to $u \geq 0$, $v \in C^1(\Bar{X})$ and $w \in C^0(\Bar{X})$.
\begin{property} If
$(0,v,w)$ is feasible in
(\ref{eq:da}), then $\{x \in X : v(x)<0\} \subset X_0$ is positively invariant.
\label{lem_pos_inv} \end{property}
The proof of this statement follows as in \cite[Lemma 2]{henrion2014convex} by evaluating the inequalities in (\ref{eq:da}) on a trajectory. Hence, any feasible solution $(0,v,w)$ provides a positively invariant set, thus an inner approximation of the MPI set.
In the same manner as \cite{henrion2014convex,korda2013inner,MCIOUTER}, we use the Lasserre SDP moment relaxation hierarchy of (\ref{eq:Pa}), denoted $(P^a_k)$, to approach its optimum, where $k \in \mathbb{N}$ is the relaxation order. For brevity and practical reasons (see Fig.~\ref{fig:algo}) this paper presents only the dual hierarchy of SDP SOS tightenings of (\ref{eq:da}): \begin{equation} \begin{small} \begin{array}{rcl} d_k^a \:= & \inf & w'l + u a \\ & \text{s.t.} & u - \text{grad}\: v \cdot f = q_{0} + \sum_i q_i\: g_i \\ & & w - v - 1 = p_{0} + \sum_i p_i \: g_i \\ & & w = s_0 + \sum_i s_i \: g_i \\ & & v = t_0 + \sum_i t^+_i g_i - \sum_i t^-_i g_i \end{array}\tag{$D^a_k$} \label{eq:dka} \end{small} \end{equation} where the infimum is with respect to $u \geq 0$, $v \in \mathbb{R}_{2k}[x]$, $w\in \mathbb{R}_{2k}[x]$ and $q_i,p_i,s_i,t_i^+,t_i^- \in \Sigma[x]$, $i=0,1,\ldots,n_X$ with $\mathbb{R}_{2k}[x]$ denoting the vector space of real multivariate polynomials of total degree less than or equal to 2$k$ and $\Sigma[x]$ denoting the cone of SOS polynomials.
For a sufficiently large value of $a>0$ (typically greater than the average escape time on $X\backslash X_0$), the optimal solution $(u_k,v_k,w_k)$ of SDP problem (\ref{eq:dka}) is such that $u_k = 0$.
Hence, solving the SDP program (\ref{eq:dka}) provides $X_{0,k} := \{x \in X : v_k(x)<0\}$ that is positively invariant from Property \ref{lem_pos_inv}. Thus, $X_{0,k}$ is guaranteed to be an \textbf{inner approximation of $X_0$}.
The algorithmic complexity of the method is that of solving an SDP program whose size is in $O(\binom{n+2k}{2k})$, hence polynomial in the relaxation order $k$ with the number of variables $n$ as the exponent.
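To make this growth concrete, the following sketch tabulates $\binom{n+2k}{2k}$ for small state-space dimensions; the values of $n$ and $k$ below are illustrative choices, not results from the paper:

```python
from math import comb

def sdp_size(n: int, k: int) -> int:
    """Number of moments of total degree <= 2k in n variables: C(n+2k, 2k)."""
    return comb(n + 2 * k, 2 * k)

# Illustrative dimensions: n = 3, 4, 5 states, relaxation orders k = 4, 5, 6.
for n in (3, 4, 5):
    print(n, [sdp_size(n, k) for k in (4, 5, 6)])
```

Each extra state variable multiplies the SDP size by roughly $(n+2k)/n$, which is why CPU time grows sharply between the 3\textsuperscript{rd} and 4\textsuperscript{th} order models.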
We can now compute inner approximations of the MPI sets of polynomial systems constrained to a semi-algebraic set. However, as discussed before, SM models for transient stability analysis are not polynomial. Although we reformulated them, truncation error may destroy the conservativeness guarantee provided by the proposed method.
Nevertheless, modelling errors can be seen as an uncertain parameter $\epsilon$ taking values in a compact set. More precisely, there is a compact set $\mathcal{B}\subset \mathbb{R}^p$ such that the non-polynomial vector field $g$ of $\mathbb{R}^n$ satisfies: $$\forall x \in X, \exists \epsilon(x) \in \mathcal{B}, g(x) = f(x,\epsilon(x))$$
where $f$ is a polynomial function from $\mathbb{R}^{n+p}$ to $\mathbb{R}^n$. For instance, the p\textsuperscript{th} order Taylor expansion of $\frac{1}{1+\omega}$ gives: $$\frac{1}{1+\omega} = 1 - \omega +... + (-\omega)^p + \epsilon_p(\omega)$$ with $\epsilon_p(\omega) := \frac{(-\omega)^{p+1}}{1+\omega}$. Thus, $|\epsilon_p(\omega)|\leq \frac{\omega_M^{p+1}}{1-\omega_M}$. In the next section, we propose a robust formulation of the MPI set calculation that ensures the conservative nature of the solution in spite of the modelling errors described by the set $\mathcal{B}$.
\section{Robust MPI Sets} \label{sec:Robust}
We assume now that the dynamic system depends also on an uncertain time-varying parameter $\epsilon$ evolving in a compact set $\mathcal{B} \subset \mathbb{R}^p$. We are now studying the following ordinary differential equation: $$\dot{x}(t) = f(x(t),\epsilon(t))$$
whose solution is now denoted $x(t|x_0,\epsilon)$ to emphasize the dependence on both the initial condition $x_0$ and the uncertain parameter $\epsilon$. Accordingly, we define the Robust Maximal Positively Invariant (RMPI) set $X_\mathcal{B}$ included in $X$:
$$X_\mathcal{B}:= \{x_0 \in X : \forall \epsilon \in \mathcal{L}^\infty(\mathbb{R}_+,\mathcal{B}),\: \forall t\geq 0, x(t|x_0,\epsilon) \in X \}$$ where $\mathcal{L}^\infty(\mathbb{R}_+,\mathcal{B})$ denotes the vector space of essentially bounded functions from $\mathbb{R}_+$ to $\mathcal{B}$.
If the system is initialized in $X_\mathcal{B}$, it cannot be brought out of set $X$ by any (time-varying) control whose values belong to $\mathcal{B}$. Moreover, $X_\mathcal{B}$ is the biggest set included in $X$ being positively invariant for every dynamical system $\dot{x} = f(x,\epsilon)$ with a fixed $\epsilon \in \mathcal{B}$.
In order to compute the RMPI set, we propose the following linear programming problem: \begin{equation} \begin{small} \begin{array}{rcl} p^a_{\mathcal{B}} \: = & \text{sup} & \mu_0(\Bar{X})\\ & \text{s.t.} & \text{div}(f\mu)+\mu_T=\mu_0 \\ & & \mu_0+\hat{\mu}_0 = \lambda \\ & & \mu(\Bar{X} \times \mathcal{B}) \leq a \end{array}\tag{$P^a_\mathcal{B}$} \label{eq:pab} \end{small} \end{equation} where the supremum is with respect to $\mu_0 \in \mathcal{M}^+(\Bar{X})$, $\hat{\mu}_0\in \mathcal{M}^+(\Bar{X})$, $\mu_T \in \mathcal{M}^+(X_\partial)$ and $\mu \in \mathcal{M}^+(\Bar{X} \times \mathcal{B})$.
Its dual linear program reads: \begin{equation} \begin{small} \begin{array}{rcl} d^a_{\mathcal{B}} \:= & \inf & \displaystyle \int_{\Bar{X}} w(x) \ d\lambda(x) \: + \: u \: a\\ & \text{s.t.} & \text{grad}\: v(x) \cdot f (x,\epsilon) \leq u, \: \forall (x,\epsilon) \in \Bar{X} \times \mathcal{B} \\ & & w(x) \geq v(x)+1, \: \forall x \in \Bar{X} \\ & & w(x) \geq 0, \: \forall x \in \Bar{X} \\ & & v(x) \geq 0, \: \forall x \in X_\partial\\ \end{array}\tag{$D^a _\mathcal{B}$} \label{eq:dab} \end{small} \end{equation} where the infimum is with respect to $u\geq 0$, $v \in C^1(\Bar{X})$ and $w \in C^0(\Bar{X})$.
\begin{property} If
$(0,v,w)$ is feasible in (\ref{eq:dab}), then the set $\{x \in X : v(x)<0\}$ is positively invariant for any
given $\epsilon \in \mathcal{L}^\infty(\mathbb{R}_+,\mathcal{B})$. \label{lem_pos_inv_robuste} \end{property}
Such a feasible solution is obtained following the same approach as in Section~\ref{sec:Inner}, computing the Lasserre moment hierarchy of (\ref{eq:pab}). Indeed, the dual hierarchy is made of SOS tightenings of (\ref{eq:dab}), which can be solved using SDP.
\section{Numerical Results} \label{sec:Results}
The method described in this work has been implemented in MATLAB. The SDP problems are solved using MOSEK that takes as input a raw SDP program. As illustrated in Fig.~\ref{fig:algo} we consider two equivalent alternatives to produce this file:
\begin{enumerate}
\item Using the interface GloptiPoly 3 \cite{henrion2009gloptipoly}, that takes a linear program on measures as an input and produces the Lasserre SDP moment relaxation of a specified degree.
\item Using the interface SOSTOOLS \cite{prajna2002introducing}, that takes a SOS programming problem as an input and produces the corresponding SDP problem. \end{enumerate}
\begin{figure}
\caption{Implementation of the proposed method}
\label{fig:algo}
\end{figure}
It is important to highlight that from the implementation point of view, the SM models presented in Section~\ref{sec:Modelling} were renormalized in order to get well scaled SDP problems. In addition, a \textit{reasonable} set $X$ is defined such that the volume of the MPI set $X_0$ covers a non-negligible part of this box. For the test systems considered here, this was achieved by setting all variables between -1 and 1. For parameter $a$, we used 100.
\subsection{Link between MPI sets and transient stability}
Let us consider: i) the test system described in Appendix~\ref{sec:appB}, ii) two scenarios with $C_m=$0.6 p.u. and $C_m=$0.7 p.u., and iii) for illustrative purposes, two faults at the SM terminal with different clearing times (CT): 300 and 350 ms, see Fig.~\ref{fig:Vs}.
\begin{figure}
\caption{Terminal voltage}
\label{fig:Vs}
\caption{Rotor angle trajectory}
\label{fig:RotAng}
\caption{Stable and unstable cases}
\label{fig:2ndOMVsDelta}
\end{figure}
\subsubsection{SM 2\textsuperscript{nd} OM} Critical clearing times for both scenarios are determined through simulation. For this first model, $CCT_1$ for scenario 1 ($C_m=$0.6 p.u.) is 310 ms and $CCT_2$ for scenario 2 ($C_m=$0.7 p.u.) is 250 ms. Figure~\ref{fig:RotAng} shows that when the fault is cleared at 300 ms only scenario 1 remains stable.
Figure~\ref{fig:2ndInOutK} shows the MPI set approximations computed with the proposed approach (solving program~\eqref{eq:dka} and setting $v=0$) for two different degrees of relaxation ($k=3$ and $k=5$).
The trajectories presented in Fig.~\ref{fig:RotAng} for scenario 1 and different fault clearing times are also included. The accuracy gain provided by increasing the relaxation degree is observed.
\begin{figure}
\caption{MPI set approximations for different $k$.}
\label{fig:2ndInOutK}
\end{figure}
For $k=$5 the computed inner and outer approximations are quite close, which enables us to conclude on the stability of a given post-fault situation by simulating only the faulted system. Moreover, they provide insight into the ``stability margin'' by looking at the distance between the system state at fault elimination and the boundaries of the MPI set.
Figure~\ref{fig:2ndCmH} shows the computed MPI set for $k=5$ and different system parameters. In all cases CPU times are around 4 seconds\footnote{Intel(R) Core(TM) i7-4900MQ 2.8GHz.}. As the SM is operated closer to its maximal capacity (higher $C_m$) the stability region becomes smaller and moves to the right side. Indeed, the rotor angle at the equilibrium point increases with $C_m$.
\begin{figure}
\caption{MPI set approximations for different parameters.}
\label{fig:2ndCmH}
\end{figure}
Consistently with intuition from the equal area criterion, the critical angle beyond which fault elimination becomes ineffective to prevent loss of synchronism is independent of $H$. However, the MPI sets become ``flatter'': the lower the inertia of the unit, the higher the speed that can be arrested with the same available decelerating power.
Of course, as shown in Fig.~\ref{fig:2ndCCT}, the CCT for a given fault increases with $H$. These figures show the polynomial $v(\delta,\omega)$, describing the MPI set and obtained by solving~\eqref{eq:dka}, evaluated along the faulted trajectory.
\begin{figure}
\caption{CCT for $H=$5 MWs/MVA}
\label{fig:CCTH1}
\caption{CCT for $H=$8 MWs/MVA}
\label{fig:CCTH2}
\caption{$v$ evaluated on the faulted trajectory}
\label{fig:2ndCCT}
\end{figure}
For readability purposes the sign of $v$ for the inner approximation has been changed and the values have been normalized. The zero crossing with the abscissa axis corresponds to the moment when $v(\delta(t),\omega(t)) = 0$, which means that the state variables are no longer inside the computed MPI set. This is the CCT. For $k=5$ the CCT is estimated with a 10 ms precision. However, $v$ remains a high degree polynomial.
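Bounding the CCT as described amounts to bracketing the first sign change of $v$ along the sampled faulted trajectory; a minimal sketch with made-up samples at the 10 ms resolution mentioned above:

```python
def cct_bounds(times, v_values):
    """Bracket the first zero crossing of v along a sampled trajectory.

    The state is inside the computed MPI set while v < 0, so the CCT
    lies between the last sample with v < 0 and the first with v >= 0.
    Returns (t_lo, t_hi), or None if v never changes sign.
    """
    pairs = list(zip(times, v_values))
    for (t0, v0), (t1, v1) in zip(pairs, pairs[1:]):
        if v0 < 0 <= v1:
            return t0, t1
    return None

# Hypothetical samples of v(delta(t), omega(t)) every 10 ms:
times = [0.30, 0.31, 0.32, 0.33, 0.34]
v = [-0.8, -0.4, -0.1, 0.2, 0.6]
print(cct_bounds(times, v))  # -> (0.32, 0.33)
```

The 10 ms precision quoted in the text corresponds directly to the sampling step of the faulted trajectory used in this bracketing.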
\subsubsection{SM 3\textsuperscript{rd} OM} In this case, the MPI set is a three-dimensional volume ($\delta$,$\omega$,$e'_q$). Figure~\ref{fig:MPI3rdOM} shows sections of the outer (light grey) and inner (dark grey) approximations of the MPI set for different values of $e'_q$ (left) and $\omega$ (right). These values correspond to specific points of the stable fault trajectory ($t=$1 s, $t=$1.2 s, $t=$1.35 s and $t=$1.5 s, respectively). As expected, the stability region is larger for high values of the internal electromotive force (the set $X$ is limited to $2e'_{q0}=$1.58 pu). Again, evaluating $v$ along the trajectory during the fault enables us to bound the CCT between 305 and 330 ms.
\subsubsection{SM 4\textsuperscript{th} OM} It is well known that voltage regulators may introduce negative damping in the system~\cite{kundur1994power}. Although power plants have more sophisticated controllers, we consider here a proportional one as described in~\eqref{eq:4thOM} for illustrative purposes. Figure~\ref{fig:4thOMrotang} shows that, depending on the value of $\kappa$, the loss of synchronism may occur after a few diverging oscillations.
\begin{figure}
\caption{Sections of the MPI set for the 3\textsuperscript{rd} OM}
\label{fig:MPI3rdOM}
\end{figure}
Fig.~\ref{fig:MPI4thOM} shows sections of the MPI set outer (light grey) and inner (dark grey) approximations at the equilibrium point. For badly-damped systems, the choice of parameter $a$ may have an impact on accuracy for a given relaxation degree $k$.
\begin{figure}
\caption{Trajectory during faults 4\textsuperscript{th} OM}
\label{fig:4thOMrotang}
\caption{MPI set approximations}
\label{fig:MPI4thOM}
\caption{Results for the 4\textsuperscript{th} OM}
\label{fig:Results4thOM}
\end{figure}
\subsection{Model approximation and Robust MPI sets} The previous section considered a 2\textsuperscript{nd} order Taylor expansion for the $\frac{1}{\omega_s}$ term as described in Section~\ref{sec:Modelling}. Figure~\ref{fig:robust} shows that this approximation is accurate enough, since the RMPI set overlaps with the nominal MPI set. However, in the presence of larger modelling errors, for instance if we use the electrical power directly in the speed equation ($C_e=P_e$ and $\frac{1}{\omega_s}\approx 1$), we observe that: \begin{enumerate}
\item The MPI set computed with~\eqref{eq:dka} is wider and may include points that are unstable in the original non-polynomial system.
\item Since the bounds of $\mathcal{B}$ are larger, the RMPI set is a bit smaller, but offers conservativeness guarantees. \end{enumerate}
\begin{figure}
\caption{Robust MPI set}
\label{fig:robust}
\end{figure}
Indeed, in the second case we write $\frac{1}{1+\omega} = 1 + \epsilon_0(\omega)$ with $\epsilon_0(\omega) = \frac{- \omega}{1 + \omega}$ whereas in the case of the 2\textsuperscript{nd} order Taylor expansion, we write $\frac{1}{1+\omega} = 1 - \omega + \omega^2 + \epsilon_2(\omega)$, with $\epsilon_2(\omega) = \frac{-\omega^3}{1+\omega}$. Naturally, the bounds on $\epsilon_2$ are tighter: $|\epsilon_0(\omega)|\leq \frac{\omega_M}{1-\omega_M}$ while $|\epsilon_2(\omega)| \leq \frac{\omega_M^3}{1-\omega_M}$ (typically $\omega_M = 0.05$).
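The two bounds can be evaluated directly at the typical value $\omega_M = 0.05$; this is a numeric illustration of the claim above, not new analysis:

```python
# Remainder bounds for the 0th- and 2nd-order truncations of 1/(1+omega).
omega_M = 0.05
bound_eps0 = omega_M / (1 - omega_M)      # ~5.3e-2
bound_eps2 = omega_M**3 / (1 - omega_M)   # ~1.3e-4
print(bound_eps0, bound_eps2)
```

The ratio between the two bounds is $1/\omega_M^2 = 400$, which is why the 2\textsuperscript{nd} order truncation yields an RMPI set nearly indistinguishable from the nominal MPI set.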
\subsection{Performance for the 3\textsuperscript{rd} and 4\textsuperscript{th} OM}
As discussed before, the algorithmic complexity of the method depends strongly on $n$, the number of states. Table~\ref{tab:volume} shows how the accuracy of the MPI set inner and outer approximations evolves with the relaxation degree. The volumes are computed using a Monte-Carlo method. Table~\ref{tab:CPU} presents the associated computing time for both models. It is observed that CPU time rises considerably for the 4\textsuperscript{th} OM.
\begin{table}[h!] \caption{Volume of the computed MPI sets} \begin{center}
\begin{tabular}{|c|p{2mm}|p{2.4cm}|p{2.4cm}|} \hline Model & $k$ & inner approximation & outer approximation \\ \hline 3\textsuperscript{rd} & 4 \newline 5 \newline 6 & 9.84 \newline 10.37 \newline 10.85 & 17.02 \newline 14.91 \newline 13.97 \\ \hline 4\textsuperscript{th} & 4 \newline 5 & 12.00 \newline 17.06 & 30.40 \newline 28.12 \\ \hline \end{tabular} \label{tab:volume} \end{center} \end{table}
\begin{table}[h!] \caption{CPU time of the MPI set computation (s)} \begin{center}
\begin{tabular}{|c|p{2mm}|p{2.4cm}|p{2.4cm}|} \hline Model & $k$ & inner approximation & outer approximation \\ \hline 3\textsuperscript{rd} & 4 \newline 5 \newline 6 & 12.64 \newline 29.92 \newline 129.04 & 4.50 \newline 20.13 \newline 100.06 \\ \hline 4\textsuperscript{th} & 4 \newline 5 & 63.82 \newline 573.65 & 41.04 \newline 339.65 \\ \hline \end{tabular} \label{tab:CPU} \end{center} \end{table}
\section{Conclusions} \label{sec:Conclusions} The transient stability problem has been formulated as the inner approximation of the MPI set of the polynomial dynamic system. For this purpose, we have first transformed SM machines models into polynomial ones, and then adapted the published work based on occupation measures and Lasserre hierarchy to the infinite-time ROA calculation for continuous systems constrained to an algebraic set. Simulation results showed that we can compute multidimensional stability regions for more complex SM models and that CCT can be accurately bounded evaluating the obtained polynomial for inner and outer MPI set approximations on the faulted trajectory.
Moreover, we have proposed a robust formulation that provides conservativeness guarantees in the presence of bounded modelling uncertainties. Again, accurate results are obtained when taking into account Taylor approximation errors.
However, algorithmic complexity leads to high CPU times as more details were included in the SM model. Future work will focus on limiting the required relaxation degree in order to reduce computational cost and be able to increase the state space dimension.
\section*{Acknowledgment} The authors would like to thank Matteo Tacchi from LAAS-CNRS Toulouse, and Philippe Juston from RTE for the enlightening discussions.
\appendices
\section{Test System} \label{sec:appB}
\begin{figure}
\caption{Test system: Single Machine Infinite Bus}
\end{figure}
\begin{table}[h!] \caption{System Parameters} \begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline $\omega_n$ & 314 $rad.s^{-1}$ & & $\Bar{E}_{vf}$ (3\textsuperscript{rd} OM) & 1.85 pu \\ \hline $H$ & 5 MWs/MVA & & $T_d$ & 10 s\\ \hline $D$ & 1 pu & & $T_E$ & 0.3 s \\ \hline $V_s$ & 1 pu & & $\kappa$ & 25 pu \\ \hline $V_i$ & 1 pu & & $x_d$ & 2.5 pu\\ \hline $X_l$ (2\textsuperscript{nd} OM) & 0.8 pu & & $x_q$ & 2.5 pu\\ \hline $X_l$ (3\textsuperscript{rd} OM) & 0.2 pu & & $x'_d$ & 0.4 pu\\ \hline \end{tabular} \label{tab:syspar} \end{center} \end{table}
\end{document} | arXiv |
BMC Biomedical Engineering
A portable assist-as-need upper-extremity hybrid exoskeleton for FES-induced muscle fatigue reduction in stroke rehabilitation
Ashley Stewart ORCID: orcid.org/0000-0001-5350-20531,
Christopher Pretty1 &
Xiaoqi Chen1
BMC Biomedical Engineering volume 1, Article number: 30 (2019)
Hybrid exoskeletons are a recent development which combine Functional Electrical Stimulation with actuators to improve both the mental and physical rehabilitation of stroke patients. Hybrid exoskeletons have been shown capable of reducing the weight of the actuator and improving movement precision compared to Functional Electrical Stimulation alone. However little attention has been given towards the ability of hybrid exoskeletons to reduce and manage Functional Electrical Stimulation induced fatigue or towards adapting to user ability. This work details the construction and testing of a novel assist-as-need upper-extremity hybrid exoskeleton which uses model-based Functional Electrical Stimulation control to delay Functional Electrical Stimulation induced muscle fatigue. The hybrid control is compared with Functional Electrical Stimulation only control on a healthy subject.
The hybrid system produced 24° less average angle error and 13.2° less Root Mean Square Error, than Functional Electrical Stimulation on its own and showed a reduction in Functional Electrical Stimulation induced fatigue.
As far as the authors are aware, this is the first study which provides evidence of the advantages of hybrid exoskeletons compared to the use of Functional Electrical Stimulation on its own with regards to the delay of Functional Electrical Stimulation induced muscle fatigue.
Stroke is the second largest cause of disability worldwide after dementia [1]. Temporary hemiparesis is common among stroke survivors. Regaining strength and movement in the affected side takes time and can be improved with the use of rehabilitation therapy involving repetitive and function-specific tasks [2]. Muscle atrophy is another common issue that occurs after a stroke due to lack of use of the muscle. For each day a patient is in hospital lying in bed with minimal activity, approximately 13% of muscular strength is lost (Ellis. Liam, Jackson. Samuel, Liu. Cheng-Yueh, Molloy. Peter, Paterson. Kelsey, Lower Limb Exoskeleton Final Report, unpublished). Electromechanically actuated exoskeletons offer huge advantages in their ability to repetitively and precisely provide assistance/resistance to a user. However, electromechanical actuators which provide the required forces are often heavy and have high power requirements, which limits portability. Furthermore, muscle atrophy can only be prevented by physically working the muscles either through the patient's own volition or the use of Functional Electrical Stimulation (FES).
FES is the application of high-frequency electrical pulses to the nerves or directly to the muscle belly in order to elicit contractions in the muscle. FES devices are typically lightweight and FES is well suited to reducing muscle atrophy in patients with no or extremely limited movement. The trade-off is that precise control of FES is extremely difficult, and controlling specific, repetitive, and functional movement is not easily accomplished. Furthermore, extended use of FES is limited by the introduction of muscle fatigue caused by the unnatural motor unit recruitment order [3]. The forces required for large movements, such as shoulder abduction, are too great to be provided by the use of FES, which is much better suited to smaller movements such as finger extension [4, 5]. Some patients also find the use of FES painful.
Combining the use of FES and an electromechanical actuator within an exoskeleton can potentially overcome the limitations of each individual system. Despite the potential advantages of hybrid exoskeletons, so far only a limited number of studies have been done on their effectiveness. A recent review of upper-extremity hybrid exoskeletons [6] highlighted the advantages that hybrid exoskeletons (exoskeletons which combine FES with an actuator) have with regard to improving the precision of FES-induced movements. However, little attention has been given towards reduction and management of FES-induced fatigue. FES control systems used for upper-extremity hybrid exoskeletons simply manually ramp up stimulation intensity when fatigue is observed.
This work describes the design and testing of an assist-as-need upper-extremity hybrid exoskeleton which uses model-based control of FES with a focus on reducing FES-induced muscle fatigue. The control system is described in Section "Theory", and the results are presented in Section "Results". A discussion of the results is given in Section "Discussion". Conclusions are summarised in Section "Conclusion". The methods, the physical structure of the exoskeleton, and the sensing system are described in Section "Material and methods".
It is highly desirable in stroke rehabilitation robotics that a robot or exoskeleton be capable of performing assistance-as-needed. This way the patient is encouraged to make the effort to achieve movement rather than learning to rely on the robot to perform the movement [7,8,9]. Appropriately timed action is more important than strength for functional gains, however repetitive practice which builds strength without specific functional application can still help to diminish impairment [10]. The ability of rehabilitation robots to adapt to different users and even to the same user on different days or throughout the same session is also highly important with regards to minimising set-up time and cost of rehabilitation [9]. In general the robot should aid and encourage but not limit the movement of the patient [9, 11]. Above all else the robot should pose no harm to the user or nearby individuals.
To implement the concept of assist-as-need there are two important features which are desired:
The assistance provided from the FES and motor should be the minimum which the patient requires to perform the movement at a given time.
The FES should perform the bulk of the movement which the patient is physically unable to. This ensures that most of the movement performed requires effort from the patient's muscles and thus improves muscular strength.
In the system proposed here the angle of the arm can be affected and controlled by three different inputs: volitional movement from the subject, FES-induced movement, and rotation of the motor. Any one of these on its own could potentially produce the desired angle. However, to achieve the two features defined above, a hierarchy of control is necessary.
The control system may in general be clearly divided into at least two control systems, each related to a different output variable which can be considered independently (Fig. 1). There is one situation however, where this is not the case. This situation would occur if neither the FES nor the user were able to provide sufficient torque to produce the movement. In this type of situation the motor should provide positive active assistance for the user and the control for the motor would be based on the angle rather than the measured support. It is important to note that the assistance that the motor provides is purely for flexion. The motor may slow the rate at which the arm extends but it cannot pull the arm down faster than gravity.
High Level Control of the Hybrid System
Section "Setup" will describe the system set-up process. Sections "Motor" to "FES Gain (k)" will present the individual control systems for each of the four variables: the arm angle, the % support, the desired % support, and the overall gain for the FES, respectively.
Previous work [12] investigated the performance of a new linear model for FES control. This model is described by Eq. 1.
$$ \Delta \uptheta =\mathrm{k}\left({\mathrm{v}}_{\mathrm{g}}\Delta \mathrm{v}+{\mathrm{pw}}_{\mathrm{g}}\Delta \mathrm{pw}+{\mathrm{f}}_{\mathrm{g}}\Delta \mathrm{f}\right) $$
θ is the elbow angle in degrees
v is the voltage in volts
pw is the pulse-width in microseconds (full pulse length = positive + negative portions)
f is the frequency in Hertz
k is the overall gain
vg= 14, voltage gain
pwg = 0.15, pulse-width gain
fg = 0.22, frequency gain
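Eq. 1 with the gains listed above can be written as a small helper; the subject-specific gain k used in the example is a placeholder, since the real value is identified at run time:

```python
# Model gains from Eq. 1 of the text.
V_G, PW_G, F_G = 14.0, 0.15, 0.22

def predicted_angle_change(k, dv, dpw, df):
    """Predicted elbow-angle change (degrees) for changes in voltage (V),
    pulse-width (us) and frequency (Hz), per the linear model of Eq. 1."""
    return k * (V_G * dv + PW_G * dpw + F_G * df)

# Placeholder subject gain k = 0.5 (illustrative only).
print(predicted_angle_change(k=0.5, dv=1.0, dpw=20.0, df=5.0))  # -> 9.05
```

A change of 1 V, 20 us and 5 Hz would thus be predicted to raise the elbow angle by about 9 degrees for this hypothetical subject gain.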
The model performed well for different subjects and only the threshold voltage and the overall gain needed to be found for the subject. As described in Section "Material and Methods", to measure the support percentage, knowledge of the user's arm weight is also required. Because this system is adaptive it is possible to initially estimate the value of k as something conservative (higher rather than lower so the system starts with a small stimulation intensity) and have the system recalculate k at run time. Thus, there are only two parameters which must be obtained during setup. These are obtained as follows:
The user is instructed to relax their arm so that the palm faces the user (parallel to the sagittal plane) with fingers pointed down. Once the user is relaxed the system is switched on.
At the beginning of setup the motor rotates the arm to 90°. Five measurements each are taken of the angle and torque. These readings are averaged and used to calculate the weight of the arm under the assumption that the arm and exoskeleton are a point mass at distance 0.13 m from the elbow. The motor then lowers the arm back to 0°.
The voltage threshold test is conducted. Stimulation is applied at a frequency of 30.5 Hz, and pulse-width of 200 μs. Voltage steps are applied in increments of 0.5 V starting at 10 V. Each step is applied for a duration of 3 s and the peak arm response is recorded in degrees. When a step results in a peak arm angle of 20° the voltage threshold test is complete and the input voltage is recorded and defined as the threshold voltage. In between each step, if a 20° angle has not been achieved then stimulation is turned off for a duration of 3 s before the next step is applied. This short rest is to prevent the arm getting used to the stimulation, which would affect the voltage threshold (more stimulation would be required to achieve a given angle).
The entire system takes less than 6 min for a complete set-up, including attachment of the exoskeleton and electrodes. Once the setup is complete, the control system runs on the right arm in response to a desired arm angle based on the position of the left arm. Control of this system is described in Sections "Motor" to "FES Gain (k)".
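Step 2 of the setup reduces to a one-line computation; a minimal sketch, assuming five torque readings in N·m (the sample values are made up, while the 0.13 m lever arm and the point-mass assumption are from the text):

```python
def arm_weight_from_torque(torques_nm, lever_m=0.13):
    """Estimate arm-plus-exoskeleton weight (N) from torque readings taken
    with the forearm held at 90 degrees, treating arm and exoskeleton as a
    point mass at lever_m metres from the elbow."""
    avg_torque = sum(torques_nm) / len(torques_nm)
    return avg_torque / lever_m

# Five hypothetical torque readings averaged as in the setup procedure:
print(arm_weight_from_torque([2.6, 2.5, 2.7, 2.6, 2.6]))  # -> 20.0 N
```

Averaging several readings before dividing by the lever arm reduces the effect of sensor noise on the weight estimate used by the support controller.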
When the desired support is less than 100% the motor speed is set using proportional-derivative (PD) control based on the support error. When the desired support is 100% the motor speed is set using PD control based on the angle error for errors larger than 5°. If the angle error is negative and greater than 20° then the motor will lower the arm at the maximum speed. This last state is to ensure that the motor does not impede movement of the user. Due to the setup of the pulley and cable system the motor cannot physically impede movement of the user when raising the arm but could potentially impede movement during lowering.
Control of FES is performed using the model described by Eq. 1. Updating of the FES parameter inputs is only performed if the desired angle has changed by more than 5° or if the time since the last update has exceeded 0.5 s. This is to give the muscle time to respond to the stimulation. These values were experimentally found to be suitable while still allowing for a faster response from the FES than from the motor. When the FES is updated the left side of Eq. 1 is equal to the angle error. Equation 1 is then used to calculate the required change in each input parameter so that each parameter contributes the same change in angle.
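The update step can be sketched as follows, assuming Eq. 1 is a linear model of the form angle = k·(vg·Δv + pwg·Δpw + fg·Δf), with the gain values taken from the nomenclature; the exact form of Eq. 1 is defined earlier in the paper, so treat this only as an illustration of the equal-contribution split:

```python
VG, PWG, FG = 14.0, 0.15, 0.22  # voltage, pulse-width, frequency gains

def fes_step(angle_error, k):
    """Split the required angle change equally across the three FES
    parameters so each contributes the same change in angle."""
    per_param = angle_error / 3.0   # target angle change per parameter
    dv = per_param / (k * VG)       # voltage step (V)
    dpw = per_param / (k * PWG)     # pulse-width step (us)
    df = per_param / (k * FG)       # frequency step (Hz)
    return dv, dpw, df
```

With k = 1 and a 6° angle error, each parameter contributes 2° of the requested change.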
Desired support
The control of the desired amount of support is performed based on the angle error over time. In general, if the error over time is consistently positive then the desired support should increase. If, on the other hand, the error is consistently negative then the support should decrease. In general the desired support should not respond too quickly to errors in the angle, and it should ignore large short-term errors. Thus, rather than using the average error over time, a median filter of length 50 is used. A measurement of the angle error is taken every 0.5 s.
If the median error over the last 25 s is within ±5°, no changes are made to the desired amount of support. If the median error is positive and larger than 5°, the desired support is increased; if it is negative and larger than 5° in magnitude, the desired support is decreased. The previous median error is also used to calculate how much the desired support should be changed. If there has been a change in both the desired support and the median error, those values are used to calculate the new support using Eq. 2. If there has not been a change in the desired support or the median angle error, the desired support is changed by 1% for every 1° of median angle error, for median angle errors greater than 5°. Regardless of the median error, if all of the FES parameters used for the current step are applied at their maximum values, the desired support is increased by 20%. To prevent rapid changes in the desired support, the maximum change is limited to 20% every 0.5 s.
$$ S\left(t+0.5\right)=S(t)+\frac{\overset{\sim }{e_{\theta }}\left(t+0.5\right)-\overset{\sim }{e_{\theta }}(t)}{\overset{\sim }{e_{\theta }}(t)-\overset{\sim }{e_{\theta }}\left(t-0.5\right)}\times \left[S(t)-S\left(t-0.5\right)\right] $$
S is the desired % support
\( \overset{\sim }{e_{\theta }} \) is the median angle error
t is the current time
\( \overset{\sim }{e_{\theta }}\left(t+0.5\right) \) is the desired angle error at t + 0.5 s which is set equal to 0°
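Combining Eq. 2 with the rules above, one 0.5 s update of the desired support could be sketched as below; the names and the clamping order are assumptions:

```python
def update_support(S, S_prev, med_err, med_err_prev, all_fes_at_max):
    """One 0.5 s update. S, S_prev: current and previous desired support (%);
    med_err, med_err_prev: current and previous median angle error (deg)."""
    if all_fes_at_max:
        dS = 20.0                    # FES saturated: raise support by 20%
    elif abs(med_err) <= 5.0:
        return S                     # median error within the 5 deg dead band
    elif S != S_prev and med_err != med_err_prev:
        # Eq. 2: scale the previous support change by the ratio of the desired
        # error change (down to 0 deg) to the previous observed error change
        dS = (0.0 - med_err) / (med_err - med_err_prev) * (S - S_prev)
    else:
        dS = med_err                 # fallback: 1% support per 1 deg of error
    dS = max(-20.0, min(20.0, dS))   # rate limit: at most 20% per 0.5 s
    return max(0.0, min(100.0, S + dS))
```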
FES gain (k)
Every 0.5 s, Eq. 1 is used to calculate the overall gain (k) from the measured right arm angle (minus the 20° threshold) and the FES parameters (minus their respective thresholds). If the right arm angle is greater than 20° and the input parameters are greater than their threshold values, so that the calculated gain is positive, the calculated value is added to an array containing the last 50 gain calculations. Every 0.5 s the median value of the array is retrieved and, after checking limits, set as the new overall gain used to calculate future FES parameter step sizes for a desired arm angle change. The change in the overall gain is limited to ±0.2 every 0.5 s.
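A sketch of this gain estimator is shown below; `k_calc` would come from inverting Eq. 1 with the measured angle and parameters (minus their thresholds), and the class structure itself is an assumption:

```python
from collections import deque
from statistics import median

class GainEstimator:
    """Track the last 50 positive gain calculations and move the working
    overall gain toward their median, at most +/-0.2 per 0.5 s step."""

    def __init__(self, k0, window=50, max_step=0.2):
        self.k = k0
        self.samples = deque(maxlen=window)  # last `window` gain calculations
        self.max_step = max_step

    def update(self, k_calc):
        if k_calc > 0.0:                     # only positive gains are stored
            self.samples.append(k_calc)
        if self.samples:
            target = median(self.samples)
            self.k += max(-self.max_step, min(self.max_step, target - self.k))
        return self.k
```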
Ten tests were conducted on the exoskeleton with a healthy 27-year-old female subject, using different initial values for the overall gain and desired assistance. Selected plots of the test results are displayed in Sections "Test 2 – 2 Minutes of Hybrid Control, k = 1, Assist = 0 %" to "Summary of Results for All Tests", and Figs. 2, 3, 4, 5, 6, 7, 8 and 9. Subsection "Summary of Results for All Tests" contains a summary of all the results. The test details are summarised in Table 1, and the results for each test are summarised in Table 2 in Subsection "Summary of Results for All Tests". Ethical approval for testing was granted by the University of Canterbury Human Ethics Committee.
Table 1 Initial Parameters, Control Scheme, and Test Length for Tests Conducted Using the Hybrid Exoskeleton on one Healthy Individual
Table 2 Summary of Exoskeleton Test Results, Initial Parameters, Control Scheme, and Test Length for Tests Conducted Using the Hybrid Exoskeleton on one Healthy Individual. RMSE = Root Mean Square Error
All tests, except Test 5, were conducted with the user providing no volitional input from their right arm. Test 5 involved the user moving both arms volitionally together in a mirroring pattern. Test 6 used FES only, with no assistance from the motor. Tests 9 and 10 used only the motor and no FES. Only Test 10 did not perform assist-as-need. The tests are listed in the order they were conducted and only short rests (a few minutes) were taken between each test. All tests were conducted on the same day. A discussion of the tests and results is given in Section "Discussion", following the figures. Some mechanical issues arose following Test 7, resulting in a longer rest time (about 30–60 min) prior to Test 8. Any effects this had are discussed in Section "Discussion".
The first figure in each test subsection displays the desired angle (angle of the left arm, input) and measured angle (angle of the right arm, output) during the test. In cases where there is a second figure this shows the change in the desired support and the change in the gain during the test in response to the assist-as-need control scheme.
Test 2–2 minutes of hybrid control, k = 1, Assist = 0%
Right Arm Angle (Orange) and Left Arm Angle (Blue) during Test 2
Test 5–2 minutes of hybrid control with volitional movement, k = 1, Assist = 50%
Variation in Overall Gain (Orange) and Desired Support (Blue) during Test 5
Test 6–6 minutes of FES control, k = 10, Assist = 0%
Test 7–6 minutes of hybrid control, k = 10, Assist = 0%
Test 10–2 minutes of motor control without assist-as-need, k = 1, Assist = 100%
Right Arm Angle (Orange) and Left Arm Angle (Blue) during Test 10
Summary of results for all tests
Root Mean Square Error (RMSE) for Each Test
Change in Root Mean Square Error (RMSE) during Each Test
The results for the tests are given in Table 2 and Figs. 8 and 9 at the end of the previous section. It is important to note that the input reference angle trajectory was not the same for every test, so these results should only be used to give a general, high-level performance comparison. It should also be noted from the angle comparison plots that the right-arm rest angle appeared to be slightly higher than that of the left arm. This is likely because the left arm was controlled volitionally the entire time whereas the right arm was in a relaxed state. When relaxed, the arm was observed to rest not exactly at 0° but a little higher, with a slight bend at the elbow. Thus, the controlling left arm would be physically held at 0° and the right arm would settle slightly above 0° in response. This results in a slight shift of the median and average angle error towards the negative, and an increase in the magnitude of the Root Mean Square Error (RMSE).
This is likely why, at first glance, the volitional test (Test 5) and the motor-only test without assist-as-need (Test 10) appear to have larger average and median errors than the hybrid tests, when these two tests would be expected to produce the smallest errors. That said, the plots for these two tests (Figs. 3 and 7) show some error at the peaks as well as at the troughs, so not all of the error can be attributed to the rest angle. Furthermore, when comparing the RMSE instead of the median and average errors, the volitional test (Test 5) does perform the best, as expected, with the lowest RMSE value. It is important to note that no time delay was considered when comparing the desired angle with the measured angle, so the response time should cause a larger measured error for all tests compared with the volitional test, during which the movement was conducted simultaneously. It is worth noting that even volitional movement by a healthy subject without a time delay does not produce perfect tracking.
Overall, the first four hybrid control tests (Tests 1–4) performed similarly to the motor-only control (Test 10), with similarly sized error measurements across the board. There was some variation in performance among the hybrid control tests depending on the initial test parameters (k and desired assistance), variations in test time, and variations in fatigue; however, no noticeable trend was observed and the differences were not large. The difference in RMSE between the best and worst of the first four hybrid control tests was 7.44 degrees.
Three tests stand out as having large errors: Tests 6, 8 and 9. Test 6 is the FES-only control test, and given that the difficulties with performing large movements with FES and FES-induced fatigue are well-known problems for FES, it is not surprising that Test 6 has the worst performance. The results for Tests 8 and 9 are less expected and will be discussed later in this section.
Due to all of these tests being conducted on the same day, one after another, it is expected that the arm will be more fatigued for the later tests. This is backed up by the general increase seen in the voltage threshold (vthresh) and the general decrease in the overall gain (k). It is for this reason that the 6 min FES-only test (Test 6) was conducted prior to the 6 min hybrid test (Test 7). If the hybrid control is able to reduce the impact of FES-induced fatigue, then the performance of Test 7 is expected to be better than that of Test 6; however, it is also expected that good performance is harder to achieve in the presence of more fatigue. Thus, given that Test 7 performs better than Test 6 despite being conducted after it, there is much stronger support for the argument that the hybrid control does indeed reduce the impact of FES-induced fatigue. This is further backed up by the smaller final overall gain (k) for Test 6.
In general, the voltage threshold is expected to increase as more tests are conducted, however small reductions in fatigue can be observed during the tests in response to brief rests. Holding the reference arm at 0° for a while results in an increased response to the FES for the next movement with a larger response observed following longer rests (Fig. 5). However due to some minor mechanical issues which took time to repair (as described in Section "Results"), the rest time following Test 7 was longer than a few minutes thus allowing the arm more time to rest compared with the time between the other tests. Furthermore, the electrode position was not necessarily kept consistent due to the removal of electrodes and reapplication of electrode gel following Test 7. It is this increase in rest time that is the likely cause for the reduced voltage threshold seen for Test 8. Thus the values of the overall gain give a better comparison of the induced fatigue for each of the 6 min tests. It is also for these reasons that it is difficult to compare the results from Test 8 with the other FES tests although it is still useful to observe the parameter and error changes within Test 8. The RMSE values for each minute within all tests are shown in Fig. 9.
The first, solid blue bar for each test in Fig. 9 gives the RMSE for the overall test, while each of the following shaded bars gives the RMSE for each successive minute, i.e. the first shaded bar gives the RMSE for the first minute, the second for the second minute, and so on. For almost all tests the RMSE improves from the first minute to the second, although for most tests this is only by a small amount and so could simply be attributed to differences in the reference movement or other small random variations. The two exceptions are the two motor-only tests (Tests 9 and 10). The results for Test 9 are discussed more thoroughly below. Test 10 does not perform assist-as-need, so its performance is not expected to increase during the test, and the change is only a few degrees so may be explained simply by differences in the reference movement or other small random variations. There is a larger decrease in the RMSE for the tests which start with larger estimates for the overall gain (Tests 6 and 8), which is likely due to the adaptive nature of the control system: as the value for the overall gain becomes more accurate, the RMSE becomes smaller. However, the improvement for Test 7, while consistent over the first few minutes, is not as significant, despite also starting with an overall gain of 10, the same as Tests 6 and 8. Overall, fatigue would be expected to cause an increase in RMSE over time in an FES system without adaptive parameters and assist-as-need. Indeed, during the last few minutes of the 6 min FES tests an increase in RMSE is observed which may be attributed to fatigue. Given that the increase and variability are larger for the FES-only test (Test 6) than for the hybrid test (Test 7), this provides further evidence for the hypothesis that hybrid exoskeletons can offer performance improvements over FES-only systems with regards to precise control and fatigue reduction.
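The overall and per-minute RMSE values plotted in Fig. 9 can be computed from the desired and measured angle traces as follows; the sampling rate is an assumption, since any fixed rate gives the same per-minute grouping:

```python
import math

def rmse(desired, measured):
    """Root Mean Square Error between two equal-length angle traces (deg)."""
    return math.sqrt(sum((d - m) ** 2 for d, m in zip(desired, measured))
                     / len(desired))

def rmse_per_minute(desired, measured, samples_per_minute):
    """Overall RMSE first, then the RMSE of each successive minute."""
    values = [rmse(desired, measured)]
    for i in range(0, len(desired), samples_per_minute):
        values.append(rmse(desired[i:i + samples_per_minute],
                           measured[i:i + samples_per_minute]))
    return values
```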
The Root Mean Square Error (RMSE) is the most commonly used measurement of performance for prosthesis and exoskeleton control systems [13]; however, given the early state of many of the upper-extremity hybrid exoskeletons described in previous works [6], very little statistical comparison can be made between the results from the hybrid exoskeleton described in this work and those described in [6]. Only one exoskeleton described in [6], the FES/Robot Hand, uses the RMSE as a measurement of performance [14, 15]. The tracking ability of the FES/Robot Hand was tested on 4 stroke subjects during 20 s long tests. The ability of the subject to track without aid from the hybrid exoskeleton was compared to the tracking ability of the exoskeleton with different combinations of FES and motor support. The RMSE for the volitional movement was 10.9 degrees, the best exoskeleton performance (with a 50/50 balance of motor and FES support) was 4.9 degrees, and the RMSE for FES only was about 8.5 degrees, resulting in an improvement in RMSE of 6 degrees between no support and hybrid support, and an improvement of 3.6 degrees between FES only and the hybrid system.
It is important to note that the tracking tests for the FES/Robot Hand were performed on the index finger, which has a smaller range of motion than the elbow joint, so it is difficult to compare directly with this work. Furthermore, the tests performed in this work compared a healthy subject performing volitional movement with the same subject putting no effort in at all, with FES and with the hybrid system, whereas the tests conducted on the FES/Robot Hand compared stroke patients performing volitional movement with and without the help of the different exoskeleton systems [14, 15]. As this work tests the volitional movement of a healthy subject, the exoskeleton is not expected to produce a reduction in error here. Furthermore, complete relaxation of a subject's muscles is not always easy to achieve; in some cases a user may unintentionally fight or aid the FES. Thus, the focus of this work is to compare the performance of the FES on its own with that of the hybrid combination which, as given in Table 2, shows an improvement in RMSE of 13.2 degrees for a 6 min test (RMSE of 42.9 for the FES system compared with an RMSE of 29.7 for the hybrid system). While it is difficult to compare values directly, both the results described in this work and the results described in [14, 15] demonstrate an improvement of the hybrid system over the use of FES on its own with regards to precision of movement.
One other exoskeleton described in [6], the Wearable Rehabilitation Robot [16], uses a similar type of performance measure. The Integral of the Square of the Error (ISE) is used to compare the performance of an exoskeleton with and without FES for movement of the shoulder and fingers. For these tests the power of the actuator was deliberately reduced below that which would normally be required. The performance of the system was found to be better when the FES was used in addition to the motor providing evidence that hybrid exoskeletons can reduce the power requirements of the actuator. This cannot be seen in the results presented in the current work as the power of the actuator was not limited in the same way.
A key novel contribution of the current work is to test whether the hybrid exoskeleton is able to reduce the level of FES-induced fatigue as this is something which has not been tested by the hybrid exoskeletons described in [6]. As has been described already this can be tested by comparing the variations in the final value of the overall gain (k). The greater reduction of the overall gain during the FES test compared to that of the later performed hybrid test indicates that the hybrid system is able to reduce the FES-induced fatigue.
Given that all of these tests were conducted on the same individual and the same day, it is expected that the final value for the overall gain would be of roughly similar magnitude across tests, with some variation due to fatigue as described above. Thus it is promising to see that, despite the large differences in the initial value of the overall gain, the final values are all of similar magnitude, with one exception. The volitional test (Test 5) did not cause variations in the overall gain. This can be attributed to a lack of errors during the test, which would normally cause the software to apply the FES. The value of the overall gain can only be updated if the FES has been applied a sufficient number of times. Given that Test 5 involved the subject moving both arms simultaneously, there was very little error and thus very little reason for the FES to be applied. This is not an issue from a control perspective: if the error were to increase, the FES would be applied and the overall gain would be calculated. Given that the user is very capable, it is not a problem that the overall gain has yet to be calculated, and from a patient monitoring perspective one can still observe that the FES input parameter values are small, the desired assistance has decreased and remains at 0% (Fig. 4), and yet the error is also small. This strongly indicates a user who is capable of performing the movement completely on their own. The change in these values over time will also provide an indication of the user's ability as the user becomes more fatigued and across several sessions. It is also promising to note that the exoskeleton does not appear to impede a user who is capable of fully performing the movement. Overall, the assist-as-need with regards to the overall gain (k) performs well.
The assist-as-need of the motor is not quite as smooth as that of the overall gain, which can be seen by comparing Test 9 and Test 10. It is expected that the desired assistance will fluctuate somewhat, given that short rests can improve the effectiveness of the FES parameters; however, the rate at which the desired assistance varies during these tests is faster and larger than is desirable. It is possible that the rest angle of the right arm being greater than 0° contributes to this, as does the slow lowering speed of the motor, although lowering the arm too quickly is generally not desirable either. What is more likely is that the assumption made in Section "Desired Support" with regards to Eq. 2 is a poor one. Equation 2 relies on the assumption that if a previous change in support resulted in a given change in error, then applying the same change in support again would result in the same change in error. Based on the results, this is likely not the case. Other improvements could be made by using the median angle error over a longer time period in addition to making changes to Eq. 2.
So far the hybrid exoskeleton described in this work has only been tested on one healthy subject. This is a common issue in current exoskeleton research in general. Furthermore, very few exoskeletons, and even fewer hybrid exoskeletons, have been tested on stroke patients, let alone on large numbers of them. Cost is one of the main barriers to widespread testing of exoskeleton devices. The cost to construct the exoskeleton described in this work is very low (a few hundred NZD), which may help to lessen this barrier in future.
Overall, the hybrid control and assist-as-need control methods perform well in comparison with complete volitional movement and non-hybrid control. In particular, the hybrid system shows improved performance with regards to FES-induced fatigue compared with using FES only, demonstrated by the larger change in overall gain (k) and the larger average and RMSE errors for the FES-only control. As far as the authors are aware, this is the first upper-extremity hybrid exoskeleton which uses model-based FES control to perform assist-as-need.
The design of a voltage-controlled functional electrical stimulator has been described in other works [17] and is the FES device used during the tests described in this work. It allows for control of a wide range of FES parameters. The electrodes used in this work are (50 mm × 50 mm) reusable e-textile electrodes, which have similar performance to, and lower resistance than, conventional hydrogel electrodes [18]. The exoskeleton in this work has been designed for the elbow joint.
For simplicity and portability, the Rhino Motion Controls High Torque Servo Motor (RMCS-2251) was selected as the actuator for this exoskeleton. This motor is more than capable of providing all of the torque requirements for movement of the elbow joint [19]. A smaller and lighter motor could be used in its place in future. A portable and rechargeable Li-Po battery (Zippy, 4000 mAh, 11.1 V, Hardcase, 20C Series) was acquired as the supply for the system as a whole. It is combined with a 150 W adjustable boost circuit (purchased from prodctodc.com - Item ID #090438, set to 13.5 V for this system) and relay circuitry for added safety. The relay section of the circuit was constructed by one of the lab technicians. A 5 V regulator (L78S05CV) was used to step the 13.5 V down for the Arduino and motor. For the testing described in this work, a desktop DC power supply was used in place of a 3 V regulator for the input to the FES circuit, as a slightly more consistent supply could be achieved. A 3 V regulator could be used instead for portability. Ethical approval for testing of this device was granted by the University of Canterbury Human Ethics Committee. Sections "Exoskeleton Construction" and "Sensing" describe the construction of the exoskeleton and sensing system respectively.
Exoskeleton construction
The powered exoskeleton was arbitrarily selected to be designed for the right arm, and a second, non-powered, smaller exoskeleton was designed for the left arm to be used as the control input. Construction of both exoskeletons was based around Actobotics components sourced from Sparkfun [20]. The powered exoskeleton is shown in Fig. 10 and the unpowered exoskeleton in Fig. 11.
The Powered Exoskeleton (Right Arm)
Unpowered Exoskeleton (Left Arm)
A swivel hub allows free movement of the elbow joint for both exoskeletons. A pulley is affixed to one side of the swivel hub on the powered exoskeleton. A metal cable is wound around this pulley and runs up the inside of a protective plastic tube. At the other end of the plastic tube the cable is wound around a second pulley which is affixed to the shaft of the motor situated on the shoulder of the user.
To attach the motor to the user, a soft shoulder brace is worn. The exoskeleton is manually lined up with the user's arm and the motor is placed gently on the shoulder. Velcro straps are used to hold the motor and exoskeleton in place. The FES sleeve is placed on the user's arm and electrode gel (Spectra 360) is applied prior to attachment of the exoskeleton. Correct placement of the electrodes is also checked and any adjustments are made prior to exoskeleton attachment. Figure 12 shows a user wearing the exoskeleton and FES sleeve.
A User Wearing the Hybrid Exoskeleton
The exoskeleton can be made shorter or longer for the shoulder to elbow section by unscrewing the upper metal rod and moving the screw up or down a hole. The entire attachment of the FES electrodes and exoskeleton to the user takes only 1–2 min and can be performed by the user themselves without the need for movement in the right arm. Despite the motor only weighing 180 g (and making up the bulk of the weight of the exoskeleton) the structure was still found to be heavy enough to cause mild discomfort during prolonged wearing (45 min) for a healthy subject. Future designs should consider methods to shift the weight of the motor to the centre of the back of the user and away from the shoulder.
To measure the angle of the arm, the shaft of a potentiometer was attached to the pulley at the elbow joint using a set screw hub (Actobotics) and the body of the potentiometer was soldered to a small Vero board and affixed to a rod. The rod is attached to the upper portion of the exoskeleton as shown in (Fig. 13). Thus by measuring the potentiometer voltage the angle can be calculated without regular recalibration. The same method is used to measure the angle of the unpowered left-arm exoskeleton.
Elbow Joint with Potentiometer
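Reading the angle then reduces to a linear mapping from the potentiometer voltage. In the sketch below the ADC resolution matches a standard Arduino, but the volts-per-degree scale and the zero-angle offset are hypothetical calibration constants:

```python
ADC_COUNTS = 1023   # 10-bit Arduino ADC full-scale reading
V_REF = 5.0         # ADC reference voltage (V)

def elbow_angle(adc_reading, volts_per_degree=0.02, v_at_zero=1.0):
    """Convert a raw potentiometer reading to an elbow angle in degrees,
    where 0 degrees is the arm hanging straight down."""
    v = adc_reading / ADC_COUNTS * V_REF   # reading -> potentiometer voltage
    return (v - v_at_zero) / volts_per_degree
```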
To measure the force applied to and by the exoskeleton arm, a 10 kg straight bar load cell is used to connect the elbow section of the exoskeleton to the wrist section (Fig. 14). An HX711 load cell amplifier was used to interface between the load cell and the Arduino. The free body diagram of the exoskeleton is shown in Fig. 15.
Connection of the Load Cell
Free Body Diagram of the Exoskeleton Arm
The interaction forces between the user's forearm and the exoskeleton all occur at the wrist strap point (x), which is located 0.2 m from the elbow pivot joint. The support forces from the motor are applied very close to the elbow pivot joint. The total torque applied at the exoskeleton elbow is due to the torque produced by the interaction forces between the user and exoskeleton, plus the support torque. The load sensor is located 0.13 m from the elbow pivot joint and measures the force perpendicular to the exoskeleton at this point. Thus the total torque at the exoskeleton elbow joint is the product of the force measured by the load sensor and the distance to the load sensor (0.13 m), as described by Eqs. 3 to 5. The force measured by the load sensor can be calculated from the load sensor reading using Eq. 6.
$$ {\tau}_{tot}={\tau}_{support}+{\tau}_{user} $$
$$ {\tau}_{user}=x\left({F}_{muscle}- mg\cos \left({\uptheta}_{\tau}\right)\right) $$
$$ {\tau}_{tot}=L{F}_{meas} $$
$$ {F}_{meas}= MR+c $$
τtot is the total torque around the exoskeleton elbow joint (anticlockwise).
τsupport is torque produced by the motor and cable system (anticlockwise).
τuser is the torque produced by the user, includes volitional movement and effects from gravity (anticlockwise).
x is the distance from the elbow joint to the wrist strap (0.2 m).
Fmuscle is the force produced by volitional movement from the user measured in Newtons (perpendicular to the arm and upwards).
m is the combined mass of the arm and exoskeleton in kilograms (at the wrist strap).
g is acceleration due to gravity (9.8 ms-2) (perpendicular to Earth and downwards)
θ is the elbow angle in degrees. This is the angle which the arm makes measured from 0° (when the arm hangs straight and perpendicular to Earth) in an anticlockwise direction.
θτ is (90 – θ) for arm angles below 90° and (θ – 90) for arm angles above 90°.
L is the distance from the elbow joint to the load sensor (0.13 m).
Fmeas is the force measured by the load sensor in Newtons (perpendicular to the arm and upwards).
M is the gradient
c is the offset.
R is the output of the load sensor amplifier (volts).
Thus the measured total torque is given by Eq. 7.
$$ {\tau}_{tot}=L\left( MR+c\right) $$
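Eqs. 3 to 7 translate directly into code. The sketch below uses the distances and gravity value from the text, with a relaxed arm (F_muscle = 0) as the gravity-only case:

```python
import math

L = 0.13   # elbow joint to load sensor (m)
X = 0.20   # elbow joint to wrist strap (m)
G = 9.8    # acceleration due to gravity (m/s^2)

def total_torque(R, M, c):
    """Eqs. 6 and 7: total elbow torque from the amplifier output R,
    using the calibrated gradient M and offset c."""
    return L * (M * R + c)

def user_torque(mass, theta_deg, f_muscle=0.0):
    """Eq. 4: torque produced by the user, with the arm and exoskeleton
    treated as a point mass at the wrist strap."""
    theta_tau = 90.0 - theta_deg if theta_deg < 90.0 else theta_deg - 90.0
    return X * (f_muscle - mass * G * math.cos(math.radians(theta_tau)))
```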
The exoskeleton arm was rotated using the motor and cable system and measurements of the load sensor were taken at 14 different angles. This was repeated with three different weights attached to the end of the exoskeleton arm: 100 g, 200 g, and 500 g. Using the measurements and expected torque produced by each weight the gradient and offset in Eq. 6 were calculated. The mass of the empty exoskeleton arm was calculated to be 0.1026 kg.
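The gradient and offset fit can be reproduced with an ordinary least-squares line through (reading, known force) pairs; the four data points below are made-up numbers lying on F = 10R − 0.5, purely for illustration:

```python
def fit_linear(readings, forces):
    """Ordinary least squares for Eq. 6, F = M*R + c."""
    n = len(readings)
    mean_r = sum(readings) / n
    mean_f = sum(forces) / n
    num = sum((r - mean_r) * (f - mean_f) for r, f in zip(readings, forces))
    den = sum((r - mean_r) ** 2 for r in readings)
    M = num / den              # gradient
    c = mean_f - M * mean_r    # offset
    return M, c

# Illustrative calibration data only (exactly on the line F = 10R - 0.5).
M, c = fit_linear([0.1, 0.2, 0.3, 0.4], [0.5, 1.5, 2.5, 3.5])
```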
Using these values, the expected torque produced by the mass of the exoskeleton can be calculated for a given angle using Eq. 4. This torque is defined as the set point for the given angle. If the torque calculated from the load sensor reading (Eq. 7) is greater than the expected torque (Eq. 4) (i.e. more anticlockwise and upwards), the difference is due to the torque produced by the subject (either volitional or FES-induced); furthermore, the subject is then supporting at least some of the weight of the exoskeleton arm in addition to their own. If the measured torque is equal to the set point, the subject is supporting their own arm weight but not the weight of the exoskeleton; this point is called both the set point and the 0% support point. If the measured torque is less than the set point, the exoskeleton is providing support for the subject.
In order to calculate the percentage of support which the exoskeleton is providing, knowledge of the subject's arm weight is required. During setup the system rotates the exoskeleton arm to 90° while the subject relaxes their arm. Several measurements are taken by the software and the results are averaged. From these measurements the arm mass of the subject can be calculated and the 100% support torque point is defined. Any torque measurement between this point and the set point means that the exoskeleton is providing a certain percentage of support; for example, halfway between these two values would be 50% support.
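The percentage of support then falls out as a linear interpolation between the two calibrated torque points; the clamping of readings outside the calibrated range is an assumption:

```python
def percent_support(tau_meas, tau_set, tau_full):
    """tau_set is the 0% support torque (set point), tau_full the 100%
    support torque (arm fully held); tau_meas is the current measurement."""
    if tau_full == tau_set:
        return 0.0  # degenerate calibration: avoid dividing by zero
    s = (tau_meas - tau_set) / (tau_full - tau_set) * 100.0
    return max(0.0, min(100.0, s))  # clamp readings outside the calibrated range
```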
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
c :
Offset of the load sensor reading
f :
Frequency in Hertz
FES:
Functional Electrical Stimulation
ISE:
Integral of the Square of the Error
k :
Overall gain
M :
Gradient of the load sensor reading
PD:
Proportional-Derivative
pw :
Pulse-width in microseconds (full pulse length = positive + negative portions)
R :
Output of the load sensor amplifier (volts)
RMSE:
Root Mean Square Error
v :
Voltage in volts
θ:
Elbow angle in degrees
θτ:
(90 – θ) for arm angles below 90° and (θ – 90) for arm angles above 90°
F meas :
Force measured by the load sensor in Newtons (perpendicular to the arm and upwards)
F muscle :
Force produced by volitional movement from the user measured in Newtons (perpendicular to the arm and upwards)
L :
Distance from the elbow joint to the load sensor (0.13 m)
\( \overset{\sim }{e_{\theta }} \) :
Median angle error
\( {e}_{\theta } \) :
Angle error
fg :
Frequency gain (equal to 0.22)
g :
Acceleration due to gravity (9.8 ms-2) (perpendicular to Earth and downwards)
m :
Combined mass of the arm and exoskeleton in kilograms (at the wrist strap)
pwg :
Pulse-width gain (equal to 0.15)
vg :
Voltage gain (equal to 14)
x :
Distance from the elbow joint to the wrist strap (0.2 m)
τ support :
Torque produced by the motor and cable system (anticlockwise direction)
τ tot :
Total torque around the exoskeleton elbow joint (anticlockwise direction)
τ user :
Torque produced by the user, including volitional muscular movement and the effects from gravity (anticlockwise direction)
The authors would like to thank technical staff at the University of Canterbury Mechanical Engineering Department for their help with circuit construction and component sourcing.
This research was supported and funded by the University of Canterbury through the College of Engineering Publishing Scholarship. The role of the University of Canterbury in this study was as reviewer of the early drafts of the paper.
Mechanical Engineering, University of Canterbury, 20 Kirkwood Ave, Upper Riccarton, Christchurch, 8041, New Zealand
Ashley Stewart, Christopher Pretty & Xiaoqi Chen
All authors contributed to the study concept and design. The construction of the exoskeleton, acquisition of data, analysis and interpretation of data, and writing of the draft of the manuscript was performed by AS. Critical revision of the manuscript was performed by XC and CP. All authors read and approved the final manuscript.
Correspondence to Ashley Stewart.
Ethical approval for testing described in this work was granted by the University of Canterbury Human Ethics Committee. Informed and written consent was obtained from all participants.
Consent for publication was obtained from all individuals whose data are included in this work.
Stewart, A., Pretty, C. & Chen, X. A portable assist-as-need upper-extremity hybrid exoskeleton for FES-induced muscle fatigue reduction in stroke rehabilitation. BMC biomed eng 1, 30 (2019). https://doi.org/10.1186/s42490-019-0028-6
Hybrid exoskeletons
Medical technologies, robotics and rehabilitation engineering
Tomasz Brzezinski – "Toward Synthetic Non-Commutative Geometry"
Posted by Jeffrey Morton under algebra, c*-algebras, category theory, geometry, noncommutative geometry, talks
So there's a lot of preparation going on for the workshop HGTQGR coming up next week at IST, and the program(me) is much more developed – many of the talks are now listed, though the schedule has yet to be finalized. This week we'll be having a "pre-school" to introduce the local mathematicians to some of the physics viewpoints that will be discussed at the workshop – Aleksandar Mikovic will be introducing Quantum Gravity (from the point of view of the loop/spin-foam approach), and Sebastian Guttenberg will be giving a mathematician's introduction to String theory.
These are by no means the only approaches physicists have taken to the problem of finding a theory that incorporates both General Relativity and Quantum Field Theory. They are, however, two approaches where lots of work has been done, and which appear to be amenable to using the mathematical tools of (higher) category theory which we're going to be talking about at the workshop. These are "higher gauge theory", which very roughly is the analog of gauge theory (which includes both GR and QFT) using categorical groups, and TQFT, which is a very simple type of quantum field theory that has a natural description in terms of categories, which can be generalized to higher categories.
I'll probably take a few posts after the workshop to write up these, and the many other talks and mini-courses we'll be having, but right now, I'd like to say a little bit about another talk we had here recently. Actually, the talk was in Porto, but several of us at IST in Lisbon attended by a videoconference. This was the first time I've seen this for a colloquium-style talk, though I did once take a course in General Relativity from Eric Poisson that was split between U of Waterloo and U of Guelph. I thought it was a great idea then, and it worked quite well this time, too. This is the way of the future – and unfortunately it probably will be for some time to come…
Anyway, the talk in question was by Tomasz Brzezinski, about "Synthetic Non-Commutative Geometry" (link points to the slides). The point here is to take two different approaches to extending differential geometry (DG) and combine the two insights. The "Synthetic" part refers to synthetic differential geometry (SDG), which is a program for doing DG in a general topos. One aspect of this is that in a topos where the Law of the Excluded Middle doesn't apply, it's possible for the real-numbers object to have infinitesimals: that is, elements which are smaller than any positive element, but bigger than zero. This lets one take things which have to be treated in a roundabout way in ordinary DG, like \( dx \), and take them at face value – as an infinitesimal change in \( x \). It also means doing geometry in a completely constructive way.
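One standard way this is made precise – background on SDG rather than anything from the talk itself – is the Kock–Lawvere axiom, which governs the object of nilpotent infinitesimals \( D = \{ d \in R : d^2 = 0 \} \): every map out of \( D \) is affine,

```latex
% Kock–Lawvere axiom: every f : D -> R is determined by a value and a slope
\forall f : D \to R \;\; \exists!\, b \in R \;\; \forall d \in D :
\qquad f(d) \;=\; f(0) + d \cdot b
```

so the unique \( b \) deserves the name \( f'(0) \), and expressions like \( dx \) can be manipulated literally as infinitesimal quantities.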
However, these aspects aren't so important here. The important fact about it here is that it's based on taking a theory that was originally defined in terms of sets, or topological spaces – that is, in the toposes \( \mathbf{Set} \), or \( \mathbf{Top} \) – and transplanting it to another category. This is because Brzezinski's goal was to do something similar for a different extension of DG, namely non-commutative geometry (NCG). This is a generalisation of DG which is based on the equivalence between the category of commutative \( C^* \)-algebras (and algebra maps, read "backward" as morphisms), and that of locally compact Hausdorff spaces (which, for objects, equates a space \( X \) with the algebra of continuous functions on it, and a commutative algebra \( A \) with its spectrum, the space of maximal ideals). The generalization of NCG is to take structures defined for \( \mathbf{Top} \) that create DG, and make similar definitions in the category of not-necessarily-commutative \( C^* \)-algebras.
This category of \( C^* \)-algebras is the one which plays the role of the topos \( \mathbf{Top} \). It isn't a topos, though: it's some sort of monoidal category. And this is what "synthetic NCG" is about: taking the definitions used in NCG and reproducing them in a generic monoidal category (to be clear, a braided monoidal category).
The way he illustrated this is by explaining what a principal bundle would be in such a generic category.
To begin with, we can start by giving a slightly nonstandard definition of the concept in ordinary DG: a principal \( G \)-bundle is a manifold \( P \) with a free action of a (compact Lie) group \( G \) on it. The point is that this always looks like a "base space" manifold \( B \), with a projection \( \pi : P \rightarrow B \) so that the fibre at each point of \( B \) looks like \( G \). This amounts to saying that \( B \) is a coequalizer:

\( G \times P \rightrightarrows P \xrightarrow{\;\pi\;} B \)

where the maps from \( G \times P \) to \( P \) are (a) the action, and (b) the projection onto \( P \). (Being a coequalizer means that \( \pi \) makes this diagram commute – it has the same composite with both maps – and any other map out of \( P \) that does the same factors uniquely through \( B \).) Another equivalent way to say this is that since \( G \times P \) has two maps into \( P \), it has a map into the pullback \( P \times_B P \) (the pullback of two copies of \( \pi \)), and the claim is that this map \( G \times P \rightarrow P \times_B P \) is actually an isomorphism.
The main points here are that (a) we take this definition in terms of diagrams and abstract it out of the category of manifolds, and (b) when we do so, in general the products will be tensor products.
In particular, this means we need to have a general definition of a group object in any braided monoidal category (to know what \( G \) is supposed to be like). We reproduce the usual definition of a group object, so that \( G \) must come equipped with a "multiplication" map \( m : G \otimes G \rightarrow G \), an "inverse" map \( \iota : G \rightarrow G \) and a "unit" map \( \eta : I \rightarrow G \), where \( I \) is the monoidal unit (which takes the role of the terminal object in a topos like \( \mathbf{Set} \), the unit for \( \times \)). These need to satisfy the usual properties, such as the monoid property for multiplication:

\( m \circ (m \otimes id) = m \circ (id \otimes m) \)

(usually given as a diagram, but I'm being lazy).
The big "however" is this: in \( \mathbf{Set} \) or \( \mathbf{Top} \), any object is always a comonoid in a canonical way, and we use this implicitly in defining some of the properties we need. In particular, there's always the diagonal map \( \Delta : X \rightarrow X \times X \), which satisfies the dual of the monoid property:

\( (\Delta \times id) \circ \Delta = (id \times \Delta) \circ \Delta \)
There's also a unique counit \( \epsilon : X \rightarrow 1 \), the map into the terminal object, which makes \( X \) a counital comonoid automatically. But in a general braided monoidal category, we have to impose as a condition that our group object also be equipped with maps \( \Delta : G \rightarrow G \otimes G \) and \( \epsilon : G \rightarrow I \) making it a counital comonoid. We need this property to even be able to make sense of the inverse axiom (which this time I'll write as an equation rather than a diagram):

\( m \circ (id \otimes \iota) \circ \Delta = \eta \circ \epsilon = m \circ (\iota \otimes id) \circ \Delta \)
Making sense of such axioms uses not only \( \Delta \) but also the braiding map \( \sigma \) (part of the structure of the braided monoidal category which, in \( \mathbf{Set} \) or \( \mathbf{Top} \), is just the "switch" map). Now, in fact, since any object in \( \mathbf{Set} \) or \( \mathbf{Top} \) is automatically a comonoid, we'll require that this structure be given for anything we look at: the analog of spaces (like \( P \) above), or our group object \( G \). For the group object, we also must, in general, require something which comes for free in the topos world and therefore generally isn't mentioned in the definition of a group. Namely, the comonoid and monoid aspects of \( G \) must get along. (This comes for free in a topos essentially because the comonoid structure is given canonically for all objects.) This means:

\( \Delta \circ m = (m \otimes m) \circ (id \otimes \sigma \otimes id) \circ (\Delta \otimes \Delta) \)

For a group in \( \mathbf{Set} \) or \( \mathbf{Top} \), this essentially just says that the two ways we can go from \( G \times G \) to \( G \times G \) (duplicate both inputs, swap the middle pair, then multiply pairwise; or on the other hand multiply then duplicate) are the same.
All these considerations about how honest-to-goodness groups are secretly also comonoids do explain why corresponding structures in noncommutative geometry seem to have more elaborate definitions: they have to explicitly say things that come for free in a topos. So, for instance, a group object in the above sense in the braided monoidal category \( \mathbf{Vect} \) is a Hopf algebra. This is a nice canonical choice of category. Another is the opposite of the category of \( C^* \)-algebras – this is a standard choice in NCG, since spaces are supposed to be algebras – where each object would be given the comonoid structure we demanded.
So now once we know all this, we can reproduce the diagrammatic definition of a principal \( G \)-bundle above: just replace the product with the monoidal operation \( \otimes \), the terminal object by \( I \), and so forth. The diagrams are understood to be diagrams of comonoids in our braided monoidal category. In particular, we have an action \( P \otimes G \rightarrow P \), which is compatible with the comonoid maps – so in \( \mathbf{Vect} \) we would say that a noncommutative principal \( G \)-bundle is a right-module coalgebra over the Hopf algebra \( G \). We can likewise take this (in a suitably abstract sense of "algebra" or "module") to be the definition in any braided monoidal category.

As before, the "base" arises as a (co)equalizer of the two maps \( P \otimes G \rightrightarrows P \) given by the action and by the projection built from the counit. The "freeness" condition for the action is likewise defined using a monoidal-category version of the pullback (fibre product) \( P \otimes_B P \): the induced map \( P \otimes G \rightarrow P \otimes_B P \) should be an isomorphism.
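In the Hopf-algebraic setting this freeness condition is usually packaged as the bijectivity of a canonical ("Galois") map. A standard way to write it – for a right \( H \)-comodule algebra \( P \) with coinvariants \( B \), in Sweedler notation; this is background from the Hopf–Galois literature rather than detail from the talk – is:

```latex
% Coinvariant subalgebra (the "base"):
B \;=\; P^{\mathrm{co}\,H} \;=\; \{\, p \in P \;:\; \rho(p) = p \otimes 1 \,\}

% Canonical Galois map, required to be bijective for a "principal bundle":
\beta : P \otimes_B P \longrightarrow P \otimes H,
\qquad \beta(p \otimes q) \;=\; p\, q_{(0)} \otimes q_{(1)}
```

Bijectivity of \( \beta \) is the algebraic shadow of the map \( G \times P \to P \times_B P \) being an isomorphism in the classical picture.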
This was as far as Brzezinski took the idea of synthetic NCG in this particular talk, but the basic idea seems quite nice. In SDG, one can define all sorts of differential geometric structures synthetically, that is, for a general topos: for example, Gonzalo Reyes has gone and defined the Einstein field equations synthetically. Presumably, a lot of what's done in NCG could also be done in this synthetic framework, and transplanted to other categories than the usual choices.
Brzezinski said he was mainly interested in the "usual" choices of category, \( \mathbf{Vect} \) and the opposite of the category of \( C^* \)-algebras – so for instance in \( \mathbf{Vect} \), a "principal \( G \)-bundle" is what's called a Hopf-Galois extension. Roger Picken did, however, ask an interesting question about other possible candidates for the category to work in. Given that one wants a braided monoidal category, a natural one to look at is the category whose morphisms are braids. This one, as a matter of fact, isn't quite enough: there's no multiplication braid \( G \otimes G \rightarrow G \), because this would be a braid with \( 2n \) strands in and \( n \) strands out – which is impossible. But some sort of category of tangles might make an interestingly abstract setting in which to see what NCG looks like. So far, this doesn't seem to have been done as far as I can see.
Starting out at IST; Quantales
Posted by Jeffrey Morton under c*-algebras, category theory, moduli spaces, noncommutative geometry, stacks, update
As I mentioned in my previous post, I've recently started out a new postdoc at IST – the Instituto Superior Tecnico in Lisbon, Portugal. Making the move from North America to Europe with my family was a lot of work – both before and after the move – involving lots of paperwork and shifting of heavy objects. But Lisbon is a good city, with lots of interesting things to do, and the maths department at IST is very large, with about a hundred faculty. Among those are quite a few people doing things that interest me.
The group that I am actually part of is coordinated by Roger Picken, and has a focus on things related to Topological Quantum Field Theory. There are a couple of postdocs and some graduate students here associated in some degree with the group and, elsewhere than IST, Aleksandar Mikovic and Joao Faria Martins. In the coming months there should be some activity going on in this group which I will get to talk about here, including a workshop which is still in development, so I'll hold off on that until there's an official announcement.
Quantales
I've also had a chance to talk a bit with Pedro Resende, mostly on the subject of quantales. This is something that I got interested in while at UWO, where there is a large contingent of people interested in category theory (mostly from the point of view of homotopy theory) as well as a good group in noncommutative geometry. Quantales were originally introduced by Chris Mulvey – I've been looking recently at a few papers in which he gives a nice account of the subject – here, here, and here.
The idea emerged, in part, as a way of combining two different approaches to generalising the idea of a space. One is the approach from topos theory, and more specifically, the generalisation of topological spaces to locales. This direction also has connections to logic – a topos is a good setting for intuitionistic, but nevertheless classical, logic, whereas quantales give an approach to quantum logics in a similar spirit.
The other direction in which they generalize space is the \( C^* \)-algebra approach used in noncommutative geometry. One motivation of quantales is to say that they simultaneously incorporate the generalizations made in both of these directions – so that both locales and \( C^* \)-algebras will give examples. In particular, a quantale is a kind of lattice, intended to have the same sort of relation to a noncommutative space as a locale has to an ordinary topological space. So to begin, I'll look at locales.
A locale is a lattice which formally resembles the lattice of open sets for such a space. A lattice is a partial order with operations \( \wedge \) ("meet") and \( \vee \) ("join"). These operations take the role of the intersection and union of open sets. So to say it formally resembles a lattice of open sets means that the lattice is closed under arbitrary joins and finite meets, and satisfies the infinite distributive law:

\( U \wedge \bigvee_i V_i = \bigvee_i (U \wedge V_i) \)

Lattices like this can be called either "frames" or "locales" – the only difference between these two categories is the direction of the arrows. A map of lattices is a function that preserves all the structure – order, meet, and join. This is a frame morphism, but it's also a morphism of locales in the opposite direction. That is, \( \mathbf{Loc} = \mathbf{Frm}^{op} \).
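As a toy check of these axioms (my own illustration, not from the post), the open sets of a finite discrete space – here the whole powerset of a three-element set – form a frame, with meet as intersection and join as union:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as frozensets (the discrete topology on xs)."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def join(sets):
    """Arbitrary join = union of a family of opens."""
    out = frozenset()
    for s in sets:
        out |= s
    return out

def meet(a, b):
    """Finite meet = intersection."""
    return a & b

opens = powerset([1, 2, 3])

# Frame distributive law: U ∧ (∨_i V_i) = ∨_i (U ∧ V_i).
U = frozenset([1, 2])
family = [frozenset([1]), frozenset([2, 3]), frozenset([3])]
assert meet(U, join(family)) == join([meet(U, V) for V in family])
```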
Another name for this sort of object is a "Heyting algebra". One of the great things about topos theory (of which this is a tiny starting point) is that it unifies topology and logic. So, the "internal logic" of a topos has a Heyting algebra (i.e. a locale) of truth values, where the meet and join take the place of the logical operators "and" and "or". The usual two-valued logic is the initial object in \( \mathbf{Frm} \), so while it is special, it isn't unique. One vital fact here is that any topological space (via the lattice of open sets) produces a locale, and the locale is enough to identify the space – so \( \mathbf{Top} \rightarrow \mathbf{Loc} \) is an embedding. (For convenience, I'm eliding over the fact that the spaces have to be "sober" – Hausdorff spaces, for example, are sober.) In terms of logic, we could imagine that the space is a "state space", and the truth values in the logic identify for which states a given proposition is true. There's nothing particularly exotic about this: "it is raining" is a statement whose truth is local, in that it depends on where and when you happen to look.
To see locales as a generalisation of spaces, it helps to note that the embedding above is full – if two locales come from topological spaces, there are no extra morphisms between them in \( \mathbf{Loc} \) that don't come from continuous maps in \( \mathbf{Top} \). So the category of locales makes the category of topological spaces bigger only by adding more objects – not inventing new morphisms. The analogous noncommutative statement turns out not to be true for quantales, which is a little red-flag warning which Pedro Resende pointed out to me.
What would this statement be? Well, the noncommutative analogue of the idea of a topological space comes from another embedding of categories. To start with, there is an equivalence of categories: the category of locally compact, Hausdorff, topological spaces is (up to equivalence) the opposite of the category of commutative \( C^* \)-algebras. So one simply takes the larger category of all \( C^* \)-algebras (or rather, its opposite) as the category of "noncommutative spaces", which includes the commutative ones – the original locally compact Hausdorff spaces. The correspondence between an algebra and a space is given by taking the algebra of continuous functions on the space (vanishing at infinity, in the locally compact case).
So what is a quantale? It's a lattice which is formally similar to the lattice of subspaces in some \( C^* \)-algebra. Special elements – "right", "left", or "two-sided" elements – then resemble those subspaces that happen to be ideals. Some intuition comes from thinking about where the two generalizations coincide – a (locally compact) topological space. There is a lattice of open sets, of course. In the algebra of continuous functions, each open set \( U \) determines an ideal – namely, the subspace of functions which vanish outside \( U \). When such an ideal is norm-closed, it will correspond to an open set (it's easy to see that continuous functions which can be approximated by those vanishing outside an open set will also do so – if the set is not open, this isn't the case).
So the definition of a quantale looks much like that for a locale, except that the meet operation is replaced by an associative product, usually written \( \& \). Note that unlike the meet, this isn't assumed to be commutative – this is the point where the generalization happens. So in particular, any locale gives a quantale with \( \& = \wedge \). So does any \( C^* \)-algebra, in the form of its lattice of ideals. But there are others which don't show up in either of these two ways, so one might hope to say this is a nice all-encompassing generalisation of the idea of space.
Now, as I said, there was a bit of a warning that comes attached to this hope. This is that, although there is an embedding of the category of \( C^* \)-algebras into the category of quantales, it isn't full. That is, not only does one get new objects, one gets new morphisms between old objects. So, given algebras \( A \) and \( B \), which we think of as noncommutative spaces, and a map of algebras between them, we get a morphism between the associated quantales – a lattice map that preserves the operations. However, unlike what happened with locales, there are quantale morphisms that don't correspond to algebra maps. Even worse, this is still true even in the case where the algebras are commutative, and just come from locally compact Hausdorff spaces: the associated quantales still may have extra morphisms that don't come from continuous functions.
There seem to be three possible attitudes to this situation. First, maybe this is just the wrong approach to generalising spaces altogether, and the hints in its favour are simply misleading. Second, maybe quantales are absolutely the right generalisation of space, and these new morphisms are telling us something profound and interesting. The third attitude, which Pedro mentioned when pointing out this problem to me, seems most likely, and goes as follows. There is something special that happens with \( C^* \)-algebras, where the analytic structure of the norm makes the algebras more rigid than one might expect. In algebraic geometry, one can take a space (algebraic variety or scheme) and consider its algebra of global functions. To make sure that an algebra map corresponds to a map of schemes, though, one really needs to make sure that it actually respects the whole structure sheaf for the space – which describes local functions. When passing from a topological space to a \( C^* \)-algebra, there is a norm structure that comes into play, which is rigid enough that all algebra morphisms will automatically do this – as I said above, the structure of ideals of the algebra tells you all about the open sets. So the third option is to say that a quantale in itself doesn't quite have enough information, and one needs some extra data, something like the structure sheaf for a scheme. This would then pick out which are the "good" morphisms between two quantales – namely, the ones that preserve this extra data. What, precisely, this data ought to be isn't so clear, though, at least to me.
So there are some complications to treating a quantale as a space. One further point, which may or may not go anywhere, is that this type of lattice doesn't quite get along with quantum logic in quite the same way that locales get along with (intuitionistic) classical logic (though it does have connections to linear logic).
In particular, a quantale is a distributive lattice (though with the product \( \& \), rather than \( \wedge \), as the thing which distributes over \( \vee \)), whereas the "propositional lattice" in quantum logic need not be distributive. One can understand the failure of distributivity in terms of the uncertainty principle. Take a statement such as "the particle has momentum \( p \) and is either on the left or right of this barrier". Since position and momentum are conjugate variables, and the momentum has been determined completely, the position is completely uncertain, so we can't truthfully say either "the particle has momentum \( p \) and is on the left" or "the particle has momentum \( p \) and is on the right". Thus, the combined statement that either one or the other holds isn't true, even though that's exactly what the distributive law says: "P and (Q or S) = (P and Q) or (P and S)".
The lack of distributivity shows up in a standard example of a quantum logic. This is one where the (truth values of) propositions denote subspaces of a vector space \( V \). "And" (the meet operation \( \wedge \)) denotes the intersection of subspaces, while "or" (the join operation \( \vee \)) is the direct sum \( \oplus \). Consider two distinct lines through the origin of \( \mathbb{R}^2 \) – any other line in the plane they span has trivial intersection with either one, but lies entirely in the direct sum. So the lattice of subspaces is non-distributive. What the lattice for a quantum logic should be is orthocomplemented, which happens when \( V \) has an inner product – so for any subspace \( W \), there is an orthogonal complement \( W^{\perp} \).
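The \( \mathbb{R}^2 \) example can be checked mechanically. This small model of the subspace lattice is my own sketch, with an ad hoc encoding of subspaces (zero, a line given by a normalised direction, or the whole plane):

```python
from fractions import Fraction

# A subspace of R^2 is 'zero', 'all', or ('line', direction) with the
# direction normalised so the first nonzero coordinate is 1.

def line(a, b):
    a, b = Fraction(a), Fraction(b)
    if a != 0:
        return ('line', (1, b / a))
    return ('line', (0, 1))

def meet(u, v):
    """Intersection of subspaces (the quantum-logic 'and')."""
    if u == 'all': return v
    if v == 'all': return u
    if u == 'zero' or v == 'zero': return 'zero'
    return u if u == v else 'zero'   # distinct lines meet only at the origin

def join(u, v):
    """Span of the union, i.e. the direct sum (the quantum-logic 'or')."""
    if u == 'zero': return v
    if v == 'zero': return u
    if u == 'all' or v == 'all': return 'all'
    return u if u == v else 'all'    # two distinct lines span the whole plane

P, Q, S = line(1, 0), line(0, 1), line(1, 1)   # x-axis, y-axis, diagonal

lhs = meet(P, join(Q, S))            # P ∧ (Q ∨ S) = P ∧ R^2 = P
rhs = join(meet(P, Q), meet(P, S))   # (P ∧ Q) ∨ (P ∧ S) = 0 ∨ 0 = 0
assert lhs == P and rhs == 'zero'    # the distributive law fails
```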
Quantum logics are not very good from a logician's point of view, though – lacking distributivity, they also lack a sensible notion of implication, and hence there's no good idea of a proof system. Non-distributive lattices are fine (I just gave an example), and very much in keeping with the quantum-theoretic strategy of replacing configuration spaces with Hilbert spaces, and subsets with subspaces… but viewing them as logics is troublesome, so maybe that's the source of the problem.
Now, in a quantale, there may be a "meet" operation, separate from the product, which is non-distributive, but if the product is taken to be the analog of "and", then the corresponding logic is something different. In fact, the natural form of logic related to quantales is linear logic. This is also considered relevant to quantum mechanics and quantum computation, and as a logic is much more tractable. The internal semantics of certain monoidal categories – namely, star-autonomous ones (which have a nice notion of dual) – can be described in terms of linear logic (a fairly extensive explanation is found in this paper by Paul-André Melliès).
Part of the point in the connection seems to be resource-limitedness: in linear logic, one can only use a "resource" (which, in standard logic, might be a truth value, but in computation could be the state of some memory register) a limited number of times – often just once. This seems to be related to the noncommutativity of the product \( \& \) in a quantale. The way Pedro Resende described this to me is in terms of observations of a system. In the ordinary (commutative) logic of a locale, you can form statements such as "A is true, AND B is true, AND C is true" – whose truth value is locally defined. In a quantale, the product operation allows you to say something like "I observed A, AND THEN observed B, AND THEN observed C". Even leaving aside quantum physics, it's not hard to imagine that in a system which you observe by interacting with it, statements like this will be order-dependent. I still don't quite see exactly how these two frameworks are related, though.
On the other hand, the kind of orthocomplemented lattice that is formed by the subspaces of a Hilbert space CAN be recovered in (at least some) quantale settings. Pedro gave me a nice example: take a Hilbert space \( H \), and the collection of all projection operators on it. This is one of those orthocomplemented lattices again, since projections and subspaces are closely related. There's a quantale that can be formed out of endomorphisms of this lattice, where the product is composition. In any quantale, one can talk about the "right" elements (and the "left" elements, and "two-sided" elements), by analogy with right/left/two-sided ideals – these are elements which, if you take the product with the maximal element \( \top \), give a result less than or equal to what you started with: \( a \,\&\, \top \le a \) means \( a \) is a right element. The right elements of the quantale I just mentioned happen to form a lattice which is just isomorphic to the projection lattice we started with.
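The "right element" condition is easy to experiment with in a small concrete quantale. Here (my own toy example, not the one from the talk) is the quantale of binary relations on a two-element set, with union as join and relational composition as the product:

```python
from itertools import product as cartesian

X = (0, 1)
PAIRS = [(x, y) for x in X for y in X]

def all_relations():
    """All 16 binary relations on X, as frozensets of pairs."""
    return [frozenset(p for p, b in zip(PAIRS, bits) if b)
            for bits in cartesian([0, 1], repeat=len(PAIRS))]

def compose(a, b):
    """The quantale product a & b: relational composition."""
    return frozenset((x, z) for (x, y1) in a for (y2, z) in b if y1 == y2)

TOP = frozenset(PAIRS)
rels = all_relations()

# The product distributes over joins, as a quantale product must.
for a in rels:
    for b in rels:
        for c in rels:
            assert compose(a, b | c) == compose(a, b) | compose(a, c)

# Right elements: r & TOP <= r. Here they are exactly the relations each of
# whose rows is empty or full -- a copy of the powerset lattice of X.
right = [r for r in rels if compose(r, TOP) <= r]
assert len(right) == 4
```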
So in this case, the quantale, with its connections to linear logic, also has a sublattice which can be described in terms of quantum logic. This is a more complicated situation than the relation between locales and intuitionistic logic, but maybe this is the best sort of connection one can expect here.
In short, both in terms of logic and spaces, hoping quantales will be "just" a noncommutative variation on locales seems to set one up to be disappointed as things turn out to be more complex. On the other hand, this complexity may be revealing something interesting.
Coming soon: summaries of some talks I've attended here recently, including Ivan Smith on 3-manifolds, symplectic geometry, and Floer cohomology.
Recent Talks: Paul Baum on the Baum-Connes Conjecture
Posted by Jeffrey Morton under c*-algebras, cohomology, K-theory, noncommutative geometry, talks
It's the last week of classes here at UWO, and things have been wrapping up. There have also been a whole series of interesting talks, as both Doug Ravenel and Paul Baum have been visiting members of the department. Doug Ravenel gave a colloquium explaining work by himself and collaborators Mike Hopkins and Mike Hill, solving the "Kervaire Invariant One" problem – basically, showing that certain kinds of framed manifolds – and, closely related, certain kinds of maps between spectra – don't exist (namely, those where the Kervaire invariant is nonzero). This was an interesting and very engaging talk, but as a colloquium it necessarily had to skip past some of the subtleties of stable homotopy theory involved, and since my understanding of this subject is limited, I don't really know if I could do it justice.
In any case, I have my work cut out for me with what I am going to try to do (taking blame for any mistakes or imprecisions I introduce in here, BTW, since I may not be able to do this justice either). This is to discussing the first two of four talks which Paul Baum gave here last week, starting with an introduction to K-theory, and ending up with some discussion of the Baum-Connes Conjecture. This is a famous conjecture in noncommutative geometry which Baum and Alain Connes proposed in 1982 (and which Baum now seems to be fairly convinced is probably not true, though nobody knows a counterexample at the moment).
It's a statement about (locally compact, Hausdorff, topological) groups \( G \); it relates the K-theory of a \( C^* \)-algebra associated to \( G \) with the equivariant K-homology of a space associated to \( G \) (in fact, it asserts that a certain map between them, which always exists, is furthermore always an isomorphism). It implies a great many things about any case where it IS true, which includes a good many cases, such as when \( G \) is commutative, or a compact Lie group. But to backtrack, we need to define those terms:
K-Theory
The basic point of K-theory, which like a great many things began with Alexandre Grothendieck, is that it defines some invariants – which happen to be abelian groups – for various entities. There is a topological and an algebraic version, so the "entities" in question are, in the first case, topological spaces, and in the second, algebras (and more classically, algebraic varieties). Part of Paul Baum's point in his talk was to describe the underlying unity of these two – essentially, both correspond to particular kinds of algebras. Taking this point of view has the added advantage that it lets you generalize K-theory to "noncommutative spaces" quite easily. That is: the category of locally compact topological spaces is equivalent to the opposite of the category of commutative $C^*$-algebras – so taking the opposite of the category of ALL $C^*$-algebras gives a noncommutative generalization of "space". Defining K-theory in terms of algebras extends the invariant to this new sort of space, and also somewhat unifies topological and algebraic K-theory.
Classically, anyway, Grothendieck's definition for K-theory (adapted to the topological case by Atiyah and Hirzebruch) gives an abelian group $K^0(X)$ from a (topological or algebraic) space $X$, using the category of (respectively, topological or algebraic) vector bundles over $X$. The point is, from this category one naturally gets a set of isomorphism classes of bundles, with a commutative addition (namely, direct sum) – this is an abelian semigroup. One can turn any abelian semigroup (with or without zero) into an abelian group, by taking pairs $(a, b)$ – thought of as formal differences $a - b$ – and taking the quotient by the relation which makes $(a, b) \sim (c, d)$ when there is some $k$ with $a + d + k = c + b + k$. This is like taking "formal differences" (and any $(a, a)$ becomes zero, even if there was no zero originally). In fact, it does a little more, since if $a$ and $b$ are not equal, but become equal upon adding some $k$, they're forced to be equal (so an equivalence relation is being imposed on bundles as well as allowing formal inverses).
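This group completion is easy to play with concretely; here is a minimal sketch for the semigroup $(\mathbb{N}, +)$, whose completion is $\mathbb{Z}$ (all names here are just illustrative):

```python
def equivalent(pair1, pair2):
    # (a, b) ~ (c, d) iff a + d + k == c + b + k for some k; for a
    # cancellative semigroup like N this is just a + d == c + b.
    a, b = pair1
    c, d = pair2
    return a + d == c + b

def add(pair1, pair2):
    # Addition of formal differences is componentwise.
    return (pair1[0] + pair2[0], pair1[1] + pair2[1])

# "Formal difference" a - b is the class of (a, b); the completion of N is Z:
assert equivalent((5, 3), (2, 0))                # both represent 2
assert equivalent(add((1, 4), (2, 0)), (0, 1))   # (-3) + 2 = -1
assert equivalent((7, 7), (0, 0))                # any (a, a) is zero
```

The same recipe applied to the semigroup of isomorphism classes of vector bundles under direct sum is what produces the K-group.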
In fact, a definition equivalent to the one in terms of bundles can be given in terms of the coordinate ring of a variety $V$, or the ring $C(X)$ of continuous complex-valued functions on a (compact, Hausdorff) topological space $X$. Given a ring $R$, one defines $K_0(R)$ from the abelian semigroup of all idempotents (i.e. projections) in the rings of matrices $M_n(R)$, up to STABLE similarity. Two idempotent matrices $p$ and $q$ are equivalent if they become similar – that is, conjugate matrices – possibly after adjoining some zeros by the direct sum $p \oplus 0$. (In particular, this means we needn't assume $p$ and $q$ were the same size). Then $K_0(R)$ comes from this by the completion to a group as just described.
A class of idempotents (projections) in a matrix algebra over $C(X)$ is characterized by the image, up to similarity (so, really, the dimension). Since these are matrices over a ring of functions on a space, we're then secretly talking about vector bundles over that space. However, defining things in terms of the ring is what allows the generalization to noncommutative spaces (where there is no literal space, and the "coordinate ring" is no longer commutative, but this construction still makes sense).
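Here is the "stable similarity" relation in a tiny concrete case – a sketch, with matrices chosen purely for illustration:

```python
import numpy as np

# Two idempotents of different sizes, each of rank one:
p = np.array([[1.0]])              # 1x1 idempotent
q = np.array([[0.0, 0.0],
              [0.0, 1.0]])         # 2x2 idempotent

assert np.allclose(p @ p, p) and np.allclose(q @ q, q)

# Adjoin a zero to p so both live in the 2x2 matrices:
p_padded = np.block([[p, np.zeros((1, 1))],
                     [np.zeros((1, 1)), np.zeros((1, 1))]])

# They are now similar, via the permutation matrix swapping coordinates:
s = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(s @ p_padded @ np.linalg.inv(s), q)
```

So $p$ and $q$ represent the same class – consistent with the remark that the class is really determined by the dimension of the image.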
Now, there's quite a bit more to say about this – it was originally used to prove the Hirzebruch-Riemann-Roch theorem, which for nice projective varieties defines an invariant from the alternating sum of dimensions of some sheaf-cohomology groups – roughly, cohomology where we look at sections of the aforementioned vector bundles over the variety rather than functions on it. The point is that the actual cohomology dimensions depend sensitively on how you turn an underlying topological space into an algebraic variety, but the HRR invariant doesn't. Paul Baum also talked a bit about some work by J.F. Adams using K-theory to prove some results about vector fields on spheres.
For the Baum-Connes conjecture, we're looking at the K-theory of a certain $C^*$-algebra. In general, given such an algebra $A$, the (level-$j$) K-theory $K_j(A)$ can be defined to be the homotopy group $\pi_{j-1}(GL_\infty(A))$, where $GL_\infty(A)$ is the direct limit of the groups of invertible matrices over $A$, under the chain of inclusions $GL_n(A) \hookrightarrow GL_{n+1}(A)$ by direct sum with the 1-by-1 identity. This looks a little different from the algebraic case above, but they are closely connected – in particular, $K_0$ under this definition is just the same as the $K_0$ defined above (so the norm and involution on $A$ can be ignored for the level-0 K-theory of a $C^*$-algebra, though not for level-1).
You might also notice this appears to define $K_0$ in terms of negative-one-dimensional homotopy groups. One point of framing the definition this way is that it reveals that there are only two levels which matter – namely the even and the odd – so $K_0 \cong K_2 \cong K_4 \cong \cdots$ and $K_1 \cong K_3 \cong \cdots$, and this detail turns out not to matter. This is a result of Bott periodicity. Changing the level of homotopy groups amounts to the same thing as taking loop spaces. Specifically, the functor $\Omega$ that takes the space of loops of a space is right adjoint to the suspension functor $\Sigma$ – and since $\Sigma S^{n-1} = S^n$, this means that $\pi_n(X) \cong \pi_{n-1}(\Omega X)$. (Note that $\pi_n(X)$ is the group of homotopy classes of maps from the $n$-sphere into $X$). On the other hand, Bott periodicity says that $\Omega^2 GL_\infty(A) \simeq GL_\infty(A)$ – taking the loop space twice gives something homotopy equivalent to the original. So the tower of homotopy groups repeats every two dimensions. (So, in particular, one may as well reduce $j$ mod 2, and just find $K_j$ for $j = 0, 1$).
Now, to get the other side of the map in the Baum-Connes conjecture, we need a different part of K-theory.
K-Homology
Now, as with homology and cohomology, there are two related functors in the world of K-theory from spaces (of whatever kind) into abelian groups. The one described above is contravariant (for "spaces", not algebras – don't forget this duality!). Thus, maps $f : X \to Y$ give maps $K^0(Y) \to K^0(X)$, which is like cohomology. There is also a covariant functor (so $f$ gives a map $K_0(X) \to K_0(Y)$), appropriately called K-homology. If the K-theory is described in terms of vector bundles on $X$, K-homology – in the case of algebraic varieties, anyway – is about coherent sheaves of vector spaces on $X$. Concretely, you can think of these as resembling vector bundles, without a local triviality condition: one thinks, for instance, of the "skyscraper sheaf" which assigns a fixed vector space $V$ to any open set containing a given point $x$, and $0$ to any other – which is like a "bundle" having fibre $V$ at $x$, and $0$ everywhere else – of generalizations putting a given fibre on a fixed subvariety – and of course one can add such examples. This image explains why any vector bundle can be interpreted as a coherent sheaf – so there is a map $K^0(X) \to K_0(X)$. When the variety is not singular, this turns out to be an isomorphism (the groups one ends up constructing after all the identifications involved turn out the same, even though sheaves in general form a bigger category to begin with).
But to take K-homology into the topological setting, this description doesn't work anymore. There are different ways to describe it, but the one Baum chose – because it extends nicely to the NCG world where our "space" is a (not necessarily commutative) $C^*$-algebra $A$ – is in terms of generalized elliptic operators. This is to say, triples $(H, \psi, T)$, where $H$ is a (separable) Hilbert space, $\psi$ is a representation of $A$ in terms of bounded operators on $H$, and $T$ is some bounded operator on $H$ with some nice properties. Namely, $T$ is selfadjoint, and for any $a \in A$, both the commutator $[T, \psi(a)]$ and $(T^2 - 1)\psi(a)$ land in $\mathcal{K}(H)$, the ideal of compact operators. (This is the only norm-closed ideal in $\mathcal{B}(H)$, the bounded operators – the idea being that for this purpose, operators in this ideal are "almost" zero).
These are "abstract" elliptic operators – but many interesting examples are concrete ones – that is, $A = C(X)$ for some space $X$, and $T$ is describing some actual elliptic operator on functions on $X$. (He gave the case where $X$ is the circle, and $T$ is a version of the Dirac operator – normalized so all its nonzero eigenvalues are $\pm 1$ – then we'd be doing K-homology for the circle.)
Then there's a notion of homotopy between these operators (which I'll elide), and the collection of these things up to homotopy forms an abelian group, which is called $K^1(A)$. This is the ODD case – that is, there's a whole tower of these groups, but due to Bott periodicity they repeat with period 2, so we only need to give $K^0$ and $K^1$. The definition for $K^0(A)$ is similar to the one for $K^1(A)$, except that we drop the "self-adjoint" condition on $T$, which necessitates expanding the other two conditions – there's a commutator condition for both $T$ and $T^*$, and the condition on $T^2 - 1$ becomes two conditions, for $TT^* - 1$ and $T^*T - 1$. Now, all these should be seen as the K-homology groups of spaces (the sub/super script is denoting co/contra-variance).
Now, for the Baum-Connes conjecture, which is about groups, one actually needs an equivariant version of all this – that is, we want to deal with categories of $G$-spaces (i.e. spaces with a $G$-action, and maps compatible with the $G$-action). This generalizes to noncommutative spaces perfectly well – there are $G$-$C^*$-algebras with suitable abstract elliptic operators (one needs a unitary representation of $G$ on the Hilbert space in the triple to define the compatibility – given by a conjugation action), $G$-homotopies, and so forth, and then there's an equivariant K-homology group $K^G_j(X)$ for a $G$-space $X$. (Actually, for these purposes, one cares about proper $G$-actions – ones where the stabilizers and the quotient space are suitably nice).
Baum-Connes Conjecture
Now, suppose we have a (locally compact, Hausdorff) group $G$. The Baum-Connes conjecture asserts that a map $\mu$, which always exists, between two particular abelian groups found from K-theory, is always an isomorphism. In fact, this is supposed to be true for the whole tower of groups, but by Bott periodicity, we only need the even and the odd case. For simplicity, let's just think about one of $j = 0, 1$ at a time.
So then the first abelian group associated to $G$ comes from the equivariant K-homology for $G$-spaces. In particular, there is a classifying space $\underline{E}G$ – this is the terminal object in a category of ("proper") $G$-spaces (that is, any other proper $G$-space has a $G$-map into $\underline{E}G$). The group we want is the equivariant K-homology of this space: $K^G_j(\underline{E}G)$. Since $\underline{E}G$ is a terminal object among proper $G$-spaces, and equivariant K-homology is covariant, it makes sense that this group is a limit over $G$-spaces (with some caveats), so another way to define it is $K^G_j(\underline{E}G) = \varinjlim_X K^G_j(X)$, where the limit is over all ($G$-compact) proper $G$-spaces $X$. Now, being defined in this abstract way makes this a tricky thing to deal with computationally (which is presumably one reason the conjecture has resisted proof). Not so for the second group:
The second group is the K-theory of the reduced $C^*$-algebra $C^*_r(G)$ of the (locally compact, Hausdorff topological) group $G$. To get this, you take the compactly supported continuous functions on $G$, with the convolution product, and then, thinking of these as acting on $L^2(G)$ by convolution, take the completion in the algebra of all bounded operators on $L^2(G)$. This is still closed under the convolution product. Then one takes the K-theory $K_j(C^*_r(G))$ for this algebra at level $j$.
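As a toy finite analogue of this construction (for a finite group the completion step does nothing, so the convolution algebra itself is already the relevant algebra), one can watch functions on $\mathbb{Z}/3$ act on $\ell^2(\mathbb{Z}/3)$ by convolution – the operators are just circulant matrices. All names here are illustrative:

```python
import numpy as np

# Convolution on Z/3: (f * g)(k) = sum_j f(j) g((k - j) mod 3)
def convolve(f, g):
    n = len(f)
    return [sum(f[j] * g[(k - j) % n] for j in range(n)) for k in range(n)]

# f acts on l^2(Z/3) as the circulant matrix with entries f((k - j) mod 3):
def as_operator(f):
    n = len(f)
    return np.array([[f[(k - j) % n] for j in range(n)] for k in range(n)])

f, g = [1.0, 2.0, 0.0], [0.0, 1.0, 3.0]
# Composition of the operators matches the convolution product:
assert np.allclose(as_operator(f) @ as_operator(g), as_operator(convolve(f, g)))
```

The point of the completion in the infinite case is just to make this correspondence between functions and operators land inside a genuine $C^*$-algebra.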
So then there is always a particular map $\mu : K^G_j(\underline{E}G) \to K_j(C^*_r(G))$, which is defined in terms of index theory. The conjecture is that this is always an isomorphism (which, if true, would make the equivariant K-homology much more tractable). There aren't any known counterexamples, and in fact this is known to be true for all finite groups, and compact Lie groups – but for infinite discrete groups, there's no proof known. Indeed, it's not even known whether it's true for some specific, not very complicated groups, notably $SL(3, \mathbb{Z})$ – the 3-by-3 integer matrices of determinant 1.
In fact, Paul Baum seemed to be pretty confident that the conjecture is wrong (that there is a counterexample $G$) – essentially because it implies so many things (the Kadison-Kaplansky conjecture, that groups with no torsion have group rings with no nontrivial idempotents; the Novikov conjecture, that certain manifold invariants coming from the fundamental group are homotopy invariants; and many more) that it would be too good to be true. However, it does imply all these things about each particular group it holds for.
Now, I've not learned much about K-theory in the past, but Paul Baum's talks clarified a lot of things about it for me. One thing I realized is that some invariants I've thought more about, in the context of Extended TQFT – which do have to do with equivariant coherent sheaves of vector spaces – are nevertheless not the same invariants as in K-theory (at least in general). I've been asked this question several times, and on my limited understanding, I thought it was true – for finite groups, they're closely related (the 2-vector spaces that appear in ETQFT are abelian categories, but you can easily get abelian groups out of them, and it looks to me like they're the K-homology groups). But in the topological case, K-theory can't readily be described in these terms, and furthermore the ETQFT invariants don't seem to have all the identifications you find in K-theory – so it seems in general they're not the same, though there are some concepts in common. But it does inspire me to learn more about K-theory.
Coming up: more reporting on talks from our seminar on Stacks and Groupoids, by Tom Prince and Jose Malagon-Lopez, who were talking about stacks in terms of homotopical algebra and category theory.
Recent Talk: Enxin Wu on Diffeological Bundles and the Irrational Torus
Posted by Jeffrey Morton under groupoids, homotopy, noncommutative geometry, smooth spaces, talks
It's been a while since I posted here, partly because I was working on getting this paper ready for submission. Since I wrote about its subject in my previous post, about Derek Wise's talk at Perimeter Institute, I'll let that stand for now. In the meantime, we've had a few talks in the seminar on stacks and groupoids. Tom Prince gave a couple of interesting talks about stacks from the point of view of simplicial sheaves, explaining how they can be seen as certain categories of objects satisfying descent. Since I only have handwritten notes on this talk, and I still haven't entirely digested it, I think I'll talk about that at the same time as discussing the upcoming talk about descent and related stuff by José Malagon-Lopez. For right now, I'll write about Enxin Wu's talk on diffeological bundles and the irrational torus. (DVI notes here) Some of the theory of diffeological spaces has been worked out by Souriau (originally) and then Patrick Iglesias-Zemmour. Some of the categorical properties he discussed are explained by Baez and Hoffnung (Enxin's notes give some references). Enxin and Dan Christensen have looked a bit at diffeological spaces in the context of homotopy theory and model categories.
Part of the motivation for this seminar was to look at how groupoids and some related entities, namely stacks, and algebra in the form of noncommutative geometry (although we didn't get as much on this as I'd hoped), can be treated as ways to expand the notion of "space". One reason for doing this is to handle certain kinds of moduli problems, but another – more directly related to the motivation for noncommutative geometry (NCG) – is to deal with certain quotients. The irrational torus is one of these, and under the name "noncommutative torus" is a standard example in NCG. A brief introduction to it by John Baez can be found here, and more detailed discussion is in, for example, Ch3, section 2.β of Connes' "Noncommutative Geometry", which describes how to find its cyclic cohomology (a noncommutative analog of cohomology of a space), which turns out to be 2-dimensional.
The point here should be to think of it as the quotient of a space by a group action. (Which gives a transformation groupoid, and from there a – noncommutative – groupoid $C^*$-algebra). The space is a torus $T^2$, and the group acting on it is $\mathbb{R}$, acting by translation parallel to a line with irrational slope. In particular, we can treat $T^2$ as a group with componentwise multiplication, and think of the irrational torus, given an irrational $\theta$, as the quotient $T^2_\theta$ of $T^2$ by the subgroup $\{(e^{2\pi i t}, e^{2\pi i \theta t}) : t \in \mathbb{R}\}$.
Now, this is quite well-defined as a set, but as a space it's quite horrible, even though both groups are quite nice Lie groups. In particular, the subgroup is dense in $T^2$ – or, thought of in terms of a group acting on the torus, the orbit of any given point is dense. So the quotient is not a manifold – in fact, it's quite hard to visualize. This illustrates the point that smooth manifolds are badly behaved with respect to quotients. In his talk, Enxin told us about another way to approach this problem by moving to the category of diffeological spaces. As I mentioned in a previous post, this is one of a number of attempts to expand the category of smooth manifolds, to get a category which has nice properties that the category of manifolds does not have, such as having quotient objects, mapping objects, and so on. Now, the category of topological spaces is such an example, but this loses all the information about which maps are smooth. The point is to find some intermediate generalization, which still carries information about geometry, not just topology.
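Just to make the density claim vivid, here is a quick numerical sketch, with $\theta = \sqrt{2}$ standing in for the irrational slope:

```python
import math

# The orbit of 0 under repeated translation by an irrational angle theta
# (mod 1) comes arbitrarily close to every point of the circle.
theta = math.sqrt(2)
orbit = sorted((n * theta) % 1.0 for n in range(10000))

# Largest gap between consecutive orbit points (including the wraparound):
gaps = [b - a for a, b in zip(orbit, orbit[1:])]
gaps.append(orbit[0] + 1.0 - orbit[-1])
assert max(gaps) < 0.01  # numerically, the orbit fills up [0, 1)
```

The same picture, run along a line of slope $\theta$ in the torus, is why every orbit of the action is dense in $T^2$.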
A diffeological space can be defined as a concrete (i.e. $\mathbf{Set}$-valued) sheaf on the site whose objects are open subsets of $\mathbb{R}^n$ (for all $n$) and whose morphisms are smooth maps, though this is sort of an abstract way to define a space. The point of it, however, is that this site gives a model of all the maps we want to call "smooth". Defining the category of diffeological spaces in terms of sheaves on sites helps to ensure it has nice categorical properties, but more intuitively, a smooth space is described by giving a set $X$, and defining all the smooth maps into the space from open subsets of the $\mathbb{R}^n$ (these are called plots, and the collection is a diffeology). This differs from a manifold, which is defined in terms of (ahem) an atlas of charts – which unlike plots are required to be local homeomorphisms into a topological space, which fit together in smooth ways. The smooth maps into $X$ also have to be compatible – which is what the condition of being a sheaf guarantees – but the point is that we no longer suppose $X$ locally looks just like $\mathbb{R}^n$, so it can include strange quotients like the irrational torus.
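Here's a toy rendering of the "set plus plots" description, with all names hypothetical – a real diffeology also has to satisfy the sheaf/compatibility axioms, which this sketch ignores:

```python
# Toy sketch: a "diffeological space" as a carrier set plus a rule saying
# which parametrizations from pieces of R^n count as smooth plots.

class DiffeologicalSpace:
    def __init__(self, carrier, is_plot):
        self.carrier = carrier
        self.is_plot = is_plot   # predicate on (parametrization, dimension n)

# The discrete diffeology: only (locally) constant parametrizations are plots.
def discrete_is_plot(phi, n):
    # Crude check on a sample grid; a real test would check local constancy.
    samples = [phi(tuple([t / 10.0] * n)) for t in range(11)]
    return all(s == samples[0] for s in samples)

X = DiffeologicalSpace({0, 1}, discrete_is_plot)
assert X.is_plot(lambda u: 0, 1)                            # constant: a plot
assert not X.is_plot(lambda u: 0 if u[0] < 0.5 else 1, 1)   # jump: not a plot
```

The interesting examples, of course, are diffeologies like the quotient one on the irrational torus, where the plots are the maps that lift locally to smooth maps into $T^2$.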
Now, the category of diffeological spaces has lots of good properties, some of which are listed in Enxin's notes. For instance, it has all limits and colimits, and is cartesian closed. What's more, there's a pair of adjoint functors between it and $\mathbf{Top}$ – so there's an "underlying topological space" for any diffeological space (a topology making all the plots continuous), and a free diffeology on any topological space (where any continuous map from an open subset of $\mathbb{R}^n$ is smooth). There's also a natural diffeology on any manifold (the one generated by taking all the charts to be plots).
The real point, though, is that a lot of standard geometric constructions that are made for manifolds also make sense for diffeological spaces, so they "support geometry". Some things which can be defined in the context of diffeological spaces include: dimension; tangent spaces; differential forms; cohomology; smooth homotopy groups.
Naturally, one can define a diffeological groupoid: this is just an internal groupoid in the category of diffeological spaces – there are diffeological spaces $G_0$ and $G_1$ of objects and morphisms (and of course a space of composable pairs, $G_1 \times_{G_0} G_1$, which, being a limit, is also a diffeological space), and the structure maps are all smooth. These are related to diffeological bundles (defined below) in that certain groupoids can be built from bundles. The resulting groupoids all have the property of being perfect, which roughly means that the map sending a morphism to its source and target is a subduction – i.e. it is onto, and the natural product diffeology on its codomain is also the minimal one making this map smooth.
In fact, we need this notion to even define diffeological bundles, which are particular kinds of surjective maps $\pi : E \to B$ of diffeological spaces. Specifically, one gets a groupoid whose objects are points of $B$, and where the morphisms from $x$ to $y$ are just smooth maps from the fibre $\pi^{-1}(x)$ to the fibre $\pi^{-1}(y)$ (which, of course, are diffeological spaces because they are subsets of $E$). It's when this groupoid is perfect that one has a bundle.
The point here is that, unlike for manifolds, we don't have local charts, so we can't use the definition that a bundle is "locally trivializable", but we do have this analogous condition. In both cases, the condition implies that all the fibres are diffeomorphic to each other (in the relevant sense). Enxin also gave a few equivalent conditions, which amount to saying one gets locally trivial bundles over open subsets of $\mathbb{R}^n$ when pulling back along any plot.
So now we can at least point out that the irrational torus can be construed as a diffeological bundle – thinking of it as a quotient of a group by a subgroup, we can think of this as a bundle where $T^2$ is the total space, the base $T^2_\theta$ is the space of orbits, and the fibres are all diffeomorphic to the subgroup, a copy of $\mathbb{R}$.
The punchline of the talk is to use this as an example which illustrates the theorem that there is a diffeological version of the long exact sequence of homotopy groups of a bundle $F \to E \to B$:

$\cdots \to \pi_n(F) \to \pi_n(E) \to \pi_n(B) \to \pi_{n-1}(F) \to \cdots$
Using this long exact sequence, and the fact that the (diffeological) homotopy groups for manifolds (in this case, $T^2$ and $\mathbb{R}$) are the same as the usual ones, one can work out the homotopy groups for the base, which is the quotient $T^2_\theta$. Whereas, for topological spaces, since the subgroup is dense in $T^2$, the usual homotopy groups of the quotient are all zero, for diffeological spaces, we get a different answer. In particular, $\pi_1(T^2_\theta) \cong \mathbb{Z}^2$, a two-dimensional lattice.
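Spelling out the relevant stretch of the sequence for the bundle $\mathbb{R} \to T^2 \to T^2_\theta$ (using the standard form of the long exact sequence, which I'm assuming carries over verbatim to the diffeological version):

```latex
\cdots \to \pi_1(\mathbb{R}) \to \pi_1(T^2) \to \pi_1(T^2_\theta) \to \pi_0(\mathbb{R}) \to \cdots
\\[4pt]
0 \to \mathbb{Z}^2 \to \pi_1(T^2_\theta) \to 0
\quad\Longrightarrow\quad
\pi_1(T^2_\theta) \cong \mathbb{Z}^2 .
```

Since $\pi_1(\mathbb{R})$ and $\pi_0(\mathbb{R})$ both vanish, exactness pins down the fundamental group of the quotient immediately.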
It's interesting that this essentially agrees with what noncommutative geometry tells us about the quotient, while keeping some of our plain intuitions about "space" intact – that is, without moving whole-hog into (the opposite of) a category of noncommutative algebras. It would be interesting to know how far one can push this correspondence.
Recent Talks on Groupoids, Representations, and Toposes
Posted by Jeffrey Morton under category theory, groupoids, Morita equivalence, noncommutative geometry, Orbifolds, representation theory, talks
While I'd like to write up right away a description of the talk which Derek Wise gave recently at the Perimeter Institute (mostly about some work of mine which is preliminary to a collaboration we're working on), I think I'll take this next post as a chance to describe a couple of talks given in the seminar on stacks, groupoids, and algebras which I'm organizing, namely mine on representation theory of groupoids (focusing on Morita equivalence), and Peter Oman's, called Toposes and Groupoids about how any topos can be compared to sheaves on a groupoid (sort of!). So here we go:
Representations of Groupoids and Morita Equivalence
The motivation here is to address directly what Morita equivalence means for groupoids, and particularly Lie groupoids. (One of the main references I used to prepare on this was this paper by Klaas Landsman, which gives Morita equivalence theorems for a variety of bicategories). The classic description of a Morita equivalence of rings $R$ and $S$ is often given in terms of the existence of an $R$–$S$-bimodule $M$ having certain properties. But the point of this bimodule is that one can turn $R$-modules into $S$-modules by tensoring with it, and vice versa (via a corresponding $S$–$R$-bimodule $N$). Actually, it's better than this, in that there are functors

$- \otimes_R M : \mathbf{Mod}_R \to \mathbf{Mod}_S \qquad\qquad - \otimes_S N : \mathbf{Mod}_S \to \mathbf{Mod}_R$
And moreover, either composite of these is naturally isomorphic to the appropriate identity, so in particular one has $M \otimes_S N \cong R$ and $N \otimes_R M \cong S$ (since tensoring with the base ring is the identity for modules). But this just says that these two functors are actually giving an equivalence of the categories $\mathbf{Mod}_R$ and $\mathbf{Mod}_S$.
So this is the point of Morita equivalence. Suppose, for a class of algebraic gadget (ring, algebra, groupoid, etc.), one has the notion of a representation of such a gadget $X$ (as a module is the right idea of the representation of a ring), and all the representations of $X$ form a category $\mathbf{Rep}(X)$. Then Morita equivalence is the equivalence relation induced by equivalence of the representation categories – gadgets $X$ and $Y$ are Morita equivalent if there is an equivalence $\mathbf{Rep}(X) \simeq \mathbf{Rep}(Y)$ of the representation categories. For nice categories of gadgets – rings and von Neumann algebras, for instance – this occurs if and only if a condition like the existence of the bimodule above is true. In other cases, this is only a sufficient condition for Morita equivalence, not a necessary one.
I'll comment here that there are therefore several natural notions of Morita equivalence, which a priori might be different, since categories like $\mathbf{Rep}(X)$ carry quite a bit of structure. For example, there is a tensor product of representations that makes it a symmetric monoidal category; there is a direct sum of representations making it abelian. So we might want to ask that the equivalence between them be an equivalence of:
abelian categories
monoidal abelian categories
symmetric monoidal abelian categories
(in principle we could also take the last two and drop "abelian", for a total of six versions of the concept, but this progression is most natural in much the same way that "set – abelian group – ring" is a natural progression).
Really, what one wants is the strongest of these notions. Equivalence as abelian categories just means having the same number of irreducible representations (which are the generators). It's less obvious that the "symmetric" qualifier is important, but there are examples where these are different.
So then one gets Morita equivalence for groupoids from the categories of representations in this standard way. One point here is that, whereas representations of groups are actions on vector spaces, representations of a groupoid $G$ are actions on vector bundles over the space of objects of $G$ (call this $G_0$). So for a morphism from $x$ to $y$, the representation gives a linear map from the fibre over $x$ to the fibre over $y$ (which is necessarily iso).
The above paper by Landsman is nice in that it defines this concept for several different categories, and gives the corresponding versions of a theorem showing that this Morita equivalence is either the same as, or implied by (depending on the case) equivalence in a certain bicategory. For Lie groupoids, this bicategory has Lie groupoids for objects, certain bibundles as morphisms, and bibundle maps as 2-morphisms – the others are roughly analogous. The bibundles in question are called "Hilsum-Skandalis maps" (on this, I found Janez Mrcun's thesis a useful place to look). This does in this context essentially what the bicategory of spans does for finite groupoids (many simplifying features about the finite case obscure what's really going on, so in some ways it's better to look at this case).
The general phenomenon here is the idea of "strong Morita equivalence" of rings/algebras/groupoids $A$ and $B$. What, precisely, this means depends on the setting, but generally it means there is some sort of interpolating object between $A$ and $B$. The paper by Landsman gives specifics in various cases – the interpolating object may be a bimodule, or bibundle of some sort (these Hilsum-Skandalis maps), and in the case of discrete groupoids one can derive this from a span. In any case, strong Morita equivalence appears to amount to an equivalence internal to a bicategory in which these are the morphisms (and the 2-morphisms are something natural, such as bimodule maps in the case of rings – just linear maps compatible with the left and right actions on two bimodules). In all cases, strong Morita equivalence implies Morita equivalence, but only in some cases (not including the case of Lie groupoids) is the converse true.
There are more details on this in my slides, and in the references above, but now I'd like to move on…
Toposes and Groupoids
Peter Oman gave the most recent talk in our seminar, the motivation for which is to explain how the idea of a topos as a generalization of space fits in with the idea of a groupoid as a generalization of space. As a motivation, he mentioned a theorem of Butz and Moerdijk, that any topos with "enough points" is equivalent to the topos of sheaves on some topological groupoid. The generalization drops the "enough points" condition, to say that any topos is equivalent to a topos of sheaves on a localic groupoid. Locales are a sort of point-free generalization of topological spaces – they are distributive lattices closed under finite meets and arbitrary joins, just like the lattice of open sets in a topological space (the join and meet operations there being just unions and intersections). Actually, with the usual idea of a map of lattices (which are functors, since a lattice is a poset, hence a category), the morphisms point the wrong way, so one actually takes the opposite of the category of these lattices to get the category $\mathbf{Loc}$ of locales.
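To make the lattice description concrete, here's a toy check on the opens of the Sierpinski space – purely illustrative, with the distributive law written out as the frame axiom:

```python
# Opens of the Sierpinski space {a, b}, whose open sets are {}, {b}, {a, b}.
opens = [frozenset(), frozenset({'b'}), frozenset({'a', 'b'})]

def join(xs):          # union = (arbitrary) join
    out = frozenset()
    for x in xs:
        out = out | x
    return out

def meet(x, y):        # intersection = (finite) meet
    return x & y

# Closed under finite meets and arbitrary joins:
assert all(meet(x, y) in opens for x in opens for y in opens)
assert join(opens) in opens

# The distributive law  x ∧ (∨ S) = ∨ {x ∧ s : s ∈ S}:
S = [frozenset({'b'}), frozenset({'a', 'b'})]
x = frozenset({'b'})
assert meet(x, join(S)) == join([meet(x, s) for s in S])
```

A locale is just such a lattice taken abstractly, with no requirement that its elements actually be sets of points.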
(Let me just add that as a generalization of space that isn't essentially about points, this is nice, but in a rather "commutative" way. There is a noncommutative notion, namely quantales, which are related to locales in rather the same way lattices of subspaces of a Hilbert space relate to those of open sets in a space. It would be great if an analogous theorem applied there, but neither I nor Peter happen to know if this is so.)
Anyway, the main theorem (due to Joyal and Tierney, in "An Extension of the Galois Theory of Grothendieck" – though see this, for instance) is that one can represent any topos as sheaves on a localic groupoid – i.e. a groupoid internal to $\mathbf{Loc}$.
The essential bit of these theorems is localic reflection. This refers to an adjoint pair of functors between $\mathbf{Top}$ and $\mathbf{Loc}$. One functor gives the space of points of a locale (i.e. atomic elements of the lattice – those with no other elements between them and the minimal element, which corresponds to the empty set in a topology). The other functor gives, for any topological space, the locale which is its lattice of open sets. This adjunction turns out to give an equivalence when one restricts to "sober" spaces (for example, Hausdorff spaces are sober), and locales with "enough points" (having no other context for the term, I'll take this to be a definition of "enough points" for the time being).
Now, part of the point is that locales are a generalization of topological space, and topoi generalize this somewhat further: any locale $L$ gives rise to a topos $Sh(L)$ of sheaves on it (analogous to the sheaf of continuous functions on a space). A given topos $\mathcal{E}$ may or may not be equivalent to a topos of sheaves on a locale: i.e. $\mathcal{E} \simeq Sh(L)$ might hold for some locale $L$. If so, the topos is "localic". Localic reflection just says that taking sheaves induces an equivalence between hom-categories in the 2-categories of locales and of localic topoi. Now, not every topos is localic, but there is always some locale $L$ such that we can compare $\mathcal{E}$ to $Sh(L)$.
In particular, given a map of locales (or even more particularly, a continuous map of spaces) $f : X \to Y$, there's an adjoint pair of inverse-image and direct-image maps $f^*$ and $f_*$ for passing sheaves back and forth. This gives the idea of a "geometric morphism" of topoi, which is just such an adjoint pair. The theorem is that given any topos $\mathcal{E}$, there is some "surjective" geometric morphism $p : Sh(L) \to \mathcal{E}$ (surjectivity amounts to the claim that the inverse image functor $p^*$ is faithful – i.e. ignores no part of $\mathcal{E}$). Of course, this might not be an equivalence (so $Sh(L)$ is bigger than $\mathcal{E}$).
Now, the point, however, is that this comparison functor means that $\mathcal{E}$ can't be TOO much more general than sheaves on a locale. The point is, given this geometric morphism $p : Sh(L) \to \mathcal{E}$, one can form the pullback of $p$ along itself, to get a "fibre product" of topoi: $Sh(L) \times_{\mathcal{E}} Sh(L)$, with the obvious projection maps to $Sh(L)$. Indeed, one can get the triple fibre product $Sh(L) \times_{\mathcal{E}} Sh(L) \times_{\mathcal{E}} Sh(L)$, and so on. It turns out these topoi, and these projection maps (thought of, via localic reflection, as locales, and maps between locales) can be treated as the objects and structure maps for a groupoid internal to $\mathbf{Loc}$. So in particular, we can think of the double fibre product as the locale of morphisms in the groupoid, and the triple fibre product as the locale of composable pairs of morphisms.
The theorem, then, is that $\mathcal{E}$ is related to the topos of sheaves on this localic groupoid. More particularly, it is equivalent to the subcategory of objects which satisfy a descent condition. Descent, of course, is a huge issue – and one that's likely to get much more play in future talks in this seminar, but for the moment, it's probably sufficient to point to Peter's slides, and observe that objects which satisfy descent are "global" in some sense (in the case of a sheaf of functions on a space, they correspond to sheaves in which locally defined functions which match on intersections of open sets can be "pasted" to form global functions).
So part of the point here is that locales generalize spaces, and toposes generalize locales, but only about as far as groupoids generalize spaces (by encoding local symmetry). There is also a more refined version (due to Moerdijk and Pronk) that has to do with ringed topoi (which generalize ringed spaces), giving a few conditions which amount to being equivalent to the topos of sheaves on an orbifold (which has some local manifold-like structure, and where the morphisms in the groupoid are fairly tame in that the automorphism groups at each point are finite).
Coming up in the seminar, Tom Prince will be talking about an approach to this whole subject due to Rick Jardine, involving simplicial presheaves.
Recent Talk: Ivan Dynov on Classifying von Neumann algebras
Posted by Jeffrey Morton under algebra, analysis, c*-algebras, noncommutative geometry, physics, quantum mechanics
I say this is about a "recent" talk, though of course it was last year… But to catch up: Ivan Dynov was visiting from York and gave a series of talks, mainly to the noncommutative geometry group here at UWO, about the problem of classifying von Neumann algebras. (Strictly speaking, since there is not yet a complete set of invariants for von Neumann algebras known, one could dispute whether the following is a "classification", but here it is anyway).
The first point is that any von Neumann algebra M is a direct integral of factors, which are highly noncommutative in that the centre of a factor consists of just the multiples of the identity. The factors are the irreducible building blocks of the noncommutative features of M.
There are two basic tools that provide what classification we have for von Neumann algebras: first, the order theory for projections; second, the Tomita-Takesaki theory. I've mentioned the Tomita flow previously, but as for the first part:
A projection (a self-adjoint idempotent) is just what it sounds like, if you represent the algebra M as an algebra of bounded operators on a Hilbert space H. An extremal but informative case is M = B(H), but in general not every bounded operator appears in M.
In the case where M = B(H), a projection in M is the same thing as a (closed) subspace of H. There is an (orthomodular) lattice of them (in general, the projections of any M form such a lattice). For subspaces, the dimension characterizes them up to isomorphism – and any two subspaces of the same dimension are carried onto each other by some operator in B(H) (but not necessarily by one in a general M).
The idea is to generalize this to projections in a general M, and get some characterization of M. The kind of isomorphism that matters for subspaces is a partial isometry – a map v which preserves the metric on some subspace, and otherwise acts as a projection. In fact, the corresponding projections are then conjugate by v. So we define, for a general M, an equivalence relation on projections, which amounts to saying that p ~ q if there's a partial isometry v in M with v v* = p and v* v = q (i.e. the projections are conjugate by v).
Then there's an order relation on the equivalence classes of projections – which, as suggested above, we should think of as generalizing "dimension" from the case M = B(H). The order relation says that p ≼ q if p ~ q' where q' ≤ q as a projection (i.e. inclusion, thinking of a projection as its image subspace of H). But the fact that M may not be all of B(H) has some counterintuitive consequences. For example, we can define a projection p to be finite if the only time p ~ q with q ≤ p is when q = p (which is just the usual definition of finite, relativized to use only maps in M). We can call p a minimal projection if it is nonzero and q ≤ p implies q = 0 or q = p.
Then the first pass at a classification of factors (i.e. "irreducible" von Neumann algebras) says a factor M is:
Type I: if M contains a minimal projection
Type II: if M contains no minimal projection, but does contain a (nontrivial) finite projection
Type III: if M contains no minimal and no nontrivial finite projection
We can further subdivide them by following the "dimension-function" analogy, which captures the ordering of projections for M = B(H), since it's a theorem that there will be a function d from projections to [0, ∞] which has the properties of "dimension": it gets along with the equivalence relation ~, respects finiteness, and adds up over direct sums. Then letting D be the range of this function, we have a few types. There may be more than one function d, but every case has one of the types:
Type I_n: when D = {0, 1, …, n} (that is, there is a maximal, finite projection)
Type I_∞: when D = {0, 1, 2, …, ∞} (there is an infinite projection in M)
Type II_1: when D = [0, 1] (the maximal projection is finite – such a case can always be rescaled so the maximum is 1)
Type II_∞: when D = [0, ∞] (the maximal projection is infinite – notice that this has the same order type as type II_1)
Type III: when D = {0, ∞} (an infinite maximal projection)
Types I_∞, II_∞ and III have ∞ ∈ D (these are called properly infinite)
The type I factors are all just (equivalent to) matrix algebras on some countable or finite dimensional vector space – which we can think of as a function space like L²(X) for some set X. Types II and III are more interesting. Type II algebras are related to what von Neumann called "continuous geometries" – analogs of projective geometry (i.e. the geometry of subspaces), with a continuous dimension function.
(If we think of these algebras as represented on a Hilbert space H, then in fact, thought of as subspaces of H, all the projections give infinite dimensional subspaces. But since the definition of "finite" is relative to M, a partial isometry from a subspace to a proper subspace of itself may exist in B(H) without lying in M.)
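Since type I factors are just matrix algebras, the projection calculus above can be checked concretely. Here's a small numerical sketch (my own illustration, not from the talks): in B(C^4), the "dimension" of a projection is its rank, and any two projections of equal rank are linked by a partial isometry v with v v* = p and v* v = q.

```python
import numpy as np

# In the type I_n factor B(C^n), two projections are equivalent exactly
# when they have the same rank, and the "dimension function" d is just
# the rank (equivalently, the trace). We build a partial isometry v
# linking two rank-2 projections in B(C^4). Function names are ours.

def projection_onto(columns):
    """Orthogonal projection onto the span of the given columns."""
    q, _ = np.linalg.qr(columns)
    return q @ q.conj().T

def partial_isometry(p, q):
    """A partial isometry v with v v* = p and v* v = q (equal ranks)."""
    def range_basis(proj, k):
        vals, vecs = np.linalg.eigh(proj)
        return vecs[:, vals > 0.5][:, :k]   # eigenvectors with eigenvalue 1
    k = round(np.trace(p).real)             # d(p) = rank = trace
    return range_basis(p, k) @ range_basis(q, k).conj().T

rng = np.random.default_rng(0)
p = projection_onto(rng.standard_normal((4, 2)))
q = projection_onto(rng.standard_normal((4, 2)))

v = partial_isometry(p, q)
print(np.allclose(v @ v.conj().T, p), np.allclose(v.conj().T @ v, q))
```

Here "finite" is automatic: no rank-k projection is equivalent to a strictly smaller subprojection of itself.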
In any case, this doesn't exhaust what we know about factors. In his presentation, Ivan Dynov described some examples constructed from crossed products of algebras, which is important later, but for the moment, I'll finish describing another invariant which helps pick apart the type III factors. This is related to Tomita-Takesaki theory, which I've mentioned in here before.
You'll recall that the Tomita flow (associated to a given state φ) is given by σ_t(A) = Δ^{it} A Δ^{-it}, where Δ^{1/2} is the self-adjoint part of the conjugation operator S (which depends on the state because it refers to the GNS representation of M on a Hilbert space H_φ). This flow is uninteresting for type I or II factors, but for type III factors, it's the basis of Connes' classification.
In particular, we can understand the Tomita flow in terms of eigenvalues of Δ, since it comes from exponentials of Δ. Moreover, as I commented last time, the really interesting part of the flow is independent of which state we pick. So we are interested in the common eigenvalues of the Δ_φ associated to different states φ, and define
S(M) = ⋂_φ Spec(Δ_φ)
(where φ ranges over the set of all states on M, or actually "weights")
Then S(M) ∖ {0}, it turns out, is always a multiplicative subgroup of the positive real line, and the possible cases refine to these:
S(M) = {1}: this is when M is type I or II
S(M) = {0, 1}: type III_0
S(M) = {0} ∪ {λ^n : n ∈ ℤ}: type III_λ (for each λ in the range 0 < λ < 1), and
S(M) = [0, ∞): type III_1
(Taking logarithms, log(S(M) ∖ {0}) gives an additive subgroup of ℝ, which carries the same information.) So roughly, the three types are: type I, finite and countable matrix algebras, where the dimension function tells everything; type II, where the dimension function behaves surprisingly (thought of as analogous to projective geometry); and type III, where dimensions become infinite but a "time flow" dimension comes into play. The spectra of Δ above tell us about how observables change in time by the Tomita flow: high eigenvalues cause the observable's value to change faster with time, while low ones change slower. The possible spectra describe the possible arrangements of these eigenvalues: apart from the first two cases, the types are thus a continuous positive spectrum (III_1), and a discrete one with a single generator (III_λ). (I think of free and bound energy spectra as an analogy – though I'm not familiar enough with this stuff to be sure it's the right one.)
This role for time flow is interesting because of the procedures for constructing examples of type III factors, which Ivan Dynov also described to us. These are examples associated with dynamical systems, and they show up as crossed products. See the link for details, but roughly this is a "product" of an algebra M by a group action – a kind of von Neumann algebra analogue of the semidirect product N ⋊ G of groups incorporating an action of G on N. Indeed, if a (locally compact) group G acts on a group N, then the crossed product of the corresponding group von Neumann algebras is just the von Neumann algebra of the semidirect product group.
In general, a (W*-)dynamical system is a triple (M, G, α), where G is a locally compact group acting by automorphisms on the von Neumann algebra M via the map α : G → Aut(M). Then the crossed product M ⋊_α G is the algebra for the dynamical system.
A significant part of the talks (which I won't cover here in detail) described how to use some examples of these to construct particular type III factors. In particular, a theorem of Murray and von Neumann says the crossed product L^∞(X, μ) ⋊ G is a factor if the action of the discrete group G on the finite measure space (X, μ) is ergodic (i.e. has no nontrivial proper invariant sets – roughly, each orbit is dense). Another says this factor is type III unless there's a measure equivalent to (i.e. mutually absolutely continuous with) μ which is invariant under the action. Some clever examples I won't reconstruct here gave some factors like this explicitly.
He concluded by talking about some efforts to improve the classification: the above is not a complete set of invariants, so a lot of work in this area goes into making the set more complete. One set of results he told us about does this somewhat for the case of hyperfinite factors (i.e. ones which are limits of finite-dimensional ones), namely that if they are type III_λ, they are crossed products of a hyperfinite type II factor with a discrete group.
At any rate, these constructions are interesting, but it would take more time than I have here to look in detail – perhaps another time.
"States" and Time – Hamiltonians, KMS states, and Tomita Flow
Posted by Jeffrey Morton under algebra, c*-algebras, musing, noncommutative geometry, philosophical, physics, quantum mechanics, reading
When I made my previous two posts about ideas of "state", one thing I was aiming at was to say something about the relationships between states and dynamics. The point here is that, although the idea of "state" is that it is intrinsically something like a snapshot capturing how things are at one instant in "time" (whatever that is), extrinsically, there's more to the story. The "kinematics" of a physical theory consists of its collection of possible states. The "dynamics" consists of the regularities in how states change with time. Part of the point here is that these aren't totally separate.
Just for one thing, in classical mechanics, the "state" includes time-derivatives of the quantities you know, and the dynamical laws tell you something about the second derivatives. This is true in both the Hamiltonian and Lagrangian formalism of dynamics. The Hamiltonian function, which represents the concept of "energy" in the context of a system, is based on a function H(q, p), where q is a vector representing the values of some collection of variables describing the system (generalized position variables, in some configuration space M), and the p_i are corresponding "momentum" variables, which are the other coordinates in a phase space which in simple cases is just the cotangent bundle T*M. The familiar case of a moving point particle has "energy = kinetic + potential", or H = p^2/2m + V(q) for some potential function V. Here, m refers to mass, or some equivalent. The symplectic form on T*M can then be used to define a path through any point, which describes the evolution of the system in time – notably, it conserves the energy H. Then there's the Lagrangian, which defines the "action" associated to a path, which comes from integrating some function L living on the tangent bundle TM, over the path. The physically realized paths (classically) are critical points of the action, with respect to variations of the path.
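As a throwaway numerical illustration of the Hamiltonian picture (mine, not part of the post): for H(q, p) = p^2/2m + V(q), the flow through a phase-space point conserves H. Here's the harmonic oscillator (m = 1, V(q) = q^2/2) integrated with a symplectic leapfrog step; the parameters are made up.

```python
import numpy as np

# Leapfrog integration of the harmonic oscillator. Symplectic steps
# respect the phase-space structure, so the energy H(q, p) stays
# (nearly) constant over long times.

def hamiltonian(q, p):
    return 0.5 * p**2 + 0.5 * q**2

def leapfrog(q, p, dt, steps):
    for _ in range(steps):
        p -= 0.5 * dt * q   # half kick: dV/dq = q
        q += dt * p         # drift
        p -= 0.5 * dt * q   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=10_000)
print(abs(hamiltonian(q1, p1) - hamiltonian(q0, p0)) < 1e-4)  # True
```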
This is all based on the view of a "state" as an element of a set (which happens to be a symplectic manifold like T*M, or just a manifold if it's TM), and both the "energy" and the "action" are some kind of function on this set. A little extra structure (a symplectic form, or a measure on path space) turns these functions into a notion of dynamics. Now, a function on the space of states is what an observable is: energy certainly is easy to envision this way, and action (though harder to define intuitively) counts as well.
But another view of states which I mentioned in that first post is the one that pertains to statistical mechanics, in which a state is actually a statistical distribution on the set of "pure" states. This is rather like a function – it's slightly more general, since a distribution can have point-masses, but any function gives a distribution if there's a fixed measure around to integrate against – then a function ρ becomes the measure ρ dμ. And this is where the notion of a Gibbs state comes from, though it's slightly trickier. The idea is that the Gibbs state (in some circumstances called the Boltzmann distribution) is the state a system will end up in if it's allowed to "thermalize" – it's the maximum-entropy distribution for a given amount of energy in the specified system, at a given temperature T. So, for instance, for a gas in a box, this describes how, at a given temperature, the kinetic energies of the particles are (probably) distributed. Up to a bunch of constants of proportionality, one expects that the weight given to a state x (or region in state space) is just e^{-H(x)/T} (in units where Boltzmann's constant is 1), where H is the Hamiltonian (energy) for that state. That is, the likelihood of being in a state is inversely proportional to the exponential of its energy – and higher temperature makes higher energy states more likely.
Now part of the point here is that, if you know the Gibbs state at temperature , you can work out the Hamiltonian
just by taking a logarithm – so specifying a Hamiltonian and specifying the corresponding Gibbs state are completely equivalent. But specifying a Hamiltonian (given some other structure) completely determines the dynamics of the system.
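The Gibbs-state ↔ Hamiltonian correspondence is easy to see for a finite classical system (a sketch of my own; the energies and temperature are made up): the weights are p_i ∝ e^{-E_i/T}, and -T log p_i recovers the energies up to an additive constant.

```python
import numpy as np

# Gibbs distribution at temperature T, and recovery of the energies
# from the state by a logarithm (up to the constant T*log Z).

T = 2.0
energies = np.array([0.0, 1.0, 3.0, 3.5])

weights = np.exp(-energies / T)
p = weights / weights.sum()        # the Gibbs state

recovered = -T * np.log(p)         # = energies + T*log(Z)
print(np.allclose(recovered - recovered[0], energies - energies[0]))  # True
```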
This is the classical version of the idea Carlo Rovelli calls "Thermal Time", which I first encountered in his book "Quantum Gravity", but also is summarized in Rovelli's FQXi essay "Forget Time", and described in more detail in this paper by Rovelli and Alain Connes. Mathematically, this involves the Tomita flow on von Neumann algebras (which Connes used to great effect in his work on the classification of same). It was reading "Forget Time" which originally got me thinking about making the series of posts about different notions of state.
Physically, remember, these are von Neumann algebras of operators on a quantum system, the self-adjoint ones being observables; states are linear functionals on such algebras. The equivalent of a Gibbs state – a thermal equilibrium state – is called a KMS (Kubo-Martin-Schwinger) state (for a particular Hamiltonian). It's important that the KMS state depends on the Hamiltonian, which is to say the dynamics and the notion of time with respect to which the system will evolve. Given a notion of time flow, there is a notion of KMS state.
One interesting place where KMS states come up is in (general) relativistic thermodynamics. In particular, the effect called the Unruh Effect is an example (here I'm referencing Robert Wald's book, "Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics"). Physically, the Unruh effect says the following. Suppose you're in flat spacetime (described by Minkowski space), and an inertial (unaccelerated) observer sees it in a vacuum. Then an accelerated observer will see space as full of a bath of particles at some temperature related to the acceleration. Mathematically, a change of coordinates (acceleration) implies there's a one-parameter family of automorphisms of the von Neumann algebra which describes the quantum field for particles. There's also a (trivial) family for the unaccelerated observer, since the coordinate system is not changing. The Unruh effect in this language is the fact that a vacuum state relative to the time-flow for an unaccelerated observer is a KMS state relative to the time-flow for the accelerated observer (at some temperature related to the acceleration).
The KMS state for a von Neumann algebra M with a given Hamiltonian operator H has a density matrix ρ, which is again, up to some constant factors, just the exponential e^{-βH} of the Hamiltonian operator. (For pure states, ρ = |ψ⟩⟨ψ|, and in general a density matrix becomes a state by A ↦ tr(ρA), which for pure states is just the usual expectation value for A, ⟨ψ|A|ψ⟩.)
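In finite dimensions this is completely concrete (a sketch of mine; the Hamiltonian below is an arbitrary hermitian matrix): the Gibbs density matrix is ρ = e^{-βH}/Z, and the state sends an observable A to tr(ρA).

```python
import numpy as np

# Build the Gibbs/KMS density matrix for a hermitian H via its
# spectral decomposition, then evaluate the state on an observable.

def gibbs_density_matrix(H, beta=1.0):
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-beta * vals)
    rho = (vecs * w) @ vecs.conj().T   # V diag(e^{-beta E}) V*
    return rho / np.trace(rho).real    # normalize: divide by Z

H = np.array([[0.0, 0.1, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.1, 2.0]])
rho = gibbs_density_matrix(H, beta=0.5)

A = np.diag([1.0, -1.0, 0.0])              # some observable
expectation = np.trace(rho @ A).real       # the state applied to A

print(round(np.trace(rho).real, 10))       # 1.0: states have trace one
```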
Now, things are a bit more complicated in the von Neumann algebra picture than the classical picture, but Tomita-Takesaki theory tells us that as in the classical world, the correspondence between dynamics and KMS states goes both ways: there is a flow – the Tomita flow – associated to any given state, with respect to which the state is a KMS state. By "flow" here, I mean a one-parameter family of automorphisms of the von Neumann algebra. In the Heisenberg formalism for quantum mechanics, this is just what time is (i.e. states remain the same, but the algebra of observables is deformed with time). The way you find it is as follows (and why this is right involves some operator algebra I find a bit mysterious):
First, get the algebra M acting on a Hilbert space H, with a cyclic vector Ω (i.e. such that MΩ is dense in H – one way to get this is by the GNS representation, so that the state φ just acts on an operator by the expectation value at Ω, as above, so that the vector Ω is standing in, in the Hilbert space picture, for the state φ). Then one can define an operator S by the fact that, for any A in M, one has S A Ω = A* Ω.
That is, S acts like the conjugation operation on operators at Ω, which is enough to define S since Ω is cyclic. This S has a polar decomposition (analogous for operators to the polar form for complex numbers) S = J Δ^{1/2}, where J is antiunitary (this is conjugation, after all) and Δ^{1/2} is self-adjoint. We need the self-adjoint part, because the Tomita flow is the one-parameter family of automorphisms given by: σ_t(A) = Δ^{it} A Δ^{-it}.
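For a faithful state on a matrix algebra given by a density matrix ρ, the modular flow works out to conjugation by ρ^{it} (a standard finite-dimensional fact; in this case the flow is inner, which is exactly why finite dimensions are "uninteresting" here). A quick numerical check of my own:

```python
import numpy as np

# Modular flow sigma_t(A) = rho^{it} A rho^{-it} for a faithful state
# on M_3 with density matrix rho. We verify numerically that the state
# is invariant under its own flow: tr(rho sigma_t(A)) = tr(rho A).

def matrix_power_it(rho, t):
    """rho^{it}, computed via the spectral decomposition of rho."""
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * vals**(1j * t)) @ vecs.conj().T

rho = np.diag([0.5, 0.3, 0.2])        # a faithful state on M_3
A = np.arange(9.0).reshape(3, 3)      # an arbitrary element of M_3

t = 0.7
u = matrix_power_it(rho, t)           # a unitary
sigma_A = u @ A @ u.conj().T          # sigma_t(A)

invariant = np.isclose(np.trace(rho @ sigma_A), np.trace(rho @ A))
print(invariant)  # True
```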
An important fact for Connes' classification of von Neumann algebras is that the Tomita flow is basically unique – that is, it's unique up to an inner automorphism (i.e. a conjugation by some unitary operator – so in particular, if we're talking about a relativistic physical theory, a change of coordinates giving a different time parameter would be an example). So while there are different flows, they're all "essentially" the same. There's a unique notion of time flow if we reduce the group of automorphisms to its cosets modulo inner automorphisms. Now, in some cases, the Tomita flow consists entirely of inner automorphisms, and this reduction makes it disappear entirely (this happens in the finite-dimensional case, for instance). But in the general case this doesn't happen, and the Connes-Rovelli paper summarizes this by saying that von Neumann algebras are "intrinsically dynamic objects". So this is one interesting thing about the quantum view of states: there is a somewhat canonical notion of dynamics present just by virtue of the way states are described. In the classical world, this isn't the case.
Now, Rovelli's "Thermal Time" hypothesis is, basically, that the notion of time is a state-dependent one: instead of an independent variable, with respect to which other variables change, quantum mechanics (per Rovelli) makes predictions about correlations between different observed variables. More precisely, the hypothesis is that, given that we observe the world in some state, the right notion of time should just be the Tomita flow for that state. They claim that checking this for certain cosmological models, like the Friedman model, they get the usual notion of time flow. I have to admit, I have trouble grokking this idea as fundamental physics, because it seems like it's implying that the universe (or any system in it we look at) is always, a priori, in thermal equilibrium, which seems wrong to me since it evidently isn't. The Friedman model does assume an expanding universe in thermal equilibrium, but clearly we're not in exactly that world. On the other hand, the Tomita flow is definitely there in the von Neumann algebra view of quantum mechanics and states, so possibly I'm misinterpreting the nature of the claim. Also, as applied to quantum gravity, a "state" perhaps should be read as a state for the whole spacetime geometry of the universe – which is presumably static – and then the apparent "time change" would then be a result of the Tomita flow on operators describing actual physical observables. But on this view, I'm not sure how to understand "thermal equilibrium". So in the end, I don't really know how to take the "Thermal Time Hypothesis" as physics.
In any case, the idea that the right notion of time should be state-dependent does make some intuitive sense. The only physically, empirically accessible referent for time is "what a clock measures": in other words, there is some chosen system which we refer to whenever we say we're "measuring time". Different choices of system (that is, different clocks) will give different readings even if they happen to be moving together in an inertial frame – atomic clocks sitting side by side will still gradually drift out of sync. Even if "the system" means the whole universe, or just the gravitational field, clearly the notion of time even in General Relativity depends on the state of this system. If there is a non-state-dependent "god's-eye view" of which variable is time, we don't have empirical access to it. So while I can't really assess this idea confidently, it does seem to be getting at something important. | CommonCrawl |
\begin{document}
\title{Solvability of the Stokes Immersed Boundary Problem in\\ Two Dimensions}
\author{Fang-Hua Lin, Jiajun Tong\\[5pt]Courant Institute} \date{}
\maketitle \begin{abstract} We study coupled motion of a 1-D closed elastic string immersed in a 2-D Stokes flow, known as the Stokes immersed boundary problem in two dimensions. Using the fundamental solution of the Stokes equation and the Lagrangian coordinate of the string, we write the problem into a contour dynamic formulation, which is a nonlinear non-local equation solely keeping track of evolution of the string configuration. We prove existence and uniqueness of local-in-time solution starting from an arbitrary initial configuration that is an $H^{5/2}$-function in the Lagrangian coordinate satisfying the so-called well-stretched assumption. We also prove that when the initial string configuration is sufficiently close to an equilibrium, which is an evenly parameterized circular configuration, then global-in-time solution uniquely exists and it will converge to an equilibrium configuration exponentially as $t\rightarrow +\infty$. The technique in this paper may also apply to the Stokes immersed boundary problem in three dimensions. \end{abstract}
\noindent\textbf{Keywords.}\;Immersed boundary problem, Stokes flow, fractional Laplacian, solvability, stability. \noindent\textbf{AMS subject classifications.}\;35C15, 35Q35, 35R11, 76D07.
\section{Introduction}
The immersed boundary method was initially formulated by Peskin \cite{peskin1972flowPhD,peskin1972flow} in the early 1970s to study flow patterns around heart valves, and it later developed into a generally effective method for solving fluid-structure interaction problems \cite{peskin2002immersed}. It has given rise to numerous studies of numerical methods, along with applications in physics, biology and medical sciences. See \cite{peskin2002immersed, mittal2005immersed} and the references therein. Various mathematical analyses have also been performed based on the model formulation itself, e.g.\;\cite{stockie1995stability,stockie1997analysis,mori2008convergence}. From the analysis point of view, the immersed boundary problem is intriguing in its own right. It is nonlinear by nature, featuring a free moving boundary and singular forcing, which are not well-studied in the classic mathematical theory of hydrodynamics \cite{temam1984navier}.
In this paper, we shall consider Stokes immersed boundary problem in two dimensions. It models the scenario where there is a 1-D closed elastic string (or fibre) immersed and moving in the 2-D Stokes flow: the string exerts force on the fluid and generates the flow, while the flow in turn moves the string and changes its configuration. The mathematical formulation will be given below. We will prove solvability of the string motion and its asymptotic behavior near equilibrium. Much of the analysis in this paper also applies to immersed boundary problems in three dimensions.
A similar type of problems on one- \cite{solonnikov1977solvability,solonnikov1986solvability,shibata2007free} or two-phase \cite{denisova1991solvability,tanaka1993global,giga1994global,denisova1994problem,denisova1994solvability,shimizu2011local,kohne2013qualitative,solonnikov2014theory} incompressible fluid motion has been extensively studied. In these settings, the space is occupied by one incompressible viscous fluid and the vacuum, or by two immiscible incompressible viscous fluids; the fluids move with or without surface tension on their interface. Solvability results have been established in various function spaces. The main difference between these problems and ours is that only the geometry (such as length, area and curvature) of the interface is involved there in determining the force balance at the interface. In particular, it does not depend on how the immersed string or membrane is parametrized. Consequently, one can use either Eulerian or Lagrangian approach to study the evolution of interfaces. However, in the immersed boundary problems, elastic strings or membranes have their internal structures and their dynamics also depends on constitutive law of elasticity, which varies from case to case. In other words, intrinsic parametrization of the immersed boundary and its elastic deformation should play a role. Indeed, immersed boundaries with identical overall shape can generate force differently. One can easily construct a 1-D closed string with a circular shape, yet far more stretched at some point than somewhere else. In this case, we shall see that the force on the string is not everywhere pointing inward normal to the string. This suggests that a pure Eulerian approach employed in many mathematical studies of free boundary problems in hydrodynamics (e.g.\;\cite{bertozzi1993global}) would not suffice. 
One needs to keep track of the configuration of the immersed boundary, which is typical in the nonlinear elasticity problems, and different techniques need to be used.
\subsection{The Stokes immersed boundary problem in two dimensions} Consider a 1-D neutrally buoyant massless elastic closed string immersed in 2-D Stokes flow. The string is modeled as a Jordan curve $\Gamma_t$ parameterized by $X(s,t)$, where $s\in\mathbb{T}$ is the Lagrangian coordinate (or the material coordinate) and $t\geq 0$ is the time variable. Here, $\mathbb{T}\triangleq \mathbb{R}/2\pi\mathbb{Z}$ is the 1-D torus equipped with the induced metric. We always assume that at least $X(\cdot,t)\in H^2(\mathbb{T})$ for all $t$. The flow field in the immersed boundary problem is determined by \begin{equation} \begin{split} &\;-\mu_0\Delta u +\nabla p = f(x,t),\quad x\in\mathbb{R}^2,\;t>0,\\ &\;\mathrm{div}\, u = 0,\\
&\;|u|,|p|\rightarrow 0\mbox{ as }|x|\rightarrow \infty. \end{split} \label{eqn: stokes equation} \end{equation} Here $u(x,t)$ is the velocity field in $\mathbb{R}^2$ and $p$ is the pressure; $\mu_0>0$ is the dynamic viscosity; $f(x,t)$ is the elastic force exerted on the fluid generated by the string, given by \cite{peskin2002immersed} \begin{equation} f(x,t) = \int_\mathbb{T} F(s,t) \delta (x-X(s,t))\,ds. \label{eqn: force in the immersed boundary problem general form} \end{equation} Here $\delta$ is the 2-D delta measure, which means the force is only supported on the string. $F(s,t)$ is the force in the Lagrangian formulation; it is given by \begin{equation}
F(s,t) = \frac{\partial}{\partial s}\left(\mathcal{T}(|X_s|)\frac{X_s}{|X_s|}\right),\quad \mathcal{T}(|v|) = \mathcal{E}'(|v|). \label{eqn: force in the immersed boundary problem general form Lagrangian} \end{equation} where $X_s = \partial X/\partial s$, $\mathcal{T}$ is the tension in the string and $\mathcal{E}$ is the elastic energy density. In the following discussion, we shall take \begin{equation}
\mathcal{E}(|v|) = k_0|v|^2/2. \label{eqn: elastic energy density} \end{equation} In this case, each infinitesimal segment of the string behaves like a Hookean spring with elasticity coefficient $k_0>0$, and thus $F(s,t) = k_0X_{ss}(s,t)$. It will be clear below that most of the discussion in this paper can also apply to more general elastic energy of other forms. The model is closed by the kinematic equation of the string, \begin{equation} \frac{\partial X}{\partial t}(s,t) = u(X(s,t), t), \label{eqn: kinematic equation of membrane} \end{equation} which means the string moves with the flow.
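For the reader's convenience, the one-line computation behind $F(s,t)=k_0X_{ss}(s,t)$ (implicit in the text above, written out here) is:

```latex
\mathcal{T}(|X_s|) = \mathcal{E}'(|X_s|) = k_0 |X_s|,
\qquad
F(s,t) = \frac{\partial}{\partial s}\left( k_0 |X_s|\,\frac{X_s}{|X_s|} \right)
       = k_0\,\partial_s X_s = k_0 X_{ss}(s,t).
```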
For simplicity, we shall take $\mu_0 = k_0 = 1$ in the rest of the paper. Indeed, one can easily normalize both coefficients simultaneously by properly redefining $u$, $p$ and the time variable $t$. We shall always omit the $t$-dependence whenever it is convenient; and we shall also write $X'(s')$ and $X''(s')$ in the places of $X_s(s',t)$ and $X_{ss}(s',t)$ respectively.
\subsection{Contour dynamic formulation}\label{section: contour dynamic formulation} The starting point of the analysis in this paper is the following proposition. It rewrites the original immersed boundary problem \eqref{eqn: stokes equation}-\eqref{eqn: kinematic equation of membrane} that is in mixed Eulerian and Lagrangian formulation into a pure Lagrangian formulation, which we will call \emph{contour dynamic formulation}.
\begin{proposition}\label{prop: tranform into contour dynamic formulation} Under the assumptions that $X(\cdot,t)\in H^2(\mathbb{T})$ for all $t$, and that there $\exists\,\lambda>0$, s.t.\;$\forall\,s_1,s_2\in\mathbb{T}$, \begin{equation}
|X(s_1,t)-X(s_2,t)|\geq \lambda|s_1-s_2|, \label{eqn: well_stretched assumption} \end{equation}
where $|s_1-s_2|$ is the distance between $s_1$ and $s_2$ on $\mathbb{T}$, the evolution of $X(s,t)$ in the 2-D Stokes immersed boundary problem \eqref{eqn: stokes equation}-\eqref{eqn: kinematic equation of membrane} is equivalently given by \begin{equation} X_t(s,t)=\mathcal{L}X(s,t)+g_X(s,t),\quad X(s,0) = X_0(s), \label{eqn: contour dynamic formulation of the immersed boundary problem} \end{equation} where $\mathcal{L}\triangleq-\frac{1}{4}(-\Delta)^{1/2}$, and \begin{align} g_X(s,t) = &\;\int_{\mathbb{T}} \Gamma_0(s,s',t)\,ds' +\frac{1}{4}(-\Delta)^{1/2}X(s,t),\label{eqn: definition of g_X}\\ \Gamma_0(s,s',t) = &\;-\partial_{s'}[G(X(s,t)-X(s',t))](X'(s',t)-X'(s,t)).\label{eqn: introduce the notation Gamma_0} \end{align} Here $(-\Delta)^{1/2}$ on $\mathbb{T}$ is understood as a Fourier multiplier or equivalently the following singular integral \begin{equation} (-\Delta)^{1/2}Y(s) \triangleq -\frac{1}{\pi} \mathrm{p.v.}\int_\mathbb{T}\frac{Y(s')-Y(s)}{4\sin^2\left(\frac{s'-s}{2}\right)}\,ds', \end{equation} and \begin{equation}
G(x) = \frac{1}{4\pi}\left(-\ln |x| Id +\frac{x \otimes x}{|x|^2}\right)\label{eqn: 2D stokeslet} \end{equation} is the fundamental solution of the 2-D Stokes equation for the velocity field \cite{pozrikidis1992boundary}. \end{proposition} We call \eqref{eqn: well_stretched assumption} \emph{well-stretched assumption}; \eqref{eqn: contour dynamic formulation of the immersed boundary problem} is called \emph{the contour dynamic formulation} of the immersed boundary problem. The proof of Proposition \ref{prop: tranform into contour dynamic formulation} is left to Section \ref{section: justification of contour dynamic formulation}. In the sequel, we shall focus on \eqref{eqn: contour dynamic formulation of the immersed boundary problem} and prove existence and uniqueness of its solutions and their properties. Estimates of the velocity field $u_X(x,t)$ can be easily obtained based on that; see Lemma \ref{lemma: the velocity field is continuous} below. Note that the subscript of $u_X$ stresses that it is determined by $X(s,t)$; see Section \ref{section: justification of contour dynamic formulation} for more details.
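As a quick numerical sanity check of the Fourier-multiplier definition of $(-\Delta)^{1/2}$ (a side illustration, outside the paper's analytical scope): on a periodic grid the $k$-th Fourier mode is multiplied by $|k|$, so $\cos(3s)$ should map to $3\cos(3s)$.

```python
import numpy as np

# (-Delta)^{1/2} on the torus T = R/(2*pi*Z) as a Fourier multiplier:
# the mode e^{iks} is sent to |k| e^{iks}. Grid size is arbitrary.

n = 256
s = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers on the torus

def sqrt_minus_laplacian(y):
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(y)))

y = np.cos(3 * s)
print(np.allclose(sqrt_minus_laplacian(y), 3 * np.cos(3 * s)))  # True
```

Note that constants are annihilated, consistent with $(-\Delta)^{1/2}$ having the zero mode in its kernel.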
\subsection{Main results} Let us introduce some notation before stating the main results of the paper.
With $T>0$, define \begin{equation} \Omega_{T} = \left\{Y(s,t)\in L^{\infty}_T H^{5/2}\cap L^2_T H^{3}(\mathbb{T}):\;Y_t(s,t)\in L^2_T H^2(\mathbb{T})\right\}. \label{eqn: define the primary function space to prove the local existence} \end{equation} It is equipped with the norm \begin{equation*}
\|Y(s,t)\|_{\Omega_{T}} \triangleq \|Y\|_{L^{\infty}_T {H}^{5/2}(\mathbb{T})}+\|Y\|_{L^2_T {H}^{3}(\mathbb{T})}+\|Y_t\|_{L^2_T {H}^{2}(\mathbb{T})}. \end{equation*} Here $L^{\infty}_T {H}^{5/2}(\mathbb{T}) = L^\infty([0,T];{H}^{5/2}(\mathbb{T}))$, and $L^2_T {H}^{3}(\mathbb{T})$ and $L^2_T {H}^{2}(\mathbb{T})$ have similar meanings.
Then we are able to prove the local well-posedness of the immersed boundary problem \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. \begin{theorem}[Existence of the local-in-time solution]\label{thm: local in time existence} Suppose $X_0(s) \in H^{5/2}(\mathbb{T})$, such that for some $\lambda>0$, \begin{equation}
|X_0(s_1)-X_0(s_2)|\geq \lambda|s_1-s_2|,\quad \forall\, s_1, s_2\in \mathbb{T}. \label{eqn: bi Lipschitz assumption in main thm} \end{equation}
Then there exists $T_0 = T_0(\lambda, \|X_0\|_{\dot{H}^{5/2}})\in(0,+\infty]$ and a solution $X(s,t)\in \Omega_{T_0}\cap C_{[0,T_0]}H^{5/2}(\mathbb{T})$ of the immersed boundary problem \eqref{eqn: contour dynamic formulation of the immersed boundary problem}, satisfying that \begin{equation}
\|X\|_{L^\infty_{T_0} \dot{H}^{5/2}\cap L^2_{T_0} \dot{H}^{3}(\mathbb{T})}\leq 4\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})},\quad \|X_t\|_{L^2_{T_0} \dot{H}^{2}(\mathbb{T})}\leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}, \label{eqn: a priori estimate for the local solution in the main theorem} \end{equation} and that for $\forall\,s_1,s_2\in\mathbb{T}$ and $t\in[0,T_0]$, \begin{equation}
\left|X(s_1,t) - X(s_2,t)\right| \geq \frac{\lambda}{2}|s_1 - s_2|. \label{eqn: uniform bi lipschitz constant of the local solution in the main theorem} \end{equation} \end{theorem} We write $C_{[0,T_0]}H^{5/2}(\mathbb{T})$ instead of $C_{T_0}H^{5/2}(\mathbb{T})$ to stress continuity up to the end points of the time interval.
\begin{theorem}[Uniqueness of the local-in-time solution]\label{thm: local in time uniqueness} Suppose $X_0(s) \in H^{5/2}(\mathbb{T})$ satisfies \eqref{eqn: bi Lipschitz assumption in main thm} with some $\lambda>0$. Given an arbitrary $c\in(0,1)$, the immersed boundary problem \eqref{eqn: contour dynamic formulation of the immersed boundary problem} has at most one solution $X\in\Omega_T$ satisfying that $\forall\,s_1,s_2\in\mathbb{T}$ and $\forall\,t\in[0,T]$, \begin{equation}
|X(s_1,t)-X(s_2,t)|\geq c\lambda|s_1-s_2|. \label{eqn: bi lipschitz assumption in uniqueness thm} \end{equation} In particular, the local-in-time solution obtained in Theorem \ref{thm: local in time existence} is unique in $\Omega_{T_0}$. \end{theorem}
To state the results on the global existence of solutions near equilibrium configurations and their exponential convergence, we need the following definition. \begin{definition}\label{def: closest equilbrium state} Assume $Y(s) \in H^{5/2}(\mathbb{T})$ defines a Jordan curve in the plane, such that the area of the domain enclosed by $Y$ is $\pi R_Y^2$ with $R_Y>0$, i.e., \begin{equation} \frac{1}{2}\int_{\mathbb{T}} Y(s)\times Y'(s)\,ds = \pi R_Y^2. \label{eqn: enclosed area is pi} \end{equation} We call $R_Y$ the \emph{effective radius} of $Y(s)$. Define \begin{equation} Y_{\theta,x}(s) = (R_Y\cos (s+\theta), R_Y\sin(s+\theta))^T + x \label{eqn: define a parameterization of the candidate equilibrium} \end{equation}
with $\theta\in[0,2\pi)$ and $x\in \mathbb{R}^2$. Let \begin{equation}
(\theta_*,x_*) =\argmin_{\theta\in[0,2\pi), x\in\mathbb{R}^2}\int_{\mathbb{T}}|Y(s)-Y_{\theta,x}(s)|^2\,ds. \label{eqn: define closest equilibrium and optimal parameters} \end{equation} Then $Y_*(s) \triangleq Y_{\theta_*,x_*}(s)$ is called \emph{the closest equilibrium configuration} to $Y(s)$. \end{definition} Properties of the closest equilibrium configuration will be discussed in Section \ref{section: global existence}. Now we have \begin{theorem}[Existence and uniqueness of global-in-time solution near equilibrium]\label{thm: global existence near equilibrium} There exist universal $\varepsilon_*, \xi_*>0$, such that for $\forall\, X_0(s)\in H^{5/2}(\mathbb{T})$ satisfying \begin{align}
\|X_0(s) - X_{0*}(s)\|_{\dot{H}^{5/2}(\mathbb{T})}\leq &\;\varepsilon_* R_{X_0},\label{eqn: closeness condition of H 2.5 norm}\\
\|X_0(s) - X_{0*}(s)\|_{\dot{H}^{1}(\mathbb{T})}\leq &\;\xi_* R_{X_0},\label{eqn: closeness condition of H 1 norm} \end{align} with $X_{0*}(s)$ being the closest equilibrium configuration to $X_0(s)$, there exists a unique solution $X(s,t)\in C_{[0,+\infty)}H^{5/2}\cap L^2_{[0,+\infty),loc}H^3(\mathbb{T})$ satisfying $X_t(s,t)\in L^2_{[0,+\infty),loc}H^2(\mathbb{T})$ for the immersed boundary problem \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. It satisfies the following estimates \begin{align}
\|X-X_{*}\|_{L^{\infty}_{[0,+\infty)}\dot{H}^{5/2}(\mathbb{T})}\leq &\; \sqrt{2}\varepsilon_* R_{X_0},\label{eqn: estimates on the distance to the equilibrium for the global solution in all time intervals}\\
\left|X(s_1,t) - X(s_2,t)\right| \geq &\;\frac{1}{2\pi}|s_1 - s_2|,\quad \forall \,t\in[0,+\infty),\;s_1,s_2\in\mathbb{T}.\label{eqn: well-stretched constant estimates for the global solution in all time intervals} \end{align} In particular, \begin{equation}
\|X\|_{L^{\infty}_{[0,+\infty)}\dot{H}^{5/2}(\mathbb{T})}\leq CR_{X_0} \label{eqn: uniform bound of H 2.5 norm for the global solution} \end{equation} for some universal $C$. \end{theorem}
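As a side remark (this computation is ours and not part of the paper), the minimization \eqref{eqn: define closest equilibrium and optimal parameters} defining the closest equilibrium configuration can be solved in closed form, assuming the first Fourier coefficient below is nonzero (which holds for curves close to a circle):

```latex
% Expanding the objective with y(s) \triangleq Y^1(s) + i\,Y^2(s):
\int_{\mathbb{T}} |Y - Y_{\theta,x}|^2\,ds
  = \int_{\mathbb{T}} |Y|^2\,ds + 2\pi\big(R_Y^2 + |x|^2\big)
    - 2x\cdot\int_{\mathbb{T}} Y\,ds
    - 2R_Y\,\mathrm{Re}\Big[e^{-i\theta}\int_{\mathbb{T}} y(s)\,e^{-is}\,ds\Big].
% The minimizations in x and \theta decouple: the quadratic in x is
% minimized at the mean of Y, and the \theta-term by aligning the phase:
x_* = \frac{1}{2\pi}\int_{\mathbb{T}} Y(s)\,ds,
\qquad
\theta_* = \arg\int_{\mathbb{T}} y(s)\,e^{-is}\,ds .
```

In particular, $x_*$ is the mean of $Y$ over $\mathbb{T}$, and $\theta_*$ is the phase of the first Fourier mode of $Y$.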
\begin{theorem}[Exponential convergence to the equilibria]\label{thm: exponential convergence} Let $X_0\in H^{5/2}(\mathbb{T})$ satisfy all the assumptions in Theorem \ref{thm: global existence near equilibrium} and let $X$ be the unique global solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} starting from $X_0$ obtained in Theorem \ref{thm: global existence near equilibrium}. There exist universal constants $\xi_{**}, \alpha_*>0$, such that if in addition \begin{equation*}
\|X_0(s) - X_{0*}(s)\|_{\dot{H}^{1}(\mathbb{T})}\leq \xi_{**} R_{X_0},\label{eqn: closeness condition of H 1 norm for exp convergence} \end{equation*} then \begin{enumerate} \item With some universal constant $C>0$, \begin{equation} \begin{split}
&\;\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t) \\
&\;\quad \leq Ce^{-\alpha_* t}\max\{\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})},\|X_0-X_{0*}\|_{\dot{H}^{1}(\mathbb{T})}(|\ln \|X_0-X_{0*}\|_{\dot{H}^{1}(\mathbb{T})}|+1)^2\}\\ &\;\quad \triangleq Ce^{-\alpha_* t} B(X_0). \end{split} \label{eqn: exp convergence in H2.5 norm} \end{equation} \item There exists an equilibrium configuration $X_\infty\triangleq x_\infty+(R_{X_0}\cos(s+\theta_\infty), R_{X_0}\sin(s+\theta_\infty))^T$, such that \begin{equation}
\|X(t)-X_{\infty}\|_{\dot{H}^{5/2}(\mathbb{T})}\leq CB(X_0)e^{-\alpha_* t}, \label{eqn: exp convergence to a fixed configuration} \end{equation} where $C$ is a universal constant and $B(X_0)$ is defined in \eqref{eqn: exp convergence in H2.5 norm}. \end{enumerate} \end{theorem}
The rest of the paper is organized as follows. In Section \ref{section: justification of contour dynamic formulation}, the reformulation in Proposition \ref{prop: tranform into contour dynamic formulation} is justified. We also discuss properties of the flow field and the law of energy dissipation in the system; their proofs are left to Appendix \ref{appendix section: study of the flow field}. In Section \ref{section: a priori estimates}, we prove a priori estimates needed for the local well-posedness of the contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. In particular, in Section \ref{section: preliminary a priori estimates}, we prove some preliminary estimates as building blocks of the more complicated bounds in Section \ref{section: a priori estimates of the immersed boundary problem}, which is devoted to computing the derivatives of $g_X$ and proving its $H^2$-estimate. In Section \ref{section: local existence and uniqueness}, we establish the local well-posedness of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. In Section \ref{section: global existence}, we show global-in-time existence of solutions of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} provided that the initial configuration is sufficiently close to an equilibrium configuration. In Section \ref{section: exp convergence}, we first prove in Section \ref{section: lower bound for energy dissipation rate} a lower bound for the rate of energy dissipation when the solution is close to an equilibrium. Based on that, we show exponential convergence of the solution to an equilibrium configuration in Section \ref{section: proof of exponential convergence to equilibrium configurations}. Some other auxiliary results are stated and proved in Appendices \ref{appendix section: estimates involving L} and \ref{appendix section: auxiliary calculations}.
\section{Problem Reformulation and the Flow Field}\label{section: justification of contour dynamic formulation}
\subsection{Proof of Proposition \ref{prop: tranform into contour dynamic formulation}}\label{section: proof of contour dynamic formulation} We first justify Proposition \ref{prop: tranform into contour dynamic formulation}, which reformulates the original immersed boundary problem \eqref{eqn: stokes equation}-\eqref{eqn: kinematic equation of membrane} into the contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. Some of the arguments are redundant for proving the proposition itself, but we still derive them here as they will be useful in proving Lemma \ref{lemma: the velocity field is continuous} and Lemma \ref{lemma: energy estimate} below.
\begin{proof}[Proof of Proposition \ref{prop: tranform into contour dynamic formulation}] In 2-D stationary Stokes flow, the velocity field $u$ and the pressure $p$ are instantaneously determined by the forcing $f$ through fundamental solutions \begin{equation}
G(x) = \frac{1}{4\pi}\left(-\ln |x| Id +\frac{x \otimes x}{|x|^2}\right),\quad Q(x) = \frac{x}{2\pi|x|^2},\label{eqn: fundamental solution for pressure for 2D Stokes equation} \end{equation} respectively \cite{pozrikidis1992boundary}, where $Id$ is the $2\times 2$-identity matrix. Hence, \begin{equation} \begin{split} u_X(x,t) =&\;\int_{\mathbb{R}^2} G(x-y)f(y,t) \,dy=\int_{\mathbb{R}^2}\int_\mathbb{T} G(x-y)\delta(y-X(s',t))F(s',t) \,ds'dy\\ =&\;\int_{\mathbb{T}} G(x-X(s',t))X_{ss}(s',t) \,ds'. \end{split} \label{eqn: expression for velocity field} \end{equation} This is well-defined for $x\not\in\Gamma_t$ and $X(\cdot,t)\in H^2(\mathbb{T})$. The subscript of $u_X$ stresses that it is determined by the configuration $X$. For $x = X(s,t)\in\Gamma_t$, by \eqref{eqn: well_stretched assumption}, \begin{equation*}
|G(X(s)-X(s'))|\leq C(\lambda)(1+|\ln |s-s'||). \end{equation*} Hence, $G(X(s)-X(\cdot))\in L^2(\mathbb{T})$ and \eqref{eqn: expression for velocity field} is well-defined.
For $x\not \in \Gamma_t$, we integrate by parts in \eqref{eqn: expression for velocity field} and find that \begin{equation} u^i_X(x) = \int_{\mathbb{T}} -\partial_{s'} [G^{ij}(x-X(s'))][X'(s')-C_x]^j \,ds', \label{eqn: expression of velocity field after integration by parts} \end{equation} where the superscripts stand for the indices of entries, and $C_x$ is an arbitrary constant vector independent of $s'$. We may take $C_x = X'(s_x)$, where $s_x$ is defined by \begin{equation}
|x-X(s_x)| = \inf_{s\in\mathbb{T}}|x-X(s)| = \mathrm{dist}(x,X(\mathbb{T})). \label{eqn: definition of s_x} \end{equation} Note that $s_x$ may not be unique; pick an arbitrary one if it is the case. Hence, \begin{equation}
u_X(x) = \int_{\mathbb{T}} -\partial_{s'} [G(x-X(s'))](X'(s')-X'(s_x))\,ds'.
\label{eqn: 2D velocity field} \end{equation} Similarly, by integration by parts and taking the undetermined constant to be $0$, we find for $x\not \in \Gamma_t$, \begin{equation}
p_X(x,t) =\frac{1}{2\pi}\int_{\mathbb{T}} \frac{|X'(s')|^2}{|X(s')-x|^2} - \frac{2[(X(s')-x)\cdot X'(s')]^2}{|X(s')-x|^4}\,ds'. \label{eqn: 2D pressure field} \end{equation}
For $x = X(s,t)\in \Gamma_t$, by \eqref{eqn: expression for velocity field}, \begin{equation*} \begin{split}
u_X(X(s)) =&\;\lim_{\varepsilon \rightarrow 0^+}\int_{|s'-s|\geq \varepsilon} G(X(s)-X(s'))X''(s') \,ds'\\
=&\;\lim_{\varepsilon \rightarrow 0^+}\int_{|s'-s|\geq \varepsilon} -\partial_{s'}[G(X(s)-X(s'))](X'(s')-X'(s)) \,ds'\\
&\;+\lim_{\varepsilon \rightarrow 0^+}G(X(s)-X(s-\varepsilon))(X'(s-\varepsilon)-X'(s)) \\
&\;-\lim_{\varepsilon \rightarrow 0^+}G(X(s)-X(s+\varepsilon))(X'(s+\varepsilon)-X'(s)). \end{split} \end{equation*} Using \eqref{eqn: well_stretched assumption} and the assumption that $X(\cdot,t)\in H^2(\mathbb{T})$, we find \begin{equation*} \begin{split}
|G(X(s)-X(s-\varepsilon))(X'(s-\varepsilon)-X'(s))|\leq &\;C(\lambda)(1+|\ln \varepsilon|)\varepsilon^{1/2}\|X'\|_{\dot{C}^{1/2}(\mathbb{T})}\\
\leq &\;C(\lambda)(1+|\ln \varepsilon|)\varepsilon^{1/2}\|X\|_{\dot{H}^2(\mathbb{T})}. \end{split} \end{equation*}
It goes to $0$ as $\varepsilon\rightarrow 0^+$. A similar bound holds for $|G(X(s)-X(s+\varepsilon))(X'(s+\varepsilon)-X'(s))|$. Hence, \begin{equation} \begin{split} u_X(X(s))=&\;\mathrm{p.v.}\int_{\mathbb{T}} -\partial_{s'}[G(X(s)-X(s'))](X'(s')-X'(s)) \,ds'\\
=&\;\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}} \left[\frac{(X(s')-X(s))\cdot X'(s')}{|X(s')-X(s)|^2}Id \right.\\
&\;\quad- \frac{X'(s')\otimes (X(s')-X(s))+(X(s')-X(s))\otimes X'(s')}{|X(s')-X(s)|^2}\\
&\;\left.\quad+\frac{2(X(s')-X(s))\cdot X'(s') (X(s')-X(s))\otimes (X(s')-X(s))}{|X(s')-X(s)|^4}\right](X'(s')-X'(s)) \,ds'. \end{split} \label{eqn: velocity of membrane} \end{equation} In \eqref{eqn: introduce the notation Gamma_0}, we denoted the integrand in \eqref{eqn: velocity of membrane} by $\Gamma_0(s,s')$. It is trivial to show that \begin{equation*}
|\Gamma_0(s,s')|\leq C\lambda^{-1} |s'-s|^{-1/2}\|X\|_{\dot{C}^1(\mathbb{T})} \|X'\|_{\dot{C}^{1/2}(\mathbb{T})}\leq C\lambda^{-1}|s'-s|^{-1/2}\|X\|_{\dot{H}^2(\mathbb{T})}^2. \end{equation*} Hence, $\Gamma_0(s,s')$ is integrable, and the principal value integral in \eqref{eqn: velocity of membrane} can be replaced by the usual integral. As a byproduct, we also find a bound for $u_X(X(s))$, \begin{equation}
|u_X(X(s))|\leq C\lambda^{-1}\|X\|_{\dot{H}^2(\mathbb{T})}^2. \label{eqn: a trivial L^infty bound for velocity} \end{equation} \eqref{eqn: velocity of membrane} together with \eqref{eqn: kinematic equation of membrane} gives \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. Once \eqref{eqn: contour dynamic formulation of the immersed boundary problem} is solved, we can recover $u$ and $p$ by \eqref{eqn: expression for velocity field} and \eqref{eqn: 2D pressure field}. The original immersed boundary problem is then solved. This completes the proof of Proposition \ref{prop: tranform into contour dynamic formulation}. \end{proof}
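To illustrate \eqref{eqn: velocity of membrane} (this numerical check is ours, not part of the paper), recall that uniformly parametrized circles are equilibrium configurations, so the membrane velocity should vanish on the unit circle. The discretization below is a plain trapezoidal rule that skips the removable diagonal point $s'=s$.

```python
import numpy as np

# Membrane velocity of the uniformly parametrized unit circle via the
# regularized integrand of the contour dynamic formulation; since the
# circle is an equilibrium, the result should be small (O(1/N)).
N = 512
h = 2 * np.pi / N
s = h * np.arange(N)
X = np.stack([np.cos(s), np.sin(s)], axis=1)    # X(s)
Xp = np.stack([-np.sin(s), np.cos(s)], axis=1)  # X'(s)

u = np.zeros((N, 2))
for i in range(N):
    d = X - X[i]                 # X(s') - X(s)
    v = Xp - Xp[i]               # X'(s') - X'(s)
    dd = np.sum(d * d, axis=1)   # |X(s') - X(s)|^2
    dd[i] = 1.0                  # dummy; the i-th term is zeroed below
    dt = np.sum(d * Xp, axis=1)  # (X(s')-X(s)) . X'(s')
    dv = np.sum(d * v, axis=1)
    tv = np.sum(Xp * v, axis=1)
    integrand = (dt / dd)[:, None] * v \
        - (Xp * dv[:, None] + d * tv[:, None]) / dd[:, None] \
        + 2 * (dt * dv / dd ** 2)[:, None] * d
    integrand[i] = 0.0           # skip the removable singularity at s' = s
    u[i] = h / (4 * np.pi) * np.sum(integrand, axis=0)

max_speed = np.max(np.abs(u))
print(max_speed)
```

The only quadrature error here comes from the omitted diagonal node, where the integrand has the finite limit $\frac{1}{4\pi}X''(s)$, so the computed speed is of size $h/4\pi$.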
\begin{remark} \eqref{eqn: velocity of membrane} can be equivalently written as \begin{equation} \begin{split} u_X(X(s))=&\;\mathrm{p.v.}\int_{\mathbb{T}} -\partial_{s'}[G(X(s)-X(s'))]X'(s') \,ds'\\
=&\;\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}} \left[- \frac{|X'(s')|^2}{|X(s')-X(s)|^2}+\frac{2[(X(s')-X(s))\cdot X'(s')]^2}{|X(s')-X(s)|^4} \right](X(s')-X(s)) \,ds'. \end{split} \label{eqn: equivalent formualtion of membrane velocity} \end{equation} Indeed, under the assumptions $X(\cdot,t)\in H^2(\mathbb{T})$ and \eqref{eqn: well_stretched assumption}, \begin{equation} \mathrm{p.v.}\int_{\mathbb{T}} -\partial_{s'}[G(X(s)-X(s'))]\,ds' = \lim_{\varepsilon\rightarrow 0^+} G(X(s)-X(s+\varepsilon)) - G(X(s)-X(s-\varepsilon)) = 0. \label{eqn: pv integral vanishes} \end{equation}
To justify the last limit, note that, since $|X'(s)|\geq \lambda$, \begin{equation*}
\ln \frac{|X(s)-X(s+\varepsilon)|}{|X(s)-X(s-\varepsilon)|} = \ln \frac{|X(s)-X(s+\varepsilon)|/\varepsilon}{|X(s)-X(s-\varepsilon)|/\varepsilon}\rightarrow \ln \frac{|X'(s)|}{|X'(s)|} = 0, \end{equation*} and similarly, \begin{equation*}
\frac{(X(s)-X(s\pm\varepsilon))\otimes (X(s)-X(s\pm\varepsilon))}{|X(s)-X(s\pm \varepsilon)|^2}\rightarrow \frac{X'(s)\otimes X'(s)}{|X'(s)|^2}. \end{equation*} \eqref{eqn: equivalent formualtion of membrane velocity} can be viewed as taking $C_x = 0$ in \eqref{eqn: expression of velocity field after integration by parts}. \qed \end{remark}
\begin{remark} The reason we single out the term $\mathcal{L}X$ in Proposition \ref{prop: tranform into contour dynamic formulation} is the following suggestive calculation starting from \eqref{eqn: equivalent formualtion of membrane velocity}. Note that the integrals in \eqref{eqn: velocity of membrane} and \eqref{eqn: equivalent formualtion of membrane velocity} give the same value, so we use them interchangeably.
Suppose $X(\cdot,t)$ is sufficiently smooth. There is a singularity in the integrand of \eqref{eqn: equivalent formualtion of membrane velocity} as $s'\rightarrow s$. For $s'$ close to $s$, we formally use $(s'-s)X'(s')$ to approximate $X(s')-X(s)$ in \eqref{eqn: equivalent formualtion of membrane velocity}. In this way, when $|s'-s|$ is sufficiently small, we formally find \begin{equation*} -\partial_{s'}[G(X(s)-X(s'))]X'(s')\sim \frac{1}{4\pi} \frac{X'(s')}{s'-s}\sim -\frac{1}{4}\cdot \frac{X'(s')}{2\pi\tan\left(\frac{s-s'}{2}\right)}, \end{equation*} which presumably accounts for the principal part of the singular integral in \eqref{eqn: equivalent formualtion of membrane velocity}. Recall that the Hilbert transform $\mathcal{H}$ on $\mathbb{T}$ is defined as \cite{grafakos2008classical} \begin{equation*} \mathcal{H}Y(s) = \frac{1}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\cot\left(\frac{s-s'}{2}\right)Y(s')\,ds'. \end{equation*} Hence, if we take out $-\frac{1}{4}\mathcal{H}X' = \mathcal{L}X$ in \eqref{eqn: equivalent formualtion of membrane velocity}, what remains is \emph{expected} to be regular. We shall see that $\mathcal{L}X$ provides a nice dissipation property that helps prove well-posedness of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. See Lemma \ref{lemma: improved Hs estimate and Hs continuity of semigroup solution} and Lemma \ref{lemma: a priori estimate of nonlocal eqn} for some relevant estimates.
It should be noted that this very idea has been adopted in the early numerical literature, for example, to remove stiffness in computing the evolution of an elastic immersed boundary in 2-D Stokes flow or the motion of an interface with surface tension in 2-D incompressible irrotational flow. See e.g.\;\cite{hou2008removing, hou1994removing} and references therein. \qed \end{remark}
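The identification $-\frac{1}{4}\mathcal{H}X' = \mathcal{L}X$ used in the remark above is immediate on the Fourier side; we record the multiplier computation (ours) for convenience.

```latex
% For Y(s) = \sum_{k\in\mathbb{Z}} \hat{Y}_k e^{iks}, the multipliers are
% \mathcal{H}: -i\,\mathrm{sgn}(k) and \partial_s: ik, so
\mathcal{H}\partial_s Y
  = \sum_{k\in\mathbb{Z}} \big(-i\,\mathrm{sgn}(k)\big)(ik)\,\hat{Y}_k e^{iks}
  = \sum_{k\in\mathbb{Z}} |k|\,\hat{Y}_k e^{iks}
  = (-\Delta)^{1/2} Y,
\qquad\text{hence}\qquad
-\tfrac{1}{4}\mathcal{H}Y' = -\tfrac{1}{4}(-\Delta)^{1/2}Y = \mathcal{L}Y.
```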
\subsection{Regularity of the flow field and energy dissipation}\label{section: energy estimate} As is mentioned above, once \eqref{eqn: contour dynamic formulation of the immersed boundary problem} is solved, we can obtain the flow field $u_X$ by \eqref{eqn: expression for velocity field}. The following lemma characterizes its regularity. \begin{lemma}\label{lemma: the velocity field is continuous} Let $X(\cdot,t)\in H^2(\mathbb{T})$ and satisfy the well-stretched condition \eqref{eqn: well_stretched assumption}. Then $u_X(\cdot,t)$ defined by \eqref{eqn: expression for velocity field} (or equivalently \eqref{eqn: 2D velocity field}, \eqref{eqn: velocity of membrane} and \eqref{eqn: equivalent formualtion of membrane velocity}) is continuous in $\mathbb{R}^2$. Moreover, $\nabla u_X(\cdot ,t)\in L^2(\mathbb{R}^2)$. \end{lemma}
\begin{remark} That $u_X(x,t)$ is continuous throughout $\mathbb{R}^2$ agrees with the intuition that the string moves with the ambient flow, and there is no jump in velocity across the string. \qed \end{remark}
As a dissipative system, the Stokes immersed boundary problem enjoys a natural law of energy dissipation, which is useful in proving the existence and asymptotic behavior of the global solution near equilibrium in Sections \ref{section: global existence} and \ref{section: exp convergence}.
\begin{lemma}\label{lemma: energy estimate} Assume $X(s,t)\in C_{T}H^2(\mathbb{T})$ with $X_t(s,t)\in L^2_{T}H^1(\mathbb{T})$ is a solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} with some $T>0$ satisfying \eqref{eqn: well_stretched assumption} with constant $\lambda >0$, and $u_X(x,t)$ is the corresponding velocity field defined by the Stokes equation \eqref{eqn: stokes equation}, with $\nabla u_X(x,t)\in L^\infty_{T}L^2(\mathbb{R}^2)$ (shown in \eqref{eqn: a trivial bound for the energy dissipation rate or H1 semi norm of velocity field} in the proof of Lemma \ref{lemma: the velocity field is continuous}). Then \begin{equation}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}}|X'(s,t)|^2\,ds = -\int_{\mathbb{R}^2}|\nabla u_X(x,t)|^2\,dx \label{eqn: energy estimate on each time slice simplified version} \end{equation} holds in the scalar distribution sense, and \begin{equation}
\frac{1}{2}\int_{\mathbb{T}}|X'(s,T)|^2\,ds - \frac{1}{2}\int_{\mathbb{T}}|X'(s,0)|^2\,ds =-\int_{0}^T\int_{\mathbb{R}^2}|\nabla u_X(x,t)|^2\,dxdt. \label{eqn: energy estimate of Stokes immersed boundary problem} \end{equation}
In particular, the total elastic energy of the string $\mathcal{E}_X \triangleq \frac{1}{2}\|X(\cdot,t)\|_{\dot{H}^1(\mathbb{T})}^2$ always decreases in $t$. \end{lemma}
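At a purely formal level (this sketch is ours and ignores all regularity issues; the rigorous argument is in the Appendix), the identity \eqref{eqn: energy estimate on each time slice simplified version} follows from testing the Stokes system with $u_X$:

```latex
% Using X_t(s,t) = u_X(X(s,t),t), F = X_{ss},
% f(x,t) = \int_{\mathbb{T}} F(s,t)\,\delta(x - X(s,t))\,ds, and
% -\Delta u_X + \nabla p_X = f, \ \nabla\cdot u_X = 0:
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}}|X'|^2\,ds
  = \int_{\mathbb{T}} X'\cdot X_t'\,ds
  = -\int_{\mathbb{T}} X_{ss}\cdot u_X(X(s,t),t)\,ds
  = -\int_{\mathbb{R}^2} f\cdot u_X\,dx
  = -\int_{\mathbb{R}^2} |\nabla u_X|^2\,dx,
% where the pressure term drops out since u_X is divergence-free.
```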
The proofs of these lemmas are technical. We leave them to Appendix \ref{appendix section: study of the flow field}.
\section{A Priori Estimates}\label{section: a priori estimates} In this section, we shall prove a priori estimates that are needed in proving well-posedness of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}.
\subsection{Preliminaries}\label{section: preliminary a priori estimates}
First we introduce some notations that will be heavily used in the rest of the paper. Suppose $X\in H^3(\mathbb{T})$. For $s,s'\in\mathbb{T}$, let $\tau = s'-s\in[-\pi,\pi)$. For $s'\not = s$, define \begin{equation} L(s,s') = \frac{X(s')-X(s)}{\tau},\quad M(s,s') = \frac{X'(s')-X'(s)}{\tau},\quad N(s,s') = \frac{L(s,s')-X'(s)}{\tau}. \label{eqn: definition of L M N} \end{equation} and \begin{equation} L(s,s) = X'(s),\quad M(s,s) = X''(s),\quad N(s,s) =\frac{1}{2}X''(s). \label{eqn: definition of L M N at s} \end{equation} It is straightforward to calculate that for $s'\not = s$, \begin{equation} \partial_s L(s,s') = N(s,s'),\quad \partial_s M(s,s') = \frac{M(s,s')-X''(s)}{\tau},\quad \partial_s N(s,s') = \frac{2N(s,s')-X''(s)}{\tau}. \label{eqn: derivatives of L M N wrt s} \end{equation} In the sequel, we shall omit the arguments in $L(s,s')$, $M(s,s')$ and $N(s,s')$ whenever it is convenient. Without assuming the well-stretched assumption \eqref{eqn: well_stretched assumption}, we have the following estimates for $L$, $M$ and $N$, which will be building blocks of more complicated estimates in Section \ref{section: a priori estimates of the immersed boundary problem}.
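Before stating the estimates, it may help to record where the first identity in \eqref{eqn: derivatives of L M N wrt s} comes from; the following one-line verification is ours.

```latex
% With \tau = s'-s (so \partial_s \tau = -1 at fixed s'), differentiating
% L(s,s') = (X(s')-X(s))/\tau in s gives
\partial_s L(s,s')
  = \frac{-X'(s)\,\tau + \big(X(s')-X(s)\big)}{\tau^2}
  = \frac{L(s,s') - X'(s)}{\tau}
  = N(s,s'),
% and the identities for \partial_s M and \partial_s N follow in the same way.
```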
\begin{lemma}\label{lemma: estimates for L M N}
\begin{enumerate} \item For $\forall\, 1\leq p\leq q \leq \infty$, $q>1$ and any interval $I\subset\mathbb{T}$ satisfying $0\in I$ \begin{align}
\|L(s,\cdot)\|_{L^p(s+I)} \leq &\;C|I|^{\frac{1}{p}-\frac{1}{q}}\|X'\|_{L^q(s+I)},\label{eqn: Lp estimate for L}\\
\|M(s,\cdot)\|_{L^p(s+I)} \leq &\;C|I|^{\frac{1}{p}-\frac{1}{q}}\|X''\|_{L^q(s+I)},\label{eqn: Lp estimate for M}\\
\|N(s,\cdot)\|_{L^p(s+I)} \leq &\;C|I|^{\frac{1}{p}-\frac{1}{q}}\|X''\|_{L^q(s+I)},\label{eqn: Lp estimate for N}\\
\|\partial_s M(s,\cdot)\|_{L^p(s+I)} \leq &\;C|I|^{\frac{1}{p}-\frac{1}{q}}\|X'''\|_{L^q(s+I)},\label{eqn: Lp estimate for M'}\\
\|\partial_s N(s,\cdot)\|_{L^p(s+I)} \leq &\;C|I|^{\frac{1}{p}-\frac{1}{q}}\|X'''\|_{L^q(s+I)},\label{eqn: Lp estimate for N'} \end{align} where the constants $C>0$ only depend on $p$ and $q$. \item For $\forall\, 1< p\leq q \leq \infty$ and any interval $I\subset\mathbb{T}$ satisfying $0\in I$ \begin{align}
\|L(s,s')\|_{L^q_{s}(\mathbb{T})L^p_{s'}(s+I)} \leq &\;C|I|^{1/q}\|X'\|_{L^p(\mathbb{T})},\label{eqn: double Lp estimate for L}\\
\|M(s,s')\|_{L^q_{s}(\mathbb{T})L^p_{s'}(s+I)} \leq &\;C|I|^{1/q}\|X''\|_{L^p(\mathbb{T})},\label{eqn: double Lp estimate for M}\\
\|N(s,s')\|_{L^q_{s}(\mathbb{T})L^p_{s'}(s+I)} \leq &\;C|I|^{1/q}\|X''\|_{L^p(\mathbb{T})},\label{eqn: double Lp estimate for N}\\
\|\partial_s M(s,s')\|_{L^q_{s}(\mathbb{T})L^p_{s'}(s+I)} \leq &\;C|I|^{1/q}\|X'''\|_{L^p(\mathbb{T})},\label{eqn: double Lp estimate for M'}\\
\|\partial_s N(s,s')\|_{L^q_{s}(\mathbb{T})L^p_{s'}(s+I)} \leq &\;C|I|^{1/q}\|X'''\|_{L^p(\mathbb{T})},\label{eqn: double Lp estimate for N'} \end{align} where the constants $C>0$ only depend on $p$ and $q$. \item Let $\mathcal{M}$ be the centered Hardy-Littlewood maximal operator on $\mathbb{T}$. Then for $\forall\, s,s'\in\mathbb{T}$, \begin{equation}
|L(s,s')|\leq 2\mathcal{M} X'(s),\quad |M(s,s')|\leq 2\mathcal{M} X''(s),\quad |N(s,s')|\leq 2\mathcal{M} X''(s).\label{eqn: bound for L M N by maximal function} \end{equation} \item If $X\in C^2(\mathbb{T})$, \begin{equation} L(s,\cdot),M(s,\cdot),N(s,\cdot)\in C(\mathbb{T}). \label{eqn: continuity of L M N} \end{equation} \item Moreover, if \eqref{eqn: well_stretched assumption} is satisfied with constant $\lambda>0$, \begin{equation}
\lambda\leq |L(s,s')|\leq \|X'\|_{L^\infty}, \label{eqn: lower bound for L} \end{equation} and \begin{equation}
\lambda \leq \min_{s\in\mathbb{T}}|X'(s)|. \label{eqn: upper bound for lambda} \end{equation}
\end{enumerate}
\begin{proof} \eqref{eqn: lower bound for L} and \eqref{eqn: upper bound for lambda} are obvious. To prove the $L^p$-estimates and the continuity of $L$, $M$ and $N$, we rewrite \begin{align*} &\;L(s,s') =\frac{1}{\tau} \int_0^{\tau} X'(s+\theta)\,d\theta = \int_0^1 X'(s+\tau\theta)\,d\theta,\\ &\;M(s,s') =\frac{1}{\tau} \int_0^{\tau} X''(s+\theta)\,d\theta = \int_0^1 X''(s+\tau\theta)\,d\theta, \end{align*} \begin{equation*} \begin{split} N(s,s') =&\;\frac{1}{\tau^2} \int_0^{\tau} (X'(s+\theta)-X'(s))\,d\theta = \frac{1}{\tau^2} \int_0^{\tau} \int_0^{\theta} X''(s+\omega)\,d\omega d\theta\\ =&\;\frac{1}{\tau^2} \int_0^\tau \theta\int_0^1 X''(s+\theta\omega)\,d\omega d\theta=\int_0^1 \theta\int_0^1 X''(s+\tau\theta\omega)\,d\omega d\theta, \end{split} \end{equation*} \begin{equation*} \begin{split} \partial_s M(s,s') =&\;\frac{1}{\tau^2} (X'(s')-X'(s)-\tau X''(s)) = \frac{1}{\tau^2} \int_{0}^{\tau} X''(s+\theta)-X''(s)\,d\theta\\ =&\;\frac{1}{\tau^2} \int_{0}^{\tau} \int_{0}^\theta X'''(s+\omega)\,d\omega d\theta = \int_0^1 \theta\int_0^1 X'''(s+\tau\theta\omega)\,d\omega d\theta, \end{split} \end{equation*} and \begin{equation*} \begin{split} \partial_s N(s,s') =&\;\frac{2}{\tau^3} \left(X(s')-X(s)-\tau X'(s)-\frac{1}{2}\tau^2 X''(s)\right)\\ =&\;\frac{2}{\tau^3}\left(\int_0^{\tau} X'(s+\theta)\,d\theta-\tau X'(s)-\frac{1}{2}\tau^2 X''(s)\right)\\ =&\;\frac{2}{\tau^3}\left(\int_0^{\tau} \int_0^{\theta} X''(s+\omega)\,d\omega d\theta-\frac{1}{2}\tau^2 X''(s)\right)\\ =&\;\frac{2}{\tau^3}\int_0^{\tau} \int_0^{\theta} \int_0^{\omega} X'''(s+\xi)\,d\xi d\omega d\theta\\
=&\;2\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} X'''(s+\tau\theta\omega\xi)\,d\xi d\omega d\theta. \end{split} \end{equation*} \eqref{eqn: continuity of L M N} is immediate by the continuity of $X'$ and $X''$ at $s$. To prove \eqref{eqn: bound for L M N by maximal function}, we use the above representation to derive that \begin{align*}
|L(s,s')| \leq &\;\frac{1}{\tau} \int_0^{\tau} |X'(s+\theta)|\,d\theta \leq \frac{1}{\tau} \int_{-\tau}^{\tau} |X'(s+\theta)|\,d\theta \leq 2\mathcal{M}X'(s),\\
|M(s,s')| \leq &\;\frac{1}{\tau} \int_0^{\tau} |X''(s+\theta)|\,d\theta \leq \frac{1}{\tau} \int_{-\tau}^{\tau} |X''(s+\theta)|\,d\theta \leq 2\mathcal{M}X''(s),\\
|N(s,s')| \leq &\;\frac{1}{\tau^2} \int_0^{\tau} \int_0^{\theta} |X''(s+\omega)|\,d\omega d\theta \leq \frac{1}{\tau} \int_0^{\tau} |X''(s+\omega)|\,d\omega \leq 2\mathcal{M}X''(s). \end{align*}
Now we turn to \eqref{eqn: Lp estimate for L}- \eqref{eqn: double Lp estimate for N'}. When $p = q =\infty$, \eqref{eqn: Lp estimate for L}- \eqref{eqn: double Lp estimate for N'} immediately follow from the above representations. When $1\leq p\leq q \leq \infty$, $p<\infty$ and $q>1$, we find that \begin{equation*} \begin{split}
\|L(s,\cdot)\|_{L^p(s+I)} = &\;\left(\int_{I} d\tau\left|\int_0^1 X'(s+\tau\theta)\,d\theta\right|^{p}\right)^{\frac{1}{p}}\\
\leq &\; C\int_0^1\left(\int_{I} d\tau\left| X'(s+\tau\theta)\right|^{p}\right)^{\frac{1}{p}}\,d\theta\\
= &\; C\int_0^1\theta^{-\frac{1}{p}}\left(\int_{s+\theta I} ds'\left| X'(s')\right|^{p}\right)^{\frac{1}{p}}\,d\theta\\
\leq &\; C\int_0^1\theta^{-\frac{1}{p}}|\theta I|^{\frac{1}{p}-\frac{1}{q}}\|X'\|_{L^q(s+I)}\,d\theta \leq C|I|^{\frac{1}{p}-\frac{1}{q}}\|X'\|_{L^q(s+I)}. \end{split} \end{equation*} We applied Minkowski's inequality in the second line and H$\mathrm{\ddot{o}}$lder's inequality in the fourth line; we also used the fact that $s+\theta I \subset s+I$. This proves \eqref{eqn: Lp estimate for L}; \eqref{eqn: Lp estimate for M} could be proved in exactly the same way simply by replacing $X'$ by $X''$. For \eqref{eqn: Lp estimate for N}, \begin{equation*} \begin{split}
\|N(s,\cdot)\|_{L^p(s+I)} =&\; \left(\int_I d\tau\left|\int_0^1 \theta\int_0^1 X''(s+\tau\theta\omega)\,d\omega d\theta\right|^p\right)^{\frac{1}{p}}\\
\leq &\; C\int_0^1 \theta\int_0^1 \left(\int_I d\tau\left|X''(s+\tau\theta\omega)\right|^p\right)^\frac{1}{p} \,d\omega d\theta\\
= &\; C\int_0^1 \theta\int_0^1 \left(\frac{1}{\theta \omega}\int_{s+\theta \omega I} ds'\left|X''(s')\right|^p\right)^{\frac{1}{p}}\,d\omega d\theta\\
\leq &\; C\int_0^1 \int_0^1 \frac{\theta^{1-\frac{1}{p}}}{ \omega^{\frac{1}{p}}}|\theta\omega I|^{\frac{1}{p}-\frac{1}{q}}\|X''\|_{L^q(s+I)}\,d\omega d\theta\leq C|I|^{\frac{1}{p}-\frac{1}{q}}\|X''\|_{L^q(s+I)}.
\end{split} \end{equation*} \eqref{eqn: Lp estimate for M'} could be proved in exactly the same way simply by replacing $X''$ by $X'''$. For \eqref{eqn: Lp estimate for N'}, \begin{equation*} \begin{split}
\|\partial_s N(s,\cdot)\|_{L^p(s+I)} =&\; \left(\int_I d\tau\left|2\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} X'''(s+\tau\theta\omega\xi)\,d\xi d\omega d\theta\right|^p\right)^{\frac{1}{p}}\\
\leq &\; C\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} \left(\int_I d\tau|X'''(s+\tau\theta\omega\xi)|^p\right)^{\frac{1}{p}}\,d\xi d\omega d\theta\\
= &\; C\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} (\theta\omega\xi)^{-\frac{1}{p}}\left(\int_{s+\theta\omega\xi I} ds'|X'''(s')|^p\right)^{\frac{1}{p}}\,d\xi d\omega d\theta\\
\leq &\; C\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} (\theta\omega\xi)^{-\frac{1}{p}}|\theta\omega\xi I|^{\frac{1}{p}-\frac{1}{q}}\|X'''\|_{L^q(s+\theta\omega\xi I)}\,d\xi d\omega d\theta\\
\leq &\; C|I|^{\frac{1}{p}-\frac{1}{q}}\int_0^{1} \theta^2 \int_0^{1} \omega \int_0^{1} (\theta\omega\xi)^{-\frac{1}{q}}\|X'''\|_{L^q(s+I)}\,d\xi d\omega d\theta\\
\leq &\; C|I|^{\frac{1}{p}-\frac{1}{q}}\|X'''\|_{L^q(s+I)}. \end{split} \end{equation*}
For \eqref{eqn: double Lp estimate for L}, we first consider the case $p=q\in(1,\infty)$. \eqref{eqn: Lp estimate for L} implies that, $\|L(s,s')\|_{L^p_{s'}(s+I)}\leq C\|X'\|_{L^p(s+I)}$. Hence, by Fubini's Theorem, \begin{equation*}
\|L(s,s')\|_{L^{p}_{s}(\mathbb{T})L^p_{s'}(s+I)}\leq C\left(\int_{\mathbb{T}}\|X'\|^p_{L^p(s+I)}\,ds\right)^{1/p}\leq C|I|^{1/p}\|X'\|_{L^p(\mathbb{T})}. \end{equation*}
On the other hand, by \eqref{eqn: Lp estimate for L}, $\|L(s,s')\|_{L^{\infty}_{s}(\mathbb{T})L^p_{s'}(s+I)}\leq C\|X'\|_{L^p(\mathbb{T})}$. Hence, by interpolation between $L^p$-spaces, we obtain \eqref{eqn: double Lp estimate for L}. In a similar manner, we can prove \eqref{eqn: double Lp estimate for M}-\eqref{eqn: double Lp estimate for N'}. \end{proof} \end{lemma}
\subsection{$H^2$-estimate of $g_X$}\label{section: a priori estimates of the immersed boundary problem}
In Section \ref{section: local existence and uniqueness}, we will prove well-posedness of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} via a fixed-point-type argument, making use of the dissipative structure of the operator $\mathcal{L}$ (see Lemma \ref{lemma: improved Hs estimate and Hs continuity of semigroup solution} and Lemma \ref{lemma: a priori estimate of nonlocal eqn} in Appendix \ref{appendix section: estimates involving L}). To prepare for that, in this section we focus on the term $g_X$ in \eqref{eqn: contour dynamic formulation of the immersed boundary problem} and establish its $H^2$-estimate; recall that $g_X$ is defined in \eqref{eqn: definition of g_X}. We also prove an $H^2$-estimate of $g_{X_1}-g_{X_2}$, which will be used in proving the uniqueness of the local solution.
We start with a pointwise estimate of $g_X$. \begin{lemma}\label{lemma: L infty estimate for g_X} Suppose $X\in H^2(\mathbb{T})$ satisfies \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then \begin{equation}
|g_X(s)|\leq \frac{C}{\lambda}\|X'\|_{L^2}\|X''\|_{L^2}, \label{eqn: L infty estimate for g_X} \end{equation} where $C>0$ is a universal constant. \begin{proof} Recall that $\Gamma_0(s,s')$ is defined in \eqref{eqn: introduce the notation Gamma_0}. By \eqref{eqn: velocity of membrane} and the definitions of $L$ and $M$, we have \begin{equation} \begin{split}
\Gamma_0(s,s')=&\; \frac{1}{4\pi}\left(\frac{L\cdot X'(s')}{|L|^2}Id-\frac{X'(s')\otimes L + L\otimes X'(s')}{|L|^2}+\frac{2L\cdot X'(s')L\otimes L}{|L|^4}\right)M\\
=&\;\frac{1}{4\pi}\left(\frac{L\cdot X'(s')}{|L|^2}M-\frac{L\cdot M}{|L|^2}X'(s') -\frac{X'(s')\cdot M}{|L|^2}L+\frac{2L\cdot X'(s')L\cdot M}{|L|^4}L\right). \end{split} \label{eqn: simplification of integrand of g_X part 1} \end{equation} Hence, by \eqref{eqn: lower bound for L}, \begin{equation}
|\Gamma_0(s,s')|\leq C \frac{|M(s,s')||X'(s')|}{|L(s,s')|} \leq \frac{C}{\lambda} |M(s,s')||X'(s')|. \label{eqn: pointwise estimate of integrand of g_X part 1} \end{equation} This implies by H\"{o}lder's inequality and Lemma \ref{lemma: estimates for L M N} that \begin{equation}
\left|\int_{\mathbb{T}} \Gamma_0(s,s')\,ds'\right| \leq \frac{C}{\lambda} \|X'\|_{L^2(\mathbb{T})}\|X''\|_{L^2(\mathbb{T})}. \label{eqn: L infty estimate for g_X part 1} \end{equation} The other term in $g_X(s)$, $(-\Delta)^{1/2}X$, has mean zero on $\mathbb{T}$. By the Gagliardo-Nirenberg interpolation inequality, \begin{equation*}
\left|(-\Delta)^{1/2}X(s)\right|\leq C \left\|(-\Delta)^{1/2}X\right\|_{\dot{H}^1}^{1/2}\left\|(-\Delta)^{1/2}X\right\|_{L^2}^{1/2}\leq C \|X''\|_{L^2}^{1/2}\|X'\|_{L^2}^{1/2}. \end{equation*} Using \eqref{eqn: lower bound for L}, we find that \begin{equation}
\left|(-\Delta)^{1/2}X(s)\right| \leq C\frac{\|X'\|_{L^\infty}}{\lambda}\|X''\|^{1/2}_{L^2}\|X'\|_{L^2}^{1/2}\leq \frac{C}{\lambda}\|X''\|_{L^2}\|X'\|_{L^2}. \label{eqn: L infty estimate for g_X part 2} \end{equation} \eqref{eqn: L infty estimate for g_X} is then proved by combining \eqref{eqn: L infty estimate for g_X part 1} and \eqref{eqn: L infty estimate for g_X part 2}. \end{proof} \begin{remark} If we further assume $X\in H^3(\mathbb{T})\subset C^2(\mathbb{T})$, using the continuity of $L(s,\cdot)$ and $M(s,\cdot)$, it is not difficult to show in \eqref{eqn: simplification of integrand of g_X part 1} that \begin{equation} \lim_{s'\rightarrow s} \Gamma_0(s,s') = \frac{1}{4\pi} X''(s). \label{eqn: limit of integrand of g_X part 1 at s} \end{equation} This will be useful below in proving Lemma \ref{lemma: derivative of g_X}. \qed \end{remark} \end{lemma}
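\begin{remark} The Gagliardo-Nirenberg step above can be verified directly on the Fourier side (a standard splitting argument; the truncation level $N\geq 1$ below is a free parameter). For mean-zero $f\in H^1(\mathbb{T})$, by the Cauchy-Schwarz inequality, \begin{equation*} \|f\|_{L^\infty}\leq \sum_{0<|n|\leq N}|\hat{f}(n)|+\sum_{|n|>N}|\hat{f}(n)|\leq CN^{1/2}\|f\|_{L^2}+CN^{-1/2}\|f\|_{\dot{H}^1}. \end{equation*} Choosing $N\sim \|f\|_{\dot{H}^1}/\|f\|_{L^2}$ (which is admissible since $\|f\|_{\dot{H}^1}\geq \|f\|_{L^2}$ for mean-zero $f$) yields $\|f\|_{L^\infty}\leq C\|f\|_{\dot{H}^1}^{1/2}\|f\|_{L^2}^{1/2}$; applying this with $f=(-\Delta)^{1/2}X$ gives the inequality used in the proof. \qed \end{remark}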
\begin{corollary}\label{coro: L2 estimate for g_X1-g_X2} Let $X_1(s),X_2(s)\in H^2(\mathbb{T})$ both satisfy \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then \begin{equation}
\|g_{X_1}(s)-g_{X_2}(s)\|_{L^2}\leq C\lambda^{-2} (\|X_1\|_{\dot{H}^2}+\|X_2\|_{\dot{H}^2})^2\|X_1-X_2\|_{\dot{H}^2}, \label{eqn: L2 estimate for g_X1-g_X2} \end{equation} where $C>0$ is a universal constant. \begin{proof} By the definition of $g_X$ in \eqref{eqn: definition of g_X} and \eqref{eqn: simplification of integrand of g_X part 1}, \begin{equation} \begin{split} &\;g_{X_1}(s)-g_{X_2}(s) \\
= &\;\int_\mathbb{T}ds'\,\frac{1}{4\pi}\left(\frac{L_1\cdot X_1'(s')}{|L_1|^2}M_1-\frac{L_1\cdot M_1}{|L_1|^2}X_1'(s') -\frac{X_1'(s')\cdot M_1}{|L_1|^2}L_1+\frac{2L_1\cdot X_1'(s')L_1\cdot M_1}{|L_1|^4}L_1\right)\\
&\;-\int_\mathbb{T}ds'\,\frac{1}{4\pi}\left(\frac{L_2\cdot X_2'(s')}{|L_2|^2}M_2-\frac{L_2\cdot M_2}{|L_2|^2}X_2'(s') -\frac{X_2'(s')\cdot M_2}{|L_2|^2}L_2+\frac{2L_2\cdot X_2'(s')L_2\cdot M_2}{|L_2|^4}L_2\right)\\ &\;-\mathcal{L}X_1(s)+\mathcal{L}X_2(s). \end{split} \label{eqn: difference of X_t at two moments} \end{equation} Here $L_i$, $M_i$ and $X_i'$ denote the corresponding quantities associated with $X_i(\cdot)$; see the definitions in \eqref{eqn: definition of L M N} and \eqref{eqn: definition of L M N at s}. To obtain an $L^2$-estimate, for conciseness we only consider part of the difference above. By \eqref{eqn: well_stretched assumption} and \eqref{eqn: lower bound for L}, \begin{equation*} \begin{split}
&\;\left\|\int_\mathbb{T}ds'\,\frac{L_1\cdot X_1'(s')}{|L_1|^2}M_1 - \frac{L_2\cdot X_2'(s')}{|L_2|^2}M_2\right\|_{L^2}\\
\leq &\;\left\|\frac{L_1\cdot (X_1'-X_2')(s')}{|L_1|^2}M_1\right\|_{L^2_sL_{s'}^1}+\left\|\frac{L_1\cdot X_2'(s')}{|L_1|^2}(M_1-M_2)\right\|_{L^2_sL_{s'}^1}\\
&\;+\left\|\frac{(L_1-L_2)\cdot X_2'(s')}{|L_1|^2}M_2\right\|_{L^2_sL_{s'}^1}+\left\|L_2\cdot X_2'(s')M_2\frac{|L_2|^2-|L_1|^2}{|L_1|^2|L_2|^2}\right\|_{L^2_sL_{s'}^1}\\
\leq &\;C\lambda^{-2}\left(\|L_1\|_{L^\infty_s L^\infty_{s'}}\|X_1'-X_2'\|_{L^2}\|M_1\|_{L^2_s L^2_{s'}}+\|L_1\|_{L^4_s L^2_{s'}}\|X_2'\|_{L^\infty}\|M_1-M_2\|_{L^4_s L^2_{s'}}\right.\\
&\;\left.+\|L_1-L_2\|_{L^4_s L^2_{s'}}\|X_2'\|_{L^\infty}\|M_2\|_{L^4_s L^2_{s'}}\right). \end{split} \end{equation*} By Lemma \ref{lemma: estimates for L M N} and Sobolev inequality, \begin{equation*} \begin{split}
&\;\left\|\int_\mathbb{T}ds'\,\frac{L_1\cdot X_1'(s')}{|L_1|^2}M_1 - \frac{L_2\cdot X_2'(s')}{|L_2|^2}M_2\right\|_{L^2}\\
\leq &\;C\lambda^{-2}\left(\|X_1'\|_{L^\infty}\|X_1'-X_2'\|_{L^2}\|X_1''\|_{L^2}+\|X_1'\|_{L^2}\|X_2'\|_{L^\infty}\|X_1''-X_2''\|_{L^2}\right.\\
&\;\left.+\|X_1'-X_2'\|_{L^2}\|X_2'\|_{L^\infty}\|X_2''\|_{L^2}\right)\\
\leq &\; C\lambda^{-2} (\|X_1\|_{\dot{H}^2}+\|X_2\|_{\dot{H}^2})^2\|X_1-X_2\|_{\dot{H}^2}. \end{split} \end{equation*} Similarly, \begin{equation*} \begin{split}
&\;\left\|\int_\mathbb{T}ds'\,\frac{L_1\cdot X_1'(s')L_1\cdot M_1}{|L_1|^4}L_1 -\frac{L_2\cdot X_2'(s')L_2\cdot M_2}{|L_2|^4}L_2\right\|_{L^2}\\
\leq &\;\left\|\frac{L_1\cdot (X_1'(s')-X_2'(s'))L_1\cdot M_1}{|L_1|^4}L_1\right\|_{L^2_sL^1_{s'}}+\left\|\frac{L_1\cdot X_2'(s')L_1\cdot (M_1-M_2)}{|L_1|^4}L_1\right\|_{L^2_sL^1_{s'}}\\
&\;+\left\|\frac{(L_1-L_2)\cdot X_2'(s')L_1\cdot M_2}{|L_1|^4}L_1\right\|_{L^2_sL^1_{s'}}+\left\|\frac{L_2\cdot X_2'(s')L_1\cdot M_2}{|L_1|^2}L_1\frac{|L_2|^2-|L_1|^2}{|L_1|^2|L_2|^2}\right\|_{L^2_sL^1_{s'}}\\
&\;+\left\|\frac{L_2\cdot X_2'(s')(L_1-L_2)\cdot M_2}{|L_1|^2|L_2|^2}L_1\right\|_{L^2_sL^1_{s'}}+\left\|\frac{L_2\cdot X_2'(s')L_2\cdot M_2}{|L_1|^2|L_2|^2}(L_1-L_2)\right\|_{L^2_sL^1_{s'}}\\
&\;+\left\|\frac{L_2\cdot X_2'(s')L_2\cdot M_2}{|L_2|^2}L_2\frac{|L_2|^2-|L_1|^2}{|L_1|^2|L_2|^2}\right\|_{L^2_sL^1_{s'}}\\
\leq &\;C\lambda^{-2}\left(\|X_1'-X_2'\|_{L^2}\|L_1\|_{L^\infty_s L^\infty_{s'}}\| M_1\|_{L^2_sL^2_{s'}}+\|X_2'\|_{L^\infty}\|L_1\|_{L^4_s L^2_{s'}}\| M_1-M_2\|_{L^4_sL^2_{s'}}\right.\\
&\;+\left.\|X_2'\|_{L^\infty}\|M_2\|_{L^4_sL^2_{s'}} \|L_2-L_1\|_{L^4_sL^2_{s'}}\right)\\
\leq &\;C\lambda^{-2}\left(\|X_1'-X_2'\|_{L^2}\|X_1'\|_{L^\infty}\|X_1''\|_{L^2}+\|X_2'\|_{L^\infty}\|X_1'\|_{L^2}\| X_1''-X_2''\|_{L^2}\right.\\
&\;+\left.\|X_2'\|_{L^\infty}\|X_2''\|_{L^2} \|X_2'-X_1'\|_{L^2}\right)\\ \leq&\;C\lambda^{-2}(\|X_1\|_{\dot{H}^2}+\|X_2\|_{\dot{H}^2})^2\|X_1-X_2\|_{\dot{H}^2}. \end{split} \end{equation*} We can estimate the other terms in \eqref{eqn: difference of X_t at two moments} in a similar fashion and obtain \eqref{eqn: L2 estimate for g_X1-g_X2}. \end{proof} \end{corollary}
In order to estimate the $H^2$-norm of $g_X$, we compute its weak derivatives $g_X'$ and $g_X''$ in the following two lemmas. \begin{lemma}\label{lemma: derivative of g_X} Suppose $X\in H^3(\mathbb{T})$ and satisfies \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then \begin{equation} g'_X(s) = \mathrm{p.v.}\int_\mathbb{T}\left(-\partial_{ss'}[G(X(s)-X(s'))]-\frac{Id}{16\pi\sin^2\left(\frac{s'-s}{2}\right)}\right)(X'(s')-X'(s))\,ds'.\label{eqn: derivative of g_X} \end{equation}
\begin{proof} We define a cut-off function $\varphi(y)\in C^\infty (\mathbb{T})$ such that \begin{enumerate} \item $\varphi(y)= \varphi(-y)$, $\forall\,y\in\mathbb{T}$.
\item $\varphi(y)= 1$ for $|y|\leq 1$; $\varphi(y)= 0$ for $|y|\geq 2$; and $|\varphi'(y)|\leq C$. \item $\varphi(y)$ is decreasing on $[0,\pi]$ and increasing on $[-\pi,0]$. \end{enumerate} Define $\psi_\varepsilon(y) = 1-\varphi\left(\frac{y}{\varepsilon}\right)$. Let \begin{equation*} g_{X,1}(s) = \int_{\mathbb{T}} \Gamma_0(s,s')\,ds', \quad g_{X,1}^\varepsilon(s) = \int_{\mathbb{T}} \Gamma_0(s,s')\psi_\varepsilon(s'-s)\,ds'. \end{equation*} By \eqref{eqn: pointwise estimate of integrand of g_X part 1} and Lemma \ref{lemma: estimates for L M N}, \begin{equation}
|\Gamma_0(s,s')|\leq \frac{C}{\lambda} \|X''\|_{L^\infty}\|X'\|_{L^\infty}, \label{eqn: L infty estimate of integrand of g_X part 1} \end{equation} which implies that $g_{X,1},g_{X,1}^\varepsilon\in L^\infty(\mathbb{T})$, and $g_{X,1}^\varepsilon \rightarrow g_{X,1}$ in $L^\infty(\mathbb{T})$. In particular, for any test function $\eta\in C^\infty(\mathbb{T})$, \begin{equation} \lim_{\varepsilon\rightarrow 0}(\eta',g_{X,1}^\varepsilon) = (\eta',g_{X,1}), \label{eqn: derivative of g_X1 test function convergence} \end{equation} where $(\cdot,\cdot)$ is the $L^2$-inner product on $\mathbb{T}$. Since there is no singularity in the integral defining $g_{X,1}^\varepsilon$, we may integrate by parts on the left-hand side above and exchange the derivative and the integral. We obtain \begin{equation} \begin{split} (\eta',g_{X,1}^\varepsilon) = &\;-(\eta, \partial_s g_{X,1}^\varepsilon)\\ =&\;-\left(\eta, \int_{\mathbb{T}} \partial_s\Gamma_0(s,s')\psi_\varepsilon(s'-s)\,ds'\right)+\left(\eta, \int_{\mathbb{T}} \Gamma_0(s,s')\psi'_\varepsilon(s'-s)\,ds'\right)\\ \triangleq &\; I_\varepsilon+II_\varepsilon. \end{split} \label{eqn: derivative of g_X1 integration by parts} \end{equation} It is not difficult to show that \begin{equation} \lim_{\varepsilon\rightarrow 0}I_\varepsilon = -\left(\eta, \mathrm{p.v.}\int_{\mathbb{T}} \partial_s\Gamma_0(s,s')\,ds'\right). \label{eqn: derivative of g_X1 term 1} \end{equation}
On the other hand, since $\psi_\varepsilon'(\cdot-s)$ has mean zero on $\mathbb{T}$ and $\|\psi_\varepsilon'(\cdot-s)\|_{L^1(\mathbb{T})} = 2$ due to the monotonicity assumption on $\varphi$, we have that \begin{equation}
|II_{\varepsilon}|\leq 2\|\eta\|_{L^1} \mathrm{osc}_{s'\in [s-2\varepsilon, s+2\varepsilon]} \Gamma_0(s,s')\rightarrow 0,\quad \mbox{as }\varepsilon\rightarrow 0, \label{eqn: derivative of g_X1 term 2} \end{equation} where the convergence comes from \eqref{eqn: limit of integrand of g_X part 1 at s}. Combining \eqref{eqn: derivative of g_X1 test function convergence}, \eqref{eqn: derivative of g_X1 integration by parts}, \eqref{eqn: derivative of g_X1 term 1} and \eqref{eqn: derivative of g_X1 term 2}, we find \begin{equation} \begin{split} g'_{X,1}(s) = &\;\mathrm{p.v.}\int_{\mathbb{T}} \partial_s\Gamma_0(s,s')\,ds'\\ = &\;\mathrm{p.v.}\int_{\mathbb{T}} -\partial_{ss'}[G(X(s)-X(s'))](X'(s')-X'(s))\,ds'\\ &\;+ \mathrm{p.v.}\int_{\mathbb{T}} \partial_{s'}[G(X(s)-X(s'))]X''(s)\,ds'\\ = &\;\mathrm{p.v.}\int_{\mathbb{T}} -\partial_{ss'}[G(X(s)-X(s'))](X'(s')-X'(s))\,ds'. \label{eqn: derivative of g_X part 1} \end{split} \end{equation} We used \eqref{eqn: pv integral vanishes} in the last line.
For the other term in $g_X(s)$, namely $\frac{1}{4}(-\Delta)^{1/2}X$, we note that $(-\Delta)^{1/2}$ and the derivative commute, since they are both Fourier multipliers. This gives \begin{equation} \partial_s\left(\frac{1}{4}(-\Delta)^{1/2}X\right) = \frac{1}{4}(-\Delta)^{1/2}X' = -\frac{1}{4\pi}\mathrm{p.v.}\int_\mathbb{T} \frac{X'(s')-X'(s)}{4\sin^2\left(\frac{s'-s}{2}\right)}\,ds'. \label{eqn: derivative of g_X part 2} \end{equation} Combining \eqref{eqn: derivative of g_X part 1} and \eqref{eqn: derivative of g_X part 2}, we obtain \eqref{eqn: derivative of g_X}.
\end{proof} \end{lemma}
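\begin{remark} The kernel representation of $(-\Delta)^{1/2}$ used in \eqref{eqn: derivative of g_X part 2} can be checked on a single Fourier mode (a sketch of the verification). For $f(s)=\mathrm{e}^{ins}$ with $n\geq 1$, writing $\tau = s'-s$, the odd part of the integrand drops out in the principal value, and \begin{equation*} -\frac{1}{4\pi}\,\mathrm{p.v.}\int_\mathbb{T} \frac{f(s')-f(s)}{4\sin^2\left(\frac{\tau}{2}\right)}\,ds' = \frac{\mathrm{e}^{ins}}{16\pi}\int_{-\pi}^{\pi} \frac{2\sin^2\left(\frac{n\tau}{2}\right)}{\sin^2\left(\frac{\tau}{2}\right)}\,d\tau = \frac{n}{4}\,\mathrm{e}^{ins}=\frac{1}{4}(-\Delta)^{1/2}f(s), \end{equation*} where the integral is evaluated by the Fej\'{e}r kernel identity $\int_{-\pi}^{\pi}\left(\sin\left(\frac{n\tau}{2}\right)/\sin\left(\frac{\tau}{2}\right)\right)^2 d\tau = 2\pi n$. \qed \end{remark}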
\begin{lemma}\label{lemma: second derivative of g_X} Suppose $X\in H^3(\mathbb{T})$ and satisfies \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then \begin{equation} g''_X(s) = \mathrm{p.v.}\int_\mathbb{T}\partial_s\left[\left(-\partial_{ss'}[G(X(s)-X(s'))]-\frac{Id}{16\pi\sin^2\left(\frac{s'-s}{2}\right)}\right)(X'(s')-X'(s))\right]\,ds'.\label{eqn: second derivative of g_X} \end{equation} \begin{proof} Denote the integrand of \eqref{eqn: derivative of g_X} by $\Gamma_1(s,s')$, i.e. \begin{equation} g'_X(s) = \mathrm{p.v.}\int_\mathbb{T} \Gamma_1(s,s')\,ds'. \label{eqn: introduce the notation Gamma_1} \end{equation} What we are going to show in \eqref{eqn: second derivative of g_X} is exactly \begin{equation*} g''_X(s) = \mathrm{p.v.}\int_\mathbb{T} \partial_s\Gamma_1(s,s')\,ds'. \end{equation*} We claim that for $s\not = s'$, \begin{equation} \begin{split}
4\pi\Gamma_1(s,s') = &\;\frac{(X'(s)-L)\cdot N}{|L|^2}M - \frac{2(N\cdot L)(X'(s)\cdot L)}{|L|^4}M - \left(\frac{\tau^2 - 4\sin^2(\frac{\tau}{2})}{4\tau\sin^2(\frac{\tau}{2})}\right)M\\
&\;+\frac{(M-2N)\cdot M}{|L|^2}X'(s)+\frac{2(N\cdot L)( L\cdot M)}{|L|^4}X'(s)\\
&\; +\frac{2 (L\cdot M) (L\cdot (M-N)) (L\cdot X'(s))}{|L|^6}L+\frac{2 ((N-M)\cdot M)(L\cdot X'(s))}{|L|^4}L\\
&\;-\frac{6 (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4} N\\
&\;+\frac{2 (N\cdot M) (L\cdot X'(s'))}{|L|^4}L+\frac{2 (L\cdot M) (N\cdot X'(s'))}{|L|^4}L. \end{split} \label{eqn: simplified Gamma order 1} \end{equation} For conciseness, we defer its proof to Lemma \ref{lemma: simplification of Gamma_1(s,s')} in Appendix \ref{appendix section: auxiliary calculations}. With \eqref{eqn: simplified Gamma order 1} in hand, we use \eqref{eqn: lower bound for L} and \eqref{eqn: upper bound for lambda} to derive that \begin{equation} \begin{split}
|\Gamma_1(s,s')| \leq &\;C \left(\frac{|X'(s)|+|X'(s')|}{\lambda^2}+\frac{1}{\lambda}\right)|M|(|M|+|N|)+C|\tau| |M|\\
\leq &\;C \frac{|X'(s)|+|X'(s')|}{\lambda^2}|M|(|M|+|N|)+C|\tau| |M|\\
\leq&\; \frac{C}{\lambda^2} \|X'\|_{L^\infty}\|X''\|^2_{L^\infty}. \label{eqn: rough pointwise estimate of Gamma} \end{split} \end{equation} By the continuity of $L$, $M$ and $N$, i.e.\;\eqref{eqn: continuity of L M N}, we also know by \eqref{eqn: simplified Gamma order 1} that \begin{equation*} \lim_{s'\rightarrow s}\Gamma_1(s,s') = 0. \end{equation*} With these estimates in hand, we can prove \eqref{eqn: second derivative of g_X} by arguing as in the proof of Lemma \ref{lemma: derivative of g_X}. We omit the details. \end{proof} \end{lemma}
Similarly to Corollary \ref{coro: L2 estimate for g_X1-g_X2}, one can prove the following. \begin{corollary}\label{coro: H1 estimate for g_X1-g_X2} Let $X_1(s),X_2(s)\in H^2(\mathbb{T})$ both satisfy \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then \begin{equation}
\|g_{X_1}(s)-g_{X_2}(s)\|_{\dot{H}^1}\leq C\lambda^{-3} (\|X_1\|_{\dot{H}^2}+\|X_2\|_{\dot{H}^2})^3\|X_1-X_2\|_{\dot{H}^2}, \label{eqn: H1 estimate for g_X1-g_X2} \end{equation} where $C>0$ is a universal constant. \begin{proof} With \eqref{eqn: introduce the notation Gamma_1} and \eqref{eqn: simplified Gamma order 1} in hand, we simply argue as in the proof of Corollary \ref{coro: L2 estimate for g_X1-g_X2} to obtain the desired estimate. We omit the details. \end{proof} \end{corollary}
The following lemma is devoted to the $H^2$-estimate of $g_X$. \begin{lemma}\label{lemma: H2 estimate of g_X} Suppose $X\in H^3(\mathbb{T})$ and satisfies \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then for any $\delta\in(0,\pi)$, \begin{equation} \begin{split}
\|g_X''\|_{L^2(\mathbb{T})} \leq
&\;C\left(\delta^{1/2}\lambda^{-2}\|X\|_{\dot{H}^{3}}\|X\|_{\dot{H}^{5/2}}^2+(|\ln \delta|+1)\lambda^{-3}\|X\|_{\dot{H}^{5/2}}^4\right), \end{split} \label{eqn: H2 estimate of g_X} \end{equation} where $C>0$ is a universal constant. \begin{proof} By Lemma \ref{lemma: second derivative of g_X}, we examine $\partial_s \Gamma_1(s,s')$. We take the $s$-derivative of \eqref{eqn: simplified Gamma order 1} and use $\partial_s L = N$ in \eqref{eqn: derivatives of L M N wrt s} to find that \begin{equation} \begin{split}
|\partial_s \Gamma_1(s,s')| \leq &\;C\frac{|X'(s)|+|X'(s')|}{\lambda^3}|M||N|(|M|+|N|)+C \frac{|X'(s)|+|X'(s')|}{\lambda^2}|\partial_s M|(|M|+|N|)\\
&\;+C \frac{|X'(s)|+|X'(s')|}{\lambda^2}|M|(|\partial_s M|+|\partial_s N|)+C \frac{|X''(s)|}{\lambda^2}|M|(|M|+|N|)\\
&\;+C|M|+C|M-X''(s)|. \end{split} \label{eqn: pointwise estimate for s-derivative of Gamma} \end{equation} Substituting \eqref{eqn: derivatives of L M N wrt s} into the above formula, we also obtain the following useful estimate: \begin{equation} \begin{split}
|\partial_s \Gamma_1(s,s')|\leq &\;C\frac{|X'(s)|+|X'(s')|}{\lambda^3}|M||N|(|M|+|N|)\\
&\;+C \frac{|X'(s)|+|X'(s')|}{\lambda^2}\frac{|M|+|X''(s)|}{|\tau|}(|M|+|N|)\\
&\;+C \frac{|X'(s)|+|X'(s')|}{\lambda^2}|M|\frac{|M|+|N|+|X''(s)|}{|\tau|}\\
&\;+C \frac{|X''(s)|}{\lambda^2}|M|(|M|+|N|)+C|M|+C|M-X''(s)|\\
\leq &\;C \lambda^{-3} (|X'(s)|+|X'(s')|)|M||N|(|M|+|N|)\\
&\;+C \lambda^{-2}|X''(s)||M|(|M|+|N|)+C|M|+C|X''(s)|\\
&\;+C \lambda^{-2}|\tau|^{-1}(|X'(s)|+|X'(s')|)(|M|+|X''(s)|)(|M|+|N|). \end{split} \label{eqn: pointwise estimate for s-derivative of Gamma far field} \end{equation} In order to prove \eqref{eqn: H2 estimate of g_X}, we split $g_X''$, an integral of $\partial_s\Gamma_1$ with respect to $s'$, into two terms --- the integral in a neighborhood of the singularity at $s'=s$, and the rest. To be more precise, for any $\delta\in(0,\pi)$, we have \begin{equation}
\|g_X''\|_{L^2(\mathbb{T})} \leq \left\|\int_{B_\delta(s)}|\partial_s \Gamma_1(s,s')|\,ds'\right\|_{L^2(\mathbb{T})}+\left\|\int_{B^c_\delta(s)}|\partial_s \Gamma_1(s,s')|\,ds'\right\|_{L^2(\mathbb{T})} \triangleq I_\delta + II_\delta. \label{eqn: splitting of g_X''} \end{equation} For $I_\delta$, we use \eqref{eqn: pointwise estimate for s-derivative of Gamma}. Applying Lemma \ref{lemma: estimates for L M N} with $I = B_\delta(0)$, we obtain that
\begin{equation*} \begin{split}
I_\delta \leq &\;C\lambda^{-3}\left\|\int_{B_{\delta}(s)}(|X'(s)|+|X'(s')|)|M||N|(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}\\
&\;+C\lambda^{-2} \left\|\int_{B_{\delta}(s)}(|X'(s)|+|X'(s')|)|\partial_s M|(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2}\left\|\int_{B_{\delta}(s)}(|X'(s)|+|X'(s')|)|M|(|\partial_s M|+|\partial_s N|)\,ds'\right\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2}\left\|\int_{B_{\delta}(s)}|X''(s)||M|(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}+C\left\|\int_{B_{\delta}(s)}|M|+|X''(s)|\,ds'\right\|_{L^2(\mathbb{T})}\\
\leq &\;C\lambda^{-3}\||X'(s)|+|X'(s')|\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(B_\delta(s))}\|M\|_{L^6_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\|N\|_{L^6_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\\
&\;\quad\cdot\||M|+|N|\|_{L^6_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\\
&\;+C\lambda^{-2} \||X'(s)|+|X'(s')|\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(B_\delta(s))}\|\partial_s M\|_{L^2_s(\mathbb{T})L^2_{s'}(B_\delta(s))}\||M|+|N|\|_{L^\infty_s(\mathbb{T})L^2_{s'}(B_\delta(s))}\\
&\;+C \lambda^{-2}\||X'(s)|+|X'(s')|\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(B_\delta(s))}\|M\|_{L^\infty_s(\mathbb{T})L^2_{s'}(B_\delta(s))}\||\partial_s M|+|\partial_s N|\|_{L^2_s(\mathbb{T})L^2_{s'}(B_\delta(s))}\\
&\;+C \lambda^{-2}\|X''(s)\|_{L^3_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\|M\|_{L^{12}_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\||M|+|N|\|_{L^{12}_s(\mathbb{T})L^3_{s'}(B_\delta(s))}\\
&\;+C\||M|+|X''(s)|\|_{L^2_s(\mathbb{T})L^2_{s'}(B_\delta(s))}\\
\leq &\;C\lambda^{-3}\|X'\|_{L^\infty(\mathbb{T})}\left(\delta^{1/6}\|X''\|_{L^3(\mathbb{T})}\right)^3+C\lambda^{-2} \|X'\|_{L^\infty(\mathbb{T})}\delta^{1/2}\|X'''\|_{L^2(\mathbb{T})}\|X''\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2}\|X'\|_{L^\infty(\mathbb{T})}\|X''\|_{L^2(\mathbb{T})}\delta^{1/2}\|X'''\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2}\delta^{1/3}\|X''\|_{L^3(\mathbb{T})}\delta^{1/12}\|X''\|_{L^3(\mathbb{T})}\delta^{1/12}\|X''\|_{L^3(\mathbb{T})}+C\delta^{1/2}\|X''\|_{L^2(\mathbb{T})}\\
\leq &\; C\delta^{1/2}(\lambda^{-3}\|X'\|_{L^\infty}\|X''\|_{L^3}^3+ \lambda^{-2}\|X'\|_{L^\infty}\|X'''\|_{L^2}\|X''\|_{L^2}+ \lambda^{-2}\|X''\|_{L^3}^3+ \|X''\|_{L^2})\\
\leq &\; C\delta^{1/2}(\lambda^{-3}\|X\|_{\dot{H}^{5/2}}^4+ \lambda^{-2}\|X\|_{\dot{H}^{5/2}}^2\|X\|_{\dot{H}^3}). \end{split} \end{equation*} We used \eqref{eqn: lower bound for L} and the Sobolev inequality in the last line. For $II_\delta$, we use \eqref{eqn: pointwise estimate for s-derivative of Gamma far field}. Applying Lemma \ref{lemma: estimates for L M N} with $I = \mathbb{T}$, we obtain that \begin{equation*} \begin{split}
II_\delta\leq &\;C\lambda^{-3}\left\|\int_{B^c_{\delta}(s)} (|X'(s)|+|X'(s')|)|M||N|(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2}\left\|\int_{B^c_{\delta}(s)}|X''(s)||M|(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}+C\left\|\int_{B^c_{\delta}(s)} |M|+|X''(s)|\,ds'\right\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2} \left\|\int_{B^c_{\delta}(s)} |\tau|^{-1}(|X'(s)|+|X'(s')|)(|M|+|X''(s)|)(|M|+|N|)\,ds'\right\|_{L^2(\mathbb{T})}\\
\leq &\;C\lambda^{-3}\||X'(s)|+|X'(s')|\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}\|M\|_{L^6_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\|N\|_{L^6_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\||M|+|N|\|_{L^6_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\\
&\;+C \lambda^{-2}\|X''(s)\|_{L^3_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\|M\|_{L^{12}_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\||M|+|N|\|_{L^{12}_s(\mathbb{T})L^3_{s'}(\mathbb{T})}\\
&\;+C\|M\|_{L^2_s(\mathbb{T})L^2_{s'}(\mathbb{T})}+C\|X''(s)\|_{L^2_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\\
&\;+C \lambda^{-2} \|(s'-s)^{-1}\|_{L^\infty_s(\mathbb{T}) L^1_{s'}(B^c_\delta(s))}\||X'(s)|+|X'(s')|\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}\\
&\;\quad\cdot\||M|+|X''(s)|\|_{L^4_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}
\||M|+|N|\|_{L^4_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}\\
\leq &\;C\lambda^{-3}\|X'\|_{L^\infty(\mathbb{T})}\|X''\|_{L^3(\mathbb{T})}^3+C \lambda^{-2}\|X''\|_{L^3(\mathbb{T})}\|X''\|_{L^3(\mathbb{T})}^2+C\|X''\|_{L^2(\mathbb{T})}\\
&\;+C \lambda^{-2} (|\ln \delta|+1)\|X'\|_{L^\infty(\mathbb{T})}\|\mathcal{M}X''\|_{L^4(\mathbb{T})}^2\\
\leq &\; C(\lambda^{-3}\|X'\|_{L^\infty}\|X''\|_{L^3}^3+ \lambda^{-2}\|X''\|_{L^3}^3+ \|X''\|_{L^2}+(|\ln \delta|+1)\lambda^{-2}\|X'\|_{L^\infty}\|X''\|_{L^4}^2)\\
\leq &\; C(|\ln \delta|+1)\lambda^{-3}\|X\|_{\dot{H}^{5/2}}^4. \end{split} \end{equation*} Here we used Lemma \ref{lemma: estimates for L M N} and the Sobolev inequality. Combining the above two estimates of $I_\delta$ and $II_\delta$, we obtain \eqref{eqn: H2 estimate of g_X}. \end{proof} \end{lemma} \begin{remark}
It is clear from the proof that the goal of splitting $g_X''$ into two parts in \eqref{eqn: splitting of g_X''} is to introduce a small factor $\delta^{1/2}$ in front of $\|X\|_{\dot{H}^3}$ in \eqref{eqn: H2 estimate of g_X}. This will be useful in the proof of local well-posedness. See Section \ref{section: local existence and uniqueness}. \qed \end{remark}
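To illustrate how small $\delta$ must be (a heuristic computation), note that for any $\varepsilon\in(0,1)$, the coefficient of $\|X\|_{\dot{H}^{3}}$ in \eqref{eqn: H2 estimate of g_X} is at most $\varepsilon\lambda^{-2}\|X\|_{\dot{H}^{5/2}}^2$ once \begin{equation*} \delta\leq \min\left\{\frac{\pi}{2},\,\left(\frac{\varepsilon}{C}\right)^{2}\right\}, \end{equation*} with $C$ the universal constant from \eqref{eqn: H2 estimate of g_X}, while the logarithmic factor in the remaining term is then at most $C'(1+|\ln\varepsilon|)$. That is, the $\dot{H}^{3}$-norm enters with an arbitrarily small coefficient, at the price of a logarithmically growing zeroth-order term.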
We can also show that \begin{corollary}\label{coro: H2 estimate for g_X0-g_X1} Let $X_1(s),X_2(s)\in H^3(\mathbb{T})$ both satisfy \eqref{eqn: well_stretched assumption} with some $\lambda>0$. Then for any $\delta\in(0,\pi)$ and any $\mu >0$, \begin{equation} \begin{split}
&\;\left\|g''_{X_1}(s)-g''_{X_2}(s)\right\|_{L^2}\\
\leq &\; C_\mu\left[\delta^{1/2}\lambda^{-2}(\|X_1\|_{\dot{H}^{5/2}}+\|X_2\|_{\dot{H}^{5/2}})^2\|X_1-X_2\|_{\dot{H}^3}\right.\\
&\;+\delta^{1/2}\lambda^{-3}(\|X_1\|_{\dot{H}^{5/2}}+\|X_2\|_{\dot{H}^{5/2}})^2(\|X_1\|_{\dot{H}^3}+\|X_2\|_{\dot{H}^3})\|X_1-X_2\|_{\dot{H}^2}\\
&\;\left.+(|\ln \delta|+1)\lambda^{-4}(\|X_1\|_{\dot{H}^{5/2}}+\|X_2\|_{\dot{H}^{5/2}})^4\|X_1-X_2\|_{\dot{W}^{2,2+\mu}}\right], \end{split} \label{eqn: H2 estimate for g_X0-g_X1} \end{equation} where $C_\mu>0$ is a constant depending only on $\mu$. \begin{proof} We take the $s$-derivative in \eqref{eqn: simplified Gamma order 1} first, and argue as in the proofs of Corollary \ref{coro: L2 estimate for g_X1-g_X2} and Lemma \ref{lemma: H2 estimate of g_X}. The calculation is lengthy but straightforward. We omit the details here. \end{proof} \end{corollary}
\section{Existence and Uniqueness of the Local-in-time Solution}\label{section: local existence and uniqueness} With the estimates of the previous sections in hand, we are able to prove the local well-posedness of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}. \subsection{Existence}\label{section: local existence}
\begin{proof}[Proof of Theorem \ref{thm: local in time existence} (existence of the local-in-time solution)] For any $Y\in L^1(\mathbb{T})$, we split it into its mean $\bar{Y}$ and its oscillation $\tilde{Y}$, i.e. \begin{equation*} \bar{Y} \triangleq \frac{1}{2\pi}\int_\mathbb{T} Y(s)\,ds,\quad \tilde{Y}(s)\triangleq Y(s)-\bar{Y}. \end{equation*} Then \eqref{eqn: contour dynamic formulation of the immersed boundary problem} can be split into two equations as well, one for $\tilde{X}$ and the other for $\bar{X}$. Namely, \begin{equation} \begin{split} &\;\partial_t \tilde{X}(s,t)= \mathcal{L}\tilde{X}(s,t) + \widetilde{g_{\tilde{X}}}(s,t),\quad s\in \mathbb{T}, t> 0,\\ &\;\tilde{X}(s,0) = \widetilde{X_0}(s), \end{split} \label{eqn: equation for oscillation of X in the main thm} \end{equation} and \begin{equation} \frac{d}{dt}\bar{X}(t) = \overline{g_{\tilde{X}}} = \frac{1}{2\pi}\int_\mathbb{T}g_{\tilde{X}}(s,t)\,ds,\quad \bar{X}(0) = \overline{X_0}. \label{eqn: equation for mean of X in the main thm} \end{equation}
We first consider the existence of solutions of the $\tilde{X}$-equation \eqref{eqn: equation for oscillation of X in the main thm}. Given $X_0$, with $T>0$ to be determined, we define \begin{equation} \begin{split}
\Omega_{0,T}(X_0) = &\;\left\{Y(s,t)\in\Omega_{T}:\;\int_\mathbb{T}Y(s,t)\,ds \equiv 0,\;\|Y_t(s,t)\|_{L^2_T \dot{H}^2(\mathbb{T})}\leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})},\right.\\
&\;\qquad\left.\left\|Y(s,t)-\mathrm{e}^{t\mathcal{L}}\widetilde{X_0}\right\|_{L^{\infty}_T \dot{H}^{5/2}\cap L^2_T \dot{H}^{3}(\mathbb{T})} \leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})},\;Y(s,0)=\widetilde{X_0}(s)\right\}. \end{split} \label{eqn: definition of the neighbourhood of X0 used in the proof of local existence} \end{equation} The subscript $0$ stresses that functions in $\Omega_{0,T}(X_0)$ have mean zero on $\mathbb{T}$. We remark that only the seminorms are used, since the mean of $X_0$ is irrelevant in the equation for $\tilde{X}$; this is always the case in the sequel. $\Omega_{0,T}(X_0)$ is non-empty. Indeed, by Lemma \ref{lemma: improved Hs estimate and Hs continuity of semigroup solution} and Lemma \ref{lemma: a priori estimate of nonlocal eqn}, $\mathrm{e}^{t\mathcal{L}}\widetilde{X_0}\in \Omega_{0,T}(X_0)$. It is also convex and closed in $\Omega_T$. By the Aubin-Lions lemma, $\Omega_{0,T}(X_0)$ is compact in $C_T H^2(\mathbb{T})$.
By Lemma \ref{lemma: a priori estimate of nonlocal eqn}, for any $Y\in \Omega_{0,T}(X_0)$, $\|Y\|_{L^{\infty}_T \dot{H}^{5/2}\cap L^2_T \dot{H}^3(\mathbb{T})} \leq 4 \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}$. Moreover, by taking $T$ sufficiently small, we will have \begin{equation}
\left|Y(s_1,t) - Y(s_2,t)\right| \geq \frac{\lambda}{2}|s_1 - s_2|,\quad \forall\,s_1, s_2\in\mathbb{T},\;t\in[0,T]. \label{eqn: uniform bi lipschitz constant in the neighborhood} \end{equation}
In fact, if we assume $C_1 \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}T^{1/2}\leq \lambda/2$, where $C_1$ is a universal constant coming from the Sobolev inequality, which will be clear below, then \begin{equation*} \begin{split}
||Y(s_1,t) - Y(s_2,t)| - |X_0(s_1) - X_0(s_2)||\leq &\;|(Y-X_0)(s_1,t) - (Y-X_0)(s_2,t)|\\
\leq &\;C_1 \|Y-X_0\|_{C_T \dot{H}^2(\mathbb{T})} |s_1 - s_2|\\
\leq &\;C_1 \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}T^{1/2} |s_1 - s_2|\leq \frac{\lambda}{2}|s_1-s_2|. \end{split} \end{equation*}
Here we used the assumptions that $\|Y_t(s,t)\|_{L^2_T \dot{H}^2(\mathbb{T})}\leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}$ and $Y(s,0)=\widetilde{X_0}(s)$. Then \eqref{eqn: uniform bi lipschitz constant in the neighborhood} follows from \eqref{eqn: bi Lipschitz assumption in main thm} and the triangle inequality.
Under the above assumption, we define a map $V: \Omega_{0,T}(X_0) \rightarrow \Omega_{0,T}(X_0)$ as follows, with $T$ to be determined. For given $Y(s,t) \in \Omega_{0,T}(X_0)$, let $Z \triangleq VY$ solve \begin{equation} \partial_t Z(s,t)= \mathcal{L}Z(s,t) + \widetilde{g_Y}(s,t),\quad s\in \mathbb{T}, t\in[0,T],\quad Z(s,0) = \widetilde{X_0}(s). \label{eqn: equation to define the map V} \end{equation} To show $V$ is well-defined, we first claim that $Z\in \Omega_T$. In fact, for $Y\in\Omega_{0,T}(X_0)$, by Lemma \ref{lemma: H2 estimate of g_X}, \begin{equation} \begin{split}
\|\widetilde{g_Y}\|_{L^2_T\dot{H}^2(\mathbb{T})} \leq&\;C\left(\delta^{1/2}\lambda^{-2}\|Y\|_{L^2_T\dot{H}^{3}(\mathbb{T})}\|Y\|_{L^\infty_T\dot{H}^{5/2}(\mathbb{T})}^2+T^{1/2}(|\ln \delta|+1)\lambda^{-3}\|Y\|_{L^\infty_T\dot{H}^{5/2}(\mathbb{T})}^4\right)\\
\leq &\;C_2\lambda^{-3}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^4(\delta^{1/2}+T^{1/2}(|\ln \delta|+1)). \end{split} \label{eqn: estimate for the source term in local existence} \end{equation}
In the last line, we used \eqref{eqn: lower bound for L} and the Sobolev inequality to obtain $\lambda\leq C\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}$. Then for any $T>0$, Lemma \ref{lemma: a priori estimate of nonlocal eqn} gives the existence and uniqueness of the solution $Z\in \Omega_T$, which satisfies \begin{equation}
\|\partial_t Z\|_{L^2_{T} \dot{H}^2(\mathbb{T})} \leq \frac{1}{2}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}+\|\widetilde{g_Y}\|_{L^2_{T}\dot{H}^2(\mathbb{T})}. \label{eqn: bound of Z_t} \end{equation} $Z$ obviously has mean zero for all time.
Now consider $W =Z-\mathrm{e}^{t\mathcal{L}}\widetilde{X_0}$, which solves \begin{equation*} \partial_t W(s,t)= \mathcal{L}W(s,t) + \widetilde{g_Y}(s,t),\quad W(s,0) = 0. \end{equation*} By Lemma \ref{lemma: a priori estimate of nonlocal eqn} and \eqref{eqn: estimate for the source term in local existence}, we find that \begin{equation}
\|W\|_{L^\infty_{T}\dot{H}^{5/2}\cap L^2_{T} \dot{H}^{3}(\mathbb{T})} \leq 6\|\widetilde{g_Y}\|_{L^2_T\dot{H}^{2}(\mathbb{T})}\leq 6C_2\lambda^{-3}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^4(\delta^{1/2}+T^{1/2}(|\ln \delta|+1)). \label{eqn: bound on W} \end{equation}
Now we first take $\delta \leq \delta_0(\lambda,\|X_0\|_{\dot{H}^{5/2}})$ sufficiently small, such that \begin{equation*}
C_2\lambda^{-3}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^4\delta^{1/2} \leq \frac{1}{12}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}, \end{equation*}
and then take $T \leq T_0(\lambda,\|X_0\|_{\dot{H}^{5/2}},\delta)$ sufficiently small as well, such that \begin{equation} \begin{split}
&\;C_1 \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}T^{1/2}\leq \frac{1}{2}\lambda,\\
&\;C_2\lambda^{-3}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^4 T^{1/2}(|\ln \delta|+1) \leq \frac{1}{12}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}. \end{split} \label{eqn: constraints on existence time T} \end{equation}
This implies $\|Z-\mathrm{e}^{t\mathcal{L}}\widetilde{X_0}\|_{L^\infty_{T}\dot{H}^{5/2}\cap L^2_{T} \dot{H}^{3}(\mathbb{T})} \leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}$ by \eqref{eqn: bound on W}. Also, by \eqref{eqn: estimate for the source term in local existence} and \eqref{eqn: bound of Z_t}, \begin{equation*}
\|\partial_t Z\|_{L^2_{T} \dot{H}^2(\mathbb{T})} \leq \frac{1}{2}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}+\frac{1}{6}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})} \leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}. \end{equation*}
Hence, $V$ is a well-defined map from $\Omega_{0,T}(X_0)$ to itself. We note that the upper bound $T_0$ on admissible $T$ depends essentially only on $\lambda$ and $\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}$.
By the Aubin-Lions lemma, $V(\Omega_{0,T}(X_0))$ is compact in $C_{T} H^2(\mathbb{T})$. By the Schauder fixed point theorem, the map $V$ has a fixed point in $V(\Omega_{0,T}(X_0))\subset \Omega_{0,T}(X_0)$, denoted by $\tilde{X}\in \Omega_{T}$, which is a solution of \eqref{eqn: equation for oscillation of X in the main thm}. It satisfies \begin{equation}
\|\tilde{X}\|_{L^\infty_{T} \dot{H}^{5/2}\cap L^2_{T} \dot{H}^{3}(\mathbb{T})}\leq 4\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})},\quad \|\partial_t \tilde{X}\|_{L^2_{T} \dot{H}^2(\mathbb{T})} \leq \|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}, \label{eqn: a priori estimate for the local solution} \end{equation} and \begin{equation}
\left|\tilde{X}(s_1,t) - \tilde{X}(s_2,t)\right| \geq \frac{\lambda}{2}|s_1 - s_2|,\quad \forall\,s_1,s_2\in\mathbb{T},\;t\in[0,T]. \label{eqn: uniform bi lipschitz constant of the local solution} \end{equation}
Next, we turn to the ODE \eqref{eqn: equation for mean of X in the main thm} for $\bar{X}$. By Lemma \ref{lemma: L infty estimate for g_X}, for all $s\in\mathbb{T}$ and $t\in[0,T]$, \begin{equation*}
|\overline{g_{\tilde{X}}}|\leq \|g_{\tilde{X}}(s,t)\|_{L^\infty(\mathbb{T})}\leq \frac{C}{\lambda}\|\tilde{X}\|_{\dot{H}^1}\|\tilde{X}\|_{\dot{H}^2}\leq C\lambda^{-1}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^2. \end{equation*} It is then easy to show that \eqref{eqn: equation for mean of X in the main thm} admits a unique solution $\bar{X}(t)\in C^{0,1}([0,T])$ once $\tilde{X}$ is given. The solution for \eqref{eqn: contour dynamic formulation of the immersed boundary problem} is thus given by $X(s,t) = \bar{X}(t)+\tilde{X}(s,t)$. This proves the existence of the local-in-time solutions in $\Omega_{T}$. \eqref{eqn: a priori estimate for the local solution in the main theorem} and \eqref{eqn: uniform bi lipschitz constant of the local solution in the main theorem} follow from \eqref{eqn: a priori estimate for the local solution} and \eqref{eqn: uniform bi lipschitz constant of the local solution} respectively.
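To make the last step explicit, assume, as the notation suggests, that \eqref{eqn: equation for mean of X in the main thm} takes the form $\frac{d}{dt}\bar{X}(t) = \overline{g_{\tilde{X}}}(t)$; the solution is then obtained by direct integration, and the bound above yields its Lipschitz regularity:
\begin{equation*}
\bar{X}(t) = \bar{X}(0) + \int_0^t \overline{g_{\tilde{X}}}(\tau)\,d\tau,\qquad |\bar{X}(t_1)-\bar{X}(t_2)|\leq C\lambda^{-1}\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}^2\,|t_1-t_2|.
\end{equation*}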
That $X\in L^2_{T}H^3(\mathbb{T})$ together with $X_t\in L^2_{T}H^2(\mathbb{T})$ implies that $X$ is almost everywhere equal to a continuous function valued in $H^{5/2}(\mathbb{T})$, i.e.\;$X$ can be realized as an element of $C([0,T];H^{5/2}(\mathbb{T}))$. This can be proved by classical arguments (see Temam \cite{temam1984navier}, \S\,1.4 of Chapter III). \end{proof}
\subsection{Uniqueness}\label{section: local uniqueness} \begin{proof}[Proof of Theorem \ref{thm: local in time uniqueness} (uniqueness of the local-in-time solution)] Suppose $X_1,X_2\in \Omega_T$ are two solutions of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}, both satisfying the assumption \eqref{eqn: bi lipschitz assumption in uniqueness thm}.
Let \begin{equation}
R = \|X_1\|_{L_{T}^\infty\dot{H}^{5/2}\cap L_{T}^2 \dot{H}^3(\mathbb{T})} + \|X_2\|_{L_{T}^\infty\dot{H}^{5/2}\cap L_{T}^2 \dot{H}^3(\mathbb{T})}\geq C(c)\lambda \label{eqn: uniform bound in uniqueness thm} \end{equation} and $Q(s,t) \triangleq X_1-X_2$. Then $\tilde{Q} = \widetilde{X_1}-\widetilde{X_2}$ solves \begin{equation*} \partial_t \tilde{Q}(s,t)= \mathcal{L}\tilde{Q}(s,t) + \widetilde{g_{\widetilde{X_1}}}(s,t)-\widetilde{g_{\widetilde{X_2}}}(s,t),\quad \tilde{Q}(s,0) = 0,\quad (s,t)\in\mathbb{T}\times [0,T]. \end{equation*} By Corollary \ref{coro: H2 estimate for g_X0-g_X1} with $\mu = 2$ and Sobolev inequality, with $t\in(0,T]$ to be determined \begin{equation} \begin{split}
&\;\left\|g''_{X_1}(s)-g''_{X_2}(s)\right\|_{L_{t}^2L^2}\\ \leq &\;C\left[
\delta^{1/2}\lambda^{-2}(\|X_1\|_{L^\infty_t\dot{H}^{5/2}}+\|X_2\|_{L^\infty_t\dot{H}^{5/2}})^2\|X_1-X_2\|_{L^2_t\dot{H}^3}\right.\\
&\;+\delta^{1/2}\lambda^{-3}(\|X_1\|_{L^\infty_t\dot{H}^{5/2}}+\|X_2\|_{L^\infty_t\dot{H}^{5/2}})^2(\|X_1\|_{L^2_t\dot{H}^3}+\|X_2\|_{L^2_t\dot{H}^3})\|X_1-X_2\|_{L^\infty_t\dot{H}^2}\\
&\;\left.+(|\ln \delta|+1)\lambda^{-4}t^{1/2}(\|X_1\|_{L^\infty_t\dot{H}^{5/2}}+\|X_2\|_{L^\infty_t\dot{H}^{5/2}})^4\|X_1-X_2\|_{L^\infty_t\dot{W}^{2,4}}\right]\\
\leq &\; C(c)\left[\delta^{1/2}\lambda^{-2}R^2\|Q\|_{L^2_t\dot{H}^3}+\delta^{1/2}\lambda^{-3}R^3\|Q\|_{L^\infty_t\dot{H}^2}+(|\ln \delta|+1)\lambda^{-4}R^4t^{1/2}\|Q\|_{L^\infty_t\dot{H}^{5/2}}\right]\\
\leq &\; C(c)[\delta^{1/2}+(|\ln \delta|+1)t^{1/2}]\lambda^{-4}R^4\|\tilde{Q}\|_{L^{\infty}_{t}\dot{H}^{5/2}\cap L^2_{t}\dot{H}^3(\mathbb{T})}. \end{split} \label{eqn: space time estimate for the difference of solutions in proving uniqueness} \end{equation} Here we repeatedly used \eqref{eqn: uniform bound in uniqueness thm}. By Lemma \ref{lemma: a priori estimate of nonlocal eqn}, \begin{equation*}
\|\tilde{Q}\|_{L^{\infty}_{t}\dot{H}^{5/2}\cap L^2_{t}\dot{H}^3(\mathbb{T})}\leq C(c)[\delta^{1/2}+(|\ln \delta|+1)t^{1/2}]\lambda^{-4}R^4\|\tilde{Q}\|_{L^{\infty}_{t}\dot{H}^{5/2}\cap L^2_{t}\dot{H}^3(\mathbb{T})}. \end{equation*}
Hence, we first take $\delta = \delta_*(\lambda, R, c)$ sufficiently small and then take $t=t_*(\lambda, R, c)$ sufficiently small, such that $C(c)[\delta^{1/2}+(|\ln \delta|+1)t^{1/2}]\lambda^{-4}R^4\in(0,1)$. This implies that \begin{equation*}
\|\tilde{Q}\|_{L^{\infty}_{t_*}\dot{H}^{5/2}\cap L^2_{t_*}\dot{H}^3(\mathbb{T})}=0, \end{equation*} i.e.\;$\tilde{Q}(s,t) = 0$ for $t\in[0,t_*]$. Since \eqref{eqn: bi lipschitz assumption in uniqueness thm} and \eqref{eqn: uniform bound in uniqueness thm} hold uniformly on $[0,T]$, the above argument applies starting from an arbitrary initial time: if $\tilde{Q}(s,t_0) = 0$ for some $t_0\in [0,T]$, then $\tilde{Q}(s,t) = 0$ for $t\in[t_0,\min\{t_0+t_*,T\}]$. Hence, $\tilde{Q}(s,t) \equiv 0$ for $t\in[0,T]$, i.e.\;$\widetilde{X_1}(s,t) \equiv \widetilde{X_2}(s,t)$.
Recall that in \eqref{eqn: equation for mean of X in the main thm}, the solution $\bar{X}(t)$ is uniquely determined in $C^{0,1}([0,T])$ by $\tilde{X}(s,t)$. This implies that $\overline{X_1}(t)\equiv \overline{X_2}(t)$, and thus $X_1(s,t) \equiv X_2(s,t)$ for $(s,t)\in\mathbb{T}\times [0,T]$. This proves the uniqueness under the assumption \eqref{eqn: bi lipschitz assumption in uniqueness thm}.
The uniqueness of the local-in-time solution obtained in Theorem \ref{thm: local in time existence} follows immediately. \end{proof}
\section{Existence and Uniqueness of Global-in-time Solutions near Equilibrium Configurations}\label{section: global existence} In this section, we prove the existence of global-in-time solutions provided that the initial string configuration is sufficiently close to an equilibrium. The closeness is measured by the difference between a string configuration $Y$ and its closest equilibrium configuration $Y_*$ (see Definition \ref{def: closest equilbrium state}). We start with several remarks on the definition of the closest equilibrium configuration.
\begin{remark}\label{remark: mass center of the equilibrium agrees with initial data} In the definition of $Y_*$, we have $x_* = \frac{1}{2\pi}\int_{\mathbb{T}} Y(s)\,ds$. This can be seen from the Fourier point of view. Assume $Y(s) = \sum_{k\in\mathbb{Z}} \hat{Y}_k \mathrm{e}^{iks}$, where $\hat{Y}_k$'s are complex-valued 2-vectors. By Parseval's identity, \begin{equation} \begin{split}
\frac{1}{2\pi}\int_{\mathbb{T}}|Y(s)-Y_{\theta,x}(s)|^2\,ds = &\;|\hat{Y}_0-x|^2 +\sum_{k\in\mathbb{Z},|k|\geq 2} |\hat{Y}_k|^2\\
&\;+ \left|\hat{Y}_{1}- R_Y \mathrm{e}^{i\theta}\left( \begin{array}{c} \frac{1}{2}\\\frac{1}{2i} \end{array} \right)
\right|^2
+\left|\hat{Y}_{-1}- R_Y \mathrm{e}^{-i\theta}\left( \begin{array}{c} \frac{1}{2}\\-\frac{1}{2i} \end{array} \right)
\right|^2. \end{split} \label{eqn: L2 difference of Y and its closest equilibrium using Parseval} \end{equation}
In order to achieve its minimum, we should take $x_* = \hat{Y}_0 = \frac{1}{2\pi}\int_{\mathbb{T}} Y(s)\,ds$. In the sequel, we shall denote $Y_\theta(s) \triangleq Y_{\theta,x_*}(s)$ and only minimize $\|Y-Y_\theta\|_{L^2(\mathbb{T})}$ with respect to $\theta$. \qed \end{remark} \begin{remark}\label{remark: L2 closest is also Hs closest} Although $Y_*$ is defined to be the closest to $Y$ in the $L^2$-distance among all $Y_\theta $, it is also the closest in the $H^s$-sense for all $s\geq 0$. Indeed, by Parseval's identity, \begin{equation*} \begin{split}
\frac{1}{2\pi}\|Y-Y_{\theta}\|^2_{\dot{H}^s(\mathbb{T})} = &\;\sum_{k\in\mathbb{Z},|k|\geq 2} |k|^{2s}|\hat{Y}_k|^2+ \left|\hat{Y}_{1}- R_Y \mathrm{e}^{i\theta}\left( \begin{array}{c} \frac{1}{2}\\\frac{1}{2i} \end{array} \right)
\right|^2
+\left|\hat{Y}_{-1}- R_Y \mathrm{e}^{-i\theta}\left( \begin{array}{c} \frac{1}{2}\\-\frac{1}{2i} \end{array} \right)
\right|^2\\
=&\;\frac{1}{2\pi}\|Y-Y_{\theta}\|^2_{L^2(\mathbb{T})}+\sum_{k\in\mathbb{Z},|k|\geq 2} (|k|^{2s}-1)|\hat{Y}_k|^2. \end{split} \end{equation*}
The last term in the last line is constant with respect to $\theta$, which implies that $\theta_*$ also optimizes $\|Y-Y_{\theta}\|_{\dot{H}^s(\mathbb{T})}$. \qed \end{remark}
The following lemma establishes the equivalence of the $H^1$-distance and the energy difference between a string configuration $Y$ and its closest equilibrium configuration $Y_*$. Recall that the elastic energy of $Y$ is $\|Y\|_{\dot{H}^1(\mathbb{T})}^2/2$ (see Lemma \ref{lemma: energy estimate}). The motivation is that we wish to transform the global coercive bound on the energy difference, which comes from \eqref{eqn: energy estimate of Stokes immersed boundary problem}, into a bound on the more convenient quantity $\|Y-Y_*\|_{\dot{H}^1}$. \begin{lemma}\label{lemma: estimates concerning closest equilbrium} We have the following estimates for $Y$ and its closest equilibrium configuration $Y_*$: \begin{equation}
\frac{1}{2}\left(\|Y'(s)\|_{L^2(\mathbb{T})}^2-\|Y_*'(s)\|_{L^2(\mathbb{T})}^2\right)\leq \|Y'(s)-Y_*'(s)\|_{L^2(\mathbb{T})}^2\leq 4\left(\|Y'(s)\|_{L^2(\mathbb{T})}^2-\|Y_*'(s)\|_{L^2(\mathbb{T})}^2\right). \label{eqn: difference in H1 bounded by difference in energy} \end{equation}
\begin{proof} Without loss of generality, we assume that $(\theta_*,x_*) = (0,0)$; otherwise we simply apply a translation and rotation to $Y$. Define $D(s) = Y(s)-Y_*(s)$. By Remark \ref{remark: mass center of the equilibrium agrees with initial data} and the above assumption, $D(s)$ has mean zero on $\mathbb{T}$.
We first prove the upper bound. By the definition of $\theta_*$ and $Y_*$, we know that \begin{equation}
0 = \left.\frac{d}{d\theta}\right|_{\theta = \theta_*}\int_{\mathbb{T}}\left|Y(s)-Y_\theta(s)\right|^2\,ds = -2\int_\mathbb{T} (Y-Y_*)\cdot Y'_*\,ds = -2\int_\mathbb{T} D\cdot Y'_*\,ds, \label{eqn: equation for the optimal approximated equilibrium} \end{equation} and \begin{equation} \begin{split}
0 \leq &\; \left.\frac{d^2}{d\theta^2}\right|_{\theta = \theta_*}\int_{\mathbb{T}}\left|Y(s)-Y_\theta(s)\right|^2\,ds = -2\int_\mathbb{T} -Y'_*\cdot Y'_*+(Y-Y_*)\cdot Y_*''\,ds\\
= &\;2\int_\mathbb{T} |Y'_*|^2+(Y-Y_*)\cdot Y_*\,ds = 2\int_\mathbb{T} |Y'_*|^2+D\cdot Y_*\,ds\\ = &\;4\pi R_Y^2+2\int_\mathbb{T} D\cdot Y_*\,ds. \label{eqn: second order equation for the optimal approximated equilibrium} \end{split} \end{equation} Here we used $Y_{*}'' = -Y_{*}$. Moreover, since $Y$ and $Y_*$ have the same effective radius, by \eqref{eqn: enclosed area is pi}, \begin{equation*} 0 =\int_\mathbb{T}Y\times Y'\,ds - \int_\mathbb{T}Y_*\times Y_*'\,ds =\int_\mathbb{T}D\times Y_*'+ Y_*\times D' + D\times D'\,ds. \end{equation*} Since $Y_*'(s) = (-R_Y\sin s, R_Y\cos s) = Y_*^\perp(s)$ and $Y_*'' = - Y_*$, this simplifies to \begin{equation} 0 =\int_\mathbb{T}D\cdot Y_*+ Y_*'\cdot D' + D\times D'\,ds = \int_\mathbb{T}D\cdot Y_*- Y_*''\cdot D + D\times D'\,ds = \int_\mathbb{T}2D\cdot Y_* + D\times D'\,ds. \label{eqn: constraint on deviation from volume conservation} \end{equation}
In the sequel, we shall write $Y_*$ and $D$ in terms of their Fourier coefficients. With the assumption that $(\theta_*,x_*) = (0,0)$, we have \begin{align*} &\;Y_*(s) = R_Y\left( \begin{array}{c} \frac{1}{2}\\-\frac{i}{2} \end{array} \right)\mathrm{e}^{is} + R_Y\left( \begin{array}{c} \frac{1}{2}\\\frac{i}{2} \end{array} \right)\mathrm{e}^{-is},\\ &\;Y'_*(s) = R_Y\left( \begin{array}{c} \frac{i}{2}\\\frac{1}{2} \end{array} \right)\mathrm{e}^{is} + R_Y\left( \begin{array}{c} -\frac{i}{2}\\\frac{1}{2} \end{array} \right)\mathrm{e}^{-is}. \end{align*} Assume $D(s) = \sum_{k\in \mathbb{Z}}\hat{D}_k \mathrm{e}^{iks}$, where $\hat{D}_k$'s are complex-valued $2$-vectors satisfying $\hat{D}_{-k} = \overline{\hat{D}_{k}}$. Hence, \eqref{eqn: equation for the optimal approximated equilibrium} could be rewritten as \begin{equation*} 0= \hat{D}_1\cdot \overline{\left( \begin{array}{c} \frac{i}{2}\\\frac{1}{2} \end{array} \right)} + \hat{D}_{-1}\cdot \overline{\left( \begin{array}{c} -\frac{i}{2}\\\frac{1}{2} \end{array} \right)} = \left(-\frac{i}{2}\hat{D}_{1,1}+ \frac{1}{2}\hat{D}_{1,2}\right)+\overline{\left(-\frac{i}{2}\hat{D}_{1,1}+ \frac{1}{2}\hat{D}_{1,2}\right)}, \end{equation*} where $\hat{D}_{1,1}$ and $\hat{D}_{1,2}$ represent the first and the second component of $\hat{D}_1$ respectively. This implies that \begin{equation} 0 = \mathrm{Re}\,(-i\hat{D}_{1,1}+\hat{D}_{1,2}) = \mathrm{Im}\, \hat{D}_{1,1}+\mathrm{Re}\, \hat{D}_{1,2}. 
\label{eqn: simplified equation for the optimal approximated equilibrium} \end{equation} Similarly, the terms in \eqref{eqn: constraint on deviation from volume conservation} could be rewritten as follows: \begin{equation} \begin{split} \int_\mathbb{T} 2D\cdot Y_*\,ds = &\;4\pi R_Y\hat{D}_1\cdot \overline{\left( \begin{array}{c} \frac{1}{2}\\-\frac{i}{2} \end{array} \right)} + 4\pi R_Y\hat{D}_{-1}\cdot\overline{\left( \begin{array}{c} \frac{1}{2}\\\frac{i}{2} \end{array} \right)}\\ = &\;4\pi R_Y(\mathrm{Re}\, \hat{D}_{1,1} - \mathrm{Im}\, \hat{D}_{1,2}), \end{split} \label{eqn: inner product of D and Y_star} \end{equation} and \begin{equation} \begin{split} \int_\mathbb{T}D\times D'\,ds = &\;2\pi \sum_{k\in\mathbb{Z}}\hat{D}_k\times \overline{\left(ik \hat{D}_k\right)}\\ =&\;2\pi \sum_{k\in\mathbb{Z}} -ik \left(\hat{D}_{k,1}\overline{\hat{D}_{k,2}} - \hat{D}_{k,2}\overline{\hat{D}_{k,1}}\right)\\ =&\;2\pi \sum_{k\in\mathbb{Z}} 2k \mathrm{Im}\,\left(\hat{D}_{k,1}\overline{\hat{D}_{k,2}}\right)\\ =&\;4\pi \sum_{k\in\mathbb{Z}} k (\mathrm{Im}\, \hat{D}_{k,1}\mathrm{Re}\, \hat{D}_{k,2}-\mathrm{Re}\, \hat{D}_{k,1}\mathrm{Im}\, \hat{D}_{k,2})\\
\leq &\;2\pi \sum_{\genfrac{}{}{0pt}{}{k\in\mathbb{Z}}{|k|\geq 2}} |k| |\hat{D}_{k}|^2+ 4\pi (\mathrm{Im}\, \hat{D}_{1,1}\mathrm{Re}\, \hat{D}_{1,2}-\mathrm{Re}\, \hat{D}_{1,1}\mathrm{Im}\, \hat{D}_{1,2})\\ &\;-4\pi (\mathrm{Im}\, \hat{D}_{-1,1}\mathrm{Re}\, \hat{D}_{-1,2}-\mathrm{Re}\, \hat{D}_{-1,1}\mathrm{Im}\, \hat{D}_{-1,2})\\
= &\;2\pi \sum_{\genfrac{}{}{0pt}{}{k\in\mathbb{Z}}{|k|\geq 2}} |k| |\hat{D}_{k}|^2+ 8\pi (\mathrm{Im}\, \hat{D}_{1,1}\mathrm{Re}\, \hat{D}_{1,2}-\mathrm{Re}\, \hat{D}_{1,1}\mathrm{Im}\, \hat{D}_{1,2}). \end{split} \label{eqn: cross product of D and D'} \end{equation} Here we used the fact that $\hat{D}_{-1} = \overline{\hat{D}_1}$. By \eqref{eqn: second order equation for the optimal approximated equilibrium} and \eqref{eqn: inner product of D and Y_star}, we know that \begin{equation} -\mathrm{Re}\, \hat{D}_{1,1} + \mathrm{Im}\, \hat{D}_{1,2}\leq R_Y. \label{eqn: constraints on the coefficients from the second order condition of optimal approximation} \end{equation} We calculate that \begin{equation}
\|Y'(s) - Y_*'(s)\|_{L^2}^2 = \|D'(s)\|_{L^2}^2 = 2\pi\sum_{k\in\mathbb{Z}} k^2|\hat{D}_k|^2, \label{eqn: H1 norm of deviation in terms of Fourier coefficients} \end{equation} and \begin{equation} \begin{split}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 = &\;\int_\mathbb{T}(Y_*'+D')\cdot(Y_*'+D') - Y_*'\cdot Y_*'\,ds = \int_\mathbb{T}2Y_*'\cdot D'+D'\cdot D'\,ds\\ = &\;\int_\mathbb{T}-2Y_*''\cdot D+D'\cdot D'\,ds= \int_\mathbb{T}2Y_*\cdot D+D'\cdot D'\,ds\\
= &\;\int_\mathbb{T}2D\cdot Y_*\,ds+2\pi \sum_{k\in\mathbb{Z}} k^2 |\hat{D}_k|^2. \end{split} \label{eqn: expression for excess energy} \end{equation} \begin{case} If $\int_\mathbb{T}2D\cdot Y_*\,ds\geq 0$, the upper bound in \eqref{eqn: difference in H1 bounded by difference in energy} readily follows by comparing \eqref{eqn: H1 norm of deviation in terms of Fourier coefficients} and \eqref{eqn: expression for excess energy}. \end{case} \begin{case} If $\int_\mathbb{T}2D\cdot Y_*\,ds < 0$, by \eqref{eqn: inner product of D and Y_star}, $\mathrm{Re}\, \hat{D}_{1,1} - \mathrm{Im}\, \hat{D}_{1,2} <0$. Then by \eqref{eqn: constraint on deviation from volume conservation}, \eqref{eqn: inner product of D and Y_star}, \eqref{eqn: cross product of D and D'} and \eqref{eqn: expression for excess energy}, \begin{equation*} \begin{split}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 = &\;2\pi \sum_{k\in\mathbb{Z}} k^2 |\hat{D}_k|^2 +\frac{3}{2}\int_\mathbb{T}2D\cdot Y_*\,ds-\int_\mathbb{T}D\cdot Y_*\,ds\\
=&\;2\pi \sum_{k\in\mathbb{Z}} k^2 |\hat{D}_k|^2 -\frac{3}{2}\int_\mathbb{T}D\times D'\,ds-\int_\mathbb{T}D\cdot Y_*\,ds\\
\geq &\; 2\pi \sum_{\genfrac{}{}{0pt}{}{k\in\mathbb{Z}}{|k|\geq 2}} k^2 |\hat{D}_k|^2 -3\pi \sum_{\genfrac{}{}{0pt}{}{k\in\mathbb{Z}}{|k|\geq 2}} |k| |\hat{D}_k|^2+2\pi(|\hat{D}_1|^2+|\hat{D}_{-1}|^2)\\ &\;-12\pi(\mathrm{Im}\, \hat{D}_{1,1}\mathrm{Re}\, \hat{D}_{1,2}-\mathrm{Re}\, \hat{D}_{1,1}\mathrm{Im}\, \hat{D}_{1,2})\\ &\;-2\pi R_Y(\mathrm{Re}\, \hat{D}_{1,1}- \mathrm{Im}\, \hat{D}_{1,2})\\
\geq &\; \pi \sum_{\genfrac{}{}{0pt}{}{k\in\mathbb{Z}}{|k|\geq 2}} (2k^2-3|k|) |\hat{D}_k|^2+4\pi|\hat{D}_1|^2 \\ &\;+12\pi\mathrm{Re}\, \hat{D}_{1,1}\mathrm{Im}\, \hat{D}_{1,2}-2\pi R_Y(\mathrm{Re}\, \hat{D}_{1,1}- \mathrm{Im}\, \hat{D}_{1,2}). \end{split} \end{equation*} In the last line, we used the fact that $\hat{D}_{-1} = \overline{\hat{D}_1}$ and $\mathrm{Im}\, \hat{D}_{1,1}\mathrm{Re}\, \hat{D}_{1,2}\leq 0$ due to \eqref{eqn: simplified equation for the optimal approximated equilibrium}.
If $\mathrm{Re}\, \hat{D}_{1,1}$ and $\mathrm{Im}\, \hat{D}_{1,2}$ have the same sign, then $12\pi\mathrm{Re}\, \hat{D}_{1,1}\mathrm{Im}\, \hat{D}_{1,2}-2\pi R_Y(\mathrm{Re}\, \hat{D}_{1,1}- \mathrm{Im}\, \hat{D}_{1,2})\geq 0$. Hence, \begin{equation*}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 \geq \pi \sum_{k\in\mathbb{Z}} \frac{1}{2}k^2 |\hat{D}_k|^2+3\pi|\hat{D}_1|^2 \geq \frac{1}{4}\|D'(s)\|_{L^2}^2. \end{equation*}
Otherwise, if $\mathrm{Re}\, \hat{D}_{1,1}$ and $\mathrm{Im}\, \hat{D}_{1,2}$ have different signs, i.e., $\mathrm{Re}\, \hat{D}_{1,1}\leq 0$ and $-\mathrm{Im}\, \hat{D}_{1,2}\leq 0$ since $\mathrm{Re}\, \hat{D}_{1,1} - \mathrm{Im}\, \hat{D}_{1,2} <0$, we know that \begin{equation*} \begin{split}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 \geq &\; \pi \sum_{k\in\mathbb{Z}} \frac{1}{2}k^2 |\hat{D}_k|^2+3\pi|\hat{D}_1|^2\\
&\;-12\pi|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|+4\pi R_Y\sqrt{|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|}. \end{split} \end{equation*} Also, by \eqref{eqn: constraints on the coefficients from the second order condition of optimal approximation}, \begin{equation}
|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|\leq \frac{1}{4}(-\mathrm{Re}\, \hat{D}_{1,1}+\mathrm{Im}\, \hat{D}_{1,2})^2\leq \frac{1}{4}R_Y^2. \end{equation} This implies \begin{equation*} \begin{split}
&\;3\pi|\hat{D}_1|^2-12\pi|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|+4\pi R_Y\sqrt{|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|}\\
\geq &\; 3\pi|\hat{D}_1|^2 - 3\pi\left(|\mathrm{Re}\, \hat{D}_{1,1}|^2+|\mathrm{Im}\, \hat{D}_{1,2}|^2\right)-6\pi|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|+4\pi R_Y\sqrt{|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|}\\
\geq &\;\pi\sqrt{|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|}\left(4R_Y-6\sqrt{|\mathrm{Re}\, \hat{D}_{1,1}||\mathrm{Im}\, \hat{D}_{1,2}|}\right)\\ \geq &\;0. \end{split} \end{equation*} Therefore, \begin{equation}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 \geq \frac{\pi}{2}\sum_{k\in\mathbb{Z}}k^2 |\hat{D}_k|^2 = \frac{1}{4}\|D'(s)\|_{L^2}^2. \label{eqn: the excess energy can bound the H1 difference} \end{equation} This proves the upper bound in \eqref{eqn: difference in H1 bounded by difference in energy}. \end{case}
Now we turn to the lower bound. By \eqref{eqn: constraint on deviation from volume conservation} and \eqref{eqn: expression for excess energy}, \begin{equation*} \begin{split}
\|Y'(s)\|_{L^2}^2 - \|Y_*'(s)\|_{L^2}^2 = &\;\int_\mathbb{T}2D\cdot Y_*\,ds+\|D'\|_{L^2}^2 = -\int_\mathbb{T}D\times D'\,ds+\|D'\|_{L^2}^2\\
\leq &\;\|D\|_{L^2}\|D'\|_{L^2}+\|D'\|_{L^2}^2 \leq 2\|D'\|_{L^2}^2. \end{split} \end{equation*} Here we used the fact that $D$ has mean zero on $\mathbb{T}$.
This completes the proof. \end{proof} \begin{remark}
As a byproduct, we know that for any Jordan curve $Y(s)\in H^1(\mathbb{T})$, $\|Y_*\|_{\dot{H}^1(\mathbb{T})}\leq \|Y\|_{\dot{H}^1(\mathbb{T})}$, with equality if and only if $Y = Y_*$. Hence, Lemma \ref{lemma: estimates concerning closest equilbrium} implies that the string configuration with circular shape and uniform parameterization has the lowest elastic energy among all $H^1$-configurations enclosing the same area. This can also be shown by the isoperimetric inequality and the Cauchy-Schwarz inequality. \qed \end{remark} \end{lemma}
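For completeness, the alternative argument mentioned in the remark above can be sketched as follows; here $L(Y)$ denotes the length of the curve, and we use that $Y$ encloses area $\pi R_Y^2$. By the isoperimetric inequality and the Cauchy-Schwarz inequality,
\begin{equation*}
4\pi\cdot\pi R_Y^2 \leq L(Y)^2 = \left(\int_{\mathbb{T}} |Y'(s)|\,ds\right)^2 \leq 2\pi\int_{\mathbb{T}} |Y'(s)|^2\,ds,
\end{equation*}
so that $\|Y\|_{\dot{H}^1(\mathbb{T})}^2 \geq 2\pi R_Y^2 = \|Y_*\|_{\dot{H}^1(\mathbb{T})}^2$; equality forces $|Y'|$ to be constant and the curve to be a circle.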
Let $X$ be a local solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} obtained in Theorem \ref{thm: local in time existence}. By Lemma \ref{lemma: energy estimate} and Lemma \ref{lemma: estimates concerning closest equilbrium}, we readily obtain a global-in-time bound on $\|X-X_*\|_{\dot{H}^1(\mathbb{T})}(t)$. It would be ideal if we could show that $\|X-X_*\|_{\dot{H}^{5/2}(\mathbb{T})}(t)$ cannot remain large when $\|X-X_*\|_{\dot{H}^1(\mathbb{T})}(t)$ is small. The following lemma is an effort in this direction, which is crucial in proving Theorem \ref{thm: global existence near equilibrium}.
\begin{lemma}\label{lemma: bound and decay for H2.5 difference when energy difference is small} Suppose $T\in(0,1]$ and $X_0\in H^{5/2}(\mathbb{T})$. Let $X(s,t)\in \Omega_T$ be a (local) solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem}, s.t. \begin{equation}
\|X\|_{L^\infty_{T}\dot{H}^{5/2}\cap L^2_{T}\dot{H}^3(\mathbb{T})} \leq R<+\infty, \label{eqn: uniform bound in the lemma for the small energy regularity} \end{equation} and for some $\lambda>0$, \begin{equation}
|X(s_1,t)-X(s_2,t)|\geq \lambda|s_1-s_2|,\quad \forall\,s_1,s_2\in\mathbb{T},\;t\in[0,T]. \label{eqn: uniform bi lipschitz constant in the lemma for the small energy regularity} \end{equation}
\begin{enumerate} \item There exists $T_*=T_*(T,R,\lambda)\in(0,T]$, s.t. \begin{equation}
\|X-X_{*}\|_{L^{\infty}_{T_*}\dot{H}^{5/2}(\mathbb{T})}^2\leq 2\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}^2, \label{eqn: upper bound for the growth of H2.5 norm in a short period of time in the statement of the lemma} \end{equation} where $X_*(\cdot,t)$ and $X_{0*}$ are the closest equilibrium configurations to $X(\cdot,t)$ and $X_0(\cdot)$ respectively. \item Given $T'\in(0,T_*]$, there exists a constant $c_* = c_*(R,\lambda, T')>0$, s.t.\;if \begin{equation}
\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}\geq c_* \|X-X_{0*}\|_{L^\infty_{T'}\dot{H}^1(\mathbb{T})}, \label{eqn: condition in the small energy lemma initial H2.5 norm is much larger than H1 norm on the whole interval} \end{equation} then there exists $t_*\in[T'/4,T']$, s.t. \begin{equation}
\|X-X_{0*}\|^2_{\dot{H}^{5/2}(\mathbb{T})}(t_*)\leq \mathrm{e}^{- t_*/4}\|X_0-X_{0*}\|^2_{\dot{H}^{5/2}(\mathbb{T})}. \label{eqn: a lower H2.5 norm could be found} \end{equation} In particular, \begin{equation}
\|X-X_{*}\|^2_{\dot{H}^{5/2}(\mathbb{T})}(t_*)\leq \mathrm{e}^{-t_*/4}\|X_0-X_{0*}\|^2_{\dot{H}^{5/2}(\mathbb{T})}. \label{eqn: a lower H2.5 norm with updated approximation could be found} \end{equation} \end{enumerate}
\begin{proof} It is easy to see that $X_{0*}(s,t) \equiv X_{0*}(s)\in H^{5/2}(\mathbb{T})$ is the (unique, by Theorem \ref{thm: local in time uniqueness}) solution for \eqref{eqn: contour dynamic formulation of the immersed boundary problem} starting from $X_{0*}$. Consider $\tilde{X}-\widetilde{X_{0*}}$, which satisfies \begin{equation} \begin{split} &\;\partial_t (\tilde{X}-\widetilde{X_{0*}})= \mathcal{L}(\tilde{X}-\widetilde{X_{0*}}) + (\widetilde{g_X} - \widetilde{g_{X_{0*}}}),\quad s\in \mathbb{T}, t\in[0,T],\\ &\;(\tilde{X}-\widetilde{X_{0*}})(s,0) = (X_0-X_{0*})(s). \label{eqn: equation for the difference from the initial steady state} \end{split} \end{equation} Similarly to \eqref{eqn: space time estimate for the difference of solutions in proving uniqueness}, we use the assumptions \eqref{eqn: uniform bound in the lemma for the small energy regularity} and \eqref{eqn: uniform bi lipschitz constant in the lemma for the small energy regularity} and Corollary \ref{coro: H2 estimate for g_X0-g_X1} with $\mu = 2$ to find that for all $t\in[0,T]$ with $T\leq 1$ and all $\delta\in(0,1]$, \begin{equation}
\left\|g_{X}-g_{X_{0*}}\right\|_{L_{t}^2\dot{H}^2}
\leq C\left(\delta^{1/2}\|X-X_{0*}\|_{L^2_t\dot{H}^3}+[\delta^{1/2}+(|\ln \delta|+1)t^{1/2}]\|X-X_{0*}\|_{L^\infty_t\dot{W}^{2,4}}\right), \label{eqn: H2 estimate of the difference between solution and equilibrium solution} \end{equation} where $C = C(R,\lambda)$. By Lemma \ref{lemma: a priori estimate of nonlocal eqn} and the interpolation inequality, for all $t\in[0,T]$ and all $\delta \in(0,1]$, \begin{equation*} \begin{split}
&\;\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}}^2(t)+\frac{1}{4}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^2_{t} \dot{H}^{3}}^2\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+ 4\|\widetilde{g_X} - \widetilde{g_{X_{0*}}}\|_{L_{t}^2 \dot{H}^{2}}^2\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2\\
&\;+ C_3 \left(\delta\|X-X_{0*}\|_{L^2_t\dot{H}^3}^2+[\delta+(|\ln \delta|+1)^2t]\|X-X_{0*}\|_{L^\infty_t\dot{H}^{1}}^{1/3}\|X-X_{0*}\|_{L^\infty_t\dot{H}^{5/2}}^{5/3}\right), \end{split} \end{equation*} where $C_3 = C_3(R,\lambda)$ is a constant; for simplicity, we assume $C_3(R,\lambda)\geq 1$. Taking $\delta = t\leq 1$ with $t\in[0,T_*]$, we find that \begin{equation} \begin{split}
&\;\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}}^2(t)+\left(\frac{1}{4}-C_3 t\right)\|\tilde{X}-\widetilde{X_{0*}}\|_{L^2_{t} \dot{H}^{3}}^2\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+ 2C_3 t(|\ln t|+1)^2 \|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^{1}}^{1/3}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^{5/2}}^{5/3}. \end{split} \label{eqn: estimates on the difference between X and its closest equilibrium configuration} \end{equation} Now we take $T_*\leq T\leq 1$ sufficiently small, s.t. \begin{equation}
8C_3(R,\lambda) T_*(|\ln T_*|+1)^2 \leq 1 \label{eqn: expression for T* in the lemma for small energy regularity} \end{equation}
and $x(|\ln x|+1)^2$ is increasing in $[0,T_*]$. In this way, $C_3t\leq C_3 T_*(|\ln T_*|+1)^2\leq 1/8$. By \eqref{eqn: estimates on the difference between X and its closest equilibrium configuration}, \begin{equation} \begin{split}
&\;\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}}^2(t)+\frac{1}{8}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^2_{t} \dot{H}^{3}}^2\\
\leq &\;\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}}^2(t)+\left(\frac{1}{4}-C_3 t\right)\|\tilde{X}-\widetilde{X_{0*}}\|_{L^2_{t} \dot{H}^{3}}^2\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+ 2C_3 t(|\ln t|+1)^2 \|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^{1}}^{1/3}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^{5/2}}^{5/3}\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+ \frac{1}{4}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{T_*}\dot{H}^{5/2}}^2. \label{eqn: energy estimate in global existence by applying a priori estimates in Appendix} \end{split} \end{equation} By taking supremum in $t\in[0,T_*]$ on the left hand side, we find that \begin{equation}
\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{T_*}\dot{H}^{5/2}(\mathbb{T})}^2\leq \frac{4}{3}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}^2. \label{eqn: upper bound for the growth of H2.5 norm in a short period of time in the proof} \end{equation} In view of Remark \ref{remark: L2 closest is also Hs closest}, \eqref{eqn: upper bound for the growth of H2.5 norm in a short period of time in the statement of the lemma} immediately follows with $T_*$ defined in \eqref{eqn: expression for T* in the lemma for small energy regularity}.
Next, we prove the second part of the lemma for a given $T'\in(0,T_*]$. Putting \eqref{eqn: upper bound for the growth of H2.5 norm in a short period of time in the proof} back into the third line of \eqref{eqn: energy estimate in global existence by applying a priori estimates in Appendix} and taking $t= T'$, we find that \begin{equation} \begin{split}
&\;\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}}^2(T')+\frac{1}{8}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^2_{T'} \dot{H}^{3}}^2\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+2C_3T'(|\ln T'|+1)^2 \left(\frac{4}{3}\right)^{5/6}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}^{5/3}\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{T'}\dot{H}^1}^{1/3}\\
\leq &\; \|X_0-X_{0*}\|_{\dot{H}^{5/2}}^2+4C_3 T'(|\ln T'|+1)^2 c^{-1/3}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}^{2}, \label{eqn: equation for J before introducing the notation J} \end{split} \end{equation}
where in the last inequality we introduced the notation $c = \|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}/\|X-X_{0*}\|_{L^\infty_{T'} \dot{H}^1(\mathbb{T})}$. Denote $J(t) = \|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}(\mathbb{T})}^2(t)$. By interpolation, for all $t\in[0,T']$, \begin{equation*} \begin{split}
J(t)^{4/3} = \|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{5/2}(\mathbb{T})}^{8/3}(t)
\leq &\;\|\tilde{X}-\widetilde{X_{0*}}\|_{L^\infty_{T'}\dot{H}^{1}(\mathbb{T})}^{2/3}\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{3}(\mathbb{T})}^2(t)\\
= &\;c^{-2/3}J(0)^{1/3}\|\tilde{X}-\widetilde{X_{0*}}\|_{\dot{H}^{3}(\mathbb{T})}^2(t). \end{split} \end{equation*} We multiply both sides of \eqref{eqn: equation for J before introducing the notation J} by $c^{-2/3}J(0)^{1/3}$ and find that \begin{equation}
c^{-2/3}J(0)^{1/3} J(T')+\frac{1}{8}\int_0^{T'} J(\omega)^{4/3}\,d\omega\leq (c^{-2/3}+4C_3 T'(|\ln T'|+1)^2 c^{-1})J(0)^{4/3}. \label{eqn: simplified evolution equation for J the H2.5 difference} \end{equation}
Now suppose the statement of the lemma is false. Namely, for every $c>0$, there exists a solution $X^{(c)}(s,t)$ with $t\in[0,T']$, starting from some $X_0^{(c)}(s)\in H^{5/2}(\mathbb{T})$, satisfying \eqref{eqn: uniform bound in the lemma for the small energy regularity} and \eqref{eqn: uniform bi lipschitz constant in the lemma for the small energy regularity}, such that \begin{equation*}
\|X^{(c)}_0-X^{(c)}_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}\geq c \|X^{(c)}-X^{(c)}_{0*}\|_{L^\infty_{T'}\dot{H}^1(\mathbb{T})}, \end{equation*} while for all $t\in [T'/4,T']$, \begin{equation*}
J^{(c)}(t) = \|X^{(c)}-X^{(c)}_{0*}\|_{\dot{H}^{5/2}}^2(t)> \mathrm{e}^{-t/4}\|X^{(c)}_0-X^{(c)}_{0*}\|_{\dot{H}^{5/2}}^2 = \mathrm{e}^{- t/4}J^{(c)}(0). \end{equation*} Since \eqref{eqn: simplified evolution equation for J the H2.5 difference} holds with $J$ replaced by $J^{(c)}$, we find that \begin{equation*}
c^{-2/3}\mathrm{e}^{- T'/4}J^{(c)}(0)^{4/3} +\frac{1}{8}\int_{T'/4}^{T'} \mathrm{e}^{-\omega/3}J^{(c)}(0)^{4/3}\,d\omega < (c^{-2/3}+4C_3 T'(|\ln T'|+1)^2 c^{-1})J^{(c)}(0)^{4/3},
\end{equation*} which implies that \begin{equation}
\frac{3}{8}\left(\mathrm{e}^{-T'/12} - \mathrm{e}^{-T'/3}\right) < c^{-2/3}\left(1-\mathrm{e}^{-T'/4}\right)+4C_3 T'(|\ln T'|+1)^2 c^{-1}. \label{eqn: constraint for c} \end{equation} Since $T'\leq T_*\leq 1$, \begin{equation*} \mathrm{e}^{-T'/12} - \mathrm{e}^{-T'/3} > \frac{1}{4}T' \mathrm{e}^{-T'/3}> \frac{1}{6}T',\quad 1-\mathrm{e}^{-T'/4} < \frac{1}{4}T'. \end{equation*} Then \eqref{eqn: constraint for c} implies that \begin{equation}
\frac{c}{4} - c^{1/3}<16C_3(|\ln T'|+1)^2. \label{eqn: equation for c before introducing definitions of constants} \end{equation} Let $c_+$ be the unique positive real number such that the equality is achieved in \eqref{eqn: equation for c before introducing definitions of constants}. Then we have \begin{equation*}
\frac{c_+}{4} = 16C_3(|\ln T'|+1)^2 +c_+^{1/3}\leq 16C_3(|\ln T'|+1)^2 + \frac{c_+}{27}+2, \end{equation*} where we used the elementary inequality $x^{1/3}\leq \frac{x}{27}+2$ for $x>0$ (the function $x^{1/3}-\frac{x}{27}$ attains its maximum value $2$ at $x=27$). This implies that \begin{equation}
c_+\leq C_4(R,\lambda)(|\ln T'|+1)^2. \label{eqn: introducing C_4} \end{equation}
Here $C_4\geq 1$ is some constant depending only on $R$ and $\lambda$; it will be used in the proof of Theorem \ref{thm: global existence near equilibrium} and Theorem \ref{thm: exponential convergence}. Therefore, if $c\geq C_4(R,\lambda)(|\ln T'|+1)^2$, \eqref{eqn: equation for c before introducing definitions of constants} does not hold, which is a contradiction. Hence, we proved \eqref{eqn: a lower H2.5 norm could be found} with \begin{equation}
c_*(R,\lambda,T') = C_4(R,\lambda)(|\ln T'|+1)^2. \label{eqn: defintion of c_*} \end{equation}
\eqref{eqn: a lower H2.5 norm with updated approximation could be found} immediately follows from \eqref{eqn: a lower H2.5 norm could be found} by virtue of Remark \ref{remark: L2 closest is also Hs closest}. This completes the proof. \end{proof} \begin{remark} Taking a smaller $\mu$ in \eqref{eqn: H2 estimate of the difference between solution and equilibrium solution} would give a sharper bound for $c_*$ in \eqref{eqn: defintion of c_*}, but that is not necessary for the remaining results. \qed \end{remark} \end{lemma}
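As a quick numerical sanity check (not part of the argument), the elementary inequalities $\mathrm{e}^{-t/12}-\mathrm{e}^{-t/3}>t/6$ and $1-\mathrm{e}^{-t/4}<t/4$ for $t\in(0,1]$, used in the proof above, can be verified on a grid:

```python
import math

def check_elementary_bounds(n=1000):
    # Verify e^{-t/12} - e^{-t/3} > t/6 and 1 - e^{-t/4} < t/4
    # on a grid of t in (0, 1], as used in the proof of the lemma.
    for i in range(1, n + 1):
        t = i / n
        assert math.exp(-t / 12) - math.exp(-t / 3) > t / 6
        assert 1 - math.exp(-t / 4) < t / 4
    return True
```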
Using Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, we are able to prove Theorem \ref{thm: global existence near equilibrium}. \begin{proof}[Proof of Theorem \ref{thm: global existence near equilibrium} (existence and uniqueness of global solution near equilibrium)] Without loss of generality, we assume $R_{X_0} = 1$; otherwise, simply rescale $X_0$ by a factor of $R_{X_0}^{-1}$. Note that the contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem} is invariant under translation and scaling. Moreover, we note that the effective radius of $X(\cdot,t)$ is invariant in time, since the flow is volume-preserving.
Define \begin{equation}
S_\varepsilon = \left\{Z(s)\in H^{5/2}(\mathbb{T}):\, R_Z = 1,\,\|Z-Z_* \|_{\dot{H}^{5/2}(\mathbb{T})}\leq \varepsilon\right\}. \label{eqn: def of data close to equilibrium} \end{equation}
We claim that there exists a universal constant $\varepsilon_0$, to be specified below, such that for $\forall\, Z(s)\in S_{\varepsilon_0}$, $\|Z\|_{\dot{H}^{5/2}}\leq C$ for some universal constant $C$, and \begin{equation}
|Z(s_1)-Z(s_2)|\geq \frac{1}{\pi}|s_1 -s_2|,\quad \forall\, s_1,s_2\in \mathbb{T}. \label{eqn: uniform lower bound for lambda in the proof of global existence} \end{equation} In fact, \begin{equation*} \begin{split}
|Z(s_1)-Z(s_2)|\geq &\;|Z_*(s_1)-Z_*(s_2)|-|(Z_*-Z)(s_1)-(Z_*-Z)(s_2)|\\
\geq &\;\frac{2}{\pi}|s_1 -s_2| - \|Z-Z_*\|_{\dot{C}^1(\mathbb{T})}|s_1-s_2|\\
\geq &\;\left(\frac{2}{\pi}-C_5\varepsilon_0\right)|s_1 -s_2|, \end{split} \end{equation*} where $C_5>0$ is a universal constant coming from the Sobolev inequality. Hence, it suffices to take $\varepsilon_0=\min\{(C_5 \pi)^{-1},1\}$, so that $\frac{2}{\pi}-C_5\varepsilon_0\geq \frac{1}{\pi}$;
that $\|Z\|_{\dot{H}^{5/2}}\leq C$ is obvious.
The above uniform estimates, together with Theorem \ref{thm: local in time existence} and Theorem \ref{thm: local in time uniqueness}, imply that there is a universal constant $T_0\in(0,1)$, s.t.\;for $\forall\, X_0\in S_{\varepsilon_0}$, there is a unique solution $X(s,t)$ for \eqref{eqn: contour dynamic formulation of the immersed boundary problem} in $C_{[0,T_0]}H^{5/2}\cap L^2_{T_0}H^3(\mathbb{T})$ starting from $X_0$, s.t. \begin{equation}
\|X\|_{L^\infty_{T_0} \dot{H}^{5/2}\cap L^2_{T_0} \dot{H}^{3}(\mathbb{T})}\leq 4\|X_0\|_{\dot{H}^{5/2}(\mathbb{T})}\leq 4(\|X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}+\varepsilon_0) \triangleq C_6, \label{eqn: uniform bound of the family of solution} \end{equation} where $C_6$ is a universal constant. Moreover, for $\forall\, s_1,s_2\in\mathbb{T}$ and $t\in[0,T_0]$, \begin{equation}
\left|X(s_1,t) - X(s_2,t)\right| \geq \frac{1}{2\pi}|s_1 - s_2|. \label{eqn: uniform bi lipschitz constant of the family of solution} \end{equation} That is, $X(s,t)$ satisfies the assumption of Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small} with $T = T_0$, $R = C_6$, and $\lambda =(2\pi)^{-1}$, which are all universal constants. Hence, by Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, there exists a universal constant $T_* = T_*(T_0, C_6, 1/(2\pi))\in(0,T_0]$ such that \begin{equation*}
\|X-X_*\|_{L^\infty_{T_*}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}. \end{equation*}
Next, we investigate $\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{[0,t]}\dot{H}^1(\mathbb{T})}$. Using the equation for $\tilde{X}$ (see \eqref{eqn: equation for oscillation of X in the main thm}), we find that for $\forall\, t\in[0,T_0]$, \begin{equation} \begin{split}
\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^1(\mathbb{T})} \leq &\; \|\tilde{X}-\widetilde{X_0}\|_{L^{\infty}_{t}\dot{H}^1(\mathbb{T})}+\|\widetilde{X_0}-\widetilde{X_{0*}}\|_{\dot{H}^1(\mathbb{T})}\\
\leq &\; \int_0^t\|\partial_t \tilde{X}\|_{\dot{H}^1(\mathbb{T})}(\tau)\,d\tau+\|\widetilde{X_0}-\widetilde{X_{0*}}\|_{\dot{H}^1(\mathbb{T})}\\
\leq &\; \int_0^t\|\mathcal{L}\tilde{X}\|_{\dot{H}^1(\mathbb{T})}(\tau)+\|\widetilde{g_{\tilde{X}}}\|_{\dot{H}^1(\mathbb{T})}(\tau)\,d\tau+\|\widetilde{X_0}-\widetilde{X_{0*}}\|_{\dot{H}^1(\mathbb{T})}. \label{eqn: estimate for H1 difference of X and X0*} \end{split} \end{equation}
In order to give an estimate for $\|\widetilde{g_{\tilde{X}}}\|_{\dot{H}^1}$, we should go back to \eqref{eqn: introduce the notation Gamma_1} and \eqref{eqn: rough pointwise estimate of Gamma} and apply Lemma \ref{lemma: estimates for L M N}. Indeed, with \eqref{eqn: uniform bound of the family of solution} and \eqref{eqn: uniform bi lipschitz constant of the family of solution}, we have \begin{equation*}
\|g_{\tilde{X}}'\|_{L^2(\mathbb{T})}(t)\leq C \|\tilde{X}\|_{\dot{H}^2(\mathbb{T})}^2(t)\|\tilde{X}'\|_{L^\infty(\mathbb{T})}(t)\leq C, \end{equation*} where $C$ is a universal constant. Hence, by \eqref{eqn: uniform bound of the family of solution}, \eqref{eqn: estimate for H1 difference of X and X0*} and Lemma \ref{lemma: estimates concerning closest equilbrium}, \begin{equation} \begin{split}
\|\tilde{X}-\widetilde{X_{0*}}\|_{L^{\infty}_{t}\dot{H}^1} \leq &\;\int_0^t C\left(\|\tilde{X}\|_{\dot{H}^2}(\tau)+1\right)\,d\tau+\|\widetilde{X_0}-\widetilde{X_{0*}}\|_{\dot{H}^{1}}\\
\leq &\;C_7 t+2\left(\|\widetilde{X_0}\|_{\dot{H}^1}^2-\|\widetilde{X_{0*}}\|_{\dot{H}^1}^2\right)^{1/2}\\ \triangleq &\;C_7t +2\zeta_{X_0}, \end{split}
\label{eqn: bound for H1 norm of the difference to the equilibrium} \end{equation} where $C_7$ is a universal constant. Here we applied Lemma \ref{lemma: estimates concerning closest equilbrium}
and defined $\zeta_{X_0}^2 = \|X_0\|_{\dot{H}^1}^2 - \|X_{0*}\|_{\dot{H}^1}^2$. The above estimate is true as long as $X_0\in S_{\varepsilon_0}$ and $t\in[0,T_0]$.
In what follows, we shall prove the Theorem with \begin{equation} \varepsilon_* = \varepsilon_0 = \min\{(C_5 \pi)^{-1},1\} \label{eqn: definition of epsilon* in the global existence} \end{equation}
We also take $\xi_* \leq T_*/2$ such that \begin{equation}
2 C_4(C_7+2) (|\ln (2\xi_*)|+1)^2(2\xi_*)\leq \varepsilon_*, \label{eqn: definition of xi* in the global existence} \end{equation}
where $T_* = T_*(T_0, C_6, 1/(2\pi))\in(0,T_0]$, given by Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, $C_4 = C_4(C_6,1/(2\pi))$, defined in \eqref{eqn: introducing C_4}, and $C_7$, defined in \eqref{eqn: bound for H1 norm of the difference to the equilibrium}, are all universal constants. Hence, both $\varepsilon_*$ and $\xi_*$ are universal. We fix $T' = 2\xi_*$, which is also a universal constant. By Lemma \ref{lemma: estimates concerning closest equilbrium} and the assumption that $\|X_0-X_{0*}\|_{\dot{H}^1} \leq \xi_*$, \begin{equation}
\zeta_{X_0}^2 \leq 2\|X_0-X_{0*}\|^2_{\dot{H}^1}\leq 2\xi_*^2 \leq T'^2 \leq T_*^2. \label{eqn: T' is smaller than T_*} \end{equation}
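Note that a $\xi_*$ satisfying \eqref{eqn: definition of xi* in the global existence} always exists, since
\begin{equation*}
\lim_{x\rightarrow 0^+} x(|\ln x|+1)^2 = 0,
\end{equation*}
so the left-hand side of \eqref{eqn: definition of xi* in the global existence} can be made arbitrarily small by shrinking $\xi_*$.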
We are going to use mathematical induction to show existence of the global solution. First we focus on the local solution $X(s,t)$ for $t\in[0,T']$. By \eqref{eqn: bound for H1 norm of the difference to the equilibrium} and \eqref{eqn: T' is smaller than T_*}, \begin{equation}
\|X-X_{0*}\|_{L^\infty_{T'}\dot{H}^1}\leq (C_7+2)T'. \label{eqn: a final bound in the proof of global existence for H1 norm of X-X0* in 0 to T'} \end{equation}
We apply Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small} to obtain the constant $c_* = c_*(C_6,1/(2\pi),T')$, and claim that the assumption \eqref{eqn: condition in the small energy lemma initial H2.5 norm is much larger than H1 norm on the whole interval} holds if $\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}\geq \varepsilon_*/2$. In fact, by \eqref{eqn: defintion of c_*}, \eqref{eqn: definition of xi* in the global existence} and \eqref{eqn: a final bound in the proof of global existence for H1 norm of X-X0* in 0 to T'}, \begin{equation} \begin{split}
c_*\|X-X_{0*}\|_{L^\infty_{T'}\dot{H}^1}\leq &\; C_4(|\ln T'|+1)^2\cdot (C_7+2)T'= C_4(C_7+2) (|\ln (2\xi_*)|+1)^2(2\xi_*)\leq \varepsilon_*/2. \end{split} \label{eqn: proof of the threshold of H2.5 norm in the proof of global existence} \end{equation}
Therefore, if $\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \in[ \varepsilon_*/2, \varepsilon_*]$, by \eqref{eqn: proof of the threshold of H2.5 norm in the proof of global existence} and Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, there exists $t_1\in [T'/4,T']$, s.t. \begin{equation*}
\|X-X_*\|_{\dot{H}^{5/2}}(t_1)\leq e^{-t_1/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}\leq \varepsilon_*. \end{equation*}
Otherwise, if $\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \leq \varepsilon_*/2$, by the fact that $T'\leq T_*$ and Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, there exists $t_1\in [T'/4,T']$, s.t. \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t_1)\leq \|X-X_{*}\|_{L^{\infty}_{T'}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \leq \varepsilon_*. \end{equation*} This implies that for all $X_0\in S_{\varepsilon_*}$, we can always find $t_1\in [T'/4,T']$, such that the unique local solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} in $C_{[0,t_1]}H^{5/2}\cap L^2_{t_1}H^{3}(\mathbb{T})$ satisfies that \begin{align*}
&\;\|X-X_{*}\|_{L^{\infty}_{t_1}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\varepsilon_*,\\
&\;|X(s_1,t) - X(s_2,t)| \geq \frac{1}{2\pi}|s_1 - s_2|,\quad \forall \,t\in[0,t_1],\;s_1,s_2\in\mathbb{T},\\ &\;X(t_1)\in S_{\varepsilon_*}. \end{align*} We note that $T'$ is a universal constant.
Suppose we have found $t_k$'s for $k\leq n$, satisfying that for $\forall\, k =1,\cdots,n$, \begin{enumerate} \item $t_k\in[T'/4,T']$. \item There exists a unique solution $X$ of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} in $C_{[0,T_{n}]}H^{5/2}\cap L^2_{T_{n}}H^{3}(\mathbb{T})$, where $T_k = \sum_{i=1}^k t_i$ for $k = 1,\cdots,n$, such that \begin{align}
&\;\|X-X_{*}\|_{L^{\infty}_{[0,T_k]}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\varepsilon_*, \label{eqn: estimates on the distance to the equilibrium for the global solution in first k-th time intervals}\\
&\;|X(s_1,t) - X(s_2,t)| \geq \frac{1}{2\pi}|s_1 - s_2|,\quad \forall \,t\in[0,T_k],\;s_1,s_2\in\mathbb{T},\label{eqn: well-stretched constant estimates for the global solution in first k-th time intervals}\\ &\;X(\cdot,T_k) \in S_{\varepsilon_*}. \end{align} \end{enumerate} Now let us restart the equation at $t = T_n$. To be more precise, we consider \begin{equation*} \partial_t X(s,t)= \mathcal{L}X(s,t) + g_X(s,t),\quad s\in \mathbb{T}, t\geq T_n, \end{equation*} with $X(\cdot,T_n)\in S_{\varepsilon_*}= S_{\varepsilon_0}$ given. As before, there exists a unique local solution $X(s,t)$ for $t\in [T_n,T_n+T_0]$ satisfying the uniform estimates \eqref{eqn: uniform bound of the family of solution} and \eqref{eqn: uniform bi lipschitz constant of the family of solution} for solutions starting in $S_{\varepsilon_*}$. Moreover, with $T_*$ and $T'$ defined as before, \begin{equation*}
\|X-X_{*}\|_{L^{\infty}_{[T_n,T_n+T']}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\|X_{T_n}-(X_{T_n})_*\|_{\dot{H}^{5/2}(\mathbb{T})}. \end{equation*} By \eqref{eqn: bound for H1 norm of the difference to the equilibrium}, \begin{equation}
\|X-(X_{T_n})_*\|_{L^{\infty}_{[T_n,T_n+T']}\dot{H}^1} \leq C_7 T'+2\left(\|X_{T_n}\|_{\dot{H}^1}^2-\|(X_{T_n})_*\|_{\dot{H}^1}^2\right)^{1/2} = C_7 T'+2\zeta_{X_{T_n}}, \label{eqn: crude form of the bound for H1 norm of the difference to the equilibrium for later time} \end{equation} where $X_{T_n}(s) \triangleq X(s,T_n)$. Since the solution obtained in $[0,T_n]$ satisfies the assumption of Lemma \ref{lemma: energy estimate}, by \eqref{eqn: energy estimate of Stokes immersed boundary problem}, \begin{equation*}
\zeta_{X_{T_n}}^2=\|X_{T_n}\|_{\dot{H}^1}^2-\|(X_{T_n})_*\|_{\dot{H}^1}^2 \leq \|X_0\|_{\dot{H}^1}^2-\|X_{0*}\|_{\dot{H}^1}^2 = \zeta_{X_{0}}^2\leq T'^2. \end{equation*}
Note that $\|(X_{T_n})_*\|_{\dot{H}^1} = \|X_{0*}\|_{\dot{H}^1}$. Hence,
$\|X-(X_{T_n})_*\|_{L^{\infty}_{[T_n,T_n+T']}\dot{H}^1} \leq (C_7+2)T'$.
Arguing as before, we can find $t_{n+1}\in [T'/4,T']$, s.t.\;there exists a unique local solution in $C_{[0,T_{n+1}]}H^{5/2}\cap L^2_{T_{n+1}}H^{3}(\mathbb{T})$ with $T_{n+1} = T_n+t_{n+1}$ and $X(T_{n+1})\in S_{\varepsilon_*}$. Estimates \eqref{eqn: estimates on the distance to the equilibrium for the global solution in first k-th time intervals} and \eqref{eqn: well-stretched constant estimates for the global solution in first k-th time intervals} in the new time interval $[0, T_{n+1}]$ follow as before. Since $T_n \geq nT'/4$ with $T'>0$ being a universal constant, $T_n\rightarrow +\infty$ as $n\rightarrow \infty$. The existence of the global solution is thus established. The uniqueness follows from Theorem \ref{thm: local in time uniqueness}. That $X_t\in L^2_{[0,+\infty),loc}H^2(\mathbb{T})$ follows from Theorem \ref{thm: local in time existence}. Estimates \eqref{eqn: estimates on the distance to the equilibrium for the global solution in all time intervals}, \eqref{eqn: well-stretched constant estimates for the global solution in all time intervals} and \eqref{eqn: uniform bound of H 2.5 norm for the global solution} are established in the induction. \end{proof} \begin{remark} Instead of \eqref{eqn: definition of epsilon* in the global existence}, we may take an arbitrary $\varepsilon_*\in(0,\varepsilon_0]$, and the same proof still works. \qed \end{remark}
The main idea in the proof of Theorem \ref{thm: global existence near equilibrium} is that when the string configuration is close to an equilibrium, $\|X_0-X_{0*}\|_{\dot{H}^1}$ sets a bound for $\|X-X_*\|_{\dot{H}^{5/2}}$ in an indirect way (at least within a short time). In the same spirit, we prove the following corollary with refined estimates. It will be useful in the proof of Theorem \ref{thm: exponential convergence}.
\begin{corollary}\label{coro: refined decay estimate of global solution} Let $X_0\in H^{5/2}(\mathbb{T})$ satisfy all the assumptions of Theorem \ref{thm: global existence near equilibrium} and let $X$ be the unique global solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} starting from $X_0$ obtained in Theorem \ref{thm: global existence near equilibrium}. Then for any given $\xi\in(0,\xi_*]$, if in addition \begin{equation}
\|X_0(s) - X_{0*}(s)\|_{\dot{H}^{1}(\mathbb{T})}\leq \xi R_{X_0}, \label{eqn: closeness condition of H 1 norm in corollary} \end{equation} then the solution $X$ satisfies that for $\forall \,t\geq 0$, \begin{equation}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t)\leq \max \{2e^{-t/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})},\varepsilon_\xi R_{X_0}\}, \label{eqn: refined H2.5 bound of global solution} \end{equation} with \begin{equation}
\varepsilon_\xi \triangleq 2C_4(C_7+2) (|\ln (2\xi)|+1)^2(2\xi),\quad \xi>0, \label{eqn: defintion of varepsilon_* in the corollary} \end{equation} where $C_4 = C_4(C_6,1/(2\pi))$ and $C_7$ are universal constants defined in \eqref{eqn: introducing C_4} and \eqref{eqn: bound for H1 norm of the difference to the equilibrium} respectively. \begin{remark} We only define $\varepsilon_\xi$ for $\xi>0$ in order to avoid abusing the notation $\varepsilon_0$ defined in the proof of Theorem \ref{thm: global existence near equilibrium}. \end{remark} \begin{proof} We follow exactly the proof of Theorem \ref{thm: global existence near equilibrium} until the definition of $T'$. Now we define $T' = 2\xi$ instead.
It is worthwhile to note that \begin{equation*}
\zeta_{X_0}^2\leq 2\|X_0-X_{0*}\|^2_{\dot{H}^1}\leq 2\xi^2 \leq T'^2\leq T_*^2. \end{equation*}
For the solution $X(s,t)$ in $t\in[0,T']$, \eqref{eqn: a final bound in the proof of global existence for H1 norm of X-X0* in 0 to T'} still holds. With $c_* = C_4(C_6,1/(2\pi))(|\ln T'|+1)^2$ as before, we have a similar estimate as \eqref{eqn: proof of the threshold of H2.5 norm in the proof of global existence}
\begin{equation}
c_*\|X-X_{0*}\|_{L^\infty_{T'}\dot{H}^1}\leq C_4(|\ln T'|+1)^2\cdot (C_7+2)T' = C_4(C_7+2) (|\ln (2\xi)|+1)^2(2\xi)= \varepsilon_\xi/2. \label{eqn: new threshold in the corollary such that the assumption holds} \end{equation}
Therefore, if $\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \geq \varepsilon_\xi/2$, the assumption \eqref{eqn: condition in the small energy lemma initial H2.5 norm is much larger than H1 norm on the whole interval} holds. By Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, there exists $t_1\in [T'/4,T']$, s.t. \begin{equation*}
\|X-X_*\|_{\dot{H}^{5/2}}(t_1)\leq e^{-t_1/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}. \end{equation*}
Otherwise, if $\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \leq \varepsilon_\xi/2$, by Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, there exists $t_1\in [T'/4,T']$, s.t. \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t_1)\leq \|X-X_{*}\|_{L^{\infty}_{T'}\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})} \leq \varepsilon_\xi/\sqrt{2}. \end{equation*} This implies that there always exists $t_1\in [T'/4,T']$, s.t. \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t_1)\leq \max\{e^{-t_1/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}, \varepsilon_\xi/\sqrt{2}\}. \end{equation*} Now suppose that we have found $t_k$'s for $k\leq n$, satisfying that for $\forall\, k =1,\cdots,n$, $t_k\in[T'/4,T']$, and \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(T_k)\leq \max\{e^{-T_k/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}, \varepsilon_\xi/\sqrt{2}\}, \end{equation*} where $T_k = \sum_{i=1}^k t_i$ for $k = 1,\cdots,n$. Now we consider the equation in $t \in [T_n, T_n+T']$. As in \eqref{eqn: crude form of the bound for H1 norm of the difference to the equilibrium for later time}, \begin{equation*}
\|X-(X_{T_n})_*\|_{L^{\infty}_{[T_n,T_n+T']}\dot{H}^1} \leq C_7 T'+2\zeta_{X_{T_n}}\leq C_7 T'+2\zeta_{X_0} \leq (C_7+2)T'. \end{equation*}
Here we used the energy estimate \eqref{eqn: energy estimate of Stokes immersed boundary problem} again. Hence, as in \eqref{eqn: new threshold in the corollary such that the assumption holds}, we have $c_*\|X-X_{0*}\|_{L^\infty_{[T_n,T_n+T']}\dot{H}^1}\leq \varepsilon_\xi/2$. We argue as before to find that there always exists $t_{n+1}\in[T'/4,T']$, s.t. \begin{equation*} \begin{split}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(T_n+t_{n+1})\leq &\;\max\{e^{-t_{n+1}/8}\|X-X_*\|_{\dot{H}^{5/2}}(T_n), \varepsilon_\xi/\sqrt{2}\}\\
\leq &\;\max\{e^{-(T_n+t_{n+1})/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}, \varepsilon_\xi/\sqrt{2}\}. \end{split} \end{equation*} By induction, there exists a sequence $\{t_k\}_{k\in\mathbb{Z}_+}$, $t_k\in [T'/4, T']$, such that for $\forall\, k\in\mathbb{Z}_+$, \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(T_k)\leq \max\{e^{-T_k/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}}, \varepsilon_\xi/\sqrt{2}\}, \end{equation*} where $T_k = \sum_{i=1}^k t_i\rightarrow +\infty$. Since $t_k\leq T'\leq T_*\leq 1$, by Lemma \ref{lemma: bound and decay for H2.5 difference when energy difference is small}, \begin{equation*} \begin{split}
\|X-X_{*}\|_{L^\infty_{[T_{k-1}, T_k]}\dot{H}^{5/2}(\mathbb{T})}\leq &\;\sqrt{2}\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(T_{k-1})\\
\leq &\;\max\{\sqrt{2}e^{-T_{k-1}/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}, \varepsilon_\xi\}\\
\leq &\;\max\{\sqrt{2}e^{T'/8}e^{-T_{k}/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}, \varepsilon_\xi\}\\
\leq &\;\max\{2e^{-T_{k}/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}, \varepsilon_\xi\}. \end{split} \end{equation*} Note that here we are abusing the notation $T_0$ by defining $T_0 = 0$; it does not refer to the $T_0$ in Theorem \ref{thm: local in time existence}. Using the fact that $X\in C_{[0,+\infty)}H^{5/2}(\mathbb{T})$, for $\forall\,t\in [T_{k-1}, T_k]$, \begin{equation*}
\|X-X_{*}\|_{\dot{H}^{5/2}(\mathbb{T})}(t)\leq \max\{2e^{-t/8}\|X_0-X_{0*}\|_{\dot{H}^{5/2}(\mathbb{T})}, \varepsilon_\xi\}. \end{equation*}
This completes the proof. \end{proof} \end{corollary}
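The iteration in the proof above can be illustrated by a scalar toy recursion (a sketch with illustrative step size and constants, not derived from the manuscript): each step contracts the bound by $e^{-t/8}$ until the threshold $\varepsilon_\xi/\sqrt{2}$ is reached, after which the bound saturates.

```python
import math

def iterate_decay(a0, threshold, steps, t=0.5):
    """Toy model of the induction: a_{k+1} = max(e^{-t/8} * a_k, threshold),
    mimicking the bound on the H^{5/2} distance at the times T_k.
    The step length t and the constants are illustrative only."""
    a, T = a0, 0.0
    for _ in range(steps):
        a = max(math.exp(-t / 8) * a, threshold)
        T += t
        # Invariant mirroring the proof: a_k <= max(e^{-T_k/8} * a_0, threshold).
        assert a <= max(math.exp(-T / 8) * a0, threshold) + 1e-12
    return a
```

After enough steps the exponential factor drops below the threshold, so the recursion settles at the threshold value, which is the mechanism behind \eqref{eqn: refined H2.5 bound of global solution}.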
\section{Exponential Convergence to Equilibrium Configurations}\label{section: exp convergence} In this section, we shall prove that the global-in-time solution near equilibrium obtained in Theorem \ref{thm: global existence near equilibrium} converges exponentially in the $H^s$-sense to an equilibrium configuration as $t\rightarrow +\infty$. See the statement of Theorem \ref{thm: exponential convergence}. In the sequel, we shall always consider the contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem}, with $X_0\in H^{5/2}(\mathbb{T})$ satisfying \eqref{eqn: closeness condition of H 2.5 norm} and \eqref{eqn: closeness condition of H 1 norm} with $\varepsilon_*, \xi_*>0$ found in Theorem \ref{thm: global existence near equilibrium}. Without loss of generality, we assume $R_{X_0} = 1$.
\subsection{A lower bound of the rate of energy dissipation}\label{section: lower bound for energy dissipation rate}
A key step to prove the exponential convergence of the global solution near equilibrium is to establish a lower bound of the rate of energy dissipation $\int_{\mathbb{R}^2}|\nabla u_X|^2\,dx$ (see Lemma \ref{lemma: energy estimate}) in terms of $\|X-X_*\|_{\dot{H}^1}$ provided that the latter is sufficiently small.
Let $S_\varepsilon$ be defined as in \eqref{eqn: def of data close to equilibrium}. Let $\varepsilon_*' \in(0,\varepsilon_*)$ be a constant to be determined. Let $\Omega_X\subset \mathbb{R}^2$ denote the bounded open domain enclosed by $X(\mathbb{T})$, where $X\in S_{\sqrt{2}\varepsilon'_*} $. Here the constant $\sqrt{2}$ comes from the estimate \eqref{eqn: estimates on the distance to the equilibrium for the global solution in all time intervals} of the global solution. Define the collection of all such domains to be \begin{equation*} \mathcal{M}_{\varepsilon_*'} = \{\Omega_X \Subset\mathbb{R}^2:\; \partial \Omega_X = X(\mathbb{T}), \;X\in S_{\sqrt{2}\varepsilon'_*} \}. \end{equation*} We assume that $\varepsilon_*'$ is sufficiently small, such that domains in $\mathcal{M}_{\varepsilon_*'}$ satisfy the uniform $C^{1}$-regularity condition with uniform constants (see \S\,4.10 of \cite{adams2003sobolev} for the rigorous definition). Indeed, this is achievable due to the implicit function theorem and the Sobolev embedding $H^{5/2}(\mathbb{T})\hookrightarrow C^{1,\alpha}(\mathbb{T})$ for $\forall\, \alpha\in(0,1)$.
Let $u_X$ be the velocity field determined by a configuration $X\in S_{\sqrt{2}\varepsilon'_*} $. Let \begin{equation*}
(u_X)_{\Omega_X}= |\Omega_X|^{-1}\int_{\Omega_X} u_X\,dx,\quad (u_X)_{\partial\Omega_X} = |\partial \Omega_X|^{-1}\int_{\partial\Omega_X} u_X\,dl, \end{equation*} where $l$ is the arc-length parameter of $\partial\Omega_X$. Then by the boundary trace theorem \cite{adams2003sobolev}, \begin{equation*} \begin{split}
\int_{\mathbb{R}^2} |\nabla u_X|^2 \,dx \geq &\;\int_{\Omega_X} |\nabla u_X|^2 \,dx \geq C \int_{\partial\Omega_X} |u_X-(u_X)_{\Omega_X}|^2\,dl \\
\geq &\;C \int_{\partial\Omega_X} |u_X-(u_X)_{\partial\Omega_X}|^2\,dl = C\int_{\mathbb{T}} |u_X-(u_X)_{\partial\Omega_X}|^2 |X'(s)|\,ds. \end{split} \end{equation*}
Here we used that the boundary mean $(u_X)_{\partial\Omega_X}$ minimizes $c\mapsto \int_{\partial\Omega_X}|u_X-c|^2\,dl$ over $c\in\mathbb{R}^2$, as well as the fact that $dl = |X'(s)|\,ds$, since $s$ is a monotone parameterization of $\partial\Omega_X$. Thanks to the uniform $C^1$-regularity of $\Omega_X\in \mathcal{M}_{\varepsilon_*'}$, the constant $C$ is uniform for $\forall\, X\in S_{\sqrt{2}\varepsilon'_*} $. We derive that \begin{equation*} \begin{split}
\int_{\mathbb{T}} |u_X(X(s))-(u_X)_{\partial\Omega_X}|^2 |X'(s)|\,ds \geq &\;\int_{\mathbb{T}} |u_X(X(s))-(u_X)_{\partial\Omega_X}|^2 (|X_*'(s)|-|X_*'(s)-X'(s)|)\,ds\\
\geq &\;(1- C_5\varepsilon_*')\int_{\mathbb{T}} |u_X(X(s))-(u_X)_{\partial\Omega_X}|^2\,ds\\
\geq &\;(1- C_5\varepsilon_*')\int_{\mathbb{T}} |u_X(X(s))-\bar{u}_X|^2\,ds. \end{split} \end{equation*}
Here $\bar{u}_X = |\mathbb{T}|^{-1}\int_{\mathbb{T}} u_X(X(s))\,ds$. Again, the constant $C_5$, which first appeared in the proof of Theorem \ref{thm: global existence near equilibrium}, comes from the Sobolev embedding $H^{5/2}(\mathbb{T})\hookrightarrow C^1(\mathbb{T})$ and is independent of $X\in S_{\sqrt{2}\varepsilon'_*}$. Taking $\varepsilon_*'\leq (2C_5)^{-1}$, we obtain that \begin{equation}
\int_{\mathbb{R}^2} |\nabla u_X|^2 \,dx \geq C\int_{\mathbb{T}} |u_X(X(s))-\bar{u}_X|^2\,ds \label{eqn: energy dissipation rate can be bounded from below by the L2 norm of velocity oscillation} \end{equation} for some universal constant $C$ independent of $X\in S_{\sqrt{2}\varepsilon'_*} $.
We now derive a lower bound for $\int_{\mathbb{T}} |u_X(X(s))-\bar{u}_X|^2\,ds$ by linearizing $u_X(X(s))$ around the equilibrium configuration $X_*$. Fix $X\in S_{\sqrt{2}\varepsilon'_*} $, with $\varepsilon_*'\leq \min\{1,\varepsilon_*\}$ satisfying all the assumptions above and to be further determined. Let $D(s) = X(s)-X_*(s)$ and \begin{equation} X_{\eta}(\cdot) \triangleq X_*(\cdot) +\eta D(\cdot),\quad\eta \in[0,1]. \label{eqn: defintion of X_eta} \end{equation}
By definition, $\|D\|_{\dot{H}^{5/2}(\mathbb{T})}\leq \sqrt{2}\varepsilon_*'$. It is easy to show that with $\varepsilon_*'$ sufficiently small, \begin{align}
&\;\|X_\eta\|_{\dot{H}^{5/2}(\mathbb{T})}\leq C,\quad \forall\, \eta\in[0,1],\label{eqn: uniform H2.5 upper bound for the family of configurations near equilibrium}\\
&\;|X_\eta(s_1) - X_\eta(s_2)|\geq \frac{1}{\pi}|s_1-s_2|,\quad \forall\,s_1,s_2\in\mathbb{T},\;\forall\,\eta\in[0,1],\label{eqn: uniform stretching constant for the family of configurations near equilibrium} \end{align} where $C$ is a universal constant. Note that with $\varepsilon_*'$ being sufficiently small, $X_\eta$ is also a non-self-intersecting string configuration, but it may not be in $ S_{\sqrt{2}\varepsilon'_*} $, as $R_{X_{\eta}} = 1$ is not necessarily true.
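Indeed, \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium} follows as in the derivation of \eqref{eqn: uniform lower bound for lambda in the proof of global existence}: for $\forall\,\eta\in[0,1]$ and $s_1,s_2\in\mathbb{T}$,
\begin{equation*}
|X_\eta(s_1)-X_\eta(s_2)|\geq |X_*(s_1)-X_*(s_2)|-\eta\|D\|_{\dot{C}^1(\mathbb{T})}|s_1-s_2|\geq \left(\frac{2}{\pi}-\sqrt{2}C_5\varepsilon_*'\right)|s_1-s_2|,
\end{equation*}
which yields \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium} once $\varepsilon_*'\leq (\sqrt{2}C_5\pi)^{-1}$.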
The following lemma shows that $u_X(X(s))$, as a function of $X$, can be well approximated by linearization around $X_*$. \begin{lemma}\label{lemma: linearization of velocity field around equilibrium} Assume $\varepsilon_*'\leq \min\{1,\varepsilon_*, (2C_5)^{-1}\}$ is sufficiently small such that domains in $\mathcal{M}_{\varepsilon_*'}$ satisfy uniform $C^{1}$-regularity condition with uniform constants, and \eqref{eqn: uniform H2.5 upper bound for the family of configurations near equilibrium} and \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium} hold. Then \begin{equation}
u_X(X(s)) = \left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))+\mathcal{R}_X(s), \label{eqn: first approximation by linearization of velocity around equilibrium} \end{equation} where \begin{equation}
\|\mathcal{R}_X(s)\|_{L^\infty(\mathbb{T})}\leq C\varepsilon_*' \|D\|_{\dot{H}^1(\mathbb{T})}, \label{eqn: estimate of the higher order error term in estimating the velocity around equilibrium} \end{equation} with $C$ being a universal constant. \begin{proof} Recall that $u_X$ is given by \eqref{eqn: velocity of membrane}. By \eqref{eqn: simplification of integrand of g_X part 1}, \begin{equation*} \begin{split}
u_X(X(s)) = &\;\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{X}\cdot X'(s')}{|L_{X}|^2}M_{X}-\frac{L_{X}\cdot M_{X}}{|L_{X}|^2}X'(s') -\frac{X'(s')\cdot M_{X}}{|L_{X}|^2}L_{X}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_{X}\cdot X'(s'))(L_{X}\cdot M_{X})}{|L_{X}|^4}L_{X}\,ds', \end{split} \end{equation*} where $L_X = L_X(s,s')$ and $M_X = M_X(s,s')$ are defined in \eqref{eqn: definition of L M N} and \eqref{eqn: definition of L M N at s}. The subscripts stress that they are determined by $X$. Since $u_{X_*} \equiv 0$, by the mean value theorem with respect to $\eta$, there exists $\eta_*\in[0,1]$ such that \begin{equation*}
u_X(X(s)) = u_X(X(s)) -u_{X_*}(X_*(s)) = \left.\frac{\partial}{\partial\eta}\right|_{\eta = \eta_*}u_{X_\eta}(X_\eta(s)). \end{equation*} In Lemma \ref{lemma: eta derivative and the integral in u_X commute} in the Appendix \ref{appendix section: auxiliary calculations}, we will show that the $\eta$-derivative and the integral in $u_{X_\eta}$ commute. Hence, \begin{equation} \begin{split}
u_X(X(s)) = &\;\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{\eta_*}(s')}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}} +\frac{L_{X_{\eta_*}}\cdot D'(s')}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}} +\frac{L_{X_{\eta_*}}\cdot X'_{\eta_*}(s')}{|L_{X_{\eta_*}}|^2}M_{D}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}-\frac{2(L_{X_{\eta_*}}\cdot L_{D})(L_{X_{\eta_*}}\cdot X'_{\eta_*}(s'))}{|L_{X_{\eta_*}}|^4}M_{X_{\eta_*}}\,ds'\\
&\;-\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot M_{X_{\eta_*}}}{|L_{X_{\eta_*}}|^2}X'_{\eta_*}(s')+\frac{L_{X_{\eta_*}}\cdot M_{D}}{|L_{X_{\eta_*}}|^2}X'_{\eta_*}(s')+\frac{L_{X_{\eta_*}}\cdot M_{X_{\eta_*}}}{|L_{X_{\eta_*}}|^2}D'(s')\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_{D}\cdot L_{X_{\eta_*}})(L_{X_{\eta_*}}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}X'_{\eta_*}(s')\,ds'\\
&\;-\frac{1}{4\pi}\int_{\mathbb{T}}\frac{D'(s')\cdot M_{X_{\eta_*}}}{|L_{X_{\eta_*}}|^2}L_{X_{\eta_*}}+\frac{X'_{\eta_*}(s')\cdot M_{D}}{|L_{X_{\eta_*}}|^2}L_{X_{\eta_*}}+\frac{X'_{\eta_*}(s')\cdot M_{X_{\eta_*}}}{|L_{X_{\eta_*}}|^2}L_{D}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_D\cdot L_{X_{\eta_*}})(X'_{\eta_*}(s')\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}L_{X_{\eta_*}}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_{D}\cdot X'_{\eta_*}(s'))(L_{X_{\eta_*}}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}L_{X_{\eta_*}} + \frac{2(L_{X_{\eta_*}}\cdot D'(s'))(L_{X_{\eta_*}}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}L_{X_{\eta_*}}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_{X_{\eta_*}}\cdot X'_{\eta_*}(s'))(L_{D}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}L_{X_{\eta_*}} + \frac{2(L_{X_{\eta_*}}\cdot X'_{\eta_*}(s'))(L_{X_{\eta_*}}\cdot M_{D})}{|L_{X_{\eta_*}}|^4}L_{X_{\eta_*}}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2(L_{X_{\eta_*}}\cdot X'_{\eta_*}(s'))(L_{X_{\eta_*}}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^4}L_{D}\,ds'\\
&\;-\frac{1}{4\pi}\int_{\mathbb{T}}\frac{8(L_{D}\cdot L_{X_{\eta_*}})(L_{X_{\eta_*}}\cdot X'_{\eta_*}(s'))(L_{X_{\eta_*}}\cdot M_{X_{\eta_*}})}{|L_{X_{\eta_*}}|^6}L_{X_{\eta_*}}\,ds'. \end{split} \label{eqn: representation of velocity close to equilibrium} \end{equation}
We then replace all the $X_{\eta_*}$ in \eqref{eqn: representation of velocity close to equilibrium} by $X_*$, i.e.\;$\eta = 0$, and introduce some error denoted by $\mathcal{R}_X(s)$. In this way, we obtain the representation \eqref{eqn: first approximation by linearization of velocity around equilibrium}. To show \eqref{eqn: estimate of the higher order error term in estimating the velocity around equilibrium}, for conciseness, we only look at one part of $\mathcal{R}_X(s)$, which is the error in approximating the first term on the right hand side of \eqref{eqn: representation of velocity close to equilibrium}, \begin{equation*} \begin{split}
&\;\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{\eta_*}(s')}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}}-\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{*}}|^2}M_{X_*}\,ds'\right\|_{L^\infty(\mathbb{T})}\\
\leq &\;\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot (X'_{\eta_*}(s') - X'_{*}(s'))}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}}\,ds'\right\|_{L^\infty(\mathbb{T})}+\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{\eta_*}}|^2}(M_{X_{\eta_*}}-M_{X_*})\,ds'\right\|_{L^\infty(\mathbb{T})}\\
&\;+\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{*}}|^2}M_{X_*}\frac{|L_{X_{*}}|^2-|L_{X_{\eta_*}}|^2}{|L_{X_{\eta_*}}|^2}\,ds'\right\|_{L^\infty(\mathbb{T})}\\
\leq &\;\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot \eta_* D'(s')}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}}\,ds'\right\|_{L^\infty(\mathbb{T})}+\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{\eta_*}}|^2}\eta_* M_{D}\,ds'\right\|_{L^\infty(\mathbb{T})}\\
&\;+\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{*}}|^2}M_{X_*}\frac{(L_{X_{*}}+L_{X_{\eta_*}})\cdot \eta_* L_D}{|L_{X_{\eta_*}}|^2}\,ds'\right\|_{L^\infty(\mathbb{T})}. \end{split} \end{equation*}
Note that \eqref{eqn: lower bound for L} and \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium} imply that $|L_{X_*}|, |L_{X_{\eta_*}}|\geq C$ for some universal constant $C$. By Lemma \ref{lemma: estimates for L M N}, \eqref{eqn: uniform H2.5 upper bound for the family of configurations near equilibrium}, and \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium}, \begin{equation*} \begin{split}
&\;\left\|\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{D}\cdot X'_{\eta_*}(s')}{|L_{X_{\eta_*}}|^2}M_{X_{\eta_*}}-\frac{L_{D}\cdot X'_{*}(s')}{|L_{X_{*}}|^2}M_{X_*}\,ds'\right\|_{L^\infty(\mathbb{T})}\\
\leq &\;C \|L_D\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\|D'\|_{L^\infty(\mathbb{T})}\|M_{X_{\eta_*}}\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\\
&\;+C \|L_D\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\|X_*'\|_{L^\infty(\mathbb{T})}\|M_{D}\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\\
&\;+C \|L_D\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}\|X_*'\|_{L^\infty(\mathbb{T})}\|M_{X_*}\|_{L^\infty_s(\mathbb{T})L^2_{s'}(\mathbb{T})}
\|L_{X_{*}}+L_{X_{\eta_*}}\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}\|L_{D}\|_{L^\infty_s(\mathbb{T})L^\infty_{s'}(\mathbb{T})}\\
\leq &\;C \|D'\|_{L^2(\mathbb{T})}\|D'\|_{L^\infty(\mathbb{T})}\|X_{\eta_*}''\|_{L^2(\mathbb{T})}+C \|D'\|_{L^2(\mathbb{T})}\|X_*'\|_{L^\infty(\mathbb{T})}\|D''\|_{L^2(\mathbb{T})}\\
&\;+C \|D'\|_{L^2(\mathbb{T})}\|X_*'\|_{L^\infty(\mathbb{T})}\|X_*''\|_{L^2(\mathbb{T})}
(\|X_{*}'\|_{L^\infty(\mathbb{T})}+\|X_{\eta_*}'\|_{L^\infty(\mathbb{T})})\|D'\|_{L^\infty(\mathbb{T})}\\
\leq &\;C \|D'\|_{L^2(\mathbb{T})}\|D\|_{\dot{H}^{5/2}(\mathbb{T})}\\
\leq &\;C\varepsilon_*' \|D'\|_{L^2(\mathbb{T})}. \end{split} \end{equation*} The other terms in $\mathcal{R}_{X}$ admit the same bound and can be treated in a similar manner. This proves \eqref{eqn: estimate of the higher order error term in estimating the velocity around equilibrium}. \end{proof} \end{lemma}
The following lemma calculates the leading term of $u_X(X(s))$ in \eqref{eqn: first approximation by linearization of velocity around equilibrium}. \begin{lemma}\label{lemma: final representation of the linearization of velocity near equilibrium} Assume $X_*(s) = (\cos s, \sin s)^T$. Then \begin{equation}
\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))=-\frac{1}{4}\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\mathcal{H}D(s)-\frac{1}{4}\mathcal{H}D'(s). \label{eqn: final representation of the linearization of velocity near equilibrium} \end{equation} Here $\mathcal{H}$ denotes the Hilbert transform on $\mathbb{T}$ \cite{grafakos2008classical}. \end{lemma} The proof is a long but straightforward calculation, which we defer to Appendix \ref{appendix section: auxiliary calculations}.
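Although the proof is deferred, the right-hand side of \eqref{eqn: final representation of the linearization of velocity near equilibrium} is easy to sanity-check numerically on the Fourier side. The Python sketch below (an illustration only, not part of the proof) uses the convention $\mathcal{H}\mathrm{e}^{iks} = -i\,\mathrm{sgn}(k)\,\mathrm{e}^{iks}$, under which $\mathcal{H}\partial_s$ has Fourier multiplier $|k|$, and verifies that the operator $D\mapsto -\frac14\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)\mathcal{H}D-\frac14\mathcal{H}D'$ acts mode by mode with the symbol used in the proof of Lemma \ref{lemma: lower bound for the energy dissipation rate in terms of excess energy} below.

```python
import numpy as np

# Check that D -> -(1/4) J * H[D] - (1/4) H[D'] (J the rotation matrix,
# H the periodic Hilbert transform) has the mode-wise symbol
#   -(1/4) * ( -i sgn(k) D2 + |k| D1 ,  i sgn(k) D1 + |k| D2 ).
N = 64
s = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wave numbers

rng = np.random.default_rng(0)
D = np.zeros((2, N))                        # random real trigonometric polynomial
for m in range(1, 9):
    D += (rng.standard_normal((2, 1)) * np.cos(m * s)
          + rng.standard_normal((2, 1)) * np.sin(m * s))

def hilbert(f):
    # periodic Hilbert transform: multiplier -i*sgn(k)
    return np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)).real

def ds(f):
    # d/ds: multiplier i*k
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
lhs = (-0.25 * J @ np.vstack([hilbert(D[0]), hilbert(D[1])])
       - 0.25 * np.vstack([hilbert(ds(D[0])), hilbert(ds(D[1]))]))

# mode-by-mode evaluation using the claimed symbol
Dh = np.fft.fft(D, axis=1)
sym1 = -0.25 * (-1j * np.sign(k) * Dh[1] + np.abs(k) * Dh[0])
sym2 = -0.25 * (1j * np.sign(k) * Dh[0] + np.abs(k) * Dh[1])
rhs = np.vstack([np.fft.ifft(sym1).real, np.fft.ifft(sym2).real])

assert np.allclose(lhs, rhs)                # symbol matches the operator
assert np.allclose(lhs.mean(axis=1), 0.0)   # the linearized velocity has mean zero
```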
\begin{lemma}\label{lemma: lower bound for the energy dissipation rate in terms of excess energy} There is a universal $\varepsilon_{*}'>0$ and a universal constant $C>0$, such that \begin{equation}
\|u_X(X(s)) - \bar{u}_X\|_{L^2(\mathbb{T})}\geq C\|X-X_*\|_{\dot{H}^1(\mathbb{T})},\quad\forall\, X\in S_{\sqrt{2}\varepsilon_{*}'}, \label{eqn: lower bound for the L2 norm of velocity oscillation} \end{equation} where $S_\varepsilon$ is defined in \eqref{eqn: def of data close to equilibrium}. In particular, this implies that \begin{equation}
\int_{\mathbb{R}^2} |\nabla u_X|^2 \,dx\geq C\left(\|X\|_{\dot{H}^1(\mathbb{T})}^2-\|X_*\|_{\dot{H}^1(\mathbb{T})}^2\right),\quad\forall\, X\in S_{\sqrt{2}\varepsilon_{*}'}, \label{eqn: lower bound for the energy dissipation rate in terms of excess energy} \end{equation} with some universal constant $C>0$. \begin{proof} We always assume that $\varepsilon_*'$ satisfies the assumptions in Lemma \ref{lemma: linearization of velocity field around equilibrium}. By Lemma \ref{lemma: final representation of the linearization of velocity near equilibrium}, \begin{equation*} \begin{split}
\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))=&\;-\frac{1}{4}\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\mathcal{H}D(s)-\frac{1}{4}\mathcal{H}D'(s)\\
=&\;-\frac{1}{4}\sum_{k\in\mathbb{Z}} \left(\begin{array}{c}-i\cdot\mathrm{sgn}(k)\hat{D}_{k,2}+|k| \hat{D}_{k,1} \\i\cdot\mathrm{sgn}(k) \hat{D}_{k,1}+|k| \hat{D}_{k,2}\end{array}\right) \mathrm{e}^{iks}. \end{split} \end{equation*} Obviously, \begin{equation}
\int_{\mathbb{T}}\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\,ds = 0. \label{eqn: linearization of velocity field has mean zero} \end{equation} By Parseval's identity and the fact that $\hat{D}_{-k} = \overline{\hat{D}_{k}}$, \begin{equation} \begin{split}
\left\|\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\right\|_{L^2(\mathbb{T})}^2
=&\;\frac{\pi}{8}\sum_{k\in\mathbb{Z}} \left|\left(\begin{array}{c}-i\cdot\mathrm{sgn}(k)\hat{D}_{k,2}+|k| \hat{D}_{k,1} \\i\cdot\mathrm{sgn}(k) \hat{D}_{k,1}+|k| \hat{D}_{k,2}\end{array}\right)\right|^2\\
\geq &\;\frac{\pi}{8}\left|\left(\begin{array}{c}-i\hat{D}_{1,2}+ \hat{D}_{1,1} \\i \hat{D}_{1,1}+\hat{D}_{1,2}\end{array}\right)\right|^2+\frac{\pi}{8}\left|\left(\begin{array}{c}i\hat{D}_{-1,2}+ \hat{D}_{-1,1} \\-i \hat{D}_{-1,1}+\hat{D}_{-1,2}\end{array}\right)\right|^2\\
&\;+\frac{\pi}{8}\sum_{k\in\mathbb{Z}\atop |k|\geq 2} \left|(|k|-1)|\hat{D}_k|\right|^2\\
\geq &\;\frac{\pi}{2}\left[(\mathrm{Re}\, \hat{D}_{1,1}+\mathrm{Im}\, \hat{D}_{1,2})^2+(\mathrm{Im}\, \hat{D}_{1,1}-\mathrm{Re}\, \hat{D}_{1,2})^2\right]+\frac{\pi}{32}\sum_{k\in\mathbb{Z}\atop |k|\geq 2} |k|^2|\hat{D}_k|^2. \end{split} \label{eqn: a crude lower bound for the linearized velocity L2 norm with mode 1 -1 unhandled} \end{equation} Recall that $D(s)$ satisfies the constraints \eqref{eqn: constraint on deviation from volume conservation} and \eqref{eqn: simplified equation for the optimal approximated equilibrium}, with $Y_*$ replaced by $X_*$. \eqref{eqn: simplified equation for the optimal approximated equilibrium} implies that $(\mathrm{Im}\, \hat{D}_{1,1}-\mathrm{Re}\, \hat{D}_{1,2})^2 = 2(\mathrm{Im}\, \hat{D}_{1,1})^2+2(\mathrm{Re}\, \hat{D}_{1,2})^2$; \eqref{eqn: constraint on deviation from volume conservation} together with \eqref{eqn: inner product of D and Y_star} implies that \begin{equation*}
|\mathrm{Re}\, \hat{D}_{1,1}-\mathrm{Im}\, \hat{D}_{1,2}| \leq C\left|\int_\mathbb{T}D\cdot Y_* \right|\leq C\left|\int_\mathbb{T}D\times D'\,ds\right|\leq C\|D\|_{L^2(\mathbb{T})}\|D'\|_{L^2(\mathbb{T})}\leq C\varepsilon_*'\|D'\|_{L^2(\mathbb{T})}. \end{equation*}
Here we used the fact that $D$ has mean zero on $\mathbb{T}$, and thus $\|D\|_{L^2(\mathbb{T})}\leq C\|D\|_{\dot{H}^{5/2}(\mathbb{T})}\leq C\varepsilon_*'$. Hence, we use \eqref{eqn: a crude lower bound for the linearized velocity L2 norm with mode 1 -1 unhandled} to derive that \begin{equation} \begin{split}
&\;\left\|\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\right\|_{L^2(\mathbb{T})}^2\\ \geq &\;\frac{\pi}{2}\left[(\mathrm{Re}\, \hat{D}_{1,1}+\mathrm{Im}\, \hat{D}_{1,2})^2+(\mathrm{Re}\, \hat{D}_{1,1}-\mathrm{Im}\, \hat{D}_{1,2})^2+2(\mathrm{Im}\, \hat{D}_{1,1})^2+2(\mathrm{Re}\, \hat{D}_{1,2})^2\right]\\
&\;+\frac{\pi}{32}\sum_{k\in\mathbb{Z}\atop |k|\geq 2} |k|^2|\hat{D}_k|^2 - C\varepsilon_*'^2\|D'\|^2_{L^2(\mathbb{T})}\\
\geq &\;\frac{\pi}{32}\sum_{k\in\mathbb{Z}} |k|^2|\hat{D}_k|^2 - C\varepsilon_*'^2\|D'\|^2_{L^2(\mathbb{T})}= \left(\frac{1}{64} - C\varepsilon_*'^2\right)\|D'\|^2_{L^2(\mathbb{T})}. \end{split} \label{eqn: a lower bound for the linearized velocity L2 norm with mode 1 -1 unhandled} \end{equation} Here $C$ is a universal constant. With this at hand, we use Lemma \ref{lemma: linearization of velocity field around equilibrium} and \eqref{eqn: linearization of velocity field has mean zero} to derive that \begin{equation*} \begin{split}
\|u_X(X(s)) - \bar{u}_{X}\|_{L^2(\mathbb{T})} \geq &\; \left\|\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\right\|_{L^2(\mathbb{T})} - \left\|\mathcal{R}_X(s) - \overline{\mathcal{R}_X}\right\|_{L^2(\mathbb{T})}\\
\geq &\;\left(\frac{1}{64} - C\varepsilon_*'^2\right)^{1/2}\|D'\|_{L^2(\mathbb{T})} - \|\mathcal{R}_X(s)\|_{L^2(\mathbb{T})}\\
\geq &\;\left[\left(\frac{1}{64} - C\varepsilon_*'^2\right)^{1/2}-C\varepsilon_*'\right]\|D'\|_{L^2(\mathbb{T})}. \end{split} \end{equation*} Again, $C$ is a universal constant. Taking $\varepsilon_*'$ sufficiently small, but still universal, we obtain the desired lower bound \eqref{eqn: lower bound for the L2 norm of velocity oscillation} with some universal constant $C$. \eqref{eqn: lower bound for the energy dissipation rate in terms of excess energy} follows immediately from \eqref{eqn: energy dissipation rate can be bounded from below by the L2 norm of velocity oscillation}, \eqref{eqn: lower bound for the L2 norm of velocity oscillation} and Lemma \ref{lemma: estimates concerning closest equilbrium}. \end{proof} \end{lemma}
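The mode-wise inequality behind the last term of \eqref{eqn: a crude lower bound for the linearized velocity L2 norm with mode 1 -1 unhandled}, namely that for $|k|\geq 2$ the symbol vector dominates $(|k|-1)|\hat{D}_k|$, follows from the Cauchy--Schwarz inequality applied to the cross terms. A quick numerical spot-check in Python (illustration only):

```python
import random

# For |k| >= 2 and arbitrary complex (d1, d2), verify
#   |(-i sgn(k) d2 + |k| d1, i sgn(k) d1 + |k| d2)|^2 >= (|k| - 1)^2 (|d1|^2 + |d2|^2).
random.seed(0)
for _ in range(2000):
    k = random.choice([sgn * m for sgn in (1, -1) for m in range(2, 21)])
    d1 = complex(random.gauss(0, 1), random.gauss(0, 1))
    d2 = complex(random.gauss(0, 1), random.gauss(0, 1))
    s = 1 if k > 0 else -1                   # sgn(k)
    v1 = -1j * s * d2 + abs(k) * d1
    v2 = 1j * s * d1 + abs(k) * d2
    lhs = abs(v1) ** 2 + abs(v2) ** 2
    rhs = (abs(k) - 1) ** 2 * (abs(d1) ** 2 + abs(d2) ** 2)
    assert lhs >= rhs - 1e-12
```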
Combining Lemma \ref{lemma: lower bound for the energy dissipation rate in terms of excess energy} with \eqref{eqn: energy estimate on each time slice simplified version} and Lemma \ref{lemma: estimates concerning closest equilbrium}, we obtain the following corollary. \begin{corollary}\label{coro: exponential decay of H1 distance from equilibrium when it is in a small H2.5 neighborhood} Let $X_0$ satisfy all the assumptions in Theorem \ref{thm: global existence near equilibrium} so that $X$ is the unique global-in-time solution of \eqref{eqn: contour dynamic formulation of the immersed boundary problem} starting from $X_0$. Then there exist universal constants $\varepsilon_{*}',\alpha>0$, such that if in addition $X(\cdot, t)\in S_{\sqrt{2}\varepsilon_{*}'}$, \begin{align*}
\|X\|^2_{\dot{H}^1(\mathbb{T})}(t)-\|X_*\|^2_{\dot{H}^1(\mathbb{T})}(t)\leq &\;e^{-2\alpha t}\left(\|X_0\|^2_{\dot{H}^1(\mathbb{T})}-\|X_{0*}\|^2_{\dot{H}^1(\mathbb{T})}\right),\\
\|X-X_*\|_{\dot{H}^1(\mathbb{T})}(t)\leq &\;2\sqrt{2}e^{-\alpha t}\|X_0-X_{0*}\|_{\dot{H}^1(\mathbb{T})}, \end{align*} where $S_\varepsilon$ is defined in \eqref{eqn: def of data close to equilibrium}. \end{corollary}
\subsection{Proof of exponential convergence to equilibrium configurations}\label{section: proof of exponential convergence to equilibrium configurations} Before we prove Theorem \ref{thm: exponential convergence}, we first state the following simple lemma. \begin{lemma}\label{lemma: property of epsilon_xi}
Let $\varepsilon_\xi$ be defined as in \eqref{eqn: defintion of varepsilon_* in the corollary}, i.e.\;$\varepsilon_\xi = 2C_4(C_6,1/(2\pi))(C_7+2) (|\ln (2\xi)|+1)^2(2\xi)$, where $C_4$, $C_6$ and $C_7$ are universal constants. Then $\varepsilon_\xi$ is increasing in $\xi$ on $(0,1/(2e)]$. Moreover, for $\forall\,\xi \in(0,1/(2e)]$ and $\forall \,c\geq e$, \begin{equation*} \frac{1}{c}\varepsilon_{\xi}\leq \varepsilon_{(\xi/c)}\leq \frac{1}{c}\left(\frac{2+\ln c}{2}\right)^2\varepsilon_{\xi}\triangleq \beta_c\varepsilon_\xi. \end{equation*} The first inequality holds even for $\forall\, c\geq 1$. $\beta_c$ is decreasing in $c$ for $c\geq e$, and $\beta_c\leq \frac{9}{4e}<1$ for $\forall\, c\geq e$. \end{lemma} Its proof is a simple calculation, which we shall omit. Now we are able to prove Theorem \ref{thm: exponential convergence}.
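The claimed properties of $\varepsilon_\xi$ are also easy to confirm numerically. In the Python sketch below (illustration only), the universal prefactor $2C_4(C_6,1/(2\pi))(C_7+2)$ is replaced by a stand-in value $A$; this is harmless since $A$ cancels in every inequality being tested.

```python
import math
import random

def eps(xi, A=1.0):
    # epsilon_xi with the universal prefactor replaced by a stand-in A;
    # A cancels in all the inequalities checked below
    return A * (abs(math.log(2 * xi)) + 1) ** 2 * (2 * xi)

def beta(c):
    return ((2 + math.log(c)) / 2) ** 2 / c

# monotonicity of eps on (0, 1/(2e)]
grid = [j * (1 / (2 * math.e)) / 1000 for j in range(1, 1001)]
assert all(eps(a) <= eps(b) for a, b in zip(grid, grid[1:]))

# the two-sided scaling inequality eps_xi / c <= eps_{xi/c} <= beta_c * eps_xi
random.seed(0)
for _ in range(1000):
    xi = random.uniform(1e-9, 1 / (2 * math.e))
    c = math.exp(random.uniform(1.0, 10.0))      # c >= e
    assert eps(xi) / c <= eps(xi / c) <= beta(c) * eps(xi) * (1 + 1e-12)

# beta_c <= 9/(4e) < 1 for c >= e, with equality at c = e
assert beta(math.e) <= 9 / (4 * math.e) + 1e-12 < 1
```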
\begin{proof}[Proof of Theorem \ref{thm: exponential convergence}] As before, we assume $R_{X_0} = 1$. For convenience, we denote \begin{equation*}
\mathcal{F}(t) = \|X-X_*\|_{\dot{H}^1(\mathbb{T})}(t),\quad \mathcal{G}(t) = \|X-X_*\|_{\dot{H}^{5/2}(\mathbb{T})}(t). \end{equation*} Note that the case $\mathcal{F}(0) = 0$ is trivial; we shall only discuss the case $\mathcal{F}(0)>0$ in the sequel.
Let $\varepsilon_*'$ be defined as in Corollary \ref{coro: exponential decay of H1 distance from equilibrium when it is in a small H2.5 neighborhood}. We may assume $\varepsilon_*'\leq 1$. Take $\xi_{**}\leq 1/(16e)$ such that \begin{equation}
\varepsilon_{8\xi_{**}} = 2C_4(C_6,1/(2\pi))(C_7+2) (|\ln (16\xi_{**})|+1)^2(16\xi_{**}) = \sqrt{2}\varepsilon_*', \label{eqn: definition of xi_**} \end{equation} where $C_4$, $C_6$ and $C_7$ are universal constants defined in \eqref{eqn: introducing C_4}, \eqref{eqn: uniform bound of the family of solution} and \eqref{eqn: bound for H1 norm of the difference to the equilibrium} respectively. Such a $\xi_{**}$ exists uniquely and is a universal constant, thanks to the assumptions $C_4\geq 1$ and $\varepsilon_*'\leq 1$, since it is required that \begin{equation*}
(|\ln (16\xi_{**})|+1)^2(16\xi_{**}) =\frac{\sqrt{2}\varepsilon_*'}{2C_4(C_7+2) }\leq\frac{\sqrt{2}}{4}, \end{equation*}
while $x(|\ln x|+1)^2$ monotonically maps $(0,1/e]$ onto $(0,4/e]$. Hence, by Corollary \ref{coro: refined decay estimate of global solution}, the solution $X$ satisfies that for $\forall\, t\geq 0$, \begin{equation} \mathcal{G}(t)\leq \max\{2\mathcal{G}(0)e^{-t/8},\varepsilon_{\mathcal{F}(0)}\}\leq \max \{2\mathcal{G}(0)e^{-t/8},\sqrt{2}\varepsilon_*'\}. \label{eqn: decay of G in the first time interval crude form} \end{equation} Here we used Lemma \ref{lemma: property of epsilon_xi} to find that $\varepsilon_{\mathcal{F}(0)} \leq \varepsilon_{8\xi_{**}}= \sqrt{2}\varepsilon_*'$. If $2\mathcal{G}(0)\leq \varepsilon_{\mathcal{F}(0)}$, we take $t_* = 0$; otherwise, take $t_*$ such that \begin{equation} 2\mathcal{G}(0)e^{-t_*/8}=\varepsilon_{\mathcal{F}(0)}. \label{eqn: first constraints on t_* in exp convergence} \end{equation} Hence, we have $X(t)\in S_{\varepsilon_{\mathcal{F}(0)}}\subset S_{\sqrt{2}\varepsilon_*'}$ if $t\geq t_*$, which allows us to apply Corollary \ref{coro: exponential decay of H1 distance from equilibrium when it is in a small H2.5 neighborhood} for $t\geq t_*$. By Lemma \ref{lemma: energy estimate} and Lemma \ref{lemma: estimates concerning closest equilbrium}, we derive that \begin{equation*}
\mathcal{F}(t_*)\leq 2 \left(\|X\|^2_{\dot{H}^1(\mathbb{T})} - \|X_*\|^2_{\dot{H}^1(\mathbb{T})}\right)^{1/2}(t_*)\leq 2\left(\|X_0\|^2_{\dot{H}^1(\mathbb{T})} - \|X_{0*}\|^2_{\dot{H}^1(\mathbb{T})}\right)^{1/2}\leq 2\sqrt{2}\mathcal{F}(0). \end{equation*} By Corollary \ref{coro: exponential decay of H1 distance from equilibrium when it is in a small H2.5 neighborhood}, for some universal $\alpha>0$, and $\forall\, t>0$, \begin{equation*} \mathcal{F}(t_*+t)\leq 2\sqrt{2}\mathcal{F}(t_*)e^{-\alpha t}\leq 8\mathcal{F}(0)e^{-\alpha t}. \end{equation*} Note that $8\mathcal{F}(0)\in (0,1/(2e)]$ by the assumption on $\xi_{**}$, in which interval $\varepsilon_\xi$ is increasing in $\xi$. Without loss of generality, we may assume $\alpha <\frac{1}{8}$. We additionally take $t_{**}>0$ to be a universal constant such that \begin{equation} e^{-\alpha t_{**}}\leq \frac{1}{8},\quad e^{\left(\frac{1}{8}-\alpha\right)t_{**}}\geq 2. \label{eqn: constraint on t_**} \end{equation}
We now use mathematical induction to show \eqref{eqn: exp convergence in H2.5 norm}. Let us summarize what has been proved so far: \begin{enumerate} \item For $\forall\, t\geq 0$, $\mathcal{G}(t)\leq \max \{2e^{-t/8}\mathcal{G}(0),\varepsilon_{\mathcal{F}(0)}\}$. \item With the choice of $t_*$ in \eqref{eqn: first constraints on t_* in exp convergence}, for $\forall\, t\in[t_*, t_*+2t_{**}]$, $\mathcal{G}(t)\leq \varepsilon_{\mathcal{F}(0)}\leq \sqrt{2}\varepsilon_*'$. \item For $\forall\, t>0$, $\mathcal{F}(t_*+t)\leq 8\mathcal{F}(0)e^{-\alpha t}$. In particular, with the choice of $t_{**}$ in \eqref{eqn: constraint on t_**}, for $\forall\,k\in\mathbb{Z}_+$, $k\geq 2$, $\mathcal{F}(t_*+kt_{**})\leq e^{-\alpha (k-1)t_{**}}\mathcal{F}(0)$. \end{enumerate}
Suppose that \begin{equation} \mathcal{G}(t_*+kt_{**})\leq \varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)} \label{eqn: decay of G in the exp convergence} \end{equation} has been proved for some $k\geq 2$, $k\in\mathbb{Z}_+$ (indeed, the case $k =2$ has been established above). By Corollary \ref{coro: refined decay estimate of global solution}, $\forall\, t\in[0,t_{**}]$, \begin{equation} \begin{split} \mathcal{G}(t_*+kt_{**}+t)\leq &\;\max\{2e^{-t/8}\mathcal{G}(t_*+kt_{**}),\varepsilon_{\mathcal{F}(t_*+kt_{**})}\}\\ \leq &\;\max\{2e^{-t/8}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)},\varepsilon_{e^{-(k-1)\alpha t_{**}}\mathcal{F}(0)}\}. \end{split} \label{eqn: estimates of G inside the time interval} \end{equation} In particular, \begin{equation*} \mathcal{G}(t_*+(k+1)t_{**})\leq \max\{2e^{-t_{**}/8}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)},\varepsilon_{e^{-(k-1)\alpha t_{**}}\mathcal{F}(0)}\}. \end{equation*} We claim that with the choice of $t_{**}$ in \eqref{eqn: constraint on t_**}, the first term on the right hand side is always smaller. Indeed, by \eqref{eqn: constraint on t_**}, and the lower bound in Lemma \ref{lemma: property of epsilon_xi}, \begin{equation*} \frac{2e^{-t_{**}/8}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)}}{\varepsilon_{e^{-(k-1)\alpha t_{**}}\mathcal{F}(0)}} \leq \frac{2e^{-t_{**}/8}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)}}{e^{-\alpha t_{**}}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)}} = 2e^{-\left(\frac{1}{8}-\alpha\right)t_{**}}\leq 1. \end{equation*} Hence, we proved that \begin{equation*} \mathcal{G}(t_*+(k+1)t_{**})\leq \varepsilon_{e^{-(k-1)\alpha t_{**}}\mathcal{F}(0)}. \end{equation*} Therefore, by induction, \eqref{eqn: decay of G in the exp convergence} is true for all $k\in\mathbb{Z}_+$, $k\geq 2$; so is \eqref{eqn: estimates of G inside the time interval}. 
With the choice of $t_{**}$, we use \eqref{eqn: estimates of G inside the time interval} and the upper bound in Lemma \ref{lemma: property of epsilon_xi} to derive that for $\forall\, t\in[0,t_{**}]$, \begin{equation*} \begin{split} \mathcal{G}(t_*+kt_{**}+t)\leq &\;\max\{2e^{-t/8}\varepsilon_{e^{-(k-2)\alpha t_{**}}\mathcal{F}(0)},\varepsilon_{e^{-(k-1)\alpha t_{**}}\mathcal{F}(0)}\}\\ \leq &\;\max\{2e^{-t/8}\beta_{e^{\alpha t_{**}}}^{k-2}\varepsilon_{\mathcal{F}(0)},\beta_{e^{\alpha t_{**}}}^{k-1}\varepsilon_{\mathcal{F}(0)}\}\\ \leq &\;\beta_{8}^{k-2}\varepsilon_{\mathcal{F}(0)} \max\{2e^{-t/8},\beta_{8}\} \leq 2\beta_{8}^{k-2}\varepsilon_{\mathcal{F}(0)}. \end{split} \end{equation*} Note that $t_{**}$ and $\beta_8<1$ are both universal constants. Hence, combining this with the fact that $\mathcal{G}(t_*+t) \leq \varepsilon_{\mathcal{F}(0)}$ for $\forall\, t\geq 0$, we find that there exist universal constants $\alpha_*\leq 1/8$ and $C>1$, such that for $\forall\, t\geq 0$, \begin{equation} \mathcal{G}(t_*+t)\leq C e^{-\alpha_* t}\varepsilon_{\mathcal{F}(0)}. \label{eqn: exp decay when t is larger than t_*} \end{equation}
If $t_* = 0$, we have readily proved that for $\forall\, t\geq 0$, \begin{equation} \mathcal{G}(t)\leq C e^{-\alpha_* t}\varepsilon_{\mathcal{F}(0)}, \label{eqn: exp decay when G is small} \end{equation} where $C$ and $\alpha_*$ are universal constants. If $t_*>0$, by \eqref{eqn: first constraints on t_* in exp convergence} and the fact that $\alpha_*\leq 1/8$, $\varepsilon_{\mathcal{F}(0)} = 2e^{-t_*/8}\mathcal{G}(0)\leq 2e^{-\alpha_{*}t_*}\mathcal{G}(0)$. Hence, by \eqref{eqn: exp decay when t is larger than t_*}, for $\forall\, t\geq 0$, \begin{equation} \mathcal{G}(t_*+t)\leq C e^{-\alpha_* (t_*+t)}\mathcal{G}(0). \label{eqn: decay of G in the latter time interval} \end{equation} On the other hand, since $\alpha_*\leq 1/8$, we also know that for $t\in[0,t_*]$, \begin{equation} \mathcal{G}(t)\leq 2e^{-t/8}\mathcal{G}(0)\leq 2e^{-\alpha_{*}t}\mathcal{G}(0). \label{eqn: decay of G in the first time interval} \end{equation} Combining \eqref{eqn: decay of G in the latter time interval} and \eqref{eqn: decay of G in the first time interval}, we conclude that \begin{equation} \mathcal{G}(t)\leq Ce^{-\alpha_{*}t}\mathcal{G}(0), \label{eqn: exp decay when G is large} \end{equation} with some universal constants $\alpha_*$ and $C$. Combining \eqref{eqn: exp decay when G is small} and \eqref{eqn: exp decay when G is large}, we complete the proof of \eqref{eqn: exp convergence in H2.5 norm}.
In order to prove \eqref{eqn: exp convergence to a fixed configuration}, we use the fact $u_{X_*}(x) \equiv 0$ to derive that \begin{equation*}
\|u_X(X(s))\|_{H^1(\mathbb{T})} = \|u_X(X(s))- u_{X_*}(X_*(s))\|_{H^1(\mathbb{T})} \leq \|\mathcal{L}X-\mathcal{L}X_*\|_{H^1(\mathbb{T})}+\|g_X-g_{X_*}\|_{H^1(\mathbb{T})}. \end{equation*} By Corollary \ref{coro: L2 estimate for g_X1-g_X2} and Corollary \ref{coro: H1 estimate for g_X1-g_X2}, \begin{equation*}
\|X_t(s,t)\|_{H^1(\mathbb{T})} = \|u_X(X(s),t)\|_{H^1(\mathbb{T})} \leq C \|X-X_*\|_{\dot{H}^2(\mathbb{T})}(t) \leq C \|X-X_*\|_{\dot{H}^{5/2}(\mathbb{T})}(t). \end{equation*} Here $C$ is a universal constant thanks to the uniform estimates of solutions obtained in Theorem \ref{thm: global existence near equilibrium}. Hence, by \eqref{eqn: exp convergence in H2.5 norm}, \begin{equation}
\|X(s,t)-X(s,t')\|_{H^1(\mathbb{T})} \leq \int_{t}^{t'} \|X_t(s,\tau)\|_{H^1(\mathbb{T})}\,d\tau \leq CB(X_0)\int_{t}^{t'} e^{-\alpha_*\tau}\,d\tau. \label{eqn: X(t) is a Cauchy sequence in H^1 given the exp decay} \end{equation} Here $B(X_0)$ is defined in \eqref{eqn: exp convergence in H2.5 norm} and $C$ is a universal constant. This implies that $X(s,t)$ is a Cauchy sequence in $H^1(\mathbb{T})$, which converges to some $X_\infty(s)\in H^1(\mathbb{T})$. Take $t'\rightarrow +\infty$ in \eqref{eqn: X(t) is a Cauchy sequence in H^1 given the exp decay} and we find \begin{equation}
\|X(s,t)-X_\infty(s)\|_{H^1(\mathbb{T})} \leq CB(X_0)e^{-\alpha_*t}. \label{eqn: exp convergence to a fixed configuration in H^1 norm} \end{equation}
On the other hand, by virtue of the bound \eqref{eqn: uniform bound of H 2.5 norm for the global solution} of $\|X\|_{\dot{H}^{5/2}(\mathbb{T})}(t)$, we may take $\tilde{X}_{w,\infty}\in H^{5/2}(\mathbb{T})$ as an arbitrary weak limit (up to a subsequence) of $\{\tilde{X}(t)\}_{t\geq 0}$ in $H^{5/2}(\mathbb{T})$. Note that we only have the bound on the $H^{5/2}$-seminorm of $X(t)$, so at this point we can only extract a weak limit of $\{\tilde{X}(t)\}_{t\geq 0}$ instead of $\{X(t)\}_{t\geq 0}$. By compact Sobolev embedding, $\tilde{X}_{w,\infty}$ is a strong $H^1(\mathbb{T})$-limit of a subsequence of $\{\tilde{X}(t)\}_{t\geq 0}$. Since $\tilde{X}(t)\rightarrow \tilde{X}_{\infty}$ in $H^1(\mathbb{T})$, one must have $\tilde{X}_{w,\infty} = \tilde{X}_\infty$. This is true for every weak limit of $\{\tilde{X}(t)\}_{t\geq 0}$. Hence, $X_{\infty} \in H^{5/2}(\mathbb{T})$ and satisfies $\|X_\infty\|_{\dot{H}^{5/2}(\mathbb{T})}\leq C$, with the same universal constant $C$ as in \eqref{eqn: uniform bound of H 2.5 norm for the global solution}. By \eqref{eqn: exp convergence in H2.5 norm} and the convergence $X(t)\rightarrow X_\infty$ in $H^1(\mathbb{T})$, we know that $\|X_\infty - X_{\infty,*}\|_{\dot{H}^1(\mathbb{T})} = 0$. Hence $X_\infty = X_{\infty,*}$ is an equilibrium configuration.
Finally, we derive \eqref{eqn: exp convergence to a fixed configuration} as follows: \begin{equation*} \begin{split}
\|X(t)-X_\infty\|_{H^{5/2}} \leq &\; C\|X(t)-X_\infty\|_{H^1}+C\|X(t)-X_\infty\|_{\dot{H}^{5/2}}\\
\leq &\; C\|X(t)-X_\infty\|_{H^1}+C\|X(t)-X_*(t)\|_{\dot{H}^{5/2}}+C\|X_\infty(t)-X_*(t)\|_{\dot{H}^{5/2}}. \end{split} \end{equation*} Note that both $X_*$ and $X_\infty$ are equilibrium configurations. Since $X_\infty(s,t)-X_*(s,t)$ as a function of $s\in\mathbb{T}$ only contains Fourier modes with wave numbers $0$ and $\pm 1$, we can replace the $H^{5/2}$-seminorm in the last term by $H^1$-seminorm without changing its value, i.e. \begin{equation*} \begin{split}
\|X(t)-X_\infty\|_{H^{5/2}} \leq &\; C\|X(t)-X_\infty\|_{H^1}+C\|X(t)-X_*(t)\|_{\dot{H}^{5/2}}+C\|X_\infty(t)-X_*(t)\|_{\dot{H}^{1}}\\
\leq &\; C\|X(t)-X_\infty\|_{H^1}+C\|X(t)-X_*(t)\|_{\dot{H}^{5/2}}+C\|X(t)-X_\infty(t)\|_{\dot{H}^{1}}\\
&\;+C\|X(t)-X_*(t)\|_{\dot{H}^{1}}\\
\leq &\; C\|X(t)-X_\infty\|_{H^1}+C\|X(t)-X_*(t)\|_{\dot{H}^{5/2}}\\ \leq &\; CB(X_0)e^{-\alpha_*t}. \end{split} \end{equation*} In the last inequality, we used \eqref{eqn: exp convergence in H2.5 norm} and \eqref{eqn: exp convergence to a fixed configuration in H^1 norm}. This completes the proof of \eqref{eqn: exp convergence to a fixed configuration}. \end{proof}
\section{Conclusion and Discussion} In this paper, we transform the Stokes immersed boundary problem \eqref{eqn: stokes equation}-\eqref{eqn: kinematic equation of membrane} in two dimensions into a contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem} via the fundamental solution of the Stokes equation. We proved that there exists a unique local solution of the contour dynamic formulation (Theorem \ref{thm: local in time existence} and Theorem \ref{thm: local in time uniqueness}), provided that the initial data is an $H^{5/2}$-function in the Lagrangian coordinate and satisfies the well-stretched condition \eqref{eqn: well_stretched assumption}. If in addition the initial configuration is sufficiently close to an equilibrium, the solution exists globally in time (Theorem \ref{thm: global existence near equilibrium}), and it converges exponentially to an equilibrium as $t\rightarrow +\infty$ (Theorem \ref{thm: exponential convergence}). Regularity of the ambient flow field can thus be recovered through the fundamental solution of the Stokes equation (Lemma \ref{lemma: the velocity field is continuous} and Lemma \ref{lemma: energy estimate}).
In the contour dynamic formulation \eqref{eqn: contour dynamic formulation of the immersed boundary problem}, the string motion is given by a singular integral, which depends nonlinearly on the string configuration. The starting point of the proofs in this paper is that the principal part of the singular integral in the contour dynamic formulation introduces dissipation, which essentially results from the dissipation in the Stokes flow. Then it suffices to show that the remainder term is regular in some sense and can be well-controlled by the dissipation. The same approach may also apply to the higher dimensional case, where a 2-D closed membrane is immersed and moving in a 3-D Stokes flow, although the description of the 2-D membrane requires some extra effort. Note that the equilibrium shape of the membrane may not necessarily be a sphere. We shall address this problem in a forthcoming work.
In this paper, we only consider the simplest case where the 1-D string is modeled by a Hookean material with zero resting length in the force-free state. See the local elastic energy density \eqref{eqn: elastic energy density}. In particular, the material always tends to shorten its length at all times. Other types of elastic constitutive laws can also be considered. In fact, most of the discussion in this paper may also apply to more general elastic energies of other forms. We do not pursue this topic in depth here, but it would be interesting to find out what conditions are needed for the energy density so that the current approach still works.
\appendix \section{Appendix} \subsection{Study of the Flow Field}\label{appendix section: study of the flow field} In this section, we shall prove Lemma \ref{lemma: the velocity field is continuous} and Lemma \ref{lemma: energy estimate}, which characterize properties of the flow field $u_X$. Roughly speaking, Lemma \ref{lemma: the velocity field is continuous} claims that under certain assumptions on $X$, $u_X\in C(\mathbb{R}^2)$ and $\nabla u_X \in L^2(\mathbb{R}^2)$; while Lemma \ref{lemma: energy estimate} proves an energy estimate of the whole system, which says that under certain assumptions on $X$, the decrease in the elastic energy of the string is fully accounted for by the energy dissipation in the Stokes flow. See their precise statements in Section \ref{section: energy estimate}. \begin{proof}[Proof of Lemma \ref{lemma: the velocity field is continuous}] If $x\not\in\Gamma_t$, since $G(x-X(s',t))$ in \eqref{eqn: expression for velocity field} is smooth at $x$, $u_X$ is also smooth at $x$. We then turn to show continuity of $u_X$ at $X(s,t)\in \Gamma_t$.
Take an arbitrary $x\in\mathbb{R}^2$, and let $s_x$ be defined as in \eqref{eqn: definition of s_x}. Note that $s_x$ may not be unique; if so, take an arbitrary one. We first show that \begin{equation}
|x-X(s')| \geq \frac{\lambda}{2}|s'-s_x|. \label{eqn: lower bound for distance between x and X(s')} \end{equation}
Indeed, if $|s'-s_x| \leq 2\lambda^{-1}\mathrm{dist}(x,X(\mathbb{T}))$, then by definition of $s_x$, \begin{equation*}
|x-X(s')| \geq |x-X(s_x)| = \mathrm{dist}(x,X(\mathbb{T})) \geq \frac{\lambda}{2}|s'-s_x|. \end{equation*}
If $|s'-s_x| \geq 2\lambda^{-1}\mathrm{dist}(x,X(\mathbb{T}))$, by the triangle inequality, \begin{equation*}
|x-X(s')| \geq |X(s')-X(s_x)| - |x-X(s_x)| \geq \lambda|s'-s_x| - \mathrm{dist}(x,X(\mathbb{T}))\geq \frac{\lambda}{2}|s'-s_x|. \end{equation*}
This proves \eqref{eqn: lower bound for distance between x and X(s')}. If $s_x$ and $s_x'$ both satisfy \eqref{eqn: definition of s_x}, \eqref{eqn: lower bound for distance between x and X(s')} implies that $|s_x-s_x'|\leq 2\lambda^{-1}\mathrm{dist}(x,X(\mathbb{T}))$.
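The bound \eqref{eqn: lower bound for distance between x and X(s')} can be illustrated numerically on the unit circle, which satisfies the well-stretched condition \eqref{eqn: well_stretched assumption} with $\lambda = 2/\pi$. Note that the two-case argument above applies verbatim when $s_x$ is merely a minimizer over a discrete grid of sample points, so the inequality below holds exactly up to rounding. A Python sketch (illustration only):

```python
import math
import random

lam = 2 / math.pi          # well-stretched constant of the unit circle
M = 256
grid = [2 * math.pi * j / M for j in range(M)]

def tdist(a, b):
    # distance on the torus T = R / (2*pi*Z)
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

random.seed(0)
for _ in range(100):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    # distances from x to the sampled curve points X(s) = (cos s, sin s)
    dist = [math.hypot(x[0] - math.cos(t), x[1] - math.sin(t)) for t in grid]
    sx = grid[min(range(M), key=dist.__getitem__)]
    # |x - X(s')| >= (lambda/2) |s' - s_x| at every grid point s'
    assert all(dist[j] >= 0.5 * lam * tdist(grid[j], sx) - 1e-9 for j in range(M))
```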
Next, we shall take the limit $x\rightarrow X(s,t)$ and apply the dominated convergence theorem to \eqref{eqn: 2D velocity field}. Here we do not assume $x\not\in \Gamma_t$. For $s'\not = s$, using the formula in \eqref{eqn: 2D velocity field} and \eqref{eqn: velocity of membrane}, it is easy to find that \begin{equation*} \partial_{s'} [G(x-X(s'))](X'(s')-X'(s_x)) \rightarrow \partial_{s'} [G(X(s)-X(s'))](X'(s')-X'(s)). \end{equation*} Here we used the fact that $X'(s_x)\rightarrow X'(s)$ as $x\rightarrow X(s,t)$. This is because \eqref{eqn: lower bound for distance between x and X(s')} implies that $s_x \rightarrow s$, while $X\in H^2(\mathbb{T})$ implies that $X'\in C^{1/2}(\mathbb{T})$.
On the other hand, by \eqref{eqn: lower bound for distance between x and X(s')}, \begin{equation*} \begin{split}
|\partial_{s'} [G(x-X(s'))](X'(s')-X'(s_x))|\leq &\; C\frac{|X'(s')||X'(s')-X'(s_x)|}{|X(s')-x|}\\
\leq &\;\frac{C|X'(s')|}{\lambda |s_x-s'|}\int_{s'}^{s_x} |X''(\tau)| \,d\tau\\
\leq &\;C\lambda^{-1}|X'(s')||\mathcal{M}X''(s')|. \end{split} \end{equation*}
Here $\mathcal{M}$ is the centered Hardy-Littlewood maximal operator on $\mathbb{T}$ \cite{grafakos2008classical}. Note that the bound is independent of $x$. Since $\mathcal{M}$ is bounded from $L^2(\mathbb{T})$ to $L^2(\mathbb{T})$, $C\lambda^{-1}|X'(s')||\mathcal{M}X''(s')|\in L^1(\mathbb{T})$. Therefore, by the dominated convergence theorem, $u_X(x)\rightarrow u_X(X(s))$ as $x\rightarrow X(s)$. This completes the proof of the continuity of $u_X$.
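The last step of the preceding estimate uses that a one-sided average of $|X''|$ over $[s', s_x]$ is controlled by the centered maximal function at $s'$, up to a factor absorbed into $C$. In a discrete model on $\mathbb{Z}/N\mathbb{Z}$ the factor $2$ already suffices, since a one-sided window of $m+1$ points sits inside the centered window of $2m+1$ points. A short Python check (illustration only):

```python
import random

random.seed(0)
N = 128
f = [abs(random.gauss(0, 1)) for _ in range(N)]      # stands in for |X''|

def centered_max(j):
    # discrete centered Hardy-Littlewood maximal function on Z/NZ
    best = f[j]
    for m in range(1, N // 2):
        best = max(best, sum(f[(j + i) % N] for i in range(-m, m + 1)) / (2 * m + 1))
    return best

for j in range(0, N, 8):
    Mf = centered_max(j)
    for m in range(1, N // 2):
        one_sided = sum(f[(j + i) % N] for i in range(m + 1)) / (m + 1)
        # one-sided average over [j, j+m] <= 2 * (centered maximal function at j)
        assert one_sided <= 2 * Mf + 1e-12
```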
To show $\nabla u_X\in L^2(\mathbb{R}^2)$, we first introduce a mollifier $\varphi(x)\in C^\infty_0(\mathbb{R}^2)$, such that $\varphi\geq 0$, $\mathrm{supp}\, \varphi \subset B(0,1)$ and $\int_{\mathbb{R}^2}\varphi(x)\,dx = 1$. Define $\varphi_\varepsilon(x) = \varepsilon^{-2}\varphi(x/\varepsilon)$. Let $f_\varepsilon = \varphi_\varepsilon * f$ and let $(u_\varepsilon, p_\varepsilon)$ solve the Stokes equation with $\varepsilon <1$, \begin{equation} \begin{split} &\;-\Delta u_\varepsilon + \nabla p_\varepsilon = f_\varepsilon,\\ &\;\mathrm{div} u_\varepsilon = 0,\\
&\;|u_\varepsilon|,|p_\varepsilon|\rightarrow 0\mbox{ as }|x|\rightarrow \infty. \end{split} \label{eqn: regularized stokes equation} \end{equation} It is obvious that $u_\varepsilon = \varphi_\varepsilon*u_X$ and $p_\varepsilon = \varphi_\varepsilon *p_X$. On the other hand, since $f\in \mathscr{M}(\mathbb{R}^2)$ under the assumption of the lemma, $f_\varepsilon$ is smooth, and so are $u_\varepsilon$ and $p_\varepsilon$.
We also introduce a cut-off function $\phi\in C^\infty_0(\mathbb{R}^2)$, such that $\phi\geq 0$; $\phi(y)= 1$ for $|y|\leq 1$; $\phi(y)= 0$ for $|y|\geq 2$; and $|\nabla\phi(y)|\leq C$ for some universal constant $C$. Define $\phi_r(x) \triangleq \phi(x/r)$. For given $t$, assume $\Gamma_t\subset B_R(0)=\{x\in\mathbb{R}^2:\;|x|< R\}$ with some $R>2$. We multiply both sides of the regularized Stokes equation \eqref{eqn: regularized stokes equation} by $\phi_r u_\varepsilon$, with $r= 2R+1$, and integrate over $\mathbb{R}^2$. By integration by parts, we obtain that \begin{equation}
\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_\varepsilon(x,t)|^2\,dx + \int_{\mathbb{R}^2}(u_{\varepsilon,i}\partial_j \phi_r \partial_j u_{\varepsilon,i}-u_{\varepsilon,i} \partial_i \phi_r p_\varepsilon)\,dx = \int_{\mathbb{R}^2}\phi_r (x) u_\varepsilon(x,t)f_\varepsilon(x,t)\,dx. \label{eqn: take inner product to get the energy estimate for the regularized equation} \end{equation} Note that $\nabla \phi$ is supported on $B_{2r}(0)\backslash B_r(0)$, which is away from $X(\cdot,t)$. In the region $B_{6R}(0)\backslash B_{2R}(0)$, which contains an $\varepsilon$-neighborhood of $B_{2r}(0)\backslash B_r(0)$ since $R>2>2\varepsilon$, $u_X$, $\nabla u_X$ and $p_X$ have the following $L^\infty$-bound due to \eqref{eqn: 2D velocity field} and \eqref{eqn: 2D pressure field} with $C_x = 0$. \begin{align*}
|u_X(x)|\leq &\;CR^{-1}\|X\|_{\dot{H}^1(\mathbb{T})}^2,\\
|\nabla u_X(x)|\leq &\;CR^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^2,\\
|p_X(x)|\leq &\;CR^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^2. \end{align*} Therefore, the regularized solutions also enjoy similar bounds in $B_{2r}(0)\backslash B_r(0)$, namely \begin{align}
|u_\varepsilon(x)|\leq &\;Cr^{-1}\|X\|_{\dot{H}^1(\mathbb{T})}^2,\label{eqn: far field estimates of the regularized u}\\
|\nabla u_\varepsilon(x)|\leq &\;Cr^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^2,\label{eqn: far field estimates of the regularized grad u}\\
|p_\varepsilon(x)|\leq &\;Cr^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^2.\label{eqn: far field estimates of the regularized p} \end{align} Note that the constants $C$'s are uniform in $\varepsilon$. Applying these estimates in \eqref{eqn: take inner product to get the energy estimate for the regularized equation}, we find that \begin{equation*}
\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_\varepsilon(x,t)|^2\,dx \leq \int_{\mathbb{R}^2}\phi_r (x) u_\varepsilon(x,t)f_\varepsilon(x,t)\,dx + C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4. \end{equation*} It is known that \begin{equation} u_\varepsilon \rightarrow u_X \mbox{ in }C_{loc}(\mathbb{R}^2),\quad f_\varepsilon \rightarrow f \mbox{ in }\mathscr{M}(\mathbb{R}^2). \label{eqn: convergence of the regularized solution to the original solution} \end{equation} This gives \begin{equation*}
\left|\int_{\mathbb{R}^2}\phi_r (x) u_\varepsilon(x,t)f_\varepsilon(x,t)\,dx\right| \rightarrow \left|\int_{\mathbb{R}^2}\phi_r (x) u_X(x,t)f(x,t)\,dx\right|\mbox{ as }\varepsilon \rightarrow 0^+. \end{equation*} Therefore, by \eqref{eqn: a trivial L^infty bound for velocity}, \begin{equation} \begin{split}
\limsup_{\varepsilon\rightarrow 0^+}\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_\varepsilon(x,t)|^2\,dx \leq &\;\left|\int_{\mathbb{R}^2}\phi_r (x) u_X(x,t)f(x,t)\,dx\right| + C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4\\
\leq &\;\int_{\mathbb{T}}|u(X(s,t),t)X_{ss}(s,t)|\,ds+ C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4\\
\leq &\;C\lambda^{-1}\|X\|_{\dot{H}^2(\mathbb{T})}^3+ C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4. \label{eqn: a local bound for the dissipation rate of the regularized solution} \end{split} \end{equation} Here we used the fact that $\Gamma_t\subset B_r(0)$. This implies that for fixed $r$ and any sequence $\varepsilon_k\rightarrow 0^+$, there exists a subsequence $\varepsilon_{k_l}\rightarrow 0^+$ and $\tilde{u}\in H^1(B_r(0))$ such that $\nabla u_{\varepsilon_{k_l}}\rightharpoonup \nabla \tilde{u}$ in $L^2(B_r(0))$. Combining this with the fact that $u_\varepsilon \rightarrow u_X$ in $C(B_r(0))$, we further obtain $u_{\varepsilon_{k_l}}\rightarrow \tilde{u}$ in $L^2(B_r(0))$ and thus $\tilde{u} = u_X\in H^1(B_r(0))$. Since the sequence $\{\varepsilon_k\}$ is arbitrary and $r$ (or equivalently $R$) can be arbitrarily large, we conclude that $u_X\in H^1_{loc}(\mathbb{R}^2)$ and $\nabla u_\varepsilon \rightarrow \nabla u_X$ in $L^2_{loc}(\mathbb{R}^2)$. Here we obtain local strong convergence as a property of the mollification. This together with \eqref{eqn: a local bound for the dissipation rate of the regularized solution} implies that \begin{equation*}
\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_X(x,t)|^2\,dx \leq C\lambda^{-1}\|X\|_{\dot{H}^2(\mathbb{T})}^3+ C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4. \end{equation*} Take $r\rightarrow \infty$ and we find \begin{equation}
\int_{\mathbb{R}^2}|\nabla u_X(x,t)|^2\,dx \leq C\lambda^{-1}\|X\|_{\dot{H}^2(\mathbb{T})}^3. \label{eqn: a trivial bound for the energy dissipation rate or H1 semi norm of velocity field} \end{equation} This completes the proof. \end{proof}
As mentioned before, Lemma \ref{lemma: energy estimate} provides an energy estimate of the system, which will be used in the proofs of Theorem \ref{thm: global existence near equilibrium} and Theorem \ref{thm: exponential convergence}. \begin{proof}[Proof of Lemma \ref{lemma: energy estimate}]
Since $X\in C_TH^2(\mathbb{T})$, $\Gamma_t$ stays in a bounded set in $t\in[0,T]$. We may assume $\Gamma_t\subset B_R(0)=\{x\in\mathbb{R}^2:\;|x|< R\}$ for $t\in [0,T]$ with some $R>2$. Again let $r = 2R+1$.
We start from the local energy estimate of the regularized solution in the proof of Lemma \ref{lemma: the velocity field is continuous}. By \eqref{eqn: take inner product to get the energy estimate for the regularized equation} and the decay estimates \eqref{eqn: far field estimates of the regularized u}-\eqref{eqn: far field estimates of the regularized p}, we find that \begin{equation*}
\limsup_{\varepsilon\rightarrow 0^+}\left|\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_\varepsilon(x,t)|^2\,dx - \int_{\mathbb{R}^2}\phi_r (x) u_\varepsilon(x,t)f_\varepsilon(x,t)\,dx\right| \leq C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4. \end{equation*} By the convergence \eqref{eqn: convergence of the regularized solution to the original solution} and $\nabla u_\varepsilon \rightarrow \nabla u_X$ in $L^2_{loc}(\mathbb{R}^2)$, it becomes \begin{equation*}
\left|\int_{\mathbb{R}^2}\phi_r(x)|\nabla u_X(x,t)|^2\,dx - \int_{\mathbb{R}^2}\phi_r (x) u_X(x,t)f(x,t)\,dx\right| \leq C r^{-2}\|X\|_{\dot{H}^1(\mathbb{T})}^4. \end{equation*} Take $r\rightarrow \infty$ and we find \begin{equation} \begin{split}
\int_{\mathbb{R}^2}|\nabla u_X(x,t)|^2\,dx = &\;\int_{\mathbb{R}^2} u_X(x,t)f(x,t)\,dx\\ =&\;\int_{\mathbb{R}^2}\int_{\mathbb{T}}u_X(x,t)F(s,t)\delta(x-X(s,t))\,dsdx=\int_{\mathbb{T}}u_X(X(s,t),t)F(s,t)\,ds\\ =&\;\int_{\mathbb{T}}X_t(s,t)X''(s,t)\,ds =-\int_{\mathbb{T}}X'_t(s,t)X'(s,t)\,ds\\
=&\;-\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}}|X'(s,t)|^2\,ds. \end{split} \label{eqn: energy estimate on each time slice} \end{equation} The last equality, established in \cite{temam1984navier}, Chapter III, \S\,1.4, holds in the scalar distribution sense. By a limiting argument and the assumption that $X\in C_T H^2(\mathbb{T})$, we may integrate both sides of \eqref{eqn: energy estimate on each time slice} in $t$ from $0$ to $T$ to obtain \eqref{eqn: energy estimate of Stokes immersed boundary problem}. \end{proof}
\subsection{A Priori Estimates Involving $\mathcal{L}$}\label{appendix section: estimates involving L} We state several a priori estimates involving the operator $\mathcal{L}$ without proofs. \begin{lemma}\label{lemma: improved Hs estimate and Hs continuity of semigroup solution} For $\forall\, v_0\in H^l(\mathbb{T})$ with arbitrary $l\in\mathbb{R}_+$, \begin{enumerate}
\item $\|\mathrm{e}^{t\mathcal{L}}v_0\|_{\dot{H}^l(\mathbb{T})} \leq \mathrm{e}^{-t/4}\|v_0\|_{\dot{H}^l(\mathbb{T})}$; \item $\mathrm{e}^{t\mathcal{L}}v_0\in C([0,+\infty);H^l(\mathbb{T}))$; \item $\mathrm{e}^{t\mathcal{L}}v_0\rightarrow v_0$ in $H^l(\mathbb{T})$ as $t\rightarrow 0^+$. \end{enumerate} \end{lemma}
\begin{lemma}\label{lemma: a priori estimate of nonlocal eqn} Given $T>0$, let $h\in L^2_T H^l(\mathbb{T})$. The model equation \begin{equation} \partial_t v(s,t) = \mathcal{L}v(s,t) +h(s,t),\quad v(s,0) = v_0(s),\quad s\in \mathbb{T},\; t\geq 0 \label{eqn: model nonlocal equation} \end{equation} has a unique solution $v\in L_T^\infty H^{l+1/2}\cap L_T^2 H^{l+1}(\mathbb{T})$ with $v_t \in L_T^2 H^l(\mathbb{T})$, satisfying the following a priori estimates: for $\forall\,t\in[0,T]$, \begin{enumerate} \item \begin{equation}
\|v\|_{\dot{H}^{l+1/2}(\mathbb{T})}(t)\leq \mathrm{e}^{-t/4}\|v_0\|_{\dot{H}^{l+1/2}(\mathbb{T})}+ \sqrt{2}\|h\|_{L_{[0,t]}^2 \dot{H}^{l}(\mathbb{T})}, \label{eqn: a priori estimate for nonlocal eqn L_infty_in_time estimate} \end{equation} \item \begin{equation}
\|v\|_{\dot{H}^{l+1/2}(\mathbb{T})}^2(t)+\frac{1}{4}\|v\|_{L^2_{[0,t]} \dot{H}^{l+1}(\mathbb{T})}^2 \leq \|v_0\|_{\dot{H}^{l+1/2}(\mathbb{T})}^2+ 4\|h\|_{L_{[0,t]}^2 \dot{H}^l(\mathbb{T})}^2. \label{eqn: a priori estimate for nonlocal eqn L_2_in_time estimate} \end{equation} Hence, \begin{equation*}
\|v\|_{L^\infty_{[0,t]}\dot{H}^{l+1/2}\cap L^2_{[0,t]}\dot{H}^{l+1}(\mathbb{T})}\leq 3\|v_0\|_{\dot{H}^{l+1/2}(\mathbb{T})}+ 6\|h\|_{L^2_{[0,t]}\dot{H}^l(\mathbb{T})}. \end{equation*} In particular, \begin{equation*}
\|\mathrm{e}^{t\mathcal{L}}v_0\|_{L^\infty_{[0,t]}\dot{H}^{l+1/2}\cap L^2_{[0,t]}\dot{H}^{l+1}(\mathbb{T})}\leq 3\|v_0\|_{\dot{H}^{l+1/2}(\mathbb{T})}. \end{equation*} \item \begin{equation*}
\|\partial_t v\|_{L^2_{[0,t]} \dot{H}^l(\mathbb{T})} \leq \frac{1}{2}\|v_0\|_{\dot{H}^{l+1/2}(\mathbb{T})}+\|h\|_{L^2_{[0,t]}\dot{H}^l(\mathbb{T})}. \end{equation*} \end{enumerate} \end{lemma}
\subsection{Auxiliary Calculations}\label{appendix section: auxiliary calculations} The following lemma is used to derive a simplification of $\Gamma_1(s,s')$ used in Section \ref{section: a priori estimates of the immersed boundary problem}. \begin{lemma}\label{lemma: simplification of Gamma_1(s,s')} Let $\Gamma_1(s,s')$ be defined by \eqref{eqn: introduce the notation Gamma_1}, i.e. \begin{equation*} \Gamma_1(s,s') = \left(-\partial_{ss'}[G(X(s)-X(s'))]-\frac{Id}{16\pi\sin^2\left(\frac{s'-s}{2}\right)}\right)(X'(s')-X'(s)). \end{equation*} Then with the notations introduced in \eqref{eqn: definition of L M N}, for $\forall\,s,s'\in\mathbb{T}$, $s'\not = s$, we have \eqref{eqn: simplified Gamma order 1}.
\begin{proof} We shall simplify $\Gamma_1(s,s')$ by exploiting cancellations between its terms. Using the notations introduced in \eqref{eqn: definition of L M N}, we calculate that \begin{equation*} \begin{split} &\;4\tau\pi\Gamma_1(s,s')\\
=&\;-\frac{X'(s)\cdot X'(s')}{|L|^2}M + \frac{2(X'(s)\cdot L)(X'(s')\cdot L)}{|L|^4}M + \frac{X'(s)\cdot M}{|L|^2}X'(s')-\frac{2(X'(s)\cdot L)( L\cdot M)}{|L|^4}X'(s')\\
&\;+\frac{X'(s')\cdot M}{|L|^2}X'(s)-\frac{2(X'(s')\cdot M)( X'(s)\cdot L)}{|L|^4}L-\frac{2(L\cdot X'(s'))(L\cdot M)}{|L|^4}X'(s)\\
&\;-\frac{2(L\cdot X'(s'))(X'(s)\cdot M)}{|L|^4}L -\frac{2(L\cdot M)(X'(s)\cdot X'(s'))}{|L|^4}L\\
&\;+\frac{8 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L -\frac{\tau^2}{4\sin^2(\frac{\tau}{2})}M. \end{split} \end{equation*} We simplify the first four terms by plugging in $X'(s') = X'(s)+\tau M$, \begin{equation*} \begin{split}
&\; -\frac{X'(s)\cdot (X'(s)+\tau M)}{|L|^2}M + \frac{2(X'(s)\cdot L)((X'(s)+\tau M)\cdot L)}{|L|^4}M\\
&\; + \frac{X'(s)\cdot M}{|L|^2}(X'(s)+\tau M)-\frac{2(X'(s)\cdot L)( L\cdot M)}{|L|^4}(X'(s)+\tau M)\\
=&\; -\frac{|X'(s)|^2}{|L|^2}M + \frac{2(X'(s)\cdot L)^2}{|L|^4}M + \frac{X'(s)\cdot M}{|L|^2}X'(s)-\frac{2(X'(s)\cdot L)( L\cdot M)}{|L|^4}X'(s). \end{split} \end{equation*} Hence, \begin{equation*} \begin{split}
4\pi\Gamma_1(s,s') =&\; \frac{1}{\tau}\left(-\frac{|X'(s)|^2}{|L|^2}M + \frac{2(X'(s)\cdot L)^2}{|L|^4}M-\frac{\tau^2}{4\sin^2(\frac{\tau}{2})}M\right)\\
&\;+ \frac{1}{\tau}\left(\frac{X'(s)\cdot M}{|L|^2}X'(s)-\frac{2(X'(s)\cdot L)( L\cdot M)}{|L|^4}X'(s)+\frac{X'(s')\cdot M}{|L|^2}X'(s)\right)\\
&\;+\frac{1}{\tau}\left(-\frac{2(X'(s')\cdot M)( X'(s)\cdot L)}{|L|^4}L-\frac{2(L\cdot X'(s'))(L\cdot M)}{|L|^4}X'(s)\right.\\
&\;-\frac{2(L\cdot X'(s'))(X'(s)\cdot M)}{|L|^4}L -\frac{2(L\cdot M)(X'(s)\cdot X'(s'))}{|L|^4}L\\
&\;\left.+\frac{8 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L \right)\\ \triangleq &\;A_1(s,s')+A_2(s,s')+A_3(s,s'). \end{split} \end{equation*} Using $X'(s) = L-\tau N$, we calculate that \begin{equation} \begin{split}
A_1 = &\;\frac{1}{\tau}\left(-\frac{|X'(s)|^2}{|L|^2}M + \frac{2(X'(s)\cdot L)}{|L|^2}M - \frac{2\tau(N\cdot L)(X'(s)\cdot L)}{|L|^4}M -M - \left(\frac{\tau^2}{4\sin^2(\frac{\tau}{2})}-1\right)M\right)\\
= &\;\frac{1}{\tau}\left(-\frac{|X'(s)|^2}{|L|^2}M + \frac{2X'(s)\cdot L}{|L|^2}M -M \right) - \frac{2(N\cdot L)(X'(s)\cdot L)}{|L|^4}M - \left(\frac{\tau^2 - 4\sin^2(\frac{\tau}{2})}{4\tau\sin^2(\frac{\tau}{2})}\right)M\\
=&\;-\frac{1}{\tau}\frac{|X'(s)-L|^2}{|L|^2}M - \frac{2(N\cdot L)(X'(s)\cdot L)}{|L|^4}M - \left(\frac{\tau^2 - 4\sin^2(\frac{\tau}{2})}{4\tau\sin^2(\frac{\tau}{2})}\right)M\\
=&\;\frac{(X'(s)-L)\cdot N}{|L|^2}M - \frac{2(N\cdot L)(X'(s)\cdot L)}{|L|^4}M - \left(\frac{\tau^2 - 4\sin^2(\frac{\tau}{2})}{4\tau\sin^2(\frac{\tau}{2})}\right)M. \end{split} \label{eqn: C 1 beta estimate integrand part 1} \end{equation} Similarly, \begin{equation} \begin{split}
A_2 = &\;\frac{1}{\tau}\left(\frac{X'(s)\cdot M}{|L|^2}X'(s)-\frac{2L\cdot M}{|L|^2}X'(s)+\frac{2\tau(N\cdot L)( L\cdot M)}{|L|^4}X'(s)+\frac{X'(s')\cdot M}{|L|^2}X'(s)\right)\\
= &\;\frac{1}{\tau}\frac{(X'(s)+X'(s')-2L)\cdot M}{|L|^2}X'(s)+\frac{2(N\cdot L)( L\cdot M)}{|L|^4}X'(s)\\
= &\;\frac{(M-2N)\cdot M}{|L|^2}X'(s)+\frac{2(N\cdot L)( L\cdot M)}{|L|^4}X'(s). \end{split} \label{eqn: C 1 beta estimate integrand part 2} \end{equation} For $A_3$, we split the last term into four and look for cancellations with the other four terms. That is, \begin{equation*} \begin{split}
\tau A_3 = &\;\frac{2 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L -\frac{2(X'(s')\cdot M)( X'(s)\cdot L)}{|L|^4}L\\
&\;+\frac{2 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L-\frac{2(L\cdot X'(s'))(L\cdot M)}{|L|^4}X'(s)\\
&\;+\frac{2 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L-\frac{2(L\cdot X'(s'))(X'(s)\cdot M)}{|L|^4}L\\
&\;+\frac{2 (L\cdot M) (L\cdot X'(s')) (L\cdot X'(s))}{|L|^6}L -\frac{2(L\cdot M)(X'(s)\cdot X'(s'))}{|L|^4}L\\
=&\;\frac{2\tau (L\cdot M) (L\cdot (M-N)) (L\cdot X'(s))}{|L|^6}L+\frac{2 (L\cdot M)(L\cdot X'(s))}{|L|^4}L -\frac{2(X'(s')\cdot M)(X'(s)\cdot L)}{|L|^4}L\\
&\;-\frac{2\tau (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4}L-\frac{2(L\cdot X'(s'))(L\cdot M)}{|L|^4}X'(s)\\
&\;-\frac{2\tau (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4}L-\frac{2(L\cdot X'(s'))(X'(s)\cdot M)}{|L|^4}L\\
&\;-\frac{2 \tau(L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4}L -\frac{2(L\cdot M)(X'(s)\cdot X'(s'))}{|L|^4}L\\
=&\;\frac{2\tau (L\cdot M) (L\cdot (M-N)) (L\cdot X'(s))}{|L|^6}L+\frac{2\tau ((N-M)\cdot M)(L\cdot X'(s))}{|L|^4}L\\
&\;-\frac{2\tau (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4}\tau N\\
&\;-\frac{2\tau (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2\tau (N\cdot M) (L\cdot X'(s'))}{|L|^4}L\\
&\;-\frac{2 \tau(L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2\tau (L\cdot M) (N\cdot X'(s'))}{|L|^4}L. \end{split} \end{equation*} Here we used $X'(s) = L-\tau N$ and $X'(s') = L+\tau(M-N)$. Therefore, \begin{equation} \begin{split}
A_3 =&\;\frac{2 (L\cdot M) (L\cdot (M-N)) (L\cdot X'(s))}{|L|^6}L+\frac{2 ((N-M)\cdot M)(L\cdot X'(s))}{|L|^4}L\\
&\;-\frac{6 (L\cdot M) (L\cdot X'(s')) (L\cdot N)}{|L|^6}L+\frac{2 (L\cdot M) (L\cdot X'(s'))}{|L|^4} N\\
&\;+\frac{2 (N\cdot M) (L\cdot X'(s'))}{|L|^4}L+\frac{2 (L\cdot M) (N\cdot X'(s'))}{|L|^4}L. \end{split} \label{eqn: C 1 beta estimate integrand part 3} \end{equation} Combining \eqref{eqn: C 1 beta estimate integrand part 1}, \eqref{eqn: C 1 beta estimate integrand part 2} and \eqref{eqn: C 1 beta estimate integrand part 3}, we obtain the desired simplification \eqref{eqn: simplified Gamma order 1}. \end{proof} \end{lemma}
The following lemma states that the $\eta$-derivative of $u_{X_\eta}(X_{\eta}(s))$ can commute with the integral in the representation of $u_{X_\eta}(X_{\eta}(s))$. It will be used in the proofs of Lemma \ref{lemma: linearization of velocity field around equilibrium} and Lemma \ref{lemma: final representation of the linearization of velocity near equilibrium}.
\begin{lemma}\label{lemma: eta derivative and the integral in u_X commute} Let $\eta\in[0,1]$ and let $X_\eta$ be defined as in \eqref{eqn: defintion of X_eta}. Let \begin{equation*} \begin{split}
u_{X_\eta}(X_\eta(s)) = &\;\frac{1}{4\pi}\int_{\mathbb{T}}\frac{L_{X_\eta}\cdot X_\eta'(s')}{|L_{X_\eta}|^2}M_{X_\eta}-\frac{L_{X_\eta}\cdot M_{X_\eta}}{|L_{X_\eta}|^2}X'_\eta(s') -\frac{X'_\eta(s')\cdot M_{X_\eta}}{|L_{X_\eta}|^2}L_{X_\eta}\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{2L_{X_\eta}\cdot X'_\eta(s')L_{X_\eta}\cdot M_{X_\eta}}{|L_{X_\eta}|^4}L_{X_\eta}\,ds'\\ \triangleq &\; \int_{\mathbb{T}} h_\eta(s,s')\,ds'. \end{split} \end{equation*} Then under the same assumptions as in Lemma \ref{lemma: linearization of velocity field around equilibrium}, for $\forall\,\eta\in[0,1]$, \begin{equation*} \frac{\partial}{\partial \eta}u_{X_\eta}(X_\eta(s)) = \int_{\mathbb{T}} \frac{\partial}{\partial \eta}h_\eta(s,s')\,ds'. \end{equation*} \begin{proof} By definition, \begin{equation} \frac{\partial}{\partial \eta}u_{X_\eta}(X_\eta(s)) = \lim_{\eta'\rightarrow \eta}\int_{\mathbb{T}} \frac{h_{\eta'}(s,s')-h_\eta(s,s')}{\eta'-\eta}\,ds'. \label{eqn: def of eta derivative of u} \end{equation} We shall check the conditions of the dominated convergence theorem to show that the limit and the integral commute. In particular, we need to show that for $\forall\,s\in\mathbb{T}$, there exists an $s'$-integrable function $h(s,s')$, such that for $\forall\,\eta_1,\eta_2\in[0,1]$, $\eta_1\not = \eta_2$, \begin{equation}
|h_{\eta_1}(s,s')-h_{\eta_2}(s,s')|\leq |\eta_1-\eta_2|h(s,s'). \label{eqn: condition of the DCT} \end{equation} By definition, we calculate that \begin{equation} \begin{split}
&\;4\pi|h_{\eta_1}(s,s')-h_{\eta_2}(s,s')|\\
\leq &\;\left|\frac{L_{X_{\eta_1}}\cdot X_{\eta_1}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}-\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_2}}|^2}M_{X_{\eta_2}}\right|+\left|\frac{L_{X_{\eta_1}}\cdot M_{X_{\eta_1}}}{|L_{X_{\eta_1}}|^2}X'_{\eta_1}(s')-\frac{L_{X_{\eta_2}}\cdot M_{X_{\eta_2}}}{|L_{X_{\eta_2}}|^2}X'_{\eta_2}(s')\right|\\
&\;+\left|\frac{X'_{\eta_1}(s')\cdot M_{X_{\eta_1}}}{|L_{X_{\eta_1}}|^2}L_{X_{\eta_1}}-\frac{X'_{\eta_2}(s')\cdot M_{X_{\eta_2}}}{|L_{X_{\eta_2}}|^2}L_{X_{\eta_2}}\right|\\
&\;+\left|\frac{2L_{X_{\eta_1}}\cdot X'_{\eta_1}(s')L_{X_{\eta_1}}\cdot M_{X_{\eta_1}}}{|L_{X_{\eta_1}}|^4}L_{X_{\eta_1}}-\frac{2L_{X_{\eta_2}}\cdot X'_{\eta_2}(s')L_{X_{\eta_2}}\cdot M_{X_{\eta_2}}}{|L_{X_{\eta_2}}|^4}L_{X_{\eta_2}}\right|. \end{split} \label{eqn: difference between h_1 and h_2} \end{equation} For conciseness, we only show estimate of one of the terms above. Thanks to the uniform estimates \eqref{eqn: uniform H2.5 upper bound for the family of configurations near equilibrium} and \eqref{eqn: uniform stretching constant for the family of configurations near equilibrium}, \begin{equation*} \begin{split}
&\;\left|\frac{L_{X_{\eta_1}}\cdot X_{\eta_1}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}-\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_2}}|^2}M_{X_{\eta_2}}\right|\\
\leq &\;\left|\frac{(L_{X_{\eta_1}}-L_{X_{\eta_2}})\cdot X_{\eta_1}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}\right|+\left|\frac{L_{X_{\eta_2}}\cdot (X_{\eta_1}'(s')-X_{\eta_2}'(s'))}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}\right|\\
&\;+\left|\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_1}}|^2}(M_{X_{\eta_1}}-M_{X_{\eta_2}})\right|+\left|\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_2}}\frac{|L_{X_{\eta_2}}|^2-|L_{X_{\eta_1}}|^2}{|L_{X_{\eta_2}}|^2}\right|\\
\leq &\;\left|\frac{(\eta_1-\eta_2)L_{D}\cdot X_{\eta_1}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}\right|+\left|\frac{L_{X_{\eta_2}}\cdot (\eta_1-\eta_2)D'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_1}}\right|\\
&\;+\left|\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_1}}|^2}(\eta_1-\eta_2)M_{D}\right|+\left|\frac{L_{X_{\eta_2}}\cdot X_{\eta_2}'(s')}{|L_{X_{\eta_1}}|^2}M_{X_{\eta_2}}\frac{(L_{X_{\eta_2}}+L_{X_{\eta_1}})\cdot (\eta_1-\eta_2)L_D}{|L_{X_{\eta_2}}|^2}\right|\\
\leq &\;C|\eta_1-\eta_2|(\|L_D\|_{L^{\infty}_{s'}}\|X_{\eta_1}'\|_{L^\infty}|M_{X_{\eta_1}}|+\|L_{X_{\eta_2}}\|_{L^\infty_{s'}}\|D'\|_{L^\infty}|M_{X_{\eta_1}}|\\
&\;\quad +\|L_{X_{\eta_2}}\|_{L^{\infty}_{s'}}\|X_{\eta_2}'\|_{L^{\infty}}|M_{D}|+\|X_{\eta_2}'\|_{L^{\infty}}|M_{X_{\eta_2}}|\|L_D\|_{L^{\infty}_{s'}})\\
\leq &\;C|\eta_1-\eta_2|[\|D'\|_{L^\infty}(\|X_{\eta_1}'\|_{L^\infty}+\|X_{\eta_2}'\|_{L^\infty})(|M_{X_{\eta_1}}|+|M_{X_{\eta_2}}|) + \|X_{\eta_2}'\|^2_{L^\infty}|M_D|]\\
\leq &\;C|\eta_1-\eta_2|(|M_{X_{\eta_1}}|+|M_{X_{\eta_2}}|+|M_D|)\\
\leq &\;C|\eta_1-\eta_2|(|M_{X_*}|+|M_D|), \end{split} \end{equation*} where $C$ is a universal constant. The other terms in \eqref{eqn: difference between h_1 and h_2} can be estimated in a similar fashion. Hence, we obtain that, for $s'\not = s$, \begin{equation*} \begin{split}
|h_{\eta_1}(s,s')-h_{\eta_2}(s,s')|\leq &\;C|\eta_1-\eta_2|(|M_{X_*}|+|M_D|)\\
\leq &\;\frac{C|\eta_1-\eta_2|}{|s'-s|}\int_s^{s'}|X''_*(\omega)|+|D''(\omega)|\,d\omega\\
\leq &\;C|\eta_1-\eta_2|(|\mathcal{M}X''_*(s')|+|\mathcal{M}D''(s')|), \end{split} \end{equation*}
where $\mathcal{M}$ is again the centered Hardy-Littlewood maximal operator on $\mathbb{T}$. Hence \eqref{eqn: condition of the DCT} is proved with $h(s,s') = C(|\mathcal{M}X''_*(s')|+|\mathcal{M}D''(s')|)\in L^1_{s'}(\mathbb{T})$. Note that $h$ is independent of $\eta_1$, $\eta_2$ and $s$. By the dominated convergence theorem, the limit and the integral in \eqref{eqn: def of eta derivative of u} commute, which proves the lemma. \end{proof} \end{lemma}
Finally, we prove Lemma \ref{lemma: final representation of the linearization of velocity near equilibrium}, which calculates the leading term of $u_X(X(s))$ in \eqref{eqn: first approximation by linearization of velocity around equilibrium}. \begin{proof}[Proof of Lemma \ref{lemma: final representation of the linearization of velocity near equilibrium}] This time, we use \eqref{eqn: expression for velocity field} as the representation of $u_{X_{\eta}}(X_\eta(s))$, \begin{equation*}
u_{X_\eta} = \frac{1}{4\pi}\int_{\mathbb{T}} \left(-\ln |X_\eta(s')-X_{\eta}(s)| Id +\frac{(X_\eta(s')-X_{\eta}(s)) \otimes (X_\eta(s')-X_{\eta}(s))}{|X_\eta(s')-X_{\eta}(s)|^2}\right)X_{\eta}''(s') \,ds'. \end{equation*} It has been shown before (see Section \ref{section: proof of contour dynamic formulation}) that, with the well-stretched condition \eqref{eqn: well_stretched assumption}, the integral with logarithmic singularity is well-defined. Hence, by virtue of Lemma \ref{lemma: eta derivative and the integral in u_X commute}, \begin{equation*} \begin{split}
&\;\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\\
=&\;\frac{1}{4\pi}\int_{\mathbb{T}} \left[-\frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2} Id \right.\\
&\; +\frac{(D(s')-D(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}+\frac{(X_*(s')-X_*(s)) \otimes (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}\\
&\;\left. -\frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))\cdot 2(X_*(s')-X_*(s))\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^4}\right]X_{*}''(s')\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\left(-\ln |X_*(s')-X_*(s)| Id +\frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}\right)D''(s')\,ds'. \end{split} \end{equation*} We split $X_{*}''(s')$ into two terms, namely, $X_{*}''(s') = -X_*(s') = -\frac{1}{2}(X_*(s')-X_*(s)) - \frac{1}{2}(X_*(s')+X_*(s))$. \begin{equation} \begin{split}
&\;\left.\frac{\partial}{\partial\eta}\right|_{\eta = 0}u_{X_\eta}(X_\eta(s))\\
=&\;\frac{1}{8\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2} (X_*(s')-X_*(s))\\
&\; -(D(s')-D(s)) -\frac{(X_*(s')-X_*(s)) \cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\\
&\; +\frac{2(X_*(s')-X_*(s))\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\,ds'\\
&\;+\frac{1}{8\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2} (X_*(s')+X_*(s)) \\
&\;-\frac{(X_*(s')+X_*(s))\cdot (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}(D(s')-D(s))\\
&\;-\frac{(X_*(s')+X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\\
&\;+\frac{(X_*(s')-X_*(s))\cdot (X_*(s')+X_*(s)) \cdot 2(X_*(s')-X_*(s))\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^4}(X_*(s')-X_*(s))\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}}\left(-\ln |X_*(s')-X_*(s)| Id +\frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}\right)D''(s')\,ds'\\
=&\;\frac{1}{8\pi}\int_{\mathbb{T}} -(D(s')-D(s)) +\frac{2(X_*(s')-X_*(s))\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\,ds'\\
&\;+\frac{1}{8\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2} (X_*(s')+X_*(s))\,ds'\\
&\;-\frac{1}{8\pi}\int_{\mathbb{T}} \frac{(X_*(s')+X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}} \left(-\ln |X_*(s')-X_*(s)| Id +\frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}\right)D''(s')\,ds'\\
=&\;\frac{1}{4}D(s)+\frac{1}{4\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}X_*(s')\,ds'\\
&\;-\frac{1}{4\pi}\int_{\mathbb{T}} \frac{X_*(s)\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\,ds'\\
&\;+\frac{1}{4\pi}\int_{\mathbb{T}} -\ln |X_*(s')-X_*(s)| D''(s') +\frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}D''(s')\,ds'. \end{split} \label{eqn: representation of the first variation of velocity field} \end{equation} Here we used the fact that $(X_*(s')-X_*(s))\cdot (X_*(s')+X_*(s))=0$ and $\int_{\mathbb{T}}D(s')\,ds' = 0$. Since $X_*(s) = (\cos s, \sin s)^T$, we plug this into \eqref{eqn: representation of the first variation of velocity field} and find that \begin{equation*} \begin{split}
&\;\frac{1}{4\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\cdot (D(s')-D(s))}{|X_*(s')-X_*(s)|^2}X_*(s')- \frac{X_*(s)\cdot(D(s')-D(s))}{|X_*(s')-X_*(s)|^2}(X_*(s')-X_*(s))\,ds'\\
=&\;\frac{1}{4\pi}\int_{\mathbb{T}} \left[\frac{X_*(s')\otimes(X_*(s')-X_*(s)) - (X_*(s')-X_*(s))\otimes X_*(s')}{|X_*(s')-X_*(s)|^2}\right](D(s')-D(s))\,ds'\\
&\; +\frac{1}{4\pi} \int_{\mathbb{T}} \frac{(X_*(s')-X_*(s))\otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}(D(s')-D(s))\,ds'\\ =&\;\frac{1}{4\pi}\int_{\mathbb{T}}\frac{1}{2}\cot\left(\frac{s'-s}{2}\right) \left(\begin{array}{cc}0&1\\-1&0\end{array}\right)(D(s')-D(s))\,ds'\\ &\;+\frac{1}{4\pi}\int_{\mathbb{T}}\left(\begin{array}{cc}\frac{1}{2}(1-\cos(s'+s))&-\frac{1}{2}\sin(s'+s)\\ -\frac{1}{2}\sin(s'+s)&\frac{1}{2}(1+\cos(s'+s))\end{array}\right)(D(s')-D(s))\,ds'\\ =&\;-\frac{1}{4}\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\mathcal{H}D(s)+\frac{1}{4\pi}\int_{\mathbb{T}}-\frac{1}{2}\left(\begin{array}{cc}\cos(s'+s)&\sin(s'+s)\\ \sin(s'+s)&-\cos(s'+s)\end{array}\right)(D(s')-D(s))\,ds'\\ &\; + \frac{1}{8\pi}\int_{\mathbb{T}}(D(s')-D(s))\,ds'\\ =&\;-\frac{1}{4}\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\mathcal{H}D(s)-\frac{1}{8\pi}\int_{\mathbb{T}}\left(\begin{array}{cc}\cos(s'+s)&\sin(s'+s)\\ \sin(s'+s)&-\cos(s'+s)\end{array}\right)D(s')\,ds'-\frac{1}{4}D(s). \end{split} \end{equation*} Here $\mathcal{H}$ is the Hilbert transform on $\mathbb{T}$ \cite{grafakos2008classical}. Moreover, \begin{equation*} \begin{split}
\frac{1}{4\pi}\int_{\mathbb{T}} -\ln |X_*(s')-X_*(s)| D''(s') \,ds' =&\; -\frac{1}{8\pi} \int_{\mathbb{T}} \ln \left[4\sin^2\left(\frac{s'-s}{2}\right)\right] D''(s') \,ds'\\ =&\; \frac{1}{8\pi}\mathrm{p.v.} \int_{\mathbb{T}} \cot\left(\frac{s'-s}{2}\right) D'(s') \,ds'\\ =&\; -\frac{1}{4}\mathcal{H}D'(s), \end{split} \end{equation*} and \begin{equation*} \begin{split}
&\;\frac{1}{4\pi}\int_{\mathbb{T}} \frac{(X_*(s')-X_*(s)) \otimes (X_*(s')-X_*(s))}{|X_*(s')-X_*(s)|^2}D''(s')\,ds'\\ =&\;\frac{1}{4\pi}\int_{\mathbb{T}} \left(\begin{array}{cc}\frac{1}{2}(1-\cos(s'+s))&-\frac{1}{2}\sin(s'+s)\\ -\frac{1}{2}\sin(s'+s)&\frac{1}{2}(1+\cos(s'+s))\end{array}\right)D''(s')\,ds'\\ =&\;-\frac{1}{8\pi}\int_{\mathbb{T}} \left(\begin{array}{cc}\cos(s'+s)&\sin(s'+s)\\ \sin(s'+s)&-\cos(s'+s)\end{array}\right)D''(s')\,ds'\\ =&\;\frac{1}{8\pi}\int_{\mathbb{T}} \left(\begin{array}{cc}\cos(s'+s)&\sin(s'+s)\\ \sin(s'+s)&-\cos(s'+s)\end{array}\right)D(s')\,ds'. \end{split} \end{equation*} In the last line, we used the fact that only the Fourier modes of $D''(s')$ with wave numbers $\pm 1$ contribute to the integral, and thus replacing $D''(s')$ by $-D(s')$ does not change the integral. Combining the above calculations with \eqref{eqn: representation of the first variation of velocity field}, we find the desired result \eqref{eqn: final representation of the linearization of velocity near equilibrium}. \end{proof}
\noindent Fang-Hua Lin\\ Courant Institute\\ 251 Mercer St.\\ New York, NY 10012\\ USA\\ E-mail: [email protected]\\
\noindent Jiajun Tong\\ Courant Institute\\ 251 Mercer St.\\ New York, NY 10012\\ USA\\ E-mail: [email protected]
\end{document} | arXiv |
LOBPCG
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem
$Ax=\lambda Bx,$
for a given pair $(A,B)$ of complex Hermitian or real symmetric matrices, where the matrix $B$ is also assumed positive-definite.
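As an illustration, an implementation of the method is available in SciPy as `scipy.sparse.linalg.lobpcg`. The following sketch (the matrix, block size, and tolerances are arbitrary choices for demonstration) computes the three smallest eigenvalues of a symmetric positive-definite 1-D discrete Laplacian; leaving `B` unset corresponds to $B=I$:

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import lobpcg

n = 100
# A: symmetric positive-definite 1-D discrete Laplacian tridiag(-1, 2, -1)
ones = np.ones(n)
A = spdiags([-ones, 2 * ones, -ones], [-1, 0, 1], n, n).tocsr()

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))  # block of 3 starting vectors

# largest=False requests the smallest eigenpairs; B is not passed, so B = I
eigenvalues, eigenvectors = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)
```

The method is matrix-free in the sense that `A` may also be passed as a `LinearOperator` that only provides matrix-vector products.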
Background
Kantorovich in 1948 proposed calculating the smallest eigenvalue $\lambda _{1}$ of a symmetric matrix $A$ by steepest descent using a direction $r=Ax-\lambda (x)x$ of a scaled gradient of a Rayleigh quotient $\lambda (x)=(x,Ax)/(x,x)$ in a scalar product $(x,y)=x'y$, with the step size computed by minimizing the Rayleigh quotient in the linear span of the vectors $x$ and $w$, i.e. in a locally optimal manner. Samokish[1] proposed applying a preconditioner $T$ to the residual vector $r$ to generate the preconditioned direction $w=Tr$ and derived asymptotic convergence rate bounds as $x$ approaches the eigenvector. D'yakonov suggested[2] spectrally equivalent preconditioning and derived non-asymptotic convergence rate bounds. Block locally optimal multi-step steepest descent for eigenvalue problems was described in.[3] Local minimization of the Rayleigh quotient on the subspace spanned by the current approximation, the current residual and the previous approximation, as well as its block version, appeared in.[4] The preconditioned version was analyzed in [5] and.[6]
Main features[7]
• Matrix-free, i.e. does not require storing the coefficient matrix explicitly, but can access the matrix by evaluating matrix-vector products.
• Factorization-free, i.e. does not require any matrix decomposition even for a generalized eigenvalue problem.
• The costs per iteration and the memory use are competitive with those of the Lanczos method, computing a single extreme eigenpair of a symmetric matrix.
• Linear convergence is theoretically guaranteed and practically observed.
• Accelerated convergence due to direct preconditioning, in contrast to the Lanczos method, including variable and non-symmetric as well as fixed and positive definite preconditioning.
• Allows trivial incorporation of efficient domain decomposition and multigrid techniques via preconditioning.
• Allows warm starts and computes an approximation to the eigenvector on every iteration.
• More numerically stable compared to the Lanczos method, and can operate in low-precision computer arithmetic.
• Easy to implement, with many versions already available.
• Blocking allows utilizing highly efficient matrix-matrix operations, e.g., BLAS 3.
• The block size can be tuned to balance convergence speed vs. computer costs of orthogonalizations and the Rayleigh-Ritz method on every iteration.
Algorithm
Preliminaries: Gradient descent for eigenvalue problems
The method performs an iterative maximization (or minimization) of the generalized Rayleigh quotient
$\rho (x):=\rho (A,B;x):={\frac {x^{T}Ax}{x^{T}Bx}},$
which results in finding largest (or smallest) eigenpairs of $Ax=\lambda Bx.$
The direction of the steepest ascent, which is the gradient, of the generalized Rayleigh quotient is positively proportional to the vector
$r:=Ax-\rho (x)Bx,$
called the eigenvector residual. If a preconditioner $T$ is available, it is applied to the residual and gives the vector
$w:=Tr,$
called the preconditioned residual. Without preconditioning, we set $T:=I$ and so $w:=r$. An iterative method
$x^{i+1}:=x^{i}+\alpha ^{i}T(Ax^{i}-\rho (x^{i})Bx^{i}),$
or, in short,
$x^{i+1}:=x^{i}+\alpha ^{i}w^{i},\,$
$w^{i}:=Tr^{i},\,$
$r^{i}:=Ax^{i}-\rho (x^{i})Bx^{i},$
is known as preconditioned steepest ascent (or descent), where the scalar $\alpha ^{i}$ is called the step size. The optimal step size can be determined by maximizing the Rayleigh quotient, i.e.,
$x^{i+1}:=\arg \max _{y\in span\{x^{i},w^{i}\}}\rho (y)$
(or $\arg \min $ in case of minimizing), in which case the method is called locally optimal.
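For the standard case $T=I$ and $B=I$ (or a supplied preconditioner matrix), one locally optimal preconditioned step can be sketched in a few lines of NumPy. This is an illustrative sketch, not a reference implementation; the function name `lo_psd_step` is our own, and the Rayleigh–Ritz step on $span\{x,w\}$ is done via QR orthonormalization followed by a 2-by-2 symmetric eigenproblem.

```python
import numpy as np

def lo_psd_step(A, x, T=None):
    """One locally optimal preconditioned steepest-descent step for
    A x = lambda x (B = I), minimizing the Rayleigh quotient.
    Illustrative sketch; T is an optional preconditioner matrix."""
    rho = (x @ A @ x) / (x @ x)          # Rayleigh quotient rho(x)
    r = A @ x - rho * x                  # eigenvector residual
    w = T @ r if T is not None else r    # preconditioned residual
    # Rayleigh-Ritz on span{x, w}: orthonormal basis, then a 2x2 eigenproblem
    S, _ = np.linalg.qr(np.column_stack([x, w]))
    theta, C = np.linalg.eigh(S.T @ A @ S)
    y = S @ C[:, 0]                      # Ritz vector of the smallest Ritz value
    return y / np.linalg.norm(y), theta[0]
```

Iterating this step from a random start drives the Rayleigh quotient monotonically down toward the smallest eigenvalue.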
Three-term recurrence
To dramatically accelerate the convergence of the locally optimal preconditioned steepest ascent (or descent), one extra vector can be added to the two-term recurrence relation to make it three-term:
$x^{i+1}:=\arg \max _{y\in span\{x^{i},w^{i},x^{i-1}\}}\rho (y)$
(use $\arg \min $ in case of minimizing). The maximization/minimization of the Rayleigh quotient in a 3-dimensional subspace can be performed numerically by the Rayleigh–Ritz method. Adding more vectors (see, e.g., Richardson extrapolation) does not result in significant acceleration[8] but increases computation costs, so it is not generally recommended.
Numerical stability improvements
As the iterations converge, the vectors $x^{i}$ and $x^{i-1}$ become nearly linearly dependent, resulting in a precision loss and making the Rayleigh–Ritz method numerically unstable in the presence of round-off errors. The loss of precision may be avoided by substituting the vector $x^{i-1}$ with a vector $p^{i}$ that may be further away from $x^{i}$ in the basis of the three-dimensional subspace $span\{x^{i},w^{i},x^{i-1}\}$, while keeping the subspace unchanged and avoiding orthogonalization or any other extra operations.[8] Furthermore, orthogonalizing the basis of the three-dimensional subspace may be needed for ill-conditioned eigenvalue problems to improve stability and attainable accuracy.
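A minimal single-vector version with the three-term subspace might look as follows. For brevity, this sketch keeps the previous iterate $x^{i-1}$ in the basis explicitly and relies on QR orthogonalization for stability, rather than the implicit direction $p^{i}$; function name and tolerances are our own choices, not those of any published implementation.

```python
import numpy as np

def lobpcg_single(A, x0, T=None, tol=1e-9, maxiter=500):
    """Minimal single-vector LOBPCG for the smallest eigenpair of
    A x = lambda x.  Sketch only: keeps the previous iterate x_prev
    in the basis and orthogonalizes the basis with QR."""
    x = x0 / np.linalg.norm(x0)
    x_prev = None
    rho = x @ A @ x
    for _ in range(maxiter):
        r = A @ x - rho * x                        # eigenvector residual
        if np.linalg.norm(r) < tol:
            break
        w = T @ r if T is not None else r          # preconditioned residual
        cols = [x, w] if x_prev is None else [x, w, x_prev]
        S, _ = np.linalg.qr(np.column_stack(cols))
        theta, C = np.linalg.eigh(S.T @ A @ S)     # Rayleigh-Ritz, at most 3x3
        x_prev, x = x, S @ C[:, 0]                 # smallest Ritz pair
        rho = theta[0]
    return rho, x
```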
Krylov subspace analogs
This is a single-vector version of the LOBPCG method, one of the possible generalizations of the preconditioned conjugate gradient linear solvers to the case of symmetric eigenvalue problems.[8] Even in the trivial case $T=I$ and $B=I$ the resulting approximation with $i>3$ will be different from that obtained by the Lanczos algorithm, although both approximations will belong to the same Krylov subspace.
Practical use scenarios
The extreme simplicity and high efficiency of the single-vector version of LOBPCG make it attractive for eigenvalue-related applications under severe hardware limitations, ranging from real-time anomaly detection via spectral clustering and graph partitioning on embedded ASICs or FPGAs to modelling physical phenomena of record computing complexity on exascale TOP500 supercomputers.
Summary
Subsequent eigenpairs can be computed one-by-one via single-vector LOBPCG supplemented with an orthogonal deflation, or simultaneously as a block. In the former approach, imprecisions in already computed approximate eigenvectors additively affect the accuracy of the subsequently computed eigenvectors, thus increasing the error with every new computation. Iterating several approximate eigenvectors together in a block in a locally optimal fashion in the block version of LOBPCG[8] allows fast, accurate, and robust computation of eigenvectors, including those corresponding to nearly-multiple eigenvalues, where the single-vector LOBPCG suffers from slow convergence. The block size can be tuned to balance numerical stability vs. convergence speed vs. computer costs of orthogonalizations and the Rayleigh-Ritz method on every iteration.
Core design
The block approach in LOBPCG replaces the single vectors $x^{i},\,w^{i},$ and $p^{i}$ with block vectors, i.e. matrices $X^{i},\,W^{i},$ and $P^{i}$, where, e.g., every column of $X^{i}$ approximates one of the eigenvectors. All columns are iterated simultaneously, and the next matrix of approximate eigenvectors $X^{i+1}$ is determined by the Rayleigh–Ritz method on the subspace spanned by all columns of the matrices $X^{i},\,W^{i},$ and $P^{i}$. Each column of $W^{i}$ is computed simply as the preconditioned residual for the corresponding column of $X^{i}.$ The matrix $P^{i}$ is determined such that the subspaces spanned by the columns of $[X^{i},\,P^{i}]$ and of $[X^{i},\,X^{i-1}]$ are the same.
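In practice the block iteration is usually called through an existing implementation; for instance, SciPy's `scipy.sparse.linalg.lobpcg` takes the whole block of initial approximate eigenvectors as an n-by-k array (the block size is the number of columns). A minimal sketch; the test matrix and sizes are arbitrary choices of ours:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n, k = 200, 4
A = diags([np.arange(1.0, n + 1.0)], [0])   # sparse SPD test matrix
rng = np.random.default_rng(2)
X = rng.standard_normal((n, k))             # block of k initial vectors

# largest=False asks for the k smallest eigenpairs of A x = lambda x
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
```

The returned `vals` approximate the k smallest eigenvalues and `vecs` holds the corresponding block of Ritz vectors.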
Numerical stability vs. efficiency
The outcome of the Rayleigh–Ritz method is determined by the subspace spanned by all columns of the matrices $X^{i},\,W^{i},$ and $P^{i}$, where a basis of the subspace can theoretically be arbitrary. However, in inexact computer arithmetic the Rayleigh–Ritz method becomes numerically unstable if some of the basis vectors are approximately linearly dependent. Numerical instabilities typically occur, e.g., if some of the eigenvectors in the iterative block have already reached the attainable accuracy for a given computer precision, and are especially prominent in low precision, e.g., single precision, arithmetic.
The art of the many different implementations of LOBPCG is to ensure numerical stability of the Rayleigh–Ritz method at minimal computing costs by choosing a good basis of the subspace. The arguably most stable approach of making the basis vectors orthogonal, e.g., by the Gram–Schmidt process, is also the most computationally expensive. For example, LOBPCG implementations[9][10] utilize the unstable but efficient Cholesky decomposition of the normal matrix, which is performed only on the individual matrices $W^{i}$ and $P^{i}$, rather than on the whole subspace. The constantly increasing amount of computer memory allows typical block sizes nowadays in the $10^{3}-10^{4}$ range, where the percentage of compute time spent on orthogonalizations and the Rayleigh-Ritz method starts dominating.
Locking of previously converged eigenvectors
Block methods for eigenvalue problems that iterate subspaces commonly have some of the iterated eigenvectors converging faster than others, which motivates locking the already converged eigenvectors, i.e., removing them from the iterative loop, in order to eliminate unnecessary computations and improve numerical stability. A simple removal of an eigenvector may likely result in forming its duplicate among the still iterating vectors. The fact that the eigenvectors of symmetric eigenvalue problems are pair-wise orthogonal suggests keeping all iterative vectors orthogonal to the locked vectors.
Locking can be implemented in different ways that maintain numerical accuracy and stability while minimizing the compute costs. For example, LOBPCG implementations[9][10] follow,[8][11] separating hard locking, i.e. a deflation by restriction, where the locked eigenvectors serve as a code input and do not change, from soft locking, where the locked vectors do not participate in the typically most expensive iterative step of computing the residuals, but fully participate in the Rayleigh–Ritz method and thus are allowed to be changed by it.
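SciPy's `lobpcg`, for example, exposes a form of hard locking (deflation by restriction) through its optional argument `Y`: the iterations are performed in the B-orthogonal complement of the column span of `Y`, so converged eigenvectors passed in `Y` are not recomputed. A hedged sketch with a test matrix of our own choosing:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

n = 100
A = diags([np.arange(1.0, n + 1.0)], [0])
rng = np.random.default_rng(3)

# First pass: the two smallest eigenpairs.
vals1, vecs1 = lobpcg(A, rng.standard_normal((n, 2)), largest=False,
                      tol=1e-8, maxiter=200)

# Second pass with the converged vectors hard-locked via Y: the
# iteration stays orthogonal to them and finds the next eigenpairs.
vals2, vecs2 = lobpcg(A, rng.standard_normal((n, 2)), Y=vecs1,
                      largest=False, tol=1e-8, maxiter=200)
```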
Modifications, LOBPCG II
LOBPCG includes all columns of the matrices $X^{i},\,W^{i},$ and $P^{i}$ into the Rayleigh–Ritz method, resulting in an eigenvalue problem of size up to $3k$-by-$3k$ to be solved and up to $9k^{2}$ dot products to be computed at every iteration, where $k$ denotes the block size (the number of columns). For large block sizes $k$ this starts dominating compute and I/O costs and limiting parallelization, where multiple compute devices are running simultaneously.
The original LOBPCG paper[8] describes a modification, called LOBPCG II, to address such a problem by running the single-vector version of the LOBPCG method for each desired eigenpair, with the Rayleigh-Ritz procedure solving $k$ independent 3-by-3 projected eigenvalue problems. The global Rayleigh-Ritz procedure for all $k$ eigenpairs is performed on every iteration, but only on the columns of the matrix $X^{i}$, thus reducing the number of the necessary dot products to $k^{2}+3k$ from $9k^{2}$ and the size of the global projected eigenvalue problem to $k$-by-$k$ from $3k$-by-$3k$ on every iteration. Reference [12] goes further, applying the LOBPCG algorithm to each approximate eigenvector separately, i.e., running the unblocked version of the LOBPCG method for each desired eigenpair for a fixed number of iterations. The Rayleigh-Ritz procedures in these runs only need to solve a set of 3-by-3 projected eigenvalue problems. The global Rayleigh-Ritz procedure for all desired eigenpairs is only applied periodically, at the end of a fixed number of unblocked LOBPCG iterations.
Such modifications may be less robust compared to the original LOBPCG. Individually running branches of the single-vector LOBPCG may not follow continuous iterative paths, instead flipping and creating duplicated approximations to the same eigenvector. The single-vector LOBPCG may be unsuitable for clustered eigenvalues, but separate small-block LOBPCG runs require determining their block sizes automatically during the process of iterations, since the number of the clusters of eigenvalues and their sizes may be unknown a priori.
Convergence theory and practice
LOBPCG by construction is guaranteed[8] to minimize the Rayleigh quotient not slower than the block steepest gradient descent, which has a comprehensive convergence theory. Every eigenvector is a stationary point of the Rayleigh quotient, where the gradient vanishes. Thus, the gradient descent may slow down in a vicinity of any eigenvector; however, it is guaranteed either to converge to the eigenvector with a linear convergence rate or, if this eigenvector is a saddle point, to have the iterative Rayleigh quotient drop below the corresponding eigenvalue and start converging linearly to the next eigenvalue below. The worst value of the linear convergence rate has been determined[8] and depends on the relative gap between the eigenvalue and the rest of the matrix spectrum, and on the quality of the preconditioner, if present.
For a general matrix, there is evidently no way to predict the eigenvectors and thus generate initial approximations that always work well. The iterative solution by LOBPCG may be sensitive to the initial eigenvector approximations, e.g., taking longer to converge and slowing down while passing intermediate eigenpairs. Moreover, in theory, one cannot guarantee convergence necessarily to the smallest eigenpair, although the probability of a miss is zero. A good-quality random Gaussian distribution with zero mean is commonly the default in LOBPCG for generating the initial approximations. To fix the initial approximations, one can select a fixed seed for the random number generator.
In contrast to the Lanczos method, LOBPCG rarely exhibits asymptotic superlinear convergence in practice.
Partial Principal component analysis (PCA) and Singular Value Decomposition (SVD)
LOBPCG can be trivially adapted for computing several largest singular values and the corresponding singular vectors (partial SVD), e.g., for iterative computation of PCA, for a data matrix $D$ with zero mean, without explicitly computing the covariance matrix $D^{T}D$, i.e. in matrix-free fashion. The main calculation is evaluation of the product $D^{T}(DX)$ of the covariance matrix $D^{T}D$ and the block-vector $X$ that iteratively approximates the desired singular vectors. PCA needs the largest eigenvalues of the covariance matrix, while LOBPCG is typically implemented to calculate the smallest ones. A simple work-around is to negate the function, substituting $-D^{T}(DX)$ for $D^{T}(DX)$ and thus reversing the order of the eigenvalues, since LOBPCG does not care if the matrix of the eigenvalue problem is positive definite or not.[9]
LOBPCG for PCA and SVD is implemented in SciPy since revision 1.4.0.[13]
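A minimal matrix-free PCA sketch along these lines uses SciPy's `LinearOperator` to apply $D^{T}(DX)$ without ever forming the covariance matrix; the data and sizes below are illustrative choices of ours, and `largest=True` is used instead of the negation trick:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lobpcg

rng = np.random.default_rng(4)
m, n, k = 500, 40, 3
D = rng.standard_normal((m, n))
D -= D.mean(axis=0)                        # zero-mean data matrix

# Matrix-free covariance action: X -> D^T (D X), never forming D^T D.
cov = LinearOperator((n, n),
                     matvec=lambda x: D.T @ (D @ x),
                     matmat=lambda X: D.T @ (D @ X),
                     dtype=D.dtype)

X0 = rng.standard_normal((n, k))
evals, evecs = lobpcg(cov, X0, largest=True, tol=1e-9, maxiter=500)
# evals approximate the k largest eigenvalues of D^T D,
# i.e. the squared largest singular values of D.
```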
General software implementations
LOBPCG's inventor, Andrew Knyazev, published a reference implementation called Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX)[14][15] with interfaces to PETSc, hypre, and the Parallel Hierarchical Adaptive MultiLevel method (PHAML).[16] Other implementations are available in, e.g., GNU Octave,[17] MATLAB (including for distributed or tiling arrays),[9] Java,[18] Anasazi (Trilinos),[19] SLEPc,[20][21] SciPy,[10] Julia,[22] MAGMA,[23] Pytorch,[24] Rust,[25] OpenMP and OpenACC,[26] CuPy (a NumPy-compatible array library accelerated by CUDA),[27] Google JAX,[28] and NVIDIA AMGX.[29] LOBPCG is implemented,[30] but not included, in TensorFlow.
Applications
Data mining
The software packages scikit-learn and Megaman[31] use LOBPCG to scale spectral clustering[32] and manifold learning[33] via Laplacian eigenmaps to large data sets. NVIDIA has implemented[34] LOBPCG in its nvGRAPH library introduced in CUDA 8. Sphynx,[35] a hybrid distributed- and shared-memory-enabled parallel graph partitioner, and the first graph partitioning tool that works on GPUs in distributed-memory settings, uses spectral clustering for graph partitioning, computing eigenvectors of the Laplacian matrix of the graph using LOBPCG from the Anasazi package.
Material sciences
LOBPCG is implemented in ABINIT[36] (including the CUDA version) and Octopus.[37] It has been used for multi-billion size matrices by Gordon Bell Prize finalists, on the Earth Simulator supercomputer in Japan.[38][39] The Hubbard model for strongly-correlated electron systems, studied to understand the mechanism behind superconductivity, uses LOBPCG to calculate the ground state of the Hamiltonian on the K computer[40] and on multi-GPU systems.[41]
There are MATLAB[42] and Julia[43][44][45] versions of LOBPCG for Kohn-Sham equations and density functional theory (DFT) using the plane wave basis. Recent implementations include TTPY,[46] Platypus‐QM,[47] MFDn,[48] ACE-Molecule,[49] LACONIC.[50]
Mechanics and fluids
LOBPCG from BLOPEX is used for preconditioner setup in the Multilevel Balancing Domain Decomposition by Constraints (BDDC) solver library BDDCML, which is incorporated into OpenFTL (Open Finite element Template Library) and the Flow123d simulator of underground water flow, solute and heat transport in fractured porous media. LOBPCG has been implemented[51] in LS-DYNA.
Maxwell's equations
LOBPCG is one of the core eigenvalue solvers in PYFEMax and the high performance multiphysics finite element software Netgen/NGSolve. LOBPCG from hypre is incorporated into the open source lightweight scalable C++ library for finite element methods MFEM, which is used in many projects, including BLAST, XBraid, VisIt, xSDK, the FASTMath institute in SciDAC, and the co-design Center for Efficient Exascale Discretizations (CEED) in the Exascale Computing Project.
Denoising
An iterative LOBPCG-based approximate low-pass filter can be used for denoising, e.g., to accelerate total variation denoising; see [52].
Image segmentation
Image segmentation via spectral clustering performs a low-dimension embedding using an affinity matrix between pixels, followed by clustering of the components of the eigenvectors in the low dimensional space, e.g., using the graph Laplacian for the bilateral filter. Image segmentation via spectral graph partitioning by LOBPCG with multigrid preconditioning was first proposed in [53] and actually tested in [54] and [55]. The latter approach was later implemented in the Python package scikit-learn,[56] which uses LOBPCG from SciPy with algebraic multigrid preconditioning for solving the eigenvalue problem for the graph Laplacian.
References
1. Samokish, B.A. (1958). "The steepest descent method for an eigenvalue problem with semi-bounded operators". Izvestiya Vuzov, Math. (5): 105–114.
2. D'yakonov, E. G. (1996). Optimization in solving elliptic problems. CRC-Press. p. 592. ISBN 978-0-8493-2872-5.
3. Cullum, Jane K.; Willoughby, Ralph A. (2002). Lanczos algorithms for large symmetric eigenvalue computations. Vol. 1 (Reprint of the 1985 original). Society for Industrial and Applied Mathematics.
4. Knyazev, Andrew V. (1987). "Convergence rate estimates for iterative methods for mesh symmetric eigenvalue problem". Soviet Journal of Numerical Analysis and Mathematical Modelling. 2 (5): 371–396. doi:10.1515/rnam.1987.2.5.371. S2CID 121473545.
5. Knyazev, A. V. (1991). "A Preconditioned Conjugate Gradient Method for Eigenvalue Problems and its Implementation in a Subspace". In Albrecht, J.; Collatz, L.; Hagedorn, P.; Velte, W. (eds.). Numerical Treatment of Eigenvalue Problems Vol. 5. International Series of Numerical Mathematics. Vol. 96. pp. 143–154. doi:10.1007/978-3-0348-6332-2_11. ISBN 978-3-0348-6334-6.
6. Knyazev, Andrew V. (1998). "Preconditioned eigensolvers - an oxymoron?". Electronic Transactions on Numerical Analysis. 7: 104–123.
7. Knyazev, Andrew (2017). "Recent implementations, applications, and extensions of the Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG)". arXiv:1708.08354 [cs.NA].
8. Knyazev, Andrew V. (2001). "Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method". SIAM Journal on Scientific Computing. 23 (2): 517–541. Bibcode:2001SJSC...23..517K. doi:10.1137/S1064827500366124. S2CID 7077751.
9. MATLAB File Exchange function LOBPCG
10. SciPy sparse linear algebra function lobpcg
11. Knyazev, A. (2004). Hard and soft locking in iterative methods for symmetric eigenvalue problems. Eighth Copper Mountain Conference on Iterative Methods March 28 - April 2, 2004. doi:10.13140/RG.2.2.11794.48327.
12. Vecharynski, E.; Yang, C.; Pask, J.E. (2015). "A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a hermitian matrix". J. Comput. Phys. 290: 73–89. arXiv:1407.7506. Bibcode:2015JCoPh.290...73V. doi:10.1016/j.jcp.2015.02.030. S2CID 43741860.
13. LOBPCG for SVDS in SciPy
14. GitHub BLOPEX
15. Knyazev, A. V.; Argentati, M. E.; Lashuk, I.; Ovtchinnikov, E. E. (2007). "Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in Hypre and PETSc". SIAM Journal on Scientific Computing. 29 (5): 2224. arXiv:0705.2626. Bibcode:2007arXiv0705.2626K. doi:10.1137/060661624. S2CID 266.
16. PHAML BLOPEX interface to LOBPCG
17. Octave linear-algebra function lobpcg
18. Java LOBPCG at Google Code
19. Anasazi Trilinos LOBPCG at GitHub
20. Native SLEPc LOBPCG
21. SLEPc BLOPEX interface to LOBPCG
22. Julia LOBPCG at GitHub
23. Anzt, Hartwig; Tomov, Stanimir; Dongarra, Jack (2015). "Accelerating the LOBPCG method on GPUs using a blocked sparse matrix vector product". Proceedings of the Symposium on High Performance Computing (HPC '15). Society for Computer Simulation International, San Diego, CA, USA. HPC '15: 75–82. ISBN 9781510801011.
24. Pytorch LOBPCG at GitHub
25. Rust LOBPCG at GitHub
26. Rabbi, Fazlay; Daley, Christopher S.; Aktulga, Hasan M.; Wright, Nicholas J. (2019). Evaluation of Directive-based GPU Programming Models on a Block Eigensolver with Consideration of Large Sparse Matrices (PDF). Seventh Workshop on Accelerator Programming Using Directives, SC19: The International Conference for High Performance Computing, Networking, Storage and Analysis.
27. CuPy: A NumPy-compatible array library accelerated by CUDA LOBPCG at GitHub
28. Google JAX LOBPCG initial merge at GitHub
29. NVIDIA AMGX LOBPCG at GitHub
30. Rakhuba, Maxim; Novikov, Alexander; Oseledets, Ivan (2019). "Low-rank Riemannian eigensolver for high-dimensional Hamiltonians". Journal of Computational Physics. 396: 718–737. arXiv:1811.11049. Bibcode:2019JCoPh.396..718R. doi:10.1016/j.jcp.2019.07.003. S2CID 119679555.
31. McQueen, James; et al. (2016). "Megaman: Scalable Manifold Learning in Python". Journal of Machine Learning Research. 17 (148): 1–5. Bibcode:2016JMLR...17..148M.
32. "Sklearn.cluster.SpectralClustering — scikit-learn 0.22.1 documentation".
33. "Sklearn.manifold.spectral_embedding — scikit-learn 0.22.1 documentation".
34. Naumov, Maxim (2016). "Fast Spectral Graph Partitioning on GPUs". NVIDIA Developer Blog.
35. "Graph partitioning with Sphynx".
36. ABINIT Docs: WaveFunction OPTimisation ALGorithm
37. Octopus Developers Manual:LOBPCG
38. Yamada, S.; Imamura, T.; Machida, M. (2005). 16.447 TFlops and 159-Billion-dimensional Exact-diagonalization for Trapped Fermion-Hubbard Model on the Earth Simulator. Proc. ACM/IEEE Conference on Supercomputing (SC'05). p. 44. doi:10.1109/SC.2005.1. ISBN 1-59593-061-2.
39. Yamada, S.; Imamura, T.; Kano, T.; Machida, M. (2006). Gordon Bell finalists I—High-performance computing for exact numerical approaches to quantum many-body problems on the earth simulator. Proc. ACM/IEEE conference on Supercomputing (SC '06). p. 47. doi:10.1145/1188455.1188504. ISBN 0769527000.
40. Yamada, S.; Imamura, T.; Machida, M. (2018). High Performance LOBPCG Method for Solving Multiple Eigenvalues of Hubbard Model: Efficiency of Communication Avoiding Neumann Expansion Preconditioner. Asian Conference on Supercomputing Frontiers. Yokota R., Wu W. (eds) Supercomputing Frontiers. SCFA 2018. Lecture Notes in Computer Science, vol 10776. Springer, Cham. pp. 243–256. doi:10.1007/978-3-319-69953-0_14.
41. Yamada, S.; Imamura, T.; Machida, M. (2022). High performance parallel LOBPCG method for large Hamiltonian derived from Hubbard model on multi-GPU systems. SupercomputingAsia (SCA).
42. Yang, C.; Meza, J. C.; Lee, B.; Wang, L.-W. (2009). "KSSOLV - a MATLAB toolbox for solving the Kohn-Sham equations". ACM Trans. Math. Softw. 36 (2): 1–35. doi:10.1145/1499096.1499099. S2CID 624897.
43. Fathurrahman, Fadjar; Agusta, Mohammad Kemal; Saputro, Adhitya Gandaryus; Dipojono, Hermawan Kresno (2020). "PWDFT.jl: A Julia package for electronic structure calculation using density functional theory and plane wave basis". Computer Physics Communications. 256: 107372. Bibcode:2020CoPhC.25607372F. doi:10.1016/j.cpc.2020.107372. S2CID 219517717.
44. Plane wave density functional theory (PWDFT) in Julia
45. Density-functional toolkit (DFTK). Plane wave density functional theory in Julia
46. Rakhuba, Maxim; Oseledets, Ivan (2016). "Calculating vibrational spectra of molecules using tensor train decomposition". J. Chem. Phys. 145 (12): 124101. arXiv:1605.08422. Bibcode:2016JChPh.145l4101R. doi:10.1063/1.4962420. PMID 27782616. S2CID 44797395.
47. Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki (2016). "Development of massive multilevel molecular dynamics simulation program, platypus (PLATform for dYnamic protein unified simulation), for the elucidation of protein functions". J. Comput. Chem. 37 (12): 1125–1132. doi:10.1002/jcc.24318. PMC 4825406. PMID 26940542.
48. Shao, Meiyue; et al. (2018). "Accelerating Nuclear Configuration Interaction Calculations through a Preconditioned Block Iterative Eigensolver". Computer Physics Communications. 222 (1): 1–13. arXiv:1609.01689. Bibcode:2018CoPhC.222....1S. doi:10.1016/j.cpc.2017.09.004. S2CID 13996642.
49. Kang, Sungwoo; et al. (2020). "ACE-Molecule: An open-source real-space quantum chemistry package". The Journal of Chemical Physics. 152 (12): 124110. Bibcode:2020JChPh.152l4110K. doi:10.1063/5.0002959. PMID 32241122. S2CID 214768088.
50. Baczewski, Andrew David; Brickson, Mitchell Ian; Campbell, Quinn; Jacobson, Noah Tobias; Maurer, Leon (2020-09-01). A Quantum Analog Coprocessor for Correlated Electron Systems Simulation (Report). United States: Sandia National Lab. (SNL-NM). doi:10.2172/1671166. OSTI 1671166.
51. A Survey of Eigen Solution Methods in LS-DYNA®. 15th International LS-DYNA Conference, Detroit. 2018.
52. Knyazev, A.; Malyshev, A. (2015). Accelerated graph-based spectral polynomial filters. 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA. pp. 1–6. arXiv:1509.02468. doi:10.1109/MLSP.2015.7324315.
53. Knyazev, Andrew V. (2003). Boley; Dhillon; Ghosh; Kogan (eds.). Modern preconditioned eigensolvers for spectral image segmentation and graph bisection. Clustering Large Data Sets; Third IEEE International Conference on Data Mining (ICDM 2003) Melbourne, Florida: IEEE Computer Society. pp. 59–62.
54. Knyazev, Andrew V. (2006). Multiscale Spectral Image Segmentation Multiscale preconditioning for computing eigenvalues of graph Laplacians in image segmentation. Fast Manifold Learning Workshop, WM Williamburg, VA. doi:10.13140/RG.2.2.35280.02565.
55. Knyazev, Andrew V. (2006). Multiscale Spectral Graph Partitioning and Image Segmentation. Workshop on Algorithms for Modern Massive Datasets Stanford University and Yahoo! Research.
56. "Spectral Clustering — scikit-learn documentation".
External links
• LOBPCG in MATLAB
• LOBPCG in Octave
• LOBPCG in SciPy
• LOBPCG in Java at Google Code
• LOBPCG in Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) at GitHub and archived at Google Code
\begin{document}
\title{Distributed Algorithm for Optimal Power Flow on Unbalanced Multiphase Distribution Networks}
\author{Qiuyu Peng and Steven H. Low \thanks{*This work was supported by ARPA-E grant DE-AR0000226, Los Alamos National Lab through an DoE grant DE-AC52-06NA25396, DTRA through grant HDTRA 1-15-1-0003 and Skotech.} \thanks{Qiuyu Peng is with the Electrical Engineering Department and Steven H. Low is with the Computing and Mathematical Sciences and the Electrical Engineering Departments, California Institute of Technology, Pasadena, CA 91125, USA. {\small \tt \{qpeng, slow\}@caltech.edu}} }
\maketitle
\begin{abstract}
The optimal power flow (OPF) problem is fundamental in power distribution network control and operation, and underlies many important applications such as volt/var control and demand response. Large-scale, highly volatile renewable penetration in distribution networks calls for real-time feedback control, and hence the need for distributed solutions for the OPF problem. Distribution networks are inherently unbalanced, and most of the existing distributed solutions for balanced networks do not apply. In this paper we propose a solution for unbalanced distribution networks. Our distributed algorithm is based on the alternating direction method of multipliers (ADMM). Unlike existing approaches that require solving semidefinite programming problems in each ADMM macro-iteration, we exploit the problem structure and decompose the OPF problem in such a way that the subproblems in each ADMM macro-iteration reduce to either a closed form solution or an eigen-decomposition of a $6\times 6$ hermitian matrix, which significantly reduces the convergence time. We present simulations on the IEEE 13, 34, 37 and 123 bus unbalanced distribution networks to illustrate the scalability and optimality of the proposed algorithm. \end{abstract}
\begin{IEEEkeywords} Power Distribution, Distributed Algorithms, Nonlinear systems, Power system control. \end{IEEEkeywords}
\section{Introduction}
The optimal power flow (OPF) problem seeks to minimize a certain objective, such as power loss and generation cost, subject to the power flow physical laws and operational constraints. It is a fundamental problem that underpins many distribution system operation and planning problems, such as economic dispatch, unit commitment, state estimation, volt/var control and demand response. Most algorithms proposed in the literature are centralized and meant for applications in today's energy management systems that centrally schedule a relatively small number of generators. The increasing penetration of highly volatile renewable energy sources in distribution systems requires simultaneously optimizing (possibly in real-time) the operation of a large number of intelligent endpoints. A centralized approach will not scale because of its computation and communication overhead, and we need to rely on distributed solutions.
Various distributed algorithms for the OPF problem have been proposed in the literature. Some early distributed algorithms, including \cite{kim1997coarse,baldick1999fast}, do not deal with the non-convexity issue of OPF, and convergence is not guaranteed for those algorithms. Recently, convex relaxation has been applied to convexify the OPF problem, e.g. semi-definite programming (SDP) relaxation \cite{Bai2008,lavaei2012zero,zhang2011geometry,gan2014convex} and second order cone programming (SOCP) relaxation \cite{Jabr2006,Farivar-2013-BFM-TPS,gan2015exact}. When an optimal solution of the original OPF problem can be recovered from any optimal solution of the SOCP/SDP relaxation, we say the relaxation is \emph{exact}. It is shown that both SOCP and SDP relaxations are exact for radial networks using standard IEEE test networks and many practical networks \cite{lavaei2012zero,dall2013distributed,Farivar-2013-BFM-TPS,gan2014convex}. This is important because almost all distribution systems are radial. Thus, optimization decompositions can be applied to the relaxed OPF problem with guaranteed convergence, e.g. the dual decomposition method \cite{lam2012distributed,lam2012optimal}, the method of multipliers \cite{devane2013stability,li2012demand}, and the alternating direction method of multipliers (ADMM) \cite{dall2013distributed,kraning2013dynamic,sun2013fully}.
There are at least two challenges in designing distributed algorithm that solves the OPF problem on distribution systems. First, distribution systems are inherently unbalanced because of the unequal loads on each phase \cite{kersting2012distribution}. Most of the existing approaches \cite{lam2012distributed,lam2012optimal,devane2013stability,li2012demand,kraning2013dynamic,sun2013fully,peng2014distributed} are designed for balanced networks and do not apply to unbalanced networks.
Second, the convexified OPF problem on unbalanced networks consists of semi-definite constraints. To the best of our knowledge, all the existing distributed solutions \cite{dall2013distributed,kim1997coarse,baldick1999fast} require solving SDPs within each macro-iteration. The SDPs are computationally intensive to solve, and those existing algorithms take a significantly long time to converge even for moderate size networks.
In this paper, we address those two challenges by developing an \emph{efficient} distributed algorithm for the OPF problem on \emph{unbalanced} networks based on the alternating direction method of multipliers (ADMM). The advantages of the proposed algorithm are twofold: 1) instead of relying on an SDP optimization solver to solve the optimization subproblems in each iteration as existing approaches do, we exploit the problem structure and decompose the problem in such a way that the subproblems in each ADMM macro-iteration reduce to either a closed form solution or an eigen-decomposition of a $6\times 6$ hermitian matrix, which greatly speeds up the convergence; 2) communication is only required between adjacent buses.
We demonstrate the scalability of the proposed algorithm on standard IEEE test networks \cite{kersting1991radial}. The proposed algorithm converges within $3$ seconds on the IEEE 13-, 34-, 37-, and 123-bus systems. To show the advantage of the proposed procedure for solving each subproblem, we also compare the computation time for solving a subproblem with our algorithm against an off-the-shelf SDP solver (CVX \cite{grant2008cvx}): our solver requires on average $3.8\times 10^{-3}$s, while CVX requires on average $0.58$s.
A preliminary version of this paper appeared in \cite{peng2015distributed}. We improve the algorithm in \cite{peng2015distributed} in the following aspects: 1) We consider more general forms of the objective function and power injection region so that the algorithm applies to more applications; in particular, we provide a sufficient condition, which holds in practice, for the existence of efficient solutions to the optimization subproblems. 2) Voltage magnitude constraints, which are crucial to distribution system operations, are included in the new algorithm. 3) We study the impact of network topology on the rate of convergence in simulations.
The rest of the paper is structured as follows. The OPF problem on an unbalanced network is defined in section \ref{sec:model}. In section \ref{sec:alg}, we develop our distributed algorithm based on ADMM. In section \ref{sec:case}, we
test its scalability on standard IEEE distribution systems and study the impact of network topologies on the rate of convergence.
We conclude this paper in section \ref{sec:conclusion}.
\section{Problem Formulation}\label{sec:model}
In this section, we define the optimal power flow (OPF) problem on unbalanced radial distribution networks and review how to solve it through SDP relaxation.
We denote the set of complex numbers by $\mathbb{C}$, the set of $n$-dimensional complex vectors by $\mathbb{C}^n$, and the set of $m\times n$ complex matrices by $\mathbb{C}^{m\times n}$. The set of Hermitian (positive semidefinite) matrices is denoted by $\mathbb{S}$ ($\mathbb{S}_+$). The Hermitian transpose of a vector (matrix) $x$ is denoted by $x^H$.
The trace of a square matrix $x\in \mathbb{C}^{n\times n}$ is denoted by $tr(x):=\sum_{i=1}^n x_{ii}$. The inner product of two matrices (vectors) $x,y\in\mathbb{C}^{m\times n}$ is denoted by $\langle x,y\rangle:=\mathbf{Re}(tr(x^Hy))$. The Frobenius (Euclidean) norm of a matrix (vector) $x\in\mathbb{C}^{m\times n}$ is defined as $\|x\|_2:=\sqrt{\langle x,x\rangle}$. Given $x\in\mathbb{C}^{n\times n}$, let diag$(x)\in \mathbb{C}^{n\times 1}$ denote the vector composed of $x$'s diagonal elements.
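These definitions translate directly into code; below is a minimal \texttt{numpy} sketch of the inner product and Frobenius norm (the helper names are ours):

```python
import numpy as np

def inner(x, y):
    """Inner product <x, y> := Re(tr(x^H y)) of complex matrices."""
    return float(np.real(np.trace(x.conj().T @ y)))

def fro_norm(x):
    """Frobenius norm ||x||_2 := sqrt(<x, x>)."""
    return np.sqrt(inner(x, x))

x = np.array([[1 + 1j, 0], [2j, 3]])
y = np.array([[0, 1j], [1, 1 - 1j]])
print(inner(x, y))   # 3.0 -- real even for complex arguments
print(fro_norm(x))   # sqrt(15), the root of the summed squared magnitudes
```

Taking the real part of the trace makes $\langle x,y\rangle$ a real inner product on $\mathbb{C}^{m\times n}$ viewed as a real vector space.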
\subsection{Branch flow model}
We model a distribution network by a \emph{directed} tree graph $\mathcal{T} := (\mathcal{N}, \mathcal{E})$ where
$\mathcal{N}:=\{0,\ldots,n\}$ represents the set of buses and $\mathcal{E}$ represents the set
of distribution lines connecting the buses in $\mathcal{N}$. Index the root of the tree by $0$ and let $\mathcal{N}_+:=\mathcal{N}\setminus\{0\}$ denote the other buses. Each bus $i\in\mathcal{N}_+$ has a unique ancestor $A_i$, and each bus $i$ has a set of children buses, denoted by $C_i$. We adopt the graph orientation where every line points towards the root; each directed line thus connects a bus $i$ and its unique ancestor $A_i$. We hence label the lines by $\mathcal{E}:=\{1,\ldots,n\}$, where each $i\in\mathcal{E}$ denotes the line from $i$ to $A_i$. Note that $\mathcal{E}=\mathcal{N}_+$, and we will use $\mathcal{N}_+$ to represent the set of lines for convenience.
\begin{figure}
\caption{Notations of a two bus network.}
\label{fig:notation}
\end{figure}
Let $a,b,c$ denote the three phases of the network. For each bus $i\in\mathcal{N}$, let $\Phi_i\subseteq\{a,b,c\}$ denote its set of phases. In a typical distribution network, the set of phases of bus $i$ is a subset of the phases of its parent and a superset of the phases of its children, i.e. $\Phi_i\subseteq\Phi_{A_i}$ and $\Phi_j\subseteq \Phi_{i}$ for $j\in C_i$. On each phase $\phi\in\Phi_i$, let $V_i^{\phi}\in\mathbb{C}$ denote the complex voltage and $s_i^{\phi}:=p_i^{\phi}+jq_i^{\phi}$ the complex power injection. Denote $V_i:=(V_i^\phi,\phi\in\Phi_i)\in\mathbb{C}^{|\Phi_i|}$, $s_i:=(s_i^\phi,\phi\in\Phi_i)\in\mathbb{C}^{|\Phi_i|}$ and $v_i:=V_iV_i^H\in\mathbb{C}^{|\Phi_i|\times|\Phi_i|}$. For each line $i\in\mathcal{N}_+$ connecting bus $i$ and its ancestor $A_i$, the set of phases is $\Phi_i\cap\Phi_{A_i}=\Phi_i$ since $\Phi_i\subseteq\Phi_{A_i}$. On each phase $\phi\in\Phi_i$, let $I_i^{\phi}\in\mathbb{C}$ denote the complex branch current. Denote $I_i:=(I_i^\phi,\phi\in\Phi_i)\in\mathbb{C}^{|\Phi_i|}$, $\ell_i:=I_iI_i^H\in\mathbb{C}^{|\Phi_i|\times|\Phi_i|}$ and $S_i:=V_iI_i^H\in\mathbb{C}^{|\Phi_i|\times|\Phi_i|}$. These notations are summarized in Fig. \ref{fig:notation}. A variable without a subscript denotes the set of variables with appropriate components, as summarized below.
\begin{center}
\begin{tabular}{|c|c|} \hline $v:=(v_i,i\in\mathcal{N})$ & $s:=(s_i,i\in\mathcal{N})$ \\ \hline $\ell:=(\ell_i,i\in\mathcal{N}_+)$ & $S:=(S_i,i\in\mathcal{N}_+)$\\ \hline \end{tabular} \end{center}
The branch flow model was first proposed in \cite{Baran1989a,Baran1989b} for balanced radial networks. It has better numerical stability than the bus injection model and has been advocated for the design and operation of radial distribution networks \cite{Farivar-2013-BFM-TPS, gan2014exact,li2012demand,peng2014distributed}. It was first generalized to unbalanced radial networks in \cite{gan2014convex}, using the set of variables $(v,s,\ell,S)$. Given a radial network $\mathcal{T}$, the branch flow model for unbalanced networks is defined by: \begin{subequations} \label{eq:bfm} \begin{align} & \mathcal{P}_i(v_{A_i})=v_i-z_iS_i^H-S_iz_i^H+z_i\ell_iz_i^H & i\in\mathcal{N}_+ \label{eq:bfm1}\\ & s_i=-\text{diag}\left(\sum_{j\in C_i}\mathcal{P}_i(S_j-z_j\ell_j)-S_i\right) & i\in\mathcal{N} \label{eq:bfm2}\\ & \begin{pmatrix} v_i & S_i\\ S_i^H & \ell_i \end{pmatrix}\in \mathbb{S}_+ & i\in\mathcal{N}_+ \label{eq:bfm3}\\ & \text{rank}\begin{pmatrix} v_i & S_i\\ S_i^H & \ell_i \end{pmatrix}=1 & i\in\mathcal{N}_+ \label{eq:bfm4} \end{align} \end{subequations} where $\mathcal{P}_i(v_{A_i})$ denotes the projection of $v_{A_i}$ onto the set of phases of bus $i$, and $\mathcal{P}_i(S_j-z_j\ell_j)$ denotes lifting $S_j-z_j\ell_j$ to the set of phases $\Phi_i$, filling the missing phases with $0$; e.g. if $\Phi_{A_i}=\{a,b,c\}$, $\Phi_i=\{a,b\}$ and $\Phi_j=\{a\}$, then \begin{eqnarray*} \mathcal{P}_i(v_{A_i})&:=&\begin{pmatrix} v_{A_i}^{aa} & v_{A_i}^{ab} \\ v_{A_i}^{ba} & v_{A_i}^{bb} \end{pmatrix} \\ \mathcal{P}_i(S_j-z_j\ell_j)&:=&\begin{pmatrix} S_{j}^{aa}-z_j^{aa}\ell_j^{aa} & 0 \\ 0 & 0 \end{pmatrix} \end{eqnarray*}
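The maps $\mathcal{P}_i(\cdot)$ are simple index selections on the phase sets; a \texttt{numpy} sketch of the example above (the helper names \texttt{project} and \texttt{lift} are ours, not from \cite{gan2014convex}):

```python
import numpy as np

def project(m, phases_from, phases_to):
    """P_i(.): keep only the rows/columns of phases_to (a subset of phases_from)."""
    idx = [phases_from.index(p) for p in phases_to]
    return m[np.ix_(idx, idx)]

def lift(m, phases_from, phases_to):
    """Embed a matrix on phases_from into phases_to, zero-filling missing phases."""
    out = np.zeros((len(phases_to), len(phases_to)), dtype=complex)
    idx = [phases_to.index(p) for p in phases_from]
    out[np.ix_(idx, idx)] = m
    return out

# Example from the text: Phi_Ai = {a,b,c}, Phi_i = {a,b}, Phi_j = {a}.
v_Ai = np.arange(9, dtype=complex).reshape(3, 3)
print(project(v_Ai, ['a', 'b', 'c'], ['a', 'b']))       # 2x2 top-left block
print(lift(np.array([[5.0 + 0j]]), ['a'], ['a', 'b']))  # zero-padded to 2x2
```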
Given a vector $(v,s,\ell,S)$ that satisfies \eqref{eq:bfm}, it is proved in \cite{gan2014convex} that the bus voltages $V_i$ and branch currents $I_i$ can be uniquely determined if the network is a tree. Hence the model \eqref{eq:bfm} is equivalent to a full unbalanced AC power flow model; see \cite[Section IV]{gan2014convex} for details.
\subsection{OPF and SDP relaxation}\label{sec:opfsdp}
The OPF problem seeks to optimize a certain objective, e.g. total power loss or generation cost, subject to the unbalanced power flow equations \eqref{eq:bfm} and various operational constraints. We consider an objective function of the following form: \begin{eqnarray}\label{eq:objective} F(s):=\sum_{i\in\mathcal{N}}f_i(s_i):=\sum_{i\in\mathcal{N}}\sum_{\phi\in\Phi_i}f_i^\phi(s_i^\phi). \end{eqnarray} For instance, \begin{itemize} \item to minimize total line loss, we can set for each $\phi\in\Phi_i$, $i\in\mathcal{N}$, \begin{eqnarray}\label{eq:opf::unbalance::I2} f_i^\phi(s_i^\phi) = p_i^\phi. \end{eqnarray} \item to minimize generation cost, we can set for each $i\in\mathcal{N}$, \begin{eqnarray}\label{eq:opf::unbalance::I1} f_i^\phi(s_i^\phi)=\frac{\alpha_i^\phi}{2} (p_i^\phi)^2+\beta_i^\phi p_i^\phi, \end{eqnarray} where $\alpha_i^\phi,\beta_i^\phi\geq 0$ depend on the load type at bus $i$: $\alpha_i^\phi=\beta_i^\phi=0$ if there is no generator at bus $i$, while for a generator bus $i$ the coefficients $\alpha_i^\phi,\beta_i^\phi$ depend on the characteristics of the generator. \end{itemize}
For each bus $i\in\mathcal{N}$, there are two operational constraints on each phase $\phi\in\Phi_i$. First, the power injection $s_i^\phi$ is constrained to lie in an injection region $\mathcal{I}_i^\phi$, i.e. \begin{eqnarray}\label{eq:operation1} s_i^\phi\in\mathcal{I}_i^\phi \ \ \text{for } \phi\in\Phi_i \text{ and }i\in\mathcal{N} \end{eqnarray}
The feasible power injection region $\mathcal{I}_i^\phi$ is determined by the controllable loads attached to phase $\phi$ on bus $i$. Some common controllable loads are:
\begin{itemize} \item For a controllable load whose real power can vary within $[\underline p_i,\overline p_i]$ and whose reactive power can vary within $[\underline q_i,\overline q_i]$, the injection region is \begin{subequations} \begin{eqnarray}\label{eq:S2} \mathcal{I}_i^\phi=\{p+\mathbf{i}q\mid p\in[ \underline p_i, \overline p_i], q\in[\underline q_i,\overline q_i] \}\subseteq \mathbb{C}. \end{eqnarray} For instance, the power injection on each phase $\phi$ of the substation bus $0$ is unconstrained, i.e. $\underline p_i=\underline q_i=-\infty$ and $ \overline p_i= \overline q_i=\infty$. \item For a solar panel connected to the grid through an inverter with nameplate capacity $\overline s_i^\phi$, the injection region is \begin{eqnarray}\label{eq:S1} \mathcal{I}_i^\phi=\{p+\mathbf{i}q\mid p\geq 0, p^2+q^2\leq (\overline s_i^\phi)^2\}\subseteq\mathbb{C}. \end{eqnarray} \end{subequations} \end{itemize}
Second, the voltage magnitude needs to be maintained within a prescribed range. Note that the diagonal elements of $v_i$ are the squared voltage magnitudes on the phases $\phi\in\Phi_i$. Thus the constraints can be written as \begin{eqnarray}\label{eq:operation2} \underline v_i^\phi \leq v_i^{\phi\phi}\leq \overline v_i^\phi \ \ \phi\in\Phi_i, \ i\in\mathcal{N}, \end{eqnarray} where $v_i^{\phi\phi}$ denotes the diagonal element of $v_i$ corresponding to phase $\phi$. Typically the voltage magnitude at the substation bus is assumed to be fixed at a prescribed value, i.e. $\underline v_0^{\phi}=\overline v_0^{\phi}$ for $\phi\in\Phi_0$. At the other load buses $i\in\mathcal{N}_+$, the voltage magnitude is typically allowed to deviate by $5\%$ from its nominal value, i.e. $\underline v_i^{\phi}=0.95^2$ and $\overline v_i^{\phi}=1.05^2$ for $\phi\in\Phi_i$.
To summarize, the OPF problem for unbalanced radial distribution networks is: \begin{eqnarray} \text{\bf OPF: }\min && \sum_{i\in\mathcal{N}}\sum_{\phi\in\Phi_i}f_i^\phi(s_i^\phi)\nonumber\\ \mathrm{over} && v,s,S,\ell \label{eq:opf}\\ \mathrm{s.t.} && \eqref{eq:bfm} \text{ and } \eqref{eq:operation1}-\eqref{eq:operation2}\nonumber \end{eqnarray}
The OPF problem \eqref{eq:opf} is nonconvex due to the rank constraint \eqref{eq:bfm4}. In \cite{gan2014convex}, an SDP relaxation for \eqref{eq:opf} is obtained by removing the rank constraint \eqref{eq:bfm4}, resulting in a semidefinite program (SDP): \begin{eqnarray} \text{\bf ROPF: }\min && \sum_{i\in\mathcal{N}}\sum_{\phi\in\Phi_i}f_i^\phi(s_i^\phi)\nonumber\\ \mathrm{over} && v,s,S,\ell \label{eq:ropf}\\ \mathrm{s.t.} && \eqref{eq:bfm1}-\eqref{eq:bfm3} \text{ and } \eqref{eq:operation1}-\eqref{eq:operation2}\nonumber \end{eqnarray}
Clearly the relaxation ROPF \eqref{eq:ropf} provides a lower bound for the original OPF problem \eqref{eq:opf} since the original feasible set is enlarged. The relaxation is called \emph{exact} if every optimal solution of ROPF satisfies the rank constraint \eqref{eq:bfm4} and hence is also optimal for the original OPF problem. It is shown empirically in \cite{gan2014convex} that the relaxation is exact for all the tested distribution networks, including IEEE test networks \cite{kersting1991radial} and some real distribution feeders.
\section{Distributed Algorithm}\label{sec:alg}
We assume the SDP relaxation is exact and develop in this section a distributed algorithm that solves the ROPF problem. We first design a distributed algorithm for a broad class of optimization problems through the alternating direction method of multipliers (ADMM). We then apply the proposed algorithm to the ROPF problem and show that the optimization subproblems can be solved efficiently, either through closed form solutions or through an eigen-decomposition of a $6\times 6$ Hermitian matrix.
\subsection{Preliminary: ADMM}
ADMM blends the decomposability of dual decomposition with the superior convergence properties of the method of multipliers \cite{boyd2011distributed}. It solves optimization problems of the form\footnote{This is a special case, with simpler constraints, of the general form introduced in \cite{boyd2011distributed}. The $z$ variable of \cite{boyd2011distributed} is renamed $y$ here since $z$ represents impedance in power systems.}: \begin{eqnarray} \min_{x,y} && f(x)+g(y) \nonumber \\ \text{s.t.} && x\in\mathcal{K}_x, \ \ y\in\mathcal{K}_y \label{eq:admm}\\ && x=y \nonumber \end{eqnarray} where $f(x),g(y)$ are convex functions and $\mathcal{K}_x,\mathcal{K}_y$ are convex sets. Let $\lambda$ denote the Lagrange
multiplier for the constraint $x=y$. Then the augmented Lagrangian is defined as \begin{eqnarray}\label{eq:agumentlag}
L_\rho(x,y,\lambda):=f(x)+g(y)+\langle \lambda, x-y\rangle+\frac{\rho}{2}\|x-y\|_2^2, \end{eqnarray} where $\rho\geq 0$ is a constant. When $\rho=0$, the augmented Lagrangian degenerates to the standard Lagrangian. At each iteration $k$, ADMM consists of the iterations: \begin{subequations}\label{eq:update} \begin{eqnarray} x^{k+1}&\in&\arg\min_{x\in\mathcal{K}_x} L_\rho(x,y^{k},\lambda^k)\label{eq:xupdate}\\ y^{k+1}&\in&\arg\min_{y\in\mathcal{K}_y} L_\rho(x^{k+1},y,\lambda^k)\label{eq:zupdate}\\ \lambda^{k+1}&=&\lambda^{k}+\rho(x^{k+1}-y^{k+1}).\label{eq:mupdate} \end{eqnarray} \end{subequations} Specifically, at each iteration, ADMM first updates $x$ based on \eqref{eq:xupdate}, then updates $y$ based on \eqref{eq:zupdate}, and after that updates the multiplier $\lambda$ based on \eqref{eq:mupdate}. Compared to dual decomposition, ADMM is guaranteed to converge to an optimal solution under less restrictive conditions. Let \begin{subequations}\label{eq:feasible}
\begin{eqnarray} r^k&:=&\|x^{k}-y^{k}\|_2 \label{eq:pfeasible} \\
s^k&:=&\rho\|y^{k}-y^{k-1}\|_2, \label{eq:dfeasible} \end{eqnarray} \end{subequations} which can be viewed as the residuals for primal and dual feasibility, respectively. They converge to $0$ at optimality and are commonly used as convergence metrics in experiments. Interested readers may refer to \cite[Chapter 3]{boyd2011distributed} for details.
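To make the updates \eqref{eq:update} and residuals \eqref{eq:feasible} concrete, consider the toy instance $f(x)=(x-1)^2$, $g(y)=(y-3)^2$ with $\mathcal{K}_x=\mathcal{K}_y=\mathbb{R}$, whose optimum is $x=y=2$; each ADMM step is then available in closed form (the example is ours, not from \cite{boyd2011distributed}):

```python
# Scalar ADMM demo: min (x-1)^2 + (y-3)^2 s.t. x = y, optimum x = y = 2.
rho = 1.0
x = y = lam = 0.0
for k in range(100):
    x = (2 + rho * y - lam) / (2 + rho)   # x-update: argmin_x L_rho(x, y, lam)
    y_prev = y
    y = (6 + lam + rho * x) / (2 + rho)   # y-update: argmin_y L_rho(x, y, lam)
    lam += rho * (x - y)                  # multiplier update
    r = abs(x - y)                        # primal residual r^k
    s = rho * abs(y - y_prev)             # dual residual s^k
print(round(x, 6), round(y, 6))  # 2.0 2.0
```

Both residuals shrink geometrically here; for this quadratic instance the error contracts by a factor $5/9$ per iteration.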
In this paper, we generalize the standard ADMM algorithm \cite{boyd2011distributed} so that the optimization subproblems of our ROPF problem can be solved efficiently. Instead of the quadratic penalty term $\frac{\rho}{2}\|x-y\|_2^2$ in \eqref{eq:agumentlag}, we use the more general penalty term $\frac{\rho}{2}\|x-y\|_{\Lambda}^2$, where $\|x-y\|_{\Lambda}^2:=(x-y)^H\Lambda(x-y)$ and $\Lambda$ is a diagonal matrix with positive diagonal entries. The augmented Lagrangian then becomes \begin{eqnarray}\label{eq:admm::augmentlag}
L_{\rho}(x,y,\lambda):=f(x)+g(y)+\langle \lambda, x-y\rangle+\frac{\rho}{2}\|x-y\|_{\Lambda}^2. \end{eqnarray} The convergence result in \cite[Chapter 3]{boyd2011distributed} carries over directly to this general case.
\subsection{ADMM based Distributed Algorithm}\label{sec:admma_dalg}
In this section, we design an ADMM based distributed algorithm for a broad class of optimization problems, of which the ROPF problem is a special case. Consider the following optimization problem: \begin{subequations}\label{eq:admm::opt1} \begin{eqnarray} \min && \sum_{i\in\mathcal{N}} f_i(x_i) \label{eq:admm::opt1::obj}\\ \mathrm{over} && \{x_i\mid i\in\mathcal{N}\} \\ \mathrm{s.t.} && \sum_{j\in N_i}A_{ij}x_j = 0 \quad\mathrm{for} \quad i\in\mathcal{N} \label{eq:admm::couple}\\ && x_i\in \cap_{r=0}^{R_i}\mathcal{K}_{ir} \quad\mathrm{for} \quad i\in\mathcal{N}, \label{eq:admm::local} \end{eqnarray} \end{subequations} where for each $i\in\mathcal{N}$, $x_i$ is a complex vector, $f_i(x_i)$ is a convex function, $\mathcal{K}_{ir}$ is a convex set, and $A_{ij}$ $(j\in N_i, i\in\mathcal{N})$ are matrices with appropriate dimensions. A broad class of graphical optimization problems (including ROPF) can be formulated as \eqref{eq:admm::opt1}. Specifically, each node $i\in\mathcal{N}$ is associated with local variables stacked as $x_i$, which belongs to an intersection of $R_i+1$ local feasible sets $\mathcal{K}_{ir}$, and with a cost function $f_i(x_i)$. The variables of node $i$ are coupled with those of its neighbor nodes $N_i$ through the linear constraints \eqref{eq:admm::couple}. The objective is to minimize the total cost across all nodes.
The goal is to develop a distributed algorithm that solves \eqref{eq:admm::opt1} such that each node $i$ solves its own subproblem and only exchanges information with its neighbor nodes $N_i$. In order to transform \eqref{eq:admm::opt1} into the form of standard ADMM \eqref{eq:admm}, we need two sets of variables $x$ and $y$. We introduce two sets of slack variables as below: \begin{enumerate} \item $x_{ir}$, a copy of the original variable $x_i$ for $1\leq r \leq R_i$. For convenience, denote the original $x_i$ by $x_{i0}$. \item $y_{ij}$, the variables of node $i$ as observed at node $j$, for $j\in N_i$. \end{enumerate} Then \eqref{eq:admm::opt1} can be reformulated as \begin{subequations}\label{eq:admm::opt4} \begin{eqnarray} \min && \sum_{i\in\mathcal{N}} f_i(x_{i0}) \label{eq:admm::opt4::obj}\\ \mathrm{over} && x=\{x_{ir}\mid 0\leq r\leq R_i,i\in\mathcal{N}\} \nonumber\\ &&y=\{y_{ij}\mid j\in N_i,i\in\mathcal{N}\} \nonumber \\ \mathrm{s.t.} && \sum_{j\in N_i}A_{ij}y_{ji} = 0 \quad\mathrm{for} \quad i\in\mathcal{N} \label{eq:admm::opt4::couple}\\ && x_{ir}\in \mathcal{K}_{ir} \quad\mathrm{for} \quad 0\leq r\leq R_i \ \ i\in\mathcal{N} \label{eq:admm::opt4::local}\\ && x_{ir}=y_{ii} \quad\mathrm{for} \quad 1\leq r\leq R_i \ \ i\in\mathcal{N} \label{eq:admm::opt4::consensus1} \\ && x_{i0}=y_{ij} \quad\mathrm{for} \quad j\in N_i \ \ i\in\mathcal{N} , \label{eq:admm::opt4::consensus2} \end{eqnarray} \end{subequations} where $x$ and $y$ represent the two groups of variables in standard ADMM. Note that the consensus constraints \eqref{eq:admm::opt4::consensus1} and \eqref{eq:admm::opt4::consensus2} force all the duplicates $x_{ir}$ and $y_{ij}$ to be equal. Thus the solution $x_{i0}$ is also optimal for the original problem \eqref{eq:admm::opt1}.
\eqref{eq:admm::opt4} falls into the general ADMM form \eqref{eq:admm}, where \eqref{eq:admm::opt4::couple} corresponds to $\mathcal{K}_y$, \eqref{eq:admm::opt4::local} corresponds to $\mathcal{K}_x$, and \eqref{eq:admm::opt4::consensus1} and \eqref{eq:admm::opt4::consensus2} are the consensus constraints that relate $x$ and $y$.
Following the ADMM procedure, we relax the consensus constraints \eqref{eq:admm::opt4::consensus1} and \eqref{eq:admm::opt4::consensus2}, whose Lagrangian multipliers are denoted by $\lambda_{ir}$ and $\mu_{ij}$, respectively. The generalized augmented Lagrangian then can be written as \begin{align}\label{eq:admm::opt4::augment} &L_{\rho}(x,y,\lambda,\mu)\\
=&\sum_{i\in\mathcal{N}} \left(\sum_{r=1}^{R_i}\left(\langle\lambda_{ir}, x_{ir}-y_{ii}\rangle+ \frac{\rho}{2}\|x_{ir}-y_{ii}\|_{\Lambda_{ir}}^2\right)+\right. \nonumber\\
&\left.f_i(x_{i0})+\sum_{j\in N_i}\left(\langle\mu_{ij},x_{i0}-y_{ij}\rangle +\frac{\rho}{2}\|x_{i0}-y_{ij}\|_{M_{ij}}^2\right) \right),\nonumber \end{align} where the parameters $\Lambda_{ir}$ and $M_{ij}$ depend on the problem; we show how to design them in Section \ref{sec:dalg}.
Next, we show that both the $x$-update \eqref{eq:xupdate} and $y$-update \eqref{eq:zupdate} can be solved in a distributed manner, i.e. both of them can be decomposed into local subproblems that can be solved in parallel by each node $i$ with only neighborhood communications.
First, we define the set of local variables for each node $i$, denoted by $\mathcal{A}_i$, which includes its own duplicates $x_{ir}$ and the associated multipliers $\lambda_{ir}$ for $0\leq r\leq R_i$, as well as the ``observations'' $y_{ji}$ of the variables of its neighbors $N_i$ and the associated multipliers $\mu_{ji}$, i.e. \begin{eqnarray}\label{eq:localvar} \mathcal{A}_i:=\{x_{ir},\lambda_{ir}\mid 0\leq r\leq R_i\}\cup \{y_{ji},\mu_{ji}\mid j\in N_i\}. \end{eqnarray} Next, we show how each node $i$ updates $\{x_{ir}\mid 0\leq r\leq R_i\}$ in the $x$-update and $\{y_{ji}\mid j\in N_i\}$ in the $y$-update.
In the $x$-update at each iteration $k$, the optimization subproblem that updates $x^{k+1}$ is \begin{eqnarray}\label{eq:admm:opt4:xupdate} \min_{x\in\mathcal{K}_x} L_\rho(x, y^k, \lambda^k,\mu^k), \end{eqnarray} where the constraint $\mathcal{K}_x$ is the Cartesian product of $\mathcal{K}_{ir}$, i.e. \begin{eqnarray*} \mathcal{K}_x:=\otimes_{i\in\mathcal{N}}\otimes_{r=0}^{R_i}\mathcal{K}_{ir}. \end{eqnarray*} The objective can be written as a sum of local objectives as shown below {\small \begin{align*} &L_{\rho}(x,y^k,\lambda^k,\mu^k)\\
=&\sum_{i\in\mathcal{N}} \left(\sum_{r=1}^{R_i}\left(\langle\lambda_{ir}^{k}, x_{ir}-y_{ii}^{k}\rangle+ \frac{\rho}{2}\|x_{ir}-y_{ii}^{k}\|_{\Lambda_{ir}}^2\right)+\right. \nonumber\\
&\left.f_i(x_{i0})+\sum_{j\in N_i}\left(\langle\mu_{ij}^{k},x_{i0}-y_{ij}^{k}\rangle +\frac{\rho}{2}\|x_{i0}-y_{ij}^{k}\|_{M_{ij}}^2\right) \right)\nonumber\\ =& \sum_{i\in\mathcal{N}}\sum_{r=0}^{R_i}H_{ir}(x_{ir}) - \sum_{i\in\mathcal{N}}\left(\sum_{r=0}^{R_i}\langle \lambda_{ir}^k,y_{ii}^k\rangle+\sum_{j\in N_i}\langle \mu_{ij}^k,y_{ij}^k\rangle\right), \end{align*} } where the last term is independent of $x$ and \begin{align} \label{eq:admm:opt4:hix} &H_{ir}(x_{ir})=\\ &\begin{cases}
f_i(x_{i0})+\sum_{j\in N_i}\left(\langle\mu_{ij}^{k},x_{i0}\rangle +\frac{\rho}{2}\|x_{i0}-y_{ij}^{k}\|_{M_{ij}}^2\right)& r=0\\
\langle\lambda_{ir}^{k},x_{ir}\rangle+ \frac{\rho}{2}\|x_{ir}-y_{ii}^{k}\|_{\Lambda_{ir}}^2 & r > 0 \end{cases}.\nonumber \end{align} Then the problem \eqref{eq:admm:opt4:xupdate} in the $x$-update can be written explicitly as \begin{eqnarray} \min && \sum_{i\in\mathcal{N}}\sum_{r=0}^{R_i} H_{ir}(x_{ir}) \nonumber\\ \mathrm{over} && x=\{x_{ir}\mid 0\leq r\leq R_i, i\in\mathcal{N}\} \label{eq:admm::opt2::xupdate}\\ \mathrm{s.t.} && x_{ir}\in\mathcal{K}_{ir} \quad\mathrm{for} \quad 0\leq r\leq R_i, \ i\in\mathcal{N}, \nonumber \end{eqnarray} where both the objective and constraint are separable for $0\leq r\leq R_i$ and $i\in\mathcal{N}$. Thus it can be decomposed into $\sum_{i\in\mathcal{N}}(R_i+1)$ independent problems that can be solved in parallel. There are $R_i+1$ problems associated with each node $i$, and the $r$-th $(0\leq r\leq R_i)$ one can be written as \begin{eqnarray}\label{eq:xupdatenode} \min_{x_{ir}\in\mathcal{K}_{ir}} \ \ H_{ir}(x_{ir}) \end{eqnarray} whose solution is the new update of the variables $x_{ir}$ for node $i$. In the above problem, the constants $y_{ij}^k,\mu_{ij}^k\in\mathcal{A}_j$ are not local to $i$ and are stored at $i$'s neighbors $j\in N_i$. Therefore, each node $i$ needs to collect $(y_{ij},\mu_{ij})$ from all of its neighbors prior to solving \eqref{eq:xupdatenode}. The message exchanges are illustrated in Figure \ref{fig:msg_x}.
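To illustrate why such subproblems can be inexpensive: when $\mathcal{K}_{ir}$ is a box constraint, the subproblem \eqref{eq:xupdatenode} for $r>0$ is separable across components and solved by clipping the unconstrained minimizer. A sketch with real variables (illustrative only; other constraint sets require different solutions):

```python
import numpy as np

def x_update_box(lam, y, rho, Lam, lb, ub):
    """argmin of <lam, x> + (rho/2)*||x - y||_Lam^2 over the box [lb, ub]:
    the problem is separable, so clip the unconstrained minimizer y - lam/(rho*Lam)."""
    return np.clip(y - lam / (rho * Lam), lb, ub)

lam = np.array([1.0, -4.0])       # multiplier lambda_ir^k
y = np.array([0.5, 0.2])          # observation y_ii^k
x = x_update_box(lam, y, rho=2.0, Lam=np.array([1.0, 1.0]), lb=0.0, ub=1.0)
print(x)  # [0. 1.] -- both components land on the bounds
```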
\begin{figure}
\caption{Message exchanges in the $x$ and $y$-update for node $i$.}
\label{fig:msg_x}
\label{fig:msg_z}
\label{fig:msg_update}
\end{figure}
In the $y$-update, the optimization problem that updates $y^{k+1}$ is \begin{eqnarray}\label{eq:admm:opt4:yupdate1} \min_{y\in\mathcal{K}_y} L_\rho(x^{k+1}, y, \lambda^k,\mu^k)
\end{eqnarray} where the constraint set $\mathcal{K}_y$ can be represented as a Cartesian product of $|\mathcal{N}|$ disjoint sets, i.e. \begin{eqnarray*} \mathcal{K}_y:=\otimes_{i\in\mathcal{N}}\{y_{ji}, j\in N_i\mid \sum_{j\in N_i}A_{ij}y_{ji} = 0 \}. \end{eqnarray*}
The objective can be written as a sum of local objectives as below. {\small \begin{align*} &L_{\rho}(x^{k+1},y,\lambda^k,\mu^k)\\
=&\sum_{i\in\mathcal{N}} \left(\sum_{r=1}^{R_i}\left(\langle\lambda_{ir}^{k}, x_{ir}^{k+1}-y_{ii}\rangle+ \frac{\rho}{2}\|x_{ir}^{k+1}-y_{ii}\|_{\Lambda_{ir}}^2\right)+\right. \nonumber\\
&\left.f_i(x_{i0}^{k+1})+\sum_{j\in N_i}\left(\langle\mu_{ji}^{k},x_{j0}^{k+1}-y_{ji}\rangle +\frac{\rho}{2}\|x_{j0}^{k+1}-y_{ji}\|_{M_{ji}}^2\right) \right)\nonumber\\ =&\sum_{i\in\mathcal{N}}G_i(\{y_{ji}\mid j\in N_i\}) + \\ &\sum_{i\in\mathcal{N}}\left(f_i(x_{i0}^{k+1})+\sum_{r=0}^{R_i}\langle \lambda_{ir}^{k}, x_{ir}^{k+1}\rangle+\sum_{j\in N_i}\langle \mu_{ji}^{k}, x_{j0}^{k+1}\rangle\right), \end{align*} } where the last term is independent of $y$ and \begin{align*}
&G_i(\{y_{ji}\mid j\in N_i\})=\sum_{r=1}^{R_i}\left(-\langle\lambda_{ir}^{k},y_{ii}\rangle+ \frac{\rho}{2}\|x_{ir}^{k+1}-y_{ii}\|_{\Lambda_{ir}}^2\right)\\
&\qquad +\sum_{j\in N_i}\left(-\langle\mu_{ji}^{k},y_{ji}\rangle +\frac{\rho}{2}\|x_{j0}^{k+1}-y_{ji}\|_{M_{ji}}^2\right). \end{align*} Then the problem \eqref{eq:admm:opt4:yupdate1} in the $y$-update can be written explicitly as \begin{eqnarray*} \min && \sum_{i\in\mathcal{N}}G_i(\{y_{ji}\mid j\in N_i\})\\ \mathrm{over} && y=\{\{y_{ji}\mid j\in N_i\}\mid i\in\mathcal{N}\}\\ \mathrm{s.t.} && \sum_{j\in N_i}A_{ij}y_{ji} = 0, \ \ i \in\mathcal{N}
\end{eqnarray*} which can be decomposed into $|\mathcal{N}|$ subproblems and the subproblem for node $i$ is \begin{eqnarray} \min && G_i(\{y_{ji}\mid j\in N_i\})\nonumber\\ \mathrm{over} && \{y_{ji}\mid j\in N_i\} \label{eq:admm::opt4::zupdate1}\\ \mathrm{s.t.} && \sum_{j\in N_i}A_{ij}y_{ji} = 0, \nonumber \end{eqnarray} whose solution is the new update of $\{y_{ji}\mid j\in N_i\}\in\mathcal{A}_i$. In \eqref{eq:admm::opt4::zupdate1}, the constants
$x_{j0}\in\mathcal{A}_j$ are stored at $i$'s neighbors $j\in N_i$. Hence, each node $i$ needs to collect $x_{j0}$ from all of its neighbors prior to solving \eqref{eq:admm::opt4::zupdate1}. The message exchanges in the $y$-update are illustrated in Figure \ref{fig:msg_z}.
The problem \eqref{eq:admm::opt4::zupdate1} admits a closed form solution. We stack the real and imaginary parts of the variables $\{y_{ji}\mid j\in N_i\}$ into a vector of appropriate dimension, denoted $\tilde y$. Then \eqref{eq:admm::opt4::zupdate1} takes the following form: \begin{eqnarray} \min && \frac{1}{2}{\tilde y}^TM {\tilde y}+c^T{\tilde y} \nonumber\\ \mathrm{over} && \tilde y \label{eq:admm::opt2::zupdate1} \\ \mathrm{s.t.} && \tilde A{\tilde y}=0, \nonumber \end{eqnarray} where $M$ is a positive diagonal matrix, $\tilde A$ is a real matrix with full row rank, and $c$ is a real vector; $M$, $c$ and $\tilde A$ are derived from \eqref{eq:admm::opt4::zupdate1}. The closed form solution to \eqref{eq:admm::opt2::zupdate1} is given by \begin{eqnarray} \label{eq:admm::opt2::zsol} \tilde y=\left(M^{-1}\tilde A^T(\tilde AM^{-1}\tilde A^T)^{-1}\tilde AM^{-1}-M^{-1}\right)c. \end{eqnarray}
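The expression \eqref{eq:admm::opt2::zsol} follows from the KKT conditions of \eqref{eq:admm::opt2::zupdate1}; a quick numerical sanity check on random data (dimensions chosen arbitrarily):

```python
import numpy as np

# Verify (eq:admm::opt2::zsol) against the KKT conditions of the
# equality-constrained QP: feasibility A y = 0 and stationarity M y + c + A^T nu = 0.
rng = np.random.default_rng(0)
n, m = 6, 2
M = np.diag(rng.uniform(1.0, 2.0, n))    # positive diagonal M
A = rng.standard_normal((m, n))          # generic matrix: full row rank
c = rng.standard_normal(n)

Mi = np.linalg.inv(M)
y = (Mi @ A.T @ np.linalg.inv(A @ Mi @ A.T) @ A @ Mi - Mi) @ c

nu = np.linalg.solve(A @ Mi @ A.T, -A @ Mi @ c)  # multiplier from the KKT system
print(np.abs(A @ y).max())                       # primal feasibility residual
print(np.abs(M @ y + c + A.T @ nu).max())        # stationarity residual
```

Both residuals are at machine precision, confirming the formula.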
In summary, the original problem \eqref{eq:admm::opt1} is decomposed into local subproblems that are solved in a distributed manner using ADMM. At each iteration, each node $i$ solves \eqref{eq:xupdatenode} in the $x$-update and \eqref{eq:admm::opt2::zupdate1} in the $y$-update. Since the subproblem \eqref{eq:admm::opt2::zupdate1} in the $y$-update has the closed form solution \eqref{eq:admm::opt2::zsol}, whether the original problem \eqref{eq:admm::opt1} can be solved efficiently in a distributed manner depends on the existence of efficient solutions to the subproblems \eqref{eq:xupdatenode} in the $x$-update, which in turn depends on the form of the objectives $f_i(x_i)$ and the constraint sets $\mathcal{K}_{ir}$.
Next, we show that the ROPF problem \eqref{eq:ropf} is a special case of \eqref{eq:admm::opt1} and hence can be solved in a distributed manner using the above method. In particular, we show that the corresponding subproblems in the $x$-update can be solved efficiently.
\subsection{Application on OPF problem}\label{sec:dalg} We assume the SDP relaxation is exact and now derive a distributed algorithm for solving ROPF \eqref{eq:ropf}. Using the ADMM based algorithm developed in Section \ref{sec:admma_dalg}, the global ROPF problem is decomposed into local subproblems that can be solved in a distributed manner with only neighborhood communication. Since the subproblem in the $y$-update for each node $i$ can always be solved in closed form, we only need to develop efficient solutions for the subproblems \eqref{eq:xupdatenode} in the $x$-update of the ROPF problem. In particular, we provide a sufficient condition, which holds in practice, for the existence of efficient solutions to all the optimization subproblems. Compared with existing methods, e.g. \cite{devane2013stability,li2012demand,dall2013distributed,kraning2013dynamic,sun2013fully}, that use a generic iterative optimization solver for each subproblem, the computation time is improved by more than 100 times.
The ROPF problem defined in \eqref{eq:ropf} can be written explicitly as \begin{subequations}\label{eq:ropfexplicit} \begin{align} \min & \ \ \sum_{i\in \mathcal{N}} \sum_{\phi\in\Phi_i}f_i^\phi(s_i^\phi) \\ \mathrm{over} & \ \ v,s,S,\ell \nonumber \\ \mathrm{s.t.} & \ \ \mathcal{P}_i(v_{A_i})=v_i-z_iS_i^H-S_iz_i^H+z_i\ell_iz_i^H \ i\in\mathcal{N}_+ \label{eq:bfm21}\\ & s_i=-\text{diag}\left(\sum_{j\in C_i}\mathcal{P}_i(S_j-z_j\ell_j)-S_i\right) \quad i\in\mathcal{N} \label{eq:bfm22}\\ & \begin{pmatrix} v_i & S_i\\ S_i^H & \ell_i \end{pmatrix}\in \mathbb{S}_+ \qquad\qquad\qquad\qquad\quad \ \ i\in\mathcal{N}_+ \label{eq:bfm23}\\ & \ \ s_i^\phi\in\mathcal{I}_i^\phi \qquad\qquad\qquad\qquad\qquad \phi\in\Phi_i, \ i\in\mathcal{N} \label{eq:bfm24}\\ & \ \ \underline v_i^\phi \leq v_i^{\phi\phi}\leq \overline v_i^{\phi} \qquad\qquad\quad\qquad \phi\in\Phi_i, \ i\in\mathcal{N} \label{eq:bfm25} \end{align} \end{subequations} Denote \begin{align} x_i:=&\{v_i,s_i,S_i,\ell_i\} \\ \mathcal{K}_{i0}:=&\{x_i\mid \begin{pmatrix} v_i & S_i\\ S_i^H & \ell_i \end{pmatrix}\in \mathbb{S}_+,\ s_i^\phi\in\mathcal{I}_i^\phi \ \forall \phi\in \Phi_i\} \label{eq:distOPFu::ki0}\\ \mathcal{K}_{i1}:=&\{x_i\mid \underline v_i^\phi \leq v_i^{\phi\phi}\leq \overline v_i^{\phi}, \phi\in \Phi_i\} \label{eq:distOPFu::ki1} \end{align} Then \eqref{eq:ropfexplicit} takes the form of \eqref{eq:admm::opt1} with $R_i=1$ for all $i\in\mathcal{N}$, where \eqref{eq:bfm21}--\eqref{eq:bfm22} correspond to \eqref{eq:admm::couple} and \eqref{eq:bfm23}--\eqref{eq:bfm25} correspond to \eqref{eq:admm::local}. We then have the following theorem, which provides a sufficient condition for the existence of an efficient solution to \eqref{eq:xupdatenode}. \begin{theorem}\label{thm:distOPFu:closedformsol} Suppose there exists a closed form solution to the following optimization problem for all $i\in\mathcal{N}$ and $\phi\in\Phi_i$ \begin{eqnarray}
\min && f_i^\phi\left(s\right)+\frac{\rho}{2}\left\|s-\hat s^\phi\right\|_2^2 \nonumber\\ \mathrm{over} && s \in \mathcal{I}_i^\phi \label{eq:distOPFu:closedformsol} \end{eqnarray} given any constants $\hat s^\phi$ and $\rho>0$. Then the subproblems for ROPF in the $x$-update \eqref{eq:xupdatenode} can be solved via either closed form solutions or an eigen-decomposition of a $6\times 6$ Hermitian matrix. \end{theorem} \begin{IEEEproof} We prove Theorem \ref{thm:distOPFu:closedformsol} by elaborating the procedure to solve \eqref{eq:xupdatenode}. \end{IEEEproof}
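The eigen-decomposition in the theorem handles the positive semidefinite constraint \eqref{eq:bfm23}: for a three-phase bus the matrix in \eqref{eq:bfm23} is $6\times 6$, and the Frobenius-norm projection of a Hermitian matrix onto the PSD cone is obtained by clipping its negative eigenvalues. A sketch of that projection (the reduction of the full subproblem to this projection is our reading of the proof procedure):

```python
import numpy as np

def psd_project(W):
    """Frobenius-norm projection of a Hermitian matrix onto the PSD cone:
    eigen-decompose, clip negative eigenvalues at zero, recompose."""
    eigvals, U = np.linalg.eigh(W)
    return (U * np.maximum(eigvals, 0.0)) @ U.conj().T

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
W = (B + B.conj().T) / 2            # a random 6x6 Hermitian matrix
P = psd_project(W)
print(np.linalg.eigvalsh(P).min())  # nonnegative up to roundoff: P is PSD
```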
Recall that the optimization subproblem \eqref{eq:admm::opt4::zupdate1} in the $y$-update always has a closed form solution. Hence, if the objective function $f_i^\phi\left(s\right)$ and injection region $\mathcal{I}_i^\phi$ satisfy the sufficient condition in Theorem \ref{thm:distOPFu:closedformsol}, all the subproblems can be solved efficiently.
\begin{remark} In practice, the objective function $f_i^\phi(s)$ usually takes the form $f_i^\phi\left(s\right):=\frac{\alpha_i}{2} p^2+\beta_i p$, which models both line loss and generation cost minimization, as discussed in Section \ref{sec:opfsdp}. The injection region $\mathcal{I}_i^\phi$ usually takes either the form \eqref{eq:opf::unbalance::I2} or \eqref{eq:opf::unbalance::I1}. It is shown in Appendix \ref{app:distOPFb::solver2} that closed form solutions exist for all of these cases. Thus \eqref{eq:distOPFu:closedformsol} can be solved efficiently in practical applications. \end{remark}
Following the procedure in Section \ref{sec:admma_dalg}, we introduce two sets of slack variables: $x_{ir}$ and $y_{ij}$. Then the counterpart of \eqref{eq:admm::opt4} is {\scriptsize \begin{subequations}\label{eq:distOPFu::eropf} \begin{align} \min & \ \ \sum_{i\in\mathcal{N}}\sum_{\phi\in\Phi_i}f_i^\phi((s_{i0}^\phi)^{(x)}) \\ \mathrm{over} & \ \ x:=\{x_{ir}\mid 0\leq r\leq 1, \ i\in\mathcal{N} \} \nonumber\\ & \ \ y:=\{y_{ji} \mid j\in N_i, \ i\in\mathcal{N}\} \nonumber \\ \mathrm{s.t.} & \ \ \mathcal{P}_i(v^{(y)}_{A_ii})=v^{(y)}_{ii}-z_i(S^{(y)}_{ii})^H-S^{(y)}_{ii}z_i^H+z_i\ell^{(y)}_{ii}z_i^H \ i\in\mathcal{N} \\ & \ \ s^{(y)}_{ii}=-\text{diag}\left(\sum_{j\in C_i}\mathcal{P}_i(S^{(y)}_{ji}-z_j\ell^{(y)}_{ji})-S^{(y)}_{ii}\right) \qquad i\in\mathcal{N} \\ &\ \ \begin{pmatrix} v_{i0}^{(x)} & S_{i0}^{(x)}\\ (S_{i0}^{(x)})^H & \ell_{i0}^{(x)} \end{pmatrix}\in \mathbb{S}_+ \qquad \qquad \qquad \qquad \qquad \qquad \quad i\in\mathcal{N} \\ &\ \ (s_{i0}^\phi)^{(x)}\in \mathcal{I}_i^\phi \qquad\qquad\qquad \qquad \qquad \qquad \qquad \phi\in\Phi_i \text{ and } i\in\mathcal{N} \\ &\ \ \underline v_i^\phi \leq (v_{i1}^{\phi\phi})^{(x)}\leq \overline v_i^\phi \quad\quad \qquad \qquad \qquad \qquad \phi\in\Phi_i \text{ and }i\in\mathcal{N} \\ & \ \ x_{ir} - y_{ii} = 0 \qquad\qquad\qquad \qquad \qquad \qquad \qquad r=1 \text{ and } i\in \mathcal{N} \label{eq:distOPFu:eropf::con1} \\ & \ \ x_{i0}-y_{ij}=0 \qquad\qquad\qquad \qquad \qquad \qquad \qquad j\in N_i \text{ and } i\in\mathcal{N} , \label{eq:distOPFu:eropf::con2} \end{align} \end{subequations} } where we put superscripts $(\cdot)^{(x)}$ and $(\cdot)^{(y)}$ on each variable to denote whether the variable is updated in the $x$-update or $y$-update step. Note that each node $i$ does not need full information from its neighbors.
Specifically, for each node $i$, only voltage information $v_{A_ii}^{(y)}$ is needed from its parent $A_i$, and only branch power $S_{ji}^{(y)}$ and current $\ell_{ji}^{(y)}$ information from its children $j\in C_i$, based on \eqref{eq:distOPFu::eropf}. Thus, $y_{ij}$ contains only partial information about $x_{i0}$, i.e. \begin{eqnarray*} y_{ij}&:=&\begin{cases} (S_{ii}^{(y)}, \ell_{ii}^{(y)}, v_{ii}^{(y)}, s_{ii}^{(y)}) & j=i\\ (S_{iA_i}^{(y)}, \ell_{iA_i}^{(y)}) & j=A_i\\ (v_{ij}^{(y)}) & j\in C_i \end{cases}. \end{eqnarray*} On the other hand, only $x_{i0}$ needs to hold all the variables, and it suffices for $x_{i1}$ to hold only a duplicate of $v_i$, i.e. \begin{eqnarray*} x_{ir}:= \begin{cases} (S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)}, s_{i0}^{(x)}) & r=0\\ (v_{i1}^{(x)}) & r=1 \end{cases} . \end{eqnarray*} As a result, $x_{ir}$, $y_{ii}$ in \eqref{eq:distOPFu:eropf::con1} and $x_{i0}$, $y_{ij}$ in \eqref{eq:distOPFu:eropf::con2} do not consist of the same components. Hence, we abuse notation in both \eqref{eq:distOPFu:eropf::con1} and \eqref{eq:distOPFu:eropf::con2}: each difference is taken only over the components that appear in both items, i.e. \begin{align*} &x_{i0}-y_{ij}\\ :=& \begin{cases} (S^{(x)}_{i0}-S^{(y)}_{ii},\ell^{(x)}_{i0}-\ell^{(y)}_{ii},v^{(x)}_{i0}-v^{(y)}_{ii},s^{(x)}_{i0}-s^{(y)}_{ii}) & \hspace{-0.1in} j=i\\ (S^{(x)}_{i0}-S_{iA_i}^{(y)}, \ell^{(x)}_{i0}-\ell_{iA_i}^{(y)}) & \hspace{-0.1in} j=A_i\\ (v^{(x)}_{i0}-v_{ij}^{(y)}) & \hspace{-0.1in} j\in C_i \end{cases}\\ &x_{ir} - y_{ii}:= \begin{cases} (v^{(x)}_{i1}-v^{(y)}_{ii}) & r=1 \end{cases}. \end{align*}
Let $\lambda$ denote the Lagrange multiplier for \eqref{eq:distOPFu:eropf::con1} and $\mu$ the Lagrange multiplier for \eqref{eq:distOPFu:eropf::con2}. The detailed mapping between constraints and multipliers is illustrated in Table \ref{tab:distOPFu::muliplier}.
\begin{table} \caption{Multipliers associated with constraints \eqref{eq:distOPFu:eropf::con1}-\eqref{eq:distOPFu:eropf::con2}} \begin{center}
\begin{tabular}{|c|c||c|c|} \hline $\lambda_{i1}$: & $v_{i1}^{(x)}=v_{ii}^{(y)}$ & & \\ \hline $\mu^{(1)}_{ii}$: & $S_{i0}^{(x)}=S_{ii}^{(y)}$ & $\mu^{(2)}_{ii}$: & $\ell_{i0}^{(x)}=\ell_{ii}^{(y)}$\\ \hline $\mu^{(3)}_{ii}$: & $v_{i0}^{(x)}=v_{ii}^{(y)}$ & $\mu^{(4)}_{ii}$: & $s_{i0}^{(x)}=s_{ii}^{(y)}$\\ \hline $\mu^{(1)}_{iA_i}$: & $S_{i0}^{(x)}=S_{iA_i}^{(y)}$ & $\mu^{(2)}_{iA_i}$: & $\ell_{i0}^{(x)}=\ell_{iA_i}^{(y)}$ \\ \hline $\mu_{ij}$: & $v_{i0}^{(x)}=v_{ij}^{(y)}$ & &\\ \hline \end{tabular} \end{center} \label{tab:distOPFu::muliplier} \end{table}
Next, we will derive the efficient solution for the subproblems in the $x$-update. For notational convenience, we will skip the iteration number $k$ on the variables. In the $x$-update, there are $2$ subproblems \eqref{eq:xupdatenode} associated with each bus $i$. The first problem, which updates $x_{i0}$, can be written explicitly as: \begin{subequations}\label{eq:distOPFu::zagent_a} \begin{eqnarray} \min && H_{i0}(x_{i0}) \label{eq:distOPFu::zagent_a1}\\ \mathrm{over} && x_{i0}=\{v_{i0}^{(x)}, \ell_{i0}^{(x)}, S_{i0}^{(x)}, s_{i0}^{(x)}\} \\ \mathrm{s.t.}&& \begin{pmatrix} v_{i0}^{(x)} & S_{i0}^{(x)}\\ (S_{i0}^{(x)})^H & \ell_{i0}^{(x)} \end{pmatrix}\in \mathbb{S}_+ \label{eq:distOPFu::zagent_a2} \\ && (s_{i0}^\phi)^{(x)}\in \mathcal{I}_i^\phi \qquad \phi\in\Phi_i, \label{eq:distOPFu::zagent_a3} \end{eqnarray} \end{subequations}
where $H_{i0}(x_{i0})$ is defined in \eqref{eq:admm:opt4:hix} and for our application, $\|x_{i0}-y_{ij}\|_{M_{ij}}^2$ is chosen to be \begin{align}\label{eq:mijdef}
&\|x_{i0}-y_{ij}\|_{M_{ij}}^2=\\ &\begin{cases}
(2|C_i|+3)\|S_{i0}^{(x)}-S_{ii}^{(y)}\|_2^2+ \|s_{i0}^{(x)}-s_{ii}^{(y)}\|_2^2 &\\
\quad+2\|v_{i0}^{(x)}-v_{ii}^{(y)}\|_2^2+ (|C_i|+1)\|\ell_{i0}^{(x)}-\ell_{ii}^{(y)}\|_2^2& j=i\\
\|S_{i0}^{(x)}-S_{iA_i}^{(y)}\|_2^2+\|\ell_{i0}^{(x)}-\ell_{iA_i}^{(y)}\|_2^2 & j = A_i \\
\|x_{i0}-y_{ij}\|_{2}^2 & j\in C_i \end{cases}\nonumber \end{align} By using \eqref{eq:mijdef}, $H_i^{(1)}(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})$, which is defined below, can be written as the Euclidean distance between two Hermitian matrices, which is one of the key reasons leading to our efficient solution. Then $H_{i0}(x_{i0})$ can be further decomposed as \begin{align}\label{eq:distOPFu::zupdate_square} &H_{i0}(x_{i0})\\
=&f_i(x_{i0})+\sum_{j\in N_i}\left(\langle\mu_{ij},x_{i0}\rangle +\frac{\rho}{2}\|x_{i0}-y_{ij}\|_{M_{ij}}^2\right)\nonumber\\
=&\frac{\rho(|C_i|+2)}{2} H_i^{(1)}(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)}) + H_i^{(2)}(s_{i0}^{(x)}) +\text{constant}, \nonumber \end{align} where
\begin{eqnarray*} H_i^{(1)}(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})\hspace{-0.1in}&=&\hspace{-0.1in} \left\| \begin{pmatrix} v_{i0}^{(x)} & S_{i0}^{(x)} \\ (S_{i0}^{(x)})^H & \ell_{i0}^{(x)} \end{pmatrix} - \begin{pmatrix} \hat v_i & \hat S_i\\ \hat S_i^H & \hat \ell_i \end{pmatrix}
\right\|_2^2\\
H_i^{(2)}(s_{i0}^{(x)})\hspace{-0.1in}&=&\hspace{-0.1in} f_i(s_{i0}^{(x)})+\frac{\rho}{2}\|s_{i0}^{(x)}-\hat s_i\|_2^2. \end{eqnarray*} The last step in \eqref{eq:distOPFu::zupdate_square} is obtained by completing the square, and the hatted variables are constants.
Hence, the objective \eqref{eq:distOPFu::zagent_a1} in \eqref{eq:distOPFu::zagent_a} can be decomposed into two parts, where the first part $H_i^{(1)}(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})$ involves the variables $(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})$ and the second part $H_i^{(2)}(s_{i0}^{(x)})$ involves $s_{i0}^{(x)}$. Note that the constraints \eqref{eq:distOPFu::zagent_a2}--\eqref{eq:distOPFu::zagent_a3} can also be separated into two independent constraints: the variables $(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})$ only depend on \eqref{eq:distOPFu::zagent_a2}, and $s_{i0}^{(x)}$ depends only on \eqref{eq:distOPFu::zagent_a3}. Then \eqref{eq:distOPFu::zagent_a} can be decomposed into two subproblems, where the first one \eqref{eq:distOPFu::zagent1} solves for the optimal $(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)})$ and the second one \eqref{eq:distOPFu::zagent2} solves for the optimal $s_{i0}^{(x)}$. The first subproblem can be written explicitly as \begin{eqnarray} \min &&H_i^{(1)}(S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)}) \nonumber\\ \mathrm{over} && S_{i0}^{(x)}, \ell_{i0}^{(x)}, v_{i0}^{(x)} \label{eq:distOPFu::zagent1}\\ \mathrm{s.t.}&& \begin{pmatrix} v_{i0}^{(x)} & S_{i0}^{(x)} \\ (S_{i0}^{(x)})^H & \ell_{i0}^{(x)} \end{pmatrix}\in \mathbb{S}_+ , \nonumber \end{eqnarray} which can be solved using an eigen-decomposition of a $6\times 6$ matrix via the following theorem. \begin{theorem}\label{thm:distOPFu::1}
Suppose $W\in\mathbb{H}^n$ and denote $X(W):=\arg\min_{X\in \mathbb{S}_+}\|X-W\|_2^2$. Then $X(W)=\sum_{i:\lambda_i>0}\lambda_iu_iu_i^H$, where $\lambda_i$ and $u_i$ are the $i$-th eigenvalue and corresponding orthonormal eigenvector of the matrix $W$, respectively. \end{theorem} \begin{IEEEproof} The proof is in Appendix \ref{app:distOPFu::1}. \end{IEEEproof}
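As a quick illustration, the projection of Theorem \ref{thm:distOPFu::1} takes only a few lines with NumPy; the sketch below (the function name is ours, not from the paper) projects a Hermitian matrix onto $\mathbb{S}_+$ by discarding the negative eigenvalues.

```python
import numpy as np

def project_psd(W):
    """Project a Hermitian matrix W onto the PSD cone:
    keep only the terms lambda_i * u_i u_i^H with lambda_i > 0."""
    lam, U = np.linalg.eigh(W)      # eigenvalues (ascending) and orthonormal eigenvectors
    lam = np.maximum(lam, 0.0)      # zero out the negative eigenvalues
    return (U * lam) @ U.conj().T   # reassemble sum_i lam_i u_i u_i^H

# Example: W has eigenvalues 3 and -1; the projection keeps only the
# rank-one term with eigenvalue 3.
W = np.array([[1.0, 2.0], [2.0, 1.0]])
X = project_psd(W)
```

For the $6\times 6$ blocks in \eqref{eq:distOPFu::zagent1}, this costs a single dense eigen-decomposition per iteration, which is what keeps the $x$-update cheap.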
Denote \begin{eqnarray*} W:= \begin{pmatrix} \hat v_i & \hat S_i\\ \hat S_i^H & \hat \ell_i \end{pmatrix} \text{ and } X:= \begin{pmatrix} v_{i0}^{(x)} & S_{i0}^{(x)} \\ (S_{i0}^{(x)})^H & \ell_{i0}^{(x)} \end{pmatrix}. \end{eqnarray*} Then \eqref{eq:distOPFu::zagent1} can be written compactly as \begin{eqnarray*}
\min_{X}\|X-W\|_2^2 \quad\text{s.t. }X \in \mathbb{S}_+, \end{eqnarray*} which can be solved efficiently using eigen-decomposition based on Theorem \ref{thm:distOPFu::1}. The second problem is \begin{eqnarray}
\min && f_i(s_{i0}^{(x)})+\frac{\rho}{2}\|s_{i0}^{(x)}-\hat s_i\|_2^2 \nonumber\\ \mathrm{over} && (s_{i0}^{\phi})^{(x)} \in\mathcal{I}_i^\phi \qquad \phi\in\Phi_i . \label{eq:distOPFu::zagent2}
\end{eqnarray} Recall that if $ f_i(s_{i0}^{(x)})=\sum_{\phi\in\Phi_i}f_i^{\phi}((s_{i0}^{\phi})^{(x)})$, then both the objective and the constraint are separable across phases $\phi\in\Phi_i$. Therefore, \eqref{eq:distOPFu::zagent2} can be further decomposed into $|\Phi_i|$ subproblems of the form \begin{eqnarray} \min && f_i^{\phi}((s_{i0}^{\phi})^{(x)})+\frac{\rho}{2}\|(s_{i0}^{\phi})^{(x)}-\hat s_i^{\phi}\|_2^2 \nonumber\\ \mathrm{over} && (s_{i0}^\phi)^{(x)} \in\mathcal{I}_i^\phi , \label{eq:distOPFu::zagent21} \end{eqnarray} which takes the same form as \eqref{eq:distOPFu:closedformsol} in Theorem \ref{thm:distOPFu:closedformsol} and thus admits a closed form solution under the assumptions.
For the problem \eqref{eq:distOPFu::zagent_b} that updates $x_{i1}$, which consists of only one component $v_{i1}^{(x)}$, it can be written explicitly as \begin{eqnarray} \min && H_{i1}(x_{i1}) \nonumber\\ \mathrm{over} && x_{i1}=\{v_{i1}^{(x)}\} \label{eq:distOPFu::zagent_b}\\ \mathrm{s.t.}&& \underline v_i^\phi \leq (v_{i1}^{\phi\phi})^{(x)} \leq \overline v_i^\phi \qquad \phi\in\Phi_i ,\nonumber
\end{eqnarray} where $H_{i1}(x_{i1})$ is defined in \eqref{eq:admm:opt4:hix} and for our application, $\|x_{ir}-y_{ii}\|_{\Lambda_{ir}}^2$ is chosen to be \begin{eqnarray*}
\|x_{ir}-y_{ii}\|_{\Lambda_{ir}}^2=\|x_{ir}-y_{ii}\|_{2}^2. \end{eqnarray*} Then the closed form solution is given as \begin{eqnarray*} (v_{i1}^{\phi_1\phi_2})^{(x)}=\begin{cases} \left[\frac{\lambda_{i1}^{\phi_1\phi_2}}{\rho}+(v_{ii}^{\phi_1\phi_2})^{(y)}\right]_{\underline v_i^{\phi_1}}^{\overline v_i^{\phi_1}} & \phi_1=\phi_2\\ \frac{\lambda_{i1}^{\phi_1\phi_2}}{\rho}+(v_{ii}^{\phi_1\phi_2})^{(y)} & \phi_1\neq \phi_2 \end{cases}. \end{eqnarray*} To summarize, the subproblems in the $x$-update for each bus $i$ can be solved either through a closed form solution or an eigen-decomposition of a $6\times6$ matrix, which proves Theorem \ref{thm:distOPFu:closedformsol}.
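The closed form for the $x_{i1}$ update amounts to a shift by the scaled multiplier followed by clipping the diagonal of the voltage matrix; a minimal NumPy sketch (the function and variable names are ours) is:

```python
import numpy as np

def update_v(lam, v_y, rho, v_lo, v_hi):
    """Closed-form x_{i1} update (sketch): shift by the scaled multiplier,
    clip the diagonal entries (phi_1 == phi_2) to the voltage limits, and
    leave the off-diagonal entries (phi_1 != phi_2) unconstrained."""
    v = lam / rho + v_y
    diag = np.clip(v.diagonal().real, v_lo, v_hi)  # diagonal of v is real
    v[np.diag_indices_from(v)] = diag
    return v

# Example: one diagonal entry above and one below the [0.9, 1.1] band.
lam = np.zeros((2, 2), dtype=complex)
v_y = np.array([[1.2, 0.1 + 0.2j], [0.1 - 0.2j, 0.8]], dtype=complex)
v = update_v(lam, v_y, rho=1.0, v_lo=0.9, v_hi=1.1)
```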
In the $y$-update, the subproblem solved by each node $i$ takes the form of \eqref{eq:admm::opt4::zupdate1} and can be written explicitly as \begin{align} \min \ &G_i(\{y_{ji}\mid j\in N_i\})\nonumber\\ \mathrm{over} \ & \{y_{ji}\mid j\in N_i\} \label{eq:distOPFu::xagent}\\ \mathrm{s.t.} \ & \mathcal{P}_i(v^{(y)}_{A_ii})=v^{(y)}_{ii}-z_i(S^{(y)}_{ii})^H-S^{(y)}_{ii}z_i^H+z_i\ell^{(y)}_{ii}z_i^H\nonumber \\ \ &s^{(y)}_{ii}=-\text{diag}\left(\sum_{j\in C_i}\mathcal{P}_i(S^{(y)}_{ji}-z_j\ell^{(y)}_{ji})-S^{(y)}_{ii}\right), \nonumber \end{align} which has a closed form solution given in \eqref{eq:admm::opt2::zsol} and is not repeated here.
Finally, we specify the initialization and stopping criteria for the algorithm. Similar to the algorithm for balanced networks, a good initialization usually reduces the number of iterations for convergence. We use the following initialization suggested by our empirical results. We first initialize the auxiliary variables $\{V_i\mid i\in\mathcal{N}\}$ and $\{I_i\mid i\in\mathcal{E}\}$, which represent the complex nodal voltage and branch current, respectively. Then we use these auxiliary variables to initialize the variables in \eqref{eq:distOPFu::eropf}. Intuitively, the above initialization procedure can be interpreted as finding a solution assuming zero impedance on all the lines. The procedure is formally stated in Algorithm \ref{alg:distOPFu::initialize}.
\begin{algorithm}
\caption{Initialization of the Algorithm}
\label{alg:distOPFu::initialize}
\begin{algorithmic}[1]
\State $V_i^{a}=1$, $V_i^{b}=e^{-\mathbf{i}\frac{2}{3}\pi}$, $V_i^{c}=e^{\mathbf{i}\frac{2}{3}\pi}$ for $i\in\mathcal{N}$
\State Initialize $s_i^\phi$ using any point in the injection region $\mathcal{I}_i^\phi$ for $i\in\mathcal{N}$
\State Initialize $\{I_i^\phi\mid \phi\in\Phi_i, \ i\in\mathcal{N}\}$ by calling DFS($0$,$\phi$) for $\phi\in\Phi_i$
\State $v_{i0}^{(x)}=V_iV_i^H$, $\ell_{i0}^{(x)}=I_iI_i^H$, $S_{i0}^{(x)}=V_iI_i^H$ and $s_{i0}^{(x)}=s_i$ for $i\in\mathcal{N}$
\State $y_{ij}=x_{i0}$ for $j\in N_i$ and $i\in\mathcal{N}$
\State $x_{i1}=x_{i0}$ for $i\in \mathcal{N}$
\Statex \Function{DFS}{$i$,$\phi$}
\State $I_i^\phi=(\frac{s_i^{\phi}}{V_i^\phi})^*$
\For{$j\in C_i$}
\State $I_i^{\phi}+=\mathrm{DFS}(j,\phi)$
\EndFor
\State \Return $I_i^\phi$ \EndFunction
\end{algorithmic}
\end{algorithm}
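Algorithm \ref{alg:distOPFu::initialize} is essentially a single depth-first pass that aggregates nodal injection currents up the tree; a minimal single-phase Python sketch (the data structures are ours) is:

```python
def init_currents(children, s, V, root=0):
    """DFS pass of the initialization algorithm (single phase, sketch):
    I_i = conj(s_i / V_i) plus the branch currents of all children."""
    I = {}

    def dfs(i):
        I[i] = (s[i] / V[i]).conjugate()  # nodal injection current
        for j in children.get(i, []):
            I[i] += dfs(j)                # accumulate downstream currents
        return I[i]

    dfs(root)
    return I

# Example on a two-bus line 0 -> 1 with unit voltages.
children = {0: [1], 1: []}
s = {0: 1 + 1j, 1: 2 + 0j}
V = {0: 1 + 0j, 1: 1 + 0j}
I = init_currents(children, s, V)
```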
For the stopping criterion, there is no general rule for ADMM-based algorithms, and it usually hinges on the particular problem. In \cite{boyd2011distributed}, it is argued that a reasonable stopping criterion is that both the primal residual $r^k$ defined in \eqref{eq:pfeasible} and the dual residual $s^k$ defined in \eqref{eq:dfeasible} are below $10^{-4}\sqrt{|\mathcal{N}|}$. We adopt this criterion, and the empirical results show that the solution is accurate enough. The pseudo code for the complete algorithm is summarized in Algorithm \ref{alg:distOPFu::alg}.
\begin{algorithm}
\caption{Distributed OPF algorithm on Unbalanced Radial Networks}
\label{alg:distOPFu::alg}
\begin{algorithmic}[1]
\State {\bf Input:} network $\mathcal{G}(\mathcal{N},\mathcal{E})$, power injection region $\mathcal{I}_i$, voltage region $(\underline v_i,\overline v_i)$, line impedance $z_i$ for $i\in\mathcal{N}$.
\State {\bf Output:} voltage $v$, power injection $s$
\Statex
\State Initialize the $x$ and $y$ variables using Algorithm \ref{alg:distOPFu::initialize}.
\While{$r^k>10^{-4}\sqrt{|\mathcal{N}|}$ \textbf{or} $s^k>10^{-4}\sqrt{|\mathcal{N}|}$ }
\State In the $x$-update, each agent $i$ solves both \eqref{eq:distOPFu::zagent_a} and \eqref{eq:distOPFu::zagent_b} to update $x_{i0}$ and $x_{i1}$.
\State In the $y$-update, each agent $i$ solves \eqref{eq:distOPFu::xagent} to update $y_{ji}$ for $j\in N_i$.
\State In the multiplier update, update $\lambda,\mu$ by \eqref{eq:mupdate}.
\EndWhile
\end{algorithmic}
\end{algorithm}
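The termination test of Algorithm \ref{alg:distOPFu::alg} is a simple threshold on both residual norms; as a sketch (the function name is ours):

```python
import math

def converged(r_k, s_k, n_buses, tol=1e-4):
    """Stopping test: both the primal residual r^k and the dual residual
    s^k must fall below tol * sqrt(|N|)."""
    eps = tol * math.sqrt(n_buses)
    return r_k <= eps and s_k <= eps
```

For $|\mathcal{N}|=100$ buses the threshold is $10^{-3}$, so the loop keeps iterating until both residuals drop below that value.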
\section{Case Study}\label{sec:case}
In this section, we first demonstrate the scalability of the distributed algorithm proposed in Section \ref{sec:dalg} by testing it on the standard IEEE test feeders \cite{kersting1991radial}. To show the efficiency of the proposed algorithm, we also compare the computation time of solving the subproblems in both the $x$ and $y$-update with an off-the-shelf solver (CVX). Second, we run the proposed algorithm on networks of different topologies to understand the factors that affect the convergence rate. The algorithm is implemented in Python and run on a 2014 MacBook Pro with a dual-core i5 processor.
\subsection{Simulations on IEEE test feeders}\label{sec:distOPFu::case1}
We test the proposed algorithm on the IEEE 13, 34, 37 and 123 bus distribution systems. All the networks are unbalanced three-phase systems. The substation is modeled as a fixed voltage bus ($1$ p.u.) with infinite power injection capability. The other buses are modeled as load buses, whose voltage magnitude at each phase can vary within $[0.95,1.05]$ p.u. and whose power injections are specified in the test feeder. There is no controllable device in the original IEEE test feeders, and hence the OPF problem degenerates to a power flow problem, which is easy to solve. To demonstrate the effectiveness of the algorithm, we replace all the capacitors with inverters, whose reactive power injection ranges from $0$ to the maximum rating specified by the original capacitors. The objective is to minimize power loss across the network, namely $f_i^{\phi}(s_i^\phi)=p_i^\phi$ for $\phi\in\Phi_i$ and $i\in\mathcal{N}$.
We mainly focus on the time of convergence (ToC) for the proposed distributed algorithm. The algorithm is run on a single machine. To roughly estimate the ToC (excluding communication overhead) if the algorithm is run on multiple machines in a distributed manner, we divide the total time by the number of buses.
\begin{table} \caption{Statistics of different networks} \begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline Network &Diameter& Iteration & Total Time(s) & Avg time(s)\\ \hline IEEE 13Bus &6& 289 & 17.11 & 1.32\\ \hline IEEE 34Bus &20&547 & 78.34 & 2.30\\ \hline IEEE 37Bus &16&440 &75.67 & 2.05\\ \hline IEEE 123Bus &30&608 &306.3 & 2.49\\ \hline \end{tabular} \end{center} \label{tab:statistics} \end{table}
In Table \ref{tab:statistics}, we record the number of iterations to converge, the total computation time required on a single machine, and the average time required per node if the algorithm is run on multiple machines, excluding communication overhead. From the simulation results, the proposed algorithm converges within $2.5$ seconds for all the standard IEEE test networks if run in a distributed manner.
Moreover, we show the advantage of the proposed algorithm by comparing the computation time of solving the subproblems between an off-the-shelf solver (CVX \cite{grant2008cvx}) and our algorithm. In particular, we compare the average computation time of the subproblems in both the $x$ and $y$-update. In the $y$-update, the average time required to solve the subproblem \eqref{eq:distOPFu::xagent} is $9.8\times10^{-5}$s for our algorithm but $0.13$s for CVX. In the $x$-update, the average time required to solve the subproblems \eqref{eq:distOPFu::zagent_a}--\eqref{eq:distOPFu::zagent_b} is $3.7\times10^{-3}$s for our algorithm but $0.45$s for CVX. Thus, each ADMM iteration takes about $3.8\times 10^{-3}$s for our algorithm but $5.8\times 10^{-1}$s using the off-the-shelf solver, a more than 100x speedup.
\subsection{Impact of Network Topology}
In section \ref{sec:distOPFu::case1}, we demonstrate that the proposed distributed algorithm can dramatically reduce the computation time within each iteration. The time of convergence (ToC) is determined by both the computation time required within each iteration and the number of iterations. In this subsection, we study the number of iterations, namely rate of convergence.
The rate of convergence is determined by many different factors. Here, we only consider two of them: the network size $N$ and the diameter $D$, i.e., given the termination criterion in Algorithm \ref{alg:distOPFu::alg}, the impact of network size and diameter on the number of iterations. The impact of other factors, e.g. the form of the objective function and constraints, is beyond the scope of this paper.
To illustrate the impact of the network size $N$ and diameter $D$ on the rate of convergence, we simulate the algorithm on two extreme cases: 1) the line network in Fig. \ref{fig:line}, whose diameter is the largest given the network size, and 2) the fat tree network in Fig. \ref{fig:fattree}, whose diameter is the smallest given the network size. In Table \ref{tab:statistics2}, we record the number of iterations for line and fat tree networks of different sizes. For the line network, the number of iterations increases notably as the size increases. For the fat tree network, the trend is less obvious. This suggests that the network diameter has a stronger impact than the network size on the rate of convergence.
\begin{figure}
\caption{Topologies for line and fat tree networks.}
\label{fig:line}
\label{fig:fattree}
\label{fig:network}
\end{figure}
\begin{table} \caption{Statistics of line and fat tree networks} \begin{center}
\begin{tabular}{|c|c|c|} \hline Size & $\#$ of iterations (Line) & $\#$ of iterations (Fat tree) \\ \hline $5$ & $57$ & $61$ \\ \hline $10$ & $253$ & $111$\\ \hline $15$ & $414$ & $156$\\ \hline $20$ & $579$ & $197$\\ \hline $25$ & $646$ & $238$\\ \hline $30$ & $821$ & $272$\\ \hline $35$ & $1353$ & $304$\\ \hline $40$ & $2032$ & $337$\\ \hline $45$ & $2026$ & $358$\\ \hline $50$ & $6061$ & $389$\\ \hline \end{tabular} \end{center} \label{tab:statistics2} \end{table}
\section{Conclusion}\label{sec:conclusion} In this paper, we have developed a distributed algorithm for the optimal power flow problem on unbalanced distribution systems based on the alternating direction method of multipliers. We have derived efficient solutions for the subproblems solved by each agent, thus significantly reducing the computation time. Preliminary simulations show that the algorithm is scalable to all the IEEE test distribution systems.
\appendices \section{Proof of Theorem \ref{thm:distOPFu::1}}\label{app:distOPFu::1} Let $\Lambda_W:=\text{diag}(\lambda_i,1\leq i\leq n)$ denote the diagonal matrix consisting of the eigenvalues of the matrix $W$. Let $U:=(u_i,1\leq i\leq n)$ denote the corresponding unitary matrix of orthonormal eigenvectors. Since $W\in\mathbb{H}^n$, $U^{-1}=U^H$ and $W=U\Lambda_WU^H$. Then \begin{eqnarray*}
\|X-W\|_2^2&=&tr((X-W)^H(X-W))\\ &=&tr((X-W)(X-W))\\ &=&tr(U^H(X-W)UU^H(X-W)U)\\ &=&tr((U^HXU-\Lambda_W)(U^HXU-\Lambda_W)). \end{eqnarray*} Denote $\hat X:=U^HXU=(\hat x_{i,j},i,j\in[1,n])$, note that $\hat X\in\mathbb{S}_+$ since $X\in\mathbb{S}_+$. Then \begin{eqnarray}
\|X-W\|_2^2&=& \sum_{i=1}^n(\hat x_{ii}-\lambda_i)^2+\sum_{i\neq j}|\hat x_{ij}|^2\\ &\geq& \sum_{i=1}^n(\hat x_{ii}-\lambda_i)^2\\ &\geq& \sum_{i:\lambda_i\leq 0}\lambda_i^2, \label{eq:fbound} \end{eqnarray} where the last inequality follows from $\hat x_{ii}\geq 0$ because $\hat X\in\mathbb{S}_+$. The equality in \eqref{eq:fbound} can be obtained by letting \begin{eqnarray*} \hat x_{ij}:=\left\{
\begin{array}{ll}
\lambda_i & i=j, \ \ \lambda_i>0, \\
0 & \text{ otherwise}
\end{array},
\right. \end{eqnarray*} which means $X(W)=U\hat X U^H=\sum_{i:\lambda_i>0}\lambda_iu_iu_i^H$.
\section{Solution Procedure for Problem \eqref{eq:distOPFu:closedformsol}.}\label{app:distOPFb::solver2}
We assume $f_i^\phi\left(s\right):=\frac{\alpha_i}{2} p^2+\beta_i p$ $(\alpha_i, \beta_i\geq 0)$ and derive a closed form solution to \eqref{eq:distOPFu:closedformsol}.
\subsection{$\mathcal{I}_i$ takes the form of \eqref{eq:opf::unbalance::I2}} In this case, \eqref{eq:distOPFu:closedformsol} takes the following form: \begin{eqnarray*} \min_{p,q} && \frac{a_1}{2}p^2+b_1p+\frac{a_2}{2}q^2+b_2q\\ \text{s.t. } && \underline p_i\leq p\leq \overline p_i \\ && \underline q_i\leq q\leq \overline q_i , \end{eqnarray*} where $a_1,a_2>0$ and $b_1,b_2$ are constants. Then the closed form solution is \begin{eqnarray*} p=\left[-\frac{b_1}{a_1}\right]_{\underline p_i}^{\overline p_i} \quad q=\left[-\frac{b_2}{a_2}\right]_{\underline q_i}^{\overline q_i} , \end{eqnarray*} where $[x]_a^b:=\min\{b,\max\{x,a\}\}$.
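The box-constrained closed form is just the unconstrained minimizer of each separable quadratic, clipped to its interval; a Python sketch (function names are ours) is:

```python
def clip(x, lo, hi):
    """[x] with lower bound lo and upper bound hi: min(hi, max(x, lo))."""
    return min(hi, max(x, lo))

def solve_box(a1, b1, a2, b2, p_lo, p_hi, q_lo, q_hi):
    """Closed-form solution of the separable quadratic over a box:
    minimize a1/2 p^2 + b1 p + a2/2 q^2 + b2 q over the box limits.
    The unconstrained minimizers are -b1/a1 and -b2/a2."""
    return clip(-b1 / a1, p_lo, p_hi), clip(-b2 / a2, q_lo, q_hi)

# Example: the p-minimizer (2.0) is clipped to 1.0 and the q-minimizer
# (-5.0) is clipped to -1.0.
p, q = solve_box(1.0, -2.0, 1.0, 5.0, 0.0, 1.0, -1.0, 1.0)
```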
\subsection{$\mathcal{I}_i$ takes the form of \eqref{eq:opf::unbalance::I1}} The optimization problem \eqref{eq:distOPFu:closedformsol} takes the following form: \begin{subequations}\label{eq:app2} \begin{eqnarray} \min_{p,q} && \frac{a_1}{2}p^2+b_1p+\frac{a_2}{2}q^2+b_2q\\ \text{s.t. } && p^2+q^2\leq c^2 \label{eq:app2:1}\\ && p \geq 0 , \label{eq:app2:2} \end{eqnarray} \end{subequations} where $a_1,a_2,c>0$ and $b_1,b_2$ are constants. The solutions to \eqref{eq:app2} are given below. {\flushleft\bf Case 1}: $b_1\geq 0$: \begin{eqnarray*} p^*=0 \qquad q^*=\left[-\frac{b_2}{a_2}\right]_{-c}^{c}. \end{eqnarray*} {\flushleft\bf Case 2}: $b_1< 0$ and $\frac{b^2_1}{a^2_1}+\frac{b^2_2}{a_2^2}\leq c^2$: \begin{eqnarray*} p^*=-\frac{b_1}{a_1} \qquad q^*=-\frac{b_2}{a_2}. \end{eqnarray*} {\flushleft\bf Case 3}: $b_1< 0$ and $\frac{b^2_1}{a^2_1}+\frac{b^2_2}{a_2^2}> c^2$:\\ First solve the following equation in the variable $\lambda$: \begin{eqnarray} b_1^2(a_2+2\lambda)^2+b_2^2(a_1+2\lambda)^2=(a_1+2\lambda)^2(a_2+2\lambda)^2, \label{eq:app3:1} \end{eqnarray} which is a polynomial equation of degree $4$ and admits a closed form solution. Among the four solutions to \eqref{eq:app3:1}, exactly one is strictly positive, denoted $\lambda^*$, which can be proved via the KKT conditions of \eqref{eq:app2}. Then we recover $p^*,q^*$ from $\lambda^*$ using the following equations: \begin{eqnarray*}
p^*=-\frac{b_1}{a_1+2\lambda^*} \quad \text{ and }\quad q^*=-\frac{b_2}{a_2+2\lambda^*}. \end{eqnarray*}
The above procedure to solve \eqref{eq:app2} is derived from standard applications of the KKT conditions of \eqref{eq:app2}. For brevity, we skip the proof here.
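Case 3 can also be checked numerically: expand \eqref{eq:app3:1} into a degree-4 polynomial in $\lambda$, take its unique positive root, and recover $(p^*,q^*)$, which then lies exactly on the circle $p^2+q^2=c^2$. A NumPy sketch (function and variable names are ours) is:

```python
import numpy as np

def solve_case3(a1, b1, a2, b2):
    """Solve b1^2 (a2+2l)^2 + b2^2 (a1+2l)^2 = (a1+2l)^2 (a2+2l)^2 for the
    unique positive root l*, then p* = -b1/(a1+2l*), q* = -b2/(a2+2l*)."""
    A = np.array([2.0, a1])              # polynomial 2*lambda + a1 (highest degree first)
    B = np.array([2.0, a2])              # polynomial 2*lambda + a2
    A2, B2 = np.polymul(A, A), np.polymul(B, B)
    P = np.polymul(A2, B2)               # degree-4 right-hand side
    P[2:] -= b1**2 * B2 + b2**2 * A2     # move the degree-2 terms to one side
    roots = np.roots(P)
    lam = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return -b1 / (a1 + 2 * lam), -b2 / (a2 + 2 * lam)

# Example: a1 = a2 = 1, b1 = b2 = -1, c = 1 falls in Case 3 since
# b1^2/a1^2 + b2^2/a2^2 = 2 > 1 = c^2; the optimum is (1/sqrt(2), 1/sqrt(2)).
p, q = solve_case3(1.0, -1.0, 1.0, -1.0)
```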
\end{document} | arXiv |
January 2021, 8(1): 35-59. doi: 10.3934/jdg.2020033
A Mean Field Games model for finite mixtures of Bernoulli and categorical distributions
Laura Aquilanti 1, , Simone Cacace 2, , Fabio Camilli 1,, and Raul De Maio 3,
SBAI, Sapienza Università di Roma, Via A. Scarpa 16, 00161 Roma, Italy
Dip. di Matematica e Fisica, Università degli Studi Roma Tre, Largo S. L. Murialdo 1, 00146 Roma, Italy
IConsulting, Via della Conciliazione 10, 00193 Roma, Italy
* Corresponding author: Fabio Camilli
Received May 2020; Revised November 2020; Published January 2021; Early access December 2020
Finite mixture models are an important tool in the statistical analysis of data, for example in data clustering. The optimal parameters of a mixture model are usually computed by maximizing the log-likelihood functional via the Expectation-Maximization algorithm. We propose an alternative approach based on the theory of Mean Field Games, a class of differential games with an infinite number of agents. We show that the solution of a finite state space multi-population Mean Field Games system characterizes the critical points of the log-likelihood functional for a Bernoulli mixture. The approach is then generalized to mixture models of categorical distributions. Hence, the Mean Field Games approach provides a method to compute the parameters of the mixture model, and we show its application to some standard examples in cluster analysis.
Keywords: Mixture models, Bernoulli distribution, categorical distribution, cluster analysis, Expectation-Maximization algorithm, Mean Field Games.
Mathematics Subject Classification: 62H30, 60J10, 49N80, 91C20.
Citation: Laura Aquilanti, Simone Cacace, Fabio Camilli, Raul De Maio. A Mean Field Games model for finite mixtures of Bernoulli and categorical distributions. Journal of Dynamics & Games, 2021, 8 (1) : 35-59. doi: 10.3934/jdg.2020033
Small-bias sample space
In theoretical computer science, a small-bias sample space (also known as $\epsilon $-biased sample space, $\epsilon $-biased generator, or small-bias probability space) is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.
The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs. The connection with error-correcting codes is in fact very strong since $\epsilon $-biased sample spaces are equivalent to $\epsilon $-balanced error-correcting codes.
Definition
Bias
Let $X$ be a probability distribution over $\{0,1\}^{n}$. The bias of $X$ with respect to a set of indices $I\subseteq \{1,\dots ,n\}$ is defined as[1]
${\text{bias}}_{I}(X)=\left|\Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=0\right)-\Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=1\right)\right|=\left|2\cdot \Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=0\right)-1\right|\,,$
where the sum is taken over $\mathbb {F} _{2}$, the finite field with two elements. In other words, the sum $\sum _{i\in I}x_{i}$ equals $0$ if the number of ones in the sample $x\in \{0,1\}^{n}$ at the positions defined by $I$ is even, and otherwise, the sum equals $1$. For $I=\emptyset $, the empty sum is defined to be zero, and hence ${\text{bias}}_{\emptyset }(X)=1$.
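To make the definition concrete, the following sketch (the helper name `bias` is our own) computes the bias of the uniform distribution over a finite multiset of bit-strings. For the uniform distribution over all of $\{0,1\}^{n}$, every non-empty $I$ has bias $0$, while the empty set has bias $1$, as noted above:

```python
from itertools import product

def bias(samples, I):
    """|2 * Pr[sum of x_i over i in I is even] - 1| for x uniform over samples."""
    even = sum(1 for x in samples if sum(x[i] for i in I) % 2 == 0)
    return abs(2 * even / len(samples) - 1)

uniform = list(product([0, 1], repeat=3))  # all of {0,1}^3
print(bias(uniform, [0, 2]))  # 0.0 -- parities are perfectly balanced
print(bias(uniform, []))      # 1.0 -- the empty sum is always 0
```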
ϵ-biased sample space
A probability distribution $X$ over $\{0,1\}^{n}$ is called an $\epsilon $-biased sample space if ${\text{bias}}_{I}(X)\leq \epsilon $ holds for all non-empty subsets $I\subseteq \{1,2,\ldots ,n\}$.
ϵ-biased set
An $\epsilon $-biased sample space $X$ that is generated by picking a uniform element from a multiset $X\subseteq \{0,1\}^{n}$ is called an $\epsilon $-biased set. The size $s$ of an $\epsilon $-biased set $X$ is the size of the multiset that generates the sample space.
ϵ-biased generator
An $\epsilon $-biased generator $G:\{0,1\}^{\ell }\to \{0,1\}^{n}$ is a function that maps strings of length $\ell $ to strings of length $n$ such that the multiset $X_{G}=\{G(y)\;\vert \;y\in \{0,1\}^{\ell }\}$ is an $\epsilon $-biased set. The seed length of the generator is the number $\ell $ and is related to the size of the $\epsilon $-biased set $X_{G}$ via the equation $s=2^{\ell }$.
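The relation $s=2^{\ell }$ reflects the fact that a multiset of size $2^{\ell }$ and a generator with seed length $\ell $ carry the same information: the seed simply indexes an element. The wrapper below (hypothetical naming, for illustration only) makes this bookkeeping explicit:

```python
import math

def set_to_generator(X):
    """Wrap a multiset X of size s = 2^l over {0,1}^n as a map
    G: {0,1}^l -> {0,1}^n, letting the seed index the elements of X."""
    l = int(math.log2(len(X)))
    assert 2 ** l == len(X), "multiset size must be a power of two"
    def G(seed):
        idx = int("".join(map(str, seed)), 2)  # read the seed as a binary integer
        return X[idx]
    return G, l

G, l = set_to_generator([(0, 0), (0, 1), (1, 0), (1, 1)])
print(l)          # 2, since s = 4 = 2^2
print(G((1, 0)))  # (1, 0)
```

Real constructions aim to evaluate $G$ without storing the whole multiset; this wrapper only illustrates the seed-length relation.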
Connection with epsilon-balanced error-correcting codes
There is a close connection between $\epsilon $-biased sets and $\epsilon $-balanced linear error-correcting codes. A linear code $C:\{0,1\}^{n}\to \{0,1\}^{s}$ of message length $n$ and block length $s$ is $\epsilon $-balanced if the Hamming weight of every nonzero codeword $C(x)$ is between ${\frac {(1-\epsilon )s}{2}}$ and ${\frac {(1+\epsilon )s}{2}}$. Since $C$ is a linear code, its generator matrix is an $(n\times s)$-matrix $A$ over $\mathbb {F} _{2}$ with $C(x)=x\cdot A$.
Then it holds that a multiset $X\subset \{0,1\}^{n}$ is $\epsilon $-biased if and only if the linear code $C_{X}$, whose columns are exactly elements of $X$, is $\epsilon $-balanced.[2]
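The equivalence can be checked mechanically on a small example (the multiset $X$ below is hypothetical): for the message $x=1_{I}$, the $j$-th bit of $C_{X}(x)$ is the parity $\sum _{i\in I}x_{i}$ evaluated at the $j$-th element of $X$, so ${\text{bias}}_{I}(X)=|1-2\,\mathrm{wt}(C_{X}(x))/s|$ holds codeword by codeword:

```python
from itertools import product

def bias(samples, I):
    even = sum(1 for x in samples if sum(x[i] for i in I) % 2 == 0)
    return abs(2 * even / len(samples) - 1)

# elements of X are the columns of the generator matrix A of C_X
X = [(0, 1, 1), (1, 0, 1), (1, 1, 1), (0, 0, 1)]
n, s = 3, len(X)

for msg in product([0, 1], repeat=n):
    if not any(msg):
        continue  # the balance condition concerns nonzero codewords only
    I = [i for i in range(n) if msg[i] == 1]
    # j-th bit of the codeword C_X(msg) is <msg, X[j]> over F_2
    weight = sum(sum(m * c for m, c in zip(msg, col)) % 2 for col in X)
    assert bias(X, I) == abs(1 - 2 * weight / s)
print("bias of X matches codeword balance of C_X")
```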
Constructions of small epsilon-biased sets
Usually the goal is to find $\epsilon $-biased sets that have a small size $s$ relative to the parameters $n$ and $\epsilon $. This is because a smaller size $s$ means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits.
Theoretical bounds
The probabilistic method gives a non-explicit construction that achieves size $s=O(n/\epsilon ^{2})$.[2] The construction is non-explicit in the sense that finding the $\epsilon $-biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness. However, this non-explicit construction is useful because it shows that such small $\epsilon $-biased sets exist. On the other hand, the best known lower bound for the size of $\epsilon $-biased sets is $s=\Omega (n/(\epsilon ^{2}\log(1/\epsilon )))$; that is, in order for a set to be $\epsilon $-biased, it must be at least that big.[2]
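The probabilistic argument is easy to reproduce empirically: by a Chernoff and union bound, a uniformly random multiset of size $c\,n/\epsilon ^{2}$ is $\epsilon $-biased except with exponentially small probability. The constant $c=8$ below is an arbitrary choice for the demonstration, and the brute-force check is only feasible for small $n$:

```python
import random

def max_bias(X, n):
    """Largest bias over all 2^n - 1 non-empty index sets (fine for small n)."""
    worst = 0.0
    for m in range(1, 2 ** n):
        I = [i for i in range(n) if m >> i & 1]
        even = sum(1 for x in X if sum(x[i] for i in I) % 2 == 0)
        worst = max(worst, abs(2 * even / len(X) - 1))
    return worst

random.seed(1)
n, eps = 8, 0.5
s = int(8 * n / eps ** 2)  # s = 256 here
X = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(s)]
print(max_bias(X, n) <= eps)  # True except with probability far below 2^-30
```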
Explicit constructions
There are many explicit, i.e., deterministic constructions of $\epsilon $-biased sets with various parameter settings:
• Naor & Naor (1990) achieve $\displaystyle s={\frac {n}{{\text{poly}}(\epsilon )}}$. The construction makes use of Justesen codes (a concatenation of Reed–Solomon codes with the Wozencraft ensemble) as well as expander walk sampling.
• Alon et al. (1992) achieve $\displaystyle s=O\left({\frac {n}{\epsilon \log(n/\epsilon )}}\right)^{2}$. One of their constructions is the concatenation of Reed–Solomon codes with the Hadamard code; this concatenation turns out to be an $\epsilon $-balanced code, which gives rise to an $\epsilon $-biased sample space via the connection mentioned above.
• Concatenating Algebraic geometric codes with the Hadamard code gives an $\epsilon $-balanced code with $\displaystyle s=O\left({\frac {n}{\epsilon ^{3}\log(1/\epsilon )}}\right)$.[2]
• Ben-Aroya & Ta-Shma (2009) achieve $\displaystyle s=O\left({\frac {n}{\epsilon ^{2}\log(1/\epsilon )}}\right)^{5/4}$.
• Ta-Shma (2017) achieves $\displaystyle s=O\left({\frac {n}{\epsilon ^{2+o(1)}}}\right)$ which is almost optimal because of the lower bound.
These bounds are mutually incomparable. In particular, none of these constructions yields the smallest $\epsilon $-biased sets for all settings of $\epsilon $ and $n$.
Application: almost k-wise independence
An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces.
k-wise independent spaces
A random variable $Y$ over $\{0,1\}^{n}$ is a k-wise independent space if, for all index sets $I\subseteq \{1,\dots ,n\}$ of size $k$, the marginal distribution $Y|_{I}$ is exactly equal to the uniform distribution over $\{0,1\}^{k}$. That is, for all such $I$ and all strings $z\in \{0,1\}^{k}$, the distribution $Y$ satisfies $\Pr _{Y}(Y|_{I}=z)=2^{-k}$.
Constructions and bounds
k-wise independent spaces are fairly well understood.
• A simple construction by Joffe (1974) achieves size $n^{k}$.
• Alon, Babai & Itai (1986) construct a k-wise independent space whose size is $n^{k/2}$.
• Chor et al. (1985) prove that no k-wise independent space can be significantly smaller than $n^{k/2}$.
Joffe's construction
Joffe (1974) constructs a $k$-wise independent space $Y$ over the finite field with some prime number $n>k$ of elements, i.e., $Y$ is a distribution over $\mathbb {F} _{n}^{n}$. The initial $k$ marginals of the distribution are drawn independently and uniformly at random:
$(Y_{0},\dots ,Y_{k-1})\sim \mathbb {F} _{n}^{k}$.
For each $i$ with $k\leq i<n$, the marginal distribution of $Y_{i}$ is then defined as
$Y_{i}=Y_{0}+Y_{1}\cdot i+Y_{2}\cdot i^{2}+\dots +Y_{k-1}\cdot i^{k-1}\,,$
where the calculation is done in $\mathbb {F} _{n}$. Joffe (1974) proves that the distribution $Y$ constructed in this way is $k$-wise independent as a distribution over $\mathbb {F} _{n}^{n}$. The distribution $Y$ is uniform on its support, and hence, the support of $Y$ forms a $k$-wise independent set. It contains all $n^{k}$ strings in $\mathbb {F} _{n}^{k}$ that have been extended to strings of length $n$ using the deterministic rule above.
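Joffe's rule is short enough to implement directly. The sketch below (our own naming) builds the full support for $n=5$, $k=2$ and confirms pairwise independence: the map $(Y_{0},Y_{1})\mapsto (Y_{0}+iY_{1},Y_{0}+jY_{1})$ is a bijection of $\mathbb {F} _{5}^{2}$ whenever $i\neq j$, so every pair of coordinates takes each value of $\mathbb {F} _{5}\times \mathbb {F} _{5}$ exactly once over the $n^{k}=25$ seeds:

```python
from itertools import product, combinations

def joffe_space(n, k):
    """Support of Joffe's k-wise independent space over F_n (n prime):
    each seed (Y_0, ..., Y_{k-1}) is extended by Y_i = sum_j Y_j * i^j mod n."""
    return [tuple(sum(seed[j] * pow(i, j, n) for j in range(k)) % n
                  for i in range(n))
            for seed in product(range(n), repeat=k)]

n, k = 5, 2
space = joffe_space(n, k)  # n^k = 25 strings of length 5 over F_5

# every pair of coordinates is exactly uniform over F_5 x F_5
pairwise = all(
    sum(1 for y in space if (y[i], y[j]) == z) == 1
    for i, j in combinations(range(n), 2)
    for z in product(range(n), repeat=2)
)
print(len(space), pairwise)  # 25 True
```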
Almost k-wise independent spaces
A random variable $Y$ over $\{0,1\}^{n}$ is a $\delta $-almost k-wise independent space if, for all index sets $I\subseteq \{1,\dots ,n\}$ of size $k$, the restricted distribution $Y|_{I}$ and the uniform distribution $U_{k}$ on $\{0,1\}^{k}$ are $\delta $-close in 1-norm, i.e., ${\Big \|}Y|_{I}-U_{k}{\Big \|}_{1}\leq \delta $.
Constructions
Naor & Naor (1990) give a general framework for combining small k-wise independent spaces with small $\epsilon $-biased spaces to obtain $\delta $-almost k-wise independent spaces of even smaller size. In particular, let $G_{1}:\{0,1\}^{h}\to \{0,1\}^{n}$ be a linear mapping that generates a k-wise independent space and let $G_{2}:\{0,1\}^{\ell }\to \{0,1\}^{h}$ be a generator of an $\epsilon $-biased set over $\{0,1\}^{h}$. That is, when given a uniformly random input, the output of $G_{1}$ is a k-wise independent space, and the output of $G_{2}$ is $\epsilon $-biased. Then $G:\{0,1\}^{\ell }\to \{0,1\}^{n}$ with $G(x)=G_{1}(G_{2}(x))$ is a generator of a $\delta $-almost $k$-wise independent space, where $\delta =2^{k/2}\epsilon $.[3]
As mentioned above, Alon, Babai & Itai (1986) construct a generator $G_{1}$ with $h={\tfrac {k}{2}}\log n$, and Naor & Naor (1990) construct a generator $G_{2}$ with $\ell =\log s=\log h+O(\log(\epsilon ^{-1}))$. Hence, the concatenation $G$ of $G_{1}$ and $G_{2}$ has seed length $\ell =\log k+\log \log n+O(\log(\epsilon ^{-1}))$. In order for $G$ to yield a $\delta $-almost k-wise independent space, we need to set $\epsilon =\delta 2^{-k/2}$, which leads to a seed length of $\ell =\log \log n+O(k+\log(\delta ^{-1}))$ and a sample space of total size $2^{\ell }\leq \log n\cdot {\text{poly}}(2^{k}\cdot \delta ^{-1})$.
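The factor $2^{k/2}$ comes from a Fourier-analytic (Vazirani XOR lemma-type) argument: the 1-norm distance of any restriction $Y|_{I}$ with $|I|=k$ from uniform is at most $2^{k/2}$ times the largest bias. The sketch below (hypothetical helper names) verifies this inequality exhaustively for a small random multiset:

```python
import random
from itertools import product, combinations

def bias(X, I):
    even = sum(1 for x in X if sum(x[i] for i in I) % 2 == 0)
    return abs(2 * even / len(X) - 1)

def l1_to_uniform(X, I):
    """|| X|_I - U_k ||_1 for the restriction of X to the coordinates in I."""
    k = len(I)
    counts = {z: 0 for z in product([0, 1], repeat=k)}
    for x in X:
        counts[tuple(x[i] for i in I)] += 1
    return sum(abs(c / len(X) - 2 ** -k) for c in counts.values())

random.seed(2)
n, k = 6, 3
X = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(40)]
eps = max(bias(X, [i for i in range(n) if m >> i & 1]) for m in range(1, 2 ** n))

ok = all(l1_to_uniform(X, list(I)) <= 2 ** (k / 2) * eps
         for I in combinations(range(n), k))
print(ok)  # True: every size-3 restriction is (2^{3/2} * eps)-close to uniform
```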
Notes
1. cf., e.g., Goldreich (2001)
2. cf., e.g., p. 2 of Ben-Aroya & Ta-Shma (2009)
3. Section 4 in Naor & Naor (1990)
References
• Alon, Noga; Babai, László; Itai, Alon (1986), "A fast and simple randomized parallel algorithm for the maximal independent set problem" (PDF), Journal of Algorithms, 7 (4): 567–583, doi:10.1016/0196-6774(86)90019-2
• Alon, Noga; Goldreich, Oded; Håstad, Johan; Peralta, René (1992), "Simple Constructions of Almost k-wise Independent Random Variables" (PDF), Random Structures & Algorithms, 3 (3): 289–304, CiteSeerX 10.1.1.106.6442, doi:10.1002/rsa.3240030308
• Ben-Aroya, Avraham; Ta-Shma, Amnon (2009). "Constructing Small-Bias Sets from Algebraic-Geometric Codes". 2009 50th Annual IEEE Symposium on Foundations of Computer Science (PDF). pp. 191–197. CiteSeerX 10.1.1.149.9273. doi:10.1109/FOCS.2009.44. ISBN 978-1-4244-5116-6.
• Chor, Benny; Goldreich, Oded; Håstad, Johan; Freidmann, Joel; Rudich, Steven; Smolensky, Roman (1985). "The bit extraction problem or t-resilient functions". 26th Annual Symposium on Foundations of Computer Science (SFCS 1985). pp. 396–407. CiteSeerX 10.1.1.39.6768. doi:10.1109/SFCS.1985.55. ISBN 978-0-8186-0644-1. S2CID 6968065.
• Goldreich, Oded (2001), Lecture 7: Small bias sample spaces
• Joffe, Anatole (1974), "On a Set of Almost Deterministic k-Independent Random Variables", Annals of Probability, 2 (1): 161–162, doi:10.1214/aop/1176996762
• Naor, Joseph; Naor, Moni (1990), "Small-bias probability spaces: Efficient constructions and applications", Proceedings of the twenty-second annual ACM symposium on Theory of computing - STOC '90, pp. 213–223, CiteSeerX 10.1.1.421.2784, doi:10.1145/100216.100244, ISBN 978-0897913614, S2CID 14031194
• Ta-Shma, Amnon (2017), "Explicit, almost optimal, epsilon-balanced codes", Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 238–251, doi:10.1145/3055399.3055408, ISBN 9781450345286, S2CID 5648543
| Wikipedia |
A positive two-digit number is even and is a multiple of 11. The product of its digits is a perfect cube. What is this two-digit number?
Let $N$ be the desired two-digit number. $N$ is divisible by 2 and by 11, and $(2,11)=1$, so $N$ is divisible by 22. Thus, $N\in\{22, 44, 66, 88\}$. Only 88 is such that the product of its digits is a perfect cube ($8\cdot8=64=4^3$), so $N=\boxed{88}$. | Math Dataset |
Find all values of $x$ such that $\displaystyle\frac{1}{x-1} + \frac{2x}{x - 1} = 5$.
We can combine the two terms on the left side to get $\dfrac{1+2x}{x-1} = 5$. We then multiply both sides of this equation by $x-1$ to get rid of the fractions. This gives us $1+2x = 5(x-1)$. Expanding the right side gives $1+2x = 5x -5$. Subtracting $5x$ from both sides gives $1-3x = -5$, and subtracting 1 from both sides of this equation yields $-3x = -6$. Dividing both sides of this equation by $-3$ gives us our answer, $x = \boxed{2}$. | Math Dataset |
\begin{document}
\title[Geometric structures of conductive transmission eigenfunctions]{On the geometric structures of {\color{black} transmission eigenfunctions with a conductive boundary condition and applications}}
\author{Huaian Diao} \address{School of Mathematics and Statistics, Northeast Normal University, Changchun, Jilin 130024, China.} \email{[email protected]}
\author{Xinlin Cao} \address{Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong, China.} \email{[email protected]}
\author{Hongyu Liu} \address{Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China.} \email{[email protected], [email protected]}
\begin{abstract} This paper is concerned with the intrinsic geometric structures of conductive transmission eigenfunctions. The geometric properties of interior transmission eigenfunctions were first studied in \cite{BL2017b}. It is shown in two scenarios that the interior transmission eigenfunction must be locally vanishing near a corner of the domain with an interior angle less than $\pi$. We significantly extend and generalize those results in several aspects. First, we consider the conductive transmission eigenfunctions which include the interior transmission eigenfunctions as a special case. The geometric structures established for the conductive transmission eigenfunctions in this paper include the results in \cite{BL2017b} as a special case. Second, the vanishing property of the conductive transmission eigenfunctions is established for any corner as long as its interior angle is not $\pi$ when the conductive transmission eigenfunctions satisfy certain Herglotz functions approximation properties. That means, as long as the corner singularity is not degenerate, the vanishing property holds if the underlying conductive transmission eigenfunctions can be approximated by a sequence of Herglotz functions under mild approximation rates. Third, the regularity requirements on the interior transmission eigenfunctions in \cite{BL2017b} are significantly relaxed in the present study for the conductive transmission eigenfunctions. In order to establish the geometric properties for the conductive transmission eigenfunctions, we develop technically new methods and the corresponding analysis is much more complicated than that in \cite{BL2017b}. Finally, as an interesting and practical application of the obtained geometric results, we establish a unique recovery result for the inverse problem associated with the transverse electromagnetic scattering by a single far-field measurement in simultaneously determining
a polygonal conductive obstacle and its surface conductive parameter.
\noindent{\bf Keywords:}~~ Conductive transmission eigenfunctions, corner singularity, geometric structures, vanishing, inverse scattering, uniqueness, single far-field pattern.
\noindent{\bf 2010 Mathematics Subject Classification:}~~ 35Q60, 78A46 (primary); 35P25, 78A05, 81U40 (secondary).
\end{abstract}
\maketitle
\section{Introduction}
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^n$, $n=2, 3$, and $V\in L^\infty(\Omega )$ and $\eta\in L^\infty(\partial\Omega)$ be possibly complex-valued functions. Consider the following {\color{black} interior} transmission eigenvalue problem {\color{black} with a conductive boundary condition} for $v,\,w \in H^1(\Omega )$, \begin{align}\label{eq:in eig}
\left\{ \begin{array}{l} \Delta w+k^2(1+V) w=0 \quad\ \mbox{ in } \Omega, \\[5pt] \Delta v+ k^2 v=0\hspace*{1.85cm}\ \mbox{ in } \Omega, \\[5pt] w= v,\ \partial_\nu v + \eta v=\partial_\nu w \ \ \mbox{ on } \partial \Omega,
\end{array} \right.
\end{align} where $\nu\in\mathbb{S}^{n-1}$ signifies the exterior unit normal vector to $\partial\Omega$. Clearly, $v=w\equiv 0$ are trivial solutions to \eqref{eq:in eig}. If for a certain $k\in\mathbb{R}_+$, there exists a pair of nontrivial solutions $(v, w)\in H^1(\Omega)\times H^1(\Omega)$ to \eqref{eq:in eig}, then $k$ is called a conductive transmission eigenvalue and $(v, w)$ is referred to as the corresponding pair of conductive transmission eigenfunctions. For a special case with $\eta\equiv 0$, \eqref{eq:in eig} is known to be the interior transmission eigenvalue problem. {\color{black} For terminological convenience, we refer to the nontrivial solutions $(v,\,w) \in H^1(\Omega )$ to \eqref{eq:in eig} as the conductive transmission eigenfunctions and the corresponding $k$ is named as the conductive transmission eigenvalue.} The study on the transmission eigenvalue problems arises in the wave scattering theory and has a long and colourful history; see \cite{BP,CGH,CKP,CM,Kir,LV,PS,Robbiano,RS} for the spectral study of the interior transmission eigenvalue problem, and \cite{BHK,Bon,HK} for the related study of the conductive transmission eigenvalue problem, and a recent survey \cite{CHreview} and the references therein for comprehensive discussions on the state-of-the-art developments. The problem is a type of non-elliptic and non-self-adjoint eigenvalue problem, so its study is mathematically interesting and challenging. The existing results in the literature mainly focus on the spectral properties of the transmission eigenvalues, namely their existence, discreteness, infiniteness and Weyl's laws. Roughly speaking, the theorems for the transmission eigenvalues follow in a similar flavour to the results in the spectral theory of the Laplacian on a bounded domain. However, the transmission eigenfunctions reveal certain distinct and intriguing features. 
In \cite{BPS,PSV}, it is proved that the interior transmission eigenfunctions cannot be analytically extended across the boundary $\partial\Omega$ if it contains a corner with an interior angle less than $\pi$. In \cite{BL2017b}, geometric structures of interior transmission eigenfunctions were discovered for the first time. It is shown that under certain regularity conditions on the interior transmission eigenfunctions, the eigenfunctions must be locally vanishing near a corner of the domain with an interior angle less than $\pi$. With the help of numerics, it is further shown in \cite{BLLW,LW2018} that under the $H^1$-regularity of the interior transmission eigenfunctions, the eigenfunctions are either vanishing or localizing at a corner with an interior angle bigger than $\pi$. Recently, more geometric properties of the interior transmission eigenfunctions were discovered in \cite{BL2018,LW2018}, which are linked with the curvature of a specific boundary point. It is noted that a corner point considered in \cite{BL2017b,BLLW} can be regarded as having an infinite extrinsic curvature since the derivative of the normal vector has a jump singularity there.
In addition to the angle of the corner, we would like to emphasize the critical role played by the regularity of the transmission eigenfunctions in the existing studies of the geometric structures in the aforementioned literatures. In \cite{BL2017b}, the regularity requirements are characterized in two ways. The first one is $H^2$-smoothness, and the other one is $H^1$-regularity with a certain Hergoltz approximation property. The $H^2$-regularity requirement can be weakened a bit to be H\"older-continuity with any H\"older index $\alpha\in (0,1)$.
In this paper, we establish the vanishing property of the conductive transmission eigenfunctions associated with \eqref{eq:in eig} at a corner as long as its interior angle is not $\pi$ {\color{black} when the conductive transmission eigenfunctions satisfy certain Herglotz functions approximation properties.} That means, as long as the corner singularity is not degenerate, the vanishing property holds {\color{black} if the underlying conductive transmission eigenfunctions can be approximated by a sequence of Herglotz functions under mild approximation rates.} In fact, in the three-dimensional case, the corner singularity is a more general edge singularity. To establish the vanishing property, we need to impose certain regularity conditions on the conductive transmission eigenfunctions which basically follow a similar manner to those considered in \cite{BL2017b}. That is, the first regularity condition is the H\"older-continuity with any H\"older index $\alpha\in (0,1)$, and the second regularity condition is characterized by the Herglotz approximation. Nevertheless, for the latter case, the regularity requirement is much more relaxed in the present study compared to that in \cite{BL2017b}. Finally, we would like to emphasize that in principle, the geometric properties established for the conductive transmission eigenfunctions include the results in \cite{BL2017b} as a special case by taking the parameter $\eta$ to be zero. Hence, in the sense described above, the results obtained in this work significantly extend and generalize the ones in \cite{BL2017b}.
The mathematical argument in \cite{BL2017b} is indirect which connects the vanishing property of the interior transmission eigenfunctions with the stability of a certain wave scattering problem with respect to variation of the wave field at the corner point. In \cite{Bsource,BL2018}, direct mathematical arguments based on certain microlocal analysis techniques are developed for dealing with the vanishing properties of the interior transmission eigenfunctions. However, the H\"older continuity on the interior transmission eigenfunctions is an essential assumption in \cite{Bsource,BL2018}. In this paper, in order to establish the vanishing property of the conductive transmission eigenfunctions under more general regularity conditions, we basically follow the direct approach. But we need to develop technically new ingredients for this different type of eigenvalue problem and the corresponding analysis becomes radically much more complicated.
As an interesting and practical application, we apply the obtained geometric results for the conductive transmission eigenfunctions to an inverse problem associated with the transverse electromagnetic scattering. In a certain scenario, we establish the unique recovery result by a single far-field measurement in simultaneously determining a polygonal conductive obstacle and its surface conductivity. This contributes to the well-known Schiffer's problem in the inverse scattering theory which is concerned with recovering the shape of an unknown scatterer by a single far-field pattern; see \cite{AR,BL2016,BL2017,CY,HSV,Liua,LPRX,LRX,Liu-Zou,Liu-Zou3,Ron2} and the references therein for background introduction and the state-of-the-art developments on the Schiffer's problem.
The rest of the paper is organized as follows. In Sections \ref{sec:2} and \ref{sec:3}, we respectively derive the vanishing results of the conductive transmission eigenfunctions near a corner in the two-dimensional and three-dimensional cases. Section \ref{sec:4} is devoted to the uniqueness study in determining a polygonal conductive obstacle as well as its surface conductivity by a single far-field pattern.
\section{ Vanishing near corners of conductive transmission eigenfunctions: two-dimensional case}\label{sec:2}
In this section, we consider the vanishing near corners of conductive transmission eigenfunctions in the two-dimensional case. First, let us introduce some notations for the subsequent use. Let $(r,\theta)$ be the polar coordinates in $\mathbb{R}^2$; that is, $x=(x_1,x_2)=(r\cos\theta,r\sin\theta)\in\mathbb{R}^2$. For $x\in\mathbb{R}^2$, $B_h(x)$ denotes an {\color{black} open} ball of radius $h\in\mathbb{R}_+$ and centered at $x$. $B_h:=B_h(0)$. Consider an open sector in $\mathbb{R}^2$ with the boundary $\Gamma^\pm $ as follows, \begin{equation}\label{eq:W}
W=\Big \{ x\in \mathbb{R}^2 ~ |~ x\neq 0,\quad \theta_m < {\rm arg}(x_1+ i x_2 ) < \theta_M \Big \}, \end{equation}
where $-\pi < \theta_m < \theta_M < \pi$, $i:=\sqrt{-1}$ and $\Gamma^+$ and $\Gamma^-$ respectively correspond to $(r, \theta_M)$ and $(r,\theta_m)$ with $r>0$. Henceforth, set \begin{equation}\label{eq:sh}
S_h= W\cap B_h,\, \Gamma_h^{\pm }= \Gamma^{\pm } \cap B_h,\, \overline S_h=\overline{ W} \cap B_h, \, \Lambda_h={\color{black} W \cap \partial { B_h} }, \, \ \mbox{and}\ \Sigma_{\Lambda_{h}} = S_h \backslash S_{h/2} . \end{equation} In Figure \ref{fig1}, we give a schematic illustration of the geometry considered here.
{\color{black} For $g\in L^2(\mathbb{S}^{n-1})$, the Herglotz wave function with kernel $g$ is defined by} \begin{equation}\label{eq:hergnew} v(x)=\int_{{\mathbb S}^{n-1}} e^{i k \xi \cdot x} g(\xi ) {\rm d} \sigma(\xi ),\ \ \xi\in\mathbb{S}^{n-1},\quad x\in\mathbb{R}^n. \end{equation}
It can be easily seen that $v$ is an entire solution to the Helmholtz equation $\Delta v+k^2 v=0$. {\color{black} By Theorem 2 and Remark 2 in \cite{Wec}, we have the following Herglotz approximation result.
\begin{lemma}\label{lem:Herg} Let $\Omega \Subset \mathbb R^n$ be a bounded Lipschitz domain and ${\mathbf H}_k$ be the space of all Herglotz wave functions of the form \eqref{eq:hergnew}. Define $$
{\mathbf S}_k(\Omega ) = \{u\in C^\infty (\Omega)~|~ \Delta u+k^2u=0\} $$ and $$
{\mathbf H}_k(\Omega ) = \{u|_\Omega~|~ u\in {\mathbf H}_k\}. $$ Then ${\mathbf H}_k(\Omega )$ is dense in ${\mathbf S}_k(\Omega ) \cap L^2 ( \Omega )$ with respect to the topology induced by the $H^1(\Omega)$-norm. \end{lemma} \begin{remark}
From Lemma \ref{lem:Herg}, for any $v \in H^1(\Omega )$ that is a solution to the Helmholtz equation in $\Omega$, we can conclude that there exists a sequence of Herglotz wave functions which approximate $v$ to an arbitrary accuracy.
\end{remark} }
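We also note that the entireness of $v$ in \eqref{eq:hergnew} can be verified directly by differentiating under the integral sign: since $|\xi |=1$ on $\mathbb{S}^{n-1}$, one has
\[
\Delta v(x)+k^2 v(x)=\int_{\mathbb{S}^{n-1}} k^2\left(1-|\xi |^2\right) e^{ik\xi\cdot x} g(\xi )\, {\rm d}\sigma(\xi )=0,\qquad x\in\mathbb{R}^n.
\]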
\begin{figure}
\caption{Schematic illustration of the corner in 2D.}
\label{fig1}
\end{figure}
We shall also need the following lemma, which gives a particular type of planar complex geometrical optics (CGO) solution whose logarithm is a branch of the square root (cf. \cite{Bsource}).
\begin{lemma}\label{lem:1}\cite[Lemma 2.2]{Bsource}
For $x\in \mathbb{R}^2$ denote $r=|x|,\, \theta={\rm arg}(x_1 +i x_2)$. Let $s\in \mathbb R_+$ and \begin{equation}\label{eq:u0}
u_0(sx):= \exp \left( \sqrt {sr} \left(\cos \left(\frac{\theta}{2}+\pi\right) +i \sin \left(\frac{\theta}{2} +\pi\right) \right ) \right) . \end{equation}
Then $\Delta u_0=0$ in $\mathbb{R}^2\backslash\mathbb{R}_{0,-}^2 $, where $\mathbb{R}_{0,-}^2:=\{{ x}\in\mathbb{R}^2|{ x}=(x_1,x_2); x_1\leq 0, x_2=0\}$, and $s \mapsto u_0(sx) $ decays exponentially in $\mathbb{R}_+$. Let $\alpha, s >0$. Then \begin{equation}\label{eq:xalpha}
\int_W |u_0(sx)| |x|^\alpha {\rm d} x \leq \frac{2(\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} s^{-\alpha-2}, \end{equation} where $\delta_W=-\max_{ \theta_m < \theta <\theta_M } \cos(\theta/2+\pi ) >0$. Moreover \begin{equation}\label{eq:u0w}
\int_W u_0(sx) {\rm d} x= 6 i (e^{-2\theta_M i }-e^{-2\theta_m i } ) s^{-2},
\end{equation}
and for $h>0$
\begin{equation}\label{eq:1.5}
\int_{W \backslash B_h } |u_0(sx)| {\rm d} x \leq \frac{6(\theta_M-\theta_m )}{\delta_W^4} s^{-2} e^{-\delta_W \sqrt{hs}/2}.
\end{equation} \end{lemma}
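As an informal numerical sanity check (outside the formal development), the closed-form identity \eqref{eq:u0w} can be verified by truncating the sector integral; the angles $\theta_m=0$, $\theta_M=\pi/2$ and the value $s=4$ below are arbitrary test choices.

```python
import numpy as np

# Sanity check of eq. (u0w):
#   int_W u0(sx) dx = 6i (e^{-2 i theta_M} - e^{-2 i theta_m}) s^{-2},
# where u0(sx) = exp(sqrt(sr) e^{i(theta/2 + pi)}).  Angles and s are
# arbitrary test values (assumptions for this illustration only).
theta_m, theta_M, s = 0.0, np.pi / 2, 4.0

# np.trapz was renamed np.trapezoid in NumPy 2.0.
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Substitute t = sqrt(r), so r dr = 2 t^3 dt; truncate at t = 16 since the
# integrand decays like exp(-delta_W * sqrt(s) * t).
t = np.linspace(0.0, 16.0, 8001)
theta = np.linspace(theta_m, theta_M, 401)
T, TH = np.meshgrid(t, theta, indexing="ij")
integrand = 2 * T**3 * np.exp(np.sqrt(s) * T * np.exp(1j * (TH / 2 + np.pi)))

numeric = trapz(trapz(integrand, t, axis=0), theta)
exact = 6j * (np.exp(-2j * theta_M) - np.exp(-2j * theta_m)) / s**2  # = -0.75j here

assert abs(numeric - exact) < 1e-3
```

The quadrature error of the trapezoidal rule on these grids is far below the asserted tolerance, so the truncated double integral reproduces the right-hand side of \eqref{eq:u0w} to several digits.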
{\color{black} The following lemma states the regularity of the CGO solution $u_0(sx)$ defined in \eqref{eq:u0}. \begin{lemma}\label{lem:23}
Let $S_h$ be defined in \eqref{eq:sh} and $u_0(sx)$ be given by \eqref{eq:u0}. Then $u_0(sx) \in H^1(S_h)$ and $\Delta u_0 (sx)=0$ in $S_h$. Furthermore, it holds that
\begin{equation} \label{eq:u0L2}
\|u_0(sx)\|_{L^2(S_h)}^2\leq \frac{ (\theta_M-\theta_m) e^{- 2\sqrt{s \Theta } \delta_W } h^2 }{2}
\end{equation}
and
\begin{equation}\label{eq:22}
\left \||x|^\alpha u_0(sx) \right \|_{L^{2}(S_h ) }^2 \leq s^{-(2\alpha+2 )} \frac{2(\theta_M-\theta_m) }{(4\delta_W^2)^{2\alpha+2 } } \Gamma(4\alpha+4),
\end{equation} where $ \Theta \in [0,h ]$ and $\delta_W$ is defined in \eqref{eq:xalpha}. \end{lemma} \begin{proof}
Recalling the expression of $u_0$ given in \eqref{eq:u0}, using a change of variables and the integral mean value theorem, we can deduce that \begin{align}\notag
\|u_0(sx)\|_{L^2(S_h)}^2&=\int_{0}^h r {\rm d} r\int_{\theta_m}^{\theta_M} e^{2\sqrt{sr} \cos(\theta/2+\pi) }{\rm d} \theta \leq \int_{0}^h r {\rm d} r\int_{\theta_m}^{\theta_M} e^{-2\sqrt{sr} \delta_W }{\rm d} \theta \nonumber
\\
&=\frac{ (\theta_M-\theta_m) e^{- 2\sqrt{s \Theta } \delta_W } h^2 }{2} , \notag \end{align} where $\Theta \in [0,h ]$ and $\delta_W$ is defined in \eqref{eq:xalpha}. Furthermore, it can be directly verified that
\begin{equation}\notag
\begin{split}
\frac{\partial u_0(sx)}{\partial r}&=-\frac{ s^{1/2}}{2r^{1/2}} e^{-\sqrt{sr} (\cos( \theta/2)+i \sin (\theta/2 ) )+i \theta/2 },\\ \frac{\partial u_0(sx)}{\partial \theta}&=- \frac{i \sqrt{sr}}{2} e^{-\sqrt{sr} (\cos( \theta/2)+i \sin (\theta/2 ) )+i \theta/2 },
\end{split}
\end{equation}
which can be used to obtain that
\begin{equation}\notag
\begin{split}
\frac{\partial u_0(sx)}{\partial x_1}&=-\frac{ s^{1/2}}{2r^{1/2}} e^{-\sqrt{sr} (\cos( \theta/2)+i \sin (\theta/2 ) )-i \theta/2 },\\ \frac{\partial u_0(sx)}{\partial x_2}&=- \frac{i s^{1/2}}{2r^{1/2}} e^{-\sqrt{sr} (\cos( \theta/2)+i \sin (\theta/2 ) )-i \theta/2 }.
\end{split}
\end{equation}
Therefore, the integral mean value theorem yields
$$
\|\nabla u_{0}(sx)\|_{L^2(S_h) }^2 \leq \frac{(\theta_M-\theta_m)sh}{2}e^{-2\sqrt{s\vartheta}\delta_W},
$$
where $\vartheta\in[0,h]$ and $\delta_W$ is defined in \eqref{eq:xalpha}. Hence $u_0(sx) \in H^1(S_h)$, and $\Delta u_0 (sx)=0$ in $S_h$ in the weak sense.
Using the polar coordinate transformation, we can deduce that \begin{align}\notag
&\left \||x|^\alpha u_0(sx) \right \|_{L^{2}(S_h ) }^2=\int_0^h r {\rm d} r \int_{\theta_m }^{\theta_M} r^{2\alpha } e^{2 \sqrt{sr} \cos (\theta/2+\pi) } {\rm d} \theta \nonumber \\
\leq & \int_0^h r {\rm d} r \int_{\theta_m }^{\theta_M} r^{2\alpha } e^{-2 \sqrt{sr} \delta_W } {\rm d} \theta=(\theta_M-\theta_m ) \int_{0}^h r^{2\alpha+1} e^{-2 \delta_W \sqrt{sr} }{\rm d} r\quad (t=2 \delta_W\sqrt{sr}) \nonumber \\
= & s^{-(2\alpha+2 )} \frac{2(\theta_M-\theta_m) }{(4\delta_W^2)^{2\alpha+2 } } \int_{0}^{2 \delta_W \sqrt{sh }} t^{4\alpha+3} e^{-t }{\rm d} t \leq s^{-(2\alpha+2 )} \frac{2(\theta_M-\theta_m) }{(4\delta_W^2)^{2\alpha+2 } } \Gamma(4\alpha+4), \notag \end{align} where $\delta_W$ is defined in \eqref{eq:xalpha}. This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{lem:zeta}
For any $\zeta>0$, if $\omega(\theta ) >0 $, then \begin{equation}\label{eq:zeta}
\int_{0}^h r^\zeta e^{-\sqrt{sr} \omega(\theta)} {\rm d} r={\mathcal O}( s^{-\zeta-1} ), \end{equation} as $s\rightarrow +\infty$. \end{lemma}
\begin{proof}
Using the substitution $t=\sqrt{sr}$, one readily derives \eqref{eq:zeta}. \end{proof}
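For later use, the substitution can be made explicit; this is a short sketch filling in the step. With $t=\sqrt{sr}$ one has $r=t^2/s$ and ${\rm d} r=2t\,{\rm d} t/s$, so that
\begin{align*}
\int_{0}^h r^\zeta e^{-\sqrt{sr}\, \omega(\theta)}\, {\rm d} r=\frac{2}{s^{\zeta+1}}\int_0^{\sqrt{sh}} t^{2\zeta+1} e^{-t\, \omega(\theta)}\, {\rm d} t \leq \frac{2\, \Gamma(2\zeta+2)}{\omega(\theta)^{2\zeta+2}}\, s^{-\zeta-1},
\end{align*}
which yields \eqref{eq:zeta} with an explicit constant.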
Next, we recall a special type of Green's formula for $H^1$ functions, which shall be needed in establishing a key integral identity for deriving the vanishing property of the conductive transmission eigenfunctions.
\begin{lemma}\label{lem:green}
Let $\Omega\Subset \mathbb R^n $ be a bounded Lipschitz domain. For any $f,g\in H^1_\Delta:=\{f\in H^1(\Omega)|\Delta f\in L^2(\Omega)\}$, there holds the following second Green identity:
\begin{equation}\label{eq:GIN1}
\int_\Omega(g\Delta f-f\Delta g)\,\mathrm{d}x=\int_{\partial\Omega}(g\partial_{\nu}f-f\partial_{\nu}g)\,\mathrm{d}\sigma.
\end{equation} \end{lemma} Lemma \ref{lem:green} is a special case of more general results in \cite[Lemma 3.4]{costabel88} and \cite[Theorem 4.4]{McLean}. It is pointed out that Lemma~\ref{lem:green} does not require the $H^2$-regularity needed for the usual Green's formula. In particular, for the transmission eigenfunctions $(v, w)\in H^1(\Omega)\times H^1(\Omega)$ to \eqref{eq:in eig}, we clearly have $v, w\in H_\Delta^1$, and hence the Green identity \eqref{eq:GIN1} holds for $v, w$. This fact shall be frequently used in our subsequent analysis.
We proceed to derive several auxiliary lemmas that shall play a key role in establishing our first main result in Theorem~\ref{Th:1.1} in what follows.
\begin{lemma}
\label{lem:int1} Let $S_h$ and $\Gamma_h^\pm$ be defined in \eqref{eq:sh}. Suppose that $v \in H^1(S_h )$ and $w \in H^1(S_h) $ satisfy the following PDE system,
\begin{align}\label{eq:in eignew}
\left\{ \begin{array}{l} \Delta w+k^2q w=0 \hspace*{1.6cm} \mbox{ in } S_h, \\[5pt] \Delta v+ k^2 v=0\hspace*{1.85cm}\ \mbox{ in } S_h, \\[5pt] w= v,\ \partial_\nu v + \eta v=\partial_\nu w \ \ \mbox{ on } \Gamma_h^\pm ,
\end{array} \right.
\end{align} with $\nu\in\mathbb{S}^{1}$ signifying the exterior unit normal vector to $\Gamma_h^\pm $, $k \in \mathbb{R}_+ $, $q\in L^\infty(S_h ) $ and $\eta ( x) \in C^{\alpha}(\overline{\Gamma_h^\pm } )$, where \begin{equation}\label{eq:eta}
\eta(x)=\eta(0)+\delta \eta(x),\quad |\delta \eta(x)| \leq \|\eta \|_{C^\alpha } |x|^\alpha . \end{equation}
Recall that the CGO solution $u_0(sx)$ is defined in \eqref{eq:u0} with the parameter $s\in \mathbb R_+$. Then the following integral equality holds \begin{equation}\label{eq:221 int} \begin{split}
\int_{S_h} u_0(sx) (f_1-f_2)\mathrm{d} x&=\int_{\Lambda_h} (u_0(sx) \partial_\nu (v-w)- (v-w) \partial_\nu u_0(sx))\mathrm{d} \sigma\\
&\quad -\int_{\Gamma_h^\pm } \eta u_0(sx) v\mathrm{d} \sigma,
\end{split} \end{equation} where $f_1=-k^2 v$ and $f_2=-k^2 qw$.
Denote
\begin{equation}\label{eq:deltajs}
\widetilde{f}_{1j} (x) =- k^2v_j (x),
\end{equation} where \begin{equation}\label{eq:herg} v_j(x)=\int_{{\mathbb S}^{1}} e^{i k \xi \cdot x} g_j(\xi ) {\rm d} \sigma(\xi ),\quad \xi\in\mathbb{S}^{1}, \quad x\in\mathbb{R}^2, \quad g_j\in L^2(\mathbb S^1) \end{equation} is the Herglotz wave function. Denote \begin{equation}\label{eq:varphi}
\varphi=\angle(\xi,x). \end{equation} Then $\widetilde{f}_{1j} (x) \in C^\alpha (\overline S_h )$ and it has the expansion \begin{equation}\label{eq:f1jf2 notation new}
\widetilde f_{1j}(x)=- k^2v_j (x)=\widetilde f_{1j} (0)+\delta \widetilde f_{1j} (x),\quad |\delta \widetilde f_{1j} (x) | \leq \|\widetilde f_{1j} \|_{C^\alpha } |x|^\alpha. \end{equation} Assume that $f_2=-k^2 qw\in C^\alpha(\overline S_h)$ ($0<\alpha <1$) satisfies \begin{equation}\label{eq:f1jf2 notation}
f_2(x)=f_2(0)+\delta f_2(x),\quad |\delta f_2(x) | \leq \|f_2\|_{C^\alpha } |x|^\alpha. \end{equation} Then it holds that \begin{align}\label{eq:intimportant}
( \widetilde f_{1j} (0)-f_2(0)) \int_{S_h} u_0(sx) {\rm d} x+\delta_j(s) &= I_3-I_2^\pm -\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x \nonumber \\
&\quad +\int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x -\xi_j^\pm(s), \end{align} where \begin{equation}\label{eq:intnotation} \begin{split}
I_2^\pm &=\int_{\Gamma_{h } ^\pm } \eta(x) u_0(sx) v_j (x) {\rm d} \sigma ,\quad I_3 =\int_{\Lambda_h} ( u_0 (sx)\partial_\nu (v-w)- (v-w)\partial_\nu u_0(sx) ) {\rm d} \sigma,\\
\delta_j(s)&=-k^2 \int_{S_h} ( v(x)-v_j(x))u_0(sx) {\rm d} x,\quad
\xi_{j}^\pm(s)= \int_{\Gamma_{h } ^\pm } \eta(x) u_0(sx) (v(x)- v_j (x) ) {\rm d} \sigma.
\end{split}
\end{equation} Furthermore, it yields that \begin{align}\label{eq:deltaf1j}
\left|\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x \right | & \leq \frac{2 \sqrt{2\pi } (\theta_M- \theta_m) \Gamma(2 \alpha+4)}{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}(S_h)^{1-\alpha }\\
&\quad \quad \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{1})} s^{-\alpha-2 }, \nonumber \end{align} and \begin{equation}\label{eq:deltaf2}
\left| \int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x\right| \leq \frac{2(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } \|f_2\|_{C^\alpha } s^{-\alpha-2 }, \end{equation} as $s \rightarrow + \infty$.
If $v-w\in H^2(\Sigma_{ \Lambda_h} )$, then one has as $s \rightarrow + \infty$ that \begin{align}\label{eq:I3}
\left| I_3 \right| &\leq C e^{-c' \sqrt s}, \end{align}
where $C$ and $c'$ are two positive constants.
\end{lemma}
\begin{proof} From \eqref{eq:in eignew}, we have \begin{equation}\label{eq:vw}
\Delta v =-k^2 v:= f_1,\quad \Delta w =-k^2 q w:= f_2.
\end{equation}
Subtracting the two equations in \eqref{eq:vw} and using the boundary conditions of \eqref{eq:in eignew}, we deduce that \begin{equation}\label{eq:219 pde}
\Delta(v-w )=f_1-f_2 \mbox{ in } S_h, \quad v-w=0, \, \partial_\nu (v-w)=-\eta v \mbox{ on } \Gamma_h^\pm . \end{equation}
Recall that $u_0(sx)\in H^1(S_h)$ from Lemma \ref{lem:23}. Since $v,\, w \in H^1(S_h)$ and $q\in L^{\infty}(S_h)$, it yields that $f_1,\ f_2 \in L^2(S_h)$. Since $S_h$ is obviously a bounded Lipschitz domain, by virtue of Lemma \ref{lem:green}, we have the following integral identity \begin{equation}\label{eq:220 int}
\int_{S_h} u_0(sx) \Delta (v-w)\mathrm{d} x=\int_{\partial S_h} \left( u_0(sx) \partial_\nu (v-w)- (v-w) \partial_\nu u_0(sx) \right) \mathrm{d} \sigma, \end{equation} by using the fact that $\Delta u_0(sx)=0$ in $S_h$. Substituting \eqref{eq:219 pde} into \eqref{eq:220 int} yields \eqref{eq:221 int}.
Recall that $v$ can be approximated by the Herglotz wave function $v_j$
given by \eqref{eq:herg} in the topology induced by the $H^1$-norm. It is clear that \begin{align}\label{eq:222 int}
\int_{S_h} f_1(x)u_0(sx) {\rm d} x= \int_{S_h } \widetilde{f}_{1j} (x) u_0(sx) {\rm d} x+ \delta_j(s),
\end{align} where $ \widetilde{f}_{1j} (x)$ and $\delta_j(s)$ are defined in \eqref{eq:deltajs} and \eqref{eq:intnotation}, respectively. Furthermore, it can be derived that \begin{align}\label{eq:int3}
\int_{\Gamma_{h } ^\pm } \eta(x) u_0(sx) v(x) {\rm d} \sigma &=\int_{\Gamma_{h } ^\pm } \eta(x) u_0(sx) v_j (x) {\rm d} \sigma + \xi_{j}^\pm(s), \end{align} where $ \xi_{j}^\pm(s)$ is defined in \eqref{eq:intnotation}.
Combining \eqref{eq:221 int}, \eqref{eq:222 int} with \eqref{eq:int3}, we have the following integral identity:
\begin{align}\label{eq:int identy}
I_1+\delta_j (s) &= I_3 - I_2^\pm - \xi_j^\pm(s) , \end{align} where \begin{equation}\label{eq:I1 notation}
I_1= \int_{S_h} u_0(sx) ( \widetilde f_{1j} (x)-f_2(x)) {\rm d} x, \end{equation}
$I_2^\pm$, $I_3$, $\delta_j (s)$ and $ \xi_j^\pm(s)$ are defined in \eqref{eq:intnotation}.
It is easy to verify that $v_j\in C^\alpha(\overline{ S}_h)$. Therefore $\widetilde f_{1j}\in C^\alpha(\overline{ S}_h )$ and for $x \in S_h$ we have the splitting \eqref{eq:f1jf2 notation new}. Since $f_2 \in C^\alpha(\overline S_h )$, substituting \eqref{eq:f1jf2 notation new} and \eqref{eq:f1jf2 notation} into $I_1$ defined in \eqref{eq:I1 notation},
we have \begin{align}\label{eq:I1 227} I_1 &=( \widetilde f_{1j} (0)-f_2(0)) \int_{S_h} u_0(sx) {\rm d} x+\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x-\int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x. \end{align} Substituting \eqref{eq:I1 227} into \eqref{eq:int identy}, we can further deduce the integral equality \eqref{eq:intimportant}.
In the following, we shall prove \eqref{eq:deltaf1j}, \eqref{eq:deltaf2} and \eqref{eq:I3}, separately. From \eqref{eq:xalpha} and \eqref{eq:f1jf2 notation new}, it can be derived that \begin{align}\label{eq:deltaf1j int bound}
\left|\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x \right |& \leq \int_{S_h} \left| \delta \widetilde f_{1j} (x) \right | |u_0(sx) | {\rm d} x \leq \|\widetilde f_{1j} \|_{C^\alpha } \int_{W } |u_0(sx) | |x|^\alpha {\rm d} x \nonumber \\
&\leq \frac{2(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } \|\widetilde f_{1j} \|_{C^\alpha } s^{-\alpha-2 }. \end{align} Recall that $\widetilde f_{1j}=- k^2v_j (x)$ and $v_j$ is the Herglotz wave function given by \eqref{eq:herg}. Using the embedding $C^1(\overline S_h) \hookrightarrow C^\alpha(\overline S_h)$, we can derive that $$
\| \widetilde f_{1j} \|_{C^\alpha } \leq k^2 {\rm diam}(S_h)^{1-\alpha } \|v_j\|_{C^1}, $$ where $ {\rm diam}(S_h)$ is the diameter of $S_h$. By direct computation, we have $$
\|v_j\|_{C^1} \leq \sqrt{2\pi } (1+k) \|g_j\|_{L^2( {\mathbb S}^{1})}, $$ and therefore we can deduce \eqref{eq:deltaf1j}. Similarly, using \eqref{eq:xalpha} and \eqref{eq:f1jf2 notation}, we can derive \eqref{eq:deltaf2}.
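We remark that the above bound on $\|v_j\|_{C^1}$ is a direct consequence of the Cauchy--Schwarz inequality on ${\mathbb S}^1$ (whose total measure is $2\pi$); differentiating under the integral sign in \eqref{eq:herg} gives
\begin{align*}
|v_j(x)|\leq \int_{{\mathbb S}^{1}} |g_j(\xi)|\, {\rm d} \sigma(\xi) \leq \sqrt{2\pi}\, \|g_j\|_{L^2({\mathbb S}^{1})},\qquad |\nabla v_j(x)|\leq k \int_{{\mathbb S}^{1}} |g_j(\xi)|\, {\rm d} \sigma(\xi) \leq \sqrt{2\pi}\, k\, \|g_j\|_{L^2({\mathbb S}^{1})},
\end{align*}
so that $\|v_j\|_{C^1}\leq \sqrt{2\pi}\,(1+k)\,\|g_j\|_{L^2({\mathbb S}^{1})}$.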
It is easy to see that on $\Lambda_h$, one has \begin{align*}
|u_0(sx)|&=e^{\sqrt{sh} \cos (\theta/2 +\pi ) } \leq e^{-\delta_W \sqrt{s h }},\\
\left| \partial_{\nu} u_0(sx) \right| &=\left| \frac{\sqrt{s}\, e^{i (\theta/2+\pi)} }{2\sqrt{h}} e^{\sqrt {sh } \exp (i (\theta/2+\pi))}\right| \leq \frac{1}{2} \sqrt{\frac{s}{h}}e^{-\delta_W \sqrt{s h }}, \end{align*} both of which decay exponentially as $s \rightarrow +\infty$. Hence we know that $$
\left\| u_0(sx) \right\|_{L^2(\Lambda_h )} \leq e^{-\delta_W \sqrt{s h }} \sqrt{ (\theta_M- \theta_m) h} , \quad \left\|\partial_{\nu} u_0(sx) \right\|_{L^2(\Lambda_h )} \leq \frac{1}{2} e^{-\delta_W \sqrt{s h }} \sqrt{s(\theta_M- \theta_m) }. $$ Under the assumption that $v-w\in H^2(\Sigma_{ \Lambda_h} )$, using Cauchy-Schwarz inequality and the trace theorem, we can prove as $s \rightarrow +\infty$ that \begin{align}\notag
\left| I_3 \right| &\leq \left\| u_0(sx) \right\|_{L^2(\Lambda_h )} \left\| \partial_\nu (v-w) \right\|_{L^2(\Lambda_h )} +\left\|\partial_\nu u_0(sx) \right\|_{L^2(\Lambda_h )} \left\| v-w \right\|_{L^2(\Lambda_h )} \\
&\leq \left ( \left\| u_0(sx) \right\|_{L^2(\Lambda_h )}+\left\|\partial_\nu u_0(sx) \right\|_{L^2(\Lambda_h )} \right) \left\| v-w \right\|_{H^2(\Sigma_{ \Lambda_h } )} \leq C e^{-c' \sqrt s}, \notag \end{align}
where $C,c'$ are positive constants.
The proof is complete. \end{proof}
\begin{lemma}\label{lem:27}
Under the same setup as in Lemma \ref{lem:int1}, we assume that the sequence of Herglotz wave functions $\{v_j\}_{j=1}^{+\infty} $ of the form \eqref{eq:herg} approximates $v$ in $H^1(S_h)$ and satisfies
\begin{equation}\label{eq:ass1}
\|v-v_j\|_{H^1(S_h)} \leq j^{-1-\Upsilon },\quad \|g_j\|_{L^2({\mathbb S}^{1})} \leq C j^{\varrho},
\end{equation}
for some constants $C>0$, $\Upsilon >0$ and $0< \varrho<1 $. Then we have the following estimates: \begin{equation}\label{eq:deltajnew}
|\delta_j(s)| \leq \frac{\sqrt { \theta_M-\theta_m } k^2 e^{-\sqrt{s \Theta } \delta_W } h } {\sqrt 2 } j^{-1-\Upsilon}, \end{equation} and \begin{equation}\label{eq:xij}
|\xi_j^\pm (s) |\leq C \left( |\eta(0)| \frac{\sqrt { \theta_M-\theta_m } e^{-\sqrt{s \Theta } \delta_W } h } {\sqrt 2 } + \|\eta\|_{C^\alpha } s^{-(\alpha+1 )} \frac{\sqrt{2(\theta_M-\theta_m) \Gamma(4\alpha+4) } }{(2\delta_W)^{2\alpha+2 } } \right) j^{-1-\Upsilon}, \end{equation} where $ \Theta \in [0,h ]$ and $\delta_W$ is defined in \eqref{eq:xalpha}. \end{lemma}
\begin{proof}
In the following, we shall prove \eqref{eq:deltajnew} and \eqref{eq:xij}, separately. Recall that $u_0 \in H^1(S_h)$ from Lemma \ref{lem:23}. Clearly $\widetilde{f}_{1j} (x) \in H^2(S_h)$, which can be embedded into $C^\alpha(\overline{ S}_h) $ for $\alpha\in(0,1)$. Moreover, by using the Cauchy-Schwarz inequality, we know that \begin{equation}\label{eq:deltaj}
|\delta_j(s)|\leq k^2 \|v-v_j\|_{L^2(S_h)} \|u_0(sx)\|_{L^2(S_h)}. \end{equation} Substituting \eqref{eq:u0L2} into \eqref{eq:deltaj} and using \eqref{eq:ass1}, one readily has \eqref{eq:deltajnew}.
Since $\eta \in C^\alpha\left(\overline{\Gamma_h^\pm} \right)$, we have the expansion of $\eta(x)$ at the origin as \eqref{eq:eta}. Therefore, using Cauchy-Schwarz inequality and the trace theorem, we have \begin{align*}
|\xi_{j}^\pm(s)|&\leq | \eta(0) |\int_{\Gamma_{h } ^\pm } |u_0(sx)| |v(x)- v_j (x) | {\rm d} \sigma + \| \eta \|_{C^\alpha }\int_{\Gamma_{h } ^\pm } |x|^\alpha |u_0(sx)| |v(x)- v_j (x) | {\rm d} \sigma \\
& \leq | \eta(0) | \|v-v_j\|_{H^{1/2}(\Gamma_h^\pm ) } \| u_0(sx)\|_{H^{-1/2}(\Gamma_h^\pm ) } \\
&\quad + \| \eta \|_{C^\alpha } \|v-v_j\|_{H^{1/2}(\Gamma_h^\pm ) } \| |x|^\alpha u_0(sx)\|_{H^{-1/2}(\Gamma_h^\pm ) } \\
& \leq | \eta(0) | \|v-v_j\|_{H^{1}(S_h ) } \| u_0(sx)\|_{L^2(S_h ) } + \| \eta \|_{C^\alpha } \|v-v_j\|_{H^{1}(S_h ) } \| |x|^\alpha u_0(sx)\|_{L^2(S_h ) } .
\end{align*}
Using \eqref{eq:ass1}, \eqref{eq:u0L2} and \eqref{eq:22}, we readily derive \eqref{eq:xij}.
The proof is complete. \end{proof}
\begin{lemma}\label{lem:u0 int}
Recall that $\Gamma_h^\pm$ and $u_0(sx)$ are defined in \eqref{eq:sh} and \eqref{eq:u0}, respectively. We have \begin{align}\label{eq:I311}
\begin{split} \int_{\Gamma_{h} ^+ } u_0 (sx) {\rm d} \sigma &=2 s^{-1}\left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) }\right. \\&\left. \hspace{3.5cm} - \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } \right ), \\
\int_{\Gamma_{h} ^- } u_0 (sx) {\rm d} \sigma &=2 s^{-1} \left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh}\mu(\theta_m )} \right. \\&\left. \hspace{3.5cm} - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh}\mu(\theta_m ) } \right ),
\end{split} \end{align}
where $ \mu(\theta )=-\cos(\theta/2+\pi) -i \sin( \theta/2+\pi )$.
\end{lemma} \begin{proof}
Using the substitution $t=\sqrt{sr}$ and direct calculations, we can derive \eqref{eq:I311}.
\end{proof}
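For the reader's convenience, we sketch the computation for $\Gamma_h^+$; the case of $\Gamma_h^-$ is identical with $\theta_M$ replaced by $\theta_m$. On $\Gamma_h^+$ one has $u_0(sx)=e^{-\sqrt{sr}\, \mu(\theta_M)}$ and ${\rm d}\sigma={\rm d} r$, so the substitution $t=\sqrt{sr}\, \mu(\theta_M)$ (with the $t$-integral taken along the ray from $0$ to $\sqrt{sh}\, \mu(\theta_M)$ in the complex plane) together with $\int_0^a t e^{-t}\, {\rm d} t=1-(1+a)e^{-a}$ gives
\begin{align*}
\int_{\Gamma_{h}^+} u_0(sx)\, {\rm d}\sigma=\int_0^h e^{-\sqrt{sr}\, \mu(\theta_M)}\, {\rm d} r=\frac{2}{s\, \mu(\theta_M)^2}\int_0^{\sqrt{sh}\, \mu(\theta_M)} t e^{-t}\, {\rm d} t=\frac{2}{s\, \mu(\theta_M)^2}\Big(1-\big(1+\sqrt{sh}\, \mu(\theta_M)\big)e^{-\sqrt{sh}\, \mu(\theta_M)}\Big),
\end{align*}
which is exactly the first identity in \eqref{eq:I311}.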
Using the Jacobi-Anger expansion (cf. \cite[Page 75]{CK}), for the Herglotz wave function $v_j$ given in \eqref{eq:herg}, we have \begin{equation}\label{eq:vjex}
v_j(x)= v_j(0) J_0(k |x| )+2 \sum_{p=1}^\infty \gamma_{pj} i^p J_p ( k |x| ), \quad x\in \mathbb{R}^2 ,
\end{equation}
where
\begin{align}\label{eq:239 gamma}
v_j(0)= \int_{{\mathbb S}^{1}} g_j(\theta ) {\rm d} \sigma(\theta ),\quad \gamma_{pj}= \int_{{\mathbb S}^{1}} g_j(\xi ) \cos (p \varphi ) {\rm d} \sigma(\xi ), \quad p,\, j \in \mathbb N,
\end{align}
$J_p(t)$ is the $p$-th Bessel function of the first kind \cite{Abr}, $g_j$ is the kernel of $v_j$ defined in \eqref{eq:herg} and $\varphi$ is given by \eqref{eq:varphi}.
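As an informal numerical illustration (not part of the proofs), the expansion \eqref{eq:vjex} with coefficients \eqref{eq:239 gamma} can be checked for a sample kernel; the kernel $g$, wavenumber $k$ and evaluation point $x$ below are arbitrary choices.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_p

# Arbitrary test data (assumptions for this illustration only).
k = 2.0
x = np.array([0.4, -0.3])
r, beta = np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])

# Uniform grid on S^1; the periodic trapezoidal rule is spectrally accurate.
psi = np.linspace(0.0, 2 * np.pi, 20001)[:-1]
dpsi = psi[1] - psi[0]
g = 1.0 + 0.5 * np.cos(psi) + 0.3 * np.sin(2 * psi)   # sample kernel g_j
phi = psi - beta                                      # angle between xi and x

# Left-hand side: the Herglotz wave function v_j(x), xi.x = |x| cos(phi).
lhs = np.sum(np.exp(1j * k * r * np.cos(phi)) * g) * dpsi

# Right-hand side: Jacobi-Anger expansion truncated at p = 40.
rhs = np.sum(g) * dpsi * jv(0, k * r)                 # v_j(0) J_0(k|x|)
for p in range(1, 41):
    gamma_p = np.sum(g * np.cos(p * phi)) * dpsi      # gamma_{pj}
    rhs += 2 * gamma_p * (1j ** p) * jv(p, k * r)

assert abs(lhs - rhs) < 1e-10
```

Since the sample kernel contains only low-order harmonics, the truncated series matches the quadrature of the Herglotz integral to near machine precision.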
\begin{lemma}\label{lem:28} Let $\Gamma_h^\pm$ be defined in \eqref{eq:sh}. Recall that the CGO solution $u_0(sx)$ is defined in \eqref{eq:u0} with the parameter $s\in \mathbb R_+$, and $I_2^\pm$ is defined by \eqref{eq:intnotation}. Denote \begin{equation}\label{eq:omegamu}
\omega(\theta )=-\cos(\theta/2+\pi) ,\quad \mu(\theta )=-\cos(\theta/2+\pi) -i \sin( \theta/2+\pi ). \end{equation} Recall that the Herglotz wave function $v_j$ is given in the form \eqref{eq:herg}. Suppose that $\eta ( x) \in C^{\alpha}(\overline{\Gamma_h^\pm } )$ ($0< \alpha<1$) satisfying \eqref{eq:eta} and let \begin{align}\label{eq:Ieta} \begin{split} {\mathcal I}_1^-&= \int_{0}^h \delta \eta\ J_0(kr) e^{-\sqrt{sr} \mu(\theta_m)} {\rm d} r ,\quad {\mathcal I}_2^- =2\sum_{p=1}^\infty i^p \int_0^h \delta \eta\ \gamma_{pj} J_{p}(kr ) e^{-\sqrt{sr} \mu (\theta_m ) } {\rm d} r,\\
{\mathcal I}_1^+&= \int_{0}^h \delta \eta\ J_0(kr) e^{-\sqrt{sr} \mu(\theta_M)} {\rm d} r ,\quad {\mathcal I}_2^+ =2\sum_{p=1}^\infty i^p \int_0^h \delta \eta\ \gamma_{pj} J_{p}(kr ) e^{-\sqrt{sr} \mu (\theta_M ) } {\rm d} r, \\
I_\eta^-&= v_j(0){\mathcal I}_1^-+ {\mathcal I}_2^- ,\quad I_\eta^+= v_j(0){\mathcal I}_1^+ + {\mathcal I}_2^+.
\end{split} \end{align}
Assume that for a fixed $k\in
\mathbb R_+$, $kh<1$, where $h$ is the length of $\Gamma_h^\pm$, and $-\pi< \theta_m < \theta_M <\pi $, where $\theta_m$ and $\theta_M$ are defined in \eqref{eq:W}. Then \begin{align}\label{eq:I2-final}
I_2^-&=2\eta(0)v_j(0)s^{-1}\left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh}\mu(\theta_m ) } - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh}\mu(\theta_m ) } \right ) \nonumber\\
&\quad +v_j(0)\eta(0) {\mathcal I}_{312}^- +\eta(0){\mathcal I}_{32}^-+I_\eta^-,
\end{align}
where
\begin{align}\notag
&{\mathcal I}_{312}^-=\sum_{p=1}^\infty \frac{(-1)^p k^{2p}}{4^p(p!)^2} \int_{0}^h r^{2p} e^{-\sqrt{sr} \mu (\theta_m)} {\rm d} r, \quad {\mathcal I}_{32}^-=2 \sum_{p=1}^\infty \int_{0}^h \gamma_{pj} i^p J_p(kr) e^{-\sqrt{sr} \mu (\theta_m)} {\rm d} r, \\
&|{\mathcal I}_{312}^-|\leq {\mathcal O}(s^{-3}),\quad
|{\mathcal I}_{32}^-|\leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{1})} s^{-2}), \nonumber \\
&|I_\eta^-| \leq |v_j(0)| |{\mathcal I}_1^-| + |{\mathcal I}_2^-| ,\quad \left| {\mathcal I}_{1}^- \right| \leq {\mathcal O} ( \|\eta \|_{C^\alpha } s^{-1-\alpha }),\quad
\left| {\mathcal I}_{2}^- \right| \leq {\mathcal O} (\|\eta \|_{C^\alpha } \|g_j\|_{L^2 ({\mathbb S} ^{1})} s^{-2-\alpha }), \notag \end{align} as $s \rightarrow +\infty$. Similarly, we have \begin{align}\label{eq:I2+final}
I_2^+&=2\eta(0)v_j(0)s^{-1}\left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) }- \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } \right ) \nonumber\\
&\quad +v_j(0)\eta(0) {\mathcal I}_{312}^+ +\eta(0){\mathcal I}_{32}^+ +I_\eta^+, \end{align} where \begin{align*}
{\mathcal I}_{312}^+&=\sum_{p=1}^\infty \frac{(-1)^p k^{2p}}{4^p(p!)^2} \int_{0}^h r^{2p} e^{-\sqrt{sr} \mu (\theta_M)} {\rm d} r, \quad {\mathcal I}_{32}^+=2 \sum_{p=1}^\infty \int_{0}^h \gamma_{pj} i^p J_p(kr) e^{-\sqrt{sr} \mu (\theta_M)} {\rm d} r ,\\
|{\mathcal I}_{312}^+| &\leq {\mathcal O}(s^{-3}),\quad
|{\mathcal I}_{32}^+| \leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{1})} s^{-2}), \nonumber \\
|I_\eta^+| &\leq |v_j(0)| |{\mathcal I}_1^+| + |{\mathcal I}_2^+ | ,\
\left| {\mathcal I}_{1}^+ \right| \leq {\mathcal O} (\|\eta \|_{C^\alpha } s^{-1-\alpha }),\
\left| {\mathcal I}_{2}^+ \right| \leq {\mathcal O} (\|\eta \|_{C^\alpha }\|g_j\|_{L^2 ({\mathbb S} ^{1})} s^{-2-\alpha }) \end{align*} as $s \rightarrow +\infty$. \end{lemma}
\begin{proof} We first investigate the boundary integral $I_2^- $ defined in \eqref{eq:intnotation}. Recall that $\Gamma_h^\pm$ is defined in \eqref{eq:sh}. For $x\in \Gamma_h^\pm$, the polar coordinates $x=(r\cos \theta, r \sin \theta )$ satisfy $r \in (0, h)$, with $\theta=\theta_m$ when $x\in \Gamma_h^-$ and $\theta=\theta_M$ when $x\in \Gamma_h^+$. Since $\eta \in C^\alpha\left(\overline{\Gamma}_h^\pm \right)$, we know that $\eta$ has the expansion \eqref{eq:eta}.
Denote \begin{align}\label{eq:Ieta proof} I_{21}^-&= \int_{\Gamma_h^- } u_0(sx) v_j(x) {\rm d} \sigma ,\quad I_{\eta}^- =\int_{\Gamma^-_h } \delta \eta(x) u_0(sx) v_j (x) {\rm d} \sigma. \end{align} Substituting \eqref{eq:eta} into the expression of $I_2^-$, we have\begin{align}\label{eq:I2-}
I_2^- &=\eta(0) I_{21}^-+ I_\eta^-. \end{align}
Recall that $\gamma_{pj}$ is defined by \eqref{eq:239 gamma}. Using the Cauchy--Schwarz inequality, it is clear that
\begin{equation}\label{eq:gampj}
|\gamma_{pj}| \leq \sqrt{2\pi} \|g_j\|_{L^2(\mathbb S^1)}.
\end{equation} For the Bessel function $J_p(t)$, we have from \cite{Abr} the following series expression:
\begin{equation}\label{eq:Jp}
J_p(t)= \frac{t^p}{2^p p!}+\frac{t^p}{2^p } \sum_{\ell=1}^\infty \frac{(-1)^\ell t^{2\ell }}{4^\ell\, \ell !\, (\ell+p) ! }, \quad \mbox{ for } p =1,\,2,\ldots,
\end{equation} which is uniformly and absolutely convergent with respect to $t \in [0,+\infty)$.
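As a quick numerical aside, the standard power series $J_p(t)=\sum_{\ell\geq 0}\frac{(-1)^\ell}{\ell!\,(\ell+p)!}\left(\frac t2\right)^{2\ell+p}$ can be compared against a library Bessel implementation; the sample values $t=1.3$ and $p=3$ below are arbitrary.

```python
import math
from scipy.special import jv  # reference Bessel function J_p

t, p = 1.3, 3  # arbitrary sample point and order (assumptions)

# Partial sum of the standard power series for J_p(t), truncated at 30 terms;
# the series converges extremely fast for moderate t.
series = sum(
    (-1) ** l / (math.factorial(l) * math.factorial(l + p)) * (t / 2) ** (2 * l + p)
    for l in range(30)
)
assert abs(series - jv(p, t)) < 1e-12
```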
For ${\mathcal I}_1^-$ and ${\mathcal I}_2^-$ defined in \eqref{eq:Ieta}, by substituting \eqref{eq:vjex} into $I_\eta^-$ given by \eqref{eq:Ieta proof}, it is directly verified that $I_\eta^-= v_j(0){\mathcal I}_1^-+ {\mathcal I}_2^-$.
For $\omega(\theta )$ defined in \eqref{eq:omegamu}, it is easy to see that $\omega(\theta ) >0 $ for $-\pi< \theta_m \leq \theta \leq \theta_M <\pi $. By virtue of \eqref{eq:eta}, we have \begin{align*}
|{\mathcal I}_1^-| &\leq \|\eta\|_{C^\alpha }\int_{0}^h r^\alpha |J_0(kr)| e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r:=\|\eta\|_{C^\alpha } \left({\mathcal I}_{11}^-+{\mathcal I}_{12}^-\right),
\end{align*} where $$ {\mathcal I}_{11}^-= \int_{0}^h r^\alpha e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r,\quad {\mathcal I_{12}^-}=\sum_{p=1}^\infty \frac{ k^{2p}}{4^p(p!)^2} \int_{0}^h r^{\alpha+2p} e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r. $$ From \eqref{eq:zeta} in Lemma \ref{lem:zeta} and noting that $\omega(\theta_m ) >0 $, we have \begin{equation*}
{\mathcal I}_{11}^-={\mathcal O}(s^{-1-\alpha }) \end{equation*} as $s\rightarrow +\infty$.
For ${\mathcal I_{12}^-} $, we have the estimate \begin{align*}
|{\mathcal I_{12}^-}| &\leq \sum_{p=1}^\infty \frac{h^{2p-2} k^{2p}}{4^p(p!)^2} \int_{0}^h r^{\alpha+2} e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r={\mathcal O}(s^{-3-\alpha }) \end{align*} as $s\rightarrow +\infty$, where we suppose that $k h<1$ for sufficiently small $h$. Therefore, we conclude that \begin{equation}\label{eq:I1}
|{\mathcal I}_1^-| \leq {\mathcal O}( \|\eta\|_{C^\alpha } s^{-1-\alpha })\ \ \mbox{as}\ \ s\rightarrow+\infty. \end{equation}
For sufficiently small $h>0$ fulfilling that $kh<1$, using \eqref{eq:eta}, \eqref{eq:gampj} and \eqref{eq:Jp}, we have \begin{align}
|{\mathcal I}_2^-| & \leq 2 \sqrt{2\pi }\|\eta\|_{C^\alpha} \|g_j\|_{L^2 ({\mathbb S}^{1})} \sum_{p=1}^\infty \left[ \frac{ k^p}{2^p p!}\int_0^h r^{p+\alpha} e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r \right. \nonumber \\
&\left. \hspace{3.5cm} +\frac{k ^p }{2^p } \sum_{\ell=1}^\infty \frac{k^{2\ell }h^{2(\ell-1) }}{4^\ell (\ell !)^2 }\left (\int_0^h r^{p+\alpha+2 } e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r \right ) \right] \nonumber \\
& \leq 2\sqrt{2\pi } \|\eta\|_{C^\alpha} \|g_j\|_{L^2 ({\mathbb S}^{1})}\sum_{p=1}^\infty \left[ \frac{ k^p }{2^p p!}\int_0^h r^{p+\alpha} e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r \right. \nonumber \\
&\left. \hspace{3.5cm}+\frac{(k h)^p}{2^p } \sum_{\ell=1}^\infty \frac{k^{2\ell }h^{2(\ell-1) }}{4^\ell (\ell !)^2 }\left (\int_0^h r^{\alpha+2 } e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r \right ) \right] \nonumber \\
& \leq 2\sqrt{2\pi }\|\eta\|_{C^\alpha} \|g_j\|_{L^2 ({\mathbb S}^{1})}\sum_{p=1}^\infty \left[ \frac{ k^p h^{p-1} }{2^p p!}\int_0^h r^{\alpha+1 } e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r+{\mathcal O}\left (s^{-\alpha-3} \right ) \right] . \nonumber \end{align} Using Lemma \ref{lem:zeta}, we know that $$ \int_0^h r^{\alpha+1} e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r={\mathcal O}(s^{-\alpha-2})\ \ \mbox{as}\ \ s\rightarrow +\infty. $$ Therefore we can derive that \begin{equation}\label{eq:I2}
|{\mathcal I}_2^-| \leq {\mathcal O} (\|\eta\|_{C^\alpha} \|g_j\|_{L^2 ({\mathbb S}^{1})} s^{-\alpha -2})\ \ \mbox{as}\ \ s\rightarrow +\infty. \end{equation}
We proceed to investigate $I_{21}^-$ given by \eqref{eq:Ieta proof}. Denote
\begin{equation}
{\mathcal I}_{31}^-= \int_0^h J_0(k r) e^{-\sqrt{sr}\mu(\theta_m) }\mathrm{d} r , \quad {\mathcal I}_{32}^-=2 \sum_{p=1}^\infty \int_{0}^h \gamma_{pj} i^p J_p(kr) e^{-\sqrt{sr} \mu (\theta_m)} {\rm d} r.
\end{equation}
Substituting the expansion \eqref{eq:vjex} into $I_{21}^-$, it is easy to see that \begin{align*}
I_{21}^-
&=v_j(0){\mathcal I}_{31}^-+ {\mathcal I}_{32}^-. \end{align*}
From \eqref{eq:Jp}, we know that \begin{equation}\label{eq:J0p}
J_0(t)=\sum_{p=0}^\infty (-1)^p \frac{t^{2p}}{4^p(p!)^2}. \end{equation} Let $$ {\mathcal I}_{311}^-= \int_{0}^h e^{-\sqrt{sr} \mu(\theta_m)} {\rm d} r,\quad {\mathcal I_{312}}^-=\sum_{p=1}^\infty \frac{(-1)^p k^{2p}}{4^p(p!)^2} \int_{0}^h r^{2p} e^{-\sqrt{sr} \mu(\theta_m)} {\rm d} r. $$ Substituting the expansion \eqref{eq:J0p} of $J_0$ into ${\mathcal I}_{31}^-$, we have \begin{align*}
{\mathcal I}_{31}^-={\mathcal I}_{311}^-+{\mathcal I}_{312}^-, \end{align*} where ${\mathcal I}_{311}^-$ can be derived directly from Lemma \ref{lem:u0 int}.
For ${\mathcal I}_{312}^-$, we have
\begin{align}\label{eq:241}
\left| {\mathcal I_{312}^-}\right| \leq \sum_{p=1}^\infty \frac{ k^{2p} h^{2p-2}}{4^p(p!)^2} \int_{0}^h r^{2} e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r={\mathcal O}(s^{-3})\ \ \mbox{as}\ \ s\rightarrow +\infty.
\end{align}
Substituting the expansion \eqref{eq:Jp} of $J_p$ into ${\mathcal I_{32}^-} $, using \eqref{eq:gampj} we can deduce that
\begin{align}\label{eq:I32}
|{\mathcal I}_{32}^-|&\leq 2\sqrt{2\pi } \|g_j\|_{L^2 ({\mathbb S} ^{1})} \sum_{p=1}^\infty \left [ \frac{k^p}{2^p p!} \int_0^h r^p e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r \right. \nonumber \\
&\left. \hspace{3.5cm}+\frac{(k h)^p }{2^p } \sum_{\ell=1}^\infty \frac{k^{2\ell }h^{2(\ell-1) }}{4^\ell (\ell !)^2 } \int_0^h r^{2 } e^{-\sqrt{sr} \omega(\theta_m ) } {\rm d} r \right]\nonumber \\
&\leq 2\sqrt{2\pi } \|g_j\|_{L^2 ({\mathbb S} ^{1})} \sum_{p=1}^\infty \left [ \frac{k^ph^{p-1}}{2^p p!} \int_0^h r e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r+{\mathcal O}\left(s^{-3}\right) \right] \nonumber \\
&={\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{1})} s^{-2}),
\end{align}
where we suppose that $k h<1$ for sufficiently small $h$.
Finally, substituting \eqref{eq:I1}, \eqref{eq:I2}, \eqref{eq:I311}, \eqref{eq:241} and \eqref{eq:I32} into \eqref{eq:I2-}, we can obtain the integral equality \eqref{eq:I2-final}.
Following a completely similar argument in deriving the integral equality \eqref{eq:I2-final} of $I_2^-$, we can derive the integral equality \eqref{eq:I2+final} for $I_2^+$ as well.
The proof is complete. \end{proof}
\begin{lemma}\label{lem:29} Let $\mu(\theta )$ be defined in \eqref{eq:omegamu}. Assume that
\begin{equation}\label{eq:lem 29 cond}
-\pi < \theta_m < \theta_M <\pi \quad \mbox{and} \quad \theta_M-\theta_m \neq \pi,
\end{equation}
then \begin{equation}\label{eq:mutheta}
\mu(\theta_m)^{-2}+\mu(\theta_M)^{-2} \neq 0. \end{equation} \end{lemma} \begin{proof}
It can be calculated that $\mu(\theta )=e^{i\theta/2}$, and hence \begin{align*}
\mu(\theta_m)^{-2}+\mu(\theta_M)^{-2}=e^{-i\theta_m}+e^{-i\theta_M}=\frac{(\cos \theta_m+\cos \theta_M)+i (\sin \theta_m+\sin \theta_M )}{ (\cos \theta_m+i \sin \theta_m )(\cos \theta_M+i \sin \theta_M )}. \end{align*} By the sum-to-product identities, $\cos \theta_m+\cos \theta_M=2\cos\frac{\theta_m+\theta_M}{2}\cos\frac{\theta_M-\theta_m}{2}$ and $\sin \theta_m+\sin \theta_M=2\sin\frac{\theta_m+\theta_M}{2}\cos\frac{\theta_M-\theta_m}{2}$ vanish simultaneously if and only if $\cos\frac{\theta_M-\theta_m}{2}=0$, i.e., $\theta_M-\theta_m=\pi$ in view of $0<\theta_M-\theta_m<2\pi$. The latter is excluded by \eqref{eq:lem 29 cond}, which immediately implies \eqref{eq:mutheta}. \end{proof}
}
We are in a position to present one of the main theorems in this section.
\begin{theorem}\label{Th:1.1}
Let $v\in H^1(\Omega ) $ and $w \in H^1(\Omega ) $ be a pair of eigenfunctions to \eqref{eq:in eig} associated with $k\in\mathbb{R}_+$. Assume that the Lipschitz domain $\Omega\subset\mathbb{R}^2$ contains a corner {\color{black}$\Omega\cap B_h= \Omega \cap W$ with the vertex being $0\in \partial \Omega$, where $W$ is the sector defined in \eqref{eq:W} and $h\in \mathbb R_+$}. Moreover, there exists a sufficiently small neighbourhood {\color{black} $S_h= \Omega\cap B_h= \Omega \cap W$ of $0$},
such that $q w \in C^\alpha(\overline {S _h} ) $ with $q:=1+V$ and $\eta \in C^\alpha\left(\overline{\Gamma_h^\pm } \right)$ for $0< \alpha <1$, and $ v-w \in H^2(\Sigma_{\Lambda_h})$, where $S_h$, $\Gamma_h^\pm$ and $\Sigma_{\Lambda_h}$ are defined in \eqref{eq:sh}. If the following conditions are fulfilled:
\begin{itemize}
\item[(a)] the transmission eigenfunction $v$ can be approximated in $H^1(S_h)$ by the Herglotz functions $v_j$, $j=1,2,\ldots$, with kernels $g_j$ satisfying the approximation property \eqref{eq:ass1};
\item[(b)] the function $\eta(x)$ does not vanish at the vertex $0$, {\color{black} where $0$ is the vertex of $S_h$,}
i.e.,
\begin{equation}\label{eq:ass2}
\eta(0) \neq 0;
\end{equation}
\item[(c)] {\color{black} the open angle of
$S_h$ satisfies}
\begin{equation}\label{eq:ass3}
-\pi < \theta_m < \theta_M < \pi \mbox{ and } \theta_M-\theta_m \neq \pi;
\end{equation}
\end{itemize}
then one has {\color{black}
\begin{equation}\label{eq:nnv1}
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap \Omega )} \int_{B(0, \rho ) \cap \Omega } |v(x)| {\rm d} x=0,
\end{equation}
where $m(B(0, \rho ) \cap \Omega )$ is the area of $B(0,\rho )\cap \Omega $.} \end{theorem}
\begin{remark}\label{rem:th1.1}
In Theorem~\ref{Th:1.1}, we consider the case that $v, w$ are a pair of conductive transmission eigenfunctions to \eqref{eq:in eig} and show the vanishing property near a corner. We would like to emphasize that the result can be localized in the following sense: as long as $v, w$ satisfy all the conditions stated in Theorem~\ref{Th:1.1} in $\Omega\cap S_h$, one has the vanishing property \eqref{eq:nnv1} near the corner. That is, $v, w$ need not be conductive transmission eigenfunctions; it suffices that $v, w$ satisfy the equations in \eqref{eq:in eig} in $S_h\cap\Omega$ and the conductive transmission conditions on $\overline{S}_h \cap\partial\Omega$ in order to obtain the same vanishing property as stated in Theorem~\ref{Th:1.1}. Indeed, the subsequent proof of Theorem~\ref{Th:1.1} is for the aforementioned localized problem. \end{remark}
\begin{remark}\label{rem:hn1} The condition \eqref{eq:ass1} signifies a certain regularity condition of the transmission eigenfunction $v\in H^1(\Omega)$. In \cite{BL2017b}, the following regularity condition was introduced, \begin{equation}\label{eq:vjgj}
\|v-v_j\|_{L^2( \Omega )} \leq e^{-j},\quad \|g_j\|_{L^2( {\mathbb S}^{n-1} )} \leq C (\ln j)^\beta, \end{equation} where $C>0$ and $0<\beta < 1/(2n+8)$ ($n=2, 3$) are constants. In contrast, \eqref{eq:ass1} allows polynomial growth of the kernel functions. Moreover, we would like to remark that $qw\in C^\alpha(\overline S_h )$ is a technical requirement in our proof of Theorem~\ref{Th:1.1}. {\color{black} This technical condition can be fulfilled in our study of the unique recovery result for the inverse scattering problem. Indeed, when $q$ is a constant, it is shown in Lemma \ref{lem41} that $qw\in C^\alpha(\overline S_h)$.} The interior regularity requirement $v-w \in H^2(\Sigma_{\Lambda_h})$ can be fulfilled in certain practical scenarios; see Theorem~\ref{Th:4.1} in what follows on the study of an inverse scattering problem. The introduction of this interior regularity condition shall play a critical role in the proof of Theorem~\ref{Th:4.1}. \end{remark}
\begin{proof}[Proof of Theorem~\ref{Th:1.1}]
{\color{black} It is clear that the transmission eigenfunctions $v \in H^1(\Omega )$ and $w \in H^1(\Omega )$ to \eqref{eq:in eig} fulfill \eqref{eq:in eignew}. From Lemma \ref{lem:int1}, we know that \eqref{eq:intimportant} holds.} Using the fact that
\begin{equation}\label{eq:int u0 sh} \int_{S_h} u_0(sx) {\rm d} x=\int_{W} u_0(sx){\rm d} x- \int_{ W\backslash S_h } u_0(sx) {\rm d} x,
\end{equation} {\color{black} where $u_0(sx)$ is defined in \eqref{eq:u0}, substituting \eqref{eq:int u0 sh} into \eqref{eq:intimportant},} we obtain the following integral equation \begin{align*}
& ( \widetilde f_{1j} (0)-f_2(0)) \int_{ W } u_0(sx) {\rm d} x + \delta_j(s) = I_3-I_2^\pm -\int_{S_h} \delta\widetilde f_{1j}(x) u_0(sx) {\rm d} x\\
& \quad \quad \quad +\int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x +( \widetilde f_{1j} (0)-f_2(0)) \int_{ W \backslash S_h } u_0(sx) {\rm d} x- \xi_j^\pm (s), \end{align*} {\color{black} where $\delta \widetilde f_{1j} (x)$ and $\delta f_2(x) $ are defined in \eqref{eq:f1jf2 notation new} and \eqref{eq:f1jf2 notation}, respectively, and $\delta_j(s)$ and $\xi_j^\pm (s)$ are given by \eqref{eq:intnotation}.}
From \eqref{eq:u0w}, we know that \begin{equation}\label{eq:f1jf2}
( \widetilde f_{1j} (0)-f_2(0)) \int_{ W } u_0(sx) {\rm d} x =6 i (\widetilde f_{1j} (0)-f_2(0) ) (e^{-2\theta_M i }-e^{-2\theta_m i } ) s^{-2}. \end{equation}
{\color{black} Lemma \ref{lem:28} yields \eqref{eq:I2-final} and \eqref{eq:I2+final}.} Substituting \eqref{eq:I2-final} and \eqref{eq:I2+final} into \eqref{eq:intimportant}, multiplying both sides of \eqref{eq:intimportant} by $s$, and rearranging terms, we deduce that \begin{align}\label{eq:45}
& 2v_j(0)\eta(0)\Bigg[ \left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) } - \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } \right ) \nonumber\\
&\qquad\qquad+ \left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh} \mu(\theta_m ) } - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_m )} \right ) \Bigg] \nonumber \\
=& s\Bigg[ I_3-( \widetilde f_{1j} (0)-f_2(0)) \int_{S_h} u_0(sx) {\rm d} x- \delta_j(s)- v_j(0) \eta(0)\left({\mathcal I}_{312}^-+{\mathcal I}_{312}^+ \right) \nonumber \\
&-\eta(0)({\mathcal I}_{32}^++ {\mathcal I}_{32}^- ) -I_\eta^+ - I_\eta^- -\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x +\int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x -\xi_j^\pm(s) \Bigg]. \end{align} {\color{black} Taking} $s=j$, under the assumption \eqref{eq:ass1}, using \eqref{eq:I2-final} and \eqref{eq:I2+final} in Lemma \ref{lem:28}, we know that \begin{align}\label{eq:46}
&j | {\mathcal I}_{32}^-| \leq {\mathcal O}( j^{-1} \|g_j\|_{L^2({\mathbb S} ^{1} )} )\leq {\mathcal O}(j^{-1+\varrho }),\quad j | {\mathcal I}_{32}^+ | \leq {\mathcal O}( j^{-1} \|g_j\|_{L^2({\mathbb S} ^{1} )} ) \leq {\mathcal O}(j^{-1+\varrho }), \nonumber \\
&j |I_\eta^-| \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (\|g_j\|_{L^2({\mathbb S} ^{1})} j^{-1-\alpha } ) \right),\nonumber \\
&\quad \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} ( j^{-1-\alpha +\varrho } ) \right),\nonumber\\
& j|I_\eta^+| \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (\|g_j\|_{L^2({\mathbb S} ^{1})} j^{-1-\alpha } ) \right),\nonumber \\
&\quad \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} ( j^{-1-\alpha +\varrho } ) \right),\nonumber\\ & j {\mathcal I}_{312}^- \leq {\mathcal O}(j^{-2}),\quad j {\mathcal I}_{312}^+ \leq {\mathcal O}(j^{-2}). \end{align} Clearly, when $s=j$, {\color{black} under the assumption \eqref{eq:ass1}, by virtue of \eqref{eq:deltaf1j}, \eqref{eq:deltaf2} and \eqref{eq:I3} in Lemma \ref{lem:int1}, we can obtain \begin{equation}
\begin{split}
&
j \left|\int_{S_h} \delta \widetilde f_{1j} (x) u_0(j x) {\rm d} x \right | \leq \frac{2\sqrt{2\pi}(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}({S_h})^{1-\alpha } \\
&\hspace{4.5cm} \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{1})} j^{-\alpha-1 } \leq {\mathcal O}(j^{-1-\alpha +\varrho }), \\
& j \left| \int_{S_h} \delta f_2(x) u_0(j x) {\rm d} x\right| \leq \frac{2(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } \|f_2\|_{C^\alpha } j^{-\alpha-1 }, \ j |I_3| \leq C j e^{-c' \sqrt j},
\end{split} \end{equation} where $C,c'>0$ and $\delta_W$ is defined in \eqref{eq:xalpha}. Similarly, under the assumption \eqref{eq:ass1}, when $s=j$, in view of \eqref{eq:deltajnew} and \eqref{eq:xij} in Lemma \ref{lem:27}, it can be derived that \begin{align}
&j |\xi_j^\pm (j) |\leq C \left( |\eta(0)| \frac{\sqrt { \theta_M-\theta_m } e^{-\sqrt{j \Theta } \delta_W } h } {\sqrt 2 } j + \|\eta\|_{C^\alpha } j^{-\alpha } \frac{\sqrt{2(\theta_M-\theta_m) \Gamma(4\alpha+4) } }{(2\delta_W)^{2\alpha+2 } } \right) j^{-1-\Upsilon } , \nonumber \\
&j|\delta_j(j)| \leq \frac{\sqrt { \theta_M-\theta_m } k^2 e^{-\sqrt{j \Theta } \delta_W } h } {\sqrt 2 } j^{-\Upsilon } , \quad \Theta \in [0,h ]. \end{align} Furthermore, taking $s=j$ and using \eqref{eq:int u0 sh}, from \eqref{eq:u0w} and \eqref{eq:1.5}, we can deduce that \begin{equation}\label{eq:47}
j \left| \int_{S_h} u_0(jx) {\rm d} x \right| \leq 6 |e^{-2\theta_M i }-e^{-2\theta_m i } | j^{-1} + \frac{6(\theta_M-\theta_m )}{\delta_W^4} j^{-1} e^{-\delta_W \sqrt{h j}/2}. \end{equation} }
The coefficient of $v_j(0)$ in \eqref{eq:45} at the zeroth order of $s$ is $$ 2 \eta(0)\left (\mu(\theta_m)^{-2}+\mu(\theta_M)^{-2} \right ). $$ Under the assumption \eqref{eq:ass3}, from Lemma \ref{lem:29}, we have \begin{equation}\label{eq:umneq0}
\mu(\theta_m)^{-2}+\mu(\theta_M)^{-2}\neq 0. \end{equation}
We take $s=j$ in \eqref{eq:45}. By letting $j\rightarrow \infty$ in \eqref{eq:45}, from \eqref{eq:46} and \eqref{eq:47}, we can prove that
$$ \eta(0)\left(\mu(\theta_m)^{-2}+\mu(\theta_M)^{-2} \right) \lim_{j \rightarrow \infty} v_j(0)=0. $$ Since $\eta(0)\neq 0$ and using \eqref{eq:umneq0}, it is easy to see that $$ \lim_{j \rightarrow \infty} v_j(0)=0. $$ Using the fact that \begin{align}
\lim_{ \rho \rightarrow +0 } & \frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega } |v(x)| {\rm d} x \leq \lim_{j \rightarrow \infty} \Big( \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \nonumber \\
&\times \int_{B(0, \rho ) \cap \Omega } |v(x)-v_j(x)| {\rm d} x +\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega} |v_j(x)| {\rm d} x\Big), \label{eq:250} \end{align} we readily finish the proof of this theorem. \end{proof}
We next consider the degenerate case of Theorem~\ref{Th:1.1} with $\eta\equiv 0$. The conductive transmission eigenvalue problem \eqref{eq:in eig} is reduced to the following interior transmission eigenvalue problem \begin{align}\label{eq:in eig reduce}
\left\{ \begin{array}{l} \Delta w+k^2(1+V) w=0 \quad\ \mbox{ in } \Omega, \\[5pt] \Delta v+ k^2 v=0\hspace*{1.85cm}\ \mbox{ in } \Omega, \\[5pt] w= v,\quad \partial_\nu v=\partial_\nu w \hspace*{.9cm} \mbox{ on } \partial \Omega.
\end{array} \right.
\end{align} By slightly modifying our proof of Theorem~\ref{Th:1.1}, we can show the following result.
\begin{corollary}\label{cor:2.1} {\color{black} Let $\Omega\Subset \mathbb R^2$ be a bounded Lipschitz domain containing a corner $\Omega\cap B_h= \Omega \cap W$ with the vertex being $0\in \partial \Omega$, where $W$ is the sector defined in \eqref{eq:W} and $h\in \mathbb R_+$}. Suppose $v \in H^1(\Omega )$ and $w\in H^1(\Omega ) $ are a pair of interior transmission eigenfunctions to \eqref{eq:in eig reduce}. Let $W$ and $S_h$ be the same as described in Theorem~\ref{Th:1.1}. Assume that { $ v-w \in H^2(\Sigma_{\Lambda_h} ) $ } and $q w \in C^\alpha(\overline {S}_h ) $ for $0< \alpha <1$. Under the conditions \eqref{eq:ass3} and that the transmission eigenfunction $v$ can be approximated in $H^1(S_h)$ by the Herglotz functions $v_j$, $j=1,2,\ldots$, with kernels $g_j$ satisfying
\begin{equation}\label{eq:ass1 int}
\|v-v_j\|_{H^1(S_h)} \leq j^{-2-\Upsilon},\quad \|g_j\|_{L^2({\mathbb S}^{1})} \leq C j^{\varrho},
\end{equation}
for some constants $C>0$, $\Upsilon >0$ and $0< \varrho<\alpha $, one has
{\color{black}
\[
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho ) \cap \Omega }V(x) w(x) {\rm d} x=0.
\]
} \end{corollary}
\begin{remark}\label{rem:hn2} As discussed in the introduction, the vanishing of the interior transmission eigenfunctions near a corner was considered in \cite{BL2017b}. Compared to the main result in \cite{BL2017b}, Corollary~\ref{cor:2.1} is more general in two aspects. First, the corner in \cite{BL2017b} must be convex, whereas in Corollary~\ref{cor:2.1} the corner can be arbitrary as long as it is non-degenerate, namely \eqref{eq:ass3} is fulfilled. Second, the regularity requirement on the eigenfunction $v$ is relaxed from \eqref{eq:vjgj} to \eqref{eq:ass1 int}. Moreover, the technical condition $qw\in C^\alpha(\overline{ S}_h)$ in Corollary~\ref{cor:2.1} can be readily fulfilled when we consider the unique recovery of the inverse scattering problem under the condition that $q$ is a constant. Please refer to Lemma \ref{lem41}.
\end{remark}
\begin{proof}[Proof of Corollary~\ref{cor:2.1}] The proof follows from the one for Theorem~\ref{Th:1.1} with some necessary modifications, and we only outline it in the following. {\color{black} It is clear that the transmission eigenfunctions $v \in H^1(\Omega )$ and $w \in H^1(\Omega )$ to \eqref{eq:in eig reduce} fulfill \eqref{eq:in eignew} for $\eta \equiv 0$.} Since $ \eta(x) \equiv 0 $ near the corner, similar to \eqref{eq:intimportant} in Lemma \ref{lem:int1}, we have the following integral identity, \begin{align}\label{eq:intimportant int}
( \widetilde f_{1j} (0)-f_2(0)) \int_{S_h} u_0(sx) {\rm d} x+\delta_j(s) &= I_3 -\int_{S_h} \delta \widetilde f_{1j} (x) u_0(sx) {\rm d} x +\int_{S_h} \delta f_2(x) u_0(sx) {\rm d} x , \end{align} where $f_2(x)$, $\widetilde f_{1j} (x)$, $\delta_j(s)$, $I_3$, $ \delta \widetilde f_{1j} (x)$ and $ \delta f_2(x)$ are defined in \eqref{eq:vw}, \eqref{eq:deltajs}, \eqref{eq:intnotation} and \eqref{eq:f1jf2 notation}, respectively.
From \eqref{eq:u0w}, it follows that \begin{align}\label{eq:258sh}
( \widetilde f_{1j} (0)-f_2(0)) \int_{S_h} u_0(sx) {\rm d} x&=( \widetilde f_{1j} (0)-f_2(0)) \int_{W} u_0(sx) {\rm d} x \\
&\quad -( \widetilde f_{1j} (0)-f_2(0)) \int_{W \backslash S_h} u_0(sx) {\rm d} x \notag \\
&= 6 i (\widetilde f_{1j} (0)-f_2(0) ) (e^{-2\theta_M i }-e^{-2\theta_m i } ) s^{-2} \notag\\
&\quad -( \widetilde f_{1j} (0)-f_2(0)) \int_{W \backslash S_h} u_0(sx) {\rm d} x \notag. \end{align}
{\color{black} Combining \eqref{eq:ass1 int} with \eqref{eq:u0L2}} in Lemma \ref{lem:23}, one can see that \begin{equation}\label{eq:deltajnew int page14}
j^2 |\delta_j(s)| \leq \frac{\sqrt { \theta_M-\theta_m } k^2 e^{-\sqrt{s \Theta } \delta_W } h } {\sqrt 2 } j^{-\Upsilon }, \end{equation} where $ \Theta \in [0,h ]$ and $\delta_W$ is defined in \eqref{eq:xalpha}. By \eqref{eq:deltaf1j} {\color{black} in Lemma \ref{lem:int1}}, we can also deduce that \begin{align}\label{eq:deltaf1j2}
& j^2 \left|\int_{S_h} \delta \widetilde f_{1j} (x) u_0(j x) {\rm d} x \right | \leq \frac{2\sqrt{2\pi}(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}({S_h})^{1-\alpha } \nonumber \\
&\hspace{4.5cm} \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{1})} j^{-\alpha } \leq {\mathcal O}(j^{-(\alpha -\varrho ) }), \end{align} for $0< \varrho<\alpha$. After substituting \eqref{eq:258sh} into \eqref{eq:intimportant int}, we take $s=j$ and multiply both sides of \eqref{eq:intimportant int} by $j^2$. Using the assumptions \eqref{eq:ass1 int} and \eqref{eq:ass3}, by letting $j \rightarrow \infty$, from \eqref{eq:1.5} {\color{black} in Lemma \ref{lem:1}, \eqref{eq:deltaf1j}, \eqref{eq:deltaf2} and \eqref{eq:I3} in Lemma \ref{lem:int1},} and \eqref{eq:deltajnew int page14}, we prove that $$ \lim_{j \rightarrow \infty} v_j(0) = \frac{f_2(0)}{-k^2}. $$ Since \begin{align*}
\lim_{j \rightarrow \infty} v_j(0)&=\lim_{j \rightarrow \infty} \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho ) \cap \Omega} v_j(x) {\rm d} x\\
&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap\Omega )} \int_{B(0, \rho )\cap \Omega } v(x) {\rm d} x, \\
\frac{f_2(0)}{-k^2}&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap\Omega )} \int_{B(0, \rho )\cap
\Omega } qw(x) {\rm d} x, \end{align*} together with $$
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega} v(x) {\rm d} x = \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega} w(x) {\rm d} x, $$
we finish the proof of this corollary.
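More explicitly (a brief elaboration combining the limits displayed above), since $q=1+V$, one has
\begin{align*}
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega} V(x)w(x) {\rm d} x
&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega )} \int_{B(0, \rho )\cap \Omega} \left( q(x)w(x)-w(x)\right) {\rm d} x\\
&= \frac{f_2(0)}{-k^2}- \lim_{j \rightarrow \infty} v_j(0)=0,
\end{align*}
which is the claimed vanishing property.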
\end{proof}
\begin{remark} If $V(x)$ is continuous near the corner $0$ and $ V(0) \neq 0$, from the fact that \begin{align*}
&\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega)} \int_{B(0, \rho )\cap \Omega}V(x) w(x) {\rm d} x\\
&= V(0)
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap \Omega)} \int_{B(0, \rho )\cap \Omega}w(x) {\rm d} x, \end{align*} we can prove the vanishing property near the corner $0$ of the interior transmission eigenfunctions $v \in H^1(\Omega )$ and $w \in H^1(\Omega )$ under the assumptions \eqref{eq:ass3} and \eqref{eq:ass1 int}.
\end{remark}
If stronger regularity conditions are satisfied by the conductive transmission eigenfunctions $v$ and $w$ to \eqref{eq:in eig}, we can show that a more explicit vanishing property holds at the corner. The rest of this section is devoted to this case. In fact, we have the following theorem.
\begin{theorem}\label{Th:1.2}
Let $v \in H^2(\Omega )$ and $w \in H^1(\Omega ) $ be eigenfunctions to \eqref{eq:in eig}. Assume that $\Omega \subset \mathbb{R}^2 $ contains a corner {\color{black} $\Omega\cap B_h= \Omega \cap W$ {\color{black} with the vertex being $0\in \partial \Omega$}, where $W$ is the sector defined in \eqref{eq:W} and $h\in \mathbb R_+$}. Moreover, there exists a sufficiently small neighbourhood $S_h$ (i.e. $h>0$ is sufficiently small) of $0$ in $\Omega$, such that $qw \in C^\alpha(\overline {S}_h ) $ and $\eta \in C^\alpha\left(\overline{\Gamma}_h^\pm \right)$ for $0< \alpha <1$, and $v-w \in H^2(\Sigma_{\Lambda_h} )$. Under the following assumptions:
\begin{itemize}
\item[(a)] the function $\eta(x)$ does not vanish at the vertex $0$, i.e.,
\begin{equation}\label{eq:ass21}
\eta(0) \neq 0,
\end{equation}
\item[(b)] the open angle of $S_h$ containing the corner satisfies
\begin{equation}\label{eq:ass31} -\pi < \theta_m < \theta_M < \pi \mbox{ and } \theta_M-\theta_m \neq \pi, \end{equation}
\end{itemize}
we have
$
v(0) =w(0)=0.
$
\end{theorem}
\begin{proof} {\color{black} It is clear that the transmission eigenfunctions $v \in H^2(\Omega )$ and $w \in H^1(\Omega )$ to \eqref{eq:in eig} fulfill \eqref{eq:in eignew}.} Recall that $f_1$ and $f_2$ are defined by \eqref{eq:vw}. {\color{black} From Lemma \ref{lem:int1}, we know that \eqref{eq:221 int} is satisfied, which can be further formulated as
\begin{align}\label{eq:1.55}
\int_{S_h } ( f_{1}-f_2)u_0(sx) {\rm d} x
&=I_3
- \int_{\Gamma_{h} ^\pm } \eta(x) u_0 (sx) v(x) {\rm d} \sigma,
\end{align}
where $I_3$ is defined in \eqref{eq:intnotation}. } Since $ f_2\in C^\alpha(\overline {S}_h )$ and $\eta \in C^\alpha\left(\overline{\Gamma}_h^\pm \right)$, {\color{black} we know that $\eta$ and $f_2$ have the expansions \eqref{eq:eta} and \eqref{eq:f1jf2 notation} around the origin, respectively. Furthermore, due to the fact that $v \in H^2(S_h)$, which can be embedded into $C^\alpha(\overline { S}_h )$, we have} the following expansions \begin{align}\label{eq:splitting} \begin{split}
f_1(x)&=f_1(0)+\delta f_1(x), \quad |\delta f_1(x)| \leq \|f_1\|_{C^\alpha } |x|^\alpha,\\
v(x)&=v(0)+\delta v(x), \quad |\delta v (x)| \leq \|v\|_{C^\alpha } |x|^\alpha. \end{split} \end{align} Substituting \eqref{eq:eta}, \eqref{eq:f1jf2 notation} and \eqref{eq:splitting} into \eqref{eq:1.55}, we can derive that
\begin{align}\label{eq:1.57}
&(f_1(0)-f_2(0))\int_{S_h } u_0(sx) {\rm d} x + \int_{S_h } ( \delta f_{1}-\delta f_2)u_0(sx) {\rm d} x
\nonumber\\
& =I_3
- \eta(0) v(0)\int_{\Gamma_{h} ^\pm } u_0 (sx) {\rm d} \sigma -\eta(0) \int_{\Gamma_{h} ^\pm } \delta v(x) u_0 (sx) {\rm d} \sigma - v(0) \int_{\Gamma_{h} ^\pm } \delta \eta (x) u_0 (sx) {\rm d} \sigma \nonumber\\
& \quad - \int_{\Gamma_{h} ^\pm } \delta \eta (x) \delta v(x) u_0 (sx) {\rm d} \sigma .
\end{align} {\color{black} Using \eqref{eq:I311} in Lemma \ref{lem:u0 int},} it is easy to see that \begin{align}\label{eq:1.59}
\eta(0) v(0)\int_{\Gamma_{h} ^+ } u_0 (sx) {\rm d} \sigma &=2 s^{-1}v(0)\eta(0) \left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) }\right. \\&\left. \hspace{3.5cm} - \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } \right ) \nonumber \\
\eta(0) v(0)\int_{\Gamma_{h} ^- } u_0 (sx) {\rm d} \sigma &=2 s^{-1}v(0)\eta(0) \left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh}\mu(\theta_m )} \right. \nonumber \\&\left. \hspace{3.5cm} - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh}\mu(\theta_m ) } \right ), \nonumber
\end{align} where $\mu(\theta)$ is defined in \eqref{eq:omegamu}. Besides, from \eqref{eq:splitting}, using \eqref{eq:zeta}, we can estimate \begin{align}\label{eq:1.60}
s\left| \int_{\Gamma_{h} ^- } \delta v(x) u_0 (sx) {\rm d} \sigma \right| &\leq s\|v\|_{C^\alpha } \int_0^h r^\alpha e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r={\mathcal O}(s^{-\alpha}), \\
s\left| \int_{\Gamma_{h} ^- } \delta \eta(x) u_0 (sx) {\rm d} \sigma \right| &\leq s\|\eta \|_{C^\alpha } \int_0^h r^\alpha e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r={\mathcal O}(s^{-\alpha}), \nonumber \\
s\left| \int_{\Gamma_{h} ^- } \delta v(x) \delta \eta (x) u_0 (sx) {\rm d} \sigma \right| &\leq s\|v\|_{C^\alpha } \|\eta \|_{C^\alpha } \int_0^h r^{2\alpha} e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r={\mathcal O}(s^{-2\alpha}), \nonumber \\
s\left| \int_{S_h } \delta f_{1} u_0(sx) {\rm d} x \right| & \leq s \cdot \|f_1\|_{C^\alpha } \int_W |u_0(sx)| |x|^\alpha {\rm d} x \nonumber\\
& \leq \frac{2 \|f_1\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} s^{-\alpha-1}\nonumber\\
s \left| \int_{S_h } \delta f_{2} u_0(sx) {\rm d} x \right| & \leq s \cdot \|f_2\|_{C^\alpha } \int_W |u_0(sx)| |x|^\alpha {\rm d} x \nonumber \\
& \leq \frac{2 \|f_2\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} s^{-\alpha-1}. \nonumber \end{align}
Substituting \eqref{eq:1.59} into \eqref{eq:1.57} and multiplying both sides of \eqref{eq:1.57} by $s$, after rearranging terms, we obtain that
\begin{align}\label{eq:1.57new}
2& v(0)\eta(0) \left( \mu(\theta_M )^{-2} +\mu(\theta_m )^{-2} \right)= 2v(0)\eta(0) \Big ( \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) }\\
& + \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } +\mu(\theta_m )^{-2} e^{ -\sqrt{sh} \mu(\theta_m ) } + \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_m ) } \Big )\nonumber \\
&+s\Big [ I_3 - (f_1(0)-f_2(0))\int_{S_h } u_0(sx) {\rm d} x - \int_{S_h } ( \delta f_{1}-\delta f_2)u_0(sx) {\rm d} x
\nonumber\\
& -\eta(0) \int_{\Gamma_{h} ^\pm } \delta v(x) u_0 (sx) {\rm d} \sigma - v(0) \int_{\Gamma_{h} ^\pm } \delta \eta (x) u_0 (sx) {\rm d} \sigma - \int_{\Gamma_{h} ^\pm } \delta \eta (x) \delta v(x) u_0 (sx) {\rm d} \sigma \Big]. \nonumber
\end{align}
Since { $v-w\in H^2(\Sigma_{ \Lambda_h} )$,} \eqref{eq:I3} still holds. In \eqref{eq:1.57new}, letting $s \rightarrow \infty$, from \eqref{eq:1.5}, \eqref{eq:I3} {\color{black} in Lemma \ref{lem:int1}}, \eqref{eq:258sh} and \eqref{eq:1.60}, we can show that
$$ \eta(0) \left( \mu(\theta_M )^{-2} +\mu(\theta_m )^{-2} \right) v(0) =0.
$$
Under the assumption \eqref{eq:ass31}, from {\color{black} Lemma \ref{lem:29}, we have} $\mu(\theta_M )^{-2} +\mu(\theta_m )^{-2} \neq 0$. Since $\eta(0)\neq 0$ from \eqref{eq:ass21}, we finish the proof of this theorem.
\end{proof}
\begin{remark}\label{rem:2.5} Under the $H^2$ regularity, the interior transmission eigenfunctions to \eqref{eq:in eig reduce} have been shown that they always vanish at a corner point if the interior angle of the corner is not $\pi$; see \cite[Theorem 4.2]{Bsource} for more details. \end{remark}
\section{Vanishing near corners of conductive transmission eigenfunctions: three-dimensional case}\label{sec:3}
In this section, we study the vanishing property of the conductive transmission eigenfunctions for the 3D case. In principle, we could also consider a generic corner in the usual sense as the one for the 2D case. However, in what follows, we introduce a more general corner geometry that is described by {\color{black} $S_h \times (-M,M)$, where $S_h$ is defined in \eqref{eq:sh}} and $M\in\mathbb{R}_+$. It is readily seen that {\color{black} $S_h \times (-M,M)$} actually describes an edge singularity and we call it a 3D corner for notational unification. Suppose that the Lipschitz domain $\Omega\subset\mathbb{R}^3$ {\color{black} with $0\in \partial \Omega$} possesses a 3D corner. {\color{black} Let $0 \in \mathbb{R}^{2}$} be the vertex of $S_h$ and {\color{black} $ x_3^c \in (-M,M)$.} Then {\color{black} $(0,x_{3}^c )$} is defined as the edge point of {\color{black} $S_h \times (-M,M)$.} In Figure \ref{fig3d}, we give a schematic illustration of the geometry considered in 3D. In this section, under some appropriate assumptions, we show that the conductive transmission eigenfunctions $v$ and $w$ vanish at {\color{black} $(0,x_{3}^c )$.} Since the CGO solution constructed in Lemma \ref{lem:1} is only two-dimensional, in order to make use of similar arguments to those in Theorem \ref{Th:1.1}, we introduce the following dimension reduction operator. The dimension reduction operator technique was also introduced in \cite[Lemma 3.4]{Bsource} for studying the vanishing property of nonradiating sources and transmission eigenfunctions at edges in three dimensions. Similar to Theorem \ref{Th:1.1}, we first assume that $v$ is only $H^1$ smooth but can be approximated by Herglotz wave functions under some mild assumptions, where in Theorem \ref{Th:3.1} the interior angle of $S_h$ cannot be $\pi$. Besides, if $v$ has $H^2$ regularity near the edge point, in Theorem \ref{Th:3.2} we also prove the vanishing property of $v$ and $w$ near the edge point.
\begin{figure}
\caption{Schematic illustration of the corner in 3D.}
\label{fig3d}
\end{figure}
\begin{definition}\label{Def}
{\color{black} Let ${S_h}\subset \mathbb{R}^{2}$ be defined in \eqref{eq:sh} and let $M>0$. Let $g$ be a function defined on $S_h \times (-M,M )$ and pick any point $ x_{3}^c \in (-M, M)$. Suppose that $\psi \in C_0^{\infty}( (x_{3}^c-L, x_{3}^c+ L) )$ is a nonnegative function and $\psi\not\equiv0$, where $L$ is sufficiently small such that $ (x_{3}^c-L, x_{3}^c+ L) \subset (-M,M)$, and write $x=(x', x_3) \in \mathbb{R}^{3}, \, x'\in \mathbb{R}^{2}$. The dimension reduction operator $\mathcal{R}$ is defined by
\begin{equation}\label{dim redu op}
\mathcal{R}(g)(x')=\int_{x_{3}^c -L}^{x_{3}^c+ L} \psi(x_3)g(x',x_3)\,\mathrm{d} x_3,
\end{equation}
where $x'\in {S_h}$. } \end{definition}
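A basic property of $\mathcal{R}$, which underlies the arguments below, is obtained by integrating by parts twice in $x_3$: since $\psi$ is compactly supported in $(x_{3}^c-L, x_{3}^c+L)$, the boundary terms vanish, and for $g$ sufficiently regular in $x_3$ one has
\begin{equation*}
\mathcal{R}(\partial_{x_3}^2 g)(x')=\int_{x_{3}^c -L}^{x_{3}^c+ L} \psi''(x_3)\, g(x',x_3)\,\mathrm{d} x_3 .
\end{equation*}
This identity explains the appearance of $\psi''$ in the definition of $G$ and $\widetilde G$ in \eqref{construct G}.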
\begin{remark} The non-negativity of $\psi$ plays an important role in our proof of Theorem \ref{Th:3.1} in what follows, where we use the integral mean value theorem to carefully investigate the asymptotic behaviour, as $s \rightarrow \infty$, of the CGO solution $u_0(sx')$ given in Lemma \ref{lem:1} with respect to the parameter $s$. In order to use the two-dimensional CGO solution $u_0(sx')$ to prove the vanishing property of the conductive transmission eigenfunctions in $\mathbb{R}^3$, we need the dimension reduction operator defined in Definition \ref{Def} in our proof of Theorem \ref{Th:3.1}. \end{remark}
Before presenting the main results of this section, we first analyze the regularity of the functions after applying the dimension reduction operator. Using a similar argument of \cite[Lemma 3.4]{Bsource}, we can prove the following lemma, whose detailed proof is omitted.
\begin{lemma}\label{lem h2}
Let $g\in H^2({S_h}\times(-M ,M))\cap C^\alpha (\overline {S}_h\times [-M,M])$, where $0<\alpha<1$. Then
$$
{\mathcal R}(g)(x') \in H^2({S_h})\cap C^\alpha (\overline{S}_h).
$$
{\color{black} Similarly, if $g\in H^1({S_h}\times(-M ,M))$, we have $
{\mathcal R}(g)(x') \in H^1({S_h}).
$ }
\end{lemma}
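Although the detailed proof is omitted, the basic mechanism can be indicated (a sketch): one differentiates under the integral sign, $\partial^\beta_{x'}\mathcal{R}(g)=\mathcal{R}(\partial^\beta_{x'} g)$ for $|\beta|\leq 2$, and then estimates via the Cauchy--Schwarz inequality and the $C^\alpha$ bound of $g$,
\begin{equation*}
\|\mathcal{R}(h)\|_{L^2(S_h)}^2 \leq \|\psi\|_{L^2}^2\, \|h\|_{L^2(S_h\times(-M,M))}^2 ,\qquad
|\mathcal{R}(g)(x')-\mathcal{R}(g)(y')| \leq \|\psi\|_{L^1}\, \|g\|_{C^\alpha}\, |x'-y'|^\alpha ,
\end{equation*}
where $\psi\geq 0$ is used in the second estimate; these give the $H^2$ (respectively $H^1$) and $C^\alpha$ bounds for $\mathcal{R}(g)$.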
{\color{black} In Theorem \ref{Th:3.1}, we shall prove the vanishing property of conductive transmission eigenfunctions at an edge corner in 3D. Let us first introduce the mathematical setup.
Let ${S_h}\subset \mathbb{R}^{2}$ be defined in \eqref{eq:sh}, $M>0$, $0<\alpha<1$. For any fixed {\color{black} $x_3^c \in (-M,M)$} and $L>0$ defined in Definition \ref{Def}, we suppose that $L $ is sufficiently small such that $(x_3^c-L,x_3^c+L) \subset (-M,M) $. Write $x=(x', x_3) \in \mathbb{R}^{3}, \, x'\in \mathbb{R}^{2}$. Let $v,w\in H^1({S}_h\times(-M,M))$ fulfill that
\begin{align}\label{eq:3d in eig}
\left\{
\begin{array}{l}
\Delta v+ k^2 v=0, \qquad x'\in {S_h}, -M<x_3<M,\\[5pt]
\Delta w+k^2 q w=0, \quad x'\in {S_h}, -M<x_3<M,\\[5pt]
w= v,\quad \partial_\nu v + \eta v=\partial_\nu w, \quad x'\in\Gamma_h^\pm, -M<x_3<M,
\end{array}
\right.
\end{align}
where $\Gamma_h^\pm $ are defined in \eqref{eq:sh}, $\nu$ is the outward normal vector to $\Gamma_h^\pm \times (-M,M)$, $q\in L^\infty(S_h \times (-M,M)) $ and $\eta \in L^\infty(\Gamma_h^\pm \times (-M,M) )$ is independent of $x_3$.
Lemmas \ref{lem:32} and \ref{lem:37 coeff} will be used to prove Theorem \ref{Th:3.1} in what follows.
\begin{lemma}\label{lem:32}
Suppose that $v,w\in H^1({S}_h\times(-M,M))$ fulfill \eqref{eq:3d in eig}.
Denote
\begin{align}\label{construct G}
\begin{split}
G(x')&= \int_{-L}^{L} \psi''(x_3)v(x', x_3)\mathrm{d} x_3-k^2 {\mathcal R} (v)(x'),\\
\widetilde G(x')&= \int_{-L}^{L} \psi''(x_3)w(x', x_3)\mathrm{d} x_3-k^2 {\mathcal R} (qw)(x'),
\end{split}
\end{align}
where $\mathcal{R}$ is the dimension reduction operator associated with $\psi$ defined in Definition \ref{Def}. Then there holds
\begin{align}\label{eq:eig reduc}
\left\{ \begin{array}{l}
\Delta_{x'} {\mathcal R} (v) (x')
=G(x') \hspace*{7.8cm} \mbox{ in } S_h, \\[5pt]
\Delta_{x'} {\mathcal R} (w) (x') =\widetilde G(x')\hspace*{7.6cm}\ \mbox{ in } S_h, \\[5pt] \mathcal{R}(w)(x')= \mathcal{R}(v) (x'),\ \partial_\nu {\mathcal R} (v) (x') + \eta(x'){\mathcal R} ( v) (x')= \partial_\nu {\mathcal R}( w) (x') \ \ \ \ \mbox{ on } \Gamma_h^\pm ,
\end{array} \right.
\end{align} in the distributional sense, where $\nu$ signifies the exterior unit normal vector to $\Gamma_h^\pm $. Let \begin{equation}\label{eq:F1xi}
{ G}(x')-\widetilde G(x')=F_{1} (x')+F_{2} (x')+F_{3} (x'), \end{equation} where \begin{align*}
F_{1} (x')&=
\int_{-L}^{L}\psi''(x_3)(v(x', x_3)-w(x', x_3)) {\rm d} x_3,\, F_{2} (x')=k^2 {\mathcal R} (q w)(x'),\\ F_{3} (x')&=-k^2 {\mathcal R} (v)(x') . \end{align*} Recall that the CGO solution $u_0(sx')$ is defined in \eqref{eq:u0} with the parameter $s\in \mathbb R_+$. There holds the following integral identity, \begin{equation}\label{eq:36 int}
\begin{split}
&\int_{S_h } ( F_{1} (x')+F_{2} (x')+F_{3} (x') ) u_0(sx') {\rm d} x'=I_3 - \int_{\Gamma_{ h} ^\pm } \eta(x') {\mathcal R} ( v)(x') u_0 (sx') {\rm d} \sigma,
\end{split} \end{equation} where $I_3=\int_{\Lambda_h } ( u_0(sx') \partial_\nu {\mathcal R}(v-w )(x')-{\mathcal R}(v-w) (x') \partial_\nu u_0(sx') ) {\rm d} \sigma $. If $qw \in C^\alpha(\overline{S}_h \times [-M,M] )$ for $0<\alpha<1$ and $v-w\in H^2({S_h}\times (-M,M) )$, then we have $F_{1} (x') \in C^\alpha(\overline{ S}_h)$ for $\alpha\in(0,1)$ and $F_{2} (x') \in C^\alpha(\overline{ S}_h )$. \end{lemma}
\begin{proof} For the edge point $(0,x_3^c)\in S_h \times (-M,M)$, where $x_3^c\in (-M,M)$, without loss of generality, in the subsequent analysis we assume that $x_3^c=0$. Since $\Delta_{x'} v=-k^2 v-\partial_{x_3}^2 v$ and $\Delta_{x'} w=-k^2 q w-\partial_{x_3}^2 w$, by the dominated convergence theorem, integration by parts gives \begin{align}
\Delta_{x'} {\mathcal R} (v) (x')&=
\int_{-L}^{L} \psi''(x_3)v(x', x_3)\mathrm{d} x_3-k^2 {\mathcal R} (v)(x') =G(x'),\label{add1} \\
\Delta_{x'} {\mathcal R} (w) (x')& = \int_{-L}^{L} \psi''(x_3)w(x', x_3)\mathrm{d} x_3-k^2 {\mathcal R} (qw)(x') =\widetilde G(x'). \label{add2}
\end{align}
Moreover, we have
\begin{equation}\label{3eq:bound1}
\mathcal{R}(w)(x')= \mathcal{R}(v) (x') \mbox { on } \Gamma_h^\pm
\end{equation}
in the sense of distribution, since $w(x',x_3)=v(x',x_3)$ when $x' \in \Gamma_h^\pm$ and $-L < x_3 < L$. Similarly, using the fact that $\eta$ is independent of $x_3$, we can easily show that
\begin{equation}\label{3eq:bound2}
\partial_\nu {\mathcal R} (v) (x') + \eta(x'){\mathcal R} ( v) (x')= \partial_\nu {\mathcal R}( w) (x') \mbox { on } \Gamma_h^\pm,
\end{equation} in the sense of distribution.
Subtracting \eqref{add2} from \eqref{add1}, combining with the boundary condition \eqref{3eq:bound1} and \eqref{3eq:bound2}, we deduce that \begin{equation}\label{eq:310 pde} \begin{split}
&\Delta_{x'}({\mathcal R}(v)(x')-{\mathcal R}(w)(x') )=F_1 (x')+F_2(x')+F_3 (x') \mbox{ in } S_h, \\
&\mathcal{R}(v)(x')- \mathcal{R}(w) (x')=0,\ \partial_\nu {\mathcal R} (v) (x') - \partial_\nu {\mathcal R}( w) (x') = -\eta(x'){\mathcal R} ( v) (x') \mbox{ on } \Gamma_h^\pm .
\end{split}
\end{equation}
Recall that $u_0(sx')\in H^1(S_h)$ from Lemma \ref{lem:23}. Since $v,\, w \in H^1(S_h\times (-M,M) )$ and $q\in L^{\infty}(S_h \times (-M,M))$, by virtue of Lemma \ref{lem h2}, it yields that $F_1,\ F_2,\, F_3 \in L^2(S_h)$. By Lemma \ref{lem:green} and using the fact that $\Delta_{x'} u_0(sx')=0$ in $S_h$, we have the following Green identity \begin{equation}\label{eq:311 int}
\int_{S_h} u_0(sx') \Delta_{x'}{\mathcal R} (v-w)(x')\mathrm{d} x'=\int_{\partial S_h} (u_0(sx') \partial_\nu {\mathcal R}(v-w)(x')- {\mathcal R}(v-w)(x') \partial_\nu u_0(sx'))\mathrm{d} \sigma . \end{equation}
Substituting \eqref{eq:310 pde} into \eqref{eq:311 int} yields \eqref{eq:36 int}.
Recall that $F_1$ and $F_2$ are defined in \eqref{eq:F1xi}. Since $v-w\in H^2({S_h}\times(-M,M))$, from Lemma \ref{lem h2}, we know that $F_{1} (x') \in H^2(S_h)$, which can be embedded into $C^\alpha(\overline{ S}_h)$ for $\alpha\in(0,1)$. Moreover, from Lemma \ref{lem h2}, we have $F_{2} (x') \in C^\alpha(\overline{ S}_h )$, since $qw \in C^{\alpha}(\overline{ S}_h \times [-M,M] )$ and $0<\alpha<1$. \end{proof}
\begin{lemma}
\label{lem:int2} Let $S_h$ and $\Gamma_h^\pm$ be defined in \eqref{eq:sh}. Suppose that $v,w\in H^1({S}_h\times(-M,M))$ fulfill \eqref{eq:3d in eig}. Recall that the CGO solution $u_0(sx)$ is defined in \eqref{eq:u0} with the parameter $s\in \mathbb R_+$. Let $$
F_{3j} (x')=-k^2 {\mathcal R} (v_j)(x') ,
$$
where $v_j$ is the Herglotz wave function given by \begin{equation}\label{eq:herg3}
v_j(x)=\int_{{\mathbb S}^{2}} e^{i k d \cdot x} g_j(d ) {\rm d} \sigma(d ), \quad d \in {\mathbb S}^{2}. \end{equation} Then $F_{3j} (x') \in C^\alpha (\overline S_h )$ and it has the expansion \begin{equation}\label{eq:F3j}
F_{3j}(x')=F_{3j} (0)+\delta F_{3j }(x'),\quad |\delta F_{3j} (x') | \leq \|F_{3j} \|_{C^\alpha } |x'|^\alpha. \end{equation} Recall that $F_{1} (x')$ and $F_{2} (x')$ are defined in \eqref{eq:F1xi}. Assume that $F_{1} (x')\in C^\alpha(\overline S_h)$ and $F_2(x')\in C^\alpha(\overline S_h)$ ($0<\alpha <1$) satisfy
\begin{align}\label{eq:326F3j} \begin{split}
F_{1}(x')&= F_{1} (0)+\delta F_{1 } (x'),\quad |\delta F_{1 } (x') | \leq \| F_{1} \|_{C^\alpha } |x'|^\alpha,\\
F_{2}(x')&=F_{2} (0)+\delta F_{2 }(x'),\quad |\delta F_{2} (x') | \leq \|F_{2} \|_{C^\alpha } |x'|^\alpha,
\end{split} \end{align} then there holds that \begin{align}\label{3eq:int identy} \begin{split}
&(F_{1 } (0)+F_{2 } (0)+F_{3j } (0)) \int_{S_h} u_0(sx') {\rm d} x'+\delta_j (s) = I_3 - I_2^\pm - \epsilon_j^\pm(s)\\
&\quad -\int_{S_h} \delta F_{1} (x') u_0(sx') {\rm d} x' -\int_{S_h} \delta F_{2} (x')u_0(sx') {\rm d} x' -\int_{S_h} \delta F_{3j} (x')u_0(sx') {\rm d} x' ,
\end{split} \end{align} where \begin{align}\label{3eq:intnotation} \begin{split}
I_2^\pm& =\int_{\Gamma_{h } ^\pm } \eta(x') u_0(sx') {\mathcal R} (v_j) (x') {\rm d} \sigma ,\\
\delta_j(s)&=-k^2 \int_{S_h} ( {\mathcal R} (v)(x') -{\mathcal R} (v_j)(x') )u_0(sx') {\rm d} x', \\
\epsilon_{j}^\pm(s)&= \int_{\Gamma_{h } ^\pm } \eta(x') u_0(sx') {\mathcal R} (v(x',x_3)- v_j (x',x_3) ) {\rm d} \sigma,
\end{split}
\end{align}
and $I_3$ is defined in \eqref{eq:36 int}. Furthermore, assume that the transmission eigenfunction $v$ can be approximated in $H^1(S_h \times (-M,M))$ by the Herglotz wave functions $v_j$ defined in \eqref{eq:herg3}, $j=1,2,\ldots$, with kernels $g_j$ satisfying \begin{equation}\label{3eq:ass0}
\|v-v_j\|_{H^1(S_h \times (-M,M))} \leq j^{-1-\Upsilon},\quad \|g_j\|_{L^2({\mathbb S}^{2})} \leq C j^{{1+\varrho}}, \end{equation} for some positive constants $C,\, \Upsilon$ and ${\varrho}$. Then there hold that \begin{subequations} \begin{align}
\left|\int_{S_h} \delta F_{3j} (x') u_0(sx') {\rm d} x' \right | & \leq \frac{8 L\sqrt{\pi}\|\psi\|_{L^\infty}(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}(S_h)^{1-\alpha }\nonumber \\
& \quad \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{2})} s^{-\alpha-2 }, \label{3eq:deltaf1j}\\
\label{3eq:deltaf1}
\left|\int_{S_h} \delta F_{1 } (x') u_0(sx') {\rm d} x' \right | &\leq \frac{2\|F_{1}\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} s^{-\alpha-2} , \\
\left|\int_{S_h} \delta F_{2} (x') u_0(sx') {\rm d} x' \right | &\leq \frac{2\|F_{2 }\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} s^{-\alpha-2}, \label{3eq:deltaf2} \end{align} \end{subequations} where $\Theta \in [0,h]$ and $\delta_W$ is defined in \eqref{eq:xalpha}, as $s\rightarrow +\infty $. If $v-w\in H^2(S_h \times (-M,M) )$, then one has \begin{align}\label{3eq:I3}
\left| I_3 \right| &\leq C e^{-c' \sqrt s}\ \ \mbox{as}\ \ s \rightarrow + \infty, \end{align}
where $C>0$ and $c'>0$ are constants.
\end{lemma}
\begin{proof} It is clear that the Herglotz wave functions $v_j \in H^2 (S_h\times (-M,M))$. From Lemma \ref{lem h2}, we have ${\mathcal R}(v_j)(x') \in H^2(S_h)$, which can be embedded into $C^\alpha( \overline{ S}_h) $ satisfying \eqref{eq:F3j}.
Since $v \in H^1(S_h \times (-M,M))$ is a solution to the Helmholtz equation in $S_h \times (-M,M)$, from Lemma \ref{lem:Herg}, $v$ can be approximated by the Herglotz wave functions $v_j(x) $ given in \eqref{eq:herg3} in the $H^1$-topology. Therefore, we deduce that \begin{align}\label{eq:313delta}
\int_{S_h} -k^2 {\mathcal R} (v)(x') u_0(sx') {\rm d} x'= \int_{S_h } -k^2 {\mathcal R} (v_j)(x') u_0(sx') {\rm d} x'+ \delta_j(s),
\end{align} where $ \delta_j(s) $ is defined in \eqref{3eq:intnotation}.
Let
$$
I_1= \int_{S_h} u_0(sx') (F_{1} (x')+F_{2} (x')+F_{3j} (x')) {\rm d} x'.
$$
Substituting \eqref{eq:F3j} and \eqref{eq:326F3j} into $I_1$ yields \begin{align*} I_1 &=(F_{1 } (0)+F_{2 } (0)+F_{3j } (0)) \int_{S_h} u_0(sx') {\rm d} x'+\int_{S_h} \delta F_{1} (x') u_0(sx') {\rm d} x'\\ &\quad +\int_{S_h} \delta F_{2} (x')u_0(sx') {\rm d} x' +\int_{S_h} \delta F_{3j} (x')u_0(sx') {\rm d} x'. \end{align*}
Let $F_3(x')$ be defined in \eqref{eq:F1xi}. By virtue of \eqref{eq:313delta} we have
\begin{align}
\int_{S_h } ( F_{1} (x')+F_{2} (x')+F_{3} (x') ) u_0(sx') {\rm d} x'&=(F_{1 } (0)+F_{2 } (0)+F_{3j } (0)) \int_{S_h} u_0(sx') {\rm d} x'\notag \\
&+\int_{S_h} \delta F_{1} (x') u_0(sx') {\rm d} x' +\int_{S_h} \delta F_{2} (x')u_0(sx') {\rm d} x' \notag \\ &+\int_{S_h} \delta F_{3j} (x')u_0(sx') {\rm d} x' + \delta_j(s) . \label{eq:320 I1}
\end{align}
Recall that $v$ can be approximated by the Herglotz wave functions $v_j$ given in \eqref{eq:herg3} in the $H^1$-norm. Then \begin{align}\label{3eq:int3}
\int_{\Gamma_{h } ^\pm } \eta(x') u_0(sx') {\mathcal R}(v) (x') {\rm d} \sigma&=\int_{\Gamma_{h } ^\pm } \eta(x') u_0(sx') {\mathcal R}(v_j) (x') {\rm d} \sigma + \epsilon_{j}^\pm(s), \\
\epsilon_{j}^\pm(s)&= \int_{\Gamma_{h } ^\pm } \eta(x') u_0(sx') {\mathcal R} (v(x',x_3)- v_j (x',x_3) ) {\rm d} \sigma. \nonumber \end{align}
Plugging \eqref{eq:320 I1} and \eqref{3eq:int3} into \eqref{eq:36 int} in Lemma \ref{lem:32}, we can obtain \eqref{3eq:int identy}.
Recall that $F_{3j} (x')=-k^2 {\mathcal R} (v_j)(x')$. Using the property of compact embedding of H{\"o}lder spaces, we can derive that for $0< \alpha<1$, $$
\| F_{3j} \|_{C^\alpha } \leq k^2 {\rm diam}(S_h)^{1-\alpha } \|{\mathcal R}(v_j) \|_{C^1}, $$ where $ {\rm diam}(S_h)$ is the diameter of $S_h$. By the definition of the dimension reduction operator \eqref{dim redu op}, it is easy to see that \begin{align*}
|{\mathcal R}(v_j) (x')| \leq 4 L\sqrt{\pi }\|\psi \|_{L^\infty}\|g_j\|_{L^2({\mathbb S}^{2})},\quad |\partial_{x'}{\mathcal R}(v_j) (x')| \leq 4 k L\sqrt{\pi }\|\psi \|_{L^\infty}\|g_j\|_{L^2({\mathbb S}^{2})}. \end{align*} Thus we have $$
\|{\mathcal R}( v_j) \|_{C^1} \leq 4 L\sqrt{\pi}\|\psi\|_{L^\infty}(1+k) \|g_j\|_{L^2( {\mathbb S}^{2})}. $$ Therefore from \eqref{eq:xalpha} in Lemma \ref{lem:1} we have \eqref{3eq:deltaf1j}.
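For the reader's convenience, we sketch how the above $C^1$-bound on ${\mathcal R}(v_j)$ follows from the Cauchy-Schwarz inequality on ${\mathbb S}^{2}$. Since $|{\mathbb S}^{2}|=4\pi$, we have
\begin{align*}
|v_j(x)| \leq \int_{{\mathbb S}^{2}} |g_j(d )| {\rm d} \sigma(d ) \leq |{\mathbb S}^{2}|^{1/2} \|g_j\|_{L^2({\mathbb S}^{2})}=2\sqrt{\pi}\, \|g_j\|_{L^2({\mathbb S}^{2})},
\end{align*}
and therefore
\begin{align*}
|{\mathcal R}(v_j)(x')| \leq \int_{-L}^{L} |\psi(x_3)|\, |v_j(x',x_3)| {\rm d} x_3 \leq 4 L\sqrt{\pi }\,\|\psi \|_{L^\infty}\|g_j\|_{L^2({\mathbb S}^{2})}.
\end{align*}
The bound on $\partial_{x'}{\mathcal R}(v_j)$ is obtained in the same manner, the additional factor $k$ coming from differentiating $e^{ik d\cdot x}$ in \eqref{eq:herg3}.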
Using similar arguments, we can deduce \eqref{3eq:deltaf1} and \eqref{3eq:deltaf2}.
Since $v-w \in H^2(S_{h} \times (-M,M))$, Lemma \ref{lem h2} implies that ${\mathcal R} (v-w) \in H^2(S_{h} )$; hence, from \eqref{eq:I3} in Lemma \ref{lem:int1}, we can prove \eqref{3eq:I3}.
The proof is complete.
\end{proof}
\begin{lemma}\label{lem:3 delta}
Under the same setup in Lemma \ref{lem:int2}, we assume that the transmission eigenfunction $v$ to \eqref{eq:3d in eig} can be approximated by a sequence of the Herglotz wave functions $\{v_j\}_{j=1}^{+\infty} $ with the form \eqref{eq:herg3} in $H^1(S_h \times (-M,M))$ satisfying \begin{equation}\label{3eq:ass1}
\|v-v_j\|_{H^1(S_h \times (-M,M))} \leq j^{-1-\Upsilon},\quad \|g_j\|_{L^2({\mathbb S}^{2})} \leq C j^{{1+\varrho}},
\end{equation}
for some positive constants $C$, $\Upsilon$ and $\varrho$. Let $\delta_j(s)$ and $\epsilon_j^\pm(s)$ be defined in \eqref{3eq:intnotation}. Then we have the following estimate:
\begin{align}
|\delta_j(s)| & \leq \frac{ k^2 \|\psi \|_{L^\infty} \sqrt{C(L,h) ( \theta_M-\theta_m )} e^{-\sqrt{s \Theta } \delta_W } h }{\sqrt 2 } j^{-1- \Upsilon}. \label{3eq:deltajnew3}
\end{align} Furthermore, assuming that the boundary parameter $\eta$ in \eqref{eq:3d in eig} fulfills $\eta ( x) \in C^{\alpha}(\overline{\Gamma_h^\pm } \times [-M,M] )$, which implies that $\eta(x')\in C^\alpha(\overline{\Gamma_{ h}^\pm})$ and has the expansion \begin{equation}\label{3eq:eta}
\eta(x')=\eta(0)+\delta \eta(x'),\quad |\delta \eta(x')| \leq \|\eta \|_{C^\alpha } |x'|^\alpha, \end{equation} then under the assumption \eqref{3eq:ass1}, we have \begin{align}
|\epsilon_j^\pm (s) | & \leq C {\color{black} \| \psi \|_{L^\infty} } \left( |\eta(0)| \frac{\sqrt { \theta_M-\theta_m } e^{-\sqrt{s \Theta } \delta_W } h } {\sqrt 2 } \right. \nonumber\\
&\left. \quad + \|\eta\|_{C^\alpha } s^{-(\alpha+1 )} \frac{\sqrt{2(\theta_M-\theta_m) \Gamma(4\alpha+4) } }{(2\delta_W)^{2\alpha+2 } } \right) j^{-1-\Upsilon}. \label{3eq:xij} \end{align}
\end{lemma}
\begin{proof}
We first prove \eqref{3eq:deltajnew3}. Indeed, by using the Cauchy-Schwarz inequality, we have \begin{align}
\|{\mathcal R}( v)-{\mathcal R}( v_j) \|_{L^2(S_h)}^2&=\int_{S_h } \left| \int_{-L}^L \psi(x_3) (v(x',x_3)-v_j(x',x_3)) {\rm d} x_3 \right|^2 {\rm d} x'\nonumber \\
&\leq C(L,h) \|\psi\|^2_{L^\infty} \|v-v_j\|_{L^2(S_h \times (-L,L))}^2, \label{eq:312} \end{align} where $C(L,h) $ is a positive constant depending on $L$ and $h$. Since the $L^2$-norm of $u_0$ in $S_h$ can be estimated by \eqref{eq:u0L2} in Lemma \ref{lem:23}, recalling that $\delta_j(s)$ is defined in \eqref{3eq:intnotation}, and using the Cauchy-Schwarz inequality again, by virtue of \eqref{3eq:ass1}, we can deduce \eqref{3eq:deltajnew3}.
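More precisely, by the Cauchy-Schwarz inequality,
\begin{align*}
|\delta_j(s)| = k^2 \left| \int_{S_h} ( {\mathcal R} (v)(x') -{\mathcal R} (v_j)(x') )u_0(sx') {\rm d} x' \right| \leq k^2 \|{\mathcal R}( v)-{\mathcal R}( v_j) \|_{L^2(S_h)} \| u_0(sx')\|_{L^2(S_h)},
\end{align*}
and combining \eqref{eq:312}, the bound \eqref{eq:u0L2} on $\| u_0(sx')\|_{L^2(S_h)}$ and the approximation property \eqref{3eq:ass1} gives \eqref{3eq:deltajnew3}.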
Note that $\epsilon_{j}^\pm(s)$ is defined in \eqref{3eq:intnotation} and $\eta$ has the expansion \eqref{3eq:eta}. Using the Cauchy-Schwarz inequality and the trace theorem, we have \begin{align}\label{3eq:19}
|\epsilon_{j}^\pm(s)|& \leq |\eta(0)| \int_{\Gamma_{h } ^\pm } | u_0(sx') | | {\mathcal R} (v(x',x_3)- v_j (x',x_3) ) | {\rm d} \sigma \\
&\quad + \|\eta\|_{C^\alpha } \int_{\Gamma_{h } ^\pm }|x'|^\alpha | u_0(sx') | | {\mathcal R} (v(x',x_3)- v_j (x',x_3) ) | {\rm d} \sigma \notag \\
&\leq |\eta(0)| \|{\mathcal R} (v-v_j) \|_{H^{1/2}(\Gamma_h^\pm ) } \| u_0(sx')\|_{H^{-1/2}(\Gamma_h^\pm ) } \notag \\
&\quad + \|\eta\|_{C^\alpha } \|{\mathcal R} (v-v_j) \|_{H^{1/2}(\Gamma_h^\pm ) } \| {|x'|}^\alpha u_0(sx')\|_{H^{-1/2}(\Gamma_h^\pm ) } \notag \\
&\leq |\eta(0)| \|{\mathcal R} (v-v_j) \|_{H^{1}(S_h ) } \| u_0(sx')\|_{L^2 (S_h) } \notag \\
&\quad + \|\eta\|_{C^\alpha } \|{\mathcal R} (v-v_j) \|_{H^{1}(S_h) } \| {|x'|}^\alpha u_0(sx')\|_{L^2 (S_h ) } \notag \\
&\leq C \| \psi \|_{L^\infty} \|v-v_j \|_{H^{1}(S_h \times (-L,L)) }(|\eta(0)| \| u_0(sx')\|_{L^{2}(S_h ) } + \|\eta\|_{C^\alpha }\| {|x'|}^\alpha u_0(sx')\|_{L^{2}(S_h ) } ), \nonumber \end{align} where $C$ is a positive constant and the last inequality comes from Lemma \ref{lem h2}. Substituting \eqref{eq:u0L2}, \eqref{eq:22} and \eqref{3eq:ass1} into \eqref{3eq:19}, we obtain \eqref{3eq:xij}.
\end{proof}
\begin{lemma}
Let $j_\ell (t)$ be the $\ell$-th spherical Bessel function with the form
\begin{equation}\label{eq:bess sph}
j_\ell (t)=\frac{t^\ell }{ (2\ell+1)!!}\left (1-\sum_{l=1}^\infty \frac{(-1)^l t^{2l }}{ 2^l l! N_{\ell,l} }\right ),
\end{equation}
where $N_{\ell,l}=(2\ell+3)\cdots (2\ell+2l+1)$, and let $\mathcal R$ be the dimension reduction operator defined in \eqref{dim redu op}. Then
\begin{subequations}
\begin{align}
{\mathcal R} (j_0)(x') &=C(\psi )\Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l} } { 2^l l! (2l+1)!! } \left(|x'|^2+a_{0,l} ^2\right )^l\Bigg],\label{eq:j0} \\
{\mathcal R} (j_\ell)(x') &={ \frac{k^\ell (|x'|^2+a_{\ell }^2)^{(\ell-1) /2 } }{ (2\ell+1)!!} \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l} (|x'|^2+a_{\ell,l }^2)^{l }}{ 2^l l! N_{\ell,l} } \Bigg] C_1(\psi )|x'|^2 }, \label{eq:jell}
\end{align}
\end{subequations}
where $j_0=j_0(k|x|)$, $j_\ell=j_\ell(k|x|)$, $\ell \in \mathbb N $, $a_{0,l} \in [-L,L]$, $a_\ell, \, a_{\ell,l} \in [-L,L]$, and
\begin{equation}\label{eq:329 cpsi}
C(\psi ) = \int_{-L}^L \psi(x_3) {\rm d} x_3, \quad C_1(\psi )= \int_{-\arctan L/|x'|}^{\arctan L/|x'|} \psi(|x'| \tan \varpi ) \sec^3 \varpi {\rm d} \varpi.
\end{equation}
Furthermore, there holds that
\begin{equation}\label{eq:C1psi}
0<C_1(\psi) < 2^{5/2} \| \psi \|_{L^\infty} \arctan L . \end{equation} \end{lemma}
\begin{proof}
From the definition of the dimension reduction operator \eqref{dim redu op} and the integral mean value theorem, we know that \begin{align}
{\mathcal R} (j_0)(x') &= \int_{-L}^L \psi(x_3) j_0(k |x|) {\rm d} x_3 \notag \\ &= \int_{-L}^L \psi(x_3) {\rm d} x_3
- \sum_{l=1}^\infty \frac{(-1)^l k^{2l} } {2^l l! (2l+1)!! } \int_{-L}^L \psi(x_3)\left (|x'|^2+x_3^2\right )^l {\rm d} x_3, \nonumber
\end{align} from which we obtain \eqref{eq:j0}.
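Here, since $\psi \not\equiv 0$ is nonnegative, the integral mean value theorem provides, for each $l\in \mathbb N$, some $a_{0,l}\in [-L,L]$ such that
\begin{align*}
\int_{-L}^L \psi(x_3)\left (|x'|^2+x_3^2\right )^l {\rm d} x_3=\left (|x'|^2+a_{0,l}^2\right )^l \int_{-L}^L \psi(x_3) {\rm d} x_3 = C(\psi )\left (|x'|^2+a_{0,l}^2\right )^l ,
\end{align*}
which accounts for the parameters $a_{0,l}$ appearing in \eqref{eq:j0}.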
For ${\mathcal R}(j_{\ell})(x')=\int_{-L}^L \psi(x_3) j_\ell (k |x|) {\rm d} x_3$, {using the integral mean value theorem, we can deduce that for $\ell\in\mathbb{N}$}, \begin{align}\label{eq:mean}
\int_{-L}^L \psi(x_3) (|x'|^2+x_3^2)^{\ell/2 }{\rm d} x_3&= (|x'|^2+a_\ell^2)^{(\ell-1)/2 }\int_{-L}^L \psi(x_3) (|x'|^2+x_3^2)^{1/2 }{\rm d} x_3\\
&=C_1(\psi )|x'|^2 (|x'|^2+a_\ell^2)^{(\ell-1)/2 }, \nonumber
\end{align} where $a_\ell \in [-L,L]$. Thus for $\ell\in\mathbb{N}$, from \eqref{eq:bess sph}, we have \begin{align}
{\mathcal R} (j_\ell)(x') &=\int_{-L}^L \psi(x_3) j_\ell (k |x|) {\rm d} x_3 \notag \\
&= \frac{k^\ell }{ (2\ell+1)!!} \int_{-L}^L \psi(x_3) (|x'|^2+x_3^2)^{\ell/2 }\left (1-\sum_{l=1}^\infty \frac{(-1)^l k^{2l } (|x'|^2+x_3^2)^{l }}{ 2^l l! N_{\ell,l} }\right ) {\rm d} x_3. \label{add3}
\end{align} Substituting \eqref{eq:mean} into \eqref{add3}, we can obtain \eqref{eq:jell}.
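The constant $C_1(\psi )$ in \eqref{eq:mean} arises from the change of variables $x_3=|x'| \tan \varpi$, for which ${\rm d} x_3=|x'| \sec^2 \varpi \,{\rm d} \varpi$ and $(|x'|^2+x_3^2)^{1/2 }=|x'| \sec \varpi$, so that
\begin{align*}
\int_{-L}^L \psi(x_3) (|x'|^2+x_3^2)^{1/2 }{\rm d} x_3= |x'|^2 \int_{-\arctan L/|x'|}^{\arctan L/|x'|} \psi(|x'| \tan \varpi ) \sec^3 \varpi \,{\rm d} \varpi=C_1(\psi )|x'|^2,
\end{align*}
in accordance with \eqref{eq:329 cpsi}.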
Clearly, if $L<|x'|$, we know that $0<\sec \varpi<\sqrt {\frac{L^2}{|x'|^2}+1}$, where $$
\varpi \in [-\arctan L/|x'|, \arctan L/|x'| ]. $$ Therefore we can deduce \eqref{eq:C1psi}.
\end{proof}
Using the Jacobi-Anger expansion (cf. \cite[Page 75]{CK}), for the Herglotz wave function $v_j$ given in \eqref{eq:herg3}, we have \begin{equation}\label{3eq:vjex}
v_j(x)= v_j(0) j_0(k |x| )+ \sum_{\ell =1}^\infty \gamma_{\ell j} i^\ell (2\ell +1 ) j_\ell ( k |x| ), \quad x\in \mathbb{R}^3 ,
\end{equation}
where
\begin{align*}
v_j(0)= \int_{{\mathbb S}^{2}} g_j(d ) {\rm d} \sigma(d),\quad \gamma_{\ell j}= \int_{{\mathbb S}^{2}} g_j(d ) P_\ell (\cos( \varphi )) {\rm d} \sigma(d ), \quad d \in {\mathbb S}^{2},
\end{align*}
and
$j_\ell (t)$ is the $\ell$-th spherical Bessel function \cite{Abr} and $\varphi$ is the angle between $x$ and $d$.
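More precisely, the expansion \eqref{3eq:vjex} follows by substituting the Jacobi-Anger expansion
\begin{align*}
e^{i k d \cdot x}=\sum_{\ell =0}^\infty i^\ell (2\ell +1 ) j_\ell ( k |x| ) P_\ell (\cos( \varphi )),
\end{align*}
into \eqref{eq:herg3} and integrating term by term over ${\mathbb S}^{2}$; the term $\ell=0$ produces the coefficient $v_j(0)$ since $P_0 \equiv 1$.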
In the next lemma, we characterize the integrals $I_2^\pm$ defined by \eqref{3eq:intnotation}, which shall play a critical role in the proof of our main Theorem \ref{Th:3.1} in what follows.
\begin{lemma}\label{lem:36} Let $\Gamma_h^\pm$ be defined in \eqref{eq:sh} and $u_0(sx)$ be the CGO solution defined in \eqref{eq:u0} with the parameter $s\in \mathbb R_+$, and $I_2^\pm$ be defined by \eqref{3eq:intnotation}. Recall that the Herglotz wave function $v_j$ is given in the form \eqref{eq:herg3}. Suppose that $\eta ( x) \in C^{\alpha}(\overline{\Gamma_h^\pm } \times [-M,M] )$ ($0< \alpha<1$) satisfies \eqref{3eq:eta}, and let \begin{align}\label{3eq:Ieta} \begin{split} I^-_{\eta,1}& =\int_{\Gamma^-_h } \delta \eta(x') u_0(sx') {\mathcal R} (j_0) (x') {\rm d} \sigma,\ \ I^+_{\eta,1}=\int_{\Gamma^+_h } \delta \eta(x') u_0(sx') {\mathcal R} (j_0) (x') {\rm d} \sigma,\\
I^-_{\eta,2} &=\sum_{\ell =1}^\infty \gamma_{\ell j } i^\ell (2\ell +1) \int_{\Gamma^-_h } \delta \eta(x') u_0(sx') {\mathcal R} (j_\ell ) (x') {\rm d} \sigma, \\
I^+_{\eta,2} &=\sum_{\ell =1}^\infty \gamma_{\ell j } i^\ell (2\ell +1) \int_{\Gamma^+_h } \delta \eta(x') u_0(sx') {\mathcal R} (j_\ell ) (x') {\rm d} \sigma, \\
I_\eta^-&= v_j(0){I}_{\eta,1}^-+ { I}_{\eta,2}^- ,\quad I_\eta^+= v_j(0){I}_{\eta,1}^++ { I}_{\eta,2}^+.
\end{split} \end{align}
Assume that for a fixed $k\in
\mathbb R_+$, $h$ is sufficiently small such that $k^2(h^2+L^2)<1$ and
\begin{equation}\label{eq:lem36 cond}
kL<1,
\end{equation}
where $2L$ is the length of the interval of the dimension reduction operator $\mathcal R$ in \eqref{dim redu op}, and $-\pi< \theta_m < \theta_M <\pi $, where $\theta_m$ and $\theta_M$ are defined in \eqref{eq:W}. Then \begin{align}\label{3eq:I2-final}
I_2^-&=2\eta(0)v_j(0)s^{-1}\left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh}\mu(\theta_m ) } - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh}\mu(\theta_m ) } \right )C(I_{311}^-) \nonumber\\
&\quad +v_j(0) \eta(0) I_{312}^-+ \eta(0) I_{32}^-+I_\eta^-, \end{align} where $\mu(\theta_m )$ is defined in \eqref{eq:omegamu}, \begin{equation}\label{eq:335 I32}
\begin{split}
C(I_{311}^-)& =C(\psi ) \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } a_{0,l} ^{2l} \Bigg] ,\\
I_{32}^-&=\sum_{\ell =1}^\infty \gamma_{\ell j} i^\ell (2 \ell+1) \int_{\Gamma_h^- } u_0(sx') {\mathcal R} (j_\ell ) (x') {\rm d} \sigma,\\
I_{312}^-&= - C( \psi ) \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } \left( \sum_{i_1=1}^l C(l,{i_1 })a_{0,l} ^{2(l-i_1) } \int_{0}^h r^{2 i_1} e^{-\sqrt{sr} \mu (\theta_m)} {\rm d} r \right),
\end{split} \end{equation} with $C(l,{i_1 })=\frac{l!}{i_1! (l-i_1)!}$ being the binomial coefficient. Here $a_{0,l}$, $\mathcal R(j_\ell)$ and $C(\psi )$ are defined in \eqref{eq:j0}, \eqref{eq:jell} and \eqref{eq:329 cpsi}, respectively. It holds as $s\rightarrow +\infty$ that \begin{subequations}
\begin{align}
I_{312}^- &\leq {\mathcal O}(s^{-3}), \label{eq:I312} \\
I_{32}^- &\leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{2})} s^{-3}) , \label{3eq:I32} \\
\left| I_{\eta,1}^- \right| &\leq {\mathcal O} ( s^{-1-\alpha }),\label{3eq:I1} \\
\left| I_{\eta,2}^- \right| & \leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{2})} s^{-{3}-\alpha }) ,\label{3eq:I2}\\
|I_\eta^-| &\leq \|\eta \|_{C^\alpha } \left( |v_j(0)| |I_{\eta,1}^-| + |I_{\eta,2}^-| \right). \label{eq:336 bound}
\end{align}
\end{subequations}
Similarly, we have
\begin{align}\label{3eq:I2+final}
I_2^+&=2\eta(0)v_j(0)s^{-1}\left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh}\mu(\theta_M ) }- \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M )} \right )C(I_{311}^+)\nonumber\\
&\quad +v_j(0)\eta(0) I_{312}^+ +\eta(0) I_{32}^+ +I_\eta^+, \end{align} where \begin{equation}\label{eq:CI311+} \begin{split} C(I_{311}^+)&=C(\psi ) \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } a_{0,l,+} ^{2l} \Bigg],\quad a_{0,l,+} \in [-L,L], \\
I_{312}^+&=-C(\psi)\sum_{l=1}^\infty \frac{(-1)^l k^{2l} } { (2l+1)!! } \sum_{i_1=1}^l C(l,{i_1 }) a_{0,l,+} ^{2(l-i_1) } \int_{0}^h r^{2 i_1} e^{-\sqrt{sr} \mu(\theta_M)} {\rm d} r, \\
I_{32}^+&=\sum_{\ell =1}^\infty \gamma_{\ell j} i^\ell (2 \ell+1) \int_{\Gamma_h^+ } u_0(sx') {\mathcal R} (j_\ell ) (x') {\rm d} \sigma.
\end{split} \end{equation}
There hold as $s \rightarrow +\infty$ that \begin{equation}\label{eq:338 bounds}
\begin{split}
| I_{312}^+| &\leq {\mathcal O}(s^{-3}), \quad
| I_{32}^+| \leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{2})} s^{-3}),\\
\left| I_{\eta,1}^+ \right| &\leq {\mathcal O} ( s^{-1-\alpha }),\quad \left| I_{\eta, 2}^+ \right| \leq {\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{2})} s^{-{3}-\alpha }),\\
|I_\eta^+| &\leq \|\eta \|_{C^\alpha } \left(| v_j(0)| | I_{\eta,1}^+| + |I_{\eta,2}^+ | \right).
\end{split} \end{equation} \end{lemma}
\begin{proof} We first investigate the boundary integral $I_2^- $ which is given by \eqref{3eq:intnotation}. In this situation, the polar coordinates $x'=(r\cos \theta, r \sin \theta )$ satisfy $r \in (0, h)$ and $\theta=\theta_m$ or $\theta=\theta_M$ when $x'\in \Gamma_h^-$ or $x'\in \Gamma_h^+$, respectively. Since $\eta \in C^\alpha(\overline{\Gamma}_h^\pm \times [-M,M] )$, we have the expansion \eqref{3eq:eta}. Substituting \eqref{3eq:eta} into the expression of $I_2^-$, we have\begin{align}\label{3eq:I2-}
I_2^- &=\eta(0) I_{21}^-+ I_\eta^-, \end{align} where \begin{align*} I_{21}^-&= \int_{\Gamma_h^- } u_0(sx') {\mathcal R} (v_j) (x') {\rm d} \sigma ,\quad I_{\eta}^- =\int_{\Gamma^-_h } \delta \eta(x') u_0(sx') {\mathcal R} (v_j) (x') {\rm d} \sigma. \end{align*} By virtue of \eqref{3eq:vjex}, it can be verified that $I_\eta^-= v_j(0){I}_{\eta,1}^-+ { I}_{\eta,2}^-$.
Let $$ { I}_{31}^-=\int_{\Gamma_h^- } u_0(sx') {\mathcal R} (j_0) (x') {\rm d} \sigma.
$$ Substituting \eqref{eq:jell} and \eqref{3eq:vjex} into the expression of $I_{21}^-$ defined in \eqref{3eq:I2-}, we can deduce that \begin{align}\label{add4}
I_{21}^-
&=v_j(0){ I}_{31}^-+ { I}_{32}^-, \end{align} where ${ I}_{32}^-$ is defined in \eqref{eq:335 I32}. Substituting the expansion \eqref{eq:j0} into ${ I}_{31}^-$ and recalling that $\mu(\theta )=-\cos(\theta/2+\pi) -i \sin( \theta/2+\pi )$, we have \begin{align}
{ I}_{31}^-&= C( \psi ) \int_{0}^h \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k ^{2l}} { (2l+1)!! } \left(r^2+a_{0,l} ^2\right )^l\Bigg] e^{-\sqrt{sr} \mu(\theta_m)} {\rm d} r
:={ I}_{311}^-+{I}_{312}^-,\notag\\ { I}_{311}^-&= C( \psi ) \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k ^{2l}} { (2l+1)!! } a_{0,l} ^{2l} \Bigg] \int_{0}^h e^{-\sqrt{sr} \mu(\theta_m)} {\rm d} r,\label{add5} \end{align} where $a_{0,l}$, $C( \psi )$ and ${I}_{312}^-$ are defined in \eqref{eq:j0}, \eqref{eq:329 cpsi} and \eqref{eq:335 I32}, respectively.
Moreover, using \eqref{eq:I311} in Lemma \ref{lem:u0 int}, we obtain that \begin{equation}\label{3eq:I311}
{ I}_{311}^-=2s^{-1}\left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh} \mu(\theta_m ) }\ - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_m ) } \right )C(I_{311}^-) , \end{equation} where $
C(I_{311}^-) =C(\psi ) \Bigg[ 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } a_{0,l} ^{2l} \Bigg] .
$
Finally, substituting \eqref{add4}, \eqref{add5} and \eqref{3eq:I311}
into \eqref{3eq:I2-}, we have the integral equality \eqref{3eq:I2-final}.
Following similar arguments to those for deriving the integral equality \eqref{3eq:I2-final} of $I_2^-$, one can derive the integral equality \eqref{3eq:I2+final} for $I_2^+$.
In the following, we derive the estimate for $I_\eta^-$ in \eqref{eq:336 bound} by investigating \eqref{3eq:I1} and \eqref{3eq:I2}. Substituting \eqref{eq:j0} into $I_{\eta,1}^-$, we can derive that \begin{align*}
|I_{\eta,1}^-| &\leq |C(\psi )| \|\eta \|_{C^\alpha } \int_{0}^h r^\alpha \Bigg| 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } \left(r^2+a_{0,l} ^2\right )^l\Bigg | e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r\\
&=2 L\| \psi \|_{L^\infty} \|\eta \|_{C^\alpha } \Bigg| 1- \sum_{l=1}^\infty \frac{(-1)^l k^{2l }} { (2l+1)!! } \left(\beta_{0,l}^2+a_{0,l} ^2\right )^l\Bigg | \int_{0}^h r^\alpha e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r, \end{align*} where $\beta_{0,l} \in [0,h]$ such that $k^2(\beta_{0,l}^2+a_{0,l} ^2 ) \leq k^2( h^2+L^2) <1$ for sufficiently small $h$ and $L$. From \eqref{eq:zeta} in Lemma \ref{lem:1}, we obtain \eqref{3eq:I1}.
Substituting \eqref{eq:jell} into $I_{\eta,2}^-$, and using \eqref{eq:C1psi}, we can deduce that \begin{align}\notag
I_{\eta,2}^-& \leq C_1(\psi) \|\eta \|_{C^\alpha }\nonumber \\
& \quad \times \sum_{\ell =1}^\infty |\gamma_{\ell j}| \int_{0}^h r^\alpha \frac{k^\ell (r^2+a_{\ell }^2)^{(\ell-1)/2 } }{ (2\ell-1)!!} \Bigg |1 - \sum_{l=1}^\infty \frac{ k^{2l} (r^2+a_{\ell,l }^2)^{ l }}{ 2^l l! N_{\ell,l} } \Bigg| r^2 e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r\nonumber \\
&\leq C_1(\psi) \|\eta \|_{C^\alpha } \|g_j\|_{L^2({\mathbb S}^{2})} \int_{0}^h r^{2+\alpha} e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r \nonumber \\
&\quad \quad \times \sum_{\ell =1}^\infty \frac{k^\ell (\beta_\ell^2+a_\ell ^2)^{(\ell-1)/2 } }{ (2\ell-1)!!} \Bigg |1-
\sum_{l=1}^\infty \frac{ k^{2l} (\beta_{\ell,l}^2+a_{\ell,l }^2)^{l }}{ 2^l l! N_{\ell,l} } \Bigg | \nonumber \\ &
={\mathcal O}( \|g_j\|_{L^2({\mathbb S}^{2})} s^{-\alpha -3 } ) , \notag \end{align} where $\beta_\ell, \beta_{\ell,l} \in [0,h]$ such that $k^{2}(\beta_{\ell}^2+a_{\ell}^2) \leq k^{2} (h^2+L^2) <1$ and $k^2(\beta_{\ell,l}^2+a_{\ell,l}^2) \leq k^2 (h^2+L^2) <1$ for sufficiently small $h$ and $L$, by utilizing the claim that $$
|\gamma_{\ell j} | \leq \|g_j\|_{L^2({\mathbb S}^{2}) }, $$
where we use the fact that $|P_\ell(t)| \leq 1$ when $|t|\leq 1$. Consequently, \eqref{eq:336 bound} can be derived.
For $I_{312}^-$, we can deduce that
\begin{align}\notag
\left| { I_{312}^-}\right| &\leq |C( \psi )| \sum_{l=1}^\infty \frac{ k^{2l}} { (2l+1)!! } \sum_{i_1=1}^l C(l,i_1)h^{2 (i_1-1) } L ^{2(l-i_1) }\int_{0}^h r^2 e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r\\
&= |C( \psi )| \sum_{l=1}^\infty \frac{ k^{2l} } { (2l+1)!! h^2 } \sum_{i_1=1}^l C(l,i_1)h^{2 i_1 } L ^{2(l-i_1) }\int_{0}^h r^2 e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r \nonumber \\
&= |C( \psi )| \sum_{l=1}^\infty \frac{ k^{2l}} { (2l+1)!! h^2 } ((h^2+L^2)^l-L^{2l} ) \int_{0}^h r^2 e^{-\sqrt{sr} \omega(\theta_m)} {\rm d} r \nonumber \\
&\leq 2L \|\psi \|_{L^\infty} \sum_{l=1}^\infty \frac{ k^{2l}} { (2l+1)!! h^2 } ((h^2+L^2)^l-L^{2l} ) \cdot {\mathcal O}(s^{-3}) \nonumber \\
&={\mathcal O}(s^{-3}), \nonumber
\end{align}
where we choose $h$ and $L$ such that $ k^{2}( h^2+L^2)<1$ and $k L<1$.
Substituting the expansion \eqref{eq:jell} of $j_\ell $ into $I_{32}^- $, we have
\begin{align}\notag
| I_{32}^-|&\leq C_1(\psi ) \|g_j\|_{L^2 ({\mathbb S} ^{2})} \nonumber \\
& \quad \times \sum_{\ell =1}^\infty \int_0^h r^2 e^{-\sqrt{sr} \omega(\theta_m) } \frac{ k^\ell (|r|^2+a_{\ell }^2)^{(\ell-1)/2 } }{ (2\ell- 1)!!} \Bigg| 1-\sum_{l=1}^\infty \frac{(-1)^l k^{2l} (|r|^2+a_{\ell,l }^2)^{ l }}{ 2^l l! N_{\ell,l} } \Bigg| {\rm d} r\nonumber\\
&=C_1(\psi) \|g_j\|_{L^2 ({\mathbb S} ^{2})} \sum_{\ell =1}^\infty \frac{ k^\ell (|\beta_\ell|^2+a_{\ell }^2)^{(\ell-1)/2 } }{ (2\ell- 1)!!} \Bigg| 1-\sum_{l=1}^\infty \frac{(-1)^l k^{2l} (|\beta_{\ell,l}|^2+a_{\ell,l }^2)^{ l }}{ 2^l l! N_{\ell,l}} \Bigg| \nonumber \\
&\quad \times \int_0^h r^2 e^{-\sqrt{sr} \omega(\theta_m) } {\rm d} r \nonumber \\
&={\mathcal O} (\|g_j\|_{L^2 ({\mathbb S} ^{2})} s^{-3}), \notag
\end{align}
where $\beta_\ell, \beta_{\ell, l} \in [0,h]$ such that $k^{2} (\beta_\ell^2+a_\ell^2) \leq k^{2} (h^2+L^2)<1$ and $k^2(|\beta_{\ell,l}|^2+a_{\ell,l}^2 )\leq k^2( h^2+L^2) <1$ for sufficiently small $h$ and $L$.
The estimates \eqref{eq:338 bounds} for $I_{312}^+$, $I_{32}^+$, $I_{\eta,1}^+$ and $I_{\eta,2}^+$ can be derived in a similar way; the details are omitted. \end{proof}
In the next proposition, we deduce the lower and upper bounds for $C(I_{311}^-)$ and $C(I_{311}^+)$, where $C(I_{311}^-)$ and $C(I_{311}^+)$ are defined in \eqref{eq:335 I32} and \eqref{eq:CI311+}, respectively, which shall be used to prove Lemma \ref{lem:37 coeff} in the following. \begin{proposition}\label{pro:31}
Let $C(I_{311}^-)$ and $C(I_{311}^+)$ be defined in \eqref{eq:335 I32} and \eqref{eq:CI311+}, respectively. Assume that the condition \eqref{eq:lem36 cond} is fulfilled for a sufficiently small $L\in \mathbb R_+$. Then
\begin{equation}\label{eq:341}
0< C(I_{311}^-)\leq \frac{C(\psi )}{1-(kL)^2}, \quad 0< C(I_{311}^+)\leq \frac{C(\psi )}{1-(kL)^2},
\end{equation}
where $C(\psi )$ is defined in \eqref{eq:329 cpsi}. \end{proposition} \begin{proof}
Recall that $|a_{0,l }|\leq L$ is defined in \eqref{eq:j0}. By \eqref{eq:lem36 cond}, we know that \begin{equation}\label{eq:336}
\left| \sum_{l=1}^\infty \frac{(-1)^l k^{2l}} { (2l+1)!! } a_{0,l} ^{2l} \right| \leq \sum_{l=1}^\infty (k L)^{2l}=\frac{(k L)^2}{1-(k L)^2}. \end{equation} From \eqref{eq:336}, we know that \begin{equation}\label{eq:339}
0<\frac{C(\psi )(1-2(kL)^2)}{1-(kL)^2}\leq C(I_{311}^-)\leq \frac{C(\psi )}{1-(kL)^2},
\end{equation}
where $C(\psi ) = \int_{-L}^L \psi(x_3) {\rm d} x_3>0$, since $\psi \not\equiv 0$ is a nonnegative function.
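Indeed, the bounds in \eqref{eq:339} follow from \eqref{eq:336} together with the elementary identities, valid for $kL<1$,
\begin{align*}
1-\frac{(k L)^2}{1-(k L)^2}=\frac{1-2(kL)^2}{1-(kL)^2}, \qquad 1+\frac{(k L)^2}{1-(k L)^2}=\frac{1}{1-(kL)^2},
\end{align*}
where the positivity of the lower bound requires $kL<1/\sqrt 2$, which is guaranteed for sufficiently small $L$.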
For sufficiently small $L$, similar to \eqref{eq:339}, we know that
\begin{equation}\label{add6}
0<\frac{C(\psi )(1-2(kL)^2)}{1-(kL)^2}\leq C(I_{311}^+)\leq \frac{C(\psi )}{1-(kL)^2},
\end{equation}
where $C(\psi )$ is defined in \eqref{eq:329 cpsi}. \end{proof}
\begin{lemma}\label{lem:37 coeff} Let $\theta_m$ and $\theta_M$ be defined in \eqref{eq:sh}. Assume that $\theta_m$ and $\theta_M$ fulfill the condition \eqref{eq:lem 29 cond}, and moreover that \eqref{eq:lem36 cond} is satisfied for a sufficiently small $L\in \mathbb R_+$.
Then \begin{equation}\label{eq:348}
C(I_{311}^-)\mu(\theta_m)^{-2}+C(I_{311}^+)\mu(\theta_M)^{-2} \neq 0, \end{equation}
where $C(I_{311}^-)$ and $C(I_{311}^+)$ are defined in \eqref{eq:335 I32} and \eqref{eq:CI311+}, respectively. \end{lemma} \begin{proof} It can be calculated that \begin{align*}
&C(I_{311}^-) \mu(\theta_m)^{-2}+C(I_{311}^+) \mu(\theta_M)^{-2}\\
&=\frac{(C(I_{311}^+)\cos \theta_m+C(I_{311}^-)\cos \theta_M)+i (C(I_{311}^+)\sin \theta_m+C(I_{311}^-)\sin \theta_M )}{ (\cos \theta_m+i \sin \theta_m )(\cos \theta_M+i \sin \theta_M )}. \end{align*} Therefore, under the assumption \eqref{eq:lem 29 cond}, we know that $$ \cos \theta_m+\cos \theta_M \mbox{ and } \sin \theta_m+\sin \theta_M $$ cannot vanish simultaneously. Without loss of generality, we assume that $\cos \theta_m+\cos \theta_M \neq 0$. Then we consider the following two cases: \begin{itemize}
\item[(i)] Case A: $\cos \theta_m+\cos \theta_M > 0$,
\item[(ii)] Case B: $\cos \theta_m+\cos \theta_M < 0$. \end{itemize} For Case A, we first consider that $\cos \theta_m$ and $\cos \theta_M$ have the same sign. From \eqref{eq:341} in Proposition \ref{pro:31}, it is not difficult to see that the real part of the {\color{black} numerator} of $C(I_{311}^-)\mu(\theta_m)^{-2}+C(I_{311}^+)\mu(\theta_M)^{-2} $ cannot be zero. Therefore, \eqref{eq:348} is proved when $\cos \theta_m$ and $\cos \theta_M$ have the same sign.
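We remark that the fraction displayed at the beginning of the proof can be verified directly. Recalling that $\mu(\theta )=-\cos(\theta/2+\pi) -i \sin( \theta/2+\pi )=e^{i \theta /2}$, we have $\mu(\theta)^{-2}=e^{-i \theta}$, and hence
\begin{align*}
C(I_{311}^-) \mu(\theta_m)^{-2}+C(I_{311}^+) \mu(\theta_M)^{-2}=\frac{C(I_{311}^-)e^{i \theta_M}+C(I_{311}^+)e^{i \theta_m}}{e^{i (\theta_m+\theta_M)}},
\end{align*}
whose numerator coincides with the one displayed above.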
In the following, we assume that $\cos \theta_m$ and $\cos \theta_M$ have different signs. Then, without loss of generality, we may assume that $\cos \theta_m \leq 0$ and $\cos \theta_M > 0$. From \eqref{eq:339} and \eqref{add6}, we can deduce that \begin{align*}
\frac{C(\psi )}{1-(kL)^2}
( \cos \theta_m+ (1-2(kL)^2) \cos \theta_M )
& \leq C(I_{311}^+)\cos \theta_m +C(I_{311}^-)\cos \theta_M \\
&\leq \frac{C(\psi )}{1-(kL)^2}
( (1-2(kL)^2) \cos \theta_m+ \cos \theta_M ) . \nonumber \end{align*} Since $L$ can be chosen freely, for a given $0< \varepsilon <1$ we can choose $L$ such that $0<k L<\sqrt { \varepsilon /2 }$, from which we derive the following bounds \begin{align}\label{eq:351}
\frac{C(\psi )}{1-(kL)^2} \left ( \cos \theta_m + (1- \varepsilon ) \cos \theta_M \right )
& \leq C(I_{311}^+)\cos \theta_m +C(I_{311}^-) \cos \theta_M \nonumber \\
& \leq \frac{C(\psi )}{1-(kL)^2 }
\left ( (1- \varepsilon ) \cos \theta_m + \cos \theta_M \right ) . \end{align} Since $\cos \theta_m+\cos \theta_M > 0$, we can consider the lower bound in \eqref{eq:351}. Denote $\varepsilon_0=\min \{\frac{\cos \theta_m+\cos \theta_M }{2\cos \theta_M}, 1\}$ and choose $\varepsilon\in (0, \varepsilon_0)$. It can be verified that \begin{align*} C(I_{311}^+)\cos \theta_m +C(I_{311}^-) \cos \theta_M \geq \frac{C(\psi )}{1-(kL)^2 } \left ( \cos \theta_m+ (1-\varepsilon)\cos \theta_M \right ) >0, \end{align*}
which means that \eqref{eq:348} still holds.
For Case B, if $\cos \theta_m=0$ or $\cos \theta_M=0$ is satisfied, from the upper bound of \eqref{eq:351} we can easily show that \begin{equation}\label{eq:boundtemp}
C(I_{311}^+)\cos \theta_m +C(I_{311}^-) \cos \theta_M <0. \end{equation}
Otherwise, if $|\cos \theta_m | \leq |\cos \theta_M |$, from the fact that $(1- \varepsilon ) |\cos \theta_m | \leq |\cos \theta_M |$, we know that \eqref{eq:boundtemp} still holds by the upper bound of \eqref{eq:351}. If $|\cos \theta_m | > |\cos \theta_M |$, we can choose $\varepsilon$ such that $ \varepsilon >1- |\cos \theta_M | / |\cos \theta_m | >0 $ so that \eqref{eq:boundtemp} is also fulfilled by the upper bound of \eqref{eq:351}. Therefore, for Case B, we know that \eqref{eq:348} is fulfilled.
The proof is complete. \end{proof}
We are in a position to present another main result of this paper on the vanishing of conductive transmission eigenfunctions at an edge corner in 3D.}
\begin{theorem}\label{Th:3.1} {\color{black} Let $\Omega \Subset \mathbb R^3$ be a bounded Lipschitz domain with $0\in \partial \Omega$ and} ${S_h}\subset \mathbb{R}^{2}$ be defined in \eqref{eq:sh}, $M>0$, $0<\alpha<1$. For any fixed {\color{black} $x_3^c \in (-M,M)$} and $L>0$ defined in Definition \ref{Def}, we suppose that $L $ is sufficiently small such that $(x_3^c-L,x_3^c+L) \subset (-M,M) $ {\color{black} and $$ (B_h \times (-M,M) ) \cap \Omega=S_h \times (-M,M), $$ where $B_h\Subset \mathbb R^2 $ is the central ball of radius $h \in \mathbb R_+$. Assume that $v,w\in H^1(\Omega )$ are the transmission eigenfunctions to \eqref{eq:in eig}} and there exists a sufficiently small neighbourhood {\color{black} $S_h \times (-M,M)$ (i.e. $h>0$ is sufficiently small) of $(0,x_3^c)$ with $x_3^c\in (-M,M)$} such that $qw \in C^{\alpha}(\overline {S}_h \times [-M,M] )$ and $\eta \in C^\alpha(\overline{\Gamma}_h^\pm \times [-M,M] )$ for $0<\alpha<1$, and $v-w\in H^2({S_h}\times (-M,M) )$, where $0$ is the vertex of $S_h$ defined in \eqref{eq:sh}. Write {\color{black} $x=(x', x_3) \in \mathbb{R}^{3}, \, x'\in \mathbb{R}^{2}$}. If the following conditions are fulfilled:
\begin{itemize}
\item[(a)] the transmission eigenfunction $v$ can be approximated in $H^1(S_h \times (-M,M))$ by the Herglotz functions $v_j$, $j=1,2,\ldots$, with kernels $g_j$ satisfying the approximation property \eqref{3eq:ass1}, where the parameter $\varrho$ in \eqref{3eq:ass1} fulfills that $0<\varrho<\alpha$,
\item[(b)] the function $\eta=\eta(x')$ is independent of $x_3$ and
\begin{equation}\label{3eq:ass2}
\eta(0) \neq 0,
\end{equation}
\item[(c)] the angles $\theta_m$ and $\theta_M$ of $S_h$ satisfy
\begin{equation}\label{3eq:ass3}
-\pi < \theta_m < \theta_M < \pi \mbox{ and } \theta_M-\theta_m \neq \pi,
\end{equation}
\end{itemize} then for every edge point {\color{black} $(0,x_3^c) \in \mathbb{R}^3 $} of $S_h \times (-M,M)$ where {\color{black} $x_3^c\in (-M,M)$}, one has {\color{black} \[
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B((0,x_3^c), \rho )\cap \Omega ) } \int_{B((0, x_3^c), \rho ) \cap \Omega } |v(x)| {\rm d} x=0,
\] } where $m(B((0,x_3^c), \rho )\cap\Omega)$ is the volume of $B((0,x_3^c) ,\rho )\cap \Omega$.
\end{theorem}
\begin{proof} One can easily see that the transmission eigenfunctions $v,w\in H^1(\Omega )$ to \eqref{eq:in eig} fulfill the PDE system \eqref{eq:3d in eig}. Using the dimensional reduction operator $\mathcal R$ given by \eqref{dim redu op}, we can show that $\mathcal R(v)$ and $\mathcal R(w)$ satisfy \eqref{eq:eig reduc}. By virtue of Lemmas \ref{lem:32} and \ref{lem:int2}, we know that the integral equality \eqref{3eq:int identy} holds under the assumption that $qw \in C^{\alpha}(\overline {S}_h \times [-M,M] )$ for $0<\alpha<1$ and $v-w\in H^2({S_h}\times (-M,M) )$. Since $\eta \in C^\alpha(\overline{\Gamma}_h^\pm \times [-M,M] )$ for $0<\alpha<1$, from Lemma \ref{lem:36}, we can obtain \eqref{3eq:I2-final} and \eqref{3eq:I2+final}. Recall that $\mu(\theta_m )$ and $\mu(\theta_M)$ are defined in \eqref{eq:omegamu}. Therefore, substituting \eqref{3eq:I2-final} and \eqref{3eq:I2+final} into \eqref{3eq:int identy}, after rearranging terms and multiplying both sides of \eqref{3eq:int identy} by $s$, we deduce that \begin{equation} \begin{split}\label{3eq:45} & 2v_j(0)\eta(0)\Big[ \left( \mu(\theta_M )^{-2}- \mu(\theta_M )^{-2} e^{ -\sqrt{sh} \mu(\theta_M ) } - \mu(\theta_M )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_M ) } \right )C(I_{311}^+) \\
&\quad + \left( \mu(\theta_m )^{-2}- \mu(\theta_m )^{-2} e^{ -\sqrt{sh} \mu(\theta_m ) } - \mu(\theta_m )^{-1} \sqrt{sh} e^{ -\sqrt{sh} \mu(\theta_m ) } \right ) C(I_{311}^-)\Big] \\
&=s\Big[ I_3-(F_{1 } (0)+F_{2 } (0)+F_{3j } (0)) \int_{S_h} u_0(sx') {\rm d} x' - \delta_j(s) \\
&\quad - \eta(0) (I_{32}^+ + I_{32}^- ) -I_\eta^+ - I_\eta^- -\int_{S_h} \delta F_{1 } (x') u_0(sx') {\rm d} x' -\int_{S_h} \delta F_{2 } (x')u_0(sx') {\rm d} x' \\
&\quad -\int_{S_h} \delta F_{3j } (x')u_0(sx') {\rm d} x'- v_j(0) \eta(0)\left( I_{312}^-+ I_{312}^+ \right) - \epsilon_j^\pm(s)\Big], \end{split} \end{equation} where $C(I_{311}^-)$ and $C(I_{311}^+)$ are defined in \eqref{eq:335 I32} and \eqref{eq:CI311+}, respectively. Here $\delta F_{3j}$, $\delta F_1$ and $\delta F_2$ are given by \eqref{eq:F3j} and \eqref{eq:326F3j}.
When $s=j$, under the assumption \eqref{3eq:ass1}, using \eqref{eq:I312}, \eqref{3eq:I32}, \eqref{eq:336 bound} and \eqref{eq:338 bounds} in Lemma \ref{lem:36} we know that \begin{equation} \begin{split}\label{3eq:46}
&j | I_{32}^-| \leq {\mathcal O}( j^{-2} \|g_j\|_{L^2({\mathbb S}^2 )} ) \leq {\mathcal O}( j^{-1+\varrho} ) ,\quad { j | I_{32}^+ | \leq {\mathcal O}( j^{-2} \|g_j\|_{L^2({\mathbb S}^2 )} )} \leq {\mathcal O}( j^{-1+\varrho} ) , \\ & j |I_{312}^-| \leq {\mathcal O}(j^{-2}),\quad j |I_{312}^+| \leq {\mathcal O}(j^{-2}), \end{split} \end{equation} and \begin{equation}\label{3eq:46 a} \begin{split}
&j |I_\eta^-| \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (\|g_j\|_{L^2({\mathbb S}^2 )} j^{-{2}-\alpha } ) \right) \\
&\hspace{0.8cm}\leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (j^{-{1}-(\alpha -\varrho ) } ) \right), \\
& j|I_\eta^+| \leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (\|g_j\|_{L^2({\mathbb S}^2 )} j^{-{2}-\alpha } ) \right) \\
&\hspace{0.8cm}\leq \|\eta \|_{C^\alpha } \left( |v_j(0)| {\mathcal O} (j^{-\alpha })+ {\mathcal O} (j^{-{1}-(\alpha -\varrho ) } ) \right), \end{split} \end{equation} as $j\rightarrow +\infty$. Clearly, when $s=j$, under the assumption \eqref{3eq:ass1}, from \eqref{eq:u0w}, \eqref{eq:1.5}, \eqref{3eq:deltajnew3} and \eqref{3eq:xij} in Lemma \ref{lem:3 delta}, \eqref{3eq:I3}, \eqref{3eq:deltaf1j} and \eqref{3eq:deltaf2} in Lemma \ref{lem:int2}, we can derive that \begin{equation} \begin{split} \label{3eq:47}
&j |I_3| \leq C j e^{-c' \sqrt j},\ j \left| \int_{S_h} u_0(jx') {\rm d} x' \right| \leq \frac{ 6 |e^{-2\theta_M i }-e^{-2\theta_m i } | } {j} + \frac{6(\theta_M-\theta_m )e^{-\delta_W \sqrt{h j}/2}}{j\delta_W^4} ,\\
&j \left|\int_{S_h} \delta F_{3j } (x') u_0(jx') {\rm d} x' \right | \leq \frac{8 L\sqrt{\pi}\|\psi\|_{L^\infty}(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}(S_h)^{1-\alpha } \\
&\hspace{5cm} \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{2})} j^{-\alpha-1 } \leq {\mathcal O} \left (j^{-(\alpha-\varrho )} \right ), \\
& j \left|\int_{S_h} \delta F_{1 } (x') u_0(jx') {\rm d} x' \right | \leq \frac{2\|F_{1}\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} j^{-\alpha-1} , \\
&j \left|\int_{S_h} \delta F_{2 } (x') u_0(jx') {\rm d} x' \right | \leq \frac{2\|F_{2}\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} j^{-\alpha-1},
\end{split} \end{equation} and
\begin{equation}\label{3eq:47 a} \begin{split}
&j |\epsilon_j^\pm (j) |\leq C \|\psi\|_{L^\infty} \left( |\eta(0)| \frac{\sqrt { \theta_M-\theta_m } e^{-\sqrt{j \Theta } \delta_W } h } {\sqrt 2 } j \right. \\
&\left. \hspace{4cm} + \|\eta\|_{C^\alpha } j^{-\alpha } \frac{\sqrt{2(\theta_M-\theta_m) \Gamma(4\alpha+4) } }{(2\delta_W)^{2\alpha+2 } } \right) j^{-1-\Upsilon } , \\
&j|\delta_j(j)| \leq \frac{ k^2 \|\psi \|_{L^\infty} \sqrt{C(L,h) ( \theta_M-\theta_m )} e^{-\sqrt{j \Theta } \delta_W } h }{\sqrt 2 } j^{-\Upsilon }, \quad \Theta \in [0,h ], \end{split} \end{equation} as $j\rightarrow +\infty$, where $c'>0$ and $\delta_W$ are defined in \eqref{3eq:I3} and \eqref{eq:xalpha}, respectively.
The coefficient of $v_j(0)$ in \eqref{3eq:45} at the zeroth order of $s$ is $$ 2 \eta(0)\left (C(I_{311}^-)\mu(\theta_m)^{-2}+C(I_{311}^+)\mu(\theta_M)^{-2} \right ). $$
In \eqref{3eq:45}, taking $s=j$ and letting $j\rightarrow \infty$, and combining \eqref{3eq:46}, \eqref{3eq:46 a}, \eqref{3eq:47} and \eqref{3eq:47 a}, we can prove that
\begin{equation}\label{eq:3v(0)} \lim_{j \rightarrow \infty} \eta(0)\left(C(I_{311}^-)\mu(\theta_m)^{-2}+C(I_{311}^+)\mu(\theta_M)^{-2} \right) v_j(0)=0.
\end{equation} Under the assumption \eqref{3eq:ass3}, from Lemma \ref{lem:37 coeff}, we have $C(I_{311}^-)\mu(\theta_m)^{-2}+C(I_{311}^+)\mu(\theta_M)^{-2} \neq 0$. Therefore, from \eqref{3eq:ass2} and \eqref{eq:3v(0)}, we prove that $$ \lim_{j \rightarrow \infty} v_j(0)=0. $$ Using a similar argument to that for \eqref{eq:250}, we finish the proof of this theorem.
\end{proof}
{ \begin{remark} Similar to Remark~\ref{rem:th1.1}, Theorem~\ref{Th:3.1} can be localized. Moreover, we would like to mention that, in contrast to the regularity assumption on $v-w$ near the corner in 2D in Theorem \ref{Th:1.1}, we impose $v-w \in H^2(S_h \times(-M,M))$ in Theorem \ref{Th:3.1}, where we need the $C^\alpha$-continuity of $\mathcal R(v-w)$ to investigate the asymptotic order in $s$, as $s \rightarrow \infty $, of the volume integral of $F_1(x')$ over $S_h$ in \eqref{3eq:int identy}. \end{remark} }
Similar to Corollary \ref{cor:2.1}, we consider the vanishing property of the interior transmission eigenfunctions $v \in H^1(W \times (-M,M))$ and $w \in H^1(W \times (-M,M)) $ to \eqref{eq:in eig reduce} at the edge point under the assumptions \eqref{3eq:ass3} and \eqref{3eq:ass1 int}.
\begin{corollary}\label{cor:3.1}
{\color{black} Let $\Omega \Subset \mathbb R^3$ be a bounded Lipschitz domain with $0\in \partial \Omega$ and} ${S_h}\subset \mathbb{R}^{2}$ be defined in \eqref{eq:sh}, $M>0$, $0<\alpha<1$. For any fixed {\color{black} $x_3^c \in (-M,M)$} and $L>0$ defined in Definition \ref{Def}, we suppose that $L $ is sufficiently small such that $(x_3^c-L,x_3^c+L) \subset (-M,M) $ {\color{black} and $$ (B_h \times (-M,M) ) \cap \Omega=S_h \times (-M,M), $$ where $B_h\Subset \mathbb R^2 $ is the central ball of radius $h \in \mathbb R_+$.} Suppose {\color{black} $v \in H^1(\Omega )$ and $w \in H^1(\Omega ) $ are} the interior transmission eigenfunctions to \eqref{eq:in eig reduce} in $\mathbb{R}^3$. Suppose that there exists a sufficiently small neighbourhood {\color{black} $S_h \times (-M,M)$ (i.e. $h>0$ is sufficiently small) of $(0,x_3^c)$ with $x_3^c\in (-M,M)$} such that $qw \in C^{\alpha}(\overline {S_h} \times [-M,M] )$ for $0< \alpha <1$, and $v-w\in H^2({S_h}\times(-M,M))$, where $0 $ is the vertex of $S_h$ defined in \eqref{eq:sh}. If the following conditions are fulfilled:
\begin{itemize}
\item [(a)] the transmission eigenfunction $v$ can be approximated in $H^1(S_h \times (-M,M))$ by the Herglotz waves $v_j$, $j=1,2,\ldots$, with kernels $g_j$ satisfying
\begin{equation}\label{3eq:ass1 int}
\|v-v_j\|_{H^1(S_h \times (-M,M) )} \leq j^{-2-\Upsilon},\quad \|g_j\|_{L^2({\mathbb S}^{2})} \leq C j^{\varrho},
\end{equation}
for some positive constants $C$, $\Upsilon >0$ and $0< \varrho<\alpha $,
\item[(b)] the angles $\theta_m$ and $\theta_M$ of $S_h$ satisfy
\begin{equation}\label{3eq:ass2 int}
-\pi < \theta_m < \theta_M < \pi \mbox{ and } \theta_M-\theta_m \neq \pi,
\end{equation}
\end{itemize}
then we have
$$
{\color{black}
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B((0,x_3^c), \rho )\cap P(\Omega) )} \int_{B((0,x_3^c), \rho ) \cap P(\Omega)} {\mathcal R}(V w)(x') {\rm d} x'=0,}
$$
where {\color{black} $ P(\Omega )$ is the projection set of $\Omega$ on $\mathbb R^2$ and } $q(x',x_3)=1+V(x',x_3)$. \end{corollary} \begin{proof} {\color{black} It is clear that the transmission eigenfunctions $v \in H^1(\Omega )$ and $w \in H^1(\Omega )$ to \eqref{eq:in eig reduce} fulfill \eqref{eq:3d in eig} for $\eta \equiv 0$. Using the dimensional reduction operator $\mathcal R$ given by \eqref{dim redu op}, we can show that $\mathcal R(v)$ and $\mathcal R(w)$ satisfy \eqref{eq:eig reduc} for $\eta \equiv 0$. By virtue of Lemmas \ref{lem:32} and \ref{lem:int2}, due to} $ \eta(x) \equiv 0 $, from \eqref{3eq:int identy} we have the following integral equality \begin{align}\label{int3eq:int identy}
&(F_{1 } (0)+F_{2 } (0)+F_{3j } (0)) \int_{S_h} u_0(sx') {\rm d} x'+\delta_j (s)\\
&= I_3-\int_{S_h} \delta F_{1} (x') u_0(sx') {\rm d} x' - \int_{S_h} \delta F_{2} (x')u_0(sx') {\rm d} x' -\int_{S_h} \delta F_{3j} (x')u_0(sx') {\rm d} x' , \notag \end{align} where $\delta_j(s)$ is defined in \eqref{eq:313delta}, $\delta F_1(x')$, $\delta F_2(x')$ and $\delta F_{3j}(x')$ are defined in \eqref{eq:326F3j}, and $I_3$ is given in \eqref{3eq:int identy}. Since $v=w$ on $\Gamma_h^\pm \times (-M,M)$, it is easy to see that $$ F_{1} (0)=
\int_{-L}^{L}\psi''(x_3)(v(0, x_3)-w(0, x_3)) {\rm d} x_3=0. $$ Therefore, using \eqref{eq:u0w}, from \eqref{int3eq:int identy}, we deduce that \begin{align}\label{int3eq:int identy1}
& 6 i (F_{2 } (0)+F_{3j } (0)) (e^{-2\theta_M i }-e^{-2\theta_m i } ) s^{-2}- (F_{2 } (0)+F_{3j } (0)) \int_{W \backslash S_h} u_0(sx') {\rm d} x'\\
&= I_3-\int_{S_h} \delta F_{1} (x') u_0(sx') {\rm d} x' - \int_{S_h} \delta F_{2} (x')u_0(sx') {\rm d} x' -\int_{S_h} \delta F_{3j} (x')u_0(sx') {\rm d} x' -\delta_j (s) . \notag \end{align} In \eqref{int3eq:int identy1}, we take $s=j$ and multiply both sides of \eqref{int3eq:int identy1} by $j^2$, then we have \begin{align}\label{int3eq:int identy2}
& 6 i (F_{2 } (0)+F_{3j } (0)) (e^{-2\theta_M i }-e^{-2\theta_m i } ) =j^2\Big [ I_3 + (F_{2 } (0)+F_{3j } (0)) \int_{W \backslash S_h} u_0(jx') {\rm d} x' \notag \\
&\quad -\int_{S_h} \delta F_{1} (x') u_0(jx') {\rm d} x' - \int_{S_h} \delta F_{2} (x')u_0(jx') {\rm d} x' -\int_{S_h} \delta F_{3j} (x')u_0(jx') {\rm d} x' -\delta_j (j) \Big]. \end{align}
{\color{black} Using \eqref{3eq:deltajnew3} in Lemma \ref{lem:3 delta} and} under the assumption \eqref{3eq:ass1 int}, it is easy to see that \begin{equation}\label{eq:deltajnew int}
j^2 |\delta_j(j)| \leq \frac{ |k|^2 \|\psi \|_{L^\infty} \sqrt{C(L,h) ( \theta_M-\theta_m )} e^{-\sqrt{j \Theta } \delta_W } h }{\sqrt 2 } j^{-\Upsilon }, \end{equation} where $C(L,h)$ is a positive number defined in \eqref{eq:312}, $ \Theta \in [0,h ]$ and $\delta_W$ is defined in \eqref{eq:xalpha}.
Under the assumption \eqref{3eq:ass1 int}, by virtue of \eqref{3eq:deltaf1j}, \eqref{3eq:deltaf1}, \eqref{3eq:deltaf2} and \eqref{3eq:I3} {\color{black} in Lemma \ref{lem:int2},} we can obtain the following estimates \begin{align}\label{eq:360est}
&j^2 \left|\int_{S_h} \delta F_{3j } (x') u_0(jx') {\rm d} x' \right | \leq \frac{8 L\sqrt{\pi}\|\psi\|_{L^\infty}(\theta_M- \theta_m) \Gamma(2 \alpha+4) }{ \delta_W^{2\alpha+4 } } k^2 {\rm diam}(S_h)^{1-\alpha }\nonumber \\
&\hspace{5cm} \times (1+k) \|g_j\|_{L^2( {\mathbb S}^{2})} j^{-\alpha } \leq {\mathcal O} \left (j^{-(\alpha-\varrho )} \right ), \nonumber \\
& j^2 \left|\int_{S_h} \delta F_{1 } (x') u_0(jx') {\rm d} x' \right | \leq \frac{2\|F_{1}\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} j^{-\alpha} , \nonumber \\
&j^2 \left|\int_{S_h} \delta F_{2 } (x') u_0(jx') {\rm d} x' \right | \leq \frac{2\|F_{2}\|_{C^\alpha } (\theta_M-\theta_m )\Gamma(2\alpha+4) }{ \delta_W^{2\alpha+4}} j^{-\alpha}. \end{align}
Under the assumption \eqref{3eq:ass2 int}, it is easy to see that $$
\left| e^{-2\theta_M i }-e^{-2\theta_m i } \right| =\left|1-e^{-2(\theta_M -\theta_m) i } \right| =2\left|\sin (\theta_M -\theta_m)\right| \neq 0, $$ since $0<\theta_M -\theta_m<2\pi$ and $\theta_M -\theta_m \neq \pi$. In \eqref{int3eq:int identy2}, by letting $j\rightarrow \infty$, from \eqref{eq:1.5}, \eqref{eq:deltajnew int} and \eqref{eq:360est}, we prove that $$ \lim_{j \rightarrow \infty} F_{3j}(0) =- F_2(0), $$ which implies \begin{equation}\label{eq:cor31}
\lim_{j \rightarrow \infty} {\mathcal R }(v_j)(0) ={\mathcal R }(qw)(0) \end{equation} through recalling that $F_2$ and $F_{3j}$ are given in \eqref{eq:F1xi}. From \eqref{3eq:bound1}, we have {\color{black} \begin{equation*} \begin{split}
& \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega ))} \int_{B(0, \rho ) \cap P(\Omega )} {\mathcal R }(v)(x') {\rm d} x ' \\
&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega ))} \int_{B(0, \rho ) \cap P(\Omega )} {\mathcal R } ( w)(x') {\rm d} x'.
\end{split} \end{equation*} } Since {\color{black} \begin{align*}
\lim_{j \rightarrow \infty} {\mathcal R }( v_j)(0)&=\lim_{j \rightarrow \infty} \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega ) )} \int_{B(0, \rho ) \cap P(\Omega )} {\mathcal R } (v_j)(x') {\rm d} x'\\
&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega ))} \int_{B(0, \rho ) \cap P(\Omega )}{\mathcal R } (v )(x') {\rm d} x',\\
{\mathcal R } (qw )(0)&= \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega ))} \int_{B(0, \rho ) \cap P(\Omega )} {\mathcal R}(qw)(x') {\rm d} x', \end{align*}} and from \eqref{eq:cor31}, we finish the proof of this corollary. \end{proof}
{\color{black} \begin{remark} Corollary \ref{cor:3.1} states that the average value of the function $Vw$ over the cylinder centered at the edge point $(0,x_3^c)$ with height $2L$ vanishes in the distribution sense. In addition, if $V(x',x_3)$ is continuous near the edge point $(0,x_3^c)$ where $x_3^c \in (-M,M)$ and $ V(0,x_3^c ) \neq 0$, from the dominated convergence theorem and the definition of the dimension reduction operator $\mathcal R$, we can prove that $$
{\color{black}
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho )\cap P(\Omega ) )} \int_{B(0, \rho ) \cap P(\Omega) } \int_{x_3^c-L}^{x_3^c+L} \psi(x_3) w (x',x_3) {\rm d} x'{\rm d} x_3 =0 }
$$ under the assumptions in Corollary \ref{cor:3.1}, which also describes the vanishing property of the interior transmission eigenfunctions $v$ and $w$ near the edge point in 3D. Furthermore, if $ \psi(x_3^c)\neq 0 $, one can prove that $$
{\color{black} \lim_{ \rho \rightarrow +0 }\frac{1}{m(B(0, \rho ) \cap P(\Omega) )} \int_{B(0, \rho ) \cap P(\Omega) } \int_{x_3^c-L}^{x_3^c+L} w (x',x_3) {\rm d} x'{\rm d} x_3 =0. }
$$
\end{remark} }
In the following theorem, we impose a stronger regularity requirement on the conductive transmission eigenfunction $v$ of \eqref{eq:3d in eig}, i.e., $v$ has $H^2$-regularity near the edge point under consideration. Using the dimension reduction operator given in Definition \ref{Def}, as well as the H{\"o}lder continuity of the functions involved, we can prove the following theorem in a way similar to the proof of Theorem \ref{Th:1.2}. The detailed proof of Theorem \ref{Th:3.2} is therefore omitted here.
\begin{theorem}\label{Th:3.2}
{\color{black} Let $\Omega \Subset \mathbb R^3$ be a bounded domain with $0\in \partial \Omega$ and} ${S_h}\subset \mathbb{R}^{2}$ be defined in \eqref{eq:sh}, $M>0$, $0<\alpha<1$. For any fixed {\color{black} $x_3^c \in (-M,M)$} and $L>0$ defined in Definition \ref{Def}, we suppose that $L $ is sufficiently small such that $(x_3^c-L,x_3^c+L) \subset (-M,M) $ {\color{black} and $$ (B_h \times (-M,M) ) \cap \Omega=S_h \times (-M,M), $$ where $B_h\Subset \mathbb R^2 $ is the central ball of radius $h \in \mathbb R_+$.} Let $v \in H^2 (\Omega )$ and $w \in H^1 (\Omega ) $ be the eigenfunctions to \eqref{eq:3d in eig}. Moreover, there exists a sufficiently small neighbourhood {\color{black} $S_h \times (-M,M)$ (i.e. $h>0$ is sufficiently small) of $(0,x_3^c)$ with $x_3^c\in (-M,M)$}, such that $q w\in C^\alpha(\overline {S}_h \times [-M,M ] ) $ and $\eta \in C^\alpha(\overline{\Gamma}_h^\pm \times [-M,M] )$ for $0< \alpha <1$ and $v-w\in H^2({{S}_h}\times(-M,M))$. Under the following assumptions:
\begin{itemize}
\item[(a)] the function $\eta=\eta(x',x_3)$ is independent of $x_3$ and does not vanish on the edge of $ W \times (-M,M)$, i.e.,
\begin{equation*}
\eta(0 ) \neq 0,
\end{equation*}
\item[(b)] the angles $\theta_m$ and $\theta_M$ of $S_h$ satisfy
\begin{equation*} -\pi < \theta_m < \theta_M < \pi \mbox{ and } \theta_M-\theta_m \neq \pi, \end{equation*}
\end{itemize}
then $v$ and $w$ vanish at the edge point $(0,x_3^c) \in \mathbb{R}^3$ of $S_h \times (-M,M)$, where $x_3^c \in (-M,M)$.
\end{theorem}
\begin{remark}
When $\eta \equiv 0$ near the edge point, under the $H^2$ regularity of the interior transmission eigenfunctions $v$ and $w$, the vanishing property of $v$ and $w$ is investigated in \cite{Bsource}. \end{remark}
\section{Unique recovery results for the inverse scattering problem}\label{sec:4} {\color{black} In this section, we apply the vanishing property of the conductive transmission eigenfunctions at a corner in 2D to investigate the unique recovery in the inverse problem associated with the corresponding conductive scattering problem. Before that, we first describe the relevant physical background.
The time-harmonic electromagnetic wave scattering from a conductive medium body arises in applications of practical importance, for example in the modeling of an electromagnetic object coated with a thin layer of a highly conducting material.
In what follows, we let $\varepsilon$, $\mu$ and $\sigma$ denote the electric permittivity, the magnetic permeability and the conductivity of a medium, respectively.
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^2$ with a connected complement $\mathbb{R}^2\backslash\overline{\Omega}$. Consider a cylinder-like medium body $D:=\Omega\times\mathbb{R}$ in $\mathbb{R}^3$ with the cross section being $\Omega$ along the $x_3$-axis for ${x}=(x_j)_{j=1}^3\in D$. In the following discussions, with a slight abuse of notation, we shall also use ${x}=(x_1, x_2)$ in the 2D case, which should be clear from the context. Let $\Omega_\delta:=\{ {x}+h\nu({x}); {x}\in\partial\Omega\ \mbox{and}\ h\in (0, \delta)\}$ with $\delta\in\mathbb{R}_+$ being sufficiently small, where $\nu\in\mathbb{S}^1$ signifies the exterior unit normal vector to $\partial\Omega$. Set $D_\delta=\Omega_\delta\times\mathbb{R}$ to denote a layer of thickness $\delta$ coated on the medium body $D$. The material configuration associated with the above medium structure is given as follows: \begin{equation}\label{eq:m1} \varepsilon, \mu, \sigma=\varepsilon_1, \mu_0, \sigma_1\ \mbox{in}\ \ D;\ \varepsilon_2, \mu_0, \frac{\gamma}{\delta}\ \ \mbox{in}\ D_\delta;\ \varepsilon_0, \mu_0, 0\ \ \mbox{in}\ \mathbb{R}^3\backslash\overline{(D\cup D_\delta)}, \end{equation} where $\varepsilon_j>0$, $j=0,1,2$, $\mu_0, \gamma>0$ and $\sigma_1\geq0$ are all constants. Consider a time-harmonic incident field: \begin{equation}\label{eq:inc1} \nabla\times {E}^i-i\omega\mu_0 {H}^i=0,\quad \nabla\times {H}^i+i\omega\varepsilon_0 {E}^i=0\quad\mbox{in}\ \mathbb{R}^3, \end{equation} where $i:=\sqrt{-1}$, ${E}^i$ and ${H}^i$ are respectively the electric and magnetic fields and $\omega\in\mathbb{R}_+$ is the angular frequency.
The electromagnetic scattering is generated by the impingement of the incident field $({E}^i, {H}^i)$ on the medium body described in \eqref{eq:m1} as follows
\begin{equation}\label{eq:sca1} \begin{cases} & \nabla\times {E}-i\omega\mu {H}=0,\quad \nabla\times {H}+i\omega\varepsilon {E}=\sigma {E}, \quad\mbox{in}\ \mathbb{R}^3, \\[5pt] & {E}={E}^i+{E}^s,\quad {H}= {H}^i+ {H}^s,\hspace*{2.55cm}\mbox{in}\ \mathbb{R}^3, \\[5pt]
& \displaystyle{\lim_{r\rightarrow\infty} r\left({H}^s\wedge\hat{{x}}-{E}^s \right)=0, }\hspace*{3.5cm} r:=|{x}|, \hat{{x}}:={x}/|{x}|, \end{cases} \end{equation} where
the tangential components of the electric field ${E}$ and the magnetic field ${H}$ are continuous across the material interfaces $\partial D$ and $\partial D_\delta$. The last limit in \eqref{eq:sca1} is known as the Silver-M\"uller radiation condition.
Under the transverse-magnetic (TM) polarisation, namely,
\begin{align}\notag {E}^i&=\begin{bmatrix} 0\\0\\ u^i(x_1,x_2) \end{bmatrix},\ {H}^i=\begin{bmatrix} H_1(x_1,x_2)\\ H_2(x_1,x_2)\\ 0 \end{bmatrix}, \end{align} and \begin{align} {E}&=\begin{bmatrix} 0\\0\\ u(x_1,x_2) \end{bmatrix},\ {H}=\begin{bmatrix} H_1(x_1,x_2)\\ H_2(x_1,x_2)\\ 0 \end{bmatrix}, \label{eq:p2} \end{align}
it is rigorously verified in \cite{BonT} that as $\delta\rightarrow +0$, one has
\begin{equation}\label{eq:model}
\begin{cases}
\Delta u^-+k^2 q u^-=0 & \mbox{ in }\ \Omega, \\[5pt]
\Delta u^+ +k^2 u^+=0 & \mbox{ in }\ \mathbb{R}^2 \backslash \Omega, \\[5pt]
u^+= u^-,\quad \partial_\nu u^+ + \eta u^+=\partial_\nu u^- & \mbox{ on }\ \partial \Omega, \\[5pt]
u^+=u^i+u^s & \mbox{ in }\ \mathbb{R}^2 \backslash \Omega, \\[5pt]
\lim\limits_{r \rightarrow \infty} r^{1/2} \left( \partial_r u^s-i k u^s \right)=0, & \ r=|\mathbf x|,
\end{cases}
\end{equation} where \begin{equation}\label{eq:model12}
\mbox{$u^-=u|_{\Omega}$, $u^+=u|_{\mathbb{R}^2\backslash\overline{\Omega}}$\ \ and\ \ $k=\omega\sqrt{\varepsilon_0\mu_0}$, $\eta=i\omega\gamma\mu_0$}. \end{equation} The last limit in \eqref{eq:model} is known as the Sommerfeld radiation condition. According to \eqref{eq:m1}, as $\delta\rightarrow+0$, it is clear that the conductivity in the thin layer $D_\delta$ goes to infinity, or equivalently, its resistivity goes to zero. This in general would lead to the so-called perfectly electric conducting (PEC) boundary, which prevents the electric field from penetrating inside the medium body and instead generates a certain boundary current. However, in our case the thickness of the coating layer $D_\delta$ also goes to zero, and this allows the electromagnetic waves to penetrate inside the medium body. Nevertheless, the thin highly-conducting layer effectively produces a transmission boundary condition across the material interface $\partial D$ involving a conductive parameter $\eta$, which is referred to as the conductive transmission condition. It is known that a perfect conductor does not exist in nature, and hence the conductive medium body provides a more realistic means to model the electromagnetic scattering from an object coated with a thin layer of a highly conducting material; see \cite{HM, Senior} for more relevant discussions on this aspect. }
The well-posedness of the direct problem \eqref{eq:model} is known (cf. \cite{Bon}), and there exists a unique solution {\color{black} $u:=u^-\chi_\Omega+u^+\chi_{\mathbb{R}^2 \backslash\Omega}\in H_{loc}^1(\mathbb{R}^2)$.} Moreover, there holds the following asymptotic expansion {\color{black} \begin{equation}\notag
u^s(x)=\frac{e^{ik|x|}}{|x|^{1/2}}u^{\infty}(\hat{x})+{\mathcal O} \left(|x|^{-3/2}\right), \quad |x|\rightarrow +\infty
\end{equation}
uniformly in all directions $\hat{x}=x/|x| \in {\mathbb S}^{1}$.} The real-analytic function $u^{\infty}(\hat{x})$ is referred to as the \emph{far-field pattern} or the \emph{scattering amplitude} associated with $u^i$. The inverse scattering problem is concerned with the recovery of the scatterer $(\Omega; q, \eta)$ by knowledge of the far-field pattern $u^{\infty}(\hat{x}; u^i)$; that is
\begin{equation}\label{eq:ip1}
u^{\infty}(\hat{x}; u^i)\rightarrow(\Omega; q, \eta).
\end{equation}
In \eqref{eq:ip1}, if the far-field pattern is given corresponding to a single incident wave $u^i$, then it is referred to as a single far-field measurement, otherwise it is referred to as many far-field measurements. It is known that the inverse problem \eqref{eq:ip1} is nonlinear and ill-conditioned.
For the reconstruction of the shape of the scatterer $\Omega$ by using the factorization method for \eqref{eq:ip1}, the uniqueness issue has been studied in \cite{Bon}. The inverse spectral problem of gaining information about the material properties associated with the conductive transmission eigenvalue problem has been studied in \cite{BHK}. In \cite{HK}, the unique recovery of the conductive boundary parameter $\eta$ from the measured scattering data was studied, as well as the convergence of the conductive transmission eigenvalues as the conductivity parameter tends to zero. In all of the aforementioned literature, the unique determination results are based on the far-field patterns of all incident plane waves at a fixed frequency, which means that infinitely many far-field measurements are used. In what follows, we show that in a rather general and practical scenario, the polyhedral shape of the scatterer, namely $\Omega$, can be uniquely recovered by a single far-field measurement without knowing its material contents, namely $q$ and $\eta$. Moreover, if the surface conductive parameter $\eta$ is constant, then it can be recovered as well.
{{Our main unique recovery results for the inverse scattering problem \eqref{eq:ip1} are contained in Theorems \ref{Th:4.1} and \ref{Th: unique eta}. In Theorem \ref{Th:4.1}, we establish the unique recovery results by a single far-field measurement in determining a 2D polygonal conductive scatterer without knowing its contents. In Theorem \ref{Th: unique eta}, the surface conductive parameter $\eta$ of the scatterer can be further recovered if it is a constant.}} Before presenting the main results, we first note in Remark \ref{Pro} that the conductive parameter $\eta$ in \eqref{eq:model} has a close relationship with the wave number $k$ from the practical point of view of the TM mode for the time-harmonic Maxwell system \cite{Ang}. This relationship helps us to show that our assumption in Theorem \ref{Th:4.1} can be fulfilled when the wave number $k$ is sufficiently small.
{\color{black} In view of \eqref{eq:model12}, we readily have the following observation. \begin{remark}\label{Pro}
The conductive boundary parameter $\eta$ of \eqref{eq:model} satisfies \begin{equation}\label{eq:condn1}
\eta= i\omega\gamma\mu_0 ={\mathcal O}( k ), \end{equation} where $k:=\omega \sqrt{\varepsilon_0 \mu_0 }$ is the wave number in \eqref{eq:model}. \end{remark} }
Remark~\ref{Pro} basically indicates that when considering the conductive scattering problem \eqref{eq:model}, one may impose the low-frequency dependence behaviour \eqref{eq:condn1} on the surface conductive parameter. As remarked earlier, Remark~\ref{Pro} only considers the simple model \eqref{eq:sca1} for illustration of the low-frequency behaviour \eqref{eq:condn1}. For more complex Maxwell models, one can derive the conductive scattering system \eqref{eq:model} of a general form.
We are in a position to consider the inverse problem \eqref{eq:ip1}. First, we introduce the admissible class of conductive scatterers in our study. {\color{black} Let $W_{x_c}(\theta_W)$ be an open sector in $\mathbb R^2$ with the vertex $x_c$ and the open angle $\theta_W $. Denote
\begin{equation}\label{eq:thm21}
\begin{split}
\Gamma_h^\pm(x_c)&: =\partial W_{x_c} (\theta_W ) \cap B_h(x_c),\quad S_{h} (x_c):= \Omega\cap B_{h} (x_c)= \Omega \cap W_{x_c} (\theta_W ), \\
S_{h/2} (x_c)&:= \Omega\cap B_{h/2} (x_c)= B_{h/2} (x_c) \cap W_{x_c} (\theta_W ),\quad\Sigma_{\Lambda_h}(x_c):=S_{h}(x_c)\backslash S_{h/2} (x_c).
\end{split}
\end{equation}}
\begin{definition}~\label{def:adm} Let {\color{black} $(\Omega;k,d, q, \eta)$} be a conductive scatterer associated with {\color{black} the incident plane wave $u^i=e^{i kx\cdot d}$ with $d\in\mathbb{S}^{1}$ and $k\in \mathbb R_+$. Consider} the scattering problem \eqref{eq:model} and let $u$ be the total wave field therein. The scatterer is said to be admissible if it fulfils the following conditions: \begin{itemize} \item[(a)] $\Omega$ is a bounded {\color{black} simply connected} Lipschitz domain in $\mathbb{R}^2$, and $q\in L^\infty(\Omega)$, $\eta\in L^\infty(\partial\Omega)$.
\item[(b)] Following the notations in Theorem \ref{Th:1.1}, if $\Omega$ possesses a corner {\color{black} $B_h(x_c) \cap \Omega= \Omega\cap W_{x_c}(\theta_W )$, where $x_c$ is the vertex of the sector $W_{x_c}(\theta_W )$ and the open angle $\theta_W$ of $ W_{x_c}(\theta_W ) $ satisfies $\theta_W \neq \pi $, then $q \big |_{S_h(x_c)} $ is a constant and $\eta\in C^\alpha(\overline{\Gamma_h^\pm(x_c)})$, where $S_{h}(x_c) $ and $\Gamma_h^\pm (x_c)$ are defined in \eqref{eq:thm21}.}
\item[(c)] The total wave field $u$ is non-vanishing everywhere in the sense that for any $x\in\mathbb{R}^2$, \begin{equation}\label{eq:nn2}
\lim_{ \rho \rightarrow +0 }\frac{1}{m(B(x, \rho ))} \int_{B(x, \rho )} |u(y)| {\rm d} y\neq 0.
\end{equation} \end{itemize} \end{definition}
We would like to point out that the conditions stated in Definition~\ref{def:adm} can be fulfilled by the conductive scatterer $(\Omega; k,d, q, \eta)$ and the scattering problem \eqref{eq:model} in certain general and practical scenarios.
In particular, the condition \eqref{eq:nn2} in (c) can be fulfilled at least when $k$ is sufficiently small. In fact, it has been shown in Proposition \ref{Pro} that if $\eta\neq 0$, then $\eta = {\mathcal O} (k)$. For the scattered field $u^s$ of \eqref{eq:model}, it is proved in \cite[Theorem 2.4]{Bon} that
$$
\|u^s\|_{H^1(B)} \leq C( \|\eta u^i\|_{H^{-1/2} ( \partial \Omega ) }+k^2 \|q u^i \|_{L^2(\Omega )} ) ={\mathcal O}( k ) \|u^i\|_{L^2(\Omega)},
$$
where $C$ is a positive number and $B$ is a large ball containing $\Omega$. Hence, if the incident field $u^i$ is non-vanishing everywhere, say $u^i=e^{i kx\cdot d}$ with $d\in\mathbb{S}^{1}$ being a plane wave, and $k$ is sufficiently small, then \eqref{eq:nn2} is obviously fulfilled. Nevertheless, by Definition~\ref{def:adm}, we may include more general situations into our subsequent study of the inverse problem \eqref{eq:ip1}.
\begin{figure}
\caption{Schematic illustration of the geometry setup in the proof of Theorem~\ref{Th:4.1}.}
\label{fig:geometry}
\end{figure}
{\color{black} The determination of the geometric shape of a conductive scatterer can be established in Theorem \ref{Th:4.1} by using Theorem~\ref{Th:1.2} and a contradiction argument. The technical requirement $qw\in C^\alpha(\overline S_h)$ in Theorem \ref{Th:1.2} can be easily fulfilled. Indeed, this condition can be derived by using the classical results on the singular behaviours of the solutions to elliptic PDEs in a corner domain \cite{Dauge88,Grisvard,Cos,CN}. In fact, it is known that the solution can be decomposed into a singular part and a regular part, where the singular part is of H\"older form and depends on the corner geometry as well as the boundary data and the right-hand side inputs. For our subsequent use, we first give the following result in a relatively simple scenario.
In \eqref{eq:model}, by the standard PDE theory (see e.g. \cite{McLean}), we know that the solution $u$ is real-analytic away from the conductive interface. In Lemma~\ref{lem41} below, we further establish the H\"older-regularity of the solution up to the conductive interface, especially to the vertex corner point. Denote $$
S_{2h}= W\cap B_{2h},\quad \Gamma_{2h}^{\pm }= \Gamma^{\pm } \cap B_{2h}, $$ where $W$ is the sector defined in \eqref{eq:W}, $B_{2h}$ is an open ball centered at $0$ in $\mathbb R^2$ with the radius $2h$ and $\Gamma^\pm$ are the boundaries of $W$. \begin{lemma}\cite[Lemma 3.4]{CDL3} \label{lem41} Suppose that $u\in H^1(B_{2h})$ satisfies
\begin{equation}\label{eq:lem23} \begin{cases} \Delta u^-+k^2 q_- u^-=0 & \mbox{ in }\ S_{2h}, \\[5pt] \Delta u^+ +k^2 q_{+} u^+=0 & \mbox{ in }\ B_{2h} \backslash {\overline{ S_{2h}}}, \\[5pt] u^+= u^- & \mbox{ on }\ \Gamma_{2h}^\pm, \end{cases} \end{equation}
where $u^+=u|_{{B_{2h}\backslash {\overline{ S_{2h}}}}}$, $u^-=u|_{S_{2h}}$ and $q_{\pm}$, $k$ are complex constants. Assume that $u^+$ and $u^-$ are respectively real analytic in ${B_{2h}\backslash {\overline{ S_{2h}}}}$ and $S_{2h}$. There exists $\alpha\in (0, 1)$ such that $u^- \in C^\alpha(\overline{S_h})$, where $S_h$ is defined in \eqref{eq:sh}. \end{lemma} }
\begin{theorem}\label{Th:4.1} Consider the conductive scattering problem \eqref{eq:model} associated with two conductive scatterers $(\Omega_j; k, d, q_j, \eta_j)$, $j=1,2$, in $\mathbb{R}^2$. Let $u_\infty^j(\hat x; u^i)$ be the far-field pattern associated with the scatterer $(\Omega_j; k, d, q_j, \eta_j)$ and the incident field $u^i$. Suppose that $(\Omega_j; k, d, q_j, \eta_j)$, $j=1,2$ are admissible and \begin{equation}\label{eq:nn1} u_\infty^1(\hat x; u^i)=u_\infty^2(\hat x; u^i) \end{equation} for all $\hat{x}\in\mathbb{S}^{1}$ and a fixed incident wave $u^i$. Then \begin{equation}\label{eq:nn3} \Omega_1\Delta\Omega_2:=\big(\Omega_1\backslash\Omega_2\big)\cup \big(\Omega_2\backslash\Omega_1\big) \end{equation} cannot possess a corner. Hence, if $\Omega_1$ and $\Omega_2$ are convex polygons in $\mathbb{R}^2$, one must have \begin{equation}\label{eq:nn4} \Omega_1=\Omega_2. \end{equation} \end{theorem}
\begin{proof}
By contradiction, we assume that there is a corner contained in $\Omega_1\Delta\Omega_2$. Without loss of generality, we may assume that the vertex $O$ of the corner $\Omega_2 \cap W$ is such that $O \in \partial \Omega_2$ and $ O \notin \overline{\Omega}_1$. {\color{black} We may further assume that $O$ is the origin of $\mathbb R^2$.}
Since $u_\infty^1(\hat x; u^i)=u_\infty^2(\hat x; u^i) $ for all $\hat x \in {\mathbb S}^1$, applying Rellich's Theorem (see \cite{CK}), we know that $u_1^s=u_2^s$ in $\mathbb{R}^2 \backslash ( \overline{\Omega}_1 \cup \overline{\Omega}_2 )$. Thus
\begin{equation}\label{eq:u1u2}
u_1(x)=u_2(x)
\end{equation}
for all $x \in \mathbb{R}^2 \backslash ( \overline{\Omega}_1 \cup \overline{\Omega}_2 )$. Following the notations in \eqref{eq:sh}, we have from \eqref{eq:u1u2} that
$$
u_2^-=u_2^+=u_1^+,\quad \partial u_2^- = \partial u_2^+ + \eta_2 u_2^+=\partial u_1^+ + \eta_2 u_1^+ \mbox{ on } \Gamma_h^\pm,
$$
where the superscripts $(\cdot)^-, (\cdot)^+$ stand for the limits taken from $\Omega_2$ and $\mathbb{R}^2\backslash\overline{\Omega_2}$, respectively. Moreover, suppose the neighbourhood {\color{black} $B_{2h}$} is sufficiently small such that
$$
\Delta u_1^+ + k^2 u_1^+ =0,\quad \Delta u_2^- +k^2 q_2 u_2^-=0 \mbox{ in } {\color{black} B_{2h}}.
$$
{\color{black} It is clear that $u_1^+$ and $u_2^-$ are respectively real analytic in ${B_{2h}\backslash {\overline{ S_{2h}}}}$ and $S_{2h}$. Since $q_2 \big |_{S_h} $ is a constant, by virtue of Lemma \ref{lem41}, we know that $u_2^- \in C^\alpha(\overline{S_h})$, which implies that \begin{equation}\label{eq:416}
q_2u_2^- \in C^\alpha(\overline{S_h}). \end{equation} }
Clearly $u_1^+\in H^2(S_h)$ and $u_2^-\in H^1(S_h)$. Now we prove that $$ u_1^+ - u_2^-\in H^2( \Sigma_{\Lambda_h} ), $$ where $\Sigma_{\Lambda_h}$ is defined in \eqref{eq:sh}. We first note that on the boundary $\Gamma^\pm_h$, one has $u_2^-=u_1^+$, where $u_1^+ \in H^{3/2}(\Gamma^\pm_h )$ from the trace theorem. {\color{black} Denote \begin{equation}\label{add7}
D_1^+ =B_{h/4}(x_{\Gamma^+}) \cap \Sigma_{\Lambda_h},\quad D_1^- =B_{h/4}(x_{\Gamma^-}) \cap \Sigma_{\Lambda_h}, \end{equation} where $x_{\Gamma^+}$ and $x_{\Gamma^-}$ are the mid-points of $\Gamma^{+}_{(h/2,h)} $ and $\Gamma^{-}_{(h/2,h)} $, respectively. }
Since $\Gamma_h^+ \in C^{1,1}$, from \cite[Theorem 4.18]{McLean}, we have the following regularity estimate for $u_2^-$ up to the boundary $\Gamma^{+}_{(h/2,h)} $ of $\Sigma_{\Lambda_h}$: $$ {\color{black}
\left\|u_2^- \right \|_{H^2( D_1^+ )} \leq C\left(\left\|u_2^- \right \|_{H^1(S_h )}+ \left\|u_1^+ \right \|_{H^{3/2}( \Gamma^{+}_{h} )} \right),} $$ where $C>0$ is a constant and $D_1^+ $ is defined in \eqref{add7}.
Using a similar argument, we can prove that $u_2^-$ has $H^2$-regularity up to the boundary $\Gamma^{-}_{(h/2,h)} $ of $\Sigma_{\Lambda_h}$. Therefore $u_2^- \in H^2( \Sigma_{\Lambda_h} )$ {\color{black} by using the interior regularity of the standard elliptic PDE theory}, which means that $u_1^+ - u_2^- \in H^2( \Sigma_{\Lambda_h} )$. Since $(\Omega_j; k,d, q_j, \eta_j)$, $j=1,2$, are admissible, we know that $\eta_j\in C^\alpha(\overline{\Gamma}_h^\pm)$. {\color{black} Noting \eqref{eq:416}} and applying Theorem \ref{Th:1.2} if $\eta_2(0)\neq 0$, or Remark~\ref{rem:2.5} if $\eta_2(0)=0$ on $\Gamma_{ h}^\pm$, and also utilizing the fact that $u_1$ is continuous at the vertex $0$, we have
$$
u_1(0)=0,
$$
which contradicts the admissibility condition (c) in Definition \ref{def:adm}.
The proof is complete. \end{proof}
{{Based on Definition \ref{def:adm}, if we further assume that the surface conductive parameter $\eta$ is constant, we can recover $\eta$ simultaneously once the admissible conductive scatterer $\Omega$ is determined. However, in determining the surface conductive parameter, we need to assume that $q_1=q_2:=q$ is known.
\begin{theorem}\label{Th: unique eta}
Consider the conductive scattering problem \eqref{eq:model} associated with the admissible conductive scatterers $(\Omega_j; k, d, q, \eta_j)$, where $\Omega_j=\Omega$ for $j=1,2$ and $\eta_j\neq0$, $j=1,2$, are two constants. Let $u_{\infty}^j(\hat{x}; u^i)$ be the far-field pattern associated with the scatterer $(\Omega; k, d, q, \eta_j)$ and the incident field $u^i$. Suppose that $(\Omega; k, d, q, \eta_j)$, $j=1,2$, are admissible and
\begin{equation}\label{eq:far}
u_{\infty}^1(\hat{x}; u^i)=u_{\infty}^2(\hat{x}; u^i)
\end{equation}
for all $\hat{x}\in\mathbb{S}^1$ and a fixed incident wave $u^i$. Then if $k$ is not an eigenvalue of the partial differential operator $\Delta+k^2q$ in $H_0^1(\Omega)$, we have $\eta_1=\eta_2$.
\end{theorem}
\begin{proof}
Since $u_{\infty}^1(\hat{x}; u^i) =u_\infty^2({\hat{x}}; u^i)$ for all $\hat{x}\in\mathbb{S}^1$, we can derive that $u_1^+=u_2^+$ for all $x\in\mathbb{R}^2\backslash\overline{\Omega}$ and thus $\partial_\nu u_1^+=\partial_\nu u_2^+$ on $\partial\Omega$. Combining with the transmission condition in the scattering problem \eqref{eq:model}, we deduce that
\begin{equation}\notag
u_1^-=u_1^+=u_2^+=u_2^-\ \mbox{ on }\ \partial\Omega.
\end{equation}
Thus, we have
\begin{equation}\notag
\partial_\nu(u_1^--u_2^-)=\partial_\nu(u_1^+-u_2^+)+\eta_1u_1^+-\eta_2u_2^+=(\eta_1-\eta_2)u_1^- \ \mbox{ on }\ \partial\Omega.
\end{equation}
Define $v:=u_1^--u_2^-$. Then $v$ fulfills
\begin{equation}\label{eq:v}
\begin{cases}
(\Delta+k^2q)v=0 & \mbox{ in }\ \Omega,\\
v=0 &\mbox{ on }\ \partial\Omega,\\
\partial_\nu v=(\eta_1-\eta_2)u_1^- &\mbox{ on }\ \partial\Omega.
\end{cases}
\end{equation}
Since $k$ is not an eigenvalue of the operator $\Delta+k^2q$ in $H_0^1(\Omega)$, one must have $v=0$ for \eqref{eq:v}. Substituting this into the Neumann boundary condition of \eqref{eq:v}, we know that $(\eta_1-\eta_2)u_1^-=\partial_\nu v=0$ on $\partial\Omega$.
Next, we prove the uniqueness of $\eta$ by contradiction. Assume that $\eta_1\neq\eta_2$. Since $(\eta_1-\eta_2)u_1^-=0$ on $\partial\Omega$ and $\eta_j$, $j=1,2$ are constants, we can deduce that $u_1^-=0$ on $\partial\Omega$. Then $u_1^-$ satisfies
\begin{equation}\notag
\begin{cases}
(\Delta +k^2q)u_1^-=0 & \mbox{ in }\ \Omega,\\
u_1^-=0 & \mbox{ on }\ \partial\Omega.
\end{cases}
\end{equation}
Similar to \eqref{eq:v}, this Dirichlet problem also has only the trivial solution $u_1^-=0$ in $\Omega$, since $k$ is not an eigenvalue of $\Delta +k^2q$. Then, we can derive $u_1^+=u_1^-=0$ and
\begin{equation}\notag
\partial_\nu u_1^-=\partial_\nu u_1^++\eta_1u_1^+=\partial_\nu u_1^+=0 \mbox{ on } \partial\Omega,
\end{equation}
which implies that $u_1\equiv0$ in $\mathbb{R}^2$ and thus $u_1^s=-u^i$. This contradicts the fact that $u^s_1$ satisfies the Sommerfeld radiation condition.
The proof is complete.
\end{proof}
\begin{remark} In Theorem \ref{Th: unique eta}, it is required that $k$ is not an eigenvalue of $\Delta+k^2q$ in $H_0^1(\Omega)$. Clearly, if $q$ is negative-valued in $\Omega$ or $\Im q\neq 0$ in $\Omega$, this condition is fulfilled. On the other hand, if $q$ is positive-valued in $\Omega$, then this condition can be readily fulfilled when $k\in\mathbb{R}_+$ is sufficiently small.
\end{remark} }}
\end{document}
RMO Resource Center
RMO is the second step in Math Olympiad in India. Past papers, sequential hints and training resources.
Past Papers of RMO (Regional Math Olympiad India)
RMO 1991
Indian Regional Math Olympiad (RMO) 2016 - All Sets
Regional Math Olympiad (RMO) 2016 Telengana Region
Let $ABC$ be a right angled triangle with $\angle B=90^{\circ}$. Let $I$ be the incentre of $\triangle ABC$. Suppose $AI$ is extended to meet $BC$ at $F$. The perpendicular on $AI$ at $I$ is extended to meet $AC$ at $E$. Prove that $IE = IF$.
Let $a,b,c$ be positive real numbers such that $\frac{a}{1+a}+\frac{b}{1+b}+\frac{c}{1+c}=1$.Prove that $abc\leq\frac{1}{8}$.
For any natural number $n$, expressed in base $10$, let $S(n)$ denote the sum of all digits of $n$. Find all positive integers $n$ such that $n^3 = 8S(n)^3 + 6nS(n) + 1$.
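Reading the equation as $n^3 = 8S(n)^3 + 6nS(n) + 1$ (transcriptions of this problem vary, so treat that reading as our assumption), a short brute-force check in Python, with helper names of our own choosing, turns up a single solution to aim for before attempting the algebra:

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(c) for c in str(n))

def solve_cubic_digit_equation(limit: int = 10_000) -> list[int]:
    """Brute-force search for n with n**3 == 8*S(n)**3 + 6*n*S(n) + 1.

    Since n**3 is close to (2*S(n))**3, any solution has n close to
    2*S(n) <= 18 * (number of digits), far below this limit.
    """
    return [
        n
        for n in range(1, limit)
        if n ** 3 == 8 * digit_sum(n) ** 3 + 6 * n * digit_sum(n) + 1
    ]
```

The hit is explained by the identity $(2s+1)^3 = 8s^3 + 6s(2s+1) + 1$: candidates are exactly the $n$ with $n = 2S(n)+1$, and $n = 17$ with $S(17) = 8$ fits.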
Find all $6$ digit natural numbers, which consist of only the digits $1,2$ and $3$, in which $3$ occurs exactly twice and the number is divisible by $9$.
Let $ABC$ be a right angled triangle with $\angle B=90^{\circ}$. Let $AD$ be the bisector of angle $A$ with $D$ on $BC$ . Let the circumcircle of $\triangle ACD$ intersect $AB$ again at $E$; and let the circumcircle of $\triangle ABD$ intersect $AC$ again at $F$ . Let $K$ be the reflection of $E$ in the line $BC$ . Prove that $FK = BC$.
Show that the infinite arithmetic progression {$1,4,7,10 \cdots$} has infinitely many 3 -term sub sequences in harmonic progression such that for any two such triples {$a_1, a_2 , a_3$ } and {$b_1, b_2 ,b_3$} in harmonic progression , one has$$\frac{a_1} {b_1} \neq \frac {a_2}{b_2}$$
Regional Math Olympiad (RMO) 2016 Bengal Region
Let $ABC$ be a triangle and $D$ be the mid-point of $BC$. Suppose the angle bisector of $\angle ADC$ is tangent to the circumcircle of $\triangle ABD$ at $D$. Prove that $\angle A=90^{\circ}$.
Let $a,b,c$ be three distinct positive real numbers such that $abc=1$. Prove that $$\frac{a^3}{(a-b)(a-c)}+\frac{b^3}{(b-c)(b-a)}+\frac{c^3}{(c-a)(c-b)} \geq 3$$
Let $a,b,c,d,e,f$ be positive integers such that $\frac{a}{b} < \frac{c}{d} < \frac{e}{f}$. Suppose $af-be=-1$. Show that $d \geq b+f$.
There are $100$ countries participating in an olympiad. Suppose $n$ is a positive integer such that each of the $100$ countries is willing to communicate in exactly $n$ languages. If each set of $20$ countries can communicate in exactly one common language, and no language is common to all $100$ countries, what is the minimum possible value of $n$?
Let $ABC$ be a right-angled triangle with $\angle B=90^{\circ}$. Let $I$ be the incentre if $ABC$. Extend $AI$ and $CI$; let them intersect $BC$ in $D$ and $AB$ in $E$ respectively. Draw a line perpendicular to $AI$ at $I$ to meet $AC$ in $J$, draw a line perpendicular to $CI$ at $I$ to meet $AC$ at $K$. Suppose $DJ=EK$. Prove that $BA=BC$.
(a). Given any natural number $N$, prove that there exists a strictly increasing sequence of $N$ positive integers in harmonic progression.
(b). Prove that there cannot exist a strictly increasing infinite sequence of positive integers which is in harmonic progression.
Regional Math Olympiad (RMO) 2016 Maharashtra Region
Find distinct positive integers $n_1<n_2<\cdots<n_7$ with the least possible sum, such that their product $n_1 \times n_2 \times \cdots \times n_7$ is divisible by $2016$.
At an international event there are $100$ countries participating, each with its own flag. There are $10$ distinct flagpoles at the stadium, labelled $1,2,...,10$ in a row. In how many ways can all the $100$ flags be hoisted on these $10$ flagpoles, such that for each $i$ from $1$ to $10$, the flagpole $i$ has at least $i$ flags? (Note that the vertical order of the flags on each flagpole is important.)
Find all integers $k$ such that all roots of the following polynomial are also integers:$$f(x)=x^3-(k-3)x^2-11x+(4k-8)$$.
Let $\triangle ABC$ be scalene, with $BC$ as the largest side. Let $D$ be the foot of the perpendicular from $A$ on side $BC$. Let points $K, L$ be chosen on the lines $AB$ and $AC$ respectively, such that $D$ is the midpoint of segment $KL$. Prove that the points $B,K,C,L$ are concyclic if and only if $\angle BAC=90^{\circ}$.
Let $x,y,z$ be non-negative real numbers such that $xyz=1$. Prove that$$(x^3+2y)(y^3+2z)(z^3+2x) \geq 27.$$
$ABC$ is an equilateral triangle with side length $11$ units. Consider the points $P_1,P_2, \cdots, P_{10}$ dividing segment $BC$ into $11$ parts of unit length. Similarly, define $Q_1, Q_2, \cdots, Q_{10}$ for the side $CA$ and $R_1,R_2,\cdots, R_{10}$ for the side $AB$. Find the number of triples ($i,j,k$) with $i,j,k$ in $\{1,2,\cdots,10\}$ such that the centroids of $\triangle ABC$ and $\triangle P_iQ_jR_k$ coincide.
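Coincidence of centroids is preserved by affine maps, so one can replace the equilateral triangle by any non-degenerate triangle with rational vertices and count the triples exactly with `fractions.Fraction`; the function names below are ours:

```python
from fractions import Fraction

# Any non-degenerate triangle works: the centroid condition is affine-invariant.
A = (Fraction(0), Fraction(0))
B = (Fraction(11), Fraction(0))
C = (Fraction(0), Fraction(11))

def point_on(P, Q, t):
    """Point dividing segment PQ at parameter t in [0, 1]."""
    return (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))

def centroid(*pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def count_triples() -> int:
    G = centroid(A, B, C)
    count = 0
    for i in range(1, 11):
        P = point_on(B, C, Fraction(i, 11))          # P_i on BC
        for j in range(1, 11):
            Q = point_on(C, A, Fraction(j, 11))      # Q_j on CA
            for k in range(1, 11):
                R = point_on(A, B, Fraction(k, 11))  # R_k on AB
                if centroid(P, Q, R) == G:
                    count += 1
    return count
```

The count agrees with the linear-algebra argument: the condition reduces to $(j-k)A + (k-i)B + (i-j)C = 0$ with coefficients summing to zero, which forces $i = j = k$.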
Regional Math Olympiad (RMO) 2016 Mumbai Region
Let $ABC$ be a right-angled triangle with $\angle B=90^{\circ}$. Let $I$ be the incenter of $ABC$. Draw a line perpendicular to $AI$ at $I$. Let it intersect the line $CB$ at $D$. Prove that $CI$ is perpendicular to $AD$ and prove that $ID=\sqrt{b(b-a)}$ where $BC=a$ and $CA=b$.
Let $a,b,c$ be positive real numbers such that$$\frac{a}{1+a}+\frac{b}{1+b}+\frac{c}{1+c}=1.$$Prove that $abc \leq \frac{1}{8}$.
For any natural number $n$, expressed in base $10$, let $S(n)$ denote the sum of all digits of $n$. Find all natural numbers $n$ such that $n=2S(n)^2$.
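A brute force here is tiny: if $n$ has $d$ digits then $S(n) \le 9d$, so $n = 2S(n)^2 \le 2(9d)^2$, which is impossible for $d \ge 5$; searching below $10^4$ is therefore exhaustive (the function name is ours):

```python
def solve_twice_square(limit: int = 10_000) -> list[int]:
    """All n < limit with n == 2 * S(n)**2, S = decimal digit sum."""
    def digit_sum(n: int) -> int:
        return sum(int(c) for c in str(n))
    return [n for n in range(1, limit) if n == 2 * digit_sum(n) ** 2]
```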
Find the number of all 6-digits numbers having exactly three odd and three even digits.
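A direct count over all six-digit numbers takes about a second and doubles as a check on the combinatorial count $\binom{6}{3}5^6 - \binom{5}{3}5^5$, where the second term removes digit strings with a leading zero (the function name is ours):

```python
def count_three_odd_three_even() -> int:
    """Count 6-digit numbers with exactly three odd and three even digits."""
    count = 0
    for n in range(100_000, 1_000_000):
        if sum(int(c) % 2 for c in str(n)) == 3:  # number of odd digits
            count += 1
    return count
```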
Let $ABC$ be a triangle with centroid $G$. Let the circumcircle of $\triangle AGB$ intersect the line $BC$ in $X$ different from $B$, and the circumcircle of $\triangle AGC$ intersect the line $BC$ in $Y$ different from $C$. Prove that $G$ is the centroid of $\triangle AXY$.
Let $(a_1,a_2,\cdots)$ be a strictly increasing sequence of positive integers in arithmetic progression. Prove that there is an infinite sub-sequence of the given sequence whose terms are in a geometric progression.
Regional Math Olympiad (RMO) 2016 Delhi Region
Given are two circles $\omega_1,\omega_2$ which intersect at points $X,Y$. Let $P$ be an arbitrary point on $\omega_1$. Suppose that the lines $PX,PY$ meet $\omega_2$ again at points $A,B$ respectively. Prove that the circumcircles of all $\triangle PAB$ have the same radius.
Consider a sequence $(a_k)_{k \geq 1}$ of natural numbers defined as follows: $a_1=a$ and $a_2=b$ with $a,b>1$ and $\gcd(a,b)=1$, and for all $k>0$, $a_{k+2}=a_{k+1}+a_k$. Prove that for all natural numbers $n$ and $k$, $\gcd(a_n,a_{n+k}) < \frac{a_k}{2}$.
Two circles $C_1$ and $C_2$ intersect each other at points $A$ and $B$. Their external common tangent (closer to $B$) touches $C_1$ at $P$ and $C_2$ at $Q$. Let $C$ be the reflection of $B$ in line $PQ$. Prove that $\angle CAP=\angle BAQ$.
Let $a,b,c$ be positive real numbers such that $a+b+c=3$. Determine, with proof, the largest possible value of the expression $$ \frac{a}{a^3+b^2+c}+\frac{b}{b^3+c^2+a}+\frac{c}{c^3+a^2+b}$$
a.) A 7-tuple $a_1,a_2,a_3,a_4,b_1,b_2,b_3$ of pairwise distinct positive integers with no common factor is called a shy tuple if $$ a_1^2+a_2^2+a_3^2+a_4^2=b_1^2+b_2^2+b_3^2$$and for all $1 \leq i<j \leq 4$ and $1 \leq k \leq 3$, $a_i^2+a_j^2 \neq b_k^2$. Prove that there exists infinitely many shy tuples.
b.) Show that $2016$ can be written as a sum of squares of four distinct natural numbers.
A deck of $52$ cards is given. There are four suits, each having cards numbered $1,2,\cdots, 13$. The audience chooses some five cards with distinct numbers written on them. The assistant of the magician comes by, looks at the five cards and turns exactly one of them face down and arranges all five cards in some order. Then the magician enters and with an agreement made beforehand with the assistant, he has to determine the face down card (both suit and number). Explain how the trick can be completed.
Regional Math Olympiad (RMO) 2015 - Paper 1
In a cyclic quadrilateral $A B C D,$ let the diagonals $A C$ and $B D$ intersect at $X$. Let the circumcircles of triangles $A X D$ and $B X C$ intersect again at $Y$. If $X$ is the incentre of triangle $A B Y,$ show that $\angle C A D=90^{\circ}$.
Let $P_{1}(x)=x^{2}+a_{1} x+b_{1}$ and $P_{2}(x)=x^{2}+a_{2} x+b_{2}$ be two quadratic polynomials with integer coefficients. Suppose $a_{1} \neq a_{2}$ and there exist integers $m \neq n$ such that $P_{1}(m)=P_{2}(n), P_{2}(m)=P_{1}(n) .$ Prove that $a_{1}-a_{2}$ is even.
Find all fractions which can be written simultaneously in the forms $\frac{7 k-5}{5 k-3}$ and $\frac{6 l-1}{4 l-3},$ for some integers $k, l$.
Suppose 28 objects are placed along a circle at equal distances. In how many ways can 3 objects be chosen from among them so that no two of the three chosen objects are adjacent nor diametrically opposite?
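Before hunting for an inclusion-exclusion count, it helps to know the target; a brute force over all $\binom{28}{3} = 3276$ triples (the names below are ours) gives it directly:

```python
from itertools import combinations

def count_selections(n: int = 28) -> int:
    """Triples of points equally spaced on a circle, with no two chosen
    points adjacent and no two diametrically opposite."""
    def ok(a: int, b: int) -> bool:
        d = (b - a) % n
        d = min(d, n - d)              # circular distance between positions
        return d != 1 and d != n // 2  # not adjacent, not opposite
    return sum(
        1
        for a, b, c in combinations(range(n), 3)
        if ok(a, b) and ok(b, c) and ok(a, c)
    )
```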
Let $A B C$ be a right triangle with $\angle B=90^{\circ} .$ Let $E$ and $F$ be respectively the mid-points of $A B$ and $A C$. Suppose the incentre $I$ of triangle $A B C$ lies on the circumcircle of triangle $A E F$. Find the ratio $B C / A B$.
Find all real numbers $a$ such that $3 < a < 4$ and $a(a-3\{a\})$ is an integer. (Here $\{a\}$ denotes the fractional part of $a$. For example $\{1.5\}=0.5$; $\{-3.4\}=0.6$.)
Let $A B C$ be a triangle. Let $B^{\prime}$ and $C^{\prime}$ denote respectively the reflection of $B$ and $C$ in the internal angle bisector of $\angle A$. Show that the triangles $A B C$ and $A B^{\prime} C^{\prime}$ have the same incentre.
Let $P(x)=x^{2}+a x+b$ be a quadratic polynomial with real coefficients. Suppose there are real numbers $s \neq t$ such that $P(s)=t$ and $P(t)=s$. Prove that $b-s t$ is a root of the equation $x^{2}+a x+b-s t=0$
Find all integers $a, b, c$ such that $$ a^{2}=b c+1, \quad b^{2}=c a+1 $$
Two circles $\Gamma$ and $\Sigma$ in the plane intersect at two distinct points $A$ and $B$, and the centre of $\Sigma$ lies on $\Gamma$. Let points $C$ and $D$ be on $\Gamma$ and $\Sigma$, respectively, such that $C, B$ and $D$ are collinear. Let point $E$ on $\Sigma$ be such that $D E$ is parallel to $A C .$ Show that $A E=A B$
Find all real numbers $a$ such that $4 < a < 5$ and $a(a-3\{a\})$ is an integer. (Here $\{a\}$ denotes the fractional part of $a$. For example $\{1.5\}=0.5$; $\{-3.4\}=0.6$.)
Two circles $\Gamma$ and $\Sigma,$ with centres $O$ and $O^{\prime},$ respectively, are such that $O^{\prime}$ lies on $\Gamma$. Let $A$ be a point on $\Sigma$ and $M$ the midpoint of the segment $A O^{\prime}$. If $B$ is a point on $\Sigma$ different from $A$ such that $A B$ is parallel to $O M,$ show that the midpoint of $A B$ lies on $\Gamma$.
Let $P(x)=x^{2}+a x+b$ be a quadratic polynomial where $a$ and $b$ are real numbers. Suppose $\left\langle P(-1)^{2}, P(0)^{2}, P(1)^{2}\right\rangle$ is an arithmetic progression of integers. Prove that $a$ and $b$ are integers.
Show that there are infinitely many triples $(x, y, z)$ of integers such that $x^{3}+y^{4}=z^{31}$.
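One standard construction (ours, not necessarily the intended one) uses pure powers of $2$: from $2^{216} + 2^{216} = 2^{217}$ and $217 = 7 \cdot 31$, the triple $(2^{72}, 2^{54}, 2^{7})$ works, and bumping the three exponents by multiples of $(124, 93, 12)$ adds the same $372 = \operatorname{lcm}(3,4,31)$ to every exponent in the equation:

```python
def power_of_two_triple(t: int = 0) -> tuple[int, int, int]:
    """Member t of an infinite family of solutions to x**3 + y**4 == z**31."""
    # exponents chosen so that 3*(72 + 124*t) == 4*(54 + 93*t) == 216 + 372*t
    # and 31*(7 + 12*t) == 217 + 372*t, one more than the common exponent
    return 2 ** (72 + 124 * t), 2 ** (54 + 93 * t), 2 ** (7 + 12 * t)
```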
Let $A B C$ be a triangle with circumcircle $\Gamma$ and incentre $I .$ Let the internal angle bisectors of $\angle A, \angle B$ and $\angle C$ meet $\Gamma$ in $A^{\prime}, B^{\prime}$ and $C^{\prime}$ respectively. Let $B^{\prime} C^{\prime}$ intersect $A A^{\prime}$ in $P$ and $A C$ in $Q,$ and let $B B^{\prime}$ intersect $A C$ in $R$. Suppose the quadrilateral $PIRQ$ is a kite; that is, $I P=I R$ and $Q P=Q R$ Prove that $A B C$ is an equilateral triangle.
Show that there are infinitely many positive real numbers $a$ which are not integers such that $a(a-3\{a\})$ is an integer. (Here $\{a\}$ denotes the fractional part of $a$. For example $\{1.5\}=0.5$; $\{-3.4\}=0.6$.)
The length of each side of a convex quadrilateral $A B C D$ is a positive integer. If the sum of the lengths of any three sides is divisible by the length of the remaining side then prove that some two sides of the quadrilateral have the same length.
Let $P(x)=x^{2}+a x+b$ be a quadratic polynomial where $a$ is real and $b$ is rational. Suppose $P(0)^{2}, P(1)^{2}, P(2)^{2}$ are integers. Prove that $a$ and $b$ are integers.
Two circles $\Gamma$ and $\Sigma$ intersect at two distinct points $A$ and $B$. A line through $B$ intersects $\Gamma$ and $\Sigma$ again at $C$ and $D,$ respectively. Suppose that $C A=$ $C D$. Show that the centre of $\Sigma$ lies on $\Gamma$.
How many integers $m$ satisfy both of the following properties: (i) $1 \leq m \leq 5000$; (ii) $[\sqrt{m}]=[\sqrt{m+125}]$? (Here $[x]$ denotes the largest integer not exceeding $x$, for any real number $x$.)
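The floor-of-square-root condition is a one-liner with `math.isqrt`, which computes exactly $[\sqrt{m}]$ for non-negative integers (the function name is ours):

```python
from math import isqrt

def count_stable_m(limit: int = 5000, gap: int = 125) -> int:
    """Count 1 <= m <= limit with floor(sqrt(m)) == floor(sqrt(m + gap))."""
    return sum(1 for m in range(1, limit + 1) if isqrt(m) == isqrt(m + gap))
```

Both $m$ and $m+125$ must land in the same block $[k^2, (k+1)^2)$, which needs $2k \ge 125$; summing $2k-124$ over $k = 63, \dots, 70$ reproduces the count.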
Regional Math Olympiad (RMO) 2015 - Mumbai Region
Let $A B C D$ be a convex quadrilateral with $A B=a, B C=b, C D=c$ and $D A=d$. Suppose $$ a^{2}+b^{2}+c^{2}+d^{2}=a b+b c+c d+d a $$ and the area of $A B C D$ is 60 square units. If the length of one of the diagonals is 30 units, determine the length of the other diagonal.
Determine the number of $3$-digit numbers in base $10$ having at least one $5$ and at most one $3$.
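A string-based brute force (names ours) settles the count, which one can also reach as $252 - 3$: there are $900 - 8 \cdot 9 \cdot 9 = 252$ three-digit numbers with at least one $5$, and only $335, 353, 533$ among them carry two or more $3$s:

```python
def count_with_5_at_most_one_3() -> int:
    """3-digit numbers containing at least one digit 5 and at most one digit 3."""
    count = 0
    for n in range(100, 1000):
        s = str(n)
        if '5' in s and s.count('3') <= 1:
            count += 1
    return count
```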
Let $P(x)$ be a non-constant polynomial whose coefficients are positive integers. If $P(n)$ divides $P(P(n)-2015)$ for every natural number $n,$ prove that $P(-2015)=0 .$
Find all three digit natural numbers of the form $(abc)_{10}$ such that $(abc)_{10}$, $(bca)_{10}$ and $(cab)_{10}$ are in geometric progression. (Here $(abc)_{10}$ is representation in base $10$.)
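All three rotations must themselves be three-digit numbers, so every digit is nonzero; that cuts the search space to $9^3$, and the geometric-progression test is cleanest in the cross-multiplied form $y^2 = xz$ (the function name is ours):

```python
def geometric_rotations() -> list[int]:
    """3-digit (abc) such that (abc), (bca), (cab) form a geometric progression."""
    sols = []
    for a in range(1, 10):
        for b in range(1, 10):       # all three rotations must be 3-digit,
            for c in range(1, 10):   # so every digit is nonzero
                x = 100 * a + 10 * b + c
                y = 100 * b + 10 * c + a
                z = 100 * c + 10 * a + b
                if y * y == x * z:   # GP condition without division
                    sols.append(x)
    return sols
```

Besides the nine constant-digit numbers $111, \dots, 999$, the nontrivial hits include $432$ (giving $432, 324, 243$) and $864$ (giving $864, 648, 486$), both with ratio $3/4$.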
Let $A B C$ be a right-angled triangle with $\angle B=90^{\circ}$ and let $B D$ be the altitude from $B$ on to $A C .$ Draw $D E \perp A B$ and $D F \perp B C .$ Let $P, Q, R$ and $S$ be respectively the incentres of triangle $D F C, D B F, D E B$ and $D A E .$ Suppose $S, R, Q$ are collinear. Prove that $P, Q, R$, $D$ lie on a circle.
Let $S=\{1,2, \ldots, n\}$ and let $T$ be the set of all ordered triples of subsets of $S,$ say $\left(A_{1}, A_{2}, A_{3}\right)$, such that $A_{1} \cup A_{2} \cup A_{3}=S .$ Determine, in terms of $n$, $$ \sum_{\left(A_{1}, A_{2}, A_{3}\right) \in T}\left|A_{1} \cap A_{2} \cap A_{3}\right| $$ where $|X|$ denotes the number of elements in the set $X$. (For example, if $S=\{1,2,3\}$ and $A_{1}=\{1,2\}, A_{2}=\{2,3\}, A_{3}=\{3\},$ then one of the elements of $T$ is $(\{1,2\},\{2,3\},\{3\})$.)
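The per-element counting argument (each element independently picks one of the $7$ nonempty membership patterns across $(A_1,A_2,A_3)$, and lies in the triple intersection in exactly $1$ of them) suggests the closed form $n \cdot 7^{n-1}$; an exhaustive check for small $n$ (names ours) supports it:

```python
from itertools import product

def brute_sum(n: int) -> int:
    """Sum of |A1 ∩ A2 ∩ A3| over all triples of subsets of {1..n} with union {1..n}."""
    total = 0
    masks = list(product([0, 1], repeat=n))      # subsets as indicator tuples
    for A1 in masks:
        for A2 in masks:
            for A3 in masks:
                if all(a | b | c for a, b, c in zip(A1, A2, A3)):  # union is S
                    total += sum(a & b & c for a, b, c in zip(A1, A2, A3))
    return total
```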
Let $x, y, z$ be real numbers such that $x^{2}+y^{2}+z^{2}-2 x y z=1$. Prove that $$ (1+x)(1+y)(1+z) \leq 4+4 x y z $$
Let $A B C$ be a triangle and let $A D$ be the perpendicular from $A$ on to $B C .$ Let $K, L, M$ be points on $A D$ such that $A K=K L=L M=M D$. If the sum of the areas of the shaded regions is equal to the sum of the areas of the unshaded regions, prove that $B D=D C$
Let $a_{1}, a_{2}, \ldots, a_{2 n}$ be an arithmetic progression of positive real numbers with common difference $d$. Let (i) $a_{1}^{2}+a_{3}^{2}+\cdots+a_{2 n-1}^{2}=x$; (ii) $a_{2}^{2}+a_{4}^{2}+\cdots+a_{2 n}^{2}=y$; and (iii) $a_{n}+a_{n+1}=z$. Express $d$ in terms of $x, y, z, n .$
Suppose for some positive integers $r$ and $s$, the digits of $2^{r}$ are obtained by permuting the digits of $2^{s}$ in decimal expansion. Prove that $r=s$.
Is it possible to write the numbers $17,18,19, \ldots, 32$ in a $4 \times 4$ grid of unit squares, with one number in each square, such that the product of the numbers in each of the $2 \times 2$ sub-grids $AMRG$, $GRND$, $MBHR$ and $RHCN$ is divisible by $16$?
Let $A B C$ be an acute-angled triangle and let $H$ be its ortho-centre. For any point $P$ on the circum-circle of triangle $A B C$, let $Q$ be the point of intersection of the line $B H$ with the line $A P$. Show that there is a unique point $X$ on the circum-circle of $A B C$ such that for every point $P \neq A, B$. the circum-circle of $H Q P$ pass through $X$.
Let $x_{1}, x_{2}, \ldots, x_{2014}$ be positive real numbers such that $\sum_{j=1}^{2014} x_{j}=1 .$ Determine with proof the smallest constant $K$ such that $$ K \sum_{j=1}^{2014} \frac{x_{j}^{2}}{1-x_{j}} \geq 1 $$
In an acute-angled triangle $A B C, \angle A B C$ is the largest angle. The perpendicular bisectors of $B C$ and $B A$ intersect $A C$ at $X$ and $Y$ respectively. Prove that circumcentre of triangle $A B C$ is incentre of triangle $B X Y$.
Let $x, y, z$ be positive real numbers. Prove that $$ \frac{y^{2}+z^{2}}{x}+\frac{z^{2}+x^{2}}{y}+\frac{x^{2}+y^{2}}{z} \geq 2(x+y+z) $$
Find all pairs $(x, y)$ of positive integers such that $2x+7y$ divides $7x+2y$.
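A quick scan (names ours) points at the expected structure: writing $7x+2y = k(2x+7y)$ forces $x(7-2k) = y(7k-2)$, so $x/y \in \{1, 4, 19\}$ for $k = 1, 2, 3$, and the search confirms no other ratios occur in range:

```python
def divisible_pairs(limit: int = 120) -> list[tuple[int, int]]:
    """All 1 <= x, y <= limit with (2x + 7y) dividing (7x + 2y)."""
    return [
        (x, y)
        for x in range(1, limit + 1)
        for y in range(1, limit + 1)
        if (7 * x + 2 * y) % (2 * x + 7 * y) == 0
    ]
```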
For any positive integer $n>1,$ let $P(n)$ denote the largest prime not exceeding $n$. Let $N(n)$ denote the next prime larger than $P(n)$. (For example $P(10)=7$ and $N(10)=11,$ while $P(11)=11$ and $N(11)=13 .)$ If $n+1$ is a prime number, prove that the value of the sum $$ \frac{1}{P(2) N(2)}+\frac{1}{P(3) N(3)}+\frac{1}{P(4) N(4)}+\cdots+\frac{1}{P(n) N(n)}=\frac{n-1}{2 n+2} $$
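The claimed closed form comes from telescoping: between consecutive primes $p < q$ there are $q-p$ integers $k$ with $P(k)=p$, contributing $(q-p)/(pq) = \frac1p - \frac1q$. Exact-arithmetic spot checks with `fractions.Fraction` (helper names ours) for several $n$ with $n+1$ prime agree:

```python
from fractions import Fraction

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def P(n: int) -> int:
    """Largest prime not exceeding n."""
    while not is_prime(n):
        n -= 1
    return n

def N(n: int) -> int:
    """Next prime strictly larger than P(n)."""
    m = P(n) + 1
    while not is_prime(m):
        m += 1
    return m

def lhs(n: int) -> Fraction:
    """Exact value of sum_{k=2}^{n} 1 / (P(k) * N(k))."""
    return sum(Fraction(1, P(k) * N(k)) for k in range(2, n + 1))
```

Each right-hand side equals $\frac{n-1}{2n+2} = \frac12 - \frac1{n+1}$, which is what the telescoping leaves when $n+1$ is prime.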
Let $A B C$ be a triangle with $A B>A C$. Let $P$ be a point on the line $A B$ beyond $A$ such that $A P+P C=A B .$ Let $M$ be the mid-point of $B C$ and let $Q$ be the point on the side $A B$ such that $C Q \perp A M .$ Prove that $B Q=2 A P$.
Let $n$ be an odd positive integer and suppose that each square of an $n \times n$ grid is arbitrarily filled with either by 1 or by $-1 .$ Let $r_{j}$ and $c_{k}$ denote the product of all numbers in $j$ -th row and $k$ -th column respectively, $1 \leq j, k \leq n$. Prove that $$ \sum_{j=1}^{n} r_{j}+\sum_{k=1}^{n} c_{k} \neq 0 $$
Let $A B C$ be an acute-angled triangle and suppose $\angle A B C$ is the largest angle of the triangle. Let $R$ be its circumcentre. Suppose the circumcircle of triangle $A R B$ cuts $A C$ again in $X .$ Prove that $R X$ is pependicular to $B C$
Find all real numbers $x$ and $y$ such that $$ x^{2}+2 y^{2}+\frac{1}{2} \leq x(2 y+1) $$
Prove that there does not exist any positive integer $n<2310$ such that $n(2310-n)$ is a multiple of $2310$.
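Since $2310 = 2\cdot3\cdot5\cdot7\cdot11$ is squarefree and $n(2310-n) \equiv -n^2 \pmod{2310}$, divisibility would force $2310 \mid n$, impossible for $0 < n < 2310$; an exhaustive check (name ours) agrees:

```python
def multiples_in_range(m: int = 2310) -> list[int]:
    """All 0 < n < m with n * (m - n) divisible by m (expected: none for 2310)."""
    return [n for n in range(1, m) if (n * (m - n)) % m == 0]
```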
Find all positive real numbers $x, y, z$ such that $$ 2 x-2 y+\frac{1}{z}=\frac{1}{2014}, \quad 2 y-2 z+\frac{1}{x}=\frac{1}{2014}, \quad 2 z-2 x+\frac{1}{y}=\frac{1}{2014} $$
Let $A B C$ be a triangle. Let $X$ be on the segment $B C$ such that $A B=A X$. Let $A X$ meet the circumcircle $\Gamma$ of triangle $A B C$ again at $D .$ Show that the circumcentre of $\triangle B D X$ lies on $\Gamma$.
For any natural number $n$, let $S(n)$ denote the sum of the digits of $n$. Find the number of all 3 -digit numbers $n$ such that $S(S(n))=2$.
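Here $S(S(n)) = 2$ just means $S(n) \in \{2, 11, 20\}$, since $S(n) \le 27$ for three digits; a brute force (name ours) matches the case-by-case stars-and-bars count $3 + 61 + 36$:

```python
def count_digit_sum_condition() -> int:
    """3-digit n with S(S(n)) == 2, S = decimal digit sum."""
    digit_sum = lambda n: sum(int(c) for c in str(n))
    return sum(1 for n in range(100, 1000) if digit_sum(digit_sum(n)) == 2)
```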
Let $ABCD$ be an isosceles trapezium having an incircle; let $AB$ and $CD$ be the parallel sides and let $CE$ be the perpendicular from $C$ on to $AB$. Prove that $CE$ is equal to the geometric mean of $AB$ and $CD$.
If $x$ and $y$ are positive real numbers, prove that $$ 4 x^{4}+4 y^{3}+5 x^{2}+y+1 \geq 12 x y $$
Determine all pairs $m>n$ of positive integers such that $$ 1=\text{gcd}(n+1, m+1)=\text{gcd}(n+2, m+2)=\cdots=\text{gcd}(m, 2 m-n) $$
What is the minimal area of a right-angled triangle whose inradius is $1$ unit?
Let $A B C$ be an acute-angled triangle and let $I$ be its incentre. Let the incircle of triangle $A B C$ touch $B C$ in $D .$ The incircle of the triangle $A B D$ touches $A B$ in $E$; the incircle of the triangle $A C D$ touches $B C$ in $F$. Prove that $B, E, I, F$ are concyclic.
In the adjacent figure, can the numbers $1,2,3,4, \cdots, 18$ be placed, one on each line segment, such that the sum of the numbers on the three line segments meeting at each point is divisible by $3 ?$
Three positive real numbers $a, b, c$ are such that $a^{2}+5 b^{2}+4 c^{2}-4 a b-4 b c=0 .$ Can $a, b, c$ be the lengths of the sides of a triangle? Justify your answer.
The roots of the equation $$ x^{3}-3 a x^{2}+b x+18 c=0 $$ form a non-constant arithmetic progression and the roots of the equation $$ x^{3}+b x^{2}+x-c^{3}=0 $$ form a non-constant geometric progression. Given that $a, b, c$ are real numbers, find all positive integral values of $a$ and $b$.
Let $A B C$ be an acute-angled triangle in which $\angle A B C$ is the largest angle. Let $O$ be its circumcentre. The perpendicular bisectors of $B C$ and $A B$ meet $A C$ at $X$ and $Y$ respectively. The internal bisectors of $\angle A X B$ and $\angle B Y C$ meet $A B$ and $B C$ at $D$ and $E$ respectively. Prove that $B O$ is perpendicular to $A C$ if $D E$ is parallel to $A C$.
A person moves in the $x-y$ plane moving along points with integer co-ordinates $x$ and $y$ only. When she is at point $(x, y),$ she takes a step based on the following rules: (a) if $x+y$ is even she moves to either $(x+1, y)$ or $(x+1, y+1)$; (b) if $x+y$ is odd she moves to either $(x, y+1)$ or $(x+1, y+1)$. How many distinct paths can she take to go from (0,0) to (8,8) given that she took exactly three steps to the right $((x, y)$ to $(x+1, y)) ?$
Let $a, b, c$ be positive numbers such that $$ \frac{1}{1+a}+\frac{1}{1+b}+\frac{1}{1+c} \leq 1 $$ Prove that $\left(1+a^{2}\right)\left(1+b^{2}\right)\left(1+c^{2}\right) \geq 125 .$ When does the equality hold?
Let $D, E, F$ be the points of contact of the incircle of an acute-angled triangle $A B C$ with $B C, C A, A B$ respectively. Let $I_{1}, I_{2}, I_{3}$ be the incentres of the triangles $A F E, B D F, C E D$ respectively. Prove that the lines $I_{1} D, I_{2} E, I_{3} F$ are concurrent.
CRMO 2013 - Paper 1
Let $A B C$ be an acute-angled triangle. The circle $\Gamma$ with $B C$ as diameter intersects $A B$ and $A C$ again at $P$ and $Q,$ respectively. Determine $\angle B A C$ given that the orthocentre of triangle $A P Q$ lies on $\Gamma$.
Let $f(x)=x^{3}+a x^{2}+b x+c$ and $g(x)=x^{3}+b x^{2}+c x+a,$ where $a, b, c$ are integers with $c \neq 0$. Suppose that the following conditions hold: (a) $f(1)=0$; (b) the roots of $g(x)=0$ are the squares of the roots of $f(x)=0$. Find the value of $a^{2013}+b^{2013}+c^{2013}$.
Find all primes $p$ and $q$ such that $p$ divides $q^{2}-4$ and $q$ divides $p^{2}-1$.
Find the number of 10-tuples $\left(a_{1}, a_{2}, \ldots, a_{10}\right)$ of integers such that $\left|a_{1}\right| \leq 1$ and $$ a_{1}^{2}+a_{2}^{2}+a_{3}^{2}+\cdots+a_{10}^{2}-a_{1} a_{2}-a_{2} a_{3}-a_{3} a_{4}-\cdots-a_{9} a_{10}-a_{10} a_{1}=2 $$.
Let $A B C$ be a triangle with $\angle A=90^{\circ}$ and $A B=A C .$ Let $D$ and $E$ be points on the segment $B C$ such that $B D: D E: E C=3: 5: 4$. Prove that $\angle D A E=45^{\circ} .$
Suppose that $m$ and $n$ are integers such that both the quadratic equations $x^{2}+m x-n=0$ and $x^{2}-m x+n=0$ have integer roots. Prove that $n$ is divisible by 6.
Prove that there do not exist natural numbers $x$ and $y,$ with $x>1,$ such that $$ \frac{x^{7}-1}{x-1}=y^{5}+1 $$.
In a triangle $A B C, A D$ is the altitude from $A,$ and $H$ is the orthocentre. Let $K$ be the centre of the circle passing through $D$ and tangent to $B H$ at $H .$ Prove that the line $D K$ bisects $A C .$
Consider the expression $$ 2013^{2}+2014^{2}+2015^{2}+\cdots+n^{2} $$ Prove that there exists a natural number $n>2013$ for which one can change a suitable number of plus signs to minus signs in the above expression to make the resulting expression equal 9999.
Let $A B C$ be a triangle with $\angle A=90^{\circ}$ and $A B=A C .$ Let $D$ and $E$ be points on the segment $B C$ such that $B D: D E: E C=1: 2: \sqrt{3}$. Prove that $\angle D A E=45^{\circ} .$
For positive integers $n$, define $A(n)=\frac{(2n)!}{(n!)^{2}}$. Determine the sets of positive integers $n$ for which (a) $A(n)$ is an even number, (b) $A(n)$ is a multiple of $4$.
Let $n \geq 3$ be a natural number and let $P$ be a polygon with $n$ sides. Let $a_{1}, a_{2}, \ldots, a_{n}$ be the lengths of the sides of $P$ and let $p$ be its perimeter. Prove that $$ \frac{a_{1}}{p-a_{1}}+\frac{a_{2}}{p-a_{2}}+\cdots+\frac{a_{n}}{p-a_{n}}<2. $$
For a natural number $n$, let $T(n)$ denote the number of ways we can place $n$ objects of weights $1,2, \ldots, n$ on a balance such that the sum of the weights in each pan is the same. Prove that $T(100)>T(99)$
Prove that the polynomial $f(x)=x^{4}+26 x^{3}+56 x^{2}+78 x+1989$ cannot be expressed as a product $f(x)=p(x) q(x)$, where $p(x), q(x)$ are both polynomials with integral coefficients and with degree less than 4.
Find all 4-tuples $(a, b, c, d)$ of natural numbers with $a \leq b \leq c$ and $a !+b !+c !=3^{d}$.
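An editorial brute-force sketch for this problem; the search bound of 12 is a heuristic cutoff, not a proof that no larger solutions exist.

```python
from math import factorial

# Search (a, b, c) with a <= b <= c up to a heuristic bound and test whether
# a! + b! + c! is a power of 3.
solutions = []
for a in range(1, 13):
    for b in range(a, 13):
        for c in range(b, 13):
            s = factorial(a) + factorial(b) + factorial(c)
            d = 0
            while s % 3 == 0:      # strip factors of 3
                s //= 3
                d += 1
            if s == 1:             # the sum was exactly 3^d
                solutions.append((a, b, c, d))
print(solutions)  # → [(1, 1, 1, 1), (1, 2, 3, 2), (1, 2, 4, 3)]
```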
In an acute-angled triangle $A B C$ with $AB$ < $AC$, the circle $\Gamma$ touches $A B$ at $B$ and passes through $C$ intersecting $A C$ again at $D $. Prove that the orthocentre of triangle $A B D$ lies on $\Gamma$ if and only if it lies on the perpendicular bisector of $B C$.
A polynomial is called a Fermat polynomial if it can be written as the sum of the squares of two polynomials with integer coefficients. Suppose that $f(x)$ is a Fermat polynomial such that $f(0)=1000 .$ Prove that $f(x)+2 x$ is not a Fermat polynomial.
Let $A B C$ be a triangle which is not right-angled. Define a sequence of triangles $A_{i} B_{i} C_{i}$, with $i \geq 0$, as follows: $A_{0} B_{0} C_{0}$ is the triangle $A B C ;$ and, for $i \geq 0, A_{i+1}, B_{i+1}, C_{i+1}$ are the reflections of the orthocentre of triangle $A_{i} B_{i} C_{i}$ in the sides $B_{i} C_{i}, C_{i} A_{i}, A_{i} B_{i},$ respectively. Assume that $\angle A_{m}=\angle A_{n}$ for some distinct natural numbers $m, n .$ Prove that $\angle A=60^{\circ} .$
Let $n \geq 4$ be a natural number. Let $A_{1} A_{2} \cdots A_{n}$ be a regular polygon and $X=\{1,2, \ldots, n\}$. A subset $\left\{i_{1}, i_{2}, \ldots, i_{k}\right\}$ of $X,$ with $k \geq 3$ and $i_{1}$ < $i_{2}<\cdots$ < $i_{k},$ is called a good subset if the angles of the polygon $A_{i_{1}} A_{i_{2}} \cdots A_{i_{k}},$ when arranged in the increasing order, are in an arithmetic progression. If $n$ is a prime, show that a proper good subset of $X$ contains exactly four elements.
Let $\Gamma$ be a circle with centre $O .$ Let $\Lambda$ be another circle passing through $O$ and intersecting $\Gamma$ at points $A$ and $B$. A diameter $C D$ of $\Gamma$ intersects $\Lambda$ at a point $P$ different from $O .$ Prove that $$ \angle A P C=\angle B P D $$.
Determine the smallest prime that does not divide any five-digit number whose digits are in a strictly increasing order.
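An editorial verification sketch: every five-digit number with strictly increasing digits is a 5-element subset of $\{1,\ldots,9\}$ read in order, so the whole search space has only $\binom{9}{5}=126$ numbers.

```python
from itertools import combinations

# All five-digit numbers with strictly increasing digits.
numbers = [int("".join(c)) for c in combinations("123456789", 5)]

def is_prime(p: int) -> bool:
    return p > 1 and all(p % q for q in range(2, int(p**0.5) + 1))

# Smallest prime dividing none of the 126 candidates.
p = 2
while not is_prime(p) or any(n % p == 0 for n in numbers):
    p += 1
print(p)  # → 11
```

This matches the pen-and-paper argument: the alternating digit sum of such a number always lies in $[3,7]$, so it is never divisible by $11$.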
Given real numbers $a, b, c, d, e>1$ prove that $$ \frac{a^{2}}{c-1}+\frac{b^{2}}{d-1}+\frac{c^{2}}{e-1}+\frac{d^{2}}{a-1}+\frac{e^{2}}{b-1} \geq 20 $$.
Let $x$ be a non-zero real number such that $x^{4}+\frac{1}{x^{4}}$ and $x^{5}+\frac{1}{x^{5}}$ are both rational numbers. Prove that $x+\frac{1}{x}$ is a rational number.
In a triangle $A B C,$ let $H$ denote its orthocentre. Let $P$ be the reflection of $A$ with respect to $B C .$ The circumcircle of triangle $A B P$ intersects the line $B H$ again at $Q,$ and the circumcircle of triangle $A C P$ intersects the line $C H$ again at $R$. Prove that $H$ is the incentre of triangle $P Q R$.
Suppose that the vertices of a regular polygon of 20 sides are coloured with three colours red, blue and green - such that there are exactly three red vertices. Prove that there are three vertices $A, B, C$ of the polygon having the same colour such that triangle $A B C$ is isosceles.
RMO 2013 - Mumbai Region
Let $A B C$ be an isosceles triangle with $A B=A C$ and let $\Gamma$ denote its circumcircle. A point $D$ is on arc $A B$ of $\Gamma$ not containing $C .$ A point $E$ is on arc $A C$ of $\Gamma$ not containing $B$. If $A D=C E$ prove that $B E$ is parallel to $A D .$
Find all triples $(p, q, r)$ of primes such that $p q=r+1$ and $2\left(p^{2}+q^{2}\right)=r^{2}+1 .$
A finite non-empty set of integers is called 3-good if the sum of its elements is divisible by $3$. Find the number of non-empty 3-good subsets of $\{0,1,2, \ldots, 9\}$.
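An editorial brute-force check over all $2^{10}$ subsets (not part of the original paper):

```python
from itertools import combinations

# Exhaustively test every non-empty subset of {0, 1, ..., 9}.
count = sum(1
            for r in range(1, 11)
            for sub in combinations(range(10), r)
            if sum(sub) % 3 == 0)
print(count)  # → 351
```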
In a triangle $A B C,$ points $D$ and $E$ are on segments $B C$ and $A C$ such that $B D=3 D C$ and $A E=4 E C .$ Point $P$ is on line $E D$ such that $D$ is the midpoint of segment $E P .$ Lines $A P$ and $B C$ intersect at point $S .$ Find the ratio $B S / S D$.
Let $a_{1}, b_{1}, c_{1}$ be natural numbers. We define $$ a_{2}=\gcd\left(b_{1}, c_{1}\right), \quad b_{2}=\gcd\left(c_{1}, a_{1}\right), \quad c_{2}=\gcd\left(a_{1}, b_{1}\right) $$ and $$ a_{3}=\operatorname{lcm}\left(b_{2}, c_{2}\right), \quad b_{3}=\operatorname{lcm}\left(c_{2}, a_{2}\right), \quad c_{3}=\operatorname{lcm}\left(a_{2}, b_{2}\right) $$ Show that $\gcd\left(b_{3}, c_{3}\right)=a_{2}$.
Let $P(x)=x^{3}+a x^{2}+b$ and $Q(x)=x^{3}+b x+a,$ where $a, b$ are non-zero real numbers. Suppose that the roots of the equation $P(x)=0$ are the reciprocals of the roots of the equation $Q(x)=0$. Prove that $a$ and $b$ are integers. Find the greatest common divisor of $P(2013 !+1)$ and $Q(2013 !+1)$.
Let $A B C D$ be a unit square. Draw a quadrant of a circle with $A$ as centre and $B, D$ as end points of the arc. Similarly, draw a quadrant of a circle with $B$ as centre and $A, C$ as end points of the arc. Inscribe a circle $\Gamma$ touching the arc $A C$ internally, the arc $B D$ internally and also touching the side $A B .$ Find the radius of the circle $\Gamma$.
Let $a, b, c$ be positive integers such that $a$ divides $b^{4}, b$ divides $c^{4}$ and $c$ divides $a^{4}$. Prove that $a b c$ divides $(a+b+c)^{21}$
Let $a$ and $b$ be positive real numbers such that $a+b=1$. Prove that $$ a^{a} b^{b}+a^{b} b^{a} \leq 1 $$
Let $X=\{1,2,3, \ldots, 12\} .$ Find the number of pairs $\{A, B\}$ such that $A \subseteq X$, $B \subseteq X, A \neq B$ and $A \cap B=\{2,3,5,7,8\} .$
Let $A B C$ be a triangle. Let $D, E$ be points on the segment $B C$ such that $B D=$ $D E=E C .$ Let $F$ be the mid-point of $A C .$ Let $B F$ intersect $A D$ in $P$ and $A E$ in $Q$ respectively. Determine $B P / P Q$.
Show that for all real numbers $x, y, z$ such that $x+y+z=0$ and $x y+y z+z x=-3$, the expression $x^{3} y+y^{3} z+z^{3} x$ is a constant.
Let $A B C D$ be a unit square. Draw a quadrant of a circle with $A$ as centre and $B, D$ as end points of the arc. Similarly, draw a quadrant of a circle with $B$ as centre and $A, C$ as end points of the arc. Inscribe a circle $\Gamma$ touching the arcs $A C$ and $B D$ both externally and also touching the side $C D .$ Find the radius of the circle $\Gamma$.
Let $a, b, c$ be positive integers such that $a$ divides $b^{5}, b$ divides $c^{5}$ and $c$ divides $a^{5}$. Prove that $a b c$ divides $(a+b+c)^{31}$.
Let $X=\{1,2,3, \ldots, 10\} .$ Find the number of pairs $\{A, B\}$ such that $A \subseteq X$, $B \subseteq X, A \neq B$ and $A \cap B=\{5,7,8\}$.
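An editorial brute-force check: each of the $7$ elements outside $\{5,7,8\}$ may lie in $A$ only, $B$ only, or neither, giving $3^7$ ordered pairs including the one with $A=B$; the bitmask enumeration below confirms this.

```python
# Bitmask brute force: bit i-1 of a mask marks the presence of element i.
REQUIRED = {5, 7, 8}
req = sum(1 << (e - 1) for e in REQUIRED)

ordered = 0
for A in range(1 << 10):
    if A & req != req:
        continue                      # A must contain {5, 7, 8}
    for B in range(1 << 10):
        # A ∩ B must be exactly {5, 7, 8}, and A ≠ B
        if B & req == req and A & B == req and A != B:
            ordered += 1
print(ordered // 2)  # each unordered pair {A, B} counted twice → 1093
```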
Let $A B C$ be a triangle. Let $D, E$ be points on the segment $B C$ such that $B D=$ $D E=E C .$ Let $F$ be the mid-point of $A C .$ Let $B F$ intersect $A D$ in $P$ and $A E$ in $Q$ respectively. Determine the ratio of the area of the triangle $A P Q$ to that of the quadrilateral $P D E Q$.
Find all positive integers $n$ such that $3^{2 n}+3 n^{2}+7$ is a perfect square.
Let $A B C D$ be a unit square. Draw a quadrant of a circle with $A$ as centre and $B, D$ as end points of the arc. Similarly, draw a quadrant of a circle with $B$ as centre and $A, C$ as end points of the arc. Inscribe a circle $\Gamma$ touching the arc $A C$ externally, the arc $B D$ internally and also touching the side $A D .$ Find the radius of the circle $\Gamma$
Let $a, b, c$ be positive integers such that $a$ divides $b^{2}, b$ divides $c^{2}$ and $c$ divides $a^{2}$. Prove that $a b c$ divides $(a+b+c)^{7}$.
Let $a$ and $b$ be positive real numbers such that $a+b=1 .$ Prove that $$ a^{a} b^{b}+a^{b} b^{a} \leq 1 $$
Let $X=\{1,2,3, \ldots, 11\} .$ Find the number of pairs $\{A, B\}$ such that $A \subseteq X$, $B \subseteq X, A \neq B$ and $A \cap B=\{4,5,7,8,9,10\}$.
Let $A B C$ be a triangle. Let $E$ be a point on the segment $B C$ such that $B E=2 E C .$ Let $F$ be the mid-point of $A C .$ Let $B F$ intersect $A E$ in $Q .$ Determine $B Q / Q F$.
Solve the system of equations for positive real numbers: $$ \frac{1}{x y}=\frac{x}{z}+1, \quad \frac{1}{y z}=\frac{y}{x}+1, \quad \frac{1}{z x}=\frac{z}{y}+1 $$
Indian Regional Math Olympiad (RMO) 2007
Let $ABC$ be an acute-angled triangle; $AD$ be the bisector of $\angle BAC$ with $D$ on $BC$; and $BE$ be the altitude from $B$ on $AC$. Show that $\angle CED > 45^{\circ}$.
Let $a, b, c$ be three natural numbers such that $a\leq b\leq c$ and $\gcd(c-a,c-b)=1$. Suppose that there exists an integer $d$ such that $a+d, b+d, c+d$ form the sides of a right-angled triangle. Prove that there exist integers $l, m$ such that $c+d=l^2+m^2$.
Find all pairs $(a,b)$ of real numbers such that whenever $\alpha$ is a root of $x^2+ax+b=0$, $\alpha^2-1$ is also a root of the equation.
How many $6$-digit numbers are there such that: (a) the digits of each number are from the set $\{1,2,3,\ldots\}$; (b) any digit that appears in the number appears at least twice? (Example: $225252$ is an admissible number, while $222133$ is not.)
A trapezium $ABCD$, in which $AB$ is parallel to $CD$, is inscribed in a circle with centre $O$. Suppose the diagonals $AC$ and $BD$ of the trapezium intersect at $M$ and $OM=2$. (a) If $\angle AMB$ is given, determine with proof the difference between the lengths of the parallel sides. (b) If $\angle AMD$ is given, find the difference between the lengths of the parallel sides.
Prove that: (a) $5 < 5^{\frac{1}{2}}+5^{\frac{1}{3}}+5^{\frac{1}{4}}$; (b) $8 < 8^{\frac{1}{2}}+8^{\frac{1}{3}}+8^{\frac{1}{4}}$; (c) $n < n^{\frac{1}{2}}+n^{\frac{1}{3}}+n^{\frac{1}{4}} $ for all integers greater than or equal to $9$.
Let $ABC$ be an acute-angled triangle and let $D, E, F$ be the feet of perpendiculars from $A,B,C$ respectively to $BC,CA,AB$. Let the perpendiculars from $F$ to $CB, CA, AD, BE$ meet them in $P, Q,M,N$ respectively. Prove that $P, Q,M,N$ are collinear.
Find the least possible value of $a + b$, where $a, b$ are positive integers such that $11$ divides $a + 13b$ and $13$ divides $a + 11b$.
If $a, b, c$ are three positive real numbers, prove that $\frac{a^2+1}{b+c}+\frac{b^2+1}{c+a}+\frac{c^2+1}{a+b} \geq 3 $.
A $6×6$ square is dissected into $9$ rectangles by lines parallel to its sides such that all these rectangles have only integer sides. Prove that there are always two congruent rectangles.
Let $ABCD$ be a quadrilateral in which $AB$ is parallel to $CD$ and perpendicular to $AD$; $AB = 3CD$; and the area of the quadrilateral is $4$. If a circle can be drawn touching all the sides of the quadrilateral, find its radius.
Prove that there are infinitely many positive integers $n$ such that $n(n+ 1)$ can be expressed as a sum of two positive squares in at least two different ways. (Here $a^2+b^2$ and $b^2+a^2$ are considered as the same representation.)
Let $X$ be the set of all positive integers greater than or equal to $8$ and let $f:X \mapsto X $ be a function such that $f(x + y) = f(xy)$ for all $x \geq 4 $, $y \geq 4 $. If $f(8) = 9$, determine $f(9)$.
Let $ABCD$ be a convex quadrilateral; $P, Q,R, S$ be the midpoints of $AB,BC,CD,DA$ respectively such that $\triangle AQR$ and $\triangle CSP$ are equilateral. Prove that $ABCD$ is a rhombus. Determine its angles.
If $x, y$ are integers and $17$ divides both the expressions $x^2-2xy-y^2-5x+7y$ and $x^2-3xy+2y^2+x-y$,then prove that $17$ divides $xy − 12x + 15y$.
If $a, b, c$ are three real numbers such that $|a-b| \geq c,|b-c| \geq a,|c-a| \geq b $, then prove that one of $a, b, c$ is the sum of the other two.
Find the number of all $5$-digit numbers (in base $10$) each of which contains the block $15$ and is divisible by $15$. (For example, $31545$, $34155$ are two such numbers.)
In $\triangle ABC$, let $D$ be the midpoint of $BC$. If $\angle ADB = 45^{\circ} $ and $\angle ACD = 30^{\circ} $, determine $\angle BAD $.
Determine all triples ($a, b, c$) of positive integers such that $a \leq b \leq c $ and $a + b + c + ab + bc + ca = abc + 1$.
Let $a, b, c$ be three positive real numbers such that $a + b + c = 1$. Let $\gamma=\min\{a^3+a^2bc,\ b^3+ab^2c,\ c^3+abc^2\}$. Prove that the roots of the equation $x^2+x+4\gamma=0$ are real.
Consider in the plane a circle $\Gamma$ with center $O$ and a line $l$ not intersecting circle $\Gamma$. Prove that there is a point $Q$ on the perpendicular drawn from $O$ to the line $l$, such that for any point $P$ on the line $l$, $PQ$ represents the length of the tangent from $P$ to the circle $\Gamma$.
Positive integers are written on all the faces of a cube, one on each. At each corner (vertex) of the cube, the product of the numbers on the faces that meet at the corner is written. The sum of the numbers written at all the corners is $2004$. If $T$ denotes the sum of the numbers on all the faces, find all the possible values of $T$.
Let $\alpha$ and $\beta$ be the roots of the quadratic equation $x^2+mx-1=0$, where $m$ is an odd integer. Let $\gamma_n = \alpha^n + \beta^n$, for $n \geq 0$. Prove that for $n \geq 0$, (a) $\gamma_n$ is an integer and (b) $\gcd(\gamma_n, \gamma_{n+1})=1$.
Prove that the number of triples $(A, B, C)$, where $A, B, C$ are subsets of $\{1, 2, \cdots, n\}$ such that $A \cap B \cap C = \emptyset$, $A \cap B \neq \emptyset$, $B \cap C \neq \emptyset$, is $7^n - 2 \cdot 6^n + 5^n$.
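An editorial check of this count for small $n$, assuming the intended conditions are $A \cap B \cap C = \emptyset$, $A \cap B \neq \emptyset$ and $B \cap C \neq \emptyset$ (under which inclusion-exclusion gives $7^n - 2\cdot 6^n + 5^n$):

```python
from itertools import product

# Count triples of subsets of {1, ..., n}, encoded as bitmasks, with
# A ∩ B ∩ C = ∅, A ∩ B ≠ ∅ and B ∩ C ≠ ∅.
def count_triples(n: int) -> int:
    return sum(1
               for A, B, C in product(range(1 << n), repeat=3)
               if A & B & C == 0 and A & B != 0 and B & C != 0)

for n in range(1, 4):
    assert count_triples(n) == 7**n - 2 * 6**n + 5**n
print("closed form matches for n = 1, 2, 3")
```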
Let $ABCD$ be a quadrilateral ; $X$ and $Y$ be the midpoints of $AC$ and $BD$ respectively ; and the lines through $X$ and $Y$ respectively parallel to BD,AC meet in $O$. Let $P, Q,R, S$ be the midpoints of $AB,BC,CD,DA$ respectively. Prove that (a) quadrilaterals $APOS$ and $APXS$ have the same area ; (b) the areas of the quadrilaterals $APOS,BQOP,CROQ,DSOR$ are all equal .
Let $p_1, p_2, p_3, \ldots$ be a sequence of primes defined by $p_1=2$ and, for $n \geq 1$, $p_{n+1}$ is the largest prime factor of $p_1p_2\cdots p_n+1$ (thus $p_2=3$, $p_3=7$). Prove that $p_n \neq 5$ for any $n$.
Let $x$ and $y$ be positive real numbers such that $y^3+y \leq x-x^3$.Prove that (a) $y < x < 1$; and (b). $x^2+y^2 \leq 1$.
Let $ABC$ be a triangle in which $AB = AC$ and $\angle CAB = 90^{\circ}$. Suppose $M$ and $N$ are points on the hypotenuse $BC$ such that $BM^2 +CN^2= MN^2$. Prove that $\angle MAN = 45^{\circ}$.
If $n$ is an integer greater than $7$, prove that $\binom{n}{7} - \lfloor \frac{n}{7} \rfloor$ is divisible by $7$. [Here $\binom{n}{7}$ denotes the number of ways of choosing $7$ objects from among $n$ objects; also, for any real number $x$, $\lfloor x \rfloor$ denotes the greatest integer not exceeding $x$.]
Let $a, b, c$ be three positive real numbers such that $a + b + c = 1$. Prove that among the three numbers $a − ab, b − bc, c − ca$ there is one which is at most $\frac{1}{4}$ and there is one which is at least $\frac{2}{9}$.
Find the number of ordered triples ($x, y, z$) of nonnegative integers satisfying the conditions: (i) $x \leq y \leq z$, (ii) $x+y+z \leq 100$.
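An editorial brute-force count of these triples (the ordering constraint $x \leq y \leq z$ keeps the loops small):

```python
# Direct enumeration of all triples 0 <= x <= y <= z with x + y + z <= 100.
count = sum(1
            for x in range(101)
            for y in range(x, 101)
            for z in range(y, 101)
            if x + y + z <= 100)
print(count)  # → 30787
```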
Suppose $P$ is an interior point of a $\triangle ABC$ such that the ratios $\frac{d(A,BC)}{d(P,BC)}, \frac{d(B,CA)}{d(P,CA)}, \frac{d(C,AB)}{d(P,AB)}$ are all equal. Find the common value of these ratios. [Here $d(X, YZ)$ denotes the perpendicular distance from a point $X$ to the line $YZ$.]
Find all real numbers a for which the equation $x^2+(a-1)x+1=3|x|$ has exactly three distinct real solutions in $x$.
Consider the set $X = \{1, 2, 3, \cdots, 9, 10\}$. Find two disjoint nonempty subsets $A$ and $B$ of $X$ such that (a) $A \cup B = X$; (b) $\prod(A)$ is divisible by $\prod(B)$, where for any finite set of numbers $C$, $\prod(C)$ denotes the product of all numbers in $C$; (c) the quotient $\prod(A)/ \prod(B)$ is as small as possible.
In an acute $\triangle ABC$, points $D, E, F$ are located on the sides $BC, CA, AB$ respectively such that $\frac{CD}{CE}=\frac{CA}{CB}, \frac{AE}{AF}=\frac{AB}{AC}, \frac{BF}{BD}=\frac{BC}{BA}$. Prove that $AD, BE, CF$ are the altitudes of $ABC$.
Solve the following equation for real $x$: $(x^2+x-2)^3+(2x^2-x-1)^3=27(x^2-1)^3$.
Let $a, b, c$ be positive integers such that $a$ divides $b^2$, $b$ divides $c^2$ and $c$ divides $a^2$. Prove that $abc$ divides $(a+b+c)^{7}$.
Suppose the integers $1, 2, 3, \ldots, 10$ are split into two disjoint collections $a_1, a_2, a_3, a_4, a_5$ and $b_1, b_2, b_3, b_4, b_5$ such that $a_1 < a_2 < a_3 < a_4 < a_5$ and $b_1 > b_2 > b_3 > b_4 > b_5$.
(i) Show that the larger number in any pair { $a_i$, $b_j $}, $1 \leq j \leq 5$ is at least $6$.
(ii) Show that $|a_1-b_1|+|a_2-b_2|+|a_3-b_3|+|a_4-b_4|+|a_5-b_5|=25$ for every such partition.
The circumference of a circle is divided into eight arcs by a convex quadrilateral $ABCD$, with four arcs lying inside the quadrilateral and the remaining four lying outside it. The lengths of the arcs lying inside the quadrilateral are denoted by $p, q, r, s$ in counter-clockwise direction starting from some arc. Suppose $p + r = q + s$. Prove that $ABCD$ is a cyclic quadrilateral.
For any natural number $n> 1$, prove the inequality: $\frac{1}{2} \leq \frac{1}{1+n^2} + \frac{2}{2+n^2} + \frac{3}{3+n^2} +.....+ \frac{n}{n+n^2} \leq \frac{1}{2} + \frac{1}{2n}$
Find all integers $a, b, c, d$ satisfying the following relations: (i) $1 \leq a \leq b \leq c \leq d$; (ii) $ab + cd = a + b + c + d + 3$.
Let $BE$ and $CF$ be the altitudes of an acute $\triangle ABC$, with $E$ on $AC$ and $F$ on $AB$. Let $O$ be the point of intersection of $BE$ and $CF$. Take any line $KL$ through $O$ with $K$ on $AB$ and $L$ on $AC$. Suppose $M$ and $N$ are located on $BE$ and $CF$ respectively, such that $KM$ is perpendicular to $BE$ and $LN$ is perpendicular to $CF$.Prove that $FM$ is parallel to $EN$.
Find all primes $p$ and $q$ such that $p^2+7pq+q^2$ is the square of an integer.
Find the number of positive integers $x$ which satisfy the condition $[\frac{x}{99}]=[\frac{x}{101}]$. (Here $[z]$ denotes, for any real $z$, the largest integer not exceeding $z$; e.g. $[\frac{7}{4}]=1$.)
Consider an $n \times n$ array of numbers $(a_{ij})$. Suppose each row consists of the $n$ numbers $1, 2, \ldots, n$ in some order and $a_{ij}=a_{ji}$ for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, n$. If $n$ is odd, prove that the numbers $a_{11}, a_{22}, a_{33}, \ldots, a_{nn}$ are $1, 2, \ldots, n$ in some order.
In a triangle ABC , D is a point on BC such that AD is the internal bisector of $\angle A$. Suppose $\angle B = 2 \angle C$ and $CD = AB$. Prove that $\angle A = 72^{\circ}$ .
If $x, y, z$ are the sides of a triangle, then prove that $|x^2(y-z)+y^2(z-x)+z^2(x-y)| \leq xyz$.
Prove that the product of the first $1000$ positive even integers differs from the product of the first $1000$ odd integers by a multiple of $2001$.
Let $AC$ be a line segment in the plane and $B$ a point between $A$ and $C$. Construct isosceles triangles $PAB$ and $QBC$ on one side of the segment $AC$ such that $\angle APB = \angle BQC = 120^{\circ}$ and an isosceles $\triangle RAC$ on the other side of $AC$ such that $\angle ARC= 120^{\circ}$. Show that $PQR$ is an equilateral triangle.
Solve the equation $y^3=x^3+8x^2-6x+8$,for positive integers $x$ and $y$.
Suppose $x_1, x_2, \ldots, x_n, \ldots$ is a sequence of positive real numbers such that $x_1 \geq x_2 \geq x_3 \geq \cdots \geq x_n \geq \cdots$, and for all $n$, $\frac{x_1}{1}+\frac{x_4}{2}+\frac{x_9}{3}+\cdots+\frac{x_{n^2}}{n}\leq 1$. Show that for all $k$ the following inequality is satisfied: $\frac{x_1}{1}+\frac{x_2}{2}+\frac{x_3}{3}+\cdots+\frac{x_k}{k}\leq 3$.
All the $7$-digit numbers containing each of the digits 1, 2, 3, 4, 5, 6, 7 exactly once, and not divisible by $5$, are arranged in increasing order. Find the $2000$-th number in this list.
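An editorial verification sketch, relying on the fact that `itertools.permutations` of a sorted sequence yields permutations in lexicographic order, so the filtered list is already sorted:

```python
from itertools import permutations

# 7-digit permutations of 1..7 in increasing order; "not divisible by 5"
# is equivalent to "last digit is not 5".
valid = [int("".join(p)) for p in permutations("1234567") if p[-1] != "5"]
print(len(valid), valid[1999])  # → 4320 4315672
```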
The internal bisector of $\angle A$ in a $\triangle ABC$ with $AC > AB$, meets the circumcircle ( $\Gamma$ ) of the triangle in $D$. Join $D$ to the centre $O$ of the circle ( $\Gamma$ ) and suppose $DO$ meets $AC$ in $E$, possibly when extended. Given that $BE$ is perpendicular to $AD$, show that $AO$ is parallel to $BD$.
(i) Consider two positive integers $a$ and $b$ which are such that $a^a b^b$ is divisible by $2000$. What is the least possible value of the product $ab$?
(ii) Consider two positive integers $a$ and $b$ which are such that $a^b b^a$ is divisible by $2000$. What is the least possible value of the product $ab$?
Find all real values of $a$ for which the equation $x^4-2ax^2+x+a^2-a=0$ has all its roots real.
Prove that the inradius of a right-angled triangle with integer sides is an integer.
Find the number of positive integers which divide $10^{999}$ but not $10^{998}$.
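An editorial check by enumerating exponents: every divisor of $10^{999}=2^{999}5^{999}$ is $2^a5^b$ with $0\leq a,b\leq 999$, and it fails to divide $10^{998}$ exactly when $a=999$ or $b=999$.

```python
# Count exponent pairs (a, b) giving divisors of 10^999 but not of 10^998.
count = sum(1 for a in range(1000) for b in range(1000)
            if a == 999 or b == 999)
print(count)  # → 1999
```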
Let $ABCD$ be a square and $M,N$ points on sides $AB,BC$ respectively, such that $\angle MDN =45^{\circ}$. If $R$ is the midpoint of $MN$ show that $RP = RQ$ where $P,Q$ are the points of intersection of $AC$ with the lines $MD, ND$.
If $p, q, r$ are the roots of the cubic equation $x^3-3px^2+3q^2x-r^3=0$, show that $p=q=r$.
If $a, b, c$ are the sides of a triangle prove the following inequality: $\frac{a}{c+a-b}+\frac{b}{a+b-c}+\frac{c}{b+c-a}\geq 3 $.
Find all solutions in integers $m, n$ of the equation $(m-n)^2=\frac{4mn}{m+n-1}$.
Find the number of quadratic polynomials $ax^2+bx+c$ which satisfy the following conditions: (a) $a, b, c$ are distinct; (b) $a, b, c \in \{1, 2, 3, \ldots, 1999\}$; and (c) $x + 1$ divides $ax^2+bx+c$.
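An editorial brute-force count: $x+1$ divides $ax^2+bx+c$ exactly when $a-b+c=0$, i.e. $b=a+c$; then $b>a$ and $b>c$ automatically, so only $a\neq c$ needs a separate check.

```python
# Enumerate (a, c) with b = a + c forced; keep b = a + c inside 1..1999.
count = 0
for a in range(1, 1999):
    for c in range(1, 2000 - a):
        if a != c:
            count += 1
print(count)  # → 1996002
```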
Let $ABCD$ be a convex quadrilateral in which $\angle BAC = 50^{\circ} ,\angle CAD = 60^{\circ} ,\angle CBD = 30^{\circ}$,and $\angle BDC = 25^{\circ}$. If $E$ is the point of intersection of $AC$ and $BD$, find $\angle AEB$.
Let $n$ be a positive integer and $p_1, p_2, \ldots, p_n$ be $n$ prime numbers all larger than $5$ such that $6$ divides $p_{1}^{2}+p_{2}^{2}+\cdots+p_{n}^{2}$. Prove that $6$ divides $n$.
Prove the following inequality for every natural number $n$: $\frac{1}{n+1}(1+\frac{1}{3}+\frac{1}{5}+.....+\frac{1}{2n-1})\geq \frac{1}{n}(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+.........+\frac{1}{2n}) $.
Let $ABC$ be a triangle with $AB = BC$ and $\angle BAC = 30^{\circ}$. Let $A^{'}$ be the reflection of $A$ in the line $BC$; $B^{'}$ be the reflection of $B$ in the line $CA$; $C^{'}$ be the reflection of $C$ in the line $AB$. Show that $A^{'},B^{'},C^{'}$ form the vertices of an equilateral triangle.
Find the minimum possible least common multiple (lcm) of twenty (not necessarily distinct) natural numbers whose sum is $801$.
Given the $7$-element set $A = \{a, b, c, d, e, f, g\}$, find a collection $T$ of $3$-element subsets of $A$ such that each pair of elements from $A$ occurs exactly in one of the subsets of $T$.
Let $P$ be an interior point of a $\triangle ABC$ and let $BP$ and $CP$ meet $AC$ and $AB$ in $E$ and $F$ respectively. If [$BPF$] = $4$, [$BPC$] = $8$ and $[CPE] = 13$, find [$AFPE$]. (Here $[·]$ denotes the area of a triangle or a quadrilateral, as the case may be.)
For each positive integer $n$, define $a_n = 20 + n^2$ and $d_n=\gcd(a_n,a_{n+1})$. Find the set of all values that are taken by $d_n$ and show by examples that each of these values is attained.
Solve for real $x$: $\frac{1}{[x]}+\frac{1}{[2x]}=(x)+\frac{1}{3}$, where $[x]$ is the greatest integer less than or equal to $x$ and $(x) = x − [x]$. [e.g. $[3.4] = 3$ and $(3.4) = 0.4$.]
In a quadrilateral $ABCD$, it is given that $AB$ is parallel to $CD$ and the diagonals $AC$ and $BD$ are perpendicular to each other. Show that $(a) AD.BC \geq AB.CD$, $(b) AD + BC \geq AB+CD$.
Let $x, y$ and $z$ be three distinct real positive numbers. Determine with proof whether or not the three real numbers $|\frac{x}{y}-\frac{y}{x}|,|\frac{y}{z}-\frac{z}{y}|,|\frac{z}{x}-\frac{x}{z}|$ can be the lengths of the sides of a triangle.
Find the number of unordered pairs $\{A,B\}$ (i.e., the pairs $\{A,B\}$ and $\{B,A\}$ are considered to be the same) of subsets of an $n$-element set $X$ which satisfy the conditions: (a) $A \neq B$; (b) $A \cup B =X$. [e.g., if $X = \{a, b, c, d\}$, then $\{\{a, b\}, \{b, c, d\}\}, \{\{a\}, \{b, c, d\}\}, \{\phi,\{a, b, c, d\}\}$ are some of the admissible pairs.]
The sides of a triangle are three consecutive integers and its inradius is four units. Determine the circumradius.
Find all triples $(a, b, c)$ of positive integers such that $(1+ \frac{1}{a})(1+ \frac{1}{b})(1+ \frac{1}{c})=3$.
Solve for real numbers $x$ and $y$: $xy^2=15x^2+17xy+15y^2$, $x^2y=20x^2+3y^2$.
Suppose $N$ is an $n$-digit positive integer such that (a) all the $n$-digits are distinct; and (b) the sum of any three consecutive digits is divisible by $5$. Prove that $n$ is at most $6$. Further, show that starting with any digit one can find a six-digit number with these properties.
Let $ABC$ be a triangle and $h_a$ the altitude through $A$. Prove that $(b+c)^2 \geq a^2 + 4h_{a}^{2}$.(As usual $a, b, c$ denote the sides $BC, CA, AB$ respectively.)
Given any positive integer $n$ show that there are two positive rational numbers $a$ and $b$,$a \neq b$, which are not integers and which are such that $a-b$, $a^2-b^2,a^3-b^3,.....,a^n-b^n$ are all integers.
If $A$ is a fifty-element subset of the set $\{1, 2, 3, . . . , 100\}$ such that no two numbers from $A$ add up to $100$ show that $A$ contains a square.
In triangle $ABC$, $K$ and $L$ are points on the side $BC$ ($K$ being closer to $B$ than $L$) such that $BC \cdot KL = BK \cdot CL$ and $AL$ bisects $\angle KAC$. Show that $AL$ is perpendicular to $AB$.
Call a positive integer $n$ good if there are $n$ integers, positive or negative, and not necessarily distinct, such that their sum and product are both equal to $n$ (e.g. $8$ is good, since $8 = 4\cdot2\cdot1\cdot1\cdot1\cdot1\cdot(-1)\cdot(-1) = 4+2+1+1+1+1+(-1)+(-1)$). Show that integers of the form $4k+1$ and $4l$ are good.
Prove that among any $18$ consecutive three-digit numbers there is at least one number which is divisible by the sum of its digits.
Show that the quadratic equation $ x^2+7x-14(q^2+1)=0 $ , where $q$ is an integer, has no integer root.
Show that for any triangle $ABC$, the following inequality is true: $a^2+b^2+c^2 > \sqrt{3} \max\{|a^2-b^2|, |b^2-c^2|, |c^2-a^2|\}$, where $a, b, c$ are, as usual, the sides of the triangle.
Let $A_1A_2A_3\ldots A_{21}$ be a $21$-sided regular polygon inscribed in a circle with center $O$. How many triangles $A_iA_jA_k$ contain the point $O$ in their interior?
Show that for any real number $x$, $ x^2 \sin {x}+x \cos {x}+x^2+\frac{1}{2} > 0 $.
A leaf is torn from a paperback novel. The sum of the numbers on the remaining pages is $15000$. What are the page numbers on the torn leaf?
In the $\triangle ABC$, the incircle touches the sides $BC, CA$ and $AB$ respectively at $D, E$ and $F$. If the radius of the incircle is $4$ units and if $BD, CE$ and $AF$ are consecutive integers, find the sides of the $\triangle ABC$.
Find all $6$-digit natural numbers $ a_1a_2a_3a_4a_5a_6 $ formed by using the digits $1, 2, 3, 4, 5, 6$ once each such that the number $ a_1a_2a_3...a_k $ is divisible by $k$, for $1 \leq k \leq 6 $.
Solve the system of equations for real $x$ and $y$ : $ 5x (1+\frac{1}{x^2+y^2})=12 $ , $5y(1-\frac{1}{x^2+y^2})=12 $.
Let $A$ be a set of $16$ positive integers with the property that the product of any two distinct numbers of A will not exceed $1994$. Show that there are two numbers $a$ and $b$ in $A$ which are not relatively prime.
Let $AC$ and $BD$ be two chords of a circle with center O such that they intersect at right angles inside the circle at the point $M$. Suppose $K$ and $L$ are the mid-points of the chord $AB$ and $CD$ respectively. Prove that $OKML$ is a parallelogram.
Find the number of all rational numbers $m/n$ such that (a) $0 < m/n < 1$, (b) $m$ and $n$ are relatively prime, (c) $mn = 25!$.
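An editorial verification sketch: $\gcd(m,n)=1$ and $mn=25!$ force each full prime power of $25!$ to go entirely to $m$ or to $n$; with $9$ primes below $25$ that gives $2^9$ splits, half of which have $m<n$.

```python
from math import factorial
from itertools import combinations

N = factorial(25)
primes = [p for p in range(2, 26) if all(p % q for q in range(2, p))]

def component(p: int) -> int:
    """Full prime-power part p^e of N with p^e exactly dividing N."""
    c = 1
    while N % (c * p) == 0:
        c *= p
    return c

# The components are pairwise coprime and multiply to N.
parts = [component(p) for p in primes]
count = 0
for r in range(len(parts) + 1):
    for sub in combinations(parts, r):
        m = 1
        for x in sub:
            m *= x
        if m < N // m:      # keep only the representative with m < n
            count += 1
print(count)  # → 256
```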
If $a, b$ and $c$ are positive real numbers such that $a + b + c = 1$, prove that $(1+a)(1+b)(1+c) \geq 8(1-a)(1-b)(1-c) $.
Let $ABC$ be an acute-angled triangle and $CD$ be the altitude through $C$. If $AB = 8$ and $CD = 6$, find the distance between the mid-points of $AD$ and $BC$.
Prove that the ten's digit of any power of 3 is even. [e.g. the ten's digit of $3^{6} = 729$ is $2$]
Suppose $A_1A_2A_3 \dots A_{20}$ is a $20$-sided regular polygon. How many non-isosceles (scalene) triangles can be formed whose vertices are among the vertices of the polygon but whose sides are not the sides of the polygon?
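The count can be verified by exhaustive enumeration (our sketch; a triangle uses a polygon side exactly when two of its vertices are adjacent, i.e. one arc gap equals 1):

```python
from itertools import combinations

def scalene_no_polygon_side(n=20):
    """Count triangles on a regular n-gon that are scalene and use no polygon edge."""
    count = 0
    for i, j, k in combinations(range(n), 3):
        gaps = (j - i, k - j, n - k + i)              # arc lengths between vertices
        if min(gaps) < 2:                              # gap 1 => side of the polygon
            continue
        chords = sorted(min(g, n - g) for g in gaps)   # equal chords <=> equal class
        if chords[0] != chords[1] and chords[1] != chords[2]:
            count += 1
    return count
```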
Let $ABCD$ be a rectangle with $AB = a$ and $BC = b$. Suppose $ r_1 $ is the radius of the circle passing through $A$ and $B$ and touching $CD$; and similarly $r_2 $ is the radius of the circle passing through $B$ and $C$ and touching $AD$. Show that $r_1 +r_2 \geq \frac{5}{8}(a+b) $.
Show that $19^{93}+13^{99} $ is a positive integer divisible by $162$.
If $a, b, c, d$ are four positive real numbers such that $abcd = 1$, prove that $ (1+a)(1+b)(1+c)(1+d) \geq 16 $.
In a group of ten persons, each person is asked to write the sum of the ages of all the other $9$ persons. If all the ten sums form the $9$-element set $\{82, 83, 84, 85, 87, 89, 90, 91, 92\}$ find the individual ages of the persons (assuming them to be whole numbers of years).
I have $6$ friends and during a vacation I met them during several dinners. I found that I dined with all the $6$ exactly on $1$ day; with every $5$ of them on $2$ days; with every $4$ of them on $3$ days; with every $3$ of them on $4$ days; with every $2$ of them on $5$ days. Further every friend was present at $7$ dinners and every friend was absent at $7$ dinners. How many dinners did I have alone?
Determine the set of integers $n$ for which $n^2 +19n + 92 $ is a square of an integer.
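A bounded brute-force search (our addition) finds the solutions $n=-11$ and $n=-8$; the identity $4(n^2+19n+92) = (2n+19)^2 + 7$ forces $|2n+19| = 3$, so the chosen range is more than sufficient.

```python
import math

def square_values(lo=-200, hi=200):
    """Integers n in [lo, hi] for which n^2 + 19n + 92 is a perfect square."""
    hits = []
    for n in range(lo, hi + 1):
        v = n * n + 19 * n + 92   # always positive: minimum 1.75 at n = -9.5
        if math.isqrt(v) ** 2 == v:
            hits.append(n)
    return hits
```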
If $\frac{1}{a}+\frac{1}{b}=\frac{1}{c} $ where $a, b, c$ are positive integers with no common factor, prove that (a + b) is the square of an integer.
Determine the largest $3$-digit prime factor of the integer $\binom{2000}{1000}$.
$ABCD$ is a cyclic quadrilateral with $AC$ perpendicular to $BD$; $AC$ meets $BD$ at $E$. Prove that $EA^2 + EB^2 + EC^2 + ED^2 = 4R^2$, where $R$ is the radius of the circumscribing circle.
$ABCD$ is a cyclic quadrilateral; $x, y, z$ are the distances of $A$ from the lines $BD, BC, CD$ respectively. Prove that $\frac{BD}{x}=\frac{BC}{y}+\frac{CD}{z} $
$ABCD$ is a quadrilateral and $P, Q$ are mid-points of $CD$, $AB$ respectively. Let $AP, DQ$ meet at $X$, and $BP$, $CQ$ meet at $Y$ . Prove that area of $ADX$ + area of $BCY$ = area of quadrilateral $PXQY$ .
Prove that $ 1 < \frac{1}{1001} +\frac{1}{1002}+\frac{1}{1003}+ ...............+\frac{1}{3001} < \frac{4}{3} $
Solve the system $(x + y)(x + y + z) = 18$, $(y + z)(x + y + z) = 30$, $(z + x)(x + y + z) = 2A$ in terms of the parameter $A$.
The cyclic octagon $ABCDEFGH$ has sides $a, a, a, a, b, b, b, b$ respectively. Find the radius of the circle that circumscribes $ABCDEFGH$ in terms of $a$ and $b$.
Let $P$ be an interior point of $\triangle ABC$ and $AP, BP, CP$ meet the sides $BC, CA, AB$ in $D, E, F$ respectively. Show that \( \frac{AP}{PD} = \frac{AF}{FB} +\frac{AE}{EC} \).
If $a, b, c$ and $d$ are any four positive real numbers, then prove that $\frac{a}{b}+\frac{b}{c}+\frac{c}{d}+\frac{d}{a} \ge 4 $.
A four-digit number has the following properties:
it is a perfect square;
its first two digits are equal to each other;
its last two digits are equal to each other; Find all such four digit numbers.
There are two urns, each containing an arbitrary number of balls (both are non-empty to begin with). We are allowed two types of operations:
remove an equal number of balls simultaneously from both the urns and
double the number of balls in any one of them.
Show that after performing these operations finitely many times, both the urns can be made empty.
Take any point $P_{1}$ on the side $BC$ of a $\triangle ABC$ and draw the following chain of lines: $P_{1}P_{2}$ parallel to $AC$, $P_{2}P_{3}$ parallel to $BC$, $P_{3}P_{4}$ parallel to $AB$, $P_{4}P_{5}$ parallel to $CA$, and $P_{5}P_{6}$ parallel to $BC$. Here $P_{2}, P_{5}$ lie on $AB$; $P_{3}, P_{6}$ on $CA$; and $P_{4}$ on $BC$. Show that $P_{6}P_{1}$ is parallel to $AB$.
Find all integer values of a such that the quadratic expression $(x+a)(x+1991) + 1$ can be factored as a product $(x+b)(x+c)$ where $b$ and $c$ are integers.
Prove that $n^{4}+4^{n} $ is composite for all integer values of $n > 1$.
The $64$ squares of an $8 \times 8$ chessboard are filled with positive integers in such a way that each integer is the average of the integers on the neighbouring squares. (Two squares are neighbours if they share a common edge or a vertex. Thus a square can have $8, 5$ or $3$ neighbours depending on its position.) Show that all $64$ integer entries are in fact equal.
Improving MetFrag with statistical learning of fragment annotations
Christoph Ruttkies1 (ORCID: orcid.org/0000-0002-8621-8689),
Steffen Neumann1,2 &
Stefan Posch3
Molecule identification is a crucial step in metabolomics and environmental sciences. Besides in silico fragmentation, as performed by MetFrag, machine learning and statistical methods have also evolved, showing improved molecule annotation based on MS/MS data. In this work we present a new statistical scoring method in which annotations of m/z fragment peaks to fragment-structures are learned in a training step. Based on a Bayesian model, two additional scoring terms are integrated into the new MetFrag2.4.5 and evaluated on the test data set of the CASMI 2016 contest.
The results on the 87 MS/MS spectra from positive and negative mode show a substantial improvement compared to submissions made by the former MetFrag approach. Top1 rankings increased from 5 to 21 and Top10 rankings from 39 to 55, both exceeding the values for CSI:IOKR, the winner of the CASMI 2016 contest. For the negative mode spectra, MetFrag's statistical scoring outperforms all other participants that submitted results for this type of spectra.
This study shows how statistical learning can improve molecular structure identification based on MS/MS data compared to the same method using combinatorial in silico fragmentation only. MetFrag2.4.5 shows, especially in negative mode, a better performance compared to the other participating approaches.
The identification of small molecules such as metabolites is a crucial step in metabolomics and environmental sciences. The analytical tool of choice to achieve this goal is mass spectrometry (MS), where ionized molecules can be differentiated by their mass-to-charge (m/z) ratio. As a single m/z value is not sufficient for the unequivocal determination of the molecular structure, tandem mass spectrometry (MS/MS) is applied, which results in the formation of fragment ions of the entire molecule. These fragment ions produce fragment peaks that are characterized by their m/z and intensity value. The intensity correlates with the number of ions detected with that particular m/z value. These m/z fragment peaks can be used to infer additional hints about the underlying molecular structure.
The interpretation of the generated data is complex and usually requires expert knowledge. Over the past years, several software tools have been developed to overcome the time-consuming manual analysis of the growing amount of MS/MS spectra in an automated way. The first approaches tried to reconstruct observed fragment spectra by performing in silico fragmentation in either a rule-based (e.g. MassFrontier [1]) or a combinatorial manner, such as MetFrag [2, 3], MIDAS [4], MS-Finder [5] and MAGMa [6].
MetFrag was one of the first combinatorial approaches developed and performs in silico fragmentation of molecular structures. Given a single MS/MS spectrum of an unknown molecule, MetFrag first selects molecular candidates from databases given the neutral mass of the parent ion. In the next step, each of the retrieved candidates is treated individually and fragmented in silico using a bond-disconnection approach. The generated fragment-structures are assigned to the m/z fragment peaks of the MS/MS spectrum, based on the comparison of the theoretical mass of the generated structure and the m/z value of the acquired fragment peak. Given a set of assignments of m/z fragment peaks to fragment-structures for one candidate, MetFrag calculates a score that indicates how well the candidate matches the given MS/MS spectrum. These scores are used to rank all retrieved candidates. Ideally, the correct one is ranked in first place.
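The bond-disconnection step can be sketched on a plain molecular graph: removing one bond and collecting the connected components yields the candidate fragment-structures of one fragmentation step. The sketch below is our simplification (plain adjacency lists instead of a chemistry toolkit, no hydrogens or charges, single disconnections only), not MetFrag's actual implementation:

```python
def connected_components(atoms, bonds):
    """Connected components of an undirected molecular graph."""
    adj = {a: set() for a in atoms}
    for a, b in bonds:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in atoms:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def one_bond_fragments(atoms, bonds):
    """All fragments obtained by breaking exactly one bond."""
    fragments = set()
    for i in range(len(bonds)):
        remaining = bonds[:i] + bonds[i + 1:]
        fragments.update(connected_components(atoms, remaining))
    return fragments

# Toy 'molecule': a chain C1-C2-O3 (labels are illustrative, not real atom typing)
atoms = ["C1", "C2", "O3"]
bonds = [("C1", "C2"), ("C2", "O3")]
frags = one_bond_fragments(atoms, bonds)
```

Breaking each of the two bonds in the toy chain yields four distinct fragments in total.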
Statistical approaches have evolved that learn fragmentation processes from annotated experimental MS/MS data. CFM-ID [7] uses Markov chains to model transitions of fragment-structures for the prediction of MS/MS spectra. Generated spectra can be aligned with the spectrum of interest to report the candidates with the best matching spectral prediction. FingerID [8] uses MS/MS spectra to predict molecular fingerprints. These fingerprints are bit-wise representations of molecular structures where each position in the fingerprint encodes a structural property of the underlying molecule. FingerID uses support vector machines (SVM) and is enhanced by CSI:FingerID (CSI:FID) [9], which integrates fragmentation trees calculated by SIRIUS [10]. CSI:IOKR [11] replaces the SVM prediction by an input-output kernel regression approach. Recent analysis in one of the latest CASMI (Critical Assessment of Small Molecule Identification) contests (2016) [12] reveals that techniques supported by statistical learning (i.e. CSI:FID and CSI:IOKR) are the most promising and powerful methods for structure elucidation if only the MS/MS data is considered.
In this work we introduce a new statistical approach to evaluate candidates for MS/MS spectra. Using training data, probabilities of the predicted fragment-structures given the observed m/z peaks are estimated with a Bayesian approach. These probabilities are integrated as new scoring terms for MetFrag to rank candidates. The new scoring schema is tested on the challenge data sets of the CASMI contest 2016. The method shown here complements the different machine learning and statistical approaches that perform MS/MS spectra prediction (CFM-ID) or prediction of molecular fingerprints (CSI:FID, CSI:IOKR), now combining in silico fragmentation and statistical scoring for the evaluation of retrieved molecular candidates. The new scoring functions are available with the new MetFrag version 2.4.5.
This section introduces the notation and the Bayesian model approach used to evaluate how likely a fragment-structure is in the presence of an m/z fragment peak. The resulting probabilities are defined across the domain of all possible fragment-structures and all m/z fragment peaks, but can be reduced to become tractable. The resulting probability distribution will be used in the candidate score \(S^{c}_{RawPeak}\) indicating whether a candidate can explain the m/z fragment peaks with fragment-structures seen in the training spectra. In analogy, neutral losses will also be considered. The parameter estimation to model the probability distribution is at the heart of our approach. We describe how the parameters are estimated from training data, taking care to clearly separate training data from evaluation data. Finally we describe the evaluation using the CASMI 2016 challenge data and the comparison to the results obtained by other approaches and state-of-the-art small molecule identification programs.
First, we introduce the notation required for our approach. A summary of the notation used in the following and its description can be found in Additional files 4 and 5: Tables S1 and S2. Consider a set of N centroided MS/MS spectra \(\underline {m}=\{\underline {m}_{n}|n=1,\dots N\}\) where \(\underline {m}_{n} = (m_{n1},\dots m_{n{K_{n}}})\) consists of \(K_{n}\) m/z fragment peaks \(m_{nk}\). Furthermore, for each spectrum \(\underline {m}_{n}\) a set of candidates \(\underline {c}_{n}\) of length \(C_{n}\) is given, typically retrieved from a database. For a given candidate \(c_{nc} \in \underline {c}_{n}\), MetFrag performs an in silico fragmentation and assigns each observed m/z fragment peak \(m_{nk}\) to one of the generated fragment-structures, denoted \(f_{nck}\) in the following. This can be interpreted as explaining the m/z fragment peak \(m_{nk}\) with the fragment-structure \(f_{nck}\). On the basis of the in silico fragmentation, assignments of m/z fragment peaks to fragment-structures \((\underline {m}_{n}, \underline {\smash {f}}_{nc}), c=1,\dots C_{n}\), are determined. As there is not necessarily a matching fragment-structure for every m/z fragment peak \(m_{nk}\), we introduce \(\perp \) in case an m/z fragment peak \(m_{nk}\) cannot be annotated, and denote \(f_{nck}=\perp \) in this case.
As stated in the introduction, we want to evaluate candidates for an MS/MS spectrum by a statistical scoring approach to be integrated into MetFrag. Therefore, we apply a scoring term based on the probability \(P(\underline {\smash {f}}_{nc} | \underline {m}_{n})\). The distribution \(P(\underline {\smash {f}} | \underline {m})\) models the occurrence of fragment-structures in \(\underline {\smash {f}}\) in the correct candidate for a given list \(\underline {m}\) of m/z fragment peaks in an observed spectrum. In the following we assume the independence of the assignments of m/z fragment peaks to fragment-structures, yielding
$$P(\underline{\smash{f}} | \underline{m}) = \prod_{k=1}^{K} P(f_{k} | m_{k}), $$
with \(\underline {m} = (m_{1},\dots,m_{K})\) and \(\underline {\smash {f}} = (f_{1},\dots f_{K})\). From a chemical point of view, we know that certain m/z fragment peaks occur concurrently with other m/z fragment peaks (or at least with a higher certainty) due to multi-stage fragmentation pathways that lead to a further fragmentation of a generated fragment-structure. However, for the sake of model simplification we do not consider this information when assuming independence of assignments of m/z fragment peaks to fragment-structures.
A fragment-structure can be regarded as a connected charged molecular structure consisting of atoms connected via bonds. A graph can be used as data structure to represent a fragment-structure, as atoms and bonds can be represented by graph nodes and edges, respectively. However, to reduce the computational costs for comparing graphs by determining graph isomorphisms, especially when working with thousands or even hundreds of thousands of fragment-structures, we use molecular fingerprints as a bit-string representation of a molecular structure. Each bit of the fingerprint describes the presence or absence of a molecular feature within the structure. As different fragment-structures may share the same fingerprint, this approach reduces the domain size and also generalizes very similar fragment-structures that would explain the same m/z fragment peak. There are different molecular fingerprint functions available, e.g., the MACCSFingerPrint [13] and the LingoFingerprint [14]. A fragment-structure fingerprint is defined as \(\widetilde {f}_{k} = MolFing(f_{k})\), calculated by the fingerprint function MolFing.
We regard two fragment-structures f and f′ to be equal, if \(\widetilde {f}\) and \(\widetilde {f'}\) are equal, although f and f′ might be structurally different. This reduces the comparison to constant time as the fingerprint length is independent of the size of the fragment-structure. The distribution can now be re-defined as
$$P(\underline{\smash{\widetilde{f}}} | \underline{m}) = \prod_{k=1}^{K} P(\widetilde{f}_{k} | m_{k}). $$
The comparison of two m/z fragment peaks m and m′ cannot be performed as a simple test for equality by m=m′. This is impractical for MS measurements as they show a certain degree of deviation depending on the mass accuracy of the instrument. For this reason, the m/z range covered by training and test spectra is discretized into non-equidistant bins [bi,bi+1]. The boundaries are calculated as bi+1=bi+2·(mzppm(bi)+mzabs) with b0 set to the minimum mass value of this range. The values mzabs and mzppm(bi) represent the absolute (in m/z) and relative mass (in ppm) deviation given by the MS setup.
Two m/z fragment peaks m and m′ are considered to be equal if they fall into the same bin. In the following each m/z fragment peak m is discretized to the central value of its bin.
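The discretization follows directly from the recurrence \(b_{i+1}=b_{i}+2\cdot (mz_{ppm}(b_{i})+mz_{abs})\); a minimal Python sketch (function names and the default tolerances are ours, chosen only for illustration):

```python
from bisect import bisect_right

def build_bins(mz_min, mz_max, mzabs=0.001, mzppm=5.0):
    """Non-equidistant bin boundaries b_{i+1} = b_i + 2*(b_i*mzppm*1e-6 + mzabs)."""
    bounds = [mz_min]
    while bounds[-1] < mz_max:
        b = bounds[-1]
        bounds.append(b + 2.0 * (b * mzppm * 1e-6 + mzabs))
    return bounds

def discretize(mz, bounds):
    """Replace an m/z value by the center of the bin it falls into."""
    i = bisect_right(bounds, mz) - 1
    i = max(0, min(i, len(bounds) - 2))
    return 0.5 * (bounds[i] + bounds[i + 1])
```

Two peaks that differ by less than the local bin width map to the same bin center and are therefore treated as equal.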
Domains and Parameters
As a next step, the two domains M of m/z values m and F of all fragment-structure fingerprints \(\widetilde {f}\) need to be defined. For M one could consider all bins resulting from discretization. However, this is impractical as the major part of this domain is not observed for a given data set. Likewise, the domain F can be defined to contain all possible fragment-structure fingerprints. Using the MACCSFingerprint with 166 bits would result in \(2^{166} \approx 9.35\cdot 10^{49}\) different fingerprints. In practice this space needs to be reduced to be tractable, and again only a fraction will be observed for a given problem. For a spectral training data set of N MS/MS spectra and \(C_{n}\) candidates each, we define a reduced peak domain \(\widetilde {M}_{tr}\) and a reduced fingerprint domain \(\widetilde {F}_{tr}\) as
$$\begin{array}{*{20}l} \widetilde{M}_{tr} &= \{m_{nk} | n \in 1,\dots N, k=1,\dots K_{n} \} \subseteq M \\ \widetilde{F}_{tr} &\,=\, \left\{\widetilde{f}_{nck} | n \!\in\! 1,\dots N, c\,=\,1,\dots C_{n}, k\,=\,1,\dots K_{n} \right\} \subseteq F, \end{array} $$
which are the m/z fragment peaks and fragment-structure fingerprints observed in this data set.
Furthermore, we define \(\mathcal {D}_{train}\) as a list of all assignments of m/z fragment peaks to fragment-structures in the training data, i.e.
$${} \mathcal{D}_{train} \,=\, \left((m_{nk}, f_{nck}) | n \,=\, 1, \dots N, c\,=\,1,\dots C_{n}, k \,=\, 1, \dots K_{n} \right). $$
Besides the MS/MS spectra given in this training data set, we also need to address observations of an additional centroided MS/MS query spectrum \(\underline {m}_{q}\) that is not part of the training data set. The processing of \(\underline {m}_{q}\) is illustrated in Fig. 1. The domains are extended by the observations retrieved from this single query spectrum with \(C_{q}\) candidates and \(K_{q}\) m/z fragment peaks, i.e.
$$\begin{array}{*{20}l} \widetilde{M} &= \widetilde{M}_{tr} \cup \{m_{qk} | k=1,\dots K_{q} \}\\ \widetilde{F} &= \widetilde{F}_{tr} \cup \{\widetilde{f}_{qck} | c=1,\dots C_{q}, k=1,\dots K_{q} \}. \end{array} $$
MetFrag processing of a single query spectrum (\(\underline {m}_{q}\)). The input for a MetFrag processing run is a query MS/MS spectrum and the candidate list. Fragments are generated in silico for each candidate and mapped to m/z fragment peaks in the given spectrum. The output is a list of assignments of m/z fragment peaks to fragment-structures for each candidate
To define the distribution \(P(\underline {\smash {\widetilde {f}}} | \underline {m})\) with \(m \in \widetilde {M}\) and \(\widetilde {f} \in \widetilde {F}\), we introduce the notation \(\theta _{m\widetilde {f}} := P(\widetilde {f}|m)\), which is the probability of fragment-structure fingerprint \(\widetilde {f}\) given an observed mass \(m\). The complete set of parameters is given as
$$\underline{\theta} = (\theta_{m\widetilde{f}}), \quad\text{for}\quad m \in \widetilde{M}, \widetilde{f} \in \widetilde{F}. $$
Parameter estimation
The parameters are initially not known and need to be estimated from the training data. In the process of parameter estimation \(\underline {c}_{n}\) is set to only contain the known correct candidate (\(C_{n}=1\)) for the generation of \(\mathcal {D}_{train}\), as this results in mainly correct predicted fragment-structure assignments as ground truth. The generation of \(\mathcal {D}_{train}\) is illustrated in Fig. 2 where only the correct candidate for each spectrum is processed. One paradigm for parameter estimation is the maximum likelihood principle
$$ \underline{\hat{\theta}}^{ML} = \underset{\underline{\theta}}{\text{argmax}}~P(\mathcal{D}_{train}|\underline{\theta}), $$
which results in
$$\begin{aligned} &\hat{\theta}^{ML}_{m\widetilde{f}} = \frac{N_{m\widetilde{f}}}{{\sum\nolimits}_{\widetilde{f}' \in \widetilde{F}} ~N_{m\widetilde{f}'}},\\ & \quad \text{ with} \quad N_{m\widetilde{f}} = \sum\limits_{(m_{t},\widetilde{f_{t}}) \in \mathcal{D}_{train}} \delta(\widetilde{f}_{t},\widetilde{f})\delta(m_{t},m) \end{aligned} $$
\(N_{m\widetilde {f}}\) is the absolute frequency of the assignments of m/z fragment peaks to fragment-structures \((m,\widetilde {f})\) in the training data set \(\mathcal {D}_{train}\).
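The maximum likelihood estimate is a normalized co-occurrence count; a minimal sketch (toy data with fingerprints abbreviated as strings; all names are ours):

```python
from collections import Counter

def ml_estimates(train_pairs):
    """ML estimate of P(fingerprint | m/z) from (m/z, fingerprint) pairs."""
    pair_counts = Counter(train_pairs)                 # N_{m,f}
    mass_counts = Counter(m for m, _ in train_pairs)   # sum over f' of N_{m,f'}
    return {(m, f): c / mass_counts[m] for (m, f), c in pair_counts.items()}

# Toy training data: m/z 91.054 explained twice by 'fpA' and once by 'fpB'
train = [(91.054, "fpA"), (91.054, "fpA"), (91.054, "fpB"), (105.070, "fpC")]
theta = ml_estimates(train)
```

For the toy data, \(\hat{\theta }^{ML}\) assigns 2/3 to 'fpA' and 1/3 to 'fpB' at m/z 91.054.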
The training phase. The training consists of two major phases. For each phase a subset of the known reference MS/MS spectra is used. In the first phase MetFrag generates a list of assignments of m/z fragment peaks to fragment-structures for the given MS/MS spectra and their correct candidates. These assignments are generated by the in silico fragmentation of the correct candidate and the mapping of the generated fragment-structures to the m/z fragment peaks in the training spectrum. This assignments list (\(\mathcal {D}_{train}\)) is used in the second training phase along with the second subset of the reference spectra. Here, for each MS/MS spectrum the correct candidate is ranked with a candidate list using the consensus candidate score integrating besides the fragmenter (\(S^{c}_{MetFrag}\)) the two new statistical scoring terms (\(S^{c}_{Peak}, S^{c}_{Loss}\)). The number of correct Top1 rankings is used to optimize pseudo count and scoring weight parameters. The first training phase is used in analogy for the generation of the list containing assignments of m/z fragment losses to fragment-structures (\(\mathcal {D}^{L}_{train}\))
If such an assignment \((m, \widetilde {f})\) resulting from the query spectrum is not contained in the training data, a probability \(\hat {\theta }^{ML}_{m\widetilde {f}} = 0\) is estimated. As a consequence the probability \(P(\underline {\smash {\widetilde {f}}} | \underline {m})\) for the query will be zero.
Due to the limitation of the available training data, this situation will arise quite often. To avoid this problem, we use the Bayes paradigm including an a priori distribution for the parameters to be estimated. In addition, as we only consider the correct candidate for each spectrum in \(\mathcal {D}_{train}\), it is not possible to reliably estimate parameters in case \(\widetilde {f} = \perp \), which is the probability for an m/z fragment peak without an assigned fragment-structure. Within the Bayesian approach we model this probability with the prior distribution and set \(N_{m\perp } = 0\).
In the following we will use the mean posterior (MP) principle
$$\hat{\theta}_{m\widetilde{f}}^{MP} = E_{P(\underline\theta|\mathcal{D}_{train},\pi)}[\underline{\theta}] $$
where
$$P(\underline\theta|\mathcal{D}_{train},\pi) = \frac{P(\underline{\theta}|\underline{\pi})P(\mathcal{D}_{train}|\underline{\theta})}{P(\mathcal{D}_{train}|\pi)} $$
is the a posteriori distribution of parameters \(\underline \theta \). As a prior distribution \(P(\underline {\theta }|\underline {\pi })\) on the parameters we use a product Dirichlet distribution with hyper parameters \(\pi _{m\widetilde {f}}, m \in \widetilde {M}, \widetilde {f} \in \widetilde {F}\) defined as
$$\pi_{m\widetilde{f}} = \begin{cases} \alpha, & \widetilde{f} \neq \perp \\ \beta, & \widetilde{f} = \perp \end{cases} $$
where α and β are also called pseudo counts.
The parameter estimation is given by
$$\hat{\theta}_{m\widetilde{f}}^{MP} = \frac{N_{m\widetilde{f}} + \pi_{m\widetilde{f}}}{{\sum\nolimits}_{\widetilde{f}' \in \widetilde{F}}~\left(N_{m\widetilde{f}'} + \pi_{m\widetilde{f}'}\right)}. $$
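The effect of the pseudo counts is easy to see in code: every fingerprint, including the unannotated symbol \(\perp \) (represented here as None), receives non-zero probability mass. A sketch with illustrative values for α and β (the actual values are optimized in training):

```python
def mp_estimate(counts, masses, fingerprints, alpha=0.5, beta=0.1):
    """Mean posterior estimate of P(f | m) with Dirichlet pseudo counts
    (alpha for observed fingerprints, beta for the unannotated symbol None)."""
    theta = {}
    for m in masses:
        denom = sum(counts.get((m, f), 0) + alpha for f in fingerprints) + beta
        for f in fingerprints:
            theta[(m, f)] = (counts.get((m, f), 0) + alpha) / denom
        theta[(m, None)] = beta / denom   # N_{m,unannotated} is fixed to 0
    return theta

counts = {(91.0, "fpA"): 2, (91.0, "fpB"): 1}
theta = mp_estimate(counts, masses=[91.0], fingerprints=["fpA", "fpB"])
```

The probabilities per mass sum to one, and unseen assignments no longer receive probability zero.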
Fragment losses
Fragment losses can provide additional evidence for a molecular structure, as the difference between two m/z fragment peaks provides hints about a substructure that was lost but not observed directly by an m/z fragment peak (neutral loss). We want to include this information in the evaluation of candidates for a given MS/MS spectrum. We define \(l_{nkh}\) to be the m/z fragment loss between two different m/z fragment peaks \(m_{nk}\) and \(m_{nh}\) from the spectrum \(\underline {m}_{n}\), where
$$\begin{array}{*{20}l} l_{nkh} &= m_{nk} - m_{nh}, & m_{nk} > m_{nh}. \end{array} $$
For each pair of assignments of m/z fragment peaks to fragment-structures \((m_{nk},f_{nck})\) and \((m_{nh},f_{nch})\) with \(f_{nch}\) being a genuine substructure of \(f_{nck}\) (\(f_{nck} \neq f_{nch}\)), we introduce \(f_{nckh}\) as a loss fragment-structure. This fragment-structure is a substructure of \(f_{nck}\) that is generated if all bonds and atoms present in \(f_{nch}\) are removed (\(f_{nckh}=f_{nck} \setminus f_{nch}\)). If \(f_{nckh}\) is connected, we define \((l_{nkh},f_{nckh})\) to be an assignment of an m/z fragment loss to a fragment-structure.
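Representing fragment-structures as labelled atom sets, the derivation of loss assignments can be sketched as follows (our simplification: the connectivity requirement on the loss structure is omitted, and masses are illustrative):

```python
def fragment_losses(peak_fragments):
    """Derive (loss mass, loss atom set) pairs from (m/z, atom set) assignments:
    a loss exists where the lighter fragment is a proper subset of the heavier."""
    losses = []
    items = sorted(peak_fragments, key=lambda t: t[0])
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (m_small, f_small), (m_big, f_big) = items[i], items[j]
            if f_small < f_big:                        # proper subset test
                losses.append((m_big - m_small, f_big - f_small))
    return losses

# Toy candidate: the heavier fragment contains one extra labelled atom
assignments = [(77.039, frozenset({"C1", "C2"})),
               (105.034, frozenset({"C1", "C2", "O3"}))]
loss_list = fragment_losses(assignments)
```

For the toy assignments, a single loss of mass 27.995 with atom set {O3} is derived.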
In analogy to the pairs of m/z fragment peaks and fragment-structures \((m_{nk},f_{nck})\), we define the domains for the m/z fragment losses and loss fragment-structures for the N MS/MS training spectra as
$$\begin{array}{*{20}l} \widetilde{L}_{tr} &= \left\{l_{nkh} | n \in 1,\dots N, k=1,\dots K_{n}, h=1,\dots K_{n} \right\} \\ \widetilde{F}^{L}_{tr} &= \left\{\widetilde{f}_{nckh} | n \in 1,\dots N, c=1,\dots C_{n}, k=1,\dots K_{n}, h=1,\dots K_{n}\right\} \end{array} $$
for a given training data set
$$\mathcal{D}^{L}_{train} = \left((l_{nkh}, f_{nckh}) | n = 1, \dots N, c=1,\dots C_{n}, k = 1, \dots K_{n}, h = 1, \dots K_{n} \right) $$
of assignments of m/z fragment losses to fragment-structures.
In addition, both domains need to be extended for the additional query MS/MS spectrum \(\underline {m}_{q}\)
$$\begin{aligned} \widetilde{L} &= \widetilde{L}_{tr} \cup \{l_{qkh} | k=1,\dots K_{q}, h=1,\dots K_{q} \},\\ \widetilde{F}^{L} &= \widetilde{F}^{L}_{tr} \cup\! \left\{\widetilde{f}_{qckh} | c\! = 1,\dots C_{q}, k\! =1,\dots K_{q}, h=1,\dots K_{q} \right\}. \end{aligned} $$
We consider the distribution \(P(\underline {\smash {\widetilde {f}}} | \underline {l})\) for assignments of fragment-structures to m/z fragment losses with \(l \in \widetilde {L}\) and \(\widetilde {f} \in \widetilde {F}^{L}\), and denote \(\phi ^{L}_{l\widetilde {f}} := P(\widetilde {f}|l)\). In analogy to the estimation of the parameters \(\theta _{m\widetilde {f}}\), we can now formulate the estimation of \(\phi ^{L}_{l\widetilde {f}}\) including a Dirichlet a priori distribution with the additional hyper parameters \(\psi _{l\widetilde {f}}\):
$$\psi_{l\widetilde{f}} = \begin{cases} \alpha^{L}, & \widetilde{f} \neq \perp \\ \beta^{L}, & \widetilde{f} = \perp \end{cases} $$
This yields the mean posterior estimates
$$\begin{aligned} &\hat{\phi}_{l\,\widetilde{f}}^{MP} = \frac{N^{L}_{l\widetilde{f}} + \psi_{l\widetilde{f}}}{{\sum\nolimits}_{f' \in \widetilde{F}^{L}} \left(N^{L}_{l\widetilde{f}'} + \psi_{l\widetilde{f}'}\right)},\\& \quad \text{ with} \quad N^{L}_{l\widetilde{f}} = \sum\limits_{(l_{t},\widetilde{f_{t}}) \in \mathcal{D}^{L}_{train}} \delta(\widetilde{f}_{t},\widetilde{f})\delta(l_{t},l) \end{aligned} $$
analogous to the parameter estimation for the assignments of m/z fragment peaks to fragment-structures, where \(N^{L}_{l\widetilde {f}}\) is the absolute frequency of the m/z fragment loss and fragment-structure pair \((l,\widetilde {f})\) observed in the training data set \(\mathcal {D}^{L}_{train}\).
Evaluation of the assignments of fragment-structures to m/z fragment peaks and losses in MetFrag candidate scoring
To evaluate a given candidate \(c\) retrieved from a compound database for an MS/MS query spectrum \(\underline {m}_{q}\) based on the statistical models, we define a score for both models of assignments of m/z fragment peaks and losses to fragment-structures. In addition, the MetFrag fragmenter score \(S^{c}_{MetFrag}\) as defined in [3] is also integrated into this candidate evaluation. We define the score \(S_{Fin}^{c}\) as the final or consensus score for a candidate \(c\) to be the weighted sum of these three scoring terms
$$\begin{array}{*{20}l} S_{Fin}^{c} &= \omega_{1} \cdot S^{c}_{MetFrag} + \omega_{2} \cdot S^{c}_{Peak} + \omega_{3} \cdot S^{c}_{Loss}\\ \omega_{i} &\ge 0, \sum\limits_{i=1,2,3}\omega_{i} = 1. \end{array} $$
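The consensus score is a plain convex combination of the three terms; a sketch with illustrative weights (the actual weights are optimized on training data):

```python
def consensus_score(s_metfrag, s_peak, s_loss, weights=(0.4, 0.4, 0.2)):
    """Weighted consensus of the three candidate scores; the weights are
    non-negative and sum to one (the values here are illustrative only)."""
    w1, w2, w3 = weights
    assert min(weights) >= 0 and abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w1 * s_metfrag + w2 * s_peak + w3 * s_loss
```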
To define \(S^{c}_{Peak}\) and \(S^{c}_{Loss}\), we first introduce the raw score of a candidate as
$$\begin{array}{*{20}l} S^{c}_{RawPeak} &= \frac{1}{-\log P\left(\underline{\smash{\widetilde{f}}}_{nc}|\underline{m}_{n},\hat{\underline{\theta}}^{MP}\right)} \end{array} $$
using the log likelihood based on the estimated parameters \(\underline {\theta }^{MP}\) for the assignment of an m/z fragment peak to a fragment-structure \((\underline {m}_{n}, \underline {\smash {f}}_{nc})\) for candidate c. With \(\underline {\smash {\widetilde {f}}}_{nc} = (\widetilde {f}_{nc1}, \dots, \widetilde {f}_{ncK_{n}})\) and \(\underline {m}_{n} = (m_{n1}, \dots, m_{n{K_{n}}})\) the log likelihood decomposes as
$$\begin{array}{*{20}l} \log P\left(\underline{\smash{\widetilde{f}}}_{nc}|\underline{m}_{n},\hat{\underline{\theta}}^{MP}\right) &= \sum\limits_{k=1}^{K_{n}} \log P\left(\widetilde{f}_{nck}|m_{nk},\hat{\underline{\theta}}^{MP}\right). \end{array} $$
Furthermore, the raw score is normalized to the interval [0,1] by
$$\begin{array}{*{20}l} S^{c}_{Peak} &= \frac{S^{c}_{RawPeak}}{\max_{c' \in C_{q}} S^{c'}_{RawPeak}}. \end{array} $$
Using identical ranges for the different scoring terms, as for the MetFrag fragmenter score, simplifies their integration into the weighted sum of the final score. The score \(S^{c}_{Loss}\) for including the assignments of m/z fragment losses to fragment-structures is defined in analogy.
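Both scoring terms reduce to a sum of log probabilities followed by a per-query normalization; a minimal sketch (the pseudo counts keep all probabilities strictly between 0 and 1, so the log-sum is negative and the reciprocal well defined):

```python
import math

def raw_peak_score(probs):
    """S_RawPeak = 1 / (-sum_k log P(f_k | m_k)) for one candidate."""
    return 1.0 / (-sum(math.log(p) for p in probs))

def normalize(raw_scores):
    """Scale the raw scores of all candidates of one query to [0, 1]."""
    top = max(raw_scores)
    return [s / top for s in raw_scores]
```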
Method evaluation
For the evaluation of the presented approach we used the challenge data set and evaluation procedures of the CASMI 2016 contest. In this contest candidate lists were provided by the organizers along with the spectra to be used by all participants. After the contest, several participants that used statistical learning (e.g. CSI:FID, CSI:IOKR, CFM-ID) coordinated which compounds were used in the training steps to improve the comparability between methods. They exchanged the InChIKeys (InChI: International Chemical Identifier) [15] of the spectra used in training their approaches, although it was not guaranteed that two participants used exactly the same MS/MS spectrum for a compound identified by a common InChIKey if they used different spectral databases. This evaluation is based on 87 of the 208 spectra provided originally in the challenge, as the remaining 121 spectra were removed because they were included in the training data of at least one participant. The results for this subset of the challenge spectra were published in [12] and are used here in Table 2 for comparison against MetFrag2.4.5. We used the same set of InChIKeys to obtain the training spectra for this paper. The training data is available from the github repository accompanying the paper.
Preparation of the training data set
The training data set includes MS/MS spectra provided by the contest organizers consisting of 312 CASMI training spectra. Participants were allowed to use additional training spectra retrieved from spectral databases e.g. the MassBank of North America (MoNA) [16] and the Global Natural Products Social Molecular Networking (GNPS) [17] spectral library. The InChIKeys of the molecules of these additional spectra were provided by the participants.
We used the provided InChIKeys to retrieve the additional training spectra by querying the MoNA and GNPS spectral databases. For MoNA, retrieved MS/MS spectra from one institution were merged in case more than one spectrum was present for a molecule, based on the first block of the InChIKey. Thus for one InChIKey several merged spectra can be present in case they originate from different sources. Spectra originating from the GNPS spectral database were merged independently of their source. The spectra merging was performed by averaging m/z fragment peaks within a specified mass range (given by the MS setup of the MS/MS spectra) and retaining the peak of maximum intensity. This resulted in 5622 spectra (4728 positive and 884 negative) which were used for training. To reduce the spectral complexity, only the 40 most abundant (based on intensity) m/z peaks in each spectrum were used. The same applies to test spectra used for evaluation.
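The merging rule described above (average the m/z values of peaks falling within a tolerance, keep the maximum intensity, then retain the most intense peaks) can be sketched as follows (tolerance value and function names are ours, for illustration only):

```python
def merge_spectra(spectra, tol=0.005):
    """Merge peak lists of one compound: peaks closer than tol are grouped,
    the group m/z is averaged and the maximum intensity is kept."""
    peaks = sorted(p for s in spectra for p in s)   # (mz, intensity) tuples
    merged, group = [], [peaks[0]]
    for mz, inten in peaks[1:]:
        if mz - group[-1][0] <= tol:
            group.append((mz, inten))
        else:
            merged.append((sum(m for m, _ in group) / len(group),
                           max(i for _, i in group)))
            group = [(mz, inten)]
    merged.append((sum(m for m, _ in group) / len(group),
                   max(i for _, i in group)))
    return merged

def top_n(peaks, n=40):
    """Keep only the n most intense peaks, sorted by m/z."""
    return sorted(sorted(peaks, key=lambda p: -p[1])[:n])
```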
Training of parameters
In the training phase the optimal parameters used to calculate the candidates' consensus score need to be determined. This parameter set consists of the absolute frequencies \(N_{m\widetilde {f}}\) and \(N^{L}_{l\widetilde {f}}\) of the assignments of m/z fragment peaks and losses to fragment-structures, the hyper parameters α,β,αL and βL, and the score weights ω1,ω2 and ω3. The whole training phase described in this paragraph is illustrated in Fig. 2.
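For intuition, the consensus score can be thought of as a weighted combination of the three individual scores. The linear form and the example values below are assumptions for this sketch, not the exact formula used by MetFrag2.4.5:

```python
def consensus_score(s_metfrag, s_peak, s_loss, weights):
    """Combine MetFrag's fragmenter score with the two statistical
    scores. A weighted linear combination is assumed here; the weights
    (w1, w2, w3) are constrained to the simplex, matching the
    optimization described in the text."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights must sum to one"
    return w1 * s_metfrag + w2 * s_peak + w3 * s_loss
```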
Training was separated into two phases. In the first phase, the \(N_{m\widetilde {f}}\) and \(N^{L}_{l\widetilde {f}}\) parameters were determined using only the correct candidate for each training spectrum. Based on these absolute frequencies, the optimal hyper parameters and score weights were determined in the second phase.
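The role of the absolute frequencies and the pseudo-count hyper parameters can be illustrated with a generic posterior-mean (Dirichlet-smoothed) estimator. This is a sketch of the idea only, not the paper's exact model:

```python
def mean_posterior(counts, alpha, n_outcomes):
    """Posterior-mean probability estimates from absolute assignment
    frequencies. `counts` maps a fragment-structure (fingerprint) to
    its observed frequency, `alpha` is the pseudo count (hyper
    parameter), and `n_outcomes` is the number of possible
    fragment-structures."""
    total = sum(counts.values())
    denom = total + alpha * n_outcomes
    return {k: (n + alpha) / denom for k, n in counts.items()}
```

The pseudo count keeps assignments never seen in training from receiving probability zero, which matters when a query spectrum produces previously unobserved peak/fragment-structure pairs.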
If we had used the same data set for the estimation of all parameters, \(\mathcal {D}_{train}\) and \(\mathcal {D}^{L}_{train}\) would have contained the same pairs of m/z fragment peaks/losses and fragment-structures for the correct candidate to be ranked in the second phase. The correct candidate would then be favoured during candidate ranking. This does not represent a realistic case: when a query spectrum of an unobserved molecule is processed, we also expect m/z fragment peak and loss assignments not previously observed in the optimization phase.
For this reason, the complete training data set was split randomly into two disjoint groups of spectra. The splitting was performed by dividing the unique list of InChIKeys (first block) with a ratio of 70:30 and assigning each spectrum to a group based on the InChIKey of the underlying molecule. The larger group was used in the first phase to calculate \(N_{m\widetilde {f}}\) and \(N^{L}_{l\widetilde {f}}\).
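A group-aware split of this kind can be sketched as follows; the dictionary field name and the random seed are illustrative assumptions:

```python
import random

def split_by_first_block(spectra, ratio=0.7, seed=42):
    """Split spectra into two disjoint groups such that all spectra of
    the same molecule (identical first InChIKey block) fall into one
    group. `spectra` is a list of dicts with an 'inchikey' entry."""
    blocks = sorted({s["inchikey"].split("-")[0] for s in spectra})
    random.Random(seed).shuffle(blocks)
    cut = int(round(ratio * len(blocks)))
    first = set(blocks[:cut])
    group1 = [s for s in spectra if s["inchikey"].split("-")[0] in first]
    group2 = [s for s in spectra if s["inchikey"].split("-")[0] not in first]
    return group1, group2
```

Splitting on the first InChIKey block rather than on individual spectra guarantees that no molecule contributes spectra to both phases.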
In the first phase, the correct candidate of each spectrum was processed by MetFrag's in silico fragmentation. The m/z fragment peaks explained by a fragment-structure were corrected to the mass of the molecular formula of the assigned fragment-structure. This is required to be independent of the different mass accuracies of MS/MS spectra acquired under different instrument conditions. Thus, the lists of assignments of m/z fragment peaks/losses to fragment-structures, \(\mathcal {D}_{train}\) and \(\mathcal {D}^{L}_{train}\), contained assignments with the corrected m/z values used for the calculation of \(N_{m\widetilde {f}}\) and \(N^{L}_{l\widetilde {f}}\).
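The m/z correction step can be illustrated by snapping a measured peak to the exact mass of the assigned fragment-structure's molecular formula. The ppm tolerance here is an assumed placeholder for the instrument-dependent value:

```python
def correct_mz(measured_mz, formula_masses, ppm=10.0):
    """Replace a measured m/z value by the exact mass of the assigned
    fragment-structure's molecular formula if both agree within a ppm
    tolerance; returns None when no formula mass matches."""
    for exact in formula_masses:
        if abs(measured_mz - exact) / exact * 1e6 <= ppm:
            return exact
    return None
```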
In the second training phase candidates were retrieved from a local PubChem [18] mirror (June 2016) using the monoisotopic mass of the correct candidate of each spectrum and a relative mass deviation dependent on the experimental conditions of the underlying MS measurement. To reduce runtime the correct and at most 500 randomly sampled candidates were processed from the retrieved list of candidates. The rank of the correct candidate was determined and the overall number of Top1 ranks was used as optimization criterion.
For the hyper parameters, the optimization was performed by a grid search over an initial domain consisting of all combinations of the values 0.0025, 0.0005 and 0.0001, resulting in a total of \(3^4=81\) sets of hyper parameters. If the optimal number of Top1 ranks was located at the border of this hyper parameter domain, the search space was extended by increasing or decreasing the parameter by a factor of 5 or 1/5, respectively. This procedure was continued until an optimum was found with an improvement of less than 1% compared to the previous optimum of Top1 ranks. For the score weights, a set of 1000 parameter combinations was sampled evenly distributed on the simplex. Consensus scores and the rankings of the correct candidates were calculated for all combinations of hyper parameters and weights, resulting in initially 81,000 combinations.
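The two search spaces can be sketched as follows. Whether the weights were sampled randomly or on a regular lattice is not specified in the text, so the Dirichlet-style sampler below is one plausible realization:

```python
import itertools
import random

def hyper_parameter_grid(values=(0.0025, 0.0005, 0.0001)):
    """Initial grid: all combinations of the three candidate values for
    the four hyper parameters (alpha, beta, alpha_L, beta_L), i.e.
    3**4 = 81 parameter sets."""
    return list(itertools.product(values, repeat=4))

def sample_simplex_weights(k=1000, dim=3, seed=0):
    """Draw k weight triples (w1, w2, w3) spread over the simplex by
    normalizing exponential variates (equivalent to a flat Dirichlet)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(k):
        e = [rng.expovariate(1.0) for _ in range(dim)]
        s = sum(e)
        samples.append(tuple(x / s for x in e))
    return samples
```

Crossing the 81 hyper parameter sets with the 1000 weight sets reproduces the 81,000 initial combinations mentioned above.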
Subsequent to this training procedure, the absolute frequencies \(N_{m\widetilde {f}}\) and \(N^{L}_{l\widetilde {f}}\) were recalculated using the entire training data set to increase the observation domain of assignments of m/z fragment peaks/losses to fragment-structures used for the processing of the challenge data set.
Fingerprint function
To investigate the effect of the fingerprint function MolFing on the results, the complete training phase was performed four times with different fingerprint functions for the same training spectra. For comparison the Lingo- [14], the MACCS- [13], the Circular- [19], and the GraphOnlyFingerprint were used. For calculation of the different fingerprints CDK (version 2.1) [20] implementations were used. The fingerprint with the best training result was selected for the processing of the challenge data set.
Processing of the CASMI challenge data set
After the training phase and the selection of the fingerprint function, the in silico fragmentation and scoring was performed for the 87 challenge spectra using the provided candidate lists. Candidates that included non-connected substructures or non-natural isotopes (like deuterium) were discarded from the candidate lists. The candidate ranking was performed after the removal of multiple stereoisomers, in compliance with the contest rules and evaluation. Stereoisomers were detected based on the first block of the candidates' InChIKey, representing the molecular skeleton, and only the best-scoring stereoisomer was considered for candidate ranking. The results were evaluated and compared on the basis of the Top1, Top3, and Top10 rankings and the median and mean rankings of the correct candidate, as in [12].
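The stereoisomer collapse can be sketched as follows, assuming candidates arrive as (InChIKey, score) pairs with higher scores better:

```python
def best_per_skeleton(candidates):
    """Collapse stereoisomers: keep only the best-scoring candidate per
    first InChIKey block (the molecular skeleton), then return the
    survivors ordered by decreasing score."""
    best = {}
    for inchikey, score in candidates:
        block = inchikey.split("-")[0]
        if block not in best or score > best[block][1]:
            best[block] = (inchikey, score)
    return sorted(best.values(), key=lambda c: -c[1])
```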
Stability of parameter optima and ranking results
Splitting of the training data set for the two phases was performed randomly. As the resulting parameters depend on the splitting, we performed ten independent trials with different splits of the training data. The resulting parameters and their performance on the challenge data set were reported to investigate the effect of randomization.
Comparison of different fingerprint functions
The ranking results obtained in the training phase on the basis of the different fingerprint functions (MolFing) are shown in Fig. 3. The fingerprints used are the Lingo-, MACCS-, Circular-, and GraphOnlyFingerprint. The training results are based on the spectra processed in the second phase during training, consisting of 1389 to 1471 spectra in positive and 255 to 279 spectra in negative mode, depending on the run and the split of the spectra.
Top rankings of training results. The Top rankings (Top1, Top3, Top10) of the ten training runs are shown for the different fingerprint functions. The results are based on the rankings of the correct candidates of the training data used in the second training phase, consisting of 1389 to 1471 spectra in positive mode (top) and 255 to 279 spectra in negative mode (bottom)
Comparable results are obtained with the Circular- and LingoFingerprint across both ion modes and across the different rankings, as shown in Fig. 3 by the similar curves for the Top1, Top3 and Top10 rankings. Similar means of the rankings across the ten runs confirm this observation, with 402.3, 639.8, and 881.2 for the mean Top1, Top3 and Top10 rankings using the Circular- and 398.4, 640.0 and 881.9 using the LingoFingerprint. These two fingerprint functions show superior results for the Top1 rankings compared to the MACCSFingerprint with 371.0 and the GraphOnlyFingerprint with 328.6. For the Top3 and Top10 rankings in positive mode, the MACCSFingerprint gives comparable results. Top3 and Top10 rankings in negative mode are comparable for all fingerprint functions.
With runs R07 in positive and R09 in negative mode, the CircularFingerprint shows the overall highest number of Top1 rankings, with 518 of the 1686 training spectra. Due to this performance, the CircularFingerprint was used for the subsequent investigations and the evaluation of the challenge data set.
Randomization of training data sets
In this section we evaluate the impact of the randomization of the training data on parameter optimization. Table 1 shows the optimal parameter sets and the performance achieved on the training data using the CircularFingerprint. As expected, the overall ranking results vary across the ten runs for the Top1, Top3 and Top10 numbers in both positive and negative ion mode. Boxplots of the parameter sets are shown in Fig. 4. The variation of the optimal hyper parameters as well as the weights shows a similar pattern for both positive and negative ion mode, with a larger variation observed in negative mode. In particular, the pseudo counts for annotated m/z fragment peaks show a broader variation, with 5e-04 to 2e-05 (α) and 1e-03 to 2e-05 (αL), compared to positive mode with 1e-04 as the optimum for α and an interval of 2e-03 to 1e-04 for αL.
Boxplots of optimal weight and hyper parameters retrieved in the training phase. The parameters were obtained from the ten training runs with randomized splits of the training set and the CircularFingerprint. The boxplots show the optimal weight and hyper parameters for positive and negative mode
Table 1 Ranking results in the training phase based on the CircularFingerprint
Table 2 Results for the 87 MS/MS test spectra from the CASMI 2016 Challenge taken from Table 7 in [12] augmented with the results of the proposed approach (MetFrag 2.4.5). For the participants of the challenge the best result is given
The largest of the weights combining the three scores is ω2, which gives the score \(S^{c}_{Peak}\) the largest influence in the overall assessment. The median of ω2 is 0.4855 in positive and 0.4935 in negative mode. The impacts of the original MetFrag score \(S^{c}_{MetFrag}\) and of \(S^{c}_{Loss}\) are distinctly lower and comparable to each other. The weight ω1 for the MetFrag score has a median of 0.2875 in positive and 0.2840 in negative mode. The medians for ω3 are 0.2355 and 0.2045, respectively.
In the following we analyze the robustness and the homogeneity of the results on the challenge data set with regard to varying parameters across the parameter space evaluated during optimization. This also helps to explain the deviation of the optimized parameters. Specifically, we compare the distribution of the Top1 rankings considering (i) the ten optimal parameter sets from the ten randomizations, (ii) the parameter sets within the convex hull constituted by these ten optimal parameter sets in the six-dimensional parameter space, and (iii) the complete parameter space evaluated during training of the parameters. The convex hull over the ten optimal parameter sets was calculated using the six degrees of freedom (α,β,αL,βL,ω1,ω2) from the seven parameters with the Python NumPy package.
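For intuition, membership of a parameter set in a convex hull can be tested via barycentric coordinates when the hull is a simplex. This simplified 2-D sketch stands in for the general 6-D hull computation, which would use a library routine (e.g. SciPy's ConvexHull/Delaunay):

```python
import numpy as np

def in_simplex(point, vertices):
    """Decide whether `point` lies in the convex hull of d+1 affinely
    independent `vertices` (a simplex) by solving for its barycentric
    coordinates; the point is inside iff all coordinates are
    non-negative."""
    V = np.asarray(vertices, dtype=float)        # shape (d+1, d)
    A = np.vstack([V.T, np.ones(len(V))])        # affine system
    b = np.concatenate([np.asarray(point, dtype=float), [1.0]])
    lam = np.linalg.solve(A, b)                  # barycentric coords
    return bool(np.all(lam >= -1e-9))
```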
Figure 5 shows in yellow the distribution of the Top1 rankings of the CASMI challenge data set for the complete parameter space. Top1 rankings vary from 1 to 12 for the positive and from 4 to 14 for the negative challenge spectra, where the maxima of the distributions are six and ten for positive and negative mode, respectively. If parameter sets are restricted to the convex hull, the distribution is clearly shifted towards better performance, with Top1 rankings varying between 8 and 11 for positive and 10 and 13 for negative mode. This range of Top1 rankings is almost identical to the one resulting from the ten optimal parameter sets; the only exception are parameter sets within the convex hull in negative mode that achieve nine Top1 rankings. In positive mode, about 76% of the investigated parameters show worse results than achieved by the parameters contained in the convex hull. For negative mode, this proportion is reduced to around 15%, which can again be explained by the smaller amount of available training data.
Distribution of Top1 rankings on the challenge data set. The collection of bar charts shows the Top1 rankings retrieved using the CircularFingerprint for selected parameter sets. Yellow bars show the normalized Top1 counts for all parameter sets used in the training phase. The green bars show the normalized rankings for all parameter sets within the convex hull spanned by the ten optimal parameter sets retrieved from the ten randomized training runs. The violet bars show the normalized counts from these optimal parameter sets. a Positive mode b Negative mode
For the subsequent comparison to other methods on the challenge data set, we use the parameter sets resulting in the best relative Top1 ranking performance in the training phase. The corresponding runs are highlighted in Table 1: R07 for positive and R09 for negative mode.
Comparison with MetFrag2.3
The main goal of integrating the proposed approach into MetFrag was to improve the candidate ranking by augmenting the fragmenter score with statistical scores. The MetFrag versions 2.3 and 2.4.5 use exactly the same in silico fragmentation approach; the MetFrag2.4.5 scoring was extended with the statistical scoring terms, which constitutes the difference between both versions in this comparison. The results of MetFrag version 2.4.5 show a drastic improvement of the rankings for the CASMI challenge data compared to its older version 2.3 with regard to all performance measures, as given in the first two columns of Table 2. The correct Top1 rankings show a more than fourfold increase from 5 to 21. The improvement is especially distinct for positive mode with 9 Top1 rankings, where MetFrag2.3 resulted in one single query correctly ranked at first position. The number of Top1 hits in negative mode is also increased threefold from 4 to 12. The improvement is also illustrated by the reduced mean and median ranks: while the mean rank halved to 34.6, the median rank was even reduced by two thirds to 5. All three scores contribute substantially to these improvements, and Top1 rankings vary smoothly with the score weights (see Additional file 1: Figure S1).
Comparison with other CASMI participants
The MetFrag2.4.5 results were compared to the results obtained by all other participants of CASMI 2016, i.e., CFM_retrain, CSI_IOKR_AR, and CSI:FID_leaveout (abbreviated by CFM-ID, CSI:IOKR, and CSI:FID), MS-Finder and MAGMa. Table 2 shows the original data from Table 7 of [12] with the ranking results for the 87 Challenge MS/MS spectra. The additional MetFrag2.4.5 column summarizes the results achieved using the new MetFrag statistical scoring terms.
In positive mode, MetFrag2.4.5 obtains nine Top1 rankings and shows a similar performance as CFM-ID (9) and CSI:IOKR (10). CSI:FID (13) outperforms all other approaches with regard to Top1 rankings in positive mode, but did not submit results for negative mode spectra. Figure 6b shows the overlap of the Top1 ranked challenges in positive mode for MetFrag2.4.5 and CSI:FID. There are only five challenges ranked first by both tools and thus a large degree of divergence between the correct predictions.
Overlap of the correctly identified Top1 spectra of the challenge data set for selected participants. The Venn diagram (a) includes the four tools using statistical approaches (MetFrag2.4.5, CFM-ID, CSI:IOKR, CSI:FID) and shows the overlap of correctly identified challenges out of the 87 spectra (positive and negative mode). The diagram (b) shows the overlap of CSI:FID and MetFrag2.4.5 for the positive mode challenges. The large numbers indicate the number of common challenges; the numbers listed underneath are their challenge IDs
For the negative mode spectra, MetFrag2.4.5 considerably outperformed all participants with 12 Top1 rankings. These are five more queries than MS-Finder could rank in first position and twice as many as the other statistical approaches CFM-ID and CSI:IOKR.
Considering the complete test data set, MetFrag2.4.5 outperforms all participants with regard to Top1, Top3, and Top10 rankings, including the declared winner of the contest, CSI:IOKR (Top1: 21, Top3: 38, Top10: 55 vs. Top1: 16, Top3: 26, Top10: 46). The improved results are also confirmed by the smaller median and mean rankings of 5 and 34.6 compared to 10 and 97.9. We note that, considering the median, CSI:FID shows a better performance than MetFrag2.4.5, but only submitted results for positive mode.
Figure 6a shows the overlap of correctly identified Top1 challenges of the participants that use statistical approaches. Interestingly, there is a relatively large number of challenges that are identified by only one of the approaches. With 10 challenges, MetFrag2.4.5 shows the highest number of unique queries ranked correctly in first place, which is predominantly caused by the eight Top1 negative mode challenges.
The results obtained by the combination of MetFrag's in silico fragmentation approach and statistical fragment annotation learning have shown an overall improvement of the ranking results on the relevant CASMI 2016 test set. Different fingerprint functions were tested to avoid solving the expensive graph isomorphism problem when matching fragments. The training phase revealed a dependency between the number of correct top hits and the fingerprint used. While the MACCS- and especially the Lingo- and the CircularFingerprint showed the best and also comparable results, the GraphOnlyFingerprint showed a significantly lower number of correct top rankings on the training set. We attribute the inferior performance of the GraphOnlyFingerprint primarily to its lack of bond-order representation, which encodes less chemical information than all other fingerprint types evaluated. Due to its best performance in the training phase, the CircularFingerprint was selected for further investigation on the test set.
Ten different hyper and weight parameter sets resulting from optimization with ten randomized splits of the training data were used to investigate the robustness and the distribution of these parameters across the different training sets. While the optima of the seven parameters varied slightly between the different splits, the parameter sets still showed a clear trend across all ten runs. In particular, the effect of the \(S^{c}_{Peak}\) score weight ω2 was predominantly higher compared to ω1 and ω3 for both positive and negative ion mode. The assumption that the observed parameter variation indicates a relatively broad and homogeneous parameter optimum was confirmed by investigating the ranking results retrieved using parameters located in the convex hull spanned by the ten optima. These distributions also indicate a high robustness of the performance with varying parameter sets across these parameter optima.
An important outcome of this study is the significant improvement of the ranking results achieved by adding the presented Bayesian approach to MetFrag's native in silico fragment annotation. While the improvement for the Top3 and Top10 rankings is less pronounced, this comparison impressively demonstrates the benefit of including statistical approaches in MS-based compound identification. This corresponds to the outcome of CASMI 2016, where a comparison of different statistical and non-statistical approaches was made [12].
The proposed Bayesian approach follows a different mechanism than the existing statistical compound identification methods, which predict molecular fingerprints (CSI:FingerID, CSI:IOKR) or MS/MS spectra (CFM-ID). The comparison of the different approaches on the CASMI 2016 test set used in this study shows, on the one hand, that the presented approach compares well to the existing ones and, on the other hand, that a relatively large number of challenges are identified by only one of the approaches (Fig. 6a). From the latter finding it may be concluded that the approaches have different preferences for certain types of spectra of the CASMI 2016 contest. The comparison also revealed that for MetFrag2.4.5 the performance is comparable between positive and negative mode (9 vs. 12), whereas CSI:IOKR shows lower ranking performance for the negative mode spectra compared to positive mode (6 vs. 10). We assume the combination of in silico fragmentation and statistical scoring has a positive effect when only limited training data is available. Only a small fraction of negative mode training data was available for this contest, which resulted in generally worse results of the statistical approaches in negative mode.
In this work, new statistical scoring terms are introduced to MetFrag. The model assesses the assignments of m/z fragment peaks/losses to fragment-structures derived from in silico fragmentation of a candidate and assumes independence of the individual assignments. The model parameters are estimated using the mean posterior approach. Hyper parameters of the statistical model as well as score weights are optimized by a grid search. The performance is evaluated on a subset of the CASMI 2016 contest challenge spectra for which the spectrum was not among the training data of any participant. The results show that with the integration of the two new statistical scoring terms the number of Top1 rankings of MetFrag could be improved fourfold. In addition, it showed a better performance than the declared winner of the contest, CSI:IOKR, regarding the number of correctly ranked Top1, Top3 and Top10 candidates. The new scoring terms are now available in the command line tool (version 2.4.5) as AutomatedPeakFingerprintAnnotationScore and AutomatedLossFingerprintAnnotationScore and also in the web interface (https://msbi.ipb-halle.de/MetFrag) as "Statistical Scoring", trained on an extended data set compared to the one used in this work. The additional scoring terms complement current scoring terms based on experimental data and can also be combined with additional meta information if available, as described in [3].
We also want to stress that once the method has been trained on the spectra of the training phase, it can be applied for annotation on any data set. The query data set can vary, whereas the training data set is fixed once the method has been trained; this is similar to all other machine learning and statistical methods mentioned in this work.
The m/z peak and candidate lists used in this study are available on the official CASMI website, http://www.casmi-contest.org/2016/index.shtml. A complete list of the used MassBank and GNPS training spectra and the ranking data sets generated during the current study are available on GitHub, https://github.com/c-ruttkies/metfrag_statistical_annotation. Further information on how to use the new scoring terms with the command line version of MetFrag can be found on the project website http://ipb-halle.github.io/MetFrag/projects/metfragcl. The source code is published on GitHub (https://github.com/ipb-halle/MetFragRelaunched (branch: feature/statistical\_scoring)).
CASMI:
Critical assessment of small molecule identification
CSI:FID:
CSI:FingerID
InChI:
International chemical identifier
Mean posterior
MS/MS:
Tandem mass spectrometry
m/z:
Mass-to-charge ratio
mzabs:
Absolute mass deviation
mzppm:
Relative mass deviation
SVM:
Support vector machine
MassFrontier. http://www.highchem.com/. Accessed 19 June 2018.
Wolf S, Schmidt S, Müller-Hannemann M, Neumann S. In silico fragmentation for computer assisted identification of metabolite mass spectra. BMC Bioinformatics. 2010; 11:148.
Ruttkies C, Schymanski EL, Wolf S, Hollender J, Neumann S. MetFrag relaunched: Incorporating strategies beyond in silico fragmentation. J Cheminformatics. 2016; 8(1):1.
Wang Y, Kora G, Bowen BP, Pan C. MIDAS: A database-searching algorithm for metabolite identification in metabolomics. Anal Chem. 2014; 86(19):9496–503.
Tsugawa H, Kind T, Nakabayashi R, Yukihira D, Tanaka W, Cajka T, Saito K, Fiehn O, Arita M. Hydrogen rearrangement rules: Computational MS/MS fragmentation and structure elucidation using MS–FINDER software. Anal Chem. 2016; 88(16):7946–58.
Ridder L, van der Hooft JJJ, Verhoeven S. Automatic Compound Annotation from Mass Spectrometry Data Using MAGMa. Mass Spectrom. 2014; 3(Special Issue 2):0033.
Allen F, Greiner R, Wishart D. Competitive fragmentation modeling of ESI-MS/MS spectra for putative metabolite identification. Metabolomics. 2015; 11:98.
Heinonen M, Shen H, Zamboni N, Rousu J. Metabolite identification and molecular fingerprint prediction through machine learning. Bioinformatics. 2012; 28(18):2333–41.
Dührkop K, Shen H, Meusel M, Rousu J, Böcker S. Searching molecular structure databases with tandem mass spectra using CSI:FingerID. Proc Natl Acad Sci. 2015.
Dührkop K, Shen H, Meusel M, Rousu J, Böcker S. Searching molecular structure databases with tandem mass spectra using CSI:FingerID. Proc Natl Acad Sci U S A. 2015; 112(41):12580–85.
Brouard C, Shen H, Dührkop K, d'Alché-Buc F, Böcker S, Rousu J. Fast metabolite identification with input output kernel regression. Bioinformatics. 2016; 32(12):28–36.
Schymanski EL, Ruttkies C, Krauss M, Brouard C, Kind T, Dührkop K, Allen F, Vaniya A, Verdegem D, Böcker S, Rousu J, Shen H, Tsugawa H, Sajed T, Fiehn O, Ghesquière B, Neumann S. Critical assessment of small molecule identification 2016: automated methods. J Cheminformatics. 2017; 9(1):22.
McGregor MJ, Pallai PV. Clustering of large databases of compounds: Using the mdl "keys" as structural descriptors. J Chem Inform Comput Sci. 1997; 37(3):443–8.
Vidal D, Thormann M, Pons M. Lingo, an efficient holographic text based method to calculate biophysical properties and intermolecular similarities. J Chem Inf Model. 2005; 45(2):386–93.
Heller SR, McNaught A, Pletnev I, Stein S, Tchekhovskoi D. InChI, the IUPAC International Chemical Identifier. J Cheminformatics. 2015; 7(1):23.
MassBank of North America. http://mona.fiehnlab.ucdavis.edu/. Accessed 8 Dec 2016.
Wang MX, Carver JJ, Phelan VV, Sanchez LM, Garg N, Peng Y, Nguyen DD, Watrous J, Kapono CA, Luzzatto-Knaan T, Porto C, Bouslimani A, Melnik AV, Meehan MJ, Liu WT, Criisemann M, Boudreau PD, Esquenazi E, Sandoval-Calderon M, Kersten RD, Pace LA, Quinn RA, Duncan KR, Hsu CC, Floros DJ, Gavilan RG, Kleigrewe K, Northen T, Dutton RJ, Parrot D, Carlson EE, Aigle B, Michelsen CF, Jelsbak L, Sohlenkamp C, Pevzner P, Edlund A, McLean J, Piel J, Murphy BT, Gerwick L, Liaw CC, Yang YL, Humpf HU, Maansson M, Keyzers RA, Sims AC, Johnson AR, Sidebottom AM, Sedio BE, Klitgaard A, Larson CB, Boya CA, Torres-Mendoza D, Gonzalez DJ, Silva DB, Marques LM, Demarque DP, Pociute E, O'Neill EC, Briand E, Helfrich EJN, Granatosky EA, Glukhov E, Ryffel F, Houson H, Mohimani H, Kharbush JJ, Zeng Y, Vorholt JA, Kurita KL, Charusanti P, McPhail KL, Nielsen KF, Vuong L, Elfeki M, Traxler MF, Engene N, Koyama N, Vining OB, Baric R, Silva RR, Mascuch SJ, Tomasi S, Jenkins S, Macherla V, Hoffman T, Agarwal V, Williams PG, Dai JQ, Neupane R, Gurr J, Rodriguez AMC, Lamsa A, Zhang C, Dorrestein K, Duggan BM, Almaliti J, Allard PM, Phapale P, Nothias LF, Alexandrovr T, Litaudon M, Wolfender JL, Kyle JE, Metz TO, Peryea T, Nguyen DT, VanLeer D, Shinn P, Jadhav A, Muller R, Waters KM, Shi WY, Liu XT, Zhang LX, Knight R, Jensen PR, Palsson BO, Pogliano K, Linington RG, Gutierrez M, Lopes NP, Gerwick WH, Moore BS, Dorrestein PC, Bandeira N. Sharing and community curation of mass spectrometry data with global natural products social molecular networking. Nat Biotechnol. 2016; 34(8):828–37. n/a.
Kim S, Thiessen PA, Bolton EE, Chen J, Fu G, Gindulyte A, Han L, He J, He S, Shoemaker BA, et al. PubChem Substance and Compound databases. Nucleic Acids Res. 2015; 44(D1):1202–13.
Rogers D, Hahn M. Extended-connectivity fingerprints. J Chem Inf Model. 2010; 50(5):742–54.
Willighagen EL, Mayfield JW, Alvarsson J, Berg A, Carlsson L, Jeliazkova N, Kuhn S, Pluskal T, Rojas-Chertó M, Spjuth O, Torrance G, Evelo CT, Guha R, Steinbeck C. The Chemistry Development Kit (CDK) v2.0: atom typing, depiction, molecular formulas, and substructure searching. J Cheminformatics. 2017; 9(1):33.
We thank all CASMI 2016 participants for generating and providing all result sets of their used software and methods. We acknowledge Emma Schymanski (Luxembourg Centre for Systems Biomedicine (LCSB), University of Luxembourg) for valuable discussions and proof-reading the manuscript. CR and SN acknowledge support from the Leibniz Association's Open Access Publishing Fund.
CR acknowledges funding from the European Commission for the FP7 project SOLUTIONS under Grant Agreement No. 603437 and for the H2020 project PhenoMeNal under Grant Agreement No. 654241. Funding bodies played no role in study design, data analysis and interpretation, or manuscript development.
Department Biochemistry of Plant Interactions, Leibniz Institute of Plant Biochemistry, Weinberg 3, Halle (Saale), 06120, Germany
Christoph Ruttkies
& Steffen Neumann
German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Deutscher Platz 5e, Leipzig, 04103, Germany
Steffen Neumann
Institute of Computer Science, Martin Luther University Halle-Wittenberg, Von-Seckendorff-Platz 1, Halle (Saale), 06099, Germany
Stefan Posch
Search for Christoph Ruttkies in:
Search for Steffen Neumann in:
Search for Stefan Posch in:
SP, SN, and CR contributed to method development, manuscript preparation and revision, and discussion. CR implemented all necessary changes to MetFrag and performed the data analysis to generate the presented results. All authors read and approved the final version of the manuscript.
Correspondence to Christoph Ruttkies.
SN is Associate Editor for BMC Bioinformatics.
Additional file 1
Figure S1 - Weight Parameter Scan for the test dataset. (PDF 767 kb)
Figure S2 - Maximum spectral similarities. (PDF 196 kb)
Figure S3 - Rankings of the correct candidates (test) vs. max. spectral similarity. (PDF 204 kb)
Table S1 - Notation summary. (PDF 109 kb)
Table S2 - Notation summary (Scores). (PDF 70.4 kb)
Ruttkies, C., Neumann, S. & Posch, S. Improving MetFrag with statistical learning of fragment annotations. BMC Bioinformatics 20, 376 (2019) doi:10.1186/s12859-019-2954-7 | CommonCrawl |
Power automorphism
In mathematics, in the realm of group theory, a power automorphism of a group is an automorphism that takes each subgroup of the group to within itself. It is worth noting that a power automorphism of an infinite group may not restrict to an automorphism on each subgroup. For instance, the automorphism of the group of rational numbers that sends each number to its double is a power automorphism even though it does not restrict to an automorphism on each subgroup.
Alternatively, power automorphisms are characterized as automorphisms that send each element of the group to some power of that element. This explains the choice of the term power. The power automorphisms of a group form a subgroup of the whole automorphism group. This subgroup is denoted as $Pot(G)$ where $G$ is the group.
A universal power automorphism is a power automorphism where the power to which each element is raised is the same. For instance, each element may go to its cube. Here are some facts about the powering index:
• The powering index must be relatively prime to the order of each element. In particular, it must be relatively prime to the order of the group, if the group is finite.
• If the group is abelian, any powering index works.
• If the powering index 2 or -1 works, then the group is abelian.
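These facts can be verified directly for small cyclic groups. The sketch below checks, for Z_12 (written additively, so the power map x → x^k becomes x → k·x mod 12), that the universal power map is an automorphism exactly when the powering index is coprime to the group order:

```python
from math import gcd

def is_power_map_automorphism(n, k):
    """Check whether the universal power map x -> k*x (mod n) is an
    automorphism of the cyclic group Z_n. It always respects the group
    operation, so it is an automorphism exactly when it is a bijection."""
    return {(k * x) % n for x in range(n)} == set(range(n))

# The powering index works iff it is relatively prime to the group order:
checks = [is_power_map_automorphism(12, k) == (gcd(k, 12) == 1)
          for k in range(1, 12)]
```

Since Z_12 is abelian, every coprime powering index indeed yields a power automorphism, in line with the facts above.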
The group of power automorphisms commutes with the group of inner automorphisms when viewed as subgroups of the automorphism group. Thus, in particular, power automorphisms that are also inner must arise as conjugations by elements in the second group of the upper central series.
References
• Subgroup lattices of groups by Roland Schmidt (PDF file)
\begin{document}
\begin{center}{\bf On overconvergent subsequences of closed to rows classical Pad{\'e} approximants}\end{center} \begin{center}{\sl Ralitza K.Kovacheva \\Institute of Mathematics and Informatics, Bulgarian Academy of Sciences,
Acad. Bonchev str. 8, 1113 Sofia, Bulgaria, [email protected]}\end{center}
\no {{\bf Abstract:} {\it Let $f(z) := \sum f_\nu z^\nu$ be a power series with positive radius of convergence. In the present paper, we study the phenomenon of overconvergence of sequences of classical Pad{\'e} approximants $\{\pi_{n,m_n}\}$
associated with $f,$ where $ m_n\leq m_{ n+1}\leq m_n+1$
and $m_n = o(n/\log n),$ resp. $m_n = o(n),$ as $n\to\infty.$
We extend classical results by J. Hadamard and A. A. Ostrowski related to
overconvergent Taylor polynomials, as well as results by G.
L{\'o}pez Lagomasino and A. Fern{\'a}ndez Infante concerning
overconvergent subsequences of a fixed row of the Pad{\'e} table.}}
\no{{\bf MSC:} 41A21, 41A25, 30B30}
\no{{\bf Key words:} {\it Pad{\'e} approximants, overconvergence,
meromorphic continuation, convergence in $\sigma$-content}.}
\no{\bf Introduction}
\no Let \begin{equation}f(z):= \sum_{j=0}^{\infty}{f_j}{z^j}\label{sum}\end{equation}
be a power series with positive radius of convergence
$ R_0(f):= R_0, \,R_0 > 0$. By $f$ we will denote not only
the sum of $f$ in $D_{R_0}:= \{z, |z| < R_0\}$ but also the holomorphic (analytic and single valued) function determined
by the element $(f, D_{R_0}).$ Fix a nonnegative integer $m\, (m\in\NN)$ and denote by $R_m(f):=R_m$ the {\it radius of $m$-meromorphy} of $f$: that is, the radius of the largest disk centered at the origin into which the power series $f$ admits a continuation as a meromorphic function with no more than $m$ poles (counted with their multiplicities). As is known (see \cite{gonchar1}), $R_m > 0$ iff $R_0 > 0.$ Analogously, we define {\it the radius of meromorphy} $R(f)$ as the radius of the largest disk $D_R$ into which $f$ can be extended as a function meromorphic in $\CC.$
Apparently, $R(f)\geq R_m \geq R_0$. We denote the meromorphic continuations again by $f.$
Given a pair $(n,m),\, n,m\in\NN$, let $\pi_{n,m}$ be the classical Pad{\'e} approximant of $f$ of order $(n,m)$. Recall that (see \cite{Pe}) ${\pi}_{n,m} = p/q$, where $p,q$ are polynomials of degrees $\leq n, m$, respectively, which satisfy $$(fq-p)(z) = O(z^{n+m+1}).$$ As it is well known (\cite{Pe}), the Pad{\'e}
approximant $\pi_{n,m}$ always exists and is uniquely determined by the conditions above.
Set $$\pi_{n,m}:= P_{n,m} /Q_{n,m},$$ where $P_{n,m}$ and $ Q_{n,m}$ are relatively prime polynomials (we write $(P_{n,m}, Q_{n,m}) = 1).$
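\no{\bf Remark:} For readers who wish to experiment, the defining conditions above translate into a small linear system for the denominator coefficients. The sketch below is an illustration only (the helper {\tt pade} is ad hoc and ignores the degenerate blocks of the Pad{\'e} table discussed later); it recovers the classical $[2/2]$ approximant of $e^z$.

```python
import numpy as np

def pade(c, n, m):
    """Compute the (n, m) Pade approximant from Taylor coefficients c[0..n+m].

    Returns (p, q): coefficient arrays of the numerator (degree <= n) and
    denominator (degree <= m, normalized by q[0] = 1), so that
    (f*q - p)(z) = O(z^{n+m+1}).  A minimal sketch: no treatment of the
    degenerate blocks of the Pade table.
    """
    c = np.asarray(c, dtype=float)
    # Linear system for q[1..m]: the coefficient of z^{n+k} in f*q
    # vanishes for k = 1..m (coefficients with negative index are zero).
    A = np.array([[c[n + k - j] if 0 <= n + k - j else 0.0
                   for j in range(1, m + 1)] for k in range(1, m + 1)])
    rhs = -c[n + 1:n + m + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: p_i is the coefficient of z^i in f*q, i = 0..n
    p = np.array([sum(q[j] * c[i - j] for j in range(min(i, m) + 1))
                  for i in range(n + 1)])
    return p, q

# [2/2] approximant of exp(z): (1 + z/2 + z^2/12) / (1 - z/2 + z^2/12)
c = [1, 1, 1/2, 1/6, 1/24]
p, q = pade(c, 2, 2)
assert np.allclose(p, [1, 1/2, 1/12]) and np.allclose(q, [1, -1/2, 1/12])
```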
We recall the concept of {\it convergence in $\sigma$-content }
(cf. \cite{Go3}).
Given a set $e\subset \CC,$ we put
\begin{gather*}
\sigma(e) := {\rm inf} \left\{\sum_\nu |V_\nu| \right\}
\end{gather*}
where the infimum is taken over all coverings $\{\bigcup V_\nu\}$ of $e$ by disks and $|V_\nu|$ is the diameter of the disk $V_\nu$.
Let $\Omega$ be an open set in $\CC$ and $\varphi$ a function defined in
$\Omega$ with values in $\overline{\CC}$. The sequence of functions $\{\varphi_n\}$, rational in $\Omega$, is said to converge in {\it $\sigma$-content to a function $\varphi$ inside $\Omega$} if for each compact set $K\subset\Omega$ and each $\varepsilon > 0$ we have $\sigma\{z\in K,\, |\varphi_n(z) - \varphi(z)| > \varepsilon \} \to 0$ as $n\to\infty.$ The sequence $\{\varphi_n\}$ converges to $\varphi,$ as $n\to\infty,$ {\it $\sigma$-almost uniformly inside $\Omega$,} if for any compact set $K \subset \Omega$ and every $\varepsilon > 0$ the sequence $\{\varphi_n\}$ converges to $\varphi$ uniformly in the $\max$-norm on a set of the form $K\setminus K_\varepsilon,$ where $\sigma(K_\varepsilon) < \varepsilon$. Analogously, we define {\it convergence in Green's capacity} and {\it convergence almost uniformly in Green's capacity} inside $\Omega$. It follows from Cartan's inequality $\hbox{cap}(e)\geq C \sigma (e)$
(see \cite{landkoff}, Chp.3) that convergence in capacity implies
$\sigma-$convergence. The reader is referred for details to \cite{Go3}.
The next result may be found in \cite{gonchar1}.
\no{\bf Theorem 1, (\cite{gonchar1}):} {\it Given a power series (1) and a fixed integer $m\in\NN,$ suppose that $0 < R_m< \infty.$
Then the sequence $\{\pi_{n,m}\},\, n\to\infty, m-$fixed converges $\sigma-$almost uniformly to $f$ inside $D_{R_m}$ and
$$\limsup_{n\to\infty} \Vert f - \pi_{n,m}\Vert_{K(\varepsilon)}^{1/n} = \frac{\max_K |z|}{R_m}$$
for any compact subset $K$ of $D_{R_m}$ and $\varepsilon > 0.$ }
\no(here $\Vert\cdot\Vert_K$ stands for the $\max$-norm on $K$.)
Theorem 1 generalizes the classical result of Montessus de Ballore about rows in the Pad{\'e} table (\cite{montessus}).
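\no{\bf Remark:} Theorem 1 is easy to observe numerically. In the sketch below (an illustration under the stated assumptions, with ad hoc helper names), $f(z) = 1/((1-z)(2-z))$ has $R_0 = 1$ and $R_1 = 2$; the error of the row $\{\pi_{n,1}\}$ at $z = 1/2$ decays roughly like $(|z|/R_1)^n = 4^{-n}$, and the free pole approaches $z = 1$, in accordance with the theorem of Montessus de Ballore.

```python
def pade_row1(c, n):
    """(n, 1) Pade approximant: q(z) = 1 + q1*z kills the z^{n+1} term of f*q."""
    q1 = -c[n + 1] / c[n]
    p = [c[0]] + [c[i] + q1 * c[i - 1] for i in range(1, n + 1)]
    return p, q1

# Taylor coefficients of f(z) = 1/((1-z)(2-z)) = 1/(1-z) - 1/(2-z):
c = [1 - 2.0 ** -(j + 1) for j in range(20)]
f_half = 1 / ((1 - 0.5) * (2 - 0.5))          # f(1/2) = 4/3

def err(n):
    p, q1 = pade_row1(c, n)
    num = sum(pj * 0.5 ** i for i, pj in enumerate(p))
    return abs(f_half - num / (1 + q1 * 0.5))

# geometric decay of the error inside D_2, and the free pole tends to z = 1
assert err(12) < err(8) < 1e-3
p, q1 = pade_row1(c, 12)
assert abs(-1 / q1 - 1.0) < 1e-3
```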
{ In the present paper, we will be concentrating on
the case $\limsup_{n\to\infty}m_n = \infty$. If the sequence
$\{m_n\}$ increases ``slowly enough'', i.e., if $m_n = o(n)$ (resp. $m_n = o(n/\log n)$
as $n\to\infty,$) then the following result is valid:
\no{\bf Theorem 2, (\cite{Go3}, Chpt.3):}
{\it Given $f$ with $0 < R(f) < \infty,$ let $m_n = o(n/\log n),\, n\to\infty$.
Then the sequence $\{\pi_{n,m_n}\}$ converges
$\sigma-$almost uniformly to $f$ inside $D_{R(f)}.$
In case $m_n = o(n), \, n\to\infty$, the sequence $\{\pi_{n,m_n}\}$
converges to $f$ in Green's capacity inside $D_{R(f)}.$
For any compact set $K\subset D_{R(f)}$ and any $\varepsilon > 0$
$$\limsup_{n\to\infty} \Vert f - \pi_{n,m_n}\Vert_{K(\varepsilon)}^{1/n} \leq \frac{\max_K |z|}{R(f)}.$$ }
In \cite{blkov}, the question about specifying the speed of convergence above was posed. It was shown that for a class of functions the following result is valid:
\no{\bf Theorem 3, (\cite{blkov}):} {\it Given $f$ with $0 < R(f) < \infty,$ let $ m_n\leq n, m_n\leq m_{n+1}\leq m_n+1,\, m_n=o(n/\log n),\, n\to\infty. $
Suppose that $f$ has a multivalued singularity on $\partial{D_{R(f)}}.$
Then the sequence $\{\pi_{n,m_n}\}$ converges $\sigma-$almost uniformly to $f$
inside the disk $D_{R(f)}$ and \beq\limsup_{n\to\infty} \Vert f -
\pi_{n,m_n}\Vert_{K(\varepsilon)}^{1/n} =\frac{\max_{z\in K} |z|}{R(f)}\label{blattkov} \eeq for every compact set $K\subset D_{R(f)}$ and every $\varepsilon
>0$.}}
{ Research devoted to imposing weaker conditions on the growth of the sequence $\{m_n\}$ as $n\to\infty$ was carried out by H.P. Blatt.
It follows from his results that the statement of Theorem 2 remains valid if $m_n = o(n)$ as $n\to\infty.$ Furthermore, the sequence
$\{\pi_{n,m_n}\}$ converges almost uniformly to $f$ in capacity inside $D_{R(f)}$ (
see the comprehensive paper \cite{blatt}).
Let now the sequence $\{m_n\}$ of positive integers satisfy
the conditions $m_n\leq n,\, m_n\leq m_{n+1}\leq m_n+1$.
Set $$\pi_{n,m_n} = \frac{P_{n,m_n}}{Q_{n,m_n}}:=\pi_n = P_n/Q_n,$$
{where} $(P_{n},Q_{n}) = 1;$ $\hbox{deg}P_n\leq n,\,\hbox{deg}Q_n \leq m_n.$
Denote by $\tau_{n,m_n}:=\tau_n$ {\it the defect}
of $\pi_n;$ that is $\hbox{min}(n-\hbox{deg}P_n, m_n-\hbox{deg}Q_n). $
Then the order of the zero of $f(z)-\pi_{n}$ at $z = 0$ is not less than $n+m_n+1-\tau_{n}$ (see \cite{Pe}); in other words
\beq f(z)-\pi_{n}(z) = O(z^{n+m_n+1-\tau_{n}})\label{0001}.\eeq
Following the terminology of G. A. Baker, Jr. and P. Gr. Morris (see \cite{baker}, p. 31),
we say that the rational function $\pi_{n}$ exists iff $\tau_{n} = 0.$
The zeros $\zeta_{n,l}, 0 \leq l\leq m_n$ of the polynomial $Q_n$ are called {\it free poles} of the rational function $\pi_{n}.$ Let $\mu_n$ be the exact degree of $Q_n,\, \mu_n\leq m_n.$ We shall always normalize $Q_{n}$ by the condition \begin{equation} Q_{n}(z) =
\prod(z-\zeta_{n,l}^*)\prod(1 - \frac{z}{\tilde\zeta_{n,l}}) \label{a1}\end{equation}
where $|\zeta_{n,l}^*| < 2R(f)$ and $|\tilde\zeta_{n,l}| \geq 2R(f).$
Set
\begin{equation}P_{n}(z) = a_n z^{\hbox{deg} P_n} +\cdots.\label{a2}\end{equation}
{Suppose that $\tau_n > 0$ for some $n\in\NN$ (cf. (\ref{0001})). Then, by the block structure of the Pad{\'e} table }(see \cite{Pe})
{ $\pi_{n-l,m_n-k}\equiv \pi_{n,m_n}$
if $\max(k, l) \leq \tau_{n}$. Suppose that $f(z)-\pi_n(z) = B_nz^{n+m_n+1-\tau_n}+\cdots$ with $B_n\not= 0$.
Then $\tau_{n+1} = 0$ and
$\pi_n\not= \pi_{n+1}.$ }}
The definition of Pad{\'e} approximants leads to \begin{equation}\pi_{n+1}(z) - \pi_{n}(z) = A_{n}\frac{z^{n+m_n+1-\tau_n}}{Q_n(z)Q_{n+1}(z)}, \label{b2}\end{equation}
where \beq A_{n} = \left\{\begin{array}{ll} a_{n+1}(\prod\frac{-1}{\tilde\zeta_{n, k}})-a_{n}(\prod\frac{-1}{\tilde\zeta_{n+1, k}}),& m_{n+1}=m_n+1\\ a_{n+1}(\prod\frac{-1}{\tilde\zeta_{n, k}}),& m_{n+1} = m_n\\ \end{array} \right. \label{c1}\eeq
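For completeness, we recall how (\ref{b2}) is obtained in the nondegenerate case $\tau_n = \tau_{n+1} = 0$: writing
$$\pi_{n+1}-\pi_{n}=\frac{P_{n+1}Q_{n}-P_{n}Q_{n+1}}{Q_{n}Q_{n+1}},$$
the numerator is a polynomial of degree at most $\max(n+1+m_n,\, n+m_{n+1})\leq n+m_n+1;$ on the other hand, by (\ref{0001}) both $f-\pi_{n}$ and $f-\pi_{n+1}$ vanish at $z=0$ to order at least $n+m_n+1,$ hence so does their difference. A polynomial of degree $\leq n+m_n+1$ vanishing at the origin to order $n+m_n+1$ is a constant multiple of $z^{n+m_n+1},$ which is (\ref{b2}).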
It was shown in \cite{gonchar1}, Eq. 33 (see also \cite{vavprsue})
that for a fixed $m\in \NN$ the Pad{\'e} approximant
$\pi_{n,m}, m-$ fixed converges, as $n\to\infty$, together with the series
$\sum_{n=1}^{\infty}\frac{A_nz^{n+m+1-\tau_n}}{Q_n(z)Q_{n+1}(z)}$,
i.e.,
$$f(z) - \pi_{n,m}(z) = \sum_{k=n}^{\infty}\frac{A_kz^{k+m+1-\tau_k}}{Q_k(z)Q_{k+1}(z)},$$
where $\limsup|A_n|^{1/n} = 1/R_m$.
It is easy to check that
under the conditions of Theorem 3
an analogous result holds also
for sequences $\{\pi_{n,m_n}\},\, \{m_n\} -$ as in Theorem 3 (compare with (\ref{b22}) below).
In other words,
\begin{equation}f(z) - \pi_n(z) = \sum_{k=n}^\infty \frac{A_kz^{k+m_k+1-\tau_k}}{Q_k(z)Q_{k+1}(z)}\label{series}\end{equation} and $$\limsup_{n\to\infty} |A_n|^{1/n} = 1/R(f).$$ It follows from (\ref{c1}) that, under the above conditions on the growth of the sequence $\{m_n\}$ as $n\to\infty$ $$\limsup_{n\to\infty} |a_n|^{1/n} = 1/R(f)$$
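The representation (\ref{series}) is the telescoped form of (\ref{b2}): for every $z$ at which the series converges and $\pi_N(z)\to f(z)$ (by Theorem 2, this holds for $\sigma$-almost every $z$ in $D_{R(f)}$),
$$f(z)-\pi_{n}(z)=\lim_{N\to\infty}\bigl(\pi_{N}(z)-\pi_{n}(z)\bigr)=\sum_{k=n}^{\infty}\bigl(\pi_{k+1}(z)-\pi_{k}(z)\bigr)=\sum_{k=n}^{\infty}\frac{A_{k}z^{k+m_k+1-\tau_k}}{Q_{k}(z)Q_{k+1}(z)}.$$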
Based on the block structure of the Pad{\'e} table (see \cite{Pe}), we will assume throughout the paper that
$\tau_{n} = 0$ for all $n\in\NN.$ Also, for the sake of simplicity,
we assume that $\hbox{deg} P_n = n$ for all $n\in\NN$.
{ Let $f(z)=\sum_{n=0}^\infty f_nz^n$ be given and suppose that $0 < R(f) < \infty.$ Let $m_n = o(n/\ln n),\,m_n\leq m_{n+1}\leq m_n+1,\, n\to\infty.$ Set, as before, $\pi_n:=\pi_{n,m_n}.$ Suppose now that a subsequence $\{\pi_{n_k}\},\,n_k\in\Lambda\subset\NN,$ converges $\sigma$-almost uniformly inside some domain $U$ such that $U\supset D_{R(f)}$ and $\partial U\bigcap \partial D_{R(f)} \not= \emptyset.$ Following the classical terminology related to power series (\cite{ostrowski}), we say that $\{\pi_{n_k}\}_{n_k\in\Lambda}$ is {\it overconvergent}.} The original definitions and results, given for overconvergent sequences of Taylor polynomials, may be found in \cite{ostrowski}.
\no{\bf Theorem 4, \cite{ostrowski}, {\cite{ostrowski1}}:} {\it Given a power series $f =\sum f_nz^n$ with radius of holomorphy $R_0, 0 < R_0 < \infty$ and sequences $\{n_k\}$ and $\{n_k'\}$ with $n_k < n_k'\leq n_{k+1},\, k = 1, 2 ...., $ suppose that
\no either
\no a) $$f_n = 0\,\, \hbox{for}\,\, n_k < n \leq n_k'$$ and $$n_k/n_k' \to 0,\, k\to\infty.$$
\no or
\no b) $$\limsup_{k\to\infty} n_k/n_k' < 1$$ and $$\limsup_{n\in \bigcup_{k}(n_k, n_k']} |f_n|^{1/n} < 1/R_0.$$
Then
\no a) the sequence of Taylor sums $\{S_{n_k}\}$ converges to $f$, as $n_k\to\infty$, uniformly in the $\max$-norm inside the largest domain in $\CC$ into which $f$ is analytically continuable.
or
\no b) $\{S_{n_k}\}$ converges uniformly to $f$ inside neighborhoods of all regular points of $f$ on $\Gamma_{R_0}$.}
\no(here $S_n(z) = \sum_{\nu=0}^n f_\nu z^\nu.$)
Ostrowski's theorem was extended to Fourier series associated with orthogonal polynomials in \cite{rkk1} and to infinite series of Bessel and of multi-index Mittag-Leffler functions in \cite{paneva}.
Before presenting the next result, we introduce $G(f)$ as the {\it largest domain in $\CC$ into which $(f, D_{R_0})$ given by (1) admits a meromorphic continuation.} More exactly, $G(f)$ consists of the points reached by the analytic continuation of the element $(f, D_{R_0})$ together with the points which are poles of the corresponding analytic function. Obviously, $D_{R(f)}\subseteq G(f).$ Further, we say that the point $z_0\in \Gamma_{R_m},$ resp. $z_0\in \Gamma_{R(f)},$ is {\it regular} if $f$ is either holomorphic or meromorphic in a neighborhood of $z_0.$
\no{\bf Theorem 5, \cite{guillermo}:} {\it Let $f(z)$ be a power series with positive radius of convergence and $m\in\NN$ be a fixed number. Suppose that $R_m < \infty.$ Suppose that there are infinite sequences $\{n_k\}$ and $\{n_k'\},\, n_k < n_k'\leq n_{k+1},\, k = 1,2,\ldots$ such that $$\pi_{n,m} = \pi_{n_k, m}\,\, \hbox{ for}\,\, n_k < n \leq n_k'.$$
Suppose, further, that either
\no a) $$ \lim_{k\to\infty}\frac{n_{k}}{n_k'} = 0$$
\no or
\no b) $$\limsup_{k\to\infty}\frac{n_{k}}{n_k'} < 1.$$
Then
\no a) The sequence $\{\pi_{n_k,m}\}$ converges to $f$, as $n_k\to\infty$, $\sigma-$ almost uniformly inside $G(f);$
or
\no b) $\{\pi_{n_k,m}\}$
converges to $f$, as $n_k\to\infty$, $\sigma-$ almost uniformly in a neighborhood of each point $z_0\in \Gamma_{R(f)}$ at which $f$ is
regular }.
}
{ The results of \cite{guillermo} have been extended in \cite{spain} to the $m-$th row of a large class of multipoint Pad{\'e} approximants, } associated with regular compact sets $E$ in $\CC$ and regular Borel measures supported by $E.$
\no {\bf 2. Statement of the new results}
In the present paper, we prove
\no{\bf Theorem 6:} {\it Given a power series $f$ with $R(f)\in (0,\infty)$ and a sequence of integers $\{m_n\}$ such that $m_n\leq n,\, m_n\leq m_{n+1}\leq m_n+1,\, m_n = o(n),\, n\to\infty,$ assume that the subsequence $\{\pi_n\}_{n\in\Lambda},\, \Lambda\subset \NN,$ converges to a holomorphic, resp. meromorphic, function in $\sigma$-content
inside some domain $W$ such that $W\bigcap D_{R(f)}^c \not= \emptyset$.
Then $$\limsup_{n\in\Lambda} \vert a_n\vert^{1/n} < 1/R(f).$$}
\no{\bf Remark:} If $m_n = m$ for all $n\in\NN$, then under the conditions of Theorem 6
$$\limsup_{n\in\Lambda}\vert A_{n-1}\vert^{1/n} < 1/R(f).$$
\no{\bf Theorem 7:} {\it Given the power series $f$ with $0 < R(f) < \infty$ and a sequence of integers $\{m_n\},\, m_n\leq n,\, m_n\leq m_{n+1}\leq m_n+1,\, m_n = o(n/\log n)$ as $n\to\infty$, suppose that $f$ is regular at the point $z_0\in \Gamma_{R(f)}$. Suppose, also, that there exist increasing sequences $\{n_k\}$ and $\{n_k'\},$ $n_k < n_k'\leq n_{k+1},$ such that $\limsup_{k\to\infty}\frac{n_{k}}{n_k'} < 1$ and $\limsup_{n\in \bigcup_{k}[n_k,n_{k}']} |a_{n}|^{1/n} < 1/R(f).$ Let \beq\liminf_{k\to\infty}\frac{n_{k}}{n_k'} > 0.\label{end}\eeq Then there is a neighborhood $U$ of $z_0$ such that the sequence $\{\pi_{n_k'}\}$ converges to the function $f$ $\sigma$-almost uniformly inside $D_{R(f)}\bigcup U.$}
The next result extends Theorem 5 to close-to-row sequences of classical Pad{\'e} approximants.
\no{\bf Theorem 8:} {\it Let $f$ be given by (1), $0 < R(f) < \infty$ and $\{m_n\}$ be as in Theorem 7. Assume that $ n_k < n_k' \leq n_{k+1},\, k = 1, 2, ...$ and
\beq \pi_{n} = \pi_{n_k}\,\hbox{as}\,\,n\in\bigcup_k(n_k, n_{k}'].\label{Th02}\eeq
Assume, further, that either
a) \beq \hspace{1.2cm}n_k/n_k'\to 0\,\,\hbox{as} \, k\to\infty\label{Th301}\eeq
\no or
b) \beq \hspace{1.2cm}\limsup_{k\to\infty} n_k/n_k' < 1\label{R}\eeq and $f$ is regular at the point $z_0\in \Gamma_{R(f)}$.
Then
\no a) the sequence $\{\pi_{n_k}\}$ converges to $f$ $\sigma-$almost uniformly inside $G(f)$
or
\no b) there exists a neighborhood $U$ of $z_0$ such that the sequence $\{\pi_{n_k}\}$ converges to the function $f$ $\sigma-$almost uniformly inside $D_{R(f)}\bigcup U.$
}
At the end, we provide a result dealing with overconvergent subsequences of the
$m$th row of the classical Pad{\'e} table.
\no{\bf Theorem 9:} {\it Let $f$ be given,
$m\in\NN$ be fixed and $ R_m(f):= R_m \in (0, \infty).$ Suppose that the subsequence $\{\pi_{n_k,m}\},\, m-$fixed,
converges, as $n_k\to\infty$, $\sigma$-almost uniformly inside a domain $U\supset D_{R_m}, \,\partial U\bigcap \Gamma_{R_m} \not=
\emptyset.$
Then
there exists a sequence
$\{l_k\},\, l_k\in\NN,\, 0\leq l_k< n_k$
such that for $n_k-l_k\leq \nu\leq n_k$ $$\limsup_{\nu\in
{\bigcup_{k=1}^\infty} [n_k-l_k,n_k]}\vert a_{\nu}\vert^{1/\nu} < 1/R_m. $$ }
\no{\bf 3. Proofs}
\no{\bf Auxiliary}
Given an open set $B$ in $\CC,$ we denote by ${\cal A}(B)$ the class of analytic and single valued functions in $B.$
We recall that a function $g$ is meromorphic at some point $z_0$, if there is a neighborhood $U$ of $z_0$ where $g$ is meromorphic, i.e. $g = \frac{G}{q}$ as $z\in U$, where $G\in{\cal A}(U),\, G(z_0)\not= 0$ and $q$ is a polynomial with $q(z_0)= 0.$ We will use the notation $g\in{\cal M}(U).$
In the sequel, $D_{R},\,R > 0 $ stands for the open disk $\{z, |z| < R\};\, \Gamma_{R}:= \partial D_{R}$, respectively; $D_1:=D,\, \Gamma:= \partial D$.
With the normalization (\ref{a1}) we have \begin{equation}\Vert Q_n\Vert_K:= \max_{z\in K}|Q_n(z)|\leq C^{m_n},\, n\geq n_0\label{b1}\end{equation} for every compact set $K\subset \CC$, where $C=C(K),\, 0 < C < \infty,$ is independent of $n$. Under the condition $m_n = o(n),\, n\to\infty,$ we have, for every $\Theta > 0$ and $n$ large enough, \beq\Vert Q_n\Vert_K \leq \tilde C e^{n\Theta},\, n\geq n_0(\Theta)\label{0002}.\eeq
In what follows, we will denote by $C$ positive constants, independent of $n$ and different at different occurrences (they may depend on all other parameters involved). The same convention applies to $C_i,\, i = 1,2,\ldots$
We take an arbitrary $\varepsilon > 0$ and define the open sets \beq\begin{array}{lll} \Omega_n(\varepsilon):=\bigcup_{l\leq \mu_n}(z, |z-\zeta_{n,l}|<\frac{\varepsilon}{6\mu_nn^2}),&n\geq 1\\
\hbox{and}&\\
\Omega(\varepsilon): =\bigcup_{n}\Omega_n(\varepsilon).\\
\end{array}\label{begin}
\eeq
We have $\sigma(\Omega(\varepsilon)) < \varepsilon$ and $\Omega(\varepsilon_1) \subset \Omega(\varepsilon_2)$ for $\varepsilon_1<\varepsilon_2.$
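Indeed, summing the diameters of the defining disks gives the stated bound: $\Omega_n(\varepsilon)$ is covered by at most $\mu_n$ disks of diameter $\varepsilon/(3\mu_n n^2),$ whence
$$\sigma(\Omega(\varepsilon))\leq\sum_{n\geq 1}\sigma(\Omega_n(\varepsilon))\leq\sum_{n\geq 1}\mu_n\cdot\frac{\varepsilon}{3\mu_n n^2}=\frac{\varepsilon}{3}\sum_{n\geq 1}\frac{1}{n^2}=\frac{\pi^2}{18}\,\varepsilon<\varepsilon.$$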
For any set $K\subset \CC$ we put $K(\varepsilon):= K\setminus \Omega(\varepsilon).$
{ Let $m_n = o(n/\ln n)$ as $n\to\infty$ and $\Theta$ be a fixed positive number. Then, as it is easy
to check
\begin{equation} 1/\min_{z\in K(\varepsilon)}|Q_n(z)| \leq C e^{n\Theta},\,n\geq n_0(K) \label{b22} \end{equation}
for any compact set $K\subset \CC$ and $\varepsilon > 0$.
If $m_n = m$ for every $n$, then
$$1/\min_{z\in K(\varepsilon)}|Q_n(z)| \leq C,\, n \geq n_0(K) $$
}
We recall in brief the properties of the convergence in $\sigma$-content. { Let $\Omega$ be a domain and $\{\varphi_n\}$ a sequence of rational functions converging in $\sigma$-content to a function $\varphi$ inside $\Omega$. If $\varphi_n\in{\cal A}(\Omega)$ for all $n$, then $\{\varphi_n\}$ converges uniformly in the $\max$-norm inside $\Omega.$
If $\varphi$ has $m$ poles in $\Omega$, then each $\varphi_n$ with $n$ large enough has at least $m$ poles in $\Omega$;
if each $\varphi_n$ has no more than $m$ poles in $\Omega$, then so does the function $\varphi$. For details, the reader is referred to
\cite{Go3}}.
\no{\bf Proof of Theorem 6}
As it follows directly from (\ref{0002}) and from Theorem 2, \begin{equation}\limsup_{n\to\infty} \Vert P_n \Vert_K^{1/n} = 1\label{a5}\end{equation} for every compact set $K\subset D_{R(f)}$. Set $$v_n(z):= \frac{1}{n}\log{\vert\frac{ P_n(z) }{z^n}}\vert.$$ Let $\Theta$ be a fixed positive number with $e^\Theta < R(f).$ The functions $v_n$ are subharmonic in $D_{R(f)}^c$; hence, by the maximum principle (see \cite{safftotik}) and by
(\ref{a5}) \begin{equation} v_n(z) \leq \log(\frac{{e^\Theta}}{R(f)}),\,n\in\NN,\, z\in D_{R(f)}^c, n\geq n_1. \label{a6} \end{equation}
Let now $U_j, U_j\subset W, j = 1, 2$ be concentric open disks of radii $0 < r_1< r_2,$ respectively, and not intersecting the closed disk $\overline{D_{R(f)}}.$
We argue by contradiction: suppose that the inequality $$\limsup_{n\in\Lambda}|a_{n}|^{1/n} < 1/R(f)$$ fails.
Then, since $\limsup_{n\to\infty}|a_n|^{1/n} = 1/R(f),$ there is a subsequence of $\Lambda$, which we denote again by $\Lambda$, such that
\begin{equation}\lim_{n\in\Lambda}|a_{n}|^{1/n} = 1/R(f).\label{A}\end{equation}
Fix an $\varepsilon$ with $r_2-r_1 >4\varepsilon > 0.$ Under the conditions of the theorem, the sequence $\{\pi_n\}_{n\in\Lambda}$ converges in $\sigma$-content inside $U_2$; let $g$ denote the limit function. Select a subsequence $\tilde\Lambda\subset \Lambda$ such that $$\sigma\{z\in U_2\setminus U_1, |\pi_{n_k'}(z) - g(z)| > \varepsilon\}\leq \varepsilon/2^k,\, n_k'\in\tilde\Lambda.$$ Set $B_{n_k'}:= \{z\in U_2\setminus U_1, |\pi_{n_k'}(z) - g(z)| > \varepsilon\}$ and $B':=\bigcup_{n_k'\in\tilde\Lambda} B_{n_k'}.$ We have $$\sigma(B') \leq \sum_{k=1}^{\infty}\sigma(B_{n_k'}) < \varepsilon.$$ By the principle of the circular projection (\cite{goluzin}, p. 293, Theorem 2), there is a circle $F$, lying in the annulus $U_{2}\setminus U_{1}$ and concentric with $\partial U_j,\,j=1,2,$ such that $F\bigcap B' = \emptyset.$ Hence,
$$||P_{n_k'}(z)||_F\leq C_1^{m_{n_k'}},\, n_k'\geq n_1,\,n_k'\in\tilde\Lambda$$ which yields
$$\Vert P_{n}\Vert_{\overline U_1}\leq C_1^{m_{n}},\, n\geq n_2\geq n_1,\, n\in\tilde\Lambda$$
Select now a number $r$ in such a way that the circle $\Gamma_r$ intersects the disk $U_{1}$ and set $\gamma:= \overline U_{1}\bigcap \Gamma_r.$ By construction, $\gamma$ is an analytic arc lying in $\overline U_{1}$. Applying the maximum principle to the last inequality, we get $$ \Vert P_{n_k'}\Vert_\gamma \leq C_1^{m_{n_k'}},\, n_k'\geq n_3\geq n_2,\, n_k'\in\tilde\Lambda.$$ Therefore, \begin{equation} v_{n_k'}(z) \leq \Theta - \log r,\, n_k'\in\tilde\Lambda,\, z\in \gamma,\, n_k'\geq n_4\geq n_3. \label{a7} \end{equation} Fix now a number $\rho > R(f)$ such that the circle $\Gamma_\rho$ does not intersect the closed disk $\overline U_2.$ By the two constants theorem (\cite{goluzin}, p. 331) applied to the domain $D_{R(f)}^c \setminus \gamma,$ there is a positive constant $\alpha =\alpha(\rho),\, \alpha < 1,$ such that $$\Vert v_{n_k'} \Vert_{\Gamma_{\rho}} <\alpha\Vert v_{n_k'}\Vert_{\Gamma_{R(f)}} + (1-\alpha)\Vert v_{n_k'}\Vert_\gamma.$$
$$\Vert v_{n_k'}\Vert_{\Gamma_{\rho}}\leq \alpha(\log r
- \log R(f)) + \Theta - \log r,\, n_k'\geq n_4, n_k'\in\tilde\Lambda.$$ Hence,
$$\limsup_{n\to\infty,n\in\tilde\Lambda} v_{n}(\infty) \leq \alpha(\log r
- \log R(f)) + \Theta - \log r.$$ After letting $\Theta$ tend to zero, we get $$\limsup_{n\to\infty, n\in\tilde\Lambda} v_{ n}(\infty) \leq \alpha(\log r
- \log R(f))- \log r < - \log R(f).$$ The last inequality contradicts (\ref{A}), since $$ \log\vert a_{n} \vert^{1/n}= v_{n}(\infty). $$
\no This completes the proof of Theorem 6.
{Q.E.D.}
\no{\bf Proof of Theorem 7}
As known, the Pad{\'e} approximants are invariant under linear transformation, therefore without loss of generality, we may assume that $R(f) = 1$ and $z_0 = 1.$ Under the conditions of the theorem, there is a neighborhood of $1$, say $V$, such that $f\in{\cal M}(V).$
Set, as before, $$\pi_{n,m_n}:= \pi_n = P_n/Q_n,$$ where $(P_n,Q_n) = 1$ and $Q_n$ are normalized as in (\ref{a1}).
Fix a number $\alpha > 0$ such that \beq\liminf_{k\to\infty}\frac{n_k'}{n_k} > 1+\alpha.\label{Th43} \eeq
In view of the conditions of the theorem, there is a number $\tau > 0$ such that
\beq\limsup_{n\in \bigcup [n_k, n_k']} |a_n|^{1/n} \leq e^{-\tau}.\label{Th44}\eeq Hence (see (\ref{c1})), \beq \limsup_{n\in \bigcup[n_k,
n_k'),n\to\infty}|A_n|^{1/n} \leq e^{-\tau}. \label{a10}\eeq
Introduce the circles $C(\rho): = C_{1/2}(\rho):=\{|z-1/2| = \rho\},\, \rho > 0$ and set $ D(\rho):= \{|z-1/2| < \rho\}.$ By our previous convention, $D_\rho:=\{z, |z| < \rho\};\, \Gamma_\rho := \partial D_\rho; D_1:=D, \Gamma_1:=\Gamma$.
Consider the function \beq\phi(R):= (\frac{1}{4R}+\frac{1}{2})^{1+\alpha}(R+\frac{1}{2})\label{0101}\eeq It is easy to verify that there is a positive number $\delta_0$, such that $$\phi(R) < 1\, \hbox{ if}\,\, \frac{1}{2} < R < \frac{1}{2} + \delta_0.$$ Fix a number $\delta,\, 0 < \delta < \delta_0$ such that $\overline{D(\delta)}\subset D\bigcup V$ and $\delta < e^\tau - 1.$
Select now a positive $\varepsilon < \delta/4$ and introduce, as above, the sets $\Omega_n(\varepsilon)$ and $\Omega(\varepsilon).$ By the principle of the circular projection, there is a number $R,\, 1/2 < R < 1/2 + \delta,$ such that $C(R)\bigcap\Omega(\varepsilon) = \emptyset.$ {Set } \beq r = \frac{1}{4R}.\label{23}\eeq
{Denote by $\omega$ the monic polynomial of smallest degree such that $F:=f\omega\in{\cal A}(\overline{D_{r+1/2}\bigcup D(R)}); \omega(z) = \prod_{k=1}^\mu (z - a_k),\, a_k\in \overline{D_{r+1/2}\bigcup D(R)}.$}
In what follows we will estimate the terms $\Vert FQ_{n_k'} - \omega P_{n_k'} \Vert_{C(r)} $ and $\Vert FQ_{n_k'} - \omega P_{n_k'} \Vert_{C(R)}. $ For this purpose, we {select a number $\Theta > 0$ such that $\Theta < \tau,\, \, e^{\Theta} (r+1/2) < 1$ and $e^{\Theta - \tau} (R+1/2) < 1.$}
By the maximum principle $$\Vert Q_{n} F-\omega P_{n}\Vert_{C(r)} \leq \Vert Q_{n}F-\omega P_{n}\Vert_{\Gamma_{1/2+r}} \leq C_0e^{n\Theta}(r+1/2)^n,\, n\geq n_0.$$ We obtain from Theorem 2, after keeping in mind (\ref{c1}), (\ref{Th43}) and the choice of $\Theta,$
$$ \Vert Q_{n_k'}F - \omega P_{n_k'}\Vert_{\Gamma_{1/2+r}}
\leq C_1 e^{n_k'\Theta} (r+1/2)^{n_k'}\leq C_2(e^\Theta(r+1/2))^{n_k(1+\alpha)},\, n_k\geq n_1.$$
Thus, \begin{equation} \Vert FQ_{n_k'}-\omega P_{n_k'}\Vert_{C(r)}
\leq C_2(e^\Theta(r+1/2))^{n_k(1+\alpha)},\,n_k\geq n_1.\label{266}\end{equation}
Estimate now $\Vert FQ_{n_k'}- \omega P_{n_k'}\Vert_{C(R)}.$
{ Clearly, \beq\Vert F - \omega\pi_{n_k'}\Vert_{C(R)} \leq \Vert F - \omega\pi_{n_k}\Vert_{C(R)} + \Vert \omega(\pi_{n_{k}'} - \pi_{n_{k}})\Vert_{C(R)}\label{25}\eeq}
From (\ref{a5}), we have $$\Vert \omega P_{n}\Vert_{\Gamma_{R+1/2}} \leq C_3 (e^{\Theta}(R+1/2))^{n},\, n\geq n_2.$$ On the other hand, $$\Vert \omega P_{n}\Vert_{\overline D(R)} \leq \Vert \omega P_{n}\Vert_{\Gamma_{R+1/2}}. $$ Combining these two estimates, we get
$$\Vert \omega P_{n_k}\Vert_{C(R)} \leq C_3 (e^{\Theta}(R+1/2))^{n_k},\, n_k\geq n_2.$$ From here, we obtain (see (\ref{b22}))
\beq \Vert F - \omega \pi_{n_k}\Vert_{C(R)}\leq C_4 (e^{\Theta}(R+1/2))^{n_k},\, n_k\geq n_3\geq n_2\label{26}\eeq
Let now $n_l\in [n_k, n_k'-1].$
By (\ref{b2}), $$\Vert \omega(\pi_{n_l+1}-\pi_{n_l}) \Vert_{C(R)} \leq \Vert \omega\Vert_{\overline D(R)}\vert A_{n_l}\vert \frac {\Vert
z\Vert_{C(R)}^{n_l+m_{n_l}+1}}{\min_{C(R)}\vert Q_{n_l}Q_{n_l+1}\vert }$$
which leads, thanks (\ref{a10}) and (\ref{b22}), to
$$\Vert \omega(\pi_{n_l+1}-\pi_{n_l}) \Vert_{C(R)} \leq C_5(e^{\Theta - \tau}(R+1/2))^{n_l}, n_k \geq n_4\geq n_3 $$
Finally, the choice of $R$ and $\Theta$ and the conditions of the theorem imply
$$\Vert \omega( \pi_{n_k'} - \pi_{n_k})\Vert_{C(R)} \leq \sum_{n_l=n_k}^{n_k'-1}\Vert \omega( \pi_{n_l+1} - \pi_{n_l}) \Vert_{C(R)}$$
$$\leq C_6e^{n_k(\Theta - \tau)}(R+1/2)^{n_k},\, n_k\geq n_5\geq n_4 $$
From the last inequality, combined with (\ref{25}) and (\ref{26}), we derive
$$\Vert F(z)-\omega(z) \pi_{n_k'}(z)\Vert_{C(R)}\leq C_6 (e^\Theta (R+1/2))^{n_k},\, n_k\geq n_5\geq n_4.$$ Hence, using
(\ref{0002}), we get \begin{equation}\Vert FQ_{n_k'}- \omega P_{n_k'}\Vert_{C(R)}\leq C_7(e^\Theta)^{2n_k'}(R+1/2)^{n_k},\, n_k\geq
n_6\geq n_5.\label{27}\end{equation}
{We now apply Hadamard's three circles theorem (\cite{goluzin}, p. 333, pp. 337 --
348)
to $\frac{1}{n_k'}\log\vert FQ_{n_k'}(z) -\omega P_{n_k'}(z)\vert_{C(1/2)}$ and the annulus $\{z, r \leq |z-1/2| \leq R\}$.
Recall that by our convention $Rr = 1/4$.
Using now (\ref{266}), (\ref{27}), (\ref{0101}),
we get
$$\frac{\log{\frac{R}{r}}}{\log\frac{1/2}{r}}\frac{1}{n_k'}\log\Vert FQ_{n_k'}- \omega P_{n_k'} \Vert_{C(1/2)}\leq
\frac{n_k}{n_k'}(\Theta + \log \phi(R)) + 2\Theta,
\, n_k\geq n_7\geq n_6. $$ Hence, $$\limsup_{n_k'\to\infty}\frac{1}{n_k'}\log\Vert {FQ_{n_k'}- \omega P_{n_k'}}\Vert_{C(1/2)}
\leq \Theta(\frac{n_k}{n_k'}+2) + \frac{n_k}{n_k'}\log \phi(R).$$ In view of (\ref{end}) we get, after letting $\Theta\to 0,$
$$\limsup_{n_k'\to\infty}\frac{1}{n_k'}\log\Vert {FQ_{n_k'}- \omega P_{n_k'}}\Vert_{C(1/2)} < 0.$$
The last inequality is strict. Hence, we
may choose a number
$\rho, 1/2 < \rho < R$ and close enough to $1/2$ such that the inequality preserves the sign; in other words, there are numbers $\rho\in
(1/2, R)$ and $q = q(\rho) < 1$ such that $$\frac{1}{n_k'}\log\Vert FQ_{n_k'}- \omega P_{n_k'}\Vert_{\overline D(\rho)} \leq \log q, \,n_k'\geq
n_0.$$
From here, the $\sigma-$ almost uniform convergence inside the disk $D(R)$ immediately follows (see \cite{Go3}, Eq. (23).)}
Indeed, fix an appropriate number $\rho.$ In view of the last inequality, $$\Vert FQ_{n_k'} - \omega P_{n_k'}\Vert_{\overline D(\rho)} \leq C'q^{n_k'},\,\, n_k'\geq n_0.$$ Take
$\varepsilon
< \frac{1}{4}(\rho - 1/2)$ and introduce the sets $\Omega_{n_k'}(\varepsilon), \,n_k'> n_0,$ with $\Omega_{n_0}(\varepsilon)$ covering the zeros of the polynomial $\omega$
(see (\ref{begin})).
As shown above, $\sigma (\Omega(\varepsilon)) < \varepsilon;$ thus $ \Vert f - \pi_{n_k'}\Vert_{K(\varepsilon)} \leq C''q^{n_k'},\, n_k'\geq N. $
This establishes the $\sigma$-almost uniform convergence, and Theorem 7 is proved.
{\bf Q.E.D.}
\no{\bf Proof of Theorem 8}
As in the previous proof, we suppose that $R(f) = 1.$ With this convention, \beq\limsup_{n\to\infty} \vert A_n\vert^{1/n} = 1.\label{s}\eeq
Fix a compact set $K\subset G(f)$. Our purpose is to show that $\pi_{n_k}$ converges, as $n_k\to\infty$, $\sigma$-almost uniformly on $K.$ We exclude the trivial case $K\subset D;$ in the further considerations, we assume that $K\not\subset D.$ Apparently, no generality is lost.
Take a curve $\gamma_1$ such that $\gamma_1\bigcap D \not=\emptyset$, the compact set $K$ lies in the interior $B_1$ of $\gamma_1$ and $\gamma_1\subset G(f).$ Suppose that $f\in{\cal A}(\gamma_1)$ and denote by $Q$ the monic polynomial with zeros at the poles of $f$ in $B_1$ (poles are counted with their multiplicities). Set $F:= fQ;\,\hbox{deg}Q:= \mu$. Choose a disk $B_2,\, B_2\subset D\bigcap B_1,$ not intersecting $K;\ \gamma_2:=\partial B_2.$ In what follows, we will be estimating $\Vert FQ_{n_k}-QP_{n_k} \Vert_{\gamma_1}$ and $\Vert FQ_{n_k}-QP_{n_k} \Vert_{\gamma_2}.$
{ Take a number $r_2 < 1$ such that $B_2\subset D_{r_2}$.
{ Fix $\Theta > 0$ such that $r_2e^\Theta < 1.$} } Then, for every $n$ large enough there holds $$ \Vert FQ_{n}-QP_{n} \Vert_{\gamma_2} \leq \Vert FQ_{n}-QP_{n} \Vert_{\Gamma_{r_2}} \leq C_1 (e^{\Theta} r_2)^{n+m_{n}+1},\, n > n_1.$$ Hence, by (\ref{Th02}) and the choice of $r_2$ and $\Theta,$ \beq \Vert FQ_{n_k}-QP_{n_k}\Vert_{\gamma_2} = \Vert FQ_{n_k'}-QP_{n_k'} \Vert_{\gamma_2} \leq C_1(e^{\Theta} r_2)^{n_k'},\, n_k\geq n_1\label{th31}\eeq
In order to estimate $\Vert FQ_{n_k}-QP_{n_k} \Vert_{\gamma_1},$ we proceed as follows: fix a number
$\varepsilon, 0 < \varepsilon < \hbox{dist}(\gamma_1,\partial G(f))/4$
and take $r_1 > 1$ such that the circle $\Gamma_{r_1}$ does not intersect the set $\Omega(\varepsilon)$ and surrounds $\gamma_1.$
Relying on (\ref{b2}), (\ref{b22}) and (\ref{s}), we get $$ \Vert \pi_{n_k} \Vert_{\gamma_1} \leq \vert \pi_0\vert + \Vert \sum_{n=0}^{n_k-1}\frac{A_{n}z^{n+m_n+1}}{Q_nQ_{n+1}} \Vert_{\Gamma_{r_1}}
\leq C_3 (r_1e^{\Theta})^{n_k},\, n_k\geq n_2$$ { Using now ( \ref{b1}) and following the same argumentation as in the proof of Theorem 7, we obtain } \beq\Vert FQ_{n_k} - Q\pi_{n_k}\Vert_{\gamma_1}\leq C_4(e^{\Theta}r_1)^{n_k},\, n_k\geq n_2\label{Th32}\eeq
The application of the two constants theorem leads to $$\frac{1}{n_k}\log \Vert FQ_{n_k} - QP_{n_k}\Vert_K \leq \alpha\frac{n_k'}{n_k}(\Theta + \log r_2) + (1 - \alpha)(\Theta + \log r_1)$$ with $\alpha:=\alpha(K) < 1.$
We get, thanks to the choice of $\Theta,$ $$\lim_{n_k\to\infty}\Vert FQ_{n_k} - QP_{n_k}\Vert_K^{1/n_k} = 0.$$
The statement of the theorem follows now after using standard arguments. { This completes the proof of the first part of Theorem 8. }
b) The proof of the second part { is based on the arguments provided in } the proof of Theorem 7. As in Theorem 7, we introduce the number $\alpha$ (\ref{Th43}), the function $\phi(R)$ (\ref{0101}), the circles $C(r)$ and $C(R)$ (\ref{23}) and the polynomial $\omega$. Let $R$ and $r$ be as in Theorem 7 and set $F:= f\omega.$
Fix a positive number $\Theta$ such that $e^\Theta(r+1/2) < 1, \Theta < -\frac{1}{2}\frac{\log\phi(R)}{2+\alpha}.$
We get, first, thanks to (\ref{Th02}), $$\Vert FQ_{n_k} - \omega P_{n_k}\Vert_{C(r)} \leq C_1(e^\Theta(r+1/2))^{n_k(1+\alpha)},\,n_k\geq n_1$$ and, then, following the same line of reasoning, $$ \Vert FQ_{n_k} - \omega P_{n_k}\Vert_{C(R)} \leq C_2(e^\Theta(R+1/2))^{n_k},\, n_k\geq n_2\geq n_1.$$ Applying the three circles theorem, we get $$\frac{\log{\frac{R}{r}}}{\log\frac{1/2}{r}} \frac{1}{n_k}\log \Vert FQ_{n_k} - \omega P_{n_k}\Vert_{C(1/2)} \leq (2+\alpha)\Theta + \log \phi (R).$$ By the choice of $\Theta$, $$\frac{1}{n_k}\log \Vert FQ_{n_k} - \omega P_{n_k}\Vert_{C(1/2)} < 0,\, n_k\geq n_3\geq n_2.$$ In what follows, we use standard arguments to complete the proof of (b), Theorem 8.
{\bf Q.E.D.}
\no{\bf Proof of Theorem 9}
Without loss of generality, we assume that $R_m = 1$ and $\tau_n = 0$ for all $n.$
Normalize the polynomials $Q_n$ as was done in (\ref{a1}) with $R(f)$ replaced by $R_m$. Fix a positive number $\varepsilon,\, \varepsilon < 1/2,$ and introduce the set $\Omega(\varepsilon).$ Select a number $R > 1$ such that $\Gamma_R\bigcap \Omega(\varepsilon) = \emptyset$. Recall that (see (\ref{b2})) there are positive constants $C_j(\varepsilon):= C_j,\, j = 1,2,$ such that \beq \frac{1}{C_1 n^{2m}} \leq \min_{z\in\Gamma_R}|Q_n(z)| < \Vert Q_n\Vert_{\Gamma_R} \leq C_2,\, n \geq n_0.\label{new3}\eeq In the sequel, we assume that $C_1,\,C_2 > 1.$
By Theorem 1, there is a positive number $\tau = \tau(R)>0$ such that
\begin{equation}\Vert P_{n_k}\Vert_{\Gamma_R}\leq C(e^{-\tau}R)^{n_k},\, n_k\geq n_1\geq n_0;\label{new1} \end{equation} and (by the maximum
principle for subharmonic functions), \begin{equation}\vert a_{n_k}\vert\leq C(e^{-\tau})^{n_k}\label{new2},\, n_k\geq n_1\geq n_0 \end{equation}
Without loss of generality, we suppose that \beq R^{m+1}\leq C\leq C_1.\label{000}\eeq
We will prove that for every $l,\, 0 \leq l \leq n_k,$ and for $n_k$ large enough \beq\Vert P_{n_k-l}\Vert_{\Gamma_R} \leq (2C_2)^{l}
C_1^{l+1}(e^{-\tau}R)^{n_k}\prod_{j=0}^{l-1}(n_k-j)^{2m},\, n_k \geq n_2.\label{new00}\eeq
From the last inequality, it follows directly that
\beq|a_{n_k-l}| \leq (2C_2)^{l} C_1^{l+1}R^l(e^{-\tau})^{n_k}\prod_{j=0}^{l-1}(n_k-j)^{2m}, n_k \geq n_2\label{001}\eeq
We prove first (\ref{new00}) for $l = 1.$ For this purpose, we introduce the polynomial $${\cal P}_{n_k}:= P_{n_k-1}Q_{n_k} - P_{n_k}Q_{n_k-1}.$$ By the definition of Pad{\'e} approximants (see (\ref{b2})), \beq {\cal P}_{n_k}(z) =
A_{n_k}z^{n_k+m+1}\eeq where, according to (\ref{c1}), $$A_{n_k} = a_{n_k}\prod\left(\frac{-1}{\tilde\zeta_{n_k-1, l}}\right)$$ (recall that by assumption
the defect $\tau_n = 0$ for all $n$). In view of (\ref{new1}) and (\ref{000}), we get
\beq \Vert {\cal P}_{n_k}\Vert_{\Gamma_R}\leq |a_{n_k}|{R}^{n_k+m+1}\leq C_1(e^{-\tau}{R})^{n_k},n_k\geq n_3\geq n_1,\,\,\tau:=
\tau(R).\label{new5}\eeq
Keeping now
track of (\ref{new1}) and (\ref{new3}), we arrive at $$ \Vert P_{n_k-1}Q_{n_k} \Vert_{\Gamma_R} \leq C_1(e^{-\tau}{R})^{n_k} +
C_1C_2(e^{-\tau}{R})^{n_k},\, $$ which yields \beq \Vert P_{n_k-1} \Vert_{\Gamma_R} \leq (C(e^{-\tau}R)^{n_k} +
C_1C_2(e^{-\tau}R)^{n_k})/\min_{\Gamma_R} |Q_{n_k}(z)|,\, n_k \geq n_4\geq n_3\label{new4}\eeq $$\leq 2C_1C_2^2(e^{-\tau}{R})^{n_k}n_k^{2m}.$$ We
further get \beq |a_{n_k-1}| = |\frac{1}{2\pi i}\int_{\Gamma_R}\frac{P_{n_k-1}(z)}{z^{n_k}}dz|\leq 2C_1C_2^2Re^{-\tau
n_k}n_k^{2m},n_k\geq n_4\,\label{new6}\eeq
Suppose now that (\ref{new00}) is true for $l-1, l\geq 2.$ In other words,
$$\Vert P_{n_k-l+1}\Vert_{\Gamma_R} \leq (2C_2)^{l-1} C_1^{l}(e^{-\tau}R)^{n_k}\prod_{j=0}^{l-2}(n_k-j)^{2m},\, n_k\geq n_5$$
and
$$|a_{n_k-l+1}| \leq (2C_2)^{l-1} C_1^{l}R^l(e^{-\tau})^{n_k}\prod_{j=0}^{l-2}(n_k-j)^{2m},\,n_k\geq n_5$$
Introducing the polynomial ${\cal P}_{n_k-l+1}:= P_{n_k-1}Q_{n_k-l+1} - P_{n_k-l+1}Q_{n_k-1}$ and following the same arguments as before, we see that (\ref{new00}) and (\ref{001}) also hold for $l.$
Equipped with inequality (\ref{001}), we complete the proof of the theorem. We look for numbers $l$ such that $$|a_{n_k-l}|^{1/(n_k-l)} < 1.$$ Set $C_4 := 2RC_2C_1.$ We check that $$\log |a_{n_k - l}|^{1/(n_k-l)}\leq \psi_{n_k}(l), $$ where $$\psi_{n_k} (x):= \frac{C_4x + C_1}{n_k-x} + \frac{2mx\log n_k}{n_k-x} - \tau \frac{n_k}{n_k-x}.$$ For $n_k$ large enough, say $n_k\geq n_6$, $\psi_{n_k}$ is strictly increasing and $\psi_{n_k}(0) < 0.$ Hence, there is a number $x_k \in (0, n_k)$ such that $\psi_{n_k} (x) < \psi_{n_k} (x_k) < -\tau/2$ whenever $0 < x < x_k.$ Set $l_k:= x_k.$ Therefore $$\limsup_{{n\in\bigcup_{k=1}^\infty}[n_k-l_k, n_k]}|a_{n}|^{1/n} < 1$$
{\sf Q.E.D.}
\end{document} | arXiv |
\begin{document}
\title{Intermittent Kalman Filtering: Eigenvalue Cycles and Nonuniform Sampling}
\abstract We consider Kalman filtering problems when the observations are intermittently erased or lost. It was known that the estimates are mean-square unstable when the erasure probability is larger than a certain critical value, and stable otherwise. But the characterization of the critical erasure probability has been open for years. We introduce a new concept of \textit{eigenvalue cycles} which captures periodicity of systems, and characterize the critical erasure probability based on this. It is also proved that eigenvalue cycles can be easily broken if the original physical system is considered to be continuous-time --- randomly-dithered nonuniform sampling of the observations makes the critical erasure probability almost surely $\frac{1}{|\lambda_{max}|^2}$.
\section{Introduction} Unlike classical control systems where the controller and the plant are closely located or connected by dedicated wired links, in post-modern systems the controllers and plants can be located far apart, and thus control has to happen over communication channels. In other words, there is an observer which can only observe the plant but cannot control it. There is a separate actuator which can only control the plant but cannot observe it. The observer and actuator are connected by a communication channel. Therefore, to control the plant the observer has to send information about its observation to the actuator through the communication channel. Understanding the tradeoff between control performance and communication reliability, and finding the optimal controller structures, are the fundamental questions in building such post-modern control systems.
Not only practically, but also philosophically, control-over-communication-channel problems are important. When we are controlling systems, there is a corresponding life cycle of information. In other words, the uncertainty or new information is generated and disturbs the plant. This information is propagated to the controller as the controller observes the plant. Finally, when the controller controls the system by removing the uncertainty, the information is dissipated. It is conceptually very important to understand and quantify these information flows which naturally occur as we control systems. In control-over-communication-channel systems, all the information for control has to flow through the communication channel. Therefore, by relating the communication channels with the control performance, we can measure how much information has to flow to achieve a certain control performance.
Theoretical study of control-over-communication-channel problems was pioneered by Baillieul~\cite{Baillieul_feedback,Baillieul_feedback2} and Tatikonda {\em et al.}~\cite{Tatikonda_Control}. They restricted the communication channels to noiseless rate-limited channels, and asked what the minimum rate of the channel is to stabilize the plant. They found that the rate of the channel has to be at least the sum of the logarithms of the unstable eigenvalues, and that this rate is indeed sufficient. This fact is known as the data-rate theorem. Later, Nair~\cite{nair2003exponential} relaxed their bounded-disturbance assumption to allow Gaussian disturbances, and proved that the same data-rate theorem holds.
However, an important question was whether we can reduce noisy communication channels to noiseless channels with the same Shannon capacity, i.e. whether the classical notion of Shannon capacity is still appropriate when the channel is used for control. In \cite{Sahai_Anytime}, Sahai {\em et al.} found the answer to this question is no. Intuitively, since the system keeps evolving in time, not only the rate but also the delay of communication is important. Since Shannon capacity ignores the delay issue, it is insufficient to understand information flows for control. Thus, they proposed a new notion of \textit{anytime capacity} which captures the delay of communication. The stabilizability condition for noisy communication channels with feedback\footnote{By introducing a feedback, they reduced the problem to one with nested information structure~\cite{Witsenhausen_Separation}, which is known to be much easier to solve in decentralized control theory.} was characterized by anytime capacity.
Since then, researchers have accumulated a large literature~\cite{hespanha2007survey, Schenato_Foundations, nair2007feedback, martins2008feedback, gupta2007optimal, yuksel2006minimum, yuksel2011control} considering various generalized and related problems. However, most of these problems are still wide open, and the \textit{intermittent Kalman filtering} problem which we study in this paper had been one of them. In \cite{Sinopoli_Kalman}, Sinopoli {\em et al.} considered `control over real erasure channels', which can be thought of as a special case of \cite{Sahai_Anytime}, but with a structural constraint on controller design.
Figure~\ref{fig:system} shows the system diagram for control-over-real-erasure-channels. The observer makes the observation about the plant, and then uncodedly transmits its observation through the real erasure channel. The real erasure channel drops the transmitted signal with a certain probability but otherwise noiselessly transmits the signal. Finally, based on the received signals from the channel, the controller generates its control inputs to stabilize the system.
The situation that this problem is modeling is that of control over a so-called \textit{packet drop channel}. A memoryless observer samples the output of an unstable continuous-time system, quantizes this sample to a sufficient number of bits, binds the resulting bits into a single packet, and transmits the packet to the controller through a communication system. Due to network congestion or wireless fading, the transmitted packet may be lost\footnote{Such losses need not come from network effects --- they could also occur because of sensor occlusion or otherwise at the sampling time itself. That is why the issue of intermittent observations needs to be studied on its own.} with a certain probability, and this packet erasure process is further simplified to be i.i.d. The problem is designed to focus attention on the delay/reliability effect of losing packets, and so the number of bits per packet (capacity) is unconstrained. The main problem is to find the maximum tolerable erasure probability that keeps the system stable.
The \textit{linearity} and \textit{memorylessness} of the observer is at the heart of what Sinopoli {\em et al.} are trying to model. Otherwise, the earlier results of \cite{Sahai_Thesis} immediately reveal that the critical erasure probability for stabilizability only depends on the magnitude of the largest eigenvalue of the plant. However, to achieve the minimal erasure probability shown in \cite{Sahai_Thesis}, the observer and controller design has to be quite complicated and is not realistic in practice. Therefore, it is practically and theoretically important to understand how much the control performance degrades under linear observer and controller constraints.
In this paper, we will see that the degradation of stabilizability due to linear constraints fundamentally comes only from the periodicity of the system. Nonuniform sampling is proposed as a simple way to force the system to behave aperiodically. Therefore, by using linear controllers in conjunction with nonuniform sampling, we can expect a significant performance gain and indeed recover the optimal stabilizability condition over all possible controller designs.
Furthermore, by the estimation-control separation principle~\cite{KumarVaraiya}, the closed-loop control system can be reduced to an equivalent open-loop estimation problem~\cite{Schenato_Foundations}. Figure~\ref{fig:system2} shows the resulting open-loop estimation system, the so-called \textit{intermittent Kalman filtering} problem~\cite{Sinopoli_Kalman}. As before, the sensor uncodedly transmits its observation over the real erasure channel. Then, the estimator tries to estimate the state based on its received signals. We refer to \cite{Schenato_Foundations} for a literature review and practical applications of the problem.
This paper is organized as follows: First, we formally state the problem in Section~\ref{sec:statement}. Then, we introduce some definitions in Section~\ref{sec:def}. In Section~\ref{sec:connecting}, we consider intermittent observability as a connection between stability and observability. From this, we distinguish our approach from the previous approaches. In Section~\ref{sec:intui}, we introduce the intuition for the characterization of intermittent observability using representative examples. In Section~\ref{sec:interob}, we formally define the eigenvalue cycle and characterize intermittent observability. In Section~\ref{sec:nonuniform}, we discuss how nonuniform sampling can break the eigenvalue cycle and significantly improve the performance of intermittent Kalman filtering. Finally, Section~\ref{sec:proof} gives the proof of the main results.
\begin{figure*}
\caption{Closed-loop system for `control over real erasure channels'. Here, the observer just bypasses its observation to the channel without any coding.}
\label{fig:system}
\end{figure*}
\begin{figure*}
\caption{System diagram for `intermittent Kalman filtering'. This open-loop estimation system is equivalent to the closed-loop control system of Figure~\ref{fig:system}. Like Figure~\ref{fig:system}, the sensor bypasses its observation to the channel without any coding.}
\label{fig:system2}
\end{figure*}
\section{Problem Statement} \label{sec:statement} Formally, the intermittent Kalman filtering problem is formulated as follows in discrete time: \begin{align} &\mathbf{x}[n+1]=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n]\label{eqn:dis:system} \\ &\mathbf{y}[n]=\beta[n]\left(\mathbf{C}\mathbf{x}[n]+\mathbf{v}[n]\right)\label{eqn:dis:system2}. \end{align}
Here $n$ is the non-negative integer-valued time index and the system variables can take on complex values --- i.e. $\mathbf{x}[n] \in \mathbb{C}^m, \mathbf{w}[n] \in \mathbb{C}^g, \mathbf{y}[n] \in \mathbb{C}^l, \mathbf{v}[n] \in \mathbb{C}^l$. $\mathbf{A} \in \mathbb{C}^{m \times m}$, $ \mathbf{B} \in \mathbb{C}^{m \times g}$ and $\mathbf{C} \in \mathbb{C}^{l \times m}$. The underlying randomness comes from the initial state $\mathbf{x}[0]$, the persistent driving disturbances $\mathbf{w}[n]$, the observation noises $\mathbf{v}[n]$ and the Bernoulli packet-drops $\beta[n]$. $\beta[n] = 0$ with probability $p_e$. $\mathbf{x}[0]$, $\mathbf{w}[n]$ and $\mathbf{v}[n]$ are jointly Gaussian.
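As a sanity check, the intermittent system \eqref{eqn:dis:system}--\eqref{eqn:dis:system2} is straightforward to simulate. The sketch below generates a state trajectory together with the intermittently erased observations; the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and all noise statistics are illustrative choices for demonstration, not taken from any result in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical, for demonstration only).
A = np.array([[2.0, 0.0], [0.0, -2.0]])  # state matrix
B = np.eye(2)                            # disturbance matrix
C = np.array([[1.0, 1.0]])               # scalar observation
p_e = 0.1                                # erasure probability

n_steps = 50
x = rng.standard_normal(2)               # x[0]
received = []                            # pairs (beta[n], y[n])
for n in range(n_steps):
    beta = 0 if rng.random() < p_e else 1        # beta[n] = 0 w.p. p_e
    y = beta * (C @ x + rng.standard_normal(1))  # y[n] = beta (C x + v)
    received.append((beta, y))
    x = A @ x + B @ rng.standard_normal(2)       # x[n+1] = A x[n] + B w[n]
```

The estimator observes the pairs $(\beta[n], \mathbf{y}[n])$, so it always knows which samples were erased.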
The objective is to find the best causal estimator $\mathbf{\widehat{x}}[n]$ of $\mathbf{x}[n]$ that minimizes the mean square error (MMSE) $\mathbb{E}[(\mathbf{x}[n]-\mathbf{\widehat{x}}[n])^\dag
(\mathbf{x}[n]-\mathbf{\widehat{x}}[n])]$, i.e. $\mathbf{\widehat{x}}[n]=\mathbb{E}[\mathbf{x}[n]| \mathbf{y}^n]$. We assume that the statistics of all random variables are known to the estimator. If $\mathbf{x}[0]$, $\mathbf{w}[n]$ and $\mathbf{v}[n]$ do not have zero mean, the estimator can properly shift its estimation. Thus, without loss of generality, $\mathbf{x}[0], \mathbf{w}[n]$ and $\mathbf{v}[n]$ are assumed to be zero mean. $\mathbf{x}[0], \mathbf{w}[n]$ and $\mathbf{v}[n]$ are independent and have uniformly bounded second moments so that there exists a positive $\sigma^2$ such that \begin{align} &\mathbb{E}[\mathbf{x}[0]\mathbf{x}[0]^\dag] \preceq \sigma^2 \mathbf{I} \label{eqn:dis:systemconst1} \\ &\mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \preceq \sigma^2 \mathbf{I} \nonumber \\ &\mathbb{E}[\mathbf{v}[n]\mathbf{v}[n]^\dag] \preceq \sigma^2 \mathbf{I}. \nonumber \end{align}
To prevent degeneracy, we also assume that there exists a positive $\sigma'^2$ such that \footnote{The second condition on $\mathbf{v}[n]$ may seem redundant, and $\mathbf{v}[n]=0$ is enough since at each time the new disturbance $\mathbf{w}[n]$ is added. However, when $\mathbf{v}[n]=0$, we can make the following counterexample when the estimation error of the state is bounded even if the system matrices $(\mathbf{A},\mathbf{C})$ are not observable: $\mathbf{A}=\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}, \mathbf{B}=\begin{bmatrix} 0 \\ 1 \end{bmatrix}, \mathbf{C}=\begin{bmatrix} 0 & 1 \end{bmatrix}$. Thus, this assumption is usually kept in the analysis of Kalman filtering including \cite[p.100]{KumarVaraiya}.} \begin{align} &\mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \succeq \sigma'^2 \mathbf{I} \label{eqn:dis:systemconst2} \\ &\mathbb{E}[\mathbf{v}[n]\mathbf{v}[n]^\dag] \succeq \sigma'^2 \mathbf{I}. \nonumber \end{align}
Under these assumptions we call \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} an \textit{intermittent system}.
\begin{definition} The linear system equations \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} with the second moment conditions \eqref{eqn:dis:systemconst1} and \eqref{eqn:dis:systemconst2} are called an \textbf{intermittent system} $(\mathbf{A},\mathbf{B},\mathbf{C})$, or an \textbf{intermittent system} $(\mathbf{A},\mathbf{B},\mathbf{C})$ \textbf{with erasure probability} $p_e$ when we only want to specify the erasure probability, or an \textbf{intermittent system} $(\mathbf{A},\mathbf{B},\mathbf{C}, \sigma, \sigma')$ \textbf{with erasure probability} $p_e$ when we specify the upper and lower bounds on disturbances as well. \end{definition} We say that the intermittent system is \textit{intermittent observable} if the MMSE is uniformly bounded for all time. \begin{definition} An intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$ is called \textbf{intermittent observable} if there exists a causal estimator $\mathbf{\widehat{x}}[n]$ of $\mathbf{x}[n]$ such that \begin{align} \sup_{n \in \mathbb{Z}^+} \mathbb{E}[(\mathbf{x}[n]-\mathbf{\widehat{x}}[n])^\dag (\mathbf{x}[n]-\mathbf{\widehat{x}}[n])] < \infty. \nonumber \end{align} \end{definition}
Before we discuss truly intermittent cases, let's consider two extreme cases, $p_e=1$ and $p_e=0$, to get some insight into the problem. When $p_e=1$, the estimator does not have any observations. As a result, the system can be intermittent observable if and only if the system itself is stable. On the other hand, when $p_e=0$, the estimator has all the observations without any erasures. Intermittent observability reduces to observability. Thus, intermittent observability can be understood as a new concept which interpolates between two core concepts of linear system theory: stability and observability.
Moreover, in intermittent systems, we can see the monotonicity of performance in the erasure probability $p_e$. A process with higher erasure probability can be simulated from a process with lower erasure probability by randomly dropping observations. Therefore, the average estimation error is an increasing function of $p_e$. In particular, if we consider an unstable but observable system, when $p_e=1$ the estimation error goes to infinity, and when $p_e=0$ the estimation error is bounded. Therefore, between $1$ and $0$ there must be a threshold on $p_e$ at which the estimation error first becomes infinite.
\begin{theorem}[Theorem 2. of \cite{Sinopoli_Kalman}] Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$, let $(\mathbf{A},\mathbf{B})$ be controllable, $\sigma < \infty$, and $\sigma' > 0$.\footnote{See Definition~\ref{def:con} for controllability.} Then, there exists a threshold $p_e^{\star}$ such that for $p_e < p_e^{\star}$ the intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$ is intermittent observable and for $p_e \geq p_e^{\star}$ the intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$ is not intermittent observable. \end{theorem}
Therefore, the characterization of intermittent observability reduces to the characterization of the critical erasure probability $p_e^\star$. For characterizing the critical erasure probability, we can consider it as a generalization of either stability or observability.
In \cite{Sinopoli_Kalman}, Sinopoli \textit{et al.} thought of intermittent observability as a generalization of stability. Based on Lyapunov stability, they could find a lower bound on the critical erasure probability in an LMI (linear matrix inequality) form. However, this bound is not tight in general and does not give much insight into the solution. A more intuitive bound can be found in \cite{Elia_Remote}.
\begin{theorem}[Corollary 8.4. of \cite{Elia_Remote}] Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$, let $(\mathbf{A},\mathbf{B})$ be controllable, $\sigma < \infty$, $\sigma' > 0$, and $(\mathbf{A},\mathbf{C})$ be observable. Then, \begin{align}
\frac{1}{\prod_{i}|\lambda_i|^2} \leq p_e^{\star} \leq
\frac{1}{|\lambda_{max}|^2} ,\nonumber \end{align} where $\lambda_i$ are the unstable eigenvalues of $\mathbf{A}$ and $\lambda_{max}$ is the one with the largest magnitude. \end{theorem}
Therefore, the critical erasure probability characterization boils down to understanding where the gap between $\frac{1}{\prod_{i}|\lambda_i|^2}$ and $\frac{1}{|\lambda_{max}|^2}$ comes from.
In \cite{Yilin_Characterization}, Mo and Sinopoli found two interesting cases that give further insight into this question. The first is when $\mathbf{A}$ is diagonalizable and all eigenvalues of $\mathbf{A}$ have distinct magnitudes --- then the critical erasure probability is $\frac{1}{|\lambda_{max}|^2}$, just as it would be in the formulation of \cite{Sahai_Thesis}. The second case is when $\mathbf{A}=\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix}1 & 1 \end{bmatrix}$ --- the critical erasure probability is $\frac{1}{\prod_{i}|\lambda_i|^2}=\frac{1}{2^4}$. This second case showed that the gap is real and that requiring packets to carry only a scalar observation can have serious consequences.
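For the second example, the two bounds of the previous theorem are easy to evaluate numerically. The sketch below (illustrative, using that example's matrix) computes $\frac{1}{\prod_{i}|\lambda_i|^2}$ and $\frac{1}{|\lambda_{max}|^2}$:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, -2.0]])       # Mo and Sinopoli's example
eigs = np.linalg.eigvals(A)
unstable = eigs[np.abs(eigs) >= 1]            # unstable eigenvalues: 2, -2

lower = 1.0 / np.prod(np.abs(unstable) ** 2)  # 1 / prod_i |lambda_i|^2
upper = 1.0 / np.max(np.abs(unstable)) ** 2   # 1 / |lambda_max|^2
print(lower, upper)                           # 0.0625 = 1/2^4 and 0.25
```

For this eigenvalue-cycle example the critical erasure probability equals the lower bound $\frac{1}{2^4} = 0.0625$.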
To extend these cases and solve the general problem, we will apply insights from observability and introduce the new concept of an \textit{eigenvalue cycle}. As a corollary, we show that in the absence of eigenvalue cycles the critical value becomes $\frac{1}{|\lambda_{max}|^2}$. Furthermore, we show that simply by introducing nonuniform sampling to the sensor, eigenvalue cycles can be broken and the critical erasure probability becomes effectively $\frac{1}{|\lambda_{max}|^2}$.
These results may be surprising if we remember that computing random Lyapunov exponents is a difficult problem in general~\cite{tsitsiklis1997lyapunov}. However, the intermittent Kalman filtering problem turns out to have a special structure which makes the problem tractable. Precisely speaking, as we will see in Section~\ref{sec:separability}, the subspaces of the vector state can be separated asymptotically. To justify such separation, we use ideas from information theory (for example, decoding functions~\cite{nazer2007computation} and successive decoding~\cite{Cover}). Therefore, the whole system can be divided into parallel sub-systems in effect. As we will see in Section~\ref{sec:powerproperty}, each sub-system can be solved using ideas from large deviation theory~\cite{Dembo}.
\section{Definitions and Notations} \label{sec:def} Before we start the formal discussion of the problem, we first have to introduce mathematical definitions and notations.
We will use controllability and observability notions from linear system theory. \begin{definition} For an $m \times m$ matrix $\mathbf{A}$ and an $m \times p$ matrix $\mathbf{B}$, $(\mathbf{A},\mathbf{B})$ is called controllable if \begin{align} \mathbf{\mathcal{C}}=\begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \cdots & \mathbf{A^{m-1}}\mathbf{B} \end{bmatrix} \nonumber \end{align} is full rank, or equivalently $\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A} & \mathbf{B} \end{bmatrix}$ is full rank for all $\lambda \in \mathbb{C}$. Moreover, we call an eigenvalue $\lambda$ of $\mathbf{A}$ uncontrollable if $\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A} & \mathbf{B} \end{bmatrix}$ is rank deficient. \label{def:con} \end{definition} \begin{definition} For an $m \times m$ matrix $\mathbf{A}$ and an $l \times m$ matrix $\mathbf{C}$, $(\mathbf{A},\mathbf{C})$ is called observable if \begin{align} \mathbf{\mathcal{O}}=\begin{bmatrix} \mathbf{C} \\ \mathbf{C}\mathbf{A} \\ \vdots \\ \mathbf{C}\mathbf{A^{m-1}} \end{bmatrix} \nonumber \end{align} is full rank, or equivalently $\begin{bmatrix} \lambda \mathbf{I} -\mathbf{A} \\ \mathbf{C} \end{bmatrix}$ is full rank for all $\lambda \in \mathbb{C}$. Moreover, we call an eigenvalue $\lambda$ of $\mathbf{A}$ unobservable if $\begin{bmatrix} \lambda \mathbf{I} -\mathbf{A} \\ \mathbf{C} \end{bmatrix}$ is rank deficient. \end{definition}
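Both rank tests above can be checked mechanically. The following sketch (with matrices chosen purely for illustration) builds the observability matrix $\mathcal{O}$ of the definition and verifies its rank:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^{m-1} as in the definition above."""
    m = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(m)])

A = np.array([[2.0, 0.0], [0.0, -2.0]])
C = np.array([[1.0, 1.0]])
O = observability_matrix(A, C)
rank = np.linalg.matrix_rank(O)
print(rank)  # 2, so (A, C) is observable
```

The controllability matrix $\mathcal{C}$ can be checked the same way, with the blocks $\mathbf{A}^k\mathbf{B}$ stacked horizontally instead.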
We will use Bernoulli processes and geometric random variables from probability theory. \begin{definition} A one-sided discrete-time random process $a[n]~(n \geq 0)$ is called a Bernoulli random process with probability $p$ if the $a[n]$ are i.i.d.~random variables with the following probability mass function (p.m.f.): \begin{align} \left\{ \begin{array}{l} \mathbb{P}(a[n]=1)=p\\ \mathbb{P}(a[n]=0)=1-p \end{array}\right. \nonumber \end{align} We also call $a[n]$ a Bernoulli random variable with erasure probability $1-p$. A two-sided Bernoulli random process is defined in the same way except that $n$ ranges over the integers. \end{definition} \begin{definition} A random variable $X \in \mathbb{Z^+}$ is called a geometric random variable with probability $p$ if it has the probability mass function $\mathbb{P}\{ X=x \} = p(1-p)^{x}$ for $x \geq 0$. We also call $X$ a geometric random variable with erasure probability $1-p$. \end{definition}
Then, we have the following relationship between Bernoulli random processes and geometric random variables. Let \begin{align} X:=\min\{n \in \mathbb{Z}^+: a[n]=1 \mbox{ where $a[n]$ is a Bernoulli random variable with probability $p$} \}. \nonumber \end{align} Then, $X$ is a geometric random variable with probability $p$.
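This relationship is easy to check empirically. The sketch below (all parameters are illustrative) draws the first-success time of a Bernoulli process many times and compares the empirical p.m.f. at $x = 0$ with $p(1-p)^0 = p$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3

def first_success(rng, p):
    """X = min{n >= 0 : a[n] = 1} for a Bernoulli process with probability p."""
    n = 0
    while rng.random() >= p:   # a[n] = 0, keep waiting
        n += 1
    return n

samples = np.array([first_success(rng, p) for _ in range(100_000)])
frac_zero = np.mean(samples == 0)   # should be close to P{X = 0} = p
print(frac_zero)
```

In the intermittent Kalman filtering context, $X$ is the waiting time until the next non-erased observation.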
We will also use the following basic notions about matrices. \begin{definition}
Given a matrix $\mathbf{A} \in \mathbb{C}^{m \times m}$, $|\mathbf{A}|_{max}$ is the elementwise max norm of $\mathbf{A}$ i.e. $|\mathbf{A}|_{max}=\max_{1 \leq i,j \leq m}|a_{ij}|$. \end{definition} \begin{definition} Given a matrix $\mathbf{A} \in \mathbb{C}^{m \times m}$, $\dim \mathbf{A}$ denotes $m$. Given a column vector $\mathbf{x_1} \in \mathbb{C}^{m \times 1}$ and a row vector $\mathbf{x_2} \in \mathbb{C}^{1 \times m}$, $\dim \mathbf{x_1}$ and $\dim \mathbf{x_2}$ denote $m$. \end{definition} \begin{definition} Given ${n_i} \times {n_i}$ matrices $\mathbf{\mathbf{A_i}}$ for $i \in \{1,2,\cdots, m\}$, $diag\{ \mathbf{A_1}, \mathbf{A_2}, \cdots, \mathbf{A_m} \}$ is a $\left(\sum^{m}_{i=1} n_i \right) \times \left(\sum^{m}_{i=1} n_i \right)$ matrix in the form of $ \begin{bmatrix} \mathbf{A_1} & 0 & \cdots & 0 \\ 0 & \mathbf{A_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{A_m} \\ \end{bmatrix} $. \end{definition}
We will also use modular arithmetic on numbers. \begin{definition} A sequence, $a_1,a_2,\cdots, a_n$, is called congruent mod $p$ if $a_i \equiv a_j (mod\ p)$ for all $i,j$. \end{definition}
\begin{definition} A sequence, $a_1,a_2,\cdots, a_n$, is called pairwise incongruent mod $p$ if $a_i \not\equiv a_j (mod\ p)$ for all $i \neq j$. \end{definition}
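Both congruence conditions reduce to a check on residues, as the following minimal sketch illustrates:

```python
def congruent(seq, p):
    """True if a_i = a_j (mod p) for all i, j, per the first definition."""
    return len({a % p for a in seq}) <= 1

def pairwise_incongruent(seq, p):
    """True if a_i != a_j (mod p) for all i != j, per the second definition."""
    return len({a % p for a in seq}) == len(seq)

print(congruent([1, 4, 7], 3), pairwise_incongruent([0, 1, 2], 3))  # True True
```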
Since we will only focus on scaling behavior, we will use the following definition, which plays the role of the big $O$ and big $\Omega$ notations in complexity theory. \begin{definition} Consider two real functions $a(t)$ and $b(t)$ whose common domain is $T \subseteq \mathbb{R}$. We say $a(t) \lesssim b(t)$ for $t$ on $T$ if there exists a positive $c$ such that $a(t) \leq c b(t)$ for all $t \in T$. \label{def:lesssim} \end{definition} We omit the argument and the domain in the above definition when they are obvious from the context and do not cause confusion.
We will also use an abbreviated notation for sequences of random variables. \begin{definition} Given discrete-time random variables $a[0], \cdots, a[n]$, we denote $a[n_1],\cdots,a[n_2]$ by $a_{n_1}^{n_2}$, and $a[0],\cdots,a[n]$ by $a^n$. Likewise, given a continuous-time random process $b(t)$, we define $\mathbf{b}(t_1:t_2)$ to be $\mathbf{b}(t)$ for $t_1 \leq t \leq t_2$. \end{definition}
\section{Intermittent Observability as an Extension of Stability} \label{sec:connecting}
As we mentioned before, the characterization of the critical erasure probability can be approached from two different directions --- as an extension of stability or as an extension of observability. In \cite{Sinopoli_Kalman}, Sinopoli \textit{et al.} took the first approach, and attempted to characterize the critical erasure probability by the Lyapunov stability condition. Let's review a property of Schur complements and the Lyapunov stability theorem.
\begin{lemma}[Schur complements] Let $\mathbf{X}=\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B}^\dag & \mathbf{C} \end{bmatrix}$ be a symmetric matrix and $\mathbf{C}$ be invertible. Then, $\mathbf{X} \succ 0$ if and only if $\mathbf{C} \succ 0$ and $\mathbf{A} - \mathbf{B}\mathbf{C}^{-1} \mathbf{B}^\dag \succ 0$. \label{lem:schur} \end{lemma} \begin{proof} See \cite[p. 650]{Boyd}. \end{proof}
\begin{theorem}[Lyapunov Stability Theorem] Given a linear system \eqref{eqn:dis:system}, the following three conditions are equivalent.\\ (i) The system is stable.\\ (ii) $\exists \mathbf{M},\mathbf{N} \succ \mathbf{0} $ such that \begin{align} \mathbf{M} - \mathbf{A} \mathbf{M} \mathbf{A}^\dag = \mathbf{N}. \nonumber \end{align}\\ (iii) $\exists \mathbf{M} \succ \mathbf{0}$ such that \begin{align} \begin{bmatrix} \mathbf{M} & \mathbf{A}\mathbf{M} \\ \mathbf{M} \mathbf{A}^\dag & \mathbf{M} \end{bmatrix} \succ \mathbf{0}. \nonumber \end{align}\label{thm:lyapunov} \end{theorem} \begin{proof} The equivalence between (i) and (ii) can be easily found in linear system theory books including \cite[p.30]{KumarVaraiya} and \cite[Theorem 5.D5]{Chen}. The equivalence between (ii) and (iii) comes from Schur complements in Lemma~\ref{lem:schur} by simply choosing $\mathbf{A}=\mathbf{M}$, $\mathbf{B}=\mathbf{A}\mathbf{M}$ and $\mathbf{C}=\mathbf{M}$. \end{proof}
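Condition (ii) can be verified numerically: for a stable $\mathbf{A}$, the series $\mathbf{M} = \sum_{k\geq 0}\mathbf{A}^k\mathbf{N}(\mathbf{A}^\dag)^k$ solves the Lyapunov equation. The sketch below (a hypothetical stable $\mathbf{A}$, real-valued so that $\mathbf{A}^\dag = \mathbf{A}^T$) computes $\mathbf{M}$ by fixed-point iteration and checks the residual:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.8]])   # stable: eigenvalues 0.5 and 0.8
N = np.eye(2)                            # any N > 0

# Fixed-point iteration M <- A M A^T + N; it converges because the
# spectral radius of A is below one, and the limit solves M - A M A^T = N.
M = N.copy()
for _ in range(500):
    M = A @ M @ A.T + N

residual = np.max(np.abs(M - A @ M @ A.T - N))
print(residual)   # essentially zero: M solves the Lyapunov equation
```

The computed $\mathbf{M}$ is positive definite, certifying stability as in condition (ii).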
Before we consider intermittent observability, let's first characterize the standard observability condition using Lyapunov stability. The fundamental theorem of observability tells us that if $(\mathbf{A},\mathbf{C})$ is observable, the eigenvalues of the closed-loop system $\mathbf{A}+\mathbf{K}\mathbf{C}$ can be placed anywhere by a proper selection of $\mathbf{K}$. Based on this, we can characterize observability in terms of Lyapunov stability.
\begin{theorem} Given a linear system \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} with $p_e=0$, the following four conditions are equivalent.\\ (i) All the unstable modes of $\mathbf{A}$ are observable.\\ (ii) $\exists \mathbf{K}$ such that $\mathbf{A}+\mathbf{K}\mathbf{C}$ is stable.\\ (iii) $\exists \mathbf{K}$ and $\mathbf{M},\mathbf{N} \succ \mathbf{0}$ such that\\ \begin{align} \mathbf{M} - (\mathbf{A}+\mathbf{K}\mathbf{C}) \mathbf{M} (\mathbf{A}+\mathbf{K}\mathbf{C})^\dag = \mathbf{N}. \nonumber \end{align} (iv) $\exists \mathbf{K}$ and $\mathbf{M} \succ \mathbf{0}$ such that\\ \begin{align} \begin{bmatrix} \mathbf{M} & (\mathbf{A}+ \mathbf{K}\mathbf{C})\mathbf{M} \\ \mathbf{M} (\mathbf{A}+ \mathbf{K}\mathbf{C})^\dag & \mathbf{M} \end{bmatrix} \succ 0. \nonumber \end{align}\label{thm:lyaob} \end{theorem} \begin{proof} The equivalence of (i) and (ii) is the fundamental theorem of observability~\cite[Theorem 8.M3]{Chen}. The equivalence of (ii), (iii) and (iv) follows from Theorem~\ref{thm:lyapunov}. \end{proof}
Unfortunately, this observability characterization based on Lyapunov stability does not generalize to intermittent observability. The main reason is that in intermittent Kalman filtering the optimal estimator does not converge to a linear time-invariant one. In conventional Kalman filtering for linear time-invariant systems, it is well known that the optimal Kalman filter converges to a linear time-invariant estimator known as the \textit{Wiener filter}~\cite{wiener1964extrapolation}. In fact, we can directly plug in the Wiener filter gain for the matrix $\mathbf{K}$ of Theorem~\ref{thm:lyaob}. However, when observations are erased, the optimal estimator also depends on the erasure pattern; since the erasure pattern is random and time-varying, the whole system becomes random and time-varying. Therefore, the optimal estimator is also time-varying and does not converge.
In \cite{Sinopoli_Kalman}, Sinopoli \textit{et al.} wrote the optimal time-varying linear estimator in a recursive equation form. The strictly causal estimator $\mathbf{\widehat{x}}[n] = \mathbb{E}[\mathbf{x}[n]|\mathbf{y}^{n-1}]$ is given as follows: \begin{align} \mathbf{\widehat{x}}[n+1]=\mathbf{A}\mathbf{\widehat{x}}[n]-\mathbf{K_n}(\mathbf{y}[n]-\mathbf{C}\mathbf{\widehat{x}}[n]). \label{eqn:lyainter} \end{align} Here, $\mathbf{K_n}$ depends not only on $n$ but also on the history of $\beta[n]$, and does not converge to a constant matrix in probability. Therefore, in the intermittent Kalman filtering problem it is not possible to find a stability-optimal time-invariant gain $\mathbf{K}$ as in Theorem~\ref{thm:lyaob}.
However, we can still force the estimator to be linear time-invariant, and thereby find a sufficient condition for intermittent observability using Lyapunov stability ideas. This is the idea that Sinopoli \textit{et al.} used to find a lower bound on the critical erasure probability in \cite{Sinopoli_Kalman}. By restricting the filtering gain to be a linear time-invariant matrix $\mathbf{K}$, we get the following sub-optimal estimator which looks similar to \eqref{eqn:lyainter}.
\begin{align} \mathbf{\widehat{x}}[n+1]&=\mathbf{A}\mathbf{\widehat{x}}[n]-\beta[n]\mathbf{K}(\mathbf{y}[n]-\mathbf{C}\mathbf{\widehat{x}}[n]) \label{eqn:connect:sub1}
\end{align} with $\mathbf{\widehat{x}}[0]=\mathbf{0}$. By analyzing this sub-optimal estimator, Sinopoli \textit{et al.} found the following sufficient condition for intermittent observability. Here, we further prove that their condition is both necessary and sufficient for the sub-optimal estimators of \eqref{eqn:connect:sub1} to have an expected estimation error uniformly bounded over time.\footnote{This fact is implicitly shown in Elia's paper~\cite{Elia_Remote}.}
\begin{theorem}[Extension of Theorem~5 of \cite{Sinopoli_Kalman}] Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C},\sigma,\sigma')$ with erasure probability $p_e$, let $(\mathbf{A},\mathbf{B})$ be controllable, $\sigma < \infty$, and $\sigma' > 0$. Then, the following three conditions are equivalent.\\ (i) The system is intermittently observable by the suboptimal estimator of \eqref{eqn:connect:sub1} with some $\mathbf{K}$.\\ (ii) $\exists \mathbf{K}$ and $\mathbf{M}, \mathbf{N} \succ \mathbf{0}$ such that \begin{align} \mathbf{M} - p_e \mathbf{A} \mathbf{M} \mathbf{A}^\dag - (1- p_e)(\mathbf{A}+\mathbf{K}\mathbf{C})\mathbf{M}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag = \mathbf{N}. \nonumber \end{align} (iii) $\exists \mathbf{K}$ and $\mathbf{M} \succ \mathbf{0}$ such that \begin{align} \begin{bmatrix} \mathbf{M} & \sqrt{1-p_e}( \mathbf{M}\mathbf{A} + \mathbf{K}\mathbf{C}) & \sqrt{p_e} \mathbf{M} \mathbf{A} \\ \sqrt{1-p_e}( \mathbf{M}\mathbf{A} + \mathbf{K}\mathbf{C})^\dag & \mathbf{M} & 0 \\ \sqrt{p_e}( \mathbf{M}\mathbf{A} )^\dag & 0 & \mathbf{M} \end{bmatrix}\succ \mathbf{0}. \nonumber \end{align} \label{thm:lyainter} \end{theorem} \begin{proof} By \eqref{eqn:dis:system}, \eqref{eqn:dis:system2} and \eqref{eqn:connect:sub1}, we can see that the estimation error follows the following dynamics: \begin{align} \mathbf{x}[n+1]-\mathbf{\widehat{x}}[n+1] &=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n]-(\mathbf{A}\mathbf{\widehat{x}}[n]-\beta[n] \mathbf{K}(\mathbf{y}[n]-\mathbf{C}\mathbf{\widehat{x}}[n])) \nonumber \\ &=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n]-(\mathbf{A}\mathbf{\widehat{x}}[n]-\beta[n] \mathbf{K}(\mathbf{C}\mathbf{x}[n]+\mathbf{v}[n]-\mathbf{C}\mathbf{\widehat{x}}[n])) \nonumber \\ &=(\mathbf{A}+\beta[n]\mathbf{K}\mathbf{C})(\mathbf{x}[n]-\mathbf{\widehat{x}}[n])+\mathbf{B}\mathbf{w}[n] +\beta[n]\mathbf{K}\mathbf{v}[n]. 
\label{eqn:connect:sub4} \end{align} Denote $\mathbf{x}[n]-\mathbf{\widehat{x}}[n]$ as $\mathbf{e}[n]$ and $\mathbf{B}\mathbf{w}[n] +\beta[n]\mathbf{K}\mathbf{v}[n]$ as $\mathbf{w'}[n]$. Then, $\mathbf{w'}[n]$ also has a uniformly bounded variance over time, and \eqref{eqn:connect:sub4} can be written as \begin{align} \mathbf{e}[n+1]=(\mathbf{A}+\beta[n]\mathbf{K}\mathbf{C})\mathbf{e}[n]+\mathbf{w'}[n]. \nonumber \end{align} Since $\mathbf{e}[n]$ is independent of $\mathbf{w'}[n]$ and $\beta[n]$ by causality, the covariance matrix of $\mathbf{e}[n]$ evolves as follows: \begin{align} \mathbb{E}[ \mathbf{e}[0]\mathbf{e}^\dag[0]] &= \mathbb{E}[\mathbf{x}[0]\mathbf{x}^\dag[0]], \nonumber \\ \mathbb{E}[ \mathbf{e}[n+1]\mathbf{e}^\dag [n+1] ]&= \mathbb{E}[(\mathbf{A}+\beta[n]\mathbf{K}\mathbf{C})\mathbf{e}[n]\mathbf{e}^\dag[n](\mathbf{A}+\beta[n]\mathbf{K}\mathbf{C})^\dag]+\mathbb{E}[\mathbf{w'}[n]\mathbf{w'}^\dag[n]] \nonumber \\ &=p_e \mathbf{A} \mathbb{E}[\mathbf{e}[n] \mathbf{e}^\dag[n]] \mathbf{A}^\dag +(1-p_e) (\mathbf{A}+\mathbf{K}\mathbf{C})\mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]](\mathbf{A}+\mathbf{K}\mathbf{C})^\dag +\mathbb{E}[\mathbf{w'}[n]\mathbf{w'}^\dag[n]]. \label{eqn:connect:sub2} \end{align} Now, we will prove the theorem in three steps.
(1) Condition (i) implies condition (ii).\\ First of all, by linearity we can prove that the estimation error $\mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]]$ is an increasing function of the variance of the underlying random variables.
Thus, if the system is intermittently observable by $\mathbf{K}$, the same system with $\mathbf{x}[0]=0$, $\mathbf{v}[n]=0$, $\mathbb{E}[\mathbf{w}[n]\mathbf{w}^\dag[n]]= \sigma'^2 \mathbf{I}$ is also intermittently observable. So set $\mathbf{x}[0]=0$, $\mathbf{v}[n]=0$, $\mathbb{E}[\mathbf{w}[n]\mathbf{w}^\dag[n]]= \sigma'^2 \mathbf{I}$ without loss of generality. With these parameters, we have $\mathbb{E}[\mathbf{e}[0]\mathbf{e}^\dag[0]]=0$ and $\mathbb{E}[\mathbf{w'}[n]\mathbf{w'}^\dag[n]]=\sigma'^2\mathbf{B} \mathbf{B}^\dag$. By the recursive equation in \eqref{eqn:connect:sub2}, we can show that for $n \geq 1$, the covariance matrix of $\mathbf{e}[n]$ can be written as \begin{align} \mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]]=\sigma'^2 \mathbf{B}\mathbf{B}^\dag + \sum^n_{k=1} \sum_{l \in \{-1,1 \}^k} \mathbf{\Delta}_l \mathbf{\Delta}_l^\dag. \end{align} where \begin{align} \mathbf{\Delta}_l := (\sqrt{p_e}\mathbf{A})^{\frac{1+l_1}{2}}(\sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C}))^\frac{1-l_1}{2} \cdots (\sqrt{p_e}\mathbf{A})^{\frac{1+l_k}{2}}(\sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C}))^\frac{1-l_k}{2} \sigma' \mathbf{B}. \nonumber \end{align} Here, $l_i=1$ means the $i$th observation was erased and $l_i=-1$ means that the $i$th observation was not erased.
Here, we can notice that $\mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]]$ is positive semidefinite and increasing in $n$. Furthermore, since the system is intermittently observable by condition (i), $\mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]]$ has to be uniformly bounded over time. Therefore, \begin{align} \mathbf{\bar{M}} := \lim_{n \rightarrow \infty}\mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]] = \sigma'^2 \mathbf{B}\mathbf{B}^\dag + \sum^{\infty}_{k=1} \sum_{l \in \{-1,1 \}^k} \mathbf{\Delta}_l \mathbf{\Delta}_l^\dag \label{eqn:barm} \end{align} must exist even though it involves an infinite sum. Let us define $\mathbf{M}$ and $\mathbf{N}$ as follows: \begin{align} \mathbf{M}&:= \sigma'^2 \mathbf{B}\mathbf{B}^\dag + \sum_{k=1}^{m-1} \sum_{l \in \{-1,1\}^k} (k+1) \mathbf{\Delta}_l \mathbf{\Delta}_l^\dag + \sum_{k'=m}^{\infty} \sum_{l' \in \{-1,1\}^{k'}} m \mathbf{\Delta}_{l'} \mathbf{\Delta}_{l'}^\dag \label{eqn:defM}\\ \mathbf{N}&:= \sigma'^2 \mathbf{B}\mathbf{B}^\dag + \sum_{k=1}^{m-1} \sum_{l \in \{-1,1\}^{k}} \mathbf{\Delta}_l \mathbf{\Delta}_l^\dag \label{eqn:defN} \end{align} where $m$ is the dimension of $\mathbf{A}$ as we defined in Section~\ref{sec:statement}. By the definitions of $\mathbf{\bar{M}}$ and $\mathbf{M}$, we can easily see that $m \mathbf{\bar{M}} \succeq \mathbf{M}$. Therefore, $\mathbf{M}$ also exists even though it involves an infinite sum. Furthermore, by the definitions of $\mathbf{M}$ and $\mathbf{N}$, we can easily see that \begin{align} \mathbf{M} \succeq \sigma'^2(\mathbf{B}\mathbf{B}^\dag+p_e \mathbf{A}\mathbf{B}\mathbf{B}^\dag \mathbf{A}^{\dag} + \cdots + p_e^{m-1} \mathbf{A}^{m-1}\mathbf{B}\mathbf{B}^\dag \mathbf{A}^{\dag (m-1)} ) \\ \mathbf{N} \succeq \sigma'^2(\mathbf{B}\mathbf{B}^\dag+p_e \mathbf{A}\mathbf{B}\mathbf{B}^\dag \mathbf{A}^{\dag} + \cdots + p_e^{m-1} \mathbf{A}^{m-1}\mathbf{B}\mathbf{B}^\dag \mathbf{A}^{\dag (m-1)} ) \end{align} since the terms on the R.H.S. are just a subset of the terms in $\mathbf{M}$ and $\mathbf{N}$.
Thus, we can see that $\mathbf{M} \succ \mathbf{0}$, $\mathbf{N} \succ \mathbf{0}$ since $\begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \cdots & \mathbf{A}^{m-1}\mathbf{B} \end{bmatrix}$ is full rank by the controllability of $(\mathbf{A},\mathbf{B})$ and all terms $\mathbf{B}\mathbf{B}^\dag, \cdots, p_e^{m-1} \mathbf{A}^{m-1}\mathbf{B}\mathbf{B}^\dag \mathbf{A}^{\dag (m-1)}$ are positive semidefinite. Finally, by the definitions and simple matrix algebra, we can verify that $\mathbf{M}$ and $\mathbf{N}$ satisfy the following relationship: \begin{align} \mathbf{M} = p_e \mathbf{A}\mathbf{M}\mathbf{A}^\dag + (1-p_e) (\mathbf{A}+\mathbf{K}\mathbf{C}) \mathbf{M} (\mathbf{A}+\mathbf{K}\mathbf{C})^\dag + \mathbf{N}. \label{eqn:fixedpoint} \end{align} Therefore, $\mathbf{M}$ and $\mathbf{N}$ satisfy condition (ii).\footnote{
Consider a fixed point equation, $f(x)=xf(x)+g(x)$. There exist multiple pairs $f(x)$ and $g(x)$ that satisfy this equation. For example, $(f(x), g(x))=(1+x+x^2+ \cdots, 1)$, $(f(x), g(x))=(1+2x+2x^2+ \cdots, 1+x)$, $\cdots$, $(f(x), g(x))=(1+2x+ \cdots + kx^{k-1} + (k+1)x^k + (k+1)x^{k+1} + \cdots, 1+x+\cdots+x^k)$ all satisfy the equation. Likewise, there are multiple matrices that satisfy the fixed point equation of \eqref{eqn:fixedpoint}. For example, we can easily check that $\mathbf{\bar{M}}$ of \eqref{eqn:barm} and $\mathbf{\bar{N}}:=\sigma'^2 \mathbf{B}\mathbf{B}^\dag$ satisfy \eqref{eqn:fixedpoint}, i.e. $\mathbf{\bar{M}}=p_e \mathbf{A}\mathbf{\bar{M}}\mathbf{A}^\dag+(1-p_e)(\mathbf{A}+\mathbf{K}\mathbf{C})\mathbf{\bar{M}}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag + \mathbf{\bar{N}}$. However, unlike $\mathbf{N}$, $\mathbf{\bar{N}}$ does not have to be positive definite. Thus, the choice of $\mathbf{\bar{M}}$, $\mathbf{\bar{N}}$ is not enough to prove the theorem. Here, we choose $\mathbf{M}$, $\mathbf{N}$ as shown in \eqref{eqn:defM}, \eqref{eqn:defN} as another solution for \eqref{eqn:fixedpoint}. In fact, the choice of coefficients in $\mathbf{M}, \mathbf{N}$ was inspired by the solutions of $f(x)=xf(x)+g(x)$ shown above.}
(2) Condition (ii) implies condition (i).\\ Since $\mathbf{M}$ and $\mathbf{N}$ of condition (ii) are positive definite, we can find $a$ such that $a^2 \mathbf{M} \succ \mathbb{E}[\mathbf{x}[0]\mathbf{x}^\dag[0]]$ and $a^2 \mathbf{N} \succ \mathbb{E}[\mathbf{w'}[n]\mathbf{w'}^\dag[n]]$ for all $n \in \mathbb{Z}^+$. Moreover, condition (ii) still holds if we replace $\mathbf{M}$ and $\mathbf{N}$ with $a^2 \mathbf{M}$ and $a^2 \mathbf{N}$ while keeping the same $\mathbf{K}$.
We will prove that $a^2 \mathbf{M} \succ \mathbb{E}[\mathbf{e}[n]\mathbf{e}^\dag[n]]$ for all $n \in \mathbb{Z}^+$ by induction. Since $a^2\mathbf{M} \succ \mathbb{E}[\mathbf{x}[0]\mathbf{x}^\dag[0]]=\mathbb{E}[\mathbf{e}[0]\mathbf{e}^\dag[0]]$, the claim is true for $n=0$. Assume the claim is true for $n$. Then, from the definition of $a$ and \eqref{eqn:connect:sub2}, \begin{align} \mathbb{E}[\mathbf{e}[n+1]\mathbf{e}^\dag[n+1]] \prec p_e \mathbf{A} (a^2 \mathbf{M}) \mathbf{A}^\dag +(1-p_e) (\mathbf{A}+\mathbf{K}\mathbf{C})(a^2 \mathbf{M})(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag +a^2 \mathbf{N}=a^2 \mathbf{M} \nonumber \end{align} where the last equality comes from condition (ii). Therefore, the estimation error is uniformly upper bounded by $a^2 \mathbf{M}$ when we use the $\mathbf{K}$ of condition (ii) as a gain matrix, and so condition (ii) implies condition (i).
(3) Condition (ii) is equivalent to condition (iii).\\ Since $\mathbf{M}^{-1} \succ \mathbf{0}$, by Schur complements in Lemma~\ref{lem:schur}, condition (ii) is equivalent to \begin{align} \begin{bmatrix} \mathbf{M}- p_e \mathbf{A} \mathbf{M} \mathbf{A}^\dag & \sqrt{1-p_e} (\mathbf{A}+\mathbf{K}\mathbf{C}) \\ \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} \end{bmatrix} \succ \mathbf{0} . \nonumber \end{align} Since \begin{align} &\begin{bmatrix} \mathbf{M}- p_e \mathbf{A} \mathbf{M} \mathbf{A}^\dag & \sqrt{1-p_e} (\mathbf{A}+\mathbf{K}\mathbf{C}) \\ \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} \end{bmatrix} \nonumber\\ &= \begin{bmatrix} \mathbf{M} & \sqrt{1-p_e} (\mathbf{A}+\mathbf{K}\mathbf{C}) \\ \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} \end{bmatrix} - \begin{bmatrix} \sqrt{p_e}\mathbf{A} \\ 0 \end{bmatrix} \mathbf{M} \begin{bmatrix} \sqrt{p_e}\mathbf{A}^\dag & 0 \end{bmatrix} \nonumber \end{align} and $\mathbf{M}^{-1} \succ \mathbf{0}$, we can apply Schur complement again. Thus, condition (ii) is equivalent to \begin{align} \begin{bmatrix} \mathbf{M} & \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C}) & \sqrt{p_e} \mathbf{A} \\ \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} & 0 \\ \sqrt{p_e} \mathbf{A}^\dag & 0 & \mathbf{M}^{-1} \end{bmatrix} \succ \mathbf{0}. 
\nonumber \end{align} Since $\mathbf{M}^{-1} \succ 0$, this condition is again equivalent to \begin{align} &\begin{bmatrix} \mathbf{M}^{-1} & 0 & 0 \\ 0 & \mathbf{I} & 0 \\ 0 & 0 & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{M} & \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C}) & \sqrt{p_e} \mathbf{A} \\ \sqrt{1-p_e}(\mathbf{A}+\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} & 0 \\ \sqrt{p_e} \mathbf{A}^\dag & 0 & \mathbf{M}^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{M}^{-1} & 0 & 0 \\ 0 & \mathbf{I} & 0 \\ 0 & 0 & \mathbf{I} \end{bmatrix} \nonumber \\ &= \begin{bmatrix} \mathbf{M}^{-1} & \sqrt{1-p_e}(\mathbf{M}^{-1}\mathbf{A}+ \mathbf{M}^{-1}\mathbf{K}\mathbf{C}) & \sqrt{p_e}\mathbf{M}^{-1}\mathbf{A} \\ \sqrt{1-p_e}(\mathbf{M}^{-1}\mathbf{A}+ \mathbf{M}^{-1}\mathbf{K}\mathbf{C})^\dag & \mathbf{M}^{-1} & 0 \\ \sqrt{p_e} (\mathbf{M}^{-1}\mathbf{A})^\dag & 0 & \mathbf{M}^{-1} \end{bmatrix} \succ \mathbf{0}. \nonumber \end{align} Since $\mathbf{M}^{-1} \succ \mathbf{0}$ and $\mathbf{K}$ is an arbitrary matrix, by replacing $\mathbf{M}^{-1}$ by $\mathbf{M}$ and $\mathbf{M}^{-1}\mathbf{K}$ by $\mathbf{K}$ we get condition (iii). \end{proof}
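As an illustrative numerical sketch (the system, gain, and erasure probabilities below are chosen for illustration), one can iterate the covariance recursion \eqref{eqn:connect:sub2} directly and watch the boundary $p_e = 1/2^2$ emerge for the scalar system $x[n+1]=2x[n]+w[n]$ with the gain $K=-2$, so that $\mathbf{A}+\mathbf{K}\mathbf{C}=0$:

```python
import numpy as np

def error_cov_trace(A, C, K, p_e, W, n_steps):
    """Iterate the recursion P <- p_e A P A^T + (1-p_e)(A+KC) P (A+KC)^T + W,
    starting from P = 0, and return trace(P) after n_steps iterations."""
    P = np.zeros_like(W)
    AK = A + K @ C
    for _ in range(n_steps):
        P = p_e * (A @ P @ A.T) + (1 - p_e) * (AK @ P @ AK.T) + W
    return np.trace(P)

A = np.array([[2.0]]); C = np.array([[1.0]]); K = np.array([[-2.0]])
W = np.eye(1)  # covariance of the effective disturbance w'

bounded = error_cov_trace(A, C, K, p_e=0.2, W=W, n_steps=200)    # p_e < 1/4: converges
diverging = error_cov_trace(A, C, K, p_e=0.3, W=W, n_steps=200)  # p_e > 1/4: blows up
print(bounded, diverging)
```

For $p_e=0.2$ the iterates approach the fixed point $1/(1-4p_e)=5$, while for $p_e=0.3$ they grow geometrically, matching the dichotomy in the theorem.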
As one would expect, the conditions of this theorem reduce to those of stability and observability when $p_e=1$ and $p_e=0$ respectively. One can easily check that condition (ii) of Theorem~\ref{thm:lyainter} reduces to condition (ii) of Theorem~\ref{thm:lyapunov} when $p_e=1$ and to condition (iii) of Theorem~\ref{thm:lyaob} when $p_e=0$. Likewise, condition (iii) of Theorem~\ref{thm:lyainter} reduces to condition (iii) of Theorem~\ref{thm:lyapunov} and condition (iv) of Theorem~\ref{thm:lyaob} respectively.
Even though conditions (ii) and (iii) of Theorem~\ref{thm:lyainter} are equivalent, condition (iii) is preferred since it is given in LMI (linear matrix inequality) form and convex optimization techniques~\cite{Boyd} are applicable. In fact, in \cite{Sinopoli_Kalman} Sinopoli~\textit{et al.} related condition (iii) to a quasi-convex problem.
Since we imposed an additional linear time-invariant constraint on the estimator, Theorem~\ref{thm:lyainter} gives a lower bound on $p_e^\star$. However, we can conclude that this lower bound is loose in general.\footnote{Numerical computation of the lower bound of Theorem~\ref{thm:lyainter} is shown in Figure 4 of \cite{Sinopoli_Kalman} for a system with $\mathbf{A}=\begin{bmatrix} 1.25 & 0 \\ 1 & 1.1 \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix}1 &1 \end{bmatrix}$. The numerical simulation shows that the lower bound is approximately $\frac{1}{(1.25 \times 1.1)^2}=0.528\cdots$, while the exact characterization of Theorem~\ref{thm:mainsingle} tells us that the critical erasure probability is $\frac{1}{1.25^2}=0.64$.} Moreover, even for stability, the characterization that the magnitudes of all eigenvalues are less than $1$ is much more intuitive than the LMI condition based on Lyapunov stability. Therefore, researchers including \cite{Elia_Remote} and \cite{Yilin_Characterization} sought a tight and intuitive characterization of the critical erasure probability.
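The two numbers in the footnote above are easy to reproduce with a one-line arithmetic check:

```python
# Footnote arithmetic: LMI-based lower bound vs. exact critical erasure probability.
lti_bound = 1 / (1.25 * 1.1) ** 2   # approximately 0.529 (Lyapunov/LMI lower bound)
critical = 1 / 1.25 ** 2            # 0.64 (exact characterization)
print(lti_bound, critical)
```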
\section{Intermittent observability as an extension of observability: Main Intuition} \label{sec:intui} To reach this goal, we borrow insights from a characterization of observability. $(\mathbf{A},\mathbf{C})$ is observable if and only if for all $s \in \mathbb{C}$ \begin{align} \begin{bmatrix} s \mathbf{I} - \mathbf{A} \\ \mathbf{C} \end{bmatrix} \mbox{ is full rank.}\nonumber \end{align} Moreover, by a similarity transform~\cite{Chen} we can assume that $\mathbf{A}$ is in Jordan form\footnote{Throughout the paper, we will use the Jordan form that induces an upper triangular matrix.} without loss of generality. With this additional assumption, the observability condition can be further simplified. \begin{theorem}[\cite{Chen}] Consider a linear system with system matrices $(\mathbf{A},\mathbf{C})$ where $\mathbf{A}$ is given in a Jordan form. For an eigenvalue $\lambda$ of $\mathbf{A}$, denote by $\mathbf{C}_\lambda$ the matrix whose columns consist of the columns of $\mathbf{C}$ which correspond to the first elements of the Jordan blocks in $\mathbf{A}$ associated with $\lambda$. Then, the states associated with $\lambda$ are observable if and only if the rank of $\mathbf{C}_\lambda$ is equal to the number of Jordan blocks associated with $\lambda$. The whole system is observable if and only if all states associated with all eigenvalues are observable.
\label{thm:jordanob} \end{theorem}
For example, let \begin{align} &\mathbf{A}= \begin{bmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}\nonumber \\ &\mathbf{C}= \begin{bmatrix} \mathbf{c_1} & \mathbf{c_2} & \mathbf{c_3} & \mathbf{c_4} \end{bmatrix}.\nonumber \end{align} Then, $\mathbf{C}_2 = \begin{bmatrix} \mathbf{c_1} & \mathbf{c_3} \end{bmatrix}$ and $\mathbf{C}_3= \begin{bmatrix} \mathbf{c_4} \end{bmatrix}$. The eigenvalue $2$ is observable if and only if $\mathbf{C}_2$ is full rank, and the eigenvalue $3$ is observable if and only if $\mathbf{C}_3$ is full rank. The whole system with $(\mathbf{A},\mathbf{C})$ is observable if and only if both eigenvalues are observable.
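Theorem~\ref{thm:jordanob}'s rank test is easy to run numerically on this example. The sketch below uses hypothetical scalar values for $\mathbf{c_1},\ldots,\mathbf{c_4}$ (any one-row $\mathbf{C}$ will do) and shows that a single output can never make the two Jordan blocks of the eigenvalue $2$ observable, since a one-row $\mathbf{C}_2$ cannot have rank $2$:

```python
import numpy as np

# One-row C: c_1..c_4 are scalars here (hypothetical illustrative values).
C = np.array([[1.0, 0.0, 1.0, 2.0]])

# Columns corresponding to the first element of each Jordan block:
C_2 = C[:, [0, 2]]   # eigenvalue 2: two blocks, starting at columns 1 and 3
C_3 = C[:, [3]]      # eigenvalue 3: one block, at column 4

rank_2 = np.linalg.matrix_rank(C_2)  # needs rank 2 for observability -> fails
rank_3 = np.linalg.matrix_rank(C_3)  # needs rank 1 for observability -> holds
print(rank_2, rank_3)
```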
This characterization reminds us of a \textit{divide-and-conquer} approach. First, divide the observability problem into smaller problems according to the identical eigenvalues. Then, check whether the smaller sub-problem for each eigenvalue is observable. Finally, the whole system is observable if and only if all the sub-problems are observable.
This suggests applying a divide-and-conquer approach for the characterization of intermittent observability. However, before we apply a divide-and-conquer approach, we first have to answer the following three questions:\\ (a) What are the minimal irreducible sub-problems?\\ (b) How can we solve each sub-problem?\\ (c) How can we combine the answers of the sub-problems?
We will give an exact characterization of intermittent observability by resolving these questions. The concept of eigenvalue cycles appears naturally as the answer to question (a).
Before we answer these questions, let us first consider the simplest case, scalar plants. For simplicity, we will only give hand-waving arguments in this section; the rigorous justification will be shown in later sections. The basic idea for the characterization of intermittent observability is to consider the dynamics in reverse time. For example, consider the following scalar system: for $n \in \mathbb{Z}^+$, \begin{align} \left\{ \begin{array}{l} x[n+1]=2 x[n] + w[n] \\ y[n]=\beta[n] x[n] \end{array}\right. . \label{eqn:intui:1} \end{align} Here, $x[0]=0$, $w[n]$ are i.i.d.~zero-mean unit-variance Gaussian, and $\beta[n]$ is an independent Bernoulli process with probability $1-p_e$. Then, we will show that the critical erasure probability is $p_e^\star=\frac{1}{2^2}$.
First, we extend the one-sided random process \eqref{eqn:intui:1} to a two-sided process. Let $w[n]=0$ for $n \in \mathbb{Z}^{--}$ where $\mathbb{Z}^{--}$ denotes the negative integers, and let $\beta[n]$ be a two-sided Bernoulli process with probability $1-p_e$. Then, we can see that the new two-sided process is equivalent to the original process except that $x[n]=0, y[n]=0$ for $n \in \mathbb{Z}^{--}$.
Let $n-S$ be the most recent non-erased observation at time $n$, i.e. $S:=\min \{k \geq 0: \beta[n-k]=1 \}$. Since $\beta[n]$ is a two-sided Bernoulli process, the stopping time $S$ is a geometric random variable, i.e. $\mathbb{P}\{S=s \}=(1-p_e){p_e}^s$.
(1) Sufficiency: We first prove that $p_e < \frac{1}{2^2}$ is sufficient for the intermittent observability of the example. For this, we analyze the performance of a suboptimal estimator $\widehat{x}[n]=2^S y[n-S]=2^S x[n-S]$. Then, the estimation error is upper bounded by \begin{align}
\mathbb{E}[(x[n]-\widehat{x}[n])^2]&=\mathbb{E}[\mathbb{E}[ (x[n]-\widehat{x}[n])^2 |S]]\\
&=\mathbb{E}[\mathbb{E}[(2^S x[n-S]+2^{S-1}w[n-S]+\cdots+w[n-1]-2^S x[n-S])^2|S]]\\ &\leq \mathbb{E}[2^{2(S-1)}+2^{2(S-2)}+\cdots+1]\\ &=\mathbb{E}[\frac{2^{2S}-1}{2^2-1}]\\ &=\frac{1}{2^2-1}\left(\left( \sum^{\infty}_{i=0}(1-p_e)(p_e 2^2)^{i} \right) - 1 \right). \end{align} Therefore, the estimation error is uniformly bounded if $p_e < \frac{1}{2^2}$.
(2) Necessity: For necessity, we use the fact that the disturbance $w[n-S]$ is independent of the non-erased observations present up to the time $n$. Therefore, the estimation error is lower bounded by \begin{align}
\mathbb{E}[(x[n]-\mathbb{E}[x[n]|y^n])^2] & \geq \mathbb{E}[\mathbb{E}[(2^{S-1} w[n-S])^2 | S]] \\ &=\mathbb{E}[2^{2(S-1)}\cdot \mathbf{1}(n-S \geq 0)] \\ &=\frac{1}{2^2}\left( \sum^{n}_{i=0} (1-p_e)(p_e 2^2)^{i}\right) \end{align} Therefore, if $p_e \geq \frac{1}{2^2}$ the estimation error must diverge to $\infty$.
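The threshold behavior of the series above can be checked numerically; the partial sums of $\sum_i (1-p_e)(p_e 2^2)^i$ stay bounded exactly when $p_e 2^2 < 1$ (a sketch with illustrative values of $p_e$):

```python
import numpy as np

def partial_sums(p_e, n, a=2.0):
    """Partial sums of sum_i (1 - p_e) * (p_e * a^2)^i from the bounds above."""
    r = p_e * a**2
    i = np.arange(n + 1)
    return np.cumsum((1 - p_e) * r**i)

below = partial_sums(0.2, 500)  # p_e < 1/4: converges to (1-p_e)/(1-4*p_e) = 4
above = partial_sums(0.3, 500)  # p_e > 1/4: grows without bound
print(below[-1])
```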
(3) Remarks: From the above proof, we can notice that intermittent observability is decided by whether $p_e 2^2$ is less than $1$. Here, $2$ is the largest eigenvalue of the system, and $p_e$ is the probability mass function (p.m.f.) tail of $S$, which can be defined as $\exp \limsup_{s \rightarrow \infty} \frac{1}{s} \ln \mathbb{P}\{S=s\}$. Thus, we can think of two quantities that could potentially differ between scalar and vector systems: (i) the maximum eigenvalue, and (ii) the p.m.f. tail.
It turns out that the latter is what changes: the p.m.f. tail is the difference between scalar and vector systems. The following example shows why and how the p.m.f. tail changes in vector systems.
\subsection{Power Property} \label{sec:powerproperty} The power property answers question (b) of the previous section, ``How can we solve each sub-problem?". Consider the example of \cite{Yilin_Characterization}. \begin{align} \left\{ \begin{array}{l} \mathbf{x}[n+1]=\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix} \mathbf{x}[n] + \mathbf{w}[n] \\ y[n]=\beta[n] \begin{bmatrix} 1 & 1 \end{bmatrix} \mathbf{x}[n] \end{array} \right. \nonumber \end{align} As above, we set $\mathbf{x}[0]=\mathbf{0}$, let $\mathbf{w}[n]$ be a $2$-dimensional i.i.d. Gaussian vector with mean $\mathbf{0}$ and covariance $\mathbf{I}$, and let $\beta[n]$ be an independent Bernoulli process with probability $1-p_e$. We also extend the one-sided process to a two-sided process in the same way.
We can see the states are $2$-dimensional, while the observations are $1$-dimensional. Therefore, unlike in scalar systems, at least two observations are required to estimate the states. Moreover, if we write the observability Gramian matrix, we immediately notice cyclic behavior: \begin{align} &\mathbf{C}=\begin{bmatrix}1 & 1 \end{bmatrix} \nonumber \\ &\mathbf{C}\mathbf{A}^{-1}=\begin{bmatrix} \frac{1}{2} & -\frac{1}{2} \end{bmatrix} \nonumber \\ &\mathbf{C}\mathbf{A}^{-2}=\begin{bmatrix} \frac{1}{4} & \frac{1}{4} \end{bmatrix} \nonumber \\ &\mathbf{C}\mathbf{A}^{-3}=\begin{bmatrix} \frac{1}{8} & -\frac{1}{8} \end{bmatrix} \nonumber \\ &\vdots \nonumber \end{align} Notice that $\mathbf{C},\mathbf{C}\mathbf{A}^{-2},\mathbf{C}\mathbf{A}^{-4},\cdots$ are linearly dependent and $\mathbf{C}\mathbf{A}^{-1},\mathbf{C}\mathbf{A}^{-3},\mathbf{C}\mathbf{A}^{-5},\cdots$ are linearly dependent. Therefore, as observed in \cite{Yilin_Characterization}, we need both even and odd time observations to estimate the states. In this example, we will show that $p_e^\star = \frac{1}{2^4}$.
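The two-periodic structure of the rows $\mathbf{C}\mathbf{A}^{-k}$ can be verified in a few lines; every row is $1/4$ times the row two steps before it (illustrative sketch):

```python
import numpy as np

A = np.diag([2.0, -2.0])
C = np.array([[1.0, 1.0]])
A_inv = np.linalg.inv(A)

rows = [C @ np.linalg.matrix_power(A_inv, k) for k in range(6)]
# Even k: multiples of [1, 1]; odd k: multiples of [1, -1].
for k in range(2, 6):
    assert np.allclose(rows[k], rows[k - 2] / 4)
print(rows[1], rows[3])
```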
(1) Sufficiency: Let $p_e < \frac{1}{2^4}$. From \eqref{eqn:dis:system} and \eqref{eqn:dis:system2}, we can see that when $\beta[n-k]=1$ the following equations hold: \begin{align} \mathbf{x}[n]&=\mathbf{A}^k \mathbf{x}[n-k] + \mathbf{A}^{k-1}\mathbf{w}[n-k]+ \cdots + \mathbf{w}[n-1] \label{eqn:intui:2}\\ \mathbf{y}[n-k]&=\mathbf{C}\mathbf{x}[n-k] + \mathbf{v}[n-k] \nonumber \\ &=\mathbf{C}\mathbf{A}^{-k} \mathbf{x}[n] - \underbrace{(\mathbf{C} \mathbf{A}^{-1} \mathbf{w}[n-k] + \cdots + \mathbf{C} \mathbf{A}^{-k} \mathbf{w}[n-1] - \mathbf{v}[n-k])}_{:=\mathbf{v'}[n-k]} \label{eqn:intui:3} \end{align}
Here, we can see the variance of $\mathbf{v'}[n-k]$ is bounded: $\mathbb{E}[|\mathbf{v'}[n-k]|^2]= \mathbb{E}[( \begin{bmatrix}\frac{1}{2} & -\frac{1}{2} \end{bmatrix}\mathbf{w}[n-k]+\cdots+\begin{bmatrix}\frac{1}{2^k} & \frac{1}{(-2)^k} \end{bmatrix}\mathbf{w}[n-1])^2] \leq 2 \cdot \frac{\frac{1}{4}}{1-\frac{1}{4}}=\frac{2}{3}$.
Now, the stopping time $S$ until we have enough observations to estimate the states becomes the first time until we get both even and odd time observations, i.e. $S:=\inf \{k : \exists\, k_1, k_2 \mbox{ such that } 0 \leq k_1 < k_2 \leq k,\ \beta[n-k_1]=1,\ \beta[n-k_2]=1,\ k_1 \not\equiv k_2 \pmod 2 \}$. Here, the tail of the p.m.f. of $S$ becomes thicker than in the scalar case. We can actually prove that the p.m.f. tail of $S$ is $\exp \limsup_{s\rightarrow \infty} \frac{1}{s} \ln \mathbb{P}\{S=s \}=p_e^{\frac{1}{2}}$, which we will rigorously justify in Lemma~\ref{lem:app:geo}. Thus, we can find $\delta, c>0$ such that $p_e < \frac{1}{2^4}-\delta$ and $\mathbb{P}\{S=s \} \leq c\left( \frac{1}{2^4}-\delta \right)^{\frac{s}{2}}$ for all $s \in \mathbb{Z}^+$.
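For this particular example, a direct counting argument (given here only as an illustration; the formal proof goes through Lemma~\ref{lem:app:geo}) yields the closed form $\mathbb{P}\{S=2j+1\}=(1-p_e)p_e^{j}(1-p_e^{j+1})$ and $\mathbb{P}\{S=2j+2\}=(1-p_e)p_e^{j+1}(1-p_e^{j+1})$ for $j \geq 0$. The sketch below checks numerically that this p.m.f. sums to one and that its two-step tail ratio tends to $p_e$, i.e. the per-step tail is $p_e^{1/2}$:

```python
import numpy as np

def pmf_S(p_e, s_max):
    """P{S = s}: first time both an even- and an odd-offset observation
    have arrived (closed form from a direct counting argument)."""
    q = 1 - p_e
    pmf = np.zeros(s_max + 1)
    for j in range((s_max - 1) // 2 + 1):
        if 2 * j + 1 <= s_max:
            pmf[2 * j + 1] = q * p_e**j * (1 - p_e**(j + 1))
        if 2 * j + 2 <= s_max:
            pmf[2 * j + 2] = q * p_e**(j + 1) * (1 - p_e**(j + 1))
    return pmf

p_e = 0.1
pmf = pmf_S(p_e, 400)
# The p.m.f. sums to one (up to a negligible truncated tail), and
# P{S=s+2}/P{S=s} tends to p_e as s grows.
print(pmf.sum(), pmf[41] / pmf[39])
```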
Now, we will analyze the performance of a suboptimal estimator which only uses two observations. Let $\mathbf{\widehat{x}}[n]:=\begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2} \end{bmatrix}^{-1} \begin{bmatrix} \mathbf{y}[n-k_1] \\ \mathbf{y}[n-k_2] \end{bmatrix}$ where $k_1 < k_2$ are non-erased offsets of different parity achieving the stopping time $S$. Here, the matrix inverse exists since $k_1$ and $k_2$ have different parities. Let $\mathcal{F}_{\beta}$ be the $\sigma$-field generated by $\beta[n]$. Then, $k_1, k_2, S$ are deterministic variables conditioned on $\mathcal{F}_{\beta}$. The estimation error is upper bounded by \begin{align}
\mathbb{E}[|\mathbf{x}[n]-\mathbf{\widehat{x}}[n]|_2^2]&=
\mathbb{E}[\mathbb{E}[|\mathbf{x}[n]-\mathbf{\widehat{x}}[n]|_2^2| \mathcal{F}_{\beta}]]=
\mathbb{E}[\mathbb{E}[\left|\begin{bmatrix}\mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2}\end{bmatrix}^{-1} \begin{bmatrix} \mathbf{v'}[n-k_1]\\ \mathbf{v'}[n-k_2] \end{bmatrix}
\right|_2^2
|\mathcal{F}_{\beta}]] \nonumber \\ &\leq
\mathbb{E}[\mathbb{E}[ 8 \cdot \left|\begin{bmatrix}\mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2}\end{bmatrix}^{-1}
\right|_{max}^2 \cdot \left|\begin{bmatrix} \mathbf{v'}[n-k_1]\\ \mathbf{v'}[n-k_2] \end{bmatrix}
\right|_{max}^2
|\mathcal{F}_{\beta}]] \nonumber \\ &=
8 \cdot \mathbb{E}[ \left| \begin{bmatrix} 2^{-k_1}& (-2)^{-k_1}\\ 2^{-k_2}& (-2)^{-k_2} \end{bmatrix}^{-1}
\right|_{max}^2 \cdot
\mathbb{E}[ \left|\begin{bmatrix} \mathbf{v'}[n-k_1]\\ \mathbf{v'}[n-k_2] \end{bmatrix}
\right|_{max}^2
|\mathcal{F}_{\beta}]] \nonumber \\ &=
8 \cdot \mathbb{E}[ \left| \frac{1}{2 \cdot 2^{-k_1} \cdot (-2)^{-k_2}} \begin{bmatrix} (-2)^{-k_2} & -(-2)^{-k_1} \\ -2^{-k_2} & 2^{-k_1} \\ \end{bmatrix}
\right|_{max}^2 \cdot
\mathbb{E}[ \left|\begin{bmatrix} \mathbf{v'}[n-k_1]\\ \mathbf{v'}[n-k_2] \end{bmatrix}
\right|_{max}^2
|\mathcal{F}_{\beta}]] \nonumber \\ &= 8 \cdot \mathbb{E}[ \frac{1}{2^2} \left(\frac{2^{-k_1}}{2^{-k_1} \cdot 2^{-k_2}}\right)^2 \cdot
\mathbb{E}[ \left|\begin{bmatrix} \mathbf{v'}[n-k_1]\\ \mathbf{v'}[n-k_2] \end{bmatrix}
\right|_{max}^2
|\mathcal{F}_{\beta}]] \nonumber \\ &\leq 2 \cdot \mathbb{E}[
2^{2 k_2} \cdot
\mathbb{E}[ |\mathbf{v'}[n-k_1]|^2+|\mathbf{v'}[n-k_2]|^2
|\mathcal{F}_{\beta}]]\nonumber \\ &\leq \frac{8}{3} \mathbb{E}[2^{2 S}] \leq \frac{8}{3} \sum^{\infty}_{s=0} 2^{2s} c \left( \frac{1}{2^4}-\delta \right)^{\frac{s}{2}}= \frac{8}{3} \sum^{\infty}_{s=0} c ( 1- 2^4 \delta )^{\frac{s}{2}} < \infty \nonumber \end{align} Therefore, the estimation error is uniformly bounded for $p_e < \frac{1}{2^4}$.
(2) Necessity: We will show that the system is not intermittently observable when $p_e \geq \frac{1}{2^4}$. Define the stopping time $S'$ as $\inf\{k \geq 0 : \beta[n-k]=1, \mbox{$k$ is even} \}$. Then, $\mathbb{P}\{S'=0 \}=1-p_e$, $ \mathbb{P}\{S'=1 \}=0$, $\mathbb{P}\{S'=2 \}=(1-p_e)p_e$, $\cdots$. Thus, the p.m.f. tail of $S'$, $\exp \limsup_{s \rightarrow \infty}\frac{1}{s} \ln \mathbb{P}\{S'=s \}$, is $p_e^{\frac{1}{2}}$.
The state disturbance $\mathbf{w}[n-S']$ can be decomposed into two orthogonal components, $\mathbf{w}[n-S']=\begin{bmatrix} 1 \\ 1\end{bmatrix} w_1[n-S']+\begin{bmatrix} 1 \\ -1\end{bmatrix}w_2[n-S']$ where $w_1[n-S']$ and $w_2[n-S']$ are independent Gaussian random variables with zero mean and variance $\frac{1}{2}$. From the system equations \eqref{eqn:intui:2}, \eqref{eqn:intui:3} and the definition of $S'$, we can see that all the observations between time $n-S'$ and $n$ are orthogonal to $w_2[n-S']$. Thus, the estimator does not know anything about $w_2[n-S']$ at time $n$, and thus we can lower bound the estimation error as follows. \begin{align}
\mathbb{E}[|\mathbf{x}[n]-\mathbb{E}[\mathbf{x}[n]|\mathbf{y}^n]|_2^2]
&\geq \mathbb{E}[\mathbb{E}[|\mathbf{A}^{S'-1}\begin{bmatrix} 1 \\ -1 \end{bmatrix} w_2[n-S']|_2^2|S']] \nonumber \\
&\geq \mathbb{E}[2^{2(S'-1)}\mathbb{E}[(w_2[n-S'])^2|S']]= \frac{1}{2^3} \mathbb{E}[2^{2S'} \cdot \mathbf{1}(n-S' \geq 0)] \nonumber \\ &=\frac{1}{2^3} \sum^{\lfloor \frac{n}{2} \rfloor}_{i=0} (1-p_e) ( \sqrt{p_e} 2^2)^{2i} \nonumber \end{align} Thus, if $p_e \geq \frac{1}{2^4}$ the estimation error diverges to $\infty$.
(3) Remarks: Compared to the scalar case, the p.m.f. tails of both $S$ and $S'$ in this vector system thicken to $\sqrt{p_e}$. This results in decreasing the critical erasure probability to $\frac{1}{2^4}$. The cyclic behavior of the observability Gramian matrix, $\mathbf{C}$, $\mathbf{C}\mathbf{A}^{-1}$, $\cdots$, causes the thickening of the p.m.f. tails. Thus, to capture this cyclic behavior of the observability Gramian matrix, we tentatively define an eigenvalue cycle as follows\footnote{We will formally define eigenvalue cycles later in Section~\ref{sec:interob}.}: We say that the eigenvalues of $\mathbf{A}$, $\lambda_1$ and $\lambda_2$, belong to the same \textbf{eigenvalue cycle} if $\frac{\lambda_1}{\lambda_2}$ is a root of unity, i.e. $\left(\frac{\lambda_1}{\lambda_2}\right)^{n}=1$ for some positive integer $n$. Moreover, we say that $\mathbf{A}$ has \textbf{no eigenvalue cycles} if all the ratios between the eigenvalues of $\mathbf{A}$ are $1$ or not roots of unity, which implies $\mathbf{A}$ has no nontrivial eigenvalue cycles.
To generalize this example and find the p.m.f. tail for arbitrary eigenvalue cycles, we use the idea of large deviations~\cite{Dembo} which is equivalent to a union bound for simple cases. The idea goes as follows.
First, consider test channels that are erasure-type channels which would make the observability Gramian rank-deficient. For this example, these are the channel that erases every odd-time observation, the channel that erases every even-time observation, and the channel that erases all observations.\footnote{In the actual characterization shown in Section~\ref{sec:interob}, we will see that the set $S'$ in \eqref{eqn:def:lprime:thm} is a proxy for these test channels. This minimum distance to the test channels will be denoted as $l_i$ in \eqref{eqn:def:lprime:thm}.}
Next, measure the distance from the true channel to the test channels. In our case, the true channel is the channel without any restriction, and the distance measure between the true and test channels is the Hamming distance. For the test channels considered above, the distance to the odd-time erasure channel is $1$ since we restrict one out of every two time indices to be an erasure. Likewise, the distance to the even-time erasure channel is $1$ and the distance to the all-erasure channel is $2$.
Then, the large deviation principle intuitively says that the performance is decided by the minimum-distance test channel. For this example, the odd-time or even-time erasure channel, whose distance is $1$, governs the performance.
So the effect of the eigenvalue cycle is to thicken the tail of the stopping time until we get enough observations to estimate the states. Analytically, the effect is equivalent to raising $p_e$ to an appropriate power, hence the name ``power property''.
\subsection{Max Combining} \label{sec:maxcombining} This property answers question (c), i.e. how we go from a single eigenvalue cycle to multiple eigenvalue cycles. Consider the following example with two eigenvalue cycles: \begin{align} \left\{ \begin{array}{l} \begin{bmatrix} x_1[n+1] \\ x_2[n+1] \\ x_3[n+1] \end{bmatrix}=\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \\ \end{bmatrix} \begin{bmatrix} x_1[n] \\ x_2[n] \\ x_3[n] \end{bmatrix} + \mathbf{w}[n] \\ y[n]=\beta[n] \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \mathbf{x}[n] \end{array} \right. \end{align} As before, we set $\mathbf{x}[0]=\mathbf{0}$, let $\mathbf{w}[n]$ be i.i.d. Gaussian with mean $\mathbf{0}$ and covariance $\mathbf{I}$, and let $\beta[n]$ be an independent Bernoulli process with success probability $1-p_e$. We also extend the one-sided process to the two-sided process. Here, we can see there are two eigenvalue cycles. One eigenvalue cycle is $\{ 2,-2\}$ and the other is $\{3\}$; they can be thought of as two subsystems of the original system.
Then, from the previous arguments, we can see that the p.m.f. tails for these two subsystems are different. The p.m.f. tail for the eigenvalue cycle $\{3\}$ is $p_e$, while the p.m.f. tail for the eigenvalue cycle $\{2,-2 \}$ is thickened to $p_e^{\frac{1}{2}}$. Therefore, the question is whether the thickened tail of the eigenvalue cycle $\{2,-2 \}$ affects $\{ 3 \}$. The answer turns out to be ``No'', and we can consider the two subsystems separately. Thus, in this example, the system is intermittent observable if and only if both subsystems are intermittent observable, i.e. $p_e^\star = \frac{1}{\max\{3^2, 2^{2 \cdot 2}\}}$. The main idea used to justify this is the so-called \textit{successive decoding} developed in information theory~\cite{Cover}.
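Anticipating the general formula of Theorem~\ref{thm:mainsingle}, the max-combining claim for this example can be sketched numerically as follows (the helper and its argument format are ours; each cycle is summarized by its common eigenvalue magnitude, its period $p_i$, and the erasure count $l_i$ defined later):

```python
def critical_erasure(cycles):
    """cycles: list of (magnitude, period p_i, erasure count l_i), one per
    eigenvalue cycle. Returns 1 / max_i magnitude^(2 * p_i / l_i)."""
    return 1.0 / max(mag ** (2 * p / l) for mag, p, l in cycles)

# Cycle {3}: p = 1, l = 1.  Cycle {2, -2}: p = 2, l = 1 (erasing one of the
# two phases already makes the Gramian rank-deficient, as in the power
# property example above).
p_star = critical_erasure([(3, 1, 1), (2, 2, 1)])
# p_star equals 1/16 = 1/max{3^2, 2^(2*2)}, matching the claim in the text.
```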
(1) Sufficiency: We will prove that $p_e < \frac{1}{\max\{ 3^2, 2^{2 \cdot 2} \}}$ is sufficient for intermittent observability using the successive decoding idea. The idea is simple. We first estimate the state $x_1[n]$. Then, since we have an estimate for $x_1[n]$, we can subtract it from the observations and reduce the dimension of the system. The remaining estimation error is treated as noise.
Let $S$ be the stopping time until we receive three observations in the reverse process, i.e. $S:=\inf \{k : 0 \leq k_1 < k_2 < k_3 \leq k, \beta[n-k_1]=1, \beta[n-k_2]=1, \beta[n-k_3]=1 \}$. Here, we can prove that the p.m.f. tail of $S$ is the same as in the scalar case. Therefore, $\exp\left(\limsup_{s\rightarrow \infty} \frac{1}{s} \ln \mathbb{P}\{S=s \}\right)=p_e$, which we will justify in Lemma~\ref{lem:app:geo}. Since we have the three observations at times $n-k_1$, $n-k_2$ and $n-k_3$, by the pigeonhole principle at least two of them must be congruent mod $2$. Assume without loss of generality that $k_1$ and $k_2$ are both even. Then, by \eqref{eqn:intui:3} we have \begin{align} y[n-k_1]&=\begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix}^{-k_1} \begin{bmatrix} x_1[n] \\ x_2[n] \\ x_3[n] \end{bmatrix} + v'[n-k_1]\\ &=\begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 3^{-k_1} & 0 & 0 \\ 0 & 2^{-k_1} & 0 \\ 0 & 0 & 2^{-k_1} \end{bmatrix} \begin{bmatrix} x_1[n] \\ x_2[n] \\ x_3[n] \end{bmatrix} + v'[n-k_1] \\ &=\begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}^{-k_1} \begin{bmatrix} x_1[n] \\ x_2[n]+x_3[n] \end{bmatrix}+ v'[n-k_1] \end{align} As in the previous section, we can also prove that
$\mathbb{E}[|v'[n-k]|^2] \leq 2 \frac{\frac{1}{4}}{1-\frac{1}{4}}+\frac{\frac{1}{9}}{1-\frac{1}{9}}=\frac{19}{24}$. Here, we can notice that instantaneously at time $n-k_1$ and $n-k_2$ the system equation behaves like the following system with no eigenvalue cycles: \begin{align} \left\{ \begin{array}{l} \begin{bmatrix} x_1[n+1] \\ x_2[n+1]+x_3[n+1] \end{bmatrix}=\begin{bmatrix} 3 & 0\\ 0 & 2\\ \end{bmatrix} \begin{bmatrix} x_1[n] \\ x_2[n]+x_3[n] \end{bmatrix} + \begin{bmatrix} w_1[n] \\ w_2[n]+w_3[n] \end{bmatrix} \\ y[n]=\beta[n] \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} x_1[n] \\ x_2[n]+x_3[n] \end{bmatrix} \end{array} \right. \nonumber \end{align} Consider the suboptimal estimator $ \mathbf{\widehat{x}}[n]= \begin{bmatrix} \widehat{x}_1[n]\\ \widehat{x}_2[n]+\widehat{x}_3[n] \end{bmatrix}= \begin{bmatrix} 3^{-k_1} & 2^{-k_1} \\ 3^{-k_2} & 2^{-k_2} \\ \end{bmatrix}^{-1} \begin{bmatrix} y[n-k_1] \\ y[n-k_2] \end{bmatrix} $. Let $\mathcal{F}_{\beta}$ be the $\sigma$-field generated by $\beta[n]$, and $F$ be the event that $k_1$ and $k_2$ are even. The estimation error is upper bounded by \begin{align}
&\mathbb{E}[\left|\begin{bmatrix} x_1[n] \\ x_2[n]+x_3[n] \end{bmatrix} - \mathbf{\widehat{x}}[n] \right|_2^2 | \mathcal{F}_{\beta} \cap F ]
=\mathbb{E}[\left| \begin{bmatrix} 3^{-k_1} & 2^{-k_1} \\ 3^{-k_2} & 2^{-k_2} \\ \end{bmatrix}^{-1} \begin{bmatrix} v'[n-k_1] \\ v'[n-k_2] \end{bmatrix}
\right|_2^2 | \mathcal{F}_{\beta} \cap F ] \\
&\leq 8 \cdot \left| \begin{bmatrix} 3^{-k_1} & 2^{-k_1} \\ 3^{-k_2} & 2^{-k_2} \\
\end{bmatrix}^{-1}\right|_{max}^2
\cdot \mathbb{E}[\left|\begin{bmatrix} v'[n-k_1] \\ v'[n-k_2] \end{bmatrix}
\right|_{max}^2 | \mathcal{F}_{\beta} \cap F ] \\ &\leq
8 \cdot \frac{19}{12}\cdot \left| \frac{1}{3^{-k_1}2^{-k_2}-2^{-k_1}3^{-k_2}} \begin{bmatrix} 2^{-k_2} & -2^{-k_1} \\ -3^{-k_2} & 3^{-k_1} \\
\end{bmatrix}\right|_{max}^2 \\ &=8 \cdot \frac{19}{12} \cdot \left( \frac{2^{-k_1}} {3^{-k_1} 2^{-k_2}\left(1- \left( \frac{2}{3} \right)^{k_2-k_1} \right)} \right)^2 \\ &\leq 8 \cdot \frac{19}{12} \cdot 3^2 \cdot (3^{k_1} \cdot 2^{k_2-k_1})^2 \leq 57 \cdot 3^{2 k_2}\leq 57 \cdot 3^{2 S} \end{align} Likewise, we can prove that the same bound holds when $k_1$ and $k_2$ are both odd. Therefore, the estimation error is bounded by $57 \cdot 3^{2 S}$. As in the previous section, we can prove that if $p_e < \frac{1}{3^2}$ then $\mathbb{E}[ 3^{2 S}]<\infty$. Thus, the expectation of the estimation error for $x_1[n]$ is uniformly bounded over time.
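The key quantitative step above is the max-norm bound on the inverted $2\times2$ observability matrix. A small numerical check of that bound (ours, using numpy) for a few pairs $k_1 < k_2$:

```python
import numpy as np

def inv_max_norm(k1, k2):
    """Max-norm of the inverse of [[3^-k1, 2^-k1], [3^-k2, 2^-k2]]."""
    M = np.array([[3.0 ** -k1, 2.0 ** -k1],
                  [3.0 ** -k2, 2.0 ** -k2]])
    return np.abs(np.linalg.inv(M)).max()

# For k1 < k2 the text bounds this by 3 * 3^k1 * 2^(k2 - k1), using
# 1 / (1 - (2/3)^(k2 - k1)) <= 3.
checks = [(0, 2), (2, 4), (1, 5), (3, 9)]
ok = all(inv_max_norm(k1, k2) <= 3 * 3.0 ** k1 * 2.0 ** (k2 - k1)
         for k1, k2 in checks)
```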
Once we estimate $x_1[n]$, we can subtract the estimate $\widehat{x}_1[n]$ from the observation, i.e. $y'[n]:=y[n]-\beta[n]\widehat{x}_1[n]$. Then, the new system with the observation $y'[n]$ behaves like the following system: \begin{align} \left\{ \begin{array}{l} \begin{bmatrix} x_2[n+1] \\ x_3[n+1] \end{bmatrix}=\begin{bmatrix}
2 & 0 \\
0 & -2 \\ \end{bmatrix} \begin{bmatrix} x_2[n] \\ x_3[n] \end{bmatrix} + \mathbf{w}[n] \\ y'[n]=\beta[n]\left( \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} x_2[n] \\ x_3[n] \end{bmatrix} +(x_1[n]-\widehat{x}_1[n]) \right) \end{array} \right. \end{align} Since the expectation of the estimation error for $x_1[n]$ is uniformly bounded, it can be considered as a part of the observation noise.\footnote{Precisely speaking, the estimation error for $x_1[n]$ is a random variable which depends on the channel erasure process. Therefore, the rigorous proof of Section~\ref{sec:dis:suff} has more steps to justify the argument.} In the same way as the previous section, we can prove that the estimation error for $x_2[n], x_3[n]$ is uniformly bounded if $p_e < \frac{1}{2^{2 \cdot 2}}$. Notice that the minimum number of observations required to estimate the state by observability Gramian matrix inversion is $3$, the number of states. However, here we used more observations in order to apply the successive decoding idea.
(2) Necessity: To prove that the example is not intermittent observable if $p_e \geq \frac{1}{\max\{ 3^2, 2^{2 \cdot 2} \}}$, we will use a genie argument. If the states $x_2[n],x_3[n]$ are given to the estimator as side information, the remaining system with $x_1[n]$ is a scalar system with eigenvalue $3$. We know that if $p_e \geq \frac{1}{3^2}$, $x_1[n]$ is not intermittent observable. We can similarly give $x_1[n]$ as side information to conclude that the system is not intermittent observable when $p_e \geq \frac{1}{2^{2\cdot 2}}$.
(3) Remarks: In summary, we can solve problems with multiple eigenvalue cycles one cycle at a time, without worrying about the existence of the other eigenvalue cycles. In other words, at each step we estimate the eigenvalue cycle associated with the largest eigenvalue. After the estimation, the eigenvalue cycle can be subtracted from the system, up to a uniformly bounded estimation error. Then, we can simply repeat the steps for the remaining system. This procedure of successively solving for and subtracting the unknowns is called successive decoding in information theory, and is used as a decoding procedure for the multiple-access channel~\cite{Cover}.
Therefore, we can conclude that the intermittent observability for a multiple eigenvalue-cycle system is bottlenecked by the hardest-to-estimate eigenvalue cycle, which manifests as the max operation in the critical erasure probability calculation.
\subsection{Separability of Eigenvalue Cycles} \label{sec:separability} The remaining question is what the minimal irreducible sub-problems are; from the discussion so far, we expect the answer to be eigenvalue cycles. In other words, we will understand general systems with multiple eigenvalue cycles by dividing them into sub-systems with a single eigenvalue cycle. In the max-combining property, we already saw an example with multiple eigenvalue cycles. In that example, we first reduced the problem with multiple eigenvalue cycles to a problem with no eigenvalue cycles by sub-sampling the plant. In particular, in Section~\ref{sec:maxcombining} we saw that by sub-sampling (by $2$), the system with an eigenvalue cycle (period $2$) becomes a system with no eigenvalue cycles.
Thus, the question reduces to showing that for systems with no eigenvalue cycles the critical erasure probability is $\frac{1}{|\lambda_{max}|^2}$, which will be shown in Corollary~\ref{thm:nocycle}. To intuitively understand why this is true, we will consider three cases depending on the structure of $\mathbf{A}$.
The first case is when $\mathbf{A}$ is a diagonal matrix and the magnitudes of its eigenvalues are distinct. In fact, this case was already proved in \cite{Yilin_Characterization}. Consider the illustrative example with $\mathbf{A}=\begin{bmatrix} 3 & 0 \\ 0 & 2\end{bmatrix}$, $\mathbf{C}=\begin{bmatrix} 1 & 1 \end{bmatrix}$. Then, the observability Gramian of the system becomes $\begin{bmatrix} \mathbf{C}\mathbf{A}^{n_1} \\ \mathbf{C}\mathbf{A}^{n_2} \end{bmatrix} = \begin{bmatrix} 3^{n_1} & 2^{n_1} \\ 3^{n_2} & 2^{n_2} \end{bmatrix}$. To prove that the critical erasure probability is given as $\frac{1}{|\lambda_{max}|^2} = \frac{1}{3^2}$, it is enough to prove that the determinant of the observability Gramian is large enough for almost all distinct $n_1$ and $n_2$. To justify this, we can use the fact that the ratio of the elements, $(\frac{3}{2})^n$, is an exponentially increasing function.
The second case is when $\mathbf{A}$ is a diagonal matrix, and the eigenvalues are distinct but have the same magnitude. Let's consider the system with $\mathbf{A}=\begin{bmatrix} e^{j} & 0 \\ 0 &
e^{j\sqrt{2}} \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix} 1 & 1 \end{bmatrix}$. The observability Gramian is given as $\begin{bmatrix} \mathbf{C}\mathbf{A}^{n_1} \\ \mathbf{C}\mathbf{A}^{n_2} \end{bmatrix} = \begin{bmatrix} e^{j n_1} & e^{j\sqrt{2}n_1} \\ e^{j n_2} & e^{j\sqrt{2}n_2} \end{bmatrix}$, and as above it is enough to show that the determinant of this observability Gramian is large enough for almost all distinct $n_1$, $n_2$. Here, the arguments from \cite{Yilin_Characterization} no longer work. Instead, we use Weyl's criterion~\cite{Kuipers}, which tells us that the sequence $(e^{jn}, e^{j \sqrt{2}n})$ behaves like a random variable $(e^{j \theta_1}, e^{j \theta_2})$ where $\theta_1$ and $\theta_2$ are independent random variables uniformly distributed on $[0, 2\pi]$. In fact, the effect of the hypothetical random variables $(e^{j \theta_1}, e^{j \theta_2})$ is quite similar to that of the randomly-dithered nonuniform sampling discussed in Section~\ref{sec:nonuniform}.
The last case is when $\mathbf{A}$ is a Jordan block matrix. Let's consider the system with $\mathbf{A}=\begin{bmatrix} 2 & 1 \\ 0 &2 \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix} 1 & 0 \end{bmatrix}$. The observability gramian is given as $\begin{bmatrix} \mathbf{C}\mathbf{A}^{n_1} \\ \mathbf{C}\mathbf{A}^{n_2} \end{bmatrix} = \begin{bmatrix} 2^{n_1} & n_1 2^{n_1} \\
2^{n_2} & n_2 2^{n_2} \end{bmatrix}$, and we have to show that the determinant of this observability gramian is large enough for almost all distinct $n_1$, $n_2$. Unlike the above cases, this example has polynomial terms in $n_1$, $n_2$. Exploiting this fact, we can reduce the problem to the fact that a polynomial function on $n$ becomes zero only on a measure zero set.
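For this Jordan-block case the determinant can be checked directly: $\det\begin{bmatrix} 2^{n_1} & n_1 2^{n_1} \\ 2^{n_2} & n_2 2^{n_2} \end{bmatrix} = 2^{n_1+n_2}(n_2-n_1)$, which vanishes only on the measure-zero set $n_1=n_2$. A quick numerical confirmation (ours, using numpy):

```python
import numpy as np

def gramian_det(n1, n2):
    """Determinant of the Jordan-block observability matrix
    [[2^n1, n1*2^n1], [2^n2, n2*2^n2]]."""
    G = np.array([[2.0 ** n1, n1 * 2.0 ** n1],
                  [2.0 ** n2, n2 * 2.0 ** n2]])
    return np.linalg.det(G)

# Closed form: 2^(n1 + n2) * (n2 - n1) -- nonzero whenever n1 != n2.
```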
By combining the insights from these three examples, we can prove that for a general matrix $\mathbf{A}$ with no eigenvalue cycles, the critical erasure probability is given as $\frac{1}{|\lambda_{max}|^2}$.
\section{Intermittent Observability Characterization} \label{sec:interob} Based on the intuition of the previous section, the intermittent observability condition can be characterized. We begin with the formal definition of a cycle. \begin{definition} A multiset (a set that allows repetitions of its elements) $\{a_1,a_2,\cdots, a_l \}$ is called a cycle with length $l$ and period $p$ if $\left(\frac{a_i}{a_j}\right)^p=1$ for all $i,j \in \{ 1,2,\cdots, l \}$ and some $p \in \mathbb{N}$. Following convention, the period $p$ is defined\footnote{We use $\frac{0}{0}=1$, $\frac{1}{0}= \infty$, $1^{\infty}=\infty$ and $\frac{1}{\infty}=0$.} as \begin{align} p:=\min \left\{n \in \mathbb{N} :
\left(\frac{a_i}{a_j}\right)^n =1, \forall i,j \in
\left\{1,2,\cdots, l \right\} \right\}. \end{align}
\end{definition}
For example, $\{a\}$ is a cycle with length $1$ and period $1$ by itself. $\{ e^{j\omega}, e^{j (\omega+ \frac{2 \pi }{6})} \}$ is a cycle with length $2$ and period $6$. $\{e^{j}, e^{j\sqrt{2}} \}$ and $\{1,2 \}$ are not cycles. A trivial necessary condition for
$a_1,a_2$ to belong to the same cycle is $|a_1|=|a_2|$. It can also be shown that cycles are closed under overlapping unions, meaning that if $\{ a_1, a_2 \}$ and $\{a_2 ,a_3 \}$ are cycles, then $\{a_1, a_2, a_3 \}$ is also a cycle.
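The period of a given multiset can be found by direct search. The helper below is ours (a floating-point search up to a fixed maximum order, so a return value of None only means no period up to that bound was found):

```python
import cmath

def cycle_period(eigs, max_order=64, tol=1e-9):
    """Smallest p <= max_order with (a_i/a_j)^p = 1 for all pairs,
    or None if no such p is found within the search bound."""
    for p in range(1, max_order + 1):
        if all(abs((a / b) ** p - 1.0) < tol for a in eigs for b in eigs):
            return p
    return None

w6 = cmath.exp(2j * cmath.pi / 6)
# {e^{j w}, e^{j(w + 2 pi/6)}}: period 6.  {a}: period 1.  {2, -2}: period 2.
```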
Now, we can define an eigenvalue cycle. It is well known in linear system theory~\cite{Chen} that by a proper change of coordinates, any linear system \eqref{eqn:dis:system} can be written in an equivalent form with a Jordan matrix $\mathbf{A}$. Moreover, even though the MMSE can change under the coordinate change, the condition for its boundedness (stabilizability) remains the same. Rigorously, for any system matrix $\mathbf{A}$, there exists an invertible matrix $\mathbf{U}$ and an upper-triangular Jordan matrix $\mathbf{A'}$ such that $\mathbf{A}=\mathbf{U}\mathbf{A'}\mathbf{U}^{-1}$. We also define $\mathbf{B'}:=\mathbf{U}\mathbf{B}$ and $\mathbf{C'}:=\mathbf{C}\mathbf{U}$. Then, the matrices $\mathbf{A'}$ and $\mathbf{C'}$ can be written in the following form:
\begin{align} &\mathbf{A'}=diag\{ \mathbf{A_{1,1}}, \mathbf{A_{1,2}}, \cdots, \mathbf{A_{\mu,\nu_\mu}}\} \nonumber \\ &\mathbf{C'}=\begin{bmatrix} \mathbf{C_{1,1}} & \mathbf{C_{1,2}} & \cdots & \mathbf{C_{\mu,\nu_\mu}} \end{bmatrix} \nonumber \\ &\mbox{where} \nonumber \\ &\quad \mbox{$\mathbf{A_{i,j}}$ is a Jordan block with an eigenvalue $\lambda_{i,j}$} \nonumber \\ &\quad \{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \} \mbox{ is a cycle with length $\nu_i$ and period $p_i$}\nonumber \\ &\quad \mbox{For $i \neq i'$, $\{\lambda_{i,j},\lambda_{i',j'} \}$ is not a cycle} \nonumber \\ &\quad \mbox{$\mathbf{C_{i,j}}$ is a $l \times \dim \mathbf{A_{i,j}}$ complex matrix}.\label{eqn:ac:jordan:thm} \end{align} Since cycles are closed under overlapping unions, the eigenvalues of $\mathbf{A}$ can be uniquely partitioned into maximal cycles, $\{\lambda_{i,1},\cdots,\lambda_{i,\nu_i} \}$. We call these cycles \textit{eigenvalue cycles}, and we say $\mathbf{A}$ has no eigenvalue cycles if all of its eigenvalue cycles have period $1$.
Define \begin{align} &\mathbf{A_i}=diag\{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \}\nonumber \\ &\mathbf{C_i}=\begin{bmatrix} \left(\mathbf{C_{i,1}}\right)_1 & \cdots & \left(\mathbf{C_{i,\nu_i}}\right)_1 \end{bmatrix} \nonumber\\ &\mbox{where $\left(\mathbf{C_{i,j}}\right)_1$ is the first column of $\mathbf{C_{i,j}}$.} \label{eqn:ac2:jordan:thm} \end{align} In other words, we are dividing the original problem into sub-problems according to eigenvalue cycles.
Let $l_i$ be the minimum cardinality among the sets $S' \subseteq \{ 0,1,\cdots,p_i-1 \}$ whose resulting $S:=\{ 0,1,\cdots, p_i-1 \} \setminus S'=\{s_1,s_2,\cdots,s_{|S|} \}$ makes \begin{align} \begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{s_1}\\ \mathbf{C_i}\mathbf{A_i}^{s_2}\\ \vdots \\
\mathbf{C_i}\mathbf{A_i}^{s_{|S|}} \end{bmatrix} \label{eqn:def:lprime:thm} \end{align} be rank deficient, i.e. the rank is strictly less than $\nu_i$. Here, $p_i$ and $l_i$ will be used for the power property. $l_i$ represents how many observations have to be erased out of $p_i$ time steps to make the observability Gramian matrix rank deficient. This corresponds to the critical error event in large deviation theory.
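For small periods, $l_i$ can be computed by exhaustive search over the erasure sets $S'$. A brute-force sketch (ours, using numpy; feasible only for small $p_i$ since the search is combinatorial):

```python
import itertools
import numpy as np

def compute_l(eigs, C_row, p):
    """Smallest |S'|, S' subset of {0,...,p-1}, whose complement S makes the
    stacked matrix [C A^s]_{s in S} rank-deficient (rank < len(eigs))."""
    nu = len(eigs)
    lam = np.asarray(eigs, dtype=complex)
    C = np.asarray(C_row, dtype=complex)
    for size in range(p + 1):
        for S_prime in itertools.combinations(range(p), size):
            S = [s for s in range(p) if s not in S_prime]
            O = np.array([C * lam ** s for s in S])  # rows C A^s, A = diag(eigs)
            if not S or np.linalg.matrix_rank(O) < nu:
                return size

# Power-property example: A = diag(2, -2), C = [1, 1], period 2 -> l = 1.
l_example = compute_l([2, -2], [1, 1], 2)
```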
Now, we can apply the max-combination property to characterize intermittent observability. Here is the main theorem of the paper. \begin{theorem} Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C}, \sigma, \sigma')$ with probability of erasure $p_e$, let $\sigma < \infty$, $\sigma' > 0$, and $(\mathbf{A},\mathbf{B})$ be controllable. Then, the intermittent system is intermittent observable if and only if \begin{align}
p_e < \frac{1}{\underset{1 \leq i \leq \mu}{\max} |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}} . \nonumber \end{align}
or equivalently $\underset{1 \leq i \leq \mu}{\max} p_e^{\frac{l_i}{p_i}} |\lambda_{i,1}|^2 < 1$. \label{thm:mainsingle} \end{theorem} \begin{proof} See Section~\ref{sec:dis:suff} for sufficiency, and Section~\ref{sec:dis:nece} for necessity. \end{proof}
Here, we can notice that there is no assumption about stability or observability of the system. Let's first sanity-check the theorem on stable modes and unobservable modes. If $|\lambda_{i,1}|<1$, then $\frac{1}{|\lambda_{i,1}|^{2 \frac{p_i}{l_i}}}>1$. Therefore, the stable modes do not contribute to the characterization of the critical erasure probability. If $(\mathbf{A_i},\mathbf{C_i})$ is unobservable, $l_i=0$. So, $\frac{1}{|\lambda_{i,1}|^{2 \frac{p_i}{0}}}=0$ if $|\lambda_{i,1}| \geq 1$ and $\frac{1}{|\lambda_{i,1}|^{2 \frac{p_i}{0}}}=\infty$ if $|\lambda_{i,1}|< 1$. Therefore, if the unobservable modes are stable, they do not affect the intermittent observability of the system; if they are unstable, the system is not intermittent observable even when $p_e=0$.\\
Even though in general $l_i$ does not admit a closed form, it is computable for special cases. \begin{corollary}
Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C}, \sigma, \sigma')$ with probability of erasure $p_e$, let $\sigma < \infty$, $\sigma' > 0$, and $(\mathbf{A},\mathbf{B})$ be controllable. We further assume that $(\mathbf{A},\mathbf{C})$ is observable and $\mathbf{A}$ has no eigenvalue cycles (i.e. $\left(\frac{\lambda_i}{\lambda_j}\right)^n \neq 1 $ for all $\lambda_i \neq \lambda_j$ and $n \in \mathbb{N}$). Then, the intermittent system is intermittent observable if and only if $p_e < \frac{1}{|\lambda_{max}|^2}$ where $\lambda_{max}$ is the largest magnitude eigenvalue of $\mathbf{A}$. \label{thm:nocycle} \end{corollary} \begin{proof}
Since $\mathbf{A}$ has no eigenvalue cycles, $p_i$ equals $1$ for all $i$ and the $\mathbf{A_i}$ are scalars. Moreover, by the observability condition and Theorem~\ref{thm:jordanob}, $\mathbf{C_i}$ is full-rank. Thus, $l_i=1$ for all $i$, and by Theorem~\ref{thm:mainsingle} the critical erasure probability is $\frac{1}{\max_{i} |\lambda_{i,1}|^2}=\frac{1}{|\lambda_{max}|^2}$. \end{proof}
For a more precise understanding of the critical erasure probability, we will focus on the case of a row vector $\mathbf{C}$ --- i.e. single-output systems. Heuristically, a row vector $\mathbf{C}$ is the worst among $\mathbf{C}$ matrices since a vector observation is clearly better than a scalar observation.
Furthermore, we will also restrict the periods of all eigenvalue cycles of $\mathbf{A}$ to be primes\footnote{For convenience, we
include $1$ as a prime number here.}. The technical reason for this restriction is that prime periods give us a useful invariance property of the sub-eigenvalue cycles. Let $\{ \lambda_1,\lambda_2,\cdots,\lambda_l \}$ be an eigenvalue cycle with prime period $p$. Then, all subsets of $\{ \lambda_1,\lambda_2,\cdots, \lambda_l \}$ with distinct elements are eigenvalue cycles with the same period $p$. This invariance property need not hold for eigenvalue cycles with composite periods as we will see by example later. \begin{corollary} Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C}, \sigma, \sigma')$ with probability of erasure $p_e$, let $\sigma < \infty$, $\sigma' > 0$, and $(\mathbf{A},\mathbf{B})$ be controllable. We further assume that $(\mathbf{A},\mathbf{C})$ is observable, $\mathbf{C}$ is a row vector, and $\mathbf{A}$ has only prime-period eigenvalue cycles of length $\nu_i$. Then, the intermittent system is intermittent observable if and only if $p_e < \frac{1}{ \underset{1 \leq i \leq \mu}{\max}
|\lambda_{i,1}|^{ \frac{2 p_i}{p_i-\nu_i+1}}}$. \label{thm:cycle} \end{corollary} \begin{proof}
First, we introduce the following fact regarding Vandermonde matrix determinants~\cite{Evans_Generalized}: Let $p$ be a prime, $a_1,\cdots, a_n$ be pairwise incongruent modulo $p$, and $b_1,\cdots,b_n$ be pairwise incongruent modulo $p$. Then, \begin{align} \begin{bmatrix} e^{j 2 \pi \frac{a_1 b_1}{p}} & e^{j 2 \pi \frac{a_1 b_2}{p}} & \cdots & e^{j 2 \pi \frac{a_1 b_n}{p}} \\ e^{j 2 \pi \frac{a_2 b_1}{p}} & e^{j 2 \pi \frac{a_2 b_2}{p}} & \cdots & e^{j 2 \pi \frac{a_2 b_n}{p}} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j 2 \pi \frac{a_n b_1}{p}} & e^{j 2 \pi \frac{a_n b_2}{p}} & \cdots & e^{j 2 \pi \frac{a_n b_n}{p}} \\ \end{bmatrix}\nonumber \end{align} is full rank.
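This full-rank fact can be spot-checked numerically. The script below (ours, using numpy) exhausts all $3\times3$ submatrices for the prime $p=7$, and also shows how the claim fails for a composite modulus:

```python
import itertools
import numpy as np

def exp_matrix(a, b, p):
    """Matrix with entries e^{j 2 pi a_k b_l / p}."""
    return np.exp(2j * np.pi * np.outer(a, b) / p)

p = 7
prime_ok = all(
    np.linalg.matrix_rank(exp_matrix(a, b, p)) == 3
    for a in itertools.combinations(range(p), 3)
    for b in itertools.combinations(range(p), 3)
)
# prime_ok is True: every such submatrix is invertible when p is prime.
# For a composite modulus the claim fails, e.g. rows (0,2), cols (0,2) mod 4
# give an all-ones matrix of rank 1:
composite_rank = np.linalg.matrix_rank(exp_matrix([0, 2], [0, 2], 4))
```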
Furthermore, since $(\mathbf{A},\mathbf{C})$ is observable and $\mathbf{C}$ is a row vector, by Theorem~\ref{thm:jordanob} the $\lambda_{i,j}$ are distinct and the $(\mathbf{C_{i,j}})_1$ are nonzero. Therefore, we can write $\lambda_{i,j}=|\lambda_i| e^{j 2 \pi \frac{q_{i,j}}{p_i}}$ where $q_{i,1}, \cdots, q_{i,\nu_i}$ are incongruent modulo $p_i$ and the $p_i$ are primes.
Now, we will evaluate the critical erasure probability shown in Theorem~\ref{thm:mainsingle}. For this system, \eqref{eqn:def:lprime:thm} can be written as \begin{align} \begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{s_1}\\ \vdots \\
\mathbf{C_i}\mathbf{A_i}^{s_{|S|}}\\ \end{bmatrix} &= \begin{bmatrix} \lambda_{i,1}^{s_1} & \cdots & \lambda_{i,\nu_i}^{s_1} \\ \vdots & \ddots & \vdots \\
\lambda_{i,1}^{s_{|S|}} & \cdots & \lambda_{i,\nu_i}^{s_{|S|}} \end{bmatrix} \begin{bmatrix} (\mathbf{C_{i,1}})_{1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & (\mathbf{C_{i,\nu_i}})_{1} \end{bmatrix} \nonumber \\ &= \begin{bmatrix}
|\lambda_i|^{s_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\
0 & \cdots & |\lambda_i|^{s_{|S|}} \end{bmatrix} \begin{bmatrix} e^{j 2 \pi \frac{q_{i,1}}{p_i} s_1} & \cdots & e^{j 2 \pi \frac{q_{i,\nu_i}}{p_i} s_1} \\ \vdots & \ddots & \vdots \\
e^{j 2 \pi \frac{q_{i,1}}{p_i} s_{|S|}} & \cdots & e^{j 2 \pi \frac{q_{i,\nu_i}}{p_i} s_{|S|}} \end{bmatrix} \begin{bmatrix} (\mathbf{C_{i,1}})_{1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & (\mathbf{C_{i,\nu_i}})_{1} \end{bmatrix} \nonumber \end{align} Since $\lambda_i$ and the $(\mathbf{C_{i,j}})_{1}$ are nonzero, the rank of $\begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{s_1}\\ \vdots \\
\mathbf{C_i}\mathbf{A_i}^{s_{|S|}} \end{bmatrix}$ is equal to the rank of $\begin{bmatrix} e^{j 2 \pi \frac{q_{i,1}}{p_i} s_1} & \cdots & e^{j 2 \pi \frac{q_{i,\nu_i}}{p_i} s_1} \\ \vdots & \ddots & \vdots \\
e^{j 2 \pi \frac{q_{i,1}}{p_i} s_{|S|}} & \cdots & e^{j 2 \pi \frac{q_{i,\nu_i}}{p_i} s_{|S|}} \end{bmatrix}$.
Furthermore, since $q_{i,1}, \cdots, q_{i,\nu_i}$ are incongruent modulo $p_i$ and $s_1, \cdots, s_{|S|}$ are also incongruent modulo $p_i$, by the property of the Vandermonde matrix discussed above, the rank of the observability Gramian equals $\nu_i$ if and only if $|S| \geq \nu_i$.
Therefore, $l_i$ of \eqref{eqn:def:lprime:thm} is $p_i-\nu_i+1$, and the corollary follows from Theorem~\ref{thm:mainsingle}. \end{proof}
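The conclusion $l_i = p_i - \nu_i + 1$ can be verified by brute force for small primes. A sketch (ours, using numpy; runtime grows combinatorially in $p$):

```python
import itertools
import numpy as np

def min_erasures(q, p):
    """Smallest number of time indices removed from {0,...,p-1} making the
    Vandermonde matrix [e^{j 2 pi q_k s / p}]_{s kept, k} rank-deficient."""
    nu = len(q)
    lam = np.exp(2j * np.pi * np.asarray(q) / p)
    for size in range(p + 1):
        for erased in itertools.combinations(range(p), size):
            kept = [s for s in range(p) if s not in erased]
            O = np.array([lam ** s for s in kept])
            if not kept or np.linalg.matrix_rank(O) < nu:
                return size

# Prime p = 5: a length-3 cycle gives l = 5 - 3 + 1 = 3,
# and a length-2 cycle gives l = 5 - 2 + 1 = 4.
l3 = min_erasures([0, 1, 2], 5)
l2 = min_erasures([0, 1], 5)
```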
One may wonder why we could not get a simple answer in Theorem~\ref{thm:mainsingle} unlike Corollary~\ref{thm:cycle}. To understand this, consider two potential extensions of Corollary~\ref{thm:cycle}:
(1) Eigenvalue cycles with periods that are composite numbers: Consider $\mathbf{A}=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 e^{j \frac{2
\pi}{16}} & 0 \\ 0 & 0 & 2 e^{j\frac{2
\pi}{16}9} \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$. The eigenvalue cycle has length $3$ and period $16$. If we naively apply the formula of Corollary~\ref{thm:cycle}, we would get the critical value $\frac{1}{2^{2 \cdot \frac{16}{16-3+1}}}=\frac{1}{2^{\frac{16}{7}}}$. However, if we consider the sub-eigenvalue cycle $\{ 2e^{j \frac{2 \pi}{16}}, 2e^{j \frac{2 \pi}{16}9}\}$, its length is $2$ and its period is $2$. The formula of Corollary~\ref{thm:cycle} gives $\frac{1}{2^{2 \cdot \frac{2}{2-2+1}}}=\frac{1}{2^4}$ as a critical value, which is a tighter condition than the previous one. In fact, the latter value is the correct critical erasure probability. Because the period-invariance property does not hold for composite-period cycles, the longest cycle does not necessarily give the right critical probability.
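Numerically (our check, using numpy): keeping only the even time indices already collapses the observability matrix of this example, confirming that the sub-cycle, not the full length-3 cycle, sets the critical value:

```python
import numpy as np

# Eigenvalue phases 0, 1/16 and 9/16 of a full turn (all magnitudes 2).
q, p = np.array([0, 1, 9]), 16

# On even times s, columns 2 and 3 coincide: their entrywise ratio is
# e^{j 2 pi * 8 s / 16} = e^{j pi s} = 1 whenever s is even.
S_even = np.arange(0, p, 2)
rank_even = np.linalg.matrix_rank(np.exp(2j * np.pi * np.outer(S_even, q) / p))

naive  = 1 / 2 ** (2 * 16 / (16 - 3 + 1))   # 1/2^(16/7), from the length-3 cycle
actual = 1 / 2 ** (2 * 2 / (2 - 2 + 1))     # 1/2^4, from the sub-cycle
# rank_even == 2 < 3, and actual < naive: the sub-cycle is the bottleneck.
```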
(2) A general matrix $\mathbf{C}$, multiple-output systems: If we have a vector observation, an eigenvalue cycle can be divided into smaller cycles. As an extreme case, when $\mathbf{C}$ is an identity matrix every eigenvalue cycle is divided into trivial cycles with length $1$ and the critical erasure probability becomes
$\frac{1}{|\lambda_{max}|^2}$ as observed in \cite{Sinopoli_Kalman}. Consider now $\mathbf{A}=\begin{bmatrix} 2
&
0 & 0 & 0 \\ 0 & 2e^{j\frac{2 \pi }{5}} & 0 & 0 \\ 0 & 0 &
2e^{j\frac{2 \pi }{5}2} & 0 \\ 0 & 0 & 0 & 2e^{j\frac{2 \pi
}{5}3} \end{bmatrix}$ and $\mathbf{C}=\begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 0 &
\delta \end{bmatrix}$. The eigenvalue cycle $\{ 2, 2e^{j\frac{2\pi}{5}}, 2e^{j\frac{2\pi}{5}2}, 2e^{j\frac{2\pi}{5}3} \}$ of $\mathbf{A}$ has length $4$ and period $5$. However, if $\delta \neq 0$, by elementary row operations $\mathbf{C}$ can be converted to $\begin{bmatrix} 1 & 2 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$. Thus, the eigenvalue cycle is divided into two sub-cycles, $\{ 2, 2e^{j\frac{2 \pi}{5}}, 2 e^{j\frac{2 \pi}{5}2} \}$ and $\{ 2 e^{j\frac{2\pi}{5}3} \}$. The longer cycle with length $3$ would dominate, and the critical erasure probability would be $\frac{1}{2^{2 \cdot \frac{5}{5-3+1}}}=\frac{1}{2^{\frac{10}{3}}}$. Meanwhile, if $\delta = 0$, the second row of $\mathbf{C}$ can be ignored. Thus, the eigenvalue cycle would not be divided and the critical erasure probability would be $\frac{1}{2^{2 \cdot \frac{5}{5-4+1}}}=\frac{1}{2^{\frac{10}{2}}}$.
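Plugging the two cases into the formula of Corollary~\ref{thm:cycle} gives a quick numeric comparison (the helper is ours):

```python
def crit_prime(nu, p, mag=2.0):
    """Corollary-style critical value for one prime-period-p cycle of
    length nu and eigenvalue magnitude mag: 1 / mag^(2p / (p - nu + 1))."""
    return 1.0 / mag ** (2.0 * p / (p - nu + 1))

# delta != 0: the cycle splits into a length-3, period-5 piece and a trivial
# singleton (length 1, period 1); the bottleneck is the minimum of the two.
split = min(crit_prime(3, 5), crit_prime(1, 1))   # 1/2^(10/3)
# delta == 0: one length-4, period-5 cycle.
whole = crit_prime(4, 5)                          # 1/2^5
# whole < split: without the second output, fewer erasures are tolerable.
```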
In this example, we can see that the critical erasure probability depends on whether $\delta$ is equal to $0$ or not, which is related to the rank of $\mathbf{C}$. Thus, a rank condition of some sort is unavoidable in the characterization of the critical erasure probability.
\subsection{Extension to Intermittent Kalman Filtering with Parallel Channels} The concept of eigenvalue cycles and the divide-and-conquer approach can also be applied to extensions and variations of intermittent Kalman filtering.
Let's consider intermittent Kalman filtering with parallel erasure channels as introduced in \cite{Garone_LQG}. \begin{align} &\mathbf{x}[n+1]=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n]\nonumber \\ &\mathbf{y_1}[n]=\beta_1[n](\mathbf{C_1}\mathbf{x}[n]+\mathbf{v_1}[n]) \nonumber \\ &\vdots \nonumber \\ &\mathbf{y_d}[n]=\beta_d[n](\mathbf{C_d}\mathbf{x}[n]+\mathbf{v_d}[n])\nonumber \end{align} Here $n$ is the non-negative integer-valued time index, and $\mathbf{x}[n] \in \mathbb{C}^{m}$, $\mathbf{w}[n] \in \mathbb{C}^{g}$, $\mathbf{y_i}[n] \in \mathbb{C}^{l_i}$, $\mathbf{v_i}[n] \in \mathbb{C}^{l_i}$, $\mathbf{A} \in \mathbb{C}^{m \times m}$, $\mathbf{B} \in \mathbb{C}^{m \times g}$, $\mathbf{C_i} \in \mathbb{C}^{l_i \times m}$. The underlying randomness comes from $\mathbf{x}[0]$, $\mathbf{w}[n]$, $\mathbf{v_i}[n]$ and $\beta_i[n]$. $\mathbf{x}[0]$, $\mathbf{w}[n]$ and $\mathbf{v_i}[n]$ are independent Gaussian vectors with zero mean, and there exist positive $\sigma^2$ and $\sigma'^2$ such that \begin{align} &\mathbb{E}[\mathbf{x}[0]\mathbf{x}[0]^\dag] \preceq \sigma^2 \mathbf{I} \nonumber \\ &\mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \preceq \sigma^2 \mathbf{I} \nonumber \\ &\mathbb{E}[\mathbf{v_i}[n]\mathbf{v_i}[n]^\dag] \preceq \sigma^2 \mathbf{I} \nonumber\\ &\mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \succeq \sigma'^2 \mathbf{I} \nonumber \\ &\mathbb{E}[\mathbf{v_i}[n]\mathbf{v_i}[n]^\dag] \succeq \sigma'^2 \mathbf{I}. \nonumber \end{align} $\beta_i[n]$ are independent Bernoulli random processes with erasure probabilities $p_{e,i}$.
We call this system an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C_i})$ with erasure probabilities $p_{e,i}$.
Since the observations go through independent parallel erasure channels, we can expect a diversity gain~\cite{Tse}: even though the observations from some channels are lost, we can still estimate the state based on the other successfully transmitted observations. At first glance, this extension may seem much harder than the original problem since we have to characterize the whole region of $(p_{e,1},\cdots,p_{e,d})$ rather than a single critical erasure value. However, a simple extension of Theorem~\ref{thm:mainsingle} turns out to be enough to characterize this critical erasure probability region. As in Section~\ref{sec:interob}, let $\mathbf{A}=\mathbf{U}\mathbf{A'}\mathbf{U}^{-1}$ where $\mathbf{U}$ is an invertible matrix and $\mathbf{A'}$ is an upper-triangular Jordan matrix. We also define $\mathbf{B'}:=\mathbf{U}\mathbf{B}$ and $\mathbf{C_i'}:=\mathbf{C_i}\mathbf{U}$.
Then, we can make the following generalized definitions of \eqref{eqn:ac:jordan:thm}, \eqref{eqn:ac2:jordan:thm}, \eqref{eqn:def:lprime:thm} for $\mathbf{A'}$ and $\mathbf{C_i'}$.
\begin{align} &\mathbf{A'}=diag\{ \mathbf{A_{1,1}}, \mathbf{A_{1,2}},\cdots, \mathbf{A_{\mu,\nu_{\mu}}} \} \nonumber \\ &\mathbf{C_i'}=\begin{bmatrix} \mathbf{C_{1,1,i}} & \mathbf{C_{1,2,i}} & \cdots & \mathbf{C_{\mu,\nu_{\mu},i}} \end{bmatrix} \nonumber \\ &\mbox{where}\nonumber \\ &\quad \mbox{$\mathbf{A_{i,j}}$ is a Jordan block matrix with an eigenvalue $\lambda_{i,j}$} \nonumber \\ &\quad \{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \} \mbox{ is a cycle with length $\nu_i$ and period $p_i$}\nonumber \\ &\quad \mbox{For $i \neq i'$, $\{\lambda_{i,j},\lambda_{i',j'} \}$ is not a cycle} \nonumber \\ &\quad \mbox{$\mathbf{C_{i,j,k}}$ is a $l_k \times \dim \mathbf{A_{i,j}}$ matrix}.\nonumber \end{align} Denote \begin{align} &\mathbf{A_i}=diag\{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \}\nonumber \\ &\mathbf{C_{i,j}}=\begin{bmatrix} (\mathbf{C_{i,1,j}})_1,\cdots, (\mathbf{C_{i,\nu_i,j}})_1 \end{bmatrix} \nonumber\\ &\mbox{where $(\mathbf{C_{i,j,k}})_1$ is the first column of $\mathbf{C_{i,j,k}}$}.\nonumber \end{align}
Let $(l_{i,1},l_{i,2},\cdots,l_{i,d})$ be the cardinality vector (i.e. $l_{i,j}=|S_j'|$) of sets $S_1',S_2',\cdots,S_d'$ such that $S_j := \{0,1,\cdots, p_i-1 \} \setminus S_j' = \{ s_{j,1}, s_{j,2}, \cdots, s_{j,|S_j|} \}$ and \begin{align} \begin{bmatrix} \mathbf{C_{i,1}} \mathbf{A_i}^{s_{1,1}} \\ \vdots \\
\mathbf{C_{i,1}} \mathbf{A_i}^{s_{1,|S_1|}} \\ \mathbf{C_{i,2}} \mathbf{A_i}^{s_{2,1}} \\ \vdots \\
\mathbf{C_{i,d}} \mathbf{A_i}^{s_{d,|S_d|}} \\ \end{bmatrix}\nonumber \end{align} is rank deficient, i.e. has rank strictly less than $\nu_i$. Denote $L_i$ as the set of all such vectors.
Then, the intermittent observability with parallel channels is characterized as follows. \begin{proposition} Given an intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C_i},\sigma,\sigma')$ with probabilities of erasures $(p_{e,1}, \cdots, p_{e,d} )$, let $\sigma < \infty$, $\sigma' > 0$, and $(\mathbf{A},\mathbf{B})$ be controllable. Then, the intermittent system is intermittent observable if and only if \begin{align}
\max_{1 \leq i \leq \mu} \max_{(l_{i,1},l_{i,2},\cdots, l_{i,d}) \in L_i} \left( \prod_{1 \leq j \leq d} p_{e,j}^{\frac{l_{i,j}}{p_i}} \right) |\lambda_{i,1}|^{2} < 1. \nonumber \end{align} \label{thm:multi} \end{proposition} We omit the proof of the proposition, since it is similar to that of Theorem~\ref{thm:mainsingle}.
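To make the condition concrete, the following sketch evaluates the left-hand side of the proposition for a single eigenvalue cycle. The cycle magnitude, period, and distance vectors below are hypothetical inputs; in general, the sets $L_i$ must be obtained from the rank test above.

```python
import numpy as np

def observability_margin(cycles, p_e):
    """Evaluate max_i max_{l in L_i} (prod_j p_{e,j}^{l_{i,j}/p_i}) |lambda_{i,1}|^2.
    `cycles` is a list of (abs_lambda, period, L_i), where L_i is a list of
    distance vectors (l_{i,1}, ..., l_{i,d}).  The intermittent system is
    intermittent observable iff the returned value is < 1."""
    worst = 0.0
    for abs_lam, p, L in cycles:
        for l in L:
            val = np.prod([pe ** (lj / p) for pe, lj in zip(p_e, l)]) * abs_lam ** 2
            worst = max(worst, val)
    return worst

# Hypothetical cycle: |lambda| = 1.2, period 2, and three distance vectors.
cycles = [(1.2, 2, [(2, 0), (0, 2), (1, 1)])]
ok = observability_margin(cycles, (0.4, 0.4)) < 1    # margin 0.576: observable
bad = observability_margin(cycles, (0.8, 0.8)) < 1   # margin 1.152: not observable
```

The two test points show how the critical erasure region is traced out: the boundary is where the worst-case product equals one.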
Compared to Theorem~\ref{thm:mainsingle}, the max-combination and separability principle remain the same, but the test channels in the power property become more complicated. Here, $(S_1',\cdots,S_d')$ represents the test channels such that when they are erased, the observability Gramian becomes rank-deficient. $(l_{i,1},\cdots,l_{i,d})$ represents the distance vector to these test channels.
\section{Intermittent Kalman Filtering with Nonuniform Sampling} \label{sec:nonuniform} In the previous section, we proved that eigenvalue cycles are the only factor that prevents us from having the critical erasure probability be
$\frac{1}{|\lambda_{max}|^2}$. Based on this understanding, we can look for a simple way to avoid this troublesome phenomenon. Here, we propose nonuniform sampling as a simple way of breaking the eigenvalue cycles and achieving the critical value
$\frac{1}{|\lambda_{max}|^2}$.
As an intuitive example, consider $\mathbf{A}=\begin{bmatrix} 1 & 0 \\
0 & -1 \end{bmatrix}$. Then, $\mathbf{A}=\begin{bmatrix}1 & 0 \\ 0 &
-1 \end{bmatrix}, \mathbf{A}^2=\begin{bmatrix}1 & 0 \\ 0 &
1 \end{bmatrix}, \mathbf{A}^3=\begin{bmatrix}1 & 0 \\ 0 &
-1 \end{bmatrix}, \mathbf{A}^4=\begin{bmatrix}1 & 0 \\ 0 &
1 \end{bmatrix}, \cdots$. What the eigenvalue cycle is capturing is that half of the matrices $\mathbf{A},\mathbf{A}^2,\mathbf{A}^3,\cdots$ are identical. Therefore, the question is how we can make every matrix in $\mathbf{A},\mathbf{A}^2,\mathbf{A}^3,\cdots$ distinct. To simplify the question, consider the sequence $-1,1,-1,1,\cdots$, which corresponds to the $(2,2)$ entries of $\mathbf{A},\mathbf{A}^2,\mathbf{A}^3,\cdots$.
Rewrite this sequence $-1,1,-1,1,\cdots$ as $(e^{j \pi})^1,(e^{j
\pi})^2,(e^{j \pi})^3,(e^{j \pi})^4,\cdots$ and introduce a jitter $t_i$ to each sampling time. The resulting sequence becomes $(e^{j \pi})^{1+t_1},(e^{j \pi})^{2+t_2},(e^{j \pi})^{3+t_3},(e^{j \pi})^{4+t_4},\cdots$, and if the $t_i$ are i.i.d.~random variables uniformly distributed on $[0,T]$, then every element of the sequence is distinct almost surely for any $T>0$.
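This almost-sure distinctness is easy to check numerically. The sketch below compares the uniformly sampled sequence with a jittered one; the jitter range $T=0.5$ and the sequence length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 0.5                     # T > 0 is arbitrary
t = rng.uniform(0.0, T, size=N)    # i.i.d. jitters t_1, ..., t_N

# Uniform sampling: (e^{j pi})^n cycles through only two values, -1 and 1.
uniform = [np.exp(1j * np.pi * n) for n in range(1, N + 1)]
# Jittered sampling: (e^{j pi})^{n + t_n} -- almost surely all distinct.
jittered = [np.exp(1j * np.pi * (n + t[n - 1])) for n in range(1, N + 1)]

n_uniform = len({complex(np.round(z, 8)) for z in uniform})
n_jittered = len({complex(np.round(z, 8)) for z in jittered})
```

With uniform sampling only two distinct values appear (the cycle), while with jitter all $N$ values are distinct.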
Operationally, this idea can be implemented as follows: at design time, the sensor and the estimator agree on a nonuniform sampling pattern, which is a realization of i.i.d.~random variables uniformly distributed on $[0,T]~(T>0)$. Whenever the sensor samples the system, it jitters its sampling time according to this nonuniform pattern. Since the estimator knows the sampling-time jitter, the sampled continuous-time system looks to the estimator like a discrete {\em time-varying} system. The joint Gaussianity between the observation and the state is preserved, and furthermore, Kalman filters are optimal even for time-varying systems! This intermittent Kalman filtering problem with nonuniform samples has the critical erasure probability $\frac{1}{|\lambda_{max}|^2}$ almost surely. Therefore, an eigenvalue cycle is breakable by nonuniform sampling.
One may be bothered by the probabilistic argument on the nonuniform sampling pattern. However, this probabilistic proof is an indirect argument for the existence of an appropriate deterministic nonuniform sampling pattern, which is similar to how the existence of capacity achieving codes is proved in information theory~\cite{Shannon_mathematical}.
To write the scheme formally, consider a continuous-time dynamic system: \begin{align} &d\mathbf{x_c}(t)=\mathbf{A_c} \mathbf{x_c}(t)dt + \mathbf{B_c} d \mathbf{W_c}(t) \label{eqn:contistate}\\ &\mathbf{y_c}(t)=\mathbf{C_c} \mathbf{x_c}(t) + \mathbf{D_c} \frac{d \mathbf{V_c}(t)}{dt}. \label{eqn:contiob} \end{align} Here $t$ is the non-negative real-valued time index. $\mathbf{W_c}(t)$ and $\mathbf{V_c}(t)$ are independent $g$- and $l$-dimensional standard Wiener processes respectively, i.e. for $a,b \geq 0$, $\mathbf{W_c}(a+b)-\mathbf{W_c}(b)$ is distributed as $\mathcal{N}(\mathbf{0},a\mathbf{I})$ and $\mathbf{V_c}(a+b)-\mathbf{V_c}(b)$ is also distributed as $\mathcal{N}(\mathbf{0},a\mathbf{I})$. $\mathbf{A_c} \in \mathbb{C}^{m \times m}$, $\mathbf{B_c} \in \mathbb{C}^{m \times g}$, $\mathbf{C_c} \in \mathbb{C}^{l \times m}$, and $\mathbf{D_c} \in \mathbb{C}^{l \times l}$ where $\mathbf{D_c}$ is invertible. Thus, $\mathbf{x}[n] \in \mathbb{C}^{m}$ and $\mathbf{y}[n] \in \mathbb{C}^{l}$. For convenience, we assume $\mathbf{x_c}(0)=\mathbf{0}$, but the results of this paper hold for any $\mathbf{x_c}(0)$ with finite variance. Throughout this paper, we use Ito's integral~\cite[p.80]{Gardiner} for stochastic calculus.
The process of \eqref{eqn:contistate} is known as the Ornstein-Uhlenbeck process~\cite[p.109]{Gardiner} whose solution is $\mathbf{x_c}(t)=e^{\mathbf{A_c}t}\mathbf{x_c}(0)+\int^t_0 e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d \mathbf{W_c}(t')$. Therefore, for $t_1 \leq t_2$ we have \begin{align} \mathbf{x_c}(t_2)&=e^{\mathbf{A_c}t_2} \mathbf{x_c}(0)+ \int^{t_2}_{0} e^{\mathbf{A_c}(t_2 -t')}\mathbf{B_c} d \mathbf{W_c}(t') \label{eqn:non:0} \\ &=e^{\mathbf{A_c}(t_2-t_1)} \left( e^{\mathbf{A_c}t_1} \mathbf{x_c}(0) + \int^{t_2}_{0} e^{\mathbf{A_c}(t_1-t')} \mathbf{B_c} d \mathbf{W_c}(t') \right) \nonumber \\ &=e^{\mathbf{A_c}(t_2-t_1)} \left( e^{\mathbf{A_c}t_1} \mathbf{x_c}(0) + \int^{t_1}_{0} e^{\mathbf{A_c}(t_1-t')} \mathbf{B_c} d \mathbf{W_c}(t') + \int^{t_2}_{t_1} e^{\mathbf{A_c}(t_1 - t')} \mathbf{B_c}d \mathbf{W_c}(t') \right) \nonumber \\ &=e^{\mathbf{A_c}(t_2-t_1)} \left( \mathbf{x_c}(t_1) + \int^{t_2}_{t_1} e^{\mathbf{A_c}(t_1 - t')} \mathbf{B_c}d \mathbf{W_c}(t') \right)\nonumber \end{align} which can be rewritten as \begin{align} \mathbf{x_c}(t_1)=e^{\mathbf{A_c}(t_1-t_2)} \mathbf{x_c}(t_2) - \int^{t_2}_{t_1} e^{\mathbf{A_c}(t_1-t')} \mathbf{B_c} d\mathbf{W_c}(t').\label{eqn:non:1} \end{align} The point of doing this is to express the values of the states during a sampling interval in terms of the state at the end of the interval.
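As a sanity check on this stochastic-integral solution, the following sketch verifies for a scalar Ornstein-Uhlenbeck process that an Euler-Maruyama simulation reproduces the variance $b^2(e^{2at}-1)/(2a)$ implied by the solution; the parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Ornstein-Uhlenbeck process dx = a x dt + b dW with x(0) = 0; its
# solution x(t) = int_0^t e^{a(t-t')} b dW(t') has variance
# b^2 (e^{2 a t} - 1) / (2 a).  All parameters below are illustrative.
a, b, t_end, dt, paths = 0.5, 1.0, 1.0, 1e-3, 20000
steps = int(round(t_end / dt))

x = np.zeros(paths)
for _ in range(steps):                       # Euler-Maruyama discretization
    x = x + a * x * dt + b * np.sqrt(dt) * rng.normal(size=paths)

var_exact = b**2 * (np.exp(2 * a * t_end) - 1) / (2 * a)
var_mc = x.var()
```

The Monte-Carlo variance matches the closed form up to discretization and sampling error.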
Let's say we want to sample the system with a sampling interval $I~(I>0)$. Conventional samplers use integration filters to sample, i.e. in the uniform sampling case, the $n$th sample $\mathbf{y}[n]$ corresponds to the integration of $\mathbf{y_c}(t)$ for $(n-1)I \leq t < nI$: \begin{align} \mathbf{y}[n]&=\int^{nI}_{(n-1)I} \mathbf{y_c}(t) dt. \nonumber \end{align}
Nonuniform sampling can be thought of in two ways with respect to the sampler's integration filters: (1) the starting times of the integrations are uniform, but the sampling intervals are non-uniform; (2) the sampling intervals are uniform, but the starting times of the integrations are non-uniform. Since the analysis and performance are similar in both cases, we will focus on the latter case. To take the $n$th sample of the system, the non-uniform sampler takes the integration of $\mathbf{y_c}(t)$ for $(n-1)I-t_n \leq t < nI - t_n$: \begin{align} \mathbf{y_o}[n]&=\int^{nI-t_n}_{(n-1)I-t_n} \mathbf{y_c}(t) dt \nonumber \\ &=\int^{nI-t_n}_{(n-1)I-t_n} \mathbf{C_c} \mathbf{x_c}(t) dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t) \label{eqn:non:2}\\ &=\int^{nI-t_n}_{(n-1)I-t_n} \mathbf{C_c} \left( e^{\mathbf{A_c}(t-(nI-t_n))} \mathbf{x_c}(nI-t_n) -\int^{nI-t_n}_{t} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')
\right) dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t) \label{eqn:non:3} \\ &=\left( \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{C_c}e^{\mathbf{A_c}(t-(nI-t_n))} dt \right)\mathbf{x_c}(nI-t_n) \nonumber \\ &-\int^{nI-t_n}_{(n-1)I-t_n} \int^{nI-t_n}_{t} \mathbf{C_c} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t) \nonumber \\ &=\underbrace{\left( \int^{I}_{0} \mathbf{C_c} e^{\mathbf{A_c}(t-I)} dt \right)}_{:=\mathbf{C}} \mathbf{x_c}(nI-t_n) \nonumber \\ &\underbrace{-\int^{nI-t_n}_{(n-1)I-t_n} \int^{nI-t_n}_{t} \mathbf{C_c} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t)}_{:=\mathbf{v}[n]} \label{eqn:non:4} \end{align} Here \eqref{eqn:non:2} follows from \eqref{eqn:contiob}, and \eqref{eqn:non:3} follows from \eqref{eqn:non:1}. Since $\mathbf{y_o}[n]$ is transmitted over the erasure channel, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C})$ with nonuniform samples and erasure probability $p_e$ has the following system equations: \begin{align} &d\mathbf{x_c}(t)=\mathbf{A_c}\mathbf{x_c}(t)dt+\mathbf{B_c}d \mathbf{W_c}(t) \label{eqn:conti:xsample}\\ &\mathbf{y}[n]=\beta[n](\mathbf{C}\mathbf{x_c}(nI-t_n)+\mathbf{v}[n])\label{eqn:conti:ysample} \end{align} where $\mathbf{y}[n] \in \mathbb{C}^{l}$ and $\beta[n]$ is an independent Bernoulli random process with erasure probability $p_e$. The variance of $\mathbf{v}[n]$ is uniformly bounded since the integration interval is bounded, but the $\mathbf{v}[n]$ can be correlated since the integration intervals could overlap. Since $\mathbf{C}$ is a function of $\mathbf{C_c}$, the observability of $(\mathbf{A_c},\mathbf{C_c})$ does not necessarily imply the observability of $(\mathbf{A_c},\mathbf{C})$, while the observability of $(\mathbf{A_c},\mathbf{C})$ always implies the observability of $(\mathbf{A_c},\mathbf{C_c})$.
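The effective observation matrix $\mathbf{C}=\int_0^I \mathbf{C_c}e^{\mathbf{A_c}(t-I)}dt$ can be approximated numerically. The sketch below checks the closed form for a hypothetical diagonal $\mathbf{A_c}$, using a truncated-Taylor matrix exponential that is adequate for the small matrices here.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for the
    small-norm matrices in this sketch)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def effective_C(A_c, C_c, I_len, steps=2000):
    """Approximate C = int_0^I C_c e^{A_c (t - I)} dt by the midpoint rule."""
    h = I_len / steps
    acc = np.zeros_like(C_c, dtype=float)
    for k in range(steps):
        t = (k + 0.5) * h
        acc += C_c @ expm(A_c * (t - I_len)) * h
    return acc

# Hypothetical diagonal example: for A_c = diag(a_1, a_2), column j of C
# is the corresponding column of C_c scaled by (1 - e^{-a_j I}) / a_j.
A_c = np.diag([0.5, 1.0])
C_c = np.array([[1.0, 1.0]])
I_len = 1.0
C_eff = effective_C(A_c, C_c, I_len)
expected = np.array([[(1 - np.exp(-0.5)) / 0.5, (1 - np.exp(-1.0)) / 1.0]])
```

The diagonal case makes explicit how the integration filter reshapes $\mathbf{C_c}$ into $\mathbf{C}$.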
\begin{figure*}
\caption{System diagram for `intermittent Kalman filtering with nonuniform sampling'. The sensor samples the plant according to the nonuniform sampling pattern $t_n$, and sends the observation through the real erasure channel without any coding. The estimator tries to estimate the state based on its received signals and the nonuniform sampling pattern $t_n$.}
\label{fig:system3}
\end{figure*}
Figure~\ref{fig:system3} shows the system diagram for intermittent Kalman filtering with nonuniform sampling. The nonuniform sampler samples the plant according to the nonuniform sampling pattern $t_n$ and generates the observation $\mathbf{y_o}[n]$. The observation is transmitted through the real erasure channel without any coding. Then, the estimator tries to estimate the state $\mathbf{x_c}(t)$ based on its received signals $\mathbf{y}^n$ and the nonuniform sampling pattern $t^n$.
As before, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C})$ with nonuniform samples is called intermittent observable if there exists a causal estimator $\mathbf{\widehat{x}_c}(t)$ of $\mathbf{x_c}(t)$ based on $\mathbf{y}[\lfloor \frac{t}{I} \rfloor ], \cdots, \mathbf{y}[0]$ such that \begin{align} \sup_{t \in \mathbb{R}^+} \mathbb{E}[(\mathbf{x_c}(t)-\mathbf{\widehat{x}_c}(t))^\dag(\mathbf{x_c}(t)-\mathbf{\widehat{x}_c}(t))] < \infty. \end{align} Intermittent observability with nonuniform samples is characterized by the following theorem. \begin{theorem}
Let $t_n$ be i.i.d.~random variables uniformly distributed on $[0,T]~(T>0)$, and $(\mathbf{A_c},\mathbf{B_c})$ be controllable. When $(\mathbf{A_c},\mathbf{C})$ has unobservable and unstable eigenvalues --- i.e. $\exists \lambda \in \mathbb{C}^+$ such that $\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A_c} \\ \mathbf{C} \end{bmatrix}$ is rank deficient ---, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C})$ with nonuniform samples is not intermittent observable for all $p_e$. Otherwise, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C})$ with nonuniform samples is intermittent observable if and only if $p_e < \frac{1}{|e^{2 \lambda_{max}I}|}$. Here $\lambda_{max}$ is the eigenvalue of $\mathbf{A_c}$ with the largest real part. \label{thm:nonuniform} \end{theorem} \begin{proof} See Section~\ref{sec:cont:suf} for sufficiency, and Section~\ref{sec:cont:nec} for necessity. \end{proof}
Since $\exp\left(\mbox{(eigenvalue of $\mathbf{A_c}$)}I\right)$ corresponds to an eigenvalue of the sampled discrete-time system, the critical value of Theorem~\ref{thm:nonuniform} is equivalent to that of Corollary~\ref{thm:nocycle}. With nonuniform sampling, we no longer need to care whether the original continuous-time system would have eigenvalue cycles under uniform sampling.
From a practical point of view, nonuniform sampling is the right way of breaking eigenvalue cycles. The critical erasure probability of $\frac{1}{|\lambda_{max}|^2}$ can be achieved not only by the computationally challenging estimation-before-packetization strategy of \cite{Sahai_Thesis}, but also by the simple memoryless approach of dithered sampling before packetization. Moreover, even if the sensors were themselves distributed, nonuniform sampling is still \textit{critical value optimal} in the sense that it achieves the same critical erasure probability as sensors with causal or noncausal information about the erasure pattern and with unbounded complexity.
\subsection{Extensions of Intermittent Kalman Filtering with Nonuniform Sampling} In this section, we discuss variations and extensions of intermittent Kalman filtering with nonuniform samples. Since the proofs of the results shown in this section are similar to that of Theorem~\ref{thm:nonuniform}, we only present the results without proofs.
\subsubsection{General Distribution on $t_n$} First, we relax the condition on the distribution of $t_n$ in Theorem~\ref{thm:nonuniform}. There, we assumed that the $t_n$ are identically and uniformly distributed. However, they need be neither identically nor uniformly distributed.
\begin{proposition}
Assume that $t_0,t_1,\cdots$ are independent and there exist $a,c>0$ such that $\mathbb{P}\{ |t_n| \geq a \} =0 $ and $\mathbb{P}\{ t_n \in B \} \leq c |B|_{\mathcal{L}}$ for all $n \in \mathbb{Z}^+$ and $B \in \mathcal{B}$, where $\mathcal{B}$ is the Borel $\sigma$-algebra and $|\cdot|_{\mathcal{L}}$ is the Lebesgue measure. Then, Theorem~\ref{thm:nonuniform} still holds, i.e. if $(\mathbf{A_c},\mathbf{C})$ has no unobservable and unstable eigenvalues, the intermittent system with nonuniform samples is intermittent observable if and only if $p_e < \frac{1}{|e^{2 \lambda_{max} I}|}$. \end{proposition}
For the proof of the proposition, we can repeat the proof steps of Theorem~\ref{thm:nonuniform} using an improper distribution $\mu$ such that $\mu(A)=c|A \cap [-a,a]|_{\mathcal{L}}$.
\subsubsection{Deterministic Sequences for $t_n$} The randomness assumption on $t_n$ can also be removed. As we mentioned earlier, the probabilistic proof is an indirect proof for the existence of deterministic nonuniform sampling patterns. In fact, any nonuniform sequence satisfying Weyl's criterion --- which gives a necessary and sufficient condition for a sequence to be equidistributed on an interval --- can be used to break eigenvalue cycles. \begin{proposition}
Let a sequence $t_n \in [0,T]$ satisfy Weyl's criterion, i.e. for all $h \in \mathbb{Z}\setminus \{ 0 \}$, $\underset{N \rightarrow \infty}{\lim} \left| \frac{1}{N} \underset{1 \leq n \leq N}{\sum} e^{j 2\pi h \cdot \frac{t_n}{T}} \right| =0$. Then, Theorem~\ref{thm:nonuniform} still holds, i.e. if $(\mathbf{A_c},\mathbf{C})$ has no unobservable and unstable eigenvalues, the intermittent system with nonuniform samples is intermittent observable if and only if $p_e < \frac{1}{|e^{2 \lambda_{max} I}|}$. \end{proposition}
For example, the sequence $t_n = \sqrt{2}n - \lfloor \sqrt{2}n \rfloor$ can be used to break eigenvalue cycles. The proof follows by merging the proofs of Theorem~\ref{thm:mainsingle} and Theorem~\ref{thm:nonuniform}.
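Weyl's criterion for this sequence is easy to check numerically. The sketch below evaluates the exponential sums for $t_n=\sqrt{2}n-\lfloor\sqrt{2}n\rfloor$ and contrasts them with a periodic sequence for which the sums do not vanish ($T=1$ and the frequencies tested are arbitrary choices).

```python
import numpy as np

def weyl_sum(t, h, T=1.0):
    """|1/N sum_n e^{j 2 pi h t_n / T}|: vanishes as N grows iff the
    sequence t_n is equidistributed on [0, T] (Weyl's criterion)."""
    return abs(np.mean(np.exp(2j * np.pi * h * np.asarray(t) / T)))

N = 100000
n = np.arange(1, N + 1)
t_weyl = np.sqrt(2) * n - np.floor(np.sqrt(2) * n)   # sqrt(2) n mod 1
t_cycle = (n % 2) / 2.0                              # periodic 0, 1/2, 0, ...

s_weyl = max(weyl_sum(t_weyl, h) for h in (1, 2, 3)) # tiny for every h tested
s_cycle = weyl_sum(t_cycle, 2)                       # stays at 1: not equidistributed
```

The periodic sequence plays the role of a sampling pattern that fails to break an eigenvalue cycle of period two.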
\subsubsection{Nonuniform-length integration interval} In Theorem~\ref{thm:nonuniform}, we introduced nonuniform sampling by changing the starting time of the integration. Another way of introducing nonuniform sampling is changing the length of the integration interval. To take the $n$th sample of the system, the sensor integrates $\mathbf{y_c}(t)$ from $(n-1)I-t_n$ to $nI$. Parallel to \eqref{eqn:non:4}, we have the following equation. \begin{align} \mathbf{y_o}[n] &= \int^{nI}_{(n-1)I-t_n} \mathbf{y_c}(t) dt \nonumber \\ &=\left( \int^{nI}_{(n-1)I-t_n} \mathbf{C_c} e^{\mathbf{A_c}(t-nI)} dt \right) \mathbf{x_c}(nI) \nonumber \\ &- \int^{nI}_{(n-1)I-t_n} \int^{nI}_{t} \mathbf{C_c}e^{\mathbf{A_c}(t-t')}\mathbf{B_c}d\mathbf{W_c}(t')dt + \int^{nI}_{(n-1)I-t_n} \mathbf{D_c} d\mathbf{V_c}(t) \nonumber \\ &=\underbrace{\left( \int^{I+t_n}_{0} \mathbf{C_c} e^{\mathbf{A_c}(t-I-t_n)} dt \right)}_{:=\mathbf{C_n}} \mathbf{x_c}(nI) \nonumber \\ &\underbrace{-\int^{nI}_{(n-1)I-t_n} \int^{nI}_{t} \mathbf{C_c}e^{\mathbf{A_c}(t-t')}\mathbf{B_c}d\mathbf{W_c}(t')dt + \int^{nI}_{(n-1)I-t_n} \mathbf{D_c} d\mathbf{V_c}(t)}_{:=\mathbf{v}[n]} \nonumber \end{align} $\mathbf{y_o}[n]$ is transmitted over the erasure channel, and the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_n})$ with nonuniform samples and erasure probability $p_e$ has the following system equations, which correspond to \eqref{eqn:conti:xsample} and \eqref{eqn:conti:ysample}. \begin{align} &d\mathbf{x_c}(t)=\mathbf{A_c}\mathbf{x_c}(t)dt+\mathbf{B_c} d\mathbf{W_c}(t) \nonumber \\ &\mathbf{y}[n]=\beta[n](\mathbf{C_n}\mathbf{x_c}(nI) + \mathbf{v}[n]) \nonumber \end{align} Then, the intermittent observability condition for $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_n})$ is similar to Theorem~\ref{thm:nonuniform}. \begin{proposition}
Let $t_n$ be i.i.d.~random variables uniformly distributed on $[0,T]\ (T>0)$, and $(\mathbf{A_c},\mathbf{B_c})$ be controllable. If $(\mathbf{A_c},\mathbf{C_c})$ has unobservable and unstable eigenvalues, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_n})$ with nonuniform samples is not intermittent observable for all $p_e$. Otherwise, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_n})$ with nonuniform samples is intermittent observable if and only if $p_e < \frac{1}{|e^{2 \lambda_{max} I}|}$ where $\lambda_{max}$ is the eigenvalue of $\mathbf{A_c}$ with the largest real part. \label{prop:4} \end{proposition}
Compared to Theorem~\ref{thm:nonuniform}, we can see that the observability condition on $(\mathbf{A_c},\mathbf{C})$ is relaxed to the observability condition on $(\mathbf{A_c},\mathbf{C_c})$. This is due to the following fact: $\int^{nI-t_n}_{(n-1)I-t_n}e^{j \frac{2 \pi}{I}t}dt=0$ for all $t_n$, while $\int^{nI}_{(n-1)I-t_n}e^{j\frac{2 \pi}{I}t}dt \neq 0$ for some $t_n$. Even if $(\mathbf{A_c},\mathbf{C_c})$ is observable, $(\mathbf{A_c},\mathbf{C})$ can be unobservable regardless of $t_n$, while $(\mathbf{A_c},\mathbf{C_n})$ is observable for almost all $t_n$.
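Both integral facts can be verified numerically. The sketch below uses a midpoint rule with illustrative values $I=1$, $n=3$, $t_n=0.3$.

```python
import numpy as np

def integral(lo, hi, I=1.0, steps=200000):
    """Midpoint-rule approximation of int_lo^hi e^{j 2 pi t / I} dt."""
    h = (hi - lo) / steps
    t = lo + (np.arange(steps) + 0.5) * h
    return np.sum(np.exp(2j * np.pi * t / I)) * h

I, n, t_n = 1.0, 3, 0.3    # illustrative values; any 0 < t_n < I works

# Full-period window [(n-1)I - t_n, nI - t_n]: the integral vanishes.
full = integral((n - 1) * I - t_n, n * I - t_n, I)
# Stretched window [(n-1)I - t_n, nI] of length I + t_n: generally nonzero.
stretched = integral((n - 1) * I - t_n, n * I, I)
```

Shifting the window never recovers a frequency at exactly the sampling rate, but stretching the window does, which is why the weaker observability condition suffices.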
\subsubsection{Nonuniform Time-varying Filtering} In some cases, it is impossible to change the sampling time. In such cases, we can use nonuniform time-varying filtering to break eigenvalue cycles. Consider the following discrete-time system: \begin{align} &\mathbf{x}[n+1]=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n] \nonumber \\ &\mathbf{y_o}[n]=\mathbf{C}\mathbf{x}[n]+\mathbf{v}[n]. \nonumber \end{align} Here $\mathbf{y_o}[n]$ are the observations at the sensor, and the sensor cannot change the sampling intervals. Instead, the sensor applies nonuniform filtering to the observations as follows: \begin{align} &\mathbf{y_o'}[n]=\alpha[n]\mathbf{y_o}[n]+\alpha'[n]\mathbf{y_o}[n-1]. \nonumber \end{align}
This is just like introducing an FIR (finite impulse response) filter at the sensor except that the impulse response of the filter keeps changing over time.
The output of the nonuniform time-varying filter, $\mathbf{y_o'}[n]$, is transmitted over the erasure channel. Therefore, the intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C})$ with erasure probability $p_e$ and nonuniform time-varying filtering has the following system equations: \begin{align} &\mathbf{x}[n+1]=\mathbf{A}\mathbf{x}[n]+\mathbf{B}\mathbf{w}[n] \nonumber \\ &\mathbf{y}[n]=\beta[n](\mathbf{y_o'}[n]) \nonumber \\ &\ \quad=\beta[n](\alpha[n]\mathbf{C}\mathbf{x}[n]+\alpha'[n]\mathbf{C}\mathbf{x}[n-1]+\alpha[n]\mathbf{v}[n]+\alpha'[n]\mathbf{v}[n-1] ) \nonumber \end{align} The intermittent observability with nonuniform filtering is given as the following proposition. \begin{proposition}
Let $\alpha[n]$ and $\alpha'[n]$ be i.i.d.~random variables uniformly distributed on $[0,T]\ (T>0)$, and $(\mathbf{A},\mathbf{B})$ be controllable. If $(\mathbf{A},\mathbf{C})$ has unobservable and unstable eigenvalues, the intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C})$ with nonuniform filtering is not intermittent observable for all $p_e$. Otherwise, the intermittent system $(\mathbf{A},\mathbf{B},\mathbf{C})$ with nonuniform filtering is intermittent observable if and only if $p_e < \frac{1}{|\lambda_{max}|^2}$ where $\lambda_{max}$ is the largest magnitude eigenvalue of $\mathbf{A}$. \label{prop:5} \end{proposition}
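A minimal sketch of this scheme is given below; the system matrices (with the eigenvalue cycle $\{1.1,-1.1\}$), noise statistics, and erasure probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discrete-time system with the eigenvalue cycle {1.1, -1.1}; all
# matrices and parameters here are illustrative assumptions.
A = np.array([[1.1, 0.0], [0.0, -1.1]])
B = np.eye(2)
C = np.array([[1.0, 1.0]])
T = 1.0

def run(N=100, p_e=0.3):
    """Generate y[n] = beta[n] (alpha[n] C x[n] + alpha'[n] C x[n-1]
                                + alpha[n] v[n] + alpha'[n] v[n-1]):
    a random 2-tap time-varying FIR filter applied at the sensor before
    the erasure channel."""
    x_prev, x = np.zeros(2), np.zeros(2)
    v_prev = rng.normal()
    ys, taps = [], []
    for n in range(N):
        x_prev, x = x, A @ x + B @ rng.normal(size=2)
        a0, a1 = rng.uniform(0.0, T, size=2)     # taps alpha[n], alpha'[n]
        v = rng.normal()
        y = a0 * (C @ x)[0] + a1 * (C @ x_prev)[0] + a0 * v + a1 * v_prev
        v_prev = v
        ys.append(y if rng.random() >= p_e else None)
        taps.append((a0, a1))
    return ys, taps

ys, taps = run()
n_received = sum(y is not None for y in ys)
```

The estimator knows the tap sequence, so each received sample mixes two consecutive states with fresh random weights, which is what destroys the cycle.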
\begin{figure}
\caption{(a): uniform sampling of Theorem~\ref{thm:mainsingle}, (b): nonuniform sampling of Theorem~\ref{thm:nonuniform}, (c): nonuniform sampling of Proposition~\ref{prop:4}, (d): nonuniform filtering of Proposition~\ref{prop:5}, (e): nonuniform sampling with nonuniform waveforms}
\label{fig:1a}
\label{fig:1b}
\label{fig:1c}
\label{fig:1d}
\label{fig:1e}
\label{fig:1}
\end{figure}
\subsubsection{Sampling with Nonuniform Waveforms} So far, in Theorem~\ref{thm:nonuniform}, Proposition~\ref{prop:4}, and Proposition~\ref{prop:5}, we have seen three different ways of breaking eigenvalue cycles. However, these methods are essentially the same, and they can all be generalized to nonuniform sampling with nonuniform waveforms.
Fig.~\ref{fig:1} shows the nonuniform sampling methods used to break eigenvalue cycles in terms of their waveforms. First, Fig.~\ref{fig:1a} shows the uniform sampling that is implicitly used to obtain the discrete-time system \eqref{eqn:dis:system}, \eqref{eqn:dis:system2} from the underlying continuous-time system. As we saw in Theorem~\ref{thm:mainsingle}, the eigenvalue cycles were not broken in this case. Fig.~\ref{fig:1b} shows the nonuniform sampling obtained by changing the starting time of the integration, which is used in Theorem~\ref{thm:nonuniform}. In this case, the eigenvalue cycles were successfully broken, but we can still observe regularity in the integration intervals. Due to this regularity, we needed the observability of $(\mathbf{A_c},\mathbf{C})$ instead of the observability of $(\mathbf{A_c},\mathbf{C_c})$. Fig.~\ref{fig:1c} shows the nonuniform sampling obtained by changing the length of the integration interval, which is used in Proposition~\ref{prop:4}. The eigenvalue cycles were also broken in this case, and due to the lack of regularity in the sampling intervals the observability of $(\mathbf{A_c},\mathbf{C_c})$ was enough. Fig.~\ref{fig:1d} shows the nonuniform filtering, which is used in Proposition~\ref{prop:5} and also successfully breaks the eigenvalue cycles. Therefore, we can conclude that as long as the sampling waveforms are not uniform as in Fig.~\ref{fig:1a}, the eigenvalue cycles are broken. In general, the nonuniform waveforms shown in Fig.~\ref{fig:1e} can be used to break eigenvalue cycles, and it is an interesting technical question to find the minimal condition on nonuniform waveforms that breaks eigenvalue cycles.
\subsubsection{Extension to Parallel Channels} Theorem~\ref{thm:nonuniform} can also be extended to multiple sensors that transmit their observations through parallel erasure channels. Consider the following continuous-time system equations. \begin{align} &d\mathbf{x_c}(t)=\mathbf{A_c}\mathbf{x_c}(t)dt + \mathbf{B_c}d\mathbf{W_c}(t) \nonumber \\ &\mathbf{y_{c,1}}(t)=\mathbf{C_{c,1}}\mathbf{x_c}(t) + \mathbf{D_{c,1}} \frac{d\mathbf{V_{c,1}(t)}}{dt} \nonumber \\ &\vdots \nonumber \\ &\mathbf{y_{c,d}}(t)=\mathbf{C_{c,d}}\mathbf{x_c}(t) + \mathbf{D_{c,d}} \frac{d\mathbf{V_{c,d}(t)}}{dt} \nonumber \end{align} Here $t$ is the non-negative real-valued time index. $\mathbf{A_c} \in \mathbb{C}^{m \times m}$, $\mathbf{B_c} \in \mathbb{C}^{m \times g}$, $\mathbf{C_{c,i}} \in \mathbb{C}^{l_i \times m}$ and $\mathbf{D_{c,i}} \in \mathbb{C}^{l_i \times l_i}$ where $\mathbf{D_{c,i}}$ is invertible. $\mathbf{W_c}(t)$ and $\mathbf{V_{c,i}}(t)$ are independent $g$- and $l_i$-dimensional standard Wiener processes respectively.
Like \eqref{eqn:non:4}, the $n$th sample at the sensor $i$ is obtained by integrating $\mathbf{y_{c,i}}(t)$ from $(n-1)I-t_{n,i}$ to $nI-t_{n,i}$: \begin{align} \mathbf{y_{o,i}}[n]&=\int^{nI-t_{n,i}}_{(n-1)I-t_{n,i}} \mathbf{y_{c,i}}(t) dt \nonumber \\ &=\underbrace{\left( \int^{I}_{0} \mathbf{C_{c,i}}e^{\mathbf{A_c}(t-I)}dt \right)}_{:=\mathbf{C_i}} \mathbf{x_c}(nI-t_{n,i}) \nonumber \\ &\underbrace{-\int^{nI-t_{n,i}}_{(n-1)I-t_{n,i}} \int^{nI-t_{n,i}}_{t} \mathbf{C_{c,i}} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt +\int^{nI-t_{n,i}}_{(n-1)I-t_{n,i}} \mathbf{D_{c,i}} d\mathbf{V_{c,i}}(t)}_{:=\mathbf{v_i}[n]} \nonumber \end{align}
Since the $\mathbf{y_{o,i}}[n]$ are transmitted over the parallel erasure channels, the intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_i})$ with parallel channels has the following system equations: \begin{align} &d\mathbf{x_c}(t)=\mathbf{A_c}\mathbf{x_c}(t)dt+\mathbf{B_c}d\mathbf{W_c}(t) \nonumber \\ &\mathbf{y_1}[n]=\beta_1[n](\mathbf{C_1}\mathbf{x_c}(nI-t_{n,1})+\mathbf{v_1}[n]) \nonumber \\ &\vdots \nonumber \\ &\mathbf{y_d}[n]=\beta_d[n](\mathbf{C_d}\mathbf{x_c}(nI-t_{n,d})+\mathbf{v_d}[n]) \nonumber \end{align} where $\mathbf{y_i}[n] \in \mathbb{C}^{l_i}$ and $\beta_i[n]$ are independent Bernoulli random processes with erasure probabilities $p_{e,i}$.
Like before, by a change of coordinates, we can rewrite the above system equations to the ones with a Jordan form $\mathbf{A_c}$ without changing the intermittent observability. Therefore, like \eqref{eqn:ac:jordan:thm}, \eqref{eqn:ac2:jordan:thm} and \eqref{eqn:def:lprime:thm} we can write $\mathbf{A_c}$ and $\mathbf{C_i}$ as follows without loss of generality. \begin{align} &\mathbf{A_c}=diag\{ \mathbf{A_{1,1}}, \mathbf{A_{1,2}}, \cdots, \mathbf{A_{\mu,\nu_{\mu}}} \} \nonumber \\ &\mathbf{C_i}=\begin{bmatrix} \mathbf{C_{1,1,i}} & \mathbf{C_{1,2,i}} & \cdots & \mathbf{C_{\mu,\nu_\mu,i}} \end{bmatrix} \nonumber \\ &\mbox{where } \nonumber\\ &\quad \mathbf{A_{i,j}} \mbox{ is a Jordan block with eigenvalue $\lambda_i$} \nonumber \\ &\quad \lambda_1, \cdots , \lambda_\mu \mbox{ are pairwise distinct} \nonumber \\ &\quad \mathbf{C_{i,j,k}}\mbox{ is a $l_k \times \dim \mathbf{A_{i,j}}$ complex matrix.} \nonumber \end{align}
Denote \begin{align} &\mathbf{C_{i,j}}=\begin{bmatrix} (\mathbf{C_{i,1,j}})_1 & \cdots & (\mathbf{C_{i,\nu_i,j}})_1 \end{bmatrix}\nonumber \\ &\mbox{where $\left( \mathbf{C_{i,j,k}} \right)_1$ implies the first column of $\mathbf{C_{i,j,k}}$} \nonumber \end{align}
Let $(l_{i,1},l_{i,2},\cdots,l_{i,d}) \in \{ 0,1 \}^d$ such that \begin{align} \begin{bmatrix} \mathbf{1}(l_{i,1}=0)\mathbf{C_{i,1}} \\ \vdots \\ \mathbf{1}(l_{i,d}=0)\mathbf{C_{i,d}} \end{bmatrix} \nonumber \end{align} is rank deficient, i.e. the rank is strictly less than $\nu_i$.
Denote $L_i$ as the set of such vectors $(l_{i,1},l_{i,2},\cdots,l_{i,d})$. Then, the intermittent observability of the system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_i})$ with parallel channels is characterized by the following proposition. \begin{proposition} Given an intermittent system $(\mathbf{A_c}, \mathbf{B_c}, \mathbf{C_i})$ with erasure probabilities $(p_{e,1}, \cdots, p_{e,d})$, let $(\mathbf{A_c},\mathbf{B_c})$ be controllable, and $t_{n,i}$ be independent random variables uniformly distributed on $[0,T]\ (T>0)$. The intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C_i})$ with parallel channels is intermittent observable if and only if \begin{align}
\max_{1 \leq i \leq \mu} \max_{(l_{i,1},l_{i,2},\cdots,l_{i,d}) \in L_i}\left( \prod_{1 \leq j \leq d} p_{e,j}^{l_{i,j}} \right) |e^{2 \lambda_i I}| < 1. \nonumber \end{align} \end{proposition}
\section{Proofs}
The proofs of Theorem~\ref{thm:mainsingle} and Theorem~\ref{thm:nonuniform} are quite similar, and we can directly relate them by Weyl's criterion~\cite{Kuipers}. For presentation purposes, we will first present the proof of the nonuniform sampling case, Theorem~\ref{thm:nonuniform}, which is easier than that of Theorem~\ref{thm:mainsingle}.
\label{sec:proof}
\subsection{Sufficiency Proof of Theorem~\ref{thm:nonuniform} (Non-uniform Sampling)} \label{sec:cont:suf}
We will prove that if $(\mathbf{A_c},\mathbf{C})$ does not have unobservable and unstable eigenvalues and $p_e < \frac{1}{|e^{2\lambda_{max}I}|}$, the system is intermittent observable.
$\bullet$ Reduction to a Jordan form matrix $\mathbf{A_c}$: To simplify the problem, we first restrict to system equations \eqref{eqn:conti:xsample} and \eqref{eqn:conti:ysample} with the following properties. We will also justify that this restriction is without loss of generality and does not change intermittent observability.\\ (a) The system matrix $\mathbf{A_c}$ is a Jordan form matrix.\\ (b) All eigenvalues of $\mathbf{A_c}$ are unstable, i.e. the real parts are nonnegative.\\ (c) \eqref{eqn:conti:xsample} and \eqref{eqn:conti:ysample} can be extended to two-sided processes.
The restriction (a) can be justified by a similarity transform~\cite{Chen}. As mentioned before, it is known~\cite{Chen} that for any square matrix $\mathbf{A_c}$, there exists an invertible matrix $\mathbf{U}$ and an upper-triangular Jordan matrix $\mathbf{A_c'}$ such that $\mathbf{A_c}=\mathbf{U}\mathbf{A_c'}\mathbf{U}^{-1}$. Then, equations \eqref{eqn:non:0} and \eqref{eqn:non:4} can be rewritten as \begin{align} \mathbf{U}^{-1}\mathbf{x_c}(t)&=e^{\mathbf{A_c'}t}\mathbf{U}^{-1}\mathbf{x_c}(0)+\int^t_0 e^{\mathbf{A_c'}(t-t')}\mathbf{U}^{-1}\mathbf{B_c} d\mathbf{W_c}(t') \nonumber \\ \mathbf{y_o}[n]&= \int^{I}_{0}\mathbf{C_c}\mathbf{U}e^{\mathbf{A_c'}(t-I)}dt \mathbf{U}^{-1}\mathbf{x_c}(nI-t_n) \nonumber \\ &-\int^{nI-t_n}_{(n-1)I-t_n}\int^{nI-t_n}_{t}\mathbf{C_c}\mathbf{U}e^{\mathbf{A_c'}(t-t')}\mathbf{U}^{-1}\mathbf{B_c}d\mathbf{W_c}(t')dt +\int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c} d\mathbf{V_c}(t). \nonumber \end{align} Thus, by denoting $\mathbf{x_c'}(t):=\mathbf{U}^{-1}\mathbf{x_c}(t)$, $\mathbf{B_c'}:=\mathbf{U}^{-1}\mathbf{B_c}$, and $\mathbf{C_c'}:=\mathbf{C_c}\mathbf{U}$, the system equations \eqref{eqn:contistate}, \eqref{eqn:contiob} and \eqref{eqn:conti:ysample} can be written in the following equivalent forms. \begin{align} &d\mathbf{x_c'}(t)=\mathbf{A_c'}\mathbf{x_c'}(t) dt + \mathbf{B_c'}d \mathbf{W_c}(t) \nonumber \\ &\mathbf{y_c}(t)=\mathbf{C_c'}\mathbf{x_c'}(t)+\mathbf{D_c}\frac{d\mathbf{V_c}(t)}{dt}\nonumber \\ &\mathbf{y_o}[n]=\mathbf{C'}\mathbf{x_c'}(nI-t_n)+\mathbf{v}[n] \nonumber \end{align} where $\mathbf{C'}:=\int^I_0 \mathbf{C_c'}e^{\mathbf{A_c'}(t-I)}dt=\int^I_0 \mathbf{C_c} \mathbf{U} \mathbf{U}^{-1} e^{\mathbf{A_c}(t-I)} \mathbf{U}dt=\mathbf{C}\mathbf{U}$.
Since $\mathbf{U}$ is invertible, $(\mathbf{A_c},\mathbf{C})$ has an unobservable eigenvalue $\lambda$ if and only if $(\mathbf{A_c'},\mathbf{C'})$ has an unobservable eigenvalue $\lambda$. Moreover, since $\mathbf{x'_c}(t)=\mathbf{U}^{-1}\mathbf{x_c}(t)$, the original intermittent system $(\mathbf{A_c},\mathbf{B_c},\mathbf{C})$ with nonuniform samples is intermittent observable if and only if the new intermittent system $(\mathbf{A_c'},\mathbf{B_c'},\mathbf{C'})$ with nonuniform samples is intermittent observable. Thus, without loss of generality, we can assume $\mathbf{A_c}$ is given in a Jordan form, which justifies (a).
Once $\mathbf{A_c}$ is given in a Jordan form, there is a natural correspondence between the eigenvalues and the states. If there is a stable eigenvalue --- i.e. the real part of the eigenvalue is negative --- the variance of the corresponding state is uniformly bounded. Thus, we do not have to estimate that state to make the estimation error finite. In the observation $\mathbf{y}[n]$, the stable states can be considered as a part of the observation noise $\mathbf{v}[n]$, and the variance of $\mathbf{v}[n]$ remains uniformly bounded (even though $\mathbf{v}[n]$ may become correlated). Therefore, we can assume (b) without loss of generality.
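To see why a stable state has uniformly bounded variance, consider the scalar case (a standard computation, stated here for completeness): if $dx(t)=\lambda x(t) dt + dW(t)$ with $\Re(\lambda)<0$, then \begin{align} \mathbb{E}[|x(t)|^2]=e^{2\Re(\lambda)t}\,\mathbb{E}[|x(0)|^2]+\int^t_0 e^{2\Re(\lambda)(t-s)}ds \leq \mathbb{E}[|x(0)|^2]+\frac{1}{2|\Re(\lambda)|}, \nonumber \end{align} which is bounded uniformly over $t$.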
To justify restriction (c), we put $\mathbf{W_c}(t)=0$ for $t < 0$, $\mathbf{V_c}(t)=0$ for $t <0$, and let $\beta[n]$ be a two-sided Bernoulli process with probability $1-p_e$. Then, the resulting two-sided processes $\mathbf{x_c}(t)$ and $\mathbf{y}[n]$ are identical to the original one-sided processes except that $\mathbf{x_c}(t)=0$ for $t \in \mathbb{R}^{--}$ and $\mathbf{y}[n]=0$ for $n \in \mathbb{Z}^{--}$.
In summary, without loss of generality we can assume that $\mathbf{A_c}$ is in a Jordan form, all eigenvalues of $\mathbf{A_c}$ are unstable, and \eqref{eqn:conti:xsample} and \eqref{eqn:conti:ysample} are two-sided processes. Thus, we can assume $\mathbf{A_c} \in \mathbb{C}^{m \times m}$ and $\mathbf{C} \in \mathbb{C}^{l \times m}$ are given as follows. \begin{align} &\mathbf{A_c}=diag\{\mathbf{A_{1,1}},\mathbf{A_{1,2}},\cdots, \mathbf{A_{1,\nu_{1}}},\cdots,\mathbf{A_{\mu,1}},\cdots,\mathbf{A_{\mu,\nu_{\mu}}}\} \label{eqn:conti:a2} \\ &\mathbf{C}=\begin{bmatrix} \mathbf{C_{1,1}} & \mathbf{C_{1,2}} & \cdots & \mathbf{C_{1,\nu_{1}}} & \cdots & \mathbf{C_{\mu,1}} & \cdots & \mathbf{C_{\mu,\nu_{\mu}}} \end{bmatrix} \label{eqn:conti:c2} \\ &\mbox{where } \nonumber\\ &\quad\mathbf{A_{i,j}} \mbox{ is a Jordan block with eigenvalue $\lambda_{i}+j\omega_{i}$ and size $m_{i,j}$} \nonumber \\ &\quad m_{i,1} \leq m_{i,2} \leq \cdots \leq m_{i,\nu_i} \mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_\mu \geq 0 \nonumber \\ &\quad \lambda_1+j\omega_1, \lambda_2+j\omega_2, \cdots , \lambda_\mu+j\omega_\mu \mbox{ are pairwise distinct} \nonumber \\ &\quad \mathbf{C_{i,j}}\mbox{ is a $l \times m_{i,j}$ complex matrix} \nonumber \\ &\quad \mbox{The first columns of $\mathbf{C_{i,1}},\mathbf{C_{i,2}},\cdots,\mathbf{C_{i,\nu_i}}$ are linearly independent}. \nonumber \end{align} Here, $\mathbf{A_{i,1}}, \cdots, \mathbf{A_{i,\nu_i}}$ are the Jordan blocks corresponding to the same eigenvalue. The Jordan blocks are sorted in descending order of the real parts of the eigenvalues. This permutation of Jordan blocks is justified since Jordan forms are block diagonal matrices. The linear independence of $\mathbf{C_{i,1}}, \mathbf{C_{i,2}}, \cdots, \mathbf{C_{i,\nu_i}}$ comes from the observability of $(\mathbf{A_c},\mathbf{C})$ (by Theorem~\ref{thm:jordanob}).
$\bullet$ Uniform boundedness of observation noise: To prove intermittent observability, we will propose a suboptimal maximum likelihood estimator and analyze it. To upper bound the estimation error, we first upper bound the disturbances and observation noises in the system.
By \eqref{eqn:non:1}, we have \begin{align} \mathbf{x_c}((n-k)I-t_{n-k})=e^{-\mathbf{A_c}(kI+t_{n-k})}\mathbf{x_c}(nI) \underbrace{-\int^{nI}_{(n-k)I-t_{n-k}} e^{\mathbf{A_c}((n-k)I-t_{n-k}-t')} \mathbf{B_c} d\mathbf{W_c}(t')}_{:=\mathbf{w'}[n-k]}. \label{eqn:non:5} \end{align} By plugging this equation into \eqref{eqn:conti:ysample}, we get \begin{align} \mathbf{y}[n-k]&=\mathbf{C}\mathbf{x_c}((n-k)I-t_{n-k})+\mathbf{v}[n-k] \nonumber \\ &=\mathbf{C}e^{-\mathbf{A_c}(kI+t_{n-k})}\mathbf{x_c}(nI)+\underbrace{\mathbf{C}\mathbf{w'}[n-k]+\mathbf{v}[n-k]}_{:=\mathbf{v'}[n-k]}. \label{eqn:nonuniform:1} \end{align} We will upper bound the variance of $\mathbf{v'}[n-k]$. First, consider the variance of $\mathbf{w'}[n-k]$. By the assumption (b), all eigenvalues of $\mathbf{A_c}$ are unstable, and since $t_{n-k} \in [0,T]$, $((n-k)I-t_{n-k}-t')$ ranges within $[-(kI+T),0]$. Thus, there exists $p' \in \mathbb{N}$ such that \begin{align} \mathbb{E}[\mathbf{w'}[n-k]^\dag \mathbf{w'}[n-k]] \lesssim 1+k^{p'} \label{eqn:pprimedef2} \end{align} where $\lesssim$ holds for all $n$. (See Definition~\ref{def:lesssim} for the definition of $\lesssim$.)
By \eqref{eqn:non:4}, the variance of $\mathbf{v}[n]$ is uniformly bounded\footnote{To justify assumption (b), we consider the stable states as a part of observation noise $\mathbf{v}[n]$. However, this does not change the uniform boundedness since the variances of the stable states are also uniformly bounded.} for all $n$. Therefore, we have $\mathbb{E}[\mathbf{v'}[n-k]^\dag \mathbf{v'}[n-k]] \lesssim 1+k^{p'}$ for all $n$.
Moreover, since $W_c(t)$ is a standard Wiener process with unit variance, $\underset{n \in \mathbb{Z}}{\sup} \mathbb{E}[(\mathbf{x}(nI)-\mathbf{\widehat{x}}(nI))^\dag (\mathbf{x}(nI)-\mathbf{\widehat{x}}(nI))] < \infty$ implies $\underset{t \in \mathbb{R}}{\sup} \mathbb{E}[(\mathbf{x}(t)-\mathbf{\widehat{x}}(t))^\dag (\mathbf{x}(t)-\mathbf{\widehat{x}}(t))] < \infty$. Thus, it is enough to estimate the state only at discrete time steps.
$\bullet$ Suboptimal Maximum Likelihood Estimator: Now, we will give the suboptimal state estimator which only uses a finite number of recent observations. We first need the following key lemma. \begin{lemma} Let $\mathbf{A_c}$ and $\mathbf{C}$ be given as in \eqref{eqn:conti:a2} and \eqref{eqn:conti:c2}, $\beta[n]$ be a Bernoulli process with probability $1-p_e$, and $t_n$ be i.i.d.~random variables whose distribution is uniform on $[0,T]~(T>0)$. Then, we can find $m' \in \mathbb{N}$, a polynomial $p(k)$ and a family of stopping times $\{ S(\epsilon,k): k \in \mathbb{Z}^+, 0 < \epsilon < 1 \}$ such that for all $k \in \mathbb{Z}^+$ and $0 < \epsilon < 1$ there exist $k \leq k_1 < k_2 < \cdots < k_{m'} \leq S(\epsilon,k)$ and a $m \times m'l$ matrix $\mathbf{M}$ satisfying the following four conditions:\\ (i) $\beta[k_i] = 1$ for all $1 \leq i \leq m'$\\ (ii) $ \mathbf{M} \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C} e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A_c}} \\ \end{bmatrix} = \mathbf{I}_{m \times m} $\\ (iii) $
\left| \mathbf{M} \right|_{max} \leq \frac{p(S(\epsilon,k))}{\epsilon} e^{\lambda_1 S(\epsilon,k) I} $ \\ (iv) $ \lim_{\epsilon \downarrow 0} \left(\exp \limsup_{s \rightarrow \infty} \sup_{k\in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P} \{ S(\epsilon,k)-k=s \} \right) \leq p_e $. \label{lem:conti:mo} \end{lemma} \begin{proof} See Appendix~\ref{sec:app:2}. \end{proof}
Since we have $p_e<\frac{1}{|e^{2 \lambda_{max} I}|}=\frac{1}{e^{2 \lambda_1 I}}$, there exists $\delta > 1$ such that $\delta^5 p_e < \frac{1}{e^{2 \lambda_1 I}}$. By Lemma~\ref{lem:conti:mo}, we can find $m' \in \mathbb{N}$, $0 < \epsilon < 1$, a polynomial $p(k)$ and a family of stopping times $\{ S(n) : n \in \mathbb{Z}^+ \}$ such that for all $n$, there exist $0 \leq k_1 < k_2 < \cdots < k_{m'} \leq S(n)$ and a $m \times m'l$ matrix $\mathbf{M_n}$ satisfying the following four conditions:\\ (i') $\beta[n-k_i] = 1$ for $1 \leq i \leq m'$\\ (ii') $ \mathbf{M_n} \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_{n-k_1})\mathbf{A_c}} \\ \mathbf{C} e^{-(k_2 I + t_{n-k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m'} I + t_{n-k_{m'}})\mathbf{A_c}} \\ \end{bmatrix} = \mathbf{I}_{m \times m} $\\ (iii') $
\left| \mathbf{M_n} \right|_{max} \leq \frac{p(S(n))}{\epsilon} e^{\lambda_1 I \cdot S(n)} $ \\ (iv') $ \exp \left(\limsup_{s \rightarrow \infty} \sup_{n \in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P} \{ S(n)=s \} \right) \leq \sqrt{\delta} p_e $.
Then, here is the proposed suboptimal maximum likelihood estimator for $\mathbf{x}(nI)$: \begin{align} \mathbf{\widehat{x}}(nI)=\mathbf{M_n}\begin{bmatrix} \mathbf{y}[n-k_1] \\ \mathbf{y}[n-k_2] \\ \vdots \\ \mathbf{y}[n-k_{m'}] \\ \end{bmatrix}. \label{eqn:nonuniform:2} \end{align} Here, $k_i$ also depends on $n$, but we omit the dependency in the notation for simplicity. Notice that the number of observations used in the estimate, $m'$, can be much larger than the dimension of the system, $m$. In other words, the proposed estimator may use many more observations than the number of states (the number of observations that a simple matrix-inverse observer needs). This is because we use a successive decoding idea in the proof of Lemma~\ref{lem:conti:mo}.
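To make the structure of the estimator concrete, the following sketch (in Python with NumPy; the diagonal $\mathbf{A_c}$, observation matrix, lags, and sampling jitters below are hypothetical choices for illustration, not from this paper) verifies that any left inverse $\mathbf{M}$ of the stacked matrix in (ii') recovers the state exactly in the noiseless case:

```python
import numpy as np

# Hypothetical system: diagonal A_c with unstable eigenvalues, scalar observations.
lam = np.array([0.5, 0.3])          # eigenvalues of A_c (diagonal for simplicity)
C = np.array([[1.0, 1.0]])          # 1 x 2 observation matrix
I_s = 1.0                           # sampling interval I
ks = [0, 1, 2]                      # lags k_1 < k_2 < k_3 with beta[n - k_i] = 1
ts = [0.2, 0.7, 0.4]                # sampling jitters t_{n-k_i} in [0, T]

# Stack the rows C e^{-(k I + t) A_c}; for diagonal A_c the exponential is elementwise.
O = np.vstack([C * np.exp(-(k * I_s + t) * lam) for k, t in zip(ks, ts)])

# Any left inverse M with M O = I plays the role of M_n; here we take the pseudo-inverse.
M = np.linalg.pinv(O)

x = np.array([2.0, -1.0])           # true state x_c(nI)
y = O @ x                           # noiseless observations y[n - k_i]
x_hat = M @ y                       # estimate, as in the left-inverse construction
print(np.allclose(M @ O, np.eye(2)))   # True: M is a left inverse
print(np.allclose(x_hat, x))           # True: exact recovery without noise
```

With observation noise added to $y$, the error is $\mathbf{M}\mathbf{v'}$, which is why the analysis below only needs the bound on $|\mathbf{M_n}|_{max}$ and the noise variance.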
$\mathbf{\bullet}$ Analysis of the estimation error: Now, we will analyze the performance of the proposed estimator. Recall that $p'$ is defined in \eqref{eqn:pprimedef2} and $\delta > 1$. By (iv') and well-known properties of polynomial and exponential functions, we can find $c > 0$ that satisfies the following three conditions:\\ (i'') $(1+k^{p'}) \leq c \cdot \delta^k$ for all $k \geq 0$\\ (ii'') $p(k) \leq c \cdot \delta^k$ for all $k \geq 0$\\ (iii'') $\sup_{n \in \mathbb{N}}\mathbb{P}\{ S(n) =s \} \leq c \cdot (\delta \cdot p_e)^s$ for all $s \in \mathbb{Z}^+$
Let $\mathcal{F}_{\beta}$ be the $\sigma$-field generated by $\beta[n]$ and $t_i$. Then, $k_i$, $S(n)$, and $t_i$ are deterministic variables conditioned on $\mathcal{F}_{\beta}$. The estimation error is upper bounded by \begin{align}
\sup_{n}\mathbb{E}[|\mathbf{x}(nI)-\mathbf{\widehat{x}}(nI)|_2^2]&=\sup_{n}
\mathbb{E}[\mathbb{E}[|\mathbf{x}(nI)-\mathbf{\widehat{x}}(nI)|_2^2| \mathcal{F}_{\beta}]]\nonumber \\ &\overset{(A)}{=}\sup_{n}
\mathbb{E}[\mathbb{E}[\left|\mathbf{x}(nI)-\mathbf{M_n}(\begin{bmatrix} \mathbf{C}e^{-\mathbf{A_c}(k_1 I + t_{n-k_1})} \\ \mathbf{C}e^{-\mathbf{A_c}(k_2 I + t_{n-k_2})} \\ \vdots \\ \mathbf{C}e^{-\mathbf{A_c}(k_{m'} I + t_{n-k_{m'}})} \end{bmatrix}\mathbf{x}(nI) + \begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{m'}] \end{bmatrix})\right|_2^2|\mathcal{F}_{\beta}]]\nonumber \\
&\overset{(B)}{=}\sup_{n}\mathbb{E}[\mathbb{E}[\left|\mathbf{M_n}\begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{m'}] \end{bmatrix}\right|_2^2|\mathcal{F}_{\beta}]] \nonumber\\ &\lesssim
\sup_{n}\mathbb{E}[|\mathbf{M_n}|_{max}^2 \cdot \mathbb{E}[\left|\begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{m'}] \end{bmatrix}\right|_{max}^2|\mathcal{F}_{\beta}]] \nonumber\\ &\overset{(C)}{\lesssim}
\sup_{n}\mathbb{E}[|\mathbf{M_n}|_{max}^2 \cdot (1+S(n)^{p'})^2 ] \nonumber \\ &\overset{(D)}{\leq} \sup_{n}\mathbb{E}[\left( \frac{p(S(n))}{\epsilon} e^{\lambda_1 I \cdot S(n)} \right)^2 \cdot (1+S(n)^{p'})^2 ] \nonumber\\ &\overset{(E)}{\lesssim} \sup_{n}\mathbb{E}[\delta^{2S(n)}\cdot e^{2\lambda_1 I \cdot S(n)} \cdot \delta^{2S(n)}] \nonumber\\ &\overset{(F)}{\lesssim} \sum^{\infty}_{s=0} \delta^{4s}\cdot e^{2 \lambda_1 I \cdot s} \cdot (\delta \cdot p_e)^s \nonumber\\ &\overset{(G)}{=} \sum^{\infty}_{s=0} (\delta^5 \cdot e^{2 \lambda_1 I} \cdot p_e)^s \nonumber \\ &< \infty \nonumber \end{align} where $\lesssim$ holds for all $n$.\\ (A): By \eqref{eqn:nonuniform:1} and \eqref{eqn:nonuniform:2}.\\ (B): By condition (ii').\\ (C): Since $\mathbb{E}[\mathbf{v'}[n-k]^\dag \mathbf{v'}[n-k]] \lesssim 1+k^{p'}$ by definition.\\ (D): By condition (iii').\\ (E): By condition (i'') and (ii'').\\ (F): By condition (iii'').\\ (G): Since we choose $\delta$ so that $\delta^5 p_e \cdot e^{2 \lambda_1 I} < 1$.
Therefore, the estimation error is uniformly bounded over $t\in \mathbb{R}^+$ when $p_e < \frac{1}{e^{2 \lambda_1 I}}$, which finishes the proof.
\subsection{Necessity Proof of Theorem~\ref{thm:nonuniform}} \label{sec:cont:nec}
The necessity proof divides into two parts. First, we prove that if $p_e \geq \frac{1}{|e^{2 \lambda_{max} I }|}$, then the system is not intermittent observable. Second, we prove that if $(\mathbf{A_c},\mathbf{C})$ has unobservable and unstable eigenvalues --- i.e. $\exists \lambda \in \mathbb{C}^+$ such that $\begin{bmatrix} \lambda \mathbf{I} - \mathbf{A_c} \\ \mathbf{C}\end{bmatrix}$ is rank deficient --- then the system is not intermittent observable.
$\bullet$ When $p_e \geq \frac{1}{|e^{2 \lambda_{max} I }|}$: Intuitively speaking, we will give all states except the one corresponding to the maximum eigenvalue as side-information. Thus, we will reduce the problem to the scalar system discussed in Section~\ref{sec:intui}.
Formally, let $\mathbf{\Sigma_{t|t}}:=\mathbb{E}[(\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)| \mathbf{y}^{\lfloor \frac{t}{I} \rfloor} ])(\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)| \mathbf{y}^{\lfloor \frac{t}{I} \rfloor} ])^\dag | \mathcal{F}_{\beta}]$ where $\mathcal{F}_{\beta}$ is the $\sigma$-field generated by $\beta[n]$ and $t_i$. Notice that $\mathbf{\Sigma_{t|t}}$ is a random variable.
It is known that when $(\mathbf{A_c},\mathbf{B_c})$ is controllable, the estimation error of $\mathbf{x_c}(t)$ even based on all the causally available information $\mathbf{y_c}(0:t)$ is positive definite when $t$ is large enough. Therefore, there exists $t' > 0$ and $\sigma^2 > 0$ such that for all $t \geq t'$,
$\mathbf{\Sigma_{t|t}} \succeq \sigma^2 \mathbf{I}$ with probability one. Let $\mathbf{e}$ be a right eigenvector of $\mathbf{A_c}$ associated with the eigenvalue $\lambda_{max}$, i.e. $\mathbf{A_c}\mathbf{e}=\lambda_{max} \mathbf{e}$. Then, we can find $\sigma'^2 > 0$ such that for all $t \geq t'$, $\mathbf{\Sigma_{t|t}} \succeq \sigma'^2 \mathbf{e}\mathbf{e}^\dag$ with probability one.
Define the stopping time $S_n' := \inf\{ k \in \mathbb{Z}^+| \beta[n-k]=1 \}$ as the time until the most recent observation.
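Since $\beta[n]$ is an i.i.d.~Bernoulli process with $\mathbb{P}\{\beta[n]=1\}=1-p_e$, this stopping time is geometrically distributed (ignoring the boundary effect at $n=0$): \begin{align} \mathbb{P}\{S_n'=s\}=(1-p_e)p_e^{s}, \quad s \in \mathbb{Z}^+, \nonumber \end{align} which is where the weights $(1-p_e)p_e^s$ in the lower bound below come from.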
The observations between discrete times $n-S_n'+1$ and $n$ are all erased. This implies that the observations received up to discrete time $n$ are independent of $\mathbf{y_c}((n-S_n')I:nI)$.
Thus, conditioned on $(n-S_n')I \geq t'$, $\mathbf{\Sigma_{nI|nI}}$ is lower bounded as follows with probability one.\footnote{The lower bound does not hold for $\Re (\lambda) =0$ which induces $p_e=1$. However, in this case we do not have any observation, so trivially the system is unstable.} \begin{align}
\mathbb{E}[\mathbf{\Sigma_{nI|nI}}| S_n', (n-S_n')I \geq t'] & \succeq (e^{\mathbf{A_c}(S_n' I)}) \mathbf{\Sigma_{(n-S_n')I|(n-S_n')I}} (e^{\mathbf{A_c}(S_n' I)})^{\dag}\\ & \succeq \sigma'^2 (e^{\mathbf{A_c}(S_n' I)}) \mathbf{e}\mathbf{e}^{\dag} (e^{\mathbf{A_c}(S_n' I)})^{\dag} \\
& \succeq \sigma'^2 |e^{2\lambda_{max}I}|^{S_n'} \mathbf{e} \mathbf{e}^\dag \end{align} Here we use the fact that when $\mathbf{e}$ is an eigenvector of $\mathbf{A_c}$ associated with an eigenvalue $\lambda_{max}$, $\mathbf{e}$ is also an eigenvector of $e^{\mathbf{A_c}t}$ associated with the eigenvalue $e^{\lambda_{max}t}$ for all $t$.
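This eigenvector property follows directly from the power series of the matrix exponential: \begin{align} e^{\mathbf{A_c}t}\mathbf{e}=\sum_{k \geq 0}\frac{t^k}{k!}\mathbf{A_c}^k\mathbf{e}=\sum_{k \geq 0}\frac{(\lambda_{max}t)^k}{k!}\mathbf{e}=e^{\lambda_{max}t}\mathbf{e}. \nonumber \end{align}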
Since $p_e \geq \frac{1}{|e^{2 \lambda_{max}I}|}$, the average estimation error is lower bounded as follows: \begin{align}
&\mathbb{E}[(\mathbf{x_c}(nI)-\mathbb{E}[ \mathbf{x_c}(nI) | \mathbf{y}^n ])^\dag (\mathbf{x_c}(nI)-\mathbb{E}[ \mathbf{x_c}(nI) | \mathbf{y}^n ])]\\
&\geq \mathbb{E}[ \sigma'^2 |e^{2\lambda_{max}I}|^{S_n'} |\mathbf{e}|^2 \cdot \mathbf{1}( (n-S_n')I \geq t') ]\\
&\geq \sigma'^2 |\mathbf{e}|^2 \cdot \sum_{0 \leq s \leq \lfloor n - \frac{t'}{I} \rfloor} |e^{2 \lambda_{max} I}|^s \cdot (1-p_e)p_e^s \\
&\geq \sigma'^2 |\mathbf{e}|^2 \cdot (1-p_e) \cdot (\lfloor n - \frac{t'}{I} \rfloor + 1) \end{align} where the last inequality holds since $|e^{2 \lambda_{max} I}|^s p_e^s \geq 1$ for all $s \geq 0$. Thus, the estimation error goes to infinity as $n \rightarrow \infty$, so the system is not intermittently observable.
$\bullet$ When $(\mathbf{A_c},\mathbf{C})$ has unobservable and unstable eigenvalues: Now, we prove that if $(\mathbf{A_c},\mathbf{C})$ has unobservable and unstable eigenvalues, the system is not intermittent observable. This claim may seem trivial, but the original continuous-time system $(\mathbf{A_c},\mathbf{C_c})$ can still be observable while the sampled system $(\mathbf{A_c},\mathbf{C})$ is not. Thus, it still requires justification.
Let $\lambda \in \mathbb{C}^+$ be the unobservable and unstable eigenvalue. Then, $\begin{bmatrix} \lambda \mathbf{I}- \mathbf{A_c} \\ \mathbf{C} \end{bmatrix}$ is rank deficient, and we can find $\mathbf{i} \neq \mathbf{0}$ such that $\begin{bmatrix} \lambda \mathbf{I}-\mathbf{A_c} \\ \mathbf{C} \end{bmatrix} \mathbf{i} = \mathbf{0}$. Then, $\mathbf{i}$ satisfies $\mathbf{C}\mathbf{i}=\mathbf{0}$ and $\mathbf{A_c}\mathbf{i}=\lambda \mathbf{i}$, and we can notice that $\mathbf{C}e^{\mathbf{A_c}t}\mathbf{i}=e^{\lambda t}\mathbf{C} \mathbf{i}= \mathbf{0}$. We will prove that the uncertainty in the direction $\mathbf{i}$ is not observable from any observations.
By the controllability of $(\mathbf{A_c},\mathbf{B_c})$, as above there exists $t'$ such that for all $t \geq t'$, $\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)|\mathbf{y_c}(0:t)]$ has a positive definite covariance matrix. Therefore, we can write $\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)|\mathbf{y_c}(0:t)]=\mathbf{i} \cdot x_c'(t)+ \mathbf{x_c''}(t)$ where $x_c'(t)$, $\mathbf{x_c''}(t)$ and $\mathbf{y_c}(0:t)$ are independent and $\mathbb{E}[|x_c'(t)|^2]\geq \sigma''^2$ for some $\sigma''^2>0$ and all $t \geq t'$.
Then, we will prove that the sampled observations are independent from $x_c'(t)$. By \eqref{eqn:non:0} and \eqref{eqn:non:4}, for all $\tau \leq (n-1)I-t_n$ we have \begin{align} \mathbf{y_o}[n] &= \mathbf{C}(e^{\mathbf{A_c}(nI-t_n-\tau)}(\mathbf{x_c}(\tau)+\int^{nI-t_n}_{\tau} e^{\mathbf{A_c}(\tau - t')}\mathbf{B_c} d \mathbf{W_c}(t'))) \\ &-\int^{nI-t_n}_{(n-1)I-t_n} \int^{nI-t_n}_{t} \mathbf{C_c} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t)\\
&= \mathbf{C}(e^{\mathbf{A_c}(nI-t_n-\tau)}(\mathbf{i} \cdot x_c'(\tau) + \mathbf{x_c''}(\tau) + \mathbb{E}[\mathbf{x_c}(\tau)|\mathbf{y_c}(0:\tau)] +\int^{nI-t_n}_{\tau} e^{\mathbf{A_c}(\tau - t')}\mathbf{B_c} d \mathbf{W_c}(t'))) \\ &-\int^{nI-t_n}_{(n-1)I-t_n} \int^{nI-t_n}_{t} \mathbf{C_c} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t)\\
&= \mathbf{C}(e^{\mathbf{A_c}(nI-t_n-\tau)}( \mathbf{x_c''}(\tau) + \mathbb{E}[\mathbf{x_c}(\tau)|\mathbf{y_c}(0:\tau)] +\int^{nI-t_n}_{\tau} e^{\mathbf{A_c}(\tau - t')}\mathbf{B_c} d \mathbf{W_c}(t'))) \\ &-\int^{nI-t_n}_{(n-1)I-t_n} \int^{nI-t_n}_{t} \mathbf{C_c} e^{\mathbf{A_c}(t-t')} \mathbf{B_c} d\mathbf{W_c}(t')dt + \int^{nI-t_n}_{(n-1)I-t_n} \mathbf{D_c}d\mathbf{V_c}(t) \end{align} where the last equality comes from $\mathbf{C}e^{\mathbf{A_c}t}\mathbf{i}=0$. Moreover, by causality and the definitions, the last expression is independent of $x_c'(\tau)$.
Now, we will prove that the uncertainty $x_c'(\tau)$ can be arbitrarily amplified. Since $t_{i}$ are uniform random variables on $[0,T]$, the event that $(n-1)I-t_n \leq (n+n'-1)I - t_{n+n'}$ for all $n' \in \mathbb{N}$ has positive probability. Denote such an event as $E$. Then, by choosing $n$ large enough so that $(n-1)I-t_n \geq t'$, we have the following lower bound on the estimation error for all $t \geq (n-1)I-t_n$: \begin{align}
&\mathbb{E}[|\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)|\mathbf{y}^{\lfloor \frac{t}{I} \rfloor}]|^2] \\ &\geq \mathbb{E}[
|\mathbf{x_c}(t)-\mathbb{E}[\mathbf{x_c}(t)|\mathbf{y}^{\lfloor \frac{t}{I} \rfloor}]|^2
| E] \mathbb{P}(E) \\
&\overset{(a)}{\geq} \mathbb{E}[ |e^{\mathbf{A_c}(t-((n-1)I-t_n))}\mathbf{i} \cdot x_c'((n-1)I-t_n)|^2 |E] \mathbb{P}(E) \\
&=|e^{\lambda(t-((n-1)I-T))} \cdot \mathbf{i}|^2 \sigma''^2 \cdot \mathbb{P}(E) \label{cont:nec:lowerbound} \end{align} (a): By \eqref{eqn:non:0}, $\mathbf{x_c}(t)= e^{\mathbf{A_c}(t-((n-1)I-t_n))}\mathbf{x_c}((n-1)I-t_n)+ \int^t_{(n-1)I-t_n} e^{\mathbf{A_c}((n-1)I-t_n -t')} \mathbf{B_c} d \mathbf{W_c}(t') $. Moreover, $x'_c((n-1)I-t_n)$ is independent from $\mathbf{y_c}(0:(n-1)I-t_n)$ and $y_o[n],y_o[n+1], \cdots$.
Since we can choose $t$ arbitrarily large, this finishes the proof for $\Re (\lambda) > 0$. For the case of $\Re (\lambda) = 0$, we can bound \eqref{cont:nec:lowerbound} more carefully and justify that independent estimation errors accumulate in the direction of $\mathbf{i}$. We omit the proof here since the argument is essentially equivalent to that of the well-known fact that an eigenvalue with zero real part is unstable in continuous-time systems.
\subsection{Sufficiency Proof of Theorem~\ref{thm:mainsingle} (Discrete-Time Systems)} \label{sec:dis:suff}
We will prove that if $p_e < \frac{1}{\underset{1 \leq i \leq \mu}{\max} |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}}$ then the system is intermittent observable.
$\bullet$ Reduction to a Jordan form matrix $\mathbf{A}$: As in Section~\ref{sec:cont:suf}, we will restrict attention to system equations \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} with the following properties, and justify that such a restriction is without loss of generality and does not change intermittent observability.\\ (a) The system matrix $\mathbf{A}$ is a Jordan form matrix.\\ (b) All eigenvalues of $\mathbf{A}$ are unstable, i.e. the magnitudes of all eigenvalues are greater than or equal to $1$.\\ (c) \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} can be extended to two-sided processes.
The restriction (a) can be justified by a similarity transform~\cite{Chen}. It is known~\cite{Chen} that for any square matrix $\mathbf{A}$, there exists an invertible matrix $\mathbf{U}$ and an upper-triangular Jordan matrix $\mathbf{A'}$ such that $\mathbf{A}=\mathbf{U}\mathbf{A'}\mathbf{U}^{-1}$. Then, the system equations \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} can be rewritten as: \begin{align} &\mathbf{U}^{-1}\mathbf{x}[n+1]=\mathbf{A'}\mathbf{U}^{-1}\mathbf{x}[n]+\mathbf{U}^{-1}\mathbf{B}\mathbf{w}[n] \nonumber \\ &\mathbf{y}[n]=\beta[n](\mathbf{C}\mathbf{U}\mathbf{U}^{-1}\mathbf{x}[n]+\mathbf{v}[n]). \nonumber \end{align} Thus, by denoting $\mathbf{x'}[n]:=\mathbf{U}^{-1}\mathbf{x}[n]$, $\mathbf{B'}:=\mathbf{U}^{-1}\mathbf{B}$, and $\mathbf{C'}:=\mathbf{C}\mathbf{U}$, we get \begin{align} &\mathbf{x'}[n+1]=\mathbf{A'}\mathbf{x'}[n]+\mathbf{B'}\mathbf{w}[n] \nonumber \\ &\mathbf{y}[n]=\beta[n](\mathbf{C'}\mathbf{x'}[n]+\mathbf{v}[n]). \nonumber \end{align}
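The coordinate change above can be checked numerically. The following sketch (with an arbitrary invertible $\mathbf{U}$ and a $2 \times 2$ Jordan block $\mathbf{A'}$ chosen purely for illustration) confirms that $\mathbf{x'}[n]=\mathbf{U}^{-1}\mathbf{x}[n]$ follows the transformed dynamics:

```python
import numpy as np

# Hypothetical example: A' is a 2x2 Jordan block, U an arbitrary invertible matrix.
A_j = np.array([[2.0, 1.0],
                [0.0, 2.0]])        # Jordan form A'
U = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # invertible similarity transform
A = U @ A_j @ np.linalg.inv(U)      # A = U A' U^{-1}
B = np.array([[1.0], [1.0]])
Bp = np.linalg.inv(U) @ B           # B' = U^{-1} B

x = np.array([[1.0], [0.5]])        # state x[n]
w = np.array([[0.3]])               # a fixed disturbance sample w[n]

x_next = A @ x + B @ w              # original dynamics x[n+1] = A x[n] + B w[n]
xp = np.linalg.inv(U) @ x           # transformed state x'[n] = U^{-1} x[n]
xp_next = A_j @ xp + Bp @ w         # transformed dynamics x'[n+1] = A' x'[n] + B' w[n]

# The two coordinate systems agree: U^{-1} x[n+1] equals x'[n+1].
print(np.allclose(np.linalg.inv(U) @ x_next, xp_next))   # True
```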
Since $\mathbf{U}$ is invertible, $(\mathbf{A},\mathbf{C})$ has an unobservable eigenvalue $\lambda$ if and only if $(\mathbf{A'},\mathbf{C'})$ has an unobservable eigenvalue $\lambda$. Moreover, since $\mathbf{x'}[n]=\mathbf{U}^{-1}\mathbf{x}[n]$, the original intermittent system is intermittent observable if and only if the new intermittent system is intermittent observable. Thus, without loss of generality, we can assume that $\mathbf{A}$ is given in a Jordan form, which justifies (a).
Once $\mathbf{A}$ is given in Jordan form, there is a natural correspondence between the eigenvalues and the states. If there is a stable eigenvalue --- i.e. the magnitude of the eigenvalue is less than $1$ ---, the variance of the corresponding state is uniformly bounded. Thus, we do not have to estimate that particular state to make the estimation error finite. In the observation $\mathbf{y}[n]$, the stable states can be considered as a part of observation noise $\mathbf{v}[n]$, and the variance of $\mathbf{v}[n]$ is still uniformly bounded (even if $\mathbf{v}[n]$ can be correlated). Therefore, we can assume (b) without loss of generality.
To justify restriction (c), rewrite \eqref{eqn:dis:system} as \begin{align} \mathbf{x}[n+1]=\mathbf{A}\mathbf{x}[n]+\mathbf{I}\mathbf{w'}[n] \nonumber \end{align} where $\mathbf{w'}[n]=\mathbf{B}\mathbf{w}[n]$ for $n \geq 0$. Let $\mathbf{w'}[-1]=\mathbf{x}[0]$, $\mathbf{w'}[n]=\mathbf{0}$ for $n < -1 $, and $\mathbf{v}[n]=\mathbf{0}$ for $n < 0$. We also extend $\beta[n]$ to a two-sided Bernoulli process with probability $1-p_e$. Then, the resulting two-sided processes $\mathbf{x}[n]$ and $\mathbf{y}[n]$ are identical to the original one-sided processes except that $\mathbf{x}[n]=\mathbf{0}$ and $\mathbf{y}[n]=\mathbf{0}$ for $n \in \mathbb{Z}^{--}$.
In summary, without loss of generality we can assume that $\mathbf{A}$ is in a Jordan form, all eigenvalues of $\mathbf{A}$ are unstable, and \eqref{eqn:dis:system} and \eqref{eqn:dis:system2} are two-sided processes. Therefore, we can assume that $\mathbf{A} \in \mathbb{C}^{m \times m}$ and $\mathbf{C} \in \mathbb{C}^{l \times m}$ are given as \begin{align} &\mathbf{A}=diag\{ \mathbf{A_{1,1}}, \mathbf{A_{1,2}}, \cdots, \mathbf{A_{1,\nu_1}}, \cdots, \mathbf{A_{\mu,1}}, \cdots, \mathbf{A_{\mu,\nu_\mu}}\} \nonumber \\ &\mathbf{C}=\begin{bmatrix} \mathbf{C_{1,1}} & \mathbf{C_{1,2}} & \cdots & \mathbf{C_{1,\nu_1}} & \cdots & \mathbf{C_{\mu,1}} & \cdots & \mathbf{C_{\mu,\nu_\mu}} \end{bmatrix} \nonumber \\ &\mbox{where} \nonumber \\ &\quad \mbox{$\mathbf{A_{i,j}}$ is a Jordan block with an eigenvalue $\lambda_{i,j}$ and size $m_{i,j}$} \nonumber \\ &\quad m_{i,1} \geq m_{i,2} \geq \cdots \geq m_{i,\nu_i} \mbox{ for all }i=1,\cdots,\mu \nonumber \\
&\quad |\lambda_{1,1}| \geq |\lambda_{2,1}| \geq \cdots \geq |\lambda_{\mu,1}| \geq 1 \nonumber \\ &\quad \{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \} \mbox{ is a cycle with length $\nu_i$ and period $p_i$}\nonumber \\ &\quad \mbox{For $i \neq i'$, $\{\lambda_{i,j},\lambda_{i',j'} \}$ is not a cycle} \nonumber \\ &\quad \mbox{$\mathbf{C_{i,j}}$ is a $l \times m_{i,j}$ complex matrix}.\label{eqn:ac:jordan} \end{align} Here, $\mathbf{A_{i,1}},\cdots, \mathbf{A_{i,\nu_i}}$ are the Jordan blocks corresponding to the same eigenvalue cycle. The Jordan blocks are sorted in descending order by the magnitude of the eigenvalues. Such a permutation of Jordan blocks can be justified since Jordan forms are block diagonal matrices.
Like \eqref{eqn:ac2:jordan:thm}, \eqref{eqn:def:lprime:thm}, we also define $\mathbf{A_i}$, $\mathbf{C_i}$, and $l_i$ as follows. \begin{align} &\mathbf{A_i}=diag\{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \}\nonumber \\ &\mathbf{C_i}=\begin{bmatrix} \left(\mathbf{C_{i,1}}\right)_1 & \cdots & \left(\mathbf{C_{i,\nu_i}}\right)_1 \end{bmatrix} \nonumber\\ &\mbox{where $\left(\mathbf{C_{i,j}}\right)_1$ is the first column of $\mathbf{C_{i,j}}$.} \label{eqn:ac2:jordan} \end{align}
$l_i$ is the minimum cardinality among the sets $S' \subseteq \{ 0,1,\cdots,p_i-1 \}$ whose resulting $S:=\{ 0,1,\cdots, p_i-1 \} \setminus S'=\{s_1,s_2,\cdots,s_{|S|} \}$ makes \begin{align} \begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{s_1}\\ \mathbf{C_i}\mathbf{A_i}^{s_2}\\ \vdots \\
\mathbf{C_i}\mathbf{A_i}^{s_{|S|}} \end{bmatrix} \label{eqn:def:lprime} \end{align} be rank deficient, i.e. the rank of the matrix~\eqref{eqn:def:lprime} is strictly less than $\nu_i$.
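The definition of $l_i$ lends itself to brute-force computation for small cycles. In the sketch below, the 2-cycle $\{2,-2\}$ and the matrix $\mathbf{C_i}$ are hypothetical examples (not taken from this paper), and the helper function simply enumerates erasure patterns $S'$ in order of increasing cardinality:

```python
import numpy as np
from itertools import combinations

def min_erasures_for_rank_deficiency(A_i, C_i, p_i):
    """Smallest |S'| such that removing the rows C_i A_i^s, s in S', from the
    stack over {0, ..., p_i - 1} leaves a matrix of rank strictly less than nu_i."""
    nu_i = A_i.shape[0]
    for size in range(p_i + 1):
        for S_prime in combinations(range(p_i), size):
            S = [s for s in range(p_i) if s not in S_prime]
            if S:
                rows = np.vstack([C_i @ np.linalg.matrix_power(A_i, s) for s in S])
            else:
                rows = np.zeros((1, nu_i))   # empty stack is trivially rank deficient
            if np.linalg.matrix_rank(rows) < nu_i:
                return size
    return p_i

# Hypothetical 2-cycle {2, -2} (period p_i = 2) observed through C_i = [1 1].
A_i = np.diag([2.0, -2.0])
C_i = np.array([[1.0, 1.0]])
print(min_erasures_for_rank_deficiency(A_i, C_i, p_i=2))   # 1
```

Here erasing either one of the two cyclic observations already destroys the rank, so $l_i = 1$ for this example.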
Moreover, in \eqref{eqn:dis:systemconst1}, we already assumed that there exists a finite $\sigma > 0$ such that \begin{align} &\sup_{n \in \mathbb{Z}} \mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \preceq \sigma^2 \mathbf{I} \nonumber \\ &\sup_{n \in \mathbb{Z}} \mathbb{E}[\mathbf{v}[n]\mathbf{v}[n]^\dag] \preceq \sigma^2 \mathbf{I}. \label{eqn:dis:suf:1} \end{align}
$\bullet$ Uniform boundedness of observation noise: To prove intermittent observability, we will propose a suboptimal maximum likelihood estimator, and analyze it. We first have to upper bound the disturbances and observation noises in the system. Following the same steps as \eqref{eqn:intui:3}, we can derive \begin{align} \mathbf{y}[n-k]=\mathbf{C}\mathbf{A}^{-k}\mathbf{x}[n]-\underbrace{(\mathbf{C}\mathbf{A}^{-1}\mathbf{w}[n-k]+\cdots+\mathbf{C}\mathbf{A}^{-k}\mathbf{w}[n-1]-\mathbf{v}[n-k])}_{\mathbf{v'}[n-k]}. \label{eqn:dis:suf:11} \end{align} The invertibility of $\mathbf{A}$ comes from assumption (b). Moreover, since all eigenvalues of $\mathbf{A}$ are unstable, by \eqref{eqn:dis:suf:1} we can find $p' \in \mathbb{N}$ such that \begin{align} \mathbb{E}[\mathbf{v'}[n-k]^\dag \mathbf{v'}[n-k]] \lesssim 1 + k^{p'} \label{eqn:pprimedef} \end{align} where $\lesssim$ holds for all $n, k (k \leq n)$.
$\bullet$ Suboptimal Maximum Likelihood Estimator: Now, we will give the suboptimal estimator for the state which only uses a finite number of recent observations. We first need the following lemma which plays a parallel role to Lemma~\ref{lem:conti:mo}. \begin{lemma} Let $\mathbf{A}$ and $\mathbf{C}$ be given as in \eqref{eqn:ac:jordan}, \eqref{eqn:ac2:jordan} and \eqref{eqn:def:lprime}, and $\beta[n]$ be a Bernoulli process with probability $1-p_e$. Then, we can find $m_1',\cdots,m_\mu' \in \mathbb{N}$, polynomials $p_1(k),\cdots,p_\mu(k)$ and families of stopping times $\{ S_1(\epsilon,k): k \in \mathbb{Z}^+ , 0 < \epsilon < 1 \},\cdots,\{ S_\mu(\epsilon,k): k \in \mathbb{Z}^+ , 0 < \epsilon < 1 \}$ such that for all $k \in \mathbb{Z}^+$ and $0 < \epsilon < 1$ there exist $k \leq k_1 < \cdots < k_{m_1'} \leq S_1(\epsilon,k) < k_{m_1'+1}< \cdots < k_{\sum_{1 \leq i \leq \mu }m_i' } \leq S_{\mu}(\epsilon,k) $ and a $m \times (\sum_{1 \leq i \leq \mu } m_i')l$ matrix $\mathbf{M}$ satisfying the following conditions:\\ (i) $\beta[k_i]=1$ for $1 \leq i \leq \sum_{1 \leq i \leq \mu} m_i'$\\ (ii) $\mathbf{M} \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1}\\ \mathbf{C} \mathbf{A}^{-k_2}\\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_{\sum_{1 \leq i \leq \mu}m_i'}}\\ \end{bmatrix}= \mathbf{I}_{m \times m} $\\ (iii) $
\left| \mathbf{M} \right|_{max} \leq \max_{1 \leq i \leq \mu} \left\{
\frac{p_i(S_i(\epsilon,k))}{\epsilon} |\lambda_{i,1}|^{S_i(\epsilon,k)} \right\} $ \\ (iv) $ \lim_{\epsilon \downarrow 0} \exp \left(\limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+}\frac{1}{s} \log \mathbb{P} \{ S_i(\epsilon,k)-k=s \} \right) \leq \max_{1 \leq j \leq i} \left\{ p_e^{\frac{l_j}{p_j}} \right\} $ for $1 \leq i \leq \mu$\\ (v) $ \lim_{\epsilon \downarrow 0} \exp \left(\limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{
S_a(\epsilon,k)-S_b(\epsilon,k)=s| \mathcal{F}_{S_b} \} \right) \leq \max_{b < i \leq a} \left\{ p_e^{\frac{l_i}{p_i}} \right\} $ for $1 \leq b < a \leq \mu$ where $\mathcal{F}_{S_i}$ is the $\sigma$-field generated by $S_i(\epsilon,k)$. \label{lem:dis:achv} \end{lemma} \begin{proof} See Appendix~\ref{sec:app:cycleproof}. \end{proof}
Since $p_e < \frac{1}{\underset{1 \leq i \leq \mu}{\max} |\lambda_{i,1}|^{2\frac{p_i}{l_i}}}$, there exists $\delta > 1$ such that $\delta^5 \cdot \underset{1 \leq i \leq \mu}{\max} p_e^{\frac{l_i}{p_i}} |\lambda_{i,1}|^2 < 1$. By Lemma~\ref{lem:dis:achv}, we can find $m_1',\cdots,m_{\mu}' \in \mathbb{N}$, $0<\epsilon<1$, polynomials $p_1(k),\cdots,p_{\mu}(k)$, and a family of stopping times $\{(S_1(n),\cdots, S_\mu(n)):n \in \mathbb{Z}^+ \}$ such that $\forall n$ there exist $0 \leq k_1 < \cdots < k_{m_1'} \leq S_1(n) < k_{m_1'+1}< \cdots < k_{\sum_{1 \leq i \leq \mu }m_i' } \leq S_{\mu}(n)$ and a $m \times (\sum_{1 \leq i \leq \mu } m_i')l$ matrix $\mathbf{M_n}$ satisfying the following conditions:\\ (i') $\beta[n-k_i]=1$ for $1 \leq i \leq \sum_{1 \leq i \leq \mu}m_i'$\\ (ii') $\mathbf{M_n} \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1}\\ \mathbf{C} \mathbf{A}^{-k_2}\\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_{\sum_{1 \leq i \leq \mu}m_i'}}\\ \end{bmatrix}= \mathbf{I}_{m \times m}$\\ (iii') $
\left| \mathbf{M_n} \right|_{max} \leq \max_{1 \leq i \leq \mu} \left\{
\frac{p_i(S_i(n))}{\epsilon} |\lambda_{i,1}|^{S_i(n)} \right\} $\\ (iv') $ \exp \left( \limsup_{s \rightarrow \infty} \frac{1}{s} \log \mathbb{P} \{ S_i(n)=s \} \right) \leq \sqrt{\delta} \cdot \max_{1 \leq j \leq i} \left\{ p_e^{\frac{l_j}{p_j}} \right\} $ for $1 \leq i \leq \mu$\\ (v') $ \exp \left( \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{
S_a(n)-S_b(n)=s| \mathcal{F}_{S_b} \} \right) \leq \sqrt{\delta} \cdot \max_{b < i \leq a} \left\{ p_e^{\frac{l_i}{p_i}} \right\} $ for $1 \leq b < a \leq \mu$ where $\mathcal{F}_{S_i}$ is the $\sigma$-field generated by $\beta[n-S_i(n)],\beta[n-S_i(n)+1],\cdots, \beta[n]$.
The proposed suboptimal maximum likelihood estimator for $\mathbf{x}[n]$ is then: \begin{align} \mathbf{\widehat{x}}[n]=\mathbf{M_n}\begin{bmatrix} \mathbf{y}[n-k_1] \\ \mathbf{y}[n-k_2] \\ \vdots \\ \mathbf{y}[n-k_{\sum_{1 \leq i \leq \mu} m_i'}] \end{bmatrix} \label{eqn:dis:suf:12} \end{align} Here, $k_i$ also depends on $n$, but we omit this dependency in the notation for simplicity. Notice that the number of observations that this estimator uses, $k_{\sum_{1 \leq i \leq \mu} m_i'}$, can be much larger than the dimension of the system, $m$. In other words, the proposed estimator may use many more observations than there are states (the number of observations that a simple matrix-inverse observer needs). This is because we use a successive decoding idea in the proof of Lemma~\ref{lem:conti:mo}.
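Although \eqref{eqn:dis:suf:12} is stated abstractly, its mechanics are easy to check numerically. The following Python sketch (a toy $2$-dimensional system with illustrative matrices and delays, not the paper's construction) builds a left inverse $\mathbf{M}$ of the stacked $\mathbf{C}\mathbf{A}^{-k_i}$ matrix, as in condition (ii'), and confirms that the estimator recovers $\mathbf{x}[n]$ exactly when the noise is zero:

```python
import numpy as np

# Illustrative toy instance (not the paper's system): 2-dimensional state,
# scalar output, observations available at delays k = 0, 1, 2.
A = np.array([[1.2, 1.0],
              [0.0, 1.2]])
C = np.array([[1.0, 0.0]])

ks = [0, 1, 2]
Ainv = np.linalg.inv(A)
O = np.vstack([C @ np.linalg.matrix_power(Ainv, k) for k in ks])

# Any left inverse of O plays the role of M_n in condition (ii'): M @ O = I.
M = np.linalg.pinv(O)
assert np.allclose(M @ O, np.eye(2))

# In the noiseless case y[n-k] = C A^{-k} x[n], the estimator recovers x[n].
x = np.array([3.0, -1.0])
x_hat = M @ (O @ x)
assert np.allclose(x_hat, x)
print("x[n] recovered exactly in the noiseless case")
```

With noise present, the estimation error is controlled by $|\mathbf{M_n}|_{max}$, which is exactly what conditions (iii')--(v') bound.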
$\bullet$ Analysis of the estimation error: Now, we will analyze the performance of the proposed estimator. Recall that $p'$ is defined in \eqref{eqn:pprimedef} and $\delta > 1$. By (iv'), (v'), and the fact that polynomials grow slower than the exponential $\delta^k$, we can find $c>0$ that satisfies the following four conditions:\\ (i'') $(1+k^{p'}) \leq c \cdot \delta^k$ for all $k \geq 0$\\ (ii'') $p_i(k) \leq c \cdot \delta^k$ for all $1 \leq i \leq \mu$ and $k \geq 0$\\ (iii'') $\mathbb{P}\{ S_i(n) =s \} \leq c \cdot (\delta \cdot \max_{1 \leq j \leq i} \left\{ p_e^{\frac{l_j}{p_j}} \right\})^s$ for all $1 \leq i \leq \mu$ and $s \in \mathbb{Z}^+$\\
(iv'') $\mathbb{P}\{ S_a(n)-S_b(n)=s | \mathcal{F}_{S_b} \} \leq c \cdot (\delta \cdot \max_{b < i \leq a} \left\{ p_e^{\frac{l_i}{p_i}} \right\})^s$ for all $1 \leq b < a \leq \mu$ and $s \in \mathbb{Z}^+$.
Let $\mathcal{F}_{\beta}$ be the $\sigma$-field generated by $\beta[n]$. Then, $k_i$ and $S_i$ are deterministic variables conditioned on $\mathcal{F}_{\beta}$. The estimation error is upper bounded by \begin{align}
\mathbb{E}[|\mathbf{x}[n]-\mathbf{\widehat{x}}[n]|_2^2]&=
\mathbb{E}[\mathbb{E}[|\mathbf{x}[n]-\mathbf{\widehat{x}}[n]|_2^2| \mathcal{F}_{\beta}]] \nonumber \\ &\overset{(A)}{=}
\mathbb{E}[\mathbb{E}[\left|\mathbf{x}[n]-\mathbf{M_n}(\begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-k_{\sum_{1 \leq i \leq \mu}m_i'}} \end{bmatrix}\mathbf{x}[n]-\begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{\sum_{1 \leq i \leq \mu}m_i'}] \end{bmatrix})\right|_2^2|\mathcal{F}_{\beta}]] \nonumber \\
&\overset{(B)}{=}\mathbb{E}[\mathbb{E}[\left|\mathbf{M_n}\begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{\sum_{1 \leq i \leq \mu}m_i'}] \end{bmatrix}\right|_2^2|\mathcal{F}_{\beta}]] \nonumber \\ &\lesssim
\mathbb{E}[|\mathbf{M_n}|_{max}^2 \cdot \mathbb{E}[\left|\begin{bmatrix} \mathbf{v'}[n-k_1] \\ \mathbf{v'}[n-k_2] \\ \vdots \\ \mathbf{v'}[n-k_{\sum_{1 \leq i \leq \mu}m_i'}] \end{bmatrix}\right|_{max}^2|\mathcal{F}_{\beta}]] \nonumber \\ &\overset{(C)}{\lesssim}
\mathbb{E}[|\mathbf{M_n}|_{max}^2 \cdot (1+S_{\mu}^{p'}(n))^2 ] \nonumber \\ &\overset{(D)}{\leq}
\mathbb{E}[\max_{1 \leq i \leq \mu} \{\left( \frac{p_i(S_i(n))}{\epsilon} |\lambda_{i,1}|^{S_i(n)}\right)^2 \} \cdot (1+S_{\mu}^{p'}(n))^2 ] \nonumber \\ &\leq
\sum_{1 \leq i \leq \mu}\mathbb{E}[ \left( \frac{p_i(S_i(n))}{\epsilon} |\lambda_{i,1}|^{S_i(n)}\right)^2 \cdot (1+S_{\mu}^{p'}(n))^2 ]\nonumber \\ &\overset{(E)}{\lesssim}
\sum_{1 \leq i \leq \mu}\mathbb{E}[ \delta^{2 S_i(n)} \cdot |\lambda_{i,1}|^{2 S_i(n)} \cdot \delta^{2 S_\mu(n)} ] \nonumber \\
&=\sum_{1 \leq i \leq \mu}\mathbb{E}[ \delta^{4 S_i(n)} \cdot |\lambda_{i,1}|^{2 S_i(n)} \cdot \mathbb{E}[ \delta^{2 (S_\mu(n)-S_i(n))}|\mathcal{F}_{S_i(n)}] ] \nonumber \\ &\overset{(F)}{\lesssim} \sum_{1 \leq i \leq \mu}\mathbb{E}[
\delta^{4S_i(n)} \cdot |\lambda_{i,1}|^{2S_i(n)} \cdot \sum^{\infty}_{s=0} \delta^{2s} \cdot (\delta \cdot \max_{1 \leq j \leq \mu}\{ p_e^{\frac{l_j}{p_j}} \})^s ] \nonumber \\
&\overset{(G)}{\lesssim} \sum_{1 \leq i \leq \mu}\mathbb{E}[ \delta^{4S_i(n)} \cdot |\lambda_{i,1}|^{2S_i(n)} ] \nonumber\\
&\overset{(H)}{\lesssim} \sum_{1 \leq i \leq \mu} \sum^{\infty}_{s=0} \delta^{4s} \cdot |\lambda_{i,1}|^{2s} \cdot (\delta \cdot \max_{1 \leq j \leq i} \left\{ p_e^{\frac{l_j}{p_j}} \right\} )^s \nonumber \\
&= \sum_{1 \leq i \leq \mu} \sum^{\infty}_{s=0} (\delta^5 \cdot |\lambda_{i,1}|^2 \cdot \max_{1 \leq j \leq i} \left\{ p_e^{\frac{l_j}{p_j}} \right\} )^s \nonumber \\ &\overset{(I)}{<} \infty \nonumber \end{align} where the constants implicit in $\lesssim$ are uniform in $n$.\\ (A): By \eqref{eqn:dis:suf:11} and \eqref{eqn:dis:suf:12}.\\ (B): By condition (ii').\\ (C): Since $\mathbb{E}[\mathbf{v'}[n-k]^\dag \mathbf{v'}[n-k]] \lesssim 1 + k^{p'}$ by the definition of $p'$ in \eqref{eqn:pprimedef}, each element of the vector $\mathbf{v'}[n]$ obeys the same bound in max norm.\\ (D): By condition (iii').\\ (E): By conditions (i'') and (ii'').\\ (F): By condition (iv'').\\
(G): Since $\delta^5 \cdot \underset{1 \leq i \leq \mu}{\max} p_e^{\frac{l_i}{p_i}} |\lambda_{i,1}|^2 < 1$.\\ (H): By condition (iii'').\\
(I): Since $\delta^5 \cdot \underset{1 \leq i \leq \mu}{\max} p_e^{\frac{l_i}{p_i}} |\lambda_{i,1}|^2 < 1$.
Therefore, the estimation error is uniformly bounded over $n$ when $p_e < \frac{1}{\underset{1 \leq i \leq \mu}{\max} |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}}$, which finishes the proof.
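The convergence in steps (F)--(I) is driven by a geometric series with ratio $\delta^5 \cdot \max_i p_e^{\frac{l_i}{p_i}} |\lambda_{i,1}|^2 < 1$. The following scalar Python sketch (illustrative numbers, with $p_i = l_i = 1$ and the $\delta$ factors dropped) shows how the analogous partial sums stay bounded below the critical erasure probability and grow without bound at it:

```python
lam_sq = 1.69          # |lambda|^2 for a scalar system (p_i = l_i = 1); illustrative
p_crit = 1 / lam_sq    # critical erasure probability, about 0.592

def error_lower_bound(n, p_e):
    # Partial sums of the geometric series with ratio p_e * lam_sq,
    # mirroring the series appearing in steps (F)-(I).
    return sum((1 - p_e) * (p_e * lam_sq) ** s for s in range(n))

# Below the critical probability, the partial sums converge (bounded in n)...
good = [error_lower_bound(n, 0.3) for n in (50, 100, 200)]
assert max(good) - min(good) < 1e-6

# ...while at the critical probability they grow without bound.
bad = [error_lower_bound(n, p_crit) for n in (50, 100, 200)]
assert bad[2] > bad[1] > bad[0]
print(round(good[-1], 3), round(bad[-1], 3))
```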
\subsection{Necessity Proof of Theorem~\ref{thm:mainsingle}} \label{sec:dis:nece}
Intuitively, we will give all states except the ones that correspond to the bottleneck eigenvalue cycle as side-information to the estimator. Then, the problem reduces to the single-eigenvalue-cycle case discussed in Section~\ref{sec:powerproperty}, and we can prove that the estimation error diverges similarly. This argument works for $p_e > \frac{1}{\max_i |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}}$, since we can show that a single additional disturbance $\mathbf{w}[n]$ grows exponentially. However, for the equality case $p_e = \frac{1}{\max_i |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}}$, the proof can be more complicated since it is not a single disturbance but the sum of disturbances that diverges algebraically to infinity.
So, to make this argument complete and rigorous, we will analyze the optimal estimator, and prove that its estimation error diverges when the condition of the lemma is violated.
It is well known that the optimal estimator is the Kalman filter and that it can be written in recursive form. Let $\mathcal{F}_{\beta}$ be the $\sigma$-field generated by $\beta[n]$. Denote the one-step prediction error as $\mathbf{\Sigma_{n+1|n}}:=\mathbb{E}[(\mathbf{x}[n+1]-\mathbb{E}[\mathbf{x}[n+1]|\mathbf{y}^n])(\mathbf{x}[n+1]-\mathbb{E}[\mathbf{x}[n+1]|\mathbf{y}^n])^\dag |\mathcal{F}_{\beta}]$. Then, $\mathbf{\Sigma_{n+1|n}}$ satisfies the following recursion~\cite[p.101]{KumarVaraiya}. \begin{align}
\mathbf{\Sigma_{n+1|n}} &= (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n})\mathbf{\Sigma_{n|n-1}}(\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n})^\dag +\mathbf{A}\mathbf{L_n} \mathbb{E}[\mathbf{v}[n]\mathbf{v}[n]^\dag]\mathbf{L_n}^\dag \mathbf{A}^\dag + \mathbf{B} \mathbb{E}[\mathbf{w}[n]\mathbf{w}[n]^\dag] \mathbf{B}^\dag \label{eqn:dis:final:2} \end{align}
Here, $\mathbf{L_n}=\mathbf{\Sigma_{n|n-1}} \mathbf{\bar{C}_n}^\dag \left[ \mathbf{\bar{C}_n} \mathbf{\Sigma_{n|n-1}} \mathbf{\bar{C}_n}^\dag + \mathbb{E}[\mathbf{v}[n]\mathbf{v}[n]^\dag] \right]^{-1}$, and $\mathbf{\bar{C}_n}=\mathbf{C}$ if $\beta[n]=1$ and $\mathbf{\bar{C}_n}=\mathbf{0}$ otherwise. Notice that $\mathbf{\Sigma_{n+1|n}}$ is a random variable.
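For concreteness, the recursion \eqref{eqn:dis:final:2} can be iterated numerically. The following Python sketch (illustrative matrices and identity noise covariances, not part of the proof) implements one step exactly as written, with $\mathbf{\bar{C}_n}=\beta[n]\,\mathbf{C}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative system matrices (not from the paper); identity noise covariances.
A = np.array([[1.2, 1.0],
              [0.0, 1.2]])
B = np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.eye(1)   # E[v v^dag]
Q = np.eye(2)   # E[w w^dag]

def step(Sigma, beta):
    # One iteration of the recursion: C_bar = C if beta = 1, else 0.
    Cb = beta * C
    L = Sigma @ Cb.T @ np.linalg.inv(Cb @ Sigma @ Cb.T + R)
    F = A - A @ L @ Cb
    return F @ Sigma @ F.T + A @ L @ R @ L.T @ A.T + B @ Q @ B.T

Sigma = np.eye(2)
for n in range(50):
    Sigma = step(Sigma, beta=int(rng.random() > 0.2))   # p_e = 0.2 (illustrative)

# The recursion preserves positive definiteness (B Q B^dag is added each step).
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
print(np.trace(Sigma))
```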
Moreover, it is also known that when $(\mathbf{A},\mathbf{B})$ is controllable, the one-step prediction error of $\mathbf{x}[n+1]$ based on $\mathbf{y}^n$ becomes positive definite for large enough $n$ even if there are no erasures. Therefore, there exists $m \in \mathbb{N}$ and $\sigma^2>0$ such that $\mathbf{\Sigma_{n+1|n}} \succeq \sigma^2 \mathbf{I}$ with probability one for all $n \geq m$. Hence, by \eqref{eqn:dis:final:2}, for all $n \geq n' \geq m$ we have \begin{align}
\mathbf{\Sigma_{n+1|n}} &\succeq (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n}) \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_{n'}}\mathbf{\bar{C}_{n'}})
\mathbf{\Sigma_{n'|n'-1}} (\mathbf{A}-\mathbf{A}\mathbf{L_{n'}}\mathbf{\bar{C}_{n'}})^\dag \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n})^\dag \\ &\succeq \sigma^2 (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n}) \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_{n'}}\mathbf{\bar{C}_{n'}}) \mathbf{I} (\mathbf{A}-\mathbf{A}\mathbf{L_{n'}}\mathbf{\bar{C}_{n'}})^\dag \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n})^\dag. \label{eqn:dis:final:3} \end{align}
We use the definitions of $\mathbf{U}$, $\mathbf{A'}$, $\mathbf{C'}$, $\mathbf{A_i}$, $\mathbf{C_i}$, $\lambda_{i,j}$, $p_i$, $l_i$, $\nu_i$ from \eqref{eqn:ac:jordan:thm}, \eqref{eqn:ac2:jordan:thm} and \eqref{eqn:def:lprime:thm}. Let $i^\star := \underset{1 \leq i \leq \mu}{\arg\max}\ |\lambda_{i,1}|^{2 \frac{p_i}{l_i}}$. Let $S'^\star \subseteq \{0,1,\cdots,p_{i^\star}-1\}$ be a set achieving the minimum cardinality $l_{i^\star}$, and define $S^\star := \{s_1^\star,s_2^\star,\cdots,s_{|S^\star|}^\star \}= \{ 0, 1, \cdots, p_{i^\star}-1 \} \setminus S'^\star$. Then, $|S'^\star|=l_{i^\star}$ and \begin{align} \begin{bmatrix} \mathbf{C_{i^\star}}\mathbf{A_{i^\star}}^{s_1^\star} \\ \mathbf{C_{i^\star}}\mathbf{A_{i^\star}}^{s_2^\star} \\ \vdots \\
\mathbf{C_{i^\star}}\mathbf{A_{i^\star}}^{s_{|S^\star|}^\star} \\ \end{bmatrix} \nonumber \end{align} is rank-deficient, i.e. its rank is strictly less than $\nu_{i^\star}$.
For a given time index $n$, define the stopping time $S_n$ as the delay to the most recent observation whose index does not belong to $S^\star$ modulo $p_{i^\star}$, i.e. \begin{align} S_n:=\inf \{ k p_{i^\star} : k \in \mathbb{Z}^+ \mbox{ and there exists $k'$ such that } \beta[n-k']=1 , k p_{i^\star} \leq k' < (k+1)p_{i^\star}, (-k'-1 \bmod p_{i^\star}) \in S'^\star \}. \end{align} Then, we can compute that $\mathbb{P}\{S_n = k p_{i^\star} \}=(1-p_e^{l_{i^\star}})(p_e^{l_{i^\star}})^{k}$ for all $k \in \mathbb{Z}^+$. From the definition of $S_n$, we can see that for all $0 \leq k < S_n$, $\beta[n-k]=1$ if and only if $(-k-1 \bmod p_{i^\star}) \in S^\star$.
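The geometric law of $S_n$ can be checked by simulation. The following Python sketch (illustrative parameters: $p_{i^\star}=2$, $S'^\star=\{0\}$, so $l_{i^\star}=1$; not part of the proof) draws i.i.d. erasures and compares the empirical distribution of $S_n$ with $(1-p_e^{l_{i^\star}})(p_e^{l_{i^\star}})^k$:

```python
import numpy as np

rng = np.random.default_rng(1)

p_e = 0.4      # erasure probability (illustrative)
p_cyc = 2      # cycle length p_{i*} (illustrative)
Sp = {0}       # the set S'^*, so l_{i*} = |S'^*| = 1

def sample_S(max_k=200):
    # Scan blocks of length p_cyc; stop at the first block containing a
    # received observation whose phase -k'-1 (mod p_cyc) lies in S'^*.
    for k in range(max_k):
        for kp in range(k * p_cyc, (k + 1) * p_cyc):
            received = rng.random() > p_e
            if received and (-kp - 1) % p_cyc in Sp:
                return k * p_cyc
    return max_k * p_cyc

samples = np.array([sample_S() for _ in range(100_000)])
l = len(Sp)
emp0, exact0 = np.mean(samples == 0), 1 - p_e ** l
emp2, exact2 = np.mean(samples == p_cyc), (1 - p_e ** l) * p_e ** l
assert abs(emp0 - exact0) < 0.01
assert abs(emp2 - exact2) < 0.01
print(emp0, exact0, emp2, exact2)
```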
Then, conditioned on $n-S_n \geq m$, by \eqref{eqn:dis:final:3} the following inequality holds with probability one: \begin{align}
\mathbf{\Sigma_{n+1|n}}&\succeq \sigma^2 (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n}) \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_{n-S_n+1}}\mathbf{\bar{C}_{n-S_n+1}}) \mathbf{I} (\mathbf{A}-\mathbf{A}\mathbf{L_{n-S_n+1}}\mathbf{\bar{C}_{n-S_n+1}})^\dag \cdots (\mathbf{A}-\mathbf{A}\mathbf{L_n}\mathbf{\bar{C}_n})^\dag. \label{eqn:dis:final:4} \end{align} where $\mathbf{\bar{C}_{n-S_n+k}}=\mathbf{C}$ if $(-S_n+k-1 \bmod p_{i^\star})=(k-1 \bmod p_{i^\star}) \in S^\star$ and $\mathbf{\bar{C}_{n-S_n+k}}=\mathbf{0}$ otherwise.
We will prove that the L.H.S. of \eqref{eqn:dis:final:4} grows exponentially. For this, we first need the following lemma.
\begin{lemma}
Consider $\mathbf{A}$, $\mathbf{C}$, $\mathbf{U}$, $\mathbf{A'}$, $\mathbf{C'}$, $\mathbf{A_i}$, $\mathbf{C_i}$, $\nu_i$, $p_i$ given as \eqref{eqn:ac:jordan:thm}, \eqref{eqn:ac2:jordan:thm} and \eqref{eqn:def:lprime:thm}. For a given set $S:=\{s_1,\cdots,s_{|S|} \} \subseteq \{0,1,\cdots,p_i-1 \}$, let
$\begin{bmatrix}\mathbf{C_i}\mathbf{A_i}^{s_1} \\ \mathbf{C_i}\mathbf{A_i}^{s_2} \\ \vdots \\ \mathbf{C_i}\mathbf{A_i}^{s_{|S|}} \end{bmatrix}$ be rank-deficient, i.e. its rank is less than $\nu_i$, and define \begin{align} \mathbf{\bar{A}}(\mathbf{K_0},\cdots,\mathbf{K_{p_i-1}}):=(\mathbf{A}-\mathbf{K_{p_i-1}}\mathbf{C_{p_i-1}'}) \cdots (\mathbf{A}-\mathbf{K_0}\mathbf{C_0'})\nonumber \end{align} where $\mathbf{C_j'}=\mathbf{C}$ when $j \in S$ and $\mathbf{C_j'}=\mathbf{0_{l\times m}}$ otherwise.\\ Then, for all $\mathbf{K_0},\cdots, \mathbf{K_{p_i-1}} \in \mathbb{C}^{m \times l}$, $\mathbf{\bar{A}}(\mathbf{K_0},\cdots,\mathbf{K_{p_i-1}})$ has a common right eigenvector $\mathbf{e}$ whose eigenvalue is $\lambda_{i,1}^{p_i}$. \label{lem:dis:converse} \end{lemma} \begin{proof} For simplicity of notation, we will set $i=1$, but the proof for general $i$ is the same. Let $\mathbf{e'}=\begin{bmatrix} e_1 \\ \vdots \\ e_{\nu_1} \end{bmatrix}$ be a nonzero vector that belongs to the right null space of
$\begin{bmatrix}\mathbf{C_1}\mathbf{A_1}^{s_1} \\ \mathbf{C_1}\mathbf{A_1}^{s_2} \\ \vdots \\ \mathbf{C_1}\mathbf{A_1}^{s_{|S|}} \end{bmatrix}$. Let $\mathbf{e_1'}$ be an $m_{1,1} \times 1$ column vector whose first element is $e_1$ and whose remaining elements are $0$. Likewise, $\mathbf{e_2'}$ is an $m_{1,2} \times 1$ column vector with first element $e_2$ and the rest $0$. $\mathbf{e_3'}, \cdots, \mathbf{e_{\nu_1}'}$ are defined in the same way. Let the $m \times 1$ column vector $\mathbf{e''}$ be $\begin{bmatrix} \mathbf{e_1'} \\ \vdots \\ \mathbf{e_{\nu_1}'} \\ \mathbf{0}_{(m-\underset{1 \leq i \leq \nu_1}{\sum} m_{1,i}) \times 1} \end{bmatrix}$. Then, we will prove that $\mathbf{e}:=\mathbf{U}\mathbf{e''}$ is the eigenvector that satisfies the conditions of the lemma.
By construction, we can see that $\mathbf{C_1}\mathbf{A_1}^k \mathbf{e'}=\mathbf{0}$ for $k \in \{ s_1, \cdots, s_{|S|}\}$. Moreover, since $\mathbf{C}\mathbf{A^{k}}\mathbf{e} = \mathbf{C}\mathbf{U}
\mathbf{A'}^{k}\mathbf{U}^{-1}\mathbf{U}\mathbf{e''} = \mathbf{C'}\mathbf{A'}^{k}\mathbf{e''}$, we also have $\mathbf{C}\mathbf{A}^k \mathbf{e}=0$ for $k \in \{ s_1, \cdots, s_{|S|}\}$. Thus, we can conclude \begin{align} &(\mathbf{A}-\mathbf{K_{p_1-1}}\mathbf{C_{p_1-1}'}) \cdots (\mathbf{A}-\mathbf{K_{s_1}}\mathbf{C_{s_1}'}) (\mathbf{A}-\mathbf{K_{s_1-1}}\mathbf{C_{s_1-1}'}) \cdots (\mathbf{A}-\mathbf{K_0}\mathbf{C_0'}) \mathbf{e} \nonumber \\ &=(\mathbf{A}-\mathbf{K_{p_1-1}}\mathbf{C_{p_1-1}'}) \cdots (\mathbf{A}-\mathbf{K_{s_1}}\mathbf{C}) (\mathbf{A}-\mathbf{K_{s_1-1}}\mathbf{0}) \cdots (\mathbf{A}-\mathbf{K_0}\mathbf{0}) \mathbf{e} \nonumber \\ &=(\mathbf{A}-\mathbf{K_{p_1-1}}\mathbf{C_{p_1-1}'}) \cdots (\mathbf{A}-\mathbf{K_{s_1}}\mathbf{C}) \mathbf{A}^{s_1} \mathbf{e} \nonumber \\ &=(\mathbf{A}-\mathbf{K_{p_1-1}}\mathbf{C_{p_1-1}'}) \cdots (\mathbf{A}^{s_1+1}\mathbf{e}-\mathbf{K_{s_1}}\mathbf{C} \mathbf{A}^{s_1}\mathbf{e} ) \nonumber \\ &\overset{(a)}{=}(\mathbf{A}-\mathbf{K_{p_1-1}}\mathbf{C_{p_1-1}'}) \cdots (\mathbf{A}^{s_1+1}\mathbf{e}) \nonumber\\ &\overset{(b)}{=}\mathbf{A}^{p_1}\mathbf{e} = \mathbf{U}\mathbf{A'}^{p_1} \mathbf{U}^{-1} \mathbf{e}= \mathbf{U}\mathbf{A'}^{p_1} \mathbf{e''} \nonumber\\ &\overset{(c)}{=} \mathbf{U}\lambda_{1,1}^{p_1} \mathbf{e''}= \lambda_{1,1}^{p_1} \mathbf{e} \nonumber \end{align} (a): $\mathbf{C}\mathbf{A}^{s_1}\mathbf{e}=\mathbf{0}$.\\
(b): Repeated application of (a) for $s_2, \cdots, s_{|S|}$.\\ (c): $\mathbf{A_1}^{p_1}=\lambda_{1,1}^{p_1} \mathbf{I}$ and the definition of the vector $\mathbf{e''}$.
Thus, the lemma is proved. \end{proof}
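Lemma~\ref{lem:dis:converse} can be verified numerically on a toy instance. The following Python sketch (all matrices illustrative, not from the paper) uses an $\mathbf{A}$ with the eigenvalue cycle $\{a,-a\}$ of period $2$ and $S=\{0\}$, for which $\mathbf{e}=(0,1)^\top$ spans the null space of $\mathbf{C}$, and checks that $\mathbf{e}$ remains an eigenvector of the closed-loop product for arbitrary random gains:

```python
import numpy as np

rng = np.random.default_rng(2)

a = 1.3
A = np.array([[0.0, a],
              [a, 0.0]])      # eigenvalues +/- a : an eigenvalue cycle of period 2
C = np.array([[1.0, 0.0]])

# S = {0}: only the phase-0 observation survives, and [C] alone is
# rank-deficient (rank 1 < 2); e = (0,1)^T spans its null space.
e = np.array([0.0, 1.0])

# For arbitrary gains K_0, K_1, the closed-loop product keeps e as an
# eigenvector with eigenvalue a^2 = lambda^p (the lemma's claim, toy case).
for _ in range(100):
    K0 = rng.normal(size=(2, 1))
    K1 = rng.normal(size=(2, 1))
    Abar = (A - K1 @ np.zeros((1, 2))) @ (A - K0 @ C)   # C'_1 = 0, C'_0 = C
    assert np.allclose(Abar @ e, a ** 2 * e)
print("common eigenvector verified")
```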
Let the vector $\mathbf{e}$ be the right eigenvector of Lemma~\ref{lem:dis:converse} for $i=i^\star$. Then, there exists $\sigma' >0$ such that $\mathbf{I} \succeq \sigma'^2\mathbf{e}\mathbf{e}^\dag$. Thus, \eqref{eqn:dis:final:4} can be further lower bounded as \begin{align}
\mathbf{\Sigma_{n+1|n}}& \succeq \sigma^2 \sigma'^2 \lambda_{i^\star,1}^{S_n} \mathbf{e}\mathbf{e}^\dag (\lambda_{i^\star,1}^{S_n})^\dag. \nonumber \end{align}
Since $p_e \geq \frac{1}{|\lambda_{i^\star,1}|^{2 \frac{p_{i^\star}}{l_{i^\star}}}}$, the expected one-step prediction error is lower bounded as follows:\footnote{The lower bound does not hold when $|\lambda_{i^\star,1}|=1$ which induces $p_e =1$. However, in this case we do not have any observation, so trivially the system is unstable.} \begin{align}
&\mathbb{E}[ ( \mathbf{x}[n+1]-\mathbb{E}[\mathbf{x}[n+1]|\mathbf{y}^n ] )^\dag
( \mathbf{x}[n+1]-\mathbb{E}[\mathbf{x}[n+1]|\mathbf{y}^n ] )]\\
&\geq \mathbb{E}[\sigma^2 \sigma'^2 |\lambda_{i^\star,1}|^{2 S_n} |\mathbf{e}|^2 \cdot \mathbf{1}(n-S_n \geq m)]\\
&\geq \sigma^2 \sigma'^2 |\mathbf{e}|^2 \sum_{0 \leq s \leq \lfloor \frac{n-m}{p_{i^\star}} \rfloor}(1-p_e^{l_{i^\star}})(|\lambda_{i^\star,1}|^{2 p_{i^\star}} p_e^{l_{i^\star}})^s \\
&\geq \sigma^2 \sigma'^2 |\mathbf{e}|^2 \cdot (1-p_e^{l_{i^\star}}) \cdot (\lfloor \frac{n-m}{p_{i^\star}} \rfloor). \end{align} Therefore, as $n$ goes to infinity, the one-step prediction error diverges to infinity. The estimation error for the state is not uniformly bounded either, so the system is not intermittent observable.
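The mechanism behind this divergence can also be seen directly in the Kalman recursion. The following Python sketch (a toy period-$2$ system with illustrative matrices and identity noise covariances; not part of the proof) iterates the recursion under the periodic erasure pattern that only delivers observations at a rank-deficient phase set, so that the direction $\mathbf{e}=(0,1)^\top$ is never observed, and contrasts it with the fully observed case:

```python
import numpy as np

a = 1.3
A = np.array([[0.0, a], [a, 0.0]])   # eigenvalue cycle {a, -a} of period 2
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)
e = np.array([0.0, 1.0])             # direction never seen by the bad pattern

def riccati(betas):
    Sigma = np.eye(2)
    for b in betas:
        if b:
            K = Sigma @ C.T @ np.linalg.inv(C @ Sigma @ C.T + R)
            Sigma = Sigma - K @ C @ Sigma
        Sigma = A @ Sigma @ A.T + Q
    return Sigma

N = 60
# "Bad" pattern: observations only at even times; the error along e grows
# geometrically, like a^(2n), exactly as in the lower bound above.
bad = riccati([n % 2 == 0 for n in range(N)])
# With every observation delivered, (A, C) is observable and the error
# stays bounded.
full = riccati([True] * N)

assert e @ bad @ e > 1e6
assert e @ full @ e < 1e3
print(e @ bad @ e, e @ full @ e)
```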
\begin{figure*}
\caption{Flow diagram of the proof of Lemma~\ref{lem:conti:mo}}
\label{fig:proofflow}
\end{figure*}
\subsection{Proof Outline of Lemma~\ref{lem:conti:mo} and Lemma~\ref{lem:dis:achv}}
Now, the proofs of Theorem~\ref{thm:nonuniform} and \ref{thm:mainsingle} boil down to the proofs of Lemma~\ref{lem:conti:mo} and \ref{lem:dis:achv}. Since the proofs of Lemma~\ref{lem:conti:mo} and \ref{lem:dis:achv} given in the Appendix are quite involved, we outline them in this section.
\subsubsection{Proof Outline of Lemma~\ref{lem:conti:mo}}
The proof flow of Lemma~\ref{lem:conti:mo} is shown in Figure~\ref{fig:proofflow}. As we saw in Section~\ref{sec:intui}, the tail behavior of probability mass functions (p.m.f.) is crucial in the characterization of the critical erasure probability. Thus, in Appendix~\ref{sec:app:1} we first study some properties of the p.m.f. tail.
In the sufficiency proof of Section~\ref{sec:cont:suf}, we analyzed a sub-optimal maximum likelihood estimator whose performance heavily depends on the norm of the inverse of the observability Gramian matrix. In Appendix~\ref{sec:app:3}, we reduce the question about the norm of this matrix to a question about an analytic function. In Lemma~\ref{lem:conti:inverse2}, we first prove that if the determinant of the observability Gramian matrix is large enough, then the norm of the inverse of the observability Gramian matrix is small enough. Thus, we can reduce the question about the norm to a question about the determinant. Since the determinant of the observability Gramian matrix is an analytic function, Lemma~\ref{lem:det:lower} further reduces the question to one about an analytic function. In other words, if an analytic function is large enough, then the determinant of the observability Gramian matrix is also large enough.
For intermittent observability, we want to prove that the estimation error is uniformly bounded over all time indexes with nonuniform sampling. For this, it is enough to show that a set of analytic functions is uniformly bounded away from $0$ with high probability. Lemma~\ref{lem:singleun} of Appendix~\ref{app:unif:conti} captures this insight. In Lemma~\ref{lem:uni:1}, we first prove that each analytic function is bounded away from $0$ with high probability using a property of analytic functions. After this, we apply Dini's theorem, which tells us that pointwise convergence of a monotone sequence of continuous functions implies uniform convergence when the domain of the functions is compact, and prove the desired uniform convergence of Lemma~\ref{lem:singleun}.
Now, we are ready to prove Lemma~\ref{lem:conti:mo}. By merging the results of Lemma~\ref{lem:det:lower} and \ref{lem:singleun}, we can prove that the determinant of the observability Gramian is large enough with high probability, uniformly over all time indexes. Together with the properties of the p.m.f. tail, we can first prove Lemma~\ref{lem:conti:mo} for a scalar observation. We can finally prove the general case using the idea of successive decoding. In other words, we reduce the system to one with a scalar observation and estimate one state. Then, we subtract the estimate from the system, and repeat the same procedure until we decode all states.
\begin{figure*}
\caption{Flow diagram of the proof of Lemma~\ref{lem:dis:achv}}
\label{fig:proofflow2}
\end{figure*}
\subsubsection{Proof Outline of Lemma~\ref{lem:dis:achv}}
As we can see in Figure~\ref{fig:proofflow2}, the proof outline of Lemma~\ref{lem:dis:achv} is essentially the same as that of Lemma~\ref{lem:conti:mo}.
We still use the tail properties of p.m.f.s shown in Appendix~\ref{sec:app:1}. In Appendix~\ref{sec:dis:gramian}, we state the lemmas about the observability Gramian matrices of discrete-time systems which parallel the ones of Appendix~\ref{sec:app:3}.
The main difference from the nonuniform sampling case is the uniform convergence shown in Appendix~\ref{sec:dis:uniform}. Consider the system without eigenvalue cycles. In this case, we have to justify that the system essentially reduces to multiple scalar systems, and the critical erasure probability only depends on the largest eigenvalue of the system. However, unlike the nonuniform sampling case, we do not have a random jitter at each observation and the determinant of the observability Gramian is a deterministic sequence in the time indexes. Therefore, we have to prove that the counting measure of the time indexes where the determinant of the observability Gramian is small converges to zero uniformly over all current time indexes.
For this, we apply Weyl's criterion~\cite{Kuipers}, which gives a sufficient condition for deterministic sequences to behave like uniform random variables. Moreover, since different eigenvalue cycles behave like independent random variables, we first generalize Lemma~\ref{lem:singleun} of Appendix~\ref{app:unif:conti} from a single random variable to multiple random variables in Lemma~\ref{lem:dis:geo1}. Together with Weyl's criterion, we prove Lemma~\ref{lem:dis:geofinal}, which tells us that the counting measure of the bad time indexes, where the determinant of the observability Gramian becomes too small, converges to zero uniformly over all current time indexes.
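Weyl's criterion itself is easy to illustrate numerically: for irrational $\alpha$, the exponential sums $\frac{1}{N}\sum_{n \leq N} e^{2\pi i h n \alpha}$ vanish for every nonzero integer $h$, and $\{n\alpha\}$ equidistributes. A quick Python check (with the illustrative choice $\alpha=\sqrt{2}$):

```python
import numpy as np

alpha = np.sqrt(2)           # an irrational rotation number (illustrative)
N = 100_000
n = np.arange(1, N + 1)

# Weyl's criterion: (1/N) sum_n exp(2*pi*i*h*n*alpha) -> 0 for every h != 0.
for h in (1, 2, 3):
    s = np.abs(np.mean(np.exp(2j * np.pi * h * n * alpha)))
    assert s < 0.01

# Consequence: the fraction of n with {n*alpha} in [0, 0.25) approaches 0.25,
# i.e. the sequence behaves like a uniform random variable.
frac = np.mean((n * alpha) % 1 < 0.25)
assert abs(frac - 0.25) < 0.01
print("equidistribution verified")
```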
The remaining proof flow of Lemma~\ref{lem:dis:achv} is essentially the same as that of Lemma~\ref{lem:conti:mo}. We first estimate the states corresponding to the largest eigenvalue cycle, subtract the estimate from the system, and successively decode the remaining states.
\section{Comments} The intermittent Kalman filtering problem was first motivated by control over communication channels. Therefore, the problem is conventionally believed to fall into the intersection of control and communication. However, if the plant is unstable, the transmission power of the sensor diverges to infinity if it is really going to pack an ever-increasing number of bits into its transmissions. Therefore, it is hard to say that intermittent Kalman filtering has a direct connection to communication theory. Instead, we propose that the intersection of control and signal processing --- especially sampling theory --- is the right conceptual category for intermittent Kalman filtering. It should thus be interesting to explore the connection between the results of this paper and classical and modern results of sampling theory.
Arguably, the closest problem to intermittent Kalman filtering is that of observability after sampling. As we mentioned earlier, the observability of $(\mathbf{A_c}, \mathbf{C_c})$ in \eqref{eqn:contistate} and \eqref{eqn:contiob} does not imply the observability of $(\mathbf{A_c},\mathbf{C})$ in \eqref{eqn:conti:xsample} and \eqref{eqn:conti:ysample}. The well-known sufficient condition is: \begin{theorem}[Theorem 6.9. of \cite{Chen}]
Suppose $(\mathbf{A_c},\mathbf{C_c})$ is observable. A sufficient condition for its discretized system with sampling interval $I$ to be observable is that $\frac{|\Im (\lambda_i -\lambda_j)I|}{2 \pi} \notin \mathbb{N}$ whenever $\Re(\lambda_i-\lambda_j)=0$. \end{theorem} Since the eigenvalues of the sampled system are given as $\exp(\lambda_i I)$, Corollary~\ref{thm:nocycle} can be written as the following corollary for a sampled system. \begin{corollary}
Suppose $(\mathbf{A_c},\mathbf{C_c})$ is observable. A sufficient condition for its discretized system with sampling interval $I$ to have $\frac{1}{|e^{2\lambda_{max}I}|}$ as a critical erasure probability is that $\frac{|\Im (\lambda_i -\lambda_j)I|}{2 \pi} \notin \mathbb{Q}$ whenever $\Re(\lambda_i-\lambda_j)=0$. \label{cor:1} \end{corollary}
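The contrast between the theorem and Corollary~\ref{cor:1} can be checked numerically. The following Python sketch (an illustrative continuous-time system with eigenvalues $\pm i\pi$) shows that the sampling interval $I=1$ destroys observability, while $I=0.5$ preserves it even though the ratio $0.5$ is rational, so the stronger condition of Corollary~\ref{cor:1} still fails:

```python
import numpy as np

w = np.pi                    # A_c has eigenvalues +/- i*pi (illustrative)
Cc = np.array([[1.0, 0.0]])

def sampled_A(I):
    # exp(I * [[0,-w],[w,0]]) is a rotation by w*I (closed form, no scipy).
    c, s = np.cos(w * I), np.sin(w * I)
    return np.array([[c, -s], [s, c]])

def observable(A):
    O = np.vstack([Cc @ np.linalg.matrix_power(A, k) for k in range(2)])
    return np.linalg.matrix_rank(O) == 2

# I = 1 violates the theorem's condition: |Im(lambda_i - lambda_j)| * I / (2*pi)
# = 2*pi*1/(2*pi) = 1 is an integer, and observability is indeed lost.
assert not observable(sampled_A(1.0))

# I = 0.5 satisfies it (the ratio is 0.5) and observability survives --
# but 0.5 is rational, so Corollary 1's stronger condition still fails.
assert observable(sampled_A(0.5))
print("sampling-interval check passed")
```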
The idea of breaking cyclic behavior using non-uniform sampling also appears in the context of sampling multiband signals~\cite{Vaidyanathan_Efficient}. The lower bound on the sampling rate is known to be the Lebesgue measure of the spectral support of the sampled signal. To achieve this lower bound for a general multiband signal, a nonuniform sampling pattern has to be used. Moreover, nonuniform sampling is also well known as a key ingredient of compressed sensing~\cite{Donoho_Compressed}.
As a last comment, we would like to mention that the result is not sensitive to the norm. In this paper, intermittent observability is defined using the $l^2$-norm to follow the majority of the literature. But, if the intermittent observability is defined by the $l^\eta$-norm, we can simply replace $2$ in every theorem by $\eta$. For example, the result of Theorem~\ref{thm:mainsingle} becomes $\frac{1}{ \underset{i}{\max}
|\lambda_{i,1}|^{\frac{\eta p_i}{l_i'}}}$.
\section{Appendix} \subsection{Lemmas for Tails of Probability Mass Functions} \label{sec:app:1} In this section, we will prove some properties of the tails of probability mass functions (p.m.f.). By the tail, we mean how fast the probability decreases geometrically as we consider rarer and rarer events.
First, we define the essential supremum, $\esssup$. \begin{definition} For a given random variable $X$, $\esssup X$ is given as follows. \begin{align} \esssup X = \inf\{x \in \mathbb{R} : \mathbb{P}(X > x) = 0 \}. \end{align} \end{definition}
The following lemma shows that even if we increase a random variable sub-linearly, its p.m.f. tail remains the same. \begin{lemma} Consider a $\sigma$-field $\mathcal{F}$ and a nonnegative discrete random variable $k$ whose probability mass function satisfies \begin{align}
\exp( \limsup_{n \rightarrow \infty} \esssup \frac{1}{n} \log \mathbb{P}\{ k =n | \mathcal{F} \} ) \leq p. \nonumber \end{align} Then, given a function $f(x)$ such that $f(x) \leq a( \log(x+1) + 1)$ for some $a \in \mathbb{R}^+$, the probability mass function of the random variable $k+f(k)$ satisfies the following: \begin{align}
\exp( \limsup_{n \rightarrow \infty} \esssup \frac{1}{n} \log \mathbb{P}\{ k+f(k) = n | \mathcal{F} \} ) \leq p. \nonumber \end{align} \label{lem:conti:tailpoly} \end{lemma} \begin{proof}
Since $\esssup \mathbb{P} \{ k=n | \mathcal{F} \}$ is bounded by $1$, for all $\delta > 0$ such that $p+\delta < 1$ we can find a positive $c$ such that $\esssup \mathbb{P}\{k=n | \mathcal{F} \} \leq c \left(p+\delta \right)^{n} \left(1- \left(p+\delta \right)\right)$. Moreover, since $f(x) \lesssim \log(x+1) + 1$, for all $\delta'>0$ we can find a positive $c'$ such that $f(x) \leq \delta' x + c'$ for all $x \in \mathbb{R}^+$. Then, we have \begin{align}
\esssup \mathbb{P}\{k+f(k) = n | \mathcal{F} \} & \leq \esssup \mathbb{P}\{k+f(k) \geq n | \mathcal{F} \} \leq \esssup \mathbb{P}\{k+\delta' k + c' \geq n | \mathcal{F} \} \nonumber \\
&\leq \esssup \mathbb{P}\{ k \geq \lfloor \frac{n-c'}{1+\delta'} \rfloor | \mathcal{F} \}
\leq \sum^{\infty}_{i= \lfloor \frac{n-c'}{1+\delta'} \rfloor} \esssup \mathbb{P} \{ k=i | \mathcal{F} \} \nonumber \\ &\leq \sum^{\infty}_{i=\lfloor \frac{n-c'}{1+\delta'} \rfloor} c(p+\delta)^{i}(1-(p+\delta))\nonumber \\ &=c(1-(p+\delta))\frac{(p+\delta)^{\lfloor \frac{n-c'}{1+\delta'} \rfloor}}{1-(p+\delta)} = c(p+\delta)^{\lfloor \frac{n-c'}{1+\delta'} \rfloor} \nonumber \\ & \leq c(p+\delta)^{\frac{n-c'}{1+\delta'} - 1} = c(p+\delta)^{-\frac{c'}{1+\delta'}-1} (p+\delta)^{\frac{n}{1+\delta'}}. \nonumber \end{align} Therefore, \begin{align}
\exp \left( \limsup_{n \rightarrow \infty} \esssup \frac{1}{n} \log \mathbb{P}\{k+f(k) = n | \mathcal{F} \} \right) \leq (p+\delta)^{\frac{1}{1+\delta'}}. \nonumber \end{align} Since we can choose $\delta$ and $\delta'$ arbitrarily close to $0$, \begin{align}
&\exp \left( \limsup_{n \rightarrow \infty} \esssup \frac{1}{n} \log \mathbb{P}\{k+f(k) = n | \mathcal{F} \} \right) \leq p, \nonumber \end{align} which finishes the proof. \end{proof}
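Lemma~\ref{lem:conti:tailpoly} can also be checked by exact computation. The following Python sketch (illustrative geometric tail parameter $p$ and the sub-linear increase $f(x)=\lfloor \log(x+1) \rfloor$, which satisfies the lemma's growth condition with $a=1$) verifies that the exponential tail rate is unchanged:

```python
import numpy as np

p = 0.8
N = 2000
k = np.arange(N)
pmf_k = (1 - p) * p ** k.astype(float)      # geometric tail: P{k = n} = (1-p) p^n

# Increase k sub-linearly: X = k + floor(log(k+1)); X is strictly increasing
# in k, so each value of X has a unique preimage.
X = k + np.floor(np.log(k + 1.0)).astype(int)
pmf_X = np.zeros(X.max() + 1)
np.add.at(pmf_X, X, pmf_k)

# The exponential tail rate of X is unchanged: (1/n) log P{X = n} -> log p.
n = 1000
rate = np.exp(np.log(pmf_X[n]) / n)
assert abs(rate - p) < 0.01
print(rate)
```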
The following lemma shows that if we add independent random variables, the p.m.f. tail of the sum is equal to the heaviest one. \begin{lemma} Consider an increasing sequence of $\sigma$-fields $\mathcal{F}_0,\mathcal{F}_1,\cdots,\mathcal{F}_{n-1}$ and a sequence of discrete random variables $k_1,k_2,\cdots,k_{n}$ satisfying two properties:\\ (i) $k_i \in \mathcal{F}_i$ for $i \in \{ 1, \cdots, n-1 \}$ \\
(ii) $\exp(\limsup_{k\rightarrow \infty} \esssup \frac{1}{k} \log \mathbb{P}( k_i = k | \mathcal{F}_{i-1} )) \leq p_i$.\\
Let $S=\sum^n_{i=1} k_i$. Then, $\exp(\limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}( S=s | \mathcal{F}_0 )) \leq \max_{1 \leq i \leq n}\{ p_i\}$. \label{lem:app:geo} \end{lemma} \begin{proof} Given $\delta > 0$, let $k'_i$ be independent geometric random variables with probability $1-(p_i+\delta)$. Denote $S':=\sum^{n}_{i=1} k_i'$. The moment generating function of $S'$ is \begin{align} \mathbb{E}[Z^{-S'}] &= \prod^{n}_{i=1} \frac{\left(1-\left(p_i+\delta \right)\right)}{1-\left(p_i+\delta \right) Z^{-1}}. \nonumber \end{align} By partial fraction expansion~\cite{Oppenheim}, the last term can be written as a sum of rational functions whose denominators are $1-(p_i+\delta)Z^{-1}$. Therefore, by the inverse Z-transform~\cite{Oppenheim}, we can prove that $\exp( \limsup_{s \rightarrow \infty} \frac{1}{s} \log \mathbb{P}(S'=s) ) \leq \max_{1 \leq i \leq n} \{ p_i + \delta\}$.\\
On the other hand, since $\esssup \mathbb{P}(k_i=k | \mathcal{F}_{i-1})$ is bounded by $1$, for all $\delta>0$ we can find positive $c_i$ such that \begin{align}
\esssup \mathbb{P}(k_i=k | \mathcal{F}_{i-1}) \leq c_i \left(p_i+\delta\right)^{k}\left(1-\left(p_i+\delta\right)\right)=c_i \mathbb{P}(k_i'=k) \nonumber \end{align} for all $k \in \mathbb{Z}^+$. Then \begin{align}
&\esssup \mathbb{P}( S = s | \mathcal{F}_0 ) \nonumber \\
&=\esssup \sum_{s=s_1+\cdots+s_n} \mathbb{P}(k_1=s_1|\mathcal{F}_0)\mathbb{P}(k_2=s_2|\mathcal{F}_0,k_1=s_1)\cdots \mathbb{P}(k_n=s_n|\mathcal{F}_0,k_1=s_1,\cdots,k_{n-1}=s_{n-1}) \nonumber \\
&\leq \sum_{s=s_1+\cdots+s_n} \esssup \mathbb{P}(k_1=s_1|\mathcal{F}_0) \esssup \mathbb{P}(k_2=s_2|\mathcal{F}_1)\cdots \esssup \mathbb{P}(k_n=s_n|\mathcal{F}_{n-1}) \nonumber \\ &\leq \prod_{1 \leq i \leq n}c_i \cdot \sum_{s=s_1+\cdots+s_n} \mathbb{P}(k_1'=s_1) \mathbb{P}(k_2'=s_2)\cdots \mathbb{P}(k_n'=s_n) \nonumber \\ &\leq \prod_{1 \leq i \leq n}c_i \cdot \mathbb{P}(S'=s). \nonumber \end{align}
Thus, $\exp( \limsup_{s \rightarrow \infty}\esssup \frac{1}{s} \log \mathbb{P}(S=s | \mathcal{F}_0) ) \leq \max_{1 \leq i \leq n}\{ p_i+\delta\}$.
Since this holds for all $\delta>0$, $\exp( \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}(S=s | \mathcal{F}_0) ) \leq \max_{1 \leq i \leq n}\{ p_i \}$. \end{proof}
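Lemma~\ref{lem:app:geo} can be checked by exact convolution of two geometric p.m.f.s (illustrative tail parameters $p_1=0.3$ and $p_2=0.6$; the tail rate of the sum should match the heavier of the two):

```python
import numpy as np

p1, p2 = 0.3, 0.6
N = 400
s = np.arange(N)

pmf1 = (1 - p1) * p1 ** s.astype(float)     # geometric on {0, 1, ...}
pmf2 = (1 - p2) * p2 ** s.astype(float)
pmf_sum = np.convolve(pmf1, pmf2)[:N]       # exact pmf of k1 + k2

# The tail exponent of the sum equals the heaviest of the two tails:
rate = np.log(pmf_sum[N - 1]) / (N - 1)
assert abs(np.exp(rate) - max(p1, p2)) < 0.01
print(np.exp(rate))   # close to 0.6
```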
The next lemma shows how the large deviation principle~\cite{Dembo} can be applied to stopping times, i.e. it formalizes the ``test channel'' and ``distance'' ideas shown in the power property of Section~\ref{sec:powerproperty}.
\begin{lemma} For given $n$, consider discrete random variables $k_1, k_2, \cdots, k_n$ and $\sigma$-algebra $\mathcal{F}$. The probability mass functions of $k_1, k_2 \cdots, k_n$ satisfy \begin{align}
\exp ( \limsup_{k \rightarrow \infty} \esssup \frac{1}{k} \log \mathbb{P}\{ k_i = k | \mathcal{F} \} ) \leq p_i \nonumber \end{align} and $k_1, k_2, \cdots, k_n$ are conditionally independent given $\mathcal{F}$.\\ For given sets $T_1, T_2, \cdots, T_m \subseteq \{1,2,\cdots, n \}$, define stopping times $M_1, \cdots, M_m$ as \begin{align} M_i := \max_{t \in T_i} k_t \end{align} and a stopping time $S$ as \begin{align} S := \min_{1 \leq i \leq m} M_i. \end{align}
Then, \begin{align}
\exp \left(\limsup_{k \rightarrow \infty} \esssup \frac{1}{k} \log \mathbb{P}\{ S=k | \mathcal{F} \}\right) \leq
\max_{T=\{t_1, t_2, \cdots, t_{|T|}\} \subseteq \{1,2,\cdots, n \} \small{\mbox{ s.t. }}T \cap T_i \neq \emptyset \small{\mbox{ for all }}i
} p_{t_1} p_{t_2} \cdots p_{t_{|T|}}. \nonumber \end{align} \label{lem:dis:geo0} \end{lemma} \begin{proof}
Since the assumption bounds the exponential decay rate of $\esssup \mathbb{P} \{ k_i = k | \mathcal{F} \}$ by $p_i$, and the probability itself is bounded by $1$, for all $\delta > 0$ we can find $c>1$ such that \begin{align}
\esssup \mathbb{P}\{ k_i = k | \mathcal{F} \} \leq c(p_i + \delta)^{k}\left(1- \left(p_i + \delta \right)\right).\nonumber \end{align} Thus, we have \begin{align}
\esssup \mathbb{P}\{ k_i \geq k | \mathcal{F} \} & \leq c (p_i + \delta)^{k}.\nonumber \end{align} Therefore, \begin{align}
\esssup \mathbb{P}\{ S=k | \mathcal{F} \} &\leq \esssup \mathbb{P}\{ S \geq k | \mathcal{F} \} \nonumber \\
&= \esssup \mathbb{P}\{ M_1 \geq k, \cdots, M_m \geq k | \mathcal{F} \} \nonumber \\
&= \esssup \mathbb{P}\{ \mbox{There exists $T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \}$ s.t. } T \cap T_i \neq \emptyset \mbox{ for all } i \mbox{ and } k_{t_1} \geq k, \cdots, k_{t_{|T|}} \geq k | \mathcal{F} \} \nonumber \\ &\leq \sum_{ \small{
\begin{array}{c}T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \} \\ \small{\mbox{ s.t. }} T \cap T_i \neq \emptyset \small{\mbox{ for all }} i\end{array}} }
\esssup \mathbb{P} \{ k_{t_1} \geq k, k_{t_2} \geq k, \cdots, k_{t_{|T|}} \geq k | \mathcal{F} \} \nonumber \\ &\leq
|\{ T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \} {\mbox{ s.t. }} T \cap T_i \neq \emptyset {\mbox{ for all }} i \}| \label{eqn:geo:large1} \\ &\quad \cdot \max_{ \small{ \begin{array}{c}
T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \} \\ \small{\mbox{ s.t. }} T \cap T_i \neq \emptyset \small{\mbox{ for all }} i \end{array} }
} \esssup \mathbb{P}\{ k_{t_1}\geq k | \mathcal{F} \} \cdots \esssup \mathbb{P}\{ k_{t_{|T|}} \geq k | \mathcal{F} \} \nonumber\\
& \leq c^n |\{ T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \} {\mbox{ s.t. }} T \cap T_i \neq \emptyset {\mbox{ for all }} i \}| \nonumber \\ &\quad \cdot \max_{ \small{ \begin{array}{c}
T=\{t_1,t_2,\cdots,t_{|T|} \} \subseteq \{1,\cdots,n \} \\ \small{\mbox{ s.t. }} T \cap T_i \neq \emptyset \small{\mbox{ for all }} i \end{array} } }
(p_{t_1}+\delta)^{k-1} (p_{t_2}+\delta)^{k-1} \cdots (p_{t_{|T|}}+\delta)^{k-1}. \nonumber \end{align} \eqref{eqn:geo:large1} follows from the union bound. Since the above inequality holds for all $\delta>0$, \begin{align}
\exp\left( \limsup_{k \rightarrow \infty} \esssup \frac{1}{k} \log \mathbb{P} \{ S = k | \mathcal{F} \} \right) \leq
\max_{T=\{t_1, t_2, \cdots, t_{|T|}\} \subseteq \{1,2,\cdots, n \} \small{\mbox{ s.t. }}T \cap T_i \neq \emptyset \small{\mbox{ for all }}i
} p_{t_1} p_{t_2} \cdots p_{t_{|T|}}. \nonumber \end{align} \end{proof}
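The bound in the lemma can be evaluated by brute force for small instances. The sketch below, with hypothetical rates `p` and index sets `T_sets` (not from the text), enumerates every subset $T$ that intersects all $T_i$ and takes the maximal product $p_{t_1}\cdots p_{t_{|T|}}$.

```python
import itertools
import math

# hypothetical per-variable decay rates p_t and index sets T_i from the lemma statement
p = {1: 0.5, 2: 0.4, 3: 0.2}
T_sets = [{1, 2}, {2, 3}]

best = 0.0
for r in range(1, len(p) + 1):
    for T in itertools.combinations(p, r):
        # keep only subsets T that intersect every T_i ("hitting sets")
        if all(set(T) & Ti for Ti in T_sets):
            best = max(best, math.prod(p[t] for t in T))

# T = {2} hits both sets, so the bound equals p_2 = 0.4
assert best == 0.4
```

Here the dominant error event is the single index $t=2$ shared by both sets, matching the intuition that $S$ can be large only if every $M_i$ is large.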
\subsection{Lemmas about the Observability Gramian of Continuous-Time Systems} \label{sec:app:3} In linear system theory~\cite{Chen}, the observability Gramian plays a crucial role in estimating states from observations. We therefore study the behavior of the observability Gramian, in particular the norm of its inverse.
First, we start with a corollary of the classic rearrangement inequality~\cite{hardy1988inequalities}. \begin{lemma}[Rearrangement Inequality] For $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$, $0 \leq k_1 \leq k_2 \leq \cdots \leq k_m$, and any permutation map $\sigma$, the following inequality is true: \begin{align} e^{-\lambda_{\sigma(1)} k_{1}} e^{-\lambda_{\sigma(2)} k_{2}} \cdots e^{-\lambda_{\sigma(m)} k_{m}} \leq e^{-\lambda_1 k_1} e^{- \lambda_2 k_2} \cdots e^{-\lambda_m k_m}. \nonumber \end{align} Moreover, the ratio of these two can also be upper bounded as \begin{align} \frac{e^{-\lambda_{\sigma(1)} k_{1}} e^{-\lambda_{\sigma(2)} k_{2}} \cdots e^{-\lambda_{\sigma(m)} k_{m}}}{e^{-\lambda_1 k_1} e^{- \lambda_2 k_2} \cdots e^{-\lambda_m k_m}} \leq e^{-(\lambda_{\sigma(m)}-\lambda_m)(k_m-k_{\sigma^{-1}(m)})}. \nonumber \end{align} \label{lem:conti:rearr} \end{lemma} \begin{proof} The first inequality directly follows from the classic rearrangement inequality. The second inequality is proved as follows: When $\sigma^{-1}(m)=m$, the inequality is trivial. When $\sigma^{-1}(m)\neq m $, we have \begin{align} &e^{-\lambda_{\sigma(1)}k_1} e^{-\lambda_{\sigma(2)}k_2} \cdots e^{-\lambda_m k_{\sigma^{-1}(m)}} \cdots e^{-\lambda_{\sigma(m-1)k_{m-1}} } e^{-\lambda_{\sigma(m)}k_m} \nonumber \\ &= \underbrace{\left(e^{-\lambda_{\sigma(1)}k_1} e^{-\lambda_{\sigma(2)}k_2} \cdots e^{-\lambda_m k_{\sigma^{-1}(m)}} \cdots e^{-\lambda_{\sigma(m-1)}k_{m-1}} \right)}_{(a)} \cdot e^{-\lambda_{\sigma(m)}k_m} \nonumber \\ &= \underbrace{\left(e^{-\lambda_{\sigma(1)}k_1} e^{-\lambda_{\sigma(2)}k_2} \cdots e^{-\lambda_{\sigma(m)} k_{\sigma^{-1}(m)}} \cdots e^{-\lambda_{\sigma(m-1)}k_{m-1}} \right)}_{(b)} \cdot \left( \frac{e^{-\lambda_m k_{\sigma^{-1}(m)}}}{e^{-\lambda_{\sigma(m)}k_{\sigma^{-1}(m)}}} \right) \cdot e^{-\lambda_{\sigma(m)}k_m}. 
\label{eqn:arrange:1} \end{align} We can notice that the exponent of $(a)$ has $\{ \lambda_1, \lambda_2,\cdots, \lambda_m \} \setminus \{\lambda_{\sigma(m)}\}$ and $\{ k_1, k_2, \cdots, k_m \} \setminus \{ k_m \}$ terms in it, and the exponent of $(b)$ has \begin{align} &\left( \{ \lambda_1, \lambda_2,\cdots, \lambda_m \} \setminus \{\lambda_{\sigma(m)}\} \right) \cup \{ \lambda_{\sigma(m)} \} \setminus \{ \lambda_m \} \nonumber \\ &=\{ \lambda_1, \lambda_2,\cdots, \lambda_m \} \setminus \{ \lambda_m \} \nonumber \end{align} and $\{ k_1, k_2, \cdots, k_m \} \setminus \{ k_m \}$ terms in it. Thus, by the first inequality of the lemma, \begin{align} (b) \leq e^{-\lambda_1 k_1} \cdots e^{-\lambda_{m-1} k_{m-1}}. \nonumber \end{align} Together with $\eqref{eqn:arrange:1}$, we have \begin{align} &\frac{e^{-\lambda_{\sigma(1)}k_1} e^{-\lambda_{\sigma(2)}k_2} \cdots e^{-\lambda_{\sigma(m)}k_m}}{ e^{-\lambda_1 k_1} e^{-\lambda_2 k_2} \cdots e^{-\lambda_m k_m} } \nonumber \\ &\leq \frac{ \left(e^{-\lambda_1 k_1} \cdots e^{-\lambda_{m-1} k_{m-1}}\right) \cdot \left( \frac{e^{-\lambda_m k_{\sigma^{-1}(m)}}}{e^{-\lambda_{\sigma(m)}k_{\sigma^{-1}(m)}}} \right) \cdot e^{-\lambda_{\sigma(m)}k_m} }{ e^{-\lambda_1 k_1} e^{-\lambda_2 k_2} \cdots e^{-\lambda_m k_m} } \nonumber \\ &= \frac{1}{e^{-\lambda_m k_m}} \cdot \left( \frac{e^{-\lambda_m k_{\sigma^{-1}(m)}}}{e^{-\lambda_{\sigma(m)}k_{\sigma^{-1}(m)}}} \right) \cdot e^{-\lambda_{\sigma(m)}k_m}=e^{(\lambda_m-\lambda_{\sigma(m)})(k_m - k_{\sigma^{-1}(m)})} \nonumber \end{align} which finishes the proof. \end{proof}
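Both inequalities of the rearrangement lemma are easy to check exhaustively for a small random instance; the sketch below draws hypothetical exponents $\lambda_i$ (sorted decreasingly) and times $k_i$ (sorted increasingly) and tests every permutation.

```python
import itertools
import math
import random

random.seed(0)
m = 5
lam = sorted((random.uniform(0, 2) for _ in range(m)), reverse=True)  # λ_1 ≥ ... ≥ λ_m ≥ 0
ks = sorted(random.uniform(0, 3) for _ in range(m))                   # k_1 ≤ ... ≤ k_m

def product(sigma):
    # e^{-λ_{σ(1)} k_1} ... e^{-λ_{σ(m)} k_m}  (0-indexed)
    return math.exp(-sum(lam[sigma[i]] * ks[i] for i in range(m)))

identity = tuple(range(m))
for sigma in itertools.permutations(range(m)):
    assert product(sigma) <= product(identity) + 1e-12          # first inequality
    inv = sigma.index(m - 1)                                    # σ^{-1}(m), 0-indexed
    bound = math.exp(-(lam[sigma[m - 1]] - lam[m - 1]) * (ks[m - 1] - ks[inv]))
    assert product(sigma) <= bound * product(identity) + 1e-12  # second inequality
```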
Even though Theorem~\ref{thm:nonuniform} is written for a general matrix $\mathbf{C}$, we first treat the simpler case where $\mathbf{C}$ is a row vector. In fact, for the proof of the general case, we will reduce the system with a general matrix $\mathbf{C}$ to a system with a row vector $\mathbf{C}$.
First, we introduce the definitions corresponding to \eqref{eqn:conti:a2}, \eqref{eqn:conti:c2} for a row vector $\mathbf{C}$. Let $\mathbf{A_c}$ be an $m \times m$ Jordan form matrix and $\mathbf{C}$ a $1 \times m$ row vector, written as follows: \begin{align} &\mathbf{A_c}=diag\{\mathbf{A_{1,1}},\mathbf{A_{1,2}},\cdots, \mathbf{A_{1,\nu_{1}}},\cdots,\mathbf{A_{\mu,1}},\cdots,\mathbf{A_{\mu,\nu_{\mu}}}\} \label{eqn:conti:a} \\ &\mathbf{C}=\begin{bmatrix} \mathbf{C_{1,1}} & \mathbf{C_{1,2}} & \cdots & \mathbf{C_{1,\nu_{1}}} & \cdots & \mathbf{C_{\mu,1}} & \cdots & \mathbf{C_{\mu,\nu_{\mu}}} \end{bmatrix} \label{eqn:conti:c} \\ &\mbox{where } \mathbf{A_{i,j}} \mbox{ is a Jordan block with eigenvalue $\lambda_{i,j}+\sqrt{-1}\omega_{i,j}$ and size $m_{i,j}$} \nonumber \\
&\quad\quad m_{i,1} \leq m_{i,2} \leq \cdots \leq m_{i,\nu_i} \mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad\quad m_i=\sum_{1 \leq j \leq \nu_i} m_{i,j} \mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad\quad \lambda_{i,1}=\lambda_{i,2}=\cdots =\lambda_{i,\nu_i} \mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad\quad \lambda_{1,1}>\lambda_{2,1} > \cdots > \lambda_{\mu,1} \geq 0 \nonumber \\ &\quad\quad \omega_{i,1}, \cdots ,\omega_{i,\nu_i} \mbox{ are pairwise distinct} \nonumber \\ &\quad\quad \mathbf{C_{i,j}}\mbox{ is a $1 \times m_{i,j}$ complex matrix and its first element is non-zero} \nonumber\\ &\quad\quad \mbox{$\lambda_i+ \sqrt{-1} \omega_i$ is the $(i,i)$ element of $\mathbf{A_c}$}.\nonumber \end{align} Note that the real parts of the eigenvalues of $\mathbf{A_{i,1}}, \cdots, \mathbf{A_{i,\nu_i}}$ are the same, but the eigenvalues of all Jordan blocks $\mathbf{A_{i,j}}$ are distinct. Therefore, by Theorem~\ref{thm:jordanob}, the condition that the first elements of $\mathbf{C_{i,j}}$ are non-zero corresponds to the observability of $(\mathbf{A_c},\mathbf{C})$.
The following lemma upper bounds the determinant of the observability Gramian of the sampled continuous-time system. \begin{lemma} Let $\mathbf{A_c}$ and $\mathbf{C}$ be given as \eqref{eqn:conti:a} and \eqref{eqn:conti:c}. For $0 \leq k_1 \leq k_2 \leq \cdots \leq k_m$, there exist $a > 0$ and $p \in \mathbb{Z}^+$ such that \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}}\\ \mathbf{C}e^{-k_2 \mathbf{A_c}}\\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}}\\
\end{bmatrix} \right) \right| \leq a (k_m^p+1) \prod_{1 \leq i \leq m } e^{-k_i \lambda_i} \nonumber \end{align} where $\lambda_i$ is the real part of the $(i,i)$ component of $\mathbf{A_c}$. \label{lem:det:upper} \end{lemma} \begin{proof} First, consider the diagonal case, i.e. $\mathbf{A_c}=\begin{bmatrix} \lambda_1+ j \omega_1 & 0 & \cdots & 0 \\ 0 & \lambda_2+ j \omega_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m + j \omega_m \end{bmatrix}$. Then, \begin{align}
&\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix} \right) \right| \nonumber \\
&=\left| \sum_{\sigma \in S_m} sgn(\sigma) \prod^m_{i=1} c_i e^{-k_{\sigma(i)}(\lambda_i+j \omega_i)} \right| \nonumber \\
&\leq m! \max_{\sigma \in S_m} \left| \prod^m_{i=1} c_i e^{-k_{\sigma(i)}(\lambda_i+j \omega_i)} \right| \nonumber \\
&= m! \left| \prod^m_{i=1}c_i \right| \max_{\sigma \in S_m} \left| \prod^m_{i=1} e^{-k_{\sigma(i)}\lambda_i} \right| \nonumber \\
&= m! \left| \prod^m_{i=1}c_i \right| \prod^m_{i=1} e^{-k_i \lambda_i} (\because Lemma~\ref{lem:conti:rearr}) \nonumber \\ &\lesssim \prod^m_{i=1} e^{-k_i \lambda_i} \label{eqn:matrix:det1} \end{align} where $c_i$ is the $i$th component of $\mathbf{C}$, $S_m$ is the set of all permutations on $\{1,\cdots,m \}$, and $sgn(\sigma)$ is $+1$ if $\sigma$ is an even permutation and $-1$ otherwise. Therefore, the lemma is true for a diagonal $\mathbf{A_c}$.
To extend to a general Jordan matrix $\mathbf{A_c}$, consider the matrix $\mathbf{A_c'}$ obtained by erasing the off-diagonal elements of $\mathbf{A_c}$. Then, we can easily see that the ratio between each element of $\begin{bmatrix} \mathbf{C} e^{-k_1 \mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-k_m \mathbf{A_c}} \end{bmatrix}$ and the corresponding element of $\begin{bmatrix} \mathbf{C} e^{-k_1 \mathbf{A_c'}} \\ \vdots \\ \mathbf{C} e^{-k_m \mathbf{A_c'}} \end{bmatrix}$ is a polynomial in the $k_i$ of degree less than $m$. Therefore, by repeating the steps of \eqref{eqn:matrix:det1} we obtain \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}
\right)\right| \lesssim (1+k_m^{m^2}) \prod^m_{i=1} e^{-k_i \lambda_i}, \nonumber \end{align} which finishes the proof. \end{proof}
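For the diagonal case, the bound \eqref{eqn:matrix:det1} can be checked numerically. The instance below is hypothetical: random decreasing real parts $\lambda_i$, random $\omega_i$, a random nonzero row vector $\mathbf{C}$, and increasing sampling times $k_i$.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
m = 4
lam = np.sort(rng.uniform(0, 2, m))[::-1]   # λ_1 ≥ ... ≥ λ_m ≥ 0 (hypothetical)
omega = rng.uniform(0, 5, m)                # imaginary parts
c = rng.uniform(0.5, 2, m)                  # nonzero entries of the row vector C
ks = np.sort(rng.uniform(0, 3, m))          # 0 ≤ k_1 ≤ ... ≤ k_m
d = lam + 1j * omega                        # diagonal of A_c

# row i of the observability matrix is C e^{-k_i A_c}
M = np.array([c * np.exp(-k * d) for k in ks])
det = abs(np.linalg.det(M))

# diagonal-case bound from the proof: |det| ≤ m! |∏ c_i| ∏ e^{-k_i λ_i}
bound = math.factorial(m) * np.prod(c) * np.exp(-np.sum(ks * lam))
assert det <= bound + 1e-12
```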
The next lemma upper bounds the norm of the inverse of the observability Gramian, given the lower bound on the observability Gramian determinant. Therefore, we can reduce the matrix inverse problem to the matrix determinant problem.
\begin{lemma} Consider $\mathbf{A_c}$ and $\mathbf{C}$ given as \eqref{eqn:conti:a} and \eqref{eqn:conti:c}. Let $\lambda_i$ be the real part of the $(i,i)$ element of $\mathbf{A_c}$. Then, there exists a positive polynomial $p(k)$ such that for all $\epsilon>0$ and $0 \leq k_1 \leq \cdots \leq k_m$, if \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix} \right) \right| \geq \epsilon \prod_{1 \leq i \leq m} e^{- k_i \lambda_i} \nonumber \end{align} then \begin{align}
\left| \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}^{-1} \right|_{max} \leq \frac{p(k_m)}{\epsilon} e^{\lambda_1 k_m}. \nonumber \end{align} \label{lem:conti:inverse2} \end{lemma} \begin{proof} Let $\mathbf{O_{i,j}}$ be the matrix obtained by removing the $i$th row and $j$th column of $\begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}$. Let $\mathbf{A_c(j)}$ be the $(m-1) \times (m-1)$ matrix that we can obtain by removing the $j$th row and column of $\mathbf{A_c}$, and $\mathbf{C(j)}$ be the row vector that we can obtain by removing the $j$th element of $\mathbf{C}$.
First, let's consider the case when $\mathbf{A_c}$ is a diagonal matrix. In this case, using properties of diagonal matrices we can easily check that $\mathbf{O_{i,j}}=\begin{bmatrix} \mathbf{C(j)e^{-k_1 \mathbf{A_c(j)}}} \\ \vdots \\ \mathbf{C(j)e^{-k_{i-1} \mathbf{A_c(j)}}} \\ \mathbf{C(j)e^{-k_{i+1} \mathbf{A_c(j)}}} \\ \vdots \\ \mathbf{C(j)e^{-k_m \mathbf{A_c(j)}}} \end{bmatrix}$.
In other words, $\mathbf{O_{i,j}}$ is again an observability Gramian for the pair $(\mathbf{A_c(j)}, \mathbf{C(j)})$. Since the $(i,j)$ cofactor $C_{i,j}$ equals, up to sign, the determinant of $\mathbf{O_{i,j}}$, we can apply Lemma~\ref{lem:det:upper} to conclude that there exists a positive polynomial $p_{i,j}$ such that \begin{align}
|C_{i,j}| \leq \left\{ \begin{array}{ll} p_{i,j}(k_m) \left(\prod^{j-1}_{l=1} e^{-\lambda_l k_l} \right)\cdot \left(\prod^{i-1}_{l=j} e^{-\lambda_{l+1}k_l} \right)\cdot \left(\prod^{m}_{l=i+1}e^{-\lambda_l k_l} \right) & \mbox{if }i \geq j \ \\ p_{i,j}(k_m) \left(\prod^{i-1}_{l=1} e^{-\lambda_l k_l} \right)\cdot \left(\prod^{j-1}_{l=i} e^{-\lambda_l k_{l+1}} \right)\cdot \left(\prod^{m}_{l=j+1}e^{-\lambda_l k_l} \right) & \mbox{if }i \leq j \end{array} \right. \label{eqn:cofactorupper} \end{align}
Next, consider the case when $\mathbf{A_c}$ is a general Jordan form matrix. Compared to the case of a diagonal $\mathbf{A_c}$, each element of $\mathbf{O_{i,j}}$ differs only by a polynomial factor in $k_i$. Therefore, by the same argument as in the proof of Lemma~\ref{lem:det:upper}, we can still find a positive polynomial $p_{i,j}$ satisfying \eqref{eqn:cofactorupper}.
Moreover, since $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$ and $0 \leq k_1 \leq k_2 \leq \cdots \leq k_m$, we have \begin{align} &\left(\prod^{j-1}_{l=1} e^{-\lambda_l k_l} \right)\cdot \left(\prod^{i-1}_{l=j} e^{-\lambda_{l+1}k_l} \right)\cdot \left(\prod^{m}_{l=i+1}e^{-\lambda_l k_l} \right) \leq \prod^m_{i=2}e^{-\lambda_i k_{i-1}}, \nonumber \\ &\left(\prod^{i-1}_{l=1} e^{-\lambda_l k_l} \right)\cdot \left(\prod^{j-1}_{l=i} e^{-\lambda_l k_{l+1}} \right)\cdot \left(\prod^{m}_{l=j+1}e^{-\lambda_l k_l} \right) \leq \prod^m_{i=2}e^{-\lambda_i k_{i-1}}. \nonumber \end{align} Therefore, we can further bound the cofactor as follows: \begin{align}
|C_{i,j}| \leq \max_{i,j}p_{i,j}(k_m) \prod^m_{i=2}e^{-\lambda_i k_{i-1}}. \nonumber \end{align} Then, we have \begin{align}
&\left| \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}^{-1} \right|_{max} = \frac{\max_{i,j} |C_{i,j}|}{\left| \det\left( \begin{bmatrix}\mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix} \right) \right|} \leq \frac{\max_{i,j} |C_{i,j}|}{\epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i}} \nonumber \\ & \leq \frac{\max_{i,j}p_{i,j}(k_m) \prod^m_{i=2}e^{-\lambda_i k_{i-1}}}{\epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i}}\nonumber \\ &=\frac{\max_{i,j}p_{i,j}(k_m)}{\epsilon} e^{\lambda_1 k_1} \prod^{m}_{i=2}e^{\lambda_i(k_i - k_{i-1})} \nonumber \\ &\leq \frac{\max_{i,j}p_{i,j}(k_m)}{\epsilon} e^{\lambda_1 k_1} \prod^{m}_{i=2}e^{\lambda_1(k_i - k_{i-1})} (\because \lambda_1 \geq \lambda_i \geq 0, k_i - k_{i-1} \geq 0 )\nonumber \\ &=\frac{\max_{i,j}p_{i,j}(k_m)}{\epsilon} e^{\lambda_1 k_m} \nonumber \\ &\leq \frac{\sum_{i,j}p_{i,j}(k_m)}{\epsilon} e^{\lambda_1 k_m} \nonumber \end{align} Therefore, the lemma is true. \end{proof}
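The first equality used above, $\left|\mathbf{M}^{-1}\right|_{max} = \max_{i,j}|C_{i,j}| / |\det \mathbf{M}|$, is the classical cofactor formula for the inverse. A quick numerical check on a generic random matrix (unrelated to any specific $\mathbf{A_c}$):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))  # generic invertible matrix

def cofactor(A, i, j):
    """(i, j) cofactor: signed determinant of A with row i and column j removed."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

C = np.array([[cofactor(M, i, j) for j in range(4)] for i in range(4)])
inv_max = np.abs(np.linalg.inv(M)).max()

# (M^{-1})_{ij} = C_{ji} / det M, so the max-norm of the inverse is max |C_ij| / |det M|
assert np.isclose(inv_max, np.abs(C).max() / abs(np.linalg.det(M)))
```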
Now the question is reduced to showing that the determinant of the observability Gramian is large enough. We will find a sufficient condition for the determinant to be large in terms of a simpler analytic function. For this, we first need the following lemma, which asserts that polynomials grow more slowly than exponentials.
\begin{lemma} For any given polynomial $f(x)$, $\lambda > 0$ and $\epsilon>0$, there exists $a > 0$ such that \begin{align}
|f(k+x)| \leq \epsilon e^{\lambda \cdot x} \end{align} for all $x \geq a(\log(k+1)+1)$ and $k \geq 0$.
\label{lem:conti:ineq} \end{lemma} \begin{proof} Let the degree of $f(x)$ be $p$. Then, there exists $c>0$ such that for all $x \geq 0$, \begin{align}
|f(x)| \leq c( 1+x^{p+1}). \nonumber \end{align} If we compare $\frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2x)^{p+1})$ with $x$, the former grows logarithmically in $x$ while the latter grows linearly in $x$. Therefore, we can find $t>0$ such that \begin{align} \frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2x)^{p+1}) \leq x \nonumber \end{align} for all $x \geq t$. We can also find $a > 0$ such that $a(\log(k+1)+1) \geq \max\left\{ \frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2k)^{p+1}) , t \right\}$ for all $k \geq 0$.
To check the condition, $|f(k+x)| \leq \epsilon e^{\lambda \cdot x}$, we divide into two cases.
(a) When $x \leq k$,
$|f(k+x)|$ is bounded as follows: \begin{align}
|f(k+x)| &\leq c\left( 1+ \left(k+x \right)^{p+1} \right) \nonumber \\ &\leq c \left(1+ \left(2k \right)^{p+1} \right) \nonumber \\ &= \epsilon e^{\lambda(\frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2k)^{p+1}) )} \nonumber \\ &\leq \epsilon e^{\lambda \cdot x} \nonumber \end{align} where the last inequality comes from $\frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2k)^{p+1}) \leq x$.
(b) When $x > k$,
Since $t \leq x$, $\frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2x)^{p+1}) \leq x$. Then, we can bound $|f(k+x)|$ as follows: \begin{align}
|f(k+x)| &\leq c\left( 1+ \left(k+x \right)^{p+1} \right) \nonumber \\ & \leq c \left(1+ \left(2x \right)^{p+1} \right) \nonumber \\ &= \epsilon e^{\lambda(\frac{1}{\lambda} \log \frac{c}{\epsilon} + \frac{1}{\lambda} \log(1+(2x)^{p+1}) )} \nonumber \\ &\leq \epsilon e^{\lambda \cdot x}. \nonumber \end{align} Therefore, the lemma is proved. \end{proof}
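The lemma is constructive enough to test numerically. The sketch below uses a hypothetical polynomial $f$, rate $\lambda$, and tolerance $\epsilon$ (none taken from the text) and searches for a constant $a$ such that $|f(k+x)| \leq \epsilon e^{\lambda x}$ holds on a grid of points with $x \geq a(\log(k+1)+1)$.

```python
import math

# hypothetical instance: f(x) = x^2 + 3x + 1, λ = 0.5, ε = 0.1
lam, eps = 0.5, 0.1
def f(x):
    return x * x + 3 * x + 1

def holds(a, kmax=100):
    """Check |f(k+x)| <= eps * e^{lam x} on a grid of x >= a(log(k+1)+1)."""
    for k in range(kmax + 1):
        x0 = a * (math.log(k + 1) + 1)
        for dx in (0, 1, 2, 5, 10, 20, 50):
            x = x0 + dx
            if abs(f(k + x)) > eps * math.exp(lam * x):
                return False
    return True

a = 1.0
while not holds(a):   # the lemma guarantees some finite a works
    a += 1.0
assert holds(a) and a <= 30
```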
Now, we give a sufficient condition to guarantee that the determinant of the observability Gramian is large enough.
\begin{lemma} Let $\mathbf{A_c}$ and $\mathbf{C}$ be given as \eqref{eqn:conti:a} and \eqref{eqn:conti:c}. Let $a_{i,j}$ and $C_{i,j}$ denote the $(i,j)$ element and the $(i,j)$ cofactor of $\begin{bmatrix} \mathbf{C} e^{-k_1 \mathbf{A_c}} \\ \mathbf{C} e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-k_m \mathbf{A_c}}\end{bmatrix}$, respectively. Then there exist $g_{\epsilon}(k):\mathbb{R}^+ \rightarrow \mathbb{R}^+$ and $a \in \mathbb{R}^+$ such that for all $\epsilon > 0$ and $k_1,\cdots,k_m$ satisfying \begin{align} &(i)~ 0 \leq k_1 < k_2 < \cdots < k_m \nonumber \\ &(ii)~ k_{m}-k_{m-1} \geq g_\epsilon(k_{m-1}) \nonumber\\ &(iii)~ g_\epsilon(k) \leq a( 1+\log (k+1)) \nonumber \\
&(iv)~|\sum_{m-m_{\mu}+1 \leq i \leq m}a_{m,i}C_{m,i}| \geq \epsilon \prod_{1 \leq i \leq m} e^{- k_i \lambda_{i}} \nonumber \end{align} the following inequality holds: \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}
\right) \right|
\geq \frac{1}{2} \epsilon \prod_{1 \leq i \leq m} e^{- k_i \lambda_{i}}. \nonumber \end{align} \label{lem:det:lower} \end{lemma} \begin{proof}
First of all, because $\mathbf{A_c}$ is in Jordan form, it is well known that the elements of $e^{-k \mathbf{A_c}}$ take a specific form~\cite{Chen}. Thus, we can prove that for all $a_{i,j}$ there exists a polynomial $p_{i,j}(k)$ such that $a_{i,j}=p_{i,j}(k_i)e^{-k_i (\lambda_j+j \omega_j)}$. Then, we can find $p(k)$ in the form of $a(1+k^b)$ $(a>0)$ such that $p(k) \geq \max_{i,j} |p_{i,j}(k)|$ for all $k \geq 0$. Denote $\lambda':=\lambda_{\mu-1,1}-\lambda_{\mu,1}>0$. \begin{align}
&\left| \det\left(\begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}} \end{bmatrix}
\right)\right| = \left| \sum_{ 1 \leq i \leq m } a_{m,i} C_{m,i} \right| = \left| \sum_{\sigma \in S_m} sgn(\sigma) \prod^m_{i=1} a_{i,\sigma(i)} \right|\nonumber \\
& \geq \left| \sum_{ m-m_{\mu}+1 \leq i \leq m } a_{m,i} C_{m,i} \right| - \left| \sum_{ 1 \leq i \leq m-m_{\mu} } a_{m,i} C_{m,i} \right| \nonumber\\ &=
\left| \sum_{\sigma \in S_m, m-m_{\mu}+1 \leq \sigma(m) \leq m} sgn(\sigma) \prod^m_{i=1} a_{i,\sigma(i)} \right| - \left| \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} sgn(\sigma) \prod^m_{i=1} a_{i,\sigma(i)} \right| \nonumber \\
& \geq \epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i} - \left| \sum_{ 1 \leq i \leq m-m_{\mu} } a_{m,i} C_{m,i} \right| (\because Assumption~(iv)) \nonumber \\
& = \epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i} - \left| \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} sgn(\sigma) \prod^m_{i=1} a_{i,\sigma(i)} \right| \nonumber \\
& \geq \epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i} - \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} \left| \prod^m_{i=1} a_{i,\sigma(i)} \right| \nonumber \\
& = \epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i} - \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} \left| \prod^m_{i=1} p_{i,\sigma(i)}(k_i)e^{-k_i (\lambda_{\sigma(i)}+j \omega_{\sigma(i)})} \right| \nonumber \\ & \geq \epsilon \prod_{1 \leq i \leq m}e^{-k_i \lambda_i} - \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} \left( e^{(\lambda_m-\lambda_{\sigma(m)})(k_m-k_{\sigma^{-1}(m)})} \cdot \prod^m_{i=1} p(k_i) e^{-k_i \lambda_i} \right) (\because Lemma~\ref{lem:conti:rearr}) \nonumber \\
& \geq \prod_{1 \leq i \leq m} e^{-k_i \lambda_i} \left( \epsilon - \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} p(k_m)^m e^{(\lambda_m-\lambda_{\sigma(m)})(k_m - k_{\sigma^{-1}(m)})} \right) (\because p(k) \mbox{ is an increasing function.}) \nonumber \\ & \geq \prod_{1 \leq i \leq m} e^{-k_i \lambda_i} \left( \epsilon - \sum_{\sigma \in S_m, 1 \leq \sigma(m) \leq m-m_{\mu}} p(k_m)^m e^{-\lambda'(k_m - k_{m-1})} \right) (\because \lambda_{\sigma(m)} - \lambda_m \geq \lambda_{\mu-1,1} - \lambda_{\mu,1} = \lambda' ) \nonumber \\ & \geq \prod_{1 \leq i \leq m} e^{-k_i \lambda_i} \left( \epsilon - m! p(k_m)^m e^{-\lambda'(k_m - k_{m-1})} \right) \nonumber \end{align} Since $m! p(x)^m$ is a polynomial in $x$, by Lemma~\ref{lem:conti:ineq} there exists $g_{\epsilon}(k):\mathbb{R}^+ \rightarrow \mathbb{R}^+$ such that \\ (i) $g_{\epsilon}(k) \lesssim \log (k+1) + 1$\\
(ii) $| m! p(k+x)^m | \leq \frac{\epsilon}{2}e^{\lambda' \cdot x}$ for all $x \geq g_\epsilon(k)$ and $k \geq 0$.\\ Therefore, for all $k_m$ such that $k_m-k_{m-1} \geq g_{\epsilon}(k_{m-1})$, \begin{align}
&\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-k_1 \mathbf{A_c}} \\ \mathbf{C}e^{-k_2 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-k_m \mathbf{A_c}}
\end{bmatrix} \right) \right| \geq \prod_{1 \leq i \leq m} e^{-k_i \lambda_i} \left( \epsilon - \frac{\epsilon}{2}e^{\lambda' \cdot (k_m - k_{m-1})} e^{-\lambda' \cdot (k_m - k_{m-1})} \right) \geq \frac{\epsilon}{2} \prod_{1 \leq i \leq m} e^{-k_i \lambda_i}. \nonumber \end{align} Thus, the lemma is proved. \end{proof}
\subsection{Uniform Convergence of a Set of Analytic Functions (Continuous-Time Systems)} \label{app:unif:conti} We will prove that after introducing nonuniform sampling, the determinant of the observability Gramian becomes large enough regardless of the erasure pattern. Since this determinant is an analytic function, it is enough to prove that a certain set of analytic functions is large enough. To this end, we will prove that a set of analytic functions is uniformly bounded away from $0$.
First, we prove that an analytic function can vanish only on a set of Lebesgue measure zero, as long as it is not identically zero. The intuition for the lemma is that an analytic function is locally determined by its Taylor expansion. Thus, if an analytic function vanishes on an open interval of positive Lebesgue measure, it is identically zero.
\begin{lemma} For a given nonnegative integer $p$ and distinct positive reals $\omega_{i,1}, \omega_{i,2}, \cdots, \omega_{i,\nu_i}$, define \begin{align} f(x):= \sum^{p}_{i=0} x^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j} \cos(\omega_{i,j}x) + a_{I,i,j} \sin(\omega_{i,j}x) \right) \nonumber \end{align} where at least one coefficient among $a_{R,i,j}, a_{I,i,j}$ is non-zero.
Let $X$ be a uniform random variable in $[0,T]~(T>0)$. Then, for all $h \in \mathbb{R}$, the following is true: \begin{align}
\mathbb{P} \{ | f(X)-h | < \epsilon \} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0. \nonumber \end{align} \label{lem:uni:1} \end{lemma} \begin{proof}
First, notice that $f(x)-h$ is an analytic function. It is well known that if an analytic function $f(x)-h$ is not identically zero, the set $\{x \in [0,T] : f(x)-h=0 \}$ consists of isolated points~\cite{krantz2002primer} and is therefore countable. Therefore, $\mathbb{P}\{ |f(X)-h|= 0 \} =0$. Moreover, $\mathbb{P}\{ |f(X)-h| < \epsilon \} \leq \mathbb{P}\{ |f(X)-h| \leq \epsilon \}$, and the latter, as a function of $\epsilon$, is a cumulative distribution function. Since cumulative distribution functions are right-continuous, $\lim_{\epsilon \downarrow 0} \mathbb{P}\{|f(X)-h| < \epsilon \}
\leq \lim_{\epsilon \downarrow 0} \mathbb{P}\{|f(X)-h| \leq \epsilon \} = \mathbb{P}\{|f(X)-h| = 0 \}=0$.
Thus, the proof reduces to proving that $f(x)-h$ is not identically zero. Let $i^*$ be the largest $i$ such that $a_{R,i,j}$ or $a_{I,i,j}$ is non-zero for some $j$.
(i) When $i^*=0$,
In this case, there are no polynomial terms and only sinusoidal terms exist. Let's compute the energy of $f(x)-h$ on the interval $[s, s+r]$ and prove that $f(x)-h$ is not identically zero for any $s$ as long as $r$ is large enough. \begin{align} &\int^{s+r}_{s} \left( \sum^{\nu_{i^*}}_{j=1} (a_{R,i^*,j}\cos(\omega_{i^*,j} x) + a_{I,i^*,j} \sin( \omega_{i^*,j} x)) -h \right)^2 dx \nonumber \\ &= \int^{s+r}_{s} \sum^{\nu_{i^*}}_{j=1} \left(a^2_{R,i^*,j} \cos^2 ( \omega_{i^*,j} x )+a^2_{I,i^*,j} \sin^2 (\omega_{i^*,j} x) \right) + h^2 +2 \sum_{i\leq j} a_{R,i^*,i}a_{I,i^*,j} \cos(\omega_{i^*,i}x) \sin(\omega_{i^*,j}x) \nonumber \\ &+ 2 \sum_{i < j} a_{R,i^*,i}a_{R,i^*,j} \cos(\omega_{i^*,i}x)\cos(\omega_{i^*,j}x) + 2 \sum_{i < j} a_{I,i^*,i}a_{I,i^*,j} \sin(\omega_{i^*,i}x)\sin(\omega_{i^*,j}x) \nonumber \\ &- 2 \sum^{\nu_{i^*}}_{j=1} (a_{R,i^*,j}\cos(\omega_{i^*,j} x) + a_{I,i^*,j} \sin( \omega_{i^*,j} x)) h \,\,dx \nonumber \\ &= \int^{s+r}_{s} \sum^{\nu_{i^*}}_{j=1} \left(a^2_{R,i^*,j} \frac{1+\cos 2\omega_{i^*,j}x}{2}+a^2_{I,i^*,j} \frac{1-\cos 2\omega_{i^*,j}x}{2}\right) + h^2 \,\,dx\nonumber \\ &+ \int^{s+r}_{s} \sum_{i\leq j} a_{R,i^*,i}a_{I,i^*,j}\left(\sin\left(\left(\omega_{i^*,i}+\omega_{i^*,j}\right)x \right) - \sin\left(\left(\omega_{i^*,i}-\omega_{i^*,j}\right)x \right)\right)\,\,dx \nonumber \\ &+ \int^{s+r}_{s} \sum_{i < j} a_{R,i^*,i}a_{R,i^*,j} \left(\cos\left(\left(\omega_{i^*,i}-\omega_{i^*,j}\right)x \right) + \cos\left(\left(\omega_{i^*,i}+\omega_{i^*,j}\right)x \right)\right)\,\,dx \nonumber \\ &+ \int^{s+r}_{s} \sum_{i < j} a_{I,i^*,i}a_{I,i^*,j} \left(\cos\left(\left(\omega_{i^*,i}-\omega_{i^*,j}\right)x \right) - \cos\left(\left(\omega_{i^*,i}+\omega_{i^*,j}\right)x \right)\right)\,\,dx \nonumber \\ &- \int^{s+r}_{s} 2 \sum^{\nu_{i^*}}_{j=1} (a_{R,i^*,j}\cos(\omega_{i^*,j} x) + a_{I,i^*,j} \sin( \omega_{i^*,j} x)) h \,\,dx. \label{eqn:analyticnotzero} \end{align}
Therefore, as $r$ increases, the first term in \eqref{eqn:analyticnotzero} grows linearly in $r$ regardless of $s$, while the remaining terms in \eqref{eqn:analyticnotzero} are integrals of sinusoids with non-zero frequencies and hence bounded. Thus, $f(x)-h$ is not identically zero on $[s,s+r]$ for any $s$ when $r$ is large enough. In particular, there exist $\delta > 0$ and $r > 0$ such that for all $s$, $|f(x)-h| \geq \delta$ holds for some $x \in [s,s+r]$.
(ii) When $i^* \geq 1$,
In this case, polynomial terms are present, and we will prove that the term with the highest degree dominates the remaining terms.
By the argument of (i), we can find $\delta > 0$ and $r > 0$ such that for all $s \geq 0$ we can find $x \in [s,s+r]$ satisfying \begin{align}
|f(x)-h| \geq \delta x^{i^*} - \sum^{i^*-1}_{i=0} \left( \sum^{\nu_i}_{j=1} |a_{R,i,j}|+|a_{I,i,j}| \right) x^i - |h|. \nonumber \end{align}
Since we can choose $s$ arbitrarily large, $|f(x)-h|$ has to be greater than $0$ for some $x$. Thus, $f(x)-h$ is not identically zero.
Therefore, the lemma is true. \end{proof}
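A Monte Carlo experiment illustrates the lemma for one hypothetical choice of $f$, $h$, and $T$ (the coefficients and frequencies below are illustrative, not from the text): the empirical probability of $\{|f(X)-h| < \epsilon\}$ shrinks as $\epsilon \downarrow 0$.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10.0
X = rng.uniform(0, T, 200_000)  # X ~ Uniform[0, T]

# hypothetical instance of the lemma's form with p = 1 and distinct frequencies
def f(x):
    return x * (0.7 * np.cos(1.3 * x) + 0.4 * np.sin(2.1 * x)) + 0.5 * np.cos(1.3 * x)

h = 0.3
probs = [np.mean(np.abs(f(X) - h) < eps) for eps in (1.0, 0.1, 0.01, 0.001)]
assert probs[0] > probs[1] > probs[2] > probs[3]  # shrinks as eps decreases
assert probs[3] < 0.01
```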
To prove uniform convergence, we need Dini's theorem, which states that on a compact set, monotone pointwise convergence to a continuous limit implies uniform convergence. The intuition behind this theorem is as follows: since every open cover of a compact set has a finite subcover, uniform convergence over the whole set can be reduced to uniform convergence over finitely many neighborhoods, where the required control follows from pointwise convergence and monotonicity.
\begin{theorem}[Dini's Theorem] \cite[p. 81]{Gelbaum} If $\{f_n\}$ is a sequence of functions defined on a set $A$ and converging on $A$ to a function $f$, and if\\ (i) the convergence is monotonic,\\ (ii) $f_n$ is continuous on $A$, $n=1,2,\cdots$\\ (iii) $f$ is continuous on $A$,\\ (iv) $A$ is compact,\\ then the convergence is uniform on $A$.\label{thm:dini} \end{theorem} \begin{proof} See \cite[p. 81]{Gelbaum} for the proof. \end{proof}
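Two toy examples (not from the text) illustrate why the hypotheses matter: on the compact set $[0,1]$, $f_n(x)=x^n(1-x)$ decreases pointwise to $0$ and Dini's theorem gives uniform convergence, while on the non-compact set $[0,\infty)$, $f_n(x)=x/(x+n)$ decreases pointwise to $0$ yet $\sup_x f_n(x)=1$ for every $n$.

```python
import numpy as np

# compact domain: f_n(x) = x^n (1 - x) on [0, 1] decreases monotonically to 0,
# so Dini's theorem applies and the sup norms must shrink to 0
x = np.linspace(0.0, 1.0, 10_001)
sups = [np.max(x ** n * (1 - x)) for n in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(sups, sups[1:]))
assert sups[-1] < 1e-3

# non-compact domain: f_n(x) = x / (x + n) on [0, inf) also decreases pointwise
# to 0, but its sup stays near 1 for every n, so convergence is not uniform
y = np.linspace(0.0, 1e6, 10_001)
assert np.max(y / (y + 1000)) > 0.99
```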
Now, using the pointwise convergence of Lemma~\ref{lem:uni:1} and Dini's theorem, we can prove the uniform convergence of the relevant functions over a set of parameters.
\begin{lemma} Let $p$, $\nu_{0}, \cdots, \nu_p $ be nonnegative integers with $\nu_{p}>0$. Suppose $\gamma$ and $\Gamma$ are strictly positive reals such that $\gamma \leq \Gamma$. For each $0 \leq i \leq p$, $\omega_{i,1},\omega_{i,2}, \cdots, \omega_{i,\nu_i}$ are distinct reals. Let $X$ be a uniform random variable on $[0,T]$ for some $T > 0$. Then, for all $m,n$ such that $0 \leq m \leq p$ and $1 \leq n \leq \nu_m$, we have the following inequality: \begin{align}
\sup_{ |a_{m,n}| \geq \gamma, \forall i,j, |a_{i,j}| \leq \Gamma} \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon \right\} \rightarrow 0 \mbox{ as }\epsilon \downarrow 0 \nonumber \end{align} where $a_{i,j}$ are taken from $\mathbb{C}$. \label{lem:single} \end{lemma}
\begin{proof} The purpose of this proof is reducing the lemma to Dini's theorem (Theorem~\ref{thm:dini}).
First, we will assume the $\omega_{i,j}$ are positive without loss of generality. To justify this, let $\omega_{min}=\min\{ \min_{i,j} \omega_{i,j}, 0 \}-\delta$ for some $\delta>0$. Then, \begin{align}
&\sup_{|a_{m,n}| \geq \gamma, |a_{i,j}| \leq \Gamma} \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon \right\} \nonumber\\
&=\sup_{|a_{m,n}| \geq \gamma, |a_{i,j}| \leq \Gamma} \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j(\omega_{i,j}-\omega_{min})X} \right) \right| < \epsilon \right\}. \nonumber \end{align} Here, for each $i$, $\omega_{i,1}-\omega_{min},\omega_{i,2}-\omega_{min},\cdots,\omega_{i,\nu_i}-\omega_{min}$ are distinct and strictly positive. Therefore, without loss of generality, we can assume that for each $i$, $\omega_{i,1},\omega_{i,2},\cdots,\omega_{i,\nu_i}$ are distinct and strictly positive.
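As a quick sanity check on this reduction (an illustration with made-up coefficients and frequencies, not part of the proof), the modulus is indeed invariant under a common frequency shift, because the two sums differ only by the unit-modulus factor $e^{j\omega_{min}x}$:

```python
import numpy as np

# Sanity check of the frequency-shift reduction: |sum_j a_j e^{i w_j x}|
# equals |sum_j a_j e^{i (w_j - w_min) x}| since the two differ by the
# unit-modulus factor e^{i w_min x}.  All values below are arbitrary.
rng = np.random.default_rng(0)
a = rng.normal(size=4) + 1j * rng.normal(size=4)  # arbitrary complex coefficients
w = rng.normal(size=4)                            # arbitrary real frequencies
w_min = min(float(w.min()), 0.0) - 0.5            # the shift used in the text
for x in rng.uniform(0.0, 3.0, size=10):
    lhs = abs(np.sum(a * np.exp(1j * w * x)))
    rhs = abs(np.sum(a * np.exp(1j * (w - w_min) * x)))
    assert abs(lhs - rhs) < 1e-9
```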
Let $a_{i,j}=a_{R,i,j}-j a_{I,i,j}$ where $a_{R,i,j}$ and $a_{I,i,j}$ are real. Since $|a_{m,n}| \geq \gamma$, at least one of $|a_{R,m,n}|$ or $|a_{I,m,n}|$ must be at least $\frac{\gamma}{\sqrt{2}}$. First, consider the case when $|a_{R,m,n}| \geq \frac{\gamma}{\sqrt{2}}$. It is sufficient to prove that the real part of $\sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right)$ satisfies the lemma, i.e. \begin{align}
\sup_{ a_{R,m,n} \geq \frac{\gamma}{\sqrt{2}} , |a_{R,i,j}| \leq \Gamma, |a_{I,i,j}| \leq \Gamma} \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon \right\} \rightarrow 0 \mbox{ as }\epsilon \downarrow 0.\nonumber \end{align}
Here, we take the supremum over $a_{R,m,n}\geq \frac{\gamma}{\sqrt{2}}$ instead of the supremum over $|a_{R,m,n}|\geq \frac{\gamma}{\sqrt{2}}$ by symmetry.
Now, we apply Dini's theorem (Theorem~\ref{thm:dini}) to prove the claim.
Fix a positive sequence $\epsilon_i$ such that $ \epsilon_i \downarrow 0$ as $i \rightarrow \infty$. Define a sequence of functions $\{ f_i \}$ as \begin{align}
f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}) := \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\}\nonumber \end{align}
where the domain $A$ of the functions is $A:=\{(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,{\nu_p}}): a_{R,m,n} \geq \frac{\gamma}{\sqrt{2}} , |a_{R,i,j}| \leq \Gamma, |a_{I,i,j}| \leq \Gamma\}$. Let $f(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p})$ be the identically zero function. Then, we will prove that $\{f_i\}$ converges to $f=0$ uniformly on $A$ by checking the conditions of Theorem~\ref{thm:dini}.
$\bullet$ $f_i$ converges pointwise to $f$:
Since $a_{R,m,n} \geq \frac{\gamma}{\sqrt{2}}$, $\sum^{p}_{i=0} x^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}x)+a_{I,i,j}\sin(\omega_{i,j}x) \right)$ satisfies the assumptions of Lemma~\ref{lem:uni:1}. Thus, for all $h$ \begin{align}
\mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) - h \right| < \epsilon \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0.\label{eqn:lem:single:2} \end{align} Therefore, by selecting $h=0$, $f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p})$ converges to $f=0$ for all $a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}$ in $A$.
$\bullet$ Convergence is monotone: Since the $\epsilon_i$ decrease monotonically to $0$, the sequence $\{f_i\}$ is monotonically decreasing as well. Thus, the convergence is monotone.
$\bullet$ $f_i$ is continuous on $A$: For continuity (uniform continuity is not required), we will prove that for given $a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}$ and for all $\sigma > 0$, there exists $\delta(\sigma)>0$ such that $|f_i(a_{R,1,1}+\nabla a_{R,1,1},a_{I,1,1}+\nabla a_{I,1,1},\cdots,a_{I,p,\nu_p}+\nabla a_{I,p,\nu_p})-f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p})|<\sigma$ for all $|\nabla a_{R,i,j}|<\delta(\sigma)$ and $|\nabla a_{I,i,j}|<\delta(\sigma)$.
By \eqref{eqn:lem:single:2}, we can find $\delta'(\sigma)$ for all $\sigma$ such that \begin{align}
&\mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) - \epsilon_i \right| < \delta'(\sigma) \right\} < \frac{\sigma}{2} \mbox{ and } \nonumber \\
&\mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) - \left(-\epsilon_i \right) \right| < \delta'(\sigma) \right\} < \frac{\sigma}{2}\nonumber. \end{align}
Denote $\delta(\sigma):= \frac{\min\left(\frac{1}{T^p}, 1 \right)}{2 \sum^{p}_{i=0}\nu_i} \delta'(\sigma)$. Then, for all $|\nabla a_{R,i,j}| < \delta(\sigma)$ and $|\nabla a_{I,i,j}| < \delta(\sigma)$, the following inequality is true. \begin{align}
&\mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \left(a_{R,i,j}+\nabla a_{R,i,j}\right)\cos(\omega_{i,j}X)+\left(a_{I,i,j}+ \nabla a_{I,i,j} \right)\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\geq \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i - \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \nabla a_{R,i,j}\cos(\omega_{i,j}X)+ \nabla a_{I,i,j}\sin(\omega_{i,j}X) \right)
\right| \right\} \nonumber \\
&\geq \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i - \delta'(\sigma) \right\} \label{eqn:deltaprime}\\
&= \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\quad -\mathbb{P}\left\{ \epsilon_i - \delta'(\sigma) \leq \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\geq \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\quad -\mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right)-\epsilon_i \right| < \delta'(\sigma) \right\} \nonumber \\
&\quad -\mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right)-(-\epsilon_i) \right| < \delta'(\sigma) \right\} \nonumber \\
&> \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} - \sigma \nonumber. \end{align} Here, \eqref{eqn:deltaprime} can be shown as follows: \begin{align}
&\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \nabla a_{R,i,j}\cos(\omega_{i,j}X)+ \nabla a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| \nonumber \\
& \leq \sum^{p}_{i=0} |X^i| \sum^{\nu_i}_{j=1}\left( |\nabla a_{R,i,j}| + |\nabla a_{I,i,j}| \right) \nonumber \\ & \leq \max(T^p, 1) \cdot 2 \left( \sum^{p}_{i=0}\nu_i \right) \delta(\sigma) \quad (\because 0 \leq X \leq T\mbox{ w.p. 1})\nonumber \\ & = \delta'(\sigma) \quad (\because \mbox{definition of }\delta(\sigma)) \end{align}
Therefore, by the definition of $f_i$ we have \begin{align} f_i(a_{R,1,1}+\nabla a_{R,1,1},a_{I,1,1}+\nabla a_{I,1,1},\cdots,a_{I,p,\nu_p}+\nabla a_{I,p,\nu_p})-f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}) > -\sigma \label{eqn:lem:single:3}. \end{align}
Likewise, we can prove that \begin{align}
&\mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \left(a_{R,i,j}+\nabla a_{R,i,j}\right)\cos(\omega_{i,j}X)+\left(a_{I,i,j}+ \nabla a_{I,i,j} \right)\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\leq \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i + \delta'(\sigma) \right\} \nonumber \\ &(\because \mbox{The same step as \eqref{eqn:deltaprime}}) \nonumber \\
&= \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\quad +\mathbb{P}\left\{ \epsilon_i \leq \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i+ \delta'(\sigma) \right\} \nonumber \\
&\leq \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} \nonumber \\
&\quad +\mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right)-\epsilon_i \right| < \delta'(\sigma) \right\} \nonumber \\
&\quad +\mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right)-(-\epsilon_i) \right| < \delta'(\sigma) \right\} \nonumber \\
&< \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{R,i,j}\cos(\omega_{i,j}X)+a_{I,i,j}\sin(\omega_{i,j}X) \right) \right| < \epsilon_i \right\} + \sigma \nonumber \end{align} which implies \begin{align} f_i(a_{R,1,1}+\nabla a_{R,1,1},a_{I,1,1}+\nabla a_{I,1,1},\cdots,a_{I,p,\nu_p}+\nabla a_{I,p,\nu_p})-f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}) < \sigma. \label{eqn:lem:single:4} \end{align} By \eqref{eqn:lem:single:3} and \eqref{eqn:lem:single:4}, \begin{align}
\left| f_i(a_{R,1,1}+\nabla a_{R,1,1},a_{I,1,1}+\nabla a_{I,1,1},\cdots,a_{I,p,\nu_p}+\nabla a_{I,p,\nu_p})-f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p}) \right| < \sigma. \nonumber \end{align} Therefore, $f_i(a_{R,1,1},a_{I,1,1},\cdots,a_{I,p,\nu_p})$ is continuous.
$\bullet$ $f$ is continuous on $A$: $f$ is obviously continuous, since $f$ is identically zero.
$\bullet$ $A$ is compact: $A$ is compact since it is closed and bounded.
Thus, by Dini's theorem (Theorem~\ref{thm:dini}), the convergence is uniform on $A$, which finishes the proof for the case of $|a_{R,m,n}| \geq \frac{\gamma}{\sqrt{2}}$. The proof for the case of $|a_{I,m,n}| \geq \frac{\gamma}{\sqrt{2}}$ follows in an identical manner. Since these two cases cover all possibilities, the function \begin{align}
g_i(a_{1,1}, \cdots, a_{p,\nu_p}):= \mathbb{P}\left\{ \left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon_i \right\} \end{align}
converges uniformly on $\{ a_{i,j} : |a_{m,n}| \geq \gamma, |a_{i,j}| \leq \Gamma \}$. This finishes the proof of the lemma. \end{proof}
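To build intuition for the statement of Lemma~\ref{lem:single}, the sketch below (a Monte Carlo illustration with arbitrarily chosen frequencies, coefficients, and parameters; not a proof) fixes one coefficient choice with $|a_{m,n}| \geq \gamma$ and watches the probability vanish as $\epsilon \downarrow 0$.

```python
import numpy as np

# Monte Carlo illustration of Lemma (lem:single) for a fixed coefficient
# choice with |a_{m,n}| >= gamma.  All numerical values are arbitrary.
rng = np.random.default_rng(1)
T, gamma = 2.0, 1.0
X = rng.uniform(0.0, T, size=200_000)
w1, w2 = 0.7, 2.3  # distinct frequencies
# two-term exponential sum with coefficients gamma and -gamma
vals = np.abs(gamma * np.exp(1j * w1 * X) - gamma * np.exp(1j * w2 * X))
probs = [np.mean(vals < eps) for eps in (1.0, 0.1, 0.01)]
assert probs[0] >= probs[1] >= probs[2]  # shrinking eps shrinks the event
assert probs[2] < 0.05                   # already small at eps = 0.01
```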
In Lemma~\ref{lem:single}, we imposed a boundedness condition on the coefficients ($|a_{i,j}| \leq \Gamma$) to guarantee compactness. However, the functions in question only get larger as the $a_{i,j}$ increase. Therefore, we can prove that Lemma~\ref{lem:single} still holds without the boundedness condition.
\begin{lemma} Let $p$ and $\nu_{0}, \cdots, \nu_p $ be nonnegative integers with $\nu_{p}>0$, and let $\gamma$ be a strictly positive real. For each $0 \leq i \leq p$, let $\omega_{i,1},\omega_{i,2}, \cdots, \omega_{i,\nu_i}$ be distinct reals. Let $X$ be a uniform random variable on $[0,T]$ for some $T > 0$. Then, for all $m,n$ such that $0 \leq m \leq p$ and $1 \leq n \leq \nu_m$, the following convergence holds: \begin{align}
\sup_{ |a_{m,n}| \geq \gamma} \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon \right\} \rightarrow 0 \mbox{ as }\epsilon \downarrow 0 \nonumber \end{align} where $a_{i,j}$ are taken from $\mathbb{C}$. \label{lem:singleun} \end{lemma} \begin{proof} Denote $\nu :=\sum^{p}_{i=0} \nu_i$. The proof is by strong induction on $\nu$.
(i) When $\nu=1$.
\begin{align}
&\sup_{|a_{p,1}| \geq \gamma} \mathbb{P}\left\{ \left| a_{p,1} X^p e^{j \omega_{p,1}X} \right| < \epsilon \right\} \label{eqn:lem:sigleun:0} \\
&= \sup_{|a_{p,1}| \geq \gamma} \mathbb{P}\left\{ \left| \frac{\gamma}{|a_{p,1}|} a_{p,1} X^p e^{j \omega_{p,1}X} \right| < \frac{\gamma}{|a_{p,1}|} \epsilon \right\} \nonumber \\
&\leq \sup_{|a'_{p,1}| = \gamma} \mathbb{P}\left\{ \left| a'_{p,1} X^p e^{j \omega_{p,1}X} \right| < \epsilon \right\} \left( \because \frac{\gamma}{|a_{p,1}|} \leq 1 \right)\label{eqn:lem:sigleun:1} \end{align} By Lemma~\ref{lem:single}, \eqref{eqn:lem:sigleun:1} converges to 0 as $\epsilon \downarrow 0$. Thus, \eqref{eqn:lem:sigleun:0} converges to 0 as $\epsilon \downarrow 0$.
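The base case also admits a closed form, which the following sketch verifies numerically (the parameter values are arbitrary): since $|e^{j\omega_{p,1}X}| = 1$, the event $\{|a_{p,1}X^p e^{j\omega_{p,1}X}| < \epsilon\}$ is exactly $\{X < (\epsilon/|a_{p,1}|)^{1/p}\}$, whose probability under $X \sim \mathrm{Uniform}[0,T]$ is $\min((\epsilon/|a_{p,1}|)^{1/p}, T)/T \rightarrow 0$ as $\epsilon \downarrow 0$.

```python
import numpy as np

# Numerical check of the nu = 1 base case (illustration; arbitrary values).
#   P{ |a X^p e^{j w X}| < eps } = min((eps/|a|)^{1/p}, T) / T
# for X ~ Uniform[0, T], maximized over |a| >= gamma at |a| = gamma.
rng = np.random.default_rng(2)
T, p, gamma, eps = 2.0, 3, 0.5, 1e-3
X = rng.uniform(0.0, T, size=1_000_000)
closed_form = min((eps / gamma) ** (1.0 / p), T) / T
mc_estimate = np.mean(np.abs(gamma * X**p * np.exp(1j * 1.7 * X)) < eps)
assert abs(closed_form - mc_estimate) < 5e-3  # Monte Carlo matches closed form
assert closed_form < 0.1                      # already small at eps = 1e-3
```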
(ii) As an induction hypothesis, we assume the lemma is true for $\nu=1,\cdots,n-1$ and prove that it still holds for $\nu = n$. We will prove this by considering two cases: (a) when no $a_{i,j}$ is much bigger than $a_{m,n}$. In this case, the claim reduces to Lemma~\ref{lem:single}. (b) when some $a_{m',n'}$ is much bigger than $a_{m,n}$. In this case, we can ignore the term associated with $a_{m,n}$ and reduce the number of terms in the functions, so the claim reduces to the induction hypothesis. Either way, the claim reduces to an already-established statement.
To prove the lemma for $\nu = n$, it is enough to show that for a fixed $\gamma$ and every $\delta > 0$, there exists $\epsilon(\delta)>0$ such that \begin{align}
\sup_{ |a_{m,n}| \geq \gamma } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} < \delta. \nonumber \end{align} By the induction hypothesis for all $(m',n') \neq (m,n)$ we can find $\epsilon_{m',n'}(\delta)>0$ such that \begin{align}
&\sup_{a_{m,n}=0, |a_{m',n'}|\geq \gamma} \mathbb{P} \left\{ \left| \sum^p_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon_{m',n'}(\delta) \right\} < \delta. \label{eqn:lem:sigleun:2} \end{align} We choose $\kappa(\delta)$ as $\min\left\{ \min_{(m',n')\neq (m,n)} \left\{ \frac{\epsilon_{m',n'}(\delta)}{2 \gamma T^m} \right\}, 1 \right\}$. By Lemma~\ref{lem:single}, there exists $\epsilon'(\delta)>0$ such that \begin{align}
&\sup_{|a_{m,n}| = \gamma , |a_{i,j}| \leq \frac{\gamma}{\kappa(\delta)} } \mathbb{P} \left\{ \left| \sum^p_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon'(\delta) \right\} < \delta. \label{eqn:lem:sigleun:22} \end{align} Denote $\epsilon(\delta):=\min\left\{ \epsilon'(\delta), \min_{(m',n')\neq (m,n)} \left\{ \frac{\epsilon_{m',n'}(\delta)}{2} \right\} \right\}$. Then, we have \begin{align}
&\sup_{ |a_{m,n}| \geq \gamma } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} \nonumber \\
&= \max \{ \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{i,j}|}{|a_{m,n}|} \leq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\}, \label{eqn:lem:sigleun:20} \\
& \max_{(m',n') \neq (m,n)} \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} \label{eqn:lem:sigleun:21} \\ &\}. \label{eqn:lem:sigleun:8} \end{align}
$\bullet$ When the $a_{i,j}$ are not much bigger than $a_{m,n}$: Let's bound the first term, \eqref{eqn:lem:sigleun:20}. Set $a'_{i,j}:=\frac{\gamma}{|a_{m,n}|}a_{i,j}$. Then, \eqref{eqn:lem:sigleun:20} is upper bounded as follows: \begin{align}
&\sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{i,j}|}{|a_{m,n}|} \leq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\}\nonumber \\
&= \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{i,j}|}{|a_{m,n}|} \leq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \frac{\gamma}{|a_{m,n}|}a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \frac{\gamma}{|a_{m,n}|}\epsilon(\delta) \right\} \nonumber \\
&= \sup_{ |a'_{m,n}| = \gamma, |a'_{i,j}| \leq \frac{\gamma}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a'_{i,j}e^{j\omega_{i,j}X} \right) \right| < \frac{\gamma}{|a_{m,n}|}\epsilon(\delta) \right\}\nonumber \\
&\leq \sup_{ |a'_{m,n}| = \gamma, |a'_{i,j}| \leq \frac{\gamma}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a'_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} (\because \frac{\gamma}{|a_{m,n}|}\leq 1) \nonumber \\
&\leq \sup_{ |a'_{m,n}| = \gamma, |a'_{i,j}| \leq \frac{\gamma}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a'_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon'(\delta) \right\} (\because \mbox{definition of $\epsilon(\delta)$})\nonumber \\ &< \delta (\because \eqref{eqn:lem:sigleun:22})\label{eqn:lem:sigleun:3} \end{align}
$\bullet$ When some $a_{m',n'}$ is much bigger than $a_{m,n}$: Let's bound the second term, \eqref{eqn:lem:sigleun:21}. For given $m', n'$, set $a''_{i,j}:=\frac{\gamma}{|a_{m',n'}|}a_{i,j}$. Then, \eqref{eqn:lem:sigleun:21} is upper bounded by \begin{align}
&\max_{(m',n') \neq (m,n)} \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} \nonumber \\
&= \max_{(m',n') \neq (m,n)} \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \frac{\gamma}{|a_{m',n'}|} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \frac{\gamma}{|a_{m',n'}|}\epsilon(\delta) \right\} \nonumber \\
&\leq \max_{(m',n') \neq (m,n)} \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} } \mathbb{P}\Bigg\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \frac{\gamma}{|a_{m',n'}|} a_{i,j}e^{j\omega_{i,j}X} \right)
- X^m \frac{\gamma}{|a_{m',n'}|}a_{m,n}e^{j \omega_{m,n} X }
\right| \nonumber \\
&< \frac{\gamma}{|a_{m',n'}|}\epsilon(\delta) + \frac{\gamma }{|a_{m',n'}|} |a_{m,n}| T^m \Bigg\} \nonumber \\
&\leq \max_{(m',n') \neq (m,n)} \sup_{ |a_{m,n}| \geq \gamma, \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} \frac{\gamma}{|a_{m',n'}|} a_{i,j}e^{j\omega_{i,j}X} \right)
- X^m \frac{\gamma}{|a_{m',n'}|}a_{m,n}e^{j \omega_{m,n} X }
\right| < \epsilon_{m',n'}(\delta) \right\} \label{eqn:lem:sigleun:4} \\
&\leq \max_{(m',n') \neq (m,n)} \sup_{ a_{m,n}''=0, |a_{m',n'}''| = \gamma } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a''_{i,j}e^{j\omega_{i,j}X} \right)
\right| < \epsilon_{m',n'}(\delta) \right\} \quad
(\because \mbox{By definition, } a''_{m',n'}=\frac{\gamma}{|a_{m',n'}|} a_{m',n'}) \nonumber \\
&< \delta (\because \eqref{eqn:lem:sigleun:2}) \label{eqn:lem:sigleun:7} \end{align} Here, \eqref{eqn:lem:sigleun:4} can be derived as follows: First, we have \begin{align} 1 &\geq \kappa(\delta)\quad (\because \mbox{Definition of $\kappa(\delta)$}) \nonumber \\
&\geq \frac{\gamma \cdot \kappa(\delta)}{|a_{m,n}|} \quad (\because |a_{m,n}|\geq \gamma) \nonumber \\
&\geq \frac{\gamma}{|a_{m',n'}|}. \quad (\because \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} ) \label{eqn:lem:sigleun:5} \end{align} We also have \begin{align}
\frac{\gamma}{|a_{m',n'}|} |a_{m,n}| T^m &\leq \gamma \cdot \kappa(\delta) T^m \quad (\because \frac{|a_{m',n'}|}{|a_{m,n}|} \geq \frac{1}{\kappa(\delta)} )\nonumber \\ &\leq \gamma \frac{\epsilon_{m',n'}(\delta)}{2 \gamma T^m} T^m \quad(\because \mbox{By definition, } \kappa(\delta) \leq \frac{\epsilon_{m',n'}(\delta)}{2 \gamma T^m}) \nonumber \\ &= \frac{\epsilon_{m',n'}(\delta)}{2}. \label{eqn:lem:sigleun:6} \end{align} Therefore, \begin{align}
\frac{\gamma}{|a_{m',n'}|}\epsilon(\delta) + \frac{\gamma }{|a_{m',n'}|} |a_{m,n}| T^m &\leq \epsilon(\delta)+\frac{\epsilon_{m',n'}(\delta)}{2} \quad (\because \eqref{eqn:lem:sigleun:5},\eqref{eqn:lem:sigleun:6} )\nonumber \\ &\leq \epsilon_{m',n'}(\delta). \quad (\because \mbox{By definition, }\epsilon(\delta) \leq \frac{\epsilon_{m',n'}(\delta)}{2}) \nonumber \end{align} Therefore, \eqref{eqn:lem:sigleun:4} is true.
By plugging \eqref{eqn:lem:sigleun:3} and \eqref{eqn:lem:sigleun:7} into \eqref{eqn:lem:sigleun:8}, we get \begin{align}
&\sup_{ |a_{m,n}| \geq \gamma } \mathbb{P}\left\{\left| \sum^{p}_{i=0} X^i \left( \sum^{\nu_i}_{j=1} a_{i,j}e^{j\omega_{i,j}X} \right) \right| < \epsilon(\delta) \right\} < \delta, \nonumber \end{align} which finishes the proof. \end{proof}
\subsection{Proof of Lemma~\ref{lem:conti:mo}} \label{sec:app:2}
In this section, we will combine the properties of the observability Gramian shown in Section~\ref{sec:app:3} with the uniform convergence results of Section~\ref{app:unif:conti}, and prove Lemma~\ref{lem:conti:mo} on page~\pageref{lem:conti:mo}.
We first prove the following lemma which tells us that the determinant of the observability Gramian is large with high probability under a cofactor condition on the Gramian. \begin{lemma} Let $\mathbf{A_c}$ and $\mathbf{C}$ be given as \eqref{eqn:conti:a} and \eqref{eqn:conti:c}. Let $a_{i,j}$ and $C_{i,j}$ be the $(i,j)$ element and cofactor of $\begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_1)\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m-1} I + t_{m-1})\mathbf{A_c}} \\ \mathbf{C} e^{-(k_m I + t)\mathbf{A_c}} \end{bmatrix}$ respectively, where $t$ is a random variable which is uniformly distributed on $[0,T]$ and $I$ is the sampling interval defined in \eqref{eqn:non:4}. Then, there exist $a \in \mathbb{R}^+$ and a family of increasing functions $\{g_{\epsilon}(\cdot) : \epsilon > 0, g_{\epsilon}:\mathbb{R}^+ \rightarrow \mathbb{R}^+ \}$ satisfying:\\
(i) For all $\epsilon>0$, $k_1 < k_2 < \cdots < k_{m-1}$, and $0\leq t_i \leq T$, if $|C_{m,m}| > \epsilon \prod_{1 \leq i \leq m-1} e^{-k_i I \cdot \lambda_i}$, then the following is true: \begin{align}
\sup_{k_m \in \mathbb{Z}, k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \mathbb{P} \left\{ \left| \det\left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_1)\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m-1} I + t_{m-1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_{m} I + t)\mathbf{A_c}} \\ \end{bmatrix}
\right) \right| < \epsilon^2 \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i} \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0 \nonumber \end{align} (ii) For all $\epsilon>0$, $g_{\epsilon}(k) \leq a( 1 + \log (k+1))$. \label{lem:conti:single} \end{lemma} \begin{proof} Let $\epsilon' = 2 \epsilon^2 \prod_{1 \leq i \leq m}e^{\lambda_i T}$. Define $a'_{i,j}$, $C'_{i,j}$ as the $(i,j)$ element and cofactor of $\begin{bmatrix} \mathbf{C}e^{- \kappa_1 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{- \kappa_m \mathbf{A_c}} \end{bmatrix}$.
Then, by Lemma~\ref{lem:det:lower}, we can find a function $g'_{\epsilon'}(k)$ such that for all $0 \leq \kappa_1 < \kappa_2 < \cdots < \kappa_m$ satisfying:\\ (i') $\kappa_m - \kappa_{m-1} \geq g'_{\epsilon'}(\kappa_{m-1})$\\ (ii') $g'_{\epsilon'}(\kappa) \lesssim 1 + \log(\kappa+1)$\\
(iii') $| \sum_{m-m_{\mu}+1 \leq i \leq m} a'_{m,i} C'_{m,i} | \geq \epsilon' \prod_{1 \leq i \leq m} e^{-\kappa_i \lambda_i}$\\ the following inequality holds: \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C}e^{- \kappa_1 \mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{- \kappa_m \mathbf{A_c}} \end{bmatrix} \right) \right| \geq \frac{1}{2} \epsilon' \prod_{1 \leq i \leq m}e^{- \kappa_i \lambda_i}. \label{eqn:equalitycomment} \end{align}
In what follows, we use $t_m$ and $t$ interchangeably; both lie in $[0,T]$ with probability one. Ideally, we want to plug $k_i I + t_i$ into $\kappa_i$. However, even though the sequence $k_1, \cdots, k_m$ is sorted, the sequence $k_1 I + t_1, \cdots, k_m I + t_m$ may not be sorted. Therefore, we define $k_{(1)}I +t_{(1)}, \cdots, k_{(m)}I + t_{(m)}$ as the sorted sequence of $k_1 I + t_1, \cdots, k_m I + t_m$. Then, we can see this sorted sequence has the following property. \begin{claim} Consider two sequences, $\alpha_1, \alpha_2, \cdots, \alpha_n$ and $\beta_1, \beta_2, \cdots, \beta_n$ where $\alpha_1 \leq \alpha_2 \leq \cdots \leq \alpha_n$ and $\beta_i \in [0, T]$ $(T > 0)$. Let $\alpha_{(1)}+\beta_{(1)}, \alpha_{(2)}+\beta_{(2)}, \cdots, \alpha_{(n)}+\beta_{(n)}$ be the ascending ordered set of $\alpha_{1}+\beta_{1}, \alpha_{2}+\beta_{2}, \cdots, \alpha_{n}+\beta_{n}$. In other words, $\alpha_{(1)}+\beta_{(1)} \leq \alpha_{(2)}+\beta_{(2)} \leq \cdots \leq \alpha_{(n)}+\beta_{(n)}$.
Then, for all $i \in \{1, \cdots, n \}$, we have \begin{align} 0 \leq \alpha_{(i)}+ \beta_{(i)} - \alpha_i \leq T. \end{align} \label{claim:sort} \end{claim} \begin{proof} We will prove this by contradiction. Suppose there exists $i$ such that \begin{align} \alpha_{(i)} + \beta_{(i)} - \alpha_i < 0. \end{align} Then, we have \begin{align} \alpha_{(i)} + \beta_{(i)} < \alpha_i \leq \alpha_{i+1} \leq \cdots \leq \alpha_n. \end{align} Since $\beta_1, \cdots, \beta_n \geq 0$, we can conclude $\alpha_{(i)} + \beta_{(i)} < \alpha_i + \beta_i$, $\cdots$, $\alpha_{(i)} + \beta_{(i)} < \alpha_n + \beta_n$. Thus, in the sequence $\alpha_1+\beta_1, \cdots, \alpha_n + \beta_n$, there exist $n-i+1$ elements which are larger than $\alpha_{(i)}+\beta_{(i)}$. This contradicts the fact that $\alpha_{(i)}+\beta_{(i)}$ is the $i$th smallest element among $\alpha_1+\beta_1, \cdots, \alpha_n + \beta_n$.
Likewise, suppose there exists $i$ such that \begin{align} \alpha_{(i)}+\beta_{(i)}-\alpha_i > T. \end{align} Then, we have \begin{align} \alpha_{(i)}+\beta_{(i)} > \alpha_i + T \geq \alpha_{i-1} + T \geq \cdots \geq \alpha_{1} + T. \end{align} Since $\beta_1, \cdots, \beta_n \leq T$, we can conclude $\alpha_{(i)}+\beta_{(i)} > \alpha_i + \beta_i$, $\cdots$, $\alpha_{(i)}+\beta_{(i)} > \alpha_1 + \beta_1$. Thus, in the sequence $\alpha_1+\beta_1, \cdots, \alpha_n + \beta_n$, there exist $i$ elements which are smaller than $\alpha_{(i)}+\beta_{(i)}$. This contradicts the fact that $\alpha_{(i)}+\beta_{(i)}$ is the $i$th smallest element among $\alpha_1+\beta_1, \cdots, \alpha_n + \beta_n$. \end{proof}
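A quick randomized check of Claim~\ref{claim:sort} (illustrative only; the sequence lengths and distributions below are arbitrary choices):

```python
import numpy as np

# Randomized check of the sorting claim: for sorted alpha and beta_i in
# [0, T], sorting the sums alpha_i + beta_i moves the i-th entry by at most
# T relative to alpha_i, and never below it.
rng = np.random.default_rng(3)
T, n = 1.5, 8
for _ in range(1000):
    alpha = np.sort(rng.normal(size=n))
    beta = rng.uniform(0.0, T, size=n)
    sorted_sums = np.sort(alpha + beta)
    diff = sorted_sums - alpha
    assert np.all(diff >= -1e-12) and np.all(diff <= T + 1e-12)
```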
Therefore, by the claim, we have \begin{align} \prod_{1 \leq i \leq m} e^{-\lambda_i T} \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i} \leq \prod_{1 \leq i \leq m} e^{-(k_{(i)}I + t_{(i)}) \lambda_i} \leq \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}. \label{eqn:sortedineq} \end{align}
Finally, we can plug $k_{(i)}I +t_{(i)}$ into $\kappa_i$ to conclude the following statement. For all $0 \leq k_1 < \cdots < k_m$, $0 \leq t_i \leq T$, $0 \leq t \leq T$ such that\footnote{Here, we select $g''_{\epsilon'}(k)$ large enough so that when $k_m - k_{m-1} \geq g''_{\epsilon'}(k_{m-1})$, we always have $k_mI + t \geq k_{m-1}I + t_{m-1}$, i.e. $k_mI+t$ becomes the largest.}\\ (i'') $k_m - k_{m-1} \geq g''_{\epsilon'}(k_{m-1})$ \\ (ii'') $g''_{\epsilon'}(k) \lesssim 1 + \log(k+1)$ \\
(iii'') $\left| \sum_{m-m_{\mu}+1 \leq i \leq m} a_{m,i}C_{m,i} \right| \geq \epsilon' \prod_{1 \leq i \leq m} e^{- k_i I \cdot \lambda_i} \overset{(A)}{\geq} \epsilon' \prod_{1 \leq i \leq m} e^{- (k_{(i)} I + t_{(i)} ) \lambda_i}$\\ the following inequality holds: \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_1)\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m-1} I + t_{m-1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_m I + t)\mathbf{A_c}} \end{bmatrix}
\right) \right| & \geq \frac{1}{2}\epsilon' \prod_{1 \leq i \leq m} e^{- (k_{(i)} I + t_{(i)} ) \lambda_i} \overset{(B)}{\geq} \frac{1}{2} \epsilon' \prod_{1 \leq i \leq m}e^{-\lambda_i T} \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i} \\ & \overset{(C)}{=} \epsilon^2 \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i} .
\end{align} Here, (A) and (B) always hold by \eqref{eqn:sortedineq}. (C) follows from the definition of $\epsilon'$.
Let $g_{\epsilon}(k)$ be $g''_{\epsilon'}(k)$. Then, we can easily check that such $g_{\epsilon}(k)$ satisfies (ii) of the lemma. We now show that $g_{\epsilon}(k)$ also satisfies (i) of the lemma. \begin{align}
&\sup_{k_m \in \mathbb{Z}, k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \mathbb{P} \left\{ \left| \det \left( \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_1)\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m-1} I + t_{m-1})\mathbf{A_c}} \\ \mathbf{C} e^{-(k_m I + t)\mathbf{A_c}} \end{bmatrix}
\right) \right| < \epsilon^2 \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}\right\} \nonumber \\
& \leq \sup_{k_m \in \mathbb{Z}, k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \mathbb{P} \left\{
\left| \sum_{m-m_{\mu}+1 \leq i \leq m} C_{m,i} a_{m,i}
\right| < 2 \epsilon^2 \prod_{1 \leq i \leq m} e^{\lambda_i T}\cdot \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i} \right\} \nonumber \\ & = \sup_{k_m \in \mathbb{Z}, k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \mathbb{P} \left\{
\left| \sum_{m-m_{\mu}+1 \leq i \leq m} \frac{C_{m,i}}{\epsilon \prod_{1 \leq i \leq m-1} e^{-k_i I \cdot \lambda_i} } \frac{a_{m,i}}{e^{-(k_m I+t ) \lambda_m}}
\right| < 2 \epsilon \cdot e^{\lambda_m t} \prod_{1 \leq i \leq m} e^{\lambda_i T} \right\} \nonumber \\
&\leq \sup_{|b_m|\geq 1} \mathbb{P}\left\{ \left| \sum_{m-m_{\mu}+1 \leq i \leq m} b_i \frac{a_{m,i}}{e^{-( k_m I +t )\lambda_m}} \right| < 2 \epsilon \cdot e^{\lambda_m T} \prod_{1 \leq i \leq m} e^{\lambda_i T} \right\}. \label{eqn:lem:dettail:1} \end{align}
where the last inequality comes from the assumption in (i), $|C_{m,m}| > \epsilon \prod_{1 \leq i \leq m-1} e^{-k_i I \cdot \lambda_i}$, and the fact that $t \in [0,T]$ with probability one.
Now, it is enough to prove that \eqref{eqn:lem:dettail:1} converges to $0$ as $\epsilon \downarrow 0$. To this end, let's study $a_{m,i}$ which are the elements of the observability gramian. Let the $\mathbf{C_{\mu,\nu_{\mu}}}$ defined in \eqref{eqn:conti:c} be $\begin{bmatrix} c'_{1} & \cdots & c'_{m_{\mu,\nu_{\mu}}} \end{bmatrix}$. Then, we have \begin{align} &e^{-(k_m I + t)\mathbf{A_{\mu,\nu_{\mu}}}}\nonumber \\ &= \begin{bmatrix} e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})} & -(k_m I + t) e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})} & \cdots & \frac{(-1)^{m_{\mu,\nu_\mu}-1} (k_m I + t)^{m_{\mu,\nu_\mu}-1}}{(m_{\mu,\nu_\mu}-1)!} e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})}\\ 0 & e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})} & \cdots & \frac{(-1)^{m_{\mu,\nu_\mu}-2} (k_m I + t)^{m_{\mu,\nu_\mu}-2}}{(m_{\mu,\nu_\mu}-2)!} e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})}\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{-(k_m I + t)(\lambda_{\mu,\nu_\mu} + j \omega_{\mu,\nu_\mu})} \end{bmatrix}. \nonumber \end{align} Thus, we can see that \begin{align} a_{m,m}&=\sum_{1 \leq i \leq m_{\mu,\nu_{\mu}}} c'_{i} \frac{(-1)^{m_{\mu,\nu_{\mu}}-i}(k_m I +t)^{m_{\mu,\nu_{\mu}}-i} }{(m_{\mu,\nu_{\mu}}-i)!} e^{-(k_m I + t)(\lambda_{m}+j \omega_{\mu,\nu_\mu})}. \nonumber \end{align} Therefore, \begin{align} \frac{a_{m,m}}{e^{-(k_m I + t)\lambda_m}} &=\sum_{1 \leq i \leq m_{\mu,\nu_{\mu}}} c'_{i} \frac{(-1)^{m_{\mu,\nu_{\mu}}-i}(k_m I +t)^{m_{\mu,\nu_{\mu}}-i} }{(m_{\mu,\nu_{\mu}}-i)!} e^{-(k_m I + t)(j \omega_{\mu,\nu_\mu})}. \nonumber \end{align}
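The closed form of $e^{-(k_m I + t)\mathbf{A_{\mu,\nu_\mu}}}$ displayed above is the standard exponential of a Jordan block. The following sketch double-checks it numerically for a small block (the block size, eigenvalue, and scalar $s$ are arbitrary test values, not taken from the system above):

```python
import numpy as np
from math import factorial

# For an m x m Jordan block J with eigenvalue lam, e^{-s J} is upper
# triangular with (-s)^k / k! * e^{-s*lam} on the k-th superdiagonal,
# matching the matrix displayed above with s = k_m I + t.
m, lam, s = 4, 0.3 + 1.2j, 0.7
J = lam * np.eye(m, dtype=complex) + np.eye(m, k=1)

# matrix exponential of -s*J via a truncated Taylor series (converges fast here)
E = np.zeros((m, m), dtype=complex)
term = np.eye(m, dtype=complex)
for k in range(60):
    E += term
    term = term @ (-s * J) / (k + 1)

# closed form: sum over superdiagonals
closed = np.zeros((m, m), dtype=complex)
for k in range(m):
    closed += np.eye(m, k=k) * (-s) ** k / factorial(k) * np.exp(-s * lam)
assert np.max(np.abs(E - closed)) < 1e-10
```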
Moreover, when $a_{m,i}$ is considered as a function of $t$, the $t^{m_{\mu,\nu_\mu}-1} e^{-j \omega_{\mu,\nu_\mu} t }$ term only shows up in $\frac{a_{m,m}}{e^{-(k_m I + t)\lambda_m}}$ among $\frac{a_{m,m-m_{\mu}+1}}{e^{-(k_m I + t)\lambda_m}}, \cdots, \frac{a_{m,m}}{e^{-(k_m I + t)\lambda_m}}$, and the coefficient is $c_1' \frac{(-1)^{m_{\mu,\nu_\mu}-1}}{(m_{\mu,\nu_\mu}-1)!}e^{-j \omega_{\mu,\nu_\mu} k_m I }$. Since we put $|b_m| \geq 1$ in \eqref{eqn:lem:dettail:1}, by defining $c':=\frac{|c_1'|}{(m_{\mu,\nu_\mu}-1)!}$ we can see that the magnitude of the corresponding coefficient is greater than or equal to $c'$. Furthermore, the remaining terms $\frac{a_{m,m-m_{\mu}+1}}{e^{-(k_m I + t)\lambda_m}}, \cdots, \frac{a_{m,m-1}}{e^{-(k_m I + t)\lambda_m}}$ only have $e^{-j \omega_{\mu,1}t}, \cdots, t^{m_{\mu,1}-1} e^{-j \omega_{\mu,1}t}$, $e^{-j \omega_{\mu,2}t}, \cdots, t^{m_{\mu,2}-1} e^{-j \omega_{\mu,2}t}$, $\cdots$, $e^{-j \omega_{\mu,\nu_\mu}t}, \cdots, t^{m_{\mu,\nu_\mu}-2} e^{-j \omega_{\mu,\nu_\mu}t}$ when they are considered as functions of $t$. Thus, using the assumption that $m_{\mu,1} \leq \cdots \leq m_{\mu,\nu_\mu}$, \eqref{eqn:lem:dettail:1} can be upper bounded as follows:
\begin{align}
\eqref{eqn:lem:dettail:1} \leq \sup_{|a'_{m_{{\mu},\nu_{\mu}},\nu_\mu}|\geq c' } \mathbb{P} \left\{
\left| \sum^{m_{\mu,\nu_\mu}}_{i=1} t^{i-1}\left( \sum^{\nu_{\mu}}_{j=1} a'_{i,j} e^{-j \omega_{\mu,j}t} \right)
\right| \leq 2 \epsilon e^{\lambda_m T} \cdot \prod_{1 \leq i \leq m} e^{\lambda_i T} \right\}. \label{eqn:lem:dettail:2} \end{align} By Lemma~\ref{lem:singleun} (by setting $\gamma$ as $c'$, $(m,n)$ as $(m_{\mu,\nu_\mu}, \nu_\mu)$, $p$ as $m_{\mu,\nu_\mu}$, $\nu_0, \cdots, \nu_p$ as $\nu_\mu$, $\omega_{0,j}, \cdots, \omega_{p,j}$ as $-\omega_{\mu,j}$, and $\epsilon$ as $2 \epsilon \prod_{1 \leq i \leq m} e^{\lambda_i T} \cdot e^{\lambda_m T}$), we get \begin{align}
\sup_{|a'_{m_{{\mu},\nu_{\mu}},\nu_\mu}|\geq c' } \mathbb{P} \left\{
\left| \sum^{m_{\mu,\nu_\mu}}_{i=1} t^{i-1}\left( \sum^{\nu_{\mu}}_{j=1} a'_{i,j} e^{-j \omega_{\mu,j}t} \right)
\right| \leq 2 \epsilon e^{\lambda_m T} \cdot \prod_{1 \leq i \leq m} e^{\lambda_i T} \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0. \label{eqn:lem:dettail:3} \end{align} Therefore, by \eqref{eqn:lem:dettail:1}, \eqref{eqn:lem:dettail:2}, \eqref{eqn:lem:dettail:3} we can say that \begin{align}
&\sup_{k_m \in \mathbb{Z}, k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \mathbb{P} \left\{ \left| \det \left( \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_1)\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m-1} I + t_{m-1})\mathbf{A_c}} \\ \mathbf{C} e^{-(k_m I + t)\mathbf{A_c}} \end{bmatrix}
\right) \right| < \epsilon^2 \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}\right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0 \nonumber \end{align} which finishes the proof. \end{proof}
Based on the previous lemma, we will integrate the properties of p.m.f. tails shown in Section~\ref{sec:app:1} with the properties of the observability Gramian discussed in Section~\ref{sec:app:3}, and prove Lemma~\ref{lem:conti:mo} for the case of a row vector $\mathbf{C}$.
\begin{lemma} Let $\mathbf{A_c}$ and $\mathbf{C}$ be given as \eqref{eqn:conti:a} and \eqref{eqn:conti:c}. Let $\beta[n]~(n \in \mathbb{Z}^+)$ be a Bernoulli random process with probability $1-p_e$ and $t_n$ be i.i.d.~random variables which are uniformly distributed on $[0,T]~(T>0)$. Then, we can find a polynomial $p(k)$ and a family of stopping times $\{ S(\epsilon,k): k \in \mathbb{Z}^+, \epsilon > 0 \}$ such that for all $\epsilon > 0$, $k \in \mathbb{Z}^+$ there exist
$k \leq k_1 < k_2 < \cdots < k_m \leq S(\epsilon,k)$ and $\mathbf{M}$ satisfying the following conditions:\\ (i) $\beta[k_i]=1$ for $1 \leq i \leq m$\\ (ii) $\mathbf{M} \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_m I + t_{k_m})\mathbf{A_c}} \\ \end{bmatrix}=\mathbf{I}$\\
(iii) $|\mathbf{M}|_{max} \leq \frac{p(S(\epsilon,k))}{\epsilon} e^{\lambda_1 S(\epsilon,k) I}$\\ (iv) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S(\epsilon,k)-k=s\right\} \leq p_e$. \label{lem:conti:singlec} \end{lemma}
\begin{proof} By Lemma~\ref{lem:conti:inverse2}, instead of conditions (ii) and (iii), it is enough to prove that \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_m I + t_{k_m})\mathbf{A_c}} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq i \leq m} e^{-(k_i I + t_{k_i}) \lambda_i}. \nonumber \end{align} Furthermore, since $t_{k_i} \geq 0$ it is sufficient to prove that \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_m I + t_{k_m})\mathbf{A_c}} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}. \nonumber \end{align} Therefore, it is enough to prove the following claim:
We can find a family of stopping times $\{ S(\epsilon,k) : k \in \mathbb{Z}^+, \epsilon > 0 \}$ such that for all $\epsilon>0$ and $k \in \mathbb{Z}^+$ there exist $k \leq k_1 < k_2 < \cdots < k_m \leq S(\epsilon,k)$ satisfying the following conditions:\\ (a) $\beta[k_i]=1$ for $1 \leq i \leq m$ \\
(b) $\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_m I + t_{k_m})\mathbf{A_c}} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}$ \\ (c) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S(\epsilon,k)-k=s\right\} \leq p_e$
We will prove the claim by induction on $m$, the size of the $\mathbf{A_c}$ matrix.
(i) When $m=1$,
Since we only have to care about small enough $\epsilon$, we may assume $\epsilon \leq |c_1| e^{-2 T \lambda_1}$. Denote $S(\epsilon,k) := \inf \{ n \geq k : \beta[n]=1 \}$ and $k_1=S(\epsilon,k)$. Then, $\beta[k_1]=1$ and $\left| \det\left( \begin{bmatrix} c_1 e^{-(k_1 I + t_{k_1})(\lambda_1 + j \omega_1)}
\end{bmatrix} \right) \right|\geq |c_1| e^{-T \lambda_1} e^{-k_1 I \cdot \lambda_1} \geq \epsilon e^{-k_1 I \cdot \lambda_1}$.\\ Moreover, since $S(\epsilon,k)-k$ is a geometric random variable with probability $1-p_e$, \begin{align} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S(\epsilon,k)-k=s \right\} = p_e. \nonumber \end{align} Therefore, $S(\epsilon,k)$ satisfies all the conditions of the lemma.
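The base-case tail behavior can be checked numerically: for a geometric random variable, the exponential rate of the p.m.f. tail is exactly $p_e$. A minimal sketch, with an arbitrarily chosen illustrative value of $p_e$ (working in log scale to avoid underflow for large $s$):

```python
import math

p_e = 0.3  # illustrative erasure probability (an assumption for this sketch)

def log_pmf(s: int) -> float:
    # log P{S - k = s} for a geometric random variable:
    # s erased slots followed by one non-erased slot.
    return math.log(1 - p_e) + s * math.log(p_e)

# exp((1/s) log P{S - k = s}) converges to p_e as s grows.
for s in [10, 100, 1000]:
    rate = math.exp(log_pmf(s) / s)
    print(s, rate)
```

The prefactor $1-p_e$ contributes $e^{\log(1-p_e)/s} \to 1$, so only the $p_e^s$ term survives in the rate.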
(ii) Now, we assume that the lemma is true for $m-1$ and prove the lemma still holds for $m$. We will see that the induction hypothesis corresponds to the cofactor condition of Lemma~\ref{lem:conti:single}, which tells us that the determinant of the observability Gramian is large enough with high probability.
Let $\mathbf{A_c'}$ be the $(m-1) \times (m-1)$ matrix obtained by removing the $m$th row and column of $\mathbf{A_c}$. Likewise, $\mathbf{C'}$ is a $1 \times (m-1)$ vector obtained by removing the $m$th element of $\mathbf{C}$. Then, since $\mathbf{A_c}$ is given in a Jordan form, we can easily check that once we remove the last element from the row vector $\mathbf{C}e^{-(k_i I + t_{k_i})\mathbf{A_c}}$, we get $\mathbf{C'}e^{-(k_i I + t_{k_i})\mathbf{A_c'}}$. Therefore, we can see that \begin{align} \det\left( \begin{bmatrix} \mathbf{C'} e^{-(k_1 I + t_{k_1})\mathbf{A_c'}} \\ \vdots \\ \mathbf{C'} e^{-(k_{m-1} I + t_{k_{m-1}})\mathbf{A_c'}} \end{bmatrix} \right) = cof_{m,m}\left( \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_m I + t_{k_m})\mathbf{A_c}} \\ \end{bmatrix} \right) \label{eqn:cofdet1} \end{align} where $cof_{i,j}(\mathbf{A})$ denotes the cofactor of $\mathbf{A}$ with respect to the $(i,j)$ element.
By the induction hypothesis, there exists a stopping time $S'(\epsilon,k)$ such that we can find $k \leq k_1 < k_2 < \cdots < k_{m-1} \leq S'(\epsilon,k)$ satisfying:\\ (a) $\beta[k_i]=1$ for $1 \leq i \leq m-1$ \\
(b) $\left| \det \left( \begin{bmatrix} \mathbf{C'}e^{-(k_1 I + t_{k_1})\mathbf{A_c'}} \\ \vdots \\ \mathbf{C'}e^{-(k_{m-1} I + t_{k_{m-1}})\mathbf{A_c'}} \end{bmatrix}
\right) \right| \geq \epsilon \prod_{1 \leq i \leq m-1} e^{-k_i I \cdot \lambda_i}$\\ (c) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \left\{ S'(\epsilon,k)-k=s \right\} \leq p_e$.
Let $\mathcal{F}_{i}$ be a $\sigma$-field generated by $\beta[0],\cdots,\beta[i]$, and $t_0,\cdots, t_i$. Let $g_{\epsilon}: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ be the function of Lemma~\ref{lem:conti:single}. Denote \begin{align} p'(\epsilon):=\esssup \sup_{k_m \in \mathbb{Z}, k_m-S'(\epsilon, k) \geq g_{\epsilon}(S'(\epsilon, k))} \mathbb{P}_t \left\{
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}}\\ \vdots\\ \mathbf{C}e^{-(k_{m-1} I + t_{k_{m-1}})\mathbf{A_c}}\\ \mathbf{C}e^{-(k_{m} I + t)\mathbf{A_c}} \end{bmatrix} \right)
\right| < \epsilon^2 \prod_{1 \leq i \leq m} e^{-k_i I \cdot \lambda_i}
| \mathcal{F}_{S'(\epsilon,k)} \right\}. \label{eqn:stoppingtimedef1} \end{align}
Here, given $\mathcal{F}_{S'(\epsilon,k)}$, the indices $k_1, \cdots, k_{m-1}$, the sampling times $t_{k_1}, \cdots, t_{k_{m-1}}$, and $S'(\epsilon,k)$ are all fixed; the supremum is taken over $k_m$ such that $k_m - S'(\epsilon, k) \geq g_{\epsilon}(S'(\epsilon, k))$, and $t$ is a uniform random variable on $[0,T]$ over which the probability is computed.
Since $k_m \geq S'(\epsilon, k)+ g_{\epsilon}(S'(\epsilon, k)) \geq k_{m-1}+g_{\epsilon}(k_{m-1})$, and since by \eqref{eqn:cofdet1}, (b) implies $\left| cof_{m,m}\left( \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m} I + t_{k_m})\mathbf{A_c}} \\ \end{bmatrix} \right) \right| \geq \epsilon \prod_{1 \leq i \leq m-1} e^{- k_i I \cdot \lambda_i}$, Lemma~\ref{lem:conti:single} gives $\lim_{\epsilon \downarrow 0}p'(\epsilon) = 0$.
Denote $S''(\epsilon,k) := \lceil S'(\epsilon,k)+g_{\epsilon}(S'(\epsilon,k)) \rceil$. From (ii) of Lemma~\ref{lem:conti:single} we know $g_{\epsilon}(k) \lesssim 1 + \log (k+1)$ for all $\epsilon>0$. Therefore, by (c) and Lemma~\ref{lem:conti:tailpoly} we have \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{ S''(\epsilon,k)-k=s \} \leq p_e. \label{eqn:lem:single:5} \end{align} Denote a stopping time \begin{align} &S'''(\epsilon,k) \nonumber \\ &:=\inf \left\{n \geq S''(\epsilon,k): \beta[n]=1 \mbox{ and }
\left| \det \left( \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m-1} I + t_{k_{m-1}})\mathbf{A_c}} \\ \mathbf{C}e^{-(n I + t_n)\mathbf{A_c}} \\ \end{bmatrix} \right)
\right| \geq \epsilon^2 e^{-nI \cdot \lambda_m } \prod_{1 \leq i \leq m-1} e^{-k_i I \cdot \lambda_i} \right\}. \label{eqn:stoppingtimedef2} \end{align}
Since $\beta[n]$ and $t_n$ are independent processes, for $S'''(\epsilon,k)=n$ to hold, we need $\beta[n]=1$ and the determinant in \eqref{eqn:stoppingtimedef2} to be large enough. By \eqref{eqn:stoppingtimedef1}, we already know that the probability of the determinant not being large enough is upper bounded by $p'(\epsilon)$. Therefore, given that $S'''(\epsilon,k) \geq n$, the probability that $S'''(\epsilon,k) \neq n$ is upper bounded by $p_e + (1-p_e)p'(\epsilon)$, which accounts for either an erasure or a non-erased observation with a too-small determinant. Thus, for all $s \in \mathbb{Z}^+$, we have \begin{align}
\esssup \mathbb{P}\{ S'''(\epsilon,k)- S''(\epsilon,k) \geq s | \mathcal{F}_{S''(\epsilon,k)} \} \leq \left(p_e + \left(1-p_e\right)p'(\epsilon)\right)^{s}. \nonumber \end{align}
Since we know $\lim_{\epsilon \downarrow 0}p'(\epsilon) = 0$, we have \begin{align}
\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}\{ S'''(\epsilon,k)- S''(\epsilon,k) = s | \mathcal{F}_{S''(\epsilon,k)} \} \leq p_e. \label{eqn:lem:single:6} \end{align}
By applying Lemma~\ref{lem:app:geo} to \eqref{eqn:lem:single:5} and \eqref{eqn:lem:single:6}, we can conclude that \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{S'''(\epsilon,k)-k=s \} \leq p_e. \nonumber \end{align} Therefore, if we denote $S(\epsilon,k):=S'''(\epsilon^{\frac{1}{2}},k)$, $S(\epsilon,k)$ satisfies all the conditions of the claim. \end{proof}
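The determinant condition driving this construction can be visualized in the smallest nontrivial case: a single $2 \times 2$ Jordan block with $\mathbf{C} = \begin{bmatrix} 1 & 0 \end{bmatrix}$, where $e^{-s\mathbf{A_c}} = e^{-s\lambda}\begin{bmatrix} 1 & -s \\ 0 & 1 \end{bmatrix}$ and the sampled observability matrix has determinant $e^{-(s_1+s_2)\lambda}(s_1-s_2)$, nonzero whenever the two effective sampling times differ. A sketch with arbitrarily chosen numerical values (not from the paper):

```python
import numpy as np

lam, I = 1.0, 1.0                      # eigenvalue and sampling interval (illustrative)

def row(k, t):
    """C e^{-(kI+t)A} for A = [[lam, 1], [0, lam]] and C = [1, 0]."""
    s = k * I + t
    return np.exp(-s * lam) * np.array([1.0, -s])

k1, t1, k2, t2 = 1, 0.25, 3, 0.75      # two non-erased samples at distinct times
O = np.vstack([row(k1, t1), row(k2, t2)])

det = np.linalg.det(O)
s1, s2 = k1 * I + t1, k2 * I + t2
closed_form = np.exp(-(s1 + s2) * lam) * (s1 - s2)
print(det, closed_form)
```

The magnitude of the determinant indeed decays like $\prod_i e^{-s_i \lambda}$, which is why the lower bounds in the proof are normalized by exactly this factor.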
Before we prove Lemma~\ref{lem:conti:mo}, we will first prove the following lemma, which allows us to merge two Jordan blocks associated with the same eigenvalue into one Jordan block.
\begin{lemma} Let $\mathbf{A}$ be a Jordan block matrix with an eigenvalue $\lambda \in \mathbb{C}$ and size $m \in \mathbb{N}$, i.e. $\mathbf{A}=\begin{bmatrix} \lambda & 1 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \\ \end{bmatrix}$. $\mathbf{C}$ and $\mathbf{C'}$ are $1 \times m$ matrices such that \begin{align} &\mathbf{C}=\begin{bmatrix} c_1 & c_2 & \cdots & c_m\end{bmatrix} \nonumber\\ &\mathbf{C'}=\begin{bmatrix} c'_1 & c'_2 & \cdots & c'_m\end{bmatrix} \nonumber \end{align} where $c_i, c'_i \in \mathbb{C}$ and $c_1 \neq 0$.\\ For all $k \in \mathbb{R}$ and $m \times 1$ matrices $\mathbf{X}=\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}$ and $\mathbf{X'}=\begin{bmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_m \end{bmatrix}$, there exists $\mathbf{T}$ such that\\ \begin{align} &(i) \mathbf{T} \mbox{ is an upper triangular matrix.} \nonumber \\ &(ii) \mathbf{C}e^{k\mathbf{A}}\mathbf{X}+\mathbf{C'}e^{k\mathbf{A}}\mathbf{X'}=\mathbf{C}e^{k\mathbf{A}}\left(\mathbf{X}+\mathbf{T}\mathbf{X'}\right) \nonumber \end{align} Moreover, $\mathbf{T}$ does not depend on $k$, and the diagonal elements of $\mathbf{T}$ are $\frac{c_1'}{c_1}$. \label{lem:conti:jordan} \end{lemma} \begin{proof} The proof is by induction on $m$, the size of the $\mathbf{A}$ matrix. The lemma is trivial when $m=1$. Thus, we can assume the lemma is true for $m$ as an induction hypothesis, and consider $m+1$ as the dimension of $\mathbf{A}$.
\begin{align} &\mathbf{C} e^{k\mathbf{A}} \mathbf{X} + \mathbf{C'} e^{k\mathbf{A}} \mathbf{X'} \nonumber \\ &=\mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\mathbf{X} + \mathbf{C'} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\mathbf{X'} \nonumber \\ &=\mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\mathbf{X} \nonumber \\ &+ \left( \frac{c_1'}{c_1}\mathbf{C}+\begin{bmatrix} 0 & c_2'-\frac{c_1'}{c_1}c_2 & \cdots & c_{m+1}'-\frac{c_1'}{c_1}c_{m+1} \end{bmatrix} \right) \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\mathbf{X'} \nonumber \\ &=\mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\left(\mathbf{X}+\frac{c_1'}{c_1}\mathbf{X'} \right) \nonumber \\ &+ \begin{bmatrix} 0 & c_2'-\frac{c_1'}{c_1}c_2 & \cdots & c_m'-\frac{c_1'}{c_1}c_m \end{bmatrix} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ 
\vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\mathbf{X'} \nonumber \\ &=\mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\left(\begin{bmatrix} 0 \\ 0 \\ \vdots \\ x_{m+1}+\frac{c_1'}{c_1}x'_{m+1} \end{bmatrix}+ \begin{bmatrix}x_1+\frac{c_1'}{c_1}x'_1 \\ x_2+\frac{c_1'}{c_1}x'_2 \\ \vdots \\ 0 \end{bmatrix} \right) \nonumber \\ &+\begin{bmatrix} c_2'-\frac{c_1'}{c_1}c_2 & c_3'-\frac{c_1'}{c_1}c_3 & \cdots & c_{m+1}'-\frac{c_1'}{c_1}c_{m+1} \end{bmatrix} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-2}}{(m-2)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix} \begin{bmatrix} x_2' \\ x_3' \\ \vdots \\ x_{m+1}' \\ \end{bmatrix} \nonumber \end{align} \begin{align} &=\mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ \vdots \\ x_{m+1}+\frac{c_1'}{c_1}x'_{m+1} \end{bmatrix}\nonumber\\ &+\begin{bmatrix} c_1 & c_2 & \cdots & c_m \end{bmatrix} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-2}}{(m-2)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\begin{bmatrix}x_1+\frac{c_1'}{c_1}x'_1 \\ x_2+\frac{c_1'}{c_1}x'_2 \\ \vdots \\ x_{m}+\frac{c_1'}{c_1}x'_{m} \end{bmatrix}\nonumber\\ &+\begin{bmatrix} c_2'-\frac{c_1'}{c_1}c_2 & c_3'-\frac{c_1'}{c_1}c_3 & \cdots & 
c_{m+1}'-\frac{c_1'}{c_1}c_{m+1} \end{bmatrix} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \frac{k}{2!} e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ 0 & e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^{m-2}}{(m-2)!} e^{k \lambda} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix} \begin{bmatrix} x_2' \\ x_3' \\ \vdots \\ x_{m+1}' \\ \end{bmatrix} \nonumber \\ &= \mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ \vdots \\ x_{m+1}+\frac{c_1'}{c_1}x'_{m+1} \end{bmatrix}\nonumber\\ &+\begin{bmatrix} c_1 & c_2 & \cdots & c_{m} \end{bmatrix} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-2}}{(m-2)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix} \left( \begin{bmatrix}x_1+\frac{c_1'}{c_1}x'_1 \\ x_2+\frac{c_1'}{c_1}x'_2 \\ \vdots \\ x_{m}+\frac{c_1'}{c_1}x'_{m} \end{bmatrix} + \begin{bmatrix} t'_{1,1} & t'_{1,2} & \cdots & t'_{1,m} \\ 0 & t'_{2,2} & \cdots & t'_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & t'_{m,m} \end{bmatrix} \begin{bmatrix} x_2' \\ x_3' \\ \vdots \\ x_{m+1}' \\ \end{bmatrix}\right) \label{eqn:dis:matrix:1}\\ &= \mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix} \left( \begin{bmatrix} 0 \\ 0 \\ \vdots \\ x_{m+1}+\frac{c_1'}{c_1}x'_{m+1} \end{bmatrix} + \begin{bmatrix}x_1+\frac{c_1'}{c_1}x'_1 \\ x_2+\frac{c_1'}{c_1}x'_2 \\ \vdots \\ 0 
\end{bmatrix} + \begin{bmatrix} t'_{1,1} & t'_{1,2} & \cdots & t'_{1,m} \\ 0 & t'_{2,2} & \cdots & t'_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} x_2' \\ x_3' \\ \vdots \\ x_{m+1}' \\ \end{bmatrix} \right)\nonumber\\ &= \mathbf{C} \begin{bmatrix} e^{k \lambda} & \frac{k}{1!} e^{k \lambda} & \cdots & \frac{k^m}{m!} e^{k \lambda} \\ 0 & e^{k \lambda} & \cdots & \frac{k^{m-1}}{(m-1)!} e^{k \lambda} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{k}{1!} e^{k \lambda}\\ 0 & 0 & \cdots & e^{k \lambda}\\ \end{bmatrix} \left( \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{m+1} \\ \end{bmatrix} + \begin{bmatrix} \frac{c'_1}{c_1} & t'_{1,1} & \cdots & t'_{1,m} \\ 0 & \frac{c'_1}{c_1} & \cdots & t'_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{c'_1}{c_1} \end{bmatrix} \begin{bmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_{m+1} \\ \end{bmatrix} \right) \nonumber \end{align} where \eqref{eqn:dis:matrix:1} follows from the induction hypothesis. The lemma is true. \end{proof}
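The merging identity can be checked numerically. Writing $e^{k\mathbf{A}} = e^{k\lambda}e^{k\mathbf{N}}$ with $\mathbf{N}$ the nilpotent shift, condition (ii) holding for all $\mathbf{X},\mathbf{X'}$ is equivalent to $\mathbf{C'}e^{k\mathbf{N}} = \mathbf{C}e^{k\mathbf{N}}\mathbf{T}$, and the $\mathbf{T}$ constructed in the proof depends only on the coefficients $c_i, c'_i$, not on $k$. The sketch below (with arbitrarily chosen coefficients) solves this linear system for $\mathbf{T}$ by sampling $m$ values of $k$, then checks the identity at a fresh $k$ together with the upper-triangular structure and constant diagonal $c'_1/c_1$:

```python
import numpy as np

m = 4
rng = np.random.default_rng(0)
c  = rng.normal(size=m); c[0] = 1.5    # c_1 != 0, as the lemma requires
cp = rng.normal(size=m)                # C' is unrestricted

N = np.diag(np.ones(m - 1), 1)         # nilpotent shift part of the Jordan block

def expkN(k):
    # e^{kN} = sum_{p < m} (kN)^p / p!  (finite sum: N is nilpotent)
    out, term = np.eye(m), np.eye(m)
    for p in range(1, m):
        term = term @ (k * N) / p
        out = out + term
    return out

ks = np.arange(1, m + 1, dtype=float)  # m distinct sample points
R  = np.vstack([c  @ expkN(k) for k in ks])
Rp = np.vstack([cp @ expkN(k) for k in ks])
T  = np.linalg.solve(R, Rp)            # unique T with C e^{kN} T = C' e^{kN}

k_test = 7.3                           # a value of k not used in the fit
lhs = cp @ expkN(k_test)
rhs = (c @ expkN(k_test)) @ T
print(np.allclose(lhs, rhs), np.allclose(np.diag(T), cp[0] / c[0]))
```

That the fitted $\mathbf{T}$ also works at the unseen $k$ reflects the $k$-independence of the construction.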
Now, we are ready to prove Lemma~\ref{lem:conti:mo}. \begin{proof}[Proof of Lemma~\ref{lem:conti:mo}] The proof is by induction on $m$, the size of the matrix $\mathbf{A_c}$. Recall that here $\mathbf{C}$ can be a general matrix, so we use the definitions of $\mathbf{A_c},\mathbf{C}$ given in \eqref{eqn:conti:a2}, \eqref{eqn:conti:c2}.
(i) When $m=1$,
In this case, the system is scalar, so the lemma is trivially true. A rigorous proof goes as follows: since $(\mathbf{A_c},\mathbf{C})$ is observable, we can find a $1 \times l$ matrix $\mathbf{L}$ such that $\mathbf{L}\mathbf{C}$ is nonzero. Then, $(\mathbf{A_c},\mathbf{L}\mathbf{C})$ is observable, and the lemma reduces to Lemma~\ref{lem:conti:singlec}.
(ii) We will assume that the lemma holds for $(m-1)$-dimensional systems as an induction hypothesis, and prove the lemma holds for $m$.
The proof goes in three steps. First, we reduce the system to one with scalar observations so that Lemma~\ref{lem:conti:singlec} applies. Then, we estimate one of the states and subtract the estimate from the system; this procedure is known as successive decoding in information theory. The system then reduces to an $(m-1)$-dimensional one, to which we apply the induction hypothesis.
For this, we define $\mathbf{x}:=\begin{bmatrix} \mathbf{x_{1,1}} \\ \mathbf{x_{1,2}} \\ \vdots \\ \mathbf{x_{\mu,\nu_{\mu}}} \end{bmatrix}$ where $\mathbf{x_{i,j}}$ are $m_{i,j} \times 1$ vectors, and $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$ as the $m_{1,\nu_1}$th element of $\mathbf{x_{1,\nu_1}}$. We also define $(\mathbf{x})_k$ as the $k$th element of a vector $\mathbf{x}$ in general. Here, $\mathbf{x}$ can be thought of as the state of the system. We first decode $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$, and then decode the remaining elements of $\mathbf{x}$.
$\bullet$ Reduction to Systems with Scalar Observations: By Lemma~\ref{lem:conti:singlec}, we already know that the lemma is true for systems with scalar observations. Therefore, we will reduce the general system with vector observations to a system with a scalar observation.
\begin{claim} There exist $\mathbf{L}, \mathbf{C'}, \mathbf{A'}, \mathbf{x'}$ that satisfy the following conditions.\\ (i) $\mathbf{L}$ is a $1 \times l$ row vector.\\ (ii) $\mathbf{A'}$ is an $m' \times m'$ square matrix given in a Jordan form. The eigenvalues of $\mathbf{A'}$ belong to $\{ \lambda_1 + j \omega_1, \cdots, \lambda_{\mu} + j \omega_{\mu}\}$, the set of the eigenvalues of $\mathbf{A_c}$. The first Jordan block of $\mathbf{A'}$ is equal to $\mathbf{A_{1,\nu_1}}$.\\ (iii) $\mathbf{C'}$ is an $l \times m'$ matrix and $(\mathbf{A'}, \mathbf{L}\mathbf{C'})$ is observable.\\ (iv) $\mathbf{x'}$ is an $m' \times 1$ column vector and $(\mathbf{x'})_{m_{1,\nu_1}} = (\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$. \\ (v) $\mathbf{L}\mathbf{C}e^{-k \mathbf{A_c}} \mathbf{x} = \mathbf{L} \mathbf{C'} e^{-k \mathbf{A'}}\mathbf{x'}$. \label{claim:dummy} \end{claim}
What this claim implies is the following. By multiplying the vector observations by the matrix $\mathbf{L}$, we can reduce the vector observations to scalar observations. However, the resulting system may no longer be observable. Therefore, we will carefully design the matrix $\mathbf{L}$ and the reduced system matrices $\mathbf{A'}$, $\mathbf{C'}$, so that the system remains observable even with a scalar observation and the information about $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$ remains intact.
\begin{proof} Since the first columns of $\mathbf{C_{1,1}}, \mathbf{C_{1,2}}, \cdots, \mathbf{C_{1,\nu_1}}$ are linearly independent, there exists a $1 \times l$ matrix $\mathbf{L}$ such that the first elements of $\mathbf{L}\mathbf{C_{1,1}},\mathbf{L}\mathbf{C_{1,2}},\cdots, \mathbf{L}\mathbf{C_{1,\nu_1-1}}$ are zeros and the first element of $\mathbf{L}\mathbf{C_{1,\nu_1}}$ is non-zero. Then, we can observe that \begin{align} \mathbf{L}\mathbf{C}e^{-k \mathbf{A_c}} \mathbf{x}&= \mathbf{L} \begin{bmatrix} \mathbf{C_{1,1}} & \cdots & \mathbf{C_{\mu,\nu_{\mu}}} \end{bmatrix} \begin{bmatrix} e^{-k \mathbf{A_{1,1}}} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{-k \mathbf{A_{\mu,\nu_{\mu}}}} \end{bmatrix} \begin{bmatrix} \mathbf{x_{1,1}} \\ \vdots \\ \mathbf{x_{\mu,\nu_\mu}} \end{bmatrix} \nonumber \\ &=\mathbf{L} \mathbf{C_{1,1}} e^{-k \mathbf{A_{1,1}}} \mathbf{x_{1,1}} + \mathbf{L} \mathbf{C_{1,2}} e^{-k \mathbf{A_{1,2}}} \mathbf{x_{1,2}} + \cdots + \mathbf{L} \mathbf{C_{\mu,\nu_\mu}} e^{-k \mathbf{A_{\mu,\nu_\mu}}} \mathbf{x_{\mu,\nu_\mu}}. \label{eqn:lem:idontknow} \end{align} Recall that the Jordan blocks $\mathbf{A_{i,1}}, \cdots ,\mathbf{A_{i,\nu_i}}$ correspond to the same eigenvalue. We will merge these Jordan blocks into one Jordan block. However, since the sizes of the Jordan blocks $\mathbf{A_{i,1}}, \cdots ,\mathbf{A_{i,\nu_i}}$ may differ, we will first extend the smaller Jordan blocks to the size of the largest one by adding zero elements. Let the dimension of $\mathbf{A_{i,\bar{\nu}_i}}$ be the largest among $\mathbf{A_{i,1}}, \cdots ,\mathbf{A_{i,\nu_i}}$, and $m_{i,\bar{\nu}_i}$ be the corresponding dimension. Then, we define $\mathbf{\bar{C}_{i,j}}$ as a matrix whose first $m_{i,\bar{\nu}_i} - m_{i,j}$ columns are all zeros, and whose remaining columns are the same as those of $\mathbf{C_{i,j}}$. $\mathbf{\bar{A}_{i,j}}$ is defined as the same matrix as $\mathbf{{A}_{i,\bar{\nu}_i}}$.
$\mathbf{\bar{x}_{i,j}}$ is defined as a column vector whose first $m_{i,\bar{\nu}_i} - m_{i,j}$ elements are all zeros, and the remaining elements are those of $\mathbf{x_{i,j}}$.
Then, by the construction, we know \begin{align} \eqref{eqn:lem:idontknow}=\mathbf{L} \mathbf{\bar{C}_{1,1}} e^{-k \mathbf{\bar{A}_{1,1}}} \mathbf{\bar{x}_{1,1}} + \mathbf{L} \mathbf{\bar{C}_{1,2}} e^{-k \mathbf{\bar{A}_{1,2}}} \mathbf{\bar{x}_{1,2}} + \cdots + \mathbf{L} \mathbf{\bar{C}_{\mu,\nu_\mu}} e^{-k \mathbf{\bar{A}_{\mu,\nu_\mu}}} \mathbf{\bar{x}_{\mu,\nu_\mu}}. \end{align} Furthermore, $\mathbf{A_{1,\nu_1}}=\mathbf{\bar{A}_{1,\nu_1}}$, $\mathbf{C_{1,\nu_1}}=\mathbf{\bar{C}_{1,\nu_1}}$, $\mathbf{x_{1,\nu_1}}=\mathbf{\bar{x}_{1,\nu_1}}$. The first elements of $\mathbf{L}\mathbf{\mathbf{C_{1,1}}},\mathbf{L}\mathbf{\mathbf{C_{1,2}}}, \cdots, \mathbf{L}\mathbf{C_{1,\nu_1-1}}$ are zeros and the first element of $\mathbf{L}\mathbf{C_{1,\nu_1}}$ is non-zero.
Now, we have systems $(\mathbf{\bar{A}_{i,1}},\mathbf{L}\mathbf{\bar{C}_{i,1}})$, $\cdots$, $(\mathbf{\bar{A}_{i,\nu_i}},\mathbf{L}\mathbf{\bar{C}_{i,\nu_i}})$ of the same dimension. However, none of them might be observable. Thus, we will truncate the matrices to make sure that at least one of them is observable. Recall that since $\mathbf{L}\mathbf{\bar{C}_{i,j}}$ is a row vector and $\mathbf{\bar{A}_{i,j}}$ is a single Jordan block, the system is observable as long as the first element of $\mathbf{L} \mathbf{\bar{C}_{i,j}}$ is not zero. Thus, we will truncate the matrices until we see at least one nonzero element among the leading elements of $\mathbf{L}\mathbf{\bar{C}_{i,1}}$, $\cdots$, $\mathbf{L}\mathbf{\bar{C}_{i,\nu_i}}$. Let $k_i$ be the smallest number such that at least one of the $k_i$th elements of $\mathbf{L}\mathbf{\bar{C}_{i,1}}, \cdots, \mathbf{L}\mathbf{\bar{C}_{i,\nu_i}}$ is nonzero, and let $\mathbf{L}\mathbf{\bar{C}_{i,\nu_i^\star}}$ be the vector that achieves this minimum.
Then, we will reduce the dimensions of $(\mathbf{\bar{A}_{i,j}},\mathbf{L}\mathbf{\bar{C}_{i,j}})$ by truncating the first $(k_i-1)$ coordinates. Define $\mathbf{C_{i,j}'}$ as the matrix obtained by removing the first $(k_i-1)$ columns from $\mathbf{\bar{C}_{i,j}}$, $\mathbf{A_{i,j}'}$ as the matrix obtained by removing the first $(k_i-1)$ rows and columns from $\mathbf{\bar{A}_{i,j}}$, and $\mathbf{x_{i,j}'}$ as the vector obtained by removing the first $(k_i-1)$ elements from $\mathbf{\bar{x}_{i,j}}$.
Then, by the construction, the resulting systems $(\mathbf{A_{i,\nu_i^\star}'}, \mathbf{L}\mathbf{C_{i,\nu_i^\star}'})$ are observable. We can also see that $\nu_{1}^\star=\nu_1$, $\mathbf{C_{1,\nu_1^\star}'}=\mathbf{\bar{C}_{1,\nu_1}}=\mathbf{C_{1,\nu_1}}$, $\mathbf{A_{1,\nu_1^\star}'}=\mathbf{\bar{A}_{1,\nu_1}}=\mathbf{A_{1,\nu_1}}$, and $\mathbf{x'_{1,\nu_1^\star}}=\mathbf{\bar{x}_{1,\nu_1}}=\mathbf{x_{1,\nu_1}}$. In words, the Jordan block $\mathbf{A_{1,\nu_1}}$ was not affected by the above manipulations. Moreover, by the construction, the first elements of $\mathbf{L}\mathbf{C'_{1,1}},\cdots,\mathbf{L}\mathbf{C'_{1,\nu_1-1}}$ are all zero.
Denote $\mathbf{C'}:=\begin{bmatrix} \mathbf{C'_{1,\nu_1^\star}} & \mathbf{C'_{2,\nu_2^\star}} & \cdots & \mathbf{C'_{\mu,\nu_\mu^\star}} \end{bmatrix}$ and $\mathbf{A'}:= diag \{ \mathbf{A'_{1,\nu_1^\star}}, \mathbf{A'_{2,\nu_2^\star}}, \cdots, \mathbf{A'_{\mu,\nu_\mu^\star}} \}$. Then, \eqref{eqn:lem:idontknow} can be written as follows: \begin{align} \eqref{eqn:lem:idontknow}&=\mathbf{L} \mathbf{C'_{1,1}} e^{-k \mathbf{A'_{1,1}}} \mathbf{x'_{1,1}} + \mathbf{L} \mathbf{C'_{1,2}} e^{-k \mathbf{A'_{1,2}}} \mathbf{x'_{1,2}} + \cdots + \mathbf{L} \mathbf{C'_{\mu,\nu_\mu}} e^{-k \mathbf{A'_{\mu,\nu_\mu}}} \mathbf{x'_{\mu,\nu_\mu}} \nonumber \\ &=\mathbf{L} \mathbf{C'_{1,\nu_1^\star}} e^{-k \mathbf{A'_{1,\nu_1^\star}}} ( \mathbf{x'_{1,\nu_1^\star}}+\sum_{j \in \{1,\cdots,\nu_1 \} \setminus \nu_1^\star} \mathbf{T_{1,j}}\mathbf{x_{1,j}'})+ \cdots \nonumber \\ &\quad+\mathbf{L} \mathbf{C'_{\mu,\nu_\mu^\star}} e^{-k \mathbf{A'_{\mu,\nu_\mu^\star}}} ( \mathbf{x'_{\mu,\nu_\mu^\star}}+\sum_{j \in \{1,\cdots,\nu_\mu \} \setminus \nu_\mu^\star} \mathbf{T_{\mu,j}}\mathbf{x_{\mu,j}'} ) \label{eqn:lem:contigeo:10}\\%(\because lemma~\ref{lem:conti:jordan})\nonumber \\ &= \begin{bmatrix} \mathbf{L}\mathbf{C'_{1,\nu_1^\star}} & \cdots & \mathbf{L}\mathbf{C'_{1,\nu_\mu^\star}} \end{bmatrix} \begin{bmatrix} e^{-k \mathbf{A'_{1,\nu_1^\star}}} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & e^{-k \mathbf{A'_{\mu,\nu_\mu^\star}}} \end{bmatrix} \underbrace{ \begin{bmatrix} \mathbf{x'_{1,\nu_1^\star}}+\sum_{j \in \{1,\cdots,\nu_1 \} \setminus \nu_1^\star} \mathbf{T_{1,j}}\mathbf{x_{1,j}'}\\ \vdots \\ \mathbf{x'_{\mu,\nu_\mu^\star}}+\sum_{j \in \{1,\cdots,\nu_\mu \} \setminus \nu_\mu^\star} \mathbf{T_{\mu,j}}\mathbf{x_{\mu,j}'} \end{bmatrix} }_{:= \mathbf{x'} } \nonumber \\ &=\mathbf{L}\mathbf{C'} e^{-k \mathbf{A'}} \mathbf{x'}\label{eqn:lem:contigeo:3} \end{align} where \eqref{eqn:lem:contigeo:10} follows from Lemma~\ref{lem:conti:jordan}. 
Here, we can easily see that $\mathbf{A'}$ satisfies the condition (ii) of the claim, and $(\mathbf{A'}, \mathbf{L}\mathbf{C'})$ is observable since each $(\mathbf{A_{i,\nu_i^\star}'}, \mathbf{L}\mathbf{C_{i,\nu_i^\star}'})$ is observable.
Moreover, since the first elements of $\mathbf{L}\mathbf{C'_{1,1}},\cdots,\mathbf{L}\mathbf{C'_{1,\nu_1-1}}$ are zero, Lemma~\ref{lem:conti:jordan} tells us that $\mathbf{T_{1,1}}, \cdots, \mathbf{T_{1,\nu_1-1}}$ are upper triangular matrices whose diagonal elements are zeros. Therefore, $(\mathbf{x'})_{m_{1,\nu_1}}=(\mathbf{x_{1,\nu_1}'})_{m_{1,\nu_1}}=(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$, and condition (iv) of the claim is also satisfied.
\end{proof}
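The choice of $\mathbf{L}$ at the start of the claim's proof is a small linear-algebra computation: collect the first columns of $\mathbf{C_{1,1}},\cdots,\mathbf{C_{1,\nu_1}}$ and take $\mathbf{L}$ to be the residual of the last column after projecting it onto the span of the others, so that $\mathbf{L}$ annihilates the first $\nu_1-1$ columns but not the last. A sketch with arbitrary illustrative dimensions and data:

```python
import numpy as np

rng = np.random.default_rng(1)
l, nu1 = 5, 3                     # observation dimension and block count (illustrative)
V = rng.normal(size=(l, nu1))     # V[:, j]: first column of C_{1,j+1}; assumed independent

# Project the last column onto the span of the first nu1-1 columns;
# the residual L satisfies L @ V[:, j] = 0 for j < nu1-1 and
# L @ V[:, nu1-1] = ||L||^2 > 0 whenever the columns are independent.
coef, *_ = np.linalg.lstsq(V[:, :nu1 - 1], V[:, nu1 - 1], rcond=None)
L = V[:, nu1 - 1] - V[:, :nu1 - 1] @ coef

print(L @ V[:, :nu1 - 1], L @ V[:, nu1 - 1])
```

Here $\mathbf{L}$ is a row vector acting from the left, matching $\mathbf{L}\mathbf{C_{1,j}}$ in the proof.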
$\bullet$ Decoding $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$: Now, we reduced the system to a system with a scalar observation. Then, we can apply Lemma~\ref{lem:conti:singlec} to decode $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$.
\begin{claim} We can find a polynomial $p'(k)$ and a family of stopping times $\{S'(\epsilon,k): k \in \mathbb{Z^+}, \epsilon > 0\}$ such that for all $\epsilon>0$, $k \in \mathbb{Z}^+$ there exist $k \leq k_1 < k_2 < \cdots < k_{m'} \leq S'(\epsilon,k)$ and $\mathbf{M_1'}$ satisfying:\\ (i) $\beta[k_i]=1$ for $1 \leq i \leq m'$\\ (ii) $\mathbf{M_1'} \begin{bmatrix} \mathbf{L} & 0 & \cdots & 0 \\ 0 & \mathbf{L} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{L} \end{bmatrix} \begin{bmatrix} \mathbf{C} e^{-(k_1 I + t_{k_1}) \mathbf{A_c}} \\ \mathbf{C} e^{-(k_2 I + t_{k_2}) \mathbf{A_c}} \\ \vdots \\ \mathbf{C} e^{-(k_{m'} I + t_{k_{m'}}) \mathbf{A_c}} \\ \end{bmatrix} \mathbf{x}=(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} $\\
(iii) $\left|\mathbf{M_1'}\right|_{max} \leq \frac{p'(S'(\epsilon,k))}{\epsilon} e^{\lambda_1 S'(\epsilon,k)I}$\\ (iv) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P}\{ S'(\epsilon,k)-k = s\} \leq p_e$. \label{claim:donknow} \end{claim}
This claim shows that there exists an estimator $\mathbf{M_1'} diag\{ \mathbf{L}, \cdots, \mathbf{L} \}$ which can estimate the state $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$ from the observations at times $k_1, \cdots, k_{m'}$.
\begin{proof}
By the construction, $(\mathbf{A'},\mathbf{L}\mathbf{C'})$ is observable and $\mathbf{L}\mathbf{C'}$ is a row vector. Thus, by Lemma~\ref{lem:conti:singlec} we can find a polynomial $p'(k)$ and a family of stopping times $\{S'(\epsilon,k): k \in \mathbb{Z^+}, \epsilon > 0\}$ such that for all $\epsilon>0$, $k \in \mathbb{Z}^+$ there exist $k \leq k_1 < k_2 < \cdots < k_{m'} \leq S'(\epsilon,k)$ and $\mathbf{M'}$ satisfying:\\ (i) $\beta[k_i]=1$ for $1 \leq i \leq m'$\\ (ii) $\mathbf{M'} \begin{bmatrix} \mathbf{L}\mathbf{C'} e^{-(k_1 I + t_{k_1}) \mathbf{A'}} \\ \mathbf{L}\mathbf{C'} e^{-(k_2 I + t_{k_2}) \mathbf{A'}} \\ \vdots \\ \mathbf{L}\mathbf{C'} e^{-(k_{m'} I + t_{k_{m'}}) \mathbf{A'}} \\ \end{bmatrix}=\mathbf{I} $\\
(iii) $\left|\mathbf{M'}\right|_{max} \leq \frac{p'(S'(\epsilon,k))}{\epsilon} e^{\lambda_1 S'(\epsilon,k)I}$\\ (iv) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P}\{ S'(\epsilon,k)-k = s\} \leq p_e$.
Let $\mathbf{M'_{1}}$ be the $m_{1,\nu_1}$th row of $\mathbf{M'}$. Then, \begin{align} &\mathbf{M'_{1}} \begin{bmatrix} \mathbf{L} & 0 & \cdots & 0 \\ 0 & \mathbf{L} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{L} \end{bmatrix} \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A_c}} \\ \end{bmatrix} \mathbf{x} = \mathbf{M'_{1}} \begin{bmatrix} \mathbf{L}\mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}}\mathbf{x} \\ \mathbf{L}\mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}}\mathbf{x} \\ \vdots \\ \mathbf{L}\mathbf{C}e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A_c}}\mathbf{x} \\ \end{bmatrix} \nonumber \\ &= \mathbf{M'_{1}} \begin{bmatrix} \mathbf{L}\mathbf{C'}e^{-(k_1 I + t_{k_1})\mathbf{A'}}\mathbf{x'} \\ \mathbf{L}\mathbf{C'}e^{-(k_2 I + t_{k_2})\mathbf{A'}}\mathbf{x'} \\ \vdots \\ \mathbf{L}\mathbf{C'}e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A'}}\mathbf{x'} \\ \end{bmatrix} (\because Claim~\ref{claim:dummy}\mbox{ (v)})\nonumber \\ &= \mathbf{M'_{1}} \begin{bmatrix} \mathbf{L}\mathbf{C'}e^{-(k_1 I + t_{k_1})\mathbf{A'}} \\ \mathbf{L}\mathbf{C'}e^{-(k_2 I + t_{k_2})\mathbf{A'}} \\ \vdots \\ \mathbf{L}\mathbf{C'}e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A'}} \\ \end{bmatrix} \mathbf{x'} = (\mathbf{x'})_{m_{1,\nu_1}} = (\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} (\because Claim~\ref{claim:dummy}\mbox{ (iv)}).
\end{align} \end{proof}
$\bullet$ Subtracting $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$ from the observations: Now, we have an estimate of $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$, and we will remove it from the system. $\mathbf{A''}$, $\mathbf{C''}$ and $\mathbf{x''}$ are the system matrices after the removal. Formally, $\mathbf{A''}$, $\mathbf{C''}$ and $\mathbf{x''}$ are obtained by removing the $\sum_{1 \leq i \leq \nu_1} m_{1,i}$th row and column from $\mathbf{A_c}$, removing the $\sum_{1 \leq i \leq \nu_1} m_{1,i}$th column from $\mathbf{C}$ and removing the $\sum_{1 \leq i \leq \nu_1} m_{1,i}$th component from $\mathbf{x}$ respectively.
Obviously, $\mathbf{A''} \in \mathbb{C}^{(m-1) \times (m-1)}$ and $\mathbf{C''} \in \mathbb{C}^{l \times (m-1)}$. Moreover, since the last element of the Jordan block $\mathbf{A_{1,\nu_1}}$ is removed and the observability only depends on the first element, $(\mathbf{A''},\mathbf{C''})$ is observable. Let $\lambda_1''+j\omega_1''$ be the eigenvalue of $\mathbf{A''}$ with the largest real part. Then, trivially $\lambda_1'' \leq \lambda_1$.
The new system $(\mathbf{A''},\mathbf{C''})$ and the original system $(\mathbf{A},\mathbf{C})$ are related as follows. Denote the $\sum_{1 \leq i \leq \nu_1} m_{1,i}$th column of $\mathbf{C}e^{-k \mathbf{A_c}}$ as $\mathbf{R}(k)$. Then, we have \begin{align} \mathbf{C} e^{-k \mathbf{A_c}} \mathbf{x} -\mathbf{R}(k)(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} &= \mathbf{C''} e^{-k \mathbf{A''}} \mathbf{x''} \label{eqn:lem:contigeo:4} \end{align}
which can be easily proved from the block diagonal structure of $\mathbf{A_c}$. We can further see that there exists a polynomial $p'''(k)$ such that $\left| \mathbf{R}(k) \right|_{max} \leq p'''(k)e^{-k \lambda_1}$.
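The subtraction relation above can be sanity-checked numerically on a toy instance. The sketch below is illustrative only and does not use the matrices of the lemma: it assumes a hypothetical diagonal $\mathbf{A_c}=diag\{2,1\}$ with $\mathbf{C}=\begin{bmatrix}1 & 1\end{bmatrix}$, so that removing the first state gives $\mathbf{A''}=[1]$, $\mathbf{C''}=[1]$ and $\mathbf{R}(k)=e^{-2k}$:

```python
import cmath

# Toy instance (illustrative only): A_c = diag(2, 1), C = [1 1], x = (x1, x2).
# Removing the first row/column gives A'' = [1], C'' = [1], x'' = (x2,),
# and R(k), the removed column of C e^{-k A_c}, equals e^{-2k}.
x1, x2 = 3.0 + 1.0j, -2.0
for k in range(5):
    full = cmath.exp(-2 * k) * x1 + cmath.exp(-k) * x2   # C e^{-k A_c} x
    R_k = cmath.exp(-2 * k)                              # R(k)
    lhs = full - R_k * x1                                # subtract the decoded state
    rhs = cmath.exp(-k) * x2                             # C'' e^{-k A''} x''
    assert abs(lhs - rhs) < 1e-12
print("subtraction identity verified on the toy instance")
```

Because $\mathbf{A_c}$ is block diagonal, the removed state's contribution enters the observation only through the column $\mathbf{R}(k)$, which is exactly what the loop checks.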
$\bullet$ Decoding the remaining element of $\mathbf{x}$: We decoded and subtracted the state $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$ from the system. Now, we can apply the induction hypothesis to the remaining $(m-1)$-dimensional system and estimate the remaining states.
By the induction hypothesis, for given $S'(\epsilon,k)$, we can find $m'' \in \mathbb{Z}$, a polynomial $p''(k)$ and a family of stopping times $\{S''(\epsilon,S'(\epsilon,k)): S'(\epsilon,k) \in \mathbb{Z^+}, 0 < \epsilon < 1 \}$ such that for all $0< \epsilon < 1$ there exist $S'(\epsilon,k) < k_{m'+1} < \cdots < k_{m''} \leq S''(\epsilon,S'(\epsilon,k))$ and an $(m-1) \times (m''-m')l$ matrix $\mathbf{M''}$ satisfying the following conditions:\\ (i) $\beta[k_i] = 1$ for $m'+1 \leq i \leq m''$\\ (ii) $ \mathbf{M''} \begin{bmatrix} \mathbf{C''} e^{-(k_{m'+1} I + t_{k_{m'+1}})\mathbf{A''}} \\ \mathbf{C''} e^{-(k_{m'+2} I + t_{k_{m'+2}})\mathbf{A''}} \\ \vdots \\ \mathbf{C''} e^{-(k_{m''} I + t_{k_{m''}})\mathbf{A''}} \\ \end{bmatrix} = \mathbf{I}_{(m-1) \times (m-1)} $ \\ (iii) $
\left| \mathbf{M''} \right|_{max} \leq \frac{p''(S''(\epsilon,S'(\epsilon,k)))}{\epsilon} e^{\lambda_1'' S''(\epsilon,S'(\epsilon,k)) I} $ \\ (iv) $
\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{ S''(\epsilon,S'(\epsilon,k))-S'(\epsilon,k)=s | \mathcal{F}_{S'(\epsilon,k)} \} \leq p_e $\\ where $\mathcal{F}_n$ is the $\sigma$-field generated by $\beta[0], \cdots, \beta[n]$ and $t_0, \cdots, t_n$.
Then, \begin{align} \mathbf{x''}&=\mathbf{M''} \begin{bmatrix} \mathbf{C''}e^{-(k_{m'+1}I + t_{k_{m'+1}})\mathbf{A''}} \\ \mathbf{C''}e^{-(k_{m'+2}I + t_{k_{m'+2}})\mathbf{A''}} \\ \vdots \\ \mathbf{C''}e^{-(k_{m''}I + t_{k_{m''}})\mathbf{A''}} \\ \end{bmatrix} \mathbf{x''} \nonumber \\
&= \mathbf{M''} \begin{bmatrix} \mathbf{C}e^{-(k_{m'+1}I + t_{k_{m'+1}})\mathbf{A_c}}\mathbf{x}-\mathbf{R}(k_{m'+1}I + t_{k_{m'+1}})(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} \\ \mathbf{C}e^{-(k_{m'+2}I + t_{k_{m'+2}})\mathbf{A_c}}\mathbf{x}-\mathbf{R}(k_{m'+2}I + t_{k_{m'+2}})(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} \\ \vdots \\ \mathbf{C}e^{-(k_{m''}I + t_{k_{m''}})\mathbf{A_c}}\mathbf{x}-\mathbf{R}(k_{m''}I + t_{k_{m''}})(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} \\ \end{bmatrix}
\label{eqn:lem:contigeo:11} \\ &= \mathbf{M''} \left( \begin{bmatrix} \mathbf{C}e^{-(k_{m'+1}I + t_{k_{m'+1}})\mathbf{A_c}}\\ \mathbf{C}e^{-(k_{m'+2}I + t_{k_{m'+2}})\mathbf{A_c}}\\ \vdots \\ \mathbf{C}e^{-(k_{m''}I + t_{k_{m''}})\mathbf{A_c}}\\ \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{R}(k_{m'+1}I + t_{k_{m'+1}}) \\ \mathbf{R}(k_{m'+2}I + t_{k_{m'+2}}) \\ \vdots \\ \mathbf{R}(k_{m''}I + t_{k_{m''}}) \\ \end{bmatrix} (\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}} \right) \nonumber \\ &=\mathbf{M''} \left( \begin{bmatrix} \mathbf{C}e^{-(k_{m'+1}I + t_{k_{m'+1}})\mathbf{A_c}}\\ \mathbf{C}e^{-(k_{m'+2}I + t_{k_{m'+2}})\mathbf{A_c}}\\ \vdots \\ \mathbf{C}e^{-(k_{m''}I + t_{k_{m''}})\mathbf{A_c}}\\ \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{R}(k_{m'+1}I + t_{k_{m'+1}}) \\ \mathbf{R}(k_{m'+2}I + t_{k_{m'+2}}) \\ \vdots \\ \mathbf{R}(k_{m''}I + t_{k_{m''}}) \\ \end{bmatrix} \mathbf{M_1'} \begin{bmatrix} \mathbf{L} & 0 & \cdots & 0 \\ 0 & \mathbf{L} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{L} \end{bmatrix} \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m'} I + t_{k_{m'}})\mathbf{A_c}} \\ \end{bmatrix} \mathbf{x} \right) \label{eqn:lem:contigeo:2} \\ &= \mathbf{M''} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(k_{m'+1}I + t_{k_{m'+1}}) \\ \mathbf{R}(k_{m'+2}I + t_{k_{m'+2}}) \\ \vdots \\ \mathbf{R}(k_{m''}I + t_{k_{m''}}) \\ \end{bmatrix} \mathbf{M_1'} \begin{bmatrix} \mathbf{L} & 0 & \cdots & 0 \\ 0 & \mathbf{L} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{L} \end{bmatrix} & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \mathbf{C}e^{-(k_2 I + t_{k_2})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m''} I + t_{k_{m''}})\mathbf{A_c}} \\ \end{bmatrix} \mathbf{x} \nonumber \end{align} where \eqref{eqn:lem:contigeo:11} follows from \eqref{eqn:lem:contigeo:4}, and 
\eqref{eqn:lem:contigeo:2} follows from the condition (ii) of Claim~\ref{claim:donknow}. Therefore, we can recover the remaining states of $\mathbf{x}$.
Moreover, we have \begin{align}
&\left| \mathbf{M''} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(k_{m'+1}I + t_{k_{m'+1}}) \\ \mathbf{R}(k_{m'+2}I + t_{k_{m'+2}}) \\ \vdots \\ \mathbf{R}(k_{m''}I + t_{k_{m''}}) \\ \end{bmatrix} \mathbf{M_1'} \begin{bmatrix} \mathbf{L} & 0 & \cdots & 0 \\ 0 & \mathbf{L} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{L} \end{bmatrix} & \mathbf{I}
\end{bmatrix} \right|_{max} \nonumber \\ &\lesssim
\left| \mathbf{M''} \right|_{max} \cdot \max\left\{
\left| \begin{bmatrix} \mathbf{R}(k_{m'+1}I + t_{k_{m'+1}}) \\ \mathbf{R}(k_{m'+2}I + t_{k_{m'+2}}) \\ \vdots \\ \mathbf{R}(k_{m''}I + t_{k_{m''}}) \\ \end{bmatrix}
\right|_{max}
\left| \mathbf{M_1'} \right|_{max}
\left| \mathbf{L} \right|_{max} ,1 \right\} \nonumber \\ &\lesssim \frac{p''(S''(\epsilon,S'(\epsilon,k)))}{\epsilon} e^{\lambda_1'' S''(\epsilon,S'(\epsilon,k)) I} \max\left\{ p'''(k_{m''}I+t_{k_{m''}}) e^{-\lambda_1 ( k_{m'+1}I + t_{k_{m'+1}} )} \cdot \frac{p'(S'(\epsilon,k))}{\epsilon}e^{\lambda_1 S'(\epsilon,k) I}
\cdot \left| \mathbf{L} \right|_{max} ,1 \right\} \nonumber \\
&\lesssim \frac{\bar{p}(S''(\epsilon,S'(\epsilon,k)))}{\epsilon^2} e^{\lambda_1 S''(\epsilon,S'(\epsilon,k))I}\ (\because S'(\epsilon,k)< k_{m'+1} < k_{m''} \leq S''(\epsilon,S'(\epsilon,k)),\ \lambda_1'' \leq \lambda_1) \nonumber \end{align} for some polynomial $\bar{p}(k)$. Since for some $\bar{\bar{p}}(k)$ \begin{align}
\left| \mathbf{M_1'} \right|_{max} \leq \frac{p'(S'(\epsilon,k))}{\epsilon} e^{\lambda_1 S'(\epsilon,k)I} \leq \frac{\bar{\bar{p}}(S''(\epsilon,S'(\epsilon,k)))}{\epsilon^2} e^{\lambda_1 S''(\epsilon,S'(\epsilon,k))I} \nonumber \end{align} and we can recover $\mathbf{x}$ from $\mathbf{x''}$ and $(\mathbf{x_{1,\nu_1}})_{m_{1,\nu_1}}$, there exists $\mathbf{M}$ and a polynomial $p(k)$ such that \begin{align} \mathbf{M} \begin{bmatrix} \mathbf{C}e^{-(k_1 I + t_{k_1})\mathbf{A_c}} \\ \vdots \\ \mathbf{C}e^{-(k_{m''} I + t_{k_{m''}})\mathbf{A_c}} \end{bmatrix} =\mathbf{I}_{m \times m} \nonumber \end{align} and \begin{align}
\left| \mathbf{M} \right|_{max} \leq \frac{p(S''(\epsilon,S'(\epsilon,k)))}{\epsilon^2} e^{\lambda_1 S''(\epsilon,S'(\epsilon,k)) I}. \nonumber \end{align} Moreover, since \begin{align}
\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}\{ S''(\epsilon,S'(\epsilon,k)) - S'(\epsilon,k) = s | \mathcal{F}_{S'(\epsilon,k)}\} \leq p_e \nonumber \end{align} and \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P}\{ S'(\epsilon,k) - k = s\} \leq p_e, \nonumber \end{align} by Lemma~\ref{lem:app:geo} \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z^+}} \frac{1}{s} \log \mathbb{P}\{ S''(\epsilon,S'(\epsilon,k)) - k = s\} \leq p_e.\nonumber \end{align} Therefore, by putting $S(\epsilon,k):=S''(\epsilon^{\frac{1}{2}},S'(\epsilon^{\frac{1}{2}},k))$, $S(\epsilon,k)$ satisfies all the conditions of the lemma. \end{proof}
\subsection{Lemmas about the Observability Gramian of Discrete-Time Systems} \label{sec:dis:gramian} Now, we will consider the discrete-time systems discussed in Section~\ref{sec:interob}.
Like the continuous-time case, we start from the simpler case in which $\mathbf{C}$ is a row vector and $\mathbf{A}$ has no eigenvalue cycles. The definitions corresponding to \eqref{eqn:ac:jordan} for the row vector case are given as follows: Let $\mathbf{A}$ be an $m \times m$ Jordan form matrix and $\mathbf{C}$ be a $1 \times m$ row vector which can be written as \begin{align} &\mathbf{A}=diag\{ \mathbf{A_{1,1}}, \mathbf{A_{1,2}}, \cdots, \mathbf{A_{1,\nu_1}}, \cdots, \mathbf{A_{\mu,1}}, \cdots, \mathbf{A_{\mu,\nu_\mu}}\} \label{eqn:ac:jordansingle} \\ &\mathbf{C}=\begin{bmatrix} \mathbf{C_{1,1}}, \mathbf{C_{1,2}}, \cdots, \mathbf{C_{1,\nu_1}}, \cdots, \mathbf{C_{\mu,1}}, \cdots, \mathbf{C_{\mu,\nu_\mu}} \end{bmatrix} \label{eqn:ac:jordansinglec} \\ &\mbox{where} \nonumber \\ &\quad \mbox{$\mathbf{A_{i,j}}$ is a Jordan block with eigenvalue $\lambda_{i,j}e^{j 2 \pi \omega_{i,j}}$ and size $m_{i,j}$} \nonumber \\ &\quad m_{i,1} \leq m_{i,2} \leq \cdots \leq m_{i,\nu_i}\mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad \lambda_{i,1}=\lambda_{i,2}=\cdots=\lambda_{i,\nu_i}\mbox{ for all }i=1,\cdots,\mu \nonumber \\ &\quad \lambda_{1,1} > \lambda_{2,1} > \cdots > \lambda_{\mu,1} \geq 1 \nonumber \\ &\quad \{ \lambda_{i,1},\cdots, \lambda_{i,\nu_i} \} \mbox{ is a cycle with length $\nu_i$ and period $p_i$}\nonumber \\ &\quad \mbox{For all $(i,j) \neq (i',j')$, $\omega_{i,j}-\omega_{i',j'} \notin \mathbb{Q}$} \nonumber \\ &\quad \mbox{$\mathbf{C_{i,j}}$ is a $1 \times m_{i,j}$ complex matrix and its first element is non-zero} \nonumber \\ &\quad \mbox{$\lambda_i e^{j 2 \pi \omega_i}$ is the $(i,i)$ element of $\mathbf{A}$}. \nonumber \end{align} Here, we can notice that $\mathbf{A}$ has no eigenvalue cycles since $\omega_{i,j}-\omega_{i',j'} \notin \mathbb{Q}$ for all $(i,j) \neq (i',j')$, and $\mathbf{C}$ is a row vector. 
By Theorem~\ref{thm:jordanob}, the condition that the first elements of $\mathbf{C_{i,j}}$ are non-zero corresponds to the observability condition of $(\mathbf{A},\mathbf{C})$ since $\mathbf{C}$ is a row vector.
Let's state lemmas which parallel Lemma~\ref{lem:conti:inverse2} and Lemma~\ref{lem:det:lower}. In fact, the proofs of the lemmas are very similar to those of Lemma~\ref{lem:conti:inverse2} and Lemma~\ref{lem:det:lower} and we omit the proofs here. \begin{lemma} Let $\mathbf{A}$ and $\mathbf{C}$ be given as \eqref{eqn:ac:jordansingle} and \eqref{eqn:ac:jordansinglec}. Then, there exists a polynomial $p(k)$ such that for all $\epsilon>0$ and $0 \leq k_1 \leq \cdots \leq k_m$, if \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1} \\ \mathbf{C} \mathbf{A}^{-k_2} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_m} \end{bmatrix}
\right) \right| \geq \epsilon \prod_{1 \leq i \leq m} \lambda_i^{-k_i} \nonumber \end{align} then \begin{align}
\left| \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1} \\ \mathbf{C} \mathbf{A}^{-k_2} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_m} \\ \end{bmatrix}
^{-1} \right|_{max} \leq \frac{p(k_m)}{\epsilon} \lambda_1^{k_m} \nonumber \end{align} \label{lem:dis:inverse} \end{lemma} \begin{proof} It can be easily proved in a similar way to Lemma~\ref{lem:conti:inverse2}. \end{proof}
\begin{lemma} Let $\mathbf{A}$ and $\mathbf{C}$ be given as \eqref{eqn:ac:jordansingle} and \eqref{eqn:ac:jordansinglec}. Define $a_{i,j}$ and $C_{i,j}$ as the $(i,j)$ element and cofactor of $\begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-k_m} \end{bmatrix}$ respectively. Then there exists $g_{\epsilon}(k):\mathbb{R}^+ \rightarrow \mathbb{R}^+$ and $a \in \mathbb{R}^+$ such that for all $\epsilon>0$ and $k_1,\cdots, k_m$ satisfying \begin{align} &(i) 0 \leq k_1 < k_2 < \cdots < k_m \nonumber \\ &(ii) k_m - k_{m-1} \geq g_{\epsilon}(k_{m-1}) \nonumber \\ &(iii) g_{\epsilon}(k) \leq a(1+\log (k+1)) \nonumber \\
&(iv)\left| \sum_{m-m_{\mu}+1 \leq i \leq m} a_{m,i} C_{m,i} \right| \geq \epsilon \prod_{1 \leq i \leq m} {\lambda_i}^{-k_i} \nonumber \end{align} the following inequality holds: \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \mathbf{C}\mathbf{A}^{-k_2} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-k_m} \\ \end{bmatrix}
\right) \right| \geq \frac{1}{2} \epsilon \prod_{1 \leq i \leq m} {\lambda_i}^{-k_i}. \nonumber \end{align} \label{lem:dis:det:lower} \end{lemma} \begin{proof} It can be easily proved in a similar way to Lemma~\ref{lem:det:lower}. \end{proof} Like the continuous-time case, these lemmas reduce questions about the inverse of the observability Gramian to questions about the determinant of the observability Gramian.
\subsection{Uniform Convergence of Sequences satisfying Weyl's criterion (Discrete-Time Systems)} \label{sec:dis:uniform}
As we did in the continuous-time case, we will prove that the determinant of the observability matrix is large enough regardless of the erasure pattern. The main difference from the continuous-time case of Appendix~\ref{app:unif:conti} is the measure that must be used. While there we used the Lebesgue measure to measure the bad event (the event that the determinant of the observability matrix is small), in this section we use the counting measure.
The main idea of this section is approximating aperiodic deterministic sequences by random variables using ergodic theory~\cite{Kuipers}. The necessary and sufficient condition for a sequence to behave like uniformly distributed random variables in $[0,1]$ is known as Weyl's criterion. We first state a general ergodic theorem, and derive Weyl's criterion as a corollary. \begin{theorem}[Koksma and Szusz inequality~\cite{Kuipers}] Consider an $s$-dimensional sequence $\mathbf{x_1}, \mathbf{x_2}, \cdots \in \mathbb{R}^s$, and let $\alpha := (\alpha_1, \cdots, \alpha_s)$ and $\beta:=(\beta_1, \cdots, \beta_s)$. For any positive integer $m$, we have \begin{align}
\sup_{0 \leq \alpha_i < \beta_i \leq 1} \left| \frac{A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_n}\}\right)}{N} - \prod_{1 \leq i \leq s} (\beta_i - \alpha_i) \right| \leq 2s^2 3^{s+1} \left( \frac{1}{m} + \sum_{\mathbf{h} \in \mathbb{Z}^s, 0 < |\mathbf{h}|_{\infty} \leq m} \frac{1}{r(\mathbf{h})} \left| \frac{1}{N} \sum_{1 \leq n \leq N} e^{2 \pi \sqrt{-1} \left<\mathbf{h},\mathbf{x_n}\right>} \right| \right) \nonumber \end{align} where \begin{align} &A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_n}\}\right):=\sum_{1 \leq n \leq N} \mathbf{1}\left\{\mathbf{x_n} \in [\alpha_1, \beta_1)\times [\alpha_2, \beta_2) \cdots \times [\alpha_s, \beta_s) \right\} \label{eqn:defcount} \\
&r(\mathbf{h}):= \prod_{1 \leq j \leq s} \max\{|h_j|,1\}. \nonumber \end{align} \label{thm:koksma} \end{theorem} \begin{proof} See \cite{Kuipers} for the proof. \end{proof} Here, $A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_n}\}\right)$ is the counting measure of the event that the sequence falls in the box $[\mathbf{\alpha},\mathbf{\beta})$. The theorem tells us that the normalized counting measure is close to the Lebesgue measure of the set $[\mathbf{\alpha},\mathbf{\beta})$ uniformly over all $\mathbf{\alpha}, \mathbf{\beta}$.
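To make the counting measure concrete, here is a one-dimensional numerical sketch (illustrative only, not part of the original development): it computes $A\left([\alpha,\beta);N,\{x_n\}\right)/N$ for the fractional parts of $\sqrt{2}\,n$, which are equidistributed, and compares it with the Lebesgue measure $\beta-\alpha$.

```python
import math

def count_in_box(seq, a, b):
    """A([a,b); N, {x_n}): how many of the given terms fall in [a, b)."""
    return sum(1 for x in seq if a <= x < b)

N = 100000
# Fractional parts of sqrt(2)*n, an equidistributed sequence (illustrative choice).
seq = [math.sqrt(2) * n % 1.0 for n in range(1, N + 1)]
a, b = 0.2, 0.7
discrepancy = abs(count_in_box(seq, a, b) / N - (b - a))
assert discrepancy < 0.01   # counting measure ~ Lebesgue measure (b - a)
print(f"|A/N - (b-a)| = {discrepancy:.5f}")
```

For this sequence the deviation is in fact $O(\log N / N)$, far below the crude $0.01$ threshold used above.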
Using this theorem, we can easily derive\footnote{The original Weyl's criterion is stated for a single sequence. Here we extend it to a family of sequences, so we state a generalized version of Weyl's criterion and prove it.} Weyl's criterion for a family of sequences.
\begin{definition} Consider a family of $s$-dimensional sequences $\mathcal{J}=\{ (\mathbf{x_{1,\sigma}}, \mathbf{x_{2,\sigma}}, \cdots): \sigma \in J , x_{i,\sigma} \in \mathbb{R}^s \}$. Here, the index set for the sequences, $J$, can be infinite. If for all $\mathbf{h} \in \mathbb{Z}^s \setminus \{\mathbf{0} \}$, \begin{align}
\lim_{N \rightarrow \infty} \sup_{\sigma \in \mathcal{J}} \left| \frac{1}{N} \sum_{1 \leq n \leq N} e^{j 2 \pi \left<\mathbf{h},\mathbf{x_{n,\sigma}}\right>}\right| = 0 \nonumber \end{align} then the family of sequences is said to satisfy Weyl's criterion. \end{definition}
\begin{theorem}[Weyl's criterion~\cite{Kuipers}] Consider a family of $s$-dimensional sequences $\mathcal{J}=\{ (\mathbf{x_{1,\sigma}}, \mathbf{x_{2,\sigma}}, \cdots): \sigma \in J , x_{i,\sigma} \in \mathbb{R}^s \}$ which satisfies Weyl's criterion. Then, this family of sequences satisfies \begin{align}
\lim_{N \rightarrow \infty} \sup_{\sigma \in \mathcal{J}} \sup_{0 \leq \alpha_i < \beta_i \leq 1} \left| \frac{A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_{n,\sigma}} \}\right)}{N} - \prod_{1 \leq i \leq s} (\beta_i - \alpha_i) \right| =0, \nonumber \end{align} where the definition of $A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_{n,\sigma}} \}\right)$ is given in \eqref{eqn:defcount}. \label{thm:weyl} \end{theorem} \begin{proof} By Theorem~\ref{thm:koksma}, for any positive integer $m$, we have \begin{align}
&\sup_{\sigma \in \mathcal{J}} \sup_{0 \leq \alpha_i < \beta_i \leq 1} \left| \frac{A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_{n,\sigma}}\}\right)}{N} - \prod_{1 \leq i \leq s} (\beta_i - \alpha_i) \right| \nonumber \\ &\leq \sup_{\sigma \in \mathcal{J}}
2s^2 3^{s+1} \left( \frac{1}{m} + \sum_{0 < |\mathbf{h}|_{\infty} \leq m} \frac{1}{r(\mathbf{h})} \left| \frac{1}{N} \sum_{1 \leq n \leq N} e^{2 \pi j \left<\mathbf{h},\mathbf{x_{n,\sigma}}\right>} \right| \right) \label{eqn:weyl:new:1} \end{align} To prove the theorem, it is enough to show that for all $\delta > 0$ there exists $N'$ such that for all $N > N'$ \begin{align}
\sup_{\sigma \in \mathcal{J}} \sup_{0 \leq \alpha_i < \beta_i \leq 1} \left| \frac{A\left([\mathbf{\alpha},\mathbf{\beta});N, \{\mathbf{x_{n,\sigma}}\}\right)}{N} - \prod_{1 \leq i \leq s} (\beta_i - \alpha_i) \right| < \delta.\label{eqn:weyl:new:2} \end{align}
Let's choose $m$ to be an integer greater than $\frac{4s^2 3^{s+1}}{\delta}$ so that \begin{align} \frac{2s^2 3^{s+1}}{m} < \frac{\delta}{2}. \label{eqn:weyl:new:3} \end{align}
Once we fix $m$, there are only $(2m+1)^s$ vectors $\mathbf{h} \in \mathbb{Z}^s$ such that $|\mathbf{h}|_{\infty} \leq m$. Furthermore, by the definition of Weyl's criterion, we can find $N''$ such that for all $N > N''$, \begin{align}
\sup_{\sigma \in \mathcal{J}} \left| \frac{1}{N} \sum_{1 \leq n \leq N} e^{j 2 \pi \left<\mathbf{h},\mathbf{x_{n,\sigma}}\right>}\right| < \frac{1}{(2m+1)^s 2 s^2 3^{s+1}} \frac{\delta}{2}. \end{align}
Thus, for all $N > N''$ the following holds: \begin{align}
2s^2 3^{s+1} (2m+1)^s \max_{0 < |\mathbf{h}|_{\infty} \leq m} \sup_{\sigma \in \mathcal{J}} \left| \frac{1}{N} \sum_{1 \leq n \leq N} e^{j 2 \pi \left<\mathbf{h},\mathbf{x_{n,\sigma}}\right>}\right| < \frac{\delta}{2}. \label{eqn:weyl:new:4} \end{align} Therefore, by plugging \eqref{eqn:weyl:new:3} and \eqref{eqn:weyl:new:4} into \eqref{eqn:weyl:new:1}, we can prove \eqref{eqn:weyl:new:2}. Thus, the theorem is true. \end{proof}
Since we are mainly interested in the fractional parts of sequences, it will be helpful to denote $\left<x \right> := x - \lfloor x \rfloor$. Although $\left<\mathbf{x},\mathbf{y}\right>$ is the inner product between two vectors, these two notations can be distinguished by counting the number of arguments. Let's consider some specific sequences, and see whether they satisfy Weyl's criterion. \begin{example} $\left(\left< \sqrt{2}n \right>, \left< \sqrt{3}n \right> \right)$ satisfies Weyl's criterion and $\left( \left< \sqrt{2}n \right>, \left<(\sqrt{2}+\sqrt{3})n \right> \right)$ does too.\\ $\left(\left<\sqrt{2}n \right>, \left< \left( \sqrt{2} + 0.5 \right)n \right> \right)$ does not satisfy Weyl's criterion and neither does $\left(\left<\sqrt{2}n \right>, \left< \frac{\sqrt{2}}{2} n \right> \right)$. \end{example} Therefore, among general sequences of the form $( \left<\omega_1 n \right>, \left<\omega_2 n \right>, \cdots, \left<\omega_m n \right> )$, there are sequences which satisfy Weyl's criterion and others which do not. However, the following lemma reveals that every such sequence can be written as a linear combination of basis sequences which satisfy Weyl's criterion. This idea is very similar to the linear-algebraic concepts of linear decomposition and basis.
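These examples can be checked numerically by evaluating the exponential sums appearing in Weyl's criterion. The sketch below uses illustrative parameters: the sum vanishes for $(\sqrt{2}n, \sqrt{3}n)$, while for $(\sqrt{2}n, (\sqrt{2}+0.5)n)$ the choice $\mathbf{h}=(2,-2)$ makes $\left<\mathbf{h},\mathbf{\omega}\right> = -1 \in \mathbb{Z}$ and the sum stays at $1$.

```python
import cmath, math

def weyl_sum(omegas, h, N):
    """| (1/N) sum_{n=1}^N e^{2 pi i (h_1 w_1 + ... + h_s w_s) n} |."""
    tot = sum(cmath.exp(2j * math.pi * sum(hi * wi * n for hi, wi in zip(h, omegas)))
              for n in range(1, N + 1))
    return abs(tot) / N

N = 200000
# (sqrt(2) n, sqrt(3) n): no nonzero integer combination h.w is an integer,
# so every exponential sum decays like O(1/N).
assert weyl_sum((math.sqrt(2), math.sqrt(3)), (1, -1), N) < 0.01
# (sqrt(2) n, (sqrt(2)+0.5) n): h = (2,-2) gives 2w1 - 2w2 = -1 in Z,
# so that exponential sum does not decay at all.
assert abs(weyl_sum((math.sqrt(2), math.sqrt(2) + 0.5), (2, -2), N) - 1.0) < 1e-6
```

The failing frequency vector $\mathbf{h}$ found here is exactly the integer relation used in the decomposition lemma below.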
\begin{lemma} Consider an $m$-dimensional sequence $( \left<\omega_1 n\right>, \left<\omega_2 n\right>, \cdots, \left<\omega_m n\right> )$. Then, there exists $k \leq m$ and $p \in \mathbb{N}$ such that \begin{align} \omega_i= \frac{q_{i,0}}{p}+\sum_{1 \leq j \leq k} q_{i,j}\gamma_j \nonumber \end{align} where \begin{align} &q_{i,j} \in \mathbb{Z},\nonumber \\ &( \left<\gamma_1 n\right>, \left<\gamma_2 n\right>, \cdots, \left<\gamma_k n\right> ) \mbox{ satisfies Weyl's criterion.} \nonumber \end{align} \label{lem:dis:weyl2} \end{lemma} \begin{proof} Before the proof, we can observe the following two facts.
First, as long as $h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m$ is not an integer, we have \begin{align} \frac{1}{N} \sum_{1 \leq n \leq N} e^{j 2 \pi \left<\mathbf{h},(\left<\omega_1 n\right>, \left<\omega_2 n\right>, \cdots, \left<\omega_m n\right> ) \right> } = \frac{1}{N} \frac{e^{j2 \pi (h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m )}\left(1-e^{j2 \pi N(h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m )}\right)}{1- e^{j 2 \pi(h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m)}}.\nonumber \end{align} Therefore, the statement that the sequence $(\left<\omega_1 n\right>,\left<\omega_2 n\right>,\cdots,\left<\omega_m n\right>)$ does not satisfy Weyl's criterion is equivalent to the existence of $h_1, h_2, \cdots , h_m \in \mathbb{Z}$, not all zero, such that \begin{align} h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m \in \mathbb{Z}.\label{eqn:weyl:1} \end{align}
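This geometric-sum identity, and the resulting $O(1/N)$ decay when $h_1\omega_1+\cdots+h_m\omega_m \notin \mathbb{Z}$, can be verified numerically; a sketch with the illustrative value $\theta=\sqrt{2}$:

```python
import cmath, math

# Closed form of the exponential sum with theta = h.w (here theta = sqrt(2)).
theta, N = math.sqrt(2), 1000
r = cmath.exp(2j * math.pi * theta)
lhs = sum(cmath.exp(2j * math.pi * theta * n) for n in range(1, N + 1)) / N
rhs = r * (1 - cmath.exp(2j * math.pi * N * theta)) / (1 - r) / N
assert abs(lhs - rhs) < 1e-9
# |1 - r| is bounded away from 0 for this theta, so |lhs| = O(1/N).
assert abs(lhs) < 0.01
```

When $\theta \in \mathbb{Z}$ the denominator $1-e^{j2\pi\theta}$ vanishes, every term equals $1$, and the average stays at $1$: this is precisely the failure mode captured by \eqref{eqn:weyl:1}.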
The second observation is that if $( \left< \omega_1 n \right> , \left< \omega_2 n \right>, \cdots, \left< \omega_m n\right> )$ satisfies Weyl's criterion then for all $a_1,\cdots,a_m \in \mathbb{N}$, $(\left< \frac{\omega_1}{a_1} n\right>,\left< \frac{\omega_2}{a_2} n\right>,\cdots, \left< \frac{\omega_m}{a_m} n\right>)$ also satisfies Weyl's criterion. To see this, suppose $(\left< \frac{\omega_1}{a_1} n\right>,\left< \frac{\omega_2}{a_2} n\right>,\cdots, \left< \frac{\omega_m}{a_m} n\right>)$ did not satisfy Weyl's criterion. Then, by \eqref{eqn:weyl:1} there would exist $(h_1,h_2,\cdots,h_m) \in \mathbb{Z}^m \setminus \{\mathbf{0}\}$ such that $h_1 \frac{\omega_1}{a_1} + h_2 \frac{\omega_2}{a_2} + \cdots + h_m \frac{\omega_m}{a_m} \in \mathbb{Z}$. So, $ \frac{h_1 \prod_{1 \leq i \leq m}a_i}{a_1} {\omega_1} + \frac{h_2 \prod_{1 \leq i \leq m} a_i}{a_2} {\omega_2} + \cdots + \frac{h_m \prod_{1 \leq i \leq m} a_i}{a_m} {\omega_m} \in \mathbb{Z}$ as well as $(\frac{h_1 \prod_{1 \leq i \leq m}a_i}{a_1}, \cdots, \frac{h_m \prod_{1 \leq i \leq m}a_i}{a_m}) \in \mathbb{Z}^m \setminus \{ \mathbf{0} \}$. By \eqref{eqn:weyl:1}, this would mean that $(\left<\omega_1 n\right>,\left<\omega_2 n\right>, \cdots, \left<\omega_m n\right> )$ does not satisfy Weyl's criterion, which is a contradiction.
Now, we will prove the lemma by induction on $m$.
(i) When $m=1$,
If $\left<\omega_1 n\right>$ satisfies Weyl's criterion, the lemma is trivially true by selecting $\gamma_1=\omega_1$ and $q_{1,1}=1$. If $\left<\omega_1 n\right>$ does not satisfy Weyl's criterion, then by \eqref{eqn:weyl:1}, $\omega_1$ is a rational number. So we can find $q_{1,0}$ and $p$ such that $\omega_1=\frac{q_{1,0}}{p}$, and set $k=0$.
(ii) Assume that the lemma is true for $m-1$.
If $(\left<\omega_1 n\right>, \left<\omega_2 n\right>,\cdots, \left<\omega_m n\right>)$ satisfies Weyl's criterion, the lemma follows by selecting $k=m$, $\gamma_i=\omega_i$ and $q_{i,i}=1$.
If $(\left<\omega_1 n\right>, \left<\omega_2 n\right>,\cdots, \left<\omega_m n\right>)$ does not satisfy Weyl's criterion, by \eqref{eqn:weyl:1} there exists $(h_1,h_2,\cdots, h_m) \in \mathbb{Z}^m \setminus \{ \mathbf{0} \}$ and $h \in \mathbb{Z}$ such that $h_1 \omega_1 + h_2 \omega_2 + \cdots + h_m \omega_m = h$. Without loss of generality, let's say $h_1 \neq 0$. Then \begin{align} \omega_1 = - \frac{h_2}{h_1} \omega_2 - \frac{h_3}{h_1} \omega_3 - \cdots - \frac{h_m}{h_1} \omega_m + \frac{h}{h_1}. \label{eqn:dis:weyl:1} \end{align}
By the induction hypothesis, we know that there exist $k' \leq m-1$, $p' \in \mathbb{N}$, $q_{i,j}' \in \mathbb{Z}$, $\gamma_i'$ such that \begin{align} &\omega_2 = \frac{q_{2,0}'}{p'}+\sum_{1 \leq j \leq k'} q_{2,j}'\gamma_j' \nonumber \\ &\vdots \nonumber \\ &\omega_m = \frac{q_{m,0}'}{p'}+\sum_{1 \leq j \leq k'} q_{m,j}'\gamma_j'. \label{eqn:dis:weyl:2} \end{align} where $(\left<\gamma_1' n \right>, \left<\gamma_2' n\right>, \cdots, \left<\gamma_{k'}' n\right>)$ satisfies Weyl's criterion. Therefore, by plugging \eqref{eqn:dis:weyl:2} into \eqref{eqn:dis:weyl:1} we can find $q'_{1,j} \in \mathbb{Z}$ such that \begin{align}
\omega_1 = \frac{q'_{1,0}}{|h_1 \cdot p'|}+ \sum_{1 \leq i \leq k'} q'_{1,i} \frac{\gamma_i'}{h_1}. \nonumber \end{align}
By the second observation, $(\left<\frac{\gamma_1'}{h_1}n\right>, \left<\frac{\gamma_2'}{h_1}n\right>,\cdots, \left<\frac{\gamma_{k'}'}{h_1}n\right> )$ satisfies Weyl's criterion, so we can use $p=|h_1 \cdot p'|$ and $\gamma_i = \frac{\gamma_i'}{h_1}$ to show that the lemma also holds for $m$.
Therefore, by induction the lemma is true. \end{proof}
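To make the decomposition concrete, here is a small worked instance (values chosen for illustration, not from the original text): take $m=2$ and $(\omega_1,\omega_2)=(\sqrt{2},\,\sqrt{2}+\tfrac{1}{2})$.

```latex
% Illustrative instance of the decomposition lemma.
% The choice h = (2, -2) gives 2\omega_1 - 2\omega_2 = -1 \in \mathbb{Z},
% so the pair fails Weyl's criterion. Taking k = 1, p = 2, \gamma_1 = \sqrt{2}:
\begin{align}
\omega_1 &= \frac{0}{2} + 1 \cdot \gamma_1, &
\omega_2 &= \frac{1}{2} + 1 \cdot \gamma_1, \nonumber
\end{align}
% and the one-dimensional basis sequence \left< \gamma_1 n \right> satisfies
% Weyl's criterion since \sqrt{2} is irrational.
```

Here the rational offsets $\frac{q_{i,0}}{p}$ absorb the integer relation, and a single Weyl-satisfying basis sequence generates both coordinates.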
Now, we can decompose the sequences into basis sequences which satisfy Weyl's criterion, and so behave like uniform random variables. The main difference from the uniform convergence discussion of Appendix~\ref{app:unif:conti} is the number of random variables. In continuous-time systems with random jitter, only one random variable, the jitter, is introduced at each sample. However, this is not the case in discrete-time systems.
Let $\mathbf{A_1}=\begin{bmatrix} e^{j \sqrt{2}} & 0 \\ 0 & e^{j 2 \sqrt{2}} \end{bmatrix}$, $\mathbf{A_2}=\begin{bmatrix} e^{j \sqrt{2}} & 0 \\ 0 & e^{j \sqrt{3}} \end{bmatrix}$, $\mathbf{C}=\begin{bmatrix} 1 & 1 \end{bmatrix}$. The row of the observability Gramian of $(\mathbf{A_1}, \mathbf{C})$ is $\mathbf{C}\mathbf{A_1}^n = \begin{bmatrix} e^{j \sqrt{2} n} & e^{j 2\sqrt{2} n}\end{bmatrix}$. In this case, the elements of $\mathbf{C}\mathbf{A_1}^n$ do not satisfy Weyl's criterion. Thus, it can be approximated by $\begin{bmatrix} e^{j X} & e^{j 2X} \end{bmatrix}$ where $X$ is uniform in $[0, 2\pi]$, which involves only one random variable.
However, the row of the observability Gramian of $(\mathbf{A_2}, \mathbf{C})$ is $\mathbf{C}\mathbf{A_2}^n = \begin{bmatrix} e^{j \sqrt{2} n} & e^{j \sqrt{3} n}\end{bmatrix}$ whose elements satisfy Weyl's criterion. Thus, it can be approximated by $\begin{bmatrix} e^{j X_1} & e^{j X_2} \end{bmatrix}$ where $X_1$, $X_2$ are independent uniform random variables in $[0, 2\pi]$, which involves two random variables.
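The contrast between the one-variable and two-variable approximations can be seen numerically. A sketch (illustrative only): for $\mathbf{A_1}$ the second phase is a deterministic function of the first, while for $\mathbf{A_2}$ the phase pair fills the square like two independent uniforms.

```python
import math

N = 50000
# Phases of the Gramian rows for the two illustrative systems.
p1 = [(math.sqrt(2) * n) % (2 * math.pi) for n in range(1, N + 1)]
p2_dep = [(2 * math.sqrt(2) * n) % (2 * math.pi) for n in range(1, N + 1)]   # A_1
p2_ind = [(math.sqrt(3) * n) % (2 * math.pi) for n in range(1, N + 1)]       # A_2

# A_1: the second phase equals twice the first (mod 2*pi),
# so one random variable X suffices to model the pair.
assert all(abs((2 * a) % (2 * math.pi) - b) < 1e-6 for a, b in zip(p1, p2_dep))

# A_2: the pair equidistributes over the square, matching two
# independent uniforms; check the measure of a quarter box.
box = sum(1 for a, b in zip(p1, p2_ind) if a < math.pi and b < math.pi)
assert abs(box / N - 0.25) < 0.02
```

This is why the discrete-time argument needs lemmas that hold for several independent uniform random variables at once.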
Therefore, the lemmas derived in Appendix~\ref{app:unif:conti} have to be generalized to multiple random variables, and then the multiple random variables can be used to model deterministic sequences.
Intuitively, adding more randomness should not cause any problems, so the generalization to multiple random variables must be possible. We first extend Lemma~\ref{lem:single}, which was written for a single random variable, to multiple random variables.
\begin{lemma} Let $\mathbf{X}$ be $(X_1,X_2,\cdots,X_{\nu})$ where $X_i$ are i.i.d.~random variables whose distribution is uniform between $0$ and $2 \pi$. Let $\mathbf{k_1},\mathbf{k_2},\cdots,\mathbf{k_{\mu}} \in \mathbb{R}^{\nu}$ be distinct. Then, for strictly positive $\gamma$, $\Gamma$ $(\gamma \leq \Gamma)$, and $m \in \{ 1, \cdots, \mu \}$ \begin{align}
\sup_{|a_{m}| \geq \gamma, |a_{i}| \leq \Gamma, a_i \in \mathbb{C}} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j<\mathbf{k_i},\mathbf{X}>} | < \epsilon \} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0. \nonumber \end{align}
\label{lem:dis:geo1} \end{lemma} \begin{proof} We will prove the lemma by induction on $\nu$, the number of random variables.
(i) When $\nu=1$, the lemma reduces to Lemma~\ref{lem:single}.
(ii) Assume the lemma holds for $1,\cdots,\nu-1$.
Without loss of generality, we can assume $m=1$ by symmetry. We will prove the lemma by considering cases based on the $\mathbf{k_i}$. Denote the $j$th component of $\mathbf{k_i}$ by $k_{i,j}$.
First, consider the case when $k_{1,1}=k_{2,1}=\cdots=k_{\mu, 1}$. Then, \begin{align}
&\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j<\mathbf{k_i},\mathbf{X}>} | < \epsilon \}
=\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j \sum_{1 \leq j \leq \nu} k_{i,j} X_j} | < \epsilon \} \nonumber \\
&=\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | e^{j k_{1,1}X_1}| \cdot | \sum^{\mu}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j} X_j} | < \epsilon \} \nonumber\\
&=\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j} X_j} | < \epsilon \} \rightarrow 0\ (\because \mbox{induction hypothesis}) \nonumber . \end{align}
Here, the induction hypothesis applies to $(X_2,\cdots,X_{\nu})$ since the truncated vectors $(k_{i,2},\cdots,k_{i,\nu})$ remain distinct; the $\mathbf{k_i}$ are distinct and share the same first component.
Second, consider the case when $k_{i,1} \neq k_{j,1}$ for some $i,j$. Without loss of generality, we can assume that $k_{1,1}=k_{2,1}=\cdots =k_{\mu_1,1}$ and $k_{1,1} \neq k_{j,1}$ for all $\mu_1 <j\leq \mu$. Then, for all $\epsilon' > 0$, we have \begin{align}
&\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j <\mathbf{k_i},\mathbf{X}>} | < \epsilon \} \nonumber\\
&=\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu_1}_{i=1} a_i e^{j <\mathbf{k_i},\mathbf{X}>}+ \sum^{\mu}_{i=\mu_1+1} a_i e^{j <\mathbf{k_i},\mathbf{X}>} | < \epsilon \} \nonumber\\
&\leq \sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P}
\{ | \sum^{\mu_1}_{i=1} a_i e^{j <\mathbf{k_i},\mathbf{X}>}+ \sum^{\mu}_{i=\mu_1+1} a_i e^{j <\mathbf{k_i},\mathbf{X}>} | < \epsilon \Big| |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | \geq \epsilon' \} +
\mathbb{P}\{ |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | < \epsilon' \} \nonumber \\
&= \sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P}
\{ | ( \sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j} X_j} )e^{j k_{1,1}X_1} + \sum^{\mu}_{i=\mu_1+1} a_i e^{j <\mathbf{k_i},\mathbf{X}>} | < \epsilon \Big| |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | \geq \epsilon' \} \nonumber\\
&+\mathbb{P}\{ |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | < \epsilon' \} \nonumber \\ &\leq
\sup_{|a'_{1}| \geq \epsilon', |a'_{i}| \leq \mu\Gamma} \mathbb{P}_{X_1}
\{ |a'_{1} e^{j k_{1,1}X_1}+ \sum^{\mu}_{i=\mu_1+1} a'_i e^{j k_{i,1}X_1} | < \epsilon \} +\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P}\{ |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | < \epsilon' \}. \nonumber \\ \end{align} Therefore, by the induction hypothesis (since the first term has only one random variable, and the second term has $\nu-1$ random variables) \begin{align}
&\lim_{\epsilon \rightarrow 0} \sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j <\mathbf{k_i},\mathbf{X}>} | < \epsilon \} \nonumber\\
&\leq \lim_{\epsilon' \rightarrow 0} \lim_{\epsilon \rightarrow 0} \sup_{|a'_{1}| \geq \epsilon', |a'_{i}| \leq \mu\Gamma} \mathbb{P}
\{ |a'_{1} e^{j k_{1,1}X_1}+ \sum^{\mu}_{i=\mu_1+1} a'_i e^{j k_{i,1}X_1} | < \epsilon \} +\sup_{|a_{1}| \geq \gamma, |a_{i}| \leq \Gamma} \mathbb{P}\{ |\sum^{\mu_1}_{i=1} a_i e^{j \sum_{2 \leq j \leq \nu} k_{i,j}X_j} | < \epsilon' \} \nonumber\\ &=0. \nonumber \end{align} Therefore, the lemma is true. \end{proof}
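As a sanity check, the conclusion of Lemma~\ref{lem:dis:geo1} can be observed numerically for a fixed choice of coefficients: the probability that the trigonometric sum is small shrinks with $\epsilon$. The sketch below (our illustration; the frequency vectors and coefficients are arbitrary choices, not taken from the lemma) estimates this probability by Monte Carlo for $\nu = 2$, $\mu = 3$.

```python
import cmath
import math
import random

# Monte-Carlo illustration of Lemma dis:geo1 (arbitrary example values).
random.seed(0)
ks = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # distinct frequency vectors k_i
a = [1.0, -0.7, 0.5j]                        # coefficients with |a_1| >= gamma = 1

# Sample |sum_i a_i exp(j <k_i, X>)| with X uniform on [0, 2*pi]^2.
vals = []
for _ in range(200000):
    x1 = random.uniform(0.0, 2.0 * math.pi)
    x2 = random.uniform(0.0, 2.0 * math.pi)
    s = sum(ai * cmath.exp(1j * (k1 * x1 + k2 * x2))
            for ai, (k1, k2) in zip(a, ks))
    vals.append(abs(s))

def prob(eps):
    # empirical estimate of P{ |sum_i a_i e^{j<k_i,X>}| < eps }
    return sum(v < eps for v in vals) / len(vals)

p_half, p_tenth = prob(0.5), prob(0.1)
print(p_half, p_tenth)
```

Since both probabilities are computed from the same samples, the estimate at $\epsilon = 0.1$ is necessarily no larger than the one at $\epsilon = 0.5$, and both shrink toward $0$ as $\epsilon$ decreases.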
Now, we will consider a deterministic sequence of the form $(\left<\omega_1 n\right>, \cdots, \left<\omega_{\mu}n\right>)$. As we have shown in Lemma~\ref{lem:dis:weyl2}, this sequence can be thought of as a linear combination of basis sequences which satisfy Weyl's criterion. Thus, we can approximate the deterministic sequence by a linear combination of the multiple uniform random variables considered in Lemma~\ref{lem:dis:geo1}.
\begin{lemma} Let $\omega_1,\omega_2,\cdots,\omega_{\mu}$ be real numbers such that $\omega_i - \omega_j \notin \mathbb{Q}$ for all $i \neq j$. Then, for strictly positive numbers $\gamma$ and $\Gamma$ $(\gamma \leq \Gamma)$, and $m \in \{1, \cdots, \mu\}$ \begin{align}
\lim_{\epsilon \downarrow 0}\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, |a_{i}| \leq \Gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon \}=0. \nonumber \end{align} \label{lem:dis:geo2} \end{lemma} \begin{proof} By Lemma~\ref{lem:dis:weyl2}, $\omega_i$ can be written as $\left<\mathbf{q_i},\mathbf{\rho}\right>$ where $\mathbf{q_i}=(q_{i,0},q_{i,1},\cdots,q_{i,r}) \in \mathbb{Z}^{r+1}$, $\mathbf{\rho}=(\frac{1}{s},\rho_1,\cdots,\rho_r) \in \mathbb{R}^{r+1}$ and $s \in \mathbb{N}$. Here, $(\left<\rho_1 n\right>,\left<\rho_2 n\right>,\cdots,\left<\rho_r n\right>)$ satisfies Weyl's criterion. Since $\omega_i-\omega_j \notin \mathbb{Q}$ for all $i\neq j$, $(q_{i,1},q_{i,2},\cdots,q_{i,r}) \neq (q_{j,1},q_{j,2},\cdots,q_{j,r})$.
For given $k, N, M \in \mathbb{N}$, and $m_1, \cdots, m_r \in \{1, \cdots, M\}$, define a set $S_{m_1, \cdots, m_r}$ as\footnote{Notice that the definition of $S_{m_1, \cdots, m_r}$ also depends on $k, N, M$ as well as $m_1, \cdots, m_r$. However, we omit the dependence on $k, N, M$ in the definition for simplicity.} \begin{align} \left\{n \in \{1, \cdots, N \}: \frac{m_1-1}{M} \leq \left<\rho_1 (n+k)\right> < \frac{m_1}{M},\cdots,\frac{m_r-1}{M} \leq \left<\rho_r (n+k)\right> < \frac{m_r}{M} \right\}. \nonumber \end{align}
Then, for all $k, N, M \in \mathbb{N}$ and $\epsilon > 0$, we have the following: \begin{align}
&\sum^N_{n=1} \mathbf{1} \{ |\sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k)} | < \epsilon \} \nonumber \\
&= \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } | < \epsilon, n \in S_{m_1, \cdots, m_r} \} \nonumber \\
&\leq \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \mathbf{1} \{ \min_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } | < \epsilon, n \in S_{m_1, \cdots, m_r} \} \nonumber \\
&= \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \mathbf{1} \{ \min_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } | < \epsilon \} \cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \}. \label{eqn:dis:geo1:1} \end{align}
Moreover, we also know by the definitions of $\mathbf{q_i}$ and $\mathbf{\rho}$, \begin{align} \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i(n+k)}&=\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left<\mathbf{q_i},\mathbf{\rho}\right>(n+k)} \nonumber \\ &=\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left(\frac{q_{i,0}}{s}(n+k)+q_{i,1}\rho_1(n+k)+\cdots + q_{i,r}\rho_r(n+k) \right)} \nonumber \\ &=\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left(\frac{q_{i,0}}{s}(n+k)+q_{i,1}\left<\rho_1(n+k)\right>+\cdots + q_{i,r}\left<\rho_r(n+k)\right> \right)} (\because q_{i,j} \in \mathbb{Z}).\nonumber \end{align}
Thus, by defining $\mathbf{X_{m_1,\cdots,m_r}}$ as a random vector which is uniformly distributed over $[\frac{m_1-1}{M} , \frac{m_1}{M} ) \times \cdots \times [ \frac{m_r-1}{M} , \frac{m_r}{M} )$ and $\mathbf{q_i'}=(q_{i,1},q_{i,2},\cdots,q_{i,r})$, $\mathbf{\rho'}=(\rho_{1},\rho_{2},\cdots,\rho_{r})$, we can conclude \begin{align}
\max_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } |
&=\max_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left(\frac{q_{i,0}}{s}(n+k)+q_{i,1}\left<\rho_1(n+k)\right>+\cdots + q_{i,r}\left<\rho_r(n+k)\right> \right) } |\nonumber \\ &\geq
| \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{q_{i,0}}{s} (n+k) + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | \quad a.e. \label{eqn:weylupper1} \end{align}
By \eqref{eqn:weylupper1}, \eqref{eqn:dis:geo1:1} can be upper bounded as follows: \begin{align}
&\eqref{eqn:dis:geo1:1}\leq \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \mathbb{P} \{ \min_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } | - \max_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (n+k) } | \nonumber \\
&+ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{q_{i,0}}{s} (n+k) + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < \epsilon \} \cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \} \nonumber \\
&= \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \mathbb{P} \{ \min_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{q_{i,0}}{s}(n+k)+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | - \max_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{q_{i,0}}{s}(n+k)+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&+ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{q_{i,0}}{s} (n+k) + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < \epsilon \} \cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \} \nonumber \\
&\leq \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \max_{0 \leq s' < s} \mathbb{P} \{ \min_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | - \max_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&+ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s} + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < \epsilon \}
\cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \}\nonumber \\
&\leq \sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \sum_{0 \leq s' < s} \mathbb{P} \{ \min_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | - \max_{ n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&+ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s} + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < \epsilon \}
\cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \}. \label{eqn:dis:geo1:3} \end{align}
Here, we have \begin{align}
&\max_{n \in S_{m_1, \cdots, m_r}}
| \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&=\max_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\left<\rho_1(n+k)\right>+\cdots+q_{i,r}\left<\rho_r(n+k)\right> \right)}| (\because q_{i,j} \in \mathbb{Z})\nonumber \\
&\leq \sup_{ 0 \leq \Delta_i < \frac{1}{M}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}+q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r \right)}| \nonumber \\
&=\sup_{ 0 \leq \Delta_i < \frac{1}{M}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} + a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} \nonumber\\
&\quad(-1+\cos 2\pi (q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r )+j \sin 2\pi(q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r) )| \nonumber \\
&\leq |\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} | \nonumber \\
&+\sum^{\mu}_{i=1} |a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)}| \nonumber \\
&\cdot(\sup_{ 0 \leq \Delta_i < \frac{1}{M}} |-1+\cos2\pi(q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r)|+ \sup_{ 0 \leq \Delta_i < \frac{1}{M}}|\sin2\pi(q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r)|) \nonumber \\
&\leq |\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} |
+4 \pi \sum^{\mu}_{i=1} |a_i| \sup_{0 \leq \Delta_i < \frac{1}{M}}|q_{i,1}\Delta_1+\cdots+q_{i,r}\Delta_r| \label{eqn:dis:geo1:2} \\
&\leq |\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} |
+ \frac{4 \pi \Gamma}{M} \sum^{\mu}_{i=1} \sum^{r}_{j=1} |q_{i,j}|. (\because \mbox{We assumed }|a_i| \leq \Gamma)\nonumber \end{align}
where \eqref{eqn:dis:geo1:2} comes from the fact that $|\sin x| \leq |x|$ and $|-1+\cos x| \leq |x|$ for all $x \in \mathbb{R}$.
Likewise, we also have \begin{align}
&\min_{n \in S_{m_1, \cdots, m_r}} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&\geq |\sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+q_{i,1}\frac{m_1-1}{M}+\cdots+q_{i,r}\frac{m_r-1}{M}\right)} |
- \frac{4 \pi \Gamma}{M} \sum^{\mu}_{i=1} \sum^{r}_{j=1} |q_{i,j}|. \nonumber \end{align}
Therefore, \begin{align}
&\sup_{\frac{m_l-1}{M} \leq {\left<\rho_l (n+k)\right>} < \frac{m_l}{M},\, 1 \leq l \leq r} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | -
\inf_{\frac{m_l-1}{M} \leq {\left<\rho_l (n+k)\right>} < \frac{m_l}{M},\, 1 \leq l \leq r} | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s}+\left<\mathbf{q_i'},\mathbf{\rho'}\right>(n+k) \right) } | \nonumber \\
&\leq \frac{8 \pi \Gamma}{M} \sum^{\mu}_{i=1} \sum^{r}_{j=1} |q_{i,j}|. \nonumber \end{align}
By selecting $M$ such that $\frac{8 \pi \Gamma}{M} \sum^{\mu}_{i=1} \sum^{r}_{j=1} |q_{i,j}| \leq \epsilon$, \eqref{eqn:dis:geo1:3} is upper bounded by \begin{align} &\eqref{eqn:dis:geo1:3} \leq
\sum^N_{n=1} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \sum_{0 \leq s' < s} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s} + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < 2 \epsilon \} \cdot \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \}. \label{eqn:dis:geo1:4} \end{align} Since $(\left<\rho_1 n\right>,\cdots,\left<\rho_r n\right> )$ satisfies Weyl's criterion, by Theorem~\ref{thm:weyl} \begin{align} &\lim_{N \rightarrow \infty} \sup_{k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1}\{ n \in S_{m_1, \cdots, m_r} \} = \frac{1}{M^r}. \label{eqn:dis:geo1:5} \end{align} Therefore, if we let $\mathbf{X}$ be a $1 \times r$ random vector whose distribution is uniform on $[0,1)^r$, by \eqref{eqn:dis:geo1:4} and \eqref{eqn:dis:geo1:5} \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_{m}| \geq \gamma, |a_{i}| \leq \Gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon \} \nonumber \\
&\leq \sup_{|a_{m}| \geq \gamma, |a_{i}| \leq \Gamma, k \in \mathbb{Z}} \sum_{1 \leq m_1 \leq M, \cdots, 1 \leq m_r \leq M} \sum_{0 \leq s' < s} \mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s} + \left<\mathbf{q_i'},\mathbf{X_{m_1,\cdots,m_r}}\right> \right) } | < 2 \epsilon \} \cdot \frac{1}{M^r} \\
&\leq \sup_{|a_{m}| \geq \gamma, |a_{i}| \leq \Gamma, k \in \mathbb{Z}} \sum_{0 \leq s' < s}
\mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \frac{s'}{s} + \left<\mathbf{q_i'},\mathbf{X}\right> \right) } | < 2 \epsilon \} (\because \mbox{definitions of } \mathbf{X_{m_1, \cdots, m_r}}, \mathbf{X}) \nonumber \\
&\leq \sup_{|a_{m}| \geq \gamma, |a_{i}| \leq \Gamma} s \cdot
\mathbb{P} \{ | \sum^{\mu}_{i=1} a_i e^{j 2 \pi \left( \left<\mathbf{q_i'},\mathbf{X}\right> \right) } | < 2 \epsilon \}. (\because e^{j 2 \pi \frac{s'}{s}} \mbox{ only rotates the phase.})
\label{eqn:dis:geo1:6} \end{align} Since $\mathbf{q_i'}$ are distinct, by Lemma~\ref{lem:dis:geo1}, \eqref{eqn:dis:geo1:6} goes to 0 as $\epsilon \downarrow 0$. \end{proof}
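The mechanism behind Lemma~\ref{lem:dis:geo2}, replacing a time average along the deterministic sequence by a probability over independent uniform phases, can also be checked numerically. In the sketch below (an illustration with arbitrarily chosen $\omega_1 = \sqrt{2}$, $\omega_2 = \sqrt{3}$ and coefficients, not values from the lemma), the empirical frequency of the event matches the closed-form probability under uniform phases.

```python
import cmath
import math

# Time average along the deterministic sequence vs. probability under
# independent uniform phases (illustrative choice of omegas/coefficients).
w1, w2 = math.sqrt(2), math.sqrt(3)   # w1 - w2 is irrational
a1, a2, eps = 1.0, 0.8, 0.5
N = 50000

emp = sum(1 for n in range(1, N + 1)
          if abs(a1 * cmath.exp(2j * math.pi * w1 * n)
                 + a2 * cmath.exp(2j * math.pi * w2 * n)) < eps) / N

# With uniform phases only the phase difference psi matters:
# |a1 + a2 e^{j psi}| < eps  <=>  cos(psi) < (eps^2 - a1^2 - a2^2) / (2 a1 a2),
# which for psi uniform on [0, 2*pi) has probability acos(-c)/pi below.
c = (eps ** 2 - a1 ** 2 - a2 ** 2) / (2 * a1 * a2)
p_exact = math.acos(-c) / math.pi
print(emp, p_exact)
```

The agreement reflects the equidistribution of the phase pair on the torus, which is exactly what the cell decomposition in the proof exploits.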
So far, we have imposed the restriction $|a_i| \leq \Gamma$. However, Lemma~\ref{lem:dis:geo2} still holds after this restriction is removed, since enlarging the coefficients should not make the sum small more often. The proof is similar to that of Lemma~\ref{lem:singleun}.
\begin{lemma} Let $\omega_1,\omega_2,\cdots,\omega_{\mu}$ be real numbers such that $\omega_i - \omega_j \notin \mathbb{Q}$ for all $i \neq j$. Then, for a strictly positive number $\gamma$ and any $m \in \{ 1, \cdots, \mu\}$ \begin{align}
\lim_{\epsilon \downarrow 0}\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon \}=0. \nonumber \end{align} \label{lem:dis:geo3} \end{lemma} \begin{proof} The proof is by induction on $\mu$, the number of terms in the inner sum.
(i) When $\mu=1$.
Define $a'_1 := \gamma \frac{a_1}{|a_1|}$, so that $|a'_1| = \gamma$. Then, \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_1| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | a_{1}e^{j2 \pi \omega_1(n+k)}| < \epsilon \}\label{eqn:dis:geo3:1} \\
&=\lim_{N \rightarrow \infty} \sup_{|a_1| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \frac{\gamma}{|a_1|}a_{1}e^{j2 \pi \omega_1(n+k)}| < \frac{\gamma}{|a_1|} \epsilon \}\nonumber \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a'_1| = \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | a'_{1}e^{j2 \pi \omega_1(n+k)}| < \epsilon \} (\because \frac{\gamma}{|a_1|} \leq 1 )\label{eqn:dis:geo3:2} \end{align} By Lemma~\ref{lem:dis:geo2}, \eqref{eqn:dis:geo3:2} converges to $0$ as $\epsilon \downarrow 0$. Thus, \eqref{eqn:dis:geo3:1} converges to $0$ as $\epsilon \downarrow 0$.
(ii) As an induction hypothesis, assume the lemma is true for all values up to $\mu-1$.
To prove the lemma for $\mu$, it is enough to show that for all $\delta > 0$ there exists $\epsilon(\delta)>0$ such that \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon(\delta) \} < \delta. \nonumber \end{align}
By the induction hypothesis, for all $m' \neq m$ we can find $\epsilon_{m'}(\delta) > 0$ such that \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_{m'}| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum_{1 \leq i \leq \mu, i \neq m} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon_{m'}(\delta) \} < \delta. \label{eqn:limmax3} \end{align}
Let $\kappa(\delta):=\min \left\{ \min_{m'\neq m} \left\{\frac{\epsilon_{m'}(\delta)}{2 \gamma } \right\},1 \right\}$. By Lemma~\ref{lem:dis:geo2}, there exists $\epsilon'(\delta)>0$ such that \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, |a_{i}| \leq \frac{\gamma}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon'(\delta) \} < \delta. \label{eqn:limmax2} \end{align} Set $\epsilon(\delta):= \min \left\{ \epsilon'(\delta) , \min_{m' \neq m} \left\{ \frac{\epsilon_{m'}(\delta)}{2} \right\} \right\}$. Then, we have \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon(\delta) \} \nonumber \\ &\leq
\lim_{N \rightarrow \infty} \max \{ \sup_{|a_m| \geq \gamma, \frac{|a_i|}{|a_m|} \leq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon({\delta}) \}, \nonumber \\
&\max_{m' \neq m} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon(\delta) \} \} \nonumber \\
&=\max \{ \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_i|}{|a_m|} \leq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon({\delta}) \}, \nonumber \\
&\max_{m' \neq m} \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon(\delta) \} \}. \label{eqn:limmax1} \end{align}
Let $a'_i := \frac{\gamma}{|a_m|} a_i$. Then, the first term in \eqref{eqn:limmax1} is upper bounded by \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_i|}{|a_m|} \leq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon({\delta}) \} \nonumber \\
&=\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_i|}{|a_m|} \leq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} \frac{\gamma}{|a_m|} a_{i}e^{j2 \pi \omega_i(n+k)}| < \frac{\gamma}{|a_m|} \epsilon({\delta}) \} \nonumber \\
&=\lim_{N \rightarrow \infty} \sup_{|a'_m| = \gamma, |a'_i| \leq \frac{\gamma}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a'_{i}e^{j2 \pi \omega_i(n+k)}| < \frac{\gamma}{|a_m|} \epsilon({\delta}) \} \nonumber \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a'_m| = \gamma, |a'_i| \leq \frac{\gamma}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a'_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon({\delta}) \} (\because \frac{\gamma}{|a_m|} \leq 1) \nonumber \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a'_m| = \gamma, |a'_i| \leq \frac{\gamma}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a'_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon'(\delta) \} (\because \epsilon' \geq \epsilon)\nonumber \\ &< \delta. (\because \eqref{eqn:limmax2}) \label{eqn:limmax5} \end{align}
Let $a''_i := \frac{\gamma}{|a_{m'}|} a_i$. Then, the second term in \eqref{eqn:limmax1} is upper bounded by \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon({\delta}) \} \nonumber \\
&=\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} \frac{\gamma}{|a_{m'}|} a_{i}e^{j2 \pi \omega_i(n+k)}| < \frac{\gamma}{|a_{m'}|} \epsilon({\delta}) \} \nonumber \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} \frac{\gamma}{|a_{m'}|} a_{i}e^{j2 \pi \omega_i(n+k)} - \frac{\gamma}{|a_{m'}|} a_m e^{j 2 \pi \omega_m(n+k)} | < \frac{\gamma}{|a_{m'}|} \epsilon({\delta})+ \frac{\gamma}{|a_{m'}|} | a_m | \} \nonumber \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} \frac{\gamma}{|a_{m'}|} a_{i}e^{j2 \pi \omega_i(n+k)} - \frac{\gamma}{|a_{m'}|} a_m e^{j 2 \pi \omega_m(n+k)} | < \epsilon_{m'}(\delta) \} \label{eqn:limmax4} \\
&\leq \lim_{N \rightarrow \infty} \sup_{|a_{m'}''| = \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum_{1 \leq i \leq \mu, i \neq m} a_i'' e^{j 2 \pi \omega_i(n+k)} | < \epsilon_{m'}(\delta) \} (\because \mbox{definition of }a_i'') \nonumber \\ &< \delta. (\because \eqref{eqn:limmax3}) \label{eqn:limmax6} \end{align} Here, \eqref{eqn:limmax4} is justified as follows: \begin{align}
&\frac{\gamma}{|a_{m'}|} \epsilon(\delta) + \frac{\gamma}{|a_{m'}|}|a_m| \nonumber \\
&\leq \frac{\gamma}{|a_m|} \epsilon(\delta) + \gamma \kappa(\delta) (\because \frac{|a_{m'}|}{|a_m|} \geq \frac{1}{\kappa(\delta)} \mbox{, and by definition } \kappa(\delta) \leq 1) \nonumber \\
&\leq \epsilon(\delta) + \gamma \kappa(\delta) (\because |a_m| \geq \gamma) \nonumber \\ &\leq \frac{\epsilon_{m'}(\delta)}{2} + \frac{\epsilon_{m'}(\delta)}{2}. (\because \mbox{definitions of }\epsilon(\delta), \kappa(\delta)) \nonumber \end{align}
Therefore, by plugging \eqref{eqn:limmax5} and \eqref{eqn:limmax6} into \eqref{eqn:limmax1}, we get \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \{ | \sum^{\mu}_{i=1} a_{i}e^{j2 \pi \omega_i(n+k)}| < \epsilon(\delta) \} < \delta, \nonumber \end{align} which finishes the proof. \end{proof}
Now, we will generalize Lemma~\ref{lem:dis:geo3} by introducing polynomial terms. First, we prove that a polynomial whose designated coefficient is bounded away from zero can be small only on a vanishing fraction of its domain.
\begin{lemma} For all $n\in \mathbb{N}$, $n' \in \mathbb{Z}^+$, $m \in \{1,\cdots,n \}$, $\gamma>0$ and $k > 0$, \begin{align}
\lim_{T \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{| \{ x \in (0,T] : |\sum^{n}_{i=-n'} a_i x^i | < k \} |_{\mathbb{L}}}{T}=0 \nonumber \end{align}
where $| \cdot |_{\mathbb{L}}$ is the Lebesgue measure of the set. \label{lem:dis:leb} \end{lemma} \begin{proof} Let $X$ be a uniform random variable on $(0,1]$. Then, we have \begin{align}
&\sup_{|a_m| \geq \gamma} \frac{|\{ x \in (0,T] : | \sum^{n}_{i=-n'} a_i x^i | < k \}|_{\mathbb{L}}}{T} \nonumber \\
&=\sup_{|a_m| \geq \gamma} \frac{|\{ x \in (0,T] : | \sum^{n}_{i=-n'} a_i \frac{x^i}{T^m} | < \frac{k}{T^m} \}|_{\mathbb{L}}}{T} \nonumber \\
&=\sup_{|a_m| \geq \gamma} \frac{|\{ x \in (0,T] : | \sum^{n}_{i=-n'} a_i \left(\frac{x}{T}\right)^i | < \frac{k}{T^m} \}|_{\mathbb{L}}}{T} (\because \mbox{the reparametrization } a_i \mapsto a_i T^{i-m} \mbox{ leaves the constraint set invariant}) \nonumber \\
&=\sup_{|a_m| \geq \gamma} | \{ x \in (0,1] : |\sum^{n}_{i=-n'}a_i x^i | < \frac{k}{T^m} \} |_{\mathbb{L}} \nonumber \\
&=\sup_{|a_m| \geq \gamma} \mathbb{P} \{ |\sum^n_{i=-n'} a_i X^i | < \frac{k}{T^m} \} \nonumber\\
&=\sup_{|a_{m+n'}| \geq \gamma} \mathbb{P} \{ |\sum^{n+n'}_{i=0} a_i X^i | < \frac{k X^{n'}}{T^m} \} \nonumber\\
&\leq \sup_{|a_{m+n'}| \geq \gamma} \mathbb{P} \{ |\sum^{n+n'}_{i=0} a_i X^i | < \frac{k }{T^m} \}. (\because 0 < X \leq 1 \mbox{ w.p. 1}) \nonumber\\ \end{align}
Therefore, by Lemma~\ref{lem:singleun} \begin{align}
&\lim_{T \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{| \{ x \in (0,T] : |\sum^{n}_{i=-n'} a_i x^i | < k \} |_{\mathbb{L}}}{T} \nonumber \\
&=\lim_{T \rightarrow \infty} \sup_{|a_{m+n'}| \geq \gamma, a_i \in \mathbb{C}}
\mathbb{P} \{ |\sum^{n+n'}_{i=0} a_i X^i | < \frac{k}{T^m} \}=0, \nonumber \end{align} which finishes the proof. \end{proof}
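For intuition, in the simplest instance of Lemma~\ref{lem:dis:leb} the vanishing rate can be computed in closed form. The sketch below (our toy example with $n=1$, $n'=0$, $m=1$, i.e.\ $p(x) = a x$; not a case treated separately in the text) evaluates the exact fraction and shows it decaying like $1/T$.

```python
# Toy instance of Lemma dis:leb: for p(x) = a*x the sublevel set
# {x in (0,T] : |a x| < k} is the interval (0, min(T, k/|a|)), so its
# Lebesgue measure divided by T is min(T, k/|a|)/T -> 0 as T grows.
def fraction(a, k, T):
    return min(T, k / abs(a)) / T

ratios = [fraction(a=2.0, k=3.0, T=10.0 ** e) for e in range(1, 6)]
print(ratios)
```

For general Laurent polynomials the measure is not available in closed form, which is why the lemma works through Lemma~\ref{lem:singleun} instead.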
The following lemma shows that the above lemma still holds even if we change Lebesgue measure to counting measure. \begin{lemma} For all $n \in \mathbb{N}$, $n' \in \mathbb{Z}^+$, $m \in \{1,\cdots,n \}$, $\gamma>0$ and $k > 0$, \begin{align}
\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{| \{ x \in \{1,\cdots,N\} : |\sum^{n}_{i=-n'} a_i x^i | < k \} |_{\mathbb{C}}}{N}=0 \nonumber \end{align}
where $| \cdot |_{\mathbb{C}}$ denotes the counting measure (i.e., the cardinality) of the set. \label{lem:dis:cnt} \end{lemma} \begin{proof} First, we will prove the following claim which relates Lebesgue measure with counting measure. \begin{claim} Let $f:\mathbb{R}^+ \rightarrow \mathbb{R}$ be a $\mathcal{C}^{\infty}$ function with $l$ local maxima and minima. Then, \begin{align}
\left| \left\{x \in [1,N] : f(x) > 0 \right\} \right|_{\mathbb{L}} \leq \left|\left\{x \in \{1,\cdots, N \} : f(x) > 0 \right\}\right|_{\mathbb{C}}+ 3l +2. \nonumber \end{align} \end{claim} \begin{proof} Since $f(x)$ is a continuous function with $l$ local maxima and minima, we can prove that there exist $l' \leq l+1$, $s_i$ and $t_i$ $(1 \leq i \leq l')$ such that \begin{align} \left\{x \in \{1,\cdots, N \} : f(x) > 0 \right\} = \{ s_1 , s_1+1 ,\cdots , s_1+t_1 \} \cup \cdots \cup \{ s_{l'} ,s_{l'}+1 , \cdots, s_{l'}+t_{l'} \}. \nonumber \end{align} One way to justify this is by contradiction, i.e. if we assume $l' > l+1$, there should exist more than $l$ local maxima and minima by the mean value theorem. Moreover, since the number of local maxima and minima is bounded by $l$, we have \begin{align}
\left| \left\{x \in [1,N] : f(x) > 0 \right\} \right|_{\mathbb{L}} & \leq \left|[s_1-1,s_1+t_1+1]\right|_{\mathbb{L}}+ \cdots + \left|[s_{l'}-1,s_{l'}+t_{l'}+1]\right|_{\mathbb{L}} + l \nonumber \\ &\leq (t_1+2) + \cdots + (t_{l'}+2)+l \nonumber \\
&\leq \left|\left\{x \in \{1,\cdots, N \} : f(x) > 0 \right\}\right|_{\mathbb{C}} +2l'+l \nonumber \\
&\leq \left|\left\{x \in \{1,\cdots, N \} : f(x) > 0 \right\}\right|_{\mathbb{C}}+ 3l +2. \nonumber \end{align} Thus, the claim is true. \end{proof}
To prove the lemma, let $a_i=a_{R,i}+j a_{I,i}$ where $a_{R,i}, a_{I,i} \in \mathbb{R}$. Then, \begin{align}
&|\sum^n_{i=-n'}a_i x^i| < k \nonumber \\
& (\Leftrightarrow) |\sum^{n+n'}_{i=0}a_{i-n'} x^i| < k x^{n'} \nonumber \\ & (\Leftrightarrow) (\sum^{n+n'}_{i=0}a_{R,i-n'} x^i)^2 + (\sum^{n+n'}_{i=0}a_{I,i-n'} x^i)^2 < k^2 x^{2n'}. \nonumber
\end{align}
Since $k^2 x^{2n'} - (\sum^{n+n'}_{i=0}a_{R,i-n'} x^i)^2 - (\sum^{n+n'}_{i=0}a_{I,i-n'} x^i)^2$ is a continuous function with at most $2(n+n')$ local maxima and minima, by the claim we have \begin{align}
&\lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{| \{ x \in \{1,\cdots,N\} : |\sum^{n}_{i=-n'} a_i x^i | < k \} |_{\mathbb{C}}}{N} \nonumber \\
& \leq \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{|\{ x \in [1,N] : |\sum^{n}_{i=-n'} a_i x^i | < k \}|_{\mathbb{L}}+6(n+n')+2 }{N} \nonumber \\
& \leq \lim_{N \rightarrow \infty} \sup_{|a_m| \geq \gamma, a_i \in \mathbb{C}}
\frac{|\{ x \in (0,N] : |\sum^{n}_{i=-n'} a_i x^i | < k \}|_{\mathbb{L}}}{N}=0\ (\because \mbox{Lemma}~\ref{lem:dis:leb}) \nonumber \end{align} Therefore, the lemma is proved. \end{proof}
Now, we merge Lemma~\ref{lem:dis:cnt} with Lemma~\ref{lem:dis:geo3} to prove Lemma~\ref{lem:dis:geofinal}, which shows that Lemma~\ref{lem:dis:geo3} still holds even after we introduce polynomial terms.
\begin{lemma} Let $\omega_1,\omega_2,\cdots,\omega_{\mu}$ be real numbers such that $\omega_i - \omega_j \notin \mathbb{Q}$ for all $i \neq j$. Then, for a strictly positive number $\gamma$, \begin{align}
\lim_{\epsilon \downarrow 0}\lim_{N \rightarrow \infty} \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C},k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1} \left( \sum^{\nu_i}_{j=0} a_{ij}n^j \right) e^{j2 \pi \omega_i (n+k)} \right| < \epsilon \right\}=0. \nonumber \end{align} \label{lem:dis:geofinal} \end{lemma} \begin{proof} To prove the lemma, it is enough to show that for all $\delta > 0$, there exist $\epsilon > 0$ and $N \in \mathbb{N}$ such that \begin{align}
\sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C},k \in \mathbb{Z}} \frac{1}{N} \sum^{N}_{n=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1} \left( \sum^{\nu_i}_{j=0} a_{ij}n^j \right) e^{j2 \pi \omega_i (n+k)} \right| < \epsilon \right\} < \delta. \label{eqn:dis:target} \end{align}
Since $\mu$ is finite, by Lemma~\ref{lem:dis:geo3}, there exist $\epsilon' > 0$ and $M \in \mathbb{N}$ such that \begin{align} \max_{d \in \{1, \cdots, \mu \}} \left(
\sup_{k \in \mathbb{Z}, a_i \in \mathbb{C}, |a_d| \geq 1} \frac{1}{M} \sum_{c=1}^{M}
\mathbf{1} \left\{ \left| \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (c+k)} \right| < \epsilon' \right\} \right) < \frac{\delta}{2}. \label{eqn:dis:target2} \end{align}
By Lemma~\ref{lem:dis:cnt}, there exists $B' \in \mathbb{N}$ such that \begin{align}
\sup_{|a_{1\nu_1}'| \geq \gamma} \frac{
\left| \left\{ b \in \{1,\cdots,B' \} :
\left|\sum^{\nu_1}_{j=0} a_{1j}' b^j \right| \leq 2 \right\}
\right|_{\mathbb{C}} }{B'} < \frac{\delta}{4}. \label{eqn:dis:target3} \end{align}
Define $\kappa' := \frac{2 \sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} {{j}\choose{k}} }{\epsilon'}$. By Lemma~\ref{lem:dis:cnt}, there exists $B'' \in \mathbb{N}$ such that \begin{align}
\sum_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1 \leq k \leq j} \sup_{|a_{k}|=1} \frac{| \{ b \in \{1,\cdots,B'' \} : \kappa' \geq |\sum^{\nu_i-j+k}_{j'=-j+k} a_{j'}b^{j'} |\}|_{\mathbb{C}}}{B''} < \frac{\delta}{4}. \label{eqn:dis:target4} \end{align}
Define $B:=\max(B', B'')$. We will show that the choice of $\epsilon=\epsilon'$ and $N=M \cdot B$ satisfies \eqref{eqn:dis:target}. \begin{align}
&\sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}, k \in \mathbb{Z}} \frac{1}{N} \sum^N_{n=1}
\mathbf{1} \left\{ \left| \sum^{\mu}_{i=1} \left( \sum^{\nu_i}_{j=0} a_{ij}n^j \right) e^{j2 \pi \omega_i (n+k)} \right| < \epsilon \right\} \nonumber \\
&=\sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{N} \sum^{N}_{n=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1}\left( \sum^{\nu_i}_{j=0} a_{ij}n^j \right)e^{j 2 \pi \omega_i n} \right| < \epsilon \right\} (\because e^{j 2\pi \omega_i k} \mbox{ can be absorbed into the $a_{ij}$.}) \nonumber \\
&=\sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B \cdot M} \sum^{B-1}_{b=0} \sum^{M}_{c=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1}\left( \sum^{\nu_i}_{j=0} a_{ij}(bM+c)^j \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon \right\} (\because n \mbox{ is rewritten as $bM+c$.}) \nonumber \\
&=\sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B \cdot M} \sum^{B-1}_{b=0} \sum^{M}_{c=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1}\left( \sum^{\nu_i}_{j=0} a_{ij}\left((bM)^j+\sum^j_{k=1} {{j}\choose{k}}(bM)^{j-k}c^k \right) \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon \right\} \nonumber \\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B \cdot M} \sum^{B-1}_{b=0} \sum^{M}_{c=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1}\left( \sum^{\nu_i}_{j=0} a_{ij} (bM)^j \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon
+\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} |a_{ij}| \sum^j_{k=1} {{j}\choose{k}}(bM)^{j-k}c^k \right\} \nonumber \\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B \cdot M} \sum^{B-1}_{b=0} \sum^{M}_{c=1} \mathbf{1} \left\{ \left| \sum^{\mu}_{i=1}\left( \sum^{\nu_i}_{j=0} a_{ij} (bM)^j \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon
+\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} |a_{ij}| {{j}\choose{k}}(bM)^{j-k}M^k \right\} \label{eqn:geo:identity} \\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B \cdot M} \sum^{B-1}_{b=0} \sum^{M}_{c=1} \mathbf{1} \Bigg\{ \left| \sum^{\mu}_{i=1}\left( \frac{\sum^{\nu_i}_{j=0} a_{ij} (bM)^j}{M_b} \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \nonumber \\ & \frac{\epsilon}{M_b}
+\frac{\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} |a_{ij}| {{j}\choose{k}}(bM)^{j-k}M^k}{M_b} \Bigg\}\nonumber\\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum^{B-1}_{b=0} \Bigg\{ \frac{1}{M} \sum^{M}_{c=1} \mathbf{1} \Bigg\{ \left| \sum^{\mu}_{i=1}\left( \frac{\sum^{\nu_i}_{j=0} a_{ij} (bM)^j}{M_b} \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon \Bigg\} \nonumber \\ &+\mathbf{1}\Bigg\{ \frac{\epsilon}{M_b}
+\frac{\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} |a_{ij}| {{j}\choose{k}}(bM)^{j-k}M^k}{M_b} \geq \epsilon \Bigg\} \Bigg\} \nonumber \\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum^{B-1}_{b=0} \Bigg\{ \frac{1}{M} \sum^{M}_{c=1} \mathbf{1} \Bigg\{ \left| \sum^{\mu}_{i=1}\left( \frac{\sum^{\nu_i}_{j=0} a_{ij} (bM)^j}{M_b} \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon \Bigg\} \nonumber \\
&+\mathbf{1}\Bigg\{ \frac{\epsilon}{M_b} \geq \frac{\epsilon}{2} \Bigg\} + \mathbf{1} \Bigg\{\frac{\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} |a_{ij}| {{j}\choose{k}}(bM)^{j-k}M^k}{M_b} \geq \frac{\epsilon}{2} \Bigg\} \Bigg\}
\label{eqn:dis:geofinal:1} \end{align}
where $M_b:=\max_{i} \left\{ \left| \sum^{\nu_i}_{j=0} a_{ij}\left(bM \right)^j \right| \right\}$; when $M_b = 0$, the value of the indicator function is set to $0$, since in this case the indicator function of \eqref{eqn:geo:identity} is already $0$.
First, we prove that the first term of \eqref{eqn:dis:geofinal:1} is small enough. For all $a_{ij} \in \mathbb{C}$ such that $|a_{1 \nu_1}| \geq \gamma$ and $b \in \{ 0, \cdots, B-1\}$, we have \begin{align}
&\frac{1}{M} \sum^{M}_{c=1} \mathbf{1} \Bigg\{ \left| \sum^{\mu}_{i=1}\left( \frac{\sum^{\nu_i}_{j=0} a_{ij} (bM)^j}{M_b} \right)e^{j 2 \pi \omega_i (bM+c)} \right| < \epsilon \Bigg\} \\ &\leq \max_{d \in \{1, \cdots, \mu \}} \left(
\sup_{k \in \mathbb{Z}, a_i \in \mathbb{C}, |a_d| \geq 1} \frac{1}{M} \sum_{c=1}^{M}
\mathbf{1} \left\{ \left| \sum^{\mu}_{i=1} a_i e^{j 2 \pi \omega_i (c+k)} \right| < \epsilon \right\} \right) \\
&(\because \mbox{By the definition of $M_b$, $\left|\frac{\sum^{\nu_i}_{j=0} a_{ij} (bM)^j}{M_b} \right|=1$ for some $i$})\\ &< \frac{\delta}{2}. (\because \eqref{eqn:dis:target2})\label{eqn:dis:target:100} \end{align}
Next, we prove that the second term of \eqref{eqn:dis:geofinal:1} is small enough. \begin{align}
& \sup_{|a_{1 \nu_1}| \geq \gamma} \frac{\left| \left\{ b \in \{1,\cdots, B \} : M_b < 2 \right\} \right|_{\mathbb{C}}}{B} \nonumber \\
&\leq \sup_{|a_{1\nu_1}| \geq \gamma} \frac{\left| \left\{ b \in \{1,\cdots,B \} :
\left|\sum^{\nu_1}_{j=0} a_{1j} (bM)^j \right| < 2 \right\}
\right|_{\mathbb{C}}}{B} (\because \mbox{definition of $M_b$})\nonumber \\
&\leq \sup_{|a_{1\nu_1}'| \geq \gamma} \frac{
\left| \left\{ b \in \{1,\cdots,B \} :
\left|\sum^{\nu_1}_{j=0} a_{1j}' b^j \right| < 2 \right\}
\right|_{\mathbb{C}} }{B} (\because \mbox{substituting $a_{1j}':=a_{1j}M^j$.})\nonumber \\ &< \frac{\delta}{4}. (\because \eqref{eqn:dis:target3}) \label{eqn:dis:target:101} \end{align}
Finally, we prove that the third term of \eqref{eqn:dis:geofinal:1} is small enough. \begin{align}
&\sup_{|a_{1 \nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{\frac{\sum^{\mu}_{i=1} \sum^{\nu_i}_{j=1} \sum^j_{k=1} |a_{ij}| {{j}\choose{k}}(bM)^{j-k}M^k}{M_b} \geq \frac{\epsilon}{2} \Bigg\} \\
&\leq \sup_{|a_{1 \nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{ (\sum^{\mu}_{i'=1} \sum^{\nu_{i'}}_{j'=1} \sum^{j'}_{k'=1}{{j'}\choose{k'}}) \cdot
\max_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1\leq k \leq j} |a_{ij}| (bM)^{j-k}M^k \geq \frac{\epsilon}{2} {M_b} \Bigg\} \\
&\leq \sum_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1\leq k \leq j} \sup_{|a_{1 \nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{ \kappa' |a_{ij}| (bM)^{j-k}M^k \geq M_b \Bigg\} \\
&\leq \sum_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1\leq k \leq j} \sup_{|a_{1 \nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{ \kappa' |a_{ij}| (bM)^{j-k}M^k \geq |\sum^{\nu_i}_{j'=0} a_{ij'}(bM)^{j'} | \Bigg\} (\because \mbox{definition of $M_b$})\\
&\leq \sum_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1\leq k \leq j} \sup_{|a_{1 \nu_1}| \geq \gamma, a_{ij} \in \mathbb{C}} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{ \kappa' \geq |\sum^{\nu_i}_{j'=0} \frac{a_{ij'}(bM)^{j'}}{|a_{ij}| b^{j-k} M^k} | \Bigg\} \\
&\leq \sum_{1 \leq i \leq \mu, 1 \leq j \leq \nu_i, 1\leq k \leq j} \sup_{|a_{k}| =1} \frac{1}{B} \sum_{b=0}^{B-1} \mathbf{1} \Bigg\{ \kappa' \geq |\sum^{\nu_i-j+k}_{j'=-j+k} a_{j'}b^{j'} | \Bigg\} \\ &< \frac{\delta}{4}. (\because \eqref{eqn:dis:target4})\label{eqn:dis:target:102} \end{align}
Therefore, by \eqref{eqn:dis:target:100}, \eqref{eqn:dis:target:101}, and \eqref{eqn:dis:target:102}, we can see that $\eqref{eqn:dis:geofinal:1} < \delta$, which finishes the proof. \end{proof}
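As a side note, the equidistribution phenomenon behind Lemma~\ref{lem:dis:geofinal} can be checked numerically. The following sketch (not part of the proof; the function name and the choice of frequencies $\sqrt{2},\sqrt{3}$ are illustrative assumptions) estimates the fraction of $n \leq N$ at which an exponential sum with rationally independent frequencies is small; by Weyl equidistribution this fraction is governed by the measure of the set on which the sum is small.

```python
import cmath
import math

def small_fraction(omegas, coeffs, eps, N, k=0):
    """Fraction of n in 1..N with |sum_i a_i e^{j 2 pi omega_i (n+k)}| < eps."""
    count = 0
    for n in range(1, N + 1):
        s = sum(a * cmath.exp(2j * math.pi * w * (n + k))
                for a, w in zip(coeffs, omegas))
        if abs(s) < eps:
            count += 1
    return count / N

# omega_1 - omega_2 = sqrt(3) - sqrt(2) is irrational, as the lemma requires.
frac = small_fraction([math.sqrt(2), math.sqrt(3)], [1.0, 1.0], eps=0.1, N=20000)
# For |e^{i t1} + e^{i t2}| < 0.1 the phase gap must lie within ~0.1 of pi,
# so the limiting fraction is about 0.2 / (2 pi), roughly 0.03.
```

Shifting $k$ leaves the estimate essentially unchanged, which is the uniformity in $k$ that the lemma asserts.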
\subsection{Proof of Lemma~\ref{lem:dis:achv}} \label{sec:app:cycleproof}
In this section, we will merge the properties of the observability Gramian shown in Appendix~\ref{sec:dis:gramian} with the uniform convergence results of Appendix~\ref{sec:dis:uniform}, and prove Lemma~\ref{lem:dis:achv} on page~\pageref{lem:dis:achv}.
Just as in Appendix~\ref{sec:app:2}, we first prove the following lemma, which says that the determinant of the observability Gramian is large except on a negligible set, under a cofactor condition on the Gramian matrix. The proof is very similar to that of Lemma~\ref{lem:conti:single}.
\begin{lemma} Let $\mathbf{A}$ and $\mathbf{C}$ be given as \eqref{eqn:ac:jordansingle} and \eqref{eqn:ac:jordansinglec}. Define $a_{i,j}$ and $C_{i,j}$ as the $(i,j)$ element and cofactor of $\begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-k_{m-1}} \\ \mathbf{C}\mathbf{A}^{-n} \end{bmatrix}$ respectively. Then, there exists a family of functions $\{ g_{\epsilon} : \epsilon > 0, g_{\epsilon}:\mathbb{R}^+ \rightarrow \mathbb{R}^+ \}$ satisfying:\\
(i) For all $\epsilon>0$, $k_1 < k_2 < \cdots < k_{m-1}$ and $|C_{m,m}| \geq \epsilon \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i}$, the following is true.\\ \begin{align} \lim_{N \rightarrow \infty} \sup_{k \in \mathbb{Z}, k-k_{m-1} \geq g_{\epsilon}(k_{m-1})} \frac{1}{N}\sum_{n=k+1}^{k+N}
\mathbf{1} \left\{\left| \det\left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_{m-1}} \\ \mathbf{C} \mathbf{A}^{-n} \end{bmatrix}
\right) \right|
< \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i} \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0. \nonumber \end{align} (ii) For each $\epsilon>0$, $g_{\epsilon}(k) \lesssim 1 + \log(k+1)$. \label{lem:dis:single} \end{lemma} \begin{proof} By Lemma~\ref{lem:dis:det:lower}, we can find a function $g'_{2\epsilon^2}(k)$ such that for all $0 \leq k_1 < k_2 < \cdots < k_{m-1} < n$ satisfying:\\ (i) $n-k_{m-1} \geq g'_{2\epsilon^2}(k_{m-1})$ \\ (ii) $g'_{2\epsilon^2}(k) \lesssim 1 + \log (k+1)$ \\
(iii) $\left| \sum_{m-m_{\mu}+1 \leq i \leq m} a_{m,i}C_{m,i} \right| \geq 2 \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq {m-1}} \lambda_i^{-k_i}$\\ the following inequality holds: \begin{align}
\left| \det\left( \begin{bmatrix} \mathbf{C}\mathbf{A}^{-k_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-k_{m-1}} \\ \mathbf{C}\mathbf{A}^{-n} \\ \end{bmatrix}
\right) \right| & \geq \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq {m-1}} \lambda_i^{-k_i}. \nonumber \end{align} Let $g_{\epsilon}(k)$ be $g'_{2\epsilon^2}(k)$. Then, we have \begin{align} &\sup_{k \in \mathbb{Z}, k - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \frac{1}{N} \sum_{n=k+1}^{k+N} \mathbf{1}
\left\{ \left| \det \left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_{m-1}} \\ \mathbf{C} \mathbf{A}^{-n} \end{bmatrix}
\right) \right| < \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i}\right\} \nonumber \\ & \leq \sup_{k \in \mathbb{Z}, k - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \frac{1}{N} \sum_{n=k+1}^{k+N} \mathbf{1} \left\{
\left| \sum_{m-m_{\mu}+1 \leq i \leq m} a_{m,i} C_{m,i}
\right| < 2 \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i} \right\} \label{eqn:lem:detail2:0} \\ & = \sup_{k \in \mathbb{Z}, k - k_{m-1} \geq g_{\epsilon}(k_{m-1})} \frac{1}{N} \sum_{n=k+1}^{k+N} \mathbf{1} \left\{
\left| \sum_{m-m_{\mu}+1 \leq i \leq m} \frac{a_{m,i}}{\lambda_m^{-n}} \frac{C_{m,i}}{\epsilon \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i} }
\right| < 2 \epsilon \right\} \nonumber \\
&\leq \sup_{k \in \mathbb{Z}, |b_m|\geq 1} \frac{1}{N} \sum_{n=k+1}^{k+N}\mathbf{1}
\left\{ \left| \sum_{m-m_{\mu}+1 \leq i \leq m} b_i \frac{a_{m,i}}{\lambda_m^{-n}} \right| < 2 \epsilon \right\} \label{eqn:lem:detail2:1} \end{align}
where \eqref{eqn:lem:detail2:0} is by the definition of $g_{\epsilon}(k)$ and Lemma~\ref{lem:dis:det:lower}, and \eqref{eqn:lem:detail2:1} is by $|C_{m,m}| \geq \epsilon \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i} $.
Let $\mathbf{C_{\mu,\nu_{\mu}}}$, as denoted in \eqref{eqn:ac:jordansinglec}, be $\begin{bmatrix} c'_{1} & \cdots & c'_{m_{\mu,\nu_{\mu}}} \end{bmatrix}$.\\ Moreover, \begin{align} &\mathbf{A_{\mu,\nu_\mu}}^{-n}\nonumber \\ &=\begin{bmatrix} (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n} & {{-n}\choose{1}} (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n-1} & \cdots & {{-n}\choose{m_{\mu,\nu_\mu}-1}} (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n-m_{\mu,\nu_\mu}+1} \\ 0 & (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n} & \cdots & {{-n}\choose{m_{\mu,\nu_\mu}-2}} (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n-m_{\mu,\nu_\mu}+2} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & (\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n} \end{bmatrix}.\nonumber \end{align} Thus, we can see that \begin{align} a_{m,m}&=\sum_{1 \leq i \leq m_{\mu,\nu_{\mu}}} c'_i { -n \choose m_{\mu,\nu_\mu}-i }(\lambda_{\mu,\nu_\mu}e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n-m_{\mu,\nu_\mu}+i} .\nonumber \end{align} Therefore, \begin{align} \frac{a_{m,m}}{\lambda_m^{-n}} &=\sum_{1 \leq i \leq m_{\mu,\nu_{\mu}}} c'_{i} { -n \choose m_{\mu,\nu_\mu}-i }\lambda_{\mu,\nu_\mu}^{-m_{\mu,\nu_\mu}+i}(e^{j 2 \pi \omega_{\mu,\nu_\mu}})^{-n-m_{\mu,\nu_\mu}+i}.\nonumber \end{align} Moreover, when $a_{m,i}$ is considered as a function of $n$, the term $n^{m_{\mu,\nu_\mu}-1} e^{-j 2 \pi \omega_{\mu,\nu_\mu} n}$ appears only in $\frac{a_{m,m}}{\lambda_m^{-n}}$ among $\frac{a_{m,m-m_{\mu}+1}}{\lambda_m^{-n}},\cdots ,\frac{a_{m,m}}{\lambda_m^{-n}}$, and the associated coefficient is $\frac{c_1'(-1)^{m_{\mu,\nu_\mu}-1}}{(m_{\mu,\nu_\mu}-1)!} \lambda_{\mu,\nu_\mu}^{-m_{\mu,\nu_\mu}+1} e^{j 2 \pi \omega_{\mu,\nu_\mu}(-m_{\mu,\nu_\mu}+1)}$.
Let $c':=\frac{|c_1'|}{(m_{\mu,\nu_\mu}-1)!} \lambda_{\mu,\nu_\mu}^{-m_{\mu,\nu_\mu}+1}$. Then, \eqref{eqn:lem:detail2:1} can be upper bounded as follows: \begin{align}
\eqref{eqn:lem:detail2:1} &\leq \sup_{k \in \mathbb{Z},|a_{\nu_\mu,m_{\mu,\nu_\mu}}| \geq c'} \frac{1}{N}
\sum^{k+N}_{n=k+1} \mathbf{1} \left\{ \left| \sum_{1 \leq i \leq \nu_\mu} \left( \sum_{1 \leq j \leq m_{\mu,i}} a_{ij}n^{j-1} \right)e^{j 2 \pi (-\omega_{\mu,i})n} \right| < 2 \epsilon \right\} \nonumber \\ &
=\sup_{k \in \mathbb{Z},|a_{\nu_\mu,m_{\mu,\nu_\mu}}| \geq c'} \frac{1}{N}
\sum^{N}_{n=1} \mathbf{1} \left\{ \left| \sum_{1 \leq i \leq \nu_\mu} \left( \sum_{1 \leq j \leq m_{\mu,i}} a_{ij}(n+k)^{j-1} \right)e^{j 2 \pi (-\omega_{\mu,i})(n+k)} \right| < 2 \epsilon \right\} \nonumber \\ &
\leq \sup_{k \in \mathbb{Z},|a_{\nu_\mu,m_{\mu,\nu_\mu}}| \geq c'} \frac{1}{N}
\sum^{N}_{n=1} \mathbf{1} \left\{ \left| \sum_{1 \leq i \leq \nu_\mu} \left( \sum_{1 \leq j \leq m_{\mu,i}} a_{ij}n^{j-1} \right)e^{j 2 \pi (-\omega_{\mu,i})(n+k)} \right| < 2 \epsilon \right\} \label{eqn:lem:detail2:2} \end{align} The last inequality comes from the fact that the coefficient of $n^{m_{\mu,\nu_{\mu}}-1}$ is the same for both $\sum_{1 \leq j \leq m_{\mu,\nu_{\mu}}} a_{\nu_{\mu},j}(n+k)^{j-1}$ and $\sum_{1 \leq j \leq m_{\mu,\nu_{\mu}}} a_{\nu_{\mu},j}n^{j-1}$.
By Lemma~\ref{lem:singleun}, we get \begin{align}
\lim_{N \rightarrow \infty} \sup_{k \in \mathbb{Z},|a_{\nu_\mu,m_{\mu,\nu_\mu}}| \geq c'} \frac{1}{N}
\sum^{N}_{n=1} \mathbf{1} \left\{ \left| \sum_{1 \leq i \leq \nu_\mu} \left( \sum_{1 \leq j \leq m_{\mu,i}} a_{ij}n^{j-1} \right)e^{j 2 \pi (-\omega_{\mu,i})(n+k)} \right| < 2 \epsilon \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0. \nonumber \end{align} Therefore, by \eqref{eqn:lem:detail2:2} we can say that \begin{align} \lim_{N \rightarrow \infty} \sup_{k \in \mathbb{Z}, k-k_{m-1} \geq g_{\epsilon}(k_{m-1})} \frac{1}{N}\sum_{n=k+1}^{k+N}
\mathbf{1} \left\{\left| \det\left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-k_1} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-k_{m-1}} \\ \mathbf{C} \mathbf{A}^{-n} \end{bmatrix}
\right) \right|
< \epsilon^2 \lambda_m^{-n} \prod_{1 \leq i \leq m-1} \lambda_i^{-k_i} \right\} \rightarrow 0 \mbox{ as } \epsilon \downarrow 0 \nonumber \end{align} which finishes the proof. \end{proof}
Based on the previous lemma, the properties of p.m.f.\ tails shown in Section~\ref{sec:app:1}, and the properties of the observability Gramian discussed in Section~\ref{sec:dis:gramian}, we can prove Lemma~\ref{lem:dis:achv} for the case when the system has no eigenvalue cycles. In fact, we will prove a lemma involving multiple systems; this will turn out to be helpful in proving Lemma~\ref{lem:dis:achv} for general systems with eigenvalue cycles.
Consider pairs of matrices $(\mathbf{A_1},\mathbf{C_1}),(\mathbf{A_2},\mathbf{C_2}),\cdots,(\mathbf{A_r},\mathbf{C_r})$ defined as follows: \begin{align} &\mbox{$\mathbf{A_i}$ is an $m_i \times m_i$ Jordan form matrix and $\mathbf{C_i}$ is a $1 \times m_i$ row vector} \label{eqn:ac:jordansingle2} \\ &\mbox{Each $\mathbf{A_i}$ has no eigenvalue cycles and $(\mathbf{A_i},\mathbf{C_i})$ is observable} \nonumber \\ &\mbox{$\lambda_j^{(i)} e^{j 2 \pi \omega_j^{(i)}}$ is the $(j,j)$ element of $\mathbf{A_i}$} \nonumber \\ &\lambda_1^{(i)} \geq \lambda_2^{(i)} \geq \cdots \geq \lambda_{m_i}^{(i)} \geq 1. \nonumber \end{align} Then, the following lemma holds. \begin{lemma} Consider systems $(\mathbf{A_1}, \mathbf{C_1}),(\mathbf{A_2}, \mathbf{C_2}),\cdots,(\mathbf{A_r}, \mathbf{C_r})$ given as \eqref{eqn:ac:jordansingle2}.
Then, we can find a polynomial $p(k)$ and a family of random variables $\{ S(\epsilon,k) : k \in \mathbb{Z}^+, \epsilon>0 \}$ such that for all $\epsilon>0$, $k \in \mathbb{Z}^+$ and $1 \leq i \leq r$ there exist $k \leq k_{i,1} < k_{i,2} < \cdots < k_{i,m_i} \leq S(\epsilon,k)$ and $\mathbf{M_i}$ satisfying the following conditions:\\ (i) $\beta[k_{i,j}]=1$ for $1 \leq i \leq r$ and $1 \leq j \leq m_i$\\ (ii) $\mathbf{M_i}\begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{-k_{i,1}} \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,2}} \\ \vdots \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,m_i}} \end{bmatrix}=\mathbf{I}$\\
(iii) $|\mathbf{M_i}|_{max} \leq \frac{p(S(\epsilon,k))}{\epsilon} (\lambda_{1}^{(i)})^{S(\epsilon,k)}$\\ (iv) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{ S(\epsilon,k)-k=s \} = p_e $. \label{lem:dis:geodet} \end{lemma} \begin{proof} By Lemma~\ref{lem:dis:inverse}, instead of conditions (ii) and (iii), it is enough to prove that \begin{align}
\left| \det \left( \begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{-k_{i,1}} \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,2}} \\ \vdots \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,m_i}} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq j \leq m_i} (\lambda_j^{(i)})^{-k_{i,j}}. \nonumber \end{align} Therefore, it is enough to prove the following claim: \begin{claim} We can find a family of stopping times $\{ S(\epsilon,k) : k \in \mathbb{Z}^+, \epsilon > 0 \}$ such that for all $\epsilon>0$, $k \in \mathbb{Z}^+$ and $1 \leq i \leq r$ there exist $k \leq k_{i,1} < k_{i,2} < \cdots < k_{i,m_i} \leq S(\epsilon,k)$ satisfying the following condition:\\ (a) $\beta[k_{i,j}]=1$ for $1 \leq i \leq r$ and $1 \leq j \leq m_i$ \\
(b) $\left| \det \left( \begin{bmatrix} \mathbf{C_i}\mathbf{A_i}^{-k_{i,1}} \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,2}} \\ \vdots \\ \mathbf{C_i}\mathbf{A_i}^{-k_{i,m_i}} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq j \leq m_i} (\lambda_j^{(i)})^{-k_{i,j}}$ \\ (c) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S(\epsilon,k)-k=s\right\} \leq p_e$. \label{claim:dis:1} \end{claim} Before we prove the above claim, we first prove the claim for a single system. \begin{claim} We can find a family of stopping times $\{ S_1(\epsilon,k) : k \in \mathbb{Z}^+, \epsilon > 0 \}$ such that for all $\epsilon>0$ and $k \in \mathbb{Z}^+$ there exist $k \leq k_1' < k_2' < \cdots < k_{m_1}' \leq S_1(\epsilon,k)$ satisfying the following condition:\\ (a') $\beta[k_j']=1$ for $1 \leq j \leq m_1$ \\
(b') $\left| \det \left( \begin{bmatrix} \mathbf{C_1}\mathbf{A_1}^{-k_1'} \\ \mathbf{C_1}\mathbf{A_1}^{-k_2'} \\ \vdots \\ \mathbf{C_1}\mathbf{A_1}^{-k_{m_1}'} \\
\end{bmatrix}\right)\right| \geq \epsilon \prod_{1 \leq j \leq m_1} (\lambda_j^{(1)})^{-k_j'}$ \\ (c') $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S_1(\epsilon,k)-k=s\right\} \leq p_e$. \label{claim:dis:2} \end{claim}
$\bullet$ Proof of Claim~\ref{claim:dis:2}: The proof of Claim~\ref{claim:dis:2} is an induction on $m_1$.
(i) First consider the case $m_1=1$.
In this case, $\mathbf{A_1}$ and $\mathbf{C_1}$ are scalars, so denote $\mathbf{A_1}:=\lambda_1^{(1)}e^{j 2 \pi \omega_1^{(1)}}$ and $\mathbf{C_1}:=c_1$. Since we only care about small enough $\epsilon$, we may assume $\epsilon \leq |c_1|$. Denote $S_1(\epsilon,k) := \inf \{ n \geq k : \beta[n]=1 \}$ and $k_1'=S_1(\epsilon,k)$. Then, $\beta[k_1']=1$ and $\left| \det\left( \begin{bmatrix} c_1 (\lambda_1^{(1)} e^{j 2 \pi \omega_1^{(1)}} )^{-k_1'}
\end{bmatrix} \right) \right| = |c_1| (\lambda_1^{(1)})^{-k_1'} \geq \epsilon (\lambda_1^{(1)})^{-k_1'}$. Moreover, since $S_1(\epsilon,k)-k$ is a geometric random variable with success probability $1-p_e$, \begin{align} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\left\{ S_1(\epsilon,k)-k=s \right\} = p_e. \nonumber \end{align} Therefore, $S_1(\epsilon,k)$ satisfies all the conditions of the claim.
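As a quick numerical sanity check of this base case (a sketch, not part of the proof; the erasure probability $p_e=0.5$ is an assumed example value), simulating the Bernoulli process confirms that $S_1(\epsilon,k)-k$ has the geometric law $\mathbb{P}\{S_1(\epsilon,k)-k=s\}=p_e^s(1-p_e)$:

```python
import collections
import random

def first_arrival(k, p_e, rng):
    """S_1(eps, k) = inf{n >= k : beta[n] = 1} for i.i.d. beta[n] ~ Bernoulli(1 - p_e)."""
    n = k
    while rng.random() < p_e:  # beta[n] = 0 (erasure) with probability p_e
        n += 1
    return n

rng = random.Random(0)
p_e = 0.5
samples = [first_arrival(10, p_e, rng) - 10 for _ in range(100000)]
counts = collections.Counter(samples)
p_hat_0 = counts[0] / len(samples)   # should be close to (1 - p_e)
p_hat_3 = counts[3] / len(samples)   # should be close to p_e**3 * (1 - p_e)
```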
(ii) As an induction hypothesis, we assume the claim is true for $m_1-1$ and prove that the claim holds for $m_1$.
Let $\mathbf{A_1'}$ be the $(m_1-1) \times (m_1-1)$ matrix obtained by removing the $m_1$th row and column of $\mathbf{A_1}$. Likewise, let $\mathbf{C_1'}$ be the $1 \times (m_1-1)$ vector obtained by removing the $m_1$th element of $\mathbf{C_1}$. Then, we can observe that \begin{align} \det\left( \begin{bmatrix} \mathbf{C_1'} \mathbf{A_1'}^{-k_1'} \\ \vdots \\ \mathbf{C_1'} \mathbf{A_1'}^{-k_{m_1-1}'} \end{bmatrix} \right) = cof_{m_1,m_1}\left( \begin{bmatrix} \mathbf{C_1} \mathbf{A_1}^{-k_1'} \\ \vdots \\ \mathbf{C_1} \mathbf{A_1}^{-k_{m_1}'} \\ \end{bmatrix} \right) \nonumber \end{align} where $cof_{i,j}(\mathbf{A})$ denotes the cofactor of $\mathbf{A}$ with respect to the $(i,j)$ element.
By the induction hypothesis, we can find a stopping time $S_1'(\epsilon,k)$ such that there exist $k \leq k_1' < k_2' < \cdots < k_{m_1-1}' \leq S_1'(\epsilon,k)$ satisfying:\\ (a'') $\beta[k_j']=1$ for $1 \leq j \leq m_1-1$ \\
(b'') $\left| \det \left( \begin{bmatrix} \mathbf{C_1'}\mathbf{A_1'}^{-k_1'} \\ \vdots \\ \mathbf{C_1'}\mathbf{A_1'}^{-k_{m_1-1}'} \end{bmatrix}
\right) \right| \geq \epsilon \prod_{1 \leq j \leq m_1-1} (\lambda_{j}^{(1)})^{-k_j'}$\\ (c'') $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \left\{ S_1'(\epsilon,k)-k=s \right\} \leq p_e$.
Let $\mathcal{F}_{i}$ be the $\sigma$-field generated by $\beta[0],\cdots,\beta[i]$ and $g_{\epsilon}: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ be the function of Lemma~\ref{lem:dis:single}. Denote a random variable $d(\epsilon,N)$ as follows: \begin{align} d(\epsilon,N):=\sup_{k \in \mathbb{Z}, k-S_1'(\epsilon,k) \geq g_{\epsilon}(S_1'(\epsilon,k))} \frac{1}{N} \sum_{n=k+1}^{k+N} \mathbf{1}\left\{
\left| \det \left( \begin{bmatrix} \mathbf{C_1}\mathbf{A_1}^{-k_1'} \\ \vdots \\ \mathbf{C_1}\mathbf{A_1}^{-k_{m_1-1}'} \\ \mathbf{C_1}\mathbf{A_1}^{-n} \end{bmatrix} \right)
\right| < \epsilon^2 (\lambda_{m_1}^{(1)})^{-n} \prod_{1 \leq j \leq m_1 -1} ( \lambda_j^{(1)} )^{-k_j'}
| \mathcal{F}_{S_1'(\epsilon,k)} \right\}. \nonumber \end{align} Since (b'') implies $\left| cof_{m_1,m_1}\left( \begin{bmatrix} \mathbf{C_1} \mathbf{A_1}^{-k_1'} \\ \vdots \\ \mathbf{C_1} \mathbf{A_1}^{-k_{m_1-1}'} \\ \mathbf{C_1} \mathbf{A_1}^{-n} \\ \end{bmatrix} \right) \right| \geq \epsilon \prod_{1 \leq j \leq m_1-1} (\lambda_j^{(1)})^{-k_j'}$, by Lemma~\ref{lem:dis:single} we have \begin{align} \lim_{\epsilon \downarrow 0} \lim_{N \rightarrow \infty} \esssup d(\epsilon,N) = 0. \nonumber \end{align} Denote $S_1''(\epsilon,k) := S_1'(\epsilon,k)+g_{\epsilon}(S_1'(\epsilon,k))$. From (ii) of Lemma~\ref{lem:dis:single} we know $g_{\epsilon}(k) \lesssim 1 + \log(k+1)$ for all $\epsilon>0$. Therefore, by Lemma~\ref{lem:conti:tailpoly} we have \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{ S_1''(\epsilon,k)-k=s \} \leq p_e. \label{eqn:dis:single:5} \end{align} Denote a stopping time \begin{align} S_1'''(\epsilon,k):=\inf \left\{n > S_1''(\epsilon,k): \beta[n]=1 \mbox{ and }
\left| \det \left( \begin{bmatrix} \mathbf{C_1}\mathbf{A_1}^{-k_1'} \\ \vdots \\ \mathbf{C_1}\mathbf{A_1}^{-k_{m_1-1}'} \\ \mathbf{C_1}\mathbf{A_1}^{-n} \\ \end{bmatrix} \right)
\right| \geq \epsilon^2 (\lambda_{m_1}^{(1)})^{-n} \prod_{1 \leq j \leq m_1-1} (\lambda_j^{(1)})^{-k_j'} \right\}. \nonumber \end{align} Since $\beta[n]$ is a Bernoulli process, \begin{align}
\mathbb{P}\{ S_1'''(\epsilon,k)-S_1''(\epsilon,k) \geq N | \mathcal{F}_{S_1''(\epsilon,k)} \} \leq p_e^{N(1-d(\epsilon,N))}. \nonumber \end{align} Therefore, \begin{align}
\lim_{\epsilon \downarrow 0} \exp \limsup_{N \rightarrow \infty} \esssup \frac{1}{N} \log \mathbb{P}\{ S_1'''(\epsilon,k)-S_1''(\epsilon,k) \geq N | \mathcal{F}_{S_1''(\epsilon,k)} \} \leq \lim_{\epsilon \downarrow 0} \lim_{N \rightarrow \infty}\esssup p_e^{1-d(\epsilon,N)} \leq p_e \nonumber \end{align} i.e. \begin{align}
\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}\{ S_1'''(\epsilon,k)-S_1''(\epsilon,k) = s | \mathcal{F}_{S_1''(\epsilon,k)} \} \leq p_e. \label{eqn:dis:single:6} \end{align} By applying Lemma~\ref{lem:app:geo} to \eqref{eqn:dis:single:5} and \eqref{eqn:dis:single:6}, we can conclude that \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{S_1'''(\epsilon,k)-k=s \} \leq p_e. \nonumber \end{align} Therefore, if we denote $S_1(\epsilon,k):=S_1'''(\epsilon^{\frac{1}{2}},k)$, $S_1(\epsilon,k)$ satisfies all the conditions of Claim~\ref{claim:dis:2}.
$\bullet$ Proof of Claim~\ref{claim:dis:1}: By recursive use of Claim~\ref{claim:dis:2}, we can find stopping times $S_2(\epsilon,k),\cdots,S_r(\epsilon,k)$ such that for all $\epsilon>0$ and $2 \leq i \leq r$ there exist $S_{i-1}(\epsilon,k) < k_{i,1} < k_{i,2} < \cdots < k_{i,m_i}\leq S_{i}(\epsilon,k)$ satisfying the following condition:\\ (a) $\beta[k_{i,j}]=1$ for $1 \leq j \leq m_{i}$ \\
(b) $\left| \det\left( \begin{bmatrix} \mathbf{C_i} \mathbf{A_i}^{-k_{i,1}} \\ \mathbf{C_i} \mathbf{A_i}^{-k_{i,2}} \\ \vdots \\ \mathbf{C_i} \mathbf{A_i}^{-k_{i,m_i}} \\ \end{bmatrix}
\right) \right| \geq \epsilon \prod_{1 \leq j \leq m_i} (\lambda_j^{(i)})^{-k_{i,j}}$\\
(c) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P}\{ S_i(\epsilon,k)-S_{i-1}(\epsilon,k)=s | \mathcal{F}_{S_{i-1}(\epsilon,k)} \} \leq p_e$.
Then, by Lemma~\ref{lem:app:geo} \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P}\{ S_r(\epsilon,k)-k=s \} \leq p_e. \nonumber \end{align} Therefore, if we denote $S(\epsilon,k):=S_r(\epsilon,k)$, $S(\epsilon,k)$ satisfies all the conditions of Claim~\ref{claim:dis:1}. Thus, Claim~\ref{claim:dis:1} is true and the lemma is also true. \end{proof}
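The way Lemma~\ref{lem:app:geo} composes tail exponents can be illustrated numerically (a sketch under the simplifying assumption that the two waiting times are independent and exactly geometric, which is stronger than what the lemma needs): the pmf of the sum still has exponential rate $\log p_e$, since the polynomial prefactor does not affect the exponent.

```python
import math

p_e = 0.5
S_MAX = 400
# P{T = s} = (1 - p_e) * p_e**s for a single geometric waiting time.
pmf = [(1 - p_e) * p_e ** s for s in range(S_MAX + 1)]
# pmf of T1 + T2 for independent copies, by discrete convolution:
# P{T1 + T2 = s} = (s + 1) * p_e**s * (1 - p_e)**2.
conv = [sum(pmf[i] * pmf[s - i] for i in range(s + 1)) for s in range(S_MAX + 1)]
# (1/s) log P{T1 + T2 = s} approaches log p_e; the factor (s + 1) washes out.
rate = math.log(conv[300]) / 300
```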
We now prove some properties of matrices that will be helpful in the proof of Lemma~\ref{lem:dis:achv}.
\begin{lemma} Let $\mathbf{A}$ and $\mathbf{A'}$ be Jordan block matrices with eigenvalues $\lambda, \alpha \lambda (\alpha \neq 0)$ respectively and the same size $m \in \mathbb{N}$, i.e. $\mathbf{A} = \begin{bmatrix} \lambda & 1 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \end{bmatrix}$ and $\mathbf{A'}= \begin{bmatrix} \alpha\lambda & 1 & \cdots & 0 \\ 0 & \alpha\lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha\lambda \end{bmatrix}$. Then, for all $n \in \mathbb{Z}$ \begin{align} \mathbf{A'}^{n} = \begin{bmatrix} \alpha^{-(m-1)} & 0 & \cdots & 0 \\ 0 & \alpha^{-(m-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \mathbf{A}^n \begin{bmatrix} \alpha^{n+(m-1)} & 0 & \cdots & 0 \\ 0 & \alpha^{n+(m-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha^{n} \end{bmatrix}.
\nonumber \end{align} \label{lem:dis:jordan1} \end{lemma}
\begin{proof} \begin{align} \mathbf{A'}^n &= \begin{bmatrix} (\alpha \lambda )^{n} & {n \choose 1} (\alpha \lambda)^{n-1} & {n \choose 2}(\alpha \lambda)^{n-2} & \cdots & {n \choose m-1} (\alpha \lambda)^{n-(m-1)} \\ 0 & (\alpha \lambda)^{n} & {n \choose 1 }(\alpha \lambda)^{n-1} & \cdots & {n \choose m-2}(\alpha \lambda)^{n-(m-2)} \\ 0 & 0 & (\alpha \lambda)^{n} & \cdots & {n \choose m-3}(\alpha \lambda)^{n-(m-3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & (\alpha \lambda)^n \end{bmatrix}\nonumber \\ &= \begin{bmatrix} \alpha^{-(m-1)} & 0 & 0 & \cdots & 0 \\ 0 & \alpha^{-(m-2)} & 0 & \cdots & 0 \\ 0 & 0 & \alpha^{-(m-3)} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \nonumber \\ & \cdot \begin{bmatrix} \alpha^{n+m-1} \lambda^{n} & {n \choose 1} \alpha^{n-1+m-1} \lambda^{n-1} & {n \choose 2}\alpha^{n-2+m-1} \lambda^{n-2} & \cdots & {n \choose m-1} \alpha^{n-(m-1)+m-1} \lambda^{n-(m-1)} \\ 0 & \alpha^{n+m-2} \lambda^{n} & {n \choose 1 }\alpha^{n-1+m-2} \lambda^{n-1} & \cdots & {n \choose m-2}\alpha^{n-(m-2)+m-2} \lambda^{n-(m-2)} \\ 0 & 0 & \alpha^{n+m-3} \lambda^{n} & \cdots & {n \choose m-3}\alpha^{n-(m-3)+m-3} \lambda^{n-(m-3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \alpha^n \lambda^n \end{bmatrix}\nonumber \\ &= \begin{bmatrix} \alpha^{-(m-1)} & 0 & 0 & \cdots & 0 \\ 0 & \alpha^{-(m-2)} & 0 & \cdots & 0 \\ 0 & 0 & \alpha^{-(m-3)} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \nonumber \\ & \cdot \begin{bmatrix} \alpha^{n+m-1} \lambda^{n} & {n \choose 1} \alpha^{n+m-2} \lambda^{n-1} & {n \choose 2}\alpha^{n+m-3} \lambda^{n-2} & \cdots & {n \choose m-1} \alpha^{n} \lambda^{n-(m-1)} \\ 0 & \alpha^{n+m-2} \lambda^{n} & {n \choose 1 }\alpha^{n+m-3} \lambda^{n-1} & \cdots & {n \choose m-2}\alpha^{n} \lambda^{n-(m-2)} \\ 0 & 0 & \alpha^{n+m-3} \lambda^{n} & \cdots & {n \choose m-3}\alpha^{n} \lambda^{n-(m-3)} \\ 
\vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \alpha^n \lambda^n \end{bmatrix}\nonumber \\ &= \begin{bmatrix} \alpha^{-(m-1)} & 0 & 0 & \cdots & 0 \\ 0 & \alpha^{-(m-2)} & 0 & \cdots & 0 \\ 0 & 0 & \alpha^{-(m-3)} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \nonumber \\ & \cdot \begin{bmatrix} \lambda^{n} & {n \choose 1} \lambda^{n-1} & {n \choose 2} \lambda^{n-2} & \cdots & {n \choose m-1} \lambda^{n-(m-1)} \\ 0 & \lambda^{n} & {n \choose 1 } \lambda^{n-1} & \cdots & {n \choose m-2} \lambda^{n-(m-2)} \\ 0 & 0 & \lambda^{n} & \cdots & {n \choose m-3} \lambda^{n-(m-3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda^n \end{bmatrix} \cdot \begin{bmatrix} \alpha^{n+(m-1)} & 0 & 0 & \cdots & 0 \\ 0 & \alpha^{n+(m-2)} & 0 & \cdots & 0 \\ 0 & 0 & \alpha^{n+(m-3)} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \alpha^n \end{bmatrix} \nonumber \\ &= \begin{bmatrix} \alpha^{-(m-1)} & 0 & \cdots & 0 \\ 0 & \alpha^{-(m-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \mathbf{A}^n \begin{bmatrix} \alpha^{n+(m-1)} & 0 & \cdots & 0 \\ 0 & \alpha^{n+(m-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha^{n} \end{bmatrix}\nonumber \end{align} This finishes the proof. \end{proof}
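Lemma~\ref{lem:dis:jordan1} is straightforward to verify numerically. The sketch below (helper names are illustrative; $m=3$, $n=4$, $\lambda=1.5$, $\alpha=2$ are assumed example values) checks the identity entrywise in plain Python.

```python
def matmul(X, Y):
    """Dense matrix product of nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matpow(X, n):
    """X**n for n >= 0 by repeated multiplication."""
    R = [[1.0 if i == j else 0.0 for j in range(len(X))] for i in range(len(X))]
    for _ in range(n):
        R = matmul(R, X)
    return R

def jordan(lam, m):
    """m x m Jordan block with eigenvalue lam."""
    return [[lam if i == j else (1.0 if j == i + 1 else 0.0)
             for j in range(m)] for i in range(m)]

m, n, lam, alpha = 3, 4, 1.5, 2.0
A, Ap = jordan(lam, m), jordan(alpha * lam, m)
# Left and right diagonal scalings from the lemma statement.
D1 = [[alpha ** (-(m - 1 - i)) if i == j else 0.0 for j in range(m)] for i in range(m)]
D2 = [[alpha ** (n + m - 1 - i) if i == j else 0.0 for j in range(m)] for i in range(m)]
lhs = matpow(Ap, n)                            # A'^n
rhs = matmul(D1, matmul(matpow(A, n), D2))     # D1 * A^n * D2
max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(m) for j in range(m))
```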
\begin{lemma} Let $\mathbf{A}$ be a Jordan block with eigenvalue $\lambda$ and dimension $m \times m$. Then, the Jordan decomposition of the matrix $\mathbf{A}^k$ for $k \in \mathbb{N}$ is $\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{-1}$ where $\mathbf{U}$ is an invertible upper triangular matrix ---so the diagonal elements of $\mathbf{U}$ are non-zero--- and $\mathbf{\Lambda}$ is a Jordan block with eigenvalue $\lambda^k$ and dimension $m \times m$. \label{lem:dis:jordan3} \end{lemma} \begin{proof} We can see that $\mathbf{A}^k$ is an upper triangular Toeplitz matrix whose diagonal elements are $\lambda^{k}$. Thus, $\det(s \mathbf{I}- \mathbf{A}^k)= (s-\lambda^k)^m$ and all eigenvalues of $\mathbf{A}^k$ are $\lambda^k$. Moreover, the rank of $\mathbf{A}^k-\lambda^k \mathbf{I}$ is $m-1$. Thus, $\mathbf{\Lambda}$ has to be a Jordan block matrix with eigenvalue $\lambda^k$ and dimension $m \times m$.
Moreover, $Ker\left(\left(\mathbf{A}^k-\lambda^k \mathbf{I}\right)^p \right) \supseteq span\{ \mathbf{e_1}, \mathbf{e_2} , \cdots , \mathbf{e_p}\}$ for all $1 \leq p \leq m$. Therefore, the $i$th column of $\mathbf{U}^{-1}$ has to belong to $span\{ \mathbf{e_1}, \cdots, \mathbf{e_i} \}$, and $\mathbf{U}^{-1}$ is an upper triangular matrix. Here, the existence of the Jordan form of an arbitrary matrix guarantees that $\mathbf{U}$ is invertible. Therefore, $\mathbf{U}$ is also an upper triangular matrix, and an upper triangular matrix is invertible if and only if its diagonal elements are non-zero. \end{proof}
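The structure established in the lemma is easy to verify numerically. The following sketch (an illustration only, not part of the proof; the eigenvalue and dimensions are hypothetical) checks for a small Jordan block that $\mathbf{A}^k$ is upper triangular Toeplitz with diagonal $\lambda^k$, that its first row follows the binomial pattern ${k \choose d}\lambda^{k-d}$, and that $rank(\mathbf{A}^k-\lambda^k \mathbf{I})=m-1$, so that the Jordan form of $\mathbf{A}^k$ is a single $m \times m$ block with eigenvalue $\lambda^k$:

```python
import numpy as np
from math import comb

def jordan_block(lam, m):
    """m x m Jordan block with eigenvalue lam."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

lam, m, k = 2.0, 4, 3
A = jordan_block(lam, m)
Ak = np.linalg.matrix_power(A, k)

# A^k is upper triangular with lam^k on the diagonal ...
assert np.allclose(np.tril(Ak, -1), 0)
assert np.allclose(np.diag(Ak), lam ** k)

# ... and Toeplitz: each diagonal of A^k is constant.
for d in range(m):
    diag = np.diag(Ak, k=d)
    assert np.allclose(diag, diag[0])

# The entries match the binomial formula C(k, d) * lam^(k - d).
for d in range(min(k, m - 1) + 1):
    assert np.isclose(Ak[0, d], comb(k, d) * lam ** (k - d))

# rank(A^k - lam^k I) = m - 1, so the Jordan form of A^k is a
# single m x m Jordan block with eigenvalue lam^k.
assert np.linalg.matrix_rank(Ak - (lam ** k) * np.eye(m)) == m - 1
```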
\begin{lemma} Let $\mathbf{A}$ be a Jordan block matrix with eigenvalue $\lambda \in \mathbb{C}$ and size $m \in \mathbb{N}$, i.e. $\mathbf{A}=\begin{bmatrix} \lambda & 1 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \\ \end{bmatrix}$. Let $\mathbf{C}$ and $\mathbf{C'}$ be $1 \times m$ matrices such that \begin{align} &\mathbf{C}=\begin{bmatrix} c_1 & c_2 & \cdots & c_m\end{bmatrix} \nonumber\\ &\mathbf{C'}=\begin{bmatrix} c'_1 & c'_2 & \cdots & c'_m\end{bmatrix} \end{align} where $c_i, c'_i \in \mathbb{C}$ and $c_1 \neq 0$.\\ For all $k \in \mathbb{R}$ and $m \times 1$ matrices $\mathbf{X}=\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}$ and $\mathbf{X'}=\begin{bmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_m \end{bmatrix}$, there exists $\mathbf{T}$ such that\\ \begin{align} &(i) \mathbf{T} \mbox{ is an upper triangular matrix.} \nonumber \\ &(ii) \mathbf{C} \mathbf{A}^k \mathbf{X}+ \mathbf{C'}\mathbf{A}^k \mathbf{X'}=\mathbf{C} \mathbf{A}^k \left(\mathbf{X}+\mathbf{T}\mathbf{X'}\right) \nonumber \end{align} Moreover, the diagonal elements of $\mathbf{T}$ are $\frac{c_1'}{c_1}$. \label{lem:dis:jordan2} \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{lem:conti:jordan}. \end{proof}
Now, we can prove Lemma~\ref{lem:dis:achv}. \begin{proof}[Proof of Lemma~\ref{lem:dis:achv}] We will prove the lemma by induction on $m$, the dimension of the system. Recall that we are using the definitions of \eqref{eqn:ac:jordan} and \eqref{eqn:ac2:jordan} for the system matrices $\mathbf{A}$, $\mathbf{C}$, $\mathbf{A_i}$, $\mathbf{C_i}$, $\cdots$.
(i) When $m=1$,
In this case, the lemma reduces to the scalar problem and is trivially true. Precisely, if we choose $S_1(\epsilon,k)$ as $\inf\{s \geq k : \beta[s]=1 \}$, we can check that all the conditions of the lemma are satisfied.
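As a concrete illustration of this stopping rule (with a hypothetical erasure sequence $\beta$; not part of the proof), $S_1(\epsilon,k)=\inf\{s \geq k : \beta[s]=1 \}$ simply waits for the first non-erased observation at or after time $k$:

```python
def first_success(beta, k):
    """S(k) = inf{ s >= k : beta[s] = 1 }; None if no success in the horizon."""
    for s in range(k, len(beta)):
        if beta[s] == 1:
            return s
    return None  # the stopping time exceeds the observed horizon

# beta[s] = 1 means the observation at time s was received (not erased).
beta = [1, 0, 0, 1, 0, 1, 1, 0]
assert first_success(beta, 0) == 0
assert first_success(beta, 1) == 3
assert first_success(beta, 4) == 5
```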
(ii) Now, as an induction hypothesis, we will assume that the lemma is true when the system dimension is $m-1$, and prove that the lemma holds for a system with dimension $m$.
Let $\mathbf{x_{i,j}}$ be an $m_{i,j} \times 1$ column vector, and let $\mathbf{x}$ be $\begin{bmatrix} \mathbf{x_{1,1}} \\ \mathbf{x_{1,2}} \\ \vdots \\ \mathbf{x_{\mu,\nu_\mu}} \end{bmatrix}$. Here, $\mathbf{x}$ can be thought of as the state of the system, and $\mathbf{x_{i,j}}$ corresponds to the states associated with the Jordan block $\mathbf{A_{i,j}}$. Recall that $\mathbf{A_{1,1}}$ is the Jordan block with the largest eigenvalue and size.
The outline of the proof is as follows. By Lemma~\ref{lem:dis:geodet}, we already know that the lemma holds for systems with scalar observations and without eigenvalue cycles. Therefore, we first reduce the system to one with scalar observations and without eigenvalue cycles. To remove the eigenvalue cycles, we will use down-sampling ideas (polyphase decomposition) from signal processing~\cite{Oppenheim}. To reduce the system to one with scalar observations, we will multiply the observations by a proper post-processing matrix which combines the vector observations into scalar observations. Then, we estimate the $m_{1,1}$th element of $\mathbf{x_{1,1}}$, which is associated with the largest eigenvalue, and subtract this estimate from the observations. The resulting system becomes an $(m-1)$-dimensional system, and by the induction hypothesis we can estimate the remaining states. As we mentioned before, this idea is called successive decoding in information theory~\cite{Cover}.
Let us start with the down-sampling and the reduction to scalar observation systems.
$\bullet$ Down-sampling the System by $p$ and Reduction to Scalar Observation Systems: The main difficulty in estimating the $m_{1,1}$th element of $\mathbf{x_{1,1}}$ is the periodicity of the system. To handle this difficulty, we down-sample the system. Let $p=\prod_{1 \leq i \leq \mu} p_i$. Recall that in \eqref{eqn:ac2:jordan}, $p_i$ was the period of each eigenvalue cycle. We can see that when the system is down-sampled by $p$, the resulting system becomes aperiodic. Thus, we can reduce the original periodic system to $p$ aperiodic systems.
We can further reduce vector observation systems to scalar observation systems. Thus, the system reduces to an aperiodic system with scalar observations, and by Lemma~\ref{lem:dis:geodet} we can estimate the $m_{1,1}$th element of $\mathbf{x_{1,1}}$.
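The effect of down-sampling on an eigenvalue cycle can be checked numerically. In the sketch below (hypothetical cycle periods and base eigenvalues, not part of the proof), the members of a cycle differ by $p_i$th roots of unity, so after down-sampling by $p=\prod_i p_i$ they all collapse to the same eigenvalue $\lambda_{i,1}^p$, i.e. the down-sampled system has no eigenvalue cycles:

```python
import cmath
from math import prod

# Hypothetical example: two eigenvalue cycles with periods p1 = 2, p2 = 3.
p1, p2 = 2, 3
p = prod([p1, p2])  # down-sampling rate

lam1 = 2.0   # base eigenvalue of cycle 1
lam2 = 1.5   # base eigenvalue of cycle 2
cycle1 = [lam1 * cmath.exp(2j * cmath.pi * j / p1) for j in range(p1)]
cycle2 = [lam2 * cmath.exp(2j * cmath.pi * j / p2) for j in range(p2)]

# After down-sampling by p, every member of a cycle maps to the same
# eigenvalue lam_i^p, since the unit-modulus ratios satisfy alpha^p = 1.
for lam in cycle1:
    assert abs(lam ** p - lam1 ** p) < 1e-9
for lam in cycle2:
    assert abs(lam ** p - lam2 ** p) < 1e-9
```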
Since we are using induction for the proof, we can focus on the first eigenvalue cycle of the system.
Let $T_1, \cdots, T_R$ be all the sets $T$ such that $T :=\{t_1, \cdots, t_{|T|} \} \subseteq \{0,1,\cdots, p_1-1 \}$ and \begin{align} \begin{bmatrix} \mathbf{C_1} \mathbf{A_1}^{-t_1} \\ \mathbf{C_1} \mathbf{A_1}^{-t_2} \\ \vdots \\
\mathbf{C_1} \mathbf{A_1}^{-t_{|T|}} \end{bmatrix} \text{ is full rank.} \label{eqn:dis:geofinal:0} \end{align} Here, the definitions of $\mathbf{A_1}$ and $\mathbf{C_1}$ are given in \eqref{eqn:ac2:jordan}, and $ \begin{bmatrix} \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_1} \\ \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_2} \\ \vdots \\
\mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{|T|}} \end{bmatrix} $ is also full rank. The number of such sets, $R$, is finite since $p_1$ is finite.
Therefore, for each $T_r := \{ t_{r,1},\cdots,t_{r,|T_r|} \}$ $(1 \leq r \leq R)$, we can find a matrix $\mathbf{L_r}$ such that \begin{align} \mathbf{L_r} \begin{bmatrix} \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,1}} \\ \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,2}} \\ \vdots \\
\mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,|T_r|}} \\ \end{bmatrix}=\mathbf{I}. \nonumber \end{align} Let $ \begin{bmatrix}
\mathbf{L_{t_{r,1},r}} & \mathbf{L_{t_{r,2},r}} & \cdots & \mathbf{L_{t_{r,|T_r|},r}} \end{bmatrix} $ denote the first row of $\mathbf{L_r}$, where the $\mathbf{L_{t,r}}$ are $1 \times l$ matrices. Then, \begin{align} \begin{bmatrix}
\mathbf{L_{t_{r,1},r}} & \mathbf{L_{t_{r,2},r}} & \cdots & \mathbf{L_{t_{r,|T_r|},r}} \end{bmatrix} \begin{bmatrix} \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,1}} \\ \mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,2}} \\ \vdots \\
\mathbf{C_1} diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-t_{r,|T_r|}} \\ \end{bmatrix}= \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} . \label{eqn:dis:thm6} \end{align}
We also extend this definition of $\mathbf{L_{q,r}}$ to all $q \in \{0,\cdots, p-1 \}$, $r \in \{ 1,\cdots,R \} $ by putting $\mathbf{L_{q,r}}:=\mathbf{L_{q \bmod p_1,r}}$ for $q \geq p_1$. Then, we can easily check that \eqref{eqn:dis:thm6} still holds as long as the $t_{r,i}$ remain the same modulo $p_1$.
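This extension is consistent because the entries of $diag\{\alpha_{1,1},\cdots,\alpha_{1,\nu_1}\}$ are $p_1$th roots of unity, so its powers are periodic with period $p_1$. A small numerical check (with hypothetical $\alpha$'s, not part of the proof):

```python
import numpy as np

p1 = 3
# Hypothetical alpha's: p1-th roots of unity, so alpha^{p1} = 1.
alphas = np.exp(2j * np.pi * np.arange(p1) / p1)
D = np.diag(alphas)

# D^{-t} depends on t only through t mod p1, which justifies
# defining L_{q,r} := L_{q mod p1, r}.
for t in range(12):
    assert np.allclose(np.linalg.matrix_power(D, -t),
                       np.linalg.matrix_power(D, -(t % p1)))
```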
\begin{claim} For given $q \in \{0,\cdots, p-1 \}$ and $r \in \{ 1,\cdots, R\}$, suppose that $\mathbf{L_{q,r}} \mathbf{C_1}$ is not $\mathbf{0}$. Then, there exist $\mathbf{\bar{C}_{q,r}}$, $\mathbf{\bar{A}_{q,r}}$, $\mathbf{\bar{U}_{q,r}}$, $\mathbf{\bar{x}_{q,r}}$ that satisfy the following conditions:\\ (i) $\mathbf{\bar{A}_{q,r}}$ is a $\bar{\bar{m}}_{q,r} \times \bar{\bar{m}}_{q,r}$ square matrix given in a Jordan form. The eigenvalues of $\mathbf{\bar{A}_{q,r}}$ belong to $\{ \lambda_{1,1}^p, \lambda_{2,1}^p, \cdots, \lambda_{\mu,1}^p \}$, and no two different Jordan blocks have the same eigenvalue. Therefore, $\mathbf{\bar{A}_{q,r}}$ has no eigenvalue cycles. Furthermore, the first (top-left) Jordan block of $\mathbf{\bar{A}_{q,r}}$ is an $m_{1,1} \times m_{1,1}$ Jordan block associated with eigenvalue $\lambda_{1,1}^p$.\\ (ii) $\mathbf{\bar{C}_{q,r}}$ is a $1 \times \bar{\bar{m}}_{q,r}$ row vector and $(\mathbf{\bar{A}_{q,r}}, \mathbf{\bar{C}_{q,r}})$ is observable.\\ (iii) $\mathbf{\bar{U}_{q,r}}$ is a $\bar{\bar{m}}_{q,r} \times \bar{\bar{m}}_{q,r}$ invertible upper triangular matrix.\\ (iv) $\mathbf{\bar{x}_{q,r}}$ is a $\bar{\bar{m}}_{q,r} \times 1$ column vector. There exists a nonzero constant $g_{q,r}$ such that \begin{align} (\mathbf{\bar{x}_{q,r}})_{m_{1,1}}=g_{q,r} \left( \mathbf{L_{q,r}}\mathbf{C_1} diag\{ \alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-(q+(m_{1,1}-1))} \right) \begin{bmatrix} (\mathbf{x_{1,1}})_{m_{1,1}}\\ (\mathbf{x'_{1,2}})_{m_{1,1}}\\ \vdots \\ (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \end{bmatrix},\nonumber\end{align} where $(\mathbf{x'_{1,i}})_{m_{1,1}}=(\mathbf{x_{1,i}})_{m_{1,1}}$ when the size of $\mathbf{x_{1,i}}$ is greater than or equal to $m_{1,1}$, and $(\mathbf{x'_{1,i}})_{m_{1,1}}=0$ otherwise.\\ (v) For all $k \in \mathbb{Z}^+$, $\mathbf{L_{q,r}}\mathbf{C}\mathbf{A}^{-(pk+q)}\mathbf{x}=\mathbf{\bar{C}_{q,r}}\mathbf{\bar{A}_{q,r}}^{-k}\mathbf{\bar{U}_{q,r}}\mathbf{\bar{x}_{q,r}}$. \label{claim:donknow2} \end{claim}
This claim says that by down-sampling at rate $p$, we get systems without eigenvalue cycles. Moreover, by multiplying the observations by a proper row vector, we can reduce the system to a scalar observation system while keeping the information required to estimate $(\mathbf{x_{1,1}})_{m_{1,1}}$. When $\mathbf{L_{q,r}} \mathbf{C_1}$ is $\mathbf{0}$, the observation is not useful for estimating $(\mathbf{x_{1,1}})_{m_{1,1}}$. Thus, we can ignore it.
\begin{proof} The proof of the claim consists of two parts, down-sampling and reduction to a scalar observation system.
(1) Down-sampling the System by $p$:
By the definition of $\mathbf{C}$, $\mathbf{A}$, $\mathbf{C_{i,j}}$, $\mathbf{A_{i,j}}$, for all $k \in \mathbb{Z}$, $q \in \{0,\cdots, p-1 \}$ we have \begin{align} &\mathbf{C} \mathbf{A}^{-(pk+q)} \mathbf{x} = \mathbf{C_{1,1}}\mathbf{A_{1,1}}^{-(pk+q)} \mathbf{x_{1,1}} + \mathbf{C_{1,2}}\mathbf{A_{1,2}}^{-(pk+q)} \mathbf{x_{1,2}}+ \cdots + \mathbf{C_{\mu,\nu_\mu}}\mathbf{A_{\mu,\nu_\mu}}^{-(pk+q)} \mathbf{x_{\mu,\nu_\mu}} \label{eqn:downsample:1} \end{align}
Since the dimensions of $\mathbf{x_{i,1}}, \cdots , \mathbf{x_{i,\nu_i}}$ may be different, we will make them equal by extending the dimensions to the maximum, i.e. $m_{i,1}$. For the extension, we will append zeros at the end of the matrices. Let $\mathbf{C'_{i,j}}$ be an $l \times m_{i,1}$ matrix given as $\begin{bmatrix} \mathbf{C_{i,j}} & \mathbf{0}_{l \times (m_{i,1}-m_{i,j})} \end{bmatrix} $, $\mathbf{A'_{i,j}}$ be an $m_{i,1} \times m_{i,1}$ Jordan block matrix with eigenvalue $\lambda_{i,j}$, and $\mathbf{x'_{i,j}}$ be an $m_{i,1} \times 1$ column vector given as $\begin{bmatrix} \mathbf{x_{i,j}} \\ \mathbf{0}_{(m_{i,1}-m_{i,j}) \times 1} \end{bmatrix}$.
Then, by the construction, we can see that $(\mathbf{x_{1,1}'})_{m_{1,1}}=(\mathbf{x_{1,1}})_{m_{1,1}}$; if $m_{1,i}$ is greater than or equal to $m_{1,1}$, then $(\mathbf{x_{1,i}'})_{m_{1,1}}=(\mathbf{x_{1,i}})_{m_{1,1}}$, and otherwise $(\mathbf{x_{1,i}'})_{m_{1,1}}=0$. Therefore, $\mathbf{x_{i,j}'}$ satisfies the condition (iv) of the claim. Furthermore, the first column of $\mathbf{C_{i,j}'}$ is equal to the first column of $\mathbf{C_{i,j}}$ by construction.
We also define $\alpha_{i,j}$ to be $\frac{\lambda_{i,j}}{\lambda_{i,1}}$. Recall that $\lambda_{i,j}$ was defined as the eigenvalue corresponding to $\mathbf{A_{i,j}}$ in \eqref{eqn:ac:jordan}. Then, by definition, $\alpha_{i,j}^{p_i}=1$.
Then, \eqref{eqn:downsample:1} can be written as follows: \begin{align} &\mathbf{C} \mathbf{A}^{-(pk+q)} \mathbf{x}=
\mathbf{C'_{1,1}} \mathbf{A'_{1,1}}^{-(pk+q)} \mathbf{x'_{1,1}} +\mathbf{C'_{1,2}} \mathbf{A'_{1,2}}^{-(pk+q)} \mathbf{x'_{1,2}}+ \cdots + \mathbf{C'_{\mu,\nu_\mu}} \mathbf{A'_{\mu,\nu_\mu}}^{-(pk+q)}\mathbf{x'_{\mu,\nu_\mu}} \nonumber \\ &=\mathbf{C'_{1,1}} \mathbf{A'_{1,1}}^{-(pk+q)} \mathbf{x'_{1,1}}\nonumber \\ &+ \mathbf{C'_{1,2}} \begin{bmatrix} \alpha_{1,2}^{-(m_{1,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{1,2}^{-(m_{1,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix} \mathbf{A'_{1,1}}^{-(pk+q)} \begin{bmatrix} \alpha_{1,2}^{-(pk+q)+(m_{1,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{1,2}^{-(pk+q)+(m_{1,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{1,2}^{-(pk+q)} \end{bmatrix} \mathbf{x'_{1,2}}+ \cdots \nonumber \\ &+ \mathbf{C'_{\mu,\nu_\mu}} \begin{bmatrix} \alpha_{\mu,\nu_\mu}^{-(m_{\mu,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{\mu,\nu_\mu}^{-(m_{\mu,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix} \mathbf{A'_{\mu,1}}^{-(pk+q)} \begin{bmatrix} \alpha_{\mu,\nu_\mu}^{-(pk+q)+(m_{\mu,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{\mu,\nu_\mu}^{-(pk+q)+(m_{\mu,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{\mu,\nu_\mu}^{-(pk+q)}\\ \end{bmatrix} \mathbf{x'_{\mu,\nu_\mu}} \label{eqn:dis:thm1} \\ &=\mathbf{C'_{1,1}} \mathbf{A'_{1,1}}^{-(pk+q)} \mathbf{x'_{1,1}}\nonumber \\ &+\mathbf{C'_{1,2}} \begin{bmatrix} \alpha_{1,2}^{-(m_{1,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{1,2}^{-(m_{1,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix} \mathbf{A'_{1,1}}^{-(pk+q)}\begin{bmatrix} \alpha_{1,2}^{-q+(m_{1,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{1,2}^{-q+(m_{1,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{1,2}^{-q} \end{bmatrix} \mathbf{x'_{1,2}} + \cdots \nonumber \\ &+\mathbf{C'_{\mu,\nu_\mu}} \begin{bmatrix} 
\alpha_{\mu,\nu_\mu}^{-(m_{\mu,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{\mu,\nu_\mu}^{-(m_{\mu,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix} \mathbf{A'_{\mu,1}}^{-(pk+q)} \begin{bmatrix} \alpha_{\mu,\nu_\mu}^{-q+(m_{\mu,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{\mu,\nu_\mu}^{-q+(m_{\mu,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{\mu,\nu_\mu}^{-q}\\ \end{bmatrix} \mathbf{x'_{\mu,\nu_\mu}}.\label{eqn:dis:thm2} \end{align} Here, \eqref{eqn:dis:thm1} follows from Lemma~\ref{lem:dis:jordan1}, and \eqref{eqn:dis:thm2} follows from $\alpha_{i,j}^p=\left( \alpha_{i,j}^{p_i} \right)^{\prod_{i' \neq i}p_{i'} }=1$. Recall that $m_{i,j}$ was defined as the size of $\mathbf{A_{i,j}}$ in \eqref{eqn:ac:jordan}.
Define \begin{align} &\mathbf{C''_{i,j}} := \mathbf{C'_{i,j}} \begin{bmatrix} \alpha_{i,j}^{-(m_{i,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{i,j}^{-(m_{i,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}, \label{eqn:evidence3} \\ &\mathbf{x''_{i,j}} := \begin{bmatrix} \alpha_{i,j}^{-q+(m_{i,1}-1)} & 0 & \cdots & 0 \\ 0 & \alpha_{i,j}^{-q+(m_{i,1}-2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{i,j}^{-q} \end{bmatrix} \mathbf{x'_{i,j}}. \label{eqn:evidence2} \end{align}
Here, we can notice that the first column of $\mathbf{C''_{i,j}}$ is $\alpha_{i,j}^{-(m_{i,1}-1)}$ times the first column of $\mathbf{C'_{i,j}}$, and we know that the first column of $\mathbf{C'_{i,j}}$ is equal to the first column of $\mathbf{C_{i,j}}$. The last element of $\mathbf{x''_{i,j}}$ is $\alpha_{i,j}^{-q}$ times the last element of $\mathbf{x'_{i,j}}$. Now, \eqref{eqn:dis:thm2} can be written as \begin{align} &\mathbf{C}\mathbf{A}^{-(pk+q)}\mathbf{x}=\mathbf{C''_{1,1}}\mathbf{A'_{1,1}}^{-(pk+q)}\mathbf{x''_{1,1}} +\mathbf{C''_{1,2}}\mathbf{A'_{1,1}}^{-(pk+q)}\mathbf{x''_{1,2}}+ \cdots + \mathbf{C''_{\mu,\nu_\mu}}\mathbf{A'_{\mu,1}}^{-(pk+q)}\mathbf{x''_{\mu,\nu_\mu}}. \label{eqn:dis:thm3} \end{align} We can see that, for each $i$, all of $\mathbf{x''_{i,1}}, \cdots, \mathbf{x''_{i,\nu_i}}$ are multiplied by the same matrix $\mathbf{A'_{i,1}}$. Eventually, we will merge $\mathbf{x''_{i,1}}, \cdots, \mathbf{x''_{i,\nu_i}}$ by taking linear combinations.
(2) Reduction to the scalar observation: Now, we reduce the $\mathbf{C''_{i,j}}$ to row vectors by multiplying \eqref{eqn:dis:thm3} by $\mathbf{L_{q,r}}$. \begin{align} &\mathbf{L_{q,r}}\mathbf{C}\mathbf{A}^{-(pk+q)}\mathbf{x}= \mathbf{L_{q,r}}\mathbf{C''_{1,1}}\mathbf{A'_{1,1}}^{-(pk+q)}\mathbf{x''_{1,1}} +\mathbf{L_{q,r}}\mathbf{C''_{1,2}}\mathbf{A'_{1,1}}^{-(pk+q)}\mathbf{x''_{1,2}}+ \cdots + \mathbf{L_{q,r}}\mathbf{C''_{\mu,\nu_\mu}}\mathbf{A'_{\mu,1}}^{-(pk+q)}\mathbf{x''_{\mu,\nu_\mu}}. \label{eqn:dis:thm33} \end{align}
Here, the systems $(\mathbf{A'_{i,1}},\mathbf{L_{q,r}}\mathbf{C''_{i,1}}), \cdots , (\mathbf{A'_{i,1}},\mathbf{L_{q,r}}\mathbf{C''_{i,\nu_i}})$ have the same dimension, but some of them may not be observable. Therefore, we will make at least one of the systems observable by truncation. Since $\mathbf{A_{i,1}'}$ is a Jordan block matrix and $\mathbf{L_{q,r}}\mathbf{C''_{i,j}}$ is a row vector, $(\mathbf{A_{i,1}'}, \mathbf{L_{q,r}}\mathbf{C''_{i,j}})$ is observable if and only if the first element of $\mathbf{L_{q,r}}\mathbf{C''_{i,j}}$ is not zero. Let $m_i'$ be the smallest number such that at least one of the $m_i'$th elements of $\mathbf{L_{q,r}}\mathbf{C''_{i,1}}, \cdots, \mathbf{L_{q,r}}\mathbf{C''_{i,\nu_i}}$ is nonzero, and let $\mathbf{L_{q,r}}\mathbf{C''_{i,\nu_i^\star}}$ be a vector that achieves the minimum.
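The observability criterion invoked here (a single Jordan block paired with a row vector is observable if and only if the first entry of the row vector is nonzero) can be checked numerically via the rank of the observability matrix. The sketch below uses hypothetical values and is not part of the proof:

```python
import numpy as np

def jordan_block(lam, m):
    """m x m Jordan block with eigenvalue lam."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

def observable(A, c):
    """Rank test on the observability matrix [c; cA; ...; cA^{m-1}]."""
    m = A.shape[0]
    O = np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(m)])
    return np.linalg.matrix_rank(O) == m

A = jordan_block(2.0, 3)
assert observable(A, np.array([[1.0, 0.0, 0.0]]))      # first entry nonzero
assert observable(A, np.array([[0.5, -1.0, 2.0]]))     # first entry nonzero
assert not observable(A, np.array([[0.0, 1.0, 1.0]]))  # first entry zero
```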
Then, we will reduce the dimension of $(\mathbf{A'_{i,1}},\mathbf{L_{q,r}}\mathbf{C''_{i,\nu_i}})$ by truncating the first $(m_i'-1)$ coordinates. Define $\mathbf{C'''_{i,j}}$ as the matrix obtained by truncating the first $(m'_i-1)$ columns of $\mathbf{C''_{i,j}}$, $\mathbf{A''_{i,j}}$ as the matrix obtained by truncating the first $(m'_i-1)$ rows and columns of $\mathbf{A'_{i,j}}$, and $\mathbf{x'''_{i,j}}$ as the column vector obtained by truncating the first $(m'_i-1)$ elements of $\mathbf{x''_{i,j}}$.
In the claim, we assumed that $\mathbf{L_{q,r}}\mathbf{C_1}$ is not $\mathbf{0}$. Recall that the elements of $\mathbf{L_{q,r}}\mathbf{C_1}$ correspond to the first elements of $\mathbf{L_{q,r}} \mathbf{C_{1,1}}, \cdots, \mathbf{L_{q,r}} \mathbf{C_{1,\nu_1}}$, which are again equal to the first elements of $\mathbf{L_{q,r}} \mathbf{C_{1,1}'}, \cdots, \mathbf{L_{q,r}} \mathbf{C_{1,\nu_1}'}$. Since the first column of $\mathbf{C''_{i,j}}$ is the first column of $\mathbf{C'_{i,j}}$ times $\alpha_{i,j}^{-(m_{i,1}-1)}$, at least one of the systems $(\mathbf{A'_{1,1}},\mathbf{L_{q,r}}\mathbf{C''_{1,1}}),\cdots,(\mathbf{A'_{1,1}},\mathbf{L_{q,r}}\mathbf{C''_{1,\nu_1}})$ has to be observable.
Therefore, we can see $m_1'=1$ and \begin{align} \mathbf{C'''_{1,i}}=\mathbf{C''_{1,i}}, \mathbf{A''_{1,i}}=\mathbf{A'_{1,i}}, \mathbf{x'''_{1,i}}=\mathbf{x''_{1,i}}. \label{eqn:evidence1} \end{align}
Now, \eqref{eqn:dis:thm33} becomes \begin{align} &\mathbf{L_{q,r}}\mathbf{C}\mathbf{A^{-(pk+q)}}\mathbf{x}=\mathbf{L_{q,r}}\mathbf{C'''_{1,1}}\mathbf{A''_{1,1}}^{-(pk+q)}\mathbf{x'''_{1,1}} +\mathbf{L_{q,r}}\mathbf{C'''_{1,2}}\mathbf{A''_{1,1}}^{-(pk+q)}\mathbf{x'''_{1,2}}+\cdots +\mathbf{L_{q,r}}\mathbf{C'''_{\mu,\nu_\mu}}\mathbf{A''_{\mu,1}}^{-(pk+q)}\mathbf{x'''_{\mu,\nu_\mu}}. \nonumber \end{align} Let $c_{i,j,1}'''$ be the first element of $\mathbf{L_{q,r}}\mathbf{C'''_{i,j}}$. By Lemma~\ref{lem:dis:jordan2}, we can find upper triangular matrices $\mathbf{T_{i,j}}$ such that their diagonal elements are $\frac{c_{i,j,1}'''}{c_{i,\nu_i^\star,1}'''}$ and \begin{align} \mathbf{L_{q,r}}\mathbf{C}\mathbf{A^{-(pk+q)}}\mathbf{x}&=\mathbf{L_{q,r}}\mathbf{C'''_{1,\nu_1^\star}}\mathbf{A''_{1,1}}^{-(pk+q)}\left( \mathbf{T_{1,1}}\mathbf{x'''_{1,1}}+\mathbf{T_{1,2}}\mathbf{x'''_{1,2}}+\cdots+ \mathbf{T_{1,\nu_1}}\mathbf{x'''_{1,\nu_1}} \right)+ \cdots \nonumber\\ &+\mathbf{L_{q,r}}\mathbf{C'''_{\mu,\nu_\mu^\star}}\mathbf{A''_{\mu,1}}^{-(pk+q)}\left( \mathbf{T_{\mu,1}}\mathbf{x'''_{\mu,1}}+\mathbf{T_{\mu,2}}\mathbf{x'''_{\mu,2}}+\cdots+\mathbf{T_{\mu,\nu_\mu}}\mathbf{x'''_{\mu,\nu_\mu}} \right)\label{eqn:dis:thm4} \end{align} where $c_{i,\nu_i^\star,1}'''$ is guaranteed to be nonzero by the construction.
Define $\mathbf{x''''_i}$ as \begin{align} \left(\mathbf{T_{i,1}}\mathbf{x'''_{i,1}}+\mathbf{T_{i,2}}\mathbf{x'''_{i,2}}+\cdots+\mathbf{T_{i,\nu_i}}\mathbf{x'''_{i,\nu_i}}\right). \label{eqn:decoding:4} \end{align}
Here, $\mathbf{A''_{i,1}}^{-(pk+q)}$ is not the power of a Jordan block in the down-sampled time index $k$. However, since $\mathbf{A''_{i,1}}$ is a Jordan block, by Lemma~\ref{lem:dis:jordan3} the Jordan decomposition of $\mathbf{A''_{i,1}}^{p}$ is $\mathbf{U_i}\mathbf{\Lambda_i}\mathbf{U_i}^{-1}$ where $\mathbf{\Lambda_i}$ is a Jordan block whose eigenvalue is the $p$th power of the eigenvalue of $\mathbf{A''_{i,1}}$, and $\mathbf{U_i}$ is an upper triangular matrix whose diagonal entries are non-zero. Thus, \eqref{eqn:dis:thm4} can be written as \begin{align} &\mathbf{L_{q,r}}\mathbf{C}\mathbf{A^{-(pk+q)}}\mathbf{x}=\mathbf{L_{q,r}} \mathbf{C'''_{1,\nu_1^\star}} \mathbf{U_{1}} \mathbf{\Lambda_{1}}^{-k} \mathbf{U_{1}}^{-1} \mathbf{A''_{1,1}}^{-q} \mathbf{x''''_{1}} + \cdots +\mathbf{L_{q,r}} \mathbf{C'''_{\mu,\nu_\mu^\star}} \mathbf{U_{\mu}} \mathbf{\Lambda_{\mu}}^{-k} \mathbf{U_{\mu}}^{-1} \mathbf{A''_{\mu,1}}^{-q} \mathbf{x''''_{\mu}} \nonumber \\ &= \begin{bmatrix} \mathbf{L_{q,r}}\mathbf{C'''_{1,\nu_1^\star}}\mathbf{U_{1}} & \mathbf{L_{q,r}}\mathbf{C'''_{2,\nu_2^\star}}\mathbf{U_{2}} & \cdots & \mathbf{L_{q,r}}\mathbf{C'''_{\mu,\nu_\mu^\star}}\mathbf{U_{\mu}} \end{bmatrix} \begin{bmatrix} \mathbf{\Lambda_{1}} & 0 & \cdots & 0 \\ 0 & \mathbf{\Lambda_{2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{\Lambda_{\mu}} \end{bmatrix}^{-k}\nonumber \\ &\cdot\begin{bmatrix} \mathbf{U_{1}}^{-1} \mathbf{A''_{1,1}}^{-q} & 0 & \cdots & 0 \\ 0 & \mathbf{U_{2}}^{-1} \mathbf{A''_{2,1}}^{-q} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{U_{\mu}}^{-1} \mathbf{A''_{\mu,1}}^{-q} \\ \end{bmatrix} \begin{bmatrix} \mathbf{x_{1}''''} \\ \mathbf{x_{2}''''} \\ \vdots \\ \mathbf{x_{\mu}''''} \end{bmatrix} .\nonumber \end{align} Let us define $\mathbf{\bar{C}_{q,r}}$ as $ \begin{bmatrix} \mathbf{L_{q,r}}\mathbf{C'''_{1,\nu_1^\star}}\mathbf{U_{1}} & \mathbf{L_{q,r}}\mathbf{C'''_{2,\nu_2^\star}}\mathbf{U_{2}} & \cdots &
\mathbf{L_{q,r}}\mathbf{C'''_{\mu,\nu_\mu^\star}}\mathbf{U_{\mu}} \end{bmatrix} $, $\mathbf{\bar{A}_{q,r}}$ as $ \begin{bmatrix} \mathbf{\Lambda_{1}} & 0 & \cdots & 0 \\ 0 & \mathbf{\Lambda_{2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{\Lambda_{\mu}} \end{bmatrix} $, $\mathbf{\bar{U}_{q,r}}$ as \\$ \begin{bmatrix} \mathbf{U_{1}}^{-1} \mathbf{A''_{1,1}}^{-q} & 0 & \cdots & 0 \\ 0 & \mathbf{U_{2}}^{-1} \mathbf{A''_{2,1}}^{-q} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{U_{\mu}}^{-1} \mathbf{A''_{\mu,1}}^{-q} \\ \end{bmatrix}$, $\mathbf{\bar{x}_{q,r}}$ as $\begin{bmatrix} \mathbf{x_1''''} \\ \mathbf{x_2''''} \\ \vdots \\ \mathbf{x_\mu''''} \end{bmatrix}$ and $\bar{\bar{m}}_{q,r}$ as the dimension of $\mathbf{\bar{A}_{q,r}}$.
Here, we can see that $\mathbf{\bar{A}_{q,r}}$ has no eigenvalue cycles and satisfies the condition (i) of the claim. Furthermore, since $\mathbf{U_i}$ is an upper triangular matrix whose diagonal elements are non-zero, the first elements of $\mathbf{L_{q,r}}\mathbf{C'''_{i,\nu_i^\star}}\mathbf{U_{i}}$ are still non-zero. Thus, the system $(\mathbf{\Lambda_{i}}, \mathbf{L_{q,r}}\mathbf{C'''_{i,\nu_i^\star}}\mathbf{U_{i}})$ is observable and $(\mathbf{\bar{A}_{q,r}},\mathbf{\bar{C}_{q,r}})$ is also observable, which satisfies the condition (ii) of the claim. We also have \begin{align} &\mathbf{L_{q,r}}\mathbf{C}\mathbf{A}^{-(pk+q)}\mathbf{x}=\mathbf{\bar{C}_{q,r}}\mathbf{\bar{A}_{q,r}}^{-k}\mathbf{\bar{U}_{q,r}}\mathbf{\bar{x}_{q,r}} \end{align} which is the condition (v) of the claim.
Let $c_{1,j,1}$ be the first element of $\mathbf{L_{q,r}}\mathbf{C_{1,j}}$. Then, we have
\begin{align} (\mathbf{\bar{x}_{q,r}})_{m_{1,1}}&=(\mathbf{x''''_1})_{m_{1,1}} =\left(\frac{c_{1,1,1}'''}{c_{1,\nu_1^\star,1}'''} (\mathbf{x'''_{1,1}})_{m_{1,1}}+ \cdots+ \frac{c_{1,\nu_1,1}'''}{c_{1,\nu_1^\star,1}'''} (\mathbf{x'''_{1,\nu_1}})_{m_{1,1}} \right)\label{eqn:decoding:1} \\ &=\left(\frac{c_{1,1,1}'''}{c_{1,\nu_1^\star,1}'''}\alpha_{1,1}^{-q} (\mathbf{x'_{1,1}})_{m_{1,1}}+ \cdots+ \frac{c_{1,\nu_1,1}'''}{c_{1,\nu_1^\star,1}'''}\alpha_{1,\nu_1}^{-q} (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \right)\label{eqn:decoding:2} \\ &=\frac{1}{c_{1,\nu_1^\star,1}'''} \left( c_{1,1,1} \alpha_{1,1}^{-q-(m_{1,1}-1)} (\mathbf{x'_{1,1}})_{m_{1,1}}+ \cdots + c_{1,\nu_1,1} \alpha_{1,\nu_1}^{-q-(m_{1,1}-1)} (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \right)\label{eqn:decoding:3} \\ &=\frac{1}{c_{1,\nu_1^\star,1}'''} \left( \mathbf{L_{q,r}}\mathbf{C_1} diag\{ \alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-(q+(m_{1,1}-1))} \right) \begin{bmatrix} (\mathbf{x'_{1,1}})_{m_{1,1}}\\ \vdots \\ (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \end{bmatrix} \label{eqn:dis:thm5} \end{align} \eqref{eqn:decoding:1} follows from \eqref{eqn:decoding:4}. \eqref{eqn:decoding:2} follows from \eqref{eqn:evidence2}, \eqref{eqn:evidence1}. \eqref{eqn:decoding:3} follows from \eqref{eqn:evidence3}, \eqref{eqn:evidence1} and that the first column of $\mathbf{C'_{i,j}}$ is the same as the first column of $\mathbf{C_{i,j}}$ as we mentioned above. Furthermore, as we mentioned above, $(\mathbf{x'_{1,1}})_{m_{1,1}}=(\mathbf{x_{1,1}})_{m_{1,1}}$. Therefore, the condition (iv) of the claim is also satisfied, and this finishes the proof.
\end{proof}
$\bullet$ Estimating $(\mathbf{x})_{m_{1,1}}$: Now, we have systems without eigenvalue cycles and with scalar observations. Thus, by applying Lemma~\ref{lem:dis:geodet}, we will estimate the state $(\mathbf{x})_{m_{1,1}}$.
\begin{claim} We can find a polynomial $\bar{p}(k)$, $\bar{m} \in \mathbb{N}$ and a family of stopping times $\{ \bar{S}(\epsilon, k) : k \in \mathbb{Z}^+, \epsilon > 0\}$ such that for all $\epsilon > 0$, $k \in \mathbb{Z}^+$ there exist $k \leq \bar{k}_1 < \bar{k}_2 < \cdots <\bar{k}_{\bar{m}} \leq \bar{S}(\epsilon, k)$ and $\mathbf{\bar{M}}$ satisfying:\\ (i) $\beta[\bar{k}_i]=1$ for $1 \leq i \leq \bar{m}$\\ (ii) $\mathbf{\bar{M}} \begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}} \\ \end{bmatrix} \mathbf{x}=(\mathbf{x})_{m_{1,1}}$\\
(iii) $\left| \mathbf{\bar{M}} \right|_{max} \leq \frac{\bar{p}(\bar{S}(\epsilon,k))}{\epsilon}|\lambda_{1,1}|^{\bar{S}(\epsilon,k)}$\\ (iv) $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ \bar{S}(\epsilon,k) - k = s \} \leq p_e^{\frac{l_1}{p_1}}$ \label{claim:donknow00} \end{claim}
This claim says that there exists an estimator $\mathbf{\bar{M}}$ for $(\mathbf{x})_{m_{1,1}}$ which uses the observations at times $\bar{k}_1, \cdots, \bar{k}_{\bar{m}}$.
\begin{proof} For each $q \in \{0, \cdots, p-1 \}$, we have the down-sampled systems $(\mathbf{\bar{A}_{q,1}},\mathbf{\bar{C}_{q,1}}),\cdots,(\mathbf{\bar{A}_{q,R}},\mathbf{\bar{C}_{q,R}})$ such that all systems are observable, $\mathbf{\bar{A}_{q,i}}$ have no eigenvalue cycles, and $\mathbf{\bar{C}_{q,i}}$ are row vectors. By Lemma~\ref{lem:dis:geodet}, we can find a polynomial $p_q(k)$ and a family of random variables $\{\bar{S}_{q}(\epsilon,k): k \in \mathbb{Z}^+, \epsilon > 0 \}$ such that for all $\epsilon > 0$, $k \in \mathbb{Z}^+$ and $1 \leq i \leq R$ there exist $\bar{m}_{q,i}$ and $ \lceil \frac{k-q}{p} \rceil \leq k_{i,1} < k_{i,2} < \cdots < k_{i,\bar{m}_{q,i}} \leq \bar{S}_q(\epsilon,k)$ and $\mathbf{M_i}$ satisfying:\\ (i) $\beta[pk_{i,j}+q]=1$ for $1 \leq j \leq \bar{m}_{q,i}$\\ (ii) $\mathbf{M_i} \begin{bmatrix} \mathbf{\bar{C}_{q,i}} \mathbf{\bar{A}_{q,i}}^{-k_{i,1}} \\ \mathbf{\bar{C}_{q,i}} \mathbf{\bar{A}_{q,i}}^{-k_{i,2}} \\ \vdots \\ \mathbf{\bar{C}_{q,i}} \mathbf{\bar{A}_{q,i}}^{-k_{i,\bar{m}_{q,i}}} \end{bmatrix}=\mathbf{I_{\bar{m}_{q,i}\times \bar{m}_{q,i}}} $\\ (iii) $
\left| \mathbf{M_i} \right|_{max} \leq \frac{ p_q\left( \bar{S}_q(\epsilon,k) \right) }{\epsilon} (|\lambda_{1,1}|^p)^{\bar{S}_q(\epsilon,k)} $\\ (iv) $ \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ \bar{S}_q(\epsilon,k)- \lceil \frac{k-q}{p} \rceil = s \} = p_e. $
By the property (iv) of $\bar{S}_q(\epsilon,k)$, we get \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ p \bar{S}_q(\epsilon,k) - p \lceil \frac{k-q}{p} \rceil = s \} = p_e^{\frac{1}{p}} \nonumber \end{align} which implies \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ (p \bar{S}_q(\epsilon,k)+q)-k=s \} = p_e^{\frac{1}{p}}.\nonumber \end{align} Moreover, $\bar{S}_q(\epsilon,k)$ depends only on $\beta[q],\beta[p+q],\beta[2p+q],\cdots$. Thus, $\bar{S}_0(\epsilon,k),\cdots,\bar{S}_{p-1}(\epsilon,k)$ are independent.
Now, we can estimate the state of each sub-sampled system. We will leverage these estimates to estimate the state $(\mathbf{x})_{m_{1,1}}$.
First, notice that the down-sampling rate $p$ is a multiple of $p_1$ and may be much larger. Therefore, we make the definition corresponding to \eqref{eqn:dis:geofinal:0} for the longer period $p$. Let $T'_1,\cdots,T'_{R'}$ be all the sets $T'$ such that $T' :=\{ t'_1, \cdots, t'_{|T'|} \} \subseteq \{0,1,\cdots,p-1 \}$ and \begin{align} \begin{bmatrix} \mathbf{C_1} \mathbf{A_1}^{-t'_1} \\ \mathbf{C_1} \mathbf{A_1}^{-t'_2} \\ \vdots \\
\mathbf{C_1} \mathbf{A_1}^{-t'_{|T'|}} \end{bmatrix} \mbox{ is full rank.} \end{align}
Here, we can ask how many observations have to be erased to make the observability Gramian of $(\mathbf{A_1}, \mathbf{C_1})$ rank-deficient during the period $p$. The answer is $l_1 \prod_{ 2 \leq j \leq \mu}p_j$, where the definition of $l_1$ is given in \eqref{eqn:def:lprime}. The reason is that we have to erase at least $l_1$ observations in each period $p_1$ to make the observability Gramian rank-deficient. Formally, this can be written as follows: \begin{align}
\min\{|T|: T=\{t_1,\cdots,t_{|T|} \} \subseteq \{ 0,1,\cdots,p-1 \}, T'_i \not\subseteq T \mbox{ for all }1 \leq i \leq R' \}=l_1 \prod_{2 \leq j \leq \mu} p_j. \nonumber \end{align}
Define the stopping time $\bar{S}(\epsilon,k)$ as the minimum time until we have enough observations to make the observability Gramian of $(\mathbf{A_1}, \mathbf{C_1})$ full rank. Formally, \begin{align} \bar{S}(\epsilon,k)-k&:= \inf \{
s:\exists i \in \{1, \cdots, R' \}\mbox{ s.t. } T'_i=\{t'_1,t'_2,\cdots t'_{|T'_i|} \} \mbox{ and }\nonumber\\
&(p \bar{S}_{t'_1}(\epsilon,k) + t'_1)-k \leq s,
(p \bar{S}_{t'_2}(\epsilon,k) + t'_2)-k \leq s, \cdots, (p \bar{S}_{t'_{|T'_i|}}(\epsilon,k) + t'_{|T'_i|})-k \leq s \}. \nonumber \end{align} Then, by Lemma~\ref{lem:dis:geo0} we have \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ \bar{S}(\epsilon,k) - k = s \} \leq p_e^{\frac{l_1 \prod_{j \neq 1 } p_j }{p}} = p_e^{\frac{l_1}{p_1}}.
\end{align}
Without loss of generality, let $T_1'$ be the set that satisfies the definition of $\bar{S}(\epsilon,k)$. Then, by the definitions of $T_1'$ and the $T_i$, there must exist a $T_i$ such that $T_1'$ contains $T_i$ modulo $p_1$; without loss of generality, let $T_1$ be such a set. Then, we can find $\{t'_1, \cdots, t'_{|T_1|}\}$ which is included in $T_1'$ and includes $T_1$ modulo $p_1$. Formally, $\{t'_1, \cdots, t'_{|T_1|} \} \subseteq T_1'$ and $\{t'_1 \bmod p_1, \cdots, t'_{|T_1|} \bmod p_1 \}=T_1$.
Then, from the definition of $\bar{S}(\epsilon,k)$ and $\bar{S}_q(\epsilon,k)$, for each $q\in \{ t'_1,\cdots,t'_{|T_1|} \}$ we can find $\lceil \frac{k-q}{p} \rceil \leq k_{q,1} < k_{q,2} < \cdots < k_{q,\bar{m}_{q,1}} \leq \bar{S}_q(\epsilon,k)$ and $\mathbf{M_q}$ satisfying the following conditions:\\ (i') $\beta[ p k_{q,j} + q ]=1$ for $1 \leq j \leq \bar{m}_{q,1}$\\ (ii') $\mathbf{M_q} \begin{bmatrix} \mathbf{\bar{C}_{q,1}} \mathbf{\bar{A}_{q,1}}^{-k_{q,1}} \\ \mathbf{\bar{C}_{q,1}} \mathbf{\bar{A}_{q,1}}^{-k_{q,2}} \\ \vdots \\ \mathbf{\bar{C}_{q,1}} \mathbf{\bar{A}_{q,1}}^{-k_{q,\bar{m}_{q,1}}} \end{bmatrix}= \mathbf{I_{\bar{m}_{q,1} \times \bar{m}_{q,1}}} $ \\
(iii') $\left| \mathbf{M_q} \right|_{max} \leq \frac{p_q(\bar{S}_q(\epsilon,k))}{\epsilon}(|\lambda_{1,1}|^p)^{\bar{S}_q(\epsilon,k)}$.\\ (iv') $ p \bar{S}_q(\epsilon,k) + q \leq \bar{S}(\epsilon, k)$
Then, we have \begin{align}
&diag\{ \mathbf{\bar{U}_{t'_1,1}}^{-1} \mathbf{M_{t'_1}}, \mathbf{\bar{U}_{t'_2,1}}^{-1} \mathbf{M_{t'_2}}, \cdots, \mathbf{\bar{U}_{t'_{|T_1|},1}}^{-1} \mathbf{M_{t'_{|T_1|} }} \}
diag\{ \mathbf{L_{t'_1,1}},\mathbf{L_{t'_1,1}}, \cdots, \mathbf{L_{t'_{|T_1|},1}} \} \nonumber \\ &\cdot \begin{bmatrix} \mathbf{C}\mathbf{A}^{-(pk_{t'_1,1}+t'_1)} \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_1,2}+t'_1)} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_1,\bar{m}_{t'_1,1}}+t'_1)} \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_2,1}+t'_2)} \\ \vdots \\
\mathbf{C}\mathbf{A}^{-(pk_{t'_{|T_1|},\bar{m}_{t'_{|T_1|},1}}+t'_{|T_1|})} \end{bmatrix} \mathbf{x} \nonumber \\ &=
diag\{ \mathbf{\bar{U}_{t'_1,1}}^{-1} \mathbf{M_{t'_1}}, \mathbf{\bar{U}_{t'_2,1}}^{-1} \mathbf{M_{t'_2}}, \cdots, \mathbf{\bar{U}_{t'_{|T_1|},1}}^{-1} \mathbf{M_{t'_{|T_1|}}} \} \begin{bmatrix} \mathbf{L_{t'_1,1}} \mathbf{C}\mathbf{A}^{-(pk_{t'_1,1}+t'_1)} \mathbf{x} \\ \mathbf{L_{t'_1,1}} \mathbf{C}\mathbf{A}^{-(pk_{t'_1,2}+t'_1)} \mathbf{x} \\ \vdots \\ \mathbf{L_{t'_1,1}} \mathbf{C}\mathbf{A}^{-(pk_{t'_1,\bar{m}_{t'_1,1}}+t'_1)} \mathbf{x} \\ \mathbf{L_{t'_2,1}} \mathbf{C}\mathbf{A}^{-(pk_{t'_2,1}+t'_2)} \mathbf{x} \\ \vdots \\
\mathbf{L_{t'_{|T_1|},1}} \mathbf{C}\mathbf{A}^{-(pk_{t'_{|T_1|},\bar{m}_{t'_{|T_1|},1}}+t'_{|T_1|})} \mathbf{x} \end{bmatrix} \nonumber \\ &=
diag\{ \mathbf{\bar{U}_{t'_1,1}}^{-1} \mathbf{M_{t'_1}}, \mathbf{\bar{U}_{t'_2,1}}^{-1} \mathbf{M_{t'_2}}, \cdots, \mathbf{\bar{U}_{t'_{|T_1|},1}}^{-1} \mathbf{M_{t'_{|T_1|}}} \} \begin{bmatrix} \mathbf{\bar{C}_{t'_1,1}} \mathbf{\bar{A}_{t'_1,1}}^{-k_{t'_1,1}} \mathbf{\bar{U}_{t'_1,1}}\mathbf{\bar{x}_{t'_1,1}} \\ \mathbf{\bar{C}_{t'_1,1}} \mathbf{\bar{A}_{t'_1,1}}^{-k_{t'_1,2}} \mathbf{\bar{U}_{t'_1,1}}\mathbf{\bar{x}_{t'_1,1}} \\ \vdots \\ \mathbf{\bar{C}_{t'_1,1}}\mathbf{\bar{A}_{t'_1,1}}^{-k_{t'_1,\bar{m}_{t'_1,1}}}\mathbf{\bar{U}_{t'_1,1}}\mathbf{\bar{x}_{t'_1,1}} \\ \mathbf{\bar{C}_{t'_2,1}}\mathbf{\bar{A}_{t'_2,1}}^{-k_{t'_2,1}} \mathbf{\bar{U}_{t'_2,1}}\mathbf{\bar{x}_{t'_2,1}} \\ \vdots \\
\mathbf{\bar{C}_{t'_{|T_1|},1}}\mathbf{\bar{A}_{t'_{|T_1|},1}}^{-k_{t'_{|T_1|},\bar{m}_{t'_{|T_1|},1}}}\mathbf{\bar{U}_{t'_{|T_1|},1}}\mathbf{\bar{x}_{t'_{|T_1|},1}} \end{bmatrix} \label{eqn:dis:thm:21} \\ &= \begin{bmatrix} \mathbf{\bar{U}_{t'_1,1}}^{-1} \mathbf{M_{t'_1}} \begin{bmatrix} \mathbf{\bar{C}_{t'_1,1}}\mathbf{\bar{A}_{t'_1,1}}^{-k_{t'_1,1}} \\ \vdots\\ \mathbf{\bar{C}_{t'_1,1}}\mathbf{\bar{A}_{t'_1,1}}^{-k_{t'_1,\bar{m}_{t'_1,1}}} \\ \end{bmatrix} \mathbf{\bar{U}_{t'_1,1}} \mathbf{\bar{x}_{t'_1,1}}\\ \vdots \\
\mathbf{\bar{U}_{t'_{|T_1|},1}}^{-1} \mathbf{M_{t'_{|T_1|}}} \begin{bmatrix}
\mathbf{\bar{C}_{t'_{|T_1|},1}}\mathbf{\bar{A}_{t'_{|T_1|},1}}^{-k_{t'_{|T_1|},1}} \\ \vdots\\
\mathbf{\bar{C}_{t'_{|T_1|},1}}\mathbf{\bar{A}_{t'_{|T_1|},1}}^{-k_{t'_{|T_1|},\bar{m}_{t'_{|T_1|},1}}}
\end{bmatrix} \mathbf{\bar{U}_{t'_{|T_1|},1}} \mathbf{\bar{x}_{t'_{|T_1|},1}} \end{bmatrix} \label{eqn:dis:thm:22} \\ &=\begin{bmatrix} \mathbf{\bar{x}_{t'_1,1}} \\ \vdots\\
\mathbf{\bar{x}_{t'_{|T_1|},1}}\\ \end{bmatrix}.\label{eqn:dis:thm9} \end{align} Here, \eqref{eqn:dis:thm:21} comes from the condition (v) of Claim~\ref{claim:donknow2}. \eqref{eqn:dis:thm9} comes from the definition of $\mathbf{M_q}$.
Now, we will estimate $(\mathbf{x})_{m_{1,1}}$ based on $\mathbf{\bar{x}_{t'_1,1}}, \cdots, \mathbf{\bar{x}_{t'_{|T_1|},1}}$. Let $\mathbf{e_{m_{1,1}}^{\bar{\bar{m}}_{q,r}}}$ be a $1 \times \bar{\bar{m}}_{q,r}$ row vector whose elements are all zeros except the $m_{1,1}$th element, which is $1$. Then, we have the following equation: \begin{align} &\begin{bmatrix}
\frac{1}{g_{t_1',1}} \mathbf{e_{m_{1,1}}^{\bar{\bar{m}}_{t_1',1}}} & \cdots & \frac{1}{g_{t_{|T_1|}',1}} \mathbf{e_{m_{1,1}}^{\bar{\bar{m}}_{t'_{|T_1|},1}}} \end{bmatrix} \begin{bmatrix} \mathbf{\bar{x}_{t'_1,1}} \\ \vdots\\
\mathbf{\bar{x}_{t'_{|T_1|},1}}\\ \end{bmatrix} \nonumber \\
&=\frac{1}{g_{t_1',1}} (\mathbf{\bar{x}_{t'_1,1}})_{m_{1,1}}+ \cdots + \frac{1}{g_{t_{|T_1|}',1}} (\mathbf{\bar{x}_{t'_{|T_1|},1}})_{m_{1,1}} \nonumber \\
&= \left( \mathbf{L_{t'_1,1}}\mathbf{C_1} diag\{ \alpha_{1,1},\cdots, \alpha_{1,\nu_1} \}^{-(t'_1+(m_{1,1}-1))} \right) \begin{bmatrix} (\mathbf{x_{1,1}})_{m_{1,1}}\\ \vdots \\ (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \end{bmatrix} + \cdots \nonumber \\ &+ \left(
\mathbf{L_{t'_{|T_1|},1}}\mathbf{C_1} diag\{ \alpha_{1,1},\cdots, \alpha_{1,\nu_1}
\}^{-(t'_{|T_1|}+(m_{1,1}-1))} \right) \begin{bmatrix} (\mathbf{x_{1,1}})_{m_{1,1}}\\ \vdots \\ (\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \end{bmatrix} \label{eqn:dis:thm:24} \\ &= \begin{bmatrix}
\mathbf{L_{t'_1,1}} & \cdots & \mathbf{L_{t'_{|T_1|},1}} \end{bmatrix} \begin{bmatrix} \mathbf{C_1}diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1}\}^{-t'_1} \\ \vdots \\
\mathbf{C_1}diag\{\alpha_{1,1},\cdots, \alpha_{1,\nu_1}\}^{-t'_{|T_1|}} \end{bmatrix} \begin{bmatrix} \alpha_{1,1}^{-m_{1,1}+1}(\mathbf{x_{1,1}})_{m_{1,1}}\\ \vdots \\ \alpha_{1,\nu_1}^{-m_{1,1}+1}(\mathbf{x'_{1,\nu_1}})_{m_{1,1}} \end{bmatrix} \nonumber \\ &= \alpha_{1,1}^{-m_{1,1}+1}(\mathbf{x_{1,1}})_{m_{1,1}}= \alpha_{1,1}^{-m_{1,1}+1}(\mathbf{x})_{m_{1,1}}. \label{eqn:dis:thm7} \end{align} Here, \eqref{eqn:dis:thm:24} follows from the condition (iv) of Claim~\ref{claim:donknow2}.
\eqref{eqn:dis:thm7} follows from \eqref{eqn:dis:thm6} and $\{ t'_1 (mod~p_1), \cdots, t'_{|T_1|} (mod~ p_1) \}=T_1$.
Now, we merge the results from \eqref{eqn:dis:thm9} and \eqref{eqn:dis:thm7} to make an estimator for $(\mathbf{x})_{m_{1,1}}$. Define \begin{align} \mathbf{\bar{M}}:=& \alpha_{1,1}^{m_{1,1}-1} \begin{bmatrix}
\frac{1}{g_{t_1',1}} \mathbf{e_{m_{1,1}}^{\bar{\bar{m}}_{t'_1,1}}} & \cdots & \frac{1}{g_{t_{|T_1|}',1}} \mathbf{e_{m_{1,1}}^{\bar{\bar{m}}_{t'_{|T_1|},1}}} \end{bmatrix} \nonumber \\
&\cdot diag\{ \mathbf{\bar{U}_{t'_1,1}}^{-1} \mathbf{M_{t'_1}}, \mathbf{\bar{U}_{t'_2,1}}^{-1} \mathbf{M_{t'_2}}, \cdots, \mathbf{\bar{U}_{t'_{|T_1|},1}}^{-1} \mathbf{M_{t'_{|T_1|} }} \}
diag\{ \mathbf{L_{t'_1,1}},\mathbf{L_{t'_1,1}}, \cdots, \mathbf{L_{t'_{|T_1|},1}} \} \nonumber \end{align} and \begin{align} \begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}} \\ \end{bmatrix} := \begin{bmatrix} \mathbf{C}\mathbf{A}^{-(pk_{t'_1,1}+t'_1)} \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_1,2}+t'_1)} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_1,\bar{m}_{t'_1,1}}+t'_1)} \\ \mathbf{C}\mathbf{A}^{-(pk_{t'_2,1}+t'_2)} \\ \vdots \\
\mathbf{C}\mathbf{A}^{-(pk_{t'_{|T_1|},\bar{m}_{t'_{|T_1|},1}}+t'_{|T_1|})} \end{bmatrix}. \nonumber \end{align} Then, by (iii') and (iv') we can find a positive polynomial $\bar{p}(k)$ such that \begin{align}
\left| \mathbf{\bar{M}} \right|_{max} \lesssim \max_{1 \leq i \leq |T_1|} \{ |\mathbf{M_{t'_i}}|_{max} \} \leq \frac{\bar{p}(\bar{S}(\epsilon,k))}{\epsilon}|\lambda_{1,1}|^{\bar{S}(\epsilon,k)}. \label{eqn:dis:thm11} \end{align} Moreover, by \eqref{eqn:dis:thm9} and \eqref{eqn:dis:thm7} we have \begin{align} \mathbf{\bar{M}} \begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}} \\ \end{bmatrix} \mathbf{x}=(\mathbf{x})_{m_{1,1}}. \label{eqn:dis:thm10} \end{align} This finishes the proof of the claim. \end{proof}
$\bullet$ Subtracting $(\mathbf{x})_{m_{1,1}}$ from the observations: We now have an estimate of $(\mathbf{x})_{m_{1,1}}$, and we will remove its contribution from the system.
$\mathbf{\widetilde{A}}$, $\mathbf{\widetilde{C}}$ and $\mathbf{\widetilde{x}}$ denote the system matrices after the removal. Formally, $\mathbf{\widetilde{A}}$ is obtained by removing the $m_{1,1}$th row and column from $\mathbf{A}$, $\mathbf{\widetilde{C}}$ is obtained by removing the $m_{1,1}$th column from $\mathbf{C}$, and $\mathbf{\widetilde{x}}$ is obtained by removing the $m_{1,1}$th component from $\mathbf{x}$.
Denote the $m_{1,1}$th column of $\mathbf{C}{\mathbf{A}}^{-k}$ by $\mathbf{R}(k)$. Then, we have the following relation between the original system $(\mathbf{A},\mathbf{C})$ and the new system $(\mathbf{\widetilde{A}}$, $\mathbf{\widetilde{C}})$: \begin{align} \mathbf{C}\mathbf{A}^{-k}\mathbf{x}-\mathbf{R}(k)(\mathbf{x})_{m_{1,1}}=\mathbf{\widetilde{C}}\mathbf{\widetilde{A}}^{-k}\mathbf{\widetilde{x}} \label{eqn:dis:thm8} \end{align}
which follows easily from the block diagonal structure of $\mathbf{A}$. From the definition of $\mathbf{R}(k)$, we can further see that there exists a polynomial $\widetilde{p}(k)$ such that $|\mathbf{R}(k)|_{max} \leq \widetilde{p}(k) |\lambda_{1,1}|^{-k}$. Thus, when $|\lambda_{1,1}| > 1$ we can find a threshold $k_{th} \geq 0$ such that for all $k \geq k_{th}$, $\widetilde{p}(k) |\lambda_{1,1}|^{-k}$ is a decreasing function of $k$. When $|\lambda_{1,1}|=1$, we simply put $k_{th}=0$.
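As a quick numerical sanity check of the relation \eqref{eqn:dis:thm8}, the following sketch uses a diagonal $\mathbf{A}$ (the dimensions, eigenvalues, and removed index are arbitrary illustrative choices, not taken from the proof) and verifies that subtracting $\mathbf{R}(k)(\mathbf{x})_{m_{1,1}}$ from the observations leaves exactly the observations of the reduced system:

```python
import numpy as np

# Sanity check of the subtraction step for a diagonal A (illustrative sketch;
# the sizes and the removed index are hypothetical, not from the proof).
rng = np.random.default_rng(0)
m, l, idx = 4, 2, 1                      # state dim, output dim, removed index
A = np.diag([2.0, 1.5, 1.25, 3.0])       # diagonal, so the states decouple
C = rng.standard_normal((l, m))
x = rng.standard_normal(m)

A_t = np.delete(np.delete(A, idx, axis=0), idx, axis=1)  # remove row and column
C_t = np.delete(C, idx, axis=1)                          # remove column
x_t = np.delete(x, idx)                                  # remove component

for k in range(1, 6):
    CAk = C @ np.linalg.matrix_power(np.linalg.inv(A), k)
    R_k = CAk[:, idx]                    # R(k): the removed column of C A^{-k}
    lhs = CAk @ x - R_k * x[idx]         # subtract the decoded component
    rhs = C_t @ np.linalg.matrix_power(np.linalg.inv(A_t), k) @ x_t
    assert np.allclose(lhs, rhs)
```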
$\bullet$ Decoding the remaining elements of $\mathbf{x}$: We decoded and subtracted the state $(\mathbf{x})_{m_{1,1}}$ from the system. After the subtraction, the remaining system matrices $\mathbf{\widetilde{A}} \in \mathbb{C}^{(m-1) \times (m-1)}$ and $\mathbf{\widetilde{C}} \in \mathbb{C}^{l \times (m-1)}$ are one dimension smaller. Therefore, we can apply the induction hypothesis to estimate $\mathbf{\widetilde{x}}$.
We can also write $\mathbf{\widetilde{A}}$ and $\mathbf{\widetilde{C}}$ in the same way that we write $\mathbf{A}$ and $\mathbf{C}$ as \eqref{eqn:ac:jordan}, \eqref{eqn:ac2:jordan} and \eqref{eqn:def:lprime}, and define the corresponding parameters shown in \eqref{eqn:ac:jordan}, \eqref{eqn:ac2:jordan} and \eqref{eqn:def:lprime}. To distinguish the parameters for $\mathbf{\widetilde{A}}$ and $\mathbf{\widetilde{C}}$ from the parameters for $\mathbf{A}$ and $\mathbf{C}$, we use tildes. For example, the dimension of $\mathbf{A}$ was $m \times m$, and we define the dimension of $\mathbf{\widetilde{A}}$ as $\widetilde{m} \times \widetilde{m}$. Likewise, the parameters $\widetilde{\mu}$, $\widetilde{\nu}_i$, $\widetilde{\lambda}_{i,j}$, $\widetilde{m}_{i,j}$, $\widetilde{p}_i$, $\widetilde{l}_i$ are defined for the system matrices $\mathbf{\widetilde{A}}$ and $\mathbf{\widetilde{C}}$ in the same way as \eqref{eqn:ac:jordan}, \eqref{eqn:ac2:jordan} and \eqref{eqn:def:lprime}.
By the induction hypothesis, for $1 \leq i \leq \widetilde{\mu}$ we can find $\widetilde{m}_1',\cdots,\widetilde{m}_{\widetilde{\mu}}' \in \mathbb{N}$, positive polynomials $\widetilde{p}_1(k),\cdots,\widetilde{p}_{\widetilde{\mu}}(k)$ and families of stopping times $\{ \widetilde{S}_1(\epsilon,k): k \in \mathbb{Z}^+ , 0 < \epsilon < 1 \},\cdots,\{ \widetilde{S}_{\widetilde{\mu}}(\epsilon,k): k \in \mathbb{Z}^+, 0 < \epsilon < 1 \}$ such that for all $0 < \epsilon < 1$ there exist $\max \{ \bar{S}(\epsilon,k),k_{th}\} \leq \widetilde{k}_1 < \cdots < \widetilde{k}_{\widetilde{m}_1'} \leq \widetilde{S}_1(\epsilon,k) < \widetilde{k}_{\widetilde{m}_1'+1} < \cdots < \widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i'} \leq \widetilde{S}_{\widetilde{\mu}}(\epsilon,k)$ and a $\widetilde{m} \times (\sum_{1 \leq i \leq \widetilde{\mu} } \widetilde{m}_i')l$ matrix $\mathbf{\widetilde{M}}$ satisfying the following conditions:\\ (i'') $\beta[\widetilde{k}_i]=1$ for $1 \leq i \leq \sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i$ \\ (ii'') $\mathbf{\widetilde{M}} \begin{bmatrix} \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_1} \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_2} \\ \vdots \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i' }} \end{bmatrix}=\mathbf{I} $ \\ (iii'') $
|\mathbf{\widetilde{M}}|_{max} \leq \max_{1 \leq i \leq \widetilde{\mu}} \left\{
\frac{\widetilde{p}_i( \widetilde{S}_i(\epsilon,k) )}{\epsilon} |\widetilde{\lambda}_{i,1}|^{\widetilde{S}_i(\epsilon,k)} \right\} $\\ (iv'') $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup
\frac{1}{s} \log \mathbb{P}\{ \widetilde{S}_i(\epsilon,k) - \max\{\bar{S}(\epsilon,k),k_{th} \} = s | \mathcal{F}_{\bar{S}(\epsilon,k)} \} = \max_{1 \leq j \leq i} \left\{ p_e^{\frac{\widetilde{l}_j}{\widetilde{p}_j}} \right\}
$ for $1\leq i \leq \widetilde{\mu}$\\ (v'') $\lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{
\widetilde{S}_a(\epsilon,k)-\widetilde{S}_b(\epsilon,k)=s | \mathcal{F}_{\widetilde{S}_b(\epsilon,k)} \} \leq \max_{b < i \leq a} \left\{ p_e^{\frac{\widetilde{l}_i}{\widetilde{p}_i}} \right\} $ for $1 \leq b < a \leq \widetilde{\mu}$. Compared to Lemma~\ref{lem:dis:achv}, we can notice that the condition (iv'') is slightly different from the condition (iv) of Lemma~\ref{lem:dis:achv}: the $\sup$ over $k$ of (iv) in Lemma~\ref{lem:dis:achv} is replaced by the $\esssup$. However, if we recall that $\max\{\bar{S}(\epsilon,k),k_{th} \}$ is a constant conditioned on\footnote{More proper notations for $\widetilde{S}_1(\epsilon,k)$, $\cdots$, $\widetilde{S}_{\mu}(\epsilon,k)$ are $\widetilde{S}_1(\epsilon,\max\{ \bar{S}(\epsilon,k),k_{th}\})$, $\cdots$, $\widetilde{S}_{\mu}(\epsilon,\max\{ \bar{S}(\epsilon,k),k_{th}\})$ since $\max\{ \bar{S}(\epsilon,k),k_{th}\}$ plays the role of $k$ of Lemma~\ref{lem:dis:achv} after conditioning. However, we use the notations of the paper for simplicity.} $\mathcal{F}_{\bar{S}(\epsilon,k)}$, we see that we have simply replaced $k$ of Lemma~\ref{lem:dis:achv} with $\max\{\bar{S}(\epsilon,k),k_{th} \}$.
Here, we have \begin{align} \mathbf{\widetilde{x}}&=\mathbf{\widetilde{M}} \begin{bmatrix} \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_1} \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_2} \\ \vdots \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i' }} \end{bmatrix}\mathbf{\widetilde{x}} \nonumber \\ &= \mathbf{\widetilde{M}} \begin{bmatrix} \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_1}\mathbf{\widetilde{x}} \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_2}\mathbf{\widetilde{x}} \\ \vdots \\ \mathbf{\widetilde{C}} \mathbf{\widetilde{A}}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i' }}\mathbf{\widetilde{x}} \end{bmatrix} \nonumber \\ &= \mathbf{\widetilde{M}} \begin{bmatrix} \mathbf{C} \mathbf{A}^{-\widetilde{k}_1} \mathbf{x} - \mathbf{R}(\widetilde{k}_1) (\mathbf{x})_{m_{1,1}} \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_2} \mathbf{x} - \mathbf{R}(\widetilde{k}_2) (\mathbf{x})_{m_{1,1}} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i'}} \mathbf{x} - \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) (\mathbf{x})_{m_{1,1}} \end{bmatrix} (\because \eqref{eqn:dis:thm8})\nonumber \\ &= \mathbf{\widetilde{M}} \left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-\widetilde{k}_1} \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_2} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i'}} \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i'}) \end{bmatrix} (\mathbf{x})_{m_{1,1}} \right) \nonumber \\ &= \mathbf{\widetilde{M}} \left( \begin{bmatrix} \mathbf{C} \mathbf{A}^{-\widetilde{k}_1} \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_2} \\ \vdots \\ \mathbf{C} 
\mathbf{A}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }} \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix} \mathbf{{\bar{M}}} \begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \mathbf{C}\mathbf{A}^{-\bar{k}_2} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}} \end{bmatrix} \mathbf{x} \right) (\because \mbox{the condition (ii) of Claim~\ref{claim:donknow00}}) \nonumber \\ &= \mathbf{\widetilde{M}} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix} \mathbf{{\bar{M}}} & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}}\\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_1} \\ \vdots \\ \mathbf{C} \mathbf{A}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i'} } \end{bmatrix} \mathbf{x}. \label{eqn:successive:102} \end{align}
When $|\lambda_{1,1}|>1$, we have \begin{align} &
\left| \mathbf{\widetilde{M}} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix} \mathbf{{\bar{M}}} & \mathbf{I} \end{bmatrix}
\right|_{max} \nonumber \\ &\lesssim
| \mathbf{\widetilde{M}} |_{max} \cdot \max
\left\{ \left| \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix}
\right|_{max} \left| \mathbf{\bar{M}} \right|_{max} , 1 \right\} \nonumber \\ & \lesssim \max_{1 \leq i \leq \widetilde{\mu}} \left\{
\frac{\widetilde{p}_i( \widetilde{S}_i(\epsilon,k) )}{\epsilon} |\widetilde{\lambda}_{i,1}|^{\widetilde{S}_i(\epsilon,k)}
\right\} \cdot \max \left\{ \widetilde{p}(\widetilde{k}_1)|\lambda_{1,1}|^{-\widetilde{k}_1} \frac{\bar{p}\left(\bar{S}(\epsilon,k)\right)}{\epsilon} |\lambda_{1,1}|^{\bar{S}(\epsilon,k)} , 1 \right\} \label{eqn:successive:101} \end{align}
where the last inequality follows from (iii''), $|\mathbf{R}(k)|_{max} \leq \widetilde{p}(k) |\lambda_{1,1}|^{-k}$, $k_{th} \leq \widetilde{k}_i$, and the condition (iii) of Claim~\ref{claim:donknow00}.
Moreover, since $\bar{S}(\epsilon,k) \leq \widetilde{k}_1 \leq \widetilde{S}_i(\epsilon,k)$, there exist positive polynomials $p'_i(k)$ such that \begin{align} &\eqref{eqn:successive:101}\lesssim \max_{1 \leq i \leq \widetilde{\mu}} \left\{ \frac{ p'_i(\widetilde{S}_i(\epsilon,k)) }{\epsilon^2}
|\widetilde{\lambda}_{i,1}|^{\widetilde{S}_i(\epsilon,k)} \right\}\label{eqn:successive:200} \end{align}
When $|\lambda_{1,1}|=1$, $|\widetilde{\lambda}_{1,1}|$ is also $1$. Thus, we have \begin{align} &
\left| \mathbf{\widetilde{M}} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix} \mathbf{{\bar{M}}} & \mathbf{I} \end{bmatrix}
\right|_{max} \nonumber \\ &\lesssim
| \mathbf{\widetilde{M}} |_{max} \cdot \max
\left\{ \left| \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix}
\right|_{max} \left| \mathbf{\bar{M}} \right|_{max} , 1 \right\} \nonumber \\ & \lesssim \max_{1 \leq i \leq \widetilde{\mu}} \left\{ \frac{\widetilde{p}_i(\widetilde{S}_i(\epsilon,k))}{\epsilon} \right\} \cdot \max\left\{ \widetilde{p}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \frac{\bar{p}\left(\bar{S}(\epsilon,k)\right)}{\epsilon} , 1\right\} \nonumber \\ & \lesssim \frac{p'_{\widetilde{\mu}}(\widetilde{S}_{\widetilde{\mu}}(\epsilon,k))}{\epsilon^2} \label{eqn:successive:201} \end{align} for some positive polynomial $p'_{\widetilde{\mu}}(k)$.
Since we can reconstruct $\mathbf{x}$ from $\mathbf{\widetilde{x}}$ and $(\mathbf{x})_{m_{1,1}}$, we can say there exists $\mathbf{M}$ such that \begin{align} \mathbf{M}\begin{bmatrix} \mathbf{C}\mathbf{A}^{-\bar{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\bar{k}_{\bar{m}}} \\ \mathbf{C}\mathbf{A}^{-\widetilde{k}_1} \\ \vdots \\ \mathbf{C}\mathbf{A}^{-\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}} \widetilde{m}_i'}} \\ \end{bmatrix}= \mathbf{I}. \nonumber \end{align}
By the condition (ii) of Claim~\ref{claim:donknow00} and \eqref{eqn:successive:102}, such $\mathbf{M}$ satisfies the following: \begin{align}
| \mathbf{M} |_{max} &\leq \max\left\{\left|\mathbf{\bar{M}}\right|_{max},\left| \mathbf{\widetilde{M}} \begin{bmatrix} - \begin{bmatrix} \mathbf{R}(\widetilde{k}_1) \\ \mathbf{R}(\widetilde{k}_2) \\ \vdots \\ \mathbf{R}(\widetilde{k}_{\sum_{1 \leq i \leq \widetilde{\mu}}\widetilde{m}_i' }) \end{bmatrix} \mathbf{{\bar{M}}} & \mathbf{I} \end{bmatrix}
\right|_{max}\right\} \nonumber \\ &\lesssim \max\left\{
\frac{{\bar{p}(\bar{S}(\epsilon,k))}}{\epsilon} |\lambda_{1,1}|^{\bar{S}(\epsilon,k)}, \max_{1 \leq i \leq \widetilde{\mu}} \left\{ \frac{ p'_i(\widetilde{S}_i(\epsilon,k) ) }{\epsilon^2}
|\widetilde{\lambda}_{i,1}|^{\widetilde{S}_i(\epsilon,k)} \right\} \right\}\label{eqn:successive:202} \\ & \leq \frac{1}{\epsilon^2} \max \left\{
{\bar{p}(\bar{S}(\epsilon,k))} |\lambda_{1,1}|^{\bar{S}(\epsilon,k)}, \max_{1 \leq i \leq \widetilde{\mu}} \left\{ p'_i(\widetilde{S}_i(\epsilon,k) )
|\widetilde{\lambda}_{i,1}|^{\widetilde{S}_i(\epsilon,k)} \right\} \right\}. \label{eqn:dis:geofinal:5} \end{align}
Here, \eqref{eqn:successive:202} follows from the condition (iii) of Claim~\ref{claim:donknow00}, \eqref{eqn:successive:200}, and \eqref{eqn:successive:201}.
Moreover, since $k_{th}$ is a constant, the condition (iv) of Claim~\ref{claim:donknow00} implies
\begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \left\{ \max\left\{\bar{S}(\epsilon,k),k_{th}\right\} -k = s \right\} \leq p_e^{\frac{l_1}{p_1}}. \label{eqn:successive:204} \end{align}
Therefore, by applying Lemma~\ref{lem:app:geo} together with \eqref{eqn:successive:204} and (iv'')
we get \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ \widetilde{S}_i(\epsilon,k) - k =s \} \leq \max\left\{ p_e^{\frac{l_1}{p_1}}, \max_{1 \leq j \leq i}\left\{ p_e^{\frac{\widetilde{l}_j}{\widetilde{p}_j}} \right\} \right\} \label{eqn:dis:geofinal:6}. \end{align}
We finish the proof by dividing into two cases depending on $\widetilde{\mu}$. Since $\mathbf{\widetilde{A}}$ is obtained by erasing just one row and column of $\mathbf{A}$, the relation between $\widetilde{\mu}$ and $\mu$ is either $\widetilde{\mu}=\mu$ or $\widetilde{\mu}=\mu-1$.
(1) When $\widetilde{\mu}=\mu$.
In this case, the number of eigenvalue cycles remains the same. We can see that $|\widetilde{\lambda}_{i,1}|=|\lambda_{i,1}|$. $\mathbf{A_1}$ and $\mathbf{\widetilde{A}_1}$ may be the same, or $\mathbf{\widetilde{A}_1}$ may have a smaller dimension than $\mathbf{A_1}$. Thus, the new system $\mathbf{\widetilde{A}_1}$ becomes easier to estimate, and $\frac{\widetilde{l}_1}{\widetilde{p}_1} \geq \frac{l_1}{p_1}$, i.e. $p_e^{\frac{\widetilde{l}_1}{\widetilde{p}_1}} \leq p_e^{\frac{l_1}{p_1}}$. $\mathbf{A_i}$ and $\mathbf{\widetilde{A}_i}$ are the same for all $2 \leq i \leq \mu$, so $\frac{\widetilde{l}_i}{\widetilde{p}_i} = \frac{l_i}{p_i}$ for $2 \leq i \leq \mu$. Define $S_i(\epsilon^2,k):=\widetilde{S}_i(\epsilon,k)$, $p_1(k):=\bar{p}(k)+p_1'(k)$, and $p_i(k):=p_i'(k)$ for $2 \leq i \leq \mu$. Then, \eqref{eqn:dis:geofinal:5}, \eqref{eqn:dis:geofinal:6} and (v'') reduce as follows: \begin{align}
| \mathbf{M} |_{max} \leq \max_{1 \leq i \leq \mu} \left\{ \frac{p_i(S_i(\epsilon,k))}{\epsilon} |\lambda_{i,1}|^{S_i(\epsilon,k)} \right\}, \nonumber \end{align} \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ S_i(\epsilon,k) - k =s \} \leq \max_{1 \leq j \leq i}\left\{ p_e^{\frac{l_j}{p_j}} \right\}, \nonumber \end{align} \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{
{S}_a(\epsilon,k)-{S}_b(\epsilon,k)=s | \mathcal{F}_{{S}_b(\epsilon,k)} \} \leq \max_{b < i \leq a} \left\{ p_e^{\frac{{l}_i}{{p}_i}} \right\}. \nonumber \end{align} Here, we reparametrized $\epsilon^2$ to $\epsilon$. Therefore, the lemma is true for this case.
(2) When $\widetilde{\mu}=\mu-1$.
Since one eigenvalue cycle has disappeared, we can see that $|\widetilde{\lambda}_{1,1}|=|\lambda_{2,1}|,|\widetilde{\lambda}_{2,1}|=|\lambda_{3,1}|,\cdots, |\widetilde{\lambda}_{\widetilde{\mu},1}|=|\lambda_{\mu,1}|$. Moreover, $\mathbf{\widetilde{A}_i}=\mathbf{A_{i+1}}$ for $1 \leq i \leq \widetilde{\mu}$ and $\frac{\widetilde{l}_{i}}{\widetilde{p}_{i}}=\frac{l_{i+1}}{p_{i+1}}$ for $1 \leq i \leq \widetilde{\mu}$.
Define $S_1(\epsilon^2,k):=\bar{S}(\epsilon,k)$, $p_1(k):=\bar{p}(k)$, $S_i(\epsilon^2,k):=\widetilde{S}_{i-1}(\epsilon,k)$ and $p_i(k):=p_{i-1}'(k )$ for $2 \leq i \leq \mu$. We will also reparametrize $\epsilon^2$ to $\epsilon$. Then, \eqref{eqn:dis:geofinal:5} reduces to \begin{align}
| \mathbf{M} |_{max} \leq \max_{1 \leq i \leq \mu} \left\{ \frac{p_i(S_i(\epsilon,k))}{\epsilon} |\lambda_{i,1}|^{S_i(\epsilon,k)} \right\}. \nonumber \end{align} By the definition of $S_1(\epsilon,k)$, the condition (iv) of Claim~\ref{claim:donknow00} reduces to \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ S_1(\epsilon,k) - k=s \} \leq p_e^{\frac{l_1}{p_1}}. \nonumber \end{align} By \eqref{eqn:dis:geofinal:6} and the definition of $S_i(\epsilon,k)$, we have for all $2 \leq i \leq \mu$, \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \sup_{k \in \mathbb{Z}^+} \frac{1}{s} \log \mathbb{P} \{ S_i(\epsilon,k) - k=s \} \leq \max\left\{p_e^{\frac{l_1}{p_1}} , \max_{1 \leq j \leq i-1}\left\{ p_e^{\frac{\widetilde{l}_j}{\widetilde{p}_j}} \right\} \right\}= \max_{1 \leq j \leq i}\left\{ p_e^{\frac{l_j}{p_j}} \right\}. \nonumber \end{align} By (iv''), (v'') and the definition of $S_i(\epsilon,k)$, we have for all $1 \leq b < a \leq \mu$, \begin{align} \lim_{\epsilon \downarrow 0} \exp \limsup_{s \rightarrow \infty} \esssup \frac{1}{s} \log \mathbb{P} \{
{S}_a(\epsilon,k)-{S}_b(\epsilon,k)=s | \mathcal{F}_{{S}_b(\epsilon,k)} \} \leq \max_{b < i \leq a} \left\{ p_e^{\frac{{l}_i}{{p}_i}} \right\}. \nonumber \end{align} Therefore, the lemma is also true for this case.
Thus, the proof is finished. \end{proof}
\end{document}
\begin{document}
\newlength{\figwidth} \setlength{\figwidth}{\textwidth}
\newlength{\fighalfwidth} \setlength{\fighalfwidth}{0.48\textwidth}
\newlength{\fighalfheight} \setlength{\fighalfheight}{0.2\textheight}
\newlength{\figthirdwidth} \setlength{\figthirdwidth}{0.3\textwidth}
\title{Pitchfork bifurcations of invariant manifolds}
\author{Jyoti Champanerkar and Denis Blackmore\\ {\small(Department of Mathematical Sciences, New Jersey Institute of Technology)}\\ email: [email protected]}
\date{}
\maketitle
\begin{abstract} A pitchfork bifurcation of an $(m-1)$-dimensional invariant submanifold of a dynamical system in $\mathbb{R}^m$ is defined analogous to that in $\mathbb{R}$. Sufficient conditions for such a bifurcation to occur are stated, and existence of the bifurcated manifolds is proved under the stated hypotheses. For discrete dynamical systems, the existence of the locally attracting manifolds $M_+$ and $M_-$ after the bifurcation has taken place is proved by constructing a diffeomorphism of the unstable manifold $M$. For continuous dynamical systems, the theorem is proved by reducing it to the discrete case. Techniques used for proving the theorem involve differential topology and analysis. The theorem is illustrated by means of a canonical example. \end{abstract}
\section{Introduction} Pitchfork bifurcations take their name from the fact that the bifurcation diagram for a one-parameter family in $\mathbb{R}$ looks like a pitchfork. The pitchfork bifurcation of a fixed point in $\mathbb{R}$ has been widely studied. In $\mathbb{R}$, sufficient conditions for the occurrence of a pitchfork bifurcation of a non-hyperbolic fixed point are stated, for instance, in \cite{Rasband, Wigg2}. A generalization of the result in $\mathbb{R}$ is given by Sotomayor's theorem \cite{Perko} for a pitchfork bifurcation of a fixed point in $\mathbb{R}^n$. Another generalization of the pitchfork bifurcation is that for a periodic orbit \cite{Perko}. Analytical discussions of pitchfork (or pitchfork-type) bifurcations can be found for particular classes of dynamical systems, e.g., \cite{Glen}, where a quasi-periodically forced map is studied. Interesting numerical analyses of pitchfork bifurcations can be found in \cite{OsWiGlFe, Stur}.
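For reference, the prototypical one-dimensional pitchfork (stated here in our own notation as a reminder, not quoted from the sources above) is the one-parameter family
\begin{align*}
\dot{x} = \mu x - x^3,
\end{align*}
which has the single fixed point $x=0$ for $\mu \leq 0$ and the three fixed points $x=0$, $x=\pm\sqrt{\mu}$ for $\mu>0$; the origin changes from attracting to repelling as $\mu$ crosses $0$, while the two new fixed points $\pm\sqrt{\mu}$ are attracting, producing the pitchfork-shaped bifurcation diagram.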
An algorithm to compute invariant manifolds of equilibrium points and periodic orbits is presented in \cite{KrOs}. It is important to study invariant manifolds in order to understand the global dynamics of a system. The classical pitchfork bifurcation concerns a fixed point (invariant codimension-1 submanifold) on the real line. From a mathematical viewpoint, it is therefore natural and important to investigate higher-dimensional extensions of this theorem to invariant codimension-1 submanifolds of Euclidean $m$-space. Accordingly, we ask: under what conditions does an invariant manifold of a (discrete or continuous) dynamical system undergo a pitchfork bifurcation? A fairly complete answer to this question is provided in this work. We give sufficient conditions for the occurrence of a pitchfork bifurcation of a compact, boundaryless, codimension-$1$, invariant manifold in $\mathbb{R}^m$. We obtain readily verifiable criteria for identifying such bifurcations, and illustrate the use of these criteria in an example. Techniques used for proving the theorem involve differential topology and analysis and are adapted from Hartman \cite{Hartman}, Hirsch et al. \cite{HiPuSh} and Shub \cite{Shub}.
\section{Definitions} Let $M$ be a codimension-$1$, compact, connected, boundaryless submanifold of $\mathbb{R}^m$. By the Jordan-Brouwer separation theorem \cite{GuPo}, $M$ divides $\mathbb{R}^m \backslash M$ into an outer unbounded region and an inner bounded region. We shall study $\mathcal{C}^1$ functions $F:U \times (-a,a) \rightarrow \mathbb{R}^m$, where $U$ is an open neighborhood of $M$ in $\mathbb{R}^m$ and $(-a,a)$, $a>0$, is an open symmetric interval of real numbers. It shall be assumed in the sequel that each of the maps $F_{\mu}:U\rightarrow
\mathbb{R}^m$, $|\mu|<a$, is a $\mathcal{C}^1$ diffeomorphism and that $M$ is $F_{\mu}$-invariant, i.e. $F_{\mu}(M) =M$. \begin{defn} With $M$ and $F_{\mu}$ as above, we say that $F_{\mu}$ is side-preserving if for every $x$ in the inner bounded region, $F_{\mu}(x)$ also lies in the inner region. \end{defn} \begin{defn} With $M$ and $F_{\mu}$ as above, we say that $F_{\mu}$ is side-reversing if for every $x$ in the inner bounded region, $F_{\mu}(x)$ lies in the outer unbounded region. \end{defn} Note that if $F_{\mu}$ is a diffeomorphism of a neighborhood of $M$ and leaves $M$ invariant, then $F_{\mu}$ is either side-preserving or side-reversing. Observe also that in the case that $F_{\mu}$ is side-reversing, it is not possible for $U=\mathbb{R}^m$. Analogous to the definition of a pitchfork bifurcation in $\mathbb{R}$, we define a pitchfork bifurcation of invariant manifolds in $\mathbb{R}^m$ as follows. \begin{defn} Consider a discrete dynamical system in $\mathbb{R}^m$ given by $x_{n+1} = F_{\mu}(x_n)$. Let $M$ be an invariant manifold for all $\mu \in (-a,a)$. If $0 \leq \mu_0 < a$ is such that $M$ is locally attracting (repelling) for $\mu < 0$, $M$ is locally repelling (attracting) for $\mu > \mu_0$ and in addition two locally attracting (repelling) $F_{\mu}$-invariant diffeomorphic copies of $M$, viz., $M_-$ and $M_+$ appear in a small neighborhood of $M$ for $\mu > \mu_0$, then we say that $M$ has undergone a pitchfork bifurcation at $\mu_0$. \end{defn} In the definition above, it does not matter what happens in the interval $(0, \mu_0)$. It is typically assumed that the interval $(0, \mu_0)$ is small. In $\mathbb{R}$, $0$ coincides with $\mu_0$, since the manifold under consideration is just a single point. But for higher dimensions, not all points on the invariant manifold may undergo a change in stability at the same value of the parameter $\mu$. 
When $\mu \geq \mu_0$, all the points have changed stability and two new invariant, diffeomorphic copies of the original manifold (of opposite stability) appear.
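The scalar prototype underlying this definition is easy to check numerically. The following sketch (the map and parameter values are our illustrative choices, not taken from the text) iterates the one-dimensional pitchfork map $x \mapsto (1+\mu)x - x^3$, for which $M=\{0\}$ and, for $\mu>0$, $M_{\pm}=\{\pm\sqrt{\mu}\}$:

```python
# Scalar pitchfork prototype: x -> (1 + mu) x - x^3.
# The fixed point x = 0 (the "manifold" M in dimension one) is
# attracting for mu < 0 and repelling for mu > 0, when two new
# attracting fixed points +-sqrt(mu) appear.
import math

def step(x, mu):
    return (1.0 + mu) * x - x ** 3

def iterate(x0, mu, n=500):
    x = x0
    for _ in range(n):
        x = step(x, mu)
    return x

# mu < 0: iterates starting near 0 converge to 0.
assert abs(iterate(0.1, -0.2)) < 1e-8

# mu > 0: the same initial point is repelled from 0 and captured
# by the bifurcated fixed point of matching sign.
mu = 0.2
assert abs(iterate(0.1, mu) - math.sqrt(mu)) < 1e-8
assert abs(iterate(-0.1, mu) + math.sqrt(mu)) < 1e-8
```

The stability exchange is visible in the derivative at the origin, $1+\mu$, which crosses $1$ as $\mu$ passes through $0$, while the derivative at $\pm\sqrt{\mu}$ equals $1-2\mu < 1$.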
\section{Pitchfork bifurcation theorem for discrete dynamical systems} \label{sec:PBdiscrete} We consider (one-parameter families of) discrete dynamical systems given by \begin{equation} \label{eq:discsystem} x_{n+1}=F(x_n,\mu) \end{equation} where $x_n \in \mathbb{R}^m$ for every $n \in \mathbb{N}$ and $\mu \in (-a,a) \subseteq \mathbb{R}$. Additional properties of $F(\cdot ,\mu)$, also denoted by $F_{\mu}(\cdot)$, are described later. Let $M$ be a compact, connected, boundaryless, codimension-$1$, $\mathcal{C}^1$ submanifold of $\mathbb{R}^m$, which is $F_{\mu}$-invariant $\forall \mu \in (-a, a)$. Denote a tubular neighborhood of $M$ by $N(\alpha)=\{x \in \mathbb{R}^m : d(x,M) \leq \alpha \}$, $\alpha >0$, where $d$ is the standard Euclidean distance function. Assume that $\alpha$ is sufficiently small so that the $\epsilon$-neighborhood theorem \cite{GuPo} can be applied and $N(\alpha) \subset U$. This means that every element $x \in N(\alpha)$ can be uniquely represented as $x=(r,y)$ where $y=\pi(x) \in M$ is the point on $M$ closest to $x$ and $r \in [-\alpha,\alpha]$ is the signed distance in the outward normal direction between $x$ and $M$. We also assume that $F_{\mu}(N(\alpha)) \subset N(\alpha)$. This enables us to write $F_{\mu}$ in component form as $F_{\mu}=(f_{\mu},g_{\mu})$ where $f_{\mu}:N(\alpha) \rightarrow \mathbb{R}$, $g_{\mu}=\pi \circ F_{\mu}: N(\alpha) \rightarrow M$, and $f_{\mu}(x)$ is the signed distance from $g_{\mu}(x)$ to $F_{\mu}(x)$. Observe that $F_{\mu}^{-1}(F_{\mu}(N(\alpha))) = N(\alpha)$, so that $F_{\mu}^{-1}$ can be written in component form as $F_{\mu}^{-1}=(\hat{f_{\mu}},\hat{g_{\mu}})$.
We shall use standard notation for derivatives and partial derivatives of functions. For example, the derivative of $F_{\mu}:N(\alpha) \rightarrow \mathbb{R}^m$ will be denoted by $DF_{\mu}$, and represented as the usual $m\times m$ Jacobian matrix \begin{displaymath} DF_{\mu}(r,y)=\left[ \begin{array}{cc} D_rf_{\mu} (r,y) & D_yf_{\mu}(r,y)\\ D_rg_{\mu}(r,y)& D_yg_{\mu}(r,y) \end{array}\right], \end{displaymath} where the entries are submatrices representing the partial derivatives such as \begin{displaymath} D_rf_{\mu}(r,y)=\frac{\partial f_{\mu}}{\partial r}(r,y) {\rm \quad and \quad} D_yg_{\mu}(r,y)=\left[\frac{\partial (g_{\mu})_i}{\partial y_j}\right]_{(m-1)\times (m-1)}. \end{displaymath}
We use $|\cdot |$ for the Euclidean norm of an element of a Euclidean space or the associated norm of a linear mapping (matrix) between Euclidean spaces. The symbol $\| \cdot
\|$ denotes the supremum norm of a function taking values in a Euclidean space or in a space of linear transformations of Euclidean spaces taken over an appropriate set, which is sometimes indicated as a subscript of the norm.
If the rate of change of the normal component $f_{\mu}$ in the radial direction $r$ is strictly less than $1$ in absolute value, the manifold $M$ will be locally attracting. This is stated mathematically in statement $(ii)$ of Theorem \ref{thm:PBthethm}, which follows. Similarly, to have $M$ locally repelling, we require that $|D_rf_{\mu}|$ be greater than one, as in statement $(iii)$. Statements $(iv)$ and $(v)$ describe locally attracting properties in a neighborhood away from $M$, which is where our new bifurcated manifolds $M_-$ and $M_+$ will reside. Properties $(vi)$ and $(vii)$ are needed in order to establish the existence of the manifolds $M_-$ and $M_+$ as graphs of a fixed point (a Lipschitz function) in a complete function space. The last hypothesis (statement $(viii)$) provides boundedness and equicontinuity properties that enable us to bootstrap the Lipschitz homeomorphisms of $M$ onto $M_+$ and $M_-$ up to $\mathcal{C}^1$ diffeomorphisms. These ideas shall become clear as the proof unfolds and from the remarks following the proof. \begin{thm}\label{thm:PBthethm} With $F_{\mu}$ and $M$ as above, suppose that the following statements hold. \begin{enumerate} \item $F_{\mu}$ is side-preserving for every $\mu \in (-a,a)$.
\item $\underset{(r,y) \in N(\alpha)}{\sup}|D_rf_{\mu}(r,y)|=\|D_rf_{\mu}\|_{N(\alpha)} < 1$ for every $\mu \in (-a,0)$.
\item $\exists$ \quad $0< \mu_{\star} < a$ such that
$\underset{y \in M}{\inf}|D_rf_{\mu}(0,y)|>1 \quad \forall \mu \in (\mu_{\star},a)$.
\item $\exists$ \quad $0 < \alpha_1 < \alpha $ such that
$\|D_rf_{\mu}\|_A<1 \quad \forall \mu \in [0,a)$, where $A=\{ x\in \mathbb{R}^m : \alpha_1 \leq d(x,M) \leq \alpha \}$.
\item $\exists \quad \chi:[0,a) \rightarrow \mathbb{R}$ continuous with $0 \leq \chi(\mu) \leq \alpha_1$ and $K(\mu):=\{x \in \mathbb{R}^m: \chi(\mu) \leq d(x,M) \leq \alpha
\}$ such that $F_{\mu}(K(\mu)) \subseteq K(\mu) \quad \forall \mu \in (\mu_{\star},a)$. Furthermore $c(\mu):=\|D_rf_{\mu}\|_{K(\mu)} < 1 \quad \forall \mu \in (\mu_{\star},a)$.
\item $c_{\star}(\mu):= (\|D_r f_{\mu}\|_{K(\mu)})
(1 + \|D_r \hat{g}_{\mu}\|_{K(\mu)}) + \|D_y f_{\mu}\|_{K(\mu)} <1$ for each $\mu \in
(\mu_{\star},a)$, where $\| \cdot \|_{K(\mu)}$ is defined to be the $\sup$ norm over $K(\mu)$. Here $(\hat{f}_{\mu},\hat{g}_{\mu})$ denotes the inverse map $F^{-1}_{\mu}$.
\item $(\|D_rf_{\mu}\|_{K(\mu)} + \|D_y f_{\mu}\|_{K(\mu)})
(\|D_r \hat{g}_{\mu}\|_{K(\mu)} + \|D_y \hat{g}_{\mu}\|_{K(\mu)}) \leq 1$ for each $\mu \in (\mu_{\star}, a)$.
\item {\small $\sigma(\mu):= \|D_r f_{\mu}\|_{K(\mu)} (2\|D_r\hat{g}_{\mu}\|_{K(\mu)} +
\|D_y\hat{g}_{\mu}\|_{K(\mu)}) + \|D_yf_{\mu}\|_{K(\mu)}\|D_r\hat{g}_{\mu}\|_{K(\mu)} < 1$} for all $\mu \in (\mu_{\star},a)$. \end{enumerate}
Then for each $\mu \in (\mu_{\star},a)$, there exist codimension-$1$ submanifolds $M_+(\mu)$ and $M_-(\mu)$ in $K(\mu)$ such that both $M_+(\mu)$ and $M_-(\mu)$ are $F_{\mu}$-invariant, locally attracting and $\mathcal{C}^1$ diffeomorphic to $M$. Moreover, $M$ is locally repelling, $r>0$ for all $x=(r,y) \in M_+$, and $r<0$ for all $x=(r,y) \in M_-$. \end{thm} \begin{proof}
Recall that $N(\alpha)=\big \{x=(r,y) \in \mathbb{R}^m: |r|=d(x,M) \leq \alpha,\ y = \pi (x)\big \}$. Now $ F_{\mu}(r,y)= (f_{\mu}(r,y), g_{\mu}(r,y))$ in component form, where $f_{\mu}:N(\alpha) \rightarrow \mathbb{R}$ is the signed distance
between $F_{\mu}(r,y)$ and $g_{\mu}(r,y)$ and $g_{\mu}:N(\alpha) \rightarrow M$ is the projection
$\pi \circ F_{\mu}(r,y)$ of $F_{\mu}(r,y)$ on $M$. We shall break the proof up into a number
of steps (claims). \begin{claim}\label{cl:Mlocatt}$M$ is locally attracting for $\mu \in (-a,0)$. \end{claim} \noindent \emph{Proof of Claim \ref{cl:Mlocatt}:} Consider a point $(r_0, y_0) \in N(\alpha)$. Let $(r_n,y_n)$ be the point obtained by applying the $n$-fold composition of $F_\mu$ with itself to $(r_0,y_0)$. Then \begin{displaymath} (r_n,y_n)= F_{\mu}(r_{n-1},y_{n-1}) = (f_{\mu}(r_{n-1},y_{n-1}),g_{\mu}(r_{n-1},y_{n-1})) \end{displaymath} implies that \begin{displaymath} d((r_n,y_n),M) = d((r_n,y_n),\pi(r_n,y_n))= |f_{\mu}(r_{n-1},y_{n-1})|. \end{displaymath} As $M$ is $F_{\mu}$-invariant, it follows that $f_{\mu}(0,y_{n-1})=0$ for all $n \in \mathbb{N}$. So, \begin{displaymath}
|r_n|=|f_{\mu}(r_{n-1},y_{n-1})|=|f_{\mu}(r_{n-1},y_{n-1}) - f_{\mu}(0,y_{n-1})|
= \left|\frac{\partial f_{\mu}}{\partial r}(r^{\star},y_{n-1})\right| |r_{n-1}| \end{displaymath}
by the mean value theorem, for some $r^{\star}$ between $0$ and $r_{n-1}$. Thus $|r_n| < c|r_{n-1}| < \cdots < c^n|r_0|$, where\\
$c=\underset{(r,y) \in N(\alpha)}{\sup}|\frac{\partial f_{\mu}(r,y)}{\partial r}|<1$ by property $(ii)$. Therefore, $r_n \rightarrow 0$ as $n \rightarrow \infty$. Consequently $d((r_n,y_n),M) \rightarrow 0$. That is, for any initial point $(r_0,y_0)$ in the neighborhood $N(\alpha)$ of $M$, $F_{\mu}^n(r_0,y_0)$ converges to $M$. It follows that $M$ is locally attracting for all $\mu \in (-a,0)$. \begin{claim}\label{cl:Mlocrep}$M$ is locally repelling for $\mu \in (\mu_{\star}, a)$.\end{claim} \noindent \emph{Proof of Claim \ref{cl:Mlocrep}:} Following the same steps as above, we find that
$|r_n| > c^{\prime}|r_{n-1}| > |r_{n-1}|$ whenever $|r_{n-1}|$ is sufficiently small, where $c^{\prime}=\underset{y \in M}{\inf}|D_rf_{\mu}(0,y)|>1$ by statement $(iii)$. Accordingly the iterates $\{x_n\}$ must eventually leave any sufficiently thin tubular neighborhood of $M$ for $\mu \in (\mu_{\star}, a)$, which means that $M$ is locally repelling.
We now fix a $\mu \in (\mu_{\star},a)$ and suppress $\mu$ in the notation for simplicity. To begin with, we shall prove the existence of $M_+$ as an $F_{\mu}$-invariant manifold homeomorphic to $M$. It suffices to prove the existence of $M_+$, as the existence of $M_-$ can be established in the same way. Observe that $M_+$ is invariant iff $F(M_+) = M_+$. We shall seek $M_+$ in the form of the graph of a continuous function over $M$ defined as \begin{displaymath}M_+=\Gamma_{\psi} = \{(\psi(y),y):y \in M \},\end{displaymath} where $M_+ \subset K = K(\mu)$, $\psi:M \rightarrow \mathbb{R}$ and $\psi(y) \geq 0$ for all $y \in M$. Then for all $y \in M$, we have that $(\psi(y),y) \in M_+$ iff $F(\psi(y),y)=(\psi(z),z) \in M_+$, which is equivalent to \begin{equation}\label{eq:frel} \bigg(f \big(\psi(y),y \big),g\big(\psi(y),y\big)\bigg) =(\psi(z),z) \in M_+. \end{equation} $F$ is a diffeomorphism, hence $F^{-1}(\psi(z),z) = (\psi(y),y)$ which implies that \begin{equation}\label{eq:finvrel} \bigg(\hat{f}\big(\psi(z),z \big),\hat{g}\big(\psi(z),z\big)\bigg) = (\psi(y),y). \end{equation} where $F^{-1}=(\hat{f},\hat{g})$. Combining equations (\ref{eq:frel}) and (\ref{eq:finvrel}), we find that $M_+$ is invariant iff $\psi$ satisfies the functional equation \begin{equation}\label{eq:funcrel} \psi(z) = f\bigg(\psi\big(\hat{g}(\psi(z),z)\big), \hat{g}\big(\psi(z),z\big)\bigg). \end{equation} Let $Lip(A,B)$ denote the set of all Lipschitz functions from $A$ to $B$. Let $\mathcal{L}(\psi)$ denote the Lipschitz constant of a Lipschitz function $\psi$, and $\Gamma_{\psi}=\{(\psi(y),y): y \in M\}$ denote the graph of $\psi$. Now define the set \begin{displaymath} X:=\{ \psi \in Lip(M,\mathbb{R}^+\cup \{0\} ): \mathcal{L}(\psi)\leq 1, \Gamma_{\psi} \subseteq K\}. \end{displaymath}
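As a concrete illustration of the operator defined by the right-hand side of the functional equation (\ref{eq:funcrel}), consider a rotationally symmetric model map on an annular neighborhood of the unit circle $M \subset \mathbb{R}^2$ (our own illustrative choice, not part of the theorem): in (signed distance, angle) coordinates, $f(r,\theta)=(1+\mu)r - r^3$ and $g(r,\theta)=\theta+\omega$, so that $\hat{g}(r,\theta)=\theta-\omega$ and the operator reduces to $\mathcal{F}(\psi)(\theta)=(1+\mu)\psi(\theta-\omega)-\psi(\theta-\omega)^3$. A minimal numerical sketch of the successive approximation:

```python
# Successive approximation of the graph transform for a rotationally
# symmetric model map on an annulus around the unit circle M in R^2:
#   f(r, theta) = (1 + mu) r - r^3,   g(r, theta) = theta + omega,
# so ghat(r, theta) = theta - omega, and the operator becomes
#   F(psi)(theta) = (1 + mu) psi(theta - omega) - psi(theta - omega)**3.
import math

N = 360            # grid points on M
SHIFT = 30         # omega = 2*pi*SHIFT/N, so the shift is exact on the grid
MU = 0.2

def graph_transform(psi):
    # Evaluate psi at theta - omega: a cyclic index shift on the grid.
    shifted = psi[-SHIFT:] + psi[:-SHIFT]
    return [(1.0 + MU) * r - r ** 3 for r in shifted]

psi = [0.6] * N    # psi_1: positive constant whose graph lies in K(mu)
for _ in range(200):
    psi = graph_transform(psi)

# The fixed point is the constant function phi = sqrt(mu); its graph is M_+.
assert max(abs(r - math.sqrt(MU)) for r in psi) < 1e-8
```

Because the model is rotationally symmetric, the fixed point is the constant function $\phi \equiv \sqrt{\mu}$, and the iteration converges geometrically, in line with the contraction estimate.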
\begin{claim}\label{cl:XBanach}$X$ is complete with respect to $\| \cdot \|_{K}$.\end{claim} \noindent \emph{Proof of Claim \ref{cl:XBanach}:} Let $\{ \psi_n \}$ be a Cauchy sequence in $X$. Then for all $n \in \mathbb{N}$ we have that $\psi_n : M \overset{Lip}{\longrightarrow} \mathbb{R}^+ \cup \{0\}$, $\mathcal{L}(\psi_n) \leq 1$ and $\Gamma_{\psi_n} \subseteq K(\mu)$. Here $\Gamma_{\psi_n}$ denotes the graph of $\psi_n$ over $M$. Each $\psi_n$ is Lipschitz, hence continuous. Now $M$ is compact and $\mathbb{R}^+ \cup \{0\}$ is closed, so the set of all continuous functions from $M$ to $\mathbb{R}^+\cup\{0\}$ with the sup norm is complete. Moreover, if $\psi_n \rightarrow \psi$ as $n \rightarrow \infty$, it is clear that $\psi$ is also Lipschitz, with Lipschitz constant not greater than one.
Since $K$ is closed, $K$ contains all its limit points. Therefore $\Gamma_{\psi_n} \subseteq K$ for all $n$ implies that \begin{displaymath}\lim_{n\rightarrow \infty}(\psi_n(y),y) = (\psi(y),y) \in K \quad {\rm for\ every\ } y \in M, \end{displaymath} so that $\Gamma_{\psi} \subseteq K$; hence $X$ is complete. In view of (\ref{eq:funcrel}), we define an operator $\mathcal{F}$ on $X$ as follows. \begin{equation} \mathcal{F}(\psi)(y):=f\bigg(\psi\big(\hat{g}(\psi(y),y)\big),\hat{g}\big(\psi(y),y\big)\bigg). \end{equation} \begin{claim} \label{cl:Fcontraction}$\mathcal{F}(X) \subseteq X$.\end{claim} \noindent \emph{Proof of Claim \ref{cl:Fcontraction}:} Let $z=\hat{g}(\psi(y),y)$. Then $f(\psi(z),z)$ is the signed distance between $F(\psi(z),z)$ and $\pi(F(\psi(z),z))$. If $\psi(z) >0$, then $f(\psi(z),z) >0$ since $F$ is side-preserving. So $\mathcal{F}(\psi)$ is indeed a function from $M$ to $\mathbb{R}^+ \cup \{0\}$. $\mathcal{F}(\psi)$ is continuous since it is a composition of continuous functions. Now it follows from the mean value theorem and the definition of $X$ that \begin{eqnarray*}
|\mathcal{F}(\psi)(y_1) - \mathcal{F}(\psi)(y_2)|&=& |f(\psi(z_1),z_1) - f(\psi(z_2),z_2)|\\
&\leq& \| \frac{\partial f}{\partial r} \|_{K} |\psi(z_1) - \psi(z_2)| + \|D_y f \|_{K}|z_1 - z_2| \\
&\leq& \bigg(\| \frac{\partial f}{\partial r} \|_{K} + \|D_y f\|_{K}\bigg) \quad |z_1 - z_2| \end{eqnarray*}
where $\|\cdot\|_K = \sup\{|\cdot|:(r,y) \in K \}$. The above inequality follows because $\psi$ is a Lipschitz function with Lipschitz constant $ \leq 1$. Also \begin{eqnarray*}
|z_1 - z_2| &=& | \hat{g}(\psi(y_1),y_1) - \hat{g}(\psi(y_2),y_2)|\\
&\leq& \| D_r \hat{g}\|_K \quad |\psi(y_1) - \psi(y_2)| + \|D_y \hat{g} \|_K \quad |y_1 - y_2|\\
&\leq& \bigg( \| D_r \hat{g}\|_K + \|D_y \hat{g}\|_K \bigg) \quad |y_1 - y_2|. \end{eqnarray*} The two inequalities obtained above, together with property $(vii)$, imply that \begin{eqnarray*}
|\mathcal{F}(\psi)(y_1) - \mathcal{F}(\psi)(y_2)| &\leq& \bigg(\|\frac{\partial f}{\partial r}\|_K + \| D_y f\|_K\bigg)\bigg(\|D_r \hat{g}\|_K + \|D_y \hat{g} \|_K\bigg)\\
&& \times |y_1 - y_2|\\
&\leq& |y_1 - y_2|. \end{eqnarray*} Therefore, $\mathcal{F}(\psi) \in Lip(M,\mathbb{R}^+\cup\{0\})$ and $\mathcal{L}(\mathcal{F}(\psi)) \leq 1$. We will now prove that $\mathcal{F}$ is a contraction mapping. Using statement $(vi)$ and the mean value theorem, we compute that \begin{eqnarray*}
&&| \mathcal{F}(\psi_1)(y) - \mathcal{F}(\psi_2)(y) |\\
&&=|f(\psi_1(\hat{g}(\psi_1(y),y)),\hat{g}(\psi_1(y),y)) - f(\psi_2(\hat{g}(\psi_2(y),y)),\hat{g}(\psi_2(y),y)) |\\
&&\leq \left\|\frac{\partial f}{\partial r}\right\|_K |\psi_1(\hat{g}(\psi_1(y),y)) -
\psi_2(\hat{g}(\psi_2(y),y))|\\
&&\quad+ \|D_y f\|_K |\hat{g}(\psi_1(y),y) - \hat{g}(\psi_2(y),y)|\\
&&\leq \left\|\frac{\partial f}{\partial r}\right\|_K |\psi_1(\hat{g}(\psi_1(y),y)) -
\psi_1(\hat{g}(\psi_2(y),y))|\\
&&\quad + \left\|\frac{\partial f}{\partial r}\right\|_K |\psi_1(\hat{g}(\psi_2(y),y)) -
\psi_2(\hat{g}(\psi_2(y),y))| \\
&& \quad + \|D_y f\|_K |\hat{g}(\psi_1(y),y) - \hat{g}(\psi_2(y),y)|\\
&& \leq \left\|\frac{\partial f}{\partial r}\right\|_K (|\hat{g}(\psi_1(y),y) - \hat{g}(\psi_2(y),y)| +
\|\psi_1 - \psi_2 \| )\\
&&\quad + \|D_y f\|_K |\hat{g}(\psi_1(y),y) - \hat{g}(\psi_2(y),y)|\\
&&\leq \left\|\frac{\partial f}{\partial r}\right\|_K \big( \| D_r \hat{g}\|_K \| \psi_1 - \psi_2 \| +
\|\psi_1 - \psi_2 \|\big)+ \|D_y f\|_K
\|D_r \hat{g} \|_K \| \psi_1 - \psi_2 \|\\
&&= \left \{ \left\|\frac{\partial f}{\partial r}\right\|_K \big(1+\| D_r \hat{g} \|_K\big)
+ \|D_y f\|_K \right \} \|\psi_1 - \psi_2 \|. \end{eqnarray*} This is true for all $y \in M$. Hence it remains true on taking the supremum over $y \in M$; therefore, we obtain the relation \begin{eqnarray*}
\| \mathcal{F}(\psi_1) - \mathcal{F}(\psi_2) \| &\leq& \left[ \|\frac{\partial f}{\partial r}\|_K (1+\|D_r \hat{g} \|_K)+ \| D_yf \|_K \right] \|\psi_1 - \psi_2 \| \\
\| \mathcal{F}(\psi_1) - \mathcal{F}(\psi_2) \| &\leq& c_{\star} \|\psi_1 - \psi_2 \| \end{eqnarray*}
where $c_{\star}=\|\frac{\partial f}{\partial r}\|_K (1+\| D_r \hat{g} \|_K)+ \|D_y f\|_K$ is such that $0 < c_{\star} < 1$ by hypothesis $(vi)$. This shows that $\mathcal{F}$ contracts distances in the sup norm, with contraction constant $c_{\star}$. We note here that the invariance of $M$ implies that $f(0,y)=\hat{f}(0,y)=0$ for all $(0,y) \in M$. Accordingly $D_yf(0,y)=D_y\hat{f}(0,y)=0$ whenever $x=(0,y) \in M$, which means that both
$\|D_yf\|_K$ and $\|D_y\hat{f}\|_K$ can be made as small as we like by choosing a sufficiently thin tubular neighborhood of $M$. Note that if $F(\psi(y),y)=(\psi(z),z)$, then $(\psi(y),y)=F^{-1}(\psi(z),z)$, and it follows that \begin{displaymath} \psi(y) = \hat{f}(\psi(z),z) \qquad {\rm and} \qquad y = \hat{g}(\psi(z),z). \end{displaymath} Now consider $\psi \in X$. By definition, we have \begin{eqnarray*} \Gamma_{\mathcal{F}(\psi)} &=& \{ (\mathcal{F}(\psi)(z),z) : z \in M\}\\ &=& \left \{ \Bigg(f\bigg(\psi(\hat{g}(\psi(z),z)), \hat{g}(\psi(z),z)\bigg),z\Bigg): z \in M \right \}. \end{eqnarray*}
This implies that \begin{eqnarray*} \Gamma_{\mathcal{F}(\psi)} &=& \{ \big(f(\psi(y),y),z\big): z \in M\}\\ &=& \{ \big(f(\psi(y),y),g(\psi(y),y)\big): g(\psi(y),y) \in M\}\\ &=& \{ \big(f(\psi(y),y),g(\psi(y),y)\big): y \in M\}\\ &=& \{ F(\psi(y),y) : y \in M \}. \end{eqnarray*} We know that $(\psi(y),y) \in K$ for all $y \in M$ and $F(K) \subseteq K$. This implies that $\Gamma_{\mathcal{F}(\psi)} \subseteq K$, thereby proving the claim that $\mathcal{F}(X) \subseteq X$. Hence $\mathcal{F}:X \rightarrow X$ is a contraction mapping with respect to the sup norm on $X$.
Since $\mathcal{F}$ is a contraction on a complete metric space $X$, it has a unique fixed point in $X$ owing to Banach's fixed point theorem. Let $\phi$ be the fixed point of $\mathcal{F}$. Then $\phi \in Lip(M,\mathbb{R}^+\cup \{0\})$ with Lipschitz constant $\mathcal{L}(\phi) \leq 1$, and $\phi$ satisfies the functional equation (\ref{eq:funcrel}). Therefore, \begin{equation} \phi(z)=f\bigg(\phi\big(\hat{g}(\phi(z),z)\big),\hat{g}\big(\phi(z),z\big)\bigg). \end{equation} \begin{claim}\label{cl:M+exists} $M_+$ exists and is locally attracting. \end{claim} \noindent \emph{Proof of Claim \ref{cl:M+exists}:} We now define $M_+$ as the graph of $\phi$ as follows: \begin{displaymath} M_+= \Gamma_{\phi}= \{ (\phi(y),y): y \in M \},\end{displaymath} where $\phi$ is as above. This proves the existence of $M_+$. That $M_+$ is locally attracting follows directly from its definition as the graph of a fixed point (function) of a contraction mapping.
\begin{claim}\label{cl:M+homeo} $M_+$ is homeomorphic to $M$.\end{claim} \noindent \emph{Proof of Claim:} Let $H:M \rightarrow M_+$ be defined as $H(y):=(\phi(y),y)$. Then $H$ is a continuous bijection. Since $M$ is compact, $H$ maps closed sets to closed sets, so $H^{-1}$ is also continuous. Hence the manifold $M_+$ is homeomorphic to $M$. \begin{claim}\label{cl:phiC1} The function $\phi$ is a class $\mathcal{C}^1$ map.\end{claim} \noindent \emph{Proof of Claim:} We know that $\phi$ is the solution to the functional equation (\ref{eq:funcrel}), hence $\phi(z)=f\bigg(\phi\big(\hat{g}(\phi(z),z)\big),\hat{g}\big(\phi(z),z\big)\bigg)$, $\phi \in Lip(M,\mathbb{R}^+ \cup \{0\})$, and $\mathcal{L}(\phi) \leq 1$. We will inductively construct a sequence of $\mathcal{C}^1$ functions $\psi_n$ which converges to $\phi$. Then using the Arzela-Ascoli theorem, we will prove that $\phi$ is $\mathcal{C}^1$. The details are as follows.
Choose $\psi_1$ to be a positive constant such that $\Gamma_{\psi_1} \subset K$. By construction, $\psi_1$ is $\mathcal{C}^1$ and $\mathcal{L}(\psi_1) =0$. Now suppose $\psi_n$ is defined and that $\psi_n$ is $\mathcal{C}^1$ with $\mathcal{L}(\psi_n) \leq 1$. We define $\psi_{n+1}$ inductively as \begin{eqnarray} \psi_{n+1}(z)&=& \mathcal{F}(\psi_n)(z)\\ \nonumber
&=& f\bigg(\psi_n\big(\hat{g}(\psi_n(z),z)\big),\hat{g}\big(\psi_n(z),z\big)\bigg). \end{eqnarray} Let $h_n:M \rightarrow \mathbb{R}^m$ denote the function $(\psi_n,Id)$, where $Id$ denotes the identity map on the second coordinate. That is, $h_n(z)=(\psi_n(z),z)$. Then $h_n$ is $\mathcal{C}^1$ by the induction hypothesis and the fact that it is the composition of $\mathcal{C}^1$ maps, and \begin{displaymath} \psi_{n+1}(z)=f \circ h_n \circ \hat{g} \circ h_n(z). \end{displaymath} Here we have used the fact that both $f$ and $\hat{g}$ are $\mathcal{C}^1$ since $F$ and $F^{-1}$ are $\mathcal{C}^1$ diffeomorphisms.
The sequence of functions $\{\psi_{n}(z)\}$ converges uniformly to $\phi(z)$, since $\mathcal{F}$ is a contraction whose unique fixed point is $\phi$. The Jacobian of $\psi_{n+1}$ evaluated at $z$ is the $1 \times (m-1)$ matrix (gradient row vector) of $\psi_{n+1}$
given as \begin{equation} D \psi_{n+1}(z)=Df\bigg(h_n\big(\hat{g}(h_n(z))\big)\bigg)Dh_n(\hat{g}(h_n(z)))D\hat{g}(h_n(z))Dh_n(z), \end{equation} owing to the chain rule. Moreover, \begin{displaymath}\psi_{n+1}(z) = \mathcal{F}(\psi_{n})(z)\end{displaymath}
by construction; hence $\mathcal{L}(\psi_{n+1}) \leq 1$, since $\mathcal{F}(X) \subseteq X$. As $\psi_{n+1}$ is differentiable, this implies that $\|D\psi_{n+1}(z) \| \leq 1$ for all $z \in M$. By induction, $\{D \psi _n (z)\}$ is a sequence of continuous functions, uniformly bounded by $1$.
We will now prove the equicontinuity of $\{D \psi _n (z)\}$. The techniques used below are actually global versions of the methods employed by Hartman \cite{Hartman} for local invariant manifolds, and the role of the Lipschitz property follows an approach used by Hirsch et al. \cite{HiPuSh} and Shub \cite{Shub} to study hyperbolic invariant manifolds.
For any function $\beta$, we define $\triangle \beta(z):= \beta(z + \triangle z) - \beta(z)$. When $\triangle z$ is such that
$|\triangle z| \leq \min\{\delta, \frac{\delta}{\|D_r\hat{g}\|_K + \| D_y\hat{g}\|_K} \}$, we will show that $ \| \triangle D\psi_n(z) \| \leq \tau(\delta)$ for all $n$, where $\tau$ depends only on $\delta$ and is such that $\tau(\delta) \rightarrow 0$ as $\delta \rightarrow 0$. The desired result will be proved by induction as follows.
For any $\delta >0$, we define quantities $\eta (\delta)$ and $\tau (\delta)$ as \begin{eqnarray*}
&&\eta(\delta)=\sup\bigg\{\|D_rf(r + \triangle r, y + \triangle y) - D_rf(r,y)\|,\\
&& \|D_yf(r+ \triangle r, y + \triangle y) - D_yf(r,y)\|,
\|D_r\hat{f}(r + \triangle r, y + \triangle y) - D_r\hat{f}(r,y)\|,\\
&& \|D_y\hat{f}(r+ \triangle r, y + \triangle y) - D_y\hat{f}(r,y)\|,
\|D_rg(r+ \triangle r,y + \triangle y) - D_rg(r,y)\|,\\
&&\|D_yg(r+ \triangle r, y + \triangle y) - D_yg(r,y)\|,
\|D_r\hat{g}(r+ \triangle r, y + \triangle y) - D_r\hat{g}(r,y)\|,\\
&&\|D_y\hat{g}(r+ \triangle r, y + \triangle y) - D_y\hat{g}(r,y)\|
:(r,y) \in N(\alpha), |\triangle r|, |\triangle y| \leq \delta \bigg\} \end{eqnarray*} and \begin{eqnarray*}
\tau(\delta) &=& \frac{2(\|D_rf\|_K + \| D_y f \|_K + \|D_r\hat{g}\|_K + \|D_y\hat{g}\|_K) }{1 - \sigma}\eta(\delta) \end{eqnarray*} where $\sigma <1$ is as defined in property $(viii)$. It is observed that $\eta (\delta)$ converges to $0$ as $\delta$ approaches $0$.
Recalling that $\psi_1$ is defined to be a constant, we have $\triangle D\psi_1(z) \equiv 0$, and this implies that $\|\triangle D \psi_1(z) \| \leq \tau(\delta)$ for all $\triangle z$. Suppose that $\|\triangle D\psi_n(z)\| \leq \tau(\delta)$ is satisfied whenever $|\triangle z|
\leq$ min $\{\delta, \frac{\delta}{\|D_r\hat{g}\|_K + \| D_y\hat{g}\|_K} \}$. Now, \begin{displaymath} D\psi_{n+1}(z)= D \times C \times B \times A \end{displaymath} where \begin{displaymath} D =\left[\begin{array}{cc} D_rf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z)) & D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z)) \end{array} \right] \end{displaymath} is a $1 \times m$ matrix, \begin{displaymath} C = \left[\begin{array}{c} D\psi_n(\hat{g}(\psi_n(z),z))\\ I_{m-1}\end{array} \right]_{m \times (m-1)}, \end{displaymath} \begin{displaymath} B = \left[\begin{array}{c} D\hat{g}(\psi_n(z),z)\end{array} \right]_{(m-1) \times m}, \end{displaymath} and \begin{displaymath} A = \left[\begin{array}{c}D\psi_n(z) \\ I_{m-1}\end{array} \right]_{m \times (m-1)}. \end{displaymath} Multiplying the four matrices above and taking into account the block matrix notation, it follows that $D\psi_{n+1}(z)$ can be expressed in the following simpler form. \begin{eqnarray*} &&D\psi_{n+1}(z)\\ &&=D_rf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z)) D\psi_n(\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z)\\
&&\quad + D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z)\\ && \quad + D_rf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D\psi_n(\hat{g}(\psi_n(z),z))D_y\hat{g}(\psi_n(z),z)\\ && \quad + D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_y\hat{g}(\psi_n(z),z). \end{eqnarray*}
Each of the four terms added above is a $1 \times (m-1)$ vector. We will now estimate the quantity $\| \triangle D \psi_{n+1}(z)\|$. Using the definitions, we find after a straightforward calculation that $\triangle D\psi_{n+1}(z) = D\psi_{n+1}(z + \triangle z) - D \psi_{n+1}(z)$ can be written in the form
\begin{eqnarray*}
&&\triangle D\psi_{n+1}(z)\\ &=& \bigg\{D_r f(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \times D\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)) D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)\\ &&\quad \times D\psi_n(z + \triangle z)\\ &&- D_r f(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))\\ &&\quad \times D\psi_n(\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z)\bigg\}\\ &&+ \bigg\{ D_yf(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \times D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)D\psi_n(z + \triangle z)\\ &&- D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z) \bigg\}\\ &&+ \bigg\{D_r f(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \times D\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z))D_y\hat{g}(\psi_n(z + \triangle z),z +
\triangle z)\\ &&- D_r f(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D\psi_n(\hat{g}(\psi_n(z),z))D_y\hat{g}(\psi_n(z),z)\bigg\}\\ &&+ \bigg\{D_yf(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \times D_y\hat{g}(\psi_n(z + \triangle z),z + \triangle z)\\ &&- D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_y\hat{g}(\psi_n(z),z)\bigg\}. \end{eqnarray*} We denote the four bracketed terms above as $T_1$, $T_2$, $T_3$ and $T_4$, respectively. For instance, \begin{eqnarray*} T_2&=& D_yf(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \quad \times D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)D\psi_n(z + \triangle z)\\ &&\quad- D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z). \end{eqnarray*} Adding and subtracting appropriate terms yields, \begin{eqnarray*} &&T_2\\ &&=\bigg\{ D_yf(\psi_n(\hat{g}(\psi_n(z + \triangle z),z + \triangle z)),\hat{g}(\psi_n(z + \triangle z),z + \triangle z))\\ &&\quad \quad \times D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)D\psi_n(z + \triangle z)\\ &&-D_yf(\psi_n(\hat{g}(\psi_n(z ),z)),\hat{g}(\psi_n(z),z ))D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)\\ &&\quad \quad \times D\psi_n(z + \triangle z)\bigg\}\\ &&+ \bigg\{D_yf(\psi_n(\hat{g}(\psi_n(z ),z)),\hat{g}(\psi_n(z),z ))D_r\hat{g}(\psi_n(z + \triangle z),z + \triangle z)\\ &&\quad \quad \times D\psi_n(z + \triangle z)\\ &&\quad - D_yf(\psi_n(\hat{g}(\psi_n(z ),z)),\hat{g}(\psi_n(z),z ))D_r\hat{g}(\psi_n(z),z) D\psi_n(z+\triangle z)\bigg\}\\ && + \bigg\{D_yf(\psi_n(\hat{g}(\psi_n(z ),z)),\hat{g}(\psi_n(z),z ))D_r\hat{g}(\psi_n(z),z)D\psi_n(z+\triangle z)\\ && \quad - D_yf(\psi_n(\hat{g}(\psi_n(z),z)),\hat{g}(\psi_n(z),z))D_r\hat{g}(\psi_n(z),z)D\psi_n(z)\bigg\}. 
\end{eqnarray*} Using the triangle inequality and the above definition of the quantity $\eta(\delta)$, we obtain \begin{eqnarray*}
\|T_2\| &\leq& \eta(\delta)\|D_r\hat{g}\|_K \|D\psi_n\|_K + \|D_yf\|_K \eta(\delta)
\|D\psi_n\|_K\\
&& + \|D_yf\|_K\|D_r\hat{g}\|_K \|\triangle D\psi_n\|, \end{eqnarray*}
where the first term is valid only when $|\triangle z| \leq \frac{\delta }{\|D_r\hat{g}\|_K +
\| D_y\hat{g}\|_K}$ and $|\triangle z| \leq \delta$. (The estimate on $\triangle z$ is obtained by applying the chain rule and the mean value theorem to the first term in $T_2$.)
Since $\mathcal{L}(\psi_n) \leq 1$, it follows that $\|D\psi_n\| \leq 1$. By the induction hypothesis, $\|\triangle D\psi_n\| \leq \tau(\delta)$. This implies that \begin{equation}\label{eq:T2}
\|T_2\| \leq \eta(\delta)\bigg(\|D_r\hat{g}\|_K + \|D_yf\|_K\bigg) + \|D_yf\|_K\|D_r\hat{g}\|_K \tau(\delta). \end{equation} Using similar analyses, we obtain \begin{equation}\label{eq:T1}
\|T_1\| \leq \eta(\delta)(\|D_r\hat{g}\|_K + \|D_r f\|_K) + 2\|D_r f\|_K \|D_r\hat{g}\|_K \tau(\delta),\\ \end{equation} \begin{equation} \label{eq:T3}
\|T_3\| \leq \eta(\delta)(\|D_r f\|_K+ \|D_y\hat{g}\|_K ) +\|D_r f\|_K \|D_y\hat{g}\|_K \tau(\delta)\\ \end{equation} and \begin{equation}\label{eq:T4}
\|T_4\| \leq \eta(\delta)(\|D_y\hat{g}\|_K + \|D_yf\|_K). \end{equation} Combining equations (\ref{eq:T2}) - (\ref{eq:T4}) gives, \begin{eqnarray*}
\|\triangle D\psi_{n+1}\| &\leq& 2 \left\{ \|D_r\hat{g}\|_K + \|D_y\hat{g}\|_K +
\left\|\frac{\partial f}{\partial r}\right\|_K + \|D_yf\|_K \right \}\eta(\delta) \\
&& + \left \{\left\|\frac{\partial f}{\partial r}\right\|_K \big(2\|D_r\hat{g}\|_K + \|D_y\hat{g}\|_K
\big)+ \|D_yf\|_K\|D_r\hat{g}\|_K \right \} \tau(\delta). \end{eqnarray*} Substituting the definition of $\sigma$ in the above inequality, we find that
\begin{displaymath} \|\triangle D\psi_{n+1}\| \leq \tau(\delta)(1-\sigma) + \sigma \tau(\delta),\end{displaymath} which proves that
$ \|\triangle D\psi_{n+1}\| \leq \tau(\delta)$
whenever $|\triangle z| \leq \min\{ \delta, \frac{\delta}{\|D_r\hat{g}\|_K + \| D_y \hat{g}
\|_K} \}$.
Thus, we have proved by induction that $\|\triangle D\psi_n(z)\| \leq \tau(\delta)$ (whenever $|\triangle z|$ is sufficiently small) for all $n$. The quantity $\tau(\delta)$ tends to $0$ as $\delta$ approaches $0$. Hence, the sequence of functions $\{D\psi_n(z)\}$ is equicontinuous. Since the sequence $\{D \psi _n (z)\}$ is a uniformly bounded and equicontinuous sequence of functions on the compact set $M$, it follows from the Arzela-Ascoli theorem that there exists a subsequence $D\psi_{n_k}(z)$ which is uniformly convergent on $M$. Let $\rho(z)$ be the uniform limit of $D\psi_{n_k}(z)$ as $k \rightarrow \infty$. Since $\psi_{n_k}$ converges uniformly to $\phi$ while $D\psi_{n_k}$ converges uniformly to $\rho$, it follows that $\phi$ is differentiable with $D\phi = \rho$. Also, since $D \psi_n(z)$ is continuous for every $n$ and the convergence is uniform, we find that $\rho$ is also continuous. That is, $D \phi(z)$ is continuous. This implies that $\phi$ is of class $\mathcal{C}^1$.
Hence, the map $H:M \rightarrow M_+$ defined earlier as $H(y):=(\phi(y),y)$ is a $\mathcal{C}^1$ diffeomorphism. Thus we have proved that the manifold $M_+$ is diffeomorphic to $M$.
Analogously, one can prove that $M_-$ is diffeomorphic to $M$. This proves that $M$ has undergone a pitchfork bifurcation at $\mu_{\star}$, into a pair of locally attracting invariant manifolds $M_+$ and $M_-$, each diffeomorphic to $M$, for each $\mu \in (\mu_{\star},a)$. Thus the proof is complete. \end{proof}
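It is worth recording how hypotheses $(vi)$--$(viii)$ can be checked in a concrete case. For the rotationally symmetric model map $f_{\mu}(r,\theta)=(1+\mu)r-r^3$, $g_{\mu}(r,\theta)=\theta+\omega$ on an annular neighborhood of the unit circle in $\mathbb{R}^2$ (an illustrative example of ours, not drawn from the theorem), the estimates collapse to a single condition:

```latex
% For this model, \hat{g}_{\mu}(r,\theta) = \theta - \omega, so on any K(\mu):
%   D_y f_{\mu} = 0, \quad D_r \hat{g}_{\mu} = 0, \quad |D_y \hat{g}_{\mu}| = 1,
% and therefore
\begin{align*}
  c_{\star}(\mu) &= \|D_r f_{\mu}\|_{K(\mu)}\,(1+0) + 0 = \|D_r f_{\mu}\|_{K(\mu)},\\
  \sigma(\mu)    &= \|D_r f_{\mu}\|_{K(\mu)}\,(2\cdot 0 + 1) + 0 = \|D_r f_{\mu}\|_{K(\mu)},\\
  \text{while } (vii) \text{ reads }\ &\big(\|D_r f_{\mu}\|_{K(\mu)} + 0\big)(0+1) \le 1.
\end{align*}
% Since D_r f_{\mu} = (1+\mu) - 3r^2, all three hypotheses hold as soon as
%   \sup_{(r,\theta) \in K(\mu)} |(1+\mu) - 3r^2| < 1,
% which is satisfied, e.g., on K(\mu) = \{ \sqrt{\mu/2} \le |r| \le 2\sqrt{\mu} \}
% for all sufficiently small \mu > 0.
```

In this symmetric situation the three hypotheses reduce to the single radial contraction estimate, which is exactly the one-dimensional pitchfork condition away from the origin.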
There is also a side-reversing version of Theorem \ref{thm:PBthethm} that can be proved in a completely analogous manner; namely: \begin{thm}\label{thm:PBsiderev} Let $F_{\mu}$ and $M$ satisfy all the hypotheses of Theorem \ref{thm:PBthethm}, except with $F_{\mu}$ being side-reversing. Then for each $\mu \in (\mu_{\star},a)$, there exist manifolds $M_-(\mu)$ and $M_+(\mu)$, both $\mathcal{C}^1$ diffeomorphic to $M$, such that $F_{\mu}(M_+) = M_-$, $F_{\mu}(M_-) = M_+$ and $M_-(\mu) \cup M_+(\mu)$ is $F_{\mu}$-invariant and locally attracting. \end{thm}
In certain cases, the estimates in properties $(vi)$-$(viii)$ of Theorems \ref{thm:PBthethm} and \ref{thm:PBsiderev} can be combined into a single statement, as in the following result.
\begin{cor}\label{cor:combinedthms} Let the hypotheses of Theorems \ref{thm:PBthethm} and \ref{thm:PBsiderev} be as above, except that properties $(vi)$-$(viii)$ are replaced by the single estimate\\
$(ix)$ \quad $\|D_rf_{\mu}\|_K \|D_r\hat{g}_{\mu}\|_K + (\|D_rf_{\mu}\|_K +
\|D_yf_{\mu}\|_K)\bigg(1+\|D_r \hat{g}_{\mu}\|_K\bigg)<1$ for each $\mu \in (\mu_{\star},a)$. Then the conclusions of the theorems still follow. \end{cor} \begin{proof}One need only observe that $(vi)$-$(viii)$ follow directly from $(ix)$. \end{proof}
\begin{rem} If the function $F_{\mu}$ and invariant manifold $M$ are of class $\mathcal{C}^2$, property $(viii)$ of the above theorem is not essential, for in this case the equicontinuity of the sequence $\{D\psi_n\}$ in the above proof follows from the mean value theorem. Of course, if one wishes to prove the existence of bifurcated $\mathcal{C}^2$ diffeomorphs of $M$, an analog of property $(viii)$ involving second derivatives would be necessary. Such an estimate, although rather complicated, can be obtained in a straightforward manner, and we leave this to the reader. If both the map and invariant submanifold are $\mathcal{C}^k$, with $k>2$, it is not difficult to obtain a $k$th order derivative analog of $(viii)$ that would guarantee the existence of bifurcated $\mathcal{C}^k$ diffeomorphs of $M$. \end{rem}
\begin{cor} Let the hypotheses be the same as in Theorems \ref{thm:PBthethm} and \ref{thm:PBsiderev} with the following additional modifications: Property $(v)$ is replaced by\\ $(v')$ \quad there exists a function $\chi:[0,a) \rightarrow \mathbb{R}$ such that $0<\chi(\mu) \leq \alpha_1$, and $F_{\mu}(K(\mu)) \subset K(\mu)$,\\ where $K(\mu)$ is as in $(v)$ for every $\mu_{\star} <\mu < a$, and the following assumption is added.\\ $(x)$ \quad For every $\mu \in (\mu_{\star},a)$, $f_{\mu}(r,y) > r (<r)$ for $(r,y) \in (0,\chi(\mu)]\times M$ and $f_{\mu}(r,y) <r (>r)$ for $(r,y) \in [-\chi(\mu),0) \times M$ in the side-preserving (side-reversing) case.\\ Then, in addition to the conclusions of Theorems \ref{thm:PBthethm} and \ref{thm:PBsiderev}, we have the following dynamical properties: The submanifold $M_+(\mu)$ attracts all points $x=(r,y) \in (0,\alpha]\times M$, and $M_-(\mu)$ attracts all points $x=(r,y)\in[-\alpha,0) \times M$ in the side-preserving case; and in the side-reversing case, $N(\alpha)\backslash M$ is contained in the basin of attraction of $M_+(\mu) \cup M_-(\mu)$. \end{cor} \begin{proof} We shall verify only the additional result for $M_+(\mu)$ in the side-preserving case, since the proofs of all of the other cases are similar and require only obvious modifications. For the case at hand, it obviously suffices to show that the iterates of a point $(r_0,y_0)$ with $0<r_0<\chi(\mu)$ eventually wind up in $K(\mu)$. Setting $(r_n,y_n)=F_{\mu}^n(r_0,y_0)$, it follows from $(x)$ that $\{r_n\}$ is an increasing sequence of real numbers, which must exceed $\chi(\mu)$ for $n$ sufficiently large. Thus the proof is complete. \end{proof} \begin{rem} It is natural to ask about the bifurcation phenomena that may occur when $0<\mu<\mu_{\star}$
and $M$ has regions where $|D_rf_{\mu}|>1$ and regions where $|D_rf_{\mu}|<1$. If $F_{\mu}$ leaves all points of $M$ fixed, one can readily prove the existence of \textquotedblleft blistered\textquotedblright diffeomorphs of $M$ using one-dimensional theory. The
\textquotedblleft blister\textquotedblright regions, where $|D_rf_{\mu}|>1$, have a pair of locally attracting copies of $M$ manifested as inner or outer blisters on $M$, while the portion of $M$ inside the blister is locally repelling. However, when $F_{\mu}$ merely leaves $M$ invariant without fixing all the points, the situation is much more complicated and needs further investigation. \end{rem} \begin{rem} A particularly useful feature of our main results, Theorem \ref{thm:PBthethm}, Theorem \ref{thm:PBsiderev} (and Theorem \ref{thm:PBctsthm}, which appears in the sequel), is that they are constructive. The desired bifurcated manifolds can be determined to any desired accuracy by successive approximation. For example, to approximate $M_+(\mu)$ in the side-preserving case, one simply starts with $\psi_1$ as a positive constant so that its graph is in $K(\mu)$, and then computes successive approximations using the functional equation (\ref{eq:funcrel}). The iterate $\psi_n$ for $n$ sufficiently large yields an approximation $M_n$ that can be chosen to be arbitrarily $\mathcal{C}^1$ close to $M_+(\mu)$, and the error can be estimated from the definition of the iterates. \end{rem}
\section{Illustration of the (discrete) pitchfork bifurcation theorem}\label{sec:illus} In this section, we illustrate Theorem \ref{thm:PBthethm} proved in Section \ref{sec:PBdiscrete} with a canonical example. Let $A \in SO_n(\mathbb{R})$, the special orthogonal group of real $n \times n$ matrices, comprised of orthogonal matrices with determinant $1$. Define a linear map $L_A:\mathbb{R}^n \rightarrow \mathbb{R}^n$ as \begin{displaymath} L_A(x)=Ax.\end{displaymath} The map $L_A$ is an analytic (linear) diffeomorphism. Every $(n-1)$-sphere $S_{\alpha}$ of radius $\alpha >0$ is $L_A$-invariant. That is, $L_A(S_{\alpha}) = S_{\alpha}$
where $S_{\alpha}=\{x \in \mathbb{R}^n : |x|=\alpha \}$ and $\alpha >0$. Note that $S_1$ denotes a sphere of radius $1$ in the space on which $L_A$ acts. If, for instance, $A \in SO_2(\mathbb{R})$, then $L_A:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ and $S_1$ is the same as $S^1 \subset \mathbb{R}^2$. If $A \in SO_3(\mathbb{R})$, then $L_A:\mathbb{R}^3 \rightarrow \mathbb{R}^3$ and $S_1$ is the same as $S^2 \subset \mathbb{R}^3$. The subscript denotes the radius of the sphere, and the dimension of the sphere is one less than that of the ambient space.
Now define $\sigma_{\mu}:[0,\infty) \rightarrow [0,\infty)$ to be a $\mathcal{C}^\infty$ function such that $\sigma_{\mu}$ satisfies the following properties. \begin{enumerate} \item $\sigma_{\mu}' \equiv 0$ in a small neighborhood of $0$. \item $\sigma_{\mu}(s) > 1$ for $0 \leq s < \frac{4}{5}$. \item $\sigma_{\mu}(s)= 1- (s-1)^3 + \mu (s-1)$ for $\frac{4}{5} \leq s \leq \frac{6}{5}$. \item $\sigma_{\mu}(s) <1$ for $\frac{6}{5} < s$. \item $(s\sigma_{\mu}(s))' = s\sigma_{\mu}'(s) + \sigma_{\mu}(s) >0$ for $\mu \in [\frac{-1}{25},\frac{1}{25}]$. \end{enumerate} We fix a matrix $A$ in $SO_{n}(\mathbb{R})$ and define $F_{\mu}:\mathbb{R}^n \rightarrow \mathbb{R}^n$ as follows:
\begin{displaymath}F_{\mu}(x)=\sigma_{\mu}(|x|)L_A(x)= \sigma_{\mu}(|x|)Ax.\end{displaymath} It is easy to see that $F_{\mu}$ is a diffeomorphism, and that it leaves $S_1$ invariant. That is, $F_{\mu}(S_1)=S_1$. The discrete dynamical system governed by $F_{\mu}$ is \begin{equation} x_{n+1}=F_{\mu}(x_n). \end{equation} In the notation of Section \ref{sec:PBdiscrete}, $M=S_1$. Due to the symmetry of the sphere $S_1$, every point in $\mathbb{R}^n \backslash \{0\}$ can be uniquely described as being a radial projection on $S_1$, so the neighborhood $N(\alpha)$ is not restricted by the $\epsilon$-neighborhood theorem. However, due to the nature of $\sigma_{\mu}$, we let $\alpha=\frac{1}{5}$ and consider the neighborhood
$N(\frac{1}{5})=\{x \in \mathbb{R}^n: |x|\in [\frac{4}{5},\frac{6}{5}]\}$. We now check that all the hypotheses stated in Theorem \ref{thm:PBthethm} are satisfied. \begin{enumerate} \item Observe that $F_{\mu}$ is side-preserving for $\mu \in [\frac{-1}{25},\frac{1}{25}]$ since $A$ preserves orientation and $\sigma_{\mu}$ is positive-valued.\\ For this example,
\begin{displaymath}r=|x|-1 {\quad \rm and \quad} y = \frac{x}{|x|}.\end{displaymath}
This implies that after a change of variables, $F_{\mu}(x)=\sigma_{\mu}(|x|)Ax$ becomes \begin{eqnarray*}
F_{\mu}(r,y)&= \sigma_{\mu}(r+1)|x|A\frac{x}{|x|},\\ F_{\mu}(r,y)&= \sigma_{\mu}(r+1)(r+1)Ay. \end{eqnarray*} The property that $A$ preserves length is used in obtaining the above expression for $F_{\mu}$, and again in finding $f_{\mu}$ and $g_{\mu}$ below: \begin{eqnarray*}
f_{\mu}(r,y) = |\sigma_{\mu}(r+1) (r+1)Ay|-1 =(r+1)\sigma_{\mu}(r+1)-1,\\ g_{\mu}(r,y)= Ay. \end{eqnarray*} This implies that \begin{eqnarray*} \frac{\partial f}{\partial r}=(r+1) \sigma_{\mu}'(r+1) + \sigma_{\mu}(r+1),\\ D_yf(r,y) \equiv 0, \quad D_rg(r,y) \equiv 0 \quad {\rm and} \quad D_yg(r,y) \equiv A. \end{eqnarray*}
\item $\underset{(r,y)\in N(\frac{1}{5})}{\sup}|\frac{\partial f}{\partial r}|<1$ for all $\mu \in [\frac{-1}{25},0)$ since the maximum $1$ is attained at $\mu=0$ as shown in Figure \ref{fig:canonicalitem2}.
\begin{figure}
\caption{$r$ vs $\frac{\partial f}{\partial r}$ for the canonical example for $r \in [-0.2,0.2]$ as $\mu$ increases from $\frac{-1}{25}$ through $0$.}
\label{fig:canonicalitem2}
\end{figure}
\item For this example, $\mu_{\star}=0$ and $\inf |\frac{\partial f (0,y)}{\partial r}|>1$ for all $\mu \in (0,\frac{1}{25}]$. The infimum is attained at $\mu=0$ as illustrated in Figure \ref{fig:canonicalitem3}.
\begin{figure}
\caption{Plot of $\mu$ vs $\frac{\partial f}{\partial r}$ for the canonical example for $r=0$ and $\mu$ in the interval $[0,\frac{1}{25}]$.}
\label{fig:canonicalitem3}
\end{figure} \begin{figure}\label{fig:canonicalitem4}
\end{figure}
\item For this case, $\alpha_1$ can be chosen to be $0.15$. As illustrated in Figure \ref{fig:canonicalitem4},
$\underset{A}{\sup}|\frac{\partial f}{\partial r}|<1$, where $A=\{(r,y):0.15 \leq |r| \leq 0.2\}.$
\item $K(\mu)$ can be chosen to be $A$ and property $(v)$ follows from property $(iv)$.
\item Properties $(vi)$, $(vii)$ and $(viii)$ also follow from statement $(iv)$ since $D_rg_{\mu}(r,y) \equiv 0$, $D_yf(r,y) \equiv 0$ and $\|D_yg(r,y)\| =1$. \end{enumerate} Theorem \ref{thm:PBthethm} implies that $S_1$ undergoes a pitchfork bifurcation at $\mu_{\star}=0$. This is indeed the case: for $\mu \in (0,1/25]$, $F_{\mu}$ has three invariant spheres $S_{1-\sqrt{\mu}}$, $S_1$ and $S_{1+\sqrt{\mu}}$, where $S_1$ is locally repelling, and $S_{1-\sqrt{\mu}}$ and $S_{1+\sqrt{\mu}}$ are locally attracting. This is illustrated in Figures \ref{fig:S1_02}, \ref{fig:S1_03}, \ref{fig:S2_02} and \ref{fig:S2_03}.
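To see this directly, note that a sphere $S_{1+r}$ in $N(\frac{1}{5})$ is invariant precisely when $f_{\mu}(r,y)=r$, that is, $(r+1)\sigma_{\mu}(r+1)-1=r$, or equivalently $\sigma_{\mu}(r+1)=1$. By property (3) of $\sigma_{\mu}$, \begin{displaymath} \sigma_{\mu}(1+r)=1-r^3+\mu r = 1 \quad \Longleftrightarrow \quad r(r^2-\mu)=0, \end{displaymath} whose solutions for $\mu>0$ are $r=0$ and $r=\pm\sqrt{\mu}$, yielding exactly the three invariant spheres above.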
\begin{figure}
\caption{Any trajectory outside $S_1$ converges to $S_1$.}
\label{fig:S1_02}
\caption{Any trajectory inside $S_1$ converges to $S_1$.}
\label{fig:S1_03}
\end{figure} \begin{figure}\label{fig:S2_02}
\label{fig:S2_03}
\end{figure}
\begin{rem} The above example can be easily modified to illustrate Theorem \ref{thm:PBsiderev}. Define a map $G_{\mu} = R\circ F_{\mu}$, where
$R:\mathbb{R}^n \backslash \{0\} \rightarrow \mathbb{R}^n \backslash \{0\}$ is a smooth map such that $R(x)=\frac{2-|x|}{|x|}\,x$ on the neighborhood $N(\frac{1}{5})$ of $S_1$. Then $G_{\mu}$ is side-reversing, with all other properties the same as those of $F_{\mu}$. In this case, $S_1$ undergoes a pitchfork bifurcation at $\mu_{\star} = 0$ and for $\mu \in (0,\frac{1}{25}]$ the invariant manifolds are $S_1$ and $S_{1-\sqrt{\mu}} \cup S_{1+\sqrt{\mu}}$. Note that $G_{\mu}(S_{1-\sqrt{\mu}}) = S_{1+\sqrt{\mu}}$ and $G_{\mu}(S_{1+\sqrt{\mu}}) = S_{1-\sqrt{\mu}}$. \end{rem}
\section{Pitchfork bifurcation theorem for continuous dynamical system} \label{sec:PBcts} In this section, we state and prove a pitchfork bifurcation theorem for continuous dynamical systems that is analogous to the result for the discrete case given by Theorem \ref{thm:PBthethm}. The idea of the proof is to use the flow generated by a continuous system to reduce the problem to the discrete system covered by Theorem \ref{thm:PBthethm}. Consider a continuous dynamical system given by \begin{equation}\label{eq:ctsdynsys} \dot{x}=X(x,\mu) \end{equation} where $x\in \mathbb{R}^m$ and $\mu \in (-a,a) \subset \mathbb{R}$. The ($1$-parameter) vector field $X(x,\mu)$, also denoted as $X_{\mu}(x)$, is assumed to be of class $\mathcal{C}^1$ in its domain.
Let $\phi(t,x,\mu)$, which we also denote by $\phi_{\mu}^t(x)$, be the unique solution (flow) starting at $x$ when $t=0$, and $M$ be a compact, connected, boundaryless, codimension-$1$, $\phi_{\mu}$-invariant manifold of $\mathbb{R}^m$, which means
$\phi_{\mu}^t(M)=M$ for all $t \in \mathbb{R}$ and $|\mu|<a$. As in Section \ref{sec:PBdiscrete}, we define \begin{displaymath} N(\alpha)=\{x\in \mathbb{R}^m: d(x,M) \leq \alpha\}\end{displaymath} as a tubular neighborhood around $M$, where the $\epsilon$-neighborhood theorem \cite{GuPo} is applicable. Any point $x$ in the region $N(\alpha)$ can be written as \begin{displaymath} x=(r,y) \end{displaymath} where $r$ is the signed distance between $x$ and the manifold $M$ and $y$ is the unique point of $M$ closest to $x$. Recall that $r$ is positive if $x$ lies in the outer unbounded region of $\mathbb{R}^m \backslash M$ and $r$ is negative if $x$ lies in the inner bounded region of $\mathbb{R}^m \backslash M$. Again as in Section \ref{sec:PBdiscrete}, the notion of inner and outer regions is obtained as an application of the Jordan-Brouwer separation theorem \cite{GuPo}.
Note that $r=0$ when $x$ lies on $M$.
We shall assume that the vector field $X_{\mu}$ points into $N(\alpha)$ on $\partial N(\alpha )$ for every $\mu$ in the set $(-a,a)$, which means that positive semi-orbits of (\ref{eq:ctsdynsys}) that begin in $N(\alpha)$ can never exit this tubular neighborhood. Note that $X_{\mu}=(R_{\mu},Y_{\mu})$ in $r$,$y$-component form. Analogous to Section \ref{sec:PBdiscrete}, it follows that $\phi_{\mu}^t$ maps $N(\alpha)$ into itself for all $(t,\mu) \in [0,\infty) \times (-a,a)$. Now we can write the flow in terms of $r$ and $y$ components as \begin{displaymath}\phi(t,(r,y),\mu)=(\rho(t,(r,y),\mu),\psi(t,(r,y),\mu)), \end{displaymath} where $\rho(t,(r,y),\mu)$ is the signed distance between $\phi(t,(r,y),\mu)$ and $M$, and $\psi(t,(r,y),\mu)$ is the normal projection of $\phi(t,(r,y),\mu)$ onto $M$.
In order to obtain the estimates necessary to reduce the continuous case to the discrete case, we shall need to consider the derivative of the flow with respect to the initial condition $x=(r,y)$. By $D\phi(t,x,\mu)$, we mean the Jacobian matrix of $\phi$ with respect to $x$ defined as \begin{displaymath}D\phi(t,x,\mu)=\left[ \begin{array}{cc} D_r\rho(t,x,\mu) & D_y\rho(t,x,\mu)\\ D_r \psi(t,x,\mu) & D_y\psi(t,x,\mu) \end{array} \right]. \end{displaymath} It is well known that the matrix $D\phi(t,x,\mu)$ is the unique solution of the initial value problem (see e.g. in Hartman \cite{Hartman}) \begin{eqnarray}\label{eq:ctsIVP}
\dot{\Phi}=D_xX_{\mu}(\phi(t,x,\mu))\Phi
=\left[\begin{array}{cc} D_rR_{\mu} & D_yR_{\mu}\\ D_rY_{\mu} & D_yY_{\mu} \end{array}\right]_{\phi(t,x,\mu)}\Phi\\ \nonumber \Phi(0)=I_m, \end{eqnarray} where $I_m$ is the $m \times m$ identity matrix.
We shall make use of the following version of Gronwall's inequality. \begin{lem}\label{lem:Gronwal} Consider the linear matrix initial value problem \begin{eqnarray}\label{eq:linGronwal} \dot{\Phi}=\Gamma(t)\Phi\\ \nonumber \Phi(0)=I_m, \end{eqnarray} where $\Phi=(\phi_{ij})$, $\Gamma=(\gamma_{ij})$ and $\Gamma$ is a continuous matrix function on the real line $\mathbb{R}$. Let $\phi(t)$, $\Phi_{I}(t)$, $\Phi_{II}(t)$, $\Phi_{III}(t)$, $\gamma(t)$, $\Gamma_{I}(t)$, $\Gamma_{II}(t)$ and $\Gamma_{III}(t)$ be the submatrices defined as follows: \begin{displaymath} \phi(t)=\phi_{11}(t), \gamma(t)=\gamma_{11}(t),\end{displaymath} \begin{displaymath}\Phi_{I}(t)=\left[ \phi_{12}(t), \cdots, \phi_{1m}(t)\right], \Gamma_{I}(t)=\left[ \gamma_{12}(t), \cdots, \gamma_{1m}(t)\right],\end{displaymath} \begin{displaymath}\Phi_{II}(t)=\left[ \phi_{21}(t), \cdots, \phi_{m1}(t)\right]^T, \Gamma_{II}(t)=\left[ \gamma_{21}(t), \cdots, \gamma_{m1}(t)\right]^T,\end{displaymath} \begin{displaymath}\Phi_{III}(t)=\left[ \begin{array}{ccc}\phi_{22}(t) & \cdots & \phi_{2m}(t)\\ \vdots & \cdots & \vdots\\ \phi_{m2}(t) & \cdots & \phi_{mm}(t) \end{array}\right], \Gamma_{III}(t)=\left[ \begin{array}{ccc}\gamma_{22}(t) & \cdots & \gamma_{2m}(t)\\ \vdots & \cdots & \vdots\\ \gamma_{m2}(t) & \cdots & \gamma_{mm}(t) \end{array}\right],\end{displaymath} where the superscript $T$ denotes transpose, so that \begin{displaymath}\left[\begin{array}{cc}\dot{\phi} & \dot{\Phi}_{I}\\ \dot{\Phi}_{II} & \dot{\Phi}_{III} \end{array}\right]= \left[\begin{array}{cc}\gamma(t) & \Gamma_{I}(t)\\ \Gamma_{II}(t) & \Gamma_{III}(t) \end{array}\right] \left[ \begin{array}{cc} \phi & \Phi_{I}\\ \Phi_{II} & \Phi_{III} \end{array}\right] \end{displaymath} with $\phi(0)=1$, $\Phi_{I}(0)=0$, $\Phi_{II}(0)=0$, $\Phi_{III}(0)=I_{m-1}$.
Let $\sigma$, $\nu$ and $s$ be positive numbers satisfying \begin{equation}\label{eq:ineq} \sigma, \nu, \sigma^2, \nu^2 < s/4, \end{equation} and suppose that for some positive $t_{\star}$, \begin{equation}\label{eq:gamestimates}
\gamma(t)\leq -2s, \quad |\Gamma_{I}(t)|, |\Gamma_{II}(t)|\leq \sigma, \quad |\Gamma_{III}(t)|\leq \nu, \end{equation}
whenever $|t| \leq t_{\star}$. Then for all $|t| \leq t_{\star}$ the solution of (\ref{eq:linGronwal}) satisfies the estimates \begin{eqnarray}\label{eq:phiestimates} \nonumber
|\phi(t)| \leq E_0(t):= \kappa_1 e^{\lambda_-t} - \sigma(\lambda_+ + 2s)^{-1} \kappa_2 e^{\lambda_+ t},\\ \nonumber
|\Phi_{II}(t)| \leq E_2(t):= \sigma(\lambda_- -\nu)^{-1}\kappa_1 e^{\lambda_-t} - \kappa_2 e^{\lambda_+ t},\\
|\Phi_{I}(t)| \leq E_1(t):= \overline{\kappa}_1 e^{\lambda_-t} - \sigma(\lambda_+ +2s)^{-1}\overline{\kappa}_2e^{\lambda_+ t},\\ \nonumber
|\Phi_{III}(t)| \leq E_3(t):= \sigma(\lambda_- -\nu)^{-1}\overline{\kappa}_1 e^{\lambda_-t} - \overline{\kappa}_2e^{\lambda_+ t}, \end{eqnarray} where \begin{eqnarray}\label{eq:constants} \lambda_{\pm}:=-\frac{(2s-\nu)}{2}\left[ 1 \pm \sqrt{1+ \frac{4\sigma^2+\nu^2}{(2s-\nu)^2}} \quad \right],\\ \nonumber \kappa_1:=1-\sigma^2\{ (\lambda_+ +2s)[(\lambda_+ +2s) + \sigma^2(\lambda_- -\nu)]\}^{-1},\\ \nonumber \kappa_2:=-\sigma\{ (\lambda_- -\nu)(\lambda_+ +2s) [(\lambda_+ + 2s)+ \sigma^2(\lambda_- -\nu)]\}^{-1},\\ \nonumber \overline{\kappa}_1:=\sigma(\lambda_- -\nu)[(\lambda_+ +2s)(\lambda_- -\nu)+ \sigma^2]^{-1},\\ \nonumber \overline{\kappa}_2:=\{1+\sigma^2[(\lambda_+ +2s)(\lambda_- -\nu)]^{-1}\}^{-1}. \end{eqnarray} \end{lem} \begin{proof} It follows from equations (\ref{eq:linGronwal})-(\ref{eq:gamestimates}) in the hypotheses that
\begin{displaymath}|\phi(t)| \leq u(t), \quad |\Phi_{I}(t)| \leq v(t), \quad |\Phi_{II}(t)| \leq w(t), \quad
|\Phi_{III}(t)| \leq z(t) \end{displaymath}
for all $|t| \leq t_{\star}$, where $u$, $v$, $w$ and $z$ are the entries of the $2 \times 2$ matrix initial value problem \begin{eqnarray}\label{eq:IVP} \left[ \begin{array}{cc} \dot{u} & \dot{v}\\ \dot{w} & \dot{z} \end{array} \right] = \left[ \begin{array}{cc} -2s & \sigma\\ \sigma & \nu \end{array} \right] \left[ \begin{array}{cc} u & v\\ w & z \end{array} \right]\\ \nonumber \left[ \begin{array}{cc} u(0) & v(0) \\ w(0) & z(0) \end{array} \right]=I_2 \end{eqnarray} The eigenvalues (one negative, denoted by $\lambda_-$, and one positive, denoted by $\lambda_+$) of the constant matrix in (\ref{eq:IVP}) are easily computed and found to be given by (\ref{eq:constants}). Now (\ref{eq:IVP}) can be solved by elementary means to yield \begin{displaymath} u(t)=E_0(t), \quad v(t)=E_1(t), \quad w(t)=E_2(t), \quad z(t)=E_3(t), \end{displaymath} where $E_0$, $E_1$, $E_2$ and $E_3$ are as defined in (\ref{eq:phiestimates}). Accordingly, we have verified the desired estimates, thereby completing the proof. \end{proof} \begin{thm}\label{thm:PBctsthm} Let the vector field $X:\mathbb{R}^m \times(-a,a) \rightarrow \mathbb{R}^m$ be $\mathcal{C}^1$, and let $M$ be a compact, connected, codimension-$1$ invariant manifold for (\ref{eq:ctsdynsys}) for every $\mu \in (-a,a)$. Suppose that the following properties hold: \begin{enumerate} \item $X_{\mu}$ points into $N(\alpha)$ for all $(x,\mu)\in \partial N(\alpha) \times (-a,a)$. \item $D_rR(x,\mu) <0$ for all $x=(r,y)$ in the neighborhood $N(\alpha)$ for all $\mu \in (-a,0)$. \item There exists $0 \leq \mu_{\star} <a$ such that $D_rR(x,\mu) >0$ for all $(x,\mu) \in M \times (\mu_{\star},a)$. \item For each $\mu \in (\mu_{\star},a)$ there exists $0<\alpha_1(\mu) < \alpha$ and an $s>0$ such that $X_{\mu}$ points into \begin{displaymath} A(\mu)=\{ x \in \mathbb{R}^m: \alpha_1(\mu) \leq d(x,M) \leq \alpha \} \end{displaymath} on its boundary and $D_rR((r,y),\mu) \leq -2s$ for $(r,y) \in A(\mu)$.
\item Let $\sigma$ and $\nu$ be positive constants such that $\sigma$, $\nu$, $\sigma^2$, $\nu^2 < s/4$, and
\begin{displaymath}\|D_y R_{\mu}\|_{A(\mu)}, \quad \|D_rY_{\mu}\|_{A(\mu)} \leq \sigma, \quad \|D_yY_{\mu}\|_{A(\mu)} \leq \nu \end{displaymath} for all $\mu \in (\mu_{\star},a)$, and $\sigma$, $\nu$ are sufficiently small with respect to $s$ so that \begin{equation}\label{eq:Eest1} E_0(t)(1+E_2(-t)) + E_1(t)<1, \end{equation} \begin{equation}\label{eq:Eest2} (E_0(t)+ E_1(t))(E_2(-t) + E_3(-t))\leq 1, \end{equation} \begin{equation}\label{eq:Eest3} E_0(t)(2E_2(-t) +E_3(-t))+ E_1(t)E_2(-t)< 1, \end{equation} where $E_0$, $E_1$, $E_2$ and $E_3$ are as in Lemma \ref{lem:Gronwal}, for all $\mu \in (\mu_{\star},a)$ and each $1 \leq t \leq 2$. \end{enumerate} Then the invariant submanifold $M$ is locally attracting for $\mu \in (-a,0)$, and locally repelling for $\mu \in (\mu_{\star},a)$. Furthermore, for each $\mu \in (\mu_{\star},a)$ there exists a pair of $\mathcal{C}^1$ diffeomorphs $M_+(\mu)$ and $M_-(\mu)$ of $M$ in $A(\mu)$ such that both $M_+(\mu)$ and $M_-(\mu)$ are invariant for (\ref{eq:ctsdynsys}) and locally attracting. \end{thm} \begin{proof}Using the relation $(\dot{r}, \dot{y})=(R((r,y),\mu), Y((r,y),\mu))$, it follows from the mean value theorem that \begin{eqnarray*} \dot{r}&=R((r,y),\mu)=R((r,y),\mu) - R((0,y),\mu)\\ &= D_rR((r_{\star},y),\mu)r, \end{eqnarray*} where $r_{\star}$ lies between $0$ and $r$, and we have used the property $R((0,y),\mu)=0$, which follows from the invariance of $M$. Consequently, property $(ii)$ implies that $\dot{r} < 0$ when $r>0$ and $\dot{r} > 0$ when $r<0$, which means that trajectories tend toward $M$ as $t$ increases. Hence, $M$ is locally attracting for each $-a < \mu <0$. Similarly, one can also use the mean value theorem to show that it follows from property $(iii)$ that $M$ is locally repelling for $\mu \in (\mu_{\star},a)$.
From here on, we fix $\mu \in (\mu_{\star},a)$ and suppress it in order to simplify the notation. We shall first show that for each $t \in [1,2]$, the map $T^t$ defined as \begin{displaymath}T^t(x):=\phi(t,x),\end{displaymath} where $\phi$ is the flow generated by the differential equation (\ref{eq:ctsdynsys}), satisfies the hypotheses of Theorem \ref{thm:PBthethm}. As $\Phi=D\phi$ satisfies the initial value problem (\ref{eq:ctsIVP}), Lemma \ref{lem:Gronwal} is an ideal instrument for proving the desired result.
Observe from the form of the estimates (\ref{eq:phiestimates}) of Lemma \ref{lem:Gronwal} that for a given $s>0$ it is indeed possible to select positive numbers $\sigma$, $\nu$ sufficiently small for estimates (\ref{eq:Eest1})-(\ref{eq:Eest3}) of property $(v)$ to hold for all $1 \leq t \leq 2$. This is with the understanding that we may assume without loss of generality that we are in $T^2(A)$, so that we can still take advantage of the initial norm estimates for the terms with arguments $-t$ in (\ref{eq:Eest1})-(\ref{eq:Eest3}), which correspond to the inverse of $T^t$ owing to the group property of the flow $\phi$. Accordingly it follows from Lemma \ref{lem:Gronwal} and Theorem \ref{thm:PBthethm} that for each $\mu \in (\mu_{\star},a)$ and $t \in [1,2]$, the map $T^t$ has a unique pair of contractive invariant manifolds $M^t_{\pm}$ in $A$, which are $\mathcal{C}^1$-diffeomorphic with $M$.
We shall now show that the manifolds $M^t_+$ and $M^t_-$ are, in fact, the same for all $t\in \mathbb{R}$, and they are invariant for the entire flow $\phi$. It is enough to verify this for $M^t_+$, as the proof for $M^t_-$ is identical. Consider any rational number of the form $q=1+\frac{1}{m}$ lying between $1$ and $2$. Then, by definition \begin{displaymath}T^q(M^q_+)=M^q_+.\end{displaymath} Applying the map $T^q$ $m$ times to this equation yields \begin{displaymath}[T^q]^m(M^q_+)=M^q_+,\end{displaymath} which by the additivity property of the flow becomes \begin{displaymath}T^{(m+1)}(M^q_+)=M^q_+.\end{displaymath} But the unique contractive manifold for $T^1$ is $M^1_+$, and $T^{(m+1)}$ is an $(m+1)$-fold composite of $T^1$ with itself. Hence, \begin{displaymath}T^{(m+1)}(M^1_+)=M^1_+,\end{displaymath} so it follows from uniqueness that \begin{equation} \label{eq:M+uniq} M^q_+=M^1_+, \end{equation} and this must hold for all rational numbers $1\leq q \leq 2$.
It now follows from (\ref{eq:M+uniq}), the completeness of real numbers, and the continuity of the flow that $M^t_+=M^1_+$ for all $t\in [1,2]$, and \begin{equation}\label{eq:M+uniqin12} T^t(M^1_+)=M^1_+ \end{equation} for all $1\leq t \leq 2$. Any $t>2$ can be written as $t=m+\tau$, where $m$ is a positive integer and $\tau \in [1,2]$. Consequently, \begin{eqnarray*} T^t(M^1_+)&=T^{(m+\tau)}(M^1_+)=T^m\circ T^{\tau}(M^1_+)\\ &= T^m(M^1_+)=M^1_+, \end{eqnarray*} owing to (\ref{eq:M+uniqin12}) and the definition of $M^1_+$. Whence (\ref{eq:M+uniqin12})
holds for all $t\geq 1$. In fact, it holds for all $|t| \geq 1$ since \begin{displaymath} T^{-t}(T^t(M^1_+))=M^1_+=T^{-t}(M^1_+) \end{displaymath} whenever $t\geq 1$.
Finally, for any $\epsilon >0$, \begin{displaymath} T^{\epsilon}(M^1_+)=T^{(1+\epsilon)}\circ T^{-1}(M^1_+)=T^{(1+\epsilon)}(M^1_+)=M^1_+. \end{displaymath} Thus $T^t(M^1_+)=M^1_+$ for all $t \geq 0$, and it therefore follows as above that the same is true for all $t<0$ as well. We denote the unique invariant manifold $M^1_+$ by $M_+$. This yields the desired result that $T^t(M_+)=M_+$ for all $t\in \mathbb{R}$, which completes the proof. \end{proof}
\begin{rem} For continuous dynamical systems, only the side-preserving case can occur. \end{rem}
Just as in the case of a discrete dynamical system, we can obtain a more complete description of the dynamical systems in $N(\alpha)$ by making a minor additional assumption. \begin{cor} In addition to the hypotheses of Theorem \ref{thm:PBctsthm}, suppose that $R((r,y),\mu)$ is positive (negative) for $0<r\leq \alpha_1(\mu)$ ($-\alpha_1(\mu) \leq r <0$) whenever $\mu \in (\mu_{\star},a)$. Then $M_+(\mu)$ attracts all points of $N(\alpha)$ with $r>0$ and $M_-(\mu)$ attracts all points of $N(\alpha)$ with $r<0$. \end{cor} \begin{proof} The additional property guarantees that the positive semi-orbits in $N(\alpha)$ not lying in $M$, eventually enter $A(\mu)$ and are then attracted to $M_+(\mu)$ or $M_-(\mu)$. This completes the proof. \end{proof}
\section{Conclusions} We have proved that codimension-$1$, compact invariant manifolds in discrete dynamical systems,
undergo pitchfork bifurcations when the system satisfies suitable conditions. The hypotheses of
the theorem are easily verifiable estimates on the norms of partial derivatives of the function determining the discrete dynamical system, which makes this result well suited to a variety of applications. When the bifurcation parameter $\mu$ is between $0$ and $\mu_{\star}$, some portions of $M$ may be locally repelling and some locally attracting (in the normal direction), so the proof of our theorem would need to be modified to handle this case, which is an interesting subject for future investigation.
The case when the whole manifold $M$ bifurcates into $M_-$ and $M_+$ as $\mu$ increases through zero corresponds to $\mu_{\star}=0$. The fact that $\mu_{\star}$ can be greater than $0$ allows $M$ to bifurcate eventually, rather than requiring that it bifurcate all at once. The theorem is slightly weaker than its one-dimensional counterpart, since it does not completely determine the dynamics of the system in the region between a neighborhood of $M$ and the neighborhood $A$ of $M_-$ and $M_+$.
The pitchfork bifurcation in $\mathbb{R}$ is usually described as one stable fixed point bifurcating into two stable fixed points separated by an unstable fixed point. We have generalized this result to a compact, connected, boundaryless, codimension-$1$, locally attracting invariant submanifold of $\mathbb{R}^m$ that becomes locally repelling and bifurcates into two locally attracting diffeomorphic copies of itself separated by the locally repelling manifold. The techniques we have used here should enable us to obtain new results on higher dimensional versions of other types of bifurcations such as Hopf and saddle-node Hopf bifurcations (see e.g. \cite{KrOl}). We plan to investigate these and related generalizations in the future.
\end{document}
# Trigonometric functions and their properties
Trigonometric functions are a fundamental part of mathematics and physics. They are used to describe the relationship between angles and the sides of triangles. The most common trigonometric functions are sine, cosine, and tangent, which are denoted as $\sin(x)$, $\cos(x)$, and $\tan(x)$, respectively.
The properties of trigonometric functions include:
- The sine and cosine functions are periodic with a period of $2\pi$ and have a range of $[-1, 1]$.
- The tangent function has a range of $\mathbb{R}$ and a period of $\pi$.
- The reciprocal functions of sine and cosine are cosecant and secant, respectively, and are denoted as $\csc(x)$ and $\sec(x)$.
- The reciprocal function of tangent is cotangent and is denoted as $\cot(x)$.
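Python's `math` module has no built-in cosecant, secant, or cotangent, but they are easy to define as reciprocals (a small sketch; the helper names `csc`, `sec`, and `cot` are our own):

```python
import math

def csc(x):
    # cosecant: 1 / sin(x)
    return 1.0 / math.sin(x)

def sec(x):
    # secant: 1 / cos(x)
    return 1.0 / math.cos(x)

def cot(x):
    # cotangent: 1 / tan(x)
    return 1.0 / math.tan(x)

theta = math.radians(30)
print(f"csc(30 deg) = {csc(theta):.6f}")  # 1/sin(30 deg) = 2
print(f"sec(30 deg) = {sec(theta):.6f}")
print(f"cot(30 deg) = {cot(theta):.6f}")
```

Note that these helpers are undefined where the underlying function is zero (for example `cot(0)`), just like the mathematical functions they mirror.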
Let's consider an angle $\theta = 30^\circ$. We can calculate the sine, cosine, and tangent of this angle using the following equations:
$$\sin(30^\circ) = \frac{1}{2}$$

$$\cos(30^\circ) = \frac{\sqrt{3}}{2}$$
$$\tan(30^\circ) = \frac{\sqrt{3}}{3}$$
## Exercise
Calculate the sine, cosine, and tangent of the angle $\theta = 45^\circ$.
In Python, we can use the `math` module to calculate the trigonometric functions. Here's an example:
```python
import math
theta = 30
sin_theta = math.sin(math.radians(theta))
cos_theta = math.cos(math.radians(theta))
tan_theta = math.tan(math.radians(theta))
print(f"sin(30) = {sin_theta}")
print(f"cos(30) = {cos_theta}")
print(f"tan(30) = {tan_theta}")
```
This code calculates the sine, cosine, and tangent of the angle $\theta = 30^\circ$ and prints the results.
# Definite integrals of trigonometric functions
Definite integrals of trigonometric functions are commonly used in physics and engineering. They are used to find the area under the curve of a trigonometric function over a specific interval.
The definite integral of a trigonometric function can be calculated using integration techniques, such as substitution or integration by parts.
Let's calculate the definite integral of the sine function:
$$\int_0^{\pi} \sin(x) dx = 2$$
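If `sympy` is available, this value can be confirmed symbolically:

```python
import sympy as sp

x = sp.symbols('x')
# Exact definite integral of sin(x) over [0, pi]
result = sp.integrate(sp.sin(x), (x, 0, sp.pi))
print(result)  # 2
```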
## Exercise
Calculate the definite integral of the cosine function:
$$\int_0^{\pi} \cos(x) dx = 0$$
In Python, we can use the `scipy.integrate` module to calculate the definite integral of a trigonometric function. Here's an example:
```python
import math
from scipy.integrate import quad
def integrand(x):
return math.sin(x)
result, error = quad(integrand, 0, math.pi)
print(f"Integral of sin(x) from 0 to pi = {result}")
```
This code calculates the definite integral of the sine function from $0$ to $\pi$ and prints the result.
# Using Python to solve integration problems with trigonometric functions
Python provides powerful tools for solving integration problems involving trigonometric functions. We can use the `scipy.integrate` module to calculate the definite integral of a trigonometric function.
Let's calculate the definite integral of the sine function:
```python
import math
from scipy.integrate import quad
def integrand(x):
return math.sin(x)
result, error = quad(integrand, 0, math.pi)
print(f"Integral of sin(x) from 0 to pi = {result}")
```
This code calculates the definite integral of the sine function from $0$ to $\pi$ and prints the result.
## Exercise
Calculate the definite integral of the tangent function:
```python
import math
from scipy.integrate import quad
def integrand(x):
return math.tan(x)
result, error = quad(integrand, 0, math.pi / 2)
print(f"Integral of tan(x) from 0 to pi/2 = {result}")
```
This code attempts to calculate the definite integral of the tangent function from $0$ to $\pi/2$ and prints the result. Note that this is an improper integral that actually diverges, since $\tan(x) \to \infty$ as $x \to \pi/2$, so `quad` will issue an accuracy warning and the returned value should not be trusted.
# Working with the theta function in Python
The theta function, also known as the Jacobi theta function, is an important function in mathematical modeling and physics. It is defined as:
$$\vartheta_3(z; \tau) = \sum_{n=-\infty}^{\infty} e^{-\pi \tau n^2} e^{2\pi i n z}$$
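Because the terms decay like $e^{-\pi \tau n^2}$ for $\tau > 0$, a short truncated sum of this series already converges to machine precision. Here is a sketch using only the standard library (the helper `theta3` and the cutoff `n_max` are our own choices):

```python
import cmath
import math

def theta3(z, tau, n_max=50):
    # Truncated partial sum of
    #   sum_{n=-n_max}^{n_max} exp(-pi*tau*n^2) * exp(2*pi*i*n*z)
    total = 0 + 0j
    for n in range(-n_max, n_max + 1):
        total += math.exp(-math.pi * tau * n * n) * cmath.exp(2j * math.pi * n * z)
    return total

print(theta3(1.0, 0.5))
```

For real $z$ the imaginary parts of the $+n$ and $-n$ terms cancel, so the result is real up to rounding, and the series is manifestly periodic in $z$ with period $1$.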
In Python, we can use the `mpmath` module, which provides the Jacobi theta functions through `jtheta`, to work with the theta function. Here's an example:

```python
from mpmath import jtheta, exp, pi

z = 1
tau = 0.5

# mpmath's jtheta(3, w, q) = sum over n of q**(n*n) * exp(2j*n*w);
# taking q = exp(-pi*tau) and w = pi*z reproduces the series above
q = exp(-pi * tau)
theta_z = jtheta(3, pi * z, q)

print(f"Theta function at z = {z}, tau = {tau} = {theta_z}")
```
```
This code calculates the theta function at $z = 1$ and $\tau = 0.5$ and prints the result.
## Exercise
Calculate the theta function at $z = 2$ and $\tau = 1$.
In Python, we can again use the `mpmath` module's `jtheta` to work with the theta function. Here's an example:

```python
from mpmath import jtheta, exp, pi

z = 2
tau = 1

# With q = exp(-pi*tau) and w = pi*z, jtheta(3, w, q) matches the series above
q = exp(-pi * tau)
theta_z = jtheta(3, pi * z, q)

print(f"Theta function at z = {z}, tau = {tau} = {theta_z}")
```
```
This code calculates the theta function at $z = 2$ and $\tau = 1$ and prints the result.
# Applications of the theta function in mathematical modeling
The theta function has numerous applications in mathematical modeling and physics. Some of these applications include:
- Solving partial differential equations, such as the heat equation.
- Modeling quantum systems, such as the hydrogen atom and superconductivity.
- Studying number theory and modular forms.
In these applications, solutions are expressed as series or integrals involving the theta function.
## Exercise
Calculate the theta function at $z = 3$ and $\tau = 0.25$.
In Python, we can use `mpmath`'s `jtheta` function (with argument $\pi z$ and nome $e^{-\pi\tau}$) to evaluate the theta function. Here's an example:
```python
import math
from mpmath import jtheta

z = 3
tau = 0.25

theta_z = jtheta(3, math.pi * z, math.exp(-math.pi * tau))
print(f"Theta function at z = {z}, tau = {tau}: {theta_z}")
```
This code calculates the theta function at $z = 3$ and $\tau = 0.25$ and prints the result.
# Solving indefinite integrals using Python
Indefinite integrals are used to find the antiderivative of a function. In Python, we can use the `sympy` module to solve indefinite integrals of trigonometric functions.
Let's calculate the indefinite integral of the sine function:
```python
from sympy import symbols, sin, integrate
x = symbols('x')
integral = integrate(sin(x), x)
print(f"Integral of sin(x) = {integral}")
```
This code calculates the indefinite integral of the sine function and prints the result.
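SymPy returns the antiderivative without the constant of integration. We can recover a definite integral from it via the fundamental theorem of calculus, evaluating at the endpoints with `subs`:

```python
from sympy import symbols, sin, integrate, pi

x = symbols('x')

# Antiderivative of sin(x): -cos(x)
F = integrate(sin(x), x)

# Fundamental theorem of calculus: F(pi) - F(0)
definite = F.subs(x, pi) - F.subs(x, 0)
print(definite)  # 2
```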
## Exercise
Calculate the indefinite integral of the tangent function.
In Python, we can use the `sympy` module to solve indefinite integrals of trigonometric functions. Here's an example:
```python
from sympy import symbols, tan, integrate
x = symbols('x')
integral = integrate(tan(x), x)
print(f"Integral of tan(x) = {integral}")
```
This code calculates the indefinite integral of the tangent function and prints the result.
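A quick way to verify any antiderivative SymPy returns is to differentiate it and check that the difference with the original integrand simplifies to zero:

```python
from sympy import symbols, tan, integrate, diff, simplify

x = symbols('x')

# Antiderivative of tan(x); SymPy omits the constant of integration.
F = integrate(tan(x), x)

# Differentiating should recover the integrand; the residual simplifies to 0.
check = simplify(diff(F, x) - tan(x))
print(F, "| residual:", check)
```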
# Numerical integration methods in Python
Numerical integration methods are used to approximate the value of an integral. In Python, we can use the `scipy.integrate` module to calculate numerical integrals of trigonometric functions.
Let's calculate the numerical integral of the sine function:
```python
import math
from scipy.integrate import quad

def integrand(x):
    return math.sin(x)

result, error = quad(integrand, 0, math.pi)
print(f"Numerical integral of sin(x) from 0 to pi = {result}")
```
This code calculates the numerical integral of the sine function from $0$ to $\pi$ and prints the result.
## Exercise
Calculate the numerical integral of the tangent function. Since $\int_0^{\pi/2} \tan(x)\,dx$ diverges, we integrate from $0$ to $\pi/4$.
In Python, we can use the `scipy.integrate` module to calculate numerical integrals of trigonometric functions. Here's an example:
```python
import math
from scipy.integrate import quad

def integrand(x):
    return math.tan(x)

result, error = quad(integrand, 0, math.pi / 4)
print(f"Numerical integral of tan(x) from 0 to pi/4 = {result}")
```
This code calculates the numerical integral of the tangent function from $0$ to $\pi/4$ and prints the result.
# Solving improper integrals using Python
Improper integrals are integrals with infinite limits of integration or an integrand that is unbounded on the interval. In Python, we can use the `scipy.integrate` module to calculate improper integrals of trigonometric functions.
Let's calculate the improper integral of $\sin(x)/x$ (the Dirichlet integral):
```python
import math
from scipy.integrate import quad

def integrand(x):
    if x == 0:
        return 1.0  # limit of sin(x)/x as x -> 0
    return math.sin(x) / x

result, error = quad(integrand, 0, math.inf)
print(f"Improper integral of sin(x)/x from 0 to inf = {result}")
```
This code calculates the improper integral of $\sin(x)/x$ over $[0, \infty)$ and prints the result. Because the integrand oscillates all the way to infinity, `quad` may issue a convergence warning and report a sizable error estimate here.
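For this particular integral, SymPy can produce the exact value symbolically, which is a useful cross-check on the numerical result:

```python
from sympy import symbols, sin, integrate, oo, pi

x = symbols('x')

# The Dirichlet integral: sin(x)/x over [0, infinity)
exact = integrate(sin(x) / x, (x, 0, oo))
print(exact)  # pi/2
```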
## Exercise
Investigate the improper integral of the tangent function, $\int_0^{\pi/2} \tan(x)\,dx$. What does `quad` report?
This integral is improper because the integrand is unbounded at $\pi/2$, and in fact it diverges. Running `quad` on it shows how SciPy reports such a failure:
```python
import math
from scipy.integrate import quad

def integrand(x):
    return math.tan(x)

# This improper integral diverges: quad typically emits an
# IntegrationWarning and reports a large error estimate.
result, error = quad(integrand, 0, math.pi / 2)
print(f"quad reports {result} with error estimate {error}")
```
This code attempts the improper integral of the tangent function; the warning and the large error estimate are how `quad` signals that the integral does not converge.
# Solving integrals involving trigonometric functions and theta
Integrals involving trigonometric functions and theta can be solved using various integration techniques and numerical methods. In Python, we can combine the `scipy.integrate` module with `mpmath`'s `jtheta` function.
Let's calculate the integral of the product of the sine function and the theta function. Since $\vartheta_3(z; \tau)$ does not depend on the integration variable $x$, we evaluate it once, outside the integrand:
```python
import math
from scipy.integrate import quad
from mpmath import jtheta

z = 1
tau = 0.5

# theta_3(z; tau) is a constant with respect to x
theta_const = float(jtheta(3, math.pi * z, math.exp(-math.pi * tau)))

def integrand(x):
    return math.sin(x) * theta_const

result, error = quad(integrand, 0, math.pi)
print(f"Integral of sin(x) * theta(z, tau) from 0 to pi = {result}")
```
This code calculates the integral of the product of the sine function and the theta function and prints the result; since $\int_0^\pi \sin(x)\,dx = 2$, the result is simply $2\,\vartheta_3(z;\tau)$.
## Exercise
Calculate the integral of the product of the cosine function and the theta function.
In Python, we can combine the `scipy.integrate` module with `mpmath`'s `jtheta` function to solve integrals involving trigonometric functions and theta. Since the theta factor is constant with respect to $x$, we evaluate it once outside the integrand:
```python
import math
from scipy.integrate import quad
from mpmath import jtheta

z = 2
tau = 1

# theta_3(z; tau) is a constant with respect to x
theta_const = float(jtheta(3, math.pi * z, math.exp(-math.pi * tau)))

def integrand(x):
    return math.cos(x) * theta_const

result, error = quad(integrand, 0, math.pi)
print(f"Integral of cos(x) * theta(z, tau) from 0 to pi = {result}")
```
This code calculates the integral of the product of the cosine function and the theta function; since $\int_0^\pi \cos(x)\,dx = 0$, the result is (numerically) zero.
# Comparison of different numerical integration methods in Python
Different numerical integration methods can be used to approximate the value of an integral. In Python, we can compare the accuracy and efficiency of these methods using the `scipy.integrate` module.
Let's compare `quad`, the trapezoidal rule, and Simpson's rule for the numerical integration of the sine function. (SciPy's `romberg` function has been deprecated and removed in recent releases, so we restrict the comparison to these three.) Unlike `quad`, which takes a callable and returns an error estimate, `trapezoid` and `simpson` operate on sampled values of the integrand and return only the estimate:
```python
import numpy as np
from scipy.integrate import quad, trapezoid, simpson

def integrand(x):
    return np.sin(x)

# quad: adaptive quadrature on the callable, with an error estimate
result_quad, error_quad = quad(integrand, 0, np.pi)

# trapezoid and simpson: composite rules on sampled values
x = np.linspace(0, np.pi, 101)
y = integrand(x)
result_trap = trapezoid(y, x)
result_simps = simpson(y, x=x)

print(f"Quad: {result_quad}, error estimate: {error_quad}")
print(f"Trapezoidal: {result_trap}")
print(f"Simpson's: {result_simps}")
```
This code calculates the numerical integral of the sine function using adaptive quadrature, the trapezoidal rule, and Simpson's rule, and prints the results.
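To make the accuracy comparison concrete, we can watch how each rule's error shrinks as the number of sample points grows; the trapezoidal rule converges like $O(h^2)$ while Simpson's rule converges like $O(h^4)$. A minimal sketch:

```python
import numpy as np
from scipy.integrate import trapezoid, simpson

exact = 2.0  # integral of sin(x) over [0, pi]

for n in (11, 101, 1001):
    x = np.linspace(0, np.pi, n)
    y = np.sin(x)
    err_trap = abs(trapezoid(y, x) - exact)
    err_simps = abs(simpson(y, x=x) - exact)
    print(f"n = {n:5d}: trapezoid error = {err_trap:.2e}, Simpson error = {err_simps:.2e}")
```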
## Exercise
Compare `quad`, the trapezoidal rule, and Simpson's rule for the numerical integration of the tangent function. Since $\int_0^{\pi/2} \tan(x)\,dx$ diverges, we integrate over $[0, \pi/4]$ instead. Here's an example:
```python
import numpy as np
from scipy.integrate import quad, trapezoid, simpson

def integrand(x):
    return np.tan(x)

result_quad, error_quad = quad(integrand, 0, np.pi / 4)

x = np.linspace(0, np.pi / 4, 101)
y = integrand(x)
result_trap = trapezoid(y, x)
result_simps = simpson(y, x=x)

print(f"Quad: {result_quad}, error estimate: {error_quad}")
print(f"Trapezoidal: {result_trap}")
print(f"Simpson's: {result_simps}")
```
This code calculates the numerical integral of the tangent function over $[0, \pi/4]$ using all three methods and prints the results.
# Advanced topics and applications of integration in Python
Advanced topics and applications of integration in Python include:
- Solving integrals involving special functions, such as the gamma function and the error function.
- Studying the convergence and divergence of integrals.
- Modeling complex physical systems, such as fluid dynamics and electromagnetism.
Python provides powerful tools for solving integration problems involving trigonometric functions and theta, as well as more advanced topics in mathematics and physics.
## Exercise
Calculate the integral of the product of the gamma function and the error function.
In Python, we can use the `scipy.integrate` module and the `scipy.special` module to solve integrals involving special functions. Here's an example:
```python
from scipy.integrate import quad
from scipy.special import gamma, erf
def integrand(x):
return gamma(x) * erf(x)
result, error = quad(integrand, 0, 1)
print(f"Integral of gamma(x) * erf(x) from 0 to 1 = {result}")
```
This code calculates the integral of the product of the gamma function and the error function and prints the result.
# Course
Table Of Contents
1. Trigonometric functions and their properties
2. Definite integrals of trigonometric functions
3. Using Python to solve integration problems with trigonometric functions
4. Working with the theta function in Python
5. Applications of the theta function in mathematical modeling
6. Solving indefinite integrals using Python
7. Numerical integration methods in Python
8. Solving improper integrals using Python
9. Solving integrals involving trigonometric functions and theta
10. Comparison of different numerical integration methods in Python
11. Advanced topics and applications of integration in Python | Textbooks |
\begin{document}
\title{A Solver for a Theory of Strings and Bit-vectors}
\author{Sanu Subramanian\inst{1} \and Murphy Berzish\inst{1} \and \\ Yunhui Zheng\inst{2} \and Omer Tripp\inst{3} \and Vijay Ganesh\inst{1}}
\institute{University of Waterloo, Waterloo, Canada \and IBM Research, Yorktown Heights, USA \and Google, USA}
\maketitle
In this paper we present a solver for a many-sorted first-order quantifier-free theory $T_{w,bv}$ of string equations, string length represented as bit-vectors, and bit-vector arithmetic aimed at formal verification, automated testing, and security analysis of C/C++ applications. Our key motivation for building such a solver is the observation that existing string solvers are not efficient at modeling the combined theory over strings and bit-vectors. Current approaches either model such a combination of theories by a reduction of strings to bit-vectors and then use a bit-vector solver as a backend, or model bit-vectors as natural numbers and use a backend solver for the combined theory of strings and natural numbers. Both of these approaches are inefficient for different reasons. Modeling strings as bit-vectors destroys a lot of structure inherent in string equations, thus missing opportunities for efficiently deciding such formulas, and modeling bit-vectors as natural numbers is well known to be inefficient. Hence, there is a clear need for a solver that models strings and bit-vectors natively.
Our solver Z3strBV\xspace is such a decision procedure for the theory $T_{w,bv}$ that combines solvers for bit-vectors and string equations. We demonstrate experimentally that Z3strBV\xspace is significantly more efficient than either reducing string/bit-vector constraints to constraints over strings and natural numbers and solving those, or modeling strings as bit-vectors. Additionally, we prove decidability for the theory $T_{w,bv}$. We also propose two optimizations, which can be adapted to other contexts. The first accelerates convergence on a consistent assignment of string lengths, and the second --- dubbed \emph{library-aware SMT solving} --- fixes summaries for built-in string functions (e.g., \texttt{strlen} in C/C++), which Z3strBV\xspace consumes directly instead of analyzing the functions from scratch each time. Finally, we demonstrate experimentally that Z3strBV\xspace is able to detect nontrivial overflows in real-world system-level code, as confirmed against 7 security vulnerabilities from the CVE and Mozilla databases.
\section{Introduction}
In recent years, constraint solvers have increasingly become the basis of many tools for hardware verification \cite{stp}, program analysis \cite{FSE13zheng,cav15,PISA,hampi} and automated testing \cite{EXE,Cadar:2008:KUA:1855741.1855756,Sen:2013:JSR:2491411.2491447,Sen:2005:CCU:1081706.1081750,Godefroid:2005:DDA:1065010.1065036}. The key idea is to model behaviors of interest of the subject system as logical constraints, and then discharge the resulting constraints to a SAT or SMT solver such that solutions generated by the solver serve as test inputs for the system-under-verification.
Naturally, the ability to carry out reasoning in this fashion is dependent on the expressive power and efficiency of the solver, which has motivated significant effort in developing useful theories, integrating them into solvers, and optimizing their ability to solve such rich classes of constraints. Examples include the quantifier-free (QF) first-order theory of bit-vectors, which is effective in modeling machine arithmetic \cite{stp,z3}; the QF theory of arrays, which enables modeling of machine memory~\cite{stp,z3}; and the theory of uninterpreted functions and integers to model abstractions of program state \cite{Cadar:2008:KUA:1855741.1855756}.
\paragraph{\bf Existing Solutions, and the need for a New Solver:} There are several powerful tools to reason about string-manipulating code \cite{hampi,PISA,s3,cav15,CVC4-CAV14}. All these tools support the theory of string equations, where the length function --- applied to a string --- returns an arbitrary-precision (or unbounded) natural number. (HAMPI~\cite{hampi} is an exception since it only deals with bounded-length string variables.) While effective, these solvers are not adequate for deciding the quantifier-free first-order many-sorted theory $T_{w,bv}$ over the language $L_{w,bv}$ of string concatenations, bit-vector-sorted string length terms, bit-vector arithmetic, and equality predicate over string and bit-vector terms. The reason is that modeling bit-vectors as natural numbers complicates reasoning about potential overflows (or underflows), which is an artifact of the fixed-precision machine (bit-vector) representation of numeric values. Precise modeling of arithmetic overflow/underflow is a key reason to model numeric values in terms of bit-vectors and not integers, and the motivation for a large body of work on bit-vector solvers\cite{DBLP:conf/cav/GaneshD07,z3}.
Another approach to solving $L_{w,bv}$-formulas is to represent strings as bit-vectors. In fact, many symbolic execution engines like KLEE \cite{Cadar:2008:KUA:1855741.1855756} and S2E \cite{Chipounov:2011:SPI:1950365.1950396} perform reasoning at this level. They collect branch conditions as bit-vector constraints and solve them using STP \cite{DBLP:conf/cav/GaneshD07} or Z3 \cite{z3}. However, these engines perform poorly on programs that make heavy use of string functions, as the low-level bit-vector representation of program data fails to efficiently capture the high-level semantics of the string data type \cite{Chipounov:2011:SPV}.
As this brief survey highlights, currently there is a disturbing gap. String solvers that are based on the theory of string equations and linear natural number arithmetic are inadequate for solving constraints over strings and bit-vectors, given their limited ability to model overflow, underflow, bit-wise operations, and pointer casting. At the same time, there is a lot of empirical evidence that bit-vector solvers are not able to perform direct reasoning on strings efficiently~\cite{anathesis}. Furthermore, they cannot handle unbounded strings.
Hence, we were motivated to build Z3strBV\xspace{}, which solves $L_{w,bv}$-formulas by treating strings and bit-vectors natively. We did so by combining a solver for strings, augmented with a bit-vector-sorted length function, and a solver for bit-vectors within the Z3 SMT solver combination framework.
\input{motivation.tex}
\paragraph{\bf Summary of Contributions:} The key novelty and contribution of this paper is a solver algorithm for a combined theory of string equations, string length modelled using bit-vectors, and bit-vector arithmetic, its theoretical underpinnings, implementation, and evaluation over several sets of benchmarks. The contributions, in more detail, are:
\begin{enumerate}
\item {\bf (Formal Characterization)} We begin with a formal characterization of a theory of string equations, string length as bit-vectors, and linear arithmetic over bit-vectors, and provide a (constructive) proof of decidability for this theory. In particular, we want to stress that the decidability of such a theory is non-trivial. At first glance it may seem that all models for this theory have finite universes since bit-vector arithmetic has a finite universe. However, this is misleading since the search space over all possible strings remains infinite even when string length is a fixed-width bit-vector due to the wraparound semantics of bit-vector arithmetic. (For example, for any fixed bit-width $k$, there are infinitely many strings whose length is congruent to a given constant modulo $2^k$.)
\item {\bf (Solver Algorithm)} We then specify a practical solver algorithm for this theory that is efficient for a large class of verification, testing, analysis and security applications. Additionally, we formally prove that our solving algorithm is sound.
\item {\bf (Enhancements)} We propose two optimizations to our solver algorithm, whose applicability reaches beyond the string/bit-vector scope. We introduce the notion of \emph{library-aware solving}, whereby the solver reasons about certain C/C++ string library functions natively at the contract (or summary) level, rather than having to (re)analyze their actual code (and corresponding constraints) each time the symbolic analysis encounters them. We also propose a ``binary search'' heuristic, which allows fast convergence on consistent string lengths across the string and bit-vector solvers. This heuristic can be of value in other theory combinations such as the ones over theories of string and natural number.
\item {\bf (Implementation and Evaluation)} Finally, we describe the implementation of our solver, Z3strBV\xspace, which is an extension of the Z3str2 string solver. We present experimental validation for the viability and significance of our contributions, including in particular (i) the ability to detect overflows in real-world systems using our solver, as confirmed via reproduction of several security vulnerabilities from the CVE vulnerability database, and (ii) the significance of the two optimizations outlined above. \end{enumerate}
\section{Syntax and Semantics}
\subsection{The Syntax of the Language $L_{w, bv}$}
We define the sorts and constant, function, and predicate symbols of the countable first-order many-sorted language $L_{w, bv}$.
\noindent \textbf{Sorts:} The language is many-sorted with a string sort $str$ and a bit-vector sort $bv$. The language is parametric in $k$, the width of bit-vector terms (in number of bits). The Boolean sort $Bool$ is standard. When necessary, we write the sort of an $L_{w,bv}$-term $t$ explicitly as $t:sort$.
\noindent \textbf{Finite Alphabet:} We assume a finite alphabet $\Sigma$ of characters over which all strings are defined.
\noindent \textbf{String and Bit-vector Constants}: We define a disjoint two-sorted set of constants $Con = Con_{str} \cup Con_{bv}$. The set $Con_{str}$ is a subset of $\Sigma^{*}$, the set of all finite-length string constants over the finite alphabet $\Sigma$. Elements of $Con_{str}$ will be referred to as \textit{string
constants}, or simply \textit{strings}. $\epsilon$ denotes the empty string. Elements of $Con_{bv}$ are binary constants over $k$ digits. As necessary, we may subscript bit-vector constants by $bv$ to indicate that their sort is ``bit-vector''.
\noindent \textbf{String and Bit-vector Variables:} We fix a disjoint two-sorted set of variables $var = var_{str} \cup var_{bv}$. $var_{str}$ consists of string variables, denoted $X, Y, \hdots$ that range over string constants, and $var_{bv}$ consists of bit-vector variables, denoted $a, b, \hdots$ that range over bit-vectors.
\noindent \textbf{String Function Symbols:} The string function symbols include the concatenation operator $\cdot : str \times str \to str$ and the length function $strlen_{bv} : str \to bv$.
\noindent \textbf{Bit-vector Arithmetic Function Symbols:} The bit-vector function symbols include binary $k$-bit addition (with overflow) $+: bv \times bv \to bv$. Following standard practice in mathematical logic literature, we allow multiplication by constants as a shorthand for repeated addition.
\noindent \textbf{String Predicate Symbols:} The predicate symbols over string terms include equality and inequality: $=,\neq : str \times str \to Bool$.
\noindent \textbf{Bit-vector Predicate Symbols:} The predicate symbols over bit-vector terms include $=$, $\ne$, $<$, $\le$, $>$, $\ge$ (with their natural meaning), all of which have signature $bv \times bv \to Bool$.
\subsection{Terms and Formulas of $L_{w, bv}$}
\noindent{\textbf{Terms:}} $L_{w,bv}$-terms may be of string or bit-vector sort. A string term $t_{str}$ is inductively defined as an element of $var_{str}$, an element of $Con_{str}$, or a concatenation of string terms. A bit-vector term $t_{bv}$ is inductively defined as an element of $var_{bv}$, an element of $Con_{bv}$, the length function applied to a string term, a constant multiple of a length term, or a sum of length terms. (For convenience we may write the concatenation and addition operators as $n$-ary functions, even though they are defined to be binary operators.)
\noindent{\textbf{Atomic Formulas:}} The two types of atomic formulas are (1) word equations ($A_{w}$) and (2) inequalities over bit-vector terms ($A_{bv}$).
\noindent{\textbf{QF Formulas:}} We use the term ``QF formula'' to refer to any Boolean combination of atomic formulas, where each free variable is implicitly existentially quantified and no explicit quantifiers may be written in the formula.
\noindent{\bf Formulas and Prenex Normal Form:} $L_{w,bv}$-formulas are defined inductively over atomic formulas. We assume that formulas are always represented in prenex normal form (i.e., a block of quantifiers followed by a QF formula).
\noindent{\bf Free and Bound Variables, and Sentences:} We say that a variable under a quantifier in a formula $\phi$ is bound. Otherwise we refer to variables as free. A formula with no free variables is called a sentence.
\subsection{Semantics and Canonical Model over the Language $L_{w,bv}$}
We fix a string alphabet $\Sigma$ and a bit-vector width $k$. Given $L_{w, bv}$-formula $\phi$, an \textit{assignment} for $\phi$ w.r.t. $\Sigma$ is a map from the set of free variables in $\phi$ to $Con_{str} \cup Con_{bv}$, where string (\emph{resp.} bit-vector) variables are mapped to string (\emph{resp.} bit-vector) constants. Given such an assignment, $\phi$ can be interpreted as an assertion about $Con_{str}$ and $Con_{bv}$. If this assertion is true, then we say that $\phi$ itself is \textit{true} under the assignment. If there is some assignment s.t. $\phi$ is true, then $\phi$ is \textit{satisfiable}. If no such assignment exists, then $\phi$ is \textit{unsatisfiable}.
For simplicity we omit most of the description of the canonical model of this theory, choosing to use the intuitive combination of well-known models for word equations and bit-vectors. We provide, however, semantics for the $strlen_{bv}$ function, since it is not a ``standard'' symbol of either separate theory.
\noindent{\textbf{Semantics of the $strlen_{bv}$ Function:}} For a string term $w$, $strlen_{bv}(w)$ denotes an unsigned, fixed-precision bit-vector representation of the ``precise'' integer length of $w$, truncating the arbitrary-precision bit-vector representation of that integer to its lowest $k$ bits, for fixed bit-vector width $k$. The bit-vector addition and (constant) multiplication operators produce a result of the same width as the input terms and treat both operands as though they represent unsigned integers. Of particular note is that both of these operators have the potential to overflow (that is, to produce a result that is smaller than either operand). This is a consequence of the fixed precision of bit-vectors. Furthermore, the $strlen_{bv}$ function itself may also ``overflow'', because it is a fixed-width representation of an arbitrary-precision integer. More precisely, bit-vector arithmetic has the semantics of integer arithmetic modulo $2^k$, and the value represented by $strlen_{bv}(w)$ is the bit-vector representation of the value in the field of integers modulo $2^k$ that is congruent to the ``precise'' integer length of $w$.
For example, if the number of bits used to represent bit-vectors is $k=3$, a string of precise length 1 and another string of precise length 9 both have the bit-vector length representation ``001''. Although the complete bit-vector representation of 9 as an arbitrary-precision bit-vector would be ``1001'', the semantics of $strlen_{bv}$ specify that all but the $k=3$ lowest bits are omitted.
Note that the search space with respect to strings is (countably) infinite despite the fixed-width representation of string lengths as bit-vectors. This is because the bit-vector length of a string is only a view of its precise length, i.e. the integer number of characters in the string. This integer length may be arbitrarily finitely large. The semantics of fixed-width integer overflow is, in essence, applied to the integer length in order to obtain the bit-vector length. In fact, there are infinitely many strings that have the same bit-vector length. For example, if eight bits are used to represent string length, strings of length 0, 256, 512, ... would all appear to have a bit-vector length of ``00000000''.
\section{Decidability of QF String Equations, String Length, and Bit-Vector Constraints}
In this section, we prove the decidability of the theory $T_{w,bv}$ of QF word equations and bit-vectors. Towards this goal, we first establish a conversion from bit-vector constraints to regular languages. For regular expressions (regexes), we use the following standard notation: $AB$ denotes the concatenation of regular languages $A$ and $B$.
$A|B$ denotes the alternation (or union) of regular languages $A$ and $B$. $A^{*}$ denotes the Kleene closure of regular language $A$ (i.e., 0 or more occurrences of a string in $A$). For a finite alphabet $\Sigma = \{a_1, a_2, \hdots, a_l\}$, $\left[ a_1 -
a_l \right]$ denotes the union of regex $a_1 | a_2 | \hdots | a_l$. Finally, $A^{i}$, for nonzero integer constants $i$ and regex $A$, denotes the expression $A A \hdots A$, where the term $A$ appears $i$ times in total.
\begin{lemma} \label{lem:bv2regex}
Let $k$ be the width of all bit-vector terms. Suppose we have a
bit-vector formula of the form $strlen_{bv}(X) = C$, where $X$ is a
string variable and $C$ is a bit-vector constant of width $k$. Let
$i_{C}$ be the integer representation of the constant $C$,
interpreting $C$ as an unsigned integer. Then the set $M(X)$ of all
strings satisfying this constraint is equal to the language $L$
described by the regular expression $(\left[a_1 -
a_l\right]^{2^{k}})^{*} \left[a_1 - a_l\right]^{i_{C}}$.
(Refer to Appendix~\ref{proof:lem:bv2regex} for proof.) \end{lemma}
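As a concrete illustration of the lemma (our own worked example), take bit-width $k = 2$, alphabet $\Sigma = \{a, b\}$, and the constraint $strlen_{bv}(X) = 01$, so that $i_{C} = 1$. The lemma yields the regular expression $(\left[a - b\right]^{4})^{*} \left[a - b\right]^{1}$, which matches exactly those strings over $\Sigma$ whose length is congruent to $1$ modulo $4$ (i.e., lengths $1, 5, 9, \hdots$) --- precisely the strings whose $2$-bit length representation wraps around to $01$.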
\noindent{\bf Proof Idea for the Decidability Theorem~\ref{thm:strbvdecidable}:} Intuitively, the decision procedure proceeds as follows. The crux of the proof is to convert bit-vector constraints into regular languages (represented in terms of regexes) relying on the lemma mentioned above, and correctly capture overflow/underflow behavior. In order to capture the semantics of unsigned overflow, each regex we generate has two parts: The first part, under the Kleene star, matches strings of length a multiple of $2^{k}$ that cause a $k$-bit bit-vector to overflow and wrap around to the original value; the second part matches strings of constant length $i_C$, corresponding to the part of the string that is ``visible'' as the bit-vector length representation. By solving the bit-vector fragment of the equation first we can generate all of the finitely many possible solutions, and therefore check each of the finitely many assignments to the bit-vector length terms. For each bit-vector solution, we solve the word-equation fragment separately under regular-language constraints, which guarantee that only strings that have the expected bit-vector length representation will be allowed as solutions. It is easy to see that this algorithm is sound, complete, and terminating, given a decision procedure for word equations and regex.
\begin{theorem} \label{thm:strbvdecidable}
The satisfiability problem for the QF theory of word equations and
bit-vectors is decidable. (Refer to Appendix~\ref{proof:thm:strbvdecidable} for proof.) \end{theorem}
It may appear that the decidability result is trivial as the domain of bit-vectors is finite for fixed width $k$, and therefore the formula could be solved by trying all $2^k$ possible assignments for each bit-vector length term and all finitely many strings whose length is equal to each given bit-vector length. However, this is incorrect, as the bit-vector length of a string is only a representation of its ``true length''. As the semantics of bit-vector arithmetic specify that overflow is possible under this interpretation, strings of integer length 1, 5, 9, etc. -- indeed, infinitely many strings -- will all satisfy a constraint asserting that the 2-bit bit-vector length of a string term is 1. Therefore, it is not sufficient to search over, for example, only the space of strings with length between 0 and $2^{k} - 1$. Hence the decidability of this theory is non-trivial, and this motivates the need for a stronger argument, such as given in Theorem~\ref{thm:strbvdecidable}.
\section{Z3strBV\xspace Solver Algorithm}
The solver algorithm that we have designed differs from the decision procedure described in the proof for Theorem~\ref{thm:strbvdecidable}. There are two main reasons for that. First, that decision procedure is completely impractical to implement as an efficient constraint solver. Second, and related to the previous point, that decision procedure does not leverage existing solving infrastructure, which implies considerably more engineering effort. Our solving algorithm, on the other hand, builds on the Z3str2 technique to solve word equations \cite{FSE13zheng,cav15}, including in particular boundary labels, word-equation splits, label arrangements and detection of overlapping variables. In contrast with Z3str2, however, (i) we perform different reasoning about length constraints derived from the word equations, and (ii) we have a different search-space pruning strategy to reach consistent length assignments.
\subsection{Pseudocode Description}
The main procedure of the Z3strBV\xspace solver, which is similar to the Z3str2 procedure \cite{FSE13zheng,cav15}, is summarized as Algorithm~\ref{alg:highLevel}. It takes as input sets $\mathcal{Q}_w$ of word equations and $\mathcal{Q}_l$ of bit-vector (length) constraints, and its output is SAT, UNSAT, or UNKNOWN. UNKNOWN means that the algorithm has encountered overlapping arrangements and pruned them (thereby potentially missing a SAT solution), and a SAT solution could not be found in the remaining parts of the search space.
\algrenewcommand\alglinenumber[1]{\scriptsize #1:} \begin{algorithm}
{\algrenewcommand\algorithmicindent{1.5em}
\caption{High-level description of the Z3strBV\xspace main algorithm.}
\label{alg:highLevel}
\begin{algorithmic}[1]
{\scriptsize
\Statex \textbf{Input:} sets $\mathcal{Q}_w$ of word equations and
$\mathcal{Q}_l$ of bit-vector (length) constraints
\Statex \textbf{Output:} SAT / UNSAT / UNKNOWN
\Procedure{solveStringConstraint}{$\mathcal{Q}_w$,$\mathcal{Q}_l$}
\If{all equations in $\mathcal{Q}_w$ are in solved form}
\If{$\mathcal{Q}_w$ is UNSAT or $\mathcal{Q}_l$ is UNSAT}
\State \Return UNSAT\label{alg:unsat1}
\EndIf
\If{$\mathcal{Q}_w$ and $\mathcal{Q}_l$ are SAT and mutually consistent}
\State \Return SAT
\EndIf
\EndIf
\State $\mathcal{Q}_a$ $\longleftarrow$ Convert $\mathcal{Q}_w$ into equisatisfiable DNF formula\label{alg:conv1}
\ForAll{disjunct $D$ in $\mathcal{Q}_a$}
\State $\mathbb{A}$ $\longleftarrow$ all possible arrangements of equations in $D$\label{alg:conv2_s}
\ForAll{arrangement $A$ in $\mathbb{A}$}
\State $l_A$ $\longleftarrow$ length constraints implied by $A$
\If{$l_A$ is inconsistent with $\mathcal{Q}_l$}
\State $\mathbb{A}$ $\longleftarrow$ $\mathbb{A} \setminus \{\ A\ \}$
\EndIf
\EndFor\label{alg:conv2_e}
\ForAll{string variable $s$ incident in $D$}\label{alg:merge_s}
\State $G(s)$ $\longleftarrow$ merge per-equation arrangements involving $s$
\EndFor\label{alg:merge_e}
\ForAll{merged arrangements $a \in G(s)$ with no overlaps}
\State $\mathcal{Q}'_w$ $\longleftarrow$ refine variables in $\mathcal{Q}_w$ per $a$\label{alg:conv3}
\State $\mathcal{Q}'_l$ $\longleftarrow$ update length constraints $\mathcal{Q}_l$ per $\mathcal{Q}'_w$
\State $r$ $\longleftarrow$ \textsc{SolveStringConstraint}($\mathcal{Q}^{'}_{w}$,$\mathcal{Q}^{'}_{l}$)
\If {$r$=SAT}
\State \Return SAT
\EndIf
\EndFor
\EndFor
\If{overlapping variables detected at any stage}
\State \Return UNKNOWN
\Else
\State \Return UNSAT\label{alg:unsat2}
\EndIf
\EndProcedure
}
\end{algorithmic}
} \end{algorithm}
The input to the procedure is a conjunction of constraints. Any higher-level Boolean structure is handled by the SMT core solver, typically a SAT solver. The first part of the procedure (lines 2-9) checks whether (i) either $\mathcal{Q}_w$ or $\mathcal{Q}_l$ is UNSAT or (ii) both are SAT and the solutions are consistent with each other. If neither of these cases applies, then arrangements that are inconsistent with the length constraints are pruned (lines 12-21). Finally, the surviving arrangements $G(s)$ guide refinement of the word equations $\mathcal{Q}_w$, and so also of the length constraints $\mathcal{Q}_l$, and for each $G(s)$ the solving loop is repeated for the resulting sets $\mathcal{Q}'_w$ and $\mathcal{Q}'_l$ (lines 22-29). A SAT answer leads to a SAT result for the entire procedure. If no solution is found, but overlapping variables have been detected at some point, then the procedure returns UNKNOWN. (Note that all current practical string solvers suffer from both incompleteness and potential non-termination.)
Notice that during the solving process, the string plug-in (potentially) derives additional length constraints incrementally (line 14). These are discharged to the bit-vector solver on demand, and are checked for consistency with all the existing length constraints (both input length constraints and constraints added previously during solving).
More generally, during the solving process the string and bit-vector solvers each generate new assertions in the other domain. Inside the string theory, candidate arrangements are constrained by the assertions on string lengths, which are provided by the bit-vector theory. In the other direction, the string solver derives new length assertions as it progresses in exploring new arrangements. These assertions are provided to the bit-vector theory to prune the search space.
\noindent {\bf Basic Length Rules:} Given strings $X,Y,Z,W,\ldots$, we express their respective lengths $l_X,l_Y,l_Z,\ldots$ as $strlen\_bv(X,n),strlen\_bv(Y,n),strlen\_bv(Z,n),\ldots$ respectively in the constraint system, where $n$ is the bit-vector width. The empty string is denoted by $\epsilon$. Two rules govern the reasoning process: (i) $X=Y \implies l_{X} = l_{Y}$ and (ii) $W=X \cdot Y \cdot Z \cdot \ldots \implies l_{W} = l_{X} + l_{Y} + l_{Z} + \ldots$.
As an example, consider the word equation $X\cdot Y = M\cdot N$, where $X,Y,M,N$ are nonempty string variables. There are three possible arrangements \cite{cav15}, as shown below on the left, where $T_1$ and $T_2$ are temporary string variables. The respective length assertions, derived from these three arrangements, are listed on the right. \begin{center} \begin{tabular}{lcl} $(X = M \cdot T_1) \wedge (N = T_1 \cdot Y)$ & $\quad\quad$ & $(l_X = l_M + l_{T_1}) \wedge (l_N = l_{T_1} + l_Y)$ \\ $(X = M) \wedge (N = Y)$ & & $(l_X = l_M) \wedge (l_N = l_Y)$ \\ $(M = X \cdot T_2) \wedge (Y = T_2 \cdot N)$ & & $(l_M = l_X + l_{T_2}) \wedge (l_Y = l_{T_2} + l_N)$ \end{tabular} \end{center}
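To make the pruning of length-inconsistent arrangements concrete, the following Python sketch (our illustration, not the solver's actual code; variable and function names are ours) applies rules (i) and (ii) above to the three arrangements of $X\cdot Y = M\cdot N$ under a concrete length assignment:

```python
# Each arrangement is a list of equations W = V1 . V2 . ...,
# encoded as (lhs, [rhs parts]); `lengths` maps variables to integers.
def implied_lengths_hold(arrangement, lengths):
    """Rule (ii): the length of a concatenation equals the sum of its parts."""
    return all(lengths[lhs] == sum(lengths[v] for v in rhs)
               for lhs, rhs in arrangement)

def prune(arrangements, lengths):
    """Keep only arrangements consistent with the given length assignment."""
    return [a for a in arrangements if implied_lengths_hold(a, lengths)]

# The three arrangements of X.Y = M.N from the text:
arrangements = [
    [("X", ["M", "T1"]), ("N", ["T1", "Y"])],
    [("X", ["M"]),       ("N", ["Y"])],
    [("M", ["X", "T2"]), ("Y", ["T2", "N"])],
]
lengths = {"X": 2, "Y": 3, "M": 2, "N": 3, "T1": 1, "T2": 1}
survivors = prune(arrangements, lengths)
# Under this assignment only the middle arrangement (X = M, N = Y) survives.
```

In the solver, the consistency check is discharged to the bit-vector theory rather than evaluated on concrete integers, but the pruning logic is the same.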
The proof of soundness of Algorithm~\ref{alg:highLevel} is presented in Appendix~\ref{proof:alg:highLevel}.
We conclude by noting that the Z3str2 algorithm for string constraints is terminating, as shown in~\cite{cav15}, and that it is easy to see that integrating the theory of bit-vectors with this algorithm preserves this property, as the space of bit-vector models is finite.
\subsection{Binary Search Heuristic}
As explained above, length assertions are added to the Z3 core, then processed using the bit-vector theory. For efficiency, we have developed a binary-search-based heuristic to fix a value for the length variables in the bit-vector theory. To illustrate the heuristic, and the need for it, we consider the example $$
"a" \cdot X=Y \cdot "b" \bigwedge bv8000[16]<strlen\_bv(X,16)<bv9000[16] $$
where $bv8000[16]$ ($bv9000[16]$) denotes the constant 8000 (9000). The constraint $"a" \cdot X=Y \cdot "b"$ is discharged to the string theory, whereas $bv8000[16]<strlen\_bv(X,16)<bv9000[16]$ is discharged to the bit-vector solver. For the string constraint, a (non-overlapping) solution is $$
X="b" \quad Y="a" \quad strlen\_bv(X,16)=strlen\_bv(Y,16)=bv1[16] $$ but this solution is in conflict with the bit-vector constraints. Thus, the (overlapping) arrangement $X=T \cdot "b" \quad Y="a" \cdot T$ is explored, which leads to length constraints $$ \begin{array}{lcr} strlen\_bv(v,16)=strlen\_bv(T,16)+bv1[16] & \colon & v \in \{ X,Y \} \\ \multicolumn{3}{c}{strlen\_bv(T,16)>bv0[16]} \end{array} $$
Now the need arises to find consistent lengths for $X,Y,T$. Iterating all possibilities one by one, and checking these possibilities against the bit-vector theory, is slow and expensive. Instead, binary search is utilized to fix lower and upper bounds for candidate lengths.
The first choice is a lower bound of 0 and an upper bound of $2^{16}$ (where $16$ is the bit-vector width, as indicated above). This leads to the first candidate being $strlen(X,16)=bv32767[16]$ (where $32767=2^{15}-1$). This fails, so the upper bound is updated to $2^{15}$, and the next guess is $strlen(X,16)=bv16383[16]$. This too falls outside the range $(8000,9000)$, so the upper bound is updated again, this time to $2^{14}$, and the next guess is $strlen(X,16)=bv8191[16]$. This guess is successful, and so within 3 (rather than 8000) steps the search process converges on the following consistent length assignments: $$ \begin{array}{lcr} l_v=strlen(v,16)=bv8191[16] & \colon & v \in \{ X,Y\} \\ \multicolumn{3}{c}{l_T=strlen(T,16)=bv8190[16]} \end{array} $$
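The trace above corresponds to a standard binary search over the representable range. The sketch below (our illustration; the actual heuristic adds bit-vector assertions rather than calling a Python predicate) reproduces the three guesses for the constraint $8000 < l_X < 9000$ with 16-bit lengths:

```python
def binary_search_length(check, width=16):
    """Search [0, 2**width) for a length the bit-vector side accepts.
    check(v) returns 0 if v is consistent, 1 if too large, -1 if too small."""
    lo, hi = 0, 2 ** width
    guesses = []
    while lo < hi:
        guess = (lo + hi - 1) // 2   # first guesses: 2**15-1, 2**14-1, ...
        guesses.append(guess)
        verdict = check(guess)
        if verdict == 0:
            return guess, guesses
        if verdict > 0:
            hi = guess               # shrink the upper bound
        else:
            lo = guess + 1           # raise the lower bound
    return None, guesses

# The example constraint: 8000 < strlen_bv(X, 16) < 9000.
def check(v):
    return 0 if 8000 < v < 9000 else (1 if v >= 9000 else -1)

value, guesses = binary_search_length(check)
# Converges in three guesses: 32767, 16383, 8191.
```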
As the example highlights, in spite of the tight interaction between the string and bit-vector theories, large values for length constraints are handled poorly by default, since the process of converging on consistent string lengths is linearly proportional to those values. Pleasingly, bit-vectors, expressing a finite range of values, enable safe lower and upper bounds. More concretely, given a bit-vector of width $n$, the value of the length variable is in the range $[0,2^n-1]$. Our heuristic iteratively adds length assertions to the bit-vector theory following a binary-search pattern until convergence on consistent length assignments. This process is both sound and efficient.
\subsection{Library-aware Solving Heuristic}
The concept of library-aware SMT solving is simple. The basic idea is to provide native SMT solver support for a class of library functions $f$ in popular programming languages like C/C++ or Java, such that (i) $f$ is commonly used by programmers, (ii) uses of $f$ are a frequent source of errors (due to programmer mistakes), and (iii) symbolic analysis of $f$ is expensive due to the many paths it defines.
More precisely, by library-aware SMT solvers we mean that the logic of traditional SMT solvers is extended with declarative summaries of functions such as {\tt strlen} or {\tt strcpy}, expressed as global invariants over all behaviors of such functions. The merit of declaratively modeling such functions is that, unlike the real code implementing these functions, the summary is free of downstream paths to explore. Instead, the function is modeled as a set of logical constraints, thereby offsetting the path explosion problem.
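As a toy illustration of what a declarative summary buys (our example; the actual summaries in Z3strBV\xspace are SMT constraints over strings and bit-vectors, not Python), the behavior of C's {\tt strlen} on a NUL-terminated buffer can be captured by a single invariant, in contrast to the loop that a symbolic executor would unroll path by path:

```python
def strlen_summary(buf, n):
    """Declarative summary: strlen(buf) == n  iff  buf[n] == 0 and no
    earlier byte is 0. One formula, no per-character path split."""
    return 0 <= n < len(buf) and buf[n] == 0 and all(b != 0 for b in buf[:n])

def strlen_loop(buf):
    """The operational version, which forks one path per character."""
    n = 0
    while buf[n] != 0:
        n += 1
    return n

buf = b"user\x00junk"
```

The summary admits a single solver query over the invariant, whereas the operational version induces one explored path per candidate length.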
Observe, importantly, that library-aware SMT solving is complementary to summary-based symbolic execution. To fully exploit library-aware SMT solving, one has to modify the symbolic execution engine as well to generate summaries or invariants upon encountering library functions. While summary-based symbolic execution has been studied (e.g. as part of the S-Looper tool \cite{slooper}), we are not aware of any previous work where SMT solvers directly support programming-language library functions declaratively as part of their logic. One recent application of a similar concept is discussed in~\cite{Jeon2016:SymExec}, where models of design patterns are abstracted into a symbolic execution engine; being able to perform a similar analysis at the level of individual library methods as part of a library-aware SMT solver can be very useful to enhance library-aware symbolic execution such as demonstrated in that work. We intend to explore this idea further in the future to broaden its applicability beyond the current context. Furthermore, capturing program semantics precisely and concisely in a symbolic summary mandates integration between strings (for conciseness) and bit-vectors (for modelling overflow and precise bit-level operations). This further motivates the connection to and importance of a native solver for strings and bit-vectors.
\section{Experimental Results}
In this section, we describe our evaluation of Z3strBV\xspace. The experiments were performed on a MacBook computer, running OS X Yosemite, with a 2.0GHz Intel Core i7 CPU and 8GB of RAM. We have made the Z3strBV\xspace code, as well as the experimental artifacts, publicly available \cite{toolURL}.
\subsection{Experiment I: Buffer Overflow Detection}
To validate our ability to detect buffer overflows using Z3strBV\xspace, we searched for such vulnerabilities in the CVE database \cite{CVEDB}. We selected 7 cases, and specified the vulnerable code in each case as two semantically equivalent sets of constraints --- in the string/natural number theory and in the string/bit-vector theory --- to compare Z3strBV\xspace with Z3str2. The Z3str2 tool is one of the most efficient implementations of the string/natural number theory. The solvers only differ in whether string length is modelled as an integer or as a bit-vector. We set the solver timeout for each test case at 1 hour. Figure~\ref{tab:cve} presents the results. Z3strBV\xspace is able to detect all vulnerabilities, and furthermore to generate corresponding input values that expose/reproduce each vulnerability. Z3str2, by contrast, provides limited support for arithmetic overflow/underflow. Unfortunately, correctly modeling overflow/underflow using linear arithmetic over natural numbers is inefficient, and thus it fails within the prescribed time budget of 1 hour. Without the ability to perform overflow modelling, Z3str2 cannot detect overflow bugs at all, since arbitrary-precision integers cannot overflow. This experiment, therefore, shows that Z3strBV\xspace can find bugs that Z3str2 does not detect (due to timeouts).
\begin{figure}
\caption{CVE Buffer Overflow Detection and Exploit Synthesis (See \cite{SanuThesis,toolURL} for details.)}
\label{tab:cve}
\caption{Vulnerability Detection using KLEE and Library-aware Solving.}
\label{tab:libraryAware}
\end{figure}
\subsection{Experiment II: Library-aware SMT Solving}
\begin{figure}
\caption{Comparison between KLEE and Library-aware Solving}
\label{fig:klee_libraryAware}
\end{figure}
We evaluated the library-aware solving heuristic atop the example shown in Fig.~\ref{Fi:motivating} by applying both our technique and KLEE, a state-of-the-art symbolic execution engine, to this code. The goal was to detect the heap corruption threat in that code. We faithfully encoded the program snippet in \textsf{check\_login()} as string/bit-vector constraints. We then checked whether the buffer pointed-to by \textsf{\_username} is susceptible to overflow.
Notice that \textsf{len} is an \textsf{unsigned short} variable, and thus ranges from $0$ to $2^{16}-1$ ($65{,}535$). As it represents the buffer size, it determines the number of concrete execution paths KLEE has to enumerate, as well as the search space for library-aware solving. By contrast, the library-aware SMT solver models {\tt strlen} declaratively as part of the SMT solver logic.
To characterize performance trends, we consider two different precision settings for string length: \textsf{8-bit} and \textsf{16-bit}. We used 120 minutes as the timeout value. There was no need to go beyond 16 bits since KLEE was already significantly slower at 16 bits relative to the library-aware SMT solver. Note that KLEE is slow because it has to explore a large number of paths, and not because the individual path constraints are difficult to solve.
The results are provided in Figure~\ref{tab:libraryAware}. Under both precision settings, KLEE is consistently and significantly slower than the library-aware solving technique. In particular, if we represent numeric values using 16 bits, then KLEE is not able to identify the problem in 120 minutes, while Z3strBV\xspace can solve the problem in 0.27 seconds.
The benefit of library-aware solving is clear. The analysis is as follows. Suppose both \textsf{username} and \textsf{\_username} are symbolic string variables. In Figure \ref{fig:klee_libraryAware}(a), as KLEE forks a new state for each character, an invocation of \texttt{strlen}
on a symbolic string $S1_{sym}$ of size $|S1_{sym}|$ will generate and check $|S1_{sym}| + 1$ path constraints (one for each possible length value between $0$ and $|S1_{sym}|$). In Figure \ref{fig:klee_libraryAware}(b), in contrast, the constraint encoding enabled by library-aware solving essentially captures the semantics of the program without explicitly handling the loop in \texttt{strlen}. Only one query is needed to check whether the length of $S2_{sym}$ can be smaller than the length of $S1_{sym}$.
\subsection{Experiment III: Binary Search Heuristic}
In the case of unconstrained string variables, both Z3str2 and Z3strBV\xspace negotiate with the Z3 core to converge on concrete length assignments. Z3str2 does so via a linear length search approach. We evaluate this approach against the binary search heuristic. For that, we have implemented a second version of Z3strBV\xspace that applies linear search. We adapted benchmarks used to validate Z3str2 \cite{z3str2Test}, resulting in a total of 109 tests, which we used to compare the two versions. The tests make heavy use of string and bit-vector operators. Timeout was set at 20 seconds per benchmark. The comparison results are presented in Table~\ref{tab:binarySearch}. We group the instances by the solver result: \textsf{SAT}, \textsf{UNSAT}, \textsf{TIMEOUT} or \textsf{UNKNOWN}.
The Z3strBV\xspace solver is able to complete on all instances in $17.7$ seconds, whereas its version with linear search requires $548.1$ seconds. This version can solve simple \textsf{SAT} cases, but times out on 26 of the harder \textsf{SAT} cases, whereas Z3strBV\xspace has zero timeouts. Z3strBV\xspace is able to detect overlapping arrangements in 2 cases, on which it returns \textsf{UNKNOWN}. The linear-search version, in contrast, can only complete on one of the \textsf{UNKNOWN} instances. Notably, neither solver crashed or reported any errors on any of the instances. These results support the conclusion that the binary-search heuristic is significantly faster than linear search.
\begin{table}[t]
\centering
\caption{Performance Comparison of Search Heuristics.}
\label{tab:binarySearch}
{
\bgroup
\def\arraystretch{1.3}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c||r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r||r|c|}
\hline
\multirow{2}{0.2\columnwidth}{\centering Z3strBV\xspace}
& \multicolumn{4}{|c|}{SAT}
& \multicolumn{4}{|c|}{UNSAT}
& \multicolumn{4}{|c|}{TIMEOUT (20s)}
& \multicolumn{4}{|c||}{UNKNOWN}
& \multicolumn{2}{|c|}{Total} \\
\cline{2-19}
& \# & $T_{min}$ & $T_{avg}$ & $T_{max}$
& \# & $T_{min}$ & $T_{avg}$ & $T_{max}$
& \# & $T_{min}$ & $T_{avg}$ & $T_{max}$
& \# & $T_{min}$ & $T_{avg}$ & $T_{max}$
& \# & Time(s) \\
\hline
Binary Search
& 98 & 0.060 & 0.172 & 2.667
& 9 & 0.047 & 0.081 & 0.320
& 0 & 0 & 0 & 0
& 2 & 0.051 & 0.085 & 0.118
& 109 & 17.7 (\textbf{1x}) \\
\hline
Linear Search
& 72 & 0.060 & 0.097 & 0.618
& 9 & 0.061 & 0.111 & 0.415
& 27 & 20.000 & 20.000 & 20.000
& 1 & 0.072 & 0.072 & 0.072
& 109 & 548.1 (\textbf{31x}) \\
\hline
\end{tabular}
}
\egroup
} \end{table}
\section{Related Work}
While we are unaware of existing solver engines for a \emph{combined} QF first-order many-sorted theory of strings and bit-vectors, considerable progress has been made in developing solvers that model strings either natively or as bit-vectors. We survey some of the main results in this space.
\noindent {\bf String solvers:} Zheng et al. \cite{cav15} present a solver for the QF many-sorted theory $T_{wlr}$ over word equations, membership predicate over regular expressions, and length function, which consists of the string and numeric sorts. The solver algorithm features two main heuristics: (i) sound pruning of arrangements with overlap between variables, which guarantees termination, and (ii) bi-directional integration between the string and integer theories. S3 \cite{s3} is another solver with similar capabilities. S3 reuses Z3str's word-equation solver, and handles regex membership predicates via unrolling. CVC4 \cite{CVC4-CAV14} handles constraints over the theory of unbounded strings with length and regex membership. It is based on multi-theory reasoning backed by the DPLL($T$) architecture combined with existing SMT theories. The Kleene operator in regex membership formulas is dealt with via unrolling as in Z3str2. Unlike Z3strBV\xspace, these techniques all model string length as an integer, which makes it difficult to reason about potential overflow. In particular, none of these approaches combines strings and bit-vectors into a unified theory.
Another approach is to represent string variables as a regular language or a context-free grammar (CFG). JSA \cite{sas03} computes CFGs for string variables in Java programs. Hooimeijer et al. \cite{ase10_weimer} suggest an optimization, whereby automata are built lazily. Other heuristics, to eliminate inconsistencies, are introduced as part of the Rex algorithm~\cite{rex,rex2}. To overcome the challenge faced by automata-based approaches of capturing connections between strings and other domains (e.g. to model string length), refinements have been proposed. JST \cite{JST} extends JSA. It asserts length constraints in each automaton, and handles numeric constraints after conversion. PISA \cite{PISA} encodes Java programs into M2L formulas that it discharges to the MONA solver to obtain path- and index-sensitive string approximations. PASS \cite{pass,SymJS} combines automata and parameterized arrays for efficient treatment of UNSAT cases. Stranger extends string automata with arithmetic automata~\cite{stranger,yu_tacas09}. For each string automaton, an arithmetic automaton accepts the binary representations of all possible lengths of accepted strings. Norn~\cite{norn} relates variables to automata. Once length constraints are addressed, a solution is obtained by imposing the solution on variable languages. These solutions, like Z3str2, model string length as an integral value, thereby failing to directly capture the notion of overflow. The S-Looper tool \cite{slooper} addresses the specific problem of detecting buffer overflows via summarization of string traversal loops. The S-Looper algorithm combines static analysis and symbolic analysis to derive a constraint system, which it discharges to S3 to detect whether overflow conditions have been satisfied. While S-Looper is effective, it operates under a set of assumptions that limit its applicability (e.g. no loop nesting and only induction variables in conditional branches).
Z3strBV\xspace, in contrast, is a general solution for system-level programs.
\noindent {\bf Bit-vector-based Solvers:} Certain solvers convert string and other constraints to bit-vector constraints. HAMPI~\cite{hampi} is an efficient solver for string constraints, though it requires the user to provide an upper bound on string lengths. The bit-vector constraints that it generates are discharged to STP~\cite{stp}. Kaluza~\cite{kaluza} extends both STP and HAMPI to support mixed string and numeric constraints. It iteratively finds satisfying length solutions and converts multiple versions of fixed-length string constraints to bit-vector problems. A similar approach powers Pex~\cite{tacas09} to address the path feasibility problem, though strings are reduced to integer abstractions. The main limitation of solvers like HAMPI is the requirement to bound string lengths. In our approach, there is no such limitation.
\section{Conclusion and Future Work}
We have presented Z3strBV\xspace, a solver for a combined quantifier-free first-order many-sorted theory of string equations, string length, and linear arithmetic over bit-vectors. This theory has the necessary expressive power to capture machine-level representation of strings and string lengths, including the potential for overflow. We motivate the need for such a theory and solver by demonstrating our ability to reproduce known buffer-overflow vulnerabilities in real-world system-level software written in C/C++. We also establish a foundation for unified reasoning about string and bit-vector constraints in the form of a decidability result for the combined theory.
\appendix
\begin{subappendices}
\renewcommand{\thesection}{\Alph{section}}
\section{Proof of Lemma~\ref{lem:bv2regex}} \label{proof:lem:bv2regex}
We wish to establish a conversion from bit-vector constraints to regular languages. For regular expressions (regexes), we use the following standard notation: $AB$ denotes the concatenation of regular languages $A$ and $B$. $A|B$ denotes the alternation (or union) of regular languages $A$ and $B$. $A^{*}$ denotes the Kleene closure of regular language $A$ (i.e., 0 or more occurrences of a string in $A$). For a finite alphabet $\Sigma = \{a_1, a_2, \hdots, a_l\}$, $\left[ a_1 - a_l \right]$ denotes the alternation $a_1 | a_2 | \hdots | a_l$. Finally, $A^{i}$, for a positive integer constant $i$ and regex $A$, denotes the expression $A A \hdots A$, where the term $A$ appears $i$ times in total.
\noindent{\bf Lemma~\ref{lem:bv2regex}}: Let $k$ be the width of all bit-vector terms. Suppose we have a bit-vector formula of the form $len_{bv}(X) = C$, where $X$ is a string variable and $C$ is a bit-vector constant of width $k$. Let $i_{C}$ be the integer representation of the constant $C$, interpreting $C$ as an unsigned integer. Then the set $M(X)$ of all strings satisfying this constraint is equal to the language $L$ described by the regular expression $(\left[a_1 - a_l\right]^{2^{k}})^{*} \left[a_1 - a_l\right]^{i_{C}}$.
\begin{proof}
In the forward direction, we show that $M(X) \subseteq L$. Let
$x \in M(X)$. $x$ satisfies the constraint $len_{bv}(x) = C$, which
means that the integer length $z$ of $x$ modulo $2^k$ is equal to
$i_{C}$. Additionally, $z \ge 0$ as strings cannot have negative
length. Then there exists a non-negative integer $n$ such that $z = n
2^{k} + i_{C}$. We decompose $x$ into strings $u, v$ such that $uv =
x$, the length of $u$ is $n 2^{k}$, and the length of $v$ is
$i_{C}$. Now, $u \in (\left[a_1 - a_l\right]^{2^{k}})^{*}$ because
its length is a multiple of $2^k$, and $v \in \left[a_1 -
a_l\right]^{i_{C}}$ because its length is exactly $i_{C}$. By
properties of regex concatenation, $uv \in (\left[a_1 -
a_l\right]^{2^{k}})^{*} \left[a_1 - a_l\right]^{i_{C}}$ and
therefore $uv \in L$. Since $uv = x$, we have $x \in L$.
In the reverse direction, we show that $L \subseteq M(X)$. Let
$x \in L$. By properties of regex concatenation, there exist
strings $u, v$ such that $uv = x$, $u \in (\left[a_1 -
a_l\right]^{2^{k}})^{*}$, and $v \in \left[a_1 -
a_l\right]^{i_{C}}$. Suppose $u$ was matched by $n$ expansions of
the outer Kleene closure for some non-negative integer $n$. Then the
integer length of $u$ is $n 2^{k}$. Furthermore, the integer length
of $v$ is $i_{C}$. This implies that the integer length of $x$ is
$n2^{k} + i_{C}$, which means that the integer length of $x$ is
equal to $i_{C}$ modulo $2^{k}$, from which it directly follows that
$len_{bv}(x) = C$. Hence $x \in M(X)$ as required. This completes
both directions of the proof and so we have equality between the
sets $M(X) = L$. \end{proof}
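The equivalence asserted by the lemma can be sanity-checked mechanically for small parameters. The following Python check (ours, not part of the paper's development) takes a two-letter alphabet, width $k=2$, and residue $i_C=3$, and compares membership in the lemma's regex against the modular length condition; since both sides depend only on string length, testing strings of the form $a^n$ suffices:

```python
import re

k, i_C = 2, 3                 # bit width and target residue
# ([a-b]^{2^k})* [a-b]^{i_C} from the lemma, over the alphabet {a, b}:
pattern = re.compile("(?:[ab]{%d})*[ab]{%d}" % (2 ** k, i_C))

def in_language(s):
    return pattern.fullmatch(s) is not None

def length_matches(s):
    """len_bv(s) = C means the length of s modulo 2^k equals i_C."""
    return len(s) % (2 ** k) == i_C

# Exhaustively compare the two conditions on all lengths up to 9:
agree = all(in_language("a" * n) == length_matches("a" * n)
            for n in range(10))
```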
\section{Proof of Theorem~\ref{thm:strbvdecidable}} \label{proof:thm:strbvdecidable}
\paragraph{Theorem~\ref{thm:strbvdecidable}} The satisfiability problem for the QF theory of word equations and
bit-vectors is decidable.
\begin{proof}
We demonstrate a decision procedure by reducing the input formula to
a finite disjunction of subproblems in the theory of QF word
equations and regular language constraints. This theory is known to
be decidable by Schulz's extension of Makanin's algorithm for solving word
equations \cite{Schulz:1990:MAW:646900.710169}.
Suppose the input formula $\phi$ has the form $W_1 = W_2 \land A_1 =
B_1 \land A_2 = B_2 \land \hdots \land A_n = B_n$, where $W_1, W_2$
are terms in the theory of word equations and $A_1 \hdots A_n,
B_1 \hdots B_n$ are terms in the theory of bit-vectors. Let $k$ be
the width of all bit-vector terms. For each term of the form
$len_{bv}(X_i)$ in $A_1 \hdots A_n, B_1 \hdots B_n$, replace it with
a fresh bit-vector variable $v_i$ and collect the pair $(v_i, X_i)$
in a set $\mathcal{S}$ of substitutions. Suppose there are $m$ such
pairs. Then the total number of bits among all variables introduced
this way is $mk$. This means that there are $2^{mk}$ possibilities
for the values of $v_1 \hdots v_m$. Because the theory of QF
bit-vectors is decidable and because there are finitely many
possible values for $v_1 \hdots v_m$, we can check the
satisfiability of the bit-vector fragment of the input formula $A_1
= B_1 \land A_2 = B_2 \land \hdots \land A_n = B_n$ for all possible
substitutions of values for $v_1 \hdots v_m$ in finite time. For
each assignment $A = \{ (v_1, C_1), (v_2, C_2), \hdots, (v_m,
C_m) \}$, where each $v_i$ is a variable and each $C_i$ is a
bit-vector constant, if the bit-vector constraints are satisfiable
under that assignment, collect $A$ in the set $\mathcal{A}$ of all
satisfying assignments. If the set $\mathcal{A}$ is empty, then the
bit-vector constraints were not satisfiable under any assignment to
$v_1 \hdots v_m$. In this case we terminate immediately and decide
that the input formula is UNSAT, as the bit-vector constraints must
be satisfied for satisfiability of the whole formula. Otherwise,
construct the formula $R'(\phi)$ as follows. For each assignment
$A \in \mathcal{A}$, for each term $(v_i, C_i) \in A$, we find the
pair $(v_i, X_i) \in \mathcal{S}$ with corresponding $v_i$. Because
each variable $v_i$ corresponds to a term $len_{bv}(X_i)$, and since
we have $v_i = C_i$, we have the constraint $len_{bv}(X_i) =
C_i$. This allows us to apply Lemma~\ref{lem:bv2regex} and generate
a regular language constraint $X_i \in L_i$. After generating each
such regular language constraint, we collect $R'(\phi) :=
R(\phi) \lor (W_1 = W_2 \land X_1 \in L_1 \land X_2 \in
L_2 \land \hdots \land X_m \in L_m)$. We repeat this for each
assignment $A \in \mathcal{A}$. The resulting formula $R(\phi) =
(W_1 = W_2) \land R'(\phi)$ is a conjunction of the original word
equation from $\phi$ and a finite disjunction of regular-language
constraints over variables in that word equation. We now invoke
Schulz's algorithm to solve this formula. If the word equation and
any disjunct are satisfiable, then we report that the original
formula $\phi$ is SAT; otherwise, $\phi$ is UNSAT. Finally, it is
easy to show that the reduction is sound, complete, and terminating
for all inputs (see Appendix~\ref{app:reduction} for soundness and completeness proof of the reduction). \end{proof}
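A miniature rendition of this reduction may clarify its shape (our sketch: $k=2$, a single length variable, a stand-in bit-vector constraint, and a length predicate standing in for the regex constraint of Lemma~\ref{lem:bv2regex}; Schulz's algorithm is replaced by a trivial word-equation check):

```python
k = 2                                    # bit-vector width

def bv_sat(v):
    """Stand-in bit-vector fragment: (v + 1) mod 2^k == 2, i.e. v == 1."""
    return (v + 1) % (2 ** k) == 2

# Step 1: enumerate all 2^k candidate values of v1 = len_bv(X1)
# and keep those satisfying the bit-vector fragment.
satisfying = [v for v in range(2 ** k) if bv_sat(v)]

# Step 2: each satisfying constant C yields the regular-language
# constraint of the lemma, represented here by its length semantics.
def regex_constraint(C):
    return lambda s: len(s) % (2 ** k) == C

disjuncts = [regex_constraint(C) for C in satisfying]

# Step 3: the reduced formula is SAT iff some disjunct admits a string
# that also solves the word equation; with the trivial equation X1 = X1,
# any string of a matching length is a witness.
witness_found = any(d("a" * n) for d in disjuncts for n in range(8))
```

Because the enumeration in step 1 is over finitely many bit-vector values, the disjunction in step 2 is finite, which is the crux of the termination argument above.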
\section{Proof of Soundness and Completeness of the Reduction used in Theorem ~\ref{thm:strbvdecidable}}\label{app:reduction}
We demonstrate that the reduction from bit-vector constraints to
regular language constraints, as performed in the proof for
Theorem \ref{thm:strbvdecidable}, is sound and complete. We do so by
showing equisatisfiability between $\phi$ and $R(\phi)$.
\begin{theorem} \label{thm:strbvequisat}
$\phi$ is satisfiable iff $R(\phi)$ is satisfiable.
\end{theorem}
\begin{proof} In the forward direction, we show that if $\phi$ is
satisfiable then $R(\phi)$ is satisfiable. Let $M$ be a satisfying
assignment of all variables in $\phi$. Because $\phi$ and
$R(\phi)$ share the same constraint $W_1 = W_2$, $M$ is a
satisfying assignment for the word equation fragment of $R(\phi)$
as well. It remains to show that at least one of the terms in
$R'(\phi)$, the disjunction of regular language constraints, is
satisfiable. The algorithm described in
Theorem~\ref{thm:strbvdecidable} generates one group of regex
constraints for each satisfying assignment to the bit-vector
fragment that produces a distinct model for all bit-vector length
constraints. In particular, the algorithm generates
regular language constraints for the particular model described by
$M$ of bit-vector length constraints. Because the string variables
in $M$ satisfy these constraints, we apply Lemma~\ref{lem:bv2regex}
to find that the regular language constraints that were generated
with respect to this model $M$ are satisfied by the assignment of
all string variables in $M$. Therefore $M$ is also a model of
$R(\phi)$, and hence $R(\phi)$ is satisfiable.
In the reverse direction, we show that if $R(\phi)$ is satisfiable
then $\phi$ is satisfiable. Let $M$ be a satisfying assignment of
all variables in $R(\phi)$. Because $R(\phi)$ and $\phi$ share the
same constraint $W_1 = W_2$, $M$ is a satisfying assignment for the
word-equation fragment of $\phi$ as well. It remains to show that
the bit-vector constraints in $\phi$ are satisfiable under this
assignment to the string variables. Let $r$ be a regular-language
constraint in $R'(\phi)$ (the disjunction of regular-language
constraints), such that $r$ evaluates to true under the assignment
$M$. We know that such an $r$ must exist because the formula
$R(\phi)$ is satisfiable, and therefore at least one of the terms
in the disjunction $R'(\phi)$ must evaluate to true. By applying
Lemma~\ref{lem:bv2regex} ``backwards'', we can derive an assignment
of constants to bit-vector length terms in $\phi$ corresponding to
each regular-language constraint in $r$ that is consistent with the
lengths of the string variables. We also know that the bit-vector
constraints are satisfiable under this assignment of constants to
strings and bit-vector length terms because, by
Lemma~\ref{lem:bv2regex}, a precondition for the appearance of any
term $r$ in $R'(\phi)$ is that the bit-vector fragment of $\phi$ is
satisfiable under the partial assignment to bit-vector length terms
that yielded $r$. Therefore, by solving the remaining bit-vector
constraints, which must be satisfiable, $M$ can be extended to a
model of $\phi$ and hence $\phi$ is satisfiable. \end{proof} \end{subappendices}
\section{Proof of the Soundness of Algorithm~\ref{alg:highLevel}} \label{proof:alg:highLevel}
We use the standard definition of soundness for decision procedures from the SMT literature \cite{cav15}, whereby a solver is sound if whenever the solver returns UNSAT, the input formula is indeed unsatisfiable.
\begin{theorem}
Algorithm~\ref{alg:highLevel} is sound, i.e., when
Algorithm~\ref{alg:highLevel} reports UNSAT, the input
constraint is indeed UNSAT. \end{theorem}
\input{proof}
\end{document}
\begin{document}
\title{No need to choose: How to get both a PTAS and Sublinear Query Complexity} \titlerunning{PTAS with Sublinear Query Complexity} \author{Nir Ailon\inst{1} \and Zohar Karnin\inst{2}} \institute{Technion IIT and Yahoo! Research, Haifa, Israel \email{[email protected]} \and Yahoo! Research, Haifa, Israel \email{[email protected]} }
\maketitle \vspace*{-2ex} \begin{abstract}
We revisit various PTAS's (Polynomial Time Approximation Schemes) for minimization versions of dense problems, and show that they can be performed with {sublinear query complexity}. This means that not only do we obtain a $(1+\varepsilon)$-approximation to the NP-Hard problems in polynomial time, but we also avoid reading the entire input. This setting is particularly advantageous when the price of reading parts of the input is high, as is the case, for example, when humans provide the input.
Trading off query complexity with approximation is the raison d'etre of the field of learning theory, and of the ERM (Empirical Risk Minimization) setting in particular. A typical ERM result, however, does not deal with computational complexity. We discuss two particular problems for which (a) it has already been shown that
sublinear querying is sufficient for obtaining a $(1+\varepsilon)$-approximation using unlimited computational power (an ERM result), and (b) with full access to input, we could get a $(1+\varepsilon)$-approximation in polynomial time (a PTAS). Here we show that neither benefit need be sacrificed. We get a PTAS with efficient query complexity.
The first problem is known as Minimal Feedback Arc-Set in Tournaments (MFAST). A PTAS has been discovered by Schudy and Mathieu, and an ERM result by Ailon. The second is $k$-Correlation Clustering ($k$-CC). A PTAS has been discovered by Giotis and Guruswami, and an ERM result by Ailon and Begleiter.
Two techniques are developed. The first solves the problem for the low-cost case of $k$-CC (the analogous case is already known for MFAST). This requires a careful sampling scheme, together with a proof of a structural property relating the costs of vertices against the optimal sample clustering to their costs against the full optimal clustering. The second addresses
the high-cost case, by showing that a classic method by Arora et al. (2002) for obtaining additive approximations can be made query efficient. The underlying technique is ``double sampling'': One sample
is amenable to exhaustive solution enumeration, but well approximates the cost of only polynomially many solutions (including the optimal one); another sample cannot be searched exhaustively, but well approximates the cost of the entire solution space, and is used for verification.
\end{abstract}
\section{Introduction}
We study two NP-Hard combinatorial minimization problems for which it is known how to get a $(1+\varepsilon)$-approximate solution under two scenarios. In the first scenario, the algorithm has full access to the input, and is required to compute in polynomial time. In the second scenario, the algorithm has exponential computational power but is allowed to uncover only a sublinear amount of input. In this work we show that no requirement needs to be sacrificed. In other words, we satisfy the following three requirements simultaneously: \begin{enumerate} \item[(R1)] A polynomial time algorithm. \item[(R2)] A $(1+\varepsilon)$ approximate solution. \item[(R3)] A sublinear (in input size) query complexity. \end{enumerate}
The first problem is known as $k$-Correlation Clustering ($k$-CC). Given an undirected graph $G=(V, E)$, the objective is to find a decomposition of $V$ into $k$ (possibly empty) disjoint subsets (clusters) $C_1,\dots, C_k$ so that the symmetric difference between $E$ and the set $\{(u,v): \exists i \mbox{ s.t. } \{u,v\}\subseteq C_i\}$ is minimized. The second problem is the Minimum Feedback Arc-set in Tournaments (MFAST). In this problem, given a tournament $G=(V, A)$, the objective is to write its vertices in a sequence from left to right so that the number of edges pointing to the left (\emph{backward edges}) is minimized.\footnote{By tournament we mean that for all distinct $u,v\in V$, either $(u,v)\in A$ or $(v,u)\in A$.}
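To make the two objectives concrete, the following brute-force sketch (in Python, with our own helper names; not part of the cited algorithms) computes both costs exactly given full access to the input:

```python
import itertools

def kcc_cost(n, edges, clusters):
    """k-CC objective: size of the symmetric difference between the
    input edge set and the clique graph induced by the clustering."""
    label = {}
    for i, cluster in enumerate(clusters):
        for v in cluster:
            label[v] = i
    cost = 0
    for u, v in itertools.combinations(range(n), 2):
        same_cluster = label[u] == label[v]
        has_edge = (u, v) in edges or (v, u) in edges
        if same_cluster != has_edge:  # each disagreeing pair costs one unit
            cost += 1
    return cost

def mfast_cost(order, arcs):
    """MFAST objective: number of backward arcs when the vertices are
    written left to right in the sequence `order`."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for (u, v) in arcs if pos[u] > pos[v])
```

For example, the 3-cycle tournament $\{(0,1),(1,2),(2,0)\}$ cannot be ordered with zero backward arcs; its optimal cost is $1$.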
Requirements (R1) and (R2) are achieved by Giotis and Guruswami in \cite{GiotisGuruswami06} for $k$-CC and by Kenyon-Mathieu and Schudy in \cite{MatSch2007} for MFAST. Requirements (R2) and (R3) were achieved very recently by Ailon and Begleiter in \cite{AilonB12} for $k$-CC and by Ailon in \cite{Ailon11:active} for MFAST. In this work we obtain (R1)+(R2)+(R3) for both problems. Our result uses components from the aforementioned citations, together with new ideas required for obtaining our strong guarantees.
\vspace*{-2ex} \subsection{Previous Work and Our Contribution}
In the world of combinatorial approximations, Correlation Clustering (CC) (also known as cluster editing) was defined by Blum et al. \cite{BBC04}. In the original version there was no bound on the number $k$ of clusters. Correlation clustering is max-SNP-Hard \cite{CharikarW04} but admits constant-factor polynomial-time approximations (e.g. \cite{CharikarW04,Ailon:2008:AII}). Maximization versions have also been considered \cite{DBLP:conf/soda/Swamy04}. In this work we concentrate on the minimization problem only, which is more difficult for the purpose of obtaining a PTAS. The $k$-correlation clustering problem ($k$-CC), in which the number of output clusters is bounded by $k$, is also NP-Hard but admits a PTAS \cite{GiotisGuruswami06} running in time $n^{O(9^k/\varepsilon^2)}\log n$.
There is a natural machine learning theoretical interpretation of CC: The instance space is identified with the space of element pairs, and each edge (resp. non-edge) in $G$ is a label stipulating equivalence (resp. non-equivalence) of the corresponding pair. The CC objective minimizes the \emph{risk}, defined as the number of pairs of elements on which the solution disagrees with the labels. Roughly speaking, an algorithm attempting to minimize the risk by, instead, minimizing an estimator thereof obtained by sampling labels is an \emph{Empirical Risk Minimization} (ERM) algorithm. An ERM algorithm need not be constrained by computational restrictions, and its guarantee should be thought of as an \emph{information-theoretical}, not a computational, result. It should be noted that machine learning clustering theoreticians and practitioners have been studying how to use correlation clustering type labels in conjunction with more traditional geometric clustering approaches (e.g. $k$-means; see Basu's thesis \cite{basu05} and references therein). Such labels are expensive because they require solicitation from humans. Minimizing query complexity is hence important.
From a combinatorial optimization point of view, MFAST is NP-Hard \cite{Alon06} but admits a PTAS \cite{MatSch2007} (see references therein for a more elaborate history of this important problem). The problem also has a machine learning theoretical interpretation, if we think of the directionality of the edge connecting $u$ and $v$ in the tournament $T$ as a label. An ERM result has been obtained by Ailon \cite{Ailon11:active} very recently. Interestingly, although Ailon's algorithm is not computationally efficient, it relies quite heavily on the ideas used in the PTAS \cite{MatSch2007}.
In this work we obtain requirements (R1),(R2) and (R3) simultaneously, for both $k$-CC and MFAST.
\vspace*{-2ex} \section{Notations} \vspace*{-2ex} For a natural number $n$ we denote by $[n]$ the set of integers $\{1,\ldots,n\}$. Let $V$ denote a ground set of $n$ elements. In the $k$-CC problem, $V$ is endowed with an undirected graph $G=(V, E)$. A solution to the problem is given as a clustering ${\mathcal C} = \{C_1,\dots, C_k\}$ of $V$ into $k$ disjoint parts. We define $\equiv_{\mathcal C}$ to be the equivalence relation in which $C_1,\dots, C_k$ are the equivalence classes.
Equivalently, we view a solution as an undirected graph $G({\mathcal C}) = (V, E({\mathcal C}))$ in which $(u,v)\in E({\mathcal C})$ if and only if $u \equiv_{\mathcal C} v$. The cost $\operatorname{cost}_G({\mathcal C})$ of a solution ${\mathcal C}$ is the cardinality of the symmetric difference between the sets $E$ and $E({\mathcal C})$. When the input is clear from the context, we will simply write $\operatorname{cost}({\mathcal C})$.
In the MFAST problem, $V$ is endowed with a tournament graph $T=(V, A)$.\footnote{A \emph{tournament} means that exactly one of $(u,v)$ or $(v,u)$ is in $A$ for all $u\neq v$.} A solution is an injective function $\pi : V\mapsto [n]$ (a permutation). We define $\prec_\pi$ to denote the induced order relation, namely: $u \prec_\pi v$ if and only if $\pi(u) < \pi(v)$. Equivalently, a solution can be viewed as a tournament $T(\pi) = (V, A(\pi))$, where $(u,v)\in A(\pi)$ if and only if $u \prec_\pi v$. The loss $\operatorname{cost}_T(\pi)$ of a solution is the number of edges $(u,v)\in A(\pi)$ such that $(v,u)\in A$. In words, a unit cost is incurred for each inverted edge. When the input $T$ is clear from the context, we may simply write $\operatorname{cost}(\pi)$.
\vspace*{-2ex} \section{Statement of Results and Method Overview}\label{sec:overview} \vspace*{-2ex} As in \cite{GiotisGuruswami06,MatSch2007}, our query efficient PTAS for both $k$-CC and MFAST, distinguishes between a high cost case and a low cost case. In MFAST, \emph{high cost} means that the optimal solution has cost at least $P(\varepsilon) n^2$, where $P(\varepsilon)= \Theta(\varepsilon^2 )$. In $k$-CC, high cost means that the optimal solution has cost at least $Q(\varepsilon, k) n^2$, where $Q(\varepsilon, k) = \Theta(\varepsilon^6/k^{18})$.
In the low cost case, the problem has been solved for MFAST by Ailon \cite{Ailon11:active}.
There it is shown that $O(n\varepsilon^{-4}\log^4 n)$ edges from $T$ are sufficient for finding a $(1+\varepsilon)$-approximate solution, in $\operatorname{poly}(n, \varepsilon^{-1})$ time. We refer the reader to \cite{Ailon11:active} for the details. As for the low cost case for $k$-CC, we show a PTAS with $o(n^2)$ query complexity
in Section~\ref{sec:low}. The main idea of the algorithm is similar to that in \cite{GiotisGuruswami06}, but differs in a significant way.
Roughly speaking, both algorithms choose a sample of vertices and enumerate over $k$-clusterings of the sample, while trying to compute optimal big clusters from the sample. In \cite{GiotisGuruswami06}, for each such choice of sample $k$-clustering, a clustering of $V$ is chosen, and recursion is executed on the union of small clusters. Here, we use the sample \emph{in vitro} to learn a strong structural property of any optimal solution for the entire input. In particular, we don't need to return from a recursion to perform this learning.
For the high cost case we invoke an algorithm giving an additive $\varepsilon P(\varepsilon)n^2$ (resp. $\varepsilon Q(\varepsilon, k)n^2$) approximation for MFAST (resp. for $k$-CC). To that end, we use a standard LP based technique \cite{DBLP:journals/mp/AroraFK02}, together with another double sampling trick necessary for query efficiency, which we describe in Section~\ref{sec:high} for the MFAST case (the $k$-CC case is easier). The main result there is as follows:
\vspace*{-1ex} \begin{theorem}\label{thm:highcost} There exists a polynomial (in $n$) time algorithm for obtaining an additive $\varepsilon P(\varepsilon)n^2$ (resp. $\varepsilon Q(\varepsilon, k) n^2$) approximation for MFAST (resp. for $k$-CC). The algorithm queries $O(\varepsilon^{-2}P^{-2}(\varepsilon)n\log n)$ (resp. $O(\varepsilon^{-2}Q^{-2}(\varepsilon, k) n\log n)$) input edges and runs in time $n^{O(\varepsilon^{-2}P^{-2}(\varepsilon)\log P(\varepsilon))}$ (resp. $n^{O(\varepsilon^{-2}Q^{-2}(\varepsilon, k) \log k)}$). \end{theorem}
\vspace*{-1ex} In order to know whether we are at all in the high cost case, we apply the additive approximation algorithm in any case, and approximate the cost of the returned solution to within an additive error of $\Theta( P(\varepsilon) n^2)$ (resp. $\Theta(Q(\varepsilon, k) n^2)$). This estimation can clearly be done, with success probability at least $1-n^{-10}$, by sampling at most $O(P^{-2}(\varepsilon)\log n)$ (resp. $O(Q^{-2}(\varepsilon,k)\log n)$ ) edges, by standard measure concentration arguments. \footnote{Note that the algorithm in \cite{Ailon11:active} relies on a divide and conquer recursive strategy, in which the high cost algorithm and test must be implemented at each recursion node. This also holds for our $k$-CC algorithm, which identifies large clusters and then recurses on small ones. The recursive calls must solve and test for the high cost case as well.} This bound is overwhelmed by the bounds of Theorem~\ref{thm:highcost}. Our
main results are summarized as follows. \vspace*{-1ex} \begin{theorem} \label{thm:PTAS low cost kcc} There exists a PTAS for $k$-CC running in time $n^{O(\varepsilon^{-14}k^{36}\log k)}$ and requiring at most $O(\varepsilon^{-14}k^{36}n\log n)$ edge queries. With probability at least $1-n^{-3}$, it outputs a clustering $\tilde{\cal C}$ with $\operatorname{cost}(\tilde{\cal C}) \leq\operatorname{cost}({\mathcal C}^*)(1+\varepsilon)$, where ${\mathcal C}^*$ is an optimal solution. \end{theorem} \vspace*{-2ex}\begin{theorem} \label{thm:PTAS low cost mfast} There exists a PTAS for MFAST running in time $n^{O(\varepsilon^{-6})}$ and requiring at most $O(\varepsilon^{-6}n\log n + \varepsilon^{-4}n\log^4 n)$ edge queries. With probability at least $1-n^{-3}$, it outputs a permutation $\sigma$ with $\operatorname{cost}(\sigma) \leq\operatorname{cost}(\pi^*)(1+\varepsilon)$, where $\pi^*$ is an optimal solution. \end{theorem} \vspace*{-1ex} Note that the running times are overwhelmed by the high cost case in both, and the query complexity is overwhelmed by the high cost case in Theorem~\ref{thm:PTAS low cost kcc}. We also note that we did not make a real effort to optimize the constants, including the exponents of $k,\varepsilon$.
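The cost-estimation test described above (deciding whether we are in the high cost case by sampling edges) can be sketched as follows. This is an illustration under assumed interfaces (`query_arc` stands for a single edge query to the input tournament); it is not the exact procedure of the paper:

```python
import random

def estimate_mfast_cost(order, query_arc, n, num_samples, rng):
    """Monte Carlo estimate of cost(order) for MFAST: sample vertex
    pairs uniformly, query the arc direction of each sampled pair, and
    rescale the fraction of inverted pairs by the total number of pairs."""
    pos = {v: i for i, v in enumerate(order)}
    inverted = 0
    for _ in range(num_samples):
        u, v = rng.sample(range(n), 2)  # a uniformly random distinct pair
        if pos[u] < pos[v] and query_arc(v, u):
            inverted += 1
        elif pos[v] < pos[u] and query_arc(u, v):
            inverted += 1
    # each unordered pair is inverted with probability cost / C(n, 2)
    return inverted / num_samples * (n * (n - 1) / 2)
```

By a standard Chernoff bound, $O(P^{-2}(\varepsilon)\log n)$ samples suffice to estimate $\operatorname{cost}(\sigma)/n^2$ to within an additive $\Theta(P(\varepsilon))$ with high probability.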
\section{Query Efficient PTAS for Low Cost in $k$-CC }\label{sec:low} \vspace*{-2ex} We study the low cost case of $k$-CC on input $G=(V, E)$, and analyze an algorithm satisfying (R1)+(R2)+(R3). We need two ingredients. In Section~\ref{sec:low1} we approximate the contribution of a single node $v$ to the cost of any solution identical to the optimal solution except (maybe) for a change in the cluster to which $v$ belongs. In Section~\ref{sec:low2} we achieve the PTAS, using a strategy similar to that of Giotis et al. in \cite{GiotisGuruswami06}: Identification of the large clusters in the optimal solution and recursion on the remainder. Note that the algorithm of \cite{GiotisGuruswami06} does not satisfy (R3), hence ours makes better use of the queried information. \vspace*{-2ex} \subsection{An additive approximation of vertex costs}\label{sec:low1}
A major component in our PTAS for $k$-clustering is an additive approximation for the contribution of each vertex to the cost of the clustering. We start by formally defining this contribution, and then present Algorithm~\ref{alg:cost_add_aprx} and its analysis.
\begin{definition} Let ${\cal C^*}=\{ C_1^*,\ldots,C_k^*\}$ be an optimal $k$-clustering, and assume its cost is $\gamma n^2$ for some $\gamma\geq 0$. For $v\in V$ let $j^*(v)$ be defined as the unique index such that $v\in C^*_{j^*(v)}$. Let $C^*(v) = C^*_{j^*(v)}$. Let ${\bf 1}_{u+v}$ be an indicator variable for the predicate $(u,v)\in E$, and similarly define the complement ${\bf 1}_{u-v} = 1 - {\bf 1}_{u+v}$.
Let $ \deg_+(v,j) = \sum_{u \in C^*_j \setminus \{v\}} {\bf 1}_{u+v}$, $\deg_-(v,j) = \sum_{u \in C^*_j\setminus\{v\}} {\bf 1}_{u-v}$, $\operatorname{degout}_+(v,j) = \sum_{u \in (V\setminus C^*_j)\setminus\{v\}} {\bf 1}_{u+v}$, and $\operatorname{degout}_-(v,j) = \sum_{u \in (V\setminus C^*_j)\setminus\{v\}} {\bf 1}_{u-v}$.
Let $\operatorname{cost}^*(v) = \sum_{ u \in C^*(v)\setminus\{v\}} {\bf 1}_{u-v} + \sum_{u \notin C^*(v)} {\bf 1}_{u+v} $. Notice that $\operatorname{cost}^*(v) = \deg_-(v,j^*(v)) + \operatorname{degout}_+(v,j^*(v))$ and $\operatorname{cost}({\mathcal C}^*)=\frac{1}{2} \sum_v \operatorname{cost}^*(v)$.
For any $j \in [k]$, let $\operatorname{cost}^*(v,j) \eqdef \deg_-(v,j) + \operatorname{degout}_+(v,j)$. That is, $\operatorname{cost}^*(v,j)$ is the contribution of the vertex $v$ to the cost of the clustering that is identical to ${\mathcal C}^*$, except the location of $v$, which is reset to $C^*_j$. \end{definition}
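To fix ideas, the quantity $\operatorname{cost}^*(v,j)$ can be computed exactly (given full access to the input) by the following brute-force sketch; the names are ours, not the paper's:

```python
def cost_star(v, j, clusters, edges):
    """deg_-(v, j) + degout_+(v, j): the contribution of v to the cost
    of the clustering obtained from `clusters` by moving v to cluster j."""
    inside = set(clusters[j]) - {v}
    outside = set(u for c in clusters for u in c) - inside - {v}
    has_edge = lambda u: (u, v) in edges or (v, u) in edges
    missing_inside = sum(1 for u in inside if not has_edge(u))   # deg_-(v, j)
    present_outside = sum(1 for u in outside if has_edge(u))     # degout_+(v, j)
    return missing_inside + present_outside
```

On small instances one can check that $\frac 1 2 \sum_v \operatorname{cost}^*(v, j^*(v))$ equals the cost of the clustering, since every disagreeing pair is counted once from each endpoint.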
\begin{algorithm} \caption{Additive approximation of $\operatorname{cost}^*$} \label{alg:cost_add_aprx} \emph{Input}: A graph $G=(V,E)$, a parameter $\beta>0$ and an integer $k>1$\\ \emph{Output}: For every $v \in V$ and $j\in[k]$, an estimation $\widetilde{\operatorname{cost}}(v,j)$ of $\operatorname{cost}^*(v,j)$ \newline \\ Choose $S=(v_1,\ldots,v_t)$, where $t= c \log(n) \beta^{-9}$ ($c$ is some sufficiently large universal constant), to be a multiset of i.i.d. uniformly randomly chosen vertices from $V$.
Let $\tilde{S}_1,\ldots,\tilde{S}_k$ be an optimal $k$-clustering for the reduced problem $(S, E_{|S})$ (where $E_{|S} = E \cap (S\times S)$), where the solution is found using exhaustive search.
For any $v\in V$, $j \in [k]$, let: $ \widetilde{\deg}_-(v,j) \eqdef \sum_{u\in \tilde S_j\setminus \{v\}} {\bf 1}_{u-v}$ and $\widetilde{\degout}_+(v,j) \eqdef \sum_{u\in (S \setminus \tilde S_j) \setminus \{v\}} {\bf 1}_{u+v}$.
(The summations count elements of $S$ with multiplicities.)
Output for every $v\in V$, $j \in [k]$ the estimation: $$\widetilde{\operatorname{cost}}(v,j)\eqdef \frac n {|S|} \left (\widetilde{\deg}_-(v,j)+\widetilde{\degout}_+(v,j)\right )\ .$$ \vspace*{-2ex} \end{algorithm}
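The estimator of Algorithm~\ref{alg:cost_add_aprx}, once a clustering $\tilde S_1,\ldots,\tilde S_k$ of the sample has been found, can be sketched as follows. For simplicity the sketch assumes the sample contains no repeated vertices; the algorithm itself works with multisets:

```python
def estimate_costs(V, sample_clusters, has_edge):
    """Rescaled estimates tilde-cost(v, j) = (n/|S|) *
    (tilde-deg_-(v, j) + tilde-degout_+(v, j)) for every vertex v and
    every cluster index j of the sample clustering."""
    sample = [u for cluster in sample_clusters for u in cluster]
    n, t = len(V), len(sample)
    est = {}
    for v in V:
        for j, Sj in enumerate(sample_clusters):
            deg_minus = sum(1 for u in Sj if u != v and not has_edge(u, v))
            degout_plus = sum(
                1 for u in sample if u != v and u not in Sj and has_edge(u, v))
            est[(v, j)] = n / t * (deg_minus + degout_plus)
    return est
```

Only the edge queries between each vertex and the sample are needed, $O(|V|\cdot|S|)$ in total, which is the source of the $O(n\log(n)\beta^{-9})$ query bound.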
\noindent The rest of this section proves the following guarantee of Algorithm~\ref{alg:cost_add_aprx}. \begin{theorem} \label{thm:cost_add_apx} Fix $\beta>0$, to be passed as parameter to Algorithm~\ref{alg:cost_add_aprx}. There exists some universal constant $c$ such that if $\gamma \leq c \beta^6$ then for all $v \in V$, $j \in [k]$ it holds that for the output of the algorithm, after possibly renaming the optimal clusters $\{C^*_1,\dots, C^*_k\}$,
$\abs{ \widetilde{\operatorname{cost}}(v,j) - \operatorname{cost}^*(v,j) } < \beta n.$ For any input, Algorithm~\ref{alg:cost_add_aprx} will run in $n^{O(\beta^{-9}\log k)}$ time
and will require at most $O(n\log(n)\beta^{-9})$ edge queries. \end{theorem}
The claims regarding the time and query complexity of the algorithm are trivial. Indeed, the running time is dominated by exhaustively searching the space of $k$-clusterings of the sample $S$ in the algorithm. We focus on proving the correctness. We need some more definitions.
\begin{definition} Let $u,v \in V$, $S$ a multi-subset of $V, j\in [k]$ and $\delta > 0$.
Let $\deg_+^S(v,j) = \sum_{u \in (C^*_j\cap S) \setminus \{v\}} {\bf 1}_{u+v}$, $\deg_-^S(v,j) = \sum_{u \in (C^*_j\cap S)\setminus\{v\}} {\bf 1}_{u-v}$, $\operatorname{degout}_+^S(v,j) = \sum_{u \in (S\setminus C^*_j)\setminus\{v\}} {\bf 1}_{u+v}$ and $\operatorname{degout}_-^S(v,j) = \sum_{u \in (S\setminus C^*_j)\setminus\{v\}} {\bf 1}_{u-v}$,
where the summations take multiplicities in $S$ into account. Let $\operatorname{cost}^{*S}(v) \eqdef \deg_-^S(v,j^*(v))+\operatorname{degout}_+^S(v,j^*(v))$.
\end{definition}
In what follows, set $\delta=\Theta(\beta^3)$. Define ${\mathcal S}$ as the partition of $S$ (from Algorithm~\ref{alg:cost_add_aprx}) induced by ${\mathcal C}^*$. That is ${\mathcal S}=\{S_1,\ldots,S_k\}$ where $S_j=C^*_j \cap S$.
\begin{lemma} \label{lem:deg in S and C equal} With probability at least $1-n^{-10}$, for all $v \in V$ and $j\in[k]$, \begin{eqnarray*}
\max\{& & |\deg_+(v,j)/n - \deg_+^S(v,j)/|S||,
|\deg_-(v,j)/n - \deg_-^S(v,j)/|S||, \\
& & |\operatorname{degout}_+(v,j)/n - \operatorname{degout}_+^S(v,j)/|S||,
|\operatorname{degout}_-(v,j)/n - \operatorname{degout}_-^S(v,j)/|S||\} = O(\delta) \ . \end{eqnarray*}
\end{lemma} The simple proof is deferred to Appendix~\ref{sec:proof:lem:deg in S and C equal}.
From Lemma~\ref{lem:deg in S and C equal} we obtain the following.
\begin{lemma}\label{lem:cost_induced_partition_on_S}
Assume $\gamma = o(\delta)$. With probability $1-n^{-10}$, the cost of the partition ${\mathcal S}$ on the graph $G|_S = (S, E|_S)$ is at most $O(\delta|S|^2)$. \end{lemma}
In the following lemma we show that any pair of clusterings that are close w.r.t.\ their edges are also close w.r.t.\ their vertices.
\begin{lemma} \label{lem:close in V} Let ${\mathcal S},\tilde{{\mathcal S}}$ be two $k$-clusterings of $S$, and let $E({\mathcal S}), E(\tilde {\mathcal S})$ be their corresponding edge sets, namely, $(u,v)\in E({\mathcal S})$ if and only if $u \equiv_{\mathcal S} v$, and similarly for $\tilde S$. Assume the size of the symmetric difference between
$E({\mathcal S})$ and $E(\tilde {\mathcal S})$ is at most $\delta |S|^2$, where $\delta < c/k^3$ and $c$ is a sufficiently small constant. Then for some reordering of indices, for every $j \in [k]$,
$\max \{|S_j \setminus \tilde{S_j}|, |\tilde{S}_j \setminus S_j| \}= O(\delta^{1/3} |S|)\ .$
\end{lemma}
We present only the main structural claim used by the proof; the remainder is deferred to Appendix~\ref{sec:proof:lem:close in V}. \begin{proof} We start with an auxiliary claim showing that every cluster in ${\mathcal S}$ has a similar cluster in $\tilde{{\mathcal S}}$ (and vice versa).
\begin{claim} \label{clm:aux close in V}
Let $C$ be a cluster of ${\mathcal S}$. There exists some cluster $D$ in $\tilde{{\mathcal S}}$ such that $|C \setminus D| \leq O(\delta^{1/3}|S|)$. \end{claim} \begin{proof}
Let $D$ be a cluster in $\tilde{{\mathcal S}}$ that maximizes $|D \cap C|$. Let $A = D \cap C$ and let $\bar{A} = C \setminus A$. Notice that for every pair $(u,v)\in A\times \bar{A}$ the edge $(u,v)$ is an element of $E({{\mathcal S}})\setminus E(\tilde {\mathcal S})$.
Hence,
$ |A||\bar{A}| \leq \delta |S|^2 $. If $|C|<\delta^{1/3}|S|$ then the claim holds trivially. If $|C|\geq \delta^{1/3}|S|$, then we get:
$ |A|(|C|-|A|) = |A||\bar{A}| \leq \delta |S|^2 \leq \delta^{1/3} |C|^2$.
A simple calculation shows that either $|A|= O(\delta^{1/3} |C|)$ or $|A| \geq |C|(1-O(\delta^{1/3}))$. By setting the constant $c$ to be sufficiently small, the first option would imply $|A|<|C|/k$, which is impossible because $D$ maximizes $|D\cap C|$ over all clusters of $\tilde {\mathcal S}$, and hence $|A| \geq |C|/k$. We conclude that $ |C \setminus D| = O(\delta^{1/3} |S|)$,
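For completeness, the ``simple calculation'' can be spelled out (a sketch, with unoptimized constants):

```latex
Writing $x=|A|$, the inequality $x(|C|-x)\le\delta^{1/3}|C|^2$ is equivalent
to $x^2-|C|x+\delta^{1/3}|C|^2\ge 0$, which holds exactly when $x$ lies
outside the two roots
\[
  x \;=\; \frac{|C|}{2}\Bigl(1 \mp \sqrt{1-4\delta^{1/3}}\Bigr),
\]
real whenever $\delta^{1/3}\le 1/4$. Since $\sqrt{1-4t}\ge 1-4t$ for
$t\in[0,1/4]$, the lower root is at most $2\delta^{1/3}|C|$ and the upper
root is at least $|C|(1-2\delta^{1/3})$; hence
\[
  |A|\;\le\; 2\delta^{1/3}|C|
  \qquad\text{or}\qquad
  |A|\;\ge\; |C|\bigl(1-2\delta^{1/3}\bigr).
\]
```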
proving the claim.
\end{proof} \end{proof}
Let $\tilde{{\mathcal S}}=\tilde{S}_1,\ldots,\tilde{S}_k$ be
an optimal $k$-clustering of the induced input $G|_S$. By Lemma~\ref{lem:cost_induced_partition_on_S}, we know that with probability at least $1-n^{-10}$
the cost of the solution $\tilde {\mathcal S}$ is at most $\delta |S|^2$. By the triangle inequality, this implies that the symmetric difference between $E({\mathcal S})$ and $E(\tilde {\mathcal S})$ is at most $O(\delta |S|^2)$. Hence, we may apply Lemma~\ref{lem:close in V} and assume that the clusters $S_1,\dots, S_k$ and $\tilde S_1,\dots \tilde S_k$ are aligned with each other. Define: $\widetilde{\deg}_+(v,j) = \sum_{u \in \tilde S_j\setminus \{v\}} {\bf 1}_{u+v}$, $\widetilde{\deg}_-(v,j) = \sum_{u \in \tilde S_j\setminus \{v\}} {\bf 1}_{u-v}$, $\widetilde{\degout}_+(v,j) = \sum_{u \in (S \setminus\tilde S_j)\setminus \{v\}} {\bf 1}_{u+v}$, and $\widetilde{\degout}_-(v,j) = \sum_{u \in (S\setminus \tilde S_j)\setminus \{v\}} {\bf 1}_{u-v}$.
\begin{lemma} With probability at least $1-n^{-8}$, for all $v\in V$ and $j \in [k]$,
$ \abs{\frac {\widetilde{\deg}_+(v,j)}{|S|} - \frac{ \deg_+(v,j)}{n}} = O(\delta^{1/3}) .$
The same is true for the other `deg functions'. \end{lemma} \begin{proof} By the guarantee of Lemma \ref{lem:close in V}, for all $v \in V, j \in [k]$,
$\abs{\frac {\widetilde{\deg}_+(v,j)}{|S|} - \frac {\deg^S_+(v,j)}{|S|}}=O(\delta^{1/3})$.
By the guarantee of Lemma~\ref{lem:deg in S and C equal}, we have that for all $v\in V,j\in [k],$
$\abs{\frac{\deg^S_+(v,j)}{|S|} - \frac{\deg_+(v,j)}{n}}=O(\delta^{1/3})\ .$
The claim follows by a union bound and the triangle inequality. \end{proof}
\noindent Theorem \ref{thm:cost_add_apx} is now an easy corollary.
\subsection{The PTAS}\label{sec:low2}
In this section we utilize the approximations to the vertex costs obtained by Algorithm~\ref{alg:cost_add_aprx} to derive a PTAS for $k$-clustering. We note that the heart of our contribution is the previous section; the lemmas and proofs here follow the lines of \cite{GiotisGuruswami06}. The main algorithm (Algorithm \ref{alg:main kcc}) is of course different, since it utilizes the results of the previous section.
Throughout this section we will assume that the optimal clustering ${\mathcal C}^*$ has a cost of $\gamma n^2$ where $\gamma < c_1 \beta^6$, where the parameter $\beta$ will be taken as $c_2 \varepsilon / k^3$, and $c_1, c_2$ will be sufficiently small constants so that Theorem~\ref{thm:cost_add_apx} is satisfied.
\begin{algorithm} \label{alg:main kcc} \caption{PTAS for $k$-CC (low cost)} \label{alg:CC PTAS} \emph{Input}: A graph $G=(V,E)$, an integer $k>1$ and a parameter $\varepsilon>0$. It is assumed that the optimal $k$-CC cost of $G$ is $\gamma n^2$, where $\gamma < c_1\beta^6$ and $\beta = c_2\varepsilon/k^3$.
\emph{Output}: A clustering $\tilde{\cal C} = \{ \tilde{C}_1,\ldots,\tilde{C}_k \}$ of $G$.
Run Algorithm~\ref{alg:cost_add_aprx} with inputs $G$,$k$ and $\beta$. Obtain approximations $\widetilde{\operatorname{cost}}(v,j)$ for all $v\in V$ and $j\in[k]$.
Create empty clusters $\hat C_1,\dots \hat C_k$. For all $v\in V$ add $v$ to $\hat C_i$, where $i=\mathrm{argmin}_j\{ \widetilde{\operatorname{cost}}(v,j)\}$.
Reorder the clusters so that $|\hat C_1|\geq \ldots \geq |\hat C_k|$. Let $\ell \in [k]$ be such that $|\hat C_\ell| \geq \frac{n}{2k}$ and $|\hat C_{\ell+1}| < \frac{n}{2k}$ (if no such integer exists, set $\ell=k$).
Run the algorithm recursively on the restriction of $G$ on $W \eqdef \cup_{j > \ell} \hat{C}_j$, the integer $k-\ell$ and approximation parameter $\varepsilon(1-1/k)$. Denote its output by $\tilde{C}_{\ell+1},\ldots,\tilde{C}_k$.
Output $\tilde{\cal C} = ( \tilde{C}_1=\hat{C}_1,\ldots,\tilde{C}_{\ell}=\hat{C}_{\ell}, \tilde{C}_{\ell+1},\ldots,\tilde{C}_k)$\ . \end{algorithm}
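The assignment and large/small split steps of Algorithm~\ref{alg:CC PTAS} can be sketched as follows (an illustration with our own names; the recursive call on the union of the small clusters is omitted):

```python
def assign_and_split(V, est_cost, k):
    """Place each vertex in the cluster minimizing its estimated cost,
    sort clusters by size, and split them into 'large' clusters (kept)
    and 'small' clusters (whose union is re-clustered recursively)."""
    clusters = [[] for _ in range(k)]
    for v in V:
        best_j = min(range(k), key=lambda j: est_cost[(v, j)])
        clusters[best_j].append(v)
    clusters.sort(key=len, reverse=True)
    n = len(V)
    ell = k  # number of 'large' clusters, those of size >= n/(2k)
    for i, cluster in enumerate(clusters):
        if len(cluster) < n / (2 * k):
            ell = i
            break
    return clusters[:ell], clusters[ell:]
```

If every cluster has size at least $n/(2k)$, the split returns all clusters as large and no recursion is needed.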
The remainder of the section is dedicated to proving Theorem~\ref{thm:PTAS low cost kcc}. We need some lemmas. In what follows, we assume that the invocation of Algorithm~\ref{alg:cost_add_aprx} is successful in the sense that the guarantee of Theorem~\ref{thm:cost_add_apx} holds.
The following is an immediate corollary of this guarantee.
\begin{lemma} \label{lem:cost in other} Let $v \in V$ be a vertex satisfying $v \in \hat C_j\cap C^*_i$, where $i \neq j$. Then $\operatorname{cost}^*(v,j) \leq \operatorname{cost}^*(v)+2\beta n$\ . \end{lemma}
Define for any $v\in V$, $\widetilde{\operatorname{cost}}(v) = \min_{j\in[k]} \widetilde{\operatorname{cost}}(v,j)$, where
$\widetilde{\operatorname{cost}}(v,j)$ is as defined in Algorithm~\ref{alg:cost_add_aprx}. Define $V_{\mathrm{costly}} = \{v\in V \; |\; \widetilde{\operatorname{cost}}(v) \geq c_3n/k^2 \}$, where $c_3$ is some sufficiently small constant. For any $v\in V_{\mathrm{costly}}$, $\operatorname{cost}^*(v) \geq \frac 1 2 c_3 n/k^2$ due to the guarantee of Theorem~\ref{thm:cost_add_apx} and our choice of $\beta$. Since the total cost incurred by vertices in $V_{\mathrm{costly}}$ is bounded by twice the total optimal cost:
\begin{equation}\label{Vcostlysize1} |V_{\mathrm{costly}}| \leq 4 \gamma n^2 / (c_3 n/k^2) \leq (4\gamma n k^2)/{c_3}\
.\end{equation}
In particular, using a very crude estimate, this means \begin{equation}\label{Vcostlysize}
|V_{\mathrm{costly}}| \leq c_4 n /k , \end{equation} where $c_4$ is a constant that can be made sufficiently small by reducing $c_1$ as necessary.
Recall that $\hat C_1,\ldots,\hat C_\ell$ are the large clusters found by Algorithm~\ref{alg:CC PTAS}. Notice that since there are $k$ clusters, there must be at least one cluster of size $\geq \frac{n}{2k}$ meaning that $\ell \geq 1$.
\begin{lemma}\label{lem:Cstar almost hat C} For any $j \in [\ell]$, $C^*_j \setminus V_{\mathrm{costly}} = \hat C_j \setminus V_{\mathrm{costly}}$\ . \end{lemma} The proof is deferred to Appendix~\ref{sec:proof:lem:Cstar almost hat C} for lack of space. The next lemma states the existence of a clustering whose large clusters are identical to those found by our algorithm and has an almost optimal cost. \begin{lemma} \label{lem:large can recurs} There exist some $k$-clustering of $V$, ${\cal D}=(D_1,\ldots, D_k)$ such that for all $j \in [\ell]$, $D_j=\hat C_j$ and $\operatorname{cost}({\cal D}) \leq \gamma n^2 (1+\varepsilon/k)$ \end{lemma} \begin{proof} Take $\cal D$ to be the clustering defined as follows.
For any $i \in [k]$, $ D_i = (C^*_i \setminus V_{\mathrm{costly}}) \cup (\hat C_i \cap V_{\mathrm{costly}})$.
That is, $D_i$ is the result of starting with the clustering ${\mathcal C}^*$ and moving the vertices of $V_{\mathrm{costly}}$ to the clusters prescribed by $\hat {\mathcal C} = \{\hat C_1,\dots, \hat C_k\}$.
Denote by $\operatorname{cost}^{\cal D}(v)$ the cost of a vertex $v$ w.r.t.\ the partition $\cal D$. Notice that the only edges for which the clustering $\cal D$ pays while the clustering ${\mathcal C}^*$ does not must be incident to a node in $V_{\mathrm{costly}}$. Hence, \vspace*{-2ex} \begin{equation}\label{costdiffbound1} \operatorname{cost}({\cal D}) - \operatorname{cost}({\cal C^*}) \leq\sum_{v \in V_{\mathrm{costly}}} (\operatorname{cost}^{\cal D}(v)-\operatorname{cost}^*(v) )\ . \end{equation} Assume $v_{\mathrm{costly}} \in V_{\mathrm{costly}}\cap D_j$ for some $j\in[k]$. \vspace*{-1ex} Clearly
\begin{equation}\label{costdiffbound} \left |\operatorname{cost}^{\cal D}(v_{\mathrm{costly}})-\operatorname{cost}^*(v_{\mathrm{costly}},j)\right| \leq{|V_{\mathrm{costly}}|}\ , \end{equation} because the only difference in such a vertex's cost can come from edges connecting it to other vertices in $V_{\mathrm{costly}}$.
Now assume $v_{\mathrm{costly}} \in V_{\mathrm{costly}} \cap C^*_i\cap D_j$ for $j \neq i$. By construction, $v_{\mathrm{costly}} \in \hat C_j$. By Lemma \ref{lem:cost in other}, this implies $\operatorname{cost}^*(v_{\mathrm{costly}},j) \leq \operatorname{cost}^*(v_{\mathrm{costly}})+2\beta n$. By (\ref{costdiffbound}) we conclude \vspace*{-1ex} \begin{eqnarray*}
\operatorname{cost}^{\cal D}(v_{\mathrm{costly}})-\operatorname{cost}^*(v_{\mathrm{costly}}) \leq\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
(\operatorname{cost}^*(v_{\mathrm{costly}},j)-\operatorname{cost}^*(v_{\mathrm{costly}},i))+|V_{\mathrm{costly}}| \leq 2\beta n+|V_{\mathrm{costly}}|\ . \end{eqnarray*} \noindent Plugging this into (\ref{costdiffbound1}) and using (\ref{Vcostlysize1}), we get \vspace*{-2ex}
$$ \operatorname{cost}({\cal D}) - \operatorname{cost}({{\mathcal C}^*}) \leq |V_{\mathrm{costly}}| \left(2\beta n+|V_{\mathrm{costly}}|\right) \leq \gamma n^2 \left({8\beta k^2}/{(c_3)}+ {16\gamma k^4}/{(c_3)^2}\right)\ .$$ The claim follows since $\gamma < c_1 \beta^6 \leq \frac{\varepsilon}{2k} \cdot \frac{(c_3)^2}{16 k^4} $, $\beta < \frac{\varepsilon}{2k} \cdot \frac{c_3}{8k^2}$ assuming small $c_1, c_2$.
\end{proof}
\begin{proof} [of Theorem \ref{thm:PTAS low cost kcc}] The claim regarding the query complexity is trivial given Theorem \ref{thm:cost_add_apx}. The running time is a result of the recursion formula $T(n,\varepsilon,k)=n^{O(\varepsilon^{-9}k^{27} \log k)}+T(n,\varepsilon(1-1/k),k-1)=n^{O(\varepsilon^{-9}k^{27} \log k)}$. We note that in \cite{GiotisGuruswami06}, the stated running time is doubly exponential in $k$ whereas here it is singly exponential in $k$. This difference is due to a minor observation that the recursive call should be with the parameter $\varepsilon(1-1/k)$ rather than $\varepsilon/10$. The same minor change would result in a singly exponential dependence on $k$ in the algorithm given in \cite{GiotisGuruswami06} as well. Let $W$ be the union of the small clusters. That is, $W=\cup_{j=\ell+1}^k \hat C_j$. By Lemma \ref{lem:large can recurs}, all of the vertex pairs that are not contained in the set $W \times W$ incur a cost in $\tilde{\cal C}$ identical to that in $\cal D$.
Let $d_1$ be the cost of $\cal D$ on pairs in $W \times W$ and let $d_2$ be its cost on the remaining pairs $V\times V\setminus W\times W$. Since $W$ is clustered recursively, we have that the cost of $\hat C$ is at most $d_2+d_1(1+\varepsilon/k) \leq (d_1+d_2)(1+\varepsilon/k)=\operatorname{cost}({\cal D})(1+\varepsilon/k)$. The statement of the theorem follows.
\end{proof}
\vspace*{-5ex} \section{Query Efficient PTAS for High Cost}\label{sec:high} \vspace*{-2ex}
We present a query efficient PTAS for the high loss case of MFAST. The query efficient PTAS for the high loss case of $k$-CC is almost identical and is thus not presented. We will start by describing a known PTAS ((R1)+(R2)) based on an approach given by Arora et al. We then show how to add requirement (R3). The final approach is summarized in Algorithm~\ref{alg:high cost}, found in Appendix~\ref{sec:alg:high cost}.
\vspace*{-2ex} \subsection{ (R1)+(R2) using a Known Additive Approximation Algorithm} \label{sec:high cost PTAS} Let $\pi^*$ denote an optimal permutation, and let ${\operatorname{OPT}}$ denote $\operatorname{cost}(\pi^*)$. In the high cost MFAST case, as explained in Section~\ref{sec:overview}, we assume ${\operatorname{OPT}} \geq \gamma n^2$, where $\gamma = \Theta(\varepsilon^2)$. Instead of directly solving MFAST, we solve the \emph{bucketed} version. This idea is not new and can be found, e.g., in \cite{MatSch2007}. An $m$-bucket ordering $\sigma$ of $V$ is a mapping $\sigma: V\mapsto [m]$, where for each $i\in[m]$ the preimage satisfies:
$\frac {n}{2m} \leq |\sigma^{-1}(i)| \leq \frac{2n}m $.
For brevity we say that $u <_\sigma v$ if $\sigma(u) < \sigma(v)$, and $u \equiv_\sigma v$ if $\sigma(u) = \sigma(v)$. We extend the definition of $\operatorname{cost}(\cdot)$ to bucketed orders by defining $\operatorname{cost}(\sigma) \eqdef \sum_{u <_\sigma v} {\bf 1}_{(v,u) \in A}\ .$ We will also need to define:
$\operatorname{cost}^{u,v}(\sigma) \eqdef {\bf 1}_{u <_\sigma v}{\bf 1}_{(v,u)\in A} + {\bf 1}_{v <_\sigma u}{\bf 1}_{(u,v)\in A} $ and $\operatorname{cost}^u(\sigma) \eqdef \frac 1 2 \sum_{v\in V} \operatorname{cost}^{u,v}(\sigma)$,
so that $\operatorname{cost}(\sigma) = \sum_{u\in V} \operatorname{cost}^{u}(\sigma)$. A permutation $\pi$ extends an $m$-bucketed ordering $\sigma$ if $u<_\pi v$ whenever $u<_\sigma v$. \begin{observation}\label{obs:bucket}\cite{MatSch2007}
For any $\pi$ extending $\sigma$, $|\operatorname{cost}(\pi) - \operatorname{cost}(\sigma)| = O(n^2/m)$, hence for the purpose of obtaining a $(1+\varepsilon)$-approximate solution in our case it suffices to consider $m$-bucketed orderings with $m = \Theta( 1/(\varepsilon \gamma))$. \end{observation}
\noindent Let $\sigma^*$ denote any $m$-bucketed ordering of $V$ of which $\pi^*$ is an extension, and such that $\lfloor n/m \rfloor \leq | (\sigma^*)^{-1}(i) | \leq \lceil n/m\rceil$ for all $i\in [m]$. The following approach has been taken in \cite{DBLP:journals/mp/AroraFK02}. Let $S = (v_1,v_2,\dots, v_s)$ be a random series of $s=O(\log n/(\varepsilon\gamma)^2)$ vertices in $V$, each element chosen uniformly and independently, with repetitions. Abusing notation, we will also think of $S$ as the set $\{v_1,\dots, v_s\}$.
For each $m$-bucketed ordering $\sigma$ and for each $u\in V$, we make the following definitions:
$\operatorname{cost}^{u,S}(\sigma) \eqdef \frac {n} {2s} \sum_{i=1}^s \operatorname{cost}^{u,v_i}(\sigma)$ and $\operatorname{cost}^S(\sigma) \eqdef \sum_{u\in V} \operatorname{cost}^{u,S}(\sigma)$.
Clearly $\operatorname{cost}^{S}(\sigma)$ is an unbiased estimator of $\operatorname{cost}(\sigma)$ over the choice of the sample $S$. The top level of our algorithm will enumerate over all $n^{\Theta(\log(1/(\varepsilon \gamma))/(\varepsilon\gamma)^2)}$ possibilities for the value of $(\sigma^*(v_1),\dots, \sigma^*(v_s))$. From now on, we will assume the correct possibility has been chosen, so that $\sigma^*(v)$ is ``known'' for $v\in S$. A verification step will be used to identify the correct possibility in the end (see Algorithm~\ref{alg:high cost} in Appendix~\ref{sec:alg:high cost}).
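To make the sampled estimator concrete, the following is an illustrative sketch (our own code and naming, not part of the paper's formal description): a tournament is represented as a set of directed arcs, a bucket order as a vertex-to-bucket map, and $\operatorname{cost}^{S}$ averages the pairwise costs against the sample.

```python
def pair_cost(A, sigma, u, v):
    """cost^{u,v}(sigma): 1 iff the arc between u and v disagrees with sigma."""
    if sigma[u] < sigma[v]:
        return 1 if (v, u) in A else 0
    if sigma[v] < sigma[u]:
        return 1 if (u, v) in A else 0
    return 0  # same bucket: no cost is charged

def exact_cost(A, sigma):
    """cost(sigma): number of arcs (v, u) with u strictly before v."""
    return sum(1 for (v, u) in A if sigma[u] < sigma[v])

def sampled_cost(A, sigma, S):
    """cost^S(sigma) = sum_u (n / 2s) * sum_{v_i in S} cost^{u,v_i}(sigma)."""
    n, s = len(sigma), len(S)
    return sum((n / (2 * s)) * sum(pair_cost(A, sigma, u, v) for v in S)
               for u in sigma)
```

Note that when $S$ contains every vertex exactly once, the estimator coincides with the exact cost: each disagreeing pair is counted twice and rescaled by $n/2s = 1/2$.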
\begin{definition} For an $m$-bucket ordering $\sigma$, a vertex $u\in V$ and integer $i\in [m]$, let $\sigma_{u\rightarrow i}$ denote the bucket order defined by leaving the value of $\sigma(v)$ unchanged for $v\neq u$ and mapping $u$ to $i$. More precisely: $\sigma_{u\rightarrow i}(v) = \sigma(v)$ if $v\neq u$, and $\sigma_{u\rightarrow i}(u) = i$.
\end{definition}
Note that $\sigma_{u\rightarrow i}$ may not be exactly an $m$-bucket ordering. To be precise, we will say that $\sigma_{u\rightarrow i}$ is an $m$-bucket$^*$ ordering whenever $\sigma$ is an $m$-bucket ordering, for every $u\in V$ and $i\in [m]$. Clearly, Observation~\ref{obs:bucket} holds for $m$-bucket$^*$ orderings as well, with a possibly different constant hiding in the $\Theta$-notation. The following lemma is proven using standard measure concentration inequalities:
\begin{lemma}\label{crux} Fix an $m$-bucket ordering $\sigma$ of $V$. With probability at least $1-n^{-10}$, for all $u\in V$ and $i\in [m]$:
\begin{equation}\label{cruxeq} \left | \operatorname{cost}^{u,S}(\sigma_{u\rightarrow i}) - \operatorname{cost}^u(\sigma_{u\rightarrow i}) \right | = O(\varepsilon \gamma n)\ . \end{equation} \end{lemma}
\noindent By summing (\ref{cruxeq}) over all $(u,i)$ such that $i=\sigma(u)$, we get \begin{corollary}\label{cor:main} For any $m$-bucket order $\sigma$ with probability at least $1-n^{-10}$,
$ \left | \operatorname{cost}^S(\sigma) - \operatorname{cost}(\sigma) \right | = O(\varepsilon \gamma n^2)$. \end{corollary}
\vspace*{-5ex} \subsubsection{Arora et al.'s LP approach \cite{DBLP:journals/mp/AroraFK02}}\label{sec:aroraLP}
The benefit of Lemma~\ref{crux} is the fact that (\ref{cruxeq}) can be written as a pair of linear inequalities in variables $(x_{vj})_{v\in V\setminus\{u\}, j\in [m]}$, where $x_{vj}$ is an indicator for the predicate ``$\sigma(v) = j$''. Indeed, $\operatorname{cost}^{u,S}(\sigma_{u\rightarrow i})$ is a known constant, and $\operatorname{cost}^u(\sigma_{u\rightarrow i})$ is a linear combination of $(x_{vj})_{v\neq u, j\in [m]}$. This property allowed Arora et al. in \cite{DBLP:journals/mp/AroraFK02} to introduce an LP over these variables, where the utility function $\operatorname{cost}^S(\sigma)$ is clearly a linear function of the system $(x_{vj})_{v\in V, j\in [m]}$. Some obvious standard constraints are added: for all $v,j$, $x_{vj}\geq 0$, and for all $v$, $\sum_{j\in [m]} x_{vj}=1$; of course, $x_{vj}$ is hardwired as $1$ (resp. $0$) whenever $v\in S$ and $\sigma^*(v)=j$ (resp. $\sigma^*(v)\neq j$). The \emph{almost balanced bucket} constraint is also added: $ \forall j\in [m]: \lfloor n/m\rfloor \leq \sum_{v\in V} x_{vj} \leq \lceil n/m\rceil\ .$ The following arguments in \cite{DBLP:journals/mp/AroraFK02} are by now classic: randomly round the optimal LP solution by independently drawing, for each $v\in V$, from the discrete distribution assigning probability $x^*_{vj}$ to the $j$'th bucket. Denote the resulting $m$-bucket order $\sigma'$. As argued in \cite{DBLP:journals/mp/AroraFK02}, with high probability each constraint in the system will be satisfied up to a possible additive violation of magnitude depending on an $\ell_\infty$ and an $\ell_0$ (support size) property of the constraint. The precise statement is as follows:
\begin{lemma}[Essentially \cite{DBLP:journals/mp/AroraFK02}] \label{lem:LP beta}
If the optimal solution to the LP $x^*$ satisfies $\sum \beta_{vj} x^*_{vj} \leq \alpha$ for $\beta \in {\bf R}^{|V|\times m}$ and $\alpha \in {\bf R}$, then with probability at least $1-\eta$ the rounded solution $\sigma'$ will violate the constraint by no more than $\|\beta\|_\infty\sqrt{\|\beta\|_0 \log (1/\eta)}$, where $\|\beta\|_0$ is the number of vertices $v\in V$ such that $\beta_{vi} \neq 0$ for some $i\in [m]$, $\|\beta\|_\infty = \max_{v\in V, i\in [m]} |\beta_{vi}|$ and $\eta>0$ is any number.\footnote{We have implicitly viewed $\sigma'$ as a vector $(\sigma'_{vj})_{v\in V, j\in [m]}$, with $\sigma'_{vi}$ an indicator for $\sigma'(v)=i$.} \end{lemma}
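The rounding step itself is simple to sketch in code (an illustrative sketch with our own names, not the paper's implementation): each vertex $v$ independently draws its bucket from the discrete distribution $(x^*_{v1},\dots,x^*_{vm})$.

```python
import random

def round_lp(x, m, seed=0):
    """Independently assign each vertex v to bucket j with probability x[v][j]."""
    rng = random.Random(seed)
    sigma = {}
    for v, probs in x.items():
        r, acc = rng.random(), 0.0
        for j in range(m):
            acc += probs[j]
            if r < acc:
                sigma[v] = j
                break
        else:
            sigma[v] = m - 1  # guard against floating-point slack in the probabilities
    return sigma
```

Vertices with hardwired integral values (those of $S$) are assigned deterministically by this rule, since their distribution puts all mass on a single bucket.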
In our case, consider an LP constraint coming from (\ref{cruxeq}). Its corresponding coefficient vector $\beta$ satisfies $\|\beta\|_0 \leq n$ and $\|\beta\|_\infty \leq 1$.
We conclude that with probability at least $1-n^{-10}$, (\ref{cruxeq}) is satisfied with $\sigma = \sigma'$ and for all $u,i$, and hence also the guarantee of Corollary~\ref{cor:main}.\footnote{All this happens with possibly slightly worse constants hiding in the $O$-notation.} Also, by virtue of the \emph{almost balanced bucket} constraint and Lemma~\ref{lem:LP beta}, with probability at least $1-n^{-10}$ the rounded solution $\sigma'$ is an $m$-bucket order.
Additionally, by analyzing the coefficient vector $\beta_{\operatorname{utility}}$ corresponding to the LP utility function, with probability at least $1-n^{-10}$ the cost $\operatorname{cost}^S(\sigma')$ is bounded by $\operatorname{LP}(x^*) + O(\varepsilon\gamma n^2)$, which is bounded by $\operatorname{cost}^S(\sigma^*) + O(\varepsilon \gamma n^2)$ by LP optimality. Note also that the guarantee of Corollary~\ref{cor:main} applies to $\sigma = \sigma^*$ with probability at least $1-n^{-10}$. Combining via a union bound and the triangle inequality, one gets that $\operatorname{cost}(\sigma') \leq \operatorname{cost}(\sigma^*) + O(\varepsilon\gamma n^2)$. We conclude the section with the following lemma, which is implicit in \cite{DBLP:journals/mp/AroraFK02}. \begin{lemma} \label{lem:LP rounding works} Given the correct bucketing on the vertices of $S$, one can construct a polynomially sized linear program whose rounded solution $\sigma'$ has the property $\operatorname{cost}(\sigma')\leq \operatorname{cost}(\sigma^*)+O(\gamma \varepsilon n^2)$ with probability at least $1-n^{-9}$. \end{lemma}
\vspace*{-2ex} \subsection{Query efficiency} \label{sec:high cost query PTAS} \vspace*{-2ex} The problem is that expressing inequality (\ref{cruxeq}) in the LP requires complete knowledge of the input $T$. Taking a revised look at this strategy, we see that the sample $S$ is not strong enough, in the sense that it can be used to approximate $\operatorname{cost}(\sigma)$ well (per Corollary~\ref{cor:main}) for at most $\operatorname{poly}(n)$ $m$-bucket orders $\sigma$ simultaneously, but certainly not for \emph{all} $m$-bucket orders.
For each $u\in V$ randomly select a sample $S^u = (v^u_1,\dots, v^u_p)$ of vertices of $V$, where $p = O((\gamma\varepsilon)^{-2} \log n)$, each sample $S^u$ is chosen independently of the other samples, and the $v^u_i$'s are chosen uniformly at random from $V$, with repetitions. Denote the ensemble $\{S^u: u\in V\}$ by ${\mathcal S}$. For any $m$-balanced ordering $\pi$ on $V$, define $\operatorname{cost}^{u,{\mathcal S}}(\pi) = \frac {n} {2p} \sum_{i=1}^p \operatorname{cost}^{u,v^u_i}(\pi)$ and $\operatorname{cost}^{\mathcal S}(\pi) = \sum_{u\in V} \operatorname{cost}^{u,{\mathcal S}}(\pi)$. It is not hard to see that $\operatorname{cost}^{\mathcal S}(\pi)$ is an unbiased estimator of $\operatorname{cost}(\pi)$, for any $\pi$. Using standard measure concentration bounds, we have the following: \begin{lemma}\label{thm:bigsample} With probability at least $1-n^{-10}$, uniformly for all $m$-balanced orderings $\sigma$ on $V$,
$\left | \operatorname{cost}^{\mathcal S}(\sigma) - \operatorname{cost}(\sigma) \right | = O(\varepsilon\gamma n^2)$. \end{lemma} \begin{lemma}\label{thm:bigsample1} Fix an $m$-balanced ordering $\sigma$. With probability at least $1-n^{-10}$, uniformly for all $u\in V$ and $i\in [m]$,
\begin{equation}\label{eq:bigsample1}
\left | \operatorname{cost}^{u,S}(\sigma_{u\rightarrow i}) - \operatorname{cost}^{u,{\mathcal S}}(\sigma_{u\rightarrow i}) \right | = O(\varepsilon\gamma n)\ . \end{equation} \end{lemma} \noindent By summing (\ref{eq:bigsample1}) over all $(u,i)$ s.t. $i=\sigma(u)$, we get \begin{corollary}\label{thm:bigsample2} Fix an $m$-balanced ordering $\sigma$. With probability at least $1-n^{-10}$,
$\left | \operatorname{cost}^{S}(\sigma) - \operatorname{cost}^{\mathcal S}(\sigma) \right | = O(\varepsilon\gamma n^2)$. \end{corollary}
We build an LP as in Section~\ref{sec:aroraLP}, except that (\ref{eq:bigsample1}) replaces (\ref{cruxeq}). Note that the coefficient vectors $\beta$ of the new constraints now satisfy $\|\beta\|_0 = O(p) = O((\gamma \varepsilon)^{-2} \log n)$ and $\|\beta\|_\infty = O(n/p) = O(n\gamma^2\varepsilon^2/\log n)$. Using Lemma \ref{lem:LP beta} and an analysis similar to that of Section~\ref{sec:aroraLP}, an analog of Lemma \ref{lem:LP rounding works} can be proven. That is, we conclude that with probability at least $1-n^{-9}$, the $m$-bucketed ordering $\sigma'$ output by rounding the optimal LP solution satisfies $ \operatorname{cost}^{\mathcal S}(\sigma') \leq \operatorname{cost}^{\mathcal S}(\sigma^*) + O(\varepsilon \gamma n^2)$. By Lemma~\ref{thm:bigsample} this implies that $\operatorname{cost}(\sigma') \leq \operatorname{cost}(\sigma^*) + O(\varepsilon\gamma n^2)$. Algorithm \ref{alg:high cost} (Appendix~\ref{sec:alg:high cost}) summarizes the query efficient PTAS for the MFAST high cost case.
The $k$-CC high cost case can be solved along similar lines, though this case is slightly easier because the clusters need not be balanced. \vspace*{-2ex} \section{Discussion and Future Work} We believe that in the low cost $k$-CC case, there should be a PTAS with efficient query complexity, running in time $\operatorname{poly}(n, \varepsilon^{-1}, k)$ (not exponential in $k, \varepsilon^{-1}$), assuming the low cost case in each recursive instance. This is true for MFAST, and we leave the question of achieving it for $k$-CC to future work.
\vspace*{-2ex}
\appendix
\section{Proof of Lemma~\ref{lem:deg in S and C equal}}\label{sec:proof:lem:deg in S and C equal}
This is a simple application of the following more general, well-known sampling principle: If $V_1,\dots, V_M$ is a collection of subsets of $V$ and $T$ is a sample of $N$ uniformly chosen elements from $V$ (with repetition), then with probability $1-\eta$,
for all $i=1,\dots, M$,
$ \left | \frac { |V_i|}n - \frac {|V_i\cap T|}{|T|}\right | = O\left (\sqrt{N^{-1}\log (M/\eta)}\right )$.
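This principle is easy to check empirically; the sketch below (our own illustrative code, not part of the proof) estimates the relative sizes of given subsets from a uniform sample drawn with repetition.

```python
import random

def sample_fractions(V, subsets, N, seed=0):
    """Estimate each |V_i|/n by the fraction of sample hits |V_i ∩ T| / |T|."""
    rng = random.Random(seed)
    T = [rng.choice(V) for _ in range(N)]  # N uniform draws, with repetition
    return [sum(1 for t in T if t in Vi) / N for Vi in subsets]
```

With $N = O(\varepsilon^{-2}\log(M/\eta))$ samples, all $M$ estimates are simultaneously within $\varepsilon$ of the true fractions with probability $1-\eta$, by a Chernoff bound and a union bound over the $M$ subsets.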
\section{Proof of Lemma~\ref{lem:Cstar almost hat C}}\label{sec:proof:lem:Cstar almost hat C}
We start by proving the inclusion \begin{equation}\label{inclusion1} \hat C_j \subseteq C^*_j \cup V_{\mathrm{costly}}\ .\end{equation} Assume for contradiction that there exists some $v \in \hat C_j \setminus (C^*_j \cup V_{\mathrm{costly}})$. Let $i \in [k]$, $i\neq j$ be such that $v \in C^*_i$. As $v \notin V_{\mathrm{costly}}$ we know by Theorem \ref{thm:cost_add_apx} that $ \operatorname{cost}^*(v,i) = \operatorname{cost}^*(v) \leq \operatorname{cost}^*(v,j) \leq \widetilde{\operatorname{cost}}(v,j)+\beta n \leq \frac{c_3 n}{k^2} + \beta n$.
We get: \begin{equation} \label{eq:C_i+C_j}
\frac{2nc_3}{k^2} + 2n\beta \geq \operatorname{cost}^*(v,i)+\operatorname{cost}^*(v,j) \geq {|C^*_i|+|C^*_j|-1}\ , \end{equation} where the right-hand inequality is a consequence of the fact that for any $u\in (C^*_i\cup C^*_j)\setminus \{v\}$, $v$ incurs a cost w.r.t. $u$ either when it is included in $C^*_i$ or when it is included in $C^*_j$. Hence, \begin{equation}\label{maxCiCj}
\max\{ |C^*_i|,|C^*_j| \}\leq \frac{2nc_3}{k^2} + 2n\beta + 1 \leq \frac{c_5n}{k^2} \end{equation} for some constant $c_5$ that can be made arbitrarily small by tuning $c_2$ (the constant factor in $\beta$) and $c_3$. Since this holds for any $i \neq j$ satisfying $C^*_i \cap (\hat C_j \setminus V_{\mathrm{costly}}) \neq \emptyset$ we have that
$$ |C^*_j| \geq |\hat C_j| -|V_{\mathrm{costly}}| - \sum_{i :\; i \neq j , C^*_i \cap (\hat C_j \setminus V_{\mathrm{costly}}) \neq \emptyset} |C^*_i| \geq \frac{n}{2k} - \frac{c_4n}{k} - \frac{c_5n}{k} > \frac{c_5n}{k}\ ,$$ where we used (\ref{Vcostlysize}) and (\ref{maxCiCj}), and ensure that $2c_5+c_4<1/2$. We derive a contradiction to (\ref{maxCiCj}).
We now prove the inclusion $C^*_j \subseteq \hat C_j \cup V_{\mathrm{costly}}$. By (\ref{inclusion1}), we conclude that
$|C^*_j|$ is lower bounded by $\frac n k \left (\frac 1 {2} - {c_4} \right ) \geq \frac n {4k}$,
as long as $c_4 < 1/4$.
Assume for the sake of contradiction that there exists some $v \in C^*_j \setminus V_{\mathrm{costly}}$ such that for some $i\neq j$, $v \in \hat C_i$. By the guarantee of Theorem~\ref{thm:cost_add_apx} we have that $\operatorname{cost}^*(v,j) \leq \operatorname{cost}^*(v,i) \leq \widetilde{\operatorname{cost}}(v,i) + \beta n\leq c_3 n/k^2 + \beta n$. This gives us again (\ref{eq:C_i+C_j}), leading to (\ref{maxCiCj}), contradicting our lower bound on $|C^*_j|$ for sufficiently small $c_5$.
\noindent This concludes the proof of the lemma.
\section{Continuation of Proof of Lemma~\ref{lem:close in V}}\label{sec:proof:lem:close in V} We now proceed with the proof of the lemma.
Consider the following bipartite directed graph $H = (U, \Gamma)$. The vertex set $U$ is defined as follows: $ U = \{C:\ C\mbox{ is a cluster in }{\mathcal S}\} \cup \{D: D\mbox{ is a cluster in }\tilde {\mathcal S}\}$.
The edge set $\Gamma$ is defined as follows: For any cluster $C \in {\mathcal S}$, add a directed edge $(C,D)$, where $D$ maximizes $|D' \cap C|$ over clusters $D'$ of $\tilde {\mathcal S}$ breaking ties arbitrarily.
Symmetrically, for each cluster $D$ of $\tilde {\mathcal S}$ add a directed edge $(D,C)$ where $C$ maximizes $|C'\cap D|$ over clusters $C'$ of ${\mathcal S}$, breaking ties arbitrarily. Note that the out-degree of every vertex of $H$ is exactly $1$. We will now define a bipartite matching on $U$ using the following rule: If for some $C,D$, both $(C,D)\in \Gamma$ and $(D,C)\in \Gamma$, then match $C$ to $D$, and call the pair $(C,D)$ \emph{a good match}. The remaining (unmatched) vertices are matched arbitrarily, and the corresponding pairs are called \emph{bad matches}.
By the above claim, if $(C,D)$ is a good match, then
$ \max\{|C\setminus D|, |D\setminus C|\} = O(\delta^{1/3}|S|)$.
We now show that if $(C,D)$ is a bad match then both
$|C| = O(\delta^{1/3}|S|)$ and $|D| = O(\delta^{1/3}|S|)$. By symmetry, it suffices to show that $|C|=O(\delta^{1/3}|S|)$. Let $C$ be a cluster of ${\mathcal S}$ that is a member of a bad match. Let $D$ be the unique cluster of $\tilde {\mathcal S}$ such that $(C,D)\in\Gamma$ and let $C'$ be the unique cluster of ${\mathcal S}$ such that $(D,C')\in\Gamma$. By the definition of a bad match we know that $C \neq C'$. By the above claim we have that both $|C\setminus D| = O(\delta^{1/3}|S|)$
and $|D\setminus C'| = O(\delta^{1/3}|S|)$, which implies
$|C\setminus C'| = O(\delta^{1/3}|S|)$. But $C\cap C'=\emptyset$, therefore $|C| = O(\delta^{1/3}|S|)$. This concludes the proof of the lemma.
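The matching rule used in this proof can be sketched as follows (our own illustrative code, not part of the formal argument): each cluster points to the cluster of the other clustering with maximum overlap, and a pair matched in both directions is a good match.

```python
def match_clusters(S_clusters, T_clusters):
    """Return the set of good matches (i, j): cluster i of the first clustering
    and cluster j of the second each maximize the overlap with the other, so
    both directed edges (C_i, D_j) and (D_j, C_i) are present."""
    to_T = {i: max(range(len(T_clusters)),
                   key=lambda j: len(S_clusters[i] & T_clusters[j]))
            for i in range(len(S_clusters))}
    to_S = {j: max(range(len(S_clusters)),
                   key=lambda i: len(S_clusters[i] & T_clusters[j]))
            for j in range(len(T_clusters))}
    return {(i, j) for i, j in to_T.items() if to_S.get(j) == i}
```

Pairs not returned by this rule correspond to the bad matches, whose clusters the proof shows must be small.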
\section{Algorithm for High-Cost MFAST}\label{sec:alg:high cost} \begin{algorithm} \caption{query efficient PTAS for MFAST}
\label{alg:high cost} \emph{Input}: A tournament $T=(V,A)$, an approximation parameter $\varepsilon$, and an assumed minimal cost parameter $\gamma$
\emph{Output}: a permutation $\sigma: V \to [n]$ where $n=|V|$
For each $u \in V$ randomly select a sample $S^u=\{v_1^u,\ldots,v_p^u\}$, where $p=O( (\varepsilon \gamma)^{-2} \log(n) )$ and the $v_i^u$'s are chosen from $V$ with repetitions. Denote the ensemble $\{S^u: u\in V\}$ by ${\mathcal S}$. (This is the \emph{verification} sample.)
Set $S$ as a set of random i.i.d.\ vertices $S=\{v_1,\ldots,v_s\}$ chosen with repetitions, where $s=O( (\varepsilon \gamma)^{-2} \log(n) )$. (This is the \emph{enumeration} sample.)
Set $m=O((\gamma \varepsilon)^{-1})$ as the number of buckets.
For each possible $m$-bucket order of the vertices of $S$ perform the following: \begin{itemize} \item Construct an LP as described in Section~\ref{sec:high cost query PTAS}, producing a fractional $m$-bucket order that agrees with the bucketing of $S$.
\item Solve the LP and round it as described in Section \ref{sec:aroraLP}. \end{itemize} Pick the rounded solution whose approximated cost w.r.t.\ $\cal S$ ($\operatorname{cost}^{\cal S}(\cdot)$) is minimal, and output an arbitrary permutation extending it. \end{algorithm}
\end{document}
For certain real values of $a, b, c,$ and $d,$ the equation $x^4+ax^3+bx^2+cx+d=0$ has four non-real roots. The product of two of these roots is $13+i$ and the sum of the other two roots is $3+4i,$ where $i^2 = -1.$ Find $b.$
Since the coefficients of the polynomial are all real, the four non-real roots must come in two conjugate pairs. Let $z$ and $w$ be the two roots that multiply to $13+i$. Since $13+i$ is not real, $z$ and $w$ cannot be conjugates of each other (since any complex number times its conjugate is a real number). Therefore, the other two roots must be $\overline{z}$ and $\overline{w}$, the conjugates of $z$ and $w$. Hence, we have \[zw = 13+i \quad \text{and} \quad \overline{z} + \overline{w} = 3+4i.\]To find $b$, we use Vieta's formulas: $b$ equals the second symmetric sum of the roots, which is \[b = zw + z\overline{z} + z\overline{w} + w\overline{z} + w\overline{w} + \overline{z} \cdot \overline{w}.\]To evaluate this expression, we first recognize the terms $zw$ and $\overline{z} \cdot \overline{w}$. We have $zw = 13+i$, so $\overline{z} \cdot \overline{w} = \overline{zw} = 13-i$. Thus, \[b = 26 + (z\overline{z} + z\overline{w} + w\overline{z} + w\overline{w}).\]To finish, we can factor the remaining terms by grouping: \[ b = 26 + (z+w)(\overline{z}+\overline{w}).\]From $\overline{z} + \overline{w} = 3+4i$, we get $z + w = 3-4i$. Thus, \[b = 26 + (3-4i)(3+4i) = \boxed{51}.\]
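As a quick numerical sanity check (not part of the solution itself), we can recover $z$ and $w$ from their sum $3-4i$ and product $13+i$, form all four roots, and recompute the second symmetric sum:

```python
import cmath

s, p = 3 - 4j, 13 + 1j                 # z + w and z * w
d = cmath.sqrt(s * s - 4 * p)          # discriminant of t^2 - s*t + p
z, w = (s + d) / 2, (s - d) / 2
roots = [z, w, z.conjugate(), w.conjugate()]
# b is the second elementary symmetric polynomial of the four roots
b = sum(roots[i] * roots[j] for i in range(4) for j in range(i + 1, 4))
assert abs(b - 51) < 1e-6              # matches the boxed answer; b is real
```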
Application of the analytic hierarchy process in the selection of traditional food criteria in Vojvodina (Serbia)
Ivana Blešić ORCID: orcid.org/0000-0003-2534-32801,2,
Marko D. Petrović2,3,
Tamara Gajić4,
Tatiana Tretiakova2,
Miroslav Vujičić1 &
Julia Syromiatnikova2
Vojvodina Province (Northern Serbia) is a multicultural area inhabited by around thirty nations and national or ethnic groups with their authentic traditions and cultures. The gastronomy of Vojvodina has been shaped by the geographic characteristics of this area: its natural conditions and social events. The life that numerous nations share on the fertile soil of Vojvodina has fostered a mutual exchange of customs, which contributed to the creation of a unique and distinctive Vojvodina cuisine. In this paper, a mixed-method research approach was applied. The application of the analytic hierarchy process (AHP) method was preceded by a survey of 289 guests in restaurants on the territory of Vojvodina. The aim of the survey was to define the key motives when choosing a traditional Vojvodina dish. In the second stage, an AHP model was used by 29 experts in the field of hospitality and gastronomy to rank the factors significant for choosing traditional food. The results show that, according to the experts, Sensory appeal is the most important criterion for choosing traditional food in restaurants, followed by Health concern and Familiarity.
The Autonomous Province of Vojvodina is located in the northern part of the Republic of Serbia in the Pannonian Plain, encompassing 24.3% of the country's territory (i.e., 21,506 km2). The northern province is intersected by three big navigable rivers (the Danube, the Tisa, and the Sava), which divide its territory into three clearly distinctive wholes: on the far east there is Banat, on the northwest—Bačka, and on the southwest—Srem [1]. According to the last census from 2011, the AP Vojvodina has a population of 1,931,809, or 21.56% of the total population of the Republic of Serbia. The Serbs represent the majority of its population (67%), followed by the Hungarians (13%), Slovaks (3%), Croats (2%), Roma (2%), Romanians (1%), Montenegrins (1%), and other smaller ethnic groups, including Bunjevci, Ruthenians, Yugoslavs, Macedonians, Ukrainians, Germans, Albanians, Slovenians, Bulgarians, and others [2] (Fig. 1).
Map showing the geographical position of Vojvodina (Serbia) within Europe and ethnic structure of the population of Vojvodina
All the nations that came to live in Vojvodina brought their national features with them, in culture, in the way of life, and in food as well. That is the reason why the cuisine of Vojvodina is characterized by great diversity. In Bačka and Srem, there is a strong impact of the Hungarians, Germans, Croats, Slovaks, and Ruthenians. In Banat, besides the influence of the Hungarians, Germans, and Slovaks, the Romanians also contributed greatly to the forming of the Vojvodina cuisine. However, it may be concluded that the greatest impact came from the German and Hungarian cuisines [3]. The influence of these national cuisines on the development of a unique Vojvodina cuisine can be seen in the names of many dishes and other gastronomic terms that were adopted from the German language, such as "fruštuk" (Das Frühstück) for breakfast, "jauzna" (Die Jause) for snack, "foršpajz" (Die Vorspeise) for an appetizer, "rindflajš" (Das Rindfleisch) for boiled beef, and "cušpajz/varivo" (Die Zuspeise) for a vegetable dish, as well as strudel, doughnuts, and dumplings, or from the Hungarian language, such as "perkelt" (pörkölt) for meat fried in its own juices, "gulaš" (gulyás) for goulash, and "paprikaš" (paprikás) for stew [4]. Table 1 shows the ingredients, cooking type, and preparation time of the traditional Vojvodina dishes. The traditional Vojvodina dishes are prepared with ingredients which have high nutritional values. The preparation method, which mainly includes boiling and simmering, is in accordance with the principles of a healthy diet [5]. The highest value of proteins in 100 g of a prepared dish can be found in "rindflajš," followed by "riblji paprikaš." These dishes also lead in fat content and in calories. The highest content of total carbohydrates is in "ćuretina sa mlincima" (Table 2).
Table 1 Traditional Vojvodina dishes (ingredients, cooking type, and preparation time). Source: [8]
Table 2 Proximate nutritional values of the traditional Vojvodina dishes*
The authentic life of the past centuries in Vojvodina can best be experienced on "salaši." A "salaš" is an agricultural property, i.e., a house with its farmstead in the fields, surrounded by vast arable land. The building of "salaši" started around the middle of the eighteenth century [6]. Even though thousands of "salaši" were demolished after the Second World War, today there are enough "salaši" in Vojvodina to serve as reminders of past times [7]. Some of them still operate as agricultural properties, and 65 "salaši" have been transformed into tourist sites and specific restaurants of the traditional Vojvodina cuisine [8].
The conducted research had two basic goals. The first was to identify the guests' key motives for choosing the traditional Vojvodina cuisine in ethno restaurants on the territory of Vojvodina ("salaši"). The second was to determine the most significant criteria for choosing traditional dishes, as well as to identify the best and most authentic traditional dish, by using the AHP method and examining the experts' attitudes.
Gastro-tourism has been popular in the world for several decades, but only recently in Vojvodina, where awareness among tourism-industry employees of its benefits for local and regional development is still insufficient. In the recent development policies and concepts concerning tourism development, gastronomy, i.e., the dishes prepared in the traditional way, was not given an adequate development role. Moreover, the studies that dealt with the Vojvodina cuisine mainly examined only one of the national cuisines that the unique Vojvodina cuisine consists of. Thus, this study represents a novelty in the scientific literature in this field for two reasons: first, it researches the key motives for choosing a traditional Vojvodina dish in restaurants, and second, it investigates the attitudes of experts in the field of hotel management and gastronomy regarding the factors that decide on the most authentic dish.
The term "traditional food" is defined as "…a product frequently consumed or associated with specific celebrations and/or seasons, usually passed on from one generation to another, made accurately in a specific way according to the gastronomic heritage, with little or no processing/manipulation, distinguished and known because of its sensory properties and associated with a certain local area, region or country" [11, p. 348]. According to [12], a traditional food product belongs to a defined space, and it is part of a culture that implies the cooperation of the individuals performing in that region.
Many studies have analyzed the concept of traditional food [11, 13, 14], traditional food consumer acceptance and preference [15,16,17,18,19,20,21,22], its sensory characteristics [23,24,25,26], or the impact of traditional food on health [27,28,29,30].
In the research related to the motives for choosing food, the work of Steptoe, Pollard, and Wardle is one of the essential starting points [31]. They classified food choice motives into nine dimensions: Health, Mood, Convenience, Sensory appeal, Natural content, Price, Weight control, Familiarity, and Ethical concern. The Food Choice Questionnaire was widely used by scholars to explore choice motives of consumers with different cultures and food products [32,33,34,35,36,37,38,39]. Several studies, like this one, focused on traditional food [40,41,42,43].
Traditional food, in relation to the development of tourism and the local economy, was the subject of interest in several research papers [44,45,46]. As part of the tourist experience, eating traditional local food is a way of breaking with everyday routine [44]. Moreover, food diversity is a core theme in destination marketing [46]. Multiculturalism has influenced the variety of the gastronomic offer in Vojvodina and made it attractive and interesting to most consumers who try traditional specialties. The Autonomous Province of Vojvodina, a multiethnic area and fertile plain, has considerable potential for the development of rural, cultural, and gastro-tourism [47].
The research itself consists of several phases (Fig. 2). The pre-study phase included a review of the literature and restaurant menus in order to (a) select an adequate questionnaire to measure food choice and (b) choose the five traditional dishes that are most present in restaurants. Potential motives for choosing traditional food were almost entirely based on the Food Choice Questionnaire [31]. Only the items most appropriate and relevant to the case of traditional food were included. In the first phase, the authors interviewed a focus group of eight experts, composed of academic researchers and managers of traditional restaurants in Vojvodina, to refine the selected items. As a result, the ethical concern FCQ dimension was not included in the questionnaire: the results of the interview indicated that the items of the ethical concern factor were not adequate for surveying guests in restaurants. Eight FCQ dimensions were included in the research: weight control, price, mood, convenience, natural content, health, sensory appeal, and familiarity. The total number of items was 33. The second phase was conducted to evaluate the motives that determine guests' perception of traditional Vojvodina food. In that context, a paper-and-pen survey was used to collect data from guests in the selected restaurants which offer dishes of the traditional Vojvodina cuisine. The questionnaire was distributed in 50 traditional restaurants in all parts of Vojvodina, and data were obtained from a total of 42 restaurants. The survey was conducted from January till August 2019, and guests' participation was anonymous and voluntary. A five-point Likert scale (strongly disagree = 1 to strongly agree = 5) was used to assess all the items.
A graphical scheme of study design
In the third phase, the AHP model was used to rank the obtained motives significant for choosing traditional food. A total of 29 experts evaluated the importance of each factor in relation to the other factors considered. Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS 23) and Expert Choice 2000 software packages.
The analytic hierarchy process (AHP), a systematic approach developed by Saaty [48], is one of the best-known multi-criteria decision-making methods. It is most often applied to complex problems consisting of numerous elements: aims, criteria, sub-criteria, and alternatives. In this technique, the problem is broken down hierarchically into these elements in the top-to-bottom direction. Using answers collected from the respondents, the AHP gradually compares alternatives and then measures their impact on the final decision-making goal. The goal of the AHP method is to break down even the most complex problem into a hierarchy for easier analysis. All the parts of the hierarchy are interconnected, so it is easy to notice how a change in one factor affects the others [49]. The method gained its popularity by proving to be a useful decision-support tool that enables a decision maker to prioritize and make the best decisions.
The pairwise comparison technique used by the AHP method determines the preferences for a set of elements at a given level of the decision-making hierarchy with respect to the elements at the higher level. Here, Saaty's scale [48] was used, given as 1, 3, 5, 7, 9, where 1 denotes equal importance and 9 the absolute importance of one element over another (Table 3). If element i is more important than element y, the relevant index value is assigned in the matrix A; if the judgment is that y is more important than i, the reciprocal of the relevant index is assigned instead. The results of all the comparisons are placed in positive reciprocal square matrices. The next stage in the AHP method is to calculate the eigenvector for each matrix. The so-called local priority vector is calculated using the principal eigenvector of a comparison matrix, as suggested by Saaty [48]. The section below shows this in more detail.
Table 3 Saaty's scale for pairwise comparisons in AHP. Source: [50]
The result of the comparison of the elements i and y is placed in matrix A at position aiy:
$$A=\left[\begin{array}{ccccc}{a}_{11}& {a}_{12}& .& .& {a}_{1n}\\ {a}_{21}& {a}_{22}& .& .& {a}_{2n}\\ .& .& .& .& .\\ .& .& .& .& .\\ {a}_{n1}& {a}_{n2}& .& .& {a}_{nn}\end{array}\right]$$
The reciprocal value of the results of the comparison is placed on the position ayi to preserve the consistency of the judgment. The respondent is asked to compare n elements and place the results in matrix A. After all the pairwise comparison matrices have been formed, the vector of weights, w = [w1, w2,…,wn], is then computed on the basis of Saaty's eigenvector procedure. The computation of the weights involves two steps. First, the pairwise comparison matrix, A = [aij]n×n, is normalized by Eq. (1), and then the weights are calculated using Eq. (2).
Normalization:
$${a}_{ij}^{*}=\frac{{a}_{ij}}{\sum_{i=1}^{n}{a}_{ij}}$$
Weights calculation:
$${w}_{i}=\frac{\sum_{j=1}^{n}{a}_{ij}^{*}}{n}$$
for all i = 1, 2,…, n.
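As a concrete illustration, Eqs. (1) and (2) can be carried out in a few lines of NumPy. The 3 × 3 pairwise comparison matrix below is a hypothetical example on Saaty's scale, not data from this study:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale.
# Entries below the diagonal are the reciprocals of those above it.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# Eq. (1): normalize each column of A so that it sums to 1.
A_norm = A / A.sum(axis=0)

# Eq. (2): average each normalized row to obtain the weight vector w.
w = A_norm.mean(axis=1)

print(w.round(3))  # the weights sum to 1; the first element dominates
```

This column-normalization-and-row-averaging procedure approximates the principal eigenvector of A and is exact when the matrix is perfectly consistent.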
Saaty [48] showed that there is a relationship between the vector of weights, w, and the pairwise comparison matrix, A. The final stage of the evaluation is to calculate the consistency ratio (CR) in order to determine how consistent the judgments are, and thus whether the results from multiple respondents are generalizable. The closer λmax is to n, the more consistent the judgments. The difference λmax − n could be used directly to measure the level of inconsistency, but instead Saaty defined a consistency index (CI), calculated as (λmax − n)/(n − 1). Finally, the CR is calculated as the ratio of the CI to the random index (RI), CR = CI/RI, as defined by Saaty [48, 50]. RI is the random index derived from numerous randomly generated n × n matrices. If the CR is less than 0.10, the result is sufficiently accurate and there is no need to adjust the comparisons or repeat the calculation. If the CR is much in excess of 0.10, the judgments are untrustworthy and the results should be reanalyzed to determine the reasons for the inconsistencies.
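The consistency check can be sketched as follows, again using a hypothetical comparison matrix rather than the study's data. The RI values are Saaty's published random indices:

```python
import numpy as np

# Saaty's random index (RI) for matrix sizes n = 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A: np.ndarray) -> float:
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    # Weights via column normalization and row averaging (Eqs. 1-2).
    w = (A / A.sum(axis=0)).mean(axis=1)
    # lambda_max estimated as the mean of (A w)_i / w_i.
    lambda_max = float(np.mean(A @ w / w))
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

# A hypothetical, nearly consistent 3x3 matrix.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])
cr = consistency_ratio(A)
print(round(cr, 3))  # below the 0.10 threshold, so the judgments are acceptable
```

A perfectly consistent matrix (a_iy = w_i / w_y for all i, y) would give λmax = n and hence CR = 0.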
The AHP model has been widely used in hospitality studies in a variety of contexts [20, 51,52,53,54,55,56]. However, it has rarely been used for multi-criteria decision-making in choosing traditional dishes in restaurants.
The aim of this paper is the selection of the best traditional Vojvodina dish from the experts' point of view. The experts are university professors in the fields of hotel management and gastronomy, managers with master's degrees in gastronomy, and managers of restaurants that offer traditional Vojvodina dishes. In order to evaluate the criteria weights for the selection of key factors important for the quality and authenticity of traditional dishes from the experts' point of view, the authors first developed a hierarchically structured model (Fig. 3) and then applied the AHP method. Figure 3 shows the hierarchical structure of the problem addressed in this paper. The overall goal of the study was the identification of the most preferred traditional food. The criteria (blue outline) are the factors (Sensory appeal, Health concern, Mood, Familiarity, Convenience, and Price), and the sub-criteria (orange outline) are the items obtained from the explorative factor analysis (Table 5). The alternatives (green outline) are the following traditional dishes of Vojvodina cuisine: "Riblji paprikaš," "Rindflajš," "Perkelt," "Ćuretina sa mlincima," and "Sekelji gulaš" (Figs. 5, 6, 7, 8, 9 in "Appendix"). The alternatives were selected by the authors based on an analysis of the presence of traditional Vojvodina dishes in the menus of 65 restaurants on the territory of Vojvodina; the five most frequently offered main dishes were included.
Hierarchical structure of the problem
The interviews were conducted by the authors during January and February 2021. Initially, 40 experts were invited to participate in the research; 11 declined because they considered filling in the questionnaire too time-consuming. The final sample included 29 experts: nine managers of restaurants of traditional Vojvodina cuisine, 15 managers with master's degrees in hotel management and gastronomy, and five university professors in the field of hotel management and gastronomy from the University of Novi Sad, Department of Geography, Tourism and Hotel Management.
AHP surveys are well suited to appropriately selected small samples and lend themselves to in-depth structured interviews, which are useful for research focusing on a specific issue [58]. In collective decision-making with AHP, groups of two to five people are defined as small-sample groups, while groups of more than five people are defined as large-sample groups [59].
Study sample
The total sample consists of 289 guests. Women make up the majority of the sample (59.17%). The largest age group was 41–50 years of age (39.79% of the whole sample). Most of the respondents have completed university or college (48.79%). Regarding occupation, the majority of the respondents are employed (75.78%) (Table 4).
Table 4 Sociodemographic characteristics of the respondents (N = 289)
The obtained data were factor-analyzed using the principal component method and the Promax rotation procedure in order to extract the factors of motive attributes. All the factors with eigenvalues greater than 1 and factor loadings above 0.5 were retained.
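The eigenvalue-greater-than-1 retention rule (the Kaiser criterion) can be illustrated with synthetic data; the six "items" below are generated from two latent factors and merely stand in for the actual survey responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 200 respondents, 6 items, where items 1-3 load on
# one latent factor and items 4-6 on another (invented data).
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
X = np.hstack([
    f1 + 0.5 * rng.normal(size=(200, 3)),
    f2 + 0.5 * rng.normal(size=(200, 3)),
])

# Kaiser criterion: retain components whose eigenvalue of the
# correlation matrix exceeds 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
n_factors = int((eigvals > 1).sum())
print(n_factors)  # recovers the two underlying factors
```

The rotation step (Promax, as used in the study) only redistributes loadings among the retained factors; the number of factors is fixed by the retention rule above.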
The results of the factor analysis, which suggested a six-factor solution, included 21 items from the original FCQ and explained 62.99% of the variance. Twelve items were deleted due to low factor loadings. The Kaiser–Meyer–Olkin (KMO) overall measure of sampling adequacy was 0.78 [60], and Bartlett's test of sphericity was significant (p = 0.001).
The first factor was labeled "Sensory appeal." This factor explained 15.002% of the total variance with a reliability coefficient of 0.701. The second factor, "Health concern," explained 13.236% of the total variance with a reliability coefficient of 0.720. The third factor, labeled "Familiarity," explained 10.125% of the variance with a reliability coefficient of 0.852. The fourth factor, labeled "Mood," accounted for 9.102% of the variance with a reliability coefficient of 0.865. The fifth factor, "Price," explained 8.320% of the total variance with a reliability coefficient of 0.798. The sixth factor, labeled "Convenience," accounted for 7.201% of the variance with a reliability coefficient of 0.712. Cronbach's α values for each factor were greater than 0.7, which confirms that the scales of the obtained questionnaire have considerable reliability [61]. Table 5 shows the results of the factor analysis.
Table 5 Results of factor analysis
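The reliability coefficients reported above follow the standard Cronbach's α formula, α = k/(k − 1) · (1 − Σs²ᵢ/s²ₜ), where s²ᵢ are the item variances and s²ₜ is the variance of the sum score. A minimal sketch, using invented Likert scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented five-point Likert responses: 6 respondents x 3 items.
scores = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [3, 3, 2],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
])

alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # well above the 0.7 threshold for reliable scales
```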
Results of the application of the AHP method
The results show that at the first level of the hierarchy the most important criterion influencing the choice of traditional food is Sensory appeal (0.313), followed by Health concern (0.294), Mood (0.170), and Familiarity (0.122), while the least important are Convenience (0.051) and Price (0.050). The consistency ratio (CR) is 0.01, which indicates that the study is reliable and accurate and there is no need for a new evaluation of the criteria weights. Combining all the experts' responses leads to an analysis of all individual items on the second level of the hierarchy, and the obtained weight coefficients rank the sub-criteria influencing the choice of traditional Vojvodina food from the most dominant to the least dominant (Table 6).
Table 6 Total weight values for factors (criteria) and individual items (sub-criteria)
Figure 4 presents the overall weights, or priorities, in the selection of traditional food in this study. Based on Fig. 4, the results show that "Riblji paprikaš" (0.293) is the most preferred traditional food among the experts with respect to all the decision criteria: Sensory appeal, Health concern, Mood, Familiarity, Convenience, and Price. It is followed by "Rindflajš" (0.291), "Perkelt" (0.166), "Ćuretina sa mlincima" (0.151), and "Sekelji gulaš" (0.099).
Source: data analyzed in Expert Choice 2000 program
Total weight values for the alternatives.
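In AHP, the overall priority of an alternative is the weighted sum of its local priorities under each criterion. A sketch using the criteria weights reported above and invented local priorities for two alternatives:

```python
import numpy as np

# Criteria weights from the first level of the hierarchy
# (Sensory appeal, Health concern, Mood, Familiarity, Convenience, Price).
criteria_w = np.array([0.313, 0.294, 0.170, 0.122, 0.051, 0.050])

# Invented local priorities of two alternatives under each criterion;
# each column sums to 1 across the alternatives.
local = np.array([
    [0.6, 0.5, 0.4, 0.5, 0.5, 0.5],   # alternative 1
    [0.4, 0.5, 0.6, 0.5, 0.5, 0.5],   # alternative 2
])

# Overall priority: weighted sum of local priorities per alternative.
overall = local @ criteria_w
print(overall.round(3))  # the alternative stronger on Sensory appeal wins
```

Because the criteria weights and each local-priority column sum to 1, the overall priorities also sum to 1, matching the scale of the values reported in Fig. 4.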
This paper aims to determine the priority of decision-making criteria in the selection of traditional Vojvodina food among the experts using the AHP model. Based on the results, Sensory appeal plays the major role in the decision-making process to choose traditional food, followed by Health concern, Mood, Familiarity, Convenience, and Price. Sensory appeal, Health, Convenience, and Price are the most important factors in the original FCQ [31]. Price was the most important food choice motive in Spain, Greece, Ireland, Portugal, and the Netherlands; Sensory appeal ranked first for Norway, Germany, and the UK; while Natural content was the most important food choice motive in Poland [34]. Sensory appeal, Natural content, and Price are the important motivational food choice factors for the Turkish population [36]. Based on research conducted among Chinese consumers [41], Sensory appeal was a direct and strong motive not only for buying traditional Chinese food, but European food as well. Sensory pleasure, in terms of characteristics such as good taste, nice smell, appearance, and texture, is considered to be of great importance for consumer food preferences. Similar results were obtained by Januszewska et al., who examined the differences in food choice factors among respondents from four countries (Belgium, Hungary, Romania, and the Philippines) [33]. The respondents from the European countries defined Sensory appeal of food as the most important factor in their daily food choice, while the factor related to health was the key one for the respondents from the Philippines. Similar to this research, Verbeke states that, generally, Health concern is recognized as one of the most important motives in the choice of food [62]. In contrast to this study, the research of the key elements for choosing food conducted by Fotopoulos et al.
was done on a sample of 997 Greek households and showed that Natural content was the most important factor for the respondents, followed by [63]. Research conducted in restaurants in Vojvodina points out that taste is the characteristic which most affects the perception of the quality of food: of the 600 respondents, 83% believe that Vojvodina traditional dishes are very tasty [64]. The significance of sensory characteristics for the perception of the quality of dishes was also confirmed by other authors [65, 66]. The dominant significance of the criteria Sensory appeal and Health concern was also confirmed by Ting et al., who point out that only sensory appeal and health concern have a positive effect on the intention of Malaysians to consume Dayak food [40]. The Mood factor contains items related to general mood, relaxation, and stress control. The Familiarity factor includes items related to the preference for dishes the respondents are used to and familiar with. The relatively high weight values of these criteria suggest that mood and the choice of familiar food may influence the choice of food.
The least significant impact on choosing a dish was found for Convenience, which relates to the preparation and availability of traditional food, and for Price. The cost of food is an important decision-making criterion when buying food among low-income populations [31, 34]. Since the criteria were assessed by experts who do not belong to a low-income population (professors employed at the University of Novi Sad, managers with MA degrees in hotel management and gastronomy, and restaurant managers), it is understandable that Price was the least important factor when choosing a dish. Other studies have confirmed that Familiarity and Ethical concern were the least important in European countries [33, 34], and that Ethical concern and Weight control were the least important food choice motives for the Turkish population [36].
Riblji paprikaš, rated best by the experts, represents a cult dish in Vojvodina, especially in the Upper Danube area (the upper course of the Danube). Riblji paprikaš is usually cooked outdoors, in a pot, and the skill of cooking it has become part of a cult ritual. Every chef has their own culinary secret and is sure it is that secret that makes their paprikaš the best. Special merit for the popularization of the cult of this dish goes to the Danube "čardas" (specialized fish restaurants by the river), where the best paprikaš is served.
The hierarchy of factors for choosing traditional food provides useful insight for restaurant management and marketing (e.g., branding strategy, improvement of promotion, and sale of gastronomic products). Moreover, the insight into the combination of the criteria that affect the choice of Vojvodina dishes, as well as the overview of the selected alternatives, will help the enhancement and preservation of the unique national cuisine which is the result of a long-lasting synergy of culture and customs of a large number of ethnic groups on this territory.
Further research will be focused on the survey of different samples (foreign tourists, domestic tourists, local population) in order to examine the key motives for choosing a traditional dish based on different socio-demographic characteristics of the respondents. Also, the analytic hierarchy process (AHP) will be applied in the multi-criteria decision-making for choosing traditional Vojvodina dishes from the groups of appetizers and desserts.
Basarin B, Lukić T, Mesaroš M, Pavić D, Đorđević J, Matzarakis A. Spatial and temporal analysis of extreme bioclimate conditions in Vojvodina, Northern Serbia. Int J Climatol. 2018;38(1):142–57. https://doi.org/10.1002/joc.5166.
Statistical Office of the Republic of Serbia. 2011. http://popis2011.stat.rs.
Radulovački LJ. Ishrana Srba u Sremu. Novi Sad: Matica srpska; 1996.
Blešić I, Lazić L, Božin M, Ivkov Džigurski A. Richness of culinary influences: gastronomy of Sombor and Apatin. Novi Sad: Faculty of sciences and Chamber of economy of Vojvodina; 2014.
Popov-Raljić J. Ishrana. Novi Sad: Faculty of Sciences, Department of Geography, Tourism and Hotel Management; 2016.
Stojanov M. Ej, salaši, salaši-način života i privređivanja. Novi Sad: Matica srpska; 1994.
Košić K, Pejanović R, Radović G. Značaj salaša za ruralni turizam Vojvodine. Agroznanje. 2013;14(2):231–40. https://doi.org/10.7251/AGRSR1302231K.
Božin M. Turistički gastronomski vodič: Salaši za vas. Novi Sad: Prometej; 2018.
Jokić N. Kalorije u svakodnevnom životu. Beograd: Zavod za udžbenike; 2007.
Vukićević D. Ishrana. Svetozarevo: GP "Novi put"; 1991.
Guerrero L, Guàrdia MD, Xicola J, Verbeke W, Vanhonacker F, Zakowska-Biemans S, Sajdakowska M, Sulmont-Rossé C, Issanchou S, Contel M, Scalvedi ML. Consumer-driven definition of traditional food products and innovation in traditional foods: a qualitative cross-cultural study. Appetite. 2009;52(2):345–54. https://doi.org/10.1016/j.appet.2008.11.008.
Bertozzi L. Tipicidad alimentaria y dieta mediterranea. In: Medina A, Medina F, Colesanti G, editors. El color de la alimentacion mediterranea. Elementos sensoriales y culturales de la nutricion. Barcelona: Icaria; 1998. p. 15–41.
Vanhonacker F, Lengard V, Hersleth M, Verbeke W. Profiling European traditional food consumers. Br Food J. 2010;112(8):871–86. https://doi.org/10.1108/00070701011067479.
Amilien V, Hegnes AW. The dimensions of 'traditional food' in reflexive modernity: Norway as a case study. J Sci Food Agric. 2013;93(14):3455–63. https://doi.org/10.1002/jsfa.6318.
Guerrero L, Claret A, Verbeke W, Enderli G, Zakowska-Biemans S, Vanhonacker F, Issanchou S, Sajdakowska M, Granli BS, Scalvedi L, Contel M. Perception of traditional food products in six European regions using free word association. Food Qual Prefer. 2010;21(2):225–33. https://doi.org/10.1016/j.foodqual.2009.06.003.
Almli VL, Verbeke W, Vanhonacker F, Næs T, Hersleth M. General image and attribute perceptions of traditional food in six European countries. Food Qual Prefer. 2011;22(1):129–38. https://doi.org/10.1016/j.foodqual.2010.08.008.
Vanhonacker F, Kühne B, Gellynck X, Guerrero L, Hersleth M, Verbeke W. Innovations in traditional foods: impact on perceived traditional character and consumer acceptance. Food Res Int. 2013;54(2):1828–35. https://doi.org/10.1016/j.foodres.2013.10.027.
Lang M. Consumer acceptance of blending plant-based ingredients into traditional meat-based foods: evidence from the meat-mushroom blend. Food Qual Prefer. 2020;79: 103758. https://doi.org/10.1016/j.foodqual.2019.103758.
Guerrero L, Claret A, Verbeke W, Sulmont-Rossé C, Hersleth M. Innovation in traditional food products: does it make sense? In: Innovation strategies in the food industry. 2016. p. 77–89. https://doi.org/10.1016/B978-0-12-803751-5.00005-2.
Amuquandoh FE, Asafo-Adjei R. Traditional food preferences of tourists in Ghana. Br Food J. 2013;115(7):987–1002. https://doi.org/10.1108/BFJ-11-2010-0197.
Promsivapallop P, Kannaovakun P. Factors influencing tourists' destination food consumption and satisfaction: a cross-cultural analysis. APSSR. 2020;20(2):87–105.
Fernández-Ferrín P, Calvo-Turrientes A, Bande B, Artaraz-Miñón M, Galán-Ladero MM. The valuation and purchase of food products that combine local, regional and traditional features: the influence of consumer ethnocentrism. Food Qual Prefer. 2018;64:138–47. https://doi.org/10.1016/j.foodqual.2017.09.015.
Chanadang S, Chambers E IV. Determination of the sensory characteristics of traditional and novel fortified blended foods used in supplementary feeding programs. Foods. 2019;8(7):261. https://doi.org/10.3390/foods8070261.
Cayot N. Sensory quality of traditional foods. Food Chem. 2007;101(1):154–62. https://doi.org/10.1016/j.foodchem.2006.01.012.
Mehfooz T, Ali TM, Arif S, Hasnain A. Effect of barley husk addition on rheological, textural, thermal and sensory characteristics of traditional flat bread (chapatti). J Cereal Sci. 2018;79:376–82. https://doi.org/10.1016/j.jcs.2017.11.020.
Yang J, Lee J. Application of sensory descriptive analysis and consumer studies to investigate traditional and authentic foods: a review. Foods. 2019;8(2):54. https://doi.org/10.3390/foods8020054.
Blanchet R, Willows N, Johnson S, Salmon Reintroduction Initiatives ON, Batal M. Traditional food, health, and diet quality in syilx okanagan adults in British Columbia, Canada. Nutrients. 2020;12(4):927. https://doi.org/10.3390/nu12040927.
Ezzatpanah H. Traditional food and practices for health: Iranian dairy foods. In: Nutritional and health aspects of food in South Asian countries. Academic Press; 2020. p. 275–87.
Ruan S, Wang L, Li Y, Li P, Ren Y, Gao R, Ma H. Staple food and health: a comparative study of physiology and gut microbiota of mice fed with potato and traditional staple foods (corn, wheat and rice). Food Funct. 2021;12(3):1232–40. https://doi.org/10.1039/d0fo02264k.
Cubillo B, McCartan J, West C, Brimblecombe J. A qualitative analysis of the accessibility and connection to traditional food for aboriginal chronic maintenance hemodialysis patients. Curr Dev Nutr. 2020;4(4):nzaa036. https://doi.org/10.1093/cdn/nzaa036.
Steptoe AH, Pollard TM, Wardle J. Development of a measure of the motives underlying the selection of food: the food choice questionnaire. Appetite. 1995;25(3):267–84. https://doi.org/10.1006/appe.1995.0061.
Pollard TM, Steptoe AN, Wardle JA. Motives underlying healthy eating: using the Food Choice Questionnaire to explain variation in dietary intake. J Biosoc Sci. 1998;30(2):165–79. https://doi.org/10.1017/S0021932098001655.
Januszewska R, Pieniak Z, Verbeke W. Food choice questionnaire revisited in four countries. Does it still measure the same? Appetite. 2011;57(1):94–8.
Markovina J, Stewart-Knox BJ, Rankin A, Gibney M, de Almeida MD, Fischer A, Kuznesof SA, Poínhos R, Panzone L, Frewer LJ. Food4Me study: validity and reliability of Food Choice Questionnaire in 9 European countries. Food Qual Prefer. 2015;45:26–32. https://doi.org/10.1016/j.foodqual.2015.05.002.
Milošević J, Žeželj I, Gorton M, Barjolle D. Understanding the motives for food choice in Western Balkan Countries. Appetite. 2012;58(1):205–14. https://doi.org/10.1016/j.appet.2011.09.012.
Dikmen D, İnan-Eroğlu E, Göktaş Z, Barut-Uyar B, Karabulut E. Validation of a Turkish version of the food choice questionnaire. Food Qual Prefer. 2016;52:81–6. https://doi.org/10.1016/j.foodqual.2016.03.016.
Cunha LM, Cabral D, Moura AP, de Almeida MD. Application of the Food Choice Questionnaire across cultures: systematic review of cross-cultural and single country studies. Food Qual Prefer. 2018;64:21–36. https://doi.org/10.1016/j.foodqual.2017.10.007.
Gama AP, Adhikari K, Hoisington DA. Factors influencing food choices of Malawian consumers: a food choice questionnaire approach. J Sens Stud. 2018;33(5): e12442. https://doi.org/10.1111/joss.12442.
Szakály Z, Kontor E, Kovács S, Popp J, Pető K, Polereczki Z. Adaptation of the food choice questionnaire: the case of Hungary. Br Food J. 2018;120(7):1474–88. https://doi.org/10.1108/BFJ-07-2017-0404.
Ting H, Tan SR, John AN. Consumption intention toward ethnic food: determinants of Dayak food choice by Malaysians. J Ethn Foods. 2017;4(1):21–7. https://doi.org/10.1016/j.jef.2017.02.005.
Wang O, De Steur H, Gellynck X, Verbeke W. Motives for consumer choice of traditional food and European food in mainland China. Appetite. 2015;87:143–51. https://doi.org/10.1016/j.appet.2014.12.211.
Pieniak Z, Verbeke W, Vanhonacker F, Guerrero L, Hersleth M. Association between traditional food consumption and motives for food choice in six European countries. Appetite. 2009;53(1):101–8. https://doi.org/10.1016/j.appet.2009.05.019.
Eertmans A, Victoir A, Notelaers G, Vansant G, Van den Bergh O. The food choice questionnaire: factorial invariant over western urban populations? Food Qual Prefer. 2006;17(5):344–52. https://doi.org/10.1016/j.foodqual.2005.03.016.
Bessiere J, Tibere L. Traditional food and tourism: French tourist experience and food heritage in rural spaces. J Sci Food Agric. 2013;93(14):3420–5. https://doi.org/10.1002/jsfa.6284.
Rachão S, Breda Z, Fernandes C, Joukes V. Food tourism and regional development: a systematic literature review. EJHTR. 2019;21:33–49.
Henderson JC. Local and traditional or global and modern? Food and tourism in Singapore. J Gastron Tour. 2016;2(1):55–68. https://doi.org/10.3727/216929716X14546365943494.
Blešić I, Pivac T, Božić S. Motives for visiting traditional cultural events of ethnic groups in Vojvodina. In: 4th International scientific conference ToSEE-tourism in southern and eastern Europe 2017" tourism and creative industries: trends and challenges" Opatija, Croatia. Faculty of Tourism and Hospitality Management, University of Rijeka. 2017; p. 43–55.
Saaty TL. The analytic hierarchy process. New York: McGraw-Hill Inc; 1980.
Harker PT, Vargas LG. The theory of ratio scale estimation: Saaty's analytic hierarchy process. Manag Sci. 1987;33(11):1383–403. https://doi.org/10.1287/mnsc.33.11.1383.
Saaty TL. Decision making for leaders: the analytic hierarchy process for decisions in a complex world. Pittsburgh: RWS Publications; 1992.
Siew LW, Wai CJ, Hoe LW. An empirical study on the selection of fast food restaurants among the undergraduates with AHP model. AJIST. 2016;2(3):15–21.
Wibowo SW, Tielung M. Analytical Hierarchy Process (AHP) approach on consumer preference in franchise fast food restaurant selection in Manado City (Study At: Mcdonald's, Kfc, and A&W). Jurnal EMBA: Jurnal Riset Ekonomi, Manajemen, Bisnis dan Akuntansi. 2016;4(2):22–8. https://doi.org/10.35794/emba.v4i2.12490.
Fibri DL, Frøst MB. Consumer perception of original and modernised traditional foods of Indonesia. Appetite. 2019;133:61–9. https://doi.org/10.1016/j.appet.2018.10.026.
Goral R. Prioritizing the factors which affect the selection of hotels by consumers traveling for vacation with analytical hierarchy process (AHP) method. J Tour Manag Res. 2020;7(1):11–31. https://doi.org/10.18488/journal.31.2020.71.11.31.
Yasami M, Promsivapallop P, Kannaovakun P. Food image and loyalty intentions: Chinese tourists' destination food satisfaction. JCTR. 2020;2:1–21. https://doi.org/10.1080/19388160.2020.1784814.
Fang J, Partovi FY. Criteria determination of analytic hierarchy process using a topic model. Expert Syst Appl. 2020;169: 114306. https://doi.org/10.1016/j.eswa.2020.114306.
Saaty TL. How to make a decision: the analytic hierarchy process. Eur J Oper Res. 1990;48(1):9–26. https://doi.org/10.1016/0377-2217(90)90057-I.
Cheng EWL, Li H. Construction partnering process and associated critical success factors: quantitative investigation. J Manag Eng. 2002;18(4):194–202. https://doi.org/10.1061/(asce)0742-597x(2002)18:4(194).
Ossadnik W, Schinke S, Kaspar RH. Group aggregation techniques for analytic hierarchy process and analytic network process: a comparative analysis. Group Decis Negot. 2016;25(2):421–57. https://doi.org/10.1007/s10726-015-9448-4.
Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39:31–6. https://doi.org/10.1007/BF02291575.
Nunnally JC. Psychometric theory. New York: McGraw-Hill; 1978.
Verbeke W. Impact of communication on consumers' food choices: plenary lecture. Proc Nutr Soc. 2008;67(3):281–8. https://doi.org/10.1017/S0029665108007179.
Fotopoulos C, Krystallis A, Vassallo M, Pagiaslis A. Food Choice Questionnaire (FCQ) revisited. Suggestions for the development of an enhanced general food motivation model. Appetite. 2009;52(1):199–208. https://doi.org/10.1016/j.appet.2008.09.014.
Gagić S, Jovičić A, Erdeji I, Kalenjuk B, D Petrović M. Analiza kvaliteta u vojvođanskim restoranima. Zbornik radova, Konkurentnost turističke destinacije, Beograd, Srbija, 2015.
Maina JW. Analysis of the factors that determine food acceptability. TPI J. 2018;7(5):253–7.
Namkung Y, Jang S. Does food quality really matter in restaurants? Its impact on customer satisfaction and behavioral intentions. J Hosp Tour Res. 2007;31(3):387–409. https://doi.org/10.1177/1096348007299924.
Riblji paprikaš [Internet]. [cited 2021 April 2]. https://www.ravnoplov.rs/riblji-paprikas-kultno-jelo-gornjeg-podunavlja.
Rindflajš [Internet]. [cited 2021 April 2]. https://lepaisrecna.mondo.rs/Recepti/Recepti-ostalo/a28896/RINFLAJS-JEPRAVI-SPECIJALITET-IZ-VOJVODINE-kuvano-meso-i-povrce-u-sosu-od-paradajzaprijaju-stomaku-RECEPT.html.
Perkelt. [Internet]. [cited 2021 April 3]. https://ukusivojvodine.rs/perkelt/.
Ćuretina sa mlincima. [Internet]. [cited 2021 April 2]. http://nedeljnikafera.net/tradicionalno-vojvodjansko-jelo-potpuno-ozivelo-curetina-sa-mlincima-zagospodarila-restoranskim-kuhinjama-ali-i-domovima-gurmana/.
The authors received no financial support for the research, authorship, and/or publication of this article.
Department of Geography, Tourism and Hotel Management, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 3, 21000, Novi Sad, Serbia
Ivana Blešić & Miroslav Vujičić
Institute of Sports, Tourism and Service, South Ural State University, 76 Lenin Ave, Chelyabinsk, Russia, 454080
Ivana Blešić, Marko D. Petrović, Tatiana Tretiakova & Julia Syromiatnikova
Geographical Institute "Jovan Cvijić" SASA, Djure Jakšića St. 9, 11000, Belgrade, Serbia
Marko D. Petrović
Faculty of Tourism and Hospitality Management, University Singidunum, Danijelova 32, 11000, Belgrade, Serbia
Tamara Gajić
Ivana Blešić
Tatiana Tretiakova
Miroslav Vujičić
Julia Syromiatnikova
The authors read and approved the final manuscript.
Correspondence to Ivana Blešić.
See Figs. 5 , 6, 7, 8, and 9.
Riblji paprikaš, fish stew, has its gastronomic roots in Hungarian cuisine. It is made of various kinds of fish, but carp is the most frequent and irreplaceable ingredient. Besides carp, pieces of pike, catfish, or sterlet can be added, to taste. Besides fish, the stew also contains chopped onions, hot and sweet peppers, salt, and cooked tomato. All the ingredients are put in a pot hung on a tripod (in the past, the pots were made of copper), covered with cold water, and the pot is placed over an open fire. The fire itself is also very important, because the success of the whole job depends on its intensity. The best fire for fish stew is made of dry and soft Danube wood, such as purple willow or poplar. The dish is never stirred in the pot; instead, the pot is occasionally swung so that its contents cook evenly. For serving, the pot is placed on the table and the dish is served with home-made "yellow" noodles, made of white wheat flour and eggs [67]
Rindflajš, a real Vojvodina dish, originates from German cuisine. In the literal translation from German, rindflajš (Rindfleisch) means beef. Rindflajš is made of beef or chicken meat cooked in a soup, and it is most often eaten with horseradish, but in some parts it is also served with special sauces made of tomato, dill, or sour cherries. Besides the cooked meat, this dish also contains vegetables: potatoes, carrots, and celery [68]
Perkelt is a traditional Hungarian goulash which has been made for generations in Vojvodina as well (mainly in Bačka), and it is characterized by richness of taste and smell. Perkelt is a drier variety of goulash, prepared from meat stewed in a thick sauce with lots of onions. Perkelt is made of boneless veal and pork, and the meat is not mashed or strained [69]
Ćuretina sa mlincima (Turkey with pasta tatters). Mlinci (pasta tatters) are a traditional Vojvodina type of dough made of wheat flour, eggs, water, and salt. It is rolled out to a thickness of about 1 mm and then baked in the furnace or on a hot stove top. The turkey meat is cut into chunks, seasoned, and fried. Then it is poured over with a mixture of neutral and sour cream. Meanwhile, the mlinci are broken into pieces, poured over with hot soup, and left for a couple of minutes to absorb the liquid and soften. Then they are drained and added to the meat and cream. Finally, everything is baked in the oven [70]
Sekelji gulaš, Szekely goulash, is one of the most famous dishes made of pickled cabbage. It used to represent a traditional winter dish eaten primarily by the poor. There are two stories connected with the origin of this dish. The first one says that the dish was named after the Szekely Hungarians who live in today's Romania, while the other says that it was named after Imre Szekely, a city public notary in Budapest. According to that story, Mr Szekely used to have lunch every day in a small restaurant called "Music Clock." One day he came later than usual and, not knowing what else to do, the staff collected all the remaining cabbage and a few spoonfuls of pork stew. They mixed it, topped it with sour cream, and served it to Mr Szekely. He liked it so much that he asked for the same meal to be prepared again. After the chef and the owner of the restaurant tried the meal, they were convinced that a true specialty was born. This means that this unique Hungarian dish was made purely by accident [4]
Blešić, I., Petrović, M.D., Gajić, T. et al. Application of the analytic hierarchy process in the selection of traditional food criteria in Vojvodina (Serbia). J. Ethn. Food 8, 20 (2021). https://doi.org/10.1186/s42779-021-00096-2
Consumer motives
Experts' choice
Analytic hierarchy process (AHP)
\begin{definition}[Definition:Lemniscate of Bernoulli/Cartesian Definition]
The '''lemniscate of Bernoulli''' is the curve defined by the Cartesian equation:
:$\paren {x^2 + y^2}^2 = 2 a^2 \paren {x^2 - y^2}$
\end{definition}
\begin{definition}[Definition:Time/Unit/Day]
The '''day''' is a derived unit of time.
{{begin-eqn}}
{{eqn | o =
| r = 1
| c = '''day'''
}}
{{eqn | r = 24
| c = hours
}}
{{eqn | r = 60 \times 24
| rr= = 1440
| c = minutes
}}
{{eqn | r = 60 \times 60 \times 24
| rr= = 86\, 400
| c = seconds
}}
{{end-eqn}}
\end{definition}
# Python basics for simulations
Before we dive into implementing multi-code simulations in Python, let's first review some Python basics that will be useful throughout this textbook. If you're already familiar with Python, feel free to skip this section.
Python is a powerful and versatile programming language that is widely used in scientific computing and data analysis. It has a simple and intuitive syntax, making it easy to learn and read. In this section, we'll cover some fundamental concepts and techniques that will be essential for implementing simulations.
To get started, let's briefly discuss some of the key Python packages that we'll be using in this textbook:
- **math**: This package provides common mathematical functions like square root and exponential. We'll use it for various calculations in our simulations.
- **random**: Python's random package is a pseudo-random number generator. It allows us to generate random numbers and perform random sampling, which is often necessary in simulations.
- **matplotlib.pyplot**: This package is used for producing professional-quality graphics and visualizations. We'll use it to visualize the results of our simulations.
These packages are just a few examples of the many powerful tools available in Python for scientific computing and simulation development.
Here's an example of how we can use the `math` package to calculate the square root of a number:
```python
import math
x = 16
sqrt_x = math.sqrt(x)
print(sqrt_x)
```
The output of this code will be `4.0`, which is the square root of 16.
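Simulations often need reproducible runs, and the `random` package supports this through seeding. A minimal sketch (the seed value 123 is an arbitrary choice):

```python
import random

random.seed(123)  # any fixed integer makes the pseudo-random sequence repeatable
first_run = [random.random() for _ in range(3)]

random.seed(123)  # re-seeding restarts the exact same sequence
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True: both runs produce identical numbers
```

Fixing the seed is handy while debugging a simulation, since every run then produces the same trajectory.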
## Exercise
Import the `random` package and use it to generate a random integer between 1 and 10 (inclusive). Assign the result to the variable `random_num`.
### Solution
```python
import random
random_num = random.randint(1, 10)
print(random_num)
```
In addition to these packages, Python also has a rich ecosystem of libraries and frameworks that can be used for simulations. One such library is SimX, which is a general-purpose library for developing parallel discrete-event simulations in Python. It provides a high-level API for building simulations and supports parallel execution on multi-core systems.
SimX is currently under active development and new features and bug fixes are regularly updated on the project code site at [github.com/sim-x](https://github.com/sim-x). It has been used to model a variety of complex systems at Los Alamos, including the performance of computational physics codes on supercomputers and modeling of a modern financial reserve system.
Another example is TADSim, a simulation of the execution of a molecular dynamics simulation program. It was developed at Los Alamos as part of an effort to better understand and optimize the execution of parallel programs on high-performance computing clusters.
These libraries and frameworks provide powerful tools for simulation development and can greatly simplify the implementation process.
Now that we have a basic understanding of Python and the packages we'll be using, let's move on to the next section and learn about data analysis and manipulation in Python.
# Data analysis and manipulation in Python
Python provides several powerful libraries for data analysis, such as NumPy and Pandas. These libraries allow us to work with large datasets efficiently and perform various operations on the data.
**NumPy** is a fundamental package for scientific computing in Python. It provides support for large, multi-dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays. NumPy is widely used in the scientific community for tasks such as data analysis, simulation, and machine learning.
**Pandas** is a library built on top of NumPy that provides high-performance, easy-to-use data structures and data analysis tools. It is particularly useful for working with structured data, such as tables or time series data. Pandas allows us to manipulate, analyze, and visualize data efficiently.
Here's an example of how we can use NumPy to perform basic operations on arrays:
```python
import numpy as np
# Create an array
a = np.array([1, 2, 3, 4, 5])
# Perform operations on the array
mean_a = np.mean(a)
sum_a = np.sum(a)
print(mean_a)
print(sum_a)
```
The output of this code will be:
```
3.0
15
```
## Exercise
Import the Pandas library and use it to create a DataFrame from a dictionary of data. The dictionary should contain the following key-value pairs:
- 'name': ['Alice', 'Bob', 'Charlie']
- 'age': [25, 30, 35]
- 'city': ['New York', 'London', 'Paris']
Assign the resulting DataFrame to the variable `df`.
### Solution
```python
import pandas as pd
data = {'name': ['Alice', 'Bob', 'Charlie'],
'age': [25, 30, 35],
'city': ['New York', 'London', 'Paris']}
df = pd.DataFrame(data)
print(df)
```
In addition to NumPy and Pandas, there are many other libraries and tools available in Python for data analysis and manipulation. Some examples include:
- **Matplotlib**: A plotting library that provides a wide variety of visualization options. It is often used in combination with NumPy and Pandas to create informative and visually appealing plots.
- **SciPy**: A library that provides many scientific computing functions, such as numerical integration, optimization, and interpolation. It complements NumPy and provides additional functionality for scientific simulations.
- **Scikit-learn**: A machine learning library that provides tools for data mining and data analysis. It includes a wide range of algorithms for tasks such as classification, regression, clustering, and dimensionality reduction.
These libraries and tools can be used in combination to perform complex data analysis tasks and gain insights from the data.
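As a small illustration of how these pieces combine, the sketch below filters a Pandas DataFrame and then summarizes a column with NumPy. The column names and values are made up for the example:

```python
import numpy as np
import pandas as pd

# A small made-up dataset of simulation runs
df = pd.DataFrame({
    'run': [1, 2, 3, 4],
    'steps': [100, 250, 175, 300],
    'converged': [True, True, False, True],
})

# Select only the runs that converged
converged = df[df['converged']]

# Summarize the 'steps' column with NumPy
mean_steps = np.mean(converged['steps'].to_numpy())
print(mean_steps)
```

The boolean mask `df['converged']` keeps three of the four rows, so the mean is taken over the steps 100, 250, and 300.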
Now that we have learned about data analysis and manipulation in Python, let's move on to the next section and explore numerical methods for simulations.
# Numerical methods for simulations
One of the most widely used numerical methods is the Euler method, which is used to approximate solutions to ordinary differential equations (ODEs). ODEs are commonly encountered in simulations, as they describe the rate of change of a system over time.
The Euler method works by approximating the derivative of a function at a given point using the slope of a tangent line. By repeatedly applying this approximation, we can approximate the solution to the ODE over a specified time interval.
To illustrate the Euler method, let's consider a simple example. Suppose we have an ODE that describes the rate of change of a population over time:
$$\frac{dP}{dt} = kP$$
where $P$ is the population, $t$ is time, and $k$ is a constant.
We can approximate the solution to this ODE using the Euler method as follows:
1. Choose an initial population $P_0$ at time $t_0$.
2. Choose a time step $\Delta t$.
3. Repeat the following steps for each time step:
- Calculate the derivative $\frac{dP}{dt}$ at the current time $t$ using the current population $P$.
- Update the population $P$ using the formula $P = P + \frac{dP}{dt} \cdot \Delta t$.
- Update the time $t$ using the formula $t = t + \Delta t$.
By repeating these steps for a specified number of time steps, we can approximate the solution to the ODE over the specified time interval.
Let's implement the Euler method in Python to approximate the solution to the population ODE. We'll start with an initial population of 1000 at time 0, a growth rate k = 0.05, and a time step of 0.1. We'll simulate the population for a time interval of 10.
```python
import numpy as np
import matplotlib.pyplot as plt
def euler_method(P0, k, dt, T):
# Initialize arrays to store population and time
P = [P0]
t = [0]
# Calculate the number of time steps
num_steps = int(T / dt)
# Perform Euler method
for i in range(num_steps):
# Calculate the derivative
dP_dt = k * P[i]
# Update the population and time
P.append(P[i] + dP_dt * dt)
t.append(t[i] + dt)
return P, t
# Set parameters
P0 = 1000
k = 0.05
dt = 0.1
T = 10
# Run Euler method
P, t = euler_method(P0, k, dt, T)
# Plot the population over time
plt.plot(t, P)
plt.xlabel('Time')
plt.ylabel('Population')
plt.title('Population Over Time')
plt.show()
```
The resulting plot shows the population over time, as approximated by the Euler method.
## Exercise
Implement the Euler method to approximate the solution to the following ODE:
$$\frac{dy}{dt} = -y$$
Use an initial value of $y_0 = 1$, a time step of $\Delta t = 0.1$, and a time interval of $T = 5$. Plot the solution over time.
### Solution
```python
import matplotlib.pyplot as plt

def euler_method(y0, dt, T):
# Initialize arrays to store y and t
y = [y0]
t = [0]
# Calculate the number of time steps
num_steps = int(T / dt)
# Perform Euler method
for i in range(num_steps):
# Calculate the derivative
dy_dt = -y[i]
# Update y and t
y.append(y[i] + dy_dt * dt)
t.append(t[i] + dt)
return y, t
# Set parameters
y0 = 1
dt = 0.1
T = 5
# Run Euler method
y, t = euler_method(y0, dt, T)
# Plot the solution over time
plt.plot(t, y)
plt.xlabel('Time')
plt.ylabel('y')
plt.title('Solution of dy/dt = -y')
plt.show()
```
The resulting plot shows the solution to the ODE over time, as approximated by the Euler method.
In addition to the Euler method, there are many other numerical methods available for solving ODEs and other mathematical problems in simulations. Some commonly used methods include the Runge-Kutta method, the Adams-Bashforth method, and the finite difference method.
These methods have different levels of accuracy and computational complexity, and the choice of method depends on the specific problem and requirements of the simulation.
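To give a flavor of a higher-order method, here is a minimal sketch of the classical fourth-order Runge-Kutta (RK4) step, applied to the same population equation dP/dt = kP from above. The function and variable names are chosen for the example:

```python
import math

def rk4_step(f, t, y, dt):
    """Advance y by one step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Apply it to dP/dt = k*P, whose exact solution is P0 * exp(k*t)
k = 0.05
P = 1000.0
t = 0.0
dt = 0.1
for _ in range(100):  # integrate out to t = 10
    P = rk4_step(lambda t, y: k * y, t, P, dt)
    t += dt

exact = 1000.0 * math.exp(k * 10)
error = abs(P - exact)
print(error)  # RK4 tracks the exact solution very closely
```

With the same step size, RK4's global error shrinks like dt to the fourth power, compared to first order for Euler, which is why it is often the default choice when accuracy matters.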
# Object-oriented programming for simulations
Object-oriented programming (OOP) is a programming paradigm that allows us to organize and structure code in a more modular and reusable way. It is particularly useful for simulations, as it allows us to define objects that represent the entities and behaviors of the system being simulated.
In OOP, objects are instances of classes, which are like blueprints for creating objects. A class defines the properties (attributes) and behaviors (methods) that objects of that class will have.
To illustrate OOP in simulations, let's consider a simple example of a particle simulation. In this simulation, we have particles that have a position and velocity, and can move and interact with each other.
We can define a Particle class that represents a particle in the simulation. This class can have attributes like position and velocity, and methods like move and interact.
```python
import numpy as np

class Particle:
    def __init__(self, position, velocity):
        # Store position and velocity as float arrays so that
        # arithmetic like velocity * dt works component-wise
        self.position = np.array(position, dtype=float)
        self.velocity = np.array(velocity, dtype=float)

    def move(self, dt):
        self.position += self.velocity * dt

    def interact(self, other_particle):
        # Define interaction logic here
        pass
```
In this example, the Particle class has an `__init__` method that is called when a new particle object is created. This method initializes the position and velocity attributes of the particle.
The class also has a `move` method that updates the position of the particle based on its velocity and a time step `dt`. The `interact` method is left empty for now, as it will depend on the specific interaction logic between particles.
To create a particle object and use its methods, we can do the following:
```python
# Create a particle object
particle = Particle(position=(0, 0), velocity=(1, 1))
# Move the particle
particle.move(dt=0.1)
# Interact with another particle
other_particle = Particle(position=(1, 1), velocity=(-1, -1))
particle.interact(other_particle)
```
In this example, we create a particle object with an initial position of (0, 0) and velocity of (1, 1). We then move the particle by calling its `move` method with a time step of 0.1. Finally, we create another particle object and interact it with the first particle by calling the `interact` method.
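As one possible way to fill in `interact`, the self-contained sketch below makes two particles swap velocities when they come within a given distance. The swap rule and the `radius` parameter are made up just to show the pattern:

```python
import math

class Particle:
    """Self-contained version for this sketch: positions/velocities as [x, y] lists."""
    def __init__(self, position, velocity):
        self.position = list(position)
        self.velocity = list(velocity)

    def interact(self, other, radius=1.0):
        """Made-up rule: swap velocities when the particles are within `radius`."""
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        if math.hypot(dx, dy) <= radius:
            self.velocity, other.velocity = other.velocity, self.velocity

a = Particle((0, 0), (1, 1))
b = Particle((0.5, 0.5), (-1, -1))
a.interact(b)
print(a.velocity, b.velocity)  # the two particles are close, so velocities swap
```

In a real model, `interact` would instead encode the physics you care about, such as collisions or pairwise forces.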
## Exercise
Define a `Rectangle` class that represents a rectangle in a simulation. The class should have attributes for the width and height of the rectangle, and methods for calculating its area and perimeter.
### Solution
```python
class Rectangle:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self):
return 2 * (self.width + self.height)
```
To calculate the area and perimeter of a rectangle, we can create a `Rectangle` object and call its `area` and `perimeter` methods:
```python
rectangle = Rectangle(width=5, height=3)
rectangle_area = rectangle.area()
rectangle_perimeter = rectangle.perimeter()
```
The `rectangle_area` variable will be assigned the value 15, and the `rectangle_perimeter` variable will be assigned the value 16.
In the next section, we will discuss the design and implementation of simulations, including how to define the entities and behaviors of the system being simulated.
# Simulation design and implementation
1. Define the problem: The first step in designing a simulation is to clearly define the problem you want to solve. This includes identifying the system being simulated, the entities and behaviors of the system, and the goals of the simulation.
2. Choose the appropriate modeling approach: Once the problem is defined, you need to choose the appropriate modeling approach for your simulation. This could be a discrete event simulation, a continuous simulation, an agent-based simulation, or a combination of these approaches.
3. Define the entities and behaviors: Next, you need to define the entities and behaviors of the system being simulated. This involves identifying the key components of the system and how they interact with each other. You can use object-oriented programming to define classes that represent these entities and their behaviors.
4. Implement the simulation logic: Once the entities and behaviors are defined, you can start implementing the simulation logic. This involves writing code that simulates the behaviors of the entities and their interactions. You can use loops, conditionals, and other control structures to control the flow of the simulation.
5. Validate and verify the simulation: After implementing the simulation logic, it is important to validate and verify the simulation. This involves testing the simulation with different inputs and comparing the results to expected outcomes. You can also compare the simulation results to real-world data or other validated models.
6. Optimize the simulation: Once the simulation is validated and verified, you can optimize its performance. This may involve improving the efficiency of the code, parallelizing the simulation, or using other optimization techniques.
7. Document and communicate the simulation: Finally, it is important to document and communicate the simulation. This includes documenting the design and implementation of the simulation, as well as communicating the results and insights gained from the simulation to stakeholders.
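Steps 3 and 4 can be sketched with a tiny toy system. The `Walker` entity and the fixed-step loop below are hypothetical, chosen only to show the shape of a simulation program:

```python
import random

class Walker:
    """A toy entity: a 1-D random-walk position (hypothetical example)."""
    def __init__(self, position=0):
        self.position = position

    def step(self, rng):
        self.position += rng.choice([-1, 1])

def run_simulation(num_walkers=5, num_steps=100, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    walkers = [Walker() for _ in range(num_walkers)]
    for _ in range(num_steps):
        for w in walkers:
            w.step(rng)
    return [w.position for w in walkers]

positions = run_simulation()
print(positions)
```

The same skeleton scales up: richer entity classes for step 3, and a more elaborate loop (events, interactions, data collection) for step 4.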
## Exercise
Consider a simulation of a traffic intersection. What are the entities and behaviors that need to be defined for this simulation?
### Solution
Entities:
- Vehicles
- Traffic lights
- Pedestrians
Behaviors:
- Vehicle movement
- Traffic light switching
- Pedestrian crossing
In the next section, we will discuss how to visualize the results of a simulation using Python.
# Visualization of simulation results
Python provides several libraries for data visualization, including matplotlib, seaborn, and plotly. These libraries allow you to create a wide range of visualizations, including line plots, scatter plots, bar plots, and heatmaps.
To illustrate the visualization of simulation results, let's consider a simple example of a simulation of a population growth. Suppose we have a simulation that models the growth of a population over time. We can use matplotlib to create a line plot of the population size as a function of time.
```python
import matplotlib.pyplot as plt
# Simulated population data
time = [0, 1, 2, 3, 4, 5]
population = [100, 120, 150, 180, 200, 220]
# Create a line plot
plt.plot(time, population)
# Add labels and title
plt.xlabel('Time')
plt.ylabel('Population')
plt.title('Population Growth')
# Show the plot
plt.show()
```
This code will create a line plot with time on the x-axis and population on the y-axis. The plot will show the growth of the population over time.
## Exercise
Consider a simulation of a stock market. What type of visualization would be appropriate for visualizing the simulation results?
### Solution
A line plot or a candlestick chart would be appropriate for visualizing the simulation results of a stock market simulation. A line plot can show the price of a stock over time, while a candlestick chart can show the open, high, low, and close prices of a stock for a given time period.
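The numbers behind such a chart can be prepared without any plotting. For instance, a simple moving average over a series of simulated prices (the price data here is made up):

```python
def moving_average(prices, window):
    """Simple moving average; returns one value per full window."""
    return [sum(prices[i:i + window]) / window
            for i in range(len(prices) - window + 1)]

prices = [10, 11, 12, 11, 13, 14, 13]
ma3 = moving_average(prices, 3)
print(ma3)  # ma3[0] == (10 + 11 + 12) / 3 == 11.0
```

The resulting list could then be passed straight to `plt.plot` as a smoothed overlay on the raw price series.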
In the next section, we will discuss advanced techniques for optimizing simulations.
# Advanced techniques for optimizing simulations
1. Vectorization: Vectorization is a technique that allows you to perform operations on arrays of data instead of individual elements. This can significantly improve the performance of simulations, especially when working with large datasets. The numpy library provides efficient array operations that can be used for vectorization.
2. Parallelization: Parallelization is a technique that allows you to run multiple tasks simultaneously, taking advantage of multi-core processors. This can speed up simulations that can be divided into independent tasks that can be executed in parallel. The multiprocessing library in Python provides tools for parallel computing.
3. Algorithm optimization: Sometimes, the performance of a simulation can be improved by optimizing the underlying algorithms. This may involve using more efficient data structures, reducing the number of computations, or implementing more advanced algorithms. Profiling tools like cProfile can help identify bottlenecks in the code that can be optimized.
4. Memory management: Efficient memory management is important for simulations that work with large datasets. This involves minimizing memory usage, avoiding unnecessary memory allocations, and freeing up memory when it is no longer needed. The memory_profiler library can help identify memory usage patterns and optimize memory management.
5. Caching: Caching is a technique that involves storing the results of expensive computations so that they can be reused later. This can improve the performance of simulations that involve repetitive computations. The functools.lru_cache decorator in Python provides a simple way to implement caching.
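Two of these techniques can be sketched in a few lines: NumPy vectorization versus an element-by-element loop, and memoization with `functools.lru_cache`. The function `expensive` is a stand-in for any costly, repeated computation:

```python
import numpy as np
from functools import lru_cache

# Vectorization: one array expression instead of an element-by-element loop
x = np.arange(1_000_000, dtype=np.float64)
loop_result = sum(v * v for v in x[:10])  # loop version (first 10 elements only)
vec_result = (x[:10] ** 2).sum()          # vectorized version, same answer
print(loop_result == vec_result)

# Caching: repeated calls with the same argument reuse the stored result
@lru_cache(maxsize=None)
def expensive(n):
    return sum(i * i for i in range(n))

expensive(10_000)           # computed once
cached = expensive(10_000)  # served from the cache on the second call
print(cached)
```

On large arrays the vectorized form is typically orders of magnitude faster, because the loop runs in compiled NumPy code rather than the Python interpreter.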
## Exercise
Consider a simulation that involves performing a large number of computations on arrays of data. What optimization techniques could be used to improve the performance of this simulation?
### Solution
The following optimization techniques could be used to improve the performance of a simulation that involves performing a large number of computations on arrays of data:
- Vectorization: Using the numpy library to perform operations on arrays of data instead of individual elements.
- Parallelization: Using the multiprocessing library to run multiple tasks simultaneously, taking advantage of multi-core processors.
- Algorithm optimization: Optimizing the underlying algorithms to reduce the number of computations or use more efficient data structures.
- Memory management: Efficiently managing memory usage to minimize memory allocations and free up memory when it is no longer needed.
- Caching: Caching the results of expensive computations to avoid repetitive computations.
In the next section, we will discuss how to debug and troubleshoot simulations.
# Debugging and troubleshooting simulations
1. Debugging tools: Python provides several tools for debugging, including the built-in pdb module and the popular PyCharm IDE. These tools allow you to set breakpoints, inspect variables, and step through the code to identify and fix issues.
2. Logging: Logging is a technique that involves recording information about the execution of a program for debugging and troubleshooting purposes. The logging module in Python provides a flexible and powerful logging framework that can be used to log messages at different levels of severity.
3. Unit testing: Unit testing is a technique that involves writing small tests for individual units of code to ensure that they work correctly. Unit tests can help identify and fix issues in simulations, and provide a way to verify that the simulation behaves as expected.
4. Error handling: Proper error handling is important for simulations, as it allows you to gracefully handle errors and exceptions that may occur during the execution of the simulation. Python provides a try-except block that can be used to catch and handle exceptions.
5. Code reviews: Code reviews involve having other developers review your code to identify and fix issues. Code reviews can help identify bugs, improve code quality, and ensure that the simulation meets the requirements.
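Logging and error handling can be combined in a single simulation step. The function below and its messages are made up for the example:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("simulation")

def safe_step(population, growth_rate):
    """One growth step that logs its inputs and handles bad values."""
    try:
        if population < 0:
            raise ValueError("population cannot be negative")
        new_population = population * (1 + growth_rate)
        logger.info("step: %s -> %s", population, new_population)
        return new_population
    except ValueError as exc:
        logger.error("bad input: %s", exc)
        return None

ok = safe_step(100, 0.05)
bad = safe_step(-1, 0.05)
print(ok, bad)
```

The log records give a timeline of what the simulation actually did, which is often the fastest route to locating where the results start to diverge from expectations.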
## Exercise
Consider a simulation that is producing incorrect results. What troubleshooting techniques could be used to identify and fix the issue?
### Solution
The following troubleshooting techniques could be used to identify and fix issues in a simulation that is producing incorrect results:
- Debugging tools: Using a debugger like pdb or an IDE like PyCharm to set breakpoints, inspect variables, and step through the code to identify and fix issues.
- Logging: Adding logging statements at different points in the code to record information about the execution of the simulation and identify potential issues.
- Unit testing: Writing unit tests for individual units of code to ensure that they work correctly and identify issues.
- Error handling: Adding proper error handling to catch and handle exceptions that may occur during the execution of the simulation.
- Code reviews: Having other developers review the code to identify and fix issues and ensure that the simulation meets the requirements.
In the next section, we will discuss real-world examples of multi-code simulations in Python.
# Real-world examples of multi-code simulations in Python
1. Traffic simulation: Traffic simulations model the behavior of vehicles, pedestrians, and traffic lights in a road network. These simulations can be used to study traffic flow, congestion, and the impact of different traffic management strategies.
2. Epidemic simulation: Epidemic simulations model the spread of infectious diseases in a population. These simulations can be used to study the effectiveness of different interventions, such as vaccination and social distancing, in controlling the spread of diseases.
3. Financial market simulation: Financial market simulations model the behavior of financial assets, such as stocks and bonds, and the interactions between buyers and sellers. These simulations can be used to study market dynamics, trading strategies, and the impact of different market conditions.
4. Social network simulation: Social network simulations model the behavior of individuals and their interactions in a social network. These simulations can be used to study social dynamics, information diffusion, and the impact of different social network structures.
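As a concrete taste of the epidemic case, here is a minimal discrete-time SIR sketch. The transmission and recovery rates are arbitrary illustrative values, not calibrated parameters:

```python
def sir_step(S, I, R, beta, gamma, N):
    """One discrete-time step of the classic SIR model."""
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

N = 1000.0
S, I, R = 999.0, 1.0, 0.0
beta, gamma = 0.3, 0.1  # arbitrary transmission and recovery rates

for _ in range(160):
    S, I, R = sir_step(S, I, R, beta, gamma, N)

print(round(S + I + R))  # total population is conserved (prints 1000)
```

Real epidemic simulations add structure on top of this core: age groups, contact networks, stochastic transmission, and interventions such as vaccination.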
## Exercise
Choose one of the real-world examples of multi-code simulations mentioned above and describe how it can be implemented in Python.
### Solution
For example, a traffic simulation can be implemented in Python by defining classes for vehicles, pedestrians, and traffic lights, and simulating their behaviors and interactions. The simulation can use object-oriented programming to represent the entities and their behaviors, and use libraries like numpy and matplotlib for data manipulation and visualization. The simulation can be run for different scenarios, such as different traffic volumes and traffic management strategies, to study their impact on traffic flow and congestion.
In the next section, we will discuss future applications and developments in simulation technology.
# Future applications and developments in simulation technology
1. Virtual reality simulations: Virtual reality (VR) simulations provide an immersive and interactive experience that can be used for training, education, and entertainment. VR simulations can simulate real-world environments and allow users to interact with virtual objects and entities.
2. Machine learning simulations: Machine learning simulations can be used to train and evaluate machine learning models. These simulations can generate synthetic data, simulate different scenarios, and evaluate the performance of machine learning algorithms.
3. Internet of Things (IoT) simulations: IoT simulations can be used to simulate the behavior of interconnected devices and systems. These simulations can be used to study the impact of different IoT architectures, protocols, and applications, and optimize the performance and efficiency of IoT systems.
4. Simulation in the cloud: Cloud computing provides scalable and on-demand resources that can be used for running large-scale simulations. Simulation in the cloud allows researchers and developers to access powerful computing resources and collaborate on simulations in real-time.
## Exercise
Choose one of the future applications or developments in simulation technology mentioned above and describe its potential impact.
### Solution
For example, virtual reality simulations have the potential to revolutionize training and education. VR simulations can provide realistic and immersive training experiences for a wide range of industries, such as healthcare, aviation, and manufacturing. These simulations can simulate complex and dangerous scenarios that are difficult or expensive to replicate in the real world, allowing users to practice and learn in a safe and controlled environment. VR simulations can also be used for educational purposes, providing interactive and engaging learning experiences that enhance understanding and retention of complex concepts.
In the next section, we will conclude the textbook and discuss next steps for further learning.
# Conclusion and next steps
In this textbook, we have covered the fundamentals of implementing multi-code simulations in Python. We have discussed the key concepts and techniques for designing, implementing, and optimizing simulations, as well as debugging and troubleshooting simulations. We have also explored real-world examples of multi-code simulations and discussed future applications and developments in simulation technology.
To further your learning in simulation technology, we recommend exploring more advanced topics, such as parallel and distributed simulations, agent-based modeling, and simulation optimization. You can also explore domain-specific simulation libraries and frameworks, such as SimPy for discrete event simulations, Mesa for agent-based simulations, and SALib for sensitivity analysis of simulations.
Simulations are a powerful tool for understanding and predicting the behavior of complex systems. By mastering the techniques and tools for implementing simulations in Python, you can gain valuable insights and make informed decisions in a wide range of domains, from healthcare and finance to transportation and social sciences.
Thank you for reading this textbook. We hope you have found it informative and engaging. Good luck with your simulation projects, and happy coding!
# 1. Setting Up the Environment
Before we dive into the world of power programming with Mathematica, we need to set up our environment. This section will guide you through the necessary steps to get started.
# 1.1. Installing Mathematica
To begin, you'll need to install Mathematica on your computer. Mathematica is a powerful computational software that allows you to perform complex calculations, visualize data, and develop algorithms. It is available for multiple operating systems, including Windows, macOS, and Linux.
Here's how you can install Mathematica:
1. Visit the Wolfram website at [www.wolfram.com/mathematica](https://www.wolfram.com/mathematica).
2. Click on the "Products" tab and select "Mathematica" from the dropdown menu.
3. Choose the version of Mathematica that is compatible with your operating system.
4. Follow the on-screen instructions to download and install Mathematica.
Once the installation is complete, you'll be ready to start power programming with Mathematica!
## Exercise
Install Mathematica on your computer following the steps outlined in the previous section.
### Solution
This exercise does not require a solution as it is dependent on the learner's actions.
# 1.2. Interactive Notebook vs. Script Mode
Mathematica offers two main modes of operation: the interactive notebook interface and the script mode. Let's take a closer look at each of these modes.
The interactive notebook interface is the default mode when you open Mathematica. It provides a document-like environment where you can write and execute code, create visualizations, and add explanatory text. Notebooks are organized into cells, which can contain code, text, or other types of content. You can run individual cells or the entire notebook.
On the other hand, the script mode allows you to write and execute Mathematica code in a traditional programming environment. In this mode, you write your code in a plain text file with the ".m" extension. Script mode is useful when you want to automate tasks, run code from the command line, or integrate Mathematica with other programming languages.
Both modes have their advantages and are suited for different purposes. Throughout this textbook, we will primarily use the interactive notebook interface, as it provides a more interactive and visual learning experience. However, we will also explore some aspects of script mode to give you a well-rounded understanding of Mathematica.
Let's say you want to calculate the sum of the first 100 natural numbers using Mathematica. Here's how you would do it in both the interactive notebook interface and the script mode:
Interactive notebook interface:
1. Create a new notebook by clicking on "File" > "New" > "Notebook" in the Mathematica menu.
2. In the first cell of the notebook, type the following code:
```mathematica
Sum[i, {i, 1, 100}]
```
3. Press Shift + Enter to execute the cell. The result, 5050, will be displayed below the code.
Script mode:
1. Open a text editor and create a new file with the ".m" extension (e.g., "sum.m").
2. In the file, type the following code:
```mathematica
Print[Sum[i, {i, 1, 100}]]
```
3. Save the file and navigate to the directory where it is located using the command line.
4. Run the script by typing "math -script sum.m" (replace "sum.m" with the actual filename).
5. The result, 5050, will be printed in the command line.
## Exercise
Try calculating the sum of the first 50 even numbers using both the interactive notebook interface and the script mode.
### Solution
This exercise does not require a solution as it is dependent on the learner's actions.
# 1.3. Setting Up Wolfram Cloud
In addition to the local installation of Mathematica, you can also use the Wolfram Cloud to access Mathematica from any device with an internet connection. The Wolfram Cloud provides a web-based interface that allows you to create and run Mathematica notebooks, collaborate with others, and deploy applications.
To set up Wolfram Cloud, follow these steps:
1. Visit the Wolfram Cloud website at [www.wolframcloud.com](https://www.wolframcloud.com).
2. Sign up for a Wolfram ID if you don't already have one. This will give you access to the Wolfram Cloud.
3. Once you have a Wolfram ID, log in to the Wolfram Cloud using your credentials.
4. You can now create and run Mathematica notebooks directly in the Wolfram Cloud.
Using Wolfram Cloud has several advantages. It allows you to work on your projects from any device without the need for a local installation of Mathematica. It also provides cloud storage for your notebooks, making it easy to access and share your work with others.
Throughout this textbook, we will provide instructions for both the local installation of Mathematica and the use of Wolfram Cloud. Feel free to choose the option that best suits your needs.
Let's say you want to calculate the square root of 2 using Wolfram Cloud. Here's how you would do it:
1. Open your web browser and navigate to [www.wolframcloud.com](https://www.wolframcloud.com).
2. Log in to the Wolfram Cloud using your Wolfram ID.
3. Click on "New Notebook" to create a new Mathematica notebook in the cloud.
4. In the first cell of the notebook, type the following code:
```mathematica
Sqrt[2]
```
5. Press Shift + Enter to execute the cell. The result, approximately 1.41421, will be displayed below the code.
## Exercise
Try calculating the cube root of 27 using Wolfram Cloud.
### Solution
This exercise does not require a solution as it is dependent on the learner's actions.
# 2. Basic Mathematica Syntax
Mathematica uses a notebook interface, where you can enter and execute code in individual cells. Each cell can contain multiple lines of code, and the output of the code is displayed below the cell.
To execute a cell, you can either click the "Run" button in the toolbar or press Shift + Enter. Mathematica will evaluate the code in the cell and display the result.
Here's an example of a Mathematica notebook cell:
```mathematica
2 + 3
```
When you execute this cell, Mathematica will calculate the sum of 2 and 3 and display the result, which is 5, below the cell.
## Exercise
Create a new Mathematica notebook cell and calculate the product of 4 and 5.
### Solution
```mathematica
4 * 5
```
The result, 20, will be displayed below the cell.
# 2.1. Input and Output
In Mathematica, you can assign values to variables using the assignment operator `=`. The variable on the left side of the operator will be assigned the value on the right side.
```mathematica
x = 5
```
This code assigns the value 5 to the variable `x`. You can then use the variable in subsequent calculations.
Mathematica also supports mathematical operations like addition, subtraction, multiplication, and division. You can perform these operations using the standard mathematical operators `+`, `-`, `*`, and `/`.
```mathematica
x + 3
```
This code adds 3 to the value of `x` and returns the result.
Let's say we want to calculate the area of a rectangle with width 4 and height 6. We can assign the values to variables and then use those variables in the calculation.
```mathematica
width = 4
height = 6
area = width * height
```
The variable `area` will be assigned the value 24, which is the result of multiplying the width and height.
## Exercise
Create a new Mathematica notebook cell and calculate the perimeter of a square with side length 10.
### Solution
```mathematica
sideLength = 10
perimeter = 4 * sideLength
```
The variable `perimeter` will be assigned the value 40, which is the result of multiplying the side length by 4.
# 2.2. Comments
In Mathematica, you can add comments to your code to provide explanations or notes. Comments are ignored by the interpreter and are not executed as code.
Comments in Mathematica start with the `(*` characters and end with the `*)` characters. Anything between these characters is considered a comment and is not executed.
```mathematica
(* This is a comment *)
```
You can also add a comment at the end of a line of code by appending `(*`, the comment text, and the closing `*)`.
```mathematica
x = 5 (* Assign the value 5 to the variable x *)
```
Comments are useful for documenting your code and making it easier to understand for yourself and others.
Let's say we have a complex calculation that requires multiple steps. We can add comments to explain each step and make the code more readable.
```mathematica
(* Step 1: Calculate the square of x *)
xSquared = x^2
(* Step 2: Add 3 to the result *)
result = xSquared + 3
```
The comments help to clarify the purpose of each line of code and make it easier to follow the logic of the calculation.
## Exercise
Add a comment to the following code to explain what it does:
```mathematica
y = x + 5
```
### Solution
```mathematica
(* Add 5 to the value of x and assign the result to y *)
y = x + 5
```
# 2.3. Variables and Naming Conventions
In Mathematica, variables are used to store values that can be referenced and manipulated in your code. Variables can hold a wide range of data types, including numbers, strings, and lists.
When naming variables in Mathematica, there are a few rules and conventions to follow:
1. Variable names must start with a letter and can contain letters and digits. Underscores are not allowed: in Mathematica the underscore is reserved for patterns (for example, `x_` means "a pattern named x"). Avoid special characters and spaces in variable names.
2. Variable names are case-sensitive. This means that `x` and `X` are considered different variables.
3. Avoid names that clash with built-in symbols, which all begin with a capital letter (for example, `If`, `While`, and `Function`). A common convention is to begin your own variable names with a lowercase letter to prevent such clashes.
4. Choose descriptive variable names that reflect the purpose of the variable. This makes your code more readable and easier to understand.
Here are some examples of valid variable names:
```mathematica
x = 5
myVariable = "Hello"
listOfNumbers = {1, 2, 3, 4, 5}
```
And here are some examples of invalid variable names:
```mathematica
2x = 10 (* Variable name cannot start with a number *)
my variable = "World" (* Variable name cannot contain spaces *)
my_variable = 7 (* Underscores are reserved for patterns *)
```
## Exercise
Which of the following variable names are valid in Mathematica? Select all that apply.
1. `myVariable`
2. `123abc`
3. `If`
4. `listOfNumbers`
5. `my_variable`
### Solution
1. `myVariable` - Valid
2. `123abc` - Invalid (Variable name cannot start with a number)
3. `If` - Invalid (Clashes with the built-in `If` symbol)
4. `listOfNumbers` - Valid
5. `my_variable` - Invalid (Underscores are reserved for patterns)
# 2.4. Basic Mathematical Functions
Mathematica provides a wide range of built-in mathematical functions that you can use in your code. These functions allow you to perform common mathematical operations, such as addition, subtraction, multiplication, division, exponentiation, and more.
Here are some examples of basic mathematical functions in Mathematica:
- Addition: The `+` operator is used to add two numbers together. For example, `2 + 3` evaluates to `5`.
- Subtraction: The `-` operator is used to subtract one number from another. For example, `5 - 2` evaluates to `3`.
- Multiplication: The `*` operator is used to multiply two numbers together. For example, `2 * 3` evaluates to `6`.
- Division: The `/` operator is used to divide one number by another. For example, `6 / 3` evaluates to `2`.
- Exponentiation: The `^` operator is used to raise a number to a power. For example, `2^3` evaluates to `8`.
In addition to these basic mathematical functions, Mathematica also provides functions for trigonometry, logarithms, square roots, absolute values, and more. These functions can be used to perform more complex mathematical calculations.
Here are some examples of using built-in mathematical functions in Mathematica:
```mathematica
Sin[0] (* Returns the sine of 0 *)
Exp[1] (* Returns the exponential function e^1 *)
Log[10] (* Returns the natural logarithm of 10 *)
Sqrt[9] (* Returns the square root of 9 *)
Abs[-5] (* Returns the absolute value of -5 *)
```
## Exercise
Evaluate the following expressions using the appropriate mathematical functions in Mathematica:
1. The sine of 45 degrees.
2. The square root of 16.
3. The natural logarithm of 1.
4. The absolute value of -10.
### Solution
1. `Sin[45 Degree]`
2. `Sqrt[16]`
3. `Log[1]`
4. `Abs[-10]`
# 3. Data Visualization
Data visualization is an important aspect of data analysis and exploration. It allows us to understand and communicate patterns, trends, and relationships in our data. Mathematica provides powerful tools for creating visualizations, ranging from simple plots to complex graphs and charts.
In this section, we will explore the different ways to visualize data in Mathematica. We will cover basic plotting functions, creating graphs and charts, customizing visualizations, and even creating 3D plots and animations.
Let's get started!
### Plotting Functions
One of the simplest ways to visualize data in Mathematica is by using plotting functions. These functions allow us to create various types of plots, such as line plots, scatter plots, bar plots, and more.
Here are some commonly used plotting functions in Mathematica:
- `Plot`: This function is used to create line plots. It takes a mathematical function as input and plots the function over a specified range of values.
- `ListPlot`: This function is used to create scatter plots. It takes a list of data points as input and plots the points on a coordinate system.
- `BarChart`: This function is used to create bar plots. It takes a list of data values as input and plots the values as bars.
- `Histogram`: This function is used to create histograms. It takes a list of data values as input and plots the values as bars, where the height of each bar represents the frequency of the corresponding data value.
These are just a few examples of the plotting functions available in Mathematica. Each plotting function has its own set of options and parameters that allow you to customize the appearance of the plot.
Here are some examples of using plotting functions in Mathematica:
```mathematica
Plot[Sin[x], {x, 0, 2 Pi}] (* Plots the sine function over the range 0 to 2 Pi *)
ListPlot[{{1, 1}, {2, 4}, {3, 9}, {4, 16}}] (* Plots the data points (1, 1), (2, 4), (3, 9), and (4, 16) *)
BarChart[{1, 2, 3, 4}] (* Plots the values 1, 2, 3, and 4 as bars *)
Histogram[{1, 1, 2, 3, 3, 3, 4, 4, 4, 4}] (* Plots the frequency distribution of the values 1, 1, 2, 3, 3, 3, 4, 4, 4, and 4 *)
```
## Exercise
Create a line plot of the function $y = x^2$ over the range $x = -5$ to $x = 5$.
### Solution
```mathematica
Plot[x^2, {x, -5, 5}]
```
# 3.2. Creating Graphs and Charts
In addition to basic plotting functions, Mathematica also provides functions for creating more complex graphs and charts. These functions allow us to visualize data in a variety of ways, such as bar graphs, pie charts, scatter plots, and more.
Here are some commonly used functions for creating graphs and charts in Mathematica:
- `BarChart`: This function is used to create bar graphs. It takes a list of data values as input and plots the values as bars.
- `PieChart`: This function is used to create pie charts. It takes a list of data values as input and plots the values as slices of a pie.
- `ScatterChart`: This function is used to create scatter plots. It takes a list of data points as input and plots the points on a coordinate system.
- `BubbleChart`: This function is used to create bubble charts. It takes a list of data points, where each point has three values (x, y, and size), and plots the points as bubbles, where the size of each bubble represents the third value.
These are just a few examples of the functions available for creating graphs and charts in Mathematica. Each function has its own set of options and parameters that allow you to customize the appearance of the graph or chart.
Here are some examples of using functions for creating graphs and charts in Mathematica:
```mathematica
BarChart[{1, 2, 3, 4}] (* Plots the values 1, 2, 3, and 4 as bars *)
PieChart[{1, 2, 3, 4}] (* Plots the values 1, 2, 3, and 4 as slices of a pie *)
ScatterChart[{{1, 1}, {2, 4}, {3, 9}, {4, 16}}] (* Plots the data points (1, 1), (2, 4), (3, 9), and (4, 16) *)
BubbleChart[{{1, 1, 10}, {2, 4, 20}, {3, 9, 30}, {4, 16, 40}}] (* Plots the data points (1, 1, 10), (2, 4, 20), (3, 9, 30), and (4, 16, 40) as bubbles *)
```
## Exercise
Create a pie chart of the following data values: 10, 20, 30, 40.
### Solution
```mathematica
PieChart[{10, 20, 30, 40}]
```
# 3.3. Customizing Visualizations
Mathematica provides a wide range of options for customizing the appearance of visualizations. These options allow you to change the colors, styles, labels, and other properties of the plot, graph, or chart.
Here are some commonly used options for customizing visualizations in Mathematica:
- `PlotStyle`: This option is used to change the color and style of the plot. It takes a variety of values, such as colors, patterns, and directives.
- `AxesLabel`: This option is used to add labels to the axes of the plot. It takes a list of labels, where the first element is the label for the x-axis and the second element is the label for the y-axis.
- `ChartLabels`: This option is used to add labels to the bars or slices of a graph or chart. It takes a list of labels, where each label corresponds to a bar or slice.
- `PlotRange`: This option is used to change the range of values displayed on the axes of the plot. It takes a list of values, where the first element is the minimum value and the second element is the maximum value.
These are just a few examples of the options available for customizing visualizations in Mathematica. Each visualization function has its own set of options that allow you to customize the appearance of the plot, graph, or chart.
Here are some examples of using options to customize visualizations in Mathematica:
```mathematica
Plot[x^2, {x, -5, 5}, PlotStyle -> Red] (* Plots the function x^2 with a red color *)
ListPlot[{{1, 1}, {2, 4}, {3, 9}, {4, 16}}, AxesLabel -> {"x", "y"}] (* Plots the data points (1, 1), (2, 4), (3, 9), and (4, 16) with labels on the axes *)
BarChart[{1, 2, 3, 4}, ChartLabels -> {"A", "B", "C", "D"}] (* Plots the values 1, 2, 3, and 4 as bars with labels *)
Plot[Sin[x], {x, 0, 10}, PlotRange -> {-2, 2}] (* Plots the sine function with the y-axis fixed to the range -2 to 2 *)
```
## Exercise
Customize the appearance of the line plot of the function $y = x^2$ over the range $x = -5$ to $x = 5$ by changing the color to blue and adding labels to the axes.
### Solution
```mathematica
Plot[x^2, {x, -5, 5}, PlotStyle -> Blue, AxesLabel -> {"x", "y"}]
```
# 3.4. 3D Plots and Animations
In addition to 2D plots and visualizations, Mathematica also provides tools for creating 3D plots and animations. These tools allow us to visualize data in three dimensions and create dynamic visualizations that change over time.
Here are some commonly used functions for creating 3D plots and animations in Mathematica:
- `Plot3D`: This function is used to create 3D surface plots. It takes a mathematical function of two variables as input and plots the function in three dimensions.
- `ParametricPlot3D`: This function is used to create 3D parametric plots. It takes a set of parametric equations as input and plots the equations in three dimensions.
- `ListPlot3D`: This function is used to create 3D scatter plots. It takes a list of data points in three dimensions as input and plots the points in three dimensions.
- `Animate`: This function is used to create animations. It takes a plot or visualization as input and animates the plot or visualization over a specified range of values.
These are just a few examples of the functions available for creating 3D plots and animations in Mathematica. Each function has its own set of options and parameters that allow you to customize the appearance and behavior of the plot or animation.
Here are some examples of using functions for creating 3D plots and animations in Mathematica:
```mathematica
Plot3D[Sin[x + y^2], {x, -3, 3}, {y, -2, 2}] (* Creates a 3D surface plot of the function Sin[x + y^2] *)
ParametricPlot3D[{Cos[t], Sin[t], t}, {t, 0, 2 Pi}] (* Creates a 3D parametric plot of a helix *)
ListPlot3D[{{1, 1, 1}, {2, 2, 2}, {3, 3, 3}, {4, 4, 4}}] (* Creates a 3D scatter plot of the data points (1, 1, 1), (2, 2, 2), (3, 3, 3), and (4, 4, 4) *)
Animate[Plot3D[Sin[x + y^2 + t], {x, -3, 3}, {y, -2, 2}], {t, 0, 2 Pi}] (* Creates an animation of the 3D surface plot of the function Sin[x + y^2 + t] over the range t = 0 to t = 2 Pi *)
```
## Exercise
Create a 3D surface plot of the function $z = \sin(x + y)$ over the range $x = -3$ to $x = 3$ and $y = -2$ to $y = 2$. Customize the appearance of the plot by changing the color to green.
### Solution
```mathematica
Plot3D[Sin[x + y], {x, -3, 3}, {y, -2, 2}, PlotStyle -> Green]
```
# 4. Mathematica Basics
Now that we have covered the basics of setting up the environment and understanding the syntax of Mathematica, let's dive into some fundamental concepts and techniques that will help you become proficient in using Mathematica.
In this section, we will cover the following topics:
- Manipulating expressions: Mathematica provides powerful tools for manipulating algebraic expressions. We will learn how to simplify, expand, and factor expressions, as well as perform operations like differentiation and integration.
- Working with lists and matrices: Lists and matrices are essential data structures in Mathematica. We will learn how to create and manipulate lists and matrices, perform operations like element extraction and replacement, and apply functions to lists and matrices.
- Pattern matching and replacement rules: Pattern matching is a powerful feature of Mathematica that allows us to match and manipulate expressions based on their structure. We will learn how to use pattern matching to extract and transform parts of expressions, as well as define custom functions using replacement rules.
- Using built-in functions and packages: Mathematica provides a vast collection of built-in functions and packages that cover a wide range of mathematical and computational tasks. We will learn how to use these functions and packages to perform common operations and solve complex problems.
By mastering these fundamental concepts and techniques, you will be well-equipped to tackle more advanced topics and explore the full potential of Mathematica.
Let's start by exploring some basic techniques for manipulating expressions in Mathematica.
```mathematica
expr = (x + y)^2 - 4*x*y
Simplify[expr] (* Simplifies the expression *)
Expand[expr] (* Expands the expression *)
Factor[expr] (* Factors the expression *)
D[expr, x] (* Differentiates the expression with respect to x *)
Integrate[expr, x] (* Integrates the expression with respect to x *)
```
## Exercise
Given the expression `expr = x^3 - 2*x^2 + x - 1`, perform the following operations:
1. Simplify the expression.
2. Expand the expression.
3. Factor the expression.
4. Differentiate the expression with respect to x.
5. Integrate the expression with respect to x.
### Solution
```mathematica
expr = x^3 - 2*x^2 + x - 1
Simplify[expr]
Expand[expr]
Factor[expr]
D[expr, x]
Integrate[expr, x]
```
# 4.1. Manipulating Expressions
Manipulating expressions is a fundamental skill in Mathematica. It allows us to simplify, expand, factor, differentiate, and integrate algebraic expressions, among other operations.
Let's start with simplifying expressions. Simplification is the process of reducing an expression to its simplest form. Mathematica provides the `Simplify` function for this purpose.
```mathematica
expr = (x^2 + 2*x + 1)/(x + 1)
Simplify[expr]
```
The `Simplify` function simplifies the expression by applying various algebraic rules and transformations. In this case, it simplifies the expression to `x + 1`.
Another useful function for manipulating expressions is `Expand`. The `Expand` function expands an expression by distributing multiplication over addition and applying other simplification rules.
```mathematica
expr = (x + y)^2
Expand[expr]
```
The `Expand` function expands the expression to `x^2 + 2*x*y + y^2`.
## Exercise
Given the expression `expr = (x + y)^3 - (x - y)^3`, perform the following operations:
1. Simplify the expression.
2. Expand the expression.
### Solution
```mathematica
expr = (x + y)^3 - (x - y)^3
Simplify[expr]
Expand[expr]
```
# 4.2. Working with Lists and Matrices
Let's start with creating a list. A list is a collection of elements enclosed in curly braces `{}`. Elements in a list can be of any type, including numbers, strings, and even other lists.
```mathematica
list = {1, 2, 3, 4, 5}
```
To access individual elements in a list, we can use indexing. Indexing in Mathematica starts from 1, unlike some other programming languages that start from 0.
```mathematica
list[[2]] (* Accesses the second element in the list *)
```
The above code will return `2`, which is the second element in the list.
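Indexing is more flexible than single positions: negative indices count from the end of the list, and spans extract sublists. A quick sketch:

```mathematica
list = {1, 2, 3, 4, 5};
list[[-1]]      (* 5 - the last element *)
list[[2 ;; 4]]  (* {2, 3, 4} - elements 2 through 4 *)
```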
Matrices are two-dimensional arrays with rows and columns. They can be created using nested lists.
```mathematica
matrix = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
```
To access individual elements in a matrix, we can use double indexing.
```mathematica
matrix[[2, 3]] (* Accesses the element in the second row and third column *)
```
The above code will return `6`, which is the element in the second row and third column of the matrix.
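Beyond indexing, Mathematica offers many built-in functions for working with lists and matrices. Here is a brief sketch of a few common ones:

```mathematica
list = {1, 2, 3, 4, 5};
Length[list]       (* 5 - the number of elements *)
Total[list]        (* 15 - the sum of the elements *)
Map[#^2 &, list]   (* {1, 4, 9, 16, 25} - applies a function to each element *)
Append[list, 6]    (* {1, 2, 3, 4, 5, 6} - returns a new list; the original is unchanged *)

matrix = {{1, 2}, {3, 4}};
Transpose[matrix]  (* {{1, 3}, {2, 4}} *)
matrix . {1, 1}    (* {3, 7} - matrix-vector product *)
```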
## Exercise
Given the list `myList = {10, 20, 30, 40, 50}`, perform the following operations:
1. Access the third element in the list.
2. Replace the second element with `25`.
### Solution
```mathematica
myList = {10, 20, 30, 40, 50}
myList[[3]]
myList[[2]] = 25
```
# 4.3. Pattern Matching and Replacement Rules
Pattern matching is a powerful feature in Mathematica that allows us to match and manipulate expressions based on their structure. It is particularly useful when working with complex data structures or performing symbolic computations.
In Mathematica, patterns are used to specify the structure of expressions we want to match. A pattern is a symbolic representation that can match a range of expressions. We can use patterns in functions like `Cases` and `ReplaceAll` to select or transform parts of an expression.
Let's start with a simple example. Suppose we have a list of numbers and we want to select all the even numbers from the list. We can use the pattern `_?EvenQ` to match any expression that passes the `EvenQ` test.
```mathematica
list = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Cases[list, _?EvenQ]
```
The above code will return `{2, 4, 6, 8, 10}`, which are the even numbers in the list.
We can also use patterns in replacement rules to transform expressions. For example, suppose we have a list of strings and we want to replace all occurrences of the string "apple" with "orange". We can use the pattern `"apple"` in a replacement rule.
```mathematica
list = {"apple", "banana", "apple", "cherry"};
list /. "apple" -> "orange"
```
The above code will return `{"orange", "banana", "orange", "cherry"}`, which is the modified list with "apple" replaced by "orange".
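Patterns can also be named, so that the matched part can be reused on the right-hand side of a rule. In the following sketch, `f` and `g` are arbitrary symbols used only for illustration:

```mathematica
exprs = {f[1], f[2], g[3], f[4]};
exprs /. f[x_] :> x^2  (* {1, 4, g[3], 16} - only the f[...] expressions are transformed *)
```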
## Exercise
Given the list `expr = {x + y, x - y, x * y, x / y}`, perform the following operations:
1. Use pattern matching to select all expressions that involve addition.
2. Use a replacement rule to replace all occurrences of `x` with `a`.
### Solution
```mathematica
expr = {x + y, x - y, x * y, x / y}
Cases[expr, _Plus] (* note: x - y is stored internally as Plus[x, -y], so it matches too *)
expr /. x -> a
```
# 4.4. Using Built-in Functions and Packages
Mathematica provides a wide range of built-in functions that cover various areas of mathematics, data analysis, visualization, and more. These functions allow us to perform complex computations and manipulate data efficiently.
To use a built-in function, we simply need to call it with the appropriate arguments. For example, the `Sin` function calculates the sine of a given angle. We can use it like this:
```mathematica
Sin[0]
```
The above code will return `0`, which is the sine of 0.
In addition to built-in functions, Mathematica also provides packages that extend its functionality. Packages are collections of functions and definitions that are organized into separate files. We can load a package using the `Needs` function.
For example, the `ErrorBarPlots` package provides functions for plotting data with error bars. To load the package, we can use the following code:
```mathematica
Needs["ErrorBarPlots`"]
```
Once the package is loaded, we can use its functions just like any other built-in function. Note, however, that many common operations need no package at all. For example, the built-in `Mean` function calculates the mean of a list of numbers. We can use it like this:
```mathematica
data = {1, 2, 3, 4, 5};
Mean[data]
```
The above code will return `3`, which is the mean of the numbers in the `data` list.
## Exercise
1. Use the `Factorial` function to calculate the factorial of 5.
2. Use the built-in `Eigenvalues` function to calculate the eigenvalues of the matrix `{{1, 2}, {3, 4}}`.
### Solution
```mathematica
Factorial[5]
Eigenvalues[{{1, 2}, {3, 4}}]
```
# 5. Functions
Functions are an essential part of programming. They allow us to encapsulate a set of instructions into a single unit that can be called and reused multiple times. In Mathematica, the most common way to define a function is a pattern-based definition using the `:=` operator.
A function definition consists of a name, a list of parameters (written as patterns such as `x_`), and a body. The body contains the instructions that are executed when the function is called. Here's an example:
```mathematica
myFunction[x_, y_] := x^2 + y^2
```
In the above code, `myFunction` is the name of the function, and `x_` and `y_` are the parameters (the underscore makes each one a pattern that matches any argument). The `:=` operator is used to define the function body. In this case, the body calculates the sum of the squares of `x` and `y`.
Once a function is defined, we can call it by passing arguments to it. For example:
```mathematica
result = myFunction[3, 4]
```
The above code will assign the value `25` to the variable `result`, because `3^2 + 4^2 = 25`.
Functions can also have default values for their parameters. This allows us to call the function without providing all the arguments. Here's an example:
```mathematica
myFunction[x_:0, y_:0] := x^2 + y^2
```
In the above code, `x_:0` and `y_:0` are the default values for the parameters `x` and `y`. If no arguments are provided when calling the function, the default values will be used.
```mathematica
result1 = myFunction[3]
result2 = myFunction[]
```
The above code will assign the value `9` to `result1`, because `3^2 + 0^2 = 9`, and `0` to `result2`, because `0^2 + 0^2 = 0`.
## Exercise
1. Define a function called `isEven` that takes an integer as input and returns `True` if the number is even, and `False` otherwise.
2. Call the `isEven` function with the argument `10` and assign the result to a variable called `result`.
### Solution
```mathematica
isEven[n_Integer] := Mod[n, 2] == 0
result = isEven[10]
```
# 5.1. Defining Functions
There are several ways to define functions in Mathematica. The most common way is using the `Function` keyword, as we saw in the previous section. However, there are other ways that offer more flexibility and expressiveness.
One alternative way to define functions is using the `:=` operator. This operator is called the delayed assignment operator and is used to define functions that evaluate their arguments only when they are called. Here's an example:
```mathematica
f[x_] := x^2
```
In the above code, `f` is the name of the function, `x_` is the parameter, and `x^2` is the body. When `f` is called with an argument, the expression `x^2` will be evaluated with the value of `x`.
Another way to define functions is using the `Set` operator (`=`). This operator is used to define functions that evaluate their arguments immediately. Here's an example:
```mathematica
g[x_] = x^2
```
In the above code, `g` is the name of the function, `x_` is the parameter, and `x^2` is the body. The right-hand side is evaluated once, when the definition is made; calling `g` then substitutes the argument into the stored result.
The choice between `:=` and `=` depends on the desired behavior. If you want the body evaluated once, at definition time, use `=`. If you want the body re-evaluated every time the function is called, use `:=`.
Let's compare the behavior of functions defined with `:=` and `=`. Consider the following example:
```mathematica
h[x_] := (Print["Evaluating h"]; x^2)
i[x_] = (Print["Evaluating i"]; x^2)
```
Because `i` is defined with `=`, its body is evaluated immediately: the message "Evaluating i" is printed once, at definition time. The body of `h`, defined with `:=`, is held until the function is called, so "Evaluating h" is printed on every call, while calling `i` prints nothing further.
```mathematica
h[3]
i[3]
```
The output of calling both functions (the definition of `i` has already printed "Evaluating i") will be:
```
Evaluating h
9
9
```
As you can see, the function `i` evaluated its body once, at definition time, while the function `h` evaluates its body each time it is called.
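Another way to see the difference: with `=` the body is captured once, so later changes to variables it references have no effect. A small sketch (the names `n`, `p`, and `q` are illustrative):

```mathematica
n = 10;
p[x_] = x + n;    (* body evaluated now: the definition becomes x + 10 *)
q[x_] := x + n;   (* body held: re-evaluated on each call *)
n = 99;
p[1]  (* 11: still uses the old value of n *)
q[1]  (* 100: picks up the new value of n *)
```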
## Exercise
1. Define a function called `cube` using the `:=` operator that takes a number as input and returns its cube.
2. Call the `cube` function with the argument `5` and assign the result to a variable called `result`.
### Solution
```mathematica
cube[x_] := x^3
result = cube[5]
```
# 5.2. Function Parameters and Return Values
When defining a function with multiple parameters, we can separate the parameters with commas. Here's an example:
```mathematica
f[x_, y_] := x + y
```
In the above code, `f` is the name of the function, `x_` and `y_` are the parameters, and `x + y` is the body. When `f` is called with two arguments, the expressions `x` and `y` will be evaluated with the corresponding values.
To return a value from a function explicitly, we can use the `Return` function. Here's an example:
```mathematica
g[x_, y_] := Return[x + y]
```
In the above code, `g` is the name of the function, `x_` and `y_` are the parameters, and `Return[x + y]` is the body. When `g` is called with two arguments, the expression `x + y` will be evaluated and returned as the result of the function.
Alternatively, we can omit `Return` and simply write the expression that we want to return as the last expression of the function. Here's an example:
```mathematica
h[x_, y_] := x + y
```
In the above code, `h` is the name of the function, `x_` and `y_` are the parameters, and `x + y` is the body. When `h` is called with two arguments, the expression `x + y` will be evaluated and returned as the result of the function.
Let's compare the behavior of functions with and without the `Return` keyword. Consider the following example:
```mathematica
f[x_, y_] := Return[x + y]
g[x_, y_] := x + y
```
When we call `f` and `g` with two arguments, we will get the same result:
```mathematica
f[3, 4]
g[3, 4]
```
The output of the above code will be:
```
7
7
```
As you can see, both functions return the sum of their arguments. The function `f` uses the `Return` function to indicate explicitly that the value should be returned, while the function `g` simply leaves the expression as the last (and only) expression of its body.
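`Return` earns its keep when a function should exit early from a longer body. A minimal sketch, using a hypothetical `safeSqrt` helper:

```mathematica
safeSqrt[x_] := Module[{},
  If[x < 0, Return["undefined for negative input"]];
  Sqrt[x]
]
safeSqrt[-4]  (* "undefined for negative input" *)
safeSqrt[9]   (* 3 *)
```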
## Exercise
1. Define a function called `average` using the `:=` operator that takes two numbers as input and returns their average.
2. Call the `average` function with the arguments `3` and `5` and assign the result to a variable called `result`.
### Solution
```mathematica
average[x_, y_] := (x + y)/2
result = average[3, 5]
```
# 5.3. Higher Order Functions
In Mathematica, functions can be treated as first-class objects, which means that they can be assigned to variables, passed as arguments to other functions, and returned as values from functions. This allows us to define higher-order functions, which are functions that take other functions as arguments or return functions as results.
One common higher-order function in Mathematica is `Map`, which applies a function to each element of a list and returns a new list with the results. The syntax for `Map` is as follows:
```mathematica
Map[f, list]
```
In the above code, `f` is the function to be applied, and `list` is the list of elements. The result is a new list where `f` has been applied to each element of `list`.
Here's an example that uses `Map` to apply the function `f` to each element of a list:
```mathematica
f[x_] := x^2
list = {1, 2, 3, 4, 5}
result = Map[f, list]
```
The output of the above code will be:
```
{1, 4, 9, 16, 25}
```
As you can see, `Map` applies the function `f` to each element of the list `list` and returns a new list with the squared values.
Another common higher-order function in Mathematica is `Apply`, which replaces the head of a list with a function, effectively calling the function with the list's elements as its arguments. The syntax for `Apply` is as follows:
```mathematica
Apply[f, list]
```
In the above code, `f` is the function to be applied, and `list` is the list of elements. The result is `f[e1, e2, ...]`; note that `f` must accept as many arguments as `list` has elements.
Here's an example that uses `Apply` with the built-in `Plus` function to sum the elements of a list:
```mathematica
list = {1, 2, 3, 4, 5}
result = Apply[Plus, list]
```
The output of the above code will be:
```
15
```
As you can see, `Apply[Plus, list]` turns `{1, 2, 3, 4, 5}` into `Plus[1, 2, 3, 4, 5]`, which evaluates to the sum `15`.
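Both functions have common shorthand operators: `/@` for `Map` and `@@` for `Apply`. A quick sketch:

```mathematica
f[x_] := x^2
f /@ {1, 2, 3}      (* same as Map[f, {1, 2, 3}]: {1, 4, 9} *)
Plus @@ {1, 2, 3}   (* same as Apply[Plus, {1, 2, 3}]: 6 *)
```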
Let's compare the behavior of `Map` and `Apply`. Consider the following example:
```mathematica
f[x_] := x^2
list = {1, 2, 3, 4, 5}
result1 = Map[f, list]
result2 = Apply[Plus, Map[f, list]]
```
The output of the above code will be:
```
{1, 4, 9, 16, 25}
55
```
As you can see, `Map` applies the function `f` to each element of `list` and returns a new list of squared values, while `Apply[Plus, ...]` then replaces that list's head with `Plus`, summing the squares to give `55`. (Applying the one-argument function `f` directly to a five-element list would leave `f[1, 2, 3, 4, 5]` unevaluated, since the pattern `f[x_]` does not match five arguments.)
## Exercise
1. Define a function called `double` that takes a number as input and returns its double.
2. Use `Map` to apply the `double` function to each element of the list `{1, 2, 3, 4, 5}` and assign the result to a variable called `result`.
### Solution
```mathematica
double[x_] := 2*x
list = {1, 2, 3, 4, 5}
result = Map[double, list]
```
# 5.4. Debugging and Troubleshooting
Debugging is an important skill for any programmer. It involves identifying and fixing errors, or bugs, in your code. Mathematica provides several tools and techniques for debugging and troubleshooting.
One common technique is to use the `Print` function to display the values of variables at various points in your code. This can help you understand how your code is executing and identify any unexpected behavior.
Here's an example that demonstrates how to use `Print` for debugging:
```mathematica
f[x_] := x^2
g[x_] := x + 1
x = 3
Print["x = ", x]
y = f[x]
Print["y = ", y]
z = g[y]
Print["z = ", z]
```
When you run the above code, you'll see the following output:
```
x = 3
y = 9
z = 10
```
As you can see, the `Print` statements display the values of `x`, `y`, and `z` at different points in the code.
Another useful debugging technique is to use the `Check` function to catch and handle errors. The `Check` function takes two arguments: an expression to evaluate, and a second expression whose value is returned if any messages (such as error or warning messages) are generated during the first evaluation.
Here's an example that demonstrates how to use `Check` for error handling:
```mathematica
f[x_] := 1/x
result1 = Check[f[0], "Error: Division by zero"]
result2 = Check[f[2], "Error: Division by zero"]
Print[result1]
Print[result2]
```
When you run the above code, you'll see the following output:
```
Error: Division by zero
1/2
```
As you can see, the `Check` function catches the error when dividing by zero and returns the specified error message instead.
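Note that `Check` catches messages but does not hide them: the `Power::infy` warning is still printed. To suppress it, you can wrap the evaluation in `Quiet`. A minimal sketch:

```mathematica
f[x_] := 1/x
result = Quiet[Check[f[0], "Error: Division by zero"]]
(* result is "Error: Division by zero", and no warning message is printed *)
```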
Let's consider a more complex example that demonstrates how to use `Print` and `Check` together for debugging and error handling:
```mathematica
f[x_] := x^2
g[x_] := 1/x
x = 0
Print["x = ", x]
y = f[x]
Print["y = ", y]
z = Check[g[y], "Error: Division by zero"]
Print["z = ", z]
```
When you run the above code, you'll see the following output:
```
x = 0
y = 0
z = Error: Division by zero
```
As you can see, the `Print` statements display the values of `x` and `y`, and the `Check` function catches the division-by-zero message, so `z` holds the error string instead of `ComplexInfinity`.
## Exercise
1. Define a function called `divide` that takes two numbers as input and returns their division.
2. Use `Check` to handle the case when dividing by zero. If a division by zero occurs, return the error message "Error: Division by zero".
3. Test your function by dividing `10` by `2` and dividing `5` by `0`.
### Solution
```mathematica
divide[x_, y_] := Check[x/y, "Error: Division by zero"]
result1 = divide[10, 2]
result2 = divide[5, 0]
Print[result1]
Print[result2]
```
# 6. Numerical Computation
Numerical computation is a fundamental aspect of programming. It involves performing calculations and solving mathematical problems using numerical methods. Mathematica provides a wide range of built-in functions and tools for numerical computation.
In this section, we will cover some basic arithmetic operations, solving equations and systems of equations, optimization and curve fitting, and numerical integration and differentiation.
Let's get started!
Basic Arithmetic Operations
Mathematica provides built-in functions for performing basic arithmetic operations such as addition, subtraction, multiplication, and division.
Here are some examples:
```mathematica
x = 5 + 3
y = 10 - 2
z = 4 * 6
w = 12 / 3
```
The variables `x`, `y`, `z`, and `w` will have the values `8`, `8`, `24`, and `4`, respectively.
Mathematica also supports more advanced arithmetic operations such as exponentiation, square root, and logarithms.
Here are some examples:
```mathematica
a = 2^3
b = Sqrt[25]
c = Log[10, 1000]
```
The variables `a`, `b`, and `c` will have the values `8`, `5`, and `3`, respectively.
Let's consider a more complex example that involves combining arithmetic operations:
```mathematica
x = 5 + 3 * 2
y = (10 - 2)^2
z = Sqrt[4 * 9]
w = Log[10, 10^3]
```
The variables `x`, `y`, `z`, and `w` will have the values `11`, `64`, `6`, and `3`, respectively. Note that single-argument `Log` is the natural logarithm; the two-argument form `Log[b, x]` takes the logarithm in base `b`.
## Exercise
1. Calculate the area of a rectangle with length `10` and width `5`.
2. Calculate the volume of a sphere with radius `3`.
3. Calculate the value of the expression $\sqrt{2 + 3^2} - \frac{10}{2}$.
### Solution
```mathematica
area = 10 * 5
volume = (4/3) * Pi * 3^3
expression = Sqrt[2 + 3^2] - 10/2
Print[area]
Print[volume]
Print[expression]
```
# 6.2. Solving Equations and Systems of Equations
Solving equations and systems of equations is a common task in mathematics and science. Mathematica provides powerful tools for solving both symbolic and numerical equations.
To solve a single equation, you can use the `Solve` function. For example, to solve the equation $x^2 - 4 = 0$, you can write:
```mathematica
solutions = Solve[x^2 - 4 == 0, x]
```
The `Solve` function returns a list of solutions. In this case, the solutions are `x = -2` and `x = 2`.
To solve a system of equations, you can use the `Solve` function with multiple equations. For example, to solve the system of equations:
$$
\begin{align*}
x + y &= 5 \\
2x - y &= 1 \\
\end{align*}
$$
you can write:
```mathematica
solutions = Solve[{x + y == 5, 2x - y == 1}, {x, y}]
```
The `Solve` function returns a list of solutions. In this case, the solutions are `x = 2` and `y = 3`.
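When exact symbolic solutions are unwieldy or unavailable, `NSolve` returns numerical approximations instead. A quick sketch:

```mathematica
(* a quintic with no closed-form solution in radicals *)
nsolutions = NSolve[x^5 - x + 1 == 0, x]
(* one real root near x -> -1.1673, plus two complex-conjugate pairs *)
```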
Let's consider a more complex example that involves solving a system of equations with symbolic variables:
```mathematica
solutions = Solve[{a*x + b*y == c, d*x + e*y == f}, {x, y}]
```
The `Solve` function returns a list of solutions. In this case, the solutions are expressed in terms of the variables `a`, `b`, `c`, `d`, `e`, and `f`.
## Exercise
1. Solve the equation $2x^2 - 5x + 2 = 0$.
2. Solve the system of equations:
$$
\begin{align*}
3x + 2y &= 7 \\
4x - 5y &= -2 \\
\end{align*}
$$
### Solution
```mathematica
solutions1 = Solve[2x^2 - 5x + 2 == 0, x]
solutions2 = Solve[{3x + 2y == 7, 4x - 5y == -2}, {x, y}]
Print[solutions1]
Print[solutions2]
```
# 6.3. Optimization and Curve Fitting
Optimization involves finding the maximum or minimum value of a function. Curve fitting involves finding a function that best fits a set of data points.
Mathematica provides several functions for optimization and curve fitting, including `Minimize`, `Maximize`, and `FindFit`.
To find the minimum or maximum value of a function, you can use the `Minimize` and `Maximize` functions, respectively. For example, to find the minimum value of the function $f(x) = x^2 - 4x + 3$, you can write:
```mathematica
minimum = Minimize[x^2 - 4x + 3, x]
```
The `Minimize` function returns a list containing the minimum value and a rule giving the value of `x` that achieves it, here `{-1, {x -> 2}}`.
To find a function that best fits a set of data points, you can use the `FindFit` function. For example, given a set of data points `{{1, 2}, {2, 3}, {3, 4}}`, you can find a linear function that best fits the data points by writing:
```mathematica
fit = FindFit[{{1, 2}, {2, 3}, {3, 4}}, a*x + b, {a, b}, x]
```
The `FindFit` function returns a list of rules giving the values of `a` and `b` that best fit the data points.
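The rules returned by `FindFit` can be substituted back into the model with `/.` to obtain a usable expression. A short sketch continuing the example above:

```mathematica
fit = FindFit[{{1, 2}, {2, 3}, {3, 4}}, a*x + b, {a, b}, x]
model = a*x + b /. fit   (* for these collinear points, effectively 1. + 1. x *)
model /. x -> 5          (* evaluate the fitted model at x = 5 *)
```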
Let's consider a more complex example that involves finding the maximum value of a function and fitting a curve to a set of data points:
```mathematica
maximum = Maximize[-x^2 + 4x - 3, x]
fit = FindFit[{{1, 2}, {2, 3}, {3, 4}}, a*x + b, {a, b}, x]
```
The `Maximize` function returns a list containing the maximum value and a rule for the value of `x` that achieves it, here `{1, {x -> 2}}`. (Note that the sign of the earlier quadratic is flipped: `x^2 - 4x + 3` opens upward and has no finite maximum.) The `FindFit` function returns a list of rules giving the values of `a` and `b` that best fit the data points.
## Exercise
1. Find the minimum value of the function $f(x) = x^3 - 2x^2 + 3x - 4$.
2. Fit a quadratic function to the data points `{{1, 1}, {2, 4}, {3, 9}, {4, 16}}`.
### Solution
```mathematica
minimum = Minimize[x^3 - 2x^2 + 3x - 4, x]
fit = FindFit[{{1, 1}, {2, 4}, {3, 9}, {4, 16}}, a*x^2 + b*x + c, {a, b, c}, x]
Print[minimum]
Print[fit]
```
# 6.4. Numerical Integration and Differentiation
Numerical integration and differentiation are important techniques in mathematics and science. They involve approximating the integral and derivative of a function, respectively.
Mathematica provides several functions for numerical integration and differentiation, including `NIntegrate`, `Integrate`, `D`, and `ND` (the latter from the `NumericalCalculus` package).
To numerically integrate a function, you can use the `NIntegrate` function. For example, to approximate the integral of the function $f(x) = x^2$ from `0` to `1`, you can write:
```mathematica
integral = NIntegrate[x^2, {x, 0, 1}]
```
The `NIntegrate` function returns an approximate value of the integral.
To symbolically integrate a function, you can use the `Integrate` function. For example, to find the integral of the function $f(x) = x^2$, you can write:
```mathematica
integral = Integrate[x^2, x]
```
The `Integrate` function returns the exact value of the integral.
To numerically differentiate a function, you can use the `ND` function, which lives in the `NumericalCalculus` package and must be loaded first. For example, to approximate the derivative of the function $f(x) = x^2$ at `x = 2`, you can write:
```mathematica
Needs["NumericalCalculus`"]
derivative = ND[x^2, x, 2]
```
The `ND` function returns an approximate value of the derivative.
To symbolically differentiate a function, you can use the `D` function. For example, to find the derivative of the function $f(x) = x^2$, you can write:
```mathematica
derivative = D[x^2, x]
```
The `D` function returns the exact value of the derivative.
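The idea behind numerical differentiation can also be sketched directly with a central difference, without any package (the `centralDiff` name is illustrative):

```mathematica
(* symmetric finite difference: (f(x0+h) - f(x0-h)) / (2h) *)
centralDiff[f_, x0_, h_: 10.^-5] := (f[x0 + h] - f[x0 - h])/(2 h)
centralDiff[#^2 &, 2.]   (* ≈ 4., the derivative of x^2 at x = 2 *)
```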
Let's consider a more complex example that involves numerically integrating and differentiating a function:
```mathematica
Needs["NumericalCalculus`"]
integral = NIntegrate[Sin[x], {x, 0, Pi}]
derivative = ND[Sin[x], x, 0]
```
The `NIntegrate` function returns an approximate value of the integral, and the `ND` function returns an approximate value of the derivative.
## Exercise
1. Numerically integrate the function $f(x) = e^x$ from `0` to `1`.
2. Symbolically differentiate the function $f(x) = \sin(x)$.
### Solution
```mathematica
integral = NIntegrate[Exp[x], {x, 0, 1}]
derivative = D[Sin[x], x]
Print[integral]
Print[derivative]
```
# 7. Symbolic Computation
Symbolic computation involves manipulating algebraic expressions and performing symbolic calculations. Mathematica provides powerful tools for symbolic computation, including symbolic algebra, calculus, differential equations, and linear algebra.
To manipulate algebraic expressions, you can use the built-in functions for arithmetic operations, simplification, expansion, factorization, and substitution.
Here are some examples:
```mathematica
expression = (x + y)^2
simplified = Simplify[expression]
expanded = Expand[expression]
factored = Factor[expression]
substituted = expression /. {x -> 2, y -> 3}
```
The variable `expression` represents the algebraic expression $(x + y)^2$. The variables `simplified`, `expanded`, `factored`, and `substituted` represent the simplified, expanded, factored, and substituted forms of the expression, respectively.
To perform calculus with symbolic functions, you can use the built-in functions for differentiation, integration, limits, series expansion, and differential equations.
Here are some examples:
```mathematica
derivative = D[x^2, x]
integral = Integrate[x^2, x]
limit = Limit[1/x, x -> Infinity]
series = Series[Sin[x], {x, 0, 5}]
solution = DSolve[y'[x] == x^2, y[x], x]
```
The variable `derivative` represents the derivative of the function $f(x) = x^2$. The variable `integral` represents the integral of the function $f(x) = x^2$. The variable `limit` represents the limit of the function $f(x) = \frac{1}{x}$ as $x$ approaches infinity. The variable `series` represents the Taylor series expansion of the function $f(x) = \sin(x)$ around $x = 0$. The variable `solution` represents the solution to the differential equation $y'(x) = x^2$.
To perform linear algebra with symbolic matrices and vectors, you can use the built-in functions for matrix operations, matrix decomposition, matrix inversion, matrix eigenvalues and eigenvectors, and matrix equations.
Here are some examples:
```mathematica
matrix = {{1, 2}, {3, 4}}
transpose = Transpose[matrix]
inverse = Inverse[matrix]
eigenvalues = Eigenvalues[matrix]
eigenvectors = Eigenvectors[matrix]
solution = LinearSolve[matrix, {1, 2}]
```
The variable `matrix` represents a symbolic matrix. The variables `transpose`, `inverse`, `eigenvalues`, `eigenvectors`, and `solution` represent the transpose, inverse, eigenvalues, eigenvectors, and solution of the matrix, respectively.
Let's consider a more complex example that involves symbolic algebra, calculus, and linear algebra:
```mathematica
expression = (a*x + b)^2
simplified = Simplify[expression]
derivative = D[expression, x]
integral = Integrate[expression, x]
matrix = {{a, b}, {c, d}}
determinant = Det[matrix]
eigenvalues = Eigenvalues[matrix]
eigenvectors = Eigenvectors[matrix]
solution = LinearSolve[matrix, {1, 2}]
```
The variable `expression` represents the algebraic expression $(ax + b)^2$. The variables `simplified`, `derivative`, and `integral` represent the simplified form, derivative, and integral of the expression, respectively. The variable `matrix` represents a symbolic matrix. The variables `determinant`, `eigenvalues`, `eigenvectors`, and `solution` represent the determinant, eigenvalues, eigenvectors, and solution of the matrix, respectively.
## Exercise
1. Simplify the expression $(x + y)^3$.
2. Find the derivative of the expression $e^{ax}$.
3. Find the integral of the expression $\sin(ax)$.
4. Calculate the determinant of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
### Solution
```mathematica
simplified = Simplify[(x + y)^3]
derivative = D[Exp[a*x], x]
integral = Integrate[Sin[a*x], x]
matrix = {{a, b}, {c, d}}
determinant = Det[matrix]
Print[simplified]
Print[derivative]
Print[integral]
Print[determinant]
```
# 7.1. Manipulating Algebraic Expressions
Manipulating algebraic expressions is a fundamental skill in symbolic computation. Mathematica provides powerful tools for manipulating algebraic expressions, including arithmetic operations, simplification, expansion, factorization, and substitution.
To perform arithmetic operations on algebraic expressions, you can use the built-in functions for addition, subtraction, multiplication, and division.
Here are some examples:
```mathematica
expression1 = x + y
expression2 = x - y
expression3 = x * y
expression4 = x / y
```
The variables `expression1`, `expression2`, `expression3`, and `expression4` represent the algebraic expressions $x + y$, $x - y$, $x \cdot y$, and $\frac{x}{y}$, respectively.
To simplify algebraic expressions, you can use the `Simplify` function. For example, to simplify the expression $(x + y)^2$, you can write:
```mathematica
simplified = Simplify[(x + y)^2]
```
The `Simplify` function returns a simplified form of the expression.
To expand algebraic expressions, you can use the `Expand` function. For example, to expand the expression $(x + y)^2$, you can write:
```mathematica
expanded = Expand[(x + y)^2]
```
The `Expand` function returns an expanded form of the expression.
To factorize algebraic expressions, you can use the `Factor` function. For example, to factorize the expression $x^2 - y^2$, you can write:
```mathematica
factored = Factor[x^2 - y^2]
```
The `Factor` function returns a factored form of the expression.
To substitute values into algebraic expressions, you can use the `ReplaceAll` function. For example, to substitute the values $x = 2$ and $y = 3$ into the expression $x^2 + y^2$, you can write:
```mathematica
substituted = (x^2 + y^2) /. {x -> 2, y -> 3}
```
The `ReplaceAll` function returns the expression with the values substituted.
Let's consider a more complex example that involves manipulating algebraic expressions:
```mathematica
expression1 = (a + b)^2
expression2 = a^2 - b^2
expression3 = a * b + b * a
expression4 = (a + b) / (a - b)
simplified = Simplify[expression1 + expression2 + expression3 + expression4]
expanded = Expand[simplified]
factored = Factor[expanded]
substituted = expanded /. {a -> 2, b -> 3}
```
The variables `expression1`, `expression2`, `expression3`, and `expression4` represent algebraic expressions. The variables `simplified`, `expanded`, `factored`, and `substituted` represent the simplified, expanded, factored, and substituted forms of the expressions, respectively.
## Exercise
1. Perform the arithmetic operations $(x + y) \cdot (x - y)$ and $(x + y)^2 - (x - y)^2$.
2. Simplify the expression $\frac{x^2 - y^2}{x + y}$.
3. Expand the expression $(x + y)(x - y)$.
4. Factorize the expression $x^2 + 2xy + y^2$.
### Solution
```mathematica
expression1 = (x + y) * (x - y)
expression2 = (x + y)^2 - (x - y)^2
expression3 = (x^2 - y^2) / (x + y)
expression4 = (x + y) * (x - y)
simplified = Simplify[expression1 + expression2 + expression3 + expression4]
expanded = Expand[simplified]
factored = Factor[expanded]
Print[expression1]
Print[expression2]
Print[expression3]
Print[expression4]
Print[simplified]
Print[expanded]
Print[factored]
```
# 7.2. Calculus with Symbolic Functions
Calculus involves the study of continuous change and motion. Mathematica provides powerful tools for performing calculus with symbolic functions, including differentiation, integration, limits, series expansion, and differential equations.
To differentiate symbolic functions, you can use the `D` function. For example, to differentiate the function $f(x) = x^2$, you can write:
```mathematica
derivative = D[x^2, x]
```
The `D` function returns the derivative of the function.
To integrate symbolic functions, you can use the `Integrate` function. For example, to integrate the function $f(x) = x^2$, you can write:
```mathematica
integral = Integrate[x^2, x]
```
The `Integrate` function returns the integral of the function.
To find limits of symbolic functions, you can use the `Limit` function. For example, to find the limit of the function $f(x) = \frac{1}{x}$ as $x$ approaches infinity, you can write:
```mathematica
limit = Limit[1/x, x -> Infinity]
```
The `Limit` function returns the limit of the function.
To find series expansions of symbolic functions, you can use the `Series` function. For example, to find the Taylor series expansion of the function $f(x) = \sin(x)$ around $x = 0$, you can write:
```mathematica
series = Series[Sin[x], {x, 0, 5}]
```
The `Series` function returns the series expansion of the function.
To solve differential equations with symbolic functions, you can use the `DSolve` function. For example, to solve the differential equation $y'(x) = x^2$, you can write:
```mathematica
solution = DSolve[y'[x] == x^2, y[x], x]
```
The `DSolve` function returns the solution to the differential equation.
Let's consider a more complex example that involves performing calculus with symbolic functions:
```mathematica
derivative = D[a*x^2 + b*x + c, x]
integral = Integrate[a*x^2 + b*x + c, x]
limit = Limit[1/x, x -> Infinity]
series = Series[Sin[x], {x, 0, 5}]
solution = DSolve[y'[x] == x^2, y[x], x]
```
The variables `derivative`, `integral`, `limit`, `series`, and `solution` represent the derivative, integral, limit, series expansion, and solution of symbolic functions, respectively.
## Exercise
1. Differentiate the function $f(x) = e^{ax}$.
2. Integrate the function $f(x) = \sin(ax)$.
3. Find the limit of the function $f(x) = \frac{1}{x}$ as $x$ approaches `0`.
4. Find the Taylor series expansion of the function $f(x) = \cos(x)$ around $x = 0$.
5. Solve the differential equation $y'(x) = x^2$.
### Solution
```mathematica
derivative = D[Exp[a*x], x]
integral = Integrate[Sin[a*x], x]
limit = Limit[1/x, x -> 0]
series = Series[Cos[x], {x, 0, 5}]
solution = DSolve[y'[x] == x^2, y[x], x]
Print[derivative]
Print[integral]
Print[limit]
Print[series]
Print[solution]
```
# 7.3. Solving Differential Equations
Differential equations are equations that involve derivatives of unknown functions. They are used to model a wide range of phenomena in science and engineering. Mathematica provides powerful tools for solving differential equations symbolically.
To solve ordinary differential equations (ODEs), you can use the `DSolve` function. For example, to solve the differential equation $y'(x) = x^2$, you can write:
```mathematica
solution = DSolve[y'[x] == x^2, y[x], x]
```
The `DSolve` function returns a list of solutions to the differential equation.
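The solution comes back as a list of replacement rules, which you can substitute into `y[x]` to get an explicit expression. A quick sketch:

```mathematica
sol = DSolve[y'[x] == x^2, y[x], x]
yExpr = y[x] /. sol[[1]]   (* x^3/3 + C[1], where C[1] is the integration constant *)
```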
To solve partial differential equations (PDEs), you can use the `DSolve` function with appropriate boundary conditions. For example, to solve the heat equation $u_t = u_{xx}$ subject to the boundary conditions $u(0, t) = 0$ and $u(1, t) = 0$, you can write:
```mathematica
solution = DSolve[{D[u[x, t], t] == D[u[x, t], x, x], u[0, t] == 0, u[1, t] == 0}, u[x, t], {x, t}]
```
The `DSolve` function returns solutions to the partial differential equation when it can find them; for some boundary-value problems it may return a general series solution or leave the input unevaluated.
Let's consider a more complex example that involves solving differential equations:
```mathematica
solution1 = DSolve[y'[x] == x^2, y[x], x]
solution2 = DSolve[{D[u[x, t], t] == D[u[x, t], x, x], u[0, t] == 0, u[1, t] == 0}, u[x, t], {x, t}]
```
The variables `solution1` and `solution2` represent the solutions to the differential equations.
## Exercise
1. Solve the differential equation $y'(x) = \sin(x)$.
2. Solve the wave equation $u_{tt} = c^2 u_{xx}$ subject to the boundary conditions $u(0, t) = 0$ and $u(1, t) = 0$.
### Solution
```mathematica
solution1 = DSolve[y'[x] == Sin[x], y[x], x]
solution2 = DSolve[{D[u[x, t], t, t] == c^2 D[u[x, t], x, x], u[0, t] == 0, u[1, t] == 0}, u[x, t], {x, t}]
Print[solution1]
Print[solution2]
```
# 7.4. Symbolic Linear Algebra
Linear algebra involves the study of vectors, vector spaces, linear transformations, and systems of linear equations. Mathematica provides powerful tools for performing symbolic linear algebra, including matrix operations, matrix decomposition, matrix inversion, matrix eigenvalues and eigenvectors, and matrix equations.
To perform matrix operations, you can use the built-in arithmetic operators. Note that `+`, `-`, `*`, and `/` all act element-wise on matrices; true matrix multiplication uses the `Dot` operator (`.`).
Here are some examples:
```mathematica
matrix1 = {{1, 2}, {3, 4}}
matrix2 = {{5, 6}, {7, 8}}
addition = matrix1 + matrix2
subtraction = matrix1 - matrix2
elementwise = matrix1 * matrix2
product = matrix1 . matrix2
```
The variables `matrix1` and `matrix2` represent matrices; `addition`, `subtraction`, and `elementwise` are computed element by element, while `product` is the usual matrix product, `{{19, 22}, {43, 50}}`.
To decompose matrices, you can use the built-in functions for LU decomposition, QR decomposition, and singular value decomposition.
Here are some examples:
```mathematica
matrix = {{1, 2}, {3, 4}}
lu = LUDecomposition[matrix]
qr = QRDecomposition[matrix]
svd = SingularValueDecomposition[matrix]
```
The variables `matrix`, `lu`, `qr`, and `svd` represent matrices and the results of matrix decompositions.
To invert matrices, you can use the `Inverse` function. For example, to invert the matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, you can write:
```mathematica
matrix = {{1, 2}, {3, 4}}
inverse = Inverse[matrix]
```
The `Inverse` function returns the inverse of the matrix.
To find eigenvalues and eigenvectors of matrices, you can use the `Eigenvalues` and `Eigenvectors` functions. For example, to find the eigenvalues and eigenvectors of the matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, you can write:
```mathematica
matrix = {{1, 2}, {3, 4}}
eigenvalues = Eigenvalues[matrix]
eigenvectors = Eigenvectors[matrix]
```
The `Eigenvalues` function returns a list of eigenvalues, and the `Eigenvectors` function returns a list of eigenvectors.
To solve matrix equations, you can use the `LinearSolve` function. For example, to solve the matrix equation $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \end{pmatrix}$, you can write:
```mathematica
matrix = {{1, 2}, {3, 4}}
solution = LinearSolve[matrix, {5, 6}]
```
The `LinearSolve` function returns the solution to the matrix equation.
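You can verify a `LinearSolve` result by substituting it back into the equation:
```mathematica
matrix = {{1, 2}, {3, 4}};
solution = LinearSolve[matrix, {5, 6}]
(* {-4, 9/2} *)
matrix . solution == {5, 6}
(* True *)
```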
Now that you have learned the basics of symbolic linear algebra in Mathematica, you can apply these concepts to solve more complex problems in mathematics, physics, engineering, and other fields.
# 8. Advanced Topics in Mathematica
Parallel computing is the use of multiple processors or computers to solve a computational problem. Mathematica provides built-in support for parallel computing, allowing you to speed up your computations by distributing the workload across multiple processors or computers. This can be particularly useful for large-scale simulations, data analysis, and optimization problems.
Here is an example of how to use parallel computing in Mathematica:
```mathematica
ParallelTable[Pause[1]; i^2, {i, 1, 10}]
```
This code uses the `ParallelTable` function to calculate the squares of the numbers from 1 to 10 in parallel. The evaluations are distributed across the available parallel kernels, so they can run simultaneously.
## Exercise
Use parallel computing to calculate the sum of the squares of the numbers from 1 to 100.
### Solution
```mathematica
Total[ParallelTable[i^2, {i, 1, 100}]]
```
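Other parallel primitives follow the same pattern; for example, `ParallelMap` applies a function to each element of a list on the available kernels:
```mathematica
squares = ParallelMap[#^2 &, Range[10]]
(* {1, 4, 9, 16, 25, 36, 49, 64, 81, 100} *)
Total[squares]
(* 385 *)
```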
Probability and statistics are important tools in many fields, including mathematics, physics, engineering, and social sciences. Mathematica provides a comprehensive set of functions for probability and statistics, allowing you to perform a wide range of statistical analyses and probability calculations.
Here is an example of how to use probability and statistics functions in Mathematica:
```mathematica
data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Mean[data]
Median[data]
Variance[data]
StandardDeviation[data]
```
This code calculates the mean, median, variance, and standard deviation of a dataset.
## Exercise
Calculate the mean, median, variance, and standard deviation of the following dataset: {3, 5, 2, 7, 4, 6, 1, 9, 8}.
### Solution
```mathematica
data = {3, 5, 2, 7, 4, 6, 1, 9, 8};
mean = Mean[data]
median = Median[data]
variance = Variance[data]
standardDeviation = StandardDeviation[data]
```
Image processing is the analysis and manipulation of images. Mathematica provides a wide range of functions for image processing, allowing you to perform tasks such as image enhancement, image segmentation, image registration, and image analysis.
Here is an example of how to perform image processing in Mathematica:
```mathematica
image = Import["image.jpg"];
grayImage = ColorConvert[image, "Grayscale"];
blurredImage = GaussianFilter[grayImage, 5];
edges = EdgeDetect[blurredImage];
```
This code imports an image, converts it to grayscale, blurs it using a Gaussian filter, and detects edges in the blurred image.
## Exercise
Perform the following image processing operations on the image "image.jpg":
- Convert the image to grayscale
- Apply a Gaussian filter with a standard deviation of 3
- Detect edges in the filtered image
### Solution
```mathematica
image = Import["image.jpg"];
grayImage = ColorConvert[image, "Grayscale"];
filteredImage = GaussianFilter[grayImage, 3];
edges = EdgeDetect[filteredImage];
```
Machine learning and neural networks are powerful tools for solving complex problems in various fields, including image recognition, natural language processing, and data analysis. Mathematica provides a comprehensive set of functions for machine learning and neural networks, allowing you to build and train your own models, as well as apply pre-trained models to your data.
Here is an example of how to use machine learning and neural networks in Mathematica:
```mathematica
trainingData = {{1, 2} -> 3, {2, 3} -> 5, {3, 4} -> 7};
model = NetTrain[NetChain[{LinearLayer[], LinearLayer[]}], trainingData];
prediction = model[{4, 5}];
```
This code trains a neural network model on a small dataset and uses the trained model to make a prediction.
## Exercise
Train a neural network model on the following dataset: {{1, 1} -> 2, {2, 2} -> 4, {3, 3} -> 6}. Use the trained model to make a prediction for the input {4, 4}.
### Solution
```mathematica
trainingData = {{1, 1} -> 2, {2, 2} -> 4, {3, 3} -> 6};
model = NetTrain[NetChain[{LinearLayer[], LinearLayer[]}], trainingData];
prediction = model[{4, 4}];
```
# 9. Importing and Exporting Data
Importing and exporting data is a common task in data analysis and scientific computing. Mathematica provides a wide range of functions for importing and exporting data in various formats, including text files, CSV files, Excel files, image files, and more.
Here is an example of how to import and export data in Mathematica:
```mathematica
data = Import["data.csv"];
Export["output.csv", data];
```
This code imports data from a CSV file named "data.csv" and exports the data to a new CSV file named "output.csv".
## Exercise
Import the data from the file "data.txt" and export it to a new file named "output.txt".
### Solution
```mathematica
data = Import["data.txt"];
Export["output.txt", data];
```
In addition to importing and exporting data from files, Mathematica also provides functions for working with databases, web scraping, and working with APIs. These functions allow you to retrieve data from external sources, manipulate it, and store it in Mathematica for further analysis.
Here is an example of how to work with databases in Mathematica:
```mathematica
Needs["DatabaseLink`"];
conn = OpenSQLConnection["DatabaseName"];
data = SQLSelect[conn, "TableName"];
CloseSQLConnection[conn];
```
This code opens a connection to a database, retrieves data from a table named "TableName", and closes the connection.
## Exercise
Write code to retrieve data from a MySQL database named "mydatabase" and a table named "mytable".
### Solution
```mathematica
Needs["DatabaseLink`"];
conn = OpenSQLConnection[JDBC["MySQL", "localhost:3306/mydatabase"]];
data = SQLSelect[conn, "mytable"];
CloseSQLConnection[conn];
```
Web scraping is the process of extracting data from websites. Mathematica provides functions for web scraping, allowing you to retrieve data from web pages, parse HTML or XML, and extract the desired information.
Here is an example of how to perform web scraping in Mathematica:
```mathematica
url = "https://www.example.com";
xml = Import[url, "XMLObject"];
data = Cases[xml, _String, Infinity];
```
This code imports the page as a symbolic XML expression, extracts all text strings from it, and stores them in a variable named "data".
## Exercise
Write code to retrieve the HTML content of the web page "https://www.example.com" and extract all links from the HTML.
### Solution
```mathematica
url = "https://www.example.com";
links = Import[url, "Hyperlinks"];
```
Data cleaning and manipulation are important steps in the data analysis process. Mathematica provides functions for cleaning and manipulating data, allowing you to perform tasks such as filtering, sorting, aggregating, and transforming data.
Here is an example of how to clean and manipulate data in Mathematica:
```mathematica
data = Import["data.csv"];
filteredData = Select[data, #[[2]] > 0 &];
sortedData = SortBy[filteredData, First];
aggregatedData = GroupBy[sortedData, First -> Last];
transformedData = Map[#^2 &, aggregatedData];
```
This code imports data from a CSV file, filters the data to keep only rows where the second column is greater than 0, sorts the data by the first column, aggregates the data by the first column, and transforms the data by squaring each value.
## Exercise
Perform the following data cleaning and manipulation operations on the dataset in the file "data.csv":
- Filter the data to keep only rows where the third column is less than 10
- Sort the data by the second column in descending order
- Aggregate the data by the first column and calculate the mean of the second column for each group
- Transform the data by taking the square root of each value
### Solution
```mathematica
data = Import["data.csv"];
filteredData = Select[data, #[[3]] < 10 &];
sortedData = SortBy[filteredData, -#[[2]] &];
aggregatedData = GroupBy[sortedData, First -> Last, Mean];
transformedData = Map[Sqrt, aggregatedData];
```
# 8.2. Probability and Statistics
Probability and statistics are fundamental concepts in data analysis and decision-making. Mathematica provides a comprehensive set of functions for probability and statistics, allowing you to perform various statistical calculations, generate random numbers, and visualize probability distributions.
Here is an example of how to perform basic probability and statistics calculations in Mathematica:
```mathematica
data = {1, 2, 3, 4, 5};
mean = Mean[data];
median = Median[data];
variance = Variance[data];
standardDeviation = StandardDeviation[data];
```
This code calculates the mean, median, variance, and standard deviation of a dataset.
## Exercise
Calculate the mean, median, variance, and standard deviation of the dataset {10, 15, 20, 25, 30}.
### Solution
```mathematica
data = {10, 15, 20, 25, 30};
mean = Mean[data];
median = Median[data];
variance = Variance[data];
standardDeviation = StandardDeviation[data];
```
In addition to basic calculations, Mathematica provides functions for more advanced probability and statistics concepts, such as hypothesis testing, probability distributions, and regression analysis.
Here is an example of how to perform hypothesis testing in Mathematica:
```mathematica
data = {1, 2, 3, 4, 5};
result = TTest[data, 0, "HypothesisTestData"];
testStatistic = result["TestStatistic"];
pValue = result["PValue"];
```
This code performs a one-sample t-test to determine whether the mean of the dataset is significantly different from a specified value (here 0), and extracts the test statistic and p-value.
## Exercise
Perform a hypothesis test on the dataset {10, 15, 20, 25, 30} to determine if the mean is significantly different from 20.
### Solution
```mathematica
data = {10, 15, 20, 25, 30};
result = TTest[data, 20, "HypothesisTestData"];
testStatistic = result["TestStatistic"];
pValue = result["PValue"];
```
Probability distributions are mathematical functions that describe the likelihood of different outcomes in an experiment or random process. Mathematica provides functions for working with a wide range of probability distributions, including the normal distribution, binomial distribution, and exponential distribution.
Here is an example of how to work with probability distributions in Mathematica:
```mathematica
dist = NormalDistribution[0, 1];
randomVariable = RandomVariate[dist, 100];
mean = Mean[randomVariable];
variance = Variance[randomVariable];
```
This code creates a normal distribution with mean 0 and standard deviation 1, generates 100 random variables from the distribution, and calculates the mean and variance of the random variables.
## Exercise
Create a binomial distribution with parameters n = 10 and p = 0.5. Generate 100 random variables from the distribution and calculate the mean and variance.
### Solution
```mathematica
dist = BinomialDistribution[10, 0.5];
randomVariable = RandomVariate[dist, 100];
mean = Mean[randomVariable];
variance = Variance[randomVariable];
```
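Distributions can also be manipulated symbolically: `PDF`, `CDF`, `Mean`, and `Variance` return closed-form expressions when applied to a distribution object rather than to samples:
```mathematica
dist = NormalDistribution[0, 1];
PDF[dist, x]
(* E^(-x^2/2)/Sqrt[2 Pi] *)
{Mean[dist], Variance[dist]}
(* {0, 1} *)
```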
Regression analysis is a statistical technique for modeling the relationship between a dependent variable and one or more independent variables. Mathematica provides functions for performing linear regression, nonlinear regression, and time series analysis.
Here is an example of how to perform linear regression in Mathematica:
```mathematica
data = {{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}};
model = LinearModelFit[data, x, x];
slope = model["BestFitParameters"][[2]];
intercept = model["BestFitParameters"][[1]];
```
This code performs linear regression on a dataset and calculates the slope and intercept of the best-fit line.
## Exercise
Perform linear regression on the dataset {{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}} and calculate the slope and intercept of the best-fit line.
### Solution
```mathematica
data = {{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}};
model = LinearModelFit[data, x, x];
slope = model["BestFitParameters"][[2]];
intercept = model["BestFitParameters"][[1]];
```
# 8.3. Image Processing
Image processing is a field of study that focuses on analyzing and manipulating digital images. Mathematica provides a wide range of functions for image processing, allowing you to perform tasks such as image filtering, image enhancement, and image segmentation.
Here is an example of how to perform image filtering in Mathematica:
```mathematica
image = Import["image.jpg"];
filteredImage = MedianFilter[image, 3];
```
This code imports an image from a file, applies a median filter with radius 3, and assigns the filtered image to `filteredImage`.
## Exercise
Import an image from a file and apply a Gaussian filter with a standard deviation of 2 to the image.
### Solution
```mathematica
image = Import["image.jpg"];
filteredImage = GaussianFilter[image, 2];
```
In addition to image filtering, Mathematica provides functions for image enhancement, such as adjusting brightness and contrast, sharpening, and denoising.
Here is an example of how to enhance an image in Mathematica:
```mathematica
image = Import["image.jpg"];
enhancedImage = ImageAdjust[image, {0, 0.5}];
```
This code imports an image from a file and adjusts the brightness and contrast of the image to enhance its appearance.
## Exercise
Import an image from a file and apply a sharpening filter to the image.
### Solution
```mathematica
image = Import["image.jpg"];
sharpenedImage = Sharpen[image];
```
Image segmentation is the process of partitioning an image into multiple segments to simplify its representation or to extract useful information. Mathematica provides functions for various image segmentation techniques, such as thresholding, edge detection, and region growing.
Here is an example of how to perform image segmentation in Mathematica:
```mathematica
image = Import["image.jpg"];
binaryImage = Binarize[image];
```
This code imports an image from a file and converts it to a binary image using a thresholding technique.
## Exercise
Import an image from a file and perform edge detection on the image.
### Solution
```mathematica
image = Import["image.jpg"];
edges = EdgeDetect[image];
```
# 8.4. Machine Learning and Neural Networks
Machine learning is a branch of artificial intelligence that focuses on developing algorithms and models that can learn from and make predictions or decisions based on data. Mathematica provides a powerful set of functions for machine learning tasks, including data preprocessing, model training, and model evaluation.
Here is an example of how to train a machine learning model in Mathematica:
```mathematica
data = Most[#] -> Last[#] & /@ Import["data.csv"];
trainingData = RandomSample[data, Round[0.8 * Length[data]]];
testingData = Complement[data, trainingData];
model = Classify[trainingData, Method -> "RandomForest"];
```
This code imports a dataset from a CSV file, converts each row into an input -> output rule, splits the rules into training and testing sets, and trains a random forest classifier on the training set.
## Exercise
Import a dataset from a CSV file and train a support vector machine (SVM) model using the dataset.
### Solution
```mathematica
data = Most[#] -> Last[#] & /@ Import["data.csv"];
model = Classify[data, Method -> "SupportVectorMachine"];
```
Once a machine learning model is trained, it can be used to make predictions or decisions on new, unseen data. Mathematica provides functions for evaluating the performance of a model, such as computing accuracy, precision, recall, and F1 score.
Here is an example of how to evaluate a machine learning model in Mathematica:
```mathematica
measurements = ClassifierMeasurements[model, testingData];
accuracy = measurements["Accuracy"];
precision = measurements["Precision"];
recall = measurements["Recall"];
f1score = measurements["F1Score"];
```
This code computes various performance metrics for a machine learning model using the testing data.
## Exercise
Evaluate the performance of the SVM model trained in the previous exercise using the testing data.
### Solution
```mathematica
measurements = ClassifierMeasurements[model, testingData];
accuracy = measurements["Accuracy"];
precision = measurements["Precision"];
recall = measurements["Recall"];
f1score = measurements["F1Score"];
```
# 9. Importing and Exporting Data
Importing and exporting data is a common task in data analysis and manipulation. Mathematica provides functions for reading and writing data in various file formats, such as CSV, Excel, JSON, and more. These functions make it easy to work with external data sources and integrate them into your Mathematica workflows.
Here is an example of how to import data from a CSV file in Mathematica:
```mathematica
data = Import["data.csv"];
```
This code imports the data from a CSV file named "data.csv" and assigns it to the variable `data`.
## Exercise
Import data from a JSON file named "data.json" and assign it to a variable `data`.
### Solution
```mathematica
data = Import["data.json"];
```
Once data is imported into Mathematica, you can perform various operations on it, such as filtering, transforming, and analyzing the data. Mathematica provides a wide range of functions for data manipulation, allowing you to easily extract the information you need and perform complex computations.
Here is an example of how to filter and transform data in Mathematica:
```mathematica
filteredData = Select[data, #[[2]] > 0 &];
transformedData = Map[{#[[1]], #[[2]]^2} &, filteredData];
```
This code filters the data to only include rows where the second column is greater than 0, and then squares the values in the second column.
## Exercise
Filter the imported JSON data to only include rows where the value of the "age" field is greater than 18. Then, transform the filtered data by adding 5 to the value of the "score" field.
### Solution
```mathematica
data = Import["data.json", "RawJSON"];
filteredData = Select[data, #["age"] > 18 &];
transformedData = Map[Append[#, "score" -> #["score"] + 5] &, filteredData];
```
# 9.1. Working with Different File Formats
Mathematica supports a wide range of file formats for importing and exporting data. This allows you to work with data from different sources and in different formats, making it easier to integrate external data into your Mathematica workflows.
Here is an example of how to import and export data in different file formats in Mathematica:
```mathematica
importedData = Import["data.csv"];
Export["data.xlsx", importedData];
Export["data.json", importedData];
```
This code imports data from a CSV file, and then exports it to an Excel file and a JSON file.
## Exercise
Import data from a JSON file named "data.json", and then export it to a CSV file named "data_exported.csv" and an Excel file named "data_exported.xlsx".
### Solution
```mathematica
importedData = Import["data.json"];
Export["data_exported.csv", importedData];
Export["data_exported.xlsx", importedData];
```
In addition to common file formats, Mathematica also provides functions for working with databases. This allows you to connect to and query databases directly from Mathematica, making it easy to work with large datasets and perform complex database operations.
Here is an example of how to connect to a MySQL database and query data in Mathematica:
```mathematica
conn = OpenSQLConnection[
JDBC["MySQL", "localhost:3306/database"],
"Username" -> "username",
"Password" -> "password"
];
query = "SELECT * FROM table";
data = SQLExecute[conn, query];
CloseSQLConnection[conn];
```
This code connects to a MySQL database running on localhost, executes a SQL query to select all rows from a table, and assigns the result to the variable `data`.
## Exercise
Connect to a PostgreSQL database running on localhost, execute a SQL query to select the first 10 rows from a table, and assign the result to a variable `data`.
### Solution
```mathematica
conn = OpenSQLConnection[
JDBC["PostgreSQL", "localhost:5432/database"],
"Username" -> "username",
"Password" -> "password"
];
query = "SELECT * FROM table LIMIT 10";
data = SQLExecute[conn, query];
CloseSQLConnection[conn];
```
# 9.2. Working with Databases
In addition to importing and exporting data from file formats, Mathematica also provides powerful capabilities for working with databases. This allows you to connect to and interact with databases directly from Mathematica, making it easy to work with large datasets and perform complex database operations.
To work with databases in Mathematica, you first need to establish a connection to the database. Mathematica supports a variety of database systems, including MySQL, PostgreSQL, SQLite, and more. You can use the `OpenSQLConnection` function to establish a connection by specifying the appropriate JDBC driver and connection details.
Here is an example of how to connect to a MySQL database and execute a simple query in Mathematica:
```mathematica
conn = OpenSQLConnection[
JDBC["MySQL", "localhost:3306/database"],
"Username" -> "username",
"Password" -> "password"
];
query = "SELECT * FROM table";
result = SQLExecute[conn, query];
CloseSQLConnection[conn];
```
In this example, we establish a connection to a MySQL database running on `localhost` with the database name `database`. We provide the username and password for authentication. Then, we execute a SQL query to select all rows from a table and assign the result to the variable `result`. Finally, we close the connection using the `CloseSQLConnection` function.
## Exercise
Connect to a PostgreSQL database running on `localhost` with the database name `database`. Execute a SQL query to select the first 10 rows from a table and assign the result to a variable `result`. Close the database connection.
### Solution
```mathematica
conn = OpenSQLConnection[
JDBC["PostgreSQL", "localhost:5432/database"],
"Username" -> "username",
"Password" -> "password"
];
query = "SELECT * FROM table LIMIT 10";
result = SQLExecute[conn, query];
CloseSQLConnection[conn];
```
Once you have established a connection to a database, you can perform various operations such as executing queries, inserting data, updating records, and more. Mathematica provides a set of functions for working with databases, including `SQLExecute`, `SQLInsert`, `SQLUpdate`, and `SQLDelete`.
Here is an example of how to insert data into a MySQL database using Mathematica:
```mathematica
conn = OpenSQLConnection[
JDBC["MySQL", "localhost:3306/database"],
"Username" -> "username",
"Password" -> "password"
];
data = {{"John", 25}, {"Jane", 30}, {"Mike", 35}};
SQLInsert[conn, "table", {"name", "age"}, data];
CloseSQLConnection[conn];
```
In this example, we establish a connection to a MySQL database and define a list of data to be inserted. We use the `SQLInsert` function to insert the data into a table named `table` with columns `name` and `age`. Finally, we close the connection.
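`SQLUpdate` and `SQLDelete` follow the same pattern; here is a hedged sketch (table and column names are illustrative), using `SQLColumn` to express the condition:
```mathematica
conn = OpenSQLConnection[
  JDBC["MySQL", "localhost:3306/database"],
  "Username" -> "username",
  "Password" -> "password"
];
(* set age to 26 for rows whose name is "John" *)
SQLUpdate[conn, "table", {"age"}, {26}, SQLColumn["name"] == "John"];
(* delete rows whose age exceeds 34 *)
SQLDelete[conn, "table", SQLColumn["age"] > 34];
CloseSQLConnection[conn];
```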
## Exercise
Connect to a PostgreSQL database running on `localhost` with the database name `database`. Insert the following data into a table named `table` with columns `name` and `age`:
- "Alice", 28
- "Bob", 32
- "Carol", 40
### Solution
```mathematica
conn = OpenSQLConnection[
JDBC["PostgreSQL", "localhost:5432/database"],
"Username" -> "username",
"Password" -> "password"
];
data = {{"Alice", 28}, {"Bob", 32}, {"Carol", 40}};
SQLInsert[conn, "table", {"name", "age"}, data];
CloseSQLConnection[conn];
```
# 9.3. Web Scraping and APIs
Web scraping is the process of extracting data from websites. It can be a powerful tool for gathering information from the web, especially when there is no available API or when the data is not easily accessible through other means.
Mathematica provides built-in functions for web scraping, making it easy to retrieve data from websites. You can use functions like `URLFetch` and `Import` to fetch the HTML content of a webpage, and then use pattern matching and other string manipulation functions to extract the desired data.
Here is an example of how to scrape data from a webpage using Mathematica:
```mathematica
url = "https://www.example.com";
html = URLFetch[url];
data = StringCases[html, "<h1>" ~~ Shortest[content__] ~~ "</h1>" :> content];
```
In this example, we use the `URLFetch` function to fetch the HTML content of the webpage at the specified URL. Then, we use the `StringCases` function with a string pattern to extract the content between `<h1>` tags, which represents the page heading. (`Shortest` keeps the match from running past the first closing tag.)
## Exercise
Scrape the current temperature in your city from a weather website. Assign the temperature to a variable `temperature`.
### Solution
```mathematica
url = "https://www.weather.com";
html = URLFetch[url];
temperature = StringCases[html, "current-temperature" ~~ Shortest[content__] ~~ "°" :> content];
```
In addition to web scraping, Mathematica also provides functions for working with web APIs. APIs (Application Programming Interfaces) allow different software applications to communicate with each other and exchange data.
You can use the `URLFetch` function to send HTTP requests to an API and retrieve the response. The response is usually in a structured format like JSON or XML, which can be easily parsed and processed using Mathematica's built-in functions.
Here is an example of how to make a GET request to a web API using Mathematica:
```mathematica
url = "https://api.example.com/data";
params = {"param1" -> "value1", "param2" -> "value2"};
response = URLFetch[url, "Parameters" -> params];
data = ImportString[response, "JSON"];
```
In this example, we specify the URL of the API and the parameters to be included in the request. We use the `URLFetch` function to send the GET request and retrieve the response. Then, we use the `ImportString` function to parse the response as JSON and assign it to the variable `data`.
## Exercise
Make a GET request to a weather API to retrieve the current temperature in your city. Assign the temperature to a variable `temperature`.
### Solution
```mathematica
url = "https://api.weather.com/current";
params = {"city" -> "your_city", "apikey" -> "your_api_key"};
response = URLFetch[url, "Parameters" -> params];
data = ImportString[response, "RawJSON"];
temperature = data["current"]["temperature"];
```
# 9.4. Data Cleaning and Manipulation
Data cleaning and manipulation are essential steps in the data analysis process. Raw data often contains errors, missing values, and inconsistencies that need to be addressed before meaningful analysis can be performed.
Mathematica provides a wide range of functions for cleaning and manipulating data. These functions allow you to remove duplicates, handle missing values, transform data types, and perform various operations on datasets.
Here are some examples of data cleaning and manipulation tasks in Mathematica:
- Removing duplicates from a list:
```mathematica
data = {1, 2, 3, 2, 4, 1};
cleanedData = DeleteDuplicates[data];
```
- Handling missing values:
```mathematica
data = {1, 2, Missing[], 4, 5};
cleanedData = DeleteMissing[data];
```
- Transforming data types:
```mathematica
data = {"1", "2", "3"};
cleanedData = ToExpression[data];
```
- Performing operations on datasets:
```mathematica
data = {{1, 2}, {3, 4}, {5, 6}};
cleanedData = Map[Total, data];
```
## Exercise
Clean the following dataset by removing duplicates and handling missing values:
```mathematica
data = {1, 2, Missing[], 3, 4, 2, 5};
```
### Solution
```mathematica
cleanedData = DeleteMissing[DeleteDuplicates[data]];
```
In addition to basic data cleaning and manipulation, Mathematica also provides advanced functions for data transformation and aggregation. These functions allow you to reshape datasets, merge multiple datasets, and perform complex calculations on grouped data.
One powerful function for data transformation is `GroupBy`. This function allows you to group data based on one or more variables and perform calculations on each group. You can use functions like `Total`, `Mean`, `Max`, and `Min` to aggregate data within each group.
Here is an example of how to use the `GroupBy` function in Mathematica:
```mathematica
data = {{1, "A"}, {2, "A"}, {3, "B"}, {4, "B"}, {5, "B"}};
groupedData = GroupBy[data, Last -> First, Total];
```
In this example, we have a dataset with two columns: the first column contains numeric values, and the second column contains categories. We use the `GroupBy` function to group the data by the categories and calculate the total of the numeric values within each group.
## Exercise
Group the following dataset by the second column and calculate the mean of the numeric values within each group:
```mathematica
data = {{1, "A"}, {2, "A"}, {3, "B"}, {4, "B"}, {5, "B"}};
```
### Solution
```mathematica
groupedData = GroupBy[data, Last -> First, Mean];
```
In addition to data transformation and aggregation, Mathematica also provides functions for advanced data analysis and visualization. These functions allow you to perform statistical analysis, generate plots and charts, and build predictive models.
You can use functions like `LinearModelFit`, `NonlinearModelFit`, and `FindFit` to fit models to data and make predictions. You can also use functions like `Histogram`, `BoxWhiskerChart`, and `ListPlot` to visualize data and explore patterns and relationships.
Here is an example of how to fit a linear model to data and make predictions using Mathematica:
```mathematica
data = {{1, 2}, {2, 4}, {3, 6}, {4, 8}, {5, 10}};
model = LinearModelFit[data, x, x];
predictions = model /@ {6, 7, 8};
```
In this example, we have a dataset with two columns: the first column contains independent variable values, and the second column contains dependent variable values. We use the `LinearModelFit` function to fit a linear model to the data. Then, we apply the fitted model to new values of the independent variable to make predictions.
## Exercise
Fit a quadratic model to the following dataset and make predictions for the values 6, 7, and 8:
```mathematica
data = {{1, 1}, {2, 4}, {3, 9}, {4, 16}, {5, 25}};
```
### Solution
```mathematica
model = NonlinearModelFit[data, a x^2 + b x + c, {a, b, c}, x];
predictions = model /@ {6, 7, 8};
```
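The plotting functions mentioned in this section operate directly on data or on generated samples; a minimal sketch:
```mathematica
samples = RandomVariate[NormalDistribution[0, 1], 1000];
Histogram[samples]
BoxWhiskerChart[samples]
ListPlot[{{1, 1}, {2, 4}, {3, 9}, {4, 16}, {5, 25}}]
```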
# 10. Interactive Interfaces
Interactive interfaces allow users to interact with Mathematica programs and manipulate variables and parameters in real-time. This can be useful for exploring data, visualizing results, and conducting experiments.
Mathematica provides several functions for creating interactive interfaces, including `Manipulate`, `Dynamic`, and `DynamicModule`. These functions allow you to define controls, such as sliders, buttons, and checkboxes, that users can interact with to change the values of variables and parameters.
Here is an example of how to create an interactive interface using the `Manipulate` function in Mathematica:
```mathematica
Manipulate[
Plot[Sin[a x + b], {x, 0, 2 Pi}],
{a, 1, 10},
{b, 0, 2 Pi}
]
```
In this example, we create a plot of the sine function with adjustable parameters `a` and `b`. The `Manipulate` function automatically generates sliders for the parameters, allowing users to change their values and see the plot update in real-time.
## Exercise
Create an interactive interface that allows users to adjust the color and size of a circle. Display the circle on the screen using the adjusted parameters.
### Solution
```mathematica
Manipulate[
 Graphics[{color, Disk[{0, 0}, size]}, PlotRange -> 12],
 {color, {Red, Green, Blue}, ControlType -> SetterBar},
 {size, 1, 10}
]
```
In addition to basic controls, Mathematica also provides advanced controls for creating more complex interactive interfaces. These controls include `Slider`, `Checkbox`, `RadioButton`, `PopupMenu`, and `InputField`, among others.
You can also use the `Dynamic` function to create dynamic elements in your interface. Dynamic elements update automatically when the values of variables or parameters change, allowing for real-time updates and interactivity.
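A sketch combining several of these controls in a single `Manipulate` (labels and choices are illustrative):
```mathematica
Manipulate[
 Plot[f[freq x], {x, 0, 2 Pi}, GridLines -> If[grid, Automatic, None]],
 {{f, Sin, "Function"}, {Sin, Cos}, ControlType -> PopupMenu},
 {{freq, 1, "Frequency"}, 1, 5, ControlType -> Slider},
 {{grid, False, "Grid lines"}, {False, True}, ControlType -> Checkbox}
]
```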
Here is an example of how to use an explicit `Slider` control in a `Manipulate` interface in Mathematica:
```mathematica
Manipulate[
 Graphics[{color, Disk[{0, 0}, size]}, PlotRange -> 12],
 {color, {Red, Green, Blue, Orange}, ControlType -> SetterBar},
 {{size, 5}, 1, 10, ControlType -> Slider}
]
```
In this example, we create an explicit slider control for adjusting the size of the circle. The initial value of the slider is set to 5 using the `{{size, 5}, 1, 10, ControlType -> Slider}` syntax. `Manipulate` re-evaluates the graphic whenever the slider value changes, so the circle updates in real time.
## Exercise
Modify the previous exercise to include a slider control for adjusting the x-coordinate of the circle. Display the circle on the screen using the adjusted parameters.
### Solution
```mathematica
Manipulate[
 Graphics[{color, Disk[{x, 0}, size]}, PlotRange -> 12],
 {color, {Red, Green, Blue, Orange}, ControlType -> SetterBar},
 {{size, 5}, 1, 10, ControlType -> Slider},
 {{x, 0}, -10, 10, 0.1, Appearance -> "Labeled"}
]
```
When designing interactive interfaces, it is important to consider user interface design principles. These principles include simplicity, consistency, feedback, and discoverability. By following these principles, you can create interfaces that are intuitive, easy to use, and provide a positive user experience.
Once you have created an interactive interface, you can wrap it in the `Deploy` function to present it as a locked-down interface: its controls remain active, but its contents cannot be edited. To let users run it without a full Mathematica installation, you can save it as a CDF (Computable Document Format) file for the free Wolfram Player, or publish it online with `CloudDeploy`.
Here is an example of how to use the `Deploy` function on an interactive interface in Mathematica:
```mathematica
Deploy[
Manipulate[
Plot[Sin[a x + b], {x, 0, 2 Pi}],
{a, 1, 10},
{b, 0, 2 Pi}
]
]
```
In this example, we use the `Deploy` function to lock down the `Manipulate` interface so that users can operate its controls but cannot edit its contents. Saved in CDF format, the same interface can also be opened in the free Wolfram Player by users who do not have Mathematica, making it accessible to a wider audience.
## Exercise
Use the `Deploy` function on the interactive interface from the previous exercise.
### Solution
```mathematica
Deploy[
 Manipulate[
  Graphics[{color, Disk[{x, 0}, size]}, PlotRange -> 12],
  {color, {Red, Green, Blue, Orange}, ControlType -> SetterBar},
  {{size, 5}, 1, 10, ControlType -> Slider},
  {{x, 0}, -10, 10, 0.1, Appearance -> "Labeled"}
 ]
]
```
# 10.2. Dynamic Elements
Dynamic elements are a powerful feature of Mathematica that allow you to create interactive and responsive interfaces. With dynamic elements, you can create sliders, buttons, checkboxes, and other interactive components that update in real-time based on user input.
To create a dynamic element, you can use the `Dynamic` function. This function takes an expression as its argument and automatically updates the expression whenever its dependencies change. For example, you can create a dynamic slider that controls the value of a variable:
```mathematica
DynamicModule[{x = 0},
Slider[Dynamic[x]]
]
```
In this example, the `DynamicModule` function creates a local variable `x` with an initial value of 0. The `Slider` function creates a slider component that is bound to the value of `x` using the `Dynamic` function. As the user moves the slider, the value of `x` is automatically updated.
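To make the slider's effect visible, one can display the value of `x` alongside the control, a minimal variation on the example above:

```mathematica
(* The displayed value updates in real time as the slider moves. *)
DynamicModule[{x = 0},
 Row[{Slider[Dynamic[x]], "  ", Dynamic[x]}]
]
```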
- Create a dynamic checkbox that toggles the visibility of a plot.
```mathematica
DynamicModule[{showPlot = True},
 Column[{
   Checkbox[Dynamic[showPlot]],
   Dynamic[If[showPlot, Plot[Sin[x], {x, 0, 2 Pi}], ""]]
 }]
]
```
In this example, the `Checkbox` component is bound to the `showPlot` variable using `Dynamic`. A second `Dynamic` updates the output based on the value of `showPlot`: if `showPlot` is `True`, the plot is displayed; otherwise an empty string is shown in its place.
## Exercise
Create a dynamic interface that allows the user to input a number and displays the square of that number in real-time.
### Solution
```mathematica
DynamicModule[{x = 0},
 Column[{
   InputField[Dynamic[x], Number],
   Dynamic[x^2]
 }]
]
]
```
# 10.3. User Interface Design Principles
When designing user interfaces in Mathematica, it's important to follow good design principles to create intuitive and user-friendly experiences. Here are some key principles to keep in mind:
1. Keep it simple: Avoid clutter and unnecessary complexity. Focus on the essential elements and functionality.
2. Consistency: Use consistent design elements, such as colors, fonts, and layouts, throughout your interface. This helps users understand and navigate your application.
3. Feedback: Provide clear and immediate feedback to users when they interact with your interface. For example, display a message or change the appearance of a button when it is clicked.
4. Error handling: Anticipate and handle errors gracefully. Display informative error messages and provide suggestions for resolving issues.
5. Accessibility: Design your interface to be accessible to users with disabilities. Use appropriate color contrasts, provide alternative text for images, and ensure keyboard navigation is possible.
6. User testing: Test your interface with real users to gather feedback and identify areas for improvement. Observing how users interact with your application can reveal insights and help you refine your design.
By following these principles, you can create interfaces that are easy to use, visually appealing, and effective in achieving their intended purpose.
## Exercise
Think of an existing application or website that you use frequently. Identify one aspect of its user interface design that you find particularly effective or intuitive. Explain why you find it successful.
### Solution
One aspect of the user interface design of Google Maps that I find particularly effective is the search bar. It is prominently placed at the top of the page, making it easy to find and use. The search bar also provides suggestions as you type, which helps speed up the search process. Additionally, the search results are displayed in a clear and organized manner, making it easy to find the desired location. Overall, the design of the search bar in Google Maps enhances the user experience by providing a simple and efficient way to search for locations.
# 10.4. Deploying Applications
Once you have developed your application in Mathematica, you may want to deploy it so that others can use it without needing to have Mathematica installed. Mathematica provides several options for deploying applications, depending on your needs and the target platform.
One option is to deploy your application as a CDF (Computable Document Format) file. A CDF file can be opened in the free Wolfram Player, so users can run your application without owning Mathematica. You can create one with the `CDFDeploy` function or by saving a notebook in CDF format.
Another option is to deploy your application as a web application using the Wolfram Cloud. The Wolfram Cloud allows you to host and run your applications in the cloud, accessible through a web browser. This makes it easy to share your application with others and allows for collaborative use.
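As a sketch of the cloud option (running this requires a Wolfram Cloud account, and the `Permissions` setting here is an assumption about how you want to share the result), a `Manipulate` can be published with `CloudDeploy`:

```mathematica
(* Publishes the interface as a CloudObject; the returned URL
   can be opened in any web browser. *)
CloudDeploy[
 Manipulate[Plot[Sin[a x], {x, 0, 2 Pi}], {a, 1, 10}],
 Permissions -> "Public"
]
```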
If you want your application to be usable on mobile devices, content deployed to the Wolfram Cloud can be accessed through the Wolfram Cloud mobile apps for iOS and Android. This lets you leverage the power of Mathematica in a mobile-friendly format.
For example, let's say you have developed a data visualization application in Mathematica. You can save it as a CDF file so that others can run it in the free Wolfram Player without needing a Mathematica license. This makes it easy to share your application with colleagues or clients who may not have access to Mathematica.
## Exercise
Think about an application or tool that you would like to develop using Mathematica. Consider the target platform and the deployment options discussed in this section. Write a brief description of how you would deploy your application and why you think it would be the most effective option.
### Solution
I would like to develop a machine learning model for predicting stock prices using Mathematica. Since I want to make the model easily accessible to a wide audience, I would choose to deploy it as a web application using the Wolfram Cloud. This would allow users to access the model through a web browser without needing to install any additional software. It would also make it easy for me to update and maintain the model in the cloud, ensuring that users always have access to the latest version.
# 11. Concluding Remarks and Next Steps
Congratulations! You have completed the Power Programming with Mathematica textbook. By now, you should have a solid understanding of the fundamentals of programming in Mathematica and be able to tackle a wide range of computational problems.
But this is just the beginning of your journey. Mathematica is a powerful tool with countless applications in various fields. To further enhance your skills and explore more advanced topics, here are some next steps you can take:
1. Going Beyond the Basics: Continue to explore the Mathematica documentation and experiment with more advanced features and functions. Challenge yourself to solve complex problems and optimize your code for efficiency.
2. Exploring Other Wolfram Products: Mathematica is just one of the many products offered by Wolfram. Take some time to explore other tools such as Wolfram|Alpha, Wolfram Language, and Wolfram Cloud. Each of these products has its own unique capabilities and can greatly enhance your computational workflow.
3. Contributing to the Wolfram Community: Join the Wolfram Community, an online forum where Mathematica users from around the world come together to share ideas, ask questions, and collaborate on projects. This is a great way to connect with other Mathematica enthusiasts and learn from their experiences.
4. Future of Power Programming with Mathematica: Keep an eye on the latest developments in Mathematica and the Wolfram ecosystem. Wolfram is constantly releasing updates and new features, so staying up-to-date will ensure that you are always at the forefront of computational programming.
Remember, programming is a lifelong learning process. The more you practice and explore, the better you will become. So keep coding, keep experimenting, and most importantly, keep having fun!
Thank you for joining us on this journey, and we wish you all the best in your future endeavors with Mathematica. Happy programming!
# 11.1. Going Beyond the Basics
One area you can dive deeper into is parallel computing. Mathematica has built-in support for parallel processing, allowing you to speed up your computations by distributing them across multiple cores or even multiple machines. We'll explore how to take advantage of parallel computing to solve computationally intensive problems more efficiently.
Another advanced topic to explore is probability and statistics. Mathematica has powerful tools for analyzing and visualizing data, fitting statistical models, and performing hypothesis tests. We'll delve into these features and learn how to apply them to real-world data analysis tasks.
Image processing is another exciting area where Mathematica shines. Whether you're working with digital images, medical scans, or satellite imagery, Mathematica provides a comprehensive set of functions for manipulating and analyzing images. We'll explore techniques such as image filtering, segmentation, and feature extraction.
Finally, we'll touch on machine learning and neural networks. Mathematica has a rich set of functions for training and deploying machine learning models, making it a valuable tool for tasks such as classification, regression, and clustering. We'll explore how to use these functions to build and evaluate predictive models.
By delving into these advanced topics, you'll be equipped with the knowledge and skills to tackle complex computational problems and push the boundaries of what you can achieve with Mathematica. So let's dive in and take your power programming skills to new heights!
# 11.4. Future of Power Programming with Mathematica
Congratulations on completing this textbook on Power Programming with Mathematica! By now, you should have a solid understanding of the fundamentals of Mathematica programming and be equipped with the knowledge to tackle a wide range of computational problems.
But the world of programming is constantly evolving, and Mathematica is no exception. As technology advances and new features are added to Mathematica, there will always be more to learn and explore. In this final section, we'll take a look at the future of power programming with Mathematica and discuss some avenues for further learning and development.
One exciting area to explore is the integration of Mathematica with other technologies and platforms. Mathematica can be seamlessly integrated with other programming languages, such as Python and R, allowing you to leverage the strengths of each language in your projects. Additionally, Mathematica can be used in conjunction with cloud computing platforms, such as Wolfram Cloud, to scale your computations and collaborate with others. Exploring these integrations can open up new possibilities and expand the capabilities of your Mathematica programs.
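As a small taste of the language integration mentioned above (this assumes a Python installation that `ExternalEvaluate` can find on your system):

```mathematica
(* Evaluate a snippet of Python from within Mathematica. *)
ExternalEvaluate["Python", "sum(range(10))"]  (* 45 *)
```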
Another area to consider is the use of Mathematica in data science and machine learning. With the increasing availability of large datasets and the growing demand for data-driven insights, the ability to analyze and model data is becoming increasingly important. Mathematica provides powerful tools for data manipulation, visualization, and statistical analysis, making it a valuable tool for data scientists. By delving deeper into these topics, you can become proficient in using Mathematica for data science and machine learning tasks.
Finally, consider contributing to the Wolfram Community. The Wolfram Community is a vibrant online community of Mathematica users, where you can ask questions, share your projects, and learn from others. By actively participating in the community, you can expand your network, gain valuable insights, and contribute to the development of Mathematica. Whether it's sharing your own projects, providing feedback on others' work, or collaborating on open-source projects, the Wolfram Community is a valuable resource for further learning and development.
As you continue your journey in power programming with Mathematica, remember to stay curious and keep exploring. The possibilities with Mathematica are endless, and there's always something new to discover. Whether you're building advanced visualizations, solving complex mathematical problems, or developing cutting-edge machine learning models, Mathematica provides the tools and flexibility to bring your ideas to life.
Thank you for joining me on this learning adventure. I hope this textbook has equipped you with the knowledge and skills to become a proficient power programmer with Mathematica. Good luck on your future programming endeavors, and remember to keep pushing the boundaries of what's possible with Mathematica! | Textbooks |
How can Ganymede have an Earth-like gravity without us having realized it?
Imagine a small primitive humanoid civilization that developed independently in caves under the surface of Ganymede. We can assume there's enough light that filters through the crystalline surface to support life, and that there's enough air trapped in these caves for them to breathe.
But let's say these people also happen to have a gravity that's slightly greater than Earth's. How could that be the case? And why wouldn't Earth's astronomers have discovered that before now?
Also, are there any other significant factors that would make it difficult for Earth-like life to thrive? Things that would be harder to hand-wave away?
(The SF here is about as hard as cotton candy, so answers don't need to be completely realistic. I'd just like to avoid directly contradicting known observations any more than I need to.)
science-based science-fiction environment earth-like moons
Admiral Jota
Admiral JotaAdmiral Jota
$\begingroup$ Welcome to Worldbuilding. Please take the tour and visit the help center. Can you add a tag explaining which kind of answer are you looking for? Science based or magic? $\endgroup$ – L.Dutch - Reinstate Monica♦ Sep 18 '18 at 14:31
$\begingroup$ Thanks. I added the science-based tag, and I'm checking out the tour right now. $\endgroup$ – Admiral Jota Sep 18 '18 at 14:33
$\begingroup$ Are there aliens or Q involved? Gravity is linked to mass, and both govern orbital characteristics, so unless there is some external force at play, gravity is set for Ganymede. Also, did life evolve there, or was it seeded? Because humans are not inevitable as a product of evolution... $\endgroup$ – bukwyrm Sep 18 '18 at 14:47
$\begingroup$ I'm pretty flexible on the ultimate origins of life there. I could happily go with an "ancient aliens seeded both Earth and Ganymede billions of years ago" theory if that makes things easier. And I'd be fine with suggesting those ancient aliens used some unknown advanced technology or "impossible" materials to intentionally craft an ideal environment there. $\endgroup$ – Admiral Jota Sep 18 '18 at 14:52
$\begingroup$ I think you are left with magic (or technology sufficiently advanced to be indistinguishable). $\endgroup$ – bukwyrm Sep 19 '18 at 4:57
Don't change the mass - change the density.
(Soft science ahead - all hands brace for impact!)
One thing you probably shouldn't do is change Ganymede's mass. That would change its orbit (and its influence on the other moons) in unavoidable and easily observable ways. You'd have to do some elaborate hand-waving to make Ganymede appear to be its apparent mass while having a very different actual mass.
Attaining this by changing the density will still require some hand-waving, but maybe it's allowable in a "cotton-candy-scifi" universe...you can be the judge of that!
To attain earth-like gravity in your caves, we would have to: 1) make Ganymede's core unnaturally dense and its mantle unnaturally light, and 2) place your caves much closer to the core. The handwaving required to make this happen is two-fold:
Firstly, to actually concentrate Ganymede's mass this much in the core, you could not use any naturally occurring material in the known universe. Materials made of conventional elements are too light, and electron- or neutron-degenerate matter would not remain compressed under earthlike gravity--it would explode. So...probably the best soft-sci-fi solution (without invoking artificial gravity generators) is that Ganymede's core contains degenerate matter which for some reason can't decompress. (Is it special matter? Is it in a fluke, naturally occurring stasis field? Handwave!) Similarly, you'll need to handwave a material to compose Ganymede's mantle that is extremely light and somehow looks to our telescopes like a salty ocean. (See https://en.wikipedia.org/wiki/Ganymede_(moon)#Composition ) Which brings us to our next point...
We will need to handwave some of our observations of Ganymede's physical appearance and its moment of inertia factor ( https://en.wikipedia.org/wiki/Moment_of_inertia_factor ). To be honest, I don't think there will be any self-consistent and elegant way to explain away all of the observations we've made of it. But at the very least, try to have a reason for why Ganymede's surface is or appears to be made up of water ice and silicate rock, and why it appears to have a subsurface salty ocean and an iron-rich core.
(To tackle the surface, I would offer this...our extremely light mantle-material is somehow also fairly tough and rigid, and the silicate rock of the surface is mostly layers of dust/fragments from meteor impacts.)
QamiQami
$\begingroup$ Thanks -- this looks very promising! Good food for thought about the exotic materials and physical appearance. (I'll upvote you as soon as I have enough rep to do so.) $\endgroup$ – Admiral Jota Sep 18 '18 at 15:31
$\begingroup$ Yes, I've been thinking about it a bit more since I wrote my answer and a black hole which the people live near was the only thing I could come up with that is even slightly viable - and even that has enough problems to put it firmly in the realm of cotton candy science) $\endgroup$ – Tim B♦ Sep 18 '18 at 16:09
$\begingroup$ Yeah, I hadn't forgotten black holes. They would require a different kind of handwaving, is all (why doesn't the rest of the moon fall into it?). $\endgroup$ – Qami Sep 18 '18 at 17:37
$\begingroup$ @KeithMorrison : that would be true of distances outside of the body's initial surface. If we take a point inside the sun, however--say somewhere at half the sun's radius--then that point would feel a higher gravitational pull if all the sun's mass were compacted to within half the sun's diameter. $\endgroup$ – Qami Sep 18 '18 at 21:19
$\begingroup$ I made some quick calculations on this. The "real radius" of Ganymede would have to be rGanymede = sqrt(massGanymede/massEarth)*rEarth or about 1000 km (instead of 2634 km), which doesn't sound too bad. But density would then have to be about 35000 kg/m^3, more than any normal material on earth. $\endgroup$ – Dubu Sep 19 '18 at 11:19
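The arithmetic in the comment above can be checked directly (using Ganymede's mass ≈ 1.48×10^23 kg, Earth's mass ≈ 5.97×10^24 kg, and Earth's radius ≈ 6371 km; surface gravity scales as M/r²):

```mathematica
(* Radius Ganymede would need for Earth-like surface gravity at its
   actual mass, and the average density that implies. *)
mG = 1.4819*10^23;  (* kg *)
mE = 5.972*10^24;   (* kg *)
rE = 6.371*10^6;    (* m *)
rNeeded = Sqrt[mG/mE] rE         (* ~1.00*10^6 m, i.e. about 1000 km *)
density = mG/(4/3 Pi rNeeded^3)  (* ~3.5*10^4 kg/m^3 *)
```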
I'm sorry, but it's impossible.
Ganymede has 2.4% of Earth's mass. That mass is what generates gravity.
If it had more gravity then it would distort the orbits of the other moons and we would know about it. We know the mass of every substantial body in the solar system (and in fact some of them were detected because they were distorting the orbits of things we did know about and we were able to go look in the right place).
http://solarviews.com/eng/ganymede.htm
You need to come up with a way to achieve your goals that does not involve gravity as we know it. For example: clawed feet to grasp the ice, magnetic boots, or even just bouncing around in ice tunnels.
Tim B♦Tim B
$\begingroup$ Mass combines with radius to make gravity! If it were a lot smaller, it could have an Earth-like gravity. That's why the "variable density" idea is more workable: if there were a much more dense (i.e., small-radius) core and the "people" were near it, G would be much higher. $\endgroup$ – Jeffiekins Sep 18 '18 at 20:32
$\begingroup$ It might be impossible, but not for the reason you mention. See "what if? little planet" $\endgroup$ – Eric Duminil Sep 19 '18 at 7:02
$\begingroup$ @EricDuminil: That's for an asteroid with 2m diameter. Ganymede is way bigger than that. $\endgroup$ – nikie Sep 19 '18 at 9:41
$\begingroup$ @user151841: If you replace Ganymede with a soccer ball of Ganymede's mass, nothing at all would change for Jupiter or the other moons. The only thing that would change would be the gravity at Ganymede's surface. The same thing happens (i.e. nothing) if the sun becomes a black hole. $\endgroup$ – Eric Duminil Sep 19 '18 at 19:51
$\begingroup$ @EricDuminil Well that illuminates the other half of the problem. If Ganymede were the size of a soccer ball, but had the same mass, we wouldn't known it was there, and we would have had to figure out the mystery of the "missing moon" of Jupiter. We would have concluded there was a very dense, very small moon that had to be there, because of its gravitational effects on the orbits. However, if it's large enough that we can see it, we know it's there, and from seeing it, we can understand its orbit, and deduce its mass. There's really no way around it. We know it's there one way or the other. $\endgroup$ – user151841 Sep 24 '18 at 0:34
Let's backtrack and figure out how we know the mass/gravity of Ganymede. (Longer read here).
First off, we need to calculate the radius of the Earth. This has been known to a relatively high degree of accuracy for a very long time. Then we need to measure what Earth's 'gravitational pull', or mass, is, by using an object of a known mass. With this in hand, we can actually calculate the mass of the sun knowing its distance to Earth (again, science has proven this).
From here we can measure the mass of any planet in our solar system with relative ease. With Jupiter's mass now known, we can actually watch Ganymede and calculate its mass as well.
At any point, if there was an error (and rest assured, there isn't one large enough to accomplish what you request), it would affect our measurements of everything down that linked chain. So in your case, we'd have to have grossly mismeasured either Jupiter's orbital movements, or Ganymede's (or likely both to get the increase in mass you need).
Suffice it to say, this is highly unlikely.
On to your other question, check out the amount of radiation on Ganymede. At 8 rem a day, it is definitely going to be wreaking havoc on your earth-like life over time.
jdunlop
ColonelPanicColonelPanic
$\begingroup$ I think there's a mistake in that paper you cite.. You can only get the mass of the sun as described therein. To get the mass of a planet, it needs to have a moon (the method allows you to get the mass of a central body - not those orbiting it). So the mass of Ganymede is estimated - not calculated. Unless it's determined by perturbations of other moons' orbits, but that's hardly trivial. $\endgroup$ – Oscar Bravo Sep 19 '18 at 14:56
If the cave is rotating very quickly, the inhabitants would experience something they perceive as gravity while inside it. Upon stepping outside the cave they would become almost weightless.
Imagine that inside Ganymede there is a sphere that rotates much faster than the moon itself. Why? You'll need a reason: perhaps some other inhabitants wanted an amusement ride but got bored and left, or something smacked into Ganymede just right. Between Ganymede's surface and the sphere maybe there's a layer of something liquidy, with very little friction. Inside that is a rapidly spinning sphere, or at least an annulus (donut). The inhabitants inside there would believe there was gravity pulling outward toward the surface. Getting to them might require some kind of special arrangement, but if that arrangement is airtight, then your air will stay in place too.
jimm101jimm101
$\begingroup$ I like the giant gravitron idea. Perhaps some massive, ancient generation ship crashed, embedding itself into the surface, simultaneously introducing life and an extreme spin. $\endgroup$ – Wazoople Sep 18 '18 at 20:29
$\begingroup$ @Wazoople Or maybe Ganymede is the ship, and some ring material has glommed on over time... $\endgroup$ – jimm101 Sep 19 '18 at 13:22
Possibly there is some semi-scientific or magical form of gravity generator, as in space operas like Star Trek and Star Wars, where such devices provide artificial gravity aboard spaceships.
And perhaps somebody placed such gravity generators beneath the floors of sealed air filled caverns under the surface of Ganymede. The light in those caverns may also be artificial. If the caverns are sealed and air tight the air will be kept in by the caverns, and the artificial gravity wouldn't be needed to retain atmosphere, but might be necessary to provide gravity for the health of the human population.
In fact it is considered possible that there could be lifeforms in liquid oceans beneath the ice covered surfaces of Ganymede and other moons in the outer solar system. So what you are proposing is vaguely similar to that speculation, except that you propose small air-filled caverns in the ice instead of a world wide ocean beneath the ice.
The combined effect of those gravity generators should increase Ganymede's overall gravity and make it seem a bit more massive than it actually is. But if those gravity generators are beneath only a tiny fraction of the Ganymedean surface the total effect may be very slight.
And when space probes are put in orbit around Ganymede they may detect the effects of those gravity generators, just as the first lunar satellites detected mass concentrations (mascons) in the moon.
https://en.wikipedia.org/wiki/Mass_concentration_(astronomy)
And possibly analysis of the strange gravity readings may prove that they can't be the result of Ganymedean mascons but must be caused by generated gravity.
M. A. Golding
They can live inside a spinning centrifuge. By controlling the speed (and tilting the floor) it can generate any level of gravity-like acceleration needed, from Ganymede to Earth or higher. The centrifuge was left there by a previous, more advanced civilization that also left all their other life-support systems. Is Jupiter's intense radiation a problem for them?
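For scale, the spin rate such a centrifuge needs follows from the centripetal relation $a = \omega^2 r$. A quick sketch (the 100 m radius is an assumed, illustrative figure, not something from the answer):

```python
import math

def spin_for_gravity(radius_m, accel=9.81):
    """Angular speed (rad/s) and rpm needed so that omega^2 * r = accel."""
    omega = math.sqrt(accel / radius_m)   # centripetal acceleration a = omega^2 * r
    rpm = omega * 60 / (2 * math.pi)      # convert rad/s to revolutions per minute
    return omega, rpm

omega, rpm = spin_for_gravity(100.0)  # a hypothetical 100 m radius centrifuge
print(f"{omega:.3f} rad/s ~ {rpm:.1f} rpm")  # about 0.313 rad/s, ~3 rpm
```

At larger radii the required spin rate drops, which also reduces the disorienting Coriolis effects the inhabitants would feel.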
Roger Robot
$\begingroup$ Welcome to Worldbuilding! Your answer is good, but you should remove the last sentence. If you have a question for the OP, it should be asked in a comment. I know you can't comment yet, and that's always a problem for new users, but there are a ton of questions that don't need any extra clarification. For the time being, go ahead and skip any questions that you can't answer without more info until you get enough reputation to comment. Good luck! $\endgroup$ – John Locke Sep 19 '18 at 18:48
Ganymede itself is packed with high-density materials such as tungsten and uranium deposits, resulting in an Earth-like overall mass easily 50 times what it should be: artificial superheavy elements beyond anything ever manufactured in a lab.
The Surface of Ganymede is covered in a thick layer of Cavorite Dust, resulting in its unusually high density being almost wholly cancelled out, what gravity/Mass-Effect that filters through the Cavorite is only a couple percent of its natural strength.
Within the caves, gravity is unaffected and the inhabitants experience earth-normal conditions.
If you want the surface itself to have earth-normal gravity, you could handwave that the Cavorite attenuates the effects of gravity so that it falls off very rapidly, e.g. over a matter of meters. Shortening the length of the gravity waves to something you could measure on a yard-stick. Meaning you can walk around as normal, but throw a ball high into the air and it won't be coming down again.
Having gone away and looked up material densities, I realised that the required density in order for Ganymede to be literally 5000 times as massive as it appears is well beyond tungsten or uranium or even Osmium or Hassium.
You need a material with a density of 779,634,464,751.96 kg/m^3 to do it.
I have corrected my answer accordingly.
Ruadhan
$\begingroup$ I was out by 10 in my density calculation and thought that you'd need an artificial super-dense element... Bit disappointed you only need tungsten. $\endgroup$ – Oscar Bravo Sep 21 '18 at 12:38
$\begingroup$ Disclaimer. I have no idea whether a core of tungsten and uranium deposits would be sufficient to produce an earth-like mass! But checking vs Iron by molar weight would probably be a useful comparison. $\endgroup$ – Ruadhan Sep 21 '18 at 14:23
$\begingroup$ Just went and did some figuring. Iron is 7850kg/m^3, while Tungsten is 19600kg/m^3. So technically speaking it's somewhere between 2 and 3 times as dense. You'd need something 5 times more dense than tungsten to achieve earthlike mass with Ganymede. Uranium is less dense than Tungsten at 18900kg/m^3. So yes. you'd probably need an artificial super-dense element. Good luck manufacturing one that isn't a ridiculously short-lived radioactive element. $\endgroup$ – Ruadhan Sep 21 '18 at 14:28
$\begingroup$ Osmium is 22590kg/m^3, still not viable for this, and Hassium (the densest material ever made in a lab) is a little denser at 22610kg/m^3. $\endgroup$ – Ruadhan Sep 21 '18 at 14:34
$\begingroup$ I think you were right the first time... To get the same gravitational acceleration (at the surface) as Earth (9.81$m/s^2$), but keep the same radius, Ganymede (1.5$m/s^2$) needs only to get about 7 times heavier ($a \propto m$), therefore 7 times more dense. Its current density is about 2$g/cm^3$ so that takes us to about 15$g/cm^3$. Which is easily attainable with normal matter. $\endgroup$ – Oscar Bravo Sep 24 '18 at 6:09
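The arithmetic in that last comment can be checked in a few lines. For a fixed radius, surface gravity scales linearly with density, so the required density is just the current one times the gravity ratio (the 1.5 m/s² and 2 g/cm³ figures are the rounded values used in the comment):

```python
g_earth, g_ganymede = 9.81, 1.5   # surface gravity, m/s^2 (rounded, per the comment)
rho_ganymede = 2.0                # Ganymede's mean density, g/cm^3 (rounded)

# g is proportional to rho at fixed radius, so scale density by the gravity ratio.
factor = g_earth / g_ganymede
rho_needed = factor * rho_ganymede
print(f"~{factor:.1f}x heavier, ~{rho_needed:.0f} g/cm^3")  # prints ~6.5x heavier, ~13 g/cm^3
```

That lands in the 13-15 g/cm³ range, comfortably below osmium, so no exotic superheavy matter is required.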
I think your only 'realistic' solution is a gravity generator with very limited range. If Ganymede's actual gravitational attraction was larger than it should be, it would affect its orbit, and the orbit of anything else that got near it, which would have been detected from afar by astronomers.
A gravity generator (presumably built and then abandoned by some ancient species) that only reached a very short distance above the surface, so as to keep atmosphere and inhabitants firmly rooted, but not far enough to affect orbital characteristics, should fill the bill. Naturally, a real gravitational field would not act that way, but since you're inventing a gravity generator that generates artificial gravity, you're entirely free to make that artificial gravity behave in a non-standard manner.
pdanes
$\begingroup$ Gravity is generated by mass and has an infinite range. How can you circumvent this from a scientific point of view? $\endgroup$ – L.Dutch - Reinstate Monica♦ Sep 19 '18 at 16:05
$\begingroup$ @L.Dutch The poster is positing an artificial generator of gravitational field that doesn't need mass and has a short range. No need to actually invent it since this is WorldBuilding and not Physics. $\endgroup$ – Oscar Bravo Sep 21 '18 at 11:57
$\begingroup$ Topologically speaking, you can view gravity as an indentation in space-time (ie: the old rubber-sheet demo) Generally the distortion is across a wide area and the indentation is very shallow, so if you wanted to have a close-ranged gravity field you'd need to essentially "scrunch up" spacetime to do it. like gripping part of the sheet and pulling it together so that it hangs loose in the middle. If I knew how to actually implement that, I'd have won a nobel prize :P $\endgroup$ – Ruadhan Sep 24 '18 at 8:05
Some answers are a bit misleading - especially those quoting the Scientific American article. You can only get the mass of the primary object from simple orbital mechanics. So you can't get Ganymede's mass simply from observing the radius and period of its orbit around Jupiter (it's a pretty good way to get Jupiter's mass - but that's not the point). Any object at the radius of Ganymede would orbit Jupiter in the same period - regardless of its mass.
For a sphere of a given size, the gravitational field at the surface depends on the density so that:
$$ \rho = \frac{3g}{4\pi G r} $$
So if you want Earth-gravity on a planet the size of Ganymede, you'd need to make it out of material with a density of about $15\space g/cm^3$.
This is pretty dense - about three times Earth's density. However, if Ganymede is mostly made of some very dense elements like Tungsten or Uranium (as mentioned by @Ruadhan) it would work.
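A minimal numerical check of that formula, using Ganymede's radius of about 2,634 km (an assumed figure from standard references, not stated in the answer):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # target surface gravity, m/s^2
r = 2.634e6     # Ganymede's radius in metres (assumed ~2634 km)

# rho = 3g / (4 pi G r), from setting g = G * (4/3) pi r^3 rho / r^2
rho = 3 * g / (4 * math.pi * G * r)
print(f"required density ~ {rho/1000:.1f} g/cm^3")  # roughly 13 g/cm^3
```

The small gap between this and the 15 g/cm³ quoted above comes from the rounding in the gravity and density figures; either way the answer sits between iron and tungsten, well within the range of ordinary matter.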
Oscar Bravo
$\begingroup$ I'm afraid it really wouldn't work. By my reckoning, you'd need a material at least 7 factors of 10 denser than the densest material ever manufactured or found in order to make ganymede remotely as heavy as earth. but if we assume Aliens created an artificial planetoid out of stable super-heavy isotopes and coated it in more normal materials, that'd do it. Tungsten and Uranium are pretty piddly when compared to the requirements. $\endgroup$ – Ruadhan Sep 21 '18 at 14:45
$\begingroup$ See note above - I made a mistake the first time, but I think this calculation is correct. A small, dense planet can easily have an Earth-like gravitational field. It might be quite unlikely, but normal metals are plenty dense (as you said in your first answer!) $\endgroup$ – Oscar Bravo Sep 24 '18 at 6:15
Magnetism
Replace gravity with magnetism. Denizens of your caves know only metal: no wood, no furs, no plastics. They wear steel, build from steel, and their food is... complicated. Beneath the caves there is a powerful source of magnetism - an ancient spaceship or a natural phenomenon (which explains why they have so much iron to start with).
So, all metallic objects are pushed down, and since people have nothing else, it works exactly like gravity. Except that people almost fly up if they are nude - but you can use that in your story too.
Barafu Albino
$\begingroup$ I like this idea, ganymede being packed with rare-earths isn't beyond the realm of possibility either. $\endgroup$ – Ruadhan Sep 24 '18 at 8:06
Ganymede is not a naturally formed moon; it is an alien spaceship that abducted people in the [insert] age who then defeated their captors and lived inside the spaceship, which then drifted until it was captured by Jupiter. The spaceship has of course artificial gravity and is built around a reactor in the core, but the reactor is on stand-by mode, only supplying the people living there with oxygen, water, etc. needed for their survival. They farm the alien and Earth plants the aliens gathered for study and maybe have some domestic animals, too, also originally gathered by the aliens.
Real Subtle
February 2012, 32(2): 467-485. doi: 10.3934/dcds.2012.32.467
Asymptotic estimates for unimodular Fourier multipliers on modulation spaces
Jiecheng Chen 1, , Dashan Fan 2, and Lijing Sun 2,
Department of Mathematics, Zhejiang Normal University, 321004 Jinhua, China
Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, United States, United States
Received August 2010 Revised June 2011 Published September 2011
Recently, it has been shown that the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ are bounded on all modulation spaces. In this paper, using the almost orthogonality of projections and some techniques on oscillating integrals, we obtain asymptotic estimates for the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ on the modulation spaces. As applications, we give the grow-up rates of the solutions for the Cauchy problems for the free Schrödinger equation, the wave equation and the Airy equation with the initial data in a modulation space. We also obtain a quantitative form about the solution to the Cauchy problem of the nonlinear dispersive equations.
Keywords: Unimodular multipliers, modulation spaces, Schrödinger equation, wave equation, Airy equation.
Mathematics Subject Classification: Primary: 42B15, 42B35; Secondary: 42C1.
Citation: Jiecheng Chen, Dashan Fan, Lijing Sun. Asymptotic estimates for unimodular Fourier multipliers on modulation spaces. Discrete & Continuous Dynamical Systems - A, 2012, 32 (2) : 467-485. doi: 10.3934/dcds.2012.32.467
December 2018, 23(10): 4541-4555. doi: 10.3934/dcdsb.2018175
Identification of generic stable dynamical systems taking a nonlinear differential approach
Mahdi Khajeh Salehani 1,2,,
School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, P.O. Box: 14155-6455, Tehran, Iran
School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran
* Corresponding author's e-mail address: [email protected] (M. Khajeh Salehani)
Received September 2017 Revised December 2017 Published May 2018
Fund Project: This work was supported in part by a grant from the Institute for Research in Fundamental Sciences (IPM) [No. 95510037]
Identifying new stable dynamical systems, such as generic stable mechanical or electrical control systems, requires searching for the system parameters that give rise to such systems. In this paper, a systematic approach to constructing generic stable dynamical systems is proposed. In fact, our approach is based on a simple identification method in which we intervene directly in the dynamics of our system by considering a continuous $1$-parameter family of system parameters, parametrized by a positive real variable $\ell$, and then identify the desired parameters that introduce a generic stable dynamical system by analyzing the solutions of a special system of nonlinear functional-differential equations associated with the $\ell$-varying parameters. We have also investigated the reliability and capability of our proposed approach.
To illustrate the utility of our result and as some applications of the nonlinear differential approach proposed in this paper, we conclude with considering a class of coupled spring-mass-dashpot systems, as well as the compartmental systems - the latter of which provide a mathematical model for many complex biological and physical processes having several distinct but interacting phases.
Keywords: Generic stable dynamical system, nonlinear differential approach, monic characteristic polynomial, Routh-Hurwitz criterion, Hardy-Hutchinson criterion.
Mathematics Subject Classification: Primary: 34D20, 37C75; Secondary: 65L03, 93D05.
Citation: Mahdi Khajeh Salehani. Identification of generic stable dynamical systems taking a nonlinear differential approach. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4541-4555. doi: 10.3934/dcdsb.2018175
Figure 1. Schematic of a coupled spring-mass-dashpot system
Figure 2. Schematic of an open compartmental system
PDF downloads (76)
HTML views (385)
Mahdi Khajeh Salehani | CommonCrawl |
The Petersen Graph
The Petersen Graph is a mathematics book about the Petersen graph and its applications in graph theory. It was written by Derek Holton and John Sheehan, and published in 1993 by the Cambridge University Press as volume 7 in their Australian Mathematical Society Lecture Series.
This article is about the book. For the graph, see Petersen graph.
Author: Derek Holton, John Sheehan
Series: Australian Mathematical Society Lecture Series
Subject: The Petersen graph
Publisher: Cambridge University Press
Publication date: 1993
Topics
The Petersen graph is an undirected graph with ten vertices and fifteen edges, commonly drawn as a pentagram within a pentagon, with corresponding vertices attached to each other. It has many unusual mathematical properties, and has frequently been used as a counterexample to conjectures in graph theory.[1][2] The book uses these properties as an excuse to cover several advanced topics in graph theory where this graph plays an important role.[1][3] It is heavily illustrated, and includes both open problems on the topics it discusses and detailed references to the literature on these problems.[1][4]
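The basic counts above can be checked mechanically. The following is a minimal sketch using the standard Kneser-graph construction K(5,2) of the Petersen graph; this construction is equivalent to the pentagram-in-pentagon drawing but is not itself described in this article:

```python
from itertools import combinations

def petersen_edges():
    """Kneser-graph K(5,2) model of the Petersen graph: vertices are the
    2-element subsets of a 5-element set, and two vertices are adjacent
    exactly when the corresponding subsets are disjoint."""
    vertices = list(combinations(range(5), 2))
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if not set(u) & set(v)]
    return vertices, edges

vertices, edges = petersen_edges()
print(len(vertices), len(edges))            # 10 15
degrees = [sum(v in e for e in edges) for v in vertices]
print(set(degrees))                         # {3}: the graph is 3-regular
```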
After an introductory chapter, the second and third chapters concern graph coloring, the history of the four color theorem for planar graphs, its equivalence to 3-edge-coloring of planar cubic graphs, the snarks (cubic graphs that have no such colorings), and the conjecture of W. T. Tutte that every snark has the Petersen graph as a graph minor. Two more chapters concern closely related topics, perfect matchings (the sets of edges that can have a single color in a 3-edge-coloring) and nowhere-zero flows (the dual concept to planar graph coloring). The Petersen graph shows up again in another conjecture of Tutte, that when a bridgeless graph does not have the Petersen graph as a minor, it must have a nowhere-zero 4-flow.[3]
Chapter six of the book concerns cages, the smallest regular graphs with no cycles shorter than a given length. The Petersen graph is an example: it is the smallest 3-regular graph with no cycles of length shorter than 5. Chapter seven is on hypohamiltonian graphs, the graphs that do not have a Hamiltonian cycle through all vertices but that do have cycles through every set of all but one vertices; the Petersen graph is the smallest example. The next chapter concerns the symmetries of graphs, and types of graphs defined by their symmetries, including the distance-transitive graphs and strongly regular graphs (of which the Petersen graph is an example)[3] and the Cayley graphs (of which it is not).[1] The book concludes with a final chapter of miscellaneous topics too small for their own chapters.[3]
Audience and reception
The book assumes that its readers already have some familiarity with graph theory.[3] It can be used as a reference work for researchers in this area,[1][2] or as the basis of an advanced course in graph theory.[2][3]
Although Carsten Thomassen describes the book as "elegant",[4] and Robin Wilson evaluates its exposition as "generally good",[2] reviewer Charles H. C. Little takes the opposite view, finding fault with its copyediting, with some of its mathematical notation, and with its failure to discuss the lattice of integer combinations of perfect matchings, in which the number of copies of the Petersen graph in the "bricks" of a certain graph decomposition plays a key role in computing the dimension.[1] Reviewer Ian Anderson notes the superficiality of some of its coverage, but concludes that the book "succeeds in giving an exciting and enthusiastic glimpse" of graph theory.[3]
References
1. Little, Charles H. C. (1994), "Review of The Petersen Graph", Mathematical Reviews, MR 1232658
2. Wilson, Robin J. (January 1995), "Review of The Petersen Graph", Bulletin of the London Mathematical Society, 27 (1): 89–89, doi:10.1112/blms/27.1.89
3. Anderson, Ian (March 1995), "Review of The Petersen Graph", The Mathematical Gazette, 79 (484): 239–240, doi:10.2307/3620120, JSTOR 3620120
4. Thomassen, C., "Review of The Petersen Graph", zbMATH, Zbl 0781.05001
June 2019, 39(6): 3345-3364. doi: 10.3934/dcds.2019138
Hardy-Sobolev type inequality and supercritical extremal problem
José Francisco de Oliveira 1, , João Marcos do Ó 2,, and Pedro Ubilla 3,
Department of Mathematics, Federal University of Piauí, 64049-550 Teresina, PI, Brazil
Department of Mathematics, University of Brasília, 70910-900, Brasília, DF, Brazil
Departamento de Matematica, Universidad de Santiago de Chile, Casilla 307, Correo 2, Santiago, Chile
* Corresponding author: João Marcos do Ó
Received July 2018 Revised December 2018 Published February 2019
Fund Project: The third author was supported by FONDECYT grants 1181125, 1161635 and 1171691.
This paper deals with Hardy-Sobolev type inequalities involving variable exponents. Our approach also enables us to prove existence results for a wide class of quasilinear elliptic equations with supercritical power-type nonlinearity with variable exponent.
Keywords: Hardy-type inequality, critical exponents, supercritical growth, extremal problem, Sobolev space.
Mathematics Subject Classification: Primary: 46E35, 26D10, 35J62; Secondary: 35B33.
Citation: José Francisco de Oliveira, João Marcos do Ó, Pedro Ubilla. Hardy-Sobolev type inequality and supercritical extremal problem. Discrete & Continuous Dynamical Systems, 2019, 39 (6) : 3345-3364. doi: 10.3934/dcds.2019138
Comparative Migration Studies
Correction to: Between fragmentation and institutionalisation: the rise of migration studies as a research field
Nathan Levy1,
Asya Pisarevskaya1 &
Peter Scholten1
Comparative Migration Studies volume 8, Article number: 29 (2020)
The Original Article was published on 06 July 2020
Correction to: Comparative Migration Studies 8, 24 (2020)
Following publication of the original article (Levy, Pisarevskaya, & Scholten, 2020), the authors reported several errors.
In the Abstract, "co -authorships" has been corrected to "co-authorships".
Footnote 1 contained a typesetting mistake – duplicate text was added. It has been corrected to: "E.g. a transdisciplinary article is one where it becomes difficult to ascertain the discipline from which it has originated, even though it is clearly identified as belonging to migration studies."
In the section 'Bibliometric analysis', the formula has been corrected to:
$$ {P}_t=\frac{N_t\ast \left({N}_t-1\right)}{2},\mathrm{where}\ \mathrm{N}\ \mathrm{is}\ \mathrm{a}\ \mathrm{Total}\kern0.17em \mathrm{number}\ \mathrm{of}\ \mathrm{sources}\ \mathrm{for}\ \mathrm{period}\;t. $$
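As a quick numerical illustration of the corrected formula (this example is ours, not part of the published correction): with five sources in a period there are 5 × 4 / 2 = 10 potential pairs.

```python
def potential_pairs(n_sources: int) -> int:
    # P_t = N_t * (N_t - 1) / 2: the number of unordered pairs that can be
    # formed from N_t sources in period t (the binomial coefficient
    # "N_t choose 2").
    return n_sources * (n_sources - 1) // 2

print(potential_pairs(5))   # 10
```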
The 8th paragraph of the 'Bibliometric analysis' contained a typesetting mistake – the first part (highlighted in bold typeface) was omitted. This paragraph has been corrected to: "We did this in five year increments (1975–1979; 1980–1984, and so on, with the exception of the final period, 2015–2018). The network files exported from VOSviewer can be found in the Harvard Dataverse (see Levy, Pisarevskaya, & Scholten, 2020). Following our iterative logic, this enabled us to analyse the data in the same terms – i.e. "early 1980s", "late 1990s" – as our interviewees described their perception of the field's development. VOSviewer clusters the authors according to how often they are cited together. We take these clusters to approximate the variety of epistemic communities within the field in each period. To assign labels, we used Google Scholar to find the unifying features of each cluster. We checked the research of each cluster's most-cited authors, and the first-page results (usually the authors' higher-cited works) enabled us to grasp their conceptual, thematic, or disciplinary focus. We triangulated this information with the reflections shared by our expert interviewees."
Footnote 2 contained a typesetting mistake – duplicate text was added. It has been corrected to: "See sheet 'all countries weighted' for relativized co-authorship statistics."
In the 9th paragraph of the 'Bibliometric analysis', "co -citation" has been corrected to "co-citation".
In the 2nd paragraph of the 'Disciplines and cross-disciplinary osmosis', "most -cited" has been corrected to "most-cited".
In the 5th paragraph of the 'Disciplines and cross-disciplinary osmosis', "Pennix" has been corrected to "Penninx".
The 5th paragraph of the 'Conclusion and discussion: fragmentation and institutionalisation in the field of migration studies' section contained a typesetting mistake – the phrase "that refer to" was duplicated. The duplicated phrase was removed.
The original article (Levy et al., 2020) has been corrected with regards to the above errors.
Levy, N., Pisarevskaya, A., & Scholten, P. (2020). Between fragmentation and institutionalisation: the rise of migration studies as a research field. Comparative Migration Studies, 8, 24 https://doi.org/10.1186/s40878-020-00180-7.
Department of Public Administration and Sociology, Erasmus University Rotterdam, Rotterdam, Netherlands
Nathan Levy, Asya Pisarevskaya & Peter Scholten
Correspondence to Nathan Levy.
The original article can be found online at https://doi.org/10.1186/s40878-020-00180-7
Levy, N., Pisarevskaya, A. & Scholten, P. Correction to: Between fragmentation and institutionalisation: the rise of migration studies as a research field. CMS 8, 29 (2020). https://doi.org/10.1186/s40878-020-00200-6 | CommonCrawl |
\begin{document}
\title{Filtrations of tilting modules and costalks of parity sheaves} \begin{abstract}
In this article, we prove that the costalks of parity sheaves on
the affine Grassmannian correspond to the Brylinski-Kostant
filtration of the corresponding weight spaces of tilting modules. \end{abstract} \section{Introduction} \subsection{Summary} Assume \( G \) is a split reductive algebraic group over a field \( k \). When \( k=\mathbb{C} \), R.~K.~Brylinski constructed a filtration of the weight spaces of a \( G \)-module, using the action of a principal nilpotent element of the Lie algebra, and proved that this filtration corresponds to Lusztig's \( q \)-analogue of the weight multiplicity (cf. \cite{Bry89}). Later, Ginzburg discovered that this filtration has an interesting geometric interpretation via the geometric Satake correspondence (cf. \cite{Gin89}). The goal of this article is to partially generalise these results to the case where the characteristic of \( k \) is positive.
\subsection{Main result} In the rest of the article, let \( G \) be a reductive group over \( k \) which is a product of simply-connected quasi-simple groups and general linear groups. Suppose \( k \) is algebraically closed and that its characteristic is
good for each quasi-simple factor of \( G \) in the sense of \cite{JMW}. Suppose there exists a non-degenerate \( G \)-equivariant bilinear form on \( \mathfrak{g} \). When there is no confusion, we write \( \otimes \) for \(\otimes_{k} \). Fix a Borel subgroup \( B\subset G \) and a maximal torus \( T\subset B \). Let \( \mathbf{X}=X^{*}(T) \) be the weight lattice and \( \mathbf{X}^{+} \) be the set of dominant weights with respect to \( B \).
Let \( \mathcal{G}r \) be the affine Grassmannian variety of the complex Langlands dual group \( \check{G} \) of \( G \). Let \( \check{T} \subset \check{G} \) be the maximal torus. For each \( \mu\in \mathbf{X} \), let \( L_{\mu} \) be the corresponding \( \check{T} \)-fixed point in \( \mathcal{G}r \), and let \( i_{\mu} \) be the embedding \( \{L_{\mu}\}\hookrightarrow \mathcal{G}r \). When \( \mu \) is dominant, denote by \( \mathcal{G}r^{\mu} \) the \( \check{G}(\mathcal{O})=\check{G}(\mathbb{C}[[t]]) \)-orbit of \( L_{\mu} \) in \( \mathcal{G}r \), by \( \mathcal{E}(\mu) \) the indecomposable parity sheaf with respect to the stratum \( \mathcal{G}r^{\mu} \) (cf. \cite{JMW}), and by \( \mathbf{T}(\mu) \) the indecomposable tilting module of \( G \) of highest weight \( \mu \).
Denote by \( \mathfrak{g} \), \( \mathfrak{b} \) and \( \mathfrak{t} \) the Lie algebras of \( G \), \( B \) and \( T \). The main result of this article is the following \begin{thm}\label{thm:main} Let \( e\in \mathfrak{b}\) be a principal nilpotent element that is \( \mathfrak{t} \)-adapted (i.e., there exists \( h\in \mathfrak{t} \) such that \( [h,e]=e \)). For all \( \lambda,\mu\in\mathbf{X}^{+} \), let \( \mathbf{F}_{\bullet}(\mathbf{T}(\lambda)_{\mu}) \) be the Brylinski-Kostant filtration of \( \mathbf{T}(\lambda)_{\mu} \) defined by \( e \), i.e.
for all \( n\in\mathbb{N} \), we have
\begin{displaymath}
\mathbf{F}_{n}(\mathbf{T}(\lambda)_{\mu})=\{v\in \mathbf{T}(\lambda)_{\mu}\mid
e^{(i+1)}v=0\text{ for all }i\geq n\},
\end{displaymath} and \( \mathbf{F}_{n}(\mathbf{T}(\lambda)_{\mu})=0 \) whenever \( n<0 \).
Then we have
\begin{equation}
\label{eq:7786ed48df651f44}
\dim
\mathbf{H}^{2n-\dim(\mathcal{G}r^{\mu})}(i_{\mu}^{!}\mathcal{E}(\lambda))=\dim \big(\mathbf{F}_{n}(\mathbf{T}(\lambda)_{\mu})/\mathbf{F}_{n-1}(\mathbf{T}(\lambda)_{\mu})\big)
\end{equation} for all \( n\in \mathbb{Z} \). \end{thm}
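As an elementary illustration of the algebraic side of Theorem~\ref{thm:main}, we record a standard \( \mathfrak{sl}_{2} \) computation (included here only as an example).
\begin{rmk}
  Let \( G=\mathrm{SL}_{2} \) with \( \operatorname{char}(k)>2 \), and identify \( \mathbf{X}^{+} \) with \( \mathbb{Z}_{\geq 0} \). Then \( \mathbf{T}(2) \) is simple, isomorphic to the adjoint representation \( \mathfrak{sl}_{2} \) with basis \( (e,h,f) \), and \( \mathbf{T}(2)_{0}=kh \). Since \( e^{(1)}\cdot h=[e,h]=-2e\neq 0 \) and \( e^{(2)}\cdot h=0 \), we get \( \mathbf{F}_{0}(\mathbf{T}(2)_{0})=0 \) and \( \mathbf{F}_{1}(\mathbf{T}(2)_{0})=\mathbf{T}(2)_{0} \). Hence the right hand side of \eqref{eq:7786ed48df651f44} equals \( 1 \) for \( n=1 \) and \( 0 \) otherwise, in agreement with Lusztig's \( q \)-analogue of the weight multiplicity, which equals \( q \) in this case.
\end{rmk}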
The proof in this article follows exactly the same idea as \cite{Bry89} and Proposition 2.6.3 of \cite{GR15}.
The author thanks Simon Riche and Geordie Williamson for many useful discussions.
\section{Proof of the main result} Let \( \mathfrak{n}=\Lie(U) \), where \( U \) is the unipotent radical of \( B \). Let \( \tilde{\mathfrak{g}}=G\times^{B}(\mathfrak{g}/\mathfrak{n})^{*} \) be the Grothendieck resolution. Then we have an isomorphism of graded \( k[\mathfrak{t}^{*}]\cong H^{\bullet}_{\check{T}}(\text{pt}) \)-modules (cf. \cite{MR15} Prop.1.9) \begin{equation}
\label{eq:b3126ad8387fcdf4}
\mathbf{H}_{\check{T}}^{\bullet-\dim(\mathcal{G}r^{\mu})}(i_{\mu}^{!}\mathcal{E}(\lambda))\cong
(\mathbf{T}(\lambda)\otimes \Gamma(\tilde{\mathfrak{g}},\mathcal{O}_{\tilde{\mathfrak{g}}}(-w_{0}\mu)))^{G} \end{equation} where on the right hand side \( \mathbf{T}(\lambda) \) is in degree \( 0 \) and the global sections are equipped with the grading induced by the \( \mathbb{G}_{m} \)-action on \( \tilde{\mathfrak{g}} \) defined by \begin{displaymath}
z\cdot(g\times^{B}x)=g\times^{B}(z^{2}x). \end{displaymath}
\begin{lemma}
Let \( P\subset G \) be a parabolic subgroup such that \( G\to G/P
\) is locally trivial and let \( V,V' \) be
\( P \)-modules. Let \( \mathbb{G}_{m} \) act on \( V \) by \(
z\cdot x=z^{2}x \). Let \( \pi:G\times^{P}V \to G/P\) be the natural
map. Then we have an isomorphism of graded \( G
\)-modules
\begin{equation}
\label{eq:f9ec0f3bb5480d84}
\Gamma(G\times^{P}V,\pi^{*}\mathcal{L}_{G/P}(V'))=\ind_{P}^{G}(V'\otimes
k[V])
\end{equation}
where \( \mathcal{L}_{G/P}(V') \) is the associated sheaf
induced by the \( P \)-module \( V' \) in the sense of \cite{Jan03}
I.5.8 with \( X=G\times V \), the
grading on the left hand side is induced by the action of \(
\mathbb{G}_{m} \) on \( V \), and the grading on the right hand side
is induced by the grading on \( k[V]\cong S(V^{*}) \) with \( V^{*}
\) placed on degree \( 2 \). \end{lemma} \begin{proof} Let \( X=G\times V \), then \( P \) acts on \( X \) by \( (g,x)\cdot p=(gp,p^{-1}x) \) and we have \( G\times^{P}V=X/P \) by definition. Then we have \begin{displaymath}
\pi^{*}(\mathcal{L}_{G/P}(V'))\cong \mathcal{L}_{X/P}(V') \end{displaymath} by \cite{Jan03} I.5.17 (1). Now we have \begin{align*}
\Gamma(G\times^{P}V,\pi^{*}\mathcal{L}_{G/P}(V'))&=\Gamma(X/P,
\mathcal{L}_{X/P}(V'))\\
&=(V'\otimes k[X])^{P}\\
&=(V'\otimes k[G]\otimes k[V])^{P}\\
&=(k[G]\otimes (V'\otimes k[V]))^{P}\\
&=\ind_{P}^{G}(V'\otimes k[V]) \end{align*} with the desired gradings. \end{proof}
Applying the lemma to \( P=B \), \( V=(\mathfrak{g}/\mathfrak{n})^{*} \) and \( V'=k_{-w_{0}\mu} \), we get an isomorphism of graded \( G \)-modules \begin{displaymath}
\Gamma(\tilde{\mathfrak{g}},\mathcal{O}_{\tilde{\mathfrak{g}}}(-w_{0}\mu))\cong
\ind_{B}^{G}(k_{-w_{0}\mu}\otimes k[(\mathfrak{g}/\mathfrak{n})^{*}]). \end{displaymath} Hence we have an isomorphism of graded \( k[\mathfrak{t}^{*}] \)-modules \begin{displaymath}
\mathbf{H}_{\check{T}}^{\bullet-\dim(\mathcal{G}r^{\mu})}(i_{\mu}^{!}\mathcal{E}(\lambda))\cong
(\mathbf{T}(\lambda)\otimes \ind_{B}^{G}(k_{-w_{0}\mu}\otimes k[(\mathfrak{g}/\mathfrak{n})^{*}]))^{G}\cong
(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes k[(\mathfrak{g}/\mathfrak{n})^{*}])^{B} \end{displaymath} by tensor identity and Frobenius reciprocity.
Take a regular semisimple element \( \phi\in \mathfrak{t}^{*} \). Then we have isomorphisms of filtered vector spaces \begin{equation}
\label{eq:57e826ff89859876}
\mathbf{H}^{\bullet-\dim(\mathcal{G}r^{\mu})}_{\phi}(i_{\mu}^{!}\mathcal{E}(\lambda))\cong (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[(\mathfrak{g}/\mathfrak{n})^{*}])^{B}\otimes_{k[\mathfrak{t}^{*}]}k_{\phi}.
\end{equation}
Identify \( \mathfrak{g} \) with \( \mathfrak{g}^{*} \) via a non-degenerate \( G
\)-equivariant bilinear form and let \( h\in \mathfrak{t} \) be the
image of \( \phi \), which is regular semisimple. Then we have
\begin{equation}
\label{eq:e5fbd3818496a225}
\mathbf{H}_{\check{T}}^{\bullet-\dim(\mathcal{G}r^{\mu})}(i_{\mu}^{!}\mathcal{E}(\lambda))\cong
(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[\mathfrak{b}])^{B}\otimes_{k[\mathfrak{t}]}k_{h}
\end{equation}
To transform the formula above into a form that is easier to
work with, we also need the following three lemmas.
\begin{lemma}\label{lemma:Jantzen}
If \( h\in\mathfrak{t}_{\text{rs}} \), then we have a \( B \)-equivariant isomorphism
of varieties
\begin{displaymath}
(h+\mathfrak{n})\times \mathfrak{t}_{\text{rs}}\xrightarrow{\sim}
\mathfrak{b}\times_{\mathfrak{t}}\mathfrak{t}_{\text{rs}}
\end{displaymath}
such that \( (h,h)\mapsto h\times_{\mathfrak{t}}h \) and the following diagram
\begin{displaymath}
\begin{tikzcd}[column sep=tiny]
(h+\mathfrak{n})\times
\mathfrak{t}_{\text{rs}}\ar[rr,"\sim"]\ar[rd,"p_{2}"]&&\mathfrak{b}\times_{\mathfrak{t}}\mathfrak{t}_{\text{rs}}\ar[ld,"\pi_{2}"]\\
&\mathfrak{t}_{\text{rs}}&
\end{tikzcd}
\end{displaymath}
is an isomorphism of affine bundles over \( \mathfrak{t}_{\text{rs}} \).
\end{lemma}
\begin{proof}
First, let us construct a map \( \Phi: (h+\mathfrak{n})\times \mathfrak{t}_{\text{rs}}\rightarrow
\mathfrak{b}\times_{\mathfrak{t}}\mathfrak{t}_{\text{rs}} \). Let
\( (X,H)\in (h+\mathfrak{n})\times \mathfrak{t}_{\text{rs}} \),
then there exists \( b\in B \) such that \( \Ad(b)(h)=X \). We set
\( \Phi(X,H)=(\Ad(b)(H),H) \), which indeed lies in \(
\mathfrak{b}\times_{\mathfrak{t}}\mathfrak{t}_{\text{rs}} \)
since by the projection \(\mathfrak{b}\to\mathfrak{t} \), the
image of \( \Ad(b)(H) \) is \( H\in \mathfrak{t}_{\text{rs}}
\). We need to check
\begin{itemize}
\item \( \Phi \) is well-defined, i.e. it does not depend
on the choice of \( b\in B \);
\item \( \Phi \) is \( B \)-equivariant (obvious);
\item \( \Phi \) is a morphism of varieties;
\item \( \Phi \) is bijective;
\item \( \Phi^{-1} \) is a morphism of varieties;
\end{itemize}
If \( X=\Ad(b)(h)=\Ad(b')(h) \), then \( \Ad(b^{-1}b')(h)=h \), hence
\( b^{-1}b'\in T \) since \( h \) is regular semisimple in \(
\mathfrak{t} \), and \( \Ad(b)(H)=\Ad(b')(H) \) since \(
H\in\mathfrak{t} \). To prove that \( \Phi \) is a morphism, observe
that \( \Phi \) is induced by the following commutative diagram
\begin{displaymath}
\begin{tikzcd}
(h+\mathfrak{n})\times
\mathfrak{t}_{\text{rs}}\ar[d,"p_{2}"]\ar[r,"\phi\times \id"]& U\times
\mathfrak{t}_{\text{rs}}\ar[r,"\psi"]&\mathfrak{b}\ar[d]\\
\mathfrak{t}_{\text{rs}}\ar[rr]&&\mathfrak{t}
\end{tikzcd}
\end{displaymath}
where \( \psi(u,H)=\Ad(u)(H) \), and \( \phi \) is the inverse map of
\( U\to h+\mathfrak{n}: u\mapsto \Ad(u)(h) \) which is a morphism
by \cite{Jan04} page 188. Bijectivity is easy to prove. \( \Phi^{-1}
\) is also a morphism because it is the composition
\begin{displaymath}
\mathfrak{b}\times_{\mathfrak{t}}\mathfrak{t}_{\text{rs}}
\xrightarrow{\pi_{1}}\mathfrak{b}_{\text{rs}}\xrightarrow{f}U\times
\mathfrak{t}_{\text{rs}}\xrightarrow{g} (h+\mathfrak{n})\times
\mathfrak{t}_{\text{rs}}
\end{displaymath}
where \( f \) is the map on page 188 of \cite{Jan04} and \(
g(u,H)=(\Ad(u)(h),H) \), which are both morphisms.
\end{proof}
\begin{lemma}\label{lemma:tensor}
Let \( H \) be an algebraic group over \( k \), \( A \) a flat \(
k \)-algebra and \( M \) an \( H \)-module. Then the
natural map \( M^{H}\otimes_{k} A \to (M\otimes_{k}A)^{H}\) is an
isomorphism of \( A \) modules, where the action of \( H \) on \(
M\otimes_{k}A \) is induced by the action of \( H \) on \( M \) and
the trivial action of \( H \) on \( A \). \end{lemma} \begin{rmk}
The assumption that \( A \) is flat is automatically satisfied here,
since \( k \) is a field. We include it because the statement remains
correct in more general settings, where flatness becomes
crucial. \end{rmk} \begin{proof}
Since \( A \) is flat, it is easy to check that the map
\( M^{H}\otimes_{k} A \to (M\otimes_{k}A)^{H}, \; m\otimes a\mapsto
m\otimes a \) is a bijection. \end{proof}
\begin{lemma}\label{lemma:localisation}
Let \( H \) be an algebraic group over \( k \) and \( A \) a flat \(
k \)-algebra. Let \( M \) be an \( H \)-module and a torsion free \( A \)-module such that the
two actions commute (i.e. \( h\cdot (am)=a(h\cdot m) \) for all
\( m\in M \), \( a\in A \) and \( h\in H \)). Then for any
multiplicative subset \( S\subset A \), the natural morphism
\begin{displaymath}
S^{-1}(M^{H})\to (S^{-1}M)^{H}
\end{displaymath}
is an isomorphism of \( S^{-1}A \)-modules. \end{lemma}
\begin{proof}
The map is induced by \( s^{-1}m\mapsto s^{-1}m \). \end{proof}
Using the above lemmas, we have isomorphisms of filtered vector spaces
\begin{align*}
& (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[\mathfrak{b}])^{B}\otimes_{k[\mathfrak{t}]}k_{h}\\
=&(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[\mathfrak{b}])^{B}\otimes_{k[\mathfrak{t}]}k[\mathfrak{t}_{\text{rs}}]\otimes_{k[\mathfrak{t}_{\text{rs}}]}
k_{h} \\
=&(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[\mathfrak{b}]\otimes_{k[\mathfrak{t}]}k[\mathfrak{t}_{\text{rs}}])^{B}\otimes_{k[\mathfrak{t}_{\text{rs}}]}
k_{h} \\
=&(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}]\otimes k[\mathfrak{t}_{\text{rs}}])^{B}\otimes_{k[\mathfrak{t}_{\text{rs}}]}
k_{h} \\
=&(\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}])^{B}\otimes k[\mathfrak{t}_{\text{rs}}]\otimes_{k[\mathfrak{t}_{\text{rs}}]}
k_{h} \\ =& (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}])^{B},
\end{align*}
where the second isomorphism is due to Lemma
\ref{lemma:localisation}, the third is due to Lemma
\ref{lemma:Jantzen}, and the fourth is due to Lemma
\ref{lemma:tensor}.
Hence there is an isomorphism of filtered vector spaces
\begin{displaymath}
\mathbf{H}^{\bullet-\dim(\mathcal{G}r^{\mu})}_{\phi}(i_{\mu}^{!}\mathcal{E}(\lambda))\cong (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}])^{B}.
\end{displaymath}
On the other hand, by using the geometric Satake equivalence and
equivariant localisation, the left-hand side is isomorphic to the vector
space \( \mathbf{T}(\lambda)_{\mu} \), hence in particular we have
\begin{displaymath}
\dim \mathbf{T}(\lambda)_{\mu}=\dim (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}])^{B}.
\end{displaymath}
\begin{lemma}
Let \( M \) be a \( B \)-module and \( \mu\in X(T) \). Then there
exists a natural isomorphism
\begin{displaymath}
(M\otimes k_{-\mu})^{B}\cong (M^{U})_{\mu}
\end{displaymath}
defined by sending \( m\otimes 1 \) to \( m \).
\end{lemma}
\begin{proof}
\begin{displaymath}
(M\otimes k_{-\mu})^{B}\cong \Hom_{B}(k_{\mu},M)\cong (M^{U})_{\mu}.
\end{displaymath}
\end{proof}
\begin{lemma}
The map
\begin{displaymath}
\Lambda: (\mathbf{T}(\lambda)\otimes k[h+\mathfrak{n}])^{U}\rightarrow \mathbf{T}(\lambda)
\end{displaymath}
defined by evaluation on \( h \) is an isomorphism of \( T
\)-modules.
In particular, it induces an isomorphism of vector spaces:
\begin{displaymath}
\Lambda_{\mu}: (\mathbf{T}(\lambda)\otimes k_{-\mu}\otimes
k[h+\mathfrak{n}])^{B} \cong \mathbf{T}(\lambda)_{\mu}.
\end{displaymath} \end{lemma} \begin{proof}
\( \Lambda\) is \( T \)-equivariant because \( h \) is fixed by \( T \).
On the other hand, we already have
\begin{displaymath} \dim \mathbf{T}(\lambda)_{\mu}=\dim (\mathbf{T}(\lambda)\otimes k_{-w_{0}\mu}\otimes
k[h+\mathfrak{n}])^{B}=\dim ( (\mathbf{T}(\lambda)\otimes k[h+\mathfrak{n}])^{U})_{\mu}
\end{displaymath}
because the dimensions of the weight spaces with respect to \( \mu \)
and \( w_{0}\mu \) are the same.
By taking the sum over all \( \mu \), we have \( \dim
(\mathbf{T}(\lambda)\otimes k[h+\mathfrak{n}])^{U} =\dim \mathbf{T}(\lambda)\). Hence
it suffices to prove that \( \Lambda \) is injective because both
sides are finite dimensional.
The idea of the proof of injectivity is quite simple. Roughly
speaking, a \( U \)-equivariant function on \( h+\mathfrak{n} \) is
zero if it vanishes at \( h \). The following is just a more
rigorous version of this simple idea.
Identify \((\mathbf{T}(\lambda)\otimes
k[h+\mathfrak{n}])^{U}=\Hom_{U}(\mathbf{T}(\lambda)^{*}, k[h+\mathfrak{n}])
\) and \( \mathbf{T}(\lambda)=(\mathbf{T}(\lambda)^{*})^{*} \). Then for \( f\in
\Hom_{U}(\mathbf{T}(\lambda)^{*}, k[h+\mathfrak{n}]) \), \( \Lambda(f):
\mathbf{T}(\lambda)^{*}\to k \) is defined by \( \Lambda(f)(\psi)=f(\psi)(h)
\). Hence if \( \Lambda(f)=0 \), then for all \( \psi\in
\mathbf{T}(\lambda)^{*} \) and \( u\in U \), we have \(
f(\psi)(\Ad(u)(h))=f(u^{-1}\psi)(h)=\Lambda(f)(u^{-1}\psi)=0 \).
But since \( h \) is principal semi-simple, we have \(
\Ad(U)(h)=h+\mathfrak{n} \), hence \( f(\psi)(X)=0 \) for all \( X\in
h+\mathfrak{n} \). Since \( k \) is an infinite field, this means
that as an element in \( k[h+\mathfrak{n}] \), we have \( f(\psi)=0
\) (another way to see this: \( k \) is algebraically closed
and \( k[h+\mathfrak{n}] \) is reduced, so a function vanishing
at every closed point is zero by Hilbert's
Nullstellensatz). Since \( \psi \) is arbitrary, we have \( f=0
\). This proves the injectivity. \end{proof}
We conclude the proof of Theorem \ref{thm:main} by the following \begin{proposition}
Let \( e\in \mathfrak{n} \) be such that \( [h,e]=e \). Then \( e \) is
a principal nilpotent, and we have
\begin{displaymath}
f\in \Hom_{B}(\mathbf{T}(\lambda)^{*}\otimes k_{\mu},
k[h+\mathfrak{n}]_{n})\Leftrightarrow \Lambda(f)\in \mathbf{F}_{n}(\mathbf{T}(\lambda)_{\mu})
\end{displaymath}
for all \( n\in\mathbb{N} \). \end{proposition}
\begin{rmk} Roughly speaking, the idea of the proof is as follows:
if a \( B \)-equivariant map from \( \mathbf{T}(\lambda)^{*}\otimes
k_{\mu} \) to
\( k[h+\mathfrak{n}] \) takes any element to a polynomial that has
degree \( \leq n \) along the direction \( e\in \mathfrak{n} \),
then it takes any element to a polynomial with degree \( \leq n \),
because \( B\cdot e \) is dense.
We will make this idea rigorous in the proof.
\end{rmk}
\begin{proof}
Denote \( V=\mathbf{T}(\lambda) \).
Fix \( f\in \Hom_{B}(V^{*}\otimes k_{\mu},
k[h+\mathfrak{n}]) \) and let \( v=\Lambda(f) \). Then \(
f\in \Hom_{B}(V^{*}\otimes k_{\mu},
k[h+\mathfrak{n}]_{n}) \) if and only if for any \( \psi\in V^{*}
\), \( f(\psi\otimes 1)\in k[h+\mathfrak{n}] \) has degree \(
\leq n \).
Since \( k \) is an infinite field, \( f(\psi\otimes 1)\in k[h+\mathfrak{n}] \) has degree \(
\leq n \) if and only if for all \( X\in \mathfrak{n} \), the
polynomial in \( t \)
\begin{displaymath}
f(\psi\otimes 1)(h+tX )
\end{displaymath}
has degree \( \leq n \).
Since \(B\cdot e \) is dense in \(
\mathfrak{n} \), we have \( f\in \Hom_{B}(V^{*}\otimes k_{\mu},
k[h+\mathfrak{n}]_{n}) \) if and only if it satisfies the
following condition (A):
``For all \( \psi\in V^{*}
\) and all \( b\in B \),
the polynomial (in
\( t \))
\( f(\psi\otimes 1)(h+t b\cdot e) \)
has degree \( \leq n \).''
Claim: (A) is equivalent to the condition (B):
``For all \( \psi\in V^{*}
\),
the polynomial (in
\( t \))
\( f(\psi\otimes 1)(h+te) \)
has degree \( \leq n \).''
Proof of the claim: (A) clearly implies (B). Now suppose \( f \)
satisfies (B). Fix \( \psi\in V^{*} \) arbitrary, choose a \(
b_{0}\in B \) such that the polynomial \( f(\psi\otimes 1)\in
k[h+\mathfrak{n}] \) reaches maximal degree in the direction \(
b_{0}\cdot e\in \mathfrak{n} \) (such a \( b_{0} \) exists because
\( k \) is infinite and \( B\cdot e \) is dense in \( \mathfrak{n}
\)). Then for \( b\in B \) arbitrary, the degree of \(
f(\psi\otimes 1)(h+t b\cdot e) \) is no larger than that of \(
f(\psi\otimes 1)(h+t b_{0}\cdot e) \). But since the latter is
maximal, it is the same as the degree of
\begin{displaymath}
f(\psi\otimes 1)(b_{0}h+t b_{0}\cdot e)=f(b_{0}^{-1}(\psi\otimes
1))(h+te)=f((\mu(b_{0})^{-1}b_{0}^{-1}\cdot\psi)\otimes 1)(h+te),
\end{displaymath}
which is \( \leq n \) by applying (B) to
\(\mu(b_{0})^{-1}b_{0}^{-1}\cdot\psi\in V^{*} \). This finishes
the proof of the claim.
Using
\begin{displaymath}
f(\psi\otimes 1)(h+te)= f(\psi\otimes
1)(\exp(te)\cdot h)=f(\exp(-te)(\psi\otimes
1))(h) \end{displaymath} and the claim, we have
\(
f\in \Hom_{B}(V^{*}\otimes k_{\mu},
k[h+\mathfrak{n}]_{n}) \) if and only if for any \( \psi\in V^{*}
\), the polynomial in \( t \)
\begin{displaymath}
f(\exp(-te)(\psi\otimes
1))(h)
\end{displaymath}
has degree \( \leq n \). But the element in \( (V^{*})^{*} \)
sending \( \psi\in V^{*} \) to \( f(\exp(-te)(\psi\otimes
1))(h)=f((\exp(-te)\psi)\otimes
1)(h)
\) is just \(\exp(te) \Lambda(f)=\exp(te)\cdot v \). \end{proof}
\end{document}
Volume 97, Issue 1-3
December 1987, pages 1-324
pp 1-1 December 1987
S G Dani
pp 3-19 December 1987
The characters of supercuspidal representations as weighted orbital integrals
Weighted orbital integrals are the terms which occur on the geometric side of the trace formula. We shall investigate these distributions on a p-adic group. We shall evaluate the weighted orbital integral of a supercuspidal matrix coefficient as a multiple of the corresponding character.
pp 21-30 December 1987
On the proof of the reciprocity law for arithmetic Siegel modular functions
Walter L Baily
Earlier we obtained a new proof of Shimura's reciprocity law for the special values of arithmetic Hilbert modular functions. In this note we show how from this result one may derive Shimura's reciprocity law for special values of arithmetic Siegel modular functions. To achieve this we use Shimura's classification of the special points of the Siegel space, Satake's classification of the equivariant holomorphic imbeddings of Hilbert-Siegel modular spaces into a larger Siegel space, and, finally, a corrected version of some of Karel's results giving an action of the Galois group Gal(Qab/Q) on arithmetic Siegel modular forms.
On some generalizations of Ramanujan's continued fraction identities
S Bhargava Chandrashekar Adiga D D Somashekara
In this note we establish continued fraction developments for the ratios of the basic hypergeometric function 2ϕ1(a, b; c; x) with several of its contiguous functions. We thus generalize and give a unified approach to establishing several continued fraction identities including those of Srinivasa Ramanujan.
On the set of discrete subgroups of bounded covolume in a semisimple group
A Borel
In this note G is a locally compact group which is the product of finitely many groups Gs(ks) (s∈S), where ks is a local field of characteristic zero and Gs an absolutely almost simple ks-group of ks-rank ≥ 1. We assume that the sum of the rs is ≥ 2 and fix a Haar measure on G. Then, given a constant c > 0, it is shown that, up to conjugacy, G contains only finitely many irreducible discrete subgroups L of covolume ≤ c (4.2). This generalizes a theorem of H C Wang for real groups. His argument extends to the present case, once it is shown that L is finitely presented (2.4) and locally rigid (3.2).
Explicit Ramanujan-type approximations to pi of high order
J M Borwein P B Borwein
We combine previously developed work with a variety of Ramanujan's higher order modular equations to make explicit, in very simple form, algebraic approximations to π which converge with orders including 7, 11, 15 and 23.
Almost poised basic hypergeometric series
David M Bressoud
Given a basic hypergeometric series with numerator parameters a1, a2, ..., ar and denominator parameters b2, ..., br, we say it is almost poised if bi = a1qδi/ai, δi = 0, 1 or 2, for 2 ≤ i ≤ r. Identities are given for almost poised series with r = 3 and r = 5 when a1 = q−2n.
On Whittaker models and the vanishing of Fourier coefficients of cusp forms
Stephen Gelbart David Soudry
The purpose of this paper is to construct examples of automorphic cuspidal representations which possess a ψ-Whittaker model even though their ψ-Fourier coefficients vanish identically. This phenomenon was known to be impossible for the group GL(n), but in general remained an open problem. Our examples concern the metaplectic group and rely heavily upon J L Waldspurger's earlier analysis of cusp forms on this group.
On prime representing polynomials
Emil Grosswald
A heuristic method is presented to determine the number of primes p ≤ x represented by an irreducible polynomial f(n) without non-trivial fixed factor (f(y) ∈ Z[y]; n ∈ Z). The method is applied to two specific polynomials and the results are compared with those of the heuristic approach of Hardy and Littlewood.
pp 85-109 December 1987
(GLn, GLm)-duality and symmetric plethysm
Roger Howe
In [7] the author has given an exposition of the theory of invariants of binary forms in terms of a particular version of Classical Invariant Theory. Reflection shows that many aspects of the development apply also to n-ary forms. The purpose of this paper is to make explicit this more general application. The plethysms Sl(Sp(ℂn)) are computed quite explicitly for l = 2, 3 and 4.
pp 111-116 December 1987
The area within a curve
M N Huxley
The area of a simple closed convex curve can be estimated in terms of the number of points of a square lattice that lie within the curve. We obtain the usual error bound without integration using a form of the Hardy—Littlewood—Ramanujan circle method, and also present simple estimates for the mean square error.
On the nonvanishing of someL-functions
Hervé Jacquet
The non-vanishing, at the centre of symmetry, of the L-function attached to an automorphic representation of GL(2) or its twists by quadratic characters has been extensively investigated, in particular by Waldspurger. The purpose of this paper is to outline a new proof of Waldspurger's results. The automorphic representations of GL(2) and its metaplectic cover are compared in two different ways; one way is by means of a "relative trace formula"; the relative trace formula presented here is actually a generalization of the work of Iwaniec.
On exponential sums involving the Ramanujan function
M Jutila
Let τ(n) be the arithmetical function of Ramanujan, α any real number, and x ≥ 2. The uniform estimate $$\mathop \Sigma \limits_{n \leqslant x} \tau (n)e(n\alpha ) \ll x^{1/2} \log x$$ is a classical result of J R Wilton. It is well known that the best possible bound would be ≪ x1/2. The validity of this hypothesis is proved.
Approximation of exponential sums by shorter ones
A A Karatsuba
A new theorem on the approximation of an exponential sum by a shorter one is proved.
On endomorphisms of degree two
Max Koecher
Let R be a commutative ring, Δ ∈ R and let RΔ be the set of conjugacy classes of R-module endomorphisms f satisfying f ∘ f = Δ·id. Using a certain subspace of the tensor product of two endomorphisms, a commutative and associative product on RΔ can be defined. For R = ℤ a generalization of the composition of quadratic forms arises as a special case.
Traces of Eichler—Brandt matrices and type numbers of quaternion orders
Otto Körner
Let A be a totally definite quaternion algebra over a totally real algebraic number field F and M be the ring of algebraic integers of F. For any M-order G of A we derive formulas for the mass m(G) and the type number t(G) of G and for the trace of the Eichler-Brandt matrix B(G, J) of G and any integral ideal J of M in terms of genus invariants of G and of invariants of F and J. Applications to class numbers of quaternion orders and of ternary quadratic forms are indicated.
The Hecke-algebras related to the unimodular and modular group over the Hurwitz order of integral quaternions
Aloys Krieg
In the present paper the elementary divisor theory over the Hurwitz order of integral quaternions is applied in order to determine the structure of the Hecke-algebras related to the attached unimodular and modular group of degree n. In the case n = 1 the Hecke-algebras fail to be commutative. If n > 1 the Hecke-algebras prove to be commutative and coincide with the tensor product of their primary components. Each primary component turns out to be a polynomial ring in n resp. n + 1 resp. 2n resp. 2n + 1 algebraically independent elements. In the case of the modular group of degree n, the law of interchange with the Siegel ϕ-operator is described. The induced homomorphism of the Hecke-algebras is surjective except for the weights r = 4n − 4 and r = 4n − 2.
Poincaré series forSO(n, 1)
Jian-Shu Li I Piatetski-Shapiro P Sarnak
A theory of Poincaré series is developed for Lobachevsky space of arbitrary dimension. For a general non-uniform lattice a Selberg-Kloosterman zeta function is introduced. It has meromorphic continuation to the plane with poles at the corresponding automorphic spectrum. When the lattice is a unit group of a rational quadratic form, the Selberg-Kloosterman zeta function is computed explicitly in terms of exponential sums. In this way a non-trivial Ramanujan-like bound analogous to "Selberg's 3/16 bound" is proved in general.
Fluctuations in the mean of Euler's phi function
Hugh L Montgomery
We consider the error term in the mean value estimate of Euler's phi function φ(n), and show that it is Ω±(x(log log x)1/2). This improves on the earlier results of Pillai and Chowla, and of Erdős and Shapiro.
On the supersingular reduction of elliptic curves
M Ram Murty
Let a ∈ Q and denote by Ea the curve y2 = (x2 + 1)(x + a). We prove that Ea(Fp) is cyclic for infinitely many primes p. This fact was known previously only under the assumption of the generalized Riemann hypothesis.
The Manin—Drinfeld theorem and Ramanujan sums
V Kumar Murty Dinakar Ramakrishnan
The Manin—Drinfeld theorem asserts the finiteness of the cuspidal divisor class group of a modular curve corresponding to a congruence subgroup. The purpose of the note is to draw attention to the connection between this theorem and Ramanujan sums, and to the question of what happens for non-congruence subgroups.
On Ramanujan's modular identities
S Raghavan
For Ramanujan's modular identities connected with his well-known partition congruences for the moduli 5 or 7, we had given, in an earlier paper, natural and uniform proofs through the medium of modular forms. Analogous (modular) identities corresponding to the (more difficult) case of the modulus 11 are provided here, with the consequent partition congruences; the relationship with relevant results of N J Fine is also sketched.
Hypergeometric series and continued fractions
K G Ramanathan
Ramanujan's results on continued fractions are simple consequences of three-term relations between hypergeometric series. Their q-analogues lead to many of the continued fractions given in the 'Lost' notebook, in particular the famous one considered by Andrews and others.
Multiplicative properties of the partition function
A Schinzel E Wirsing
A lower bound for the number of multiplicatively independent values of p(n) for N ≤ n < N + R is given. The proof depends on the Hardy-Ramanujan formula and is of an elementary nature.
The states of the character ring of a compact group
V S Varadarajan
Deligne's generalization of the Hadamard—Vallée Poussin method in classical number theory is formulated as the representability of certain states of the character ring of a compact group, and the determination of all the representable states is carried out.
On an approximate identity of Ramanujan
D Zagier
In his second notebook, Ramanujan says that $$\frac{q}{{x + }}\frac{{q^4 }}{{x + }}\frac{{q^8 }}{{x + }}\frac{{q^{12} }}{{x + }} \cdots = 1 - \frac{{qx}}{{1 + }}\frac{{q^2 }}{{1 - }}\frac{{q^3 x}}{{1 + }}\frac{{q^4 }}{{1 - }} \cdots $$ "nearly" for q and x between 0 and 1. It is shown in what senses this is true. In particular, as q → 1 the difference between the left and right sides is approximately exp(−c(x)/(1 − q)), where c(x) is a function expressible in terms of the dilogarithm and which is monotone decreasing with c(0) = π2/4, c(1) = π2/5; thus the difference in question is less than 2·10−85 for q = 0.99 and all x between 0 and 1.
Correlating albedo with temperature?
Thread starter AppleiPad556
AppleiPad556
For my Physics IB Extended Essay, I'm interested in exploring the correlation between the colour of a surface (using albedo) and the temperature of the surface (after reading articles about white roofs and the urban heat island). Currently I'm hoping to execute an experiment and correlate the data points with an equation, and I have a few questions that I hope to gain more insight on below.
General formula for Albedo: ##\alpha = \frac{\text{total scattered power}}{\text{total incident power}}##
Stefan-Boltzmann equation: ##P = \varepsilon\sigma A(T^4-T_s^4)##
Combining the two: $$T = \sqrt[4]{\frac{\alpha P_{incident}}{\varepsilon\sigma A}+T_s^4}$$
As mentioned above, I wanted to devise an equation that can relate the colour of a surface to the temperature on the surface. I tried using the general definition of albedo and combining it with the Stefan-Boltzmann equation (see above, Relevant Equations). However this means that the higher the albedo value, the higher the temperature, which goes against the fact that higher albedo has more energy reflected (which should cause lower temperature).
Is this correct, or did I misuse these equations (am I able to relate the two powers together?), and are there other equations or values that might work better in this case?
Along these lines, is albedo a good value to use in my case? I chose albedo originally as it appeared to have a good correlation with energy and colour, but it seems that albedo tends to be used more for planetary bodies instead of specific surfaces. Is there something else that I can use that can relate colour and reflected energy, or is albedo okay for this situation?
With the lack of laboratories/detailed equipment, my current plan for the experiment is to create model structures with the top of the structure being flat with different-coloured construction paper. With an infrared lightbulb as a heat source, I would fill the structure with water and use a thermometer to calculate the energy reflected (power from bulb – energy in water), and I can also use an infrared thermometer to determine the temperature of the roof itself. This experiment can be easily done at home, but is it too simple to get accurate data? Are there better materials that I can substitute that are readily available?
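As a sketch of the energy bookkeeping in the plan above (not code from the thread): the energy absorbed by each model roof can be estimated from the water's temperature rise via Q = mcΔT, and the reflected fraction from the difference with the lamp's power. It assumes all absorbed heat ends up in the water and that the lamp's full rated power reaches the roof; every number is made up.

```python
# Estimate absorbed and reflected power from the water's temperature rise.
# Assumes every watt absorbed by the roof ends up heating the water
# (no losses) -- a strong simplification. All numbers are illustrative.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def absorbed_power(mass_kg, delta_T, seconds):
    """Average absorbed power, from Q = m*c*dT over the run time."""
    return mass_kg * C_WATER * delta_T / seconds

def albedo_estimate(p_incident, p_absorbed):
    """Albedo taken as the fraction of incident power NOT absorbed."""
    return 1.0 - p_absorbed / p_incident

# Example run: 0.5 kg of water warms by 3 K in 10 minutes under a 60 W lamp.
p_abs = absorbed_power(0.5, 3.0, 600.0)
print(p_abs)                          # absorbed power in watts
print(albedo_estimate(60.0, p_abs))   # crude albedo estimate
```

In practice, losses to the surroundings and the lamp's non-radiative output would both bias this, so it is only a first-order check.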
Thinking about my discussion, another goal I wanted to achieve in this experiment was to see if I could also mix in surface area of the roof into the relationship and to try to see how accurate it is using real case studies of cool roofs. I can factor this into my experiment (keep volume but change dimensions of structure), and it seems to be included in the Stefan-Boltzmann equation. Is this too ambitious? I am aware that there are a plethora of factors that need to be factored into the real world, and this is only something that I want to use as further exploration.
I'd appreciate any feedback and advice (or suggestions on the topic) you can give me! Sorry if this is in the wrong forum.
Thank you for reading, and have a good day,
Emissivity, the epsilon in S-B, is the variable/property in which you're interested; range is from 0 to 1, and exhibits strong and strongly variable temperature dependence. Tabulations are available in Rohsenow & Hartnett, Handbook of Heat Transfer, generally unreliable, since surface history is a MAJOR part/contributor at ordinary temperatures, and only approaches 1 at "black-body" temperatures, 10,000 K or greater.
AppleiPad556 said:
However this means that the higher the albedo value, the higher the temperature
You can't combine them like that. The two P's both stand for power, but do not refer to the same power.
In your Stefan-Boltzmann equation, P is radiated power, i.e. some portion of power previously absorbed. You seem to have equated this is to scattered (reflected) power, which is the power that was not absorbed in the first place.
If you assume steady state, radiated power + reflected power = incident power.
Also, all three vary according to wavelength, meaning the Stefan-Boltzmann equation is really an integral.
Roughly speaking, at a given wavelength, albedo = 1-emissivity. See e.g. https://www.physicsforums.com/threads/connection-between-emissivity-and-albedo.590569/. But the incoming radiation is typically at much shorter wavelengths than a black body at Earth's temperature would emit, so the two will not cancel.
I assume Ts in your equation is the temperature of space. You can safely ignore it.
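As a quick numerical check on that steady-state balance (my sketch, not from the thread): setting absorbed power (1 − α)·S equal to radiated power εσT⁴ and solving for T gives an equilibrium temperature that falls as albedo rises, which is the behaviour the combined formula earlier in the thread failed to show. All input values are illustrative.

```python
# Steady-state surface temperature from the balance
#   (1 - albedo) * incident_flux = emissivity * sigma * T^4
# i.e. absorbed power equals radiated power. Ignores conduction,
# convection and the background temperature Ts.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_T(albedo, flux, emissivity=0.9):
    """Temperature (K) at which radiated power balances absorbed power."""
    return ((1.0 - albedo) * flux / (emissivity * SIGMA)) ** 0.25

# Under a 1000 W/m^2 source, a darker surface (low albedo) runs hotter:
for a in (0.1, 0.5, 0.9):
    print(a, equilibrium_T(a, 1000.0))
```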
just by chance I came across this, which follows up on the previous posts.
it's a cheat in a way, but you still have to understand the formulation.
https://plus.maths.org/content/climate-modelling-made-easy
Thank you all for the prompt replies! I appreciate your time helping me out.
Emissivity, the epsilon in S-B, is the variable/property in which you're interested; range is from 0 to 1, and exhibits strong and strongly variable temperature dependence.
Thank you for clarifying! Is there a way to measure emissivity, or is it difficult to do so; or is this information already common (hence the tabulations in the book)? In terms of my experiment, if the material of the roof is kept constant but only the colour changes, would emissivity be a reliable figure to compare the effects of colour? (as emissivity is a property of the material)
Tabulations are available in Rohsenow & Hartnett, Handbook of Heat Transfer, generally unreliable, since surface history is a MAJOR part/contributor at ordinary temperatures, and only approaches 1 at "black-body" temperatures, 10,000 K or greater.
I'm a bit lost with what you mean by surface history. What can I do/what assumptions should I make if the values are generally unreliable?
haruspex said:
I thought so, thank you for clarifying the two powers! When is an object at a steady state, and when can I assume it is so?
I did not know that! Could you explain to me more about this?
My bad, I should have clarified this; this equation is actually the net rate at which energy leaves the body (according to my textbook and this site), taking into consideration the rate at which the body radiates (##\varepsilon \sigma AT^4##) and the rate it absorbs (##\varepsilon \sigma AT_s^4##). Does this change anything?
256bits said:
Thank you for linking this! I took a read and it does make sense; do you think this can still apply for bodies on Earth? It also goes back to the emissivity question that I asked above.
I'm a bit lost with what you mean by surface history.
"Mirror/specular finishes" range from anywhere from hundredths (old and oxidized, not visibly) to thousandths (lots of polishing); textures, matte vs. gloss, which white?
1. Is there a way to measure emissivity, or is it difficult to do so; or is this information already common (hence the tabulations in the book)? In terms of my experiment, if the material of the roof is kept constant but only the colour changes, would emissivity be a reliable figure to compare the effects of colour? (as emissivity is a property of the material)
2. When is an object at a steady state, and when can I assume it is so?
3. Could you explain to me more about this?
4. this equation is actually the net rate at which energy leaves the body (according to my textbook and this site), taking into consideration the rate at which the body radiates (##\varepsilon \sigma AT^4##) and the rate it absorbs (##\varepsilon \sigma AT_s^4##). Does this change anything?
1. You did not specify the context of your qn in post #1. I took it to be re Earth, or some arbitrary planet, but it now looks like it is roofs of buildings.
The visible colour, green versus red say, as opposed to shade, might not be that relevant. Clearly a lighter shade is more reflective. But that is just the visible part of the spectrum. The infrared is also important...
3. ... the sun's power peaks in the visible part of the spectrum (no coincidence, of course). The Earth, being much cooler has peak radiative power up the far end of infrared (IR-C).
http://agron-www.agron.iastate.edu/courses/Agron541/classes/541/lesson09a/9a.4.html
This is the basis of the Greenhouse effect: the atmosphere lets in all that power in visible and near IR, but blocks what Earth radiates.
A similar effect arises with emissivity. If a surface reflects much of the visible and near IR but happily radiates in far IR then it can provide "radiative cooling".
2. Re Earth, steady state would normally be assessed as an average over centuries. E.g. just consider 24 hours at one location. During the day, there is a net influx of energy, but overnight a net efflux.
There are satellites in orbit which measure these accurately enough to calculate the imbalance that is presently warming Earth...
4. ... If using the equation with the incoming radiation term then the P on the left is the power imbalance, i.e. the net inflow of energy. Mostly when the S-B equation is quoted P just refers to the emitted power, so no Ts term.
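The wavelength mismatch in point 3 can be made concrete with Wien's displacement law (a standard back-of-envelope, not from the thread): the peak emission wavelength of a black body is λ_max = b/T.

```python
# Wien's displacement law: the wavelength at which a black body's
# emission peaks, lambda_max = b / T. Shows why sunlight arrives mostly
# in the visible while Earth re-radiates in the far infrared.

WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_um(T):
    """Peak emission wavelength in micrometres for temperature T (K)."""
    return WIEN_B / T * 1e6

print(peak_wavelength_um(5778.0))  # Sun (~5778 K): roughly 0.5 um, visible
print(peak_wavelength_um(288.0))   # Earth (~288 K): roughly 10 um, far IR
```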